In my previous articles, I introduced Apache Spark and Docker as breakthrough technologies. Now, I would like to share with you how I managed to combine these technologies to create a fully functional application.
This project was born out of a hackathon organized by IBM. Sparkathon’s goal was to use weather data and the Analytics for Apache Spark service on IBM Bluemix to build weather-related mobile applications. The hackathon was an exciting opportunity to showcase the power of these technologies in a real application. IBM, which had recently acquired the digital assets of The Weather Channel, was backing the event to promote its Spark offering.

The result was a simple yet useful tool: the My Perfect Weather app. It lets users filter by temperature, wind speed, precipitation type, and probability of precipitation to find travel destinations that match their idea of perfect weather. The service then collates the top five matching destinations, indicating the days that meet the user’s criteria.
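To give a sense of the kind of filtering involved, here is a minimal sketch using Spark’s DataFrame API. The column names, sample data, and thresholds are illustrative assumptions, not the actual schema returned by the Insights for Weather service or the app’s real code.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal sketch of "perfect weather" filtering; columns and thresholds are assumptions.
object PerfectWeatherSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("perfect-weather-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical daily forecasts: (destination, day, tempC, windKph, precipType, precipProb)
    val forecasts = Seq(
      ("Lisbon",    "2025-07-01", 27, 12, "none", 0.05),
      ("Lisbon",    "2025-07-02", 29, 18, "rain", 0.60),
      ("Reykjavik", "2025-07-01", 12, 30, "rain", 0.80),
      ("Malaga",    "2025-07-01", 31,  8, "none", 0.02),
      ("Malaga",    "2025-07-02", 30, 10, "none", 0.10)
    ).toDF("destination", "day", "tempC", "windKph", "precipType", "precipProb")

    // Keep only the days that match the user's example criteria
    val perfectDays = forecasts
      .filter($"tempC".between(24, 32))
      .filter($"windKph" <= 20)
      .filter($"precipType" === "none" || $"precipProb" <= 0.2)

    // Rank destinations by the number of matching days and keep the top five
    val topFive = perfectDays
      .groupBy($"destination")
      .agg(count("*").as("perfectDays"), collect_list($"day").as("days"))
      .orderBy(desc("perfectDays"))
      .limit(5)

    topFive.show(truncate = false)
    spark.stop()
  }
}
```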
Creating the app presented some challenges. It needed to run on IBM’s Bluemix platform, integrate with a travel search service, and involve a fair amount of data processing. With Spark and Docker, however, the task of processing the weather data became significantly more manageable.

Docker proved particularly beneficial during deployment. I could build my image, push it, and see it running on the Bluemix platform without any prior knowledge of Cloud Foundry apps or worries about Scala buildpacks.
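For context, the container side of the deployment amounts to little more than a Dockerfile. The sketch below is a rough example for a packaged Scala app; the base image, JAR path, and port are assumptions, not the project’s actual build.

```dockerfile
# Rough sketch of a Dockerfile for a packaged Scala/Spark web app.
# Base image, JAR path, and port are assumptions, not the project's actual build.
FROM openjdk:8-jre

# Copy the assembled fat JAR produced by sbt-assembly
COPY target/scala-2.11/my-perfect-weather-assembly.jar /app/app.jar

# The app serves HTTP on port 8080 in this sketch
EXPOSE 8080

CMD ["java", "-jar", "/app/app.jar"]
```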
Despite the success and functionality of the My Perfect Weather app, some challenges remained. For instance, I ran into a limitation around scheduling Spark jobs on IBM Bluemix, which affected the app’s ability to refresh its weather data automatically. I also hit a small discrepancy in the Insights for Weather API documentation, which I resolved through observation and a few reasonable assumptions.
However, these challenges did nothing to diminish the sense of accomplishment. I am proud of implementing an original idea, combining a range of diverse technologies, and demonstrating what IBM Bluemix can do. I learned a great deal through this project, and working through the hiccups made the end result all the more satisfying.
I hope to improve My Perfect Weather further by adding more weather controls, broadening its coverage, and incorporating Spark MLlib. I would also encourage IBM to add job scheduling so the service can be fully automated. If you are interested in the project, check out [My Perfect Weather](http://myperfectweather.eu/), or view the source code on my [GitHub](https://github.com/radek1st/my-perfect-weather) page.
In conclusion, combining Apache Spark and Docker to build a practical application has been an exciting journey. I hope this work serves as inspiration for other developers and helps illustrate what these technologies can do.
*This article was updated in 2025 to reflect modern realities.*