It’s finally here!
In a previous post I touched on the fact that we would be examining HPC (High Performance Computing), parallel processing and Hadoop in relation to Home Automation.
As our home network grows, with multiple devices, tablets, microprocessors and the like running on it, we reach a point where we have many CPUs that often sit dormant. Being able to leverage this spare computational power for other tasks in our home poses an interesting challenge.
One method we can use in the case of multiple Raspberry Pi units is to run applications built on MPI (Message Passing Interface). MPI provides us with a set of base routines and functionality for communicating between multiple CPUs.
Originally designed with C and Fortran in mind, MPI now has libraries for many languages, including Python and Java, that allow us to implement MPI-based applications.
MPI has eight basic concepts:
Communicator – An object that connects groups of processes in the MPI session.
Point-to-point basics – Allows process-to-process communication.
Collective basics – Performs communication amongst all processes in a process group.
Derived datatypes – Used for dealing with data types in MPI applications.
One-sided communication – A set of operations for scenarios where synchronization would be an inconvenience, for example manipulating matrices and calculating matrix products.
Collective extensions – For example, non-blocking collective operations (a combination of non-blocking point-to-point functionality with the collective basics), which can help reduce communication overhead.
Dynamic process management – The ability for an MPI process to take part in the creation of new processes, or to establish communication with other MPI processes.
MPI-IO – The abstraction of I/O management on parallel systems to MPI.
These eight concepts can help us to decide what functionality we wish to use in our home automation applications that implement MPI.
To see how this may be useful in the home, let’s look at the following scenario.
Redundant media unit and thermostat controller
In this scenario we have a thermostat controller running on a Raspberry Pi that processes various temperature data and adjusts the thermostats, blinds and similar as needed. This thermostat controller may, for example, also use weather data to decide whether a device acting as a damp/water sensor should be switched on or off, based upon predicted rainfall levels.
Over time, as we store more historical data and pull in weather streams via the web, we have an ever-increasing amount of data to work with. More data should hopefully provide more precision. However, working with more data also means crunching more numbers, and thus placing an ever-increasing burden on the Raspberry Pi’s CPU.
We can optimize our code and database to help tackle performance issues; however, this will only get us so far.
Now let’s imagine we have a second Raspberry Pi in our home running XBMC, which we use on occasion to watch TV streamed from the web. This Raspberry Pi may sit idle for much of the day, so it would be great if we could leverage its CPU to crunch our weather data while it is dormant.
This is where MPI can help. A program written using MPI can check whether the media unit is in use and, if not, set its CPU to work on processing temperature data.
Using the eight basic concepts of MPI we can write an application that handles the communication between the thermostat controller and the XBMC RPi, passing calculations back and forth.
This model also allows us to add further devices to our network that can be used by the thermostat controller for parallel processing.
Our thermostat controller is being written in Java and will run on Tomcat; in the next post on MPI we will therefore look at MPI’s Java implementation and how we can use it in our code.
We will explore the techniques used for writing distributed applications and look at a scalable model that allows for the addition of extra devices to the network.