It's heating up in Texas, which means my kids are already living in our pool. In spite of the fact that the water temperature is closer to "Polar bear plunge" levels than the "Texas outdoor bathtub" temps it will reach by August, they waded in back in April and are taking their meals poolside like I'm running some kind of all-inclusive resort on the outskirts of Austin.
So, with an increase in pool traffic and, ahem, foreign substances from said traffickers, the pool pump is already working overtime. And while I love that my kids love having a pool, as its chief maintainer, I have more of a love-hate relationship with the water-filled hole in my backyard.
One of the most frustrating parts of the system is the DE Filter Pump that runs for about 6-8 hours per day in the summer. DE (meaning diatomaceous earth) filters help keep your pool clear and clean by cycling the water through a set of grids coated with DE powder. The key factor in the performance of the filter is its water pressure when active. On my filter, the pressure is always visible through an analog dial on top of the filter basket.
If the water is flowing through between 15 and 25 PSI, the filter will do its job and the pool will look nice. If it's too low (5-14 PSI) or too high (> 26 PSI), the pool goes green and my kids will complain as they demand more ice for their next round of bespoke juice refills.
When the DE filter is running in the summer, and the pool is active, the pump pressure can go high or low in a matter of hours. And as much as I love the idea of sitting by the pool and staring at an analog pressure dial, I would rather defer that job to a machine that can do the monitoring for me, and then get a text message when I need to service the pump.
Hey ML, Tell Me When I Need to Clean My Filter
I considered replacing the existing gauge with a smart dial, or putting a sensor in the line to monitor pressure, but then I saw a blog post from Pete Warden that inspired me to take a different approach. Pete's post "How screen scraping and TinyML can turn any dial into an API" highlighted a GitHub project that uses an ESP32, camera and TensorFlow to add intelligence to an analog power meter.
Machine vision on analog sensors is the ultimate retrofit, allowing you to add smarts to anything, without modifying the existing system! So I set out to build my own version of an analog screen scraper for my filter.
In this article, I'll share what I created, and show you how to:
- Rig up an edge vision device with a Raspberry Pi, PiJuice Hat and Pi Camera v2.
- Use Edge Impulse Studio and the Edge Impulse Linux CLI to capture training data, build, and tune an image classification model that detects the various states of the needle on a pressure gauge.
- Deploy that model to my Raspberry Pi and create a Python program to periodically sample images of the gauge.
- Use the Blues Wireless Notecard and its built-in cellular connection to send inferencing results to the Notehub.io cloud service.
- Create an alerting Route in Notehub.io to send a text message through Twilio when the tank pressure is too low or too high.
If you want a brief overview of the project, check out the video below.
Let's get started!
Assemble the Hardware, Part 1
There are two hardware configurations needed for this project: one for the model data capture and training phase, and one for the deployment phase. For data capture and training, I started with a Raspberry Pi 4, attached a v2 camera module, and placed both in a waterproof case on top of a tripod. Then, I positioned the rig next to my filter basket where the Pi camera has a close-up view of the pressure gauge.
The end goal for this project is to have the rig running on solar power with a PiJuice Hat. But this is a non-starter for the data capture phase of the project, where I planned to take at least a few hundred photos of the tank in various states. So for this phase, I connected a standard Pi-friendly power supply and ran an extension cable to an outlet on my house.
Capture Training Data
Often, the most labor-intensive part of an ML project is the training data capture and labeling phase. Even with a transfer learning project (where you take an existing model and fine-tune it with your own data, meaning you can use a smaller data set and still get an accurate result) you need to capture and organize hundreds of data points before you start training.
Thankfully, Edge Impulse makes this simple. And, since they've recently launched support for Linux-based SBCs (including the Pi), you can leverage their tools to go from training data to an optimized model in just a few hours!
To get started with Edge Impulse on the Raspberry Pi, you can follow their guide, which walks through everything from setting up a new Pi to installing the Edge Impulse Linux CLI. You then run the CLI with the edge-impulse-linux command to log in and connect your Pi to the Edge Impulse Studio service.
This command essentially opens a socket between your Pi and Edge Impulse Studio so you can capture live data for training and testing from the browser. It was 90 degrees in Austin when I did the initial capture and I was grateful to be ensconced in my office for the duration.
Once connected, the Edge Impulse Studio Data Acquisition screen is where you'll load up your training set. Since your Pi is connected to Edge Impulse, this is as simple as defining a label and clicking the capture button. Edge Impulse takes care of the rest.
You should expect to take a lot of photos, depending on the number of labels (or classes) you are looking to detect. For my filter tank, the states I am interested in are the following (the sketch after this list shows how they map to the model's label names):
- Off (0 PSI)
- Low (1-14 PSI)
- Normal (14-25 PSI)
- High (>25 PSI)
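For reference, here's how those gauge states line up with the label names that show up later in the model output and alerting code. This dict is just an illustration of my own, not code from the project source:

# Gauge states mapped to the Edge Impulse label names used in the model output
TANK_STATES = {
    "tank-pressure-off":    "Off (0 PSI)",
    "tank-pressure-low":    "Low (1-14 PSI)",
    "tank-pressure-normal": "Normal (14-25 PSI)",
    "tank-pressure-high":   "High (>25 PSI)",
}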
For the first pass, I captured and labeled about 50-75 photos of each state, around 250 in total. For the Low, Normal and High states, I made sure to capture photos with the needle at various spots within each range, so the model could learn that each label covers a range of needle positions rather than a single point.
Once you have a dataset with all of the labels you wish to classify, the next step is to create an impulse. This is the meat of the model creation process, where you tell Edge Impulse how to preprocess and classify each image in your dataset. For my project, I followed the image classification tutorial and was able to get things configured in just a few minutes, from image preprocessing to feature generation. I might have also spent several minutes just mousing around in the feature explorer and feeling all futuristic.
The final step before model creation is transfer learning. Transfer learning is an innovative technique that lets you take all of the "smarts" from an existing model and add your own dataset to retrain the final few layers so they classify the labels you care about. Without transfer learning, you'd have to take thousands of images, or more, to create a reasonably accurate model.
For my initial pass, I kept the defaults on the settings screen and clicked "Start training." On the first pass, I ended up with a quantized model that was about 84% accurate and an unoptimized model that was about 95% accurate.
I was happy with the initial results, but if you're not with your own model, this is the point at which you can add additional data to the training set, tweak the Neural Network settings and train again until you get the result you want. It can take a bit of trial and error, but just imagine how tedious this would be without Edge Impulse and the magic of transfer learning!
Once you have a model you like, the next step is to run it against some realtime data! If your Pi is still connected to Edge Impulse Studio, you can click on the Live Classification screen, take a sample from the Pi camera and perform an inference pass through your model. If all looks good, you're ready to deploy the model to your Pi!
If you captured test data, you can also do a single pass validation against all of your test images in the Model Testing Screen.
To deploy the model to your Pi, use the edge-impulse-linux-runner command. When passed with no arguments, this will download the model, turn on the camera and start running each frame through your model. You can also use the --download argument to specify a file location for the model for later use. This is what I did in order to use the model from within my Python application.
With the model on your Pi, it's time for a more real-world deployment. For my project, I removed the power supply, then added the PiJuice Hat and a solar panel to keep the LiPo on the PiJuice hat topped-up. Using the headless version of the PiJuice utility, I set a minimum charge of 10% before the Pi goes to sleep and a Wakeup on charge trigger of 80%.
This stage of the process will likely take some trial and error to make sure you have the solar panel in a location with plenty of sun available, and to tweak the PiJuice settings. Since turning the Pi Camera on and running our ML model are both power draws on the system, I tuned my application to only run in daylight hours and when my pump is running, and adjusted the sleep period between captures to be 1 hour since I don't really need to be notified right away if the pump needs attention. I also turned off the power-hungry Wi-Fi on the Pi (because I have cellular, which you'll see in a bit) and shut down other non-essential services.
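To give a feel for that gating logic, here is a minimal sketch of a capture loop that only runs during pump hours and when the PiJuice battery has headroom. The pump hours, battery floor, and the should_capture helper are placeholder values and names of my own, and the charge-level call is from the PiJuice Python API:

import time
from datetime import datetime

from pijuice import PiJuice  # PiJuice Python API

PUMP_HOURS = range(9, 18)        # placeholder: hours when my pump is scheduled to run
MIN_CHARGE_PCT = 15              # placeholder: skip captures when the LiPo is low
CAPTURE_INTERVAL_SECS = 60 * 60  # one capture per hour

pijuice = PiJuice(1, 0x14)       # default I2C bus and address for the PiJuice HAT

def should_capture():
    # Capture only when the pump should be running and the battery has charge
    if datetime.now().hour not in PUMP_HOURS:
        return False
    charge = pijuice.status.GetChargeLevel()
    if charge.get('error') == 'NO_ERROR' and charge['data'] < MIN_CHARGE_PCT:
        return False
    return True

while True:
    if should_capture():
        pass  # grab a frame and run the inference code shown in the next section
    time.sleep(CAPTURE_INTERVAL_SECS)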
Build a Local Tank Monitor Application with Python
ML model? Check. Hardware deployed? Check. Now it's time to write some code.
With the Raspberry Pi you have a number of options for creating a local application. Personally, I am partial to Python, and since Edge Impulse and Blues Wireless both have Python SDKs, I created a Python app to monitor my tank.
Since you'll be working with the Pi camera in Python, the first thing I suggest doing is to install OpenCV, which frankly can be a pain on Pi devices. Use the approach that works for you, but I found that this easy guide got me to a functional install of OpenCV in just a few minutes after other, pip-based approaches were non-starters for me.
Once OpenCV was in place, I installed the Edge Impulse Python SDK using pip.
pip3 install edge_impulse_linux
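Before going further, a quick import check like the following (my own addition, not part of the project source) confirms that both OpenCV and the Edge Impulse SDK installed correctly:

# Sanity check: both libraries should import cleanly on the Pi
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

print("OpenCV version:", cv2.__version__)
print("Edge Impulse runner class loaded:", ImageImpulseRunner.__name__)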
Using the camera example from the Python SDK source as a guide, I created a quick project to load my model, grab a sample from the camera, perform an inference, and output the result as a list of tuples sorted highest to lowest.
import os
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

# Load the trained Edge Impulse model file (.eim) relative to this script
dir_path = os.path.dirname(os.path.realpath(__file__))
modelfile = os.path.join(dir_path, '../model/pressure_model.eim')

with ImageImpulseRunner(modelfile) as runner:
    try:
        model_info = runner.init()
        print('Loaded runner for "' + model_info['project']['owner'] +
              ' / ' + model_info['project']['name'] +
              ' (v' + str(model_info['project']['deploy_version']) + ')"')
        labels = model_info['model_parameters']['labels']

        # Make sure the Pi Camera is available before classifying
        videoCaptureDeviceId = 0
        camera = cv2.VideoCapture(videoCaptureDeviceId)
        ret = camera.read()[0]
        if ret:
            backendName = camera.getBackendName()
            w = camera.get(3)
            h = camera.get(4)
            print("Camera %s (%s x %s) in port %s selected."
                  % (backendName, h, w, videoCaptureDeviceId))
            camera.release()
        else:
            raise Exception("Couldn't initialize selected camera.")

        # Grab a frame, run it through the model, and print the classification
        # results sorted by confidence (highest first)
        for res, img in runner.classifier(videoCaptureDeviceId):
            if "classification" in res["result"].keys():
                print('classification runner response\n',
                      sorted(res['result']['classification'].items(),
                             key=lambda x: x[1], reverse=True))
            break
    finally:
        if runner:
            runner.stop()
If you're interested in seeing the complete source for this project, check out the GitHub repo.
When run, my Python program's output looks like this:
Loaded runner for "Brandon Satrom / pool-tank-monitor (v1)"
Camera V4L2 (480.0 x 640.0) in port 0 selected.
classification runner response
[('tank-pressure-low', 0.9246085286140442), ('tank-pressure-off', 0.03462480753660202), ('tank-pressure-high', 0.03320816159248352), ('tank-pressure-normal', 0.007558567449450493)]
In this case, the model is 92.5% certain that my tank pressure is low.
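If you just need the single most likely state and its confidence from that sorted list, the pattern is a one-liner. Here's a small, self-contained illustration using the (truncated) numbers from the output above; the helper name is my own:

def top_prediction(classification):
    # Return the most confident label and its score from a classification dict
    return max(classification.items(), key=lambda item: item[1])

# Truncated values from the sample output above
result = {'tank-pressure-low': 0.9246, 'tank-pressure-off': 0.0346,
          'tank-pressure-high': 0.0332, 'tank-pressure-normal': 0.0076}

label, confidence = top_prediction(result)
print(f"{label} ({confidence:.1%} confident)")  # tank-pressure-low (92.5% confident)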
Use Cellular at the Edge to Send Inferencing Results
Inferencing at the edge is amazing, but what's even more amazing is sending model results to a cloud app without streaming bandwidth-hogging, privacy-skirting image data! And this is where the Blues Wireless Notecard fits perfectly into the picture.
The Notecard is a cellular and GPS-enabled device-to-cloud data-pump that comes with 500 MB of data and 10 years of cellular for $49.
The Notecard itself is a tiny 30mm x 34mm SoM with an M.2 connector. To make integration in an existing project easier, Blues Wireless provides host boards called Notecarriers. I used the Notecarrier Pi HAT for this project and popped it right between the PiJuice Hat and the Raspberry Pi.
On the cloud side, the Notecard ships preconfigured to communicate with Notehub.io, which enables secure data flow from device to cloud. Notecards are assigned to a project in Notehub, and Notehub can then route data from these projects to your cloud of choice, or integrate with a third-party service like Twilio.
Blues provides a Python SDK, and I can install it (along with periphery for I2C communication between the Pi and the Notecard) with a couple of pip commands.
pip3 install python-periphery
pip3 install note-python
Then, to add the Notecard to an existing Python app running an Edge Impulse model, you'll do the following:
- Initialize the Notecard and configure how it should communicate with the Notehub.io cloud service.
- Send an event (called a "Note") with the result of each inference run from the model.
- And finally, if the result indicates that the tank pressure is too high or too low, send a second alert event that will be picked up by Notehub and forwarded to Twilio.
First, initialize the Notecard by specifying a productUID (which is the name of a Notehub.io project that we'll create in a minute) and setting the cellular connection mode to periodic, so that the Notecard conserves battery and doesn't maintain an active connection to the cell network or Notehub.
from periphery import I2C
import notecard

# Connect to the Notecard over I2C (the Notecarrier Pi HAT exposes it on bus 1)
port = I2C("/dev/i2c-1")
card = notecard.OpenI2C(port, 0, 0, debug=True)

# Point the Notecard at a Notehub project and sync periodically
# (outbound every 60 minutes, inbound every 120) to save power
req = {"req": "hub.set"}
req["product"] = "com.company.person:my_product"
req["mode"] = "periodic"
req["outbound"] = 60
req["inbound"] = 120
req["align"] = True
card.Transaction(req)
Next, in the section of the program that captures images and runs them through the Edge Impulse SDK, you can send an event to the Notecard with inference results using a note.add request.
req = {"req": "note.add"}
req["sync"] = True
note_body = {"inference_time": res['timing']['dsp'] +
res['timing']['classification']}
print('Result (%d ms.) ' % (res['timing']['dsp'] +
res['timing']['classification']), end='')
print('', flush=True)
sorted_items = sorted(res['result']['classification'].items(),
key=lambda x:x[1], reverse=True)
inferred_state = sorted_items[0][0]
note_body["tank-state"] = inferred_state
note_body["classification"] = res['result']['classification']
req["body"] = note_body
card.Transaction(req)
The snippet above sends every inference to the Notecard, and is useful for building a historical dashboard or real-time view of my tank state. But for my purposes, I also want to send special alerts if the pressure is too low or too high. To do this, I check the inferred_state variable, which contains the highest-confidence label from the model, and do another note.add. You'll notice that this note includes a filename (tank-alert.qo) which I will use when setting up the Twilio alert.
req = {"req": "note.add"}
req["sync"] = True
req["file"] = "tank-alert.qo"
if inferred_state == 'tank-pressure-low':
req["body"] = {"message": "Tank pressure is low. Clean impeller."}
card.Transaction(req)
elif inferred_state == 'tank-pressure-high':
req["body"] = {"message": "Tank pressure is high. Backwash filter."}
card.Transaction(req)
When I run the complete program, the output looks like this:
Using model at /home/pi/pool-tank-monitor/src/../model/pressure_model.eim
Connecting to Notecard...
Configuring Product: com.blues.bsatrom:pool_tank_monitor...
{"req": "hub.set", "product": "com.blues.bsatrom:pool_tank_monitor", "mode": "periodic", "outbound": 60, "inbound": 120, "align": true}
{}
Taking a sample from the camera...
Loaded runner for "Brandon Satrom / pool-tank-monitor (v8)"
Camera V4L2 (480.0 x 640.0) in port 0 selected.
classification runner response {'tank-pressure-high': 0.0018513306276872754, 'tank-pressure-low': 0.26895418763160706, 'tank-pressure-normal': 0.7111062407493591, 'tank-pressure-off': 0.018088309094309807}
Result (13 ms.)
{"req": "note.add", "sync": true, "body": {"inference_time": 13, "tank-state": "tank-pressure-normal", "classification": {"tank-pressure-high": 0.0018513306276872754, "tank-pressure-low": 0.26895418763160706, "tank-pressure-normal": 0.7111062407493591, "tank-pressure-off": 0.018088309094309807}}}
{"total":2}
Pausing until next capture...
Getting Notes into Notehub
Now it's time to take this project to the cloud! You'll recall that Notehub.io is the pre-configured destination for the Notecard, and the cloud side of the Blues Wireless story. To get your Notecard talking to Notehub, you'll need to create a free account, and then a project.
- Navigate to Notehub.io and log in, or create a new account.
- Using the New Project card, give your project a name and ProductUID.
- Copy that ProductUID and send it using the product argument of the hub.set request shown above.
That's it! When the Python script runs, the Notecard will associate itself with this Notehub project. Any notes (events) you send will show up in the Events panel when received.
And now for the final piece of the puzzle: alerting! I said at the start of this article that my goal was to get a notification when my tank pressure is too low or too high. And now that I have my data in Notehub, I can send it along to Twilio by creating a Route, which is Notehub speak for sending your data along to another cloud app or service.
For my app, I created a Route to Twilio using the guide at dev.blues.io and my Twilio account. The Route is set to fire only on Notes in the tank-alert.qo Notefile and uses the provided body message as the contents of the text.
And voila, I can relax indoors far away from the chaos and demanding customers outside and know that my Raspberry Pi will tell me when my pool pump needs my attention.
You'll likely find that any real-world ML project requires a bit of refinement when deployed into the wild, and my project was no different. With 250 photos, I started with a model that was about 84% accurate but had trouble distinguishing between low and normal pressure states at times. Over the course of a few days, I snapped another 150 photos at various times and used the Feature Explorer in Edge Impulse Studio to find images in my dataset that weren't grouping well. The end result was a much more accurate model.
All in all, using Edge Impulse, the Notecard, Notehub, and Twilio, I was able to build an accurate model to monitor an analog device, "turn a dial into an API," and build a deploy-anywhere cellular solution in a matter of days.
I hope you'll consider Edge Impulse and the Notecard in a future ML project, and I can't wait to see what you build.