Whether it's the electrical grid, roads, public lighting, or countless other examples, we all depend on some type of infrastructure to exist within our complex, modern society. One of the most heavily used public utilities is the water/sewer system, which is responsible for delivering safe drinking water, carrying away our waste, and managing runoff from rain showers. But the pipes buried beneath our feet tend to corrode over time and become far more susceptible to leaking, posing a hazard to public health.
In this tutorial, we will go over how to use the new Coral Dev Board Micro from Google in a project that aims to detect small leaks via sound and then allow workers to remotely view the conduit's condition before a major fault occurs.
The Coral Dev Board Micro is similar to its larger Coral Dev Board counterpart in that it too contains a specialized Tensor Processing Unit (TPU) for performing efficient quantized machine learning operations. However, unlike the Coral Dev Board, the Micro variant houses a dual-core Arm CPU meant for bare-metal, FreeRTOS, and/or Arduino programming rather than a Linux OS. In addition to the Arm Cortex-M7 and Cortex-M4 cores, the board includes an onboard camera, a microphone, several exposed GPIO pins, and two expansion connectors on the back for adding either Power over Ethernet (PoE) or WiFi/BLE capabilities via add-on boards.
All of the code for this project can be found here on GitHub, and additional documentation can be found here in the Coral docs. The instructions below are for Linux; steps might differ if using a different OS!
Start by cloning the repository locally with:
git clone --recurse-submodules -j8 https://github.com/having11/pipeline-leak-detection
and then install the necessary tools with:
cd pipeline-leak-detection && bash setup.sh
On the hardware side, attach the Wireless Add-on Board to the back, as this is what enables the board to send images to a client over WiFi.
In order to connect to a WiFi network, each run of the scripts/flashtool.py utility will need to include the --wifi_ssid NETWORK_NAME and --wifi_psk NETWORK_PASSWORD flags.
Build the project with:
bash build.sh
and then upload it to the connected board (run this from the root directory of the project):
python3 scripts/flashtool.py --app pipeline_detection --wifi_ssid NETWORK_NAME --wifi_psk NETWORK_PASSWORD
The M7 core is responsible for nearly all operations within the project, owing to its faster speed and its connection to the TPU. The code begins by initializing the camera and the M4 core before turning on the WiFi module. Next, an HTTP server is started to deliver images, and the YamNet model is initialized with TPU delegation for the interpreter. In order to get audio data from the onboard microphone, a coralmicro::AudioService instance is configured to read samples at 16 kHz into a buffer via direct memory access (DMA) for later retrieval within the main loop.
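For reference, a condensed sketch of that M7 setup is shown below. It is loosely based on coralmicro's public examples rather than the project's exact source, so details such as the AudioService constructor arguments and the WiFiTurnOn/WiFiConnect calls should be treated as assumptions:

// Simplified M7 setup sketch (names follow coralmicro's examples; details may differ).
#include "libs/audio/audio_service.h"
#include "libs/base/http_server.h"
#include "libs/base/ipc_m7.h"
#include "libs/base/wifi.h"
#include "libs/camera/camera.h"
#include "libs/tpu/edgetpu_manager.h"
#include "third_party/freertos_kernel/include/FreeRTOS.h"
#include "third_party/freertos_kernel/include/task.h"

namespace {
constexpr int kNumDmaBuffers = 2;
constexpr int kDmaBufferSizeMs = 50;
// 16 samples per millisecond at 16 kHz, so each DMA buffer holds 50 ms of audio.
coralmicro::AudioDriverBuffers<kNumDmaBuffers, kNumDmaBuffers * 16 * kDmaBufferSizeMs>
    audio_buffers;
coralmicro::AudioDriver audio_driver(audio_buffers);
}  // namespace

extern "C" void app_main(void* param) {
  // Power up the camera and launch the application on the M4 core.
  coralmicro::CameraTask::GetSingleton()->SetPower(true);
  coralmicro::CameraTask::GetSingleton()->Enable(coralmicro::CameraMode::kStreaming);
  coralmicro::IpcM7::GetSingleton()->StartM4();

  // Bring up WiFi using the credentials baked in by flashtool's --wifi_ssid/--wifi_psk.
  coralmicro::WiFiTurnOn(/*default_iface=*/true);
  coralmicro::WiFiConnect();

  // Start the HTTP server that serves camera frames (URI handler shown later).
  static coralmicro::HttpServer http_server;
  coralmicro::UseHttpServer(&http_server);

  // Reserve the Edge TPU so the YamNet interpreter can run its custom op on it.
  auto tpu_context = coralmicro::EdgeTpuManager::GetSingleton()->OpenDevice();

  // Stream 16 kHz microphone samples into a rolling buffer via DMA.
  coralmicro::AudioDriverConfig audio_config{coralmicro::AudioSampleRate::k16000_Hz,
                                             kNumDmaBuffers, kDmaBufferSizeMs};
  coralmicro::AudioService audio_service(&audio_driver, audio_config,
                                         /*task_priority=*/4,
                                         /*drop_first_samples_ms=*/150);

  // ... main loop: copy the latest samples, run YamNet, react to detections ...
  vTaskSuspend(nullptr);
}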
Running the YamNet interpreter involves filling the input tensor with the audio samples, preprocessing the stream of data into a spectrogram that the model can understand, and then invoking the quantized TFLite model on the TPU. The last layer of the model returns a list containing every YamNet class along with a probability, and only the classes with a probability of 30% or greater are kept. Finally, these filtered classes are checked against a list of known sounds, including dripping, running water, and several other similar ones.
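A rough sketch of that detection pass is below. GetClassificationResults is the classification helper from coralmicro's tensorflow utilities, while kLeakClassIds is an illustrative placeholder for the project's actual list of leak-related YamNet class indices:

// Sketch of one YamNet inference pass and the probability/class filtering described above.
#include <vector>

#include "libs/tensorflow/classification.h"

constexpr float kProbabilityThreshold = 0.30f;
// YamNet class indices for dripping, running water, etc. (placeholder, not the real list).
const std::vector<int> kLeakClassIds = {/* project-specific indices */};

bool DetectLeak(tflite::MicroInterpreter* interpreter) {
  // The input tensor is assumed to already hold the preprocessed audio (the
  // spectrogram frontend output); Invoke() runs the quantized model on the TPU.
  if (interpreter->Invoke() != kTfLiteOk) return false;

  // Keep only classes scoring at least 30%, then compare them to the known leak sounds.
  auto results = coralmicro::tensorflow::GetClassificationResults(
      interpreter, kProbabilityThreshold);
  for (const auto& result : results) {
    for (int id : kLeakClassIds) {
      if (result.id == id) return true;
    }
  }
  return false;
}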
If there is a match, the M4 core is started in parallel with the M7 core. Currently, the application running on the M4 simply blinks an LED until the M7 notifies it to stop, but this same concurrency could be extended to have the M4 handle an external sensor or manipulate a peripheral in response to a leak.
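A minimal M4-side sketch of that behavior might look like the following, assuming a one-byte IPC payload where zero means "stop blinking" (the real project defines its own message format):

// M4 application sketch: blink the user LED until the M7 sends a stop message.
#include "libs/base/ipc_m4.h"
#include "libs/base/led.h"
#include "third_party/freertos_kernel/include/FreeRTOS.h"
#include "third_party/freertos_kernel/include/task.h"

namespace {
volatile bool g_blinking = true;

// Assumed protocol: the first byte of the IPC payload is a blink on/off flag.
void HandleM7Message(const uint8_t data[coralmicro::kIpcMessageBufferDataSize]) {
  g_blinking = data[0] != 0;
}
}  // namespace

extern "C" void app_main(void* param) {
  coralmicro::IpcM4::GetSingleton()->RegisterAppMessageHandler(HandleM7Message);
  bool on = false;
  while (true) {
    // Toggle while blinking is enabled, otherwise hold the LED off.
    on = g_blinking && !on;
    coralmicro::LedSet(coralmicro::Led::kUser, on);
    vTaskDelay(pdMS_TO_TICKS(500));
  }
}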
Beyond sound, the Coral Dev Board Micro also hosts a web server that can respond to external requests for images and other data. For example, sending an HTTP GET request to /camera_stream returns a JPEG image, while a GET request to /index.html provides a simple web page for viewing live frames with the option to resize/rotate them.
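The handler behind those routes might look roughly like the sketch below, loosely based on coralmicro's camera_streaming_http example; kIndexFileName, the frame dimensions, and the JPEG quality value are placeholders rather than the project's actual values:

// URI handler sketch: return the HTML page for /index.html and a fresh JPEG for /camera_stream.
#include <string>
#include <vector>

#include "libs/base/http_server.h"
#include "libs/base/strings.h"
#include "libs/camera/camera.h"
#include "libs/libjpeg/jpeg.h"

namespace {
constexpr char kIndexFileName[] = "/index.html";  // Served from the board's filesystem.
constexpr int kWidth = 324;
constexpr int kHeight = 324;

coralmicro::HttpServer::Content UriHandler(const char* uri) {
  if (coralmicro::StrEndsWith(uri, "index.html")) {
    return std::string(kIndexFileName);
  }
  if (coralmicro::StrEndsWith(uri, "camera_stream")) {
    // Grab one RGB frame from the camera...
    std::vector<uint8_t> rgb(kWidth * kHeight *
                             coralmicro::CameraFormatBpp(coralmicro::CameraFormat::kRgb));
    coralmicro::CameraFrameFormat fmt{coralmicro::CameraFormat::kRgb,
                                      coralmicro::CameraFilterMethod::kBilinear,
                                      coralmicro::CameraRotation::k0,
                                      kWidth, kHeight,
                                      /*preserve_ratio=*/false, rgb.data()};
    if (!coralmicro::CameraTask::GetSingleton()->GetFrame({fmt})) return {};
    // ...and compress it to JPEG before handing it back to the HTTP server.
    std::vector<uint8_t> jpeg;
    coralmicro::JpegCompressRgb(rgb.data(), kWidth, kHeight, /*quality=*/75, &jpeg);
    return jpeg;
  }
  return {};  // Unknown URI.
}
}  // namespace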
I have also created a small Edge Impulse model, using captures from the Coral Dev Board Micro, that determines whether a pipe has defects. That project is available here. It was trained on a dataset of 107 RGB images passed to a 96x96 MobileNetV2 classification block, resulting in 100% training accuracy with a loss of just 0.26.
The model can then be deployed back to the Coral Dev Board Micro as a C++ library or reused on another embedded device to perform image-only leak detection tasks.
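A sketch of how that exported library could be invoked is shown here; run_classifier and the EI_CLASSIFIER_* macros come from the Edge Impulse C++ SDK, while the "defect" label name and the features buffer handling are assumptions for illustration:

// Sketch of running the exported Edge Impulse image classifier on one frame.
#include <cstring>

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

namespace {
// Holds the 96x96 frame packed the way the Edge Impulse image block expects
// (one 0xRRGGBB value per pixel, stored as a float).
float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback the SDK uses to pull feature data on demand.
int GetFeatureData(size_t offset, size_t length, float* out) {
  std::memcpy(out, features + offset, length * sizeof(float));
  return 0;
}
}  // namespace

bool PipeHasDefect() {
  signal_t signal;
  signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
  signal.get_data = &GetFeatureData;

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) return false;

  // Look for the defect class (label name assumed) scoring above 50%.
  for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
    if (std::strcmp(result.classification[i].label, "defect") == 0 &&
        result.classification[i].value > 0.5f) {
      return true;
    }
  }
  return false;
}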
Detecting leaks
With this configuration, the device can actively monitor ambient sounds for dripping water, splatters, and more, all without much in the way of power consumption thanks to the onboard TPU. Beyond working in low-light environments, the ability to use both sound and vision provides another layer of reliability when it comes to predicting and catching failures in larger infrastructure before they grow.