Sound classification is one of the most widely used applications of Machine Learning. One emerging use case for wearables is an environmental audio monitor for individuals with hearing disabilities: a wearable device with an onboard computer that listens to environmental sounds and classifies them. In this project, I focused on giving tactile feedback when vehicle sounds are detected. The Machine Learning model can detect ambulance and firetruck sirens as well as cars honking. When these sounds are detected, the device gives a vibration pulse that the wearer can feel. This use case can be transformative for people who are hard of hearing or deaf: to keep them safe from injury, the device informs them when a car, ambulance, or firetruck is nearby so that they can locate it and move out of the way.
I used the Edge Impulse platform to train a sound classification model and deployed it to the Syntiant TinyML board. The project can be used, for example, to help people with hearing impairments navigate the streets safely. It is equally useful for anyone strolling through the streets enjoying the melodies of Michael Jackson while inadvertently neglecting the surrounding traffic!
The Syntiant TinyML board is a tiny development board with a microphone, accelerometer, USB host microcontroller, and an always-on Neural Decision Processor™. It features ultra-low power consumption and a fully connected neural network architecture, and it is fully supported by Edge Impulse.
You can find the public Edge Impulse project here: Environmental Audio Monitor. To add this project to your account, click "Clone this project" at the top of the window. Next, go to the "Deploying to Syntiant TinyML board" section to see how to deploy the model to the Syntiant TinyML board.
Components and hardware configuration

Software components:
Hardware components:
- 3D printed components for the wearable
- Syntiant TinyML board
- Vibration motor module
- 3.7V LiPo battery. I used one with a 500 mAh capacity
- Veroboard/stripboard
- 1x 220 Ω resistor
- 1x 2N3904 transistor
- 1x 5.7 kΩ resistor
- Some jumper wires and male header pins
- Soldering iron and soldering wire
- Super glue. Always be careful when handling glues!
I first searched for open datasets of ambulance sirens, firetruck sirens, car horns, and traffic sounds. I used the Kaggle dataset of Emergency Vehicle Siren Sounds and the Isolated urban sound database for the key sounds. From these datasets, I created the classes "ambulance_firetruck" and "car_horn".
In addition to the key events that I wanted to detect, I also needed a class for everything else. I labelled this class "unknown"; it contains sounds of traffic, people speaking, machines, and other vehicles, among others. Each audio sample is 1 second long.
In total, I had 20 minutes of data for training and 5 minutes of data for testing. For part of the "unknown" class, I used the Edge Impulse keywords dataset, from which I took the "noise" audio files.
The Impulse design was unique since I was targeting the Syntiant TinyML board. Under "Create Impulse" I set the following configurations:
The window size is 968 ms and the window increase is 484 ms. I then added the "Audio (Syntiant)" processing block and the "Classification" learning block. For a detailed explanation of the Impulse design for Syntiant TinyML audio classification, check out the Edge Impulse documentation.
The next step was to extract features from the training data. This is done by the Syntiant processing block. On the Parameters page, I used the default log Mel-filterbank energy features and they worked very well. The Feature explorer is one of the most fun options in Edge Impulse: it visualizes all the data in your dataset in one graph. The axes are the outputs of the signal processing block, and they let you quickly validate whether your classes separate nicely. I was satisfied with how my features separated for each class, so I proceeded to the next step, training the model.
Under "Classifier" I set the number of training cycles to 100 with a learning rate of 0.0005. Edge Impulse automatically designs a default neural network architecture that works very well without requiring any parameter changes. If you do wish to tune the model, data augmentation can improve accuracy: try adding noise or masking time and frequency bands, and inspect the model's performance with each setting.
I then clicked “Start training” and waited for a few minutes for the training to be complete. Upon completion of the training process, I got an accuracy of 97.6%, which is pretty good!
When training the model, I used 80% of the data in the dataset; the remaining 20% is used to test how well the model classifies unseen data. We need to verify that the model has not overfit by testing it on new data. If the model performs poorly on the test set, it has overfit (memorized the training data). This can be resolved by adding more data and/or reconfiguring the processing and learning blocks if needed. Tips for increasing performance can be found in this guide.
On the left bar, click "Model testing", then "Classify all". The current model has an accuracy of 97.8% on the test data, which is good and acceptable.
From the test data, we can see the first sample has a length of 3 seconds. I recorded this in a living room where a computer was playing siren sounds and, at the same time, a television was playing a movie. In each 1-second window, the model was able to predict the ambulance_firetruck class. I took this as acceptable performance and proceeded to deploy the model to the Syntiant TinyML board.
To deploy our model to the Syntiant TinyML board, first click "Deployment" on the left side panel. Here, we will first deploy our model as firmware for the board. When the audible events (ambulance_firetruck and car_horn) are detected, the onboard RGB LED will turn on; when "unknown" sounds are detected, it will be off. This firmware runs locally on the board without requiring internet connectivity, and with minimal power consumption.
Under "Build Firmware" select Syntiant TinyML.
Next, we need to configure the posterior parameters. These are used to tune the precision and recall of the neural network activations, minimizing the False Rejection Rate and False Activation Rate. More information on posterior parameters can be found in the Edge Impulse documentation: Responding to your voice - Syntiant - RC Commands.
Under "Configure posterior parameters" click "Find posterior parameters". Check all classes apart from "unknown", and for the calibration dataset use "No calibration (fastest)". After setting the configurations, click "Find parameters".
This will start a new task, so we have to wait until it is finished.
When the job is complete, close the popup window and click "Build" to build our firmware. The firmware will be downloaded automatically when the build job completes.
Once the firmware is downloaded, we first need to unzip it. Connect a Syntiant TinyML board to your computer using a USB cable. Next, open the unzipped folder and run the flashing script based on your Operating System.
We can connect to the board's firmware over serial. To do this, open a terminal and select the COM port of the Syntiant TinyML board with 115200 8-N-1 settings (in the Arduino IDE Serial Monitor, that is 115200 baud with "Carriage return" line endings). Sounds of ambulance sirens, firetruck sirens, and car horns will turn the RGB LED red.
We can view the inference results in the Serial Monitor.
For "unknown" sounds, the RGB LED stays off. When configuring the posterior parameters, the detection classes we selected are the ones that trigger the RGB LED.
Let’s now explore the step-by-step process of assembling the wearable.
1. Deploy the model to the Syntiant TinyML board

In the Edge Impulse Studio, we first deploy the impulse as an optimized Syntiant NDP101/120 library. This packages all the signal processing blocks, configuration, and learning blocks into a single package. Afterwards, I updated the Arduino code from the Edge Impulse library and added functions to act on the model's predictions. The code can be obtained from this GitHub repository. The repository also has instructions on how to install the required libraries and upload the Arduino code to the Syntiant TinyML board.
The Arduino code turns GPIO 1 HIGH when ambulance, firetruck, or car siren/horn sounds are detected. GPIO 1 is then used to trigger a motor control circuit that creates a vibration. If you want to turn GPIO 2 or GPIO 3 high and low, you can use the commands OUT_2_HIGH(), OUT_2_LOW(), OUT_3_HIGH(), and OUT_3_LOW() respectively. These functions can be found in the syntiant.h file.
Once the code is uploaded to the Syntiant TinyML board, we can use the Serial Monitor (or any similar software) to see the logs generated by the board. Open a terminal and select the COM port of the Syntiant TinyML board with 115200 8-N-1 settings (in the Arduino IDE Serial Monitor, that is 115200 baud with "Carriage return" line endings). Sounds of ambulance sirens, firetruck sirens, and car horns will turn the RGB LED red; for "unknown" sounds, the RGB LED stays off.
2. 3D print the wearable parts

The wearable's components can be categorized into two parts: the electronic components and the 3D printed components.
The 3D printed component files can be downloaded from printables.com or thingiverse.com. The wearable casing is made up of two components: one holds the electrical components while the other is a cover. I 3D printed these with PLA material.
The other 3D printed components are flexible wrist straps. These are similar to the ones found on watches. I achieved the flexibility by printing them with TPU material. Note that if you do not have a good 3D printer you may need to widen the strap's holes after printing.
I then used super glue to attach the wrist straps to the case. Always be careful when handling glues!
Finally, the last 3D printed component is the wearable's dock/stand. This component is not important to the wearable's functionality. A device's dock/stand is just always cool! It keeps your device in place, adds style to your space, and saves you from the existential dread of your device being tangled in cables.
The wearable's electronic components include:
- Syntiant TinyML board
- 3.7V LiPo battery - the wearable's case can hold a LiPo battery which has a maximum dimension of 38mm x 30mm x 5mm
- Vibration motor module
- Circuit board for controlling the vibration motor module - the wearable's case can hold a circuit board that has a maximum dimension of 34mm x 28mm x 5mm
The image below shows the annotated Syntiant TinyML board. GPIO 1, GND, and the 5V pad on the bottom side are used for this smart wearable.
The Syntiant TinyML board has a LiPo battery connector and copper pads that we can use to connect our battery to the board. I chose to solder some wires on the copper pads for the battery connection. Note that the "VBAT" copper pad is for the battery's positive terminal.
The next task is to set up the circuit board for controlling the vibration motor. This control circuit receives a signal from the Syntiant TinyML board's GPIO and generates a signal that turns the vibration motor on or off. It can easily be soldered on a veroboard/stripboard with the following components:
- 1x 220 Ω resistor – one end connects to Syntiant GPIO 1, the other end connects to the base of the transistor
- 1x 2N3904 transistor – the emitter pin is connected to the negative terminal of the battery
- 1x 5.7 kΩ resistor – one end connects to the positive terminal of the battery, the other end connects to the collector of the transistor
Below is a circuit layout for the wearable. The motor's control circuit is represented by the transistor and resistors on the breadboard. The slide switch is optional; the case, however, has a slot on the side for placing one.
Once the electronic parts have been assembled, they can be put in the wearable's case according to the layout in the image below.
Below are some images of the assembly.
As I was working on the electronic components, I was not sure if the vibrations from the motor would be noticeable on the wearable. Fortunately, the motor module works very well! The wearable's vibration strength is similar to a smartphone's. This can be seen in the video below, which shows the motor vibrating from test code running on the Syntiant TinyML board.
Below is a video of the wearable on a person's wrist. The Syntiant TinyML board detects an ambulance siren sound in the background and signals it to the wearer through a vibration.
Below are some additional photos of the wearable.
This environmental sensing wearable is one of the many solutions that TinyML offers. With its onboard microphone, always-on Neural Decision Processor, ultra-low power consumption, and full Edge Impulse support, the Syntiant TinyML board has always fascinated me, and that made it the perfect choice for this project!
RatPack is another fascinating wearable created using the Syntiant TinyML board. African giant pouched rats have been given this gear to enable them to communicate with their human handlers when they come across a landmine or another object of interest. Please check out the documentation to learn more about this fascinating project. All of the wearable's required components are open source, and the documentation provides step-by-step instructions so you can easily create your own.