When it comes to most things in life, more is better! OK, fine, that's not always true...more homework never sounded good in school, and more swings on a golf course are bad for your scorecard. But when it comes to machine learning projects, more data is better, more sensors are better, and more training is better.

In this project, we'll explore a topic called sensor fusion, which is, quite simply, the process of combining data from different types of sensors, or similar sensors in different locations, to provide richer information and make better decisions and classifications.
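At its simplest, sensor fusion just means that readings from different sensors taken over the same time window are combined into one feature vector, so a model sees, for example, motion and ambient light together. Here is a minimal, hypothetical sketch of that idea (the function names and values are made up for illustration, not part of any Edge Impulse API):

```python
# Illustrative sketch of sensor fusion: readings from two different
# sensors covering the same time window are concatenated into a single
# feature vector. Names and values here are hypothetical.

def fuse_window(accel_window, light_window):
    """Flatten a window of (x, y, z) accelerometer triplets and a window
    of light-level readings into one feature vector."""
    features = []
    for x, y, z in accel_window:
        features.extend([x, y, z])
    features.extend(light_window)
    return features

# One small window from each sensor (made-up values):
accel = [(0.1, -0.2, 9.8), (0.0, -0.1, 9.7), (0.2, -0.3, 9.9)]
light = [120.0, 118.5, 121.2]

vector = fuse_window(accel, light)
print(len(vector))  # 3 triplets * 3 axes + 3 light readings = 12
```

A model trained on vectors like this can learn joint patterns ("this motion, in the dark") that neither sensor could express alone.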
To demonstrate this, we'll use the Sony Spresense development board, and the CommonSense add-on board that contains an array of sensors. We'll use Edge Impulse to build a machine learning model that leverages data from 2 of the on-board sensors (accelerometer + light sensor), then deploy it to the Spresense to perform inferencing directly on the device.
- Sony Spresense Mainboard
- Sony Spresense Extension Board
- SensiEdge CommonSENSE
- SD Card
The Sony Spresense mainboard contains a Sony CXD5602 processor with six Arm Cortex-M4F cores, GPIO, SPI, I2C, I2S, UART, 1.5 MB of RAM, and 8 MB of flash. The CommonSENSE add-on board fits into the pre-populated headers on the Spresense, and contains 10 (!) sensors to interpret and understand the environment:
- Accelerometer
- Magnetometer
- Gyro
- Air Quality
- Temperature
- Microphone
- Proximity
- Humidity
- Pressure
- Light
Simply place the CommonSENSE on the Spresense and push it in firmly. Next, take that combo and attach it to the Extension Board. Place an SD card in the slot, and attach the "sandwich" of all three boards to your development laptop or desktop with a micro-USB cable plugged into the port on the Spresense.

Once connected, we need to flash the Edge Impulse CommonSense firmware to the board. Download the firmware from the Edge Impulse documentation repository here, and unzip the file. Inside, there are flashing applications for Windows, Mac, and Linux. Use whichever suits your OS; I am using an Ubuntu virtual machine, so I used the Linux version: flash_linux.sh
With the firmware flashed, you can use the device as-is to collect data and feed it into Edge Impulse to create a dataset, train a machine learning model, and then deploy that model back to the Spresense.
Machine Learning Workflow

Step 1: Data Collection
The input for a machine learning model is data, and as mentioned above, more is better! There are many projects demonstrating the Edge Impulse workflow and of course the official Documentation explains everything as well, but we'll move quickly through the steps here. First, make sure the Spresense is connected to the Edge Impulse Studio, and click on Data Acquisition. With the device attached, you will be able to select a sensor, or combination of sensors, from the drop-down menu. Add a Label and choose an interval to sample for, then click "Start Sampling".
In the accompanying YouTube video embedded down below, I wanted to build a hypothetical smart running jacket. The goal for the jacket was to light up at night, while running. To achieve that, I can make use of the CommonSense IMU and light sensor, and create a sensor fusion model that incorporates both inputs. I began by capturing about 4 to 5 minutes of data for each of six classes: my target (running) motion in the light, the target motion in the dark, idle in the light, idle in the dark, other motions in the light, and other motions in the dark. I used 10-second intervals, so this gave me about 100 data points to work with. Again, more is better, but this got me started.
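If you want to plan your own collection session, the arithmetic is simple: each 10-second capture is one sample, so a few minutes of recording per class adds up quickly. A rough back-of-the-envelope sketch (not an Edge Impulse tool, just arithmetic; actual counts will be lower if you pause between captures):

```python
# Rough planning arithmetic for data collection: how many fixed-length
# samples a given number of minutes of recording per class yields.

def samples_per_class(minutes, sample_length_s=10):
    """Number of samples of sample_length_s seconds in `minutes` of capture."""
    return (minutes * 60) // sample_length_s

per_class = samples_per_class(4)  # 4 minutes of 10-second captures -> 24
total = per_class * 6             # across 6 classes -> 144
print(per_class, total)
```

Gaps between captures and discarded samples bring the real total down, which is why a nominal 4-5 minutes per class landed me at roughly 100 usable data points.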
Step 2: Model Design

Next, I moved on to building an Impulse. This step takes the data you have collected and prepares it for use in model development. Click on Impulse Design in the left navigation, and you will see that model training consists of four blocks. The first one is configured for you, as the project has already determined that we are using time-series data, based on the sensor readings we captured. The Processing Block needs to be added: select "Spectral Analysis", then check all of the sensors (we'll need them all here, though you can choose sensors individually depending on your project's needs). In the Learning Block, choose Classification, and you should see the six classes of data represented. Finally, click "Save Impulse" to move on to configuring the Spectral Features.
In the Spectral Analysis detail, you can inspect the raw features of any data element, and make changes as necessary, though the "Autotune" feature is helpful to quickly find optimal settings. I chose to Autotune, and then moved on to Generating Features.
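To build intuition for what the Spectral Analysis block is doing, it helps to see a discrete Fourier transform in action: it turns a window of time-series readings into frequency-domain magnitudes, so a repetitive motion like a running cadence shows up as a peak at its frequency. Edge Impulse's actual implementation is more sophisticated (filtering, FFT, statistics); this is only a minimal pure-Python DFT for intuition:

```python
import math

# Minimal DFT magnitude spectrum, for intuition only. A periodic "motion"
# in the time-series input appears as a peak at its frequency bin.

def dft_magnitudes(samples):
    """Return normalized magnitudes for the non-negative frequency bins."""
    n = len(samples)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im) / n)
    return mags

# A pure 2 Hz "motion" sampled at 16 Hz for one second:
signal = [math.sin(2 * math.pi * 2 * i / 16) for i in range(16)]
mags = dft_magnitudes(signal)
peak_bin = max(range(len(mags)), key=lambda k: mags[k])
print(peak_bin)  # bin 2 -> the 2 Hz component dominates
```

Features like these peak locations and magnitudes are what the classifier actually learns from, rather than the raw samples.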
When you click the "Generate Features" button, it will take a few minutes for the job to run, and at the end of the process you will be presented with a 2-dimensional representation of the features extracted from the dataset. You should see nice clustering and separation, as shown here. If your classes are not separating, or the plot has overlapping elements, you might need to re-tune the features or ensure you are collecting the right data.
At this point you can move on to the Classifier and configure the actual training that will be done to build the model. The default settings will likely work fine, though in the screenshot below I was experimenting with additional training cycles to better understand their impact on accuracy. When you are ready, click "Start training" and you will see the build log on the right-hand side of the screen as the training is performed. When finished, you will be presented with a Confusion Matrix that shows the accuracy on a validation set of data, and the Data Explorer will once again show a plot of how inferences are separated and clustered.
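Conceptually, what the trained classifier does is carve feature space into regions, one per class, and assign each new window to the region it falls in. Edge Impulse trains a neural network for this; as a hedged stand-in for the idea (not the actual model), here is a nearest-centroid sketch with made-up class names and two made-up features:

```python
import math

# Not Edge Impulse's actual model (which is a neural network trained in
# the Studio); a nearest-centroid sketch of the same idea. Each class
# occupies a region of feature space, and a new window is assigned to the
# closest class. All names and feature values here are hypothetical.

def classify(features, centroids):
    """Return the label whose centroid is nearest to the feature vector."""
    return min(centroids, key=lambda label: math.dist(features, centroids[label]))

# Hypothetical 2-D features: (motion energy, normalized light level)
centroids = {
    "running_light": [5.0, 0.9],
    "running_dark":  [5.0, 0.1],
    "idle_light":    [0.5, 0.9],
    "idle_dark":     [0.5, 0.1],
}

print(classify([4.8, 0.15], centroids))  # energetic motion, low light
```

The confusion matrix from training tells you how often windows land in the wrong region, which is exactly what the validation accuracy summarizes.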
With our model built, we can now deploy it to the Spresense + CommonSense.
Step 3: Deployment
To get this machine learning model onto our hardware, we need to generate some firmware and flash it to the board, using a process similar to the one we used earlier to load the data acquisition firmware.
Click on Deployment on the left navigation, and make sure the Spresense + CommonSense is already selected, then click "Build" at the bottom.
When complete, a download will automatically be generated. Unzip the downloaded file, and you will once again find flashing utilities for Mac, Windows, and Linux. I again used Linux, so I just ran the shell script.
With the firmware and model now loaded onto the Spresense, I started local inferencing by running:
edge-impulse-run-impulse
This started inferencing, and as I moved the board it was able to recognize the motions I had trained it on, and accurately detect light versus dark if I covered up the light sensor!
Below is a full video walkthrough of the complete process, but you can skip to the end if you just want to see the results. :-)
Video Tutorial

Conclusion

With the Sony Spresense and the CommonSense add-on board, you can quickly and easily prototype sensor fusion models that incorporate multiple sensors to fine-tune identification and classification of the world around you. Just keep in mind the golden rule of machine learning: MORE DATA = BETTER.