This Hackster page is associated with North Carolina State University Electrical and Computer Engineering Senior Design Project 03: TI Machine Learning Robotics, for the Fall 2021 and Spring 2022 semesters. The goal of this project is to make machine learning more accessible and to test the capabilities of Texas Instruments' Robotics System Learning Kit, also known as the RSLK MAX. This is one of two project pages; this one showcases the audio machine learning capabilities of the RSLK MAX.
Check out our Terrain Recognition Project
The GitHub repository includes optional PCB files as well as some of our example Edge Impulse models and the prediction code you may need to get started: https://github.com/asoakley/TI-ML-Tutorial
Our demo videos can be found here.
1. Set Up The Hardware Platform
1.1 Assemble the TI-RSLK MAX
This project will be using the TI-RSLK MAX robotics kit that can be purchased from the Texas Instruments website: https://www.ti.com/tool/TIRSLK-EVM. The robotics kit is easy to build, does not require any soldering, and can be constructed within 15-30 minutes.
Texas Instruments has an assembly guide here: https://www.ti.com/lit/ml/sekp164/sekp164.pdf?ts=1650900966929
The assembly guide has four parts: Part 1 is the construction guide for the RSLK MAX kit, Part 2 is the assembly of the MSP432 LaunchPad (not necessary if the kit was purchased), Part 3 is the disassembly of the RSLK MAX, and Part 4 is the option to attach an OLED screen to the robot. While the guide goes through the construction with all the bells and whistles, some parts are optional for this guide and can be skipped: the bumper switches constructed in Step 1.1 and the 8-channel QTRX line-following sensor in Step 1.6 can be installed if desired, and the entirety of Part 4 is also optional.
Additional Information about the TI-RSLK MAX can be found here:
https://university.ti.com/programs/RSLK/
1.2 Attach the Audio Board
TI has an Audio BoosterPack that can be purchased here: https://www.ti.com/tool/BOOSTXL-AUDIO. The Audio BoosterPack contains sensors that can be used for TinyML, including an onboard microphone with its associated amplifiers. We will only be using the microphone for machine learning.
If you are interested in building a custom PCB with additional sensors, see the later part of this page to learn more.
2. Set Up The Software Platform
2.1 Install the Arduino IDE
This project utilizes the Arduino IDE to program and flash firmware to the TI-RSLK MAX.
Download and install the latest version here: https://www.arduino.cc/en/software
2.2 Install Support for the MSP432
Once the Arduino IDE is installed, run the program. Navigate to File > Preferences > Additional Board Manager URLs and paste the link below.
http://s3.amazonaws.com/energiaUS/packages/package_energia_index.json
Navigate to Tools > Boards > Boards Manager. Find and install “Energia MSP432 EMT RED boards” via the search bar or by scrolling down the list. This should take 10-20 minutes.
2.3 Install the Arduino Libraries for the RSLK
For ease of access, the libraries needed for the RSLK have been included in the project GitHub repo at https://github.com/asoakley/TI-ML-Tutorial.
To add a .ZIP library in the Arduino IDE, navigate to Sketch > Include Library > Add .ZIP Library… Select the file path of the library and click Okay. A message saying “Library added” should appear. To check that the library was added, navigate to Sketch > Include Library and see if the name of the library is listed under “Contributed Libraries.”
2.4 Install the Edge Impulse Data Forwarder
The Edge Impulse Data Forwarder, which is part of the Edge Impulse CLI (Command Line Interface), is the tool used in this project to collect and forward data to Edge Impulse via a serial connection.
Install Python 3 on your host computer.
Install Node.js v14 or higher on your host computer. Be sure to install the Additional Node.js tools (called Tools for Native Modules on newer versions) when prompted.
Install the Edge Impulse CLI tools by entering the following in a Command Prompt window on your device:
npm install -g edge-impulse-cli --force
3. General ML Model Creation
3.1 Flash Data Sampling Code
Before you can start training models, you must program your device to sample data and format it correctly. Code for data sampling can be found in the GitHub repository at https://github.com/asoakley/TI-ML-Tutorial/tree/main/testing%20and%20training. For your own custom sensors or data, “DataSamplingTemplate.ino” is a basic implementation that contains instructions on how to edit it for your project. The other files are for the example projects discussed further in Section 5.
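As a rough illustration, the sketch below shows the sampling pattern such a program follows: read the sensor at a fixed rate and print one reading per line over serial (multiple axes would be comma-separated on one line), which is the format the Edge Impulse Data Forwarder expects. The A8 microphone pin and 1000 Hz rate are assumptions borrowed from the audio example in Section 5; this is a minimal sketch, not the repository template itself.

// Minimal data-sampling sketch for the Edge Impulse Data Forwarder.
// MIC_PIN (A8) and the 1000 Hz rate are assumptions; adjust for your sensor.
#define MIC_PIN A8
#define SAMPLE_PERIOD_US 1000   // 1000 us per sample = 1000 Hz

unsigned long last_us = 0;

void setup() {
  Serial.begin(115200);   // default baud rate expected by the data forwarder
}

void loop() {
  if (micros() - last_us >= SAMPLE_PERIOD_US) {
    last_us = micros();
    // One reading per line; comma-separate the values if you have several axes.
    Serial.println(analogRead(MIC_PIN));
  }
}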
Once your data sampling code is ready to upload, use a micro USB cable to connect your robot to your computer. In the Arduino IDE, under Tools > Boards, select “RED Launchpad w/ msp432p401r EMT (48MHz)” under “Energia MSP432 (32-bits) RED Boards.” Press the “Upload” arrow to flash the program onto the RSLK. The robot should now continuously sample data whenever it is powered on.
3.2 Connect the MSP432 to Edge Impulse
First, the robot must be connected using a micro USB cable to a computer’s USB port.
Type this command in the Command Prompt to access the Data Forwarder:
edge-impulse-data-forwarder
If you are starting fresh, you will be asked to:
- Log into Edge Impulse
- Select your serial port: “COMx (Texas Instruments Incorporated)”
- Select which project you wish to upload data to
- Provide names for your sensor axes (e.g. “audio”)
When connecting to the data forwarder again after logging in and selecting a project, you will only be asked to select your serial port. It is important to note that the MSP432 will have two serial connections: one for UART, and the other for the debug probe. If the wrong port is chosen, the data forwarder will fail to auto-detect the data frequency. In this event, simply restart the data forwarder.
To clean out all previous configurations, enter this command on startup:
edge-impulse-data-forwarder --clean
If the correct serial port is selected in the setup process, the data forwarder will auto-detect the frequency of the incoming data. This auto-detected frequency can be manually overwritten with your desired frequency of xxxx Hz on startup with:
edge-impulse-data-forwarder --frequency xxxx
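For example, to force the 1000 Hz sampling rate used by the audio example in Section 5:
edge-impulse-data-forwarder --frequency 1000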
For additional information on the Edge Impulse Data Forwarder and the rest of the Edge Impulse CLI, please refer to https://docs.edgeimpulse.com/docs/edge-impulse-cli/cli-overview.
3.3 Data Acquisition
After the MSP432 is connected to Edge Impulse, navigate to the data acquisition tab, and ensure the correct device is connected.
Select the desired label for the data about to be recorded (in this case audio).
Set the frequency and time to record.
Select the “Record Sample” button and begin recording data, ensuring there is a good split between training and testing data. The data can be split automatically or divided manually.
If the data has not been split, navigate to the dashboard, scroll down to the “Danger Zone”, select “Perform Train / Test Split”, select “Yes” in the dialog box, and enter the requested phrase to perform the split. We recommend an 80/20 train/test split.
3.4 Impulse Design
3.4.1 General
Depending on the desired functionality of the model, the impulse design will vary. In general, there are three types of blocks used to create the machine learning model: Time Series Data, Processing, and Learning.
For time-series data, the window size, window increase, and frequency can be set. From there, a preprocessing block can be selected, or the raw data may be used. The learning block is then selected, which depends on the desired performance of the model. For classification tasks, like our examples, select “Classification (Keras)”.
Select the “Save Impulse” button to move on to the next step, where preprocessing features can be customized. Based on the characteristics of your data, you can adjust how the data is altered before being passed through the classifier. For example, if using the Spectral Analysis block, you can adjust the cut-off frequency, filter order, FFT length, number of peaks, peaks threshold, and power edges. After this step, Edge Impulse uses the preprocessing functions (or lack thereof) and creates features for all of your data points.
Once the features have been extracted, the ML model can be trained by setting the Neural Network block settings (Classification, for example). Here you can set the number of training cycles, the learning rate, the validation set size, and whether to automatically balance the dataset. We recommend at least 100 training cycles with the default validation set size (20%) and learning rate (0.0005). The neural network architecture can also be adjusted: dense or convolutional layers, the number of neurons in each layer, and the addition of more layers or dropout.
Select the “Start Training” button to begin the training process. This may take several minutes, depending on the number of training cycles, the learning rate, and the number of features.
After model training has completed, the page displays the training performance in terms of accuracy and loss, a confusion matrix showing the prediction accuracy for each label, a feature explorer, and the estimated on-device performance. The feature explorer can be used to inspect the key features of the dataset and to identify outliers or ambiguities for further refinement.
You may also select a model version with the options being “Quantized (int8)” and “Unoptimized (float32)”. Quantized takes up less flash memory, but Unoptimized may be more accurate if some of your class features are very close together on the feature map. Most of the time the two versions will have similar if not the same accuracy, so Quantized is typically recommended.
3.5 Model Testing
Model testing involves running the previously generated ML model against the test data, which produces the model testing results. These results show the accuracy via a confusion matrix and feature explorer. While this testing is not representative of live data, it allows further adjustments to be made to improve the results.
Another tool available to further test and optimize an ML model is Live Classification. This can be accomplished by connecting the desired device, setting the data recording settings, and classifying either live data or existing test samples.
After classifying each sample of either live data or test samples, the classification includes the detailed results, showing the timestamps of the sampling, the predictions with their confidence ratings, and the spectral features chart.
3.6 Deployment
Once the ML model has been trained, tested, and refined to the desired level, the impulse can be deployed. For this use case, under the “Create Library” section, select “Arduino Library”. There is an option to enable the EON Compiler; use it if it reduces the memory requirements without significantly impacting model performance. The available optimizations are also shown, with predicted RAM usage, flash usage, latency, and accuracy.
Select the “Build” button at the bottom of the page, which will download the model as a .ZIP file, making it ready for deployment in the Arduino IDE.
4. Flashing Models Onboard The Robot
4.1 Upload Model Library to Arduino IDE
Refer to Section 2.3 on adding .ZIP libraries to the Arduino IDE. After adding the .ZIP file deployed from Edge Impulse, navigate to the Documents folder on your computer. From there, access Arduino > libraries > {name of your Edge Impulse project}_inferencing > src > edge-impulse-sdk > porting > arduino. Open the file “ei_classifier_porting.cpp” in a text editor and add the highlighted line below. This step is required; the code will not compile without it.
If you have already done this and are uploading a new version of your model with the same name, you will need to delete the “{name of your Edge Impulse project}_inferencing” folder and repeat the process with the new .ZIP file.
4.2 Flash Live Classification Code
Go to https://github.com/asoakley/TI-ML-Tutorial/tree/main/prediction for live onboard classification programs. The file “PredictionsTemplate.ino” contains the required code for running the model, along with instructions on incorporating your own sensor data and robot commands. Edit the template to fit your Edge Impulse model, sensors, and desired outputs. Plug the RSLK into your computer with a micro USB cable, make sure your board in the IDE is set to “RED Launchpad w/ msp432p401r EMT (48MHz)”, and then press “Upload” to flash the code onto your robot. Compiling the Edge Impulse model will take several minutes. After this process is complete, your robot is ready to run your machine learning model. If the robot is plugged into the computer while running, you can view the serial terminal in the Arduino IDE to see the confidence values of each class after every classification.
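As a hedged sketch of what such a program boils down to (not the exact contents of the template), the core loop looks like the following. The include name below is a placeholder, and the A8 microphone pin and 1 ms sample spacing are assumptions; use the header generated for your own Edge Impulse project.

// Sketch of onboard inferencing with the Edge Impulse Arduino SDK.
// The include below is a placeholder; use your own project's generated header.
#include <ti-ml-audio-classification_inferencing.h>

// Raw sample buffer; for the audio example this is 1500 samples (1.5 s at 1000 Hz)
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback the SDK uses to read slices of the buffer during inferencing
static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
  memcpy(out_ptr, features + offset, length * sizeof(float));
  return 0;
}

void setup() {
  Serial.begin(115200);
}

void loop() {
  // 1. Fill the buffer from the sensor (pin A8 and the 1 ms spacing are assumptions)
  for (size_t i = 0; i < EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE; i++) {
    features[i] = (float)analogRead(A8);
    delayMicroseconds(1000);
  }

  // 2. Wrap the buffer in a signal and run the classifier
  signal_t signal;
  signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
  signal.get_data = &get_feature_data;

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
    return;
  }

  // 3. Print the confidence value of each class over serial
  for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
    Serial.print(result.classification[ix].label);
    Serial.print(": ");
    Serial.println(result.classification[ix].value, 4);
  }
}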
5. Edge Impulse Navigation and Use
5.1 Audio Recognition
5.1.1 Data Sampling
To begin training a model for audio recognition, you need to run a program that samples data and sends it to Edge Impulse via a serial connection with the Edge Impulse Data Forwarder. For this project, download and run “AudoTest_V2.ino” from the “testing and training” folder of the project GitHub repository to gather your training data. In this example, the microphone is sampled at 1000 Hz.
5.1.2 Model Design
The model trained in this project contains three classes: “Clockwise”, “Drive”, and “Noise”. Around 7 minutes of data was collected for each class. Clockwise and Drive were sampled 30 seconds at a time, with the commands repeated periodically. These 30-second samples were then split into many smaller 1.5-second samples, each containing an instance of a voice command. All noise samples were also recorded 30 seconds at a time, but they were not split into smaller segments.
The following figure contains the settings used in the “Create Impulse” tab of the project. The project uses a 1500 ms window size, 500 ms window increase, 1000 Hz sampling frequency, and zero-padded data. The project also includes an MFCC block for signal processing and a Classification block for learning.
The figures below contain the parameters for the MFCC block and the Neural Network Classifier block.
After training the model, the feature explorer shows the spread of the different voice commands along with noise. The red dots indicate incorrect predictions; those samples can be removed to further optimize the model accuracy.
For this model, choosing either model version of quantized (int8) or unoptimized (float32) did not make a difference in the accuracy or loss. The quantized model had a lower peak RAM and flash usage, resulting in it being the best option for this case.
Our completed Audio Recognition Edge Impulse project can be found at:
https://studio.edgeimpulse.com/studio/78815.
Our deployed Arduino model library, called “ei-ti-ml-audio-classification-arduino-1.0.28.zip”, can be found in the “libraries” folder of the project GitHub.
5.1.3 Onboard Classification
The onboard classification program used in this project, called “AudioPredictions_V1”, samples the microphone just like before, but instead of sending the data to Edge Impulse, it feeds the data into the onboard audio recognition model for inferencing. The program declares a buffer to hold 1.5 seconds of audio data, which is equivalent to 1500 audio samples. The prediction algorithm runs automatically. While the RSLK is sampling the microphone and filling the data buffer, the RGB LED lights red. When the RGB LED turns off, the program runs the classifier. Once the classifier finishes, it outputs the confidence values of all the classes, which can be viewed on the serial port. The class with the highest confidence value is chosen as the final prediction, and the robot then sets the motors for a movement reflecting the recognized voice command. In this use case, the command “Drive” should make the robot drive forward for 1 second, and the command “Clockwise” should make the robot spin clockwise for 1 second. If the robot predicts “Noise”, no action is performed and the program goes right back to sampling data. During onboard classification, the timing of voice commands is very important: if a command is given while the microphone is not sampling, it may be cut off and incorrectly classified.
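As a sketch of that final decision step (continuing the inferencing example from Section 4.2, and assuming the SimpleRSLK motor functions from the library bundle in the project repo, with setupRSLK() called once in setup()), a helper like the hypothetical actOnPrediction() below could be called after run_classifier():

// Act on the most confident class; motor calls assume the SimpleRSLK API.
void actOnPrediction(ei_impulse_result_t &result) {
  // Argmax over the class confidences to pick the final prediction
  size_t best = 0;
  for (size_t ix = 1; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
    if (result.classification[ix].value > result.classification[best].value) {
      best = ix;
    }
  }

  if (strcmp(result.classification[best].label, "Drive") == 0) {
    // Drive forward for 1 second
    setMotorDirection(BOTH_MOTORS, MOTOR_DIR_FORWARD);
    enableMotor(BOTH_MOTORS);
    setMotorSpeed(BOTH_MOTORS, 20);   // speed value is an assumption
    delay(1000);
    disableMotor(BOTH_MOTORS);
  } else if (strcmp(result.classification[best].label, "Clockwise") == 0) {
    // Spin clockwise in place for 1 second
    setMotorDirection(LEFT_MOTOR, MOTOR_DIR_FORWARD);
    setMotorDirection(RIGHT_MOTOR, MOTOR_DIR_BACKWARD);
    enableMotor(BOTH_MOTORS);
    setMotorSpeed(BOTH_MOTORS, 20);
    delay(1000);
    disableMotor(BOTH_MOTORS);
  }
  // "Noise": take no action and return to sampling
}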
[OPTIONAL]: Assembling Custom PCB for Sensors
To consolidate multiple TI BoosterPack sensors into one package, we created a custom PCB that integrates the BMI160 IMU and a microphone that can be used for TinyML.
All PCB files can be found here: https://github.com/asoakley/TI-ML-Tutorial/tree/main/PCB
The custom PCB was designed in Eagle; you can replicate or iterate on it using the following Eagle libraries:
https://github.com/asoakley/TI-ML-Tutorial/tree/main/PCB/Eagle%20Libraries
The PCB Gerbers can be manufactured by submitting them here: https://www.makerfabs.com/pcb-prototyping-service.html
Fill out the order options using the dropdown menus on the page.
Make sure to get a stencil as it will make assembling the custom PCB much easier. The stencil will help you apply solder paste for your components on the custom PCB.
This video shows how to use a PCB stencil: https://www.youtube.com/watch?v=5AyxuuFjZSI&ab_channel=Eslam%27sLab
Submit the Gerber files using the upload button on the page and proceed to checkout as normal.
To get materials for the custom PCB, all of the parts can be purchased from vendors such as Mouser, Digikey, and Amazon. The bill of materials can be found here: https://github.com/asoakley/TI-ML-Tutorial/blob/main/PCB/Bill%20of%20Material.xlsx
Additionally, you will need to have access to or purchase a soldering station with an iron and a heat gun, solder paste, solder, tweezers, heat resistant tape, desoldering wick, flux and rubbing alcohol (isopropyl 70% or higher should be fine).
To assemble a blank PCB, it is wise to clean the PCB with isopropyl alcohol before soldering. Start with the SMD components, first using the stencil to apply solder paste.
Next, use tweezers to pick and place the SMD components onto the PCB and apply even heat with the heat gun to reflow them onto the board. This includes all the resistors and capacitors as well as the microphone op-amp.
Afterward, solder on the microphone and the BMI160 breakout board, using heat-resistant tape to hold the components down while you solder. Consult the microphone's (POM-2242P-C33-R) datasheet before soldering it down.
Finally, solder on the 2x10 header pins and the optional 2x3 and 2x6 header pins. Again, use the heat-resistant tape to hold down the headers, making sure they sit flush with the board before soldering.
If you are unsure how to solder, there are plenty of tutorials online; video-sharing platforms such as YouTube have many helpful videos.
Resources
Arduino IDE for TI-RSLK MAX
https://www.hackster.io/measley2/robotics-system-workshop-arduino-programming-on-ti-rslk-max-d33faa
Edge Impulse Data Forwarder
https://docs.edgeimpulse.com/docs/cli-installation
Arduino BMI160 Library
https://github.com/hanyazou/BMI160-Arduino
Onboard Classification for MSP432