I recently got hold of an Arduino Nano RP2040 Connect and wondered what I could achieve with this tinyML- and WiFi-ready board. While searching for posts/projects/tutorials for this board, I was inspired by this https://www.hackster.io/kate-vasilenko/tiny-ml-air-writing-recognition-with-nicla-sense-me-ae6a11 and decided to replicate a similar proof-of-concept project using the popular Edge Impulse studio. My goal is to learn how a gesture ML project works and how to improve the inferencing accuracy.
I'll skip the introduction to the Nano RP2040 since you can find it on the official page. What I want to highlight is that there aren't many posts/projects about this board running tinyML, and even the popular Edge Impulse has no official support for it yet.. well, not FULL support yet. This EI GitHub repo actually supports the Pico RP2040, so all I need are some modifications to get the IMU sensor (LSM6DSOX) to read the accelerometer & gyroscope values, as discussed in this EI thread.
- Git clone the repo to your machine, https://github.com/edgeimpulse/firmware-pi-rp2040.
- Go to firmware-pi-rp2040/ThirdParty/Wire/src/Wire.h and change the SDA & SCL pins to GPIO12 and GPIO13 of the RP2040, respectively.
// Wire
#define SDA (12u) // was 8 for the Pico
#define SCL (13u) // was 9 for the Pico
- If you read the readme in the firmware repo, it says you need to install the Pico SDK on your machine, so git clone that repo as well, GitHub - raspberrypi/pico-sdk, and set the PICO_SDK_PATH environment variable to wherever you put the pico-sdk folder.
- Back in your firmware folder, run the build commands as given in the readme. The output uf2 file will be in the firmware-pi-rp2040/build folder.
FYI, my machine is Windows 10, and I ran the above in WSL Ubuntu 18.04.
The steps above should get the Nano RP2040 connected to EI Studio, but you need to run the edge-impulse-daemon CLI in PowerShell.
1.1 Little Tweak to Gyro Value
As the header says, I made a little tweak to the gyro value after numerous trials & errors, because I found that the default gyro value captured by the EI firmware is in dps (degrees per second). When both accel & gyro values are displayed in the same chart, the accel values are relatively weak/invisible/insignificant compared to the gyro. I wasn't paying attention to this until I realized that my 1st attempt, which was trained using accelXYZ & gyroXYZ, failed miserably during inference, even though the testing accuracy looked promising. You can see how weak the captured accelXYZ signal is in the EI studio.
So, I decided to scale the gyro values down so that they stay in a similar range as the accel; since gyro readings in dps are typically an order of magnitude or more larger than accel readings in m/s², a coefficient of 0.1 brings the two into a comparable range. Back in the firmware repo, open ei_inertialsensor.cpp in firmware-pi-rp2040/edge-impulse/ingestion-sdk-platform/sensors and add the coefficient to the gyro values in the function below.
float *ei_fusion_inertial_sensor_read_data(int n_samples)
{
    if (IMU.accelerationAvailable()) {
        IMU.readAcceleration(imu_data[0], imu_data[1], imu_data[2]);
        imu_data[0] *= CONVERT_G_TO_MS2;
        imu_data[1] *= CONVERT_G_TO_MS2;
        imu_data[2] *= CONVERT_G_TO_MS2;
    }

    if (n_samples > 3 && IMU.gyroscopeAvailable()) {
        IMU.readGyroscope(imu_data[3], imu_data[4], imu_data[5]);
        /* Scale the gyro (dps) down by a 0.1 coefficient so it stays in a
           similar range as the accel; doing this only when fresh gyro data
           was read avoids re-scaling stale values on every call */
        for (int i = 3; i < 6; i++) {
            imu_data[i] *= 0.1f;
        }
    }

#ifdef DEBUG
    for (int i = 0; i < INERTIAL_AXIS_SAMPLED; i++) {
        ei_printf("%f ", imu_data[i]);
    }
    ei_printf("\n");
#endif

    return imu_data;
}
Rebuild the firmware, load the uf2, and re-connect as a new project in EI Studio; now you can see that both accelXYZ & gyroXYZ are comparable.
The video below showcases how to collect gesture data with the built-in IMU sensor of the Arduino Nano RP2040 in EI Studio.
2.0 Training
As mentioned above, my 1st attempt didn't end well even though the training/testing accuracy was considerably high. So I'll jump straight to the 2nd attempt, which was trained using the scaled-down gyro values.
I basically had no idea which pre-processing to start with, so I just picked the recommended spectral analysis block, which computes the frequency and power of the signal over time, a 2-second window in this case. There are tooltips on each parameter, and after some online research and trial & error, I found that scaling up the axes gives better accuracy, probably because even the weaker signals get amplified before feature extraction. Besides, increasing the cut-off frequency also helps, which I believe could be due to less noise being fed to the model (with the low-pass filter enabled); a minimal sketch of the filtering idea follows.
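To make the low-pass filter intuition concrete, below is a one-pole low-pass filter in plain C++. This is my own illustration, not Edge Impulse's actual DSP code, and the function name and parameters are made up for the example; cutoff_hz plays the role of the cut-off parameter in the spectral analysis block.
#include <math.h>
// Illustration only (not Edge Impulse's implementation): a one-pole
// low-pass filter. Signal content well below cutoff_hz passes through,
// while higher-frequency content (e.g. sensor noise) gets attenuated.
float lowpass_step(float x, float y_prev, float cutoff_hz, float sample_hz)
{
    // Standard one-pole smoothing factor: dt / (RC + dt),
    // with RC = 1 / (2 * pi * cutoff_hz) and dt = 1 / sample_hz
    float two_pi_fc = 2.0f * (float)M_PI * cutoff_hz;
    float alpha = two_pi_fc / (two_pi_fc + sample_hz);
    // Move the filtered output toward the new sample by a factor of alpha
    return y_prev + alpha * (x - y_prev);
}
Raising cutoff_hz pushes alpha toward 1, so more of the raw gesture signal survives the filter.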
For the training part, as seen on the impulse page, the classification block uses a neural network (in Keras). This is extremely handy for a beginner like me who doesn't have in-depth ML knowledge, since there's no need to worry about tuning the model's hyperparameters. The training parameters are shown below; I increased the learning rate and the number of iterations. As a suggestion to improve the user experience, I think the training output window could be visualized like TensorBoard, e.g., viewing the metrics as training progresses. After a series of trial and error, the training result is decent enough for a tinyML project, and the loss is also good enough.
When testing with the test dataset, the accuracy is about 70%, not too bad for a tinyML project. We will see how it performs during inference on the Nano RP2040.
The Arduino Nano RP2040 is not officially supported yet (at the time of writing). When building firmware in EI Studio, you will find officially supported devices such as the Arduino Nano 33 BLE Sense & the Raspberry Pi RP2040, but not the Nano RP2040, so I can only build the Arduino library and modify the sketch to upload to the Nano RP2040.
EI Studio will generate a library folder, and you can learn how to deploy it in the Arduino IDE here. There are several examples in the library, of which I'm using 'nano_ble33_sense_accelerometer' for my inferencing test.
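For reference, the example sketches pull the generated model in through a single umbrella header named after your EI project; the project name below is hypothetical, yours will match whatever you called the project in the studio.
// The Edge Impulse Arduino library exposes everything through one header
// named <your_project>_inferencing.h; "air_writing" is a hypothetical name
#include <air_writing_inferencing.h>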
Some modifications need to be done to the example sketch:
- Replace the IMU sensor library
#include <Arduino_LSM6DSOX.h> //#include <Arduino_LSM9DS1.h>
- Change the EI classifier flag
#if !defined(EI_CLASSIFIER_SENSOR) || EI_CLASSIFIER_SENSOR != EI_CLASSIFIER_SENSOR_FUSION // EI_CLASSIFIER_SENSOR_ACCELEROMETER
- The frame size needs to change from 3 to 6 since I now have AccXYZ + GyrXYZ (this also applies in setup(); see the note after the loop below).
- Add a coefficient to scale down the gyro after reading the accelerometer.
// Add a variable for the gyro coefficient to scale it down
float gyro_coef = 0.1;

// Change ix += from 3 to 6 (accelXYZ + gyroXYZ per frame)
for (size_t ix = 0; ix < EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE; ix += 6) {
    // Determine the next tick (and then sleep later)
    uint64_t next_tick = micros() + (EI_CLASSIFIER_INTERVAL_MS * 1000);

    IMU.readAcceleration(buffer[ix], buffer[ix + 1], buffer[ix + 2]);

    // Clamp accelerometer readings to the accepted range
    for (int i = 0; i < 3; i++) {
        if (fabs(buffer[ix + i]) > MAX_ACCEPTED_RANGE) {
            buffer[ix + i] = ei_get_sign(buffer[ix + i]) * MAX_ACCEPTED_RANGE;
        }
    }

    buffer[ix + 0] *= CONVERT_G_TO_MS2;
    buffer[ix + 1] *= CONVERT_G_TO_MS2;
    buffer[ix + 2] *= CONVERT_G_TO_MS2;

    // Read gyro & multiply with the coefficient to scale it down
    IMU.readGyroscope(buffer[ix + 3], buffer[ix + 4], buffer[ix + 5]);
    for (int i = 3; i < 6; i++) {
        buffer[ix + i] *= gyro_coef;
    }

    delayMicroseconds(next_tick - micros());
}
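One more thing: if your copy of the example carries the stock sanity check on the raw frame size in setup(), that check needs the same 3 to 6 change. A sketch, assuming the check is present in your example:
// In setup(): the stock example guards against a model/sketch mismatch.
// With accel + gyro fused, the impulse now expects 6 raw axes per frame.
if (EI_CLASSIFIER_RAW_SAMPLES_PER_FRAME != 6) { // was != 3
    ei_printf("ERR: EI_CLASSIFIER_RAW_SAMPLES_PER_FRAME should be 6 (accelXYZ + gyroXYZ)\n");
    return;
}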
This sketch reads the IMU signals for a 2-second window and then performs inference to recognize which alphabet letter was air-written, as shown in the video below.
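For completeness, the part of the sketch after the sampling loop stays essentially as in the stock example; paraphrased below as a sketch rather than an exact copy, it wraps the buffer in a signal and runs the classifier.
// After the sampling loop: wrap the raw buffer in a signal_t and classify
signal_t signal;
int err = numpy::signal_from_buffer(buffer, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);
if (err != 0) {
    ei_printf("Failed to create signal from buffer (%d)\n", err);
    return;
}

ei_impulse_result_t result = { 0 };
err = run_classifier(&signal, &result, debug_nn);
if (err != EI_IMPULSE_OK) {
    ei_printf("ERR: Failed to run classifier (%d)\n", err);
    return;
}

// Print the score the model assigns to each air-written letter
for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
    ei_printf("    %s: %.5f\n", result.classification[ix].label, result.classification[ix].value);
}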
Lastly, what I noticed in this project: since I'm using the spectral analysis method, the speed of my air-writing influences the inference, as it changes the frequency content of the data. I'm wondering what would be a better method to recognize the air-writing with no (or less) influence from speed, be it slower or faster. Hopefully this post can trigger others to share their insights/experience :)