Alright, let's get started with our Gesture Recognition on Microcontroller with TinyML! Let's go! Using gesture recognition with the built-in light sensor, the Wio Terminal will be able to recognize the rock, paper, and scissors gestures and display the corresponding image on the screen.
"Kids everywhere play rock, paper, scissors. The game is very old. It is said that people played it 2,000 years ago, in China" - Source.
Here's a quick refresher on the game of rock, paper, scissors:
This project is inspired by the game, and I would like to try out how well a device such as the Wio Terminal from Seeed Studio can perform edge classification using TinyML technology powered by Edge Impulse.
In this detailed tutorial, we will cover the following:
- What's TinyML?
- Why TinyML Is Such A Big Deal
- Create and select models
- Data Acquisition (rock, paper, scissors)
- Training and Deployment
- Programming & Model Usage
This model uses the Wio Terminal to recognize hand gestures (rock, paper, and scissors) using the built-in light sensor. This is quite hard to accomplish using rule-based programming because gestures are not performed the same way every time. Trying to solve this problem with traditional programming would require hundreds of different rules for each mode of action. Even in an idealized situation, we would still miss many motion factors such as speed, angle, or direction. A slight change in any of these factors requires a new set of rules to be defined, but machine learning can handle these variations very easily. A well-known approach is to combine camera sensors with machine learning to recognize gestures. Using a light sensor is like using only a single pixel of that camera, which is a completely different level of challenge. Let's meet the challenge with this project.

Project Video

What's TinyML?
TinyML is a type of machine learning that shrinks deep learning networks to fit on tiny hardware. It brings together Artificial Intelligence and intelligent devices.
It is 45x18mm of Artificial Intelligence in your pocket. Suddenly, the do-it-yourself weekend project on your Arduino board has a miniature machine learning model embedded in it. Ultra-low-power embedded devices are invading our world, and with new embedded machine learning frameworks, they will further enable the proliferation of AI-powered IoT devices. Source
Why TinyML Is Such A Big Deal

While machine-learning (ML) development activity most visibly focuses on high-power solutions in the cloud or medium-powered solutions at the edge, there is another collection of activity aimed at implementing machine learning on severely resource-constrained systems.
Known as TinyML, it’s both a concept and an organization — and it has acquired significant momentum over the last year or two.
“TinyML deployments are powering a huge growth in ML deployment, greatly accelerating the use of ML in all manner of devices and making those devices better, smarter, and more responsive to human interaction,” said Steve Roddy, vice president of product marketing for Arm‘s Machine Learning Group.
So, TinyML is the trend and a BIG OPPORTUNITY right now. Many of you probably think that it's very complicated to do projects that involve TinyML? Well, thanks to tools such as Codecraft by Seeed Studio, you can now get started easily. Literally by spending just 1 hour on this article, you will succeed in deploying your first TinyML project! Don't believe me? Try it out and you will be surprised!
Seeed Studio's graphical programming platform, Codecraft, has made it so easy for everyone to get started creating TinyML projects! The TinyML engine is powered by Edge Impulse! Even better, it will automatically convert the block-based code into text-based code (C++) for you to expand your project ideas! It's incredibly awesome! You should definitely check it out!
Go to https://ide.tinkergen.com/. Select "(for TinyML) Wio Terminal".
1.1 Create the "Gesture Recognition (Built-in Light Sensor)" model
Click on "Model Creation" on the embedded machine learning box on the middle left. Then select the "Gesture Recognition (Built-in Light Sensor)" as shown below.
Enter the name for the model according to the requirements.
Click Ok and the window will automatically switch to the "Data Acquisition" interface.
2.1 Default Labels
There are 3 default labels (rock, paper, scissors) that are automatically created for you. You can use them without any changes, unless you want different names for your labels or want to add additional labels, such as an "idle" label for when no gesture is presented.
2.2 Collecting data from custom labels and Data Acquisition Program Modification
Sampling data for custom labels is similar to the steps for capturing default labels.
- Add or modify the labels.
- Upload the data acquisition program.
- Collect data.
2.2.1 Adding labels
In the label screen, click the " + " sign as shown below:
Enter the label name and click "OK". In this case, I am going to add a label named "idle" to represent the result of no gesture.
The new label "idle" is added to the labels bar after successful addition.
I have modified the default data acquisition program to include data collection for the "idle" label when the 5-way switch is pushed up!
2.3 Connect Wio Terminal and Upload Data Acquisition Program
Connect the Wio Terminal to your laptop using the USB-C cable. Click on the Upload button, and this action will upload the default data acquisition program. Typically, it takes around 10 seconds to upload. Once successfully uploaded, a pop-up window will appear to indicate "Upload successfully".
Click “Roger” to close the window and return to the data acquisition interface.
Note: You need to download “Codecraft Assistant” to be able to Connect and upload code on Codecraft online IDE.
Caution: For the web version of Codecraft, if you don't install or run the Device Assistant, you may get the message in the image below that you haven't opened the Device Assistant yet. In this case, you can check this page for further information: Download, Installation and "Device Assistant" Usage.
2.4 Data Acquisition
In the upper right hyperlink, you will find a step-by-step introduction to data acquisition.
Follow the instructions to collect data.
Pay attention to the following:
- Wio Terminal button locations (A, B, C, & 5-way switch)
- The animated GIF has been accelerated; the actual action can be slightly slower.
- Please note the red tips.
- Point the cursor over the description texts for more detailed content.
The Wio Terminal will display the following information during the data collection process.
Start and end collecting data according to the Wio Terminal screen:
Indicates that data is being collected
Indicates that data collection is completed
Now, the data acquisition step is completed.
Step 3: Training and Deployment

Click on "Training & Deployment", and you will see the model training interface as shown below.
Waveforms for rock, paper, scissors, and idle raw data are shown below for reference (they can be viewed from the "Sample data" tab).
rock
paper
scissors
idle
3.1 Select neural network and parameters
Select the suitable neural network size: small, medium, or large.
Set parameters:
- number of training cycles (positive integer)
- learning rate (number from 0 to 1)
- minimum confidence rating (number from 0 to 1)
The interface provides default parameter values.
In this case we are using medium. It will take quite a long time. Be patient!
3.2 Start training the model
Click "Start training". When you do, the window will display "Loading.."! Wait for the training to be done!
The duration of "Loading.." varies depending on the size of the selected neural network (small, medium, or large) and the number of training cycles. The larger the network and the greater the number of training cycles, the longer it will take.
You can also estimate the waiting time by observing the "Log". In the figure below, "Epoch: 68/500" indicates that 68 out of a total of 500 training rounds have been completed.
After loading, you can see "TrainModel Job Completed" in the "Log", and the "Model Training Report" tab will appear on the interface.
3.3 Observe the model performance to select the ideal model
In the “Model Training Report” window, you can observe the training result including the accuracy, loss and performance of the model.
If the training result is not satisfactory, you can go back to the first step of training the model by selecting another size of neural network or adjusting the parameters, and train it until you get a model with satisfactory results. If changing the configuration does not work, you may want to go back and collect the data again.
3.4 Deploy the ideal model
In the “Model Training Report” window, click on “Model Deployment”
Once the deployment is completed, click "Ok" to go to the "Programming" window, which is the last step before we deploy the model to the Wio Terminal.
4.1 Write the program for using the model
In the “Programming” interface, click on “Use Model” to use the deployed model.
I have created the sample program below to display the rock, paper, or scissors image on the Wio Terminal screen when the result of the prediction is rock, paper, or scissors, respectively!
Check out Seeed Studio guide on how to display custom images on the Wio Terminal:
rps.bmp
rock.bmp
paper.bmp
scissors.bmp
Note: The show "image.bmp" block in Codecraft corresponds to an 8-bit BMP image, so when you are converting your images, take note to use the 8-bit colour conversion. (Two options: enter 1 for 8-bit colour conversion; enter 2 for 16-bit colour conversion.)
Based on the Wio Terminal LCD Loading Images tutorial on the Seeed wiki, the image files clearly need to be converted to 8-bit.
4.2 Upload the program to Wio Terminal
Click the “Upload” button. You will see the “Just chill while it is uploading” window.
The first upload usually takes longer, and the time increases with the complexity of the model.
Uploading a smaller model takes about 4 minutes or even longer (depending on the performance of your laptop).
Once the upload is done, the "Upload successfully" window will be shown.
4.3 Testing
Make a "scissors" gesture to see if the Wio Terminal's screen shows the scissors image. Try other gestures and see if the Wio Terminal can recognize your gestures and show the corresponding image on the screen.
Congratulations! You have completed your TinyML model!
Well, there is still room for improvement to train a ML model with higher accuracy. Now, I would like to challenge you to make that happen, and leave a comment below if you ever make this!
Giveaways

In addition to the publication of this article, I would like to share a few pieces of good news! If you want to build a project like mine, here are some tips to get a free Wio Terminal!
- FREE Giveaway: 10 Wio Terminals and No-Code Programming TinyML Course for STEAM Educators! Seeed Studio is hosting a giveaway on their LinkedIn page! Check it out! It's ending on 16th Sept 2021 (Thursday)
- 2021 Imagine workshop by Edge Impulse, where Seeed Studio is partnering to give away 50 Wio Terminals to workshop participants! Check it out!
"Get inspired, Make Great Things Happen!"