First of all, I would like to give credit to the Arduino Team for providing the base framework. If you have not had a chance to go through it, I highly suggest you read it thoroughly before proceeding further:
Link: https://blog.arduino.cc/2019/10/15/get-started-with-machine-learning-on-arduino/
We will start by generating a machine learning model using Google Colab, and then we shall load it into the Arduino IDE to build and upload the sketch. Note that Colab provides the Linux environment used to generate the necessary header file (model.h) to be included with the code.
Generating the Data for Machine Learning Model
First, we need data to train our model. It will be in the form of a CSV file with the sample points along the columns. You need N datasets for N outputs.
You will find a program to create the dataset at the first link: the IMU_Capture.ino file. Open it in your Arduino IDE and upload it to the device. I personally went ahead with three movements, which means three separate datasets.
1. Punching move: Move the device smoothly straight ahead to get one set of points; they will be visible in the serial monitor. Be careful not to jerk the device and record a junk data point, as this will ruin your model!
Keep it consistent for 10 to 12 data points, and keep repeating the move until you get 10 different sets, or more if you like.
2. Defend move: Use your imagination, but keep it consistent for 10 to 12 data points. Keep repeating the move.
3. Circular Motion
Note that we are training three separate gestures, which will later be used in our game. If you have trouble, you may use the datasets given with this project!
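To get a feel for what a captured dataset looks like, here is a small Python sketch that loads a capture file into per-gesture recordings. It assumes the layout the IMU_Capture.ino sketch produces (columns aX,aY,aZ,gX,gY,gZ, with a blank line between recordings); the embedded sample string just stands in for a real punch.csv.

```python
import io
import pandas as pd

# A tiny stand-in for a captured file; a real punch.csv from IMU_Capture.ino
# has the same layout: a header row, one row per sample, and a blank line
# between individual captures.
sample_csv = """aX,aY,aZ,gX,gY,gZ
0.1,0.0,1.0,1.2,-0.5,0.3
0.2,0.1,0.9,1.1,-0.4,0.2

0.1,0.1,1.0,1.0,-0.6,0.4
0.2,0.0,0.9,1.3,-0.5,0.3
"""

def load_recordings(text):
    """Split a capture file into one DataFrame per recorded gesture."""
    recordings = []
    for chunk in text.strip().split("\n\n"):
        # Re-attach the header for every chunk after the first
        if not chunk.startswith("aX"):
            chunk = "aX,aY,aZ,gX,gY,gZ\n" + chunk
        recordings.append(pd.read_csv(io.StringIO(chunk)))
    return recordings

recs = load_recordings(sample_csv)
print(len(recs), recs[0].shape)  # 2 recordings, each 2 rows x 6 columns
```

This is only for inspecting your data on a PC; the Colab notebook does its own loading.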
Training the Model in Google Colab
Please follow the steps given in the link to start off making your model.
We will go through it step by step:
First we execute these two commands:
!apt-get -qq install xxd
!pip install pandas numpy matplotlib
This sets up the Linux environment and installs the necessary packages.
Then we have to train the neural network, but before that we need to reformat the data so that TensorFlow can understand it.
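The reshaping step can be sketched in plain numpy. This is illustrative, not the notebook's exact code; the 119 samples per gesture and the normalization ranges follow the Arduino tutorial, and random numbers stand in for real IMU data.

```python
import numpy as np

SAMPLES_PER_GESTURE = 119  # the capture sketch records 119 samples per gesture
NUM_GESTURES = 3

# Pretend we have 10 recordings of one gesture: (10, 119, 6) raw IMU values
raw = np.random.uniform(-4, 4, size=(10, SAMPLES_PER_GESTURE, 6))

# Normalize each channel into [0, 1]; the notebook uses the sensor full-scale
# ranges (+/-4 g for the accelerometer, +/-2000 dps for the gyroscope).
normalized = (raw + 4.0) / 8.0

# Flatten each recording into one long input vector for the dense network
inputs = normalized.reshape(len(raw), SAMPLES_PER_GESTURE * 6)

# One-hot output: this batch is all gesture 0 (punch)
outputs = np.tile(np.eye(NUM_GESTURES)[0], (len(raw), 1))
print(inputs.shape, outputs.shape)  # (10, 714) (10, 3)
```

The key point is that every recording becomes one fixed-length input vector paired with a one-hot output vector.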
Now load the three datasets into Colab and run the code cells in order. The first and second rearrange the data, and the third is where the magic happens: the model is trained. (Press Ctrl+Enter to execute a cell.)
If you take a look at the training code, you will find the following commands:
model.add(tf.keras.layers.Dense(15, activation='relu'))
model.add(tf.keras.layers.Dense(NUM_GESTURES, activation='softmax'))
You cannot touch the bottom line, as the number of cells in the outermost (output) layer must match the number of outputs. However, you may add as many cells, and as many layers, as you like inside the network: just copy and paste the hidden-layer line, and it will be added sequentially as the next hidden layer. At worst, your trial and error will cost you convergence in some cases, but it is still fun to try!
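To see why the output layer is fixed at NUM_GESTURES cells, note that softmax turns the final layer's raw scores into one probability per gesture, so there must be exactly one cell per gesture. A quick numpy sketch (not the training code itself):

```python
import numpy as np

NUM_GESTURES = 3

def softmax(scores):
    """Convert raw output-layer scores into probabilities that sum to 1."""
    e = np.exp(scores - np.max(scores))  # subtract max for numerical stability
    return e / e.sum()

# One raw score per output cell; hence the layer must have NUM_GESTURES cells
raw_scores = np.array([2.0, 0.5, -1.0])
probs = softmax(raw_scores)
print(probs.round(3))  # three probabilities summing to 1
```

The gesture with the highest probability is the network's guess, which is what the sketch later thresholds against 0.7.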
For this example, we will follow the standard model. But don't fret to try out something new!!!
Finally, we train the model and get an error below 10^(-10), which is exceptional.
Then follow the other two blocks and download your model.h.
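The last Colab block uses xxd to turn the trained .tflite file into a C array for model.h. A pure-Python equivalent (illustrative only, not the notebook's actual code) makes the transformation clear:

```python
def to_c_header(data, name="model"):
    """Mimic `xxd -i`: emit a C byte array plus its length constant."""
    hex_bytes = ", ".join(f"0x{b:02x}" for b in data)
    return (
        f"const unsigned char {name}[] = {{{hex_bytes}}};\n"
        f"const unsigned int {name}_len = {len(data)};\n"
    )

# In Colab you would pass the real bytes:
#   open('gesture_model.tflite', 'rb').read()
header = to_c_header(b"\x00\x1c\x7f", name="model")
print(header)
```

The resulting array is what the Arduino sketch compiles in and hands to the TensorFlow Lite Micro interpreter.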
Creating the Sketch
Now download the model.h and import it into the Arduino IDE.
Here is the change: now upload mymodelclassifier with model2.h, or with your own model.h (please remember to change the header definition; the files are attached with this project).
If you look at one part of the code, we iterate through the output values. If a value is greater than 0.7, it counts as a hit and the corresponding ID (1: Punch, 2: Defend, 3: Summon) is returned. This will be used by pyserial on the Raspberry Pi to make the required game action!
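The sketch itself is C++, but the same thresholding logic written as a Python sketch (a sketch of the idea, not the actual Arduino code) reads:

```python
def classify(output, threshold=0.7):
    """Return the 1-based gesture ID (1: Punch, 2: Defend, 3: Summon),
    or 0 when no gesture clears the confidence threshold."""
    best = max(range(len(output)), key=lambda i: output[i])
    return best + 1 if output[best] > threshold else 0

print(classify([0.9, 0.05, 0.05]))  # 1 -> Punch
print(classify([0.1, 0.8, 0.1]))    # 2 -> Defend
print(classify([0.4, 0.3, 0.3]))    # 0 -> nothing confident enough
```

The 0.7 cutoff trades sensitivity for fewer false positives; feel free to tune it.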
Now if we open the serial monitor, we can see the output for the corresponding gesture. Note that a zero is returned if none of the trained moves is detected.
Do not expect world-class accuracy with 10 to 12 data points; real datasets often run into the hundreds, with multiple users performing the same movement. But it should serve our purpose.
Game on Raspberry Pi
Next, we will switch over to the Raspberry Pi. You may use any game; in fact, I highly suggest you build your own using pygame to fully utilize our trained model.
However, for this we will use the following game for our tutorial:
https://www.pygame.org/project-Street+pyghter-1860-3264.html
All credit goes to the owner of the game.
The first and foremost thing is to connect the Pi to our Nano BLE over USB. Next, check the terminal for the device's port name (the Linux equivalent of a COM port number).
For me it is ttyACM0. Please change this part of the code as per your port.
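If you are unsure which device file is the Nano, you can list the usual Linux serial-device names from Python (this assumes standard Linux naming; device names vary by board and adapter):

```python
import glob

def candidate_ports():
    """List device files that typically correspond to USB serial ports on Linux."""
    # The Nano 33 BLE usually shows up as /dev/ttyACM0;
    # generic USB-serial adapters appear as /dev/ttyUSB*
    return sorted(glob.glob("/dev/ttyACM*") + glob.glob("/dev/ttyUSB*"))

print(candidate_ports())  # e.g. ['/dev/ttyACM0'] with the Nano plugged in
```

Unplug and replug the board and run it again; the entry that appears is your port.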
We will then open the serial port and read data with a tiny timeout. We will get an empty string 99% of the time; this is deliberate, to ensure the game doesn't hang, since blocking the update routine on a serial read would defeat the purpose.
You may go through the code; we open the serial port with the settings shown below:
ser.port = "/dev/ttyACM0"
ser.baudrate = 9600
ser.timeout = 0.01 # Make it lower if you want a smoother framerate.
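Putting the read loop together, here is a hedged sketch of how the game can poll the port each frame. The parsing helper is pure Python; the actual serial wiring (shown in comments) requires pyserial and the Nano attached, so it is not run here.

```python
def parse_gesture(line):
    """Turn one serial line (bytes) into a gesture ID; empty or junk input -> 0."""
    text = line.decode("ascii", errors="ignore").strip()
    return int(text) if text.isdigit() else 0

def read_gesture(ser):
    """With timeout=0.01 this returns almost immediately, usually with b''."""
    return parse_gesture(ser.readline())

# Wiring it up on the Pi (requires pyserial and the Nano attached):
#   import serial
#   ser = serial.Serial("/dev/ttyACM0", baudrate=9600, timeout=0.01)
#   gesture = read_gesture(ser)  # call once per game-loop iteration

print(parse_gesture(b"2\r\n"), parse_gesture(b""))  # 2 0
```

Because the timeout is so short, the call costs at most ~10 ms per frame, which keeps the game responsive.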
I would like to make it clear that there are far more advanced techniques to achieve the same result, but the one shown here is the simplest of all the approaches.
Then we will add code for the buttons to activate based on our serial port values instead of the keyboard. Please check the code in Round.py.
We are tricking the game into thinking a key was pressed when, in reality, a serial value has been received.
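The idea can be sketched as a mapping from gesture IDs to the keys the game already handles. The key names below are hypothetical placeholders, not the actual bindings in Round.py; the commented lines show how a real pygame keypress could be injected.

```python
# Hypothetical mapping from gesture IDs to the keys the game already handles;
# the real key constants depend on Round.py's controls.
GESTURE_TO_KEY = {1: "punch_key", 2: "defend_key", 3: "summon_key"}

def fake_keypress(gesture_id):
    """Return the key the game should believe was pressed, or None."""
    return GESTURE_TO_KEY.get(gesture_id)

# Inside the game loop you could post a real pygame event instead:
#   import pygame
#   key = pygame.K_a  # whichever key Round.py binds to this action
#   pygame.event.post(pygame.event.Event(pygame.KEYDOWN, key=key))

print(fake_keypress(1), fake_keypress(0))  # punch_key None
```

Either approach works: checking the serial value where the game checks keys, or posting synthetic KEYDOWN events so the rest of the game is untouched.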
Download the game from the given link and replace Round.py with my code (attached with this project).
Please do try to follow along and compile the game with your model. How well does the Nano track your movements?
A few Common Errors:
1. Orientation in all movements: You must hold the board in the same orientation for every movement, or you may get junk responses. For example, keep the top of the board always facing up (or always down), as long as it is consistent.
2. Please include the proper header files and set your serial COM port. Otherwise, your code won't compile, or worse, will throw runtime errors.
3. No convergence: There is no easy answer for this, but the original model converges pretty well!