The industry currently offers bionic hand prostheses that work mainly with electromyography and accept add-on modules, generally activated through a cellphone, to reproduce gestures of interest in everyday life. In the present project, an alternative module was developed so that voice commands can be used to execute 5 gestures through Tiny Machine Learning.
In this project, only the voice commands are tested, without needing an activation word, in order to simplify the tests.
Operation Description
For the hardware, an Arduino Nano 33 BLE Sense was used to run the keyword spotting model, together with 5 SG90 servo motors to move the prosthesis fingers. The built-in RGB LED was configured to blink purple to indicate the correct moment to say a word, and other colors were used to verify that the inference for each word happened correctly. The following images show the movements the prosthesis makes for each word.
Labels
Ok - LED color Blue
One - LED color Yellow
Two - LED color Red
Rock - LED color Cyan
Thumbs Up - LED color Green
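As an illustration of this LED behavior, the sketch below shows one way to drive the board's built-in RGB LED (the LEDR, LEDG and LEDB pins are active low on the Nano 33 BLE Sense); the label order and the purple blink in loop() are assumptions for this example, not the project's exact code.

```cpp
// Minimal illustration: map an inferred label index to the built-in RGB LED
// of the Nano 33 BLE Sense. LEDR, LEDG and LEDB are predefined pins on this
// board and are active LOW. The label order is an assumption and must match
// the order of the labels in the deployed model.

void setColor(bool r, bool g, bool b) {
  digitalWrite(LEDR, r ? LOW : HIGH);
  digitalWrite(LEDG, g ? LOW : HIGH);
  digitalWrite(LEDB, b ? LOW : HIGH);
}

void showLabelColor(int label) {
  switch (label) {
    case 0: setColor(false, false, true);  break;  // "ok"   -> blue
    case 1: setColor(true,  true,  false); break;  // "um"   -> yellow
    case 2: setColor(true,  false, false); break;  // "dois" -> red
    case 3: setColor(false, true,  true);  break;  // "rock" -> cyan
    case 4: setColor(false, true,  false); break;  // "joia" -> green
    default: setColor(false, false, false); break; // "nada" / noise -> off
  }
}

void setup() {
  pinMode(LEDR, OUTPUT);
  pinMode(LEDG, OUTPUT);
  pinMode(LEDB, OUTPUT);
}

void loop() {
  setColor(true, false, true);   // purple: the right moment to say a word
  delay(200);
  setColor(false, false, false); // LED off between capture windows
  delay(800);
}
```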
To collect this amount of audio, about 40 people recorded the Portuguese words that represent the hand gestures: “um” (meaning ‘one’), “dois” (meaning ‘two’), “ok” (same meaning as ‘ok’), “rock” (as in the ‘rock ’n’ roll’ gesture) and “joia” (the ‘thumbs up’ gesture), plus the label “nada” (‘nothing’), which represents noise or any audio that is not one of these words.
Each person authorized the recording of about 40 seconds or more of audio for each word.
All the data was split into 1-second samples, which produced the best training results.
Even though MFCC is generally better suited to voice, when the dataset was smaller the MFE processing block combined with a Transfer Learning block gave better results than MFCC. So, when the dataset grew to its current size, these blocks were kept, and so were the good results.
These are the parameters that were used to generate the features:
And the features were arranged like this:
Then, to train the model, the neural network was configured like this:
The parameters used were the ones recommended by Edge Impulse; none of the changes tried improved the model, so these were the best results achieved.
The training output was like this:
To control the prosthetic hand with the inference results, a few changes were made to the Arduino library generated on the Edge Impulse deployment page. First, the five SG90 servo motors were added and initialized at 10 degrees so that the prosthesis looks like a resting human hand.
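A rough sketch of that change is shown below; the pin numbers, array names and helper function are assumptions for illustration, not the project's actual code.

```cpp
// Illustrative only: five SG90 servos on assumed PWM-capable pins, all
// initialized at 10 degrees so the hand starts in a resting position.
#include <Servo.h>

Servo fingers[5];                          // thumb, index, middle, ring, pinky
const int servoPins[5] = {2, 3, 4, 5, 6};  // assumed pin assignment
const int REST_ANGLE = 10;                 // resting ("open hand") position
const int BENT_ANGLE = 170;                // flexed finger position

void setupServos() {
  for (int i = 0; i < 5; i++) {
    fingers[i].attach(servoPins[i]);
    fingers[i].write(REST_ANGLE);          // start as a resting human hand
  }
}
```

A function like setupServos() would be called once from the sketch's setup(), after the Edge Impulse library is initialized.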
To make the gestures, a function was created with a switch-case structure. Each case represents a gesture, made by simply moving the servo motors of the relevant fingers to 170 degrees. The function that turns on the corresponding LED color was also added to each case.
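A simplified version of such a function might look like this, reusing the fingers array, angle constants and setColor() helper from the sketches above; the label indices and which fingers each gesture flexes are assumptions for illustration.

```cpp
// Illustrative gesture function: each case flexes only the fingers involved
// in the gesture to 170 degrees and turns on the corresponding LED color.
// Assumed finger order: 0 thumb, 1 index, 2 middle, 3 ring, 4 pinky.
void makeGesture(int label) {
  for (int i = 0; i < 5; i++) fingers[i].write(REST_ANGLE);  // open hand first

  switch (label) {
    case 0:  // "ok": thumb and index curl to touch, other fingers extended
      fingers[0].write(BENT_ANGLE);
      fingers[1].write(BENT_ANGLE);
      setColor(false, false, true);                          // blue
      break;
    case 1:  // "um" (one): only the index finger stays extended
      for (int i = 0; i < 5; i++) if (i != 1) fingers[i].write(BENT_ANGLE);
      setColor(true, true, false);                           // yellow
      break;
    case 2:  // "dois" (two): index and middle extended
      for (int i = 0; i < 5; i++) if (i != 1 && i != 2) fingers[i].write(BENT_ANGLE);
      setColor(true, false, false);                          // red
      break;
    case 3:  // "rock": index and pinky extended
      for (int i = 0; i < 5; i++) if (i != 1 && i != 4) fingers[i].write(BENT_ANGLE);
      setColor(false, true, true);                           // cyan
      break;
    case 4:  // "joia" (thumbs up): only the thumb extended
      for (int i = 1; i < 5; i++) fingers[i].write(BENT_ANGLE);
      setColor(false, true, false);                          // green
      break;
    default: // "nada" or background noise: keep the resting hand, LED off
      setColor(false, false, false);
      break;
  }
}
```

In the modified inferencing loop, this function would be called with the index of the label that scored the highest classification value above a chosen confidence threshold.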
Check the Edge Impulse project to access the full TinyML model.