Autism, or autism spectrum disorder (ASD), refers to a broad range of conditions characterized by challenges with social skills, repetitive behaviours, and speech and nonverbal communication. According to the Centers for Disease Control and Prevention, autism affects an estimated 1 in 59 children in the United States today.
ASD usually shows itself at an early age of 2 or 3, and intervening at that age is very difficult. I have worked closely with a project where we used a drone to help children with autism learn interactively: the drone moves through space and lands on the answer card the child has chosen.
Since it is very hard to hold the concentrated attention of a child with ASD, we need to make the learning interactive. How do we do that? With the help of voice recognition, face recognition, and so on. But wait: all of these normally need the internet, and internet connectivity is still a challenge in many developing nations. Thanks to TensorFlow Lite and the SparkFun Artemis ATP board, we can now do offline voice recognition on the board itself after training the model.
Another advantage: a child's pronunciation may change over time, but because the voice model is trained on a huge dataset, it will still be able to recognise the commands.
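For context, here is a minimal sketch of how TensorFlow Lite Micro loads a trained model on the board, following the structure of the micro_speech example. The op list, arena size, and function name here are assumptions, and the exact API names vary between TFLM releases:

```cpp
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "model.h"  // the generated g_model array (see Step 3 below)

// Scratch memory for the interpreter; 10 KB is an assumption, tune to your model.
constexpr int kTensorArenaSize = 10 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];

void SetupInterpreter() {
  // Map the flatbuffer into a model object.
  const tflite::Model* model = tflite::GetModel(g_model);

  // Register only the ops the micro_speech model needs.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddDepthwiseConv2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();
  resolver.AddReshape();

  // Build the interpreter and allocate tensors from the arena.
  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kTensorArenaSize);
  interpreter.AllocateTensors();
  // Audio features are then fed into interpreter.input(0), and
  // interpreter.Invoke() is called once per inference window.
}
```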
Our goal is to make learning interactive for specially-abled kids.
Here's how we're going to do it.
First, I'd like to thank the ATPTensorflowMicroSpeech project for providing the basic framework.
I originally wanted to build a project using the MPU6050 IMU sensor for predictive maintenance of industrial machines, but setting up the toolchain for generating the trained model was so difficult that I gave up on the last day and resumed my old project, YesICan.
Step 1: Gather the parts

I have used the SparkFun RedBoard Artemis as the brain of the project, with a buzzer, a servo, and a display for interactive output.
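Before training anything, it's worth a quick check that the outputs respond. Here is a minimal wiring-check sketch, assuming the Servo library from the SparkFun Apollo3 Arduino core, an active buzzer, and hypothetical pin numbers (match them to your own wiring):

```cpp
#include <Servo.h>

const int kBuzzerPin = 4;  // hypothetical buzzer pin
const int kServoPin  = 8;  // hypothetical servo pin
Servo servo;

void setup() {
  pinMode(kBuzzerPin, OUTPUT);
  servo.attach(kServoPin);
}

void loop() {
  // Short beep (active buzzer assumed), then sweep the servo end to end.
  digitalWrite(kBuzzerPin, HIGH);
  delay(100);
  digitalWrite(kBuzzerPin, LOW);
  servo.write(0);
  delay(500);
  servo.write(180);
  delay(500);
}
```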
Step 2: Train the model

Follow the procedure to generate a model that recognises the following words (the sketch after the list shows how these labels appear in the on-device code):
yes
no
up
down
left
right
on
off
stop
go
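Once trained, the on-device code needs a matching label table to interpret the model's output scores. As a sketch, assuming the layout of micro_features/micro_model_settings.cpp from the micro_speech example (file names vary between TFLM releases), the table for these ten words would look like this, with kCategoryCount set to 12 in the corresponding header:

```cpp
#include "micro_features/micro_model_settings.h"

// The first two slots are always "silence" and "unknown"; the trained words
// follow in the same order used during training.
const char* kCategoryLabels[kCategoryCount] = {
    "silence", "unknown",
    "yes", "no", "up",  "down", "left",
    "right",   "on", "off", "stop", "go",
};
```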
I generated all of the commands provided in the micro_speech example. I'm using Colaboratory and the Jupyter notebook from the example to train the model.
Colaboratory (Colab) is Google's free Jupyter-style notebook environment.
My training took approximately 2 hours and I was able to generate the final output file.
Step 3: Deploy the model to the board

Copy the generated hex-array output into a text editor and use it to create your .cpp file, which goes onto the board.
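The resulting file is just a C array wrapping the TFLite flatbuffer. As an illustration (the bytes and the header name here are placeholders; paste your own hex dump), it looks like this:

```cpp
#include "model.h"  // hypothetical header declaring g_model and g_model_len

// alignas keeps the flatbuffer on a word boundary, which TFLite Micro expects.
alignas(8) const unsigned char g_model[] = {
    0x20, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33,  // "TFL3" flatbuffer magic
    /* ...the rest of the hex dump from the notebook goes here... */
};
const int g_model_len = sizeof(g_model);
```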
I have added many other functionalities to make the tool interactive.
Step 4: Test the tool with a servo

Here a servo motor is used as the output to make the learning interactive. The same trained model can also be used to build a voice-command-driven robot, or a tool with an arrow that points to an answer among multiple options.
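As a hedged sketch: the micro_speech example routes every recognition through a RespondToCommand() hook, and replacing its body along these lines would drive the servo. The pin number and angles are assumptions, and the hook's exact signature varies slightly between TFLM releases:

```cpp
#include <cstring>
#include <Servo.h>
#include "command_responder.h"

namespace {
Servo answer_servo;
bool servo_attached = false;
const int kServoPin = 8;  // hypothetical pin; match your wiring
}  // namespace

void RespondToCommand(tflite::ErrorReporter* error_reporter,
                      int32_t current_time, const char* found_command,
                      uint8_t score, bool is_new_command) {
  if (!servo_attached) {
    answer_servo.attach(kServoPin);
    servo_attached = true;
  }
  if (!is_new_command) return;

  // Point the arrow at the answer card matching the spoken word
  // (angles are illustrative).
  if (strcmp(found_command, "yes") == 0)        answer_servo.write(45);
  else if (strcmp(found_command, "no") == 0)    answer_servo.write(135);
  else if (strcmp(found_command, "left") == 0)  answer_servo.write(0);
  else if (strcmp(found_command, "right") == 0) answer_servo.write(180);

  error_reporter->Report("Heard %s (score %d)", found_command, score);
}
```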
Working video: https://github.com/vishwasnavada/YesICan/ (please check the README.md).
Step 5: Test the tool with an automated keypress

Test the tool with an automated keypress on a Windows machine, driven by voice input. This is done using the HID capability of the Arduino Pro Micro.
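Here's a minimal sketch of the Pro Micro side, assuming the Artemis sends one character per recognised word over a UART link (that single-letter protocol is my assumption, not the project's exact code); the ATmega32U4 then types the matching key over USB HID:

```cpp
#include <Keyboard.h>

void setup() {
  Serial1.begin(9600);  // hardware UART wired to the Artemis TX pin
  Keyboard.begin();     // enumerate as a USB keyboard on the Windows machine
}

void loop() {
  if (Serial1.available()) {
    char cmd = Serial1.read();
    switch (cmd) {
      case 'y': Keyboard.write('y'); break;            // "yes"
      case 'n': Keyboard.write('n'); break;            // "no"
      case 'u': Keyboard.write(KEY_UP_ARROW); break;   // "up"
      case 'd': Keyboard.write(KEY_DOWN_ARROW); break; // "down"
      default:  break;  // ignore anything unexpected
    }
  }
}
```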
Working video: https://github.com/vishwasnavada/YesICan/ (please check the README.md).