The SAMA7G54-EK is the evaluation kit for Microchip's latest low-power 32-bit MPU, the SAMA7G54. This tutorial explains how to run the keyword recognition demo made by the MPU32 Marketing Team. In this demo, the SAMA7G5 runs a machine learning model trained to recognize eight keywords: Left, Right, Up, Down, Yes, No, Go, Stop.
I) Prepare the setup
1) Flash the SD card
First of all, flash your SD card with the “sdcard_blank.img” image included with this documentation. This Linux4SAM 2022.04 image has been adapted to run AI/ML applications. You can find it in the “Code” section of this page.
Hint: If you need instructions on how to flash an SD card, follow this link: Linux4Sam – Flash SD Card
2) Copy the files
Once the SD card is flashed, use a Linux distribution on your host to copy at least the “dependencies” and “Keyword_recognition” directories to the root file system of the SD card.
Hint: If you need a tutorial about creating a Linux virtual machine, follow this link.
You should end up with the following folder structure:
root/
├─ Keyword_recognition/
│ ├─ audio_files/
│ ├─ audio_reco_inference.py
│ ├─ audio_reco_inference_button.py
│ ├─ recording.wav
│ ├─ simple_audio_model_numpy.tflite
├─ dependencies/
│ ├─ tflite_runtime-2.8.0-cp38-cp38-manylinux2014_armv7l.whl
│ ├─ argparse-1.4.0-py2.py3-none-any.whl
# ls
Keyword_recognition dependencies
# ls Keyword_recognition/
audio_files
audio_reco_inference_button.py
audio_reco_inference.py
recording.wav
simple_audio_model_numpy.tflite
# ls dependencies/
tflite_runtime-2.8.0-cp38-cp38-manylinux2014_armv7l.whl
argparse-1.4.0-py2.py3-none-any.whl
Hint: If you need help installing the driver for the TTL-to-USB connector (UART), you can follow this tutorial: Linux4Sam - SAMA7G54-Ek - TTL-to-USB connector
3) Set up the hardware
Now you have to set up the hardware; the image below shows how to do so.
The debug console runs at a baud rate of 115200.
This demo uses the on-board microphones. To make them usable by the Linux image, you will have to update the boot configuration.
Once your setup is ready, follow these steps:
- Reset the system by pressing the “nRst” button (the middle one).
- While the system is booting, keep pressing any key; you should enter the U-Boot prompt:
U-Boot 2022.01-linux4sam-2022.04 (Jun 09 2022 - 10:02:15 +0200)
CPU: SAMA7G5
Crystal frequency: 24 MHz
CPU clock : 800 MHz
Master clock : 200 MHz
Model: Microchip SAMA7G5-EK
DRAM: 512 MiB
MMC: mmc@e1204000: 0, mmc@e1208000: 1
Loading Environment from FAT... OK
In: serial@200
Out: serial@200
Err: serial@200
Net: eth0: ethernet@e2800000, eth1: ethernet@e2804000
Hit any key to stop autoboot: 0
=>
- Type printenv and press “Enter”.
- You can see the bootcmd variable; this is the one you will update.
- Type edit bootcmd and press “Enter”.
- At the end of the line, type #pdmc0 and press “Enter”.
- Type saveenv and press “Enter”.
- Type boot to reboot the system.
Warning:
Perform the previous steps carefully and follow the instructions precisely; not doing so can damage the image and make it unusable.
II) Configure and launch the demo
2) Install some dependencies
Now you have to install two dependencies: argparse and tflite_runtime. argparse is a Python library for parsing the arguments passed to a Python script on the command line.
tflite_runtime is a lightweight version of TensorFlow, mostly used for running inference with .tflite machine learning models.
To do so you can use one of the two methods described below:
a) Using the included .whl files
- Go to the “dependencies” directory.
- Type:
pip install tflite_runtime-2.8.0-cp38-cp38-manylinux2014_armv7l.whl
pip install argparse-1.4.0-py2.py3-none-any.whl
b) Using “pip install” (not necessary for the demo)
This section is only an example of how to install Python packages using the “pip install package-name” command.
To install tflite_runtime and argparse you will have to use “pip”. But first of all, you need to connect the board to the internet:
- Plug an Ethernet cable into the “1Gbps Ethernet” port.
- Activate the interface:
ifup eth0
- Set the current date and time (required for SSL certificate validation):
date YYYY-MM-DD
date HH:MM:SS
- Install the packages:
# pip install tflite_runtime
# pip install argparse
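As an aside, the argparse package installed here is what lets the demo scripts accept their command-line options. Below is a minimal sketch of a parser with an --input option; only that option name is taken from the demo invocation shown later, the rest (function name, help text) is illustrative:

```python
import argparse

def build_parser():
    """Build a command-line parser similar to the demo scripts'.

    Only the --input option mirrors the demo invocation
    (--input <path-to-wav>); the description text is illustrative.
    """
    parser = argparse.ArgumentParser(
        description="Keyword recognition demo (sketch)")
    parser.add_argument(
        "--input",
        help="path to a .wav file to analyse; omit to record live audio")
    return parser

# Parsing an explicit argument list, as the demo would parse sys.argv:
args = build_parser().parse_args(["--input", "audio_files/left.wav"])
print(args.input)  # -> audio_files/left.wav
```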
3) Run the demos
Now that everything is set up, you can run the demos. There are two main categories: static audio recognition and dynamic audio recognition.
a) Run the static demo
To run the static keyword recognition demo, that is, recognition on a .wav file, run the script and pass the path to the .wav file as an argument.
For example:
python3 ./audio_reco_inference.py --input audio_files/left.wav
You should get something like:
# python3 ./audio_reco_inference.py --input audio_files/left.wav
***********************************************************
*** Welcome to the SAMA7G54-Ek Audio Recognition demo ***
*** Made with love by the MPU32 Marketing Team ***
*** Feel free to contact us if needed ***
***********************************************************
Reading the input wavefile : audio_files/left.wav
Reading of the file is successful
Running inference...
Inference done in 252.29 ms
>>> Key Word Detected --> LEFT
See all the outputs bellow :
Score for label down is 0.00 %
Score for label go is 0.00 %
Score for label left is 99.94 %
Score for label no is 0.00 %
Score for label right is 0.01 %
Score for label stop is 0.00 %
Score for label up is 0.05 %
Score for label yes is 0.01 %
done.
#
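For reference, the “Reading the input wavefile” step can be sketched with Python's standard wave module. The 16-bit mono assumption and the scaling to [-1.0, 1.0] floats are guesses at the demo's preprocessing, not taken from the script itself:

```python
import struct
import wave

def read_wav_as_floats(path):
    """Read a 16-bit mono WAV file and return its samples scaled to [-1.0, 1.0].

    This mirrors the kind of preprocessing an audio script performs before
    feeding samples to a .tflite model; the demo's exact preprocessing is
    an assumption here.
    """
    with wave.open(path, "rb") as wav:
        assert wav.getsampwidth() == 2, "expected 16-bit samples"
        assert wav.getnchannels() == 1, "expected mono audio"
        raw = wav.readframes(wav.getnframes())
    # '<h' = little-endian signed 16-bit integer, one per sample
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
    return [s / 32768.0 for s in samples]
```

A file such as audio_files/left.wav would then be handed to the model as a flat float array.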
b) Run the infinite dynamic demo
To run a demo that performs the recognition in a loop, just run this command:
python3 ./audio_reco_inference.py
And wait for the “3, 2, 1, Go!” before speaking into the microphone (PDMC0):
# python3 ./audio_reco_inference.py
***********************************************************
*** Welcome to the SAMA7G54-Ek Audio Recognition demo ***
*** Made with love by the MPU32 Marketing Team ***
*** Feel free to contact us if needed ***
***********************************************************
Starting Audio processing
Will begin audio recording soon
3...
2...
1...
Go !
Recording done
Running inference...
Inference done in 230.32 ms
>>> Key Word Detected --> STOP
See all the outputs bellow :
Score for label down is 2.02 %
Score for label go is 1.19 %
Score for label left is 0.00 %
Score for label no is 0.00 %
Score for label right is 0.05 %
Score for label stop is 96.64 %
Score for label up is 0.10 %
Score for label yes is 0.00 %
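The per-label percentages in the demo output behave like softmax probabilities over the model's eight raw outputs. A minimal sketch of turning raw scores into such a report (the logit values below are made up; the label order is the alphabetical one seen in the output above):

```python
import math

# Label order as printed by the demo output (alphabetical).
LABELS = ["down", "go", "left", "no", "right", "stop", "up", "yes"]

def softmax_percent(logits):
    """Convert raw model outputs (logits) into percentages summing to 100."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [100.0 * e / total for e in exps]

def report(logits):
    """Print a score report in the same style as the demo scripts."""
    scores = softmax_percent(logits)
    best = LABELS[scores.index(max(scores))]
    print(">>> Key Word Detected --> %s" % best.upper())
    for label, score in zip(LABELS, scores):
        print("Score for label %s is %.2f %%" % (label, score))

# Hypothetical logits where "stop" dominates:
report([0.1, 0.2, -3.0, -3.0, 0.5, 8.0, 1.0, -2.0])
```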
c) Run the dynamic demo using the user button
To run the demo using the user button to start the recording, just run this command:
python3 ./audio_reco_inference_button.py
Press the button to launch the audio recording process and wait for the “Ready ? Go !” before speaking.
# python3 ./audio_reco_inference_button.py
***********************************************************
*** Welcome to the SAMA7G54-Ek Audio Recognition demo ***
*** Made with love by the MPU32 Marketing Team ***
*** Feel free to contact us if needed ***
***********************************************************
Starting Audio processing
Waiting for button press ....
User button press detected !
Ready ?
Go !
Recording done
Running inference...
Inference done in 223.08 ms
>>> Key Word Detected --> STOP
See all the outputs bellow :
Score for label down is 0.02 %
Score for label go is 0.05 %
Score for label left is 0.00 %
Score for label no is 0.01 %
Score for label right is 0.00 %
Score for label stop is 98.15 %
Score for label up is 1.77 %
Score for label yes is 0.00 %
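The tutorial does not show how audio_reco_inference_button.py detects the press; one common approach on Linux is to read raw events from the input subsystem. The sketch below decodes such events. The struct layout matches 32-bit ARM, but the device node and the assumption that the user button is exposed through /dev/input are not confirmed by this tutorial:

```python
import struct

# struct input_event on 32-bit ARM: two 32-bit timestamp fields,
# then a 16-bit type, a 16-bit code, and a 32-bit value.
EVENT_FORMAT = "<llHHi"
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)  # 16 bytes
EV_KEY = 0x01  # key/button events, from linux/input-event-codes.h

def is_button_press(event_bytes):
    """Return True if the raw input event is a key/button press (value == 1)."""
    _sec, _usec, ev_type, _code, value = struct.unpack(EVENT_FORMAT, event_bytes)
    return ev_type == EV_KEY and value == 1

# A real script would read EVENT_SIZE bytes at a time from the button's
# event device, e.g. (the device node here is an assumption):
#
#   with open("/dev/input/event0", "rb") as dev:
#       while not is_button_press(dev.read(EVENT_SIZE)):
#           pass
```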