The SAMA7G54-EK is the evaluation kit for the SAMA7G5, Microchip's latest low-power 32-bit MPU. This tutorial explains how to run the object recognition demo made by the MPU32 Marketing Team. In this demo, the SAMA7G5 runs a machine learning model trained to recognize 1000 classes of objects, animals and other things in images.
I) Prepare the setup
1) Flash the SD card
First of all, flash your SD card with the “sdcard_blank.img” image included with this documentation. This Linux4SAM 2022.04 image has been adapted to allow running AI/ML applications.
Hint:
If you need instructions about how to flash an SD card, follow this link: Linux4SAM – Flash SD Card
The SD card is the same as the one used for the keyword recognition demo.
2) Copy the files
Once the SD card is flashed, use a Linux distribution on your host to copy at least the “dependencies” and “Image_Classification” directories to the root file system of the SD card.
Hint:
If you need a tutorial about the creation of a Linux Virtual Machine, follow this link.
You should have this folder structure:
root/
├─ Image_Classification/
│ ├─MIPI_Camera/
│ ├─USB_Camera/
├─ dependencies/
│ ├─tflite_runtime-2.8.0-cp38-cp38-manylinux2014_armv7l.whl
│ ├─argparse-1.4.0-py2.py3-none-any.whl
#ls
Image_Classification dependencies
#ls Image_Classification/
MIPI_Camera USB_Camera
#ls dependencies/
tflite_runtime-2.8.0-cp38-cp38-manylinux2014_armv7l.whl
argparse-1.4.0-py2.py3-none-any.whl
3) Set up the hardware
Now you have to set up the hardware; the image below shows how to do so.
The debug console uses a baud rate of 115200.
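If you prefer to open the debug console from a script on your host instead of a terminal emulator, the sketch below shows one way to do it with pyserial. It is only an illustration: the device node /dev/ttyUSB0 and the availability of pyserial on your host are assumptions.
# host_console.py - minimal sketch, assuming pyserial on the host and the
# board's debug UART enumerated as /dev/ttyUSB0 (adjust for your setup)
import serial

console = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)
console.write(b"\n")                                  # wake up the prompt
print(console.read(256).decode(errors="replace"))     # dump whatever the board prints
console.close()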
4) Install the dependencies
Now you have to install two dependencies: argparse and tflite_runtime. argparse is a Python library that supports passing arguments to a Python script on the command line.
tflite_runtime is a lighter version of TensorFlow, mostly used to run inference with .tflite machine learning models.
To do so you can use one of the two methods described below:
a) Using the included .whl files
- Go to the “dependencies” directory
- Type:
#pip install tflite_runtime-2.8.0-cp38-cp38-manylinux2014_armv7l.whl
#pip install argparse-1.4.0-py2.py3-none-any.whl
b) Using “pip install” (not necessary for the demo)
This section is only an example of how to install Python packages using the “pip install package-name” command.
To install tflite_runtime and argparse this way, you will use “pip”. But first of all, you need to connect the board to the internet:
- Plug an Ethernet cable into the “1Gbps Ethernet” port.
- Activate the interface:
#ifdown eth0
#ifup eth0
- Set the current date and time (for SSL certificate validation):
#date -s "YYYY-MM-DD HH:MM:SS"
- Install the packages
#pip install tflite_runtime
#pip install argparse
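Whichever method you used, you can quickly check that both packages are usable from Python on the target. The snippet below is a minimal sanity check, nothing more.
# check_deps.py - minimal import check, run on the target after installation
import argparse
from tflite_runtime.interpreter import Interpreter

print("argparse imported from:", argparse.__file__)
print("tflite_runtime Interpreter available:", Interpreter is not None)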
II) Run the demo
Now that everything is set up, you can run the demo. There are three versions: a static recognition demo, a recognition demo launched by the user button, and a third one performing recognition in a loop.
#cd Image_Classification
#cd USB_Camera
a) Object recognition demo launched by the user button
To run the demo, type:
#python3 ./img_reco_with_pressed_button.py
You should see something like:
# python3 ./img_reco_with_pressed_button.py
***********************************************************
*** Welcome to the SAMA7G54-Ek Object Recognition demo ***
*** Using an USB Cam and the user button ***
*** Made with love by the MPU32 Marketing Team ***
*** Feel free to contact us if needed ***
***********************************************************
usb 1-3: reset high-speed USB device number 2 using atmel-ehci
CONNECT TO : 10.159.227.174:5000
* Serving Flask app "video_stream_flask" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
Loading module
Module loaded... running interpreter
Press the user button to launch the capture and inference
As indicated, when you are ready, press the “USER BUTTON” to launch the capture and inference:
--------Button press detected-------------
0.749020: 682:notebook, notebook computer
time: 834.840ms
<======= Object Detected with Inference time ==========>
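The script shipped in the USB_Camera directory does the actual work; the sketch below only illustrates the classification step it performs with tflite_runtime (load a model, run one inference on a captured frame, print the best label and the inference time, as in the output above). The model file name, the labels file, the capture file name and the use of Pillow/numpy are assumptions, not the demo's real code.
# classify_once.py - minimal sketch of one classification step (not the shipped script).
# Assumptions: a quantized MobileNet-style model "model.tflite", a "labels.txt" file
# with one class name per line, and a frame already saved as "capture.jpg".
import time
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
_, height, width, _ = inp["shape"]

# Resize the captured frame to the model's input size and feed it to the interpreter
image = Image.open("capture.jpg").convert("RGB").resize((width, height))
interpreter.set_tensor(inp["index"], np.expand_dims(np.asarray(image), axis=0))

start = time.time()
interpreter.invoke()                       # run one inference
elapsed_ms = (time.time() - start) * 1000

scores = np.squeeze(interpreter.get_tensor(out["index"]))
if scores.dtype == np.uint8:               # quantized model: rescale scores to [0, 1]
    scores = scores / 255.0

labels = [line.strip() for line in open("labels.txt")]
best = int(np.argmax(scores))
print("%f: %d:%s" % (scores[best], best, labels[best]))
print("time: %.3fms" % elapsed_ms)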
This demo runs a web server to stream the USB camera video feed and to display the detected object. To connect to this web server, open a web browser (on any OS) and browse to the indicated IP address:
***
Warning:
To be able to connect to the web server, the host and the target have to be on the same network.
***
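The streaming itself is already implemented by the demo (the Flask app “video_stream_flask” shown in the log above). As a rough illustration of how MJPEG streaming over Flask usually works, here is a minimal sketch; it assumes OpenCV (cv2) is available to grab frames from the USB camera and is not the code shipped with the demo.
# stream_sketch.py - minimal MJPEG streaming sketch, assuming Flask and OpenCV
from flask import Flask, Response
import cv2

app = Flask(__name__)
camera = cv2.VideoCapture(0)               # first USB camera

def frames():
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        ok, jpeg = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        # multipart/x-mixed-replace: the browser replaces each part with the next frame
        yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + jpeg.tobytes() + b"\r\n")

@app.route("/")
def index():
    return Response(frames(), mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)     # same port as in the demo log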
b) Object recognition demo in a loop
To launch this version of the demo, type:
#python3 ./infinite_camera_object_reco.py
c) Static object recognition
To run this version of the demo, run the script and specify an input image, for example:
#python3 ./static_img_reco.py -I images/car.jpeg
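The static script reads the image path from the command line with argparse, the library installed earlier. Below is a minimal sketch of how an “-I” option like the one above is typically declared; the real static_img_reco.py may define more options, so this is only an illustration.
# argparse sketch: declaring an "-I <image>" option like the one used above
import argparse

parser = argparse.ArgumentParser(description="Static object recognition")
parser.add_argument("-I", "--image", required=True, help="path to the input image")
args = parser.parse_args()
print("Running inference on:", args.image)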