This guide walks you through setting up your MaaXBoard with the required hardware and software, making changes to an Edge Impulse project, installing additional packages, and finally running a script with an Edge Impulse model to see how it performs in terms of processing speed, power consumption, and accuracy.
The main focus of this benchmark is the Faster Objects, More Objects (FOMO) model that Edge Impulse Studio provides, rather than the default MobileNetV2 SSD, so the comparison is more meaningful. For object detection, the FOMO model is set to take a 224x224 image input like the other models in Monica Houston's Benchmarking Machine Learning on MaaXBoard project (MobileNet v2, MobileNet v1, EfficientNet v0), whereas the MobileNetV2 SSD model only accepts a 320x320 input. Compared to MobileNetV2 SSD, FOMO is 30x faster and captures the position of objects instead of their full size in the image, as stated in the Announcing FOMO (Faster Objects, More Objects) Edge Impulse blog post. The default Edge Impulse model has also been tested, and its results are compared to the other models.
In addition, this benchmarking project was done with the Yocto image instead of the Debian image Monica used in her project. The Debian image provides the main desktop features for embedded systems, while the Yocto image offers more flexibility for building custom features into an embedded image. Below is an image from Kosta Zertsekel's article titled Yocto vs Ubuntu for Embedded that sums up the difference between the two.
At the end of this project, you will see how the Edge Impulse models compare to other models, which can help you determine in the future which Edge Impulse model to use for object detection projects.
Prerequisites
To start this project you will need to have the following completed:
- An Edge Impulse object detection model (Work through the Installing and Running Edge Impulse on the MaaXBoard project)
- Image(s) that you intend to test the model on
- This was done with the MaaXBoard connected to an Ubuntu workstation (Go through the SSH section of Getting Started with Yocto on MaaXBoard)
When connecting power to the MaaXBoard, connect the USB Type-C adapter to the power measuring device, then connect the device to the MaaXBoard's USB Type-C port.
On the side of the power measuring device, you will see two buttons labeled K1 and K2 and two switches labeled PWR and PD. If the PWR switch is on the right, the device will turn on. If the switch is on the left, slide the switch to the right to turn it on. Navigate to the screen that shows the voltage, current, power, and other readings by repeatedly pressing either K1 or K2.
As you run each benchmark with the commands you'll see below, you'll see an increase in the current (labeled with the unit amps (A)), showing you how much power the object detection models consume. After running each benchmark, hold the K2 button to reset the measured values (mAh and mWh) to 0 before running the next model.
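If you want a single power figure per model, you can turn the meter's energy reading into an average power draw. The snippet below is only an illustrative calculation with made-up numbers, not output from the meter:
# Illustrative only: convert an energy reading (mWh) from the power meter into
# an average power draw over the time the benchmark was running.
energy_mwh = 12.5        # value shown on the meter after the run (hypothetical)
duration_s = 90          # how long the benchmark ran, in seconds (hypothetical)

duration_h = duration_s / 3600
average_power_mw = energy_mwh / duration_h
print(f"Average power draw: {average_power_mw:.1f} mW ({average_power_mw / 1000:.2f} W)")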
I have provided the benchmarking scripts in the Code section. For the benchmark to run successfully, you will need to download the Edge Impulse model as a .eim file, obtain any image that you want to test the model on, and use the ei_benchmark_template.py script.
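For reference, the sketch below shows the general pattern the Edge Impulse Linux Python SDK uses for image models: load the .eim file, turn the test image into features, classify, and print the timing and detections. It is a simplified outline of what a script like ei_benchmark_template.py can do, not its exact contents, and it assumes OpenCV (cv2) is installed for loading the image.
import sys
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

# Usage (illustrative): python3 benchmark_sketch.py model.eim image.jpg
model_path, image_path = sys.argv[1], sys.argv[2]

with ImageImpulseRunner(model_path) as runner:
    runner.init()                                  # loads the .eim model
    img = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)

    # Crop/resize the image into the features the model expects
    features, cropped = runner.get_features_from_image(img)

    # Run inference; the result includes per-stage timing in milliseconds
    res = runner.classify(features)
    timing_ms = res['timing']['dsp'] + res['timing']['classification']
    boxes = res['result']['bounding_boxes']
    print(f"Found {len(boxes)} bounding boxes ({timing_ms} ms.)")
    for bb in boxes:
        print(f"Processing label: {bb['label']}, Score: {bb['value']:.2f}")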
Object Detection Project Files
Here is the link to my Edge Impulse project that you can reuse or modify: https://studio.edgeimpulse.com/public/287135/live
Change the Image Size and Model for Training
For a more comparable benchmark, the input image size on the Edge Impulse model should be changed to 224x224 instead of the initial 320x320 size in the Installing and Running Edge Impulse on the MaaXBoard project.
- Keep the 320x320 size if you would like to stick to the default MobileNetV2 SSD model
Log into your Object Detection project on Edge Impulse and click on the Create impulse tab under Impulse Design.
Change the image size to 224x224.
Click on the Object detection tab and click on the Choose a different model button.
Select the FOMO (Faster Objects, More Objects) MobileNetv2 0.1 model.
- Compared to the other model options, this one should train without issues and give good results.
Click the Start training button.
Training Results
Result of training for FOMO.
Result of training for default MobileNetV2 SSD model.
After training the models, you'll see that the default model got the better score, but the FOMO model displayed a significantly faster inference time.
Deploying the Model
Go to the Deployment tab on the far left.
Click in the search bar to look at deployment options.
Select the Linux (AARCH64) option, so that the downloaded model is compatible with the MaaXBoard.
Click on the Build button and the .eim file should be downloaded.
Before moving on, make sure both the MaaXBoard and your computer are connected to the same network.
- An Ethernet cable must be used if the MaaXBoard is unable to connect to the network wirelessly.
Connect to your MaaXBoard through ssh.
Ex: ssh root@'IP Address of MaaXBoard'
Password: avnet
Use the scp command in your computer's terminal to transfer files.
Ex:
scp benchmark_v3-linux-aarch64-v9.eim root@'IP Address of MaaXBoard':/home/root
Use the same command format to transfer the script from your computer to the MaaXBoard.
Ex:
scp ei_benchmark_template.py root@'IP Address of MaaXBoard':/home/root
Use scp to transfer the images from your computer to the MaaXBoard.
Ex:
scp -r /home/frank/MaaXBoard_folders root@'IP Address of MaaXBoard':/home/root
Now your board has the files it needs. The next steps involve making a virtual environment and installing necessary libraries and packages.
Set Up Virtual Environment
sudo apt install python3-venv
python3 -m venv myenv2
source myenv2/bin/activate
In this new environment, copy and paste each command in consecutive order to avoid errors involving libraries and modules.
sudo apt remove python3
sudo apt install python3
pip install dataclasses
pip install edge_impulse_linux
pip install six
There! You've installed most of the needed libraries and packages. The next step is installing PortAudio.
Install PortAudio
Download PortAudio Source Code: Go to the PortAudio downloads page and download the latest stable release source code (a .tgz file).
Use scp to transfer the file to the MaaXBoard root directory.
scp 'portaudio.tgz' root@'IP Address of MaaXBoard':/home/root
Extract and Compile: After downloading, extract the contents and navigate to the extracted folder. Then, follow these commands:
tar -zxvf portaudio.tgz # Replace 'portaudio.tgz' with the downloaded file name
cd portaudio
./configure && make
sudo make install
Install PyAudio with the line below:
pip install pyaudio
You may see the following warning, but don't worry:
Successfully installed pyaudio-0.2.14 WARNING: There was an error checking the latest version of pip.
Next, run the following command to locate the PortAudio library file:
sudo find / -name libportaudio.so.2
This will help ensure we're using the correct path to the libportaudio.so.2 library file. In my case, it was found at:
/home/root/portaudio/lib/.libs/libportaudio.so.2
Make sure that you're in your virtual environment:
source myenv2/bin/activate
Copy and paste the next line into the terminal. If needed, replace the path below with the one obtained after using the sudo find command:
export LD_LIBRARY_PATH=/usr/local/lib:/home/root/portaudio/lib/.libs:$LD_LIBRARY_PATH
- Every time you want to run the script, you'll have to copy and paste that export line whenever you're in the virtual environment to avoid an ImportError.
Run the script with the command format below:
python3 ei_benchmark_template.py /home/root/benchmark_v3-linux-aarch64-v9.eim /home/root/MaaXBoard_folders/test_pics/fruit.jpeg
After the code successfully runs, you should have a new, annotated image saved in the same directory as the original image.
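If you are writing your own version of the script, the annotation step can be as simple as drawing each returned box onto the image with Pillow and saving it under a new name. The snippet below is an illustrative sketch, not the exact code from the template; the dictionary keys match what the Edge Impulse Linux SDK returns, and since the coordinates are in the model's input resolution it is easiest to draw on the cropped image returned by get_features_from_image.
from PIL import Image, ImageDraw

def save_annotated(img, boxes, out_path):
    # img: a PIL Image (e.g. built from the cropped frame the runner used)
    # boxes: res['result']['bounding_boxes'] from the classify() call
    draw = ImageDraw.Draw(img)
    for bb in boxes:
        x, y, w, h = bb['x'], bb['y'], bb['width'], bb['height']
        draw.rectangle([x, y, x + w, y + h], outline='red', width=2)
        draw.text((x, max(0, y - 12)), f"{bb['label']} {bb['value']:.2f}", fill='red')
    img.save(out_path)   # e.g. 'annotated_image.jpeg'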
Use scp to see the annotated image on your computer:
scp /home/root/MaaXBoard_folders/test_pics/annotated_image.jpeg 'username'@'Computer or Workstation IP Address':/homedirectory
After running the benchmarking script and moving the annotated image to your computer, you will see full bounding boxes around the detected objects for the default MobileNetV2 SSD model, and labels with small markers on the objects for the FOMO model.
Benchmark for Edge Impulse
MobileNetV2 SSD Model Build (320x320) on MaaXBoard:
Found 4 bounding boxes (686 ms.)
Processing label: banana, Score: 1.00
Processing label: banana, Score: 0.85
Processing label: apple, Score: 0.85
Processing label: banana, Score: 0.62
FOMO Model Build (224x224) on MaaXBoard:
Found 2 bounding boxes (43 ms.)
Processing label: apple, Score: 0.95
Processing label: banana, Score: 0.79
With the image size reduced and the model not utilizing full-size bounding boxes, the FOMO model runs significantly faster and is still accurate.
Comparison of Results
Compared to the other models in Monica Houston's Benchmarking Machine Learning on MaaXBoard project (results shown below), the Edge Impulse models are slower than a few of the other models in processing speed, but their accuracy is quite high.
- Note: the results below are from models run on the Yocto image instead of the Debian image. The accuracy scores are the same, but the inference speed is slower compared to when the models were run in Monica's project.
Open another terminal window, connect to your MaaXBoard through ssh, and follow the instructions in the Downloading the model and benchmark script section to get the models (the commands are shown below):
wget https://hacksterio.s3.amazonaws.com/uploads/attachments/1143280/RASPBERRYPI.zip
unzip RASPBERRYPI.zip
cd RASPBERRYPI
The script for the other models should work if Pillow v9.4 is installed:
pip uninstall Pillow
pip install Pillow==9.4
Run the script for the models and get the results.
- If it doesn't work, refer to the benchmark_tf_lite.py and the benchmark_classification.py scripts that I provide.
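For context on where the "Elapsed time" values below come from: the benchmark scripts repeatedly invoke the TFLite interpreter and time the loop. The sketch below is a minimal, illustrative version of that pattern (assuming the tflite_runtime package is available), not the exact contents of benchmark_tf_lite.py:
import time
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter

def benchmark(model_path, image_path, runs=1000):
    interpreter = Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]

    # Resize the test image to the model's expected NHWC input shape
    _, height, width, _ = inp['shape']
    img = Image.open(image_path).convert('RGB').resize((width, height))
    data = np.expand_dims(np.asarray(img, dtype=inp['dtype']), axis=0)

    start = time.monotonic()
    for _ in range(runs):
        interpreter.set_tensor(inp['index'], data)
        interpreter.invoke()
    elapsed_ms = (time.monotonic() - start) * 1000 / runs   # average per inference
    print(f"Elapsed time is {elapsed_ms} ms")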
Benchmark for MobileNet v.2 on MaaXBoard:
python3 benchmark_tf_lite.py --model tflite_for_rpi_V2/mobilenet_v2.tflite --label tflite_for_rpi_V2/coco_labels.txt --input fruit.jpg --output out.jpg --runs 1000
Elapsed time is 582.9663990699992 ms
apple score = 0.953125
banana score = 0.83984375
Benchmark for MobileNet v.1 on MaaXBoard:
python3 benchmark_tf_lite.py --model tflite_for_rpi_V1/mobilenet_v1.tflite --label tflite_for_rpi_V1/coco_labels.txt --input fruit.jpg --output out.jpg --runs 1000
Elapsed time is 450.69164340600037 ms
apple score = 0.80078125
apple score = 0.515625
Benchmark for EfficientNet v0 on MaaXBoard:
python3 benchmark_classification.py --model efficientnet-lite/efficientnet_lite0_int8_1.tflite --label efficientnet-lite/ImageNetLabels.txt --input fruit.jpg --runs 1000
Elapsed time is 370.6618236890008 ms
0.125490: banana
0.043137: hook
0.027451: zucchini
These are the results of the other models and they're compared to the Edge Impulse models in the table and graphs below:
Based on the results, the Edge Impulse models consume more power than the other models. Depending on the input image size (320x320 vs. 224x224) and whether full bounding boxes are used, inference for object detection can be faster or slower than the other models. In terms of accuracy, the Edge Impulse models did well!
Now you can see how the models you make on Edge Impulse compare to other models used on the MaaXBoard!