Get the Yocto image by downloading the zip file from this link and extracting it with a tool such as 7-Zip.
Use software like Balena Etcher to flash the image file onto an SD card.
Follow these steps from the Getting started with Yocto on MaaXBoard project, in the following order:
- Set up headlessly
- Connect to Ethernet
- After connecting the Ethernet cable, open a terminal on your OS and log into the MaaXBoard as the root user (work through the SSH step in that project)
Ex. ssh root@YOUR_BOARD_IP
Password: avnet
- After logging in as the root user, follow the "expand the file system" step
Installing Edge Impulse CLI and Linux SDK on the MaaXBoard
Copy and paste the following lines into the terminal for the installation:
rm -f /etc/apt/sources.list.d/*
vi /etc/apt/sources.list.d/maaxboard.list
After running the vi command above, press i to enter insert mode and add the line below:
deb https://docs.avnet.com/amer/smart_channel/repo/maaxboard/ /
After adding the line above, press the Esc key to leave insert mode, then type :wq and press Enter to save the changes.
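If you prefer not to use vi, the same sources entry can be written in one shot. This is a non-interactive sketch of the edit described above, assuming a root shell on the board:

```shell
# Write the MaaXBoard apt repo line directly (equivalent to the vi edit above).
# Must be run as root on the board, since it writes under /etc.
echo 'deb https://docs.avnet.com/amer/smart_channel/repo/maaxboard/ /' \
  > /etc/apt/sources.list.d/maaxboard.list
```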
After the changes have been made and saved, continue copying and pasting the lines below
wget -q -O - https://docs.avnet.com/amer/smart_channel/repo/maaxboard/KEY.gpg | apt-key add -
rm -rf /var/lib/apt/lists/*
apt update
After entering the last command, you should see:
Get:1 https://docs.avnet.com/amer/smart_channel/repo/maaxboard InRelease [1940 B]
Get:2 https://docs.avnet.com/amer/smart_channel/repo/maaxboard Packages [1999 kB]
Fetched 2001 kB in 12s (166 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
All packages are up to date.
Continue copying and pasting the following lines
apt install -y gnupg curl opencv sox v4l-utils
apt install -y libglib-2.0-0 libglib-2.0-dev libexpat-dev libjpeg-dev libjpeg62 libturbojpeg0
apt install -y gstreamer1.0 gstreamer1.0-dev gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps
(Note: this command may fail with an error on recent Yocto images.)
apt install -y packagegroup-core-sdk packagegroup-core-standalone-sdk-target packagegroup-go-sdk-target
apt install -y nodejs nodejs-npm
apt remove -y gobject-introspection
mkdir src && cd src
wget https://github.com/libvips/libvips/releases/download/v8.10.5/vips-8.10.5.tar.gz
tar -xzf vips-8.10.5.tar.gz && cd vips-8.10.5
./configure --prefix=/usr/
make -j4 && make install
npm install edge-impulse-cli -g --unsafe-perm=true
npm install edge-impulse-linux -g --unsafe-perm=true
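As an optional sanity check (not part of the original steps), you can confirm that the two npm-installed tools ended up on the PATH; `command -v` is standard POSIX shell:

```shell
# Report whether each Edge Impulse tool is on the PATH after the npm installs.
STATUS=$(for tool in edge-impulse-linux edge-impulse-linux-runner; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: ok"
  else
    echo "$tool: not found"
  fi
done)
echo "$STATUS"
```

If either tool reports "not found", re-run the corresponding npm install command above and check its output for errors.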
Edge Impulse Linux SDK can be run on the MaaXBoard now!
- Before starting Edge Impulse on the MaaXBoard, make sure you have an existing Edge Impulse account and a project created
- Make sure the webcam is connected to the upper USB port (the one stacked on top of the other USB port), since this is an object detection project
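To confirm the board actually sees the webcam before starting Edge Impulse, a quick check with v4l2-ctl (from the v4l-utils package installed earlier) can help. This is an optional sketch, with a fallback that simply lists the `/dev/video*` device nodes:

```shell
# List V4L2 capture devices; fall back to the raw device nodes, then to a
# plain message if no camera is visible at all.
CAMS=$(v4l2-ctl --list-devices 2>/dev/null \
  || ls /dev/video* 2>/dev/null \
  || echo "no V4L2 capture device found")
echo "$CAMS"
```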
When you enter the following line in the terminal,
edge-impulse-linux --disable-microphone
You should see something like this:
After a successful login in the terminal, you'll be given a link to the project you selected, and the webcam plugged into the MaaXBoard can be used to capture images.
Data Collection on Edge Impulse
Log onto the Edge Impulse Studio
Either select an existing object detection project or click on Create new project
When you get to the Dashboard page, go to the Project info section and make sure the Labeling method is set to Bounding boxes (object detection)
On the same Dashboard page, under Getting Started, select Collect new data
From this point, a page pops up that shows how to collect data in three ways:
1. Scanning the QR code to get data from your smartphone
2. Connecting to your computer
3. Connecting to your device or development board
In this object detection project example, I collected image data for only two classes (apple, banana). Every image contained an apple, a banana, or both (approximately 70 labels for each class)
- For more details on collecting data through a device or development board, refer to the Collect new data with the Edge Impulse Linux section of Monica Houston's Hard Hat Detection project and/or Edge Impulse's Build Your Own Object Detection System with Machine Learning video
When all the image data has been added, make sure roughly 80% of the images are assigned to training and roughly 20% to testing.
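As a rough illustration of the split (the 140-image total here is just a hypothetical dataset size implied by the ~70 labels per class collected above, not a requirement):

```shell
# Hypothetical dataset: ~70 apple + ~70 banana images = 140 total.
TOTAL=140
TRAIN=$(( TOTAL * 80 / 100 ))   # images assigned to training
TEST=$(( TOTAL - TRAIN ))       # images assigned to testing
echo "train=$TRAIN test=$TEST"
```

Edge Impulse Studio displays the current train/test balance alongside your samples, so these counts don't need to be computed by hand.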
If there are images that need bounding boxes and labels, click on the Labeling queue
In the queue, the YOLOv5 classifier can be helpful for adding bounding boxes and labels if there are many images that need labeling
After the data collection and annotating is finished, the next step is to create an impulse.
In the Create Impulse section under Impulse Design, for the image data, set the width and height to 320 px. Click on Add a processing block and select Image. Click on Add a learning block and select Object Detection (Images). Click on Save Impulse
In the Image section under Impulse Design, in the parameters tab, set the color depth to RGB. Click on Save Parameters. Click on the Generate features tab on the top left
Click on the Generate features button in the training set box
It shouldn't take long for the feature explorer graph to be generated. In this graph, images of the same class should cluster together (apple in blue on the bottom right, banana in orange on the top left). Although a few bananas sit among the apples in the example above, the model won't be heavily impacted by this. However, if different classes are clustered together, the primary solution is to add more training data.
In the Object detection section under Impulse Design, in the Neural Network settings tab, select a model that is recommended for object detection (in this project it's MobileNetV2 SSD). Click on Start training and the precision score will be computed and displayed on the right
In this example the precision score is 97% which is pretty good. When the score is high enough (at least 80%), test the accuracy of your model in the Model testing tab on the right.
In the Test data section in the Model testing tab, click on Classify all; the accuracy results will be computed and displayed on the right.
If you're confident in your accuracy score then it's time to deploy your model on the MaaXBoard!
Model Deployment
With your Edge Impulse model ready and the MaaXBoard set up with a USB camera, you can start deploying.
Close all open terminal windows and open a new one
Log into the MaaXBoard as the root user and copy and paste the following command:
edge-impulse-linux-runner
What you see next should be something similar to the image below
You'll have a live classification in both the terminal and browser. Both will show the inference time (in ms) along with the accuracy score and bounding boxes for apples and bananas.
Congrats on running an Edge Impulse object detection model on the MaaXBoard!