The MaaXBoard Mini is Avnet's single board computer based on NXP's i.MX 8M Mini processor, which features four Cortex-A53 cores clocked at 1.8GHz. It has a Raspberry Pi HAT interface, as well as four USB ports, BLE 4.0, WiFi, a MIPI camera interface, and a MIPI display interface.
In February this year, I benchmarked the MaaXBoard Mini against 8 comparable single board computers on a couple of machine learning models. It outperformed both the Raspberry Pi 4 and the NVIDIA Jetson Nano (TF) for speed on MobileNet SSD v1 and v2, and it beat all of the boards except the Movidius NCS and the MaaXBoard on power consumption.
I think it's the perfect little single board computer for machine learning, so I wanted to pair it with Edge Impulse, the ideal software platform for training machine learning models at the edge.
Why hard hat detection?
Hard hat detection is a useful computer vision application for construction site safety. Did someone walk into the build site without proper safety gear? Use a machine learning model to alert your construction supervisor about an unsafe situation.
There are a couple of setup steps you'll need to complete to get the Edge Impulse CLI running on your MaaXBoard or MaaXBoard Mini.
Set up your MaaXBoard
The first thing you'll need to do is go through my Headless Setup guide to get your MaaXBoard Mini set up with the Debian OS, create a non-root user, and connect via SSH.
Create a new project on Edge Impulse
The next step is to create an account on Edge Impulse if you don't have one already.
Go to the Object Detection project in Edge Impulse and select "clone this project."
What is the Edge Impulse Linux CLI or SDK?
The Edge Impulse Linux CLI or SDK (the documentation uses the two terms interchangeably) is necessary to download and run the .eim model files that are packaged for Linux. It also allows you to connect the board to the web interface and collect sensor data. There are four different language SDKs; in this case, we'll be using the Node.js SDK.
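For a sense of what the SDK actually does with an .eim file, here's a rough sketch using the Python flavor of the SDK (the pip package edge_impulse_linux); the API names follow Edge Impulse's published image classification example, so treat the details as assumptions rather than gospel:
# Rough illustrative sketch, assuming pip3 install edge_impulse_linux opencv-python
# and a downloaded modelfile.eim. Not part of this tutorial's Node.js flow.
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

with ImageImpulseRunner("modelfile.eim") as runner:
    model_info = runner.init()  # loads the .eim model and returns project metadata
    print("Loaded", model_info["project"]["name"])

    img = cv2.imread("hardhats-0001.jpg")       # any test image
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # the SDK expects RGB

    # Scale/crop the frame into the model's input features, then classify
    features, cropped = runner.get_features_from_image(img)
    res = runner.classify(features)

    # Object detection models return bounding boxes with confidence values
    for bb in res["result"]["bounding_boxes"]:
        print(f"{bb['label']} ({bb['value']:.2f}) at x={bb['x']} y={bb['y']}")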
Install the Edge Impulse CLI and Linux SDK for Node.js
Power up your MaaXBoard Mini and log in as the non-root user that you created during the Headless Setup.
Install Node.js on your MaaXBoard:
set -e
sudo apt install curl
curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -
sudo apt install -y nodejs
node -v
The command above should print the Node.js version on your path; in this case, it should be 14.x.
You'll need a number of dependencies, including libvips 8, which is a requirement for installing the image processing library sharp. Building libvips from source takes 20+ minutes, so get a cup of tea or something:
sudo apt install -y gcc g++ make build-essential pkg-config glib2.0-dev libexpat1-dev sox v4l-utils libjpeg62-turbo-dev
wget https://github.com/libvips/libvips/releases/download/v8.10.5/vips-8.10.5.tar.gz
tar xf vips-8.10.5.tar.gz
cd vips-8.10.5
./configure
make -j
sudo make install
sudo ldconfig
You can now delete the downloaded archive vips-8.10.5.tar.gz.
Finally, install the edge-impulse CLI and Linux SDK:
sudo npm install edge-impulse-cli -g --unsafe-perm=true
sudo npm install edge-impulse-linux -g --unsafe-perm=true
If you see the messages + edge-impulse-linux@[VERSION] and + edge-impulse-cli@[VERSION], SUCCESS! The Edge Impulse CLI is now installed on your MaaXBoard.
There are three main ways to collect data in Edge Impulse:
- Use the Edge Impulse Linux SDK to connect the board and collect data via the web interface
- Collect data on the board, and upload it via the GUI uploader
- Collect data on the board, or use an existing dataset, and upload it using the Edge Impulse Uploader from the Edge Impulse CLI on your host PC.
The project you copied earlier in Edge Impulse already has some data associated with it. Before you start collecting data, you'll need to delete this from the project. Go to the Data Acquisition tab and select the check mark to select multiple items. Select all and choose "delete selected." Do the same thing on the "Test data" tab.
I'm using a USB webcam - the HD Pro Webcam C920. It's a bit overkill for taking images that are only 320x320px, but I like this one because it has a mount so I can attach it to a tripod.
Connect your USB webcam to one of the USB ports on the MaaXBoard Mini. Next, make sure you have webcam permissions. Edit the /etc/rc.local file to give yourself camera permissions each time the board boots:
sudo nano /etc/rc.local
Include this line just before "exit 0" in rc.local:
sudo chmod -R a+rwx /dev/video1
On MaaXBoard, video0 is the MIPI-CSI camera and video1 and up are USB cameras (you can double-check with v4l2-ctl --list-devices, which came with the v4l-utils package installed earlier). Once you've edited rc.local, run:
sync
sudo reboot
After logging back in, you should see that you have camera permissions when you run ls -l /dev/video*.
Run the Edge Impulse Linux SDK on your MaaXBoard Mini:
edge-impulse-linux
When prompted, log in to your Edge Impulse account and name your device (I named mine "mini"). When you go to your project page on Edge Impulse, you'll now see your MaaXBoard Mini is connected:
If you want to connect to a different project, you can always clear your credentials by typing:
edge-impulse-linux --clean
You can also escape from the CLI by typing ctrl-c.
With your board connected, it's possible to collect data from your MaaXBoard Mini directly using Edge Impulse's web interface. On the Data Acquisition page, select your device, enter a label (for object detection, this will simply be the filename and no label will be created), select your sensor (choose the Camera) and start sampling!
You'll see a preview of your camera feed.
Sampling data with Edge Impulse is very useful, but it can only collect one image at a time. Since I need to collect a lot of data over a period of time, it's easiest to write a script to collect data on the MaaXBoard, zip it, and send it to my PC for uploading.
In this case, I'll use ffmpeg to take images continuously:
sudo apt install ffmpeg
Make a folder named hardhats and cd into it. Run a command like the following to start collecting 320x320 pixel images (the maximum size for an image in an Object Detection project in Edge Impulse).
I happen to have an actual construction site in my basement, so I let ffmpeg run as I painted my basement, taking my hardhat off halfway through.
mkdir hardhats
cd hardhats
nohup ffmpeg -s 320x320 -i /dev/video1 -r 2 hardhats-%04d.jpg
Zip your folder named hardhats:
sudo apt install zip
zip -r hardhats.zip hardhats
On my host PC, I copy the zipped folder over:
scp ebv@192.168.1.14:hardhats.zip .
Then I simply unzip and upload the entire folder with the upload button on the Data Acquisition tab of Edge Impulse's interface:
You can also easily upload data with the edge-impulse-uploader CLI from your host computer. This has the added benefit of letting you upload data that has already been annotated, as long as the annotations are in the correct format.
First, install Python3 and Node.js v14 or higher. Then install the CLI. Here, I'm installing it on my Mac (full instructions for Linux, Windows, and Mac are here):
npm install -g edge-impulse-cli --force
To upload data to Edge Impulse, run edge-impulse-uploader followed by your data directory, e.g.:
edge-impulse-uploader /Users/monica/Documents/EdgeImpulseTalk/restsubset2898/output/*.jpg
This will walk you through configuring the CLI to upload to your desired project. It will let you know if the files uploaded successfully or not.
If you want to reconfigure the CLI with a different project, run:
edge-impulse-uploader --clean
Annotate data
Edge Impulse lets you annotate data directly in the web interface. Click on the three dots by the side of a datapoint and select Edit labels. You can also edit all of the unlabeled images at once by selecting the Labeling queue tab.
Drag your mouse on the image to label it. Enter a label name and click Set Label. Finally, select Save labels.
If you choose to use a pre-annotated dataset or annotate your data in another tool (to learn more about annotating data, see my project "Annotating data for Machine Learning"), you can upload your annotations by saving them in a single file on your host PC called bounding_boxes.labels. You can see more about the specific JSON annotation format used by Edge Impulse under the heading "Bounding Boxes" in Edge Impulse's docs.
I've written a python script to convert from the popular annotation format Pascal VOC to the Edge Impulse JSON format. If your labels are in another format, you can convert them to Pascal VOC using Roboflow's useful tools, and then convert them to Edge Impulse format by using my script.
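If you'd rather roll your own, a minimal sketch of the conversion looks something like this (this is not my exact script; it assumes one Pascal VOC .xml file per image in the hardhats folder, and writes the bounding_boxes.labels structure described in Edge Impulse's docs):
# Minimal sketch: convert Pascal VOC XML annotations into Edge Impulse's
# bounding_boxes.labels JSON. Assumes one .xml per image in ./hardhats.
import glob
import json
import xml.etree.ElementTree as ET

boxes = {}
for xml_file in glob.glob("hardhats/*.xml"):
    root = ET.parse(xml_file).getroot()
    filename = root.findtext("filename")
    entries = []
    for obj in root.findall("object"):
        bb = obj.find("bndbox")
        xmin, ymin = int(float(bb.findtext("xmin"))), int(float(bb.findtext("ymin")))
        xmax, ymax = int(float(bb.findtext("xmax"))), int(float(bb.findtext("ymax")))
        # Edge Impulse wants the top-left corner plus width/height
        entries.append({"label": obj.findtext("name"), "x": xmin, "y": ymin,
                        "width": xmax - xmin, "height": ymax - ymin})
    boxes[filename] = entries

with open("hardhats/bounding_boxes.labels", "w") as f:
    json.dump({"version": 1, "type": "bounding-box-labels",
               "boundingBoxes": boxes}, f)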
To upload your annotations, place bounding_boxes.labels in the same folder as your images and run the Edge Impulse Uploader.
Now that your images are labeled, it's time to set up for training. To do this, go to the Impulse design tab. On the first sub-tab, Create Impulse, there's not much to be done for an object detection model, since the parameters here are constrained.
Select the Image tab, go to Generate features, and click the "Generate features" button.
Ideally, different features should cluster into distinct groups. If you see that they're all clustered together, inspect the features that are close to each other to see if there's a problem with your data.
You can hover over and click each data point to see the image it relates to. By doing this, I was able to see that the images in the middle tended to have annotations that were very small, so I raised the minimum size for annotations, uploaded my data again, and got a better dataset this way.
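If you keep your annotations in bounding_boxes.labels, a quick sketch like the following can enforce that minimum size before re-uploading (the 20-pixel threshold here is just an example value):
# Sketch: drop bounding boxes below a minimum size from bounding_boxes.labels
# before re-uploading. The 20-pixel threshold is an arbitrary example.
import json

MIN_SIZE = 20  # pixels

with open("bounding_boxes.labels") as f:
    labels = json.load(f)

for filename, boxes in labels["boundingBoxes"].items():
    labels["boundingBoxes"][filename] = [
        b for b in boxes if b["width"] >= MIN_SIZE and b["height"] >= MIN_SIZE
    ]

with open("bounding_boxes.labels", "w") as f:
    json.dump(labels, f)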
Train model
Time to train your model. If you're familiar with Keras, it's possible to use Keras mode to select model settings. From the GUI, you can easily select a different model, the number of training cycles, the learning rate, and the score threshold. Finally, select Start Training to train your model.
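Purely as an illustration of what those GUI knobs mean in Keras terms (this is a generic stand-in with dummy data, not Edge Impulse's actual training script):
# Generic Keras stand-in, NOT Edge Impulse's training script: the GUI's
# "learning rate" and "number of training cycles" map to the optimizer's
# learning rate and the number of epochs. Model and data here are dummies.
import numpy as np
import tensorflow as tf

LEARNING_RATE = 0.001  # "Learning rate" in the GUI
EPOCHS = 25            # "Number of training cycles" in the GUI

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(320, 320, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
x = np.random.rand(8, 320, 320, 3).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, 2, 8), 2)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=EPOCHS)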
Once your model is trained, you'll see a Precision score and estimated on-device performance.
Edge Impulse lets you test your project on your board. As long as your board is connected, you can run live classification from your MaaXBoard Mini. Simply select your device like you did during data acquisition and select Start Sampling.
By default, images taken here are added to the test set. You can move an item to the training set by selecting the three dots next to "Summary" and selecting Move to training set. Once it's been moved to the training set, it will also show up on the Data Acquisition tab under the Labeling queue, where you can add labels.
You can then test your model using these test images. To upload data specifically for the test set, you must specify that in your upload with --category testing:
edge-impulse-uploader --category testing C:\Users\044560\Documents\EdgeImpulseTalk\testset\output\*.jpg
Your training set should be 80% of your images, while your test set should be 20%. If you have too many or too few images in your test set, you'll see a warning button on the Data Acquisition tab. You can fix your split by clicking it and selecting Perform train / test split.
You'll be able to see all your test data under the Model Testing tab in the sidebar. To test your model on your test data, simply select Classify all. Once the images have been classified, you can get a better view by clicking on the three bars next to the image and selecting "Show classification."
Here, you can see exactly what your model classified incorrectly, and use this information to improve your model. For instance, my model only classified the largest helmet correctly in this image, so I can infer that it performs poorly when the subject is too far away:
If your model is getting decent accuracy, it's good to save a version of it. I like to note the accuracy and the dataset subsets I used, as well as the number of training cycles and the Learning Rate in the description of the version:
To restore a version, click on it and select Restore. In the popup, give your project a name and select Restore version. Edge Impulse will restore the version as an entirely new project, with the full original dataset.
To actually run your model live, first you must build it under the Deployment tab. Select Linux boards and then select Build:
Run edge-impulse-linux again and connect to your project if you're not still connected. Download the model file and run it with edge-impulse-linux-runner:
edge-impulse-linux
edge-impulse-linux-runner --download modelfile.eim
edge-impulse-linux-runner
You can see what your model is seeing by going to the IP address listed in the terminal output.
Send an SMS whenever your model detects an unsafe situation
Create an account on Twilio and purchase a "from" phone number on the Buy a Number page to allow sending SMS. Setting up a free account gives you a $15 credit, so you shouldn't have to spend any money.
I've forked a simple webapp from Edge Impulse that runs your model and sends a text alert via Twilio each time a person without a hard hat is detected.
Install git on your MaaXBoard Mini and clone the repository:
sudo apt install git
git clone https://github.com/zebular13/example-linux-with-twilio.git
Move your .eim file into the repo folder (note that git clone creates a folder without the .git suffix):
mv modelfile.eim example-linux-with-twilio/modelfile.eim
Enter the repo and install the dependencies:
cd example-linux-with-twilio
npm install
Start the application via:
npm run build
node build/webserver-twilio.js modelfile.eim
You'll see alerts in the terminal and in your web browser at http://localhost:4911, and you'll also start getting alerts on your phone!
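Under the hood, the alert step boils down to "detection result in, SMS out." Here's a hypothetical standalone sketch of that logic using Twilio's Python helper library; the credentials, phone numbers, and label name are all placeholders:
# Hypothetical sketch of the alert logic: text the supervisor when a
# detection result contains an unprotected head. Credentials, numbers,
# and the "head" label are placeholders, not values from this project.
import time
from twilio.rest import Client

ACCOUNT_SID = "ACXXXXXXXXXXXXXXXX"  # from the Twilio console
AUTH_TOKEN = "your_auth_token"
FROM_NUMBER = "+15550000000"        # the purchased "from" number
TO_NUMBER = "+15551111111"          # the supervisor's phone

client = Client(ACCOUNT_SID, AUTH_TOKEN)
last_alert = 0.0
COOLDOWN = 60  # seconds between texts, so one event doesn't flood the phone

def maybe_alert(bounding_boxes):
    """Send one SMS if any detected box is labeled as an unprotected head."""
    global last_alert
    unsafe = any(bb["label"] == "head" for bb in bounding_boxes)
    if unsafe and time.time() - last_alert > COOLDOWN:
        client.messages.create(
            body="Alert: person without a hard hat detected on site!",
            from_=FROM_NUMBER,
            to=TO_NUMBER,
        )
        last_alert = time.time()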