eIQ is a free collection of libraries and development tools that enable machine learning inference on NXP's i.MX hardware (don't ask me what eIQ stands for because I don't know).
- You can learn how to build NXP's i.MX eIQ layer into Yocto in the tutorial here.
- There is more information about eIQ in NXP's i.MX Machine Learning User's Guide (Rev. 4, May 2020), or on their overview here.
eIQ supports most major machine learning frameworks, including TensorFlow, ONNX, and Glow, as well as OpenCV. It can make machine learning noticeably easier: if you're running an image on MaaXBoard that already includes eIQ, you won't have to worry about the many steps of building and installing TensorFlow and TensorFlow Lite like you would otherwise.
eIQ should also deliver faster inference than a generic TensorFlow build, because it includes compilers and libraries optimized specifically for MaaXBoard's i.MX 8M processor.
Getting started
The first step is to build your own version of Yocto with eIQ support. You can check out my tutorial Building your own Yocto for MaaXBoard for instructions on how to do this.
After your Yocto project builds, the image will be in .wic format, compressed as a .bz2 file. Unzip it. This can be done using bunzip2 on Linux:
bunzip2 ZEUSlite-image-ml-maaxboard-ddr4-2g-sdcard-20210415084942.rootfs.wic.bz2
or by using software like Archive Utility on Mac. Next, burn it to an SD card. It's best to use an SD card with at least 32GB, because we're going to be downloading large machine learning models. Finally, go through the steps in Getting Started with Yocto on MaaXBoard to get set up.
SSH
In order to SSH into your board, you'll need to modify /etc/ssh/sshd_config:
nano /etc/ssh/sshd_config
and change the line:
#PermitRootLogin prohibit-password
to
PermitRootLogin yes
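For the change to take effect, restart the SSH daemon (or simply reboot the board). On a systemd-based Yocto image the OpenSSH service is usually named sshd, though the exact service name can vary:
systemctl restart sshd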
OPENCV
OpenCV is an open-source computer vision library. If you're familiar with OpenCV already, you'll find it easy to work with in the Yocto image. The face detection app uses the Haar cascade classifiers supplied by OpenCV (find them in /usr/share/opencv4/haarcascades/) to overlay bounding boxes on the video wherever it detects facial features. In order to view the video with bounding boxes in real time, you'll need a display.
No display? You can still test face detection by commenting out line 43 (cv2.imshow('Video', canvas)) and writing the frames to a video file instead. This will save the camera output as a video named "face_detect.mp4," which you can then copy to another PC to view.
Attached in the code section of this project is the Python file for testing face detection in OpenCV. Switch to bash and create a Python file named "face_detect.py":
bash
nano face_detect.py
Copy and paste the contents of face_detect.py into the file.
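If you don't have the attached file handy, here's a minimal sketch of what such a script looks like. It is not the attached face_detect.py (so line numbers won't match exactly), and it assumes the frontal-face cascade at /usr/share/opencv4/haarcascades/haarcascade_frontalface_default.xml and a webcam at index 0:

import cv2

# Load the frontal-face Haar cascade shipped with the Yocto image (path assumed).
face_cascade = cv2.CascadeClassifier(
    '/usr/share/opencv4/haarcascades/haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)  # first USB webcam

# For headless use, uncomment the writer lines and comment out cv2.imshow below:
# fourcc = cv2.VideoWriter_fourcc(*'mp4v')
# writer = cv2.VideoWriter('face_detect.mp4', fourcc, 20.0, (640, 480))

while True:
    ret, canvas = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(canvas, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(canvas, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('Video', canvas)   # requires a display
    # writer.write(canvas)        # headless alternative
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
# writer.release()
cv2.destroyAllWindows()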
Connect a USB webcam to one of your MaaXBoard's USB ports. Run the app:
python3 face_detect.py
If you're using a display, you should see face detection start up.
You can find even more OpenCV examples in this directory:
cd /usr/share/OpenCV/samples/bin/
TENSORFLOW
Different versions of Yocto have different versions of TensorFlow. The older version of eIQ supports TensorFlow 1.12, and the most recent version, released in April 2021, supports TensorFlow 2.4.
Change directories into the folder that contains the TensorFlow examples, then download and unzip the Inception model. Note that on the latest version, the TensorFlow Lite examples folder will be at /usr/bin/tensorflow-lite-2.1.0/examples:
cd /usr/bin/tensorflow-1.12.0/examples
wget storage.googleapis.com/download.tensorflow.org/models/inception5h.zip
unzip inception5h.zip
Run the benchmark:
./benchmark --graph=tensorflow_inception_graph.pb --max_num_runs=10
TENSORFLOW LITE
TensorFlow Lite is TensorFlow's lightweight solution for mobile and embedded devices. It enables low-latency, on-device inference with a small binary size, fast performance, and support for hardware acceleration. To run TFLite inference with eIQ on the board, there are currently two options:
- TFLite runtime: C++ (older version) or Python (latest)
- Arm NN runtime
In most cases you will get the best performance with quantization-aware training and running inference with Arm NN. We'll be going over how to build and run TensorFlow using the Arm NN SDK in Part 2 of this tutorial.
Running TensorFlow Lite inference using Python
The older version of eIQ only supports the C++ API for TensorFlow and TensorFlow Lite. The latest version, released April 2021, does include a Python TensorFlow Lite interpreter.
If you have the older version, you can still train your models using Python on your host PC and run them using a C++ app. It's recommended to use the same version of TensorFlow that eIQ uses (TensorFlow 1.12 in this case) to avoid compatibility issues when running inference.
If you have the latest version of eIQ, you can run the default image classification app, label_image, using the Python interpreter on the included photo of Grace Hopper (you can download your own images to test as well):
python3 label_image.py
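For reference, here's a rough sketch of what the Python interpreter is doing under the hood. This isn't the eIQ label_image.py itself; it assumes NumPy and Pillow are available, and that mobilenet_v1_1.0_224_quant.tflite, labels.txt, and grace_hopper.bmp are in the current directory:

import numpy as np
from PIL import Image

# The eIQ image may ship the standalone tflite_runtime package; fall back to full TensorFlow.
try:
    from tflite_runtime.interpreter import Interpreter
except ImportError:
    from tensorflow.lite.python.interpreter import Interpreter

interpreter = Interpreter(model_path="mobilenet_v1_1.0_224_quant.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Resize the input image to the shape the model expects (224x224 for MobileNet v1).
height, width = input_details[0]['shape'][1:3]
image = Image.open("grace_hopper.bmp").resize((width, height))
interpreter.set_tensor(input_details[0]['index'],
                       np.expand_dims(np.array(image, dtype=np.uint8), axis=0))

interpreter.invoke()

# Print the top 5 predicted labels with their scores.
scores = interpreter.get_tensor(output_details[0]['index'])[0]
with open("labels.txt") as f:
    labels = [line.strip() for line in f]
for i in scores.argsort()[-5:][::-1]:
    print(labels[i], scores[i])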
Run the benchmark:
Try out the TensorFlow Lite benchmark located in /usr/bin/[VERSION]/examples, e.g. /usr/bin/tensorflow-lite-2.1.0/examples:
cd /usr/bin/tensorflow-lite-1.12.0/examples
wget http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_224_quant.tgz
tar -xzvf mobilenet_v1_1.0_224_quant.tgz
./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite
The output will report timing statistics for the model, including the average inference time.
You can also run the default image classification app, label_image:
./label_image -m mobilenet_v1_1.0_224_quant.tflite -t 1 -i grace_hopper.bmp -l labels.txt
Build a C++ TFLite example from source:
Building examples takes quite a bit of RAM, so I recommend increasing your swap file size before undertaking this:
fallocate -l 2G /swapfile
If fallocate isn't available on your filesystem, create the file with dd instead:
dd if=/dev/zero of=/swapfile bs=1024 count=2097152
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
You can check that the swap file was created with "swapon --show":
swapon --show
To make sure that the swap file is activated at boot, we need to add one line to the /etc/fstab file. Open the file with the nano editor by typing:
nano /etc/fstab
Add this line at the beginning of the file:
/swapfile swap swap defaults 0 0
Install TensorFlow Lite 1.12 libraries:
To build your own examples, you'll need to download the TensorFlow repository. In your home directory, clone TensorFlow and check out the same version that your version of eIQ includes, in this case 1.12:
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
git checkout r1.12
Because TensorFlow 1.12 is fairly old at this point, I ran into some trouble while building it: the source for Eigen has moved from Bitbucket to GitLab. Thankfully, the mirror for the required version is still up here:
http://mirror.bazel.build/bitbucket.org/eigen/eigen/get/fd6845384b86.tar.gz
I had to change the download link in download_dependencies.sh to get it to work:
nano tensorflow/contrib/lite/tools/make/download_dependencies.sh
Comment out the current EIGEN_URL and add this line just above it:
EIGEN_URL="http://mirror.bazel.build/bitbucket.org/eigen/eigen/get/fd6845384b86.tar.gz"
Comment it out in workspace.bzl as well:
nano tensorflow/workspace.bzl
"ctrl w" to search and enter "eigen-eigen." Comment out the line that says "https://bitbucket.org/eigen/eigen/get/fd6845384b86.tar.gz"
Now you should be able to run the following script without issues:
./tensorflow/contrib/lite/tools/make/download_dependencies.sh
Once the dependencies have downloaded, you should be able to run "make."
make -f tensorflow/contrib/lite/tools/make/Makefile
Build the label_image example
Build the “label_image” example using the GNU C++ compiler:
cd tensorflow/contrib/lite/examples/label_image
g++ --std=c++11 -O3 bitmap_helpers.cc label_image.cc -I ../../../ -I ../../tools/make/downloads/flatbuffers/include -L ../../tools/make/gen/linux_aarch64/lib -ltensorflow-lite -lpthread -ldl -o label_image
Download the TensorFlow model file to the current directory and unpack it:
wget http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_224_quant.tgz
tar -xzvf mobilenet_v1_1.0_224_quant.tgz
Run the example with the command-line arguments from the default location:
./label_image -m mobilenet_v1_1.0_224_quant.tflite -t 1 -i testdata/grace_hopper.bmp -l ../../java/ovic/src/testdata/labels.txt
The output will be the same as for the sample app: a list of the top predicted labels with their confidence scores.
There are many examples under tensorflow/contrib/lite/examples/ that you can test, and you can also build your own. In the next tutorial, we'll be exploring how to get the best performance possible by preparing models for inference using the Arm NN SDK.