As I've dug deeper into a bigger project involving Legos and robotics, I've found a need to develop a machine learning model for object/obstacle detection to deploy on my Kria KR260, which I'm using as the heart/brain of the project. While I've previously covered how to use Edge Impulse on the Kria KV260 running the Ubuntu 20.04 image for AMD-Xilinx devices, I found the install process deviated enough on the Kria KR260 running the Ubuntu 22.04 image for AMD-Xilinx devices that it was worth an updated write-up.
Prep KR260
My past tutorials for the Kria KR260 have used a Linux image created using PetaLinux, but Edge Impulse is much easier to deploy on a Debian-based system in my personal opinion (due to more documentation and example projects). So I'll be using the dedicated Ubuntu 22.04 image created by AMD-Xilinx and Canonical for the Kria boards.
There is a great step-by-step tutorial on AMD-Xilinx's website here for how to flash Ubuntu 22.04 onto an SD card and prep the environment with the Xilinx development tools (the xlnx-config snap) required for system management.
This project assumes a starting point after step 3 of AMD-Xilinx's guide for getting started with Ubuntu.
Install Edge Impulse for Linux
Install nodejs 12.x and npm, then install the edge-impulse-linux package:
ubuntu@kria:~$ sudo apt-get install npm nodejs sox
ubuntu@kria:~$ curl -sL https://deb.nodesource.com/setup_12.x | sudo bash -
ubuntu@kria:~$ sudo apt-get install -y nodejs
ubuntu@kria:~$ npm config set user root && sudo npm install edge-impulse-linux -g --unsafe-perm
Update & upgrade the system:
ubuntu@kria:~$ sudo apt update
ubuntu@kria:~$ sudo apt upgrade
You may be advised to reboot to upgrade some kernel modules. If you are, definitely reboot the KR260 before moving on:
ubuntu@kria:~$ reboot
For good measure, make sure edge-impulse-linux is fully up-to-date:
ubuntu@kria:~$ sudo npm update -g edge-impulse-linux
Then test out the edge-impulse-linux package by connecting it to a project in Edge Impulse Studio:
ubuntu@kria:~$ sudo edge-impulse-linux --verbose
The verbose flag is optional, but I find it helpful for troubleshooting or verifying that everything is working properly the first time I run Edge Impulse after installation. I found that edge-impulse-linux must be run with root privileges or it will not be able to access the video sources in the /dev directory (you'll get errors that no cameras or microphones can be found).
When prompted, enter your Edge Impulse Studio login and which project to connect the KR260 to. Then select the video and audio sources (e.g., a USB webcam).
The Edge Impulse command line interface (CLI) tools are going to be important for my later projects because they allow users to create their own custom DSP blocks, machine learning models, deployment blocks, and transformation blocks.
While edge-impulse-linux has to be installed with nodejs 12.x, edge-impulse-cli has to be installed and used with nodejs 14.x (edge-impulse-linux can still be used with 14.x, however).
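Since the two packages depend on different Node.js major versions, it can help to check which version is currently installed before switching. A minimal sketch in Python (assuming `node -v` output of the usual form, e.g. `v14.21.3`):

```python
import subprocess

def node_major(version_string: str) -> int:
    """Parse the major version out of `node -v` output like 'v14.21.3'."""
    return int(version_string.strip().lstrip("v").split(".")[0])

def installed_node_major() -> int:
    """Ask the local node binary for its version and return the major number."""
    out = subprocess.run(["node", "-v"], capture_output=True, text=True, check=True)
    return node_major(out.stdout)
```

For example, `node_major("v14.21.3")` returns `14`, so a script can assert the expected major version before installing either Edge Impulse package.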
Ubuntu 22.04 requires that nodejs be removed and purged from the system before installing a new version (there is no direct upgrade option):
ubuntu@kria:~$ curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -
ubuntu@kria:~$ sudo apt purge nodejs
ubuntu@kria:~$ sudo apt autoremove
ubuntu@kria:~$ sudo apt-get install -y nodejs
The default directory for npm in the Xilinx Ubuntu image is /usr/local, but it needs to be /home/ubuntu/.npm-global, so check where it initially points after installation:
ubuntu@kria:~$ npm config get prefix
If npm returns that the current prefix is /usr/local, update it:
ubuntu@kria:~$ mkdir ~/.npm-global
ubuntu@kria:~$ npm config set prefix '~/.npm-global'
ubuntu@kria:~$ echo 'export PATH=~/.npm-global/bin:$PATH' >> ~/.profile
Then validate that the directory for npm has been updated:
ubuntu@kria:~$ npm config get prefix
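The prefix check above is easy to script as well; the helper below simply compares the reported prefix against the per-user global directory this guide uses (the /home/ubuntu path is the default user on the Xilinx Ubuntu image, an assumption you may need to adjust):

```python
import os

def npm_prefix_ok(prefix: str, home: str = "/home/ubuntu") -> bool:
    """True when npm's prefix already points at the per-user global directory."""
    return os.path.normpath(prefix.strip()) == os.path.join(home, ".npm-global")
```

A setup script could run `npm config get prefix`, pass the output to this function, and only perform the reconfiguration steps when it returns False.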
Finally, validate that the nodejs version is in fact now 14.x, then install edge-impulse-cli:
ubuntu@kria:~$ node -v
ubuntu@kria:~$ npm install -g edge-impulse-cli
This may take a few minutes, and a few warnings will be displayed, but the process should ultimately succeed, echoing the edge-impulse-cli version installed.
As I mentioned initially, I've gone through the process of creating a project in Edge Impulse Studio with the Kria KV260 and the process of creating/training an ML model. The sections from that write-up, Create New Project in EI Studio and Build a Machine Learning Model, can be followed the same way on the Kria KR260 (ignore the Ubuntu 20.04 driver updates in the earlier sections, however, since the KR260 is using Ubuntu 22.04).
Run Model on KR260
Then, once a new ML model has been completed in EI Studio, it can be deployed on the KR260 the same way for initial testing:
ubuntu@kria:~$ sudo edge-impulse-linux-runner
This command downloads the model file from EI Studio, compiles it natively, then starts running it on newly captured samples from the target sensor, which in my case is images captured from the USB webcam connected to the KR260, with bounding boxes placed around the object(s) the model was trained to detect.
The edge-impulse-linux-runner command will also print a URL to the KR260's terminal once the model launches successfully, in case you want to see the actual sample images being captured with bounding boxes as the model detects the items. Note: make sure the device whose browser you open the URL in is connected to the same network as the KR260.
For the KR260, I trained a new ML model to look for my coffee cup and mouse which it succeeded in!
Download .eim Model File to KR260
In order to be able to run the ML model on the KR260 without an internet connection, the .eim model file needs to be downloaded to a permanent directory:
ubuntu@kria:~$ sudo edge-impulse-linux-runner --download modelfile.eim
With the ML model downloaded to the KR260, firmware can now be developed on it to capture images, feed them to the model, then take whatever desired action based on the output from the ML model. For example: I could write a Python script on the KR260 that triggers the webcam to take an image, feeds it to the ML model I created, and, if the ML model classifies the image as "seeing" my coffee cup, takes some other action like turning on an LED.
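As a sketch of that idea, the helper below inspects an object-detection result in the shape the Linux runner reports for object-detection models (a list of bounding boxes with label and value fields) and decides whether to act. The "coffee_cup" label, the 0.6 threshold, and the LED action are my own hypothetical examples, not part of Edge Impulse:

```python
def detections_above(result: dict, label: str, threshold: float = 0.6) -> list:
    """Return bounding boxes for `label` whose confidence meets the threshold.

    Assumes an object-detection result shaped like:
    {"result": {"bounding_boxes": [{"label": ..., "value": ...}, ...]}}
    """
    boxes = result.get("result", {}).get("bounding_boxes", [])
    return [b for b in boxes if b["label"] == label and b["value"] >= threshold]

def act_on_result(result: dict) -> bool:
    """Hypothetical action hook: 'turn on an LED' when a coffee cup is seen."""
    if detections_above(result, "coffee_cup"):
        # On real hardware this is where a GPIO/LED call would go.
        return True
    return False
```

Keeping the decision logic in a pure function like this makes it easy to unit-test off-device before wiring it to the actual camera and model.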
Install Desired Language SDK for Edge Impulse
Edge Impulse supports custom firmware in a few different languages by providing an SDK for Python, Nodejs, Go, and C++. I decided to go with the Python SDK, so I installed it on the KR260 and cloned the GitHub repository from Edge Impulse with the examples for classifying data:
ubuntu@kria:~$ pip3 install edge_impulse_linux
ubuntu@kria:~$ git clone https://github.com/edgeimpulse/linux-sdk-python
I did find that I needed to install some different audio drivers to support the Python SDK on the KR260:
ubuntu@kria:~$ sudo apt install portaudio19-dev python3-pyaudio
As a test, you can take an individual image from the webcam then manually run the image classification script:
ubuntu@kria:~$ python3 ./linux-sdk-python/examples/image/classify-image.py <path_to_model.eim> <path_to_image.jpg>
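That manual test can also be wrapped in a small script, which is handy once you start classifying batches of saved images. A minimal sketch that shells out to the SDK's example script (the script path assumes the repository was cloned into the home directory as above; running `classify()` requires the SDK and its dependencies to be installed on the KR260):

```python
import subprocess

SCRIPT = "./linux-sdk-python/examples/image/classify-image.py"

def classify_cmd(model_path: str, image_path: str, script: str = SCRIPT) -> list:
    """Build the argv list for invoking the SDK's example classifier script."""
    return ["python3", script, model_path, image_path]

def classify(model_path: str, image_path: str) -> str:
    """Run the example script on one image and return its printed output."""
    proc = subprocess.run(classify_cmd(model_path, image_path),
                          capture_output=True, text=True, check=True)
    return proc.stdout
```

For example, `classify("modelfile.eim", "test.jpg")` runs the same command as the terminal invocation above and captures the classification output for further parsing.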
My next post will cover the details of writing the Python scripts in more detail!