Edge Impulse is unrivaled in the ease of entry it provides for developing machine learning (ML) models. The Kria KV260 Vision AI Starter Kit is also in its own niche, with custom hardware and bundled resources that make it possible to get accelerated vision applications up and running without complex hardware design knowledge. So naturally, Edge Impulse and the Kria KV260 are a perfect powerhouse couple.
While I'm still working on deploying the Edge Impulse service as an accelerated application on the Kria, there are still a ton of really cool things that can be done by running Edge Impulse on the Kria Ubuntu 20.04 distribution, so I thought a straightforward demo of that would be a good starting point!
This project assumes you've already created an Edge Impulse Studio account; if you haven't, do that really quick here. I'm also starting exactly where I left off in my last Ubuntu on Kria post, where I updated the boot binary and installed Pynq. The Kria must be connected to your local area network (the same network your host PC is connected to) with internet access; I recommend connecting the Kria straight to your router via Ethernet.
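Before going any further, it's worth a quick sanity check that the Kria actually has a network connection and internet access. Something like the following from the Kria's terminal should do it (the interface name here is just an example and may differ on your setup):
ubuntu@kria:~$ ip addr show eth0
ubuntu@kria:~$ ping -c 3 edgeimpulse.com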
Kria Ubuntu 20.04 Updates for Edge Impulse
I previously had Edge Impulse installed and working on my Kria running the Xilinx-specialized distribution of Ubuntu Desktop 20.04, so I was surprised when I reinstalled it the same way on my updated Ubuntu image and got the following error, where the Edge Impulse tools couldn't connect to the Logitech webcam attached to my Kria:
ubuntu@kria:~$ edge-impulse-linux
Edge Impulse Linux client v1.3.8
[SER] Using microphone hw:1,0
Failed to initialize linux tool Error: Cannot find any webcams, run this command with --disable-camera to skip selection
at /usr/lib/node_modules/edge-impulse-linux/build/cli/linux/linux.js:429:23
I quickly narrowed this down to a compatibility issue with the Edge Impulse tools, since the Linux system itself was having no issue detecting and communicating with the webcam. Long story short, at the time of writing, the Xilinx-specialized versions of the GStreamer library packages are not compatible with the Edge Impulse toolset. Thus, the "regular" GStreamer packages need to be installed (which removes/replaces the Xilinx versions).
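If you want to confirm for yourself that Linux can see the webcam at the system level, you can check for the video device nodes and optionally query them with v4l2-ctl (from the v4l-utils package, which may need to be installed first); I'm assuming a standard USB webcam that enumerates as /dev/video*:
ubuntu@kria:~$ ls /dev/video*
ubuntu@kria:~$ sudo apt install -y v4l-utils
ubuntu@kria:~$ v4l2-ctl --list-devices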
It's worth mentioning that this could have an impact on anything output over the Kria's DisplayPort and HDMI ports: mainly the desktop output for Ubuntu (which is pretty much the whole reason Xilinx developed their own GStreamer packages). Since I'm working from the command line on the Kria and from EI Studio in a web browser on my host PC, it's not a big deal for this project if the Ubuntu desktop doesn't work. I haven't tested the Ubuntu desktop since, but Pynq and JupyterLab still work just fine from a browser on my host PC.
If you haven't already, install the Edge Impulse tools on the Kria Ubuntu image:
ubuntu@kria:~$ sudo apt-get install nodejs sox
ubuntu@kria:~$ curl -sL https://deb.nodesource.com/setup_12.x | sudo bash -
ubuntu@kria:~$ sudo apt-get install -y nodejs
ubuntu@kria:~$ npm config set user root && sudo npm install edge-impulse-linux -g --unsafe-perm
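It's not a bad idea to double-check that Node.js came from the NodeSource repository (the Edge Impulse tools want a reasonably recent version) and that the CLI ended up on your path:
ubuntu@kria:~$ node -v
ubuntu@kria:~$ npm -v
ubuntu@kria:~$ which edge-impulse-linux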
Update & upgrade everything system-wide:
ubuntu@kria:~$ sudo apt update
ubuntu@kria:~$ sudo apt upgrade
Update the Edge Impulse toolset:
ubuntu@kria:~$ npm update -g edge-impulse-linux
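You can confirm which version of the toolset you ended up with after the update:
ubuntu@kria:~$ npm list -g edge-impulse-linux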
Replace the gstreamer-xilinx1.0 and libgstreamer-xilinx packages with the regular gstreamer1.0 and libgstreamer packages (order matters here, so follow the order below):
ubuntu@kria:~$ sudo apt install -y libgstreamer-plugins-good1.0-0
ubuntu@kria:~$ sudo apt install -y gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps
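Before rebooting, you can optionally verify that the Xilinx GStreamer packages were swapped out for the stock ones (exact package names in the output may vary slightly between images):
ubuntu@kria:~$ dpkg -l | grep -i gstreamer
ubuntu@kria:~$ gst-inspect-1.0 --version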
And reboot:
ubuntu@kria:~$ sudo reboot
Create New Project in EI Studio
Log into Edge Impulse Studio and create a new project. In the new project setup window, select Images for the type of data being collected, then choose whether you want to detect a single target object or multiple. I selected a single target object for the sake of simplicity in this project.
After clicking Let's get started! navigate to the Data acquisition tab.
With the compatible Gstreamer packages installed on the Kria, connect a webcam to the Kria and run the Edge Impulse tools to connect to the newly created project in EI Studio to start loading data for training an ML model:
ubuntu@kria:~$ edge-impulse-linux
When prompted, provide your EI Studio login credentials, select the target project in EI Studio to upload the sample data to (the one just created), and select the video source of the connected webcam.
The Kria will appear in the Devices tab under the name given when running the Edge Impulse tools on the Kria's command line.
Once the Kria is showing as successfully connected in the Devices tab, it's time to start capturing some data to create and train an ML model using the Data acquisition tools in EI Studio.
Build a Machine Learning Model
In the Data acquisition tab of EI Studio, select the Kria from the Device dropdown menu, then select Camera from the Sensor dropdown menu and give a generic name in the Label name box such as "Data". Click the Start sampling button to take photos of the target object you want to detect. I'm using my NYC coffee cup here.
It is recommended that you take a minimum of 30 photos of the target object to get a quality ML model from Edge Impulse.
Since test images will also be needed to make sure the model is working properly, it is good practice to collect about 20 - 30 extra photos for the test data set. I captured 64 photos total, then used the Perform train/test split option from the Dashboard tab to rebalance the data set for me.
To automatically rebalance the data set and split it between the training data for the ML model and the test data set, go back to the Dashboard tab and click Perform train/test split. While you're there, change the Labeling method from One label per data item to Bounding boxes (object detection), and set the target device for Latency calculations to the Raspberry Pi 4, as it's the closest option in the list to the Zynq UltraScale+ processor in the Kria.
Next, the captured photos need to be marked to show the ML model what it's looking for. I personally think drawing bounding boxes around the target item is the easiest way to do this, which is why I changed the labeling method in the Dashboard.
Go back to the Data acquisition tab and select the Labeling queue. Draw a box around the target item (my NYC coffee mug) and give it a label when prompted. Click through each captured photo to validate/move the box to encompass the target item for object detection.
Once the entire data set (all photos captured, so both the training and test data) has been labeled, the next step is to create an impulse, which is your ML model. Edge Impulse is really innovative in how easy they've made this through the reuse of their core ML model templates.
Switch to the Create impulse sub-tab under the Impulse design tab.
A processing block and a learning block need to be added. For this simple object detection model, the processing block we want is the basic image processing block, which preprocesses and normalizes all of the image data. Click Add a processing block, then select Image from the pop-up window.
For the learning block, EI Studio has a pre-built object detection ML model that you can simply train with your own captured data. Click Add a learning block and select Object Detection (Images) from the pop-up window.
Click the Save impulse button to save the processing and learning block configuration.
Next, select the Image sub-tab under Impulse design. The Parameters page is where you can modify the image preprocessing features as desired, but I'm leaving it as the default RGB for this project.
Switch to the Generate features page and click the Generate features button. This processes the training data set and gives you a sense of how good it is before you use it to train your model.
As you can see in the screenshot below, the clustering isn't very tight in the Feature explorer graph, which means I may not have captured the best images for training the model. Ideally, you'd like to see a nice tight cluster of dots that basically looks like one large dot. I'm kind of curious about this data set though, so I'm going to go ahead and see what a model trained from it can do.
Realistically, if I were actually going to use this ML model in a project, I'd go back and recapture a better data set. That could mean a variety of things, such as better lighting when taking the photos, using a higher resolution webcam, better camera angles, or just a wider variety of angles of the object. It may also be an indication that my target object simply doesn't stand out well enough from the background and surrounding objects. These are all things to keep in mind when creating an ML model for a project and capturing data for it.
Finally, it's time to train the ML model with the captured data. Select the Object detection sub-tab under Impulse design and click Start training.
This can take a few minutes, depending on the amount of training data you provide. Once complete, a performance score is displayed that also includes the timing and resource allocation required by the model. It doesn't look terrible, so I have hope that this model will still work even with the sub-par training data I gave it.
With the model created and trained, the last step before deploying it back to the Kria to test in the real world is to see how it does with the test data captured previously. The whole reason we split the captured data is that the test images were withheld from the data set while the ML model was being trained. So the model has never seen these images, but they are still marked with where the target object is, which lets EI Studio report back how well the ML model detects the target object when shown the images for the first time.
In the Model testing tab, click Classify all to see how well the newly created ML model does with the test data captured previously.
As you can see, my ML model was able to detect my NYC cup 66% of the time. Obviously not ideal, but also not as bad as I thought it would be. It's still insanely impressive given how easily I was able to create this ML model.
You can test your ML model using the Live classification tab, which pulls images straight from the Kria in real time (the Edge Impulse tools still need to be running on the Kria via the edge-impulse-linux command; restart it if you stopped it after the initial data set capture in the first steps).
The Deployment tab shows options for building the model into custom firmware, including exporting the C/C++ code of your model so you can integrate it into your own applications. That will have to be another project for another day, so stay tuned.
To run the model directly on the Kria, switch back to the Kria's command line terminal and launch the EI runner, which downloads the model from EI Studio and runs it:
ubuntu@kria:~$ edge-impulse-linux-runner
You'll see the model being downloaded then compiled natively before the terminal starts printing out bounding box data, which indicates that the model is successfully up and running on the Kria!
When the model detects the target item (my NYC cup), it prints out the size and location of the bounding box within the captured image. When it sees nothing, no size/location data is printed to the terminal.
You can also open a browser window to see what the ML model running on the Kria is seeing. When the model first starts running, a URL is printed to the terminal before the bounding box information starts. Simply copy and paste this URL into a browser on your host PC connected to the same network as the Kria.
Then, of course, send Ctrl+C to the Kria terminal to halt the ML model and the Edge Impulse tools.
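As a final note, if you'd rather keep a local copy of the compiled model file (an .eim binary) instead of having the runner fetch it from EI Studio every time, the runner can reportedly save it to disk as well; I believe the flag is --download with a filename of your choosing (my-model.eim below is just a placeholder), but double-check against the runner's help output on your install since I haven't verified it here:
ubuntu@kria:~$ edge-impulse-linux-runner --help
ubuntu@kria:~$ edge-impulse-linux-runner --download my-model.eim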