On April 20, 2021, Xilinx announced Kria™, their newest product portfolio of system-on-modules (SOMs). These adaptive, production-ready SOMs are designed to help users accelerate their innovation at the edge.
At that time, I wrote an introductory tutorial:
https://avnet.me/kria-tutorial
which covered the Kria apps that were available at launch:
- kv260-smartcam
- kv260-aibox-reid
Since then, several other Kria apps have been made available for the KV260 Vision AI Starter Kit, from Xilinx and third-party providers:
- kv260-defect-detect
- kv260-nlp-smartvision
- etc...
This project will describe how to deploy one of the VVAS (Vitis Video Analytics SDK) examples (smart_model_select) to the KV260 Vision AI Starter Kit, as a Kria app.
Overview of the Vitis framework
Before going further, it is important to understand the various tools and frameworks available from Xilinx.
Vitis is a complete development environment, which allows you to quickly create complex custom applications.
Vitis includes:
- hardware platforms, which span cloud to edge targets.
- the Core development kit, which includes:
  - drivers and runtime
  - compilers and debugging tools
  - accelerated libraries, covering a range of targeted applications
- as well as support for domain-specific development environments, such as Caffe, TensorFlow, and PyTorch for AI.
Vitis includes open-source, performance-optimized libraries that offer out-of-the-box acceleration.
Among the many acceleration libraries is "AI", which makes use of the Deep Learning Processing Unit (DPU).
Vitis-AI encapsulates this AI portion of the Vitis offering.
Vitis-AI includes the Xilinx Model Zoo, which provides an ever-increasing number of pre-optimized models, directly targetable to the DPU core.
The latest addition to the Vitis framework is the Vitis Video Analytics SDK, or VVAS for short.
VVAS is the integration of Vitis & Vitis-AI into the GStreamer framework, allowing easier application development. GStreamer is open-source and provides the infrastructure for:
- creation of dynamic real-time media pipelines
- handling of buffer management, including zero-copy, etc...
- definition of an open-source plug-in structure for extending functionality
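As a concrete illustration of this element/pipeline model, the sketch below assembles a pipeline description from three standard GStreamer elements (videotestsrc, videoconvert, autovideosink, none of them VVAS-specific). It only builds and prints the command, so it runs even on a machine without GStreamer installed:

```shell
#!/bin/sh
# A GStreamer pipeline is a chain of elements linked with "!":
# a source, zero or more filters, and a sink.
SRC="videotestsrc num-buffers=100"
CONV="videoconvert"
SINK="autovideosink sync=false"
PIPELINE="$SRC ! $CONV ! $SINK"

# Print the full command; on a system with GStreamer installed
# you would actually run it with gst-launch-1.0.
echo "gst-launch-1.0 $PIPELINE"
```

The smart_model_select application builds pipelines of exactly this shape, substituting VVAS plug-ins for the inference and overlay stages.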
The SD card image for the KV260 can be downloaded from Xilinx's KV260 wiki pages:
https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/1641152513/Kria+K26+SOM#SD-Card-Images
Perform the following steps to boot your KV260 with the 2021.1 SD card image :
- download the SD card image (petalinux-sdimage-2021.1-update1.wic.xz)
- program the image to a microSD card (16GB or greater) using Balena Etcher
If the SD card image does not boot, you may need to update the KV26 boot firmware (BOOT.BIN), as described on the Kria K26 SOM wiki page linked above.
After the first boot, you must log in as the "petalinux" user and specify a new password (twice).
Next, the following app needs to be installed on the system:
- avnet-kv260-vvas-sms
Prior to installing the app, we want to make certain that we are up to date with the latest Xilinx package feeds. This can be done with the following commands:
root@xilinx-k26-starterkit-2021_1:~# sudo dnf clean all
...
root@xilinx-k26-starterkit-2021_1:~# sudo dnf update
...
This may take a significant amount of time, as several hundred packages will be upgraded.
Before installing the app, confirm its presence in the package feed with the "xmutil getpkgs" command:
root@xilinx-k26-starterkit-2021_1:~# sudo xmutil getpkgs
Searching package feed for packagegroups compatible with: kv260
...
avnet-packagegroup-kv260-vvas-sms.noarch 1.0-1.pl2021_1.0
...
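The package listing can be post-processed with standard shell tools. The sketch below extracts packagegroup names from a hard-coded sample that mimics the listing above (the second entry is a made-up example); on the board, you would pipe the real `sudo xmutil getpkgs` output through the same awk filter instead:

```shell
#!/bin/sh
# Sample text mimicking the "xmutil getpkgs" listing; on the board,
# replace this with the real command output.
sample='avnet-packagegroup-kv260-vvas-sms.noarch 1.0-1.pl2021_1.0
packagegroup-kv260-smartcam.noarch 1.0-r0.0'

# Keep only the first column and strip the ".noarch" suffix,
# leaving the names you can pass to "dnf install".
pkgs=$(printf '%s\n' "$sample" | awk '{ sub(/\.noarch$/, "", $1); print $1 }')
printf '%s\n' "$pkgs"
```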
Next, install the “avnet-packagegroup-kv260-vvas-sms” package, as follows:
root@xilinx-k26-starterkit-2021_1:~# sudo dnf install avnet-packagegroup-kv260-vvas-sms
This will take a significant amount of time, as several hundred (900+) dependencies will be installed.
Loading the Custom App
After a successful boot, the KV260 Vision AI Starter Kit will be running the default app:
- kv260-dp
This can be queried with the "xmutil listapps" command.
root@xilinx-k26-starterkit-2021_1:~# sudo xmutil listapps
Accelerator Base Type #slots Active_slot
kv260-dp kv260-dp XRT_FLAT 0 0,
avnet-kv260-vvas-sms avnet-kv260-vvas-sms XRT_FLAT 0 -1
Socket 9 closed by client
Before we can load our custom app, we first need to unload the kv260-dp app with the "xmutil unloadapp {app}" command:
root@xilinx-k26-starterkit-2021_1:~# sudo xmutil unloadapp kv260-dp
DFX-MGRD> daemon removing accel at slot 0
DFX-MGRD> Removing accel kv260-dp from slot 0
Accelerator successfully removed.
Socket 9 closed by client
root@xilinx-k26-starterkit-2021_1:~# sudo xmutil listapps
Accelerator Base Type #slots Active_slot
kv260-dp kv260-dp XRT_FLAT 0 -1
avnet-kv260-vvas-sms avnet-kv260-vvas-sms XRT_FLAT 0 -1
Socket 6 closed by client
We then load the custom app with the "xmutil loadapp {app}" command:
root@xilinx-k26-starterkit-2021_1:~# sudo xmutil loadapp avnet-kv260-vvas-sms
DFX-MGRD> daemon loading accel avnet-kv260-vvas-sms
DFX-MGRD> Successfully loaded base design.
Accelerator loaded to slot 0
Socket 6 closed by client
root@xilinx-k26-starterkit-2021_1:~# sudo xmutil listapps
Accelerator Base Type #slots Active_slot
kv260-dp kv260-dp XRT_FLAT 0 -1
avnet-kv260-vvas-sms avnet-kv260-vvas-sms XRT_FLAT 0 0,
Socket 9 closed by client
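The unload/load sequence above can be wrapped in a small helper. This is my own sketch (the `switch_app` name and `XMUTIL` variable are not part of Kria): it parses `xmutil listapps` for whichever app occupies slot 0, unloads it, then loads the requested app. `XMUTIL` defaults to the real xmutil and is overridable only so the logic can be exercised off-target:

```shell
#!/bin/sh
# Hypothetical helper, not an official Kria utility.
XMUTIL="${XMUTIL:-xmutil}"

switch_app() {
    new_app="$1"
    # In the listapps output, the Active_slot column is "-1" when an app
    # is not loaded; find the app currently in slot 0 and unload it first.
    active=$("$XMUTIL" listapps | awk '$NF == "0" || $NF == "0," { print $1 }')
    if [ -n "$active" ]; then
        "$XMUTIL" unloadapp "$active"
    fi
    "$XMUTIL" loadapp "$new_app"
}
```

On the board (run as root or via sudo), `switch_app avnet-kv260-vvas-sms` would then perform the unload and load in one step.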
Launching the smart_model_select example
With the avnet-kv260-vvas-sms app loaded, we can run the smart_model_select example application.
The application is located in the "/opt/avnet/kv260-vvas-sms/app" directory:
root@xilinx-k26-starterkit-2021_1:~# cd /opt/avnet/kv260-vvas-sms/app
First, we need to configure our monitor for 1080P resolution:
root@xilinx-k26-starterkit-2021_1:/opt/avnet/kv260-vvas-sms/app# source ./setup.sh
setting mode 1920x1080-60.00Hz on connectors 43, crtc 41
testing 1920x1080@NV12 overlay plane 39
[1]+ Stopped(SIGTTIN) modetest -M xlnx -s 43@41:1920x1080-60@AR24 -P 39@41:1920x1080@NV12 -w 40:alpha:0
This will display a diagonal color bar pattern on the monitor, as shown below:
With our monitor configured for 1080P resolution, we can launch the application:
root@xilinx-k26-starterkit-2021_1:/opt/avnet/kv260-vvas-sms/app# sudo ./smart_model_select
############################################
################## WELCOME #################
############################################
DEBUG: Got the cmd: gst-launch-1.0 multifilesrc location=templates/welcome_1080.jpg ! \
jpegparse ! jpegdec ! \
queue ! fpsdisplaysink video-sink="kmssink driver-name=xlnx sync=false" text-overlay=false sync=false
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Menu displayed on the monitor shows various options available
for input source, ML model, output sink. Each option carry an
index number along side.
Select elements to be used in the pipeline in the sequence of
"input source, ML model, output sink and performance
mode flag" seperated by commas.
eg: 1,1,3,0
Above input will run "filesrc" input, "resnet50" model
"kmssink" used as output sink and performance mode disabled.
Enter 'q' to exit
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
This will display the following image on the monitor, which only acts as a visual reference for the choices that can be made on the command line:
Note that the original image was modified to reflect the two additional input sources that were added to the application:
- USB camera
- MIPI sensor
The examples described below were performed with the following setup:
I do not have the very cool stand that was provided with some of the initial KV260 units, so I simply placed the KV260 against the monitor.
In addition to the video files, I have the following two input sources connected to the board:
- USB camera - Logitech BRIO (with NV12 support)
- MIPI sensor - IAS AR1335 module (http://avnet.me/ias-ar1335-datasheet)
The SD card image comes with the following video files pre-installed:
root@xilinx-k26-starterkit-2021_1:/opt/avnet/kv260-vvas-sms/app# ls -la videos
total 175548
drwxr-xr-x 2 root root 4096 Dec 28 10:03 .
drwxr-xr-x 6 root root 4096 Dec 29 06:52 ..
-rw-r--r-- 1 root root 5971635 Dec 28 10:03 CLASSIFICATION.mp4
-rw-r--r-- 1 root root 12801977 Dec 28 10:03 FACEDETECT.mp4
-rw-r--r-- 1 root root 47361877 Dec 28 10:03 REFINEDET.mp4
-rw-r--r-- 1 root root 37869612 Dec 28 10:03 SSD.mp4
-rw-r--r-- 1 root root 37869612 Dec 28 10:03 YOLOV2.mp4
-rw-r--r-- 1 root root 37869612 Dec 28 10:03 YOLOV3.mp4
These video files were obtained from the following source, but can be replaced with any video files, as long as the names are kept the same:
https://www.xilinx.com/member/forms/download/xef.html?filename=videos_smart_model_select.zip
In order to apply a model to the video files, use the "1,#,3,0" syntax, where # is a value from 1-16 representing one of the supported models.
For example, to apply model #6 to a video file, launch the example as follows:
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Menu displayed on the monitor shows various options available
for input source, ML model, output sink. Each option carry an
index number along side.
Select elements to be used in the pipeline in the sequence of
"input source, ML model, output sink and performance
mode flag" seperated by commas.
eg: 1,1,3,0
Above input will run "filesrc" input, "resnet50" model
"kmssink" used as output sink and performance mode disabled.
Enter 'q' to exit
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
1,6,3,0
...
>>>>> Enter any key to return to main menu <<<<<
The following animated image shows the expected output for the above example:
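The selection string can be decoded mechanically. The sketch below maps the four comma-separated fields to their meaning; the field order (input source, model, sink, performance flag) comes from the app's menu, but the short labels (filesrc, usbcam, mipi, kmssink) are my own shorthand and only a few of the available choices are mapped:

```shell
#!/bin/sh
# Hypothetical decoder for the "src,model,sink,perf" selection string.
decode_selection() {
    IFS=, read -r src model sink perf <<EOF
$1
EOF
    case "$src" in
        1) src_name="filesrc" ;;  # pre-installed video files
        3) src_name="usbcam" ;;   # USB camera
        4) src_name="mipi" ;;     # MIPI sensor
        *) src_name="src#$src" ;;
    esac
    case "$sink" in
        3) sink_name="kmssink" ;;
        *) sink_name="sink#$sink" ;;
    esac
    if [ "$perf" = "1" ]; then perf_name="enabled"; else perf_name="disabled"; fi
    echo "source=$src_name model=#$model sink=$sink_name perf=$perf_name"
}

decode_selection "1,6,3,0"
```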
In order to apply a model to a USB camera input, use the "3,#,3,0" syntax, where # is a value from 1-16 representing one of the supported models.
For example, to apply model #16 to the USB camera, launch the example as follows:
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Menu displayed on the monitor shows various options available
for input source, ML model, output sink. Each option carry an
index number along side.
Select elements to be used in the pipeline in the sequence of
"input source, ML model, output sink and performance
mode flag" seperated by commas.
eg: 1,1,3,0
Above input will run "filesrc" input, "resnet50" model
"kmssink" used as output sink and performance mode disabled.
Enter 'q' to exit
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
3,16,3,0
...
>>>>> Enter any key to return to main menu <<<<<
The following animated image shows the expected output for the above example:
Note that for real-time execution, the USB camera must support the NV12 format. For this reason, I recommend using the following USB camera:
- Logitech BRIO
USB cameras that do not support the NV12 format will require software color space conversion, which prevents real-time performance. It is possible to accelerate this conversion in hardware, but that is beyond the scope of this project.
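You can check a camera's advertised pixel formats before expecting real-time performance. The sketch below tests a format listing for NV12; the sample listing here is hypothetical, and on the board you would substitute the real output of `v4l2-ctl --list-formats -d /dev/video0`:

```shell
#!/bin/sh
# Return success if the given format listing mentions NV12.
supports_nv12() {
    printf '%s\n' "$1" | grep -q "NV12"
}

# Hypothetical sample; replace with real v4l2-ctl output on the board.
sample='[0]: MJPG (Motion-JPEG, compressed)
[1]: NV12 (Y/CbCr 4:2:0)
[2]: YUYV (YUYV 4:2:2)'

if supports_nv12 "$sample"; then
    echo "NV12 supported: no software color conversion needed"
else
    echo "NV12 not supported: software conversion will limit frame rate"
fi
```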
Running the application with MIPI sensor input
In order to apply a model to the MIPI sensor input, use the "4,#,3,0" syntax, where # is a value from 1-16 representing one of the supported models.
For example, to apply model #14 to the MIPI sensor, launch the example as follows:
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Menu displayed on the monitor shows various options available
for input source, ML model, output sink. Each option carry an
index number along side.
Select elements to be used in the pipeline in the sequence of
"input source, ML model, output sink and performance
mode flag" seperated by commas.
eg: 1,1,3,0
Above input will run "filesrc" input, "resnet50" model
"kmssink" used as output sink and performance mode disabled.
Enter 'q' to exit
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
4,14,3,0
...
>>>>> Enter any key to return to main menu <<<<<
The following animated image shows the expected output for the above example:
This porting exercise was very informative for understanding Kria's dynamic app loading mechanism, as well as the VVAS infrastructure.
For a detailed description of how the smart_model_select example was ported to Kria, please refer to the following blog series on Element14, which covers each step of this in-depth project tutorial:
- http://avnet.me/kv260-vvas-sms-2021-1-part1
- http://avnet.me/kv260-vvas-sms-2021-1-part2
- http://avnet.me/kv260-vvas-sms-2021-1-part3
- http://avnet.me/kv260-vvas-sms-2021-1-part4
- http://avnet.me/kv260-vvas-sms-2021-1-part5
- http://avnet.me/kv260-vvas-sms-2021-1-part6
- http://avnet.me/kv260-vvas-sms-2021-1-part7
This custom Kria app has been posted to the Kria app store.
User Guides & Data Sheets
- Kria KV260 Vision AI Starter Kit User Guide (UG1089)
- Kria SOM Carrier Card Design Guide (UG1091)
- Kria KV260 Vision AI Starter Kit Data Sheet (DS986)
- Kria K26 SOM Data Sheet (DS987)
On-line Resources
- Kria Product page
  https://www.xilinx.com/products/som/kria.html
- KV260 Getting Started page
  https://www.xilinx.com/products/som/kria/kv260-vision-starter-kit/kv260-getting-started/getting-started.html
- KV260 Apps documentation
  https://xilinx.github.io/kria-apps-docs/
- Kria App Store
  https://www.xilinx.com/products/app-store/kria.html
- Kria KV26 SOM wiki page
  https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/1641152513/Kria+K26+SOM
- KV260 Starter Kit Vitis Platforms
  https://github.com/Xilinx/kv260-vitis
- Vitis Video Analytics SDK (VVAS) Documentation
  https://xilinx.github.io/VVAS/main/build/html/index.html
- Smart Model Select
  https://xilinx.github.io/VVAS/main/build/html/docs/Embedded/smart_model_select.html
- MultiChannel ML
  https://xilinx.github.io/VVAS/main/build/html/docs/Embedded/Tutorials/MultiChannelML.html
2022/01/04 - Initial Version
2022/01/08 - Add link to detailed development flow on Element14
2022/04/08 - Update project to reflect availability of app from Accelize repository
Acknowledgements
Thank you to Marco Hoefle, Stefano Tabanelli, and Michael Uyttersprot from Avnet Silica for their valuable knowledge and help.