On April 20, 2021, Xilinx announced Kria™, their newest product portfolio of system-on-modules (SOMs). These adaptive, production-ready SOMs are designed to enable users to accelerate their innovation at the edge.
This project will cover the following components that were announced at launch:
- Kria™ K26 SOM
- Kria™ KV260 Vision AI Starter Kit
- Kria™ Apps on the Xilinx App Store
From a developer's perspective, I also discovered two new features which I found equally deserving of being highlighted:
- XMUTIL - a new platform and app management utility
- IVAS - a new Intelligent Video Analytics Software framework
At the core of the announced portfolio is the first of a family of SOMs, the Kria™ K26 SOM.
This first Xilinx SOM, based on the Zynq® UltraScale+™ MPSoC architecture, is capable of up to 1.4 TOPS of AI processing and has an integrated H.264/H.265 video codec. Available in commercial and industrial options, the K26 SOM is optimized for edge vision applications requiring flexibility to adapt to changing requirements.
As you already know with the Avnet SOM portfolio, designing with a SOM provides the following advantages:
- reduced risk, by re-using a proven production ready core circuit
- accelerated time to market, by concentrating your development on your innovative edge
A Zynq® UltraScale+™ MPSoC based SOM provides the additional advantage:
- adaptive solution, which can be re-programmed to future-proof a product
The Kria™ portfolio is designed for the following personas.
From AI developers who need a pre-built platform which is ready to use, to SW developers who need pre-built models with which to create applications, to Hardware developers who will ultimately re-program the programmable logic (PL) and design their own carrier cards, the Kria™ solution allows all developers to get started now.
Kria™ KV260 Vision AI Starter Kit
As with all SOMs, a carrier card is needed to breathe life into the SOM. Xilinx is providing a Vision-ready carrier card, together with a KV26 SOM.
The price on the KV260 Starter Kit is mind-blowing ($199). It is important to note that the KV26 SOM that comes with this starter kit is not a production-ready SOM. So if you were thinking of buying the starter kit for the SOM, think again.
The Xilinx App Store
Xilinx is making Kria™ SOM based solutions available in one centralized location, called the Xilinx App Store.
Kria™ Apps on Xilinx App Store
At launch, three Kria™ Apps were available for download in the Xilinx App Store:
- Smart Camera by Xilinx
- AI Box with ReID by Xilinx
- Defect Detection by Xilinx
Two additional Kria™ Apps from the partner ecosystem were also announced on the Xilinx App Store:
- AI Box for Auto License Plate Recognition by Uncanny Vision (April 30, 2021)
- Denali 3.0 HDR Image Signal Processor by Pinnacle Imaging (May 30, 2021)
The accelerated app that I decided to focus on is the "Smart Camera".
In order to fully appreciate the Kria™ Vision AI Starter Kit, the following components are required:
- SK-KV260-G - KV260 Vision AI Starter Kit
- HW-BACCPAC01-SK-G - KV260 Basic Accessory Pack
I was fortunate to receive an "early access" version of the KV260 kit, but it was not packaged as a production product. For this reason, I will not attempt to describe the un-boxing.
In order to explore the KV260, Xilinx's Getting Started Guide is definitely the place to start:
https://www.xilinx.com/products/som/kria/kv260-vision-starter-kit/kv260-getting-started/getting-started.html
The getting started guide will instruct you on how to:
- program the SD card image
- connect up and boot the KV260
- walk through the SmartCamera demos
The vision AI kit supports several accelerated apps, but only one SD card image needs to be programmed. Loading of the various accelerated apps is handled with an innovative utility called xmutil.
XMUTIL : a new platform and app management utility
I was surprised to see a new utility called xmutil on this image. The utility can be used to query the platform status, manage the accelerated apps, and more.
To get more information on the xmutil utility, call it with the -h argument:
root@xilinx-k26-starterkit-2020_2:~# xmutil -h
usage: xmutil [-h]
{boardid,bootfw_status,bootfw_update,getpkgs,listapps,loadapp,unloadapp,platformstats,ddrqos,axiqos,pwrctl}
...
boardid: Reads all board EEPROM contents. Prints information summary in human readable structure to CLI.
-b Pick which board to print fru data for
-f Enter field to print
-s Enter som EEPROM path
-c Enter cc EEPROM path
bootfw_status: Prints Qspi MFG version and date info along with persistent state values.
bootfw_update: Updates the primary boot device with a new BOOT.BIN in the inactive partition (either A or B).
getpkgs: Queries Xilinx package feeds and provides summary to CLI of relevant packages for active platform based on board ID information.
listapps: Queries on target HW resource manager daemon of pre-built apps available on the platform and provides summary to CLI.
loadapp: Loads integrated HW+SW application inclusive of bitstream and starts the corresponding pre-built app SW executable.
unloadapp: Removes accelerated application inclusive of unloading its bitstream. (Takes no argument)
platformstats: Reads and prints a summary of the following performance related information:
CPU Utilization for each configured CPU
RAM utilization
Swap memory Utilization
SOM overall current, power, voltage utilization
SysMon Temperatures(s)
SOM power supply data summary reported by PMICs & ZU+ SysMon sources
ddrqos: Set QOS value for DDR slots on zynqmp platform
axiqos: Set QOS value for AXI ports on zynqmp platform.
pwrctl: PL power control utility.
positional arguments:
{boardid,bootfw_status,bootfw_update,getpkgs,listapps,loadapp,unloadapp,platformstats,ddrqos,axiqos,pwrctl}
Enter a function
args
optional arguments:
-h, --help show this help message and exit
root@xilinx-k26-starterkit-2020_2:~#
To get more information on one of the supported commands, specify the command, followed by the -h argument.
For example, in order to get more information on the platformstats command:
root@xilinx-k26-starterkit-2020_2:~# xmutil platformstats -h
XILINX PLATFORM STATS UTILITY
Usage: platformstats [options] [stats]
Options
-i --interval Specify the decimal value for polling in ms. The default is 1000ms.
-v --verbose Print verbose messages
-l --logfile Print output to logfile
-s --stop Stop any running instances of platformstats
-h --help Show this usuage.
List of stats to print
-a --all Print all supported stats.
-c --cpu-util Print CPU Utilization.
-r --ram-util Print RAM Utilization.
-s --swap-util Print Swap Mem Utilization.
-p --power-util Print Power Utilization.
-m --cma-util Print CMA Mem Utilization.
-f --cpu-freq Print CPU frequency.
To print the power utilization:
root@xilinx-k26-starterkit-2020_2:~# xmutil platformstats -p
Power Utilization
SOM total power : 3560 mW
SOM total current : 708 mA
SOM total voltage : 5025 mV
AMS CTRL
System PLLs voltage measurement, VCC_PSLL : 1193 mV
PL internal voltage measurement, VCC_PSBATT : 716 mV
Voltage measurement for six DDR I/O PLLs, VCC_PSDDR_PLL : 1793 mV
VCC_PSINTFP_DDR voltage measurement : 841 mV
PS Sysmon
LPD temperature measurement : 26 C
FPD temperature measurement (REMOTE) : 26 C
VCC PS FPD voltage measurement (supply 2) : 841 mV
PS IO Bank 500 voltage measurement (supply 6) : 1784 mV
VCC PS GTR voltage : 851 mV
VTT PS GTR voltage : 1799 mV
PL Sysmon
PL temperature
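The platformstats report is plain "name : value" text, which makes it convenient to post-process for logging or monitoring. The helper below is a hypothetical sketch of my own (not part of the Xilinx tooling) that parses the power section shown above:

```python
# Hypothetical helper: parse the "name : value unit" lines printed by
# `xmutil platformstats -p` into a dict for logging or monitoring.
def parse_power_stats(output):
    stats = {}
    for line in output.splitlines():
        if " : " in line:
            name, value = line.split(" : ", 1)
            stats[name.strip()] = value.strip()
    return stats

# Sample taken from the console output above.
sample = """\
Power Utilization
SOM total power                  : 3560 mW
SOM total current                : 708 mA
SOM total voltage                : 5025 mV
"""
stats = parse_power_stats(sample)
print(stats["SOM total power"])  # 3560 mW
```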
To list the available apps:
root@xilinx-k26-starterkit-2020_2:~# xmutil listapps
Accelerator Type Active
kv260-dp XRT_FLAT 1
base XRT_FLAT 0
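The listapps table is also easy to consume from a script. As a hedged sketch (again, my own illustration rather than anything shipped with xmutil), the accelerator names and active flags can be pulled out like this:

```python
# Hypothetical helper: parse the table printed by `xmutil listapps`
# into a dict mapping accelerator name -> active flag.
def parse_listapps(output):
    apps = {}
    for line in output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) == 3:
            name, _accel_type, active = fields
            apps[name] = active == "1"
    return apps

# Sample taken from the console output above.
sample = """\
Accelerator          Type       Active
kv260-dp             XRT_FLAT   1
base                 XRT_FLAT   0
"""
print(parse_listapps(sample))  # {'kv260-dp': True, 'base': False}
```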
I particularly like the command that reads back the FRU identification EEPROM on each of the boards (SOM & carrier card), including product number, revision, serial number, etc.
root@xilinx-k26-starterkit-2020_2:~# xmutil boardid -h
usage: fru-print.py [-h] [-b {som,cc}] [-f FIELD [FIELD ...]] [-s [SOMPATH]]
[-c [CCPATH]]
print fru data of SOM/CC eeprom
optional arguments:
-h, --help show this help message and exit
-b {som,cc}, --board {som,cc}
Enter som or cc
-f FIELD [FIELD ...], --field FIELD [FIELD ...]
enter fields to index using. (if entering one arg,
it's assumed the field is from board area)
-s [SOMPATH], --sompath [SOMPATH]
enter path to SOM EEPROM
-c [CCPATH], --ccpath [CCPATH]
enter path to CC EEPROM
root@xilinx-k26-starterkit-2020_2:~# xmutil boardid -b som
board:
date: 13127688
fileid: '00'
language: 0
manufacturer: XILINX
part: 5057-02ED
pcieinfo:
Device_ID: '0000'
SubDevice_ID: '0000'
SubVendor_ID: '0000'
Vendor_ID: 10ee
product: SM-K26-XCL2GC-ED
revision: A
serial: XFL1VJ2M4WKW
uuid: 9bbf688bb32042da9cff293128047758
common:
size: 8192
version: 1
multirecord:
DC_Load_Record:
max_V: '2602'
max_mA: a00f
min_V: c201
min_mA: '0000'
nominal_voltage: f401
output_number: '01'
ripple/noise pk-pk: '6400'
MAC_Addr:
MAC_ID_0: 000a3509d4da
Version: '31'
Xilinx_IANA_ID: da1000
SoM_Memory_Config:
Primary_boot_device: QSPI:512Mb
SOM_PL_DDR_memory: PLDDR4:None
SOM_PS_DDR_memory: PSDDR4:4GB
SOM_secondary_boot_device: eMMC:16GB
Xilinx_IANA_ID: da1000
root@xilinx-k26-starterkit-2020_2:~# xmutil boardid -b cc
board:
date: 13205619
fileid: '00'
language: 0
manufacturer: XILINX
part: 5066-01ED
pcieinfo:
Device_ID: '0000'
SubDevice_ID: '0000'
SubVendor_ID: '0000'
Vendor_ID: 10ee
product: SCK-KV-G
revision: B
serial: XFL1ANI12XZR
uuid: afee164eeb7a43339bf426426eed3b18
common:
size: 8192
version: 1
multirecord:
DC_Load_Record:
max_V: e204
max_mA: b80b
min_V: 7e04
min_mA: '0000'
nominal_voltage: b004
output_number: '01'
ripple/noise pk-pk: '6400'
root@xilinx-k26-starterkit-2020_2:~#
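Since the boardid output is YAML-formatted, individual fields are easy to extract. A real script would likely use PyYAML; the minimal sketch below (a made-up helper, for illustration only) grabs flat fields such as the product and serial numbers with simple line parsing:

```python
# Hypothetical helper: grab a flat "key: value" field from the
# YAML-like output of `xmutil boardid` (PyYAML would be more robust).
def fru_field(output, key):
    for line in output.splitlines():
        stripped = line.strip()
        if stripped.startswith(key + ":"):
            return stripped.split(":", 1)[1].strip()
    return None

# Sample fields taken from the SOM boardid output above.
sample = """\
board:
  manufacturer: XILINX
  product: SM-K26-XCL2GC-ED
  revision: A
  serial: XFL1VJ2M4WKW
"""
print(fru_field(sample, "product"))  # SM-K26-XCL2GC-ED
print(fru_field(sample, "serial"))   # XFL1VJ2M4WKW
```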
Loading the SmartCam app
The first thing to do is to query which apps are available from Xilinx.
xilinx-k26-starterkit-2020_2:~$ xmutil getpkgs
Searching package feed for packagegroups compatible with: kv260
packagegroup-kv260-smartcam.noarch 1.0-r0.0
oe-remote-repo-sswreleases-rel-v2020.2.2-generic-rpm-noarch
packagegroup-kv260-aibox-reid.noarch 1.0-r0.0
oe-remote-repo-sswreleases-rel-v2020.2.2-generic-rpm-noarch
packagegroup-kv260-defect-detect.noarch 1.0-r0.0
oe-remote-repo-sswreleases-rel-v2020.2.2-generic-rpm-noarch
Next, any of the apps can be downloaded and installed with the "dnf" utility.
dnf install packagegroup-kv260-smartcam.noarch
It is useful to know that, in addition to the packages that are installed, many of the design specific files are placed in the /opt/xilinx directory.
xilinx-k26-starterkit-2020_2:~$ tree /opt/xilinx/
/opt/xilinx/
|-- README_SMARTCAM
|-- bin
| |-- 01.mipi-rtsp.sh
| |-- 02.mipi-dp.sh
| |-- 03.file-file.sh
| |-- 04.file-ssd-dp.sh
| `-- smartcam
|-- lib
| |-- libivas_airender.so
| `-- libivas_xpp.so
`-- share
|-- ivas
| `-- smartcam
| |-- facedetect
| | |-- aiinference.json
| | |-- drawresult.json
| | `-- preprocess.json
| |-- refinedet
| | |-- aiinference.json
| | |-- drawresult.json
| | `-- preprocess.json
| `-- ssd
| |-- aiinference.json
| |-- drawresult.json
| |-- label.json
| `-- preprocess.json
|-- notebooks
| `-- smartcam
| |-- LICENSE
| |-- images
| | `-- xilinx_logo.png
| `-- smartcam.ipynb
`-- vitis_ai_library
`-- models
`-- kv260-smartcam
|-- densebox_640_360
| |-- densebox_640_360.prototxt
| |-- densebox_640_360.xmodel
| `-- md5sum.txt
|-- refinedet_pruned_0_96
| |-- md5sum.txt
| |-- refinedet_pruned_0_96.prototxt
| `-- refinedet_pruned_0_96.xmodel
`-- ssd_adas_pruned_0_95
|-- label.json
|-- md5sum.txt
|-- ssd_adas_pruned_0_95.prototxt
`-- ssd_adas_pruned_0_95.xmodel
17 directories, 31 files
With the "Smart Camera" app downloaded and installed in our root file system, let's load it with the xmutil utility.
root@xilinx-k26-starterkit-2020_2:~# xmutil listapps
Accelerator Type Active
kv260-dp XRT_FLAT 1
base XRT_FLAT 0
kv260-smartcam XRT_FLAT 0
root@xilinx-k26-starterkit-2020_2:~# xmutil loadapp kv260-smartcam
Remove previously loaded accelerator, no empty slot
root@xilinx-k26-starterkit-2020_2:~# xmutil unloadapp
Removing accel /lib/firmware/xilinx/kv260-dp
root@xilinx-k26-starterkit-2020_2:~# xmutil loadapp kv260-smartcam
...
DFX-MGRD> Loaded kv260-smartcam successfully
root@xilinx-k26-starterkit-2020_2:~# xmutil listapps
Accelerator Type Active
kv260-dp XRT_FLAT 0
base XRT_FLAT 0
kv260-smartcam XRT_FLAT 1
The "loadapp" command performs the following operations:
- load bitstream for accelerated app
- load dynamic device tree content specific to accelerated app
- load device drivers specific to the accelerated app
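As the console session above showed, loadapp fails while another accelerator occupies the slot, so swapping apps is always "unloadapp" followed by "loadapp <name>". The sketch below is a hypothetical wrapper of my own: swap_commands() only builds the two xmutil invocations, and run_swap() would execute them with subprocess.

```python
import subprocess

# Hypothetical wrapper around xmutil: the single XRT_FLAT slot must be
# freed before a new accelerated app can be loaded.
def swap_commands(app):
    return [["xmutil", "unloadapp"], ["xmutil", "loadapp", app]]

def run_swap(app):
    # Would run on the KV260 itself, where xmutil is installed.
    for cmd in swap_commands(app):
        subprocess.run(cmd, check=True)

print(swap_commands("kv260-smartcam"))
# [['xmutil', 'unloadapp'], ['xmutil', 'loadapp', 'kv260-smartcam']]
```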
For the "Smart Camera" app, the following executables and scripts are provided:
- 01.mipi-rtsp.sh
- 02.mipi-dp.sh
- 03.file-file.sh
- 04.file-ssd-dp.sh
- smartcam
The installation of the "Smart Camera" app also installed three pre-built models:
root@xilinx-k26-starterkit-2020_2:~# tree /opt/xilinx/share/vitis_ai_library
/opt/xilinx/share/vitis_ai_library
`-- models
`-- kv260-smartcam
|-- densebox_640_360
| |-- densebox_640_360.prototxt
| |-- densebox_640_360.xmodel
| `-- md5sum.txt
|-- refinedet_pruned_0_96
| |-- md5sum.txt
| |-- refinedet_pruned_0_96.prototxt
| `-- refinedet_pruned_0_96.xmodel
`-- ssd_adas_pruned_0_95
|-- label.json
|-- md5sum.txt
|-- ssd_adas_pruned_0_95.prototxt
`-- ssd_adas_pruned_0_95.xmodel
5 directories, 10 files
The installation of the "Smart Camera" app also installed the following jupyter notebook:
root@xilinx-k26-starterkit-2020_2:~# tree /opt/xilinx/share/notebooks/
/opt/xilinx/share/notebooks/
`-- smartcam
|-- LICENSE
|-- images
| `-- xilinx_logo.png
`-- smartcam.ipynb
2 directories, 3 files
Downloading video files
The following on-line documentation instructs the user to download two video files.
https://xilinx.github.io/kria-apps-docs/master/docs/smartcamera/docs/app_deployment.html
As per the instructions, I downloaded the following two videos from pixabay.com:
I renamed these mp4 files to walking_humans.mp4 and traffic_with_rain.mp4, and transcoded them with the following commands on a linux machine:
ffmpeg -i walking_humans.mp4 -c:v libx264 -pix_fmt nv12 -vf scale=1920:1080 -r 30 walking_humans.nv12.1920x1080.h264
ffmpeg -i walking_humans.mp4 -c:v libx264 -pix_fmt nv12 -vf scale=3840:2160 -r 30 walking_humans.nv12.3840x2160.h264
ffmpeg -i traffic_with_rain.mp4 -c:v libx264 -pix_fmt nv12 -vf scale=1920:1080 -r 30 traffic_with_rain.nv12.1920x1080.h264
ffmpeg -i traffic_with_rain.mp4 -c:v libx264 -pix_fmt nv12 -vf scale=3840:2160 -r 30 traffic_with_rain.nv12.3840x2160.h264
I then copied the transcoded files (*.h264) to the SD card image, on the "boot" partition, in a "videos" directory, which corresponds to the following location on the embedded platform:
/media/sd-mmcblk0p1/videos
I then rebooted, and re-loaded the "Smart Camera" app.
Real-Time 4K Face Detection
The first executable script that I tested was the 02.mipi-dp.sh script. At first glance, it appears to be running the same face detection that we see on many edge platforms. However, there is much more going on in this version of the example.
A naïve implementation of face detection would consist of :
- downscaling the image to a lower resolution (i.e. 640x360 for the densebox model)
- running face detection at lower resolution
- displaying results at lower resolution
In this version, however, the full resolution (1080P or 4K) of the input image is preserved in the pipeline. Only the face detection (densebox) is performed at lower resolution, as shown in the following simplified block diagram.
This has the benefit of preserving higher resolution face ROIs. For the additional processing that will inevitably be done, higher resolution faces translate to better results.
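The idea can be illustrated with a few lines of arithmetic: detection runs on the 640x360 densebox input, but the resulting boxes can be mapped back onto the full frame, so downstream processing gets the face ROIs at full resolution. The coordinates below are made up for the example.

```python
# Illustration: scale a detection box from the 640x360 model input
# back to the full 4K frame preserved in the pipeline.
def scale_box(box, model_res=(640, 360), frame_res=(3840, 2160)):
    sx = frame_res[0] / model_res[0]  # 6.0 for 4K
    sy = frame_res[1] / model_res[1]  # 6.0 for 4K
    x, y, w, h = box
    return (int(x * sx), int(y * sy), int(w * sx), int(h * sy))

# A 40x40 face at (100, 80) in model space is a 240x240 ROI at 4K.
print(scale_box((100, 80, 40, 40)))  # (600, 480, 240, 240)
```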
To run the face detection at 1080P, use either of the following commands:
02.mipi-dp.sh
02.mipi-dp.sh 1920 1080
To run the face detection at 4K resolution, use the following command:
02.mipi-dp.sh 3840 2160
Smart IP Camera in less than 5 minutes !
The second script that I tested was the 01.mipi-rtsp.sh script. This is definitely a fast way to create an IP camera.
To run the IP camera at 1080P, use either of the following commands:
01.mipi-rtsp.sh
01.mipi-rtsp.sh 1920 1080
To run the IP camera at 4K resolution, use the following command:
01.mipi-rtsp.sh 3840 2160
This essentially performs the same processing as the 02.mipi-dp.sh script, but instead of displaying the results to the DisplayPort monitor, it encodes the video with the embedded VCU, and streams it to the network via the RTSP protocol.
In order to capture the RTSP stream, VLC or FFMPEG can be used.
I chose to test with ffmpeg. On my linux machine, I first needed to install it:
$ sudo apt install ffmpeg
Then, capturing the RTSP stream was super easy with the following command:
$ ffplay rtsp://{kv260_ip_address}:554/test
Real-Time 4K Object Detection
The third script I tested was the 04.file-ssd-dp.sh script. This example makes use of the ssd_adas_pruned_0_95 model for object detection.
To run the example at 1080P resolution, use the following command:
04.file-ssd-dp.sh /media/sd-mmcblk0p1/videos/traffic_with_rain.nv12.1920x1080.h264 1920 1080
To run the example at 4K resolution, use the following command:
04.file-ssd-dp.sh /media/sd-mmcblk0p1/videos/traffic_with_rain.nv12.3840x2160.h264 3840 2160
The following command achieves the same result by calling the smartcam executable directly:
/opt/xilinx/bin/smartcam --file /media/sd-mmcblk0p1/videos/traffic_with_rain.nv12.3840x2160.h264 --target dp --width 3840 --height 2160 -r 30 --aitask ssd
We can also call the smartcam executable directly with the walking_humans video, specifying "facedetect" as the aitask:
/opt/xilinx/bin/smartcam --file /media/sd-mmcblk0p1/videos/walking_humans.nv12.3840x2160.h264 --target dp --width 3840 --height 2160 -r 30 --aitask facedetect
We can also implement a pedestrian detection by specifying "refinedet" as the aitask:
/opt/xilinx/bin/smartcam --file /media/sd-mmcblk0p1/videos/walking_humans.nv12.3840x2160.h264 --target dp --width 3840 --height 2160 -r 30 --aitask refinedet
A python version of the IP camera is available as a jupyter notebook.
In order to access the notebook, we must first launch Jupyter-Lab.
I noticed that I needed to be a user other than "root" to do this, so I first switched back to the "petalinux" user.
root@xilinx-k26-starterkit-2020_2:~# su petalinux
We also need to know the IP address of the KV260 embedded platform, which can be obtained with "ifconfig"
root@xilinx-k26-starterkit-2020_2:~# ifconfig
...
inet addr:192.168.0.152 Bcast:192.168.0.255 Mask:255.255.255.0
...
Finally, launch Jupyter-Lab with the following command, specifying the correct IP address of your KV260:
petalinux@xilinx-k26-starterkit-2020_2:~$ jupyter-lab --ip=192.168.0.152 &
...
To access the notebook, open this file in a browser:
file:///home/petalinux/.local/share/jupyter/runtime/nbserver-1145-open.html
Or copy and paste one of these URLs:
http://192.168.0.152:8888/?token=6dae07dd4168bf26850904cadd43526fe146fcc59de4b384
or http://127.0.0.1:8888/?token=6dae07dd4168bf26850904cadd43526fe146fcc59de4b384
On the PC side, the jupyter notebook can be accessed by copying the link that contains the token. For example, in my case this was:
http://192.168.0.152:8888/?token=6dae07dd4168bf26850904cadd43526fe146fcc59de4b384
To navigate to the smartcam notebook:
- double-click on the "smartcam" directory in the explorer (left side)
- then, double-click on the "smartcam.ipynb" file
Two options can be configured in the notebook:
- aitask : which can be set to "facedetect" or "refinedet"
- DP_output : which can be set to "True" (for output to DP) or "False" (for output to RTSP)
Next, the appropriate python packages are imported.
The gstreamer pipeline is created, section by section, accompanied with very informative documentation.
For reference, I captured the complete gstreamer pipeline for the case of DP output here:
mediasrcbin media-device=/dev/media1 v4l2src0::io-mode=dmabuf v4l2src0::stride-align=256 ! video/x-raw, width=1920, height=1080, format=NV12, framerate=30/1 ! tee name=t ! queue ! ivas_xmultisrc kconfig="/opt/xilinx/share/ivas/smartcam/facedetect/preprocess.json" ! queue ! ivas_xfilter kernels-config="/opt/xilinx/share/ivas/smartcam/facedetect/aiinference.json" ! ima.sink_master ivas_xmetaaffixer name=ima ima.src_master ! fakesink t. ! queue ! ima.sink_slave_0 ima.src_slave_0 ! queue ! ivas_xfilter kernels-config="/opt/xilinx/share/ivas/smartcam/facedetect/drawresult.json" ! queue ! kmssink driver-name=xlnx plane-id=39 sync=false fullscreen-overlay=true
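The two notebook options map directly onto this pipeline string: the aitask name selects which JSON config directory the IVAS plug-ins load, and DP_output selects the sink. The sketch below is my own simplification (not the notebook's actual code); the plug-in parameters are taken from the pipeline above, the source and metaaffixer sections are elided, and the RTSP branch is reduced to a placeholder.

```python
# Hedged sketch: how aitask and DP_output shape the gstreamer pipeline.
CFG = "/opt/xilinx/share/ivas/smartcam"

def build_pipeline(aitask="facedetect", dp_output=True):
    sink = ("kmssink driver-name=xlnx plane-id=39 sync=false "
            "fullscreen-overlay=true") if dp_output else "<rtsp branch>"
    return (
        "mediasrcbin media-device=/dev/media1 ... ! "
        f'ivas_xmultisrc kconfig="{CFG}/{aitask}/preprocess.json" ! '
        f'ivas_xfilter kernels-config="{CFG}/{aitask}/aiinference.json" ! '
        "... ! "
        f'ivas_xfilter kernels-config="{CFG}/{aitask}/drawresult.json" ! '
        f"{sink}"
    )

# Switching aitask swaps in the refinedet JSON configs.
pipeline = build_pipeline("refinedet", dp_output=True)
print("refinedet/preprocess.json" in pipeline)  # True
```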
The jupyter notebook can be changed to send the output to a RTSP stream. I will leave this as an exercise to the reader.
Suffice to say that there is a LOT going on in this jupyter notebook !
It is a great vehicle for getting familiar with gstreamer. But even more interesting is that we have received our first glimpse at Xilinx's new Intelligent Video Analytics Software (IVAS) framework.
IVAS - a new Intelligent Video Analytics Software framework
If we dive deeper into the Smart Camera hardware, we discover much more infrastructure in place than just the DPU core. In fact, we have an example of what Xilinx is calling Whole Application Acceleration (WAA).
In real world applications, traditional computer vision functions are omni-present, and complement the newer state-of-the-art AI algorithms. In order to achieve real-time performance, these traditional computer vision functions need to be accelerated in addition to the AI algorithms.
I found the following on-line documentation to be very helpful in understanding the hardware design for the "Smart Camera" app.
https://xilinx.github.io/kria-apps-docs/master/docs/smartcamera/docs/hw_arch_accel.html
As expected, we can see that the design includes the DPU core:
The design also implements the following Pre-Process accelerator:
The Pre-Process accelerator implements the following three functions in hardware:
- NV12 to BGR : performs color format conversion from NV12 to BGR
- Resizing: Scales down the original 4K/1080p frame to lower resolution
- Quantizing: Performs normalization of BGR pixels (scaling and shifting)
The three functions are implemented with the Vitis Vision Library:
- https://github.com/Xilinx/Vitis_Libraries/tree/master/vision/L2/examples/cvtcolor
- https://github.com/Xilinx/Vitis_Libraries/tree/master/vision/L2/examples/resize
- https://github.com/Xilinx/Vitis_Libraries/tree/master/vision/L3/benchmarks/blobfromimage
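As a software reference for the "Quantizing" step that the Pre-Process block performs in hardware, each BGR pixel is normalized with a per-channel mean shift and scale. The sketch below is only an illustration; the mean and scale values are placeholders, since the actual values come from the model's preprocess.json.

```python
# Software illustration of the Pre-Process "Quantizing" stage:
# per-channel mean subtraction and scaling of a BGR pixel.
# Mean/scale values here are placeholders, not the model's real ones.
def quantize_pixel(bgr, mean=(104, 117, 123), scale=(1.0, 1.0, 1.0)):
    return tuple((c - m) * s for c, m, s in zip(bgr, mean, scale))

print(quantize_pixel((110, 120, 130)))  # (6.0, 3.0, 7.0)
```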
It is worth mentioning that Xilinx has several other accelerated libraries for various applications.
Now that we better understand the underlying hardware, we can start to appreciate what IVAS is bringing to the software developer.
IVAS is a gstreamer based plug-in that allows users to:
- access the DPU via the Vitis-AI-Library
- access accelerated functions from the Vitis Vision Library
I found the following on-line documentation very helpful in understanding how IVAS enables a gstreamer based application to access the hardware.
https://xilinx.github.io/kria-apps-docs/master/docs/smartcamera/docs/sw_arch_accel.html
We can now see a more elaborate version of the simplified block diagram I suggested previously.
There is a lot more to cover with respect to IVAS, but this is outside the intended scope of this write-up.
What Next ?
In summary, I was very impressed by the Xilinx KV260 Vision AI Starter Kit.
Despite this very successful exploration, I only scratched the surface of what the KV260 can achieve.
I still need to experiment with the "AI Box - ReID" and "Defect Detect" apps.
What other features would you like to see covered ?
References
User Guides & Data Sheets
- Kria KV260 Vision AI Starter Kit User Guide (UG1089)
- Kria SOM Carrier Card Design Guide (UG1091)
- Kria KV260 Vision AI Starter Kit Data Sheet (DS986)
- Kria K26 SOM Data Sheet (DS987)
On-line Resources
- Kria Product page
https://www.xilinx.com/products/som/kria.html
- KV260 Getting Started page
https://www.xilinx.com/products/som/kria/kv260-vision-starter-kit/kv260-getting-started/getting-started.html
- KV260 Apps documentation
https://xilinx.github.io/kria-apps-docs/
- Kria App Store
https://www.xilinx.com/products/app-store/kria.html
- Kria K26 SOM wiki page
https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/1641152513/Kria+K26+SOM
- KV260 Starter Kit Vitis Platforms
https://github.com/Xilinx/kv260-vitis
2021/04/20 - Initial Version, Kria™ product launch