Avnet recently released Vitis 2019.2 platforms for several of their hardware platforms. These platforms also support the Vitis-AI flow from Xilinx.
The Xilinx Vitis-AI repository ( github.com/Xilinx/Vitis-AI ) provides an excellent tutorial, DPU-TRD, on targeting the DPU AI engine to a custom Vitis platform. The steps required to recompile the models and applications for a DPU architecture other than B4096, however, are not clear. This tutorial provides instructions on targeting the DPU AI engine (B2304) to the Avnet Vitis platforms, as well as recompiling the models and applications for this B2304 DPU architecture.
This guide provides detailed instructions for targeting the Xilinx Vitis-AI 1.0 flow to the following Avnet Vitis 2019.2 platforms:
- Ultra96-V2 Development Board
- UltraZed-EV SOM (7EV) + FMC Carrier Card
- UltraZed-EG SOM (3EG) + IO Carrier Card
- UltraZed-EG SOM (3EG) + PCIEC Carrier Card
Once the tools have been set up, there are five (5) main steps to targeting an AI application to one of the Avnet platforms:
- 1 - Build the Hardware Design
- 2 - Compile the Model from the Xilinx AI Model Zoo
- 3 - Build the AI applications
- 4 - Create the SD card content
- 5 - Execute the AI applications on hardware
IMPORTANT NOTE : The Ultra96-V2 Development Board requires a PMIC firmware update. See section "Known Issues - Ultra96-V2 PMIC firmware update" below for more details.
IMPORTANT UPDATE : Please note that I have posted a new version of this tutorial for Vitis-AI 1.1, in two parts:
Vitis-AI 1.1 flow for Avnet Vitis platforms - Part 1
Vitis-AI 1.1 flow for Avnet Vitis platforms - Part 2
Setup - Install the Xilinx Tools
This project requires the following tools:
- Vitis 2019.2 Unified Software Platform
- Docker
- Vitis-AI v1.0
Refer to Xilinx Vitis Unified Software Platform for instructions on installing Vitis 2019.2 on your linux machine.
Refer to Install Docker for instructions on installing Docker on your linux machine.
Next, clone v1.0 of the Vitis-AI git repository.
1. Clone Xilinx’s Vitis-AI github repository:
$ git clone https://github.com/Xilinx/Vitis-AI
$ cd Vitis-AI
$ git checkout v1.0
$ export VITIS_AI_HOME="$PWD"
Setup - Install the Avnet Vitis platforms
This guide can be used for any of the Avnet Vitis platforms, which will be denoted by {platform}.
- ULTRA96V2 : Ultra96-V2 Development Board
- UZ7EV_EVCC : UltraZed-EV SOM (7EV) + FMC Carrier Card
- UZ3EG_IOCC : UltraZed-EG SOM (3EG) + IO Carrier Card
- UZ3EG_PCIEC : UltraZed-EG SOM (3EG) + PCIEC Carrier Card
1. Download the Vitis platform for the appropriate board using one of the links below, and extract to the hard drive of your linux machine:
- ULTRA96V2 : http://avnet.me/ultra96v2-vitis-2019.2
- UZ7EV_EVCC : http://avnet.me/uz7ev-evcc-vitis-2019.2
- UZ3EG_IOCC : http://avnet.me/uz3eg-iocc-vitis-2019.2
- UZ3EG_PCIEC : http://avnet.me/uz3eg-pciec-vitis-2019.2
2. Specify the location of the Vitis platform by creating an SDX_PLATFORM environment variable that points to the location of the .xpfm file
For the ULTRA96V2 platform, this should look similar to the following:
$ export SDX_PLATFORM=/home/Avnet/vitis/platform_repo/ULTRA96V2/ULTRA96V2.xpfm
For the UZ7EV_EVCC platform, this should look similar to the following:
$ export SDX_PLATFORM=/home/Avnet/vitis/platform_repo/UZ7EV_EVCC/UZ7EV_EVCC.xpfm
For the rest of this document, the platform will be denoted by {platform}.
Replace all instances of “{platform}” with the appropriate platform name, such as “ULTRA96V2”.
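The platform selection can also be scripted. The following sketch exports SDX_PLATFORM for a given platform name and fails early if the .xpfm file is missing; the default platform_repo path is an assumption based on the examples above, so adjust it to wherever you extracted the archive.

```shell
# Hypothetical helper: export SDX_PLATFORM for a given platform name.
# PLATFORM_REPO is an assumed install location -- adjust as needed.
set_platform() {
  PLATFORM_REPO=${PLATFORM_REPO:-/home/Avnet/vitis/platform_repo}
  xpfm="${PLATFORM_REPO}/$1/$1.xpfm"
  if [ -f "$xpfm" ]; then
    export SDX_PLATFORM="$xpfm"
    echo "SDX_PLATFORM=$SDX_PLATFORM"
  else
    echo "ERROR: $xpfm not found" >&2
    return 1
  fi
}
```

For example, `set_platform ULTRA96V2` sets the variable shown in the first example above.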
Part 1 - Build the Hardware Project
The creation of the hardware project is well documented on Xilinx’s Vitis-AI github repository, specifically the DPU-TRD section.
https://github.com/Xilinx/Vitis-AI/tree/v1.0/DPU-TRD
DPU TRD Vitis Flow
https://github.com/Xilinx/Vitis-AI/blob/v1.0/DPU-TRD/prj/Vitis/README.md
1. Make a copy of the DPU-TRD directory for your platform:
$ cd $VITIS_AI_HOME
$ cp -r DPU-TRD DPU-TRD-{platform}
$ export TRD_HOME=$VITIS_AI_HOME/DPU-TRD-{platform}
$ cd $TRD_HOME/prj/Vitis
2. Edit the dpu_conf.vh file, to specify the architecture and configuration of the DPU, according to the available resources on the Vitis platform.
$ cd $TRD_HOME/prj/Vitis
$ vi dpu_conf.vh
For example, to target a DPU with the B2304 architecture, keeping all other parameters at their defaults, make the following change to the dpu_conf.vh file:
//`define B4096
`define B2304
3. Edit the config_file/prj_config file, to specify the connectivity of the DPU (and optionally SFM) cores
$ vi config_file/prj_config
First, specify the number of DPU cores to instantiate in the design as 1.
[connectivity]
...
nk=dpu_xrt_top:1
Specify which frequencies to use for the 1x and 2x clocks
The Avnet Vitis platforms (ULTRA96V2, UZ7EV_EVCC, UZ3EG_IOCC, and UZ3EG_PCIEC) have the following clocks defined in their hardware design:
We will use the 150MHz & 300MHz clocks to connect the DPU.
[clock]
id=0:dpu_xrt_top_1.aclk
id=1:dpu_xrt_top_1.ap_clk_2
NOTE : the dpu_xrt_top_1.ap_clk_2 must be 2X the frequency of dpu_xrt_top_1.aclk
If also targeting the SFM core, specify the 150MHz clock to connect the SFM.
id=0:sfm_xrt_top_1.aclk
NOTE : For the ULTRA96V2, targeting the DPU (B2304) + SFM will not fit in the available resources
In order to connect up the DPU (and SFM) core(s), we also need to specify which AXI interconnects to use.
[connectivity]
sp=dpu_xrt_top_1.M_AXI_GP0:HPC0
sp=dpu_xrt_top_1.M_AXI_HP0:HP0
sp=dpu_xrt_top_1.M_AXI_HP2:HP1
NOTE : the same port can be specified twice, in which case additional AXI interconnect will be added if needed.
Leave the other settings the same.
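Putting the fragments above together, a complete set of DPU-related entries in prj_config for a single B2304 DPU (no SFM) might look like the following sketch; verify it against the prj_config file shipped with the TRD, and leave any other existing settings in place.

```
[clock]
id=0:dpu_xrt_top_1.aclk
id=1:dpu_xrt_top_1.ap_clk_2

[connectivity]
sp=dpu_xrt_top_1.M_AXI_GP0:HPC0
sp=dpu_xrt_top_1.M_AXI_HP0:HP0
sp=dpu_xrt_top_1.M_AXI_HP2:HP1
nk=dpu_xrt_top:1
```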
4. Build one of the following targets
DPU core only (recommended for ULTRA96V2, UZ3EG_IOCC, and UZ3EG_PCIEC)
$ make KERNEL=DPU DEVICE={platform}
DPU core + SFM core (recommended for UZ7EV_EVCC)
$ make KERNEL=DPU_SM DEVICE={platform}
The make will build the individual DPU (and SFM) core(s), then build the complete hardware project.
The Vivado project will be located in the following directory:
DPU-TRD-{platform}/prj/Vitis/binary_container_1/link/vivado/vpl/prj/prj.xpr
5. The output binaries will be located in the following directory:
$ tree binary_container_1/sd_card
├── BOOT.BIN
├── dpu.xclbin
├── image.ub
├── README.txt
└── {platform}.hwh
NOTE : The .hwh file contains details about the hardware implementation, and will be used during model compilation.
Part 2 - Compile the Models from the Xilinx Model Zoo
The Xilinx Model Zoo is a repository of free pre-trained deep learning models, optimized for inference deployment on Xilinx™ platforms.
This project concentrates on the models for which example applications have been provided. It is important to know the correlation between model and application. This table includes a non-exhaustive list of applications that were verified with corresponding models from the model zoo.
1. The first step is to download the pre-trained models from the Xilinx Model Zoo:
$ cd $VITIS_AI_HOME/AI-Model-Zoo
$ source ./get_model.sh
This will download version 1.0 of the model zoo ( all_models_1.0.zip )
2. Launch the tools docker from the Vitis-AI directory
$ cd $VITIS_AI_HOME
$ sh -x docker_run.sh xilinx/vitis-ai:tools-1.0.0-cpu
3. Within the docker session, launch the "vitis-ai-caffe" Conda environment
$ conda activate vitis-ai-caffe
(vitis-ai-caffe) $
4. Create a modelzoo directory, and copy the hardware handoff (.hwh) file
(vitis-ai-caffe) $ cd DPU-TRD-{platform}
(vitis-ai-caffe) $ mkdir modelzoo
(vitis-ai-caffe) $ cd modelzoo
(vitis-ai-caffe) $ cp ../prj/Vitis/binary_container_1/sd_card/{platform}.hwh .
5. Use the dlet tool to generate your .dcf file
(vitis-ai-caffe) $ dlet -f {platform}.hwh
6. The previous step will generate a dcf with a name similar to dpu-11-18-2019-18-45.dcf. Rename this file to {platform}.dcf
(vitis-ai-caffe) $ mv dpu*.dcf {platform}.dcf
7. Create a file named “custom.json” with the following content
{"target": "dpuv2", "dcf": "./{platform}.dcf", "cpu_arch": "arm64"}
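The custom.json file can also be generated from the shell, which keeps the dcf path in sync with the platform name; the ULTRA96V2 default below is just an example value.

```shell
# Generate custom.json for the compiler; set PLATFORM to the board in use.
# The dcf path must match the file renamed in the previous step.
PLATFORM=${PLATFORM:-ULTRA96V2}
cat > custom.json <<EOF
{"target": "dpuv2", "dcf": "./${PLATFORM}.dcf", "cpu_arch": "arm64"}
EOF
cat custom.json
```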
8. To create a generic recipe for compiling a caffe model, create a script named “compile_cf_model.sh” with the following content
# compile_cf_model.sh : compile a caffe model from the Model Zoo
# usage : source ./compile_cf_model.sh <net_name> <modelzoo_name>
model_name=$1
modelzoo_name=$2
vai_c_caffe \
--prototxt ../../AI-Model-Zoo/models/${modelzoo_name}/compiler/deploy.prototxt \
--caffemodel ../../AI-Model-Zoo/models/${modelzoo_name}/compiler/deploy.caffemodel \
--arch ./custom.json \
--output_dir ./compiled_output/${modelzoo_name} \
--net_name ${model_name} \
--options "{'mode': 'normal'}"
9. Create a directory for the compiled models (that matches the one specified in the previous script)
(vitis-ai-caffe) $ mkdir compiled_output
10. To compile the resnet50 model, use the following command:
(vitis-ai-caffe) $ source ./compile_cf_model.sh resnet50 cf_resnet50_imagenet_224_224_7.7G
This will create a dpu_resnet50_0.elf file in the following directory:
./compiled_output/cf_resnet50_imagenet_224_224_7.7G/dpu_resnet50_0.elf
11. To compile the densebox model, use the following command:
(vitis-ai-caffe) $ source ./compile_cf_model.sh densebox cf_densebox_wider_360_640_1.11G
This will create a dpu_densebox.elf file in the following directory:
./compiled_output/cf_densebox_wider_360_640_1.11G/dpu_densebox.elf
12. To compile the other caffe models used in the next sections, use the following commands:
(vitis-ai-caffe) $ source ./compile_cf_model.sh yolo dk_yolov3_cityscapes_256_512_0.9_5.46G
(vitis-ai-caffe) $ source ./compile_cf_model.sh ssd cf_ssdtraffic_360_480_0.9_11.6G
(vitis-ai-caffe) $ source ./compile_cf_model.sh segmentation cf_fpn_cityscapes_256_512_8.9G
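Steps 10 through 12 can also be batched into a single loop; each entry pairs the net name with its Model Zoo directory. The guard on compile_cf_model.sh is just there to make the sketch safe to paste outside the docker session.

```shell
# Batch-compile all five caffe models with the script from step 8.
# Each entry is "<net_name>:<modelzoo_name>".
models="
resnet50:cf_resnet50_imagenet_224_224_7.7G
densebox:cf_densebox_wider_360_640_1.11G
yolo:dk_yolov3_cityscapes_256_512_0.9_5.46G
ssd:cf_ssdtraffic_360_480_0.9_11.6G
segmentation:cf_fpn_cityscapes_256_512_8.9G
"
for m in $models; do
  net_name=${m%%:*}   # text before the first ':'
  zoo_name=${m#*:}    # text after the first ':'
  if [ -f ./compile_cf_model.sh ]; then
    source ./compile_cf_model.sh "$net_name" "$zoo_name"
  fi
done
```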
13. Verify the contents of the directory with tree
(vitis-ai-caffe) $ tree
├── compiled_output
│ ├── cf_densebox_wider_360_640_1.11G
│ │ ├── densebox_kernel_graph.gv
│ │ └── dpu_densebox.elf
│ ├── cf_fpn_cityscapes_256_512_8.9G
│ │ ├── dpu_segmentation.elf
│ │ └── segmentation_kernel_graph.gv
│ ├── cf_resnet50_imagenet_224_224_7.7G
│ │ ├── dpu_resnet50_0.elf
│ │ └── resnet50_kernel_graph.gv
│ ├── cf_ssdtraffic_360_480_0.9_11.6G
│ │ ├── dpu_ssd.elf
│ │ └── ssd_kernel_graph.gv
│ └── dk_yolov3_cityscapes_256_512_0.9_5.46G
│ ├── dpu_yolo.elf
│ └── yolo_kernel_graph.gv
├── compile_cf_model.sh
├── custom.json
├── {platform}.dcf
└── {platform}.hwh
6 directories, 15 files
14. Exit the tools docker
(vitis-ai-caffe) $ exit
Part 3 - Compile the AI Applications
Vitis-AI 1.0 provides two different APIs: the DNNDK API and the VART API.
The DNNDK API is the low-level API used to communicate with the AI engine (DPU). This API is the recommended API for users that will be creating their own custom neural networks, targeted to the Xilinx devices.
The Vitis-AI-Library API, also called Vitis-AI RunTime (VART), is a higher level of abstraction that simplifies development of AI applications. This API is recommended for users wishing to leverage the existing pre-trained models from the Xilinx Model Zoo in their custom applications.
Part 3.1 - Compile the DNNDK API based Applications
The DNNDK API is the low-level API used to communicate with the AI engine (DPU). This API is the recommended API for users that will be creating their own custom neural networks, targeted to the Xilinx devices.
1. Launch the runtime docker from the Vitis-AI directory
$ cd $VITIS_AI_HOME
$ sh -x docker_run.sh xilinx/vitis-ai:runtime-1.0.0-cpu
2. Create a directory for the applications, and copy over the resnet50 and face_detection projects
$ cd DPU-TRD-{platform}
$ mkdir dnndk_samples
$ cp -r ../mpsoc/dnndk_samples_zcu104/common dnndk_samples/.
$ cp -r ../mpsoc/dnndk_samples_zcu104/resnet50 dnndk_samples/.
$ cp -r ../mpsoc/dnndk_samples_zcu104/face_detection dnndk_samples/.
3. Create the dataset directory, with images, for use by the resnet50 and inception-v1 applications
$ cp -r ../mpsoc/dnndk_samples_zcu104/dataset dnndk_samples/.
$ cp app/Vitis/samples/resnet50/img/* dnndk_samples/dataset/image500_640_480/.
RESNET50
4. For the resnet50 application, replace the files in the model directory with the ones we previously built
$ cd dnndk_samples/resnet50
$ rm model/*
$ cp ../../modelzoo/compiled_output/cf_resnet50_imagenet_224_224_7.7G/* model/.
5. For the resnet50 application, edit the “Makefile” to change the model name from dpu_resnet50.elf to dpu_resnet50_0.elf
MODEL = $(CUR_DIR)/model/dpu_resnet50_0.elf
6. For the resnet50 application, edit the “src/main.cc” file to change the DPU kernel name from “resnet50” to “resnet50_0”
#define KRENEL_RESNET50 "resnet50_0"
7. For the resnet50 application, build the application with the “make” command
$ make
FACE_DETECTION
8. For the face_detection application, replace the files in the model directory with the ones we previously built
$ cd ../face_detection
$ rm model/*
$ cp ../../modelzoo/compiled_output/cf_densebox_wider_360_640_1.11G/* model/.
9. For the face_detection application, edit the src/main.cc file to add the following two function calls after the VideoCapture initialization:
VideoCapture camera(0);
if (!camera.isOpened()) {
cerr << "Open camera error!" << endl;
exit(-1);
}
camera.set(CV_CAP_PROP_FRAME_WIDTH, 640);
camera.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
10. For the face_detection application, build the application with the “make” command
$ make
11. Exit the runtime docker
$ exit
Part 3.2 - Compile the Vitis-AI-Library based Applications
The Vitis-AI-Library API, also called Vitis-AI RunTime (VART), is a higher level of abstraction that simplifies development of AI applications. This API is recommended for users wishing to leverage the existing pre-trained models from the Xilinx Model Zoo in their custom applications.
1. If not already done, launch the runtime docker from the Vitis-AI directory
$ cd $VITIS_AI_HOME
$ sh -x docker_run.sh xilinx/vitis-ai:runtime-1.0.0-cpu
2. Create a directory for the applications, and copy over the resnet50 and face_detection projects
$ cd DPU-TRD-{platform}
$ mkdir vitis_ai_samples
$ cp -r ../mpsoc/vitis_ai_samples_zcu104/common vitis_ai_samples/.
$ cp -r ../mpsoc/vitis_ai_samples_zcu104/resnet50 vitis_ai_samples/.
$ cp -r ../mpsoc/vitis_ai_samples_zcu104/adas_detection vitis_ai_samples/.
$ cp -r ../mpsoc/vitis_ai_samples_zcu104/video_analysis vitis_ai_samples/.
$ cp -r ../mpsoc/vitis_ai_samples_zcu104/segmentation vitis_ai_samples/.
3. Create an images directory, with images, for use by the resnet50 and inception-v1 applications
$ mkdir vitis_ai_samples/images
$ cp app/Vitis/samples/resnet50/img/* vitis_ai_samples/images/.
RESNET50
4. For the resnet50 application, replace the files in the model directory with the ones we previously built
$ cd vitis_ai_samples/resnet50
$ rm model/*
$ cp ../../modelzoo/compiled_output/cf_resnet50_imagenet_224_224_7.7G/* model/.
5. For the resnet50 application, edit the “Makefile” to change the model name from dpu_resnet50.elf to dpu_resnet50_0.elf
MODEL = $(CUR_DIR)/model/dpu_resnet50_0.elf
6. For the resnet50 application, edit the “dpuv2_rundir/meta.json” file to change the DPU kernel name from “resnet50” to “resnet50_0”
"vitis_dpu_kernel": "resnet50_0",
7. For the resnet50 application, build the application with the “make” command
$ make
ADAS_DETECTION
8. For the adas_detection application, replace the files in the model directory with the ones we previously built
$ cd ../adas_detection
$ rm model/*
$ cp ../../modelzoo/compiled_output/dk_yolov3_cityscapes_256_512_0.9_5.46G/* model/.
9. For the adas_detection application, build the application with the “make” command
$ make
SEGMENTATION & VIDEO_ANALYSIS
To re-build these applications, perform steps similar to the ADAS_DETECTION application example. Don’t forget to replace the contents of the model directory with the appropriate compiled model.
HINT : Refer back to the table in “Part 2 - Compile the Models from the Xilinx Model Zoo” in order to determine which compiled model to use for each of the “segmentation” and “video_analysis” applications.
10. Exit the runtime docker
$ exit
Part 4 - Create the SD card content
1. Create a “sdcard” directory
$ cd DPU-TRD-{platform}
$ mkdir sdcard
2. Copy the design files (hardware + petalinux) for the DPU design to the “sdcard” directory.
$ cp prj/Vitis/binary_container_1/sd_card/* sdcard/.
3. Copy the applications to the “sdcard” directory
$ cp -r dnndk_samples sdcard/.
$ cp -r vitis_ai_samples sdcard/.
4. Copy the Vitis-AI runtime embedded package to the “sdcard” directory
a. Launch the runtime docker from the Vitis-AI directory
$ cd $VITIS_AI_HOME
$ sh -x docker_run.sh xilinx/vitis-ai:runtime-1.0.0-cpu
b. Copy the “xilinx_vai_board_package” directory to the “sdcard” directory
$ cp -r /opt/vitis_ai/xilinx_vai_board_package DPU-TRD-{platform}/sdcard/.
c. Exit the runtime docker
$ exit
d. Return to the DPU-TRD-{platform} directory
$ cd DPU-TRD-{platform}
5. At this point, your “sdcard” directory should have the following contents
$ tree sdcard
sdcard
├── BOOT.BIN
├── dnndk_samples
│ ├── common
│ │ ├── dputils.cpp
│ │ ├── dputils.h
│ │ └── dputils.py
│ ├── dataset
│ │ └── image500_640_480
│ │ ├── bellpeppe-994958.JPEG
│ │ ├── greyfox-672194.JPEG
│ │ ├── irishterrier-696543.JPEG
│ │ ├── jinrikisha-911722.JPEG
│ │ └── words.txt
│ ├── face_detection
│ │ ├── build
│ │ │ ├── dputils.o
│ │ │ └── main.o
│ │ ├── face_detection
│ │ ├── Makefile
│ │ ├── model
│ │ │ ├── densebox_kernel_graph.gv
│ │ │ └── dpu_densebox.elf
│ │ └── src
│ │ └── main.cc
│ └── resnet50
│ ├── build
│ │ ├── dputils.o
│ │ └── main.o
│ ├── Makefile
│ ├── model
│ │ ├── dpu_resnet50_0.elf
│ │ └── resnet50_kernel_graph.gv
│ ├── resnet50
│ └── src
│ └── main.cc
├── dpu.xclbin
├── image.ub
├── README.txt
├── rootfs.tar.gz
├── {platform}.hwh
├── vitis_ai_samples
│ ├── adas_detection
│ │ ├── adas_detection
│ │ ├── build
│ │ │ ├── common.o
│ │ │ └── main.o
│ │ ├── dpuv2_rundir
│ │ │ └── meta.json
│ │ ├── Makefile
│ │ ├── model
│ │ │ ├── dpu_yolo.elf
│ │ │ └── yolo_kernel_graph.gv
│ │ ├── src
│ │ │ ├── main.cc
│ │ │ └── utils.h
│ │ └── video
│ │ └── adas.avi
│ ├── common
│ │ ├── common.cpp
│ │ └── common.h
│ ├── resnet50
│ │ ├── build
│ │ │ ├── common.o
│ │ │ └── main.o
│ │ ├── dpuv2_rundir
│ │ │ └── meta.json
│ │ ├── Makefile
│ │ ├── model
│ │ │ ├── dpu_resnet50_0.elf
│ │ │ └── resnet50_kernel_graph.gv
│ │ ├── resnet50
│ │ ├── src
│ │ │ └── main.cc
│ │ └── words.txt
│ ├── segmentation
│ │ ├── build
│ │ │ ├── common.o
│ │ │ └── main.o
│ │ ├── dpuv2_rundir
│ │ │ └── meta.json
│ │ ├── Makefile
│ │ ├── model
│ │ │ ├── dpu_segmentation.elf
│ │ │ └── segmentation_kernel_graph.gv
│ │ ├── segmentation
│ │ ├── src
│ │ │ └── main.cc
│ │ └── video
│ │ └── traffic.mp4
│ └── video_analysis
│ ├── build
│ │ ├── common.o
│ │ └── main.o
│ ├── dpuv2_rundir
│ │ └── meta.json
│ ├── Makefile
│ ├── model
│ │ ├── dpu_ssd.elf
│ │ └── ssd_kernel_graph.gv
│ ├── src
│ │ └── main.cc
│ ├── video
│ │ └── structure.mp4
│ └── video_analysis
└── xilinx_vai_board_package
├── install.sh
└── pkgs
├── bin
│ ├── ddump
│ ├── dexplorer
│ └── dsight
├── include
│ ├── dnndk
│ │ ├── dnndk.h
│ │ └── n2cube.h
│ └── vai
│ ├── dpu_runner.hpp
│ ├── runner.hpp
│ ├── tensor_buffer.hpp
│ ├── tensor.hpp
│ └── xdpurunner.h
├── lib
│ ├── echarts.js
│ ├── libdpuaol.so
│ ├── libdsight.a
│ ├── libhineon.so
│ └── libn2cube.so
└── python
└── Edge_Vitis_AI-1.0-py2.py3-none-any.whl
45 directories, 84 files
6. Copy the contents of the “sdcard” directory to the boot partition of the sdcard
7. If applicable (i.e. ULTRA96V2), extract the “rootfs.tar.gz” to the second partition of the sdcard
Part 5 - Execute the AI applications on hardware
1. Boot the target board with the sdcard that was created in the previous section
2. If prompted for a login, specify “root” as login and password.
3. Navigate to the sdcard folder
a. For the ULTRA96V2, this can be done as follows:
$ cd /run/media/mmcblk0p1
b. For the UZ7EV_EVCC, UZ3EG_IOCC, and UZ3EG_PCIEC, this can be done as follows:
$ cd /run/media/mmcblk1p1
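Since the mount point differs between boards, a small helper (hypothetical, not part of the BSP) can pick whichever mount point actually exists:

```shell
# Echo the first existing directory from the candidate list, so the same
# script works on boards that mount the SD card at mmcblk0p1 or mmcblk1p1.
find_sdcard() {
  for p in "$@"; do
    if [ -d "$p" ]; then
      echo "$p"
      return 0
    fi
  done
  return 1
}
```

For example: `cd "$(find_sdcard /run/media/mmcblk0p1 /run/media/mmcblk1p1)"`.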
4. Install the Vitis-AI embedded package
$ cd xilinx_vai_board_package
$ source ./install.sh
The install.sh script will probably fail to copy the dpu.xclbin file to the /usr/lib directory, since it assumes it is located in the /mnt directory. We will copy it manually in a later step.
cp: cannot stat ‘/mnt/dpu.xclbin’: No such file or directory
The install.sh script may also fail to install the python support, which is not critical for this tutorial:
Warning: pip3 command not found, skip install python support
5. If prompted for a login, again, specify “root” as login and password
6. Re-navigate to the sdcard directory
7. Copy the dpu.xclbin file to the /usr/lib directory
$ cp dpu.xclbin /usr/lib/.
8. Validate the Vitis-AI board package with the dexplorer utility (output for ULTRA96V2 is shown)
$ dexplorer --whoami
[DPU IP Spec]
IP Timestamp : 2019-11-18 18:45:00
DPU Core Count : 1
[DPU Core Configuration List]
DPU Core : #0
DPU Enabled : Yes
DPU Arch : B2304
DPU Target Version : v1.4.0
DPU Freqency : 300 MHz
Ram Usage : Low
DepthwiseConv : Enabled
DepthwiseConv+Relu6 : Enabled
Conv+Leakyrelu : Enabled
Conv+Relu6 : Enabled
Channel Augmentation : Enabled
Average Pool : Enabled
The following [drm] messages were removed from the above example for clarity.
[ 167.684226] [drm] Pid 3053 opened device
[ 167.688194] [drm] Pid 3053 closed device
[ 167.692508] [drm] Pid 3053 opened device
[ 167.710265] [drm] Finding IP_LAYOUT section header
[ 167.710277] [drm] Section IP_LAYOUT details:
[ 167.715087] [drm] offset = 0x54fcf0
[ 167.719349] [drm] size = 0x58
[ 167.723015] [drm] Finding DEBUG_IP_LAYOUT section header
[ 167.726156] [drm] AXLF section DEBUG_IP_LAYOUT header not found
[ 167.731469] [drm] Finding CONNECTIVITY section header
[ 167.737380] [drm] Section CONNECTIVITY details:
[ 167.742420] [drm] offset = 0x54fd48
[ 167.746943] [drm] size = 0x7c
[ 167.750601] [drm] Finding MEM_TOPOLOGY section header
[ 167.753726] [drm] Section MEM_TOPOLOGY details:
[ 167.758773] [drm] offset = 0x54fbf8
[ 167.763299] [drm] size = 0xf8
[ 167.770725] [drm] No ERT scheduler on MPSoC, using KDS
[ 167.779505] [drm] scheduler config ert(0)
[ 167.779507] [drm] cus(1)
[ 167.783509] [drm] slots(16)
[ 167.786205] [drm] num_cu_masks(1)
[ 167.789157] [drm] cu_shift(16)
[ 167.792635] [drm] cu_base(0xb0000000)
[ 167.795854] [drm] polling(0)
[ 167.801745] [drm] Pid 3053 opened device
[ 167.810917] [drm] Pid 3053 closed device
[ 167.819507] [drm] Pid 3053 closed device
9. Define the DISPLAY environment variable
$ export DISPLAY=:0.0
10. Change the resolution of the DP monitor to 640x480
$ xrandr --output DP-1 --mode 640x480
11. Launch the DNNDK API based sample applications
$ cd dnndk_samples
a. Launch the resnet50 application
$ cd resnet50
$ ./resnet50
b. Wait for application to finish, or Press <CTRL-C> to exit
<CTRL-C>
$ cd ..
c. Launch the face_detection application
$ cd face_detection
$ ./face_detection
d. Press <CTRL-C> to exit the application
<CTRL-C>
$ cd ..
12. Launch the Vitis-AI-Library API based sample applications
$ cd ../vitis_ai_samples
a. Launch the resnet50 application
$ cd resnet50
$ ./resnet50 ./dpuv2_rundir
b. Wait for application to finish, or Press <CTRL-C> to exit
<CTRL-C>
$ cd ..
c. Launch the adas_detection application
$ cd adas_detection
$ ./adas_detection ./video/adas.avi ./dpuv2_rundir
d. Press <CTRL-C> to exit the application
<CTRL-C>
$ cd ..
Solution - Pre-built SD card images
For convenience, pre-built SD card images have been created for the following Avnet platforms:
- ULTRA96V2 : http://avnet.me/ultra96v2-vitis-ai-1.0-image (MD5SUM = 2d583d179afc1609f38bc2c2e7e6caf0)
- UZ7EV_EVCC : http://avnet.me/uz7ev-evcc-vitis-ai-1.0-image (MD5SUM = 68938d0752a16ddebcef1822d3d3b4f5)
- UZ3EG_IOCC : http://avnet.me/uz3eg-iocc-vitis-ai-1.0-image (MD5SUM = 75d47e7bf7d135a9b366fb383ff80395)
- UZ3EG_PCIEC : http://avnet.me/uz3eg-pciec-vitis-ai-1.0-image (MD5SUM = 72fc19c65f88ae990af43f104be55fa0)
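Before writing an image to an SD card, it is worth checking the download against its published MD5 checksum; a small sketch (the function name is ours, and the downloaded filename depends on your browser):

```shell
# Compare a file's MD5 digest against an expected value.
# Usage: verify_md5 <file> <expected_md5>
verify_md5() {
  actual=$(md5sum "$1" | awk '{print $1}')
  if [ "$actual" = "$2" ]; then
    echo "OK: $1"
  else
    echo "MISMATCH: $1 (got $actual, expected $2)" >&2
    return 1
  fi
}
```

For example, for the ULTRA96V2 image: `verify_md5 <downloaded-file> 2d583d179afc1609f38bc2c2e7e6caf0`.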
The following table describes the applications that are provided on the pre-built SD card images, as well as the command used to launch each of them:
NOTE : mmcblk#p1 denotes one of either mmcblk0p1 or mmcblk1p1, depending on which platform is being tested.
1. For the video testing applications, which have been provided to test the video infrastructure without the need for the DPU, navigate to the application’s directory, and execute the provided command.
2. For the DNNDK API based AI applications, navigate to the application’s directory, and execute the provided command.
3. For the Vitis-AI-Library based AI applications, navigate to the application’s directory, and execute the provided command.
NOTE : On the pre-built SD card images, additional applications (inception_v1, mobilenet, ...) have been built.
NOTE : On the pre-built SD card images, there are additional images. These images were taken from the following (previous) version of DNNDK:
https://www.xilinx.com/products/design-tools/ai-inference/ai-developer-hub.html#edge
zcu102-dpu-trd-2018-2-190322.zip
zcu102-dpu-trd-2018-2-190322\images\common\image500_640_480\*
Known Issues - Ultra96-V2 PMIC firmware update
For the case of the Ultra96-V2 Development Board, an important PMIC firmware update is required to run all of the AI applications.
Without the PMIC firmware update, the following AI applications will cause periodic peak current that exceeds the default 4A fault threshold, causing the power on reset to assert, and thus the board to reboot.
- adas_detection
- inception_v1_mt
- resnet50_mt
- segmentation
- video_analysis
The PMIC firmware update increases this fault threshold, and prevents the reboot from occurring.
In order to update the PMIC firmware of your Ultra96-V2 development board, refer to the following instructions:
Known Issues - pose_detection
The following AI application was attempted, but failed to execute:
- pose_detection
The loading of models seems to execute correctly, but the application errors out with a "segmentation fault" message.
The following AI applications were not attempted:
- inception_v1_mt_py
- miniresnet_py
- resnet50_mt_py