This project is part 3 of a 4-part series of projects, in which we will progressively create a Vitis-AI and ROS2 enabled platform for ZUBoard:
- Part 1 : Building the foundational designs
- Part 2 : Combining designs into a common platform
- Part 3 : Adding support for Vitis-AI
- Part 4 : Adding support for ROS2
The motivation for this series of projects is to enable users to create their own custom AI applications.
We will also be hosting a webinar series on creating a fun robotics application:
- Webinar I : Teaching the ZUBoard to recognize hand gestures
- Webinar II : Controlling a robot with the ZUBoard
In order to add accelerators to our foundational designs, we first need to create Vitis platforms for these designs: wrappers that describe which resources are available for use by Vitis.
I have chosen to mimic the directory structure used by the Kria platforms, since I find it easy to understand and easy to expand on.
Understanding kria-vitis-platforms
Before creating our own equivalent of the kria-vitis-platforms repository, we first need to understand its contents.
As of this writing, kria-vitis-platforms has not yet been released for 2022.2, so we will look at the 2022.1 version : https://github.com/Xilinx/kria-vitis-platforms/tree/xlnx_rel_v2022.1
$ cd ~/Avnet_2022_2
$ git clone -b xlnx_rel_v2022.1 https://github.com/Xilinx/kria-vitis-platforms --recursive
kria-vitis-platforms
├── k26
├── kr260
└── kv260
├── overlays
│ ├── dpu_ip
│ └── examples
│ ├── aibox-reid
│ ├── benchmark
│ ├── defect-detect
│ ├── nlp-smartvision
│ └── smartcam
└── platforms
├── scripts
└── vivado
├── kv260_ispMipiRx_rpiMipiRx_DP
├── kv260_ispMipiRx_vcu_DP
├── kv260_ispMipiRx_vmixDP
└── kv260_vcuDecode_vmixDP
The first level of directories correspond to the platforms (k26, kr260, kv260).
For each platform, the directory structure divides into two main sections:
- platforms
- overlays
The platforms directory contains the source code to re-generate the Vivado projects and Vitis platforms (wrappers).
The overlays directory contains accelerator examples that each target one of the platforms. Here is a graphical representation of the platforms and overlays that we can find for the KV260.
We will want to reproduce the "benchmark" overlay, which contains the largest DPU core that fits in the available resources.
Creating avnet-vitis-platforms
If you do not want to re-create the avnet-vitis-platforms directory structure, you can simply clone the final result from the following GitHub repository:
$ cd ~/Avnet_2022_2
$ git clone -b 2022.2 https://github.com/AlbertaBeef/avnet-vitis-platforms --recursive
Otherwise, follow the instructions below to re-create this directory structure yourself.
In our avnet-vitis-platforms directory structure, we will also include a "common" directory for content that is used by all platforms.
avnet-vitis-platforms
├── common
│ ├── overlays
│ │ └── dpu_ip
│ └── platforms
│ └── bdf
├── u96v2
│ ├── overlays
│ └── platforms
└── zub1cg
├── overlays
│ ├── dpu_ip (link to ../../common/overlays/dpu_ip)
│ └── examples
│ └── benchmark
└── platforms
├── scripts
└── vivado
└── zub1cg_sbc_base
This project will focus on the content of the common and zub1cg directories.
$ cd ~/Avnet_2022_2
$ mkdir -p avnet-vitis-platforms
$ cd avnet-vitis-platforms
Creating the common/platforms sub-directory structure
First, we clone the Avnet bdf repository, which contains our board definition files, in the common/platforms directory:
$ mkdir -p common/platforms
$ git clone https://github.com/Avnet/bdf common/platforms/bdf
Creating the common/overlays sub-directory structure
Next, we download the Xilinx DPU IP archive to the common/overlays/dpu_ip directory:
$ mkdir -p common/overlays
$ wget https://www.xilinx.com/bin/public/openDownload?filename=DPUCZDX8G_ip_repo_VAI_v3.0.tar.gz -O DPUCZDX8G_ip_repo_VAI_v3.0.tar.gz
$ tar -xvzf DPUCZDX8G_ip_repo_VAI_v3.0.tar.gz
$ mv DPUCZDX8G_ip_repo_VAI_v3.0 common/overlays/dpu_ip
Creating the zub1cg/platforms sub-directory structure
Now, we want to create the platforms structure for the zub1cg_sbc_base design, by mimicking (copying) files from the kv260 content. When we are done, we will have the following files created:
avnet-vitis-platforms
├── ...
└── zub1cg
├── Makefile
├── ...
└── platforms
├── Makefile
├── scripts
│ └── pfm.tcl
└── vivado
└── zub1cg_sbc_base
├── Makefile
├── scripts
│ ├── main.tcl
│ └── config_bd.tcl
└── xdc
└── pin.xdc
Let's start by creating the directories:
$ mkdir -p zub1cg/platforms/scripts
$ mkdir -p zub1cg/platforms/vivado/zub1cg_sbc_base
$ mkdir -p zub1cg/platforms/vivado/zub1cg_sbc_base/scripts
$ mkdir -p zub1cg/platforms/vivado/zub1cg_sbc_base/xdc
$ cd zub1cg
Create the following Makefile:
avnet-vitis-platforms/zub1cg/Makefile
CP = cp -f
PWD = $(shell readlink -f .)
# the platform directory has to be an absolute path when passed to v++
PFM_DIR = $(PWD)/platforms
PFM_VER = 2022_2
# valid platforms / overlays
PFM_LIST = zub1cg_sbc_base zub1cg_sbc_dualcam
OVERLAY_LIST = benchmark
# override platform name based on overlay
ifeq ($(OVERLAY),benchmark)
override PFM = zub1cg_sbc_base
endif
PFM_XPFM = $(PFM_DIR)/avnet_$(PFM)_$(PFM_VER)/$(PFM).xpfm
VITIS_DIR = overlays/examples
VITIS_OVERLAY_DIR = $(VITIS_DIR)/$(OVERLAY)
VITIS_OVERLAY_BIT = $(VITIS_OVERLAY_DIR)/binary_container_1/link/int/system.bit
.PHONY: help
help:
@echo 'Usage:'
@echo ''
@echo ' make overlay OVERLAY=<val>'
@echo ' Build the Vitis application overlay.'
@echo ''
@echo ' Valid options for OVERLAY: ${OVERLAY_LIST}'
@echo ''
@echo ' make platform PFM=<val> JOBS=<n>'
@echo ' Build the Vitis platform.'
@echo ''
@echo ' Valid options for PFM: ${PFM_LIST}'
@echo ' JOBS: optional param to set number of synthesis jobs (default 8)'
@echo ''
@echo ' make clean'
@echo ' Clean runs'
@echo ''
.PHONY: overlay
overlay: $(VITIS_OVERLAY_BIT)
$(VITIS_OVERLAY_BIT): $(PFM_XPFM)
@valid=0; \
for o in $(OVERLAY_LIST); do \
if [ "$$o" = "$(OVERLAY)" ]; then \
valid=1; \
break; \
fi \
done; \
if [ "$$valid" -ne 1 ]; then \
echo 'Invalid parameter OVERLAY=$(OVERLAY). Choose one of: $(OVERLAY_LIST)'; \
exit 1; \
fi; \
echo 'Build $(OVERLAY) Vitis overlay using platform $(PFM)'; \
$(MAKE) -C $(VITIS_OVERLAY_DIR) all PLATFORM=$(PFM_XPFM)
.PHONY: platform
platform: $(PFM_XPFM)
$(PFM_XPFM):
@valid=0; \
for p in $(PFM_LIST); do \
if [ "$$p" = "$(PFM)" ]; then \
valid=1; \
break; \
fi \
done; \
if [ "$$valid" -ne 1 ]; then \
echo 'Invalid parameter PFM=$(PFM). Choose one of: $(PFM_LIST)'; \
exit 1; \
fi; \
echo 'Create Vitis platform $(PFM)'; \
$(MAKE) -C $(PFM_DIR) platform PLATFORM=$(PFM) VERSION=$(PFM_VER)
.PHONY: clean
clean:
$(foreach o, $(OVERLAY_LIST), $(MAKE) -C $(VITIS_DIR)/$(o) clean;)
$(foreach p, $(PFM_LIST), $(MAKE) -C $(PFM_DIR) clean PLATFORM=$(p) VERSION=$(PFM_VER);)
Create the following Makefile:
avnet-vitis-platforms/zub1cg/platforms/Makefile
CP = cp -rf
MKDIR = mkdir -p
RM = rm -rf
XSCT = $(XILINX_VITIS)/bin/xsct
JOBS ?= 8
PLATFORM ?= zub1cg_sbc_base
VERSION ?= 2022_2
PFM_DIR = avnet_$(PLATFORM)_$(VERSION)
PFM_PRJ_DIR = xsct/$(PLATFORM)/$(PLATFORM)/export/$(PLATFORM)
PFM_SCRIPTS_DIR = scripts
PFM_TCL = $(PFM_SCRIPTS_DIR)/pfm.tcl
PFM_XPFM = $(PFM_DIR)/$(PLATFORM).xpfm
VIV_DIR = vivado/$(PLATFORM)
VIV_XSA = $(VIV_DIR)/project/$(PLATFORM).xsa
.PHONY: help
help:
@echo 'Usage:'
@echo ''
@echo ' make platform'
@echo ' Generate Vitis platform'
@echo ''
.PHONY: all
all: platform
.PHONY: platform
platform: $(PFM_XPFM)
$(PFM_XPFM): $(VIV_XSA)
$(XSCT) $(PFM_TCL) -xsa $(VIV_XSA)
@$(CP) $(PFM_PRJ_DIR) $(PFM_DIR)
@echo 'Vitis platform available at $(PFM_DIR)'
$(VIV_XSA):
make -C $(VIV_DIR) xsa JOBS=$(JOBS)
.PHONY: clean
clean:
-@$(RM) .Xil boot image linux.bif ws $(PFM_DIR)
make -C $(VIV_DIR) clean
Create the following pfm.tcl script:
avnet-vitis-platforms/zub1cg/platforms/scripts/pfm.tcl
# Help function
proc help_proc { } {
puts "Usage: xsct -sdx pfm.tcl -xsa <file>"
puts "-xsa <file> xsa file location"
puts "-proc <processor> processor (default: psu_cortexa53)"
puts "-help this text"
}
# Set defaults
set platform "default"
set proc "psu_cortexa53"
# Parse arguments
for { set i 0 } { $i < $argc } { incr i } {
# xsa file
if { [lindex $argv $i] == "-xsa" } {
incr i
set xsafile [lindex $argv $i]
set ws [file rootname [file tail $xsafile]]
set ws "xsct/$ws"
# processor
} elseif { [lindex $argv $i] == "-proc" } {
incr i
set proc [lindex $argv $i]
# help
} elseif { [lindex $argv $i] == "-help" } {
help_proc
exit
# invalid argument
} else {
puts "[lindex $argv $i] is an invalid argument"
exit
}
}
# helper variables
set platform [file rootname [file tail $xsafile]]
set imagedir "image"
file mkdir $imagedir
set bootdir "boot"
file mkdir $bootdir
set biffile "linux.bif"
set f [open $biffile a]
close $f
# Set workspace
setws $ws
# Create platform
platform create \
-name $platform \
-hw $xsafile
# Create domain
domain create \
-name smp_linux \
-os linux \
-proc $proc
# Configure domain
domain config -image $imagedir
domain config -boot $bootdir
domain config -bif $biffile
# Configure platform
platform config -remove-boot-bsp
# Generate platform
platform -generate
Create the following Makefile:
avnet-vitis-platforms/zub1cg/platforms/vivado/zub1cg_sbc_base/Makefile
RM = rm -rf
VIVADO = $(XILINX_VIVADO)/bin/vivado
JOBS ?= 8
VIV_DESIGN = zub1cg_sbc_base
VIV_PRJ_DIR = project
VIV_SCRIPTS_DIR = scripts
VIV_XSA = $(VIV_PRJ_DIR)/$(VIV_DESIGN).xsa
VIV_SRC = $(VIV_SCRIPTS_DIR)/main.tcl
.PHONY: help
help:
@echo 'Usage:'
@echo ''
@echo ' make xsa'
@echo ' Generate extensible xsa for platform generation'
@echo ''
.PHONY: all
all: xsa
xsa: $(VIV_XSA)
$(VIV_XSA): $(VIV_SRC)
$(VIVADO) -mode batch -notrace -source $(VIV_SRC) -tclargs -jobs $(JOBS)
.PHONY: clean
clean:
$(RM) $(VIV_PRJ_DIR) vivado* .Xil *dynamic* *.log *.xpe
Create the following file:
avnet-vitis-platforms/zub1cg/platforms/vivado/zub1cg_sbc_base/scripts/main.tcl
set proj_name zub1cg_sbc_base
set proj_dir ./project
set proj_board avnet.com:zuboard_1cg:part0:1.1
set bd_tcl_dir ./scripts
set board xboard_zu1
set rev None
set output {xsa}
set xdc_list {./xdc/pin.xdc}
set src_repo_path {./src}
set jobs 8
# parse arguments
for { set i 0 } { $i < $argc } { incr i } {
# jobs
if { [lindex $argv $i] == "-jobs" } {
incr i
set jobs [lindex $argv $i]
}
}
# set board repo path
set bdf_path [file normalize [pwd]/../../../../common/platforms/bdf]
if {![catch {file lstat $bdf_path finfo}]} {
set_param board.repoPaths $bdf_path
puts "\n\n*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*"
puts " Selected \n BDF path $bdf_path"
puts "*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*\n\n"
} else {
puts "\n\n*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*"
puts " Error specifying BDF path $bdf_path"
puts "*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*\n\n"
return -code ok
}
create_project -name $proj_name -force -dir $proj_dir -part [get_property PART_NAME [get_board_parts $proj_board]]
set_property board_part $proj_board [current_project]
import_files -fileset constrs_1 $xdc_list
#set_property ip_repo_paths $ip_repo_path [current_project]
#update_ip_catalog
# Create block diagram design and set as current design
set design_name $proj_name
create_bd_design $proj_name
current_bd_design $proj_name
# Set current bd instance as root of current design
set parentCell [get_bd_cells /]
set parentObj [get_bd_cells $parentCell]
current_bd_instance $parentObj
source $bd_tcl_dir/config_bd.tcl
save_bd_design
make_wrapper -files [get_files $proj_dir/${proj_name}.srcs/sources_1/bd/$proj_name/${proj_name}.bd] -top
import_files -force -norecurse $proj_dir/${proj_name}.srcs/sources_1/bd/$proj_name/hdl/${proj_name}_wrapper.v
update_compile_order
set_property top ${proj_name}_wrapper [current_fileset]
update_compile_order -fileset sources_1
save_bd_design
validate_bd_design
generate_target all [get_files $proj_dir/${proj_name}.srcs/sources_1/bd/$proj_name/${proj_name}.bd]
set fd [open $proj_dir/README.hw w]
puts $fd "##########################################################################"
puts $fd "This is a brief document containing design specific details for : ${board}"
puts $fd "This is auto-generated by Petalinux ref-design builder created @ [clock format [clock seconds] -format {%a %b %d %H:%M:%S %Z %Y}]"
puts $fd "##########################################################################"
set board_part [get_board_parts [current_board_part -quiet]]
if { $board_part != ""} {
puts $fd "BOARD: $board_part"
}
set design_name [get_property NAME [get_bd_designs]]
puts $fd "BLOCK DESIGN: $design_name"
set columns {%40s%30s%15s%50s}
puts $fd [string repeat - 150]
puts $fd [format $columns "MODULE INSTANCE NAME" "IP TYPE" "IP VERSION" "IP"]
puts $fd [string repeat - 150]
foreach ip [get_ips] {
set catlg_ip [get_ipdefs -all [get_property IPDEF $ip]]
puts $fd [format $columns [get_property NAME $ip] [get_property NAME $catlg_ip] [get_property VERSION $catlg_ip] [get_property VLNV $catlg_ip]]
}
close $fd
set_property synth_checkpoint_mode Hierarchical [get_files $proj_dir/${proj_name}.srcs/sources_1/bd/$proj_name/${proj_name}.bd]
#launch_runs synth_1 -jobs $jobs
#wait_on_run synth_1
launch_runs impl_1 -to_step write_bitstream -jobs $jobs
wait_on_run impl_1
open_run impl_1
set_property platform.board_id $proj_name [current_project]
set_property platform.default_output_type "xclbin" [current_project]
set_property platform.design_intent.datacenter false [current_project]
set_property platform.design_intent.embedded true [current_project]
set_property platform.design_intent.external_host false [current_project]
set_property platform.design_intent.server_managed false [current_project]
set_property platform.extensible true [current_project]
set_property platform.platform_state "pre_synth" [current_project]
set_property platform.name $proj_name [current_project]
set_property platform.vendor "avnet" [current_project]
set_property platform.version "1.0" [current_project]
#write_hw_platform -force -file $proj_dir/${proj_name}.xsa
write_hw_platform -force -file $proj_dir/${proj_name}.xsa -include_bit
validate_hw_platform -verbose $proj_dir/${proj_name}.xsa
exit
Create the following file with the content generated by the "write_bd_tcl -no_ip_version config_bd.tcl" command in the original Vivado project.
avnet-vitis-platforms/zub1cg/platforms/vivado/zub1cg_sbc_base/scripts/config_bd.tcl
Create the following file from the original Vivado project's constraints file:
avnet-vitis-platforms/zub1cg/platforms/vivado/zub1cg_sbc_base/xdc/pin.xdc
#
# Set I/O standards
#
set_property IOSTANDARD LVCMOS18 [get_ports {pl_pb*}]
set_property IOSTANDARD LVCMOS18 [get_ports {rgb_led*}]
set_property IOSTANDARD LVCMOS18 [get_ports {click*}]
set_property IOSTANDARD LVCMOS18 [get_ports {tempsensor*}]
#
# Set I/O location constraints
#
set_property PACKAGE_PIN A8 [get_ports pl_pb_tri_i ]; # HD_GPIO_PB1
set_property PACKAGE_PIN A7 [get_ports {rgb_led_0_tri_o[0]}]; # HD_GPIO_RGB1_R
set_property PACKAGE_PIN B6 [get_ports {rgb_led_0_tri_o[1]}]; # HD_GPIO_RGB1_G
set_property PACKAGE_PIN B5 [get_ports {rgb_led_0_tri_o[2]}]; # HD_GPIO_RGB1_B
set_property PACKAGE_PIN B4 [get_ports {rgb_led_1_tri_o[0]}]; # HP_GPIO_RGB2_R
set_property PACKAGE_PIN A2 [get_ports {rgb_led_1_tri_o[1]}]; # HP_GPIO_RGB2_G
set_property PACKAGE_PIN F4 [get_ports {rgb_led_1_tri_o[2]}]; # HP_GPIO_RGB2_B
#
# Set Click I/O constraints
#
set_property PACKAGE_PIN G7 [get_ports {click_spi_pl_ss_io[0]}]
set_property PACKAGE_PIN G5 [get_ports {click_spi_pl_ss_io[1]}]
set_property IOSTANDARD LVCMOS18 [get_ports {click_spi_pl_ss_io[1]}]
set_property IOSTANDARD LVCMOS18 [get_ports {click_spi_pl_ss_io[0]}]
Creating the zub1cg/overlays sub-directory structure
Start by creating a symbolic link to the dpu_ip in the common directory structure:
$ cd ~/Avnet_2022_2/avnet-vitis-platforms/zub1cg/overlays
$ ln -sf ../../common/overlays/dpu_ip dpu_ip
Copy the entire benchmark sub-directory from the KV260 content.
$ mkdir examples
$ cp -r ~/Avnet_2022_2/kria-vitis-platforms/kv260/overlays/examples/benchmark examples/.
The ZU1CG device does not have as many resources as the K26, so the DPU configuration must be scaled down to fit the available resources.
Edit the DPU configuration file as follows:
avnet-vitis-platforms/zub1cg/overlays/examples/benchmark/dpu_conf.vh
...
//`define B4096
`define B512
...
//`define URAM_ENABLE
`define URAM_DISABLE
...
//`define DSP48_USAGE_HIGH
`define DSP48_USAGE_LOW
...
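As a quick sanity check, you can list the active (uncommented) `define lines to confirm the small configuration was selected. The fragment below recreates a minimal dpu_conf.vh for illustration only; point the grep at your real overlays/examples/benchmark/dpu_conf.vh instead:

```shell
# Recreate a minimal dpu_conf.vh fragment (illustration only --
# run the grep against your real dpu_conf.vh instead)
cat > /tmp/dpu_conf.vh <<'EOF'
//`define B4096
`define B512
//`define URAM_ENABLE
`define URAM_DISABLE
//`define DSP48_USAGE_HIGH
`define DSP48_USAGE_LOW
EOF
# List only the active (uncommented) defines
grep '^`define' /tmp/dpu_conf.vh
```

If the grep still shows B4096, URAM_ENABLE, or DSP48_USAGE_HIGH as active, the overlay will not fit in the ZU1CG device.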
Building the zub1cg_sbc_base platform
With the avnet-vitis-platforms directory structure in place, we can now build the Vitis platform for the base design, as follows:
$ cd ~/Avnet_2022_2/avnet-vitis-platforms/zub1cg
$ make platform PFM=zub1cg_sbc_base
The build will take some time...
When complete, we can query the Vitis platform, as follows:
$ platforminfo platforms/avnet_zub1cg_sbc_base_2022_2/zub1cg_sbc_base.xpfm
==========================
Basic Platform Information
==========================
Platform: zub1cg_sbc_base
File: ../platforms/avnet_zub1cg_sbc_base_2022_2/zub1cg_sbc_base.xpfm
Description: zub1cg_sbc_base
=====================================
Hardware Platform (Shell) Information
=====================================
Vendor: avnet
Board: zub1cg_sbc_base
Name: zub1cg_sbc_base
Version: 1.0
Generated Version: 2022.2
Hardware: 1
Software Emulation: 1
Hardware Emulation: 1
Hardware Emulation Platform: 0
FPGA Family: zynquplus
FPGA Device: xczu1cg
Board Vendor: avnet.com
Board Name: avnet.com:zuboard_1cg:1.1
Board Part: xczu1cg-sbva484-1-e
=================
Clock Information
=================
Default Clock Index: 0
Clock Index: 0
Frequency: 150.000000
Clock Index: 1
Frequency: 300.000000
Clock Index: 2
Frequency: 75.000000
Clock Index: 3
Frequency: 100.000000
Clock Index: 4
Frequency: 200.000000
Clock Index: 5
Frequency: 400.000000
Clock Index: 6
Frequency: 600.000000
=====================
Resource Availability
=====================
=====
Total
=====
LUTs: 33938
FFs: 71128
BRAMs: 108
DSPs: 216
==================
Memory Information
==================
Bus SP Tag: HP0
Bus SP Tag: HP1
Bus SP Tag: HP2
Bus SP Tag: HP3
Bus SP Tag: HPC0
Bus SP Tag: HPC1
=============================
Software Platform Information
=============================
Number of Runtimes: 1
Default System Configuration: zub1cg_sbc_base
System Configurations:
System Config Name: zub1cg_sbc_base
System Config Description: zub1cg_sbc_base
System Config Default Processor Group: smp_linux
System Config Default Boot Image: standard
System Config Is QEMU Supported: 1
System Config Processor Groups:
Processor Group Name: smp_linux
Processor Group CPU Type: cortex-a53
Processor Group OS Name: linux
System Config Boot Images:
Boot Image Name: standard
Boot Image Type:
Boot Image BIF: zub1cg_sbc_base/boot/linux.bif
Boot Image Data: zub1cg_sbc_base/smp_linux/image
Boot Image Boot Mode: sd
Boot Image RootFileSystem:
Boot Image Mount Path: /mnt
Boot Image Read Me: zub1cg_sbc_base/boot/generic.readme
Boot Image QEMU Args: zub1cg_sbc_base/qemu/pmu_args.txt:zub1cg_sbc_base/qemu/qemu_args.txt
Boot Image QEMU Boot:
Boot Image QEMU Dev Tree:
Supported Runtimes:
Runtime: OpenCL
With the platform successfully built, we can now build the benchmark overlay, as follows:
$ cd ~/Avnet_2022_2/avnet-vitis-platforms/zub1cg
$ make overlay OVERLAY=benchmark
The build will take some time...
When complete, the build artifacts will be found in the following directory:
overlays/examples/benchmark/binary_container_1/sd_card
overlays
└── examples
└── benchmark
└── binary_container_1
└── sd_card
├── arch.json
├── dpu.xclbin
├── zub1cg_sbc_base.hwh
└── zub1cg_sbc_base_wrapper.bit
These are the files that we need to create a firmware overlay, and to compile models for our specific DPU architecture (B512, low RAM usage, etc.).
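In particular, arch.json carries the DPU "fingerprint" that the Vitis-AI compiler matches against when compiling models. A minimal sketch of extracting it is shown below, using an illustrative arch.json containing only the fingerprint reported later by xdputil; the real generated file lives in the sd_card directory:

```shell
# Illustrative arch.json (the real file is generated in the sd_card
# directory and may contain additional fields)
cat > /tmp/arch.json <<'EOF'
{"fingerprint":"0x101000016010200"}
EOF
# Pull out the fingerprint field
sed -n 's/.*"fingerprint": *"\([^"]*\)".*/\1/p' /tmp/arch.json
```

A compiled xmodel will only run on the DPU if its fingerprint matches this value.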
Creating the avnet-zub1cg-benchmark app
With the overlay successfully built, we can now create our firmware overlay for this new design, which we will name:
- {vendor}_{platform}_{design}
- avnet_zub1cg_benchmark
PetaLinux provides a command to create a Yocto recipe for these firmware overlays:
$ petalinux-create -t apps \
    --template fpgamanager -n {firmware} \
    --enable \
    --srcuri "{path}/{firmware}.bit \
              {path}/{firmware}.dtsi \
              {path}/{firmware}.xclbin \
              {path}/shell.json" \
    --force
Before using this command, however, we need to set up the files required for our firmware.
We start by copying the .bit and .xclbin files, and creating the shell.json file, for the benchmark design:
$ cd ~/Avnet_2022_2/petalinux/projects/zub1cg_sbc_2022_2
$ mkdir -p firmware/avnet_zub1cg_benchmark
$ cp ../../../avnet-vitis-platforms/zub1cg/overlays/examples/benchmark/binary_container_1/sd_card/zub1cg_sbc_base_wrapper.bit firmware/avnet_zub1cg_benchmark/avnet_zub1cg_benchmark.bit
$ cp ../../../avnet-vitis-platforms/zub1cg/overlays/examples/benchmark/binary_container_1/sd_card/dpu.xclbin firmware/avnet_zub1cg_benchmark/avnet_zub1cg_benchmark.xclbin
$ echo '{ "shell_type":"XRT_FLAT", "num_slots":1 }' > firmware/avnet_zub1cg_benchmark/shell.json
We will use the same .dtsi file as the base design:
$ cp firmware/avnet_zub1cg_base/avnet_zub1cg_base.dtsi firmware/avnet_zub1cg_benchmark/avnet_zub1cg_benchmark.dtsi
Edit this file to change the overlay name, as follows:
firmware/avnet_zub1cg_benchmark/avnet_zub1cg_benchmark.dtsi
...
&fpga_full {
...
firmware-name = "avnet_zub1cg_benchmark.bit.bin";
...
}
...
We can now create our firmware overlay, as follows:
$ petalinux-create -t apps \
--template fpgamanager -n avnet-zub1cg-benchmark \
--enable \
--srcuri "firmware/avnet_zub1cg_benchmark/avnet_zub1cg_benchmark.bit \
firmware/avnet_zub1cg_benchmark/avnet_zub1cg_benchmark.xclbin \
firmware/avnet_zub1cg_benchmark/avnet_zub1cg_benchmark.dtsi \
firmware/avnet_zub1cg_benchmark/shell.json" \
--force
This will have created new entries in the user-rootfsconfig and rootfs_config configuration files. Add the "vitis-ai-library-*" packages to these files, as follows:
project-spec/meta-user/conf/user-rootfsconfig
...
CONFIG_avnet-zub1cg-base
CONFIG_avnet-zub1cg-dualcam
CONFIG_xmutil
CONFIG_avnet-zub1cg-benchmark
CONFIG_vitis_ai_library
CONFIG_vitis_ai_library-dev
CONFIG_vitis_ai_library-dbg
...
project-spec/configs/rootfs_config
...
#
# apps
#
CONFIG_avnet-zub1cg-base=y
CONFIG_avnet-zub1cg-dualcam=y
CONFIG_avnet-zub1cg-benchmark=y
#
# user packages
#
...
CONFIG_xmutil=y
CONFIG_vitis_ai_library=y
CONFIG_vitis_ai_library-dev=y
...
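These edits can also be scripted. The sketch below appends the new entries idempotently; it operates on a temporary stand-in file for illustration, so substitute the real project-spec/meta-user/conf/user-rootfsconfig path when using it:

```shell
# Temporary stand-in for project-spec/meta-user/conf/user-rootfsconfig
CFG=/tmp/user-rootfsconfig
touch "$CFG"
for entry in CONFIG_avnet-zub1cg-benchmark \
             CONFIG_vitis_ai_library \
             CONFIG_vitis_ai_library-dev \
             CONFIG_vitis_ai_library-dbg; do
  # Only append if the exact line is not already present
  grep -qx "$entry" "$CFG" || echo "$entry" >> "$CFG"
done
cat "$CFG"
```

Because of the grep guard, re-running the loop never duplicates entries.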
Adding the Vitis-AI 3.0 yocto recipes
By default, the PetaLinux project will build version 2.5 of the vitis_ai_library packages, which is not what we want. Since we want version 3.0 of the vitis_ai_library packages, we need to copy the new Yocto recipes, as described here:
https://github.com/Xilinx/Vitis-AI/blob/v3.0/src/vai_petalinux_recipes/README.md
We start by cloning the Vitis-AI 3.0 repository:
$ cd ~/Avnet_2022_2
$ git clone -b v3.0 https://github.com/Xilinx/Vitis-AI
Then we copy the yocto recipes for Vitis-AI 3.0:
$ cd ~/Avnet_2022_2/petalinux/projects/zub1cg_sbc_2022_2
$ cp -r ~/Avnet_2022_2/Vitis-AI/src/vai_petalinux_recipes/recipes-vitis-ai project-spec/meta-user/.
For our Vitis implementation, we need to remove one file, the vart_3.0_vivado.bb recipe:
$ rm project-spec/meta-user/recipes-vitis-ai/vart/vart_3.0_vivado.bb
We can now rebuild the petalinux project:
$ petalinux-build
Verifying the avnet-zub1cg-benchmark app
In order to verify our new benchmark app, we need to program our new SD card image to a micro-SD card (of size 32GB or greater):
~/Avnet_2022_2/petalinux/projects/zub1cg_sbc_2022_2/images/linux/rootfs.wic
To do this, we use Balena Etcher, which is available for most operating systems.
Once programmed, insert the micro-SD card into the ZUBoard, and connect up the platform as shown below.
Press the power push-button to boot the board. After Linux has booted, log in as the "root" user:
zub1cg-sbc-2022-2 login: root
root@zub1cg-sbc-2022-2:~#
The first verification to do is to verify the presence of the benchmark overlay:
root@zub1cg-sbc-2022-2:~# xmutil listapps
Accelerator Accel_type Base Base_type #slots Active_slot
avnet-zub1cg-base XRT_FLAT avnet-zub1cg-base XRT_FLAT (0+0) -1
avnet-zub1cg-dualcam XRT_FLAT avnet-zub1cg-dualcam XRT_FLAT (0+0) -1
avnet-zub1cg-benchmark XRT_FLAT avnet-zub1cg-benchmark XRT_FLAT (0+0) -1
root@zub1cg-sbc-2022-2:~#
The second verification is to load the benchmark overlay:
root@zub1cg-sbc-2022-2:~# xmutil loadapp avnet-zub1cg-benchmark
[ 269.030180] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /fpga-full/firmware-name
[ 269.040314] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /fpga-full/resets
[ 269.050713] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/afi0
[ 269.060215] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/axi_gpio_0
[ 269.070229] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/axi_gpio_1
[ 269.080246] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/axi_gpio_2
[ 269.090261] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/axi_iic_0
[ 269.100186] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/axi_iic_1
[ 269.110113] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/axi_iic_2
[ 269.120043] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/axi_intc_0
[ 269.130051] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/axi_quad_spi_0
[ 269.140405] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/axi_uartlite_0
[ 269.150761] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/system_management_wiz_0
[ 269.791631] xadc a0090000.system_management_wiz: IRQ index 0 not found
[ 269.806520] zocl-drm axi:zyxclmm_drm: IRQ index 32 not found
avnet-zub1cg-benchmark: loaded to slot 0
Note that the following WARNING is expected when the dynamic device tree overlay is working correctly, and can be ignored:
OF: overlay: WARNING: memory leak will occur if overlay removed
One thing to note is that, after loading the benchmark overlay, the content of the /etc/vart.conf file changed to the following:
root@zub1cg-sbc-2022-2:~# cat /etc/vart.conf
firmware: /lib/firmware/xilinx/avnet-zub1cg-benchmark/avnet-zub1cg-benchmark.xclbin
We can query the active DPU enabled design with the xdputil utility:
root@zub1cg-sbc-2022-2:~# xdputil query
{
"DPU IP Spec":{
"DPU Core Count":1,
"IP version":"v4.1.0",
"generation timestamp":"2023-02-21 21-30-00",
"git commit id":"7d32c41",
"git commit time":2023022121,
"regmap":"1to1 version"
},
"VAI Version":{
"libvart-runner.so":"Xilinx vart-runner Version: 3.0.0-331ba47f80502ef2a1f37b3f7ce616b31c22e577 86 2023-01-02-21:50:29 ",
"libvitis_ai_library-dpu_task.so":"Xilinx vitis_ai_library dpu_task Version: 3.0.0-1cccff04dc341c4a6287226828f90aed56005f4f 86 2023-01-02 14:31:50 [UTC] ",
"libxir.so":"Xilinx xir Version: xir-9204ac72103092a7b253a0c23ec7471481656940 2023-01-02-21:49:01",
"target_factory":"target-factory.3.0.0 860ed0499ab009084e2df3004eeb9ae710c26351"
},
"kernels":[
{
"DPU Arch":"DPUCZDX8G_ISA1_B512_0101000016010200",
"DPU Frequency (MHz)":300,
"IP Type":"DPU",
"Load Parallel":2,
"Load augmentation":"enable",
"Load minus mean":"disable",
"Save Parallel":2,
"XRT Frequency (MHz)":300,
"cu_addr":"0xb0000000",
"cu_handle":"0xaaaae02cb350",
"cu_idx":0,
"cu_mask":1,
"cu_name":"DPUCZDX8G:DPUCZDX8G_1",
"device_id":0,
"fingerprint":"0x101000016010200",
"name":"DPU Core 0"
}
]
}
We can also query the status of the DPU:
root@zub1cg-sbc-2022-2:~# xdputil status
{
"kernels":[
{
"addrs_registers":{
"dpu0_base_addr_0":"0x0",
"dpu0_base_addr_1":"0x0",
"dpu0_base_addr_2":"0x0",
"dpu0_base_addr_3":"0x0",
"dpu0_base_addr_4":"0x0",
"dpu0_base_addr_5":"0x0",
"dpu0_base_addr_6":"0x0",
"dpu0_base_addr_7":"0x0"
},
"common_registers":{
"ADDR_CODE":"0x0",
"AP status":"idle",
"CONV END":0,
"CONV START":0,
"HP_ARCOUNT_MAX":7,
"HP_ARLEN":15,
"HP_AWCOUNT_MAX":7,
"HP_AWLEN":15,
"LOAD END":0,
"LOAD START":0,
"MISC END":0,
"MISC START":0,
"SAVE END":0,
"SAVE START":0
},
"name":"DPU Registers Core 0"
}
]
}
Further verification requires xmodel files that have been compiled for this specific DPU architecture (DPU B512, low RAM usage,...).
Expanding the root file system
By default, the root file system will have a size of ~4GB.
The next sections will install a significant amount of content, so the root file system must be increased to its full allowable size.
This can be done with the zynqmp_dpu_optimize.sh script, found in the DPUCZDX8G reference design archive.
root@zub1cg-sbc-2022-2:~# wget https://www.xilinx.com/bin/public/openDownload?filename=DPUCZDX8G_VAI_v3.0.tar.gz -O DPUCZDX8G_VAI_v3.0.tar.gz
root@zub1cg-sbc-2022-2:~# tar -xvzf DPUCZDX8G_VAI_v3.0.tar.gz
root@zub1cg-sbc-2022-2:~# cd DPUCZDX8G_VAI_v3.0/app
root@zub1cg-sbc-2022-2:~/DPUCZDX8G_VAI_v3.0/app# ls
dpu_sw_optimize.tar.gz model samples
root@zub1cg-sbc-2022-2:~/DPUCZDX8G_VAI_v3.0/app# tar -xvzf dpu_sw_optimize.tar.gz
dpu_sw_optimize/
dpu_sw_optimize/zynqmp/
dpu_sw_optimize/zynqmp/README.md
dpu_sw_optimize/zynqmp/functions/
dpu_sw_optimize/zynqmp/functions/zynqmp_qos_en.sh
dpu_sw_optimize/zynqmp/functions/ext4_auto_resize.sh
dpu_sw_optimize/zynqmp/functions/irps5401
dpu_sw_optimize/zynqmp/functions/irps5401.c
dpu_sw_optimize/zynqmp/zynqmp_dpu_optimize.sh
root@zub1cg-sbc-2022-2:~/DPUCZDX8G_VAI_v3.0/app# cd dpu_sw_optimize/zynqmp/
root@zub1cg-sbc-2022-2:~/DPUCZDX8G_VAI_v3.0/app/dpu_sw_optimize/zynqmp# ./zynqmp_dpu_optimize.sh
Auto resize ext4 partition ...[✔]
Start QoS config ...[✔]
After executing the optimization script, the root partition will have the full size of your SD card minus the size of the boot partition (~1GB).
Compiling the ModelZoo
Before we can use the AMD/Xilinx model zoo with the specific DPU architecture we included in our design, we need to compile those models.
The models can be downloaded from the Xilinx web site with the provided downloader.py Python script:
$ cd ~/Avnet_2022_2/Vitis-AI/model_zoo
$ python downloader.py
Each model can be compiled by following the online documentation:
https://xilinx.github.io/Vitis-AI/docs/workflow-model-zoo.html
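For reference, a typical compile step uses the vai_c_xir compiler with the arch.json from our benchmark build. The sketch below only writes the invocation to a helper script; the quantized .xmodel path and net name are illustrative assumptions, and the script itself must be run inside the Vitis-AI docker container, where vai_c_xir is available:

```shell
# Write a helper script; the input .xmodel path and net name are
# illustrative assumptions -- adjust them for the model being compiled.
cat > /tmp/compile_resnet50.sh <<'EOF'
#!/bin/sh
vai_c_xir \
  -x quantized/ResNet50_int.xmodel \
  -a arch.json \
  -o compiled \
  -n resnet50
EOF
chmod +x /tmp/compile_resnet50.sh
cat /tmp/compile_resnet50.sh
```

The -a argument points at the arch.json produced by the overlay build, which is how the compiled model gets matched to our B512 DPU.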
For convenience, I am providing an archive of pre-compiled models for this specific DPU architecture.
Download the following models archive for the B512 DPU architecture to the ZUBoard's root file system (i.e. using SSH):
- https://avnet.me/vitis-ai-3.0-models.0-b512-lr
(2023/04/04 : md5sum = b704b959e95a479fd4e0d27266e3202e)
Then extract the archive to the /usr/share/vitis_ai_library
directory:
root@zub1cg-sbc-2022-2:~# cd /usr/share/vitis_ai_library
root@zub1cg-sbc-2022-2:/usr/share/vitis_ai_library# tar -xvzf ~/vitis-ai-3.0-models.0-b512-lr.tar.gz
root@zub1cg-sbc-2022-2:/usr/share/vitis_ai_library# ln -sf models.b512-lr models
Installing the Vitis-AI examples
The Vitis-AI examples can be found in the Vitis-AI repository, under the examples directory:
Vitis-AI
├── ...
└── examples
├── ...
├── vai_library
├── ...
├── vai_runtime
└── ...
These can be copied to the root file system of the SD card image.
They also require archives of images and video files, which can be downloaded from the following links:
- vai_library
  - vitis_ai_library_r3.0.0_images.tar.gz
  - vitis_ai_library_r3.0.0_video.tar.gz
- vai_runtime
  - vitis_ai_runtime_r3.0.0_image_video.tar.gz
For convenience, I am providing an archive of pre-compiled examples that can be downloaded from a single source.
Download the following examples archive to the ZUBoard's root file system (i.e. using SSH):
- https://avnet.me/vitis-ai-3.0-examples
(2023/04/04 : md5sum = 0af9ab73387ef8cc0f90e15bddbcbdb4)
Then extract the archive to the /home/root (~) directory:
root@zub1cg-sbc-2022-2:~# cd ~
root@zub1cg-sbc-2022-2:~# tar -xvzf vitis-ai-3.0-examples.tar.gz
Automatically booting avnet-zub1cg-benchmark
The user can configure the image to automatically boot one of the firmware overlays. The /etc/dfx-mgrd/daemon.conf file indicates which overlay (default_accel) to load at boot, via the /etc/dfx-mgrd/default_firmware file.
root@zub1cg-sbc-2022-2:~# cat /etc/dfx-mgrd/daemon.conf
{
"firmware_location": ["/lib/firmware/xilinx"],
"default_accel":"/etc/dfx-mgrd/default_firmware"
}
Recall that we modified this file in the previous project to load the base overlay by default. We can change it to load the benchmark overlay instead:
root@zub1cg-sbc-2022-2:~# cat /etc/dfx-mgrd/default_firmware
avnet-zub1cg-base
root@zub1cg-sbc-2022-2:~# echo avnet-zub1cg-benchmark > /etc/dfx-mgrd/default_firmware
root@zub1cg-sbc-2022-2:~# cat /etc/dfx-mgrd/default_firmware
avnet-zub1cg-benchmark
The change will take effect at the next boot.
root@zub1cg-sbc-2022-2:~# reboot
After boot, we start by querying which "apps" are present:
root@zub1cg-sbc-2022-2:~# xmutil listapps
Accelerator Accel_type Base Base_type #slots Active_slot
avnet-zub1cg-base XRT_FLAT avnet-zub1cg-base XRT_FLAT (0+0) -1
avnet-zub1cg-dualcam XRT_FLAT avnet-zub1cg-dualcam XRT_FLAT (0+0) -1
avnet-zub1cg-benchmark XRT_FLAT avnet-zub1cg-benchmark XRT_FLAT (0+0) 0
root@zub1cg-sbc-2022-2:~#
Notice that the avnet-zub1cg-benchmark overlay has been loaded.
We can query the active DPU enabled design with the xdputil utility:
root@zub1cg-sbc-2022-2:~# xdputil query
{
"DPU IP Spec":{
"DPU Core Count":1,
"IP version":"v4.1.0",
"generation timestamp":"2023-02-21 21-30-00",
"git commit id":"7d32c41",
"git commit time":2023022121,
"regmap":"1to1 version"
},
"VAI Version":{
"libvart-runner.so":"Xilinx vart-runner Version: 3.0.0-331ba47f80502ef2a1f37b3f7ce616b31c22e577 86 2023-01-02-21:50:29 ",
"libvitis_ai_library-dpu_task.so":"Xilinx vitis_ai_library dpu_task Version: 3.0.0-1cccff04dc341c4a6287226828f90aed56005f4f 86 2023-01-02 14:31:50 [UTC] ",
"libxir.so":"Xilinx xir Version: xir-9204ac72103092a7b253a0c23ec7471481656940 2023-01-02-21:49:01",
"target_factory":"target-factory.3.0.0 860ed0499ab009084e2df3004eeb9ae710c26351"
},
"kernels":[
{
"DPU Arch":"DPUCZDX8G_ISA1_B512_0101000016010200",
"DPU Frequency (MHz)":300,
"IP Type":"DPU",
"Load Parallel":2,
"Load augmentation":"enable",
"Load minus mean":"disable",
"Save Parallel":2,
"XRT Frequency (MHz)":300,
"cu_addr":"0xb0000000",
"cu_handle":"0xaaaae02cb350",
"cu_idx":0,
"cu_mask":1,
"cu_name":"DPUCZDX8G:DPUCZDX8G_1",
"device_id":0,
"fingerprint":"0x101000016010200",
"name":"DPU Core 0"
}
]
}
Executing the Vitis-AI Examples
There are too many examples to cover in this section, but we can cover an alternative to the face detection example: face mask detection.
root@zub1cg-sbc-2022-2:~# cd Vitis-AI/examples/vai_library/samples/yolov4
root@zub1cg-sbc-2022-2:~/Vitis-AI/examples/vai_library/samples/yolov4# ./test_video_yolov4 face_mask_detection_pt 0
The current version of this project has the following known issues:
- vitis_ai_library packages, built from source, do not work
Until this is resolved, a work-around (using pre-built packages) has been provided.
Conclusion
I hope this tutorial helped you understand how to add Vitis-AI 3.0 functionality to your ZUBoard and/or custom platform.
If you would like to have the pre-built SDcard image for this project, please let me know in the comments below.
Revision History
2023/04/11
Added registration link to webinar series:
http://avnet.me/ZU1-Robotics-webinar-series
Fixed instructions in section "Adding the Vitis-AI 3.0 yocto recipes" (need to remove vart_3.0_vivado.bb).
2023/04/04
Preliminary Version