This project is part 3 of a 4-part series of projects in which we progressively create a Vitis-AI and ROS2 enabled platform for the Ultra96-V2:
- Part 1 : Building the foundational designs
- Part 2 : Combining designs into a common platform
- Part 3 : Adding support for Vitis-AI
- Part 4 : Adding support for ROS2
The motivation of this series of projects is to enable users to create their own custom AI applications.
Introduction - Part III
In order to add accelerators to our foundational designs, we first need to create Vitis platforms for these designs, which are wrappers describing which resources are available for use by Vitis.
I have chosen to mimic the directory structure used by the Kria platforms to do this, since I find them easy to understand and easy to expand on.
Understanding kria-vitis-platforms
Before creating our own equivalent of the kria-vitis-platforms, we first need to understand its contents.
As of this writing, the kria-vitis-platforms repository has not yet been released for 2022.2, so we will look at the 2022.1 version: https://github.com/Xilinx/kria-vitis-platforms/tree/xlnx_rel_v2022.1
$ cd ~/Avnet_2022_2
$ git clone -b xlnx_rel_v2022.1 https://github.com/Xilinx/kria-vitis-platforms --recursive
kria-vitis-platforms
├── k26
├── kr260
└── kv260
├── overlays
│ ├── dpu_ip
│ └── examples
│ ├── aibox-reid
│ ├── benchmark
│ ├── defect-detect
│ ├── nlp-smartvision
│ └── smartcam
└── platforms
├── scripts
└── vivado
├── kv260_ispMipiRx_rpiMipiRx_DP
├── kv260_ispMipiRx_vcu_DP
├── kv260_ispMipiRx_vmixDP
└── kv260_vcuDecode_vmixDP
The first level of directories corresponds to the platforms (k26, kr260, kv260).
For each platform, the directory structure divides into two main sections:
- platforms
- overlays
The platforms directory contains the source code to re-generate the Vivado projects and Vitis platforms (wrappers).
The overlays directory contains accelerator examples that each target one of the platforms. Here is a graphical representation of the platforms and overlays that we can find for the KV260.
We will want to reproduce the "benchmark" overlay, which contains the largest DPU core that fits in the available resources.
Creating avnet-vitis-platforms
If you do not want to re-create the avnet-vitis-platforms directory structure, you can simply clone the final result from the following GitHub repository:
$ cd ~/Avnet_2022_2
$ git clone -b 2022.2 https://github.com/AlbertaBeef/avnet-vitis-platforms --recursive
Otherwise, follow the instructions below to re-create this directory structure yourself.
In our avnet-vitis-platforms directory structure, we will also include a "common" directory for content that is used by all platforms.
avnet-vitis-platforms
├── common
│ ├── overlays
│ │ └── dpu_ip
│ └── platforms
│ └── bdf
├── zub1cg
│ ├── overlays
│ └── platforms
└── u96v2
├── overlays
│ ├── dpu_ip (link to ../../common/overlays/dpu_ip)
│ └── examples
│ └── benchmark
└── platforms
├── scripts
└── vivado
└── u96v2_sbc_base
This project will focus on the content of the common and u96v2 directories.
$ cd ~/Avnet_2022_2
$ mkdir -p avnet-vitis-platforms
$ cd avnet-vitis-platforms
Creating the common/platforms sub-directory structure
First, we clone the Avnet bdf repository, which contains our board definition files, into the common/platforms directory:
$ mkdir -p common/platforms
$ git clone https://github.com/Avnet/bdf common/platforms/bdf
Creating the common/overlays sub-directory structure
Next, we download the Xilinx DPU IP archive to the common/overlays/dpu_ip directory:
$ mkdir -p common/overlays
$ wget https://www.xilinx.com/bin/public/openDownload?filename=DPUCZDX8G_ip_repo_VAI_v3.0.tar.gz -O DPUCZDX8G_ip_repo_VAI_v3.0.tar.gz
$ tar -xvzf DPUCZDX8G_ip_repo_VAI_v3.0.tar.gz
$ mv DPUCZDX8G_ip_repo_VAI_v3.0 common/overlays/dpu_ip
Creating the u96v2/platforms sub-directory structure
Now we want to create the platforms structure for the u96v2_sbc_base design by mimicking (copying) files from the kv260 content. When we are done, the following files will have been created:
avnet-vitis-platforms
├── ...
└── u96v2
├── Makefile
├── ...
└── platforms
├── Makefile
├── scripts
│ └── pfm.tcl
└── vivado
└── u96v2_sbc_base
├── Makefile
├── scripts
│ ├── main.tcl
│ └── config_bd.tcl
├── ip
│ └── PWM_w_Int
└── xdc
└── pin.xdc
Let's start by creating the directories:
$ mkdir -p u96v2/platforms/scripts
$ mkdir -p u96v2/platforms/vivado/u96v2_sbc_base
$ mkdir -p u96v2/platforms/vivado/u96v2_sbc_base/scripts
$ mkdir -p u96v2/platforms/vivado/u96v2_sbc_base/ip
$ mkdir -p u96v2/platforms/vivado/u96v2_sbc_base/xdc
$ cd u96v2
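The six mkdir commands above can also be expressed as a loop. Here is a self-contained sketch (run against a throwaway directory, so it does not touch your working tree) that builds the same skeleton:

```shell
#!/bin/sh
# Self-contained sketch: build the u96v2/platforms skeleton in a temp dir.
tmp=$(mktemp -d)
base="$tmp/u96v2/platforms"
# The deepest directories imply their parents, so four mkdir calls suffice.
for d in scripts \
         vivado/u96v2_sbc_base/scripts \
         vivado/u96v2_sbc_base/ip \
         vivado/u96v2_sbc_base/xdc; do
    mkdir -p "$base/$d"
done
created=$(test -d "$base/vivado/u96v2_sbc_base/xdc" && echo yes)
echo "skeleton created: $created"
rm -rf "$tmp"
```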
Create the top-level Makefile with the following content:
avnet-vitis-platforms/u96v2/Makefile
CP = cp -f
PWD = $(shell readlink -f .)
# the platform directory has to be an absolute path when passed to v++
PFM_DIR = $(PWD)/platforms
PFM_VER = 2022_2
# valid platforms / overlays
PFM_LIST = u96v2_sbc_base u96v2_sbc_dualcam
OVERLAY_LIST = benchmark
# override platform name based on overlay
ifeq ($(OVERLAY),benchmark)
override PFM = u96v2_sbc_base
endif
PFM_XPFM = $(PFM_DIR)/avnet_$(PFM)_$(PFM_VER)/$(PFM).xpfm
VITIS_DIR = overlays/examples
VITIS_OVERLAY_DIR = $(VITIS_DIR)/$(OVERLAY)
VITIS_OVERLAY_BIT = $(VITIS_OVERLAY_DIR)/binary_container_1/link/int/system.bit
.PHONY: help
help:
	@echo 'Usage:'
	@echo ''
	@echo '  make overlay OVERLAY=<val>'
	@echo '    Build the Vitis application overlay.'
	@echo ''
	@echo '    Valid options for OVERLAY: ${OVERLAY_LIST}'
	@echo ''
	@echo '  make platform PFM=<val> JOBS=<n>'
	@echo '    Build the Vitis platform.'
	@echo ''
	@echo '    Valid options for PFM: ${PFM_LIST}'
	@echo '    JOBS: optional param to set number of synthesis jobs (default 8)'
	@echo ''
	@echo '  make clean'
	@echo '    Clean runs'
	@echo ''

.PHONY: overlay
overlay: $(VITIS_OVERLAY_BIT)

$(VITIS_OVERLAY_BIT): $(PFM_XPFM)
	@valid=0; \
	for o in $(OVERLAY_LIST); do \
		if [ "$$o" = "$(OVERLAY)" ]; then \
			valid=1; \
			break; \
		fi \
	done; \
	if [ "$$valid" -ne 1 ]; then \
		echo 'Invalid parameter OVERLAY=$(OVERLAY). Choose one of: $(OVERLAY_LIST)'; \
		exit 1; \
	fi; \
	echo 'Build $(OVERLAY) Vitis overlay using platform $(PFM)'; \
	$(MAKE) -C $(VITIS_OVERLAY_DIR) all PLATFORM=$(PFM_XPFM)

.PHONY: platform
platform: $(PFM_XPFM)

$(PFM_XPFM):
	@valid=0; \
	for p in $(PFM_LIST); do \
		if [ "$$p" = "$(PFM)" ]; then \
			valid=1; \
			break; \
		fi \
	done; \
	if [ "$$valid" -ne 1 ]; then \
		echo 'Invalid parameter PFM=$(PFM). Choose one of: $(PFM_LIST)'; \
		exit 1; \
	fi; \
	echo 'Create Vitis platform $(PFM)'; \
	$(MAKE) -C $(PFM_DIR) platform PLATFORM=$(PFM) VERSION=$(PFM_VER)

.PHONY: clean
clean:
	$(foreach o, $(OVERLAY_LIST), $(MAKE) -C $(VITIS_DIR)/$(o) clean;)
	$(foreach p, $(PFM_LIST), $(MAKE) -C $(PFM_DIR) clean PLATFORM=$(p) VERSION=$(PFM_VER);)
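The OVERLAY/PFM validation pattern used in the recipes above is plain POSIX shell, and can be exercised standalone. The variable values below are illustrative:

```shell
#!/bin/sh
# Sketch of the Makefile's parameter-validation loop, outside of make.
OVERLAY_LIST="benchmark"
OVERLAY="benchmark"
valid=0
for o in $OVERLAY_LIST; do
    if [ "$o" = "$OVERLAY" ]; then
        valid=1
        break
    fi
done
if [ "$valid" -ne 1 ]; then
    echo "Invalid parameter OVERLAY=$OVERLAY. Choose one of: $OVERLAY_LIST"
    exit 1
fi
echo "OVERLAY=$OVERLAY is valid"
```

Inside a Makefile the same loop must be written as one logical recipe line, with `\` continuations and `$$` escaping the shell variables from make's own expansion.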
Create the platforms Makefile with the following content:
avnet-vitis-platforms/u96v2/platforms/Makefile
CP = cp -rf
MKDIR = mkdir -p
RM = rm -rf
XSCT = $(XILINX_VITIS)/bin/xsct
JOBS ?= 8
PLATFORM ?= u96v2_sbc_base
VERSION ?= 2022_2
PFM_DIR = avnet_$(PLATFORM)_$(VERSION)
PFM_PRJ_DIR = xsct/$(PLATFORM)/$(PLATFORM)/export/$(PLATFORM)
PFM_SCRIPTS_DIR = scripts
PFM_TCL = $(PFM_SCRIPTS_DIR)/pfm.tcl
PFM_XPFM = $(PFM_DIR)/$(PLATFORM).xpfm
VIV_DIR = vivado/$(PLATFORM)
VIV_XSA = $(VIV_DIR)/project/$(PLATFORM).xsa
.PHONY: help
help:
	@echo 'Usage:'
	@echo ''
	@echo '  make platform'
	@echo '    Generate Vitis platform'
	@echo ''

.PHONY: all
all: platform

.PHONY: platform
platform: $(PFM_XPFM)

$(PFM_XPFM): $(VIV_XSA)
	$(XSCT) $(PFM_TCL) -xsa $(VIV_XSA)
	@$(CP) $(PFM_PRJ_DIR) $(PFM_DIR)
	@echo 'Vitis platform available at $(PFM_DIR)'

$(VIV_XSA):
	make -C $(VIV_DIR) xsa JOBS=$(JOBS)

.PHONY: clean
clean:
	-@$(RM) .Xil boot image linux.bif ws $(PFM_DIR)
	make -C $(VIV_DIR) clean
Create the pfm.tcl script with the following content:
avnet-vitis-platforms/u96v2/platforms/scripts/pfm.tcl
# Help function
proc help_proc { } {
    puts "Usage: xsct -sdx pfm.tcl -xsa <file>"
    puts "  -xsa  <file>       xsa file location"
    puts "  -proc <processor>  processor (default: psu_cortexa53)"
    puts "  -help              this text"
}

# Set defaults
set platform "default"
set proc "psu_cortexa53"

# Parse arguments
for { set i 0 } { $i < $argc } { incr i } {
    # xsa file
    if { [lindex $argv $i] == "-xsa" } {
        incr i
        set xsafile [lindex $argv $i]
        set ws [file rootname [file tail $xsafile]]
        set ws "xsct/$ws"
    # processor
    } elseif { [lindex $argv $i] == "-proc" } {
        incr i
        set proc [lindex $argv $i]
    # help
    } elseif { [lindex $argv $i] == "-help" } {
        help_proc
        exit
    # invalid argument
    } else {
        puts "[lindex $argv $i] is an invalid argument"
        exit
    }
}
# helper variables
set platform [file rootname [file tail $xsafile]]
set imagedir "image"
file mkdir $imagedir
set bootdir "boot"
file mkdir $bootdir
set biffile "linux.bif"
set f [open $biffile a]
close $f
# Set workspace
setws $ws
# Create platform
platform create \
-name $platform \
-hw $xsafile
# Create domain
domain create \
-name smp_linux \
-os linux \
-proc $proc
# Configure domain
domain config -image $imagedir
domain config -boot $bootdir
domain config -bif $biffile
# Configure platform
platform config -remove-boot-bsp
# Generate platform
platform -generate
Create the Vivado Makefile with the following content:
avnet-vitis-platforms/u96v2/platforms/vivado/u96v2_sbc_base/Makefile
RM = rm -rf
VIVADO = $(XILINX_VIVADO)/bin/vivado
JOBS ?= 8
VIV_DESIGN = u96v2_sbc_base
VIV_PRJ_DIR = project
VIV_SCRIPTS_DIR = scripts
VIV_XSA = $(VIV_PRJ_DIR)/$(VIV_DESIGN).xsa
VIV_SRC = $(VIV_SCRIPTS_DIR)/main.tcl
.PHONY: help
help:
	@echo 'Usage:'
	@echo ''
	@echo '  make xsa'
	@echo '    Generate extensible xsa for platform generation'
	@echo ''

.PHONY: all
all: xsa

xsa: $(VIV_XSA)

$(VIV_XSA): $(VIV_SRC)
	$(VIVADO) -mode batch -notrace -source $(VIV_SRC) -tclargs -jobs $(JOBS)

.PHONY: clean
clean:
	$(RM) $(VIV_PRJ_DIR) vivado* .Xil *dynamic* *.log *.xpe
Create the main.tcl script with the following content:
avnet-vitis-platforms/u96v2/platforms/vivado/u96v2_sbc_base/scripts/main.tcl
set proj_name u96v2_sbc_base
set proj_dir ./project
set proj_board avnet.com:ultra96v2:part0:1.2
set bd_tcl_dir ./scripts
set board xboard_zu1
set rev None
set output {xsa}
set xdc_list {./xdc/pin.xdc}
set ip_repo_path {./ip}
set src_repo_path {./src}
set jobs 8
# parse arguments
for { set i 0 } { $i < $argc } { incr i } {
    # jobs
    if { [lindex $argv $i] == "-jobs" } {
        incr i
        set jobs [lindex $argv $i]
    }
}

# set board repo path
set bdf_path [file normalize [pwd]/../../../../common/platforms/bdf]
if {[expr {![catch {file lstat $bdf_path finfo}]}]} {
    set_param board.repoPaths $bdf_path
    puts "\n\n*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*"
    puts " Selected \n BDF path $bdf_path"
    puts "*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*\n\n"
} else {
    puts "\n\n*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*"
    puts " Error specifying BDF path $bdf_path"
    puts "*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*\n\n"
    return -code ok
}
create_project -name $proj_name -force -dir $proj_dir -part [get_property PART_NAME [get_board_parts $proj_board]]
set_property board_part $proj_board [current_project]
import_files -fileset constrs_1 $xdc_list
set_property ip_repo_paths $ip_repo_path [current_project]
update_ip_catalog
# Create block diagram design and set as current design
set design_name $proj_name
create_bd_design $proj_name
current_bd_design $proj_name
# Set current bd instance as root of current design
set parentCell [get_bd_cells /]
set parentObj [get_bd_cells $parentCell]
current_bd_instance $parentObj
source $bd_tcl_dir/config_bd.tcl
save_bd_design
make_wrapper -files [get_files $proj_dir/${proj_name}.srcs/sources_1/bd/$proj_name/${proj_name}.bd] -top
import_files -force -norecurse $proj_dir/${proj_name}.srcs/sources_1/bd/$proj_name/hdl/${proj_name}_wrapper.v
update_compile_order
set_property top ${proj_name}_wrapper [current_fileset]
update_compile_order -fileset sources_1
save_bd_design
validate_bd_design
generate_target all [get_files $proj_dir/${proj_name}.srcs/sources_1/bd/$proj_name/${proj_name}.bd]
set fd [open $proj_dir/README.hw w]
puts $fd "##########################################################################"
puts $fd "This is a brief document containing design specific details for : ${board}"
puts $fd "This is auto-generated by Petalinux ref-design builder created @ [clock format [clock seconds] -format {%a %b %d %H:%M:%S %Z %Y}]"
puts $fd "##########################################################################"
set board_part [get_board_parts [current_board_part -quiet]]
if { $board_part != "" } {
    puts $fd "BOARD: $board_part"
}
set design_name [get_property NAME [get_bd_designs]]
puts $fd "BLOCK DESIGN: $design_name"
set columns {%40s%30s%15s%50s}
puts $fd [string repeat - 150]
puts $fd [format $columns "MODULE INSTANCE NAME" "IP TYPE" "IP VERSION" "IP"]
puts $fd [string repeat - 150]
foreach ip [get_ips] {
    set catlg_ip [get_ipdefs -all [get_property IPDEF $ip]]
    puts $fd [format $columns [get_property NAME $ip] [get_property NAME $catlg_ip] [get_property VERSION $catlg_ip] [get_property VLNV $catlg_ip]]
}
close $fd
set_property synth_checkpoint_mode Hierarchical [get_files $proj_dir/${proj_name}.srcs/sources_1/bd/$proj_name/${proj_name}.bd]
#launch_runs synth_1 -jobs $jobs
#wait_on_run synth_1
launch_runs impl_1 -to_step write_bitstream -jobs $jobs
wait_on_run impl_1
open_run impl_1
set_property platform.board_id $proj_name [current_project]
set_property platform.default_output_type "xclbin" [current_project]
set_property platform.design_intent.datacenter false [current_project]
set_property platform.design_intent.embedded true [current_project]
set_property platform.design_intent.external_host false [current_project]
set_property platform.design_intent.server_managed false [current_project]
set_property platform.extensible true [current_project]
set_property platform.platform_state "pre_synth" [current_project]
set_property platform.name $proj_name [current_project]
set_property platform.vendor "avnet" [current_project]
set_property platform.version "1.0" [current_project]
#write_hw_platform -force -file $proj_dir/${proj_name}.xsa
write_hw_platform -force -file $proj_dir/${proj_name}.xsa -include_bit
validate_hw_platform -verbose $proj_dir/${proj_name}.xsa
exit
Create the following file with the content generated by running the "write_bd_tcl -no-ip-version config_bd.tcl" command in the original Vivado project.
avnet-vitis-platforms/u96v2/platforms/vivado/u96v2_sbc_base/scripts/config_bd.tcl
Copy the PWM_w_Int IP core from the original Vivado project.
avnet-vitis-platforms/u96v2/platforms/vivado/u96v2_sbc_base/ip/PWM_w_Int
$ cp -r ~/Avnet_2022_2/hdl/ip/PWM_w_Int platforms/vivado/u96v2_sbc_base/ip/.
Create the following file from the original Vivado project's constraints file:
avnet-vitis-platforms/u96v2/platforms/vivado/u96v2_sbc_base/xdc/pin.xdc
#######################################################################
# Ultra96 Bluetooth UART Modem Signals
#######################################################################
set_property IOSTANDARD LVCMOS18 [get_ports bt*]
#BT_HCI_RTS on FPGA / emio_uart0_ctsn
set_property PACKAGE_PIN B7 [get_ports bt_ctsn]
#BT_HCI_CTS on FPGA / emio_uart0_rtsn
set_property PACKAGE_PIN B5 [get_ports bt_rtsn]
#######################################################################
# Ultra96 LS Mezzanine UARTs
#######################################################################
set_property IOSTANDARD LVCMOS18 [get_ports ls_mezz_uart*]
#HD_GPIO_2 on FPGA / Connector pin 7
set_property PACKAGE_PIN F8 [get_ports ls_mezz_uart0_rx]
#HD_GPIO_1 on FPGA / Connector pin 5
set_property PACKAGE_PIN F7 [get_ports ls_mezz_uart0_tx]
#HD_GPIO_5 on FPGA / Connector pin 13
set_property PACKAGE_PIN G5 [get_ports ls_mezz_uart1_rx]
#HD_GPIO_4 on FPGA / Connector pin 11
set_property PACKAGE_PIN F6 [get_ports ls_mezz_uart1_tx]
#######################################################################
# Ultra96 LS Mezzanine Resets
#######################################################################
set_property IOSTANDARD LVCMOS18 [get_ports ls_mezz_rst*]
#HD_GPIO_7 on FPGA / Connector pin 31
set_property PACKAGE_PIN B6 [get_ports {ls_mezz_rst[1]}]
#HD_GPIO_14 on FPGA / Connector pin 32
set_property PACKAGE_PIN A7 [get_ports {ls_mezz_rst[0]}]
#######################################################################
# Ultra96 LS Mezzanine Interrupts
#######################################################################
set_property IOSTANDARD LVCMOS18 [get_ports ls_mezz_int*]
#HD_GPIO_8 on FPGA / Connector pin 33
set_property PACKAGE_PIN G6 [get_ports {ls_mezz_int[0]}]
#HD_GPIO_15 on FPGA / Connector pin 34
set_property PACKAGE_PIN C5 [get_ports {ls_mezz_int[1]}]
#######################################################################
# Ultra96 LS Mezzanine PWMs
#######################################################################
# These constraints are used for when connecting the LS Mezzanine PWM to
# the PWM_w_Int custom IP block.
set_property IOSTANDARD LVCMOS18 [get_ports ls_mezz_pwm*]
#HD_GPIO_6 on FPGA / Connector pin 29 / PWM1
set_property PACKAGE_PIN A6 [get_ports {ls_mezz_pwm0[0]}]
#HD_GPIO_13 on FPGA / Connector pin 30 / PWM2
set_property PACKAGE_PIN C7 [get_ports {ls_mezz_pwm1[0]}]
#######################################################################
# Ultra96 WiFi & BT LEDs
#######################################################################
set_property IOSTANDARD LVCMOS18 [get_ports *_en_led*]
#RADIO_LED0 on FPGA / LED D9 / WiFi LED
set_property PACKAGE_PIN A9 [get_ports {wifi_en_led_tri_o[0]}]
#RADIO_LED1 on FPGA / LED D10 / Bluetooth LED
set_property PACKAGE_PIN B9 [get_ports {bt_en_led_tri_o[0]}]
#######################################################################
# Ultra96 Fan
#######################################################################
set_property IOSTANDARD LVCMOS12 [get_ports {fan_pwm_tri_o[0]}]
#FAN_PWM on FPGA
set_property PACKAGE_PIN F4 [get_ports {fan_pwm_tri_o[0]}]
#######################################################################
# Ultra96 High Speed Mezzanine Connections
#######################################################################
set_property IOSTANDARD LVCMOS12 [get_ports hs_mezz_csi0_c*]
#CSI0_C_P on FPGA / Connector pin 2
set_property PACKAGE_PIN N2 [get_ports {hs_mezz_csi0_c[0]}]
#CSI0_C_N on FPGA / Connector pin 4
set_property PACKAGE_PIN P1 [get_ports {hs_mezz_csi0_c[1]}]
###
set_property IOSTANDARD LVCMOS12 [get_ports hs_mezz_csi0_d*]
#CSI0_D0_P on FPGA / Connector pin 8
set_property PACKAGE_PIN N5 [get_ports {hs_mezz_csi0_d[0]}]
#CSI0_D0_N on FPGA / Connector pin 10
set_property PACKAGE_PIN N4 [get_ports {hs_mezz_csi0_d[1]}]
#CSI0_D1_P on FPGA / Connector pin 14
set_property PACKAGE_PIN M2 [get_ports {hs_mezz_csi0_d[2]}]
#CSI0_D1_N on FPGA / Connector pin 16
set_property PACKAGE_PIN M1 [get_ports {hs_mezz_csi0_d[3]}]
#CSI0_D2_P on FPGA / Connector pin 20
set_property PACKAGE_PIN M5 [get_ports {hs_mezz_csi0_d[4]}]
#CSI0_D2_N on FPGA / Connector pin 22
set_property PACKAGE_PIN M4 [get_ports {hs_mezz_csi0_d[5]}]
#CSI0_D3_P on FPGA / Connector pin 26
set_property PACKAGE_PIN L2 [get_ports {hs_mezz_csi0_d[6]}]
#CSI0_D3_N on FPGA / Connector pin 28
set_property PACKAGE_PIN L1 [get_ports {hs_mezz_csi0_d[7]}]
###
set_property IOSTANDARD LVCMOS12 [get_ports hs_mezz_csi1_c*]
#CSI1_C_P on FPGA / Connector pin 54
set_property PACKAGE_PIN T3 [get_ports {hs_mezz_csi1_c[0]}]
#CSI1_C_N on FPGA / Connector pin 56
set_property PACKAGE_PIN T2 [get_ports {hs_mezz_csi1_c[1]}]
###
set_property IOSTANDARD LVCMOS12 [get_ports hs_mezz_csi1_d*]
#CSI1_D0_P on FPGA / Connector pin 42
set_property PACKAGE_PIN P3 [get_ports {hs_mezz_csi1_d[0]}]
#CSI1_D0_N on FPGA / Connector pin 44
set_property PACKAGE_PIN R3 [get_ports {hs_mezz_csi1_d[1]}]
#CSI1_D1_P on FPGA / Connector pin 48
set_property PACKAGE_PIN U2 [get_ports {hs_mezz_csi1_d[2]}]
#CSI1_D1_N on FPGA / Connector pin 50
set_property PACKAGE_PIN U1 [get_ports {hs_mezz_csi1_d[3]}]
###
set_property IOSTANDARD LVCMOS18 [get_ports hs_mezz_csi*_mclk*]
#CSI0_MCLK on FPGA / Connector pin 15
set_property PACKAGE_PIN E8 [get_ports {hs_mezz_csi0_mclk[0]}]
#CSI1_MCLK on FPGA / Connector pin 17
set_property PACKAGE_PIN D8 [get_ports {hs_mezz_csi1_mclk[0]}]
###
set_property IOSTANDARD LVCMOS12 [get_ports hs_mezz_dsi_clk*]
#DSI_CLK_P on FPGA / Connector pin 21
set_property PACKAGE_PIN J5 [get_ports {hs_mezz_dsi_clk[1]}]
#DSI_CLK_N on FPGA / Connector pin 23
set_property PACKAGE_PIN H5 [get_ports {hs_mezz_dsi_clk[0]}]
###
set_property IOSTANDARD LVCMOS12 [get_ports hs_mezz_dsi_d*]
#DSI_D0_P on FPGA / Connector pin 27
set_property PACKAGE_PIN G1 [get_ports {hs_mezz_dsi_d[0]}]
#DSI_D0_N on FPGA / Connector pin 29
set_property PACKAGE_PIN F1 [get_ports {hs_mezz_dsi_d[1]}]
#DSI_D1_P on FPGA / Connector pin 33
set_property PACKAGE_PIN E4 [get_ports {hs_mezz_dsi_d[2]}]
#DSI_D1_N on FPGA / Connector pin 35
set_property PACKAGE_PIN E3 [get_ports {hs_mezz_dsi_d[3]}]
#DSI_D2_P on FPGA / Connector pin 39
set_property PACKAGE_PIN E1 [get_ports {hs_mezz_dsi_d[4]}]
#DSI_D2_N on FPGA / Connector pin 41
set_property PACKAGE_PIN D1 [get_ports {hs_mezz_dsi_d[5]}]
#DSI_D3_P on FPGA / Connector pin 45
set_property PACKAGE_PIN D3 [get_ports {hs_mezz_dsi_d[6]}]
#DSI_D3_N on FPGA / Connector pin 47
set_property PACKAGE_PIN C3 [get_ports {hs_mezz_dsi_d[7]}]
###
set_property IOSTANDARD LVCMOS12 [get_ports {hs_mezz_hsic_str[0]}]
#HSIC_STR on FPGA / Connector pin 57
set_property PACKAGE_PIN A2 [get_ports {hs_mezz_hsic_str[0]}]
###
set_property IOSTANDARD LVCMOS12 [get_ports {hs_mezz_hsic_d[0]}]
#HSIC_DATA on FPGA / Connector pin 59
set_property PACKAGE_PIN C2 [get_ports {hs_mezz_hsic_d[0]}]
Creating the u96v2/overlays sub-directory structure
Start by creating a symbolic link to the dpu_ip in the common directory structure:
$ cd ~/Avnet_2022_2/avnet-vitis-platforms/u96v2/overlays
$ ln -sf ../../common/overlays/dpu_ip dpu_ip
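Because the link is relative, it only resolves when the common and u96v2 trees sit side by side. Here is a quick self-contained check of that layout (it uses a throwaway directory, illustrative only):

```shell
#!/bin/sh
# Verify the relative dpu_ip symlink resolves in the expected tree layout.
tmp=$(mktemp -d)
mkdir -p "$tmp/common/overlays/dpu_ip" "$tmp/u96v2/overlays"
ln -sf ../../common/overlays/dpu_ip "$tmp/u96v2/overlays/dpu_ip"
# test -d follows the symlink, confirming the target directory is reachable.
ok=$(test -d "$tmp/u96v2/overlays/dpu_ip" && echo yes)
echo "dpu_ip link resolves: $ok"
rm -rf "$tmp"
```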
Copy the entire benchmark sub-directory from the KV260 content:
$ mkdir examples
$ cp -r ~/Avnet_2022_2/kria-vitis-platforms/kv260/overlays/examples/benchmark examples/.
The ZU3EG device does not have as many resources as the K26. For this reason, the DPU configuration must be changed to reflect the available resources, and the operating frequency of the DPU must be lowered:
Edit the DPU configuration file as follows:
avnet-vitis-platforms/u96v2/overlays/examples/benchmark/dpu_conf.vh
...
//`define B4096
`define B2304
...
//`define URAM_ENABLE
`define URAM_DISABLE
...
//`define DSP48_USAGE_HIGH
`define DSP48_USAGE_LOW
...
Edit the project configuration file as follows:
avnet-vitis-platforms/u96v2/overlays/examples/benchmark/prj_conf/prj_config_1dpu
...
#freqHz=300000000:DPUCZDX8G_1.aclk
#freqHz=600000000:DPUCZDX8G_1.ap_clk_2
freqHz=200000000:DPUCZDX8G_1.aclk
freqHz=400000000:DPUCZDX8G_1.ap_clk_2
...
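The two frequencies change together because the DPUCZDX8G runs its DSP slices from ap_clk_2 at exactly twice the aclk rate. A trivial check of the ratio chosen above:

```shell
#!/bin/sh
# The DPU's double-data-rate DSP clock (ap_clk_2) must be 2x aclk.
aclk=200000000
ap_clk_2=400000000
ratio_ok=no
if [ "$ap_clk_2" -eq $((2 * aclk)) ]; then
    ratio_ok=yes
fi
echo "ap_clk_2 = 2 x aclk: $ratio_ok"
```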
Building the u96v2_sbc_base platform
With the avnet-vitis-platforms directory in place, we can now build the Vitis platform for the base design, as follows:
$ cd ~/Avnet_2022_2/avnet-vitis-platforms/u96v2
$ make platform PFM=u96v2_sbc_base
The build will take some time...
When complete, we can query the Vitis platform, as follows:
$ platforminfo platforms/avnet_u96v2_sbc_base_2022_2/u96v2_sbc_base.xpfm
==========================
Basic Platform Information
==========================
Platform: u96v2_sbc_base
File: ../platforms/avnet_u96v2_sbc_base_2022_2/u96v2_sbc_base.xpfm
Description: u96v2_sbc_base
=====================================
Hardware Platform (Shell) Information
=====================================
Vendor: avnet
Board: u96v2_sbc_base
Name: u96v2_sbc_base
Version: 1.0
Generated Version: 2022.2
Hardware: 1
Software Emulation: 1
Hardware Emulation: 1
Hardware Emulation Platform: 0
FPGA Family: zynquplus
FPGA Device: xczu1cg
Board Vendor: avnet.com
Board Name: avnet.com:zuboard_1cg:1.1
Board Part: xczu1cg-sbva484-1-e
=================
Clock Information
=================
Default Clock Index: 0
Clock Index: 0
Frequency: 150.000000
Clock Index: 1
Frequency: 300.000000
Clock Index: 2
Frequency: 75.000000
Clock Index: 3
Frequency: 100.000000
Clock Index: 4
Frequency: 200.000000
Clock Index: 5
Frequency: 400.000000
Clock Index: 6
Frequency: 600.000000
=====================
Resource Availability
=====================
=====
Total
=====
LUTs: 58001
FFs: 126851
BRAMs: 212
DSPs: 360
==================
Memory Information
==================
Bus SP Tag: HP0
Bus SP Tag: HP1
Bus SP Tag: HP2
Bus SP Tag: HP3
Bus SP Tag: HPC0
Bus SP Tag: HPC1
=============================
Software Platform Information
=============================
Number of Runtimes: 1
Default System Configuration: u96v2_sbc_base
System Configurations:
System Config Name: u96v2_sbc_base
System Config Description: u96v2_sbc_base
System Config Default Processor Group: smp_linux
System Config Default Boot Image: standard
System Config Is QEMU Supported: 1
System Config Processor Groups:
Processor Group Name: smp_linux
Processor Group CPU Type: cortex-a53
Processor Group OS Name: linux
System Config Boot Images:
Boot Image Name: standard
Boot Image Type:
Boot Image BIF: u96v2_sbc_base/boot/linux.bif
Boot Image Data: u96v2_sbc_base/smp_linux/image
Boot Image Boot Mode: sd
Boot Image RootFileSystem:
Boot Image Mount Path: /mnt
Boot Image Read Me: u96v2_sbc_base/boot/generic.readme
Boot Image QEMU Args: u96v2_sbc_base/qemu/pmu_args.txt:u96v2_sbc_base/qemu/qemu_args.txt
Boot Image QEMU Boot:
Boot Image QEMU Dev Tree:
Supported Runtimes:
Runtime: OpenCL
With the platform successfully built, we can now build the benchmark overlay, as follows:
$ cd ~/Avnet_2022_2/avnet-vitis-platforms/u96v2
$ make overlay OVERLAY=benchmark
The build will take some time...
When complete, the build artifacts will be found in the following directory:
overlays/examples/benchmark/binary_container_1/sd_card
overlays
└── examples
└── benchmark
└── binary_container_1
└── sd_card
├── arch.json
├── dpu.xclbin
├── u96v2_sbc_base.hwh
└── u96v2_sbc_base_wrapper.bit
These are the files that we need to create a firmware overlay, and to compile models for our specific DPU architecture (B2304, low RAM usage, etc.).
Creating the avnet-u96v2-benchmark app
With the overlay successfully built, we can now create our firmware overlay for this new design, which we will name:
- {vendor}_{platform}_{design}
- avnet_u96v2_benchmark
Petalinux provides a command to create a yocto recipe for these firmware overlays:
$ petalinux-create -t apps \
    --template fpgamanager -n {firmware} \
    --enable \
    --srcuri "{path}/{firmware}.bit \
              {path}/{firmware}.dtsi \
              {path}/{firmware}.xclbin \
              {path}/shell.json" \
    --force
Before using this command, however, we need to set up the files required for our firmware.
We start by copying the .bit and .xclbin files, and creating the shell.json file, for the benchmark design:
$ cd ~/Avnet_2022_2/petalinux/projects/u96v2_sbc_2022_2
$ mkdir -p firmware/avnet_u96v2_benchmark
$ cp ../../../avnet-vitis-platforms/u96v2/overlays/examples/benchmark/binary_container_1/sd_card/u96v2_sbc_base_wrapper.bit firmware/avnet_u96v2_benchmark/avnet_u96v2_benchmark.bit
$ cp ../../../avnet-vitis-platforms/u96v2/overlays/examples/benchmark/binary_container_1/sd_card/dpu.xclbin firmware/avnet_u96v2_benchmark/avnet_u96v2_benchmark.xclbin
$ echo '{ "shell_type":"XRT_FLAT", "num_slots":1 }' > firmware/avnet_u96v2_benchmark/shell.json
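The shell.json payload must be valid JSON for the firmware-management tooling to parse it. A self-contained sanity check, written to a throwaway file here:

```shell
#!/bin/sh
# Validate the shell.json payload with Python's JSON parser.
tmp=$(mktemp -d)
echo '{ "shell_type":"XRT_FLAT", "num_slots":1 }' > "$tmp/shell.json"
if python3 -m json.tool "$tmp/shell.json" > /dev/null 2>&1; then
    json_ok=yes
else
    json_ok=no
fi
echo "shell.json is valid JSON: $json_ok"
rm -rf "$tmp"
```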
We will use the same .dtsi as the base design:
$ cp firmware/avnet_u96v2_base/avnet_u96v2_base.dtsi firmware/avnet_u96v2_benchmark/avnet_u96v2_benchmark.dtsi
Edit this file to change the overlay name, as follows:
firmware/avnet_u96v2_benchmark/avnet_u96v2_benchmark.dtsi
...
&fpga_full {
	...
	firmware-name = "avnet_u96v2_benchmark.bit.bin";
	...
};
...
We can now create our firmware overlay, as follows:
$ petalinux-create -t apps \
--template fpgamanager -n avnet-u96v2-benchmark \
--enable \
--srcuri "firmware/avnet_u96v2_benchmark/avnet_u96v2_benchmark.bit \
firmware/avnet_u96v2_benchmark/avnet_u96v2_benchmark.xclbin \
firmware/avnet_u96v2_benchmark/avnet_u96v2_benchmark.dtsi \
firmware/avnet_u96v2_benchmark/shell.json" \
--force
This will have created new entries in the user-rootfsconfig and rootfs_config configuration files. Add the "vitis-ai-library-*" packages to these files, as follows:
project-spec/meta-user/conf/user-rootfsconfig
...
CONFIG_avnet-u96v2-base
CONFIG_avnet-u96v2-dualcam
CONFIG_xmutil
CONFIG_avnet-u96v2-benchmark
CONFIG_vitis_ai_library
CONFIG_vitis_ai_library-dev
CONFIG_vitis_ai_library-dbg
...
project-spec/configs/rootfs_config
...
#
# apps
#
CONFIG_avnet-u96v2-base=y
CONFIG_avnet-u96v2-dualcam=y
CONFIG_avnet-u96v2-benchmark=y
#
# user packages
#
...
CONFIG_xmutil=y
CONFIG_vitis_ai_library=y
CONFIG_vitis_ai_library-dev=y
...
Adding the Vitis-AI 3.0 yocto recipes
By default, the petalinux project will build version 2.5 of the vitis_ai_library packages, which is not what we want. Since we want version 3.0 of the vitis_ai_library packages, we need to copy the new yocto recipes, as described here:
https://github.com/Xilinx/Vitis-AI/blob/v3.0/src/vai_petalinux_recipes/README.md
We start by cloning the Vitis-AI 3.0 repository:
$ cd ~/Avnet_2022_2
$ git clone -b v3.0 https://github.com/Xilinx/Vitis-AI
Then we copy the yocto recipes for Vitis-AI 3.0:
$ cd ~/Avnet_2022_2/petalinux/projects/u96v2_sbc_2022_2
$ cp -r ~/Avnet_2022_2/Vitis-AI/src/vai_petalinux_recipes/recipes-vitis-ai project-spec/meta-user/.
For our Vitis implementation, we need to remove one file: the vart_3.0_vivado.bb recipe.
$ rm project-spec/meta-user/recipes-vitis-ai/vart/vart_3.0_vivado.bb
We can now rebuild the petalinux project:
$ petalinux-build
Verifying the avnet-u96v2-benchmark app
In order to verify our new benchmark app, we need to program our new SD card image to a micro-SD card (32GB or larger):
~/Avnet_2022_2/petalinux/projects/u96v2_sbc_2022_2/images/linux/rootfs.wic
To do this, we use Balena Etcher, which is available for most operating systems.
Once programmed, insert the micro-SD card into the Ultra96-V2, and connect up the platform as shown below.
Press the power push-button to boot the board. After Linux has booted, login as the "root" user:
u96v2-sbc-2022-2 login: root
root@u96v2-sbc-2022-2:~#
The first verification is to confirm the presence of the benchmark overlay:
root@u96v2-sbc-2022-2:~# xmutil listapps
Accelerator Accel_type Base Base_type #slots Active_slot
avnet-u96v2-base XRT_FLAT avnet-u96v2-base XRT_FLAT (0+0) -1
avnet-u96v2-dualcam XRT_FLAT avnet-u96v2-dualcam XRT_FLAT (0+0) -1
avnet-u96v2-benchmark XRT_FLAT avnet-u96v2-benchmark XRT_FLAT (0+0) -1
root@u96v2-sbc-2022-2:~#
The second verification is to load the benchmark overlay:
root@u96v2-sbc-2022-2:~# xmutil loadapp avnet-u96v2-benchmark
[ 379.787870] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /fpga-full/firmware-name
[ 379.798021] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /fpga-full/resets
[ 379.808820] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/afi0
[ 379.818317] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/PWM_w_Int_0
[ 379.828422] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/PWM_w_Int_1
[ 379.838522] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/axi_bram_ctrl_0
[ 379.848973] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/axi_gpio_0
[ 379.858989] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/axi_gpio_1
[ 379.869004] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/axi_gpio_2
[ 379.879013] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/axi_intc_0
[ 379.889030] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/axi_uart16550_0
[ 379.899482] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/axi_uart16550_1
[ 379.909935] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/system_management_wiz_0
[ 380.534444] xadc a0080000.system_management_wiz: IRQ index 0 not found
[ 380.545204] zocl-drm axi:zyxclmm_drm: IRQ index 32 not found
avnet-u96v2-benchmark: loaded to slot 0
Note that the following WARNING occurs even when the dynamic device tree is working correctly, so it can be ignored:
OF: overlay: WARNING: memory leak will occur if overlay removed
Note that after loading the benchmark overlay, the content of /etc/vart.conf changes to the following:
root@u96v2-sbc-2022-2:~# cat /etc/vart.conf
firmware: /lib/firmware/xilinx/avnet-u96v2-benchmark/avnet-u96v2-benchmark.xclbin
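The firmware: entry is how VART locates the xclbin for the currently loaded overlay. As a minimal sketch, the path can be parsed out of the file with sed (shown here against a local copy; on the target, read /etc/vart.conf directly):

```shell
# Local copy of vart.conf to parse (on the target, use /etc/vart.conf)
cat > /tmp/vart.conf <<'EOF'
firmware: /lib/firmware/xilinx/avnet-u96v2-benchmark/avnet-u96v2-benchmark.xclbin
EOF
# Extract the path that follows the "firmware: " prefix
fw=$(sed -n 's/^firmware: //p' /tmp/vart.conf)
echo "$fw"
```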
We can query the active DPU-enabled design with the xdputil utility:
root@u96v2-sbc-2022-2:~# xdputil query
{
"DPU IP Spec":{
"DPU Core Count":1,
"IP version":"v4.1.0",
"generation timestamp":"2023-02-21 21-30-00",
"git commit id":"7d32c41",
"git commit time":2023022121,
"regmap":"1to1 version"
},
"VAI Version":{
"libvart-runner.so":"Xilinx vart-runner Version: 3.0.0-331ba47f80502ef2a1f37b3f7ce616b31c22e577 86 2023-01-02-21:50:29 ",
"libvitis_ai_library-dpu_task.so":"Xilinx vitis_ai_library dpu_task Version: 3.0.0-1cccff04dc341c4a6287226828f90aed56005f4f 86 2023-01-02 14:31:50 [UTC] ",
"libxir.so":"Xilinx xir Version: xir-9204ac72103092a7b253a0c23ec7471481656940 2023-01-02-21:49:01",
"target_factory":"target-factory.3.0.0 860ed0499ab009084e2df3004eeb9ae710c26351"
},
"kernels":[
{
"DPU Arch":"DPUCZDX8G_ISA1_B2304_0101000016010405",
"DPU Frequency (MHz)":200,
"IP Type":"DPU",
"Load Parallel":2,
"Load augmentation":"enable",
"Load minus mean":"disable",
"Save Parallel":2,
"XRT Frequency (MHz)":200,
"cu_addr":"0xb0000000",
"cu_handle":"0xaaaaf14410d0",
"cu_idx":0,
"cu_mask":1,
"cu_name":"DPUCZDX8G:DPUCZDX8G_1",
"device_id":0,
"fingerprint":"0x101000016010405",
"name":"DPU Core 0"
}
]
}
We can also query the status of the DPU:
root@u96v2-sbc-2022-2:~# xdputil status
{
"kernels":[
{
"addrs_registers":{
"dpu0_base_addr_0":"0x0",
"dpu0_base_addr_1":"0x0",
"dpu0_base_addr_2":"0x0",
"dpu0_base_addr_3":"0x0",
"dpu0_base_addr_4":"0x0",
"dpu0_base_addr_5":"0x0",
"dpu0_base_addr_6":"0x0",
"dpu0_base_addr_7":"0x0"
},
"common_registers":{
"ADDR_CODE":"0x0",
"AP status":"idle",
"CONV END":0,
"CONV START":0,
"HP_ARCOUNT_MAX":7,
"HP_ARLEN":15,
"HP_AWCOUNT_MAX":7,
"HP_AWLEN":15,
"LOAD END":0,
"LOAD START":0,
"MISC END":0,
"MISC START":0,
"SAVE END":0,
"SAVE START":0
},
"name":"DPU Registers Core 0"
}
]
}
Further verification requires xmodel files that have been compiled for this specific DPU architecture (DPU B2304, low RAM usage,...).
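The fingerprint field reported by xdputil query identifies the DPU architecture, and a compiled xmodel must target the same value. A quick way to pull it out of the query output for comparison, sketched here against a saved copy (using sed rather than jq, which may not be installed on the target):

```shell
# Saved copy of the query output (on the target: xdputil query > /tmp/query.json)
cat > /tmp/query.json <<'EOF'
{ "kernels": [ { "DPU Arch": "DPUCZDX8G_ISA1_B2304_0101000016010405",
                 "fingerprint": "0x101000016010405" } ] }
EOF
# Pull out the fingerprint that the loaded overlay reports
fingerprint=$(sed -n 's/.*"fingerprint": *"\(0x[0-9a-fA-F]*\)".*/\1/p' /tmp/query.json)
echo "$fingerprint"
```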
Expanding the root file system
By default, the root file system has a size of ~4GB.
The next sections will install a significant amount of content, so the root file system must be increased to its full allowable size.
This can be done with the zynqmp_dpu_optimize.sh utility, found in the dpu_sw_optimize archive within the DPUCZDX8G reference design archive.
root@u96v2-sbc-2022-2:~# wget https://www.xilinx.com/bin/public/openDownload?filename=DPUCZDX8G_VAI_v3.0.tar.gz -O DPUCZDX8G_VAI_v3.0.tar.gz
root@u96v2-sbc-2022-2:~# tar -xvzf DPUCZDX8G_VAI_v3.0.tar.gz
root@u96v2-sbc-2022-2:~# cd DPUCZDX8G_VAI_v3.0/app
root@u96v2-sbc-2022-2:~/DPUCZDX8G_VAI_v3.0/app# ls
dpu_sw_optimize.tar.gz model samples
root@u96v2-sbc-2022-2:~/DPUCZDX8G_VAI_v3.0/app# tar -xvzf dpu_sw_optimize.tar.gz
dpu_sw_optimize/
dpu_sw_optimize/zynqmp/
dpu_sw_optimize/zynqmp/README.md
dpu_sw_optimize/zynqmp/functions/
dpu_sw_optimize/zynqmp/functions/zynqmp_qos_en.sh
dpu_sw_optimize/zynqmp/functions/ext4_auto_resize.sh
dpu_sw_optimize/zynqmp/functions/irps5401
dpu_sw_optimize/zynqmp/functions/irps5401.c
dpu_sw_optimize/zynqmp/zynqmp_dpu_optimize.sh
root@u96v2-sbc-2022-2:~/DPUCZDX8G_VAI_v3.0/app# cd dpu_sw_optimize/zynqmp/
root@u96v2-sbc-2022-2:~/DPUCZDX8G_VAI_v3.0/app/dpu_sw_optimize/zynqmp# ./zynqmp_dpu_optimize.sh
Auto resize ext4 partition ...[✔]
Start QoS config ...[✔]
After executing the optimization script, the root partition will have the full size of your SD card minus the size of the boot partition (~1GB).
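To confirm that the resize took effect, the root filesystem can be checked with df; a quick one-liner (the sizes reported will depend on your SD card):

```shell
# Report the root filesystem size, usage, and free space after the resize
df -h / | awk 'NR==2 {print "size: "$2"  used: "$3"  free: "$4}'
```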
Compiling the Model Zoo
Before we can use the AMD/Xilinx Model Zoo with the specific DPU architecture we included in our design, we need to compile those models.
The models can be downloaded from the Xilinx web site with the provided downloader.py Python script:
$ cd ~/Avnet_2022_2/Vitis-AI/model_zoo
$ python downloader.py
Each model can be compiled by following the online documentation:
https://xilinx.github.io/Vitis-AI/docs/workflow-model-zoo.html
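As an illustration of the compile step, the key input is an arch.json file describing the target DPU; its fingerprint must match the one reported by xdputil query on the board. A sketch follows (the vai_c_xir invocation is illustrative, with placeholder model and output names, and is meant to be run inside the Vitis-AI docker):

```shell
# arch.json tells the Vitis-AI compiler which DPU fingerprint to target;
# this value matches the B2304 DPU reported by xdputil query
cat > /tmp/arch.json <<'EOF'
{"fingerprint":"0x101000016010405"}
EOF
# Illustrative compile invocation (placeholder names; run inside the Vitis-AI docker):
#   vai_c_xir -x quantized.xmodel -a /tmp/arch.json -o compiled_model -n my_model
cat /tmp/arch.json
```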
For convenience, I am providing an archive of pre-compiled models for this specific DPU architecture.
Download the following models archive for the B2304 DPU architecture to the Ultra96-V2's root file system (i.e. via SSH):
- https://avnet.me/vitis-ai-3.0-models.0-b2304-lr
(2023/04/04 : md5sum = 3e23685ddbce25170a8967912aff8b01)
Then extract the archive to the /usr/share/vitis_ai_library directory:
root@u96v2-sbc-2022-2:~# cd /usr/share/vitis_ai_library
root@u96v2-sbc-2022-2:/usr/share/vitis_ai_library# tar -xvzf ~/vitis-ai-3.0-models.0-b2304-lr.tar.gz
root@u96v2-sbc-2022-2:/usr/share/vitis_ai_library# ln -sf models.b2304-lr models
Installing the Vitis-AI examples
The Vitis-AI examples can be found in the Vitis-AI repository under the examples directory:
Vitis-AI
├── ...
└── examples
├── ...
├── vai_library
├── ...
├── vai_runtime
└── ...
These can be copied to the root file system of the SD card image.
They also require archives of images and video files, which can be downloaded from the following links:
- vai_library :
  vitis_ai_library_r3.0.0_images.tar.gz
  vitis_ai_library_r3.0.0_video.tar.gz
- vai_runtime :
  vitis_ai_runtime_r3.0.0_image_video.tar.gz
For convenience, I am providing an archive of pre-built examples that can be downloaded from a single source.
Download the following examples archive to the Ultra96-V2's root file system (i.e. via SSH):
- https://avnet.me/vitis-ai-3.0-examples
(2023/04/04 : md5sum = 0af9ab73387ef8cc0f90e15bddbcbdb4)
Then extract the archive to the /home/root (~) directory:
root@u96v2-sbc-2022-2:~# cd ~
root@u96v2-sbc-2022-2:~# tar -xvzf vitis-ai-3.0-examples.tar.gz
Automatically booting avnet-u96v2-benchmark
The user can configure the image to automatically boot one of the firmware overlays. The /etc/dfx-mgrd/daemon.conf file indicates, via its default_accel entry, which file names the overlay to load at boot: /etc/dfx-mgrd/default_firmware.
root@u96v2-sbc-2022-2:~# cat /etc/dfx-mgrd/daemon.conf
{
"firmware_location": ["/lib/firmware/xilinx"],
"default_accel":"/etc/dfx-mgrd/default_firmware"
}
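If you edit daemon.conf, it should remain valid JSON. A quick syntax check can be done with python3 (shown here against a local copy; on the target, check /etc/dfx-mgrd/daemon.conf itself):

```shell
# Local copy of daemon.conf to validate (on the target, use /etc/dfx-mgrd/daemon.conf)
cat > /tmp/daemon.conf <<'EOF'
{
  "firmware_location": ["/lib/firmware/xilinx"],
  "default_accel": "/etc/dfx-mgrd/default_firmware"
}
EOF
# json.tool exits non-zero on a parse error
python3 -m json.tool /tmp/daemon.conf > /dev/null && echo "valid JSON"
```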
Recall that we modified this file in the previous project to load the base overlay by default ... we can change it to load the benchmark overlay:
root@u96v2-sbc-2022-2:~# cat /etc/dfx-mgrd/default_firmware
avnet-u96v2-base
root@u96v2-sbc-2022-2:~# echo avnet-u96v2-benchmark > /etc/dfx-mgrd/default_firmware
root@u96v2-sbc-2022-2:~# cat /etc/dfx-mgrd/default_firmware
avnet-u96v2-benchmark
The change will take effect at the next boot.
root@u96v2-sbc-2022-2:~# reboot
After boot, we start by querying which "apps" are present:
root@u96v2-sbc-2022-2:~# xmutil listapps
Accelerator Accel_type Base Base_type #slots Active_slot
avnet-u96v2-base XRT_FLAT avnet-u96v2-base XRT_FLAT (0+0) -1
avnet-u96v2-dualcam XRT_FLAT avnet-u96v2-dualcam XRT_FLAT (0+0) -1
avnet-u96v2-benchmark XRT_FLAT avnet-u96v2-benchmark XRT_FLAT (0+0) 0
root@u96v2-sbc-2022-2:~#
Notice that the avnet-u96v2-benchmark overlay has been loaded.
We can query the active DPU-enabled design with the xdputil utility:
root@u96v2-sbc-2022-2:~# xdputil query
{
"DPU IP Spec":{
"DPU Core Count":1,
"IP version":"v4.1.0",
"generation timestamp":"2023-02-21 21-30-00",
"git commit id":"7d32c41",
"git commit time":2023022121,
"regmap":"1to1 version"
},
"VAI Version":{
"libvart-runner.so":"Xilinx vart-runner Version: 3.0.0-331ba47f80502ef2a1f37b3f7ce616b31c22e577 86 2023-01-02-21:50:29 ",
"libvitis_ai_library-dpu_task.so":"Xilinx vitis_ai_library dpu_task Version: 3.0.0-1cccff04dc341c4a6287226828f90aed56005f4f 86 2023-01-02 14:31:50 [UTC] ",
"libxir.so":"Xilinx xir Version: xir-9204ac72103092a7b253a0c23ec7471481656940 2023-01-02-21:49:01",
"target_factory":"target-factory.3.0.0 860ed0499ab009084e2df3004eeb9ae710c26351"
},
"kernels":[
{
"DPU Arch":"DPUCZDX8G_ISA1_B2304_0101000016010405",
"DPU Frequency (MHz)":200,
"IP Type":"DPU",
"Load Parallel":2,
"Load augmentation":"enable",
"Load minus mean":"disable",
"Save Parallel":2,
"XRT Frequency (MHz)":200,
"cu_addr":"0xb0000000",
"cu_handle":"0xaaaaf14410d0",
"cu_idx":0,
"cu_mask":1,
"cu_name":"DPUCZDX8G:DPUCZDX8G_1",
"device_id":0,
"fingerprint":"0x101000016010405",
"name":"DPU Core 0"
}
]
}
Executing the Vitis-AI examples
There are too many examples to cover in this section, but we can cover an alternative to the face detection example: face mask detection.
root@u96v2-sbc-2022-2:~# cd Vitis-AI/examples/vai_library/samples/yolov4
root@u96v2-sbc-2022-2:~/Vitis-AI/examples/vai_library/samples/yolov4# ./test_video_yolov4 face_mask_detection_t 0
The current version of this project has the following known issues:
- vitis_ai_library packages, built from source, do not work
Until this is resolved, a workaround (using pre-built packages) has been provided.
Conclusion
I hope this tutorial helped you understand how to add Vitis-AI 3.0 functionality to your Ultra96-V2 and/or custom platform.
If you would like the pre-built SD card image for this project, please let me know in the comments below.
Revision History
2023/04/11
Fix instructions in section "Adding the Vitis-AI 3.0 yocto recipes" (need to remove vart_3.0_vivado.bb).
2023/04/04
Preliminary Version