The purpose of this project is to identify the type of fruit through the camera and Vitis-AI running on the ZCU104. The ZCU104 then calculates the price from the weight data fed back by the electronic scale and the fruit type obtained through the camera. Finally, a QR code is generated through the Alipay interface, and the customer only needs to scan the QR code to complete the payment.
STEP 1. ZCU104 VCU + ML Platform Build
Introduction
This document describes steps that can be used to build the ``ZCU104 VCU 8-channel video decode + ML`` demo provided on the Xilinx download page - https://www.xilinx.com/support/download/index.html/content/xilinx/en/downloadNav/embedded-platforms/2019-2.html
Prerequisites
Linux host machine
2019.2 Xilinx tools
Docker
HDMI IP License (if building or modifying the platform)
This document assumes that the 2019.2 Xilinx tools are set up on the host machine.
- Part 0: Project set up
- Create a project directory of your choosing (e.g. ~/zcu104_vcu_ml) and create an environment variable that points to that location.
mkdir ~/zcu104_vcu_ml
export PROJ_DIR=~/zcu104_vcu_ml
- Clone this repository
cd $PROJ_DIR
git clone https://xterra2.avnet.com/xilinx/ZCU104/zcu104-vcu-ml-build-example
- Part 1: Building the ZCU104 Vitis platform with VCU and ML support
This section describes how to build the Vitis embedded platform with VCU and ML support.
- From your Linux host machine clone the Xilinx platform repository
cd $PROJ_DIR
git clone https://github.com/Xilinx/Vitis_Embedded_Platform_Source.git
- Navigate to the zcu104_vcu_ml directory in the cloned repository
cd Vitis_Embedded_Platform_Source/Xilinx_Official_Platforms/zcu104_vcu_ml
- If you are using Ubuntu 18.04 and have fewer than 16 cores on your machine, then you will need to modify the Vivado build script to reduce the number of parallel jobs.
The path to the script is ``vivado/zcu104_vcu_ml_xsa.tcl``. Searching for the keyword ``launch_runs`` shows that the default script is set up to run 16 jobs. On my virtual machine this causes the build to fail since the machine only has 6 cores.
cat -n vivado/zcu104_vcu_ml_xsa.tcl | grep launch_runs
2734 launch_runs impl_1 -to_step write_bitstream -jobs 16
Reduce the number of jobs to a number suitable for your machine.
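For example, a one-line ``sed`` edit can lower the job count (a minimal sketch, assuming 4 jobs suits a 6-core machine):
# Reduce the Vivado parallel job count from 16 to 4; adjust to your core count
sed -i 's/-jobs 16/-jobs 4/' vivado/zcu104_vcu_ml_xsa.tcl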
- The ``zcu104_vcu_ml`` platform located in the Xilinx platform repository needs to be modified to host the root file system on the SD card instead of in RAM. Execute the following command to update the PetaLinux project:
bash -x $PROJ_DIR/zcu104-vcu-ml-build-example/scripts/update_plnx.sh
- If you would like to use the Vitis-AI Runtime and Vitis-AI Library then you will need to add additional packages.
bash -x $PROJ_DIR/zcu104-vcu-ml-build-example/scripts/update_for_vart.sh
- Build the platform (this will take some time)
make all
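When the build completes, the platform file can be located with a quick search (the exact output path varies by repository version, so this avoids guessing it):
# Locate the generated Vitis platform file
find . -name "*.xpfm"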
- Reconfigure the PetaLinux kernel to add the USB CDC ACM driver
cd <petalinux_project_root>
petalinux-config -c kernel
Go to Device Drivers --> USB Support --> USB Gadget Support and enable the required drivers as shown below:
[*] USB functions configurable through configfs
[*] Abstract Control Model (CDC ACM)
[*] Mass Storage Gadget
[*] Serial Gadget (with CDC ACM and CDC OBEX support)
* Navigate to Device Drivers > Graphics Support > Xilinx DRM KMS Driver in the Linux kernel configuration menu and ensure that the driver is enabled.
* Similarly, navigate to Device Drivers > Multimedia Support > V4L Platform Devices and review the devices that are already enabled.
- Reconfigure the PetaLinux root file system
petalinux-config -c rootfs
In order to run VCU-based applications using the VCU software stack provided by Xilinx, open-source frameworks such as GStreamer and OMX must be enabled.
* Navigate to user packages > gstreamer-vcu-examples and ensure that the GStreamer VCU examples option is enabled.
* Navigate to Filesystem Packages > misc > hdmi-module and ensure that the HDMI module is enabled.
- Rebuild the Linux image
petalinux-build
From the ``images/linux`` directory of the PetaLinux project, copy ``boot.scr``, ``image.ub``, ``Image``, and ``system.dtb`` to the SD card boot partition, then extract the root file system tarball onto the ext4 root partition of the SD card:
sudo tar -C /media/jy/root -xvf ~/petalinux/images/linux/rootfs.tar.gz
sudo sync
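A sketch of the corresponding boot-partition copy, assuming the FAT32 partition is mounted at /media/jy/boot (adjust for your mount point):
# Copy the boot artifacts to the SD card boot partition
sudo cp ~/petalinux/images/linux/boot.scr ~/petalinux/images/linux/image.ub \
        ~/petalinux/images/linux/Image ~/petalinux/images/linux/system.dtb /media/jy/boot/
sudo sync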
Copy bl31.elf, pmufw.elf, u-boot.elf, and the FSBL (zynqmp_fsbl.elf) to the Vitis workspace; they will be used to add the DPU and generate BOOT.BIN in the next step.
- Part 2: Adding the DPU to the platform
- Clone the Vitis-AI v1.1 repository
git clone --branch v1.1 --single-branch https://github.com/Xilinx/Vitis-AI ~/Vitis-AI-v1.1
- Copy the DPU-TRD directory from the Vitis-AI v1.1 repository to the project directory
cp -r ~/Vitis-AI-v1.1/DPU-TRD $PROJ_DIR/.
- Modify the DPU configuration to enable URAM, and update the project configuration for 1 DPU kernel.
cd $PROJ_DIR/DPU-TRD/prj/Vitis
bash -x update_dpu_config.sh
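To confirm the change, you can inspect the DPU configuration header (the ``dpu_conf.vh`` file name is the DPU-TRD default and is assumed here):
# Verify that URAM is enabled in the DPU configuration
grep -i uram dpu_conf.vh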
- Source the XRT setup script (your path may vary)
source /opt/xilinx/xrt/setup.sh
- Build with the default DPU configuration. Set ``$PFM_DIR`` to the directory containing the ``zcu104_vcu_ml.xpfm`` platform file generated in Part 1 (the exact path depends on your build).
cd $PROJ_DIR/DPU-TRD/prj/Vitis
export SDX_PLATFORM=$PFM_DIR/zcu104_vcu_ml.xpfm
make KERNEL=DPU DEVICE=zcu104
Copy BOOT.BIN, dpu.xclbin, init.sh, platform_desc.txt, and zcu104_vcu_ml.hwh to the SD card boot partition.
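A sketch of this step, assuming the Vitis build places its outputs in ``binary_container_1/sd_card`` and the boot partition is mounted at /media/jy/boot:
# Copy the Vitis build outputs to the SD card boot partition
cd $PROJ_DIR/DPU-TRD/prj/Vitis/binary_container_1/sd_card
sudo cp BOOT.BIN dpu.xclbin init.sh platform_desc.txt zcu104_vcu_ml.hwh /media/jy/boot/
sudo sync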
STEP 2. Build TFSSD GStreamer plugin
Introduction
This document describes steps that can be used to create a GStreamer machine learning plugin that uses the Xilinx Vitis-AI Library. This tutorial provides detailed steps to create a TFSSD (TensorFlow SSD object detection) GStreamer plugin. The plugin is then tested on the ZCU104 Quad-Camera + ML Platform (https://xterra2.avnet.com/xilinx/zcu104/zcu104-mc4-ml-example).
Since this tutorial was written against the ZCU104 Quad-Camera + ML platform, you will see references to that directory structure throughout this document. If you are targeting a different platform (such as the VCU + ML platform built in Step 1), you will need to modify the paths accordingly.
Prerequisites
Linux host machine
2019.2 Xilinx tools
The ZCU104 platform built in Step 1
This document assumes that the 2019.2 Xilinx tools are set up on the host machine, and a target SDK (sysroot) is available for cross compilation.
Part 0: Project setup
- Create a project directory of your choosing (e.g. ~/test/gst_plugin_tutorial) and create an environment variable that points to that location.
mkdir ~/test/gst_plugin_tutorial
export GST_PROJ_DIR=~/test/gst_plugin_tutorial
- Clone this repository
cd $GST_PROJ_DIR
git clone https://xterra2.avnet.com/xilinx/ml/vai-gst-plugin-tutorial
Part 1: Setting up the cross-compilation environment with Vitis-AI-Library support
This section describes how to add Vitis-AI-Library support to an existing PetaLinux SDK.
**NOTE 1:** If you have already updated your PetaLinux SDK with the Vitis-AI libraries then you can skip this section. This section follows the steps provided in the Setting Up the Host section (https://github.com/Xilinx/Vitis-AI/tree/v1.1/Vitis-AI-Library#setting-up-the-host) of the Vitis-AI-Library setup instructions.
**NOTE 2:** All steps in this section are performed on the host machine
- Create an environment variable that points to your cross-compilation target root file system (the example below is for the Quad-Camera platform; adjust the path to match your platform build).
export SYSROOT=~/zcu104_mc4_ml/platform/zcu104_mc4/platform_repo/sysroot/sysroots/aarch64-xilinx-linux
- Source the cross-compilation environment setup script
unset LD_LIBRARY_PATH
source $SYSROOT/../../environment-setup-aarch64-xilinx-linux
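A quick sanity check that the cross-compilation environment took effect; the ``CC`` variable is exported by the setup script:
# Should print the aarch64 cross-compiler, including the --sysroot flag
echo $CC
$CC --version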
- Download the Vitis-AI runtime package
cd $GST_PROJ_DIR
wget https://www.xilinx.com/bin/public/openDownload?filename=vitis_ai_2019.2-r1.1.0.tar.gz -O vitis_ai_2019.2-r1.1.0.tar.gz
- Install the Vitis-AI runtime package in the cross-compilation environment
tar -xvzf vitis_ai_2019.2-r1.1.0.tar.gz -C $SYSROOT
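You can verify the installation by checking for the Vitis-AI-Library headers in the sysroot (the layout shown is an assumption based on the package contents):
# The Vitis-AI-Library headers should now be present
ls $SYSROOT/usr/include/vitis/ai/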
Part 2: Creating and cross-compiling the GStreamer plugin
This section describes how to create a GStreamer video filter plugin from the template and customize it to use the Vitis-AI-Library. This section culminates in a custom Vitis-AI TFSSD plugin.
- Download the GStreamer plugins-bad repository
cd $GST_PROJ_DIR
git clone https://github.com/GStreamer/gst-plugins-bad.git
- Create the TFSSD plugin from the video filter template
mkdir -p vaitfssd
cd vaitfssd
../gst-plugins-bad/tools/gst-element-maker vaitfssd videofilter
rm *.so *.o
mv gstvaitfssd.c gstvaitfssd.cpp
cd ..
**NOTE:** You may see a warning indicating ``gst-indent: command not found``. As long as the plugin ``.c`` and ``.h`` files are created, it should be okay.
- The TFSSD plugin will process video frames in-place instead of copying data from an input buffer to an output buffer. Remove the references in the plugin template code to the ``transform_frame()`` function, but make sure to leave the references to ``transform_frame_ip()``. The following command deletes the line of code that sets the ``transform_frame`` function in the ``class_init()`` function.
sed -i '/video_filter_class->transform_frame = /d' vaitfssd/*.cpp
The ``transform_frame()`` function is used when data from the input buffer is processed and then copied to a different output buffer. In this particular application the buffer copy is not necessary and the in-place (``transform_frame_ip``) function is used instead to modify the video frame data.
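To double-check the edit, a quick ``grep`` lists any remaining non-in-place references so they can be removed by hand (the ``\b`` prevents ``transform_frame_ip`` from matching):
# List remaining transform_frame (non-_ip) references, if any
grep -n "transform_frame\b" vaitfssd/*.cpp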
- Add the OpenCV and Vitis-AI-Library header files
* In ``vaitfssd/gstvaitfssd.cpp`` add
/* OpenCV header files */
#include <opencv2/core.hpp>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
/* Vitis-AI-Library specific header files */
#include <vitis/ai/tfssd.hpp>
#include <vitis/ai/nnpp/tfssd.hpp>
- Update the pad templates to reflect the supported pixel formats
* In ``gstvaitfssd.cpp`` Change
FROM:
/* FIXME: add/remove formats you can handle */
#define VIDEO_SRC_CAPS \
GST_VIDEO_CAPS_MAKE("{ I420, Y444, Y42B, UYVY, RGBA }")
/* FIXME: add/remove formats you can handle */
#define VIDEO_SINK_CAPS \
GST_VIDEO_CAPS_MAKE("{ I420, Y444, Y42B, UYVY, RGBA }")
TO:
/* Output (source pad) format */
#define VIDEO_SRC_CAPS \
GST_VIDEO_CAPS_MAKE("{ BGR }")
/* Input (sink pad) format */
#define VIDEO_SINK_CAPS \
GST_VIDEO_CAPS_MAKE("{ BGR }")
- Update the ``*_class_init()`` functions
* In ``gstvaitfssd.cpp`` change
FROM:
/* Setting up pads and setting metadata should be moved to
base_class_init if you intend to subclass this class. */
gst_element_class_add_pad_template (GST_ELEMENT_CLASS(klass),
gst_pad_template_new ("src", GST_PAD_SRC, GST_PAD_ALWAYS,
gst_caps_from_string (VIDEO_SRC_CAPS)));
gst_element_class_add_pad_template (GST_ELEMENT_CLASS(klass),
gst_pad_template_new ("sink", GST_PAD_SINK, GST_PAD_ALWAYS,
gst_caps_from_string (VIDEO_SINK_CAPS)));
gst_element_class_set_static_metadata (GST_ELEMENT_CLASS(klass),
"FIXME Long name", "Generic", "FIXME Description",
"FIXME <fixme@example.com>");
TO:
/* Setting up pads and setting metadata should be moved to
base_class_init if you intend to subclass this class. */
gst_element_class_add_pad_template (GST_ELEMENT_CLASS(klass),
gst_pad_template_new ("src", GST_PAD_SRC, GST_PAD_ALWAYS,
gst_caps_from_string (VIDEO_SRC_CAPS ",width = (int) [1, 640], height = (int) [1, 360]")));
gst_element_class_add_pad_template (GST_ELEMENT_CLASS(klass),
gst_pad_template_new ("sink", GST_PAD_SINK, GST_PAD_ALWAYS,
gst_caps_from_string (VIDEO_SINK_CAPS ", width = (int) [1, 640], height = (int) [1, 360]")));
gst_element_class_set_static_metadata (GST_ELEMENT_CLASS(klass),
"Face detection using the Vitis-AI-Library",
"Video Filter",
"TFSSD",
"FIXME <fixme@example.com>");
- Update the ``transform_frame_ip()`` function. The code snippet shown below, which draws the bounding boxes, is based on test_ssd.cpp (https://github.com/Xilinx/Vitis-AI/blob/v1.1/Vitis-AI-Library/ssd/test/test_ssd.cpp) from the Vitis-AI-Library GitHub repository.
* In ``gstvaitfssd.cpp`` change
FROM:
static GstFlowReturn
gst_vaitfssd_transform_frame_ip (GstVideoFilter * filter, GstVideoFrame * frame)
{
GstVaitfssd *vaitfssd = GST_VAITFSSD (filter);
GST_DEBUG_OBJECT (vaitfssd, "transform_frame_ip");
return GST_FLOW_OK;
}
TO:
static GstFlowReturn
gst_vaitfssd_transform_frame_ip (GstVideoFilter * filter, GstVideoFrame * frame)
{
GstVaitfssd *vaitfssd = GST_VAITFSSD (filter);
/* Create ssd detection object */
thread_local auto ssd = vitis::ai::TFSSD::create("ssd_mobilenet_v1_coco_tf");
/* Setup an OpenCV Mat with the frame data */
cv::Mat img(360, 640, CV_8UC3, GST_VIDEO_FRAME_PLANE_DATA(frame, 0));
/* Perform ssd detection */
auto results = ssd->run(img);
/* Draw bounding boxes */
for (auto &box : results.bboxes)
{
int xmin = box.x * img.cols;
int ymin = box.y * img.rows;
int xmax = xmin + (box.width * img.cols);
int ymax = ymin + (box.height * img.rows);
xmin = std::min(std::max(xmin, 0), img.cols);
xmax = std::min(std::max(xmax, 0), img.cols);
ymin = std::min(std::max(ymin, 0), img.rows);
ymax = std::min(std::max(ymax, 0), img.rows);
cv::rectangle(img, cv::Point(xmin, ymin), cv::Point(xmax, ymax), cv::Scalar(0, 255, 0), 2, 1, 0);
}
GST_DEBUG_OBJECT (vaitfssd, "transform_frame_ip");
return GST_FLOW_OK;
}
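Note the ``thread_local`` qualifier on the TFSSD object: it ensures the model is created once per streaming thread rather than on every frame, avoiding the cost of reloading the model for each buffer. The hard-coded 640x360 dimensions of the ``cv::Mat`` match the frames negotiated by the example pipeline in Part 4.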
- Update the plugin package information
* In ``gstvaitfssd.cpp`` change
**FROM:**
#ifndef VERSION
#define VERSION "0.0.FIXME"
#endif
#ifndef PACKAGE
#define PACKAGE "FIXME_package"
#endif
#ifndef PACKAGE_NAME
#define PACKAGE_NAME "FIXME_package_name"
#endif
#ifndef GST_PACKAGE_ORIGIN
#define GST_PACKAGE_ORIGIN "http://FIXME.org/"
#endif
GST_PLUGIN_DEFINE (GST_VERSION_MAJOR,
GST_VERSION_MINOR,
vaitfssd,
"FIXME plugin description",
plugin_init, VERSION, "LGPL", PACKAGE_NAME, GST_PACKAGE_ORIGIN)
**TO:**
#ifndef VERSION
#define VERSION "0.0.0"
#endif
#ifndef PACKAGE
#define PACKAGE "vaitfssd"
#endif
#ifndef PACKAGE_NAME
#define PACKAGE_NAME "GStreamer Xilinx Vitis-AI-Library"
#endif
#ifndef GST_PACKAGE_ORIGIN
#define GST_PACKAGE_ORIGIN "http://xilinx.com"
#endif
GST_PLUGIN_DEFINE (GST_VERSION_MAJOR,
GST_VERSION_MINOR,
vaitfssd,
"TFSSD using the Xilinx Vitis-AI-Library",
plugin_init, VERSION, "LGPL", PACKAGE_NAME, GST_PACKAGE_ORIGIN)
- Compile the plugin using the provided Makefile
cd $GST_PROJ_DIR/vaitfssd
make -f $GST_PROJ_DIR/vai-gst-plugin-tutorial/solution/vaitfssd/Makefile
- When the compilation completes you should find the ``libgstvaitfssd.so`` file
Part 3: Installing the Vitis-AI-Library on the target hardware
This section describes how to install the Vitis-AI-Library on the target hardware.
**NOTE:** The steps in this section should be executed on the target hardware
- Download the Vitis-AI pre-compiled model files
mkdir -p ~/Downloads
cd ~/Downloads
wget https://www.xilinx.com/bin/public/openDownload?filename=vitis_ai_model_ZCU102_20
**NOTE:** The ZCU102 model files are used for both the ZCU104 & ZCU102 Quad-Camera + ML platforms since the DPU was compiled with the RAM_USAGE_LOW option.
- Download the Vitis-AI Runtime
wget https://www.xilinx.com/bin/public/openDownload?filename=vitis-ai-runtime-1.1.2.tar.gz -O vitis-ai-runtime-1.1.2.tar.gz
- Install the Vitis-AI model files
dpkg -i vitis_ai_model_ZCU102_2019.2-r1.1.0.deb
- Install the Vitis-AI runtime
tar -xvzf vitis-ai-runtime-1.1.2.tar.gz
cd vitis-ai-runtime-1.1.2
dpkg -i --force-all unilog/aarch64/libunilog-1.1.0-Linux-build46.deb
dpkg -i XIR/aarch64/libxir-1.1.0-Linux-build46.deb
dpkg -i VART/aarch64/libvart-1.1.0-Linux-build48.deb
dpkg -i Vitis-AI-Library/aarch64/libvitis_ai_library-1.1.0-Linux-build46.deb
- Modify the ``/etc/vart.conf`` file to point to the correct location of ``dpu.xclbin``.
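The file consists of a single ``firmware:`` line; a minimal sketch, assuming the SD card boot partition holding ``dpu.xclbin`` is mounted at /media/sd-mmcblk0p1:
# Point the Vitis-AI runtime at the DPU overlay
echo "firmware: /media/sd-mmcblk0p1/dpu.xclbin" > /etc/vart.conf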
Part 4: Running on the ZCU104 Quad-Camera + ML target hardware
This section describes how to run the custom Vitis-AI GStreamer plugin on the ZCU104 Quad-Camera + ML platform from the command line.
- Copy the compiled ML plugin from the host machine to the target hardware using an Ethernet connection. Execute the following commands on the **host** machine
cd $GST_PROJ_DIR
scp vaitfssd/libgstvaitfssd.so root@$TARGET_IP:/usr/lib/gstreamer-1.0/.
scp -r vai-gst-plugin-tutorial/scripts root@$TARGET_IP:~/.
**NOTE:** ``$TARGET_IP`` in the command above should be replaced with the IP address of your target hardware
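Once copied, you can confirm on the target that GStreamer can load the new element:
# Prints the element's pads, caps, and properties; an error here means the
# .so was not found in /usr/lib/gstreamer-1.0 or failed to load
gst-inspect-1.0 vaitfssd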
- Testing with the GStreamer TFSSD plugin has been known to work when setting the DPU clock to 50% with the following command
python3 ~/dpu_clk.py 50
- Execute the following commands on the ZCU104 to set the display resolution to 480p
export DISPLAY=:0.0
xrandr --output DP-1 --mode 640x480
- In the previous steps a few predefined scripts were copied to the target hardware; these scripts set up the image-processing input and output pipelines. Feel free to investigate them. The command that launches the GStreamer pipeline starts with ``gst-launch-1.0`` and looks like the following:
gst-launch-1.0 -v \
v4l2src device=/dev/video0 ! \
video/x-raw, width=640, height=360, format=YUY2, framerate=30/1 ! \
queue ! \
videoconvert ! \
video/x-raw, format=BGR ! \
queue ! \
vaitfssd ! \
queue ! \
videoconvert ! \
fpsdisplaysink sync=false text-overlay=false fullscreen-overlay=true
Summary
This tutorial described the detailed steps used to create a TFSSD GStreamer plugin using the Xilinx Vitis-AI-Library. The finished plugin source code is included in this repository in the solution directory.