After setting up the board and deploying the smart camera application provided by Xilinx, we want to run other models on this framework. Therefore, we present the implementation of yolov2-tiny as a custom model for the smart camera application. We run this project on Ubuntu 18.
Before we start, make sure that you have completed the following steps.
- Install the smart camera application: https://xilinx.github.io/kria-apps-docs/main/build/html/docs/smartcamera/docs/app_deployment.html
- Install Vitis AI: https://github.com/Xilinx/Vitis-AI
Now we are ready to implement it.
1. Convert Darknet to TensorFlow
The smart camera application uses Vitis AI 1.4, so yolov2-tiny must first be converted from the Darknet format to a framework supported by Vitis AI, such as Caffe, TensorFlow, or PyTorch. In this work, we use TensorFlow.
- First, download the yolov2-tiny-voc model configuration and weights from the darknet framework: https://pjreddie.com/darknet/yolov2/
- Then, start the Vitis AI docker container and activate the vitis-ai-tensorflow environment.
- We use https://github.com/jinyu121/DW2TF to convert yolov2-tiny to the TensorFlow format. After converting, you will get .pb and .ckpt files.
- Copy these files to the project directory.
2. Create a frozen graph
You need to create a frozen graph that combines all the TensorFlow files into a single file.
- First, create a directory named frozen to save the .pb file.
- Before running the freeze_graph command, we need to know these parameters:
- --input_graph: The .pb file.
- --input_checkpoint: The .ckpt file.
- --output_graph: The result file produced by freeze_graph.
- --output_node_names: The output node of the model (from the .pb file). You can find it by uploading yolov2-tiny-voc.pb to https://netron.app/ and inspecting the structure of the model (or with the node-listing sketch after this step). In this work, the output is "yolov2-tinyconvolutional9/BiasAdd".
- Now we can run freeze_graph as follows:
freeze_graph --input_graph yolov2-tiny-voc.pb --input_checkpoint yolov2-tiny-voc.ckpt --output_graph frozen/frozen_graph.pb --output_node_names yolov2-tinyconvolutional9/BiasAdd --input_binary true
- Then you will get frozen_graph.pb in the frozen directory.
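If you prefer the command line to Netron, a minimal sketch (TensorFlow 1.x API, as available in the vitis-ai-tensorflow environment) can print the node names of a .pb file; the file names below are assumptions matching the steps above.

# list_nodes.py -- print node names in a converted or frozen GraphDef
# Assumes TensorFlow 1.x (the vitis-ai-tensorflow conda environment).
import tensorflow as tf

pb_file = "yolov2-tiny-voc.pb"  # or "frozen/frozen_graph.pb"

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile(pb_file, "rb") as f:
    graph_def.ParseFromString(f.read())

# Print every node; the last convolution/BiasAdd node is the model output.
for node in graph_def.node:
    print(node.op, node.name)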
3. Quantize the model
- First, create a directory named quantize to save the output of quantization.
- Before running the quantization command, we need:
- --input_frozen_graph: The output of the freeze_graph command (frozen_graph.pb).
- --input_nodes: You can get this by uploading frozen_graph.pb to https://netron.app/. In our work, the input node is "yolov2-tinynet1".
- --output_nodes: You can get this by uploading frozen_graph.pb to https://netron.app/. In our work, the output node is "yolov2-tinyconvolutional9/BiasAdd".
- --input_shapes: The shape of the input node.
- --input_fn: The calibration data loader. This parameter names a Python function that returns a dictionary whose key is the input node name of the model and whose value is a batch of image arrays.
We create calibration.py as follows:
import os
import cv2
import glob
import numpy as np
# set data path to your own dataset
dataset_path = "/workspace/data/VOCdevkit/VOC2007/JPEGImages"
# set input size
inputsize = {'h': 416, 'c': 3, 'w': 416}
# set input node name
input_node = "yolov2-tinynet1"
calib_batch_size = 10
def convertimage(img, w, h, c):
    # Resize each channel to (w, h) and rebuild the image.
    new_img = np.zeros((h, w, c))
    for idx in range(c):
        resize_img = img[:, :, idx]
        resize_img = cv2.resize(resize_img, (w, h), interpolation=cv2.INTER_AREA)
        new_img[:, :, idx] = resize_img
    return new_img

# This function reads one batch of images from the dataset and returns it
# in a dictionary keyed by the input node name.
def calib_input(iter):
    images = []
    line = glob.glob(dataset_path + "/*.j*")  # either .jpg or .jpeg
    for index in range(0, calib_batch_size):
        curline = line[iter * calib_batch_size + index]
        calib_image_name = curline.strip()
        image = cv2.imread(calib_image_name)
        image = convertimage(image, inputsize["w"], inputsize["h"], inputsize["c"])
        image = image / 255.0
        images.append(image)
    return {input_node: images}  # keyed by the first layer (input node) name
Save this as calibration.py in the project directory. You can change dataset_path, input_node, and inputsize to match your model. A quick sanity check of this function is sketched after the parameter list below.
- --calib_iter: The number of calibration iterations; calib_iter multiplied by calib_batch_size must not exceed the number of images in the dataset.
- --output_dir: The directory to save the quantization output. In this work, we set it to the quantize directory.
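Before launching the full quantization, it can be worth sanity-checking calibration.py on its own. The following is a minimal sketch, assuming calibration.py is in the current directory and dataset_path points at real images:

# check_calibration.py -- quick sanity check of the calib_input function
import numpy as np
from calibration import calib_input, input_node, calib_batch_size

batch = calib_input(0)                # first calibration batch
images = np.asarray(batch[input_node])

print("input node :", input_node)
print("batch shape:", images.shape)   # expected: (calib_batch_size, 416, 416, 3)
print("value range: [%.3f, %.3f]" % (images.min(), images.max()))  # expected within [0, 1]
assert images.shape[0] == calib_batch_size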
- Finally, the quantization command is:
vai_q_tensorflow quantize --input_frozen_graph frozen/frozen_graph.pb --input_fn calibration.calib_input --output_dir quantize/ --input_nodes yolov2-tinynet1 --output_nodes yolov2-tinyconvolutional9/BiasAdd --input_shapes ?,416,416,3 --calib_iter 100
It will take some time depending on calib_iter and the model size. Do not worry if you get loss = 0 for each iteration; it does not affect the model performance.
- Then you will get quantize_eval_model.pb in the quantize directory.
4. Compile the model
This is the final step for deploying the model to run on the DPU in the KV260.
- Before compiling, we need:
- --frozen_pb: The .pb file from the quantization step (quantize_eval_model.pb).
- -a: A JSON file that describes the DPU architecture. You need to create an arch.json file as follows:
{
"fingerprint":"0x1000020F6014406"
}
Then save it as arch.json in the project directory.
- -o: The output directory for the compiled model.
- -n: The name of the model.
- Finally, the compile command is:
vai_c_tensorflow --frozen_pb quantize/quantize_eval_model.pb -a arch.json -o yolov2tiny -n yolov2tiny
Then you will see yolov2tiny.xmodel, md5sum.txt, and meta.json in the yolov2tiny directory.
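To confirm that the compiled model contains a DPU subgraph, you can inspect the .xmodel with the xir Python bindings that ship with Vitis AI. The following is a sketch following the pattern of the Vitis AI examples; the path is an assumption matching this walkthrough:

# inspect_xmodel.py -- list subgraphs of the compiled model and their device
import xir

graph = xir.Graph.deserialize("yolov2tiny/yolov2tiny.xmodel")
root = graph.get_root_subgraph()

for sub in root.toposort_child_subgraph():
    device = sub.get_attr("device") if sub.has_attr("device") else "unknown"
    print(sub.get_name(), "->", device)   # expect one subgraph mapped to DPU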
5. Customizing the smart camera app
The smart camera application uses the Vitis Video Analytics SDK (VVAS) to control the DPU.
We need to set up two parts: the VVAS plugin configuration and the DPU configuration.
5.1 DPU configuration
- Create the yolov2tiny.prototxt file as follows:
model {
  name: "yolov2tiny"
  kernel {
    name: "yolov2tiny"
    mean: 0.0
    mean: 0.0
    mean: 0.0
    scale: 0.00390625
    scale: 0.00390625
    scale: 0.00390625
  }
  model_type : YOLOv3
  yolo_v3_param {
    num_classes: 20
    anchorCnt: 5
    conf_threshold: 0.3
    nms_threshold: 0.45
    biases: 1.08
    biases: 1.19
    biases: 3.42
    biases: 4.41
    biases: 6.63
    biases: 11.38
    biases: 9.42
    biases: 5.11
    biases: 16.62
    biases: 10.52
    test_mAP: false
  }
  is_tf : true
}
You can change the model name and the bias values. The bias values come from the anchors parameter in the darknet configuration, yolov2-tiny.cfg, as shown in the sketch below.
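The biases are simply the anchors values from the [region] section of the darknet cfg, written one value per line. Here is a minimal sketch that prints them in prototxt form (the cfg file name is an assumption; adjust it to your download):

# anchors_to_biases.py -- print darknet anchors as prototxt "biases" lines
cfg_file = "yolov2-tiny-voc.cfg"   # darknet configuration of the model

anchors = None
with open(cfg_file) as f:
    for line in f:
        line = line.strip()
        if line.startswith("anchors"):
            # e.g. "anchors = 1.08,1.19, 3.42,4.41, 6.63,11.38, 9.42,5.11, 16.62,10.52"
            anchors = [v.strip() for v in line.split("=")[1].split(",")]
            break

if anchors is None:
    raise SystemExit("no anchors line found in %s" % cfg_file)

for value in anchors:
    print("  biases: %s" % value)
print("anchorCnt: %d" % (len(anchors) // 2))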
5.2 VVAS plugin configuration
- First, we have to create preprocess.json:
{
    "xclbin-location":"/lib/firmware/xilinx/kv260-smartcam/kv260-smartcam.xclbin",
    "ivas-library-repo": "/opt/xilinx/lib",
    "kernels": [
        {
            "kernel-name": "pp_pipeline_accel:pp_pipeline_accel_1",
            "library-name": "libivas_xpp.so",
            "config": {
                "debug_level" : 1,
                "mean_r": 0,
                "mean_g": 0,
                "mean_b": 0,
                "scale_r": 0.25,
                "scale_g": 0.25,
                "scale_b": 0.25
            }
        }
    ]
}
This file configures the pre-processing that runs before inference.
- Second, we have to create aiinference.json:
{
    "xclbin-location":"/lib/firmware/xilinx/kv260-smartcam/kv260-smartcam.xclbin",
    "ivas-library-repo": "/usr/lib/",
    "element-mode":"inplace",
    "kernels" :[
        {
            "library-name":"libivas_xdpuinfer.so",
            "config": {
                "model-name" : "yolov2tiny",
                "model-class" : "YOLOV2",
                "model-path" : "/home/petalinux",
                "run_time_model" : false,
                "need_preprocess" : false,
                "performance_test" : false,
                "debug_level" : 1
            }
        }
    ]
}
You need to change the following parameters:
"model-name": The name of your model.
"model-class": VVAS provides different classes, including YOLOV3, FACEDETECT, CLASSIFICATION, SSD, REFINEDET, TFSSD, and YOLOV2.
"model-path": The path on the KV260 that contains your model directory. In this work, the model (.xmodel) is saved under /home/petalinux on the KV260.
- You also need to create label.json and drawresult.json files for displaying the results.
label.json
{
    "model-name": "yolov2tiny",
    "num-labels": 3,
    "labels" :[
        {
            "name": "aeroplane",
            "label": 0,
            "display_name" : "aeroplane"
        },
        {
            "name": "bicycle",
            "label": 1,
            "display_name" : "bicycle"
        },
        {
            "name": "bird",
            "label": 2,
            "display_name" : "bird"
        },
        ...
    ]
}
In this file, only three classes are shown to illustrate the structure of the labels parameter. Continue the list up to 20 classes (and set "num-labels" to 20), since yolov2-tiny detects the 20 PASCAL VOC classes. A generation sketch follows.
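Writing all 20 entries by hand is error-prone, so here is a minimal sketch that generates label.json for the standard PASCAL VOC class list; the class order is an assumption and must match the order used to train the model:

# make_label_json.py -- generate label.json for the 20 PASCAL VOC classes
import json

voc_classes = [
    "aeroplane", "bicycle", "bird", "boat", "bottle",
    "bus", "car", "cat", "chair", "cow",
    "diningtable", "dog", "horse", "motorbike", "person",
    "pottedplant", "sheep", "sofa", "train", "tvmonitor",
]

label_json = {
    "model-name": "yolov2tiny",
    "num-labels": len(voc_classes),
    "labels": [
        {"name": name, "label": idx, "display_name": name}
        for idx, name in enumerate(voc_classes)
    ],
}

with open("label.json", "w") as f:
    json.dump(label_json, f, indent=4)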
drawresult.json
{
    "xclbin-location":"/usr/lib/dpu.xclbin",
    "ivas-library-repo": "/opt/xilinx/lib",
    "element-mode":"inplace",
    "kernels" :[
        {
            "library-name":"libivas_airender.so",
            "config": {
                "fps_interval" : 10,
                "font_size" : 2,
                "font" : 1,
                "thickness" : 2,
                "debug_level" : 0,
                "label_color" : { "blue" : 0, "green" : 0, "red" : 255 },
                "label_filter" : [ "class", "probability" ],
                "classes" : [
                    {
                        "name" : "aeroplane",
                        "blue" : 255,
                        "green" : 0,
                        "red" : 0
                    },
                    {
                        "name" : "bicycle",
                        "blue" : 0,
                        "green" : 255,
                        "red" : 0
                    },
                    {
                        "name" : "bird",
                        "blue" : 0,
                        "green" : 255,
                        "red" : 0
                    },
                    ...
                ]
            }
        }
    ]
}
You can continue the classes list up to 20 classes. You can change "font_size", "font", "thickness", and the colors. Make sure every class name matches the corresponding label name in label.json.
Finally, you will have the following files in the yolov2tiny directory: yolov2tiny.xmodel, yolov2tiny.prototxt, label.json, aiinference.json, preprocess.json, and drawresult.json (plus the md5sum.txt and meta.json produced by the compiler).
Make sure the .xmodel and .prototxt have the same name as the directory name; a small check is sketched below.
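Here is a minimal sketch to double-check the layout before copying it to the board; the directory name is an assumption matching this walkthrough:

# check_model_dir.py -- verify the model directory follows the naming convention
import os

model_dir = "yolov2tiny"                       # directory name == model name
name = os.path.basename(model_dir.rstrip("/"))

expected = [
    name + ".xmodel", name + ".prototxt",      # must match the directory name
    "label.json", "aiinference.json", "preprocess.json", "drawresult.json",
]

for filename in expected:
    path = os.path.join(model_dir, filename)
    print("%-20s %s" % (filename, "OK" if os.path.isfile(path) else "MISSING"))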
6. Running the model on the KV260
- First, we need to upload the yolov2tiny directory to the KV260 using the following command:
scp -r yolov2tiny petalinux@192.168.0.150:~/
Replace the IP address with that of your KV260 board.
On the KV260 board
We will replace the SSD model with our model.
Our project on the board is now in the /home/petalinux directory.
- You need to copy aiinference.json, preprocess.json, and drawresult.json from our project to /opt/xilinx/share/ivas/smartcam/ssd/:
sudo cp yolov2tiny/aiinference.json /opt/xilinx/share/ivas/smartcam/ssd/aiinference.json
sudo cp yolov2tiny/preprocess.json /opt/xilinx/share/ivas/smartcam/ssd/preprocess.json
sudo cp yolov2tiny/drawresult.json /opt/xilinx/share/ivas/smartcam/ssd/drawresult.json
- Then you can load the application and run it as in a normal deployment:
sudo xmutil unloadapp
sudo xmutil loadapp kv260-smartcam
sudo smartcam --usb 0 -W 1920 -H 1080 --target rtsp --aitask ssd
Finally, you will see the detection results in the output stream.
All files can be found here: https://github.com/HDCOE/CustomModelOnKV26
More information:
- Running smart camera application
https://xilinx.github.io/kria-apps-docs/main/build/html/docs/smartcamera/docs/app_deployment.html
- Customizing the model used in the smart camera application
- Vitis Video Analytics SDK
https://xilinx.github.io/VVAS/main/build/html/docs/common/common_plugins.html