Deep neural networks (DNNs) are a key technique in modern artificial intelligence (AI). They provide state-of-the-art accuracy in many applications and have therefore received significant interest. With the rapid development of AI techniques, the ubiquity of smart devices and autonomous robot systems places heavy demands on DNN inference hardware with high energy and computing efficiency. The high energy efficiency, computing capability, and reconfigurability of FPGAs make them a promising platform for hardware acceleration of such computing tasks.
In this challenge, we have designed a flexible video processing framework on the KV260 SoM that can be used in a smart camera application for intelligent transportation systems (ITS) in a smart city. Our framework is not only capable of automatically detecting application scenarios (e.g. car or pedestrian) using a semantic segmentation and road line detection network, but is also able to automatically select the best DNN models for each scenario. Thanks to dynamic reconfiguration and run-time management APIs, our system can switch the DNN inference model at run-time without stopping the video pipeline. This allows our smart camera system to be truly adaptive and achieve the best performance in a smarter way.
- Support application scenario detection (e.g. Car, Pedestrian, and many more) using semantic segmentation and road line detection networks.
- Extend the existing VVAS framework (v1.0) with support for many more DNN models (semantic segmentation, lane detection, pose detection, OFA, and more) via our flexible JSON interface.
- Support dynamic model switching for both software and hardware video processing pipelines.
- Support run-time performance monitoring with a graphical interface on the monitor (e.g. power consumption, FPS, temperature, CPU/memory usage, and other system information).
- Support switching DNN inference models dynamically (e.g. using models with different pruning factors) at run-time without affecting system performance.
- Support migration to other Xilinx UltraScale+ MPSoC platforms
- Vitis-AI 1.4.1
- Vivado 2021.1
- PetaLinux 2021.1
- VVAS 1.0
- KV260 or any compatible Xilinx UltraScale+ MPSoC board, e.g. ZCU104, Ultra96
- HDMI monitor and cable
- HD camera (Optional)
This video shows the switching of AI processing branches for different scenarios. Depending on the detected scenario, the corresponding AI inference is enabled or disabled.
- Branch 0 (top left): scenario classification.
- Branch 1 (bottom left): enabled in people scenarios.
- Branch 2 (bottom right): enabled in car scenarios.
See HD version in YouTube
This shows the real-time adjustment of inference interval in Jupyter.
See HD version in YouTube
Running application tracking for cars: Yolo + CarID + tracking. This video shows the real-time adjustment of the model size in Jupyter. There are 4 different sizes of the CarID model for different workloads. The video shows that the FPS increases significantly with a smaller model.
Changing the type of AI model for different functionalities is also supported.
Note: Due to resolution problems in the preprocessing plugins in VVAS 1.0, the preprocessing tasks have to be done on the CPU.
Adaptive optimization
See HD version in YouTube
This video shows how the performance changes with the adaptive optimization methods above.
- Branch 0 (Segmentation): the inference interval increases (1->5) to reduce the performance cost.
- Branch 1 (RefineDet & OpenPose): inference is disabled because there is no person in the scene.
- Branch 2 (Yolo): the model size decreases and the inference interval increases (1->2).
For different cases, we also deployed different hardware configurations for switching.
A bigger DPU (e.g. larger size or higher frequency) consumes more power even when there are no AI inference tasks. Hence, using a smaller DPU under low workloads can lower the power consumption.
Currently, we use two DPU sizes: 1) B3136 and 2) B4096. Each hardware configuration is packaged into a separate firmware.
- DPU size: B3136
- DPU size: B4096, firmware name: cmpk4096 (https://github.com/luyufan498/Adaptive-Computing-Challenge-2021/tree/main/firmware)
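As a minimal sketch of how the firmware switch could be driven from the host (the xmutil commands are the same ones used in the setup steps below; the subprocess wrapper, the workload check, and the B3136 firmware name are illustrative assumptions only):
import subprocess

def load_dpu_firmware(name):
    # Equivalent to: sudo xmutil unloadapp && sudo xmutil loadapp <name>
    # Run this before starting (or after stopping) the video pipeline.
    subprocess.run(["sudo", "xmutil", "unloadapp"], check=True)
    subprocess.run(["sudo", "xmutil", "loadapp", name], check=True)

# Use a smaller B3136 firmware under low workload, the B4096 one (cmpk4096) otherwise.
low_workload = True
firmware = "<your B3136 firmware name>" if low_workload else "cmpk4096"  # placeholder name
load_dpu_firmware(firmware)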
For details of different hardware configurations and performance adjustments, please see our previous project in Hackster.io: Adaptive deep learning hardware for video analytics.
Our demo consists of several parts: 1) GStreamer video processing pipes, 2) a host program for management, and 3) hardware firmware. Please follow the instructions below to run the demo.
Environment setup
1. Follow the official instructions to set up the KV260 (the smart camera and AIBox-ReID applications are needed).
2. (Optional) For your convenience, I have packaged everything you need into KV260AllYOUNEED.zip. You can simply download it and extract (overwrite) it onto your KV260 system. If you use the packaged ZIP file, you can skip the next step and run the demo directly.
unzip -o -d / KV260AllYOUNEED.zip
Library, configuration and models
3. Download our customized VVAS libs to the KV260 (/opt/xilinx/lib/):
- dpuinfer for AI inference, supporting new models and run-time switching: libivas_xdpuinfer.so
- Crop for OpenPose: libivas_crop_openopse.so
- To support OpenPose: libivas_openpose.so
- Tracking update: libaa2_reidtracker.so
- Draw chart/waveform: libivas_sensor.so
- Draw running indicator: libivas_runindicater.so
- Draw segmentation: libivas_performancestatus.so
- Draw pose: libivas_drawpose.so
- Draw box/roadline: libivas_xboundingbox.so
Note: to create your own VVAS libs for your customized model, please follow my projects: VVAS_CMPK.
Your folder should look like this:
4. IMPORTANT: Update the GStreamer plugin lib to support multiple inference channels (/usr/lib/).
Note: sudo is needed to overwrite the original files.
5. Download the new models and extract them to the KV260 (/opt/xilinx/share/vitis_ai_library/):
After that your model folder should look like this:
xilinx-k26-starterkit-2021_1:~$ tree /opt/xilinx/share/vitis_ai_library/ -L 3
/opt/xilinx/share/vitis_ai_library/
`-- models
|-- B3136
| |-- ENet_cityscapes_pt
| |-- SemanticFPN_cityscapes_256_512
| |-- caltechlane
| |-- carid
| |-- densebox_640_360
| |-- personreid-res18_pt
| |-- refinedet_pruned_0_96
| |-- sp_net
| |-- ssd_adas_pruned_0_95
| |-- yolov2_voc
| `-- yolov3_city
|-- kv260-aibox-reid
| |-- personreid-res18_pt
| `-- refinedet_pruned_0_96
`-- kv260-smartcam
|-- densebox_640_360
|-- refinedet_pruned_0_96
`-- ssd_adas_pruned_0_95
20 directories, 0 files
6. Download the new JSON files for the VVAS configuration and extract them to the KV260 (/opt/xilinx/share).
Note: Please see the appendix for a description of the configurations.
After that, your configuration folder should look like this:
xilinx-k26-starterkit-2021_1:~$ tree /opt/xilinx/share/ivas/ -L 3
/opt/xilinx/share/ivas/
|-- aibox-reid
| |-- crop.json
| |-- dpu_seg.json
| |-- draw_reid.json
| |-- ped_pp.json
| |-- refinedet.json
| `-- reid.json
|-- branch1
| |-- drawPipelinestatus.json
| |-- drawfpsB1.json
| `-- fpsbranch1.json
|-- branch2
| |-- dpu_yolo2.json
| |-- drawPipelinestatus.json
| |-- drawbox.json
| |-- fpsbranch2.json
| `-- ped_pp.json
|-- cmpk
| |-- analysis
| | |-- 4K
| | `-- drawTemp.json
| |-- openpose
| | |-- crop.json
| | |-- draw_pose.json
| | `-- openpose.json
| |-- preprocess
| | |-- resize_cmpk.json
| | |-- resize_reid.json
| | `-- resize_smartcam.json
| |-- reid
| | |-- carid.json
| | |-- crop.json
| | |-- draw_reid.json
| | `-- reid.json
| |-- runstatus
| | |-- pp1status.json
| | `-- pp2status.json
| `-- segmentation
| |-- dpu_seg.json
| |-- dpu_seg_large.json
| |-- drawSegmentation.json
| |-- drawSegmentationLR.json
| |-- drawSegmentationTR.json
| `-- preprocess_seg_smartcam.json
`-- smartcam
|-- facedetect
| |-- aiinference.json
| |-- drawresult.json
| `-- preprocess.json
|-- myapp
| |-- dpu_seg.json
| |-- dpu_ssd.json
| |-- dpu_yolo2.json
| |-- drawPLTemp.json
| |-- drawPerformance.json
| |-- drawPipelinestatus.json
| |-- drawPower.json
| |-- drawSegmentation.json
| |-- drawTemp.json
| |-- drawbox.json
| |-- preprocess.json
| `-- preprocess_seg.json
|-- refinedet
| |-- aiinference.json
| |-- drawresult.json
| `-- preprocess.json
|-- ssd
| |-- aiinference.json
| |-- drawresult.json
| |-- label.json
| `-- preprocess.json
|-- yolov2_voc
| |-- aiinference.json
| |-- drawresult.json
| |-- label.json
| `-- preprocess.json
`-- yolov3_city
|-- aiinference.json
|-- drawresult.json
|-- label.json
`-- preprocess.json
7. Now you should be ready to run the video pipeline. Download the start-up scripts to /home/scripts/.
Use the following command to run the video pipeline:
sudo ./scripts/gst_reid_4k2.sh -f <video> -r <AI program>
For details, please see the appendix section on gst_4k.sh.
8. Download the host program to the KV260 and run it in Jupyter.
Example use of the Python interfaces:
traffic_modelctr = kv260adpModelCtr()
# Set UI with pipe path
traffic_modelctr.setIndicaterUI('on',FFC_UI_BRANCH2)
traffic_modelctr.setIndicaterUI('off',FFC_UI_BRANCH1)
# SET branch with pipe path
traffic_modelctr.setDPUenable('on',FFC_DPU_BRANCH_CAR_CTR)
traffic_modelctr.setDPUenable('off',FFC_DPU_BRANCH_PEO_CTR)
# SET inference interval with pipe path
traffic_modelctr.setDPUInvteral(30,FFC_DPU_SEG_CTR)
# Create a ctr with pipe path and set new model
modelctr = kv260adpModelCtr("/home/petalinux/.temp/dpu_seg_rx")
modelctr.setNewModel("ENet_cityscapes_pt","SEGMENTATION","/opt/xilinx/share/vitis_ai_library/models/B3136/")
(Optional) load the hardware with the B4096 DPU:
sudo xmutil unloadapp
sudo xmutil loadapp cmpk4096
4. GStreamer video processing pipes in the demo
1. The architecture of the video processing pipeline:
The structure of video processing pipes is as follows. In our demo, there are two types of branches: 1) the management branch and 2) the main AI inference branch.
In the one-channel (1080P) mode, everything is drawn on the same 1080P output. As shown in the video, the segmentation result from the management branch and the data waveforms are placed in the top right corner of the frames. Their size and position can be adjusted via the configuration files.
In the 1080P mode, the inference information from different branches needs to be drawn on the same frame. However, the original Meta Affixer plugin does not support combining inference results from different branches and returns errors when there are multiple inference results. We modified the GStreamer plugin (libgstivasinpinfermeta) to support this feature: the info from the master sink port is kept, while the others are dropped.
The shell script for 1080P can be downloaded: gst_1080p.sh. Please see the appendix for more details.
In the 4K mode, there is a separate branch (1080p) to draw waveforms and GUI.
In the four-channel (4K) mode, the output is 4K resolution and the results are drawn on four 1080P video streams. As shown in the video, the segmentation results from the management branch are placed in the top left corner, while the data waveforms are placed in the top right. The results from branches 1 and 2 are placed at the bottom.
The shell script for 4K can be downloaded: gst_4k.sh. Please see the appendix for more details.
Management branch:
The management branch is responsible for checking the scenario of the input videos. As shown in the figures, the management branch runs as an assistant branch alongside the main AI inference branch. It takes a copy of the video stream from the main AI inference branch as input so that it can monitor the video stream simultaneously.
Note: considering the performance cost, the AI inference in the management branch runs only every few seconds rather than on every frame. The inference interval can be adjusted in real time through the pre-designed interfaces.
In our demo, we include two kinds of models for scenario classification:
1. Segmentation:
Two models from the Model Zoo are used in our demo to satisfy different accuracy requirements:
- pt_ENet_cityscapes_512_1024_8.6G_2.0
- pt_SemanticFPN-resnet18_cityscapes_256_512_10G_2.0
Note 1: The input size of 512x1024 decreases the performance significantly.
Note 2: the current VVAS (v1.0) on KV260 does not officially support segmentation. We use custom plugins to support it.
2. Lane detection: Lane detection is very useful for detecting the region of interest. We use the following model from the Model Zoo:
- cf_VPGnet_caltechlane_480_640_0.99_2.5G_2.0
Note: the current VVAS on KV260 does not officially support lane detection. We use custom plugins to support it.
Main AI inference branches:
The main AI inference branches are responsible for running the AI models for the corresponding scenarios. In our demo, we include two typical scenarios for smart city systems: 1) the people scenario and 2) the car scenario. Videos from different scenarios are processed by the corresponding branch. If a scenario is not detected, the corresponding branch is disabled, as sketched below.
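As a sketch of how this scenario-driven switching can be driven from the host (the kv260adpModelCtr class and the FFC_* pipe-path constants come from the host program shown earlier; the polling interval and the way the segmentation result is interpreted here are illustrative assumptions):
import time

# kv260adpModelCtr and the FFC_* constants are defined in the project's host program (Jupyter).
ctr = kv260adpModelCtr()

while True:
    # Read the latest scenario classification written by the management branch.
    result = str(ctr.getSegmentationResult("/home/petalinux/.temp/segres"))
    people_scene = "person" in result
    car_scene = "car" in result

    # Enable only the branch that matches the detected scenario; disable the rest.
    ctr.setDPUenable('on' if people_scene else 'off', FFC_DPU_BRANCH_PEO_CTR)
    ctr.setIndicaterUI('on' if people_scene else 'off', FFC_UI_BRANCH1)
    ctr.setDPUenable('on' if car_scene else 'off', FFC_DPU_BRANCH_CAR_CTR)
    ctr.setIndicaterUI('on' if car_scene else 'off', FFC_UI_BRANCH2)

    time.sleep(1)  # the management branch only refreshes every few seconds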
The structures of the video pipeline are shown in the figure. Considering the different requirements of applications, the video processing pipe can run one-stage or two-stage AI inference. 'Video Pipe (a)' represents a typical one-stage AI application (e.g. object detection and segmentation), where a single AI model performs inference once per frame. 'Video Pipe (b)' represents a two-stage AI application (e.g. tracking, ReID, and car plate detection), where two AI models run simultaneously and the second one may run multiple times depending on the detection results from the first one.
Branch for people scenarios:
In people scenarios, the demo can run three kinds of tasks: 1) people detection, 2) ReID and 3) pose detection.
- People detection: refinedet. It is from the KV260 ReID example.
- ReID: refinedet + crop + personid + tracking. It is from the KV260 ReID example.
- Pose detection: refinedet + crop + spnet.
Note: we use cf_SPnet_aichallenger_224_128_0.54G_2.0 from Xilinx Model Zoo v1.4.
Branch for car scenarios:
In car scenarios, the demo can run two tasks: 1) object detection and 2) car tracking.
1. Yolo
The object detection models we used are from Model Zoo v1.4. We integrate 4 sizes of Yolo models in our demo so that we can dynamically switch between them according to the video processing speed.
- dk_yolov2_voc_448_448_34G_2.0
- dk_yolov2_voc_448_448_0.66_11.56G_2.0
- dk_yolov2_voc_448_448_0.71_9.86G_2.0
- dk_yolov2_voc_448_448_0.77_7.82G_2.0
2. CarID
We trained and pruned 4 different sizes of the CarID model for model switching.
Note: RN18_<xx> indicates the percentage of pruned weights. For example, RN18_08 means 80% of the weights were pruned, so it is the smallest one here.
3. OFA and ResNet-50
The OFA models we used are from Model Zoo v2.0. We integrated them into the Vitis-AI 1.4 library.
In our demo, we designed a dedicated plugin lib (libivas_sensor.so) to sample platform data and draw waveforms. Please see the appendix for its detailed configuration.
1. Sample data
The first functionality of this lib is getting platform status data. Currently, it supports 7 different data sources: 5 preset sources (LPD temperature, FPD temperature, total power consumption, PL temperature, and FPS) and 2 custom sources.
When using the preset data sources (except FPS), the plugin reads the corresponding system file (e.g. under /sys) in the PetaLinux system to get the platform status.
When using the custom data sources, the plugin reads the data from a custom file. In this way, users can display custom data or use the plugin on other boards (e.g. we have tested it on the ZCU104).
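For reference, this is roughly what the power preset amounts to (the hwmon path and scale are copied from the libivas_sensor.so configuration in the appendix; the hwmon index can differ between boards, and the plugin itself does this reading in its own code rather than in Python):
def read_total_power_watts(path="/sys/class/hwmon/hwmon1/power1_input", scale=0.000001):
    # hwmon reports power in microwatts; the configured scale converts it to watts.
    with open(path) as f:
        return float(f.read().strip()) * scale

print("Total power: %.2f W" % read_total_power_watts())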
FPS is a special data source: the plugin calculates the average FPS of the current video processing branch. However, it cannot get the FPS information from other branches, which is inconvenient in 4K mode. In our demo, the FPS data can therefore be written to a file so that the plugin in the display branch can read it as a custom data source.
2. Draw charts
Another functionality of this lib is drawing waveforms with an acceptable performance cost. As shown in the performance figure, the lib can draw the waveform in two different modes: 1) filled mode and 2) line mode. The title and real-time data can also be drawn on the frames.
Because drawing costs CPU time, we also provide a number of parameters for optimization. The title, the data, and the overlay can each be disabled. There is also an optimization option that draws only half of the pixels on the UV planes to lower the cost. In the best case, the filled mode costs 150 us, while the line mode costs 50 us.
6. Host program
To trigger a dynamic switch, we also developed a Python host program that interacts with the plugins in the video processing pipeline. Because the host program is a separate process, it uses IPC to read information and send commands.
To use the named pipe to control the video pipeline, there are a few steps:
- Install the new library file (so) to replace the official plugins.
- Prepare the configuration file (JSON) to set communication methods.
- Use GStreamer to start a pipeline, or use the provided shell script to start the video pipeline.
- Start the Python program to control the video pipeline by sending commands.
All the control interfaces are written in Python, so you can easily control the video pipeline. Here I list the Python APIs in our demo for controlling the video pipelines. Please see the host example for detailed instructions.
class kv260adpModelCtr(object):
    def __init__(self, write_path="", *args, **kw):                          # bind the controller to a default named-pipe path
    def setNewModel(self, modelname, modelclass, modelpath, write_path=""):  # switch to a new DNN model at run-time
    def setNewREIDModel(self, modelname, modelpath, write_path=""):          # switch the ReID model at run-time
    def setDPUInvteral(self, inverteral, write_path=""):                     # set the inference interval (in frames)
    def setDPUenable(self, enable, write_path=""):                           # enable ('on') or disable ('off') a branch
    def setIndicaterUI(self, on, write_path=""):                             # toggle the running-indicator UI
    def getFPSfromFile(self, file):                                          # read the FPS reported by a branch from a file
    def getSegmentationResult(self, file):                                   # read the segmentation result written to a file
Communication between plugins and the host:
In our demo, there are three kinds of inter-process communication (IPC) used to transfer data between the host program and the GStreamer video pipeline:
1. Named Pipe (FIFO):
The named pipe is the main method in our demo for communicating with the VVAS plugins. Our custom plugins read new commands from the named pipe, whose path can be set in the configuration JSON. Currently, it is the most stable method in our demo for sending commands.
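Under the hood, sending a command is just a write to the FIFO given by ffc_rxpath in the plugin's JSON. A minimal sketch, assuming the pipeline has already created the pipe; the command string below is a placeholder, since its exact format is handled by the kv260adpModelCtr methods:
import os

def send_command(fifo_path, command):
    # Opening the FIFO for writing blocks until the plugin has it open for reading.
    fd = os.open(fifo_path, os.O_WRONLY)
    try:
        os.write(fd, command.encode())
    finally:
        os.close(fd)

# Placeholder command; in practice the kv260adpModelCtr methods format and send the real commands.
send_command("/home/petalinux/.temp/dpu_seg_rx", "<command string formatted by kv260adpModelCtr>")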
2. File:
For convenience, a file can also be used to report the running status of the VVAS processing pipeline. For example, our plugin can write the segmentation results to a file for further analysis. The path of the output file can be set in the configuration.
Note: Although a file is easy for the host program to access, writing files costs more time.
3. Shared Memory
Shared memory is not used between the Python host and the VVAS plugins, since Python does not support this natively for our plugins. In our demo, shared memory is instead used to transfer data between the plugins in different video processing branches.
7. Generate Models
The Model Zoo provides a lot of models that are easy to use. However, most of them are not available in other sizes. Hence, we used two methods in our demo to generate models of different sizes: 1) pruning and 2) OFA.
Training CarID
The CarID model was trained using the reid_baseline_with_syncbn framework; please follow the installation and configuration instructions on its GitHub page.
The CarID model was trained on the VRIC (Vehicle Re-Identification in Context) dataset.
To prune the model, we used the Torch-Pruning PyTorch package.
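A minimal pruning sketch, assuming a recent Torch-Pruning release (the 1.x dependency-graph API) and a stock torchvision ResNet-18 as a stand-in for our actual CarID network; the channel indices and input size are illustrative only:
import torch
import torchvision
import torch_pruning as tp

model = torchvision.models.resnet18(weights=None)
example_inputs = torch.randn(1, 3, 256, 128)  # illustrative ReID-style input size

# Build the dependency graph so that coupled layers (conv/bn/downsample) are pruned consistently.
DG = tp.DependencyGraph().build_dependency(model, example_inputs=example_inputs)

# Remove a few output channels from the first conv layer (indices chosen arbitrarily here).
group = DG.get_pruning_group(model.conv1, tp.prune_conv_out_channels, idxs=[0, 1, 2, 3])
if DG.check_pruning_group(group):
    group.prune()

print(model.conv1)  # the layer now has fewer output channels; fine-tune afterwards to recover accuracy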
Once-for-all network (OFA)
Once-for-all network (OFA) is also used to generate models of different sizes.
In the demo, we use the OFA-trained network as a supernetwork together with a search algorithm to generate multiple subnetworks according to our requirements. We first use latency as an input parameter of the search algorithm.
The figure describes the model generation technique, where the model is optimized in terms of latency and accuracy. In the OFA framework, random search is first used to determine a set of subnetworks (Subnet N) that are close to the defined latency, and evolutionary search is then used to find the subnetworks (Subnet K) with the highest accuracy among the previously selected set.
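Schematically, the two-stage search works as sketched below (a self-contained mock, not the real OFA API: the subnet encoding, latency model, and accuracy proxy are all placeholders, and the real flow uses an evolutionary search rather than a simple sort in the second stage):
import random

LATENCY_TOLERANCE_MS = 2.0  # illustrative

def sample_random_subnet():
    # A "subnet" here is just a random depth/width choice (mock encoding).
    return {"depth": random.choice([2, 3, 4]), "width": random.choice([0.5, 0.75, 1.0])}

def measure_latency_ms(subnet):
    return 10.0 * subnet["depth"] * subnet["width"]  # mock latency model

def estimate_accuracy(subnet):
    return 0.6 + 0.05 * subnet["depth"] + 0.2 * subnet["width"]  # mock accuracy proxy

def search_subnet(target_latency_ms, n_random=1000, top_k=5):
    # Stage 1: random search keeps the subnets close to the target latency ("Subnet N").
    candidates = [s for s in (sample_random_subnet() for _ in range(n_random))
                  if abs(measure_latency_ms(s) - target_latency_ms) < LATENCY_TOLERANCE_MS]
    # Stage 2: keep the most accurate candidates ("Subnet K"); OFA uses evolutionary search here.
    return sorted(candidates, key=estimate_accuracy, reverse=True)[:top_k]

print(search_subnet(target_latency_ms=25.0))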
8. Experiment results
1. Energy consumption (ZCU104)
The total energy consumption has been reduced by up to 53.8% and 61.6% for the car and pedestrian scenarios, respectively.
2. DPU inference latency (ZCU104)
The detailed DPU inference latencies for each model are shown in the figure below.
3. FPS results in different scenarios (ZCU104)
By switching between the different sizes of DNN models at run-time, the FPS increases immediately. For example, beyond the switch point the average frame rate rises from 17.04 FPS to 29.4 FPS in the car scenario and from 6.9 FPS to 30.8 FPS in the pedestrian scenario. Meanwhile, by finishing the tasks earlier, it also saves up to 34% of the overall energy consumption.
9. Conclusion
In this project, we have developed a flexible framework that can be integrated into the existing Xilinx Vitis-AI (v1.4.1) and VVAS (v1.0) software packages. The proposed framework offers high-speed dynamic DNN model switching at run-time for both hardware and software pipelines, which further improves the energy and computing efficiency of the existing video processing pipeline. To verify the framework, we extended the existing VVAS (v1.0) package to support more DNN models from the Vitis-AI Model Zoo and performed extensive testing on both the Xilinx KV260 and ZCU104 development boards.
10. Appendix
1. Configuration of the JSON files for plugin libs
Here we only list the most important libs; please see the JSON examples for the other libs.
libivas_xdpuinfer.so
{
"xclbin-location":"/lib/firmware/xilinx/kv260-smartcam/kv260-smartcam.xclbin",
"ivas-library-repo": "/opt/xilinx/lib/",
"element-mode":"inplace",
"kernels" :[
{
"library-name":"libivas_xdpuinfer.so",
"config": {
"model-name" : "SemanticFPN_cityscapes_256_512",
"model-class" : "SEGMENTATION",
"model-path" : "/opt/xilinx/share/vitis_ai_library/models/B3136",
"run_time_model" : true,
"need_preprocess" : true,
"performance_test" : true,
"debug_level" : 0,
"ffc_txpath":"/tmp/ivasfifo_tomain",
"ffc_rxpath":"/home/petalinux/.temp/dpu_seg_rx",
"interval_frames":3,
"buff_en":false,
"branch_id":10
}
}
]
}
libivas_xdpuinfer.so is modified from the VVAS example, hence we only explain the newly added keys:
libivas_postsegmentation.so
{
"xclbin-location":"/usr/lib/dpu.xclbin",
"ivas-library-repo": "/opt/xilinx/lib",
"element-mode":"inplace",
"kernels" :[
{
"library-name":"libivas_postsegmentation.so",
"config": {
"debug_level" : 0,
"debug_param": 30,
"ffc_txpath":"/home/petalinux/.temp/segresults",
"enable_info_overlay" : true,
"font_size" : 2,
"font" : 5,
"thickness" : 2,
"label_color" : { "blue" : 255, "green" : 255, "red" : 255 },
"info_x_offset":100,
"info_y_offset":1000,
"enable_frame_overlay":true,
"y_offset_abs":0,
"x_offset_abs":0,
"overlay_width":1920,
"overlay_height":1080,
"write_file_path":"/home/petalinux/.temp/segres",
"enable_w2f":true,
"classes" : [
{
"id":0,
"name" : "road",
"blue" : 38,
"green" : 71,
"red" : 139
},
{
"id":11,
"name" : "person",
"blue" : 128,
"green" : 0,
"red" : 0
},
{
"id":13,
"name" : "car",
"blue" : 200,
"green" : 255,
"red" : 255
},
{
"id":10,
"name" : "sky",
"blue" : 255,
"green" : 191,
"red" : 0
},
{
"id":8,
"name" : "vegetation",
"blue" : 0,
"green" : 255,
"red" : 69
},
{
"id":9,
"name" : "terrain",
"blue" : 139,
"green" : 60,
"red" : 17
}]
}
}
]
}
libivas_runindicater.so
It is just a UI plugin that indicates whether the branch is running.
{
"xclbin-location":"/usr/lib/dpu.xclbin",
"ivas-library-repo": "/opt/xilinx/lib",
"element-mode":"inplace",
"kernels" :[
{
"library-name":"libivas_runindicater.so",
"config": {
"debug_level" : 0,
"debug_param": 30,
"default_status":1,
"x_pos":50,
"y_pos":50,
"width":100,
"ffc_rxpath":"/home/petalinux/.temp/runstatus1_rx"
}
}
]
}
libivas_sensor.so
{
"xclbin-location":"/usr/lib/dpu.xclbin",
"ivas-library-repo": "/opt/xilinx/lib",
"element-mode":"inplace",
"kernels" :[
{
"library-name":"libivas_sensor.so",
"config": {
"debug_level" : 0,
"debug_param": 30,
"senor_description":"0:LPD_TMEP,1:FPD_TMEP,2:PL_TEMP,3:POWER,4:FPS. 5~6: custom data (long,float) based on path and scale",
"senor_mode":1,
"sensor_path":"/sys/class/hwmon/hwmon1/power1_input",
"sensor_scale":0.000001,
"enable_fps":true,
"fps_window_len":30,
"enable_fifocom":false,
"ffc_tx":"/home/petalinux/.temp/pf_tx",
"ffc_rx":"/home/petalinux/.temp/pf_rx",
"ffc_description":"only work for fps",
"enable_info_overlay" :true,
"title":"FPD Temp (C):",
"font_size" : 1,
"font" : 5,
"label_color" : { "blue" : 255, "green" : 255, "red" : 255 },
"enable_chart_overlay":true,
"enable_analysis_overlay":true,
"chart_y":512,
"chart_x":896,
"chart_width":512,
"chart_height":128,
"chart_type":1,
"chart_perf_optimize":2,
"line_thickness" : 1,
"line_color" : { "blue" : 0, "green" : 200, "red" : 200 },
"sample_interval_ms":500,
"max_sample_points":32,
"max_display_value":100,
"min_display_value":0
}
}
]
}
gst_1080P.sh
Download the source file: gst_1080P.sh
Note: To run this shell script, the kv260-smartcam firmware has to be loaded.
This script can take parameters as inputs; the following table shows them:
For example, if you want to run the ReID application with a video file as input, the command should be as follows:
<script_path>/gst_1080P.sh -i file -f <video_path> -r reid
If you want to run Yolo with the MIPI camera as input:
<script_path>/gst_1080P.sh -i mipi -r yolo
gst_4k.sh
Download the source file: gst_4k.sh
Note: to run this shell script, the kv260-aibox-reid or cmpk4096 firmware has to be loaded.
This script can take parameters as inputs; the following table shows them:
Note: due to driver issues, -i is not supported in 4K mode.
For example, if you want to run an application with two branches, 1) ReID for people and 2) Yolo for Adas:
<script_path>/gst_4k.sh -f <video_path> -r reid
If you don't want to overlay the segmentation results on original videos:
<script_path>/gst_4k.sh -f <video_path> -r reid -b