This project demonstrates how to use the KV260 Starter Kit with Vitis AI to detect road accidents using a rule-based algorithm. The model used in this project is ssd_adas_pruned_0_95 from the Vitis AI Model Zoo.
However, because of the limited contest time and the many problems found during development, the result is not 100% detection (it also depends on the quality of the model used). I still hope this project will give others a guideline on how to develop solutions on the KV260.
Design Overview
There are many factors to consider when designing this project:
- The monitoring angle. Cars seen from a dash camera look different from cars seen from CCTV.
- Video quality. Most CCTV does not use high resolution, in order to reduce communication bandwidth.
- Time of day. At night the picture is less clear than during the day.
- Weather. Rain reduces image quality and makes cars harder to detect.
There are two approaches for detecting accidents on the road.
1) Train a new model for accidents. This requires a training set covering car accidents from many angles and at different times (day or night). I tried this approach and faced many problems: there are no high-quality CCTV car-accident datasets, and training a model on such datasets takes a very long time. One example from the Vitis-AI tutorial reported that it would take 248 hours (about 10 days) to complete on my i7 machine with a 4 GB GPU card. With little time left and no guarantee the model would work, I decided to use the second approach.
2) Use an algorithm to determine accidents. This approach uses an existing car-detection model and applies specific rules to classify accidents. It is hard to specify every scenario and cover all kinds of accidents. In this project we use two rules to flag an accident (a sketch of both checks is shown right after this list):
2.1 If a car's bounding box is larger than a specific value and is near the center of the screen, it is possible that we have an accident right in front of us (this is the view from a car's dash camera).
2.2 If the bounding boxes of two cars come closer than a specific value, there is a chance of an accident on the road.
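The C++ sketch below illustrates these two checks. The struct and function names are hypothetical and the exact comparisons may differ; the real logic lives in ivas_airender.cpp, and the thresholds come from accAlert.h, described in the build section.

// Illustrative sketch of the two accident rules; not the exact project code.
#include <cmath>
#include <cstdlib>
#include <vector>

struct BBox { int x, y, w, h; };   // one detected car, in pixel coordinates

// Rule 2.1: a very large detection near the screen center (dash-cam view).
bool bigObjectInFront(const BBox& b, int objectSize,
                      int midScreenX, int midScreenLimit) {
    int centerX = b.x + b.w / 2;
    return (b.w > objectSize || b.h > objectSize) &&
           std::abs(centerX - midScreenX) < midScreenLimit;
}

// Rule 2.2: two car boxes whose centers are closer than the X and Y thresholds.
bool carsTooClose(const BBox& a, const BBox& b, int overlapX, int overlapY) {
    int dx = std::abs((a.x + a.w / 2) - (b.x + b.w / 2));
    int dy = std::abs((a.y + a.h / 2) - (b.y + b.h / 2));
    return dx < overlapX && dy < overlapY;
}

// A frame is flagged as an accident if either rule fires for any detection.
bool accidentInFrame(const std::vector<BBox>& cars, int objectSize,
                     int midScreenX, int midScreenLimit,
                     int overlapX, int overlapY) {
    for (size_t i = 0; i < cars.size(); ++i) {
        if (bigObjectInFront(cars[i], objectSize, midScreenX, midScreenLimit))
            return true;
        for (size_t j = i + 1; j < cars.size(); ++j)
            if (carsTooClose(cars[i], cars[j], overlapX, overlapY))
                return true;
    }
    return false;
}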
In the future, if we combine both the algorithm and a dedicated model to detect accidents, the accuracy should increase.
Hardware Components
- Kria KV260 with Basic Accessory Pack
- HDMI monitor. I used an old monitor that is not 4K; its resolution is 1600x900. This affects the video-conversion command and the options used to start the application.
We use the SmartCam example as the skeleton of the program, and the model used to detect cars is ssd_adas_pruned_0_95. Please note that models from the Model Zoo need to be recompiled for the B3136 DPU to work with the image from the KV260 tutorial. The details can be found at https://xilinx.github.io/kria-apps-docs/main/build/html/docs/smartcamera/docs/customize_ai_models.html
If you download the models from the Vitis AI Model Zoo for KV260, they are compiled for the B4096 DPU, and the matching KV260 image must be downloaded from https://github.com/Xilinx/Vitis-AI/blob/master/setup/mpsoc/VART/README.md#step2-setup-the-target That image uses the B4096 DPU, which is different from the B3136 DPU used in the SmartCam application. I know it is confusing; it took me some time to understand it.
The image that we will use for the SmartCam application comes from this link:
https://xilinx.github.io/kria-apps-docs/main/build/html/docs/smartcamera/docs/app_deployment.html
I also found some datasets for car accidents but did not have time to adapt them for use with the model from the Vitis AI Model Zoo. They may still be useful as references:
Kaggle DataSets : https://www.kaggle.com/datasets/ckay16/accident-detection-from-cctv-footage
https://github.com/Giffy/AI_CarCrashDetector
CADP DataSets : https://docs.google.com/document/d/12F7l4yxNzzUAISZufEd9WFhQKSefVVo_QsPdTsWxZh8/edit
Building the project
Installing and running the sample SmartCam application works fine when following the tutorial instructions. First, burn the SD card image and make sure the SmartCam sample application runs without any problems. Follow the instructions at
https://xilinx.github.io/kria-apps-docs/main/build/html/docs/smartcamera/docs/app_deployment.html
However, because of a different environment (monitor resolution), you may need to adjust some commands for your screen resolution. For example, with my 1600x900 HDMI monitor the application starts with these options:
sudo /opt/xilinx/bin/smartcam --file $1.h264 --target dp --width 1600 --height 900 -r 30 --aitask ssd
Note : $1 is the video file name without extension
The video for this SmartCam application also needs to be converted to 1600x900 resolution with this command on the host machine (not on the KV260):
ffmpeg -i {input-video.mp4} -c:v libx264 -pix_fmt nv12 -vf scale=1600:900 -r 30 {output.h264}
Now the problems start. I could not convert the video with ffmpeg on my Ubuntu 18.04 machine; I got this error:
Unable to find a suitable output format for '1600:900' 1600:900: Invalid argument
(This error typically appears when the scale filter is written without an equals sign, i.e. "scale 1600:900" instead of "scale=1600:900", so ffmpeg treats 1600:900 as an output file.) I could convert the video on my Mac and did not have time to investigate further, so if you face this problem, try converting on a Mac or Windows machine.
Every time you boot the KV260, don't forget to run these commands to load the SmartCam firmware before running SmartCam:
sudo xmutil unloadapp
sudo xmutil loadapp kv260-smartcam
Now comes the tricky part: setting up the development environment on the host for SmartCam made me sick and took most of my time to resolve problems. In summary, we need to modify and compile SmartCam for the KV260 board, which means installing all the components and libraries needed to compile the SmartCam application. If you miss something, you will get errors about missing components, and it is hard to find and install them later, so you need to get it right from the start.
Follow the guideline and download links below. Please note that there are many different versions of the same documents, and you may get confused if your link points to the wrong one. Make sure the version is correct for each tool and image.
https://xilinx.github.io/kria-apps-docs/main/build/html/docs/build_petalinux.html
The overall process for setting up the development environment for the KV260 is:
1) Install PetaLinux. There are many versions of PetaLinux, and you must use the same version across all tools; mixing versions will cause unexpected results. Trust me... I used PetaLinux 2021.1, Vitis 2021.1, and Vitis AI 1.4 on Ubuntu 18.04.
2) Now comes the crucial part. To compile the SmartCam application, you need to upgrade the eSDK, and this step must complete without any errors. If you get a Yocto extraction error, try upgrading with this command:
$petalinux-upgrade -u http://petalinux.xilinx.com/sswreleases/rel-v2021/sdkupdate/2021.1_update1/ -p "aarch64" --wget-args "--wait 1 -nH --cut-dirs=4"
Refer to these two posts for more details on the errors:
https://www.hackster.io/contests/xilinxadaptivecomputing2021/discussion/posts/9149#challengeNav
https://www.hackster.io/contests/xilinxadaptivecomputing2021/discussion/posts/9120#comment-178684
You should not get any errors during Yocto extraction; otherwise you will face many errors later.
3) Create a PetaLinux project from xilinx-k26-starterkit-v2021.1-final.bsp using the command:
petalinux-create -t project -s xilinx-k26-starterkit-v2021.1-final.bsp
4) Build the project
petalinux-build
If you did not change anything on the platform and have no custom hardware, you can use the KV260 Starter Kit image as a reference.
5) To include KV260 Starter Kit specific packages in the rootfs at build time, the BOARD_VARIANT variable needs to be set in the config. Set the variable with the command below:
echo 'BOARD_VARIANT = "kv"' >> project-spec/meta-user/conf/petalinuxbsp.conf
6) Add the application packagegroups into the user rootfs config file such that rootfs menuconfig gets populated with those entries.
echo 'CONFIG_packagegroup-kv260-smartcam' >> project-spec/meta-user/conf/user-rootfsconfig
echo 'CONFIG_packagegroup-kv260-aibox-reid' >> project-spec/meta-user/conf/user-rootfsconfig
echo 'CONFIG_packagegroup-kv260-defect-detect' >> project-spec/meta-user/conf/user-rootfsconfig
echo 'CONFIG_packagegroup-kv260-nlp-smartvision' >> project-spec/meta-user/conf/user-rootfsconfig
7) Run the PetaLinux rootfs config and select all the added packages under user packages (refer to https://xilinx.github.io/kria-apps-docs/main/build/html/docs/build_petalinux.html for details). If you get compile errors later, come back to this step and add more packages such as OpenCV and GStreamer.
petalinux-config -c rootfs
8) Build the cross compiler (SDK) for building applications that run on the KV260:
petalinux-build -s
9) Install the SDK by running the script at images/linux/sdk.sh and specifying a directory to install the SDK into. This directory will be used when you compile the program later.
Now you are ready to develop the application for SmartCam. Go to https://github.com/wtos03/accAlert to download the source code and configuration for the accAlert application. Build the program with build.sh ${SDKPATH}.
${SDKPATH} is the path where you installed the SDK earlier. After compiling successfully, upload the .rpm to the KV260 using the command:
scp org_file username@ip_address:dest_file
On the KV260 board, install the program by running:
sudo rpm -ivh --force ./smartcam-1.0.1-1.aarch64.rpm
All the modified source code is in ivas_airender.cpp, and the accident-alert configuration is in accAlert.h. The parameters are:
- OBJECT_SIZE (500): Size of an object in front of the camera. If there is a huge car object in front of the camera, we assume something went wrong.
- MID_SCREEN_X (800): Middle-screen X coordinate (screen width / 2).
- MID_SCREEN_LIMIT (50): If an object deviates from the center by more than this limit, we ignore it.
- OBJECT_OVERLAPX (100): X distance threshold between two objects.
- OBJECT_OVERLAPY (100): Y distance threshold between two objects.
The numbers in parentheses are the default values.
Please note that these settings apply to a video and screen size of 1600x900. You may need to readjust them to suit your screen and video size.
At the time of writing, I am still adjusting the algorithm. Please refer to the source code in ivas_airender.cpp for the latest details.
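For reference, here is a minimal sketch of how these defaults could be expressed in accAlert.h (illustrative only; the header in the repository is authoritative):

// accAlert.h (sketch) - tuning parameters for the accident-alert rules.
// Defaults assume 1600x900 video; adjust for other resolutions.
#ifndef ACCALERT_H
#define ACCALERT_H

#define OBJECT_SIZE       500   // a car box this large in front of the camera is suspicious
#define MID_SCREEN_X      800   // screen width / 2
#define MID_SCREEN_LIMIT  50    // allowed deviation from the screen center
#define OBJECT_OVERLAPX   100   // X distance threshold between two cars
#define OBJECT_OVERLAPY   100   // Y distance threshold between two cars

#endif // ACCALERT_H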
Setup and Testing
After you set up the development environment, you can clone the source code from https://github.com/wtos03/accAlert. Before you start to build, please copy smartcam-1.0.1-1.aarch64.rpm somewhere else; this is the finished rpm, and you may need it if you want to test without any modifications. You can then build the project by calling:
./build.sh $path of SDK
For example, mine is /opt/petalinux/2021.1.
You will get the smartcam-1.0.1-1.aarch64.rpm file. Copy this file to the KV260 using:
scp org_file username@ip_address:dest_file
On the KV260, after you reboot or restart the board, you need to load the DPU firmware with these commands first:
sudo xmutil unloadapp
sudo xmutil loadapp kv260-smartcam
Install the rpm package with this command:
sudo rpm -ivh --force ./smartcam-1.0.1-1.aarch64.rpm
Run the program with this command:
sudo /opt/xilinx/bin/smartcam --file video_file.h264 --target dp --width 1600 --height 900 -r 30 --aitask ssd
For details of more options, go to https://xilinx.github.io/kria-apps-docs/main/build/html/docs/smartcamera/docs/app_deployment.html#usage
Sample video files are provided in the videos directory. These videos are in H.264 format at 1600x900 resolution. If you need another resolution, use ffmpeg to convert them.
When you run the program with the sample video files and an accident condition is met, the screen will show "Status : Accident". Try the program with several of the sample videos.
Here is a sample run on the video caracc7.h264:
Wrap Up
From testing with many sample videos, you will see some misdetections of accidents due to:
- Sometimes cars drive side by side and the program thinks it is a crash.
- At some angles the model cannot detect the cars.
- The SSD model does not detect cars 100% of the time, so our program fails too (especially at night).
- If there are many cars in the video, detection takes more time and produces more false alarms. We limit the number of objects processed to 20.
There are some points we can improve in the future for more precise accident detection:
- Improve the model to detect cars at different angles and in varied conditions (rain, night time, etc.).
- Use both a model and the algorithm to detect accidents. The model would need to be trained to recognize cars that have been in an accident (visible body damage).
- For the algorithm, add ReID features to track each car's velocity and direction; this would improve accident-detection accuracy. Right now the object ID is not consistent and keeps changing with each processed frame. A rough sketch of this idea follows.
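As an illustration of that idea (purely hypothetical; it is not part of the current code and assumes a ReID stage that supplies stable object IDs), a per-car track could estimate velocity between frames:

// Hypothetical per-car track for the proposed ReID-based velocity check.
#include <cmath>
#include <map>

struct Track {
    float x = 0, y = 0;     // last known box center (pixels)
    float vx = 0, vy = 0;   // estimated velocity (pixels per frame)
    bool seen = false;
};

// Update the track for a stable object ID with the new box center.
// A sudden large drop in speed, or two tracks converging, could then
// be used as an additional accident cue.
void updateTrack(std::map<int, Track>& tracks, int id, float cx, float cy) {
    Track& t = tracks[id];
    if (t.seen) {
        t.vx = cx - t.x;
        t.vy = cy - t.y;
    }
    t.x = cx;
    t.y = cy;
    t.seen = true;
}

float speed(const Track& t) {
    return std::sqrt(t.vx * t.vx + t.vy * t.vy);
}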
This project does not achieve 100% detection, but it is more accurate on roads with few cars, such as in remote areas, where it can help summon emergency assistance in time after an accident occurs.