Nowadays, ADAS (Advanced Driver Assistance Systems) and autonomous vehicles are increasingly common topics. Both large companies and individual enthusiasts work hard to create efficient systems for these purposes.
In this field, one of the most important problems people devote their time to is object detection. A car that is supposed to drive by itself must know what objects are in its surroundings. Three sensors are most commonly used for this purpose: LiDAR (Light Detection and Ranging), RADAR (Radio Detection and Ranging) and cameras. Data from these sensors must be acquired and properly processed to detect objects. A very common approach is to fuse data from several of these sensors, which makes the system's output more reliable.
Often, when an algorithm detects objects reliably, it is also computationally expensive. However, there have been very few attempts to implement such algorithms on heterogeneous platforms. In our project, we want to show that it can be done and that it brings tangible benefits.
The aim of this project is to implement a hardware-software car detection system based on LiDAR and camera data fusion. The target platform is the Zybo Z7-20.
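Since the fusion here pairs LiDAR geometry with camera imagery, the core operation is mapping a LiDAR cluster into the image so that the camera pipeline can verify it. Below is a minimal C++ sketch of that idea using a pinhole camera model; the intrinsic parameters (fx, fy, cx, cy) are placeholder values, not the project's actual calibration.

```cpp
// Minimal sketch of LiDAR-to-camera projection (hypothetical calibration).
// A LiDAR cluster point, already expressed in the camera frame, is mapped
// onto the image plane so the camera-based classifier can verify it.
#include <cstdio>

struct Point3 { float x, y, z; };
struct Pixel  { int u, v; };

// Hypothetical intrinsics: focal lengths and principal point.
constexpr float fx = 600.0f, fy = 600.0f, cx = 320.0f, cy = 240.0f;

// Pinhole projection of a camera-frame point onto the image plane.
Pixel project(const Point3& p) {
    return { static_cast<int>(fx * p.x / p.z + cx),
             static_cast<int>(fy * p.y / p.z + cy) };
}

int main() {
    Point3 corner{1.2f, -0.4f, 8.0f};   // example point 8 m in front of the camera
    Pixel px = project(corner);
    std::printf("cluster corner -> pixel (%d, %d)\n", px.u, px.v);
}
```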
Step 1: Choosing the data processing algorithm
The first step was deciding how we should detect cars. We searched the relevant literature: articles, conference papers, etc. After evaluating the candidate algorithms in MATLAB and analysing the results, we chose the one presented in figures 1 and 2.
Step 2: Creating the reference model
The next step was creating the reference model. Its basis was the evaluation version from step 1. We changed it so that it would fit well into an FPGA architecture, taking into account the way data arrives from the LiDAR.
The most important things we altered:
- analysing LiDAR scans on the fly instead of buffering and analysing the whole 360-degree point cloud (see the sketch after this list),
- how the space around the LiDAR is divided into clusters,
- how LiDAR cluster aggregation is performed.
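The exact clustering rules are not detailed here, but the on-the-fly idea can be illustrated with a simple C++ sketch: points arrive ordered by angle, and a new cluster is opened whenever the range jumps by more than a threshold. The threshold value below is a placeholder assumption, not the project's setting.

```cpp
// Sketch of on-the-fly clustering of a LiDAR scan: measurements arrive
// ordered by angle, and a range discontinuity starts a new cluster, so no
// full 360-degree buffer is needed.
#include <cmath>
#include <cstdio>
#include <vector>

struct Measurement { float angleDeg, range; };

std::vector<std::vector<Measurement>> clusterScan(const std::vector<Measurement>& scan,
                                                  float rangeJump = 0.5f) {
    std::vector<std::vector<Measurement>> clusters;
    for (const auto& m : scan) {
        if (clusters.empty() || std::fabs(m.range - clusters.back().back().range) > rangeJump)
            clusters.push_back({});       // discontinuity: open a new cluster
        clusters.back().push_back(m);     // otherwise extend the current one
    }
    return clusters;
}

int main() {
    std::vector<Measurement> scan{{0.0f, 5.0f}, {0.25f, 5.02f}, {0.5f, 9.8f}, {0.75f, 9.81f}};
    std::printf("%zu clusters found\n", clusterScan(scan).size());  // expect 2
}
```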
The heart of the Zybo Z7-20 is the heterogeneous Zynq-7000 SoC, which combines programmable logic (PL) with an ARM processor (PS). We took advantage of this and moved to the PS those parts of the algorithm which have a more sequential nature. The diagram below (figure 3) shows the algorithm with its division into PS and PL parts.
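As one illustration of what "sequential nature" means here, consider merging clusters that span sector boundaries: each decision depends on the previously merged cluster, which maps more naturally onto software than onto a pipelined datapath. The sketch below is purely hypothetical; the actual PS/PL split is the one in figure 3, and the cluster geometry and merge criterion here are illustrative assumptions.

```cpp
// Hypothetical sketch of a sequential PS-side step: aggregating clusters that
// belong to the same object across sector boundaries.
#include <cmath>
#include <cstdio>
#include <vector>

struct Cluster { float minAngle, maxAngle, meanRange; };

// Merge neighbouring clusters whose angular spans nearly touch and whose
// ranges agree; each iteration depends on the result of the previous one.
std::vector<Cluster> aggregate(const std::vector<Cluster>& in) {
    std::vector<Cluster> out;
    for (const auto& c : in) {
        if (!out.empty() &&
            c.minAngle - out.back().maxAngle < 1.0f &&                 // adjacent in angle
            std::fabs(c.meanRange - out.back().meanRange) < 0.5f) {    // similar range
            out.back().maxAngle = c.maxAngle;                          // extend previous cluster
        } else {
            out.push_back(c);
        }
    }
    return out;
}

int main() {
    std::vector<Cluster> parts{{0.0f, 4.5f, 10.0f}, {4.8f, 9.0f, 10.2f}, {40.0f, 44.0f, 6.0f}};
    std::printf("%zu objects after aggregation\n", aggregate(parts).size());  // expect 2
}
```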
Step 3: Implementation of the PL part
The following step is the implementation of the PL part of the algorithm. It is done in Vivado using Verilog. We divided the work into a part that processes LiDAR point clouds and a part that processes images. We tested the modules in behavioral simulation, substituting the PS modules with Verilog simulation constructs.
However, not everything works yet:
- HOG produces a different feature vector than the reference model,
- image scaling adds unexpected black stripes.
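A mismatch like the HOG one is usually hunted down by comparing each stage against a bit-exact software golden model. Below is a generic C++ sketch of one HOG cell histogram (9 orientation bins over 0-180 degrees, hard binning, no interpolation); the project's actual HOG parameters may differ, so treat this only as an example of a reference to diff against.

```cpp
// Generic sketch of one HOG cell histogram for cross-checking hardware output.
// Assumes an 8x8 cell with a 1-pixel border for central-difference gradients.
#include <array>
#include <cmath>
#include <cstdint>
#include <cstdio>

constexpr int CELL = 8, BINS = 9;

std::array<float, BINS> cellHistogram(const uint8_t img[CELL + 2][CELL + 2]) {
    std::array<float, BINS> hist{};
    for (int y = 1; y <= CELL; ++y)
        for (int x = 1; x <= CELL; ++x) {
            float gx = static_cast<float>(img[y][x + 1]) - img[y][x - 1];
            float gy = static_cast<float>(img[y + 1][x]) - img[y - 1][x];
            float mag = std::sqrt(gx * gx + gy * gy);
            float ang = std::atan2(gy, gx) * 180.0f / 3.14159265f;  // -180..180
            if (ang < 0) ang += 180.0f;                             // fold to 0..180
            int bin = static_cast<int>(ang / (180.0f / BINS)) % BINS;
            hist[bin] += mag;   // hard assignment; no bilinear bin interpolation
        }
    return hist;
}

int main() {
    uint8_t cell[CELL + 2][CELL + 2] = {};   // all-zero test patch
    std::printf("bin0 = %f\n", cellHistogram(cell)[0]);  // expect 0 for a flat patch
}
```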
Step 4: Implementation of the PS part
This step is the implementation of the PS part of the algorithm. It is being done in Vivado SDK using C++. Currently, only the PL/PS communication layer is being prepared; for now, the software part is being developed independently of the hardware part.
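As a sketch of what polled PL/PS communication can look like in the Vivado SDK standalone environment, the snippet below reads a result register using the Xilinx Xil_In32/Xil_Out32 helpers. The base address and register layout are hypothetical and do not describe the project's actual interface.

```cpp
// Sketch of a polled PS<->PL exchange over memory-mapped registers in the
// Vivado SDK standalone environment. Addresses and layout are hypothetical.
#include "xil_io.h"     // Xil_In32 / Xil_Out32 from the Xilinx standalone BSP
#include "xil_printf.h"

#define DETECTOR_BASE   0x43C00000u   // hypothetical AXI-Lite base address
#define REG_STATUS      0x00u         // bit 0 = result valid (assumed layout)
#define REG_RESULT      0x04u         // packed detection word (assumed layout)
#define REG_ACK         0x08u         // write 1 to acknowledge (assumed layout)

int main(void)
{
    while (1) {
        if (Xil_In32(DETECTOR_BASE + REG_STATUS) & 0x1u) {   // PL has a result
            u32 det = Xil_In32(DETECTOR_BASE + REG_RESULT);  // read detection word
            Xil_Out32(DETECTOR_BASE + REG_ACK, 0x1u);        // tell PL it was consumed
            xil_printf("detection word: %x\r\n", det);
        }
    }
    return 0;
}
```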
What is to be done
Our TODOs are listed below:
- improvement of HOG and image scaling,
- software part of the algorithm,
- integration of all parts of the project,
- implementation on target hardware.
We give a summary of the project in the form of a video below.
How to run the code?
A Git repository with the most recent code is attached to the project. It requires Vivado version 2018.2 or newer. Users should choose the folder with the part of the project they want to investigate. The current version only supports behavioral simulation.