Introduction
Autonomous mobility is more important than ever: it improves traffic efficiency, enables shared urban mobility concepts, and promises greater safety. The automotive industry already uses specialized heterogeneous computing hardware for certain autonomous driving functions. One part of the autonomous driving stack is perception, which outputs the objects detected in the environment. Current implementations use AI models for this task but require large GPUs. In this project, I wanted to implement LiDAR-based object detection on the KRIA KR260 SoC to investigate its feasibility for a potential real-world autonomous driving application. To that end, I want to integrate a ROS2 node that communicates with the open-source autonomous driving software Autoware.
Architecture
The proposed architecture is shown in the figure below.
The LiDAR data is selected from prerecorded ROS bags and then fed into the ROS2 node via data replay. Unfortunately, I did not have time to implement any of these components; a minimal sketch of the intended input side is given below.
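As a rough illustration of how the replayed point clouds could enter the pipeline, the following sketch shows a minimal ROS2 subscriber node. The topic name /sensing/lidar/points is an assumption for illustration; the actual name depends on the recorded bag, and the callback body is only a placeholder for the detector.

# replay_listener.py - minimal sketch of the ROS2 input side of the pipeline.
# The topic name is an assumption; the real one depends on the recorded bag.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2


class LidarReplayListener(Node):
    def __init__(self):
        super().__init__('lidar_replay_listener')
        self.subscription = self.create_subscription(
            PointCloud2,
            '/sensing/lidar/points',   # assumed topic name
            self.on_cloud,
            10)

    def on_cloud(self, msg: PointCloud2) -> None:
        # Placeholder for preprocessing and detector inference on the KR260.
        self.get_logger().info(
            f'received cloud with {msg.width * msg.height} points')


def main():
    rclpy.init()
    node = LidarReplayListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()

While the node is running, the prerecorded data would be replayed with "ros2 bag play <bag_directory>".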
Metrics
Power consumption and latency are the main comparison metrics used to determine efficiency. Once the model is optimized, detection accuracy could be added as a further metric. A sketch of how both metrics might be sampled per frame follows.
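The sketch below times each (hypothetical) inference call and reads the board power from a hwmon sysfs entry. The hwmon path and run_inference() are assumptions for illustration; the exact power-monitor path has to be checked on the actual KR260.

# metrics_sketch.py - sketch of per-frame latency and power sampling.
# The hwmon path is an assumption; verify with `ls /sys/class/hwmon/*/name`.
import time
from pathlib import Path

POWER_FILE = Path('/sys/class/hwmon/hwmon0/power1_input')  # assumed path


def read_power_watts() -> float:
    # ina2xx-style hwmon drivers report power in microwatts.
    return int(POWER_FILE.read_text()) / 1e6


def run_inference(cloud):
    # Hypothetical placeholder for the CenterPoint detector call.
    time.sleep(0.01)
    return []


def timed_inference(cloud):
    start = time.perf_counter()
    detections = run_inference(cloud)
    latency_ms = (time.perf_counter() - start) * 1e3
    power_w = read_power_watts()
    print(f'latency: {latency_ms:.1f} ms, power: {power_w:.2f} W')
    return detections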
Evaluation
No evaluation was done.
Outlook
The scope of this project is large, and because of my main studies I could not finish the original objective. As a next step, a ROS2 node will be written to run the example CenterPoint inference. Further future work includes optimizing the latency to compete with the GPU implementation and running the detector in real time within the autonomous driving software.
Comments