At Sundance we are currently looking into spatial AI using Intel's RealSense platform on our in-house board, the Sundance VCS system. We are using this technology in the film industry to increase the autonomous capabilities of the Agito robot from Motion Impossible.
RealSense is a range of cameras from Intel that allows a robot to interact more with the environment around it. They are great for beginners, as some of the models are very modestly priced, starting from £149.00.
Currently, AI-based image processing is usually performed on the full frame. This means that everything in the field of view is processed, up to a set distance. This is fine, but it can sometimes lead to unwanted predictions that need to be ironed out later in code, which is inefficient when creating streamlined robotic platforms.
An issue we faced with our standard camera technology was that it tended to pick up hazards in the distance before they needed to be detected. This resulted in unnecessary post-processing, which made the system sluggish and increased its power draw.
Creating Your Own Face Tracking with Spatial Awareness
Step 1: Creating the SD Card
The first step is to create the SD card that will be used to run PetaLinux with Vitis-AI on the Ultra96-V2:
- You will need a 16GB microSD card
- You can download the image file here
Once downloaded, you can use Etcher on Windows or the Disks utility on your Linux distribution to write the image file onto your SD card.
Step 2: Setting up LibRealSense
We need to link the LibRealSense library so that it can be accessed by our application:
$ export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib64/"
You can test that LibRealSense is configured properly by running the rs-depth example with this command:
$ rs-depth
You should now see output similar to this: rs-depth renders a simple text-based depth map in the terminal.
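If you would rather sanity-check the depth stream from your own code than rely on the bundled example, a minimal librealsense2 depth query looks roughly like the sketch below. The file name depth_check.cpp and the choice of probing the centre pixel are illustrative assumptions, not part of the original project:

// depth_check.cpp -- minimal sanity check for a RealSense depth stream.
// Build (assuming librealsense2 is installed):
//   g++ depth_check.cpp -o depth_check -lrealsense2
#include <librealsense2/rs.hpp>
#include <iostream>

int main() {
    rs2::pipeline pipe;   // manages the device and streaming
    pipe.start();         // start with the default configuration

    // Grab one set of frames and pull out the depth frame.
    rs2::frameset frames = pipe.wait_for_frames();
    rs2::depth_frame depth = frames.get_depth_frame();

    // Query the distance (in metres) at the centre pixel.
    const int cx = depth.get_width() / 2;
    const int cy = depth.get_height() / 2;
    std::cout << "Distance at centre pixel: "
              << depth.get_distance(cx, cy) << " m\n";
    return 0;
}

If this prints a sensible distance, both the camera and the library path are set up correctly.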
Step 3: Cloning the Repository
SSH into your Ultra96-V2 using a USB Ethernet adapter, making sure to enable X11 forwarding so that graphical output appears on your host machine:
$ ssh -X root@<Ultra96V2-ip>
Now we need to clone the repository:
$ git clone https://github.com/JackBonnellDevelopment/Hackster_Realsense
Once this has been done, plug in your RealSense camera and the fun can begin!
Step 4: Running the Code
To run the code, all you need to do is compile the application and then launch it:
$ cd Hackster_Realsense
$ ./buildrealsense.sh
$ ./realsense_faceTracking
The Results
As you can see in the results, everything but the user is greyed out. This means that only the objects closest to the camera are seen by the Vitis-AI model, leading to less chance of false positives.
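For readers curious about the detection half, the Vitis-AI Library ships a ready-made face detector. The sketch below shows a minimal run of it on a single image; densebox_640_360 is one of the stock Vitis-AI face-detection models, but the snippet as a whole is an illustration of the library API under those assumptions, not code taken from the project repository:

// face_detect_sketch.cpp -- illustrative use of the Vitis-AI face detector.
#include <vitis/ai/facedetect.hpp>
#include <opencv2/opencv.hpp>

int main() {
    // densebox_640_360 is one of the stock Vitis-AI face-detection models.
    auto detector = vitis::ai::FaceDetect::create("densebox_640_360");

    cv::Mat frame = cv::imread("frame.jpg");   // stand-in for a camera frame
    auto results = detector->run(frame);

    // Each detected rect is normalised to [0, 1]; scale back to pixels.
    for (const auto &face : results.rects) {
        cv::rectangle(frame,
                      cv::Rect(face.x * frame.cols, face.y * frame.rows,
                               face.width * frame.cols, face.height * frame.rows),
                      cv::Scalar(0, 255, 0), 2);
    }
    cv::imwrite("faces.jpg", frame);
    return 0;
}

Because the greyed-out background carries no detail, the detector has far fewer distant patterns to misclassify as faces.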
Currently the greyed-out area is set to 1 meter, but this can be changed in the code on line 39 of "realsense.cpp":
float depth_clipping_distance = <number of meters>.f;
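Under the hood, this kind of depth clipping can be implemented by aligning the depth stream to the colour stream and greying out every pixel whose depth reading falls beyond the threshold. The sketch below shows the general idea using librealsense2 and OpenCV; it follows the pattern of Intel's rs-align example rather than reproducing the exact contents of "realsense.cpp", and the file name clip_sketch.cpp is just a placeholder:

// clip_sketch.cpp -- grey out colour pixels beyond a depth threshold.
#include <librealsense2/rs.hpp>
#include <opencv2/opencv.hpp>

int main() {
    float depth_clipping_distance = 1.f;   // metres, as in the tutorial

    rs2::pipeline pipe;
    rs2::pipeline_profile profile = pipe.start();

    // Scale factor that converts raw depth units to metres.
    float depth_scale = profile.get_device()
                            .first<rs2::depth_sensor>()
                            .get_depth_scale();

    // Map the depth frame onto the colour frame's pixel grid.
    rs2::align align_to_color(RS2_STREAM_COLOR);
    rs2::frameset frames = align_to_color.process(pipe.wait_for_frames());
    rs2::video_frame color = frames.get_color_frame();
    rs2::depth_frame depth = frames.get_depth_frame();

    // Wrap the colour data (RealSense delivers RGB; OpenCV expects BGR).
    cv::Mat rgb(cv::Size(color.get_width(), color.get_height()),
                CV_8UC3, (void *)color.get_data(), cv::Mat::AUTO_STEP);
    cv::Mat img;
    cv::cvtColor(rgb, img, cv::COLOR_RGB2BGR);
    const uint16_t *d = (const uint16_t *)depth.get_data();

    // Grey out every pixel further away than the clipping distance.
    for (int y = 0; y < img.rows; ++y)
        for (int x = 0; x < img.cols; ++x) {
            float metres = depth_scale * d[y * img.cols + x];
            if (metres <= 0.f || metres > depth_clipping_distance)
                img.at<cv::Vec3b>(y, x) = cv::Vec3b(128, 128, 128);
        }

    cv::imwrite("clipped.jpg", img);   // only nearby objects keep their colour
    return 0;
}

Lowering depth_clipping_distance tightens the region the AI model ever sees, which is exactly why the false-positive rate drops.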
Future Work and Ideas
The next logical step is to accelerate the LibRealSense libraries as much as possible and streamline exactly what is needed to run the cameras efficiently. Using the RealSense camera together with Vitis-AI adds a level of intelligence to robotic applications, allowing for a higher level of awareness. I can see this being a benchmark for low-power AI solutions going forward and hope it inspires future development.
Further Reading
If you are interested in furthering your knowledge of applied robotics, I was recently a guest speaker at the Brunel Robotics & AI Expo (B:RAI), talking all things Vitis-AI and real-world applications!
I also have a published paper, which you can read here!