BikeWatch is an ingenious product that uses the Xilinx Kria KV260 and machine learning to detect pedestrians in front of a bike and alert them that a bicycle is approaching.
The idea for this project came to me when I was walking in the local park and many people were biking along the same pathway. Every time a bike went by, it would startle me, and once I even stepped in front of a bike because I didn't know it was coming. Luckily I was unharmed, but that moment could have been a serious accident in which both the cyclist and I got hurt.
This is where I thought of how I could put my tinkering knowledge to the test and invent a device that could be mounted on a bike and warn pedestrians that a bike is coming. This would prevent bike accidents: pedestrians wouldn't accidentally step in front of bikes and would know to make space for the passing cyclist, while the cyclist would know that pedestrians are up ahead.
I knew that for my idea, BikeWatch, to become a reality, I would need to use computer vision and a board that could handle such a large processing task. I saw the Xilinx Adaptive Computing Challenge on Hackster and applied for the Xilinx Kria KV260. Xilinx liked my idea and gave me this board for the competition.
What are the benefits of using the Xilinx Kria KV260?
Before talking about the details of BikeWatch, I would like to discuss the benefits of the Kria KV260. It has a few main strengths that suited my project:
- No need for complex hardware design knowledge
When I first read this line on the board's homepage (https://www.xilinx.com/products/som/kria/kv260-vision-starter-kit.html), I was skeptical.
Most boards require you to dig very deep into their hardware, but this one blew me away; I didn't need to do any complex hardware design at all. It was very easy to run the examples from the Vitis AI Model Zoo and include them in my C++ code.
- Vision Ready Deep Learning Processing Unit (DPU)
The DPU is an amazing processor that Xilinx made specifically for computer vision tasks. Located under the red fan on the Kria, it helped me achieve a very high frame rate (frames per second, FPS) and a very short inference time for RefineDet, the model I used in BikeWatch to detect pedestrians.
So… Let’s build the project!!! 🤩
Booting the Xilinx Kria KV260
Flash the 16 GB Micro-SD Card that comes with the Starter Kit with this PetaLinux image (https://www.xilinx.com/member/forms/download/design-license-xef.html?filename=xilinx-kv260-dpu-v2020.2-v1.4.0.img.gz).
Note: To download the PetaLinux image you will need a Xilinx Developer Account.
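If you haven't flashed an SD card before, balenaEtcher is the easiest option. On Linux you can also do it from the terminal; here is a sketch, where /dev/sdX is a placeholder for your card's device name (double-check it with lsblk first, since dd will overwrite whatever it points at):
gunzip -c xilinx-kv260-dpu-v2020.2-v1.4.0.img.gz | sudo dd of=/dev/sdX bs=4M status=progress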
Once the Micro-SD Card is ready,
- Turn on the Anker PowerHouse 100 Portable Battery (press the metallic gray circle once)
- Turn on AC power (press the circle until you see a blue light next to where AC is written, see image below).
- Insert the Micro-SD card into the Kria KV260
- Plug in the AC power plug (it comes with the Kria KV260 Vision AI Starter Kit) into the AC port on the Anker PowerHouse 100 Portable Battery.
Note: The Kria KV260 can currently only power itself and USB peripherals if connected to AC power, which is why I bought this specific battery. I figured this out with the help of @quentonh, @prithvi-mattur and @stonux https://www.hackster.io/contests/xilinxadaptivecomputing2021/discussion/posts/9085#comment-179807.
Connect an HDMI monitor, a keyboard, and a mouse
or
Connect your laptop to the Kria using a micro-USB cable. Then, to connect to the board remotely:
- for Mac and Linux users, `ssh` into the Kria,
- for Windows users, use Putty or TeraTerm
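For example, once the board is on your network (it will be after step 1 below), a remote session from a Mac or Linux terminal looks like this (placeholder values; substitute the user name and IP address of your board):
ssh <user>@<board-ip>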
Install the Vitis AI Runtime Libraries
These libraries are needed to run models on the DPU. The models can come from the Vitis AI Model Zoo or be your own compiled and quantized models.
1. Connect your board to Ethernet
2. Download using wget:
wget -O vitis-ai-runtime-1.4.0.tar.gz https://public.db.files.1drv.com/y4mjpl9YZ41ROOqTX9lUST-M4TkVaNhlr21LIFqAnHE-EV6odhoHIrhraMDnewblJipb9njNV_lzx2og_ysKGigSc6LXHWPmxI_25cetlYtaQKeCOrNGP663jm69WE8nXqAqxrhPVddhGY4cFGTFSL3IWnbOK7Uo7fhAmbIy52i1Bf3PHueihcmXwCTr_ql5bdf2CA8RvwA-cs7CpOGVLCF6ib0qpAGedSZ865qKR7y__0?access_token=EwAAA61DBAAU2kADSankulnKv2PwDjfenppNXFIAAXOnMxeLdBlEHwoMDolhCBBZJEY5Z9YNb6DuOxIVt0MLU8cO0DQPI1uaQUzqsyCVy7NDAt1Rl2YbMzfXc8vFcNjLCGFfeECnGmeL%2bud1NIPxlGDaqPt78HIKMX6EYMhr0i3JJavXc7C9b7F/pFzvSGppS302gwnZWubDto7G0zftl/sH8dOAdLsVO%2b2KPvNRj6jE6gba6C6s/%2b6EUrbjDFFV91dBoBnS6BLSbI1pSro4p7VrNUC2KS7Grm8JbrLODUFJGyugRv11fvnMdWOWxtDRHOx7mRcX8kolE8Z0NAV%2bLJ5meNYx8249rKf%2bzTj50tOu1azHyNsfOOkJ%2bmcdv10DZgAACFgC4zG/zP900AFTCP%2bU924SuRn1x4N9Z8NEfBFjXDwOL6QbLWNLaFgVMnVLx7EH8xe3CNxVdPFvav%2bS6vEs75jIjdOt1BUgmx9DngiTAwkyLtQRqdQcuUCY61GBmu5cM1qiMTgxhICc3K%2bk1CewFaAWyUX4X5bkX/DZZMiwBZrPNCJulC%2bF1QyVrS9pF/9Yld1fut%2b2I9EvkpCUrgQCFgfy8jx0q/wvvYGZaX5goZaTr8IAs0BNKpZ25r6voWvnrmI5RFS86tSBnzldpxbwyRq4hluxmdpqFNCXkabwxeqmYeVzKmwPZct3NY8Ib7G7ypnN2ZnhKaNsG5cG3gBAv8gyFbPq2wU3qaz1DJPKdVrp38T08XOJ9E1nf6NWCj8QeFBFB2w1FTBd2EpmZurzuYkuMC4ZOlmApPvFB3Md7dyFiSkR39KZ9DEGgzxV20SBCOtEecQCuZ%2bzjmRggeKiY5C9i6E86adUvFn4mwFV5tp3Aqnh3Yjk8c2uOTYqzebtHe3lZvdHAPTvf9HwsE5uFiYOT6Cnog5ZHwaideaSlkSG5jllkxazILJGG5aOU2sQkuW80g/88GiWkIS9Kf%2bCjqwfh1kQFqTLiqg5K3eRy1RZ%2bJjRbbp7WDD6VRAC
3. Decompress the downloaded .tar.gz:
tar -xzvf vitis-ai-runtime-1.4.0.tar.gz
4. Change into the setup directory:
cd vitis-ai-runtime-1.4.0/2020.2/aarch64/centos
5. Set up the Vitis AI Runtime libraries:
bash setup.sh
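To check that the runtime installed correctly, you can try creating a DPU runner from Python. This is just a sanity-check sketch, not part of BikeWatch; "refinedet.xmodel" is a placeholder path, so point it at any compiled model from the Vitis AI Model Zoo that you have on the board:
# Sanity check: load a compiled model and create a DPU runner with VART.
# "refinedet.xmodel" is a placeholder - use any compiled Model Zoo model.
import xir
import vart

graph = xir.Graph.deserialize("refinedet.xmodel")
# Pick out the subgraph that the compiler mapped onto the DPU.
dpu_subgraphs = [
    s for s in graph.get_root_subgraph().toposort_child_subgraph()
    if s.has_attr("device") and s.get_attr("device") == "DPU"
]
runner = vart.Runner.create_runner(dpu_subgraphs[0], "run")
print("DPU runner created successfully")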
Run BikeWatch
1. Clone my repository:
git clone https://github.com/Raunak-Singh-Inventor/BikeWatch.git
Note: Before running the model, I would like to mention that I was originally thinking of using YOLOv3 trained on the COCO dataset for pedestrian detection. Then @jerryahn suggested EfficientDet-D0 after I asked for instructions on compiling the model with Vitis AI, and I saw that EfficientDet's computational performance was much better than YOLOv3's (see graph below). Since I couldn't find either model trained on COCO in the Vitis AI Model Zoo, I implemented @quentonh's suggestion to use RefineDet, which is more computationally efficient and is available in the Model Zoo trained on COCO. https://www.hackster.io/contests/xilinxadaptivecomputing2021/discussion/posts/9110#comment-178221
2. cd BikeWatch
3. Build the C++ code that performs inference on the RefineDet model and returns `1` if a pedestrian is detected and `0` if not:
./build.sh
4. Connect the camera and speaker to the Kria.
Note: The speaker needs to be connected through a USB audio adapter.
5. Run the python script that calls the C++ code and plays the sound on the speaker:
python3 main.py
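For reference, here is a minimal sketch of what this loop might look like. The binary and file names are placeholders, not the exact ones in my repo, and I'm assuming the detector reports its result on standard output:
# Minimal sketch of the BikeWatch loop: the C++ detector prints 1 or 0,
# and Python plays the alert whenever a pedestrian is detected.
import subprocess
import time

DETECTOR = "./bikewatch_detector"      # placeholder: binary produced by build.sh
ALERT_WAV = "bicycle_approaching.wav"  # placeholder: the alert sound file

while True:
    # Run one round of inference; the detector prints "1" for
    # pedestrian detected, "0" otherwise.
    result = subprocess.run([DETECTOR], capture_output=True, text=True)
    if result.stdout.strip() == "1":
        # aplay ships with most embedded Linux images; swap in your
        # preferred audio player if it's missing.
        subprocess.run(["aplay", ALERT_WAV])
    time.sleep(0.1)  # brief pause so the alert isn't spammed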
Now that we have gone over the steps to build the core product that can detect a pedestrian and say "bicycle approaching", let's check out a short demo of how it works mounted on a bike in a 3D-printed case.
3D Printing and Mounting
To get the 3D-printed case, print the BikeWatch-holder and BikeWatch-cover CAD files on your own 3D printer. I used the Dremel 3D20 3D printer at my school (Memorial Middle School, New Jersey) with the help of my Dynamic Apps teacher, Mr. Harrington.
Once you have both the cover and the holder printed out, use a hot glue gun to attach them to each other.
Then, glue the components (speaker, camera) onto the 3D-printed case. I hung the battery on one of my handlebars as it was too big to put on the case.
Once you have this setup ready, use the Bike HandleBar Mount to put BikeWatch onto your bike. Congratulations, you have built BikeWatch! Take this project for a ride around the park in retro-gadget style, while keeping pedestrians safe.
Thanks for reading!🥳