These might be very difficult times for many of us, depending on the part of the world you live in. Due to the aggravating coronavirus pandemic, many countries have implemented strict lockdown policies. I myself recently had to spend 14 days in quarantine, staying indoors 24 hours a day. I decided to make the most of it and continue working on the things I am excited about, i.e. robotics and machine learning. And this is how aXeleRate was born.
UPDATED 03/29/2022. The difficult times are mostly over, but I decided to leave the above intro as part of the backstory. I try my best to keep my articles updated on a regular basis, based on your feedback in the YouTube/Hackster comments sections. If you'd like to show your support and appreciation for these efforts, consider buying me a coffee (or a pizza) :) .
aXeleRate started as a personal project of mine for training YOLOv2-based object detection networks and exporting them to .kmodel format to be run on the K210 chip. I also needed to train image classification networks, and sometimes I needed to run inference with TensorFlow Lite on a Raspberry Pi. As a result I had a whole bunch of disconnected scripts with somewhat overlapping functionality. So, I decided to fix that by combining all the elements into an easy-to-use package and, as a bonus, making it fully compatible with Google Colab.
aXeleRate is meant for people who need to run computer vision applications (image classification, object detection, semantic segmentation) on edge devices with hardware acceleration. It has an easy configuration process through a config file or a config dictionary (for Google Colab) and automatic conversion of the best model from a training session into the required file format. You put properly formatted data in, start the training script and come back to a converted model that is ready for deployment on your device!
Here is a quick rundown of the features:
Key Features
- Supports multiple computer vision models: object detection (YOLOv2), image classification, semantic segmentation (SegNet-basic)
- Different feature extractors to be used with the above network types: Full Yolo, Tiny Yolo, MobileNet, SqueezeNet, VGG16, ResNet50, and Inception3.
- Automatic conversion of the best model from the training session. aXeleRate will download the suitable converter automatically.
- Currently supports trained model conversion to: .kmodel (K210), .tflite formats. Support planned for: .tflite (Edge TPU), .pb (TF-TRT optimized).
- Model version control made easier. Keras model files and converted models are saved in the project folder, grouped by the training date. Training history is saved as a .png graph in the model folder.
- Two modes of operation: local, with the train.py script and a .json config file, and remote, tailored for Google Colab, with module import and a dictionary config.
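Since the two modes differ only in how the config is supplied, here is a minimal sketch of what a detection config might look like in the dictionary (Colab) form. The key names, values and the setup_training call are assumptions on my part, shown purely for illustration; the sample notebook we will open below contains the authoritative, up-to-date version.

```python
# Illustrative config dictionary for a YOLOv2 person detector.
# NOTE: key names, anchor values and the setup_training import are assumptions --
# check the sample notebook for the canonical structure.
config = {
    "model": {
        "type": "Detector",              # object detection (YOLOv2)
        "architecture": "MobileNet7_5",  # feature extractor
        "input_size": 224,
        "anchors": [0.57273, 0.677385, 1.87446, 2.06253, 3.33843,
                    5.47434, 7.88282, 3.52778, 9.77052, 9.16828],
        "labels": ["person"],
    },
    "train": {
        "train_image_folder": "person_detection/imgs",
        "train_annot_folder": "person_detection/anns",
        "actual_epoch": 50,
        "batch_size": 32,
        "saved_folder": "person_detector",
    },
    "converter": {
        "type": ["k210", "tflite"],      # formats to convert the best model to
    },
}

# In Colab mode the dictionary is passed to the training entry point,
# roughly like this (hypothetical call, see the sample notebook):
# from axelerate import setup_training
# model_path = setup_training(config_dict=config)
```

In local mode the same structure lives in a .json file that you pass to train.py instead.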
In this article we’re going to train a person detection model for use with the K210 chip on the cyberEye board installed on the M.A.R.K. mobile platform. M.A.R.K. (I'll call it MARK in the text) stands for Make A Robot Kit, and it is an educational robot platform in development by TinkerGen Education. I take part in the development of MARK, and we’re currently preparing to launch a Kickstarter campaign. One of the main features of MARK is making machine learning concepts and workflows more transparent and easier to understand and use for teachers and students.
As mentioned before, aXeleRate can be run on a local computer or in Google Colab. We’ll opt for running on Google Colab, since it simplifies the preparation step.
Let’s open the sample notebook
Go through the cells one by one to get an understanding of the workflow. This example trains a detection network on a tiny dataset that is included with aXeleRate. For our next step we need a bigger dataset to actually train a useful model.
Open the notebook I prepared. Follow the steps there and in the end, after a few hours of training, you will get .h5, .tflite and .kmodel files saved in your Google Drive. Download the .kmodel file, copy it to an SD card and insert the SD card into the mainboard. In our case with M.A.R.K. the mainboard is a modified version of Maixduino called cyberEye.
MARK is an educational robot for introducing students to the concepts of AI and machine learning. So, there are two ways to run the custom model you just created: using MicroPython code or TinkerGen’s graphical programming environment, called Codecraft. While the first is undoubtedly more flexible in the ways you can tweak the inference parameters, the second is more user-friendly.
If you opt for the graphical programming environment, go to the Codecraft website, https://ide.tinkergen.com, and choose MARK (cyberEye) as the target platform.
Click on Add Extension and choose Custom models, then click on Object Detection model.
There you will need to enter the filename of the model on the SD card, the actual name of the model you will see in the Codecraft interface (it can be anything, let's enter Person detection), the category name (person) and the anchors. We didn't change the anchor parameters, so we will just use the default ones.
After that you will see that three new blocks have appeared. Let's choose the one that outputs the X coordinate of the detected object and connect it to Display... at row 1. Put that inside the loop and upload the code to MARK. You should see the X coordinate of the center of the bounding box around the detected person on the first row of the screen. If nothing is detected, it will show -1.
That allowed us to implement model inference in the graphical programming environment. Now we’ll move to MicroPython and implement a more advanced solution. Download and install the MaixPy IDE from here.
Open the example code I enclose with the article. The code logic is as follows (a minimal sketch of the detection part follows the list):
1) We check if there are people detected in the find_center() function. If people are found, it returns the x-coordinate of the center of the biggest detected bounding box. If no people are detected, the function returns -1.
2) If the find_center() function returns an x-coordinate, we check whether it is close to the image center, to the left or to the right, and then control the motors accordingly.
3) If the find_center() function returns -1, we use the servo to do a 40-degree tilt scan for people with the camera.
4) If during the tilt scan we are unable to find people, the robot does two 180-degree pan scans.
5) Finally, if the pan scan doesn't detect any people, the robot starts rotating in place clockwise, while still performing person detection.
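For reference, here is a minimal MaixPy (MicroPython) sketch of the detection part of this logic (step 1 only). The model filename, thresholds and anchor values below are placeholders, so adjust them to match your own training run and the example code attached to the article.

```python
# Minimal sketch of the find_center() logic described above (MaixPy on K210).
# ASSUMPTIONS: the converted model is stored as /sd/person.kmodel and the anchors
# here are placeholders -- use the values from your own training config.
import sensor
import KPU as kpu

ANCHORS = (0.57273, 0.677385, 1.87446, 2.06253, 3.33843,
           5.47434, 7.88282, 3.52778, 9.77052, 9.16828)

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.run(1)

task = kpu.load("/sd/person.kmodel")          # hypothetical filename
kpu.init_yolo2(task, 0.5, 0.3, 5, ANCHORS)    # threshold, NMS, anchor count, anchors

def find_center():
    """Return the x-coordinate of the biggest detected person, or -1 if none."""
    img = sensor.snapshot()
    objects = kpu.run_yolo2(task, img)
    if not objects:
        return -1
    biggest = max(objects, key=lambda o: o.w() * o.h())
    return biggest.x() + biggest.w() // 2

while True:
    x = find_center()
    print(x)   # feed this into the motor/servo control logic of steps 2-5
```

The motor, servo and pan/tilt scanning code from steps 2 to 5 is hardware-specific, so refer to the attached example for the full implementation.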
Here is the final result of the MicroPython code in action! It could still be improved to make it faster and more robust.
aXeleRate is still a work-in-progress project. I will be making changes from time to time, and if you find it useful and can contribute, PRs are very much welcome! In the near future I will be making another video about inference on the Raspberry Pi 4 with/without hardware acceleration. We will have another WIP hardware appearance for that one. Which one? Can’t tell you, hush-hush!
Stay tuned for more articles from me and updates on the MARK Kickstarter campaign.
Add me on LinkedIn if you have any questions and subscribe to my YouTube channel to get notified about more interesting projects involving machine learning and robotics.
Until next time, and stay safe from the coronavirus!