
Go Tiny or Go Home

The open source TinyML-CAM pipeline creates image recognition models that run at over 80 FPS on an ESP32 while using under 1 KB of RAM.

Nick Bild
2 years ago β€’ Machine Learning & AI
A selection of microcontroller development boards (πŸ“·: B. Sudharsan et al.)

How low can you go? For artificial intelligence to be put to full use in the electronic devices we rely on every day, the algorithms must be capable of running on tiny compute platforms with limited resources. The alternative is to run resource-intensive inferences in the cloud, which brings increased latency, privacy concerns, and a requirement that internet access always be available. Cloud platforms are also orders of magnitude more expensive and consume far more energy. With billions of microcontrollers already controlling electronic devices around the world, these tiny, inexpensive chips provide an ideal platform for running our machine learning models.

Well, except for just one tiny little detail. In the world of microcontrollers, memory is measured in kilobytes, and the processing speed is nothing you would have envied even back when you were running Windows 95. Considering that state-of-the-art machine learning models can require gigabytes of memory to run, this no longer looks like such a great match. That mismatch is the motivation behind recent developments like TensorFlow Lite for Microcontrollers and the Edge Impulse machine learning development platform, tools designed to produce models that are both accurate and compact enough to run on edge computing devices.

A team of machine learning researchers has shown just how far the envelope can be pushed with their TinyML-CAM pipeline. They have developed an end-to-end solution for designing, deploying, and efficiently executing image recognition models on highly resource-constrained development boards. The open source methods have been shown to consume as little as 1 KB of memory on the popular ESP32 microcontroller while running image recognition inferences. The pipeline is also designed to be simple: a custom model can be developed in about 30 minutes with minimal coding.

TinyML-CAM was demonstrated by walking through the creation of a random forest classifier. For a model to learn, it needs example data, so the pipeline begins with data collection. An HTTP video streaming server, accessible from any web browser, captures images at 160x120 resolution by default, though the resolution is adjustable. Captured images are then fed into a tool called MjpegCollector, which prompts the user to supply a class name and then collects images for that class. Previews are available to confirm that the training data is of sufficient quality.
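
As a rough illustration of what that collection step looks like, here is a short Python sketch that pulls frames from an MJPEG stream and files them under a class name. The stream URL and the collect_class helper are stand-ins invented for this example, not TinyML-CAM's actual MjpegCollector interface, and the snippet assumes OpenCV is installed.

# Illustrative sketch of per-class image collection (not the actual MjpegCollector API).
# Assumes OpenCV is installed and the camera exposes an MJPEG stream at a placeholder URL.
from pathlib import Path
import cv2

STREAM_URL = "http://192.168.1.50:81/stream"  # placeholder camera stream address

def collect_class(class_name, num_frames=100, out_dir="dataset"):
    """Grab frames from the MJPEG stream and save them under out_dir/<class_name>/."""
    target = Path(out_dir) / class_name
    target.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(STREAM_URL)
    if not cap.isOpened():
        raise RuntimeError("Could not open MJPEG stream")
    saved = 0
    while saved < num_frames:
        ok, frame = cap.read()
        if not ok:
            continue  # skip dropped frames
        cv2.imwrite(str(target / f"{saved:04d}.jpg"), frame)
        saved += 1
    cap.release()

if __name__ == "__main__":
    label = input("Class name: ").strip()
    collect_class(label)
    print(f"Saved frames for class '{label}' to dataset/{label}/")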

Once the data collection is finished, the optimizations begin. This second stage of the pipeline extracts features from the captured data to bring the most informative variables to the forefront. Images are first converted to grayscale and resized to reduce computational requirements, then a histogram of oriented gradients (HOG) is computed. A visualization of this data helps assess each feature's utility and how much it is likely to contribute to an accurate classification. Finally, the Uniform Manifold Approximation and Projection (UMAP) dimensionality reduction algorithm compresses the feature vector.
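
The same stage can be sketched with common Python libraries. The snippet below uses scikit-image for the HOG descriptor and umap-learn for the compression; the image size, HOG settings, and number of UMAP components are illustrative guesses, not the pipeline's documented defaults, and synthetic frames stand in for the collected dataset.

# Illustrative sketch of the feature-extraction stage: grayscale, resize, HOG, then UMAP.
# Parameter values are illustrative, not the pipeline's documented defaults.
# Requires scikit-image and umap-learn.
import numpy as np
from skimage.color import rgb2gray
from skimage.transform import resize
from skimage.feature import hog
from umap import UMAP

def extract_hog(image_rgb, size=(40, 30)):
    """Convert an RGB frame to grayscale, shrink it, and compute a HOG descriptor."""
    gray = rgb2gray(image_rgb)
    small = resize(gray, size, anti_aliasing=True)
    return hog(small, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

if __name__ == "__main__":
    # Synthetic 160x120 frames stand in for the collected dataset, just to show the shapes.
    rng = np.random.default_rng(0)
    frames = [rng.random((120, 160, 3)) for _ in range(10)]
    X = np.stack([extract_hog(f) for f in frames])
    reducer = UMAP(n_components=3, n_neighbors=5, random_state=0)
    X_reduced = reducer.fit_transform(X)
    print("HOG features:", X.shape, "-> after UMAP:", X_reduced.shape)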

Once there is confidence that the features characterize the data well, the pipeline moves on to the classification stage. Based on past experience, the team knew that a random forest classifier would be well suited to edge devices and likely to deliver both good results and good performance. A default set of configuration parameters, said to work for most use cases, is provided, but it can be tweaked as needed. Model accuracies are reported to typically fall in the range of 70% to 100%. After training, the model is ported to C++, which can be compiled and loaded onto a wide range of development boards.
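
In outline, that stage amounts to training a random forest on the compressed features and then exporting it. Here is a minimal scikit-learn sketch: random data stands in for the real features and labels, the hyperparameters are placeholders rather than TinyML-CAM's defaults, and the C++ export is only noted in a comment since this article does not detail the exact tool used.

# Illustrative sketch of the classification stage: a random forest on UMAP-compressed features.
# Random data and placeholder hyperparameters are used purely for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X_reduced = rng.random((40, 3))      # stand-in for UMAP-compressed feature vectors
y = rng.integers(0, 2, size=40)      # stand-in class labels

X_train, X_test, y_train, y_test = train_test_split(
    X_reduced, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=20, max_depth=8, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# The trained forest is then exported as C++ for the microcontroller. Tools such as
# micromlgen can generate a port from a scikit-learn model; the exact export step used
# by TinyML-CAM is not detailed in this article, so it is shown only as a comment:
# from micromlgen import port
# print(port(clf))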

In a validation run, image recognition inference ran at a very impressive 83 frames per second while using only one kilobyte of memory. The full source code of TinyML-CAM is available on GitHub, and the demonstrated results may make it worth a try for your next tinyML project.
