
Building Gesture and Vision Models Using TensorFlow Lite and Arduino

An official port of TensorFlow Lite for Microcontrollers, released by Google and Arduino, now with gesture and image recognition models.

We first saw TensorFlow Lite running on Arduino-compatible hardware three months ago, when Adafruit picked up the TensorFlow demo and ported it, along with TensorFlow Lite for Microcontrollers, to the Arduino development environment.

Since then, Adafruit has invested a lot of time in making things more usable, iterating on the tooling around the original speech demo to make it easy to build and deploy new models, as well as working to support new hardware. However, what’s been missing until now is official support from the TensorFlow team at Google, and examples of how to use TensorFlow on micro-controller-class hardware for more than just voice recognition.

That changed a few hours ago, with the release of an official port of TensorFlow Lite for Microcontrollers from the team at Google, in collaboration with Arduino.

The port of TensorFlow Lite for Microcontrollers has ‘unofficially’ been in the works since the middle of the year. However, today’s announcement makes both it, and example code showing how to use it, available inside the Arduino development environment directly from the Arduino Library Manager.

While today’s release walks you through building and using the original speech recognition demo for TensorFlow Lite, announced alongside the SparkFun Edge from the stage of this year’s TensorFlow Dev Summit in Santa Clara, CA, the big news is that we’re now finally moving beyond speech recognition models.

Gesture Recognition on Arduino with TensorFlow Lite for Microcontrollers (📹: Arduino)

Based around a workshop given by Sandeep Mistry and Don Coleman at the AI/ML DevFest in Phoenix, AZ, at the tail end of last month, the new gesture recognition example built around today’s release takes you all the way from capturing and classifying data, through training your model in the cloud, to deploying the trained model locally to the same Arduino board you used to capture the data in the first place.

Making use of the new Arduino Nano 33 BLE Sense, which comes with a 9-axis IMU as well as sensors for pressure, humidity, temperature, proximity, and light, plus an embedded microphone, the tutorial shows how to capture training data, before going on to use Google Colab to train a new machine learning model using that data.
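To give a flavour of the capture step, here’s a minimal sketch along the lines of the tutorial’s approach, built around the Arduino_LSM9DS1 library that supports the Nano 33 BLE Sense’s IMU. The motion threshold and the number of samples per gesture are illustrative values, not necessarily the tutorial’s exact figures.

```cpp
// Minimal gesture capture sketch for the Nano 33 BLE Sense.
// Waits for a significant motion, then logs a fixed window of
// accelerometer and gyroscope readings as CSV over serial.
#include <Arduino_LSM9DS1.h>

const float accelerationThreshold = 2.5;  // G's needed to trigger a capture (assumed value)
const int numSamples = 119;               // samples recorded per gesture (assumed value)

void setup() {
  Serial.begin(9600);
  while (!Serial);

  if (!IMU.begin()) {
    Serial.println("Failed to initialise IMU!");
    while (1);
  }

  // CSV header, so the captured log can be fed straight into training
  Serial.println("aX,aY,aZ,gX,gY,gZ");
}

void loop() {
  float aX, aY, aZ, gX, gY, gZ;

  // wait for a significant motion before starting to record
  while (true) {
    if (IMU.accelerationAvailable()) {
      IMU.readAcceleration(aX, aY, aZ);
      if (fabs(aX) + fabs(aY) + fabs(aZ) >= accelerationThreshold) break;
    }
  }

  // record a fixed window of IMU readings for this gesture
  int samplesRead = 0;
  while (samplesRead < numSamples) {
    if (IMU.accelerationAvailable() && IMU.gyroscopeAvailable()) {
      IMU.readAcceleration(aX, aY, aZ);
      IMU.readGyroscope(gX, gY, gZ);
      samplesRead++;

      Serial.print(aX); Serial.print(',');
      Serial.print(aY); Serial.print(',');
      Serial.print(aZ); Serial.print(',');
      Serial.print(gX); Serial.print(',');
      Serial.print(gY); Serial.print(',');
      Serial.println(gZ);
    }
  }
  Serial.println();  // blank line separates one gesture from the next
}
```

Each gesture then arrives as a block of CSV rows on the serial monitor, which can be saved out to a file and uploaded to the Colab notebook for training.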

After training, you can convert the new model to TensorFlow Lite, download it as an Arduino header file, and deploy it back to your board.
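For a rough idea of what the deployment side involves, the sketch below shows the boilerplate needed to load a converted model and run inference using the TensorFlow Lite for Microcontrollers C++ API. The header and array names (model.h and model) are assumptions for illustration, as a model converted and dumped to a C array might be (with xxd -i, for instance); the tensor arena size is model dependent, and the include paths reflect the experimental layout the library used at the time of this release.

```cpp
// Loading a converted model and running inference on-device.
#include <TensorFlowLite.h>

#include "tensorflow/lite/experimental/micro/kernels/all_ops_resolver.h"
#include "tensorflow/lite/experimental/micro/micro_error_reporter.h"
#include "tensorflow/lite/experimental/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"

#include "model.h"  // the trained model, as a C byte array (name assumed)

namespace {
tflite::MicroErrorReporter micro_error_reporter;
tflite::ops::micro::AllOpsResolver resolver;

// Working memory for the interpreter; the right size is model dependent.
constexpr int kTensorArenaSize = 8 * 1024;
uint8_t tensor_arena[kTensorArenaSize];

tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
TfLiteTensor* output = nullptr;
}  // namespace

void setup() {
  Serial.begin(9600);
  while (!Serial);

  const tflite::Model* tfl_model = tflite::GetModel(model);
  if (tfl_model->version() != TFLITE_SCHEMA_VERSION) {
    Serial.println("Model schema version mismatch!");
    while (1);
  }

  static tflite::MicroInterpreter static_interpreter(
      tfl_model, resolver, tensor_arena, kTensorArenaSize,
      &micro_error_reporter);
  interpreter = &static_interpreter;

  interpreter->AllocateTensors();  // carve the tensors out of the arena
  input = interpreter->input(0);
  output = interpreter->output(0);
}

void loop() {
  // ... fill input->data.f[] with a window of normalised IMU readings ...

  if (interpreter->Invoke() == kTfLiteOk) {
    // the output tensor holds one score per gesture class
    for (int i = 0; i < output->dims->data[1]; i++) {
      Serial.println(output->data.f[i], 6);
    }
  }
}
```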

The really interesting thing here is that the new demo runs entirely on the Arduino board. That means all you now need to get started with machine learning on the Arduino is a Nano 33 BLE Sense board and a micro USB cable. That really lowers the barrier to entry.

But that isn’t where things end, because as well as the new gesture recognition example, there is also a full ‘person’ detection example built for the Arduino Nano 33 BLE Sense and the Arducam 2MP SPI camera board.

But be aware that, while it is a fully working vision example running locally on a micro-controller, it’s going to be limited. The vision model for person detection is relatively large compared to the voice and gesture models, so it takes a long time to run, with each frame taking 19 seconds to process.

Yes, that’s 19 seconds per frame, not 19 frames per second.
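To put that number in context, the heart of the vision example is conceptually just the loop sketched below. It assumes interpreter plumbing along the lines of the gesture sketch above, but loaded with the person detection model and its 96x96 greyscale input; ReadArducamFrame() is a hypothetical stand-in for the example’s Arducam capture code, and the output tensor indices are assumptions for illustration.

```cpp
// Hypothetical camera helper: fills a buffer with a width x height,
// 8-bit greyscale frame captured from the Arducam module.
void ReadArducamFrame(uint8_t* buffer, int width, int height);

void loop() {
  // copy a 96x96 greyscale frame into the model's input tensor
  ReadArducamFrame(input->data.uint8, 96, 96);

  // this single call is where the ~19 seconds per frame goes
  if (interpreter->Invoke() == kTfLiteOk) {
    uint8_t person_score = output->data.uint8[1];     // index assumed
    uint8_t no_person_score = output->data.uint8[2];  // index assumed
    Serial.print("person: ");
    Serial.print(person_score);
    Serial.print("  no person: ");
    Serial.println(no_person_score);
  }
}
```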

However, while we aren’t seeing performance on par with the new accelerator hardware, that’s still an amazing achievement, and for the right niche uses it’s also going to be really rather useful to have in your toolbox.

More information on today’s release can be found in the guest post on the TensorFlow blog by Sandeep Mistry and Dominic Pajak, who are part of the Arduino development team. While all the example code can be found in the TensorFlow GitHub repo, there is also a walkthrough of the gesture recognition example in the Arduino GitHub repo.

If you want more background on today’s release, there’s also a book written by Pete Warden and Daniel Situnayake, both of whom work on the TensorFlow Lite team at Google, called “TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers,” that’s now available for pre-order and due to be published in the middle of December.

I’m sure we’re going to hear a lot more about this at the upcoming TensorFlow World conference in Santa Clara, CA, happening later this month, where Warden and collaborators will be talking about running TensorFlow on-device. Although if you're interested in slightly higher-powered hardware, I'm also going to be talking about my benchmarking work. Given how things are evolving, it's looking to be an interesting conference.

Today’s release goes a long way towards increasing the accessibility of TensorFlow Lite for Microcontrollers. It also makes the first non-voice models we’ve seen for the platform widely available, along with example tooling that allows all of us to move things forward.

I’m really excited to see what people are going to be doing with these new tools. Things are moving fast, and it’s going to be a fascinating year for embedded machine learning and computing on the edge.

Alasdair Allan
Scientist, author, hacker, maker, and journalist. Building, breaking, and writing. For hire. You can reach me at 📫 alasdair@babilim.co.uk.