
Easy TinyML on ESP32 and Arduino

The easiest way to deploy TensorFlow Lite models onto your ESP32 with just two lines of code.

Eloquent Arduino
5 years ago • Machine Learning & AI

In this post I will show you the easiest way to deploy your TensorFlow Lite model to an ESP32 using the Arduino IDE, without having to compile anything yourself.

So I finally decided to give TinyML a try: it's a way to deploy TensorFlow Lite models to microcontrollers. As a first step, I downloaded the free chapters from the TinyML book website and rapidly skimmed through them.

Let me say that, even though it starts at a "too beginner" level for me (they explain why you need the arrow operator instead of the dot to access a member through a pointer), it is a very well-written book. It covers every single aspect you may encounter during your first steps and gives a very sound introduction to the general topic of training, validating and testing a model on a dataset.

If I go on with this TinyML stuff, I'll probably buy a copy: I strongly recommend that you at least read the free sample.

Once done reading the six free chapters, I wanted to try the tutorial they describe on my ESP32. Sadly, the ESP32 is not among the boards supported by the book, so I had to work it out myself.

In this post I'm going to recap what I learned about the steps you need to follow to deploy a TF model to a microcontroller, and introduce a tiny library I wrote to make the deployment easier in the Arduino IDE: EloquentTinyML.

Building our first model

First of all, we need a model to deploy.

The book guides us through building a neural network capable of predicting the sine of a given number, in the range from 0 to 2π.

It's an easy model to get started (the "hello world" of machine learning, according to the authors), so we'll stick with it.

I won't go into too much detail about generating the data and training the model, because I suppose you already know that part if you want to port TensorFlow to a microcontroller.

Here's the code from the book.

import math
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def get_model():
    SAMPLES = 1000
    np.random.seed(1337)
    x_values = np.random.uniform(low=0, high=2 * math.pi, size=SAMPLES)
    # shuffle and add noise
    np.random.shuffle(x_values)
    y_values = np.sin(x_values)
    y_values += 0.1 * np.random.randn(*y_values.shape)

    # split into train, validation, test
    TRAIN_SPLIT = int(0.6 * SAMPLES)
    TEST_SPLIT = int(0.2 * SAMPLES + TRAIN_SPLIT)
    x_train, x_test, x_validate = np.split(x_values, [TRAIN_SPLIT, TEST_SPLIT])
    y_train, y_test, y_validate = np.split(y_values, [TRAIN_SPLIT, TEST_SPLIT])

    # create a NN with 2 layers of 16 neurons
    model = tf.keras.Sequential()
    model.add(layers.Dense(16, activation='relu', input_shape=(1,)))
    model.add(layers.Dense(16, activation='relu'))
    model.add(layers.Dense(1))
    model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
    model.fit(x_train, y_train, epochs=200, batch_size=16,
              validation_data=(x_validate, y_validate))
    return model

Exporting the model

Now that we have a model, we need to convert it into a form ready to be deployed on our microcontroller: a plain array of bytes that the TensorFlow Lite interpreter will read to recreate the model.

model = get_model()
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_model = converter.convert()

# Save the model to disk
open("sine_model_quantized.tflite", "wb").write(tflite_model)

Then you have to convert it to a C array from the command line.

xxd -i sine_model_quantized.tflite > sine_model_quantized.cc
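
If you haven't seen xxd -i output before, the generated file is just the model dumped as a C array plus its length; it looks something like this (the byte values and the length below are only illustrative):

unsigned char sine_model_quantized_tflite[] = {
  0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33, 0x00, 0x00, 0x0e, 0x00,
  /* ... many more bytes ... */
};
unsigned int sine_model_quantized_tflite_len = 2640;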

This is copy-paste code that would hardly ever change, so, to ease my development cycle, I wrapped this little snippet in a tiny package you can use: it's called tinymlgen.

pip install tinymlgen

from tinymlgen import port

model = get_model()
c_code = port(model, pretty_print=True)
print(c_code)

I point you to the GitHub repo for a couple more options you can configure.

Using this package, you don't have to open a terminal and run the xxd program: just paste the printed code into a header file (for example sine_model.h) in your sketch folder.

Deploying the model

Now it is finally time to deploy the model on our microcontroller.

This part can be tricky, actually, if you don't have one of the boards supported by the book (Arduino Nano 33 BLE Sense, SparkFun Edge or STM32F746G Discovery Kit).

I tried just setting "ESP32" as my target in the Arduino IDE and I got tons of errors.

Luckily for us, Wezley Sherman wrote a tutorial on how to get a TinyML project to compile in the PlatformIO environment, which saved me the effort of fixing all the broken imports on my own.

Since PlatformIO is not what I use in my everyday tinkering, though, once I got the project to compile there I set out to make it compile in the Arduino IDE too.

Fortunately, it was not difficult at all, so I can finally bring you this library that does all the heavy lifting for you.

Thanks to the library, you won't need to download the full TensorFlow Lite framework and compile it on your own machine: it has already been done for you.

As an added bonus, I created a wrapper class that encapsulates all the boring, repetitive stuff, so you can focus solely on the application logic.

Install the library first by cloning it from GitHub into your Arduino libraries folder (usually ~/Arduino/libraries).

git clone https://github.com/eloquentarduino/EloquentTinyML.git

Here is an example of how to use it.

#include "EloquentTinyML.h"
// sine_model.h contains the array you exported from the previous step
// with either xxd or tinymlgen
#include "sine_model.h"

#define NUMBER_OF_INPUTS 1
#define NUMBER_OF_OUTPUTS 1
// in future projects you may need to tweak this value.
// it's a trial and error process
#define TENSOR_ARENA_SIZE 2*1024

Eloquent::TinyML::TfLite<NUMBER_OF_INPUTS, NUMBER_OF_OUTPUTS, TENSOR_ARENA_SIZE> ml(sine_model);

void setup() {
Serial.begin(115200);
}

void loop() {
// pick up a random x and predict its sine
float x = 3.14 * random(100) / 100;
float y = sin(x);
float input[1] = { x };
float predicted = ml.predict(input);

Serial.print("sin(");
Serial.print(x);
Serial.print(") = ");
Serial.print(y);
Serial.print("\t predicted: ");
Serial.println(predicted);
delay(1000);
}
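
If everything went smoothly, the serial monitor should print lines like these (the numbers below are only illustrative, since x is random and your model's predictions will differ slightly):

sin(0.78) = 0.70	 predicted: 0.69
sin(2.51) = 0.59	 predicted: 0.61
sin(1.13) = 0.90	 predicted: 0.91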

Does it look easy to use? I bet so.

For simple cases like this example where you have a single output, the predict method returns that output so you can easily assign it to a variable.

If this is not the case and you expect multiple outputs from your model, you have to declare an output array and pass it as a second argument.

float input[10] = { ... };
float output[5] = { 0 };

ml.predict(input, output);
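
To make that concrete, here is a minimal sketch of what the multi-output flow could look like, assuming a hypothetical model with 10 inputs and 5 outputs (the header name, the sizes and the arena value are made up for illustration):

#include "EloquentTinyML.h"
// hypothetical header exported from your own multi-output model
#include "multi_output_model.h"

// illustrative arena size: tune it for your model
#define TENSOR_ARENA_SIZE 8*1024

// the template parameters must match the model: 10 inputs, 5 outputs
Eloquent::TinyML::TfLite<10, 5, TENSOR_ARENA_SIZE> ml(multi_output_model);

void setup() {
    Serial.begin(115200);
}

void loop() {
    float input[10] = { /* your 10 feature values */ };
    float output[5] = { 0 };

    // predict fills the output array in place
    ml.predict(input, output);

    for (int i = 0; i < 5; i++)
        Serial.println(output[i]);

    delay(1000);
}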

Wrapping up

I hope this post helped you kickstart your next TinyML project on your ESP32.

It will serve as a foundation for the next experiments I'm willing to run on this platform, which is really in its early stages and still needs a lot of investigation into its capabilities.

I plan to do a comparison with my MicroML framework once I get more experience with both, so stay tuned for upcoming updates.

Disclaimer

I tested the library on both Ubuntu 18.04 and Windows 10 64-bit: if you are on a different platform and get compile errors, please let me know in the comments so I can fix them.
