This project was created as a school assignment to show how to work with neural networks on embedded devices using TensorFlow Lite/Micro. It uses a simple trained model to predict temperature and humidity in case of a physical sensor failure, but it can be expanded and updated to work with other trained models.
Thank you
A big thank you goes to a different Hackster project that helped me jumpstart this one. They made a similar solution, just on the ESP32 platform.
Steps
- Collect Data
- Create Model
- Transfer Model
- Create Firmware
- Plot the Data
Collect Data
To start training any model, you need to collect relevant data. For this, you can use the prepared firmware available on my GitHub.
To flash this firmware to your device, you can use HARDWARIO Playground. Just download the firmware binary file and follow the documentation.
To pair the device with the Radio Dongle, go to the Devices tab, connect to the Radio Dongle, click Start Pairing, and reconnect the Core Module to pair it. You can also visit the documentation.
The last thing you need to do is create a Node-RED flow that collects the data from the device into a .csv file. Go to the Functions tab and import the flow using the hamburger menu in the top right corner. You can find the flow in the attachments as Data collection Node-RED flow.
The flow is simple: it creates JSON from the incoming string and then appends it in CSV format to a file on the disk. It also adds the UTC date to the data (needed for this project, but you can remove it in the Change format node).
You can collect the data for a day, a week, or a month; the more data you have, the more accurate your model's predictions are likely to be.
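Under the hood, the collection firmware essentially just measures the values and publishes them over the radio; the Radio Dongle and the Node-RED flow then turn every message into a CSV row. Below is a rough sketch of that idea only; the helper function is hypothetical and the prepared firmware may be structured differently.
// Hypothetical helper: publish one measurement over the radio.
// The Radio Dongle + Node-RED flow turn these messages into CSV rows.
static void publish_measurement(float temperature_celsius, float humidity_percent)
{
    // Channel 0 is used here only as an illustration; the TWR SDK defines
    // named channel constants for the individual sensor modules/tags.
    twr_radio_pub_temperature(0, &temperature_celsius);
    twr_radio_pub_humidity(0, &humidity_percent);
}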
Create Model
You can find all the files associated with this chapter in my repository. There is the data file in CSV format that is used by both Python scripts.
You can run the script with your own data, or just take my already-trained models with the .tflite extension (but they will probably predict strange values, because your home most likely has different temperatures and humidity).
To learn more about the script, you can visit the project that inspired me. It is quite a simple script that just renames some columns, parses the UTC timestamp into parts, trains the model, and converts it to the .tflite format.
Transfer Model
We need to convert the previously created .tflite file into the format accepted by the Core Module.
For this, we will use STM32CubeIDE and the ST X-CUBE-AI expansion package, version 7.3.0.
- Download STM32CubeIDE
- Create New Project for MCU STM32L083CZT6
- Open the .ioc file
- Select Software Packs -> Manage Software Packs -> STMicroelectronics -> X-CUBE-AI -> Artificial Intelligence version 7.3.0
- Select Software Packs -> Select Components -> STMicroelectronics.X-CUBE-AI and select version 7.3.0 (if you see Install next to it, you need to click that as well and install it) -> Expand it -> Expand the Artificial Intelligence X-CUBE-AI -> Check the Core checkbox -> Click OK
- Expand Software Packs in the left menu, click STMicroelectronics, and then confirm the popup (you should see something like this)
- Click the Add network button
- Name your network. I recommend the names network_humidity and network_temperature since the final firmware is prepared for these names
- Change Keras to TFLite
- Click Browse next to Model: and select the downloaded or generated .tflite file for the correct network (humidity or temperature).
- Save the project and select Yes at the popup
- Repeat the last few steps for the second network.
If everything went well, you should see new folders in the project:
- X-CUBE-AI - contains the .c and .h files that we will use for the firmware
- Middlewares - contains all the needed library files (you can ignore this if you build on the premade firmware)
Create Firmware
Once you have the model trained on your data, you are ready to expand the original firmware with the prediction part.
You can clone my firmware again to see how it works. It was built around my data, but it is ready to be used with the AI. Even if you want to use your own data and trained networks, it is still a good idea to start from my firmware, because the CMake configuration is already updated and the AI library is imported and working there.
You need to take all the .c and .h files from the previous step from the X-CUBE-AI/App folder and paste them into the src/model folder in the firmware. You will not need to touch any of those files from now on.
Here I will go over the parts of the firmware that are specific to the neural network.
In application_init (starting at line 256 of the example firmware), we need to include the line __HAL_RCC_CRC_CLK_ENABLE(); for the predictions to work. The rest of the code sets up the date and time for the RTC. This is needed in the example code because the network was trained with the UTC timestamp; if you don't need the time, you can skip the datetime setup.
void application_init(void)
{
    __HAL_RCC_CRC_CLK_ENABLE();

    // You need to set up the time if you trained the model with time
    // (tm_year counts years since 1900, so 123 means 2023)
    struct tm datetime;
    datetime.tm_hour = 12;
    datetime.tm_min = 43;
    datetime.tm_sec = 0;
    datetime.tm_mon = 3;
    datetime.tm_mday = 20;
    datetime.tm_year = 123;
    twr_rtc_set_datetime(&datetime, 0);

    // Initialize both AI networks (these functions are described below)
    AI_Temperature_Init();
    AI_Humidity_Init();

    // Rest of the initialization
}
At the top of the file there are some includes that are needed for the code to work. The first and last includes are static and will be there every time; they import the AI library and allow the use of the __HAL_RCC_CRC_CLK_ENABLE(); line.
The rest of the imports will be different based on the name of the networks you selected in the previous steps.
// AI library import
#include <ai_platform.h>
// Temperature network import
#include <network_temperature.h>
#include <network_temperature_data.h>
// Humidity network import
#include <network_humidity.h>
#include <network_humidity_data.h>
// HAL CRC import
#include <stm32l0xx_hal_crc.h>
This part will also be different based on the names you chose in the previous step. All the prefixes like AI_NETWORK_TEMPERATURE will change with the name of the network.
// All definitions for temperature and humidity AI networks
ai_handle network_temperature;
float aiTemperatureInData[AI_NETWORK_TEMPERATURE_IN_1_SIZE];
float aiTemperatureOutData[AI_NETWORK_TEMPERATURE_OUT_1_SIZE];
ai_u8 activations_temperature[AI_NETWORK_TEMPERATURE_DATA_ACTIVATIONS_SIZE];
ai_buffer *ai_temperature_input;
ai_buffer *ai_temperature_output;
static void AI_Temperature_Init(void);
static void AI_Temperature_Run(float *pIn, float *pOut);
ai_handle network_humidity;
float aiHumidityInData[AI_NETWORK_HUMIDITY_IN_1_SIZE];
float aiHumidityOutData[AI_NETWORK_HUMIDITY_OUT_1_SIZE];
ai_u8 activations_humidity[AI_NETWORK_HUMIDITY_DATA_ACTIVATIONS_SIZE];
ai_buffer *ai_humidity_input;
ai_buffer *ai_humidity_output;
static void AI_Humidity_Init(void);
static void AI_Humidity_Run(float *pIn, float *pOut);
These two functions are created for each network that you work with. The Init function initializes the AI; it is called once from the application_init() function.
The second function is the actual run of the network over some inputs. We will go over when to run it in the next step.
// Initialization of temperature network
static void AI_Temperature_Init(void)
{
    ai_error err;

    /* Create a local array with the addresses of the activations buffers */
    const ai_handle act_addr[] = {activations_temperature};

    /* Create an instance of the model */
    err = ai_network_temperature_create_and_init(&network_temperature, act_addr, NULL);
    if (err.type != AI_ERROR_NONE)
    {
        twr_log_error("ai_network_create_and_init error - type=%d code=%d", err.type, err.code);
        return;
    }

    ai_temperature_input = ai_network_temperature_inputs_get(network_temperature, NULL);
    ai_temperature_output = ai_network_temperature_outputs_get(network_temperature, NULL);
}

// Run the temperature network over the inputs and save the output
static void AI_Temperature_Run(float *pIn, float *pOut)
{
    ai_i32 batch;
    ai_error err;

    /* Update IO handlers with the data payload */
    ai_temperature_input[0].data = AI_HANDLE_PTR(pIn);
    ai_temperature_output[0].data = AI_HANDLE_PTR(pOut);

    batch = ai_network_temperature_run(network_temperature, ai_temperature_input, ai_temperature_output);
    if (batch != 1)
    {
        err = ai_network_temperature_get_error(network_temperature);
        twr_log_error("AI ai_network_run error - type=%d code=%d", err.type, err.code);
        return;
    }
}
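The humidity network gets the same pair of functions. Below is a minimal sketch of them, assuming the generator produced ai_network_humidity_* functions matching the network_humidity name chosen earlier; check the generated network_humidity.h for the exact identifiers.
// Initialization of humidity network (mirror of the temperature version)
static void AI_Humidity_Init(void)
{
    ai_error err;

    /* Create a local array with the addresses of the activations buffers */
    const ai_handle act_addr[] = {activations_humidity};

    /* Create an instance of the model */
    err = ai_network_humidity_create_and_init(&network_humidity, act_addr, NULL);
    if (err.type != AI_ERROR_NONE)
    {
        twr_log_error("ai_network_create_and_init error - type=%d code=%d", err.type, err.code);
        return;
    }

    ai_humidity_input = ai_network_humidity_inputs_get(network_humidity, NULL);
    ai_humidity_output = ai_network_humidity_outputs_get(network_humidity, NULL);
}

// Run the humidity network over the inputs and save the output
static void AI_Humidity_Run(float *pIn, float *pOut)
{
    ai_i32 batch;
    ai_error err;

    /* Update IO handlers with the data payload */
    ai_humidity_input[0].data = AI_HANDLE_PTR(pIn);
    ai_humidity_output[0].data = AI_HANDLE_PTR(pOut);

    batch = ai_network_humidity_run(network_humidity, ai_humidity_input, ai_humidity_output);
    if (batch != 1)
    {
        err = ai_network_humidity_get_error(network_humidity);
        twr_log_error("AI ai_network_run error - type=%d code=%d", err.type, err.code);
        return;
    }
}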
AI_Temperature_Run is the actual run of the network over the input data. First, you need to put the input data into the input array. The number of inputs depends on the data you trained the model with. For this example, I used these inputs:
- Year
- Month
- Day
- Hour
- Minute
- Second
- Last Measured Temperature
- Last Measured Humidity
We predict the current temperature based on the last measurements. The datetime is there just for this specific project.
Next, you need to call the Run function.
Lastly, you get the predicted temperature from the output array at the first index.
twr_rtc_get_datetime(&datetime);
int year = datetime.tm_year + 1900;
// Put all the needed data into the input array (based on what you trained the model with)
((ai_float *)aiTemperatureInData)[0] = (ai_float)year;
((ai_float *)aiTemperatureInData)[1] = (ai_float)datetime.tm_mon;
((ai_float *)aiTemperatureInData)[2] = (ai_float)datetime.tm_mday;
((ai_float *)aiTemperatureInData)[3] = (ai_float)datetime.tm_hour;
((ai_float *)aiTemperatureInData)[4] = (ai_float)datetime.tm_min;
((ai_float *)aiTemperatureInData)[5] = (ai_float)datetime.tm_sec;
((ai_float *)aiTemperatureInData)[6] = (ai_float)lastTemperature;
((ai_float *)aiTemperatureInData)[7] = (ai_float)lastHumidity;
// Run the model
AI_Temperature_Run(aiTemperatureInData, aiTemperatureOutData);
// Get the output
float predicted_temperature = ((ai_float *)aiTemperatureOutData)[0];
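The humidity prediction works the same way. Here is a minimal sketch, assuming the humidity model was trained with the same eight inputs as the temperature model:
// Fill the humidity inputs the same way (assumes the humidity model was
// trained with the same eight features as the temperature model)
((ai_float *)aiHumidityInData)[0] = (ai_float)year;
((ai_float *)aiHumidityInData)[1] = (ai_float)datetime.tm_mon;
((ai_float *)aiHumidityInData)[2] = (ai_float)datetime.tm_mday;
((ai_float *)aiHumidityInData)[3] = (ai_float)datetime.tm_hour;
((ai_float *)aiHumidityInData)[4] = (ai_float)datetime.tm_min;
((ai_float *)aiHumidityInData)[5] = (ai_float)datetime.tm_sec;
((ai_float *)aiHumidityInData)[6] = (ai_float)lastTemperature;
((ai_float *)aiHumidityInData)[7] = (ai_float)lastHumidity;

// Run the humidity model and read its single output
AI_Humidity_Run(aiHumidityInData, aiHumidityOutData);
float predicted_humidity = ((ai_float *)aiHumidityOutData)[0];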
Plot the Data
The example firmware predicts the values and sends them in another message over the radio together with the measured data. You can plot both of these series in one chart to see how good the model is.
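How exactly the example firmware publishes the values is best seen in the repository. As an illustration only, here is a minimal sketch using the TWR SDK's generic float publish; the subtopic names are made up, not the ones the example firmware uses.
// Hypothetical sketch: publish the measured and the predicted temperature so
// both can be logged and plotted side by side (subtopic names are illustrative)
float measured_temperature = (float)lastTemperature;
twr_radio_pub_float("temperature/measured", &measured_temperature);
twr_radio_pub_float("temperature/predicted", &predicted_temperature);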
With this skeleton project, you should be able to update the code and make your own predictor with the HARDWARIO Core Module. Just follow these few steps:
Collect your data, create the model, convert it to the .c and .h files with STM32CubeIDE, and create the firmware.
If anything goes wrong, feel free to ask in the comments.