In this guide, you will learn how to create a motion-sensing application that recognizes human activities using machine learning on Hexabitz modules.
The model classifies activities such as stationary, walking, or running from accelerometer data provided by the H0BR4x (IMU module).
We will be creating a Human Activity Recognition (HAR) application for the new Hexabitz module H41R6x.
The module is powered by an STM32F413VHT6 microcontroller (32-bit ARM Cortex-M4 with FPU (floating-point unit), ART (adaptive real-time accelerator), MPU (memory protection unit), and DSP instructions).
First, we need to read the accelerometer data from the H0BR4x (IMU module) by sending a message to it; the module responds with a 12-byte message containing the accelerometer values for the X, Y, and Z axes (4 bytes per axis).
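The bytesToFloat() helper used later in this guide is not listed here; below is a minimal sketch of what such a helper might look like, assuming each axis is packed as a 4-byte IEEE-754 float in little-endian order (the actual byte order depends on the H0BR4x firmware):
#include <stdint.h>
#include <string.h>

/* Minimal sketch: rebuild a float from four received bytes (b0 = least-significant byte).
 * Assumes the H0BR4x packs each axis as a little-endian IEEE-754 float. */
static float bytesToFloat(uint8_t b0, uint8_t b1, uint8_t b2, uint8_t b3)
{
  float value;
  uint8_t bytes[4] = { b0, b1, b2, b3 };
  memcpy(&value, bytes, sizeof(value));   /* memcpy avoids strict-aliasing issues */
  return value;
}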
Second, we will use a Keras model trained on a small dataset created specifically for this example. Either download the pre-trained model.h5 or train your own model on your own data captures.
- Instructions on how to create and train your own model can be found in the following Python Notebook: https://colab.research.google.com/github/STMicroelectronics/stm32ai/blob/master/AI_resources/HAR/Human_Activity_Recognition.ipynb
- dataset.zip is a ready-to-use dataset of 3-axis acceleration data for various human activities.
1. Add the X-CUBE-AI component: Back in STM32CubeIDE, go back to the Software Components selection and add the X-CUBE-AI Core component to include the library and code generation options:
- Use the Artificial Intelligence filter.
- Enable Core.
- Click OK.
- Next, configure the X-CUBE-AI component to use your Keras model:
- Expand Additional Software to select STMicroelectronics.X-CUBE-AI.7.0.0
- Check to make sure the X-CUBE-AI component is selected.
- Click Add network.
- Change the Network Type to Keras.
- Browse to select the model.
- (optional) Click Analyze to view the model complexity and memory footprint (flash and RAM occupation).
- Save the project or select Project > Generate Code to generate the code.
2. Add the header files: Add both the Cube.AI runtime interface header file (ai_platform.h) and the model-specific header files generated by Cube.AI (network.h and network_data.h).
/* Includes ------------------------------------------------------------------*/
#include "ai_platform.h"
#include "network.h"
#include "network_data.h"
3. Declare neural-network buffers: With the default generation options, three additional buffers must be allocated: the activations, input, and output buffers. The activations buffer is a private memory space for the Cube.AI runtime; during inference, it is used to store intermediate results. Declare the neural-network input and output buffers (aiInData and aiOutData). The corresponding output labels must also be defined in the activities array.
ai_handle network;                                      /* Cube.AI network instance handle   */
float aiInData[AI_NETWORK_IN_1_SIZE];                   /* input buffer (accelerometer data) */
float aiOutData[AI_NETWORK_OUT_1_SIZE];                 /* output buffer (class scores)      */
uint8_t activations[AI_NETWORK_DATA_ACTIVATIONS_SIZE];  /* Cube.AI runtime working memory    */
char* activities[AI_NETWORK_OUT_1_SIZE] = { "stationary", "walking", "running" };
4. Add AI bootstrapping functions: In the list of function prototypes, add the following declarations:
static void AI_Init(ai_handle w_addr, ai_handle act_addr);
static void AI_Run(float *pIn, float *pOut);
static uint32_t argmax(const float * values, uint32_t len);
Then add the following code snippets, which use the STM32Cube.AI library for models with float32 input and output:
static void AI_Init(ai_handle w_addr, ai_handle act_addr)
{
  ai_error err;

  /* 1 - Create an instance of the model */
  err = ai_network_create(&network, AI_NETWORK_DATA_CONFIG);
  if (err.type != AI_ERROR_NONE) {
    printf("ai_network_create error - type=%d code=%d\r\n", err.type, err.code);
    Error_Handler();
  }

  /* 2 - Initialize the instance */
  const ai_network_params params = AI_NETWORK_PARAMS_INIT(
      AI_NETWORK_DATA_WEIGHTS(w_addr),
      AI_NETWORK_DATA_ACTIVATIONS(act_addr)
  );

  if (!ai_network_init(network, &params)) {
    err = ai_network_get_error(network);
    printf("ai_network_init error - type=%d code=%d\r\n", err.type, err.code);
    Error_Handler();
  }
}
static void AI_Run(float *pIn, float *pOut)
{
  ai_i32 batch;
  ai_error err;

  /* 1 - Create the AI buffer IO handlers with the default definition */
  ai_buffer ai_input[AI_NETWORK_IN_NUM] = AI_NETWORK_IN;
  ai_buffer ai_output[AI_NETWORK_OUT_NUM] = AI_NETWORK_OUT;

  /* 2 - Update IO handlers with the data payload */
  ai_input[0].n_batches = 1;
  ai_input[0].data = AI_HANDLE_PTR(pIn);
  ai_output[0].n_batches = 1;
  ai_output[0].data = AI_HANDLE_PTR(pOut);

  batch = ai_network_run(network, ai_input, ai_output);
  if (batch != 1) {
    err = ai_network_get_error(network);
    printf("AI ai_network_run error - type=%d code=%d\r\n", err.type, err.code);
    Error_Handler();
  }
}
5. Create an argmax function: Add an argmax() helper that returns the index of the highest-scoring output.
static uint32_t argmax(const float * values, uint32_t len)
{
  float max_value = values[0];
  uint32_t max_index = 0;

  for (uint32_t i = 1; i < len; i++) {
    if (values[i] > max_value) {
      max_value = values[i];
      max_index = i;
    }
  }
  return max_index;
}
6. Call the previously implemented AI_Init() function: In main(), add the AI_Init() call after BOS_init():
AI_Init(ai_network_data_weights_get(), activations);
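For context, a heavily simplified sketch of where this call might sit is shown below; the main() generated for your Hexabitz project will differ, and only the ordering matters (AI_Init() is called once, after BOS_init(), before the inference loop runs):
/* Illustrative sketch only - the generated main() in your project will differ. */
int main(void)
{
  /* ... HAL and Hexabitz module initialization generated by the tools ... */
  BOS_init();                                            /* start BitzOS */
  AI_Init(ai_network_data_weights_get(), activations);   /* bootstrap the Cube.AI network once */

  /* ... start the user task; UserTask() below runs the acquisition and inference loop ... */
  for (;;) { }
}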
7. Update the main loop: Finally, put everything together with the following changes to the UserTask() loop. Each iteration collects one window of accelerometer samples (one X/Y/Z triplet every 10 ms until AI_NETWORK_IN_1_SIZE values are accumulated), normalizes them, runs an inference, and prints the detected activity:
void UserTask(void *argument) {
  /* Infinite loop */
  for (;;) {
    /* Collect accelerometer samples until the input window is full,
     * normalize the data to [-1; 1] and accumulate it into the input buffer.
     * Note: window overlapping can be managed here. */
    while (write_index < AI_NETWORK_IN_1_SIZE) {
      /* Request the accelerometer values from the H0BR4x module */
      messageParams[0] = 4;
      messageParams[1] = 1;
      SendMessageToModule(1, 551, 2);

      /* Receive the 12-byte response (3 x 4-byte floats: X, Y, Z) */
      HAL_UART_Receive_DMA(&huart4, (uint8_t*) &array_temp[0], 12);
      x = bytesToFloat(array_temp[0], array_temp[1], array_temp[2], array_temp[3]);
      y = bytesToFloat(array_temp[4], array_temp[5], array_temp[6], array_temp[7]);
      z = bytesToFloat(array_temp[8], array_temp[9], array_temp[10], array_temp[11]);

      /* Convert to mg and scale to [-1; 1] (full scale assumed to be 4000 mg) */
      x = x * 1000;
      y = y * 1000;
      z = z * 1000;
      aiInData[write_index + 0] = (float) (x) / 4000.0f;
      aiInData[write_index + 1] = (float) (y) / 4000.0f;
      aiInData[write_index + 2] = (float) (z) / 4000.0f;

      memset(array_temp, 0, 12);
      x = 0;
      y = 0;
      z = 0;
      write_index = write_index + 3;
      Delay_ms(10);
    }

    /* Run the inference on the full window and report the detected activity */
    if (write_index == AI_NETWORK_IN_1_SIZE) {
      write_index = 0;
      AI_Run(aiInData, aiOutData);
      class = argmax(aiOutData, AI_NETWORK_OUT_1_SIZE);
      memset(aiInData, 0, sizeof(aiInData));
      memset(aiOutData, 0, sizeof(aiOutData));

      if (class == 0)
        HAL_UART_Transmit(&huart2, (uint8_t*) "stationary\r\n", sizeof("stationary\r\n"), 100);
      else if (class == 1)
        HAL_UART_Transmit(&huart2, (uint8_t*) "walking\r\n", sizeof("walking\r\n"), 100);
      else
        HAL_UART_Transmit(&huart2, (uint8_t*) "running\r\n", sizeof("running\r\n"), 100);
    }
  }
}
8. Compile, download, and run: You can now compile, download, and run your project to test the application with live sensor data. Try moving the board around at different speeds to simulate human activities; a sketch of how the activity code might be forwarded to H23R0x follows the list below.
- At idle, when the board is at rest, "1" is sent to the H23R0x (Bluetooth module) and the application should display “stationary”.
- If you move the board up and down slowly to moderately fast, "2" is sent to H23R0x and the application should display “walking”.
- If you shake the board quickly, "3" is sent to H23R0x and the application should display “running”.
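If you want to forward the activity code to H23R0x from the firmware, the helper below is a hedged sketch reusing the messageParams[]/SendMessageToModule() pattern shown earlier; BLUETOOTH_MODULE_ID and CODE_H23R0_SEND_DATA are hypothetical placeholders that must be replaced with the module ID from your topology and the actual H23R0x message code:
/* Hedged sketch: forward the detected activity to the H23R0x Bluetooth module.
 * BLUETOOTH_MODULE_ID and CODE_H23R0_SEND_DATA are hypothetical placeholders -
 * replace them with your topology's module ID and the real H23R0x message code. */
static void ReportActivity(uint32_t class_index)
{
  messageParams[0] = (uint8_t) ('1' + class_index);   /* '1' = stationary, '2' = walking, '3' = running */
  SendMessageToModule(BLUETOOTH_MODULE_ID, CODE_H23R0_SEND_DATA, 1);
}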