Smart Palette is a project that brings Machine Learning to the edge with a practical application. Logistics companies transport sensitive goods such as TVs and monitors. These goods are transported on a unit called a pallet, referred to in this project as the Palette. In this project, we use sensors and Machine Learning to determine whether the Palette is handled as expected. The classification tells us whether the current state of the Palette is Flat, Inverted, Moving, or Shock (falling).
What have I made?
I have designed a system that can classify the four different states of a Palette based on data acquired from the accelerometer and gyroscope.
Below is the figure that represents the overall architecture of the system.
- CY8CKIT-028-SENSE is the IoT Sense expansion kit that carries sensors such as an accelerometer, gyroscope, and barometer, along with a display.
- CY8CKIT-062S2-43012 is the processing board that gathers and processes the data.
Hardware connections for this project are very simple. The IoT Sense Expansion board is stacked on top of the PSoC6 development board.
These are the steps that I used to achieve the end goal: Classification of the sensor data.
- Data collection: Collect data from the accelerometer and gyroscope. A Qt TCP server is used to store the data in files.
- Model training: Edge Impulse is used to train the model and export it as a C++ SDK.
- Classification: The Edge Impulse SDK is deployed on the PSoC6.
These steps are described in detail below.
Data Collection
Accelerometer and gyroscope data (3-axis each) is collected over Wi-Fi. The CY8CKIT-062S2-43012 has a dual-band (2.4 GHz and 5 GHz) Wi-Fi module, which is used for the data collection. The main project is based on the AnyCloud: TCP Client example project in ModusToolbox.
I have designed a TCP server in Qt that collects the data in one-second chunks and stores it in files.
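For orientation, a stripped-down Qt TCP server along these lines might look like the sketch below. This is only a minimal sketch with illustrative names; the actual server in the repository (Programs/Server) additionally parses the incoming structs, writes proper CSV rows, and prints the data-rate statistics shown in the log further down. It assumes the Qt network module is enabled in the .pro file (QT += network).

// Minimal sketch of a Qt TCP server that listens on port 8085 and dumps
// incoming bytes into a timestamped file. Illustrative only.
#include <QCoreApplication>
#include <QTcpServer>
#include <QTcpSocket>
#include <QHostAddress>
#include <QFile>
#include <QDateTime>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QTcpServer server;
    if (!server.listen(QHostAddress::Any, 8085)) {   // port used in the log output below
        qWarning("Could not start the TCP server");
        return 1;
    }

    QObject::connect(&server, &QTcpServer::newConnection, [&server]() {
        QTcpSocket *socket = server.nextPendingConnection();

        // One file per connection, named with the current timestamp
        QFile *file = new QFile(QString("_accelGyro_")
            + QDateTime::currentDateTime().toString("yyyy-MM-dd_HH.mm.ss")
            + ".csv", socket);
        file->open(QIODevice::WriteOnly);

        QObject::connect(socket, &QTcpSocket::readyRead, [socket, file]() {
            file->write(socket->readAll());   // raw dump; the real server decodes the structs
        });
    });

    return app.exec();
}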
Steps to collect the data:
- Use the ModusToolbox Eclipse IDE to build and download the executable to the board. The project code can be found in the code section. Before building the project, open the Makefile and set the DATA_COLLECTION flag: DEFINES+=DATA_COLLECTION=1.
- To build the TCP server, Qt has to be installed; it can be downloaded by following this link. The Git branch should be switched from master to Hackathon. Pay attention to the data classes that are used to transport the data; they can be found in Programs/IMU/Imu.h. These structs may need to be adapted if the embedded application part (ModusToolbox IDE) changes; see the sketch after the path snippet below.
path = QString("/home/navin/AccelData") + QDir::separator() + "_accelGyro_"
+ QDateTime::currentDateTime().toString("yyyy-MM-dd_HH.mm.ss") + ".csv";
The string above needs to be modified in Programs/Server/TcpHandler.cpp to set a custom path for storing the log files.
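For reference, the transport struct is typically just a packed bundle of the raw 16-bit axes. The layout below is a hypothetical sketch (the field names are mine, not taken from Imu.h); whatever the real definition is, it must stay byte-identical between the firmware and the Qt server, otherwise the logged values will be garbage.

/* Hypothetical sketch of a per-sample IMU struct; the real definition
 * lives in Programs/IMU/Imu.h and may differ. */
#include <stdint.h>

typedef struct __attribute__((packed))
{
    int16_t accel_x;   /* accelerometer, 3 axes */
    int16_t accel_y;
    int16_t accel_z;
    int16_t gyro_x;    /* gyroscope, 3 axes */
    int16_t gyro_y;
    int16_t gyro_z;
} imu_sample_t;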
- Before building the embedded program (ModusToolbox IDE), the Wi-Fi and server-related parameters can be configured in tcp_client.c of the Palette project.
- Once the correct parameters are configured, build the code in ModusToolbox and flash it onto the PSoC development board. At the same time, start the Qt TCP server. I have also built in a serial terminal for easy debugging.
15:21:25.612 - [INFO] : [TcpServer] Mic server started on port: 8085
15:21:25.619 - [INFO] : [Serial] Could not connect to TCP server. Error code: 0x082a000b
15:21:25.62 - [INFO] : [Serial] Trying to reconnect to TCP server... Please check if the server is listening
15:21:25.624 - [INFO] : [Serial] Could not connect to TCP server. Error code: 0x082a000b
15:21:25.631 - [INFO] : [Serial] Trying to reconnect to TCP server... Please check if the server is listening
15:21:25.636 - [INFO] : [Serial] Exceeded maximum connection attempts to the TCP server
15:21:25.639 - [INFO] : [TcpServer] Incoming TCP connection..
15:21:25.641 - [INFO] : [Serial] Failed to connect to TCP server.
15:21:25.641 - [INFO] : [Serial] Connect to TCP server
15:21:25.646 - [INFO] : [Serial] Connecting to TCP Server (IP Address: 192.168.8.100, Port: 8085)
15:21:25.652 - [INFO] : [Serial]
15:21:25.652 - [INFO] : [Serial] ============================================================
15:21:25.655 - [INFO] : [Serial] Connected to TCP server
15:21:26.562 - [INFO] : [TcpHandler] ------------------------------
15:21:26.562 - [INFO] : [TcpHandler] Data rate: 51 kB/sec
15:21:26.563 - [INFO] : [TcpHandler] Packet rate: 41/sec
15:21:27.512 - [INFO] : [TcpHandler] ------------------------------
15:21:27.513 - [INFO] : [TcpHandler] Data rate: 26 kB/sec
15:21:27.513 - [INFO] : [TcpHandler] Packet rate: 21/sec
15:21:28.462 - [INFO] : [TcpHandler] ------------------------------
15:21:28.463 - [INFO] : [TcpHandler] Data rate: 18 kB/sec
15:21:28.463 - [INFO] : [TcpHandler] Packet rate: 15/sec
15:21:29.413 - [INFO] : [TcpHandler] ------------------------------
15:21:29.413 - [INFO] : [TcpHandler] Data rate: 14 kB/sec
15:21:29.413 - [INFO] : [TcpHandler] Packet rate: 11/sec
- Data will be saved in the folder previously specified.
Once the data is collected, we use Edge Impulse to train a classification model.
Model Training and Testing
In this project, Edge Impulse is used as the tool to train and deploy a Machine Learning (ML) model. Every ML model needs data, which we collected in the previous step. In this stage, we use that data to train an ML model. The Edge Impulse getting-started docs can be found here. The Edge Impulse project is also publicly available at this link.
After following the getting started guide and creating a project, we can upload the data manually under the Data acquisition tab.
During the upload process, a label can be assigned to the data; it can also be modified later. Once the data is uploaded, it can be visualized by clicking on a sample.
Once the data is uploaded and labeled, an impulse can be created as shown in the figure below. There are a few options provided when selecting a DSP block or a Neural Network (NN) model; the choice depends on the data and the required performance.
After generating the features from the data, the NN model can be adjusted to achieve the desired performance. In the figure below, the model accuracy and device-related parameters can be seen.
Although I tuned the model by adding more layers and achieved 100% accuracy, those models turned out to consume a lot of memory and processing time. Therefore, I compromised on accuracy rather than aiming for a perfect score. This compromise has consequences later when the model is deployed; for example, in the figure below the accuracy is just 80%. To improve the model's resilience, more data can be collected.
The model can be exported as a C++ library by clicking on the Deployment tab.
Classification
Now that we have downloaded the C++ Edge Impulse SDK, it is time to deploy and test it on the device. To integrate the C++ SDK into the embedded project, I followed this link.
Ensure that the DATA_COLLECTION flag is set to 0 in the Makefile. Otherwise, the project still builds for data collection.
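As an illustration of what the flag does, the application can create either the data-streaming task or the classification task at startup depending on DATA_COLLECTION. The snippet below is a hypothetical sketch (the helper function, task names, and stack sizes are illustrative), not the project's exact start-up code.

#include "FreeRTOS.h"
#include "task.h"

void tcp_client_task(void *arg);   /* streams raw IMU data to the Qt server */
void sensor_task(void *arg);       /* runs the Edge Impulse classifier      */

static void create_app_task(void)
{
#if defined(DATA_COLLECTION) && (DATA_COLLECTION == 1)
    /* Data collection build: stream samples over Wi-Fi */
    xTaskCreate(tcp_client_task, "TCP Client", 4096, NULL, 1, NULL);
#else
    /* Classification build: classify on the device */
    xTaskCreate(sensor_task, "Sensor Task", 4096, NULL, 1, NULL);
#endif
}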
Most of the code is the same as in the data collection project, except that the sensor task is now enabled.
void sensor_task(void *arg)
{
    (void)arg;
    ei_impulse_result_t ei_result = { 0 };
    int total_length = 300;

    printf("Sensor Task started\r\n");
    sensor_data_t current_sensorData;

    for(;;)
    {
        /* Wait until there are 64 samples from the accelerometer and the
         * gyroscope in the circular buffer */
        xQueueReceive(qSensorData, &current_sensorData, portMAX_DELAY);
        m_pSensorData = current_sensorData.pData;

        /* Convert the raw int16 samples to float for the classifier */
        convert_int16_to_float();

        /* Run the Edge Impulse classifier on the buffered window */
        ei_c_wrapper_run_classifier(
            total_length,
            &get_signal_data,
            &ei_result,
            false);

        printf("\r\n=========================================\r\n");

        /* Print the score of each class to the serial terminal */
        for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++)
        {
            printf("%s:\t\t%0.4f\r\n",
                   ei_result.classification[ix].label,
                   ei_result.classification[ix].value);
        }
    }
}
Above is the code snippet of the function that waits for the sensor data and prints the results to the serial terminal.
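The helper functions get_signal_data and ei_c_wrapper_run_classifier are not shown above. They wrap the C++ Edge Impulse SDK so that it can be called from the C sensor task. Below is a minimal sketch of what such a wrapper can look like, assuming the converted float samples live in a buffer I call m_fSensorData here; the names and buffer handling in the actual project may differ.

/* Hypothetical sketch of a thin C-callable wrapper around the Edge Impulse
 * C++ SDK (compiled as C++). Buffer and file names are illustrative. */
#include <cstring>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

/* Float buffer filled by convert_int16_to_float() */
extern float m_fSensorData[];

/* Callback the SDK uses to pull samples out of the buffer */
extern "C" int get_signal_data(size_t offset, size_t length, float *out_ptr)
{
    memcpy(out_ptr, m_fSensorData + offset, length * sizeof(float));
    return EIDSP_OK;
}

/* C entry point: build a signal_t and run the impulse */
extern "C" int ei_c_wrapper_run_classifier(size_t total_length,
                                           int (*get_data)(size_t, size_t, float *),
                                           ei_impulse_result_t *result,
                                           bool debug)
{
    signal_t signal;
    signal.total_length = total_length;
    signal.get_data = get_data;

    return run_classifier(&signal, result, debug);
}

In the sensor task shown earlier, total_length is 300 and &get_signal_data is passed in, so the two pieces fit together as a single call into the SDK.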
Results
I'm very happy with the results. The classification outcome is nearly as expected. It could have been better if the model had been trained with more varied samples or if a different model had been designed to extract more features.
Conclusion
I would really like to thank Hackster.io and Infineon for providing the hardware and the opportunity to participate in the competition. I really enjoyed working with the board and developing a custom solution to identify different states based on the sensor data.
Future work:
- Add more data and fine-tune the model.
- Store the results of the classification in an EEPROM or Flash during transportation.
- Use BLE or Wi-Fi to transmit the stored data.