With the latest advances in technology, it has become feasible to run Machine Learning (ML) models on the edge, without sending the data to big backend servers for processing. On the edge we can deploy the model not only on a Single Board Computer (SBC) but also on a low-cost microcontroller.
Scope
In this project we demonstrate how to use an ML model to recognize faces and control a DC lock, with everything taking place on the edge.
How the Project Works
Using Edge Impulse we built an ML model that can recognize a specific face. We run this model on the Vision Board, which opens the lock when it recognizes the correct face.
RT-Thread Vision Board
The Vision Board is based on the Renesas RA8D1 microcontroller with an Arm Cortex-M85 core running at 480 MHz. The board can be programmed in MicroPython using the OpenMV IDE. To use the MicroPython interpreter you first have to flash the board with the example project provided by RT-Thread; you can find this example in RT-Thread Studio, and it can be compiled and flashed without any modification.
Now you will be able to use the board as an OpenMV board and program it with MicroPython in the OpenMV IDE.
The current board firmware that enables MicroPython only allows controlling the RGB LED, so in order to control other GPIO pins we need to modify the RT-Thread example to add another pin. For simplicity I configured the pin that connects to the DC lock as a fourth pin, since the DC lock is controlled by a HIGH/LOW signal. The code for this change can be found in the code section.
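Once the firmware exposes the extra pin, the MicroPython side is a simple digital output. The following is a minimal sketch, not the project's actual code: the pin name "P1" is a placeholder, and the real name depends on how you mapped the fourth pin in the modified RT-Thread example.

```python
# Minimal sketch (MicroPython, runs on the Vision Board).
# "P1" is a placeholder pin name; use whatever name your
# modified RT-Thread firmware exposes for the lock output.
import time
from machine import Pin

lock = Pin("P1", Pin.OUT)   # configure the lock pin as a digital output

def open_lock(hold_ms=3000):
    lock.value(1)           # HIGH releases the DC lock
    time.sleep_ms(hold_ms)  # keep it open for a few seconds
    lock.value(0)           # LOW locks it again
```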
Machine Learning Pipeline
Developing a machine learning application requires multiple steps to be performed; the following diagram shows the mandatory steps needed to have a working machine learning application.
- Data Collection: Developing a machine learning application always starts with collecting the data that will be used to teach the model about the problem we want to solve. For example, if we are building a model to detect spam emails, we need a large set of email samples, some of them normal emails and the others spam.
- Training: This is the process of teaching the model using the data from the previous step; put simply, we give the model examples and let it learn from them.
- Deployment: This is the process of installing our model, the final step before we can use it, whether we deploy it on a powerful server or on an edge device running on a battery.
- Testing: From here on we can use our ML application, but before the first real use we need to try it out to know the level of accuracy we have achieved with the model.
Our dataset in this application consists of images with two labels (categories). The first set contains images of the valid face that will open the lock, and the second set contains faces that aren't considered valid.
For the first set, with the valid face, we automated the process by writing a Python script that runs on a PC to capture images and crop each one so it contains the face without much extra information; this sped up the process considerably. (The script can be found in the code section.)
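The capture script itself lives in the code section, but the cropping step it performs can be sketched in plain Python. This is a hedged sketch, not the original script: it assumes a face detector (such as an OpenCV Haar cascade) has already returned a bounding box, and it only computes the expanded crop rectangle, clamped to the frame, so the saved image keeps the whole face with little background.

```python
def crop_box(face, frame_w, frame_h, margin=0.25):
    """Expand a detected face box (x, y, w, h) by `margin` on each side
    and clamp the result to the frame bounds.

    Returns the crop rectangle as (x0, y0, x1, y1)."""
    x, y, w, h = face
    dx, dy = int(w * margin), int(h * margin)
    x0 = max(0, x - dx)
    y0 = max(0, y - dy)
    x1 = min(frame_w, x + w + dx)
    y1 = min(frame_h, y + h + dy)
    return x0, y0, x1, y1

# Example: a 100x100 face detected at (50, 40) in a 640x480 frame
print(crop_box((50, 40, 100, 100), 640, 480))  # -> (25, 15, 175, 165)
```

In the real script, the resulting rectangle would be used to slice the camera frame before saving it as a training image.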
For the second set of data we used random faces that we got from Kaggle LINK .
Train the ML Model
With Edge Impulse we can use the image data we prepared in the previous step to train our model. After training, the model's accuracy exceeds 84%.
With the help of Edge Impulse we can test our model virtually, without using the actual hardware. The results can of course differ from testing on the real hardware, but it gives us an indication of the model's performance and whether we need to adjust and retrain our model to increase accuracy.
With Edge Impulse we can export the model as an OpenMV library, which generates three files:
- .tflite file, which represents the model itself
- .txt file, which contains all the labels used by the model
- .py file, which is sample code that can be used directly in the OpenMV IDE; this code uses the model to recognize valid faces.
Both the .tflite and .txt files need to be copied to the SD card in the Vision Board, and the .py file is run from the OpenMV IDE.
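The generated .py file follows the standard OpenMV classification loop. The sketch below shows its general shape, not the exact exported code; it assumes the OpenMV `tf` module and the file names `trained.tflite` and `labels.txt` on the SD card, which may differ in your export.

```python
# Sketch of the OpenMV-side loop (MicroPython, runs on the Vision Board).
# File names and the exact tf API are assumptions; check the .py file
# that Edge Impulse actually generated for your project.
import sensor, time, tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)          # let the camera settle

net = tf.load("trained.tflite", load_to_fb=True)
labels = [line.rstrip("\n") for line in open("labels.txt")]

while True:
    img = sensor.snapshot()
    for obj in net.classify(img):
        scores = obj.output()          # one confidence value per label
        best = scores.index(max(scores))
        print(labels[best], scores[best])
```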
Testing the model on Real Target
After running the Python script attached in the code section, we can see that the board recognizes valid persons with good accuracy. I set the code to open the lock if the confidence is above 60%; in some cases, with good lighting conditions, the confidence reaches more than 90%.
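The unlock decision itself is simple thresholding. Here is a minimal host-testable sketch of that logic: the 60% threshold matches the value described above, while the label name "valid" is a placeholder for whatever label the trained model actually uses.

```python
CONFIDENCE_THRESHOLD = 0.60  # open the lock only above 60% confidence

def should_open(label, confidence, valid_label="valid",
                threshold=CONFIDENCE_THRESHOLD):
    """Return True when the model's top prediction is the valid face
    and its confidence clears the threshold."""
    return label == valid_label and confidence > threshold

print(should_open("valid", 0.92))    # -> True  (good lighting case)
print(should_open("valid", 0.55))    # -> False (below the 60% threshold)
print(should_open("unknown", 0.95))  # -> False (wrong label)
```

On the board, a True result would drive the lock pin HIGH for a few seconds before re-locking.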