- Click on the link to Edge Impulse and log in if you have an account, or click the "Get Started" button to make one.
- Start a new project.
- Since this is an image classification project, select Images as the data type.
- Now that the project is set up, it's time to collect or upload image data!
- For more details on computer vision and image classification, visit Shawn Hymel's Computer Vision with Embedded Machine Learning course
- There are multiple ways data can be collected in this project
- The two main methods, device connection and image augmentation, are described below
Device connection:
- Select the Devices tab on the far left side of Edge Impulse
- Connect to any available device; a smartphone was used in this example
- Select the smartphone and scan the QR code to go to the URL
- Give the smartphone permission to access the camera
- Adjust the labels and category (training and testing data) settings for data collection of rock, paper, scissors, and background classes
Image Augmentation:
- This part of data collection involves taking existing images and augmenting them to create new samples
- The augmentation in this project only involves mirroring and rotating images
- Install Atom or any other software that lets you view and edit code in any language (installation instructions are available for Windows, Mac, and Linux)
- Install Python 3.9, then install TensorFlow and the supporting packages from a terminal or command prompt
For Windows:
python -V
python -m pip install -U pip
python -m pip install -U setuptools
python -m pip install tensorflow==2.6
python -m pip install keras==2.6
python -m pip install tensorflow-estimator==2.6.0
python -m pip install onnxmltools mmdnn tensorflow-datasets tflite-model-maker
python -m pip install numpy scipy matplotlib ipython jupyter pandas sympy nose imageio
python -m pip install netron seaborn west pyserial scikit-learn opencv-python pillow
For Linux:
python3 -V
python3.9 -m pip install -U pip
python3.9 -m pip install -U setuptools
python3.9 -m pip install tensorflow==2.6
python3.9 -m pip install keras==2.6
python3.9 -m pip install tensorflow-estimator==2.6.0
python3.9 -m pip install onnxmltools mmdnn tensorflow-datasets tflite-model-maker
python3.9 -m pip install numpy scipy matplotlib ipython jupyter pandas sympy nose imageio
python3.9 -m pip install netron seaborn west pyserial scikit-learn opencv-python pillow
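To confirm TensorFlow installed correctly, a quick version check can be run (use python instead of python3.9 on Windows):
python3.9 -c "import tensorflow as tf; print(tf.__version__)"
This should print 2.6.x.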
- In the Data Acquisition tab on the far left, go to the Export data tab on the top right and click Export to download data that was collected in the previous step
- There is a Python script attached to this project in the Code section that takes existing images in a folder and rotates/mirrors them
- Download the script and adjust it in Atom or any similar editor (change the folder path on line 20 to the folder where the exported data was saved, and keep the "/*.jpg" part to avoid errors when running the program)
- After adjusting the program, open the terminal or command prompt on your operating system and run the Python script from there:
$ python3 rotateimage.py
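The attached script is the authoritative version; a minimal sketch of the same mirror/rotate idea, assuming Pillow (already in the package list above) and an illustrative folder path, could look like this:

# minimal sketch of rotateimage.py: mirror and rotate every .jpg in a folder
import glob
from PIL import Image

folder = "/path/to/exported/data"   # corresponds to the path on line 20 of the attached script

for path in glob.glob(folder + "/*.jpg"):
    img = Image.open(path)
    stem = path[:-len(".jpg")]
    # save a horizontally mirrored copy
    img.transpose(Image.FLIP_LEFT_RIGHT).save(stem + "_mirror.jpg")
    # save rotated copies at 90, 180, and 270 degrees
    for angle in (90, 180, 270):
        img.rotate(angle, expand=True).save("%s_rot%d.jpg" % (stem, angle))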
- With the new images the Python script generated, go to the Data Acquisition tab on Edge Impulse, select Upload data, browse for the files, and upload them
- Now you're ready for the Impulse Design part of the project
- When you click on the Impulse Design tab on the far left, select Image as the processing block and Classification (Keras) as the learning block
- Click on Save Impulse and go to the Image section under Create Impulse
- Without changing anything, open the Generate Features tab at the top of the Image section
- Click on the green Generate Features button; the training set will get a Feature Explorer 3D graph showing how the classes relate to each other
- Then go to the NN Classifier tab and click Start training to get an accuracy percentage and a confusion matrix. (The overall accuracy here is about 87%, which seems good enough to move on to the next step of the project)
To set up the Maaxboard RT and MCUXpresso software, follow Monica Houston and Jason Lambert's instructions in their Hackster project, Run Practically Any Tensorflow Model on Maaxboard RT, working from the Board Setup section all the way through the Test the example code section.
PROJECT EXECUTION
Download TensorFlow Lite file:
- On Edge Impulse, go to the Dashboard section on the far left
- Scroll down to the Download block output section and download the int8 quantized file of your NN Classifier model
- Change the .lite file extension to .tflite
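For example, from a terminal (the filename below matches this project's download; on Windows use ren instead of mv):
mv ei-embeddedcv-nn-classifier-tensorflow-lite-int8-quantized-model.lite ei-embeddedcv-nn-classifier-tensorflow-lite-int8-quantized-model.tflite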
- In this step you will use the xxd utility in the terminal or command prompt to convert the .tflite file into a C array that will go into the MCUXpresso project
If using Windows or Linux:
xxd -i ei-embeddedcv-nn-classifier-tensorflow-lite-int8-quantized-model.tflite > model_data.h
If using Windows Powershell:
xxd -i ei-embeddedcv-nn-classifier-tensorflow-lite-int8-quantized-model.tflite | out-file -encoding ASCII ei-embeddedcv-nn-classifier-tensorflow-lite-int8-quantized-model.h
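For reference, xxd -i names the C array after the input file (dots and dashes become underscores) and emits something like this (byte values and length are illustrative):

unsigned char ei_embeddedcv_nn_classifier_tensorflow_lite_int8_quantized_model_tflite[] = {
  0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33, ...
};
unsigned int ei_embeddedcv_nn_classifier_tensorflow_lite_int8_quantized_model_tflite_len = 123456;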
- Open the model_data.h file (if you used the PowerShell command, rename its output to model_data.h) and add the changes below to the top of the file; the last line replaces the array declaration that xxd generated:
#include <cmsis_compiler.h>
#define MODEL_NAME "ei_embeddedcv_nn_classifier_tensorflow_lite_int8_quantized_model"
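/* MODEL_INPUT_MEAN/STD below are assumed to describe the input normalization:
   input = (pixel - mean) / std, so 127.5/127.5 maps 0..255 to roughly -1..1 */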
#define MODEL_INPUT_MEAN 127.5f
#define MODEL_INPUT_STD 127.5f
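/* __ALIGNED(16) comes from cmsis_compiler.h and keeps the model flatbuffer
   aligned the way TensorFlow Lite Micro expects */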
const char model_data[] __ALIGNED(16) = {
- Open Notepad, Atom, or any other code/text editor
- The goal here is to make a labels file that the C++ project can use
- This file will have a list of the different classes that will be identified
- Since the classes are rock, paper, scissors, and background:
const char* labels[] = {
"rock",
"paper",
"scissors",
"background"
};
- Copy and paste the list above, make any necessary changes depending on what is being classified, and save the file as labels.h
- Go back to Monica and Jason's Hackster project, Run Practically Any Tensorflow Model on Maaxboard RT, and work only through the Import the model and labels files to MCUXpresso section. After that, refer to the eIQ™ Inference with TensorFlow Lite for Microcontrollers on i.MX RT1170 - With Camera instructions by NXP
- Double click on the model.cpp file under the “source\model” folder in the Project Explorer tab to open it
- On line 27, add the following #include for the ops resolver that supports all the operands used by this retrained model:
#include "tensorflow/lite/micro/all_ops_resolver.h"
- On lines 30 and 31, make sure both of these includes are present:
#include "model.h"
#include "model_data.h"
- On line 42, change the kTensorArenaSize variable to 800000, and on line 55, change the model name to the array name, model_data
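Those two lines would then read roughly as follows (a sketch; the exact surrounding declarations come from the NXP example and may differ slightly):

constexpr int kTensorArenaSize = 800000;   // line 42: working memory the interpreter allocates tensors from
...
model = tflite::GetModel(model_data);      // line 55: parse the flatbuffer from the C array in model_data.h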
- On line 73 in model.cpp, comment out the original resolver line and replace it with:
tflite::AllOpsResolver micro_op_resolver;
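AllOpsResolver links in every TFLM kernel, which is the simplest option here. If flash space ever becomes a concern, a MicroMutableOpResolver that registers only the operators the model uses could be substituted; the op list below is illustrative, not taken from this model:

// hypothetical alternative: register only the kernels the model needs
// (also requires #include "tensorflow/lite/micro/micro_mutable_op_resolver.h")
static tflite::MicroMutableOpResolver<5> micro_op_resolver;
micro_op_resolver.AddConv2D();
micro_op_resolver.AddDepthwiseConv2D();
micro_op_resolver.AddAveragePool2D();
micro_op_resolver.AddReshape();
micro_op_resolver.AddSoftmax();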
- Double click on the output_postproc.cpp file under the "source\model" folder in the Project Explorer tab to open it and make sure line 13 has an #include for the labels file
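Assuming the file was saved as labels.h alongside the other model sources, that include would read:
#include "labels.h"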
- Go to Project->Build Configurations->Set Active->Release. This enables high compiler optimizations, which significantly decreases the inference time of TFLM projects.
- In MCUXpresso, build the project by clicking Build in the Quickstart Panel to make sure there aren't any errors
- Click on the Terminal tab and set up a serial terminal with a 115200 baud rate, 1 stop bit, and no parity
- Debug the project by clicking on “Debug” in the Quickstart Panel.
- It will ask what interface to use. Select CMSIS-DAP.
- The debugger will download the firmware and open the debug view. It may take some time to download the firmware. Click on the Resume button to start running
- On the LCD screen, you should see what the camera is pointing at.
- As the on-screen results show, the classification has some accuracy issues; the readings are mostly "background" and "paper"
- Possible causes include the Edge Impulse project needing more training images, poor lighting for the camera, the training and testing sets clustering different image labels, etc.
- An "Improving the accuracy of the Rock, Paper, Scissors model" project will be coming soon so you can see how the image classification was better adjusted!