We’re going to create a project on a Raspberry Pi with a camera that distinguishes between socks, using the Edge Impulse machine learning system running on balena. We all know that matching socks is a nightmare, so having an affordable, intelligent system that tells you whether two socks match is the best weekend project ever.
There are plenty of machine learning solutions for Internet of Things projects nowadays. What makes this project interesting is how we’ll use Edge Impulse and some pretty affordable gear to help you get started with AI.
Contents
Before you start
- What's Edge Impulse?
- Hardware required
- Software required
Tutorial
- Create the Edge Impulse Project
- Train the model of the Project
- Deploy the Edge Impulse ML Model
- Create the balena application with the Edge Impulse ML Model
- Test your image classification application
Until next time
- Acknowledgements
Before you start
What's Edge Impulse?
Edge Impulse is a service that enables you to generate trained machine learning models in the cloud and deploy them on microcontrollers (e.g. Arduino and STM32) or single board computers like the Raspberry Pi. That means no GPU or TPU is needed, because all of the machine learning and neural network training is done beforehand in the cloud with advanced methods. Edge Impulse generates a trained model that is deployed onto the device and enables it to classify images (or sound, motion, and more) at the edge without any special hardware requirements.
This project deploys an image classifier that runs on the stream captured by the Raspberry Pi camera. It classifies images using a model trained on Edge Impulse's neural network, based on the transfer learning technique for images plus a dataset of pictures taken, in this case, with a mobile phone camera.
We’ll also show you how to generate a machine learning model using Edge Impulse and deploy an image classification system running on a Raspberry Pi with balena. By the end of this project, you’ll be able to reunite all of your unpaired socks (and try out other edge AI use cases)!
Tutorial
Create the Edge Impulse Project
Go to the Edge Impulse Studio website and create an account.
Click on the menu, select Create new project, and enter a name for your new project. In this case, we’ll create a project that classifies socks and tells us whether they match or not. Obviously a useful tool that everybody should have in their home.
Once the project is created, select the project and start collecting the data to train the Machine Learning model.
Navigate to Devices on the main menu and then click Connect a new device at the top right. A modal will pop up to connect a new device. For this project, we’ll use our mobile phone to take the pictures that train the model.
Click Use your mobile phone and scan the QR code generated by the website with your phone. The QR code opens a web page on your phone where, once you grant permission to use your camera, you can capture pictures of paired and unpaired socks.
At that point, click on Label, enter pair, and start taking pictures of paired socks. Once you have taken 40-50 pictures of different paired socks (depending on your jungle of socks), change the label to unpair and take pictures of your socks unpaired. You can either split the pictures automatically (80/20) into training and testing pictures, or do it manually.
Now if you go to Data acquisition in the Edge Impulse Studio, you will see all of the Training Data and Test Data. In this case we have more than 250 items of Training Data and more than 90 items of Test Data; we selected the automatic split (80/20).
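If you'd rather split the data yourself, here's a minimal sketch of a manual 80/20 split, assuming you've exported your labeled pictures into local folders. The folder names are hypothetical:

```python
# Hypothetical manual 80/20 split of an exported image folder.
import random
import shutil
from pathlib import Path

def split_dataset(source_dir, train_dir, test_dir, train_ratio=0.8):
    images = sorted(Path(source_dir).glob("*.jpg"))
    random.shuffle(images)  # shuffle so both sets cover all sock styles
    cutoff = int(len(images) * train_ratio)
    for subset, target in ((images[:cutoff], train_dir), (images[cutoff:], test_dir)):
        Path(target).mkdir(parents=True, exist_ok=True)
        for image in subset:
            shutil.copy(image, Path(target) / image.name)  # keep originals intact

# One call per label, mirroring the pair/unpair/unknown classes.
for label in ("pair", "unpair", "unknown"):
    split_dataset(label, f"train/{label}", f"test/{label}")
```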
While uploading the pictures from your phone, you may have seen an error saying that there was no impulse detected on the project. Let’s create it now.
Go to Create impulse on the main menu. For this project, we are using Image data at a resolution of 96x96 pixels.
Click Add a processing block and add Image.
Click Add a learning block and add Transfer Learning (Images), which was created for image datasets.
Transfer learning is used to build an image classifier quickly. It’s complicated to build a good computer vision system from scratch, usually because a lot of images and GPU time are needed to train a model. Transfer learning instead starts from a well-trained model and retrains only the upper layers of the neural network, so you get a model in a fraction of the time and it works on smaller datasets.
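Edge Impulse handles all of this for you, but to make the idea concrete, here's a minimal Keras sketch of the technique. Treat it as an illustration rather than Edge Impulse's actual internals; the choice of MobileNetV2 as the pretrained base is our assumption:

```python
# Sketch of transfer learning: reuse a pretrained convolutional base and
# train only a small classification head on top of it.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3),  # matches the 96x96 image data of our impulse
    include_top=False,        # drop the original ImageNet classification head
    weights="imagenet",       # start from well-trained image features
)
base.trainable = False        # freeze the base; only the new head is trained

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # pair, unpair, unknown
])
```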
Once the learning block has been created, it should detect three different output features: pair, unpair, and unknown. The final block's Output features should say 3 (pair, unpair and unknown). For the unknown label, we took pictures of random objects as well. Now it’s time to click Save Impulse.
There are now more menu items below Create impulse on the main menu. Click on Image, then click Save Parameters with RGB color depth. This sends us to Generate features, which creates a 3D visualization of the captured dataset.
With the 3D visualization generated from the Training Data captured with the mobile phone, you can see how clearly the classified objects differ from one another.
The data is processed; now we have to train a neural network to recognize the patterns in the data. Neural networks are algorithms designed to recognize patterns. In this case the neural network will be trained with image data as input, and it will try to map each image into one of the categories: paired socks, unpaired socks, or unknown (for which we only took three pictures).
To train the neural network we’re going to use these parameters:
- Number of training cycles: 100
- Learning rate: 0.0075
- Data augmentation: enabled
- Minimum confidence rating: 0.8
Click Start training and the neural network will process all of the images and train to generate the machine learning model. After the model is done, you'll see accuracy numbers, a confusion matrix, and predicted on-device performance at the bottom. You have now trained your model.
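As a rough illustration, and continuing the Keras sketch from earlier, those settings translate to something like the following. It assumes the model built above and a train_dataset of batched 96x96 RGB images with one-hot labels; again, this is not Edge Impulse's actual training code:

```python
# Data augmentation (enabled): random flips and rotations so the model
# generalizes beyond the exact pictures taken.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
])
augmented = train_dataset.map(lambda x, y: (augment(x, training=True), y))

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0075),  # learning rate
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(augmented, epochs=100)  # 100 training cycles

# The minimum confidence rating (0.8) is not a training parameter: it is
# applied at inference time, where low-confidence frames count as uncertain.
```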
Since a test set of pictures was set aside while you captured pictures with your mobile phone, let's test the model with those pictures. Click on Model testing on the main menu, select all of the pictures, and click Classify selected.
With our model, we get more than 89% accuracy. Great!
Now it’s time to deploy the model on a Raspberry Pi and apply it to the real world.
Deploy the Edge Impulse ML Model
Click Deployment on the main menu and select WebAssembly, then scroll down and click Analyze optimizations on the Available optimizations for Transfer Learning table.
Click Build to build the Quantized (int8) model. This will build and download a WebAssembly model. However, you won’t need this file, since the project automatically downloads the model once it’s on balenaCloud, using your Edge Impulse API KEY and PROJECT ID as you will see below.
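To give an idea of what that automatic download looks like, here's a sketch in Python. The endpoint and parameters reflect our reading of the Edge Impulse API and may change; the project's own downloadWasm.sh script is the authoritative version:

```python
# Sketch: fetch the built WebAssembly model from Edge Impulse at start-up.
import os
import requests

project_id = os.environ["EI_PROJECT_ID"]  # set as balenaCloud service variables
api_key = os.environ["EI_API_KEY"]

response = requests.get(
    f"https://studio.edgeimpulse.com/v1/api/{project_id}/deployment/download",
    params={"type": "wasm"},        # ask for the WebAssembly build
    headers={"x-api-key": api_key},
)
response.raise_for_status()

# The download is a zip containing the .wasm model and its JS glue code.
with open("model.zip", "wb") as f:
    f.write(response.content)
```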
Create the balena application with the Edge Impulse ML Model
Go to the EdgeImpulse balenaCam project and click the Deploy with balena button to automatically deploy the application. If you use this one-click approach, you can skip the manual step of adding device environment values later, because they’ll be pre-configured for you.
Select your board as a device type (Raspberry Pi 4 in this case) and click the button ‘Create and deploy’.
Alternatively, if you want to learn the ins and outs of balena, you can download the project repo and push it to your balenaCloud account with the balena CLI (using balena push).
Once the application is deployed on your balenaCloud account, go to Edge Impulse and copy your PROJECT ID and API KEY so we can set them as service variables on the application in balenaCloud.
For the PROJECT ID, go to the Dashboard on Edge Impulse; you will find it at the bottom right.
For the API KEY, select Keys on the top menu (next to Project Info) and generate a new API key for balenaCloud.
Copy it, then go to balenaCloud and create the service variables EI_API_KEY and EI_PROJECT_ID on the edgeimpulse-inference container.
Once your application has been created, you can add a device to that new application by clicking the Add device
button. You can also set your WiFi SSID and password here if you are going to use WiFi.
This process creates a customized balenaOS image configured for your application and device type, and includes your network settings if you specified them. Once the balenaOS image has been downloaded, it’s time to flash your SD card (if you're using a Raspberry Pi).
You can use balenaEtcher for this. If the downloaded image file has a .zip extension, there’s no need to uncompress it before using balenaEtcher.
Once the flashing process has completed, insert your SD card into the Raspberry Pi and connect the power supply.
When the device boots for the first time, it connects to your network automatically and then to the balenaCloud dashboard. After a few moments, you’ll see the newly provisioned device listed as online.
Once the device appears online in the dashboard, it will automatically start downloading the Edge Impulse balenaCam application. After a few minutes, your device information screen in the dashboard should look something like this, showing the device with the two container services running, ready to classify images through the Pi Camera attached to your Raspberry Pi 4.
Test your image classification application
Toggle Public Device URL to enable remote access to the camera.
Open a browser and enter the Public Device URL or the device's local IP address.
The Pi camera stream should be displayed on the website. If you experience any problems, check the troubleshooting section below.
If the camera is streaming properly, try to move different objects in front of the camera and see how well the classifier works! Predictions are displayed for all labels with values between 0 and 1, with 1 being a perfect prediction.
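For example, a single classification result is just one score per label. Here's a hypothetical sketch of how an application might act on one:

```python
# Hypothetical classification result: one score per label, all between 0 and 1.
prediction = {"pair": 0.97, "unpair": 0.02, "unknown": 0.01}

best_label = max(prediction, key=prediction.get)
if prediction[best_label] >= 0.8:  # the minimum confidence rating set earlier
    print(f"{best_label}: {prediction[best_label]:.0%} confident")
else:
    print("No confident match; treating this frame as unknown")
```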
According to Edge Impulse, there's a 99% chance that these socks match.
...and according to Edge Impulse, there's a 99% chance that these socks **don't** match.
Edge Impulse can also determine what is likely not a sock, aka "unknown."
Enjoy training machine learning models with Edge Impulse and deploying them on your fleet of connected devices with balena.
Troubleshooting
- Chrome browsers will hide the local IP address from WebRTC, making the page appear but with no camera view. To resolve this, navigate to chrome://flags/#enable-webrtc-hide-local-ips-with-mdns and set it to Disabled. You will need to relaunch Chrome after altering the setting.
- Firefox may also hide the local IP address from WebRTC. Confirm the following in about:config: media.peerconnection.enabled: true and media.peerconnection.ice.obfuscate_host_addresses: false.
- This project uses WebRTC (a real-time communication protocol). In some cases a direct WebRTC connection fails.
- The current version falls back to MJPEG streaming when the WebRTC connection fails.
If you wish to test the app in balena local mode, you'll need to clone the repository, add your Edge Impulse Project ID and API Key in edgeimpulse-inference/app/downloadWasm.sh, and uncomment lines 5 and 6. This enables your project to download the Edge Impulse ML model while developing locally.
If you have more issues, check the balenaCam project or the advanced options available in this guide.
Until next time
We’d love to hear what you're classifying with balena and Edge Impulse. We’re always interested to see how the community puts these projects to work.
Get in touch with us on our Forums, Twitter, and Instagram to show off your work or to ask questions. We’re more than happy to help.
Acknowledgements
This project is made possible by the great work of Aurelien Lequertier from Edge Impulse and the balenaCam project developers.