With my very limited knowledge of Machine Learning, it took me only about an hour to deploy a machine learning model on my remote Raspberry Pi. Thanks to Edge Impulse for making it this easy.
So what am I trying to teach my Raspberry Pi to recognize? A table tennis racket and a ball 🏓. This might sound silly to some of you, but these are the objects I have near me right now 😋
Let me take you on the journey of how it was done.
Hardware Preparation

For this project, I am going to use Sixfab's Cellular Kit with their CORE solution to reach the Raspberry Pi remotely. The hardware setup is pretty straightforward: the components go to their respective places as shown in the image below. The step-by-step guide for the Cellular HAT setup is available here, and the setup guide for the enclosure is also available. As usual, the camera goes into the camera slot on the Raspberry Pi; a USB camera also works.
Here is a closer look inside.
After getting the SD card prepared with the Raspberry Pi OS, I installed the Sixfab CORE. Then, before installing Edge Impulse, I enabled the camera from
sudo raspi-config
Once the camera is enabled, the Raspberry Pi will reboot.
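By the way, if you prefer to script this step, raspi-config also has a non-interactive mode. On the Raspberry Pi OS release I used, the following should enable the camera without the menu (treat this as a convenience I have not tested here; the menu route above is what I actually did):

sudo raspi-config nonint do_camera 0
sudo reboot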
After the reboot, we will install Edge Impulse on our Raspberry Pi. This installation process is also super easy. The commands required to install it are as follows:
curl -sL https://deb.nodesource.com/setup_12.x | sudo bash -
sudo apt install -y gcc g++ make build-essential nodejs sox gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps
npm config set user root && sudo npm install edge-impulse-linux -g --unsafe-perm
The details of this installation process are explained in the Edge Impulse documentation.
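Before moving on, a quick sanity check never hurts. These are just standard shell commands, nothing Edge Impulse specific, to confirm Node.js and the CLI landed where expected:

node -v
which edge-impulse-linux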
Go to studio.edgeimpulse.com, log in, and create a project.
Once the installation is complete and the project is ready, run the following command to connect the Raspberry Pi to Edge Impulse:
edge-impulse-linux
This will ask for the username and password you used to log in to studio.edgeimpulse.com.
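A handy thing to know: the CLI stores these credentials on the device, so if you later want to connect to a different project or account, run it with the --clean flag to clear the saved configuration and log in again:

edge-impulse-linux --clean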
Once the setup is complete, you can verify it under Devices.
Once everything is set up, we will need data to train our Machine Learning model. You can either upload existing data or collect your own. Here I am going to collect my own data.
To do so, go to Data Acquisition in the left menu.
Here you can see Device, Label, Sensor & Camera feed under Record new data.
Fill in the label with any name (I am using "Data") and click Start Sampling to collect the images you want.
You will be able to see the images being taken on the very same page. The images are named LabelName.jpg.SomeRandomString, for example 'Data.jpg.20o2ja35' in my case.
At the top, you can see the data divided into Training Data and Test Data, along with a tab called Labeling queue.
To split the data between the two sets, go to Dashboard > Danger zone > Rebalance dataset
If you cannot see the Labeling queue, go to Dashboard > Project Info > Change Labeling method to Bounding boxes
Now go to the Labeling queue in Data Acquisition and start labeling your data. Do this for all of the data.
Once labeling is done, go to Impulse Design > Create Impulse, add a processing block (Image) and a learning block (Object Detection), and click Save Impulse.
Now, go to Image, select RGB as the Color depth, and press Save parameters. This will take you to the Generate features tab; press Generate features there. This may take a few seconds.
Once the features are generated, you will see a graph under the Feature explorer, extracted from the given data. (I know the clusters are not very clear, but I guess it is good enough for a beginner.)
Next, I will go to the Object detection tab and press Start Training, which will train the model.
Be patient, as it will take a few moments. Once completed, it will show the precision score. Depending on the model version, the precision score may differ; the unoptimized version gave a better precision score but uses more memory.
Now the model is ready to be used.
Let us check whether the Raspberry Pi can detect the objects correctly.
To check this, open a terminal on your Raspberry Pi and run
edge-impulse-linux-runner
The result can be seen below; it gives a link to observe the live feed and classification in the browser.
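The link points to a small web server that the runner starts on the Pi itself. According to the Edge Impulse CLI documentation, this view is served on port 4912 by default, so if the console output scrolls away you can usually reach the feed at an address of this shape (your Pi's IP will differ):

http://<raspberry-pi-ip>:4912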
It can also detect all known objects at the same time 🏓.
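One more runner trick worth knowing: per the Edge Impulse CLI documentation, the --download flag saves a compiled copy of the model as a local .eim file, so the runner does not have to pull it from the studio on every start and the Edge Impulse Linux SDKs can load it directly:

edge-impulse-linux-runner --download modelfile.eim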
I will now connect remotely to this cellular-connected Raspberry Pi and run the trained model to check whether it can detect the known objects in front of it. For the remote session, we have Sixfab CORE installed; its installation is explained in this blog by Ensar.
I will open up the Remote Terminal from the CORE.
Now I will run the following command:
sudo edge-impulse-linux-runner
and place the objects in front of it to see if it can detect them.
And yes, it did!! It shows the labels of the detected objects.
Conclusion

With the help of Edge Impulse, you can train your model with your own dataset or a ready-made one. Once the model is ready, you can deploy it to any device, as long as you have access to that device. For accessing a cellular-connected Raspberry Pi, Sixfab CORE is a good solution, giving you a hassle-free internet connection along with remote terminal access.