The role of Artificial Intelligence in livestock and wildlife monitoring is expected to grow significantly. This project demonstrates how AI can track and count objects (animals or crops) quickly and efficiently using embedded Machine Learning. The asset tracking system uses Computer Vision from a drone flying across a field, scanning the surface with a downward-facing camera. The ML model detects and differentiates types of animals or crops and counts the cumulative number of each type of object in real time. This enables wildlife rescue teams to monitor animal populations, and it can also help businesses estimate potential revenue in the livestock and agriculture markets.
This project uses Edge Impulse’s FOMO (Faster Objects, More Objects) object detection algorithm. The wildlife/livestock/asset tracking environment can be simulated by selecting the grayscale Image block and FOMO object detection with 2 output classes (e.g. turtle and duck). The project takes advantage of FOMO’s fast and efficient algorithm to count objects on a constrained microcontroller or a Linux-based single-board computer such as the Raspberry Pi. (I’m using a Raspberry Pi 3 Model B+, but the Pi 4 Model B or Jetson Nano should work better in theory.)
The Edge Impulse model is also integrated into our Python code so that it can count objects cumulatively. The algorithm compares the coordinates of detections in the current frame to those in previous frames to decide whether an object is new on camera or has already been counted. In our testing the count is sometimes still inaccurate, as this model is still at the Proof of Concept stage, but we are confident the concept can be developed further for real-world applications.
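The frame-to-frame comparison can be sketched roughly like this (a simplified illustration, not the project’s actual code; the distance threshold is an assumed value in normalized image coordinates):

```python
# Simplified illustration (not the project's actual code) of the
# frame-to-frame check: a detection is treated as already counted
# if its centroid lies near a centroid from the previous frame.
import math

DIST_THRESHOLD = 0.15  # assumed value, normalized coordinates

def count_new_objects(prev_centroids, curr_centroids, threshold=DIST_THRESHOLD):
    """Count detections in the current frame with no close match
    in the previous frame (i.e. objects seen for the first time)."""
    new = 0
    for cx, cy in curr_centroids:
        if not any(math.hypot(cx - px, cy - py) < threshold
                   for px, py in prev_centroids):
            new += 1
    return new

# One known duck drifts slightly; a second duck enters the frame.
prev = [(0.50, 0.50)]
curr = [(0.52, 0.48), (0.10, 0.90)]
print(count_new_objects(prev, curr))  # -> 1
```

A real tracker also has to handle objects leaving the frame and ambiguous near-misses, which is why the full program solves a matching problem instead of this simple nearest-neighbor check.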
This project consists of 5 steps:
- Preparation
- Data acquisition and labelling
- Training and building model using FOMO Object Detection
- Deploy and test object detection on the Raspberry Pi
- Build Python application to detect and count (cumulative)
Prepare your Raspberry Pi with an updated Raspberry Pi OS (Buster or Bullseye). Then open your Terminal app and ssh into your Pi. Install all dependencies and the Edge Impulse for Linux CLI by following the guide here.
Take pictures of the objects from above (e.g. ducks and turtles) in different positions and under varying lighting conditions to ensure that the model works in different conditions (and to prevent overfitting). In this project I used a smartphone camera to capture the images for ease of use.
Note: Try to keep the objects a similar size across pictures; a significant difference in object size will confuse the FOMO algorithm.
As you might already know, this project uses Edge Impulse as the Machine Learning platform, so we need to log in (create an account first), then go to Edge Impulse and create a new project.
Choose the Images project option, then Classify multiple objects.
In Dashboard > Project Info, choose Bounding Boxes as the labelling method and Raspberry Pi 4 for latency calculations.
Then in Data acquisition, click on the Upload Data tab, choose your files, select auto split, then click Begin upload.
Now it’s time for labelling. Click on the Labelling queue tab, then drag a box around an object, label it (duck or turtle), and click Save. Repeat until all images are labelled. Make sure that the ratio between Training and Test data is ideal, around 80/20.
Once your dataset is ready, go to Create Impulse and set the image width and height to 96 x 96 (this helps keep the model small in memory). Then choose Fit shortest axis, and choose Image and Object Detection as the blocks.
Go to the Image parameters section, set the color depth to Grayscale, then press Save parameters.
Finally, click the Generate features button; you should get a result like the one below.
Then navigate to the Object Detection section and leave the Neural Network training settings at their defaults, which are fairly balanced for our case, and choose FOMO (MobileNet V2 0.35) as the pre-trained model. Train the model by pressing Start training and watch the progress. If everything is OK, you should see something like this:
After that we can test the model: go to the Model testing section and click Classify all. If the accuracy is above 80%, we can move on to the next step, deployment. Note: If the accuracy is not as good as expected, start over with better data and labels, or retrain the model with different Training cycles and Learning rate settings.
Now we can switch to the Raspberry Pi. Make sure your Pi has all dependencies and the Edge Impulse for Linux CLI installed (as in Step 1) and connect your Pi camera (or USB webcam). Then ssh into your Pi from a terminal and type:
$ edge-impulse-linux-runner
(add --clean if you have more than one project). During this process you will be asked to log in to your Edge Impulse account.
This will automatically download and compile your model to your Pi, and start classifying. The result will be shown in the Terminal window.
You can also launch the video stream in your browser: http://<your-raspberry-pi-ip-address>:4912
Then you can see how this live classification works:
Turtles and ducks have been successfully identified with their x, y coordinates in real time (with a very short time per inference).
Up to this step, we have collected data, trained an object detection model on the Edge Impulse platform, and run that model locally on our Raspberry Pi board. So we can conclude that it was successfully deployed.
Step 5: Build Python program to detect and count (cumulative)
To make this project more meaningful for a specific use case, we want it to calculate the cumulative count of each type of object seen by a moving camera (on a drone). We took Edge Impulse’s example object detection program and turned it into an object tracking program by solving a weighted bipartite matching problem, so the same object can be tracked across different frames. For more detail, check our Python file in the code attachment below.
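As a rough illustration of the matching idea, here is a greedy nearest-pair heuristic (a simplified stand-in, not the optimal weighted bipartite solver our actual script uses; the max_dist gate is an assumed value):

```python
# Stand-in sketch for the matching step: pair current detections with
# tracked objects greedily by centroid distance. The real program solves
# the optimal weighted bipartite matching; max_dist is an assumed
# gating threshold in normalized coordinates.
import math

def match_and_count(tracked, detections, max_dist=0.2):
    """Return (matched (tracked, detection) index pairs,
    indices of detections treated as new objects)."""
    candidates = sorted(
        (math.dist(t, d), i, j)
        for i, t in enumerate(tracked)
        for j, d in enumerate(detections)
    )
    used_t, used_d, matches = set(), set(), []
    for dist, i, j in candidates:
        if dist > max_dist:
            break  # candidates are sorted, so no closer pairs remain
        if i not in used_t and j not in used_d:
            used_t.add(i)
            used_d.add(j)
            matches.append((i, j))
    # Any detection left unmatched is counted as a new object
    new = [j for j in range(len(detections)) if j not in used_d]
    return sorted(matches), new

# Two tracked ducks move slightly; a third detection appears.
tracked = [(0.2, 0.2), (0.8, 0.8)]
detections = [(0.22, 0.18), (0.79, 0.81), (0.5, 0.1)]
print(match_and_count(tracked, detections))  # -> ([(0, 0), (1, 1)], [2])
```

The greedy version can pick a locally closest pair that a globally optimal assignment would avoid, which is why the full program solves the weighted bipartite matching instead.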
Because we use Python, we need to install the Python 3 Edge Impulse SDK and clone the repository of the Edge Impulse examples. Follow the steps here.
You also need to download the trained model file so it is accessible by the program we are running. Type this to download it:
$ edge-impulse-linux-runner --download modelfile.eim
Make sure that the program (count_moving_ducks.py) is placed in the correct directory, for example:
$ cd linux-sdk-python/examples/image
Then, run the program using this command:
$ python3 count_moving_ducks.py ~/modelfile.eim
Yay! Finally, we have successfully implemented the Edge Impulse FOMO object detection model and run a cumulative counting program locally on a Raspberry Pi. With the speed and accuracy we obtained, we are confident this project could also run on microcontroller boards such as the Arduino Nicla Vision or the ESP32-CAM, making it even easier to mount on a drone.
Feel free to leave a comment and Thank you!