Snakes often appear on farms and pose a threat to human safety. According to the United Nations Office for Disaster Risk Reduction, snake bites are a neglected public health issue in many countries, particularly in Africa, Asia, and Latin America. In Asia, up to 2 million people are envenomed by snakes each year, while in Africa an estimated 435,000 to 580,000 snake bites require treatment annually. Moreover, bites from venomous snakes can lead to acute medical emergencies involving severe paralysis, breathing difficulties, and bleeding disorders.
Therefore, I set out to build a prototype that automatically monitors for snakes in farm fields and warns people of their presence. To achieve this, we can use tiny machine learning (TinyML) for automatic recognition. However, farm fields are remote and expansive, making it impractical to rely on Wi-Fi or cellular networks for communication. After careful consideration, I opted for LPWAN technology, specifically LoRaWAN, which provides a coverage range of several kilometers with low power consumption, eliminating the need for frequent battery replacements. The result is the LoRaWAN-Based TinyML Snake Recognition System project you are reading about now.
Basic Thoughts
1. To ensure convenient outdoor deployment while keeping costs relatively low, the solution must be compact, and the device must have enough computing power to handle image recognition. Considering these factors, I selected the XIAO ESP32S3 Sense as the main controller for this project. It captures images and feeds them into the model for detection and prediction.
2. For long-distance transmission with minimal power consumption, the Grove Wio-E5 was chosen to send data to The Things Network via LoRaWAN.
3. Finally, I want to monitor the device's snake detection status directly. To do this, I visualize the data on Datacake, which makes the information easy to interpret graphically and provides a convenient way to check whether the device has detected snakes.
The overall architecture is shown in the diagram below.
The full code engineering for the project is available on GitHub.
Demo Showcase
The results are truly impressive. By pointing the camera of the XIAO ESP32S3 Sense at snake images, we can easily see whether a snake has been detected, along with specific details such as the snake's location and size within the image. You can check the detection results either on a serial monitor or through the cloud platform. It's a seamless way to stay updated on any snake activity nearby.
You can observe that the value stays at 0 when there are no snakes. When a snake is detected, the value changes: a "1" indicates that a snake was present during that reporting period. Accumulated over time, this value reflects the frequency of snake occurrences; the higher it is, the more often snakes appear. In the next section, I will explain how to complete this project.
Get Started
In this section, I will introduce the necessary hardware and software, explain the reasons for my choices, and provide detailed steps.
Hardware
Personally, I don't want to break the bank on a gateway, but I also don't want a cheap, low-quality option. The M2 gateway fits the bill perfectly: it strikes the right balance, meeting all my requirements without draining my wallet. It's a cost-effective solution that doesn't compromise on performance.
Seeed Studio XIAO ESP32S3 Sense
The Seeed Studio XIAO ESP32-S3 Sense is a super versatile board that comes with a camera sensor, digital microphone, and SD card support. It's like the ultimate choice for our Snake Recognition project since it has built-in machine learning power and awesome photography capabilities.
Grove Wio-E5
To upload the XIAO's image recognition results to the IoT platform, we can use the LoRa module to transmit data via LoRaWAN. The E5 module's plug-and-play functionality is incredibly convenient, and its built-in AT commands and Arduino compatibility make it even more appealing.
Grove Shield for Seeed Studio XIAO
Plug-and-play Grove extension board for Seeed Studio XIAO series. It's perfect for our project where we need to connect the XIAO and E5 together.
Overall structure
As shown below, the combined assembly remains compact, about the size of my AirPods. The expansion board effortlessly connects the LoRa module and the XIAO.
The total cost for this setup is just $35.39: $13.99 for the XIAO ESP32S3 Sense, $4.50 for the XIAO expansion board, and $16.90 for the Grove Wio-E5. It's an affordable and efficient solution for my project.
Software
1. To set up the Arduino environment for programming the XIAO ESP32S3, refer to Getting Started | Seeed Studio Wiki.
2. The other piece of software I used is Edge Impulse. I have tried several AI training platforms, and most seem a bit complex. Edge Impulse lets you export an Arduino library directly, which can then be easily incorporated into the Arduino IDE.
Once the preparation work is complete, you can start training the model. Let's begin!
Step 1. Training object detection with Edge Impulse
1) Gathering dataset
To train a target recognition model, the first step is to gather a high-quality dataset of target images. You can find and download relevant datasets from platforms like Kaggle, Roboflow, and OpenDataLab. Once you have chosen a suitable dataset, download it in the COCO format. This will provide you with a compressed package of the dataset.
After decompressing the downloaded dataset, you will discover folders named "train" and "test." These folders contain the data used for training and validating the model, respectively. Inside each folder, there will be a JSON file that stores the labels for each image, eliminating the need for manual labeling. However, if labeling is required, you can utilize the labelme annotation tool for data annotation.
2) Build a new project on Edge Impulse
Note: Edge Impulse's free developer plan has limitations on dataset size and training duration. Assess your model's complexity beforehand to ensure it fits within these constraints.
First, create a new project.
Then click Data acquisition, click the upload button, and follow: Select a folder -> Select files -> Choose train or test (depending on the folder) -> Upload data.
After uploading the data, you can view the entire dataset on the Edge Impulse platform. Check the labels across the whole dataset.
Next, click Create Impulse, change the resize mode to Fit longest axis, and save the impulse.
After that, click Image and then Generate features.
The final step is to train the model and deploy it in the Arduino IDE. Once training is complete, click Deployment -> choose Arduino library -> disable the EON Compiler -> select Quantized (int8) -> Build.
Note: the training parameters should be adjusted to your dataset size and model structure; the default parameters are used here.
Through the above steps, we have completed the whole object detection workflow, from dataset acquisition to model training and export.
Training other kinds of models is covered in the official documentation: Getting Started - Edge Impulse Documentation.
Step 2. Compiling code using Arduino IDE
1) Installing the object detection libraries
After completing the steps above, you can download the trained model and the related library files required to run it on Arduino, provided as a compressed package.
To install libraries in the Arduino IDE, refer to Installing Libraries | Arduino Documentation.
2) Inferencing the trained model on the XIAO
With the deep learning dependency libraries installed in the previous section, we are now able to use our trained model for inference. The initial step involves invoking the camera to capture real-time image data. Once the camera is successfully initialized, the serial port will print "camera initialized." For a visual representation of this initialization process, please refer to the figure below.
The camera configuration code is attached.
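For orientation, here is a minimal sketch of what the camera setup typically looks like, assuming the pin mapping Seeed publishes in its XIAO ESP32S3 Sense camera examples. Treat the pin numbers and settings as assumptions and verify them against the attached code and your board revision:

```cpp
#include "esp_camera.h"

camera_config_t config;

void setupCamera() {
  // Pin mapping assumed from Seeed's published XIAO ESP32S3 Sense examples.
  config.pin_pwdn = -1;     config.pin_reset = -1;
  config.pin_xclk = 10;
  config.pin_sccb_sda = 40; config.pin_sccb_scl = 39;
  config.pin_d7 = 48; config.pin_d6 = 11; config.pin_d5 = 12; config.pin_d4 = 14;
  config.pin_d3 = 16; config.pin_d2 = 18; config.pin_d1 = 17; config.pin_d0 = 15;
  config.pin_vsync = 38; config.pin_href = 47; config.pin_pclk = 13;
  config.ledc_channel = LEDC_CHANNEL_0;
  config.ledc_timer = LEDC_TIMER_0;
  config.xclk_freq_hz = 20000000;
  config.pixel_format = PIXFORMAT_JPEG;  // JPEG frames; decode before inference
  config.frame_size = FRAMESIZE_QVGA;    // 320x240 is plenty for a small model
  config.jpeg_quality = 12;
  config.fb_count = 1;

  if (esp_camera_init(&config) != ESP_OK) {
    Serial.println("camera init failed");
    return;
  }
  Serial.println("camera initialized");  // matches the serial output described above
}
```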
Next, we retrieve image data from the camera and run forward propagation with the loaded model to obtain a prediction for the image. The predicted categories are represented with sequential numbers: for snake detection, "1" indicates a snake was detected and "0" indicates none. Once we have this classification, we transmit it via LoRa communication. The code has been attached.
Step 3. Transmit data to TTN and achieve visualization on Datacake
1) Sending data to The Things Network
Refer to Sending Wio-E5 Data to Datacake via TTN - Hackster.io to create an application, add a gateway, and bind devices. Note that you must have your own LoRaWAN gateway to use The Things Network (TTN).
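For context, joining TTN with the Wio-E5 typically comes down to a short AT command sequence over the module's serial port. This is a sketch based on Seeed's documented AT command set; the key and region below are placeholders you replace with your own TTN credentials:

```
AT+ID=DevEui                      // print the DevEUI to register on TTN
AT+KEY=APPKEY,"<your TTN AppKey>" // placeholder: use your application key
AT+MODE=LWOTAA                    // over-the-air activation
AT+DR=EU868                       // set your region's data-rate scheme
AT+JOIN                           // join the network through your gateway
AT+MSGHEX="01"                    // subsequent uplinks carry the detection flag
```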
As shown in the figure above, the XIAO successfully uploaded the recognition result to TTN via the Wio-E5 module.
2) Visualizing the data using Datacake
The reference is Sending Wio-E5 Data to Datacake via TTN - Hackster.io. Before visualizing the data, we need to decode the payload to extract the values we actually need. We can view the page from Device -> Configuration -> Payload Decoder, as in the following figure:
Then we can change the code in the Payload Decoder to the decoder code that I wrote:
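If you want a starting point, a minimal Datacake decoder for a one-byte payload might look like the following. This is a sketch, not the author's attached code, and the field name SNAKE is an assumption that must match the field identifier you configure on Datacake:

```javascript
// Datacake payload decoder: the device uplinks a single byte,
// 0 = no snake, 1 = snake detected during this period.
function Decoder(bytes, port) {
  var flag = bytes && bytes.length > 0 ? bytes[0] : 0;
  return {
    SNAKE: flag  // hypothetical field identifier; match your Datacake field
  };
}
```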
After all the tasks have been completed, we can visualize the data from XIAO on the dashboard.
As before, the value stays at 0 when no snakes are detected and changes to 1 for any period in which a snake appears.
Application
Before deploying the program in the real world, it is crucial to debug it. As depicted below, once the program is uploaded to the XIAO, the camera captures the snake image for object detection. Note that it is preferable for the target to appear as completely as possible in the image.
The detection result is presented below.
From x, y, width, and height we can infer the target's specific position in the image. If necessary, this can be extended: for example, a servo pan-tilt mount could be installed at the bottom of the device so the target is tracked and recognized by keeping it centered in the image. Additionally, thanks to LoRaWAN technology, the project's actual coverage can reach several kilometers, which is sufficient to cover the area of a farm.
Future
I've been working on this project to detect the presence of snakes. In the next steps, I aim to extend its capabilities to count snakes and even identify their species. This expansion will require further development and integration of advanced computer vision techniques and deep learning algorithms.
By leveraging the power of AI, I envision a future where the system can accurately count and classify different snake species in real-time. This would greatly benefit fields such as wildlife conservation, ecological research, and snakebite prevention. The ability to monitor snake populations and identify specific species can contribute to better understanding their behavior, distribution, and potential threats.
I will continue to advance this project, and I encourage you guys to explore and make your own efforts using AI and LoRaWAN technology to build unique projects.