Biodiversity is declining at a rapid pace and several wild species are at risk of extinction. Among them are elephants, which are the focus of the Hackster challenge ElephantEdge. With technical support such as IoT solutions and machine learning, conservation efforts can be facilitated. The scenario in ElephantEdge is a collar for elephants, equipped with a set of sensors and a lightweight processing unit. In the challenge, participants are encouraged to build TinyML models and simulate the deployment.
My contribution focuses on a development setup for creating models and testing dashboard features without the need for full hardware. However, some features, like a camera trap with TinyML, are demonstrated as a proof of concept. Moreover, I hope to contribute some general insights, tips, and tricks.
Components:
- Camera trap classification (hardware: OpenMV)
- Activity classification (hardware: smartphone as proof of concept)
- Dashboard (IoTConnect)
Provided:
- Description of how to train TinyML models for image and accelerometer data
- Python script to send simulated data to IoTConnect
- Image dataset (elephant, rhinoceros, zebra, buffalo - about 1500 images)
- MicroPython script for image inference and MQTT publishing to IoTConnect from the OpenMV device
Further work:
- Combine the components into a full solution
- Replace the smartphone with a microcontroller that has GPS and an accelerometer
📜 Background story
Throwback. It's 2015 and I choose to get involved with conservation efforts in Zimbabwe, where I find myself with elephants orphaned as a result of poaching. Since then, I have wondered: how can I make a difference on a bigger scale? With time (and lots of programming classes), the thought turned more and more towards: how might one combine an interest in system development with animal conservation?
Fast forward. It's 2020 and a way to combine these interests was possible after all: I chose to write a master's thesis about edge machine learning for camera traps. My thesis partner and I really got our hands dirty with model training, hardware, and a test environment at the local zoo. The setup of this competition is thus familiar, although with different sensors and new tools. One thing I take with me is that with devices out in the field, several aspects become vital:
- Logging. Is the device even alive? Is it operating as intended?
- Maintenance. How can OTA (over-the-air) updates be achieved if parameters, or even the ML model itself, need adjustments?
- Processing and battery limitations. How can sensors trigger the "heavy" analysis to start, so that it doesn't need to run around the clock?
... to mention a few! These aspects are facilitated by some kind of centralized platform to which all devices are connected, managed, and monitored. For this purpose, there are options such as Azure IoT Hub or IoTConnect; the latter is used in this project.
📺 Dashboard - IoTConnect
In IoTConnect, each device is provided with a template to define its attributes and twin properties. This Hackster project describes how to create a template and other basic features of IoTConnect.
In the Devices tab, it is possible to set up simulated telemetry values - perfect for quick testing! Since the built-in simulation options are rather simple, see the attached file for a script that sends more customized telemetry data.
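As a rough idea of what such a script can look like, here is a minimal sketch using paho-mqtt. The broker host, port, topic, credentials, and attribute names below are placeholders; the actual values and payload format are dictated by your IoTConnect device template and connection info, so treat this as a starting point rather than the attached script itself.

```python
# Minimal telemetry simulator sketch (paho-mqtt 1.x style).
# All connection details and attribute names are placeholders - take the real
# ones from the IoTConnect device connection info and template.
import json
import random
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "your-iotconnect-broker.example.com"       # placeholder
BROKER_PORT = 8883                                       # placeholder (TLS)
TOPIC = "devices/elephant-collar-01/messages/events/"    # placeholder topic
CLIENT_ID = "elephant-collar-01"                         # placeholder device id

client = mqtt.Client(client_id=CLIENT_ID)
# client.username_pw_set("<user>", "<password>")  # credentials from IoTConnect
# client.tls_set()                                # enable TLS if required
client.connect(BROKER_HOST, BROKER_PORT)
client.loop_start()

while True:
    # Fake sensor values matching the attributes defined in the device template.
    payload = {
        "battery": random.randint(40, 100),
        "temperature": round(random.uniform(20.0, 35.0), 1),
        "lat": -19.02 + random.uniform(-0.01, 0.01),
        "lon": 23.43 + random.uniform(-0.01, 0.01),
    }
    client.publish(TOPIC, json.dumps(payload))
    time.sleep(10)
```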
Motivation
A proposed add-on to the collar is to also monitor locations such as water holes. When the position tracker is close to a camera station, it can start image capturing and analysis. Being able to trigger the camera trap like this is really valuable, as it saves processing power. Moreover, network traffic can be reduced if some analysis is done on the edge device rather than sending loads of frames.
Alternative approaches
I considered two options for edge machine learning for the camera traps:
1. Several camera-equipped leaf devices connected to an edge device, such as a Raspberry Pi with the IoT Edge runtime. (saved for a later Hackster project!)
2. Use OpenMV as a self-contained edge device, with the responsibility to both capture and analyze data, as well as to send result telemetry to the cloud. (chosen approach)
Let's walk through the process of the chosen approach, illustrated below.
Dataset
A dataset of four classes (Elephant, Rhinoceros, Zebra, and Buffalo) was used, with 1155 images for training and 311 for testing in total.
Note: If you have an OpenMV, you can quickly collect a custom dataset from the IDE, as described in this tutorial. I did not have any elephants in my apartment, so I used an open dataset from Kaggle.
Training
Edge Impulse is a helpful tool for creating TinyML models. To create a model, register an account and upload your first dataset in the Data Acquisition tab. Then follow the Impulse Design steps.
In the Edge Impulse Studio, the creation of CNN models is straightforward (migrating from the TensorFlow Object Detection API in a notebook, I felt grateful for these simple blocks!). Either use pre-defined data processing blocks or include custom code. In the last step of the Impulse Design, you can start the training.
Performance
After training, the performance result is shown. Notice the performance difference depending on the model optimization: if you choose the quantized version of your model, the model parameters will be of data type int8 rather than float32, which affects both inference time and memory usage.
Note: Look over the on-device performance to determine if the model is small enough to run on your device. For example, an ESP32 has only 520 KiB of SRAM.
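If you want to double-check what a downloaded model actually contains, a small local script can report its file size and tensor data types. This is an optional sanity check outside the Edge Impulse workflow and assumes you have TensorFlow installed on your computer:

```python
# Optional sanity check of a downloaded model file (assumes TensorFlow is
# installed locally; "trained.tflite" is the file exported from Edge Impulse).
import os
import tensorflow as tf

MODEL_PATH = "trained.tflite"

print(f"Model size: {os.path.getsize(MODEL_PATH) / 1024:.1f} KiB")

interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()

# int8 input/output tensors indicate the quantized build, float32 the original.
for detail in interpreter.get_input_details():
    print("input :", detail["shape"], detail["dtype"])
for detail in interpreter.get_output_details():
    print("output:", detail["shape"], detail["dtype"])
```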
Model testing
In the Devices tab, it is possible to connect your smartphone as a device and either collect new test data in Edge Impulse or download the model to run live classification on the phone. This web client is open source and I look forward to experimenting with it more later on. I tried the model on camera trap streams from Africam.
Model deployment
To deploy your model, choose OpenMV in the Deployment tab and download the generated build folder. Connect your OpenMV camera and copy the files onto its SD card; they will overwrite the default trained.tflite and labels.txt. Install the OpenMV IDE, connect to the device in the lower-left corner, and run the Python file. The video below shows the camera running inference on the device.
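For reference, the classification loop in the generated Python file looks roughly like the sketch below (based on the standard Edge Impulse OpenMV example; your generated file may differ in details):

```python
# Hedged sketch of an OpenMV classification loop.
import sensor, image, time, tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((240, 240))      # square crop to match the model input
sensor.skip_frames(time=2000)

# labels.txt and trained.tflite are the files copied to the SD card.
labels = [line.rstrip("\n") for line in open("labels.txt")]

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    for obj in tf.classify("trained.tflite", img):
        predictions = list(zip(labels, obj.output()))
        print(predictions, "fps:", clock.fps())
```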
OpenMV H7 Plus is the go-to device for running your own models due to its more powerful processor compared with the standard OpenMV H7. As I wanted internet connectivity to reach IoTConnect, I also used a standard OpenMV H7 onto which I had soldered header pins so that I could add the OpenMV WiFi shield. To make the model run on the standard H7, I reduced its size by retraining the model with the following changes:
- Changed to MobileNetV2 0.05, which has the smallest size (around 214K)
- Reduced the input size to 80x80
- And in the application: feed it grayscale images of size QQVGA
The application itself performs model inference and publishes to an MQTT broker. I connected the device to the broker provided by IoTConnect and could see the results populate in real time.
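A minimal sketch of how such an application can be structured is shown below. The WiFi credentials, broker host, topic, and client id are placeholders, and the mqtt import refers to the mqtt.py helper shipped with the OpenMV examples; the exact connection parameters for IoTConnect should be taken from the device's connection info.

```python
# Hedged sketch combining on-device inference with MQTT publishing.
# All network settings below are placeholders.
import time, network, sensor, tf
import ujson as json
from mqtt import MQTTClient   # mqtt.py module from the OpenMV examples

SSID, KEY = "my-ssid", "my-key"                        # placeholder WiFi credentials
BROKER = "broker.example.com"                          # placeholder broker host
TOPIC = "devices/openmv-trap/telemetry"                # placeholder topic

# Connect the OpenMV WiFi shield.
wlan = network.WINC()
wlan.connect(SSID, key=KEY, security=wlan.WPA_PSK)

client = MQTTClient("openmv-trap", BROKER, port=1883)
client.connect()

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)   # grayscale frames for the small H7 model
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)

labels = [line.rstrip("\n") for line in open("labels.txt")]

while True:
    img = sensor.snapshot()
    for obj in tf.classify("trained.tflite", img):
        # Publish only the top prediction as a small JSON message.
        label, confidence = max(zip(labels, obj.output()), key=lambda p: p[1])
        client.publish(TOPIC, json.dumps({"label": label, "confidence": confidence}))
    time.sleep(5)
```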
Motivation
For several reasons, it is interesting to monitor animal behaviors, and with sensors, it is possible to gather continuous data over a long period of time. Moreover, with real-time analysis, park rangers can get informed if abnormal activity is registered and take action.
Model creation
The procedure is quite similar to the training of the CNN model; however, this time it is easier to create your own dataset:
- In Edge Impulse, connect your smartphone to capture acceleration data.
- Capture data with different labels and perform the corresponding movements during the sampling time. I captured Walking, Rushing, and Still by doing these activities myself while carrying my phone.
The slideshow below shows samples of the raw accelerometer data:
In the Features tab, the spectral features can be explored. As the different categories formed fairly distinct clusters, I considered the data good enough to produce a model able to differentiate between the categories.
Model verification
Recall that in the Devices tab, it is possible to connect your smartphone as a device and either collect new test data or download the model to run live classification on the phone. While recording the video below, I performed the activities STILL > WALKING > STILL > RUSHING and got the expected results. Sadly, I have no dog and I wasn't eager to record myself running around. 👀
Guidance for deployment on hardware
I simply did not have any microcontroller equipped with a GPS tracker or accelerometer at hand. However, if you do, choose a deployment option in Edge Impulse. For example, the following examples are provided for Arduino:
Guidance for simulation in IoTConnect
In addition to the model verification with your smartphone, it is possible to simulate telemetry data in IoTConnect. In the figure below, I illustrate how smartphone testing and telemetry simulation can validate the two key features of the solution. Firstly, we can confirm that the TinyML works as intended with the Live Classification tool in Edge Impulse. Thereafter, we can simulate telemetry in IoTConnect that the classification and other sensors would produce on an actual device. That is, both the device and the cloud components can be tested.
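As an illustration, a simulated message for the activity model could look like the snippet below; the attribute names are made up and must match whatever is defined in your IoTConnect device template:

```python
# Illustrative simulated telemetry message for the activity classifier.
# Attribute names are hypothetical - align them with your device template.
simulated_message = {
    "activity": "rushing",      # class the TinyML model would output on-device
    "confidence": 0.93,
    "lat": -19.02,              # made-up GPS position
    "lon": 23.43,
    "battery": 87,
}
```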
I really enjoyed exploring the possibilities for setting up a development environment. One (off-scope) idea I got was to create a mobile app that could simulate being a microcontroller (please let me know if something like that already exists!). That way, I would have been able to let my phone connect to IoTConnect and test the accelerometer model directly. 🧪

In production, robustness regarding connectivity, error handling, etc. becomes vital. Without such considerations, the solution may run fine but eventually enter a faulty state from which it cannot recover. This time, I stayed focused on the fun of creating different kinds of models and testing concepts. 💡

Regarding real usage, I think that the addition of some stationary devices like a camera trap adds extra value when placed at water holes or similarly popular places, as images say a lot. I'm not sure whether an OpenMV would cope with rough weather conditions; maybe a robust alternative such as a Raspberry Pi would be necessary. It would then also be interesting to communicate via Bluetooth or LoRa with all nearby collar devices. In conclusion, there are many super interesting directions within this topic. 🌍
Credit to my thesis partner Amanda Tydén, as we discovered the topic of edge machine learning together earlier this year.