The vast majority of the world’s wild plants (almost 90%) and 75% of leading global crops depend on animal pollination. Pollinator species include wasps, ants, butterflies, beetles, moths and bees.
Climate change is causing shifts in the seasons which are disrupting flowering times and the availability of food, shelter and nesting sites for pollinating insects. It also brings extremes of drought, heavier rainfall and flooding. Many pollinating insects are struggling to survive.
For example: rising average temperatures associated with climate change may cause a plant to flower earlier each year, while its pollinators cannot keep pace by hatching earlier. In the worst case, the plant's seed production decreases and its reproduction is impaired, while the insects have to search for other plants as a food supply.
Among these pollinators, wild (or solitary) bees in particular are threatened by the effects of climate change.
This project aims to make a small contribution to the effort to examine and document the decline of wild bees, in both absolute numbers and variety of species.
Solution

This is a proof of concept for a simple and cheap smart surveillance camera which helps to document the appearance of wild bees at certain locations for the purpose of counting and classification.
The camera is activated only by the presence of bees, which is detected via the bees' buzzing sound in the vicinity of the device. On activation, the camera takes a still picture of the location and sends it as an email attachment to a designated address.
The core components of the solution are the QuickLogic Corp. QuickFeather Dev Kit and the AI-Thinker ESP32-CAM module.
The QuickFeather dev kit is responsible for the acquisition and classification of audio data, while the ESP32-CAM is used for acquiring the imagery and for network communication.
Acquisition and classification of audio data

The acquisition and classification of audio data, i.e., the detection of a bee's buzzing sound, is implemented using QuickLogic's QuickFeather Development Kit and the SensiML Analytics Toolkit.
QuickFeather Development Kit

QuickLogic's QuickFeather Development Kit is a small form factor system ideal for enabling the next generation of low-power ML-capable IoT devices. QuickFeather is based on open-source hardware, compatible with the Adafruit Feather form factor, and is built around 100% open-source software.
The QuickFeather is powered by QuickLogic’s EOS™ S3, the first FPGA-enabled Arm Cortex®-M4F MCU to be fully supported with Zephyr RTOS. Other functionality includes:
- GigaDevice 16-Mbit flash memory (GD25Q16CEIGR)
- mCube MC3635 accelerometer
- Infineon DPS310 pressure sensor
- Infineon IM69D130 PDM digital microphone
- Powered from USB or a single Li-Po battery
- Integrated battery charger
- USB data signals tied to programmable logic
For a comprehensive description visit the product page at
https://www.quicklogic.com/products/eos-s3/quickfeather-development-kit/ .
SensiML Analytics Toolkit

The SensiML Analytics Toolkit, integrated with the QuickFeather Development Kit, is the only AI development tool delivering scalable, production-grade workflows for IoT development teams. SensiML's attention to scalable, transparent workflows makes it the best solution for getting AI out of the lab and into real product deployments.
SensiML brings real-time event detection to the IoT sensing endpoint with software usable by developers regardless of AI/ML skill level. Models are created using SensiML’s AutoML analytics engine that automatically generates an optimized, device-ready model maximizing accuracy within the specified resource constraints of the target hardware.
Resulting models are automatically translated into embedded code (with options for binary, library, or full source output) delivering real-time inferencing that executes directly on the target device itself… not in the cloud. The result is smart applications that run faster, provide insight where events occur, require less network performance, and are more secure through better partitioning of data processing tasks.
For more information see the product page at https://sensiml.com/ .
Documentation & getting startedThe documentation for the QuickFeather Development Kit and SensiML Analytics Toolkit at
https://sensiml.com/documentation/index.php
provides guidance and information for all steps necessary to develop and implement the solution described in this document.
Collection of audio data

The QuickFeather Development Kit is used to collect the training and test data needed to develop the machine learning model used for classification later.
The necessary steps for preparing the QuickFeather board to collect audio data are described in different sections in the SensiML documentation at
https://sensiml.com/documentation/firmware/quicklogic-quickfeather/quicklogic-quickfeather.html
Flashing the QuickFeather data collection firmware

Obtain the "Simple Stream - Audio data collection over USB Serial" binary file (https://sensiml.com/documentation/_downloads/1cb85ab2d45c7eb34213ff1bc48f2511/quickfeather-audio-data-collection-serial.bin), download it to your computer and write the image file to the QuickFeather board, using the TinyFPGA Programmer from QuickLogic to flash your device. Instructions for setting up your computer to flash a QuickFeather board can be found at https://github.com/QuickLogic-Corp/TinyFPGA-Programmer-Application .
See that repository's README for detailed flashing instructions.
Collecting sample data with SensiML Data Capture Lab

The SensiML Data Capture Lab (DCL) is used to capture data from the QuickFeather device and to store it on the host computer for further processing and export. To obtain this tool (and for using the Analytics Studio later), it is required to sign up for the community edition (https://sensiml.com/plans/community-edition/) and download the software at https://sensiml.com/download/ .
Installation, setup and usage are described in the SensiML Getting Started Guide at https://sensiml.com/documentation/guides/getting-started/overview.html.
After preparing the QuickFeather and the SensiML Data Capture Lab and connecting the device via USB to the PC running the DCL, audio samples can be acquired at an appropriate location, e.g., a "bee hotel".
For details about the capturing process refer to the documentation at https://sensiml.com/documentation/guides/getting-started/capturing-sensor-data.html .
Labeling sample data with SensiML Data Capture Lab

After sampling a sufficient amount of training and test data, the acquired audio samples have to be segmented and labeled. This is done in "Label Explorer" mode in the DCL.
For this proof of concept only two categories are of interest: the sound of a buzzing bee ("buzz") and the background noise without buzzing ("background noise").
For details about the labeling process refer to the documentation at https://sensiml.com/documentation/guides/getting-started/labeling-your-data.html .
Preparing data & building the model

After using the DCL to capture a sufficient amount of buzzing sound and background noise samples and adding the appropriate labels, data preparation and model building have to be completed in the SensiML Analytics Studio.
The whole process is described in detail in the documentation at https://sensiml.com/documentation/guides/getting-started/analytics-studio.html .
For this proof-of-concept the defaults have been used for most of the configuration options.
The built model can be tested with the sampled data (therefore it might be a good idea to split the captured data sets into training and test data). If the results are not satisfactory, the whole process may need to be repeated with different settings.
Deploying the model

Once model building has been completed, the next step is to create and download a "Knowledge Pack" which can be deployed to the QuickFeather for running the model.
On the "Download Model" screen, the appropriate "HW Platform" ("QuickFeather") and "Platform Version" ("latest"), the desired "Format" ("Binary"), and the "Application" ("SensiML AI Simple Stream") have to be selected.
Refer to the documentation at https://sensiml.com/documentation/guides/analytics-studio/generate-knowledge-pack.html for details.
After downloading and unpacking the "Knowledge Pack", the contained .bin file has to be written to the QuickFeather device, as described above for the data collection firmware.
After resetting the device, the running model can be tested on the QuickFeather. The classification result can be received via a serial connection as described in the documentation at https://sensiml.com/documentation/firmware/quicklogic-quickfeather/quicklogic-quickfeather.html#quickfeather-simple-streaming .
The classification result is represented as a JSON string containing the detected category of the acquired data as the numeric value of the field "Classification".
For this project the categories are:
- 1 for background noise
- 2 for buzz
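A result line received over the serial connection might look similar to the following (the values and the exact field set are illustrative and depend on the model and firmware version):

```
{"ModelNumber":0,"Classification":2}
```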
The QuickFeather is now ready to detect buzzing bees and to control the camera component.
Imagery and communication

For imagery and network communication an AI-Thinker ESP32-CAM module is used.
AI-Thinker ESP32-CAM

The AI-Thinker ESP32-CAM module is an ESP32 development board with an "integrated" OV2640 camera, microSD card support, an on-board flash LED and several GPIOs to connect peripherals, including two pins dedicated to UART. Since it doesn't have a built-in programmer, an FTDI programmer is required to connect it to a development computer and upload firmware. It's inexpensive and easy to use, and quite suitable for IoT devices requiring a camera with advanced functions like image tracking and recognition.
Firmware

The custom firmware implements the following functionality:
Initialization
- Setup camera
- Setup SPIFFS file system
- Connect to Wi-Fi network
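A condensed sketch of these initialization steps, assuming the standard AI-Thinker pin mapping and the usual Arduino core APIs (the Wi-Fi credentials are placeholders, and struct field names may differ slightly between esp32 core versions):

```cpp
#include "esp_camera.h"
#include <SPIFFS.h>
#include <WiFi.h>

// Placeholder credentials - replace with your own network.
const char *WIFI_SSID = "your-ssid";
const char *WIFI_PASS = "your-password";

void setup() {
  Serial.begin(115200);  // same port later used to read the QuickFeather's output

  // Camera configuration for the AI-Thinker ESP32-CAM pin mapping.
  camera_config_t config = {};
  config.ledc_channel = LEDC_CHANNEL_0;
  config.ledc_timer = LEDC_TIMER_0;
  config.pin_pwdn = 32;  config.pin_reset = -1;  config.pin_xclk = 0;
  config.pin_sscb_sda = 26;  config.pin_sscb_scl = 27;
  config.pin_d7 = 35; config.pin_d6 = 34; config.pin_d5 = 39; config.pin_d4 = 36;
  config.pin_d3 = 21; config.pin_d2 = 19; config.pin_d1 = 18; config.pin_d0 = 5;
  config.pin_vsync = 25; config.pin_href = 23; config.pin_pclk = 22;
  config.xclk_freq_hz = 20000000;
  config.pixel_format = PIXFORMAT_JPEG;
  config.frame_size = FRAMESIZE_SVGA;
  config.jpeg_quality = 12;  // 0-63, lower means better quality
  config.fb_count = 1;
  if (esp_camera_init(&config) != ESP_OK) Serial.println("Camera init failed");

  // Mount SPIFFS (format on first use) for buffering the captured image.
  if (!SPIFFS.begin(true)) Serial.println("SPIFFS mount failed");

  // Connect to the Wi-Fi network.
  WiFi.begin(WIFI_SSID, WIFI_PASS);
  while (WiFi.status() != WL_CONNECTED) delay(500);
}
```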
Operation
- Read & parse classification result from serial connection
- Check classification
- Capture still and store to SPIFFS
- Compose email with image file attachment and send message
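A sketch of this operation loop, under the assumption that the QuickFeather's classification results arrive line by line on the serial port; sendEmailWithAttachment() is a hypothetical helper wrapping ESP-Mail-Client, sketched after the library list below:

```cpp
#include <ArduinoJson.h>
#include "esp_camera.h"
#include <SPIFFS.h>

const int BUZZ_CLASS = 2;  // numeric value of the "buzz" category (see above)

// Hypothetical helper implemented with ESP-Mail-Client (sketched below).
bool sendEmailWithAttachment(const char *path);

void loop() {
  if (!Serial.available()) return;

  // Read and parse one JSON line emitted by the QuickFeather.
  String line = Serial.readStringUntil('\n');
  StaticJsonDocument<256> doc;
  if (deserializeJson(doc, line) != DeserializationError::Ok) return;

  // Check the classification result.
  if (doc["Classification"].as<int>() != BUZZ_CLASS) return;

  // Buzzing detected: capture a still image ...
  camera_fb_t *fb = esp_camera_fb_get();
  if (!fb) return;

  // ... store it to SPIFFS ...
  File f = SPIFFS.open("/capture.jpg", FILE_WRITE);
  f.write(fb->buf, fb->len);
  f.close();
  esp_camera_fb_return(fb);

  // ... and send it to the designated address.
  sendEmailWithAttachment("/capture.jpg");
}
```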
Besides the Arduino core for the ESP32, the following libraries need to be installed:
ArduinoJson: Efficient JSON serialization for embedded C++
https://arduinojson.org
for parsing the JSON data emitted by the QuickFeather.
ESP-Mail-Client: Mail Client Arduino Library for ESP32 and ESP8266 v 1.2.0
https://github.com/mobizt/ESP-Mail-Client
for sending the captured image as mail attachment.
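A rough sketch of such a helper for sending the stored image as an attachment; the exact API differs between ESP-Mail-Client versions, and the server, credentials and addresses are placeholders:

```cpp
#include <ESP_Mail_Client.h>

bool sendEmailWithAttachment(const char *path) {
  SMTPSession smtp;

  // SMTP server settings - all values are placeholders.
  ESP_Mail_Session session;
  session.server.host_name = "smtp.example.com";
  session.server.port = 465;
  session.login.email = "sender@example.com";
  session.login.password = "app-password";

  // Compose the message.
  SMTP_Message message;
  message.sender.name = "Bee Cam";
  message.sender.email = "sender@example.com";
  message.subject = "Bee detected";
  message.addRecipient("Observer", "observer@example.com");
  message.text.content = "A buzzing bee was detected - image attached.";

  // Attach the captured image stored on SPIFFS.
  SMTP_Attachment att;
  att.descr.filename = "capture.jpg";
  att.descr.mime = "image/jpeg";
  att.file.path = path;
  att.file.storage_type = esp_mail_file_storage_type_flash;
  message.addAttachment(att);

  // Connect and send (sendMail closes the session by default).
  if (!smtp.connect(&session)) return false;
  return MailClient.sendMail(&smtp, &message);
}
```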
Code

See Code section …
Wiring

The QuickFeather board and the ESP32-CAM module are connected via a serial (UART) link, over which the ESP32-CAM reads the classification results emitted by the QuickFeather.
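Assuming a plain 3.3 V UART connection between the two boards, a minimal wiring could look like this (pin assignments are illustrative and should be verified against the respective pinouts):
- QuickFeather UART TX to ESP32-CAM U0RXD (GPIO 3)
- QuickFeather GND to ESP32-CAM GND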
For testing, the device was attached to a plant support stick and placed near a "bee hotel" which is frequently visited by solitary bees.
For the test the PoC device was powered by a USB power bank (since there was no working LiPo battery at hand).
Measuring the power consumption revealed that the PoC device draws an average current of about 100 mA, which is way too much for long-term operation in the field.
While the QuickFeather itself draws only about 1 to 2 mA, it is no surprise that most of the consumption is caused by the ESP32-CAM with its Wi-Fi communication.
Future implementations of the ESP32-CAM firmware should take advantage of the ESP32's low-power sleep capabilities to reduce the power consumption.
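For instance, the ESP32-CAM could spend most of its time in deep sleep and be woken by a GPIO that the QuickFeather asserts on detection; a minimal sketch of this idea (the wake pin and wiring are hypothetical):

```cpp
#include "esp_sleep.h"

void goToSleep() {
  // Hypothetical wiring: the QuickFeather pulls GPIO 13 high on a "buzz" detection.
  esp_sleep_enable_ext0_wakeup(GPIO_NUM_13, 1);
  esp_deep_sleep_start();  // execution resumes in setup() after wake-up
}
```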
Beyond the proof of concept

This proof of concept leaves plenty of room for improvement.
- The audio data collection and labeling process has to be extended and refined, possibly even to distinguish different insect and solitary bee species.
- The data preparation and model building process needs to be refined for better accuracy and precision.
- Power supply (e.g., solar power) and power consumption (taking advantage of sleep modes) need to be revised - see above.
- Imagery could be improved (multiple stills, short video clip etc.).
- Communication could be adapted (HTTP upload instead of email) and enriched (classification data, additional sensor data like temperature, humidity etc.).
- Image recording on SD card may be considered.
- Nice weather-proof enclosure.
- ...