Nearly 19 million acres of forest are destroyed annually, equal to 27 football pitches a minute. Forests serve as homes for thousands of animal species, and for many people they are a source of food, water, clothing, medicine, and shelter. The so-called lungs of the Earth also mitigate climate change by acting as a carbon sink. As cities expand and natural calamities like forest fires rise at an alarming rate, conserving the forest ecosystem is a vital step in our fight against climate change. Forest fires need no introduction, yet more than 80% of them are anthropogenic (human-made) rather than biogenic (natural), driven by activities such as illegal logging and deforestation. Forest loss from illegal logging is a threat to biodiversity in forest habitats: more and more species cannot survive as the practice destroys the habitat crucial for natural interconnectedness, and the extensive fragmentation and degradation of forests have pushed more animal and plant species to the verge of extinction. Stopping illegal logging would therefore help restore the flora and fauna and return the natural balance needed to sustain the world.
Proposed Solution: The solution is to build a solar-powered, edge-based illegal logging detector driven by acoustic events. Most solutions to date have relied heavily on the cloud to post-process data collected from sensors, which makes them power-hungry devices limited by network bandwidth. The proposed device instead classifies three classes of acoustic events at the edge, namely Normal (natural forest sound), Axe (logging of trees with an axe), and Chainsaw (logging of trees with a chainsaw), and only sends the classification results and device state over radio frequency to the base station. The base station receives the data, uploads it to the cloud, and generates an SMS alert to the concerned authority if illegal logging is detected. To sustain this solution in forests, we harvest solar energy and store it in a battery. To save power, the device runs in a continuous cycle of sleeping, waking up to run inference, and going back to sleep.
Our first goal is to select components that are energy efficient, as the device runs entirely on battery and solar energy. Here is the list of components selected:
1) QuickFeather Development Kit: This board features an 80 MHz Arm Cortex-M4 core with an eFPGA, which helps perform compute-intensive tasks faster with a smaller energy footprint. The board has an onboard microphone with a low-power sound detector (LPSD), a NOR flash, and a battery charger with a JST connector, which makes it a suitable candidate for our application.
2) XBee S2C: This RF module can operate on as little as 2 mW of power, with a sleep current in the microamps. Apart from the power specifications, the availability and price of the module make it stand out.
3) Solar Clicker Board: This module houses a BQ25570 Nano Power Boost Charger and Buck Converter from TI. The low cold-startup voltage (300 mV), peak output current (110 mA), easy pinout, and backup supercapacitor made it suitable for our project.
I found a cheap solar lighting module on Amazon, which turned out to be a good enclosure for the project: the solar panel is already attached to the enclosure, and there is also a battery holder. As the microphone on the QuickFeather board is downward facing, ensure that its opening is not blocked by the enclosure. All the components were connected using female header pins soldered to a prototyping board. The hardware connections are as follows:
1) The solar panel is connected to the input pin of the solar click module. To enable the boost converter, the EN pin of the solar clicker module is shorted to ground. The boosted voltage is in turn connected to the QuickFeather board through the JST connector. The XBee is powered from the regulated voltage output (J3_15) on the QuickFeather board to maintain stable RF communication. The battery is connected through the VBAT pin just beside the JST connector.
2) The Rx and Tx pins on the QuickFeather board are connected to the Rx and Tx pins of the XBee module respectively. The IO_6 pin on the QuickFeather, connected to Pin 9 of the XBee module, controls the XBee's sleep cycles: pulling it high (3.3 V) drives the XBee into sleep mode, and pulling it low (GND) wakes it up.
Small fly wires were used to interconnect the components. Below are some snaps of the hardware design.
SensiML provides an end-to-end software solution for data capture, data modeling, and firmware generation for on-device inference on low-power, resource-constrained devices. To use the QuickFeather with SensiML solutions, we need to flash the data collection firmware bin file onto the board. You can download the bin file directly from the SensiML website (make sure to download the audio-supported bin file) and follow this tutorial to get started. I faced issues creating an alias for the TinyFPGA programmer; using the Git Bash console resolved them, letting me create the alias "qfprog" and flash the bin file to the board.
Since we will need to write some firmware later in the project, it is worth getting hands-on experience generating bin files with the QuickFeather Simple Streaming Interface AI Application Project in the Eclipse IDE. Here is a descriptive walkthrough for setting up the application project in Eclipse.
After setting up the project in the IDE, head to the Fw_global_config.h file and make the following changes so the device can be detected in the Data Capture Lab.
Select the Audio macro to enable audio streaming via UART. You also need to enable the SENSOR_AUDIO_LIVESTREAM_ENABLED macro in the sensor_audio_config_user.h header file present in the sensor_audio directory of the project.
Hit the build button (hammer-shaped) to generate the bin file, located in the GCC_Project/output/bin directory. Open Git Bash in the same directory, press the reset button on the board followed by the user button before the green LED stops blinking, and use the following command to flash the bin file:
qfprog --port <Device COM Port> --m4app <Name of the bin file>.bin --mode m4
Once the program is flashed, connect the USB-to-TTL serial converter to the QuickFeather as follows:
Rx (serial converter) -> Rx (QuickFeather board)
Tx (serial converter) -> Tx (QuickFeather board)
Ground (serial converter) -> Ground (QuickFeather board)
To begin, create a SensiML account, download the DCL software, and sign in. Once the connection is ready, open the Data Capture Lab and create a new project by giving it a name and a save location. Then switch from 'Label Explorer' mode to 'Capture' mode. The DCL uses plugins in the form of SSF files that tell it how to communicate with devices. Download the one for the QuickFeather here (make sure to choose the one for Simple Streaming) and add it via Edit->Import Device Plugin, selecting the just-downloaded SSF file. In the upper-right corner you'll see the Sensor Configuration is empty, so click the add new sensor button, select the QuickFeather Simple Stream plugin, use the 'Audio' capture source with a sample rate of 16000 samples per second, and ensure that 'Microphone' is checked. Go ahead and save it with a name you like.
With the board set up, click "Connect" within the DCL after finding the correct serial port (the one for the USB-to-TTL serial converter!) with the "Scan Devices" button. If it doesn't work initially, try unplugging the converter and plugging it back in, or disconnecting and reconnecting. Just below that pane there is a section for adding labels and metadata. I added three labels: normal, axe, and chainsaw. Hit the record button when you are ready.
Once the recording is complete, we need to clean the audio data, as we don't want undesirable fragments of audio feeding into the training model. This can be done by going to the Project Explorer tab in the upper-left and double-clicking the capture you want to modify. We can then add segments by holding right-click and dragging the mouse over the areas we want to keep; they appear in the upper-right area. This also lets us capture different labels within the same capture by creating segments for each and changing their labels. You can also use the Detect Segments option at the bottom to automatically detect segments in the data and do the repetitive work for you. Try to create an equal number of segments for each class: this keeps the dataset balanced and prevents the model from biasing toward one class. You can also add a video of the data to correlate with the audio events.
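As a rough illustration of that balance check, here is a small Python sketch (not part of the DCL; the label names come from this project but the counts and the 20% tolerance are made up for illustration) that flags a class imbalance before training:

```python
# Hypothetical pre-training sanity check: flag a dataset where one class
# has far more segments than the others. Tolerance value is an assumption.
from collections import Counter

def check_balance(segment_labels, tolerance=0.2):
    """Return True if no class deviates from the mean count by more than `tolerance`."""
    counts = Counter(segment_labels)
    mean = sum(counts.values()) / len(counts)
    return all(abs(n - mean) / mean <= tolerance for n in counts.values())

labels = ["normal"] * 40 + ["axe"] * 38 + ["chainsaw"] * 42
print(check_balance(labels))  # True: within 20% of the mean count
```

If the check fails, either record more data for the minority class or rely on the "Balance Data" option during model building.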
After heading to File->Close File, it's time to use the Analytics Studio to generate a model from the captured data. Remember that data saved in the DCL is automatically uploaded and stored in the cloud, although it might take a few moments to refresh and appear.
Data Training: SensiML provides a Community Edition subscription plan which offers most of the features of the Analytics Toolkit at zero cost, a great deal for makers who want to experiment.
Login to Analytics Studio in a web browser and select the project that was created in the DCL.
To train a model, we must first tell the Analytics Studio which data to use, in the form of a query. This can be done by clicking on the Prepare Data tab and entering a name, session, label, relevant metadata, the sensor(s), and how to plot it. After saving, the dataset should appear on the right, where we can see how many segments are in each label.
A pipeline is a container for a series of data processing steps. The pipeline object allows you to get an existing pipeline or create a new one with a given name. With the created object you can set input data sources and add transforms, feature generators, feature selectors, feature transforms, and classifiers. We can create a pipeline by clicking on the Build Model tab and entering the following details:
1) Pipeline Name:
2) Select the query that was just created,
3) Window size: the number of samples buffered for each event. This can have a remarkable effect on both the accuracy and the size of the model, so set it wisely.
4) Optimization Metrics: choose your priority among Accuracy, F1 Score, and Sensitivity.
5) Classifier size: limits how large the model can be, which is great for loading onto ROM-constrained chips.
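To get a feel for the window-size trade-off, here is a quick sketch (illustrative values only, not SensiML defaults) converting a window size into the audio duration each classification sees at the 16 kHz sample rate configured earlier:

```python
# Window size vs. audio duration at the DCL sample rate used in this project.
SAMPLE_RATE = 16000  # samples per second, as configured in the Data Capture Lab

def window_duration_ms(window_size):
    """Duration of audio covered by one window, in milliseconds."""
    return window_size * 1000 / SAMPLE_RATE

print(window_duration_ms(400))    # 25.0  -> very short slices, smaller model
print(window_duration_ms(16000))  # 1000.0 -> one full second of audio per event
```

A longer window captures more of an axe or chainsaw event but grows the buffer and model; a shorter window reacts faster but may miss slow-evolving sounds.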
Clicking Optimize builds the model. Click the Show Advanced Settings bar to reveal more options; I like this set of options as it provides more flexibility and control over the model-building process. One option I chose was "Balance Data", which evens out the data in each class for training. Feel free to play around with these settings: the Pipeline Log is quite helpful, suggesting which parameter to change in case of a build failure.
SensiML provides more options to investigate the generated model. Switch to the Explore Model tab on the left side to reveal all details related to the model, such as Model Visualization, Confusion Matrix, Feature Summary, and Knowledge Pack Summary. If you want to optimize and tweak the underlying algorithm, please visit the advanced model building tutorial.
You can download the model/knowledge pack by clicking the Download Model option in the left drop-down menu. SensiML supports quite a variety of boards; the one we need to select is the QuickFeather board. Even if your board is not supported, you can select ARM GCC Generic, which downloads a static library along with example code to use with any Arm-based board. Below are the options to focus on before downloading the knowledge pack:
1) Float Option: generates floating-point instructions for the target; select the floating-point convention that matches your toolchain.
2) Format: We have three options here (Binary, Library, Source). Source code is only available in the paid version. If you want to flash directly onto the board, select the Binary format. For our project we need the Library format, for seamless integration into the firmware.
3) Application: Select the Simple Stream option, as we used UART for data collection rather than the Wi-Fi board.
4) Debug: Under the Advanced Settings tab, make sure to set this to true, as issues with recognition results not being displayed on the UART terminal have been reported on the QuickLogic forum.
The first step of firmware development is to integrate the generated knowledge pack into the qf_ssi_ai_app project. The following links will help you achieve this:
Compiling the SensiML Library into the QuickLogic QuickFeather Document
The goal here is to develop firmware that performs audio inference for a set duration, enters sleep mode for a fixed duration to save power, and wakes up again to resume the loop. The qf_ssi_ai_app project uses Audio DataBlock Processor threads to collect the audio data, feed it into the model, and output the recognition results to the console. If we suspend this thread, the device automatically enters the idle state, as there are no more tasks to run. The firmware design plan is explained below:
A step-by-step guide to the code development follows:
1) Enable recognition mode in the sensor_audio_config_user.h file:
#define SENSOR_AUDIO_RECOG_ENABLED (1) // Change it to 1
#define SENSOR_AUDIO_LIVESTREAM_ENABLED (0)
#define SENSOR_AUDIO_DATASAVE_ENABLED (0)
2) Add the following header files to the main.c file
#include "ql_audio.h"
#include "sml_output.h"
3) Create a handle to attach to the Audio DataBlock Processor thread (declared extern here because it is defined in sensor_audio_process.c), plus timer handles to control the active and sleep cycles of the device:
extern TaskHandle_t xTobeParsed;
TickType_t xTimestart;
TimerHandle_t idealtimer;
TimerHandle_t worktimer;
#define IDEAL_TIMER_PERIOD 10000
#define WORK_TIMER_PERIOD 2000
4) Now attach the task to the thread by making the changes in the following files:
- sensor_audio_process.c:
// Add this at the top of file
TaskHandle_t xTobeParsed;
// Pass the address of the task handle in the last parameter
datablk_processor_params_t audio_datablk_processor_params[] = {
{ AUDIO_DBP_THREAD_PRIORITY,
&audio_dbp_thread_q,
sizeof(audio_datablk_pe_descr)/sizeof(audio_datablk_pe_descr[0]),
audio_datablk_pe_descr,
256,
"AUDIO_DBP_THREAD",
&xTobeParsed /****** Edited here ********/
}
};
- datablk_processor.h:
// As we are passing the address of the task handle, we need to change type of the
// datablk_pe_handle to pointer.
typedef struct st_datablk_processor_params
{
int dbp_task_priority; ///< desired task priority
QueueHandle_t *p_dbp_task_inQ; ///< input queue handle
int num_pes; ///< number of PEs for this thread
datablk_pe_descriptor_t *p_descr; ///< array of thread PE configurations
int stack_depth; ///< depth of stack needed for this thread
char *dbp_task_name; ///< datablock processor task name string
xTaskHandle *datablk_pe_handle; /****** Edited here ********/
} datablk_processor_params_t ;
- datablk_processor.c:
// Remove the ampersand so the argument matches the pointer type.
xTaskCreate ( datablk_processor_task,
p_dbp_params->dbp_task_name,
p_dbp_params->stack_depth,
p_dbp_params,
p_dbp_params->dbp_task_priority,
p_dbp_params->datablk_pe_handle /****** Edited here ********/
);
5) Add functions for the following tasks:
- Initialize Timers:
void timer_init(void)
{
if (!idealtimer) {
idealtimer = xTimerCreate
(
"idealTimer",
IDEAL_TIMER_PERIOD, // period in RTOS ticks (~10 s with a 1 ms tick)
pdTRUE, // auto-reload when the timer expires
(void *)0,
idealTimer_Callback
);
}
if (!worktimer) {
worktimer = xTimerCreate
(
"workTimer",
WORK_TIMER_PERIOD, // period in RTOS ticks (~2 s with a 1 ms tick)
pdTRUE, // auto-reload when the timer expires
(void *)0,
workTimer_Callback
);
}
}
- Callback functions invoked when each timer expires:
void workTimer_Callback (TimerHandle_t timHandle)
{
max_class_print();
vTaskSuspend(xTobeParsed);
TimerStart(1);
uart_tx_raw_buf(UART_ID_SSI,"\r\nSleeping",10);
Xbee_Sleep_Config(1);
TimerStop(0);
}
void idealTimer_Callback(TimerHandle_t timHandle)
{
vTaskResume(xTobeParsed);
TimerStart(0); //work start
Xbee_Sleep_Config(0);
TimerStop(1); // ideal stop
HAL_DelayUSec(1000);
uart_tx_raw_buf(UART_ID_SSI,"\r\nInferencing",13);
}
- Functions to start and stop the timers:
void TimerStart(bool timer_select)
{
BaseType_t status;
if (timer_select) {
status = xTimerStart (idealtimer, 0); // start timer
if (status != pdPASS) {
// Timer could not be started
uart_tx_raw_buf(UART_ID_SSI, "\r\n start ideal timer failed\r\n", 29);
}
}
else
{
status = xTimerStart (worktimer, 0); // start timer
if (status != pdPASS) {
// Timer could not be started
uart_tx_raw_buf(UART_ID_SSI, "\r\n start work timer failed\r\n", 28);
}
}
}
void TimerStop(bool timer_select)
{
if (timer_select) {
xTimerStop(idealtimer, 0);
}
else {
xTimerStop(worktimer, 0);
}
}
- Configure Xbee Sleep Mode:
void Xbee_Sleep_Config(bool enable_sleep) {
IO_MUX->PAD_6_CTRL = 0x103;
// Pull Pad 6 to 3.3 V, which is
// connected to Pin 9 on the XBee
if(enable_sleep)
HAL_GPIO_Write(GPIO_0, 1);
else
HAL_GPIO_Write(GPIO_0, 0);
}
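To make the alternation between the two timers concrete, here is a minimal Python simulation of the duty cycle described above (the FreeRTOS calls are replaced with plain state bookkeeping; this is a sketch of the logic, not the firmware itself):

```python
# Simulate the work/ideal timer hand-off: the device starts active, the work
# timer expires after 2 s and suspends inference, the ideal timer expires
# after 10 s and resumes it. Periods mirror the firmware defines in ms.
WORK_TIMER_PERIOD = 2000    # ms active (inferencing)
IDEAL_TIMER_PERIOD = 10000  # ms idle (sleeping)

def simulate(total_ms):
    """Return (active_ms, sleep_ms) over a run of total_ms, starting active."""
    state, elapsed = "active", 0
    active_ms = sleep_ms = 0
    while elapsed < total_ms:
        period = WORK_TIMER_PERIOD if state == "active" else IDEAL_TIMER_PERIOD
        step = min(period, total_ms - elapsed)
        if state == "active":
            active_ms += step
        else:
            sleep_ms += step
        elapsed += step
        state = "sleep" if state == "active" else "active"  # timer callback fires
    return active_ms, sleep_ms

print(simulate(24000))  # (4000, 20000): two full 2 s active + 10 s sleep cycles
```

With these periods the device is active only one sixth of the time, which is where the power savings come from.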
We need to match the device baud rate to that of the XBee module to establish communication over UART. Visit the following link to set up the XBee module, and change the brate parameter in the uart_setup() function in the App/src/qf_hardwaresetup.c file. The complete project source code is available on GitHub:
https://github.com/Pratyush-Mallick/qorc-sdk.git
Low Power Optimization: Low power mode in the QORC SDK is achieved by leveraging the FreeRTOS tickless idle power-saving technique. When there are no active tasks in FreeRTOS, only the idle task runs, causing the CPU to enter sleep. QuickLogic has a well-written document on this topic; the page can be found here.
Measuring the power consumption of an embedded system is an increasingly difficult task; however, the Nordic Semiconductor Power Profiler Kit II (PPK2) makes it a seamless experience. We measured our system's current consumption in source mode, i.e., the Device Under Test (DUT) is supplied power by the PPK2. Install the Power Profiler app, connect VOUT on the PPK2 to VBAT on the QuickFeather board, connect both grounds, and you are good to go.
Here are the average current consumption figures (over 1 min) at 3.3 V, with a sleep cycle of 10 seconds and recognition of 2 seconds:
- QuickFeather only: 6 mA
- QuickFeather only (DFS): 7 mA (Dynamic Frequency Scaling can be enabled by setting #define CONST_FREQ to 0 in the Fw_global_config.h file)
- XBee only (no sleep): 31 mA
- XBee only (sleep enabled): 1.97 mA (this guide can help you configure the XBee sleep modes)
- QuickFeather + XBee (no sleep): 36 mA
- QuickFeather (DFS) + XBee (sleep): 7 mA
I could achieve a minimum of 3 mA with one further firmware change: configuring the sleep policy node clock to 256 kHz in the s3x_pwrcfg.c file.
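A back-of-the-envelope duty-cycle calculation, using the measured figures above and a hypothetical 2000 mAh battery (an assumption, not the battery used here), shows what these numbers mean for runtime without solar recharging:

```python
# Duty-cycle average current from the measurements above:
# ~36 mA while inferencing (QuickFeather + XBee awake), ~3 mA optimized sleep.
ACTIVE_MA, SLEEP_MA = 36.0, 3.0
ACTIVE_S, SLEEP_S = 2.0, 10.0  # recognition and sleep durations per cycle

avg_ma = (ACTIVE_MA * ACTIVE_S + SLEEP_MA * SLEEP_S) / (ACTIVE_S + SLEEP_S)
print(round(avg_ma, 1))  # 8.5 mA average over one full cycle

battery_mah = 2000  # hypothetical battery capacity
print(round(battery_mah / avg_ma))  # ~235 hours before the battery is drained
```

This is exactly why pushing the sleep floor into the microamp range (see Future Scope) matters: the sleep phase dominates the cycle, so its current sets the average.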
I wanted to bring the power consumption down to the bare minimum, so I reached out to the QuickLogic and SensiML teams. Here is what they had to say:
"Reducing power consumption while maintaining optimum performance requires lots of development and fine-tuning. It is difficult to determine if the 2mA peak current for the EOS S3 is achievable without some clear understanding of the model size, Sensor ODR (IMU only or also Audio), etc. To further reduce power consumption, FW needs to put all unused components (use-case specific) into the lowest power mode. This is not being addressed by the current FW. "
However, they did provide the following guidance, which I will be incorporating in future development:
- In the current Simple Streaming application, CONST_FREQ is set to provide maximum performance. In this case there are two power modes: Sleep and Active. The Sleep state is when the S3 M4 enters WFI mode while the S3 IP collects sensor data into the buffer. The Active state is when the M4 is running the algorithm or setting up hardware to transfer data. The M4 active time is algorithm dependent, based on the knowledge pack data.
- When Active, the S3 core consumes ~5.67 mW (@1.8 V); this figure does not include the S3 IO consumption and sensors.
- In WFI, the S3 core consumes ~0.370 uW (@1.8 V); this figure does not include the S3 IO consumption and sensors.
- On the EOS S3, changing the HOSC frequency requires reconfiguring all of the clock-net dividers to get the correct output clock frequencies.
Here are some pictures and videos from the outdoor testing.
Our cloud gateway consists of a personal computer connected to the internet and an XBee coordinator attached to it over UART. A Python script running on the PC fetches the data from the XBee coordinator, processes it, and forwards it to the ThingSpeak cloud for further visualization and analysis. You can follow this video on YouTube to get started with ThingSpeak.
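A minimal sketch of such a gateway script is shown below. The frame format ("label,state") and the ThingSpeak field mapping are assumptions for illustration; the actual script in the repository may differ. In the real script, pyserial would supply the UART lines from the coordinator.

```python
# Hypothetical gateway: parse one XBee frame and forward it to ThingSpeak.
import urllib.parse
import urllib.request

THINGSPEAK_URL = "https://api.thingspeak.com/update"
WRITE_API_KEY = "YOUR_WRITE_API_KEY"  # placeholder, set from your channel settings
CLASSES = {"normal": 0, "axe": 1, "chainsaw": 2}

def parse_frame(line):
    """Parse an assumed 'label,state' frame into ThingSpeak field values."""
    label, state = line.strip().split(",")
    return {"field1": CLASSES[label], "field2": int(state)}

def push(fields):
    """Forward one reading via ThingSpeak's GET /update endpoint."""
    query = urllib.parse.urlencode({"api_key": WRITE_API_KEY, **fields})
    urllib.request.urlopen(f"{THINGSPEAK_URL}?{query}", timeout=10)

print(parse_frame("chainsaw,1"))  # {'field1': 2, 'field2': 1}
```

ThingSpeak's free tier rate-limits channel updates (roughly one every 15 seconds), so the loop reading the serial port should throttle its pushes accordingly.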
Once we start receiving data successfully on the cloud, we need a trigger mechanism to send an alert SMS if illegal logging is detected. Thankfully, ThingSpeak has a ThingHTTP app that enables communication among devices, websites, and web services without having to implement the protocol at the device level. We use this app to trigger Twilio APIs that can send automatic emails and SMS. As ThingSpeak is built on MATLAB, we code this process in MATLAB. Visit the following links to get started with ThingHTTP and a Twilio account. I have attached both the Python and MATLAB code at the end of this blog.
Here is the link to the public channel displaying the QuickFeather sleep state, recognition class, and device location: https://thingspeak.com/channels/1370213
Congratulations on making it to the end of this project! Let's look at the final prototype. Here is a video demonstrating the complete working of the device.
Future Scope: Here is a list of tasks I plan to tackle in the next revision (any kind of help is highly appreciated):
- Bring the sleep-mode current consumption down into the microampere range.
- Add battery level monitoring functionality by leveraging the ADC pin connected to the VBAT pin.
- Add a gas sensor for forest fire detection as well.
- A better solar panel, such as one from Epishine or PowerFilm.
- A better boost converter, either from e-peas or Matrix Industries.