According to an article from The Conversation, Indonesia is home to the third-largest tropical forest in the world after the Amazon and the Congo. Some of these forests have high biodiversity value and also serve as sanctuaries that protect rare animals endemic to Indonesia, such as the orangutan in Borneo, the Sumatran tiger and elephant in Sumatra, and the Javan rhinoceros in Banten. Even so, conservation doesn't run smoothly in Indonesia's forests, for a variety of reasons.
The Javan rhinoceros (Rhinoceros sondaicus) is one of the animals still threatened with extinction, and it survives to this day in the Ujung Kulon National Park sanctuary. The Javan rhinoceros differs from other rhinoceroses in that it has only one horn, and that horn is believed to have medicinal properties, which is why some people still hunt the animal illegally.
Another animal that is constantly hunted is the orangutan. According to various sources, more than 100,000 orangutans have been killed over the past 20 years due to illegal trading and deforestation. Orangutans are sold illegally as pets to rich people and killed by employees of palm oil companies that operate near orangutan habitat.
The main challenge in conservation, according to the article, is the lack of staff and resources to protect the forests; to put it in perspective, a single staff member can be responsible for 7,000 acres of forest.
Another challenge for conservation is wildfire. In 2020 a wildfire broke out in one of the animal sanctuaries in Sumatra, and by the time it was noticed it had already grown into a huge blaze.
To help with these challenges, I want to build a device that helps conservationists monitor forest conditions and detect the animals, using the SenseCAP K1100 kit.
Preparation

For this project I want to build a device capable of audio classification to check on the animals' liveliness and give notice if illegal hunting occurs, object detection to check on the animals' activity, and fire detection to give an early warning and help prevent the spread of wildfire. The device should also be able to send the results wirelessly over a long distance.
Thankfully, the Seeed Studio SenseCAP K1100 kit has everything needed for this purpose.
This project will consist of these guides to fulfill the main objective:
- Make a sound classification model to detect orangutans and rhinos while also detecting gunshots
- Make an image classification model to detect orangutans visually using the Grove Vision AI module provided in the kit
- Make a wildfire detector using the sensors provided in the kit
- Send all the detection and environment data over the LoRa protocol and push it to an IoT dashboard
This part covers performing several of the desired detections with only one sensor: the microphone.
Edge Impulse will be used to build a model capable of recognizing and classifying the required sounds. This project will focus only on the two animals mentioned above. Besides detecting the animals, the model will also be made to detect threats to them: contact with a hunter, by detecting gunshots, and natural threats like wildfire, by detecting the sound of burning (thanks to Shawn Hymel for the idea).
Dmitry has written a great tutorial on audio recognition with the Wio Terminal. That tutorial will be the basis and guide for building the data model.
Search for orangutan, rhino, gunshot, and wildfire sounds on YouTube and copy the URLs of the videos.
Insert each video URL into a YouTube-to-MP3 website and download the MP3 file.
After downloading, store the files in a folder.
Convert the MP3 files to WAV and lower the sample rate to 16 kHz. Use this site to help convert the files.
Once all the sound files are ready, make a new project on Edge Impulse and upload the files under the label of each sound.
After uploading, split the files into 10-second segments, then split those again into 1-second samples; it is important to pick only the interesting parts of each sound file.
Repeat the process until there is enough data for model building. Don't forget to add random noise files under a background label.
After all the data is collected, it's time to build the model. Use the specification below, following Dmitry's tutorial. Make sure to choose MFE as the processing block and Keras as the learning block.
Generate the features needed.
Configure the NN classifier like the picture below, or adjust it until the best result is produced.
There is still some confusion between a few classes, but it's good enough for testing.
After the model has finished building, it can be tested first using the test data or live classification, or it can be tested directly on the Wio Terminal via the Arduino IDE.
Here's a test detecting a rhino sound:
Grab the data model and test it using this code.
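For orientation, that test code has the shape of a standard Edge Impulse Arduino example. Below is a minimal sketch of the idea; the header name animal_sounds_inferencing.h is a placeholder for the library exported from the Edge Impulse project, and the microphone capture step is stubbed out (the linked code is the authoritative version):

// Minimal Edge Impulse inference skeleton for the Wio Terminal.
#include <animal_sounds_inferencing.h> // placeholder: your exported library

static float audio_buf[EI_CLASSIFIER_RAW_SAMPLE_COUNT]; // 1 s of 16 kHz audio

// Callback the SDK uses to pull samples out of our buffer
static int get_audio_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, audio_buf + offset, length * sizeof(float));
    return 0;
}

void setup() {
    Serial.begin(115200);
}

void loop() {
    // ...fill audio_buf from the microphone here (omitted)...
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_RAW_SAMPLE_COUNT;
    signal.get_data = &get_audio_data;

    ei_impulse_result_t result;
    if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
        // Print the score for every class (background, gunshot, orangutan, ...)
        for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
            Serial.print(result.classification[ix].label);
            Serial.print(": ");
            Serial.println(result.classification[ix].value, 3);
        }
    }
}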
Grove AI Vision Orangutan Custom Model

One of the best features of the SenseCAP K1100 kit is the Grove Vision AI module, which already comes with a bunch of sensors. But the main feature of the module is its capability to run object detection or image classification on the module itself.
Using that capability, a data model will be built to detect and monitor an orangutan or rhino when it passes the sensor.
By following this comprehensive guide, a custom model will be built to detect the desired animals.
Roboflow already has a dataset for orangutans, although it comes bundled with other animals as well.
The dataset has 5 classes covering 5 endangered species, and the orangutan is among them. After following the guide from Seeed Studio's wiki, the result is acceptable.
After the data model is converted to UF2 format, copy the UF2 file to the Grove Vision AI module and test it on real hardware. Connect the Grove Vision AI module's USB port to the PC and its Grove port to the I2C port on the Wio Terminal, and it should show a result like the picture below.
Grab the UF2 file here
To display the inference results directly on the Wio Terminal, upload this code to it.
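That code follows the Grove Vision AI example in Seeed's wiki; a trimmed sketch of the idea is below. The API names come from the Seeed_Arduino_GroveAI library and may differ between versions, so check the library's bundled examples:

// Read detection results from the Grove Vision AI module over I2C.
#include <Wire.h>
#include "Seeed_Arduino_GroveAI.h"

GroveAI ai(Wire);

void setup() {
    Wire.begin();
    Serial.begin(115200);
    // Start object detection with the custom model slot (0x11 per Seeed's wiki)
    if (!ai.begin(ALGO_OBJECT_DETECTION, (MODEL_INDEX_T)0x11)) {
        Serial.println("AI module not found");
        while (1);
    }
}

void loop() {
    if (ai.invoke() == CMD_OK) {               // run one inference
        while (ai.state() != CMD_STATE_IDLE) { // wait until it finishes
            delay(20);
        }
        uint8_t len = ai.get_result_len();     // number of detections
        for (uint8_t i = 0; i < len; i++) {
            object_detection_t det;
            ai.get_result(i, (uint8_t *)&det, sizeof(det));
            Serial.print("class: ");
            Serial.print(det.target);          // e.g. the orangutan class id
            Serial.print("  confidence: ");
            Serial.println(det.confidence);
        }
    }
    delay(500);
}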
After the code is uploaded, a few tests are run to make sure everything is working OK.
The next step is to put the Grove Vision AI module on the client device and send the detection data via the LoRa-E5 module.
Gas-Based Fire Detection Data Model

One of the threats to the forest where the sanctuary is located is wildfire, and most of the time the response comes too late because there is no immediate notification.
According to this blog post from Edge Impulse, a data model can be built to detect fire using criteria like these:
- Normal: the forest exhibits normal temperature, humidity, and air quality.
- Open fire: a fully fledged wildfire with low humidity, high temperatures, and large amounts of volatile organic compounds (VOCs).
So, with the sensors provided in the SenseCAP K1100 kit, namely the Grove VOC and eCO2 Gas Sensor (SGP30) and the Grove Temp&Humi Sensor (SHT40), which together can read VOCs, temperature, and humidity, a data model will be built to detect whether a fire is occurring in the forest.
Shawn Hymel has already made a video on building a data model from sensor fusion for sensing air and gas.
Check the video above before continuing to the next step.
The difference here is that only two sensors will be used, and the data will be collected via SD card.
Before collecting data, make sure the SD card already has an "aqi.csv" file in its root folder.
Use the SGP30 and SHT40 and connect them like the picture below.
To collect the data, upload this code to the Wio Terminal and insert the SD card.
It will save the VOC, temperature, and humidity data in CSV format to the SD card. The three buttons on the Wio Terminal are used to capture the different classes of data:
Button A to collect background data
Button B to collect fire data
Button C to collect smoke data
When one of the buttons is pressed, it records 1 second of data to the "aqi.csv" file created earlier; the logger works roughly as sketched below.
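A rough sketch of what that logger does is below. The library choices here are assumptions (Adafruit's SGP30 and SHT4x drivers plus Seeed's SD support for the Wio Terminal); the linked code is authoritative:

// Sketch of the button-triggered CSV logger on the Wio Terminal.
#include <Wire.h>
#include <Adafruit_SGP30.h>
#include <Adafruit_SHT4x.h>
#include <Seeed_FS.h>
#include "SD/Seeed_SD.h"

Adafruit_SGP30 sgp;
Adafruit_SHT4x sht4;

void setup() {
    Serial.begin(115200);
    Wire.begin();
    pinMode(WIO_KEY_A, INPUT_PULLUP); // background
    pinMode(WIO_KEY_B, INPUT_PULLUP); // fire
    pinMode(WIO_KEY_C, INPUT_PULLUP); // smoke
    sgp.begin();
    sht4.begin();
    SD.begin(SDCARD_SS_PIN, SDCARD_SPI);
}

void logSample(const char *label) {
    sensors_event_t humidity, temp;
    sht4.getEvent(&humidity, &temp); // read SHT40
    sgp.IAQmeasure();                // read SGP30 (TVOC / eCO2)

    File f = SD.open("aqi.csv", FILE_APPEND);
    if (f) {
        // label, temperature (C), humidity (%RH), TVOC (ppb)
        f.print(label);                      f.print(',');
        f.print(temp.temperature);           f.print(',');
        f.print(humidity.relative_humidity); f.print(',');
        f.println(sgp.TVOC);
        f.close();
    }
}

void loop() {
    if (digitalRead(WIO_KEY_A) == LOW) logSample("background");
    if (digitalRead(WIO_KEY_B) == LOW) logSample("fire");
    if (digitalRead(WIO_KEY_C) == LOW) logSample("smoke");
    delay(1000); // roughly one sample per second while a button is held
}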
Start a small fire and collect all the necessary data.
After all the data has been collected, split each class into a single file and add a timestamp index, then normalize the data using the guide from the video above and check the preprocessed data.
Upload each class using the guide from the video above. Here are some results for the preprocessed data:
Check the preprocessed data results and save the values somewhere handy. The "Mins" and "Ranges" values will be reused in the inferencing code:
Mins: [0.0, 26.53, 33.17, 0.0]
Ranges: [484.0, 22.67, 51.78, 60000.0]
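These are per-channel min-max scaling parameters: in the inferencing code each raw reading is scaled with them before being fed to the model, along these lines (the channel order is an assumption; match it to your own CSV columns):

// Min-max scaling with the saved preprocessing values.
const float MINS[4]   = {0.0f, 26.53f, 33.17f, 0.0f};
const float RANGES[4] = {484.0f, 22.67f, 51.78f, 60000.0f};

// Scale one raw reading into the 0..1 range the model was trained on
float scale(float raw, int channel) {
    return (raw - MINS[channel]) / RANGES[channel];
}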
After running all the code in the Google Colab, download the resulting data zip and extract it. The next step is to split each data value into its designated label/class. For that, a small Python script using the pandas library will be used.
Copy the code below and save it in the same folder where the data is stored.
import pandas as pd

in_csv = 'smoke.sample0.csv'  # change this to the file you want to split
number_lines = sum(1 for row in open(in_csv))
rowsize = 1
colnames = ['timestamp', 'temp', 'humi', 'tvoc']

for i in range(1, number_lines, rowsize):
    df = pd.read_csv(in_csv,
                     names=colnames,
                     header=None,
                     nrows=rowsize,   # number of rows to read at each loop
                     skiprows=i)      # skip rows that have been read
    out_csv = 'smoke.sampling' + str(i) + '.csv'  # change this to the label name
    df.to_csv(out_csv,
              index=False,
              header=True,
              mode='a',            # append data to csv file
              chunksize=rowsize)   # size of data to append for each loop
Repeat the splitting for each label. After splitting each label into its own files, it will look like this:
Upload all the data to Edge Impulse and let it split automatically between training and test data.
Then follow along with the rest of the guide from Shawn's video above.
After downloading the model, add it to the Arduino IDE. Grab the data model here.
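The downloaded library can be exercised with Edge Impulse's standard static-buffer pattern; a trimmed sketch is below (the header name fire_detection_inferencing.h is a placeholder for your exported library):

// Feed one set of normalized sensor readings to the fire-detection model.
#include <fire_detection_inferencing.h> // placeholder: your exported library

static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

void classify() {
    // features[] should already hold the min-max scaled readings
    // (see the Mins/Ranges values saved earlier)
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &get_feature_data;

    ei_impulse_result_t result;
    if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
        for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
            Serial.print(result.classification[ix].label);
            Serial.print(": ");
            Serial.println(result.classification[ix].value, 3);
        }
    }
}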
Test the model using this code. The result is as follows:
Testing the P2P connection
The plan was to send the inference and environment data over LoRaWAN infrastructure on the Helium platform, but since a gateway is quite expensive and there is no coverage in the nearby area, the idea changed to a LoRa P2P connection that sends the data to the Blynk platform. LoRa P2P is used as a proof of concept that this project can use LoRa as the medium for sending the data.
Seeed Studio's LoRa-E5 wiki page already has some sample code for testing a P2P connection. A Xiao RP2040 connected to the SGP30 reads air quality as the transmitter, with a Wio Terminal at the other end to receive and display the data. After tweaking the sample code from the wiki page, the RP2040 and the Wio Terminal were finally able to communicate, as in the picture below.
Connect the SGP30 to the I2C Grove port and the LoRa-E5 to the pin 1 and 2 port on the Xiao RP2040.
Connect the second LoRa-E5 module to the right-hand Grove port on the Wio Terminal.
Check this guide to test the Xiao RP2040 as transmitter and the Wio Terminal as receiver. Take note of these lines of code.
Transmitter side
memset(data, 0, sizeof(data));
sprintf(data, "%04X,%04X,%04X,%04X", tvoc_ppb, co2_eq_ppm, int(temperature),int(humidity));
sprintf(cmd, "AT+TEST=TXLRPKT,\"5345454544%s\"\r\n", data);
ret = at_send_check_response("TX DONE", 2000, cmd);
the "data" variable is use to store all the necessary data that will be send to receiver and it needs to be integer value to make the parsing process on the receiver side easier.
Receiver Side
p_start = strstr(recv_buf, "5345454544");
if (p_start && (1 == sscanf(p_start, "5345454544%s,", data)))
{
data[16] = 0;
int tvoc;
int co2;
int temp;
int humi;
char *endptr;
char *endptr1;
char *endptr2;
char *endptr3;
char datatvoc[5] = {data[0], data[1],data[2], data[3]};
char dataco2[5] = {data[4], data[5], data[6], data[7]};
char datatemp[5] = {data[8], data[9], data[10], data[11]};
char datahumi[5] = {data[12], data[13],data[14], data[15]};
tvoc = strtol(datatvoc, &endptr, 16);
co2 = strtol(dataco2, &endptr1, 16);
temp = strtol(datatemp, &endptr2, 16);
humi = strtol(datahumi, &endptr3, 16);
On the receiver side the data arrives as a single line; to split it, make a char array for each section of the data, as in the sample above.

Testing the range capability of the LoRa-E5
Second test, with 350 m of distance between the modules:
Update the WiFi firmware

The next step is to test sending the data to the Blynk platform. Before trying to connect to Blynk, make sure the WiFi firmware has been updated using this guide; this is necessary because the RTL8720 WiFi chip firmware shipped on the Wio Terminal is outdated. Also install the necessary Blynk library following the guide on this page.
Follow the guide from this wiki page to create a template dashboard to display the data. After the template has been made, upload the following code to the Wio Terminal only, and it will act as a gateway to the Blynk platform.
#include <Arduino.h>
#include"TFT_eSPI.h"
#include<SoftwareSerial.h>
#define BLYNK_PRINT Serial
#define BLYNK_TEMPLATE_ID " "
#define BLYNK_DEVICE_NAME " "
#define BLYNK_AUTH_TOKEN " "
// Comment this out to disable prints and save space
#include <rpcWiFi.h>
#include <WiFiClient.h>
#include <BlynkSimpleWioTerminal.h>
int tvoc;
int co2;
// Your WiFi credentials.
// Set password to "" for open networks.
char ssid[] = " ";
char pass[] = " ";
uint8_t state = 0;
char auth[] = " ";
SoftwareSerial e5(0, 1);
#define NODE_SLAVE
BlynkTimer timer;
static char recv_buf[512];
static bool is_exist = false;
TFT_eSPI tft;
TFT_eSprite spr = TFT_eSprite(&tft); //sprite
static int at_send_check_response(char *p_ack, int timeout_ms, char *p_cmd, ...)
{
int ch = 0;
int index = 0;
unsigned long startMillis = 0; // millis() returns unsigned long
va_list args;
char cmd_buf[256];
memset(recv_buf, 0, sizeof(recv_buf));
va_start(args, p_cmd);
// Format once with vsnprintf instead of passing a raw va_list to printf
vsnprintf(cmd_buf, sizeof(cmd_buf), p_cmd, args);
va_end(args);
e5.print(cmd_buf);
Serial.print(cmd_buf);
delay(200);
startMillis = millis();
if (p_ack == NULL)
{
return 0;
}
do
{
while (e5.available() > 0)
{
ch = e5.read();
recv_buf[index++] = ch;
Serial.print((char)ch);
delay(2);
}
if (strstr(recv_buf, p_ack) != NULL)
{
return 1;
}
} while (millis() - startMillis < timeout_ms);
return 0;
}
static int recv_prase(void)
{
char ch;
int index = 0;
memset(recv_buf, 0, sizeof(recv_buf));
while (e5.available() > 0)
{
ch = e5.read();
recv_buf[index++] = ch;
Serial.print((char)ch);
delay(2);
}
if (index)
{
char *p_start = NULL;
char data[32] = {0};
int rssi = 0;
int snr = 0;
p_start = strstr(recv_buf, "+TEST: RX \"5345454544");
if (p_start)
{
spr.fillSprite(TFT_BLACK);
p_start = strstr(recv_buf, "5345454544");
if (p_start && (1 == sscanf(p_start, "5345454544%s,", data)))
{
data[8] = 0;
char *endptr;
char *endptr1;
char datatvoc[5] = {data[0], data[1], data[2], data[3]}; // 4 hex digits + implicit null terminator
char dataco2[5] = {data[4], data[5], data[6], data[7]};
tvoc = strtol(datatvoc, &endptr, 16);
co2 = strtol(dataco2, &endptr1, 16);
Serial.println(datatvoc);
Serial.println(dataco2);
Serial.println(tvoc);
Serial.println(co2);
spr.createSprite(100, 30);
spr.setFreeFont(&FreeSansBoldOblique12pt7b);
spr.setTextColor(TFT_WHITE);
spr.drawNumber(tvoc, 0, 0, 1);
spr.pushSprite(15, 100);
spr.deleteSprite();
spr.createSprite(150, 30);
spr.setFreeFont(&FreeSansBoldOblique12pt7b);
spr.setTextColor(TFT_WHITE);
spr.drawNumber(co2, 0, 0, 1);
spr.pushSprite(150, 100);
spr.deleteSprite();
Serial.print(data);
Serial.print("\r\n");
}
p_start = strstr(recv_buf, "RSSI:");
if (p_start && (1 == sscanf(p_start, "RSSI:%d,", &rssi)))
{
String newrssi = String(rssi);
/*u8x8.setCursor(0, 6);
u8x8.print(" ");
u8x8.setCursor(2, 6);
u8x8.print("rssi:");
u8x8.print(rssi);*/
Serial.print(rssi);
Serial.print("\r\n");
spr.createSprite(150, 30);
spr.setFreeFont(&FreeSansBoldOblique12pt7b);
spr.setTextColor(TFT_WHITE);
spr.drawString(newrssi, 0 , 0 , 1);
spr.pushSprite(180, 185);
spr.deleteSprite();
}
p_start = strstr(recv_buf, "SNR:");
if (p_start && (1 == sscanf(p_start, "SNR:%d", &snr)))
{
/*u8x8.setCursor(0, 7);
u8x8.print(" ");
u8x8.setCursor(2, 7);
u8x8.print("snr :");
u8x8.print(snr);*/
spr.createSprite(100, 30);
spr.setFreeFont(&FreeSansBoldOblique12pt7b);
spr.setTextColor(TFT_WHITE);
spr.drawNumber(snr, 0, 0, 1);
spr.pushSprite(15, 185);
spr.deleteSprite();
}
return 1;
}
}
return 0;
}
static int node_recv(uint32_t timeout_ms)
{
at_send_check_response("+TEST: RXLRPKT", 1000, "AT+TEST=RXLRPKT\r\n");
unsigned long startMillis = millis();
do
{
if (recv_prase())
{
return 1;
}
} while (millis() - startMillis < timeout_ms);
return 0;
}
static int node_send(void)
{
static uint16_t count = 0;
int ret = 0;
char data[32];
char cmd[128];
memset(data, 0, sizeof(data));
sprintf(data, "%04X", count);
sprintf(cmd, "AT+TEST=TXLRPKT,\"5345454544%s\"\r\n", data);
/*u8x8.setCursor(0, 3);
u8x8.print(" ");
u8x8.setCursor(2, 3);
u8x8.print("TX: 0x");
u8x8.print(data);*/
ret = at_send_check_response("TX DONE", 2000, cmd);
if (ret == 1)
{
count++;
Serial.print("Sent successfully!\r\n");
}
else
{
Serial.print("Send failed!\r\n");
}
return ret;
}
static void node_recv_then_send(uint32_t timeout)
{
int ret = 0;
ret = node_recv(timeout);
delay(100);
if (!ret)
{
Serial.print("\r\n");
return;
}
node_send();
Serial.print("\r\n");
}
static void node_send_then_recv(uint32_t timeout)
{
int ret = 0;
ret = node_send();
if (!ret)
{
Serial.print("\r\n");
return;
}
if (!node_recv(timeout))
{
Serial.print("recv timeout!\r\n");
}
Serial.print("\r\n");
}
void sendSensor()
{
node_recv_then_send(2000);
Blynk.virtualWrite(V2, tvoc);
Blynk.virtualWrite(V3, co2);
Serial.println("data for blynk: ");
Serial.println(tvoc);
Serial.println(co2);
}
void setup(void)
{
tft.begin();
tft.setRotation(3);
Serial.begin(115200);
// while (!Serial);
Serial.print("ping pong communication!\r\n");
//u8x8.setCursor(0, 0);
Blynk.begin(auth, ssid, pass);
// You can also specify server:
//Blynk.begin(auth, ssid, pass, "blynk.cloud", 80);
//Blynk.begin(auth, ssid, pass, IPAddress(192,168,1,100), 8080);
// Setup a function to be called every second
timer.setInterval(2000L, sendSensor);
e5.begin(9600);
tft.fillScreen(TFT_BLACK);
tft.setFreeFont(&FreeSansBoldOblique12pt7b);
tft.setTextColor(TFT_RED);
tft.drawString("TVoC", 7 , 65 , 1);
tft.drawString("CO2", 165 , 65 , 1);
tft.setFreeFont(&FreeSansBoldOblique12pt7b);
tft.setTextColor(TFT_RED);
tft.drawString("SNR", 7 , 150 , 1);
//level
tft.setFreeFont(&FreeSansBoldOblique12pt7b);
tft.setTextColor(TFT_RED);
tft.drawString("RSSI:", 165 , 150 , 1);
if (at_send_check_response("+AT: OK", 100, "AT\r\n"))
{
is_exist = true;
at_send_check_response("+MODE: TEST", 1000, "AT+MODE=TEST\r\n");
at_send_check_response("+TEST: RFCFG", 1000, "AT+TEST=RFCFG,866,SF12,125,12,15,14,ON,OFF,OFF\r\n");
delay(200);
#ifdef NODE_SLAVE
tft.setFreeFont(&FreeSansBoldOblique18pt7b);
tft.setTextColor(TFT_WHITE);
tft.drawString("Slave", 50, 10 , 1);
#else
tft.setFreeFont(&FreeSansBoldOblique18pt7b);
tft.setTextColor(TFT_WHITE);
tft.drawString("Master", 50, 10 , 1);
#endif
}
else
{
is_exist = false;
Serial.print("No E5 module found.\r\n");
}
}
void loop(void)
{
if (is_exist)
{
#ifdef NODE_SLAVE
Blynk.run();
timer.run();
#else
node_send_then_recv(2000);
delay(3000);
#endif
}
}
And the result is as follows.

Assembling all the pieces together is quite challenging. The idea is for the Wio Terminal to act as a gateway that receives the LoRa data and forwards it to the Blynk platform.
The Grove Vision AI module and the environment sensors are daisy-chained to the Xiao RP2040 over I2C with the help of the Xiao expansion shield. Thankfully the Grove Vision AI does its own inferencing, so the Xiao RP2040 only has to run one data model itself: the wildfire detection model. All of the inference data is sent to the Wio Terminal, which acts as the LoRa gateway.
For power, a solar panel with an 18650 battery is used to make deployment into the wild more convenient. Follow the schematic of the whole transmitter in the picture below.
Put all the components of the Xiao transmitter in a 3D-printed enclosure like the one pictured below.
The enclosure is a remix of a file on Thingiverse designed by user 3KU_Delta; mounting holes were added to hold all the Grove modules needed.
A mounting stand for the Wio Terminal is printed from this file. It is used to attach the LoRa-E5 neatly so the Wio Terminal can serve as the gateway to the IoT dashboard. Connect the LoRa-E5 module to pins D2 and D3 on the back of the Wio Terminal. For some reason, when the LoRa-E5 was connected to the right-hand Grove port with the Blynk code active, the module couldn't be detected.
An nRF Power Profiler Kit II is used to check the power consumption of the transmitter device, with the setup below.
The results are as follows.
On standby the device draws around 200 mA, and while sending data to the LoRa receiver the draw rises by roughly 50 mA, for about 250 mA on average. A 3000 mAh 18650 battery therefore gives 3000 mAh ÷ 250 mA ≈ 12 hours nominal, or around 10 hours in practice. Not bad, but it still needs a lot of improvement if the device is to run remotely and depend on the sun as its power source.
Sadly the Xiao RP2040 doesn't have a deep sleep feature in Arduino yet. Switching to a Xiao ESP32-C3 could make the device run even longer by using the ESP32's deep sleep.
Push Data to Blynk

Create a new template dashboard on the Blynk cloud console and save it. This will be used to display all the data read from the LoRa node.
The template dashboard doesn't have any device attached yet; the next step is to add the device manually to the dashboard.
Choose "from template".
Pick the template that was created before, then click create.
Once created, Blynk provides the auth token that will be used later in the Arduino IDE.
Next, add widgets to the dashboard by clicking the edit dashboard menu.
Add the datastreams like the picture below before adding and arranging the widgets:
The Fire Detection and Animal Detection datastreams are set to String so they can display the inference results from the LoRa node. Add widgets to personal preference and assign a virtual pin to each widget.
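On the gateway side, writing to a String datastream is just a virtualWrite with text, for example (the virtual pins V4 and V5 are placeholders; use whichever pins were assigned above):

Blynk.virtualWrite(V4, String("orangutan"));   // Animal Detection label
Blynk.virtualWrite(V5, String("no fire"));     // Fire Detection label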
Upload the Final Arduino Code

After setting up the dashboard, it's time to upload the final code for the Xiao transmitter and the Wio Terminal receiver. Grab the code from the links below:
Transmitter Code Explanation
The Xiao RP2040 LoRa node transmitter runs two TinyML models: one on the Xiao RP2040 itself to detect wildfire, and one on the Grove Vision AI module to classify the animals detected by the custom data model created in the previous part. The Xiao sends all the environment data from the SGP30 and SHT40 along with the classes detected by the running TinyML models.
int conf;
conf = max_val*100;
memset(data, 0, sizeof(data));
sprintf(data, "%04X,%04X,%04X,%04X,%04X,%04X",vision_idx, max_idx, int(temperature),int(humidity),sgp_tvoc,sgp_co2);
sprintf(cmd, "AT+TEST=TXLRPKT,\"5345454544%s\"\r\n", data);
ret = at_send_check_response("TX DONE", 2000, cmd);
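The payload is therefore six values, each encoded as four hex digits and separated by commas, behind the "SEEED" marker. On the receiving end the fields can be walked off the string with strtol, for example (a sketch; the field order must match the sprintf above):

// Parse the six comma-separated hex fields from the received string.
// Field order matches the transmitter: vision_idx, max_idx,
// temperature, humidity, tvoc, co2.
int fields[6];
char *p = data;                         // points just past the "SEEED" marker
for (int i = 0; i < 6; i++) {
    fields[i] = (int)strtol(p, &p, 16); // strtol stops at the comma
    if (*p == ',') p++;                 // skip the separator
}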
The Blynk IoT platform has many new features, including logging specific events defined by the user. Here, an event is triggered when the LoRa node captures an orangutan sound or image, and also when a wildfire is detected. These events are recorded nicely on the dashboard and can push a notification to an email address or the Android app.
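Events are raised from the gateway sketch with Blynk.logEvent; the event codes below are placeholders for whatever codes were defined in the template:

// Log events on the Blynk console (the event codes must exist in the template)
Blynk.logEvent("orangutan_detected", "Orangutan detected by the LoRa node");
Blynk.logEvent("wildfire", "Possible wildfire detected!");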
As a second option for displaying all the data, the Qubitro platform is easier to deploy. Create a free account on the Qubitro platform.
Download the Qubitro library from the Arduino IDE library manager.
Back in the Qubitro portal, create a new project. Give it a name and click the create button.
Click the project that was just created, then click the add source button.
On the add source page, choose the MQTT and Toit option, then click continue.
Fill in the necessary details, then click continue.
Click done once all the necessary details have been filled in.
Click on the device/data source that was just created and open the settings menu to get the Device ID and Token that will be used in the Arduino IDE.
Upload the code below from the Arduino IDE to push the data to the Qubitro portal, filling in the necessary credentials and WiFi settings.
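As a rough sketch, publishing with the Qubitro Arduino library looks roughly like the following. The class and method names here follow the library's bundled examples, and the broker details and credentials are assumptions, so verify them against the library version you installed:

// Rough sketch of pushing one JSON payload to Qubitro over MQTT.
#include <rpcWiFi.h>
#include <QubitroMqttClient.h>

WiFiClient wifiClient;
QubitroMqttClient mqttClient(wifiClient);

char deviceID[]    = " "; // from the Qubitro device settings page
char deviceToken[] = " ";

void setup() {
    Serial.begin(115200);
    WiFi.begin(" ", " "); // SSID, password
    while (WiFi.status() != WL_CONNECTED) delay(500);

    mqttClient.setId(deviceID);
    mqttClient.setDeviceIdToken(deviceID, deviceToken);
    if (!mqttClient.connect("broker.qubitro.com", 1883)) {
        Serial.println("Qubitro connection failed");
        while (1);
    }
}

void loop() {
    mqttClient.poll();
    // Publish the readings as JSON; the topic is the device ID
    mqttClient.beginMessage(deviceID);
    mqttClient.print("{\"tvoc\": 12, \"co2\": 400}"); // replace with real data
    mqttClient.endMessage();
    delay(5000);
}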
Back on the device, click on the data menu to check whether the data has been successfully uploaded to the Qubitro platform.
If the data has been uploaded successfully, click on the monitoring option to make a custom dashboard.
Edit it to display what the project needs by clicking the add widget menu; a blank widget will be added.
Choose the widget type and the data to be displayed, then click save.
Qubitro will then display the data transmitted from the Xiao LoRa node transmitter.
I've been having a blast tinkering with the SenseCAP K1100 kit, unlocking its potential while also learning about things I'd never touched before, such as LoRa and environmental sensing. The original idea was to sense and classify the animals based on sound using TinyML, but I ended up making three different data models.
The project's goal is to help maintain the animal sanctuary while also trying to solve a few of its challenges, such as hunting and wildfire.
Due to the lack of animal sound datasets available on the internet, this project still needs some improvement. I still want to make a data model that recognizes how orangutans communicate, such as the long call and the kiss squeak, but as mentioned before, datasets for those are still scarce on the web. On-site data recording could make the project much better, but due to budget and permit limitations it has been difficult to record data directly.
Maybe in the future there will be improvements, or someone who has access to the sanctuary could use this project as a template to help them monitor it.
Thank you to all the Seeed Studio staff and the members of the Discord channel for helping me solve some of the problems in this project.