As the climate changes, the natural disasters associated with it are on the rise. Climate change and natural disasters form a positive feedback loop: rising temperatures cause wildfires, wildfires release large amounts of CO2, the added CO2 strengthens the greenhouse effect and raises ambient temperatures, and that leads to more wildfires in the future.
One way to track climate change is to monitor the frequency of natural disasters. Most of these disasters have a unique sound signature: heavy rainfall, thunderstorms, hailstorms, tsunamis, wildfires, hurricanes, earthquakes, and volcanic eruptions each produce a specific sound during the catastrophic event. These sounds are the voices of a changing climate.
I am trying to build a device with the QuickLogic QuickFeather board and SensiML AI to identify these sounds. The final goal is a network of disaster-detecting devices that can recognize natural disasters from these sounds, the voices of climate change.
Now the question is: can AI detect these events based on sound?
Let's find out!
Hardware
These are the hardware pieces used for this project:
- LiPo Power Rig: Under the QORC sticker there is a 4000 mAh LiPo battery glued onto a regular veroboard to build the power rig. The rig provides plug-and-play connectivity and mechanical support for all the other hardware pieces
- QuickFeather Dev Board: the QuickLogic AI board and the brain of this project
- Adafruit Tripler: the Adafruit Tripler extension board provides connectivity between the QuickFeather and the HC-05 Bluetooth module
- HC-05 Bluetooth-UART Module: provides wireless UART connectivity over Bluetooth to a computer or smartphone for sending results
- Modular Solar LiPo Charger: consists of a buck converter module that steps any DC input between 6 and 30 volts down to 5 volts, and a TP4056 module that converts the 5 volts to 4.2 volts for LiPo charging.
- 5 Watt Solar Panel: provides power for charging the LiPo battery during outdoor/remote operation.
I installed the latest Ubuntu 20.04 on a computer before downloading the SDK. This part is quite tricky, so expect some churn: the official setup guide didn't work exactly as expected, and some dependencies were missing after the automatic installation, which I had to install manually.
Installing SDK
Log into Ubuntu 20.04, open a terminal, and type the following 4 commands one at a time:
sudo apt install git
git clone https://github.com/QuickLogic-Corp/qorc-sdk.git
cd qorc-sdk
source envsetup.sh
These will install git (if you don't have it already), clone the QuickLogic SDK into a folder named qorc-sdk, and are supposed to install everything!
Everything must be installed in the qorc-sdk directory from now on.
Next, restart your Ubuntu machine.
If the ARM toolchain and the TinyFPGA Programmer were not installed properly, install them with the following steps:
Installing ARM Toolchain
After restarting the computer, open the terminal again and type the following 3 commands:
cd qorc-sdk
mkdir arm_toolchain_install
wget -O gcc-arm-none-eabi-9-2020-q2-update-x86_64-linux.tar.bz2 -q --show-progress --progress=bar:force 2>&1 "https://developer.arm.com/-/media/Files/downloads/gnu-rm/9-2020q2/gcc-arm-none-eabi-9-2020-q2-update-x86_64-linux.tar.bz2?revision=05382cca-1721-44e1-ae19-1e7c3dc96118"
tar xvjf gcc-arm-none-eabi-9-2020-q2-update-x86_64-linux.tar.bz2 -C ${PWD}/arm_toolchain_install
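To confirm the toolchain unpacked correctly, you can call the compiler directly from its install directory (this is the same path that gets added to the PATH later, in the build step) and check that it prints a version banner:
./arm_toolchain_install/gcc-arm-none-eabi-9-2020-q2-update/bin/arm-none-eabi-gcc --version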
Installing Python 3 and TinyFPGA programmer
Type the following commands in the terminal:
cd
cd qorc-sdk
sudo apt install python3-pip
git clone --recursive https://github.com/QuickLogic-Corp/TinyFPGA-Programmer-Application.git
pip3 install tinyfpgab
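To check that the programmer application was cloned and is usable, you can ask the script for its help text; this is only a quick sanity check and assumes the script accepts the standard --help flag:
python3 TinyFPGA-Programmer-Application/tinyfpga-programmer-gui.py --help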
Ideally, this step should not be necessary, but these are the missing or broken dependencies I mentioned earlier.
Modifying Source Code
I checked out the branch ssi-audio-sensor and built a new qf_ssi_ai_app.bin. I also used dcl_import.ssf as a plugin. The audio sensor was there, but DCL still captured accelerometer data.
To solve this, I modified the file qf_apps/qf_ssi_ai_app/inc/Fw_global_config.h before compiling the code:
/* Settings for selecting either Audio or an I2C sensor. Enable only one of these modes. */
#define SSI_SENSOR_SELECT_AUDIO (1) // 1 => select Audio data for live-streaming or recognition modes
#define SSI_SENSOR_SELECT_SSSS (0) // 0 => disable SSSS sensor data for live-streaming or recognition modes
Flashing Audio Capture Firmware (data collection mode)
Compile/Build the Example Code
From an Ubuntu terminal, go to the qorc-sdk directory, add the toolchain to the PATH, change to the qf_ssi_ai_app project directory, and build a bin file from the source code with the following commands:
cd qorc-sdk
export PATH=${PWD}/arm_toolchain_install/gcc-arm-none-eabi-9-2020-q2-update/bin:$PATH
cd
cd qorc-sdk/qf_apps/qf_ssi_ai_app/GCC_Project
make
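If the build succeeds, the firmware image should appear under output/bin, the same path the flashing command below points at:
ls -lh output/bin/qf_ssi_ai_app.bin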
Flash/Upload code to QuickLogic board
Connect the QuickFeather board to the computer with a USB cable and press the reset button on the board. While the blue LED is blinking, press the User button. A breathing green LED should start pulsing, meaning the board is in upload mode.
Open a terminal on Ubuntu and type these commands to flash the firmware:
cd
cd qorc-sdk/qf_apps/qf_ssi_ai_app/GCC_Project
sudo chmod a+rw /dev/ttyACM0
alias qfprog="python3 /home/computer/qorc-sdk/TinyFPGA-Programmer-Application/tinyfpga-programmer-gui.py"
qfprog --port /dev/ttyACM0 --m4app output/bin/qf_ssi_ai_app.bin --mode m4
The flashing progress should then scroll by in the terminal.
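If the programmer cannot find the board, check which serial device it enumerated as (usually /dev/ttyACM0, but the number can differ), and adjust the /home/computer part of the alias above to match your own home directory:
ls /dev/ttyACM*
sudo dmesg | tail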
The next part is done on a separate Windows 10 computer, with the SensiML Data Capture Lab software.
This part is thoroughly explained by Arduino “having11” Guy in his project Getting Started with the QuickFeather Dev Kit and SensiML, so I will not repeat the same details here. Everything is almost exactly the same, except that I use the microphone to capture sound data instead of the accelerometer.
Here is a quick description of all the steps:
Step 1: SSF File for Audio Capture from Microphone
The ssf file is attached below; using it enables data acquisition from the microphone of the QuickFeather dev kit. This configuration is loaded into SensiML Data Capture Lab. (see this project for details)
{
"name": "QuickFeather SimpleStream",
"uuid": "10b1db20-48a5-4442-a40e-fc530b456c89",
"collection_methods": [
{
"name": "live",
"display_name": "Live Stream Capture",
"storage_path": null,
"is_default": true
}
],
"device_connections": [
{
"name": "serial_simple_stream",
"display_name": "Data Stream Serial Port",
"value": 1,
"is_default": true,
"serial_port_configuration": {
"com_port": null,
"baud": 460800,
"stop_bits": 1,
"parity": 0,
"handshake": 0,
"max_live_sample_rate": 3301
}
},
{
"name": "wifi_simple",
"display_name": "Simple Stream over WiFi",
"value": 2,
"is_default": true,
"wifi_configuration": {
"use_mqttsn": false,
"use_external_broker": false,
"external_broker_address":"",
"broker_port":1885,
"device_ip_address": null,
"device_port": 0,
"max_live_sample_rate": 1000000
}
}
],
"capture_sources": [
{
"max_throughput": 0,
"name": "Motion",
"part": "MC3635",
"sample_rates": [
333, 250, 200, 100, 50
],
"is_default": true,
"sensors": [
{
"column_count": 3,
"is_default": true,
"column_suffixes": [
"X",
"Y",
"Z"
],
"type": "Accelerometer",
"parameters": [],
"sensor_id": 1229804865,
"can_live_stream": true
}
]
},
{
"max_throughput": 0,
"name": "Audio",
"part": "IM69D130",
"sample_rates": [
16000
],
"is_default": true,
"sensors": [
{
"column_count": 1,
"is_default": true,
"column_suffixes": [""],
"type": "Microphone",
"parameters": [],
"sensor_id": 1096107087,
"can_live_stream": true
}
]
},
{
"max_throughput": 0,
"name": "Qwiic Scale",
"part": "NAU7802",
"sample_rates": [
333, 250, 200, 100, 50
],
"is_default": false,
"sensors": [
{
"column_count": 1,
"is_default": true,
"column_suffixes": [
""
],
"type": "Weight",
"parameters": [],
"units": null
}
],
"sensor_id": 1334804865,
"can_live_stream": true
},
{
"max_throughput": 0,
"name": "ADC",
"part": "ADS1015",
"sample_rates": [
100,
250,
500,
1000,
1600,
2400,
3300
],
"is_default": false,
"sensors": [
{
"column_count": 1,
"is_default": true,
"column_suffixes": [
""
],
"type": "Analog Channel"
}
],
"sensor_id": 1184532964,
"can_live_stream": true
}
],
"is_little_endian": true
}
Step 2: Building AI Models with SensiML
Here is a sample of the rainfall and thunder sound I recorded for building the AI model; I recorded it around 7 am, when ambient noise from other sources is low:
First, in Capture mode in the SensiML Data Capture Lab, record the heavy rainfall and thunder sound.
Then, in Label Explorer mode, manually identify and label the sounds as rainfall or thunder.
Step 3: Training the Model
This step is done on the https://app.sensiml.cloud/ web interface, where the model is trained on the labeled data.
After that, the model can be tested and explored to improve performance.
Then the model is downloaded (file attached below) and flashed onto the QuickFeather board.
Deployment
Well, after flashing the bin file, I connected the device to PuTTY, with the computer's Bluetooth on one end and the HC-05 on the other, for a wireless UART connection.
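As a side note, the same wireless UART can also be watched from a Linux machine without PuTTY. This is only a minimal sketch, assuming the HC-05 has already been paired (e.g. with bluetoothctl) and the classic BlueZ rfcomm tool is available; the MAC address below is a placeholder you must replace with your module's address:
sudo rfcomm bind 0 00:11:22:33:44:55
sudo chmod a+rw /dev/rfcomm0
cat /dev/rfcomm0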
Detection of rainfall and thunder is not very accurate yet. I am sure I missed a lot of optimization during the AI model building and testing phase, so I will have to rebuild everything from scratch for a better model!