As the planet warms, climate change is causing birds to migrate earlier each year, which means they spend more time in places they historically did not. Conservation funding for endangered migratory birds must therefore adapt to the shifting habitats where these birds now spend more of their time. Machine learning at the edge offers a way to study when and where birds are arriving by recognizing their unique calls. A distributed network of solar-powered bird call detectors could create a high-resolution dataset of where birds are living and when. These data could also help validate climate models by demonstrating that birds are arriving earlier and staying later in their summer habitats than in the past. This project is a proof-of-concept design for a solar-powered bird call detector built from a QuickLogic QuickFeather, a lithium polymer battery, a 3.5W solar panel, and a plastic case, running a model trained with SensiML on calls from the two federally endangered birds near my home: the southwestern willow flycatcher and the whooping crane.
One of the wonders of the internet is just how much data is freely available in databases curated by passionate scientists and enthusiastic citizen scientists. Bird sounds are well documented on sites such as Xeno-Canto, where I sourced calls and songs for the whooping crane and the southwestern willow flycatcher. I chose these two birds because they are the two federally endangered species in my home state of Colorado, and as such are particularly vulnerable to climate change. If any species near my home need additional conservation resources, it is these two. I encourage you to pick one to three species near your own home using similar criteria. One to three is a good range: it keeps the model from becoming too large or slow.
An additional option is to get out there yourself and bird watch! Capturing your own bird calls and songs is a lot of fun, and you can find a guide about how to do it here.
A final technical step before importing the data into SensiML's Data Capture Lab is to convert any recordings you find to WAV files sampled at 16kHz (the rate the QuickFeather samples at). For this I used FFmpeg, a free, fast tool for converting and manipulating audio that is available on nearly every operating system. An example of converting an MP3 file to a WAV suitable for SensiML is shown below; you can substitute any input format FFmpeg supports, which includes nearly all major formats. Online conversion tools are another option.
ffmpeg -i input.mp3 -acodec pcm_s16le -ac 1 -ar 16000 output.wav
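If you have many recordings, a short shell loop can apply the same conversion to every file. This is a sketch: it assumes your MP3s sit in the current directory and that ffmpeg is on your PATH.

```shell
# Convert every MP3 in the current directory to 16 kHz mono 16-bit WAV.
# Adjust the glob (*.mp3) if your recordings are in another format.
for f in *.mp3; do
  [ -e "$f" ] || continue        # skip when the glob matches nothing
  out="${f%.mp3}.wav"            # e.g. crane_call.mp3 -> crane_call.wav
  ffmpeg -i "$f" -acodec pcm_s16le -ac 1 -ar 16000 "$out"
done
```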
Step 2 - Train your Model
Open the SensiML Data Capture Lab and create a new project. Using the dropdown menu in the upper left, import your new WAV files:
When prompted, create a new configuration. Be sure to select the QuickFeather SimpleStream configuration like so, or you will see nothing on your terminal when you finish your model:
Finally, when prompted, be sure to configure the sensor for Audio. Use the default 16kHz sampling rate and check the box next to "Microphone" so that the QuickFeather model will make use of the onboard mic:
Next, click "Edit" from the top menu. Select "Project Properties". We are now going to add our bird species as labels:
Use the blue plus (+) button in the lower right to create one segment label for each bird species you have calls and songs for. Click "Done" when you're finished.
Open the "Project Explorer" using the button in the upper left and double click on one of your WAV files. You should now see something like this:
One of the great things about bird calls and songs is that the signal is very distinct. It is quick and easy to find and label each chirp by looking for the peaks! Right click at the beginning of each signal and then click and drag the red line to the end of the signal. In the "Segments" table on the right you can right click the row you just created and click "Edit". From the list, select the right label based on what bird species you were just listening to.
It can be helpful to use the media player in the lower right to find signals you might have otherwise missed.
Repeat this operation until all the WAV files you have uploaded have been labeled. Be sure to press the "Save Changes" button! This will commit your changes to the cloud where we will head next.
Go to the SensiML website and log in with your account information. Make sure it's the same information you used when you set up the Data Capture Lab software. Select your new project from the table, like so:
You will now see a summary of your project. You can also add a fun picture and description from this page:
You can use the "Captures" tab to check that all of your WAV files have been pushed. The first step in building a model, however, is to go to the "Queries" tab and tell SensiML that you want to use all of your capture data in the model (since, for this simple example, we are not creating an explicit test set).
Give the query a name (it can be anything you like) and choose the default values for each dropdown. You can plot your query if you would like using the Plot dropdown. The plot will appear after you press "Save". Here you can see I have nearly as many samples of the whooping crane as I do the flycatcher. This informs how I will build my model in the next step. Click "Build Model" from the menu on the left.
Give your pipeline a name (it can be anything you wish) and, under settings, choose your new query. I increased my window size to the maximum to capture longer audio, which works better for my bird calls; you may find a smaller window works just as well for your chosen calls and gives a performance boost. Since my samples are roughly evenly distributed between species, I use "Accuracy" as my optimization metric. If you have far more samples for one species than another, I recommend optimizing for "Sensitivity" instead: it helps keep the model from simply favoring the majority class, which would produce poor results in the field. Click "Optimize". The model will be trained and optimized over several minutes. When training finishes, the "Auto Sense Results" table will be populated with information about the five best models produced during the run. You can click the "Explore Model" tab to get some great insights into your new model and how it works:
To download one of these models, click "Download Model" from the menu on the right.
Be sure to choose the QuickFeather as your target device. Choose the rest of the parameters as shown, taking special care to choose the "Simple Stream" application so that you will be able to see model output on the console. Under "Advanced Settings" at the bottom of the page, toggle "Debug" to "True" for the same reason. Click "Download". Your model will be compiled and will download automatically. This may take several minutes.
The last step is to flash this model to your board. Extract the zip file contents somewhere safe, then follow the platform-specific flashing steps provided by QuickLogic.
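On my machine the flashing step boiled down to one command using QuickLogic's TinyFPGA Programmer, after putting the board into upload mode per QuickLogic's instructions. The port and binary names below are illustrative; substitute the serial port your board appears on and the .bin file you extracted from the SensiML zip.

```shell
PORT=/dev/ttyACM0         # example port; e.g. COM7 on Windows
BIN=qf_ssi_ai_app.bin     # example name for the binary from the model zip

# Flash the M4 application slot; skips with a hint if the binary is absent.
if [ -f "$BIN" ]; then
  python tinyfpga-programmer-gui.py --port "$PORT" --m4app "$BIN" --mode m4
else
  echo "Place the model binary at $BIN before flashing"
fi
```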
Step 3 - 3D Print the Case (Optional)
BirdBuddy has an optional 3D-printable case with mounting holes for the solar panel and a covered enclosure for the electronic components. It is not required for BirdBuddy to work, but it is a nice way to keep everything together. I used a third-party printing service since the model is quite large. The model can be customized to your desired specifications using the online CAD software OnShape!
Remove the thumbscrews on the backside of the solar panel, like so:
Feed the cord through the opening in the back of the enclosure, and then line up the screws with the holes:
Solder the electrolytic capacitor that comes with the solar charger kit to the charger board like so:
Assemble the battery and QuickFeather as in the following Fritzing diagram, which also shows the correct orientation for the capacitor:
Connect the solar panel to the solar battery charger using its barrel jack and the barrel jack adapter, if required. The solar panel will now charge the battery and the Feather board will power on, like so:
Connect the jumper wires that came with the UART cable to the appropriate TX/RX pins on the Feather and on the USB adapter:
Connect the UART USB adapter to your computer. Using PuTTY or a similar serial client for your operating system, open a connection to the COM port the adapter is assigned, at 460800 baud.
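On Linux or macOS you can do the same from a terminal with `screen` (exit with Ctrl-A then K). This sketch assumes a typical Linux USB-serial device name; on macOS look for /dev/tty.usbserial-* instead.

```shell
# Find the first USB serial adapter; harmless no-op if none is attached.
PORT=$(ls /dev/ttyUSB* 2>/dev/null | head -n 1)

if [ -n "$PORT" ]; then
  # Open a 460800-baud session; model predictions appear here after reset.
  screen "$PORT" 460800
else
  echo "No USB serial adapter found"
fi
```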
On Windows you can use Device Manager from the start menu to figure out what COM port to use:
Open the connection and press the RESET button on the board. Everything is now powered and you should see the model predictions being printed on screen!
Demo Video

The Future
Where do we go from this proof of concept? What I envision is a central database where users can register their BirdBuddies and have their data posted in real time, giving scientists and citizen scientists insight into the arrival times of bird species all over the world. This would show that birds are spending less time in warmer climates and moving to their cooler summer homes earlier each year, and it would help make the case that conservation efforts must be increased at their new homes. Additionally, these data would be a further physical indicator that the climate is warming, adding to the case that the Anthropocene is putting pressure on birds and informing efforts to conserve them.
One way I propose to do this is to add an ESP32 to this project to communicate results from the trained models over WiFi, and then use AWS IoT Core to pipe the data into a cloud database that could be visualized and downloaded from a website. This website would offer pre-trained models for QuickFeather boards and, potentially, a new tool that streamlines training a model for non-technical users by simply uploading bird call audio files.