The day I first noticed "Bird Buddy" on Kickstarter, I decided to make something similar for my daughter all by myself. I had recently been learning Edge Impulse and happened to have a few Balena Fins lying on my desk for the last few weeks, which motivated me to start my own "Smart Bird Watcher".
As I had a Balena Fin, I had no choice but to use balenaOS. But honestly, if I had a choice, I would have chosen balenaOS anyway. It's all based on Docker containers, and you can simply deploy the same application to different devices such as a Raspberry Pi 3, 4, or Zero. You don't need to deal with custom software installations and configurations, which are a real pain for any Raspberry Pi based project.
Well, you do still need some configuration, but you do it on the balena dashboard and there is very little of it. I will go over it in a moment.
Features
First, let me tell you about the cool features of this bird watcher.
1. Voice classification - Continuously records sound from the microphone and runs it through the Edge Impulse inference engine to predict which bird is chirping. I have trained the five birds that most commonly visit our backyard. Chirps were collected from YouTube videos and Kaggle.
2. Image classification - Continuously captures images from the camera and runs them through the Edge Impulse inference engine to predict which bird is present. Again, I trained the same five birds. Images were taken from the internet.
3. Live video streaming - The live stream from the camera can be viewed from anywhere with proper authentication.
4. Push notification - An instant notification is sent to the Telegram app when a bird is sighted.
Now let's take a deeper dive into the technology stack.
Bird Chirp Classification Using Edge Impulse
At a high level, the microphone continuously records 10 seconds of audio using the SoX utility and feeds it into the EI inference engine for prediction. I have trained five birds commonly sighted in our backyard, plus an "unknown" class made of random audio clips such as background noise, wind, cars, human speech, etc.
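For reference, the recording step is just a SoX command along these lines; this is a minimal sketch assuming a 16 kHz mono clip from the default ALSA device (the actual sample rate and file handling live in the repo's scripts):
# record a 10-second, 16 kHz, mono clip from the default ALSA input
sox -t alsa default -r 16000 -c 1 clip.wav trim 0 10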
I sliced the raw data into 5-second windows with a window increase of 1 second, so the windows start at 0, 1, 2, 3, 4 and 5 seconds and you get 6 samples from each 10-second recording.
I wanted to see results sooner, so I captured only 3 minutes of data for each bird and 6 minutes of "unknown". I would recommend you capture as much data as possible, at least 10 minutes for each bird.
I got a fairly well trained model, and live classification works pretty well. The only concern I found is with "unknown": when someone is speaking or there is other background noise, the model gives false predictions. If you know how to get a better result, please let me know in the comments with your suggestion.
Once I was satisfied with the model, I built the WebAssembly deployment, which is downloaded as a zip file containing a .wasm and a JavaScript file. I use these files later in my code.
The project is public and the community is invited to improve it further. https://studio.edgeimpulse.com/public/14310/latest
Bird Image Classification Using Edge Impulse
I collected all the images from the internet, but you will get more accurate predictions if you capture some live images with your Pi camera. Follow this tutorial on the EI website to better understand image classification.
Once your model is trained, build the WebAssembly deployment and copy those two files into the face-ei-inference/app folder.
This project is also public and the community is invited to improve it further. https://studio.edgeimpulse.com/public/14460/latest
Setup Telegram Bot
I used the Telegram app to send push notifications instead of creating my own app, to save some development time. You need to set up a Telegram bot, and the process is very simple. Head over to https://core.telegram.org/bots#3-how-do-i-create-a-bot, create your first bot, and save the access token somewhere. Next, start interacting with the bot by typing something, for example just "hello". Then open your browser and hit the URL below.
https://api.telegram.org/bot<token>/getUpdates
You will receive a JSON response. Copy the chat id ("chat.id") from it and save it as well. You will need both later.
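For example, with curl and jq installed you can pull the chat id straight out of that response (replace <token> with your bot token; this assumes the most recent update is the "hello" message you just sent):
curl -s "https://api.telegram.org/bot<token>/getUpdates" | jq '.result[0].message.chat.id'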
Head over to my GitHub repo https://github.com/just4give/bird-watcher-fin and click on the "Deploy with balena" button.
Please note that this will deploy my trained model. If you would like to deploy your own model, clone my repo and replace the edge-impulse-standalone.js and edge-impulse-standalone.wasm files under the "tweet/app" folder.
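Roughly, that looks like this (assuming the WebAssembly zip from EI Studio was extracted into your downloads folder; adjust the source paths to wherever you unzipped it):
git clone https://github.com/just4give/bird-watcher-fin
cd bird-watcher-fin
cp ~/Downloads/edge-impulse-standalone.js tweet/app/
cp ~/Downloads/edge-impulse-standalone.wasm tweet/app/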
Once the application is deployed, head over to "Device Services" and add the four variables shown in the screenshot above. Use the Telegram token and chat id you copied earlier, and choose the username and password you will need to provide to access the live stream.
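For illustration, the four values look something like this; the exact variable names are defined in the repo's code, so treat these names as placeholders and check the repo for the real ones:
TELEGRAM_TOKEN=123456789:ABC-your-bot-token
TELEGRAM_CHAT_ID=987654321
USERNAME=birdwatcher
PASSWORD=changeme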
Next, head over to "Device configuration" and make the changes shown in the screenshot above.
Make sure "audio" is "off" in DT Parameters.
Change DT Overlays to "googlevoicehat-soundcard".
BALENA_HOST_CONFIG_start_x = 1
BALENA_HOST_CONFIG_gpu_mem_256 = 192
BALENA_HOST_CONFIG_gpu_mem_512 = 256
BALENA_HOST_CONFIG_gpu_mem_1024 = 448
This is a very important configuration and depends on the DAC you are using. I used a "Raspiaudio" board. You can check this link for supported devices: https://sound.balenalabs.io/docs/audio-interfaces/
Next, enable the public device URL and click on the link, which will take you to the live video stream.
If you are using a USB microphone, you need to remove the DT Overlays value and set audio=on as in the screenshot below. You also need to use .asoundrc_usb in the Dockerfile.
# Use .asoundrc if you are using a speaker+microphone hat. If using a USB microphone, use .asoundrc_usb
#COPY .asoundrc /root/.asoundrc
COPY .asoundrc_usb /root/.asoundrc
EI Configuration
For better portability, create the service variables below and copy the EI project id and API key from the EI Studio dashboard. You don't need to download the wasm files manually anymore; every time your container is restarted, it will download the EI inference files automatically.
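As an illustration (the project id 14310 is my public audio project from above; the variable names here are placeholders, so check the repo for the exact ones it expects):
EI_PROJECT_ID=14310
EI_API_KEY=ei_xxxxxxxxxxxxxxxxxxxxxxxx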
WiFi Management
New feature added on 15th Feb 2021: you are no longer locked to the initial WiFi credentials you set up, and you can change your device's WiFi at runtime. Imagine you want to gift this device to someone away from your home WiFi. When they connect the device to power, balena will create a captive access point named BIRD-WIFI-AP (configurable in device variables). The user has to connect to that access point and open http://192.168.42.1:8100 in a browser, or scan the QR code below, which should open the captive WiFi management website on their phone. Select your WiFi network and connect; in a few minutes your device should be connected to the internet.
You can change the access point name and port in device variables as below.
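For example, something along these lines; the names are placeholders, so check the repo for the exact device variable names the WiFi service reads:
WIFI_AP_NAME=BIRD-WIFI-AP
WIFI_AP_PORT=8100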
Troubleshooting
If you find that the microphone or speaker is not working, check your audio card settings (SSH into the "tweet" service):
aplay -l
arecord -l
cat /root/.asoundrc
Make sure you see both the speaker and the microphone card in the output of the first two commands, and that the card numbers match those in the .asoundrc file.
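For reference, a minimal asym .asoundrc looks something like this; the card numbers below are only examples and must match what aplay -l and arecord -l report on your device:
pcm.!default {
    type asym
    playback.pcm "plughw:1,0"   # speaker card from aplay -l
    capture.pcm "plughw:2,0"    # microphone card from arecord -l
}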