1. Introduction:
In recent years, the elephant population has declined to an alarming level, largely due to illegal poaching and elephant-human conflicts in rural areas across many parts of the world. This project was created in support of the effort to save both human and elephant lives, and in response to the Elephant Edge call.
2. How it works:
Elephants often communicate with each other via audible sounds. These sounds are produced by rumbling vocalizations, trumpeting through the trunk, ear slapping, and foot stomping. Their behaviors combine these sounds into distinguishable sound signatures; in particular, in response to a life-threatening situation, the signature sounds occur frequently. Using the built-in microphone in the tracker collar, elephant sounds can be captured and classified continuously in real time. The machine learning model is based on a collection of real-life elephant sounds from many different scenarios, and was built using Edge Impulse Studio. Below are the 2 classes used to detect and classify the key sound signatures that trigger the Elephant-Edge tracker to send alert signals to the park rangers' monitoring dashboard (such as Avnet IoT Connect).
The elephant sound/voice resources used for training and testing can be collected from the sources below:
- https://www.elephantvoices.org/multimedia-resources/introduction.html
- https://www.youtube.com
- Animal zoos
All of the sound/voice sources are recorded into Edge Impulse Studio at the highest quality, a 48 kHz sampling frequency. A mobile phone was used as the sound-capturing device.
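As a rough illustration of the numbers involved, at a 48 kHz sampling rate a one-second classification window contains 48,000 samples. Below is a minimal sketch (plain Python; the window length and stride are hypothetical values for illustration, since the actual windowing parameters are configured in Edge Impulse Studio) of slicing a captured recording into overlapping windows for per-window classification:

```python
def frame_audio(samples, sample_rate=48_000, window_s=1.0, stride_s=0.5):
    """Slice a 1-D list of audio samples into fixed-length,
    overlapping windows for per-window classification."""
    win = int(sample_rate * window_s)   # samples per window (48,000 at 48 kHz)
    hop = int(sample_rate * stride_s)   # step between window starts
    return [samples[i:i + win]
            for i in range(0, len(samples) - win + 1, hop)]

# A 2-second recording at 48 kHz yields three half-overlapping 1 s windows.
recording = [0.0] * (2 * 48_000)
windows = frame_audio(recording)
```

Each window would then be fed through the feature-extraction (e.g. MFCC) and classification blocks configured in the Studio.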
Please see the setting details in the attachment section to configure the blocks below for training and testing the model.
Model classification: this consists of the 2 main classes below:
a. Elephant-Normal-Behavior: this includes elephant vocal and trumpet sounds from the scenarios below:
- Elephant calf calling to adult elephants
- Elephant calf crying in protest within the elephant family
- Male elephants fighting within the group
- Elephant calf frightened for reasons other than a threat from humans
- Elephants protesting among themselves
- An elephant threatening others within the group
b. Elephant-panic: this includes elephant vocal and trumpet sounds from the scenarios below:
- Elephant in defensive mode due to a life-threatening situation, especially when facing humans
- Elephants trying to deter major enemies such as lions and poachers
- Elephant calf separated after losing its mother, or the family being chased by enemies
From the above, the Elephant-panic class is the key signature that triggers the Elephant-Edge tracker to start calling for help.
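Since the Elephant-panic class is the trigger, the tracker firmware would typically avoid firing an alert on a single noisy window. Below is a minimal sketch (plain Python; the threshold and the number of consecutive windows are hypothetical tuning parameters, not values from this project) of a debounced trigger over successive classifier outputs:

```python
def should_alert(panic_scores, threshold=0.8, consecutive=3):
    """Return True once `consecutive` classification windows in a row
    score the Elephant-panic class at or above `threshold`."""
    run = 0
    for score in panic_scores:
        run = run + 1 if score >= threshold else 0
        if run >= consecutive:
            return True
    return False

# One spurious high score does not trigger; a sustained run does.
should_alert([0.2, 0.9, 0.3, 0.85, 0.9, 0.95])  # -> True (three in a row >= 0.8)
should_alert([0.2, 0.9, 0.3, 0.9, 0.2, 0.9])    # -> False
```

Requiring a sustained run of high-confidence windows trades a slightly slower alert for far fewer false alarms on the rangers' dashboard.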
TinyML models ready to use: there are several models for TensorFlow Lite and TensorFlow created from this project, ready to use on any device. Edge Impulse Studio provides a convenient way to deploy both a standalone classification application and firmware incorporating these models onto embedded IoT devices. These devices can run with or without an internet connection. Please choose the one that best fits your device.
For TensorFlow Lite:
- ei-pvinhha-project-1-nn-classifier-tensorflow-lite-float32-model.lite
- ei-pvinhha-project-1-nn-classifier-tensorflow-lite-int8-quantized-model.lite
- ei-pvinhha-project-1-nn-classifier-tensorflow-lite-int8-quantized-with-float32-input-and-output-model.lite
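The plain int8-quantized model expects its input features to be quantized using the scale and zero-point stored in the model's input tensor, whereas the "int8 quantized with float32 input and output" variant performs that conversion internally. Below is a minimal sketch (plain Python; the scale and zero-point values are hypothetical, for illustration only — in practice they are read from the TFLite interpreter's input details) of the standard affine quantization step:

```python
def quantize_int8(features, scale, zero_point):
    """Map float features to int8 via q = round(x / scale) + zero_point,
    clamped to the int8 range [-128, 127]."""
    return [max(-128, min(127, round(x / scale) + zero_point))
            for x in features]

# Hypothetical quantization parameters; with the real model, read them from
# interpreter.get_input_details()[0]["quantization"].
q = quantize_int8([0.0, 0.5, -1.0], scale=0.0078125, zero_point=0)
# -> [0, 64, -128]
```

The dequantization on the output side is the inverse, x = scale * (q - zero_point), which the float32-input/output variant also handles for you.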
For TensorFlow: