We humans are occupying the planet inch by inch and, in doing so, taking away the living space of animals. Destroying forests in the name of development has serious consequences, such as human-animal conflicts: animals in search of food wander into human settlements and damage crops, attack cattle, or even harm people. This issue is especially serious in developing regions such as India and parts of Africa. According to the statistics, deaths caused by wild-animal attacks have been gradually increasing over the past few years.
In the three years between 2015 and 2018, human-elephant conflict caused 1,713 human and 373 elephant deaths by unnatural causes, including electrocution and poaching. Experts say various factors, including habitat disturbance and urbanisation, could be behind the alarming rise in unnatural human and animal casualties. These figures are from India, but the same trend holds in African countries, where the African elephant population is falling at an alarming rate.
Image Source: The Hindu
Another major reason for the fall in the elephant population is poaching: cruel people kill elephants for their ivory and sell it on the black market.
The Planned solution:
Elephants and sound:
Elephants can communicate using very low-frequency sounds, with pitches below the range of human hearing. These low-frequency sounds, termed "infrasounds," can travel several kilometres, and provide elephants with a "private" communication channel that plays an important role in elephants' complex social life. Their frequencies are as low as the lowest notes of a pipe organ.
Although the sounds themselves have been studied for many years, it has remained unclear exactly how elephant infrasounds are made. One possibility, favoured by some scientists, is that the elephants tense and relax the muscles in their larynx (or "voice box") for each pulse of sound. This mechanism, similar to cats purring, can produce sounds as low in pitch as desired, but the sounds produced are generally not very powerful.
The other possibility is that elephant infrasounds are produced like human speech or singing, but because the elephant larynx is so large, they are extremely low in frequency. Human humming is produced by vibrations of the vocal folds (also called "vocal cords"), which are set into vibration by a stream of air from the lungs, and don't require periodic muscle activity. By this hypothesis, elephant infrasounds result simply from very long vocal folds slapping together at a low rate and don't require any periodic tensing of the laryngeal muscles.
With the help of TinyML and awesome tools like edgeimpulse.com, we can train a voice model to detect the condition of an elephant just by recognising its sound. After recognising the sound, we send this information to the nearest gateway via LoRa communication. Enclosing everything in a proper case lets us use the device as a smart collar that tracks and reports the elephant's location with the help of GPS and LoRa.
The block diagram of the system is as follows.
Reasons for choosing these components:
1. Artemis: TinyML support and very low power consumption.
2. MEMS microphone [part of the Artemis ATP board]: a good response across a wide range of frequencies.
3. 18650 battery: compact, with high energy density.
4. LoRa: long-range communication without any extra requirements such as SIM cards.
5. GPS: precise geotagging of the elephant's location.
6. Solar panel: for charging the battery.
STEP 2: Training the model. I wasn't able to find the elephant voice data sets I needed on the ElephantVoices website, so I had to work with whatever was available. I got some sounds from https://www.zapsplat.com/.
I'm not going to explain how to setup and use edge impulse in order to train and build your model as it has been covered by various other tutorials already. Here's the one by Daniel the great himself.
By the way, the audio samples I got from Zapsplat were not long, so I had to divide them into small pieces before feeding them to Edge Impulse. To do this I used Audacity, which is free and easy to use.
Here I'll be detecting 4 things through audio:
- Angry elephant sound
- Calm elephant sound
- Human voices, i.e. for poaching detection
- Woodcutting machine sound to detect deforestation
After training, I'll be able to detect each of these sounds and report them to the concerned authorities via LoRa communication.
I have chosen the SparkFun RedBoard Artemis as the brain because it supports TinyML and is easy to configure. Communication is carried out through LoRa modules, which are easy to set up and have a range of several kilometres. A Neo-6M GPS module has been added for location data. Since I already had this OLED module, I will just connect it to my board to get a visual indication of the detection.
The receiver end is simply a LoRa module connected to a USB-to-UART adapter, or an ESP32 running IoTConnect, which lets us visualise the data on the IoTConnect dashboard.
Currently, I can demo using the USB-to-UART adapter with the data shown on a terminal screen; this can be extended according to the requirement.
For the Artemis ATP board there's a basic wake-word detection example, which can be programmed to detect "Yes" and "No" and is a great starting point for TinyML on the Artemis board. Here's a tutorial on that.
I have used this example as the basic framework for my project and built on top of it. I have the trained model from Edge Impulse, which I will deploy on the Artemis using the exported C++ file.
Along with this code, I will have to initialise software serial, since both the GPS and the LoRa module communicate over UART.
You can find the code, schematics, audio used for training in my GitHub repository.
Since my data set is not that large, the accuracy of the model is a bit low, around 80%. With more data I would be able to build a better, more accurate model.
By the way, even though the goal was just to make a model and test its accuracy, I wanted to go a step further and build my own collar setup.
Model and code can be found here.
STEP 6: Future enhancements. I would like to make these enhancements if I get to keep working on the smart collar project.
- Improved battery life for long sustenance
- Elephants emit comparatively more body heat, so temperature could be used as a health indicator or vital parameter
- Adding sensors for water detection, e.g. to detect a swimming elephant
- Adding an accelerometer + gyroscope for more information on movement, posture, etc.
- Push notifications to rangers and guards in protected forests and national parks in the case of violent movements or poaching