For decades, protecting endangered wildlife has been a race against time—and against poachers.
In remote forests, park rangers often work alone, covering vast areas with limited resources. While illegal hunting threatens species already on the brink, most detection methods are reactive, not proactive.
Enter Rana Loca: a smart forest sensor that listens to the wild. Built with Arduino and powered by machine learning, it distinguishes between natural sounds, animal calls, and gunshots. When a threat is detected, it sends a real-time alert to rangers via a mobile app, GPS coordinates included.
What used to take hours—or never be discovered at all—can now be acted upon in seconds. We’re not just listening to nature. We’re protecting it.
Before beginning the physical build, we carefully planned the power and connectivity layout of our device. The diagram shown below was created to clearly represent the relationship between all essential components, allowing us to anticipate power needs, data connections, and real-world constraints.
In our setup, the solar power bank is responsible for charging a LiPo battery, which acts as the primary energy reservoir. To meet the voltage requirements of both the microcontroller and communication module, the battery’s output is stabilized through a boost converter, delivering a consistent 5V supply.
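As a rough sanity check on this power path, expected runtime can be estimated from battery capacity, average current draw on the 5V rail, and the boost converter's efficiency. The sketch below uses illustrative assumptions (a 2000 mAh cell, ~20 mA average load, 90% converter efficiency), not measured values from our build:

```cpp
// Estimate runtime in hours for a LiPo cell feeding a 5 V boost converter.
// capacity_mah: battery capacity at its nominal 3.7 V
// avg_load_ma:  average current drawn by the 5 V rail
// efficiency:   boost converter efficiency (0..1)
double runtimeHours(double capacity_mah, double avg_load_ma, double efficiency) {
    // Energy stored in the cell (mWh) at nominal voltage
    double energy_mwh = capacity_mah * 3.7;
    // Power pulled from the battery to keep the 5 V load supplied
    double input_power_mw = (avg_load_ma * 5.0) / efficiency;
    return energy_mwh / input_power_mw;
}
```

With those hypothetical figures the estimate works out to roughly 66 hours, i.e. a couple of days of operation even with no solar input, which is the margin the solar power bank is meant to extend indefinitely.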
The central processing unit is the Seeed Studio XIAO nRF52840 Sense, a compact, energy-efficient board with a built-in microphone. It connects to the SIM800L GSM module via UART (TX/RX), enabling real-time data transmission over the cellular network.
This diagram helped us validate our power path and signal connections before wiring the physical components. It also serves as a clear visual reference for future debugging or replication by others.
Visual clarity and pre-assembly planning were crucial to minimizing trial-and-error and ensuring a robust final build.
To enable intelligent sound detection in our project, we implemented a machine learning model using Edge Impulse, a platform optimized for embedded AI. The system is capable of classifying three distinct types of audio events: nature sounds, gunshots, and weather phenomena.
The model runs on the Seeed Studio XIAO nRF52840 Sense, a compact, low-power microcontroller board with a built-in microphone. This makes it well suited to real-time audio inference in forest environments, where connectivity and energy are limited.
We trained a convolutional neural network (CNN) with a simple, efficient architecture: two 1D convolutional layers, dropout regularization, and a softmax output over the three target classes. A total of 1,950 input features were fed through this structure. The model was trained for 100 cycles with a learning rate of 0.005, using CPU-based training and no data augmentation.
The training process yielded the following performance:
- Accuracy: 98.1%
- Loss: 0.04
- F1 Score: 0.98 (Nature), 0.98 (Weapon), 0.97 (Weather)
- Inference time: 7 ms
- Peak RAM usage: 15.1 KB
- Flash usage: 50.7 KB
The model was compiled with the EON Compiler (RAM-optimized), allowing it to run efficiently within the XIAO's limited hardware resources.
The Data Explorer visualization showed well-separated clusters for the three classes, and the confusion matrix confirms high reliability, particularly for nature and weapon sounds, which are classified with minimal error.
This setup enables fully local, real-time detection of critical sound events, without needing cloud computation—ensuring both speed and reliability in off-grid forest scenarios.
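Once the model returns its softmax probabilities, the on-device decision step reduces to picking the most likely class and checking it against a confidence threshold before raising an alert. A minimal sketch of that logic in plain C++ (the class order and the 0.8 threshold are assumptions for illustration, not values taken from our firmware):

```cpp
#include <array>
#include <cstddef>
#include <string>

// Class order assumed to match the model's output layer.
const std::array<std::string, 3> kLabels = {"nature", "weapon", "weather"};

// Return the label of the most likely class, or "uncertain" if no class
// clears the confidence threshold.
std::string classify(const std::array<float, 3>& probs, float threshold = 0.8f) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < probs.size(); ++i) {
        if (probs[i] > probs[best]) best = i;
    }
    return probs[best] >= threshold ? kLabels[best] : "uncertain";
}
```

The threshold trades false alarms against missed detections; in the field, only a confident "weapon" classification should trigger the GSM alert described below.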
To ensure reliable communication in remote forest areas, our project relies on GSM connectivity instead of Bluetooth. Unlike Bluetooth, which only works at short range, GSM enables the device to send real-time alerts over long distances, even when no ranger is anywhere nearby.
When a gunshot is detected by the sensor, the onboard microcontroller activates a GSM module (SIM800L) to either send an SMS or make an automated call to the park ranger. This allows immediate awareness of the event, without requiring proximity or manual checks.
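For reference, sending an SMS through a SIM800L follows the standard AT command flow: select text mode with AT+CMGF=1, start the message with AT+CMGS and the recipient's number, then send the body terminated by Ctrl+Z (0x1A). The helper below only assembles that sequence as strings; the number and message text are placeholders, and on the real device these would be written over the UART link, waiting for the modem's ">" prompt before the body:

```cpp
#include <string>
#include <vector>

// Build the AT command sequence to send one SMS via a SIM800L modem.
std::vector<std::string> smsCommands(const std::string& number,
                                     const std::string& body) {
    return {
        "AT+CMGF=1\r",                   // select SMS text mode
        "AT+CMGS=\"" + number + "\"\r",  // start a message to the recipient
        body + "\x1A"                    // message body + Ctrl+Z submits it
    };
}
```

In practice the body would carry the detection label and the sensor's GPS coordinates, so the ranger's phone receives an actionable, location-stamped alert.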
To power the system autonomously, we propose the use of a solar-powered power bank, ensuring that the device remains operational in the field without constant human intervention. This makes the system ideal for deployment in protected natural areas or wildlife reserves.
The setup is compact, low-power, and scalable—capable of operating for extended periods with minimal maintenance.
As part of the Rana Loca system, we designed a fully functional app prototype using Figma, focused on usability, clarity, and quick access to critical information.
The interface is structured around three core screens:
- Map View: Displays all deployed sensors on a satellite map. When a gunshot is detected, the corresponding marker turns red, allowing rangers to immediately identify the location of the incident.
- History: A log of past detections, including time, location, and confirmation status. Each event can be reviewed and validated, helping to track repeated activity in the same area.
- Settings: Allows user profile management and access to options like notifications, language, and device details.
Tapping a sensor on the map opens a detailed view with coordinates, a photo of the area, and sensor data. The interface also includes alert management, where rangers can confirm or dismiss threats. The app was designed to simulate the full user experience and sets a solid foundation for real-time field deployment.
Check out our demo via the link below.
Live demo of the Rana Loca app in action — map, alerts, and history at your fingertips.