Year after year, the Andean region of South America faces emergency situations caused by flash floods that take the civilian population by surprise. This project aims to generate early warnings that allow a timely reaction to disasters caused by floods or avalanches.
HOW? It is true that some early warning solutions already exist; however, they tend to be expensive (in investment, space occupation, and maintenance). The idea is to build a low-cost system based on computer vision and complementary sensors that can generate early warnings.
Because the nodes are low cost, it becomes feasible to deploy a network of them, monitoring more water flows and providing more data for building flood-prediction models, and thus to take preventive measures.
The system runs a trained model for the detection of flash floods and generates an alarm when a defined detection threshold is exceeded, sending the alert by email through the BLYNK platform.
STEP 0 - SOME RESTRICTIONS
Since the LoRa service on the HELIUM platform does not have coverage in my area, the BLYNK service was used instead for the transmission of the relevant data, including the flood alert signal.
STEP 1 - BUILD THE DATASET
Following this guide, a dataset was built on the ROBOFLOW platform using photographs available on the Internet and frames captured from some YOUTUBE videos.
In total, 322 images were collected; using the DATA AUGMENTATION option, the dataset was scaled up to 802 images.
STEP 2 - TRAIN, SOAK AND REPEAT...
After properly labeling the dataset, open this notebook in GOOGLE COLAB, follow the sequence presented there, and run several training tests, changing the BATCH size and the number of EPOCHS until an "acceptable" result is obtained.
For this case, the training configuration with the best performance was the following:
!python3 train.py --img 192 --batch 64 --epochs 200 --data {dataset.location}/data.yaml --cfg yolov5n6-xiao.yaml --weights yolov5n6-xiao.pt --name yolov5n6_results --cache
giving the following results:
To confirm these results, some images and videos that are not part of the dataset were run through the model.
!python3 detect.py --weights runs/train/yolov5n6_results/weights/best-int8.tflite --img 192 --source /content/pic1.jpg --data {dataset.location}/data.yaml
!python3 detect.py --weights runs/train/yolov5n6_results/weights/best-int8.tflite --img 192 --source /content/pic2.jpg --data {dataset.location}/data.yaml
!python3 detect.py --weights runs/train/yolov5n6_results/weights/best-int8.tflite --img 192 --source /content/pic3.jpg --data {dataset.location}/data.yaml
!python3 detect.py --weights runs/train/yolov5n6_results/weights/best-int8.tflite --img 192 --source /content/pic4.jpg --data {dataset.location}/data.yaml
giving...
also passing a video...
!python3 detect.py --weights runs/train/yolov5n6_results/weights/best-int8.tflite --img 192 --source /content/vsample1.mp4 --data {dataset.location}/data.yaml
giving...
With these results, it can be concluded that the model is reliable enough to migrate to the Grove - Vision AI module.
Tip: this training stage may take time and patience during the trial-and-error process until the most suitable result is obtained.
Deployment of the model is simple: connect the Grove - Vision AI module to the PC via USB, press the BOOT button twice, wait for the PC to recognize the storage device, and copy the model-1.uf2 file (see step 3 in this guide).
The code is quite simple: it reads the detections from the computer vision module, measures the confidence level of each detection, and, when a minimum confidence threshold is exceeded, increments an event counter to filter out false positives. Once the detection has been validated, the alert message is sent to the dedicated virtual variable in BLYNK (the code is shown below; it is a modification of the code available HERE).
// Template ID, Device Name and Auth Token are provided by the Blynk.Cloud
// See the Device Info tab, or Template settings
#define BLYNK_TEMPLATE_ID "***"
#define BLYNK_DEVICE_NAME "***"
#define BLYNK_AUTH_TOKEN "***"
// Your WiFi credentials.
// Set password to "" for open networks.
char ssid[] = "***";
char pass[] = "***";
TIP: be careful with this section of the code; the *** placeholders should be replaced with your WiFi hotspot data and your BLYNK application data.
if (data.confidence > 50)
{
  count++;
  tft.drawNumber(count, 280, 220);
}
if (count >= 5)
{
  tft.setTextColor(TFT_CYAN);
  tft.drawString("Send ALARM...", 70, 160);
  Blynk.virtualWrite(V3, 1);
  delay(1000);
}
This section of the code counts the events where a detection has a confidence greater than 50% and uses the count as a validator of positive occurrences; when the repetition threshold is exceeded, the alert is sent on the virtual variable V3, which identifies the flash flood alarm.
STEP 5 - SET UP THE BLYNK DASHBOARD
The setup was made following this guide, adding a switch-type signal to identify the occurrence of the alarm associated with the flood danger. Below are two possible states of the dashboard: in the first there is no flash flood detection, and in the second a flash flood alarm has been generated:
For the alarm signal, an integer-type datastream is defined with allowed values zero and one. This datastream is linked to the dashboard by means of an LED-type widget to visualize its change of state when the WIO Terminal transmits the value 1 on the V3 signal.
STEP 6 - SET AUTOMATION
In the templates section, within the AUTOMATION menu, you can select the datastreams that will be subject to process automation. In this case, the datastream associated with the V3 variable is selected so that it behaves as a SWITCH-type condition.
Once the datastream that will have automation functionality has been selected, the automation process itself is defined in the AUTOMATIONS menu.
This automation process sends an email when a flash flood alert is generated by the Wio Terminal, complementing the computer-vision detection system with alerts issued via email to a defined list of recipients; integrations with other web services could even be added to issue mass alerts (for example, to social networks or emergency organizations).
FURTHER WORK
- improve the quality of the dataset
- implement an AI model architecture that can perform better with these types of data
- include additional sensors to give greater reliability (like level sensor, rain sensor, weather sensors...)
- implementation of other web services to send mass alerts in real time
- migration to another platform for wireless communication (since HELIUM coverage is not available): cellular network, other LoRa networks, Sigfox...