Forest Fire Prevention Using CNN Model with Two Frameworks
1Po-Lin Chen, 1Sih-Min Liu, and 2Pei-Jen Wang
1 Department of Power Mechanical Engineering,
National Tsing Hua University, Hsinchu, Taiwan,
E-mail: s112030504@m112.nthu.edu.tw, s112033639@m112.nthu.edu.tw
2 Department of Power Mechanical Engineering,
National Tsing Hua University, Hsinchu, Taiwan,
E-mail: pjwang@pme.nthu.edu.tw

Abstract

Wildfires have long been a troublesome issue for residents of North America. Although monitoring systems are widespread nowadays, such systems are hard to deploy in natural environments. We therefore implement a lightweight neural network on an edge device, the KR260, to distinguish fire from non-fire precisely. At first, we simply used color detection with a clustering method based on K-means, but this method did not predict fire well. Considering the performance of the KR260, the board we deploy on, we chose to train a simple CNN model on a PC and test samples on the board. With the proposed model, implemented in two frameworks, we obtain good results as evaluated by loss and accuracy. Finally, we run the forest fire prevention system on the embedded system to demonstrate that our method can predict wildfire precisely and can therefore be used in natural environments.
Keywords: Wildfire, Natural Environments, KR260, CNN Model.
1. Introduction
The North American wildfire season typically runs from spring to fall; however, as the effects of climate change intensify, disasters increasingly defy the expected seasonal patterns, occurring with ever-greater frequency and intensity. Wildfires usually break out in wilderness areas, and when disasters strike, addressing immediate needs is paramount. An outdoor device, however, needs to use power efficiently to extend its operating time, and the Kria KR260 Robotics Starter Kit embedded system meets this requirement. We implemented fire detection to report outdoor conditions: if a fire breaks out, the rescue team can extinguish it before it grows larger.
2. Related Work
The bill of materials (BOM) used in this work is listed below:
• Kria KR260 Robotics Starter Kit
• 64GB SD card
• PC (for model training)
Wildfire detection has been widely studied, with various techniques developed over the years. Traditional approaches often rely on humidity and temperature sensors or even thermal cameras. Although these methods are effective, they come with limitations such as high cost, limited coverage, and delayed response times. Recently, image-based techniques, particularly convolutional neural networks (CNNs), have emerged for wildfire detection; these models identify fire in images more accurately by extracting visual features. However, many existing CNN models are computationally intensive and unsuitable for real-time detection on edge devices. Our research addresses these limitations by developing a lightweight neural network optimized for deployment on the KR260 edge device. Unlike traditional sensor-based methods, our approach relies exclusively on image processing, enabling real-time wildfire detection in natural environments without additional sensors. By concentrating on efficient model architectures and leveraging the capabilities of the KR260 board, we strive to achieve precise and prompt wildfire detection.
3. Methodology
3.1. Activate the Board
The initial step is to use the Win32 Disk Imager tool to write an Ubuntu image onto the microSD card. Next, connect everything to the board: the microSD card, a USB keyboard, a USB mouse, a monitor, an Ethernet cable, and the power supply.
Fig. 1. The connection of KR260.
3.2. Set Up the Environment
After the power supply is connected, the device takes some time to boot. The system requires the password to be changed at first login. Then, enter the commands shown in Fig. 2 in the terminal.
Fig. 2. The Commands Entered in Terminal.
Following that, a virtual environment is set up and activated. The platform we use is Jupyter Notebook, a common tool in Anaconda.
3.3. Color Detection
Initially, we applied color detection in our study. As the figures below show, our code cannot clearly distinguish whether a scene is on fire. One difficulty is that a person wearing red clothing in the camera's field of view may be mistaken for fire. Another reason color detection is insufficient is that it is hard to design an algorithm that captures all the color changes of a burning forest; for instance, thick smoke obscures the fire's red color and makes the judgment more difficult. We therefore adopted another method.
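A minimal sketch of the kind of K-means color clustering we tried is shown below. The `looks_like_fire` helper and its thresholds are illustrative assumptions, not our exact algorithm; the K-means routine is a plain NumPy implementation for clarity.

```python
import numpy as np

def kmeans(pixels, k=3, iters=20, seed=0):
    """Plain K-means over RGB pixel values (pure NumPy, for illustration)."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to the nearest cluster center.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers, labels

def looks_like_fire(image, k=3, red_thresh=180, frac_thresh=0.2):
    """Flag an RGB image if a dominant cluster is strongly red/orange.
    Threshold values here are illustrative, not the paper's settings."""
    pixels = image.reshape(-1, 3).astype(float)
    centers, labels = kmeans(pixels, k=k)
    for j, (r, g, b) in enumerate(centers):
        frac = np.mean(labels == j)
        if r > red_thresh and r > 1.5 * g and r > 1.5 * b and frac > frac_thresh:
            return True
    return False
```

As the section notes, a rule of this kind fails on red clothing and smoke-dimmed flames, which is why we moved to a learned model.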
Fig. 3.a. The Scenario That Is About to Be on Fire.
Fig. 3.b. Testing Our Color Detection Algorithm by Applying a Non-Fire Image Above.
3.4. CNN Model
We implement a CNN model with the PyTorch and TensorFlow frameworks respectively. The figures below show the architecture of our model. To begin with, we train the model on our PC, and the code generates a file containing the model parameters (weights and biases). Subsequently, we load this file on the KR260 to run our system with accurate predictions.
3.4.1. TensorFlow Framework
Based on the TensorFlow framework, we execute the following process. First, we load the necessary libraries and initialize the data, including the class names (fire and non-fire) and the image input size. We then shuffle and standardize the training and testing images, using 90 percent of the data as the training set and 10 percent as the test set. Next, we build a convolutional neural network using Adam as the optimizer, train it with a batch size of 128 for 50 epochs, and save the trained model as "CNN_model.keras". On the KR260, we run the trained model with the following steps: convert the color space from BGR to RGB, resize the image, normalize it, and convert the data structure before predicting fire presence in the image. The following figure demonstrates the proposed model architecture in the TensorFlow framework.
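The on-board preprocessing steps just described can be sketched as follows. This is a NumPy illustration: the nearest-neighbour resize stands in for the library resize call, and `preprocess_bgr` is an illustrative name, not our actual function.

```python
import numpy as np

def preprocess_bgr(frame, size=64):
    """Preprocessing before inference: BGR -> RGB, resize to the
    model's 64x64 input, normalize to [0, 1], add a batch dimension."""
    rgb = frame[:, :, ::-1]                    # BGR -> RGB channel flip
    h, w = rgb.shape[:2]
    rows = np.arange(size) * h // size         # nearest-neighbour row indices
    cols = np.arange(size) * w // size         # nearest-neighbour col indices
    resized = rgb[rows][:, cols]               # (size, size, 3)
    norm = resized.astype(np.float32) / 255.0  # scale to [0, 1]
    return norm[None, ...]                     # (1, size, size, 3) for the model
```

The resulting array matches the channels-last layout that a Keras model saved as "CNN_model.keras" would expect.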
Fig. 4. Model Architecture by TensorFlow Framework
3.4.2. PyTorch Framework
Another widely used framework is PyTorch. The overall process is quite similar to that of TensorFlow, with several differences listed below:
• Set up the device to use either a CUDA GPU or a CPU.
• Set random seeds to ensure reproducibility of the results.
• The batch size in PyTorch is 256, which is twice that of TensorFlow.
• The input size is initialized to 32×32×3, whereas the input size in TensorFlow is 64×64×3.
• Set a learning rate scheduler so that it can adjust the learning rate during training, helping the model converge more efficiently and avoid local minima.
• Apply data augmentation during preprocessing, such as color jittering and random erasing.
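A minimal PyTorch sketch of the points above (fixed seeds, Adam, a step learning-rate scheduler, batch size 256, 32×32×3 inputs) might look like the following. The layer sizes and scheduler settings are assumptions for illustration, not the exact architecture of Fig. 5.

```python
import torch
import torch.nn as nn

# Fix the random seed for reproducibility, as listed above.
torch.manual_seed(42)

# Illustrative small CNN for 32x32x3 inputs (layer sizes assumed).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # -> 16x16x16
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 32x8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 2),  # two classes: non-fire (0), fire (1)
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Learning-rate scheduler: halve the LR every 10 epochs (values assumed).
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

# One illustrative training step on a random batch of 256 images.
x = torch.rand(256, 3, 32, 32)
y = torch.randint(0, 2, (256,))
loss = nn.CrossEntropyLoss()(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
scheduler.step()  # called once per epoch in the real loop
```

Note that PyTorch expects channels-first tensors (N, C, H, W), unlike the channels-last layout used in TensorFlow.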
Fig. 5. Model Architecture Listed by PyTorch Framework
4. Experimental Results
In this section, we discuss the differences between the two selected frameworks. We use cross-entropy as the loss function, with accuracy as the main benchmark for evaluating the models. Adam is selected as the optimizer for both models, and each model runs for 50 epochs.
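As a framework-agnostic illustration, the two evaluation metrics can be computed from raw logits as follows; this is a NumPy sketch, not the frameworks' built-in implementations.

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean cross-entropy loss from raw logits, via a
    numerically stable log-softmax."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(len(labels)), labels].mean())

def accuracy(logits, labels):
    """Fraction of samples whose arg-max prediction matches the label."""
    return float((logits.argmax(axis=1) == labels).mean())
```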
4.1. Dataset Overview
We use a fire dataset provided on Kaggle, a well-known website hosting various kinds of datasets [2]. The dataset contains 755 outdoor fire images and 244 nature images (rivers, foggy forests, and so on). Since the ratio of fire to non-fire images is unbalanced, we add nature images so that the numbers of fire and non-fire images are approximately equal. Images are assigned to the splits at random. In the PyTorch framework, we allocate 10% of the data to the test set, 10% to the validation set, and the remaining 80% to the training set. In the TensorFlow framework, we allocate 10% of the data to the test set and the remaining 90% to the training set.
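The random split described above can be sketched as follows; the function name, seed, and use of file paths are illustrative.

```python
import random

def split_dataset(paths, train=0.8, val=0.1, seed=0):
    """Shuffle image paths and split 80/10/10 as in the PyTorch
    pipeline; pass val=0.0 for the TensorFlow 90/10 split."""
    paths = list(paths)
    random.Random(seed).shuffle(paths)   # fixed seed for reproducibility
    n = len(paths)
    n_train = int(n * train)
    n_val = int(n * val)
    return (paths[:n_train],                      # training set
            paths[n_train:n_train + n_val],       # validation set
            paths[n_train + n_val:])              # test set
```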
4.2. Different Framework Analysis
4.2.1. TensorFlow Framework
In this study, we use the confusion matrix, loss, and accuracy to evaluate and analyze the performance of the model. Fig. 6 shows the confusion matrix of the model on the test set. Although the model performs well in classifying fire and non-fire, it sometimes confuses the two, possibly because it misclassifies early-stage fire images as non-fire. This issue could be mitigated by adding more early-stage fire images to the dataset.
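For reference, a confusion matrix of the kind shown in Fig. 6 can be computed with a minimal NumPy sketch:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=2):
    """Confusion matrix with rows = true class and columns =
    predicted class (0 = non-fire, 1 = fire)."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m
```

Off-diagonal entries in the fire row correspond to the early-stage fire images misclassified as non-fire.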
Fig. 6. Confusion Matrix Using Tensorflow Framework
Fig. 7 shows how the loss changes during training. The loss decreases rapidly in the early stages but plateaus after reaching a certain point. The curve still shows some fluctuations even though dropout layers and early stopping have been applied; as noted in Section 6, future work can explore more refined hyperparameter adjustments to further improve model performance.
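The early-stopping technique mentioned above can be sketched as follows; the patience and tolerance values are illustrative, not our actual training settings.

```python
class EarlyStopping:
    """Stop training when validation loss has not improved for
    `patience` consecutive epochs (values here are illustrative)."""
    def __init__(self, patience=5, min_delta=1e-4):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Call once per epoch; returns True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss       # improvement: reset the counter
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1       # no improvement this epoch
        return self.bad_epochs >= self.patience
```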
Fig. 7. Loss Function by 50 Epochs
The accuracy of the model on the training set and the test set is shown in Fig. 8. The model achieves around 98% accuracy on the test set, indicating good generalization ability.
Fig. 8. Final Accuracy Result
4.2.2. PyTorch Framework
In this study, we likewise use the confusion matrix, cross-entropy loss, and accuracy to analyze the performance of the model. The confusion matrix of the model on the test set is depicted in Fig. 9, where 0 represents non-fire and 1 represents fire.
Fig. 9. Confusion Matrix Using PyTorch Framework
We can observe from Fig. 10 that the loss in the PyTorch framework exhibits larger fluctuations, especially the validation loss. We may therefore infer some overfitting, since the model performs better during training than during validation. Moreover, because the validation set is small, the validation data in each batch exhibit higher variance.
Fig. 10. Loss Function by 50 Epochs
Fig. 11. Final Accuracy Result
4.3. Validation of Prediction
Apart from the accuracy reported by the code, we test individual images to validate whether the model performs well. The tables below show the results for 30 randomly selected fire images and 30 non-fire images for each framework.
4.3.1. TensorFlow Framework
The figures below demonstrate how we predict forest fire in practice by testing fire and non-fire images.
Fig. 12. Test Fire Image by TensorFlow Framework
Fig. 13. Test Non-Fire Image by TensorFlow Framework
Compared to the confusion matrix in the previous section, Table 1 shows that the non-fire class achieves higher accuracy in real situations. One reason may be that the fire dataset contains some images of scenes that are only about to catch fire. In addition, randomly selecting sample images introduces uncertainty, so the model sometimes mistakes one class for the other.
Table 1. Ground Truth Table by TensorFlow Framework. (Unit: number of images.)
4.3.2. PyTorch Framework
Fig. 14. Test Fire Image by PyTorch Framework
In contrast to the confusion matrix presented in Section 4.2.2, Table 2 shows that, of the 30 samples in each class, 3 are misclassified in both the fire and non-fire categories. This could stem from the model's difficulty in discerning fire and non-fire images that share similar visual attributes; moreover, the random selection of sample images introduces variability that occasionally results in misclassification.
Table 2. Ground Truth Table by PyTorch Framework. (Unit: number of images.)
5. Conclusion
In this study, our objective is to implement a lightweight neural network on an edge device, the KR260, to address the challenge of wildfire detection in natural environments. Initially, we experimented with a color detection method based on K-means clustering; however, its accuracy in predicting fire did not meet our needs. Considering the performance limitations of the KR260, we opted to train a simple convolutional neural network (CNN) model on a PC and test its performance on the KR260 board.
Our proposed CNN model performs well in evaluations of loss and accuracy across both frameworks, indicating its effectiveness in distinguishing between fire and non-fire scenarios. Finally, integrating the model into an embedded system for forest fire prevention demonstrates the ability of our method to accurately predict wildfires in natural environments.
6. Future Work
For future work, we aim to enhance our wildfire detection system by exploring more advanced neural network architectures and optimization techniques to further improve model performance. Expanding the dataset with diverse and challenging fire scenarios, such as forest animals whose coloring resembles fire, will also be a priority to ensure the reliability of the system across different natural environments. Our goal is to achieve real-time wildfire monitoring solely through image processing, without temperature or humidity sensors. Finally, real-world deployment and long-term monitoring will be conducted to validate the effectiveness of the system in actual wildfire prevention efforts.
Acknowledgement
We would like to express our gratitude to all who contributed to the successful completion of this research. Firstly, we thank our advisor, Pei-Jen Wang, for his unwavering support, invaluable guidance, and encouragement; his insights and suggestions have significantly enhanced the quality of this work. We also extend our appreciation to our lab mates for their assistance; their feedback and advice have been instrumental in refining our ideas and approaches. Moreover, we are grateful to AMD for sponsoring the KR260 edge device and for providing the funding, resources, facilities, and equipment required for the experiments. Finally, we are profoundly thankful to our family and friends for their steadfast support and understanding; their patience and encouragement have been a true source of strength for us.
References
[1] "Getting Started with Kria KR260 Robotics Starter Kit, " AMD, Available: https://www.amd.com/en/products/system-on-modules/kria/k26/kr260-robotics-starter-kit/getting-started/connecting-everything.html. [Accessed: June, 2024].
[2] Kaggle, Fire Dataset, https://www.kaggle.com/datasets/phylake1337/fire-dataset, accessed on: June 28, 2024.
[3] Emanuel Sousa Tomé, Rita P. Ribeiro, Inês Dutra, and Arlete Rodrigues, "An Online Anomaly Detection Approach for Fault Detection on Fire Alarm Systems, " Sensors, vol. 23, no. 10, pp. 4902, May 2023. Available online: https://doi.org/10.3390/s23104902.
[4] Hao Wu, Deyang Wu, Jinsong Zhao, "An intelligent fire detection approach through cameras based on computer vision methods, " Process Safety and Environmental Protection, vol. 127, pp. 245-256, July 2019.
[5] Turgay Celik, Kai-Kuang Ma, "Computer vision based fire detection in color images, " in 2008 IEEE Conference on Soft Computing in Industrial Applications.
[6] Xuanxuan Hong, Wei Wang, Quanli Liu, "Design and Realization of Fire Detection Using Computer Vision Technology, " in 2019 Chinese Control And Decision Conference (CCDC).
[7] Viktor Tuba, Romana Capor-Hrosik, Eva Tuba. (2017) Forest Fires Detection in Digital Images Based on Color Features. International Journal of Environmental Science, 2, 66-70
[8] Zhao, R., Niu, X., Wu, Y., Luk, W., Liu, Q. (2017). Optimizing CNN-Based Object Detection Algorithms on Embedded FPGA Platforms. In: Wong, S., Beck, A., Bertels, K., Carro, L. (eds) Applied Reconfigurable Computing. ARC 2017. Lecture Notes in Computer Science(), vol 10216. Springer, Cham. https://doi.org/10.1007/978-3-319-56258-2_22
[9] K. Muhammad, J. Ahmad, Z. Lv, P. Bellavista, P. Yang and S. W. Baik, "Efficient Deep CNN-Based Fire Detection and Localization in Video Surveillance Applications, " in IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 49, no. 7, pp. 1419-1434, July 2019, doi: 10.1109/TSMC.2018.2830099.