Falls are a significant concern, particularly for the elderly and individuals with medical conditions, as they can lead to serious injuries and even fatalities. Fall detection systems aim to identify such incidents promptly, ensuring timely assistance and reducing the risk of severe consequences. With the advancement of artificial intelligence and machine learning, fall detection has become more accurate and reliable.
In my previous blogs, "Get Started With Jetson Nano Developer Kit" (https://www.hackster.io/pattshibang/get-started-with-jetson-nano-developer-kit-281c38) and "Running YOLOv12 on Jetson Nano 4GB: A Comprehensive Guide" (https://www.hackster.io/pattshibang/running-yolov12-on-jetson-nano-4gb-a-comprehensive-guide-f9042e), we explored the basics of setting up the Jetson Nano and running the YOLOv12 model for object detection. Building on that foundation, we will now dive into customizing YOLOv8 specifically for fall detection on the Jetson Nano.
YOLOv8 (You Only Look Once, Version 8, https://docs.ultralytics.com/models/yolov8/) is a state-of-the-art object detection model known for its speed and accuracy. Its ability to quickly and accurately identify objects in real time makes it an ideal choice for fall detection applications. By customizing YOLOv8, we can train the model to specifically recognize fall events, enhancing its effectiveness in detecting and responding to such incidents.
The Jetson Nano, a powerful yet affordable edge AI platform developed by NVIDIA, is an excellent choice for deploying YOLOv8-based fall detection systems. Its compact size, low power consumption, and robust GPU capabilities make it well-suited for running complex AI models at the edge. This combination enables real-time fall detection and immediate alerts, ensuring prompt assistance and improving safety in various environments.
In this blog post, we will explore the process of customizing YOLOv8 for fall detection and deploying it on the Jetson Nano, providing a comprehensive guide for building an effective and efficient fall detection system.
2. Preparing the Dataset
In this tutorial, we use a fall detection dataset from Roboflow Universe (https://universe.roboflow.com/universe-datasets), as shown in the figure below.
Download the dataset in your desired format, as shown in the figure below.
The dataset is well-organized and includes images annotated in JSON format, divided into three splits: train, test, and valid. It features three classes (stand, falling, and fallen) that are crucial for training a fall detection model. The YAML file and additional text files provide configuration settings and instructions that make the dataset easy to use in machine learning workflows.
Since YOLOv8 is an Ultralytics product, the Ultralytics documentation provides full details on training YOLOv8 and later versions: https://docs.ultralytics.com/. Customizing YOLOv8 on custom data, as in this fall detection tutorial, simply means training YOLOv8 on a specific target dataset.
3.1. Preparing the Dataset:
- To customize YOLOv8 for fall detection, you'll need a specific dataset tailored to your application. In this tutorial, we focus on a dataset that includes images of people in different states: standing, falling, and fallen.
- Ensure the dataset is well-organized with images divided into training, testing, and validation sets. Annotations should be in JSON format, and the dataset should be structured with YAML configuration files for easy setup.
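As a quick sanity check before training, you can verify that the export is laid out the way the YAML file expects. The short sketch below assumes a typical Roboflow export with train, valid, and test folders, each containing images/ and labels/ subfolders; adjust the root path and folder names to match your own download.
from pathlib import Path
# Hypothetical dataset root -- point this at the folder that contains data.yaml.
dataset_root = Path(r"C:\Users\user\Documents\ALL DATASET\Fall_Dataset2")
for split in ("train", "valid", "test"):
    images = list((dataset_root / split / "images").glob("*.*"))
    labels = list((dataset_root / split / "labels").glob("*.txt"))
    print(f"{split}: {len(images)} images, {len(labels)} label files")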
3.2. Dataset Annotation:
- Each image in the dataset should have corresponding annotations that specify the bounding boxes and class labels. For fall detection, there are three classes: stand, falling, and fallen.
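Note that to train with Ultralytics, the labels ultimately need to be in the YOLO text format (one .txt file per image, one line per object, containing the class index and a normalized bounding box); Roboflow can export the dataset in this format directly. The snippet below is a minimal sketch of how such a label line is read; example_label.txt is a hypothetical file name.
# Read one YOLO-format label file: each line is
# "class_id x_center y_center width height", normalized to the 0-1 range.
names = ["stand", "falling", "fallen"]
with open("example_label.txt") as f:
    for line in f:
        class_id, xc, yc, w, h = line.split()
        print(f"{names[int(class_id)]}: center=({xc}, {yc}), size=({w}, {h})")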
3.3. Configuring the Training Environment:
- Use the provided YAML file to configure the training parameters. This file includes the paths to the dataset on your computer, the number of classes, and other necessary settings.
- An example YAML configuration might look like this:
train: /path/to/train/images
val: /path/to/valid/images
test: /path/to/test/images
nc: 3 # Number of classes
names: ['stand', 'falling', 'fallen']
3.4. Train the Model
from ultralytics import YOLO
# Load the model.
model = YOLO('yolov8n.pt')
# Training.
results = model.train(
    data=r"C:\Users\user\Documents\ALL DATASET\Fall_Dataset2\data.yaml",
    imgsz=640,
    epochs=200,
    batch=8,
)
# Export the model to ONNX format
success = model.export(format='onnx')
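By default (unless you override the project and name arguments), Ultralytics writes the training outputs to a runs/detect/train/ folder, with the checkpoints stored in its weights/ subfolder; best.pt from that folder is the file you would typically keep and reuse for inference.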
3.5. Fall Detection Architecture
The figure below illustrates the fall detection architecture. It involves fine-tuning the pretrained YOLOv8 model by freezing certain layers' weights and passing the output through a dense layer for fall motion classification.
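The exact fine-tuning setup in the figure (which layers are frozen and the dense classification head) is specific to this project, but Ultralytics exposes a freeze training argument that keeps the first N modules of the pretrained network fixed. The sketch below shows how that argument could be used to approximate the idea; the value of 10 frozen layers is an assumption, not taken from the original training run.
from ultralytics import YOLO
# Fine-tune with part of the pretrained network frozen.
model = YOLO('yolov8n.pt')
results = model.train(
    data=r"C:\Users\user\Documents\ALL DATASET\Fall_Dataset2\data.yaml",
    imgsz=640,
    epochs=200,
    batch=8,
    freeze=10,  # keep the first 10 modules' weights fixed (assumed value)
)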
3.6. Evaluate the Model
The evaluation results demonstrate a Mean Average Precision (mAP) of 94%, a Precision of 90%, and a Recall of 91.9%. The figures provide a visualization of our model's performance, illustrating its robustness in detecting falls through training result graphs and a confusion matrix.
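If you want to reproduce these metrics on your own machine, Ultralytics can evaluate the trained weights on the validation split. A minimal sketch, assuming the trained weights file used later for inference:
from ultralytics import YOLO
# Evaluate the trained fall detection weights on the validation split.
model = YOLO(r"C:\Users\Documents\My_pretrained\falling-3c.pt")
metrics = model.val(data=r"C:\Users\user\Documents\ALL DATASET\Fall_Dataset2\data.yaml")
print(f"mAP@0.5:   {metrics.box.map50:.3f}")
print(f"Precision: {metrics.box.mp:.3f}")
print(f"Recall:    {metrics.box.mr:.3f}")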
5.1. Inference with Image
# FALL DETECTION
from ultralytics import YOLO
import cv2
# Load the trained YOLOv8 fall detection model.
model = YOLO(r"C:\Users\Documents\My_pretrained\falling-3c.pt")
# Load the image.
image = cv2.imread(r"C:\deep\fl6.JPG")
# Run YOLOv8 on the image.
detections = model(image)
# Draw the detections and display the annotated image.
annotated_image = detections[0].plot()
cv2.imshow("YOLOv8 Fall Detection", annotated_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
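Once you have the detections, you can inspect the predicted classes and, for example, raise an alert whenever a fallen person is found. The snippet below is a minimal sketch of that idea; the alert is just a print statement that you would replace with your own notification logic.
# Inspect the predicted classes and flag any "fallen" detection.
for result in detections:
    for box in result.boxes:
        class_name = model.names[int(box.cls)]
        confidence = float(box.conf)
        print(f"Detected {class_name} with confidence {confidence:.2f}")
        if class_name == "fallen":
            print("ALERT: fall detected!")  # replace with your own alert mechanism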
5.2. Inference with Video
import cv2
from ultralytics import YOLO
# Load the trained fall detection weights (yolov8n.pt would only detect generic COCO classes).
model = YOLO(r"C:\Users\Documents\My_pretrained\falling-3c.pt")
# Open the webcam (or pass a video file path instead of 0).
cap = cv2.VideoCapture(0)
# cap = cv2.VideoCapture('/home/infinity/Videos/.mp4')
while cap.isOpened():
    success, frame = cap.read()
    if success:
        # Run fall detection on the current frame.
        results = model(frame)
        # Draw the detections and display the annotated frame.
        annotated_frame = results[0].plot()
        cv2.imshow("YOLOv8-Fall detection Inference", annotated_frame)
        # Quit when "q" is pressed.
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        break
cap.release()
cv2.destroyAllWindows()
6. Hardware Set-up
The figure below illustrates the Jetson Nano hardware setup, which includes:
- Jetson Nano Developer Kit
- Monitor with HDMI input capability
- Keyboard and mouse for user interaction
- Camera for capturing video
- Camera stand for stabilizing the camera during use
7. Deploy on Jetson Nano
To deploy YOLOv8 on the Jetson Nano, I recommend checking out my previous article: https://www.hackster.io/pattshibang/running-yolov12-on-jetson-nano-4gb-a-comprehensive-guide-f9042e. Follow the same deployment steps described there, but instead of YOLOv12, use the pretrained YOLOv8 fall detection model.
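Optionally, since the Jetson Nano has an NVIDIA GPU, you can also export the trained weights to a TensorRT engine for faster inference. This is a hedged sketch rather than a required step; it should be run on the Jetson itself so the generated engine matches its GPU.
from ultralytics import YOLO
# Optional: build a TensorRT engine from the trained weights (run on the Jetson).
model = YOLO("falling-3c.pt")      # your trained fall detection weights
model.export(format="engine")      # produces a .engine file for TensorRT inference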
To run the fall detection script on the Jetson Nano, activate the Python virtual environment named y8, navigate to the Documents directory where fall-detection-i.py is stored, and execute the script:
$ source y8/bin/activate
$ cd Documents
$ python fall-detection-i.py
In conclusion, customizing YOLOv8 on the Jetson Nano opens up new possibilities for real-time AI applications like fall detection. This demonstration not only highlights the potential of embedded AI systems but also showcases how powerful and efficient these solutions can be for safety and monitoring. As technology continues to evolve, projects like these pave the way for innovative, impactful applications. Thank you for exploring this journey with me!