YOLOv12 on the Jetson Nano 4GB enables real-time object detection with impressive accuracy and speed, all on a compact and affordable device. This setup leverages NVIDIA's CUDA and TensorRT technologies to optimize performance. After installing the dependencies and setting up a virtual environment, you can run YOLOv12 effectively for tasks like person detection (see the model overview at https://docs.ultralytics.com/models/yolo12/#overview). Whether using live camera feeds or video files, the Jetson Nano handles inference efficiently, making it ideal for edge AI applications.
1. Prerequisites
Before diving into this tutorial, I recommend checking out my previous article, "Get Started With Jetson Nano Developer Kit" (https://www.hackster.io/pat27/get-started-with-jetson-nano-developer-kit-281c38). It provides essential background information that will be helpful for this tutorial.
2. Creation of Python Virtual Environment
Creating a Python virtual environment is an essential step to manage dependencies and keep your project isolated from the global Python environment. For instance, my Jetson Nano has Python 3.8 and 3.11 environments that run YOLOv8 and YOLOv11. To run YOLOv12, I created a new Python 3.11 environment as explained below.
Open the terminal and enter the following command:
$ python3.11 -m venv y12
3. Activation of the Python Virtual Environment
In the terminal, enter: $ source <name-of-virtual-environment>/bin/activate. The name of my Python virtual environment is "y12"; feel free to name yours as you like. In my case, "y12" means YOLOv12.
$ source y12/bin/activate
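Once activated, the environment name appears as a prefix in the terminal prompt, e.g. (y12). To confirm the correct interpreter is in use, you can check its path; the output should point inside the y12 directory:
$ which python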
4. Ultralytics Installation for YOLOv12
To run YOLOv12, we need to install Ultralytics. There are two ways to do this: cloning the GitHub repository of Ultralytics or installing Ultralytics directly. In this tutorial, we opted for the second method and installed Ultralytics without cloning the GitHub repository.
$ pip install ultralytics
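To verify the installation, Ultralytics ships a built-in environment check. Running it from Python prints the installed Ultralytics version along with Python, PyTorch, and CUDA details, which is handy for confirming GPU support on the Jetson Nano:
from ultralytics import checks

# Print Ultralytics version and environment details (Python, torch, CUDA)
checks()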
5. Installing Jupyter Notebook or Jupyter Lab
$ pip install jupyterlab
$ pip install notebook
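After installation, start either interface from the activated environment:
$ jupyter notebook
or
$ jupyter lab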
6. Training YOLOv12
For detailed information on training and validation, refer to the Ultralytics documentation https://docs.ultralytics.com/models/yolo12/#detection-performance-coco-val2017.
The snippet below shows the code for training YOLOv12 with the COCO8 example dataset in Jupyter Notebook.
from ultralytics import YOLO
# Load a COCO-pretrained YOLO12n model
model = YOLO("yolo12n.pt")
# Train the model on the COCO8 example dataset for 100 epochs
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
# Run inference with the YOLO12n model on the 'bus.jpg' image
results = model("path/to/bus.jpg")
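The documentation link above also covers validation. As a minimal sketch, you can validate the trained weights on the same dataset; the path below assumes Ultralytics' default output location (runs/detect/train), which may differ on your machine:
from ultralytics import YOLO

# Load the best weights produced by the training run above
# (assumes the default Ultralytics output location)
model = YOLO("runs/detect/train/weights/best.pt")

# Validate on the COCO8 example dataset and print mAP50-95
metrics = model.val(data="coco8.yaml")
print(metrics.box.map)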
7. Conversion to ONNX format
ONNX Runtime optimizes the execution of ONNX models by leveraging hardware-specific capabilities. This optimization allows the models to run efficiently and with high performance on various hardware platforms, including CPUs, GPUs, and specialized accelerators. For detailed information, refer to https://docs.ultralytics.com/integrations/onnx/#onnx-and-onnx-runtime.
from ultralytics import YOLO
# Load the YOLO12 model
model = YOLO("yolo12n.pt")
# Export the model to ONNX format
model.export(format="onnx")
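The export step writes yolo12n.onnx next to the original weights. Ultralytics can load the exported file directly, so a quick way to verify the conversion is to run inference with it (the image path here is only illustrative):
from ultralytics import YOLO

# Load the exported ONNX model created by the step above
onnx_model = YOLO("yolo12n.onnx")

# Run inference to confirm the exported model works
results = onnx_model("path/to/bus.jpg")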
8. YOLOv12 Inference: Image
The snippet below shows how to run YOLOv12 with image inference:
from ultralytics import YOLO
from PIL import Image
from IPython.display import display
import cv2

# Load a COCO-pretrained YOLO12n model
model = YOLO("yolo12n.pt")

# Load an image with OpenCV and run YOLOv12 on it
image = cv2.imread("/home/infinity/Pictures/elephant.jpg")
detections = model(image)

# Run inference on a PIL image and save the annotated result
im1 = Image.open("/home/infinity/Pictures/gir2.jpg")
results = model.predict(source=im1, save=True)

# Display the saved annotated image (the path depends on the run counter)
display(Image.open("runs/detect/predict2/gir2.jpg"))
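Beyond saving annotated images, the returned results object exposes the raw detections. The sketch below iterates over the predicted boxes and keeps only the person class (class 0 in the COCO label set), matching the person-detection use case from the introduction:
from ultralytics import YOLO

model = YOLO("yolo12n.pt")
results = model("/home/infinity/Pictures/elephant.jpg")

# Each box carries a class id, a confidence score, and pixel coordinates
for box in results[0].boxes:
    cls_id = int(box.cls[0])
    conf = float(box.conf[0])
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    if cls_id == 0:  # class 0 is "person" in COCO
        print(f"person {conf:.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")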
9. Python code for Real-time YOLOv12: Camera and Video Inference
In this snippet of code, there are two cases for the line cap = cv2.VideoCapture(0):
- If you want to use the camera and run real-time inference, the code will use 0 as the camera source.
- If you want to run inference on a video, replace 0 with the path to the video file.
import cv2
from ultralytics import YOLO

# Load the YOLO12n model
model = YOLO("yolo12n.pt")

# Open the camera (0) or a video file (commented line below)
cap = cv2.VideoCapture(0)
# cap = cv2.VideoCapture('/home/infinity/Videos/bskt.mp4')

while cap.isOpened():
    success, frame = cap.read()
    if success:
        # Run YOLOv12 inference on the frame and draw the results
        results = model(frame)
        annotated_frame = results[0].plot()
        cv2.imshow("YOLOv12 Inference", annotated_frame)
        # Quit when "q" is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        break

cap.release()
cv2.destroyAllWindows()
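On a Jetson Nano it helps to know the frame rate you actually achieve. The sketch below extends the loop with a simple FPS counter and passes classes=[0] to restrict detection to persons; classes is a standard Ultralytics predict argument:
import time
import cv2
from ultralytics import YOLO

model = YOLO("yolo12n.pt")
cap = cv2.VideoCapture(0)

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    start = time.time()
    # classes=[0] limits detection to the COCO "person" class
    results = model(frame, classes=[0])
    fps = 1.0 / (time.time() - start)
    annotated_frame = results[0].plot()
    # Overlay the measured FPS on the annotated frame
    cv2.putText(annotated_frame, f"FPS: {fps:.1f}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("YOLOv12 Inference", annotated_frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()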
9.1. Inference on Video
To activate the Python virtual environment named y12 and run the yolov12-vrd.py script located in the inference folder within Documents, follow these steps:
- Activate the virtual environment with $ source y12/bin/activate.
- Navigate to the Documents directory with $ cd Documents.
- Move to the inference folder with $ cd inference.
- Execute the script with $ python yolov12-vrd.py.
$ source y12/bin/activate
$ cd Documents
$ cd inference
$ python yolov12-vrd.py
9.2. Real-time inference with camera
$ source y12/bin/activate
$ cd Documents
$ cd inference
$ python yolov12-inference.py
Conclusion
In this comprehensive guide, we have successfully set up and run YOLOv12 on the Jetson Nano 4GB, taking advantage of its compact size and powerful AI capabilities. From creating a Python virtual environment to installing Ultralytics, we have covered every essential step. We also demonstrated how to use YOLOv12 for real-time person detection, utilizing both camera feeds and video files. With the provided links to Ultralytics and NVIDIA Jetson Nano, you can further explore and optimize your setup. By following these instructions, you can harness the full potential of YOLOv12 for your edge AI applications, making real-time object detection both efficient and accessible.