I was working on an edge computing computer vision project with real-time object detection. We chose the Jetson Nano as the main hardware and YOLOv7 for object detection. But here came a problem: I couldn't find any good (complete) tutorials on how to set up the Jetson Nano for the YOLOv7 algorithm. So I decided to write one, with all the steps included. I tested all the steps in the exact same order.
Why is YOLOv7 the best object detection algorithm at the moment?
YOLOv7 (You Only Look Once version 7) is a state-of-the-art object detection algorithm that has gained popularity due to its efficiency and accuracy in detecting and identifying objects in images and videos. One of its main advantages is its ability to perform real-time object detection, making it suitable for applications such as video surveillance, autonomous vehicles, and augmented reality.
Another key feature of YOLOv7 is its use of a single convolutional neural network (CNN) to detect objects, rather than using multiple networks as in some other object detection algorithms. This allows YOLOv7 to run faster and more efficiently, as it does not need to process multiple networks separately.
Another factor that contributes to the effectiveness of YOLOv7 is its ability to handle a large number of classes: the standard weights, trained on the COCO dataset, detect 80 object categories, and the network can be retrained on custom class sets. This makes it a versatile and flexible choice for object detection tasks in a wide range of settings.
Overall, YOLOv7 is a reliable and effective object detection algorithm that has proven itself in a variety of applications. Its real-time performance, single CNN architecture, and strong accuracy make it a top choice for object detection tasks.
YOLOv7 and Jetson Nano
YOLOv7 is a particularly good fit for the Jetson Nano, a small, low-power computer designed for edge computing applications. One of the main reasons is YOLOv7's ability to perform real-time object detection, which is crucial for applications that require fast and accurate detection of objects in images or videos.
The Jetson Nano's low power consumption and compact size make it well-suited for use in a variety of edge computing applications, such as video surveillance, autonomous vehicles, traffic monitoring, and smart IoT devices. By using YOLOv7 on the Jetson Nano, users can take advantage of its fast and accurate object detection capabilities to build powerful and efficient edge computing applications.
This is the most up-to-date tutorial on how to run the YOLOv7 model on the Jetson Nano. Before starting, make sure to set up your Jetson Nano with the latest NVIDIA JetPack SDK. You can follow the official Getting Started guide from NVIDIA; complete it through the Next Steps section and you will be ready to start this tutorial.
Jetson Nano Setup
First, create a folder for the YOLO project and clone the YOLOv7 repository (all commands are inside bash terminal):
mkdir yolo
cd yolo
git clone https://github.com/WongKinYiu/yolov7
Next, set up a virtual environment to hold most of the required Python packages. A virtual environment lets us, for example, switch to a different PyTorch version without touching the system installation. You will need sudo for some of the steps below. Note: we can't install OpenCV as a local package inside the environment (more on that later).
We have to install pip before we continue:
sudo apt update
sudo apt install python3-pip
Then type:
sudo pip3 install virtualenv virtualenvwrapper
Virtualenv and virtualenvwrapper are now installed. Next, edit the .bashrc file using the nano editor (we have to install nano first):
sudo apt update
sudo apt install nano
nano ~/.bashrc
Add this to the end of the .bashrc file (below all the lines already there):
# virtualenv and virtualenvwrapper
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh
To apply the changes, run:
source ~/.bashrc
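If everything is set up correctly, the virtualenvwrapper shell functions should now be available. A quick sanity check (assuming the paths above match your system):

```shell
# Both commands should report shell functions, not "not found"
type workon
type mkvirtualenv
```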
Now it's time to create our virtual environment, where most of our Python packages will be stored.
To create a virtual environment run:
mkvirtualenv yolov7 -p python3 # yolov7 is the name of our virtual environment. You can use any other name.
If you want to activate the environment just type:
workon yolov7 # yolov7 is the name of our virtual environment
If you want to deactivate the virtual environment type:
deactivate
Because OpenCV has to be installed system-wide (it comes preinstalled with the NVIDIA developer pack on Ubuntu 18.04), we have to create a symbolic link from the global installation into our virtual environment. Otherwise, we won't be able to import it from the virtual environment.
cd ~/.virtualenvs/xxxx/lib/python3.6/site-packages/
ln -s /usr/lib/python3.6/dist-packages/cv2/python-3.6/cv2.cpython-36m-aarch64-linux-gnu.so cv2.so
Note: xxxx is your environment folder name (in our case it's yolov7). EXAMPLE:
cd ~/.virtualenvs/yolov7/lib/python3.6/site-packages/
ln -s /usr/lib/python3.6/dist-packages/cv2/python-3.6/cv2.cpython-36m-aarch64-linux-gnu.so cv2.so
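To confirm the symlink works, activate the environment and try importing OpenCV; the printed version may differ depending on your JetPack image:

```shell
# Inside the virtual environment, cv2 should now resolve through the symlink
workon yolov7
python3 -c "import cv2; print(cv2.__version__)"
```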
Move back to the folder you created at the beginning (yolo in our example).
Next, we have to install all the required packages, but here comes a problem I encountered: matplotlib could not be installed with a plain pip install matplotlib. Installing it on the Jetson Nano takes a few extra steps. First, enter these two commands:
sudo apt install libfreetype6-dev
sudo apt-get install python3-dev
I found that it's best to install numpy and matplotlib first:
pip3 install --upgrade pip setuptools wheel
pip3 install numpy==1.19.4
pip3 install matplotlib
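You can verify both packages before moving on:

```shell
# Should print the numpy and matplotlib versions without errors
python3 -c "import numpy, matplotlib; print(numpy.__version__, matplotlib.__version__)"
```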
Now move to the yolov7 repository you cloned at the beginning (if you are not already there).
We have to comment out the packages that are already installed or that we will install by hand (matplotlib, numpy, opencv-python, PyTorch, TorchVision, and thop). Edit requirements.txt using the nano editor:
nano requirements.txt
Comment out these lines with # before the package name:
# matplotlib>=3.2.2
# numpy>=1.18.5
# opencv-python>=4.1.1
.
.
.
# torch>=1.7.0,!=1.12.0
# torchvision>=0.8.1,!=0.13.0
.
.
.
# thop # FLOPs computation # --> LOCATED ON THE BOTTOM OF requirements.txt
Save and exit (Ctrl+O, Enter to save, then Ctrl+X to exit).
Install all the required packages from requirements.txt. Make sure you are in the cloned yolov7 repository where the requirements.txt file is located.
pip3 install -r requirements.txt
Install PyTorch and TorchVision
We are ready to install PyTorch. I've tested the YOLOv7 algorithm with PyTorch 1.8 and 1.9 and both worked fine. We will install PyTorch 1.8 in this tutorial because NVIDIA provides an official pre-built wheel for it.
Run this command:
sudo apt-get install libopenblas-base libopenmpi-dev
Then use these commands from the official NVIDIA tutorial:
pip3 install -U future psutil dataclasses typing-extensions pyyaml tqdm seaborn
pip3 install Cython
wget https://nvidia.box.com/shared/static/p57jwntv436lfrd78inwl7iml6p13fzh.whl -O torch-1.8.0-cp36-cp36m-linux_aarch64.whl
pip3 install torch-1.8.0-cp36-cp36m-linux_aarch64.whl
pip3 install thop
To check if everything is installed correctly run:
python -c "import torch; print(torch.__version__)"
You should see the result:
1.8.0
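It's also worth checking that the PyTorch build can see the Nano's GPU; on a correctly flashed JetPack image this should print True:

```shell
# Verify that the wheel was built with CUDA support
python -c "import torch; print(torch.cuda.is_available())"
```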
Now comes the tricky part. Because there is no pre-built TorchVision wheel for the Jetson Nano, we have to clone the source from GitHub, build the right version ourselves, and then install it in our virtual environment.
TorchVision version needs to be compatible with PyTorch. We used PyTorch version 1.8.0, so we will choose TorchVision version 0.9.0.
sudo apt install libjpeg-dev zlib1g-dev libpython3-dev libavcodec-dev libavformat-dev libswscale-dev
pip3 install --upgrade pillow
git clone --branch v0.9.0 https://github.com/pytorch/vision torchvision
cd torchvision
export BUILD_VERSION=0.9.0
python3 setup.py bdist_wheel # Build the wheel. This can take a while on the Nano
cd dist/
pip3 install torchvision-0.9.0-cp36-cp36m-linux_aarch64.whl
cd ..
cd ..
sudo rm -r torchvision # Now remove the cloned repository
To test if TorchVision is installed successfully type:
python -c "import torchvision; print(torchvision.__version__)"
You should see the result:
0.9.0
Try YOLOv7
Congrats, we are almost ready. Before we run YOLOv7 on the Jetson Nano for the first time, we have to download trained weights. We can choose between the regular and the tiny version; we used the tiny version for this tutorial because it's optimized for edge devices like the Nano. Download the weights (tiny or regular) into the yolov7 folder:
# Download tiny weights
wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-tiny.pt
# Download regular weights
wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt
Now we are ready to run the YOLOv7 algorithm for the first time.
python3 detect.py --weights yolov7-tiny.pt --conf 0.25 --img-size 640 --source inference/images/horses.jpg
If there is no error, YOLO should detect the objects in around 30 seconds. Open the runs/detect/ folder and choose your latest experiment folder (for example, exp3); inside you will find the image with bounding boxes drawn around the recognized objects.
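detect.py accepts other sources besides single images. For example, you can run the same tiny weights on a video file or a USB webcam (the file name my_video.mp4 below is just a placeholder):

```shell
# Run detection on a video file
python3 detect.py --weights yolov7-tiny.pt --conf 0.25 --img-size 640 --source my_video.mp4
# Run detection on a USB webcam (device index 0) with a live preview window
python3 detect.py --weights yolov7-tiny.pt --conf 0.25 --img-size 640 --source 0 --view-img
```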
Congrats! Now you are ready to build awesome stuff with Jetson Nano and YOLOv7 algorithm.
If there are any problems feel free to contact me so I can update the tutorial.
Useful resources
1. Start working with Jetson Nano.
2. PyTorch and TorchVision for Jetson Nano.
3. In-depth information about OpenCV, PyTorch, and other libraries on Jetson Nano.