As time moves forward, natural disasters will continue to wreak havoc on cities and countries around the world, costing many lives. Bolstering the power of search and rescue teams with autonomous UAVs that can identify people in disaster relief zones could significantly reduce the impact tornadoes, earthquakes, tsunamis, floods, and other disasters have on the world by giving rescue teams more information about the situation they are working with.
In a disaster relief situation, knowing where survivors are is of vital importance, and any system that can provide that kind of information is highly valuable to the rescue team and, of course, to the survivors themselves.
My solution to strengthening search and rescue operations is to outfit an autonomous unmanned aerial vehicle (UAV) with a computer vision system that detects the location of people as the vehicle flies over the ground. The powerful neural-network capabilities of the Jetson Nano Dev Kit will enable fast computer vision algorithms to achieve this task. Joseph Redmon's YOLOv3 algorithm will be used to perform the actual object detection, identifying people in the camera's view.
While the UAV is flying a waypoint mission using ArduPilot, PX4, or any other autonomous flight control stack, the absolute location of people in the camera view can be calculated based on the altitude, orientation, and GPS location of the UAV.
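To give a rough idea of the math involved, here is a simplified sketch of that projection assuming a flat ground plane (the names are illustrative and not taken from the project code; the real script also has to account for where each detection sits within the frame and the camera calibration).

import math

EARTH_RADIUS = 6378137.0  # meters (WGS-84 equatorial radius)

def project_to_ground(lat, lon, alt, heading_deg, cam_angle_deg):
    """Estimate the GPS position seen at the center of the camera frame.

    lat, lon      -- UAV position (degrees)
    alt           -- altitude above the ground (meters)
    heading_deg   -- UAV heading, 0 = north (degrees)
    cam_angle_deg -- camera angle away from straight down (degrees)
    """
    # horizontal distance from the UAV to the point the camera center sees
    ground_dist = alt * math.tan(math.radians(cam_angle_deg))

    # shift that distance along the UAV's heading
    d_north = ground_dist * math.cos(math.radians(heading_deg))
    d_east = ground_dist * math.sin(math.radians(heading_deg))

    # convert the metric offset to degrees of latitude / longitude
    d_lat = math.degrees(d_north / EARTH_RADIUS)
    d_lon = math.degrees(d_east / (EARTH_RADIUS * math.cos(math.radians(lat))))
    return lat + d_lat, lon + d_lon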
Now that you seem interested in this project, let's get to work on it! The first step will, of course, be to make sure your Jetson Nano Dev Kit is set up with an operating system, NVIDIA's JetPack SDK, and the code for this project.
1) Connect the Raspberry Pi Cam V2 to the Dev Kit using the flat-flex ribbon cable that came with the camera module.
2) Follow NVIDIA's Getting Started instructions for the Jetson Nano Developer Kit all the way through the Setup and First Boot section.
3) Login to the Jetson Nano Dev Kit, and open a terminal by right-clicking the desktop and selecting Open Terminal. Enter the following commands to clone the Git repository for this project and install the required Python libraries. (If you want the repository cloned in another folder, just cd into that folder first, as shown below.)
cd Documents
git clone https://github.com/jonmendenhall/jetson-uav
cd jetson-uav
./setup
Darknet Installation
Now that the project code is ready, you will need to install the actual computer vision code. This project uses Joseph Redmon's Darknet tiny-YOLOv3 detector because of its blazingly fast object detection speed and small memory footprint, which suits the Jetson Nano Dev Kit's 128-core Maxwell GPU.
1) Clone the Darknet GitHub repository inside of the jetson-uav GitHub repository cloned in the previous section.
git clone https://github.com/pjreddie/darknet.git
cd darknet
2) Because Darknet runs "like 500 times faster on GPU" (Joseph Redmon), we will modify the Makefile to compile with GPU support. Using a text editor, modify the flags at the top of the Makefile to match the following, leaving any other values as they are.
GPU=1
CUDNN=1
OPENCV=1
3) Now that the Makefile is updated, compile Darknet using the following command.
make
4) If the compilation was successful, there should be a file called libdarknet.so in the Darknet repository. Copy this file to the jetson-uav directory so the script will have access to it.
cp libdarknet.so ..
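This copy matters because the detection script loads Darknet as a shared library at runtime. Conceptually, the loading step looks something like this (a minimal sketch, not the exact code from main.py):

import ctypes

# load the compiled Darknet library from the jetson-uav directory;
# RTLD_GLOBAL makes its symbols available to any dependent libraries
darknet = ctypes.CDLL("./libdarknet.so", ctypes.RTLD_GLOBAL)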
Luckily, you do not need to spend hours or days training the YOLOv3 detector because I pre-trained the model on a Compute Engine instance on Google's Cloud Platform. The model was trained on the 2017 COCO dataset for around 70 hours using an NVIDIA Tesla V100, and the weights (eagleeye.weights) are saved in the GitHub repository for this project.
If you would like to train the model further, or understand how training YOLOv3 works, I recommend reading this article, as it helped me greatly during the process.
In order for the code to run as seamlessly as possible, the script needs to be set up to run at startup on the Jetson Nano. Rather than requiring you to launch the code manually over an SSH session, this allows the Dev Kit to be powered on and automatically begin detecting people in frame while the UAV is flying on a mission.
The auto-launch capability will be achieved by setting up a systemd service (eagleeye.service) that runs a bash file (process.sh), which then runs the python script (main.py) and streams the output to a log file (log.txt). It may sound complicated, but it only takes a few simple steps to get up and running!
1) Modify the second line of process.sh to match where you cloned the jetson-uav GitHub repository. Make sure to only change the path that is shown in bold below, as the other files are relative to this path.
#!/bin/bash
cd /home/jon/Documents/jetson-uav && python3 -u main.py > log.txt
2) Next, modify line 6 of eagleeye.service to match the location of the process.sh file used in the previous step. (Shown in bold below)
[Unit]
Description=EagleEye service
[Service]
User=root
ExecStart=/home/jon/Documents/jetson-uav/process.sh
[Install]
WantedBy=multi-user.target
3) Copy eagleeye.service to the /etc/systemd/system directory so systemd has access to it.
sudo cp eagleeye.service /etc/systemd/system/
4) Enable the newly-created systemd service, so it will automatically run at startup.
sudo systemctl enable eagleeye
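If you want to verify that the service launches correctly, the standard systemd tools work here; since main.py's output is streamed to log.txt in the jetson-uav directory, tailing that file is also a quick way to watch for errors.
sudo systemctl status eagleeye
sudo journalctl -u eagleeye -f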
Jetson Nano as Wi-Fi Access Point
While this section is optional, I highly recommend using a USB Wi-Fi module to set up the Jetson Nano as a Wi-Fi access point so you can connect to it without being tethered to an ethernet connection. This will allow you to monitor the processes running on the Jetson Nano for debugging and ensuring no errors occur while your UAV is preparing for flight, in flight, or landed.
1) Ensure a USB Wi-Fi module is plugged into one of the Dev Kit's USB ports. I recommend the Edimax EW-7811Un 150Mbps 11n Wi-Fi USB Adapter attached in the Hardware components section of this project.
2) In a terminal on the Jetson Nano, run the following command to create an access point with an SSID and password of your choice. The second command will enable auto connect functionality so the Jetson Nano will automatically connect to its hosted network if there is no other network option available.
nmcli dev wifi Hotspot ifname wlan0 ssid <SSID> password <PASSWORD>
nmcli con modify Hotspot connection.autoconnect true
Camera Calibration
The camera calibration process will allow for the removal of any distortion from the camera lens, providing more accurate location estimates of people in frame while in flight. Camera calibration will be done by taking multiple pictures of a calibration chessboard at various positions and angles, then letting the script find the chessboard in each image and solve for a camera matrix and distortion coefficients.
1) Print out a 10 by 7 chessboard and adhere it to a small rigid surface. The calibration script will search for this marker in each image. It is vital that the chessboard is 10 by 7, as the script will look for the interior corners of the chessboard which should be 9 by 6.
2) Run the calibrate.py file in the downloaded repository using the following command. This process will allow you to save multiple pictures of a chessboard to a desired location (a folder named "capture" in this case).
python3 calibrate.py capture
Press [spacebar] to save a picture of the chessboard in various positions and angles as shown below. The more variability in chessboard images, the better the calibration will be.
3) Run the calibration script again, this time adding the -c flag to run the calculation rather than saving new images. (Use the same capture path as the previous run.)
python3 calibrate.py -c capture
This process will look through all captured images, detecting the chessboard corners in each one; any image that it could not find the chessboard in will be deleted automatically. After finding all corners in each image, the script will execute OpenCV's calibrateCamera routine to determine the camera matrix and distortion coefficients. The calibration parameters will be saved to a file (calibration.pkl) for use by the main script.
Here are some examples of the detected corners from the images shown above...
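If you are curious what the calibration step is doing under the hood, the core of it looks roughly like this (a simplified sketch using OpenCV directly; the image folder, file extension, and pickle layout are assumptions):

import glob
import pickle

import cv2
import numpy as np

# interior corner count of the 10 by 7 chessboard described above
PATTERN = (9, 6)

# 3D reference points for one chessboard view: (0,0,0), (1,0,0), ...
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("capture/*.jpg"):          # assumed image location / extension
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        continue                                 # images without a detected board are skipped
    obj_points.append(objp)
    img_points.append(corners)

# solve for the camera matrix and distortion coefficients
ret, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# persist the parameters for the main script (filename from the text above)
with open("calibration.pkl", "wb") as f:
    pickle.dump((camera_matrix, dist_coeffs), f)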
Now that the Jetson Nano and camera are setup, you can assemble the module to be mounted in the UAV. This section gives an outline of how to use the provided parts, but if your Jetson Nano must be mounted a different way, ignore this section and mount the Dev Kit as you need to, making sure the camera has a clear view of the ground wherever it is.
1) Print the Jetson Mount, and two Power Pack Mounts (one should be mirrored along the x-axis when slicing).
2) Thread the four holes in the Jetson Mount with an M3 bolt, then screw an M3x20mm hex standoff into each corner.
3) Drill out the four mounting holes on the Jetson Nano Dev Kit to 3mm, then thread the four holes of the heatsink with an M3 bolt.
4) Secure the Jetson Nano Dev Kit to the Jetson Mount using four M3x6mm bolts.
5) Mount both of the Power Pack Mounts to the heatsink using four M3x8mm bolts.
6) If the Wi-Fi adapter is not installed, make sure to place it in one of the USB ports on the Jetson Nano Dev Kit.
This section will cover assembling the camera module using the provided models. As these parts are designed to be mounted on a sheet of foam board, feel free to skip this section and assemble a camera mount for your own aircraft.
1) Place a vibration damper in each corner of the Camera Plate.
2) Place the Raspberry Pi Cam V2 in the slot on the Camera Plate with the lens pointing down.
3) Secure the Camera Bracket onto the Camera Mount using two M3x8mm bolts.
4) Push the other end of each vibration damper into the corners of the Camera Mount.
Make sure your UAV has enough space in it to mount the module. If there is not enough space, feel free to move parts around to make space.
1) Using hot glue, adhere the Jetson Nano Mount to the Frame of your UAV, making sure there is enough space, and the camera will have a clear view to the terrain below.
2) Remove the Jetson Nano Dev Kit from its mount so the camera module can be installed.
3) Cut a 19mm square opening in the bottom of the body section for the camera module.
4) Using hot glue, adhere the camera module in the opening, making sure the ribbon cable goes the opposite direction of where the camera connector is on the Jetson Nano Dev Kit.
5) Connect the ribbon cable to the Jetson Nano Dev Kit, then mount the Jetson on the standoffs using the four bolts as before. (The ribbon cable should loop from beneath the Dev Kit as shown below)
6) Connect the Jetson Nano Dev Kit to a telemetry port on the Pixhawk. (Only use the TX, RX, and GND pins on the connector as the Pixhawk will already be powered by a battery)
- Pixhawk RX (Yellow) --> Jetson Pin 8 (TX)
- Pixhawk TX (Green) --> Jetson Pin 10 (RX)
- Pixhawk GND (Black) --> Jetson GND
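To sanity-check the wiring before a flight, a few lines of pymavlink can confirm that telemetry is arriving over the serial port. This is just a sketch; it assumes pins 8 and 10 show up as /dev/ttyTHS1 on the Nano and that the Pixhawk telemetry port is configured for 57600 baud.

from pymavlink import mavutil

# open the UART wired to the Pixhawk telemetry port (device name and baud rate are assumptions)
mav = mavutil.mavlink_connection('/dev/ttyTHS1', baud=57600)
mav.wait_heartbeat()                        # block until the Pixhawk is heard
print("Heartbeat from system", mav.target_system)

# read one position report: lat/lon are in 1e7 degrees, relative_alt in millimeters
msg = mav.recv_match(type='GLOBAL_POSITION_INT', blocking=True)
print(msg.lat / 1e7, msg.lon / 1e7, msg.relative_alt / 1000.0)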
With Pixhawk and ArduPilot systems, people typically use QGroundControl or Mission Planner to monitor the status of their UAV while on autonomous missions. I will be using QGroundControl for its intuitive and simple interface.
This setup will differ slightly in that the ground control software (GCS) will not be connected directly to the telemetry radio over USB. Because QGroundControl (QGC) does not have any extra plugin features to display markers on the map, I wrote a program that runs in the middle of the connection between QGC and the telemetry radio.
My custom GCS connects to the telemetry radio over USB and hosts a TCP server that QGC can connect to. My code then streams data directly from the telemetry radio to QGC, while also parsing all the packets to pick out those that contain the vehicle's location and the detection results from the on-board Jetson Nano. The code also streams data from QGroundControl's TCP connection back to the telemetry radio, so QGC and the Pixhawk will not know any difference, and autonomous missions can be flown as usual.
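In essence, the GCS program is a small man-in-the-middle proxy. A stripped-down sketch of the idea, using pymavlink and a plain TCP socket (the serial device and port are placeholders, and this is not the project's exact code), looks like this:

import socket
from pymavlink import mavutil

# connect to the telemetry radio (device name and baud rate are placeholders)
radio = mavutil.mavlink_connection('/dev/ttyUSB0', baud=57600)

# host a TCP server on the port QGroundControl expects by default
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(('127.0.0.1', 5760))
srv.listen(1)
conn, _ = srv.accept()          # wait for QGC to open its TCP Link
conn.setblocking(False)

while True:
    # radio -> QGC: forward every packet untouched, but peek at the interesting ones
    msg = radio.recv_match(blocking=False)
    if msg is not None:
        conn.sendall(msg.get_msgbuf())
        if msg.get_type() == 'GLOBAL_POSITION_INT':
            lat, lon = msg.lat / 1e7, msg.lon / 1e7   # update the vehicle path on the map

    # QGC -> radio: pass commands straight through so missions work as usual
    try:
        data = conn.recv(1024)
        if data:
            radio.write(data)
    except BlockingIOError:
        pass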
This setup allows for my system to augment the great features of QGroundControl or any other ground control software without interfering with their operations in any noticeable way. Refer to the following diagram if you are confused :)
To get this system setup, follow the steps below!
1) Clone the same jetson-uav GitHub repository on the laptop or computer you intend to use for monitoring the telemetry from your UAV. Then, run the setup file in the gcs directory of the repository. This will install the required Python libraries for the GUI application to run.
git clone https://github.com/jonmendenhall/jetson-uav
cd jetson-uav/gcs
./setup
2) Modify the top lines of web/script.js to match your desired flight area. I used Google Maps to get the coordinates of the park I fly at.
// create the map element and set the first view position
var map = L.map('map').setView([35.781736, -81.338296], 15)
3) Run the program from a terminal window. While engineers tend not to make the best user interfaces, there is not much that can go wrong with this GUI I created.
python3 main.py
The map view can be zoomed and panned like a normal map (uses Leaflet.js), and the serial port and baud rate for the telemetry radio can be set at the top of the window. You can also set the port the TCP server will listen on, but 5760 is the default that QGroundControl uses, so I would not worry about changing that. The start button will open the serial port and start listening for TCP connections, and the stop button will do just the opposite.
Detection results will show up as blue markers on the map, and have popups that show the exact location and the detection probability YOLOv3 calculated. While the UAV is flying, a red line will also appear showing its path, to better orient you while operating.
4) Open QGroundControl, then go to the General settings. Make sure AutoConnect is disabled for all devices, as QGC will otherwise steal all of the serial ports for itself and not let the custom GCS open them.
5) Under the Comm Links settings, create a new TCP Link by pressing Add. Fill in the Host Address as 127.0.0.1, and the TCP Port to match the one in the custom GCS (5760 by default).
6) Select the new TCP Link, then click Connect to connect to the custom GCS. (Make sure you pressed Start in the custom GCS software, or QGC will show an error that the connection was refused...)
Now that the Jetson Nano Dev Kit and camera module have been installed on the UAV, snap the PowerCore into its mount and connect it to the micro-USB port on the Dev Kit. SSH into the Jetson Nano by connecting to its Wi-Fi network.
If your camera is mounted at an angle other than straight down, you will need to modify the value of CAM_MOUNT_ANGLE in main.py to match your setup. A positive value (towards front), or negative value (towards rear) indicates the number of degrees the camera is angled away from straight down. Setting CAM_MOUNT_ANGLE to 0 would mean the camera points straight down.
# degrees from straight down the camera faces (relative to UAV)
# positive = angled forward, negative = angled backward
CAM_MOUNT_ANGLE = 50
Modify the value of CONTROL_CHANNEL in main.py to match a switch on your RC transmitter. Flipping the switch connected to this channel number will start / stop the recording or detection loops on the Jetson Nano.
# which RC channel to read from for starting / stopping recording or detection loops
CONTROL_CHANNEL = '7'
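For reference, ArduPilot reports transmitter channels in the RC_CHANNELS MAVLink message as pulse widths in microseconds, so a two-position switch typically reads around 1000 when low and 2000 when high. A rough sketch of the check (not necessarily how main.py implements it):

from pymavlink import mavutil

CONTROL_CHANNEL = '7'   # same value as above

def control_switch_high(mav):
    """Return True when the switch assigned to CONTROL_CHANNEL is flipped high."""
    msg = mav.recv_match(type='RC_CHANNELS', blocking=True)
    return getattr(msg, 'chan%s_raw' % CONTROL_CHANNEL) > 1500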
If you would like to record a telemetry stream and video stream rather than running live detection, add the -record flag to the command in process.sh. This lets you run YOLOv3 on every frame of the recorded video afterwards and get a very smooth result, rather than the low FPS of live object detection.
cd /home/jon/Documents/jetson-uav && python3 -u main.py -record > log.txt
Start the code with the following command. Because you previously enabled the service, the Jetson Nano will automatically run the script at startup from now on.
sudo systemctl start eagleeye
Programmers are not Pilots
I have skill in programming, but my skill in flying RC airplanes is not quite at the same level...
When I flew my UAV, the ground was too rough for the wheels to roll smoothly, so the plane could not accelerate as fast as usual... the takeoff took a longer distance than normal, and I ended up flying it into a tree. After a great deal of effort shaking the tree, the plane finally fell to the ground, but it was destroyed beyond repair in the process. I managed to recover everything except the two motors, but I still had a great time building the plane!
Although I crashed the plane, the primary premise of this project is that the Jetson Nano system can be linked to any Pixhawk flight controller. I removed all components from the Pixhawk system (RC receiver, GPS module, battery connector...) and mounted them on a board along with the Jetson Nano, power pack, and camera to demonstrate the capabilities of the system.
If you had more luck than I did in actually flying the UAV, this section will give you a taste of what can be expected from the system while it is running on a UAV that can fly without crashing into trees. :)
I ran the Jetson Nano code with the -record flag to simultaneously write a telemetry stream to the disk along with a video capture stream to an mp4 file. After walking in the view of the camera, I ran the post_annotate.py script on the recording using the following command. This will run the object detection code for each frame in the video, annotating the frame with estimated GPS locations and boxes around the detection results, saving the annotated video to the specified mp4 output path.
python3 post_annotate.py <TELEMETRY_FILE> <VIDEO_FILE> <OUTPUT_FILE>
This is what the Search and Rescue system produced while it was running. The markers on the map indicate the estimated location of people in the camera's view, and the popups show the object detection algorithm's output probability for a person at that location. An extra set of eyes could mean the difference between life and death for those stuck in a disaster-stricken area.
The following shows the estimated path I walked while in the view of the camera. When compared to my movement in the video above, the system shows that it can perform exceedingly well in estimating the location of people in the camera's view based on the GPS position and orientation of the UAV it is attached to.
It is worth noting that the memory limitations of the relatively small GPU on the Jetson Nano Dev Kit restrict it to tiny-YOLOv3, which is less accurate than the full YOLOv3 model. As you can see, tiny-YOLOv3 still detects people in the camera's view with reasonable accuracy, so this is just something to keep in mind when expanding this to a higher level. Modifying the threshold value in the object detection code can help make detections more precise and reduce the number of errors.
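As a concrete example of that last point, each detection coming out of the network carries a confidence score, and raising the cutoff trades missed detections for fewer false positives. A tiny sketch with an assumed detection format (not the project's exact code):

# each detection is assumed to look like (label, confidence, bounding_box)
CONF_THRESHOLD = 0.5    # raise to reduce false positives, lower to catch more people

def filter_detections(detections, threshold=CONF_THRESHOLD):
    # keep only detections the network is sufficiently confident about
    return [det for det in detections if det[1] >= threshold]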
As always, I hope you enjoyed this project and learned something new while you were at it!
PS: Yes, I did codename this project "Eagle Eye," only to watch the movie a few weeks later and realize it's a perfectly fitting name...