- This project focuses on tracking people whenever a catastrophe occurs (like a fire), as well as delimiting and recognizing how far the fire has spread over time, with the aim of delivering this data to the pertinent authorities so they can act faster against these incidents. All of this is done with the help of drones. As an add-on, it recognizes the pilot of each drone, so we know who is in charge of which equipment, because at an incident there may be more than one drone and therefore more than one pilot.
- I decided to build this project in order to develop technology that could help reduce losses whenever necessary. In the last decade we have seen an impressive increase in natural events such as floods and earthquakes, as well as events that may or may not be caused by humans, such as fires, which tend to consume large areas of land.
- Images taken during the build:
- Drones have become main characters in the fight against some of these events, thanks to their ease of use, their ability to reach points that are not easily accessible to humans, and the continuous development of both hardware and software technology, such as the kit we used for this contest.
- To develop my idea I used two cameras for the same purpose (tracking people), but they are not used at the same time. The main difference between them is the weight, so if I want fewer grams on my drone, I just use the IMX217-99. Personally, I preferred Logitech's C920 webcam, which gives better imaging.
- Nvidia provides a great framework to do video inference with its Jetson ecosystem, which I find really helpful. You can select among several well-known networks depending on which classes of objects you want to recognize. Here is an example with the C920 webcam for recognizing cars and humans:
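For reference, a detection loop with the jetson-inference Python bindings looks roughly like the sketch below. The model name, device path, and threshold are my assumptions for a typical setup, not necessarily the exact values used here:

```python
# Sketch of a DetectNet loop on the C920 webcam (jetson-inference Python API).
# "ssd-mobilenet-v2" and "/dev/video0" are assumptions; adjust to your setup.
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("/dev/video0")    # V4L2 device for the C920
display = jetson.utils.videoOutput("display://0")   # render to the local display

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)                    # COCO classes include person and car
    display.Render(img)
    display.SetStatus("detected {:d} objects".format(len(detections)))
```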
- I also used another very interesting camera called the Pixy2 Cam. You can teach it objects based on their color and it performs really good tracking of them. I used it for pilot recognition: basically, with a figure on the pilot's cap I can know for certain who is managing which drone. If there is more than one drone fighting the fire, we can assign a number to every drone, linked to the signature of the objects taught to the Pixy2, just like this:
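As an illustration, the signature-to-pilot mapping could look like this with the Pixy2 Python bindings; the specific signature numbers and pilot names are hypothetical:

```python
# Rough sketch of mapping Pixy2 colour signatures to drones/pilots.
# Signatures 1-3 must be taught to the Pixy2 beforehand (e.g. via PixyMon);
# the drone/pilot assignments below are purely illustrative.
import pixy
from pixy import BlockArray

PILOT_BY_SIGNATURE = {
    1: "Drone 1 - Pilot A",
    2: "Drone 2 - Pilot B",
    3: "Drone 3 - Pilot C",
}

pixy.init()
pixy.change_prog("color_connected_components")

blocks = BlockArray(100)                      # buffer for detected colour blocks
while True:
    count = pixy.ccc_get_blocks(100, blocks)
    for i in range(count):
        sig = blocks[i].m_signature
        if sig in PILOT_BY_SIGNATURE:
            print("Signature %d -> %s" % (sig, PILOT_BY_SIGNATURE[sig]))
```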
- The NXP Robotic FMU relies on the PX4 autopilot ecosystem, which uses a messaging protocol called MAVLink to handle information coming from other systems within a drone network, such as companion computers. In this case my companion computer is the Jetson Nano development board. As we saw, the Nano helped me with the video inference to recognize people and other objects in the environment. Furthermore, we can use companion computers to talk to our autopilot to perform missions or to exchange other types of information with the external world. For that matter, I used the MAVSDK library to create missions in a certain area and record video from the jetson-inference framework (the code is attached). Here is a result from a mission. Unfortunately, the telemetry radio failed for a while and reported that the battery was not OK, so the drone critically landed after a minute of flying. Here is the video:
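For context, a minimal mission upload with MAVSDK-Python looks roughly like the sketch below. The waypoint coordinates are placeholders and the MissionItem constructor arguments vary between MAVSDK-Python versions, so treat this as a sketch rather than the attached code:

```python
# Minimal MAVSDK-Python mission sketch over the serial link to the FMU.
# Waypoints are placeholders; MissionItem arguments depend on the MAVSDK version.
import asyncio
from mavsdk import System
from mavsdk.mission import MissionItem, MissionPlan

async def run():
    drone = System()
    # Jetson Nano talks to TELEM2 through the FTDI cable at 921600 baud
    await drone.connect(system_address="serial:///dev/ttyUSB0:921600")

    async for state in drone.core.connection_state():
        if state.is_connected:
            break

    items = [
        MissionItem(47.3980, 8.5455, 20, 5, True,
                    float('nan'), float('nan'),
                    MissionItem.CameraAction.NONE,
                    float('nan'), float('nan')),
        MissionItem(47.3981, 8.5456, 20, 5, True,
                    float('nan'), float('nan'),
                    MissionItem.CameraAction.NONE,
                    float('nan'), float('nan')),
    ]
    await drone.mission.upload_mission(MissionPlan(items))
    await drone.action.arm()
    await drone.mission.start_mission()

asyncio.run(run())
```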
- For the final test, although the video lasts only a minute or so, we can be sure that the companion computer works properly while on the drone. The main problem was the weight of the drone, which gave it an unstable flight for a moment. Of course this can be fixed with PID tuning. Another problem was that in one of the tests I lost one of the supports of the drone, so retesting was completely impossible. However, I am very satisfied with the results of the project so far. Improvements will come for sure.
- For the companion computer connection to the FMU I used an FTDI cable. The connection is USB on the Jetson Nano side and TELEM2 on the FMU side; they talk at 921600 baud. For the connection from the Nano to my computer I used a USB WiFi dongle, which allowed me to SSH into the Jetson Nano through a hotspot. This way I started all the scripts on the Nano. This Jetson hotspot could also be used to provide connectivity in case people at the incident site do not have a direct way to reach the authorities.
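On the PX4 side, TELEM2 has to be configured as a MAVLink port at that baud rate (typically done once from QGroundControl, e.g. MAV_1_CONFIG pointing to TELEM 2 and SER_TEL2_BAUD at 921600). A quick sketch to confirm the link from the Jetson and read those parameters back could look like this (device path is an assumption):

```python
# Sketch: verify the serial MAVLink link from the Jetson and read back the
# TELEM2 configuration (parameter names are standard PX4 parameters).
import asyncio
from mavsdk import System

async def check_link():
    drone = System()
    await drone.connect(system_address="serial:///dev/ttyUSB0:921600")

    async for state in drone.core.connection_state():
        if state.is_connected:
            print("FMU reachable over the FTDI cable")
            break

    baud = await drone.param.get_param_int("SER_TEL2_BAUD")
    mav_cfg = await drone.param.get_param_int("MAV_1_CONFIG")
    print("SER_TEL2_BAUD =", baud, "| MAV_1_CONFIG =", mav_cfg)

asyncio.run(check_link())
```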
- The RapidIoT was simply used to display the values of some of the data it was reading from its onboard sensors, and some MAVLink messages coming from the FMU. The connection was made over UART on the RapidIoT and TELEM2 on the FMU.
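The RapidIoT firmware itself is not written in Python, but to illustrate what listening to those MAVLink messages on a UART looks like, here is a rough pymavlink sketch; the port and baud rate are assumptions:

```python
# Rough illustration of reading MAVLink messages from a TELEM port with pymavlink.
# The RapidIoT does this in its own firmware; this sketch only shows the idea.
from pymavlink import mavutil

link = mavutil.mavlink_connection("/dev/ttyUSB1", baud=57600)  # assumed port/baud
link.wait_heartbeat()                                          # wait for the FMU
print("Heartbeat from system %d" % link.target_system)

while True:
    msg = link.recv_match(type=["SYS_STATUS", "GPS_RAW_INT"], blocking=True)
    if msg.get_type() == "SYS_STATUS":
        print("Battery: %.1f V" % (msg.voltage_battery / 1000.0))
    else:
        print("Satellites visible:", msg.satellites_visible)
```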
Now some photos of the final build:
In these images the PixyCam is not visible because, just before the photos were taken, it broke in a test crash.
(Photos: broken Pixy base)
- The next step would be to record some videos of a controlled fire in order to train a network with that generated dataset. This can be done with the Nvidia DIGITS platform, which allows us to rapidly train networks for image classification, segmentation and object detection. I would choose this option because it can work together with the jetson-inference library to export the networks and perform object detection and segmentation like this:
(Images: example of smoke segmentation)
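Once such a model is trained and exported, loading it on the Jetson could look roughly like this with the jetson-inference segNet API; the model and label file names below are hypothetical:

```python
# Sketch of running a custom-trained smoke segmentation model with jetson-inference.
# "smoke_segmentation.onnx", "classes.txt" and "colors.txt" are hypothetical files.
import jetson.inference
import jetson.utils

net = jetson.inference.segNet(argv=[
    "--model=smoke_segmentation.onnx",
    "--labels=classes.txt",
    "--colors=colors.txt",
    "--input_blob=input_0",
    "--output_blob=output_0",
])
camera = jetson.utils.videoSource("/dev/video0")
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()
    # allocate an output buffer for the overlay (per frame, for simplicity)
    overlay = jetson.utils.cudaAllocMapped(width=img.width, height=img.height,
                                           format=img.format)
    net.Process(img)
    net.Overlay(overlay)          # blend the segmentation mask with the input
    display.Render(overlay)
```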