#NXPHoverGames
The primary goal for the project is to identify pedestrian hotspots in large, public spaces so that, in the context of a pandemic, appropriate social distancing and pedestrian traffic control measures can be put in place. A tool like this would be particularly helpful as different regions of the world re-open, more folks are out and about, and fans are re-introduced into arenas for sporting events, all while social distancing rules remain in place to counter the rapid spread of the virus.
We accomplish this goal with the help of heatmaps generated from captured video or images. The heatmaps are used to show where motion occurs most frequently. (See Image 1 below for an example. Additional example images/videos can be found at the end of the submission). More frequented spots or areas with crowds appear as brighter orange/yellow on the processed images and videos.
The project itself has served as a good introduction to the world of drones and embedded systems for the project members.
## Setup
The project makes use of NXP's HoverGames drone development kit and their custom experimental 8MMNavQ companion computer. The drone development kit, coupled with the excellent and in-depth NXP HoverGames gitbook, provides a helpful guide to drones, their key components, and their functions.
From a software perspective, the project makes use of MAVSDK-Python. MAVSDK-Python is a wrapper around MAVSDK's C++ library and provides an API on top of MAVLink (MAVLink is a lightweight messaging protocol for communicating with drones and other companion nodes).
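As a rough illustration of what this looks like in practice (a minimal sketch, not code from our project; the UDP address shown is the SITL default used in simulation):
# Minimal MAVSDK-Python sketch (illustrative only): connect to a vehicle and
# print one battery update. The address is the SITL default; a real setup
# would use the companion computer's own serial or UDP link to the FMU.
import asyncio
from mavsdk import System

async def main():
    drone = System()
    await drone.connect(system_address="udp://:14540")

    # Wait until the autopilot is discovered
    async for state in drone.core.connection_state():
        if state.is_connected:
            break

    # Read a single battery update from the telemetry stream
    async for battery in drone.telemetry.battery():
        print(f"Battery remaining: {battery.remaining_percent}")
        break

asyncio.run(main())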
A key component of the project is the generation of heatmaps to review pedestrian hotspots from on high; the project uses the OpenCV-Python library for the computer vision processing of the captured images and video. The video below (Video 1) highlights the concept when an example video is processed with the script written for this project.
## Journey
The project idea came about during the first re-opening after the initial lockdown in 2020, when a logical question to ask was whether being outdoors would put an individual in a crowded area and whether social distance could be maintained.
After making an initial idea submission, and while waiting for the drone kit itself to arrive, we got to work on what our base script might look like, to enable the computer vision heatmap creation we wanted to achieve.
The drone kit arrived in the fall, but due to busy personal schedules and pandemic-related lockdowns we were only able to fully assemble the drone in early January (see Image 2 below). We discovered a bit too close to the deadline that there were issues with our motors (see the Challenges and Learning section for additional details), and so we made do by running our code in a simulation environment against footage provided in the Stanford Drone Dataset.
In parallel to working on the computer vision aspect of this project, the two of us also spent time designing/developing the larger drone-based workflow that defined exactly when and how the heatmap analysis would take place. This piece was implemented using MAVSDK-Python and is described in further detail in our project readme on github.
The computer vision aspect of the project (implemented in heatmap.py) uses some basic ideas to identify where movement occurs when processing a video capture feed:
Background Subtraction: As long as the camera position remains stationary, we can estimate what the background looks like by averaging the images in our video capture feed. Dynamic actors (e.g. pedestrians, cars, etc.) will, over a large enough number of images, be averaged out, leaving an image of just the background:
With a background image on hand, finding the dynamic actors is mostly a matter of subtracting the background from the current video capture image. If we keep a tally of those differences in a matrix (whose cells represent image pixels), then the pixels that differ from their background more frequently will have larger values... and such is how our heatmap is born!:
# Incorporate the current frame into our averaged background and get the updated foreground mask
fg_mask = bg_subtractor.apply(frame)
fg_mask[fg_mask > 0] = 255 # People often seem to get detected as shadows (i.e. 127), so round up to 255
bg = bg_subtractor.getBackgroundImage()
# Update the heatmap
if heatmap is None:
    heatmap = np.zeros(fg_mask.shape, 'float64')
heatmap += fg_mask
Once we have enough data, the heatmap's values are scaled to the appropriate range and applied to the background image:
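As a rough sketch of that step (continuing from the heatmap and bg variables in the snippet above, and assuming cv2 is imported; our actual heatmap.py differs in the details, and the colormap choice here is only illustrative):
# Sketch only: scale the accumulated counts to 0-255, colorize them,
# and blend the result over the averaged background image.
heatmap_u8 = cv2.normalize(heatmap, None, 0, 255, cv2.NORM_MINMAX).astype('uint8')
colored = cv2.applyColorMap(heatmap_u8, cv2.COLORMAP_JET)  # frequently-visited pixels -> warm colours
overlay = cv2.addWeighted(bg, 0.6, colored, 0.4, 0)        # blend onto the background image
cv2.imwrite('heatmap_overlay.png', overlay)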
Erosion and Dilation: Some of our initial results were off due to moving tree branches, faint drone movement, shadows, and noise in general. To reduce the effects of noise, we employed erosion and dilation to erode away smaller discrepancies:
# We do this to reduce noise and merge/emphasize the relevant parts of the foreground mask. See:
# https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.html
if config.noise_reduction_enabled:
    erosion_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, config.noise_reduction_erosion_kernel_size)
    fg_mask = cv2.erode(fg_mask, erosion_kernel)
    dilation_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, config.noise_reduction_dilation_kernel_size)
    fg_mask = cv2.dilate(fg_mask, dilation_kernel)
Performance Tuning: Our initial implementation naively consumed and processed every captured image it could at full resolution. This yielded performance issues when running our code directly on the NavQ. We took several measures to mitigate these issues:
- Frame Sampling: Rather than processing every single image, we found that by processing only two to five images per second, our final heatmap was almost completely unaffected, and the live video of the heatmap being captured was in fact improved. And so we added some configuration options around this.
- Down-Sampling: Similarly, we found that reducing the resolution of the images before processing them greatly reduced the load on the CPU, with the only tradeoff being that the resulting heatmap image was also of a lower resolution. Down-sampling was implemented directly in code, as well as by supporting gstreamer pipelines (so that gstreamer could take care of it). A short sketch of both ideas follows below.
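The sketch below illustrates the two ideas in simplified form; it is not the exact implementation, and the sample interval, target width, and camera index are just example values:
# Simplified illustration of frame sampling and down-sampling (example values only).
import time
import cv2

SAMPLE_INTERVAL = 0.3  # seconds between processed frames (~3 frames per second)
TARGET_WIDTH = 640     # down-sample frames to this width before processing

cap = cv2.VideoCapture(0)
last_processed = 0.0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    now = time.monotonic()
    if now - last_processed < SAMPLE_INTERVAL:
        continue  # frame sampling: skip frames that arrive between samples
    last_processed = now
    scale = TARGET_WIDTH / frame.shape[1]
    small = cv2.resize(frame, None, fx=scale, fy=scale)  # down-sampling
    # ... feed `small` into the background subtraction / heatmap logic ...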
For the aforementioned heatmap script to interface with the drone, we devised two separate workflows, implemented as separate Python scripts:
- heatmap_single_point.py - Launches a drone to a specific altitude, scans the ground and builds the heatmap, then returns the drone back down to where it was launched from.
- heatmap_multi_point.py - Waits for a simple waypoint-based mission plan to be uploaded to the drone (via QGroundControl), then launches the drone, generating heatmaps at each of the waypoints.
These scripts were implemented using MAVSDK-Python and are described in further detail on our github project readme.
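To give a flavour of the single-point flow, here is a heavily simplified sketch (not the real heatmap_single_point.py; the connection string, altitude, and scan duration are just example values):
# Heavily simplified sketch of the single-point workflow (illustrative only):
# take off, hold while the heatmap script scans the ground, then return and land.
import asyncio
from mavsdk import System

async def single_point(altitude_m=20.0, scan_seconds=60):
    drone = System()
    await drone.connect(system_address="udp://:14540")  # e.g. the SITL default
    async for state in drone.core.connection_state():
        if state.is_connected:
            break

    await drone.action.set_takeoff_altitude(altitude_m)
    await drone.action.arm()
    await drone.action.takeoff()

    # Placeholder: in the real workflow the heatmap generation runs against the
    # camera feed while the drone holds its position at altitude.
    await asyncio.sleep(scan_seconds)

    await drone.action.return_to_launch()

asyncio.run(single_point())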
## Challenges and Learning
- The Pandemic: Like everyone else, the pandemic has contributed to scheduling and logistical challenges for the project team. The last couple of months of complete lockdown (in Ontario, Canada), coupled with cold weather, have meant that we haven't been able to fly the drone outside.
- NavQ Boot Loop: The first technical challenge was connecting to the NavQ. The computer seemed to be stuck in a boot loop. It was eventually discovered that powering the NavQ with an external power supply was causing the issue; it worked fine when connected to the drone battery or directly to the computer (community post regarding the issue here). We were able to help a couple of other folks on the forums with this issue.
- Damaged Motors: The biggest hardware challenge (and still an open issue) is that two of the motors do not run smoothly. An RMA was submitted in early January. After quite a bit of time trying to understand the cause of the issue, we suspect it is damage to the motors, likely caused by the screws used to mount them. We opened one of the motors to inspect the coils and can make out some rough spots on them. The issue has meant that we have not been able to successfully fly the drone, due to the uncertainty around the two motors. The interesting thing is that the motors "seem" to work at reduced speed (for example, when taking off with a low altitude setting (1 m) without the propellers, as seen here) but fail with the propellers attached. Video of the stuttering issue can be found below (Video 2):
- Long Range Wireless Connectivity: Another open item is how to connect to the NavQ when out of range of any Wi-Fi. An early goal of the project was to have the heatmap generation streamed live to a ground-based companion computer, but achieving this outside of Wi-Fi range was not obvious. We do not believe the telemetry radio can be used for data-intensive actions such as retrieving the video, but we have been unable to try it in the field due to the aforementioned issue with the motors.
- Performance Issues: We initially experienced performance issues when running the heatmap script on the NavQ. We found that this was due to the high resolution of the images being captured by the camera. To improve performance, we implemented frame sampling (i.e. only grab a frame every x milliseconds), down-sampling (i.e. reduce the size of the image before processing), and also added support for reading video from gstreamer pipelines (instead of the camera directly), as recommended on this forum post. A short sketch of reading frames through such a pipeline follows this list.
- Ramping Up on Drone Development: Being new to the drone scene, we were not quite sure how the code would run, i.e. whether we would need to change the PX4 code itself to accomplish missions. Reading more about the NavQ re-focused us on executing our scripts separately on the companion computer. So while we went through the Developer Guide and setup, we did not have to change any of the software on the drone itself.
- MAVSDK Setup: We experienced some growing pains with the MAVSDK installation. Initially, the installation of MAVSDK on the NavQ seemed to hang (post about it here); it turns out it just takes a long time to install. When trying to build the mavsdk-server, we ran out of space, only realizing on a subsequent re-flash that we had missed expanding the SD card memory per the gitbook setup page for the NavQ. One thing that came out of this experience is that folks on the forums are super helpful! After getting stuck on the last step of the mavsdk-server build/install and posting about it here, we realized that the mavsdk-python wrapper contains the mavsdk-server binary, so building/installing MAVSDK itself is not necessary. We made a post to communicate that to folks who may run into something similar.
- QGroundControl/MAVSDK Oddities: For one of our drone workflows, the idea is to create and upload a mission plan in QGroundControl for the script to act on. MAVSDK-Python states in its documentation that not all mission plans are supported, but we were taken aback when an extremely simple mission plan created in QGroundControl was rejected by MAVSDK-Python. We discovered that this was because MAVSDK-Python doesn't recognize/support the Takeoff task created in QGroundControl. After deleting that task, everything worked as intended. The kicker is that QGroundControl requires you to add a Takeoff task before it allows you to add waypoints. So for all of this to work, we had to: 1) create a Takeoff task, 2) add waypoints, 3) delete the Takeoff task, and 4) upload the plan. Learning some of these low-level oddities certainly took some time.
- Helping Others: We tried to help folks where we thought we could by posting in the forums and documenting instructions and gotchas in our github readme.
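As mentioned under Performance Issues above, one mitigation was to let gstreamer handle capture and scaling before OpenCV ever sees the frames. A rough sketch of what that looks like (the pipeline elements and device path are examples only, and OpenCV must be built with GStreamer support):
# Example only: read already-scaled frames through a GStreamer pipeline so that
# gstreamer handles capture and resizing before OpenCV processes the image.
import cv2

pipeline = (
    "v4l2src device=/dev/video0 ! "
    "video/x-raw,width=640,height=480,framerate=5/1 ! "
    "videoconvert ! video/x-raw,format=BGR ! "
    "appsink drop=true"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ok, frame = cap.read()  # frames arrive down-sampled and rate-limited by the pipeline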
Due to technical issues and time restrictions, we weren't able to achieve all of our goals. Moreover, during the project design, drone assembly, software development, and testing, we came up with many more ideas on how this project can be further expanded. Some of these ideas and next steps include:
- Installing replacement motors so that we can properly test this out in the field.
- Adding support for a live video feed of the heatmap generation to a ground control station, ideally beyond Wi-Fi range.
- Making the heatmap generator robust to drone movement so that heatmaps can be generated over broader areas while the drone is in motion.