Fight Pandemic with Smart Drone
Author Roy Zang <roy.zang@nxp.com>
1 Overview
Since the end of 2019, the COVID-19 pandemic has taken millions of lives and hit the global economy. There is no sign that the pandemic will end soon without widespread vaccination.
Modern drone technology provides an innovative way to help prevent and reduce the spread of the pandemic. This project uses a drone with a companion computer to monitor small gatherings and check whether masks are worn. The number of people and the mask-wearing information can be sent back to the gathering organizer via WiFi or I2C.
The project uses the NXP HoverGames drone kit and the NavQ companion computer with its camera to count people and make sure a gathering is not crowded. The NavQ can also perform face mask detection. The results can be used to warn the gathering organizer.
2 Smart drone
2.1 Flight platform
Follow the HoverGames build guide [1] to build the drone platform.
Figure: Drone platform
Figure: The drone in flight
The camera on the drone can stream video and count people via face detection. Once face detection is done, the program can also check whether a mask is worn.
2.2 AI/ML people counting and face mask detection
The NavQ has a MIPI-CSI camera to capture video, and the i.MX 8M Mini provides the computation power.
2.3 Companion computer with camera
The NavQ already includes the MIPI camera. The shipped NavQ Linux kernel includes the driver for the MIPI camera, and the camera appears as the video device /dev/video0.
The shipped filesystem is Ubuntu 20.04.1 LTS, so it is easy to install the packages needed for stream capture and AI/ML in Python.
The default Python version on Ubuntu 20.04 LTS is 3.8.5. Unfortunately, at the time (January 2021) there was no pre-built TensorFlow package for ARM64, so Python 3.7.9 is used instead. The following is the system setup:
- python 3.7.9
- tensorflow 2.4.0
- opencv-python 4.5.1
- Flask 1.1.2
2.4 Detection result via WiFi
The capture control panel can be accessed over WiFi from a computer or smartphone; Flask provides the web service. The following pictures show static results with roughly 5 yards between the people and the camera.
The face detection is based on the posted "real-time face detection" tutorial using OpenCV and MTCNN [2].
2.4.1 Capture people via OpenCV
2.4.2 OpenCV CPU workload
The following shows the CPU usage when running OpenCV face detection and people counting.
2.4.3 MTCNN
Multi-task Cascaded Convolutional Networks (MTCNN) is a framework developed for both face detection and face alignment. A Python implementation is available as the MTCNN library, which is easy to install on the NavQ with pip.
2.4.4 MTCNN CPU workload
The following shows the CPU usage when running MTCNN face detection and people counting.
Comparing the workloads, MTCNN has lower CPU usage. In practice, MTCNN also has a better detection rate.
2.4.5 Mask detection
The face mask detection is derived from the GitHub project [3], file detect_mask_video.py:
https://github.com/chandrikadeb7/Face-Mask-Detection
The following mask detection result was obtained on an i.MX 8M Mini EVK board instead of the NavQ. The NavQ always reboots when running this program, even with no USB connection, so a stable result could not be obtained on the NavQ.
The mask detection result can also be streamed to a web server via Flask.
2.5 Code
The project is implemented in Python.
The basic video streaming framework is based on the flask-video-streaming GitHub project [4].
To run the code, in a NavQ terminal:
$ python app.py
The video stream can then be accessed from any web browser on the same network at port 5000:
192.168.x.x(real NavQ IP address):5000
In app.py
# Raspberry Pi camera module (requires picamera package)
# from camera_pi import Camera
#from camera_opencv import Camera
#from camera_opencv_mtcnn import Camera
from camera_opencv_mask import Camera
app = Flask(__name__)
The default is camera_opencv_mask; camera_opencv_mtcnn and camera_opencv are also supported.
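The streaming route in app.py follows the flask-video-streaming pattern [4]: a generator wraps each JPEG frame in a multipart boundary so the browser can render a live stream from one HTTP response. A minimal sketch of that generator:

```python
# Sketch of the multipart MJPEG generator used in the flask-video-streaming
# pattern. Each JPEG frame from the camera is wrapped in a '--frame'
# boundary chunk.
def gen(camera):
    """Yield one multipart chunk per camera frame."""
    while True:
        frame = camera.get_frame()  # JPEG-encoded bytes
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')
```

In app.py this generator feeds a Flask Response with mimetype 'multipart/x-mixed-replace; boundary=frame'.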
In camera_opencv_mtcnn.py
boxes = detector.detect_faces(img)
if boxes:
    print(boxes, len(boxes))
    for ibox in boxes:
        print(ibox)
        box = ibox['box']
        conf = ibox['confidence']
        x, y, w, h = box[0], box[1], box[2], box[3]
        if conf > 0.5:
            cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 1)
len(boxes) is the people count; it could also be sent to the FMU via I2C (not implemented). One example of print(boxes, len(boxes)):
[{'box': [242, 142, 120, 146], 'confidence': 0.998508632183075, 'keypoints': {'left_eye': (286, 201), 'right_eye': (341, 202), 'nose': (316, 237), 'mouth_left': (289, 262), 'mouth_right': (332, 263)}}] 1
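The counting step above can be factored into a small helper that keeps only detections above a confidence threshold. A sketch (count_people is a name introduced here, not part of the project code):

```python
# Sketch: count people from MTCNN detect_faces() output, keeping only
# detections above a confidence threshold. count_people is a helper name
# introduced for illustration.
def count_people(boxes, threshold=0.5):
    """boxes: list of detection dicts as returned by MTCNN detect_faces()."""
    return sum(1 for b in boxes if b['confidence'] > threshold)
```

Applied to the example output above, the helper returns 1.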
2.6 To-do list (ongoing)
The project also aims to send the warning and the people count to the FMU via I2C; the connection uses the JST-GH connector [5].
However, a stable I2C connection was still being debugged when the project was submitted.
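Once the I2C link is stable, the detection result could be packed into a small fixed-size payload for the FMU. The 3-byte layout below (people count, masked count, warning flag) is an assumption for illustration, not the project's actual protocol:

```python
# Sketch: pack the detection result into a 3-byte I2C payload for the FMU.
# The layout (people count, masked count, warning flag) is an assumption
# for illustration; the project's I2C link is still being debugged.
import struct

def pack_result(people, masked, max_people=5):
    """Return 3 bytes: people count, masked count, 1-byte warning flag."""
    warning = 1 if people > max_people else 0
    return struct.pack('BBB', people, masked, warning)
```

A fixed-size payload keeps the I2C transaction simple for the FMU side to parse.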
3 Conclusion
The project uses a drone with a companion computer as a pandemic-fighting platform. The NavQ i.MX 8M Mini board is used as the companion computer to provide the video stream and computation power. The video stream is processed locally on the drone with the NavQ, and the people counting and mask detection results can be accessed remotely via WiFi.
Multiple people can be captured and detected by the smart drone platform.
The 2 GB memory of the NavQ is low for some complex AI/ML cases.
I2C communication between the NavQ and the FMU is still being debugged.
4 Acknowledgement
Many thanks to Iain Galloway and the HoverGames team for providing the drone kit and the NavQ computer, and for patiently answering questions.
5 Reference
The project code can be found at:
https://teams.microsoft.com/_#/files/Submissions%20First%20Flyers?groupId=ed622e69-ede7-4ede-9e9c-d97566d92ab3&threadId=19%3A657e4786ed1c4e2692cafe01b6d6226f%40thread.skype&ctx=channel&context=Code&rootfolder=%252Fsites%252FHoverGamesFirstFlyers%252FShared%2520Documents%252FSubmissions%2520First%2520Flyers%252FChallenge2_HelpDronesHelpOthers%252FTeam%2520Austin%2520Smart%2520Drone(RoyZang%252C%2520Austin)%252FCode
Note: NXP internal access only for now.
[1] https://nxp.gitbook.io/hovergames/
[2] https://www.mygreatlearning.com/blog/real-time-face-detection/
[3] https://github.com/chandrikadeb7/Face-Mask-Detection
[4] https://github.com/miguelgrinberg/flask-video-streaming
[5] https://nxp.gitbook.io/8mmnavq/hardware-overview/pinouts-and-connector-info