MaskCam is a prototype reference design for a Jetson Nano-based smart camera system that measures crowd face mask usage in real-time, with all AI computation performed at the edge. MaskCam detects and tracks people in its field of view and determines whether they are wearing a mask via an object detection, tracking, and voting algorithm.
It uploads detection statistics to the cloud, where a web GUI can be used to monitor face mask compliance in the area the camera is watching. It saves interesting video snippets to local disk (for example, when there's a sudden influx of people not wearing masks) and can optionally stream video via RTSP.
We urge you to try it out! It’s easy to install on a Jetson Nano Developer Kit and only requires a USB webcam. This page gives instructions for setting up and running MaskCam. We'll also be adding a video guide that walks you through the setup.
MaskCam was developed by Berkeley Design Technology, Inc. (BDTI) and Tryolabs S.A., with development funded by NVIDIA. The project is fully open source and offered under the MIT License. For more information about MaskCam, please see the report from BDTI. Further details about using and modifying MaskCam are given in our GitHub repository.
If you have questions, please email us at maskcam@bdti.com. We'll be giving a talk on this project at the 2021 Embedded Vision Summit; stay tuned for a link to the session page!
Materials Needed For This Project
To set up MaskCam, you will need:
- A Jetson Nano Developer Kit running JetPack 4.4.1 or 4.5. (See these instructions on how to install JetPack on the Jetson Nano; a quick way to check which version is already installed is shown just after this list.)
- An external DC 5 volt, 4 amp power supply connected through the Dev Kit's barrel jack connector (J25). (See these instructions on how to enable barrel jack power.) This software makes full use of the GPU, so a typical 2A power supply won't provide enough power.
- A USB webcam attached to your Nano
- An Ethernet cable, USB WiFi dongle, or M.2 WiFi module to connect your Jetson Nano to the internet
- Another computer with a program that can display RTSP streams -- we suggest VLC or QuickTime.
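If you're not sure which JetPack version is installed on your Nano, one quick check (a handy shortcut, not an official JetPack command) is to read the L4T release file; L4T R32.4.4 corresponds to JetPack 4.4.1 and R32.5 to JetPack 4.5:
cat /etc/nv_tegra_release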
The easiest and fastest way to get MaskCam running on your Jetson Nano Dev Kit is using our pre-built containers.
First, power on your Jetson Nano and wait for it to fully boot. Open a terminal and use the following command to download the MaskCam container from Docker Hub (this takes about 10 minutes to download):
sudo docker pull maskcam/maskcam-beta
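If you want to confirm the image downloaded correctly, you can list it (the image ID and size will vary with the current beta version):
sudo docker images maskcam/maskcam-beta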
Find your local Jetson Nano IP address using ifconfig. This address will be used later to view a live video stream from the camera and to interact with the Nano from a web server.
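If you'd rather not scan through the full ifconfig output, hostname -I is a handy alternative that prints just the assigned addresses (on a typical home network, expect something like 192.168.x.x):
hostname -I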
Make sure a USB camera is connected to the Nano, and then start MaskCam by running the following command. Make sure to substitute <your-jetson-ip> with your Nano's IP address.
sudo docker run --runtime nvidia --privileged --rm -it --env MASKCAM_DEVICE_ADDRESS=<your-jetson-ip> -p 1883:1883 -p 8080:8080 -p 8554:8554 maskcam/maskcam-beta
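A few notes on the flags in that command (a rough reading; see the GitHub README for the authoritative details): --runtime nvidia selects the NVIDIA container runtime so the container can use the GPU, --privileged gives the container access to the USB camera device, and the -p options publish the ports MaskCam uses:
-p 1883:1883   # MQTT
-p 8080:8080   # web/file server (assumed; used for serving saved video files)
-p 8554:8554   # RTSP video streaming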
The MaskCam container will start running the mask detection script, maskcam_run.py, using the USB camera as the default input device (/dev/video0). It will produce various output messages in the terminal as it's loading. If there are errors, the process will automatically end after several seconds. Check the Troubleshooting section at the end of this guide for tips on resolving errors.
After 30 seconds or so, it should continually generate status messages (such as Processed 100 frames...). Leave it running (don't press Ctrl+C, but be aware that the device will start heating up) and continue to the next section to view the live mask detection video stream!
If you scroll through the logs and don't see any errors, you should find a message like:
Streaming at rtsp://aaa.bbb.ccc.ddd:8554/maskcam
where aaa.bbb.ccc.ddd is the address that you provided in MASKCAM_DEVICE_ADDRESS previously. If you didn't provide an address, you'll see some unknown address label there, but the streaming will still work.
You can copy-paste that URL into your RTSP streaming viewer (see how to do it with VLC) on another computer. If all goes well, you should be rewarded with streaming video from your Nano, with green boxes around faces wearing masks and red boxes around faces not wearing masks. An example video of the live streaming in action is shown below.
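If you prefer the command line, and assuming VLC is installed on the viewing computer, you can also open the stream directly; the address below is just a placeholder for your own Nano's IP:
vlc rtsp://192.168.0.100:8554/maskcam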
This video stream gives a general demonstration of how MaskCam works. However, MaskCam also has other features, such as the ability to send mask detection statistics to the cloud and view them through a web browser. If you'd like to see these features in action, you'll need to set up an MQTT server, which is covered in the next section: MQTT and Web Server Setup.
If you encounter any errors running the live stream, check the Troubleshooting section for tips on resolving errors. For more details on configuring MaskCam, please check the Setting Device Configuration Parameters section on our GitHub page.
MQTT and Web Server Setup
MaskCam is intended to be set up with a web server that stores mask detection statistics and allows users to remotely interact with the device. We wrote code for instantiating a server that receives statistics from the device, stores them in a database, and has a web-based GUI frontend to display them. A screenshot of the frontend for an example device is shown below.
You can test out and explore this functionality by starting the server on a PC on your local network and pointing your Jetson Nano MaskCam device to it. This section gives instructions on how to do so. The MQTT broker and web server can be built and run on a Linux or macOS machine; we've tested it on Ubuntu 18.04 LTS and macOS Big Sur.
The server consists of several docker containers that run together using docker-compose. Install docker-compose on your machine by following the installation instructions for your platform before continuing. All other necessary packages and libraries will be automatically installed when you set up the containers in the next steps.
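You can verify the installation before moving on:
docker-compose --version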
After installing docker-compose, clone this repo:
git clone https://github.com/bdtinc/maskcam.git
Go to the server/ folder, which has all the needed components implemented in four containers: the Mosquitto broker, backend API, database, and Streamlit frontend.
These containers are configured using environment variables, so create the .env files by copying the default templates:
cd server
cp database.env.template database.env
cp frontend.env.template frontend.env
cp backend.env.template backend.env
The only file that needs to be changed is database.env. Open it with a text editor and replace the <DATABASE_USER>, <DATABASE_PASSWORD>, and <DATABASE_NAME> fields with your own values. Here are some example values, but for security you should choose your own:
POSTGRES_USER=postgres
POSTGRES_PASSWORD=some_password
POSTGRES_DB=maskcam
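If you'd like a quick way to generate a stronger password than the example above, one option (assuming openssl is available, as it is on most Linux and macOS systems) is:
openssl rand -base64 18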
After editing the database environment file, you're ready to build all the containers and run them with a single command:
sudo docker-compose up -d
Wait a couple of minutes after issuing the command to make sure that all containers are built and running. Then, check the local IP of your computer by running the ifconfig command. (It should be an address that starts with 192.168..., 10..., or 172....) This is the server IP that will be used for connecting to the server (since the server is hosted on this computer).
Next, open a web browser and enter the server IP to visit the frontend webpage:
http://<server IP>:8501/
If you see a ConnectionError in the frontend, wait a couple more seconds and reload the page. The backend container can take some time to finish the database setup.
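If the error persists, the container logs usually show what's happening; from the server/ directory you can follow the logs of all services with:
sudo docker-compose logs -f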
Your local web server is now set up and ready to receive MQTT messages from your Jetson Nano.
Set Up a Device With Your Server
Once you've got the server set up on a local machine (or on an AWS EC2 instance with a public IP), switch back to the Jetson Nano device. Run the MaskCam container using the following command, where:
- MQTT_BROKER_IP is set to the IP of your server
- MQTT_DEVICE_NAME is a name for your device (like "Camera1")
- MASKCAM_DEVICE_ADDRESS is the IP address of your Jetson Nano
sudo docker run --runtime nvidia --privileged --rm -it --env MQTT_BROKER_IP=<server IP> --env MQTT_DEVICE_NAME=my-jetson-1 --env MASKCAM_DEVICE_ADDRESS=<your-jetson-ip> -p 1883:1883 -p 8080:8080 -p 8554:8554 maskcam/maskcam-beta
And that's it! If the device has access to the server's IP, you should see some successful connection messages in the output logs, and your device should then appear in the drop-down menu of the frontend (reload the page if you don't see it). In the frontend, select Group data by: Second and hit Refresh status to see how the plot changes when new data arrives.
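If data never shows up in the frontend, one way to check whether MQTT messages are reaching the broker is to subscribe to all topics from any machine with the Mosquitto clients installed (the mosquitto-clients package on Ubuntu, and assuming the broker allows anonymous connections); the wildcard topic is used here because the exact topic names are defined by the MaskCam code:
mosquitto_sub -h <server IP> -t '#' -v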
If you run into any errors or issues while working with MaskCam, please view the Troubleshooting Common Errors section in our GitHub repository. It gives a list of common errors and how to resolve them.
More Information
If you'd like to learn more about MaskCam and dig into the code that makes it work, please visit our open-source GitHub repository at https://github.com/bdtinc/maskcam. The repo also gives instructions for setting up MaskCam with balenaOS so it can be deployed and managed as a fully containerized application.
If you have questions or need help, please email us at maskcam@bdti.com. Also, be sure to check out our independent report on the development of MaskCam!