As we deal with these unprecedented times, organisations are looking at how people can safely return to physical shared spaces. A common approach is to use computer vision to monitor the occupancy of shared spaces in schools, offices and retail stores. With the technology that is currently available we can take this one step further and monitor the distance between people in these shared spaces. With information about the number of people and the distances between them, we can build alerting solutions that notify us when good social distancing practices are not being adhered to.
The most basic alerting solution is to provide a dashboard with a people counter and a distances graph.
For this particular solution I decided to use the Luxonis DepthAI USB3C camera; it is the basis for the OpenCV AI Kit OAK-D camera that garnered much attention on Kickstarter. It features three vision sensors: one 4K colour camera and a pair of stereo cameras used for spatial sensing. To top it off, it includes an Intel Myriad X accelerator that can very efficiently run AI models, turning the unstructured data coming from the cameras into structured data that we can build solutions such as this one with.
The DepthAI camera doesn't have any general-purpose processing capabilities that would let us provide, for example, a dashboard, data persistence, or cloud connectivity. To address this I paired the DepthAI camera with an ADLINK Vizi-AI, which includes an Intel Atom processor and a software stack from ADLINK that eases the process of building solutions such as this one.
In particular, we leverage the Node-RED application that is included on the Vizi-AI to process the data coming from the DepthAI camera. All of the logic for counting the number of people, calculating the distances between people, and the dashboard is implemented in Node-RED. As you will see, it provides a fairly frictionless environment for rapidly prototyping solutions like the one we are building here.
The devil is in the details

Let's start by turning the cameras on the DepthAI into a source of data by leveraging the onboard Myriad X with the person-detection-retail-0013 model from the Open Model Zoo.
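The model reports each detection as a row of seven values, [image_id, label, confidence, x_min, y_min, x_max, y_max], with coordinates normalised to [0, 1]. As a rough sketch of what decoding one frame's output looks like (the confidence threshold and field names here are my own choices, not taken from the original code):

```python
def decode_detections(raw, frame_w, frame_h, conf_threshold=0.5):
    """Convert raw person-detection-retail-0013 rows into pixel-space dicts.

    `raw` is an iterable of 7-element rows: [image_id, label, confidence,
    x_min, y_min, x_max, y_max], with box coordinates normalised to [0, 1].
    Rows below `conf_threshold` are dropped.
    """
    boxes = []
    for image_id, label, conf, x_min, y_min, x_max, y_max in raw:
        if conf < conf_threshold:
            continue
        boxes.append({
            "label": int(label),
            "confidence": float(conf),
            "x_min": int(x_min * frame_w),
            "y_min": int(y_min * frame_h),
            "x_max": int(x_max * frame_w),
            "y_max": int(y_max * frame_h),
        })
    return boxes
```

On the real device this decoding happens against the tensors the Myriad X returns for each frame; the sketch only shows the shape of the data.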
We start with the main entry-point for the DepthAI camera application.
This uses a DepthAIConfig class that we have written, which wraps the processing of the DepthAI default values along with the overrides from the command line. We won't go into the details of it, but feel free to browse through it in the accompanying source code repository. The other class we make use of that is specific to this application is Main, and its details are included below. It is responsible for grabbing data from the DepthAI device and making it available for building solutions by publishing the data through the ADLINK Edge SDK.
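In outline, the bridging logic can be sketched as follows. This is a schematic, not the real listing: the actual init_edge_thing() and write_tag come from the ADLINK Edge SDK, so they are stubbed here (as a simple list sink) just to make the control flow visible, and the "DetectionBox" tag name is an assumption based on the reader node used later.

```python
class Main:
    """Schematic sketch of the bridging class described above.

    The real init_edge_thing()/write_tag() use the ADLINK Edge SDK to
    publish onto the Data River; here they are stubbed so the control
    flow is visible without the SDK installed.
    """

    def __init__(self, depthai_source):
        self.source = depthai_source          # iterable of per-frame detections
        self.thing = self.init_edge_thing()   # Edge SDK "thing" (stubbed)

    def init_edge_thing(self):
        # Real code: create and register the Edge SDK thing.
        # Stub: collect published tags in a list.
        return []

    def write_tag(self, tag_name, value):
        # Real code: publish `value` under `tag_name` to the Data River.
        self.thing.append((tag_name, value))

    def run(self):
        # Pull detections from the DepthAI device and publish each batch.
        for detections in self.source:
            self.write_tag("DetectionBox", detections)
```

The real class in the repository also handles device setup and shutdown; the sketch only captures the grab-then-publish loop.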
Of note in the code above is the init_edge_thing() call, which sets up the Edge SDK and enables the call to write_tag in run, which makes the information coming from the DepthAI camera available for other applications to build on top of. The information is made available with the following schema.
The DepthAI class is responsible for managing and acquiring the data from the DepthAI device.
Finally, to complete the solution we use Node-RED. The flow subscribes to the ADLINK Data River (to which we published the DepthAI camera information in the previous step) in the DetectionBox_Reader node. The subscription results in a stream of detection boxes that can be processed in the rest of the flow.
The detailed configuration of the DetectionBox_Reader node.
The next step is to filter out all of the detection boxes that are not for the person class. The filtering is done in the Extract Inferenced Objects node.
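The filtering itself amounts to keeping only the boxes whose classification is "person". The Node-RED function node does this in JavaScript; sketched here in Python for consistency with the camera application (the "classification" field name is an assumption about the detection-box payload):

```python
def extract_people(detections, person_label="person"):
    """Keep only the detection boxes classified as a person.

    `detections` is a list of detection-box dicts as published on the
    Data River; the "classification" key is assumed, not confirmed.
    """
    return [d for d in detections if d.get("classification") == person_label]
```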
Now that we have only a list of detected people, we can calculate the distance between them in the Calculate Distances Between People node. The results of this node flow into a dashboard graph node that shows the distances between people in a bar chart.
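The distance calculation is a pairwise comparison over the detected people. A minimal sketch of the idea, assuming each person carries a 3-D position derived from the stereo depth data (the "position" field name and its units are my assumptions):

```python
import math
from itertools import combinations

def pairwise_distances(people):
    """Euclidean distance between every pair of detected people.

    Each person dict is assumed to carry a "position" key holding an
    (x, y, z) coordinate from the stereo depth data.
    """
    results = []
    for (i, a), (j, b) in combinations(enumerate(people), 2):
        results.append({
            "pair": (i, j),
            "distance": math.dist(a["position"], b["position"]),
        })
    return results
```

For n people this yields n(n-1)/2 distances, which is exactly the set of bars the dashboard chart displays.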
We can also do a simple count of the number of people detected in the People Counter node. The result flows into a gauge that shows the number of people detected.
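With both the count and the distances in hand, the alerting mentioned at the start reduces to a couple of threshold checks. A minimal sketch (both threshold values are illustrative assumptions, not values from this solution):

```python
def social_distance_alert(people, distances, min_distance=2.0, max_occupancy=10):
    """Flag occupancy and spacing violations from the computed data.

    `distances` is a list of {"pair": ..., "distance": ...} records.
    The 2-metre spacing and occupancy limit are illustrative defaults.
    """
    violations = [d for d in distances if d["distance"] < min_distance]
    return {
        "people_count": len(people),
        "over_capacity": len(people) > max_occupancy,
        "violations": violations,
    }
```

In the flow this would sit between the distance calculation and a notification node, firing whenever the returned record contains violations.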
And finally we get a dashboard that looks like the following.
We have our occupancy monitor that runs completely at the edge by combining Luxonis and ADLINK technology. The next steps would be to use the Azure IoT or AWS IoT connectors that ADLINK provides to send the processed data to the cloud, so that a fleet of these solutions can be remotely monitored.
Toby Mcclean