I recently delivered a session at NVIDIA's GTC 2020 Digital Event on "Productionizing GPU Accelerated IoT Workloads at the Edge". A video of this talk and a recorded workshop are available online for playback at your leisure. The topic of GPU-accelerated small-form-factor devices has been a strong interest of mine over the last couple of years. The technology makes it possible to develop edge-deployable solutions that can perform advanced AI tasks, including computer vision, on devices not much larger than your cell phone. When coupled with cloud services, the pairing allows you to operate on and react to artificial intelligence inference at scale through the cloud and in near real-time at the edge.
In this project, we will demonstrate how to use a Camera Serial Interface (CSI) Infrared (IR) Camera on the NVIDIA Jetson Nano with Microsoft Cognitive Services, Azure IoT Edge, and Azure IoT Central. This setup will allow us to accurately capture images at any time of day, to be analyzed in real-time using a custom object detection model with reporting to the cloud. Specifically, we will cover techniques in this article that will allow us to interact with the CSI hardware and access the host X11 windowing system from a containerized IoT Edge workload.
This will give us the baseline knowledge to modify an existing solution to operate with new hardware, namely the IMX219-77IR 8-Megapixel Infrared Night Vision IR Camera Module, to allow consistent image capture regardless of lighting conditions. With our camera properly configured, we will then demonstrate how to apply a custom object detection model to the image stream. We will then package our application as a containerized IoT Edge deployment with reporting into Azure IoT Central.
At the end, you will leave with the knowledge to create an end-to-end edge-to-cloud object detection solution that can be modified to a variety of use cases.
When employing computer vision-based object detection models, it is important to take into account the hardware and the environment that your solution will be deployed into. With standard color cameras, you may find that object detection works great when lighting conditions are favorable. However, at certain times of day, and particularly at night in unlit areas, the imagery will often degrade to pitch black or a combination of black and white, depending on whether an IR filter is present. Below is an example image from a camera that is capable of viewing infrared frequencies. The image quality is decent for viewing areas at night, but you may find that object detection models don't perform very well with this type of input.
I can only theorize as to why object detection algorithms are less accurate with these types of images. I suspect it has to do with limited color information, decreased ability to accurately locate object edges, and the likelihood that the object detection model was trained against full color images.
I can tell you with absolute certainty that this effect is real.
Over the last few months, I have employed a home security system built on top of Azure IoT Edge and NVIDIA DeepStream that monitors my home 24 hours a day to detect People and Vehicles with uninterrupted reporting to a Dashboard in Azure IoT Central. Due to the recent COVID-19 quarantine, we don't leave the house much and I can assure you that there has been a car parked in my driveway for the majority of this time.
Take a look at this graph of object detections over 30 days; the green graph shows detected people and the pink one shows detected vehicles. Pay particular attention to the gaps in the vehicle graph.
Do you notice anything in this data?
Let's zoom into a 24 hour period of vehicle detections, and note that the dashed segments represent periods with no detections:
In the green graph, from around 10:00 PM to 6:20 AM, there are no people detected. That is because we are out of view of the cameras and sleeping.
In the pink graph, at around 5:10 PM the vehicle detections start to become increasingly sparse, and from around 8:00 PM until 6:40 AM there are zero detections. The vehicle was in the driveway this entire time.
What does this tell us?
The camera image coupled with the employed object detection model does not produce accurate detections at night.
Here is a clear image depicting the issue; the image data is completely different at night versus during the day:
How can the issue of image consistency be alleviated in AIoT computer vision solutions?
- Illuminate poorly lit areas at night with additional lighting (effectively altering the environment to match the daytime capture / model training data)
- Train a second object detection model on evening lighting conditions to be employed during the evening hours. This has the drawbacks of needing to switch to the correct model at the appropriate time and maintaining multiple models to detect the same object.
- Use a camera that produces the same or at least very similar images regardless of whether it is day or night and train a single model to work with the images it produces.
There may be other solutions, but it's clear that altering the environment or duplicating the model is likely to create additional problems. The approach of using a camera that produces consistent images solves the problem directly and elegantly.
Now the question becomes, what kind of camera options are available that can produce consistent images?
Thermal cameras are one possible solution. They can be coupled with an object detection model trained to produce detections based on the heat signatures of objects in its field of view. This works regardless of lighting conditions because it does not depend on visible color information. For vehicle detection, it may not be perfect (a cold vehicle gives off a very different signature from one that is running or was recently turned off).
Unfortunately, I did not have access to one of these cameras at the time of writing, but I would love to try this. If you know of a good candidate, particularly one that can produce image data consumable with GStreamer, please let me know in the comments!
Another option is to use a night-vision / infrared capable camera with active infrared LED attachments. The infrared camera has the capability of seeing portions of the infrared spectrum while the infrared LEDs illuminate its field of view with IR beams that are barely visible to the human eye. You may notice a bit of reddish light emanating from the LED attachment where the beams are concentrated.
I like this setup because the camera itself is inexpensive (~$25 USD on Amazon) and it produces very consistent images in both light and dark environments. Here is an example of the camera running in the daytime:
And here is an example of the same camera used at night:
Notice how similar these images are compared to the night-time and daytime comparison we showed earlier.
Now that we have decided on a strategy for producing consistent images, let's implement it into a live object detection pipeline.
NVIDIA provides an SDK known as DeepStream that allows for seamless development of custom object detection pipelines. It provides a built-in mechanism for obtaining frames from a variety of video sources for use in AI inference processing. This is accomplished using a series of plugins built around the popular GStreamer framework.
This SDK is available for use as a stand-alone application that can be installed on host x86 or ARM64 environments and also comes available as a containerized module suitable for compatible devices running Azure IoT Edge.
DeepStream is primarily controlled through modification of a configuration file which specifies the video sources, output types, and inference models to employ in a processing pipeline. A DeepStream application is typically launched with the -c option which directs it to use a configuration file at a designated path. For example:
deepstream-test5-app -c DSConfig.txt
CSI, or Camera Serial Interface, is a specification that allows for high throughput transmission of image data from a connected camera to a host processor. This is often designated by a special connector that looks like a ribbon cable. These are most commonly encountered in variations of the official Raspberry Pi camera, which use this specification and connector:
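Before bringing DeepStream into the picture, it can be helpful to sanity-check that the CSI camera itself works from the host. The sketch below assumes the standard nvarguscamerasrc and nvoverlaysink GStreamer elements that ship with JetPack/L4T on the Jetson Nano; press Ctrl+C to stop it:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=3280, height=2464, framerate=20/1, format=NV12' ! nvoverlaysink -e
If a live preview appears on the attached display, the camera and ribbon connection are good to go.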
When using a CSI based camera from a DeepStream instance running on the host, you simply need to specify a source set to type = 5, and provide the appropriate width, height, fps, and sensor id values.
Note that the particular camera we are using, the IMX219-77IR, is meant to operate at 20 frames per second. This is why we set the camera-fps-n (frames per second numerator) and camera-fps-d (frames per second denominator) to 20 and 1 respectively. In my tests, this camera performs optimally at this setting and may otherwise drop frames and inhibit performance.
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=5
camera-width=3280
camera-height=2464
camera-fps-n=20
camera-fps-d=1
camera-csi-sensor-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0
A DeepStream configuration with this type of entry can access a compatible CSI device.
However, this is not as straightforward when employed in a containerized instance of DeepStream. We have to account for the mechanisms that allow DeepStream/GStreamer to communicate with CSI devices and ensure that the container has access to them. Under the hood, CSI communication is handled using interprocess communication over a socket interface. To accommodate this in a container, we must enable host IPC communication and mount in the appropriate socket as explained in this post on the NVIDIA forums.
For convenience, we also mount in /data/misc/storage to provide a persistent DeepStream configuration that is editable from the host.
In addition, we must expose the camera device from the host to the container (by default the camera should be accessible at /dev/video0).
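Before launching the container, you can quickly confirm that the camera node is actually present on the host:
ls -l /dev/video*
If nothing is listed, re-check the ribbon cable connection and reboot the device before continuing.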
To test this, we can run an instance of the marketplace.azurecr.io/nvidia/deepstream-iot2-l4t:latest image with the --rm and -it flags to allow the container to remove itself when exited and begin in interactive mode. We must also specify the NVIDIA container runtime (--runtime nvidia) to enable access to the GPU from the container. Also, we will enable host networking (--net=host) to make it easy to expose additional services from the container that may require access to network ports on the host (for example, an RTSP server for visualizing detected objects in the video stream).
All of the required steps are accounted for in the command below:
docker run --rm -it --runtime nvidia --net=host --ipc=host -v /tmp/argus_socket:/tmp/argus_socket -v /data/misc/storage:/data/misc/storage --device=/dev/video0:/dev/video0 marketplace.azurecr.io/nvidia/deepstream-iot2-l4t:latest
If we wish to employ this in an IoT Edge configuration, we would transform these command-line options into compatible createOptions and add them to a deployment.template.json. For a full list of available container create options, see the Docker Engine API documentation.
"modules": {
"NVIDIADeepStreamSDK": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "marketplace.azurecr.io/nvidia/deepstream-iot2-l4t:latest",
"createOptions": {
"Entrypoint": [
"/usr/bin/deepstream-test5-app",
"-c",
"DSConfig.txt"
],
"HostConfig": {
"runtime": "nvidia",
"NetworkMode": "host",
"Binds": ["/data/misc/storage:/data/misc/storage","/tmp/argus_socket:/tmp/argus_socket"],
"IpcMode" : "host",
"Devices": [
{
"PathOnHost": "/dev/video0",
"PathInContainer":"/dev/video0",
"CgroupPermissions":"rwm"
}
]
},
"NetworkingConfig": {"EndpointsConfig": {"host":{}}},
"WorkingDir": "/data/misc/storage"
}
}
},
Using the techniques we have shown so far, you will be able to access the CSI camera from a containerized instance of the DeepStream SDK. However, you may want to visualize the result of the object detection locally. This is especially useful during testing and may be required for certain production scenarios.
Visualizing results from a containerized instance of NVIDIA DeepStream
Visualization of detected objects with overlaid bounding boxes can be achieved by configuring an appropriate Sink Group in your DeepStream configuration. These include the EGL, Overlay, and RTSP Streaming sinks. EGL will give you a familiar X11-based application window with minimize, maximize, and close buttons. Overlay will draw over the device framebuffer and override anything that may be on screen. RTSP will serve the results over an RTSP endpoint that can be consumed by an RTSP client. Examples of these configurations are provided below as sink0 (EGL), sink1 (Overlay), and sink2 (RTSP) respectively:
[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=OverlaySink
type=2
sync=1
source-id=0
gpu-id=0
qos=0
nvbuf-memory-type=0
[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=OverlaySink
type=5
sync=1
source-id=0
gpu-id=0
qos=0
nvbuf-memory-type=0
overlay-id=1
[sink2]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=OverlaySink
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=500000
# set below properties in case of RTSPStreaming
rtsp-port=8554
#udp-port=5400 //Multi-cast port
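If you enable the RTSP sink, you can view the annotated stream from another machine on the same network using any RTSP-capable client. As a rough sketch (the exact mount point is printed by deepstream-test5-app when it starts; /ds-test is a common default), a standard GStreamer playback pipeline would look something like this, with <jetson-ip> replaced by your device's IP address:
gst-launch-1.0 rtspsrc location=rtsp://<jetson-ip>:8554/ds-test latency=200 ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink
A desktop player such as VLC pointed at the same URL works as well.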
The EGL type is special in that it also requires you to give the container access to the X11 socket on the host system and to configure permissions so that the owner of the container process can access and draw to the screen.
To accommodate this for IoT Edge, edit /etc/profile on the host system and add the following line to this file:
xhost local:iotedge
On the next reboot, this directive will grant the local iotedge user account permission to access the X11 service. This user runs all IoT Edge processes by default, which allows us to use X11 services from containerized modules.
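After rebooting, you can optionally verify that the directive took effect by running xhost with no arguments from a terminal in the host's desktop session; it prints the current X11 access control list, which should now include the entry added above:
xhost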
In addition to this, the container will need to have a DISPLAY environment variable set, and the X11 socket must be mounted in to enable communication with the X11 server running on the host. In most instances, the DISPLAY environment variable will be set to :0, which corresponds to the first display attached to the host. If using a single monitor with a Jetson Nano device this value should always be :0, but you can confirm this by running the following on the terminal:
echo $DISPLAY
Once we have the display value, we can modify the previous docker command to include the DISPLAY environment variable and mount in the X11 socket:
docker run --rm -it --runtime nvidia --net=host --ipc=host -v /tmp/argus_socket:/tmp/argus_socket -v /data/misc/storage:/data/misc/storage -v /tmp/.X11-unix/:/tmp/.X11-unix/ -e DISPLAY=$DISPLAY --device=/dev/video0:/dev/video0 marketplace.azurecr.io/nvidia/deepstream-iot2-l4t:latest
Similarly, if deploying using Azure IoT Edge, you would modify the associated deployment.template.json to:
"modules": {
"NVIDIADeepStreamSDK": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "marketplace.azurecr.io/nvidia/deepstream-iot2-l4t:latest",
"createOptions": {
"Entrypoint": [
"/usr/bin/deepstream-test5-app",
"-c",
"DSConfig.txt"
],
"HostConfig": {
"runtime": "nvidia",
"NetworkMode": "host",
"Binds": ["/data/misc/storage:/data/misc/storage","/tmp/argus_socket:/tmp/argus_socket", "/tmp/.X11-unix/:/tmp/.X11-unix/"],
"IpcMode" : "host",
"Devices": [
{
"PathOnHost": "/dev/video0",
"PathInContainer":"/dev/video0",
"CgroupPermissions":"rwm"
}
]
},
"NetworkingConfig": {"EndpointsConfig": {"host":{}}},
"WorkingDir": "/data/misc/storage"
}
},
"env": {
"DISPLAY":{
"value": ":0"
}
}
},
Now that we understand how to properly configure access from a container to the X11 server on the host, we can employ the EGL sink to test our model from a containerized instance of the DeepStream SDK.
Verifying a DeepStream configuration using the DeepStream SDK module for Azure IoT Edge
Now that we have discussed some of the background on accessing the CSI camera and how to enable visualization of detected objects within a DeepStream container, we can verify this process. We will do this by bringing in a sample DeepStream configuration that uses an object detection model created with customvision.ai. On your Jetson Nano, create a folder named data at the root:
sudo mkdir /data
Download and extract the DeepStream configuration files into the data directory:
cd /data
sudo wget -O setup.tar.bz2 --no-check-certificate "https://onedrive.live.com/download?cid=54AD8562A32D8752&resid=54AD8562A32D8752%21376762&authkey=AMyy2JGDIFFQaUI"
sudo tar -xjvf setup.tar.bz2
Make the folder accessible from a normal user account:
sudo chmod -R 755 /data
The extracted files should consist of the following; we will explain their purpose in detail:
./misc/
./misc/storage/
./misc/storage/ONNXSetup/
./misc/storage/ONNXSetup/configs/
./misc/storage/ONNXSetup/configs/config_infer_onnx.txt
./misc/storage/ONNXSetup/configs/msgconv_config.txt
./misc/storage/ONNXSetup/detector/
./misc/storage/ONNXSetup/detector/labels.txt
./misc/storage/ONNXSetup/detector/libnvdsinfer_custom_impl_Yolo.so
./misc/storage/ONNXSetup/detector/model.onnx_b1_fp32.engine
./misc/storage/ONNXSetup/detector/model.onnx
./misc/storage/DSConfig.txt
The ONNXSetup folder contains two directories: configs and detector. The configs directory specifies a configuration in config_infer_onnx.txt. This file specifies the .onnx file located at /data/misc/storage/ONNXSetup/detector/model.onnx as the model to be used for object detection. The msgconv_config.txt provides metadata which is used by Azure IoT Edge to determine which sensor (camera) a detected object originated from.
The detector directory contains a labels.txt file which provides the labels that accompany the object detection model (model.onnx). This model detects one label of type 'person' by default. The libnvdsinfer_custom_impl_Yolo.so file is a special shared object that acts as a parser for the .onnx object detection models generated by customvision.ai. This file is required if you intend to use these types of models with NVIDIA DeepStream. The object detection model itself is provided as model.onnx, using the Open Neural Network Exchange (ONNX) format. There is also a pre-built single batch (b1) 32-bit floating point precision (fp32) engine file that has been generated by DeepStream to allow this model to run on the NVIDIA TensorRT inference runtime.
The DSConfig.txt file is the DeepStream application configuration; for a detailed overview of what you can do with this file, refer to the official DeepStream documentation. By default, this configuration specifies a single CSI camera source (the camera exposed at /dev/video0) and designates config_infer_onnx.txt as the configuration for object detection. It uses an output sink of type EGL.
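As a point of reference, the portion of DSConfig.txt that wires the inference configuration into the pipeline looks roughly like the following; treat this as an illustrative sketch, since the exact properties in your extracted file may differ:
[primary-gie]
enable=1
# Points the primary inference engine at the Custom Vision ONNX model configuration
config-file=/data/misc/storage/ONNXSetup/configs/config_infer_onnx.txt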
Now that we understand our sample DeepStream application and have it on our Jetson device, it is time to try it out!
Ensure that your CSI camera is properly connected to the Jetson Nano. If it is not already connected, power down your device and attach it to the on-board CSI connector. Run the following command to begin an interactive session within a containerized instance of the DeepStream SDK module:
docker run --rm -it --runtime nvidia --net=host --ipc=host -v /tmp/argus_socket:/tmp/argus_socket -v /data/misc/storage:/data/misc/storage -v /tmp/.X11-unix/:/tmp/.X11-unix/ -e DISPLAY=$DISPLAY --device=/dev/video0:/dev/video0 marketplace.azurecr.io/nvidia/deepstream-iot2-l4t:latest
This will start an interactive session within the DeepStream SDK container. Manually launch DeepStream by issuing the following command within the interactive session:
deepstream-test5-app -c /data/misc/storage/DSConfig.txt
You should see some output indicating that DeepStream needs to create an engine file from model.onnx, this may take a minute or so. After that has completed, the application should produce output similar to the following:
To exit the interactive session, type "exit" and the container will self-destruct.
If you receive errors, double-check that the commands were entered in correctly and that you have extracted the sample application to the appropriate directory (/data). If you are still encountering issues, feel free to reach out to me directly on Twitter @pjdecarlo and we'll sort it out =)
If you were successful, congratulations! You now know how to easily test a sample DeepStream configuration from within a container using an interactive session. Now let's modify the app to use a custom object detection model.
Training a custom object detection model using Custom Vision AI
Microsoft provides a Custom Vision service as part of Cognitive Services which allows you to very easily train and export a custom object detection model. To get started, create an account at customvision.ai then begin a new project with the following options:
Once created, upload at least 15 images per tag (object) that you are interested in detecting. Here is an example using 15 images of myself to train a "person" detector:
Ensure that each image in the training set is tagged appropriately with an associated bounding box around the intended object:
With your images tagged, select "Train" and then "Quick Training":
Next, select "Performance", and then "Export":
Select ONNX and then right-click the "Download" button to copy the link to download your resulting model:
Take note of this value as we will use it in the next section. The copied text should look similar to the following:
https://irisscuprodstore.blob.core.windows.net/m-ad5281beaf20440a8f3f046e0e7741af/e3ebcf6e22934c7e89ae39ffe2049a46.ONNX.zip?sv=2017-04-17&sr=b&sig=A%2F9raRar12TSTCvH7D72OxD6mBqvRY5doovtwV4Bjt0%3D&se=2020-04-15T20%3A43%3A26Z&sp=r
That's all there is to it! If you are interested in learning more about object detection models created with customvision.ai, there is an excellent resource on the topic at Microsoft Learn.
Using a Custom Vision AI Object Detection Model with NVIDIA DeepStream
If we wish to use our new model with our existing application, all we have to do is update /data/misc/storage/ONNXSetup/detector/labels.txt and /data/misc/storage/ONNXSetup/detector/model.onnx.
This can be done on your Jetson Nano using the link copied in the previous section. To begin, let's download the exported model as "model.zip".
Note that it is important that the link to your model is quoted when running this command!
cd /data
sudo wget "https://irisscuprodstore.blob.core.windows.net/m-ad5281beaf20440a8f3f046e0e7741af/e3ebcf6e22934c7e89ae39ffe2049a46.ONNX.zip?sv=2017-04-17&sr=b&sig=A%2F9raRar12TSTCvH7D72OxD6mBqvRY5doovtwV4Bjt0%3D&se=2020-04-15T20%3A43%3A26Z&sp=r" -O model.zip
Next, we will unzip the model.zip that we just downloaded into a new directory named "model":
sudo unzip model.zip -d model
Now we will copy labels.txt and model.onnx from the model directory to /data/misc/storage/ONNXSetup/detector/labels.txt and /data/misc/storage/ONNXSetup/detector/model.onnx:
sudo cp /data/model/labels.txt /data/misc/storage/ONNXSetup/detector/labels.txt
sudo cp /data/model/model.onnx /data/misc/storage/ONNXSetup/detector/model.onnx
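One thing to watch for: the detector directory also contains the previously generated model.onnx_b1_fp32.engine file. Depending on how config_infer_onnx.txt references that engine file, DeepStream may attempt to reuse it rather than rebuild it for the new model. If your new labels do not appear to take effect, you can remove the old engine so that TensorRT regenerates it on the next run:
sudo rm /data/misc/storage/ONNXSetup/detector/model.onnx_b1_fp32.engine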
Now to test out the new model, use the same steps as before to start an interactive session with the containerized instance of the DeepStream SDK:
docker run --rm -it --runtime nvidia --net=host --ipc=host -v /tmp/argus_socket:/tmp/argus_socket -v /data/misc/storage:/data/misc/storage -v /tmp/.X11-unix/:/tmp/.X11-unix/ -e DISPLAY=$DISPLAY --device=/dev/video0:/dev/video0 marketplace.azurecr.io/nvidia/deepstream-iot2-l4t:latest
Next, manually launch the DeepStream application with the same configuration as before inside the interactive session. You should see similar results, only this time it will use your newly updated custom object detection model:
deepstream-test5-app -c /data/misc/storage/DSConfig.txt
Congratulations! Now we are ready to deploy this workload using Azure IoT Edge and Azure IoT Central to publish, monitor, and act on the telemetry produced by our DeepStream application.
Creating an IoT Central Application for use with the NVIDIA DeepStream SDK Module
Sign up for an Azure Account, or sign in if you already have one.
Navigate to the Build Your IoT Application section of IoT Central and select the "Custom App" template.
Give the IoT Central Application a name and URL (both must be globally unique). Then, select "Custom application" for the application template, choose a pricing tier, select a subscription, and choose "United States" for the location. Finally, click "Create" and wait for the deployment to complete.
When the deployment completes, you will be navigated to the newly deployed IoT Central Instance.
If you need to find the url to your IoT Central instance, you can view a list of all of the IoT Central instances for your subscription at https://apps.azureiotcentral.com/myapps
Create the Device Template for the NVIDIA Jetson Nano
In your IoT Central Instance, select "Device Templates"
Next, select "New" and choose "Azure IoT Edge" for the device template type.
When prompted, do NOT upload a manifest, instead select "Skip+Review"
Next, choose "Create"
Next, select "Import Capability Model"
Save the following as "nvidia-jetson-nano-dcm.json" and upload it when prompted. This file is a JSON document that represents the types of telemetry and properties that our device can produce. It is created using the Digital Twin Definition Language (DTDL).
{
"@id": "urn:AzureIoT:NVIDIAJetsonNano:1",
"@type": "CapabilityModel",
"implements": [],
"displayName": {
"en": "NVIDIA Jetson Nano DCM"
},
"contents": [
{
"@id": "urn:AzureIoT:NVIDIAJetsonNano:NVIDIANanoIotCModule:1",
"@type": [
"Relationship",
"SemanticType/EdgeModule"
],
"displayName": {
"en": "NVIDIA Jetson Nano"
},
"name": "NVIDIANanoIotCModule",
"maxMultiplicity": 1,
"target": [
{
"@id": "urn:AzureIoT:NVIDIANanoIotCModule:1",
"@type": "CapabilityModel",
"implements": [
{
"@type": "InterfaceInstance",
"displayName": {
"en": "Settings"
},
"name": "Settings",
"schema": {
"@id": "urn:AzureIoT:NVIDIANanoIotCModule:ISettings:1",
"@type": "Interface",
"displayName": {
"en": "Settings"
},
"contents": [
{
"@type": "Property",
"displayName": {
"en": "Primary Detection Class"
},
"name": "wpPrimaryDetectionClass",
"writable": true,
"schema": "string"
},
{
"@type": "Property",
"displayName": {
"en": "Secondary Detection Class"
},
"name": "wpSecondaryDetectionClass",
"writable": true,
"schema": "string"
}
]
}
},
{
"@type": "InterfaceInstance",
"name": "DeviceInformation",
"displayName": {
"en": "Device information"
},
"schema": {
"@id": "urn:azureiot:DeviceManagement:DeviceInformation:1",
"@type": "Interface",
"displayName": {
"en": "Device information"
},
"contents": [
{
"@type": "Property",
"comment": "Company name of the device manufacturer. This could be the same as the name of the original equipment manufacturer (OEM). Ex. Contoso.",
"displayName": {
"en": "Manufacturer"
},
"name": "manufacturer",
"schema": "string"
},
{
"@type": "Property",
"comment": "Device model name or ID. Ex. Surface Book 2.",
"displayName": {
"en": "Device model"
},
"name": "model",
"schema": "string"
},
{
"@type": "Property",
"comment": "Version of the software on your device. This could be the version of your firmware. Ex. 1.3.45",
"displayName": {
"en": "Software version"
},
"name": "swVersion",
"schema": "string"
},
{
"@type": "Property",
"comment": "Name of the operating system on the device. Ex. Windows 10 IoT Core.",
"displayName": {
"en": "Operating system name"
},
"name": "osName",
"schema": "string"
},
{
"@type": "Property",
"comment": "Architecture of the processor on the device. Ex. x64 or ARM.",
"displayName": {
"en": "Processor architecture"
},
"name": "processorArchitecture",
"schema": "string"
},
{
"@type": "Property",
"comment": "Name of the manufacturer of the processor on the device. Ex. Intel.",
"displayName": {
"en": "Processor manufacturer"
},
"name": "processorManufacturer",
"schema": "string"
},
{
"@type": "Property",
"comment": "Total available storage on the device in kilobytes. Ex. 2048000 kilobytes.",
"displayName": {
"en": "Total storage"
},
"name": "totalStorage",
"displayUnit": {
"en": "kilobytes"
},
"schema": "long"
},
{
"@type": "Property",
"comment": "Total available memory on the device in kilobytes. Ex. 256000 kilobytes.",
"displayName": {
"en": "Total memory"
},
"name": "totalMemory",
"displayUnit": {
"en": "kilobytes"
},
"schema": "long"
}
]
}
},
{
"@type": "InterfaceInstance",
"name": "ModuleInformation",
"displayName": {
"en": "Module Information"
},
"schema": {
"@id": "urn:AzureIoT:NVIDIANanoIotCModule:IModuleInformation:1",
"@type": "Interface",
"displayName": {
"en": "Module Information"
},
"contents": [
{
"@type": "Telemetry",
"displayName": {
"en": "System Heartbeat"
},
"name": "tlSystemHeartbeat",
"schema": "integer"
},
{
"@type": "Telemetry",
"displayName": {
"en": "Primary Detection Count"
},
"name": "tlPrimaryDetectionCount",
"schema": "integer"
},
{
"@type": "Telemetry",
"displayName": {
"en": "Secondary Detection Count"
},
"name": "tlSecondaryDetectionCount",
"schema": "integer"
},
{
"@type": "Telemetry",
"displayName": {
"en": "Inference"
},
"name": "tlInference",
"schema": {
"@type": "Object",
"displayName": {
"en": "Object"
},
"fields": [
{
"@type": "SchemaField",
"displayName": {
"en": "cameraId"
},
"name": "cameraId",
"schema": "string"
},
{
"@type": "SchemaField",
"displayName": {
"en": "trackingId"
},
"name": "trackingId",
"schema": "integer"
},
{
"@type": "SchemaField",
"displayName": {
"en": "className"
},
"name": "className",
"schema": "integer"
},
{
"@type": "SchemaField",
"displayName": {
"en": "roi"
},
"name": "roi",
"schema": {
"@type": "Object",
"displayName": {
"en": "Object"
},
"fields": [
{
"@type": "SchemaField",
"displayName": {
"en": "left"
},
"name": "left",
"schema": "double"
},
{
"@type": "SchemaField",
"displayName": {
"en": "top"
},
"name": "top",
"schema": "double"
},
{
"@type": "SchemaField",
"displayName": {
"en": "right"
},
"name": "right",
"schema": "double"
},
{
"@type": "SchemaField",
"displayName": {
"en": "bottom"
},
"name": "bottom",
"schema": "double"
}
]
}
}
]
}
},
{
"@type": [
"Telemetry",
"SemanticType/State"
],
"displayName": {
"en": "Module State"
},
"name": "stModuleState",
"schema": {
"@type": "Enum",
"valueSchema": "string",
"enumValues": [
{
"@type": "EnumValue",
"displayName": {
"en": "inactive"
},
"enumValue": "inactive",
"name": "inactive"
},
{
"@type": "EnumValue",
"displayName": {
"en": "active"
},
"enumValue": "active",
"name": "active"
}
]
}
},
{
"@type": [
"Telemetry",
"SemanticType/State"
],
"displayName": {
"en": "Pipeline State"
},
"name": "stPipelineState",
"schema": {
"@type": "Enum",
"valueSchema": "string",
"enumValues": [
{
"@type": "EnumValue",
"displayName": {
"en": "inactive"
},
"enumValue": "inactive",
"name": "inactive"
},
{
"@type": "EnumValue",
"displayName": {
"en": "active"
},
"enumValue": "active",
"name": "active"
}
]
}
},
{
"@type": [
"Telemetry",
"SemanticType/Event"
],
"displayName": {
"en": "Processing Started"
},
"name": "evVideoStreamProcessingStarted",
"schema": "string"
},
{
"@type": [
"Telemetry",
"SemanticType/Event"
],
"displayName": {
"en": "Processing Stopped"
},
"name": "evVideoStreamProcessingStopped",
"schema": "string"
},
{
"@type": [
"Telemetry",
"SemanticType/Event"
],
"displayName": {
"en": "Device Restart"
},
"name": "evDeviceRestart",
"schema": "string"
},
{
"@type": "Telemetry",
"displayName": {
"en": "Free Memory"
},
"name": "tlFreeMemory",
"schema": "long"
},
{
"@type": "Telemetry",
"displayName": {
"en": "Inference Rate"
},
"name": "tlInferenceRate",
"schema": "double"
},
{
"@type": "Telemetry",
"displayName": {
"en": "Primary Detection Class"
},
"name": "tlPrimaryDetectionClass",
"schema": "string"
},
{
"@type": "Telemetry",
"displayName": {
"en": "Secondary Detection Class"
},
"name": "tlSecondaryDetectionClass",
"schema": "string"
},
{
"@type": "Command",
"commandType": "asynchronous",
"displayName": {
"en": "Restart DeepStream"
},
"name": "cmRestartDeepStream"
},
{
"@type": "Command",
"commandType": "asynchronous",
"request": {
"@type": "SchemaField",
"displayName": {
"en": "Timeout"
},
"name": "cmpRestartDeviceTimeout",
"schema": "integer"
},
"displayName": {
"en": "Restart Device"
},
"name": "cmRestartDevice"
}
]
}
}
],
"displayName": {
"en": "NVIDIAJetsonNano Module"
},
"contents": [],
"@context": [
"http://azureiot.com/v1/contexts/IoTModel.json"
]
}
]
}
],
"@context": [
"http://azureiot.com/v1/contexts/IoTModel.json"
]
}
Next, select "Replace Manifest"
Save the following as "deployment.manifest.json" and upload it when prompted.
This file represents the deployment that we would like to apply to our NVIDIA Jetson Nano device. The IoT Edge runtime will consume this configuration and start the edgeAgent, edgeHub, NVIDIADeepStreamSDK, and NVIDIANanoIotCModule modules as containerized processes on the Jetson Nano device.
When we successfully configure our device for the first time, it will register with IoT Central and apply this configuration.
{
"modulesContent": {
"$edgeAgent": {
"properties.desired": {
"schemaVersion": "1.0",
"runtime": {
"type": "docker",
"settings": {
"minDockerVersion": "v1.25",
"loggingOptions": "",
"registryCredentials": {}
}
},
"systemModules": {
"edgeAgent": {
"type": "docker",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-agent:1.0",
"createOptions": "{}"
}
},
"edgeHub": {
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-hub:1.0",
"createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}],\"443/tcp\":[{\"HostPort\":\"443\"}]}}}"
}
}
},
"modules": {
"NVIDIADeepStreamSDK": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "marketplace.azurecr.io/nvidia/deepstream-iot2-l4t:latest",
"createOptions": "{\"Entrypoint\":[\"/usr/bin/deepstream-test5-app\",\"-c\",\"DSConfig.txt\"],\"HostConfig\":{\"runtime\":\"nvidia\",\"NetworkMode\":\"host\",\"Binds\":[\"/data/misc/storage:/data/misc/storage\",\"/tmp/argus_socket:/tmp/argus_socket\",\"/tmp/.X11-unix/:/tmp/.X11-unix/\"],\"IpcMode\":\"host\",\"Devices\":[{\"PathOnHost\":\"/dev/video0\",\"PathInContainer\":\"/dev/video0\",\"CgroupPermissions\":\"rwm\"}]},\"NetworkingConfig\":{\"EndpointsConfig\":{\"host\":{}}},\"WorkingDir\":\"/data/misc/storage\"}"
},
"env": {
"DISPLAY": {
"value": ":0"
}
}
},
"NVIDIANanoIotCModule": {
"settings": {
"image": "toolboc/nvidia-nano-iotc-module",
"createOptions": "{\"HostConfig\":{\"PortBindings\":{\"9014/tcp\":[{\"HostPort\":\"9014\"}]},\"Binds\":[\"/data/misc:/data/misc\",\"/run/systemd:/run/systemd\",\"/var/run/docker.sock:/var/run/docker.sock\"]}}"
},
"type": "docker",
"env": {
"DEBUG_TELEMETRY": {
"value": "1"
}
},
"status": "running",
"restartPolicy": "always",
"version": "1.0"
}
}
}
},
"$edgeHub": {
"properties.desired": {
"schemaVersion": "1.0",
"routes": {
"DeepstreamToFilter": "FROM /messages/modules/NVIDIADeepStreamSDK/outputs/* INTO BrokeredEndpoint(\"/modules/NVIDIANanoIotCModule/inputs/dsmessages\")",
"filterToIoTHub": "FROM /messages/* INTO $upstream"
},
"storeAndForwardConfiguration": {
"timeToLiveSecs": 7200
}
}
}
}
}
This manifest contains the specification for the containerized modules which will be deployed to the Nvidia Jetson device by the IoT Edge Runtime. This deployment contains two custom modules, the "NVIDIADeepStreamSDK" module from the Azure marketplace and an additional "NVIDIANanoIotCModule".
If you look carefully at this manifest, you will notice that it is configured to route all messages from the "NVIDIADeepStreamSDK" module to a filter module named "NVIDIANanoIotCModule" as specified in the routes section:
"routes": {
"DeepstreamToFilter": "FROM /messages/modules/NVIDIADeepStreamSDK/outputs/* INTO BrokeredEndpoint(\"/modules/NVIDIANanoIotCModule/inputs/dsmessages\")",
"filterToIoTHub": "FROM /messages/* INTO $upstream"
}
The "NVIDIANanoIotCModule" transforms the output from the "NVIDIADeepStreamSDK" module so that it conforms to IoT Central's specification for use in custom dashboards and the like. This matches the entities defined earlier in nvidia-jetson-nano-dcm.json.
Finally, select "Publish" to publish your template.
Before we install IoT Edge, we will install some additional utilities (curl and the nano text editor) onto the NVIDIA Jetson Nano device to make the next steps a bit easier:
sudo apt-get install -y curl nano
We will make use of the ARM64 release of IoT Edge to ensure that we get the best performance out of our IoT Edge solution.
These builds are provided starting in the 1.0.8 release tag. To install the latest release of IoT Edge, run the following from a terminal on your Nvidia Jetson device:
# You can copy the entire text from this code block and
# paste in terminal. The comment lines will be ignored.
# Install the IoT Edge repository configuration
curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
# Copy the generated list
sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
# Install the Microsoft GPG public key
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/
# Perform apt update
sudo apt-get update
# Install IoT Edge and the Security Daemon
sudo apt-get install iotedge
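Once the installation completes, you can quickly confirm that the runtime is present by printing its version:
iotedge version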
Provisioning the IoT Edge Runtime on the Jetson Nano Device with DPS
To automatically provision your device with DPS, you need to provide it with the appropriate device connection information obtained from IoT Central.
To locate this information, in your IoT Central app, select "Devices" then choose the name of your newly created device template.
Next, select "New" to create a new IoT Edge device using your created template:
Once created, select the newly created device and choose the "Connect" option
You should see a display of information which we will use in the next steps
Once you have obtained the device connection info, open the IoT Edge configuration file on your Jetson Nano with:
sudo nano /etc/iotedge/config.yaml
Find the provisioning section of the file and comment out the manual provisioning mode. Uncomment the "DPS symmetric key provisioning configuration" section as shown.
# Manual provisioning configuration
#provisioning:
# source: "manual"
# device_connection_string: "<ADD DEVICE CONNECTION STRING HERE>"
# DPS TPM provisioning configuration
# provisioning:
# source: "dps"
# global_endpoint: "https://global.azure-devices-provisioning.net"
# scope_id: "{scope_id}"
# attestation:
# method: "tpm"
# registration_id: "{registration_id}"
# DPS symmetric key provisioning configuration
provisioning:
source: "dps"
global_endpoint: "https://global.azure-devices-provisioning.net"
scope_id: "{scope_id}"
attestation:
method: "symmetric_key"
registration_id: "{registration_id}"
symmetric_key: "{symmetric_key}"
Next, update the value of {scope_id} with the "ID Scope" value in IoT Central, then update the value of {registration_id} with the "Device ID" value in IoT Central, and finally update the value of {symmetric_key} with either the "Primary Key" or "Secondary Key" value from IoT Central.
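For reference, a filled-in provisioning section might look something like the sketch below. The values shown are purely illustrative placeholders; substitute the ID Scope, Device ID, and key shown on your device's Connect page in IoT Central:
provisioning:
  source: "dps"
  global_endpoint: "https://global.azure-devices-provisioning.net"
  scope_id: "0ne0XXXXXXX"
  attestation:
    method: "symmetric_key"
    registration_id: "nvidia-jetson-nano-01"
    symmetric_key: "<primary-or-secondary-key-from-iot-central>"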
At this point we are almost ready to start the IoT Edge runtime, which will provision our device with the NVIDIADeepStreamSDK and NVIDIANanoIotCModule modules.
Before we do that, we need to enable the Azure IoT Edge Message broker within our DeepStream configuration. This will allow DeepStream to forward object detection data to be routed into the NVIDIANanoIotCModule. Open the configuration file for editing using:
nano /data/misc/storage/DSConfig.txt
Then find the sink1 entry and set enable=1
[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=/data/misc/storage/ONNXSetup/configs/msgconv_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM - Custom schema payload
msg-conv-payload-type=1
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_azure_edge_proto.so
topic=mytopic
#Optional:
#msg-broker-config=../../../../libs/azure_protocol_adaptor/module_client/cfg_azure.tx
After you have updated these values, restart the iotedge service with:
sudo service iotedge restart
Initially, your device will show a status of "Registered" in the IoT Central portal
Double-check your settings under "Administration" => "Device Connection". If "Auto Approve" is disabled, you will need to manually approve the device.
To manually approve the device, select the "Approve" option after selecting the device on the "Devices" page. If "Auto Approve" is enabled, you may skip this step.
Once the iotedge service has restarted successfully and connected to IoT Central the status will change to "Provisioned"
You can check the status of the IoT Edge Daemon on the Jetson Nano using:
systemctl status iotedge
Examine daemon logs using:
journalctl -u iotedge --no-pager --no-full
And, list running modules with:
sudo iotedge list
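You can also tail the logs of an individual module, which is handy for confirming that DeepStream started cleanly inside its container:
sudo iotedge logs -f NVIDIADeepStreamSDK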
Creating a Custom Dashboard in IoT Central for displaying Telemetry
Now that our newly created device is provisioned, we can use it to create visualizations for data produced by our NVIDIA Jetson device.
Navigate to the recently created device template:
Select "Views" => "Visualizing the Device"
Name the View "Device" and add the following properties:
Select "Add Tile" then "Save":
Create another View by selecting "Views" => "Visualizing the Device"
Name the View "Dashboard" and add the following:
Select "Add Tile", then choose the gear icon and rename the chart to "System Health", then choose "Update" and "Save"
Next, add a new tile by selecting "Module State" then "Add Tile", then select the measure icon and choose "Last Known Value"
Next, add a new tile for "Primary Detection Count" by selecting it and choosing "Add Tile" then repeat for "Secondary Detection Count".
Finally, add a new tile for the following fields and name it "events"
Now we just need to verify that our dashboard is ready for publication. Before we do that, we will test it by configuring it to use data from our current device. Select "Configure Preview Device" then choose your device from the populated list:
After a short while you should see the view populated with data from the device
When you are satisfied with your dashboard, publish it by selecting "Publish"
Select "Devices" then choose your newly created device underneath the heading of your recently published device template.
You should be greeted by the "Device" dashboard which now has live information pulled from the device:
Select "Dashboard" to view the additional live data:
By default, the NVIDIANanoIotCModule will forward all detections matching person, as it is the default value of the Primary Detection Class. We can change this value as well as enable a Secondary Detection Class within IoT Central.
While in the device view panel, select "Manage" to bring up the screen for updating the Module Twin for the Primary and Secondary Detection Class of NVIDIANanoIotCModule.
Here is an example of updating the Primary Detection Class to "person" and the Secondary Detection Class to "vehicle":
In the picture above, we are monitoring the output of the NVIDIANanoIotCModule with:
iotedge logs -f NVIDIANanoIotCModule
Using an optional secondary tag in your object detection model, you can report detections of an additional object using a single model.onnx exported from customvision.ai. Simply repeat the previous steps in "Training a custom object detection model using Custom Vision AI" to add an additional tag (object) to the model you created earlier. After you replace labels.txt and model.onnx again, make sure that you have enabled your new tag's name in IoT Central. This will configure the NVIDIANanoIotCModule to begin reporting detections of the new tag.
Additional features within IoT Central
Configuring e-mail alerts with IoT Central
To create a rule, for example to alert when an Occupancy Threshold has been reached, select "Rules" and create an entry with the following settings:
Exporting data to Azure for use in additional Services
Data can be exported from IoT Central to an available Storage Blob, Event Hub, or Service Bus. For more information, consult the relevant documentation.
Conclusion
In this article, we covered a variety of techniques using NVIDIA DeepStream to create an object detection pipeline using input from a CSI-based infrared camera. We then developed a custom object detection model with customvision.ai to be consumed by our DeepStream application.
Our solution was developed as a containerized application from the start, allowing us to easily package it for deployment from Azure IoT Central. From there, we were able to instrument our NVIDIA Jetson Nano device with the Azure IoT Edge runtime to enable command and control features from the cloud and expose our device telemetry into additional Azure services.
Using these skills, we have demonstrated a complete end-to-end IoT solution to allow for custom object detection with robust hardware to produce consistent imagery at all times of day.
There are a variety of interesting ways to apply these techniques to custom scenarios. For example, you could look into using image capture hardware based on thermal, stereoscopic, depth, or other signatures to produce high-accuracy object detections in unique environments. There are also numerous features that can be explored in DeepStream and IoT Central to create unique applications. Of course, the sky is the limit with regard to the types of models you can train and deploy. Go forth and build something awesome!
If you are able to add support for additional image capture devices, leverage a new feature in DeepStream or IoT Central, or build a custom object detection model for something unique, please share your results in the comments!
To learn more about how to build production GPU-accelerated IoT workloads at the edge, check out the recent content I worked on with my colleague Emmanuel Bertrand for NVIDIA GTC 2020:
GTC 2020: Productionizing GPU Accelerated IoT Workloads at the Edge - A full hour presentation covering IoT DevOps strategies, IoT Central, and development of custom images for flashing NVIDIA Jetson Devices
GTC 2020: Visual Anomaly Detection using NVIDIA Deepstream IoT - An hour long workshop that covers DeepStream, IoT Edge, IoT Central, and development of custom object detection models with customvision.ai
Until next time,
Happy Hacking!