Do you want to monitor rooms or spaces in a building remotely and ensure a room is in use when it is supposed to be? Well then you've come to the right place. At Helium, we get asked frequently about the best way to do this. Our recommendation, detailed below, is to build a quick POC version of a sensor and web app that will do the job. Then call us when you want to deploy these at scale.
While this is by no means a complete space monitoring application, it's a suitable, lightweight version of something you would pay 50-100x more for from a vendor pitching it as a packaged solution.
## What We'll Cover

Deploying this sensor system on your own should be fairly simple and take less than 45 minutes. Here's what we'll cover:
- Constructing a prototype people counter with the Grid-EYE sensor, the Pi, and the Helium Atom.
- Registering your Helium Element Gateway and Helium Atom Prototyping Module in the Helium Dashboard; and building your own low power, wide area wireless network.
- Programming the Grid-EYE in Python using Helium, JSON, OpenCV, and Matplotlib libraries. Specifically, we'll be capturing raw readings from the Grid-EYE and doing some edge processing to transform them into simple JSON.
- Sending our JSON data wirelessly over the Helium Network and visualizing it using AWS IoT and AWS QuickSight.
This will be fun. Let's ride.
## Required Hardware and Software

To build your own lightweight people counting application, you'll need the following hardware and software:
- A Helium Development Kit - We're using the Raspberry Pi variant here but any will do. You can buy one (and all other Helium hardware) here.
- A Raspberry Pi - Below we're using the Pi 3 and we recommend it for this task.
- The Grid-EYE IR Thermal Camera Sensor - Specifically the AMG8833. This sensor is manufactured by Panasonic, but below we're using the breakout board version sold by Adafruit.
- A Helium Dashboard Account - Get a free one here if you don't already have access.
- An AWS IoT Account - Register here if you're part of the uninitiated.
- A space to monitor - This one should be easy. We recommend your office or maybe the line at your favorite taco truck. Or maybe where your cat lives.
To start, we need to create our sensor. This is what you'll be building:
You need to connect the Grid-EYE to the Raspberry Pi via I2C. The voltage line will be either 5V or 3.3V depending on the Grid-EYE model selected. The I2C pins on the Raspberry Pi header are physical pin 03 (SDA) and pin 05 (SCL). For this project, the interrupt pin can be left disconnected.
If you are using the DigiKey Grid-EYE, its supply voltage should be connected to the Raspberry Pi's 3.3V pin. If you are using the Adafruit Grid-EYE, the supply voltage should be connected to the 5V pin.
Here, I am using the Adafruit Grid-EYE:
For reference, here is the overall schematic of the completed board.
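Before moving on, it's worth confirming the Pi can actually see the sensor on the I2C bus. Here's a minimal sketch using the smbus library, assuming the Adafruit breakout's default address of 0x69 (0x68 if its address jumper is closed):

import smbus

bus = smbus.SMBus(1)  # bus 1 is the exposed I2C bus on the Pi 2/3 header
GRID_EYE_ADDR = 0x69  # Adafruit default; 0x68 with the address jumper closed

try:
    bus.read_byte(GRID_EYE_ADDR)
    print("Grid-EYE found at 0x%02x" % GRID_EYE_ADDR)
except IOError:
    print("No device at 0x%02x - check your SDA/SCL wiring" % GRID_EYE_ADDR)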
Once you've completed this, take a picture, post it on Twitter with a creative hashtag, and set it aside. Now it's time to fire up your very own Distributed Low Power Wide Area Network (DLPWAN) using Helium.
## Deploy your Helium Element Gateway

The Helium Element Gateway creates the Distributed Low Power Wide Area Network (DLPWAN) responsible for bi-directional sensor data routing between IoT devices and the cloud. In this deployment your Element will be routing data for your people counter, but it can be used for all future sensors built on Helium - even ones that aren't yours.
Deploying the Helium Element is fast and easy. Here's a quick video on how to do it:
To start, simply plug the power supply and the provided Ethernet cable into the Element and a live Ethernet port that accepts outbound traffic. The Element is connected when its front-facing LED turns green, indicating a successful Ethernet connection. If your Element is a Cellular version, the LED will turn blue when it has successfully connected over cellular.
## Register your Element and Atom with the Helium Dashboard

Now you need to register your hardware in the Helium Dashboard. This entire process should take you less than 120 seconds. The Helium Dashboard will be the interface through which you manage and view your connected devices and manage your Cloud Channels (like the AWS IoT Channel we'll deploy later).
Each Helium device is registered with Helium before you take delivery of it. All that's required to make it operational is to assign it to yourself in Dashboard.
You can find full documentation on Helium Dashboard here.
- First create a Helium Dashboard account if you haven't already done so.
- To register your Atom, start by selecting New Atom. In the UI, add a name (e.g. Grid-EYE), then input the last four digits of its MAC Address and its four-digit HVV Code. (If needed, you can see full docs on this process here.)
- The Element registration is done in exactly the same way. Select New Element, then supply a name, the last four digits of its MAC Address, and its four-digit HVV Code. Also, make sure to input a location for your Element so Dashboard can display it on a map. (Again, if needed, see full docs on this here.)
Alright. With your Helium hardware activated and deployed, it's now time to wire up the Helium AWS IoT Channel and get sensor data flowing from the edge to the cloud.
## Deploying an AWS IoT Channel and Verifying Data Flow

Now we need to deploy an AWS IoT Channel. Complete instructions for how to do this can be found here in the Helium Developer Documentation. The short summary is as follows:
- In your AWS IoT org, find your Access Key ID, Secret Access Key, and Region.
- When logged into the Channel interface in Helium Dashboard, create a new AWS IoT Channel and input the credentials listed above.
- Deploy the Channel code to your device. Once your Channel has been created, the Helium Dashboard UI will auto-generate the code you'll need to send data from your Helium Atom to AWS IoT. For a Pi, it will look something like this:
from helium_client import Helium

# Open the serial connection to the Atom and join the Helium Network
helium = Helium("/dev/serial0")
helium.connect()

# Attach to the Dashboard Channel by name and send a test payload
channel = helium.create_channel("aws_channel_name")
channel.send("hello from Python")
In addition to the AWS IoT Channel Docs linked above, here's a video that shows how to wire this up end to end.
## Programming the Grid-EYE

Now that we have data flowing between your Helium sensor and AWS IoT, we can load our Python program to capture data from the Grid-EYE. (And after this we'll put it all together by visualizing the data in AWS QuickSight.)
Here are the next steps:
- Update the Raspberry Pi and install the associated libraries.
- Test the functionality of the Grid-EYE by running the test examples provided by each board distributor to ensure the connections are correct.
- Test the functionality of the Helium Atom and Element by running the sample setup code found in the Helium library.
- Once you have ensured the proper function of both hardware devices, grab the Grid-EYE code below and run it on your Pi.
Install the Grid-EYE libraries on your Pi with the following commands:
cd ~
git clone https://github.com/adafruit/Adafruit_AMG88xx_python.git
cd Adafruit_AMG88xx_python
python setup.py install
# check setup success
python
import Adafruit_AMG88xx
If the import statement does not return an error, the AMG88xx library was successfully installed.
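With the library installed, a minimal read test like this (assuming the Adafruit breakout) will grab and print one raw 64-value frame:

import time
from Adafruit_AMG88xx import Adafruit_AMG88xx

sensor = Adafruit_AMG88xx()
time.sleep(0.1)  # give the sensor a moment to settle after power-up

# readPixels() returns a flat list of 64 temperature readings
print(sensor.readPixels())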
Install OpenCV and the following libraries using these commands:
sudo apt-get install python-opencv
sudo pip install imutils
sudo apt-get install libtiff5-dev libjasper-dev libpng12-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libgtk2.0-dev
sudo apt-get install libatlas-base-dev gfortran
sudo pip install numpy
sudo apt-get install python-matplotlib
Check your OpenCV install by running the following:
python
import cv2
cv2.__version__
## From Pixel Arrays to JSON

With our Pi ready to roll, we can now start to capture data from the Grid-EYE and do some edge processing to turn it into JSON before sending it across the Helium Network.
The raw data from the Grid-EYE in the Python implementation is sampled as a flat 64-element array. With OpenCV tools, this array is reshaped into an 8x8 array and interpolated to form a 32x32 image.
The image is then converted to grayscale, inverted, and scanned for circular blobs with blob detection tools. OpenCV counts the number of blobs found in the resulting image and that variable is sent wirelessly over the Helium Network and onto AWS IoT.
At the sensor level, the raw data being transmitted from the Grid-EYE to our Raspberry Pi is in an array format and generally follows this pattern:
pixels = [num0, num1, num2, ..., num63]
This data is reformatted to fit an 8x8 array and will look like this:
pixels = [[num0, num1,..., num7],
[num8, num9,..., num15],
[...],
[num56, num57,..., num63]]
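In numpy terms, that reshape is a one-liner (here pixels is the flat list read from the sensor):

import numpy as np

grid = np.array(pixels).reshape(8, 8)  # 64 flat readings -> 8 rows of 8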
Each one of these 64 data points is a voltage returned by the Grid-EYE which, in turn, corresponds to a temperature reading. From there, these can be interpolated to a 32x32 array and saved as an image. They come out looking something like this:
This is great, but there's no need to transmit this image over the air. Instead, with some simple edge processing in Python running on our Pi (code below), we can approximate the number of people in a space and convert that to JSON.
So, by setting a threshold and running some code on our sensor that performs blob detection, we can identify the number of blobs and create a simple JSON representation to be sent over the air to Helium and on to our visualization application (a sketch of this pipeline follows the example below).
In other words, this:
Pixel Output:
28.50 28.50 28.00 27.75 27.75 27.50 28.00 29.00
29.50 28.75 28.00 27.25 27.00 27.25 29.00 29.25
29.00 28.25 29.50 28.25 27.50 27.75 30.50 29.00
28.50 29.00 31.25 31.00 29.75 29.25 31.25 29.50
29.00 27.50 28.25 30.25 30.25 29.75 30.50 29.75
27.25 27.00 27.00 27.75 29.75 30.00 29.25 27.75
27.25 27.00 26.75 26.25 27.50 28.75 27.75 27.25
26.50 26.50 26.00 26.00 26.00 26.25 26.50 26.25
Becomes this:
{
"People" : "2"
}
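To make that transformation concrete, here's a minimal sketch of the whole edge pipeline, assuming the Adafruit_AMG88xx library from the install step and the Channel created earlier. The cubic resize, the inversion step, and the default blob detector settings are stand-ins you'll want to tune:

import json
import cv2
import numpy as np
from Adafruit_AMG88xx import Adafruit_AMG88xx
from helium_client import Helium

THRESHOLD_C = 30.0  # temperature threshold discussed below; tune per space

sensor = Adafruit_AMG88xx()
helium = Helium("/dev/serial0")
helium.connect()
channel = helium.create_channel("aws_channel_name")

# Read one frame and reshape the flat 64 readings into an 8x8 grid
grid = np.array(sensor.readPixels(), dtype=np.float32).reshape(8, 8)

# Interpolate the 8x8 frame up to a 32x32 grayscale image
image = cv2.resize(grid, (32, 32), interpolation=cv2.INTER_CUBIC)

# Threshold and invert: warm pixels become dark blobs on a white field
image = np.where(image >= THRESHOLD_C, 0, 255).astype(np.uint8)

# Count circular dark blobs (on OpenCV 2.4, use cv2.SimpleBlobDetector())
detector = cv2.SimpleBlobDetector_create()
people = len(detector.detect(image))

# Ship the count as JSON over the Helium Network to AWS IoT
channel.send(json.dumps({"People": str(people)}))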
The relatively large image data used in locating people in view of the Grid-EYE is pared down to the smallest valuable form for sending over the wire. The blob detection function will also return the coordinates in the image of each blob if the sensor was, for instance, looking at seats for occupancy. But for this demonstration we are only concerned with how many people are in a room. Here is how it looks from my desk:
By adjusting the sensitivity of the Grid-EYE temperature threshold, you can place it up to 15 feet from an area and scan for heat signatures. In this example I have set the threshold to 30 degrees Celsius, which is enough to find an uncovered human body. When you first deploy this sensor you'll most likely notice that it needs some tuning. Specifically, you'll need to play around with the temperature threshold to fit the area your Grid-EYE is viewing.
## Visualizing People Data in AWS QuickSight

Now that we're capturing the number of people in a space, we can pipe it to a web service via the Helium AWS IoT Channel and visualize space utilization in real time. Fair warning: this process is slightly arduous, but the results are worth the effort. Read this blog post for details. (Also, if anyone knows a better way to visualize time series data without leaving AWS, please let me know.)
Here's the payoff. Below is a screenshot from the QuickSight UI visualizing people passing by my desk after being sent through the Helium Network and into AWS IoT:
There are inherent issues with a small pixel array and detecting multiple objects. In both code samples, values below the threshold for human body temperature are removed to improve detection. Additionally, varying body temperature can create a false positive (reading 2 instead of 1) for a body count when two particularly warm portions of one nearby body register as local heat maximums. But with some testing and tweaking, you can start to feel pretty good about the accuracy of the data being returned.
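One hedged way to fight that double-count failure mode is to tighten the blob detector from the sketch above; these parameter names are real OpenCV knobs, but the values here are illustrative:

import cv2

params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 10               # ignore tiny hot spots
params.minDistBetweenBlobs = 8    # merge detections closer than 8 px in the 32x32 frame
detector = cv2.SimpleBlobDetector_create(params)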
Here is a blog post about managing this device's state in AWS to add even more functionality to your object detector. You can change the sampling time and activate/deactivate the sensor from the cloud:
## Next Steps and Help

Good job! You now have a functioning Grid-EYE with basic embedded object detection. If you want to learn more about Helium, how you can prototype your own IoT project, or learn more about this project, join us here: