I live in a neighborhood with a lot of kids around, and now that schools are out for summer break (here in the U.S.), there are always kids playing out on the street. One of the apparent issues is that people driving through the neighborhood don't slow down enough, even though the speed limit is posted. Given this, I thought it'd be interesting to actually monitor the speed of passing cars, log whether they're speeding, and record the time of day to see if there's any correlation with people commuting to/from work (so possibly residents of this neighborhood or a nearby one), or if it's mainly people heading to the nearby park for children's soccer and baseball games during the week.
Solution Outline
To monitor the speed of drivers within the neighborhood, I need to set up devices in various locations to get a good idea of how fast people are going in different parts. Generally my street (126th on the map) is a little slower, as it is tucked into a back corner with no specific outlet, whereas Hazel has a good stretch where people tend to speed as they head towards the park.
Since the neighborhood isn't too big, but it's larger than a standard Wi-Fi signal can cover, I figured this would be a perfect fit for Helium's Atom/Element combination for long-range wireless. It also gives me a good reason to try a Raspberry Pi library I had found online that seemed interesting :p Given this, I could take a couple of Raspberry Pis, add cameras, solar power, and Helium Atom modules, and place them along the road (though out of the way, possibly against light poles or in neighbors' yards with their permission) to monitor traffic. To start, this project will only have one camera in front of my place, so it won't be as useful as it could be with more coverage, but that's how tech projects and proofs of concept work: you start small, and then you iterate.
Setting Up Google Cloud IoT Core and Pub/Sub
The majority of the backend for this project is driven by the Google Cloud Platform. To access this, go to the developer console and create a new project.
Once your project is created and you have gone into it, enter pub/sub into the search bar at the top of the screen to find the Pub/Sub API and enable it.
Once the API is enabled, you will be prompted to create a new Pub/Sub topic.
After setting up Pub/Sub, you will want to search for IoT Core to enable that API.
Once the API is enabled, you will be prompted to create a new device registry.
The next page will allow you to enter a name for your device registry, select a hosting region, and associate a Pub/Sub topic with your new registry.
After your registry is set up, you will need to create a service account so that Helium can communicate directly with IoT Core. You can do this by opening the side navigation drawer in the Google Cloud Platform console, selecting IAM & admin, and then selecting Service accounts.
On the next screen you can create a new service account with the role of Cloud IoT Editor. You will also need to generate a JSON private key that will be used by Helium to connect to Google Cloud IoT Core.
After clicking on CREATE, the JSON file will be generated and saved to your machine. At this point we have what we need to set up a Helium device and connect it to Google IoT Core, though we will return to our Google backend later when we add Firebase Functions support.
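If you prefer the command line over clicking through the console, the same setup can be sketched with the gcloud CLI. Treat this as a rough equivalent of the steps above: the topic, registry, and service account names below are just examples, and flag names have shifted between gcloud releases, so check gcloud's help output if something doesn't line up.

# Create the Pub/Sub topic that IoT Core will publish device telemetry to
gcloud pubsub topics create telemetry-topic

# Create a device registry tied to that topic
gcloud iot registries create neighborhood-speed \
    --region=us-central1 \
    --event-notification-config=topic=telemetry-topic

# Create the service account for Helium, grant it Cloud IoT Editor,
# and download the JSON key the Helium dashboard will ask for
gcloud iam service-accounts create helium-iot
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
    --member="serviceAccount:helium-iot@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/cloudiot.editor"
gcloud iam service-accounts keys create helium-key.json \
    --iam-account="helium-iot@YOUR_PROJECT_ID.iam.gserviceaccount.com"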
Setting up the Helium Network
Helium is a product that allows your IoT devices to connect to networking equipment over fairly long ranges, then handles the routing of uploaded data to various cloud services (in our case, Google IoT Core). There are two main parts: the Element, which is essentially a router that your devices can connect to in order to get to the Internet, and the Atom, which is a specialized transmitter/receiver that your IoT devices can use to communicate with an Element. For this project I used an Ethernet-connected Element, though cellular Elements are available.
Helium Element
To set up your Helium products, you will need to make an account on their site and use their dashboard. At the top of the dashboard you will see two buttons: Add Element and Add Atom.
You will select both of these and register your Helium devices using the codes available on their stickers.
After your devices are activated, you will need to create a new channel to act as a middleman between your IoT device and Google IoT Core. Select channels from the side navigation menu, and then select Google Cloud IoT Core from the channels options.
You will then be presented with a screen that asks for your registry ID, region, and JSON key. The JSON key is the contents of the JSON file you downloaded in the previous section, the registry ID is what you named your IoT Core registry, and the region is whatever you selected when creating your registry (us-central1 in this case).
After you have filled out the above information, a separate section will appear that will allow you to name your new channel before clicking on the blue Next button.
On the next screen you will be able to see some sample code for a "Hello World" sample that connects to your channel. After reviewing the other details on that screen, you can click on the Done button and be taken to your channel's details screen.
Hardware Setup
As mentioned above, the hardware for this project is relatively simple. First, I have a Helium Element box connected to my home network via Ethernet. This is the hub that the field devices will connect to, and it sends their data on to Google's IoT Core via MQTT.
As for the device itself, it is a Raspberry Pi 3B+ (I recently picked up the 3B+ and wanted to try it out; I'm sure a 3B or even a Zero W would also work), the Helium Atom module, a power source, and a camera.
Additionally, I put together a small 3D model for a camera stand, printed it with the fastest settings possible (I think it took five or ten minutes), and attached the Pi camera to it with a set of M3 bolts so that the camera could stay in place while I tested. At some point I'll come back and put together a real project case :)
One of the great things about the open source community is that people can create awesome libraries or hardware devices, and others can build on them (that's kind of the core of it, really :P). I recently came across a library (credit: Claude Pageau, pageauc@gmail.com, who did an awesome job) that uses the camera on a Raspberry Pi to figure out how quickly objects are moving across the view plane. After playing with the library a little bit, I decided to use it in a project to collect data. Rather than repeating the setup process here, I'll point readers to the library's readme file to learn how to install the library and calibrate it with the camera so it can determine speeds more accurately. I did modify the configuration file to measure in MPH rather than km/h, since the speed limits here are all in miles per hour.
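For reference, the change I made was in the library's configuration file (config.py). In the copy I used, the units switch looked roughly like the line below, but variable names can differ between releases of the library, so check your own config.py rather than taking this verbatim.

# config.py: report speeds in MPH instead of km/h
SPEED_MPH = True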
Updating the Program
Inside the speed camera library on your Raspberry Pi you can find a file named speed-cam.py. To put this project together, I made a copy of that file and added the lines of code needed to connect to the Helium network under the import statements towards the top of the file.
from helium_client import Helium

# Connect to the Helium Atom over the Pi's serial port and open the
# channel that was created in the Helium dashboard earlier
helium = Helium("/dev/serial0")
helium.connect()
channel = helium.create_channel("Neighborhood Speed")
The next step is where things get a bit tricky. To make the simplest modifications, you can go down to line 967 (if you have added the lines above) and find the line if motion_found:. This is where the magic happens. When motion is detected by the camera, times and changes are checked to see if a speed can be determined. Scroll down to line 1208 and find the section that logs speed information:
logging.info(" Add - cxy(%i,%i) %3.1f %s"
             " px=%i/%i C=%i %ix%i=%i sqpx %s",
             cx, cy, ave_speed, speed_units,
             abs(start_pos_x - end_pos_x),
             track_len_trig, total_contours,
             mw, mh, biggest_area,
             travel_direction)
Under this line we can check to ensure that ave_speed is available, and then send it to the Helium network as a small JSON payload:
if ave_speed is not None:
    # ave_speed is a number, so format it into the JSON string
    channel.send('{"speed": "%0.1f"}' % ave_speed)
At this point we should be able to start saving speed information. Easy, right? No reason to reinvent the wheel :) This sample program is fairly big, so I won't go into the details, but you can go through it and strip out any of the logging that writes to a local database or saves images. I chose to do this because if I knew a device was taking pictures in my neighborhood, I'd want to know that no photos were being saved, for everyone's peace of mind.
Finally, in Raspbian, update the system's configuration files to run the script on startup so that you can just plug in the device and start collecting data without needing a keyboard or monitor.
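One minimal way to do that, assuming the modified script lives at /home/pi/speed-camera/speed-cam-helium.py (adjust the path and filename to wherever you put your copy), is to launch it from /etc/rc.local just before the final exit 0 line. A cron @reboot entry or a systemd unit would work just as well.

# Added to /etc/rc.local, above the final "exit 0"
cd /home/pi/speed-camera && python ./speed-cam-helium.py &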
Configuring Firebase for Data Storage
Once data is flowing from your IoT device into Google Cloud IoT Core, it's time to do something with it. For this project I added Firebase to the Google Cloud project so that a Firebase Function can listen for Pub/Sub events, then store the device's data in a Firebase database. To start, go to the Firebase Console and create a new Firebase project, being sure to select your existing Google Cloud project from the dropdown menu.
After your project is created, you will want to select the Database option from the side navigation bar and create a new Realtime Database.
This will also give you an option to enable permissions in test mode, allowing read/write without authentication.
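For reference, test mode effectively sets the database rules to something like the open rules below (newer console versions also add an expiry date). That's fine for a prototype, but you'll want to lock it down before leaving it running long term.

{
  "rules": {
    ".read": true,
    ".write": true
  }
}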
After your database is set up, it's time to add a Firebase Function. I'm using macOS with Homebrew installed, so depending on your setup, you may need to do some things in this section a little differently. Create a new directory to hold your local environment, and navigate to it in your terminal.
After navigating to your new directory, ensure that you have Node.js installed on your computer. I did this with Homebrew (the node formula also installs npm):
brew install node
Once you know that Node is installed, you can run the following command to install the Firebase tools.
npm install -g firebase-tools
With the tools installed, you can run the command firebase login, which will prompt you through a browser window to authenticate against your Google account.
After authenticating, run the command firebase init functions to initialize your local environment for Firebase Functions.
After selecting your project, you will be prompted to select a programming language for your Cloud Functions. For this project I went with JavaScript. I also did not elect to use ESLint, and I did install the additional dependencies.
When that finishes, go into the functions directory that was created for you and open the index.js file, which is where you will add your new code. This code takes the data coming from IoT Core through Pub/Sub and puts it into a Firebase database.
const functions = require('firebase-functions');
const admin = require('firebase-admin');

// Fill these in with your own project's values
const projectId = "";
const cloudRegion = "us-central1";
const registryId = "";
const deviceId = "";
const version = 0;

const parentName = `projects/${projectId}/locations/${cloudRegion}`;
const registryName = `${parentName}/registries/${registryId}`;

admin.initializeApp();
var db = admin.database();

// Receive Pub/Sub messages from the device and add the data to Firebase
exports.receiveTelemetry = functions.pubsub
  .topic('telemetry-topic')
  .onPublish((data) => {
    const attributes = data.attributes;
    const message = data.json;
    const deviceId = data.attributes.deviceId;

    const speed = {
      speed: message.speed,
    };

    return Promise.all([
      updateCurrentDataFirebase(speed, deviceId)
    ]);
  });

function updateCurrentDataFirebase(data, deviceId) {
  // Key each reading by the time it was received, in milliseconds
  var d = new Date();
  var timeInMillis = d.getTime();

  var ref = db.ref(`/devices/${deviceId}/${timeInMillis}`);
  return ref.set({
    speed: data.speed,
  });
}
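Once index.js is saved, push the function to your project with the Firebase CLI:

firebase deploy --only functions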
Analytics
At this point you should have data in Firebase that you can access and do something with. You could just eyeball the data and make determinations, but what's the fun in that? The next step will be to take the collected data and display the number of instances where someone was speeding and at what times of day, and I'll keep thinking about other interesting ways to present that data.
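As a starting point, here's a rough sketch of the kind of analysis I have in mind. It pulls the readings back out of the Realtime Database over its REST API (which works without authentication while the database is in test mode) and counts how many passes were over the limit, bucketed by hour of day. The database URL and the 25 MPH limit are placeholders for my setup, not values from the steps above.

import collections
import datetime
import requests

# Placeholder values; adjust to your own project and posted speed limit
DATABASE_URL = "https://your-project-id.firebaseio.com"
SPEED_LIMIT_MPH = 25

def fetch_readings():
    # Returns {deviceId: {timestampMillis: {"speed": "23.4"}, ...}, ...}
    response = requests.get(DATABASE_URL + "/devices.json")
    response.raise_for_status()
    return response.json() or {}

def speeding_by_hour(readings):
    # Count readings over the limit, grouped by hour of day
    counts = collections.Counter()
    for device_id, samples in readings.items():
        for millis, sample in samples.items():
            if float(sample["speed"]) > SPEED_LIMIT_MPH:
                hour = datetime.datetime.fromtimestamp(int(millis) / 1000).hour
                counts[hour] += 1
    return counts

if __name__ == "__main__":
    for hour, count in sorted(speeding_by_hour(fetch_readings()).items()):
        print("%02d:00 - %d speeding vehicles" % (hour, count))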
Possible Improvements
First and foremost, this project needs a proper case :) I'd also like to add better solar power options. Apart from the hardware, I'm going to look into getting TensorFlow working in a Python script, as I'd love to check the pictures to confirm they're vehicles before logging their speed.