This project demonstrates how to build a camera-equipped IoT device and companion web application, used to record and monitor the number of vehicles traveling on a roadway.
The vehicle counter device is deployable to any road, where it captures images and utilizes machine learning to detect each vehicle that drives past it. Vehicle detections are logged in real-time to a cloud database using the device's cellular connectivity.
City planners and traffic engineers can access the web application to view the daily average vehicle counts for any monitored road and use this metric to predict when road maintenance will be needed.
Problem Identification
Roadways and highways are critical infrastructure for cities, facilitating travel, commerce, and connectivity. Each day, city populations depend on well-maintained roads for commuting to work, the delivery of goods, and emergency services, among many other uses.
As roads deteriorate, developing potholes, cracks, and uneven surfaces, their safety and reliability can be compromised. Vehicles driving on these road conditions are easily damaged and face an increased risk of accidents.
Extensive road repairs are often very inconvenient for the public, requiring lanes of traffic to be shut down for an extended period of time, resulting in backups and delays for drivers. Additionally, the public must fund these repairs through their tax dollars, which can be expensive.
Cities can become more resilient and better recover from disruptions by working to protect this key infrastructure through preventive road maintenance. Tasks such as pothole patching, crack sealing, and grading can help proactively avoid significant issues that affect roadways.
To optimize efficiency and smartly deploy their resources, cities must adopt a data-driven approach to maintenance scheduling that accurately predicts when an individual road needs to be serviced.
IoT Vehicle Counter Device
While various factors contribute to the deterioration of a roadway, one of the most impactful is the overall traffic load that the road receives.
There is a correlation between the number of vehicles a road accommodates and the extent of damage it incurs, as each vehicle contributes to wear and tear through its weight and tire contact with the road surface.
With knowledge of exactly how many vehicles drive on a road within a certain period of time, it is possible to calculate the road's daily traffic load and use this metric to determine when preventive maintenance will be needed.
This project aims to provide an end-to-end solution for counting vehicles and calculating daily averages for any roadway by using an IoT camera device and a companion web application.
When deployed, the device's camera detects each vehicle that drives past it and logs a record of it to a cloud database. Users access the web application to view device details including its location and vehicle count statistics.
Benefits of this solution include:
- Nonintrusive Installation: Vehicle counter devices are compact, allowing them to be mounted to signposts or other objects next to roadways, and therefore do not affect the road or its traffic in any way.
- Remote Monitoring: Each device utilizes cellular connectivity to send vehicle detection data to the cloud in real-time, which users can view and analyze remotely through the application.
- Widespread Coverage: Devices are easily assembled at an affordable cost, enabling cities to deploy a large network of devices across their road system for comprehensive maintenance planning.
Traffic engineers or other users responsible for road maintenance can use this solution to ensure that roadways are serviced when they are needed, thereby keeping city roadways prepared and resilient.
How It Works
The device consists of electronics housed in a 3D-printed enclosure, which can be deployed alongside any road. Once set up, the device detects each vehicle that drives past it and logs a total vehicle count on the cloud.
An OpenMV Cam H7 Plus provides the device with its machine vision capabilities. This board includes an onboard camera and processor that can run object detection models on images it captures in real-time.
Cellular connectivity is enabled on the device by an AVR IoT Cellular Mini board, which features a SIM card reader, cellular modem and antenna, and built-in support for interacting with Amazon Web Services (AWS).
A LiPo battery is plugged into the AVR-IoT to deliver the board its power. The battery features a button switch to toggle the power on and off.
The OpenMV Cam is connected to the power and ground pins of the AVR-IoT to receive its power, and its TX pin is connected to the AVR-IoT's RX pin for transmitting data.
Both boards and the battery are secured within the device enclosure, which is designed with openings for the camera, battery switch, and antenna to extend out through its walls.
Using its brackets and an elastic cord, the device is attached to a signpost, pole, or other object alongside a road that is designated for monitoring.
It is positioned with its camera facing perpendicular to the road traffic to view the side profile of each passing vehicle.
The device should be mounted approximately 10-15 feet from the side of the road and 3-6 feet above the ground, ensuring that only one passing vehicle is visible within the camera's field of view at a time.
Once the device is powered on, both the OpenMV and AVR-IoT boards begin to execute code that is loaded on their processors.
The Python script running on the OpenMV runs continuously, looping through the following processing steps:
- Capture an Image: A 240x240 pixel color image is captured of the road using the onboard camera and stored in memory.
- Perform Object Detection: The image is processed by an object detection model that detects the presence of any vehicles in the image.
- Transmit Data: If any vehicles are detected, the number of detected vehicles is transmitted from its serial port using UART communication.
The object detection model used to process each image has been trained on a dataset of images that includes hundreds of examples of vehicles of different makes, models, and colors.
Through training, the model has learned the distinctive shapes and features of vehicles, and can accurately detect where vehicles are within any input image.
The OpenMV Cam continuously captures and processes images at 10 frames per second, allowing it to detect vehicles as they drive past the device and through the camera's field of view.
Given the device's placement and spacing in traffic, each passing vehicle is typically detected individually in an image. However, if vehicles from opposite lanes enter the frame simultaneously, they are detected together in an image.
After detecting vehicles in a captured image, the code proceeds to transmit the number of vehicles detected as a UART message. The code then pauses, allowing time for the recognized vehicles to drive out of frame before resuming the processing loop.
The Arduino script running on the AVR-IoT runs continuously, looping through the following processing steps:
- Connect to AWS: If not already connected, the board establishes connections with an LTE cellular provider and an AWS MQTT client.
- Receive Data: Any available bytes of data on the serial port, representing the vehicle count sent from the OpenMV, are read into an integer variable.
- Publish an MQTT Message: The vehicle count and the device's ID are formatted into a JSON message and published to an AWS MQTT topic.
The AVR-IoT connects to an LTE cellular provider using its SIM card and cellular modem. With the internet access this provides, the board then connects to an Amazon Web Services MQTT client.
MQTT is a protocol used by IoT devices to communicate with a centralized server by publishing and subscribing to specific message topics.
Each time a UART message is received from the OpenMV, the Arduino sketch publishes a message with the received vehicle count and the device's unique ID to a 'vehicle-detections' AWS MQTT topic using the client.
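Each published message is a small JSON document. A representative payload (the values shown here are illustrative) looks like the following:
{
  "device_id": "vehicle_counter_001",
  "num_vehicles": 1
}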
Multiple Amazon web services are used to collect, store, and host the data received from the device so it can be utilized by the end user.
An AWS DynamoDB database is configured with an entry for each deployed device and includes information about the device's location and the total count of vehicles it has detected.
AWS IoT Core is set up with a rule that calls an AWS Lambda function each time a new message is published to the 'vehicle-detections' topic. This function updates the corresponding device entry in the database, incrementing the total count of detected vehicles and updating the detection timestamps based on the message.
Device information is exposed publicly through an HTTP API, implemented with AWS API Gateway. Requests to the API call a Lambda function that fetches device data from the database, which is formatted into JSON and returned in the API response.
AWS S3 is used to host the project website, which utilizes the API to collect the device data and display it to the end user.
The website interface includes a menu displaying details for each deployed device, and a map that plots their physical geographical locations.
Users can interact with the map through panning and zooming. Clicking a map icon highlights the icon and scrolls the menu to its corresponding device, which is also highlighted.
Alternatively, clicking a device menu item highlights the item and centers the map on its corresponding map icon, which is then highlighted.
The map provides users with a visual reference to where the device coverage exists within their city, and how many roads are being monitored.
Within the menu, information about each device includes the name of the road being monitored, its placement coordinates, device ID, date range of vehicle detections, and total vehicle detections.
Using the date range of detections and total vehicle count, a daily average vehicle count for the road is calculated and displayed.
Traffic engineers use models to determine the threshold of total vehicles that can drive on the road before it requires service. Dividing this number by the daily average vehicle count provides engineers with the number of days remaining before maintenance should be performed. This information allows for scheduling maintenance in advance.
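As a minimal sketch of this calculation, using Python and entirely hypothetical numbers for the detection history and the engineering threshold:
# Illustrative maintenance estimate (all values are hypothetical)
from datetime import datetime

first_detection = datetime(2024, 3, 1)   # first recorded detection (hypothetical)
last_detection = datetime(2024, 3, 31)   # latest recorded detection (hypothetical)
total_vehicles = 46500                   # total vehicles detected so far (hypothetical)
service_threshold = 5000000              # total vehicles the road can accommodate before service (hypothetical)

days_monitored = max((last_detection - first_detection).days, 1)
daily_average = total_vehicles / days_monitored
days_until_maintenance = service_threshold / daily_average

print("Daily average: %.0f vehicles/day" % daily_average)
print("Estimated days until maintenance: %.0f" % days_until_maintenance)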
The data pipeline from vehicle to device, to the cloud, to the user happens in near real-time, allowing users to refresh the website to see up-to-the-minute vehicle counts and daily averages for each monitored road.
Build Instructions
Below are end-to-end instructions on how to build, test, and deploy the device to capture vehicle detections and store them on the cloud. Instructions also include how to set up a website to monitor data collected from the device.
These instructions require the following:
- Access to the OpenMV IDE, Arduino IDE, and Node.js.
- An Edge Impulse account.
- An Amazon Web Services account.
- Setup and configuration of the AVR-IoT Cellular Mini board. This includes completing its onboarding steps, installing its Arduino Libraries, and provisioning the board for AWS.
Complete project code and 3D print files are found within this project's attachments and the GitHub project repository.
Hardware Setup
Begin the build by first assembling the device hardware, which consists of multiple components enclosed in a 3D-printed housing.
1. Build the Electronics Circuit
Following the circuit diagram below, solder the connections between the OpenMV Cam and the AVR IoT Cellular Mini board using ~2" lengths of jumper wire. 30AWG wire is recommended for its flexibility.
Wires should be soldered directly to the pin holes (no header pins) on the bottom of the OpenMV Cam, and the top of the AVR IoT Board. These two sides will face each other when mounted inside the housing.
2. Construct the Battery Switch
To power the device, a 3.7V 350mAh LiPo battery is used. Use 30AWG jumper wires to splice the toggle switch into the battery's ground wire.
This modification will allow the device's power to be toggled on and off without having to physically unplug the battery from the AVR-IoT board.
3. 3D print the device housing
Parts include a housing front, body, and back plate. 3D model files (.stl) are provided in this project's Custom Parts and Enclosures section.
The housing shown in this project was printed in high-visibility PLA so it can be easily seen when deployed outdoors.
4. Mount the electronic components
Feed the OpenMV Cam's camera through the hole in the housing front, then attach the board to its interior mounting holes using two M2.5x5 screws. Place an SD card into the OpenMV Cam through the opening on the side.
Attach the Cellular Mini board to the mounting standoffs on the back plate using four M2x3 screws. Then, plug the battery into the board's JST socket and slide the battery between the board and the backplate.
5. Assemble the device
Slide the backplate and battery through the housing body, then attach the front to the body using four M2x3 screws in their aligned attachment holes.
Fit the toggle button through the hole on the side of the body, and glue its base to the inside wall of the body to secure it in place.
Feed the antenna of the Cellular Mini through the hole in the top of the body, then align the six holes of the back plate with the mounting holes on the body and attach them using M2x3 screws.
After completing these steps, the device assembly is complete. It can be toggled on and off using the battery switch, and the OpenMV and Cellular IoT boards can be accessed via their USB ports on the underside of the device.
Collect Dataset Images
The device will employ an object-detection model to detect vehicles in the images it captures. For accurate performance, the model must be trained on a dataset of example images featuring many different vehicles.
Images in this dataset should include a wide variety of vehicle makes, models, and colors, captured from a side view. These criteria ensure the model can be trained to detect all types of vehicles as they are captured from the perspective of the deployed device.
Build this dataset using the device's OpenMV Cam and the OpenMV IDE.
1. Setup the capture area
Begin by setting up a laptop and the hardware device adjacent to a road with regular vehicle traffic, similar to where the device would be deployed.
The setup should be at a distance where the camera's field of view encompasses approximately 2-4 car lengths of the road when viewing it from the side.
Connect the device's OpenMV Cam to the laptop through a USB cable.
2. Initialize a new Dataset
Open OpenMV IDE on the laptop. From the "Tools" menu, click "Dataset Editor" then "New Dataset".
When prompted, create a new folder called vehicle-dataset and select it. This folder will contain all of the dataset images and related files and is automatically opened in the Dataset Editor panel of the IDE.
3. Edit the capture script
A dataset_capture_script.py Python script is automatically created within the dataset folder and opened inside the IDE editor window. This script initializes the camera settings before capturing images.
Images collected for this dataset should be captured in square dimensions (240x240 px) and RGB color. Both these settings will ensure images match the input formatting expected by the model.
Edit the camera configuration code block in the script by adding a line defining the image size with sensor.set_windowing((240, 240)).
The updated code block should be as follows:
sensor.reset()
sensor.set_pixformat(sensor.RGB565) # Set pixel format to RGB
sensor.set_framesize(sensor.QVGA) # Set frame size to QVGA (320x240)
sensor.set_windowing((240, 240)) # Set 240x240 window.
sensor.skip_frames(time = 2000)
Click the "Connect" and then "Start" buttons in the lower-left corner of the IDE to begin executing the dataset_capture_script.py script on the connected OpenMV Cam.
Once running, a live preview window of what the camera sees, with configuration settings applied, is viewable in the IDE's Frame Buffer panel.
4. Collect the dataset images
In the Dataset Editor button menu of the IDE, click the "New Class Folder" button and enter the class name "vehicles" when prompted.
This creates a vehicles.class subfolder, shown in the Dataset Editor menu. Click this folder to set it as the save location for newly captured images.
Hold the device in one hand with its camera facing the road, perpendicular to the direction of traffic, at a height between 3 and 6 feet above the ground, similar to the intended deployment height of the device.
Use the live preview within the IDE to determine when a vehicle is driving through the camera's field of view, and click the "Capture Data" button as it does to save a still image of the vehicle.
Only images of single vehicles that are centered vertically and have at least half of their side profile visible in the frame should be captured for the dataset.
Multiple images of a single vehicle can be captured as it passes by the device, from the moment its front end enters one side of the frame until it exits the opposite side.
Continually capture images of vehicles to build the dataset with a variety of vehicle makes, models, and colors.
After 50-100 images of vehicles have been captured, move to a new location and repeat the process. Changing locations ensures that there is variety in the backgrounds of the dataset images.
Collect a minimum of 500 images of vehicles to complete the dataset.
Label Dataset Images
Before the collected dataset images can be used to train the model, they must first be manually annotated with bounding boxes that outline the positions of vehicles within them.
These bounding box labels allow the model to learn which image shapes and features can be used to define a vehicle versus the image background.
Edge Impulse is an online platform that provides a workflow for building, training, and testing machine learning models. Add the dataset images to a new Edge Impulse project and label them using the platform's labeling tools.
1. Create a new Edge Impulse project
Log into Edge Impulse, then click "Create new project" from the project dashboard. When prompted, enter the project name vehicle-detection, and then click "Create new project".
A new vehicle-detection project is created and the website redirects to the project dashboard, which shows an overview of the project's details.
In the "Project info" section of the dashboard, select "Bounding Box" for the "Labeling Method" and "OpenMV Cam H7 Plus" for "Target device".
2. Upload the Image Dataset
Use OpenMV IDE to upload the dataset to the newly created Edge Impulse project.
Link OpenMV IDE to Edge Impulse to enable the upload. From the OpenMV IDE 'Tools' menu, select 'Dataset Editor', then 'Export', then 'Login to Edge Impulse Account and Upload to Project'. Type in your Edge Impulse username and password when prompted and hit OK.
A dropdown dialog appears prompting you to select a project from your Edge Impulse account. Select the vehicle-detection project and hit OK.
Next, a Dataset Split dialog will prompt you to select how to split the data between training and testing sets. Leave the default 80/20% split and hit OK.
A progress bar appears showing the upload progress. After it is complete, a dialog detailing how many images were uploaded to the project is shown.
In Edge Impulse, click Data Acquisition in the navigation menu to see an overview of the data uploaded to the project.
This page allows you to see which individual images were split between the training and test sets. Click any image name to see a preview of it.
3. Add bounding box labels to the images
Click the 'Labeling Queue' menu item to access the dataset labeling tool. Here each unlabeled image is displayed, one after another, allowing for bounding box labels to be drawn on them.
Use your mouse to drag a box around the vehicle in the image, making sure it fits tight around the vehicle's edges. When prompted, enter 'vehicle' as the label, then click 'Set Label'.
When the labeling is complete for the image, click 'Save labels' to save the labels to this image and display the next image in the queue.
With the tool's 'Label suggestions' option set to 'Track object between frames', a 'vehicle' bounding box is automatically drawn on the next image. Edit the size and position of this box to fit the vehicle in the image before saving it.
Repeat this process until all images in the queue have been labeled.
Build and Train an Object Detection Model
Continue through the Edge Impulse workflow to build, train, and test an object detection model.
This model will be designed to take an image as input and output the bounding box coordinates of any vehicles it detects while running in real-time on the OpenMV Cam hardware.
The labeled dataset is utilized for both training the model and testing its detection accuracy. This testing helps give insight into how well the model will perform when processing newly captured images on the device.
1. Create a new impulse
Within Edge Impulse, an impulse is a chain of processing blocks configured to accomplish a certain machine-learning task. Different blocks are used to collect, preprocess, and create outputs from project data.
Design an impulse for this project that formats input images, generates image features, and fine-tunes a pre-trained image-detection model.
Click 'Impulse Design' in the navigation menu to view the project's default impulse layout.
The impulse includes an Image Data block that defines how input images will be resized during pre-processing. Keep the default values for this block to downsample input images to 96x96 pixels.
Click 'Add a processing block'. In the popup menu, click 'Add' on the Image item to add this block to the Impulse chain. This block further processes and normalizes the image data for input into the model. Keep its default values.
Click 'Add a learning block'. In the popup menu, click 'Add' on the Object Detection item to add this block to the Impulse chain. This block defines the model type, its input, and output features. Keep its default values.
Finish designing the impulse by clicking 'Save Impulse'. Image and Object Detection items are added under Impulse Design in the navigation menu.
2. Generate Image Features
Input images are normalized and converted into a list of numerical features to enable their use as inputs into machine learning models. Generating these features for the dataset images is required before training can take place.
Click on 'Image' in the navigation menu. This opens the Parameters tab of the Image page, which allows users to view raw image data and alter processing parameters before feature extraction. Click 'Save Parameters'.
After saving, the Generate Features tab is automatically opened. Click the 'Generate Features' button to begin this process. Once complete, a reduced-dimension version of the dataset features is displayed in a 3D visualization.
3. Train the object detection model
With the dataset processed and converted into features, it can now be used to train an image detection model. Configure the model's learning parameters and initiate the training.
Click on 'Object Detection' in the navigation menu to open the Object Detection page. The Neural Network Settings are used to define the training settings and architecture of the model that will be used.
Training settings control how the process of training is performed, including the total amount of training and learning rate. Update the 'Number of training cycles' setting to 100, to increase the training for this model.
Neural network architecture settings control which model type will be used. Leave the default FOMO (Faster Objects, More Objects) MobileNetV2 0.35 model selected, which is specially designed to work on embedded devices.
Click 'Start Training' to begin the training process. An output console shows the training progress, including its validation accuracy after each training epoch. This val_accuracy value should increase as the model is exposed to more training images and better learns how to detect vehicles.
After training is complete the Model window shows statistics about the model's performance. Validation set accuracy is displayed, as well as a confusion matrix and feature explorer.
The trained model achieved 98% accuracy on its validation set, demonstrating an almost perfect performance in detecting vehicles and distinguishing them from the image background.
4. Test the Model
Apply the fully trained model to the test portion of the data that was initially split from the training data to further validate its performance accuracy.
Click on 'Model testing' in the navigation menu. The Model Testing page shows a list of all dataset images included in the test split. Click 'Classify All' to begin the classification process.
When the process is complete, the model's accuracy is displayed.
Here, testing validation reached 95% accuracy in correctly detecting vehicles in the test data images. This is a very strong result and implies that the model will perform well when running on new images captured by the device.
If your model did not achieve a high accuracy during testing there are a few changes that can be made:
- Ensure that each labeled image contains a single vehicle, centered vertically, with at least half of the body viewable. Capture and label additional images for the dataset that meet these criteria.
- Tune the training and model architecture parameters to see if a better accuracy can be achieved. Increase the training cycles so that the model is exposed to more data during the training process.
Retrain and retest the model after making these changes, if necessary. When the model reaches an accuracy greater than 85%, it is ready to be deployed and used on the device.
A public version of the full Edge Impulse project, including the labeled data and trained model, can be viewed here.
Deploy the Model
The trained model exists on Edge Impulse and must be deployed to the OpenMV Cam within the device. Once deployed, the model can be tested in real-time on images that are captured live from the device's camera.
1. Export the model from Edge Impulse
Begin by first exporting the trained model from Edge Impulse.
Click on 'Deployment' in the navigation menu. On the following page click 'Search deployment options' and select 'OpenMV Library'. Then click 'Build'.
When the build process is complete, an ei-vehicle-2-openmv-v1.zip file is downloaded. Unzip this file to access a directory that includes three files: trained.tflite, labels.txt, and ei_image_detection.py.
2. Copy Files to the OpenMV Cam
The trained.tflite TensorFlow Lite file contains the trained model, while the labels.txt text file contains labels for the model's possible detection classes - 'vehicle' and 'background'.
Both files must be copied onto the OpenMV Cam's onboard storage to be utilized by the image detection script that will be running on it.
With the OpenMV Cam connected to the computer, copy and paste both files into the OpenMV Cam's file system, which appears as a connected storage device on the computer.
3. Test the vehicle detection in real-time
The ei_image_detection.py script will run on the OpenMV Cam and includes code for initializing the camera, capturing images, processing them, and displaying any vehicle detections.
Open ei_image_detection.py in the OpenMV IDE. With the OpenMV Cam connected, click the 'Start' button to begin running this script.
The content of the Python script is shown here:
# Edge Impulse - OpenMV Object Detection Example

import sensor, image, time, os, tf, math, uos, gc

sensor.reset()                         # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)    # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
sensor.set_windowing((240, 240))       # Set 240x240 window.
sensor.skip_frames(time=2000)          # Let the camera adjust.

net = None
labels = None
min_confidence = 0.5

try:
    # load the model, alloc the model file on the heap if we have at least 64K free after loading
    net = tf.load("trained.tflite", load_to_fb=uos.stat('trained.tflite')[6] > (gc.mem_free() - (64*1024)))
except Exception as e:
    raise Exception('Failed to load "trained.tflite", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')

try:
    labels = [line.rstrip('\n') for line in open("labels.txt")]
except Exception as e:
    raise Exception('Failed to load "labels.txt", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')

colors = [ # Add more colors if you are detecting more than 7 types of classes at once.
    (255,   0,   0),
    (  0, 255,   0),
    (255, 255,   0),
    (  0,   0, 255),
    (255,   0, 255),
    (  0, 255, 255),
    (255, 255, 255),
]

clock = time.clock()
while(True):
    clock.tick()

    img = sensor.snapshot()

    # detect() returns all objects found in the image (splitted out per class already)
    # we skip class index 0, as that is the background, and then draw circles of the center
    # of our objects
    for i, detection_list in enumerate(net.detect(img, thresholds=[(math.ceil(min_confidence * 255), 255)])):
        if (i == 0): continue # background class
        if (len(detection_list) == 0): continue # no detections for this class?

        print("********** %s **********" % labels[i])
        for d in detection_list:
            [x, y, w, h] = d.rect()
            center_x = math.floor(x + (w / 2))
            center_y = math.floor(y + (h / 2))
            print('x %d\ty %d' % (center_x, center_y))
            img.draw_circle((center_x, center_y, 12), color=colors[i], thickness=2)

    print(clock.fps(), "fps", end="\n\n")
When executed, the code configures the camera capture settings and then enters a while loop that repeats continuously while the board is powered.
At each iteration of the loop, a new image img is captured from the camera using sensor.snapshot() and displayed in the Frame Buffer of the IDE.
The loaded model net processes the image with its detect() method, which returns a detection_list list of any vehicle detections. When detections occur, this list contains bounding box information for each vehicle.
As camera images are continuously captured and processed by the model, the center coordinates of each detected vehicle are printed to the console and overlayed as a circle on the Frame Buffer image.
Test the model's performance by pointing the device's camera at passing vehicles and viewing real-time detection coordinates and image overlays for each vehicle moving across the image frame.
Send data to the AVR-IoT board
At this step in the build, vehicle detections are only viewable when the OpenMV Cam is connected to the OpenMV IDE. The next step is to transmit the occurrence of each vehicle to the AVR-IoT board for further handling.
This can be accomplished using UART serial communication between the boards. With the OpenMV's TX pin connected to the AVR-IoT board's RX pin, data can be transmitted from the former to the latter using just one wire.
Modify the detection script to transmit detection data, then configure the script to run automatically on the OpenMV Cam.
1. Add the following import statement to the top of the script:
from pyb import UART
Here the UART class is imported from the pyb library, which is used to implement the UART serial communication protocol.
2. Define a UART object, below the import statements:
# Define UART object
uart = UART(3, 9600, timeout_char=200)
This line initializes a UART object uart on the OpenMV Cam's serial bus 3 (Pin 4 for TX) at a baud rate of 9600.
3. Below the for loop that processes each detection, add the following code:
# Send the number of detections over UART
num_vehicles = str(len(detection_list))
uart.write(num_vehicles)
# Delay between next image capture
time.sleep_ms(2000)
The number of vehicle detections in the image num_vehicles is transmitted over the UART using the write() function of the uart object.
Next, the script sleeps for 2 seconds. This pauses further processing to allow time for detected vehicles to drive out of the image frame, preventing any vehicle from being detected more than once as it drives past the device.
The sleep time should be adjusted based on the device's deployment, and set to the number of seconds it takes for vehicles to drive through the deployed camera's field of view.
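One rough way to choose this value is to estimate how long a vehicle remains in frame from the visible road width and a typical speed for that road; the numbers below are placeholders, not measured values:
# Rough pause estimate from illustrative deployment values
fov_width_m = 20.0         # width of road visible in the frame, in meters (placeholder)
vehicle_speed_kmh = 40.0   # typical vehicle speed on the monitored road (placeholder)

pause_ms = int(fov_width_m / (vehicle_speed_kmh / 3.6) * 1000)
print(pause_ms)            # value to pass to time.sleep_ms()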
4. Save the updated script to the OpenMV Cam
The updated image-detection script must be run on the OpenMV Cam when it is disconnected from the computer and powered by the AVR-IoT board.
From the 'Tools' menu in the IDE, select 'Save the open script to OpenMV Cam (as main.py)'. The OpenMV Cam is configured to execute any script named main.py that is stored on its drive when it first boots up.
When the AVR-IoT is powered, either through USB or battery, and the OpenMV Cam is powered through its 3.3V pin, the script will automatically begin running on the OpenMV Cam. Unplug the OpenMV Cam from the computer.
Receive data on the AVR-IoT board
The AVR-IoT board must be set up to receive the vehicle detection data transmitted from the OpenMV Cam before further data processing can occur.
Write and upload an Arduino script to the AVR-IoT board that continuously listens for and reads incoming data on its connected serial bus.
Before completing the below steps, make sure the Arduino development environment is configured for the AVR-IoT by following this guide.
1. Connect the AVR-IoT board to a computer via a USB cable.
2. Open a new sketch in the Arduino IDE, and add the following at the top:
#include <led_ctrl.h>
#include <log.h>
Here the AVR-IoT LED Control and Log modules are included to provide additional functionality when interfacing with the board.
3. Next, define the OpenMV Cam serial bus:
// OpenMV Cam serial bus
#define OpenMVSerial Serial2
The OpenMV Cam's TX pin is connected to the RX pin of the AVR-IoT's second hardware serial port. This line defines OpenMVSerial as this port.
4. Within the sketch's setup() function, add the following:
// LED control setup
LedCtrl.begin();
LedCtrl.startupCycle();
The LedCtrl object is used to control the AVR-IoT's multiple onboard LEDs. This code initializes the object and runs a startup cycle to test each LED.
5. Setup serial communication with the computer:
// Log setup
Log.begin(9600);
Log.info(F("Starting IoT vehicle counting device"));
Messages can be printed to the Arduino IDE serial monitor using the Log object while the AVR-IoT board is connected via USB. Log-level formatting is included for each message to help categorize message types.
6. Setup serial communication with the OpenMV Cam:
// OpenMV Cam hardware serial setup
OpenMVSerial.swap(1);
OpenMVSerial.begin(9600);
Here, serial communication is initialized on the OpenMVSerial port, with the same baud rate utilized by the OpenMV Cam.
7. Within the sketch's loop() function, add the following code block:
// Check for data sent from OpenMV Cam
if (OpenMVSerial.available() > 0) {
    // Read vehicle detection data
    LedCtrl.on(Led::USER);
    int num_vehicles = OpenMVSerial.read();
    Log.infof(F("Vehicles detected: %d\r\n"), num_vehicles);
    LedCtrl.off(Led::USER);
}
This code first checks for any bytes of data available and waiting to be read from the OpenMVSerial receive buffer using its available() method.
Data sent from the OpenMV Cam will represent a string formatted number - the number of vehicles detected. If available, these bytes are read into the num_vehicles integer variable using the serial object's read() function.
The board's User LED is flashed as the data is read, and the received number of detections is printed to the serial monitor. Reading this available data removes it from the receive buffer.
Because this code block exists in the sketch's loop() function, it will run repeatedly, enabling the AVR-IoT board to continuously listen for and read vehicle detection data sent from the OpenMV Cam.
8. Save and upload the sketch to the AVR-IoT board.
With the sketch uploaded and the AVR-IoT board still connected through USB, test this sketch by again capturing live vehicle traffic with the device.
Open the Arduino IDE serial monitor. As vehicles drive past the device and are detected, the number of vehicle detections is printed out to the monitor, indicating that the serial communication is working.
Publish Data to an AWS MQTT Topic
With the vehicle detection data now being received on the AVR-IoT board, the board's cellular connectivity can be utilized to send the data to Amazon Web Services via MQTT.
MQTT is a lightweight communications protocol used by IoT devices to send and receive messages to and from a central server. Messages are structured into MQTT topics which networked devices can subscribe and publish to.
Configure the AVR-IoT board in AWS IoT Core and update the Arduino sketch to connect and publish vehicle detection data to a custom MQTT topic.
Before completing the below steps, the board must be provisioned for AWS using your AWS account credentials by following this guide.
1. Add publish permissions for the AVR-IoT board in AWS IoT Core
Vehicle data will be published to an MQTT topic named vehicle-detections on the AWS IoT Core service.
To enable this, the AVR-IoT device's certificate policy must be updated with a new statement that allows for publishing to this topic.
From the main AWS Console, navigate to the IoT Core service. Click on 'All Devices' in the Manage menu, then 'Things'. Click the name of the provisioned AVR-IoT board in the Things list.
Click on the 'Certificates' tab for the device, then click the name of its certificate from the Certificates list. Click on the 'Policies' tab for the certificate, then click the listed 'zt_policy' policy.
On the policy page, click 'Edit active version', then 'Add new statement'. In the empty statement form fields, select 'Allow' for Policy effect and select 'iot:Publish' for Policy action.
For the Policy resource, enter 'arn:aws:iot:<AWS-Region>:<AWS-Account-ID>:topic/vehicle-detections', replacing the AWS-Region and AWS-Account-ID values with your region name and account ID.
Click 'Save as new version', then check the box next to the updated policy version to set it as the active policy within the 'All versions' menu.
2. Add the following statements at the top of the Arduino sketch:
#include <lte.h>
#include <mqtt_client.h>
#include <ArduinoJson.h>
Additional modules are included here to add functionality for connecting to LTE, publishing to MQTT, and formatting message data.
3. Define the MQTT topic name
// Define MQTT topic name
#define AWS_PUB_TOPIC "vehicle-detections"
The device will publish to the vehicle-detections topic name that it has been granted permission for, defined here as AWS_PUB_TOPIC.
4. Define the device ID
// Define unique device ID
#define DEVICE_ID "vehicle_counter_001"
The device is assigned an ID, DEVICE_ID, that is later included in every published MQTT message.
Each deployed device requires a unique ID, allowing its data to be linked with that specific device among the other devices publishing to the same topic.
5. Add the following code block to the top of the sketch's loop() function:
// Establish LTE connection to operator
if (!Lte.isConnected()) {
    if (Lte.begin()) {
        Log.infof(F("Connected to operator: %s\r\n"), Lte.getOperator().c_str());
    } else {
        Log.error(F("Failed to connect to operator\r\n"));
        return;
    }
}
The Lte object includes various methods for interfacing with the AVR-IoT's LTE module and connecting to an LTE operator.
This code checks the board's current LTE connection status using Lte.isConnected(). If no connection is present, Lte.begin() is called to establish a new connection with the LTE operator.
This process of checking and connecting to the network is repeated on each iteration of the loop, working to establish an initial LTE connection, and then re-establishing the connection if it is disconnected during operation.
6. Establish a connection to MQTT:
// Establish AWS MQTT connection
if (Lte.isConnected()) {
    if (!MqttClient.isConnected()) {
        if (MqttClient.beginAWS()) {
            Log.infof(F("Connected to AWS\r\n"));
        } else {
            Log.error(F("Failed to connect to AWS\r\n"));
            return;
        }
    }
}
The MqttClient object includes various methods for connecting to and interacting with a specified MQTT broker server.
Here the AVR-IoT's connection to the configured AWS MQTT server is checked using MqttClient.isConnected(). If no connection is present, MqttClient.beginAWS() is called to establish a new connection with AWS.
Similar to the LTE connection, this process repeats with each loop to ensure the MQTT connection is first established and then re-established if it is disconnected during operation.
7. After the code block where the vehicle data is read(), add the following:
// Format data into JSON string
StaticJsonDocument<200> doc;
doc["device_id"] = DEVICE_ID;
doc["num_vehicles"] = num_vehicles;
char message[512];
serializeJson(doc, message);
The message published to the MQTT topic will consist of the values for the number of vehicles detected, num_vehicles, and the device ID, DEVICE_ID.
This code formats this data into JSON key-value pairs and then converts that JSON into a serialized string, stored in the variable message.
8. Publish the message to the MQTT topic
// Publish data
Log.infof(F("Publishing data: %s\r\n"), message);
if (!MqttClient.publish(AWS_PUB_TOPIC, message)) {
    Log.warn(F("Failed to publish data\r\n"));
}
The message is published to the topic by calling MqttClient.publish(), with the designated topic name AWS_PUB_TOPIC and the formatted data string message as arguments.
9. Save and upload the sketch to the AVR-IoT board.
With the sketch uploaded and the AVR-IoT board still connected through USB, test this sketch by again capturing live vehicle traffic with the device.
In the Test menu within AWS IoT Core, click on 'MQTT Test Client'. On the 'Subscribe to Topic' tab, enter 'vehicle-detections' in the Topic Filter, then click 'Subscribe'.
This test client is now subscribed to the topic that the device publishes to, and will display new messages in a subscription feed as they are received.
As vehicles drive past the device and are detected, messages containing the number of vehicles detected and device ID appear in the client feed in near real-time, indicating that the MQTT publishing is working.
Create a Device Database
Data sent by the device to the AWS broker server must be stored in a database in order to persist and accumulate a total count of vehicle detections for the device.
Database items can be updated as new device messages are received, and queried by an API for use in the project's road monitoring application.
Create a DynamoDB database to store information about each unique vehicle detector device, including its ID, deployment location, and the total number of vehicles it has detected.
1. Create and configure a DynamoDB table
DynamoDB is a NoSQL database offered by AWS that provides serverless data storage. Within DynamoDB, data is stored in tables; each table consists of a collection of items, and each item is a collection of attributes.
From the main AWS Console, navigate to the DynamoDB service. On the DynamoDB dashboard, click 'Create table'.
In the create table form, enter 'Devices' as the table name, and 'device_id' as the partition key, then click 'Create table'. The Tables dashboard will be displayed, listing the newly created 'Devices' table as active.
2. Add device information to the database
Initial information about the device must first be manually added to this database before it can be modified later by MQTT messages.
Within the Tables dashboard, click on the 'Devices' table, then 'Explore Table Items'. In the Items Returned section, click 'Create item'.
Fill out the Create Item form, using the 'Add new attribute' dropdown to add the following attribute names, types, and values:
- device_id (String): The unique device ID that is defined in the Arduino sketch and included in the MQTT message. Ex. 'vehicle_counter_001'
- road_name (String): The name of the road that the device is positioned to monitor. Ex. 'Market Street'
- latitude (Number): The latitude of the device position. Ex. 38.031639
- longitude (Number): The longitude of the device position. Ex. -78.481444
- vehicle_count (Number): The total number of vehicles detected by the device. Set as 0 for its initial value.
- first_detection_timestamp (String): The timestamp of the first vehicle detection recorded by the device. Leave this as an Empty value.
- last_detection_timestamp (String): The timestamp of the latest vehicle detection recorded by the device. Leave this as an Empty value.
These attributes fully encapsulate the information needed for the road monitoring application, including the number of vehicles detected over a given period, and the location details of where the detections take place.
Next, click 'Create item' to save this device item to the Devices table. This new item is now listed in the Items Returned section of the table.
Whenever a new vehicle detector device is built and deployed, this step must be repeated to add that specific device's information into the database table.
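For repeated deployments, the same initial item can also be created programmatically rather than through the console. A minimal boto3 sketch, assuming AWS credentials are already configured locally and using the example values above:
# Sketch: create an initial device item with boto3 (example values from above)
from decimal import Decimal
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Devices')

table.put_item(Item={
    'device_id': 'vehicle_counter_001',
    'road_name': 'Market Street',
    'latitude': Decimal('38.031639'),
    'longitude': Decimal('-78.481444'),
    'vehicle_count': 0,
    'first_detection_timestamp': '',
    'last_detection_timestamp': '',
})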
Create an IoT Rule to Update the Database
Each device message published to the MQTT topic contains a vehicle detection count that needs to be added to the total vehicle count for the device stored in the database.
This can be accomplished using an IoT Rule and Lambda function. IoT Rules are created to execute actions when MQTT messages are published, and Lambda functions are code that can interface with various AWS services.
Configure an IoT rule to automatically run a custom Lambda function each time the device publishes a message, which processes the message data and updates the corresponding device database item attributes.
1. Create an IAM role with full database permissions
The Lambda function will read and update items in the database. This requires it to have appropriate access permissions for interacting with DynamoDB.
Within AWS, permissions are controlled through IAM roles. Each IAM role is assigned permissions to perform certain actions on AWS resources, and services such as Lambda functions can assume these roles.
From the main AWS Console, navigate to the IAM service. In the 'Access Management' menu, click 'Roles', then click 'Create Role'.
On the Create Role page, select 'AWS Service' for the 'Trusted Entity Type' and 'Lambda' for the 'Use Case', then click 'Next'.
When prompted to select a permission policy, search for and select 'AmazonDynamoDBFullAccess', then click 'Next'.
In the 'Role Details' section, enter 'device_ddb_role' for the 'Role name', then click 'Create Role'. The newly created role will show up in the 'Roles' list.
2. Write a Lambda function that updates the device item
Lambda functions are units of code that can be executed within AWS based on certain trigger conditions. These functions can interact with other AWS services to automatically perform various tasks.
From the main AWS Console, navigate to the Lambda service. On the Lambda dashboard, click 'Create function'.
In the Create Function menu, enter 'device_ddb_update' as the 'Function name', then select 'Python 3.12' as the 'Runtime'.
Next, click 'Change default execution role', then select 'Use an existing role'. From the dropdown menu, select the previously created 'device_ddb_role' role to grant this function the full DynamoDB permissions this role enables.
Click 'Create function' to create the function and navigate to its dashboard. This dashboard is used to edit, test, and configure the Lambda function code.
Within the dashboard's code source editor, copy and paste the following code into the open lambda_function.py script:
import json
import boto3
from datetime import datetime

def lambda_handler(event, context):
    # Define DB and table
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table('Devices')

    # Partition key for device item
    partition_key_value = {'device_id': event['device_id']}

    # Get device item data
    response = table.get_item(Key=partition_key_value)
    device = response['Item']

    # Update vehicle count
    vehicle_count = device.get('vehicle_count') + event['num_vehicles']

    # Update detection timestamps
    timestamp = str(datetime.now())
    first_detection_timestamp = device.get('first_detection_timestamp') or timestamp
    last_detection_timestamp = timestamp

    # Update device item
    table.update_item(
        Key=partition_key_value,
        UpdateExpression='SET \
            vehicle_count = :vehicle_count, \
            first_detection_timestamp = :first_detection_timestamp, \
            last_detection_timestamp = :last_detection_timestamp',
        ExpressionAttributeValues={
            ':vehicle_count': vehicle_count,
            ':first_detection_timestamp': first_detection_timestamp,
            ':last_detection_timestamp': last_detection_timestamp
        }
    )
The lambda_handler function within this script gets executed each time the Lambda function is triggered by the IoT Rule.
Data is passed into the function through its event parameter from the IoT Rule. The rule will be configured so that event is a Python dictionary containing the device_id and num_vehicles values from the message.
The function first establishes a connection to the DynamoDB service and creates a reference table to the 'Devices' table.
A partition_key_value value is defined to identify the device in the table that corresponds to the message. The message's device_id value matches the partition key of the device in the table.
Device information is read in by calling the table.get_item() method with the partition key. The response from this method is parsed, storing the device's item attributes as the device variable.
Next, the device's vehicle_count value is updated by adding the message's num_vehicles value to it.
The current timestamp is then captured and used to set the device's first_detection_timestamp value if that value is empty and to update the device's previous last_detection_timestamp value.
Finally, updated device values are written back into the database using the table.update_item() method with the partition key, and a properly formatted update expression and expression values.
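Before wiring up the IoT Rule, the handler can be exercised directly with a test event shaped like a published device message. A sketch, assuming local AWS credentials with access to the 'Devices' table:
# Illustrative local test of the handler (requires AWS credentials and the 'Devices' table)
from lambda_function import lambda_handler

test_event = {"device_id": "vehicle_counter_001", "num_vehicles": 2}
lambda_handler(test_event, None)
# The device item's vehicle_count should increase by 2 and its timestamps should update.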
3. Create an IoT Rule that triggers the Lambda function
The IoT Core service includes a rule engine that allows rules to be defined based on incoming MQTT messages. These rules can capture the data contained in the messages, and trigger actions to perform with this data.
Write a new rule that is executed when messages are published by the device which automatically runs the previously created Lambda function.
From the main AWS Console, navigate to the IoT Core service. In the 'Message Routing' menu, click 'Rules', then click 'Create Rule'.
Enter 'device_ddb_rule' as the rule name in the 'Rules Properties' section, then click 'Next'.
In the 'SQL statement' section, set the SQL statement to 'SELECT * FROM "vehicle-detections"'. This statement instructs the rule to extract all data attributes (*) from incoming messages and to only activate for messages published to the 'vehicle-detections' MQTT topic.
Select 'Lambda' from the 'Choose an action' dropdown menu in the 'Rules action' section. Then, in the 'Choose Lambda function' dropdown, select the 'device_ddb_update' function from the list.
The selected Lambda function is passed all of the extracted message data when it is triggered by the Rule, allowing the function to process the message data and update the device database with this new information.
Click 'Next', then click 'Create'. The 'Rules' dashboard is shown, which now lists the newly created 'device_ddb_rule' as active.
With the rule active, it is now continuously monitoring the "vehicle-detections" MQTT topic for newly published messages and will trigger the Lambda function each time a new message is received.
4. Test the IoT Rule and Lambda Function
Test that the IoT Rule and Lambda function are working properly by capturing live vehicle traffic with the device.
Navigate to the DynamoDB dashboard, then click 'Explore items' from the 'Tables' menu, then select the 'Devices' table.
In the 'Items returned' section, find the device item with the device_id value of 'vehicle_counter_001' and explore its attributes.
As vehicles are detected by the device, use the 'Refresh' button to view how the item's vehicle_count, first_detection_timestamp, and last_detection_timestamp values are updated with each detection.
Updates to the database confirm that the IoT Rule is being activated with each message published by the device and that the Lambda function is accessing the message data and making updates to the device database.
Create API Access to the Database
At this point in the build, the data pipeline for getting vehicle detections from the device to an AWS database is complete. The next step is to enable this data to be retrievable by the project's road monitoring application.
The AWS API Gateway service is used to create and manage APIs, which can give clients access to data through requests made to the API's resources.
Configure an API Gateway resource to trigger a Lambda function each time an HTTP request is received, which returns a list of device items in JSON format.
1. Write a Lambda function that returns all database items
Like the previous Lambda function, this function will interact with the database, but instead of updating items it will read and return item data.
From the main AWS Console, navigate to the Lambda service. On the Lambda dashboard, click 'Create function'.
In the Create Function menu, enter 'device_ddb_get_items' as the 'Function name', then select 'Python 3.12' as the 'Runtime'.
Next, click 'Change default execution role', then select 'Use an existing role'. From the dropdown menu, select the previously created 'device_ddb_role'.
Click 'Create function' to create the function and navigate to its dashboard. Within the dashboard's code source editor, copy and paste the following into the open lambda_function.py script:
import json
import decimal
import boto3

def lambda_handler(event, context):
    # Define DB and table
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table('Devices')

    # Get all items from the table
    response = table.scan()
    items = response.get('Items', [])

    # Convert decimals to int/float type
    for item in items:
        for key, value in item.items():
            if isinstance(value, decimal.Decimal):
                item[key] = int(value) if value % 1 == 0 else float(value)

    # Return the items in JSON format
    return {
        'statusCode': 200,
        'headers': {'content-type': 'application/json'},
        'body': json.dumps(items)
    }
The lambda_handler function within this script gets executed each time the Lambda function is triggered by a request to the API Gateway resource.
The function first establishes a connection to the DynamoDB service and creates a reference table to the 'Devices' table.
All items in the table are read in by calling the table.scan() method and parsing its response to store the list of device items as the items variable.
Numerical item values, such as the vehicle count and position coordinates, are stored as decimal types in the database. These values are each converted to integer or float types to conform to JSON formatting.
Finally, the function returns a response dictionary defining a status code 200, and a body containing the list of device items as a serialized JSON string.
2. Setup an HTTP API endpoint that integrates with the Lambda function
HTTP APIs can be created to integrate with Lambda functions. This integration triggers the Lambda function with each request to an API endpoint and responds to each request with the output generated by the function.
From the main AWS Console, navigate to the API Gateway service. From the 'Choose an API Type' list, click 'Build' within the 'HTTP API' section.
On the 'Create an API' step, click 'Add Integration' and then select 'Lambda' from the dropdown menu. In the 'Lambda Function' input, select the 'device_ddb_get_items' function that was previously created.
For the 'API Name', enter 'devices_api', then click 'Next'.
In the next step, 'Configure routes', set the route 'Method' as 'GET' and the 'Resource path' as '/devices'. Leave the 'Integration target' as the selected Lambda function.
These settings establish a '/devices' API endpoint, accessible through HTTP GET requests. Each request to this endpoint triggers the 'device_ddb_get_items' function and the JSON-formatted list of device data produced by the function will be returned in its response.
Click 'Next', then on the following 'Define stages' step, leave the default selections, and click 'Next'.
Then on the last step, 'Review and create', click 'Next'. After the API is successfully created, the page navigates to the API's dashboard.
From the 'Develop' menu of the dashboard, click 'CORS', then click 'Configure'. For the 'Access-Control-Allow-Origin' input, enter '*', then click 'Add'. Then click 'Save' to update the CORS settings.
Updating this CORS setting is required to allow the road monitoring application, which will be hosted on a different domain than the API, to make API requests without getting blocked.
3. Test the HTTP API
Confirm that requests to the API are returning the expected device data by navigating to the '/devices' endpoint in the browser.
In the 'Deploy' menu of the dashboard, click 'Stages', then select the '$default' stage. The stage details for the deployed API are shown.
The Invoke URL is the AWS URL that the API is accessible on. The URL is in the format 'https://<APP-ID>.execute-api.<Region>.amazonaws.com'.
Copy and paste this URL into the browser's URL bar. Append '/devices' to the end of the URL and navigate to it.
A list of device data in JSON format is returned and displayed. This demonstrates that the API and Lambda functions are working, and data from the database is now accessible to clients.
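The same request can also be scripted. A minimal sketch using Python's requests library, with the Invoke URL left as a placeholder:
# Sketch: query the devices endpoint (replace the placeholder with the API's Invoke URL)
import requests

API_URL = "https://<APP-ID>.execute-api.<Region>.amazonaws.com"
response = requests.get(API_URL + "/devices")
response.raise_for_status()

for device in response.json():
    print(device["device_id"], device["vehicle_count"])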
Configure and Host the Road Monitoring Web App
The project's web application enables end users to visualize and monitor the physical locations and vehicle counts for a network of deployed devices.
Within the app interface, device positions are plotted on a map, and device details are listed in a scrollable menu. Users can view the geographical coverage of their devices, and use their total and daily average vehicle counts to estimate when road maintenance will be required.
Download and configure the web application code, then host the app on AWS S3 to make it publicly available to users.
1. Download and configure the app source code
Navigate to the project's GitHub repository, click 'Code', then click 'Download ZIP'. An 'iot-vehicle-counter-main.zip' file will be downloaded. Unzip this file to access all of the project files in the unzipped directory.
The web app project files are stored within the 'Web_Application' directory.
To utilize the API endpoint that was previously created, the app relies on an environment variable, stored in an '.env' file that defines the API URL.
Open the 'env.example' file in a text editor, then copy and paste the device API URL as the value for the existing REACT_APP_DEVICES_API_URL variable. Save the updated file as '.env'.
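The resulting '.env' file contains a single line. For example, with a placeholder URL (the value should match the endpoint that returned device data during the earlier API test):
REACT_APP_DEVICES_API_URL=https://<APP-ID>.execute-api.<Region>.amazonaws.com/devices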
2. Compile the app source code
The app is built using the React framework, and its source code must be compiled and bundled into static files that can be hosted on a server. This is accomplished using Node.js and the app's build command.
From an open command terminal, navigate into the 'Web_Application' folder, then enter the command 'npm run build'.
The app is compiled and its static files are available in a newly created 'build' folder within the 'Web_Application' folder.
3. Host the app files in an AWS S3 bucket
AWS S3 is a cloud storage service that can also be configured to host static websites that are publicly accessible.
From the AWS Console, navigate to the S3 service, then click 'Create bucket'.
In the 'General configuration' section, input a unique name as the 'Bucket Name' (ex. 'iot-vehicle-counter-app'). Then click 'Create bucket'.
The newly created bucket will show up in the list of AWS buckets. Click the name of the bucket in the list to open the bucket dashboard.
Upload the app website files by clicking the 'Upload' button. Then click 'Add files' and select the individual files within the app 'build' folder. Click 'Add folder' and select the 'static' folder within the 'build' folder. Click the 'Upload' button to add these files into the bucket.
Next, configure the permissions of the bucket to allow public access to its files. Click the 'Permissions' tab of the bucket dashboard.
In the 'Block public access' section, click 'Edit', then uncheck the 'Block all public access' box. Click 'Save changes'.
In the 'Bucket policy' section, click 'Edit', then copy and paste the following into the Policy input, replacing Bucket-Name with the name of the bucket, and click 'Save changes':
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::Bucket-Name/*"
]
}
]
}
Finally, click on the 'Properties' tab from the bucket dashboard. In the 'Static hosting website' section, click 'Edit'.
Select 'Enable' for 'Static website hosting', then input 'index.html' as the 'Index document'. Click 'Save changes'.
The 'Static website hosting' section is updated to include a Bucket website endpoint in the format 'http://<Bucket-Name>.s3-website.<Region>.amazonaws.com'.
Copy and paste this URL into a browser to navigate to the app website.
The website can be used by end users to monitor the vehicle counts of any devices that are deployed and sending information to AWS. As additional devices are deployed, they will also be viewable within the app.
Future Development
This project has demonstrated the capabilities of an IoT device that can detect vehicles and send data to the cloud, along with a web app that displays vehicle counts to an end user. Future upgrades to this project may include:
- Solar Power: The device is powered by a LiPo battery, which requires periodic recharging. Integrating a solar panel to replenish the battery would let the device run indefinitely and eliminate any charging downtime.
- Detect Vehicle Type: Different vehicle types, such as cars or trucks, contribute varying degrees of road wear. The vehicle detection model can be further trained to identify and count specific types of vehicles.
- Automatic Alerts: End users can use the web app to view the total vehicle counts for any monitored road. Instead of using the web app, AWS services can be configured to automatically send an email or text message to users when the count reaches a certain threshold.
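As a sketch of the alerting idea only (not part of the current build), a Lambda function could check the stored count and publish to an Amazon SNS topic when a threshold is crossed; the topic ARN and threshold here are hypothetical:
# Hypothetical alert check: notify users through SNS when a count threshold is crossed
import boto3

ALERT_TOPIC_ARN = "arn:aws:sns:<Region>:<Account-ID>:road-maintenance-alerts"  # hypothetical topic
SERVICE_THRESHOLD = 5000000  # hypothetical total-vehicle threshold

def lambda_handler(event, context):
    if event.get("vehicle_count", 0) >= SERVICE_THRESHOLD:
        boto3.client("sns").publish(
            TopicArn=ALERT_TOPIC_ARN,
            Subject="Road maintenance threshold reached",
            Message="%s has reached %d total vehicles." % (event.get("road_name"), event["vehicle_count"]),
        )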