This project develops a prototype that tackles the difficulty of finding a parking spot in a big city. As urban populations grow, it is becoming increasingly difficult to find parking spots for everyone. Every day, millions of people struggle for a place to park, circling the block multiple times to find an empty spot. This behavior not only frustrates daily commuters but also pollutes the environment. Studies also reveal that searching for parking creates traffic congestion: it is estimated that 30 percent of the cars circling a city at any given time are doing so because their drivers are looking for parking.
The solution

The idea is to make street parking smarter by installing cameras with image-processing and communication capabilities at strategic points such as lamp posts and building facades. These cameras would collect valuable information about parking spot occupancy. To protect people's privacy, all image processing is done locally, and only the list with the occupancy status of the parking spots is transmitted to the central server. This way no one is able to access the images from the camera, since the system is not technically capable of transmitting them.
The parking occupancy information would then be centralized on a server and made available to consumers through an API. Further developments could include a mobile app that uses that API and shows the positions of the free parking spaces on a map; however, mobile app development is not covered in this project.
Since it is not feasible to prototype on real streets, this project uses toy cars and a sheet of paper with parking spot markings to simulate the real environment.
Solution architecture

This system is composed of two sub-systems:
- Embedded parking spot detection system - composed of the following components: camera, image processor (Raspberry Pi), and embedded decision and communication system (Arduino MKR FOX 1200)
- Server - composed of the following components: Sigfox backend server, database, and web hosting service
The left side of Figure 1 shows the embedded parking spot detection system; in a real-world deployment, this is the part that would be placed on the streets. It communicates through Sigfox with the servers depicted on the right side of Figure 1.
Since processing an image is complex, it is necessary to use an independent image-processing system (Raspberry Pi) that translates the image feed into a list of booleans, where each boolean represents the occupancy status of one parking spot. This list is transmitted over a serial connection through a USB cable to the embedded decision and communication system (Arduino MKR FOX 1200).
The solution is also prepared to receive parking spot occupancy information from other types of sensors, such as infrared sensors. However, this capability is not used in this tutorial.
The messages sent through Sigfox are available on the Sigfox backend server. However, we need to forward those messages to the web hosting and database service dedicated to this solution, in order to store the information in a convenient way and make it accessible through an HTTP REST API to other systems, such as a smartphone app. The communication between the Sigfox backend and the web hosting service is also made through an HTTP REST API developed by Sigfox.
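As a rough illustration of the web-host side, the sketch below shows a minimal callback receiver using only Python's standard library. This is not the project's actual server code; the JSON field names `device` and `data` are assumptions based on a typical Sigfox callback configuration and may differ from the real schema:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_callback(body):
    """Extract the device id and raw hex payload from a Sigfox-style
    JSON callback body (field names are an assumption, not the
    project's actual schema)."""
    msg = json.loads(body)
    return msg["device"], msg["data"]

class CallbackHandler(BaseHTTPRequestHandler):
    """Accepts POST callbacks forwarded by the Sigfox backend."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        device, payload = parse_callback(self.rfile.read(length))
        # A real server would store (device, payload) in the database here.
        self.send_response(200)
        self.end_headers()

# To run: HTTPServer(("0.0.0.0", 8000), CallbackHandler).serve_forever()
```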
Note: This tutorial only explains the system up to the point where the occupancy information reaches the Sigfox backend server. If you want to deploy the database and server, the code is already available in the folder parkingalot_server in the GitHub repository attached at the end of this tutorial.
First we want to build an image-processing system on the Raspberry Pi that receives as input an image from the Raspberry Pi camera module and sends a list of parking space occupancies to the Arduino through the serial port (USB).
You have to set up a Raspberry Pi with Raspbian OS and install the custom Python library using these commands in the terminal:
wget http://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/robot/resources/imgproc.zip
unzip imgproc.zip
cd library
sudo make install
cd ..
For more information on the library installation, click here.
After the library installation you are ready to code the image-processing system. Open a new file in a text editor and name it, for example, parking_a_lot.py. The first lines of code import the libraries the system needs:
from imgproc import *
import struct
import serial
The next section of the code establishes a serial connection with the Arduino. You should change the port name /dev/ttyUSB0 according to the port name assigned to your Arduino.
port = "/dev/ttyUSB0"
s1=serial.Serial(port,9600,
parity=serial.PARITY_NONE,
stopbits=serial.STOPBITS_ONE,
bytesize=serial.EIGHTBITS,
dsrdtr=False,
rtscts=False,
timeout=1)
s1.reset_input_buffer()
s1.reset_output_buffer()
Now you are going to write the code that captures an image from the camera and displays it in a new window:
my_camera=Camera(960,720)
my_image = my_camera.grabImage()
# open a view setting the view to the size of the captured image
my_view = Viewer(my_image.width, my_image.height, "Parking a lot")
# display the image on the screen
my_view.displayImage(my_image)
Next you initialize several arrays. The first four arrays contain image coordinates in pixels, and the last four help determine whether the parking spots are occupied:
- The x_folha and y_folha arrays define the image coordinates of the corners of the paper sheet with parking spot markings that emulates a parking lot. These coordinates should be adjusted by trial and error to match your viewing angle.
- The x_pos and y_pos arrays hold the image coordinates of the parking spaces in the image. This code monitors 17 parking spaces, so the size of each array is 17. These coordinates should be adjusted by trial and error to match your viewing angle and the position of your parking spaces on the paper sheet.
- The empty array stores the RGB values of the pixels of each parking space when the space is empty.
- The counter array helps the system make a stable prediction of parking space occupancy by slowing down changes. It will be explained later how we make the system ignore slight, temporary changes.
- The prev and space arrays store the occupancy status of the parking spaces for the previous and current image-processing cycles, respectively.
#Paper sheet corners position
x_folha = [285,670,370,600]
y_folha = [705,705,452,452]
#Parking spaces position
x_pos = [350,360,370,375,380,385,390,580,585,590,595,600,605,605,610,610,615]
y_pos = [585,560,535,515,495,475,458,458,475,495,515,535,560,585,615,645,685]
#first iteration
empty = [[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0]]
counter = [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
prev = [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
space=[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
Now is a good time to introduce the algorithm that identifies free spaces in the image. It works by analyzing a set of pixels at each parking space location: a 10 by 10 matrix of pixels in the middle of the parking space. When the script first starts, it captures an image and takes the mean color value of each set of pixels corresponding to each parking space. At this first capture we assume that all parking spaces are empty, and store each mean color in the empty array. Every time we capture a new image, we compare each parking space's mean color with its empty mean color; if the difference between the two colors surpasses a certain value, the parking space is considered occupied, otherwise it is considered empty.
With this in mind, we have to determine the mean color of each parking space and store it in the empty array. This segment of code is responsible for that functionality:
#Empty space mean color determination
for i in range(17):
    x=x_pos[i]
    y=y_pos[i]
    value=[0,0,0]
    for j in range(10):
        for k in range(10):
            value[0]=value[0]+my_image[x+j,y+k][0]
            value[1]=value[1]+my_image[x+j,y+k][1]
            value[2]=value[2]+my_image[x+j,y+k][2]
    value[0]=value[0]/100
    value[1]=value[1]/100
    value[2]=value[2]/100
    empty[i][0]=value[0]
    empty[i][1]=value[1]
    empty[i][2]=value[2]
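The nested loops above can be factored into two small helpers, which makes the patch averaging and the occupancy test easier to reason about and to unit-test. The names mean_patch_color and is_occupied are mine, not part of the original script; the image object can be anything indexable as img[x, y] -> (r, g, b):

```python
def mean_patch_color(img, x, y, size=10):
    """Mean RGB color over a size x size patch whose corner is (x, y)."""
    total = [0, 0, 0]
    for j in range(size):
        for k in range(size):
            pixel = img[x + j, y + k]
            for c in range(3):
                total[c] += pixel[c]
    return [t // (size * size) for t in total]

def is_occupied(current, empty, threshold=100):
    """A spot is occupied when its color drifts far from the empty baseline,
    using the same sum-of-absolute-differences test as the main script."""
    return sum(abs(c - e) for c, e in zip(current, empty)) > threshold
```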
Up to this point, all the Python code is executed only once, since it deals with setup and calibration. From now on, the code runs inside an infinite loop so it executes continuously and can determine the parking space occupancy in real time.
The first part of the loop determines the mean color of the set of pixels representing each parking space and compares it with the mean color recorded when the parking space was empty and the system was calibrated. If the difference between the current color and the empty color surpasses a certain threshold, the parking space is considered occupied:
while 1:
    my_image = my_camera.grabImage()
    cur = [[0,255,0],[0,255,0],[0,255,0],[0,255,0],[0,255,0],[0,255,0],[0,255,0],[0,255,0],[0,255,0],[0,255,0],[0,255,0],[0,255,0],[0,255,0],[0,255,0],[0,255,0],[0,255,0],[0,255,0]]
    state = [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
    #space mean color determination
    for i in range(17):
        x=x_pos[i]
        y=y_pos[i]
        value=[0,0,0]
        for j in range(10):
            for k in range(10):
                value[0]=value[0]+my_image[x+j,y+k][0]
                value[1]=value[1]+my_image[x+j,y+k][1]
                value[2]=value[2]+my_image[x+j,y+k][2]
        value[0]=value[0]/100
        value[1]=value[1]/100
        value[2]=value[2]/100
        if (abs(value[0]-empty[i][0])+abs(value[1]-empty[i][1])+abs(value[2]-empty[i][2]))>100:
            cur[i][0]=255
            cur[i][1]=0
            state[i]=1
To visualize the result of the image processing, the image is then marked up. The first markup shows the paper sheet corner positions as blue squares. The second markup shows the parking spot positions and their occupancy status: a green square over an empty parking space and a red square over an occupied one. The marked-up image is displayed in real time in a new window while the Python script is running.
This code is responsible for the markup and displaying the image in real time:
    #paper corners markup
    for i_f in range(4):
        x_f=x_folha[i_f]
        y_f=y_folha[i_f]
        for j_f in range(10):
            for k_f in range(10):
                my_image[x_f+j_f,y_f+k_f]=0,0,250
    #parking space markups
    for i in range(17):
        x=x_pos[i]
        y=y_pos[i]
        for j in range(10):
            for k in range(10):
                my_image[x+j,y+k]=cur[i][0],cur[i][1],0
    # display the image on the screen
    my_view.displayImage(my_image)
Finally, we have to determine whether changes to the occupancy status are permanent or only temporary, avoiding false positives and false negatives. A change in the occupancy status is therefore only registered for transmission to the Arduino when it is maintained through four consecutive cycles. When the occupancy list determined by the previous algorithm changes, the system sends the complete updated list of parking space statuses via serial (USB).
    #process change
    change=0
    for i in range(17):
        if counter[i]==0:
            if prev[i]!=state[i]:
                counter[i]=1
        elif counter[i]<4:
            if prev[i]!=state[i]:
                counter[i]=0
            else:
                counter[i]=counter[i]+1
        else:
            if prev[i]==state[i]:
                space[i]=state[i]
                change=1
            counter[i]=0
    prev=state
    #Send changes to Arduino
    if change==1:
        s1.write(struct.pack('>B',0xf0))
        s=0
        end=0
        while end==0:
            val=0x00
            for k in range(7):
                val=val|(state[s]<<(6-k))
                if s==16:
                    end=1
                    break
                s=s+1
            s1.write(struct.pack('>B',val))
        s1.write(struct.pack('>B',0xff))
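The framing used above (a 0xf0 start byte, the 17 spot states packed 7 per byte MSB-first, and a 0xff end byte) can also be expressed as a standalone encoder, paired with the decoder the Arduino side needs. This is my own sketch of the protocol as implemented in the script above, not code from the repository:

```python
import struct

START, END = 0xf0, 0xff

def encode_states(states):
    """Pack a list of 0/1 spot states into the framed byte protocol:
    START, then 7 states per byte (in bits 6..0, MSB-first), then END."""
    out = [START]
    for i in range(0, len(states), 7):
        chunk = states[i:i + 7]
        val = 0
        for k, bit in enumerate(chunk):
            val |= bit << (6 - k)
        out.append(val)
    out.append(END)
    return struct.pack('>%dB' % len(out), *out)

def decode_states(frame, n_spots=17):
    """Inverse of encode_states; returns the list of spot states."""
    assert frame[0] == START and frame[-1] == END
    states = []
    for val in frame[1:-1]:
        for k in range(7):
            states.append((val >> (6 - k)) & 1)
    return states[:n_spots]
```

For 17 spots this produces a 5-byte frame: one start byte, three data bytes (the last carrying only 3 meaningful bits), and one end byte.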
After developing the code you can run and test it by executing it with Python 2.
The complete Python script to run on the Raspberry Pi is available in the folder parkingalot_imageproc in the GitHub repository attached at the end of this tutorial.
Now you have to implement a system that receives the list of parking space occupancies from the image-processing system through the serial port and, if the occupancy status changed, sends it through Sigfox.
This project was actually coded for the Snootlab Akeru Beta 3.3 board; however, you should be able to adapt it easily to the Arduino MKR FOX 1200.
To simplify the communication between the Raspberry Pi and the Arduino MKR FOX 1200 and make it more robust, a C++ module was developed for the Arduino that handles the reception of serial messages using a dedicated protocol, which is also implemented in the image-processing Python script.
One of the files of this module, serialcom.h, defines the module's interface, which has only three functions:
- start_serial() - starts a new serial communication session. It is intended to be executed only once, during system initialization.
- receive_serial() - returns 1 if a valid message was received, and stores the message content in the buffer.
- unknown_timed_request() - should be run periodically, for example once every main loop iteration, to make sure the image-processing system is warned periodically whenever the Arduino does not know the current occupancy status list of the parking lot.
The file serialcom.cpp implements this module; it is available, together with the serialcom.h file, in the folder parkingalot_akeru in the GitHub repository attached at the end of this tutorial.
The main file of the Arduino part of this project is called parkingalot_akeru.ino, and it is available together with the serial communication module files in the folder parkingalot_akeru in the GitHub repository attached at the end of this tutorial. You should adapt this file to work with your specific Sigfox hardware; the main features of the code are explained in the next paragraphs.
The main loop of this Arduino code starts by checking whether a new message was received from the serial port; if so, it stores the parking lot occupancy information from that message in an array called space_img. After that it checks the status of the IR sensor. As already mentioned, an IR sensor may be used to help determine parking space occupancy by providing another source of information, but it is not used in this tutorial.
After receiving new occupancy data, the system checks whether there are any changes that should be transmitted via Sigfox to the server to update the parking occupancies. If the information received in that loop iteration changed the occupancy scenario in any way, it immediately transmits a new message over the Sigfox network.
void loop() {
  if(receive_serial(buf, N_BYTES)){
    byte_to_bool(buf, space_img);
  }
  if(!digitalRead(SENSOR)){
    space_sensor[6]=1;
  }else{
    space_sensor[6]=0;
  }
  downlink_request();
  if(check_changes()){
    bool_to_byte(buf_sig, space);
    akeru.sendPayload(akeru.toHex(buf_sig, N_BYTES_SIG));
  }
  unknown_timed_request();
}
Testing

To test this system, program the Arduino with the provided code, making the appropriate modifications to all Sigfox-related code so that it works with your specific hardware, such as the Arduino MKR FOX 1200. Then connect the Arduino to the Raspberry Pi through USB and run the Python script. After this step you should see new messages arriving at the Sigfox backend every time you change the parking occupancy by moving a toy car on the paper sheet simulating the car park.