I love the idea of smart cars, but it's hard for me to justify purchasing a whole new car just to get a couple of bells and whistles. For the time being, I'm stuck with my "dumb" car, but that doesn't mean I can't try and make it smarter myself!
The term "Smart Car" can have dozens of different meanings depending on who you ask. So let's start with my definition of a smart car:
- Touchscreen interface
- Backup camera that lets you know if an object is too close
- Basic information about the car, such as fuel efficiency
- Maybe bluetooth connectivity?
I'm not sure which, if any, of those items I'll have any success with, but I guess we'll find out.
THE BACKUP CAMERA
The first obvious addition to our smart car wannabe is a backup camera. There are many kits out there that make adding a backup camera pretty simple, but most of them require making modifications to the car itself, and since I'm only testing a proof of concept, I don't want to start unscrewing and drilling into my car.
Regardless, I went ahead and ordered a cheap $13 backup camera and a USB powered LCD screen.
One caveat with cameras like these is that they require an external power source. Generally, you're advised to wire them to one of the reverse lights of your car so the camera automatically powers on when the car is in reverse. Since I don't want to modify my car at this time, I'm just going to wire it up to some batteries. And I'll mount it to the license plate using trusty old duct tape!
I ran an RCA cable from the camera to the dashboard and connected it to my 5" LCD. This specific LCD can be powered through USB, so I plugged it into a USB lighter adapter (most older cars have lighter sockets).
After starting the car, the screen immediately came on and I could see the image from the camera. Works as advertised! This would be a good solution for anyone who just wants to add a backup camera to their car without any extra bells and whistles. I think I can do better, however.
Enter the Raspberry Pi. The Pi is the perfect platform for a smart car, because it's basically a mini computer with tons of inputs and outputs. When connecting a camera to the Pi, you can use practically any generic USB webcam, or you can go with the Pi Camera. Neither camera requires a separate power source; just make sure you have enough cable to reach the back of the car.
I opted for the Pi Camera because it has higher throughput than a USB camera. Again, I just duct taped the camera to the license plate, ran the flat cable to the Pi at the front of the car, and then connected it to a 7" touchscreen. Both the Pi and the touchscreen can be powered by the USB adapter in the car.
Turning on the car, both the Pi and the screen powered up. One obvious downside is the time the Pi needs to boot...something I'll have to consider later. To view the Pi cam, I opened up a terminal and ran a simple command (one that can be set to run automatically at boot later):
raspivid -t 0
or
raspivid -t 0 --mode 7
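As a side note, if you eventually want the camera feed to launch itself at boot (to help hide that slow startup), one simple approach is the same rc.local trick used later in this article for bluetooth. This is just a sketch; adjust the raspivid options to whatever worked for your camera:

```shell
# Append to /etc/rc.local, BEFORE the "exit 0" line,
# so the camera feed starts automatically at boot.
# The trailing "&" keeps it from blocking the rest of the boot process.
raspivid -t 0 --mode 7 &
```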
After hitting enter, a feed of the video camera popped up! The nice thing about video on the Pi is that you can analyze it, and maybe even set up an alert system if an object gets too close! So let's work on that next!
METHOD 1
When it comes to commercial backup cameras, there are generally two versions that I've seen. The first uses a static overlay with color ranges so that you can visually determine how close an object is. The second method uses a camera in conjunction with some type of sensor that can sense how close an object is to the car and then alerts you when something is too close.
Since the first method seems easier, let's try that one first. Essentially, it's just an image overlaid on top of a video stream, so let's see if recreating it is as easy as it sounds. The first thing we'll need is a transparent image overlay. Here's the one I used (also found in my GitHub repository):
The image above is exactly 640x480, which just so happens to be the same resolution my camera will be streaming at. This was done intentionally, but feel free to change the image dimensions if you are streaming at a different resolution.
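If you're streaming at a different resolution, or just want to roll your own overlay, PIL can generate one from scratch. Here's a small sketch that draws three colored distance lines on a transparent 640x480 canvas; the zone heights, colors, and the `make_overlay` helper name are my own choices, not an exact recreation of the image above:

```python
from PIL import Image, ImageDraw

def make_overlay(width=640, height=480, path=None):
    # fully transparent RGBA canvas (alpha = 0 everywhere)
    img = Image.new("RGBA", (width, height), (0, 0, 0, 0))
    draw = ImageDraw.Draw(img)
    # three guide lines: far (green), middle (yellow), near (red)
    zones = [(int(height * 0.60), (0, 255, 0, 255)),
             (int(height * 0.75), (255, 255, 0, 255)),
             (int(height * 0.90), (255, 0, 0, 255))]
    for y, color in zones:
        draw.line([(width // 4, y), (3 * width // 4, y)], fill=color, width=5)
    if path:
        img.save(path)
    return img

overlay = make_overlay()
print(overlay.size, overlay.mode)  # → (640, 480) RGBA
```

Calling `make_overlay(1280, 720, 'bg_overlay.png')` would produce a matching overlay for a 720p stream.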
Next we'll create a Python script that uses the PIL imaging library and the PiCamera library (if you are not using a Pi Camera, adjust the code for your video input). I just named the file image_overlay.py
import picamera
from PIL import Image
from time import sleep

# Start a loop with the Pi camera
with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.framerate = 24
    camera.start_preview()
    # load the transparent overlay and layer it on top of the preview
    img = Image.open('bg_overlay.png')
    img_overlay = camera.add_overlay(img.tobytes(), size=img.size)
    img_overlay.alpha = 128
    img_overlay.layer = 3
    # keep the script alive so the preview stays up
    while True:
        sleep(1)
After saving it, I tested it out on a small scale by running "python image_overlay.py" and using a toy car to see how it worked. It worked like a charm, and there was practically no lag!
Loading it in the car and running the program, the results were equally charming! One very important thing to note, however, is that you should take particular care in calibrating your camera to make sure the base of the video view is as close to your car bumper as you can get it. As you can tell in the pictures below, the camera was facing too high, so the test object was actually much further away than the camera suggested.
METHOD 2
The Method 1 test was very successful, but it was also very basic. It would be nice to have a system that can detect how far the object is from the car and notify you if it gets too close. As I mentioned before, most cars that have that feature use external sensors that can detect objects. I'm not really keen on adding any other external devices to my car, so I'm going to see if I can detect objects using computer vision.
I can use OpenCV with Python as my computer vision library. This will allow me to analyze what's in the image and set parameters for whatever is found. The idea is to take the video footage and set a boundary near the bottom (close to the car bumper) as the "alarm zone", then detect whatever large objects are in the footage. If the bottom-most edge of an object enters the "alarm zone", the program should send an alert.
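Stripped of the camera and detection machinery, that alarm-zone idea boils down to a simple geometric check, which can be sketched on its own. Each detected object is represented here by its bounding box (x, y, width, height), with y growing downward as in OpenCV image coordinates; the function names are my own:

```python
def in_alarm_zone(box, y_alert):
    """Return True if the bottom edge of a bounding box crosses the alarm line."""
    x, y, w, h = box
    return (y + h) > y_alert

def check_objects(boxes, y_alert):
    """Return only the boxes whose bottom edge has entered the alarm zone."""
    return [b for b in boxes if in_alarm_zone(b, y_alert)]

# imagine a 320x240 frame with the alarm line at y = 220
boxes = [(50, 100, 40, 60), (200, 180, 60, 50)]  # (x, y, w, h)
print(check_objects(boxes, 220))  # → [(200, 180, 60, 50)]
```

The first box bottoms out at y = 160 and is ignored; the second reaches y = 230, past the line, so it triggers.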
To serve as the alert sound, I'm going to wire a piezo buzzer to the Raspberry Pi by connecting the positive leg to pin 22 and the negative leg to a ground pin.
Before starting on the code, we have to install OpenCV on the Pi. Luckily, the Pi can do this through Python's "pip" command:
pip3 install opencv-python
Once OpenCV is installed, we can create a new Python file and start on the code. For the fully documented code, you can visit my GitHub repository. I just named my file car_detector.py
import time
import cv2
import numpy as np
from picamera.array import PiRGBArray
from picamera import PiCamera
import RPi.GPIO as GPIO

# set up the buzzer on GPIO pin 22
buzzer = 22
GPIO.setmode(GPIO.BCM)
GPIO.setup(buzzer, GPIO.OUT)

camera = PiCamera()
camera.resolution = (320, 240) # a smaller resolution means faster processing
camera.framerate = 24
rawCapture = PiRGBArray(camera, size=(320, 240))
kernel = np.ones((2, 2), np.uint8)
time.sleep(0.1)

for still in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    GPIO.output(buzzer, False)
    image = still.array

    # create a detection area
    widthAlert = np.size(image, 1) # get width of image
    heightAlert = np.size(image, 0) # get height of image
    yAlert = (heightAlert // 2) + 100 # determine y coordinate for the alarm zone
    cv2.line(image, (0, yAlert), (widthAlert, yAlert), (0, 0, 255), 2) # draw a line to show the area

    # color range for the objects we want to detect
    lower = np.array([1, 0, 20], dtype="uint8")
    upper = np.array([60, 40, 200], dtype="uint8")

    # use the color range to create a mask for the image and apply it to the image
    mask = cv2.inRange(image, lower, upper)
    output = cv2.bitwise_and(image, image, mask=mask)

    dilation = cv2.dilate(mask, kernel, iterations=3)
    closing = cv2.morphologyEx(dilation, cv2.MORPH_CLOSE, kernel)
    edge = cv2.Canny(closing, 175, 175)
    contours, hierarchy = cv2.findContours(closing, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    threshold_area = 400
    if len(contours) != 0:
        for c in contours:
            # find the area of each contour
            area = cv2.contourArea(c)
            # weed out the contours that are smaller than our threshold
            if area > threshold_area:
                (x, y, w, h) = cv2.boundingRect(c)
                # find the center of the bounding box
                centerX = x + w // 2
                centerY = y + h // 2
                cv2.circle(image, (centerX, centerY), 7, (255, 255, 255), -1)
                # if the bottom of the object crosses into the alarm zone, sound the alert
                if (y + h) > yAlert:
                    cv2.putText(image, "ALERT!", (centerX - 20, centerY - 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)
                    GPIO.output(buzzer, True)

    cv2.imshow("Display", image)
    rawCapture.truncate(0)

    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        GPIO.output(buzzer, False)
        break
Alright, saving it and testing it out on a small scale, it did pretty well, though it detected a lot of unnecessary objects, and I noticed that it would sometimes detect shadows as objects.
Loading it up in the car and testing it out in a real world scenario, the results were surprisingly accurate! It was near perfect conditions, however. I don't know how the results would have varied if this were at night.
Overall, I was pleased and surprised by the results, but I wouldn't trust it in less optimal conditions. This isn't to say it wouldn't work; it's just basic code at the moment and could stand a lot more testing and debugging (hopefully by a reader!)
Of the two methods, method 2 was pretty cool, but method 1 is definitely more reliable across different situations. So if you were to build this for your car, I'd go with method 1.
Next, I'll try to tackle connecting to the car's OBDII port and see what I can extract!
CONNECTING TO ON BOARD DIAGNOSTICS (OBDII)
In the US, cars have been required to have an On Board Diagnostics port (OBDII) since 1996. Other countries adopted the same standard a bit later.
The OBDII port allows you to connect to the car and read information such as trouble codes, the VIN, speed, RPM, etc. Whenever your "Check Engine" light comes on, this is the port a mechanic plugs into to diagnose and clear the issue.
The first thing we need to do is connect an adapter to the port so that we can communicate with it from the Raspberry Pi. For those that aren't aware, the OBDII port is located under the dashboard beneath the steering wheel in most cars. If you search online for "OBDII adapters", you'll find that there are primarily two types: USB and bluetooth. USB adapters are more secure, but if you're aware of the vulnerabilities in bluetooth and how to combat them, you can use that type instead. For this guide, I'm going to be using bluetooth.
Connecting the bluetooth adapter to the port, you may notice that a light immediately illuminates on the adapter. This is because the OBD port has an "always on" 12v output. That means that the bluetooth adapter will be powered and active at all times, even when you're not in the car...hence the vulnerabilities.
With the bluetooth adapter in place, we can now connect our Pi to it. I've been using the Raspberry Pi 3 B+, which has bluetooth built in, so I don't need any other adapters. Just power on the Pi, open up a terminal, and launch the bluetooth controller.
bluetoothctl
Within the controller, you want to enter these commands in order (minus the # comments)
power on # ensures bluetooth is on
agent on # makes pairing persistent
default-agent
scan on # scans for bluetooth devices
# the OBDII adapter should read something
# like this - 00:00:00:00:00:00 Name: OBDII
# If it asks for a pin, the default pin is 1234
scan off #turn off scanning once your adapter has been found
pair <adapter mac address> # pair to your adapter's mac address
trust <adapter mac address> # keeps the pairing even after reboot
quit #exits out of bluetoothctl
At this point, the adapter should be connected and you should be back at your main terminal prompt. Since the OBD adapter is a serial device, before we can start talking to it, we need to bind it to a serial port on the Pi.
sudo rfcomm bind rfcomm0 <adapter mac address>
NOW we should finally be able to communicate with the adapter! A good program to use for communication is called "screen".
sudo apt-get install screen
screen /dev/rfcomm0
You should be presented with a blank screen. At this point we can start typing our commands. The first few commands are standard for setting up communications. Type each command (minus the # comments) and press enter. To find out more about these commands, you can visit this website.
atz #resets the device and returns the device ID
atl1 #enables line feeds
ath0 #disables headers (ath1 enables)
atsp0 #automatically determines communication method
With that done, the next command tells the adapter what information we want to extract. The command consists of two sets of hex values. The first set specifies the mode we want to use.
There is a great Wikipedia page that explains the codes, and I've attached a screenshot below. Since I'm interested in real-time data for this project, the mode I want is 01.
The second set of values is the Parameter ID (PID). There are almost 200 different PIDs available, and again they can be found on that Wikipedia page. This is where we can ask for temperatures, speed, RPM, etc. For this test, I'm going to ask for speed, so my hex value is 0D.
So my final value is 010D (as you can see from the chart above, RPM would be 010C), and I can type this into the terminal to get a result.
010D
But the result comes back as more hex. Below is the response I received.
41 0D 32 11
Here's a breakdown of what it means (here's a good website for more information):
41 - The response for our mode (01) command
0D - The response for our PID (0D) command
32 - The speed (in hex)
11 - Unused data byte
So our speed in hex is 32. Popping that into a hex-to-decimal converter, we get 50 km/h. Yes, the default speed values are in kilometers per hour. Converting to miles, we get about 31 mph.
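That decoding is easy to automate. Below is a small sketch that parses a mode 01 speed response like the one above; the function name is mine, and a real adapter may echo prompts or extra whitespace that you'd need to strip first:

```python
def parse_obd_speed(response):
    """Parse an ELM327-style response to a 010D (vehicle speed) query.

    Expects something like '41 0D 32 11' and returns (kph, mph)."""
    parts = response.split()
    # '41' echoes mode 01, '0D' echoes the PID we asked for
    if len(parts) < 3 or parts[0] != "41" or parts[1] != "0D":
        raise ValueError("not a valid speed response: " + response)
    kph = int(parts[2], 16)   # the third byte is the speed in km/h
    mph = kph * 0.621371      # convert kilometers per hour to miles per hour
    return kph, mph

kph, mph = parse_obd_speed("41 0D 32 11")
print(kph, round(mph))  # → 50 31
```

The same pattern extends to other PIDs; only the expected PID byte and the conversion formula change.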
IMPORTANT: To get the bluetooth adapter to connect automatically, there are a couple of extra steps that you need to take on the Raspberry Pi. The first is to edit rc.local
sudo nano /etc/rc.local
and add the following line before "exit 0"
rfcomm bind rfcomm0 <adapter mac address>
where "adapter mac address" is the mac address of your bluetooth adapter.
Finally, you will need to edit the bluetooth config file:
sudo nano /etc/systemd/system/dbus-org.bluez.service
Find the line that says "ExecStart=/usr/lib/bluetooth/bluetoothd", and change it to this:
ExecStart=/usr/lib/bluetooth/bluetoothd -C
ExecStartPost=/usr/bin/sdptool add SP
Ok, so that wasn't the easiest process, but at least now we know how to get data from the OBD port from scratch. Now all we need to do is write a program that automates all this and displays it in a graphical format!
MAKING A GRAPHICAL INTERFACE
Now that we have the data, we need a better, more interesting way to display it. Since I'm most familiar with Python, I'll be using that to manipulate the data. Online, I found a great library specifically for OBD connections called Python-OBD, so I installed it along with PySerial.
pip install pyserial
pip install obd
With that done, let's dive right in and see what we can do. For a basic test, I'm just going to write a simple program to display the car's RPM.
obd_hud_test.py
#import required libraries
import obd
#establish a connection with the OBD device
connection = obd.OBD()
#create a command variable
c = obd.commands.RPM
#query the command and store the response
response = connection.query(c)
#print the response value
print(response.value)
#close the connection
connection.close()
Saving it and testing it out, it outputs the car's current RPM!
Well, the python-OBD library works, but it's still not graphical. To make graphical interfaces in Python, there are several options to choose from. Pygame is quick and easy, so I'm just going to go with it for the purposes of this tutorial. What's even more convenient is that Pygame comes preinstalled on the Raspberry Pi, so there's no need to install anything extra!
Alright, let's give this a shot. Here's my final code for displaying the HUD:
obd_hud_test.py
import pygame
from pygame.locals import *
import obd

pygame.init()

# connect to the OBD adapter asynchronously
# (fast=False improves compatibility with slower adapters)
# connection = obd.OBD() # synchronous alternative
connection = obd.Async(fast=False)

screen = pygame.display.set_mode((0, 0), pygame.FULLSCREEN)
screen_w = screen.get_width()
screen_h = screen.get_height()

# lay out three gauge circles across the screen
circle_y = screen_h / 2
circle1_x = screen_w * .25
circle2_x = screen_w * .5
circle3_x = screen_w * .75
circle_rad = (circle2_x - circle1_x) / 2

speed_text_x = screen_w * .25
speed_text_y = screen_h * .25
rpm_text_x = screen_w * .5
rpm_text_y = screen_h * .25
load_text_x = screen_w * .75
load_text_y = screen_h * .25

headerFont = pygame.font.SysFont("Arial", 50)
digitFont = pygame.font.SysFont("Arial", 50)

white = (255, 255, 255)
black = (0, 0, 0)
grey = (112, 128, 144)

speed = 0
rpm = 0
load = 0

def draw_hud():
    screen.fill(grey)
    pygame.draw.circle(screen, black, (int(circle1_x), int(circle_y)), int(circle_rad), 5)
    pygame.draw.circle(screen, black, (int(circle2_x), int(circle_y)), int(circle_rad), 5)
    pygame.draw.circle(screen, black, (int(circle3_x), int(circle_y)), int(circle_rad), 5)
    speed_text = headerFont.render("SPEED", True, black)
    rpm_text = headerFont.render("RPM", True, black)
    load_text = headerFont.render("LOAD", True, black)
    speed_text_loc = speed_text.get_rect(center=(speed_text_x, speed_text_y))
    rpm_text_loc = rpm_text.get_rect(center=(rpm_text_x, rpm_text_y))
    load_text_loc = load_text.get_rect(center=(load_text_x, load_text_y))
    screen.blit(speed_text, speed_text_loc)
    screen.blit(rpm_text, rpm_text_loc)
    screen.blit(load_text, load_text_loc)

def get_speed(s):
    global speed
    if not s.is_null():
        #speed = int(s.value.magnitude) # for kph
        speed = int(s.value.magnitude * 0.621371) # for mph

def get_rpm(r):
    global rpm
    if not r.is_null():
        rpm = int(r.value.magnitude)

def get_load(l):
    global load
    if not l.is_null():
        load = int(l.value.magnitude)

# watch the commands we care about and start the async loop
connection.watch(obd.commands.SPEED, callback=get_speed)
connection.watch(obd.commands.RPM, callback=get_rpm)
connection.watch(obd.commands.ENGINE_LOAD, callback=get_load)
connection.start()

running = True
while running:
    for event in pygame.event.get():
        if event.type == KEYDOWN:
            if event.key == K_ESCAPE:
                connection.stop()
                connection.close()
                running = False
        elif event.type == QUIT:
            connection.stop()
            connection.close()
            running = False
    draw_hud()
    speedDisplay = digitFont.render(str(speed), True, white)
    rpmDisplay = digitFont.render(str(rpm), True, white)
    loadDisplay = digitFont.render(" " + str(load) + " %", True, white)
    screen.blit(loadDisplay, (circle3_x - (circle3_x / 8), circle_y - 45))
    screen.blit(rpmDisplay, (circle2_x - (circle2_x / 8), circle_y - 45))
    screen.blit(speedDisplay, (circle1_x - (circle1_x / 8), circle_y - 45))
    pygame.display.flip()
Testing it out, it works like a charm!
ADDING GPS
Even though most smart devices have GPS integrated into them already, I thought that a dedicated GPS navigation system would be the perfect addition to our smart car setup.
A common GPS system generally consists of two parts: a GPS dongle to retrieve coordinates from satellites, and a mapping system that overlays those coordinates on a map. With the Raspberry Pi, you can use either a USB dongle or one that connects via the Pi's GPIO pins. You can get a good one for around $10-$40. Just MAKE SURE that the dongle you choose is compatible with the Raspberry Pi. Since I already had a USB GPS dongle, I'll be using that.
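Before involving any mapping software, it helps to know what the dongle itself actually produces: most USB GPS dongles emit plain-text NMEA sentences over a serial port, and the $GPGGA sentence carries the position fix. Here's a minimal parser sketch for just the latitude/longitude fields (the `parse_gga` name is mine, and a real setup should also validate the NMEA checksum, which I skip here):

```python
def parse_gga(sentence):
    """Extract (latitude, longitude) in decimal degrees from a $GPGGA sentence."""
    fields = sentence.split(",")
    if not fields[0].endswith("GGA"):
        raise ValueError("not a GGA sentence")
    # NMEA encodes latitude as ddmm.mmmm and longitude as dddmm.mmmm
    lat = int(fields[2][:2]) + float(fields[2][2:]) / 60.0
    if fields[3] == "S":
        lat = -lat
    lon = int(fields[4][:3]) + float(fields[4][3:]) / 60.0
    if fields[5] == "W":
        lon = -lon
    return lat, lon

# a commonly cited example GGA sentence
lat, lon = parse_gga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
print(round(lat, 4), round(lon, 4))  # → 48.1173 11.5167
```

In practice you'd read these sentences line by line from the dongle's serial device, but gpsd (below) handles that plumbing for us.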
In most cases, plugging in the GPS dongle should just work, but I had to do a bit of troubleshooting. First I installed the GPSD packages, then I ran "cgps -s" to check the status of the connection.
sudo apt-get install gpsd gpsd-clients
cgps -s
As you can see from the image above, I wasn't getting any feedback. I ended up editing the gpsd config file, telling it not to autodetect the connection and instead specifying which device to use.
sudo nano /etc/default/gpsd
ORIGINAL:
# Default settings for the gpsd init script and the hotplug wrapper.
# Start the gpsd daemon automatically at boot time
START_DAEMON="true"
# Use USB hotplugging to add new USB devices automatically to the daemon
USBAUTO="true"
# Devices gpsd should collect to at boot time.
# They need to be read/writeable, either by user gpsd or the group dialout.
DEVICES=""
# Other options you want to pass to gpsd
GPSD_OPTIONS=""
AFTER CHANGES:
# Default settings for the gpsd init script and the hotplug wrapper.
# Start the gpsd daemon automatically at boot time
START_DAEMON="true"
# Use USB hotplugging to add new USB devices automatically to the daemon
USBAUTO="false"
# Devices gpsd should collect to at boot time.
# They need to be read/writeable, either by user gpsd or the group dialout.
DEVICES="/dev/ttyUSB0"
# Other options you want to pass to gpsd
GPSD_OPTIONS="-n"
GPSD_SOCKET="/var/run/gpsd.sock"
Saving it, and restarting the GPS service, I tested it again and finally started seeing results!
sudo /etc/init.d/gpsd restart
cgps -s
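With gpsd running, our own programs can get at the coordinates too: gpsd serves JSON reports over a local socket (port 2947), and position fixes arrive as "TPV" objects. Here's a sketch of how that might look in Python; the socket plumbing assumes a standard gpsd setup, the function names are mine, and only the parsing half is exercised below:

```python
import json
import socket

def extract_position(report_line):
    """Return (lat, lon) from one gpsd JSON report line,
    or None if it isn't a TPV report carrying a fix."""
    report = json.loads(report_line)
    if report.get("class") == "TPV" and "lat" in report and "lon" in report:
        return report["lat"], report["lon"]
    return None

def watch_gpsd(host="127.0.0.1", port=2947):
    """Connect to a running gpsd and yield position fixes (untested sketch)."""
    with socket.create_connection((host, port)) as sock:
        # ask gpsd to start streaming JSON reports
        sock.sendall(b'?WATCH={"enable":true,"json":true}\n')
        buf = b""
        while True:
            buf += sock.recv(4096)
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                pos = extract_position(line.decode())
                if pos:
                    yield pos

# parsing example with a canned TPV report
sample = '{"class":"TPV","mode":3,"lat":35.1234,"lon":-89.9876,"speed":0.0}'
print(extract_position(sample))  # → (35.1234, -89.9876)
```

This is essentially what mapping software like Navit does under the hood when you point it at gpsd.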
Now for the mapping system. Most mobile mapping systems require a wireless data connection to download maps, but a data connection for the Raspberry Pi is an added monthly expense that I don't want to incur. Looking online for an offline mapping system compatible with the Raspberry Pi, there weren't many options; really, the only one that would work is called Navit.
After updating the Pi, you can install Navit and Espeak (text-to-speech) through "apt-get"
sudo apt-get update
sudo apt-get install navit espeak
Once it's done installing, we can create a directory for the configuration XML file and copy over the default one from the program directory.
mkdir ~/.navit
cp /etc/navit/navit.xml ~/.navit/navit.xml
There's lots of editing that we can do with this XML file to configure the software.
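For example, two edits you'll almost certainly want are pointing the position source at gpsd and pointing Navit at an offline map file. The fragment below is a sketch: the attribute names follow Navit's default config, but the map path is just a placeholder for wherever you store your downloaded map.

```xml
<!-- fragments inside ~/.navit/navit.xml -->

<!-- use the local gpsd daemon as the position source -->
<vehicle name="Vehicle" profilename="car" enabled="yes" active="1"
         source="gpsd://localhost" gpsd_query="w+xj"/>

<!-- point Navit at a downloaded offline map (placeholder path) -->
<mapset enabled="yes">
    <map type="binfile" enabled="yes" data="/home/pi/maps/my_region.bin"/>
</mapset>
```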
MORE TO COME!!!