Over the last couple of years I have had trouble waking up in the morning. I have tried many solutions, such as alarms and forcing myself to get up and get moving, but none of them have worked. So I decided to build one solution to rule them all: a device that drops a pillow on my head if I am not out of bed five minutes after my alarm goes off.
Electronics

The final electronics I used are pretty simple: a Raspberry Pi 3 B, a Raspberry Pi Camera, three male-to-female jumper cables, a micro servo, and a battery pack.
I connected the servo's three wires to physical pins 4, 6, and 11 on the Raspberry Pi (pin 4 is 5 V, pin 6 is ground, and pin 11 carries the control signal). I also plugged the camera's ribbon cable into the camera connector on the Raspberry Pi.
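If you want to double-check the wiring before getting into the software, a quick pulse test helps. This is a minimal sketch of my own (not part of the original build), assuming the signal wire is on physical pin 11 like mine; the duty-cycle value is typical for an SG90-style micro servo and may need tuning:

import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)   # use physical pin numbering
GPIO.setup(11, GPIO.OUT)   # signal wire on pin 11

pwm = GPIO.PWM(11, 50)     # 50 Hz is the standard hobby-servo frequency
pwm.start(7)               # roughly mid-travel on a typical micro servo
time.sleep(1)
pwm.ChangeDutyCycle(0)     # stop sending pulses so the servo doesn't jitter
pwm.stop()
GPIO.cleanup()

If the servo twitches to roughly its center position, the wiring is good.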
Code

Before I go into detail about the code and facial recognition portion of my project, I would like to give a huge shoutout to cytrontech on YouTube for posting this video that shows how to do basic facial recognition using OpenCV.
Before I started using my Raspberry Pi, I flashed a fresh image of the most up-to-date version of Raspberry Pi OS. I then installed OpenCV so that I could begin processing images. Once I confirmed that OpenCV was installed and fully up to date, I worked through the cytrontech video.
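To confirm the install before moving on, a quick version check is enough (a one-liner of my own, not from the video):

import cv2
print(cv2.__version__)  # any recent 4.x version should be fine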
The code consists of four files: face_shot.py, train_model.py, face_rec.py, and servo_move.py. The first two are unchanged from the original video, face_rec.py has a few lines of my own added, and servo_move.py is entirely new.
import cv2
import os

name = 'Suad'  # replace with your name
os.makedirs("dataset/" + name, exist_ok=True)  # create the output folder if it doesn't exist yet (small addition to the original script)

cam = cv2.VideoCapture(0)

cv2.namedWindow("press space to take a photo", cv2.WINDOW_NORMAL)
cv2.resizeWindow("press space to take a photo", 500, 300)

img_counter = 0

while True:
    ret, frame = cam.read()
    if not ret:
        print("failed to grab frame")
        break
    cv2.imshow("press space to take a photo", frame)

    k = cv2.waitKey(1)
    if k % 256 == 27:
        # ESC pressed
        print("Escape hit, closing...")
        break
    elif k % 256 == 32:
        # SPACE pressed
        img_name = "dataset/" + name + "/image_{}.jpg".format(img_counter)
        cv2.imwrite(img_name, frame)
        print("{} written!".format(img_name))
        img_counter += 1

cam.release()
cv2.destroyAllWindows()
This is the first file, face_shot.py. It is used to take pictures of your face and collect the data the model is trained on: run it, look at the camera, press the space bar to capture each image, and press Esc when you are done.
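For reference, the images land in a folder named after you, so after a handful of captures the dataset looks something like this:

dataset/
└── Suad/
    ├── image_0.jpg
    ├── image_1.jpg
    └── image_2.jpg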
#! /usr/bin/python
# import the necessary packages
from imutils import paths
import face_recognition
import pickle
import cv2
import os

# our images are located in the dataset folder
print("[INFO] start processing faces...")
imagePaths = list(paths.list_images("dataset"))

# initialize the list of known encodings and known names
knownEncodings = []
knownNames = []

# loop over the image paths
for (i, imagePath) in enumerate(imagePaths):
    # extract the person name from the image path
    print("[INFO] processing image {}/{}".format(i + 1, len(imagePaths)))
    name = imagePath.split(os.path.sep)[-2]

    # load the input image and convert it from BGR (OpenCV ordering)
    # to dlib ordering (RGB)
    image = cv2.imread(imagePath)
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    # detect the (x, y)-coordinates of the bounding boxes
    # corresponding to each face in the input image
    boxes = face_recognition.face_locations(rgb, model="hog")

    # compute the facial embedding for each face
    encodings = face_recognition.face_encodings(rgb, boxes)

    # loop over the encodings
    for encoding in encodings:
        # add each encoding + name to our set of known names and
        # encodings
        knownEncodings.append(encoding)
        knownNames.append(name)

# dump the facial encodings + names to disk
print("[INFO] serializing encodings...")
data = {"encodings": knownEncodings, "names": knownNames}
with open("encodings.pickle", "wb") as f:
    f.write(pickle.dumps(data))
This is the second file, train_model.py. It trains a model on the images you took with face_shot.py and saves the resulting encodings to encodings.pickle.
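If you want to sanity-check the output before moving on, a short snippet (my own addition, not from the video) can load the pickle and report what it contains:

import pickle

# load the encodings that train_model.py wrote and summarize them
with open("encodings.pickle", "rb") as f:
    data = pickle.load(f)
print("{} encodings for: {}".format(len(data["encodings"]), set(data["names"])))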
#! /usr/bin/python
# import the necessary packages
from datetime import datetime
from imutils.video import VideoStream
from imutils.video import FPS
import face_recognition
import imutils
import pickle
import time
import cv2

# da_time holds the trigger time; only its time-of-day portion is
# compared in the loop below
now = datetime.now()
da_time = datetime(2021, 4, 7, 12, 35, 00)
x = 0

# initialize 'currentname' to trigger only when a new person is identified
currentname = "unknown"
# determine faces from the encodings.pickle file created by train_model.py
encodingsP = "encodings.pickle"
# use this xml file
# https://github.com/opencv/opencv/blob/master/data/haarcascades/haarcascade_frontalface_default.xml
cascade = "haarcascade_frontalface_default.xml"

# load the known faces and embeddings along with OpenCV's Haar
# cascade for face detection
print("[INFO] loading encodings + face detector...")
data = pickle.loads(open(encodingsP, "rb").read())
detector = cv2.CascadeClassifier(cascade)

# initialize the video stream and allow the camera sensor to warm up
print("[INFO] starting video stream...")
vs = VideoStream(src=0).start()
#vs = VideoStream(usePiCamera=True).start()
time.sleep(2.0)

# start the FPS counter
fps = FPS().start()

# loop over frames from the video file stream
while True:
    # grab the frame from the threaded video stream and resize it
    # to 500px (to speed up processing)
    frame = vs.read()
    frame = imutils.resize(frame, width=500)

    # convert the input frame from (1) BGR to grayscale (for face
    # detection) and (2) from BGR to RGB (for face recognition)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # detect faces in the grayscale frame
    rects = detector.detectMultiScale(gray, scaleFactor=1.1,
        minNeighbors=5, minSize=(30, 30),
        flags=cv2.CASCADE_SCALE_IMAGE)

    # OpenCV returns bounding box coordinates in (x, y, w, h) order
    # but we need them in (top, right, bottom, left) order, so we
    # need to do a bit of reordering
    boxes = [(y, x + w, y + h, x) for (x, y, w, h) in rects]

    # compute the facial embeddings for each face bounding box
    encodings = face_recognition.face_encodings(rgb, boxes)
    names = []

    # loop over the facial embeddings
    for encoding in encodings:
        # attempt to match each face in the input image to our known
        # encodings
        matches = face_recognition.compare_faces(data["encodings"],
            encoding)
        name = "Unknown"  # if the face is not recognized, print Unknown

        # check to see if we have found a match
        if True in matches:
            # find the indexes of all matched faces then initialize a
            # dictionary to count the total number of times each face
            # was matched
            matchedIdxs = [i for (i, b) in enumerate(matches) if b]
            counts = {}

            # loop over the matched indexes and maintain a count for
            # each recognized face
            for i in matchedIdxs:
                name = data["names"][i]
                counts[name] = counts.get(name, 0) + 1

            # determine the recognized face with the largest number
            # of votes (note: in the event of an unlikely tie Python
            # will select the first entry in the dictionary)
            name = max(counts, key=counts.get)

            # if someone in your dataset is identified, print their name
            if currentname != name:
                currentname = name
                print(currentname)

        # update the list of names
        names.append(name)

    # loop over the recognized faces
    for ((top, right, bottom, left), name) in zip(boxes, names):
        # draw the predicted face name on the image - color is in BGR
        cv2.rectangle(frame, (left, top), (right, bottom),
            (0, 255, 0), 2)
        y = top - 15 if top - 15 > 15 else top + 15
        cv2.putText(frame, name, (left, y), cv2.FONT_HERSHEY_SIMPLEX,
            .8, (255, 0, 0), 2)

    # display the image to our screen
    cv2.imshow("Facial Recognition is Running", frame)
    key = cv2.waitKey(1) & 0xFF

    # quit when 'q' key is pressed
    if key == ord("q"):
        break

    # update the FPS counter
    fps.update()

    # once it is past the trigger time and my face has been seen, fire
    # the servo exactly once ("will" must match the folder name used in
    # face_shot.py). servo_move.py is all top-level code, so it is run
    # with exec() here; importing it at the top of this script would
    # have fired the servo the moment the script started.
    current_time = datetime.now()
    if (currentname == "will") and (current_time.time() > da_time.time()) and (x == 0):
        exec(open("servo_move.py").read())
        x = 1

# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
This is the third and final file from the video, face_rec.py. It is the file you run when you want to actually start the facial recognition software. I only added a handful of lines of code, these being:
from datetime import datetime

now = datetime.now()
da_time = datetime(2021, 4, 7, 12, 35, 00)
x = 0

current_time = datetime.now()
if (currentname == "will") and (current_time.time() > da_time.time()) and (x == 0):
    exec(open("servo_move.py").read())
    x = 1
These lines of code check whether the current time is past the time stored in da_time, five minutes after my alarm goes off. If it is, and my face is in frame, the script executes a file called servo_move.py (the x flag makes sure it only fires once).
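If you would rather not exec() another file's source inside the recognition loop, launching it as a separate process works too and keeps servo_move.py's GPIO setup and cleanup isolated from the recognition script. A sketch of that variation (not what I actually ran):

import subprocess

# same trigger condition, but run servo_move.py in its own process
if (currentname == "will") and (current_time.time() > da_time.time()) and (x == 0):
    subprocess.run(["python3", "servo_move.py"])
    x = 1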
import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BOARD)     # use physical pin numbering
GPIO.setup(11, GPIO.OUT)     # servo signal wire on pin 11

servo1 = GPIO.PWM(11, 50)    # 50 Hz PWM, standard for hobby servos
servo1.start(0)

servo1.ChangeDutyCycle(12)   # swing to one end of travel
time.sleep(2)
servo1.ChangeDutyCycle(2)    # swing back to the other end
time.sleep(0.5)
servo1.ChangeDutyCycle(0)    # stop pulsing so the servo doesn't jitter

servo1.stop()
GPIO.cleanup()
This is servo_move.py. It makes the servo move 180 degrees and then move back.
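For context, at 50 Hz a duty cycle of 2 percent is a 0.4 ms pulse and 12 percent is a 2.4 ms pulse, which spans roughly the full travel on many SG90-style micro servos; the exact endpoints vary from servo to servo. If you want to target specific angles instead of raw duty cycles, a small helper like this (my own sketch, using my 2 and 12 endpoints) does the conversion:

def angle_to_duty(angle, min_duty=2.0, max_duty=12.0):
    """Map an angle in degrees (0-180) onto the servo's duty-cycle range."""
    return min_duty + (angle / 180.0) * (max_duty - min_duty)

# example: servo1.ChangeDutyCycle(angle_to_duty(90)) centers the servo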
Fabrication

The first piece that I had to fabricate was the "electronics board" that was shown in my Electronics section. It is just a piece of wood that everything rests on. The second piece is shown in the images below.
It is a very simple design: just some wood connected with hinges that I 3D printed. The hinges are not my own design; they were made by guppyk on Thingiverse. The hinges I used, and many variations of them, can be downloaded here.
What I Would Do Differently

This project did work as intended in the end, but that does not mean I wouldn't change certain aspects. If I did it again, I would spray paint the wood black so that the tape and the parts did not stand out as much, and I would make a more permanent version of the electronics board.