Face recognition has changed dramatically in recent years and is now widely used for both commercial and security purposes. Tracking a user's presence is a common problem, and an attendance system based on facial recognition is a practical real-world solution that greatly simplifies the task. Entering attendance manually in logbooks is tedious and time-consuming, so we have designed an efficient module that uses the LBPH face recognition algorithm (via OpenCV) to manage the attendance records of employees or students. While enrolling a user, we capture multiple images of the user along with his/her id/roll number and name. The presence of each student/employee is updated in a database, and users can also check their attendance on a web page. In our experience this approach gives faster and more accurate results, in a more interactive manner, than existing manual attendance systems.
Technology/Tools Used and Methodology

I. Technology/Tools

We use several Python libraries and modules for face detection and recognition and for saving a user's image and other information. We use the OpenCV (Open Source Computer Vision) library for face detection and recognition, the pandas package to store student information in a local file, NumPy for array operations, PyMySQL to connect to a MySQL database, and Tkinter to build a GUI for better interaction with the program. The students' attendance itself is stored in a MySQL database. For the web page, the front end is implemented in HTML and CSS/SCSS, with JavaScript and jQuery for interactivity; the back end uses PHP.
II. Methodology

The face recognizer algorithms currently available in OpenCV are:
- EigenFaces
- FisherFaces
- Local Binary Patterns Histograms
For our purpose, we will use the last of these: the Local Binary Patterns Histogram (LBPH) algorithm.
Local Binary Patterns Histogram

EigenFaces and FisherFaces take a somewhat holistic approach to face recognition: you treat your data as a vector somewhere in a high-dimensional image space. High dimensionality is a problem, so a lower-dimensional subspace is identified where (probably) the useful information is preserved. The EigenFaces approach maximizes the total scatter, which can lead to problems if the variance is generated by an external source such as lighting, because components with maximum variance over all classes aren't necessarily useful for classification. To preserve discriminative information, the FisherFaces method instead applies a Linear Discriminant Analysis. FisherFaces works well, at least for the constrained scenario assumed in our model.
Real life isn't perfect, though. You simply can't guarantee perfect lighting in your images or ten different images of each person. What if there is only one image per person? Then our covariance estimates for the subspace may be horribly wrong, and so will the recognition. Some research therefore concentrated on extracting local features from images. The idea is not to look at the whole image as one high-dimensional vector, but to describe only local features of an object; the features extracted this way implicitly have low dimensionality. A fine idea! But the image representation doesn't only suffer from illumination variation: think of scale, translation, or rotation in images - your local description has to be at least somewhat robust against those as well. The Local Binary Patterns methodology has its roots in 2D texture analysis. Its basic idea is to summarize the local structure of an image by comparing each pixel with its neighborhood: take a pixel as the center and threshold its neighbors against it. If the intensity of the center pixel is greater than or equal to that of a neighbor, denote it with 1, otherwise with 0. You end up with a binary number for each pixel, such as 11001111. With 8 surrounding pixels there are 2^8 = 256 possible combinations, called Local Binary Patterns or sometimes LBP codes. The first LBP operator described in the literature used a fixed 3 × 3 neighborhood.
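The thresholding step can be sketched in a few lines of NumPy. This is a minimal illustration of the operator described above, not the OpenCV implementation; the sample patch values are invented:

```python
import numpy as np

# Minimal sketch of the 3x3 LBP operator: threshold the 8 neighbors of the
# center pixel against the center and read the resulting bits as one byte.
def lbp_code(patch):
    """Return the LBP code of the center pixel of a 3x3 grayscale patch."""
    center = patch[1, 1]
    # The 8 neighbors, clockwise starting at the top-left corner
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    # Threshold each neighbor against the center: 1 if neighbor >= center
    bits = [1 if patch[r, c] >= center else 0 for r, c in coords]
    # Read the bit string as one byte (first neighbor = most significant bit)
    return sum(b << i for i, b in enumerate(reversed(bits)))

patch = np.array([[ 90, 120,  60],
                  [ 80, 100, 110],
                  [100, 130,  95]], dtype=np.uint8)
print(lbp_code(patch))  # 86, i.e. binary 01010110
```

Since only the comparisons against the center matter, adding the same constant brightness to every pixel leaves the code unchanged.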
By definition, the LBP operator is robust against monotonic gray-scale transformations. We can easily verify this by looking at the LBP image of an artificially modified image (so you can see what an LBP image looks like):
What's left is to incorporate spatial information into the face recognition model. The representation proposed by Ahonen et al. is to divide the LBP image into m local regions and extract a histogram from each. The spatially enhanced feature vector is then obtained by concatenating the local histograms (not merging them). These histograms are called Local Binary Patterns Histograms.
Histogram representation of one sample: in this way, the histograms of all regions of a sample are concatenated, and the result is called the Local Binary Patterns Histogram.
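The spatially enhanced feature vector can be sketched as follows. Random data stands in for a real LBP-coded face here, and the 8 × 8 grid and 256 bins are illustrative choices, not values taken from OpenCV:

```python
import numpy as np

# Sketch of the spatially enhanced feature vector: divide an LBP-coded image
# into grid x grid local regions, take a 256-bin histogram of each region,
# and concatenate all the regional histograms.
def lbph_vector(lbp_image, grid=8):
    h, w = lbp_image.shape
    hists = []
    for i in range(grid):
        for j in range(grid):
            region = lbp_image[i*h//grid:(i+1)*h//grid,
                               j*w//grid:(j+1)*w//grid]
            hist, _ = np.histogram(region, bins=256, range=(0, 256))
            hists.append(hist)
    # Concatenation (not merging) preserves which region each pattern came from
    return np.concatenate(hists)

rng = np.random.default_rng(0)
lbp_image = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
print(lbph_vector(lbp_image).shape)  # (16384,) = 8*8 regions x 256 bins
```

Because each region keeps its own histogram, two faces are compared region by region, which is what makes the representation robust to local changes.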
Software Implementation Details

- Graphical User Interface (GUI)
First we import all the required packages/modules that are to be used for making the GUI of our application.
import PIL.Image
import PIL.ImageTk
from tkinter import *
Now we will make our window with our LOGO and background.
window =Tk()
window.geometry('600x600')
window.resizable(width=False, height=False)
window.title("My Attendance Portal")
window.configure(background='#D0D3D4')
image=PIL.Image.open("logo.png")
photo=PIL.ImageTk.PhotoImage(image)
lab=Label(image=photo,bg='#D0D3D4')
lab.pack()
Now we add input boxes to collect the name and id of a new user, and another input box to collect the id of the user whose details we want to delete.
fn=StringVar()
entry_name=Entry(window,textvar=fn)
entry_name.place(x=150,y=257)
ln=StringVar()
entry_id=Entry(window,textvar=ln)
entry_id.place(x=455,y=257)
dn=StringVar()
entry_name_del=Entry(window,textvar=dn)
entry_name_del.place(x=150,y=507)
All the labels, placed at their respective positions in the GUI:
label2=Label(window,text="New User",fg='#717D7E',bg='#D0D3D4',font=("roboto",20,"bold")).place(x=20,y=200)
label3=Label(window,text="Enter Name :",fg='black',bg='#D0D3D4',font=("roboto",15)).place(x=20,y=250)
label4=Label(window,text="Enter Roll Number :",fg='black',bg='#D0D3D4',font=("roboto",15)).place(x=275,y=252)
label5=Label(window,text="Note : To exit the frame window press 'q'",fg='red',bg='#D0D3D4',font=("roboto",15)).place(x=20,y=100)
v=StringVar()  # holds the notification text shown in the status label
status=Label(window,textvariable=v,fg='red',bg='#D0D3D4',font=("roboto",15,"italic")).place(x=20,y=150)
label6=Label(window,text="Already a User ?",fg='#717D7E',bg='#D0D3D4',font=("roboto",20,"bold")).place(x=20,y=350)
label7=Label(window,text="Delete a users information",fg='#717D7E',bg='#D0D3D4',font=("roboto",20,"bold")).place(x=20,y=450)
label8=Label(window,text="Enter Id :",fg='black',bg='#D0D3D4',font=("roboto",15)).place(x=20,y=500)
All the buttons, placed at their respective positions:
button1=Button(window,text="Exit",width=5,fg='#fff',bg='red',relief=RAISED,font=("roboto",15,"bold"),command=exit_window)
button1.place(x=500,y=550)
button2=Button(window,text="Submit",width=5,fg='#fff',bg='#27AE60',relief=RAISED,font=("roboto",15,"bold"),command=insert_user)
button2.place(x=20,y=300)
button3=Button(window,text="Train Images",fg='#fff',bg='#5DADE2',relief=RAISED,font=("roboto",15,"bold"),command=train_image)
button3.place(x=100,y=300)
button4=Button(window,text="Track User",fg='#fff',bg='#E67E22',relief=RAISED,font=("roboto",15,"bold"),command=track_user)
button4.place(x=20,y=400)
button6=Button(window,text="Delete User",fg='#fff',bg='#8E44AD',relief=RAISED,font=("roboto",15,"bold"),command=del_user)
button6.place(x=20,y=550)
window.mainloop()
Our GUI looks like this:
- Collecting a User's Information
The packages/modules used for collecting a user's information are:
import cv2
import os
import csv
import numpy as np
import PIL.Image
import PIL.ImageTk
import pandas as pd
Our code structure is defined as follows:
#Block starts
def insert_user():
<--code-->
#Block Ends
To fetch the user's details from the input boxes, we use:
Id=ln.get()
name=fn.get()
After fetching the details, we verify that the format is correct:
if(Id.isnumeric() and name.isalpha()):
Now we check the details against our existing records; if the user already exists, an error message is shown in the notification area.
df=pd.read_csv("StudentDetails\StudentDetails.csv")
# Exact match avoids substring collisions (e.g. id 170 matching 1700)
if((df['Id'].astype(str)==str(Id)).any()):
    v.set("User with same Roll No. Already exists")
If the user doesn’t exist in the database already, then :
i. The ‘Haarcascade files’ will be loaded to the program.
ii. ‘samplenum’ will be initialized to 0.
else:
    cam = cv2.VideoCapture(0)
    harcascadePath = "haarcascade_frontalface_default.xml"
    detector=cv2.CascadeClassifier(harcascadePath)
    sampleNum=0
iii. An infinite while loop starts, waiting 100 milliseconds between frames. The frame window exits when the user presses 'q' or once 'sampleNum' exceeds 60; in the meantime, up to 61 grayscale images of the student/user are captured and saved to the path given below:
"TrainingImage\ "+name.lower() +"."+Id +'.'+ str(sampleNum) + ".jpg", gray[y:y+h,x:x+w]
iv. And the student details are saved in the path given below:
'StudentDetails\StudentDetails.csv'
Code:

while(True):
    ret, img = cam.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, 1.3, 5)
    for (x,y,w,h) in faces:
        cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
        # Incrementing sample number
        sampleNum=sampleNum+1
        # Saving the captured face in the dataset folder TrainingImage
        cv2.imwrite("TrainingImage\ "+name.lower()+"."+Id+'.'+str(sampleNum)+".jpg", gray[y:y+h,x:x+w])
    # Display the frame
    cv2.imshow('frame',img)
    # Wait for 100 milliseconds
    if cv2.waitKey(100) & 0xFF == ord('q'):
        break
    # Break if the sample number is more than 60
    elif sampleNum>60:
        break
cam.release()
cv2.destroyAllWindows()
row = [Id, name]
with open('StudentDetails\StudentDetails.csv','a+') as csvFile:
    writer = csv.writer(csvFile)
    writer.writerow(row)
name_saved=" ID : "+str(Id)+" with NAME : "+name+" Saved"
v.set(name_saved)
The images and student details are saved in their respective directories.

The student details look like this:

And the saved images look like this:
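As a hypothetical illustration of the naming conventions above (the id, name, and sample number below are invented for this example):

```python
# Invented example values standing in for a real enrollment
Id, name, sampleNum = "1700001", "alice", 7

# One row appended to StudentDetails.csv per enrolled user ("Id,Name"):
csv_row = [Id, name]

# Each captured face crop is saved as <name>.<id>.<sampleNum>.jpg:
file_name = name.lower() + "." + Id + "." + str(sampleNum) + ".jpg"
print(file_name)  # alice.1700001.7.jpg

# Training later recovers the id from the middle field of the file name:
recovered_id = int(file_name.split(".")[1])
print(recovered_id)  # 1700001
```

This naming scheme is what lets the training step associate every saved image with its user's id without any extra lookup table.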
- Training the LBPH Recognizer Model
After collecting a user's information, we train our model on the available images.
def train_image():
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    harcascadePath = "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(harcascadePath)
    faces,Id = ImagesAndNames("TrainingImage")
    recognizer.train(faces, np.array(Id))
    recognizer.save("recognizers/Trainner.yml")
    v.set("Images Trained")
Here the 'haarcascade' file is loaded for face detection, and to train the recognizer we extract the features from each training image, i.e. the face array and its id. We define a new function to extract the faces and the id associated with each face.
def ImagesAndNames(path):
    # Get the paths of all the files in the folder
    imagePaths=[os.path.join(path,f) for f in os.listdir(path)]
    # Create empty face list
    faces=[]
    # Create empty ID list
    Ids=[]
    # Loop through all the image paths, loading the Ids and the images
    for imagePath in imagePaths:
        # Load the training image and convert it to grayscale
        g_image=PIL.Image.open(imagePath).convert('L')
        # Convert the PIL image into a numpy array
        image_ar=np.array(g_image,'uint8')
        # Get the Id from the image file name (name.id.sample.jpg)
        Id=int(os.path.split(imagePath)[-1].split(".")[1])
        # Collect the face sample and its id
        faces.append(image_ar)
        Ids.append(Id)
    return faces,Ids
Once the faces and Ids are extracted, we train our model on these values, save the trained model as "Trainner.yml", and return an 'Images Trained' message to the notification section.
- Tracking Users
Once our image dataset is trained, we can track the user. For tracking, we already have our Trainner.yml file ready; we load the 'haarcascade' file to detect faces and the recognizer to identify users.
def track_user():
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.read("recognizers/Trainner.yml")
    cam = cv2.VideoCapture(0)
    harcascadePath = "haarcascade_frontalface_default.xml"
    detector=cv2.CascadeClassifier(harcascadePath)
To show the labels, we use:
font=cv2.FONT_HERSHEY_SIMPLEX
We also read the StudentDetails.csv file to map each id to its name, and we create a data frame to track the students' attendance:
df=pd.read_csv("StudentDetails\StudentDetails.csv")
col_names = ['ID','Date','Time']
attendance = pd.DataFrame(columns = col_names)
An infinite while loop starts; the frame window exits when the user presses 'q'.
while(True):
    ret, img = cam.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, 1.3, 5)
Each captured frame is converted to grayscale and faces are detected in it. For every detected face, the recognizer predicts an Id together with a confidence value, using our trained Trainner.yml file. The Id is then matched against StudentDetails.csv and the corresponding name is returned; the current date and time are recorded as well. Note that for LBPH the confidence is a distance, so lower values mean a better match: if the confidence is greater than 90, the face is treated as unknown and the image is saved to the ImagesUnknown folder. Duplicate attendance rows are dropped, and finally a '.json' file is written to our Attendance folder:
for (x,y,w,h) in faces:
    cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
    Id, conf = recognizer.predict(gray[y:y+h,x:x+w])
    name=df.loc[df['Id'] == Id]['Name'].values
    # 'name' is an array; take its first element if present
    name_get=str(Id)+"-"+(name[0] if len(name) else "")
    time_s = time.time()
    date = str(datetime.datetime.fromtimestamp(time_s).strftime('%Y-%m-%d'))
    timeStamp = datetime.datetime.fromtimestamp(time_s).strftime('%H:%M:%S')
    attendance.loc[len(attendance)] = [Id,date,timeStamp]
    if(conf>90):
        noOfFile=len(os.listdir("ImagesUnknown"))+1
        cv2.imwrite("ImagesUnknown\Image"+str(noOfFile)+".jpg", img[y:y+h,x:x+w])
        Id='Unknown'
        name_get=str(Id)
    cv2.putText(img,str(name_get),(x+w,y+h),font,0.5,(0,255,255),2,cv2.LINE_AA)
attendance=attendance.drop_duplicates(keep='first',subset=['ID'])
fileName="Attendance/in.json"
attendance.to_json(fileName,orient="index")
When the user presses 'q', the update_att() function is called and an "Images tracked" message is displayed in the notification section.
cv2.imshow('img',img)
if (cv2.waitKey(1)==ord('q')):
    update_att()
    break
cam.release()
cv2.destroyAllWindows()
v.set("Images tracked")
The user's id and name are displayed alongside the face:

The 'in.json' file created in the Attendance folder looks like this:
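Since the screenshot is not reproduced here, a small sketch of the structure that to_json(orient="index") produces (the ids and timestamps below are invented):

```python
import pandas as pd

# Build a tiny attendance frame the same way track_user() does
col_names = ['ID', 'Date', 'Time']
attendance = pd.DataFrame(columns=col_names)
attendance.loc[0] = [1700001, '2020-04-21', '10:05:12']
attendance.loc[1] = [1700004, '2020-04-21', '10:05:40']

# orient="index" keys each record by its row label:
# {"0":{"ID":...,"Date":...,"Time":...},"1":{...}}
print(attendance.to_json(orient="index"))
```

update_att() later iterates over these row-label keys (df[i].ID, df[i].Date, df[i].Time) to push each record into MySQL.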
- Update Attendance
After the json file is created, the update_att() function comes into action to update the attendance in our MySQL database.
Our database structure is as follows:

The structure of the attendance table is:
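The screenshot is not reproduced here, but a plausible schema for the attendance table can be inferred from the queries in update_att(): the column names (id, date1, time1, att, totclass) come from the code, while the types below are assumptions. As a sanity check, this sketch creates the table in an in-memory SQLite database:

```python
import sqlite3

# Column names are taken from the INSERT/UPDATE queries in update_att();
# the types here are assumptions, since the live MySQL schema is only
# shown as a screenshot.
CREATE_ATTENDANCE = """
CREATE TABLE attendance (
    id       INTEGER PRIMARY KEY,   -- student roll number
    date1    VARCHAR(10),           -- date of the last marked attendance
    time1    VARCHAR(8),            -- time of the last marked attendance
    att      INTEGER DEFAULT 0,     -- number of classes attended
    totclass INTEGER DEFAULT 0      -- total number of classes held
)
"""

# Sanity-check the DDL against an in-memory SQLite database
conn = sqlite3.connect(":memory:")
conn.execute(CREATE_ATTENDANCE)
conn.execute("INSERT INTO attendance(id,date1,time1,att,totclass) VALUES (?,?,?,?,?)",
             (1700001, "2020-04-21", "10:05:12", 0, 0))
print(conn.execute("SELECT COUNT(*) FROM attendance").fetchone()[0])  # 1
```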
The structure of the student table is:

The structure of the teacher table is:
In our update function, we first connect to our MySQL database and create a cursor; the cursor is used to execute MySQL commands.
def update_att():
    conn=pymysql.connect(host="remotemysql.com",user="KLseHZ0Qv2",passwd="*******",db="KLseHZ0Qv2")
    myCursor=conn.cursor()
Now we fetch the details of our attendance table:
myCursor.execute("SELECT * FROM attendance;")
records=myCursor.fetchall()
length_db=len(records)
If the attendance table is empty, i.e. no records are present, then records for 10,000 students are inserted into it.
if(length_db==0):
    df=pd.read_json("Attendance/classtest.json")
    length_df=len(df.columns)
    for i in range(length_df):
        id=(df[i].ID).item()
        date=(df[i].Date).item()
        time=(df[i].Time).item()
        myCursor.execute("""INSERT INTO attendance(id,date1,time1,att,totclass) VALUES (%s,%s,%s,%s,%s)""",(id,date,time,0,0))
    v.set("Attendance Inserted for the first time")
Here classtest.json contains 10,000 ids from 1700000 to 1709999, with each date and time set to 0. The code for generating the information for these 10,000 students is:
import pandas as pd
col_names = ['ID','Date','Time']
attendance = pd.DataFrame(columns = col_names)
for i in range(1700000,1710000,1):
    Id=i
    date=0
    time=0
    attendance.loc[len(attendance)] = [Id,date,time]
fileName="Attendance/classtest.json"
attendance.to_json(fileName,orient="index")
And if the length of the attendance table is not zero, the else block executes:
i. First, if the date in our json file matches the date of any user in the existing attendance table, the check variable is set to 1; if it matches no user, check stays 0. If check==1, the total-classes counter is not increased; if check==0, the total classes held is increased by 1.
else:
    df=pd.read_json("Attendance/in.json")
    length_df=len(df.columns)
    date_json=(df[0].Date)
    check=0
    for row in records:
        date_db=row[1]
        if(date_db==date_json):
            check=1
            break
        else:
            check=0
    if(check==1):
        v.set("Total classes not updated")
    else:
        myCursor.execute("UPDATE attendance SET totclass=totclass+1")
ii. Now, for every id in the json file that matches an id in the database where the stored date differs from the json date, the date and time in the database are set to those from the json file, and the attendance count is increased by 1.
for i in range(length_df):
    id_json=(df[i].ID)
    date_json=(df[i].Date)
    time_json=(df[i].Time)
    date=str(datetime.date.today())
    for row in records:
        id_db=row[0]
        date_db=row[1]
        if(id_json==id_db and date_db!=date_json):
            sql="UPDATE attendance SET date1=%s,time1=%s,att=att+1 WHERE id=%s"
            val=(date_json,time_json,id_json)
            myCursor.execute(sql,val)
v.set("Attendance Updated")
conn.commit()
conn.close()
- Delete a User's Info

To delete a user's info, we first fetch the id/roll number from the input box, set src="TrainingImage", and load the dataset from the "StudentDetails.csv" file into a data frame.
def del_user():
    roll_del=int(dn.get())
    src="TrainingImage"
    df=pd.read_csv("StudentDetails\StudentDetails.csv")
Now, if a roll in the data frame matches roll_del, a loop runs over all images in TrainingImage; every image whose name contains that roll is removed, the user's row is dropped from the data frame, and the data frame is written back to "StudentDetails.csv".
for roll in df['Id']:
    if(roll==roll_del):
        roll_str=str(roll)
        for image_file_name in os.listdir(src):
            if(roll_str in image_file_name):
                v.set("Deleting the given user's info...")
                os.remove(src+"/"+image_file_name)
        df.drop(df.loc[df['Id']==roll_del].index, inplace=True)
        df.to_csv("StudentDetails\StudentDetails.csv", index=False, encoding='utf-8')
        v.set(roll_str+" Deleted From Database")
    else:
        v.set("User with given roll number not present")