Given the importance of the human eye in detecting objects, and the value of integrating scientific technology into interactive human applications, I proposed and implemented this project to strengthen the theoretical and practical concepts of electronics by detecting the movement of an object, here the human eye, through interactive modalities.
This project can be used to detect eye movement and provide evidence for body-language analysis. By tracking changes in the position of the eye, we can record its movements and then interpret them, since different eye movements carry different meanings in Neuro-Linguistic Programming (NLP).
Several photographs are taken with a webcam pointed directly at the user's face. In the first phase the images are captured and one of them is selected; the face is then identified, followed by the eyes, and finally the pupil. Its center is marked, and the coordinates of the centers are displayed through the graphical user interface (GUI) in MATLAB.
Introduction
To detect eye movements, we can use one of the features of the eye, such as the pupil or the iris.
In my project, I mainly determined the boundary between the iris and the sclera and then identified the center of the pupil using appropriate image-processing algorithms.
Figure (2) shows the block diagram of my project, which contains three main blocks:
The input unit: This unit could be a separate Webcam sensor or the built-in camera in your computer/laptop.
The processing unit: This unit is represented by the MATLAB graphical user interface (GUI), which connects the input unit to the output unit and performs the image processing.
The output unit: This unit is also represented by MATLAB GUI and displays the photos and processing steps.
Figure (3) shows the layout drawn in the MATLAB GUI, which displays the results and processing operations for the loaded image.
It mainly consists of axes for displaying images, several pushbuttons, several edit text boxes and static text boxes.
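As an illustration only (the project's GUI was built with MATLAB's GUI tools, and its exact layout is not reproduced here), a comparable window with axes, pushbuttons, and edit/static text controls could be created programmatically; all positions and strings below are invented placeholders:

```matlab
% Hypothetical sketch of a window with the same kinds of components
% (axes, pushbutton, static text, edit text); not the project's layout.
fig = figure('Name', 'Eye Detection GUI');
ax1 = axes('Parent', fig, 'Units', 'normalized', ...
           'Position', [0.05 0.40 0.40 0.50]);      % image display area
uicontrol(fig, 'Style', 'pushbutton', 'String', 'Start capturing', ...
          'Units', 'normalized', 'Position', [0.05 0.10 0.20 0.08]);
uicontrol(fig, 'Style', 'text', 'String', 'X coordinate:', ...
          'Units', 'normalized', 'Position', [0.55 0.20 0.15 0.05]);
uicontrol(fig, 'Style', 'edit', 'Units', 'normalized', ...
          'Position', [0.72 0.20 0.15 0.05]);       % shows a coordinate
```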
Tool functions of the previous GUI:
- Three axes: To display the captured image and the subsequent processing operations.
- Pushbuttons:
1- Start capturing: This pushbutton activates the laptop/computer camera or webcam and connects it to MATLAB.
Then, 10 consecutive RGB true-color images are captured over 3 seconds at the camera's maximum resolution and saved in a dedicated folder with names of the form 'camshots1.jpeg'.
2- Captured image: This pushbutton selects one image from the 10 previously captured images. You can choose any of them by simply editing the image name; otherwise the GUI automatically uses the fifth image, which lies in the middle of the sequence and proved to be the best after many attempts. The selected image is displayed on AXES 1.
3- Eye detection: This pushbutton applies the "Viola-Jones" face-detection algorithm. It draws a blue rectangle containing only the eyes in the face image displayed on AXES 1, crops that region, and displays it separately on AXES 2.
4- Iris contouring: This pushbutton delineates the boundary between the iris and the sclera, drawn as a red circle, to detect the iris of the human eye using the "Circular Hough Transform (CHT)", a transform specialized for finding circular objects in a digital image. The result is displayed on AXES 2.
5- Eye center: This pushbutton determines the center of the eye based on the previous operations, places a yellow (X) marker at that center, and displays the result on AXES 3.
- Eyes position panel
This panel contains 4 pushbuttons that accurately determine the X and Y coordinates of the eye center and display each value in the corresponding edit-text box.
Note: You can also use the data cursor in the toolbar to verify that the coordinates are correct.
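The processing steps behind these pushbuttons can be sketched in MATLAB as follows. This is a hypothetical reconstruction, not the project's actual source code: it assumes the Computer Vision Toolbox (`vision.CascadeObjectDetector`, `imfindcircles`, `viscircles`) and the MATLAB Support Package for USB Webcams (`webcam`, `snapshot`), and the iris radius range is an illustrative guess.

```matlab
% Hypothetical sketch of the capture -> detect -> contour -> center
% pipeline described above (function names and parameters assumed).
cam = webcam;                               % open the default camera
for k = 1:10                                % capture 10 consecutive frames
    frame = snapshot(cam);                  % RGB true-color image
    imwrite(frame, sprintf('camshots%d.jpeg', k));
    pause(0.3);                             % ~10 frames over ~3 seconds
end
clear cam;

img = imread('camshots5.jpeg');             % default to the fifth (middle) image

% Viola-Jones detector pre-trained for eye pairs
eyeDetector = vision.CascadeObjectDetector('EyePairBig');
bbox = step(eyeDetector, img);              % bounding box(es) of the eye region
eyes = imcrop(img, bbox(1,:));              % crop the first detected eye pair

% Circular Hough Transform to outline the iris; the radius range
% [10 40] pixels is an illustrative guess, not the project's value.
gray = rgb2gray(eyes);
[centers, radii] = imfindcircles(gray, [10 40], 'ObjectPolarity', 'dark');

% Draw the iris contour in red and mark the eye center with a yellow X
figure; imshow(eyes); hold on;
viscircles(centers, radii, 'Color', 'red');
plot(centers(:,1), centers(:,2), 'yx', 'MarkerSize', 12, 'LineWidth', 2);
```

The detected `centers` matrix holds the (X, Y) pixel coordinates that the Eyes position panel would report in its edit-text boxes.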
Results
After running the GUI, we will get the following results: