In today's interconnected world, security and automation have become paramount. The "Face Recognition System with Real-Time Unknown Person Alerting Using Raspberry Pi and Blynk App" is an innovative solution that leverages cutting-edge technology to address these needs effectively. This project utilizes the Raspberry Pi, an affordable yet powerful microcomputer, to perform real-time facial recognition, enabling enhanced security and remote monitoring. The integration of the Blynk app adds unparalleled convenience, allowing users to control and monitor the system from anywhere.
The system revolves around creating and training a facial recognition model using datasets of known individuals. With a camera module capturing live video feeds, the Raspberry Pi processes the data to identify familiar faces while alerting the user about unknown individuals. This functionality is supported by a combination of hardware, such as a camera and servo motor, and software tools, including the Blynk app, which enables video streaming and remote control.
By uniting IoT with machine learning, this project exemplifies a practical, scalable, and efficient approach to security automation, catering to homes, offices, and other facilities.
To implement the facial recognition functionality, follow the step-by-step instructions provided in the Tom’s Hardware guide. I used this guide as a foundation and made modifications to the code to meet my requirements. Specifically, the system was enhanced to send notifications whenever an unknown person is detected.
Additionally, an ESP32 microcontroller was integrated to control the servo motor, enabling movements such as forward, reverse, left, and right. The robot's operations are seamlessly managed via the Blynk app interface, providing users with an intuitive and remote control solution.
Raspberry Pi Code
The Raspberry Pi serves as the brain for real-time facial recognition. Below is a brief overview of the face recognition code flow:
Loading Trained Face Data: The file encodings.pickle contains face encodings (numeric representations of facial features) of known individuals. This file was created during the training phase using known face datasets. The system uses this file to match faces in real time.
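The loading step can be sketched in a few lines. This is a self-contained toy version: the dictionary layout (an "encodings" list of 128-dimensional vectors plus a parallel "names" list) is a common convention for face_recognition training scripts and is assumed here, and the dump step merely stands in for the real training phase so the snippet runs on its own.

```python
import pickle

# Toy stand-in for the training phase: in the real system each
# "encoding" is a 128-dimensional vector produced by face_recognition.
data = {"encodings": [[0.1] * 128, [0.2] * 128],
        "names": ["Mallikarjuna", "Chaitrashree"]}

with open("encodings.pickle", "wb") as f:
    pickle.dump(data, f)

# At startup, the recognition script reloads the file in one call.
with open("encodings.pickle", "rb") as f:
    loaded = pickle.load(f)

known_encodings = loaded["encodings"]
known_names = loaded["names"]
print(known_names)
```

Keeping encodings and names in two parallel lists makes the later matching loop a simple zip over both.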
Video Stream Initialization: The VideoStream module starts capturing video from the camera. Each frame is resized to a width of 500 pixels for faster processing.
Face Detection and Recognition:
- Each frame from the video is processed to detect faces using the face_recognition.face_locations function.
- The detected faces are then compared with the stored encodings from encodings.pickle.
- If a match is found, the corresponding person's name is retrieved. If no match is found, the face is labeled as "Unknown."
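The matching step above can be sketched without the camera or the face_recognition library: compare_faces effectively thresholds the Euclidean distance between encodings (the library's default tolerance is 0.6). This minimal stand-in uses toy 3-dimensional vectors in place of the real 128-dimensional encodings:

```python
import math

TOLERANCE = 0.6  # face_recognition's default matching threshold

def match_face(known_encodings, known_names, face_encoding, tolerance=TOLERANCE):
    """Return the name of the closest known face within tolerance, else 'Unknown'."""
    best_name, best_dist = "Unknown", tolerance
    for encoding, name in zip(known_encodings, known_names):
        dist = math.dist(encoding, face_encoding)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

# Toy 3-d "encodings" stand in for the real 128-d vectors.
known = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]
names = ["Mallikarjuna", "Chaitrashree"]

print(match_face(known, names, [0.1, 0.0, 0.0]))  # close to the first vector
print(match_face(known, names, [5.0, 5.0, 5.0]))  # far from every known face
```

Taking the closest match under the tolerance (rather than the first match) avoids mislabeling when two known people have similar encodings.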
Unknown Face Alert:
When an "Unknown" face is detected:
- The Raspberry Pi sends a HIGH signal to the ESP32 via a GPIO pin.
- This alerts the ESP32, triggering the robot to stop and send a notification to the user via the Blynk app.
For recognized faces:
- The GPIO pin signal remains LOW to indicate no alert.
Displaying Results:
- Each detected face is highlighted with a bounding box on the video feed, and the name (or "Unknown") is displayed.
- The video feed is shown on the connected monitor, allowing the user to see the recognition in real time.
ESP32 Code
The ESP32 acts as the controller for the robot's movement, obstacle detection, and communication with the Blynk app.
Pins and Peripherals:
- Motor Pins (ml1, ml2, mr1, mr2): Control the robot's motors for forward, reverse, left, and right movements.
- Servo Pin: Connected to a servo motor for additional functionality.
- Ultrasonic Sensor Pins (trigPin, echoPin): Measure the distance to obstacles.
- Buzzer Pin: Alerts the user about obstacles.
- RPI_Pin: Reads the alert signal from the Raspberry Pi.
Motor Control:
- The robot's movements (forward, reverse, left, right) are controlled by sending HIGH/LOW signals to the motor pins.
- These movements are triggered by commands from the Blynk app, mapped to virtual pins (V1 for forward, V4 for reverse, V5 for left, V6 for right).
- When no command is active, the robot stops.
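The command-to-pin mapping can be written out as a table. The sketch below mirrors the ESP32 logic in Python for clarity; the exact HIGH/LOW polarities per pin are assumptions (they depend on the motor driver wiring), but the structure, one pin-state tuple per Blynk virtual-pin command with "stop" as the default, matches the behavior described above.

```python
# Hypothetical pin-state table: each Blynk command sets the two left
# (ml1, ml2) and two right (mr1, mr2) motor pins HIGH (1) or LOW (0).
COMMANDS = {
    "forward": (1, 0, 1, 0),  # Blynk V1
    "reverse": (0, 1, 0, 1),  # Blynk V4
    "left":    (0, 1, 1, 0),  # Blynk V5
    "right":   (1, 0, 0, 1),  # Blynk V6
    "stop":    (0, 0, 0, 0),  # no active command
}

def motor_pins(command):
    """Return (ml1, ml2, mr1, mr2) levels for a movement command."""
    return COMMANDS.get(command, COMMANDS["stop"])

print(motor_pins("forward"))
print(motor_pins("idle"))  # unrecognized commands fall back to stop
```

Falling back to "stop" for any unrecognized command keeps the robot stationary if a Blynk message is garbled or arrives out of order.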
Servo Control:
- The servo motor is controlled via virtual pin V3 on the Blynk app.
- The user can set the servo's angle for specific tasks, such as rotating a mounted camera or other accessories.
Obstacle Detection:
- The ultrasonic sensor measures the distance to obstacles in front of the robot.
- If an obstacle is detected within 2 to 25 cm, the robot stops, and the buzzer emits a series of beeps to alert the user.
- The obstacle distance is continuously updated in the Blynk app on virtual pin V2.
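The distance arithmetic behind the ultrasonic reading is simple: an HC-SR04-style sensor reports the round-trip time of a sound pulse, so the one-way distance is half the travel time multiplied by the speed of sound (about 0.0343 cm/µs at room temperature). A sketch of that conversion plus the 2 to 25 cm stop window:

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # at roughly 20 degrees C

def echo_to_distance_cm(echo_duration_us):
    """Convert an ultrasonic echo pulse width (microseconds) to cm.

    The pulse covers the round trip, so halve the travel time.
    """
    return (echo_duration_us * SPEED_OF_SOUND_CM_PER_US) / 2

def obstacle_detected(distance_cm, near=2, far=25):
    """True when an obstacle sits inside the stop-and-beep window."""
    return near <= distance_cm <= far

d = echo_to_distance_cm(583)  # a ~583 us echo is about 10 cm away
print(round(d, 1))
print(obstacle_detected(d))
```

The 2 cm lower bound filters out spurious near-zero readings that HC-SR04 sensors commonly produce when the echo is missed.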
Raspberry Pi Alert Handling:
- The ESP32 monitors the RPI_Pin for signals from the Raspberry Pi.
If the RPi detects an "Unknown" face, it sends a HIGH signal to the ESP32. In response:
- The ESP32 displays "Unknown Person Found" in the Blynk app (via virtual pin V7).
- An alert is also shown on the Blynk LCD widget on virtual pin V8.
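The ESP32's reaction can be condensed into one decision function. This Python stand-in mirrors the logic described above; the alert string and the stop behavior come from the write-up, while the active-HIGH pin polarity and the all-LOW stop tuple are assumptions about the wiring.

```python
def handle_rpi_signal(pin_level):
    """Sketch of the ESP32's reaction to the Pi's alert pin.

    Returns (motor_pins, blynk_message): on HIGH, stop the motors and
    push the alert shown on Blynk V7/V8; on LOW, do nothing.
    """
    if pin_level == 1:                               # unknown face reported
        return (0, 0, 0, 0), "Unknown Person Found"  # stop + alert
    return None, None                                # no action needed

print(handle_rpi_signal(1))
print(handle_rpi_signal(0))
```

Keeping the reaction in one place means the stop command and the Blynk notification can never get out of sync.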
Facial Recognition on Raspberry Pi:
- The RPi captures video frames and detects faces in real time.
- Recognized faces are identified, and alerts are sent to the ESP32 if an unknown face is detected.
Robot Control via ESP32:
- The ESP32 controls the robot's movement based on user commands from the Blynk app.
- It stops the robot and alerts the user if an obstacle is detected.
Remote Monitoring and Control:
The Blynk app provides an interface for:
- Viewing alerts (e.g., "Unknown Person Found").
- Controlling the robot's movement.
- Adjusting the servo motor's angle.
- Monitoring obstacle distances.
To evaluate the system’s performance, datasets of two participants, Mallikarjuna and Chaitrashree, were created and used to train the facial recognition model. The system was then tested by removing the creator's face dataset to ensure they were identified as "Unknown."
During these trials:
- The system reliably identified the known individuals.
- It correctly labeled the creator as "Unknown," demonstrating its accuracy and reliability.