The main goal was to develop a system that can detect and track arbitrary objects. Another goal was to control the robot manually and receive a live stream from the camera via a web interface.
Why we started this project:
Ensuring the safety and security of our homes is a top priority for many of us. With advancements in technology, one highly effective tool for achieving this is the use of a camera to observe our houses.
Our system protects your household against intruders and observes it 24/7. Additionally, the user is notified when a person is detected and has the option to visually mark them with a laser.
Basic idea: A robot arm is controlled by a user via a web application. The web application displays a camera feed from the perspective of the robot arm. Object recognition software identifies possible targets and marks them in the image. The user can control the robot arm manually or select one of these objects; when an object is selected, the robot arm tracks it automatically. The targeted object is marked with a laser. The user is also notified via a Telegram message when an object is detected.
Our block diagram is shown below:
The project can be divided into four blocks: the camera transmission, the object detection, the website, and the control of the robot arm. The implementation of these blocks is described below.
1. Transmitting the camera feed to the web interface
The Raspberry Pi camera is used to capture objects. It is connected to the Raspberry Pi and sends high-resolution images to it.
The images from the camera are then processed: they are encoded, serialized, and sent to the host PC in UDP packets. The host PC converts the received data back into image format and passes the images to the website.
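The exact transmission code is not reproduced here; a minimal sketch of the sender side could look like the following, assuming OpenCV, JPEG compression, and a plain UDP socket (the host address, port, and quality setting are placeholders):

```python
# Minimal sender sketch (Raspberry Pi side): grab a frame, JPEG-encode it,
# prefix its length, and ship it to the host PC in a single UDP datagram.
import socket
import struct

import cv2

HOST_PC = ("192.168.0.10", 5005)  # hypothetical host PC address and port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

cap = cv2.VideoCapture(0)  # Raspberry Pi camera exposed as a video device
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Compress the frame so it fits into a single UDP datagram
    ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 70])
    data = jpeg.tobytes()
    if ok and len(data) < 65000:  # skip frames too large for one datagram
        sock.sendto(struct.pack(">I", len(data)) + data, HOST_PC)
```

The host PC reverses these steps: it reads the length prefix, decodes the JPEG back into a frame, and hands the image to the website.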
2. Detecting and marking the objects in the camera feed
For the object detection, the YOLOv7-tiny model is used. The model comes pre-trained, so the detection of humans and animals is possible out of the box.
During detection, each detected object is assigned an individual ID and a bounding box. The IDs do not change when an object moves within the frame; the website needs them to differentiate between the detected objects.
For targeting, the coordinates of each detected object are sent to the website.
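The detections themselves come from YOLOv7-tiny; the sketch below only illustrates the ID-assignment idea with a simple nearest-centre matcher. Here `boxes` stands in for the output of the real model call, and the distance threshold is an assumption:

```python
# Sketch of persistent ID assignment across frames. `boxes` is the list of
# (x, y, w, h) bounding boxes the detector returned for one frame.
import math

next_id = 0
tracked = {}  # ID -> last known box centre


def assign_ids(boxes, max_dist=50):
    """Match each box to the nearest known centre; unmatched boxes get new IDs."""
    global next_id
    result = {}
    for (x, y, w, h) in boxes:
        cx, cy = x + w / 2, y + h / 2
        best_id, best_d = None, max_dist
        for oid, (px, py) in tracked.items():
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best_id, best_d = oid, d
        if best_id is None:  # no close match -> treat as a new object
            best_id = next_id
            next_id += 1
        tracked[best_id] = (cx, cy)
        result[best_id] = (x, y, w, h)
    return result
```

The per-frame mapping of ID to coordinates is exactly the kind of data the website needs to populate the dropdown list and to target an object.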
3. Implementation of the website
The website is implemented using the Flask package for Python and has the following features:
- A live video feed from the camera
- A dropdown list of the detected objects
- Buttons to manually control the robot arm
- A login page
Commands from the website are forwarded to the Python script, where they are processed further. Manual control commands are sent to the Raspberry Pi using MQTT. If the user selects an object from the dropdown list, the robot automatically tracks that object.
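As a rough sketch of this control path, a Flask route can receive a button command and republish it over MQTT using paho-mqtt; the route name, topic, and broker address below are illustrative assumptions, not the project's actual values:

```python
# Sketch: a Flask endpoint forwards manual control commands to the
# Raspberry Pi by publishing them on an MQTT topic.
from flask import Flask, request
import paho.mqtt.client as mqtt

app = Flask(__name__)

mqttc = mqtt.Client()
mqttc.connect("raspberrypi.local", 1883)  # hypothetical broker on the Pi
mqttc.loop_start()  # handle network traffic in a background thread


@app.route("/move", methods=["POST"])
def move():
    direction = request.form["direction"]  # e.g. "left", "right", "up", "down"
    mqttc.publish("robot/manual", direction)  # hypothetical topic name
    return "", 204


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

On the Raspberry Pi, a matching MQTT subscriber listens on the same topic and translates the received direction into arm movements.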
The Python script also includes a Telegram bot that notifies the user when an object is detected. The user can specify which objects should trigger a notification.
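A minimal sketch of this notification path, assuming the synchronous API of python-telegram-bot 13.x; the token, chat ID, and watch list are placeholders:

```python
# Sketch: send the current frame to the user via Telegram whenever a
# detected class is on the user's watch list.
import io

import cv2
from telegram import Bot

bot = Bot(token="YOUR_BOT_TOKEN")  # hypothetical bot token
CHAT_ID = 123456789                # hypothetical user chat ID
WATCHED = {"person", "horse"}      # classes the user subscribed to


def notify(label, frame):
    """Send the frame as a photo if the detected class is watched."""
    if label in WATCHED:
        ok, jpeg = cv2.imencode(".jpg", frame)
        if ok:
            bot.send_photo(chat_id=CHAT_ID,
                           photo=io.BytesIO(jpeg.tobytes()),
                           caption=f"{label} detected")
```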
For a demonstration of this feature, the bot is configured to send the current frame when a horse is detected:
4. Control the robot arm
As shown in the figure, the robot arm is controlled via the I2C bus of the Raspberry Pi. The Raspberry Pi sends both position-change commands and firing commands to the Arduino Uno. The position change is implemented via the Braccio Shield, while the laser is fired via a digital output of the Arduino Uno.
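On the Raspberry Pi side, such commands could be sent with the smbus2 package; the I2C address and the command byte layout below are assumptions for illustration only:

```python
# Sketch: send position-change and firing commands from the Raspberry Pi
# to the Arduino Uno over I2C.
from smbus2 import SMBus

ARDUINO_ADDR = 0x08              # hypothetical I2C address of the Arduino
CMD_MOVE, CMD_FIRE = 0x01, 0x02  # hypothetical command codes


def send_position(bus, joint, angle):
    """Position change: [CMD_MOVE, joint index, angle in degrees (0-180)]."""
    bus.write_i2c_block_data(ARDUINO_ADDR, CMD_MOVE, [joint, angle])


def fire_laser(bus):
    """Tell the Arduino to pulse the laser via its digital output."""
    bus.write_byte_data(ARDUINO_ADDR, CMD_FIRE, 1)


with SMBus(1) as bus:  # I2C bus 1 on the Raspberry Pi header
    send_position(bus, joint=0, angle=90)
    fire_laser(bus)
```

On the Arduino, the Wire library would receive these bytes, drive the Braccio Shield for the movement, and toggle the digital output for the laser.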
Moment of truth:
After implementing the blocks, the complete system was tested. The following video shows the detection, targeting, and marking of a person.
Modularity of the system: The system is designed so that the actuator (in our case the laser) can easily be exchanged for anything the user desires. It is furthermore possible to exchange the camera for one with a higher resolution, or even for a night-vision camera, so the system can operate both day and night.