MIMIC (Movement Interpretation and Motion Interface Control) is an innovative project designed to bridge the gap between human gestures and robotic responses. The system is equipped with dual bionic robotic arms capable of interpreting and replicating hand gestures, specifically focusing on sign language. This project aims to enhance communication accessibility and explore new frontiers in human-robot interaction.
What It Does

MIMIC utilizes advanced computer vision and machine learning techniques to detect and interpret hand gestures. The system primarily focuses on sign language, enabling the robotic arms to replicate the detected gestures in real time. The key functionalities include:
- Gesture Detection: Using AI-powered algorithms, MIMIC accurately recognizes a variety of hand gestures and sign language symbols.
- Motion Replication: The dual bionic robotic arms replicate the detected gestures, providing a visual and physical representation of the input.
- Sign Language Interpretation: Beyond replication, MIMIC interprets sign language gestures, offering potential for translation and communication assistance.
The primary motivation behind MIMIC is to enhance communication accessibility for individuals who use sign language. By providing a robotic system capable of understanding and replicating sign language, we aim to:
- Support Accessibility: Assist the deaf and hard-of-hearing community by providing a tool for communication with those unfamiliar with sign language.
- Explore Human-Robot Interaction: Advance the field of robotics by exploring the nuances of gesture recognition and physical replication in real time.
- Innovate in Assistive Technology: Develop cutting-edge technology that can be used in various applications, from education to public services.
MIMIC is built on robust hardware components to ensure high performance and accuracy:
- Processor: The system uses the AMD Xilinx Kria KR260 Robotics Starter Kit, a powerful platform that handles vision AI processing and robotic arm control.
- Image Sensor: A Logitech 4K USB Camera is employed as the image sensor, providing high-resolution video input for precise gesture detection.
- Robotic Arms: The system features two bionic arms, each with five fingers and a range of motion comparable to human arms. These arms are capable of replicating intricate gestures with high fidelity.
MIMIC is composed of several key components:
- Camera and Sensor Array: Captures hand movements and gestures with high precision.
- AI and Machine Learning Models: Analyze the captured data to identify and classify gestures.
- Robotic Arms: Dual bionic arms that replicate the identified gestures with high fidelity.
- Control Interface: A user-friendly interface for interacting with the system and configuring responses.
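To make the "AI and Machine Learning Models" component concrete, here is a minimal sketch of static-gesture classification from hand landmarks in the MediaPipe Hands ordering (21 points, 0 = wrist, 4 = thumb tip, 8 = index tip, and so on). The finger names, labels, and thresholding rule are illustrative assumptions, not the project's actual classifier:

```python
# Minimal sketch: classify a static hand pose from 21 (x, y) landmarks in
# the MediaPipe Hands ordering. Coordinates are normalized image
# coordinates, so y grows downward. The thumb is ignored for simplicity.

FINGER_TIPS = {"index": 8, "middle": 12, "ring": 16, "pinky": 20}
FINGER_PIPS = {"index": 6, "middle": 10, "ring": 14, "pinky": 18}

def extended_fingers(landmarks):
    """Return the set of fingers whose tip is above its PIP joint."""
    return {
        name
        for name, tip in FINGER_TIPS.items()
        if landmarks[tip][1] < landmarks[FINGER_PIPS[name]][1]
    }

def classify(landmarks):
    """Map a landmark list to a coarse gesture label (illustrative only)."""
    up = extended_fingers(landmarks)
    if len(up) == 4:
        return "open_palm"
    if up == {"index"}:
        return "point"
    if not up:
        return "fist"
    return "unknown"
```

A real sign-language model would need temporal context and far richer features, but even this tip-above-joint rule is enough to drive the arms for simple demo poses.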
MIMIC has potential applications in various fields, including:
- Communication: Assisting in communication for those who use sign language.
- Education: Providing a learning tool for sign language students.
- Entertainment: Creating interactive experiences in entertainment and gaming.
- Healthcare: Assisting in rehabilitation and physical therapy by demonstrating exercises.
During the development of MIMIC, we encountered challenges due to a lack of documentation on integrating ROS2 with Ubuntu on the Kria platform, particularly for creating an accelerated app using ROS2. This gap in available resources delayed our progress and limited our ability to fully realize the project's potential within the planned timeframe.
Our plan includes creating a ROS2 package that performs hand gesture recognition using Google Mediapipe. The recognized gestures are then used to operate the robotic arms via PMOD and Raspberry Pi GPIO, which drive the servos. Each of the two robotic arms contains 6 servos and each of the two bionic hands contains 5, for a total of 22 servos. This means we are exporting 22 PWM channels to the PMOD and GPIO pins on the Kria using Vivado.
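The mapping from a joint angle to one of those 22 PWM channels can be sketched as follows. The timing values assume standard hobby-servo conventions (a 50 Hz frame with 500–2500 µs pulses for 0–180°); real servos vary, so these constants would need calibration:

```python
# Sketch of the angle-to-PWM mapping for the 22 servos, assuming standard
# hobby-servo timing: a 50 Hz frame (20 ms period) with pulse widths from
# 500 us (0 degrees) to 2500 us (180 degrees). Assumed values; calibrate
# per servo before use.

FRAME_US = 20_000       # 50 Hz PWM period in microseconds
MIN_PULSE_US = 500      # pulse width at 0 degrees (assumed)
MAX_PULSE_US = 2500     # pulse width at 180 degrees (assumed)

def angle_to_pulse_us(angle_deg):
    """Linearly map a joint angle in [0, 180] to a pulse width in us."""
    angle = max(0.0, min(180.0, angle_deg))
    span = MAX_PULSE_US - MIN_PULSE_US
    return MIN_PULSE_US + span * angle / 180.0

def angle_to_duty(angle_deg):
    """Duty cycle (0..1) for the same angle, as a PWM block would need it."""
    return angle_to_pulse_us(angle_deg) / FRAME_US
```

On the Kria side, each of the 22 Vivado-exported PWM outputs would be programmed with the duty cycle (or compare value) computed this way.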
On each robotic arm, however, we removed one of the servos to attach the bionic hand. This reduced the wrist's range of motion to anterior and posterior movement only, with no medial or lateral rotation. That makes motion control more complicated: to rotate the hand, the entire base must turn, which is harder still when the arm is already curved. The anterior/posterior servo is also underpowered for holding the bionic hand steady; because the hand is heavy, constant adjustment is needed to keep it raised.
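The workaround above can be sketched with simple planar geometry. With no medial/lateral wrist rotation, hand yaw must come entirely from the base servo, and rotating the base also swings the hand's position, so the other joints must re-aim afterwards. The function and joint names here are illustrative assumptions, not the project's actual joint map:

```python
import math

# Sketch of the missing-wrist-rotation workaround: hand yaw is supplied by
# the base servo. If the arm joints already contribute some yaw (e.g. when
# the arm is curved), the base supplies only the remainder. Names are
# hypothetical, not the project's real joint identifiers.

def base_yaw_for_hand(target_hand_yaw_deg, arm_yaw_offset_deg):
    """Base rotation needed so the hand faces target_hand_yaw_deg,
    wrapped into [0, 360)."""
    return (target_hand_yaw_deg - arm_yaw_offset_deg) % 360.0

def hand_xy_after_base_rotation(hand_x, hand_y, delta_base_deg):
    """Side effect of the workaround: rotating the base by delta_base_deg
    also rotates the hand's (x, y) position about the base axis."""
    a = math.radians(delta_base_deg)
    return (hand_x * math.cos(a) - hand_y * math.sin(a),
            hand_x * math.sin(a) + hand_y * math.cos(a))
```

The second function is why the control gets complicated: every yaw correction at the base displaces the hand, forcing the shoulder and elbow joints to compensate.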
However, interfacing these components has proven complex and challenging. The first issue is using the 4K USB camera as input to the ROS2 Mediapipe node, which has been a persistent hassle. The second is controlling the robotic arms from within that same ROS2 Mediapipe package, where we ran into similar interfacing trouble.
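For the camera side of that problem, one common bring-up path looks like the following. The package, executable, and parameter names follow the community `usb_cam` ROS2 driver and standard `v4l2-ctl` usage; they are assumptions to verify against your image, not the project's confirmed setup:

```shell
# Illustrative only: getting a USB camera into ROS2 on Ubuntu.
# Package/parameter names assume the community usb_cam driver.

# 1. Find the camera device and the pixel formats it offers.
v4l2-ctl --list-devices
v4l2-ctl --device=/dev/video0 --list-formats-ext

# 2. 4K over USB usually requires MJPG; raw YUYV rarely fits the bandwidth.
v4l2-ctl --device=/dev/video0 \
    --set-fmt-video=width=3840,height=2160,pixelformat=MJPG

# 3. Publish frames on a ROS2 topic for the Mediapipe node to subscribe to.
#    Mediapipe does not need full 4K, so a lower resolution is used here.
ros2 run usb_cam usb_cam_node_exe --ros-args \
    -p video_device:=/dev/video0 \
    -p image_width:=1280 -p image_height:=720 \
    -p pixel_format:=mjpeg2rgb
```

Once frames arrive on an image topic, the Mediapipe node only needs a standard `sensor_msgs/Image` subscription rather than direct camera access, which sidesteps most of the device-level hassle.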
Acknowledgements

We would like to thank the contributors and the broader community for their support and inspiration. Special thanks to Hackster.io for their continued support in making this project a reality.