We have experienced the challenges of uncertainty in specific lab environments. "Where can I get my things 3D printed?", "Who can help me do such and such?", "What are the safety protocols?" The Internet of Things (IoT) offers incredible potential to help users get trained and interact with all the resources available in the university. By creating a mobile IoT platform (a literal robot that does that :P), we can bridge the gap between physical spaces and digital resources, providing just-in-time training, equipment status monitoring, and seamless integration with university systems.
Also, what's the point of having a physical robot if it's just gonna stand there and talk to you? That's why LabAssist also transports tools across the room, guides new users through safety protocols, and provides accessibility support for those with mobility needs. Imagine you need assistance carrying objects around and are unable to do so; it would be very difficult to work without a staff member or another person constantly there to help you. So just place your tools on its head tray and it will use machine learning to follow you to your next workstation! A personal butler of sorts!
2. What it does

LabAssist is an intelligent autonomous robot that acts as your personal laboratory assistant. The system uses advanced computer vision and motion control to create a seamless assistance experience:
- Person Recognition: The robot identifies and locks onto a specific individual using computer vision, maintaining a consistent following distance (a rough sketch of this loop follows the list below).
- Obstacle Avoidance: Advanced real-time mapping allows LabAssist to navigate around objects and people in the lab environment.
- Tool Transportation: A specialized tray mounted on the robot can securely carry tools and materials, freeing up the user's hands.
- Interactive Interface: A user-friendly display provides information, responds to queries, and can be used to book resources or request assistance.
- Workplace Training: LabAssist can guide new users through safety protocols, equipment usage, and lab procedures.
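The "locks onto a person" behaviour is, at its core, a small control loop over the detections. Below is a minimal, illustrative sketch (not our exact code): `follow_step()` is a hypothetical function that takes the tracked person's horizontal centre and apparent size in the camera frame and turns them into coarse drive/steer commands.

```python
# Minimal sketch of the person-following loop (illustrative only).
# The detection is assumed to be (cx, area): the person's horizontal centre
# and bounding-box area, both normalised to 0..1 in the image frame.

TARGET_AREA = 0.15     # apparent box size at a comfortable following distance
CENTRE_DEADBAND = 0.05 # ignore small offsets so the robot doesn't wiggle

def follow_step(detection):
    """Turn one detection into a coarse 'DRIVE:STEER' command string."""
    if detection is None:
        return "STOP"
    cx, area = detection
    # Steer towards the person if they drift away from the image centre.
    if cx < 0.5 - CENTRE_DEADBAND:
        steer = "LEFT"
    elif cx > 0.5 + CENTRE_DEADBAND:
        steer = "RIGHT"
    else:
        steer = "STRAIGHT"
    # Drive forward while the person looks small (far away), hold when close.
    drive = "FORWARD" if area < TARGET_AREA else "HOLD"
    return f"{drive}:{steer}"

# Example: person slightly to the right and far away -> drive forward, steer right
print(follow_step((0.62, 0.08)))  # "FORWARD:RIGHT"
```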
LabAssist combines hardware and software components to create a full-stack solution:

1. Hardware Layer:
- Raspberry Pi 5 with Sony's IMX500 AI-powered camera module
- Arduino Uno with L293D motor driver for motor control
- USART communication between Pi and Arduino
- 4 12V DC motors with a chassis built from Expanded Polystyrene
- Interactive display for the user interface (quite literally, it can be an iPad)
2. Software Layer:
- Computer Vision: Pre-provided models for the IMX500 for person detection and tracking
- Control System: Arduino IDE for motor control programming
- Navigation Logic: Python backend sending commands to the Arduino (see the serial sketch after this list)
- User Interface: Web-based interface for interaction and information display
- IoT Integration: API connections to lab booking systems and communication tools
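The "commands to the Arduino" are just short messages streamed over the serial (USART) link. Here is a minimal sketch of the Pi side using pyserial; the port name, baud rate, and single-letter command scheme are illustrative assumptions, not necessarily what the final firmware expects:

```python
import time
import serial  # pyserial

# Illustrative port/baud; the real values depend on how the Uno is connected and flashed.
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)
time.sleep(2)  # give the Uno a moment to reset after the port opens

def send_command(cmd: str) -> None:
    """Send a single-letter motor command, e.g. 'F' forward, 'L' left, 'S' stop."""
    arduino.write((cmd + "\n").encode("ascii"))

# Example: creep forward for half a second, then stop.
send_command("F")
time.sleep(0.5)
send_command("S")
```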
The prototyping process: we realised that a small version would not be sufficient for human-computer interaction and that we would need to build a larger one; however, weight would be a big issue. Hence, after the initial blocking and scaling, we moved on to construct the robot from Expanded Polystyrene:
- Optimizing ML models to run efficiently on the IMX500 AI camera while maintaining tracking accuracy.
- Developing a reliable communication protocol between the Raspberry Pi and Arduino to ensure smooth motor control with minimal latency.
- Fine-tuning the robot's following behavior to maintain appropriate distances in varied lab environments.
- Balancing computational resources between vision processing, navigation algorithms, and the user interface.
- The physical design of the buggy - Just look at it 🥹
- The 3D Lidar Path tracking - woah, looks so realistic 🤯
- Accomplishing all of this ideation and execution of a physical product in just 24 hours of no sleep!
The Sony IMX500 camera's ability to handle ML processing directly on the sensor revolutionized our approach to robot vision. Much like how the human visual system pre-processes information at the retina before sending it to the brain, the IMX500 processes visual data at the source before relaying results to the Raspberry Pi. This architecture allows for faster response times, lower power consumption, and more efficient resource allocation, all critical factors for a responsive robot assistant. We also discovered that this distributed processing approach opens the possibility of scaling to multiple camera inputs without overwhelming the main processor, so you could place several such cameras around the robot, or around the environment, for full mapping of every activity.
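To give a flavour of what "results instead of frames" looks like, here is a hedged sketch based on the public picamera2 IMX500 examples. The model path and the output tensor layout are assumptions that depend on the network you load; the point is that the Pi only reads back small tensors attached to the frame metadata.

```python
# Hedged sketch, modelled on the picamera2 IMX500 examples.
from picamera2 import Picamera2
from picamera2.devices import IMX500

# A packaged detection network; the path is illustrative and depends on your install.
MODEL = "/usr/share/imx500-models/imx500_network_ssd_mobilenetv2_fpnlite_320x320_pp.rpk"

imx500 = IMX500(MODEL)                 # loads the network onto the sensor itself
picam2 = Picamera2(imx500.camera_num)
picam2.start(picam2.create_preview_configuration())

while True:
    # Inference already happened on the sensor; we only read back its outputs.
    metadata = picam2.capture_metadata()
    outputs = imx500.get_outputs(metadata, add_batch=True)
    if outputs is None:
        continue  # no inference result attached to this frame yet
    # For detection networks the outputs typically hold boxes/scores/classes;
    # the exact layout depends on the chosen model.
    print([o.shape for o in outputs])
```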