Teaching Robots for Dummies
The augmented reality-based Learning from Demonstration system helps non-expert users efficiently train robots to perform new tasks.
In the ever-evolving landscape of robotics, there is a growing emphasis on designing and developing robots that can effectively navigate unstructured, real-world environments, providing invaluable assistance to humans in a myriad of tasks. These robots are envisioned to be versatile problem-solvers, capable of adapting to dynamic and unpredictable scenarios. Tasks that could benefit from such robotic assistance range from disaster response and search-and-rescue operations to warehouse logistics and household chores.
One of the key challenges in creating robots for unstructured environments lies in their ability to learn and generalize tasks efficiently. Traditional programming methods fall short in addressing the diverse and complex nature of real-world tasks. To overcome this hurdle, contemporary robotic systems often leverage machine learning. Robots are trained by demonstrating tasks, a process known as imitation learning. While this method has shown promise, it is not without its challenges.
To ensure the effectiveness of these robots, demonstrations are typically performed by experts, who meticulously break down tasks into numerous subtasks. This detailed breakdown allows the robot to learn the intricacies of each step. However, this process is labor-intensive, time-consuming, and inefficient. Each new task requires substantial computational power to process the vast amounts of data generated during training. As a result, the scalability of training robots for a wide range of tasks becomes a significant hurdle in achieving widespread deployment of these systems.
A multi-institutional effort including engineers from Carnegie Mellon University and Monash University is working to make robots more effective and practical through a system that they call Learning from Demonstration (LfD). Unlike traditional approaches, LfD collects training data for machine learning algorithms from individuals who are not experts in robotics. The approach is iterative: if the robot is not initially successful, the user can simply provide more demonstrations until the robot performs the task as desired.
To turn non-expert humans into good teachers, the researchers use a measure of uncertainty called task-related information entropy. This metric helps select informative demonstrations that give the robot the information it needs to perform a task in a generalized way. It also helps avoid problems that plague many existing datasets, such as low-quality data and insufficient examples. These problems not only make it challenging for a robot to learn a new task, but can also actively mislead the robot.
As the user provides a robotic system with demonstrations, LfD highlights the specific areas that contribute most to the system’s uncertainty about completing the task. These insights direct human teachers toward clearing up the problem areas. They also minimize the teacher’s effort: because each demonstration carries a high density of useful information, massive data collection efforts become unnecessary.
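The idea of steering teachers toward uncertain regions can be illustrated with a minimal sketch. The snippet below is a hypothetical illustration, not the researchers' actual implementation: it scores candidate task states by the Shannon entropy of the robot's predicted action distribution and ranks them so that the most uncertain states, where a new demonstration would be most informative, come first. The function names and the example distributions are assumptions for the sake of illustration.

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def rank_states_by_uncertainty(predictions):
    """Rank candidate task states from most to least uncertain.

    `predictions[i]` is the robot's predicted action distribution at
    state i; higher entropy means the robot is less sure what to do
    there, so a demonstration at that state is more informative.
    """
    scored = [(entropy(p), i) for i, p in enumerate(predictions)]
    return [i for _, i in sorted(scored, reverse=True)]

# Hypothetical predicted action distributions at three task states.
preds = [
    [0.98, 0.01, 0.01],  # robot is confident here
    [0.34, 0.33, 0.33],  # robot is highly uncertain here
    [0.70, 0.20, 0.10],  # robot is moderately uncertain here
]
print(rank_states_by_uncertainty(preds))  # → [1, 2, 0]
```

In a system like the one described, the top-ranked states would be the "problem areas" highlighted to the teacher through the augmented reality interface, prompting a demonstration exactly where it reduces uncertainty most.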
An experiment was conducted to assess the utility of LfD. A group of 24 participants, all non-experts in robotics, used an augmented reality-based system that guided them as they demonstrated a task. Those using the LfD system trained robots with almost 200% greater efficiency than those who did not.
The team hopes that their work will help to democratize robotics and bring about an era in which robots can assist humans with far more tasks than they are capable of today.