
The Open Source Robotics Transformer 1 Aims to Help Robots Learn From Other Robots

Released under a permissive license, RT-1 lets robots of one type learn from the experiences of other, completely different, robots.

Researchers at Google Research's robotics team and at Everyday Robots have come up with a way to help robots learn from each other and absorb huge amounts of data to boost performance: the Robotics Transformer 1 (RT-1), which, sadly, is neither an Autobot nor a Decepticon.

"Earlier this year, we worked with Everyday Robots to demonstrate that integrating a powerful language model such as PaLM into a robot learning model could not only enable people to communicate with a robot — but also improve the robot’s overall performance," explains Vincent Vanhoucke, head of robotics at Google Research. "This language model made it possible for helper robots to understand several types of requests — like 'I’m hungry, bring me a snack' or 'help me clean up this spill' — and execute them.

"Now, we're using the same architectural foundation as PaLM's – the Transformer – to help robots learn more generally from what they’ve already seen. So rather than merely understanding the language underpinning a request like 'I’m hungry, bring me a snack,' it can learn — just like we do — from all of its collective experiences doing things like looking at and fetching snacks."

Google is looking to help robots help robots by letting them learn from each other's experiences. (📹: Google Research)

The companies' earlier research on the topic resulted in PaLM-SayCan, which was designed to give robots a better understanding of natural language commands — and that resulted in a 26 percent boost to long-horizon task planning in testing. This time around, though, the aim is to help the robots help themselves — by dramatically increasing the amount of data they can ingest through knowledge transfer.

The Robotics Transformer 1 (RT-1) is designed to tokenize robotic input and output actions — things like camera feeds, task instructions, and commands to motors — in order to allow for run-time inference efficient enough for real-time control. Trained on a 130,000-episode dataset of more than 700 tasks, gathered from an Everyday Robots fleet over a 17-month period, RT-1 proved capable of significantly improving generalization across new tasks, objects, and environments, boosting its accuracy by observing other robots in action.
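The paper describes discretizing each dimension of the robot's continuous action space into 256 bins, so the transformer can treat motor commands like vocabulary tokens. A minimal sketch of what that tokenization step might look like follows; the bounds, the single-dimension example, and the helper names are invented for illustration and are not taken from the RT-1 codebase.

```python
import numpy as np

# Illustrative bounds for one continuous action dimension (say, a gripper
# displacement in metres); real bounds would come from the robot itself.
ACTION_LOW, ACTION_HIGH = -0.5, 0.5
NUM_BINS = 256  # the RT-1 paper discretizes each action dimension into 256 bins

def tokenize_action(value: float) -> int:
    """Map a continuous action value to a discrete token (a bin index)."""
    clipped = np.clip(value, ACTION_LOW, ACTION_HIGH)
    fraction = (clipped - ACTION_LOW) / (ACTION_HIGH - ACTION_LOW)
    return int(round(fraction * (NUM_BINS - 1)))

def detokenize_action(token: int) -> float:
    """Map a bin index back to a continuous action value."""
    fraction = token / (NUM_BINS - 1)
    return ACTION_LOW + fraction * (ACTION_HIGH - ACTION_LOW)

# A commanded displacement of 0.1 m becomes token 153, which decodes
# back to (approximately) the same 0.1 m.
token = tokenize_action(0.1)
print(token, detokenize_action(token))
```

Once actions are expressed as tokens, predicting the next motor command becomes the same kind of problem as predicting the next word, which is what lets a language-model architecture like the Transformer drive a robot at all.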

"As human beings, we learn from our personal experiences and from each other. We often share what we've learned and rework systems based on failures we've encountered," says Vanhoucke. "While our robots don't communicate with each other, this research shows that we can successfully combine datasets from different types of robots and transfer behaviors across them.

RT-1 is a multi-task transformer, and has proven capable of boosting robots' task accuracy. (📹: Google Research)

"In fact," Vanhoucke continues, "our research shows that by combining data from different robots we're able to nearly double the model's ability to generalize to a new scene. That means that as we continue to experiment with different robots and new tasks, we may be able to augment the training data for RT-1 to improve robot behavior, making it a flexible and scalable approach to robot learning."

The team's work is detailed in a paper available under open-access terms from the project website; the source code for RT-1 has been released on GitHub under the permissive Apache 2.0 license.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.