Do the Robot!
Robots can get down and dance, high five, and generally act natural with ExBody, which combines motion capture with reinforcement learning.
Robots are not exactly known for their graceful movements. In fact, to move robotically has become synonymous with making jerky, unnatural motions devoid of any emotion or warmth. As robots become a more common sight in our everyday lives, this will be more than just a little quirk. If people are to interact with robots in a positive way, rather than run screaming in the other direction, the machines will need to act in a much more natural, human-like manner.
There is still a lot of work to be done in this area. Roboticists face many technical challenges before they can create robot behavior natural enough to overcome the images of humanoid robots that people carry in their heads from movies like The Terminator. Engineers at the University of California San Diego believe that they are well on their way to solving this problem. Using novel techniques, they have created a robot that can dance, high five, and all around act like one of the cool kids of the robot world. Their hope is that people will find this disarming, and that it will help the robots build trust with their inferior biological counterparts.
After studying prior work, the team found that physics-based character animation and reinforcement learning from human motion data are among the best options available today for producing natural-looking motion in robots. However, real-world actuators cannot keep up with the demands of animation-based approaches, and humans have many more degrees of freedom than most robot hardware offers. For these reasons, these approaches still leave much to be desired.
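To get a feel for that mismatch, consider what happens when a human motion capture clip is mapped onto a robot with far fewer joints. The sketch below is purely illustrative (the joint counts, indices, and limits are our assumptions, not details from the team's pipeline), but it shows the kind of retargeting step any mocap-driven approach needs:

    import numpy as np

    # A human mocap skeleton can easily have 50+ degrees of freedom,
    # while a humanoid robot may have only around 20. These numbers,
    # indices, and limits are placeholders for illustration.
    HUMAN_DOF = 51
    ROBOT_JOINTS = [0, 1, 2, 5, 6, 9, 10, 13, 14,
                    17, 18, 21, 22, 25, 26, 29, 30, 33, 34]
    ROBOT_LIMITS = (-1.5, 1.5)  # joint limits in radians (placeholder)

    def retarget(human_pose: np.ndarray) -> np.ndarray:
        """Map a human pose onto the robot: drop the joints the robot
        lacks, then clamp the rest to its actuator limits."""
        robot_pose = human_pose[ROBOT_JOINTS]
        return np.clip(robot_pose, *ROBOT_LIMITS)

    human_pose = np.random.uniform(-2.0, 2.0, HUMAN_DOF)
    print(retarget(human_pose))  # a 19-DoF pose the robot can attempt

Everything that falls outside the robot's joint set or limits is simply lost, which is exactly why demanding a perfect imitation of the human data is a losing proposition.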
In an effort to overcome these issues, the researchers developed an approach called Expressive Whole-Body Control (ExBody), designed to capture the expressivity of human motion. ExBody trains a whole-body controller for humanoid robots with deep reinforcement learning on large-scale human motion capture data. But crucially, the reward function for the learning algorithm did not depend on perfectly mimicking human motions.
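In practice, that means rewarding the policy for getting close to a reference pose rather than penalizing every deviation from it. Here is a minimal sketch of such a forgiving imitation reward; the exponential kernel and its width are common choices in this line of work, but they are our assumptions, not the paper's exact formulation:

    import numpy as np

    def imitation_reward(q_robot, q_ref, sigma=0.5):
        """Soft motion-imitation reward (illustrative, not ExBody's
        exact terms). An exponential kernel rewards the policy smoothly
        for getting close to the mocap reference pose instead of
        demanding an exact joint-by-joint match that the hardware
        could never achieve."""
        err = np.sum((q_robot - q_ref) ** 2)
        return np.exp(-err / (2 * sigma ** 2))

    # Each training episode tracks a clip sampled from a large mocap
    # dataset; imperfect tracking still earns a healthy reward.
    q_ref = np.random.uniform(-1.0, 1.0, size=19)   # reference joint angles
    q_robot = q_ref + np.random.normal(0, 0.1, 19)  # the robot's attempt
    print(f"reward: {imitation_reward(q_robot, q_ref):.3f}")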
Instead, the upper body and the legs of the robot were treated separately. This allowed the upper body to learn diverse, expressive motions that closely match the human references, while the legs were free to simplify the actions they could not match. Without this division of labor, the legs' relatively poor tracking would have dragged down the upper body, preventing it from learning motions it is perfectly capable of performing.
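One simple way to picture this split is a reward with two independent terms: one scoring how well the arms and torso track the reference motion, and one scoring only whether the body as a whole goes where it was told to. The weights and kernel form below are placeholder assumptions, not values from the paper:

    import numpy as np

    def decoupled_reward(q_upper, q_upper_ref, root_vel, root_vel_cmd,
                         w_upper=0.6, w_root=0.4, sigma=0.5):
        """Sketch of a decoupled whole-body reward. The upper body is
        scored against the mocap reference, while the legs are judged
        only on how well the root (torso) tracks a commanded velocity,
        never on copying the reference leg motion directly."""
        upper_err = np.sum((q_upper - q_upper_ref) ** 2)
        root_err = np.sum((root_vel - root_vel_cmd) ** 2)
        r_upper = np.exp(-upper_err / (2 * sigma ** 2))  # expressive arms/torso
        r_root = np.exp(-root_err / (2 * sigma ** 2))    # stable locomotion
        return w_upper * r_upper + w_root * r_root

    # Arms nearly on-reference, legs walking at roughly the commanded speed.
    r = decoupled_reward(q_upper=np.full(9, 0.05), q_upper_ref=np.zeros(9),
                         root_vel=np.array([0.95, 0.0, 0.0]),
                         root_vel_cmd=np.array([1.0, 0.0, 0.0]))
    print(f"reward: {r:.3f}")

Because the two terms are independent, sloppy leg tracking cannot zero out the reward the upper body earns for an expressive gesture, and vice versa.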
Aside from the enhanced expressiveness, ExBody also made the robot more robust. The team noted that it could easily maintain its footing even when walking over challenging surfaces like dirt, gravel, and grass.
At present, the robot requires input from a human operator with a controller to give it some guidance as to what actions to perform. The team hopes to add more sensing instrumentation in the future, however, with the goal of making the entire system fully autonomous. They are also working to enhance their algorithm so that later revisions will be capable of even more fine-grained tasks.
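For a concrete sense of what that guidance might look like, here is a hypothetical teleoperation message: the operator supplies only high-level intent, and the learned policy handles the joint-level control. The field names are our assumptions about such an interface, not the team's actual format:

    from dataclasses import dataclass

    @dataclass
    class OperatorCommand:
        """Hypothetical high-level command from a game controller."""
        vx: float        # desired forward velocity (m/s)
        vy: float        # desired lateral velocity (m/s)
        yaw_rate: float  # desired turning rate (rad/s)
        motion_id: int   # which expressive clip to perform (dance, high five, ...)

    # Walk forward at half speed, turning gently, while playing clip 3.
    cmd = OperatorCommand(vx=0.5, vy=0.0, yaw_rate=0.2, motion_id=3)
    print(cmd)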