Robots Unite!

A distributed system called Kimera-Multi enables robots to map large areas by sharing the details of their explorations with one another.

Nick Bild
Building a map of the environment with the help of many robots (📷: MIT)

Robots have come a long way in learning to understand and navigate their surroundings thanks to technologies like Simultaneous Localization and Mapping (SLAM). SLAM allows robots to map out unknown environments while simultaneously determining their own position within them. By fusing data from sensors such as cameras, lidar, and wheel encoders for odometry, robots can build detailed maps and localize themselves in real time.
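
To make the core idea concrete, here is a toy sketch of the SLAM loop in Python: odometry increments update the robot's pose estimate (localization) while range-bearing observations are projected into the world frame (mapping). Everything here, from the function names to the measurement values, is illustrative; Kimera's actual estimator fuses visual and inertial data with nonlinear optimization rather than dead reckoning.

```python
import math

# Toy illustration of the SLAM idea: fuse odometry to track the robot's
# pose while placing range-bearing landmark observations into a map.
# This is a simplified dead-reckoning sketch, not Kimera's pipeline.

def update_pose(pose, odom):
    """Apply an odometry increment (dx, dy in the robot frame, dtheta)."""
    x, y, theta = pose
    dx, dy, dtheta = odom
    x += dx * math.cos(theta) - dy * math.sin(theta)
    y += dx * math.sin(theta) + dy * math.cos(theta)
    return (x, y, theta + dtheta)

def observe_landmark(pose, rng, bearing):
    """Project a range-bearing measurement into the world frame."""
    x, y, theta = pose
    return (x + rng * math.cos(theta + bearing),
            y + rng * math.sin(theta + bearing))

pose = (0.0, 0.0, 0.0)
landmarks = []
for odom, obs in [((1.0, 0.0, 0.0), (2.0, 0.5)),
                  ((1.0, 0.0, math.pi / 2), (1.5, -0.3))]:
    pose = update_pose(pose, odom)                  # localization step
    landmarks.append(observe_landmark(pose, *obs))  # mapping step

print(f"pose: {pose}")
print(f"map:  {landmarks}")
```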

Despite these successes, robots still have difficulty mapping large-scale environments. This limitation has hindered their applicability to problems in factory automation, search and rescue, intelligent transportation, planetary exploration, and other areas. Simply put, a single robot can only map a region so quickly.

So why not use more robots? That is the idea put forth by a group at MIT that has developed a multi-robot mapping technology called Kimera-Multi. It is a distributed system in which each robot runs its own copy of the mapping software. When robots come within communication range of one another, they can share their maps. This allows each robot to build a larger, more accurate map of its environment with a little help from its friends.
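
The rendezvous pattern the team describes can be pictured with a minimal sketch: each robot carries its own map, and when two robots come within communication range they exchange and merge what they have. The class, range threshold, and set-based map below are all assumptions for illustration; the real system exchanges pose graphs and runs distributed optimization, not simple set unions.

```python
import math

# Hypothetical sketch of the rendezvous pattern: robots keep local
# maps and merge them opportunistically when within radio range.

COMM_RANGE = 10.0  # meters (assumed value for this sketch)

class Robot:
    def __init__(self, name, position):
        self.name = name
        self.position = position
        self.map = set()  # simplified map: a set of observed cell IDs

    def explore(self, cells):
        self.map.update(cells)

    def in_range(self, other):
        dx = self.position[0] - other.position[0]
        dy = self.position[1] - other.position[1]
        return math.hypot(dx, dy) <= COMM_RANGE

    def merge_with(self, other):
        shared = self.map | other.map
        self.map = set(shared)
        other.map = set(shared)

a, b = Robot("alpha", (0, 0)), Robot("bravo", (6, 8))
a.explore({"cell_1", "cell_2"})
b.explore({"cell_3"})
if a.in_range(b):    # the robots are 10 m apart, just within range
    a.merge_with(b)  # both robots now hold the combined map
print(a.map, b.map)
```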

Each robot is equipped with visual and inertial sensors, and that data is fed into the Kimera software, which computes local trajectory and 3D mesh estimates from the sensor data collected by that robot. When a pair of robots come close enough to communicate wirelessly, the algorithm leverages both of their data to perform inter-robot place recognition, relative pose estimation, and distributed trajectory estimation. These robots can then share their larger, more accurate maps with still more robots as they come into range.
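
The inter-robot step hinges on estimating a relative pose between two robots' reference frames; once that transform is known, one robot can fold the other's trajectory into its own map. The 2D SE(2) sketch below shows only that frame-composition idea, with made-up values; Kimera-Multi operates in 3D and refines these estimates with distributed trajectory optimization.

```python
import math

# Minimal SE(2) illustration of inter-robot relative pose estimation:
# given T_A_B (robot B's frame expressed in robot A's frame), robot A
# can re-express B's local trajectory in its own map frame.

def compose(t_ab, p_b):
    """Express a pose p_b = (x, y, theta) in frame A via T_A_B."""
    tx, ty, th = t_ab
    x, y, ptheta = p_b
    return (tx + x * math.cos(th) - y * math.sin(th),
            ty + x * math.sin(th) + y * math.cos(th),
            th + ptheta)

# Relative pose found at the rendezvous (assumed values).
T_A_B = (5.0, 2.0, math.pi / 2)

# Robot B's local trajectory, in B's own frame.
traj_b = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, math.pi / 4)]

# After the exchange, robot A folds B's poses into its own map frame.
for p in (compose(T_A_B, p) for p in traj_b):
    print(tuple(round(v, 3) for v in p))
```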

The maps are additionally annotated with human-readable semantic labels (e.g., building, road, person). These labels are the raw data needed for next-generation spatial perception, or spatial artificial intelligence, applications, and they allow higher-level decision-making algorithms to be built on top. Kimera-Multi is highly modular, however, so specific features like semantic annotation or mesh reconstruction can be disabled to suit different use cases, as sketched below.
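
That modularity might look something like the following configuration sketch, in which optional pipeline stages are toggled per deployment. The stage names and the build_pipeline helper are hypothetical, not Kimera-Multi's real configuration interface.

```python
# Hypothetical configuration sketch for the modularity described above.
# Stage names are illustrative, not Kimera-Multi's actual schema.

DEFAULT_PIPELINE = {
    "visual_inertial_odometry": True,   # core: always on
    "mesh_reconstruction": True,        # optional 3D mesh output
    "semantic_annotation": True,        # optional human-readable labels
    "inter_robot_loop_closure": True,   # optional multi-robot stage
}

def build_pipeline(**overrides):
    """Return a pipeline config with selected stages toggled."""
    config = dict(DEFAULT_PIPELINE)
    for stage, enabled in overrides.items():
        if stage not in config:
            raise KeyError(f"unknown stage: {stage}")
        config[stage] = enabled
    return config

# A bandwidth-constrained deployment might skip meshes and semantics.
lean = build_pipeline(mesh_reconstruction=False, semantic_annotation=False)
print([stage for stage, on in lean.items() if on])
```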

The system was evaluated in multiple photo-realistic simulation environments (Medfield, City, and Camp) to assess its performance. Kimera-Multi was also evaluated on a pair of outdoor datasets collected from physical robots to keep the experiments as close to real-world conditions as possible. The team found that Kimera-Multi outperformed state-of-the-art algorithms in terms of robustness and accuracy. And despite being a fully distributed system, the new method performed comparably to centralized SLAM systems.
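
The article does not quote the exact metrics, but a standard way SLAM trajectory accuracy is measured is the absolute trajectory error (ATE): the root-mean-square distance between estimated and ground-truth poses. The sketch below assumes the trajectories are already aligned and time-associated, which real evaluations handle explicitly.

```python
import math

# Common SLAM accuracy metric: absolute trajectory error (ATE), the
# RMS positional error between estimated and ground-truth trajectories.
# Assumes the two trajectories are pre-aligned and equal-length.

def ate_rmse(estimated, ground_truth):
    """RMSE of positional error between two equal-length 2D trajectories."""
    assert len(estimated) == len(ground_truth)
    sq = [(ex - gx) ** 2 + (ey - gy) ** 2
          for (ex, ey), (gx, gy) in zip(estimated, ground_truth)]
    return math.sqrt(sum(sq) / len(sq))

est = [(0.0, 0.0), (1.1, 0.0), (2.0, 0.2)]
gt  = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(f"ATE RMSE: {ate_rmse(est, gt):.3f} m")
```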

A pair of videos was released showing how Kimera-Multi builds up and refines a 3D map of a large area. The first was captured in a simulated environment, and the second was recorded using one of the outdoor datasets. They serve as great visual demonstrations of what this software can do by leveraging information gathered by multiple robots.

For those who are interested in a deeper dive, the researchers have released their source code on GitHub.
