A New Spin on Computer Vision
PanoRadar is an innovative RF imaging solution that pairs a rotating mmWave radar sensor with AI for clear vision even in poor conditions.
Autonomous robotic systems, like self-driving vehicles, drones, and industrial robots, all rely on one method or another to perceive their surroundings. Very frequently they use cameras or LiDAR for this purpose, as these sensors provide rich, high-resolution data about the environment. They can, that is, as long as conditions are good. Factors like fog, smoke, dust, rain, and even changes in lighting are enough to blind a robot that relies on them. For certain applications, like self-driving vehicles, that is more than an inconvenience: incorrect or incomplete data can have tragic consequences.
There are, of course, sensing options that operate outside the visible and near-visible light spectrum, which enables them to sidestep the issues that confuse cameras and LiDAR. RF imaging systems, for example, interpret the reflections of radio waves off of nearby objects to construct a picture of the environment, and they do so without being sensitive to changes in lighting or obstructions like smoke and fog.
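As a rough illustration of the principle, the sketch below simulates how an FMCW (frequency-modulated continuous-wave) radar, the kind of mmWave sensor systems like PanoRadar build on, turns a radio reflection into a range measurement: the echo's round-trip delay appears as a beat frequency, and an FFT converts that frequency into distance. All of the parameters here are illustrative assumptions, not values from the actual system.

```python
import numpy as np

C = 3e8          # speed of light (m/s)
BW = 4e9         # assumed chirp bandwidth (Hz)
T_CHIRP = 40e-6  # assumed chirp duration (s)
FS = 10e6        # assumed ADC sample rate (Hz)
SLOPE = BW / T_CHIRP

n_samples = int(FS * T_CHIRP)
t = np.arange(n_samples) / FS

# Simulate the beat signal produced by a single reflector at 5 meters.
target_range = 5.0
beat_freq = 2 * target_range * SLOPE / C
signal = np.exp(2j * np.pi * beat_freq * t)

# Range profile: FFT bins map linearly to distance.
spectrum = np.abs(np.fft.fft(signal))
ranges = np.fft.fftfreq(n_samples, 1 / FS) * C / (2 * SLOPE)
print(f"Estimated range: {ranges[np.argmax(spectrum)]:.2f} m")
```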
Sounds about perfect, right? For some use cases, perhaps it is. However, RF imaging cannot provide resolutions anywhere near what is possible with traditional optical imaging, so the results are simply too coarse for many applications. But thanks to the work of a team of researchers at the University of Pennsylvania, that may soon no longer be the case. They have developed a powerful and inexpensive method called PanoRadar that gives robots superhuman vision via RF imaging.
PanoRadar works by integrating a single-chip mmWave radar with a motor that rotates it to effectively form a dense cylindrical array of antennas. By rotating the radar around a vertical axis, PanoRadar significantly improves angular resolution (to 2.6 degrees) and provides a full 360-degree view of the environment. The vertical placement of the radar's linear antenna array allows for beamforming along the vertical axis, which, combined with the azimuth rotation, enables detailed 3D perception. This rotation also overcomes the typical field-of-view limitations of RF sensors, providing comprehensive environmental coverage without the bulk and cost of traditional, larger mechanical radar systems.
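To make the vertical beamforming step concrete, here is a minimal Python sketch of delay-and-sum beamforming along a vertical linear array, with one array snapshot captured at each azimuth position of the rotation. The carrier frequency, element count, and spacing are illustrative assumptions rather than PanoRadar's actual configuration.

```python
import numpy as np

C = 3e8            # speed of light (m/s)
FREQ = 77e9        # assumed mmWave carrier frequency (Hz)
LAM = C / FREQ
N_ANT = 8          # assumed number of vertical array elements
D = LAM / 2        # half-wavelength element spacing
AZ_STEPS = 360     # one snapshot per degree of azimuth rotation

def elevation_beamform(snapshots, elevations_deg):
    """Delay-and-sum beamforming along the vertical array.

    snapshots: complex array of shape (AZ_STEPS, N_ANT), one array
               snapshot per azimuth position of the rotating radar.
    Returns beam power of shape (AZ_STEPS, len(elevations_deg)).
    """
    n = np.arange(N_ANT)
    out = np.empty((snapshots.shape[0], len(elevations_deg)))
    for j, el in enumerate(np.deg2rad(elevations_deg)):
        # Steering vector: expected phase progression across the array
        # for a reflector at this elevation angle.
        steer = np.exp(2j * np.pi * D * n * np.sin(el) / LAM)
        out[:, j] = np.abs(snapshots @ steer.conj()) ** 2
    return out

# Toy test: a single reflector at 20 degrees elevation, seen at all azimuths.
true_el = np.deg2rad(20)
phase = np.exp(2j * np.pi * D * np.arange(N_ANT) * np.sin(true_el) / LAM)
snaps = np.tile(phase, (AZ_STEPS, 1))

elevations = np.arange(-40, 41)
power = elevation_beamform(snaps, elevations)
print("Estimated elevation:", elevations[np.argmax(power[0])], "degrees")
```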
The system also incorporates sophisticated algorithms to manage the challenges posed by motion, especially when the robot itself is moving. Its signal processing pipeline tracks reflections from objects in the environment to estimate the robot's motion and compensate for shifts in the radar's position. Additionally, PanoRadar uses machine learning models trained with paired RF and LiDAR data to enhance resolution. The algorithms leverage the fact that indoor environments tend to have consistent patterns and geometries to boost detail and accuracy, making the system adept at recognizing objects and surfaces.
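The learning component can be sketched as a standard supervised problem: a network sees a low-resolution RF heatmap as input and a co-registered LiDAR range image as the target. The toy PyTorch setup below illustrates that idea; the architecture, tensor shapes, and random stand-in data are all assumptions for illustration, not the team's actual model or training pipeline.

```python
import torch
import torch.nn as nn

class RFSuperResolver(nn.Module):
    """Toy network mapping RF heatmaps toward LiDAR-quality range images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),  # predict a dense range image
        )

    def forward(self, rf_heatmap):
        return self.net(rf_heatmap)

model = RFSuperResolver()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in batch: RF heatmaps with LiDAR range images as supervision.
# Real training would iterate over co-registered RF/LiDAR scans.
rf_batch = torch.randn(4, 1, 64, 360)     # assumed (elevation, azimuth) grid
lidar_batch = torch.randn(4, 1, 64, 360)  # ground-truth ranges from LiDAR

for step in range(10):
    pred = model(rf_batch)
    loss = loss_fn(pred, lidar_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```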
Once deployed, PanoRadar can generate a 3D point cloud of its surroundings, enabling visual recognition tasks like object detection, semantic segmentation, and surface normal estimation. These capabilities allow mobile robots equipped with the sensor to navigate complex spaces and interact with objects and humans in various settings, such as warehouses or healthcare facilities. By making RF-based 3D imaging both accessible and cost-effective, PanoRadar opens new possibilities for mobile robot perception and enhances the versatility and safety of autonomous systems.
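For a flavor of one of those downstream tasks, the sketch below estimates surface normals from a 3D point cloud by fitting a plane to each point's nearest neighbors. This is the textbook PCA-based approach, shown here only to make the task concrete; PanoRadar's own pipeline uses learned models.

```python
import numpy as np

def estimate_normals(points, k=16):
    """points: (N, 3) array of 3D points. Returns (N, 3) unit normals."""
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        # k nearest neighbors by Euclidean distance (brute force).
        dists = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(dists)[:k]]
        # The plane normal is the eigenvector with the smallest
        # eigenvalue of the neighborhood covariance matrix.
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        eigvals, eigvecs = np.linalg.eigh(cov)
        normals[i] = eigvecs[:, 0]
    return normals

# Toy example: points on the z = 0 plane should yield normals near (0, 0, 1).
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 200),
                       rng.uniform(-1, 1, 200),
                       np.zeros(200)])
print(estimate_normals(pts)[0])  # approximately [0, 0, +/-1]
```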