Is AI On Your Radar?

An approach inspired by computer vision has breathed new life into adaptive radar systems by enhancing their object tracking capabilities.

Nick Bild
2 months ago · Machine Learning & AI
Digital landscapes included in the researchers' dataset (📷: Duke University)

Locating and tracking objects with radar has applications in fields such as aviation, maritime navigation, military defense, and autonomous vehicles. The techniques used to interpret these signals have been continually refined since they were first put to work almost a century ago. Adaptive radar systems have advanced to the point that they are among the best radar-based tools we presently have for tracking objects. They traditionally perform an initial signal processing step before estimating the location of an object through model fitting.
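As a loose illustration of the classical pipeline described above (not the researchers' actual algorithm), a basic radar processing step estimates a target's range delay by matched-filtering the received echo against the known transmitted pulse, then "fitting" by picking the correlation peak. All signal parameters here are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Transmitted pulse: a short linear chirp (parameters are illustrative).
n_pulse = 64
t = np.arange(n_pulse)
pulse = np.sin(2 * np.pi * (0.05 + 0.002 * t) * t)

# Received signal: the pulse delayed by 100 samples, buried in noise.
true_delay = 100
n_rx = 512
rx = 0.3 * rng.standard_normal(n_rx)
rx[true_delay:true_delay + n_pulse] += pulse

# Matched filter = cross-correlation of the received signal with the pulse;
# the lag of the correlation peak is the estimated delay (range).
corr = np.correlate(rx, pulse, mode="valid")
est_delay = int(np.argmax(corr))

print(est_delay)  # should land at or very near true_delay
```

In a real adaptive radar system this stage is far more elaborate (clutter suppression, adaptive filtering, multi-channel fitting), which is where the computational expense discussed below comes from.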

But despite their successes, these methods have hit a wall that prevents further progress. As a result, adaptive radar systems are impractical for many applications where a high level of precision is needed. Furthermore, the processing algorithms required by these systems are computationally expensive, which increases both the cost and size of the devices that implement them. Without a significant shift in direction, radar-based object tracking may cease to be a viable option for many use cases.

Recently, however, a team led by researchers at Duke University drew inspiration from the artificial intelligence algorithms that transformed the field of computer vision. Specifically, the researchers developed a convolutional neural network (CNN) that can translate radar data into a prediction of an object’s location and velocity. These are the same types of networks that are commonly used for image classification and object detection tasks.
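The paper does not spell out the network's layers, but the general shape of the idea can be sketched: a small CNN forward pass that maps a radar return map (e.g. a range-Doppler image) to a four-value regression output for position and velocity. Everything below is an assumed, schematic architecture with random (untrained) weights, purely to show the data flow.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d(x, k):
    """Naive valid-mode 2D convolution, for illustration only."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def cnn_forward(radar_map, kernels, w_head, b_head):
    # One conv layer: ReLU per filter, then global average pooling
    # collapses each feature map to a single scalar.
    feats = np.array([conv2d(radar_map, k).clip(min=0).mean() for k in kernels])
    # Linear regression head producing (x, y, velocity_x, velocity_y).
    return w_head @ feats + b_head

radar_map = rng.standard_normal((32, 32))   # stand-in for a range-Doppler map
kernels = rng.standard_normal((8, 3, 3))    # 8 random 3x3 filters
w_head = rng.standard_normal((4, 8)) * 0.1  # regression head weights
b_head = np.zeros(4)

pred = cnn_forward(radar_map, kernels, w_head, b_head)
print(pred.shape)  # a 4-vector: estimated position and velocity
```

In practice such a network would be much deeper and would be trained end-to-end on simulated radar returns, replacing both the hand-crafted signal processing and the model-fitting stages.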

Any machine learning algorithm needs a large, diverse, and high-quality source of data to learn from before it can be put to work, and one reason for the early successes of CNNs in image processing tasks was the release of the massive ImageNet dataset, which consists of over 14 million annotated images. No such resource existed for adaptive radar applications, so the team compiled an enormous dataset of digital landscapes, and they open-sourced it so that other developers and researchers could benefit from their work. The data, about 16 terabytes in size, was generated with the help of an RF modeling and simulation tool called RFView.

When benchmarked against traditional processing algorithms, the new approach achieved significant performance gains, with the CNN-based processing localizing objects up to seven times more accurately in some cases. It is worth noting, however, that the experiments were all conducted in simulation, so the team’s approach has not yet been tested in the field.

The researchers’ goal is to move the state-of-the-art in the field forward, so they are trying to make their work accessible to the community. One of the lead researchers involved in this project stated that “as we move forward and continue adding capabilities to the dataset, we want to provide the community with everything it needs to push the field forward into using AI.”
