You Can Turn Wi-Fi Into Cheap X-Ray Vision
Hackers built a VR system that uses Wi-Fi and AI to detect people through walls, proving X-ray vision is not just for superheroes.
Of all the superhero powers that have been imagined in comic books and movies over the years, the ability to fly has got to be the most universally coveted. But a close second for most people is X-ray vision, of the sort possessed by Superman. Sure, it completely ruins the game of hide-and-seek, but all the same, looking through solid objects on a whim would be a blast. And it would be useful, too — imagine if first responders could see through walls when they arrive at the scene of a disaster, for instance.
But alas, X-ray vision is only for the aliens and mutants found in superhero stories, not us humans. Or is it? With the help of some relatively inexpensive technology, hardware hacker Jared Mantell and a few friends recently demonstrated how we actually can see through walls. The team’s system combines a popular virtual reality (VR) headset, invisible electromagnetic radiation, and an artificial intelligence (AI) algorithm that interprets the reflections of that radiation to locate people who are hidden behind walls or other obstructions.
Rather than using actual X-rays (and literally mutating oneself), the device relies on Wi-Fi signals to track human movement. Unlike fictional superpowers that let someone see directly through objects, this technology works by analyzing how Wi-Fi signals bounce off and interact with the environment, allowing it to identify people even when they are hidden from view.
At the heart of the system is a pair of low-cost ESP32 microcontrollers that together act as a makeshift software-defined radio. One ESP32 continuously transmits Wi-Fi signals, while the second receives the reflections after they have interacted with the environment.
In particular, the setup looks at the Channel State Information (CSI) data that describes the characteristics of a Wi-Fi channel. This data is sent to an NVIDIA Jetson Nano edge AI computing device for processing. The Jetson runs a convolutional neural network (CNN) trained to recognize human presence from subtle distortions in the Wi-Fi signal. Once the algorithm detects a person, it estimates their location and renders it as green dots in a Meta Quest VR headset using Unity’s spatial rendering tools. Because the headset operates in passthrough mode, the green dots are superimposed on the user’s view of the real world.
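To make the CSI idea concrete, here is a minimal Python sketch of the kind of preprocessing the Jetson would do on each incoming frame. It assumes the receiving ESP32 streams each CSI frame over USB serial as comma-separated int8 values, interleaved as (imaginary, real) pairs per subcarrier, which is the format used by common ESP32 CSI firmware; the port name and baud rate are placeholders, not details from the project.

```python
# Sketch: turning a raw ESP32 CSI frame into per-subcarrier amplitudes.
# Assumes comma-separated int8 values over serial, interleaved as
# (imaginary, real) pairs per subcarrier. Port and baud are hypothetical.
import numpy as np
import serial

PORT = "/dev/ttyUSB0"   # hypothetical serial port on the Jetson
BAUD = 921600

def csi_amplitudes(raw_values):
    """Convert interleaved (imag, real) int8 pairs into one amplitude
    per subcarrier: |h| = sqrt(I^2 + Q^2)."""
    iq = np.asarray(raw_values, dtype=np.float32).reshape(-1, 2)
    return np.hypot(iq[:, 0], iq[:, 1])

with serial.Serial(PORT, BAUD, timeout=1) as link:
    while True:
        line = link.readline().decode(errors="ignore").strip()
        values = [int(v) for v in line.split(",") if v.lstrip("-").isdigit()]
        if len(values) < 2 or len(values) % 2:
            continue  # skip incomplete or malformed frames
        amps = csi_amplitudes(values)
        print(f"{len(amps)} subcarriers, mean amplitude {amps.mean():.1f}")
```

A person moving behind a wall perturbs these per-subcarrier amplitudes over time, and it is that temporal pattern the downstream model learns to recognize.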
Building a real-time, wall-penetrating person detection system was no easy task. One of the biggest hurdles was managing the sheer volume of raw data flowing from the ESP32s to the Jetson. The team had to carefully balance signal quality with real-time performance, fine-tuning factors like sampling rate, packet window size, and subcarrier count to ensure stable operation.
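The tuning knobs the team mentions map naturally onto a sliding-window buffer. The sketch below shows one way such windowing might look in Python; the specific window size, subcarrier count, and stride are illustrative values, not the team’s actual settings.

```python
# Sketch of CSI windowing: buffer a stream of per-frame amplitude arrays
# and emit fixed-size (window x subcarrier) arrays for the model.
# All three constants are tunable knobs, chosen here for illustration.
from collections import deque
import numpy as np

PACKET_WINDOW = 64      # frames per inference window
NUM_SUBCARRIERS = 52    # subcarriers kept per frame
STRIDE = 16             # frames to slide between consecutive windows

class CSIWindower:
    def __init__(self):
        self.buf = deque(maxlen=PACKET_WINDOW)
        self.since_last = 0

    def push(self, amplitudes):
        """Add one frame; return a (window, subcarriers) array when a
        full window is ready, otherwise None."""
        self.buf.append(amplitudes[:NUM_SUBCARRIERS])
        self.since_last += 1
        if len(self.buf) == PACKET_WINDOW and self.since_last >= STRIDE:
            self.since_last = 0
            return np.stack(self.buf)
        return None
```

Raising the sampling rate or window size gives the model more signal to work with but increases the data volume the Jetson must keep up with, which is exactly the trade-off the team had to balance.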
Another challenge was deploying the machine learning model on the Jetson Nano, which has just 2GB of memory. The team kept their CNN compact, using batch normalization and dropout layers so that inference stays fast on the constrained hardware while the model remains robust to noisy signals.
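For readers curious what such a model might look like, here is a minimal PyTorch sketch that treats a CSI window as a one-channel "image." The article only tells us the model is a CNN with batch normalization and dropout; the layer sizes, structure, and output head here are guesses sized for a 2GB Jetson Nano, not the team’s actual architecture.

```python
# A minimal sketch of a CSI-classification CNN in PyTorch, assuming a
# (1 x window x subcarrier) amplitude input. Architecture is illustrative.
import torch
import torch.nn as nn

class CSINet(nn.Module):
    def __init__(self, num_outputs=3):  # e.g. presence score + (x, y) estimate
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.BatchNorm2d(16),   # stabilizes training on noisy CSI
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.3),      # guards against overfitting to one room
            nn.LazyLinear(num_outputs),
        )

    def forward(self, x):
        return self.head(self.features(x))

model = CSINet().eval()
dummy = torch.randn(1, 1, 64, 52)  # (batch, channel, window, subcarriers)
print(model(dummy).shape)          # torch.Size([1, 3])
```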
Unity integration also posed some difficulties, as direct data streaming from the Jetson Nano to the VR headset proved unreliable. To address this, the team built a WebSocket server on the Jetson to facilitate real-time communication between components.
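A minimal version of that bridge is easy to picture: the Jetson hosts a small server and pushes each detection to any connected client, here the Unity app running on the Quest. The sketch below uses the third-party Python `websockets` package; the port number and JSON message format are assumptions, not details from the project.

```python
# Sketch of a WebSocket bridge on the Jetson: broadcast each detection
# as a JSON message to all connected clients (e.g. the Unity/Quest app).
import asyncio
import json
import websockets

CLIENTS = set()

async def handler(ws):
    CLIENTS.add(ws)
    try:
        await ws.wait_closed()
    finally:
        CLIENTS.discard(ws)

async def broadcast_detection(x, y, confidence):
    msg = json.dumps({"x": x, "y": y, "confidence": confidence})
    await asyncio.gather(
        *(c.send(msg) for c in CLIENTS),
        return_exceptions=True,  # ignore clients that dropped mid-send
    )

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        # In the real pipeline the inference loop would call
        # broadcast_detection() per frame; this stub sends a test point.
        while True:
            await broadcast_detection(1.2, 0.4, 0.9)
            await asyncio.sleep(0.1)

asyncio.run(main())
```

On the Unity side, the headset would simply subscribe to this socket and spawn a green dot at each reported position.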
Though the prototype successfully demonstrated the concept, the team has ambitious plans to improve it. First, they aim to upgrade the system by replacing the ESP32 microcontrollers with professional-grade software-defined radios. This would improve range, resolution, and signal clarity. They also plan to refine the AI model with better training data, as the initial tests were conducted in relatively controlled environments.
What started as a quick hack could one day evolve into a life-saving tool for first responders, all while proving that with the right combination of AI, hardware, and ingenuity, even X-ray vision is not beyond our reach.