The Cone of Silence Just Got Real
Penn State researchers have created "audible enclaves," specific spots to which sound can be projected for private listening without headphones.
Techies love to carry their electronics around, but they do not want to feel like they are carrying anything at all: the more these devices can disappear into the background, the better. Take earbuds, for instance. The long wires that once accompanied them have given way to tiny Bluetooth radios, and the buds themselves keep shrinking, to the point that they now fit largely within the ear itself.
A new era in audio technology
What if we take this trend to its extreme, until the earbuds are so small that they disappear completely? That is effectively what a team of researchers at Penn State University has done. They developed a technique that creates highly localized pockets of sound, called audible enclaves. A person standing inside one can hear sound produced by a remote source, while people right next to them hear nothing. It is essentially a real-life Cone of Silence.
The researchers’ work takes a novel approach to audio engineering, creating personal sound zones without any wearable hardware. It does so by harnessing ultrasonic waves, sound waves at frequencies too high for human ears to detect. By precisely controlling these waves, the team can ensure that sound is heard only at specific locations.
How the audible enclaves work
The key to making this work is a pair of ultrasound transducers paired with an acoustic metasurface, an advanced material engineered to manipulate sound waves. The transducers emit two ultrasonic beams at slightly different frequencies that travel along curved trajectories until they intersect. Where the beams cross, the air mixes them nonlinearly, generating a new tone at the difference between the two frequencies, which falls within the range of human hearing. Only at this exact intersection point does audible sound appear; anyone standing outside it hears nothing, even if they are right next to the listener.
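To get a feel for why the difference frequency is what you hear, the toy Python sketch below stands in for the real acoustics with a simple squaring nonlinearity and made-up frequencies (40 kHz and 39.5 kHz); it is not the researchers' model, just an illustration of frequency mixing. The only strong component below 20 kHz in the mixed signal is the 500 Hz difference tone.

# Toy sketch of difference-frequency mixing (illustrative only; frequencies
# and the quadratic nonlinearity are assumptions, not the paper's physics model).
import numpy as np

fs = 200_000                  # sample rate (Hz), high enough to represent ultrasound
t = np.arange(0, 0.05, 1/fs)  # 50 ms of signal
f1, f2 = 40_000, 39_500       # two slightly different ultrasonic frequencies (assumed)

beam1 = np.sin(2 * np.pi * f1 * t)
beam2 = np.sin(2 * np.pi * f2 * t)

# A quadratic term stands in for the nonlinear interaction where the beams overlap.
mixed = (beam1 + beam2) ** 2

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1/fs)

# Strongest component in the audible band sits at the 500 Hz difference frequency.
band = (freqs >= 20) & (freqs <= 20_000)
peak = freqs[band][np.argmax(spectrum[band])]
print(f"Audible peak at ~{peak:.0f} Hz (expected |f1 - f2| = {abs(f1 - f2)} Hz)")

All the other mixing products (the doubled and summed frequencies) land far above 20 kHz, which is why, outside the crossing point where only one beam is present, there is nothing for the ear to pick up.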
The acoustic metasurfaces used in this technology play a crucial role in shaping the ultrasonic beams. These surfaces contain microstructures that bend sound waves in a controlled manner, ensuring they reach the intended point of intersection. The beams can even bend around obstacles, such as human heads, making the system practical for real-world use.
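The metasurface design itself is passive and produces self-bending beams, which is beyond a short sketch, but the underlying principle of steering sound by controlling phase across a surface can be illustrated with a generic phased-source example. In the hypothetical sketch below, each element of a small array is given a phase delay that compensates for its distance to a chosen point, so the contributions add up there and largely cancel elsewhere; the element count, spacing, and frequency are assumed values.

# Generic phase-steering illustration (not the Penn State metasurface design).
import numpy as np

SPEED_OF_SOUND = 343.0                # m/s in air
f = 40_000.0                          # ultrasonic carrier frequency (assumed), Hz
k = 2 * np.pi * f / SPEED_OF_SOUND    # wavenumber

# 64 idealized point sources spaced 4 mm apart along x; target point 0.5 m away.
element_x = (np.arange(64) - 31.5) * 0.004
target = np.array([0.1, 0.5])         # (x, z) of the intended intersection point, in meters

# Phase delay per element: cancel the propagation phase on the way to the target.
dist_to_target = np.hypot(target[0] - element_x, target[1])
phase = (-k * dist_to_target) % (2 * np.pi)

def field_at(point):
    """Sum the idealized contributions of all elements at a field point."""
    d = np.hypot(point[0] - element_x, point[1])
    return np.abs(np.sum(np.exp(1j * (k * d + phase)) / d))

print("pressure proxy at target:  ", round(field_at(target), 1))
print("pressure proxy 10 cm away: ", round(field_at((target[0] + 0.1, target[1])), 1))

Running it shows a much larger value at the target than a short distance away, which is the basic trick any wavefront-shaping surface exploits, whether the phase control comes from electronics or from engineered microstructures.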
To test their system, the researchers placed microphones inside a dummy head to simulate human hearing. They found that only at the designated intersection point was sound perceptible, confirming the effectiveness of their approach.
Currently, the system can project sound about a meter away from the target listener at a volume of 60 decibels, roughly as loud as normal conversation. The researchers believe that increasing the ultrasound intensity could extend both the range and the volume of the audible enclaves.
The future sounds good
Beyond listening to music or podcasts in public, this innovation could find uses in virtual reality, private communication, and advanced noise cancellation. The ability to create isolated sound zones could allow for more focused learning environments, confidential business discussions, and even more immersive gaming experiences.
By overcoming the way sound normally spreads to everyone within earshot, this research takes a meaningful step forward in audio engineering. Sound delivered without headphones, heard only by those meant to hear it, is no longer just science fiction; it is becoming reality.