
Take a Swipe at It

SAWSense makes it possible to turn almost any surface into an input device by leveraging surface acoustic waves and machine learning.

Nick Bild
2 years ago • Machine Learning & AI
Controlling a laptop with surface acoustic waves and ML (📷: Y. Iravantchi et al.)

Human-computer interfaces have come a long way over the years. Although traditional interfaces such as the keyboard and touchscreen have gained immense popularity, they may not always be the best choice for every situation. Consequently, researchers are now exploring new ways of creating interfaces that are embedded in everyday objects to enhance user experiences.

Many of these interfaces make use of cameras or microphones to control devices through methods like gesture or speech recognition. While there are many useful applications for such systems, they are not always practical. The target of a camera-based system can be obscured by any number of objects that get between it and the camera. And in noisy environments, or where sensitive information is being discussed, microphone-based speech recognition systems are not a good choice.

For cases where an alternative is needed, researchers have been experimenting with ways to transparently instrument arbitrary surfaces, like a TV remote embedded in the arm of a couch, or an interactive wall that controls a smart home. Among the many sensors tried for this purpose, accelerometers have stood out as one of the most promising, because they can sense touch-based gestures on a variety of surfaces without any other modifications.

The problem with this approach is that accelerometers lack the sampling bandwidth to capture much more than a few relatively coarse gestures when incorporated into a surface touch-sensing device. A collaboration between researchers at the University of Michigan and Meta Reality Labs has demonstrated another path forward, one that provides the bandwidth needed to create more advanced user interfaces.

Instead of sensing mechanical vibrations, the team's device relies on surface acoustic waves (SAWs), which it detects with a Voice Pick Up Unit (VPU) that can register even subtle touch gestures. VPUs are specially designed to conduct only surface waves into the hermetically sealed chamber containing the sensor, so interference from irrelevant background noise is not an issue. And because they are fabricated using a MEMS process, VPUs offer the high bandwidth expected of a typical MEMS microphone.

A high-performance sensor is an important piece of the puzzle, but a method to convert those surface acoustic waves into taps, swipes, and other gestures was still needed. That would be very difficult to accomplish with hard-coded logic, so the team instead built a machine learning model that learns the mapping from signals to gestures directly from data.

VPUs collect a lot of data, which could make processing it in real time on an edge computing device challenging. To deal with this problem, each input was summarized with Mel-Frequency Cepstral Coefficients (MFCCs), which capture the most informative spectral characteristics of the signal while reducing the number of features per input from 24,001 to just 128. These features were then fed into a Random Forest classifier to determine exactly which gesture the surface waves represent.
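The article does not include the team's code, but the described pipeline maps naturally onto common Python tooling. The following is a minimal sketch, assuming librosa for MFCC extraction and scikit-learn's RandomForestClassifier; the sample rate, MFCC settings, and the synthetic recordings and labels are illustrative assumptions, not details from the paper.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

SAMPLE_RATE = 48_000  # assumed capture rate; the VPU's actual rate may differ
N_MFCC = 64           # 64 means + 64 standard deviations = 128 features,
                      # matching the feature count quoted in the article

def extract_features(signal, sr=SAMPLE_RATE):
    """Summarize one raw SAW recording as a fixed-length MFCC feature vector."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=N_MFCC)
    # Collapse the time axis so every recording yields the same 128 numbers.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholder data: real recordings and gesture labels would come from the VPU
# capture pipeline. Random noise is used here only to keep the sketch runnable.
rng = np.random.default_rng(0)
recordings = [rng.standard_normal(SAMPLE_RATE // 2).astype(np.float32)
              for _ in range(200)]
labels = rng.choice(["tap", "swipe", "knock", "scratch"], size=200)

# Featurize, train, and evaluate on a held-out split.
X = np.stack([extract_features(r) for r in recordings])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

On real gesture recordings, the appeal of this kind of pipeline is that both MFCC extraction and Random Forest inference are cheap enough to run on edge hardware; on the random noise above, accuracy will of course hover at chance.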

Several demonstrations were conducted to assess the performance of the system. Trackpad-style gestures made on a desk, as well as sixteen different cooking-related activities performed on a kitchen counter, were recognized with better than 97% accuracy on average.

The system cannot, at present, decipher what is happening when multiple sounds occur at the same time. With traditional microphone-based systems, this kind of sound separation is achieved by using multiple microphones, and the team expects the same can be done with multiple VPUs. They plan to explore this, along with adding directional information so that it becomes possible to tell, for example, a swipe up from a swipe down. With a bit of refinement, an interface built on these methods may prove to be the low-cost, effective solution we have been searching for.

Nick Bild
R&D, creativity, and building the next big thing you never knew you wanted are my specialties.