Use the Force Myography

By capturing multiple signals from the muscles, UC Davis researchers built a prosthetic arm that reliably carries out its user's intent.

Nick Bild
This robotic arm copies the movements of its user (📷: Greg Urquiaga, UC Davis)

Until scientists uncover the secrets that allow animals like starfish, axolotls, and salamanders to regrow lost limbs, people with limb amputations will have to settle for artificial replacements. Unfortunately, these prosthetic devices are poor substitutes at best. Even today's cutting-edge systems lack most of the function and sensation of a natural arm or leg.

The ideal prosthetic limb would capture physiological signals from its wearer and respond the way a natural limb would. The most common way this is accomplished at present is by measuring and interpreting electromyography (EMG) data, the electrical activity associated with muscle contraction, to infer the user's intent. That intent is then translated into physical movements of the prosthetic device.

A prosthetic arm, for instance, may have an array of sensors that attaches to the forearm. When the user attempts to move their missing limb, the forearm muscles activate just as they would if the arm were intact. This sounds like a great solution; however, the signals are extremely noisy. When the user shifts their arm to a different position, the signals change completely. If they lift a heavy object, the signals change again. Any time the data is captured outside of a carefully controlled lab environment, it becomes exceedingly difficult to interpret reliably.

Researchers at the University of California, Davis have been working on this problem, and they believe they may have found a solution. They found that by blending two sources of data on muscle activity, the user's intent can be determined more accurately, and in a way that is not so easily perturbed by irrelevant factors. Specifically, they combined EMG with force myography (FMG), which measures the pressure that contracting muscles exert against the skin, into a unified gesture recognition system.

The team integrated sensors for both types of data into a cuff that is worn on the forearm. They then had a group of volunteers wear the device while making a set of different arm and hand gestures. The data from these experiments was collected and used to train a machine learning classifier.
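To make that pipeline concrete, here is a minimal sketch of how windowed recordings from the two sensor types might be fused and used to train a classifier. Everything in it is an assumption for illustration: the channel counts, window length, RMS features, random placeholder arrays, and random-forest model stand in for the team's actual sensor layout and training procedure, which are not detailed here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder data: 500 windows of 8-channel EMG and 8-channel FMG,
# 200 samples per window, each labeled with one of 5 gestures. Real
# recordings from the cuff would replace these random arrays.
emg = rng.normal(size=(500, 8, 200))
fmg = rng.normal(size=(500, 8, 200))
labels = rng.integers(0, 5, size=500)

def rms(windows):
    # Root-mean-square per channel, a common summary feature for
    # myographic signals (assumed here; not necessarily the study's choice).
    return np.sqrt((windows ** 2).mean(axis=2))

# Fuse the two modalities by concatenating their per-channel features.
X = np.hstack([rms(emg), rms(fmg)])  # shape: (500, 16)

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"Fused-feature accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

Concatenating per-channel features like this is the simplest form of sensor fusion: a single model gets to weigh both modalities at once, rather than having to reconcile two separate predictions.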

Once trained, the model was tested on a separate set of data. To determine the influence of each data source, trials were conducted in which each was used alone as well as in combination. The combination clearly won out: 97% of gestures were correctly identified during testing when using both data sources. EMG alone classified 83% of the examples correctly, while FMG alone was correct in 92% of cases.
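A sketch of that kind of comparison, again using placeholder data and an assumed model, might look like the following. With real cuff recordings in place of the random arrays, this is where per-modality accuracy gaps like the ones above would show up.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
emg = rng.normal(size=(500, 8, 200))   # placeholder EMG windows
fmg = rng.normal(size=(500, 8, 200))   # placeholder FMG windows
labels = rng.integers(0, 5, size=500)  # placeholder gesture labels

def rms(windows):
    return np.sqrt((windows ** 2).mean(axis=2))  # per-channel RMS

# Score each modality alone, then the two fused, with 5-fold cross-validation.
for name, X in [("EMG only", rms(emg)),
                ("FMG only", rms(fmg)),
                ("EMG + FMG", np.hstack([rms(emg), rms(fmg)]))]:
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print(f"{name}: mean accuracy {cross_val_score(clf, X, labels, cv=5).mean():.2f}")
```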

As the work continues and accuracy climbs closer to 100%, this system will become more practical for real-world use. And it is not only prosthetics that could benefit. Numerous applications in virtual reality and robotics also need reliable gesture recognition of this sort to move to the next level.
