A Taste of Things to Come
Cornell's robotic feeding system aids people with severe mobility limitations by using machine learning to adapt to changing conditions.
When asked about robots, most people first think of the big industrial machines that line manufacturing plants, or of humanoid robots that are often more of a marketing gimmick than a practical tool. But there are many other important applications of robotics that are less visible. As an assistive technology, for example, robots can lend caregivers a helping hand. In many cases there are not enough caregivers to go around, so developing these technologies to the point that they can assist with a wide range of tasks is crucial to our well-being as a society.
Considering all of the things a person might need help with, eating stands out as particularly important. When someone is unable to feed themselves, they require hands-on assistance for extended periods several times a day. That added workload can stretch individual caregivers and workers at assisted living centers past the breaking point.
Robotic systems have been introduced to automate feeding for people who need this kind of help. Existing systems generally use a computer vision-based approach to locate the individual’s mouth, then calculate a motion plan and guide a utensil to the target. Unfortunately, this approach does not always work. Some people experience involuntary muscle spasms or have other conditions that cause them to move after the robot has decided where to deliver the food, so the utensil misses the mark.
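To see why that open-loop strategy is fragile, consider a minimal sketch of the sense-plan-act pipeline it implies. Every function and class name below is a hypothetical placeholder rather than the API of any actual feeding robot; the key point is simply that the mouth is located only once, before the motion begins.

```python
from dataclasses import dataclass


@dataclass
class Point3D:
    x: float
    y: float
    z: float


def detect_mouth(camera_frame) -> Point3D:
    # Placeholder: a real system would run a face/landmark detector on the frame.
    return Point3D(0.30, 0.05, 0.45)


def plan_path(start: Point3D, goal: Point3D, steps: int = 50) -> list:
    # Straight-line interpolation stands in for a real motion planner.
    return [
        Point3D(
            start.x + (goal.x - start.x) * t / steps,
            start.y + (goal.y - start.y) * t / steps,
            start.z + (goal.z - start.z) * t / steps,
        )
        for t in range(steps + 1)
    ]


def move_to(waypoint: Point3D) -> None:
    # Placeholder for a command sent to the robot arm.
    pass


def execute_open_loop(camera_frame, utensil_pose: Point3D) -> None:
    goal = detect_mouth(camera_frame)    # the mouth is sensed only once
    for waypoint in plan_path(utensil_pose, goal):
        move_to(waypoint)                # no re-sensing: if the person moves
                                         # mid-execution, the utensil still
                                         # heads toward the stale goal
```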
A more advanced system was recently built by researchers at Cornell University. It uses computer vision, machine learning, and a variety of sensors to safely feed people, including those whose severe mobility limitations prevent them from leaning forward to take a bite. This was achieved through two primary innovations. First, the computer vision system operates in real time, so it can adjust to a person’s movements as they happen. Second, the utensil is instrumented, so the robot can continue to adjust dynamically even after the utensil has entered the mouth and computer vision is no longer informative.
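Roughly speaking, that closed-loop behavior might look something like the sketch below: the mouth target is re-estimated on every control cycle instead of once, and after the utensil enters the mouth the controller switches to force feedback because the cameras can no longer see the target. The object interfaces and force thresholds here are assumptions made for illustration, not the Cornell team’s actual code.

```python
def feed_bite(robot, mouth_tracker, force_sensor,
              contact_threshold_n: float = 2.0,
              max_force_n: float = 5.0) -> None:
    # Phase 1: visual servoing toward a continuously updated mouth estimate.
    while not robot.utensil_in_mouth():
        mouth_pose = mouth_tracker.latest_estimate()   # re-sensed every cycle
        robot.step_toward(mouth_pose)                  # small incremental move

    # Phase 2: inside the mouth, vision is no longer informative, so the
    # controller reacts to measured contact forces instead.
    while not robot.bite_taken():
        force = force_sensor.read_newtons()
        if force > max_force_n:
            robot.back_off()            # too much resistance: retreat slightly
        elif force < contact_threshold_n:
            robot.hold_position()       # no contact yet: wait for the bite
        else:
            robot.comply_with_motion()  # light contact: follow head movement
```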
The robot itself is a multi-jointed arm with cameras positioned both above and below the utensil at its end. By using multiple cameras, the machine learning algorithm, which was trained on thousands of images of faces, can account for occlusions (even those caused by the utensil itself) and guide the arm more accurately. Once the utensil enters the mouth, force sensors provide feedback that lets the robot make any necessary adjustments in real time and feed the user safely.
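As an illustration of how a second camera can cover for occlusions, the sketch below fuses two per-camera mouth estimates with a confidence-weighted average and simply ignores any view whose confidence is too low. The data structures, confidence scores, and weighting scheme are illustrative assumptions, not the researchers’ published method.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class MouthEstimate:
    x: float
    y: float
    z: float
    confidence: float  # 0.0 (fully occluded) to 1.0 (clear view)


def fuse_estimates(above: Optional[MouthEstimate],
                   below: Optional[MouthEstimate],
                   min_confidence: float = 0.2) -> Optional[MouthEstimate]:
    """Confidence-weighted average of the two camera views."""
    usable = [e for e in (above, below)
              if e is not None and e.confidence >= min_confidence]
    if not usable:
        return None  # both views occluded: the controller should pause
    total = sum(e.confidence for e in usable)
    return MouthEstimate(
        x=sum(e.x * e.confidence for e in usable) / total,
        y=sum(e.y * e.confidence for e in usable) / total,
        z=sum(e.z * e.confidence for e in usable) / total,
        confidence=max(e.confidence for e in usable),
    )


# Example: the utensil blocks the lower camera, so the fused estimate
# falls back to the upper view alone.
print(fuse_estimates(MouthEstimate(0.31, 0.04, 0.44, 0.9),
                     MouthEstimate(0.35, 0.10, 0.40, 0.1)))
```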
In a study with thirteen participants, the robot demonstrated that it could feed individuals with a wide range of medical conditions that impaired their ability to feed themselves. After working with the robot, participants reported that they felt safe receiving help from it and that the experience was comfortable. A number of the study participants and their caregivers were visibly emotional about the experience, noting that the technology could help restore independence and quality of life to care recipients and caregivers alike.