Dress Code for Robots
Taking a reinforcement learning-based approach, researchers have developed a robotic system that can help people to dress themselves.
Caring for the elderly and disabled, particularly when it involves assisting with basic tasks like dressing, places a significant and often underestimated burden on caregivers. The role of a caregiver can be both emotionally and physically demanding, as it requires a high level of attention, patience, and adaptability. The responsibility of tending to every need, including getting others dressed, can be time-consuming and exhausting, leading caregivers to experience heightened stress, fatigue, and a diminished sense of personal freedom. The constant vigilance required to ensure the safety, comfort, and dignity of their loved ones can take a toll on caregivers' mental and physical well-being.
For elderly and disabled individuals, struggling to dress themselves can have profound psychological and emotional implications. The inability to dress oneself can lead to feelings of helplessness, loss of independence, and a diminished sense of self-worth. For many, the act of dressing goes beyond mere functionality; it is a form of self-expression and a means of maintaining a sense of identity. When this autonomy is compromised, individuals may experience shame or frustration, eroding their self-esteem.
Because dressing assistance is a need shared by so many people, researchers have been working to develop robotic assistants that help individuals dress themselves and lift that burden from both them and their caregivers. But these systems have had limited success, because they tend to focus on highly constrained problems in which the type of clothing or the pose of the individual is fixed. Of course the real world does not look like that, so the systems are of very limited utility.
But with 92% of nursing facility residents and at-home care patients needing assistance to get dressed, according to the National Center for Health Statistics, better solutions are urgently needed. Recent research conducted by a team at Carnegie Mellon University holds the promise of becoming that solution one day. They have developed a generalized robotic control system that uses machine learning to adapt to different types of clothing, poses, and body shapes. At present, it has only learned how to pull a sleeve over a person’s arm, but using similar techniques, it could be extended to do much more in the years to come.
Clothing is notoriously difficult to work with because of its high degree of deformability. Combine that with the unpredictable movements of people and a slew of other unknowns, and the problem becomes very hard to solve. The researchers leveraged a reinforcement learning-based approach to tackle this complexity. By doing so, the robot could experiment with different types of clothing, body positions, and other factors, and teach itself the best course of action in each case.
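To make the idea concrete, here is a minimal, self-contained sketch of a policy-gradient (REINFORCE) training loop in the spirit of this kind of approach. The ToyDressingEnv class, the linear Gaussian policy, and every number in it are stand-ins invented for illustration; the actual system trains a far richer policy inside a full cloth simulator.

```python
import numpy as np

# Toy stand-in for a cloth simulator: the "state" is just the gap between the
# sleeve opening and the person's hand, so the example stays runnable while
# still illustrating the structure of a reinforcement learning training loop.
class ToyDressingEnv:
    def __init__(self, rng):
        self.rng = rng

    def reset(self):
        # Randomize the starting offset between garment and arm each episode.
        self.gap = self.rng.uniform(0.5, 1.5, size=2)
        return self.gap.copy()

    def step(self, action):
        # The action nudges the garment; reward is progress toward the arm.
        action = np.clip(action, -0.1, 0.1)
        prev = np.linalg.norm(self.gap)
        self.gap += action
        dist = np.linalg.norm(self.gap)
        reward = prev - dist
        done = dist < 0.05
        return self.gap.copy(), reward, done


def run_episode(env, W, sigma, rng, max_steps=50):
    obs = env.reset()
    traj = []
    for _ in range(max_steps):
        mean = W @ obs                                   # linear Gaussian policy
        action = mean + sigma * rng.standard_normal(mean.shape)
        next_obs, reward, done = env.step(action)
        traj.append((obs, action, reward))
        obs = next_obs
        if done:
            break
    return traj


def reinforce_update(W, traj, sigma, lr=0.05):
    # REINFORCE: push the policy toward actions that led to high returns-to-go.
    returns = np.cumsum([r for _, _, r in traj][::-1])[::-1]
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    grad = np.zeros_like(W)
    for (obs, action, _), g in zip(traj, returns):
        grad += g * np.outer((action - W @ obs) / sigma**2, obs)
    return W + lr * grad / len(traj)


rng = np.random.default_rng(0)
env = ToyDressingEnv(rng)
W = np.zeros((2, 2))   # policy parameters: action ~ N(W @ obs, sigma^2 * I)
sigma = 0.1
for episode in range(200):
    traj = run_episode(env, W, sigma, rng)
    W = reinforce_update(W, traj, sigma)

eval_lengths = [len(run_episode(env, W, sigma, rng)) for _ in range(20)]
print("mean evaluation episode length:", np.mean(eval_lengths))
```

The core idea the sketch tries to capture is that nothing about the garment or the person needs to be modeled explicitly in advance: the policy is shaped entirely by trial, error, and reward.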
In order to learn, the algorithm needs data from many attempts, more than even the most patient study participants could provide, so the initial data was collected in a simulated environment to speed up training. A wide variety of clothing types and scenarios was set up in this environment, and the strategy learned there was then carefully transferred to a controller for robots in the real world.
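The sketch below illustrates the general idea of randomizing simulation parameters between training episodes, a common ingredient of this kind of sim-to-real transfer. The parameter names, garment list, and ranges are assumptions made purely for illustration and are not taken from the paper.

```python
import random
from dataclasses import dataclass

# Hypothetical per-episode configuration for a cloth-dressing simulator.
@dataclass
class EpisodeConfig:
    garment: str              # which garment mesh to load
    cloth_stiffness: float    # deformability of the fabric
    friction: float           # fabric-skin friction coefficient
    shoulder_angle_deg: float
    elbow_angle_deg: float
    arm_length_m: float

GARMENTS = ["hospital_gown", "cardigan", "jacket", "t_shirt", "vest"]

def sample_episode_config(rng: random.Random) -> EpisodeConfig:
    # Randomizing garment properties and body pose every episode forces the
    # policy to learn behavior that does not depend on any single setup,
    # which is what lets it generalize beyond the simulator.
    return EpisodeConfig(
        garment=rng.choice(GARMENTS),
        cloth_stiffness=rng.uniform(0.1, 1.0),
        friction=rng.uniform(0.2, 0.8),
        shoulder_angle_deg=rng.uniform(-30, 90),
        elbow_angle_deg=rng.uniform(0, 120),
        arm_length_m=rng.uniform(0.55, 0.75),
    )

if __name__ == "__main__":
    rng = random.Random(0)
    for episode in range(3):
        cfg = sample_episode_config(rng)
        print(f"episode {episode}: {cfg}")
        # A real pipeline would build the cloth simulation from cfg, run the
        # dressing policy, and feed the resulting trajectories into training.
```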
The control system was tested in a trial involving seventeen participants, five types of clothing, and a wide range of arm poses and body shapes. In most cases, the robot proved quite capable of dressing the individual, at least as far as pulling a sleeve onto one arm is concerned. Across all test cases, the system covered 86% of the length of the arm on average.
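That figure can be read as a simple coverage ratio averaged over trials, along the lines of the hypothetical sketch below; the function name and the numbers are illustrative only.

```python
def arm_coverage(sleeve_reach_m: float, arm_length_m: float) -> float:
    """Fraction of the arm covered, where sleeve_reach_m is how far up the
    arm the sleeve opening reached, measured from the fingertips."""
    return min(sleeve_reach_m / arm_length_m, 1.0)

# (sleeve reach in meters, arm length in meters) for a few made-up trials
trials = [(0.58, 0.65), (0.50, 0.62), (0.66, 0.70)]
coverages = [arm_coverage(reach, length) for reach, length in trials]
print(f"mean coverage: {sum(coverages) / len(coverages):.0%}")
```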
Moving forward, the team is working to add more advanced capabilities to the system, like pulling both sleeves of a jacket onto a person's arms or pulling a t-shirt over their head. They also intend to explore making the process more dynamic, so that it can handle situations in which the individual moves while being dressed. There is much work to be done, but if these more advanced capabilities can be developed, the system could restore a measure of independence to those who have lost the ability to dress themselves.