Expressive Eyeballs Could Keep Pedestrians From Making Risky Decisions Around Autonomous Vehicles
By glancing towards or away from pedestrians, these googly eyes can help reduce risk-taking behavior.
A team of researchers at the University of Tokyo has come up with a novel way to make autonomous vehicles safer for pedestrians: fitting them with expressive motorized eyeballs capable of "looking" around as they drive.
"There is not enough investigation into the interaction between self-driving cars and the people around them, such as pedestrians," Takeo Igarashi, professor at the Graduate School of Information Science and Technology, explains of the impetus behind the project. "So, we need more investigation and effort into such interaction to bring safety and assurance to society regarding self-driving cars."
The big problem with pedestrians and vehicles is, of course, when the two collide, something that never ends well for the pedestrian. Autonomous vehicles can find it difficult to tell when a pedestrian is about to cross the road, while pedestrians, unable to see where a driver is looking, can't readily judge an autonomous vehicle's intentions or attention.
Which is where the robotic eyeballs come in. Fitted to the front of the vehicle, the eyes stare straight ahead until a pedestrian is spotted, at which point they either look towards the pedestrian or away from them. Either way, the human-like gesture provides important feedback to the pedestrian: that they've been spotted and the "driver" is aware of them, or that the "driver's" attention is elsewhere and the vehicle is unlikely to stop should they step out in front of it.
In testing, for which a physical cart was fitted with the eyeball system and then filmed in 360 degrees for participants to experience in virtual reality, the results spoke volumes, but they also highlighted an interesting split between male and female participants.
"The results suggested a clear difference between genders, which was very surprising and unexpected," explains Chia-Ming Chang of the testing. "While other factors like age and background might have also influenced the participants’ reactions, we believe this is an important point, as it shows that different road users may have different behaviors and needs, that require different communication ways in our future self-driving world.
"In this study, the male participants made many dangerous road-crossing decisions (i.e., choosing to cross when the car was not stopping), but these errors were reduced by the cart's eye gaze. However, there was not much difference in safe situations for them (i.e., choosing to cross when the car was going to stop). On the other hand, the female participants made more inefficient decisions (i.e., choosing not to cross when the car was intending to stop), and these errors were reduced by the cart's eye gaze. However, there was not much difference in unsafe situations for them."
"Moving from manual driving to auto driving is a huge change. If eyes can actually contribute to safety and reduce traffic accidents, we should seriously consider adding them," Igarashi concludes. "In the future, we would like to develop automatic control of the robotic eyes connected to the self-driving AI (instead of being manually controlled), which could accommodate different situations. I hope this research encourages other groups to try similar ideas, anything that facilitates better interaction between self-driving cars and pedestrians, which ultimately saves people's lives."
The team's work has been published in the Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '22), with a copy available under open-access terms in the ACM Digital Library.