The ROKOKO project is an open-source motion-capture and face-tracking streaming suit built around inertial sensors. It was originally developed to realize a de-localized theater experience through 3D avatars.
The project is based on Arduino and a chip from InvenSense. Two knee-mounted webcams and the ArUco library (http://www.uco.es/investiga/grupos/ava/node/26) are used to determine the actor's position on the stage, while a third USB camera mounted on a helmet reads their facial expressions. All the code is available on GitHub.
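As a rough illustration of how a detected marker can yield position information, the pinhole-camera relation lets you estimate the camera-to-marker distance from the marker's apparent size in pixels. This is a minimal sketch with made-up numbers, not the project's actual ArUco pipeline (which also recovers full pose from the marker corners):

```python
def distance_to_marker(focal_length_px, marker_size_m, marker_size_px):
    """Estimate camera-to-marker distance (meters) from the marker's
    apparent width in pixels: d = f * S / s (pinhole-camera model)."""
    return focal_length_px * marker_size_m / marker_size_px

# Illustrative values: a 10 cm marker appearing 50 px wide to a camera
# with an 800 px focal length is about 1.6 m away.
d = distance_to_marker(800.0, 0.10, 50.0)
print(round(d, 2))
```

Combining distances (or full poses) from several markers at known stage positions is what lets the knee cameras localize the actor.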
To realize the project we are using a UDOO Quad running Arch Linux ARM: it takes the input from the three webcams, does the computer-vision number crunching, and wirelessly relays the rotations, coordinates, and facial data to a Unity3D app for live feedback. In the future we aim to use the UDOO to read the sensor data as well.
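The relay step could be as simple as packing each frame into a fixed-layout binary datagram that the Unity3D side unpacks with the same layout. The packet format below (a quaternion, a 2D stage position, and a few facial blend-shape weights over UDP) is our assumption for illustration, not the project's actual wire protocol:

```python
import socket
import struct

# Little-endian: 4 floats quaternion, 2 floats stage position,
# 3 floats facial blend-shape weights -> 36 bytes per frame.
FRAME_FORMAT = "<4f2f3f"

def pack_frame(quat, position, face):
    """Serialize one motion frame into a binary payload."""
    return struct.pack(FRAME_FORMAT, *quat, *position, *face)

def send_frame(sock, addr, quat, position, face):
    """Fire one frame at the Unity3D listener over UDP."""
    sock.sendto(pack_frame(quat, position, face), addr)

# Example: identity rotation, position (1.2, 3.4), three face weights.
payload = pack_frame((0.0, 0.0, 0.0, 1.0), (1.2, 3.4), (0.1, 0.5, 0.9))
print(len(payload))  # 36
```

UDP fits live feedback well here: a lost frame is simply superseded by the next one a few milliseconds later, so there is no point in retransmitting it.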
We call this Motion Streaming, and we’re going to use it for live theatre using 3D figures.
The attached video shows Maja wearing the suit in an early test. At the moment the sensors are connected to Raspberry Pis over USB, but 17 parallel USB connections were never how we wanted to interface with that many sensors. Instead, we have been redesigning the sensor boards so that they all sit on a single I2C bus, which the UDOO speaks natively. This also greatly reduces the component count the actor has to carry.
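One wrinkle with putting that many IMUs on one bus: InvenSense MPU-series chips expose only two I2C addresses (0x68/0x69), so 17 sensors cannot coexist directly and need something like an 8-channel multiplexer (TCA9548A-style) in between. The addressing scheme below is purely our illustrative assumption about how such a layout could be enumerated, not the project's actual board design:

```python
def bus_slot(sensor_index, channels_per_mux=8, sensors_per_channel=2):
    """Map a sensor index (0-based) to (mux, channel, device address),
    assuming each mux channel carries two IMUs at 0x68 and 0x69."""
    per_mux = channels_per_mux * sensors_per_channel  # 16 sensors per mux
    mux = sensor_index // per_mux
    rem = sensor_index % per_mux
    channel = rem // sensors_per_channel
    address = 0x68 + (rem % sensors_per_channel)
    return mux, channel, address

# 17 sensors spill onto a second multiplexer:
for i in (0, 1, 15, 16):
    print(i, bus_slot(i))
```

With 16 sensors per multiplexer, 17 sensors need two muxes; the host selects a channel on the mux, then talks to the IMU at its usual address.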
Jens Christian Hillerup