Human-robot social interaction has become one of the most active areas of research in robotics. Roboquin is designed and developed as a platform for research in speech synthesis and recognition as well as in several related areas: 1) Internet of Things (IoT) mannequins, 2) Software control of the speed, pitch, gender, and native or foreign accent of the mannequin's speech (sketched below), 3) An interactive robotic mannequin that engages with people in different ways, 4) Social robotics research projects, and 5) Kinematics-based movement control synchronized with speech to create gestures.
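The speech parameters can be varied entirely in software. As a minimal sketch (not the project's actual code), the eSpeak command-line flags -s (speed in words per minute), -p (pitch, 0-99), and -v (voice name, including gendered variants and language accents) can be driven from Python; the specific values below are illustrative assumptions:

```python
# Sketch: varying speed, pitch, gender variant, and accent via eSpeak.
# Flag values are illustrative assumptions, not the project's settings.
import subprocess

def say(text, voice="en+f3", speed=140, pitch=60):
    """Speak `text` through the eSpeak CLI.
    voice : eSpeak voice name; a +f3/+m3 suffix selects a female/male
            variant, and names such as "en-us" or "fr" change the accent.
    speed : words per minute (eSpeak's default is 175).
    pitch : 0-99 pitch adjustment (eSpeak's default is 50).
    """
    subprocess.run(
        ["espeak", "-v", voice, "-s", str(speed), "-p", str(pitch), text],
        check=True,
    )

if __name__ == "__main__":
    say("Hello, welcome to the store.")                  # female English voice
    say("Bonjour.", voice="fr+m3", speed=160, pitch=45)  # male voice, French accent
```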
Electrical Components and Materials: • Raspberry Pi 3 • Arduino UNO • RPi Camera • Ultrasonic Sensors • Speakers and LEDs • Servo Motors • Tetrix Robotics Kit • Cardboard
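One plausible role for the ultrasonic sensors is detecting when a visitor approaches the mannequin. The sketch below assumes an HC-SR04-style sensor wired to the Raspberry Pi's GPIO; the pin numbers and the 1 m threshold are assumptions, not the project's documented wiring:

```python
# Sketch: distance measurement with an HC-SR04-style ultrasonic sensor
# on Raspberry Pi GPIO. TRIG/ECHO pin numbers are assumed wiring.
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24  # BCM pin numbers (assumptions)

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm():
    # A 10-microsecond trigger pulse starts a measurement cycle.
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    # The echo pin stays high for the sound's round-trip time.
    pulse_start = pulse_end = time.time()
    while GPIO.input(ECHO) == 0:
        pulse_start = time.time()
    while GPIO.input(ECHO) == 1:
        pulse_end = time.time()
    # Speed of sound ~34300 cm/s; halve for the one-way distance.
    return (pulse_end - pulse_start) * 34300 / 2

try:
    while True:
        if distance_cm() < 100:  # someone within roughly 1 m
            print("Visitor detected")
        time.sleep(0.5)
finally:
    GPIO.cleanup()
```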
Features: 1) The most exciting feature of Roboquin is that its RGB LED eyes change color and the bar-graph LEDs used for its lips move in sync with its speech; the speech function is implemented with the eSpeak and Festival packages (see the sketch after this list), 2) Improvements to Roboquin's current speech synthesis are planned by adding voice recognition and voice commands using Amazon Alexa and Google Voice, and 3) Plans are also underway to use the mannequin in an interdisciplinary course that combines computer technology, the social sciences, and the humanities.
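The exact mechanism that synchronizes the lip LEDs with speech is not described here. One simple approach, sketched below under the assumption that the lip LEDs are driven from a single Raspberry Pi GPIO pin, is to toggle the LEDs for as long as the eSpeak process is still speaking; the pin number and blink rate are assumptions:

```python
# Sketch: blink the "lip" LEDs while an eSpeak utterance plays.
# Pin number and blink rate are assumptions; the project's actual
# sync mechanism (e.g., amplitude-driven) may differ.
import subprocess
import time
import RPi.GPIO as GPIO

LIP_PIN = 18  # assumed BCM pin driving the bar-graph LEDs

GPIO.setmode(GPIO.BCM)
GPIO.setup(LIP_PIN, GPIO.OUT)

def speak_with_lips(text):
    proc = subprocess.Popen(["espeak", text])
    state = False
    while proc.poll() is None:      # eSpeak is still speaking
        state = not state
        GPIO.output(LIP_PIN, state)
        time.sleep(0.1)             # ~5 Hz "mouth" movement
    GPIO.output(LIP_PIN, False)     # lips off when speech ends

try:
    speak_with_lips("Hello, I am Roboquin.")
finally:
    GPIO.cleanup()
```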
Areas of further research and teaching where the interactive mannequin may be used: ● Social robotics research projects ● Display model for wearable and fashion technology ● Mobile-device-based remote control ● 3D physical modeling and design ● Computer-controlled system design ● Embedded systems programming
Raspberry Pi Alexa setup instructions:
https://www.raspberrypi.org/blog/amazon-echo-homebrew-version/
https://github.com/alexa/alexa-avs-sample-app/wiki/Raspberry-Pi