Alexa is just a screen, so we want to give her a physical shape whose mouth can physically move when she talks. We can achieve this using LEGO MINDSTORMS EV3; specifically, we need the LEGO MINDSTORMS EV3 Education Core Set 45544 and Expansion Set 45560 to get this done.
Step 1: Build out the Jaw
In this project we build on the LEGO MINDSTORMS Jaw model, whose building instructions can be found at https://le-www-live-s.legocdn.com/sc/media/files/support/mindstorms-ev3/building-instructions/design%20engineering%20projects/jaw-ee93e8f3243e4d30cd34b0c337c33653.pdf
This is a fairly simple build; when it's all done you should have something like the model below, where you can move the button to make the jaw snap.
Step 2: Build the Face
We need eyes and ears for the bot, so we build upward by adding a branch like below.
After that we have to build the eyes; we can follow the Znapper's eye design.
After the eyes are built we should have the following.
And when it's connected it looks very cute, like below.
We can make the nose from 3 red L-shaped beams and hold up the LEGO face with another 2 Double Angular Beams 3x7 45°.
On the back side we can lock this in place using a long beam.
If you want to create a more human-looking face, you can also add a frame around the face.
But it doesn't look as cute anymore, so our project decided to remove it.
Step 3: Go Through the Alexa Gadgets Missions
The Alexa Gadgets team has published a setup guide and 4 missions; follow the Setup, Mission 1, and Mission 2 procedures before moving on to the next step.
https://www.hackster.io/alexagadgets
We need the development environment setup from Setup,
the Alexa Gadget setup from Mission-01,
and MusicTempo from Mission-02. We are not using music in this guide, but the SpeechData handling follows MusicTempo very closely.
Step 4: Write the ev3dev Python Code
We will go through the code in this step. Our code base closely follows Mission-01, which you can copy and paste as a starting point; we will add to it from there. First we need to change talkbot.ini to add Alexa.Gadget.SpeechData = 1.0 - viseme, which activates SpeechData (viseme) events.
[GadgetCapabilities]
Alexa.Gadget.StateListener = 1.0 - wakeword
Alexa.Gadget.SpeechData = 1.0 - viseme
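For reference, assuming the Mission-01 layout, the full talkbot.ini ends up looking roughly like this (amazonId and alexaGadgetSecret are placeholders for the credentials you receive when registering the gadget in the Alexa developer console):
[GadgetSettings]
amazonId = YOUR_GADGET_AMAZON_ID
alexaGadgetSecret = YOUR_GADGET_SECRET
[GadgetCapabilities]
Alexa.Gadget.StateListener = 1.0 - wakeword
Alexa.Gadget.SpeechData = 1.0 - viseme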
In the import section of talkbot.py we add OUTPUT_A, MediumMotor, and SpeedPercent, since we use a MediumMotor to control the jaw of the TalkBot. Additionally we need datetime to calculate the time offset.
import os
import sys
import time
import datetime
import logging
from ev3dev2.sound import Sound
from ev3dev2.led import Leds
from agt import AlexaGadget
from ev3dev2.motor import OUTPUT_A, MediumMotor, SpeedPercent
During __init__ we add the mouth motor and a startTime for our project:
def __init__(self):
    """
    Performs Alexa Gadget initialization routines and ev3dev resource allocation.
    """
    super().__init__()
    self.leds = Leds()
    self.sound = Sound()
    self.mouth = MediumMotor(OUTPUT_A)
    self.startTime = datetime.datetime.now()
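Before wiring the motor into the gadget code, it's worth checking it on its own. Here is a minimal standalone sketch (not part of talkbot.py; it assumes the jaw motor is plugged into port A and runs directly on the EV3 under ev3dev) that just snaps the jaw once:
#!/usr/bin/env python3
# jaw_test.py - standalone motor check, separate from talkbot.py
from ev3dev2.motor import OUTPUT_A, MediumMotor, SpeedPercent

mouth = MediumMotor(OUTPUT_A)
# snap once: open the jaw a tenth of a rotation, then close it again
mouth.on_for_rotations(SpeedPercent(30), .1)
mouth.on_for_rotations(SpeedPercent(-30), .1)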
There is a bug in the Speechmarks directive (see https://developer.amazon.com/en-US/docs/alexa/alexa-gadgets-toolkit/alexa-gadget-speechdata-interface.html#Speechmarks-directive): no playerOffsetInMilliseconds is delivered, so we have to create our own. During init we record self.startTime; then in on_alexa_gadget_statelistener_stateupdate, when state.value == 'cleared' (Alexa has heard the request and starts responding), we record the start time of the speech:
def on_alexa_gadget_statelistener_stateupdate(self, directive):
    """
    Listens for the wakeword state change and reacts by turning on the LED.
    :param directive: contains a payload with the updated state information from Alexa
    """
    color_list = ['BLACK', 'AMBER', 'YELLOW', 'GREEN']
    for state in directive.payload.states:
        if state.name == 'wakeword':
            if state.value == 'active':
                print("Wake word active", file=sys.stderr)
                self.sound.play_song((('A3', 'e'), ('C5', 'e')))
                for i in range(0, 4, 1):
                    self.leds.set_color("LEFT", color_list[i], (i * 0.25))
                    self.leds.set_color("RIGHT", color_list[i], (i * 0.25))
                    time.sleep(0.25)
            elif state.value == 'cleared':
                print("Wake word cleared", file=sys.stderr)
                print(directive)
                self.sound.play_song((('C5', 'e'), ('A3', 'e')))
                for i in range(3, -1, -1):
                    self.leds.set_color("LEFT", color_list[i], (i * 0.25))
                    self.leds.set_color("RIGHT", color_list[i], (i * 0.25))
                    time.sleep(0.25)
                # record when Alexa starts speaking, so speechmark offsets
                # can be compared against elapsed time later
                self.startTime = datetime.datetime.now()
                self.mouth.on_for_rotations(SpeedPercent(30), .1)
                self.mouth.on_for_rotations(SpeedPercent(-30), .1)
In on_alexa_gadget_speechdata_speechmarks we read the speech offset from startOffsetInMilliSeconds, then compute our own player offset as the delta between the current time and the start time recorded at the end of the wake word. If the speech offset is still ahead of the player offset, we move the mouth up and down:
def on_alexa_gadget_speechdata_speechmarks(self, directive):
    """
    Moves the jaw for each viseme speechmark Alexa sends while speaking.
    :param directive: contains the speechmarksData payload from Alexa
    """
    speechtime = directive.payload.speechmarksData[0].startOffsetInMilliSeconds
    # our own playerOffsetInMilliseconds: time elapsed since Alexa started talking
    delta = datetime.datetime.now() - self.startTime
    offset = int(delta.total_seconds() * 1000)
    if speechtime > offset:
        print(str(speechtime) + "," + str(offset))
        self.mouth.on_for_rotations(SpeedPercent(30), .1)
        self.mouth.on_for_rotations(SpeedPercent(-30), .1)
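One caveat: Speechmarks arrive for every viseme, and each on_for_rotations call blocks while the motor runs, so on fast speech the jaw can fall behind as events pile up. A possible refinement, not part of the original code, is to debounce the jaw; here self.lastMove is a hypothetical attribute you would also initialize in __init__:
def on_alexa_gadget_speechdata_speechmarks(self, directive):
    speechtime = directive.payload.speechmarksData[0].startOffsetInMilliSeconds
    delta = datetime.datetime.now() - self.startTime
    offset = int(delta.total_seconds() * 1000)
    # hypothetical debounce: skip visemes arriving within 200 ms of the
    # previous snap (set self.lastMove = datetime.datetime.now() in __init__)
    since_last = (datetime.datetime.now() - self.lastMove).total_seconds() * 1000
    if speechtime > offset and since_last > 200:
        self.lastMove = datetime.datetime.now()
        self.mouth.on_for_rotations(SpeedPercent(30), .1)
        self.mouth.on_for_rotations(SpeedPercent(-30), .1)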
When Alexa talks, we should see the jaw snapping along with her speech, like below.
Now that everything is done, a demo can be seen below.