Concept
We wanted to create DWR to teach our kids the basic building blocks of programming and multi-step instructions without needing to teach them how to code just yet. Inspired by concepts from FIRST LEGO League, board games such as Robot Turtles, and a love of making mazes and labyrinths, I set out to create a robot that you control with your voice, giving it instructions on how to solve a maze. By playing this game, the kids learn the basics of direction, how to chain multiple instructions together to solve a problem, and how to visualize the robot's position after certain moves.
Robot Build
The robot build was intended to be super simple: just the EV3 brick, two large motors for the wheels, and an idler ball bearing for the back. The main design feature of the robot is that it is compact enough to fit on one 16x16-stud LEGO plate, which is the basic building block of the maze.
To begin the robot build, connect two EV3 Large Servo Motors using Technic beams and lift arms as shown in the image below. Again, exactly how this is done is not as important as keeping the robot small and compact. Using a 5x7 lift arm (part # 64179) above and below the servo motors leaves a one-stud spacing between the widest parts of the motors, and this size seems to work well. A lift arm fits very well between the two connectors on each of the motors and provides a great deal of rigidity and strength to the robot.
Also shown in the images is the optional color sensor, which fits nicely between the narrower parts of the servo motors. I envisioned using this as part of the maze solving (e.g., "go to the blue square"), which is also reflected in the maze design below. This capability is not included in the current code, however; it will likely be added in a future version.
I chose to use small Technic pulleys with rubber tires as the wheels, again in an effort to keep the robot small and compact. The small wheels have the additional benefit of minimizing the lateral error caused by angular error in the servos: larger wheels translate the same angular error into a larger distance travelled.
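To see why, a quick back-of-the-envelope calculation helps. The sketch below shows how far a wheel rolls per degree of servo error as the diameter grows; the two diameters are assumptions for illustration, not measurements from this build.

import math

def lateral_error_mm(wheel_diameter_mm, angle_error_deg):
    # Arc length rolled for a given angular error: circumference * (error / 360)
    return math.pi * wheel_diameter_mm * angle_error_deg / 360.0

# Assumed diameters: a small Technic pulley with tire vs. a standard EV3 wheel
print(lateral_error_mm(24, 1))   # ~0.21 mm travelled per degree of error
print(lateral_error_mm(56, 1))   # ~0.49 mm travelled per degree of error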
The backs of the motors provide a great mounting point for a beam and the LEGO ball bearing and steel ball (part #s 4610380 and 6023956) included in the EV3 education set. This seems a little heavy for a robot of this size; however, it has no backlash, which is an advantage over a swivel-wheel design.
Finally, the EV3 brick can be attached to the frame formed by the top of the servo motors and the beam used to connect the motors together, and the motors are connected to the EV3 brick using the short 10" cables.
Robot OS
I followed the Setup Guide for the installation of the ev3dev software on the EV3 Brick and then followed the Mission 1 tutorial to connect it to the Alexa device. You can use a USB cable to connect to the ev3dev OS, but I used an Edimax USB Wi-Fi dongle. One nuance with the ev3dev software is that if a sensor is connected to the EV3 brick when the OS is booting, Bluetooth may not connect. To remedy this, just leave any sensors for your build unplugged until the OS is fully booted.
Maze Build
The maze was designed so that it can be reconfigured very quickly to introduce obstacles, change the location of the target, and build progressively harder mazes.
The maze consists of 16x16 LEGO plates that are covered in 2x2 flat tiles to make for a smooth surface. We used colored tiles in the interior of each plate, and made a multi-colored checkered plate to designate the goal for the robot. Some of the plates have walls to act as obstacles for the robot. I just used a single-sided wall on four of the plates, but feel free to experiment with additional designs. The plates are placed in a border made up of 2x4 LEGO bricks that is seven layers high. This seems to be a good size for a rigid border, but feel free to experiment with a higher or lower one. The plates fit snugly in the border but can still be reconfigured quickly to make more challenging mazes.
Alexa Skill
We designed the voice interface to be as simple and accessible as possible to keep the kids engaged and avoid frustration. The tutorial I followed to get all of the Alexa Skill nuts and bolts sorted can be found in the Alexa Mindstorms Challenge Mission 3 tutorial.
The first step is to create the custom model.json file to describe the voice interaction model for the Alexa skill. The full code is provided as an attachment; however, example snippets are also shown below.
The main challenge in creating the skill model for DWR was that we wanted the player to be able to issue a variable number of commands in order to solve the maze. For this reason, we need multiple "Direction" slots in the model rather than the single one used in the tutorial. Naming the various Direction slots also proved to be a challenge because the Alexa Skill API does not allow slot names like Direction01, Direction02, etc. I suspect this is because the API has built-in features, such as verification of slot values via voice commands, and Direction01 is not pronounceable by Alexa.
So, to get around these limitations, I named the direction slots "One", "Two", "Three", and so on through "Ten".
"slots": [
{
"name": "One",
"type": "CommandType"
},
{
"name": "Two",
"type": "CommandType"
},
{
"name": "Three",
"type": "CommandType"
},
Similarly, I wanted each direction command to have an associated "Distance" slot and, because of the same limitation, named these "A", "B", "C", and so on through "J".
    {
        "name": "A",
        "type": "DistanceType"
    },
    {
        "name": "B",
        "type": "DistanceType"
    },
    {
        "name": "C",
        "type": "DistanceType"
    }
For the sample utterances, I had to include an example for each possible number of commands in a sequence. If a slot is not filled in the command utterance, it defaults to "none", which causes no action on the robot.
"samples": [
"go {One}",
"go {One} then {Two}",
"go {One} then {Two} then {Three}",
"go {One} then {Two} then {Three} then {Four}",
"go {One} then {Two} then {Three} then {Four} then {Five}",
"go {One} then {Two} then {Three} then {Four} then {Five} then {Six}",
"go {One} then {Two} then {Three} then {Four} then {Five} then {Six} then {Seven}",
"go {One} then {Two} then {Three} then {Four} then {Five} then {Six} then {Seven} then {Eight}",
"go {One} then {Two} then {Three} then {Four} then {Five} then {Six} then {Seven} then {Eight} then {Nine}",
"go {One} then {Two} then {Three} then {Four} then {Five} then {Six} then {Seven} then {Eight} then {Nine} then {Ten}"
]
Also, I had to include sample utterances that include both a direction and a distance for each number of commands.
"go {One} {A}",
"go {One} {A} then {Two} {B} ",
"go {One} {A} then {Two} {B} then {Three} {C}",
"go {One} {A} then {Two} {B} then {Three} {C} then {Four} {D}",
"go {One} {A} then {Two} {B} then {Three} {C} then {Four} {D} then {Five} {E}",
"go {One} {A} then {Two} {B} then {Three} {C} then {Four} {D} then {Five} {E} then {Six} {F}",
"go {One} {A} then {Two} {B} then {Three} {C} then {Four} {D} then {Five} {E} then {Six} {F} then {Seven} {G}",
"go {One} {A} then {Two} {B} then {Three} {C} then {Four} {D} then {Five} {E} then {Six} {F} then {Seven} {G} then {Eight} {H}",
"go {One} {A} then {Two} {B} then {Three} {C} then {Four} {D} then {Five} {E} then {Six} {F} then {Seven} {G} then {Eight} {H} then {Nine} {I}",
"go {One} {A} then {Two} {B} then {Three} {C} then {Four} {D} then {Five} {E} then {Six} {F} then {Seven} {G} then {Eight} {H} then {Nine} {I} then {Ten} {J}",
]
The final step for the model is to define the DistanceType and the CommandType. Currently, I only have forward, backward, left, right, and variations of those defined; however, future builds may include commands such as driving forward until a certain color or condition is encountered.
{
    "name": "DistanceType",
    "values": [
        {
            "name": {
                "value": "one"
            }
        },
        {
            "name": {
                "value": "two"
            }
        },
        {
            "name": {
                "value": "three"
            }
        }
    ]
},
{
    "name": "CommandType",
    "values": [
        {
            "name": {
                "value": "none"
            }
        },
        {
            "name": {
                "value": "brake"
            }
        },
        {
            "name": {
                "value": "go backward"
            }
        },
        {
            "name": {
                "value": "go forward"
            }
        },
        {
            "name": {
                "value": "go right"
            }
        }
    ]
}
The index.js file contains most of the logic to get the slot values from the voice command and send a JSON directive to the gadget. Again, I followed the example in the Mission 3 tutorial. I decided that at least the first command needs to have a value, and every command after that defaults to "none". Also, every distance slot defaults to a value of "1".
handle: function (handlerInput) {
    const request = handlerInput.requestEnvelope;
    const one = Alexa.getSlotValue(request, 'One');
    // All others are optional, use default if not available
    const two = Alexa.getSlotValue(request, 'Two') || "none";
    const three = Alexa.getSlotValue(request, 'Three') || "none";
    const four = Alexa.getSlotValue(request, 'Four') || "none";
    const five = Alexa.getSlotValue(request, 'Five') || "none";
    const six = Alexa.getSlotValue(request, 'Six') || "none";
    const seven = Alexa.getSlotValue(request, 'Seven') || "none";
    const eight = Alexa.getSlotValue(request, 'Eight') || "none";
    const nine = Alexa.getSlotValue(request, 'Nine') || "none";
    const ten = Alexa.getSlotValue(request, 'Ten') || "none";
    const a = Alexa.getSlotValue(request, 'A') || "1";
    const b = Alexa.getSlotValue(request, 'B') || "1";
    const c = Alexa.getSlotValue(request, 'C') || "1";
    const d = Alexa.getSlotValue(request, 'D') || "1";
    const e = Alexa.getSlotValue(request, 'E') || "1";
    const f = Alexa.getSlotValue(request, 'F') || "1";
    const g = Alexa.getSlotValue(request, 'G') || "1";
    const h = Alexa.getSlotValue(request, 'H') || "1";
    const i = Alexa.getSlotValue(request, 'I') || "1";
    const j = Alexa.getSlotValue(request, 'J') || "1";
These slot values are then used to construct the directive JSON that is sent to the gadget. All of the other code in index.js, common.js, and util.js follows the tutorial fairly closely.
By uploading these files and constructing the Alexa skill as in the tutorial, we now have a skill that will accept a direction and distance for up to ten commands and send a JSON directive to the Python code on the robot.
Robot Code
The robot code needs to serve two main functions: 1) accept and parse directives from the Alexa skill, and 2) drive the robot wheels according to the commands.
This is implemented in Python using the ev3dev module to interface with the robot motors and the agt module to implement the Alexa gadget interface.
As in the Mission 3 tutorial, the main part of this code is implemented as a class that inherits from the AlexaGadget class in the agt module. I adapted the methods that drive the robot to be specific to the maze build, the robot dimensions, and the wheel size. This took a little tinkering to get the values right, and yours may be different from what I have below. Right now the speed percentage and duration used to drive the robot are hard coded and this code needs to be cleaned up a bit, but it works for now.
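For context, here is a minimal sketch of how the class and motor setup might look. The imports follow the ev3dev2 and agt modules used in the Mission 3 tutorial, but the class name and the port assignments (B and C) are assumptions, so adjust them to match your wiring. The tuned drive methods and the directive handler shown below are defined on this class.

#!/usr/bin/env python3
import json

from agt import AlexaGadget
from ev3dev2.motor import MoveTank, OUTPUT_B, OUTPUT_C


class MindstormsGadget(AlexaGadget):
    def __init__(self):
        super().__init__()
        # Tank drive built from the two large motors (assumed on ports B and C)
        self.tank = MoveTank(OUTPUT_B, OUTPUT_C)

    # _forward, _backward, _turn_right, _turn_left, and
    # on_custom_mindstorms_gadget_control (all shown below) go here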
def _forward(self, dur):
    self.tank.on_for_rotations(50, 50, 1.3 * dur)

def _backward(self, dur):
    self.tank.on_for_rotations(-50, -50, 1.3 * dur)

def _turn_right(self):
    self.tank.on_for_rotations(-50, -50, 1.)
    self.tank.on_for_rotations(50, 10, 1.7)
    self.tank.on_for_rotations(-50, -50, .15)

def _turn_left(self):
    self.tank.on_for_rotations(-50, -50, 1.)
    self.tank.on_for_rotations(10, 50, 1.8)
    self.tank.on_for_rotations(-50, -50, .15)
Below is the method for handling the directives sent by the Alexa skill. Because we encoded the keys for the commands as numerals ("3") rather than as words ("three"), we can use the range function to step through the commands in order and then call the respective drive method depending on the command. To get the duration values, I constructed an array of the lower-case letters and then indexed into the payload with the letter that corresponds to each direction. An example of a decoded payload is shown after the handler code.
def on_custom_mindstorms_gadget_control(self, directive):
    payload = json.loads(directive.payload.decode("utf-8"))
    print("Control payload: {}".format(payload))
    # Duration values arrive under the keys "a" through "j"
    lower_case = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
    try:
        for i, idx in enumerate(range(1, 11)):
            idx_str = str(idx)
            duration = payload.get(lower_case[i], 1)
            # Alexa sometimes hears the distance "two" as the word "to"
            if duration == 'to':
                duration = 2
            duration = int(duration)
            if payload[idx_str] in ['none']:
                pass
            elif payload[idx_str] in ['left', 'go left', 'turn left']:
                self._turn_left()
            elif payload[idx_str] in ['right', 'go right', 'turn right']:
                self._turn_right()
            elif payload[idx_str] in ['forward', 'go forward']:
                self._forward(duration)
            elif payload[idx_str] in ['backward', 'back', 'go backward']:
                self._backward(duration)
    except KeyError:
        print("Missing expected parameters in directive payload")
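To make the key scheme concrete, a decoded payload for a two-step command such as "go forward two then left" might look roughly like the following. The values are illustrative only; the numeric keys come from the command slots and the letter keys from the distance slots constructed in index.js.

# Illustrative decoded payload for "go forward two then left";
# unfilled slots fall back to their defaults of "none" and "1".
payload = {
    "1": "go forward", "a": "2",
    "2": "go left",    "b": "1",
    "3": "none",       "c": "1",
    # ... keys "4" through "10" and "d" through "j" follow the same pattern
}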
One peculiarity is that Alexa sometimes interprets the duration "two" as the word "to"; this is easily handled by checking for it and setting the duration to 2.
You can sync this file to the EV3 brick using the VS Code IDE as described in the tutorials, and then run it to connect to the Alexa device and begin handling directives.
Playing!
Now, with a deployed Alexa skill, the robot with its accompanying gadget Python code, and the maze, we can start to play. If playing with kids, it is a good idea to start simple, with maybe just a single move required to complete the maze. This gets them familiar with what commands they are allowed to use and how to communicate them to Alexa. As they get comfortable with using their voice to command the robot, you can construct progressively harder mazes. It can even be fun and engaging to have kids challenge each other, where one makes the maze and the other commands the robot to solve it.
I hope you enjoyed this build and that it makes for a fun game to get kids engaged in robotics, LEGO, and programming. Now enjoy a video of the robot in action!