Everyone in our family loves candy, so we wanted to build some home automation around our favourite subject. The result is a machine with a conveyor belt that serves us candies. With an Amazon Echo Dot talking to a Mindstorms EV3, it was easy to teach Alexa a few tricks on our behalf.
We hope you will follow the steps below to build our model and then extend it with your own cool ideas.
Demo Video"Alexa, open Candy Machine"
"Candy Machine activated. What Can I do for you?"
"Give me one piece of blue and one piece of green candies, please"
Or you can even tell her things like:
"Alexa, tell Candy Machine that I have finished my homework"
"Congratulations! You deserve some candies"
---
Communication Overview
1. User speaks to Echo Dot
2. Echo Dot sends the audio sample to Alexa Voice Service
3. AVS transcribes the audio to text, matches it against our Alexa skill model, then calls our lambda to process the message
4. Our lambda sends voice and/or EV3 commands back to Echo Dot
5. Echo Dot speaks the message and/or sends our command to EV3 via Bluetooth
6. EV3 executes our command
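To make steps 3 to 6 more concrete: the lambda sends a custom directive whose JSON payload EV3 decodes into a plain dictionary. The field names below are the ones this project uses (the exact payload appears again in the lambda and EV3 code later); here it is written as the Python dict the EV3 code ends up with:
# Roughly the payload EV3 decodes in step 6; in the logs later you can see the
# counts arrive as strings, which is why the EV3 code casts them to int.
payload = {
    "type": "candy",     # which command the EV3 handler should run
    "pieces": "2",       # how many candies of the first color
    "color": "blue",
    "piecesB": "1",      # how many candies of the second color (0 = none)
    "colorB": "green",
}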
Step 1: Setup development environment
Please follow the guide on the LEGO MINDSTORMS Voice Challenge: Setup page, which will walk you through the steps of:
EV3
- Download + flash ev3dev software to a microSD card
- (This will allow you to run python code and use advanced hardware features on EV3)
- Insert microSD card into EV3 + boot up
- Connect your EV3 to your PC via USB
- (This will speed up transferring your program to EV3 and real-time debugging)
PC
- Install Visual Studio Code + its extensions
- Configure Visual Studio Code to connect to your EV3
- Download sample code to EV3
Since we will use EV3 as an Alexa Gadget (that is, Alexa will drive EV3 through the Echo Dot's Bluetooth), we need to register and connect EV3 to Alexa by following the steps of the LEGO MINDSTORMS Voice Challenge: Mission 1 page, where you will:
Amazon
- Register a developer.amazon.com account + add EV3 as Alexa Gadget
EV3
- Turn on Bluetooth on EV3
- (This is needed for communication with Echo Dot)
- Connect to Echo Dot by starting the sample code, and finishing the pairing process
Now follow the steps of the LEGO MINDSTORMS Voice Challenge: Mission 3 page, which will guide you through connecting an Alexa skill to your EV3. Once that is done, you can be confident that you have no connection issues.
You do not need to build EV3STORM; it is enough to connect the Large motors to ports B and C and the Medium motor to port A. Remember, we are just setting up and testing the connection here.
Here you will:
Alexa
- Create a new Alexa skill + interaction model
- (to teach Alexa what to understand and how)
- Add NodeJS lambda code to the skill
- (so Alexa can react to your commands and trigger events on EV3)
EV3
- Add Python code which handles events on the EV3 side and moves the motors (see the sketch right below)
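If you are curious what that Python side looks like before opening the mission, here is a minimal sketch of the pattern (our own simplified version, not the mission's sample code): an AlexaGadget subclass that moves a motor when a custom directive arrives.
import json
import sys

from agt import AlexaGadget
from ev3dev2.motor import OUTPUT_B, LargeMotor, SpeedPercent


class MindstormsGadget(AlexaGadget):

    def __init__(self):
        super().__init__()
        self.motor = LargeMotor(OUTPUT_B)

    def on_custom_mindstorms_gadget_control(self, directive):
        # Payload sent by the skill's lambda (Custom.Mindstorms.Gadget namespace)
        payload = json.loads(directive.payload.decode("utf-8"))
        print("Payload: {}".format(payload), file=sys.stderr)
        # React to any command by spinning the motor for a second
        self.motor.on_for_seconds(SpeedPercent(50), 1)


if __name__ == '__main__':
    MindstormsGadget().main()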
Once you are done with the above setup steps, you are ready to build the Candy Machine itself. You will need the retail 31313 LEGO Mindstorms set plus some extra LEGO Technic elements for our conveyor belt. (Please find the extra elements listed in the hardware list.)
The contraption is built in a modular way. The video below shows the modules and how to build the machine.
Please note: Medium Motor goes to port A, Large Motor goes to port B and Color Sensor is connected to port 2.
Next come the candies.
EV3's color sensor can detect black, blue, green, yellow, red, white and brown. However, the conveyor belt itself is detected as black or red. Of the remaining colors we only had blue, green, and yellow candies, so we use just these three colors in this project.
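If you want to check what color your candies register as, a quick test with the ev3dev2 API looks roughly like this (the same sensor mode and color mapping that the project code uses later):
# Quick check of what the color sensor reports for each candy.
import time
from ev3dev2.sensor.lego import ColorSensor

cs = ColorSensor()
cs.mode = 'COL-COLOR'   # the sensor returns a numeric color code 0..7
colors = ('unknown', 'black', 'blue', 'green', 'yellow', 'red', 'white', 'brown')

while True:
    print(colors[cs.value()])   # e.g. 'blue' when a blue candy is in front of the sensor
    time.sleep(0.5)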
Step 5: Customizing Alexa for Candy Machine
Here we are implementing #3 of the Communication Overview.
Create a new skill called CandyMachine using the same steps you followed when you created the Mindstorms skill.
Open the code sample from this project (alexa-candymachine-code.zip at the bottom of this page), copy the contents of the model.json file into Build / Interaction Model / JSON Editor, and then Save the Model. This defines the Invocation Name of the Alexa skill, a CandyIntent for requesting candies, and a GoalIntent for reaching your goals. Please review the phrases we have defined.
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "candy machine",
      "intents": [
        {
          "name": "AMAZON.CancelIntent",
          "samples": []
        },
        {
          "name": "AMAZON.HelpIntent",
          "samples": []
        },
        {
          "name": "AMAZON.StopIntent",
          "samples": []
        },
        {
          "name": "AMAZON.NavigateHomeIntent",
          "samples": []
        },
        {
          "name": "CandyIntent",
          "slots": [
            {
              "name": "Pieces",
              "type": "AMAZON.NUMBER"
            },
            {
              "name": "Color",
              "type": "AMAZON.Color"
            },
            {
              "name": "PiecesB",
              "type": "AMAZON.NUMBER"
            },
            {
              "name": "ColorB",
              "type": "AMAZON.Color"
            }
          ],
          "samples": [
            "Give me {Pieces} pieces of {Color} candies",
            "Give me {Pieces} pieces of {Color} and {PiecesB} pieces of {ColorB} candies",
            "Give me {Pieces} pieces of candies",
            "Give me a {Color} candy",
            "Give me a candy"
          ]
        },
        {
          "name": "GoalIntent",
          "slots": [],
          "samples": [
            "I have finished my homework",
            "My room is clean"
          ]
        }
      ],
      "types": []
    }
  }
}
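To see how the slots get filled, take a request from the demo, "Give me 2 pieces of green and 3 pieces of yellow candies": it matches the second CandyIntent sample, and the lambda receives slot values roughly like this (an illustration only; the actual values are resolved by AVS):
# Illustration only: rough slot values for
# "Give me 2 pieces of green and 3 pieces of yellow candies"
slots = {
    "Pieces": "2",       # AMAZON.NUMBER values arrive as strings
    "Color": "green",
    "PiecesB": "3",
    "ColorB": "yellow",
}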
Once this is done, you need to copy and paste the lambda files from alexa-candymachine-code.zip into the respective files in the Alexa Code editor: common.js, index.js, package.json and util.js. Don't forget to Save them. These files are the lambda code behind the Alexa skill: the validation logic, the messages Alexa will say, and the commands we want to send to EV3 all live here.
Let's take a look at them:
// Skill starting event
const LaunchRequestHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
    },
    handle: async function(handlerInput) {
        const request = handlerInput.requestEnvelope;
        const { apiEndpoint, apiAccessToken } = request.context.System;
        const apiResponse = await Util.getConnectedEndpoints(apiEndpoint, apiAccessToken);
        if ((apiResponse.endpoints || []).length === 0) {
            return handlerInput.responseBuilder
                .speak(`I couldn't find an EV3 Brick connected to this Echo device. Please check to make sure your EV3 Brick is connected, and try again.`)
                .getResponse();
        }

        // Store the gadget endpointId to be used in this skill session
        const endpointId = apiResponse.endpoints[0].endpointId || [];
        Util.putSessionAttribute(handlerInput, 'endpointId', endpointId);

        return handlerInput.responseBuilder
            .speak("Candy machine activated. What can I do for you?")
            .reprompt("What can I do for you?")
            .getResponse();
    }
};
LaunchRequestHandler is called when you say "Alexa, open Candy Machine". It checks the connection to EV3 and responds to the user if the check succeeds.
The .reprompt() call is important here: it makes Alexa wait in Candy Machine mode for the next command (it keeps the session open).
// Construct and send a custom directive to the connected gadget with
// data from the CandyIntent.
const CandyIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'CandyIntent';
    },
    handle: function (handlerInput) {
        const request = handlerInput.requestEnvelope;

        // Parameter is optional, use default if not available
        const pieces = Alexa.getSlotValue(request, 'Pieces') || 1;
        const color = Alexa.getSlotValue(request, 'Color') || "";
        const piecesB = Alexa.getSlotValue(request, 'PiecesB') || 0;
        const colorB = Alexa.getSlotValue(request, 'ColorB') || "";

        ////debug : return handlerInput.responseBuilder.speak(`Please wait while I am serving your ${pieces} pieces of ${color} and ${piecesB} pieces of ${colorB} candies.`).getResponse();

        // Validations - if request is not valid, we will require the user to specify his/her request in more detail.
        let validationSpeechOutput = "";
        let repromptSpeechOutput = "What can I do for you?";
        if (color === "")
            validationSpeechOutput = "I am afraid, you forgot to mention the color";
        else if (color !== "blue" && color !== "green" && color !== "yellow")
            validationSpeechOutput = "Sorry, I don't have this color";
        else if (colorB !== "" && colorB !== "blue" && colorB !== "green" && colorB !== "yellow")
            validationSpeechOutput = "Sorry, I don't have this color";
        else if (pieces > 5 || piecesB > 5)
            validationSpeechOutput = "I am afraid, this is too much for you";

        // (reprompt will keep session open)
        if (validationSpeechOutput !== "")
            return handlerInput.responseBuilder
                .speak(validationSpeechOutput + repromptSpeechOutput)
                .reprompt(repromptSpeechOutput)
                .getResponse();
        // Validations done

        // Get data from session attribute
        const attributesManager = handlerInput.attributesManager;
        const endpointId = attributesManager.getSessionAttributes().endpointId || [];

        // Construct the directive with the payload containing the move parameters
        let directive = Util.build(endpointId, NAMESPACE, NAME_CONTROL,
            {
                type: 'candy',
                pieces: pieces,
                color: color,
                piecesB: piecesB,
                colorB: colorB
            });

        const speechOutput = (piecesB === 0)
            ? `Please wait while I am serving your ${pieces} ${color} candies.`
            : `Please wait while I am serving your ${pieces} ${color} and ${piecesB} ${colorB} candies.`;

        return handlerInput.responseBuilder
            .speak(speechOutput)
            .addDirective(directive)
            .getResponse();
    }
};
CandyIntentHandler handles the commands where the user asks for candies. First we get the parameters with Alexa.getSlotValue(). Note that we set default values when slots are missing, in case the user said something incomplete, e.g. left out the Color.
The ////debug line was a big help when we tested whether the parameters and the intent samples were right: just uncomment it, and Alexa will tell you the parameters she received. (You can even execute tests without deploying, using the Test tab of the Alexa developer console.)
Next come the validations. For incomplete or unhandled inputs we respond with a message and keep the session open with .reprompt(). These are the situations where the user left out the color, asked for a color we don't have, or asked for too many candies.
Once validation succeeds, we build the directive carrying the parameters for EV3's candy handler with Util.build() and attach it to the response, letting the user know that we are serving the candies.
// Construct and send a custom directive to the connected gadget with
// data from the GoalIntent.
const GoalIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'GoalIntent';
    },
    handle: function (handlerInput) {
        const request = handlerInput.requestEnvelope;
        return handlerInput.responseBuilder
            .speak("Congratulations! You deserve some candies.")
            .reprompt("You deserve some candies.")
            .getResponse();
    }
};
As a bonus, GoalIntentHandler motivates the user when he or she reaches one of the goals listed in the GoalIntent model.
Here we just respond with congratulations and keep the session open, which is enough for the conversation between the user and Alexa to continue.
---
Once you have saved all the lambda files in the Code editor, you also need to click Deploy so that the solution becomes active in the cloud.
Step 6: Customizing EV3 for Candy Machine
Now that Alexa is ready to pass our parameters to the candy handler of our EV3 Python code, let's implement that handler. This is #6 in our Communication Overview.
First you need to edit candymachine.ini and paste in your Alexa Gadget ID and Secret (which you registered during Step 2), so Alexa will be able to connect to your EV3 via the Echo Dot and Bluetooth.
[GadgetSettings]
amazonId = xxxxxxxxxxxxxx
alexaGadgetSecret = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Next comes candymachine.py, the EV3 code written in Python. The snippets below are the interesting parts of the file; they live in an AlexaGadget subclass, just like in the missions.
At the top we import what we need for the peripherals we wired up (json and sys are also needed by the handlers below):
import json
import sys

from agt import AlexaGadget
from ev3dev2.led import Leds
from ev3dev2.sound import Sound
from ev3dev2.motor import OUTPUT_A, MediumMotor
from ev3dev2.motor import OUTPUT_B, LargeMotor
from ev3dev2.sensor.lego import ColorSensor
And in the initialization we connect to them:
def __init__(self):
    """
    Performs Alexa Gadget initialization routines and ev3dev resource allocation.
    """
    super().__init__()

    # Connect motors and color sensor
    self.colorsensor = ColorSensor()
    self.beltmotor = LargeMotor(OUTPUT_B)
    self.ejectmotor = MediumMotor(OUTPUT_A)
    self.sound = Sound()
    self.leds = Leds()
Then, when a command arrives at the Mindstorms Gadget, we unpack the parameters and call our own method (note that the counts arrive as strings, so we cast them with int()):
def on_custom_mindstorms_gadget_control(self, directive):
    """
    Handles the Custom.Mindstorms.Gadget control directive.
    :param directive: the custom directive with the matching namespace and name
    """
    try:
        payload = json.loads(directive.payload.decode("utf-8"))
        print("Control payload: {}".format(payload), file=sys.stderr)
        control_type = payload["type"]

        if control_type == "candy":
            # Expected params: [pieces, color, piecesB, colorB]
            self._candy(int(payload["pieces"]), payload["color"], int(payload["piecesB"]), payload["colorB"])

    except KeyError:
        print("Missing expected parameters: {}".format(directive), file=sys.stderr)
And here is the main part, which will serve the candies:
def _candy(self, pieces: int, color, piecesB: int, colorB, is_blocking=False):
    """
    Handles candy commands from the directive.
    Sample:
        Give me {Pieces} {Color} and {PiecesB} {ColorB} candies
    Variations:
        1 blue and 2 green
        1 blue and 0 <empty string>
    """
    print("Candy command: ({}, {}, {}, {})".format(pieces, color, piecesB, colorB), file=sys.stderr)

    # Let EV3 do his job
    # Music at the beginning
    self.leds.set_color("LEFT", "RED")
    self.leds.set_color("RIGHT", "RED")
    self.sound.play_song((('C4', 'e'), ('D4', 'e'), ('E5', 'q')))

    self.colorsensor.mode = 'COL-COLOR'
    colors = ('unknown', 'black', 'blue', 'green', 'yellow', 'red', 'white', 'brown')

    print("Processing 1st color", file=sys.stderr)
    for x in range(pieces):
        print("Belt started: candy: {}".format(x), file=sys.stderr)
        self.beltmotor.run_forever(speed_sp=-100)
        while True:
            actualcolor = colors[self.colorsensor.value()]
            print("Actual color: {}, {}".format(actualcolor, color), file=sys.stderr)
            if actualcolor == color:
                break
        self.beltmotor.stop(stop_action="hold")
        self.beltmotor.wait_while('running')
        print("Belt stopped", file=sys.stderr)

        print("Move candy to eject position", file=sys.stderr)
        self.beltmotor.run_to_rel_pos(position_sp=-135, speed_sp=-100, stop_action="hold")
        self.beltmotor.wait_while('running')

        print("Eject candy", file=sys.stderr)
        self.ejectmotor.run_to_rel_pos(position_sp=-360, speed_sp=400, stop_action="hold")
        self.ejectmotor.wait_while('running')

    print("Processing 2nd color", file=sys.stderr)
    for x in range(piecesB):
        print("Belt started: candy: {}".format(x), file=sys.stderr)
        self.beltmotor.run_forever(speed_sp=-100)
        while True:
            actualcolor = colors[self.colorsensor.value()]
            print("Actual color: {}, {}".format(actualcolor, colorB), file=sys.stderr)
            if actualcolor == colorB:
                break
        self.beltmotor.stop(stop_action="hold")
        self.beltmotor.wait_while('running')
        print("Belt stopped", file=sys.stderr)

        print("Move candy to eject position", file=sys.stderr)
        self.beltmotor.run_to_rel_pos(position_sp=-135, speed_sp=-100, stop_action="hold")
        self.beltmotor.wait_while('running')

        print("Eject candy", file=sys.stderr)
        self.ejectmotor.run_to_rel_pos(position_sp=-360, speed_sp=400, stop_action="hold")
        self.ejectmotor.wait_while('running')

    # Music at the end
    self.leds.set_color("LEFT", "GREEN")
    self.leds.set_color("RIGHT", "GREEN")
    self.sound.play_song((('C4', 'e'), ('D4', 'e'), ('E5', 'q')))
For the Python code, we first implemented and tested the logic using EV3's own programming language, like this:
The Python code is based on that: after a starting sound, we turn the belt with the Large motor while continuously checking the value of the color sensor. When the sensor sees the selected candy color, we stop the belt right away, move the candy to the eject position, and let the Medium motor eject it. This repeats until all candies of the first color have been ejected, and then the whole sequence repeats for the second color.
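By the way, the two loops in _candy differ only in which count and color they use, so if you extend the project, one possible cleanup (a sketch only, not part of the original code) is to pull the loop into a helper method on the same gadget class and call it twice:
# Sketch only: a possible helper inside the same gadget class; it relies on the
# beltmotor, ejectmotor and colorsensor attributes created in __init__.
def _serve(self, pieces: int, color: str, colors):
    """Serve `pieces` candies of `color` using the belt and eject motors."""
    for x in range(pieces):
        print("Belt started: candy: {}".format(x), file=sys.stderr)
        self.beltmotor.run_forever(speed_sp=-100)
        while colors[self.colorsensor.value()] != color:
            pass                                  # keep the belt moving until the color matches
        self.beltmotor.stop(stop_action="hold")
        self.beltmotor.wait_while('running')
        # Move the candy to the eject position, then eject it
        self.beltmotor.run_to_rel_pos(position_sp=-135, speed_sp=-100, stop_action="hold")
        self.beltmotor.wait_while('running')
        self.ejectmotor.run_to_rel_pos(position_sp=-360, speed_sp=400, stop_action="hold")
        self.ejectmotor.wait_while('running')

# _candy would then shrink to two calls:
#     self._serve(pieces, color, colors)
#     self._serve(piecesB, colorB, colors)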
This site was very useful when we translated our program to Python: https://sites.google.com/site/ev3python/learn_ev3_python/using-motors
Step 7: Fun!
Now let's put candies into the machine and let the fun begin.
You can start the program by downloading it to EV3 using Visual Studio Code, then right-clicking your candymachine.py file and choosing Run. Visual Studio Code will start it in debug mode, and you will see EV3's console output on your PC as well.
The program continuously logs what is happening, so you can follow along in Visual Studio Code while it runs, like this:
Starting: brickrun --directory="/home/robot/alexa-candymachine" "/home/robot/alexa-candymachine/candymachine.py"
Started.
----------
Attempting to reconnect to Echo device with address: 08:A6:BC:95:53:02
Connected to Echo device with address: 08:A6:BC:95:53:02
GadgetDFD connected to Echo device
Control payload: {'pieces': '2', 'color': 'blue', 'piecesB': '1', 'type': 'candy', 'colorB': 'green'}
Candy command: (2, blue, 1, green)
Processing 1st color
Belt started: candy: 0
Actual color: black, blue
Actual color: black, blue
Actual color: black, blue
...
Actual color: red, blue
Actual color: red, blue
Actual color: red, blue
...
Actual color: black, blue
Actual color: black, blue
Actual color: black, blue
Actual color: blue, blue
Belt stopped
Move candy to eject position
Eject candy
Belt started: candy: 1
Actual color: red, blue
Actual color: red, blue
...
Actual color: black, blue
Actual color: black, blue
Actual color: black, blue
...
Actual color: red, blue
Actual color: red, blue
Actual color: red, blue
...
Actual color: black, blue
Actual color: black, blue
Actual color: black, blue
Actual color: blue, blue
Belt stopped
Move candy to eject position
Eject candy
Processing 2nd color
Belt started: candy: 0
Actual color: red, green
Actual color: red, green
...
Actual color: black, green
Actual color: black, green
Actual color: black, green
...
Actual color: red, green
Actual color: red, green
...
Actual color: black, green
Actual color: black, green
Actual color: black, green
Actual color: green, green
Belt stopped
Move candy to eject position
Eject candy
Now, let's chat with Alexa:
"Alexa, open Candy Machine"
"Candy Machine activated. What Can I do for you?"
"Give me 2 pieces of green and 3 pieces of yellow candies"
"Please wait while I am serving your 2 green and 3 yellow candies."
Or, following the Alexa skill interaction model, you can say things like:
"Alexa, ask Candy Machine to give me a candy"
"Give me 2pieces of blue candies"
"Give me a green candy"
"Give me 3 pieces of candies"
"Give me 6 pieces of red candies"
Whenever your request is incomplete or you request something which is not available, Alexa will respond accordingly based on the logic implemented in the lambda.
You can also tell her you have reached your goals, so she can present a candy to you.
"I have finished my homework"
"Congratulations! You deserve some candies"
"My room is clean"
"Congratulations! You deserve some candies"
You can see a more advanced conversation with her in the video below:
We hope you enjoyed building our Candy Machine project!
(submitted at 11/17/2019)