Make your EV3 Brick and SMART-PATIENT-BED react to voice commands using an Alexa skill.
Smart Patient Bed: This is a voice-controlled patient bed that provides hands-free, hygienic patient care.
The smart patient bed provides the following features:
- Move the back of the bed up and down.
- Open and close the food tray.
- Call a nurse.
To start issuing commands, just say "Alexa, open mindstorms".
Control SMART-PATIENT-BED with your voice
You will now explore how to create custom interactions between your EV3 Brick and an Alexa skill. An Alexa skill is like an app for Alexa, and is built using code stored in the cloud. You will:
- Download SMART-PATIENT-BED code and build instructions
- Build SMART-PATIENT-BED
- Create an Alexa skill to control SMART-PATIENT-BED
- Make SMART-PATIENT-BED react to custom commands triggered by your voice
In the end, SMART-PATIENT-BED can react to an Alexa skill like this:
Please download the SMART-PATIENT-BED code and build instructions, which are in the smart-patient-bed folder. To build the model, refer to smart-patient-bed-build-instructions-1.0.pdf in that folder. The build should look like this:
Let’s walk through the steps of creating your Skill:
1. Sign in to developer.amazon.com.
2. In the top header, hover over Alexa, and click on Alexa Skills Kit.
3. In the upper-right of the screen, click on Create Skill.
4. Enter a Skill Name, maybe “MINDSTORMS”. The name you give your Skill will also be the way you open the Skill. For example, “Alexa, open mindstorms”, or, “Alexa, tell mindstorms to go forward”. You can modify this later.
5. Select your default language. Make sure the language selected matches the language used on your Echo device.
6. Select Custom for the “Choose a model to add to your skill” option.
7. Select Alexa-Hosted for the “Choose a method to host your skill's backend resources” option.
Note: At the time of writing, the Python option for Alexa-Hosted skills was not available, but this method will work. We will work to add a guide for Python-based skills in the future. In the meantime, you can reference the Color Cycler Alexa skill for the Raspberry Pi sample to get started.
8. Click Create skill in the upper-right.
9. Once you click Create skill, you will see a modal while your skill is being prepared to be customized.
Enable the Custom Interface Controller
The Custom Interface Controller allows you to send custom commands to your EV3 Brick (Custom Directives), and receive events from your EV3 Brick (Custom Events). For this mission, you will need to enable the Custom Interface Controller so you can use Custom Directives:
1. Click on Interfaces in the left navigation of the Alexa Developer Console.
2. Toggle Custom Interface Controller to ON.
3. Scroll to the top of the page and click Save Interfaces.
That’s it! With Custom Interface Controller toggled on, you can write code that sends custom directives to your EV3 Brick and program how you want it to react. Learn more about Custom Interfaces.
Define the Skill Interaction Model
The Skill Interaction Model defines how you can speak to your skill, and what kinds of commands it can respond to. The interaction model includes the intents, slots, and sample utterances that you define and program against in your skill's code. Learn more about the Skill Interaction Model.
1. In the Alexa Developer Console, under Interaction Model, click on JSON Editor.
2. In the smart-patient-bed folder, you will see a folder called skill-nodejs. Within that folder, there is a model.json file. Copy the interaction model JSON from that file and paste it into the editor, or drag and drop the JSON file onto the drop zone to upload it.
After pasting the JSON into the Alexa skill JSON Editor, click Save Model, and then Build Model, both at the top of the console interface. It may take some time for the model to build, so be patient. In the meantime, let's explore the Intents and Slots in the JSON file.
An intent represents an action that fulfills a user's spoken request. Intents can optionally have arguments called slots. For example, in the JSON you will see a MoveIntent with a Direction slot, which maps to a list of words that represent the different directions you can say:
“Alexa, tell mindstorms to move up.”
There is also a SetCommandIntent with a Command slot that maps to a list of commands defined within CommandType. These intents allow you to write code that reacts to:
“Alexa, tell mindstorms to call nurse”
Take a moment to browse through these intents and their slots — key things to note are:
- Intent names (MoveIntent, SetCommandIntent)
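As an illustrative sketch only (the model.json file in the skill-nodejs folder is the source of truth, and the slot-type names below are assumptions), an intent with slots looks roughly like:

```json
{
  "name": "MoveIntent",
  "slots": [
    { "name": "Direction", "type": "DirectionType" },
    { "name": "Duration", "type": "AMAZON.NUMBER" }
  ],
  "samples": [
    "move {Direction}",
    "move {Direction} for {Duration} seconds"
  ]
}
```

The slot type (here hypothetically DirectionType) is where the list of accepted words such as "up" and "down" is defined.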
With your Intents defined, you’re ready to start implementing the Skill code.
Implementing the Skill Logic
There’s a lot to learn about creating skills, but for the purpose of this mission, we’ll guide you through using the Alexa-Hosted skill option you selected earlier, and share additional resources at the end. With an Alexa-Hosted skill, you can start writing code for your skill directly in the Alexa Developer Console:
1. Click on Code in the top navigation bar of the Alexa Developer Console.
2. In VS Code, open the index.js file in the smart-patient-bed/skill-nodejs/lambda folder.
3. Copy the code in the index.js file into the index.js file in the Alexa Developer Console Code Editor.
4. Copy the contents of the package.json and util.js files to the respective files in the Alexa Skill Code Editor.
5. Create a new file by clicking the New File icon in the upper-left of the Code Editor, and fill in the path and file name as /lambda/common.js.
6. With the common.js file created, make sure the file is open, and then copy the code in the common.js file from the smart-patient-bed/skill-nodejs/ folder in VS Code to the common.js file in the Alexa Skill Code Editor.
Now, let’s take a look at the core elements of the index.js file. The snippet below contains all of the handlers used by this skill. The last four handlers are lifted from the skill template and need no modifications.
exports.handler = Alexa.SkillBuilders.custom()
.addRequestHandlers(
LaunchRequestHandler,
SetSpeedIntentHandler,
MoveIntentHandler,
SetCommandIntentHandler,
Common.HelpIntentHandler,
Common.CancelAndStopIntentHandler,
Common.SessionEndedRequestHandler,
Common.IntentReflectorHandler, // make sure IntentReflectorHandler is last so it doesn't override your custom intent handlers
)
.addRequestInterceptors(Common.RequestInterceptor)
.addErrorHandlers(
Common.ErrorHandler,
)
.lambda();
There are also several unique handlers that have been added. Let’s review three of them:
LaunchRequestHandler
This handler is invoked when you launch the skill by saying, “Alexa, open mindstorms”. When the skill is launched, the following occurs: the endpointId of the MindstormsGadget is obtained using:
let request = handlerInput.requestEnvelope;
let { apiEndpoint, apiAccessToken } = request.context.System;
let apiResponse = await Util.getConnectedEndpoints(apiEndpoint, apiAccessToken);
if ((apiResponse.endpoints || []).length === 0) {
return handlerInput.responseBuilder
.speak("I couldn't find an EV3 Brick connected to this Echo device. Please check to make sure your EV3 Brick is connected, and try again.")
.getResponse();
}
If there is no EV3 Brick connected to the Echo device, the skill replies with a message. Otherwise, the endpointId is stored as a session attribute. Session attributes allow you to retain data during the skill session:
let endpointId = apiResponse.endpoints[0].endpointId || [];
Util.putSessionAttribute(handlerInput, 'endpointId', endpointId);
Once the endpointId is stored, the skill continues by welcoming the person speaking to the Echo device:
return handlerInput.responseBuilder
.speak("Welcome, you can start issuing move commands")
.reprompt("Awaiting commands")
.getResponse();
MoveIntentHandler
This handler is invoked when you say, “Alexa, tell mindstorms to move up”, or another similar supported utterance. When the skill is invoked using this utterance, the following code gets the value from the Direction slot to construct a command that will be sent to the EV3 Brick to make SMART-PATIENT-BED move.
const request = handlerInput.requestEnvelope;
const direction = Alexa.getSlotValue(request, 'Direction');
// Duration is optional, use default if not available
const duration = Alexa.getSlotValue(request, 'Duration') || "20";
// Get data from session attribute
const attributesManager = handlerInput.attributesManager;
const speed = attributesManager.getSessionAttributes().speed || "10";
const endpointId = attributesManager.getSessionAttributes().endpointId || [];
Once all the parameters are available, a control custom directive is constructed to send to the MindstormsGadget. This custom directive is then parsed on the EV3 Brick using code we’ll review later.
const directive = Util.build(endpointId, 'Custom.Mindstorms.Gadget', 'Control',
{
type: 'move',
direction: direction,
duration: duration,
speed: speed
});
return handlerInput.responseBuilder
.speak(speechOutput).reprompt("awaiting command")
.addDirective(directive).getResponse();
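For reference, the object that Util.build assembles is a CustomInterfaceController.SendDirective. A hedged sketch of its shape (the endpointId value below is a placeholder, and the exact envelope is defined by the Alexa Gadgets Toolkit, not by this mission) is:

```json
{
  "type": "CustomInterfaceController.SendDirective",
  "header": {
    "namespace": "Custom.Mindstorms.Gadget",
    "name": "Control"
  },
  "endpoint": {
    "endpointId": "amzn1.ask.device.EXAMPLE"
  },
  "payload": {
    "type": "move",
    "direction": "up",
    "duration": "20",
    "speed": "10"
  }
}
```

The payload object is entirely yours to define; it is what arrives on the EV3 Brick for parsing.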
SetCommandIntentHandler
This handler is invoked when you say, “Alexa, tell mindstorms to call nurse”, or another similar supported utterance. When the skill is invoked using this utterance, the following code gets the value from the Command slot to construct a command that will be sent to the EV3 Brick to make SMART-PATIENT-BED react to a special command.
let command = Alexa.getSlotValue(handlerInput.requestEnvelope, 'Command');
// endpointId and speed are retrieved from session attributes, as in MoveIntentHandler
let directive = Util.build(endpointId, 'Custom.Mindstorms.Gadget', 'Control',
{
type: 'command',
command: command,
speed: speed
});
return handlerInput.responseBuilder
.speak(`command ${command} activated`).reprompt("awaiting command")
.addDirective(directive).getResponse();
With this skill logic in place, you can test the skill, but you will find that you receive a response indicating that your EV3 Brick has not been detected:
1. Click Deploy in the upper-right of the Alexa Skill Code Editor. Wait for the deployment process to complete.
2. In the top navigation bar of the Alexa Developer Console, click on Test.
3. You can choose to enable the microphone on your computer or not.
4. Switch the testing from Off to Development using the dropdown under the navigation bar.
5. In the panel on the left, you can either type “open mindstorms”, or press the microphone button and say that utterance (this requires having enabled the microphone on your computer).
6. You should get a response along the lines of, “I couldn't find an EV3 Brick connected to this Echo device. Please check to make sure your EV3 Brick is connected, and try again”, which you specified in the LaunchRequestHandler.
Your skill is working! Let’s complete the other side of the experience by reviewing and running the EV3 code to get SMART-PATIENT-BED to react to your skill.
Registration and capabilities
With your skill created, it’s time to review and run the code that can react to the Alexa skill you created. Using the same VS Code workspace, open the smart-patient-bed folder. You should see a familiar series of files: an INI file and a Python file. This will be a common pattern for all the missions you will be working through.
As with the other missions, the first thing you need to do is add your Amazon ID and Alexa Gadget Secret for your registered gadget, and add a new capability to your gadget in the initialization file. The code you need to run on the EV3 Brick is in the smart-patient-bed folder. Within this folder, open the smart-patient-bed.ini file, which should look like:
[GadgetSettings]
amazonId = YOUR_GADGET_AMAZON_ID
alexaGadgetSecret = YOUR_GADGET_SECRET
[GadgetCapabilities]
Custom.Mindstorms.Gadget = 1.0
The above should look familiar. Replace YOUR_GADGET_AMAZON_ID and YOUR_GADGET_SECRET with the ID and Secret tied to the Alexa Gadget you registered in the Amazon Developer Console.
With those changes, your EV3 Brick will be able to connect to your Echo device as an Alexa Gadget, and you’ll see that a different capability has been specified. The Custom.Mindstorms.Gadget interface should look familiar from the skill code you were working with when the commands to send to your EV3 Brick were being constructed. Custom Interfaces are different because YOU get to define what data is sent to and from your gadget, rather than Alexa. The naming convention for adding a Custom Interface is as follows:
Custom.<namespace> = <version>
This can be broken down as:
- Custom. is a constant prefix that is always prepended to the namespace.
- namespace is defined by you. It can be any arbitrary string. In this example, you are using Mindstorms.Gadget as the namespace of the interface. Note that the namespace is not case sensitive.
- version is set to 1.0 since this is the currently supported version of your Custom Interface. It can’t, and shouldn’t need to, be changed.
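To see how this file is laid out, here is a minimal, standalone sketch that parses an INI snippet like smart-patient-bed.ini using only Python's configparser. The real parsing is done for you by the AlexaGadget base class; this only illustrates the file format:

```python
import configparser

# INI content mirroring smart-patient-bed.ini (placeholder credentials).
INI_TEXT = """
[GadgetSettings]
amazonId = YOUR_GADGET_AMAZON_ID
alexaGadgetSecret = YOUR_GADGET_SECRET

[GadgetCapabilities]
Custom.Mindstorms.Gadget = 1.0
"""

config = configparser.ConfigParser()
config.optionxform = str  # preserve the case of capability names
config.read_string(INI_TEXT)

# Each key under [GadgetCapabilities] declares one interface and its version.
capabilities = dict(config["GadgetCapabilities"])
print(capabilities)  # {'Custom.Mindstorms.Gadget': '1.0'}
```

Note the optionxform override: by default configparser lowercases keys, which would mangle the Custom.Mindstorms.Gadget name.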
With the Custom Interface added to your EV3 Brick’s Gadget Capabilities, you can respond to custom directives sent to your EV3 Brick via the Alexa skill you created.
Handling the Custom Directive from the Alexa skill
From the Alexa Skill, you are sending a custom directive, specifically the custom directive with the Custom.Mindstorms.Gadget namespace and Control name. In order for SMART-PATIENT-BED to react, you need to define what happens when data is received via the Custom.Mindstorms.Gadget interface you defined. Within the smart-patient-bed folder, open the smart-patient-bed.py file.
Similar to handling the Alexa wake word and the music tempo directives, handling this directive requires extending the AlexaGadget class and defining a callback method. Within the MindstormsGadget class, a callback method is defined using a convention to match the capability you added to the .ini file:
on_custom_namespace_name(self, directive)
This method definition breaks down as:
- on_custom is a prefix indicating that this is a custom callback method.
- namespace is your defined custom interface namespace.
- name is your defined custom interface name.
- directive is the custom directive with the corresponding namespace and name.
The formatting of the method definition is derived from the dot syntax of the Custom Interface, coupled with the custom directive name; dots are converted to underscores, and all characters are lowercased. So, for the custom interface that maps directly to what was specified in the .ini file, the method looks like:
on_custom_mindstorms_gadget_control(self, directive)
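The name conversion can be sketched in a few lines of Python. This is a standalone illustration of the convention, not part of the mission code:

```python
def callback_name(namespace: str, name: str) -> str:
    """Derive the callback method name for a custom directive:
    join the interface namespace and directive name, replace dots
    with underscores, lowercase everything, and prefix with on_."""
    return "on_" + "{}.{}".format(namespace, name).replace(".", "_").lower()

print(callback_name("Custom.Mindstorms.Gadget", "Control"))
# on_custom_mindstorms_gadget_control
```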
You can learn more about custom directives in the documentation, but the MindstormsGadget class for this mission looks like this:
import json

from agt import AlexaGadget


class MindstormsGadget(AlexaGadget):

    def __init__(self):
        super().__init__()

    def on_custom_mindstorms_gadget_control(self, directive):
        """
        Handles the Custom.Mindstorms.Gadget control directive.
        :param directive: the custom directive with the matching namespace and name
        """
        payload = json.loads(directive.payload.decode("utf-8"))
        print("Control payload: {}".format(payload))
        control_type = payload["type"]
        if control_type == "move":
            # Expected params: [direction, duration, speed]
            self._move(payload["direction"], int(payload["duration"]), int(payload["speed"]))


if __name__ == '__main__':
    MindstormsGadget().main()
This receives the custom directive; you can then parse it and use the parameters to make SMART-PATIENT-BED respond in a variety of ways.
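To see the parsing in isolation, here is a hardware-free sketch that decodes a payload like the one the skill sends. The payload bytes below are a made-up example; the real payload is assembled by Util.build in the skill code:

```python
import json

# Example bytes a Control directive might carry (illustrative only).
raw_payload = b'{"type": "move", "direction": "up", "duration": "20", "speed": "10"}'

# Decode UTF-8 bytes into a dict, as the callback method does.
payload = json.loads(raw_payload.decode("utf-8"))
control_type = payload["type"]

if control_type == "move":
    # Slot values arrive as strings, so numeric parameters are cast.
    direction = payload["direction"]
    duration = int(payload["duration"])
    speed = int(payload["speed"])
    print(direction, duration, speed)  # up 20 10
```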
Implementing the move commands
With the MindstormsGadget configured to receive commands from your skill, you can add the logic to process them. First, let's tackle the move commands. Within smart-patient-bed.py, you will use the ev3dev2.motor API to control the way SMART-PATIENT-BED will move. The movement relies upon the motor classes in the ev3dev API, which you can refer to in the ev3dev Python documentation. The code looks like this:
def _move(self, direction, duration: int, speed: int, is_blocking=False):
    """
    Handles move commands from the directive.
    Right and left movement can under- or over-turn depending on the surface type.
    :param direction: the move direction
    :param duration: the duration in seconds
    :param speed: the speed percentage as an integer
    :param is_blocking: if set, the motor runs until the duration expires before accepting another command
    """
    print("Move command: ({}, {}, {}, {})".format(direction, speed, duration, is_blocking))
    if direction in Direction.UP.value:
        self.bedback.on_for_rotations(SpeedPercent(-1), 0.2)
    if direction in Direction.DOWN.value:
        self.bedback.on_for_rotations(SpeedPercent(1), 0.2)
    if direction in Direction.OPEN.value:
        self.tray.on_for_rotations(SpeedPercent(1), 0.73)
    if direction in Direction.CLOSE.value:
        self.tray.on_for_rotations(SpeedPercent(-1), 0.73)
The Python code above sets up the motors connected to the EV3 Brick, and tells SMART-PATIENT-BED how to move when it receives commands from the Alexa skill, including:
- How the motors move based on a direction: up, down, open, close
You’ll notice how what’s specified in the EV3 Brick Python code maps to the Alexa skill you created earlier.
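The Direction enum referenced in _move lives elsewhere in smart-patient-bed.py and is not shown above. A plausible sketch, in which the exact synonym lists are assumptions, looks like:

```python
from enum import Enum


class Direction(Enum):
    """Maps each movement to the slot values the skill may send.
    The synonym lists here are illustrative assumptions; the real
    lists are defined in smart-patient-bed.py."""
    UP = ['up', 'raise']
    DOWN = ['down', 'lower']
    OPEN = ['open']
    CLOSE = ['close', 'shut']


# Membership tests like the ones in _move:
print('up' in Direction.UP.value)       # True
print('shut' in Direction.CLOSE.value)  # True
```

Using lists as enum values lets a single movement accept several spoken synonyms with a simple `in` check.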
Implementing other control types
The code above handles the move control type. Our skill interaction model also has the command control type. This control type is a one-to-one mapping. For example, the command call nurse from the skill maps directly to the call-nurse functionality in the gadget:
def on_custom_mindstorms_gadget_control(self, directive):
    payload = json.loads(directive.payload.decode("utf-8"))
    print("Control payload: {}".format(payload))
    control_type = payload["type"]
    if control_type == "move":
        # Expected params: [direction, duration, speed]
        self._move(payload["direction"], int(payload["duration"]), int(payload["speed"]))
    if control_type == "command":
        # Expected params: [command]
        self._activate(payload["command"])

def _activate(self, command, speed=50):
    """
    Handles preset commands.
    :param command: the preset command
    :param speed: the speed if applicable
    """
    print("Activate command: ({}, {})".format(command, speed))
    if command in Command.NURSE.value:
        print("Yes nurse")
        ns = Sound()
        ns.play_file('bell1.wav', volume=100, play_type=0)
        self.leds.set_color("LEFT", "GREEN")
        self.leds.set_color("RIGHT", "GREEN")
This code parses the custom directive received from the skill, and maps it to code that defines how SMART-PATIENT-BED should react to the call nurse command.
Test SMART-PATIENT-BED with the Alexa skill
Now that you’ve created your skill, and reviewed the Python code that will cause SMART-PATIENT-BED to react to that skill, you’re ready to test out the experience. You will need to run the Python code before starting the skill:
1. Make sure VS Code is connected to your EV3 Brick. You should see a green dot next to your EV3 Brick’s name in the EV3 DEVICE BROWSER.
2. Copy the smart-patient-bed folder in your computer workspace to your EV3 Brick: hover over the EV3DEV DEVICE BROWSER text and click the Send workspace to device button that appears next to it.
When you click this button, you will see the status of the files being copied in the bottom-right of VS Code.
3. Once the files have copied over to your EV3 Brick, you can run the Python code by navigating to the Python file in the EV3DEV DEVICE BROWSER, right-clicking on it, and selecting Run.
4. Once the program starts, you should see a prompt in the debug console when your EV3 Brick connects to your Echo device.
You will also see a message displayed on your EV3 Brick's screen.
Now comes the fun part! Make some room for SMART-PATIENT-BED, and try the following commands to see how SMART-PATIENT-BED reacts:
“Alexa, open mindstorms”
Alexa should respond acknowledging that the skill session is active and listening for commands. The skill must be in session for the following commands to work.
“Move up”
Alexa should acknowledge the command, and SMART-PATIENT-BED should move the back of the bed up.
“Open tray”
This will make SMART-PATIENT-BED open the food tray for the patient.
“Call nurse”
SMART-PATIENT-BED should ring an alarm to call the nurse.
“Close tray”
SMART-PATIENT-BED should close the food tray.
“Move down”
SMART-PATIENT-BED should move the back of the bed down.