Before joining the Particle team as a Developer Evangelist, I was a Particle (and Spark) customer. One of the things I've always loved about Particle is the fact that they strive to pair great software with great hardware, which drastically lowers the learning curve to getting into physical computing (or making, or IoT, depending on your preferred phrase).
One example is the Cloud Functions feature of the Particle device firmware. Cloud functions provide a clean, straightforward API that developers can use to publish local variables (sensor values, for instance) or to expose local functions that can be called from an app or site (to blink a light or turn on a motor, for instance). Over the years, I've used this feature to monitor environment values in my home office, track temp values when home-brewing, interact with servos, and more.
To install the skill and try it out for yourself, click here, or say "Alexa, install Particle Cloud" to your nearest device.
## The Goal

With this project, I had two goals:
- To explore the world of developing voice-controlled apps, specifically the Amazon Alexa Skills platform.
- To show how easy it is to use the Particle JavaScript SDK to access the Particle Cloud and interact with your devices.
I wrote a bit about this project on the Particle Blog. I also included a demo video below. The rest of this post is a deep-dive into how I built the skill.
## Demo Video

## Setting up the Skill

To get started, I visited the Alexa Skills Kit portal and clicked the "Start a Skill" button. You'll be asked to give the skill a name, and then to choose a model for your skill. There are three pre-built options (see below), and a Custom option for defining your own model from scratch, which is what I chose. For more info on what the skill models provide, check out this article.
After choosing a model, you'll be dropped into the Build portal. This is where you define the Alexa-side of your skill, from the invocation phrase to the interaction model and, finally, where Alexa should route invocation requests for you to handle.
You'll start by defining your invocation name, which should be a short 2+ word phrase that users will utter to invoke your skill. Mine is "particle cloud."
Once you've defined your invocation name, it's time to define your interaction model. The purpose of this step is to define the phrases (utterances in Alexa terminology) your users will speak, any essential values you need to capture (called slots) and map those to intents. Those intents will be critical once we define our backend functionality.
For my skill, I defined 11 intents (shown below): 3 built-in intents and 8 custom intents to field requests for online Particle devices, set and get active device values, call functions, and get variables from the Particle Device Cloud.
When defining intents, you can also provide Alexa with one or more "sample utterances," or phrases that a user can say that you want Alexa to use to trigger your intent. This doesn't need to cover every possible combination of words and phrases a user can utter, just enough "training data" to result in a better UX for your skill.
A few of my intents are meant to be called without "variables" (slots), but several require the user to utter the name of a device, variable or function, and I needed to capture those in order for the skill to be useful. When you define slots, you assign a slot type so that Alexa can better capture user utterances. For known sets of information--like cities, days of the week or numbers--Amazon provides a number of built-in slot types that you can leverage.
My skill, however, works with the names of Particle devices, functions, and variables, all of which are open-ended and unknown. For these types of unknown sets, Amazon recommends the `AMAZON.SearchQuery` slot type, which I used throughout the skill.
Note: One drawback of the SearchQuery slot type is that you can only have one slot of this type in a sample utterance. For my skill, this means that I cannot capture device names, functions, arguments, and/or variables in the same utterance. For V1 of my skill, I implemented intents for saving an active device as a work-around. In the future, I plan to solve this by introducing dialog interactions.
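To make this concrete, here's a rough sketch of what one such intent might look like in the interaction model's JSON. The intent and slot names here are illustrative, not the exact ones from my skill:

```json
{
  "name": "GetVariableIntent",
  "slots": [
    {
      "name": "variableName",
      "type": "AMAZON.SearchQuery"
    }
  ],
  "samples": [
    "get the value of {variableName}",
    "what is {variableName}",
    "read the variable {variableName}"
  ]
}
```

Note how each sample utterance contains exactly one `SearchQuery` slot, per the constraint mentioned above.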
With your interactions defined, you're ready to configure your backend logic for the skill.
## Going Serverless with AWS Lambda

Now comes the fun part: the codez, and the blinky lights, and devices, and whatnot.
When Alexa invokes your skill, and maps a user utterance to an intent with zero, one or more slots, it needs a place to hand off that intent for you to process. And while you can host the backend logic for your skill pretty much anywhere, the quickest solution is to use a "Serverless" approach, like Amazon's AWS Lambda, Azure Functions or Google Cloud Functions. I chose Lambda because Amazon has some built-in triggers that make it easy to configure your skill, but any Serverless approach will work.
If you go through Amazon's Alexa tutorials, you'll end up in the Lambda console for configuring and setting up your function. This is fine for a basic use case or tutorial, but I knew that I wanted to develop my Lambda function locally and deploy quickly, so I opted to use the Serverless framework to help me. Serverless is an awesome toolkit for managing serverless projects, and I highly recommend it. It also has great docs and built-in support for configuring Lambda functions for Alexa skills. Many of these frameworks also provide multi-language support, meaning you can author your skill handlers in JavaScript, Python, or Java. I used JavaScript for mine.
Configuring your Serverless project is as simple as creating a `serverless.yml` file and populating a few values. Here's what mine looks like:
```yaml
service: MyParticleAlexaSkill-SRV
frameworkVersion: ">=1.4.0 <2.0.0"

provider:
  name: aws
  runtime: nodejs6.10

custom:
  defaultStage: dev
  profiles:
    dev: MyParticleAlexaSkill-SRV-profile-dev
    prod: MyParticleAlexaSkill-SRV-profile-prod

package:
  include:
    - particleUtils.js
    - skillUtils.js
    - node_modules/**

functions:
  handler:
    handler: index.handler
    role: arn:aws:iam::xxxxxxxxxx:role/lambda-dynamodb-execution-role
    events:
      - alexaSkill: amzn1.ask.skill.xxxx-xxxx-xxxx-xxxx-xxxxxxx
```
You'll need to configure AWS CLI access to your Amazon account, but once that's done, you can deploy your new service with a single command:
```shell
$ serverless deploy
```
## Mapping Intents to Handlers

Of course, at this point, your Lambda function is nothing more than an empty project. You need some code! Specifically, you need code that responds to user intents from Alexa, calls the Particle Device Cloud, and passes a result back to the user via Alexa's friendly voice.
The structure of your skill handler is pretty straightforward. For a Node-based handler, you need to export a handler callback that ties your Alexa App ID to the service and registers the handlers that map to your intents.
```javascript
const Alexa = require('alexa-sdk');

exports.handler = function(event, context) {
  const APP_ID = 'amzn1.ask.skill.xxxx-xxxx-xxxx-xxxx-xxxxxxx';

  const alexa = Alexa.handler(event, context);
  alexa.appId = APP_ID;
  alexa.registerHandlers(handlers);
  alexa.execute();
};
```
Handlers can be created as an object with functions as object properties, like so:
```javascript
const handlers = {
  LaunchRequest: function() {
    this.response
      .speak(
        'Welcome to the Particle cloud. You can ask me about online devices, list functions and variables, set variables and call cloud functions.'
      )
      .listen(
        'Give me a command or ask "what can Particle do" to get started.'
      );
    this.emit(':responseReady');
  },
  NumberOfDevicesIntent: function() {
    // Intent logic here
  },
  ListDevicesIntent: function() {
    // Intent logic here
  }
};
```
You'll notice that the name of each intent handler maps exactly to an intent you defined when you set up your skill. This is how Alexa maps the intent of a user to your Lambda functionality.
The `LaunchRequest` intent, which is a standard intent you can include to respond when a user says "Alexa, open {skill name}," shows how to pass control from your logic back to Alexa. Using API functions like `speak()`, `listen()`, and `emit()`, you instruct Alexa how to respond to the user, collect information, and continue the conversation.
With your skill handlers configured, you can talk to the Particle Device Cloud! Particle provides a number of SDKs for interacting with devices across platforms. For this skill, you'll use the JavaScript SDK, which you can install via `npm`:

```shell
$ npm install particle-api-js
```
In my `serverless.yml` file above, you'll notice that I specifically include my `node_modules` directory so that Serverless knows to include those dependencies when it creates a zip for deployment.
To make my skill more reusable and easier to maintain, I placed all of my interactions with the Particle JS API into a module, which you can view in the GitHub source. For example, here's a snippet of the `listDevices` and `getDevice` functions:
```javascript
const listDevices = token => {
  return new Promise((resolve, reject) => {
    particle
      .listDevices({ auth: token })
      .then(devices => resolve(devices), err => reject(err));
  });
};

const getDevice = (token, id) => {
  return new Promise((resolve, reject) => {
    particle
      .getDevice({ deviceId: id, auth: token })
      .then(device => resolve(device))
      .catch(err => reject(err));
  });
};
```
The Particle JS SDK is entirely promise-based, which made it easy for me to use promises in the Skill handlers themselves.
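Beyond listing devices, the skill also needs to call cloud functions and read variables. Here's a rough sketch of thin wrappers over the SDK's `callFunction` and `getVariable` methods; the wrapper shape (passing the `particle` instance in, and these names) is my own choice for illustration, not necessarily how the skill's source organizes it:

```javascript
// Thin wrappers over the Particle JS SDK. `particle` is an instance of
// require('particle-api-js'); passing it in as a parameter keeps these
// wrappers easy to test in isolation.
const callFunction = (particle, token, deviceId, name, argument) =>
  particle.callFunction({ deviceId, name, argument, auth: token });

const getVariable = (particle, token, deviceId, name) =>
  particle.getVariable({ deviceId, name, auth: token });
```

Both return the SDK's promise directly, so they chain cleanly into the skill handlers just like `listDevices` above.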
```javascript
const handlers = {
  /* Other handlers */
  NumberOfDevicesIntent: function() {
    let response;
    const token = this.event.session.user.accessToken;

    if (token === undefined) {
      return emitAccountLinkResponse(this);
    }

    particleApiUtils
      .getOnlineDevices(token)
      .then(devices => {
        const onlineDevicesCount = devices.length;

        response = `You currently have ${
          onlineDevicesCount > 1
            ? `${onlineDevicesCount} devices`
            : `${onlineDevicesCount} device`
        } online.`;
      })
      .catch(err => {
        response = `This request has failed. Please try again. ${err}`;
      })
      .finally(() => {
        emitResponse(this, response);
      });
  }
};
```
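The `emitResponse` and `emitAccountLinkResponse` helpers used above aren't shown in this post; here's a rough sketch of how they might look using the alexa-sdk response helpers. The real implementations are in the GitHub source, so treat these as illustrative:

```javascript
// Sketch of the response helpers referenced above. `context` is the
// handler's `this`, provided by alexa-sdk.
const emitResponse = (context, response) => {
  // Speak the response and end the session.
  context.response.speak(response);
  context.emit(':responseReady');
};

const emitAccountLinkResponse = context => {
  // Prompt the user to link their Particle account via a card in the Alexa app.
  context.emit(
    ':tellWithLinkAccountCard',
    'Please link your Particle account to use this skill. I have sent instructions to your Alexa app.'
  );
};
```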
## Setting up OAuth for Account Linking

If you've got a keen eye, perhaps you noticed the mention of an `accessToken` in the source above. As with many APIs, the Particle JS SDK needs a user-specific access token in order to fetch your devices, work with functions and variables, and more.

In order for your Alexa skill to talk to your devices, you need to set up a "handshake" between your Amazon account and your Particle account. Thankfully, both the Particle Device Cloud and Amazon Alexa support OAuth.
On the Particle end, you'll need to create an OAuth client using either the console or by command-line calls to the Device Cloud API. Once you've created a client, head back to the Alexa Skills developer portal and enter your client details to enable account linking.
Now, when a user installs your skill, they'll be directed to link their account from the Alexa app on their phone, or at alexa.amazon.com.
Both Particle and Amazon provide docs on account linking, so check them out if you're looking to add account linking to your own skills.
## Testing, Deployment & Certification

You're nearly there! Once you've written your handlers and deployed your Lambda function, you're ready to test. If you have an Echo device nearby that's tied to your developer account, you can test and invoke the skill on the spot!
Even cooler, however, is the new Alexa Simulator available from the "Test" tab in the skills portal. With the simulator, you can type (or use your mic to speak) commands, view the JSON input that Alexa sends to your handlers, and see the response from your Lambda function. It's super-handy for debugging those cases where Alexa doesn't seem to be getting what you want.
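For reference, the JSON input the simulator shows looks roughly like this, heavily trimmed and with placeholder values (the real payload contains many more session and context fields):

```json
{
  "version": "1.0",
  "session": {
    "new": true,
    "user": { "accessToken": "xxxx-xxxx" }
  },
  "request": {
    "type": "IntentRequest",
    "intent": {
      "name": "NumberOfDevicesIntent",
      "slots": {}
    }
  }
}
```

This is also where you can confirm that the `accessToken` from account linking is actually arriving in `session.user`.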
Once I've deployed my skill, it's available for me to test with an Echo against my real Particle devices. Now I can talk to my devices, and they talk back!
## Disclaimer

This skill is not an officially-supported Particle product. It's something I built to help me with my own Particle-based projects and wanted to share with the community. I make no guarantees that this skill will work with all of your devices or for everything you want to do with it. I'd be happy to help with your feedback or questions, but please don't ask the Particle support team for support, or they'll start sending me angry emojis on Slack.
## Credits

In the process of developing this skill, I had a lot of help from the Particle developer community. The following awesome folks took the time to beta test the skill and patiently worked with me as I hammered out some of the bone-headed bugs I inserted into the code (Particle Forum handles were used in some cases to respect the privacy of community members):
- Clay Cooper
- Lagartus
- Pescatore
- Nicolai Ritz
- Cameron Turner
## What's Next?

Why, V1+ of course! I already have a list of the features I'd like to add to the skill, both to expand its functionality (calling cloud functions with arguments, setting variables, etc.) and to make interactions a bit easier (back-and-forth dialog, etc.). If you find something missing as you use the skill, go ahead and add an issue in the GitHub repo. I'd love your help making this more useful for Particle developers everywhere.