I love oatmeal, especially the steel-cut kind. But cooking it requires constant attention: the stirring and watching over can be fairly annoying, and if we don't stir once in a while some of the oats end up stuck to the bottom, which makes the pot difficult to wash.
So over the holidays I decided to build this cooking bot, which automates some of the process. Luckily, we can modify some existing bots to do the job. In this project we will build and modify the pencil bot and a dropper to do it.
Step 1: Build out the pencil bot
The pencil bot instructions come with the EV3 educational set; we will build it first so it can move a spoon instead of a pencil.
Following the build instructions, we should be able to build this in about an hour or two.
The problem with the pencil bot is that it only has 2 motors, which lets it move only along the y and z axes (z being up and down). In order to stir in a circle we need the x axis as well, so we add an additional wheeled cart with just 1 motor (2 motors are already in use) to handle the x axis. The Single Motor Cart's build instructions are in the attachments section.
Remove the bottom and install this cart; we should then be able to move along all 3 axes. When done, we should have something that looks like this.
Next we need the loader for the oatmeal, raisins, and other solid ingredients. There is a loader on LEGO Instructions, and we can easily modify its loading dock to fit our needs. The build instructions for the loader are in the attachments.
Following the instructions, we should have something like below.
Modify the top and we have a new loading dock; together with the pencil hand we should have the following, and we can move on to step 2. At this point we have used all 4 motor ports.
The full build instructions for all 3 of them are available in the attachments section.
Step 2: Set up the ev3dev build environment
We need to set up our ev3dev environment. AlexaGadgets has already written a guide located at
https://www.hackster.io/alexagadgets/lego-mindstorms-voice-challenge-setup-17300f
Please go through the guide so that we can deploy Python code. The ev3dev image can be found at https://www.ev3dev.org/downloads/
Step 3: Pencil bot's stirring motion
Before we integrate complex missions, we first need to get the stirring right. Once the environment is set up, we can start programming the pencil bot's stirring motion. First we will test the z axis (lowering and raising the spoon).
from ev3dev2.motor import OUTPUT_A, OUTPUT_B, OUTPUT_C, LargeMotor, SpeedPercent
x_axis = LargeMotor(OUTPUT_C)
y_axis = LargeMotor(OUTPUT_A)
z_axis = LargeMotor(OUTPUT_B)
# Lower the spoon, then raise it back up
z_axis.on_for_degrees(SpeedPercent(10), -80)
z_axis.on_for_degrees(SpeedPercent(10), 80)
This should yield the following:
The ideal motion draws over a bigger area, which involves moving Motor C and Motor A at the same time; truly simultaneous motion would need 2 separate threads. A simple sequential version that traces a rectangle looks like the following.
from ev3dev2.motor import OUTPUT_A, OUTPUT_B, OUTPUT_C, LargeMotor, SpeedPercent
import time
x_axis = LargeMotor(OUTPUT_C)
y_axis = LargeMotor(OUTPUT_A)
z_axis = LargeMotor(OUTPUT_B)
# One stirring pass: trace a rectangle with the y and x motors
y_axis.on_for_seconds(SpeedPercent(-5), 1.5)
x_axis.on_for_seconds(SpeedPercent(-10), 1.5)
y_axis.on_for_seconds(SpeedPercent(5), 1.5)
x_axis.on_for_seconds(SpeedPercent(10), 1.5)
time.sleep(5)
When that's done, we should be able to stir; we can always calibrate the pan arm against the pen itself.
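To actually move both motors at the same time, as the ideal circular motion would require, each axis can run in its own thread. The sketch below only illustrates that pattern; it uses a stand-in `FakeMotor` class (my own invention, not part of ev3dev2) so it can run off-robot. On the EV3 you would pass the real `LargeMotor` instances and `SpeedPercent` values instead.

```python
import threading
import time

class FakeMotor:
    """Stand-in for ev3dev2's LargeMotor, recording calls for illustration."""
    def __init__(self, name):
        self.name = name
        self.calls = []

    def on_for_seconds(self, speed, seconds):
        self.calls.append((speed, seconds))
        time.sleep(0.01)  # pretend the motor takes time to move

def stir_pass(x_axis, y_axis):
    # Run both axes simultaneously by giving each its own thread
    tx = threading.Thread(target=x_axis.on_for_seconds, args=(-10, 1.5))
    ty = threading.Thread(target=y_axis.on_for_seconds, args=(-5, 1.5))
    tx.start(); ty.start()
    tx.join(); ty.join()

x_axis = FakeMotor("x")
y_axis = FakeMotor("y")
stir_pass(x_axis, y_axis)
print(x_axis.calls, y_axis.calls)
```

On the real robot the only change is constructing the threads with the actual motor objects; `on_for_seconds` blocks until the move finishes, which is why each motor needs its own thread.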
Next is the loader. The loader needs to sit higher if we are to pour ingredients into it; then we move the medium motor, which takes a moment to spin. The original design uses the touch sensor for activation, but we modified it so the loader knows when it is in its default position.
from ev3dev2.motor import OUTPUT_A, OUTPUT_B, OUTPUT_C, OUTPUT_D, MediumMotor, LargeMotor, SpeedPercent
from ev3dev2.sensor.lego import TouchSensor
import time
x_axis = LargeMotor(OUTPUT_C)
y_axis = LargeMotor(OUTPUT_A)
z_axis = LargeMotor(OUTPUT_B)
dropper = MediumMotor(OUTPUT_D)
dropper_load = TouchSensor()
# Open the dropper, then close it until the touch sensor is pressed again
dropper.on_for_seconds(SpeedPercent(-20), 10)
while dropper_load.is_released:
    dropper.on_for_seconds(SpeedPercent(20), 1)
This way we can see the dropper in action.
Now that the major part is done, we need a water temperature trigger; ideally we would use the EV3 temperature sensor. The code would be something like this.
from ev3dev2.sensor.lego import Sensor
from ev3dev2.sensor import INPUT_1
from sys import stderr
from time import sleep

temp = Sensor(INPUT_1)
temp.mode = 'TEMP'

count = 0
while True:
    count = count + 1
    # value() returns the current reading; read it fresh on each pass
    print(temp.value(), end=' ', file=stderr)
    sleep(2)
    if count == 10:
        break
Note: I don't have this sensor on hand, as it got lost in the mail during the holiday season, so my friend Peter did a workaround using his MindSensors adapter to achieve a similar effect; we will need to use I2C to read the temperature that way. In the demo video I will be using that instead.
MindSensors makes a Grove adapter that can do this, connecting the EV3 to Grove sensors. As of this writing, LEGO does not yet have driver support for MindSensors, and there are essentially no resources on Python interaction with MindSensors on the ev3dev platform.
We first need to install smbus2 in our shell; the original smbus does not have the i2c message option.
easy_install3 smbus2
ev3dev does not have a Grove driver yet, so the Sensor class does not support the Grove adapter; we will have to talk to I2C directly to receive the data, and to do that we have to go through the LegoPort.
We do the following to find the port:
robot@ev3dev:~$ ls /dev/i2c-in*
/dev/i2c-in1 /dev/i2c-in2 /dev/i2c-in3 /dev/i2c-in4
robot@ev3dev:~$ udevadm info -q path -n /dev/i2c-in1
/devices/platform/i2c-legoev3.3/i2c-3/i2c-dev/i2c-3
The first port maps to i2c-3; the 3 is the bus number.
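If you want to resolve the bus number programmatically rather than reading it off the udevadm output by eye, the device path can be parsed. This is a small convenience sketch of my own, not part of the original build; the helper names are mine.

```python
import re
import subprocess

def bus_number_from_path(dev_path):
    """Extract the I2C bus number from a udevadm device path,
    e.g. '/devices/platform/i2c-legoev3.3/i2c-3/i2c-dev/i2c-3' -> 3."""
    match = re.search(r'/i2c-dev/i2c-(\d+)$', dev_path)
    if match is None:
        raise ValueError("not an i2c-dev path: " + dev_path)
    return int(match.group(1))

def bus_number_for(port_node):
    """Ask udevadm for the device path of e.g. '/dev/i2c-in1' (EV3 only)."""
    path = subprocess.check_output(
        ['udevadm', 'info', '-q', 'path', '-n', port_node]).decode().strip()
    return bus_number_from_path(path)

# Parsing the path from the shell transcript above yields bus 3
print(bus_number_from_path('/devices/platform/i2c-legoev3.3/i2c-3/i2c-dev/i2c-3'))
```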
robot@ev3dev:~$ sudo i2cdump 3 0x21
You should see:
We then need to set the flag at register 0x42 to 0x01; this tells the MindSensors adapter that we want to read the analog input from the Grove sensor itself. We can do this by writing temperaturebus.write_byte_data(I2C_ADDRESS, 0x42, 0x01). After that we can read the data from registers 0x44 and 0x45; combining them gives the raw result from the sensor. We also need a formula that converts the raw reading into a temperature. The spec for the sensor is at http://wiki.seeedstudio.com/Grove-Temperature_Sensor_V1.2/. The code follows.
from smbus2 import SMBus
import time
import math

I2C_ADDRESS = 0x21  # the default I2C address of the adapter
temperaturebus = SMBus(4)  # bus number found via udevadm; yours may differ
temperaturebus.write_byte_data(I2C_ADDRESS, 0x42, 0x01)
B = 4250     # B value of the thermistor
R0 = 100000  # nominal resistance at 25 C

while True:
    part1 = temperaturebus.read_byte_data(I2C_ADDRESS, 0x44)
    part2 = temperaturebus.read_byte_data(I2C_ADDRESS, 0x45)
    result = (part1 << 2) + part2
    print('result')
    print(result)
    # Convert the 10-bit ADC reading to a resistance, then to Celsius
    R = 1023.0 / result - 1.0
    R = R0 * R
    temperature = 1.0 / (math.log(R / R0) / B + 1 / 298.15) - 273.15
    print('temperature')
    print(temperature)
    time.sleep(.5)
This shows the room temperature rather than the water temperature, so we will adjust the demo accordingly, but it's a workaround for not having the LEGO temperature sensor on hand.
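The thermistor conversion in the loop above can be pulled out into a standalone function and sanity-checked off-robot. At a raw reading of 511.5 the ratio R/R0 is exactly 1, so the B-parameter equation should give exactly 25 °C (298.15 K). The function below is just a sketch of that math, using the same B and R0 constants as the loop; the function name is mine.

```python
import math

B = 4250     # B value of the thermistor
R0 = 100000  # nominal resistance at 25 C (298.15 K)

def to_celsius(result):
    """Convert the 10-bit ADC reading into degrees Celsius
    using the thermistor's B-parameter equation."""
    R = R0 * (1023.0 / result - 1.0)
    return 1.0 / (math.log(R / R0) / B + 1 / 298.15) - 273.15

# At result = 511.5 the resistance equals R0, so this prints 25.0
print(round(to_celsius(511.5), 6))
```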
Step 5: Set up the Alexa server side
For this step, please follow all of the missions listed in Alexa's MINDSTORMS Voice Challenge, as it is important to set up the entire server-side Alexa skill and Alexa Voice Service. We will be modifying mission 04; you can see the full guide at
https://www.hackster.io/alexagadgets
When you finish Mission 01 and Mission 03, you should have your amazonId and alexaGadgetSecret, which we place in oatmeal.ini. We will be using Custom.Mindstorms.Gadget = 1.0.
[GadgetSettings]
amazonId = YOUR_GADGET_AMAZON_ID
alexaGadgetSecret = YOUR_GADGET_SECRET
[GadgetCapabilities]
Custom.Mindstorms.Gadget = 1.0
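The AlexaGadget base class reads this .ini file on startup. If you want to sanity-check the file yourself before running the gadget, a quick illustrative read with configparser looks like this; the contents are the placeholders above, inlined here so the snippet is self-contained.

```python
import configparser

# Sample contents of oatmeal.ini (placeholders stand in for your real IDs)
INI_TEXT = """
[GadgetSettings]
amazonId = YOUR_GADGET_AMAZON_ID
alexaGadgetSecret = YOUR_GADGET_SECRET

[GadgetCapabilities]
Custom.Mindstorms.Gadget = 1.0
"""

config = configparser.ConfigParser()
config.read_string(INI_TEXT)
# configparser lookups are case-insensitive for option names
print(config['GadgetSettings']['amazonId'])
print(config['GadgetCapabilities']['Custom.Mindstorms.Gadget'])
```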
We should also have the entire Amazon Alexa lambda set up from Mission 03; we will start modifying the application in the next step.
Step 6: Write the lambda app
Now that we have the application from mission 04, we need to make some modifications. First, we change the model.json file: the invocation name becomes "invocationName": "oat meal bot", and we will focus on CookIntent, StopCookIntent, YesNoIntent, and ReadyIntent.
{
"interactionModel": {
"languageModel": {
"invocationName": "oat meal bot",
"intents": [
{
"name": "AMAZON.CancelIntent",
"samples": []
},
{
"name": "AMAZON.HelpIntent",
"samples": []
},
{
"name": "AMAZON.StopIntent",
"samples": []
},
{
"name": "AMAZON.NavigateHomeIntent",
"samples": []
},
{
"name": "CookIntent",
"slots": [],
"samples": [
"Start cooking",
"Cook me oat meal",
"Cook me food"
]
},
{
"name": "StopCookIntent",
"slots": [],
"samples": [
"Stop cooking",
"Stop cook"
]
},
{
"name": "ReadyIntent",
"slots": [],
"samples": [
"Is it ready yet",
"Is the food ready",
"I am hungry"
]
},
{
"name": "YesNoIntent",
"slots": [
{
"name": "YesNo",
"type": "yesNoType"
}
],
"samples": [
"{YesNo}"
]
}
],
"types": [
{
"name": "yesNoType",
"values": [
{
"name": {
"value": "yes",
"synonyms": [
"yep",
"yeah",
"I do",
"yes please",
"you know it",
"I would"
]
}
},
{
"name": {
"value": "no",
"synonyms": [
"no",
"no thanks",
"nope",
"I do not",
"no thank you",
"don't"
]
}
}
]
}
]
}
}
}
Inside the skill builder from Mission 04, we need to swap their IntentHandler with CookIntentHandler, StopCookIntentHandler, ReadyIntentHandler, and YesNoIntentHandler.
// The SkillBuilder acts as the entry point for your skill, routing all request and response
// payloads to the handlers above. Make sure any new handlers or interceptors you've
// defined are included below. The order matters - they're processed top to bottom.
exports.handler = Alexa.SkillBuilders.custom()
.addRequestHandlers(
LaunchRequestHandler,
CookIntentHandler,
StopCookIntentHandler,
ReadyIntentHandler,
YesNoIntentHandler,
EventsReceivedRequestHandler,
ExpiredRequestHandler,
Common.HelpIntentHandler,
Common.CancelAndStopIntentHandler,
Common.SessionEndedRequestHandler,
Common.IntentReflectorHandler, // make sure IntentReflectorHandler is last so it doesn't override your custom intent handlers
)
.addRequestInterceptors(Common.RequestInterceptor)
.addErrorHandlers(
Common.ErrorHandler,
)
.lambda();
CookIntentHandler handles starting to cook: a directive is sent to the EV3, which then waits for the water to boil. We will handle most of the responses via EventsReceivedRequestHandler.
// Construct and send a custom directive to the connected gadget with data from
// the CookIntentHandler.
const CookIntentHandler = {
canHandle(handlerInput) {
return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
&& Alexa.getIntentName(handlerInput.requestEnvelope) === 'CookIntent';
},
handle: function (handlerInput) {
const attributesManager = handlerInput.attributesManager;
let endpointId = attributesManager.getSessionAttributes().endpointId || [];
// Construct the directive with the payload containing the move parameters
let directive = Util.build(endpointId, NAMESPACE, NAME_CONTROL,
{
type: "cook"
});
let speechOutput = "Starting to cook oat meal, waiting for water to boil.";
return handlerInput.responseBuilder
.speak(speechOutput + BG_MUSIC)
.addDirective(directive)
.getResponse();
}
};
YesNoIntent handles the answer back: if yes, it acts the same as the cook intent; if no, we simply give instructions.
// Construct and send a custom directive to the connected gadget with data from
// the YesNoIntentHandler.
const YesNoIntentHandler = {
canHandle(handlerInput) {
return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
&& Alexa.getIntentName(handlerInput.requestEnvelope) === 'YesNoIntent';
},
handle: function (handlerInput) {
let yesno = Alexa.getSlotValue(handlerInput.requestEnvelope, 'YesNo');
if (!yesno) {
return handlerInput.responseBuilder
.speak("Can you repeat that?")
.withShouldEndSession(false)
.getResponse();
}
const attributesManager = handlerInput.attributesManager;
let endpointId = attributesManager.getSessionAttributes().endpointId || [];
if(yesno.includes("yes"))
{
// Construct the directive with the payload containing the move parameters
let directive = Util.build(endpointId, NAMESPACE, NAME_CONTROL,
{
type: "cook"
});
let speechOutput = "Starting to cook oat meal, waiting for water to boil.";
return handlerInput.responseBuilder
.speak(speechOutput + BG_MUSIC)
.addDirective(directive)
.getResponse();
}
else
{
let speechOutput = 'Sure, if you want to cook oat meal you can simply say Alexa, start cooking';
return handlerInput.responseBuilder
.speak(speechOutput + BG_MUSIC)
.getResponse();
}
}
};
StopCookIntentHandler sends stop to the ev3 to stop cooking right away.
// Construct and send a custom directive to the connected gadget with data from
// the StopCookIntentHandler.
const StopCookIntentHandler = {
canHandle(handlerInput) {
return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'StopCookIntent';
},
handle: function (handlerInput) {
const attributesManager = handlerInput.attributesManager;
let endpointId = attributesManager.getSessionAttributes().endpointId || [];
// Construct the directive with the payload containing the move parameters
let directive = Util.build(endpointId, NAMESPACE, NAME_CONTROL,
{
type: "stop"
});
let speechOutput = "Stopping the cooking, please turn off the stove";
return handlerInput.responseBuilder
.speak(speechOutput + BG_MUSIC)
.addDirective(directive)
.getResponse();
}
};
ReadyIntentHandler simply sends a status request to the ev3dev and gets back the instructions from EventsReceivedRequestHandler.
// Construct and send a custom directive to the connected gadget with data from
// the ReadyIntentHandler.
const ReadyIntentHandler = {
canHandle(handlerInput) {
return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
&& Alexa.getIntentName(handlerInput.requestEnvelope) === 'ReadyIntent';
},
handle: function (handlerInput) {
const attributesManager = handlerInput.attributesManager;
let endpointId = attributesManager.getSessionAttributes().endpointId || [];
// Construct the directive with the payload containing the move parameters
let directive = Util.build(endpointId, NAMESPACE, NAME_CONTROL,
{
type: "ready"
});
let speechOutput = "Checking status";
return handlerInput.responseBuilder
.speak(speechOutput + BG_MUSIC)
.addDirective(directive)
.getResponse();
}
};
EventsReceivedRequestHandler speaks out the messages coming from the EV3. Everything should be stored in payload.speech; if the custom event's name is Start, we add .withShouldEndSession(false) so we can answer the question that comes in.
const EventsReceivedRequestHandler = {
// Checks for a valid token and endpoint.
canHandle(handlerInput) {
let { request } = handlerInput.requestEnvelope;
console.log('Request type: ' + Alexa.getRequestType(handlerInput.requestEnvelope));
if (request.type !== 'CustomInterfaceController.EventsReceived') return false;
const attributesManager = handlerInput.attributesManager;
let sessionAttributes = attributesManager.getSessionAttributes();
let customEvent = request.events[0];
// Validate event token
if (sessionAttributes.token !== request.token) {
console.log("Event token doesn't match. Ignoring this event");
return false;
}
// Validate endpoint
let requestEndpoint = customEvent.endpoint.endpointId;
if (requestEndpoint !== sessionAttributes.endpointId) {
console.log("Event endpoint id doesn't match. Ignoring this event");
return false;
}
return true;
},
handle(handlerInput) {
console.log("== Received Custom Event ==");
let customEvent = handlerInput.requestEnvelope.request.events[0];
let payload = customEvent.payload;
let name = customEvent.header.name;
let speechOutput;
if (payload.speech) {
speechOutput = payload.speech;
}
if (name === 'Start') {
return handlerInput.responseBuilder
.speak(speechOutput, "REPLACE_ALL")
.withShouldEndSession(false)
.getResponse();
}
return handlerInput.responseBuilder
.speak(speechOutput + BG_MUSIC, "REPLACE_ALL")
.getResponse();
}
};
When this is done, we can deploy it as we did in the missions; the full index.js is in the attachments section.
Now that we have Alexa hooked up, we will write oatmeal.py based on mission-04, with the additional MindSensors driver.
import os
import sys
import time
import datetime
import logging
import json
import random
import threading
import math
from sys import stderr
from os import system
from enum import Enum
from smbus2 import SMBus, i2c_msg
from agt import AlexaGadget
from ev3dev2.led import Leds
from ev3dev2.sound import Sound
from ev3dev2.sensor.lego import TouchSensor
from ev3dev2.motor import OUTPUT_A, OUTPUT_B, OUTPUT_C, OUTPUT_D, MediumMotor, LargeMotor, SpeedPercent
from ev3dev2.sensor.lego import Sensor
from ev3dev2.port import LegoPort
from ev3dev2.sensor import INPUT_1
In our __init__ we set up pretty much everything discussed in the earlier steps: the 4 motors, the touch sensor, the SMBus for the temperature sensor, and we start both the cooking thread and the stirring thread.
def __init__(self):
    """
    Performs Alexa Gadget initialization routines and ev3dev resource allocation.
    """
    super().__init__()
    self.temp = Sensor(INPUT_1)
    self.temp.mode = 'TEMP'
    # Robot state
    self.isCooking = False
    self.isActivated = False
    self.sound = Sound()
    self.leds = Leds()
    self.dropper_load = TouchSensor()
    self.current_time = datetime.datetime.now()
    self.stop_time = datetime.datetime.now()
    # Motors
    self.x_axis = LargeMotor(OUTPUT_C)
    self.y_axis = LargeMotor(OUTPUT_A)
    self.z_axis = LargeMotor(OUTPUT_B)
    self.dropper = MediumMotor(OUTPUT_D)
    # Start threads
    threading.Thread(target=self._cooking_thread, daemon=True).start()
    threading.Thread(target=self._stiring_thread, daemon=True).start()
In on_custom_mindstorms_gadget_control we parse the payload and dispatch it to the handlers.
def on_custom_mindstorms_gadget_control(self, directive):
    """
    Handles the Custom.Mindstorms.Gadget control directive.
    :param directive: the custom directive with the matching namespace and name
    """
    try:
        payload = json.loads(directive.payload.decode("utf-8"))
        print("Control payload: {}".format(payload), file=sys.stderr)
        control_type = payload["type"]
        if control_type == "cook":
            self._cook_handler()
        elif control_type == "stop":
            self._stop_handler()
        elif control_type == "ready":
            self._ready_handler()
    except KeyError:
        print("Missing expected parameters: {}".format(directive), file=sys.stderr)
In the cook handler, we simply set the isActivated flag to true; the cooking thread handles the rest.
def _cook_handler(self):
    self.isActivated = True
In the stop handler, we first check whether it is already cooking or activated; if so, we raise the z axis and then set both the activated and cooking flags to false.
def _stop_handler(self):
    if self.isCooking == True or self.isActivated == True:
        # raise the spoon out of the pan
        self.z_axis.on_for_degrees(SpeedPercent(10), 80)
        self.isActivated = False
        self.isCooking = False
The ready handler drives the Alexa dialog through the different scenarios: not activated and not cooking, activated but not yet cooking, and cooking in progress. Notice we send START in the first scenario to trigger the follow-up question from Alexa.
def _ready_handler(self):
    if self.isActivated == False and self.isCooking == False:
        self._send_event(EventName.START, {'speech': 'cooking has not started yet, would you like to start cooking?'})
    elif self.isActivated == True and self.isCooking == False:
        self._send_event(EventName.READY, {'speech': 'I am waiting for the water to boil, please wait a little.'})
    elif self.isActivated == False and self.isCooking == True:
        self._send_event(EventName.READY, {'speech': 'The oat meal is almost ready, sit tight for less than 5 minutes'})
Most of the magic happens in the cooking thread. If the current time is past the stop time, we set the activated and cooking flags to false and tell Alexa that the food is ready.
Second, once it's activated and the temperature has reached the threshold, we start cooking the oatmeal and send a signal to Alexa as well.
def _cooking_thread(self):
    while True:
        temperature = self.temp.value()
        print("temperature:" + str(temperature))
        # first check the timer
        if datetime.datetime.now() > self.stop_time and self.isActivated == False and self.isCooking == True:
            self.isActivated = False
            self.isCooking = False
            self.z_axis.on_for_degrees(SpeedPercent(10), 80)
            self._send_event(EventName.READY, {'speech': 'Oat meal is ready, please turn off the stove'})
        if self.isActivated == True and temperature > 75 and self.isCooking == False:
            # if the water is at 100, the temperature around the pan should be 75+
            self.dropper.on_for_seconds(SpeedPercent(-20), 3)
            self.z_axis.on_for_degrees(SpeedPercent(10), -80)
            time.sleep(1)
            while self.dropper_load.is_released:
                self.dropper.on_for_seconds(SpeedPercent(20), 1)
            # now we are starting to cook
            self.stop_time = datetime.datetime.now() + datetime.timedelta(minutes=5)
            self.isCooking = True
            self.isActivated = False
            self._send_event(EventName.READY, {'speech': 'The water is almost boiling, we will start cooking oatmeal now'})
        time.sleep(1)
_stiring_thread depends on the flag self.isCooking; once the flag is true we stir constantly.
def _stiring_thread(self):
    while True:
        if self.isCooking == True:
            self.y_axis.on_for_seconds(SpeedPercent(-5), 1.2)
            self.x_axis.on_for_seconds(SpeedPercent(-10), 1.5)
            self.y_axis.on_for_seconds(SpeedPercent(5), 1.5)
            self.x_axis.on_for_seconds(SpeedPercent(10), 1.5)
        time.sleep(5)
When this is all done, we will have a fully functional app, and a clean pan!
We open oat meal bot by saying
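The isActivated/isCooking flag logic above can be exercised off-robot before a real cooking run. The sketch below is a hypothetical, hardware-free model of just the state transitions (start, then water boils, then timer expires), with the motor, sensor, and Alexa calls stubbed out; the class and method names here are mine, not from the project code.

```python
import datetime

class OatmealState:
    """Hardware-free model of the cooking flags used by oatmeal.py."""
    def __init__(self):
        self.isActivated = False
        self.isCooking = False
        self.stop_time = datetime.datetime.now()

    def cook(self):
        # corresponds to _cook_handler: "Alexa, start cooking"
        self.isActivated = True

    def tick(self, temperature, now):
        # corresponds to one pass of _cooking_thread
        if now > self.stop_time and not self.isActivated and self.isCooking:
            self.isCooking = False       # timer expired: oatmeal is ready
            return 'ready'
        if self.isActivated and temperature > 75 and not self.isCooking:
            self.stop_time = now + datetime.timedelta(minutes=5)
            self.isCooking = True        # water hot enough: drop oats, start stirring
            self.isActivated = False
            return 'cooking'
        return 'waiting'

s = OatmealState()
now = datetime.datetime.now()
print(s.tick(20, now))   # cold water, nothing happens
s.cook()
print(s.tick(80, now))   # water hot enough: starts cooking
print(s.tick(80, now + datetime.timedelta(minutes=6)))  # 5-minute timer expired
```

Walking through the three ticks mirrors a full run: waiting, then cooking, then ready once the 5-minute window has passed.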
"Alexa, open oat meal bot"
To start cooking, we can say
"Alexa, Start cooking" or "Cook me oat meal"
To stop cooking, simply say
"Alexa, Stop cooking"
To check whether it's ready, we can say
"Is it ready yet" or "I am hungry"
Here is the demo!