Designing a custom Batmobile and making it smarter (using Amazon Echo and LEGO MINDSTORMS).
DEMO VIDEO
Unable to see the video above? You can still watch it on YOUTUBE
Our Client
Wayne Enterprises. Applied Sciences Division.
Bat ... ahem, Our Story Begins
Hello visitor!
We are Andrés (left) and Luis (right), and we are the team behind the Batmobile redesign.
Lucius Fox hired us as consultants and allies to help him craft a better Batmobile and "night time" vehicle. We were of course honored to be part of this project, so we told him: "Don't worry. We can bring a User Centered approach to the redesign of Mr. Wayne's vehicle".
We thought that creating a voice-commanded smart Batmobile could bring Batman's crime fighting experience to a whole new level of awesomeness. So we decided to use an Amazon Echo Dot to build the voice assistant that could take control of Batman's main crime fighting ally: his car.
--- OUR SOLUTION DESIGN PROCESS ---
Before starting to build the solution, or coding for that matter, we first needed to understand the problem we were trying to solve. This is why we took a User Centered approach and applied design thinking principles to our solution design process.
Our solution design process consists of 3 stages:
- DISCOVERY: Understanding Batman's problems and challenges when driving
- IDEATION: Coming up with a handful of ideas that might lead to potential solutions
- IMPLEMENTATION: Building and prototyping solutions and bringing them to life.
So let's get moving!
1. DISCOVERY PHASE: Learning about Batman's pain points and motivations
In order to understand Batman, we need to think and act like Batman!
By taking a human centered approach, we could get insight into Batman’s pain points and obstacles when driving his Batmobile, fighting crime and dealing with daily life errands.
We used the Empathy Map, a valuable design thinking artifact that helped us understand the context of Batman driving his Batmobile, by answering 4 simple questions:
- What does Batman think?
- What does Batman do?
- What does Batman say?
- How does Batman feel?
How did we answer those 4 questions, you may ask. We observed Batman (in a non-creepy way) to gather the things he says and does. We turned to Alfred to help us answer the questions of what Batman thinks and how he feels when driving his car. We wrote everything down in our notes.
We transferred our observations onto post-it notes. Then we placed them on a wall to map out and gain an overall understanding of what Batman said, did, thought and felt.
We then spotted common needs and patterns across the many post-it notes, which led us to define our top 3 findings.
TOP FINDING #1: When Batman is in trouble, he calls Alfred: Alfred is the voice of reason for Bruce.
Alfred steps in and provides an external point of view that broadens Batman's vision of things, enabling him to make better decisions. Batman also relies heavily on Alfred's on-site assistance. Alfred is central to Bruce's psyche: he is Bruce's conscience, his voice of reason.
TOP FINDING #2: Batman is stubborn and overly confident when driving and fighting crime.
Bruce can be stubborn at times, and this leads him to poor judgement in certain situations, especially behind the wheel. Alfred puts him back in his place and keeps him in check with his own emotions. In driving scenarios, however, he doesn't have Alfred by his side.
TOP FINDING #3: Batman doesn't get to talk much while driving the Tumbler: Once behind the wheel, loneliness starts kicking in.
The current Batmobile's Artificial Intelligence lacks personality: it sounds robotic and boring, offering one-way interaction with no dialog and no engagement. There is some communication happening, but no real sense of conversation.
Now that we understand Batman's pain points better, it is time to define the Opportunities for Solution Design. These will come in handy when ideating potential solutions.
Creating Opportunities For Solution Design
We are now rephrasing each top finding as a few How Might We questions. How Might We questions lay the foundation for the ideation phase and for coming up with cool and amazing ideas!
How Might We questions to address TOP FINDING #1:
- How Might We make Alfred's wisdom present and available at times when Batman is driving his Batmobile?
- How Might We enable Alfred to keep assisting Batman when he is not by his side?
- How Might We make Batman feel less guilty when calling Alfred for help?
How Might We questions to address TOP FINDING #2:
- How Might We keep Batman in check with his emotions while he drives?
- How Might We help Batman understand the consequences of his actions while driving?
- How Might We help Batman become a more conscious and mindful driver?
- How Might We decrease the number of crashes Batman incurs?
How Might We questions to address TOP FINDING #3:
- How Might We make Batman feel less lonely?
- How Might We turn the Batmobile into his safe space?
- How Might We bring companionship or a partner in crime to Batman?
We grouped our previously defined How Might We questions into 3 different clusters, and labeled them in the format of a user need:
1. How Might We make Alfred omnipresent:
2. How Might We enable smarter and safer decisions on the wheel:
3. How Might We bring company and joy to the life of the Caped Crusader:
Because some of you might be wondering: Why are we doing all this post-it-related stuff, and what does it have to do with building a LEGO robot?
Now that we understand our user's (Batman's) needs and have clear opportunities for potential design solutions, it is time to start brainstorming ideas!
2. IDEATION: Turning ideas into design solutions
We brainstormed a lot of ideas to turn Batman's identified problems into design solutions. We then posted the ideas we came up with on Instagram for people to vote on and rank their favorites. After that we marked the most voted ideas with a filled heart.
1. Ideas for "How Might We make Alfred omnipresent":
2. Ideas for "How Might We enable smarter and safer decisions on the wheel":
3. Ideas for "How Might We bring company and joy to the life of the Caped Crusader":
As you can read from the ideas above, the main theme was bringing a sidekick to the life of Batman while driving his Batmobile.
We Bring Echo (Amazon Alexa) To The Dark Knight's Life
As Alfred gets older, Batman will require an equally smart and wise companion to help him through his hardest days. We believe Echo can become not only his personal advisor, but also a great sidekick he can rely on.
Because some of you might not be the patient kind (Batman included): what are we waiting for? Let's start putting some LEGO bricks and code together!
Now that we have picked a handful of ideas and selected Echo as our voice assistant for the task, it’s time to move to the phase you are all eager to know more about and make things happen!
3. THE IMPLEMENTATION PHASE: Making our ideas happen in the shape of LEGOs and Voice Activation.
This is where we take the ideas that came out of the previous 2 phases and turn them into reality.
We needed to go out there into the real world, into the Batcave, to start making things happen:
What did we need to turn these ideas into a reality?
- We designed an engaging Voice User Interface for the Echo Dot device that provides Batman with the ability to control his car using his voice in a simple and natural way, as opposed to having to use complex voice commands (e.g., "weapons system activation"). We applied most of these guidelines.
- We followed the LEGO MINDSTORMS Voice Challenge series of missions to understand how to control an EV3 robot using an Amazon Echo device. We tweaked and added new events to both the Python EV3 Brick code and the Alexa Node.js-based backend skill from Mission 4, improving upon the work already done by the LEGO MINDSTORMS Voice Challenge Team.
- We designed and assembled a strong LEGO Technic chassis that supports the EV3 Brick, the large and medium motors, and the required sensors.
Let's take one item at a time and explain how we implemented each step of the IMPLEMENTATION PHASE:
A. Designing an engaging Voice User Interface
Defining Echo's Mission Statement
First, we wrote a mission statement for what the Echo assistant skill should accomplish. This statement kept us on track with the features that needed to be implemented, which resulted from the idea generation phase.
Our Echo's Mission Statement:
"I am Echo, and I was built to help Master Wayne fight crime, avoid road accidents, and bring joy to his daily life errands. I am constantly checking the road and its surroundings, staying alert at all times to avoid collisions and take that pressure off the Bat. When Master Wayne is feeling tired and sleepy, I take control over the Batmobile and drive all the way back to Wayne Manor. When Master Wayne invokes my name and asks me to speed up and boost into rampless jumps, I analyze the environment and road conditions and let him know if it is safe to make such moves or not. I let Batman make the ultimate decision, but if I don't hear back from him on time, then I make the right call. I can also provide entertaining options such as random joke telling, and dancing. When Master Wayne is feeling down, I provide words of encouragement and a sense of humor that resemblances that of Alfred's."
Situational and scenario design in the format of scripts
We created a series of dialogs and printed them out as scripts, based on scenarios aligned with Echo's Mission Statement (e.g., analyze the environment and road conditions and let Batman know if it is safe to make a rampless jump). These scripts show a simple dialog between Batman and Echo, and they helped us keep the focus on the conversation.
We also made sure to implement the session attributes manager, which helps Echo remember previous interactions and understand the context of the current conversation. The sketch below shows the pattern.
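As a minimal sketch (assuming the standard ASK SDK v2 attributesManager API; these helper names are ours, for illustration only), this is the pattern that carries context between turns:

// Hypothetical helpers showing the session attributes pattern we rely on.
// 'handlerInput' is the object every ASK SDK v2 intent handler receives.
function rememberInfoType(handlerInput, infoType) {
    const attributesManager = handlerInput.attributesManager;
    const sessionAttributes = attributesManager.getSessionAttributes();
    sessionAttributes.infoType = infoType; // kept for the rest of the session
    attributesManager.setSessionAttributes(sessionAttributes);
}

function recallInfoType(handlerInput) {
    // Returns the info type from an earlier turn, or undefined on a fresh session
    return handlerInput.attributesManager.getSessionAttributes().infoType;
}

You will see this pattern again in the intent handlers further down, where the selected voice and the last requested info type survive across conversation turns.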
Defining the Interaction Model
The interaction model is a combination of utterances, intents, and slots that we identified during the situational and scenario scripting process.
To create our interaction model, we needed to define the voice requests (intents) and the words that trigger those voice requests (sample utterances). More about Interaction Model (utterances, intents and slots): Voice design key concepts by Amazon.
The intents we needed to implement (on top of the ones already provided by the Mission-04 folder) were the following:
- MoveCarIntent - It enables the car to drive its motors (move forward, reverse, come to a complete stop, etc.).
"Move forward for 60 seconds" is the sample utterance.
"Forward" can be mapped to a slot named {direction} and and the "60" value (corresponding to the seconds the car would be moving for) can be mapped to a slot named {seconds}. The whole sample utterance triggers the intent MoveCarIntent found on the Alexa Skill Backend, which results in Echo replying back with the result of the Move Request.
- SteerIntent - It enables the car to steer its front wheels to the left, to the right, or back to center.
- SetSpeedIntent - It enables the car to change speed as needed (go faster, go slower, accelerate, move at 50% speed, etc.).
- SetCommandIntent - It enables the car to make a decision based on road conditions (keep eyes open, mind surroundings, remove an obstacle, driverless mode, rampless jump, dodge obstacle, etc).
- GetInfoIntent - It provides answers to Batman based on common requests (I need advice, I want to hear a joke, tell me something new, etc).
- GetVillainInfoIntent - It provides facts and insights about Batman's enemies (e.g., "tell me more about the Joker").
- SwitchVoiceIntent - It gives the ability to switch from a male to a female voice, and vice versa, at any given time. We made use of the session attributes feature to keep the selected voice (woman or man) available on every single turn throughout a conversation session.
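To make the model concrete, here is a hedged sketch of how MoveCarIntent could look in the skill's interaction model JSON. The slot names mirror the {direction} and {seconds} slots described above, but the custom DirectionType slot type and the exact sample utterances are our illustration, not the project's published model:

{
    "name": "MoveCarIntent",
    "slots": [
        { "name": "Direction", "type": "DirectionType" },
        { "name": "Seconds", "type": "AMAZON.NUMBER" }
    ],
    "samples": [
        "move {Direction} for {Seconds} seconds",
        "go {Direction}",
        "come to a complete stop"
    ]
}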
B. Tweaking the EV3 Brick Code and the Alexa Skill Backend
We added new events and intent handlers, and applied changes to the sample utterances and interaction model:
Front Proximity Analysis: An ultrasonic sensor constantly analyzes the distance between the Batmobile and an obstacle in front. If the distance is less than 150 cm, an event is sent to Echo to warn Batman that he should take action. If the car keeps moving forward and the distance drops below 100 cm, the car slows down and tells Batman about this action. If the distance drops below 70 cm, the car comes to a complete stop to avoid a collision and tells Batman he should take over from this point onwards.
def _front_proximity_thread(self):
    """
    Monitors three front distance ranges between the car and an obstacle
    while sentry mode is activated. When a range's minimum distance is
    breached over several consecutive readings, a custom event is sent
    to trigger the matching action on the Alexa skill.
    """
    count = 0
    while True:
        while self.sentry_mode:
            distance = self.us.distance_centimeters_continuous
            time.sleep(0.1)
            print("Front Proximity: {}".format(distance), file=sys.stderr)
            # Count consecutive readings inside the outermost warning range
            # (150 cm) to filter out sensor noise before reacting.
            count = count + 1 if distance < 150 else 0
            if count > 3:
                # Check the tightest range first so only one action fires.
                if distance < 70:
                    # Imminent collision: roll off gently, come to a complete
                    # stop, and hand control back to Batman.
                    print("Front Proximity breached < 70. Sending event to skill", file=sys.stderr)
                    self._send_event(EventName.FRONT_PROXIMITY, {'distance': distance})
                    time.sleep(1)
                    self.drive.on_for_seconds(SpeedPercent(int(distance / 3)), SpeedPercent(int(distance / 3)), 1)
                    self.drive.off()
                    self.sentry_mode = False
                elif distance < 100:
                    # Getting close: slow the car down and tell Batman about it.
                    print("Front Proximity breached < 100. Sending event to skill", file=sys.stderr)
                    self._send_event(EventName.FRONT_PROXIMITY, {'distance': distance})
                    time.sleep(1)
                    self.drive.on_for_seconds(SpeedPercent(int(distance / 4)), SpeedPercent(int(distance / 4)), 3)
                    self.patrol_mode = False
                else:
                    # Outer warning range (< 150 cm): warn Batman so he can take action.
                    print("Front Proximity breached < 150. Sending event to skill", file=sys.stderr)
                    self._send_event(EventName.FRONT_PROXIMITY, {'distance': distance})
                    self.drive.on_for_seconds(SpeedPercent(int(distance / 5)), SpeedPercent(int(distance / 5)), 3)
                    self.patrol_mode = False
                self.leds.set_color("LEFT", "RED", 1)
                self.leds.set_color("RIGHT", "RED", 1)
                count = 0
                time.sleep(0.2)
        time.sleep(1)
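On the skill side, these custom events arrive through a CustomInterfaceController.EventsReceived request, following the Mission 4 pattern. Here is a rough sketch of how the front proximity event could be handled; the 'FrontProximity' event name and the spoken lines are our assumptions, and token validation is omitted for brevity:

// Hedged sketch of a skill-side handler for the front proximity event.
const EventsReceivedRequestHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'CustomInterfaceController.EventsReceived';
    },
    handle(handlerInput) {
        const customEvent = handlerInput.requestEnvelope.request.events[0];
        const name = customEvent.header.name;
        const payload = customEvent.payload;
        if (name === 'FrontProximity') {
            // Pick a warning that matches the distance range reported by the brick
            const distance = parseInt(payload.distance);
            let speechOutput = "Obstacle ahead, sir. You may want to take action.";
            if (distance < 70) {
                speechOutput = "Collision imminent. I have brought us to a full stop, sir.";
            } else if (distance < 100) {
                speechOutput = "We are getting close, sir. Slowing the Batmobile down.";
            }
            return handlerInput.responseBuilder
                .speak(switchVoice(speechOutput, voiceSelected))
                .withShouldEndSession(false)
                .getResponse();
        }
        return handlerInput.responseBuilder.getResponse();
    }
};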
Back Proximity: An infrared sensor constantly analyzes the distance between the Batmobile and an obstacle on the rear. If the proximity reading drops below 50, the car moves forward a bit and an event is sent to Echo to notify Batman about the action the car took on its own.
def _back_proximity_thread(self):
    """
    Monitors the distance between the car and an obstacle on the rear
    while sentry mode is activated. If the minimum distance is breached
    over several consecutive readings, a custom event is sent to trigger
    action on the Alexa skill.
    """
    count = 0
    while True:
        while self.sentry_mode:
            distance = self.ir.proximity
            print("Back Proximity: {}".format(distance), file=sys.stderr)
            # Require a few consecutive readings below the threshold to filter out noise
            count = count + 1 if distance < 50 else 0
            if count > 3:
                print("Back Proximity breached. Sending event to skill", file=sys.stderr)
                self._send_event(EventName.BACK_PROXIMITY, {'distance': distance})
                time.sleep(0.2)
                # Pull away from the obstacle, then stop. The calls block, so the
                # car actually completes each move before the next command runs.
                self.drive.on_for_seconds(SpeedPercent(10), SpeedPercent(10), 2)
                self.drive.on_for_seconds(SpeedPercent(20), SpeedPercent(20), 1)
                self.drive.off()
                self.leds.set_color("LEFT", "RED", 1)
                self.leds.set_color("RIGHT", "RED", 1)
                self.sentry_mode = False
                count = 0
            time.sleep(0.2)
        time.sleep(1)
Rampless Jump: One of Batman's favorite moves behind the wheel. We use the ultrasonic sensor again to analyze the distance between the car and an object ahead. If the distance is more than 150 cm, there is plenty of space to accelerate at full speed. Otherwise, Echo notifies Batman that, unfortunately, this move can't be performed given the current road conditions.
def _rampless_jump_proximity_thread(self):
    """
    Monitors the distance between the car and an obstacle in front when
    rampless jump mode is activated. A custom event is sent to the Alexa
    skill, and the jump is either performed or aborted depending on the
    clearance available.
    """
    count = 0
    while True:
        while self.rampless_mode:
            distance = self.us.distance_centimeters_continuous
            time.sleep(0.1)
            print("Rampless Jump Proximity: {}".format(distance), file=sys.stderr)
            # Wait for a few consecutive readings so a single noisy sample
            # doesn't trigger (or abort) the jump.
            count += 1
            if count > 3:
                if distance < 150:
                    # Not enough clearance: notify the skill so Echo can
                    # break the bad news to Batman, and abort the jump.
                    print("Rampless Jump Proximity < 150. Sending event to skill", file=sys.stderr)
                    self._send_event(EventName.RAMPLESS_JUMP_PROXIMITY, {'distance': distance})
                    time.sleep(1)
                else:
                    # Plenty of road ahead: notify the skill, then accelerate
                    # to full speed and decelerate again to perform the jump.
                    print("Rampless Jump Proximity > 150. Sending event to skill", file=sys.stderr)
                    self._send_event(EventName.RAMPLESS_JUMP_PROXIMITY, {'distance': distance})
                    time.sleep(1)
                    for speed in (60, 80, 100, 80, 60, 40, 20):
                        duration = 3 if speed == 100 else 1
                        self.drive.on_for_seconds(SpeedPercent(speed), SpeedPercent(speed), duration)
                self.leds.set_color("LEFT", "RED", 1)
                self.leds.set_color("RIGHT", "RED", 1)
                self.rampless_mode = False
                count = 0
                time.sleep(0.2)
        time.sleep(1)
Driverless Fail Safe Mode: The driverless mode consists of activating the proximity sensors to avoid collisions and moving the car in the direction of Wayne Manor. A rough sketch of the skill-side trigger follows.
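This is a hedged sketch of that trigger, reusing the Util.build helper from the Mission 4 sample code (which wraps a CustomInterfaceController.SendDirective); the 'driverless' command string and the spoken reply are our assumptions:

// Hedged sketch: activating driverless mode from a SetCommandIntent request.
const DriverlessModeHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'SetCommandIntent'
            && Alexa.getSlotValue(handlerInput.requestEnvelope, 'Command') === 'driverless mode';
    },
    handle(handlerInput) {
        const attributesManager = handlerInput.attributesManager;
        const endpointId = attributesManager.getSessionAttributes().endpointId || [];
        // Tell the EV3 brick to switch on its proximity sensors and head home
        const directive = Util.build(endpointId, NAMESPACE, NAME_CONTROL,
            { type: 'command', command: 'driverless' });
        return handlerInput.responseBuilder
            .speak(switchVoice("Driverless mode engaged. Heading back to Wayne Manor, sir.", voiceSelected))
            .addDirective(directive)
            .withShouldEndSession(false)
            .getResponse();
    }
};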
GetInfo: When Batman wants to laugh or he simply wants to hear words of encouragement, Echo obliges.
// A handler to fulfill jokes, advice and other responses at random.
// The type of info fulfilled comes from a slot value.
const GetInfoIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'GetInfoIntent';
    },
    handle: function (handlerInput) {
        // Get data from session attributes
        const attributesManager = handlerInput.attributesManager;
        voiceSelected = attributesManager.getSessionAttributes().voice || "Brian";
        let infoPreviousRequest = attributesManager.getSessionAttributes().infoType;
        let typeOfInfoRequested = Alexa.getSlotValue(handlerInput.requestEnvelope, 'Info') || infoPreviousRequest;
        if (!typeOfInfoRequested) {
            return handlerInput.responseBuilder
                .speak(switchVoice("Can you say that again, sir?", voiceSelected))
                .reprompt(switchVoice(getRandomResponse(ALLTYPESOFINFO["idleresponse"]), voiceSelected))
                .getResponse();
        }
        let arrayOfInfo = ALLTYPESOFINFO[typeOfInfoRequested];
        // Remember the requested info type so follow-up turns can reuse it
        Util.putSessionAttribute(handlerInput, 'infoType', typeOfInfoRequested);
        // Prefix the response: "do" requests get a piece of advice, jokes get a
        // random opening line (an if/else-if chain so one branch doesn't clobber the other).
        let speechOutput = "";
        if (typeOfInfoRequested === "do") {
            speechOutput = getRandomResponse(ALLTYPESOFINFO["advice"]);
        } else if (typeOfInfoRequested === "joke") {
            speechOutput = getRandomResponse(ALLTYPESOFINFO["jokeopeningline"]);
        }
        return handlerInput.responseBuilder
            .speak(switchVoice(speechOutput + getRandomResponse(arrayOfInfo), voiceSelected))
            .reprompt(switchVoice(getRandomResponse(ALLTYPESOFINFO["idleresponse"]), voiceSelected))
            .withShouldEndSession(false)
            .getResponse();
    }
};
GetVillainInfo: When Batman wants to learn more about an enemy, he simply says the magic words: "Echo, what do you know about the Joker?", and Echo obliges.
// A handler to fulfill information about Batman's villains
// by the villain's name. The villain's name comes from a slot value.
const GetVillainsInfoIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'GetVillainInfoIntent';
    },
    handle: function (handlerInput) {
        // Get data from session attributes
        const attributesManager = handlerInput.attributesManager;
        voiceSelected = attributesManager.getSessionAttributes().voice || "Brian";
        let villainFromPreviousRequest = attributesManager.getSessionAttributes().villainType;
        let villainRequested = Alexa.getSlotValue(handlerInput.requestEnvelope, 'Villain') || villainFromPreviousRequest;
        if (!villainRequested) {
            return handlerInput.responseBuilder
                .speak(switchVoice("Can you say that again, sir?", voiceSelected))
                .reprompt(switchVoice(getRandomResponse(ALLTYPESOFINFO["idleresponse"]), voiceSelected))
                .getResponse();
        }
        let arrayOfVillainsInfo = VILLAINSINFO[villainRequested];
        if (!arrayOfVillainsInfo) {
            // Guard against villains we have no intel on yet
            return handlerInput.responseBuilder
                .speak(switchVoice("I have no intel on that villain yet, sir.", voiceSelected))
                .withShouldEndSession(false)
                .getResponse();
        }
        // Remember the villain so follow-up questions can refer back to them
        Util.putSessionAttribute(handlerInput, 'villainType', villainRequested);
        return handlerInput.responseBuilder
            .speak(switchVoice(getRandomResponse(arrayOfVillainsInfo), voiceSelected))
            .reprompt(switchVoice(getRandomResponse(ALLTYPESOFINFO["idleresponse"]), voiceSelected))
            .withShouldEndSession(false)
            .getResponse();
    }
};
SwitchVoice: Batman has the choice to pick between a female and male voice at any given time.
// Set the voice which Echo will use to reply to Batman.
// This allows other intent handlers to use the selected voice
// without asking the user for input.
let SetEchoVoiceIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'SwitchVoiceIntent';
    },
    handle: function (handlerInput) {
        // Get the requested voice type ("man" or "woman") from the Voice slot
        let request = handlerInput.requestEnvelope;
        let voiceType = Alexa.getSlotValue(request, 'Voice');
        // Get the currently selected voice from the session attributes
        const attributesManager = handlerInput.attributesManager;
        voiceSelected = attributesManager.getSessionAttributes().voice || "Brian";
        if (voiceType === "man") {
            voiceSelected = "Brian"; // Brian is the Polly voice used for the male option
        }
        if (voiceType === "woman") {
            // "woman" keeps Alexa's native (female) voice, so no Polly voice tag is applied
            voiceSelected = "woman";
        }
        // Persist the choice so every later turn in the session uses it
        Util.putSessionAttribute(handlerInput, 'voice', voiceSelected);
        let confirmation = getRandomResponse(ALLTYPESOFINFO["switchedvoiceresponse"]);
        let speechOutput = switchVoice(confirmation, voiceSelected);
        return handlerInput.responseBuilder
            .speak(speechOutput)
            .reprompt(switchVoice(getRandomResponse(ALLTYPESOFINFO["idleresponse"]), voiceSelected))
            .withShouldEndSession(false)
            .getResponse();
    }
};
// Wraps the reply in an SSML voice tag when the male Polly voice is selected,
// e.g. switchVoice("I am Echo", "Brian") -> "<voice name='Brian'>I am Echo</voice>".
// The "woman" option uses Alexa's native voice, so the text is returned as-is.
function switchVoice(text, voice_name) {
    if (text && voice_name === "Brian") {
        return "<voice name='" + voice_name + "'>" + text + "</voice>";
    }
    return text;
}
C. Designing and assembling a sturdy LEGO TECHNIC Chassis
A mix between LEGO TECHNIC and LEGO MINDSTORMS sets.
LEGO Chassis Design using Megabricks online LEGO Editing Tool
Front Wheel Steering System
Testing Front Wheel Weight Support
Testing Rear Wheel Drive Weight Support
TEST TRACKS
Movement Proof of Concept
Testing basic movements
THE RESULT: SEE IT IN ACTION!
Unable to see the video above? You can still watch it on YOUTUBE
THE THINGS THAT HAPPENED BACKSTAGE
Sketching Conversation Scenarios
Before we added Bat Eyes to the car ...
Too much power on this thing :D
... AND THIS IS IT, VISITOR!
We hope you enjoyed this challenge and learned as much about Voice User Interface Design as we did. HASTA LUEGO!
All characters, character names, slogans, logos, and related indicia are trademarks of and copyright of DC Comics. The Smart LEGO Tumbler Project is in no way affiliated with or endorsed by DC Comics or any of its subsidiaries, employees, or associates. The Smart LEGO Tumbler Project offers no suggestion that the work presented on this DEMO Video is "official" or produced or sanctioned by the owner or any licensees of the aforementioned trademarks. The Smart LEGO Tumbler Project members will take all steps necessary to ensure that any usage of trademarked items in no way confuses the audience of this site as to its origin. The Smart LEGO Tumbler Project Team makes no claim to own Batman or any of the copyrights or trademarks related to it.