The Problem
While the Echo Show works great on a desk or bedside table where the user likely only ever views the screen from one location and / or angle, it doesn’t work quite so well when used in locations with several different user viewing positions. For example, we have at least four regular user viewing positions in our kitchen and dining area and the best location for the Echo Show is a compromise for all of them:
- When cooking and viewing recipes
- When eating at the table and wanting to view lyrics
- When washing up and wanting to watch a TV show
- When not needed for anything in particular
What if you could say “Alexa, move forward 20cm and face me”, or “Alexa, goto the eating location”? It would be great if the Echo Show could move wherever you need it to with voice commands alone as a natural extension of the device itself.
The Goals
Some goals, such as turning toward the user using sound alone, would be out of reach in the time available, but the requirements for a rewarding user experience could still be met. These were basic movement and self preservation, the ability to tilt the Echo Show for a better viewing angle in some scenarios, and location retention for an improved user experience. Additional goals in the form of reacting to timers, notifications and so on would be met if possible using an add-on approach to the main build.
Here are the goals:
Movement
- Direction: Be able to move forwards or backwards and turn left or right
- Distance: Be able to move a distance in centimetres or turn an angle in degrees
- Tilt: Be able to tilt to improve the viewing angle when seated lower than the device (or when you’re a 5 year old)
Self Preservation
- Detect Edges: Be able to detect the edges of the kitchen counter and not fall off
- Detect Objects: Be able to detect objects in front and stop
Location Retention
- Set: Be able to set the current location as “home” or a previously saved location
- Save: Be able to save the current location
- Goto: Be able to goto home or a previously saved location
- Delete: Be able to delete a previously saved location
Notifications & Timers
- Timer Expiration: Turn a “spinner” till the timer is dismissed / cancelled
- Notification Raising & Clearing: Raise or lower a “flag” in response to the raising and clearing of notifications
The Solution
Alexa’s Little Helper is the solution that meets the key goals: a combination of EV3 and NXT based Lego MindStorms, with a few bits from other Lego Technic sets, designed to take Alexa, in the form of an Echo Show 10, wherever the user needs her to go. The basic form is essentially a tank with an Echo Show 10 shaped tray on top, a frame for mounting the EV3 and NXT bricks, and various sensors for edge detection, object detection and state detection.
Of course, saying “Alexa, tell Alexa’s little helper to …” would sound a bit strange so the skill is called “your little helper” and Alexa refers to it as “my little helper”. Here are example commands that can be used with the skill:
General Movement
- “Alexa, tell your little helper to move forward 40cm”
- “Alexa, tell your little helper to move forward” (moves until an object or edge is detected or a new command is given)
- “Alexa, tell your little helper to move backward 20cm”
- “Alexa, tell your little helper to move backward” (moves until an edge is detected or a new command is given)
- “Alexa, tell your little helper to turn left 45 degrees”
- “Alexa, tell your little helper to turn left” (turns left 90 degrees)
- “Alexa, tell your little helper to turn right 80 degrees”
- “Alexa, tell your little helper to turn right” (turns right 90 degrees)
- “Alexa, tell your little helper to tilt”
Location Management
- “Alexa, tell your little helper to set the current location to home”
- “Alexa, tell your little helper to goto location home”
- “Alexa, tell your little helper to save the current location as cooking”
- “Alexa, tell your little helper to goto location cooking”
- “Alexa, tell your little helper to delete location cooking”
Power
- “Alexa, ask your little helper for the battery level”
- “Alexa, ask your little helper to turn off”
And here is a video of Alexa’s little helper in action, demonstrating some of the command examples listed above (note that some segments have been sped up for brevity):
The project consisted of three main areas: the Lego build, the EV3 Python code and the Alexa Skill Node JS code. The Lego build went through several iterations before I was happy it was good enough, as did both the EV3 code and the Alexa Skill code.
There’s plenty more I wish I’d had the time to implement, covered in the Future Improvements section at the end; the rest of this story covers the key items in each of the three main areas.
NOTE: Code Colouring
I'm not sure why, as I've not used Hackster.io before, but I've not been able to figure out how to get code colouring to format correctly, either in snippets within the project story or in the main code files that have been uploaded. Some code looks fine and some doesn't, in some cases possibly due to single quotes but in others for seemingly no reason. All of the formatting and colouring is fine within Visual Studio Code and I would still expect it to be when the code files are downloaded to a local computer.
Deciding what was needed for the Lego build was pretty straightforward: essentially a tank that can move the weight of the Echo Show 10 around. Putting it all together with neat cable runs in as compact a form factor as possible was another matter. A lot of prototyping was required, especially in the area of sensor use, and while I had hoped to get away with only using the EV3 brick, using the NXT brick proved to be necessary in order to support the number of sensors needed.
The final build uses parts from the following sets plus a few extra sensors I’ve bought over the years:
- 31313 - EV3 MindStorms
- 8547 - NXT MindStorms
- 42005 - Monster Truck
- 42040 - Fire Plane
The following are key elements of the Lego build.
Tank Base
The basics of the tank base were simple as one would expect, but were complicated by the requirements to support the weight of the Echo Show, the need to include the tilt mechanism and the need to provide mounting points for other elements of the build.
Key to handling the weight was to ensure there were no gaps between the two tank tracks to stop the Lego bending in on itself. In the end, fitting the tilt mechanism between the two tank motors helped provide a lot of the required stability.
Echo Show Tray
The tank base provided most of the weight management but the Echo Show needed a base and some sides to sit on / in. The main work here was matching the angles of the Echo Show and finding sides high enough to keep it in place without stopping the tilting mechanism from working properly.
EV3 & NXT Bricks Mounting Frame
The decision to mount the EV3 and NXT bricks behind the Echo Show was pretty easy as options were limited, but I did have a single goal, which was to at least allow easy access for changing the batteries. I achieved this by mounting the bricks on rotating parts at the bottom and then securing them at the top with a slide in / out axle bar. Some of the cables need unplugging, but in the end, I was pleased with how easy it is to change the batteries.
The vertical mounting needed to be pretty well secured to the main elements to ensure there was as little vibration / wobble when moving as possible:
Here you can see how the hinged mechanism made it easy to change batteries:
Tilting
Making the Echo Show tilt in order to change the viewing angle needed a slow, steady movement. Looking through existing sets for inspiration, my son’s first Technic set, the little cherry picker (42088), proved to be the winner, with its worm gear based approach being exactly what I needed.
Making the same approach work for tilting the Echo Show using a motor was pretty straightforward, though it took a few attempts to work it in with the main tank base in a compact manner I was happy with:
Sensors
Sensors were required for edge, object and state detection. As mentioned earlier, the additional NXT brick was required as I realised that achieving what I wanted with only 4 sensors was not going to be workable and I was losing a lot of time trying to make it happen. With the additional sensor possibilities in place, everything was simplified apart from the rear edge detection.
Determining whether the Echo Show was tilted or whether the notification flag was raised was a simple case of using touch sensors. If the Echo Show is pressing in its associated touch sensor then it is not tilted and the same applies to the notification flag and its touch sensor. Having information at startup from these sensors was vital in order to have a known start point for the tilt and notification flag positions. Both touch sensors were used via the NXT brick.
Front object detection was achieved in a straightforward manner using the EV3 IR sensor. It’s not foolproof given its angular range, but it is better than nothing and also allowed me to build in IR control of the tank element, which proved very useful during prototyping.
Edge detection was a little more difficult: I wanted edge detection to use only sensors attached to the EV3 brick for speed, and I also wanted to use the EV3-only IR sensor, which left me short of appropriate sensors and sensor ports on the EV3 brick. For front edge detection, I used an ultrasonic sensor on each corner.
For rear edge detection I had to get creative so as to only use a single sensor. I used a “feelers” approach, but instead of having the “feelers” press or release touch sensors, I made them raise and lower white Lego pieces which were being “watched” by a colour sensor measuring the reflected light. If the reflected light dropped below a threshold value then one or both feelers had detected an edge. The design allowed both “feelers” to move white pieces that were independent of each other but also very close together, so that a single colour sensor could “see” both.
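To make this concrete, here is a minimal sketch of how the edge detection reads might look using the ev3dev2 library; the port assignments, constant names and threshold values are placeholder assumptions rather than the project’s actual configuration:
# Sketch only: the ports and thresholds below are assumptions, not the real build's values
from ev3dev2.sensor import INPUT_1, INPUT_2, INPUT_3
from ev3dev2.sensor.lego import UltrasonicSensor, ColorSensor

ULTRASONIC_EDGE_DISTANCE_CM = 5   # assumed: readings above this suggest a counter edge
REFLECTED_LIGHT_MIN = 10          # assumed: readings below this mean a "feeler" has dropped

frontLeftUltrasonic = UltrasonicSensor(INPUT_1)
frontRightUltrasonic = UltrasonicSensor(INPUT_2)
rearColourSensor = ColorSensor(INPUT_3)

def frontEdgeDetected():
    # either front corner seeing "too far down" means we're at the edge of the counter
    return (frontLeftUltrasonic.distance_centimeters > ULTRASONIC_EDGE_DISTANCE_CM or
            frontRightUltrasonic.distance_centimeters > ULTRASONIC_EDGE_DISTANCE_CM)

def rearEdgeDetected():
    # a dropped feeler lowers its white piece, reducing the light seen by the colour sensor
    return rearColourSensor.reflected_light_intensity < REFLECTED_LIGHT_MIN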
Key Learning
Don’t be afraid to scrap what you’ve got and start again, applying learnings from earlier prototypes. I found I wasted time by trying to make something work on an approach that simply wasn’t right and was faster when starting again or reworking sections.
I took a bite sized approach to the development of the EV3 and NXT control code, creating different prototypes and playgrounds with varying numbers of iterations for each of the key areas. When satisfied with any given code, it was integrated into the main application.
The competition tutorial code was used as the basis for the final EV3 application, though to a lesser degree than with the Alexa skill code.
The following are key elements of the EV3 and NXT control program.
EV3 Control of the NXT Brick
Control of the NXT brick by code running on the EV3 brick was vital to my plan to use additional sensors. The NXT-Python library by Eelviny (https://github.com/eelviny/nxt-python) helped me achieve this goal by providing a means of querying sensors and driving motors on an NXT brick via Bluetooth.
Prototyping work did prove frustrating at first as, while the library and required Bluetooth packages worked first time on the EV3 brick, I couldn’t get them to work on the Mac I was developing on. After spending a bit too much time trying to get Bluetooth on the Mac to work, I decided to check if the code was identical whether a USB connection or a Bluetooth connection to the NXT brick was used, and luckily it was. I did the prototype work with the NXT brick wired to the Mac and the final code on the EV3 brick uses Bluetooth.
Usage of the library is straightforward, though you may need to read through the library code to find and use additional functions that aren’t exposed per se but can still be used. Here’s an example of using the touch sensors:
# imports assumed by this snippet from the nxt-python library
import nxt as nxt_brick
import nxt.locator
from nxt.sensor import Touch, PORT_1, PORT_2

# find the NXT brick (over USB or Bluetooth)
self.nxtBrick = nxt_brick.locator.find_one_brick()

# setup the touch sensors on NXT ports 1 and 2
self.nxtTiltButton = Touch(self.nxtBrick, PORT_1)
self.nxtFlagButton = Touch(self.nxtBrick, PORT_2)

# get sensor data (True when the touch sensor is pressed)
self.nxtTiltButtonPressed = self.nxtTiltButton.get_sample()
self.nxtFlagButtonPressed = self.nxtFlagButton.get_sample()
One less obvious area of using the NXT brick that needed addressing was stopping it from powering down. The NXT brick was intentionally set to a two minute inactivity power down for the general power related purposes described later in this document, and it was assumed that frequently requesting samples would keep the NXT brick “alive”, but this did not turn out to be the case. A specific “keep alive” message must be sent periodically, which was implemented as part of the detection loop where the NXT brick sensors are sampled (note that startTime is initially set outside of the loop):
# send a keep alive to the NXT brick roughly every 30 seconds so it doesn't power down
if time.time() - startTime > 30:
    self.nxtBrick.keep_alive()
    startTime = time.time()
Edge & Object Detection
Edge and object detection runs as a separate thread, solely focussed on detection. Every 100ms, the code collects the various sensor data and uses if statements to determine whether values are in the “ok” range or not, setting boolean flags indicating whether forward or backward movement is allowed accordingly.
Other than some additional code to determine whether the user has already been told about any detections and whether the detections are edge or object based, that’s about all there is to it, as the boolean flags are then used within the move code. This approach keeps the code simple and allows for good separation of concerns on what could have been a complicated implementation.
The values to check against (ultrasonic minimum distance for front edge detection, proximity minimum distance for front object detection and minimum reflected light for rear edge detection) were all determined through trial and error. I’m pretty happy that the edge detection values work in all circumstances, but the IR based proximity minimum detection distance is a little bit dodgy as the distance an object can be “seen” from changes based on white versus black objects and the range in between.
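As a rough illustration of the approach, here is a minimal sketch of such a detection thread; the robot object and its helper methods (readSensors, frontEdgeDetected and so on) are hypothetical stand-ins, and the real code also handles telling the user about detections:
import threading
import time

def startDetection(robot):
    """Run edge / object detection on its own thread, updating the movement flags roughly every 100ms."""
    def detectionLoop():
        while not robot.powerDown:
            robot.readSensors()   # assumed helper: refresh the raw sensor values
            # forward movement is blocked by a front edge or an object in front
            robot.forwardStop = (robot.frontEdgeDetected()
                                 or robot.frontObjectDetected())
            # backward movement is blocked by a rear edge
            robot.backwardStop = robot.rearEdgeDetected()
            time.sleep(0.1)

    # run detection in the background so the movement code just checks the flags
    threading.Thread(target=detectionLoop, daemon=True).start()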
The following example code determines whether backward movement is allowed or not:
# decide if backward movement is allowed
if self.rearEv3ColourReflectedLight < REFELECTED_LIGHT_MIN:
    if not self.backwardStop:
        self.backwardStop = True
else:
    self.backwardStop = False
    toldUserAboutBackwardStop = False
Movement
The movement code starts with some validation to make sure a valid move has been requested, checks that the Echo Show isn’t tilted before executing any moves, and then performs the move if the sensors allow.
For any given direction, the approach is the same but with the use of different boolean flag checks. Each move ends with a while loop making sure it's ok to keep moving which exits either when the move is finished or a sensor has detected an edge or object. Once the while loop is exited, the tank drive is turned off to ensure no continued movement (important for moving forward or backwards with no specified distance or generally when an edge or object is detected). Here’s the code for moving forward as an example:
# move forward either the specified distance or while we can
if direction in Direction.FORWARD.value and not self.forwardStop:
    self.currentLocationName = LOCATION_UNKNOWN
    self.movedSomewhere = True
    if distance != 0:
        degreesOfDrive = float(distance) * DEGREES_OF_DRIVE_PER_CM
        self.tankDrive.on_for_degrees(speed, speed, degreesOfDrive, True, False)
    else:
        self.tankDrive.on(speed, speed)

    # wait till we're done moving
    while not self.forwardStop and not self.powerDown and self.tankDrive.is_running:
        time.sleep(0.1)
Finally, if moves are being recorded as part of location management then the given move is added to the moves list. If a distance was specified then that distance is recorded as part of the move, otherwise the distance moved is calculated and recorded as part of the move. This is where using the “abs” function is vital, so that the recorded distance is always positive and the motors are later turned in the correct direction:
if self.recordMoves:
    if self.forwardStop or self.backwardStop:
        if direction in Direction.FORWARD.value or direction in Direction.BACKWARD.value:
            calculatedDistance = round((abs(self.largeMotorB.degrees - motorStartDegrees) / DEGREES_OF_DRIVE_PER_CM), 2)
        else:
            calculatedDistance = round((abs(self.largeMotorB.degrees - motorStartDegrees) / DEGREES_OF_DRIVE_TO_TURN_ONE_DEGREE), 2)
        self.movesList.append(Move(direction=direction, distance=calculatedDistance, speed=speed))
    else:
        self.movesList.append(Move(direction=direction, distance=distance, speed=speed))
As with the minimum sensor values, the “degrees of drive per centimetre” and the “degrees of drive per degree of turn” were determined through trial and error and lots of averaging. The degrees of turn, for example, could be hugely improved with the use of a gyroscope sensor, but given the limit on sensors that can be used, a gyroscope would require a redesign and a lot of time spent determining whether the ultrasonic sensors can be read quickly enough to be used safely for edge detection when connected to the NXT brick, in order to free up EV3 sensor ports.
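As an illustration of how these calibration constants might be applied, here is a small sketch of a degree based turn using ev3dev2’s MoveTank; the constant values and output ports are placeholders rather than the project’s measured figures:
from ev3dev2.motor import MoveTank, OUTPUT_B, OUTPUT_C

DEGREES_OF_DRIVE_PER_CM = 20.5               # placeholder: motor degrees per centimetre travelled
DEGREES_OF_DRIVE_TO_TURN_ONE_DEGREE = 4.3    # placeholder: motor degrees per degree of turn

tankDrive = MoveTank(OUTPUT_B, OUTPUT_C)

def turn(degreesToTurn, speed=20, clockwise=True):
    # counter-rotate the tracks by the calibrated number of motor degrees
    degreesOfDrive = degreesToTurn * DEGREES_OF_DRIVE_TO_TURN_ONE_DEGREE
    if clockwise:
        tankDrive.on_for_degrees(speed, -speed, degreesOfDrive)
    else:
        tankDrive.on_for_degrees(-speed, speed, degreesOfDrive)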
Tilting
Tilt control is simply a case of checking the tilt button state and setting negative or positive degrees to turn accordingly. The following code shows how tilting works:
# tilt the opposite way to the current tilt
if self.nxtTiltButtonPressed:
    tiltDegrees = TILT_DEGREES
    self.tilted = True
    self.outputToConsole("Tilting up ...")
else:
    tiltDegrees = -TILT_DEGREES
    self.tilted = False
    self.outputToConsole("Tilting down ...")

# do the tilt
self.mediumTiltMotor.on_for_degrees(20, tiltDegrees)
self.mediumTiltMotor.off()
Location Management
While location management doesn’t work without the base ability to accept commands and move around, it is still probably one of the most important aspects of the functionality. Without location management, the user would need to provide full directions each time the Echo Show was required in a different location, which would soon become tedious and lead to the project not being used.
The crux of location management is to record moves in an in memory list and be able to persist that list to disk and load it again when required. Two custom classes were created for this purpose with adornments making it very simple to persist and load the class data in JSON format:
from typing import List   # assumed import for the List type hint

class Move(object):
    """
    Capture move data for later replay if required
    """
    def __init__(self, direction: str, distance: int, speed: int):
        self.direction = direction
        self.distance = distance
        self.speed = speed

    @classmethod
    def from_json(cls, data):
        return cls(**data)


class Location(object):
    """
    Capture a location as a collection of moves
    """
    def __init__(self, moves: List[Move]):
        self.moves = moves

    @classmethod
    def from_json(cls, data):
        moves = list(map(Move.from_json, data["moves"]))
        return cls(moves)
Adding a move to the list is simply a case of appending a new “Move” class object which can be created on the fly as part of the append code:
self.movesList.append(Move(direction=direction, distance=distance, speed=speed))
Persisting the list of recorded moves is straightforward thanks to the classes being used and is a simple two line statement:
# save the moves list as a JSON file
with open(locationName, "w") as file:
    json.dump(Location(self.movesList), file, default=lambda o: o.__dict__, sort_keys=False, indent=4)
When moving to a location, whether that’s home or a loaded location, moves are not recorded as we already have the moves. If we’re going to a loaded location, the moves are loaded and executed in order but if we’re going home then the moves in memory are reversed and then executed. Again, thanks to the classes, loading a location is a simple two line statement:
with open(locationName, "r") as file:
    self.movesList = Location.from_json(json.load(file)).moves
And reversing the list if going home is a case of using the “reversed” function to go through the move list backwards, adding reversed moves to a new list:
for move in reversed(self.movesList):
    if move.direction in Direction.FORWARD.value:
        moves.append(Move(direction="backward", distance=move.distance, speed=move.speed))
    ...
Once the moves have either been loaded or reversed, the moves are then executed taking the Echo Show to the location requested by the user.
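A minimal sketch of that replay step might look like the following, where self.move is an assumed wrapper around the movement code shown earlier:
def executeMoves(self):
    # stop recording while replaying, since these moves are already known
    self.recordMoves = False
    for move in self.movesList:
        self.move(direction=move.direction, distance=move.distance, speed=move.speed)
    self.recordMoves = True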
Power
While it’s not possible to make Alexa turn its little helper on, it can issue a command to turn it off. When the command comes through to the Python code, it’s a case of issuing the Linux “poweroff” command as follows:
subprocess.call(["sudo", "poweroff"])
A method of issuing a power off command to the NXT brick via Bluetooth couldn’t be found so the NXT brick is set to power off after 2 minutes of inactivity which works well.
As well as being able to turn the little helper off, its battery level can also be checked and reported to the user:
batteryInfo = "My little helper's main battery level is {0:.2f} Volts and its secondary battery level is {1:.2f} Volts.".format(self.powerSupply.measured_volts, (self.nxtBrick.get_battery_level() / 1000))
self.makeAlexaSpeak(batteryInfo)
Notifications & Timer Expiry
The notification and timer code is used almost as is from the Alexa Gadgets Raspberry Pi samples on GitHub with changes to the functions called when notifications are raised or cleared or timers are created or deleted.
Events fired when the Echo Show sets or clears the notifications indicator call a function to raise or lower a Lego flag, and that’s about all there is to notifications.
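For illustration, a sketch of what the flag function could look like is shown below; it assumes, hypothetically, that the flag is driven by a motor on the NXT brick via nxt-python and that the flag’s touch sensor reads as pressed when the flag is down:
from nxt.motor import Motor, PORT_A

FLAG_TURN_DEGREES = 180   # placeholder: how far the motor turns between down and up

def setNotificationFlag(self, raiseFlag):
    flagMotor = Motor(self.nxtBrick, PORT_A)   # assumed motor and port
    if raiseFlag and self.nxtFlagButtonPressed:
        # a notification arrived and the flag is currently down, so raise it
        flagMotor.turn(50, FLAG_TURN_DEGREES)
    elif not raiseFlag and not self.nxtFlagButtonPressed:
        # notifications cleared and the flag is currently up, so lower it
        flagMotor.turn(-50, FLAG_TURN_DEGREES)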
There’s a bit more to timers and alerts which can be created, modified and deleted but ultimately the code is pretty much as per the previously mentioned samples, apart from calling a new timer countdown function which spins a “spinner” when the timer expires, until the timer alarm is dismissed.
def timer_countdown(self):
    """
    Handles making the stand run a spinner when an Alexa timer goes off
    """
    time_remaining = max(0, self.timer_end_time - time.time())
    while self.timer_token and time_remaining > 1.5:
        # update the time remaining
        time_remaining = max(0, self.timer_end_time - time.time())
        time.sleep(0.2)

    # if we still have a valid token then the timer expired so lets dance
    if self.timer_token:
        self.runTimerSpinner(90)
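The runTimerSpinner function isn’t part of those samples; a possible shape for it is sketched below, assuming (hypothetically) that the spinner is driven by an EV3 medium motor on a free output port and that self.timer_token is cleared when the timer is dismissed:
import time
from ev3dev2.motor import MediumMotor, OUTPUT_D

def runTimerSpinner(self, speed):
    spinnerMotor = MediumMotor(OUTPUT_D)   # assumed motor and port
    # spin until the timer alarm is dismissed, which clears the token
    spinnerMotor.on(speed)
    while self.timer_token:
        time.sleep(0.2)
    spinnerMotor.off()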
Key Learning
When recording distance travelled, remember that moving backwards will give a negative distance value and that the absolute value should be used when instructing the motors to turn. This will save you the anguish of seeing your robot drive backwards off a 3 foot drop because it has been told to move forward using a negative distance; the drop wasn’t detected because, expecting to move forward, no attention was paid to the rear sensors. Both the Echo Show and the Lego survived amazingly well.
Given there was no need to reinvent the wheel, the basis of the skill code is that from the competition tutorial, as it served a lot of the needs of the project from the outset. Changes have been made to adapt it and add functionality specific to this project, but other than that, the core of the code is the same.
The following are key elements of the skill.
Session Without Opening A Specific Skill Session
I didn’t want the user to have to start a skill session each time they wanted to provide a command; I wanted the user to be able to say “Tell your little helper to …” or “Ask your little helper to …”. To facilitate this, I took the various snippets of code related to endpoints, tokens and so on and combined them into a single function that either gets the data from the handler input or from the session if a session is active. If data isn’t available in the session, it is put there in case the next command comes in quickly enough.
async function handleTheSession(handlerInput) {
    let sessionAttributes = handlerInput.attributesManager.getSessionAttributes();

    // act based on whether we're already in a session or not by checking if we have an endpointId in session
    if (!sessionAttributes.endpointId) {
        const request = handlerInput.requestEnvelope;
        const { apiEndpoint, apiAccessToken } = request.context.System;
        const apiResponse = await Util.getConnectedEndpoints(apiEndpoint, apiAccessToken);

        // no endpoint, no dice
        if ((apiResponse.endpoints || []).length === 0) {
            Util.putSessionAttribute(handlerInput, 'haveEndpoint', "no");
            return;
        }

        // say the session has been created and we have an endpoint
        Util.putSessionAttribute(handlerInput, 'sessionState', "created");
        Util.putSessionAttribute(handlerInput, 'haveEndpoint', "yes");

        // Store the gadget endpointId to be used in this skill session
        const endpointId = apiResponse.endpoints[0].endpointId || [];
        Util.putSessionAttribute(handlerInput, 'endpointId', endpointId);

        // Set the skill duration (ten 60-second intervals)
        Util.putSessionAttribute(handlerInput, 'duration', 10);

        // Set the token to track the event handler
        const token = handlerInput.requestEnvelope.request.requestId;
        Util.putSessionAttribute(handlerInput, 'token', token);
    }
    else {
        // say the session was already active
        Util.putSessionAttribute(handlerInput, 'sessionState', "active");
    }
}
The launch request, move and command handlers were all changed to use this new function so that they ultimately gave the impression the skill session was always open.
Yes & No Intents
Yes and no intents are used as part of confirming whether the EV3 brick should be powered down or not and for generally allowing a sequence of commands.
If a power down command is received in SetCommandIntentHandler it sets a “last question” session variable in order to track what a user may be saying yes or no to and then asks the user if they’re sure:
Util.putSessionAttribute(handlerInput, 'lastQuestion', "powerDown_AreYouSure");

return handlerInput.responseBuilder
    .speak("Are you sure you want to turn my little helper off? That will make me sad.")
    .reprompt("So, did you want me to turn my little helper off?")
    .withShouldEndSession(false)
    .getResponse();
In the yes intent handler, we simply check if the user is saying yes to a power down and if they are then issue the instruction:
if (sessionAttributes.lastQuestion == "powerDown_AreYouSure") {
    // Construct the directive with the payload containing the command
    let directive = Util.build(sessionAttributes.endpointId, NAMESPACE, NAME_CONTROL,
        {
            type: 'command',
            command: "power_down"
        });

    return handlerInput.responseBuilder
        .speak("Ok, telling my little helper to power down")
        .addDirective(directive)
        .withShouldEndSession(true)
        .getResponse();
}
In the no intent handler we’re interested in both power down and general questions. For no to “power down?” we want to keep the session going and for no to “anything else?” we want to end the session:
if (sessionAttributes.lastQuestion == "powerDown_AreYouSure") {
    return handlerInput.responseBuilder
        .speak("That's good, keeping my little helper switched on")
        .withShouldEndSession(false)
        .getResponse();
}
else if (sessionAttributes.lastQuestion == "general_AnythingElse") {
    return handlerInput.responseBuilder
        .speak("Ok, no problem")
        .withShouldEndSession(true)
        .getResponse();
}
The yes and no intent handlers could also be expanded in the future to be used as part of location management to confirm location names are correct.
Alexa Speech Timing
The timing of Alexa’s speech was tricky to get right given the amount of time a command may take to execute. Furthermore, depending on the current position, sensor readings and so on, even simply making Alexa say “Ok” after receiving a command was troublesome as the command might not be executed at all.
After experimenting with various approaches, I decided the best way was to action the given command “silently” (i.e. with no response other than the robot starting to move) and then provide feedback when the move was complete and ask if the user wanted to issue any more commands.
The approach was achieved by making use of the existing custom event handler from the tutorial and issuing a “speech” event call from the Python code whenever a user given command was completed. In the Python code, a function called “makeAlexaSpeak” runs the following code:
self.send_custom_event("Custom.Mindstorms.Gadget", "Speech", {"speechOut": speech})
In the Node JS code, the following main part of the event handler takes care of making Alexa speak and re-prompting:
handle(handlerInput) {
    console.log("== Received Custom Event ==");

    let customEvent = handlerInput.requestEnvelope.request.events[0];
    let payload = customEvent.payload;
    let name = customEvent.header.name;

    let speechOutput;
    if (name === 'Speech') {
        speechOutput = payload.speechOut;
    } else {
        speechOutput = "Event not recognized. Awaiting new command.";
    }

    Util.putSessionAttribute(handlerInput, 'lastQuestion', "general_AnythingElse");

    return handlerInput.responseBuilder
        .speak(speechOutput, "REPLACE_ALL")
        .reprompt("Need my little helper to do anything else?")
        .getResponse();
}
Key Learning
Though not explicitly mentioned above, I found that a very important aspect of creating a skill is spending time getting the wording of the example utterances right, so that using the skill feels natural from a language perspective.
It's very easy to test using what one thinks is appropriate language, only to find that a command is not executed and, when debugging, see that the command given to the EV3 brick includes extra or unexpected words. The key to solving this sort of issue is to spend time creating as many slot usage samples as possible in the skill model, using both your own and other people’s use of language / way of speaking.
This was a fun project that allowed me to start learning Python and Node JS. Here are some ideas on future improvements and my final thoughts on the project as a whole.
Future Improvements
There are a number of improvements that could be made to the project in the future. Some require different sensors, some require more research and investigation and some simply require a bit more time:
- Appropriate validation and error handling is needed in both the EV3 Python and the Alexa Skill Node JS code.
- Make the Alexa Skill validate and confirm location names to add more of a conversational approach and make sure the user’s input was understood correctly. E.g. User: Save location as “eating”, Alexa: You want to save the current location as “eating”. Is that correct?
- Auto run the EV3 Python code at startup rather than having to run it from Brickman.
- If possible, turn the Lego sensors off when not in use to reduce battery consumption.
- Add a cable management system for the Echo Show power cable. A prototype was created based around feeding the cable out and pulling it in between two tires stacked one above the other, where a motor turned one of the tires, but the cable kept sliding up the side of the tires and there wasn’t time to solve the issue.
- Be able to move from location A to location B using vector based maths as opposed to having to return home first.
- Add rear object detection using an additional sensor connected to the NXT brick
- Use a Gyro sensor to make accurate degree based turns. This would require a rethink of which sensors are connected to which brick as the Gyro must be connected to the EV3 brick.
- If it were ever possible to access the Echo Show’s sound location data in the form of the direction any speaking originates from (e.g. as an event in the same way as starting a timer or music), the device could be turned to face the speaker.
Final Thoughts
Making an Echo Show portable was an idea I’d had about a year before coming across the LEGO MINDSTORMS Voice Challenge and I thought it would make for a great way of combining Lego and Alexa in a more literal way. There were a number of stumbling blocks along the way, including the previously mentioned actual fall off the kitchen counter, but in the end I was pleased with the creation.
I’m not going to claim it’s the prettiest Echo Show stand you’ll ever see or that it couldn’t have been built or coded better, but in real life use it meets the goals of the project wonderfully. We have an Echo Show that now moves where we need it, when we ask it, as if it were a built in function. It has genuinely enhanced the Echo Show for our needs and will continue to do so … at least until I need the MindStorms bricks for my next creation.