The code for this project hosted on AWS Lambda needs to be updated and isn't currently operational.
What is Sky Finder?
LEGO has always been about discovery and learning through play. Become a space explorer with this LEGO Mindstorms gadget and turn Alexa into your personal guide to the universe. Sky Finder will show you the most interesting stars, planets and even galaxies in the sky, and Alexa will tell you about them.
There are many exciting things to see in the sky without the need for expensive telescopes, but they can be difficult to find without some help. This project is a LEGO gadget that you can build easily from the standard Mindstorms EV3 set (31313) and discover what's visible from your window or back yard.
Why did I build this?
This project was entered into the LEGO Mindstorms Voice Challenge on hackster.io. I wanted to build something fun and original that couldn't be done with Alexa or LEGO Mindstorms on their own.
The combination of Alexa and LEGO is ideal for this project. Alexa provides a friendly interface to explore the sky, and the Sky Finder gadget shows you where to look. It takes a while for your eyes to adjust to the darkness of the night sky after looking at a screen, so the voice interface is the perfect way to locate literally hundreds of different objects in deep space and our own solar system.
What can it do?
Sky Finder knows about all the star constellations visible from the northern and southern hemispheres; many of the most interesting stars in the sky; all of the planets in our solar system; and many deep space objects that you can see with the naked eye such as The Andromeda Galaxy and The Pleiades star cluster.
It will also suggest interesting things in the sky that are visible from your location and give you information about what you're looking at.
Thanks to the processing power available on tap in Amazon's AWS service, Sky Finder can perform the complex calculations needed to locate stars, planets and other celestial bodies, which would be challenging to do on the EV3 brick alone.
Making it easy to build
I wanted to make it as easy as possible for people to build the project, learn from it and expand on it if they wanted to.
- The gadget only uses parts from the LEGO Mindstorms EV3 set (31313)
- Full LEGO build instructions are available
- Both the code that runs on the EV3 brick and the Alexa skill code are written in Python
- The skill code can be hosted by Alexa, so everything can be done in the Alexa Skills console.
The walkthrough below contains instructions about how to make your own Sky Finder and information about how it works. If you'd like to build it without reading about how it works, just follow the numbered instructions.
Building the LEGO Gadget
1. Use the full build instructions in the Schematics section at the bottom of this page to build the LEGO gadget.
2. When the gadget is completed, connect the motors and sensors to the brick using 4 cables. I would suggest the following:
- Connect the colour sensor to socket 1 on the EV3 brick with a 50cm cable, ensuring the cable runs above the EV3 brick.
- Connect the touch sensor to socket 4 using a 25cm cable. You may need to temporarily detach the sensor from the gadget to make it easier to plug in.
- Connect the large motor to socket D using a 25cm cable, carefully running the cable between the EV3 brick and the motor.
- Connect the medium motor to socket C using a 35cm cable.
Set up the ev3dev build environment
3. You need to install the ev3dev operating system onto your micro SD card, boot your EV3 brick from the card, and connect the EV3 to the Visual Studio Code editor. The LEGO MINDSTORMS Voice Challenge team have provided some step-by-step instructions here. (There's no need to continue to any of the missions unless you want to learn more.)
Register your gadget in the Alexa Developer Console
4. Next you need to register your gadget in the Alexa developer console. Once again, the LEGO MINDSTORMS Voice Challenge team have provided instructions. Follow steps 1-10 here and make a note of the Amazon ID and Alexa Gadget Secret.
The software
Sky Finder consists of the physical LEGO hardware, plus two software programs written in Python. The first ('the Alexa skill') runs on Amazon's cloud servers and handles voice commands from the Echo device, calculates the star positions and creates a payload/command which is sent to the Sky Finder gadget (via the Echo) to move the motors.
The second program runs on the EV3 brick. It connects the Sky Finder gadget to an Echo device via Bluetooth, then listens for commands from the Alexa skill, relayed via the Echo, and moves the motors accordingly.
In the next section, I'll describe some of the key features of the Alexa skill and explain how to set it up in the Alexa developer console.
The custom Alexa Skill - setting up
5. To build a new Alexa skill, go to the Alexa Developer Console, log in if required and click the Create Skill button.
6. On the next screen, type the name of your skill (Sky Finder or another name of your choice) and choose the language/region that matches the settings on your Echo device. Next select Custom skill model and Alexa-Hosted (Python) and click the Create Skill button.
7. On the final screen, select the Hello World template and click the Choose button to create the new skill. It will take a couple of minutes to set up.
The custom Alexa Skill - the interaction model
Custom Alexa skills include a voice interaction model, which maps the spoken input from the user to intents that can be handled by the Python code.
Sky Finder accepts two main intents:
- ScanIntent: Scan part of the sky for interesting objects, and tell the user what they are. (e.g. What can you see in the south?).
- LocateIntent: Locate a particular object (star, planet, constellation etc.) in the sky, make the gadget point to it and give the user some information about the object. (e.g. Locate Venus)
The voice interaction model also specifies slots (or parameters), which contain the compass directions or objects (such as "the south" or "Venus") in the two examples above.
8. First we need to enable the skill to use our gadget, so on the 'build' tab, select Interfaces from the menu on the left and enable the Custom Interface Controller.
9. Click on the Save Interfaces button at the top of the page.
10. Next, select JSON editor from the menu on the left. Download the JSON interaction model file provided at the bottom of this tutorial, then drop it onto the drag-and-drop area above the editor.
11. Click the Save model button and then the Build Model button at the top of the screen. It takes a minute or so to build. You should see the intents (LocateIntent and ScanIntent) appear on the left of the screen along with their corresponding slots.
Click on Intents to see the sample utterances, which are examples of what the user might say to trigger that intent. Alexa's AI chooses the most suitable intent based on what the user has said - they don't have to say the exact words listed in the examples.
Click on Slot Types to see the names of the compass directions or (celestial) objects that the user might say, along with synonyms. The slot values also contain an ID which is passed to the Python code and uniquely identifies the object. For example, you might ask the skill to "Locate the Big Dipper" if you're American or "Locate The Plough" if you're British, but as well as this information, the official abbreviation 'UMa' (for Ursa Major) is passed as the ID.
The Lambda function, written in Python in this case, contains all of the main processing logic of the Alexa skill. It consists of a number of methods that process the intents. The @ syntax applies decorators to the handler functions, as required by the Alexa SDK. All skills follow a similar structure, so you can easily adapt this skill, or one of the examples provided by the Amazon team, to suit your purposes rather than writing one completely from scratch.
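As a rough illustration of that structure (a minimal sketch, not the full Sky Finder code), a Python skill built with the ASK SDK looks something like this:

from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import is_request_type

skill_builder = SkillBuilder()

# The decorator registers this function as the handler for LaunchRequest
@skill_builder.request_handler(can_handle_func=is_request_type("LaunchRequest"))
def launch_request_handler(handler_input: HandlerInput):
    return (handler_input.response_builder
            .speak("Welcome to Sky Finder")
            .ask("What would you like me to find?")
            .response)

# The entry point that AWS Lambda calls for every request
lambda_handler = skill_builder.lambda_handler()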
12. Click on the Code tab of the developer console.
13. Replace the contents of lambda_function.py, utils.py and requirements.txt with the contents of the corresponding files in the Code section at the bottom of this page.
NB: For now, please ignore the files with the same name in the code section marked as "API Code"
14. Right click in the space underneath utils.py and select Create File.
15. Create a new file called data.py in the lambda folder.
16. Copy the contents of the data.py file in the Code section at the bottom of this page to the new file.
17. Click on the Save button to save the files you have created, and then click on the lambda_function.py tab if you want to see the code that is explained below.
LaunchRequest
As well as the two intents we have defined in the interaction model (ScanIntent and LocateIntent), there are some built-in intents which need to be handled. The most important of these is LaunchRequest, which is called when the skill is opened by the user ("Alexa, open Sky Finder").
@skill_builder.request_handler(can_handle_func=is_request_type("LaunchRequest"))
def launch_request_handler(handler_input: HandlerInput):
The LaunchRequest handler method first checks whether an Alexa gadget is connected to the Echo device.
system = handler_input.request_envelope.context.system
api_access_token = system.api_access_token
api_endpoint = system.api_endpoint
# Get connected gadget endpoint ID.
endpoints = get_connected_endpoints(api_endpoint, api_access_token)
If the gadget is connected, the endpoint ID is stored in session attributes so that we can send locations to the gadget in future.
# Store endpoint ID for using it to send custom directives later.
logger.debug("Received endpoints. Storing Endpoint Id: %s", endpoint_id)
session_attr = handler_input.attributes_manager.session_attributes
session_attr["endpoint_id"] = endpoint_id
Generally speaking, the Lambda function is called again each time the user says something, so any variables we need to persist during a conversation should be stored in the session attributes.
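For example, a handler that runs later in the same conversation can read the stored value back like this (a minimal sketch):

# Read back a value stored by an earlier handler in the same session
session_attr = handler_input.attributes_manager.session_attributes
endpoint_id = session_attr.get("endpoint_id")  # None if nothing was stored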
LocateIntent
LocateIntent is the custom intent we defined which is called when a user asks Sky Finder to locate something in the sky. This handler method is also called when we receive a YesIntent. (More explanation of this later).
@skill_builder.request_handler(can_handle_func=lambda handler_input:
                               is_intent_name("AMAZON.YesIntent")(handler_input) or
                               is_intent_name("LocateIntent")(handler_input))
def locate_intent_handler(handler_input: HandlerInput):
First, the request handler attempts to fetch the unique ID of the (celestial) object that we specified in the Slot Types section of the interaction model, such as UMa (Ursa Major).
# Get the name of the object the user is trying to locate from the slot
slots = handler_input.request_envelope.request.intent.slots
try:
    # See if it's been matched to one of the specified values for this slot type in the interaction model
    object = slots["object"].resolutions.resolutions_per_authority[0].values[0].value.id
The next section of code sends an interim response to the user while we locate the object ("Locating Ursa Major..."). I won't explain it in detail here, other than to say that a SpeakDirective is a way of sending a response to the Echo device before the Lambda function has finished executing.
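For reference, a progressive response of this kind can be sent with the ASK SDK's directive service along the lines below. This is a sketch rather than the exact code used in the skill, and it assumes the skill builder is configured with an API client (for example CustomSkillBuilder(api_client=DefaultApiClient())).

from ask_sdk_model.services.directive import SendDirectiveRequest, Header, SpeakDirective

# Send an interim spoken response before the handler returns
request_id = handler_input.request_envelope.request.request_id
interim = SendDirectiveRequest(
    header=Header(request_id=request_id),
    directive=SpeakDirective(speech="Locating Ursa Major"))
handler_input.service_client_factory.get_directive_service().enqueue(interim)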
# Locate the object in space
az,alt,distance = utils.locate(object)
The location of the object in the sky is specified using azimuth and altitude coordinates. Azimuth is between 0 and 359 degrees, corresponding to a compass direction. Altitude indicates how high the object is in the sky, from 0 degrees on the horizon to 90 degrees directly overhead.
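The degrees_to_dir helper that appears below converts an azimuth angle into a spoken compass direction. A minimal sketch (not necessarily the exact implementation in utils.py) might look like this:

def degrees_to_dir(azimuth):
    # Map an azimuth in degrees to the nearest of eight compass points
    points = ["north", "north east", "east", "south east",
              "south", "south west", "west", "north west"]
    index = int((azimuth % 360) / 45.0 + 0.5) % 8
    return points[index]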
Next the handler prepares a spoken response from Alexa telling the user that the object has been located, and providing some interesting facts if we have them.
# object found and visible with a defined distance
compass = utils.degrees_to_dir(az)
speak_output = "{} Located. {} is currently {} kilometers from your location. <break time=\"2s\"/> {} <break time=\"2s\"/> {}".format(radar_ping, name, distance, description, data.ASK_MESSAGE)
If a gadget is connected, we prepare a payload (command) containing the location for the Echo device to relay to the gadget, specifying the endpoint ID that we saved to session attributes in the LaunchRequest handler.
if ("endpoint_id" in session_attr and object_located):
payload = {
"type": "move",
"az": az,
"alt": alt,
}
endpoint_id = session_attr["endpoint_id"]
directive = build_send_directive(NAMESPACE, NAME_CONTROL, endpoint_id, payload)
Finally, we use methods from the response builder class in the Alexa SDK to build the data structure that is sent back to the Echo device.
return (handler_input.response_builder
        .speak(speak_output)
        .ask("What do you want me to find?")
        .add_directive(directive)
        .response)
ScanIntent
ScanIntent is called when a user asks for information about what they can see in a particular area of the sky.
@skill_builder.request_handler(can_handle_func=is_intent_name("ScanIntent"))
def scan_intent_handler(handler_input: HandlerInput):
First, the skill gets the compass direction the user has specified from the slot.
# Get the compass direction the user has specified
slots = handler_input.request_envelope.request.intent.slots
try:
    direction = slots["compass_direction"].resolutions.resolutions_per_authority[0].values[0].value.id
Once again, the next section of code sends an interim response to the user while we do the lookup, and we then call a function which returns a list of prominent objects in that part of the sky.
# Find a list of objects in our specified part of the sky
logger.info("Scanning sky in {}.".format(direction))
scan_list = utils.scan(direction)
Next, we create a comma separated list of objects ready for Alexa to speak to the user. We maintain a list of common names of objects in a Python dictionary in the data.py file for this purpose.
for object in scan_list:
    # Get list of proper names of objects from database
    spoken_list.append(data.OBJECT_DATA[object['name']]['name'])
speak_output += utils.comma(spoken_list)
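The comma helper joins the names into a natural-sounding phrase. A minimal version (illustrative, not necessarily identical to the function in utils.py) could be:

def comma(items):
    # Join names into a spoken list, e.g. "Venus, Mars and Jupiter"
    if not items:
        return ""
    if len(items) == 1:
        return items[0]
    return ", ".join(items[:-1]) + " and " + items[-1]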
If only one object is found, Alexa will ask, "Do you want me to show you where it is?", and the object name is saved in the session attributes. If the user replies 'yes', a YesIntent is triggered next time; as described above, this is handled by the LocateIntent handler, which locates the object saved in the session attributes.
If there is more than one object, Alexa will ask, "Which one would you like me to locate?".
if (len(scan_list) > 1):
    speak_output += ". Which one would you like me to locate? "
else:
    speak_output += ". Do you want me to show you where it is?"
    # save object ready for a possible 'yes' response from the user
    object_of_interest = scan_list[0]['name']
    session_attr = handler_input.attributes_manager.session_attributes
    session_attr["object_of_interest"] = object_of_interest
Finally, the response is built and sent to the Echo device.
return (handler_input.response_builder
        .speak(speak_output)
        .ask("Do you want me to show you where they are?")
        .response)
The custom Alexa Skill - utils.py and data.py
The Lambda function imports two other files:
utils.py contains a number of utility functions, including the code which looks up the position of stars and planets. Currently, this calls an external lookup API to avoid having to compile dependencies into the Lambda function. The code for the API is included at the bottom of this page for reference, but you don't need to use it.
data.py contains a Python dictionary with information about the stars, constellations, planets and other objects, plus a list of messages which Alexa might speak to the user, such as the welcome message that a user hears when the skill starts.
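As an illustration, an entry in the OBJECT_DATA dictionary might look something like this. The exact field names are an assumption, apart from 'name', which is used in the ScanIntent handler above.

OBJECT_DATA = {
    # Keyed by the ID defined in the slot types of the interaction model
    "UMa": {
        "name": "Ursa Major",
        "description": "Ursa Major, the Great Bear, contains the Plough, also known as the Big Dipper.",
    },
}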
The custom Alexa skill - specifying latitude/longitude and testing
18. Currently, the geographical location of the gadget is specified in the data.py file.
# My latitude and longitude - NB no spaces
LAT = '51.5842N'
LON = '2.9977W'
To replace this with your location, you can use Google to look up the latitude/longitude of a nearby city.
Please remove the spaces and degree signs before pasting it into the code.
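For example, for a gadget located in New York City the values would look like this (coordinates approximate):

# Approximate latitude and longitude for New York City
LAT = '40.7128N'
LON = '74.0060W'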
19. Once you have done this, click the Save button and then the Deploy button above the editor window.
20. To test the skill, go to the Test tab in the developer console.
21. Type Open Sky Finder (or the skill name you chose earlier) into the input window of the Alexa Simulator. You should receive the skill's welcome message in response.
In the next section, we'll look at the software for the EV3 brick.
Code for the EV3 brick
22. On your PC, create a new folder called SkyFinder. Download the skyfinder.py and skyfinder.ini files from the Code section at the bottom of this page and move them into the new folder you have just created.
23. Open the Visual Studio Code editor, click on File -> Open Folder and open the SkyFinder folder you just created.
24. Open the skyfinder.ini file in the editor, add the Amazon ID and Alexa Gadget Secret from step 4, then click File -> Save.
25. Connect Visual Studio Code to the EV3 brick.
26. Once the device is connected, the circle will turn green. Click the icon next to the ev3dev device browser to send the workspace to the device.
The code on the EV3 brick is relatively simple compared to the Alexa skill. The main SkyFinder class extends AlexaGadget and calls the super class to initialise the gadget.
class SkyFinder(AlexaGadget):
    """
    A Mindstorms gadget that points to a specified position in the sky
    For use with the Sky Finder skill
    """

    def __init__(self):
        """
        Performs Alexa Gadget initialization routines and ev3dev resource allocation.
        """
        super().__init__()
The on_connected and on_disconnected methods are called when the gadget is connected to or disconnected from the Echo device. These simply set the front LEDs to green or off respectively as a status indication.
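A minimal sketch of those two methods, using the ev3dev2 Leds instance the class keeps in self.leds (the actual code in skyfinder.py may differ slightly):

def on_connected(self, device_addr):
    # Called by AlexaGadget when the Echo device connects over Bluetooth
    self.leds.set_color("LEFT", "GREEN")
    self.leds.set_color("RIGHT", "GREEN")

def on_disconnected(self, device_addr):
    # Turn the status LEDs off when the Echo device disconnects
    self.leds.set_color("LEFT", "BLACK")
    self.leds.set_color("RIGHT", "BLACK")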
The on_custom_mindstorms_gadget_control method is called when the gadget receives a payload from the skill, via the Echo device. We support two payloads: one which resets the gadget to its start position (currently only used on startup), and another which moves the pointer to the specified azimuth/altitude coordinates.
def on_custom_mindstorms_gadget_control(self, directive):
    """
    Handles the Custom.Mindstorms.Gadget control directive.
    :param directive: the custom directive with the matching namespace and name
    """
    try:
        payload = json.loads(directive.payload.decode("utf-8"))
        print("Control payload: {}".format(payload), file=sys.stderr)
        control_type = payload["type"]
        if control_type == "move":
            # Expected params: [az, alt]
            self.go_to(payload["az"], int(payload["alt"]))
        if control_type == "reset":
            # Expected params: none
            self.reset_position()
    except KeyError:
        print("Missing expected parameters: {}".format(directive), file=sys.stderr)
The reset_position method first turns the large motor controlling the turntable until the touch sensor is pressed, and then rotates the turntable back by a specified distance to its start position. Next it turns the medium motor which moves the arm until it is detected by the colour sensor. It's then rotated in the opposite direction to the start position.
def reset_position(self):
    """
    Move arm to starting position
    """
    logger.info("Resetting arm position")
    self.lm.stop_action = 'brake'
    self.lm.reset()
    self.lm.position = 0
    # Rotate arm until it pushes touch sensor switch
    # and then move specified distance to start position
    self.lm.on_for_rotations(speed = SpeedDPS(72), rotations = -3, block=False)
    self.ts.wait_for_pressed()
    self.lm.off()
    self.lm.on_for_degrees(SpeedDPS(72), START_AZ, block = True)
    # Rotate arm in other axis until it's seen by the colour sensor
    # and then move specified distance to start position
    self.mm.stop_action = 'brake'
    self.mm.on_for_rotations(speed = 5, rotations = -3, block=False)
    # wait until sensor sees (red) arm
    while self.cs.color != 5:
        continue
    self.mm.off()
    self.mm.reset()
    self.mm.position = 0
    self.mm.on_for_degrees(5, START_ALT, block = True)
    # turn off the distracting LED on the colour sensor
    self.cs.mode = 'COL-AMBIENT'
    # Reset the position counter to 0
    self.mm.off()
    self.mm.reset()
    self.mm.position = 0
    self.lm.off()
    self.lm.reset()
    self.lm.position = 0
When the reset is complete, the pointer should be horizontal and pointing at a right angle to the EV3 brick. If this isn't the case, try tweaking the variables in lines 23 and 24 of the code.
# Calibration constants for initial position
# Tweak these if required to ensure starting pos is North horizon (0 az, 0 alt)
START_AZ = 388
START_ALT = 210
The go_to method handles the 'move' payload and simply moves the turntable and pointer to the specified position.
To avoid cables tangling, we only move the turntable 180 degrees in either direction (90 degrees either side of north). To point to a position in the south, we move the turntable to point to the north, and move the pointer beyond the zenith (straight up position) to point in the opposite direction.
The motor positions are already specified in degrees, but since they are connected to gears, we multiply the values by GEAR_LM and GEAR_MM, which contain the gear ratios.
def go_to(self, new_az, new_alt):
    """
    Move pointer to a specified position
    :param new_az: new azimuth position (in degrees)
    :param new_alt: new altitude position (in degrees)
    """
    self.leds.set_color("LEFT", "YELLOW")
    self.leds.set_color("RIGHT", "YELLOW")
    # Normalise rotation to prevent cable tangle
    if (new_az > 270):
        az1 = new_az - 360
        alt1 = new_alt
    elif (new_az > 90):
        az1 = new_az - 180
        alt1 = 180 - new_alt
    else:
        az1 = new_az
        alt1 = new_alt
    # Move arm
    self.lm.on_to_position(SpeedDPS(ARM_SPEED), az1 * GEAR_LM, block = False, brake = True)
    self.mm.on_to_position(SpeedDPS(ARM_SPEED), alt1 * GEAR_MM, block = True, brake = True)
    self.leds.set_color("LEFT", "GREEN")
    self.leds.set_color("RIGHT", "GREEN")
The final section of code is run on startup and calls the main entry point.
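That startup code follows the standard Alexa Gadgets pattern; roughly like this (a sketch, assuming os is imported at the top of the file, and the actual script may differ):

if __name__ == '__main__':
    gadget = SkyFinder()

    # Use a smaller console font and turn the LEDs off while starting up
    os.system('setfont Lat7-Terminus12x6')
    gadget.leds.set_color("LEFT", "BLACK")
    gadget.leds.set_color("RIGHT", "BLACK")

    # Connect to the paired Echo device and wait for directives
    gadget.main()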
27. To run the code on the EV3 brick, navigate to the skyfinder.py file within the ev3dev device browser in VS Code, right-click on the file and select Run.
The green lights on the EV3 brick will start to flash. It takes around 45 seconds before the program starts.
If it's successful, the pointer reset procedure should start. If the cables begin to tangle, press the grey button directly below the screen on the EV3 brick to stop the program. It's important that the red brick that clicks the touch sensor underneath the gadget, and the touch sensor itself, are positioned correctly. If you look underneath the gadget when the code starts to run, it should look like the video below.
28. If the gadget hasn't already been connected to an Echo device, it will enter Bluetooth pairing mode, and you'll see messages similar to this on the EV3 brick screen and in VS Code.
On the Alexa app on your phone or tablet, follow the instructions to pair a new gadget to your device.
29. Once the reset procedure is complete and the gadget is paired, move Sky Finder so that the pointer is pointing directly north. If you don't know where north is, continue to step 30 and ask Sky Finder to locate an object that you know about, such as the Moon. Once the pointer has finished moving, move the gadget so that it's pointing to the object you asked for.
30. Ask Alexa to Open Sky Finder to start using your new gadget!
Credits:
Video music from https://filmmusic.io
"Futuristic 4" by Alexander Nakarada (https://www.serpentsoundstudios.com/)
License: CC BY (http://creativecommons.org/licenses/by/4.0/)
Icons used in the diagrams:
Amazon echo dot by Ben Davis from the Noun Project
chat by I cons from the Noun Project
Bluetooth Logo Signal by naim from the Noun Project
Woman by Deemak Daksina from the Noun Project
lego by Gerardo Martín Martínez from the Noun Project
python format by ICONZ from the Noun Project