Let's build more projects that inspire us and bring us joy! This project uses ChatGPT and the Adafruit MagTag (an ESP32-S2 board with an e-ink display) to display interesting facts and inspirational quotes. I set it up to refresh each day - you can adjust it to whatever frequency you want/need.
What you'll learn:
This project is a beginner-friendly introduction to using the OpenAI Python API with hardware. You'll learn how to write a simple CircuitPython program that calls the OpenAI Python API, how to adjust AI model parameters, and some basics of prompt engineering (i.e., how to get the AI model to do what you want).
What you need to know:
Some prior knowledge of hardware, VS Code, and CircuitPython is helpful. If you're new to the MagTag, here's a handy getting started tutorial.
Let's get started!
Set up a free OpenAI account
First, we'll need to create an OpenAI account to use the OpenAI Python API.
1. Create a free OpenAI account here.
2. Grab your OpenAI key and store in a safe place for later.
To do this, visit the OpenAI Platform page. Click on your account (upper right corner), and then select 'View API Keys'.
Select 'Create new secret key' and give it a name that will help you remember where it's being used (e.g. 'ADA Project 1')
MagTag Setup
1. Connect the MagTag to your computer via a USB-C cable. (Make sure your cable supports data transfer.)
Check that the MagTag device shows up as a connected device on your computer.
2. Open a new project in VS Code (or your preferred code editor).
3. Download the necessary libraries for the project. We'll need the following:
You can download the libraries here, then load them onto your MagTag by dragging-and-dropping into the /lib folder on the MagTag.
Set up our WiFi and API credentials!
1. In VS Code, clone the GitHub repo.
2. Open the 'secrets.py' file and input your WiFi SSID, password, and OpenAI API key.
3. Save the file, then drag the file onto the MagTag.
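For reference, secrets.py is just a Python file holding a dictionary of credentials. A sketch with placeholder values is below - the exact key names are defined by the repo, so treat these as illustrative:

```python
# secrets.py -- keep this file out of version control!
# Key names here follow the common CircuitPython convention;
# check the repo's secrets.py for the exact names it expects.
secrets = {
    "ssid": "YourWiFiName",            # your WiFi network name
    "password": "YourWiFiPassword",    # your WiFi password
    "openai_api_key": "sk-REPLACE_ME", # the key you created earlier
}
```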
Code the MagTag!
This section shows you how the 'main.py' file works. We'll go through the pieces of the code where we set up and call the OpenAI API to learn more about how the API works and how you can use it to build other projects!
1. Open the 'main.py' file.
2. Configure the OpenAI API! This is done in the 'get_response()' function definition where we set the 'data' variable (line 60).
To call an OpenAI model, we need a few pieces of info (i.e., parameters):
- Model: this is the specific AI model that we're using. Our default is 'text-davinci-002' but you're welcome to play around with other models like gpt-3.5-turbo-0301!
- Prompt: a description of what we want the model to do and how we want it to behave. More info on this later :)
- Max tokens*: the maximum number of tokens the model will generate. Tokens are 'pieces of words' (1 token is roughly 4 characters in English; 100 tokens is about 75 words). Note: the prompt and completion together can use up to 4,097 tokens for this model.
- Temperature: from 0.0 to 2.0. This controls how random and creative the output is. Lower temperatures give more focused, predictable responses; higher temperatures give more varied answers and increase the chance of the model making things up (called 'hallucinating' in the AI world).
- Frequency_penalty: from -2.0 to 2.0. Positive values penalize new tokens based on their existing frequency in the text, which reduces the likelihood of repeated lines (default: 0).
- Presence_penalty: from -2.0 to 2.0. Positive values penalize new tokens that have already appeared in the text so far, which increases the likelihood of new topics (default: 0).
*More info on tokens: What are tokens and how to count them? | OpenAI Help Center
I recommend keeping the parameter values from the repo. If you want to explore how to change model behavior, start with temperature, then try adjusting frequency_penalty and/or presence_penalty.
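To make the parameters above concrete, here is a sketch of what a 'data' payload might look like. The values are illustrative, not necessarily the repo's exact defaults - main.py is the source of truth:

```python
# Illustrative OpenAI Completions payload; values follow the
# parameter ranges described above.
data = {
    "model": "text-davinci-002",   # try other models to experiment
    "prompt": "Share one short, uplifting fact.",
    "max_tokens": 150,             # cap on generated tokens (~4 chars each)
    "temperature": 0.7,            # 0.0-2.0; lower = more predictable
    "frequency_penalty": 0.8,      # positive = less repeated phrasing
    "presence_penalty": 0.6,       # positive = more new topics
}
```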
For example, if you want to see repeated info, you can lower 'frequency_penalty' toward -2.
3. Write a prompt for your ADA bot! For my project, I wanted fun, inspirational quotes and/or facts that were different every day.
To do this, I use three pieces of text to build the prompt:
- Context: This gives the model background info on how I want it to behave. My context is the following:
context = "You are a funny, accurate, hopeful, and enthusiastic educator. You love sharing inspiring quotes, messages of self care, and fascinating facts. Every day you give a short and complete fact or quote from a range of subjects like science, technology, history, geography, economics, engineering, mathematics, poetry, language, and more. Today you share a new uplifting or funny fact that is different than your other previous facts"
This context tells the model how I want it to present information (funny, accurate, hopeful, enthusiastic) and the kinds of information I want it to include (quotes, self-care, facts from range of subjects). The last sentence is designed to reduce the likelihood of the model repeating itself.
Play around with the context! Remove or add specific subjects and topics, call out specific people you want to get quotes from, etc. The context shapes the overall behavior of the model and tells it what to focus on.
- Prompt: This is what I want the model to do right now. My prompt is the following:
prompt = "Today you share a new uplifting or funny fact that is different than your other previous facts."
The goal of the prompt is to get the model to give me some piece of information - in this instance, a new fact! Again, play around with the prompt!
- Response: this is a string that holds the previous model response. I defined an empty string for the first response at the top of main.py (line 23):
response = ""
Each time we call the model to get a response, we add the newest response to the prompt, like so:
response = get_response(context + prompt + response)
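To see how the three pieces combine over repeated calls, here is a tiny standalone sketch. The get_response() stub and its return value are invented purely for illustration - the real function calls the OpenAI API:

```python
def get_response(full_prompt):
    # Stub in place of the real OpenAI call, just to show how the
    # previous answer is folded back into the next prompt.
    return "fact #{}".format(len(full_prompt))

context = "You are an enthusiastic educator. "
prompt = "Share a new fact different from your previous facts. "
response = ""
for _ in range(3):
    # Each call sees the prior response appended to the prompt,
    # nudging the model away from repeating itself.
    response = get_response(context + prompt + response)
```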
4. Call the OpenAI model!
For this example, we're using the get_response() function. Inside this function, we set up an HTTPS request (the https variable), include our authorization credentials (the headers variable), assemble the OpenAI API payload (the data variable), and then get a model response (the response variable) using the OpenAI API URL (set at the top of main.py on line 36), our data, and our credentials. Finally, we format the response so we can easily pull out the model's text.
When we call the function, we need to input the prompt. As we went over in the previous step, this is 3 pieces of info: context, prompt, and response:
response = get_response(context + prompt + response)
5. Print the model response to the MagTag screen!
I kept the text display simple because this tutorial focuses on how to call an OpenAI model. We configure the text display at the top of the main.py file (lines 15 - 21):
# Format text display on MagTag ('magtag' is the MagTag()
# instance created near the top of main.py)
magtag.add_text(
    text_scale=1,
    text_wrap=47,
    text_maxlen=600,
    text_position=(10, 10),
    text_anchor_point=(0, 0),
)
In the program's while loop, we set the display text to the model response and refresh the e-ink screen to show it:
magtag.set_text("ADA: \n{}".format(response))
magtag.refresh()
6. Put the board to sleep!
Our example gives us a new fact or quote every 24 hours by putting the board to sleep:
time.sleep(2)
print("Sleeping")
PAUSE = alarm.time.TimeAlarm(monotonic_time=time.monotonic() + 60 * 60 * 24)
alarm.exit_and_deep_sleep_until_alarms(PAUSE)
You can change how often you call the model by adjusting the PAUSE duration (e.g., using 60 instead of 60 * 60 * 24 would call the model and refresh the screen every minute).
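Since TimeAlarm just takes a number of seconds, a few handy durations you might swap in (these constants are my own, not from the repo):

```python
# Common refresh intervals, in seconds, for the TimeAlarm above.
ONE_MINUTE = 60
ONE_HOUR = 60 * 60
ONE_DAY = 60 * 60 * 24  # the tutorial's default: one new fact per day
```

Keep in mind each wake-up makes a fresh API call, so shorter intervals mean more OpenAI usage.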
Load the code onto the MagTag!
That's it! Drag-and-drop the secrets.py and main.py files onto the MagTag. If everything works correctly, you should see the e-ink display update with the text:
ADA: <some new tidbit of info, like "the human brain is capable of storing approximately 2.5 petabytes of information, which is the equivalent of about 3 million hours of video....">
If you enjoyed the project, please like and share with your friends! Leave a comment if you have any questions or if you run into any issues. Happy making, friends!
Going Further
You can make all sorts of fun displays on the MagTag! Check out Adafruit's collection of projects: Adafruit Learning System
Stay tuned for more adaptations on this project!