As the title suggests, this project pulls real-time YouTube comments and weather information and displays them on an OLED.
Let me introduce the hardware first. The body is a Raspberry Pi Pico board.
I chose it because I wanted a compact design. (It's also relatively simple.) Communication runs over Ethernet, using a W5100S + PoE board that I made myself. If you're curious, please refer to the link below!
What is PoE? My tough PoE development process
The OLED used is the very cheap and simple SSD1306, driven over I2C (the code below uses the I2C variant of the driver).
Once everything is connected, it ends up small and simple like this.
Now that the hardware connection is complete, let's design the software. I want to show weather information on the OLED, and whenever a new comment appears on a YouTube video I posted, show that comment on the OLED for 10 seconds. A device like this is fun and useful sitting in front of your computer. The simplified block diagram below is a good reference.
First is the weather information. I don't want to show raw data; I want it read out the way a weather forecaster would say it, so I decided to run it through the GPT API. GPT does not currently support real-time data, so I fetch the weather from Naver by web crawling. I implemented it in Python, which is relatively light and easy to work with.
import requests
from bs4 import BeautifulSoup

def getweather():
    # The main loop reads these after each call
    global weather, weatherPrint, temperature

    # Naver weather search for the Sunae (수내) neighborhood
    html = requests.get('http://search.naver.com/search.naver?query=수내+날씨')
    soup = BeautifulSoup(html.text, 'html.parser')

    weather = ''
    address = "Bundang Sunae"
    weather += address + '*'

    weather_data = soup.find('div', {'class': 'weather_info'})

    # Current temperature
    temperature = (str(weather_data.find('div', {'class': 'temperature_text'}).text.strip()[5:])[:-1])
    weather += temperature + '*'

    # Weather status
    weatherStatus = weather_data.find('span', {'class': 'weather before_slash'}).text
    if weatherStatus == '맑음':        # sunny
        weatherPrint = 'Sunny'
    elif weatherStatus == '흐림':      # cloudy
        weatherPrint = 'Cloudy'
    else:
        weatherPrint = weatherStatus   # fall back to the raw status text
    weather += weatherPrint
Web crawling like this can be done simply with the BeautifulSoup package.
Let me go over the crawling briefly.
In the script, the temperature is read from the temperature_text element.
The weather status is read from the span with class weather before_slash.
I wanted to read the status in English but couldn't find an English field, so I just hardcoded the translation... When you run this, it looks like the following.
It outputs temperature and weather like this.
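Instead of an if/elif chain, the hardcoded translation can also be kept in a small lookup table. This is my own sketch, not the original code; the Korean keys are statuses Naver may return (only a few are shown), and unknown statuses are passed through unchanged:

```python
# Hypothetical mapping from Naver's Korean weather statuses to English labels.
WEATHER_EN = {
    '맑음': 'Sunny',
    '흐림': 'Cloudy',
    '구름많음': 'Mostly Cloudy',
    '비': 'Rainy',
    '눈': 'Snowy',
}

def translate_status(status):
    # Fall back to the raw status text when there is no translation
    return WEATHER_EN.get(status, status)
```

This way, supporting a new status is one dictionary entry instead of another elif branch.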
YouTube API

Next, I will read the latest comments using the YouTube API.
import pandas # For Data
from googleapiclient.discovery import build # For Google-API
The API client comes from the googleapiclient.discovery package, and pandas is used to hold the comment data. Pandas is a staple package for handling tabular data like this.
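As a sketch of how the comment rows end up in pandas (the sample rows below are made up, not real API output), each row holds text, author, timestamp, and like count, and a DataFrame built from the list can be indexed by column and then row:

```python
import pandas

# Made-up rows mirroring the fields collected from the API:
# [textDisplay, authorDisplayName, publishedAt, likeCount]
comments = [
    ['Great video!', 'alice', '2023-03-01T10:00:00Z', 3],
    ['Thanks for sharing', 'bob', '2023-03-01T11:00:00Z', 1],
]
df = pandas.DataFrame(comments)

# df[0][0] is column 0 (the comment text) of row 0,
# the same expression the main loop uses to detect a new comment.
print(df[0][0])  # Great video!
```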
def getYoutubecomments(youtubeID):
    comments = list()
    api_obj = build('youtube', 'v3', developerKey='Youtube_Key')
    response = api_obj.commentThreads().list(part='snippet,replies', videoId=youtubeID, maxResults=100).execute()

    global df
    while response:
        for item in response['items']:
            comment = item['snippet']['topLevelComment']['snippet']
            comments.append(
                [comment['textDisplay'], comment['authorDisplayName'], comment['publishedAt'], comment['likeCount']])
            if item['snippet']['totalReplyCount'] > 0:
                for reply_item in item['replies']['comments']:
                    reply = reply_item['snippet']
                    comments.append(
                        [reply['textDisplay'], reply['authorDisplayName'], reply['publishedAt'], reply['likeCount']])
        if 'nextPageToken' in response:
            response = api_obj.commentThreads().list(part='snippet,replies', videoId=youtubeID,
                                                     pageToken=response['nextPageToken'], maxResults=1).execute()
        else:
            break

    # Store the collected rows so the main loop can read the latest comment via df[0][0]
    df = pandas.DataFrame(comments)
A YouTube API key is required before fetching comments. This is a unique key that each person must issue for themselves; I issued mine by following the link below.
With the key issued, I put it in 'Youtube_Key'. Then I entered the ID of the YouTube video I wanted.
youtubeID = 'et1o0K53O4Y'
The YouTube ID is located at the end of the video's address.
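If you want to pull the ID out of a full watch URL programmatically, a small helper like this works. The helper is my own convenience, not part of the original script; it assumes the standard youtube.com/watch?v=... address form:

```python
from urllib.parse import urlparse, parse_qs

def extract_video_id(url):
    """Return the v= query parameter from a youtube.com/watch URL."""
    query = parse_qs(urlparse(url).query)
    return query['v'][0]

print(extract_video_id('https://www.youtube.com/watch?v=et1o0K53O4Y'))  # et1o0K53O4Y
```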
If you run it, you can see the comments come up nicely.
W5100S Ethernet Connection

Before polishing the data with ChatGPT, let's first send it to the device over Ethernet.
The library for using the WIZnet Ethernet chip with the RP2040, the Pico's MCU, is at the link above. It lets you attach a W5100S to the Pico very simply.
SSD1306 OLED Display

from usocket import socket
from machine import Pin, SPI, I2C
import network
import time
from ssd1306 import SSD1306_I2C
logo = [
0x0f, 0x80, 0x7c, 0x00, 0x78, 0x61, 0x87, 0x80, 0x40, 0x12, 0x00, 0x80, 0x42, 0x0c, 0x00, 0x80,
0x41, 0x84, 0x30, 0x80, 0x20, 0x44, 0xc1, 0x00, 0x20, 0x25, 0x01, 0x00, 0x10, 0x1e, 0x02, 0x00,
0x0c, 0x3f, 0x8c, 0x00, 0x07, 0xe1, 0xf8, 0x00, 0x08, 0x00, 0x04, 0x00, 0x10, 0xc0, 0x42, 0x00,
0x11, 0xe1, 0xf2, 0x00, 0x37, 0x1e, 0x1b, 0x00, 0x3c, 0x0c, 0x0f, 0x00, 0x68, 0x04, 0x05, 0x80,
0xc8, 0x04, 0x04, 0x80, 0x88, 0x0c, 0x04, 0x40, 0x88, 0x1e, 0x04, 0x40, 0x8c, 0x1f, 0x0c, 0x40,
0xdf, 0xe0, 0xfe, 0xc0, 0x7f, 0xc0, 0x73, 0x80, 0x61, 0x80, 0x61, 0x80, 0x21, 0x80, 0x41, 0x00,
0x20, 0x80, 0x41, 0x00, 0x30, 0xc0, 0x83, 0x00, 0x10, 0xff, 0x82, 0x00, 0x0c, 0xff, 0xcc, 0x00,
0x03, 0x80, 0x70, 0x00, 0x01, 0x80, 0x60, 0x00, 0x00, 0x61, 0x80, 0x00, 0x00, 0x1e, 0x00, 0x00]
# init I2C
i2c = I2C(0, sda=Pin(0), scl=Pin(1), freq=400000)
oled = SSD1306_I2C(128, 64, i2c)

# W5x00 chip init
def w5x00_init():
    spi = SPI(0, 2_000_000, mosi=Pin(19), miso=Pin(16), sck=Pin(18))
    nic = network.WIZNET5K(spi, Pin(17), Pin(20))  # spi, cs, reset pin
    nic.active(True)

    # Static IP (no DHCP)
    nic.ifconfig(('192.168.0.20', '255.255.255.0', '192.168.0.1', '8.8.8.8'))
    # DHCP
    # nic.ifconfig('dhcp')

    print('IP address :', nic.ifconfig())
    while not nic.isconnected():
        time.sleep(1)
        print(nic.regs())
def client_loop():
    s = socket()
    s.connect(('192.168.0.8', 5000))  # Destination IP address (the server PC)
    print("Client Connect!")
    while True:
        data = s.recv(2048)
        text = data.decode('utf-8')
        oled.fill(0)
        oled.show()
        if text != 'NULL':
            s.send(data)
            if text[0] == '-':
                # YouTube comment: strip the '-' marker and wrap across six lines
                oled.text(text[1:15], 0, 0)
                oled.text(text[15:30], 0, 10)
                oled.text(text[30:45], 0, 20)
                oled.text(text[45:60], 0, 30)
                oled.text(text[60:75], 0, 40)
                oled.text(text[75:90], 0, 50)
            else:
                # Weather: address, temperature, and status separated by '*'
                weather = text.split("*")
                oled.text(weather[0], 0, 0)
                oled.text(weather[1], 0, 10)
                oled.text(weather[2], 0, 20)
        oled.show()
        time.sleep(1)

def main():
    w5x00_init()
    ### TCP CLIENT ###
    client_loop()

if __name__ == "__main__":
    main()
The code above receives data over Ethernet and draws it on the OLED. The OLED is easy to drive with the ssd1306 package. To let the client tell the two kinds of data apart, the server prefixes YouTube comments with '-'; the client checks for that marker and renders either the comment or the weather fields on the OLED.
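The framing the server and client agree on can be sketched as a few small helpers. The function names are mine (the original inlines this logic), but the '-' prefix and '*' separators match what the code above expects:

```python
def frame_comment(text):
    # Comments are prefixed with '-' so the client can tell them apart
    return ('-' + text).encode('utf-8')

def frame_weather(address, temperature, status):
    # Weather fields are joined with '*' and split again on the client
    return '*'.join([address, temperature, status]).encode('utf-8')

def parse_frame(data):
    # Client side: decode, then dispatch on the leading marker
    text = data.decode('utf-8')
    if text.startswith('-'):
        return ('comment', text[1:])
    return ('weather', text.split('*'))
```

One thing to note about this scheme: it assumes neither '-' at the start of weather data nor '*' inside a comment, which holds for the fixed weather format used here.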
Okay, it works up to this point: the PC fetches the weather information and the YouTube comments, runs a server, and the Pico connects to it over PoE.
When the client is connected to the server, the server sends the data, and the client displays the data on the OLED.
It works as shown in the video above. If you post a YouTube comment, it is reflected in about 20 to 30 seconds.
This is the last step: passing the data through ChatGPT. The ChatGPT 3.5 API was officially released recently, which makes this easy to implement. Like YouTube, ChatGPT requires an API key.
I got my API key by following the link in that post. With the key in hand, let's continue with the code.
import openai
For ChatGPT, all you need is the single openai package.
def Chat_GPT_Comments(api_key, query):
    global answer
    openai.api_key = api_key
    model = "gpt-3.5-turbo"
    messages = [
        {
            "role": "system",
            "content": "You are a machine that gently changes negative or bad writing."
        },
        {
            "role": "user",
            "content": query
        }
    ]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages
    )
    answer = response['choices'][0]['message']['content']

def Chat_GPT_Weather(api_key, query):
    global answer
    openai.api_key = api_key
    model = "gpt-3.5-turbo"
    messages = [
        {
            "role": "system",
            "content": "You are a weather forecaster who sums up weather conditions well."
        },
        {
            "role": "user",
            "content": query
        }
    ]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages
    )
    answer = response['choices'][0]['message']['content']
First, I wrote the calls as two functions: one that sends the weather information to GPT, and one that sends YouTube comments to GPT. It could be done more simply with one function, an extra input parameter, and an if statement, but I didn't bother for now. The internal structure is simple: call openai.ChatCompletion.create with the model and the messages. The system message sets GPT's role, the user message gives the order, and the response is read into answer.
while True:
    getweather()
    getYoutubecomments(youtubeID)
    if compareText == df[0][0]:
        query = "The current weather is very {} with {} degrees. Summarize this weather as the weatherman says. In one sentence, please.".format(weatherPrint, temperature)
        print(f"1:query={query}")
        Chat_GPT_Weather(api_key, query)
        print(f"1:answer={answer}")
        result = connectionSock.send(answer.encode('utf-8'))
        print(f"1:result={result}")
    else:
        if len(df[0][0]) > 30:
            query = "'{}' Please summarize the sentence in 60 letters or less than 60 letters.".format(df[0][0])
            print(f"2:query={query}")
            Chat_GPT_Comments(api_key, query)
            compareText = df[0][0]
            answer = '-' + answer
            connectionSock.send(answer.encode('utf-8'))
            print(f"2:answer={answer}")
        else:
            compareText = df[0][0]
            commentsprint = '-' + df[0][0]
            connectionSock.send(commentsprint.encode('utf-8'))
            print(f"2:commentsprint={commentsprint}")
    time.sleep(10)
And the main loop is written like this. It keeps fetching weather data and handing it to GPT to rephrase the way a forecaster would; the result goes to the Pico, which draws it on the OLED. If there is a new YouTube comment, it goes to GPT too: any negative wording is softened, and since the OLED can't display long text, comments over the length limit are summarized down. The Pico then displays the result.
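The decision the loop makes each cycle can be distilled into a pure function. This is a sketch for clarity, not code from the project; the real loop also calls GPT and sends over the socket, and the threshold mirrors the len > 30 check above:

```python
def choose_action(latest_comment, last_seen, max_len=30):
    """Decide what the server should do on one cycle of the loop.

    Returns 'weather' when there is no new comment, 'summarize' when a
    new comment is too long for the OLED, and 'comment' otherwise.
    """
    if latest_comment == last_seen:
        return 'weather'
    if len(latest_comment) > max_len:
        return 'summarize'
    return 'comment'
```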
I printed out each step to check the flow.
First, a YouTube comment pops up.
Then the prompt built from the current temperature and weather appears.
And finally GPT's response appears. It works correctly.
A full demonstration of the project's operation follows. I kept the project simple like this. It was a little disappointing that the OLED is so small and can't express much. Next time, implementing it with a slightly larger LCD could make for a better project.