Hi!! I have spent quite some time building a humanoid model and making it functional.
In recent months I installed a sonar that uses an Arduino and a visual interface written in Processing. A servo rotates the head left and right, and when the ultrasonic transducer detects an object, a voice message is played: one if the object is near, another if it is farther away, and a third if no object is detected. The visual interface shows the distance, the detection angle of the object, and so on.
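To give an idea of how the PC side of this sonar can work, here is a minimal illustrative sketch, not the exact project code: it assumes the Arduino prints one "angle,distance" line per reading over serial at 9600 baud, and the port name and distance thresholds are placeholders to adjust for your own setup.

# Illustrative only: PC side of the sonar. Assumes the Arduino sweeps the
# servo and prints "angle,distance" lines over serial at 9600 baud; the port
# name and the thresholds below are assumptions, not the project's real values.
import serial
import pyttsx3

engine = pyttsx3.init()
ser = serial.Serial('COM3', 9600, timeout=1)  # adjust the port for your system

last_state = None
while True:
    line = ser.readline().decode('ascii', errors='ignore').strip()
    if not line:
        continue
    try:
        angle, distance = (int(x) for x in line.split(','))
    except ValueError:
        continue  # skip malformed lines
    # Classify the reading into the three spoken states
    if distance <= 0 or distance > 200:
        state, message = 'none', 'No object detected'
    elif distance < 30:
        state, message = 'near', 'Object is near'
    else:
        state, message = 'far', 'Object detected at a distance'
    print(f'angle={angle} distance={distance} cm')
    if state != last_state:  # speak only when the state changes, not every reading
        engine.say(message)
        engine.runAndWait()
        last_state = state

Speaking only on state changes keeps the voice from repeating the same message on every reading of the sweep.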
I have also worked on a very simple AI with a limited vocabulary that answers back some questions. It is not an intuitive system, but I am really excited about the possibilities.
At the moment I am working on the motors and motor drivers and on controlling them with a commercial artificial intelligence (in its free version), which is a real wonder, but it only performs serial communication with the Arduino in the paid version, so controlling external devices through voice commands to the AI is something I have not achieved yet.
I am also working on a face that I hope to endow with facial expressions, and on additional head movements such as tilting and lateral flexion.
Sorry for the poor quality of the videos: I made them with a low-end Android phone, so the audio and video suffer.
If you are interested in any part of this work, or think any of it could be useful to you, feel free to ask or comment; I will answer you with pleasure.
I reiterate my request for collaboration: everyone with an idea is welcome!!!
- This is the test of the first robotic hand.
- This is the test of the legs' motion; at that time I did not have PWM to control the speed.
- This is the test of the fourth prototype of the robotic hand; if you look carefully, there is no motor assembly in the arm or forearm.
- This is the test of the fourth prototype of the robotic hand, without the glove.
- This is the test of head rotation and tilt movements.
- This is the ultrasonic transducer test. The blinking of the LED over the left eye is the proximity indicator. In this video the Processing sketch was not running, so the distance and detection-angle data cannot be seen, and the AI's spoken reactions cannot be heard either.
- Hello friends, this is the test of the movement of the finger motor; the motor itself has no movement.
I present the operation codes for lateral head movement and object detection. I know it is a very simple solution, but it has worked very well for me. Currently I am working on the development of the artificial intelligence in Python, and I am progressing very well!
If you are interested in this part of the project and want to develop it yourself but run into problems with the implementation, welcome! I will gladly clarify any questions you have about this or any other part of the project! A rough illustration of the head movement follows below.
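The operation codes themselves run on the Arduino; as a rough illustration of the lateral head movement driven from the PC instead, here is a small Python sketch that sweeps the head servo by sending angle commands over serial. The "A<angle>" command format is only an assumption for the example, not the project's actual protocol:

# Illustrative only: sweep the head servo left/right from the PC over serial.
# Assumes an Arduino sketch that parses lines like "A90" as "move to 90 degrees";
# that command format is an assumption, not this project's actual protocol.
import time
import serial

ser = serial.Serial('COM3', 9600, timeout=1)
time.sleep(2)  # give the Arduino time to reset after the port opens

def set_head_angle(angle):
    angle = max(0, min(180, angle))           # keep within servo limits
    ser.write(f'A{angle}\n'.encode('ascii'))  # hypothetical command format

while True:
    for angle in range(30, 151, 5):    # sweep to one side
        set_head_angle(angle)
        time.sleep(0.05)
    for angle in range(150, 29, -5):   # sweep back
        set_head_angle(angle)
        time.sleep(0.05)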
- Hello friends, this is a test of the new MOT135 servos for the head's rotation and tilt movements, using face detection with OpenCV; when a face is detected, the program sends a greeting message. And today the system works in Python. Honestly, I don't quite know how I got it working, but I did it: in Python, a voice command activates a series of movements. Currently the program handles four commands with different orders: say yes, say no, move the head around, or look at me, using PySerial, a voice recognizer, pyttsx3, the Arduino speedvarservo sketch, and a lot of luck!
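For anyone curious about the face-detection part, a minimal sketch of the idea looks like this. It uses the stock Haar cascade that ships with OpenCV; the greeting text and the cooldown time are placeholder choices, not the exact project code:

# Illustrative sketch of the face-detection greeting: OpenCV's stock Haar
# cascade finds a face in the webcam image and the program speaks a greeting.
# The project's actual code differs; the cascade choice, greeting text, and
# cooldown period here are assumptions.
import time
import cv2
import pyttsx3

engine = pyttsx3.init()
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)
last_greeting = 0.0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0 and time.time() - last_greeting > 10:  # 10 s cooldown
        engine.say('Hello, human!')
        engine.runAndWait()
        last_greeting = time.time()
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('face detection', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()

The cooldown keeps the robot from greeting the same face on every single frame.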
- Greetings friends!!!! After a good while, my project has new advances: now my two humanoid robots have their own artificial intelligence, a prototype created in Python 3.8 using pyttsx3, speech recognition, and a serial-communication system. I know these are well-known tools, but this is my system: it is simple, it is fast (it runs very well on Windows 7), and it has a dynamic PyQt5 interface that plays a GIF while the program runs. It's funny, I did not think I would create something like this, but I did it!!!
- Sorry for not presenting videos of this yet; I am very busy with development. It is not a completed work; this is a short sample:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import sys
import threading
import datetime
import speech_recognition as sr
import pyttsx3
import pyjokes
from PyQt5 import uic, QtWidgets  # uic loads the .ui file; QtWidgets provides the window classes
from PyQt5.QtWidgets import QLabel
from PyQt5.QtGui import QMovie

qtCreatorFile = "AssistantGif.ui"  # Name of the UI file goes here.
Ui_MainWindow, QtBaseClass = uic.loadUiType(qtCreatorFile)  # The uic module loads the file

class VentanaPrincipal(QtWidgets.QMainWindow, Ui_MainWindow):  # The main window
    def __init__(self):  # Class constructor
        QtWidgets.QMainWindow.__init__(self)
        Ui_MainWindow.__init__(self)
        self.setupUi(self)  # Builds the window from the .ui description
        # Label that plays the animated GIF while the assistant runs
        self.I = QLabel(self)
        self.I.resize(450, 500)
        self.movi = QMovie("original.gif")
        self.I.setMovie(self.movi)
        self.movi.start()
        # Box to write in
        self.the_line = QtWidgets.QLineEdit(self)
        self.the_line.move(550, 140)  # Location of the box
        self.the_line.resize(200, 25)
        self.the_line.setStyleSheet("color: rgb(255, 255, 255);")

# Here goes our functional code: speech recognition and text-to-speech setup
listener = sr.Recognizer()
engine = pyttsx3.init()
engine.setProperty('rate', 130)
volume = engine.getProperty('volume')
engine.setProperty('volume', volume - 0.4)
voices = engine.getProperty('voices')
engine.setProperty('voice', voices[0].id)

def speak(audio):
    print('Assistant: ' + audio)
    engine.say(audio)
    engine.runAndWait()

def greet_by_time():
    # Greet according to the current hour
    currentH = datetime.datetime.now().hour
    if 0 <= currentH < 12:
        speak('Good Morning!')
    elif 12 <= currentH < 18:
        speak('Good Afternoon!')
    else:
        speak('Good Evening!')

def take_command():
    # Listen on the microphone and return the recognized text, lowercased
    command = ''
    try:
        with sr.Microphone() as source:
            print('Listening...')
            voice = listener.listen(source)
        command = listener.recognize_google(voice, language='en-US').lower()
        if 'assistant' in command:  # compare lowercased, since command was lowercased
            command = command.replace('assistant', '')
        print(command)
    except sr.UnknownValueError:
        pass  # nothing intelligible was heard; return the empty command
    except sr.RequestError:
        pass  # the recognition service could not be reached
    return command

def run_assistant():
    command = take_command()
    now = datetime.datetime.now()  # read the clock on each request, not once at startup
    if 'what time is it' in command:
        print("Current date and time: ")
        print(now.strftime("The time is %H:%M"))
        speak(now.strftime("The time is %H:%M"))
    elif 'goodbye' in command:
        speak("Hasta la vista... Baby!")
        os._exit(0)  # hard-exit the whole program, GUI included
    elif 'what is my phone number' in command:
        speak('xx xx xx xx xx is your phone number sir')
    elif 'tell me a joke' in command:
        speak(pyjokes.get_joke('en'))
    elif command:  # only complain when something was actually heard
        speak("Just now I'm not ready for this")

def assistant_loop():
    greet_by_time()
    while True:
        run_assistant()

if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    window = VentanaPrincipal()
    window.show()
    # The voice loop runs in a background thread so the Qt GUI stays responsive
    threading.Thread(target=assistant_loop, daemon=True).start()
    sys.exit(app.exec_())