Jean-Christophe Owens
Published

PYNQ-THNK SeCuRiTy

Have you ever wanted to think less and make your world more efficient?! A new security system with speech recognition through PYNQ!

Intermediate | Full instructions provided | 3 hours | 933 views

Things used in this project

Hardware components

AMD PYNQ-Z2 board
×1

Software apps and online services

Jupyter Notebook

Hand tools and fabrication machines

Apple Microphone Earphones/Headset

Story


Code

PYNQ-THNK

Python
Must use Jupyter Notebook to work with the PYNQ framework
# coding: utf-8

# --------------------------------

# # PYNQ Project SERG 2020
# Date: 08/5/2020 
# Supervisor: Rebecca To
# Author: Jean-Christophe Owens
# SERG Team 2020
# 
# **************************************************************************************************************************
# Project Description:
# The project reads audio from the microphone block in the base overlay, then sends the recording to a cloud API that performs speech recognition. Once the transcript is processed and weighted keywords are checked, a series of LED outputs displays green for "Login success" or red for "Login failed".
# The theme of the project is embedded security... Spoooky!
# 
# There is an easter egg with a specific command if you say it correctly!
# **************************************************************************************************************************
# 
# 1) Connecting Audio Overlay
# 
# 2) Connecting GPIO (LED) Overlay
# 
# 3) Processing Speech Recognition
# 
# 4) Outputting Login Credentials
# 
# **************************************************************************************************************************
# 
# 

# ### Connecting Audio Overlay

# In[9]:


#Imports
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")


# In[10]:


#Audio Function
PAudio = base.audio
Roo = "LivingRoom.wav"


# #### Select specific Audio line (Hp | mic)

# Record from the microphone for 5 seconds and save the data in the folder path

# In[53]:


#Selecting MIC block
PAudio.select_microphone()


# Check microphone is working

# In[54]:


#Read Block
#PAudio.bypass(seconds = 5)


# In[66]:


PAudio.record(5)


# In[67]:


#Save in path of folder
PAudio.save("LivingRoom.wav")


# ______________________________________

# #### Log Data

# In[68]:


#Load the recording back from storage into a media player
PAudio.load("/home/xilinx/jupyter_notebooks/Xilinx_2020_Capstone_SERG/LivingRoom.wav")
#PAudio.play()


# In[69]:


from IPython.display import Audio as IPAudio
IPAudio("/home/xilinx/jupyter_notebooks/Xilinx_2020_Capstone_SERG/LivingRoom.wav")


# ## Plotting PCM data
# 
# Users can display the audio data in notebook:
# 
# 1. Plot the audio signal's amplitude over time.
# 2. Plot the spectrogram of the audio signal.
# 
# The next cell reads the saved audio file and processes it into a `numpy` array.
# Note that if the audio sample width is not standard, additional processing
# is required. In the following example, the `sample_width` is read from the
# wave file itself (24-bit dual-channel PCM audio, where `sample_width` is 3 bytes).

# In[70]:


get_ipython().run_line_magic('matplotlib', 'inline')
import wave
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from scipy.fftpack import fft

#*************************************************************************************************************************
#switch if necessary
#wav_path = "/home/xilinx/jupyter_notebooks/Xilinx_2020_Capstone_SERG/LivingRoom.wav"
wav_path = "/home/xilinx/jupyter_notebooks/base/audio/recording_0.wav"
#*************************************************************************************************************************

with wave.open(wav_path, 'r') as wav_file:
    raw_frames = wav_file.readframes(-1)
    num_frames = wav_file.getnframes()
    num_channels = wav_file.getnchannels()
    sample_rate = wav_file.getframerate()
    sample_width = wav_file.getsampwidth()
    
temp_buffer = np.empty((num_frames, num_channels, 4), dtype=np.uint8)
raw_bytes = np.frombuffer(raw_frames, dtype=np.uint8)
temp_buffer[:, :, :sample_width] = raw_bytes.reshape(-1, num_channels,
                                                     sample_width)
# Replicate the sign bit of each 24-bit sample into the padding byte (sign extension)
temp_buffer[:, :, sample_width:] = (temp_buffer[:, :, sample_width-1:sample_width] >> 7) * 255
frames = temp_buffer.view('<i4').reshape(temp_buffer.shape[:-1])
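# The sign-extension step above is easy to get wrong, so here is a minimal,
# self-contained sanity check of the same trick on two hand-built little-endian
# 24-bit samples (+1 and -1 in two's complement). Variable names are local to
# this check and do not touch the notebook's own `frames`.

```python
import numpy as np

sw, nch = 3, 1  # 24-bit mono for the check
# 0x000001 = +1 and 0xFFFFFF = -1 in 24-bit two's complement, little-endian
raw24 = np.array([0x01, 0x00, 0x00, 0xFF, 0xFF, 0xFF], dtype=np.uint8)
nframes = len(raw24) // (sw * nch)

buf = np.empty((nframes, nch, 4), dtype=np.uint8)
buf[:, :, :sw] = raw24.reshape(-1, nch, sw)
# Replicate the sign bit of the top byte into the padding byte
buf[:, :, sw:] = (buf[:, :, sw-1:sw] >> 7) * 255
frames24 = buf.view('<i4').reshape(buf.shape[:-1])
print(frames24.ravel())  # [ 1 -1]
```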


# In[71]:


for channel_index in range(num_channels):
    plt.figure(num=None, figsize=(15, 3))
    plt.title('Audio in Time Domain (Channel {})'.format(channel_index))
    plt.xlabel('Time in s')
    plt.ylabel('Amplitude')
    time_axis = np.arange(num_frames) / sample_rate  # avoids float arange off-by-one
    plt.plot(time_axis, frames[:, channel_index])
    plt.show()
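# The introduction above promises both a time-domain plot and a spectrogram,
# but only the former is plotted. This is a minimal sketch of the missing
# spectrogram step using `scipy.signal.spectrogram`; it runs on a synthetic
# 1 kHz tone so it is self-contained, but in the notebook you would pass
# `frames[:, channel_index]` and the wav file's own `sample_rate` instead.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; unnecessary inside Jupyter
import matplotlib.pyplot as plt
from scipy import signal

fs = 48000                           # assumed sample rate for the sketch
t = np.arange(0, 1.0, 1 / fs)        # 1 second of samples
tone = np.sin(2 * np.pi * 1000 * t)  # stand-in for frames[:, channel_index]

f, times, Sxx = signal.spectrogram(tone, fs=fs)

plt.figure(figsize=(15, 3))
plt.title('Audio Spectrogram (Channel 0)')
plt.xlabel('Time in s')
plt.ylabel('Frequency in Hz')
plt.pcolormesh(times, f, Sxx, shading='gouraud')
plt.show()

# The dominant frequency bin should land near the 1 kHz tone
peak_hz = f[np.argmax(Sxx.sum(axis=1))]
```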


# -----------------------------------

# ### Connecting GPIO (LED) Overlay

# In[72]:


#**************************************
# The imports below are required for GPIO (LED) access
#**************************************

#Libraries 
import time 
from time import sleep
from pynq import Overlay
base = Overlay("base.bit")


# In[73]:


#Declare Variables for specific colours
RGBLEDS_XGPIO_OFFSET = 0
RGBLEDS_START_INDEX = 4
RGB_CLEAR = 0
RGB_BLUE = 1
RGB_GREEN = 2
RGB_CYAN = 3
RGB_RED = 4
RGB_MAGENTA = 5
RGB_YELLOW = 6


# In[74]:


#*************** Block for LEDs to work *********************

#Blink 3 times if in a loop or with a counter (no counter in this project)
count = 3

'''
def main():
    print("****** Execute Program ******")
    print("Make the LED turn on and off")
    
#Rainbow colours
    LED_GREEN()
    sleep(1)
    LED_BLUE()
    sleep(1)
    LED_RED()
    sleep(1)
    LED_CYAN()
    sleep(1)
    LED_MARG()
    sleep(1)
    LED_YELLOW()
    sleep(1)
    LED_OFF()
    
#     *************************
# Make a while loop
'''
# *********************COLOUR FUNCTIONS***********************
#Call functions for after processing

def LED_GREEN():
    base.btns_gpio.read()
    base.rgbleds_gpio.write(0, RGB_GREEN)
    
    
def LED_BLUE():
    base.btns_gpio.read()
    base.rgbleds_gpio.write(0, RGB_BLUE)
    
    
def LED_RED():
    base.btns_gpio.read()
    base.rgbleds_gpio.write(0, RGB_RED)
    
    
def LED_CYAN():
    base.btns_gpio.read()
    base.rgbleds_gpio.write(0, RGB_CYAN)
    
    
def LED_MARG():
    base.btns_gpio.read()
    base.rgbleds_gpio.write(0, RGB_MAGENTA)

def LED_YELLOW():
    base.btns_gpio.read()
    base.rgbleds_gpio.write(0, RGB_YELLOW)

# ********************************************
def LED_OFF():
    base.btns_gpio.read()
    base.rgbleds_gpio.write(0, RGB_CLEAR)

    


# ----------

# ### Processing Speech Recognition
# Uses a cloud speech-to-text API (Google, via the speech_recognition package) to transcribe the recording

# In[75]:


import speech_recognition as sr

#*********** Use recordings as a basis to show speech recognition works ***********
# If no audio was captured, there is no text to transform.
# You must record in the earlier section (Step 2 - Audio overlay) for there to
# be words to recognise.
#**********************************************************************************

#path = "/home/xilinx/jupyter_notebooks/base/audio/recording_0.wav"
path = "/home/xilinx/jupyter_notebooks/Xilinx_2020_Capstone_SERG/LivingRoom.wav"

r = sr.Recognizer()
with sr.AudioFile(path) as source:  # sr.WavFile is deprecated in favour of sr.AudioFile
    print("Wav File : ", path)
    audio = r.record(source)  # record (not listen) reads the whole file
text = r.recognize_google(audio)
print(text)


# ### Outputting Login Credentials

# The output stage compares words with keywords once the transcript string has been split.
# The input is the wav file recorded earlier from the microphone (5 seconds).
# That data is then converted from speech --> text.
# Compare the data with keywords (ideally credentials, for security):
#     If correct = Green - Login success
#     If wrong   = Red   - Login failed
#     
# Easter egg for special words!
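# A minimal, testable sketch of the comparison logic described above. The
# keyword sets are illustrative placeholders; swap in your own credentials.
# Matching on lower-cased words covers both 'Hello' and 'hello'.

```python
LOGIN_WORDS = {"hello"}                                    # login credential
EGG_WORDS = {"tim", "jules", "xilinx", "serg", "rebecca"}  # easter-egg triggers

def check_login(transcript):
    """Classify a speech transcript as 'easter_egg', 'success', or 'failed'."""
    words = {w.lower() for w in transcript.split()}
    if words & EGG_WORDS:
        return "easter_egg"
    if words & LOGIN_WORDS:
        return "success"
    return "failed"

print(check_login("Hello PYNQ"))      # success
print(check_login("open up Xilinx"))  # easter_egg
print(check_login("let me in"))       # failed
```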

# In[78]:


#Having Audio Block
#Having LED Block
#Having Recognizer
#-------------------
#Output lights based on audio

pw = text.split()
print (pw)

#State Machine
#Change the login keyword to your liking.
#Matching is done on lower-cased words, so 'Hello' and 'hello' both count.
words = {w.lower() for w in pw}

if 'hello' in words:
    print("Login Success")
    LED_GREEN()
    sleep(5)
    LED_OFF()

#     *************************
if words & {'tim', 'jules', 'xilinx', 'serg', 'rebecca'}:
    print("You have unlocked God mode, you don't even need security anymore, you have hacked the InTeRnEt \r\n")
# Rainbow colours
    LED_GREEN()
    sleep(1)
    LED_BLUE()
    sleep(1)
    LED_RED()
    sleep(1)
    LED_CYAN()
    sleep(1)
    LED_MARG()
    sleep(1)
    LED_YELLOW()
    sleep(1)
    LED_OFF()

#     *************************

if not words & {'hello', 'tim', 'jules', 'xilinx', 'serg', 'rebecca'}:
    print("Login Failed")
    LED_RED()
    sleep(5)
    LED_OFF()
    

Credits

Jean-Christophe Owens
8 projects • 17 followers
Just a rogue hacker! Love to make embedded-related projects, playing mostly with Xilinx tools, Vivado and Vitis. Love making and inventing!
Contact
