The idea was to design a complete, intelligent automation system that plays a flute, right from reading the sheet music. It was a challenging task, and although I could not fully complete the project within the provided timeline, I feel I have made significant strides in attempting a novel concept and getting it as close to completion as possible. It has been an arduous yet very fulfilling journey with this project so far, and with the Ultra96 board in particular.
The first step was to build a box with all the fixtures for controlling the actuators. I went about this as described below.
- Built a wooden box from MDF boards.
- Built six rack-and-pinion cartridges for the linear motion that closes/opens the flute holes.
- For the air blow, I went with a simple axial blower fan. This turned out to be the hardest part, as I could not get the blow aligned with the flute hole well enough to produce a nice tone.
- Fixed the Ultra96 and the level translator to one of the front faces of the box. The motor drivers were mounted on top, closer to the rack-and-pinion mechanism.
I designed a state-machine-based PWM controller that allows simultaneous control of all six motors (and the blower fan), with a duty cycle settable from 1% to 99%; this range limit comes essentially from the motor driver side.
- FPGA Pin details (for the PWM drive of motors and fan)
I used GPIO control (rather than AXI-Stream) so that I could control the module precisely. The state machine flow is described below.
- PWM Duty Cycle Input - The PWM duty cycle for each of the six motor drives (and the fan) can be loaded in this state. Seven 7-bit registers are instantiated, and the duty cycle value is stored for the drive selected by the device ID.
- Clock Counter - A clock division counter provides a 100 us tick for the PWM generator; together with the 0-100 duty cycle count, this gives a PWM period of roughly 10 ms (about 100 Hz).
- Duty Cycle Counter - A per-drive counter steps through the PWM period and is compared against the duty cycle loaded in the first state.
- Drive Enable - The drive enables can be set at this stage - one, several, or all of the devices.
The PWM captured during testing is shown below: the top trace is an 18% duty cycle and the bottom one is 45%.
- Verilog code for the PWM driver module
`timescale 1ns / 1ps
//////////////////////////////////////////////////////////////////////////////////
// Create Date: 11/29/2018 07:07:27 PM
// Module Name: pwm_generator
// Description: PWM generator for six motor drives and one blower fan,
//              controlled over GPIO (data, device_id, command_id, strobe_en)
//////////////////////////////////////////////////////////////////////////////////
module pwm_generator(
input clock_in,
input [7:0] data,
input [2:0] device_id,
input [1:0] command_id,
input strobe_en,
output pwm_out_m1a,
output pwm_out_m1b,
output pwm_out_m2a,
output pwm_out_m2b,
output pwm_out_m3a,
output pwm_out_m3b,
output pwm_out_m4a,
output pwm_out_m4b,
output pwm_out_m5a,
output pwm_out_m5b,
output pwm_out_m6a,
output pwm_out_m6b,
output pwm_out_f1a,
output pwm_out_f1b
);
parameter [1:0] IDLE = 2'd0;
parameter [1:0] LOAD_DC_DIR = 2'd1;
parameter [1:0] CONTROL_DEVICE = 2'd2;
parameter [2:0] MOTOR1 = 3'd1;
parameter [2:0] MOTOR2 = 3'd2;
parameter [2:0] MOTOR3 = 3'd3;
parameter [2:0] MOTOR4 = 3'd4;
parameter [2:0] MOTOR5 = 3'd5;
parameter [2:0] MOTOR6 = 3'd6;
parameter [2:0] FAN = 3'd7;
reg strobe_en_buf;
wire edge_strobe;
reg [7:0] count_mot1;
reg [7:0] count_mot2;
reg [7:0] count_mot3;
reg [7:0] count_mot4;
reg [7:0] count_mot5;
reg [7:0] count_mot6;
reg [7:0] count_fan;
reg [6:0] duty_cycle_mot1;
reg [6:0] duty_cycle_mot2;
reg [6:0] duty_cycle_mot3;
reg [6:0] duty_cycle_mot4;
reg [6:0] duty_cycle_mot5;
reg [6:0] duty_cycle_mot6;
reg [6:0] duty_cycle_fan;
reg dir_mot1;
reg dir_mot2;
reg dir_mot3;
reg dir_mot4;
reg dir_mot5;
reg dir_mot6;
reg dir_fan;
reg enable_mot1;
reg enable_mot2;
reg enable_mot3;
reg enable_mot4;
reg enable_mot5;
reg enable_mot6;
reg enable_fan;
reg clock_out_buf;
wire clock_out;   // divided clock (100 us tick) from the Clock_Divider instance below
// Rising-edge detect on the strobe input
assign edge_strobe = (strobe_en & ~strobe_en_buf)? 1: 0;
// PWM outputs: the A/B pair selects the direction; each output is high while the
// period counter is below the programmed duty cycle and the drive is enabled
assign pwm_out_m1a = (enable_mot1 & dir_mot1)? ((count_mot1 < duty_cycle_mot1)? 1: 0): 0;
assign pwm_out_m1b = (enable_mot1 &~ dir_mot1)? ((count_mot1 < duty_cycle_mot1)? 1: 0): 0;
assign pwm_out_m2a = (enable_mot2 & dir_mot2)? ((count_mot2 < duty_cycle_mot2)? 1: 0): 0;
assign pwm_out_m2b = (enable_mot2 &~ dir_mot2)? ((count_mot2 < duty_cycle_mot2)? 1: 0): 0;
assign pwm_out_m3a = (enable_mot3 & dir_mot3)? ((count_mot3 < duty_cycle_mot3)? 1: 0): 0;
assign pwm_out_m3b = (enable_mot3 &~ dir_mot3)? ((count_mot3 < duty_cycle_mot3)? 1: 0): 0;
assign pwm_out_m4a = (enable_mot4 & dir_mot4)? ((count_mot4 < duty_cycle_mot4)? 1: 0): 0;
assign pwm_out_m4b = (enable_mot4 &~ dir_mot4)? ((count_mot4 < duty_cycle_mot4)? 1: 0): 0;
assign pwm_out_m5a = (enable_mot5 & dir_mot5)? ((count_mot5 < duty_cycle_mot5)? 1: 0): 0;
assign pwm_out_m5b = (enable_mot5 &~ dir_mot5)? ((count_mot5 < duty_cycle_mot5)? 1: 0): 0;
assign pwm_out_m6a = (enable_mot6 & dir_mot6)? ((count_mot6 < duty_cycle_mot6)? 1: 0): 0;
assign pwm_out_m6b = (enable_mot6 &~ dir_mot6)? ((count_mot6 < duty_cycle_mot6)? 1: 0): 0;
assign pwm_out_f1a = (enable_fan & dir_fan)? ((count_fan < duty_cycle_fan)? 1: 0): 0;
assign pwm_out_f1b = (enable_fan &~ dir_fan)? ((count_fan < duty_cycle_fan)? 1: 0): 0;
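// Register the divided clock and the strobe input on the main clock; clock_out_buf
// clocks the PWM period counters and strobe_en_buf is used for edge detection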
always @(posedge clock_in) begin
clock_out_buf <= clock_out;
strobe_en_buf <= strobe_en;
end
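// PWM period counters: each counts 0-100 on the 100 us tick while its drive is enabled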
always@ (posedge clock_out_buf) begin
count_mot1 <= enable_mot1? ((count_mot1 < 8'd100)? (count_mot1+1): 0): 0;
count_mot2 <= enable_mot2? ((count_mot2 < 8'd100)? (count_mot2+1): 0): 0;
count_mot3 <= enable_mot3? ((count_mot3 < 8'd100)? (count_mot3+1): 0): 0;
count_mot4 <= enable_mot4? ((count_mot4 < 8'd100)? (count_mot4+1): 0): 0;
count_mot5 <= enable_mot5? ((count_mot5 < 8'd100)? (count_mot5+1): 0): 0;
count_mot6 <= enable_mot6? ((count_mot6 < 8'd100)? (count_mot6+1): 0): 0;
count_fan <= enable_fan? ((count_fan < 8'd100)? (count_fan+1): 0): 0;
end
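// Command decode: on a strobe edge, IDLE clears all registers, LOAD_DC_DIR latches
// the direction (data[7]) and duty cycle (data[6:0]) for the selected device, and
// CONTROL_DEVICE loads the per-device enables from data[6:0]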
always@ (command_id or device_id or data or edge_strobe)
begin
if (edge_strobe)
begin
case(command_id)
IDLE:
begin
enable_mot1 = 0;
enable_mot2 = 0;
enable_mot3 = 0;
enable_mot4 = 0;
enable_mot5 = 0;
enable_mot6 = 0;
enable_fan = 0;
dir_mot1 = 0;
dir_mot2 = 0;
dir_mot3 = 0;
dir_mot4 = 0;
dir_mot5 = 0;
dir_mot6 = 0;
dir_fan = 0;
duty_cycle_mot1 = 0;
duty_cycle_mot2 = 0;
duty_cycle_mot3 = 0;
duty_cycle_mot4 = 0;
duty_cycle_mot5 = 0;
duty_cycle_mot6 = 0;
duty_cycle_fan = 0;
end
LOAD_DC_DIR:
begin
case (device_id)
MOTOR1:
begin
dir_mot1 = data[7];
duty_cycle_mot1 = data[6:0];
end
MOTOR2:
begin
dir_mot2 = data[7];
duty_cycle_mot2 = data[6:0];
end
MOTOR3:
begin
dir_mot3 = data[7];
duty_cycle_mot3 = data[6:0];
end
MOTOR4:
begin
dir_mot4 = data[7];
duty_cycle_mot4 = data[6:0];
end
MOTOR5:
begin
dir_mot5 = data[7];
duty_cycle_mot5 = data[6:0];
end
MOTOR6:
begin
dir_mot6 = data[7];
duty_cycle_mot6 = data[6:0];
end
FAN:
begin
dir_fan = data[7];
duty_cycle_fan = data[6:0];
end
endcase
end
CONTROL_DEVICE:
begin
enable_mot1 = data[0];
enable_mot2 = data[1];
enable_mot3 = data[2];
enable_mot4 = data[3];
enable_mot5 = data[4];
enable_mot6 = data[5];
enable_fan = data[6];
end
endcase
end
end
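// Clock divider producing the 100 us PWM tick (module source not shown here)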
Clock_Divider CD1(
.clock(clock_in),
.clock_out(clock_out)
);
endmodule
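From the processing system, driving this module is just a matter of toggling the GPIO pins: put the direction and duty cycle on data[7:0], select the device and command, and pulse the strobe. The short PYNQ sketch below illustrates that sequence for a single motor; it assumes the same pin mapping used by the full PWM_Module class later in this write-up (data/direction on GPIO 0-7, command_id on 8-9, device_id on 10-12, strobe on 13) and is only meant to show the protocol, not to replace that class.
from pynq import GPIO, Overlay

ol = Overlay("./pwm_gen.bit")  # bitstream containing the pwm_generator module

# Assumed pin mapping (same as the PWM_Module class further below)
data = [GPIO(GPIO.get_gpio_pin(i), 'out') for i in range(0, 8)]      # data[7:0] (bit 7 = direction)
command = [GPIO(GPIO.get_gpio_pin(i), 'out') for i in range(8, 10)]  # command_id[1:0]
device = [GPIO(GPIO.get_gpio_pin(i), 'out') for i in range(10, 13)]  # device_id[2:0]
strobe = GPIO(GPIO.get_gpio_pin(13), 'out')

def write_bits(pins, value):
    # Write an integer onto a list of single-bit GPIO pins, LSB first
    for bit, pin in enumerate(pins):
        pin.write((value >> bit) & 1)

def pulse_strobe():
    strobe.write(1)
    strobe.write(0)

# LOAD_DC_DIR (command 1): motor 1 (device 1), direction bit = 1, 45% duty cycle
write_bits(data, (1 << 7) | 45)
write_bits(device, 1)
write_bits(command, 1)
pulse_strobe()

# CONTROL_DEVICE (command 2): enable motor 1 only (data[0] = 1)
write_bits(data, 0b0000001)
write_bits(command, 2)
pulse_strobe()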
Artificial Intelligence / Machine Learning
PyTorch-Based Approach (not implemented on the Ultra96)
I created and trained a 4-layer CNN for the musical notes in PyTorch on my PC. Unfortunately, I could not get PyTorch installed on the Ultra96 (although there are a couple of PyTorch ARM64 builds for NVIDIA Jetson hardware), nor could I get the CNN converted to binary weights for use with the Xilinx BNN network in time.
Suitable datasets were not easily available; I found the "OpenOMR" dataset and converted it to an MNIST-style format.
https://apacha.github.io/OMR-Datasets/#openomr-dataset
- Image resizing for MNIST conversion - I resized the images to monochrome 28x28 using a Python script, processing them as a single batch. The folder containing all the image files has to be placed in the same path as the Python script. The script is provided below.
import cv2
import os

current_path = r"D:/Personal/Xilinx_AI_Edge_Project/Project_IP/ImageToMNIST-using-Python-3.6-master/ImageToMNIST/images/test"
width = 28
height = 28
dim = (width, height)

os.chdir(current_path)
dir_list = os.listdir()
file_list = []
for i in dir_list:
    path_temp = current_path + "/" + i + "/"
    file_list.append(os.listdir(path_temp))

for i in range(len(dir_list)):
    for j in file_list[i]:
        temp = current_path + "/" + dir_list[i] + "/" + j
        img = cv2.imread(temp, cv2.IMREAD_GRAYSCALE)  # resize image
        if img is not None:
            resized = cv2.resize(img, dim, interpolation=cv2.INTER_AREA)
            cv2.imwrite(temp, resized)
        else:
            print("Could Not resize", temp)
- MNIST conversion - I did not want to reinvent the wheel, so I opted for an existing script that converts custom 28x28 images into the MNIST dataset format. The link is provided below.
https://github.com/gskielian/JPG-PNG-to-MNIST-NN-Format.git
import os
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from torchvision import datasets, transforms
from torchvision.datasets import ImageFolder
import torch.utils.data as data
import torchvision

# Training settings
batch_size = 16

transform = transforms.Compose([transforms.Pad(2), transforms.ToTensor(), ])

trainset = datasets.ImageFolder(root='D:/Personal/Xilinx_AI_Edge_Project/Project_IP/ImageToMNIST-using-Python-3.6-master/ImageToMNIST/images/train/',
                                transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=2,
                                          shuffle=True, num_workers=0)
testset = datasets.ImageFolder(root='D:/Personal/Xilinx_AI_Edge_Project/Project_IP/ImageToMNIST-using-Python-3.6-master/ImageToMNIST/images/test/',
                               transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=2,
                                         shuffle=True, num_workers=0)

classes = ('bass', 'crotchet', 'demisemiquaver', 'flat', 'minim', 'natural', 'quaver', 'quaver_line', 'quaver_tr',
           'semibreve', 'semiquaver', 'semiquaver_line', 'semiquaver_tr', 'sharp', 'treble')


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 15)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x


net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001)

for epoch in range(300):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data
        inputs, labels = Variable(inputs), Variable(labels)
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()  # loss.data[0] in the older PyTorch version this was written for

correct = 0
total = 0
for data in testloader:
    images, labels = data
    outputs = net(Variable(images))
    _, predicted = torch.max(outputs.data, 1)
    total += labels.size(0)
    correct += (predicted == labels).sum()
print('Accuracy of the network on the test images: %d %%' % (
    100 * correct / total))

class_correct = list(0. for i in range(15))
class_total = list(0. for i in range(15))
for data in testloader:
    images, labels = data
    outputs = net(Variable(images))
    _, predicted = torch.max(outputs.data, 1)
    c = (predicted == labels).squeeze()
    for i in range(1):
        label = labels[i]
        class_correct[label] += c[i]
        class_total[label] += 1
for i in range(15):
    if class_total[i] != 0:
        print('Accuracy of %5s : %2d %%' % (classes[i], 100 * class_correct[i] / class_total[i]))

# Sum of the weights in each layer (sanity check of the trained parameters)
s1 = torch.sum(net.conv1.weight.data)
s3 = torch.sum(net.conv2.weight.data)
s4 = torch.sum(net.fc1.weight.data)
s5 = torch.sum(net.fc2.weight.data)
s6 = torch.sum(net.fc3.weight.data)
print(s1)
print(s3)
print(s4)
print(s5)
print(s6)
print('Finished Training')
- Training and testing the accuracy in PyTorch
I also evaluated an OpenCV-based approach. Initially, I worked on removing the staff lines from the sheet music using a Hough transform, so that the notes could be identified and classified better by the ML stage.
import cv2
import numpy as np

Directory_Name = './Sample_Images'
Image_Name = "Sheet_Music.png"

im = cv2.imread(Directory_Name + "/" + Image_Name)
gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 64, 255, apertureSize=3)

minLineLength = 30
maxLineGap = 5
# Pass the length/gap limits as keyword arguments so they are not
# mistaken for the optional output parameter of HoughLinesP
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 15,
                        minLineLength=minLineLength, maxLineGap=maxLineGap)

# Paint the detected (staff) lines white, then median-filter the result
for x in range(0, len(lines)):
    for x1, y1, x2, y2 in lines[x]:
        cv2.line(im, (x1, y1), (x2, y2), (255, 255, 255), 2)
median = cv2.medianBlur(im, 5)

cv2.imwrite(Directory_Name + "/" + Image_Name[:-4] + "_Unfiltered.png", im)
cv2.imwrite(Directory_Name + "/" + Image_Name[:-4] + "_Filtered.png", median)
During this activity, I stumbled upon an excellent OpenCV-based pattern recognition Python project on GitHub called "SheetVision": https://github.com/cal-pratt/SheetVision.git
I decided to modify it and run it on the Ultra96 to test my automation system, since it also includes some MIDI output processing, and I wanted to benchmark the ARM performance on the Ultra96. Its sheet music note detection is good, although it expects the sheet music in a specific template; if the input differs, the threshold and boundary settings have to be edited manually.
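To get a rough benchmark of the ARM cores, the recognition step can simply be timed around the Sheet_Vision call. This is only a minimal sketch; the Sheet_Vision function and the sample image path are the ones used in the application code further below.
import time
from Sheet_Vision import Sheet_Vision

start = time.time()
notes = Sheet_Vision('/home/xilinx/jupyter_notebooks/Flute_Automation/resources/samples/test1.jpg')
elapsed = time.time() - start

print("Detected notes:", notes)
print("Recognition time on the Ultra96 ARM cores: %.2f s" % elapsed)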
Here is an instance of the note recognition and identification when running the script on Ultra96.
The application software was developed in PYNQ. It basically does the following.
- Reads a captured image (offline rather than via webcam, as the image from my webcam was blurry).
- Runs the modified SheetVision script over it and identifies the notes from the sheet music.
- A PWM driver has also been written in PYNQ for controlling the motors and the fan. It is implemented as a Python class for ease of reuse.
- The notes read by the SheetVision script are played by the flute automation hardware (once it is ready).
- Currently, only the blow function needed to generate a clean tone is not working; the mechanical actuators have been functioning fine.
from Sheet_Vision import Sheet_Vision
import cv2
import time
from pynq import GPIO
from pynq import Overlay
ol = Overlay("./pwm_gen.bit")
# Default duty cycle values
drive1_dc = 50
drive2_dc = 50
drive3_dc = 50
drive4_dc = 50
drive5_dc = 50
drive6_dc = 50
drive_fan_dc_lo = 50 # Duty Cycle value that will generate a lower octave note in the flute. Needs to be tuned periodically
drive_fan_dc_hi = 50 # Duty Cycle value that will generate a higher octave note in the flute. Needs to be tuned periodically.
class PWM_Module:
    def init_drives(self):
        # GPIO pin mapping: data[6:0] on pins 0-6, direction (data[7]) on pin 7,
        # command_id[1:0] on pins 8-9, device_id[2:0] on pins 10-12, strobe on pin 13
        self.strobe_en = GPIO(GPIO.get_gpio_pin(13), 'out')
        self.strobe_en.write(0)
        self.data = []
        for i in range(0, 7):
            self.data.append(GPIO(GPIO.get_gpio_pin(i), 'out'))
        self.dc_bin = bin(0)
        for i in range(0, 7):
            self.data[i].write(0)
        self.dir = GPIO(GPIO.get_gpio_pin(7), 'out')
        self.dir.write(0)
        self.command_id = []
        for i in range(8, 10):
            self.command_id.append(GPIO(GPIO.get_gpio_pin(i), 'out'))
        for i in range(0, 2):
            self.command_id[i].write(0)
        self.device_id = []
        for i in range(10, 13):
            self.device_id.append(GPIO(GPIO.get_gpio_pin(i), 'out'))
        for i in range(0, 3):
            self.device_id[i].write(0)
        # Strobe with command_id = 0 (IDLE) to reset all registers in the PWM module
        self.strobe_en.write(1)
        self.strobe_en.write(0)

    def set_drive_strength(self, drive_no, direction, drive_strength):
        # LOAD_DC_DIR command: latch direction and duty cycle for the selected drive
        self.dc_bin = bin(drive_strength)
        self.dc_bin = self.dc_bin[2:]
        for i in range(len(self.dc_bin)):
            self.data[i].write(int(self.dc_bin[len(self.dc_bin) - i - 1]))
        if direction == 1:
            self.dir.write(1)
        else:
            self.dir.write(0)
        # device_id encodes 1-7 (motors 1-6 and the fan), matching the Verilog parameters
        if drive_no == 0:
            self.device_id[2].write(0)
            self.device_id[1].write(0)
            self.device_id[0].write(1)
        elif drive_no == 1:
            self.device_id[2].write(0)
            self.device_id[1].write(1)
            self.device_id[0].write(0)
        elif drive_no == 2:
            self.device_id[2].write(0)
            self.device_id[1].write(1)
            self.device_id[0].write(1)
        elif drive_no == 3:
            self.device_id[2].write(1)
            self.device_id[1].write(0)
            self.device_id[0].write(0)
        elif drive_no == 4:
            self.device_id[2].write(1)
            self.device_id[1].write(0)
            self.device_id[0].write(1)
        elif drive_no == 5:
            self.device_id[2].write(1)
            self.device_id[1].write(1)
            self.device_id[0].write(0)
        elif drive_no == 6:
            self.device_id[2].write(1)
            self.device_id[1].write(1)
            self.device_id[0].write(1)
        self.command_id[0].write(1)
        self.command_id[1].write(0)
        self.strobe_en.write(1)
        self.strobe_en.write(0)
    def reset_data(self):
        self.data[0].write(0)
        self.data[1].write(0)
        self.data[2].write(0)
        self.data[3].write(0)
        self.data[4].write(0)
        self.data[5].write(0)
        self.data[6].write(0)

    def start_drive(self, drive_list):
        # Set the enable bits for the listed drives and issue the CONTROL_DEVICE command
        self.reset_data()
        for i in drive_list:
            if i == 0:
                self.data[0].write(1)
            if i == 1:
                self.data[1].write(1)
            if i == 2:
                self.data[2].write(1)
            if i == 3:
                self.data[3].write(1)
            if i == 4:
                self.data[4].write(1)
            if i == 5:
                self.data[5].write(1)
            if i == 6:
                self.data[6].write(1)
        self.command_id[0].write(0)
        self.command_id[1].write(1)
        self.strobe_en.write(1)
        self.strobe_en.write(0)

    def stop_drive(self, drive_list):
        # Clear the enable bits and issue the CONTROL_DEVICE command to stop the listed drives
        self.reset_data()
        for i in drive_list:
            if i == 0:
                self.data[0].write(0)
            if i == 1:
                self.data[1].write(0)
            if i == 2:
                self.data[2].write(0)
            if i == 3:
                self.data[3].write(0)
            if i == 4:
                self.data[4].write(0)
            if i == 5:
                self.data[5].write(0)
            if i == 6:
                self.data[6].write(0)
        self.command_id[0].write(0)
        self.command_id[1].write(1)
        self.strobe_en.write(1)
        self.strobe_en.write(0)

    def run(self, drive_no, direction, drive_strength, run_duration):
        self.set_drive_strength(drive_no, direction, drive_strength)
        self.start_drive([drive_no])
        time.sleep(run_duration)
        self.stop_drive([drive_no])

    def run_group(self, drive_no, direction, drive_strength, run_duration):
        for i in range(len(drive_no)):
            self.set_drive_strength(drive_no[i], direction[i], drive_strength[i])
        self.start_drive(drive_no)
        time.sleep(run_duration)
        self.stop_drive(drive_no)
    def set_motor1_dc(self, dutycycle):
        global drive1_dc
        drive1_dc = dutycycle

    def set_motor2_dc(self, dutycycle):
        global drive2_dc
        drive2_dc = dutycycle

    def set_motor3_dc(self, dutycycle):
        global drive3_dc
        drive3_dc = dutycycle

    def set_motor4_dc(self, dutycycle):
        global drive4_dc
        drive4_dc = dutycycle

    def set_motor5_dc(self, dutycycle):
        global drive5_dc
        drive5_dc = dutycycle

    def set_motor6_dc(self, dutycycle):
        global drive6_dc
        drive6_dc = dutycycle

    def set_fan_lower_octave_threshold(self, dutycycle):
        global drive_fan_dc_lo
        drive_fan_dc_lo = dutycycle

    def set_fan_higher_octave_threshold(self, dutycycle):
        global drive_fan_dc_hi
        drive_fan_dc_hi = dutycycle

    def motor1_fwd(self, duration):
        self.run(0, 1, drive1_dc, duration)

    def motor1_bwd(self, duration):
        self.run(0, 0, drive1_dc, duration)

    def motor2_fwd(self, duration):
        self.run(1, 1, drive2_dc, duration)

    def motor2_bwd(self, duration):
        self.run(1, 0, drive2_dc, duration)

    def motor3_fwd(self, duration):
        self.run(2, 1, drive3_dc, duration)

    def motor3_bwd(self, duration):
        self.run(2, 0, drive3_dc, duration)

    def motor4_fwd(self, duration):
        self.run(3, 0, drive4_dc, duration)

    def motor4_bwd(self, duration):
        self.run(3, 1, drive4_dc, duration)

    def motor5_fwd(self, duration):
        self.run(4, 0, drive5_dc, duration)

    def motor5_bwd(self, duration):
        self.run(4, 1, drive5_dc, duration)

    def motor6_fwd(self, duration):
        self.run(5, 0, drive6_dc, duration)

    def motor6_bwd(self, duration):
        self.run(5, 1, drive6_dc, duration)

    def turn_on_fan(self, duration):
        # Defaults to the lower-octave fan duty cycle
        self.run(6, 1, drive_fan_dc_lo, duration)

    def turn_off_fan(self, duration):
        self.stop_drive([6])
    def home_drive(self):
        self.run(0, 0, drive1_dc, 0.5)
        self.run(1, 0, drive2_dc, 0.5)
        self.run(2, 0, drive3_dc, 0.5)
        self.run(3, 0, drive4_dc, 0.5)
        self.run(4, 0, drive5_dc, 0.5)
        self.run(5, 0, drive6_dc, 0.5)

    def drive_all_fwd(self, duration):
        self.run(0, 1, drive1_dc, duration)
        self.run(1, 1, drive2_dc, duration)
        self.run(2, 1, drive3_dc, duration)
        self.run(3, 1, drive4_dc, duration)
        self.run(4, 1, drive5_dc, duration)
        self.run(5, 1, drive6_dc, duration)

    def drive_all_bwd(self, duration):
        self.run(0, 0, drive1_dc, duration)
        self.run(1, 0, drive2_dc, duration)
        self.run(2, 0, drive3_dc, duration)
        self.run(3, 0, drive4_dc, duration)
        self.run(4, 0, drive5_dc, duration)
        self.run(5, 0, drive6_dc, duration)

    def play_note(self, note, duration):
        # Just creating functions for some notes over 2 octaves here
        # Lower octave notes
        if note == "g4":
            time_move_forward = 1.2  # Needs to be tuned
            self.drive_all_fwd(time_move_forward)
            # Blow the air after the actuators have moved and covered all holes
            self.run(6, 1, drive_fan_dc_lo, duration)
            # Move back the actuators
            time_move_backward = 1.2  # Needs to be tuned
            self.drive_all_bwd(time_move_backward)
        if note == "c4":
            time_move_forward = 1.2  # Needs to be tuned
            self.run_group([0, 1, 2, 3], [1, 1, 1, 1], [drive1_dc, drive2_dc, drive3_dc, drive4_dc], time_move_forward)
            # Blow the air after the actuators have moved and covered all holes except the last 2
            self.run(6, 1, drive_fan_dc_lo, duration)
            # Move back the actuators
            time_move_backward = 1.2  # Needs to be tuned
            self.drive_all_bwd(time_move_backward)
        if note == "f4":
            # Blow the air without moving the actuators
            self.run(6, 1, drive_fan_dc_lo, duration)
            # Move back the actuators
            time_move_backward = 1.2  # Needs to be tuned
            self.drive_all_bwd(time_move_backward)
        # Higher octave notes
        if note == "g5":
            time_move_forward = 1.2  # Needs to be tuned
            self.drive_all_fwd(time_move_forward)
            # Blow the air after the actuators have moved and covered all holes
            self.run(6, 1, drive_fan_dc_hi, duration)
            # Move back the actuators
            time_move_backward = 1.2  # Needs to be tuned
            self.drive_all_bwd(time_move_backward)
        if note == "c5":
            time_move_forward = 1.2  # Needs to be tuned
            self.run_group([0, 1, 2, 3], [1, 1, 1, 1], [drive1_dc, drive2_dc, drive3_dc, drive4_dc], time_move_forward)
            # Blow the air after the actuators have moved and covered all holes except the last 2
            self.run(6, 1, drive_fan_dc_hi, duration)
            # Move back the actuators
            time_move_backward = 1.2  # Needs to be tuned
            self.drive_all_bwd(time_move_backward)
        if note == "f5":
            # Blow the air without moving the actuators
            self.run(6, 1, drive_fan_dc_hi, duration)
            # Move back the actuators
            time_move_backward = 1.2  # Needs to be tuned
            self.drive_all_bwd(time_move_backward)
# Initialize PWM drives
pwm_app = PWM_Module()
# Drive one drive at a time
pwm_app.init_drives()
pwm_app.home_drive()

# Run note recognition on the captured sheet music image
note = Sheet_Vision('/home/xilinx/jupyter_notebooks/Flute_Automation/resources/samples/test1.jpg')
print(note)

# Set to 88 beats per minute if not available in the sheet music, for testing
tempo = 88
note_value = []
note_duration = []
for i in note:
    x = i
    x1 = x.split(" ")
    x2 = x1[1].split(",")
    note_value.append(x1[0])
    # Calculate the duration in seconds from the note's fractional length (relative
    # to a whole note), assuming a quarter-note beat at the given tempo
    note_duration.append(int(x2[0]) / int(x2[1]) * 4 * 60 / tempo)
print(note_value)
print(note_duration)

# Play select notes on the flute
# Playing all notes will be implemented soon
# This has not been tested on the flute, as I have been having issues getting the air blow right
for i in range(len(note_value)):
    print(note_value[i])
    print(note_duration[i])
    if (note_value[i] == "g4" or note_value[i] == "g5" or note_value[i] == "c4" or
            note_value[i] == "c5" or note_value[i] == "b4" or note_value[i] == "b5"):
        pwm_app.play_note(note_value[i], note_duration[i])
Future Improvements
- Incorporate ML to identify the notes first, which will minimize the software execution time on the processing system.
- Accelerate some portions on the FPGA (especially pattern/contour recognition).
- Improve the mechanical fixture by using proper, identical linear shafts, and add feedback sensors (such as quadrature encoders) for motion control.
The mechanical system for automatic flute playing is not yet fully functional for a demonstration of this project. But, hopefully, with the above improvements and modifications, a complete working system will be ready.