During our training period at a fabric printing plant, we noticed that a considerable amount of fabric is wasted due to the conventional method used for GSM measurement. GSM (grams per square meter) is an indication of the quality of a fabric, and the fabric industry uses various machines and techniques to keep the GSM of the fabric consistent throughout the manufacturing and printing process.
The conventional method of fabric GSM measurement involves cutting a small round piece of fabric with an area of 0.01 square meters (100 cm²) and measuring its weight using a precision balance (for example, a 100 cm² swatch weighing 1.1 g corresponds to 110 GSM). This is typically done during the stentering process, where the overfeed of the machine is controlled depending on the input and output GSM.
This project aims to replace this conventional method with an optical one. Currently the scope of the project is limited to single-color, light (white) fabrics with a plain or twill weave. (The algorithm can be applied to knit fabrics with reasonable accuracy as well.)
Setting Up PYNQ on the Ultra96
For anyone looking to implement a hardware-accelerated application with minimal development and debugging time, PYNQ is a great option. PYNQ currently supports several boards, including the Ultra96. So if you happen to have an Ultra96 or a similar board, I seriously recommend giving PYNQ a try. (Anything that supports Python is really easy to work with, in my experience.)
The first thing you have to do is download the PYNQ 2.3 image for your board from here. For the Ultra96, you can find it on the board's official site as well.
After downloading, reflash the SD card that came with your board with the PYNQ image. If you have trouble formatting the card, this video shows how to do it using DISKPART on Windows.
After flashing, you should be able to access your Ultra96 through its Wi-Fi access point (pynq_<mac address of your board>). You can open Jupyter Notebook by browsing to 192.168.2.1:9090 in your favorite browser, and start developing right away.
Tips :
The password to log in is 'xilinx'
You can access the file system over SMB. If you're on Windows, just type \\192.168.2.1\xilinx in File Explorer.
For a more comprehensive guide, check out https://pynq.readthedocs.io/en/v2.3/
Getting the PYNQ Computer Vision Library
The computer vision library for PYNQ provides several overlays for accelerating OpenCV functions in hardware. Currently the filter2D and dilation operations are supported. (If someone from Xilinx reads this: I hope you make the rest of the overlays that can be built with the xfOpenCV library available for download too. Downloading the 17 GB SDSoC installer just to build overlays is not something I could do with my third-world internet.)
You can get the Computer vision library from here.
Or simply, open up a terminal in Jupyter Notebook and type,
sudo pip3 install --upgrade git+https://github.com/Xilinx/PYNQ-ComputerVision.git
and you're done.
Getting the PYNQ: BNN Overlays
PYNQ keeps getting better and better. The awesome folks at Xilinx also provide overlays that allow us to run quantized neural networks in hardware! You can read more about it on their GitHub page. (Down the road we will attempt to train our own small neural net to identify fabric structures, to make calibration easier.)
sudo pip3 install git+https://github.com/Xilinx/BNN-PYNQ.git (on PYNQ v2.3)
Now that we have all these fancy overlays installed, let's get down to business.
Separating Warp and Weft Yarns of Woven Fabric
We decided to use a USB microscope camera to obtain magnified images of fabric.
Despite its price, we were able to get surprisingly good images of fabrics. The picture below shows what a woven fabric looks like when magnified.
Woven fabrics are made up of yarns running in the horizontal and vertical directions, so what better way to separate them than the trusty ol' Sobel filter! The Sobel operator identifies edges in a grayscale image (an image intensity function, where each pixel holds a single value ranging from least to most intensity). Using these gradients, pixels forming an edge can be found by comparing their gradients with those of their neighbors. We can generate sub-images for the warps and wefts by convolving our original image with the Sobel X and Y kernels.
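Before moving to the hardware overlay, here is a plain-software sketch of the same idea using OpenCV's own filter2D, handy for checking kernels on a PC first (the image filename is just a placeholder):

import cv2
import numpy as np

# Software-only reference: load a magnified fabric image as grayscale
gray = cv2.imread("fabric_sample.png", cv2.IMREAD_GRAYSCALE)

# 3x3 Sobel kernels, the same ones we use with the hardware overlay later
kernel_sobelx = np.array([[1.0, 0.0, -1.0],
                          [2.0, 0.0, -2.0],
                          [1.0, 0.0, -1.0]], np.float32)
kernel_sobely = kernel_sobelx.T

# Convolve to highlight vertical (warp) and horizontal (weft) edges
warps = cv2.filter2D(gray, -1, kernel_sobelx, borderType=cv2.BORDER_CONSTANT)
wefts = cv2.filter2D(gray, -1, kernel_sobely, borderType=cv2.BORDER_CONSTANT)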
Before we get to play around with the hardware-accelerated filter2D, we need to import the necessary overlays. Let's open up a new notebook and start developing the algorithm. The syntax and structure to follow when working with overlays are shown in the notebooks you get when you install the Computer Vision library.
Hint : Run the provided notebooks yourself to see how fast these operations are compared to software!
Hint 2 : If you get an error accessing your camera, do Kernel > Restart and Run All.
import cv2 #NOTE: This needs to be loaded first
# Load filter2D + dilate overlay
from pynq import Bitstream
bs = Bitstream("/usr/local/lib/python3.6/dist-packages/pynq_cv/overlays/xv2Filter2DDilate.bit")
bs.download()
import pynq_cv.overlays.xv2Filter2DDilate as xv2
# Load the Xlnk memory manager
from pynq import Xlnk
Xlnk.set_allocator_library('/usr/local/lib/python3.6/dist-packages/pynq_cv/overlays/xv2Filter2DDilate.so')
mem_manager = Xlnk()
We also need to capture frames from our webcam.
camera = cv2.VideoCapture(0)
width = 640
height = 480
camera.set(cv2.CAP_PROP_FRAME_WIDTH, width)
camera.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
You're also going to need this function to display images in Jupyter. Later we're going to integrate this project with Flask, so we only need this in the initial stages to check whether things are working as they should.
import IPython
def imshow(img):
    returnValue, buffer = cv2.imencode('.jpg', img)
    IPython.display.display(IPython.display.Image(data=buffer.tobytes()))
Let's set up our kernels as numpy arrays.
import numpy as np

sobelx = np.ones((480,640),np.uint8)
sobely = np.ones((480,640),np.uint8)
blur_frame = np.ones((480,640),np.uint8)
#These must be 3x3 since the filter2D overlay supports a 3x3 kernel size
kernel_sobelx = np.array([[1.0,0.0,-1.0],[2.0,0.0,-2.0],[1.0,0.0,-1.0]],np.float32) #Sobel X
kernel_sobely = np.array([[1.0,2.0,1.0],[0.0,0.0,0.0],[-1.0,-2.0,-1.0]],np.float32) #Sobel Y
kernel_sharp = np.array([[-1.0,-1.0,-1.0],[-1.0,32.0,-1.0],[-1.0,-1.0,-1.0]],np.float32) #sharpen
kernelb = np.array([[1/16.0,1/8.0,1/16.0],[1/8.0,1/4.0,1/8.0],[1/16.0,1/8.0,1/16.0]],np.float32) #Gaussian blur
kernelVoid = np.zeros(0)
You're going to need these contiguous memory buffers to run the hardware functions.
xFin= mem_manager.cma_array((height,width),np.uint8)
xFbuf= mem_manager.cma_array((height,width),np.uint8)
xFout= mem_manager.cma_array((height,width),np.uint8)
blur = mem_manager.cma_array((height,width),np.uint8)
Let's start applying the Sobel filter.
# Flush webcam buffers (needed when rerunning the notebook)
for _ in range(5):
    ret, frame_in = camera.read()

# Read in a frame
ret, img = camera.read()
if ret:
    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) #Convert color image from webcam to grayscale
    xFin[:] = gray[:] #load grayscale image into the frame-in buffer
    xv2.filter2D(xFin, -1, kernelb, xFout, borderType=cv2.BORDER_CONSTANT) #See how easy it is to call the overlay!
    blur_frame[:] = xFout[:]
    imshow(blur_frame) #Display Gaussian-blurred image
    blur[:] = xFout[:]
    xv2.filter2D(blur, -1, kernel_sobelx, xFbuf, borderType=cv2.BORDER_CONSTANT) #convolve with Sobel X kernel
    xv2.filter2D(xFbuf, -1, kernelVoid, xFout, borderType=cv2.BORDER_CONSTANT)
    sobelx[:] = xFout[:]
    xv2.filter2D(blur, -1, kernel_sobely, xFbuf, borderType=cv2.BORDER_CONSTANT) #convolve with Sobel Y kernel
    xv2.filter2D(xFbuf, -1, kernelVoid, xFout, borderType=cv2.BORDER_CONSTANT)
    sobely[:] = xFout[:]
    imshow(sobelx)
    imshow(sobely)
else:
    print("Error reading frame from camera.")
Outputs
Since we need a strong blur to remove the stringy parts of the yarns, the input image is made slightly out of focus using the camera's focus adjustment.
With some additional morphological transformations and thresholding, we can easily obtain the warp and weft yarns as white stripes on a black background.
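As a rough sketch of that cleanup step (the threshold value and kernel size below are illustrative, not the exact ones we used, and it assumes the sobelx image from the code above):

_, binary = cv2.threshold(sobelx, 60, 255, cv2.THRESH_BINARY)

# Close small gaps along the yarns, then remove speckles between them
kernel = np.ones((5,5), np.uint8)
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
opened = cv2.morphologyEx(closed, cv2.MORPH_OPEN, kernel)

imshow(opened) #warp yarns appear as white stripes on a black background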
Since we also needed the number of warp and weft yarns, we had to fall back on some time-consuming software functions. (Recall the sobelx and sobely images we obtained using the code above.)
warpcount = 0
weftcount = 0
warparea = 0
weftarea = 0
draw_warp = 1
draw_weft = 1
warp_perimeter_thresh = 800
weft_perimeter_thresh = 800
############ This part should come after obtaining sobelx and y images##########
if draw_warp:
    im2x, contoursx, hierarchyx = cv2.findContours(sobelx,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
    for cnt in contoursx:
        perimeter = cv2.arcLength(cnt,True)
        if perimeter > warp_perimeter_thresh:
            cv2.drawContours(img, cnt, -1, (204,50,153), 2)
            warpcount = warpcount+1
            warparea += cv2.contourArea(cnt)
if draw_weft:
    im2y, contoursy, hierarchyy = cv2.findContours(sobely,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
    for cnt in contoursy:
        perimeter = cv2.arcLength(cnt,True)
        if perimeter > weft_perimeter_thresh:
            cv2.drawContours(img, cnt, -1, (0,0,255), 2)
            weftcount = weftcount+1
            weftarea += cv2.contourArea(cnt)
mem = "Warp Yarns : "+str(warpcount)
mem1 = "Warp Pixel Area : "+str(warparea)+ " pixels"
mem2 = "Weft Yarns : "+str(weftcount)
mem3 = "Weft Pixel Area : "+str(weftarea)+ " pixels"
areatot = (warparea+weftarea)
mem4 = "Totalized Pixel Area : "+str(areatot)+ " pixels"
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(img,mem,(5,370), font, 1,(0,255,0),2,cv2.LINE_AA)
cv2.putText(img,mem1,(5,400), font, 1,(255,100,100),2,cv2.LINE_AA)
cv2.putText(img,mem2,(5,430), font, 1,(0,255,0),2,cv2.LINE_AA)
cv2.putText(img,mem3,(5,460), font, 1,(255,100,100),2,cv2.LINE_AA)
cv2.putText(img,mem4,(5,30), font, 1,(255,0,0),2,cv2.LINE_AA)
imshow(img)
Here's the output. This part is a bit resource intensive, but our frame rate is already capped at 25 FPS by the USB camera, so it doesn't affect us much.
So far, we have obtained the warp and weft yarn counts (we can use these directly to obtain yarn density, the number of yarns per unit length), as well as the total pixel area covered by the yarns. Since there is no way to estimate the mass of the yarns from a pixel count without knowing the mass per pixel for the given fabric type, we simply calibrate using a fabric sample of known GSM.
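For instance, if you measure the physical field of view of the microscope camera at the calibrated working distance, the yarn counts translate directly into densities. A rough sketch (the field-of-view numbers below are placeholders, not measured values):

fov_width_cm = 1.2   # placeholder: measure this for your own setup
fov_height_cm = 0.9  # placeholder: measure this for your own setup

warp_density = warpcount / fov_width_cm   # warp yarns per cm
weft_density = weftcount / fov_height_cm  # weft yarns per cm
print("Warp density: %.1f yarns/cm, Weft density: %.1f yarns/cm" % (warp_density, weft_density))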
As an example, the fabric sample shown above is 110 GSM. Let's use that to calibrate our algorithm to measure the GSM of this fabric type.
gsm = (areatot/27747)*110 #27747 = total yarn pixel area measured for the 110 GSM reference sample
mem5 = "Predicted GSM : "+str(gsm)+ " GSM"
cv2.putText(img,mem5,(5,60), font, 1,(255,0,0),2,cv2.LINE_AA)
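If you expect to recalibrate often, it helps to keep the reference values in one place. A small helper along these lines (just a restatement of the formula above, with the 110 GSM sample's values as defaults):

CAL_PIXEL_AREA = 27747  # yarn pixel area measured for the reference sample
CAL_GSM = 110           # known GSM of the reference sample

def estimate_gsm(total_pixel_area, cal_pixel_area=CAL_PIXEL_AREA, cal_gsm=CAL_GSM):
    # Scale the measured yarn pixel area by the reference sample's area-to-GSM ratio
    return (total_pixel_area / cal_pixel_area) * cal_gsm

gsm = estimate_gsm(areatot)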
Let's run the code again and supply a picture of a different part of the same fabric.
The predicted GSM is close, but a small deviation can be observed. Once calibrated, we have to make sure that the distance between the sensor and the fabric does not change when moving to a different area. This prompted us to develop the XY Cartesian slider system shown in the latter part of this article.
Testing on Knit Fabrics
The structure of a knit is more complex and harder to capture in this way. However, as seen below, we can capture the intertwined yarn clusters (called loops) on one side.
So we can probably build a relationship between these yarn clusters and the mass. For that, we need to change our algorithm to find only vertical edges.
Recall the draw_warp and draw_weft control variables; we can simply set one of them to zero, as shown below.
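For example:

draw_warp = 1  # keep the vertical-edge (Sobel X) contours that trace the knit loop columns
draw_weft = 0  # skip the horizontal-edge (Sobel Y) contours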
Making Our Algorithm Smarter Using PYNQ: BNN
If our algorithm can tell a woven fabric from a knit during calibration, it can easily adapt to different fabric structures. Let's utilize the PYNQ BNN overlays to make our inferencing faster.
(After installing BNN on the Ultra96, be sure to check out the example notebooks to familiarize yourself.)
In order to train the provided models on our own dataset, we'll need to move over to a comparatively more powerful computer for a bit. (I did this on Linux.) So clone the PYNQ: BNN repo to your PC.
You will need to follow the instructions over here to install Theano, Lasagne, Pylearn2 and some other required packages. Since our dataset is tiny, we won't need CUDA for GPU acceleration for now, but if your dataset gets larger you might.
After installing pylearn2, remember to set your path variables for pylearn2 datasets by typing in,
export PYLEARN2_VIEWER_COMMAND="eog --new-instance"
export PYLEARN2_DATA_PATH=/YOURPATHTOHERE/pylearn2/datasets
in your terminal.
We'll now generate an MNIST-format dataset for our model to train on. First we need to obtain some photos of woven and knit fabric structures.
You can use the phototaker.py code attached at the end to take multiple grayscale images from your webcam.
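The actual phototaker.py is in the attachments; a minimal sketch of what such a capture script looks like is below (the frame count, delay and filenames are arbitrary):

import cv2
import time

camera = cv2.VideoCapture(0)
for i in range(50):
    ret, frame = camera.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imwrite("fabric_%03d.png" % i, gray)
    time.sleep(0.2)  # short pause so you can move the camera between shots
camera.release()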
After getting the photos, we can use this amazing tool to generate the MNIST data.
https://github.com/gskielian/JPG-PNG-to-MNIST-NN-Format
Clone the repo into your PC, copy the images to the training and testing directories.
Edit the batches.meta.txt according to the number of labels.
Type ./resize-script.sh in your terminal to resize all images to 28x28, then run convert-images-to-mnist-format.py to generate the dataset.
Tip : You will get 4 files.
gskielian's tool will generate the training images and labels as train-images-idx3-ubyte and train-labels-idx1-ubyte, but you will have to rename train to t10k, since that's what the training script looks for.
You will then have to go to pylearn2/pylearn2/datasets, edit the mnist.py file, and change the size of the training and testing data, the number of epochs, and the batch size to suit your dataset.
After everything is adjusted, you can go to your BNN directory and run mnist.py
This will generate a file called mnist_parameters.npz. You then need to generate the hardware-ready weight files using mnist-gen-weights-W1A1.py or the W1A2 variant.
After successfully generating it, place the generated files at /usr/local/lib/python3.6/dist-packages/bnn/params/newmodel/lfcW1A1/
We can test it out using the LFC-BNN_MNIST_Webcam.ipynb notebook.
import bnn
hw_classifier = bnn.LfcClassifier(bnn.NETWORK_LFCW1A1,"newmodel",bnn.RUNTIME_HW)
import cv2
from PIL import Image as PIL_Image
from PIL import ImageEnhance
from PIL import ImageOps
import numpy as np
import math
from scipy import misc
from array import *
# Capture an image from the webcam
cap = cv2.VideoCapture(0)
_ , cv2_im = cap.read()
cv2_im = cv2.cvtColor(cv2_im,cv2.COLOR_BGR2RGB)
img = PIL_Image.fromarray(cv2_im).convert("L")
# Image enhancement
contr = ImageEnhance.Contrast(img)
img = contr.enhance(3)
# The enhancement values (contrast and brightness)
bright = ImageEnhance.Brightness(img)
# depend on the background, external lights etc.
img = bright.enhance(4.0)
#Adding a border for future cropping
img = ImageOps.expand(img,border=80,fill='white')
inverted = ImageOps.invert(img)
box = inverted.getbbox()
img_new = img.crop(box)
width, height = img_new.size
ratio = min((28./height), (28./width))
background = PIL_Image.new('RGB', (28,28), (255,255,255))
if(height == width):
    img_new = img_new.resize((28,28))
elif(height>width):
    img_new = img_new.resize((int(width*ratio),28))
    background.paste(img_new, (int((28-img_new.size[0])/2),int((28-img_new.size[1])/2)))
else:
    img_new = img_new.resize((28, int(height*ratio)))
    background.paste(img_new, (int((28-img_new.size[0])/2),int((28-img_new.size[1])/2)))
background
img_data=np.asarray(background)
img_data = img_data[:,:,0]
misc.imsave('/home/xilinx/img_webcam_mnist.png', img_data)
#Reload the saved image, then invert it (white on black)
img_load = PIL_Image.open('/home/xilinx/img_webcam_mnist.png').convert("L")
smallimg = ImageOps.invert(img_load)
smallimg = smallimg.rotate(0)
data_image = array('B')
pixel = smallimg.load()
for x in range(0,28):
    for y in range(0,28):
        if(pixel[y,x] == 255):
            data_image.append(255)
        else:
            data_image.append(1)
# Setting up the header of the MNIST format file - Required as the hardware is designed for MNIST dataset
hexval = "{0:#0{1}x}".format(1,6)
header = array('B')
header.extend([0,0,8,1,0,0])
header.append(int('0x'+hexval[2:][:2],16))
header.append(int('0x'+hexval[2:][2:],16))
header.extend([0,0,0,28,0,0,0,28])
header[3] = 3 # Changing MSB for image data (0x00000803)
data_image = header + data_image
output_file = open('/home/xilinx/img_webcam_mnist_processed', 'wb')
data_image.tofile(output_file)
output_file.close()
class_out = hw_classifier.classify_mnist("/home/xilinx/img_webcam_mnist_processed")
print("Class name: {0}".format(hw_classifier.class_name(class_out)))
Output for a knit fabric image, which was placed under class name 1 during training:
Inference took 7.00 microseconds
Classification rate: 142857.14 images per second
Class name: 1
Great! Our model was able to successfully determine the fabric structure, and it uses hardware for inferencing, which makes it really fast.
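If you want to see the speedup for yourself, BNN-PYNQ also provides a software runtime (as used in the example notebooks), so you can run the same processed image through both. A quick comparison, assuming the "newmodel" parameters trained above:

# Software (CPU) runtime of the same network, for comparison with the hardware classifier
sw_classifier = bnn.LfcClassifier(bnn.NETWORK_LFCW1A1, "newmodel", bnn.RUNTIME_SW)
class_sw = sw_classifier.classify_mnist("/home/xilinx/img_webcam_mnist_processed")
print("Class name: {0}".format(sw_classifier.class_name(class_sw)))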
Making the XYZ Slider
This XYZ sliding mechanism was made to take multiple samples from different regions of a fabric without altering the distance between the sensor and the fabric after calibration. It is a mix of 3D printed parts and fabricated metal parts. The 3D printable files are attached.
A PCB was designed using EAGLE to control the 4 stepper motors through A4988 stepper motor drivers. The drivers are driven by an STM32 F103C8T6 MCU, which sets the absolute position of each axis according to the data sent by the Ultra96 board over serial. PCB Gerber files and the STM32-Arduino code are attached. (Head over to the end of the page!)
You will also need to download the AccelStepper Library for Arduino.
The MCU expects the absolute positions with ', ' as a delimiter and '*' to signal the end of the string. We can easily generate such a string and send it over serial using pySerial, as sketched below.
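A minimal sketch of that (the serial device path, baud rate and the three-axis command format are assumptions for illustration):

import serial

ser = serial.Serial('/dev/ttyUSB0', 115200, timeout=1)  # adjust the port and baud rate for your setup

def move_to(x, y, z):
    # Absolute axis positions, ', '-delimited, terminated with '*'
    command = ", ".join(str(v) for v in (x, y, z)) + "*"
    ser.write(command.encode())

move_to(1200, 800, 0)  # example target position in steps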
Now that our code works without problems, let's bring everything together. You can quickly generate a nice web app for your image processing application using Flask. And the great thing is, the Ultra96 board comes pre-installed with Flask! You can find a great example of how to use Flask with OpenCV on GitHub.
https://github.com/log0/video_streaming_with_flask_example
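A minimal sketch of that streaming pattern looks roughly like this (the route name and port are assumptions; in practice the Sobel/contour pipeline shown earlier would run on each frame before encoding):

from flask import Flask, Response
import cv2

app = Flask(__name__)
camera = cv2.VideoCapture(0)

def generate_frames():
    # Yield webcam frames as an MJPEG stream
    while True:
        ret, frame = camera.read()
        if not ret:
            continue
        ret, buffer = cv2.imencode('.jpg', frame)
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + buffer.tobytes() + b'\r\n')

@app.route('/video_feed')
def video_feed():
    return Response(generate_frames(), mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)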
With a little bit of Bootstrap and JavaScript magic, you can make a really responsive web GUI like the one below. Note: I have not provided the entire Flask GUI, since this is still an ongoing project being developed for commercial use, but we're more than happy to help you out if you have any questions on how to make the GUI.
The Sobel filter based yarn detection was carried out on 30 images from 3 different fabric samples. The following images show the type of samples used.
Different samples of the same fabric were obtained by placing the camera on different parts of the fabric, and the yarn detection algorithm was run on 10 such photos for each sample. Results indicated an accuracy of just under 94% for woven fabrics and 90% for the vertical loop structures on knits. (Tabulated results available on request.)
GSM Determination using Yarn Pixel Area
50 different fabric samples (4 frames each at different points, 200 images altogether) with a nominal GSM of 107 were used for this test. The full-range error (for a maximum reading of 250 GSM) was tabulated. The graph below depicts the variation of the readings relative to the actual value.
A somewhat considerable deviation can be observed, but this is acceptable in the fabric industry. It can be further reduced by enhancing the algorithm.
Conclusion
The Ultra96 is a great board for beginners and experts alike, and its compatibility with PYNQ takes it to a whole new level. If you're not satisfied with the amazing computer vision and BNN libraries Xilinx has provided, you can write your own overlays using Vivado and run some parts of your program (or even all of it) in hardware.
Running operations like convolution (for Sobel, smoothing, erosion and blurs) in the hardware layer can decrease the run time of your algorithm considerably. In this application, however, the frame rate was limited by the resource-intensive contour detection and the maximum frame rate of the camera (24 fps).
Hopefully Avnet and Xilinx will create more resources and libraries for PYNQ and the Ultra96, allowing developers to quickly test out their ideas without the hassle of building everything from scratch.