This project looks at three defects found on PCBs and how machine learning can be used to identify them. The first is missing holes, which can be caused by faulty tooling or excessive processing, and which prevent components or mountings from being attached to the PCB. The other two are electrical defects: open circuits and short circuits, either of which can render a device non-functional. An open circuit creates an infinitely high resistance, stopping current flow entirely. A short circuit, on the other hand, creates a very low resistance, allowing a high current to flow that can damage components or even start a fire.
While working on this project I built various models with MobileNetV2 SSD FPN-Lite 320x320, Edge Impulse's FOMO, and YOLOv5, each with different parameters, to achieve the best result. In the end, I settled on FOMO since it achieved much better detection than the other models.
I therefore used FOMO to build the machine learning model, and afterwards deployed it to a Raspberry Pi 4B fitted with the 8 megapixel V2.1 camera module.
Dataset Preparation

The dataset I used in this project was sourced from the Kaggle PCB Defects dataset. It consists of 1,366 PCB images covering 6 kinds of defects. For this demo, however, I only used 3 of them (missing hole, open circuit, and short circuit). The annotations in this dataset are in XML format, but the Edge Impulse uploader requires a bounding_boxes.labels file with the bounding boxes in its own JSON structure. For this project demo I had to redraw the bounding boxes for the images by hand, hence the need to reduce the dataset size.
In total, I have 887 images for training and 253 images for testing. Each defect class has 380 images across the training and testing sets combined.
For my Impulse, I set the image width and height to 1024x1024 and the Resize mode to "Squash". The processing block was set to "Image" and the learning block to "Object Detection (Images)".
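To illustrate what "Squash" resizing does, here is a minimal sketch of the scaling arithmetic. Squash scales each axis independently to hit the target size, distorting the aspect ratio, whereas an aspect-preserving "Fit" mode would use a single uniform scale and pad the remainder. The 2400x1600 source resolution is a made-up example, not the actual dataset resolution.

```python
# Sketch of how "Squash" resizing maps a source image onto the
# 1024x1024 model input. Squash scales each axis independently
# (distorting aspect ratio); "Fit" would use one uniform scale.
# The 2400x1600 source resolution below is hypothetical.

def squash_scales(src_w, src_h, dst=1024):
    """Per-axis scale factors used by squash resizing."""
    return dst / src_w, dst / src_h

def fit_scale(src_w, src_h, dst=1024):
    """Single uniform scale factor used by aspect-preserving fit."""
    return min(dst / src_w, dst / src_h)

sx, sy = squash_scales(2400, 1600)
print(f"squash: x scaled by {sx:.3f}, y scaled by {sy:.3f}")  # axes differ -> distortion
print(f"fit:    both axes scaled by {fit_scale(2400, 1600):.3f}")
```

Squash was a reasonable choice here since the defects are small and letterbox padding would shrink them further.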
I used a large image size (1024x1024) since lower-resolution images lose information that is useful for object detection, which in turn reduces the model's accuracy. This was the case when I used MobileNetV2 SSD FPN-Lite 320x320, which only supports an input image size of up to 320x320: the model did not detect any of the objects in the images.
For the processing block, I set Color depth to "Grayscale" as I was using FOMO and then generated features from the dataset.
The last step of building the model was training. In Object detection, I selected FOMO (Faster Objects, More Objects) MobileNetV2 0.35, set the number of training cycles to 100, and set the learning rate to 0.01. After the training process is complete, we get several performance numbers for the model: the F1 score, the inference time on the Raspberry Pi 4B, and the peak RAM and flash usage on the Raspberry Pi 4B. (You can select the target board for performance analysis from the Dashboard page of your project: go to Project info at the bottom of the page and select your target board in the Latency calculations dropdown.)
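It helps to understand what FOMO actually outputs: rather than bounding boxes, it predicts object centroids on a coarse grid at 1/8th of the input resolution, so a 1024x1024 input yields a 128x128 output grid. The sketch below maps an activated grid cell back to pixel coordinates; `cell_to_centroid` is an illustrative helper of mine, not part of any Edge Impulse SDK.

```python
# FOMO does not output bounding boxes; it predicts per-cell class
# activations on a grid at 1/8th of the input resolution
# (1024x1024 input -> 128x128 grid). Sketch of mapping an activated
# grid cell back to input-image pixel coordinates.

INPUT_SIZE = 1024
GRID_STRIDE = 8  # FOMO's output stride relative to the input

def cell_to_centroid(row, col, stride=GRID_STRIDE):
    """Centre of a grid cell in input-image pixel coordinates."""
    return ((col + 0.5) * stride, (row + 0.5) * stride)

grid_cells = INPUT_SIZE // GRID_STRIDE
print(f"output grid: {grid_cells}x{grid_cells} cells")
print("cell (0, 0) centroid:", cell_to_centroid(0, 0))      # (4.0, 4.0)
print("cell (63, 63) centroid:", cell_to_centroid(63, 63))  # (508.0, 508.0)
```

This centroid-based design is also why FOMO stays fast enough for a Raspberry Pi while still handling many small objects per image.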
Model Performance

After training the model I got an F1 score of 48%. But is an F1 score of 48% good or bad? That depends on the type of problem we want to solve.
The F1 score combines precision and recall into a single metric. Precision is the number of True Positives divided by the sum of True Positives and False Positives (image source: Joos Korstanje).
Recall measures how many of the actual positive cases were correctly identified: the number of True Positives divided by the sum of True Positives and False Negatives. A model with high recall finds most of the positive cases in the data, while a model with low recall misses a large fraction of them.
Put simply, precision tells us: "of the samples classified as missing hole/open circuit/short circuit, what fraction actually are missing hole/open circuit/short circuit?" Recall, on the other hand, tells us: "of the samples that actually are missing hole/open circuit/short circuit, what fraction were classified as such?"
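The definitions above can be written directly as code. The counts in this sketch are made-up numbers purely for illustration; note that F1 is the harmonic mean of precision and recall, so one weak metric drags the whole score down.

```python
# Precision, recall, and F1 computed from raw counts, exactly as
# defined above. The example counts are made up for illustration.

def precision(tp, fp):
    """Fraction of predicted positives that are truly positive."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of actual positives that the model found."""
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

# e.g. 40 true positives, 30 false positives, 50 false negatives
print(round(precision(40, 30), 3))   # 0.571
print(round(recall(40, 50), 3))      # 0.444
print(round(f1_score(40, 30, 50), 3))
```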
The model performance for each class is shown in the table below:
Some problems demand higher precision, others higher recall. In our case we do not want False Positives, which would flag a good PCB as having a short circuit, nor do we want False Negatives, which would report a PCB that has a short circuit as defect-free.
Missing hole is predicted better than open or short circuit since its features are more distinct.
However, the confusion matrix shows a lot of False Positives coming from the background. This can be explained by the fact that a bare PCB tends to have sections that look similar even though the circuitry differs greatly from section to section. Let's look at a sample from the dataset in the image below:
In this image, the red ovals show a region that our model can predict as an open circuit, since the copper trace does not reach the copper pad. We humans, however, can tell that this copper is actually a copper pour, so it is not an open-circuit point; thus, when labelling the dataset we do not draw bounding boxes in this region. The same applies to the blue ovals, which show copper pads connected to the copper pour by short traces. This can be a connection to a ground plane, so again we do not draw bounding boxes here, but our machine learning model cannot make this kind of analysis and may therefore flag it as a short-circuit region.
Testing the FOMO model

To test my model, I open the "Model testing" section and click the "Classify all" button. My model had a testing accuracy of 13%, but wait! There's more to this result, so let's discuss it.
Let's look at the model's performance on a test image with an F1 score of 0%. This image has 3 short-circuit labels, and our model was still able to accurately detect one short circuit on the PCB.
Now let's look at another test sample, this one with an F1 score of 28%. This sample has 3 labelled short circuits, and our model was able to accurately detect them.
In this test sample, FOMO was able to detect a short circuit between two copper traces. This is impressive since, unlike most of the short-circuit cases, the traces are neither straight nor parallel.
In model testing, accuracy is the percentage of samples with a precision score above 80%. In our case, since any PCB defect can render the device non-functional, even detecting one defect among many is acceptable: once a defect is identified on a PCB, a more detailed inspection of that board can be done afterwards.
Deploying the model to a Raspberry Pi 4

Edge Impulse provides documentation on how to set up a Raspberry Pi 4 to connect to Edge Impulse Studio and how to deploy models to the Pi.
I used the 8 megapixel V2 camera module for capturing the images.
After setting up the Raspberry Pi, we run the command edge-impulse-linux on the Pi to select our project from our Edge Impulse account. (You can clone the public Edge Impulse project to your own account.) We then run the command edge-impulse-linux-runner to run the model locally on the Raspberry Pi.
While the model is running we can watch a live classification of what the Raspberry Pi camera captures, together with the inference results. To do this, we connect a computer to the same network as the Raspberry Pi and then, in a web browser, open the URL that the Raspberry Pi prints when we run the edge-impulse-linux-runner command.
Below is a screenshot of live classification running on the Raspberry Pi 4 and the model successfully detecting 3 missing holes on a PCB.
On the Raspberry Pi 4 the model has a latency of ~1500 ms, which equals roughly 0.7 fps. In an industrial setup with many PCBs to inspect, the Raspberry Pi 4 would not be the ideal hardware for this model. A better choice could be the Jetson Nano Developer Kit, which is fully supported by Edge Impulse and has a GPU-accelerated processor (NVIDIA Tegra) targeted at edge AI applications. Documentation on setting up a Jetson Nano can be found here.
Conclusion

This project has shown that we can move closer to zero manufacturing defects on bare PCBs by integrating machine learning into visual inspection. Interesting future work would be to extend the object detection to multi-layered PCBs and to use larger deep learning models for better performance.
FOMO performed best for this object detection task since it is flexible and very fast. With FOMO we used 1024x1024 image inputs, the largest input size compared to YOLOv5 and MobileNetV2 SSD FPN-Lite 320x320. This allowed our small objects to retain the information needed for their detection.
For a more detailed explanation of this project please check the documentation on Edge Impulse (Identifying PCB Defects with Machine Learning). The documentation also has the results I obtained with YOLOv5.
The Edge Impulse project is also public and can be accessed here.