Sebastian S.
Published © GPL3+

Efficient Dishwasher Loading Detection Using AMD and YOLO-V8

Optimize your dishwasher loading with AI! Using AMD and YOLO-V8, this project detects and corrects inefficient dishwasher organization.

Advanced · Full instructions provided · Over 1 day · 201 views

Things used in this project

Hardware components

Minisforum Venus UM790 Pro with AMD Ryzen™ 9
-for training and inference
×1
Smartphone
-to take pictures for the dataset (optional: try out the Roboflow dataset instead)
×1
Webcam
-video or image capture for inference
×1

Software apps and online services

Roboflow.com
for labeling, training, and (optionally) inference
YoloV8
Jupyter Notebook

Story


Code

First test - YOLOv8

Python
With this code you can run a first simple test with a pretrained YOLOv8 model. Please use a Jupyter Notebook if you want to try it yourself.
import cv2
import matplotlib.pyplot as plt
from ultralytics import YOLO
import urllib.request
import numpy as np

# load image
url = 'https://ultralytics.com/images/bus.jpg'  
resp = urllib.request.urlopen(url)
image = np.asarray(bytearray(resp.read()), dtype="uint8")
image = cv2.imdecode(image, cv2.IMREAD_COLOR)

# Load YOLOv8 model
model = YOLO('yolov8n.pt')  # you can also use 'yolov8s.pt', 'yolov8m.pt', etc. 

# Perform object recognition
results = model(image)

# visualize results
annotated_frame = results[0].plot()

# convert image from BGR to RGB 
annotated_frame = cv2.cvtColor(annotated_frame, cv2.COLOR_BGR2RGB)

# show image 
plt.figure(figsize=(10, 10))
plt.imshow(annotated_frame)
plt.axis('off')
plt.show()
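
The hardware list above also mentions a webcam for inference. As a small extension (not part of the original notebook), here is a minimal sketch of how the same pretrained model could be run on live webcam frames; the camera index 0 is an assumption and may need to be adjusted on your machine.

import cv2
from ultralytics import YOLO

# load the same small pretrained model as above
model = YOLO('yolov8n.pt')

# open the default webcam (index 0 is an assumption - adjust if needed)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # run detection on the current frame and draw the results
    results = model(frame)
    annotated_frame = results[0].plot()

    cv2.imshow('YOLOv8 webcam test', annotated_frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()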

Third test - Dishwasher Dataset with Roboflow functions

Python
In this code snippet we test the dishwasher model with functions provided by Roboflow.
#Dishwasher Dataset:

# import the inference-sdk
from inference_sdk import InferenceHTTPClient
import supervision as sv

# initialize the client
CLIENT = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="YOUR_API_KEY"
)

import cv2
import numpy as np
import urllib.request as rq

url = "https://szm-media.sueddeutsche.de/image/szm/9d35b0b30cce848483871728a8b28970/640/image.jpeg" #example image


req = rq.urlopen(url)
arr = np.asarray(bytearray(req.read()), dtype=np.uint8)
image = cv2.imdecode(arr, -1) 


# run inference via the hosted Roboflow API (the image URL is passed directly)
result = CLIENT.infer(url, model_id="dishwasher-ioaho/2")

detections = sv.Detections.from_inference(result)

print(detections)



bounding_box_annotator = sv.BoundingBoxAnnotator()
label_annotator = sv.LabelAnnotator()

annotated_image = bounding_box_annotator.annotate(
    scene=image, detections=detections)
annotated_image = label_annotator.annotate(
    scene=annotated_image, detections=detections)

sv.plot_image(annotated_image)
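
The snippet above sends the image URL to the hosted API. If you have taken your own photo (for example with the smartphone from the hardware list), the client can, as far as I know, also be given a local file path. The sketch below is my addition, reuses the CLIENT from above, and uses the placeholder filename "my_dishwasher.jpg", which you would replace with your own image.

import cv2
import supervision as sv

# placeholder filename - replace with your own photo
local_path = "my_dishwasher.jpg"
local_image = cv2.imread(local_path)

# pass the local path to the same dishwasher model (assumes the inference-sdk
# accepts file paths - check the Roboflow docs if this call fails)
result = CLIENT.infer(local_path, model_id="dishwasher-ioaho/2")
detections = sv.Detections.from_inference(result)

annotated_image = sv.BoundingBoxAnnotator().annotate(
    scene=local_image, detections=detections)
annotated_image = sv.LabelAnnotator().annotate(
    scene=annotated_image, detections=detections)

sv.plot_image(annotated_image)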

Second test - compare the different YOLOv8-models

Python
In this Python code (paste it into a Jupyter Notebook) we can compare the performance of the different YOLOv8 models.
import cv2
import matplotlib.pyplot as plt
from ultralytics import YOLO
import urllib.request
import numpy as np

# load image
url = 'https://ultralytics.com/images/bus.jpg' 
resp = urllib.request.urlopen(url)
image = np.asarray(bytearray(resp.read()), dtype="uint8")
image = cv2.imdecode(image, cv2.IMREAD_COLOR)

# List of the different YOLOv8-models
model_names = ['yolov8n.pt', 'yolov8s.pt', 'yolov8m.pt', 'yolov8l.pt', 'yolov8x.pt']


results_images = []

for model_name in model_names:
    # Load YOLOv8 model 
    model = YOLO(model_name)
    
    # Perform object recognition
    results = model(image)
    
    # visualize results
    annotated_frame = results[0].plot()
    
    # convert image from BGR to RGB 
    annotated_frame = cv2.cvtColor(annotated_frame, cv2.COLOR_BGR2RGB)
    
    
    results_images.append(annotated_frame)

# showing results next to each other
fig, axes = plt.subplots(1, len(results_images), figsize=(20, 10))
for ax, img, model_name in zip(axes, results_images, model_names):
    ax.imshow(img)
    ax.set_title(model_name)
    ax.axis('off')

plt.show()
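
Since the goal here is to compare performance, it can also be interesting to record how long each model takes on the same image. The sketch below is an addition of mine and reuses the image and model_names variables from the cell above; absolute timings will of course depend on your hardware (for example the Ryzen 9 in the UM790 Pro).

import time
from ultralytics import YOLO

# rough inference-time comparison on the same test image
for model_name in model_names:
    model = YOLO(model_name)
    model(image)  # warm-up run (the first call includes model setup)

    start = time.perf_counter()
    results = model(image)
    elapsed_ms = (time.perf_counter() - start) * 1000

    print(f"{model_name}: {elapsed_ms:.1f} ms, {len(results[0].boxes)} detections")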

Credits

Sebastian S.
