Life Merlyn
Created July 28, 2024

Optimizing XARM7 with an AMD Mini PC for Culinary Image Analysis

Optimizing XARM7 on AMD hardware with ROCm 5.7 for efficient and accurate culinary image analysis. (The dataset comes from the Roboflow website.)


Things used in this project

Hardware components

AMD Mini PC (ROCm-capable)
×1

Story


Schematics

circuit diagram

Explanation of Connections:
AMD Mini PC:

Connect the AMD Mini PC to a monitor, keyboard, and mouse for interaction.
Install ROCm 5.7 and YOLOv8 on the Mini PC for running the AI model.
Connect the Intel RealSense D435 camera to the AMD Mini PC using a USB 3.0 cable.
Connect the XARM7 robotic arm to the AMD Mini PC using a USB cable for control signals.
Intel RealSense D435 Camera:

Connect the camera to the AMD Mini PC using a USB 3.0 cable to capture real-time images.
XARM7 Robotic Arm:

Connect the XARM7 to the AMD Mini PC using a USB cable for receiving control signals.
Sensors and Actuators:

Sensors (e.g., Temperature Sensor, Weight Sensor): Connect these sensors directly to the AMD Mini PC's GPIO pins or use USB interfaces if available. Place the sensors in appropriate positions in your workspace.
Actuators (e.g., Heating Elements): Connect these actuators to relay modules, which are then connected to the GPIO pins or USB interfaces of the AMD Mini PC for control.
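Below is a minimal sketch of how the Mini PC might read such a sensor and switch a relay from software, assuming the sensor board and relay board expose simple USB-serial interfaces; the device paths, baud rate, and message format are placeholders rather than part of the original project.

python
import serial  # pyserial: pip install pyserial

# Placeholder device paths; check which ports your boards enumerate as
SENSOR_PORT = '/dev/ttyUSB0'
RELAY_PORT = '/dev/ttyUSB1'

def read_temperature():
    """Read one line from the sensor board and parse it as a temperature value."""
    with serial.Serial(SENSOR_PORT, 9600, timeout=1) as ser:
        line = ser.readline().decode(errors='ignore').strip()
        return float(line) if line else None

def set_heater(on: bool):
    """Send a single-byte command to the relay board driving the heating element."""
    with serial.Serial(RELAY_PORT, 9600, timeout=1) as ser:
        ser.write(b'1' if on else b'0')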
Power Supply:

Ensure all components are connected to an appropriate power source.
Use a power strip with surge protection to connect the AMD Mini PC, XARM7, and sensors/actuators to the power supply.
Software Integration:
Install ROCm 5.7 on the AMD Mini PC:

Follow the ROCm installation guide to set up ROCm 5.7 on your AMD Mini PC.
Install and Configure YOLOv8:

Set up a Python environment and install YOLOv8.
Configure YOLOv8 to use the ROCm runtime for leveraging the AMD GPU.
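A quick way to confirm that YOLOv8 will actually run on the AMD GPU is to check that the ROCm build of PyTorch can see the device. On ROCm, PyTorch exposes the GPU through the familiar torch.cuda API, so a sanity check (not part of the original project code) looks like this:

python
import torch

print(torch.__version__)                  # should be a ROCm build, e.g. ending in "+rocm5.7"
print(torch.cuda.is_available())          # True when the ROCm runtime sees the AMD GPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the AMD GPU YOLOv8 will use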
Develop Control Scripts:

Write Python scripts to capture images from the D435 camera and run them through YOLOv8 for object detection.
Based on YOLOv8’s output, send commands to the XARM7 robotic arm to manipulate ingredients (a sketch of such a script appears after this list).
Use the GPIO pins or USB interfaces of the AMD Mini PC to read sensor data and control actuators.
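A minimal sketch of such a control script is shown below. It assumes the pyrealsense2 package for the D435 camera, the best.pt weights produced by training.py, and the xArm Python SDK for the arm; the arm's address, the pick coordinates, and the class-to-position mapping are placeholders rather than the project's actual calibration.

python
import numpy as np
import pyrealsense2 as rs
from ultralytics import YOLO
from xarm.wrapper import XArmAPI

model = YOLO('best.pt')                 # weights produced by training.py
arm = XArmAPI('192.168.1.100')          # placeholder address; use your arm's actual address
arm.motion_enable(enable=True)
arm.set_mode(0)
arm.set_state(state=0)

# Start the RealSense D435 colour stream
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

# Placeholder mapping from detected ingredient class to an arm position (mm)
PICK_POSITIONS = {'bread': (300, -100, 150), 'cheese': (300, 0, 150)}

try:
    while True:
        frames = pipeline.wait_for_frames()
        color_frame = frames.get_color_frame()
        if not color_frame:
            continue
        image = np.asanyarray(color_frame.get_data())

        # Run YOLOv8 detection/segmentation on the current frame
        results = model(image)
        for box in results[0].boxes:
            name = model.names[int(box.cls[0])]
            if name in PICK_POSITIONS:
                x, y, z = PICK_POSITIONS[name]
                # Move the arm above the ingredient; real code would add gripper control
                arm.set_position(x=x, y=y, z=z, speed=100, wait=True)
finally:
    pipeline.stop()
    arm.disconnect()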
Run and Test:

Start the system and run the control scripts.
Monitor the performance and make adjustments as necessary.

Code

training.py

Python
This script is designed to train a YOLO (You Only Look Once) model using the ultralytics library. YOLO is a popular deep learning model for real-time object detection and segmentation. The provided script performs several tasks, including loading training and validation data, training the model, validating it, and saving the best model checkpoint.

Breakdown of the Script
Import Libraries

python
from ultralytics import YOLO
import os
Define the train_model Function

Paths for training and validation images and labels are defined.
It checks the number of images and labels and ensures they match.
Initializes the YOLO model.
Trains the model using the specified configuration.
Validates the trained model.
Saves the best model checkpoint.
python
def train_model():
    train_images_path = 'D:/project/competition/AMD/Xarm7_making_sandwich/Media/Ingredients.v11i.yolov8/train/images'
    train_labels_path = 'D:/project/competition/AMD/Xarm7_making_sandwich/Media/Ingredients.v11i.yolov8/train/labels'
    val_images_path = 'D:/project/competition/AMD/Xarm7_making_sandwich/Media/Ingredients.v11i.yolov8/valid/images'
    val_labels_path = 'D:/project/competition/AMD/Xarm7_making_sandwich/Media/Ingredients.v11i.yolov8/valid/labels'

    print(f"Training images: {len(os.listdir(train_images_path))}")
    print(f"Training labels: {len(os.listdir(train_labels_path))}")
    print(f"Validation images: {len(os.listdir(val_images_path))}")
    print(f"Validation labels: {len(os.listdir(val_labels_path))}")

    assert len(os.listdir(train_images_path)) == len(os.listdir(train_labels_path)), "Mismatch between train images and labels"
    assert len(os.listdir(val_images_path)) == len(os.listdir(val_labels_path)), "Mismatch between validation images and labels"

    model = YOLO('yolov8n-seg.yaml')

    print("Starting training...")
    model.train(data='D:/project/competition/AMD/Xarm7_making_sandwich/Media/Ingredients.v11i.yolov8/data.yaml', epochs=100, imgsz=640)
    print("Training complete.")

    print("Validating model...")
    results = model.val()
    print(results)

    if model.ckpt is None:
        print("Error: Model checkpoint is None. Skipping model saving.")
    else:
        print("Saving model checkpoint...")
        model.save('best.pt')
        print("Model saved successfully.")
Execute the train_model Function

The script will run the train_model function if it is executed as the main module.
python
if __name__ == '__main__':
    train_model()
How to Use the Script
Set Up Your Environment

Ensure you have Python installed.
Install the ultralytics library, which includes the YOLO implementation:
bash
pip install ultralytics
Prepare Your Data

Organize your training and validation images and their corresponding labels in the specified directories.
Ensure your dataset configuration file (data.yaml) is properly set up.
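For reference, a YOLOv8 export from Roboflow typically produces a data.yaml along the lines below; the class count and names here are placeholders, so keep the values Roboflow generated for your own dataset.

yaml
train: ../train/images
val: ../valid/images

nc: 5
names: ['bread', 'cheese', 'ham', 'lettuce', 'tomato']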
Run the Script

Execute the script in your terminal or command prompt:
bash
python training.py
Key Points
Paths: The paths to images and labels need to be correctly set. Ensure the directories exist and contain the expected files.
Assertions: The script checks that the number of images matches the number of labels for both training and validation datasets.
Model Configuration: The YOLO model is initialized using a configuration file (yolov8n-seg.yaml). Make sure this file exists and is correctly configured for your use case.
Training and Validation: The model is trained for 100 epochs with an image size of 640. Adjust these parameters as needed (see the example after these key points).
Saving the Model: The best model checkpoint is saved to a file named best.pt.
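As noted above, the training call is easy to tune. A shorter run that explicitly targets the GPU seen by the ROCm build of PyTorch might look like this; the values are illustrative, not the project's actual settings:

python
model.train(
    data='path/to/data.yaml',  # your dataset configuration file
    epochs=50,                 # fewer epochs for a quicker experiment
    imgsz=640,
    batch=8,
    device=0,                  # first GPU visible to the ROCm build of PyTorch
)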
This script automates the training process of a YOLO model for object detection or segmentation tasks, provided that the data is well-prepared and organized according to the paths and configuration files specified in the script.
from ultralytics import YOLO
import os

def train_model():
    # Paths to the Roboflow YOLOv8 export used for this project
    train_images_path = 'D:/project/competition/AMD/Xarm7_making_sandwich/Media/Ingredients.v11i.yolov8/train/images'
    train_labels_path = 'D:/project/competition/AMD/Xarm7_making_sandwich/Media/Ingredients.v11i.yolov8/train/labels'
    val_images_path = 'D:/project/competition/AMD/Xarm7_making_sandwich/Media/Ingredients.v11i.yolov8/valid/images'
    val_labels_path = 'D:/project/competition/AMD/Xarm7_making_sandwich/Media/Ingredients.v11i.yolov8/valid/labels'

    # Report dataset sizes before training starts
    print(f"Training images: {len(os.listdir(train_images_path))}")
    print(f"Training labels: {len(os.listdir(train_labels_path))}")
    print(f"Validation images: {len(os.listdir(val_images_path))}")
    print(f"Validation labels: {len(os.listdir(val_labels_path))}")

    # Every image must have a matching label file
    assert len(os.listdir(train_images_path)) == len(os.listdir(train_labels_path)), "Mismatch between train images and labels"
    assert len(os.listdir(val_images_path)) == len(os.listdir(val_labels_path)), "Mismatch between validation images and labels"

    # Build a YOLOv8-nano segmentation model from its configuration file
    model = YOLO('yolov8n-seg.yaml')

    # Train on the dataset described by data.yaml
    print("Starting training...")
    model.train(data='D:/project/competition/AMD/Xarm7_making_sandwich/Media/Ingredients.v11i.yolov8/data.yaml', epochs=100, imgsz=640)
    print("Training complete.")

    # Evaluate the trained model on the validation split
    print("Validating model...")
    results = model.val()
    print(results)

    # Save the best checkpoint only if one was produced
    if model.ckpt is None:
        print("Error: Model checkpoint is None. Skipping model saving.")
    else:
        print("Saving model checkpoint...")
        model.save('best.pt')
        print("Model saved successfully.")

if __name__ == '__main__':
    train_model()

Credits

Life Merlyn
1 project • 0 followers
