Rayv
Created September 5, 2024 © GPL3+

SafeSteps: Enhancing Outdoor Mobility

Empowering those with visual impairments to navigate outdoor spaces safely by detecting obstacles, elevation, and distance in real-time.

Intermediate · Work in progress · Over 4 days

Things used in this project

Hardware components

DFRobot UNIHIKER - IoT Python Programming Single Board Computer with Touchscreen
×1
DFRobot 0.3 MegaPixels USB Camera for Raspberry Pi / NVIDIA Jetson Nano / UNIHIKER
×1
Nordic Semiconductor nRF52 Development Kit
×1
Seeed Studio Grove - Haptic Motor
×1
Blues Notecarrier A
×1
Blues Notecard (Cellular)
×1
DFRobot IO Extender for micro:bit / UNIHIKER
×1

Hand tools and fabrication machines

Soldering iron (generic)
Solder Wire, Lead Free
10 Pc. Jumper Wire Kit, 5 cm Long

Story


Schematics

Pin config

Code

UNIHIKER

Python
import cv2
import numpy as np

# Load the pre-trained MobileNet SSD model and the corresponding class labels
model_path = "path/to/ssd_mobilenet_v2/frozen_inference_graph.pb"
config_path = "path/to/ssd_mobilenet_v2/ssd_mobilenet_v2.pbtxt"
net = cv2.dnn.readNetFromTensorflow(model_path, config_path)

# Class labels for the objects that the model can detect
class_names = {0: 'background', 1: 'person', 2: 'bicycle', 3: 'car', 4: 'motorcycle', 
               5: 'airplane', 6: 'bus', 7: 'train', 8: 'truck', 9: 'boat', 
               10: 'traffic light', 11: 'fire hydrant', 13: 'stop sign', 14: 'parking meter', 
               15: 'bench', 16: 'bird', 17: 'cat', 18: 'dog', 19: 'horse', 20: 'sheep', 
               21: 'cow', 22: 'elephant', 23: 'bear', 24: 'zebra', 25: 'giraffe', 27: 'backpack', 
               28: 'umbrella', 31: 'handbag', 32: 'tie', 33: 'suitcase', 34: 'frisbee', 
               35: 'skis', 36: 'snowboard', 37: 'sports ball', 38: 'kite', 39: 'baseball bat', 
               40: 'baseball glove', 41: 'skateboard', 42: 'surfboard', 43: 'tennis racket', 
               44: 'bottle', 46: 'wine glass', 47: 'cup', 48: 'fork', 49: 'knife', 50: 'spoon', 
               51: 'bowl', 52: 'banana', 53: 'apple', 54: 'sandwich', 55: 'orange', 56: 'broccoli', 
               57: 'carrot', 58: 'hot dog', 59: 'pizza', 60: 'donut', 61: 'cake', 62: 'chair', 
               63: 'couch', 64: 'potted plant', 65: 'bed', 67: 'dining table', 70: 'toilet', 
               72: 'tv', 73: 'laptop', 74: 'mouse', 75: 'remote', 76: 'keyboard', 77: 'cell phone', 
               78: 'microwave', 79: 'oven', 80: 'toaster', 81: 'sink', 82: 'refrigerator', 
               84: 'book', 85: 'clock', 86: 'vase', 87: 'scissors', 88: 'teddy bear', 89: 'hair drier', 
               90: 'toothbrush'}

# Initialize video capture from the camera
cap = cv2.VideoCapture(0)  # Change the index if you are using a different camera

while True:
    # Capture frame from the camera
    ret, frame = cap.read()
    if not ret:
        break

    # Prepare the frame for object detection
    height, width = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, size=(300, 300), swapRB=True, crop=False)
    net.setInput(blob)

    # Perform object detection
    detections = net.forward()

    # Loop over the detected objects
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]

        # Filter out weak detections
        if confidence > 0.5:
            class_id = int(detections[0, 0, i, 1])
            # The last four values are the normalized box corners (x1, y1, x2, y2),
            # so scale them back to pixel coordinates
            box_x1 = int(detections[0, 0, i, 3] * width)
            box_y1 = int(detections[0, 0, i, 4] * height)
            box_x2 = int(detections[0, 0, i, 5] * width)
            box_y2 = int(detections[0, 0, i, 6] * height)

            # Draw bounding box and label on the frame; fall back to 'unknown'
            # for class IDs missing from the COCO label map
            cv2.rectangle(frame, (box_x1, box_y1), (box_x2, box_y2), (0, 255, 0), 2)
            label = f"{class_names.get(class_id, 'unknown')}: {confidence:.2f}"
            cv2.putText(frame, label, (box_x1, box_y1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

    # Display the resulting frame
    cv2.imshow('Obstacle Detection', frame)

    # Exit on pressing 'q'
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release video capture and close windows
cap.release()
cv2.destroyAllWindows()
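For the distance feedback SafeSteps describes, one common approach is a pinhole-camera estimate: if an object's real-world width and the camera's focal length in pixels are known, distance ≈ focal_length × real_width / pixel_width. The sketch below illustrates that idea, plus a simple mapping from distance to a haptic intensity for the Grove motor. The focal length, reference widths, and the 0–255 intensity scale are placeholder assumptions, not values calibrated for the 0.3 MP USB camera or the actual motor driver.

```python
# Hypothetical sketch: estimate distance from a detection's bounding-box width
# using the pinhole-camera model, then map distance to a vibration strength.
# FOCAL_LENGTH_PX and KNOWN_WIDTHS_M are placeholder values that would need
# calibration against the real camera before use.

FOCAL_LENGTH_PX = 500.0           # assumed focal length in pixels
KNOWN_WIDTHS_M = {                # rough real-world widths for a few classes
    'person': 0.5,
    'car': 1.8,
    'bicycle': 0.6,
}

def estimate_distance_m(class_name, box_width_px):
    """Pinhole model: distance = focal_length * real_width / pixel_width."""
    real_width = KNOWN_WIDTHS_M.get(class_name)
    if real_width is None or box_width_px <= 0:
        return None  # unknown object size; cannot estimate
    return FOCAL_LENGTH_PX * real_width / box_width_px

def haptic_intensity(distance_m, max_range_m=5.0):
    """Map distance to a 0-255 vibration strength: closer means stronger."""
    if distance_m is None or distance_m >= max_range_m:
        return 0
    return int(255 * (1.0 - distance_m / max_range_m))

# Example: a detected person whose bounding box is 250 px wide
d = estimate_distance_m('person', 250)   # 500 * 0.5 / 250 = 1.0 m
print(f"distance: {d:.1f} m, intensity: {haptic_intensity(d)}")
```

In the detection loop above, `box_width_px` would come from the difference of the scaled box corners, and the returned intensity could drive the haptic motor through the IO extender.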

Credits

Rayv
