Today, I would like to share an interesting application of the myAGV mobile robot. This is not an ordinary robot demo; it is a story that stands out for the ingenuity and innovation of makers. The case comes from an event organized by iMakerbase Korea, where the myAGV robot competed in a line-following challenge and showcased its capabilities.
In this competition, myAGV's task was to follow a white line on the ground without deviating from the path. To achieve this, myAGV was equipped with a camera that captures ground information in real time, allowing the robot to accurately follow the white line.
Let’s take a look at the overall effect.
Product
The myAGV 2023 Raspberry Pi, launched by Elephant Robotics, is a mobile robot particularly suited for scientific research, education, and personal maker use. It utilizes the Raspberry Pi 4B as its core motherboard and runs on a custom Ubuntu Mate 20.04 operating system designed specifically for robotics, ensuring smooth and user-friendly operation.
● Raspberry Pi 4B Core Motherboard: Offers strong performance and excellent expandability.
● 360-Degree LiDAR: Enables comprehensive scanning and environmental awareness.
● Built-in 5-Megapixel HD Camera: Used for object recognition and precise positioning.
● Competition-Grade Mecanum Wheels: Allow omnidirectional movement, flexibly handling complex terrain.
● Supports Graphical Programming: Users can develop and debug through an intuitive visual interface.
This project does not use the common SLAM navigation functionality. Instead, it relies solely on vision to track a white line and then drive the robot accordingly. The primary tools used are the pymycobot API to control myAGV’s movements and OpenCV for object recognition and tracking.
pymycobot
pymycobot is a control library released by Elephant Robotics for controlling their robotic arms and mobile robots. It allows users to easily operate and control these devices through Python code, significantly simplifying the development process and enhancing the programmability and flexibility of the devices.
https://pypi.org/project/pymycobot/
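You can install the library from PyPI:

pip install pymycobot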
from pymycobot.myagv import MyAgv

agv = MyAgv("/dev/ttyAMA2", 115200)

agv.go_ahead(go_ahead_speed, timeout=5)  # Move the car forward, with a default motion time of 5 seconds
agv.retreat(back_speed, timeout=5)  # Move the car backward, with a default motion time of 5 seconds
agv.pan_left(pan_left_speed, timeout=5)  # Pan the car to the left, with a default motion time of 5 seconds
agv.pan_right(pan_right_speed, timeout=5)  # Pan the car to the right, with a default motion time of 5 seconds
agv.clockwise_rotation(rotate_right_speed, timeout=5)  # Rotate the car clockwise, with a default motion time of 5 seconds
OpenCV
OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library. Created by Intel and now supported and maintained by the OpenCV Foundation, OpenCV offers hundreds of computer vision algorithms. These algorithms cover a wide range of fields, including image processing, video analysis, and machine learning.
import cv2

# Load an image from disk
image = cv2.imread('path_to_image.jpg')
# Show it in a window, wait for a key press, then close the window
cv2.imshow('Displayed Image', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
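On the robot itself, the image is not a static file but a live camera feed, so frames are typically read in a loop with cv2.VideoCapture. Below is a minimal sketch; the camera index 0 is an assumption and may need adjusting for your device.

import cv2

# Open the camera (index 0 is an assumption; adjust for your device)
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()  # grab one frame from the camera
    if not ret:
        break
    cv2.imshow('Camera', frame)
    # press 'q' to quit the preview
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()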
Project Implementation
myAGV starts from the START point and first encounters a traffic light ahead. It waits for the light to turn green, then begins to move along the white line. Upon reaching the blue area, myAGV stops and waits for the robotic arm to complete its task (the specific operation of the robotic arm is not part of this competition). After the robotic arm completes its task, it sends a command to the car, and myAGV continues forward until it reaches the obstacle, where it identifies the obstacle and pauses for 5 seconds. Then myAGV proceeds to the END point, concluding the competition.
myAGV Motion Control
As previously mentioned, you can use the pymycobot library in Python to control myAGV's movements. Below is an example:
# import pymycobot
from pymycobot.myagv import MyAgv
# Create an object
agv = MyAgv("/dev/ttyAMA2", 115200)
agv.pan_left(100, timeout=0.1)  # turn left
agv.pan_right(100, timeout=0.1)  # turn right
agv.go_ahead(100, timeout=0.1)  # go straight
Control is straightforward. The first parameter is the speed, ranging from 1 to 127; the higher the value, the faster myAGV moves. The `timeout` parameter is the duration of the movement in seconds.
Vision Recognition
White Line Following
This is one of the more important technical points in this project: recognizing the white line, traffic lights, and obstacles. The robot needs to pause when encountering these. We will start with the primary task of recognizing and following the white line.
Below is the code to identify the center point of the white line and draw the central position.
import cv2
import numpy as np

# Load the image in grayscale
image_path = '/mnt/data/myagv.jpg'
image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

# Check if the image loaded successfully
if image is None:
    raise ValueError(f"Image not found at {image_path}")

# Threshold the image to binary
_, binary_image = cv2.threshold(image, 200, 255, cv2.THRESH_BINARY)

# Find contours in the binary image
contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if not contours:
    raise ValueError("No white regions found in the image")

# The largest contour is assumed to be the white line
largest_contour = max(contours, key=cv2.contourArea)

# Compute the center of the contour
M = cv2.moments(largest_contour)
if M["m00"] != 0:
    cX = int(M["m10"] / M["m00"])
    cY = int(M["m01"] / M["m00"])
else:
    cX, cY = 0, 0

# Draw the center point on the image
output_image = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
cv2.circle(output_image, (cX, cY), 5, (0, 0, 255), -1)

# Save or display the output image
cv2.imwrite('/mnt/data/white_line_center.jpg', output_image)
# cv2.imshow("Output Image", output_image)
# cv2.waitKey(0)
# cv2.destroyAllWindows()

print(f"Center of the white line is at: ({cX}, {cY})")
Next, we will implement the algorithm to make myAGV move along the line. The motion of myAGV follows the logic below:
● If the center point is in the central region of the image, the AGV moves straight.
● If the center point is in the left region of the image, the AGV turns left.
● If the center point is in the right region of the image, the AGV turns right.
if cX is not None:
    if rotating:
        agv.stop()
        rotating = False
    if cX < frame_center - 20:
        print("Turn left")
        agv.pan_left(50, timeout=0.1)
    elif cX > frame_center + 20:
        print("Turn right")
        agv.pan_right(50, timeout=0.1)
    else:
        print("Go straight")
        agv.go_ahead(50, timeout=0.1)
The logic compares the identified center point of the white path with the center of the camera view (`frame_center`). If `cX` falls left of the center beyond a small dead zone, the robot turns left; if it falls right of the center, the robot turns right; otherwise it goes straight. This code is a theoretical implementation of following the white line.
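Putting detection and motion together, a minimal control loop might look like the sketch below. It assumes the find_line_center helper sketched earlier and a camera at index 0; the 20-pixel dead zone matches the snippet above.

import cv2
from pymycobot.myagv import MyAgv

agv = MyAgv("/dev/ttyAMA2", 115200)
cap = cv2.VideoCapture(0)  # camera index 0 is an assumption

while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame_center = frame.shape[1] // 2  # horizontal center of the view
    result = find_line_center(frame)    # helper sketched above
    if result is None:
        agv.stop()                      # line lost: stop rather than drive blind
        continue
    cX, _ = result
    if cX < frame_center - 20:
        agv.pan_left(50, timeout=0.1)   # line is to the left
    elif cX > frame_center + 20:
        agv.pan_right(50, timeout=0.1)  # line is to the right
    else:
        agv.go_ahead(50, timeout=0.1)   # line is centered: go straight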
Color Recognition
This is relatively easy. You only need to convert the pixels to the HSV color space and check them against the HSV range for green. The same method works for recognizing other colored markers on the course, such as the blue area or a yellow line.
def detect_signal_light(frame):
    # Convert image to HSV color space
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Define the HSV ranges for red and green
    red_lower = np.array([0, 120, 70])
    red_upper = np.array([10, 255, 255])
    green_lower = np.array([40, 70, 70])
    green_upper = np.array([80, 255, 255])
    # Create red and green masks
    mask_red = cv2.inRange(hsv, red_lower, red_upper)
    mask_green = cv2.inRange(hsv, green_lower, green_upper)
    # Count the number of non-zero pixels in the red and green masks
    red_detected = cv2.countNonZero(mask_red)
    green_detected = cv2.countNonZero(mask_green)
    if red_detected > green_detected:
        return 'red'
    elif green_detected > red_detected:
        return 'green'
    else:
        return 'none'
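At the START point, myAGV simply polls this function until the light turns green. A minimal sketch, reusing the camera capture `cap` from the loop above:

# Wait at the START point until the signal turns green
while True:
    ret, frame = cap.read()
    if ret and detect_signal_light(frame) == 'green':
        break
# ...then start following the white line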
def detect_yellow_line(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    yellow_lower = np.array([20, 100, 100])
    yellow_upper = np.array([30, 255, 255])
    mask_yellow = cv2.inRange(hsv, yellow_lower, yellow_upper)
    yellow_detected = cv2.countNonZero(mask_yellow)
    return yellow_detected > 0
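The same pattern can gate a stop at a colored line on the course; for example (again reusing `cap` and `agv` from above):

ret, frame = cap.read()
if ret and detect_yellow_line(frame):
    agv.stop()  # pause at the colored line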
Obstacle Recognition
The obstacle is designed as a zebra crossing. When myAGV encounters the zebra crossing, it pauses and waits for five seconds, just like in real life, where we yield to pedestrians at a zebra crossing.
To recognize this pattern, note that a zebra crossing consists of multiple white stripes lined up in parallel. The detection logic can therefore be designed to report a zebra crossing once several closely spaced parallel lines are detected.
def detect_crosswalk(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edged = cv2.Canny(blurred, 50, 150)
    lines = cv2.HoughLinesP(edged, 1, np.pi / 180, threshold=50, minLineLength=50, maxLineGap=10)
    if lines is not None:
        # Sort the detected lines by their vertical position
        lines = sorted(lines, key=lambda line: line[0][1])
        parallel_lines = 0
        for i in range(1, len(lines)):
            # Count pairs of lines that sit close together vertically
            if abs(lines[i][0][1] - lines[i - 1][0][1]) < 20:
                parallel_lines += 1
        if parallel_lines >= 4:
            return True
    return False
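When a crosswalk is detected, the car pauses for the required five seconds before moving on. A minimal sketch, assuming the same `cap` and `agv` objects as before:

import time

ret, frame = cap.read()
if ret and detect_crosswalk(frame):
    agv.stop()     # yield at the zebra crossing
    time.sleep(5)  # wait five seconds, as the course requires
    # ...then resume line following toward the END point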
In practice, the theoretical code did not always work as expected. The main issues encountered are summarized below:
1. Lighting Conditions: Visual recognition is heavily affected by lighting. Testing results can vary at different times of the day because lighting conditions change, leading to errors in recognition and computation.
2. myAGV's Movement Speed: Some teams experienced issues where the myAGV moved too fast, causing it to deviate from the line, lose the white line target, or oversteer. This sometimes led to the robot backtracking instead of following the intended path.
3. Recognition Algorithm: The camera on the robot did not always accurately identify the center point of the white line, causing the program to fail.
The best approach is to move slowly and make smaller turns to ensure the robot can complete the course accurately.
Summary
What seems like a simple project actually involves many technical details, and achieving the initial goal can be challenging. It requires continuous experimentation and adaptation to the environment and the machine. In practical applications, myAGV needs constant testing and iteration to adapt to different work environments and task requirements. Each adjustment and optimization aims to make myAGV more intelligent and efficient. This requires not only a solid technical foundation but also creativity and perseverance. Only through continuous trials and improvements can myAGV's full potential be realized, providing reliable solutions for complex tasks.
I hope this case can inspire more makers and developers to explore the infinite possibilities of robotics technology, promoting technological advancement and innovative applications.