This is going to be a series of articles about using computer vision for line following and lane keeping. We'll use the K210-based cyberEye board (a modified version of Maixduino), which supports OpenMV and also features a KPU, a neural network inference accelerator. As the reference mobile platform we will use M.A.R.K. (short for Make A Robot Kit; I'll just call it MARK in the article :) ), a robotics platform for educators and makers. I am on the team currently developing it, so I decided to share some of its inner workings with my readers. MARK is an open source project, and although the examples in these articles are written for cyberEye and MARK, you can certainly apply them to a different platform with some modifications.
Let’s buckle up and begin!
I think for most people who are into robotics, one of our first projects was a simple IR-sensor line follower. There are different implementations of the algorithm, depending on the number of IR sensors, but they all boil down to the same principle: measuring the intensity of IR light bouncing back from the surface.
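As a refresher, the whole idea fits in a few lines. Here is a minimal, hardware-agnostic sketch of it in Python (the three-sensor layout and the normalized 0..1 readings are illustrative assumptions, not code for any specific board):

# A minimal sketch of classic IR line following, assuming three
# downward-facing IR sensors (left, center, right) with readings
# normalized to 0..1. A dark line reflects less IR, so lower readings
# mean "the line is under this sensor".
def ir_steering(left, center, right):
    darkness = [1.0 - left, 1.0 - center, 1.0 - right]
    total = sum(darkness)
    if total == 0:
        return 0.0  # no line seen; go straight
    # Weighted average of sensor positions (-1, 0, +1):
    # negative = line is to the left, positive = to the right.
    return (-darkness[0] + darkness[2]) / total

print(ir_steering(left=0.2, center=0.7, right=0.9))  # ~ -0.58: steer left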
One step up in complexity from that is computer vision (CV) line following. For MARK there are two options for CV line following:
- Graphical programming in Codecraft
- MicroPython in MaixPy IDE
The first option is very well suited to education, since it allows students to study the different parameters available and the internal mechanics of the algorithm without the need for coding skills.
UPDATED 03/29/2022. I try my best to keep my articles updated on a regular basis, based on your feedback in the YouTube/Hackster comments sections. If you'd like to show your support and appreciation for these efforts, consider buying me a coffee (or a pizza) :) .
You can have a look at it at https://ide.tinkergen.com/ . Select MARK(cyberEye) from the devices menu and go to the Machine Vision tab; you'll see three blocks related to CV line following:
- Set line identification color to black(0-64)/white(128-255)
- Set line identification region weight A: B: C:
- Turn angle
The complete CV line-following code looks as follows (we add a servo block to tilt the camera servo):
The Set line identification region weight A: B: C: block is useful when you have a dotted line or particularly sharp turns, and also to account for different camera angles. It basically tweaks the Turn angle block's sensitivity to line deviation from the center of the screen in three different regions of interest (ROIs). Let's have a look at two examples to gain an empirical understanding of how it works.
In the first two images we see that a straight line produces a turn angle of 0 degrees, which means go straight. In images 3 and 4 the camera is facing a turn on the map, and the output of the turn angle function is 3, which corresponds to a slight turn to the right. Isn't it supposed to be a turn to the left? It is because with our default region weights (A:70 B:50 C:30), the turn angle is influenced most by the line segment in region A - and as we can see, the black line segment in region A is on the right! Finally, in the last picture, I changed the region weights to (A:30 B:50 C:70) and now the turn angle outputs -9, which means a medium-speed turn to the left. That's because now the turn angle is influenced most by the line segment in region C.
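To make the effect of the weights concrete, here is a tiny numerical sketch (the centroid values are invented for illustration, and I'm assuming the Codecraft block averages region centroids the same way the OpenMV script below does):

# Hypothetical x-centroids of the line in regions A (near), B (middle)
# and C (far) for the turn scenario above: the line sits right of
# center in A and left of center in C. The image is 160 px wide, so
# x = 80 is the center.
centroids = {"A": 110, "B": 80, "C": 50}

def weighted_deviation(weights):
    # Weighted average x-position minus the image center; the sign
    # tells us which way the line "pulls".
    total = sum(weights.values())
    avg = sum(centroids[k] * weights[k] for k in centroids) / total
    return avg - 80

print(weighted_deviation({"A": 70, "B": 50, "C": 30}))  # +8.0: pulls right, like the small positive turn angle above
print(weighted_deviation({"A": 30, "B": 50, "C": 70}))  # -8.0: pulls left, like the -9 above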
Here is a video of basic CV line following along with some more advanced examples, all programmed in Codecraft.
For MicroPython, you can use MARK's high-level API, which is what powers the Codecraft blocks. The CV line-following part lives in camera.py and consists of the following functions:
Set_GRAYSCALE_THRESHOLD(1)
Set_roi_weight(70, 50, 30)
track_line()
These functions directly correspond to blocks in Codecraft.
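Putting these together, a minimal line-following loop might look like the sketch below. This is an assumption-heavy illustration: I'm assuming the functions are importable from camera.py as written, that the argument 1 selects the black line (matching the Codecraft color block), and that track_line() returns the turn angle for the current frame; the drive() call is a hypothetical stand-in for your own motor code. Check camera.py on your MARK for the exact signatures.

# A minimal sketch, not MARK's official example.
from camera import Set_GRAYSCALE_THRESHOLD, Set_roi_weight, track_line

Set_GRAYSCALE_THRESHOLD(1)   # assumed: 1 = black line (0-64)
Set_roi_weight(70, 50, 30)   # region weights A, B, C

while True:
    turn_angle = track_line()   # assumed to return the turn angle in degrees
    # drive(speed=50, turn=turn_angle)  # hypothetical motion call
    print(turn_angle)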
If you need to make more tweaks to the algorithm, you can find it in the camera.py file. We based it on the grayscale line-following algorithm from OpenMV, which you can find here:
The OpenMV team did a terrific job of explaining the algorithm behind CV line following in the comments of the script, and I would like to supplement that text explanation with pictures for people who, like me :) , understand graphical representations better.
""" You'll need to tweak the weights for your app depending on how your robot is setup. """
ROIS = [(0, 100, 160, 20, 0.7),
(0, 50, 160, 20, 0.3),
(0, 0, 160, 20, 0.1)]
""" Compute the weight divisor (we're computing this so you don't have to make weights add to 1). """
weight_sum = 0
for r in ROIS:
weight_sum+=r[4] # r[4] is the roi weight.
Here we specify the ROIs (regions of interest) in the picture and assign a "weight" to each one. We also calculate the weight divisor as the sum of all the weights. In our case that's 0.7 + 0.5 + 0.3 = 1.5.
For every region of interest we
a) find the blobs
b) find the largest blob
c) add its weighted x-coordinate to the centroid sum
In the end we have the center_pos variable, the weighted average x-coordinate of the line; its offset from the center of the image (x = 80 for a 160-pixel-wide frame) is the deviation we steer on.
In our case it is:
(108 × 0.3 + 99 × 0.5 + 68 × 0.7) / 1.5 = (32.4 + 49.5 + 47.6) / 1.5 = 129.5 / 1.5 ≈ 86.33
centroid_sum = 0  # reset before scanning the ROIs (img is the current frame from sensor.snapshot())

for r in ROIS:
    blobs = img.find_blobs(GRAYSCALE_THRESHOLD, roi=r[0:4], merge=True)
    if blobs:
        # Find the blob with the most pixels.
        largest_blob = max(blobs, key=lambda b: b.pixels())
        # Draw a rectangle around the blob and a cross at its centroid.
        img.draw_rectangle(largest_blob.rect())
        img.draw_cross(largest_blob.cx(), largest_blob.cy())
        centroid_sum += largest_blob.cx() * r[4]  # r[4] is the roi weight.

center_pos = (centroid_sum / weight_sum)  # Determine the center of the line.
deflection_angle = -math.atan((center_pos-80)/60)
Okay, when you see this you might think to yourself, "whatever came before was quite clear, but now what is this black magic?". Here is the description from the OpenMV script comments:
The equation below is just computing the angle of a triangle where the opposite side of the triangle is the deviation of the center position from the center and the adjacent side is half the Y res. This limits the angle output to around -45 to 45. (It's not quite -45 and 45).
It's difficult to grasp the meaning at first, but once you do the drawing, it makes perfect sense. We are trying to find an angle of a right triangle, with its right angle (the 90-degree one) located at the center of the screen. When the deviation is 0, the angle is 0 as well, which means go straight. The bigger the deviation, the larger the angle.
If we plug our numbers from above into the formula, we get
-atan((86.33 - 80) / 60) = -atan(0.1055) ≈ -0.105 rad, or about -6 degrees
That's a slight turn to the left, which sounds reasonable given the image we are seeing.
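If you want to sanity-check the numbers in any Python REPL (or on the board itself), the whole computation is a couple of lines:

import math

center_pos = 86.33
deflection_angle = -math.atan((center_pos - 80) / 60)
print(deflection_angle)                # ~ -0.105 rad
print(math.degrees(deflection_angle))  # ~ -6.02 degrees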
Stay tuned for more articles from me and updates on the MARK Kickstarter campaign. In the next article of The Road from Line Following to Lane Following series, we will have a look at lane-keeping algorithms and apply deep learning to that task as well.
Add me on LinkedIn if you have any questions, and subscribe to my YouTube channel to get notified about more interesting projects involving machine learning and robotics.
Until next time, and stay safe from the coronavirus!