Overview
This project presents an autonomous RC car, built by the GSA Owls team, capable of lane keeping, red stop sign detection, and encoder-based speed control using a Raspberry Pi 5.
We extended code from prior ELEC 424 projects and the following Instructable: https://www.instructables.com/Autonomous-Lane-Keeping-Car-Using-Raspberry-Pi-and/
PD Controller Tuning
We performed parameter tuning by observing real-time system response to various gains.
- Proportional Gain (Kp = 0.020) ensured the car responded quickly to deviations without overcorrecting.
- Derivative Gain (Kd = 0.010) reduced oscillation, especially in turns.
The PWM output was clipped between 6.4% and 8.6% to fit the servo’s constraints. Plots of PD responses versus error confirmed stability and responsiveness during the full run.
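A minimal sketch of the steering update under these values is shown below. The gains and PWM limits are the ones reported above; the center duty cycle, variable names, and loop structure are illustrative assumptions rather than our exact code.

```python
# Illustrative PD steering update (not the exact project code).
KP = 0.020                     # proportional gain, as tuned above
KD = 0.010                     # derivative gain, as tuned above
PWM_MIN, PWM_MAX = 6.4, 8.6    # steering servo duty-cycle limits (%)
PWM_CENTER = 7.5               # assumed straight-ahead duty cycle

def steering_pwm(error, prev_error, dt):
    """Map lane error to a steering duty cycle, clipped to the servo's range."""
    derivative = (error - prev_error) / dt if dt > 0 else 0.0
    correction = KP * error + KD * derivative
    return max(PWM_MIN, min(PWM_MAX, PWM_CENTER + correction))
```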
Stop Sign Handling and Object Detection
We used OpenCV to threshold red HSV values and detect the red paper stop sign; a sketch of this check follows the list below.
- When red pixels exceeded a threshold, the car stopped.
- If it was the first stop, the car resumed after 3 seconds.
- At the final stop, the car halted permanently.
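A hedged sketch of the red-pixel check described above is given here. The HSV bounds and the pixel-count threshold are placeholder assumptions, not our tuned values.

```python
import cv2
import numpy as np

# Assumed HSV bounds for red; red hue wraps around 0, so two ranges are combined.
LOWER_RED1, UPPER_RED1 = np.array([0, 120, 70]), np.array([10, 255, 255])
LOWER_RED2, UPPER_RED2 = np.array([170, 120, 70]), np.array([180, 255, 255])
RED_PIXEL_THRESHOLD = 4000  # assumed trigger count, tuned on the track in practice

def red_stop_detected(frame_bgr):
    """Return True when enough red pixels are visible to treat the frame as a stop sign."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_RED1, UPPER_RED1) | cv2.inRange(hsv, LOWER_RED2, UPPER_RED2)
    return cv2.countNonZero(mask) > RED_PIXEL_THRESHOLD
```

On the first detection the main loop pauses the drive motor for 3 seconds and resumes; on the second it stops the motor and exits the run.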
System Plots
Figure 1
- Demonstrates how lane error triggers proportional steering changes and how speed PWM remains regulated within bounds.
- Shows how the PD controller dynamically adjusts steering using both magnitude and rate of error change.
The demo is set up so that when the program is run, the car waits at the start until the live camera feed is visible on the computer. The user can then press space to begin the track run. After completing the track, press Ctrl+C to close the program.
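A sketch of this start-up gate, assuming the preview and key handling use OpenCV's imshow/waitKey (the window name and camera object are illustrative):

```python
import cv2

def wait_for_start(camera):
    """Show the live feed until the user presses space to begin the track run."""
    while True:
        ok, frame = camera.read()
        if not ok:
            continue
        cv2.imshow("lane-keeping preview", frame)  # window name is illustrative
        if cv2.waitKey(1) & 0xFF == ord(' '):      # space begins the run
            return
```

Ctrl+C then ends the run by raising KeyboardInterrupt in the main loop.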
Note: A similar demo (without the speed encoder) was shown to Joe, and he confirmed that it works.
Terminal Outputs
The terminal outputs the lane error (which is used by the PD controller), the steering PWM, and the speed in cm/s as measured by the speed encoder wheels.
The encoder-based speed controller driver detects both rising and falling edges of an optical encoder signal using interrupts and measures the time interval between edges with high-resolution kernel timestamps. To ensure reliable readings, it implements a 1 ms debounce filter to ignore noisy or spurious pulses. The most recent valid interval is made accessible to user space via a read-only sysfs file named speed, allowing external applications to calculate rotational speed or RPM. Additionally, the driver supports device tree integration for easy hardware configuration and portability.
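From user space, reading the driver looks roughly like the sketch below. The exact sysfs path, the unit of the exported interval (nanoseconds is assumed from the kernel timestamps), and the encoder wheel geometry are assumptions for illustration only.

```python
# Illustrative user-space reader for the driver's read-only sysfs "speed" attribute.
SPEED_PATH = "/sys/devices/platform/speed_encoder/speed"  # hypothetical path
EDGES_PER_REV = 40             # assumed: 20 encoder slots x 2 edges (rising + falling)
WHEEL_CIRCUMFERENCE_CM = 21.0  # assumed wheel circumference

def read_speed_cm_per_s():
    """Convert the latest edge-to-edge interval (assumed ns) into linear speed in cm/s."""
    with open(SPEED_PATH) as f:
        interval_ns = int(f.read().strip())
    if interval_ns == 0:
        return 0.0
    seconds_per_rev = interval_ns * 1e-9 * EDGES_PER_REV
    return WHEEL_CIRCUMFERENCE_CM / seconds_per_rev
```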
Lane Tracking
Typically you would use Hough lines to track both lanes and calculate the direction of travel. Our camera has a limited field of view, and the car's construction tries to compensate for that with a short tower at the front. Even so, the view of the track is at most 18 inches using the whole frame, and there is typically a 6-12 inch blind spot in front of the car. This short range makes tracking each lane individually almost impossible, because in the corners only one lane is visible. Our solution to the limited field of view is to track only the centroid of the blue that is visible within the HSV mask, so as long as one lane is visible we have a path to follow. The figures below are from preliminary code where we were testing the vision without motor control, but they demonstrate this approach nicely. As a result, while our run is not centered within the lane, the centering around one of the lane lines is quite precise.
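A sketch of the centroid approach using OpenCV image moments is shown here. The blue HSV bounds are placeholders, and the error is taken as the horizontal offset of the mask centroid from the frame center, which is then fed to the PD controller.

```python
import cv2
import numpy as np

LOWER_BLUE = np.array([100, 100, 40])   # assumed HSV bounds for the blue lane tape
UPPER_BLUE = np.array([130, 255, 255])

def lane_error(frame_bgr):
    """Return the horizontal offset (pixels) of the blue-mask centroid, or None if no lane is visible."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_BLUE, UPPER_BLUE)
    m = cv2.moments(mask)
    if m["m00"] == 0:                    # no blue pixels in view
        return None
    centroid_x = m["m10"] / m["m00"]
    return centroid_x - frame_bgr.shape[1] / 2.0
```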
Object Detection (video omitted due to performing a live demo in the lab)
To meet the 553 requirement, we deployed YOLOv5n (320×320) for object detection and streamed labeled output to the laptop, achieving a frame rate of roughly 5 FPS. Object detection in YOLOv5 consists of partitioning an input image into a grid and predicting class probabilities and bounding boxes directly from image features in a single pass through a neural network. The model extracts hierarchical spatial information through convolution layers and uses anchor boxes to detect objects of different sizes. At inference time, YOLOv5 returns one or more predictions for each grid cell, and these are pruned based on a confidence threshold to prevent redundant detections. The "n" (nano) model is the smallest, lowest-complexity variant, designed for real-time performance. YOLOv5 models are trained on the COCO dataset and generalize to common object classes.
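A minimal sketch of running YOLOv5n at 320×320 through the standard ultralytics torch.hub entry point is shown below. The camera index and the confidence threshold are assumed values, not necessarily what we used on the car.

```python
import cv2
import torch

# Load the pretrained nano model from the ultralytics/yolov5 hub repo.
model = torch.hub.load("ultralytics/yolov5", "yolov5n", pretrained=True)
model.conf = 0.4  # assumed confidence threshold for pruning weak detections

cap = cv2.VideoCapture(0)  # camera index is an assumption
ok, frame = cap.read()
if ok:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # YOLOv5 expects RGB numpy input
    results = model(rgb, size=320)                # single forward pass at 320x320
    results.print()                               # class labels and counts
    labeled = results.render()[0]                 # frame with boxes drawn, ready to stream
```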
Comments