The objective of our final semester project was to create a robot able to transition between the caster and Segbot modes and navigate between given points or along predetermined paths.
First Hardware Approach: Kraut Space Magic
The first idea was to use the motors that drive the robot to raise it upright. We hoped that this would not shift the center of gravity enough to interfere with the already implemented controller, and that we would be able to raise the robot from lying on its back as well as on its front. Due to the nationality of the idea's originator and the, as it turned out, excessive complexity of the solution, this approach is referred to as the "Kraut Space Magic" approach.
The resulting mechanism consists of arms mounted on the motor axes, free to move axially and rotationally, which can be coupled to the drive wheels by a servo using a claw coupling, allowing the robot to lift itself up with its main motors. The robot would rise to an angle at which the already implemented balance controller would activate and transfer it to Segbot mode, balancing freely on its main wheels. The arms would decouple at the same instant and be returned to their initial position by axial translation along a lead screw.
However, when working out the design, it became clear that it would require a complicated construction with a comparatively large number of parts and, in places, tight tolerances. With the limitations of the available FDM printers in mind, time-consuming reworking of the parts was to be expected, and it did not seem certain that the mechanism would function robustly.
Final Hardware State: Plain and Simple
As a quick fix, we chose to drive the pushing arms directly with one servo each, as seen in other project groups' designs. This approach added weight to our Segbot but was simple and robust. Given our project timeline, we saw this as the best solution, and we 3D printed and installed these components on the robot as soon as possible in order to test the balancing and navigation code.
State Machine Architecture and Control
The robot employs a state-machine architecture to govern its motion throughout its operational sequence, encompassing four distinct states: 5, 10, 20, and 30.
State 5 serves as a buffer, allowing ample time for sensor calibration, including the IMU's accelerometer and gyroscope. Simultaneously, the lifting mechanisms are reset to their default horizontal position at -100 degrees. The robot then waits 10 seconds before transitioning to State 10.
In State 10, the robot initiates its transition from caster to Segbot mode. The lifting mechanism gradually moves from -100 to 50 degrees, while the robot continuously monitors its tilt angle relative to the vertical during SWI interrupts. Once the tilt passes a predefined threshold, the robot switches to Segbot mode and halts the servos. After a brief pause (currently 5 seconds), the robot progresses to the next state, with the turn and forward-speed commands reset to zero to prevent abrupt movements.
State 20 activates the segbot balance controller and restores the servos to their default horizontal position of -100 degrees. The robot then transitions smoothly to State 30.
In State 30, the robot waits at least 10 seconds to stabilize after the previous state. Armed with an array of coordinates received from LabVIEW, it then starts its position controller and moves to each specified coordinate in sequence. At the end of this state, the robot stops and remains in place, balancing itself.
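The sequence above can be sketched as a simple switch-based state machine. This is a hypothetical, much-simplified illustration in C: the state numbers match the text, but the tick rate, tilt threshold, and servo step size are assumed values, and the 5-second pause and command resets are elided.

```c
#include <math.h>

/* State numbers from the text; all thresholds and rates are assumptions. */
#define STATE_CALIBRATE   5
#define STATE_LIFT       10
#define STATE_BALANCE    20
#define STATE_NAVIGATE   30

typedef struct {
    int   state;
    float servo_deg;   /* lifting-arm angle; -100 deg = horizontal      */
    float tilt_deg;    /* tilt relative to vertical, from the IMU       */
    int   timer_ms;    /* time spent in the current state               */
} Robot;

/* One tick of the state machine, e.g. called from the SWI every dt_ms. */
void state_machine_step(Robot *r, int dt_ms)
{
    r->timer_ms += dt_ms;
    switch (r->state) {
    case STATE_CALIBRATE:             /* sensors settle, arms reset     */
        r->servo_deg = -100.0f;
        if (r->timer_ms >= 10000) { r->state = STATE_LIFT; r->timer_ms = 0; }
        break;
    case STATE_LIFT:                  /* sweep arms from -100 to 50 deg */
        if (r->servo_deg < 50.0f) r->servo_deg += 0.05f * dt_ms;
        if (fabsf(r->tilt_deg) < 5.0f) {  /* tilt threshold: assumed 5 deg */
            r->state = STATE_BALANCE; r->timer_ms = 0;
        }
        break;
    case STATE_BALANCE:               /* balance active, arms retract   */
        r->servo_deg = -100.0f;
        r->state = STATE_NAVIGATE;
        break;
    case STATE_NAVIGATE:              /* waypoint following runs here   */
        break;
    }
}
```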
Low Level Motion Control
The low-level control algorithms consist of three key components: balance control, turn control, and speed control.
For balance control, a linear full-state-feedback law is applied to stabilize the Segbot in its vertical position. This law is represented as u = -k1*x1 - k2*x2 - k3*x3 - k4*x4, where x1, x2, x3, and x4 are the low-level robot states: tilt, angular rate (tilt rate), average motor speed, and angular acceleration (time change in tilt rate). The associated gains are k1 = -60.0, k2 = -8.0, k3 = -1.1, and k4 = -0.1. A Kalman filter estimates these states by fusing accelerometer and gyroscope data.
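The feedback law itself reduces to a single line of C. The sketch below hard-codes the gains quoted above; the state estimates are assumed to come from the Kalman filter mentioned in the text.

```c
/* Gains from the text; negative values mean each term adds positively. */
#define K1 (-60.0f)   /* tilt                       */
#define K2 ( -8.0f)   /* tilt rate                  */
#define K3 ( -1.1f)   /* average motor speed        */
#define K4 ( -0.1f)   /* angular (tilt) acceleration */

/* u = -k1*x1 - k2*x2 - k3*x3 - k4*x4 */
float balance_effort(float tilt, float tilt_rate,
                     float avg_speed, float tilt_accel)
{
    return -K1 * tilt - K2 * tilt_rate - K3 * avg_speed - K4 * tilt_accel;
}
```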
Turn control aims to make the robot turn along the vertical axis at a specified turn rate. In each time step, the controller calculates a reference turn angle by integrating the turn rate using the trapezoidal rule. Subsequently, a Proportional-Integral controller generates PWM commands based on an error signal. This error signal is computed by subtracting the current turn angle of the robot from the reference turn angle. The real-time turn angle is measured using optical-encoder-based angular position data from the wheels.
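The turn controller described above can be sketched as follows. The trapezoidal integration and the error definition follow the text; the PI gains and sample time are illustrative assumptions, not the tuned values from the project.

```c
typedef struct {
    float ref_angle;   /* integrated reference turn angle          */
    float prev_rate;   /* previous turn-rate sample (trapezoid)    */
    float err_int;     /* integral of the error, for the PI term   */
} TurnCtrl;

/* One controller step: integrate the commanded rate, then PI on the
   error between reference angle and encoder-measured turn angle.   */
float turn_effort(TurnCtrl *c, float turn_rate_cmd,
                  float turn_angle_meas, float dt)
{
    const float kp = 3.0f, ki = 20.0f;   /* assumed PI gains */

    /* trapezoidal rule: ref += dt * (rate[k] + rate[k-1]) / 2 */
    c->ref_angle += 0.5f * dt * (turn_rate_cmd + c->prev_rate);
    c->prev_rate  = turn_rate_cmd;

    float err = c->ref_angle - turn_angle_meas;
    c->err_int += err * dt;
    return kp * err + ki * c->err_int;   /* differential PWM command */
}
```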
Speed control employs a Proportional-Integral algorithm to hold the robot's speed at a predetermined horizontal reference speed. The error signal is the reference speed minus the current speed, which is estimated from the angular rate of the motors using the optical encoders under the no-slip assumption between the wheels and the ground.
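A minimal sketch of the speed loop, assuming a wheel radius and PI gains (both illustrative, not the project's tuned values). Under the no-slip assumption, the forward speed is the wheel radius times the average wheel rate.

```c
#define WHEEL_RADIUS_M 0.05f   /* assumed wheel radius */

/* no-slip: v = r * (wL + wR) / 2, wheel rates from the encoders */
float robot_speed(float omega_left, float omega_right)
{
    return WHEEL_RADIUS_M * 0.5f * (omega_left + omega_right);
}

/* PI on the speed error; caller keeps the integral state. */
float speed_effort(float v_ref, float v_meas, float *err_int, float dt)
{
    const float kp = 1.5f, ki = 0.8f;   /* assumed PI gains */
    float err = v_ref - v_meas;
    *err_int += err * dt;
    return kp * err + ki * (*err_int);
}
```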
Position and Trajectory Control
In addition to the aforementioned control components, trajectory control is implemented as a combination of turn and speed control. When given a set of waypoints, the robot uses its current turn angle and position, derived exclusively from optical encoder readings, to determine the required turn and distance to cover. This logic is implemented in the code file user_xy.c.
As previously explained, the desired turn angle is presented to the robot as a reference turn rate. Simultaneously, depending on the distance to the target, a reference speed is provided to the speed control mechanism outlined in the preceding section. The reference speed remains constant when the robot is distant from its target XY position. However, as the robot approaches the close vicinity of the target, the reference speed is proportionally reduced in relation to the distance from the target. This integrated approach ensures precise trajectory tracking as the robot navigates through its waypoints.
As seen in the video below, input positions are sent to the Segbot via LabVIEW by clicking locations on a grid within the software. They are stored as an array of X and Y coordinates that the code can loop through later. These points are then mapped to the robot's coordinate plane, giving it values it can use to determine the proper control efforts for the right and left wheels. In the SWI interrupt in main.c, the code compares its current location with the requested location and computes the distance and turn needed to get there. Control efforts for the forward/backward displacement and for the turn are then added to the balance control effort for each wheel to steer the Segbot.
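The per-wheel mixing of the three efforts can be written in one small function. The sign convention (turn effort subtracted on the left, added on the right) is an assumption; the real code in main.c defines which direction is positive.

```c
/* Combine balance, forward-speed, and turn efforts into per-wheel
   PWM commands; the turn term enters with opposite sign per side. */
void wheel_commands(float u_balance, float u_speed, float u_turn,
                    float *pwm_left, float *pwm_right)
{
    *pwm_left  = u_balance + u_speed - u_turn;
    *pwm_right = u_balance + u_speed + u_turn;
}
```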
Plans for Further Development
We think our project was a great success and achieved much of what we set out to do; however, the robot is not perfectly polished. Two things we would still like to fix are:
1) Reduce the shaking and improve the accuracy of the robot's balance and motion control
2) Add a ringtone whenever the robot receives a new target location or reaches a location it was sent to
In the code, we tried to achieve both of these tasks, but we were left with a bit of tweaking still to do due to the time constraint we ran into. They are not necessary for the basic utility of our robot but would be a great addition to making our Segbot a more robust and user-friendly design.