Our project has three main functionalities, described below as Project Goals:
a) Ask the Segway robot to navigate to an (X, Y) position by giving a command from LabVIEW. The connection is established using the board's ESP Wi-Fi module.
b) Add obstacle avoidance to the Segway robot during navigation using IR sensors.
c) Add functionality to follow a person in front of the Segway robot using the proximity sensors. From LabVIEW, we can command the robot to either perform obstacle avoidance or follow the person.
Project Description:
a) Project Goal: Ask the Segway robot to navigate to an (X, Y) position by giving a command from LabVIEW. The connection is established using the board's ESP Wi-Fi module.
In order to achieve this, we have implemented the following milestones.
1) Dead reckoning: Our Segway robot has no direct information about its position in the global frame, so we rely on the data from the wheel encoders. By measuring the rotation of the left and right wheels, we can update the robot's position and heading.
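As an illustration, here is a minimal C sketch of such an encoder-based pose update; the wheel radius, wheel base, and function names are hypothetical placeholders, not the actual Segway parameters.

```c
#include <math.h>

/* Hypothetical robot constants -- replace with measured values. */
#define WHEEL_RADIUS  0.05f   /* wheel radius in meters        */
#define WHEEL_BASE    0.30f   /* distance between wheels (m)   */

static float x = 0.0f, y = 0.0f, theta = 0.0f;   /* pose in the global frame */

/* Update the pose from the change in left/right wheel angles (radians)
 * since the last call. Called at a fixed control rate. */
void dead_reckoning_update(float d_left_rad, float d_right_rad)
{
    float d_left   = WHEEL_RADIUS * d_left_rad;         /* left wheel arc length  */
    float d_right  = WHEEL_RADIUS * d_right_rad;        /* right wheel arc length */
    float d_center = 0.5f * (d_left + d_right);         /* robot center travel    */
    float d_theta  = (d_right - d_left) / WHEEL_BASE;   /* heading change         */

    /* Integrate using the mid-point heading for better accuracy. */
    x     += d_center * cosf(theta + 0.5f * d_theta);
    y     += d_center * sinf(theta + 0.5f * d_theta);
    theta += d_theta;
}
```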
2) Navigation Control: We send the target position from LabVIEW. To reach it, we implemented position control and heading control and merged them with the robot's balance control. We have to keep the navigation commands small enough that they do not override the stability (balance) control, so anti-windup is also added to the position control.
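A sketch of what the position controller with a clamped-integrator anti-windup could look like; the gains, command limit, and sample time below are hypothetical and would be tuned against the real balance loop.

```c
/* Hypothetical gains and limits -- tune against the real balance controller. */
#define KP_POS     1.2f
#define KI_POS     0.05f
#define CMD_LIMIT  0.3f     /* keep the command small so balance control dominates */
#define DT         0.004f   /* control period in seconds */

static float pos_integral = 0.0f;

/* Returns a velocity command toward the target, saturated, with the
 * integrator frozen while the output is saturated (anti-windup). */
float position_control(float dist_error)
{
    float u = KP_POS * dist_error + KI_POS * pos_integral;

    if (u > CMD_LIMIT) {
        u = CMD_LIMIT;                     /* saturated: stop integrating */
    } else if (u < -CMD_LIMIT) {
        u = -CMD_LIMIT;
    } else {
        pos_integral += dist_error * DT;   /* only integrate when unsaturated */
    }
    return u;
}
```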
b) Project Goal: Add obstacle avoidance to the Segway robot during navigation using IR sensors.
1) Sensor Integration: Integrate the IR sensors and read values from them. Noise and the limited range of operation are the major constraints for these sensors, so they need to be calibrated very thoroughly.
Here is the video explanation for the sensor:
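For illustration, here is a sketch of one way a noisy raw ADC reading could be filtered and converted to a distance; the power-law fit coefficients and filter constant are hypothetical and would come from the calibration described above.

```c
#include <math.h>

/* Hypothetical calibration fit: distance_cm = A * pow(adc, B).
 * A and B would come from fitting bench measurements over the valid range. */
#define IR_FIT_A  27000.0f
#define IR_FIT_B  -1.10f
#define IR_ALPHA  0.2f      /* low-pass filter coefficient to suppress noise */

static float ir_filtered = 0.0f;

/* Convert a raw ADC sample to a filtered distance estimate in cm. */
float ir_distance_cm(unsigned int adc_raw)
{
    float d = IR_FIT_A * powf((float)adc_raw, IR_FIT_B);
    ir_filtered = IR_ALPHA * d + (1.0f - IR_ALPHA) * ir_filtered;  /* exponential smoothing */
    return ir_filtered;
}
```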
2) Mount Design: The sensor mounts are designed so that the three sensors sit 10 degrees apart from each other. We use information from all three sensors and fuse it when necessary.
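As a minimal sketch, one simple fusion rule is to take the closest of the three readings as the effective obstacle distance; the names below are hypothetical and the actual fusion may be more involved.

```c
/* Fused "closest obstacle" distance from the three forward IR sensors,
 * mounted 10 degrees apart (left, center, right). */
float fused_obstacle_distance(float d_left, float d_center, float d_right)
{
    float d = d_left;
    if (d_center < d) d = d_center;
    if (d_right  < d) d = d_right;
    return d;   /* the nearest reading decides whether an obstacle is present */
}
```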
3) Obstacle Avoidance Maneuvering: To develop this functionality, we first implemented a wall-following behavior using a PI controller driven by the sensor readings. The second step was to create a state flow that decides when to wall-follow and when to navigate. The exit conditions for each state are crucial and are the heart of our algorithm.
We begin in the most basic state, assuming there is no obstacle and defaulting to the navigation control implemented in part a). This finds the shortest path from the robot's current position to the desired position, a straight line at a certain angle, and moves the robot along it. To detect obstacles, we designate 'threshold' values for each sensor. When these values are exceeded, the robot registers an object in its path and switches cases to react accordingly. The case it switches to depends on the trajectory of the approaching object. For example, if the robot approaches an obstacle that gets within a certain distance of its right sensor, the robot switches to the 'left turn' case, and vice versa for the left sensor. If the robot approaches an obstacle head-on, it turns toward whichever direction is triggered first.
As the robot turns, it follows the side of the obstacle by keeping its middle sensor parallel to the obstacle. When the sensor facing the wall (either right or left) stops reading, the obstacle has been passed and the robot can return to the original navigation case. We also offset the robot's exit heading by a certain angle to make sure it leaves wall following without clipping any sharp corners.
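For illustration, here is a condensed C sketch of one way such a state flow could be coded; the state names, threshold values, and exit conditions are hypothetical simplifications of the tuned per-sensor values described above.

```c
typedef enum { NAVIGATE, WALL_FOLLOW_LEFT, WALL_FOLLOW_RIGHT } AvoidState;

#define OBSTACLE_THRESH_CM  40.0f   /* closer than this => obstacle detected */
#define CLEAR_THRESH_CM     80.0f   /* farther than this => wall has ended   */

static AvoidState state = NAVIGATE;

/* One step of the avoidance state flow, driven by the three IR distances.
 * NAVIGATE runs the straight-line navigation from part a); the wall-follow
 * states run the PI wall follower on the side facing the obstacle. */
void avoidance_step(float d_left, float d_center, float d_right)
{
    switch (state) {
    case NAVIGATE:
        if (d_right < OBSTACLE_THRESH_CM)
            state = WALL_FOLLOW_RIGHT;  /* obstacle near right sensor => turn left, keep wall on the right */
        else if (d_left < OBSTACLE_THRESH_CM)
            state = WALL_FOLLOW_LEFT;   /* obstacle near left sensor => turn right, keep wall on the left  */
        else if (d_center < OBSTACLE_THRESH_CM)
            state = WALL_FOLLOW_RIGHT;  /* head-on: in practice, turn toward whichever side triggers first */
        break;

    case WALL_FOLLOW_RIGHT:
        /* Exit when the right sensor stops reading the wall. */
        if (d_right > CLEAR_THRESH_CM)
            state = NAVIGATE;           /* exit with a small heading offset to avoid clipping corners */
        break;

    case WALL_FOLLOW_LEFT:
        /* Exit when the left sensor stops reading the wall. */
        if (d_left > CLEAR_THRESH_CM)
            state = NAVIGATE;
        break;
    }
}
```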
c) Project Goal: Add functionality to follow a person in front of the Segway robot using the proximity sensors. From LabVIEW, we can command the robot to either perform obstacle avoidance or follow the person.
1) State Machine Upgrade: First, we added switch cases driven from LabVIEW to tell the robot what to do (two cases are possible: go to the (X, Y) position with obstacle avoidance, or follow the person in front). These cases are toggled manually.
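A small sketch of how the mode toggle arriving from LabVIEW could be handled on the robot; the command codes, enum names, and function name are hypothetical.

```c
typedef enum { MODE_GOTO_XY, MODE_FOLLOW } RobotMode;

static RobotMode mode = MODE_GOTO_XY;

/* Called whenever a new command arrives over the ESP Wi-Fi link.
 * 'cmd' is a hypothetical command code sent from the LabVIEW front panel. */
void handle_labview_command(int cmd)
{
    switch (cmd) {
    case 0: mode = MODE_GOTO_XY; break;  /* navigate to (X, Y) with obstacle avoidance */
    case 1: mode = MODE_FOLLOW;  break;  /* follow the person in front */
    default: break;                      /* ignore unknown commands */
    }
}
```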
2) Linear Control Law: We wrote a control law to follow the object in front of the robot while maintaining a certain distance. We also added functionality to avoid running into the object. If the object is suddenly removed, the robot enters a "look-through" state in which it rotates in place to search for the object.
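A sketch of one possible distance-keeping law, with a "target lost" flag that the caller could use to enter the look-through state; the gains and distances are hypothetical.

```c
#define FOLLOW_DIST_CM   50.0f   /* hypothetical desired gap to the person         */
#define KP_FOLLOW        0.01f   /* hypothetical proportional gain                 */
#define LOST_THRESH_CM  150.0f   /* beyond this the target is considered lost      */

/* Forward-velocity command that keeps the person at FOLLOW_DIST_CM.
 * Returns 0 and raises the lost flag so the caller can enter the
 * look-through (rotate-in-place) state when the person disappears. */
float follow_control(float d_front, int *target_lost)
{
    if (d_front > LOST_THRESH_CM) {
        *target_lost = 1;        /* trigger look-through: rotate in place to reacquire */
        return 0.0f;
    }
    *target_lost = 0;
    /* Positive error => person too far => move forward; negative => back off. */
    return KP_FOLLOW * (d_front - FOLLOW_DIST_CM);
}
```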
3) Turn Control: The trickiest part was implementing the turn control. If the object in front moves around the robot, the Segway must turn with it. The main idea behind this control law is to keep the difference between adjacent sensors' readings constant, and that gives us the turn command.
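For illustration, a minimal sketch of a turn law driven by the difference between the two side sensors, regulating that difference toward a constant (here zero); the gain and sign convention are hypothetical.

```c
#define KP_TURN      0.02f   /* hypothetical proportional gain on the sensor difference */
#define TARGET_DIFF  0.0f    /* desired constant difference between adjacent sensors    */

/* Turn-rate command that holds the left/right sensor difference at TARGET_DIFF,
 * so the robot rotates to stay square to the person in front of it. */
float turn_control(float d_left, float d_right)
{
    /* If the person drifts toward one sensor, the difference changes and the
     * command rotates the robot to follow (sign convention is hypothetical). */
    return KP_TURN * ((d_left - d_right) - TARGET_DIFF);
}
```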
The video below is a demonstration of the control that we made for the Segway bot.