Automated guided vehicles (AGVs) in industry are designed to follow markers or lines on the floor to move materials around manufacturing facilities and industrial plants. Most of them do not use any computer-vision-based navigation system because of its difficulty and complexity. The mapping of the robot's environment could be done from scratch, where the robot has no map to begin with and no idea of its location on the map. According to past research and projects, simultaneous robot localization and map-building (SLAM) has problems with mapping the robot's environment and locating the robot with respect to global coordinates. Localizing robots in an unknown environment has been the subject of a significant amount of research during the past decades. With an overhead-camera-based centralized system, however, mobile robots (a DonkeyCar will be the mobile robot) can be navigated easily in global coordinates by planning the path at the first glance of the platform, with real-time obstacle avoidance.
Scope
This centralized system mainly focuses on replacing the AGVs (automated guided vehicles) currently used in industry. The system will be designed as a prototype of a delivery-robot system for warehouses. The robot will be monitored and driven through Arm-based computer vision. The system will be able to identify static and dynamic obstacles, as well as objects. It will also predict the path within a few seconds and navigate all the robots without any local sensor network. The main feedback will come from the centralized camera system.
Methodology
Hardware
· Raspberry Pi 3 Model B - computer-vision-based centralized system
· Raspberry Pi 3 Model B - the DonkeyCar will be powered by a second Raspberry Pi module with IoT connectivity
· Intel Movidius Neural Compute Stick - will enhance the neural processing power of the Raspberry Pi
· Raspberry Pi Camera Module - will be the overhead camera
In this method, video pre-processing and post-processing, path planning, robot control, and wireless communication with the mobile robot will be done by a single-board computer, which in this case will be the Raspberry Pi. The Raspberry Pi 3 will be used to perform the post-processing techniques, including the path planning algorithm, the robot control algorithm and the wireless communication. The Raspberry Pi is a low-cost SBC with a credit-card-size form factor that can be used for many tasks that a desktop computer does. The Raspberry Pi uses software which is either free or open source. It provides directly accessible processor pins as GPIOs, so prototyping vision projects or learning computer science from scratch is easier on such a device. One can learn on a PC as well, but implementation at the hardware level is not feasible because a PC does not expose much hardware detail. Compared to its class, the processor is very good: a Broadcom quad-core Arm Cortex-A53 running at 1.2 GHz. The Raspberry Pi is convenient for getting started as it has very few software glitches and provides good overall performance. The processes that run on the Raspberry Pi could also be implemented on an FPGA, but that would be a much more complex and time-consuming structure.
The main advantage of using a single board is performing multiple tasks within the same architecture (in this scenario the main coding language will be Python). It is easier to interface all the processes and multiple threads, which leads to better system performance. The path planning algorithm and the robot controlling algorithm interact through specified commands to perform the final robot navigation process. Only a few sets of integer variables will be passed over the MQTT protocol to control the orientation and speed of the mobile robot. The Intel Movidius Neural Compute Stick will be used in image pre-processing and the path planning algorithms to provide additional neural processing power to the Raspberry Pi. The Movidius Neural Compute Stick (NCS) is produced by Intel and can run without any need for an Internet connection. Its software development kit enables rapid prototyping, validation, and deployment of deep neural networks.
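The command channel described above can be sketched as follows. This is a minimal, illustrative server-side publisher using the paho-mqtt Python client (1.x-style constructor); the broker address 192.168.1.10, port 1883 and the topic name donkeycar/cmd are assumptions made for the sketch, not fixed design choices.

```python
# Sketch of the server-side command publisher (paho-mqtt 1.x style constructor).
# Broker address and topic name below are illustrative placeholders.
import json
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"   # assumed broker address on the local Wi-Fi network
TOPIC = "donkeycar/cmd"   # assumed topic carrying the robot commands

client = mqtt.Client()
client.connect(BROKER, 1883)

def send_command(speed, orientation):
    """Publish the small set of integers (speed, orientation) to the robot."""
    payload = json.dumps({"speed": int(speed), "orientation": int(orientation)})
    client.publish(TOPIC, payload, qos=1)

# Example: drive at speed level 2 with a heading of 90 degrees
send_command(2, 90)
```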
The main architecture of the DonkeyCar is based on IoT technology with the use of another Raspberry Pi 3 Model B. The main goal is to implement the wireless network over the area using an efficient communication protocol such as the lightweight Message Queuing Telemetry Transport (MQTT) protocol. An algorithm will be developed to define how the robot operates and completes tasks in real time. The mobile robot (the DonkeyCar in this scenario) at ground level completes the given task without the aid of any sensors by following navigation instructions given by a server. Between the server and the mobile robot there is a stable wireless communication link.
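On the robot side, a matching subscriber could look like the sketch below, again assuming the paho-mqtt client and the same placeholder broker and topic; the motor-control step is left as a comment because the DonkeyCar throttle/steering interface is not specified here.

```python
# Robot-side counterpart: a minimal subscriber running on the DonkeyCar's
# Raspberry Pi. Broker address and topic are the same placeholders as above.
import json
import paho.mqtt.client as mqtt

TOPIC = "donkeycar/cmd"  # must match the topic used by the server

def on_message(client, userdata, msg):
    cmd = json.loads(msg.payload)
    speed, orientation = cmd["speed"], cmd["orientation"]
    # Here the values would be handed to the DonkeyCar throttle/steering
    # interface (not shown, as it depends on the drive hardware).
    print("received command:", speed, orientation)

client = mqtt.Client()
client.on_message = on_message
client.connect("192.168.1.10", 1883)
client.subscribe(TOPIC, qos=1)
client.loop_forever()
```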
Obstacles and the ground robot will be monitored by an overhead camera as shown in figure 2. A Raspberry Pi camera is used to acquire the video feed of the ground plane. Two main algorithms can be used for path planning: the wavefront and the A* search algorithm. System implementations based on these two techniques will be the two main methods to approach the final goal.
Wavefront Technique for Path Planning
This is one of the easiest and most efficient ways to find the shortest possible path. In wavefront-based methods, values are assigned to each node starting from the target node. This is followed by a traversal from the start node to the target node using the values assigned. The goal is to ensure optimal path length along with fast execution time. This is addressed by preventing the full expansion of the waves and using a new cost function so that optimality is not compromised. The algorithm starts at the goal cell and marks each adjacent cell with the distance to the current goal. Using 8-point connectivity, each cell has up to eight adjacent cells, with the diagonal cells having a distance of √2 ≈ 1.4 to the current cell. The remaining cells each have a distance of 1. This process is then repeated for each cell, continuously marking neighboring cells, until the robot position has been reached. Cells defined as obstacles after dilation are ignored. The focused wavefront algorithm is a further modification of the modified wavefront (MWF) algorithm. It is considerably faster than previous algorithms because it explores only a limited number of nodes. Each node is allocated two values, weight and cost. Weight is the value assigned to the node depending on its position, and it is assigned in exactly the same fashion as values are allocated in the modified wavefront algorithm. This MWF algorithm can be used as a modification.
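A minimal sketch of this wavefront expansion, assuming an occupancy grid where 0 marks free cells and 1 marks dilated obstacle cells, could look like the following; it uses 8-point connectivity with the costs described above and then traces the path by following decreasing distance values back to the goal.

```python
# Minimal wavefront sketch: expand distances outwards from the goal cell,
# then follow decreasing distance values from the start back to the goal.
import heapq

NEIGHBOURS = [(-1, -1, 1.4), (-1, 0, 1.0), (-1, 1, 1.4),
              ( 0, -1, 1.0),               ( 0, 1, 1.0),
              ( 1, -1, 1.4), ( 1, 0, 1.0), ( 1, 1, 1.4)]

def wavefront(grid, goal, start):
    rows, cols = len(grid), len(grid[0])
    dist = {goal: 0.0}
    heap = [(0.0, goal)]
    while heap:                          # expand the wave outwards from the goal
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == start:
            break                        # stop once the robot cell is reached
        for dr, dc, cost in NEIGHBOURS:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + cost
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    if start not in dist:
        return None                      # start is unreachable from the goal
    # follow strictly decreasing distance values from the start to the goal
    path, cell = [start], start
    while cell != goal:
        cell = min((dist.get((cell[0] + dr, cell[1] + dc), float("inf")),
                    (cell[0] + dr, cell[1] + dc)) for dr, dc, _ in NEIGHBOURS)[1]
        path.append(cell)
    return path
```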
Image pre-processing and post-processing methods are among the most critical parts of the whole process. A grayscale image of the ground will be produced as the initial step of the image processing. Obstacle areas will be identified using binary conversion and a proper pixel coordinate system. To find the correspondence between the real coordinates and the image coordinates a calibration procedure has to be developed [3]. The pin-hole model will be the main method, which gives a way to compute the world coordinates from the image coordinates and the focal distance f.
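As an illustration of this pre-processing step, the sketch below converts an overhead frame to grayscale, thresholds darker pixels as obstacles, dilates them for a safety margin and down-samples the result into an occupancy grid for the planner; the threshold value, kernel size and cell size are illustrative assumptions, not calibrated values.

```python
# Grayscale conversion and binary thresholding of the overhead frame.
import cv2
import numpy as np

frame = cv2.imread("ground_plane.jpg")               # overhead camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)       # grayscale image of the ground

# darker pixels are treated as obstacles, lighter pixels as free floor
_, obstacles = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)

# dilate obstacle regions so the planned path keeps a safety margin
kernel = np.ones((15, 15), np.uint8)
obstacles = cv2.dilate(obstacles, kernel)

# down-sample into a coarse occupancy grid (1 = obstacle, 0 = free)
cell = 20                                            # pixels per grid cell
grid = (cv2.resize(obstacles, (obstacles.shape[1] // cell,
                               obstacles.shape[0] // cell)) > 0).astype(int)
```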
To calibrate the camera we work in two phases: first from image to floor, then from floor to robot. The estimate of the twelve elements of the matrix M is reduced to eleven by fixing the scale factor. In the first calibration phase the camera takes a picture of a calibration object whose dimensions are known. We do not want to use the classical least-squares method, which requires precise measurements in world coordinates, so we choose all the points of the calibration object to be on the floor (z is null, and 3 elements of M are null). The calibration object is a white square, 21 cm wide. The vertex coordinates are computed. The estimate of M is done on the first picture using least squares, trying to match the reference square. The initial estimate is then improved with Newton's method. This is a minimization problem, where the function to minimize is the difference between the estimated segment length and the real length.
After establishing the reference system on the floor as shown in figure 5, we construct the matrix to transform it into the reference frame of the robot. During this second calibration phase, pictures of the object are taken from different positions and orientations of the robot, and again this minimization problem is solved as before. We consider as the robot reference system the one used by the dead-reckoning of the robot. Let x be the coordinate vector in the robot frame.
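Because all calibration points lie on the floor plane (z = 0), the image-to-floor mapping can also be expressed as a 3x3 homography. The sketch below shows that simplified variant using OpenCV's built-in least-squares fit on the four vertices of the 21 cm square; it stands in for, rather than reproduces, the least-squares plus Newton refinement described above, and the pixel coordinates are placeholders for the detected vertices.

```python
# Simplified image-to-floor calibration via a planar homography (z = 0 case).
import cv2
import numpy as np

# corners of the white calibration square as detected in the image (pixels; placeholder values)
image_pts = np.array([[412, 310], [623, 318], [618, 522], [405, 514]], dtype=np.float32)

# the same corners in floor coordinates (centimetres), square of 21 cm width
floor_pts = np.array([[0, 0], [21, 0], [21, 21], [0, 21]], dtype=np.float32)

H, _ = cv2.findHomography(image_pts, floor_pts)

def image_to_floor(u, v):
    """Map an image pixel (u, v) to floor coordinates (x, y) in centimetres."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]
```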
To get the shortest path, the wavefront technique will be used. The related algorithms will be developed in the Python language in a Linux environment.
The wavefront technique was used to create a proper path plan. As shown in figure 11, the red dot represents the start and the blue dot represents the destination. An image of the ground plane was given as an input to the algorithm. The grayscale image is processed to identify darker and lighter areas in order to find the free path by eliminating the darker areas.
Discussion
According to figure 1, the whole system is divided into three main stages: image acquisition, image processing and controlling, and the mobile robot. When processing the ground plane via the overhead camera, clear image acquisition from the Pi camera is a must. One of the main problems we have identified is the calibration process of the overhead camera. The pin-hole model will be the main method, which gives a way to compute the world coordinates from the image coordinates and the focal distance f. The next task was to develop an algorithm to determine the orientation of the robot in order to drive through waypoints smoothly.
Developing an image-processing-based orientation feedback system was continued with OpenCV's ArUco marker method. An ArUco marker is a synthetic square marker (see figure) composed of a wide black border and an inner binary matrix which determines its identifier (id). The black border facilitates its fast detection in the image, and the binary codification allows its identification and the application of error detection and correction techniques. The marker size determines the size of the internal matrix.
It must be noted that a marker can be found rotated in the environment; however, the detection process needs to be able to determine its original rotation, so that each corner is identified unequivocally. This is also done based on the binary codification. A dictionary of markers is the set of markers that are considered in a specific application. It is simply the list of binary codifications of each of its markers. The main properties of a dictionary are the dictionary size and the marker size. The dictionary size is the number of markers that compose the dictionary.
The marker size is the size of those markers (the number of bits). The aruco module includes some predefined dictionaries covering a range of different dictionary sizes and marker sizes. One may think that the marker id is the number obtained by converting the binary codification to a decimal number. However, this is not practical, since for large marker sizes the number of bits is too high and managing such huge numbers is impractical. Instead, a marker id is simply the marker index inside the dictionary it belongs to. For instance, the first 5 markers inside a dictionary have the ids 0, 1, 2, 3 and 4.
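A sketch of how this marker detection could feed the orientation feedback loop is given below; it uses OpenCV's aruco module (the functional detectMarkers interface from opencv-contrib-python versions before 4.7), and the choice of the DICT_4X4_50 dictionary is an assumption made for illustration.

```python
# Sketch of marker-based pose feedback using the cv2.aruco module.
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def robot_pose(frame):
    """Return (centre, heading in degrees) of the first detected marker, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None:
        return None
    c = corners[0][0]                     # 4x2 array of corner pixel coordinates
    centre = c.mean(axis=0)
    # heading taken from the top edge of the marker (corner 0 -> corner 1)
    dx, dy = c[1] - c[0]
    heading = np.degrees(np.arctan2(dy, dx))
    return centre, heading
```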
Conclusion
The main goal of this project is to build a much more efficient AGV system for industrial plants with an overhead camera solution. Within the project scope, a prototype system will be built with a centralized controller and a swarm-intelligence-powered mobile robot. A standalone wavefront algorithm will be used to determine the path plan. The Intel Neural Compute Stick will be used to provide enough neural processing power to the main controlling board. Image orientation algorithms such as OpenCV's ArUco markers will be used to create a real-time orientation feedback system which communicates via the MQTT protocol over Wi-Fi. The progress of the second semester is mainly based on building the robot and establishing the MQTT protocol.