This project explores and implements a highly integrated robotic system that tackles specific automation tasks by combining the Robot Operating System (ROS) with advanced hardware components. One part is the LIMO Pro SLAM rover, built around an NVIDIA Jetson Orin Nano and capable of autonomous navigation, map building, and path planning; the other is the myCobot 280 M5, a compact 6-degree-of-freedom robotic arm with a working radius of 280 mm, capable of precise item handling and manipulation. By tightly integrating these two parts, we created a compound robotic system able to perform complex tasks such as automated item transfer, environmental monitoring, and other applications requiring a high level of autonomy and operational flexibility.
During development, we delved into ROS's powerful features, including but not limited to autonomous navigation implemented with move_base and real-time SLAM (Simultaneous Localization and Mapping) performed with gmapping. The project also involves control of the robotic arm and the application of computer vision for object recognition and environmental perception, enhancing the intelligence and adaptability of the robot's operation.
The project hopes to provide a practical reference for technology enthusiasts, educators, and researchers, inspiring more innovative ideas and exploratory applications.
Technology and Hardware Overview
myCobot 280 M5Stack
The myCobot 280, developed by Elephant Robotics, is a 6-degree-of-freedom collaborative robotic arm that is flexible in design and powerful in functionality, making it particularly well suited to education, research, and similar scenarios.
The myCobot 280 M5 supports multiple programming and control methods and works across various operating systems and programming languages. Its key specifications and capabilities include:
● Main control and auxiliary control chips: ESP32
● Performance: Working radius of 280mm
● Support for Bluetooth (2.4G/5G) and wireless (2.4G 3D Antenna)
● Multiple input and output ports
● Supports free movement, joint motion, Cartesian movement, trajectory recording, and wireless control
● Compatible operating systems: Windows, Linux, macOS
● Supported programming languages: Python, C++, C#, JavaScript
● Supported programming platforms and tools: RoboFlow, myBlockly, Mind+, UiFlow, Arduino, myStudio
● Supported communication protocols: Serial control protocol, TCP/IP, MODBUS
These features make the myCobot 280 M5 a versatile, easy-to-use robot solution applicable to a wide range of application scenarios.
LIMO Pro
AgileX Robotics' LIMO represents an innovation in the field of mobile robotics, integrating flexibility and powerful functionality into a compact platform. It is the world's first ROS development platform designed for robot education, functional research and development, and product development, capable of adapting to a wide range of scenarios and meeting the needs of industry applications. Here is a detailed overview of the hardware and technical characteristics of LIMO Pro:
LIMO utilizes LiDAR and a depth camera for environmental perception, combined with the powerful computing capabilities of NVIDIA Jetson Orin Nano, to achieve high-precision SLAM mapping and autonomous navigation. Not only does LIMO serve as a mobile robot performing complex navigation and transportation tasks, but its multimodal mobility capability also significantly enhances the robot system's applicability range and flexibility. Together with the myCobot 280 M5 robotic arm, LIMO provides an efficient and reliable solution for automation applications, demonstrating great potential and value in the fields of robot education, research and development, and product development.
Software Architecture
The software architecture is divided into four main parts: navigation and mapping, object detection, robotic arm control, and system integration and communication. These parts are integrated through the ROS framework, using its communication mechanisms (topics, services, and actions) to enable interaction between modules.
The overall project is divided into 3 main modules: the functionality of LIMO PRO, machine vision processing, and the functionality of the robotic arm.
Gmapping:
Gmapping is a widely used open-source SLAM algorithm based on a particle-filter framework (specifically, a Rao-Blackwellized particle filter). It makes effective use of wheel odometry and does not demand a high LiDAR update rate, so it can build maps of small scenes with relatively low computational cost and good precision.
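As a quick illustration of how other nodes consume gmapping's output, here is a minimal sketch, assuming gmapping is already running and publishing an occupancy grid on the standard /map topic, that subscribes to the map and logs its dimensions:
import rospy
from nav_msgs.msg import OccupancyGrid

def map_callback(grid):
    # OccupancyGrid cells are -1 (unknown), 0 (free), up to 100 (occupied)
    rospy.loginfo("Received map: %dx%d cells at %.3f m/cell",
                  grid.info.width, grid.info.height, grid.info.resolution)

rospy.init_node('map_listener')
rospy.Subscriber('/map', OccupancyGrid, map_callback)
rospy.spin()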
Once a map has been constructed, the robot can navigate on it. Navigation mainly involves robot localization and path planning, for which ROS provides the following two packages:
1. move_base: Implements optimal path planning in robot navigation.
2. amcl (Adaptive Monte Carlo Localization): Implements robot localization in a two-dimensional map.
Based on these two packages, ROS offers a complete navigation framework.
The robot only needs to publish the necessary sensor information and a navigation goal, and ROS handles the rest of the navigation function. Within this framework, the move_base package provides the main execution and interaction interface for navigation, while the amcl package keeps the robot precisely localized so that the planned path remains accurate.
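To illustrate this interaction interface, here is a minimal sketch of sending a navigation goal to move_base through its actionlib interface; the node name and goal coordinates are placeholders, not values from this project:
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def send_goal(client, x, y):
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0  # unit quaternion, no rotation
    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()

if __name__ == '__main__':
    rospy.init_node('nav_goal_sender')
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()
    state = send_goal(client, 1.0, 0.5)
    rospy.loginfo('Navigation finished with state %d', state)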
During navigation, two local path-planning algorithms are used: DWA (Dynamic Window Approach) and TEB (Timed Elastic Band). Both are local planners that refine the route produced by the global planner, ensuring that the vehicle can safely proceed to its destination without colliding with obstacles.
myCobot 280 Function
ROS primarily supports two programming languages: Python and C++. Control of the robotic arm is based mainly on the Python pymycobot API library, which provides a comprehensive set of control methods for the myCobot 280. Below are introductions to several commonly used methods.
pymycobot API:
The following two methods move the robotic arm by commanding joint angles: the first controls the angle of a single joint, and the second controls the angles of all joints at once.
send_angle(id, angle, speed)
send_angles(angle_list, speed, mode)
For executing grabbing motions, control over angles alone is often insufficient. Therefore, pymycobot also offers coordinate control, which allows for the control of the robotic arm's end effector movement in space.
send_coord(id, coord, speed)
send_coords(coords, speed, mode)
These two methods allow individual control of the end effector along and about the X, Y, Z, RX, RY, and RZ axes, making grabbing and manipulation more convenient.
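As a brief illustration, the sketch below combines the joint-angle and coordinate calls; the serial port, angles, and coordinates are arbitrary placeholders that would need adjusting for a real setup:
from pymycobot.mycobot import MyCobot
import time

# Port and baud rate are placeholders; adjust for your machine.
mc = MyCobot('/dev/ttyUSB0', 115200)

# Move all six joints to a home-like pose at 50% speed.
mc.send_angles([0, 0, 0, 0, 0, 0], 50)
time.sleep(3)

# Rotate joint 1 alone to 45 degrees.
mc.send_angle(1, 45, 50)
time.sleep(2)

# Drive the end effector to a Cartesian pose [x, y, z, rx, ry, rz];
# mode 1 requests linear interpolation of the end-effector path.
mc.send_coords([150, -60, 200, -170, 0, -90], 50, 1)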
pymycobot is one control option and is quite user-friendly. Another option is based on the ROS framework's MoveIt, a powerful robot motion-planning framework that includes path planning, motion control, collision detection, kinematic calculations, and more. Below is a brief demonstration of driving the arm through MoveIt.
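Here is a minimal sketch using MoveIt's Python interface, moveit_commander; the planning group name 'arm_group' is a placeholder that depends on the MoveIt configuration generated for the myCobot:
import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('mycobot_moveit_demo')

# 'arm_group' is a placeholder; use the planning group name from
# your own MoveIt configuration.
group = moveit_commander.MoveGroupCommander('arm_group')

# Nudge the end effector 5 cm upward from its current pose.
pose = group.get_current_pose().pose
pose.position.z += 0.05
group.set_pose_target(pose)
group.go(wait=True)
group.stop()
group.clear_pose_targets()

moveit_commander.roscpp_shutdown()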
The project also processes vision. In ROS, the vision_opencv stack (notably cv_bridge) and image_transport are important tools and libraries for handling image data, playing a key role in robot vision systems and image processing.
cv_bridge provides the interface between ROS and OpenCV: it converts between ROS image messages and OpenCV image formats, enabling the use of OpenCV for image processing within the ROS framework.
When using OpenCV in ROS, image data is typically published and subscribed to as ROS messages. Therefore, cv_bridge is needed to convert the data format. Below is a simple example showing how to subscribe to an image topic in a ROS node and process the image using OpenCV:
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError
import cv2

def image_callback(msg):
    try:
        # Convert the ROS image message to an OpenCV image (8-bit BGR)
        cv_image = bridge.imgmsg_to_cv2(msg, "bgr8")
    except CvBridgeError as e:
        rospy.logerr(e)
        return
    # Process the image, for example by converting it to grayscale
    gray = cv2.cvtColor(cv_image, cv2.COLOR_BGR2GRAY)
    # Display the image
    cv2.imshow("Image window", gray)
    cv2.waitKey(3)

# Initialize the ROS node
rospy.init_node('image_listener')
# Create the CvBridge
bridge = CvBridge()
# Subscribe to the image topic
image_sub = rospy.Subscriber("/camera/rgb/image_raw", Image, image_callback)
# Enter the ROS event loop
rospy.spin()
Although using OpenCV for image processing in the ROS environment introduces additional steps for data format conversion and node communication, this approach also brings higher modularity and system integration flexibility. This makes image processing more conveniently integrated with other systems and functions of the robot.
Scene Introduction
This project aims to implement an integrated automation system comprising a LIMO Pro and a myCobot 280 M5. The system design allows the LIMO Pro to autonomously navigate to a specified location; upon arrival, the myCobot 280 M5 robotic arm executes a grabbing task, after which the rover returns to the starting point or another specific location.
Project Process
Startup and Initialization:
● Upon system startup, a self-check is performed, including checks of the Limo Pro's navigation system and the functionality of the myCobot 280 M5 robotic arm.
Navigate to the Target Point:
● Using SLAM technology and navigation algorithms on the Limo Pro, an optimal route to the target point is planned based on preset or dynamically inputted coordinates.
● Limo Pro autonomously avoids obstacles and moves along the planned path to the target point.
Execute the Grabbing Task:
● Upon reaching the target point, sensors on the Limo Pro are used to locate the target object.
● The myCobot 280 M5 robotic arm executes the grabbing action based on the location of the target object. This step may involve precise motion planning to ensure successful grabbing.
Return to a Specific Location:
● After completing the grabbing task, Limo Pro plans the route again to return to the starting point or move to another specified location for item delivery or task completion.
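To make this flow concrete, here is a heavily simplified sketch of how the stages could be chained in a single script. All coordinates, the serial port, and the gripper call (an adaptive gripper is assumed) are illustrative assumptions rather than the project's final code:
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal
from pymycobot.mycobot import MyCobot
import time

def goto(client, x, y):
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0
    client.send_goal(goal)
    client.wait_for_result()

if __name__ == '__main__':
    rospy.init_node('pick_and_return')
    nav = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    nav.wait_for_server()
    arm = MyCobot('/dev/ttyUSB0', 115200)  # placeholder port

    goto(nav, 2.0, 1.0)                      # 1. drive to the target point
    arm.send_coords([150, -60, 120, -170, 0, -90], 50, 1)  # 2. reach toward the object
    time.sleep(3)
    arm.set_gripper_state(1, 50)             # close the gripper
    time.sleep(2)
    arm.send_angles([0, 0, 0, 0, 0, 0], 50)  # tuck the arm for transport
    goto(nav, 0.0, 0.0)                      # 3. return to the start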
This series of articles is divided into two parts: The first article mainly introduces the project's conceptual design, system architecture, and the selection of key components, providing readers with a comprehensive project overview and technical background. The following article will delve into the technical details of the project, including the construction of the software architecture, the application of key technologies, the system debugging process, and the challenges and solutions encountered during development.
In the next article, we will get into the technical core of the project, sharing practical coding practices, debugging tips, and the thought processes and solution strategies used when facing project challenges. Stay tuned for the next article, where we will explore the depth and breadth of technology in this integrated autonomous navigation and manipulation robot project.