In modern robotics, high-precision environmental perception and mapping are crucial for achieving autonomous navigation. This paper demonstrates how to use the myAGV Jetson Nano mobile platform, equipped with the Jetson Nano B01 board, in combination with RTAB-Map and a 3D camera to achieve more detailed and three-dimensional environmental mapping. The myAGV Jetson Nano supports SLAM (Simultaneous Localization and Mapping) radar navigation, and the Jetson Nano offers robust computational power, making it suitable for handling complex SLAM tasks. By incorporating a 3D camera, we can integrate the depth information captured by the camera into the map, enriching it with three-dimensional data in addition to the traditional planar information. In this paper, we will provide a detailed overview of the technologies used in this process and address the challenges encountered during implementation.
Background and Requirements Analysis
In robotic autonomous navigation, precise environmental perception and map construction are essential. Traditional 2D SLAM techniques can achieve real-time localization and mapping, but they often fall short in describing the three-dimensional structure of the environment in complex spaces.
To address this issue, we chose the myAGV Jetson Nano, a product equipped with high-performance SLAM radar navigation capabilities and powerful computational processing, making it ideal for autonomous tasks in complex environments. However, 2D SLAM still has limitations when describing three-dimensional spaces. Therefore, we introduced a 3D camera, which captures the depth information of the environment, generating a more detailed and three-dimensional map, thereby enhancing the robot's environmental perception capabilities.
To achieve this goal, we employed RTAB-Map as the mapping tool, which can process RGB-D data and supports real-time 3D mapping and localization. By integrating RTAB-Map with a 3D camera on this platform, we aim to achieve high-precision 3D SLAM mapping in complex environments to meet practical application needs.
Product Overview
myAGV Jetson Nano
The myAGV Jetson Nano 2023 utilizes the NVIDIA® Jetson Nano B01 4GB core board, paired with Ubuntu Mate 20.04, an operating system customized by Elephant Robotics for its robots, providing a smooth and user-friendly experience. The myAGV 2023 supports 2D mapping and navigation, 3D mapping and navigation, graphical programming, visualization software, ROS simulation, and multiple control methods such as joystick and keyboard control. It is an ideal choice for research, education, and individual makers.
Astra Pro 2 Depth Camera
The Astra Pro 2 depth camera uses 3D structured light imaging technology to capture depth images of objects while simultaneously collecting color images through a color camera. It is suitable for 3D object and space scanning within a distance range of 0.6 m to 6 m, capable of measuring the depth data of objects within that range. As an upgraded version of the Astra series, the Astra Pro 2 is equipped with the MX6000 self-developed depth sensing chip, supporting a maximum depth-image resolution of 1280x1024. It also features depth and color image alignment at multiple resolutions and can be widely applied in scenarios such as robot obstacle avoidance, low-precision 3D measurement, and gesture interaction. With RGB-D functionality, it captures both color images and depth information for generating three-dimensional maps.
All the necessary dependencies and function packages required for this setup are pre-installed on the Ubuntu 20.04 system of the myAGV, allowing us to directly use the ROS packages for RTAB-Map and Astra Pro2.
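As a quick sanity check before launching anything, you can confirm that these packages are visible to ROS. The package names below follow the launch commands used later in this article; the RTAB-Map package name may differ slightly depending on how it was installed on your image:
rospack find myagv_odometry
rospack find orbbec_camera
rospack find rtabmap_ros
rospack find myagv_navigation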
Implementation of RTAB-Map
The myAGV platform comes pre-packaged with several essential functions, allowing for direct usage. In this section, we will analyze the functionalities provided and walk through the process of deploying them.
Program Initialization
The first step is to launch the odometry and LiDAR components:
roslaunch myagv_odometry myagv_active.launch
The `myagv_active.launch` file is responsible for initializing and launching the core components related to robot motion estimation and sensor data acquisition.
<launch>
  <node pkg="myagv_odometry" type="myagv_odometry_node" name="myagv_odometry_node" output="screen" />
  <param name="robot_description" textfile="$(find myagv_urdf)/urdf/myAGV.urdf"/>
  <node name="joint_state_publisher" pkg="joint_state_publisher" type="joint_state_publisher" />
  <node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher" />
  <node name="base2camera_link" pkg="tf" type="static_transform_publisher" args="0.13 0 0.131 0 0 0 /base_footprint /camera_link 50"/>
  <node name="base2imu_link" pkg="tf" type="static_transform_publisher" args="0 0 0 0 3.14159 3.14159 /base_footprint /imu 50"/>
  <node pkg="robot_pose_ekf" type="robot_pose_ekf" name="robot_pose_ekf" output="screen">
    <param name="output_frame" value="odom"/>
    <param name="base_footprint_frame" value="base_footprint"/>
    <param name="freq" value="30.0"/>
    <param name="sensor_timeout" value="2.0"/>
    <param name="odom_used" value="true"/>
    <param name="odom_data" value="odom"/>
    <param name="imu_used" value="true"/>
    <param name="vo_used" value="false"/>
  </node>
  <include file="$(find ydlidar_ros_driver)/launch/X2.launch" />
</launch>
● myagv_odometry_node: This node starts the odometry process, calculating the robot's position and orientation in the environment.
● robot_description: Loads the robot's URDF (Unified Robot Description Format) file, describing the robot's physical structure.
● joint_state_publisher and robot_state_publisher: These nodes publish the robot's joint states and overall state information.
● static_transform_publisher: Defines fixed coordinate transformations to link the relative positions and orientations of the robot's base to sensors like the camera and IMU.
● robot_pose_ekf: Uses an Extended Kalman Filter (EKF) to fuse sensor data from the odometry, IMU, and other sensors, providing a more accurate estimate of the robot's pose.
● ydlidar_ros_driver: Launches the LiDAR driver node to acquire laser scan data from the environment.
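With this launch file running, a quick way to verify that the odometry, laser scan, and TF data are being published is to check the corresponding topics and frames (the topic names below match the remappings used in the RTAB-Map launch file shown later):
rostopic hz /odom
rostopic hz /scan
rosrun tf tf_echo base_footprint camera_link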
Starting the Astra Pro 2 Depth Camera
Next, we start the Astra Pro 2 depth camera:
roslaunch orbbec_camera astra_pro2.launch
This launch file sets up the necessary ROS nodes to handle the RGB-D data stream from the camera, including initializing the camera, configuring various image and depth processing parameters, and publishing camera data to ROS topics for use by other nodes, such as SLAM or object detection.
Some key topics include:
● `/camera/color/camera_info`: Topic for color camera information (CameraInfo).
● `/camera/color/image_raw`: Topic for the raw color image stream.
● `/camera/depth/camera_info`: Topic for depth camera information (CameraInfo).
● `/camera/depth/image_raw`: Topic for the raw depth image stream.
● `/camera/depth/points`: Point cloud topic, available only when `enable_point_cloud` is set to true.
● `/camera/depth_registered/points`: Colored point cloud topic, available only when `enable_colored_point_cloud` is true.
● `/camera/ir/camera_info`: Topic for infrared camera information (CameraInfo).
● `/camera/ir/image_raw`: Topic for the raw infrared image stream.
If you need to modify these settings, please refer to the official SDK documentation: [Orbbec Developer Documentation]
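Before moving on to RTAB-Map, it is worth confirming that the camera topics are actually being published:
rostopic list | grep /camera
rostopic hz /camera/color/image_raw
rostopic hz /camera/depth/image_raw
If you also need the point cloud topics listed above, the Orbbec driver uses the `enable_point_cloud` and `enable_colored_point_cloud` options mentioned earlier; check `astra_pro2.launch` to see whether they can be passed directly as launch arguments, e.g. `roslaunch orbbec_camera astra_pro2.launch enable_colored_point_cloud:=true`.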
Launching RTAB-Map for Mapping
Finally, launch RTAB-Map to begin mapping:
roslaunch myagv_navigation rtabmap_mapping.launch
<launch>
  <group ns="rtabmap">
    <arg name="rtabmap_viz" default="true" />
    <!-- Synchronize the color and depth images into a single rgbd_image message -->
    <node pkg="nodelet" type="nodelet" name="rgbd_sync" args="standalone rtabmap_sync/rgbd_sync" output="screen">
      <remap from="rgb/image" to="/camera/color/image_raw"/>
      <remap from="depth/image" to="/camera/depth/image_raw"/>
      <remap from="rgb/camera_info" to="/camera/color/camera_info"/>
      <remap from="rgbd_image" to="rgbd_image"/>
      <param name="approx_sync" value="true"/>
    </node>
    <node name="rtabmap" pkg="rtabmap_slam" type="rtabmap" output="screen" args="--delete_db_on_start">
      <param name="frame_id" type="string" value="base_footprint"/>
      <param name="subscribe_rgbd" type="bool" value="true"/>
      <param name="subscribe_scan" type="bool" value="true"/>
      <remap from="odom" to="/odom"/>
      <remap from="scan" to="/scan"/>
      <remap from="rgbd_image" to="rgbd_image"/>
      <param name="queue_size" type="int" value="100"/>
      <!-- RTAB-Map's parameters -->
      <param name="RGBD/NeighborLinkRefining" type="string" value="true"/>   <!-- refine odometry links with scan matching -->
      <param name="RGBD/ProximityBySpace" type="string" value="true"/>       <!-- local loop closures based on robot position -->
      <param name="RGBD/AngularUpdate" type="string" value="0.01"/>          <!-- update map only after rotating 0.01 rad -->
      <param name="RGBD/LinearUpdate" type="string" value="0.01"/>           <!-- update map only after moving 0.01 m -->
      <param name="Grid/FromDepth" type="string" value="false"/>             <!-- build the occupancy grid from the laser scan, not the depth image -->
      <param name="Reg/Force3DoF" type="string" value="true"/>               <!-- constrain registration to x, y, yaw -->
      <param name="Reg/Strategy" type="string" value="1"/>                   <!-- 1 = ICP registration -->
      <param name="Icp/VoxelSize" type="string" value="0.05"/>               <!-- downsample clouds to 5 cm voxels before ICP -->
      <param name="Icp/MaxCorrespondenceDistance" type="string" value="0.1"/><!-- max point-matching distance for ICP -->
    </node>
    <node pkg="rviz" type="rviz" name="rviz" args="-d $(find myagv_navigation)/rviz/rtabmap.rviz" output="screen"/>
    <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_laser"
          args="0.0 0.0 0.2 3.1415 0.0 0 /base_footprint /laser_frame 40" />
  </group>
</launch>
● Launch Group: Groups RTAB-Map-related nodes under a common namespace (`rtabmap`) for easier management and data processing.
● RGB-D Sync Node: Synchronizes RGB and depth images from the camera, converting raw camera data into a format that RTAB-Map can process.
● RTAB-Map SLAM Node: Runs the RTAB-Map SLAM algorithm, configuring SLAM parameters such as subscribed sensor data, queue size, and optimization and ICP-related parameters. This node handles real-time sensor data processing, map generation, and robot pose estimation.
● RViz Visualization: Launches RViz for real-time visualization of the map and robot pose generated by RTAB-Map.
● Static Transform Publisher: Defines and publishes fixed coordinate transformations between the LiDAR and the robot's base frame, ensuring that the SLAM algorithm can correctly align sensor data within the same coordinate system.
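Once a mapping run is complete, the result can be inspected or exported. A minimal sketch, assuming RTAB-Map's default database location and that the occupancy grid is published under the `rtabmap` namespace used above:
rosrun map_server map_saver -f ~/myagv_map map:=/rtabmap/grid_map
rtabmap-databaseViewer ~/.ros/rtabmap.db
The first command saves the 2D occupancy grid built during the session (it requires the map_server package); the second opens the full RTAB-Map database (images, point clouds, and graph) for offline inspection.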
Issue
While the basic mapping functionality has been successfully implemented, the process is not very smooth: there is noticeable lag, even on the Jetson Nano board. This indicates that the Jetson Nano's computing power alone is insufficient for seamless mapping, which can lead to disruptions in the mapping process.
Solution
The solution lies in utilizing ROS's multi-machine communication capabilities.
ROS Multi-Machine Communication
ROS multi-machine communication enables sharing information and tasks across multiple computing devices within a ROS network. This is particularly useful in complex robotic applications where a single device (like the Jetson Nano) cannot handle all the computational tasks. By offloading some tasks to a more powerful device (such as a high-performance PC), you can achieve more efficient processing.
In essence, the Jetson Nano handles sensor acquisition and odometry on the robot, while the more powerful PC runs the computationally heavy RTAB-Map processing of the depth camera data, ensuring a smoother and more complete mapping experience.
Steps to Implement ROS Multi-Machine Communication
1. Network Configuration
● Ensure that both the PC and Jetson Nano are on the same network and can communicate with each other.
● Set up the ROS environment variables on each device, particularly `ROS_MASTER_URI` and `ROS_IP` (or `ROS_HOSTNAME`).
PC Configuration (in this example, the PC's IP address is 192.168.1.100 and the Jetson Nano's is 192.168.1.121):
export ROS_MASTER_URI=http://192.168.1.100:11311
export ROS_IP=192.168.1.100
Jetson Nano Configuration:
export ROS_MASTER_URI=http://192.168.1.100:11311
export ROS_IP=192.168.1.121
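These exports apply only to the current terminal, so it is common to add them to `~/.bashrc` on each machine. A simple connectivity check using the example addresses above:
ping 192.168.1.121   # from the PC, make sure the Jetson Nano is reachable
ping 192.168.1.100   # from the Jetson Nano, make sure the PC is reachable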
2. Launch Core Node
● Start the core ROS node on the PC. This will allow the Jetson Nano to communicate with the PC's ROS core through multi-machine communication.
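For example, on the PC:
roscore
All nodes launched on both machines will then register with this master.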
3. Node Distribution
● PC (SLAM Mapping): Run the RTAB-Map node on the PC. It will subscribe to sensor data from the Jetson Nano and handle the SLAM mapping process.
● Jetson Nano (Sensor Processing): Run the sensor driver nodes on the Jetson Nano, such as the depth camera nodes, and publish the image and depth data.
● Optionally, run nodes for processing the SLAM results or map data on either device, depending on their roles.
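Using the launch files described earlier, one plausible split looks like the following (this assumes RTAB-Map and the `myagv_navigation` mapping launch file are also available on the PC, and each command runs in its own terminal):
roslaunch myagv_odometry myagv_active.launch      # Jetson Nano: odometry and LiDAR
roslaunch orbbec_camera astra_pro2.launch         # Jetson Nano: depth camera
roslaunch myagv_navigation rtabmap_mapping.launch # PC: RTAB-Map mapping and RViz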
4. Data Transmission
● Use ROS topics to transmit data between the PC and Jetson Nano. For example, the Jetson Nano can publish the camera's RGB-D data to topics like `/camera/color/image_raw` and `/camera/depth/image_raw`, while the PC's RTAB-Map node subscribes to these topics to perform mapping.
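To confirm that the image streams are actually crossing the network, check their incoming rate on the PC; a steady, non-zero rate means data from the Jetson Nano is arriving:
rostopic hz /camera/color/image_raw
rostopic hz /camera/depth/image_raw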
Expected Outcome
By distributing the workload between the Jetson Nano and a more powerful PC, the mapping process should become much smoother. The PC's superior processing power will handle the intensive tasks, allowing for more fluid and complete mapping, with significantly reduced lag compared to running everything solely on the Jetson Nano.
Summary
In this technical case, we successfully utilized the Jetson Nano board and a 3D camera to implement RTAB-Map for three-dimensional mapping. However, during the implementation, we encountered performance bottlenecks, particularly when running complex SLAM algorithms on the Jetson Nano board, which led to a heavy computational load, impacting the system's real-time performance and stability.
To address this issue, we introduced multi-machine communication, offloading some computational tasks to another computer. This optimization not only alleviated the burden on the Jetson Nano but also enhanced the overall system performance, ensuring a smoother and more efficient SLAM mapping process.