Introduction:
The project, Rescue and Post-Disaster Response Robot (Amphi-Sentry), is designed to enhance post-disaster response and relief efforts. This autonomous robot navigates disaster-stricken zones, gathering crucial information about debris and affected individuals. It is specifically developed for high-risk disaster-prone areas, where initial human rescue efforts can be hazardous. By deploying an autonomous bot to survey these regions first, rescuers can navigate more efficiently and execute organized rescue operations.
Working Principle:
The bot employs differential drive locomotion, which consists of two motorized wheels at the back and a caster wheel in the front. This setup allows the bot to maneuver effectively through various terrains.
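For reference, differential-drive kinematics reduce to two short equations: each wheel's speed is the body's linear velocity plus or minus half the track width times the angular velocity, divided by the wheel radius. The sketch below is a generic formulation (the wheel radius and separation are placeholder values, not our chassis measurements):

```python
# Differential-drive inverse kinematics: body velocity -> wheel speeds.
# Generic formulation; WHEEL_RADIUS and WHEEL_SEPARATION are placeholder
# values, not the actual measurements of our chassis.

WHEEL_RADIUS = 0.0325      # metres (placeholder)
WHEEL_SEPARATION = 0.17    # metres between the two rear wheels (placeholder)

def body_to_wheel_speeds(linear_x: float, angular_z: float) -> tuple[float, float]:
    """Convert linear (m/s) and angular (rad/s) velocity to wheel speeds (rad/s)."""
    v_left = (linear_x - angular_z * WHEEL_SEPARATION / 2.0) / WHEEL_RADIUS
    v_right = (linear_x + angular_z * WHEEL_SEPARATION / 2.0) / WHEEL_RADIUS
    return v_left, v_right

# Example: drive forward at 0.2 m/s while turning left at 0.5 rad/s.
print(body_to_wheel_speeds(0.2, 0.5))
```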
Our bot navigates using a LiDAR sensor and employs SLAM (Simultaneous Localization and Mapping) and Nav2 for mapping and navigation. Together, these allow the bot to build a real-time map of its environment and navigate it autonomously.
A motor driver controls the two motors and is connected to an Arduino Nano. This configuration isolates the primary controller (the AMD KR260) from reverse-voltage and load-related damage. The Arduino Nano and KR260 are connected via a USB-A to mini-USB-B cable, using the ROS-Arduino bridge for seamless control.
Data from the encoder-equipped motors is relayed through the Arduino Nano to ROS2 nodes, ensuring organized and controlled movement of the bot. We built a ROS2 workspace to integrate all of these nodes, resulting in a fully autonomous robot running ROS2 Humble.
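As an illustration of this pipeline, here is a minimal rclpy sketch of a bridge node that reads encoder tick counts from the Arduino over serial and republishes them on a ROS2 topic. The serial port, baud rate, line format ("L<ticks> R<ticks>"), and topic name are assumptions for illustration, not our exact protocol:

```python
# Minimal ROS2 <-> Arduino serial bridge sketch (illustrative only).
# Assumes the Arduino prints lines like "L1234 R1250" with cumulative
# encoder ticks; the port, baud rate, and format are hypothetical.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Int64MultiArray
import serial

class EncoderBridge(Node):
    def __init__(self):
        super().__init__('encoder_bridge')
        self.port = serial.Serial('/dev/ttyUSB0', 115200, timeout=0.1)
        self.pub = self.create_publisher(Int64MultiArray, 'wheel_ticks', 10)
        self.timer = self.create_timer(0.02, self.poll)  # poll at 50 Hz

    def poll(self):
        line = self.port.readline().decode(errors='ignore').strip()
        if not line.startswith('L'):
            return
        try:
            left_str, right_str = line.split()
            msg = Int64MultiArray()
            msg.data = [int(left_str[1:]), int(right_str[1:])]
            self.pub.publish(msg)
        except ValueError:
            self.get_logger().warn(f'Malformed encoder line: {line!r}')

def main():
    rclpy.init()
    node = EncoderBridge()
    rclpy.spin(node)
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```

A downstream odometry node can then convert these cumulative tick counts into wheel displacements and integrate the robot's pose.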
Mechanical Design:
The Amphi-Sentry robot is designed to be simple, cost-effective, and highly functional. Key aspects of the design include:
Material: The robot's body is constructed from polycarbonate sheets, known for their excellent durability and sturdiness. This material ensures the robot can withstand the harsh conditions of disaster-stricken areas while keeping the overall cost low.
Chassis Structure: The chassis is designed with three levels (stories):
- First Story: Houses the motor driver, motors, and battery. This placement ensures a low center of gravity, enhancing stability.
- Second Story: Contains the AMD KR260, Arduino Nano, and ESP32-CAM module. This central layer manages the primary processing and communication functions.
- Third Story: Positioned at the top, it holds the LiDAR sensor centrally, providing a clear and unobstructed view for accurate mapping and navigation.
Wheel Configuration: The bot features a differential drive system with:
- One Front Wheel: A 360-degree rotating caster wheel at the front for balance and smooth turning.
- Two Rear Wheels: Each driven by a separate motor, enabling precise control and maneuverability.
Design Aesthetics: Despite its functional focus, the design is intended to be simple and aesthetically pleasing, ensuring ease of maintenance and operation.
Development Process & Obstacles:
The development process faced numerous challenges, intensified by time constraints and resource limitations. Key aspects include:
Time Constraints: Although two months were allocated to the hardware development phase, concurrent college internships cut into the available time, compressing the actual build to just 15 days.
Budget Constraints: Financial limitations necessitated a focus on a simple design using cost-effective components. This approach ensured that the project remained within budget without compromising essential functionalities.
Hardware Integration Challenges:
- Component Selection: Selecting components that balanced cost and functionality was challenging. The use of polycarbonate sheets, economical motor drivers, and other low-cost materials helped manage expenses.
- ROS2 Workspace Issues: Setting up the ROS2 workspace involved significant hurdles, including port-identification errors (e.g., CH341 driver issues) and compatibility problems between the KR260, the attached hardware units, and their TTY ports.
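One trick that helps with the port-identification errors mentioned above is enumerating serial devices programmatically instead of guessing /dev/tty names. The snippet below uses pyserial's list_ports; the vendor ID shown is the one commonly reported for CH340/CH341 chips, but verify it against your adapter:

```python
# List attached serial adapters to find the CH341-based Arduino clone.
# Requires pyserial (pip install pyserial). The 0x1A86 vendor ID is the
# one commonly reported for CH340/CH341 chips; verify for your hardware.
from serial.tools import list_ports

for port in list_ports.comports():
    print(port.device, hex(port.vid or 0), port.description)
    if port.vid == 0x1A86:
        print(f'-> likely CH340/CH341 adapter on {port.device}')
```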
Overcoming Obstacles: Despite these challenges, the team persevered:
- Node Creation: Developed nodes for the differential drive (with its URDF description), the ROS-Arduino bridge, and the RPLIDAR, and integrated them to achieve autonomous operation.
- Machine Learning Integration: Implemented an ML algorithm for image processing using the ESP32-CAM feed. The processed data is transmitted to a server (currently on a separate laptop) for further analysis.
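A minimal sketch of that camera-to-server hop might look like the following; the endpoint URL and JSON payload shape are hypothetical placeholders, since the server currently runs ad hoc on a laptop:

```python
# Post detection results from the robot to the analysis server.
# The URL and payload fields are hypothetical placeholders.
import requests

def report_detections(detections: list[dict]) -> bool:
    payload = {'robot_id': 'amphi-sentry-01', 'detections': detections}
    try:
        resp = requests.post('http://192.168.1.50:8000/detections',
                             json=payload, timeout=2.0)
        return resp.ok
    except requests.RequestException:
        return False  # network loss is expected in the field; caller retries

# Example usage with one detected debris bounding box.
report_detections([{'label': 'debris', 'confidence': 0.87,
                    'box': [120, 80, 310, 240]}])
```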
Implementation Details:
ROS2 for Development:
The development of the Amphi-Sentry robot heavily relied on ROS2 (Robot Operating System 2). ROS2 offers a collection of packages and tools that facilitate the creation of complex robotic systems. Using ROS2, we were able to develop a flexible and modular architecture for the robot. The primary advantages of using ROS2 include:
- Modularity and Reusability: The ROS2 framework allows for the creation of reusable and interchangeable software modules. This means that even if we change hardware components like motors or drivers, the software remains unaffected, ensuring smooth operation.
- Communication and Integration: ROS2 provides robust communication capabilities between different nodes, allowing seamless integration of various sensors, actuators, and processing units. This is critical for the coordinated functioning of the robot's navigation, perception, and control systems.
- Scalability and Extensibility: The workspace created in ROS2 can serve as a foundational building block for future projects. This scalability ensures that the same software infrastructure can be adapted and extended for different robotic applications, making it a versatile tool for ongoing development.
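As a concrete illustration of this modular integration, a ROS2 Python launch file can bring up the independent nodes together. The package and executable names for our own nodes below are placeholders, not our exact workspace layout (rplidar_ros is the standard RPLIDAR driver package):

```python
# bringup.launch.py -- illustrative launch file composing the robot's nodes.
# 'amphi_sentry' package and executable names are placeholders.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        # Serial bridge between the Arduino Nano and ROS2 topics.
        Node(package='amphi_sentry', executable='encoder_bridge'),
        # RPLIDAR driver publishing /scan (param name per the stock driver).
        Node(package='rplidar_ros', executable='rplidar_composition',
             parameters=[{'serial_port': '/dev/ttyUSB1'}]),
        # Differential-drive controller consuming /cmd_vel.
        Node(package='amphi_sentry', executable='diff_drive_controller'),
    ])
```

Because each node is an interchangeable module, swapping a motor driver or sensor only changes one entry here; the rest of the stack is untouched.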
SLAM Toolbox for Mapping:
The robot uses the SLAM (Simultaneous Localization and Mapping) toolbox to map the disaster region. During a dry run, the robot is teleoperated to navigate through the area, and the SLAM toolbox creates a detailed map. This map is stored and used as a reference for autonomous navigation.
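In ROS2 Humble, this teleoperated mapping pass can be started by including slam_toolbox's stock online-async launch file. The sketch below is one plausible way to do so; the file and argument names follow the packaged slam_toolbox release, but treat them as assumptions to verify against your installed version:

```python
# mapping.launch.py -- start slam_toolbox for the teleoperated dry run.
# Follows the stock slam_toolbox online_async launch; verify paths/args
# against the installed package version.
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource

def generate_launch_description():
    slam_launch = os.path.join(
        get_package_share_directory('slam_toolbox'),
        'launch', 'online_async_launch.py')
    return LaunchDescription([
        IncludeLaunchDescription(
            PythonLaunchDescriptionSource(slam_launch),
            launch_arguments={'use_sim_time': 'false'}.items()),
    ])
```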
Nav2 for Autonomous Navigation:
Once the map is generated, Nav2 is launched with the exported map, enabling the robot to navigate the region autonomously. Nav2 processes real-time sensor data to detect and avoid obstacles, ensuring the robot stays on a valid path even in dynamic environments.
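One plausible way to feed the saved map into Nav2 is via nav2_bringup's stock bringup launch; the map path below is a placeholder:

```python
# navigation.launch.py -- bring up Nav2 with the map saved during the dry run.
# Uses nav2_bringup's stock bringup launch; the map path is a placeholder.
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource

def generate_launch_description():
    nav2_launch = os.path.join(
        get_package_share_directory('nav2_bringup'),
        'launch', 'bringup_launch.py')
    return LaunchDescription([
        IncludeLaunchDescription(
            PythonLaunchDescriptionSource(nav2_launch),
            launch_arguments={
                'map': '/home/ubuntu/maps/disaster_site.yaml',  # placeholder
                'use_sim_time': 'false',
            }.items()),
    ])
```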
ML Model:
Post-Earthquake Exterior Environment Dataset
The "Post-Earthquake Exterior Environment" dataset hosted on Roboflow is a comprehensive collection of images designed to aid in the development of machine learning models for post-earthquake damage assessment. The dataset includes a variety of images captured from exterior environments affected by earthquakes, focusing on visual features like debris, collapsed structures, and other signs of destruction. These images are annotated with labels identifying different types of damage and debris, providing essential data for training and evaluating computer vision models.
The primary purpose of this dataset is to facilitate the development of automated systems that can quickly and accurately assess earthquake damage. Such systems are crucial for emergency response, enabling rapid identification of hazardous areas and prioritization of rescue efforts. The dataset's detailed annotations allow for precise training of models, ensuring they can distinguish between different types of damage and debris with high accuracy.
This dataset is especially valuable for researchers and engineers working on disaster response technologies. By providing a large and varied collection of annotated images, it supports the creation of robust models capable of operating in diverse and challenging environments. The images in this dataset cover various conditions, times of day, and degrees of damage, ensuring that models trained on it are well-prepared for real-world deployment.
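For reference, Roboflow datasets can be pulled into a training environment with the roboflow Python package; the workspace, project, and version identifiers below are placeholders to be replaced with the values shown on the dataset's Roboflow page:

```python
# Download the dataset from Roboflow in YOLO format (pip install roboflow).
# The workspace/project/version identifiers are placeholders; substitute
# the actual values from the dataset's Roboflow page.
from roboflow import Roboflow

rf = Roboflow(api_key='YOUR_API_KEY')
project = rf.workspace('your-workspace').project('post-earthquake-exterior-environment')
dataset = project.version(1).download('yolov9')
print('Dataset downloaded to:', dataset.location)
```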
YOLOv9 for Debris Detection
YOLO (You Only Look Once) is a state-of-the-art object detection algorithm known for its speed and accuracy. YOLOv9, a recent iteration, builds on its predecessors with architectural innovations such as Programmable Gradient Information (PGI) and the GELAN architecture, making it highly efficient for real-time object detection tasks.
When integrated with the "Post-Earthquake Exterior Environment" dataset, YOLOv9 processes the images to detect and classify debris and other post-earthquake damage. The technical workflow involves several key steps:
- Image Input:
YOLOv9 takes an input image and, in the classic YOLO formulation, divides it into an S × S grid.
- Feature Extraction:
Each grid cell is responsible for predicting bounding boxes and class probabilities for objects whose centers fall within the cell. The model uses a convolutional neural network (CNN) to extract features from the input images.
- Bounding Box Prediction:
Each grid cell predicts a fixed number of bounding boxes, providing the coordinates, confidence scores, and class probabilities.
- Non-Maximum Suppression:
To filter out redundant and overlapping bounding boxes, YOLOv9 applies non-maximum suppression, ensuring that only the most confident predictions are retained (a minimal NMS sketch follows this list).
- Output:
The final output consists of bounding boxes around detected debris, each labeled with the respective class and confidence score.
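To make the non-maximum suppression step concrete, here is a minimal, framework-free IoU-based NMS sketch. Real YOLOv9 pipelines use optimized library implementations (e.g., torchvision.ops.nms), so this is for illustration only:

```python
# Minimal IoU-based non-maximum suppression (illustrative; production
# pipelines use optimized implementations such as torchvision.ops.nms).

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, drop overlapping ones, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```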
YOLOv9's real-time detection capabilities make it particularly suited for rapid damage assessment in post-disaster scenarios. Its ability to quickly analyze large volumes of images and accurately identify debris can significantly enhance the efficiency of emergency response operations, enabling faster and more effective allocation of resources.
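End to end, running a trained detector over a camera frame can be as short as the sketch below. It assumes the Ultralytics package and a hypothetical best.pt checkpoint produced by training on the dataset above:

```python
# Run a trained detector on an image (pip install ultralytics).
# 'best.pt' is a hypothetical checkpoint from training on the
# Post-Earthquake Exterior Environment dataset.
from ultralytics import YOLO

model = YOLO('best.pt')
results = model('frame_from_esp32cam.jpg', conf=0.5)  # confidence threshold
for result in results:
    for box in result.boxes:
        cls_name = result.names[int(box.cls)]
        print(f'{cls_name}: {float(box.conf):.2f} at {box.xyxy.tolist()}')
```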
Key Features:
- Autonomous Navigation: The Amphi-Sentry robot can autonomously navigate through various terrains, including debris-filled and uneven surfaces, using advanced sensors and navigation algorithms.
- Post-Disaster Area Surveying: Equipped with LiDAR and cameras, the robot can survey post-disaster areas, creating detailed maps and gathering crucial information about the environment.
- Debris and Victim Detection: The robot can identify and locate debris and trapped individuals in the affected areas, using machine learning and sensor data. This information helps rescuers prioritize their efforts.
- Rescue Assistance: By providing detailed information about the terrain and locating obstacles and victims, the robot significantly eases the task of rescuers, enabling them to execute rescue operations more efficiently and safely.
Use of AMD KR260:
The AMD KR260 kit plays a pivotal role in the functionality and performance of the Amphi-Sentry robot:
- Operating System: The kit runs Ubuntu 22.04, providing a robust and stable environment for developing and executing the robot's software.
- Software Environment: The robot's software stack includes ROS2, VS Code, Google Colab, Nav2, the RPLIDAR driver, and more, all of which are seamlessly supported by the AMD KR260.
- Performance Advantages: Compared to alternatives like the Raspberry Pi, the AMD KR260 (a Zynq UltraScale+ MPSoC pairing a quad-core Arm Cortex-A53 with programmable FPGA fabric) offers superior computing power, enabling the smooth operation of resource-intensive applications such as machine learning algorithms.
- Machine Learning: The enhanced computing capabilities of the AMD KR260 allow for efficient execution of machine learning programs, essential for tasks like image processing and victim detection.
The integration of the AMD KR260 kit ensures that the Amphi-Sentry robot operates efficiently, with the computing power necessary to handle complex navigation, sensor data processing, and machine learning tasks.
Future Enhancements:
- Implementing Thermal Cameras:
To significantly improve the robot's capability in identifying individuals in disaster-stricken environments, we plan to integrate a thermal camera. This advanced feature will enable the robot to detect individuals by analyzing thermal maps and identifying heat signatures. By assessing body temperature variations, the robot can prioritize those in critical conditions, as an elevated body temperature might indicate severe injuries or distress. This capability will enhance the precision of rescue operations and provide vital information for emergency responders.
- Implementing a Storage or Portable Feature:
To augment the robot’s role in disaster relief, we aim to add a storage or portable feature. This enhancement will allow the robot to transport and deliver essential supplies, such as medical kits, food, and water, directly to affected areas. The ability to carry and distribute resources will make the robot a crucial asset in providing immediate aid and support to disaster-struck regions, facilitating more efficient and effective relief efforts.
- Making the Bot Capable of Waterway Navigation:
Although we have devised a navigation method for waterway travel using two rear propellers, the project faced challenges due to the high cost and limited availability of waterproofing materials and suitable propellers. We are determined to address these issues to achieve full multi-terrain functionality. By enabling waterway navigation, the robot will be able to operate effectively in both terrestrial and aquatic environments, enhancing its versatility and extending its operational range in diverse disaster scenarios.
- Enhanced Environmental Sensors:
Future developments will include the integration of additional environmental sensors to gather more comprehensive data. Sensors for detecting hazardous gases, radiation, or structural damage could provide critical information about the disaster area. This would allow for more informed decision-making and safer rescue operations.
- Advanced AI and Machine Learning Capabilities:
We plan to incorporate more sophisticated AI and machine learning algorithms to improve the robot's decision-making and autonomous functions. Enhanced algorithms could enable better pattern recognition, improved obstacle avoidance, and more accurate analysis of environmental conditions and human status.
- Improved Communication Systems:
Upgrading the robot’s communication systems to support better data transmission and real-time updates will be crucial. Enhanced connectivity will facilitate more reliable communication with emergency response teams and ensure that data collected by the robot is transmitted promptly for analysis and action.
Conclusion:
Our journey throughout this project has been marked by continuous learning and personal growth. We take pride in the advancements we achieved and the skills we developed along the way. Although we faced obstacles, our efforts have resulted in a noteworthy achievement that reflects our dedication and hard work. We look forward to applying the lessons learned to future endeavors and continuing our pursuit of excellence.