1) Tell us your idea
To keep up with the supply and demand of a rapidly expanding agriculture business, cost-effective technical solutions that encourage efficient use of limited resources must be deployed. Our startup intends to make drone swarms available to the agriculture sector for accessible data collection and fire safety. Drone swarm systems collect data from multiple drones and sensors to survey huge fields and obtain analytical insight into crop health, variation in growth patterns, and irrigation. Because agricultural environments vary so widely, a standardized approach to surveying and analyzing crop health is difficult to apply; by combining adaptive swarm techniques with versatile sensor arrays, optimized flight can be achieved in virtually any environment. The goal of this project is to develop and integrate a functional environment where drone swarms can be adapted for successful data collection, assisting agricultural understanding and risk assessment.
As mentioned, agricultural drone swarms would focus on providing significant fire safety to large agricultural environments that otherwise require extensive resources to protect. Assessing crop health during dry conditions would allow for risk analysis, and the same swarm would provide an effective means for suppression in the event of a fire. Equipping drones with fire-extinguishing resources is already a well-researched field, and applying this project's methodology for effective and efficient drone swarm flight can bring those efforts into the agriculture industry. Given the scarcity of agricultural resources combined with often remote locations, detecting and combating fires is a major challenge. Beyond destroying crops and land, such fires overburden public-safety response capabilities. Swarm drones can be a viable option for early detection and quick suppression of these fires. This project proposes the development of low-cost drones running swarm software that may be used for surveying, infrastructure monitoring, and safety purposes.
We want to target growers of soybeans, oil seeds, and corn in the southern and western United States, as well as Central Africa and Latin America, specifically Brazil, Argentina, and Paraguay. These fields vary drastically in environment and can be hundreds or thousands of acres in size. It would never be possible to scan fields of this magnitude with a single drone; a drone swarm, however, provides the ability to survey the field, identify areas with soil deficiencies, gather data, and even raise fire alerts.
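To make the scale concrete, here is a minimal sketch of how a rectangular field could be split into equal survey strips, one per drone. This is an illustrative assumption on our part, not production code: real fields are irregular and would need proper polygon decomposition.

```python
def partition_field(width_m, height_m, n_drones):
    """Split a rectangular field into equal east-west strips, one
    survey strip per drone. Returns (x_min, y_min, x_max, y_max)
    rectangles in metres, with the origin at the field's corner."""
    strip = height_m / n_drones
    return [(0.0, i * strip, float(width_m), (i + 1) * strip)
            for i in range(n_drones)]

# A 2000 m x 1000 m field (about 200 ha, roughly 500 acres)
# split among 4 drones:
strips = partition_field(2000, 1000, 4)
print(strips[0])  # -> (0.0, 0.0, 2000.0, 250.0)
```

Each drone then only has to cover a quarter of the field, which is what makes swarm surveying of thousand-acre plots tractable.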
2) Describe your development journey
Our team has ambitious goals to transform agriculture through swarm robotics. Historically, this domain has been constrained to academia, defense, and light shows. We spent a significant amount of time reading the academic literature, considering different frameworks, and understanding how it would all come together.
Due to the complexity of this domain, we wanted to leverage an existing framework and apply it to agriculture. We tried SwarmSim by MIT, a micro swarm framework for ROS, mavSwarm, and a couple of others. What we discovered was that many of these frameworks are unmaintained, incompatible with ROS 2 and PX4, or do not provide sufficient capability beyond basic simulation.
This was the main source of frustration in the development journey. There are many versions of PX4 and ROS 2, of the connection between them (micro-RTPS bridge, Fast DDS, MAVLink), and of the environments to run them in (Docker, VM). Documentation was often hard to follow because the technologies moved much faster than the documentation was updated. After having to restart many times because we discovered that our version of ROS was not compatible with our version of PX4, or that the connection between them only worked on the main branch and not on the latest stable release, we switched our focus to creating a solution that will make this easier for all future developers.
Along with creating the drone swarm, we wanted to create an environment that works across platforms and that anyone can use without prior knowledge. To do that, we set out to create a virtual machine that houses all the required components, so every installation can be automatic. We spent a lot of time reading through discussions on the PX4 Discord, looking through various GitHub gists created by the community, and reading up on industry best practices. The PX4 community was an incredible resource, especially the community calls.
Our team then divided into subteams and subtasks: building and testing the drone, researching drone swarm and flight-optimization algorithms, setting up the team's working environment (e.g., GitHub, the ROS-Docker-PX4 connection), and programming computer vision capabilities.
We managed to create a virtual machine using Vagrant that contains all required software in Docker and interfaces with QGroundControl to control the drone and the simulation. We used Git submodules to manage all the required packages automatically, and added sample projects for OpenCV and Gazebo. After demonstrating that the functionality works, we got the drones spinning and the simulations running.
Unfortunately, because we had to restart so many times, we were unable to fully implement the swarm logic as we had hoped; we discuss how that can be done in the future iterations section.
The team also wanted to be able to customize the simulation by importing our own robot models. However, figuring out how to import CAD files into Gazebo was a big learning curve, as there is no direct method of doing this and many of the solutions found online failed for us. Eventually we found a consistent method that let us convert the SolidWorks part files of our drone into URDF (which contains all the needed information on the robot's motors, joints, and link positions) and .dae and .stl files (which define meshes and complex geometries). Finally, the team also learned how to import existing Gazebo worlds and spawn the URDF model into a world. This entire method is clearly documented in the README file of our code and can be easily reproduced for other models.
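The shape of the resulting URDF might look roughly like this minimal fragment (a sketch: the robot name, package path, and mesh filenames are illustrative, not our exact files), with the .dae mesh used for visuals and the simpler .stl for collisions:

```xml
<?xml version="1.0"?>
<robot name="survey_drone">
  <link name="base_link">
    <visual>
      <geometry>
        <!-- detailed mesh exported from the SolidWorks part -->
        <mesh filename="package://drone_description/meshes/frame.dae"/>
      </geometry>
    </visual>
    <collision>
      <geometry>
        <!-- coarser mesh keeps collision checking cheap -->
        <mesh filename="package://drone_description/meshes/frame.stl"/>
      </geometry>
    </collision>
  </link>
</robot>
```

A real model adds inertial properties and one joint/link pair per rotor, as described in the README.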
Finally, building and testing the drone had its own ups and downs. Thankfully, we had lots of resources for help, especially the handout provided by NXP and the Discord. Building both our drones went very smoothly, though the testing and setup steps gave us a few issues. Initially, we were unable to connect the FMU to QGroundControl. This was very worrying because no one on Discord seemed to have this issue. Eventually we figured out that we were using a bad cable or adapter. We then had no more issues with the rest of the setup (e.g., calibrating the sensors or testing the motors), except for some initial connectivity and update issues with the radio controller.
3) Remember your solution could save lives. Document how people can replicate your solution for further development and testing
Environment: a lot of software components go into developing drone logic. Our team worked on integrating ROS 2, Docker, PX4, OpenCV, Gazebo, and QGroundControl. We created a virtual machine using Vagrant that automatically sets up all the components and integrates them using the correct ports and versions. Users can be up and running, developing custom ROS 2 logic alongside PX4, with four terminal commands. This is documented in the README of the project, but the main steps are:
- Clone the repo of the code
- Install Vagrant
- Run vagrant up to create a VirtualBox VM, which will automatically:
  - clone the required Git repositories (PX4, Micro XRCE-DDS Agent)
  - build and run the Docker image
  - install all software dependencies
  - install QGroundControl
  - configure the environment and ports
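The provisioning flow above might be captured in a Vagrantfile along these lines (a sketch only: the box name, resource sizes, port number, and script path are illustrative assumptions, not the project's exact configuration):

```ruby
# Vagrantfile (sketch) -- provisions the development VM
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"          # Ubuntu 20.04 base box
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 8192                        # Gazebo + PX4 SITL need headroom
    vb.cpus = 4
  end
  # Forward the Micro XRCE-DDS agent port so ROS 2 tooling on the
  # host can reach the simulated PX4 instance inside the VM.
  config.vm.network "forwarded_port", guest: 8888, host: 8888
  # Single provisioning script: clones repos, builds the Docker
  # image, installs dependencies and QGroundControl.
  config.vm.provision "shell", path: "scripts/install_deps.sh"
end
```

Keeping everything behind one provisioning script is what reduces the setup to "clone, install Vagrant, vagrant up".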
Now, anyone using it should be able to run the PX4 simulation with the commands in the README and connect it to QGroundControl to download the firmware onto the drone. Additionally, the PX4-ROS 2 connection can be used for offboard control, enabling the link between the NavQPlus and the flight microcontroller. The sample video shows the drone surveying the field, and how the NavQPlus can control the microcontroller running PX4.
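In offboard control, the companion computer streams position setpoints to PX4. A simplified, dependency-free sketch of the survey pattern such a stream could follow is below; the function name and parameters are our own illustration, not part of PX4 or the project code:

```python
def lawnmower_waypoints(width_m, height_m, spacing_m, altitude_m):
    """Generate (x, y, z) survey setpoints for a back-and-forth
    ("lawnmower") pass over a rectangular field. PX4 uses a NED
    frame, so flying at altitude_m means z = -altitude_m."""
    waypoints = []
    y, leg = 0.0, 0
    while y <= height_m:
        # alternate direction on each pass
        xs = (0.0, float(width_m)) if leg % 2 == 0 else (float(width_m), 0.0)
        for x in xs:
            waypoints.append((x, y, -float(altitude_m)))
        y += spacing_m
        leg += 1
    return waypoints

# 100 m x 40 m plot, 20 m between passes, flown at 10 m altitude:
wps = lawnmower_waypoints(100, 40, 20, 10)
print(len(wps))  # -> 6 setpoints across 3 passes
```

Each tuple would be published (at a steady rate, as offboard mode requires) as a trajectory setpoint over the PX4-ROS 2 bridge.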
In order to interface ROS 2 with OpenCV, follow the steps below and adhere to the following prerequisites:
Prerequisites:
- Ubuntu 20.04 (VirtualBox)
- ROS 2
- Webcam connected to the Ubuntu box
Steps:
- Create an image publisher node (image_cam.py)
  - This node converts between OpenCV and ROS 2 images.
- Create a video subscriber node (video_cam.py)
  - This node enables live streaming of a connected camera/webcam.
- Run the nodes on the Ubuntu machine
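The conversion the publisher node performs (which cv_bridge handles in the real nodes) amounts to flattening a pixel grid into a message with size and encoding metadata, and rebuilding it on the subscriber side. A dependency-free sketch, with plain lists of BGR tuples standing in for NumPy arrays and a dict standing in for sensor_msgs/Image:

```python
def to_imgmsg(image, encoding="bgr8"):
    """Flatten a row-major BGR image into an Image-like message."""
    height = len(image)
    width = len(image[0]) if height else 0
    data = bytes(c for row in image for pixel in row for c in pixel)
    return {"height": height, "width": width,
            "encoding": encoding, "data": data}

def from_imgmsg(msg):
    """Rebuild the row-major pixel grid from the flat byte buffer."""
    w, d = msg["width"], msg["data"]
    pixels = [tuple(d[i:i + 3]) for i in range(0, len(d), 3)]
    return [pixels[r * w:(r + 1) * w] for r in range(msg["height"])]

frame = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (10, 20, 30)]]   # 2x2 BGR test frame
assert from_imgmsg(to_imgmsg(frame)) == frame  # lossless round trip
```

In the actual nodes, image_cam.py publishes the message on a topic and video_cam.py subscribes, converts back, and displays the stream with OpenCV.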