Existing precision landing or docking solutions for drones primarily rely on computer vision techniques that demand heavy computational power, especially when tracking and landing on a moving platform. At the #PX4 Developer Summit 2020, a precision landing technique using UWB was introduced, but it is still under development. This is the main motivation for this project.
Aim
Our solution is a precision landing (docking) technique that allows a drone to land autonomously and safely on static and moving platforms. It is simple and significantly reduces computational overhead. We use an RF-based homing technique followed by vision-based relative pose estimation to complete the precise landing sequence, and this is expected to work on static as well as moving platforms. Both static and moving platforms will be referred to as the docking station from here on.
Our method is particularly useful for landing autonomously in restricted or congested spaces where the exact landing spot cannot be set as a waypoint and where common landing methods are impractical, for example, landing autonomously in a hospital area filled with emergency vehicles and personnel during an ongoing emergency. In such cases, a HoverDock can be conveniently placed at a safer location inside the hospital premises so that the drone can land precisely to carry out tasks such as medicine delivery, blood collection and delivery, etc.
The NavQ acts as the brain of the drone, taking care of the necessary software-hardware interfacing and program execution to enable precision landing on any drone, which is particularly useful in crisis situations such as the hospital scenario described above.
Advantage
The solution makes use of the novel HoverLive package, a rapidly expanding project built on a high-level framework that incorporates the NXP FMU, NavQ, Coral camera, and various drone APIs into a common framework.
Workflow
The framework revolves around the reference design set by the NXP FMU and its capabilities. The solution runs on the NavQ board, which manages the FMU via a UART connection. It also manages the Coral camera feed for visual inspection and marker-based autonomous landing.
There are two stages for the HoverDock precision landing technique.
1. RF-Based Homing
The docking station transmits a homing beacon in the 2.4 GHz range (shared by BLE and WiFi channels). Once the drone comes within beacon range, roughly a 10-meter radius, it starts tracking the beacon using RSSI and proceeds toward the docking station.
2. Vision-based Landing
Once the drone is conveniently above the docking station (detected where the RSSI is strongest), it uses its downward-facing camera to perform relative pose estimation (based on the marker) and make precise adjustments to align with the docking station before initiating the landing sequence.
Since we combine both RF-based and vision-based techniques, better precision can be attained.
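A minimal sketch of this two-stage handover is shown below; the threshold value and the helper functions are hypothetical stand-ins for the actual HoverDock routines.
# Hypothetical sketch of the RF-to-vision handover; helpers are placeholders.
RSSI_HANDOVER_DBM = -40.0  # assumed threshold: RSSI is strongest roughly above the dock

def read_rssi() -> float:
    return -35.0  # placeholder; the real value comes from the BLE receiver

def step_towards_beacon() -> None:
    pass  # placeholder; command a small move along the RSSI gradient

def vision_align_and_land() -> None:
    pass  # placeholder; marker pose estimation and the landing sequence

def precision_land() -> None:
    while read_rssi() < RSSI_HANDOVER_DBM:  # stage 1: RF-based homing
        step_towards_beacon()
    vision_align_and_land()                 # stage 2: vision-based landing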
In the case of a moving platform, tracking of the docking station can be extended using a long-range homing signal (LF or MF) coupled with a real-time obstacle avoidance setup. This implementation requires additional research, and the results obtained will be included in this project.
Setting up NavQ
The 8MMNavQ is a small purpose-built experimental Linux computer based on the NXP i.MX 8M Mini SoC, focused on the common needs of mobile robotics systems. The system is built as a stack of boards: the top board is a SOM (system on module) containing the processor, memory, and other components with strict layout requirements, while the secondary boards are relatively inexpensive (often 4-layer boards) and allow customized versions to be built easily.
- The SD card included in the NavQ kit is preloaded with the HoverGames-Demo Linux distribution. The default credentials are: username: navq | password: navq
- Power on the NavQ using the included USB-C cable.
- Connect the serial USB adapter to the UART2 port on the bottom board and connect it to the laptop's USB port.
- Default NavQ terminal settings are: 115200 baud, 8 data bits, no parity, 1 stop bit, no flow control.
$ dmesg | tail # check that the USB serial connection is established and a port is assigned.
$ minicom -D /dev/ttyUSB0 -b 115200 # connect to NavQ serial console.
You will either get a login prompt, or you may need to press Enter.
Log in using navq:navq as the credentials.
- When the NavQ arrives, the demo image will already be loaded onto the SD card. This image does not take up the full space on the SD card, so you'll need to expand the filesystem in order to install more packages such as ROS or OpenCV. Download and run the script resizeDisk.sh as follows:
$ chmod a+x ./resizeDisk.sh
$ sudo ./resizeDisk.sh sd # For SD card
$ sudo ./resizeDisk.sh eMMC # For eMMC
- The NavQ can be connected to the internet directly via an Ethernet connection or via WiFi. A package named connman is included in the image to help you connect to WiFi through the command line. To connect to WiFi, run the following commands:
$ connmanctl
connmanctl> enable wifi
connmanctl> scan wifi
connmanctl> services
WIFI_SSID wifi_name_managed_psk
connmanctl> agent on
connmanctl> connect wifi_name_managed_psk
<enter passphrase>
<wait for connection success message>
connmanctl> exit
- Check for a successful connection
$ ping google.com
- It is recommended to set a unique hostname for connecting to the NavQ instead of using an IP address all the time. To change the hostname, we need to modify /etc/hostname. Note that a plain sudo echo navq > /etc/hostname fails because the redirection runs in the unprivileged shell, so use tee instead:
$ echo navq | sudo tee /etc/hostname
- For remote access, connect to the NavQ over SSH (openssh-server is pre-installed). Find the IP of the NavQ or assign a static one in your router, then:
$ ssh navq@<ip>
Type yes at the host-key prompt, and this opens the NavQ's terminal on your laptop.
If you want to set up a desktop environment for the NavQ, follow the guide here. Now the NavQ is set up for HoverDock.
HoverDock: Setting up
HoverDock is based on HoverLive, a high-level framework that incorporates the NXP FMU, the NavQ companion computer, and the Coral camera into a common framework built on the PX4 flight stack, MAVSDK, ROS, and additional software elements (such as OpenCV, TF Lite, etc.) that can be easily deployed onto a HoverGames drone for easy management of autonomous missions via a dedicated application. It gives users complete freedom to use their choice of drone API (currently MAVSDK and MAVROS) in their choice of programming language (currently C++ and Python). Hardware-dependent parts are implemented as add-on modules to keep the core package isolated.
The capabilities of the framework include autonomous takeoff, simplified waypoint navigation with active obstacle avoidance, and vision-based autonomous landing.
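For context, below is a minimal raw MAVSDK-Python mission (connect, arm, take off, hover, land), roughly the boilerplate that HoverLive's state abstraction hides; udp://:14540 is the PX4 SITL default address.
# Minimal raw MAVSDK-Python mission against PX4 SITL.
import asyncio
from mavsdk import System

async def run():
    drone = System()
    await drone.connect(system_address="udp://:14540")
    async for state in drone.core.connection_state():
        if state.is_connected:  # wait until the autopilot is discovered
            break
    await drone.action.set_takeoff_altitude(5.0)
    await drone.action.arm()
    await drone.action.takeoff()
    await asyncio.sleep(15)  # hover for a while
    await drone.action.land()

asyncio.run(run())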
- SSH into the NavQ and clone the HoverLive repository:
$ git clone https://github.com/crisdeodates/Hovergames-HoverLive.git
$ cd Hovergames-HoverLive/
$ git checkout master
- Build the project:
$ catkin init
$ mkdir build
$ cd build/
$ cmake ..
$ make
Now that the project is set up, let's go through the hardware and software structure of HoverDock and how it can be used in our projects.
HoverDock: Hardware Dependencies
The precision landing has 2 phases.
1. RF-based Homing
This phase takes advantage of the BLE homing capability of the NXP KW38 series boards. An FRDM-KW38 integrated into the docking station emits a BLE-based RF beacon. A USB-KW38 connected to the NavQ onboard the drone detects the beacon and homes in on the near vicinity of the docking station. The NavQ communicates with the USB-KW38 via a UART serial connection.
The approximate distance to the docking station, and the directional gradient along which the drone needs to travel to minimize the error threshold, are derived from the RSSI values of the captured BLE beacon signals. As the error decreases, the RSSI value increases.
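As a rough illustration of how RSSI maps to range, the standard log-distance path-loss model can be used; both constants below are assumptions that would have to be calibrated for the real beacon, and raw 2.4 GHz RSSI is noisy enough that samples should be filtered (e.g. a moving average) before use.
# Log-distance path-loss model: turn a (filtered) RSSI sample into a rough range.
RSSI_AT_1M = -45.0        # dBm measured at a 1 m reference distance (assumed)
PATH_LOSS_EXPONENT = 2.5  # ~2 in free space, higher with obstructions (assumed)

def rssi_to_distance(rssi_dbm: float) -> float:
    """Approximate distance in metres from a single RSSI sample."""
    return 10 ** ((RSSI_AT_1M - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

print(rssi_to_distance(-70.0))  # -> 10.0 m with these constants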
2. Vision-based Landing
Once the drone is in the near vicinity of the docking station and within a threshold distance, control switches to vision-based landing based on pose estimation using the onboard Google Coral camera connected to the NavQ.
Currently, HoverDock, based on HoverLive, implements high-level wrappers for MAVSDK and MAVROS, so both are dependencies for the package. The implementation also depends on OpenCV and the NXP KW38 SDK. For firmware and SITL, PX4 is also a requirement.
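Assuming a working Python 3 and pip on the NavQ, the Python-side dependencies can be installed from PyPI as shown below (on the NavQ's aarch64 platform, OpenCV may need to be built from source if no prebuilt opencv-contrib-python wheel is available); MAVROS and PX4 follow their own installation guides.
$ pip3 install mavsdk opencv-contrib-python pyserial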
1. RF-based Homing
- Install MCUXpresso IDE.
- Build the SDKs for the FRDM-KW38 and USB-KW38 via the NXP online SDK builder tool.
- Install the KW38 SDKs in MCUXpresso IDE and create a project.
- Import the respective examples for the corresponding KW38 board and customize them as required.
- Compile, build, and upload to the board.
This part is hardware-specific and is implemented as an add-on module. A sketch of the NavQ side of this link is shown below.
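A minimal sketch of the NavQ reading beacon RSSI from the USB-KW38 over UART using pyserial; the device path /dev/ttyACM0 and the "RSSI:<value>" line format are assumptions, since the actual protocol depends on the firmware flashed onto the USB-KW38.
# Read beacon RSSI lines from the USB-KW38 over a UART serial connection.
import serial

with serial.Serial("/dev/ttyACM0", 115200, timeout=1.0) as port:
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        if line.startswith("RSSI:"):  # assumed firmware output format
            rssi_dbm = float(line.split(":", 1)[1])
            print(f"beacon RSSI: {rssi_dbm} dBm")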
2. Vision-based Landing
- This depends on standard OpenCV ArUco tag detection and relative pose estimation.
- Once the relative pose is calculated, the drone is commanded in the horizontal plane to make fine adjustments that reduce the relative errors while smoothly landing on the docking station, as sketched below.
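A minimal sketch of this step using the classic cv2.aruco interface from opencv-contrib-python; the dictionary, marker size, and camera intrinsics below are assumptions, and real values come from the printed marker and a calibration of the Coral camera (newer OpenCV releases move this API into the cv2.aruco.ArucoDetector class).
# ArUco detection and relative pose estimation with the classic cv2.aruco API.
import cv2
import numpy as np

MARKER_SIZE_M = 0.20  # marker edge length in metres (assumed)
camera_matrix = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0, 0.0, 1.0]])  # placeholder intrinsics
dist_coeffs = np.zeros(5)  # placeholder distortion coefficients

dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
cap = cv2.VideoCapture(0)  # downward-facing camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
    if ids is not None:
        # tvec is the (x, y, z) offset of the marker relative to the camera;
        # the horizontal components drive the fine alignment commands.
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, MARKER_SIZE_M, camera_matrix, dist_coeffs)
        print("relative offset (m):", tvecs[0].ravel())

cap.release()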
Learn more about HoverLive here.
The basic implementation of HoverDock is as follows:
- The basic implementation revolves around abstracting the core API features in such a way that they are not ambiguous to the user.
- The core feature, precision landing, which is a modified version of the general landing state, is exposed as a user-callable state to which the HoverDrone instance transitions.
The main advantage is the ease of structured state callbacks, which allows a uniform implementation across each API and environment.
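As an illustration only (the class and hook names below are hypothetical, not the actual HoverLive internals), a state in such a callback pattern might be structured like this:
# Hypothetical illustration of the state-callback pattern; not the real API.
class State:
    def on_enter(self, drone):
        pass  # called when the HoverDrone instance transitions into the state

    def on_exit(self, drone):
        pass  # called when the state completes

class MockPrecisionLand(State):
    def __init__(self, threshold: float):
        self.threshold = threshold  # RF-to-vision handover distance in metres

    def on_enter(self, drone):
        # Stage 1: RF homing until within self.threshold of the dock,
        # then stage 2: marker-based fine alignment and landing.
        pass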
HoverDock: Usage
Integrating HoverDock (based on HoverLive) into a new or existing project is quite easy.
Let's take the case of the MAVSDK Python implementation of HoverLive-based HoverDock.
- Import the HoverLive packages
from HoverDrone import Drone
- Import the required states
from HoverDrone.states import Arm
from HoverDrone.states import Disarm
from HoverDrone.states import Takeoff
from HoverDrone.states import PrecisionLand
- Connect to the HoverDrone instance
my_drone = Drone.connect("myDrone", "udp://:14540")
- Configure and add each required state as an individual thread
my_drone.start()
my_drone.add_state(Arm)
my_drone.add_state(Disarm)
my_drone.add_state(Takeoff(5.0))
my_drone.add_state(PrecisionLand(threshold=4.0, Guided=True, MarkerID=xxx))
- Execute the threads
my_drone.join()
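Putting the fragments above together, the whole mission fits in a short script (MarkerID is left as a placeholder, exactly as in the snippet above):
# The fragments above combined into one script; MarkerID stays a placeholder.
from HoverDrone import Drone
from HoverDrone.states import Arm, Disarm, Takeoff, PrecisionLand

my_drone = Drone.connect("myDrone", "udp://:14540")
my_drone.start()
my_drone.add_state(Arm)
my_drone.add_state(Disarm)
my_drone.add_state(Takeoff(5.0))
my_drone.add_state(PrecisionLand(threshold=4.0, Guided=True, MarkerID=xxx))
my_drone.join()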
IMPORTANT: Some features may not work, as the package is still in development; modifications may be made without notice that change the current implementation and topology.
HoverDock: Testing
A simple SITL mission with takeoff, a position request, and a precision land scenario was tested using the HoverLive package running on the NavQ.
However, HoverDock currently has some bugs in the RF-based homing maneuver that are being fixed at the time of this writing, so the precision landing is yet to be tested; normal landing is used in the following test. As soon as the bugs are corrected, the testing will be updated.
Due to restrictions on flying the hardware drone, PX4 SITL with jMAVSim is run on the NavQ. It should also be noted that in this season of HoverGames, the focus of the competition is on the NavQ rather than the HoverGames drone kit.
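For reference, the standard way to launch PX4 SITL with jMAVSim from a PX4-Autopilot source checkout is:
$ make px4_sitl jmavsim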
The potential of HoverDock is vast. With the help of my contributors, we wish to extend HoverDock's capability beyond the current APIs and make it accessible through all of the APIs available in the HoverLive framework.
Since the current implementation is not hardware-dependent, it can be applied to a wide range of hardware that supports the APIs and their subsets.