One of the best ways to deal with the COVID-19 epidemic is to control infection sources through early diagnosis, epidemiological investigation, isolation, social distancing, quarantine, banning group gatherings, improving personal hygiene, wearing medical masks, and keeping rooms well ventilated.
The main goal of a quarantine zone is to separate people exposed to the disease from healthy people, in order to limit the spread of the disease and monitor how their health evolves.
In this project, I propose a solution to sustain and enforce quarantine zones. The solution is based on an autonomous unmanned aerial vehicle (a HoverGames drone) that detects human activity and sends research data to a control center in a timely manner.
2. The main system concepts
The developed system, which I will present in the following, has as a starting point:
- the KIT-HGDRONEK66 (a drone developer kit produced by NXP), named in this project the HoverGames drone or quadcopter,
- the RDDRONE-FMUK66 (a robotic drone Flight Management Unit (FMU) able to run the PX4 autopilot, also produced by NXP), named in this project FMUK66, and
- the RDDRONE-8MMNavQ (a Linux companion computer, developed by NXP and produced by EmCraft) – this development system is named in this project NavQ.
Using the PX4 autopilot, the HoverGames drone will carry out a pre-programmed autonomous flight mission. The quadcopter will follow a planned path around the quarantine zone and, at the end of the mission, return to the landing point. When human activity is detected (based on an intelligent system), the HoverGames drone will send research data to the base station.
The HoverGames drone is equipped with two video systems. The first one (see Figure 1, data path (1)) is connected to the NavQ embedded system, and its role is to detect and localize humans automatically. When a human is localized, the data is sent, through the radio telemetry module, to the base station (a PC system), see Figure 1, data path (4). In normal operation, all the image processing steps are done on the NavQ onboard computer, and no images are sent to the ground station. Only in the development stage are the final results of the human activity detection sent to the base station, in order to evaluate the recognition system's performance and operating mode – data path (6).
From all my previous experience [1], [2], a human activity detection system able to achieve accurate detection requires complex algorithms. But a complex and powerful algorithm comes with a number of costs, and the most "painful" one here is the large amount of computing power it requires.
These constraints led me to adopt the following technical solutions:
- to obtain a high classification rate, I used a deep neural network (DNN), and
- to solve the computational power problem, I used an Intel Neural Compute Stick 2 (NCS2 – data path (2)) to take over the DNN computational burden; a minimal sketch of this idea is shown below.
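For orientation, the following sketch shows the general idea of offloading a DNN to the NCS2 from Python. It is only a minimal, illustrative example: it assumes the OpenVINO toolkit built later in Section 5.2, and the model files (person-detection.xml/.bin) and the test image are placeholders, not the exact network used in this project.

import cv2
from openvino.inference_engine import IECore

# Placeholder IR model files - any detection model converted to the OpenVINO IR format
ie = IECore()
net = ie.read_network(model="person-detection.xml", weights="person-detection.bin")
input_blob = next(iter(net.input_info))
exec_net = ie.load_network(network=net, device_name="MYRIAD")  # MYRIAD = the NCS2

frame = cv2.imread("frame.jpg")  # placeholder camera frame
n, c, h, w = net.input_info[input_blob].input_data.shape
blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1).reshape(n, c, h, w)
result = exec_net.infer({input_blob: blob})  # the inference runs on the stick, not on the ARM cores
print(list(result.keys()))  # names of the output blobs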
The second video system (the (8), (9), and (10) data paths) is dedicated to:
- the remote control of the UAV (Unmanned Aerial Vehicle) – only in the special cases when it is required, and
- validating the automatic recognition process.
The first of the above functions is managed through an RC unit (data path (7) in Figure 1), and it is activated only when a human operator considers it necessary.
3. The project goals
A. To build a system able to satisfy all the requirements presented above.
B. To build the system as a flexible, autonomous unmanned aerial vehicle development platform, able:
- to support different types of applications: human activity detection, small-scale transport of goods, landmark detection, fire detection, etc.;
- to work with different types of methods and algorithms (e.g., for human activity detection, but not only), combined in different structures, without any modification of the parts of the system not directly related to them;
- to be easily scalable – to support a more powerful embedded system with more computational power.
C. To improve the existing design of the HoverGames drone by bringing new, innovative, and inexpensive solutions, in order to have a better, more reliable system able to work closer to real environments (e.g., an urban one).
4. Hardware components
4.1. Improving the reliability of 3D printed landing gear connectors
In a crash or in a hard landing, the bottom and upper landing gear connectors are the first components that will break.
To solve this problem, you can download the landing gear connectors from Thingiverse (the bottom component [3] and the upper component [4]) and start the printing process.
Printing a component as in Figure 2 is not a good option. The landing gear connector's strength is given mainly by the layer adhesion, i.e., how the individual layers of material are bound together. So, all the printed parts (the bottom and upper landing gear connectors) are mechanically weak because of the weak bonding between the individual printed layers that make up the 3D part.
In a less-than-perfect landing, one in which the drone's landing gear makes an angle with the landing surface (see Figure 3), or in a high-speed landing, the bottom landing gear connector will crack along the 3D-printed layers, see Figure 4.
A different analysis [5] proves that, to have a strong component printed as in Figure 2, you must use a layer height of 0.1 mm up to 0.15 mm:
Now, an easy and cost-free solution to at least double the bottom landing gear connector's durability is to print the component in the horizontal position, see Figure 5. In this way, the object's layers will be perpendicular to the landing surface, and the layer adhesion should play a minor role in the reliability of the entire component.
Table 1, [5], presents a PLA component resistance analysis as a function of the printing layer height for two different printing directions. Even if, in my case, the landing connectors were printed using ASA material (great for outdoor applications, having a high impact resistance), the results should be similar to the ones presented in Table 1. The most important thing is that the practical results have shown that the components' strength is significantly increased by printing these elements as presented in Figure 5.
To get more information on this subject (like another method to increase the strength of the landing gear connectors, or 3D printing tips to obtain a flawless final component), please watch this movie:
4.2. New approaches to build the body of the HoverGames drone
The first approach to redesigning the HoverGames body is mainly a tip on how to place the ESCs inside the drone's body to considerably shorten the repair time after a crash.
On the official GitBook documentation website for the NXP HoverGames drone, there are mainly two recommended arrangements for the system's ESCs [6]:
But, in a crash or a hard landing, the landing gear connectors (upper and bottom) are the first ones that will break; see Figure 7. The first landing gear connector (1 in Figure 7) is very easy to replace. But the second one (the upper landing gear connector) is difficult to replace, mainly because you must first remove the four screws indicated with red arrows in Figure 7.
Using the recommended placement of the four ESCs, as in Figure 6, to be able to remove the four screws that fix the upper landing connector, you must:
- Remove the upper plate. To do this, you must remove all of the screws used to fix it – 16 pieces (4 per drone arm), and after this step,
- You must remove at least one ESC to have access to the landing gear's screws.
All of these steps represent a lot of work and a lot of wasted time.
I propose placing the ESCs as in Figure 8. In this way, all the screws of the upper landing gear connectors will be uncovered – see Figure 8, yellow arrow.
You can use zip ties to strap the ESCs to the bottom plate, you can use double-sided foam tape, or you can use both methods.
When you have the upper plate mounted, using the holes indicated with a yellow arrow in Figure 9, you will be able to access the screws of the landing gear connectors without any problems.
With the ESCs placed as in Figure 8, the replacement of the upper landing gear connectors becomes a straightforward task. In this case, you are not required to remove the drone's upper plate; you must only remove the four screws associated with the upper landing gear connector you want to change.
An interactive film that presents this improvement, as well as the one that follows, and other aspects related to them, is the following:
The last HoverGames drone design proposal is related to the PDB placement. The classical approach to mounting the PDB is shown in picture (a) of Figure 10. For easy wire management, I propose the following way to mount it, Figure 10(b) or Figure 11(a) – the PDB is placed upside down. The only problem that arises is the need to detach the power supply wires from the PDB, pass the connector and the cables through a hole in the bottom plate of the drone body (see Figure 11(c)), and, finally, solder the wires back to the PDB.
The PDB final mount, from different points of view, is presented in Figure 11(a), (b), and (c). The images where you can compare the two approaches are Figure 10(a) and Figure 11(a).
4.3. The companion computer enclosure
To avoid using double-sided tape to mount the NavQ to the carbon fiber plate (see Figure 12), I 3D-printed an enclosure downloaded from Thingiverse [7]. This enclosure protects the NavQ board very well. Here I made only a small modification to the antenna support. Using heat, I bent the antenna's supporting rod at 90°. Then, I glued the bottom part of the rod to the NavQ enclosure. To fix the top of the enclosure and the HDMI module, I used two self-locking plastic straps, Figure 12.
To power the NavQ board, I use a JST-GH power input cable terminated with an XT30 male connector. The board is powered directly from the PDB through a cable having two bullet connectors on one side and an XT30 female connector on the other.
4.4. Upgrading the telemetry unit
On the HoverGames drone based on the PX4 autopilot, the telemetry unit can be used without problems to develop the application and to prove the functionality of the system's main concepts. The Holybro units (the first one placed on the drone and the second one placed on the ground station), which have 100 mW of power, are perfect for testing. However, if you use the system in conditions close to real life, in which the system should still work (e.g., you fly the drone around concrete buildings), or you go a little bit further, you will hear "connection lost" and, after a while, "connection regained."
Therefore, as a direct result of these problems, I decided to upgrade the telemetry units. All the telemetry units supported by the PX4 autopilot are presented here [8]. However, some of them are a little bit expensive, and some are discontinued or unavailable.
As a direct result, I decided to use devices from the 3DR Telemetry unit class, see Figure 13, whose support was discontinued by the PX4 autopilot, but which you can buy at a lower price (in the range of $20 to $30) from many places and use with your drone. Now, there are several legitimate questions related to the 3DR Telemetry units: "Are they safe to use?", "Can I upgrade to a newer software version?", or "Can PX4 use these devices?"
I can confirm that I have been using two such modules for over 5-6 months, and they work flawlessly. Moreover, in the next film, I present the upgrade process done with version 4.0.11 (the latest version at that time) of QGroundControl – the update took place on January 23, 2021. The update process worked perfectly, as you can see.
But in order for the FMUK66 to work without any problems with a 3DR Telemetry unit, one simple trick is required, which I will present in the following.
In a standard way of operating (e.g., using the Holybro telemetry unit), after the drone is powered up, the drone FMU (FMUK66) requires at least 40 seconds to get a GPS lock. Using the 3DR Telemetry unit and powering up the HoverGames drone, we will see that the GPS unit cannot get a GPS lock. Both processes are presented in the following movie:
In the previous movie, I waited for 3 minutes, and the drone was unable to get a GPS lock. You can wait for 10 or 12 minutes, and the result will be almost the same. I have to admit that, sometimes, after a long waiting time, you can get a GPS lock, but this is a rare case.
When this problem first appeared, I quickly solved it by unplugging the telemetry unit, waiting for a GPS lock, and plugging it in once again. After this procedure, the HoverGames quadcopter flies without any problem, and functions such as "return to home" work very well. But this is more of a hack than a correct and elegant solution.
I suspected that the electromagnetic and harmonic disturbances generated by the 3DR Telemetry module were the ones stopping the GPS module from getting a lock. This assumption proved to be true. I also suspect that the PX4 developer team discontinued support for the 3DR Telemetry units precisely because of this cause.
Analyzing the flight logs, especially the GPS Noise & Jamming records, it was easy to see that the GPS jamming indicator has high values, see Figure 14. The GPS jamming indicator varies from 0, which means no jamming, up to 255, which means very intense jamming.
The solution to this problem (of the electromagnetic and harmonic disturbances) is very easy to implement: wrap the telemetry unit and all the cables between the FMU and the telemetry unit in food-grade aluminum foil. In this way, you build a Faraday cage that blocks almost all of the telemetry unit's electromagnetic fields. With it, the interference problem was solved very quickly and straightforwardly, see Figure 15.
Based on this solution, the GPS lock is obtained in around 40 seconds – see the above movie. When the Faraday cage was used, the jamming noise decreased substantially – see Figure 16.
Now, you have a long-range telemetry unit that does not influence the drone's operation in any unwanted way.
Below is a link to a movie with the tests done on the FS-iA6B receiver. The film is from November 26, 2020. In this movie, starting at the 0:25 time index, the telemetry unit is presented. In this test, the shielded telemetry unit shown in Figure 15 worked without any problems from more than 850 meters. The telemetry unit had 1000 mW of power.
The limiting factor that set the maximum distance of around 850 m was the receiver on the drone – the FS-iA6B receiver. The manufacturer of the 3DR Telemetry unit states that these modules operate at 2-3 km.
So, in conclusion, I recommend this upgrade.
4.5. The secondary video system
This video system (the (8), (9), and (10) data paths in Figure 1) is dedicated mainly to validating the automatic recognition process. There is currently no recognition system (of objects, diseases, people, etc.) that provides 100% identification performance. Under these conditions, when an alert signal is sent by the quadcopter (meaning that at least one subject has been noticed), a human user must confirm the detection via this secondary video system.
The most direct and straightforward approach is to connect an FPV (first-person view) camera to a VTX (video transmitter) and use goggles to receive images from the drone.
But FPV means more than that. For example, I want to see in real-time:
- the GPS coordinates of the drone in the video stream,
- the battery status (to have a double confirmation of its condition and the estimation of the remaining flight time),
- the altitude and speed (is the pre-programmed autonomous flight mission being respected, or has a technical problem occurred?), or
- the receiver's signal strength (Can I still take control of the drone in that flight zone, or is it safer to let it follow its course?).
To have all of this information from a quadcopter running the PX4 autopilot, we need additional hardware to combine different data from the FMU with the video data. This piece of hardware, in my case, is an On-Screen Display (OSD) board called Micro OSD. The Micro OSD reads all the MAVLink data from the FMU telemetry stream and overlays it on the input video stream obtained from an onboard camera. The result is then sent to the wireless video transmitter (VTX).
Figure 17 shows the wiring diagram between the OSD module, the FMU, the FPV camera, the filter module, and the VTX module. The FPV video camera is a low-latency Caddx Ratel, able to work with a broad power input range of 5 V up to 40 V. The VTX is a long-range AKK Infinite VTX able to work at 25, 200, 600, or 1000 mW emission power, and it accepts input voltages between 7 V and 26 V.
The Micro OSD uses two power stages to avoid noise coming from the motors, mainly because this noise could introduce glitches and interference in the video signal. Figure 17 shows the first power stage, supplied from the FMUK66 FMU, and the second power stage, supplied from the quadcopter power system – extracted from the PDB. The second-stage power supply voltage is recommended to be 12 volts, as presented in the datasheet. But, over time, the Micro OSD gets too hot in 12-volt setups, so I decided to use a 7809 voltage regulator to lower the voltage to 9 volts. The input LC group, presented in Figure 17, has the role of filtering the DC power disturbances caused by the motors, mainly because the ripple frequency passes right through a linear regulator (like the 7809) without any problem.
Without the filtering part of the video system, the images received on the goggles are similar to those presented in Figure 18. The images in Figure 18 were obtained with a system where the FPV camera is connected directly to the VTX, without any: (a) Micro OSD module and (b) filtering system.
The hardware implementation of the secondary video system is presented in Figure 19. The video system's core is the Micro OSD module placed under the AKK Infinite VTX, highlighted in blue in Figure 19 (a) and (c). The VTX uses a pigtail to connect to the omnidirectional pagoda antenna.
Before we use the system and get the MAVLink information in the video stream (GPS coordinates, battery voltage, etc.), we must configure the FMU to send the information to the Micro OSD module. So, first, in QGroundControl, go to Setup (in Figure 20, the red arrow), Parameters (Figure 20, the yellow arrow), and from here select MAVLink (Figure 20, the blue arrow). Because my OSD module is connected to Telemetry port 2, I chose Telemetry port 2 for MAV config (Figure 20, the white arrow) and OSD for Mode (Figure 20, the green arrow).
Now, you will be notified (the dark yellow rectangle in the upper right part of Figure 20) that you must reboot the quadcopter so that these settings take effect. I will do this a little bit later, after I make the last setting in the Serial section. Here, I set the baud rate to 57600, 8 data bits, no parity, and 1 stop bit for Telemetry port 2, see Figure 21.
Now, in the end, save all the settings, go to Tools (upper right button), and reboot the HoverGames drone. Press OK, and that is all in QGroundControl.
The next step is to connect the Micro OSD to a computer and, from there, configure the information that you want to be presented in the video stream. First, you need a USB to RS232 adapter. There are a large number of different types of adapters on the market. You need a USB to RS232 adapter that has a DTR line, like the one presented in Figure 22, and, moreover, it must be able to supply 5 volts to power the OSD board.
You also need to build another cable, with five wires. Compared with the cable used between the OSD board and the FMU, which has four wires, this new cable has an additional wire connected to the GRN pin (the upper left pin of the Micro OSD, see Figure 17). The BLK pin must be left unconnected. Now let's connect the Micro OSD board to the adapter – the one in Figure 22. Connect DTR to GRN, the TXD of the OSD board to the RXD of the adapter, the RXD of the OSD board to the TXD of the adapter, ground to ground, and +5V from the adapter to the +5V of the Micro OSD board. By making all these connections and plugging the USB adapter into the computer, we will see two blue LEDs light up on the OSD board.
Several tools can be downloaded from the internet and used to configure a Micro OSD module: ArduCAM OSD Config, MinimOSD-Extra Config, or MWOSD. In my case, I worked with MinimOSD-Extra Config.
Plugging in the USB to RS232 adapter, a new port will be created on your PC. In my particular case, it was COM7. In MinimOSD-Extra Config, from the bottom, select the right COM port. In this application (MinimOSD-Extra Config), you have several options. You can change the video mode from PAL to NTSC or update the firmware - I personally did this, but I saw no difference. What is essential is that you have three tabs there, Figure 23. The information on each tab is initially in an undefined state, loaded from a program config file. To get the real data from the OSD module, you must first press the "Read From OSD" button. At this moment, the information in all three tabs is updated from the OSD module.
From the Config tab, you can select the remote control (RC) channel that will switch between Panel 1 and Panel 2 on your goggles. From the Panel 1 and Panel 2 tabs, you can select which data to display or not in the FPV video stream. For example, you can choose the home distance, and you will see that this new information appears. You can also place this element wherever you want on the grid on the right; see Figure 23. If you have trouble understanding the significance of the displayed information, please follow the link: https://github.com/diydrones/MinimOSD-Extra/wiki/Panel-descriptions.
Having the hardware part implemented and all the configuration done, in the next movie (which presents almost all the steps to implement and use an FPV video system), I also present (in the final part of the film) a flying session (takeoff, flight, and landing) made exclusively by using the secondary video system:
Or you can see only one of the first FPV flights of the HoverGames drone:
Based on the video system presented here, I made a range test for the FS-iA6B receiver unit. The FS-iA6B receiver unit is the HoverGames drone component used to receive the user's commands from the FS-i6S transmitter. This test is relevant for the FS-iA6B receiver but also for the 3DR Radio Telemetry unit. To test the receiver range, the quadcopter was commanded to fly straight ahead up to the point where the radio link was lost and the failsafe system was triggered. In this situation, the quadcopter goes into return-to-launch-point mode, and the HoverGames drone returns autonomously to home. As a result, the HoverGames drone flew more than 850 meters up to the point where the failsafe system activated. The FS-iA6B receiver range is around 850-860 meters in a straight line without any obstacles, and the 3DR Radio Telemetry unit has a range of at least 850-860 meters.
The movie that proves all the previous statements is presented at the end of Section 4.4 - the FS-iA6B receiver range test.
4.6. Improving the power supervision system
The link between the station (FS-i6S) and the receiver (placed on the quadcopter - FS-iA6B) is based on the AFHDS 2A protocol. This protocol is capable of bidirectional communication. In this way, we can send data from the transmitter to the receiver (mainly the drone user's commands). We can also receive data from the quadcopter's receiver – the FS-iA6B, through its dual omnidirectional antennas.
The FS-iA6B receiver has two iBUS ports, located at the top 6 horizontal pins on the receiver, see the above picture.
The "servos" slot is the iBUS output from the receiver, into which you plugged the wires coming out of your flight management unit (FMUK66).
The "sensor" slot is for telemetry input into the FS-iA6B, in which port I connected the FS-CVT01. In this mode, I measure the quadcopter battery voltage, and the result is displayed on the FS-i6S transmitter. On the FS-i6S transmitter, I set a threshold warning. If the voltage drops quicker than usual due to a more demanding fly, an audio alarm will be generated. In this mode, I can avoid the over discharge of the battery pack.
5. The support software packages and libraries
One of the hardest, most annoying, and most time-consuming parts of this project was putting together all the components required to sustain each application function running on the NavQ system.
On the net there are many gurus and many tutorials that explain how to do a thing, but after a while you will see that it doesn't work, or it works only up to a point, or it works correctly but only alone, not together with the other packages that you need in the project. So, as a direct result, I will present all the steps required to install all the support packages. In the end, I will give you a link to an image of the SD card with all of these packages ready to be used – an image that you can use without any problem with your NavQ system. Even if, in time, new versions of these packages appear, it will be far easier to update them than to install them from scratch.
5.1. Installing the general packages
In the beginning, be sure that the NavQ is ready for the following installation process:
$ sudo apt update
$ sudo apt upgrade
The first command updates the local package index with the latest changes made in the repositories. In this way, you can be sure that "sudo apt upgrade" knows the latest versions of the available packages and will install them. The last command is the one that does the hard work: it upgrades the packages (currently installed packages with new versions available are retrieved and upgraded – the kernel is also updated), and no presently installed packages are removed, even if the new versions of the upgraded packages no longer depend on packages that were previously installed.
Pip is a handy tool for installing different Python packages. Based on pip, you can search for, download, and install packages from the Python Package Index (PyPI). So be sure you have it on your system.
$ sudo apt install python3-pip
Before starting the primary installing process, make sure you have installed all the necessary packages (tools utilized when compiling, configuring, and using applications, packages, and libraries):
$ sudo apt-get install libtool pkg-config build-essential autoconf automake unzip git
5.2. Software components required to support NCS2
All the applications sustained by the Intel Neural Compute Stick 2 (NCS2) neural engine involve the installation of the OpenVINO toolkit. The only way I found to install OpenVINO on the NavQ (an ARM processor running a 64-bit Ubuntu operating system) is to build it from the source code. This operation requires a Linux build environment in which the following components are mandatory (according to Intel):
- OpenCV 3.4 or higher;
- GNU Compiler Collection (GCC) 3.4 or higher - this component exists by default;
- CMake 3.10 or higher;
- Python 3.6 or higher - this component exists by default.
To use the Neural Compute Stick 2 (NCS2), I will first present the installation process for all the software components required to sustain the development of an application able to use the computational power of this piece of hardware.
CMake install
CMake version 3.10 or higher is required for building the Inference Engine sample application. To install, open a terminal window and run the following command:
$ sudo apt install cmake
Installing OpenCV
Working with different examples and tutorials from books and the internet, in order to understand various concepts and test the most effective human detection approach, led me to a limit of the standard OpenCV environment. Consequently, I decided to use the so-called "extra" modules, or contributed functionalities, of OpenCV.
There are several ways to install OpenCV. One of them is to compile it from source. Installing using pip (Python's package manager) seemed, at the beginning, to be the easiest method to install OpenCV on the NavQ. Different OpenCV packages are pip-installable from the PyPI repository. The opencv-contrib-python package contains the main modules and the contrib modules. This is considered the complete library, including almost all functionality, so I decided to install it.
Mainly because I was using the HoverGames-Demo image at the beginning, I had installed python3-opencv using:
$ sudo apt install python3-opencv
But now, to avoid conflicts, I removed the previous version of OpenCV using:
$ sudo apt remove python3-opencv
There are two different packages for:
- standard desktop environments and
- headless environments.
You should always use the last package (the one for headless environments) if you do not use the cv2.imshow class of functions. First, make sure that your pip version is up to date:
$ pip install --upgrade pip
Now install the package you want:
$ pip install opencv-contrib-python -v
or
$ pip install opencv-contrib-python-headless -v
The -v option is essential, mainly because the install time is considerable (at least 4 hours), and you want to monitor the installation process to see that it has not crashed. Unfortunately for me, neither of these two methods worked. Both crashed at almost 100%.
As a direct result, I was forced to compile OpenCV from source. In the following, I present all the steps. I extracted and combined these steps mainly from two tutorials, [9] and [10], but personal knowledge was also used. I also present the approaches used to solve several specific issues related to the Ubuntu distribution running on the NavQ.
1. First, install a handful of image and video I/O libraries. These libraries enable us to load images from disk as well as read video files.
$ sudo apt install libjpeg-dev libpng-dev libtiff-dev openexr
$ sudo apt install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
$ sudo apt install libxvidcore-dev libx264-dev
Here I chose not to install the libdc1394-22-dev package. libdc1394 provides an interface for application developers who wish to control IEEE 1394-based cameras – this is not my case and, moreover, the NavQ does not have such a port.
2. From there, let's install GTK for our GUI backend:
$ sudo apt install libgtk-3-dev
3. The following two packages contain different automatic generation and mathematical optimization functions of numerical software:
$ sudo apt install libatlas-base-dev gfortran
4. Now, install the TBB library, which helps you develop multi-core processor applications:
$ sudo apt install libtbb2 libtbb-dev
5. And finally, let's install the Python 3 development headers and the package for array computing with Python:
$ sudo apt install python3-dev python3-numpy
6. Clone the OpenCV's and OpenCV contrib repositories
$ mkdir ~/opencv_build && cd ~/opencv_build
$ git clone https://github.com/opencv/opencv.git
$ git clone https://github.com/opencv/opencv_contrib.git
The contrib repo contains extra modules and functions, which are frequently used. So, it is recommended to install the OpenCV library with the additional contrib modules as well.
7. Once the download is complete, create a temporary build directory, and switch to it:
$ cd ~/opencv_build/opencv
$ mkdir build && cd build
8. Set up the OpenCV build with CMake:
$ cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D ENABLE_NEON=ON \
-D INSTALL_C_EXAMPLES=OFF \
-D INSTALL_PYTHON_EXAMPLES=OFF \
-D BUILD_TESTS=OFF \
-D OPENCV_GENERATE_PKGCONFIG=ON \
-D OPENCV_ENABLE_NONFREE=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_build/opencv_contrib/modules \
-D BUILD_EXAMPLES=OFF ..
Notice the use of the "-D OPENCV_ENABLE_NONFREE=ON" flag. Setting this flag ensures that you'll have access to the well-known SIFT/SURF 2D feature detection algorithms and other patented algorithms. Moreover, the NEON optimization flag has been enabled. I know from the documentation that the ARM Cortex-A53 chip, the core of the NavQ, supports VFPv4, but in the CMakeLists.txt file only VFPV3 is available. Unfortunately, when I select "-D ENABLE_VFPV3=ON", I get errors. So, on the NavQ development board, the VFP optimization does not work.
If all finished OK, you would get:
-- Configuring done
-- Generating done
-- Build files have been written to: /home/navq/opencv_build/opencv/build
9. Start the compilation process:
$ make -j3
The make process will take a long time, so you must be patient enough.
The -j flag is related to the development board's number of cores. The official advice is to use -j4, which means running four jobs in parallel, one compile job per core. For the NavQ you could use 4, mainly because you have four cores, but my advice is to use -j3 or even -j2. Using -j4, the load average (check it with w, top, or uptime) goes above 4.00 (see the following image, a real capture from the build process). The values shown there, for a 4-core CPU system, imply that the system was over-utilized: more CPU was needed than was available.
Several errors will arise during the make process, and here are the methods used to solve them:
A. You will get the error:
make[2]: *** No rule to make target '/lib/libz.so', needed by 'lib/libopencv_core.so.4.5.1'. Stop.
This issue is generated by the broken symbolic link of libz.so in /usr/lib.
$ ls -l /usr/lib/libz.so
lrwxrwxrwx 1 root root 14 Jul 16 20:05 /usr/lib/libz.so -> libz.so.1.2.11
Search for the libz.so.1.2.11:
$ find / -name libz.so.1.2.11 2>/dev/null
/usr/lib/.debug/libz.so.1.2.11
/usr/lib/aarch64-linux-gnu/libz.so.1.2.11
/home/navq/opencv_build/opencv/build/libz.so.1.2.11
/home/navq/MAVSDK/build/default/third_party/zlib/zlib/src/zlib-build/libz.so.1.2.11
/home/navq/MAVSDK/build/default/third_party/install/lib/libz.so.1.2.11
Remove that broken symbolic link and create a new link to the file where libz.so is actually located (/usr/lib/aarch64-linux-gnu/libz.so.1.2.11):
$ sudo rm /usr/lib/libz.so
$ sudo ln -s /usr/lib/aarch64-linux-gnu/libz.so.1.2.11 /usr/lib/libz.so
Run the make command again and everything will work as expected with this library.
B. After a while, you will get a similar error. See below. The solution is identical to the one presented above.
make[2]: *** No rule to make target '/lib/libgobject-2.0.so', needed by 'lib/libopencv_videoio.so.4.5.1'. Stop.
Search for the broken symbolic link:
$ ls -l /lib/libgobject-2.0.so
lrwxrwxrwx 1 root root 19 Jul 16 20:14 /lib/libgobject-2.0.so -> libgobject-2.0.so.0
Search for the libgobject-2.0.so.0 library
$ find / -name libgobject-2.0.so.0 2> /dev/null
/usr/lib/aarch64-linux-gnu/libgobject-2.0.so.0
Remove the broken link and create a new one:
$ sudo rm /lib/libgobject-2.0.so
$ sudo ln -s /usr/lib/aarch64-linux-gnu/libgobject-2.0.so.0 /lib/libgobject-2.0.so
C. Similar errors appeared for the libglib-2.0.so library and for two other lib files. The solution is similar to the ones presented above; please follow the same steps.
make[2]: *** No rule to make target '/lib/libglib-2.0.so', needed by 'lib/libopencv_videoio.so.4.5.1'. Stop.
make[2]: *** No rule to make target '/lib/libgio-2.0.so', needed by 'lib/libopencv_highgui.so.4.5.1'. Stop.
make[2]: *** No rule to make target '/lib/libgthread-2.0.so', needed by 'lib/libopencv_highgui.so.4.5.1'. Stop.
D. In the end, you will get a new type of error. This error is the following one:
[99%] Building CXX object modules/python3/CMakeFiles/opencv_python3.dir/__/src2/cv2.cpp.o
c++: fatal error: Killed signal terminated program cc1plus
Using dmesg, I was able to identify the error:
$ dmesg | tail
β¦
[14666.002787] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1000.slice/session-5.scope,task=cc1plus,pid=2003,uid=1000
[14666.002832] Out of memory: Killed process 2003 (cc1plus) total-vm:1780596kB, anon-rss:1704848kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:3472kB oom_score_adj:0
[14666.165280] oom_reaper: reaped process 2003 (cc1plus), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
This issue is caused by trying to run the build without enough free RAM – the compiler ran out of memory, so the build fails at the end with this error. The device is headless, so the solution to get more memory is to use a swap file. Right now, if you check the swap memory on the NavQ with the top command, you will see that it is zero – see the following image.
Alternatively, you can check whether the Ubuntu installation already has swap enabled by typing:
$ sudo swapon --show
If the output is empty, the system does not have swap space enabled – as in the case of the NavQ.
To use the swap memory, start by creating a file that will be used for the swap; here, you have two options:
$ sudo fallocate -l 1G /swapfile
or
$ sudo dd if=/dev/zero of=/swapfile bs=1024 count=1048576
Set up a Linux swap area on the file:
$ sudo mkswap /swapfile
Mainly because only the root user should have the right to write and read the swap file, you must set the permissions by using:
$ sudo chmod 600 /swapfile
Activate the swap file:
$ sudo swapon /swapfile
Verify that the swap is active by using either the swapon, top, or free command:
$ sudo free -h
To make the change permanent, open the /etc/fstab file and paste the following line:
/swapfile swap swap defaults 0 0
Now, run the make command once again, and everything will work flawlessly.
10. Install OpenCV with:
$ sudo make install
11. Test your OpenCV 4 install on Ubuntu. Into a terminal, perform the following:
$ python3
>>> import cv2
>>> cv2.__version__
'4.5.1-dev'
>>> quit()
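Optionally, you can also check that the contrib modules and the patented/nonfree algorithms were actually enabled by the build flags used above. This is only a small extra check of my own, not part of the original tutorials:

import cv2

print(cv2.__version__)
print(hasattr(cv2, "xfeatures2d"))       # True if the contrib modules were built
surf = cv2.xfeatures2d.SURF_create(400)  # available only with OPENCV_ENABLE_NONFREE=ON
sift = cv2.SIFT_create()
print(type(surf).__name__, type(sift).__name__)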
In the end, analyzing the problem of installing the OpenCV environment from the perspective of everything I know so far, I think that the lack of memory is what made it impossible to install OpenCV using pip. With the swap memory configured as presented above, I am convinced that installing OpenCV via pip will work very well.
Install OpenVINO
Installing OpenVINO is a little bit difficult. There are several approaches, but most of them don't work!
If you use a method similar to installing OpenVINO on a Raspberry Pi (presented here: https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_raspbian.html), it is a total failure and a monumental waste of time. Moreover, at this moment, Intel only provides 32-bit "armhf" libraries for Raspbian. These libraries and this installation are not fully compatible with the NavQ Ubuntu distribution. There are broken links, wrong dependencies, Python 3.8 is not supported, etc. From my perspective, based on all my attempts and on my Linux knowledge, it is impossible to make it work.
Even if you download and install the runtime archives like l_openvino_toolkit_runtime_ubuntu20_p_20xy.z.abc.tgz, developed especially for Ubuntu 20.04, it is impossible to put them together with OpenCV and create a functional program, mainly because these OpenVINO archives are for systems with an x86 architecture.
The guide that provides installation steps for the OpenVINO toolkit for Linux distributed through the APT repository doesn't work either.
In the following, I present the installation steps that finally worked in my case. At the same time, I present the ways to correct all the errors and the settings made to the operating system so that the compilation and installation can take place without problems.
1. To begin with, in the case of OpenVINO, you must use a swap file of at least 2 GB. If you use the swap file from the OpenCV install process (it was 1 GB), it will not be enough, and the installation process will crash. So, please follow the steps previously presented and create a swap file of 2 GB. But first, remove the previous swap file:
(a). Start by deactivating the swap space by typing:
$ sudo swapoff -v /swapfile
(b). Remove the old swapfile file using the rm command:
$ sudo rm /swapfile
2. It is required to install Cython (the Cython compiler will convert Python code into C code) before running CMake. So let's install it:
$ pip3 install Cython
3. Clone the open-source version of the OpenVINO toolkit:
$ cd ~/
$ git clone https://github.com/openvinotoolkit/openvino.git
4. This repository also has several submodules that must be fetched as follows:
$ cd ~/openvino/inference-engine
$ git submodule update --init --recursive
5. The OpenVINO toolkit has several build dependencies. Use the install_build_dependencies.sh script to fetch them.
$ cd ~/openvino
$ sh ./install_build_dependencies.sh
6. Now, if the script finished successfully, you are ready to build the toolkit. The first step in beginning the build is telling the system where the OpenCV installation is. So, use the following command:
$ export OpenCV_DIR=/usr/local/lib
7. To build both the inference engine and the MYRIAD plugin for Intel NCS2 use the following commands:
$ cd ~/openvino
$ mkdir build && cd build
$ cmake -DCMAKE_BUILD_TYPE=Release \
-DENABLE_MKL_DNN=OFF \
-DENABLE_CLDNN=ON \
-DENABLE_GNA=OFF \
-DENABLE_SSE42=OFF \
-DTHREADING=SEQ \
-DENABLE_SAMPLES=ON \
-DENABLE_PYTHON=ON \
-DPYTHON_EXECUTABLE=`which python3.8` \
-DPYTHON_LIBRARY=/usr/lib/aarch64-linux-gnu/libpython3.8.so \
-DPYTHON_INCLUDE_DIR=/usr/include/python3.8
..
8. At a specific moment, you will get the following error:
-- error: extract of '/home/navq/openvino/inference-engine/temp/download/VPU/usb-ma2x8x/firmware_usb-ma2x8x_1540.zip' failed: /usr/bin/cmake: /usr/lib/libcurl.so.4: no version information available (required by /usr/bin/cmake)
The steps to solve this error are similar to the ones presented above (in the OpenCV build process). Here, the libcurl.so.4 library is the problem.
$ ls -la /usr/lib/libcurl.so.4
lrwxrwxrwx 1 root root 16 Jul 29 00:44 /usr/lib/libcurl.so.4 -> libcurl.so.4.6.0
$ find / -name libcurl.so.4.6.0 2>/dev/null
/usr/lib/aarch64-linux-gnu/libcurl.so.4.6.0
/usr/lib/libcurl.so.4.6.0
Here is an interesting fact: even though the file libcurl.so.4.6.0 exists in the correct place (/usr/lib), in order for the build to work you must create the link to the library /usr/lib/aarch64-linux-gnu/libcurl.so.4.6.0:
$ sudo rm /usr/lib/libcurl.so.4
$ sudo ln -s /usr/lib/aarch64-linux-gnu/libcurl.so.4.6.0 /usr/lib/libcurl.so.4
9. Start the compilation process:
$ make -j4
10. When the build completes successfully, the OpenVINO toolkit will be ready to run. The 64-bit builds are placed in the ~/openvino/bin/aarch64/Release folder.
Having built OpenVINO, we must now set up the NCS2 Linux USB driver.
11. So, add the current Linux user to the users group:
$ sudo usermod -a -G users "$(whoami)"
12. While logged in as the navq user (a member of the users group), install the USB rules by running:
$ sudo cp ~/OpenVINO/97-myriad-usbboot.rules_.txt /etc/udev/rules.d/97-myriad-usbboot.rules
$ sudo udevadm control --reload-rules
$ sudo udevadm trigger
$ sudo ldconfig
The file 97-myriad-usbboot.rules_.txt contains the following information; it can be created manually, or you can download it from here.
SUBSYSTEM=="usb", ATTRS{idProduct}=="2150", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
SUBSYSTEM=="usb", ATTRS{idProduct}=="2485", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
SUBSYSTEM=="usb", ATTRS{idProduct}=="f63b", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
At this moment, the USB driver should be installed correctly. If you have problems with the Intel NCS2, please restart your NavQ and try again.
13. Checking the USB device. Run lsusb to see which USB devices are connected. First, run this command with the NCS2 connected; next, run it without. The entry {Bus 001 Device 004: ID 03e7:2485 Intel Movidius MyriadX} is the NCS2.
$ lsusb
Bus 001 Device 004: ID 03e7:2485 Intel Movidius MyriadX
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Now check without the NCS2 stick:
$ lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
14. After the build process finishes, export the environment variables:
$ export PYTHONPATH=$PYTHONPATH:/home/navq/openvino/bin/aarch64/Release/lib/python_api/python3.8/
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/navq/openvino/bin/aarch64/Release/lib/
15. Check the Python wrapper by running the following script to import IENetwork and IECore:
$ python3
>>> from openvino.inference_engine import IENetwork, IECore, Blob, TensorDesc
If you can successfully import these classes, you have correctly built the OpenVINO toolkit with the Python wrapper.
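As an extra, optional sanity check (my own suggestion, not a required step), you can also ask the Inference Engine which devices it sees; with the NCS2 plugged in and the udev rules installed, the MYRIAD device should be listed:

from openvino.inference_engine import IECore

ie = IECore()
print(ie.available_devices)   # expect something like ['MYRIAD'] with the NCS2 connected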
5.3. Installing other libraries
Due to the problems that arise from deep learning neural networks, and due to the availability of different components (like image processing or linear algebra), I decided to install the Dlib library and use it in conjunction with OpenCV and OpenVINO. So, to install Dlib, use:
$ sudo pip3 install dlib
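A quick smoke test of the Dlib install (only an illustration using Dlib's built-in frontal face detector and a placeholder image, not the detection method used in this project) could look like this:

import cv2
import dlib

detector = dlib.get_frontal_face_detector()          # Dlib's built-in HOG-based face detector
gray = cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder test image
rects = detector(gray, 1)                            # upsample the image once before detecting
print(len(rects), "face(s) found")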
Now, mainly because I will need imutils, it must be installed. imutils contains a series of functions that make performing basic image processing with OpenCV a bit easier. In my case, I use this library mainly in the server application, but for ease of package management, I installed it on both systems. So, install imutils:
$ sudo pip3 install imutils
Or, if you already have it, just update it:
$ sudo pip3 install --upgrade imutils
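Just to illustrate the kind of helpers imutils provides (a tiny example of my own, with a placeholder image):

import cv2
import imutils

img = cv2.imread("test.jpg")               # placeholder test image
small = imutils.resize(img, width=640)     # resize while keeping the aspect ratio
rotated = imutils.rotate_bound(small, 90)  # rotate without cutting the corners
print(small.shape, rotated.shape)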
For the management of the running processes, I used the psutil (process and system utilities) library. Based on it, I set, for example, the process affinity. So, to use it, please install it:
$ sudo pip3 install psutil
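For example, pinning the current process to specific CPU cores with psutil can be done as in the short sketch below (the chosen core numbers are only an example):

import psutil

p = psutil.Process()              # the current process
print("initial affinity:", p.cpu_affinity())
p.cpu_affinity([0, 1, 2])         # e.g., leave core 3 free for other tasks
print("new affinity:", p.cpu_affinity())
print("RSS memory (MB):", p.memory_info().rss / 1e6)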
5.4. Installing ZMQ
imageZMQ is a set of Python classes that transport OpenCV images from one computer to another using PyZMQ messaging (https://github.com/jeffbass/imagezmq):
$ sudo pip3 install imagezmq
$ sudo pip3 install pyzmq
$ sudo apt-get install libzmq3-dev
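A minimal sender/receiver pair, useful only as an orientation sketch (the base station IP address and the image file are placeholders), would look like this:

# --- on the NavQ (sender) ---
import cv2
import imagezmq

sender = imagezmq.ImageSender(connect_to="tcp://192.168.1.10:5555")  # placeholder base station IP
frame = cv2.imread("detection.jpg")                                  # placeholder frame
reply = sender.send_image("navq", frame)

# --- on the base station (receiver) ---
import cv2
import imagezmq

hub = imagezmq.ImageHub()        # listens on tcp://*:5555 by default
name, frame = hub.recv_image()
hub.send_reply(b"OK")            # imageZMQ uses a REQ/REP pattern, so a reply is required
cv2.imshow(name, frame)
cv2.waitKey(0)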
5.5. Program versions and SD card image
The actual NavQ SD card was developed based on the HoverGames-Demo Linux distribution, so it is Ubuntu 20.04.2 LTS.
The following are installed on the SD card placed inside the NavQ system: CMake (3.16.3), Cython (0.29.21), dlib (19.21.99), future (0.18.2), git (2.25.1), imutils (0.5.4), imagezmq (1.1.1), libzmq3-dev (4.3.1), lxml (4.6.2), MAVSDK (0.30.1), numpy (1.17.4), OpenCV (4.5.1-dev), pip (21.0.1), psutil (5.8.0), pymavlink (2.4.14), Python2 (2.7.18), Python3 (3.8.5), pyzmq (17.1.2). Obviously, I mentioned here only some of the packages. As you have seen from all the installation steps, many other packages are installed on the system. Moreover, the installation of some of these components will be presented in the following.
To help all those who want to work with the NavQ system, an archived copy of the SD card image, with all these previous applications, can be downloaded from here. The SD image was created with the "Disks" application (gnome-disks) from Ubuntu.
6. Ground station, FMU, and onboard computer communication
An essential chain of software components is the one used to send the result of the NavQ processing to the base station. When a human is localized, the data (containing the GPS coordinates of the detected human activity, the number of detected people, the detection confidence, etc.) is sent through the radio telemetry units to the base station (a PC system), see Figure 1, data path (4).
The first approach implemented to communicate with the FMUK66 from the NavQ was based on the MAVSDK C++ library and the mavsdk_server. Because my application running on the companion computer is developed in Python, the mavsdk_server is required.
Later, I found that this communication can be done in a much simpler and more straightforward way, through custom uORB and MAVLink messages.
In this section, I will present the steps required to implement both methods.
Both methods of communication were published as tutorials on hackster.io. Their access addresses are:
- Communication through custom uORB and MAVLink messages (https://www.hackster.io/mdobrea/communication-through-custom-uorb-and-mavlink-messages-269ebf);
- C++ and Python interface&management application for FMUK66 (https://www.hackster.io/mdobrea/c-and-python-interface-management-application-for-fmuk66-6dd935).
On the official GitBook documentation for the NXP HoverGames drone development kit, "C++ and Python interface & management application for FMUK66" is included as a tutorial for all those who want to learn to develop applications for the HoverGames. The link to this tutorial on GitBook is: https://nxp.gitbook.io/hovergames/developerguide/c++-and-python-interface-and-management-application-for-fmuk66.
6.1. C++ and Python interface & management application for FMUK66
A. The steps required to connect from a C++ application to the FMUK66 flight management unit based on the MAVLink protocol
In order to run a C++ application that uses the MAVLink protocol, you must first have the MAVSDK C++ library installed. On Ubuntu or Fedora, users should install the MAVSDK C++ *.deb or *.rpm packages from the GitHub releases page (https://github.com/mavlink/MAVSDK/releases) or, as a second option, build and install the MAVSDK C++ library. This last option is the one presented here.
So, please follow the steps:
1. Build & Install MAVSDK core C++ Library
- [a] First, get the NavQ ready:
$ sudo apt update
$ sudo apt upgrade
Now we're going to build this library on the NavQ. So let's start:
- [b] Go here to build the C++ SDK from source:
https://mavsdk.mavlink.io/develop/en/contributing/build.html
and follow all the steps from "Building SDK from Source". In my case, I used the instructions for the "Linux" operating system (OS). Use the latest stable build (master) and build the Release libraries.
But, if you have a problem building the Release binaries (only on the Raspberry Pi, from my experience; on the NavQ it works correctly), then build the Debug C++ library instead.
- [c] Troubleshooting - the vast majority of common build issues can be resolved by updating the submodules and cleaning the build (it wasn't my case, but I inserted this information here just in case):
$ cd MAVSDK
$ git submodule update --recursive
$ rm -rf build
Then attempt the build again – repeat step [b] once again.
- [d] Install the SDK and use the system-wide install as described here:
https://mavsdk.mavlink.io/develop/en/contributing/build.html#install-artifacts.
If you build a new version of MAVSDK, don't worry about how to uninstall the previous version, because "System-wide installation overwrites any previously installed version of the SDK".
2. Set up TELEM2 on the FMU
Communication between the NavQ and the FMU (FMUK66) requires some configuration to be done on the FMU side through QGroundControl. So, navigate in QGroundControl to Settings → Parameters → MAVLink and set MAV_1_CONFIG to TELEM 2.
The rest of the parameters (MAV_1_FORWARD, MAV_1_MODE, etc.) will appear only after you reboot the FMU. So, press the Tools button (upper right corner) and select Reboot Vehicle. Now you can set these parameters according to the following image:
Also, you'll need to make sure that the setting for SER_TEL2_BAUD (Settings → Parameters → Serial) is like in the next image:
3. Configuring the serial port on NavQ
Make sure that your NavQ is set up correctly to communicate over serial by running the following command:
$ stty -F /dev/ttymxc2 921600
4. Building one C++ example and testing the whole setup based on it
- [a] Now build one example:
$ cd examples/takeoff_land/
$ mkdir build && cd build
$ cmake ..
$ make
- [b] Now run the example app (from the examples/takeoff_land/build directory) as shown:
$ ./takeoff_and_land serial:///dev/ttymxc2:921600
In the end, if all is OK, you will get this (or something similar):
$ ./takeoff_and_land serial:///dev/ttymxc2:921600
[11:35:13|Info ] MAVSDK version: v0.29.0 (mavsdk_impl.cpp:27)
[11:35:13|Debug] New: System ID: 0 Comp ID: 0 (mavsdk_impl.cpp:404)
Waiting to discover system...
[11:35:13|Debug] Component Autopilot (1) added. (system_impl.cpp:344)
[11:35:13|Debug] Discovered 1 component(s) (UUID: 10832640680271026576) (system_impl.cpp:517)
Discovered system with UUID: 10832640680271026576
Discovered a component with type 1
Vehicle is getting ready to arm
Vehicle is getting ready to arm
Vehicle is getting ready to arm
Here is a short movie that presents a connection from a C++ application (running on NavQ - RDDRONE-8MMNavQ) to FMUK66:
If you develop your code in C/C++, you now have all the tools and packages installed and configured – you are ready to work with MAVLink from a C++ program, so happy coding! But mainly because I need to work in Python, I will continue with all the packages required for Python.
B. The steps required to connect from a Python application to the FMUK66 flight management unit based on the MAVLink protocol
So, for all of you who want to work in Python, please continue with:
5. Building mavsdk_server
The MAVSDK programming language-specific libraries (e.g., Swift, Python) share a standard backend (called "mavsdk_server"). This server may be optionally built as part of the C++ library. In our case, we need this server, so let's start building it.
- [a] Go to the MAVSDK directory and:
$ sudo cmake -DBUILD_BACKEND=ON --symlink-install --cmake-args "-DCMAKE_SHARED_LINKER_FLAGS='-latomic'" "-DCMAKE_EXE_LINKER_FLAGS='-latomic'" -Bbuild/default -H.
In the end, you will get:
-- Configuring done
-- Generating done
-- Build files have been written to: /home/navq/MAVSDK/build/default
And, now the last step:
$ sudo cmake --build build/default
- [b] Take note: the file mavsdk_server, which you will need for MAVSDK-Python, is in the directory MAVSDK/build/default/src/backend/src/
- [c] To check the mavsdk_server you can use:
$ ./mavsdk_server -v
[03:38:31|Info ] MAVSDK version: v0.30.1 (mavsdk_impl.cpp:27)
[03:38:31|Debug] New: System ID: 0 Comp ID: 0 (mavsdk_impl.cpp:405)
[03:38:31|Info ] Server started (grpc_server.cpp:44)
[03:38:31|Info ] Server set to listen on 0.0.0.0:37277 (grpc_server.cpp:45)
[03:38:31|Info ] Waiting to discover system on -v... (connection_initiator.h:22)
[03:38:31|Warn ] Unknown protocol (cli_arg.cpp:71)
[03:38:31|Error] Connection failed: Invalid connection URL (connection_initiator.h:48)
6. Install MAVSDK-Python
To install MAVSDK-Python, simply run:
$ pip3 install mavsdk
Copy that mavsdk_server file into MAVSDK-Python/mavsdk/bin/
7. Testing the MAVLink connection based on a Python program
- [a] First the mavsdk_server must be active:
$ cd ~
$ cd MAVSDK/build/default/src/backend/src/
$ ./mavsdk_server -p 50051 serial:///dev/ttymxc2:921600
- [b] Download an example from the repository; in my case, I downloaded the same example used for the C/C++ test:
$ wget https://raw.githubusercontent.com/mavlink/MAVSDK-Python/master/examples/takeoff_and_land.py
- [c] Replace the following two code lines (from takeoff_and_land.py):
drone = System()
await drone.connect(system_address="udp://:14540")
with:
drone = System(mavsdk_server_address='localhost', port=50051)
await drone.connect(system_address="serial:///dev/ttymxc2:921600")
In each example downloaded from the repository, in order for it to work correctly, you must replace the two lines of code previously presented.
- [d] In the end, run the program:
$ python3 takeoff_and_land.py
If all is correctly installed and configured, you will get the following messages:
Waiting for mavsdk_server to be ready...
Connected to mavsdk_server!
Waiting for drone to connect...
Drone discovered with UUID: 10832640680271026576
Waiting for drone to have a global position estimate...
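Once this works, reading the information needed by the project (for example, the GPS position to attach to a detection report) follows the same pattern. Below is a minimal sketch of my own, which assumes the mavsdk_server is already running with the parameters shown above:

import asyncio
from mavsdk import System


async def run():
    # mavsdk_server is assumed to be running with: -p 50051 serial:///dev/ttymxc2:921600
    drone = System(mavsdk_server_address="localhost", port=50051)
    await drone.connect(system_address="serial:///dev/ttymxc2:921600")

    async for position in drone.telemetry.position():
        print(f"lat={position.latitude_deg:.7f}  lon={position.longitude_deg:.7f}")
        break  # one sample is enough for this test


asyncio.run(run())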
6.2. Communication through custom uORB and MAVLink messages
This is the section of the project I would have liked to have had when I started learning about MAVLink and uORB. I wasted a lot of time learning as much as possible about these subjects. I had to learn on my own, and it took far too much time; I wish I had known all of this information earlier.
In the beginning, I started from my desire to communicate between a companion computer (in my case, the RDDRONE-8MMNavQ "NavQ") and an FMU unit (in my case, an RDDRONE-FMUK66). Both of them are NXP products. In the following, I will present the steps necessary to achieve the proposed goals and the main concepts behind these steps.
Use a custom uORB message and send it as a MAVLink message
The first part of this section explains how to use a custom uORB message and send it as a MAVLink message. To accomplish this, please follow these steps:
1. Add a custom uORBvideo_monitor
message in:
msg/video_monitor.msg
- To add a new topic, you need to create a new .msg file in the msg/ directory, in my case, with the following content:
uint64 timestamp # time since system start (microseconds)
char[11] info # specific info
int32 lat # Latitude in 1E-7 degrees
int32 lon # Longitude in 1E-7 degrees
uint16 no_people # number of identified people
float32 confidence # the highest confidence value for a human recognition
- Second, add the file name to the msg/CMakeLists.txt list. Based on this, the needed C/C++ code is automatically generated.
set(msg_files
...
video_monitor.msg
...
)
2. Right now, we will test the new uORB message. But first, build the firmware for FMUK66 and upload it to the FMU.
The listener command can be used to listen to the content of a specific topic. To test this command, you can try it with (from a QGroundControl MAVLink console):
nsh> listener sensor_accel
and you will receive something similar to the following image:
If you use the listener command to see the content of the video_monitor topic, you will get the following message:
nsh> listener video_monitor
never published
In this way, you are warned that the topic exists but has never been published before.
To publish a topic, a new program (named inject_VideoMsg.cpp) must be developed and placed into the /src/examples/inject_customUORBmsg folder in the PX4 project.
#include <px4_platform_common/px4_config.h>
#include <px4_platform_common/posix.h>
#include <unistd.h>
#include <stdio.h>
#include <poll.h>
#include <string.h>
#include <uORB/uORB.h>
#include <uORB/topics/video_monitor.h>
extern "C" __EXPORT int inject_myUORB_main(int argc, char *argv[]);
int inject_myUORB_main(int argc, char *argv[])
{
PX4_INFO("Hello, I am only a test program able to inject VIDEO_MONITOR messages.");
// Declare structure to store data that will be sent
struct video_monitor_s videoMon;
// Clear the structure by filling it with 0s in memory
memset(&videoMon, 0, sizeof(videoMon));
// Create a uORB topic advertisement
orb_advert_t video_monitor_pub = orb_advertise(ORB_ID(video_monitor), &videoMon);
for (int i=0; i<40; i++)
{
char myStr[]={"Salut !!"}; memcpy(videoMon.info, myStr, 9);
videoMon.timestamp = hrt_absolute_time();
videoMon.lat = i;
videoMon.lon = 12345678;
videoMon.no_people = i+5;
videoMon.confidence = 0.369f;   // float literal for the float field
orb_publish(ORB_ID(video_monitor), video_monitor_pub, &videoMon);
//sleep for 2s
usleep (2000000);
}
PX4_INFO("inject_myUORB finished!");
return 0;
}
In the same folder, place the following CMakeLists.txt file:
px4_add_module(
MODULE examples__inject_customUORBmsg
MAIN inject_myUORB
SRCS
inject_VideoMsg.cpp
DEPENDS
)
To enable the compilation of the previous application into the PX4 firmware, create a new line for your application in the boards/nxp/fmuk66-v3/default.cmake file:
px4_add_board (
PLATFORM nuttx
...
examples
inject_customUORBmsg
...
)
After compiling the program presented above (together with the PX4 autopilot) and uploading the firmware, executing inject_myUORB in the background permits us to run other command-line programs (like listener) to check the video_monitor topic:
nsh> inject_myUORB &
inject_myUORB [667:100]
INFO [inject_myUORB] Hello, I am only a test program able to inject VIDEO_MONITOR messages.
nsh> listener video_monitor
TOPIC: video_monitor
video_monitor_s
timestamp: 244441865 (1.819939 seconds ago)
lat: 7
lon: 12345678
confidence: 0.3690
no_people: 12
info: "Salut !!"
To enumerate all existing topics, list the file handles associated with them using:
nsh> ls /obj
A short mention here: to find your topic in this list, the topic must have been published at least once. You can also use the uorb top <message_name> command to verify, in real-time, that your message is being published and to list various other types of information.
3. The next goal is to add a custom MAVLink video_monitor message placed in:
mavlink/include/mavlink/v2.0/video_monitor/mavlink_msg_video_monitor.h
a. The enums and messages that are generally useful for many flight stacks and ground stations are stored in a file named common.xml, which is managed by the MAVLink project.
b. The dialects are stored in separate XML files, which typically include (import) common.xml and define just the elements needed for system-specific functionality.
c. When a MAVLink library is generated from a dialect file, the code is created for all messages in both the dialect and any included files (e.g., common.xml)
d. Steps:
- A message must be declared between the <messages> </messages> tags, either in the common.xml file or in an independent dialect file. I decided to use the latter option. So, create your own dialect named video_monitor.xml:
<?xml version="1.0"?>
<mavlink>
<include>common.xml</include>
<dialect>4</dialect>
<messages>
<message id="369" name="VIDEO_MONITOR">
<description>This message sends the number of people discovered! NavQ => FMU => GroundStation</description>
<field type="uint64_t" name="timestamp">time since system start (microseconds)</field>
<field type="char[11]" name="info">General information (11 characters, null terminated, valid characters are A-Z, 0-9, " " only)</field>
<field type="int32_t" name="lat" units="degE7">Latitude WGS84 (deg * 1E7). If unknown set to INT32_MAX</field>
<field type="int32_t" name="lon" units="degE7">Longitude WGS84 (deg * 1E7). If unknown set to INT32_MAX</field>
<field type="uint16_t" name="no_people">number of identified peoples</field>
<field type="float" name="confidence">I'n not sure for what to using it</field>
</message>
</messages>
</mavlink>
For the <dialect>4</dialect> line, in your specific case, replace the 4 value with the next-largest unused dialect number (based on the other files in the folder: mavlink/message_definitions/v1.0/).
All messages within a particular generated dialect must have a unique ID - <message id="369" name="VIDEO_MONITOR">. But there are some constraints regarding the ID number. The first constraint does not allow creating messages with IDs in the "MAVLink 1" range: MAVLink 1 has only 8-bit message IDs, and hence can only support messages with IDs in the 0 - 255 range. Moreover, the PX4 recommendation is: "when creating a new message you should select the next unused ID for your dialect (after the last one defined in your target dialect file)". In my case, I analyzed the common.xml file (placed in mavlink/include/mavlink/v2.0/message_definitions), where all the used message IDs, reserved ranges, etc. are listed - see the following figure. So, I decided to use ID=369 - a very easy to remember number.
- Place the file under the folder:
mavlink/message_definitions/v1.0/
In my case, I have a HoverGames folder in my home folder, which contains two other folders:
i. src (here is all the PX4 code), created with:
$ cd ~
$ mkdir -p ~/HoverGames/src
$ cd ~/HoverGames/src && git clone --recursive https://github.com/PX4/Firmware.git px4-firmware
ii. mavlink, created with (see https://mavlink.io/en/getting_started/installation.html):
$ cd ~
$ sudo apt-get install python3-pip
$ pip3 install --user future
$ cd ~/HoverGames
$ git clone https://github.com/mavlink/mavlink.git --recursive
- Your message needs to be generated as a C-library for MAVLink 2. Once you've installed MAVLink you can do this on the command line using the following commands:
$ cd ~/HoverGames/mavlink
$ python -m pymavlink.tools.mavgen --lang=C --wire-protocol=2.0 --output=generated/include/mavlink/v2.0 message_definitions/v1.0/video_monitor.xml
- For your own use/testing you can just copy the generated headers into PX4-Autopilot/mavlink/include/mavlink/v2.0, in my case:
~/HoverGames/src/px4-firmware/mavlink/include/mavlink/v2.0
But first, delete the following folders: common, minimal, and video_monitor (the last one only if you previously generated it):
$ rm -r ~/HoverGames/src/px4-firmware/mavlink/include/mavlink/v2.0/common
$ rm -r ~/HoverGames/src/px4-firmware/mavlink/include/mavlink/v2.0/minimal
$ rm -r ~/HoverGames/src/px4-firmware/mavlink/include/mavlink/v2.0/video_monitor
So, use the following commands to copy the files and folders generated at the previous step into the right place:
$ cd ~/HoverGames/mavlink/generated/include/mavlink/v2.0
$ cp -R -v * ~/HoverGames/src/px4-firmware/mavlink/include/mavlink/v2.0
4. This section explains how to use a custom uORB message and send it as a MAVLink message.
The web tutorial associated with the PX4 autopilot presents an approach based on changing the mavlink_messages.cpp file, placed in my case in src/modules/mavlink/mavlink_messages.cpp (or, on the HDD, ~/HoverGames/src/px4-firmware/src/modules/mavlink/mavlink_messages.cpp), to deal with all these messages.
But in the new version of the PX4 autopilot, the trend is to define the stream class in a header file placed under the streams folder: src/modules/mavlink/streams. So, create here the file VIDEO_MONITOR.hpp with the following content:
#ifndef VIDEO_MON_HPP
#define VIDEO_MON_HPP
#include <uORB/topics/video_monitor.h> //placed in:
// build/nxp_fmuk66-v3_default/uORB/topics
#include <v2.0/video_monitor/mavlink.h>
#include "v2.0/video_monitor/mavlink_msg_video_monitor.h"
class MavlinkStreamVideoMonitor : public MavlinkStream
{
public:
static MavlinkStream *new_instance(Mavlink *mavlink)
{ return new MavlinkStreamVideoMonitor(mavlink); }
// In a member function declaration or definition, override specifier ensures that
// the function is virtual and is overriding a virtual function from a base class.
const char *get_name() const override
{ return MavlinkStreamVideoMonitor::get_name_static(); }
// The constexpr specifier declares that it is possible to
// evaluate the value of the function or variable at compile time.
static constexpr const char *get_name_static()
{ return "VIDEO_MONITOR"; }
uint16_t get_id() override
{ return get_id_static(); }
static constexpr uint16_t get_id_static()
{ return MAVLINK_MSG_ID_VIDEO_MONITOR; }
unsigned get_size() override
{ return MAVLINK_MSG_ID_VIDEO_MONITOR_LEN + MAVLINK_NUM_NON_PAYLOAD_BYTES; }
private:
uORB::Subscription _sub{ORB_ID(video_monitor)};
/* do not allow copying this class */
MavlinkStreamVideoMonitor(MavlinkStreamVideoMonitor &);
MavlinkStreamVideoMonitor& operator = (const MavlinkStreamVideoMonitor &);
protected:
explicit MavlinkStreamVideoMonitor(Mavlink *mavlink) : MavlinkStream(mavlink)
{}
bool send() override
{
struct video_monitor_s _video_monitor; //make sure video_monitor_s is the
//definition of your uORB topic
if (_sub.update(&_video_monitor))
{
mavlink_video_monitor_t _msg_video_monitor; // mavlink_video_monitor_t is the
// definition of your custom
// MAVLink message
_msg_video_monitor.timestamp = _video_monitor.timestamp;
_msg_video_monitor.lat = _video_monitor.lat;
_msg_video_monitor.lon = _video_monitor.lon;
_msg_video_monitor.no_people = _video_monitor.no_people;
_msg_video_monitor.confidence = _video_monitor.confidence;
for(int i=0; i<11; i++)
_msg_video_monitor.info[i] = _video_monitor.info[i];
mavlink_msg_video_monitor_send_struct(_mavlink->get_channel(),
&_msg_video_monitor);
PX4_WARN("uorb => mavlink - message was sent !!!!");
return true;
}
return false;
}
};
#endif // VIDEO_MON_HPP
Now, add the following code to mavlink_messages.cpp:
...
#include <uORB/topics/video_monitor.h>
...
#include "streams/VIDEO_MONITOR.hpp"
...
static const StreamListItem streams_list[] = {
...
#if defined(VIDEO_MON_HPP)
create_stream_list_item<MavlinkStreamVideoMonitor>(),
#endif // VIDEO_MON_HPP
...
};
5. The mavlink module implements the MAVLink protocol, which can be used on a serial link or a UDP network connection. It communicates with the system via uORB: some messages are directly handled in the module (e.g., mission protocol), others are published via uORB (e.g., vehicle_command).
There can be multiple independent instances of the MAVLink module, each connected to one serial device or network port.
For the HoverGames drone, there are three instances of the mavlink module.
In order to check this information, please use:
nsh> mavlink status
Now, make sure to enable the stream from the MAVLink console:
nsh> mavlink stream -r 50 -s VIDEO_MONITOR -d /dev/ttyS1
To see if all is OK, print all the enabled streams (on instance 2 of the mavlink module, you will see the VIDEO_MONITOR stream at the end):
nsh> mavlink status streams
6. Now enable the stream at the HoverGames boot process, for example, by adding the following line to the startup script (e.g., /ROMFS/px4fmu_common/init.d/rcS on NuttX). Note that -r configures the streaming rate and -d identifies the MAVLink serial channel.
mavlink stream -r 50 -s VIDEO_MONITOR -d /dev/ttyS1
For a UDP channel, please use (not the case for the HoverGames drone):
mavlink stream -r 50 -s VIDEO_MONITOR -u 14556
7. You have at least two options to: (a) check if a uORB message is sent as a MAVLink dialect and (b) check if all the above steps are functional.
In the first approach:
a. Use a PX4_WARN call inside the send() function as presented above, at the fourth step.
b. Now, arm the HoverGames quadcopter. Arming is required to start the logging process - the logging stops when you disarm the quadcopter.
c. From a command line, run the inject_myUORB program - developed at the second step.
d. Download the last log file.
e. Use the following website for log file plotting and analysis: https://review.px4.io/
f. You will see all the logged messages at the bottom of the page, as in the following figure. If you see the WARNING message "uorb => mavlink - message was sent !!!!", this means that all the code is correctly written and the link between the uORB message and the MAVLink message was done correctly.
8. The most challenging approach is developing a Python program (running on the NavQ companion computer, see https://mavlink.io/en/mavgen_python/) to receive the uORB data, sent by the inject_myUORB application (running on the FMUK66 FMU), through MAVLink messages.
Communication between NavQ and the FMU (FMUK66) requires:
(a) a serial connection between the two systems, and
(b) some configuration to be done on the FMU side through QGroundControl. So, navigate in QGroundControl to Settings → Parameters → MAVLink and set MAV_1_CONFIG to TELEM 2.
The rest of the parameters (MAV_1_FORWARD, MAV_1_MODE, etc.) will appear only after rebooting the FMU. So, press the Tools button (upper-right corner) and select Reboot Vehicle. Now you can set these parameters according to the following image:
Also, you'll need to make sure that the SER_TEL2_BAUD setting (Settings → Parameters → Serial) is like in the next image:
If you want to wait for and access a particular type of message when it arrives, you can use the recv_match() method. This method waits for a specific message and intercepts it when it comes. The following example (receiveCustomMavlinkMSG.py) will also check that the message is valid before using its content.
import sys
from pymavlink import mavutil

mavutil.set_dialect("video_monitor")

# create a connection to FMU
hoverGames = mavutil.mavlink_connection("/dev/ttymxc2", baud=921600)

# wait for the heartbeat message to find the system id
hoverGames.wait_heartbeat()
print("Heartbeat from system (system %u component %u)" % (hoverGames.target_system, hoverGames.target_component))

while True:
    msg = hoverGames.recv_match(type='VIDEO_MONITOR', blocking=True)
    # check that the message is valid before attempting to use it
    if not msg:
        print('No message!\n')
        continue
    if msg.get_type() == "BAD_DATA":
        if mavutil.all_printable(msg.data):
            sys.stdout.write(msg.data)
            sys.stdout.flush()
    else:
        # Message is valid, so use its attributes
        print('Info: %s' % msg.info)
        print('Latitude : %d' % msg.lat)
        print('Longitude: %d' % msg.lon)
        print('No.people: %d' % msg.no_people)
        print('Confidence: %f' % msg.confidence)
        print('\n')
Send a message over MAVLink and publish it to uORB
The following section explains how to send a message over MAVLink and publish it to uORB!
9. Insert the following include files in mavlink_receiver.h (you can find it here: src/modules/mavlink):
#include <uORB/topics/video_monitor.h> //placed in:
// build/nxp_fmuk66-v3_default/uORB/topics
#include <v2.0/video_monitor/mavlink.h>
Add a function able to handle the incoming MAVLink VIDEO_MONITOR message in the MavlinkReceiver class (class MavlinkReceiver : public ModuleParams) in mavlink_receiver.h:
void handle_message_video_monitor(mavlink_message_t *msg);
Also add a uORB publisher in the MavlinkReceiver class in mavlink_receiver.h:
uORB::Publication<video_monitor_s> _videoMon_pub{ORB_ID(video_monitor)};
You can put any name in the place of _videoMon_pub, but it is nice to keep the _pub suffix in order to respect the PX4 naming convention.
10. Now it is time to implement the handle_message_video_monitor function in the mavlink_receiver.cpp file:
void MavlinkReceiver::handle_message_video_monitor(mavlink_message_t *msg)
{
mavlink_video_monitor_t videoMon_my;
mavlink_msg_video_monitor_decode(msg, &videoMon_my);
struct video_monitor_s uorb_vm;
memset (&uorb_vm, 0, sizeof(uorb_vm));
uorb_vm.timestamp = hrt_absolute_time();
uorb_vm.lat = videoMon_my.lat;
uorb_vm.lon = videoMon_my.lon;
uorb_vm.confidence = videoMon_my.confidence;
uorb_vm.no_people = videoMon_my.no_people;
for (int i=0; i<11; i++)
uorb_vm.info[i] = videoMon_my.info[i];
_videoMon_pub.publish(uorb_vm);
}
and finally make sure it is called in MavlinkReceiver::handle_message():
void MavlinkReceiver::handle_message(mavlink_message_t *msg)
{
switch (msg->msgid) {
...
case MAVLINK_MSG_ID_VIDEO_MONITOR:
handle_message_video_monitor (msg);
break;
...
}
11. In the HoverGames drone, MAVLink is associated with several different ports and works in specific stream mode scenarios (Normal, Onboard, OSD, etc.) that define which streamed messages can be sent or received. For all accepted stream mode scenarios, see the following image.
In order for the MAVLink VIDEO_MONITOR topic to be streamed, add your topic to the lists of streamed topics in the mavlink_main.cpp file (placed in src/modules/mavlink). In my case, I added:
configure_stream_local("VIDEO_MONITOR", 10.0f);
to the MAVLINK_MODE_NORMAL and MAVLINK_MODE_ONBOARD.
12. In the header mavlink_bridge_header.h (placed in src/modules/mavlink), replace the following line:
#include <v2.0/standard/mavlink.h>
with the header of your custom message:
#include <v2.0/video_monitor/mavlink.h>
Without this header replacement, the custom MAVLink messages (like VIDEO_MONITOR) will not arrive as uORB messages!
A Python program able to generate the MAVLink message
The following will implement a Python program used to generate the MAVLink message. This program will run on the NavQ companion computer. But, for this program to work, several support components are required.
13. Install MAVLink on the NavQ companion computer:
$ cd ~
$ sudo apt-get install python3-pip
$ pip3 install --user future
$ git clone https://github.com/mavlink/mavlink.git --recursive
14. Copy the same video_monitor.xml file from the development station (a laptop running Ubuntu in my case) generated at the third step (see above) into the following folder (on the NavQ companion computer): ~/mavlink/message_definitions/v1.0/video_monitor.xml.
15. Generate the Python MAVLink library on NavQ companion computer:
$ cd ~/mavlink
$ python3 -m pymavlink.tools.mavgen --lang=Python --wire-protocol=2.0 --output=generated/my_MAVLinkLib message_definitions/v1.0/video_monitor.xml
As a direct result, a my_MAVLinkLib.py library will be generated in the ~/mavlink/generated folder.
16. Install Pymavlink. Pymavlink is a low-level and general-purpose MAVLink message processing library, written in Python. This library is able to work with the MAVLink 1 and MAVLink 2 versions of the protocol, and it is used with Python 2.7+ or Python 3.5+.
$ pip3 install pymavlink
Collecting pymavlink
Downloading pymavlink-2.4.14.tar.gz (4.1 MB)
|████████████████████████████████| 4.1 MB 1.3 MB/s
Requirement already satisfied: future in /usr/lib/python3/dist-packages (from pymavlink) (0.18.2)
Collecting lxml
Downloading lxml-4.6.2-cp38-cp38-manylinux2014_aarch64.whl (7.3 MB)
|████████████████████████████████| 7.3 MB 145 kB/s
Building wheels for collected packages: pymavlink
Building wheel for pymavlink (setup.py) ... done
Created wheel for pymavlink: filename=pymavlink-2.4.14-cp38-cp38-linux_aarch64.whl size=4199475 sha256=da42b2cd2e55fb66fc460a54de9b6f79b03ca5e024d6c7b3844849241f91df29
Stored in directory: /home/navq/.cache/pip/wheels/2d/27/ef/9c613a4ce79c9762bb5ff981e4ac49a725a632da1879149f86
Successfully built pymavlink
Installing collected packages: lxml, pymavlink
Successfully installed lxml-4.6.2 pymavlink-2.4.14
17. Now, include the my_MAVLinkLib.py library, generated for your video_monitor dialect, in pymavlink. So, copy the generated my_MAVLinkLib.py dialect file into the appropriate directory of your clone of the mavlink repo (the mavlink folder from the home directory):
- MAVLink 2: pymavlink/dialects/v20 <== copy here
- MAVLink 1: pymavlink/dialects/v10
18. Open a command prompt and navigate to the pymavlink directory (~/mavlink/pymavlink).
19. If needed, uninstall previous versions (for example, if you had an error in the video_monitor.xml message and you repeated step 14 once again):
navq@imx8mmnavq:~/mavlink/pymavlink$ pip3 uninstall pymavlink
20. Install the dependencies (if you have not previously installed pymavlink using pip3; this was done at the 13th step):
navq@imx8mmnavq:~/mavlink/pymavlink$ pip3 install lxml future
21. Run the Python setup program:
navq@imx8mmnavq:~/mavlink/pymavlink$ python3 setup.py install --user
The pymavlink package includes the dialect-specific generated module (video_monitor), which provides low-level functionality to encode and decode messages, and apply and check signatures.
22. Now try a minimal pymavlink test:
$ python3
>>> import pymavlink
Use pymavlink.__doc__ to show some information about the package:
>>> pymavlink.__doc__
Python MAVLink library - see http://www.qgroundcontrol.org/mavlink/start
>>>
To finish python3, press CTRL+D to exit or use exit().
23. Configure the serial port. To print all serial characteristics:
$ stty -F /dev/ttymxc2 -a
Configure the serial port on 921600 baud rate:
$ stty -F /dev/ttymxc2 921600
24. In the following, a basic program was developed (on NavQ) to connect to the FMUK66 flight management unit and take some data (getBasicData.py):
from pymavlink import mavutil
import time

# create a connection to FMU
hoverGames = mavutil.mavlink_connection("/dev/ttymxc2", baud=921600)

# Once connected, use 'hoverGames' to get and send messages
# wait for the heartbeat message to find the system id
hoverGames.wait_heartbeat()
print("Received heartbeat message from FMUK66...")

# Get some basic information!
while True:
    try:
        print(hoverGames.recv_match().to_dict())
    except:
        pass
    time.sleep(1.0)
For some basic knowledge, see: https://mavlink.io/en/mavgen_python/. You will also find many useful examples here: https://www.ardusub.com/developers/pymavlink.html.
25. In case you get the following error:
ModuleNotFoundError: No module named 'serial'
install the serial module:
$ pip3 install pyserial
26. In the end, let's send a custom MAVLink message (video_monitor) from the NavQ computer to the FMU (FMUK66). Please implement the following Python code (in the sendCustomMavlinkMSG.py file):
from pymavlink import mavutil
import time
mavutil.set_dialect("video_monitor")
# create a connection to FMU
hoverGames = mavutil.mavlink_connection("/dev/ttymxc2", baud=921600)
# wait for the heartbeat message to find the system id
hoverGames.wait_heartbeat()
print("Heartbeat from system (system %u component %u)" %(hoverGames.target_system, hoverGames.target_component))
counter = 0

# send the custom mavlink message
while True:
    hoverGames.mav.video_monitor_send(
        timestamp = int(time.time() * 1e6),   # time in microseconds
        info = b'Salut!',
        lat = counter,
        lon = 231234567,
        no_people = 6,
        confidence = 0.357)
    counter += 1
    print("The custom message number %u was sent!!!" % (counter))
    time.sleep(1.0)
27. To run the Python program, use:
$ python3 sendCustomMavlinkMSG.py
28. To check if a MAVLink video_monitor dialect was translated into a uORB message, you will need to develop an application that subscribes to the VIDEO_MONITOR topic (uORB) and prints the subscribed results. The new program, named test_commCompCom.cpp, was developed and placed into the /src/examples/test_commCompCom folder in the PX4 project. The code is the following one:
#include <px4_platform_common/px4_config.h>
#include <px4_platform_common/posix.h>
#include <unistd.h>
#include <stdio.h>
#include <poll.h>
#include <string.h>
#include <uORB/uORB.h>
#include <uORB/topics/video_monitor.h>
extern "C"__EXPORT intuorb_mavlink_main(int argc, char *argv[]);
int uorb_mavlink_main(int argc, char *argv[])
{
int poll_ret;
int getOut = 1;
//char c;
PX4_INFO("Hello, I am only a test program able to receive VIDEO_MONITOR messages.");
// Subscribe to "video_monitor", then set a polling interval of 200ms
int video_sub_fd = orb_subscribe(ORB_ID(video_monitor));
orb_set_interval(video_sub_fd, 200);
// Configure a POSIX POLLIN system to sleep the current
// thread until data appears on the topic
px4_pollfd_struct_t fds_video;
fds_video.fd = video_sub_fd;
fds_video.events = POLLIN;
while (getOut)
{
poll_ret = px4_poll (&fds_video, 1, 2000);
if ( poll_ret == 0 )
{
PX4_ERR ("Got no data within a second !");
}
// If it didn't return 0, we got data!
else
// Double check that the data we received is in
// the right format
if(fds_video.revents & POLLIN)
{
// declare a video_monitor_s variable to store the data we will receive
struct video_monitor_s videoMon;
// Copy the obtained data into the struct
orb_copy(ORB_ID(video_monitor), video_sub_fd, &videoMon);
printf ("lat= %d|long= %d|no. people= %d|confidence= %1.3f|%s \n",
videoMon.lat, videoMon.lon, videoMon.no_people,
(double)videoMon.confidence, videoMon.info);
}
}
return 0;
}
In the same folder, place the following CMakeLists.txt file:
px4_add_module(
MODULE examples__test_commCompCom
MAIN uorb_mavlink
SRCS
test_commCompCom.cpp
DEPENDS
)
To enable the compilation of the previous application into the PX4 firmware, create a new line for your application in the boards/nxp/fmuk66-v3/default.cmake file:
px4_add_board(
PLATFORM nuttx
...
examples
test_commCompCom
...
)
29. Now, run the Python program (placed on the NavQ onboard computer):
$ python3 sendCustomMavlinkMSG.py
and run the last developed program (placed on the FMUK66 FMU) from a QGroundControl MAVLink console:
nsh> uorb_mavlink
All the messages sent from the NavQ onboard computer will be received and presented on the MAVLink console connected to the FMUK66 FMU system.
The basic programs for streaming (client and server, https://github.com/dmdobrea/HoverGames_Challenge2/tree/main/03_ZMQ_base => server.py and client.py) can be downloaded and used without any problem. These are fully functional programs. You will need to run them on: (a) a NavQ system, together with (b) a laptop or a PC. Your computer will be the server (the server application will run there), and the NavQ will be the client.
The server application (server.py) is the simplest one. First, the required packages are imported in lines 4 and 5. In line 8 (imageHub = imagezmq.ImageHub()), the imageHub is initialized in order to be able to manage the client connections. After each successful reception of the client name (NavQ_name) and frame (frame), in line 14, the server application sends an acknowledgment message to the client (imageHub.send_reply(b'OK')). In line 19, the client's name is inserted in the frame (using the cv2.putText function), and in line 22, the frame is displayed (through the cv2.imshow function).
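For readers who do not want to open the repository right away, the server logic described above can be sketched as follows (a simplified approximation of server.py, so the line numbers mentioned above will not match):
import cv2
import imagezmq

# Bind on the default port (5555) and wait for the clients
imageHub = imagezmq.ImageHub()

while True:
    # Receive the client name and the frame, then acknowledge the reception
    NavQ_name, frame = imageHub.recv_image()
    imageHub.send_reply(b'OK')

    # Overlay the client's name on the frame and display it
    cv2.putText(frame, NavQ_name, (10, 25), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow(NavQ_name, frame)
    cv2.waitKey(1)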
The client application is a little more complex, and it is also used to measure the client's performance - the time required to acquire an image and send it to the server. The performance measurement was done with the time module, mainly using time.time(), which returns the time in seconds since January 1, 1970, 00:00:00 (UTC). The client requires one command-line argument: the server's IP address. Through the code on line 43, the application connects to port 5555 on the server. The video stream is initialized on lines 49 and 50, with a 640 x 480 or 1280 x 720 resolution. I chose these two resolutions for performance testing because the first is the lowest resolution offered by the Google Coral camera, and the second is the average resolution of the camera. To get the camera features, like resolution, you can use:
$ v4l2-ctl --list-formats-ext
In line 60, an image is grabbed from the Google Coral camera. In the following line, the cv2.flip function flips the image around the horizontal axis. This operation is required mainly because the camera is mounted in an inverted position. The code on line 66 sends the image to the server (sender.send_image(navq_Name, frame)) together with the client's hostname.
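Similarly, the core of the client application can be approximated by the sketch below (the timing code, the argument parsing and the exact camera initialization from client.py are omitted; the camera device index is an assumption):
import socket
import cv2
import imagezmq

server_ip = "192.168.100.104"                   # the server's IP (normally a command-line argument)
sender = imagezmq.ImageSender(connect_to="tcp://{}:5555".format(server_ip))

navq_Name = socket.gethostname()                # the client's hostname, sent with every frame
cap = cv2.VideoCapture(0)                       # Google Coral camera (device index assumed)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    ret, frame = cap.read()
    if not ret:
        continue
    frame = cv2.flip(frame, 0)                  # the camera is mounted in an inverted position
    sender.send_image(navq_Name, frame)         # blocks until the server replies with 'OK'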
To check the ZMQ streaming, first, start the server (on your laptop):
$ python3 server.py
Once the server is running, you can start the client in the second step (assuming the server has the IP address 192.168.100.104):
$ python3 client.py --server-ip 192.168.100.104
or
$ python client.py -s 192.168.100.104
It is possible to get the following error:
File "client.py", line 11, in <module>
import cv2
ImportError: /lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block
The issue is in glibc. To solve this problem on aarch64 devices, add the library path to your .bashrc file:
export LD_PRELOAD=/lib/aarch64-linux-gnu/libgomp.so.1:/$LD_PRELOAD
Sending a 1280 x 720 image takes 0.711 seconds in the best case, 6.189 seconds in the worst case (it rarely happens, but it does happen), and most often around 1.113 seconds. So, on average, the client program takes 1.1 seconds to send an image, see Table 2.
If the resolution is decreased to 640 x 480, the situation improves but not enough. In this situation, the minimum streaming time for an image becomes 0.227 seconds, the maximum, around 5.54 seconds (a rare case). But most of the time, an image is sent somewhere approximately in 0.4 seconds, see Table 2.
For a detection system that works in real-time, this approach is not satisfactory in terms of performance. So, a new solution had to be found.
The solution was to first compress the acquired images to JPEG form:
ret_code, jpg_buffer = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), jpeg_quality])
and send them via imageZMQ to the ground station's server application. The streaming was done through a WiFi link. The server receives the JPEG images, decompresses them to OpenCV images:
frame = cv2.imdecode(np.frombuffer(jpg_buffer, dtype='uint8'), -1)
and, in the end, displays them.
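Put in context, the two code lines above fit into the streaming pipeline roughly as follows (a sketch only; it uses imagezmq's send_jpg()/recv_jpg() pair and an assumed jpeg_quality value, not the exact code of client_jpg.py/server_jpg.py):
import cv2
import numpy as np
import imagezmq

jpeg_quality = 65    # JPEG compression factor (0-100, a higher value means better quality)

def stream_jpg_client(server_ip="192.168.100.104"):
    # NavQ side: grab frames, compress them to JPEG and send them to the server
    sender = imagezmq.ImageSender(connect_to="tcp://{}:5555".format(server_ip))
    cap = cv2.VideoCapture(0)                  # Google Coral camera (device index assumed)
    while True:
        ret, frame = cap.read()
        if not ret:
            continue
        ret_code, jpg_buffer = cv2.imencode(".jpg", frame,
                                            [int(cv2.IMWRITE_JPEG_QUALITY), jpeg_quality])
        sender.send_jpg("NavQ", jpg_buffer)

def stream_jpg_server():
    # Ground-station side: receive the JPEG buffers, decode and display them
    imageHub = imagezmq.ImageHub()
    while True:
        name, jpg_buffer = imageHub.recv_jpg()
        imageHub.send_reply(b'OK')
        frame = cv2.imdecode(np.frombuffer(jpg_buffer, dtype='uint8'), -1)
        cv2.imshow(name, frame)
        cv2.waitKey(1)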
The programs for streaming images in compressed (JPEG) form, client and server, can be downloaded from here: https://github.com/dmdobrea/HoverGames_Challenge2/tree/main/03_ZMQ_base => server_jpg.py and client_jpg.py. The client application also offers the possibility to change the JPEG compression factor through the jpeg_quality variable.
Table 2 presents the performances obtained for three compression factors: 35, 65, and 95. The compression factor can vary between 0 and 100; a higher value means better image quality. As a direct result, I decided to stream images at 640 x 480 resolution using a compression factor of 35 or 65. These parameters give around 14 FPS streaming performance, see Table 2.
In the next movie, starting with the 3:04 time index, the functionalities of the ZMQ image streaming applications (client and server) are presented in the real world:
The main problem is the low quality of the acquired images sent through the ZeroMQ streaming protocol. Unfortunately, when the HoverGames drone flies, the quality of the images decreases a lot. When the movie was made, the wind was quite strong, but this does not justify this image quality. The propellers had been balanced, and the images transmitted by the VTX system had the usual quality, without any blur.
So, the problem is the Google Coral camera. I suspect that this problem is caused by the autofocus mechanism of the Google Coral camera. Recording a movie and analyzing each frame, I found that there are frames in which the image is of good quality and other frames in which the objects are blurred. I suspect that, due to the system's vibrations, the camera continuously tries to focus (sometimes it succeeds, other times it doesn't), but the drone immediately moves, and the camera has to focus once again. So I tried to disable the autofocus:
- The camera datasheet mentions that the sysfs file for autofocus is placed at /sys/module/ov5645_camera_mipi_v2/parameters/ov5645_af. But in my Ubuntu distribution, this is not implemented.
- The next option was from the Python code, but it didn't work either:
cap.set(cv2.CAP_PROP_AUTOFOCUS, 0)
- Querying the camera for its settable/adjustable parameters:
v4l2-ctl -l
I got nothing, so a command similar to the following one will not work:
$ v4l2-ctl -c focus_auto=0
In the end, I concluded that the camera's Linux driver does not support any configuration options.
8. Performance analysis: NavQ vs. Raspberry Pi 4
In the implementation of the human activity recognition system, I encountered many problems and errors. Some of these were my own mistakes. Other errors were due to missing packages or to incompatibilities between packages. For this reason, I developed the system in parallel on both the NavQ and a Raspberry Pi 4 (RPi) system. In this way, I could compare the results obtained and correct the errors that appeared more easily. In the following, everything is focused on the NavQ system, and this is the default one; when I present results related to the RPi system, I will mention it.
After working on both systems (NavQ and RPi), I noticed some strange things that I couldn't explain at that moment. For this reason, I decided to do a performance test, which compares the two systems. The comparison will be mainly from the point of the computational power required to sustain a human detection algorithm from a video stream.
Before moving to the practical analysis of the systems' performances, let's analyze the two processors' datasheets. First, we observe that both of them belong to the ARMv8-A architecture class. But the NavQ SoC has a Cortex-A53, and the Raspberry Pi 4 has a more powerful Cortex-A72. The Cortex-A53 processor is the one found inside the Raspberry Pi 3 development board. Even if, right now, the Cortex-A53 is the most widely used architecture for mobile, the Cortex-A72 is a more powerful processor from almost all points of view.
Table 3 shows that both processors have the same number of powerful cores and the same number of bits. We notice that the NavQ has a supplementary Cortex-M4 processor. But the most significant difference is in the processing power. The RPi processor is more than two times faster than the NavQ one, 4.72 DMIPS/MHz versus 2.24 DMIPS/MHz. The NavQ system has a small advantage in working at a slightly higher frequency, 1.8 GHz versus 1.5 GHz. But the L1 cache and the L2 cache are to the advantage of the Raspberry Pi 4. The pipeline of the Cortex-A72 has 15 stages, whereas the Cortex-A53 implements only an 8-stage pipeline. From this perspective, the Cortex-A72 will have virtually 15 instructions simultaneously in different stages, whereas the Cortex-A53 will have only 8. The Raspberry Pi 4 also has an out-of-order execution unit that is missing on the NavQ system. They have the same amount of memory - the RPi system on which I did all the tests had 2 GB of memory. Based on this information, the feeling is that the NavQ system has no chance in this competition with the RPi system.
The benchmark program is a Python application having only the essential functions for human detection. For this reason, if someone wants to understand the final program developed for the NavQ system, this program is a good starting point because it does not include any communication with the FMU or other functions. It can be downloaded from GitHub: https://github.com/dmdobrea/HoverGames_Challenge2/tree/main/04_NavQ_vs_RPi. The Python application was developed based on the OpenCV library and the OpenVINO toolkit. It can run the human recognition algorithm entirely on the ARM CPU or supported by the Neural Compute Stick 2. The detection is done based on the MobileNet Single-Shot multibox Detection (MobileNet-SSD) network, a deep neural network (DNN) implemented using the Caffe framework. The MobileNet-SSD network requires around 2.3 GFLOPS, and it has about 5.8 million parameters. Mainly because the NavQ is a headless device, we can save the resulting movie (an AVI file with the recognized subjects) and analyze the results afterwards. The main program (human detection algorithm) runs on one core, and the writing process runs on another core. In this way, the recognition algorithm's processing performance will not be influenced by the writing process.
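The core of that benchmark program can be summarized by the following sketch (the model file names, the confidence threshold and the camera index are assumptions; the real benchmark adds the multiprocessing-based AVI writer and the FPS bookkeeping):
import cv2

# Load the Caffe MobileNet-SSD model (file names assumed)
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

# Run the inference on the ARM CPU ...
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
# ... or, through the OpenVINO backend, on the Neural Compute Stick 2:
# net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
# net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # MobileNet-SSD expects a 300 x 300, mean-subtracted and scaled blob
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    networkOutput = net.forward()
    # networkOutput[0, 0, i] = [_, class_id, confidence, x1, y1, x2, y2] (normalized box)
    for i in range(networkOutput.shape[2]):
        class_id = int(networkOutput[0, 0, i, 1])
        confidence = networkOutput[0, 0, i, 2]
        if class_id == 15 and confidence > 0.5:    # class 15 = "person" for this model
            print("Human detected, confidence = %.2f" % confidence)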
I will evaluate both systems' detection speed using the number of frames per second (FPS) achieved by the detection algorithms. To satisfy a personal curiosity, I tested the Raspberry Pi system performance with the graphical user interface started, and after that, I made all the tests from the command-line interface (CLI). In this way, I also evaluated how much of the system's power is stolen by the graphical interface.
The tests were done in the following ways:
- The RPi system performance with the graphical user interface started;
- The RPi system in the command-line interface (CLI) - a headless device;
- The NavQ system in the CLI.
In each of the previous cases, the DNN was first executed only on the system CPU. The second test was done running the DNN with the support of the Neural Compute Stick 2. The results are presented in Table 4.
In the case of NavQ, when running the recognition process on the NCS2, I got the following error, and I could not solve this issue by the end of this contest.
Traceback (most recent call last):
File "hr_benchmark.py", line 155, in <module>
networkOutput = net.forward()
cv2.error: OpenCV(4.5.1-dev) /home/navq/opencv_build/opencv/modules/dnn/src/dnn.cpp:1379: error: (-215:Assertion failed) preferableBackend != DNN_BACKEND_OPENCV || preferableTarget == DNN_TARGET_CPU || preferableTarget == DNN_TARGET_OPENCL || preferableTarget == DNN_TARGET_OPENCL_FP16 in function 'setUpNet'
But, even in this situation, we have plenty of data to make a performance comparison between the two systems and to be able to draw some surprising conclusions.
The first observation is that the human detection performance differences between a CLI (a headless device) system and a desktop system with GUI are minimal, see Table 4 (the first two lines).
But the most surprising conclusion is that we have two different systems, one of them, the RPi, more powerful than the NavQ system, and yet the human detection performances are very similar. The ARM company even divided the ARMv8-A family of processors into three classes: (1) high-performance cores, (2) high-efficiency cores, and (3) ultra-efficiency cores. The Cortex-A53 belongs to the high-efficiency core group (providing good performance per GHz). Based on this hierarchy, the Cortex-A72 sits at the top, belonging to the high-performance group (performance-oriented). But what we see is that a processor focused on consumption efficiency (the one on the NavQ) has similar performance to a processor focused on maximizing it (RPi) - 1.29 FPS for the RPi versus 1.2 FPS for the NavQ. These small differences are explained, in my opinion, by the fact that on the NavQ system the OpenCV and OpenVINO libraries work on 64 bits, while on the RPi system everything works on 32 bits.
The proof of all these analyses and the presentation of these results are given in the following film:
But another unexpected and delightful surprise arose. Using psutil, the human detection algorithm ran on the 2nd core, and the writing process ran on the 3rd core. The results are the ones presented in Table 4. Changing the Python program and removing all the code associated with the process affinity (the operating system handled the process affinity), I got the performances from Table 5.
As you can see from the above movie, the CPU load for the human recognition algorithm process is 270%, and you see that this process jumps all the time from one core to another. Very beautiful! As a direct result, the performance is two times greater than with the previous approach when the system runs without the NCS2 support. When the NCS2 was used, the performance increased on average by 1 FPS, see Table 5.
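For reference, pinning (or un-pinning) a process with psutil looks roughly like the sketch below (the core indices follow the description above; writer_pid is a hypothetical PID of the writing process):
import os
import psutil

# Pin the current (human detection) process to core 2
psutil.Process(os.getpid()).cpu_affinity([2])

# Pin the writing process to core 3 (writer_pid is hypothetical here)
# psutil.Process(writer_pid).cpu_affinity([3])

# To let the operating system schedule the process freely again,
# give it back all the available cores:
psutil.Process(os.getpid()).cpu_affinity(list(range(psutil.cpu_count())))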
9. Human activity detection system
Due to the issues related to the NCS2 on the NavQ system, the only solution is for the human detection algorithm to be executed only on the CPUs of the NavQ companion computer. But looking at the data from Table 5, I was able to get a little more than 2.5 FPS in this mode - which is not enough for a human detection system running on a drone, in my case, the HoverGames drone.
But how many FPS mean real-time for a human detection system running on a HoverGames drone? It all depends on how fast the scene changes in 1 second; you do not need to analyze the same image 25 times per second. In the autonomous mode, the HoverGames default mission speed is 5 m/s. In this situation, 2.6 FPS is enough. Previously, I was able to obtain around 1 FPS (on a Raspberry Pi 3 system running a HOG-SVM classification system) [1], 2.2 FPS (on an NVIDIA Jetson Nano running a ResNet-18 DNN) [2], and 1.36 FPS (on an NVIDIA Jetson Nano running a ResNet-34 DNN) [2].
Analyzing the HoverGames flight logs, I saw a maximum horizontal speed of more than 35 km/h. This value means almost 10 m/s. In such a case, 2.6 FPS is not enough.
So, how can we solve this problem? Obviously, starting from the premise that I have to use what I have (the most straightforward approach would be to use a more powerful companion computer, but this involves financial and power consumption costs), and without using an NCS2 (mainly due to software issues that I have not yet been able to solve).
There is a class of tracking algorithms able to run in real-time and track different objects in a very robust way. This class of algorithms requires prior information regarding an object's correct location (a human in our case), more precisely, a bounding box around the object (similar to the one presented in Figure 38(a) and (b)), and the current frame, in order to identify the object's new position. With these algorithms, you do not have to run a computationally expensive DNN (MobileNet-SSD) on each frame of the input video stream. So the MobileNet-SSD will run only from time to time, every X frames, and the tracking algorithms will run on the other frames.
In conclusion, I decided to use this approach in my project. The approach is similar to the one presented in [12]. The dlib library implements the correlation tracking algorithm (CTA) introduced, for the first time, in [13]. This algorithm has the advantage of being able to deal with large object scale variation in complex images. This situation perfectly describes a drone approaching the detected object or identifying the same item from different altitudes.
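The resulting detect-then-track loop can be sketched as below (a sketch only; skip_frames and the detect_people() stub, which stands for the MobileNet-SSD detection step sketched earlier, are illustrative assumptions and not the code of my application):
import cv2
import dlib

def detect_people(frame):
    # Hypothetical helper: in the real application this is the MobileNet-SSD
    # detection step; it returns a list of (startX, startY, endX, endY) boxes.
    return []

skip_frames = 15        # run the expensive DNN only once every skip_frames frames
trackers = []           # one dlib correlation tracker per detected person
frame_no = 0

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)      # dlib works on RGB images

    if frame_no % skip_frames == 0:
        # (Re)detect the people and restart the trackers
        trackers = []
        for (startX, startY, endX, endY) in detect_people(frame):
            tracker = dlib.correlation_tracker()
            tracker.start_track(rgb, dlib.rectangle(startX, startY, endX, endY))
            trackers.append(tracker)
    else:
        # Cheap frames: only update the correlation trackers
        for tracker in trackers:
            tracker.update(rgb)
            pos = tracker.get_position()
            (startX, startY) = (int(pos.left()), int(pos.top()))
            (endX, endY) = (int(pos.right()), int(pos.bottom()))
            cv2.rectangle(frame, (startX, startY), (endX, endY), (0, 255, 0), 2)

    frame_no += 1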
The file that implements the main program running on the NavQ system, hr_RealAppNavQ.py, can be downloaded from GitHub: https://github.com/dmdobrea/HoverGames_Challenge2/tree/main/05_RealApplication_NavQ. This software component was developed in Python and can:
- acquire images from (a) the Google Coral camera or (b) a movie MP4 file;
- get GPS coordinate;
- detect human activity in frames (based on the MobileNet-SSD network and a correlation tracker algorithm);
- stream the results to the ZeroMQ server, and
- send custom warning messages to the base station.
The hr_RealAppNavQ.py is a multiprocessing application using safe FIFO queues for interprocess communication. The application uses OpenVINO to set the target processor for inference.
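Schematically, the process/queue layout can be pictured like this (a simplified sketch with illustrative process, queue and helper names - get_next_frame() and run_detection() are hypothetical stubs, not functions from hr_RealAppNavQ.py):
from multiprocessing import Process, Queue

def get_next_frame():
    # Hypothetical helper: would grab a frame from the Google Coral camera or the MP4 file
    return "frame"

def run_detection(frame):
    # Hypothetical helper: would run the MobileNet-SSD + correlation tracking step
    return "no humans detected"

def grab_frames(frame_q):
    # Acquisition process: push the frames into a FIFO queue
    while True:
        frame_q.put(get_next_frame())

def detect_humans(frame_q, result_q):
    # Detection process: consume the frames and push the detection results
    while True:
        result_q.put(run_detection(frame_q.get()))

if __name__ == "__main__":
    frame_q, result_q = Queue(maxsize=10), Queue(maxsize=10)
    Process(target=grab_frames, args=(frame_q,), daemon=True).start()
    Process(target=detect_humans, args=(frame_q, result_q), daemon=True).start()
    while True:
        print(result_q.get())    # e.g., stream the result / send the MAVLink warning here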
All the above functions of the main program (hr_RealAppNavQ.py) are activated through the command line's arguments. For a short description of them, see Table 6.
In the local working mode, the detection system's output can be saved to a directory as an AVI file or presented to the user if the system has a GUI. The human activity is detected in each frame of the input video stream, and bounding boxes with labels (showing the confidence level of the detection) are applied; see Figure 38(a) and (b). The input can be grabbed from the Google Coral camera or from an MP4 file given to the application as an argument. In the following analysis, two input movies were used: VideoTest1.mp4 (one human subject) and VideoTest2.mp4 (three human subjects). Both films have the same resolution, 640 x 480 pixels, to sustain a streaming process (if required) with high throughput, see Table 2. 640 x 480 is also a suitable resolution for the DNN input. The acquisition of images from the Coral camera was also made at this resolution.
The human detection performance analysis was done using several different initial parameters. In this analysis, different values were used for the compression factor of the resulted image (35 and 65) and the number of frames skipped by the detection algorithm (5, 10, and 15 frames), Table 7. These parameters are controlled by the jpeg_quality and skip_frames variables from the hr_RealAppNavQ.py application. The results are presented in Table 7. The FPS number is an average FPS computed based on the algorithm's three inference instances with the same parameters (same movie, the same number of skipped frames, and the same JPEG compression value).
The frames per second (FPS) rate has been calculated using the time required: (a) to get the input image, (b) to preprocess the frame for the DNN (scaling, mean subtraction, etc.) or (c) for the dlib library (converting to the RGB color space), (d) the DNN inference time itself or (e) the processing time of the CTA, and (f) the time required to stream the resulting image.
On the second line of Table 7, as a reference point, are the results obtained with MobileNet-SSD on the first and the second movie. Here the human detection system was run on each input frame.
By increasing the number of skipped frames, as in Table 7, the human recognition algorithm becomes faster: more than five times faster for the movie with one human subject and two and a half times faster for the video stream with three human subjects. Part of these analyses is presented in the following film (skip_frames = 15):
In Table 7 there is only one strange thing. With a compression factor of 65 (so, a larger image than at a compression factor of 35), the performances increased for 5 and 10 skipped frames - compared with the same cases when a compression factor of 35 was used - and decreased for 15 skipped frames. I guess it's all about the ratio of compression time to transmission time.
So far, we have analyzed the performances obtained by the human detection system on two movies. The question that arises: In real-time, on a data stream taken from a camera, how well does the system do? You can find the answer here (the film is a bit out-of-date, at that moment, the software was not optimized for the maximum performance it can achieve, but this film perfectly highlights the concepts presented so far and the obtained results):
Using this proposed approach (MobileNet-SSD and the correlation tracker algorithm (CTA)), the detection performance speed depends on the balance between different parameters.
From Table 7, it can be observed that the recognition time is directly related to the number of detected subjects. For each detected subject, a different correlation tracker algorithm runs. Indeed, above a certain number of subjects, the gain obtained using the CTA will disappear, and running the DNN on each frame will have superior performance. This factor can be easily controlled: given that the drone only needs to detect people's activity and send a notification, only the first 2-3 subjects (with the highest detection confidence) can be tracked.
To obtain a higher number of FPS, the DNN must skip as many frames as possible. But there is no perfect object correlation tracker algorithm (CTA), and there will be times when the CTA loses the human. The higher the number of skipped frames, the more likely the CTA will lose the correlation, and more frames will contain false detections. To correct this situation, it is recommended to run the DNN (the more computationally expensive algorithm) from time to time.
In conclusion, using the MobileNet-SSD and CTA combination, the FPS number was significantly improved for the human detection algorithm.
10. List of the files
To make it easier to find a specific file, this section lists all the files that were: (a) created and added to the NavQ or FMUK66 FMU system by me, and (b) modified in order to achieve a specific objective. To make it easier to find the code added to a file, that code is marked with the following specific string: "//===…====> DDM".
On the FMUK66 FMU system:
· msg/video_monitor.msg - created
· msg/CMakeLists.txt - modified
· The application used to publish the video_monitor topic:
- src/examples/inject_customUORBmsg/inject_VideoMsg.cpp - created
- src/examples/inject_customUORBmsg/CMakeLists.txt - created
- boards/nxp/fmuk66-v3/default.cmake - modified
· mavlink/message_definitions/v1.0/video_monitor.xml - created
· /src/px4-firmware/mavlink/include/mavlink/v2.0 - all the files were automatically generated by mavgen
· The files used to take a custom uORB message and send it as a MAVLink message:
- src/modules/mavlink/streams/VIDEO_MONITOR.hpp - created
- src/modules/mavlink/mavlink_messages.cpp - modified
- /ROMFS/px4fmu_common/init.d/rcS - modified
· The files used to implement a bridge from MAVLink to uORB - send a MAVLink message and publish it as a uORB message:
- src/modules/mavlink/mavlink_receiver.cpp - modified
- src/modules/mavlink/mavlink_receiver.h - modified
- src/modules/mavlink/mavlink_main.cpp - modified
- src/modules/mavlink/mavlink_bridge_header.h - modified
· An application that subscribes to the VIDEO_MONITOR topic (to check if a MAVLink video_monitor dialect was translated into a uORB message):
- src/examples/test_commCompCom/test_commCompCom.cpp - created
- src/examples/test_commCompCom/CMakeLists.txt - created
- boards/nxp/fmuk66-v3/default.cmake - modified
On the NavQ board:
- receiveCustomMavlinkMSG.py, a Python program able to receive a uORB message through a MAVLink message - created;
- video_monitor.xml (in my case: ~/mavlink/message_definitions/v1.0/video_monitor.xml) - created;
- getBasicData.py, a Python program that connects to the FMUK66 flight management unit and takes some data - created;
- sendCustomMavlinkMSG.py, a Python program that sends a custom MAVLink message (video_monitor) from the NavQ computer to the FMU (FMUK66) - created;
- hr_benchmark.py, a Python program used to test the computational power of the NavQ system versus the Raspberry Pi system - created;
- hr_RealAppNavQ.py, the main Python program used to acquire images (from (a) the Google Coral camera or (b) an MP4 movie file), get GPS coordinates, detect human activity in frames (based on the MobileNet-SSD network and a correlation tracker algorithm), stream the results to the ZeroMQ server, and send custom warning messages to the base station - created.
- Right now, the development of the HoverGames hardware part is finished. The VTX system is working, and the companion computer is also working very well. Here I have only one problem: both previous systems require a serial connection, and the FMUK66 FMU has only one free external port. To solve this problem, I intend to integrate the RDDRONE-UCANS32K146 node in the future. The RDDRONE-UCANS32K146 node is designed to bridge a HoverGames CAN port to other ports like I2C, SPI, UART, GPIO, etc. In this way, I will have more free UART ports.
- Another problem is the low quality of the images obtained from the Google Coral camera; these images are blurred. So, I will either change the camera or put it on a gimbal to improve the pictures' clarity.
- The identification of more efficient detection algorithms, with higher classification rates, but which keep the ability to recognize the subjects in real-time, is also necessary.
- Finding the optimum ratio between engine power, propeller type, and battery capacity in order to increase the flight time.
Checking "My activity" on the huckster.io discussion board, I have 12 posts and 23 comments on other posts. In all of these interventions, I offered technical solutions and answers to my colleagues' various questions regarding the competition.
Two tutorials on the communication between the NavQ and the FMUK66 were published on hackster.io. Their web addresses are:
- "Communication through custom uORB and MAVLink messages" (https://www.hackster.io/mdobrea/communication-through-custom-uorb-and-mavlink-messages-269ebf);
- "C++ and Python interface&management application for FMUK66" (https://www.hackster.io/mdobrea/c-and-python-interface-management-application-for-fmuk66-6dd935).
In the official documentation for the NXP HoverGames drone development kit on GitBook, the "C++ and Python interface&management application for FMUK66" is included as a tutorial for all those eager to learn to develop applications for the HoverGames quadcopter. The link to this tutorial on GitBook is https://nxp.gitbook.io/hovergames/developerguide/c++-and-python-interface-and-management-application-for-Mmuk66.
In the frame of the NXP HoverGames Challenge 2 competition, I posted on my personal YouTube channel six movies with technical solutions to different practical problems and improvements of the HoverGames quadcopter. I also have another film, created last year, with a technical solution for FS-i6S gimbal swapping. This movie (https://youtu.be/7OANkv1rqaU) has not been submitted to any contest until now. Five other films advertise the obtained results and present the various beautiful HoverGames features. These movies are grouped in a special YouTube playlist with the following link (the first eleven movies): https://www.youtube.com/watch?v=r7od-P3qX8Q&list=PLrDmzHP7HWk1T_eetVaGlWl38vw6p1CDa.
Another result, which will save tens of hours of work, is the archived copy of the SD card image used by me in this competition. The SD card image can be downloaded from here. This SD card image contains all the packages presented in Subchapter 5.5 of this project report.
All the programs developed in the framework of this competition can be downloaded from github.com/dmdobrea/HoverGames_Challenge2.
In the end, I can say that the initially proposed goal of developing a system capable of identifying human subjects has been achieved.
In conclusion, in this project, I presented several innovative solutions for building a better human detection system having the HoverGames quadcopter as a support platform. The solutions are both hardware and software and imply no or only limited additional costs, while the benefits are much more significant than the costs. In summary, they are:
- Improving the reliability of 3D printed landing gear connectors: the components' strength is significantly increased (at least two times);
- A new approach to build the body of the HoverGames drone: the repairing time is strongly diminished, and you get better wire management;
- Increasing the reliability of the telemetry unit - more range, no connection lost;
- Increasing the reliability of the battery pack measurement - by adding a secondary monitoring path, I can be sure that we will not discharge the battery more than allowed;
- Increasing the human recognition system's processing speed by 2 up to 5 times, without any external support (no additional hardware, like an NCS2 or a better companion computer) and without any supplementary costs.
[1] D.M. Dobrea, A Video Warning HoverGames Drone to Fight with the Fire, https://www.hackster.io/mdobrea/a-video-warning-hovergames-drone-to-fight-with-the-fire-33cfbd, 25 January, 2020
[2] D.M. Dobrea, M.C. Dobrea, An autonomous UAV system for video monitoring of the quarantine zones, Romanian Journal of Information Science and Technology, vol. 23, no. S, 2020, pp. S53-S66, ISSN 1453-8245
[3] Daniel Hadad, s500 s550 multirotor landing gear remix, https://www.thingiverse.com/thing:3761929
[4] Andrew Pi, s500 s550 multirotor landing gear parts, https://www.thingiverse.com/thing:2706763
[5] Stefan Hermann, Which LAYER HEIGHT gives you the STRONGEST 3D prints?,
[6] GitBook documentation website for the NXP HoverGames drone, https://nxp.gitbook.io/hovergames/userguide/assembly/escs-fmu-power-module-and-rc-receiver
[7] Iain Galloway, HoverGames NavQ case, https://www.thingiverse.com/thing:4555491
[8] PX4 Telemetry Radios/Modems, https://docs.px4.io/v1.9.0/en/telemetry/
[9] How to Install OpenCV on Ubuntu 18.04, https://linuxize.com/post/how-to-install-opencv-on-ubuntu-18-04/
[10] Adrian Rosebrock, How to install OpenCV 4 on Ubuntu, https://www.pyimagesearch.com/2018/08/15/how-to-install-opencv-4-on-ubuntu/
[11] NXP gitbook, Streaming Video to QGroundControl using NavQ over WiFi, https://nxp.gitbook.io/8mmnavq/navq-developer-guide/gstreamer/streaming-video-to-qgroundcontrol-using-navq-over-wifi
[12] A. Rosebrock, Object tracking with dlib, https://www.pyimagesearch.com/2018/10/22/object-tracking-with-dlib/, October 22, 2018
[13] M. Danelljan, G. Häger, F.S. Khan, M. Felsberg, Accurate Scale Estimation for Robust Visual Tracking, Proceedings of the British Machine Vision Conference, BMVA Press, September 2014.