The main goal of this project is to map farmland and crops, estimating crop height, position and density in order to infer growth and distribution statistics. This prototype autonomously collects, interprets and analyses valuable crop data and presents it to farmers in a useful way, allowing for a better understanding of their crops and surroundings.
Introduction
This project focuses on how combining drone technology with edge computing can save farmers time, money and energy. By using these technologies, we can push the agricultural industry into the future.
The Crop Mapping and Survey Drone allows farmers to automate frequent, expensive and time-consuming activities such as scouting, crop monitoring and analysis. Using computer vision, statistical analysis and data visualisation, this prototype flies over crops, scanning them and converting image data to pointclouds. These pointclouds can then be analysed to pick out crop rows and individual plants. After performing analysis on this data, inferences can be made about the current height of the crops, their distribution/coverage and their health.
Context, background and understanding
Why is a project like this necessary or important? The agricultural industry is going through a smart revolution. AI and machine learning, in the form of self-steering tractors, automated feeding and irrigation systems, smart gardens and sensor-driven environmental data science, are allowing farms to monitor and understand their crops and assets better than ever. This is making farming more efficient, cost-effective and environmentally friendly.
Due to Britain's exit from the EU, farmers are finding it more difficult to gather a workforce, resulting in farmers doing the work of multiple workers. Technological advancements in crop management and surveillance can be used to reduce this workload. Climate change and the global move toward more sustainable practices are also putting pressure on the agricultural industry to change its ways. As a result, there is a lot of investment in agri-tech: more efficient irrigation systems, environmentally friendly pesticides/feed and computer-aided farming machinery are all rapidly developing areas. The ability to monitor and analyse crop health and growth, as well as survey fields for issues, would be very beneficial to all parties involved.
Using drones for crop analysis makes a lot of sense. The ability to get very close to crops without disruption, cover land in seconds and provide unencumbered access to an area makes drones an excellent vessel for scientific instruments, not just in agriculture but in many fields.
With this context in mind, I tried to design my prototype to address these issues. This drone should be easy to use and automated, and should provide useful, meaningful outputs that can help agricultural workers in multiple areas.
In order to gain a fuller understanding of the target plant life, we must use multiple sensor types, collecting information from the physical environment via a range of devices. This is known as sensor fusion. Using camera vision for crop identification and analysis is a valid starting point; however, full inferences should be built from visual, instrumental and time-series data. For instance, the relationship between light level, temperature and humidity is directly linked to plant condition. It is the combination of different sensors that will make the analysis more accurate and credible. This method of analysis will be difficult to integrate because of its complexity, but it is achievable with attentive system design and the right platform.
An example of this combined sensing method is IBM Hypertaste. Although that system is geared toward determining chemical composition, the method of combining multiple sensors to produce a result is relevant to my application, as it illustrates multi-sensor inference very well. I will try to incorporate sensor fusion into this project as best I can. Under the time constraints a fully working system might not be possible, but a prototype demonstration is.
Overview
In order to map the terrain and flora below, the drone is equipped with a stereo camera system which produces depth images to be converted into pointcloud data for subsequent analysis.
Here is a working diagram for how the system is being developed.
The system consists of two Coral cameras mounted side by side to make a stereo camera. The NavQPlus controls them through its dual CSI-MIPI interfaces and processes the image data. Images are captured in sync to produce stereo image pairs, which are used to make depth maps. These depth maps can be converted to pointclouds. The accumulated pointclouds are reconstructed to form the 3D scene around the drone (in this case the field below as the drone scans over it). This larger pointcloud can then be studied to find meaningful information: clustering is used to find individual plants, and further analysis is then performed on these segmented pointclouds.
Before showing the development process, I will lay out the current stage of the project and where I want to take it next.
Current system
The stereo Coral camera system is able to produce somewhat accurate depth images but requires fine tuning quite often. This is due to some hardware issues I explain in the development section.
Depth images are calculated reasonably well using OpenCV's BM and SGBM algorithms and are then filtered to improve them further. Calibration, rectification and undistortion are also used to ensure the accuracy and clarity of the depth maps.
Pointclouds are generated from the depth maps using Open3D, with good clarity depending on the input data. I improve these pointclouds using several conditioning/downsampling methods to make the analysis more accurate and efficient.
A simulation version has been built using Blender to create realistic crop environments and a perfect stereo camera rig.
Mapping is carried out using ICP registration combined with pose estimation. So far this is not fully working due to knock-on effects from the components above and is still in development.
Analysis of pointcloud scenes uses Open3D and tools like NumPy. Pointclouds are clustered well using Open3D's implementation of the DBSCAN algorithm to compute the clusters and then segment them from the main pointcloud. These clusters are then analysed, looking at their height, shape and position relative to other clusters to determine crop distribution. This is also still in development, with promising results so far.
Future system
SLAM (Simultaneous Localisation and Mapping) will be implemented in the mapping process, with ROS 2 integrated into the Open3D mapping and the NavQPlus acting as a companion computer to the FMU. The current system runs waypoint missions designed for a specific field.
Sensor fusion could be added by incorporating the BME688 to detect the growth stage of the plant life below, perhaps mapping its readings on top of the pointcloud. Plants release different chemical signatures during growth stages, so training a classification model to pick this up from sensor readings would be very useful for cross-validating the pointcloud analysis.
Prediction will be performed using the analysis data to build up a history of a given environment, tracking the growth and health of crops and estimating their trajectory, as well as anticipating areas where crops may need different levels of fertiliser or pesticide, using domain knowledge of pest lifecycles and crop-specific growth requirements.
Display of the outputs will show the crop analysis live and visualise it in a meaningful way using either a companion app or a web app.
DEVELOPMENT LOG
Approximately 75% of this project was pure research and experimentation with hardware and code to find a solution. To save you some time, I will only cover the logs of components which ended up being useful or part of the prototype system, perhaps with the exception of one or two cool concepts.
Stereo camera rig using two Coral cameras and a 3D-printed frame, with a custom-built library for handling synced image capture and video capture.
Library - a Python-based library that uses GStreamer and OpenCV to manage image capture and saving. It uses threading to reduce the delay between left/right image pairs.
python3 /home/user/stereo_camera/v4/camera.py
Find the code in the github repo attached to this project.
Before I could begin building my stereo camera system I had to set up the NavQPlus. This was fairly straightforward with the help of the Hovergames gitbooks and the Discord community.
My first rig was very very crude!
However crude this system may have been, it allowed me to get an understanding of how these cameras behaved. It wasn't long until I had a basic stereo camera system coded and functional.
An issue I realised early on is that the cameras were clearly not capturing in the same way: the quality and settings were different. This would go on to become a serious problem in this project, but I had no idea yet.
I also realised that there was a delay in the stereo image capture. I needed the cameras to capture in sync in order to get a good disparity, especially as the rig would be mounted on a drone, where motion would otherwise be an issue.
Therefore I decided to use threading to make the captures near-simultaneous. It worked very well after troubleshooting some GStreamer issues.
I can live with a one-centisecond delay! Threading was working quite well to reduce the timing disparity between captures (I'm sure there is a better and much more effective way to do this, but it was enough for the time being).
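To give an idea of the approach, here is a minimal sketch of the threaded capture idea; the GStreamer pipeline string and device paths are assumptions, and the real code lives in the repo linked to this project.

import threading
import cv2

# Hypothetical GStreamer pipelines for the two Coral cameras (device paths are assumptions).
def gst_pipeline(device):
    return (f"v4l2src device={device} ! video/x-raw,width=1280,height=720 ! "
            "videoconvert ! appsink")

cap_left = cv2.VideoCapture(gst_pipeline("/dev/video0"), cv2.CAP_GSTREAMER)
cap_right = cv2.VideoCapture(gst_pipeline("/dev/video1"), cv2.CAP_GSTREAMER)

frames = {}

def grab(name, cap):
    # Each thread grabs a frame as soon as it starts, minimising the left/right delay.
    ok, frame = cap.read()
    if ok:
        frames[name] = frame

threads = [threading.Thread(target=grab, args=("left", cap_left)),
           threading.Thread(target=grab, args=("right", cap_right))]
for t in threads:
    t.start()
for t in threads:
    t.join()

if "left" in frames and "right" in frames:
    cv2.imwrite("left.png", frames["left"])
    cv2.imwrite("right.png", frames["right"])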
Depth maps
Using OpenCV's two popular stereo matching algorithms, Block Matching (BM) and Semi-Global Block Matching (SGBM), I started to experiment with generating depth images using my stereo camera setup and got some terrible results!
Depth map tuning is honestly an art form: it takes patience, a high level of understanding and a good stereo system. I had none of these things yet.
An important aspect of a stereo camera is the baseline. The baseline is the distance between your cameras along the x axis. This value determines the distance at which the cameras will be best at sensing depth. In order to find the right baseline you have to consider how far away your subject is. I was not sure how far away the drone would be and what distance would work best for this application, so I looked at commercial products and added a bit more, as I knew I would be a decent distance from the target crops. As a rule of thumb, your baseline should be about a 30th of the distance to the subject. For initial testing I used a baseline of 120mm, so my ideal subject would be at 3.6 metres. Of course you can still get good depth perception of subjects closer and further than this goldilocks zone, but there will be greater reprojection error the further away the objects are.
I did some more testing using this setup and got similarly poor results, so I needed to figure out what was underperforming: my code or my setup. (Hint: it was both!)
After doing lots and lots of research I came to understand the algorithms better and what the parameters do, along with good starting values and techniques.
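As a rough illustration of what I mean by starting values, a sketch of an SGBM setup might look like this; the numbers shown are generic starting points, not my final tuned parameters.

import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16 and blockSize should be odd.
block = 5
stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=96,
    blockSize=block,
    P1=8 * block ** 2,       # smoothness penalties; P2 must be larger than P1
    P2=32 * block ** 2,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
    disp12MaxDiff=1,
)

# SGBM returns fixed-point disparities scaled by 16.
disparity = stereo.compute(left, right).astype("float32") / 16.0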
I also needed to calibrate and rectify the images taken from the cameras to give the algorithm the best chance of finding the disparity in the images. There are lots of resources online to help you do these things, and after some trial and error I had stereo-calibrated and rectified/undistorted images. I go into more detail further down.
Some tips for calibration: Hold the chessboard as close as you can to the camera while keeping the whole board in frame. Vary the angles of the board and verify all the images are clear.
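For anyone following along, a condensed sketch of the calibration and rectification flow looks roughly like this; the chessboard dimensions, square size and file paths are placeholders.

import glob
import cv2
import numpy as np

PATTERN = (9, 6)    # inner corners of the chessboard (placeholder)
SQUARE = 0.024      # square size in metres (placeholder)

objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, left_pts, right_pts = [], [], []
for lf, rf in zip(sorted(glob.glob("calib/left_*.png")), sorted(glob.glob("calib/right_*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    ok_l, corners_l = cv2.findChessboardCorners(gl, PATTERN)
    ok_r, corners_r = cv2.findChessboardCorners(gr, PATTERN)
    if ok_l and ok_r:
        obj_pts.append(objp)
        left_pts.append(corners_l)
        right_pts.append(corners_r)

size = gl.shape[::-1]   # (width, height)
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
_, K1, d1, K2, d2, R, T, _, _ = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, size, flags=cv2.CALIB_FIX_INTRINSIC)
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)

# Remap tables used to rectify/undistort every stereo pair before matching.
map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)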
I also looked into trying different baselines, and 3D printed a range of baseline mounts; all of these are linked in this project for others to use. Ultimately I chose a baseline of 56mm because most of my testing was done from about 2 metres away.
During this time, I was also deep down the rabbit hole of trying to find the right drivers for the Coral cameras. At the moment I believe the issue of camera inequality is due to the cameras being set up in different modes and resolutions; for instance, one camera would crash if I tried to capture above 1080p. I did find some versions of the drivers, but they did not allow me to change the settings I needed. This issue greatly impacted the progress of my project, and there is a lot more to it, but ultimately the problem is still unresolved, so any explanation is unverified.
The stereo system should be straight along the x axis, but due to the inaccessible settings on the camera, specifically the autofocus and resolution, the right image suffers a defect and is not optically straight or focused correctly.
After lots of work trying to get the cameras to work as I needed them to, I settled on positioning the cameras as best I could and capturing images at each camera's individual best quality, then using rectification parameters to compensate for the underlying issue.
During this time I also explored using a monocular camera and predicting depth using MiDaS and Keras machine learning models. I got some good results in urban environments, but they both underperformed in natural settings and were not usable.
I also began working with the Open3D Python library to generate pointclouds from depth maps and manipulate those pointclouds. However, the depth map and a source image must first be converted into Open3D's standard RGBD data structure.
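Roughly, that conversion looks like this; the file paths and intrinsic values are placeholders (the real intrinsics come from the calibration step).

import open3d as o3d

color = o3d.io.read_image("mission/image/frame_000.png")
depth = o3d.io.read_image("mission/depth/frame_000.png")

# Pack the colour frame and its depth map into Open3D's RGBD structure.
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, depth_scale=1000.0, depth_trunc=5.0, convert_rgb_to_intensity=False)

# Placeholder pinhole intrinsics (width, height, fx, fy, cx, cy).
intrinsic = o3d.camera.PinholeCameraIntrinsic(1280, 720, 900.0, 900.0, 640.0, 360.0)

pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
pcd = pcd.voxel_down_sample(voxel_size=0.01)    # light conditioning before analysis
o3d.visualization.draw_geometries([pcd])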
As you can see, these results above are admirable but not accurate enough for this application; when trying to capture the depth of plant life, there is very little detail in most cases.
And so my efforts turned back toward the stereo camera system. I was determined to use both cameras, and I could not find any other suitable hardware, as it seemed every other camera module uses either a 15 or 22 pin FFC while the CSI-MIPI on the NavQPlus is 24 pin. I also could not get another Coral camera due to supply chain shortages, unfortunately.
I continued to develop my depth maps and pointclouds, looking into depth map filters such as WLS and pointcloud matching methods like ICP registration.
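For the WLS filter, a minimal sketch using OpenCV's ximgproc module (from the opencv-contrib package) looks roughly like this; the lambda and sigma values are common starting points, not my exact settings.

import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

left_matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=5)
right_matcher = cv2.ximgproc.createRightMatcher(left_matcher)

wls = cv2.ximgproc.createDisparityWLSFilter(matcher_left=left_matcher)
wls.setLambda(8000.0)    # smoothness of the filtered disparity
wls.setSigmaColor(1.5)   # sensitivity to edges in the guide image

disp_left = left_matcher.compute(left, right)
disp_right = right_matcher.compute(right, left)

# Filter the left disparity map, guided by the left image and the right disparity.
filtered = wls.filter(disp_left, left, disparity_map_right=disp_right)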
We can reconstruct scenes directly from pointclouds, or instead use the RGBD structures they are generated from. This allows us to look at the camera pose and estimate the transformation we need to apply to a pointcloud to align it with a target pointcloud.
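A minimal point-to-point ICP sketch with Open3D, aligning one scan onto the next (file names, voxel size and threshold are assumptions):

import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("scan_000.pcd")
target = o3d.io.read_point_cloud("scan_001.pcd")

# Downsample before registration to keep ICP fast and stable.
source_d = source.voxel_down_sample(0.02)
target_d = target.voxel_down_sample(0.02)

threshold = 0.05        # max correspondence distance in metres (assumed)
init = np.identity(4)   # initial guess; a pose estimate would go here

result = o3d.pipelines.registration.registration_icp(
    source_d, target_d, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print(result.fitness, result.inlier_rmse)
source.transform(result.transformation)    # align the source scan onto the target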
I also designed a nice case for the NavQPlus with a mount for the stereo camera on the front, with FFC routing and mounting holes.
This new housing for the computer and cameras was very useful as it made sure the test environment was constant. From this point I did not change the baseline, angle or setup anymore. CAD files are linked in this project.
I began delving into the pointcloud side of things a lot more deeply, with more work on manipulating the pointclouds and finding clusters using DBSCAN. I then segmented those pointcloud clusters, generating sub-clouds of (hopefully) the crops.
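The clustering step looks roughly like this; eps and min_points have to be tuned per dataset, so the values here are only examples.

import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scene.pcd")

# DBSCAN groups nearby points; eps is the neighbourhood radius in metres.
labels = np.array(pcd.cluster_dbscan(eps=0.05, min_points=30))

clusters = []
for label in range(labels.max() + 1):
    idx = np.where(labels == label)[0]
    clusters.append(pcd.select_by_index(idx))   # one sub-cloud per (hopefully) plant

noise = pcd.select_by_index(np.where(labels == -1)[0])   # DBSCAN marks noise as -1
print(f"found {len(clusters)} clusters and {len(noise.points)} noise points")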
I also remove any outlier points from the clouds using both the radius and statistical outlier removal tools in Open3D. Depth map data often has embedded noise because the source data is inaccurate, so this is a very important step.
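Both removal tools return the cleaned cloud along with the indices of the points that were kept; the thresholds below are illustrative, not my exact values.

import open3d as o3d

pcd = o3d.io.read_point_cloud("scene.pcd")

# Statistical removal: drop points far from the average distance to their neighbours.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Radius removal: drop points with too few neighbours inside a small sphere.
pcd, _ = pcd.remove_radius_outlier(nb_points=16, radius=0.05)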
Earlier on I mentioned how all image pairs are calibrated, rectified and undistorted. During this process we calculate things like the camera intrinsics, along with the extrinsics and some other values. We can use these parameters to better generate depth maps and pointclouds, as well as in the reconstruction of a 3D scene when finding the camera pose.
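For example, one route (not necessarily the one my pipeline takes) is to use the Q matrix returned by stereoRectify to reproject a disparity map straight into 3D points; the inputs below are placeholders.

import cv2
import numpy as np

disparity = np.load("disparity.npy")   # float32 disparity from the matching step (placeholder)
Q = np.load("Q.npy")                   # reprojection matrix from stereoRectify (placeholder)

points_3d = cv2.reprojectImageTo3D(disparity, Q)   # HxWx3 array of XYZ coordinates
mask = disparity > disparity.min()                 # drop invalid disparities
xyz = points_3d[mask].reshape(-1, 3)
print(xyz.shape)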
Here is a real-world example from one of my datasets. The warping of the images is a result of the calibration algorithm altering the image to keep it rectified.
Alongside these real-world tests, I was also developing a system using the 3D graphics software Blender. I modelled a simple mesh plane with realistic foliage to mimic a farmer's field. Next, I set up a stereo camera system by placing two cameras along the same axis. This creates a perfect stereo rig. What's more, using Blender's Python API I was able to extract the camera's parameters, including its intrinsic matrix. These intrinsics essentially convert points from the camera coordinate system to the pixel coordinate system.
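For reference, this is roughly how those values can be pulled out with bpy (run inside Blender; it assumes a horizontal sensor fit and square pixels):

import bpy

scene = bpy.context.scene
cam = scene.camera.data
render = scene.render

res_x = render.resolution_x * render.resolution_percentage / 100
res_y = render.resolution_y * render.resolution_percentage / 100

# Focal length in pixels, assuming the sensor width drives the horizontal field of view.
fx = cam.lens / cam.sensor_width * res_x
fy = fx                               # square pixels assumed
cx, cy = res_x / 2.0, res_y / 2.0     # principal point at the image centre (no shift)

K = [[fx, 0, cx],
     [0, fy, cy],
     [0,  0,  1]]
print(K)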
Using this setup I managed to get some very good depth maps. Below is a larger field I modelled, from which I rendered stereo image pairs to make depth images and pointclouds.
Notice how the lighting doesn't affect depth perception. This is the mark of a good stereo setup, as the images are pixel-aligned and equally rendered. Here is an overhead example where the camera is perpendicular to the ground. This example is good because it shows the actual disparity in plant height.
This gave me some hope that my current pipeline (stereo images to refined pointcloud) did work well; I just needed better data collection.
The stereo camera module code developed further to include proper mission folder trees and functions to sort and convert between them. Each mission has folders for stereo pairs, single images (taken from the stereo pairs) and depth maps (generated from the previous folder), so that we can easily convert a mission to the RGBD data structure using the image and depth folders.
I made another dataset with the newly refactored stereo camera code, this time taking a sequence of images in my garden. I chose a small strip of vegetation in one of the borders and slowly scanned along the whole border.
The results were better than before, with recognisable patches of vegetation visible in the depth images and pointclouds.
Clearly the noise has caused some error in the pointclouds but for the most part the features we can see in the source image are there.
After conditioning the pointcloud, running it through outlier highlighting and then cluster analysis, we are able to see some good features. The clusters are visible, painted in different colours, with the outliers highlighted in light pink and the ground in a brighter pink.
This processed dataset, although promising, still lacked a lot of clarity and accuracy. After further tuning of the clustering algorithm, I was able to pick out a very colourful selection of features. I also tested this tuned algorithm on my Blender setup and I think it worked really well.
Now that I have a version of clustering segmentation working, I wanted to start performing some analysis.
To start with, we can use Open3D tools to find the volume of these clusters, i.e. how big each plant is.
This can be done using either a minimal oriented bounding box or an axis-aligned one. Aligning the box to the ground plane allows us to measure the height of the plant and see its 'footprint', while the minimal oriented box allows us to find the overall volume, for instance if a crop spreads out further than its height.
Output for the axis-aligned and oriented (rotated) volumes:
Volume =
0.005228545635448791
0.0037324525846827534
Once we apply this to the entire cluster sequence we can compute averages. This map has an average axial volume of 0.006314, an average rotated volume of 0.008735 and an average relative position of 0.561294.
We can also measure the height and width of the plants by looking at the points of the bounding box and calculating the distances between them.
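A sketch of these measurements for a single segmented cluster, assuming the ground plane is roughly aligned with the z axis (the file name is a placeholder):

import open3d as o3d

cluster = o3d.io.read_point_cloud("cluster_000.pcd")

# Axis-aligned box: the z extent approximates plant height, x/y give the footprint.
aabb = cluster.get_axis_aligned_bounding_box()
dx, dy, dz = aabb.get_extent()
axial_volume = aabb.volume()

# Oriented box: a tighter fit, better for plants that spread out further than their height.
obb = cluster.get_oriented_bounding_box()
oriented_volume = obb.volume()

print("height:", dz, "footprint:", dx * dy)
print("axial volume:", axial_volume, "oriented volume:", oriented_volume)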
We must also look at finding the distance between two clusters; with this we can understand how the plants are spread across the field. This is done using point-to-point registration. I will use domain knowledge to make predictions as to the growth stage of the plants.
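As a simpler stand-in for full registration, cluster spacing can also be gauged from centroid-to-centroid distances; a sketch with placeholder file names:

import numpy as np
import open3d as o3d

clusters = [o3d.io.read_point_cloud(f"cluster_{i:03d}.pcd") for i in range(3)]
centroids = [np.asarray(c.points).mean(axis=0) for c in clusters]

# Pairwise spacing between plants, useful for estimating row spacing and coverage.
for i in range(len(centroids)):
    for j in range(i + 1, len(centroids)):
        d = np.linalg.norm(centroids[i] - centroids[j])
        print(f"cluster {i} -> cluster {j}: {d:.3f} m")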
I am still developing the analysis part of this project, and am working to draw good conclusions from my measurements and make informed predictions.
In order to move the project toward good data visualisation, I decided to mock up what I want my depth images to output in terms of the meshes generated. Below is a mesh I used to perform analysis, and I am working with it to make good calculations and inferences.
Visualising with depth visuals, you can spot both the topology changes and the plant heights. I think this would be very useful for farmers to get an accurate map of their fields. It will help them decide how to irrigate their crops and where to use fertiliser and pesticide, with areas of deficient growth visible from the highlighted plant heights.
I was also using cron to schedule my stereo code to begin when the NavQPlus starts up. Find the short tutorial in the NavQPlus gitbook, if my additions are approved, that is :)
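For context, the crontab entry (added with crontab -e) is just a single @reboot line pointing at the capture script; the path below matches the command shown earlier.

@reboot python3 /home/user/stereo_camera/v4/camera.py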
Test Flights
Flight test 00
The first test flight of this system in action revealed a few issues. The main ones were that I needed a bigger battery, as I only got about a minute of flight time after wasting time messing with code and settings, and that I needed to secure the stereo camera better, as the motors vibrating the cameras produced poor images.
This highlighted the need for something to stabilise or absorb this vibration. I decided to mount the NavQPlus with shorter standoffs, as I noticed the longer ones were also introducing horizontal wobble.
Flight test 01
In the second test flight a few days later, things were more successful. I was able to get multiple three-minute flights out of the new battery. My new mount also stabilised the camera to some extent; however, another issue developed. The vibrations were unscrewing the mounting bolts of the computer and stereo camera, which made the camera shake just as badly as before. I need to make sure fixings are properly secured and use Loctite in the future. About two minutes into my 4th flight the drone began to spiral and I crashed it into some tall grass (emergency landing!), which I believe was due to a motor arm being rotated slightly askew by the heavy winds on the props that day. Another reason to check your bolts are tight. I then tried to use my second battery after recovering the drone and deciding it seemed okay. This final flight was very short and dramatic: after liftoff the drone immediately spun and, while trying to fight it, I ended up ploughing it into the ground. Now my drone cuts grass, it seems. The flight logs look nominal, so I think it was most likely that the motor arms were not aligned properly. Rookie mistake!
The data collected in these missions, although an improvement on flight 00, was not good enough to build depth maps with. I need to build a more stable camera mount that absorbs shock, or maybe even use a motorised stabiliser.
Furthermore, I am working on combining the depth map and pointcloud code so that I can run the full pipeline on the edge, with the end results displayed in a web GUI using FastAPI. Currently the workload is split at the depth map stage, and I am still processing the rest of the pipeline on my personal machine.
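To give an idea of the direction, a minimal FastAPI sketch of what the edge-side results endpoint could look like (the endpoint name and the fields in the results dictionary are purely illustrative):

from fastapi import FastAPI

app = FastAPI()

# Illustrative in-memory store; the real pipeline would write its outputs here.
latest_results = {"clusters": 0, "avg_height_m": None, "avg_axial_volume": None}

@app.get("/results")
def get_results():
    # Serve the latest analysis summary to the web GUI.
    return latest_results

# Run with: uvicorn main:app --host 0.0.0.0 --port 8000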
Now, although I did not get to demonstrate my solution fully, in all its glory, I did get some really interesting results from both real-world data and simulation.
I am currently working to improve my analysis functions and inferences. This is perhaps the most important part of the project, as it is what the farmer will see, and so it should be as meaningful as possible. I will continue work in this area and develop a prediction system for factors like growth rate and health trajectory.
Although I have tried to get in touch with farmers to get some feedback on the system, I have currently had no luck. I did resort to asking on the /farming subreddit and got some information about how farmers usually find this data and how my solution is an improvement on it. They look at satellite imagery and calculate values like the Normalized Difference Vegetation Index (NDVI), which tells you how much vegetation there is in a given area, as well as measuring the sulfur content of the soil. With this in mind I will incorporate these data collections into my future work, perhaps using the BME688 with its VOC-detecting features. Mapping these values onto a scan would provide farmers with positional readings, so they can see areas of greater and lesser sulfur content. My mapping solution also mimics the NDVI scale in some ways, as it tells you the spread of and difference in crops across the field.
Another key area for development is automation. Early on in this project I was researching and developing a SLAM solution for this, and still am. Other tasks blocked this objective, however, as I needed a good stereo depth and pointcloud system in order to get good scans for integrating into ROS. As my pointclouds become more reliable I will incorporate ROS-based SLAM into this system. This will allow the drone to fly over the crops autonomously without the need for waypoint mission software or a GPS connection, which can be unreliable and inaccurate.
Further, I would like to visualise these outputs in a simple web-based GUI. This GUI will show the user all the analysis results and inferences the system has made, along with an interactive map of the field.
'In conclusion' is probably the wrong phrase for this, as I am still developing the drone. I am happy with the progress I've made and am excited to carry on and incorporate better analysis, automated flight and user experience features.
Thank you for taking the time to read this! Have a look at the slideshow below!
Collaboration & contributions
My contributions to this project included helping others with how to use the Coral cameras and FastAPI, troubleshooting issues and being supportive in the channel. See my activity in the Hackster Discord community (mr_finnie_mac #0802). I also have multiple CAD models I made for this project available on Thingiverse. I contributed these to the Hovergames gitbook in the 3D printing section and submitted a short tutorial to the NavQPlus gitbook on how to use the boot scheduler cron on the NavQPlus. We have developed a nice community on the Hackster Discord and I'm excited to continue sharing my project with them there.