Factors such as climate change, limited resources and the growing world population require intelligent and efficient farming. Modern IoT devices and sensors can provide data and decision support, enabling such smart farming scenarios. The measured and generated data has the potential to establish far more intelligent and resource-saving farming processes.
For example, if the soil's fertilizer concentration were available for each individual section of the farmland, the farmer could apply only the missing fertilizer quantities to the areas that actually lack them, saving fertilizer and reducing the environmental footprint. Soil density information would allow more precise soil preparation and plowing, humidity information would support a more precise irrigation process, and so on.
In order to improve farming efficiency, farmers in both developed and developing countries need individual digital maps with condition data for their particular farmland. They also need the help of robots that can collect this farmland-condition data without human labor. Current products are either expensive, or the data collected by the machines is not accessible to the farmer.
Not all data can be collected with drones. Farmers therefore need autonomous rovers that are capable of collecting environmental data. In addition, certain types of farm produce require manual work (e.g. pesticides are not permitted for organic food, so weeds have to be removed by hand). Rovers equipped with specific manipulators can automate many of these processes.
For more information about the business context, read the following documentation:
- 5 Ways Smart Agriculture Turns Less Water into More Food https://arad.co.il/blog/5-smart-agriculture-strategies-to-turn-less-water-into-more-food/
- The discovery and collection of data https://www.precisionhawk.com/blog/smart-farming-how-drones-are-transforming-agricultural-operations
This project provides a high-level design for such a solution, including tests of some elements of the solution in the form of hardware (buggy, sensors, etc.). The solution includes the following elements:
- An autonomous and inexpensive rover that is capable of navigating farmland. It uses its sensors to collect and partially preprocess condition data that is relevant for smart farming (e.g. gas concentrations, humidity, etc.).
- The collected data is stored on the device. Once transferred to a host computer, this data can be visualized on maps and further processed with data pipelines and AI models, e.g. to create fertilization or irrigation plans for the farmer.
- The autonomous vehicle, built from NXP components, is managed by ROS2 and finds its path by means of GPS positioning and a video stream from the vehicle's stereo camera.
- ETL data pipelines that use the stereo camera attached to the NavQPlus board, GStreamer, OpenCV and PyTorch to collect and process visual data and to detect plants. These data pipelines are capable of identifying unwanted objects such as weeds or other waste.
- ETL data pipelines that capture and process data from environmental sensors for humidity and gas (Bosch BME688) using ROS2 and ROS topics.
- Like the other sensor data, the stored gas sensor information can be used in specific models that calculate the optimal fertilization strategy for the given farmland.
An autonomous rover consists of the following parts:
- Rover that is capable of driving a preplanned route
- Software to manage the vehicle (ROS and ROS topics) and the attached sensors
- Software to manage the sensors and cameras (GStreamer, OpenCV, possibly integrated into ROS)
The autonomous vehicle provided by NXP consists of many parts.
The build process requires extensive customization and is described in this GitBook: https://nxp.gitbook.io/nxp-cup/mr-buggy3-developer-guide/mr-buggy3-build-guide. Configure your vehicle as described.
NXP provided a computer (NAVQPlus) that is capable of managing the buggy and all sensors. In addition, you need to purchase a battery.
My buggy looks like this; the Bosch sensor and the NAVQPlus still require a proper housing. This design from Thingiverse, provided by other participants of the challenge, fits the NAVQPlus: https://www.thingiverse.com/thing:5742189
The NAVQPlus board manages the rover and all attached sensors. It provides several capabilities needed for autonomous driving and is equipped with many interfaces, as documented here: https://nxp.gitbook.io/8mpnavq/dev-guide/hardware-interfaces.
A full-blown autonomous farm vehicle requires environment information so that it can follow a predefined path and react to obstacles. It is either possible to follow a route that is planned on a separate host computer (in this setup, the vehicle needs sensors that can detect obstacles), or to use the cameras so that the vehicle finds its path on its own.
A farm vehicle is also equipped with different sensors that collect information about the environment and farming-related data in general. These sensors produce a constant data stream. To process this stream, the vehicle needs to implement ETL pipelines (Extract, Transform, Load).
Among other things, NAVQPlus is capable of running ROS code. Install ROS2 as described in this GitBook https://nxp.gitbook.io/8mpnavq/dev-guide/software/ros2, and also install the libraries needed for communication with the FMUK66v3, which is used to manage the autonomous vehicle.
The data pipeline for this project manages both of the data streams mentioned above and is implemented in ROS2. It therefore consists of different nodes and topics that are responsible for sensing the environment and for collecting and processing sensor data.
The scientific paper "Robot Operating System: A Modular Software Framework for Automated Driving" (Hellmund et al., FZI Research Center for Information Technology, 76131 Karlsruhe, Germany) details the overall setup of an autonomous car and serves as an intellectual guideline for the system design. Note that the paper permits personal use but may not be used in other contexts.
The high-level design for the ROS2 data pipeline described here uses this example code as a guideline: https://roboticsbackend.com/build-a-ros2-data-pipeline-with-ros2-topics/.
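The following minimal sketch illustrates this pattern with two rclpy nodes connected by a topic. The node names, the topic name humidity_raw and the dummy reading are illustrative assumptions, not part of the final design:

```python
# Minimal two-node ROS2 pipeline sketch: a "sensor" node publishes readings,
# a "processing" node consumes them. Topic name and values are placeholders.
import rclpy
from rclpy.node import Node
from rclpy.executors import SingleThreadedExecutor
from std_msgs.msg import Float32


class HumidityPublisher(Node):
    def __init__(self):
        super().__init__('humidity_publisher')
        self.pub = self.create_publisher(Float32, 'humidity_raw', 10)
        self.timer = self.create_timer(1.0, self.publish_reading)

    def publish_reading(self):
        msg = Float32()
        msg.data = 42.0  # placeholder: replace with an actual sensor read
        self.pub.publish(msg)


class HumidityProcessor(Node):
    def __init__(self):
        super().__init__('humidity_processor')
        self.sub = self.create_subscription(Float32, 'humidity_raw', self.on_reading, 10)

    def on_reading(self, msg):
        # Trivial "transform" step of the ETL idea: log or filter the value.
        self.get_logger().info(f'humidity: {msg.data:.1f} %')


def main():
    rclpy.init()
    executor = SingleThreadedExecutor()
    for node in (HumidityPublisher(), HumidityProcessor()):
        executor.add_node(node)
    try:
        executor.spin()
    finally:
        rclpy.shutdown()


if __name__ == '__main__':
    main()
```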
3.3. Integration of Sensors
The agricultural buggy is equipped with different sensors, such as cameras or environmental sensors. These sensors need to be integrated into the different pipelines that are running on the NAVQPlus. The measured values are stored in ROS bags for later evaluation and processing.
Some sensors, such as the Bosch BME688, measure environmental data and are directly connected to their ROS nodes. Other sensors use the cameras of the vehicle to provide pictures and video streams.
3.3.1. GStreamer and OpenCV
GStreamer is a capable streaming solution that is widely used in image and video processing, and it can help to process the picture and video data streams. In many situations you want to prepare decisions with this graphics data; for example, you want to classify the plants shown in a picture. Such tasks require AI models that are trained with the data to be classified.
To run AI models on video streams (e.g. path detection, obstacle detection) or to process photos with such models (e.g. pest detection in plants), the visual data typically needs to be preprocessed after capture (e.g. edge detection, color depth reduction). The cameras are therefore often integrated into ETL pipelines.
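As a simple illustration, such a preprocessing step could look like the following sketch; the concrete operations, target size and file names are assumptions for illustration only:

```python
# Sketch of a typical preprocessing step before classification.
import cv2


def preprocess(frame):
    """Resize, convert to grayscale and run edge detection on a single frame."""
    resized = cv2.resize(frame, (224, 224))           # size expected by many CNNs
    gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)  # reduce color depth
    edges = cv2.Canny(gray, 100, 200)                 # simple edge detection
    return edges


if __name__ == '__main__':
    img = cv2.imread('plant.jpg')  # hypothetical test image
    if img is not None:
        cv2.imwrite('plant_edges.jpg', preprocess(img))
```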
With the given hardware, it is possible to collect data from the video streams by means of ROS nodes, with the GStreamer pipeline that preprocesses the data integrated into ROS. The following GitHub repository describes a bridge between ROS2 and GStreamer that might be used in this scenario: https://github.com/BrettRD/ros-gst-bridge.
Alternatively, the cameras can be linked directly to GStreamer pipelines. In this setup, OpenCV is used to capture video and photos and is part of GStreamer pipelines that process the individual frames. For more information, see this part of NXP's documentation: https://nxp.gitbook.io/8mpnavq/dev-guide/software/opencv. In this case there would be multiple pipelines implemented in different technologies.
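A minimal sketch of this latter approach follows. It assumes that OpenCV is built with GStreamer support and that the camera is exposed as /dev/video0; both are assumptions for this illustration:

```python
# Capture a frame through a GStreamer pipeline with OpenCV.
import cv2

# Assumed pipeline: a V4L2 camera converted to raw frames and handed to appsink.
pipeline = (
    "v4l2src device=/dev/video0 ! "
    "video/x-raw,width=640,height=480,framerate=30/1 ! "
    "videoconvert ! appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ok, frame = cap.read()
if ok:
    # In the real pipeline the frame would be handed to the preprocessing
    # and classification steps instead of being written to disk.
    cv2.imwrite('snapshot.jpg', frame)
cap.release()
```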
3.3.2. Bosch Gas Sensor
The Bosch BME688 is capable of measuring environmental variables such as temperature, gas concentration or humidity. This data can be used as an input to AI models in order to establish an intelligent environment monitor. The sensor is used as described in this section.
A) Hardware
The supplied Bosch sensor board comes from Adafruit; besides the tiny breakout board, the package includes different cables to connect the sensor to a board (see photo).
The so-called Stemma cables (technically these are 3- and 4-pin JST PH connectors) can be attached directly to your board without soldering: https://learn.adafruit.com/introducing-adafruit-stemma-qt.
To test the breakout board and the sensor, you can attach it to an Arduino Uno. It can be wired either for the I2C or for the SPI interface, as described here: https://learn.adafruit.com/adafruit-bme680-humidity-temperature-barometic-pressure-voc-gas/arduino-wiring-test. In the Arduino IDE you need to install the library that includes the proprietary Bosch drivers for the sensor (see https://learn.adafruit.com/adafruit-bme680-humidity-temperature-barometic-pressure-voc-gas/). As shown in the serial console, the sensor measures different environmental variables.
B) AI Studio
Bosch offers AI Studio, which allows you to train AI models with the captured data. The Bosch homepage describes the workflow using a different board than the one described here (that package includes a microcontroller). Bosch has also published a video showing a model trained to detect coffee just by its smell. This video is very informative and demonstrates the capabilities of the sensor. For more information, see https://www.bosch-sensortec.com/software-tools/software/bme688-software/.
It is not possible to link the supplied Adafruit breakout board directly to AI Studio, and a ready-made library is not available. Nevertheless, the Bosch GitHub includes example code for sending data over a BLE connection or for using the sensor in a data-logger mode: https://github.com/boschsensortec/Bosch-BSEC2-Library/tree/master/examples/bme68x_demo_sample.
A further option is to connect the sensor to a Linux board and program it with MicroPython. Adafruit has included a chapter about this possibility in its documentation, and Pimoroni seems to be working on libraries that store the data as JSON so that it can be used in AI Studio (see "Luftqualität mit dem BME688 messen" [Measuring air quality with the BME688], https://www.raspberry-pi-geek.de/ausgaben/rpg/2021/08/luftqualitaet-mit-dem-bme688-messen/).
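As a rough illustration of that route, the following sketch assumes Adafruit's Blinka/CircuitPython stack on a Linux board and the adafruit_bme680 driver (which also reads the BME688's basic values). The JSON line format is purely illustrative and is not the import format expected by AI Studio:

```python
# Read the BME688 over I2C on a Linux board and log readings as JSON lines.
import json
import time

import board            # Adafruit Blinka board definitions
import adafruit_bme680  # driver for BME680/BME688 basic readings

i2c = board.I2C()  # uses the board's default I2C pins
sensor = adafruit_bme680.Adafruit_BME680_I2C(i2c)

with open('bme688_log.jsonl', 'a') as log:
    for _ in range(10):  # sample for a short while; adjust as needed
        record = {
            'timestamp': time.time(),
            'temperature_c': sensor.temperature,
            'humidity_pct': sensor.humidity,
            'pressure_hpa': sensor.pressure,
            'gas_ohm': sensor.gas,
        }
        log.write(json.dumps(record) + '\n')
        time.sleep(1)
```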
A different path would be to integrate the sensor into a mathematics solution. The following article describes a project that uses Wolfram Mathematica to process data from the BME688 and to train AI models (see "Algorithmusentwicklung für intelligente Gassensorik" [Algorithm development for intelligent gas sensing], Jonas Friedemann Heuer and Dr. Daniel Fabian Kärcher, Intervall Beratung GmbH, https://www.additive-net.de/de/component/jdownloads/send/175-industrie/712-algorithmusentwicklung-fuer-intelligente-gassensorik). However, this path would require a further technology stack to be available on the vehicle.
C) Application areas for the gas sensor
The sensor is able to detect very small quantities of molecules. Using AI Studio, data from the sensor can be used to train an AI model that is, for example, capable of detecting how many plants other than the intended crop are growing on a field. The following scientific study gives an overview of the electronic-nose applications that are possible with sensors like the BME688: "Diverse Applications of Electronic-Nose Technologies in Agriculture and Forestry" https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3649433/
3.4. Artificial Intelligence - Neural Processing Unit
NAVQPlus also includes a Neural Processing Unit that can accelerate inference on the edge. Single pictures, preprocessed in the GStreamer pipelines described above, can therefore be classified on the edge using TensorFlow models (e.g. classification of plants or plant diseases). For more information, see https://nxp.gitbook.io/8mpnavq/dev-guide/software/ai
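A minimal sketch of such an edge classification step follows. The model file and the delegate path are assumptions (the NPU is typically exposed through a vendor delegate on the i.MX 8M Plus); without the delegate the same code falls back to the CPU:

```python
# Classify a preprocessed frame with a TensorFlow Lite model, optionally on the NPU.
import numpy as np
import tflite_runtime.interpreter as tflite

MODEL = 'plant_classifier.tflite'        # hypothetical model trained offline
DELEGATE = '/usr/lib/libvx_delegate.so'  # assumed NPU delegate path

try:
    delegates = [tflite.load_delegate(DELEGATE)]
except (ValueError, OSError):
    delegates = []  # CPU fallback if the NPU delegate is absent

interpreter = tflite.Interpreter(model_path=MODEL, experimental_delegates=delegates)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# 'frame' would come from the GStreamer/OpenCV pipeline above, already resized.
frame = np.zeros(inp['shape'], dtype=inp['dtype'])  # placeholder input
interpreter.set_tensor(inp['index'], frame)
interpreter.invoke()
print('class scores:', interpreter.get_tensor(out['index']))
```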
4. Future Work
Due to the sudden illness and death of a very close relative just a few weeks before the deadline, I was not able to implement the above design as I had originally planned. As I have already acquired all the needed material and knowledge, I plan to continue with this project in the future.