Drones for agricultural spraying have been assisting farmers for several years. They are simple to use, highly accurate, and affordable thanks to GPS-based command and guidance. Drones are also commonly used to detect weeds and to manage and control them locally. Yu et al. (2019) analyse one such example, assessing the role of deep learning convolutional neural networks in detecting weeds in perennial ryegrass (Lolium perenne). Precision herbicide application of this kind can significantly reduce herbicide input and the cost of weed control in turfgrass.
Sharma et al. (2019) report that international pesticide use was projected to reach about 3.5 million tonnes by 2020. Regarding pesticide use in the developing world, Sanyal and Law (2021) argue that several developing nations are importing industrial processes that rely on toxic chemicals. In addition, pesticides, which are harmful by design, are increasingly used in public health and agriculture programmes to manage and control pests. According to recent estimates, pesticides cause over 20,000 fatalities annually, the majority of them in developing nations.
Although broad-spectrum weed killers are among the most important herbicides, they are also the costliest option for farmers. Their high cost stems from their wide selectivity towards broadleaf weeds and from the need to apply them according to climatic conditions and with the approval of local authorities.
Our aim is to give farmers worldwide access to cost-effective and easy-to-use weed control tools.
We are building a low-latency neural network only a few megabytes in size, using a self-developed 1-bit CNN based on the MobileNet topology. In this project we present a system that locates weeds on a grass field. The system could later run on low-cost vehicles (e.g. drones) to mark weeds or diseases over a larger area on site. As the board we use a Raspberry Pi Pico with an ArduCam camera. To make the best use of the microcontroller's limited resources, we implement our own 1-bit CNN in the C programming language.
Software
Many different approaches to building very small, low-latency models involve either compressing pre-trained nets or training small nets directly.
MobileNets focus primarily on latency optimisation, but also yield small nets. Much work on small nets focuses only on size and does not consider speed.
Since my first experiments with TensorFlow were not very successful, I decided to implement the BNN in C myself. I use a so-called quantised network. Quantised neural networks with low bit-widths are well suited to resource-constrained settings, as they need fewer computing resources for inference while still offering high performance. Their advantages are most obvious on FPGA and ASIC devices, whereas general-purpose processor architectures cannot always perform low-bit integer computations efficiently. The most commonly used low-precision neural network model for mobile CPUs is the quantised 8-bit network. In a number of cases, however, even fewer bits can be used for weights and activations; the only obstacle is the difficulty of an efficient implementation. Since our target system has neither a floating-point unit nor a GPU, this is a minor issue.
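To make the idea concrete, here is a minimal sketch of the affine quantisation scheme used by such 8-bit networks: each float is mapped to an int8 via a scale and a zero point. The names and layout are illustrative, not taken from our firmware.

```c
#include <stdint.h>
#include <math.h>

/* Affine 8-bit quantisation: real = scale * (q - zero_point).
 * Illustrative sketch, not the project's actual code. */
typedef struct {
    float   scale;       /* real-valued step between two quantised levels */
    int32_t zero_point;  /* quantised value that represents real 0.0      */
} quant_params_t;

/* Derive scale and zero point so that [min, max] maps onto [-128, 127]. */
static quant_params_t quant_params(float min, float max) {
    quant_params_t q;
    q.scale      = (max - min) / 255.0f;
    q.zero_point = (int32_t)lroundf(-min / q.scale) - 128;
    return q;
}

/* Quantise one float, clamping to the int8 range. */
static int8_t quantize(float x, quant_params_t q) {
    int32_t v = (int32_t)lroundf(x / q.scale) + q.zero_point;
    if (v < -128) v = -128;
    if (v >  127) v =  127;
    return (int8_t)v;
}
```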
We try to push the boundaries of what is possible and implement a 1-bit convolutional neural network (1-bit CNN, also known as a binary neural network), in which both the weights and the activations are binary. This offers 32-fold memory compression and up to a 58-fold reduction in computational cost on CPUs. Because its computation is purely logical (XNOR operations between binary weights and binary activations), the 1-bit CNN is also extremely power-efficient for embedded devices and has the potential to run directly on next-generation memristor-based hardware.
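The reason binary networks are so cheap is that a dot product between vectors of +1/-1 values collapses to XNOR plus a population count once the values are bit-packed. The sketch below illustrates this; it packs 32 values per uint32_t and uses GCC's __builtin_popcount (available in the arm-none-eabi toolchain used for the Pico). It is an illustration of the technique, not our actual kernel.

```c
#include <stdint.h>

/* Binary dot product via XNOR + popcount. Bit convention: 1 encodes +1,
 * 0 encodes -1, 32 values packed per uint32_t. XNOR sets a bit where the
 * signs match; with p matches over n bits, dot = p - (n - p) = 2*p - n. */
static int32_t binary_dot(const uint32_t *w, const uint32_t *a, int nwords) {
    int32_t matches = 0;
    for (int i = 0; i < nwords; i++)
        matches += __builtin_popcount(~(w[i] ^ a[i]));  /* XNOR, count 1s */
    int32_t n = 32 * nwords;
    return 2 * matches - n;
}

/* Binarise an accumulator back into one packed activation bit (sign). */
static uint32_t sign_bit(int32_t acc) {
    return acc >= 0 ? 1u : 0u;
}
```

Note that the Cortex-M0+ has no popcount instruction, so the builtin is implemented in software (a libgcc helper); the 58-fold figure should therefore be read as an upper bound on this core.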
The CNN topology used is based on the established MobileNets model for mobile and embedded image processing applications. MobileNets use depthwise separable convolutions to build lightweight deep neural networks and are already deployed in a wide range of applications and use cases, including object recognition, fine-grained classification, face attributes and large-scale geolocation.
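For reference, the following float sketch shows the two halves of a depthwise separable convolution: a per-channel 3x3 depthwise filter followed by a 1x1 pointwise convolution that mixes channels. In the actual network the weights and activations are binarised; the CHW layout, the function names and the missing padding/stride handling here are simplifications for illustration.

```c
/* Depthwise 3x3: one filter per channel, no channel mixing.
 * Data layout CHW; border pixels are skipped (valid region only). */
static void depthwise3x3(const float *in, const float *k, float *out,
                         int c, int h, int w) {
    for (int ch = 0; ch < c; ch++)
        for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++) {
                float acc = 0.0f;
                for (int ky = -1; ky <= 1; ky++)
                    for (int kx = -1; kx <= 1; kx++)
                        acc += in[(ch * h + y + ky) * w + (x + kx)]
                             * k[ch * 9 + (ky + 1) * 3 + (kx + 1)];
                out[(ch * h + y) * w + x] = acc;
            }
}

/* Pointwise 1x1: mixes channels, no spatial extent (hw = h * w). */
static void pointwise1x1(const float *in, const float *k, float *out,
                         int cin, int cout, int hw) {
    for (int co = 0; co < cout; co++)
        for (int i = 0; i < hw; i++) {
            float acc = 0.0f;
            for (int ci = 0; ci < cin; ci++)
                acc += in[ci * hw + i] * k[co * cin + ci];
            out[co * hw + i] = acc;
        }
}
```

Compared with a standard 3x3 convolution, this factorisation cuts the multiply count per output position roughly nine-fold for large channel counts, which is what makes MobileNets so lightweight.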
A MobileNet with 160x160x3 input data comprises approximately 4.2 million parameters. With binary weights this amounts to about 4.2 million bits, i.e. roughly 525 KB, so together with its coefficients the topology requires less than 1 MB.
Hardware
A Raspberry Pi Pico board is used as the basis for the binarised neural network. It is built around the RP2040 microcontroller, a dual-core Arm Cortex-M0+ clocked at 133 MHz with 264 KB of internal RAM.
Key features from the datasheet:
- Dual ARM Cortex-M0+ @ 133MHz
- 264kB on-chip SRAM in six independent banks
- Support for up to 16MB of off-chip Flash memory via dedicated QSPI bus
- DMA controller
- Fully-connected AHB crossbar
- Interpolator and integer divider peripherals
- On-chip programmable LDO to generate core voltage
- 2 on-chip PLLs to generate USB and core clocks
- 30 GPIO pins, 4 of which can be used as analogue inputs
- Peripherals
- 2 UARTs
- 2 SPI controllers
- 2 I2C controllers
- 16 PWM channels
- USB 1.1 controller and PHY, with host and device support
- 8 PIO state machines
The camera used is an SPI camera from ArduCam (ArduCam Mini 2MP Plus OV2640 SPI Camera Module). This camera has the following features:
- 2MP image sensor OV2640 (B0067) / 5MP image sensor OV5642 (B0068)
- M12 mount or CS-mount lens holder with changeable lens options
- IR sensitive with proper lens combination
- I2C interface for the sensor configuration
- SPI interface for camera commands and data stream
- All IO ports are 5V/3.3V tolerant
- Supports JPEG compression, single and multiple shoot modes, one-time capture with multiple reads, burst read, low-power mode, etc.
- Mates well with standard Raspberry Pi Pico boards
- Open-source code libraries provided for Arduino, STM32, ChipKit, Raspberry Pi and BeagleBone Black
- Small form factor
The camera has a resolution of 1600x1200 pixels, an SPI clock of 8 MHz and an 8 MB frame buffer.
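For completeness, here is a minimal sketch of how a frame can be pulled from the camera's FIFO over SPI with the Pico SDK. The pin assignment and the ArduChip register values (CAPTURE_REG, BURST_READ) are assumptions for illustration only; the ArduCam documentation and library are authoritative.

```c
#include "pico/stdlib.h"
#include "hardware/spi.h"

#define PIN_CS      5
#define CAPTURE_REG 0x04  /* ArduChip FIFO control register (assumed) */
#define BURST_READ  0x3C  /* ArduChip burst-read command (assumed)    */

static void cs_select(bool on) { gpio_put(PIN_CS, !on); } /* CS is active-low */

/* Write one camera control register; MSB set marks a write access. */
static void arducam_write_reg(uint8_t reg, uint8_t val) {
    uint8_t buf[2] = { (uint8_t)(reg | 0x80), val };
    cs_select(true);
    spi_write_blocking(spi0, buf, 2);
    cs_select(false);
}

/* Clock the captured frame out of the camera's FIFO in one burst. */
static void arducam_read_fifo(uint8_t *dst, size_t len) {
    uint8_t cmd = BURST_READ;
    cs_select(true);
    spi_write_blocking(spi0, &cmd, 1);
    spi_read_blocking(spi0, 0x00, dst, len);
    cs_select(false);
}

int main(void) {
    stdio_init_all();
    spi_init(spi0, 8 * 1000 * 1000);      /* 8 MHz, as stated above */
    gpio_set_function(2, GPIO_FUNC_SPI);  /* SCK  */
    gpio_set_function(3, GPIO_FUNC_SPI);  /* MOSI */
    gpio_set_function(4, GPIO_FUNC_SPI);  /* MISO */
    gpio_init(PIN_CS);
    gpio_set_dir(PIN_CS, GPIO_OUT);
    cs_select(false);

    static uint8_t frame[20 * 1024];      /* enough for a small JPEG */
    arducam_write_reg(CAPTURE_REG, 0x01); /* clear FIFO (assumed bit)     */
    arducam_write_reg(CAPTURE_REG, 0x02); /* start capture (assumed bit)  */
    sleep_ms(200);                        /* crude wait; real code polls a
                                             capture-done status flag     */
    arducam_read_fifo(frame, sizeof frame);
    return 0;
}
```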
The Data Set
As a data set, a lawn was filmed from different directions and approx. 25,000 images were generated from the video. 5,000 of these images were held out for later testing. Areas with weeds were marked accordingly, and the BNN was trained to recognise them. No distinction is made between different types of weeds.
We use a microcontroller that by default has only 2 MB of flash memory and attempt to implement a complete neural network for image recognition on it.
TinyML is an area of research in machine learning and embedded systems that explores the types of models that can run on small, low-power devices like microcontrollers. It enables low-latency, low-power and low-bandwidth model inference on edge devices. While a standard CPU consumes between 65 and 85 watts and a standard GPU between 200 and 500 watts, a typical microcontroller draws on the order of milliwatts or microwatts, roughly a thousand times less. Thanks to this low power consumption, TinyML devices can run for weeks, months and in some cases even years on battery power while ML applications run in the background.
However, the problem with deep neural networks is that they have a great many parameters and therefore require powerful computing hardware and a large amount of memory. This makes it almost impossible to run them on devices with limited computing power, such as Android phones and other low-power devices.
We were able to show that it is possible in principle to run a model like MobileNet with less than 2 MB of memory. The detection rate achieved is 70 percent and depends strongly on the size of the weed; detailed investigations will follow. We are still at the beginning of development. A further requirement is to generate training data at different altitudes in order to mimic the flight path of a drone as closely as possible.
The achievable frame rate has also not yet been measured precisely. When using a drone, however, a high frame rate is not particularly important, as an area can simply be flown over several times. The network currently only distinguishes weed from no weed; in a next step, different weed types will be trained, for which the training data must be expanded accordingly. So far, the image recognition has only been trained and tested in the garden at home. The C code is optimised for the Raspberry Pi Pico, but training with the help of a graphics card still needs to be implemented.
References
Gill, H. K., & Garg, H. (2014). Pesticide: Environmental impacts and management strategies. Pesticides - Toxic Aspects, 8, 187.
Kraus, E. C., & Stout, M. J. (2019). Direct and indirect effects of herbicides on insect herbivores in rice, Oryza sativa. Scientific reports, 9(1), 1-13.
Mendes, K. F., Régo, A. P. J., Takeshita, V., & Tornisielo, V. L. (2019). Water resource pollution by herbicide residues. In Biochemical Toxicology-Heavy Metals and Nanomaterials. IntechOpen.
Sanyal, S., & Law, S. (2021). Chronic pesticide exposure induced aberrant Notch signalling along the visual pathway in a murine model. Environmental Pollution, 282, 117077.
Sharma, A., Kumar, V., Shahzad, B., Tanveer, M., Sidhu, G. P. S., Handa, N., ... & Thukral, A. K. (2019). Worldwide pesticide usage and its impacts on ecosystem. SN Applied Sciences, 1(11), 1-16.
Thumbtack.com. (2021). Average Cost of Lawn Weed Control. Retrieved 17 August 2021, from https://www.thumbtack.com/p/weed-control-cost
IUPAC International Union of Pure and Applied Chemistry. (2021). Retrieved 17 August 2021, from https://agrochemicals.iupac.org/index.php?option=com_sobi2&sobi2Task=sobi2Details&catid=3&sobi2Id=31
Wu, Z., Chen, Y., Zhao, B., Kang, X., & Ding, Y. (2021). Review of Weed Detection Methods Based on Computer Vision. Sensors, 21(11), 3647.
Yu, J., Schumann, A. W., Cao, Z., Sharpe, S. M., & Boyd, N. S. (2019). Weed detection in perennial ryegrass with deep learning convolutional neural network. Frontiers in Plant Science, 10, 1422.