Create a rover to aid terrace farming, maximizing the yields of existing terrace farms and increasing the amount of land that can be farmed in a sustainable fashion.
The Concept
Many autonomous rovers will be used to inspect and monitor farming processes, helping make the use of human labor more efficient. The system's many computer-vision eyes will help dispatch humans to the most important care tasks on a large farm.
Implementation
The neural processing unit (NPU) and vision-system features of the i.MX 8M Plus will be used to increase throughput and reduce the power required to make an autonomous rover function. A vision system using an image classifier will detect conditions that need to be corrected. If the condition is in a category the rover can handle, the rover will take action; otherwise, the condition is reported to a central control point.
The ultimate solution for this type of rover will pair the most capable vision system with a very power-efficient implementation. While other systems could be used to test concepts, hardware that is well adapted to the task will win out over less specialized systems.
Step by Step Instructions
Setting up NAVQ+ for developing the project
- Connect power - I used the side power connector rather than the USB port to supply power; there is a warning about a current limit when the USB port supplies power. It is also convenient to use a bench power supply with a current meter to understand NAVQ+ power consumption in various modes of operation.
- Turn off the Ethernet error message - I created a shell script to unbind the Ethernet driver. The script needs to be executed after logging in to the NAVQ+ to prevent the error messages.
sudo sh -c "echo -n 30bf0000.ethernet > /sys/bus/platform/drivers/imx-dwmac/unbind"
- This course is a quick introduction to TensorFlow
The best way to make progress is to build on an existing working example. The NAVQ+ ships with a TensorFlow Lite inference example. I will build on this example and adapt its code to this project.
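As a starting point, a minimal TensorFlow Lite inference in Python looks like the sketch below. The model filename is a placeholder; substitute the model from the on-board example.

# Minimal TensorFlow Lite inference on the NAVQ+ (sketch).
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="mobilenet_v1_1.0_224_quant.tflite")  # placeholder
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Zero input with the right shape/dtype; a real run would load an image here.
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])
print("Top class:", int(np.argmax(scores)))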
i.MX 8M Plus NPU accelerator
The neural processing unit is an important part of this project. The first thing to test is the inference speed improvement when the NPU is used. This is key in two ways.
- It reduces the amount of power required for a vision system to operate, extending battery life.
- It makes it practical to run larger neural networks at higher performance than non-accelerated systems allow.
I wrote two Bash scripts to call the TensorFlow Lite example inference; the scripts are in the code section of this project. The picture below shows the inference speed-up when the NPU is used. The inference time is 2.721 ms with NPU acceleration. Without NPU acceleration the inference takes much longer, 62.8 ms. So for this example there is roughly a 23x speed improvement.
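For reference, the same comparison can be made from Python. This is a sketch, not the scripts I used: the VX delegate path (/usr/lib/libvx_delegate.so) and the model name are assumptions based on NXP's eIQ layout, so adjust them to match your NAVQ+ image.

# Compare CPU vs. NPU (VX delegate) inference time -- sketch.
import time
import numpy as np
import tflite_runtime.interpreter as tflite

MODEL = "mobilenet_v1_1.0_224_quant.tflite"  # placeholder model

def time_inference(delegates=None, runs=50):
    interpreter = tflite.Interpreter(model_path=MODEL,
                                     experimental_delegates=delegates or [])
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
    interpreter.invoke()  # warm-up; the first NPU invoke compiles the graph
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.invoke()
    return (time.perf_counter() - start) / runs * 1000.0  # average ms per inference

print("CPU: %.3f ms" % time_inference())
print("NPU: %.3f ms" % time_inference([tflite.load_delegate("/usr/lib/libvx_delegate.so")]))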
Now work can begin on adapting the TensorFlow Lite example to the needs of this project.
I have found it useful to convert examples like Fashion MNIST from TensorFlow to a TensorFlow Lite model and transfer the TensorFlow Lite model to the NAVQ+ with a USB flash drive.
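The conversion itself is only a few lines on the development machine. This is a sketch assuming a trained Keras model saved at a placeholder path; for best NPU performance the model should ultimately be fully integer quantized, which additionally requires a representative dataset.

# Convert a trained Keras model to a .tflite file -- sketch.
import tensorflow as tf

model = tf.keras.models.load_model("fashion_mnist_model")  # placeholder path

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable quantization
tflite_model = converter.convert()

# Copy this file to the NAVQ+ on a USB flash drive.
with open("fashion_mnist.tflite", "wb") as f:
    f.write(tflite_model)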
I did find the 28 pixel by 28 pixel images of Fashion MNIST to be a bit small. The images are also monochrome, which is limiting. So I have switched to a CNN example that uses the CIFAR-10 data set.
The CIFAR-10 data set uses color images that are 32 pixels by 32 pixels. I found this link useful to access the CIFAR-10 data set.
The goal is to have access to the test portion of the data set on the NAVQ+. Note that the NAVQ+ uses TensorFlow Lite and does not have the full capabilities of TensorFlow.
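Because full TensorFlow (and its dataset loader) is not available on the NAVQ+, one approach is to export the CIFAR-10 test split to plain .npy files on the development machine and copy them over. A sketch follows; the filenames are arbitrary.

# Export the CIFAR-10 test split for transfer to the NAVQ+ -- sketch.
import numpy as np
import tensorflow as tf

(_, _), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

np.save("cifar10_test_images.npy", x_test)  # shape (10000, 32, 32, 3), uint8
np.save("cifar10_test_labels.npy", y_test)  # shape (10000, 1), uint8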
The time to do an inference with the CNN TensorFlow Lite model is 30.5 ms.
Now a new CNN TensorFlow Lite model can be created by training the CNN with a new data set useful for terrace farming.
A data set containing JPEG images of several types of crop leaves, both healthy and diseased, is found at the link below.
Leaf Data Set Healthy and Diseased
In future work the TensorFlow CNN model will be retrained using the pictures of healthy and diseased leaves, replacing two categories of picture in the image classifier. A scanning method will also be used to search a much larger image for healthy or diseased crop leaves. The scanning method breaks the larger image into a grid of 32 by 32 pixel sub-images and runs a classifier inference on each one. A second, half-step pass can then be run to detect leaf images that were cut in half on the first pass, as sketched below.
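A minimal sketch of that two-pass grid scan follows. classify_tile() and the DISEASED class index are hypothetical stand-ins for a TensorFlow Lite inference call and the retrained model's label.

# Two-pass grid scan of a large image using a 32x32 tile classifier -- sketch.
import numpy as np

TILE = 32
DISEASED = 1  # hypothetical class index for "diseased leaf"

def scan(image, classify_tile):
    """image: HxWx3 uint8 array; classify_tile: fn(32x32x3 tile) -> class id."""
    hits = []
    h, w = image.shape[:2]
    for offset in (0, TILE // 2):  # full-grid pass, then half-step pass
        for y in range(offset, h - TILE + 1, TILE):
            for x in range(offset, w - TILE + 1, TILE):
                if classify_tile(image[y:y + TILE, x:x + TILE]) == DISEASED:
                    hits.append((x, y))
    return hits  # corners of tiles flagged as diseased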
Solving Additional I/O needs
I need additional I/O lines for the Bosch BME688 sensor and for items added to the rover, such as servos for an azimuth-elevation camera mount. Because I don't quite have enough time to figure out how to use the Cortex-M7 core on the NAVQ+, I am using an RT1010 board connected to the NAVQ+ via a serial port.
Below is a link to a GitHub project for the RT1010 board to accept serial communication from the NAVQ+ board. This sidesteps the problem of accessing the Cortex-M7 core on the NAVQ+ for simple embedded software I/O tasks (I2C, SPI, and digital I/O). Getting these I/O tasks done with the Cortex-M7 core on the NAVQ+ does not seem easy with a rapidly approaching deadline.
GitHub project for RT1010 serial communication with NAVQ+ board
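On the NAVQ+ side, talking to the RT1010 can be as simple as the pyserial sketch below. The device path, baud rate, and ASCII command format are assumptions; match them to the linked RT1010 project.

# Send an I/O command from the NAVQ+ to the RT1010 over a UART -- sketch.
import serial

port = serial.Serial("/dev/ttymxc2", 115200, timeout=1.0)  # assumed UART device

# Hypothetical ASCII protocol: "SERVO <channel> <angle>\n"
port.write(b"SERVO 0 90\n")  # point the camera mount's azimuth servo to 90 degrees
reply = port.readline()      # assume the RT1010 acknowledges each command
print("RT1010 replied:", reply.decode(errors="replace").strip())
port.close()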
Planned Future Work
The focus of this project has been on using TensorFlow Lite on the NPU of the NAVQ+ board. Future work has the following goals.
- Irrigation Water Level Detection - Use an image classifier to check that water levels are proper for rice farming. A stick with three levels painted different colors will be used as a measuring rod to detect that the water levels are proper for growing rice.
- Pest insect and animal detection - Use an image classifier to detect nuisance insects and animals that destroy crops and reduce crop yields.
- Crop Blight detection - Use an image classifier to check the conditions of leaves and detect crop blight as early as possible.
- Autonomous Rover Development - Develop the neural networks, vision system, and program logic that create an autonomous rover that balances the above tasks well and communicates the conditions of the terrace farm to a central control hub.
What we need to learn to get the above goals accomplished:
ROS, PX4, MAVLink, vision systems, rover mechanical design, and finally much more MACHINE LEARNING!
My old robot friend is ready to start a new life with much-improved hardware.
Look for further refinements in the future.
Many thanks to the Discord community for the spirit of cooperation that helped our projects succeed in spite of many challenges!!! You are all awesome! Thanks to NXP and Hackster.io for the awesome learning opportunity that is the HoverGames!
Useful Links:
Mobile Robotics Buggy3 Build Instructions
Programming NXP PX4 Controller