This project demonstrates how to build a Helium-connected agricultural tool that continuously measures farm and plant data such as temperature, humidity, gas levels, soil moisture, light intensity and plant height, detects disease-causing pests and activities like tree cutting, and makes real-time data available to users online. The system can be deployed in remote villages and forests with a one-time setup and left to run on its own, automatically collecting data without users needing to be present. Multiple Farmium devices can be placed throughout a village to collect many data points, eventually forming a star-mesh topology with Farmium base stations to analyse complex data.
Measurement data is pushed to the Amazon AWS cloud or another IoT dashboard through Helium network connectivity and can be viewed online; other services are used to send email notifications to users when values reach a critical level.
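The critical-value check behind those email notifications could be sketched as follows. This is a hypothetical illustration only: the threshold values and field names are mine, not taken from the actual Farmium firmware or dashboard configuration.

```python
# Hypothetical sketch of a critical-value check; thresholds and field
# names are illustrative, not from the real Farmium configuration.
CRITICAL_LIMITS = {
    "soil_moisture": (20, 80),   # percent: (min, max)
    "air_temperature": (5, 45),  # degrees Celsius
    "co2": (350, 2000),          # ppm
}

def find_alerts(reading):
    """Return (field, value, (lo, hi)) for every value outside its limits."""
    alerts = []
    for field, (lo, hi) in CRITICAL_LIMITS.items():
        value = reading.get(field)
        if value is not None and not (lo <= value <= hi):
            alerts.append((field, value, (lo, hi)))
    return alerts

print(find_alerts({"soil_moisture": 12, "air_temperature": 30, "co2": 900}))
```

A notification service would then email the user whenever `find_alerts` returns a non-empty list.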
Technical background: While daydreaming, I had the idea of using sensors to record crop height and detect early signs of disease, and to do so without 3G, 4G or 5G, leaving almost no carbon footprint.
That led me to Helium, a LoRaWAN-based network. Speed and data transmission capacity are still its limitations, but that is exactly what makes it reliable and a perfect fit for IoT appliances. Helium helps companies solve connectivity challenges without expensive cell plans or the burden of building and maintaining wireless infrastructure.
Helium is the next wave in IoT technology: from powering billions of devices to collecting data from billions of data points, Helium handles it all without complex network infrastructure or heavy network usage charges. It is being widely adopted because it helps reach previously inaccessible areas.
I have seen at many tech expos that 5G is positioned to be the world's next unified network, and most devices are now rolling out with 5G connectivity. But Helium has a unique opportunity that 5G does not promise in the coming years: reaching places where the climate is harsh or which would normally be inaccessible or too costly to cover. The next sector where Helium can be widely adopted is monitoring climate change and agricultural data, where computing devices are embedded in other objects and transfer data to interrelated pieces of technology without any human interaction.
By using Helium and IoT technology, researchers will be able to monitor areas of forest and rural agricultural farms that would normally be inaccessible or too costly to reach. This innovative method of transmitting data opens new possibilities in forestry and agricultural research.
To save the world from another pandemic, billions of devices need to be interconnected, communicating with each other and sharing critical data that can be used to study our dependence on nature. I am focusing on using IoT to study how forests and sustainable agriculture can be used to tackle the climate crisis.
Why did I choose IoT and Helium for studying forests and sustainable agriculture? Simply because scientists hope to uncover more about how the ability of trees and sustainable farmland to store carbon can be harnessed to mitigate climate change. This is where Helium, with its LongFi technology, can win over other networks: BLE and LTE are already widespread and 5G will soon cover more devices, but they all fall short on reduced network infrastructure and cost for remote locations, which is exactly what Helium does well.
My project will use newly-emerging IoT technologies to improve our understanding of the impacts of environmental change on our nation's forests and farms, which will help in policy making.
This is a perfect example of how technology can be used in new ways to help create a more sustainable future.

How will our Farmium bring the change?
1. Sustainable agriculture:
With an increasing population, a sustainable method of farming is important. It is vital to analyse what is going on in the farm: if a farmer has the means to know how plant growth is affected by different growing techniques, he can also predict which safe methods, which chemicals and what amount of water are required for maximum yield. It is also necessary to turn farmers into farm scientists, so that they get maximum yield while restoring the thriving quality of nature. In my country, India, farmers suffer from the lack of IoT devices in farming; refer to this website for more details on the problems Indian farmers face: https://www.mckinsey.com/industries/agriculture/our-insights/harvesting-golden-opportunities-in-indian-agriculture
2. Detection of certain beneficial and harmful organisms using sound analysis:
Flowers can hear buzzing bees—and it makes their nectar sweeter
Farmers use a lot of chemicals to boost their yield, but we tend to forget that through excessive use of pesticides, insecticides and inorganic manures, cross-pollinating agents and other arthropods (insects make up a large share of all living species on Earth) are either no longer attracted to those plants or die from chemical exposure and pollution.
Many pollinating insect species have already gone extinct, and many of those remaining will follow if not protected. We also need to detect crop diseases as early as possible to stop them from spreading across the farm. To solve this, the idea is to tell the farmer to use chemicals only when there are bad growth patterns or harmful pests; nowadays farmers often spray chemicals even when there is no disease, needlessly killing useful insects like honeybees. The device will also detect certain pollinating agents and how frequently they visit the fields.
3. Health awareness for people and villages:
The device would also detect harmful insects like mosquitoes (and can distinguish different breeds too), making villagers aware of the breeding of deadly insects and potential disease outbreaks, while remaining inexpensive enough for developing nations to implement. The same audio analysis technique described earlier solves this problem. In the next sections we present evidence of why the audio analysis will work.
Scientists are worried about stagnation in countries with high malaria caseloads that once enjoyed rapid declines: "In Africa and India, the cases had reduced from 80 percent to 40 percent. However, for the last six years, the rate of decline has stagnated, meaning that the mechanisms we are using, the bed nets, anti-malarials and insecticides, are no longer as strong as they used to be."
There is an upsurge in cases in places that previously did not have malaria, due to changes in climate, coupled with a population which lacks natural immunity against the disease.
To better understand and predict which part of the Earth will be the next site of malaria outbreaks or a breeding ground for deadly insects, we need technology like our Farmium device, which can be trained to recognize insect breeds using audio analysis; Helium gives it the power to gather environmental and community data even from the remotest places on Earth and make it all available to researchers. The environmental data might give us a clue as to why pests and dangerous insects are becoming more resistant and stronger day by day.
Now a major new fightback is being planned. According to the WHO, with the right focus, funding and international cooperation it should be possible to eradicate the disease by 2030; we think Helium and our Farmium device can help bring about that change.
4. Stopping deforestation:
Using audio analysis and frequency detection of wood-cutting tools like chainsaws, we can detect illegal tree cutting in a forest; the device would alert the concerned authorities by recognizing the characteristic sound and frequency of wood-cutting tools in operation.
[Note: Please look into every image carefully in next sections, as each image contains critical data required to run project successfully]
Sustainable agriculture:
Monitoring plant growth is a very important factor in understanding whether the crop yield will be good or not. We measure the relative height of the crops using an ultrasonic distance sensor. This also helps the user understand whether the plant is getting sufficient light for photosynthesis and the right water and nutrient balance. The same device can be used in forests to relate tree growth to other environmental factors. Below is simple C++ code for the plant-height task:
#include "Arduino.h"

class PlantHeight
{
  private:
    long duration;
    int deviceHeight;
    byte trigPin, echoPin;

    // Speed of sound: ~29 microseconds per cm, and the echo is a
    // round trip, so divide by 2
    long microsecondsToCentimeters(long microseconds)
    {
      return microseconds / 29 / 2;
    }

  public:
    // Constructor: store the HC-SR04 ultrasonic sensor pins
    PlantHeight(byte t, byte e)
    {
      trigPin = t;
      echoPin = e;
      deviceHeight = 0;
    }

    // Take the first reading as a control/reference value, i.e. the
    // height of the device above the ground, for comparison with
    // later readings
    void set_initial_height()
    {
      // Average 20 readings to get a stable value. While testing, the
      // sensor returned 0 for the first couple of readings (likely
      // because there was no start-up delay), so we wait one second.
      delay(1000);
      deviceHeight = 0;
      for (int i = 1; i <= 20; i++)
      {
        pinMode(trigPin, OUTPUT);
        digitalWrite(trigPin, LOW);
        delayMicroseconds(2);
        digitalWrite(trigPin, HIGH);
        delayMicroseconds(10);
        digitalWrite(trigPin, LOW);
        pinMode(echoPin, INPUT);
        duration = pulseIn(echoPin, HIGH);
        deviceHeight += microsecondsToCentimeters(duration); // distance of device from ground
        delay(1000);
      }
      deviceHeight = deviceHeight / 20;
    }

    // Calculate the plant's height relative to the ground
    int relativeHeight()
    {
      int readings = 0; // accumulator must start at zero
      for (int i = 1; i <= 20; i++)
      {
        pinMode(trigPin, OUTPUT);
        digitalWrite(trigPin, LOW);
        delayMicroseconds(2);
        digitalWrite(trigPin, HIGH);
        delayMicroseconds(10);
        digitalWrite(trigPin, LOW);
        pinMode(echoPin, INPUT);
        duration = pulseIn(echoPin, HIGH);
        readings += microsecondsToCentimeters(duration);
        delay(1000);
      }
      // the averaged sensor reading is what gets sent
      readings = readings / 20;
      return deviceHeight - readings;
    }
};
We also take environmental and soil sensor readings. Most of the environmental sensors are already present on the X-NUCLEO-SENSOR-BOARD, and the board manufacturer provides very good example code that makes it easy to use.
We added air quality, CO2 and light intensity sensors to give more insight into the farm. This will also be very useful when we build machine learning models for better understanding of farms, since larger amounts of data let us deploy even larger ML models. Below is the code for the X-NUCLEO-SENSOR-BOARD:
// Components
#include <Wire.h>
#include <HTS221Sensor.h>
#define DEV_I2C Wire // I2C bus the sensor board is wired to
HTS221Sensor *HumTemp;
void setup()
{
  // Initialize I2C bus.
  DEV_I2C.begin();
  HumTemp = new HTS221Sensor(&DEV_I2C);
  HumTemp->Enable();
}
void loop()
{
  // Read humidity and temperature.
  float humidity = 0, temperature = 0;
  HumTemp->GetHumidity(&humidity);
  HumTemp->GetTemperature(&temperature);
}
Here is the code for the VCNL4040 light intensity sensor and the CCS811 air quality / equivalent CO2 sensor:
#include <Wire.h>
#include "SparkFun_VCNL4040_Arduino_Library.h"
#include "SparkFunCCS811.h"
#define CCS811_ADDR 0x5B //Default I2C Address
CCS811 airQualitySensor(CCS811_ADDR);
VCNL4040 lightSensor;
void setup()
{
  Serial.begin(9600);
  Wire.begin();
  lightSensor.begin();
  lightSensor.powerOffProximity(); // Power down the proximity portion of the sensor
  lightSensor.powerOnAmbient();    // Power up the ambient light sensor
  if (airQualitySensor.begin() == false)
  {
    Serial.print("CCS811 error. Please check wiring. Freezing...");
    while (1)
      ;
  }
}
void loop()
{
  if (airQualitySensor.dataAvailable())
  {
    // If so, have the sensor read and calculate the results,
    // then retrieve them below
    airQualitySensor.readAlgorithmResults();
  }
  unsigned int co2 = airQualitySensor.getCO2();
  unsigned int tvoc = airQualitySensor.getTVOC();
  unsigned int ambientValue = lightSensor.getAmbient();
  delay(100);
}
The code above is only to explain the individual components; the final code is attached in the code section.
Audio analysis and classification:
We used the SparkFun Artemis RedBoard ATP for this task, as it is a powerful board capable of running the TensorFlow Lite framework. We wanted to use more powerful devices with stronger audio signal processing, but Helium aims to make its technology affordable and accessible, and a very expensive device would never fulfil our mission: #IoT for Good.
Our concept is already in action in some places, where it is used to capture audio and detect illegal logging of trees. You can see our previous version of this project using the Artemis ATP with full details here: https://www.hackster.io/vilaksh/tensorcrop-crop-quality-and-farm-control-3831b8
Step 1: Collecting the required dataset for audio analysis:
I got audio files from a Kaggle dataset covering different kinds of mosquito wing beats and honeybees; you can download it from there too. The file is big, but we don't need that much data since our device memory is small, so we will limit ourselves to 500 audio files per class, of one second each. I recorded the chainsaw and locust sounds myself. You can also record your own data if you have a silent room with insects; that would be more accurate and much more exciting.
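Selecting a fixed number of clips per class can be automated with a short script like the one below. This is a sketch under an assumed directory layout (one sub-folder per class label, each containing one-second WAV files); the function name and paths are mine, not part of the original workflow.

```python
# Hypothetical sketch: select up to 500 one-second WAV clips per class.
# The layout (one sub-folder per class label) is an assumption.
import os
import random

def sample_clips(dataset_dir, per_class=500, seed=42):
    """Return {label: [paths]} with at most per_class clips per label."""
    random.seed(seed)
    selection = {}
    for label in sorted(os.listdir(dataset_dir)):
        class_dir = os.path.join(dataset_dir, label)
        if not os.path.isdir(class_dir):
            continue
        wavs = [f for f in os.listdir(class_dir) if f.endswith(".wav")]
        random.shuffle(wavs)  # avoid bias from filename ordering
        selection[label] = [os.path.join(class_dir, f) for f in wavs[:per_class]]
    return selection
```

Fixing the random seed keeps the subset reproducible across training runs.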
Note: If we directly use the downloaded audio for training, it won't work well, because the microphone architecture differs and the device won't recognize anything. So once the files are downloaded, we need to play them back and re-record them via the Artemis microphone, so that we train on data exactly like the data used at inference time. So let's configure the Arduino IDE for the Artemis RedBoard ATP; see the link below.
https://learn.sparkfun.com/tutorials/hookup-guide-for-the-sparkfun-redboard-artemis-atp
Select the board as Artemis ATP, then go to File->Examples->SparkFun Redboard Artemis Example->PDM->Record_to_wav
Along with the code comes a Python script, which you need to run to record audio from the on-board mic. This is necessary because the audio files came from different mics, so the board might not recognize the frequencies accurately and could treat the sound as noise.
(Pro tip: While recording, vary the air column by slowly moving the sound source back and forth, simulating a real insect approaching the mic, so that you get better results. I tried it myself and it increased the accuracy.)
#!/usr/bin/python3
"""
Author: Justice Amoh
Date: 11/01/2019
Description: Python script to stream audio from Artemis Apollo PDM microphone
(indentation restored and updated for Python 3)
"""
import sys
import serial
import numpy as np
import matplotlib.pyplot as plt
from serial.tools import list_ports
from time import sleep
from scipy.io import wavfile
from datetime import datetime

# Controls
do_plot = True
do_save = True
wavname = 'recording_%s.wav' % (datetime.now().strftime("%m%d_%H%M"))
runtime = 50  # runtime in frames of 0.1 s each; 50 frames = 5 seconds, increase to match your audio duration

# Find Artemis Serial Port
ports = list_ports.comports()
try:
    sPort = [p[0] for p in ports if 'cu.wchusbserial' in p[0]][0]
except Exception:
    print('Cannot find serial port!')
    sys.exit(3)

# Serial Config
ser = serial.Serial(sPort, 115200)
ser.reset_input_buffer()
ser.reset_output_buffer()

# Audio Format & Datatype
dtype = np.int16                    # Data type to read data
typelen = np.dtype(dtype).itemsize  # Length of data type
maxval = 32768.                     # 2**15, for 16-bit signed audio

# Plot Parameters
delay = .00001              # Use 1 us pauses, as in MATLAB
fsamp = 16000               # Sampling rate
nframes = 10                # No. of frames to read at a time
buflen = fsamp // 10        # Buffer length
bufsize = buflen * typelen  # Resulting number of bytes to read
window = fsamp * 10         # Window of signal to plot at a time, in samples

# Variables
x = [0] * window
t = np.arange(window) / fsamp

# ---------------
# Plot & Figures
# ---------------
plt.ion()
plt.show()

# Configure Figure
with plt.style.context(('dark_background')):
    fig, axs = plt.subplots(1, 1, figsize=(7, 2.5))
    lw, = axs.plot(t, x, 'r')
    axs.set_xlim(0, window / fsamp)
    axs.grid(which='major', alpha=0.2)
    axs.set_ylim(-1, 1)
    axs.set_xlabel('Time (s)')
    axs.set_ylabel('Amplitude')
    axs.set_title('Streaming Audio')
    plt.tight_layout()
    plt.pause(0.001)

# Start Transmission
ser.write(b'START')  # Send Start command (bytes, not str, in Python 3)
sleep(1)

for i in range(runtime):
    buf = ser.read(bufsize)                # Read audio data
    buf = np.frombuffer(buf, dtype=dtype)  # Convert to int16
    buf = buf / maxval                     # Convert to float in [-1, 1]
    x.extend(buf)                          # Append to waveform array
    # Update plot lines
    lw.set_ydata(x[-window:])
    plt.pause(0.001)
    sleep(delay)

# Stop Streaming
ser.write(b'STOP')
sleep(0.5)
ser.reset_input_buffer()
ser.reset_output_buffer()
ser.close()

# Remove initial zeros
x = x[window:]

# Helper Functions
def plotAll():
    t = np.arange(len(x)) / fsamp
    with plt.style.context(('dark_background')):
        fig, axs = plt.subplots(1, 1, figsize=(7, 2.5))
        lw, = axs.plot(t, x, 'r')
        axs.grid(which='major', alpha=0.2)
        axs.set_xlim(0, t[-1])
        plt.tight_layout()

# Plot All
if do_plot:
    plt.close(fig)
    plotAll()

# Save Recorded Audio
if do_save:
    wavfile.write(wavname, fsamp, np.array(x, dtype=np.float32))
    print('Recording saved to file: %s' % wavname)
Once the audio is recorded, you can use any audio splitter to split the files into 1-second chunks. I used the piece of code below in a Jupyter notebook for this step.
from pydub import AudioSegment
from pydub.utils import make_chunks

myaudio = AudioSegment.from_file("myAudio.wav", "wav")
chunk_length_ms = 1000  # pydub calculates in milliseconds
chunks = make_chunks(myaudio, chunk_length_ms)  # Make chunks of one second

# Export all of the individual chunks as wav files
for i, chunk in enumerate(chunks):
    chunk_name = "chunk{0}.wav".format(i)
    print("exporting", chunk_name)
    chunk.export(chunk_name, format="wav")
I used Audacity to clean up my downloaded audio files before slicing them; it also has very useful features, like the spectrogram view, for checking whether your audio is clean. Notice how the waveforms differ between the mosquito species (culex, aedes), the bee dataset, the chainsaw and the background noise.
Once the audio files are sliced, you are ready to train them for your Artemis. However, you will need to tweak things to make it work well, because the recordings may contain a lot of noise from the enclosure or the working environment; I therefore recommend training a background class as well, so the model works even when a constant, peculiar noise is present. The background class contains the audio segments trimmed in Audacity plus some distinct noise samples.
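Besides a dedicated background class, another common way to make a model robust to constant enclosure noise is to mix background recordings into the training clips at a controlled level. The numpy sketch below illustrates the idea; it is not part of the original training pipeline, and the signal-to-noise ratio is illustrative.

```python
# Illustrative sketch: additive noise augmentation for 1-second clips.
# Not part of the original pipeline; SNR value is an assumption.
import numpy as np

def mix_noise(clip, noise, snr_db=10.0):
    """Mix noise into clip (float arrays in [-1, 1]) at the given SNR in dB."""
    clip = np.asarray(clip, dtype=np.float32)
    noise = np.asarray(noise, dtype=np.float32)[: len(clip)]
    sig_power = np.mean(clip ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # Scale noise so that 10*log10(sig_power / scaled_noise_power) == snr_db
    scale = np.sqrt(sig_power / (noise_power * 10 ** (snr_db / 10.0)))
    return np.clip(clip + scale * noise, -1.0, 1.0)
```

Applying this with a few random SNR levels per clip effectively multiplies the size of the training set without any extra recording.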
Step 2: Training the audio data
For training I used Google Colab; the full training process is shown in the images below. You need to upload these files to the Colab notebook and run it on a GPU; for me it took nearly 2 hours. I had to restart training several times because the notebook kept disconnecting due to my slow internet connection, but on the final full run over the whole dataset I succeeded.
Audio Training #2: Labels = aedes,culex,anopheles,bee,chainsaw
Download all the files generated in the steps above. You also need to generate a micro-features file for each class's audio; you can read more about the formats on the TensorFlow website. The program file is big; you can see it in the code section or in my previous project, TensorCrop-Crop Quality and Farm Control.
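For intuition, these per-clip micro features are essentially small log-spectrograms of each one-second recording. The sketch below illustrates the idea with scipy; the exact window sizes and quantization used by the TensorFlow micro frontend differ, so treat the function name and parameters as assumptions.

```python
# Illustrative sketch: turn a 1-second, 16 kHz clip into a small
# log-spectrogram "feature image". Parameters are assumptions, not the
# exact TensorFlow micro-frontend settings.
import numpy as np
from scipy.signal import spectrogram

def micro_feature(clip, fs=16000, nperseg=480, noverlap=160):
    # 30 ms windows with a 20 ms hop give roughly 49 frames per second
    freqs, times, sxx = spectrogram(clip, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return np.log(sxx + 1e-6)  # log scale compresses the dynamic range
```

The resulting 2-D array is what the small convolutional model on the Artemis actually classifies, rather than the raw waveform.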
We are really thankful to Helium and Hackster for donating the Helium Developer Kit to work on our idea. We don't have a 3D printer, so we had to look for a suitable hardware case; luckily we found an old sensor box in good condition.
Since most of the sensors we used are I2C, SparkFun Qwiic connectors saved us from soldering and fitting the sensors. You can clearly see all the sensor wiring in the pictures (as I did not have custom Fritzing parts, I decided to take detailed pictures of the wiring instead).
We just need to upload the code to our devices and the Farmium hardware is ready.
Helium Network Configuration: Create an account on the Helium console and add a device. Follow the official Helium documentation for configuring and adding devices: https://developer.helium.com/console/adding-devices After you have added a device, you should see it in your console.
Please refer to https://developer.helium.com/ for more details about the Helium console. I wrote my own function for decoding the payload sent by my device. With a Decoder Function, users can transform and/or parse a raw payload before it is posted to an endpoint. When a Decoder Function is applied to a device or integration via a Label, the function's code is executed on the payload sent by the device. The Decoder Function can be written in custom JavaScript provided by the user or selected from prebuilt decoders (currently Browan Object Locator and Cayenne LPP).
The decoding code for the payload is below:
function Decoder(bytes)
{
  // LSB (least significant byte) first; shift left then right to
  // sign-extend, for proper decoding of negative 16-bit values:
  var tempAir = bytes[1] << 24 >> 16 | bytes[0];
  var humidity = bytes[2];
  var SHT10_Moisture = bytes[3];
  var SHT10_Temperature = bytes[5] << 24 >> 16 | bytes[4];
  var eCO = bytes[6];
  var tVOC = bytes[7];
  var height = bytes[8];
  var lightIntensity = bytes[9];
  var serialData = bytes[10];
  var temp_data = tempAir / 100;
  var soil_temperature = SHT10_Temperature / 100;
  return {
    airTemperature: temp_data.toFixed(2),
    airHumidity: humidity,
    soilHumidity: SHT10_Moisture,
    soilTemperature: soil_temperature.toFixed(2),
    CO2: Math.pow(eCO * 10, 2),
    tVOC: Math.pow(tVOC * 10, 2),
    plantHeight: height * 10,
    lux: Math.pow(lightIntensity, 2),
    audioAnalysis: serialData
  };
}
Reason for using my own decoder: LoRa does not have high bandwidth and can only transmit small packets, so it is better to encode our values into a compact form; the encoded values can then be recovered in the console by reversing what we did during encoding. The payload input is given in hex digits, as the payload is automatically converted to hex during transmission to any LoRa gateway. Below is an image of the label created for linking the function with the integrations we are going to use.
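To make the encode/decode relationship concrete, here is a Python sketch of the device-side packing that the Decoder function above would reverse. The byte layout is read off the decoder; the helper names are hypothetical, and several fields (CO2, tVOC, height, lux) are deliberately lossy because each is squeezed into a single byte.

```python
# Sketch of the compact packing implied by the Decoder function above.
# Helper names are hypothetical; scaling mirrors the JavaScript decoder.
def encode_payload(air_t, humidity, soil_m, soil_t, co2, tvoc, height_cm, lux, audio):
    b = bytearray(11)
    t = int(round(air_t * 100)) & 0xFFFF       # signed x100, little-endian
    b[0], b[1] = t & 0xFF, t >> 8
    b[2] = humidity
    b[3] = soil_m
    st = int(round(soil_t * 100)) & 0xFFFF
    b[4], b[5] = st & 0xFF, st >> 8
    b[6] = int(round(co2 ** 0.5 / 10))         # decoder squares (x * 10)
    b[7] = int(round(tvoc ** 0.5 / 10))
    b[8] = int(round(height_cm / 10))          # decoder multiplies by 10
    b[9] = int(round(lux ** 0.5))              # decoder squares
    b[10] = audio
    return bytes(b)

def decode_payload(b):
    """Python mirror of the JavaScript Decoder, for a quick sanity check."""
    return {
        "airTemperature": int.from_bytes(b[0:2], "little", signed=True) / 100,
        "airHumidity": b[2],
        "soilHumidity": b[3],
        "soilTemperature": int.from_bytes(b[4:6], "little", signed=True) / 100,
        "CO2": (b[6] * 10) ** 2,
        "tVOC": (b[7] * 10) ** 2,
        "plantHeight": b[8] * 10,
        "lux": b[9] ** 2,
        "audioAnalysis": b[10],
    }
```

Round-tripping a reading through both functions shows exactly how much precision the one-byte fields give up, which is the trade-off made to fit a LoRa packet.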
Sending data for visualization:
Once the data is sent to the Helium console, it starts appearing in Pipedream too, and we can see the relevant fields in the row using event.body.payload.data{ }.
Once you deploy it, a new row will be added to the Google Sheet
We can even add custom charts in the Google Sheet for more visualization options. There is a very good tutorial on sending data from the Helium console to Google Sheets, which is a good starting point for understanding how our data is sent: https://www.hackster.io/evan-diewald/routing-sensor-data-from-helium-devices-to-google-sheets-285699 Helium is going to make Farmium a real people's network device. Scalability was already covered at the beginning.
Our Farmium device would look something like shown above when deployed in farms and forests.
Next Steps: We hope to bring a Helium mesh network to extend Farmium device coverage to large farms and forests in the future. The Farmium Helium Farm Companion device will start evolving and providing quality data as soon as it is implemented in our lives. Helium has done a great job of managing fleets of devices; their console is very powerful and simple.
We thank Chris from rerobots.net for helping those who do not enjoy Helium connectivity to test their ideas through hardware sharing over the internet; it has some limitations, but it is still a good way for anyone to experiment with Helium networks. Our sincere thanks also to the Helium and Hackster teams for donating the hardware and keeping us motivated to complete this project. #IoT for Good