Plants are among the most valuable entities on the planet. There are many kinds of plants, and almost all of them help reduce pollution. Thanks to photosynthesis, pollution is reduced and cleaner air is obtained. The health of the planet is directly related to the well-being of its plants. How can we know how our plants feel?
In this project we will create a smart system to better understand the behaviour of plants.
Scope
The scope of the project has been reduced due to technical problems during development. Voice inference with our custom ML model is omitted until further updates.
Hands-on project
Three sensors are used in this project: light, temperature and moisture. All of them are connected to an analog pin on the extension board.
For the temperature sensor, a conversion is applied to obtain the final value in degrees Celsius. For the light and moisture sensors, the raw analogue reading is used directly. A calibration process is performed to obtain the maximum and minimum values, so that we know the limits of our plant's environment.
The thresholds obtained from the calibration are used to understand the needs of the plant. Depending on the sensor value, an audio file is played so the plant can express its needs.
switch (ePredictedClass) {
    case eInferenceClasses::NOISE:
        // Background noise: no response needed.
        break;
    case eInferenceClasses::HELLO:
    {
        // Pick one of three greeting recordings; random(1, 4) returns 1-3.
        long randNum = random(1, 4);
        if (randNum == 1)
            AudioDevice.play(eAudioFiles::HELLO_HIGH);
        else if (randNum == 2)
            AudioDevice.play(eAudioFiles::HELLO_MEDIUM);
        else
            AudioDevice.play(eAudioFiles::HELLO_LOW);
    }
    break;
    case eInferenceClasses::HOW_ARE_YOU:
    {
        // Same random selection for the "how are you" responses.
        long randNum = random(1, 4);
        if (randNum == 1)
            AudioDevice.play(eAudioFiles::HOW_ARE_YOU_HIGH);
        else if (randNum == 2)
            AudioDevice.play(eAudioFiles::HOW_ARE_YOU_MEDIUM);
        else
            AudioDevice.play(eAudioFiles::HOW_ARE_YOU_LOW);
    }
    break;
    case eInferenceClasses::WATER:
        // Raw moisture reading compared against the calibrated thresholds.
        if (MoistureDevice.getData() > 700)
            AudioDevice.play(eAudioFiles::WATER_HIGH);
        else if (MoistureDevice.getData() > 200)
            AudioDevice.play(eAudioFiles::WATER_MEDIUM);
        else
            AudioDevice.play(eAudioFiles::WATER_LOW);
        break;
    case eInferenceClasses::TEMPERATURE:
        // Temperature in degrees Celsius after conversion.
        if (TempDevice.getData() > 30)
            AudioDevice.play(eAudioFiles::TEMPERATURE_HIGH);
        else if (TempDevice.getData() > 20)
            AudioDevice.play(eAudioFiles::TEMPERATURE_MEDIUM);
        else
            AudioDevice.play(eAudioFiles::TEMPERATURE_LOW);
        break;
    case eInferenceClasses::LIGHT:
        // Raw light reading compared against the calibrated thresholds.
        if (LightDevice.getData() > 650)
            AudioDevice.play(eAudioFiles::LIGHT_HIGH);
        else if (LightDevice.getData() > 200)
            AudioDevice.play(eAudioFiles::LIGHT_MEDIUM);
        else
            AudioDevice.play(eAudioFiles::LIGHT_LOW);
        break;
    default:
        break;
}
The switch statement selects its branch based on the class predicted by the ML model.
ML model
Spresense is used as the ML framework to train our model. There are five different classes: Hello, Water, Temperature, How are you, and Noise. To create the dataset for these words we used Python and the Windows text-to-speech voices. Voices of speakers from different regions were automated to generate the required amount of data. The scripts used for this are available in the repository.