Observation, i.e. measurement of the current state of the meteorological elements, is essential for making a weather forecast. A list of the meteorological elements required for a weather forecast is available here. The measurement of almost all of these elements is fully automated, except for a few that are hard to automate.
A trained observer estimates these hard-to-automate elements at essential sites such as airports [1], especially the cloud type and the total cloud amount [2]. Having a trained human observer at every observation site is not feasible, especially at remote locations (e.g. uninhabited islands) and marine observation stations (e.g. moored buoys, lightships, etc.). This limitation reduces the number of observations of these hard-to-automate meteorological elements.
Observations of cloud amount and cloud type can be automated using machine learning on relatively cheap hardware if TinyML is used effectively.
💡 My Solution
My proposed solution uses a TinyML-based cloud type classification algorithm. This algorithm will run on a microcontroller board with a low-power camera sensor. The microcontroller will capture an image of the sky every hour using the camera and feed it to the TinyML algorithm, which will perform on-device inference to predict the cloud type from the image.
After classification, the prediction and the captured image will be published to a remote server using MQTT and/or LoRa. The microcontroller will also record the image and other information to build a dataset of cloud images. This dataset can later be used to further improve the accuracy of the TinyML algorithm.
After performing the cloud type inference, publishing the prediction, and updating the dataset, the microcontroller will go into deep sleep to conserve the battery. It will wake up after an hour and repeat the same cycle.
⚙️ Hardware
The hardware used in this project is:
- Arduino Portenta H7
- Arduino Portenta Vision Shield
- A computer
- A power bank
This solution can be implemented on any development board on the market that has camera support. However, the software is written in MicroPython and uses OpenMV-specific libraries, so it may require some tweaks depending on the hardware of your choice. That being said, it will run without any modification on OpenMV boards.
The Arduino Portenta and the Vision Shield are made for each other: they snap together via high-density connectors, and that is it. Connect the board to your computer via a USB-C cable, and it is ready for the next step.
💻 Software
To program the Portenta H7, I have used the OpenMV IDE, which is freely available on their website. Download the IDE and follow the Getting Started with OpenMV and MicroPython guide to set up the Portenta H7.
The software I have prepared for this project is available at the GitHub link below.
GitHub link - click here
Download the software and save cloud_classifier.py to the Portenta H7 board as main.py.
Add your Wi-Fi and MQTT credentials to the software. These credentials are required to publish information to the MQTT broker and to fetch the current date and time from an NTP server.
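For reference, the credentials are ordinary variables near the top of the script, and the board joins the network with MicroPython's standard network module. The names below are illustrative placeholders rather than the exact ones used in cloud_classifier.py:
# Illustrative placeholders only -- adjust the names and values to match
# the variables actually used in cloud_classifier.py.
WIFI_SSID = "your-ssid"
WIFI_KEY = "your-wifi-password"
MQTT_BROKER = "your-broker.example.com"  # e.g. a CloudMQTT instance
MQTT_PORT = 1883
MQTT_USER = "your-mqtt-user"
MQTT_PASSWORD = "your-mqtt-password"

import network

# Join the Wi-Fi network so the script can reach the MQTT broker and an NTP server.
wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect(WIFI_SSID, WIFI_KEY)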
Apart from the credentials, the software expects a trained.tflite and a labels.txt file. Edge Impulse generates these two files when you prepare the TinyML model. For this specific project, the trained.tflite and labels.txt files are available at the link below.
TinyML Model - click here
Save the trained.tflite and labels.txt to the Portenta H7 along with main.py.
And that is it. The cloud classifier is ready to identify clouds ☁️. Point the camera towards the sky and observe the results in the IDE.
Please continue reading to learn how the Cloud Classifier works and about its other capabilities.
❓ How It Works
The inner workings of the cloud classifier are described below in detail. It works precisely the way it is described in the My Solution section:
- The Portenta H7 captures an image using the Vision Shield.
- The image is fed to a TinyML model, which predicts the cloud type.
- The image, cloud type prediction and other information are recorded to prepare the dataset.
- The prediction and other information are published via the MQTT protocol.
- The system then goes into a deep sleep.
- It wakes up and performs its tasks again.
1. Cloud Type Classification
The magical part of the system is the cloud type classification algorithm, which was prepared using Edge Impulse Studio. The Swimcat-ext dataset is used to train the machine learning model. This dataset has 2100 images of sky/cloud patches divided into six classes:
- Clear Sky
- Patterned Cloud
- Thin White Cloud
- Thick White Cloud
- Thick Dark Cloud
- Veil Cloud
This dataset is used to train a neural network via transfer learning; the base network is MobileNetV2 0.35. With only 20 epochs of training, the network reached an accuracy of 93.3%.
Edge Impulse makes the process of generating a machine learning model effortless. To build your own TinyML-based object detection or classification algorithm, please follow the adding sight to your sensor tutorial.
For a quick start, you can also download the model I have prepared from here.
After preparing the model, deploy it to your board of choice using these deployment guides.
I am using the Arduino Portenta H7 with the OpenMV IDE; therefore, I followed the OpenMV-specific guide.
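For reference, the on-device inference follows the pattern of the OpenMV example that Edge Impulse generates. The sketch below is only an illustration of that pattern, using the tf module from the OpenMV firmware of that time (newer firmware renames it); the exact parameters in my script may differ.
import sensor, tf

# Configure the Vision Shield camera (its sensor is monochrome).
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((240, 240))  # square crop matching the model input
sensor.skip_frames(time=2000)

net = "trained.tflite"
labels = [line.rstrip('\n') for line in open("labels.txt")]

img = sensor.snapshot()
# Run the TinyML model and pair every score with its label.
for obj in tf.classify(net, img, min_scale=1.0, scale_mul=0.8, x_overlap=0.5, y_overlap=0.5):
    predictions = sorted(zip(labels, obj.output()), key=lambda p: p[1], reverse=True)
    cloud_type, confidence = predictions[0]  # highest-scoring class
    print(cloud_type, confidence)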
2. Sending Prediction
Immediately after inference, the cloud classifier connects to the broker, publishes the result using the MQTT protocol, and then instantly disconnects from the broker.
The cloud classifier publishes the prediction along with other information, such as the date and time and the remaining battery percentage, via MQTT.
Note: The Portenta H7 gets its power via the USB port; therefore, the remaining battery percentage is a random number in the current design.
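A minimal sketch of that connect-publish-disconnect step is shown below. It assumes MicroPython's umqtt.simple client (OpenMV bundles a similar MQTTClient), and the topic name and JSON payload layout are illustrative rather than the exact ones my script uses:
import json
from umqtt.simple import MQTTClient  # OpenMV provides a similar MQTTClient class

def publish_prediction(cloud_type, confidence, timestamp, battery_pct):
    # MQTT_BROKER, MQTT_PORT, MQTT_USER and MQTT_PASSWORD are the credential
    # placeholders defined near the top of the script.
    client = MQTTClient("cloud-classifier", MQTT_BROKER, port=MQTT_PORT,
                        user=MQTT_USER, password=MQTT_PASSWORD)
    client.connect()
    payload = json.dumps({
        "cloud_type": cloud_type,
        "confidence": confidence,
        "time": timestamp,
        "battery": battery_pct,  # a placeholder value in the current design
    })
    client.publish(b"cloudclassifier/prediction", payload.encode())  # illustrative topic
    client.disconnect()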
I am using CloudMQTT as the broker; however, you can use any broker of your choice. The reason I chose CloudMQTT is that it is easy to set up and simple to use.
To receive the information published by the system, I am using a smartphone app called MQTT Dashboard. I have designed an interface in it to visualise data.
I have also prepared a python script to receive the published information on a computer. This script is available here.
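A comparable desktop subscriber, sketched here with the paho-mqtt package (1.x-style API) and the same illustrative topic as above, would look roughly like this:
# Desktop-side subscriber sketch (CPython, paho-mqtt 1.x style API).
# Broker address, credentials and topic are illustrative placeholders.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Print every published observation as it arrives.
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.username_pw_set("your-mqtt-user", "your-mqtt-password")
client.on_message = on_message
client.connect("your-broker.example.com", 1883)
client.subscribe("cloudclassifier/prediction")
client.loop_forever()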
3. Dataset Preparation
Along with cloud type classification, the cloud classifier also prepares a dataset by storing the images taken for classification and logging them in a CSV file. A snippet of the prepared CSV file and of the images captured by the cloud classifier is shown below.
You can see that most of the predictions are spot-on; however, it makes some mistakes, for example classifying a tree as a patterned cloud. I have also noticed that the algorithm classifies the sun, or any bright white spot, as a thick dark cloud. Retraining the model with this data can resolve some of these issues.
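The logging step itself can be as simple as saving the frame and appending a row to a CSV file. The sketch below is illustrative only; the directory layout, file names and CSV columns are assumptions, not necessarily what my script writes:
import time

def log_observation(img, cloud_type, confidence):
    # Assumes a dataset/ directory already exists on the board's filesystem
    # (or on an SD card, if one is fitted).
    timestamp = "%04d-%02d-%02d_%02d-%02d-%02d" % time.localtime()[0:6]
    image_path = "dataset/%s.jpg" % timestamp
    img.save(image_path)  # keep the raw image so the model can be retrained later
    with open("dataset/log.csv", "a") as f:
        f.write("%s,%s,%.2f,%s\n" % (timestamp, cloud_type, confidence, image_path))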
4. Battery Saving
Battery saving is an integral part of the Cloud Classifier, as it may be deployed at a remote location or on a moored buoy. To survive for an extended period, it must go into a battery-saving mode between operations.
OpenMV provides a convenient feature called deep sleep. In deep sleep mode, the development board effectively turns itself off except for the RTC. The Arduino Portenta H7 consumes 2.95 µA in standby mode [3] (deep sleep in OpenMV terminology). So, in theory, the power consumption of the cloud classifier in deep sleep should be only about 14.75 µW (2.95 µA at a 5 V supply). That is extremely low.
After performing cloud classification, sending the prediction, and preparing the dataset, the Cloud Classifier goes into deep sleep for one hour. After one hour it wakes up, performs its tasks, and goes back to deep sleep. The deep sleep period can be easily adjusted by changing the wake-up time in the software:
# sleep duration in minutes.
sleep_duration = 60
rtc.wakeup(sleep_duration*60*1000)
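In the full script, arming the RTC wake-up is followed by actually entering deep sleep. A minimal sketch of that pattern, assuming the standard pyb and machine modules on OpenMV, would be:
import pyb, machine

sleep_duration = 60                     # sleep duration in minutes, as above
rtc = pyb.RTC()
rtc.wakeup(sleep_duration * 60 * 1000)  # arm an RTC wake-up (milliseconds)
machine.deepsleep()                     # everything but the RTC powers down;
                                        # on wake-up the board resets and main.py runs again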
With these settings, the cloud classifier takes 24 observations per day.
📹 Demo
To put the system into demo mode, set the variable shown below to True in the software.
# Keep the system in demo mode. In this mode the system will NOT go into deep sleep
# and will perform its task every 3 seconds.
demo = True
Note: In demo mode, the system does NOT go into a deep-sleep state and performs its task roughly every 3 seconds.
1. Thick White Cloud
In the demo above, the cloud classifier correctly identifies the cloud type as a thick white cloud. To test the system, I covered the camera with my hand, and the cloud classifier responded immediately by changing the result to a thick dark cloud (it does not know it is my hand and treats it as a thick dark cloud). The moment I uncovered the camera, it again correctly identified a thick white cloud.
Along with predicting the cloud type correctly, it sends the prediction and other information to the MQTT broker. The smartphone app (a subscriber) reads the published information immediately, as shown in the video.
The cloud classifier captured the images above, which clearly illustrate what the camera saw during the demo recording. The pictures show that the cloud classifier is good at classifying thick white clouds but occasionally misclassifies them.
2. Clear Sky
In the demo videos above, the system correctly classifies the sky as clear. I have also tried to show the classification and the captured image on the laptop, but they are barely visible due to poor viewing conditions. The system also publishes the result to the broker, which is displayed by the smartphone app.
3. Thin White Cloud
In the demo above, the system correctly classifies the cloud as a thin white cloud. This time I did not try to read the classification off my laptop screen (poor visibility due to the bright environment), but the result is still visible in the smartphone app.
The cloud classifier captured the above images during the demo recording period.
4. Cloud Classifier Running on Battery
This demo demonstrates the cloud classifier's capability as a battery-operated system. It consumes very little energy when in operation and conserves as much energy as possible by spending most of its time in deep sleep.
I have not observed Thick Dark Cloud, Patterned Cloud, or Veil Cloud conditions so far; that is why I have not yet tested the cloud classifier on those classes.
👣 Next Step
I started this project intending to classify clouds according to the World Meteorological Organization's classification [4], i.e. into ten main categories:
- Cirrus
- Cirrocumulus
- Cirrostratus
- Altocumulus
- Altostratus
- Nimbostratus
- Stratocumulus
- Stratus
- Cumulus
- Cumulonimbus
But I could not find a good dataset. The ones I found were not as clean as the Swimcat-ext images, and because of that, my early TinyML-based algorithms were not good at classifying them.
I still want to achieve that goal and classify the clouds into these ten main categories.
Even though the cloud classes used in this project are not in line with the WMO's cloud classes, there is some correlation between them.
- Thin white cloud = Cumulus clouds
- Patterned cloud = Altocumulus and Stratocumulus clouds
- Thick dark cloud = Nimbostratus clouds
- Thick white cloud = Altostratus clouds
- Veil cloud = Cirrus clouds
[1] https://www.metoffice.gov.uk/weather/guides/observations/uk-observations-network
[2] https://www.metoffice.gov.uk/weather/guides/observations/how-we-measure-cloud