A recent blog post reveals that Sony's Spresense microcontroller is making its way into space!
With smaller size comes lower power requirements!
A few days ago, Sony announced Spresense's capabilities and the test procedures that show how Spresense can be useful in space applications and satellite monitoring systems. While the board will initially be used on ongoing missions or to augment existing technology, it could prove a great help in expanding core research in the EdgeAI sector. By opening avenues for low-power monitoring systems and software that runs under constrained computational budgets, it could also help democratize aerospace-based terrain monitoring and, for that matter, even space exploration.
Satellite-based classification systems that run on low computing power and a small energy budget promise longer operational life, easier replacement, and the ability to monitor multiple systems at once with a single chip. Satellite imagery is important for many applications, including disaster response, law enforcement, and environmental monitoring. These applications require identifying objects and facilities in the imagery, and because the geographic expanses to be covered are vast and the analysts available to conduct the searches are few, automation is required.
For those who aren't aware, this was one of Sony's proclamations:
"One small step will be testing in earth orbit, but 2022 could see a “giant leap” with a lunar landing involving a transformable robot on the Moon’s surface."
Many tweets followed up on the news, including one from EdgeImpulse:
"One small step for Sony's Spresense, one giant leap for edgeAI"
As Sony rightly mentioned, the New Space movement is about commercial collaboration: making use of low-power, lightweight microcontrollers wherever possible, replacing traditional systems and enabling inexpensive yet precise classification and inferencing units.
Last year - a dive into EdgeAI in space:Last year, Intel unveiled the world's first AI-powered satellite to head its way into space. PhiSat-1 (Φ-Sat-1) is the most advanced nano-satellite to carry an onboard artificial intelligence VPU to improve the quality of geospatial terrestrial data. Intel's Movidius Myriad is just one of several advanced nanosatellite components that have made it to space. This year, around the same time, Sony's Spresense is setting a new benchmark with lower-power, lightweight inference systems to test AI and DNN reliability in the aerospace domain.
Low power and other specs:The Spresense microcontroller certainly fits the bill for a COTS component on a New Space movement project. Spresense is compact, power-saving, and high-performance. The main board is about 21mm x 50mm and weighs about 6g (at sea level on Earth). This means you get a good deal of computing power while adding only a tiny amount to the payload at take-off. All of this makes Spresense well suited for EdgeAI applications in space.
Understanding AeroSpace Requirements:Different orbits expose satellites to various levels and types of radiation. The Low Earth Orbit (LEO) dose level is about 2 krad/yr. This is relatively low compared to Medium Earth Orbit (MEO) (~100 krad/yr at 2,000 km-35,000 km) and GEO (~50 krad/yr above 35,000 km). These values assume some minimal amount of shielding. Some regions of space, such as LEO, have a high electron flux, but the energy of these electrons is relatively low. This means the microcontroller can be protected from a large portion of these electrons with very minimal shielding.
Microprocessors can experience failures ranging from soft errors to total system failure due to radiation effects. A soft error, also known as a single-event upset (SEU), occurs when radiation causes enough of a charge disturbance to alter the state of a memory bit. These are called soft errors because they are repairable, and the system remains functional after an SEU. There are both multi-bit and single-bit upsets, the latter being more common. A single-event effect (SEE) happens when high-energy particles in the space environment strike the components of the microcontroller circuit. An SEE can cause a range of outcomes: no observable effect, a transient disruption of circuit operation, a change of logic state, or even permanent damage to the device or integrated circuit (IC). All of these effects have to be factored in before declaring a device fit for space.
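Sony's test campaign below doesn't cover mitigation strategies, but to give a feel for how software can tolerate a single flipped bit, here is a minimal conceptual sketch of triple modular redundancy (TMR), a classic SEU countermeasure: store three copies of a value and majority-vote on every read. This is purely illustrative and not part of any Spresense firmware.

```python
# Illustrative sketch of software triple modular redundancy (TMR),
# a common mitigation for single-event upsets (SEUs). Conceptual
# example only -- not part of the Spresense firmware.

class TMRValue:
    """Store three copies of a value and majority-vote on every read."""

    def __init__(self, value: int):
        self._copies = [value, value, value]

    def read(self) -> int:
        a, b, c = self._copies
        # Majority vote: if one copy was flipped by an SEU, the other
        # two still agree, so the corrupted copy is outvoted.
        voted = a if a == b or a == c else b
        self._copies = [voted, voted, voted]  # scrub the bad copy
        return voted

    def write(self, value: int):
        self._copies = [value, value, value]


counter = TMRValue(42)
counter._copies[1] ^= 0x08   # simulate a radiation-induced bit flip
assert counter.read() == 42  # the vote masks and repairs the upset
```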
The tests conducted on Spresense yielded the following results:
Spresense was designed for IoT use, so reliability is already part of that equation. But in space, there are other environmental challenges: cosmic and solar radiation, atomic oxygen, as well as vacuum and launch conditions. Starting in June 2020, Spresense went through a series of tests to show its suitability for the rigours of space, including vibration and shock testing and proton irradiation.
Vibration and shock testing verifies that Spresense can withstand a rocket launch. The Spresense main board was subjected to low- and high-frequency tests as well as shock tests at different frequencies. In the low-frequency transient test at 5-100 Hz, the board experienced forces of up to 20 G. The high-frequency tests covered frequencies from 10 Hz to 2 kHz at a power spectral density of 0.1-2.0 G²/Hz, with forces equivalent to between 82 and 336 G.
The three shock tests were measured, respectively, at: 500 Hz with typically 100 G SRS (Shock Response Spectrum), 500 Hz-2.4 kHz at 5.32 dB/oct., and lastly 2.4-4 kHz with a mindboggling 1,000 G SRS!
The thermal vacuum tests covered a temperature range of -20°C to 60°C, cycled at ±2°C per minute, at a pressure of 1.33 mPa. The Spresense board showed no visible damage or malfunction during or after the tests.
In the proton irradiation tests, the Spresense board was subjected to proton bombardment from a particle accelerator at energies ranging between 10 and 70 MeV, for durations of between 95 and 166 minutes, at fluxes of up to 1.742 × 10⁷ p/cm²/s. That is a lot of protons. The results showed no significant ill effects other than requiring power-on resets, but they do mean the board needs an aluminum shield to protect it from radiation in a sun-synchronous orbit 500 km above the Earth.
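Those flux and duration figures translate directly into a total fluence (flux × time); a quick back-of-the-envelope check using the maximum values quoted above:

```python
# Back-of-the-envelope total proton fluence from the figures above:
# fluence = flux x exposure time.
flux = 1.742e7          # protons / cm^2 / s (maximum quoted flux)
duration_s = 166 * 60   # longest quoted exposure, in seconds

fluence = flux * duration_s
print(f"Total fluence: {fluence:.3e} protons/cm^2")
# -> roughly 1.7e11 protons/cm^2 over the longest test run
```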
Spresense to the Moon:No joke, Spresense is also contributing to a JAXA mission in 2022, taking a transformable robot to the Moon. Sony Spresense will contribute to surveying and gathering information about the lunar surface by acquiring data with a transformable robot built from COTS equipment. Multiple factors led to Spresense's inclusion in the mission: it serves as a core component, and it integrates easily with the transformable robot. This brings us back to our earlier question: how far can embedded ML on embedded systems go? Further than we tend to anticipate. While on satellites an embedded neural processing system can classify images or predict time-series data points, with this mission extending to the Moon it can also provide a secondary analysis system over the lunar surface. The applications are far more diverse!
November 9, 2021: The Japan Aerospace Exploration Agency (JAXA) today launched its Epsilon-5 rocket carrying the Innovative Satellite Technology Demonstration No. 2 payload, which includes Sony's Spresense microcontroller board. The delayed launch took place on the morning of November 9 from the Uchinoura Space Center, Japan. You can see footage of the launch in the JAXA video below:
Automating Satellite Image Classification using EdgeImpulse:EdgeImpulse recently partnered with Sony to democratize embedded ML solutions and applications on Spresense. EdgeImpulse allows easy prototyping of machine learning models, then quantizing and deploying those models to perform real-time analysis on Spresense. With Sony's news that Spresense will make its way into space, satellite image classification, applications using Inertial Measurement Units (IMUs), and autonomous on-board classification systems are a few of the core applications that could accompany Spresense to space.
Cloud and Non-Cloud Classification:
Sony's Spresense can be used in multiple applications in space. One of them is filtering out cloud-covered images from captures before they are sent back to Earth stations, saving the bandwidth those images would cost. Data and network efficiency has been one of the most sought-after goals for years, and researchers have pursued it through many different applications. With low-power microcontrollers going to space, data management has become a crucial part of the problem: sending high-resolution images acquired on the Sony Spresense microcontroller back to Earth consumes significant bandwidth.
The above image clearly demonstrates the vast difference between cloud-occluded images and clear images in the amount of information they supply. Cloud-occluded images cost a large amount of bandwidth while supplying far less information, and they eventually end up unused or uninformative. A better way to handle them is to classify captures as cloud or non-cloud and only send the non-cloud class for further analysis, which yields much more information from the data transmitted.
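The actual on-device code is the C++ library that EdgeImpulse generates, but the filtering decision itself is simple. Here is a hypothetical Python sketch of that logic; `capture_image`, `classify`, and `transmit` are placeholder names for illustration, not a real SDK:

```python
# Hypothetical sketch of the on-board filtering decision. The real
# on-device code is the C++ library EdgeImpulse generates; the names
# below (capture_image, classify, transmit) are placeholders that
# illustrate the logic only.

CLOUD_THRESHOLD = 0.6  # assumed confidence cutoff; tune per model

def filter_and_send(camera, model, downlink) -> bool:
    """Capture a frame, classify it, and downlink only useful frames."""
    frame = camera.capture_image()
    scores = model.classify(frame)      # e.g. {"cloud": 0.92, "non-cloud": 0.08}
    if scores["cloud"] >= CLOUD_THRESHOLD:
        return False                    # mostly cloud: discard, save bandwidth
    downlink.transmit(frame)            # terrain visible: worth sending
    return True
```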
Datasets:The datasets used for this project are adopted from a Kaggle cloud classification challenge hosted by the Max Planck Institute for Meteorology. While direct cloud-occlusion imagery isn't published by NASA or ESA, this project uses NASA's and ESA's Earth observation and tracking information to analyze the movement and density of occlusion and the extent to which occlusion causes information loss. Based on subjective observation, it can reasonably be concluded that cloud-based occlusions, both cloud-point occlusion and shadow-hindered occlusion, do yield much less extracted information.
If you're a researcher working in this field, you might have read about prototypes that reconstruct areas obscured by clouds using contextualized autoencoder neural networks (a relative of GANs). But this opens more room for thought: are these algorithms efficient, practical, and computationally inexpensive enough to deploy on low-power satellites? Looking at current trends, the answer is no, and the approach won't become sustainable in the next couple of years unless we see a microprocessor breakthrough. Moreover, these reconstruction algorithms depend on the data the model was trained on and learn to reconstruct information that is already available; that leaves no room for the unobserved image information, which is exactly what Earth observation departments really require. Finally, the computational cost of capturing two subsequent images of the same region at different points in time is far less than what the algorithm needs for a single reconstruction, which brings us to our conclusion: low-power embedded machine learning is the simplest yet most effective solution to this evident problem.
The following dataset includes images of terrain occluded either by clouds or by unidentified objects during image capture; such images yield no information about the terrain under observation.
The highlighted images show dense cloud occlusion of the terrain and yield little or no terrain information. Such data samples are discarded after classification.
Model Training & EdgeImpulse:The whole process of data engineering, processing, and selecting the appropriate framework might sound time-consuming, so here's EdgeImpulse to simplify things, and the EdgeImpulse AutoML EON Tuner to select the most appropriate model!
The data acquisition part is pretty straightforward. The cloud and non-cloud image data is uploaded to the project as a balanced dataset, and the data is then split 80/20 into train and test sets by the EdgeImpulse Studio. A total of 810 images were used for training here.
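EdgeImpulse Studio performs the split automatically, but for intuition, an equivalent local 80/20 split is just a seeded shuffle and a cut; a minimal sketch:

```python
# Minimal sketch of the 80/20 train/test split the Studio performs.
# 'samples' stands in for the 810 labeled cloud / non-cloud images.
import random

def train_test_split(items, train_fraction=0.8, seed=42):
    shuffled = items[:]
    random.Random(seed).shuffle(shuffled)   # deterministic shuffle
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

samples = [f"img_{i:03d}.png" for i in range(810)]
train, test = train_test_split(samples)
print(len(train), len(test))  # 648 train / 162 test
```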
You can go ahead and clone the project hosted on EdgeImpulse and get started quickly: https://studio.edgeimpulse.com/public/50254/latest
Even in the embedded machine learning space, there are multiple frameworks and architectures whose performance varies with the dataset; and beyond frameworks, there are multiple hyperparameters and metrics, class-wise precision or F1 score for example, that might be your base parameters for selecting the right model. While it's always possible to experiment with different values and parameters and see what works for you, that would take an eternity. That's where AutoML comes in! The EdgeImpulse EON Tuner was the easiest way for me to pick the right model, understand the inference time of each candidate individually, and quickly select the best one for my needs. If you haven't already, give my deep-dive article on the EON Tuner a read!
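Conceptually, what a tuner automates is evaluating many candidate configurations and then picking the most accurate one that still fits your latency budget. A toy sketch of that selection step follows; the candidate figures are invented purely for illustration:

```python
# Toy sketch of the selection step an AutoML tuner performs: among
# candidates that fit the latency budget, pick the most accurate.
# The candidate figures below are invented examples.
candidates = [
    {"arch": "conv-32x32-gray", "accuracy": 0.94, "latency_ms": 120},
    {"arch": "conv-64x64-gray", "accuracy": 0.94, "latency_ms": 210},
    {"arch": "conv-96x96-rgb",  "accuracy": 0.95, "latency_ms": 900},
]

LATENCY_BUDGET_MS = 500  # assumed on-device budget

viable = [c for c in candidates if c["latency_ms"] <= LATENCY_BUDGET_MS]
best = max(viable, key=lambda c: c["accuracy"])
print(best["arch"])  # -> conv-32x32-gray (first of the tied 0.94 models)
```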
After running the EON Tuner for a while, these are the results obtained. The two models on the left are grayscale, while the one on the right is RGB. All three models yield 94% accuracy with nearly the same inferencing time, despite having different architectures, which shows how powerful the EON Tuner is at finding the best architecture and hyperparameters for the data. The one highlighted in red is used as the default model for the project. Selecting a model as primary updates all parameters, features, and, most importantly, the trained model, so everything is done :)
Taking a look at the parameters shows us -
The model we have selected takes a 32x32 input image. While this might seem too small to classify accurately, the precision and recall scores do very well, indicating that very high-resolution images are not required; 32x32 or 64x64 performs just as well. You might want to go for 96x96 RGB and drop the accuracy a bit if resolution is important, or just go with 96x96 grayscale at 94% accuracy! The experimentation is open to you in the public EdgeImpulse project!
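For intuition about the preprocessing, downscaling a capture to the model's 32x32 grayscale input takes a few lines with Pillow; a quick sketch (the file path is a placeholder, and the [0, 1] normalization is an assumption):

```python
# Quick sketch of downscaling a capture to the model's 32x32
# grayscale input. Requires Pillow and NumPy; "capture.png" is a
# placeholder path, and [0, 1] normalization is an assumption.
from PIL import Image
import numpy as np

img = Image.open("capture.png").convert("L")        # "L" = 8-bit grayscale
img = img.resize((32, 32))                          # match the model input
pixels = np.asarray(img, dtype=np.float32) / 255.0  # normalize to [0, 1]
print(pixels.shape)  # (32, 32)
```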
Here's the EdgeImpulse Feature Explorer plotting the data points for each class in the latent space; look at how cleanly the features are separated in a semi-spiral manner! Taking a closer look, the features lie in almost the same plane with very little deviation between the classes, showing similarity in features between cloud and non-cloud, but also an invisible boundary separating cloud from non-cloud. The samples towards the tail are a few ambiguities: non-cloud terrain images that look a bit hazy or cloudy, and cloud samples with lower cloud density that look similar to terrain. Training on these ambiguous samples improves the robustness of the model, which is then more likely to perform better on the test set.
The three-step process of training a machine learning model eventually comes down to the third step, where the model is trained and the accuracy metrics become available.
The non-cloud and cloud classes perform very well, with F1 scores of 0.92 and 0.9 respectively, a cumulative accuracy of 93.8%, and a loss of 0.17. A 2D convolutional neural network with filter counts scaling from 32 up to 128 is used. The Feature Explorer points out the false classifications in red and the correct ones in green. The false classifications, as pointed out, are generally due to increased similarity between cloud and non-cloud in those samples.
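The exact architecture lives in the EdgeImpulse project linked above, but a comparable 2D CNN with filter counts scaling from 32 up to 128 might look roughly like this in Keras; every detail beyond the filter progression and the 32x32 grayscale input is an assumption:

```python
# Rough Keras sketch of a comparable 2D CNN with filters scaling
# 32 -> 64 -> 128, as described above. Layer details beyond the
# filter counts are assumptions; the real model is in the
# EdgeImpulse project.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(32, 32, 1)),                 # 32x32 grayscale input
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.25),
    layers.Dense(2, activation="softmax"),           # cloud vs non-cloud
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```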
Here are specific examples of cloud samples falsely predicted as non-cloud. These samples clearly exhibit features more similar to non-cloud, so they're misclassified. The bottom two have barely visible clouds, and while they could be considered terrain, to be safe they are labeled as cloud samples.
Finally, the model is tested on 199 samples and an accuracy of 93.47% is obtained, which is outstanding for 32x32 images. The Feature Explorer shows a clear distribution of both classes in the latent space, with the false samples at the border of each cluster, which sums up our earlier observation. The model performs very well on both train and test samples and is ready to be deployed!
The model deployment offers a range of options, but the one we are using right now is Sony's Spresense board with the camera extension! After selecting the Sony Spresense board, enable the EON Compiler optimization, which will reduce RAM and ROM usage significantly.
Finally, the firmware is built and flashed to the Spresense board using the pre-compiled binary! The model is ready to launch and can be deployed on a Spresense in a remote satellite!
Future Implementations?There are multiple applications for aerospace imaging and embedded machine learning in space; this is just one of them. I have also designed a model to classify terrains based on terrain type or artifact. While such methods are already in use, the project proposes a way to classify terrains at lower resolutions, with lower power and inference time, and with higher accuracy using EdgeImpulse.
The project can be viewed here: https://studio.edgeimpulse.com/public/50260/latest/
The cloud classification project can be viewed here: https://studio.edgeimpulse.com/public/50254/latest
Code: https://github.com/dhruvsheth-ai/Satellite-Image-Classification
Thanks for reading the article! All image data can be found on the EdgeImpulse dashboard :)