Casting a MicroNet

MicroNets are a family of compute- and memory-optimized models that can be deployed with TensorFlow Lite for common TinyML tasks.

Nick Bild
4 years ago • Machine Learning & AI
Search space exhibits latency linear with ops (📷: C. Banbury et al.)

New applications for machine learning open up as computational workloads and energy requirements shrink. Moving these workloads from the data center onto the devices themselves makes for lower-latency experiences and, perhaps counterintuitively, more accurate results. When sensor data is transferred over a network, much of it is typically discarded due to bandwidth limitations; on-device processing sidesteps those limits and can make use of a more complete data set.

It should come as no surprise, then, that there is a great deal of effort at present to improve the state of the art in Tiny Machine Learning (TinyML). Deep neural network inference has been phenomenally successful in recent years; however, these deep models have very high computational and memory requirements.

One strategy to optimize for low resource usage and high model accuracy is to perform a neural architecture search (NAS) in which an automated process tests out many different types of architectures to determine which offers the best characteristics for a particular application. While a NAS can produce very good results, the search space of all possible architectures is fantastically large. A collaboration between Arm ML Research and Harvard University has discovered a key insight that can help to winnow down that search space.
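To make the idea of a NAS concrete, here is a minimal random-search sketch in Python with TensorFlow. The toy search space, the candidate count, and the parameter-count objective are all illustrative stand-ins, not the team's actual method, which is described in their paper.

```python
import random
import tensorflow as tf

# Hypothetical toy search space: depth, width, and kernel-size choices.
SEARCH_SPACE = {
    "num_blocks": [2, 3, 4],
    "filters": [8, 16, 32],
    "kernel_size": [3, 5],
}

def sample_architecture():
    """Draw one candidate uniformly from the search space."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def build_model(arch, input_shape=(32, 32, 1), num_classes=10):
    """Instantiate a small CNN from a sampled architecture description."""
    model = tf.keras.Sequential([tf.keras.layers.Input(shape=input_shape)])
    for _ in range(arch["num_blocks"]):
        model.add(tf.keras.layers.Conv2D(arch["filters"], arch["kernel_size"],
                                         padding="same", activation="relu"))
        model.add(tf.keras.layers.MaxPooling2D())
    model.add(tf.keras.layers.GlobalAveragePooling2D())
    model.add(tf.keras.layers.Dense(num_classes, activation="softmax"))
    return model

# Evaluate a handful of random candidates and keep the best scorer.
best = None
for _ in range(10):
    arch = sample_architecture()
    model = build_model(arch)
    # In practice: train briefly, then score on validation accuracy,
    # model size, and estimated latency. Here, a toy size-only objective.
    score = -model.count_params()
    if best is None or score > best[0]:
        best = (score, arch)
print("Best candidate:", best[1])
```

Even this crude version makes the core problem visible: each candidate must somehow be scored for latency, and timing every model on real hardware is far too slow to do at search scale.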

The researchers noticed that model latency varies linearly with model operation count under a uniform prior over models in the search space. This observation inspired them to develop MicroNets, a method for searching out neural architectures with low computational and memory requirements.
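In practical terms, this means a handful of on-device timing measurements are enough to calibrate a cheap latency proxy for the whole search space. The sketch below illustrates the idea with a simple linear fit; the measurement values are made up for demonstration.

```python
import numpy as np

# Hypothetical measurements: (operation count, measured latency in ms)
# gathered by timing a few models on the target microcontroller.
ops = np.array([1e6, 2e6, 4e6, 8e6, 16e6])
latency_ms = np.array([12.0, 23.5, 47.0, 93.0, 188.0])

# Fit latency = a * ops + b; the linear relationship the researchers
# observed is what lets this simple fit generalize across the space.
a, b = np.polyfit(ops, latency_ms, deg=1)

def predict_latency_ms(op_count):
    """Cheap latency proxy: no deployment or on-device timing needed."""
    return a * op_count + b

# During the search, candidates can now be filtered by predicted latency
# instead of being flashed to hardware and timed one by one.
print(f"Predicted latency for a 5M-op model: {predict_latency_ms(5e6):.1f} ms")
```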

In evaluation, MicroNets demonstrated state-of-the-art results on three common TinyML benchmarks: visual wake words, audio keyword spotting, and anomaly detection. The team has open sourced its models for all three tasks in the hope that microcontroller researchers will be able to use them as standard models for benchmarking.
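For anyone wanting to try a released model, running it under TensorFlow Lite follows the standard interpreter workflow. A minimal sketch, assuming a downloaded keyword-spotting model saved under the placeholder name kws_micronet.tflite:

```python
import numpy as np
import tensorflow as tf

# "kws_micronet.tflite" is a placeholder; substitute whichever of the
# released MicroNet models you have downloaded.
interpreter = tf.lite.Interpreter(model_path="kws_micronet.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input matching the model's expected shape and dtype; a real
# keyword-spotting pipeline would feed a preprocessed audio feature map.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
print("Predicted class:", int(np.argmax(scores)))
```

On an actual microcontroller, the equivalent steps would go through the TensorFlow Lite for Microcontrollers C++ runtime rather than the Python interpreter, but the model file is the same.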

Nick Bild
R&D, creativity, and building the next big thing you never knew you wanted are my specialties.