
A New Approach to "Super-TinyML" Packs Bigger Machine Learning Models Into Printed Electronics

"Sequential super-tinyML multi-layer perceptron circuits" an order of magnitude better than current approaches bring tinyML to a new height.

Researchers from the Karlsruhe Institute of Technology and the University of Patras have come up with an approach to bring the benefits of tiny machine learning (tinyML) to compact printed electronics — delivering what they call "sequential super-tinyML multi-layer perceptron circuits" for multi-sensory applications.

"Super-tinyML aims to optimize machine learning models for deployment on ultra-low-power application domains such as wearable technologies and implants," the team explains. "Such domains also require conformality, flexibility, and non-toxicity which traditional silicon-based systems cannot fulfill. Printed Electronics (PE) offers not only these characteristics, but also cost-effective and on-demand fabrication. However, Neural Networks (NN) with hundreds of features — often necessary for target applications — have not been feasible in PE because of its restrictions such as limited device count due to its large feature sizes."

The team's solution: a super-tinyML architecture targeting application-specific neural networks, dubbed the sequential super-tinyML architecture. "The sequential super-tinyML architecture is composed of controller logic, the hidden layer, the output layer, and the argmax," the researchers write. "The hidden and output layers consist of the neurons of the MLP [Multi-Layer Perceptron], where some neurons are multi-cycle and some are single-cycle."
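To make the dataflow in that quote concrete, here is a minimal Python sketch of an MLP forward pass feeding an argmax stage. It is purely illustrative: the layer sizes, function names, and ReLU activation are assumptions for the example, not the authors' hardware implementation.

```python
import numpy as np

def neuron(x, w, b):
    # Weighted sum plus bias, followed by ReLU. The activation function
    # is an assumption for illustration; the paper's neurons may differ.
    return max(0.0, float(np.dot(x, w) + b))

def sequential_mlp(x, hidden_w, hidden_b, out_w, out_b):
    # Hidden layer: one neuron per row of hidden_w.
    hidden = np.array([neuron(x, w, b) for w, b in zip(hidden_w, hidden_b)])
    # Output-layer scores feed the argmax stage, which picks the class.
    scores = [float(np.dot(hidden, w) + b) for w, b in zip(out_w, out_b)]
    return int(np.argmax(scores))

# Illustrative sizes: 8 input features, 4 hidden neurons, 2 classes.
rng = np.random.default_rng(0)
x = rng.random(8)
hw, hb = rng.random((4, 8)), rng.random(4)
ow, ob = rng.random((2, 4)), rng.random(2)
print(sequential_mlp(x, hw, hb, ow, ob))
```

In the hardware version described by the paper, the controller logic sequences these same stages over clock cycles rather than evaluating them as software functions.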

Using a multi-cycle approach combined with neuron approximation, the team claims, it's possible to shrink the resource demands of neural networks to the point where usable models can be implemented in printed electronics for wearables, implantables, and more. Experiments suggest the architecture supports nearly 36 times as many features and more than 65 times as many coefficients as rival approaches, while delivering a 12.7× improvement in area and an 8.3× improvement in power draw over the current state of the art.
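The intuition behind the multi-cycle savings can be sketched in a few lines of Python. In hardware, a single-cycle neuron needs one multiplier per input; a multi-cycle neuron instead reuses a small bank of multiply-accumulate units over successive cycles. The `lanes` parameter below is hypothetical, standing in for the number of physical MAC units; the paper's actual circuits and approximation scheme are not reproduced here.

```python
import numpy as np

def multicycle_neuron(x, w, lanes=4):
    # Time-multiplexed dot product: instead of one multiplier per input,
    # `lanes` multiply-accumulate units are reused across cycles, with
    # partial sums accumulating in a register.
    acc = 0.0
    for start in range(0, len(x), lanes):   # each iteration ~ one clock cycle
        acc += float(np.dot(x[start:start + lanes], w[start:start + lanes]))
    return acc

x = np.arange(10, dtype=float)  # 10 inputs -> ceil(10/4) = 3 cycles
w = np.ones(10)
assert multicycle_neuron(x, w) == float(np.dot(x, w))
```

The trade-off is latency for area: the same result takes more cycles but far fewer devices, which is what makes larger models feasible within printed electronics' device-count limits.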

The researchers' work is available in preprint on Cornell's arXiv server.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.