It’s Close Enough

Visual attention condensers bring deep learning visual perception models to TinyML applications.

Nick Bild
Yes, AttendNets can recognize cats (📷: A. Wong et al.)

Sometimes close is good enough. That is the idea behind AttendNets, a highly compact, low-precision deep neural network architecture designed for visual perception in TinyML applications.

Deep learning has provided a seemingly endless stream of breakthroughs in computer vision in recent years. In most deep learning applications, however, accuracy is valued far more highly than keeping model complexity to a minimum. As a result, the complexity of these models makes deploying them on low-power, highly resource-constrained devices a major challenge.

AttendNets adapt deep learning to resource-constrained devices in two ways: by introducing the concept of visual attention condensers, and by tailoring each model to the specific hardware it will run on. Building on the attention condenser, a self-attention mechanism that produces condensed embeddings, the team added optimizations for working with images. The resulting visual attention condensers reduce model complexity by better handling the high channel dimensionality of image data. An iterative, machine-driven generative synthesis approach is then used to produce the final architectural design of the network, striking a balance between image recognition accuracy and efficiency on a constrained edge device.
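
To make the idea a bit more concrete, below is a minimal PyTorch-style sketch of how such a block could be structured: the input feature map is condensed spatially, a small embedding produces self-attention values, and those values are expanded back to the original resolution and used to selectively modulate the input features. The channel counts, pooling factor, and gating choices here are illustrative assumptions, not the authors' exact AttendNets design.

```python
# Minimal sketch of a visual attention condenser block (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class VisualAttentionCondenser(nn.Module):
    def __init__(self, channels: int, mid_channels: int = 8, pool: int = 2):
        super().__init__()
        # Condensation: shrink spatial resolution so self-attention stays cheap.
        self.condense = nn.MaxPool2d(kernel_size=pool)
        # Embedding: a small convolutional stack produces condensed
        # self-attention values over the downsampled feature map.
        self.embed = nn.Sequential(
            nn.Conv2d(channels, mid_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, channels, kernel_size=3, padding=1),
        )
        # Learned scale controlling how strongly attention modulates the input.
        self.scale = nn.Parameter(torch.ones(1))

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        # Compute attention values on the condensed input.
        a = self.embed(self.condense(v))
        # Expansion: project attention back to the input's spatial size,
        # then squash to (0, 1) to act as a selective attention map.
        s = torch.sigmoid(F.interpolate(a, size=v.shape[-2:], mode="nearest"))
        # Selectively attend: modulate the input feature map channel-wise
        # and spatially with the expanded attention map.
        return v * (self.scale * s)


if __name__ == "__main__":
    x = torch.randn(1, 16, 32, 32)      # e.g., a 16-channel feature map
    block = VisualAttentionCondenser(16)
    print(block(x).shape)               # torch.Size([1, 16, 32, 32])
```

The key point of the design is that the expensive attention computation happens on a condensed version of the feature map, which is what keeps the parameter count and memory footprint small enough for TinyML hardware.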

AttendNets were compared to several existing TinyML model architectures, including MobileNet and AttoNet. Using the ImageNet50 dataset as a benchmark, the new architecture achieved 7.2% higher image classification accuracy than MobileNet, while impressively using 4.17 times fewer parameters, 16.7 times less memory, and 3 times fewer multiply-add operations. Compared with AttoNet, it achieved similar accuracy, again with greatly reduced resource requirements.

Inspired by the early successes of AttendNets, the researchers plan to explore the effectiveness of the new architecture for object detection, semantic segmentation, and instance segmentation. This may open the door to more TinyML applications, such as remote sensing, wearable assistive technology, and even autonomous vehicles.
