BrainChip Unveils Its Second-Generation Akida Platform, Now Boasting Vision Transformer Acceleration
BrainChip's Akida 2.0 gains some impressive new features, along with a three-tier launch strategy scaling up to 128 nodes and 50 TOPS.
BrainChip has announced the launch of its second-generation Akida processor family, designed for high-efficiency artificial intelligence at the edge, adding Temporal Event-Based Neural Net (TENN) support and optional vision transformer acceleration on top of the company's existing spiking neural network capabilities.
"Our customers wanted us to enable expanded predictive intelligence, target tracking, object detection, scene segmentation, and advanced vision capabilities. This new generation of Akida allows designers and developers to do things that were not possible before in a low-power edge device," claims BrainChip's chief executive officer Sean Hehir of the next-generation design. "By inferring and learning from raw sensor data, removing the need for digital signal pre-processing, we take a substantial step toward providing a cloudless Edge AI experience."
BrainChip began offering development kits for its first-generation Akida AKD1000 neural network processors in October 2021, building two kits around the user's choice of a Shuttle x86 PC or a Raspberry Pi. Ease of use took a leap earlier this year when the company announced the fruit of its partnership with Edge Impulse to bring Akida support to the latter's machine learning platform, offering what Edge Impulse co-founder and chief executive officer Zach Shelby described as a "powerful and easy-to-use solution for building and deploying machine learning models on the edge."
The promise of the Akida platform, which was developed based on the operation of the human brain, is high performance at far greater efficiency than its rivals, at least when the problem to be solved can be defined as a spiking neural network. It's this efficiency that has seen BrainChip primarily position its Akida hardware for use at the edge, accelerating on-device machine learning in power-sensitive applications.
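To illustrate the event-driven principle at work, here's a minimal leaky integrate-and-fire (LIF) neuron in Python, the textbook building block of spiking networks. It's a sketch of the concept only, not BrainChip's implementation, and the threshold and leak constants are arbitrary; the point is that output, and therefore downstream computation, happens only when a spike fires, so sparse inputs translate directly into reduced work.

```python
# Illustrative leaky integrate-and-fire (LIF) neuron: a sketch of the
# event-driven principle behind spiking neural networks, not BrainChip's
# actual Akida implementation. All constants here are arbitrary.

def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Yield 1 for each timestep the membrane potential crosses threshold."""
    potential = 0.0
    for current in inputs:
        potential = potential * leak + current  # integrate with leak
        if potential >= threshold:
            yield 1          # emit a spike (the only "work" sent downstream)
            potential = 0.0  # reset after spiking
        else:
            yield 0          # silence costs (almost) nothing

# Sparse input: mostly zeros, so the neuron spikes rarely.
spikes = list(lif_neuron([0.0, 0.6, 0.0, 0.7, 0.0, 0.0, 1.2, 0.0]))
print(spikes)  # [0, 0, 0, 1, 0, 0, 1, 0]
```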
The second-generation Akida platform brings with it high-efficiency eight-bit processing and support for Temporal Event-Based Neural Nets (TENNs), giving it the ability to consume raw real-time streaming data from sensors, including video sensors. This, the company claims, provides "radically simpler implementations" for tasks including video analytics, target tracking, audio classification, and even vital sign prediction in medical imaging analysis.
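BrainChip hasn't published the internals of its TENNs, but the streaming idea itself is easy to sketch: process each raw sample as it arrives, using only past samples, with no separate DSP front end. Below is a deliberately naive causal filter plus an eight-bit quantization step in Python; the kernel, window length, and scale factor are invented for illustration and bear no relation to Akida's actual architecture.

```python
import numpy as np

# Rough sketch of causal, sample-by-sample processing of a raw sensor
# stream, plus naive int8 quantization to echo the platform's eight-bit
# path. This is NOT BrainChip's TENN design; all values are made up.

KERNEL = np.array([0.1, 0.2, 0.3, 0.4], dtype=np.float32)  # causal taps

def stream_filter(samples, kernel=KERNEL):
    """Apply a causal FIR filter one sample at a time, as data arrives."""
    window = np.zeros(len(kernel), dtype=np.float32)
    for s in samples:
        window = np.roll(window, -1)
        window[-1] = s                       # newest sample at the end
        yield float(np.dot(window, kernel))  # output depends only on the past

def quantize_int8(x, scale=0.01):
    """Symmetric int8 quantization: clamp the scaled value to [-128, 127]."""
    return int(np.clip(round(x / scale), -128, 127))

raw = [0.0, 1.0, 0.5, -0.2, 0.8, 0.1]  # pretend live sensor feed
print([quantize_int8(v) for v in stream_filter(raw)])
```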
BrainChip's Akida refresh also brings with it support for accelerating vision transformers, used primarily for image classification, object detection, and semantic segmentation, as an optional component that can be discarded if not required. Combined with Akida's ability to process multiple layers at once, the company claims the new parts will allow for complete self-management and execution of even relatively complex networks like ResNet-50, without the host device's processor having to get involved at all.
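For a sense of what a vision transformer accelerator actually speeds up, the sketch below runs single-head self-attention over a set of image patches, the core operation of models in this family. The dimensions and weights are random stand-ins; this shows the computation in general, not how Akida implements it.

```python
import numpy as np

# Minimal single-head self-attention over image patches: the core
# operation a vision transformer accelerator offloads. Shapes and weights
# are random stand-ins, sketching the math rather than Akida's design.

rng = np.random.default_rng(0)
patches = rng.standard_normal((16, 64)).astype(np.float32)  # 16 patches, dim 64

Wq, Wk, Wv = (rng.standard_normal((64, 64)).astype(np.float32) for _ in range(3))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

Q, K, V = patches @ Wq, patches @ Wk, patches @ Wv
attn = softmax(Q @ K.T / np.sqrt(64.0))  # each patch attends to every other
out = attn @ V                           # (16, 64) contextualized patches
print(out.shape)
```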
The company has confirmed that it will be licensing the Akida IP in three product classes. Akida-E focuses on high energy efficiency, with a view to being embedded alongside, or as close as possible to, sensors, and offers up to 200 giga-operations per second (GOPS) across one to four nodes. Akida-S is aimed at integration into microcontroller units and systems-on-chip (SoCs), hitting up to 1 tera-operations per second (TOPS) across two to eight nodes. Akida-P targets the mid- to high-end and is the only tier to offer the optional vision transformer acceleration, scaling between eight and 128 nodes with a total performance of up to 50 TOPS.
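Dividing those headline figures through gives a rough sense of per-node throughput at each tier's top configuration; real performance will depend on clock speed, precision, and workload, so treat this as back-of-envelope arithmetic rather than official numbers.

```python
# Back-of-envelope per-node throughput implied by the announced figures.
# These are not official per-node specifications, just the headline
# numbers divided through at each tier's maximum node count.

tiers = {
    "Akida-E": (200, 4),       # up to 200 GOPS across up to 4 nodes
    "Akida-S": (1_000, 8),     # up to 1 TOPS across up to 8 nodes
    "Akida-P": (50_000, 128),  # up to 50 TOPS across up to 128 nodes
}

for name, (gops, nodes) in tiers.items():
    print(f"{name}: ~{gops / nodes:.0f} GOPS per node at the top configuration")
```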
While the part launches to unnamed "early adopters" today, BrainChip isn't quite ready to start selling it to the public, promising instead that second-generation Akida processors will be available in the third quarter of 2023 at as-yet unannounced pricing. More information is available on the BrainChip website.