SynSense Launches Speck, Xylo Neuromorphic Development Kits for Edge AI Vision and Audio Work
Offering milliwatt and microwatt power budgets for vision and audio processing respectively, these new kits mimic the human brain.
Neuromorphic computing specialist SynSense has announced the launch of hardware development kits for ultra-low-power vision and audio processing, featuring on-device neural network processing and tiny power requirements, thanks to their inspiration: the human brain.
"We are building a user community around our cutting-edge technologies, not just targeting commercial applications but including research users as well," claims SynSense's Dylan Muir, PhD, of the company's development kit launches. "We are working with universities and research institutions to support teaching, scientific experiments, and algorithmic research. At present, more than 100 industry customers, universities and research institutes are using SynSense neuromorphic boards and software."
The two new development kits will, the company hopes, boost those numbers. The first is the Speck, an edge AI computer vision board, which features a system-on-chip dedicated to low-power smart vision processing. A 320,000-neuron processor combines with event-based image sensing technology to, the company claims, offer real-time vision processing "at milliwatt power consumption," with the ability to train and deploy convolutional neural networks (CNNs) up to nine layers deep on-chip.
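The announcement does not detail Speck's training toolchain, but for a rough sense of the model scale being claimed, the sketch below defines a nine-convolution-layer network in PyTorch of the sort the board says it can run on-chip. The framework choice, channel counts, downsampling schedule, and the assumption of two-channel event-camera frames are all illustrative assumptions, not taken from SynSense documentation.

```python
# A hypothetical nine-convolution-layer CNN, purely to illustrate the scale
# of model the Speck announcement describes. SynSense's actual toolchain and
# layer-counting rules are not specified here, so treat this as an assumption.
import torch
import torch.nn as nn

def make_small_cnn(num_classes: int = 10) -> nn.Sequential:
    """Build a compact CNN with nine convolutional layers."""
    layers = []
    # Assumed two-channel input (ON/OFF event polarities from an event sensor).
    channels = [2, 8, 16, 16, 32, 32, 64, 64, 64, 64]
    for i in range(9):
        layers += [
            nn.Conv2d(channels[i], channels[i + 1], kernel_size=3, padding=1, bias=False),
            nn.ReLU(),
        ]
        if i % 2 == 1:
            layers.append(nn.AvgPool2d(2))  # downsample every other block
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels[-1], num_classes)]
    return nn.Sequential(*layers)

model = make_small_cnn()
dummy = torch.randn(1, 2, 128, 128)  # one 128x128 two-channel frame
print(model(dummy).shape)            # torch.Size([1, 10])
```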
The Xylo-Audio board, meanwhile, focuses on audio processing within a "microwatt energy budget." The device goes beyond keyword detection, the company claims, with the ability to detect "almost any audio feature." An open-source Python library, Rockpool, is provided to speed development.
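Rockpool itself is not shown in the announcement, so as a library-agnostic illustration of the spiking-neuron model that this class of hardware and software is built around, the following minimal sketch simulates a single leaky integrate-and-fire (LIF) neuron in NumPy. The time constant, threshold, and input values are arbitrary assumptions, and this is not Rockpool's API.

```python
# Minimal leaky integrate-and-fire (LIF) neuron simulation in NumPy.
# Illustrative only: parameters are arbitrary and this does not use
# Rockpool's actual API.
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3, threshold=1.0, v_reset=0.0):
    """Simulate one LIF neuron; return the membrane trace and spike times."""
    decay = np.exp(-dt / tau)          # exponential leak per time step
    v = v_reset
    v_trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        v = v * decay + i_in           # leak, then integrate the input
        if v >= threshold:             # threshold crossing emits a spike
            spikes.append(t * dt)
            v = v_reset                # reset the membrane potential
        v_trace.append(v)
    return np.array(v_trace), spikes

# Drive the neuron with a noisy constant input for 1,000 steps (1 s at dt=1 ms).
rng = np.random.default_rng(0)
current = 0.06 + 0.02 * rng.standard_normal(1000)
trace, spike_times = simulate_lif(current)
print(f"{len(spike_times)} spikes in 1 s of simulated input")
```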
"SynSense releases new development tools to help researchers and engineers further explore neuromorphic intelligence," says company founder and chief executive Qiao Ning, PhD. "The development boards and open-source software are made to strengthen the basic environment for developers. Allowing them to quickly develop, train, test, and deploy applications using spiking neural networks. We expect more developers to join the neuromorphic community and make breakthroughs."
"Before SynSense existed, designing, building and deploying an application to neuromorphic SNN [Spiking Neural Network] hardware required a PhD level of expertise, and a PhDs amount of time β 3β4 years," claims Muir. "Now we have interns joining the company and deploying their first applications to SNN hardware after only 1-2 months. This is a huge leap forward for commercialisation, and a huge reward for the hard work of the company."
The two development kits are compatible with Ubuntu 18.04 and 20.04, with the Xylo-Audio board also supporting macOS 10.15, and require a host PC with a USB 3.0 port and at least 4GB of RAM. More information is available on the SynSense website, but pricing is available only on application.