
Smarter Semiconductors Needed for Smarter AI, Researchers Say — Delivering a Novel ECRAM Accelerator

Rather than just trying to make existing approaches more energy-efficient, this research team is looking into entirely new methods.

A team of researchers from the Pohang University of Science and Technology (POSTECH), Korea University, and Kyungpook National University believes that smarter artificial intelligence will require smarter semiconductor technology, and has developed an analog accelerator, based on electrochemical memory, to help deliver exactly that.

"By realizing large-scale arrays based on novel memory device technologies and developing analog-specific AI algorithms," claims co-corresponding author Seyoung Kim, a professor in POSTECH's department of materials science and engineering and department of semiconductor engineering, "we have identified the potential for AI computational performance and energy efficiency that far surpass current digital methods."

The current boom in artificial intelligence technology is placing increasing demand on global power grids. As the size and complexity of models grow, so too does the computational power required to train them, and breakthroughs in the energy efficiency of acceleration hardware are failing to keep pace.

Rather than building faster versions of today's technology, Kim's team is taking a different approach: an accelerator tailored to analog computation, built using electrochemical random-access memory (ECRAM), a device that varies its electrical conductivity through the movement and concentration of ions and draws considerably less power than traditional electronic RAM.
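To make the idea concrete, here is a minimal NumPy sketch (not the team's code, and not a model of their specific devices) of how a crossbar of conductance-programmable cells like ECRAM performs a matrix-vector multiply in the analog domain: the physics of Ohm's and Kirchhoff's laws carries out every multiply-accumulate simultaneously. The conductance and voltage ranges below are assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64                                    # matches the 64x64 array reported in the paper
G = rng.uniform(1e-6, 1e-5, size=(n, n))  # cell conductances in siemens (assumed range)
V = rng.uniform(-0.2, 0.2, size=n)        # row input voltages in volts (assumed range)

# One analog "read": row voltages drive currents through every cell at once,
# and each column wire sums its currents (Kirchhoff's current law), so the
# column current vector is I = G^T V -- a full matrix-vector product in a
# single step, where a digital processor would spend O(n^2) operations.
I = G.T @ V
print(I[:4])  # first few column currents, in amperes
```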

The team fabricated a 64×64 array of three-terminal ECRAM cells, considerably larger than the 10×10 arrays that had previously represented the state of the art. The array was then used to run Tiki-Taka v2, a training algorithm designed for analog hardware, demonstrating that the accelerator is functional and, the team claims, reducing the computational complexity from O(n²) to near O(1) for the same matrix size.
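The complexity claim comes down to parallelism in the array: a rank-1 (outer-product) weight update, which costs O(n²) multiply-adds on a digital processor, can be applied to all n² cells at once by overlapping voltage pulses on the rows and columns. The sketch below is a hedged illustration of that cost model in NumPy, not the authors' implementation; Tiki-Taka v2 additionally coordinates updates across paired arrays, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
W = rng.normal(scale=0.1, size=(n, n))  # weights encoded as cell conductances
x = rng.normal(size=n)                  # forward activations (row pulses)
d = rng.normal(size=n)                  # backpropagated errors (column pulses)
lr = 0.01                               # learning rate

# Digital baseline: the rank-1 update W += lr * d x^T costs n^2 multiply-adds.
W_digital = W + lr * np.outer(d, x)

# Analog cost model: overlapping row and column pulses nudge every cell's
# conductance in the same pulse cycle, so the identical update lands on all
# n^2 cells in time that does not grow with n.
W_analog = W + lr * d[:, None] * x[None, :]

assert np.allclose(W_digital, W_analog)  # same mathematics, different cost model
```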

The researchers' work has been published in the journal Science Advances under open-access terms.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.