AI Model of Efficiency
This chip puts processing and memory in the same package, delivering twice the energy efficiency of existing chips when running AI algorithms.
In recent years, the field of artificial intelligence (AI) has seen a surge of innovation, with groundbreaking advances transforming many industries and significantly impacting daily life. The spread of deep learning techniques, reinforcement learning models, and natural language processing algorithms has allowed AI systems to perform complex tasks with ever-greater accuracy and efficiency. AI applications have become more common and impactful, powering personalized user experiences, enhancing healthcare diagnostics, and enabling autonomous vehicle operation.
However, this meteoric rise in AI capabilities has come at a significant cost. Cutting-edge algorithms and sophisticated models demand an immense amount of computational power, leading to unprecedented consumption of energy and financial resources. The reliance on traditional computing architectures, and their Achilles' heel, the von Neumann bottleneck, has become a critical limitation in the pursuit of efficient and scalable AI solutions. Because these architectures keep processing and memory in separate units, data must constantly be shuttled between the two, and this inefficient transfer has driven an unsustainable surge in energy consumption that hinders the expansion of AI capabilities.
As the demand for AI technologies continues to soar, the need for innovative hardware solutions has become increasingly pressing. There is a growing realization that a fundamental shift in hardware design is imperative to overcome the limitations imposed by conventional computing architectures. Not only would such innovations make cloud processing more affordable and energy-efficient, but they would also help usher in an era where cutting-edge algorithms can run on low-power wearable and edge computing devices. That shift will be necessary to reduce latency and protect user privacy in these applications.
A multi-institutional team led by researchers at the University of Stuttgart and Robert Bosch GmbH is working to eliminate these inefficiencies in running AI algorithms. They have developed a new type of chip that combines processing and memory in the same package, avoiding the frequent, slow memory lookups that conventional designs require. This cuts processing time and energy consumption alike, and the team demonstrated that the chip is roughly twice as energy-efficient as comparable chips presently available.
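To see why co-locating compute and memory pays off, consider a back-of-the-envelope tally of the memory traffic that weight fetches alone would generate in a conventional design. The Python sketch below is purely illustrative; the workload size and weight width are assumptions, not figures from the chip.

```python
# Back-of-the-envelope tally of weight traffic for one inference pass.
# In a conventional (von Neumann) design, every multiply-accumulate
# fetches its weight from a separate memory; in a compute-in-memory
# design, the weights stay inside the array and never move.
# The workload size and weight width here are illustrative assumptions.

n_macs = 1_000_000        # assumed MACs in one inference pass
bytes_per_weight = 1      # assumed 8-bit weights

conventional_traffic = n_macs * bytes_per_weight  # one weight fetch per MAC
in_memory_traffic = 0                             # weights never leave the array

print(f"weight traffic, conventional:      {conventional_traffic / 1e6:.1f} MB")
print(f"weight traffic, compute-in-memory: {in_memory_traffic} bytes")
```

A memory fetch typically costs far more energy than the arithmetic it feeds, which is why keeping weights stationary translates so directly into power savings.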
The chip is constructed of ferroelectric field-effect transistors (FeFETs), each 28 nanometers in length. These transistors can perform computations, much like traditional transistors, but can also store data and retain it even when the power supply is turned off. The researchers combined millions of these transistors to create each chip, which performs multi-bit multiply-and-accumulate (MAC) operations, the primary calculations used in AI algorithms.
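As a concrete picture of what a multi-bit MAC involves, the sketch below computes a small dot product directly, then reproduces it bit-plane by bit-plane, one common way in-memory arrays process multi-bit operands. The operand widths and the bit-serial scheme are assumptions for illustration, not confirmed details of this chip's design.

```python
import numpy as np

# A minimal sketch of the multiply-and-accumulate (MAC) operation that
# dominates neural network inference. Operand widths are assumed here
# for illustration only.

rng = np.random.default_rng(0)
x = rng.integers(0, 16, size=8)   # 4-bit unsigned activations (assumed)
w = rng.integers(-8, 8, size=8)   # 4-bit signed weights (assumed)

# Direct MAC: accumulate the products one at a time.
acc = 0
for xi, wi in zip(x, w):
    acc += int(xi) * int(wi)
assert acc == int(np.dot(x, w))

# Multi-bit operands can also be processed one bit-plane at a time,
# with each plane's partial sum scaled by its power of two. This
# bit-serial decomposition is one common compute-in-memory strategy,
# assumed here rather than taken from the article.
acc_bitwise = 0
for b in range(4):                    # 4 activation bits
    plane = (x >> b) & 1              # one bit-plane of the inputs
    acc_bitwise += (1 << b) * int(np.dot(plane, w))
assert acc_bitwise == acc
print("MAC result:", acc)
```

Every layer of a neural network reduces to huge batches of exactly these dot products, which is why accelerating the MAC accelerates the whole workload.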
To validate their approach, the team tested the chip in a number of different scenarios. It recognized handwriting with an average accuracy of 96.6% and classified images with 91.5% accuracy. While these are good results, other systems can match, or even beat, that level of accuracy. The notable finding was that the chip achieved these results at an efficiency of 885.4 trillion operations per second per watt, almost double the efficiency of similar chip designs presently available.
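To put that figure in perspective, 885.4 trillion operations per second per watt works out to roughly a femtojoule per operation. The quick arithmetic below makes the conversion explicit; the halved baseline is an assumed point of comparison rather than a measured competitor.

```python
# Convert the reported efficiency into energy per operation.
# TOPS/W is operations-per-second per watt, i.e. operations per joule.

tops_per_watt = 885.4
ops_per_joule = tops_per_watt * 1e12
joules_per_op = 1.0 / ops_per_joule

print(f"energy per operation: {joules_per_op * 1e15:.2f} fJ")  # about 1.13 fJ

# A chip at half this efficiency (an assumed baseline for illustration)
# would spend roughly twice the energy on the same workload.
assumed_baseline = tops_per_watt / 2
print(f"assumed baseline: {assumed_baseline:.1f} TOPS/W")
```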
Given the accuracy and efficiency demonstrated by this approach, it may one day power the devices running deep learning algorithms in drones and self-driving vehicles. The researchers believe it will be several years before that happens, however: the chip must not only prove reliable, but also meet regulatory requirements and industry standards.