Tsetlin the Bar Higher

ETHEREAL cuts the size of a neural network alternative called the Tsetlin Machine by nearly 90% while maintaining performance.

Nick Bild
3 months ago · Machine Learning & AI

Neural networks are by far the most promising artificial intelligence (AI) algorithms developed to date. But that does not necessarily mean they are the right tool to take us where we want to go. As researchers strive toward goals like artificial general intelligence and more compact, energy-efficient algorithms that can run on tiny hardware platforms, the question of which software architecture is most appropriate for each goal must be revisited. Maybe today’s big, clunky neural networks will not have a part in the future of AI.

A lesser-known learning algorithm called the Tsetlin Machine has been garnering attention lately because it requires far fewer computational resources than neural networks, yet has been shown in some cases to perform comparably. The reduction in computational complexity comes from relying on relatively simple logic operations rather than huge numbers of multiply-accumulate operations. Yet even with a Tsetlin Machine, there is plenty of room for further optimization, says a trio of researchers from Newcastle University.

Their work introduces an optimized version of the Tsetlin Machine called ETHEREAL, which significantly reduces the model’s size while maintaining strong classification accuracy. ETHEREAL, which stands for Energy-efficienT, High-throughput, and accurate infErence through the practical implementation of a compREssed tsetLin mAchine, addresses a key inefficiency in standard Tsetlin Machines — the inclusion of literals with weak correlation to a target class.

Unlike deep neural networks, which rely on arithmetic-heavy computations, a Tsetlin Machine learns by forming propositional logic patterns using Tsetlin Automata. These patterns are represented by literals that make up clauses, each contributing to classification decisions. However, in conventional implementations, some literals are redundantly included in both positive and negative clauses, effectively canceling out their impact and leading to unnecessary computational overhead. ETHEREAL introduces an exclusion-based training approach to eliminate these redundant literals, making the model more efficient.
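To make the contrast with arithmetic-heavy inference concrete, here is a minimal Python sketch of Tsetlin Machine-style classification over binarized inputs. This is an illustration of the logic-only computation described above, not the Newcastle team's implementation, and all names and the tiny example clauses are hypothetical.

```python
# A minimal sketch of Tsetlin Machine-style inference, assuming binarized
# inputs. Illustrative only -- not the paper's code; names are hypothetical.

def literals(x):
    # The literal vector: every feature bit plus its negation.
    return x + [1 - bit for bit in x]

def clause_output(clause, lits):
    # A clause is a pure logical AND over its included literals.
    return int(all(lits[i] for i in clause))

def classify(x, positive_clauses, negative_clauses):
    # Positive clauses vote for the class, negative clauses vote against it.
    lits = literals(x)
    score = sum(clause_output(c, lits) for c in positive_clauses) \
          - sum(clause_output(c, lits) for c in negative_clauses)
    return score >= 0

# Three binary features give six literals: indices 0-2 are the bits,
# indices 3-5 are their negations.
x = [1, 0, 1]
positive_clauses = [[0, 4], [2]]   # "x0 AND NOT x1", "x2"
negative_clauses = [[1], [3, 5]]   # "x1", "NOT x0 AND NOT x2"
print(classify(x, positive_clauses, negative_clauses))  # True
```

Note that inference here is nothing but AND, NOT, and vote counting, which is why the approach maps so cheaply onto microcontroller-class hardware.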

The ETHEREAL approach refines the model in a two-step process: first, it iteratively identifies and removes literals that appear in both positive and negative clauses, reducing the model’s complexity. Then, the standard training process resumes, ensuring that important literals remain and classification accuracy is maintained. This method enables ETHEREAL to achieve up to an 87.54% reduction in model size with minimal accuracy loss (at most 3.38% in the team’s experiments). In some cases, accuracy even improves due to the removal of noisy or irrelevant features.
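The redundancy ETHEREAL targets can be illustrated in a few lines, reusing the clause representation from the sketch above. This is my own hedged illustration of the exclusion concept, not the paper's training algorithm; it shows only the pruning of shared literals, while the interleaved retraining the paper describes is omitted.

```python
# A hedged sketch of pruning literals shared between positive and negative
# clauses. Illustrative only; function names are hypothetical.

def shared_literals(positive_clauses, negative_clauses):
    # A literal included by at least one positive AND one negative clause
    # casts opposing votes, so its net contribution tends to cancel out.
    pos = {lit for clause in positive_clauses for lit in clause}
    neg = {lit for clause in negative_clauses for lit in clause}
    return pos & neg

def prune(clauses, shared):
    # Drop the shared literals; clauses left empty are removed outright.
    pruned = [[lit for lit in clause if lit not in shared] for clause in clauses]
    return [clause for clause in pruned if clause]

shared = shared_literals([[0, 4], [2]], [[2], [3, 5]])
print(shared)                        # {2}: literal 2 appears on both sides
print(prune([[0, 4], [2]], shared))  # [[0, 4]]: the clause "x2" vanishes
```

In ETHEREAL, a pruning pass like this is followed by further standard training so the surviving literals can compensate, which is how the model shrinks without giving up much accuracy.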

The team tested ETHEREAL on eight real-world TinyML datasets, benchmarking it against traditional Tsetlin Machines, Random Forests (RF), and Binarized Neural Networks (BNNs). Their results showed that ETHEREAL significantly outperforms these alternatives in computational efficiency. On the STM32F746G-DISCO microcontroller development kit, ETHEREAL-based models demonstrated an order-of-magnitude reduction in inference time and energy consumption compared to BNNs, while requiring one-seventh the memory of RF models.

With the growing demand for deploying AI in low-power, resource-constrained environments, such as IoT devices, embedded systems, and edge AI hardware, ETHEREAL presents a compelling alternative to neural networks. Its ability to perform rapid, logic-based inference using minimal computational resources makes it particularly well-suited for applications where energy efficiency and real-time processing are critical requirements.
