
Researchers Build a RISC-V Chip That Calculates in Posits, Boosting Accuracy for ML Workloads

Designed as an alternative to floating-point numbers, posits may prove key to boosting machine learning performance.

A team of scientists at the Complutense University of Madrid has developed the first processor core to calculate in posits, a novel number representation designed as an alternative to floating-point arithmetic — and offering orders-of-magnitude improvements in accuracy.

"In this work, we present PERCIVAL, an application-level posit RISC-V core based on CVA6 that can execute all posit instructions, including the quire fused operations," the team explains of its progress in using posits for real-world computing. "This solves the obstacle encountered by previous works, which only included partial posit support or which had to emulate posits in software."

Posits, a universal number (unum) format introduced in 2017, are designed to offer a more accurate alternative to floating-point data types in computation. They're of particular interest in machine learning applications, where improved accuracy can have a dramatic effect on performance. As the team notes in PERCIVAL's paper, however, previous work demonstrating their potential has relied on software emulation.
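The encoding behind that accuracy claim is compact: a sign bit, a variable-length "regime" run, a small exponent field, and whatever bits are left over as fraction, which gives posits more precision near 1.0 and less at the extremes. As a rough, software-only illustration of the decoding rule, here is a minimal Python sketch of a generic posit<n, es> decoder; it is a teaching aid built on the generic posit definition, not code from the PERCIVAL project.

```python
def decode_posit(bits: int, n: int = 16, es: int = 1) -> float:
    """Decode an n-bit posit with es exponent bits into a Python float.

    Illustrative sketch only: real posit hardware (such as PERCIVAL's
    32-bit posits) implements this in dedicated logic.
    """
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):                  # 100...0 encodes NaR ("not a real")
        return float("nan")

    sign = -1.0 if bits >> (n - 1) else 1.0
    if sign < 0:                              # negative posits are two's-complemented
        bits = (-bits) & mask

    payload = format(bits, f"0{n}b")[1:]      # the n-1 bits after the sign
    first = payload[0]
    run = len(payload) - len(payload.lstrip(first))
    k = run - 1 if first == "1" else -run     # regime value

    rest = payload[run + 1:]                  # skip the regime terminator bit
    exp_bits = rest[:es].ljust(es, "0")       # truncated exponent bits read as 0
    e = int(exp_bits, 2) if es else 0
    frac_bits = rest[es:]
    f = int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0.0

    # value = sign * 2^(k * 2^es + e) * (1 + f)
    return sign * 2.0 ** (k * (1 << es) + e) * (1.0 + f)


print(decode_posit(0x4000))   # 1.0
print(decode_posit(0x4800))   # 1.5
print(decode_posit(0xC000))   # -1.0
```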

Built atop the free and open source RISC-V architecture, using the CVA6 core formerly known as Ariane as its base, PERCIVAL is the first open source processor core to offer support for all 32-bit posit instructions. Coupled with Xposit support in the LLVM compiler, it makes it possible to execute operations on posit numbers in hardware with no emulation, including the use of the "quire," a fixed-point two's-complement register used for fused operations.
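The quire is what makes those fused operations worthwhile: products are accumulated into one wide fixed-point register and rounded back to a posit only once, at the end, rather than after every addition. A hardware quire isn't something plain Python can offer, so the sketch below uses exact rational arithmetic as a stand-in to show the effect of deferring rounding; it is an analogy for the idea, not the PERCIVAL implementation.

```python
from fractions import Fraction

# A dot product whose small terms vanish under naive float accumulation.
a = [1.0] + [1e-16] * 10_000
b = [1.0] * len(a)

# Naive accumulation: each product is rounded into the running sum.
naive = 0.0
for x, y in zip(a, b):
    naive += x * y

# Quire-style accumulation: products land in a wide exact accumulator
# (Fraction stands in for the fixed-point two's-complement quire) and
# the result is rounded only once at the end.
acc = Fraction(0)
for x, y in zip(a, b):
    acc += Fraction(x) * Fraction(y)
fused = float(acc)

print(naive)   # 1.0              -- every 1e-16 contribution is lost
print(fused)   # ~1.000000000001  -- deferred rounding keeps them
```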

The PERCIVAL processor includes a Posit Arithmetic Unit (PAU) alongside the usual integer and floating-point units, which the team admits comes at a considerable cost in additional hardware area. The results, though, suggest the trade-off could be worth it for workloads including machine learning: "In general matrix multiplications, the accuracy error is reduced up to four orders of magnitude," the researchers found.

"Furthermore, performance comparisons show that these accuracy improvements do not hinder their execution, as posits run as fast as single-precision floats and exhibit better timing than double-precision floats, thus potentially providing an alternative representation."

The team's work has been published under open-access terms in the journal IEEE Transactions on Emerging Topics in Computing, after a preprint was made available on Cornell's arXiv server in November last year.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.