The limitations of IEEE floating point are now well documented, and the leaders in AI have all ditched it in favor of other number systems. But even though much better number systems exist, particularly for Deep Learning, the lack of high-performance implementations forces applications to waste energy on sub-optimal solutions. eFPGAs have the potential to open up this area of innovation, as they provide a mechanism to create custom computational engines tailored to the application.
Our goal with the Posit-accelerated Ultra96 project is to build a platform that lets edge intelligence applications take advantage of more efficient number systems and create new solutions that would otherwise not be possible.
Posits are a tapered-precision floating-point format that represents the real numbers more efficiently than IEEE floating point. From a silicon-efficiency perspective, posits and floats are roughly equivalent; the performance benefit comes from the numerical efficiency that posits enable. A posit delivers roughly 30% more precision than a float of the same size, and posit arithmetic improves on that further, to roughly a factor of 2, by introducing the quire: a super-accumulator that implements fused dot products without intermediate rounding.
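To illustrate why deferred rounding matters, here is a minimal C++ sketch contrasting a naive float dot product, which rounds after every multiply-add, with a fused dot product that accumulates all products before rounding once at the end. The `double` accumulator is only a stand-in for the quire; a real posit implementation uses a fixed-point super-accumulator wide enough to hold every product exactly.

```cpp
#include <cstdio>
#include <vector>

// Naive dot product: the accumulator is rounded to float after every step,
// so rounding error can grow with the length of the vectors.
float naive_dot(const std::vector<float>& a, const std::vector<float>& b) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < a.size(); ++i) {
        sum += a[i] * b[i];   // rounds the product, then rounds the sum
    }
    return sum;
}

// Fused dot product: products are accumulated without intermediate rounding
// and the result is rounded back exactly once. A real quire is a wide
// fixed-point register; double is used here only as a stand-in.
float fused_dot(const std::vector<float>& a, const std::vector<float>& b) {
    double acc = 0.0;         // stand-in for the quire super-accumulator
    for (std::size_t i = 0; i < a.size(); ++i) {
        acc += static_cast<double>(a[i]) * static_cast<double>(b[i]);
    }
    return static_cast<float>(acc);  // single rounding at the end
}

int main() {
    // A cancellation-prone example: the large terms cancel exactly,
    // leaving only the small contribution.
    std::vector<float> a = {1.0e8f, 1.0f, -1.0e8f};
    std::vector<float> b = {1.0f,   3.5f,  1.0f};
    std::printf("naive: %.6f\n", naive_dot(a, b));  // 0.000000: the 3.5 is lost
    std::printf("fused: %.6f\n", fused_dot(a, b));  // 3.500000: recovered
    return 0;
}
```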
For this first phase, the goal is to create a posit-based tensor processor whose first instruction is the fused dot product, and to integrate it into the tiny-dnn C++ library to enable an end-to-end solution for DNN algorithmic research and development. We are working with the tiny-dnn community to deploy this configuration in computer vision and autonomous trucking applications as a building block for smart video cameras.
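As a rough sketch of how such an instruction could surface in software, the snippet below shows a hypothetical `posit_fused_dot` kernel of the kind a fully connected layer would call in its inner loop. The function name, the layer routine, and the software fallback are all illustrative assumptions, not the project's actual API or tiny-dnn internals; in the real design the call would dispatch to the posit tensor processor in the eFPGA fabric.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Hypothetical software view of the tensor processor's first instruction: a
// fused dot product over a weight row and an activation vector. The fallback
// below emulates the deferred-rounding behavior with a double accumulator.
float posit_fused_dot(const float* w, const float* x, std::size_t n) {
    double acc = 0.0;  // placeholder for the hardware quire
    for (std::size_t i = 0; i < n; ++i) {
        acc += static_cast<double>(w[i]) * static_cast<double>(x[i]);
    }
    return static_cast<float>(acc);
}

// Sketch of a fully connected forward pass: one fused dot product per output
// neuron, which is exactly the operation the accelerator would offload.
void fully_connected_forward(const std::vector<float>& weights,  // rows x cols
                             const std::vector<float>& input,    // cols
                             std::vector<float>& output,         // rows
                             std::size_t rows, std::size_t cols) {
    for (std::size_t r = 0; r < rows; ++r) {
        output[r] = posit_fused_dot(&weights[r * cols], input.data(), cols);
    }
}

int main() {
    const std::size_t rows = 2, cols = 3;
    std::vector<float> W = {0.5f, -1.0f,  2.0f,
                            1.5f,  0.0f, -0.5f};
    std::vector<float> x = {1.0f, 2.0f, 3.0f};
    std::vector<float> y(rows);
    fully_connected_forward(W, x, y, rows, cols);
    std::printf("y = [%f, %f]\n", y[0], y[1]);  // expect [4.5, 0.0]
    return 0;
}
```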