InAccel Promises ML Performance Boosts with No Code Changes, Courtesy of FPGA Acceleration
Company's Accelerated Machine Learning Studio allows existing code to be accelerated tenfold or more on automatically managed FPGAs.
Acceleration startup InAccel claims that its field programmable gate array (FPGA)-powered machine learning platform can boost performance up to tenfold, without users having to make any changes to their code.
"Data scientists and ML engineers can now speed-up by more than 10x computationally intensive workloads and reduce the total cost of ownership with zero code changes," the company, founded in 2018, claims of its plaform. "It fully supports widely used frameworks like Keras, Scikit-learn, Jupyter Notebooks, and Spark."
The secret sauce: offloading the application's computationally heavy work to FPGA hardware, which can run the company's own IP cores or those of third parties, including specialized acceleration cores tailored to a particular workload.
"InAccel provides an FPGA resource manager," the company explains, "that allows the instant deployment, scaling and resource management of FPGAs making easier than ever the utilization of FPGAs for applications like machine learning and data processing applications. Users can deploy their applications from Python, Jupyter notebooks or even terminals instantly."
Those looking to use the company's platform can do so wholly remotely, with FPGAs spun up in the cloud on Amazon Web Services (AWS) EC2 F1 instances; alternatively, on-premises hardware can be installed. In both cases, the company claims that performance is boosted and latency reduced compared with running the same code outside its platform.
More information, and a live demo, is available on the company's website.