More Is Never Enough

NVIDIA's Blackwell platform offers powerful, efficient GPUs with advanced AI features to meet the escalating demands of cutting-edge models.

Nick Bild
10 months ago • Machine Learning & AI
A Blackwell architecture chip (📷: NVIDIA)

The present artificial intelligence (AI) boom is characterized by unprecedented advancement in the underlying technologies, fueled by the convergence of powerful algorithms, vast amounts of data, and increasingly sophisticated computing hardware. Across industries ranging from healthcare and finance to automotive and entertainment, organizations are leveraging AI to revolutionize processes, drive innovation, and unlock new insights. This surge in AI adoption has led to a burgeoning demand for computing power, particularly GPUs, to train and run the latest models.

As AI models grow in complexity and scale, the computational requirements for training and inference have skyrocketed. Cutting-edge AI architectures, such as large language models (LLMs) and deep neural networks (DNNs), often consist of many billions, or even trillions, of parameters, necessitating immense computational resources for training. Organizations are finding that traditional computing infrastructure is insufficient to support the scale and speed required for training these models effectively.

In response to the growing demand for computing power, both hardware manufacturers and cloud service providers are racing to develop and deploy solutions tailored to the needs of AI practitioners. One of the most notable names in the field these days is undoubtedly NVIDIA. As the company's GTC conference got underway this week in San Jose, it unveiled a new platform that is sure to draw a lot of attention from those working with AI in the months to come. Called the Blackwell platform, this new entrant consists of powerful GPUs and high-speed interconnects to link them together.

The new GPUs implementing the Blackwell architecture pack 208 billion transistors across two onboard dies. Communication between the dies happens at a blazing 10 terabytes per second. With the supporting NVLink technology, this communication can be extended to other GPUs, even GPUs connected to different servers. In total, up to 576 Blackwell GPUs can be linked together. This level of performance and connectivity has the potential to make massive trillion-parameter AI models commonplace in the near future.
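From a developer's point of view, NVLink domains like this show up the same way existing multi-GPU systems do: the framework's communication library moves gradients and activations between GPUs over the fastest link available. As a minimal sketch of what that looks like in practice, here is data-parallel training in PyTorch with the NCCL backend. The framework choice and the tiny stand-in model are assumptions for illustration; nothing here is Blackwell-specific.

```python
# Minimal data-parallel training sketch (PyTorch + NCCL). The tiny linear
# layer is a placeholder; the same pattern scales to far larger models.
# Launch with: torchrun --nproc_per_node=<num_gpus> this_script.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")  # NCCL uses NVLink when present
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda()  # stand-in for a real model
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(8, 4096, device="cuda")
        loss = model(x).square().mean()
        opt.zero_grad()
        loss.backward()  # gradients are all-reduced across GPUs here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

The appeal of a fast interconnect is visible in the backward pass: every training step ends with an all-reduce of the gradients across every GPU in the job, so the link between GPUs, not just the GPUs themselves, sets the ceiling on how well training scales.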

Another important aspect of these new GPUs is their energy efficiency. The huge models behind today's DNNs can draw enough power to make the Griswolds' Christmas light display look like child's play. But GPUs built on the Blackwell architecture are said to consume up to 25 times less energy than their predecessors for the same work. This is not only a crucial factor in cutting costs, but also offers environmental benefits.

Special attention was given to accelerating some of the most popular models of the day, like LLMs and Mixture-of-Experts models. The second-generation Transformer Engine contained within these chips utilizes custom Blackwell Tensor Core technology in conjunction with TensorRT-LLM and the NeMo Framework to dramatically speed up inference times. Support for new levels of precision also allows developers to pack even larger models into the available memory. Performance is said to be up to 30 times better than what was seen with previous generations like the H100 Tensor Core GPU.
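To see why lower precision matters so much, consider the weight storage alone. A rough back-of-the-envelope sketch follows; the formats and the trillion-parameter figure are illustrative assumptions, not NVIDIA specifications.

```python
# Rough weight-memory arithmetic: halving numeric precision halves the
# storage a model's parameters need. Figures are illustrative only.
def weight_memory_gb(num_params: float, bits_per_param: int) -> float:
    return num_params * bits_per_param / 8 / 1e9

num_params = 1e12  # a trillion-parameter model
for fmt, bits in [("FP16", 16), ("FP8", 8), ("FP4", 4)]:
    print(f"{fmt}: ~{weight_memory_gb(num_params, bits):,.0f} GB for weights alone")

# Prints roughly: FP16 ~2,000 GB, FP8 ~1,000 GB, FP4 ~500 GB. Activations,
# optimizer state, and KV caches all add more memory on top of this.
```

Dropping from 16-bit to 4-bit weights cuts the footprint of a trillion-parameter model from around two terabytes to around half a terabyte, which is the difference between a model that spans many GPUs and one that fits on far fewer of them.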

Focusing on production applications in industry, Blackwell incorporates a dedicated Reliability, Availability, and Serviceability Engine, powered by AI, to identify potential faults early and minimize downtime. It continuously monitors hardware and software health, providing diagnostic information for effective maintenance and remediation.
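The RAS Engine itself lives in the platform's hardware and firmware, but the kind of telemetry it acts on will be familiar to anyone who has kept GPUs healthy in production. As a loose illustration only, and assuming NVIDIA's standard NVML bindings (the pynvml package) rather than any Blackwell-specific interface, a simple watchdog might poll signals like these:

```python
# Illustration of GPU health polling via NVML (pynvml); this is ordinary
# monitoring, not the Blackwell RAS Engine's own interface.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

for _ in range(3):
    temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)
    try:
        # Uncorrected ECC errors are a classic early warning of failing memory
        ecc = pynvml.nvmlDeviceGetTotalEccErrors(
            handle,
            pynvml.NVML_MEMORY_ERROR_TYPE_UNCORRECTED,
            pynvml.NVML_VOLATILE_ECC,
        )
    except pynvml.NVMLError:
        ecc = "n/a"  # ECC reporting is unsupported on many consumer GPUs
    print(f"temp={temp}C util={util.gpu}% uncorrected_ecc={ecc}")
    time.sleep(5)

pynvml.nvmlShutdown()
```

The promise of the RAS Engine is essentially to automate this sort of vigilance at the platform level, flagging degrading components before they take a long-running training job down with them.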

This year’s GTC is off to a great start already with the Blackwell platform announcement. Keep an eye on Hackster News this week for other notable news from the conference.

Nick Bild
R&D, creativity, and building the next big thing you never knew you wanted are my specialties.