During the annual GTC event in San Jose, California, Nvidia CEO Jensen Huang introduced the upcoming Blackwell Ultra AI chip as well as the GB300 superchip, which integrates two Blackwell Ultras along with the Grace central processing unit (CPU). These chips are tailored to enhance AI systems for a broad customer base that includes major players like Amazon, Google, Microsoft, and Meta, as well as research facilities globally.
Nvidia says the Blackwell Ultra delivers 1.5 times the performance of the original Blackwell chip and represents a 50-fold increase in data center revenue opportunity compared to the Hopper generation, thanks to its enhanced AI capabilities. The new chip is designed specifically for AI reasoning, a form of AI processing that works through problems step by step in a way that mimics human deliberation and decision-making. Notable AI reasoning models include DeepSeek's R1, OpenAI's o1, and Google's Gemini 2.0 Flash Thinking.
Despite questions raised by DeepSeek's cost-efficient approach to AI model development, Nvidia maintains that powerful GPUs such as the Blackwell Ultra remain essential for delivering fast response times to user queries. The Blackwell Ultra can be deployed in Nvidia's NVL72 rack-scale server, which links 72 Blackwell Ultra GPUs into a single system for greater efficiency and manageability.
Nvidia also unveiled plans to incorporate the GB300 into its DGX SuperPod, an AI supercomputer that combines multiple NVL72 racks into a single system. Each SuperPod will be configured with 288 Grace CPUs, 576 Blackwell Ultra GPUs, and 300TB of memory.
Now in full production, the Blackwell chip is Nvidia's fastest-ramping product to date. It was a major contributor to Nvidia's most recent quarterly revenue of $39.3 billion, accounting for $11 billion of that total.