Nvidia debuts Blackwell Ultra AI processor for ‘Era of AI Thinking’
Nvidia (NVDA) CEO Jensen Huang revealed the next-generation Blackwell Ultra AI processor on Tuesday at the company’s annual GTC event in San Jose, Calif.
In addition to the Blackwell Ultra chip, Nvidia introduced the GB300 superchip, which pairs two Blackwell Ultras with the company’s Grace CPU. The chips are intended to power AI systems for a wide range of customers, including hyperscalers such as Amazon (AMZN), Google (GOOG, GOOGL), Microsoft (MSFT), and Meta (META), as well as global research institutes. According to Nvidia, the Blackwell Ultra delivers 1.5 times the performance of the original Blackwell and, thanks to its improved AI capabilities, represents a 50x increase in data center revenue opportunity over the Hopper processor.
Nvidia says the Blackwell Ultra is built for the “age of AI reasoning,” a form of AI processing that mimics how people think through problems and reach conclusions. DeepSeek’s R1 AI model propelled the approach into the mainstream; other reasoning models include OpenAI’s o1 and Google’s Gemini 2.0 Flash Thinking. DeepSeek first shocked Wall Street when it said it had built its AI models for a fraction of what Silicon Valley heavyweights spend, while using less powerful hardware. Nvidia has pushed back on that notion, arguing that reasoning models still benefit from powerful GPUs, which let them respond to user queries more quickly.
Like the Blackwell, the Blackwell Ultra will be offered in Nvidia’s massive NVL72 rack server, which links 72 Blackwell Ultra GPUs with 36 Grace CPUs and, according to the company, improves efficiency and serviceability. Nvidia says the GB300 NVL72 can process 1,000 tokens per second running DeepSeek’s R1 AI model, up from 100 tokens per second on Nvidia’s Hopper processor. That means the GB300 NVL72 can answer a user query in about 10 seconds, versus roughly 1.5 minutes for Hopper, making Blackwell Ultra a significant upgrade over earlier Hopper-based systems.
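Those throughput and latency figures line up if a typical reasoning-model response runs to roughly 10,000 tokens; the sketch below uses that assumed response length (it is not a number Nvidia has published) to reproduce the quoted times.

```python
# Back-of-the-envelope check of the quoted latency figures.
# Assumption (not from Nvidia): a reasoning-style answer of ~10,000 tokens,
# which is the length implied by "1,000 tokens/s -> about 10 s".

TOKENS_PER_RESPONSE = 10_000  # hypothetical response length

throughput_tokens_per_s = {
    "GB300 NVL72 (Blackwell Ultra)": 1_000,  # per the article
    "Hopper-based system": 100,              # per the article
}

for system, tps in throughput_tokens_per_s.items():
    seconds = TOKENS_PER_RESPONSE / tps
    print(f"{system}: {seconds:.0f} s (~{seconds / 60:.1f} min)")
```

At 100 tokens per second, the same 10,000-token answer takes about 100 seconds, which matches the roughly 1.5 minutes attributed to Hopper.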
Nvidia also says the GB300 will be available in the company’s AI supercomputer, the DGX SuperPod, which combines multiple NVL72 servers into a single AI powerhouse. Each SuperPod packs 288 Grace CPUs, 576 Blackwell Ultra GPUs, and 300TB of memory.

Nvidia’s original Blackwell chip is now in full production, and the company says it has had the fastest ramp-up in its history. In its most recent quarter, Nvidia said Blackwell contributed $11 billion of its $39.3 billion in total revenue. Despite that strong quarterly performance, Nvidia’s stock has been weighed down by concerns that hyperscalers are overspending on AI without seeing adequate returns. President Trump’s threat to impose a 25% tariff on chips produced outside the US and the possibility of further export restrictions haven’t helped.