
While OpenAI races to build AI data centers, Nadella reminds us that Microsoft already has them


Microsoft CEO Satya Nadella on Thursday tweeted a video of his company’s first deployed massive AI system — or AI “factory” as Nvidia likes to call them. He promised this is the “first of many” such Nvidia AI factories that will be deployed across Microsoft Azure’s global data centers to run OpenAI workloads.

Each system is a cluster of more than 4,600 Nvidia GB300 rack computers sporting the much-in-demand Blackwell Ultra GPU chip, connected via Nvidia's super-fast InfiniBand networking tech. (Besides AI chips, Nvidia CEO Jensen Huang also had the foresight to corner the market on InfiniBand when his company acquired Mellanox for $6.9 billion in 2019.)

Microsoft promises that it will be deploying “hundreds of thousands of Blackwell Ultra GPUs” as it rolls out these systems globally. While the size of these systems is eye-popping (and the company shared plenty more technical details for hardware enthusiasts to peruse), the timing of this announcement is also noteworthy.

It comes just after OpenAI, its partner and well-documented frenemy, inked two high-profile data center deals with Nvidia and AMD. In 2025, OpenAI has racked up, by some estimates, $1 trillion in commitments to build its own data centers. And CEO Sam Altman said this week that more were coming.

Microsoft clearly wants the world to know that it already has the data centers (more than 300 across 34 countries) and that it is "uniquely positioned" to "meet the demands of frontier AI today," the company said. These monster AI systems are also capable of running the next generation of models with "hundreds of trillions of parameters," it said.

We expect to hear more about how Microsoft is ramping up to serve AI workloads later this month. Microsoft CTO Kevin Scott will be speaking at TechCrunch Disrupt, which will be held October 27 to October 29 in San Francisco.
