Google just made a major move in the AI infrastructure arms race, elevating Amin Vahdat to chief technologist for AI infrastructure, a newly created position reporting directly to CEO Sundar Pichai, according to an internal memo first reported by Semafor. It’s a signal of just how critical this work has become as Google pours up to $93 billion into capital expenditures by the end of 2025 — a number that parent company Alphabet expects will be a whole lot bigger next year.
Vahdat isn’t new to the game. The computer scientist, who holds a PhD from UC Berkeley and started as a research intern at Xerox PARC back in the early ’90s, has been quietly building Google’s AI backbone for the past 15 years. Before joining Google in 2010 as an engineering fellow and VP, he was an associate professor at Duke University and later a professor and SAIC Chair at UC San Diego. His academic credentials are formidable, with some 395 published papers to his name, and his research has long focused on making computers work more efficiently at massive scale.
Vahdat already maintains a high profile at Google. Just eight months ago, at Google Cloud Next, he unveiled the company’s seventh-generation TPU, called Ironwood, in his role as VP and GM of ML, Systems, and Cloud AI. The specs he rattled off at the event were staggering, too: over 9,000 chips per pod delivering 42.5 exaflops of compute — more than 24 times the power of the world’s No. 1 supercomputer at the time, he said. “Demand for AI compute has increased by a factor of 100 million in just eight years,” he told the audience.
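That multiplier roughly checks out as simple division, assuming the reference was El Capitan, the No. 1 system on the Top500 list at the time, at about 1.74 exaflops — an assumption on this sketch’s part, since Vahdat didn’t name the machine, and the two figures are quoted at different numeric precisions:

```python
# Rough check on the "more than 24 times" comparison. The reference figure
# (El Capitan, then No. 1 on the Top500, roughly 1.74 exaflops) is an assumption,
# and the two numbers aren't measured at the same precision, so this is
# ballpark arithmetic only.

IRONWOOD_POD_EXAFLOPS = 42.5        # Google's figure for a full Ironwood pod
TOP_SUPERCOMPUTER_EXAFLOPS = 1.74   # assumed reference (El Capitan, Top500)

ratio = IRONWOOD_POD_EXAFLOPS / TOP_SUPERCOMPUTER_EXAFLOPS
print(f"Ironwood pod vs. top supercomputer: {ratio:.1f}x")
# -> about 24.4x, consistent with the "more than 24 times" claim
```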
Behind the scenes, as noted by Semafor, Vahdat has been orchestrating the unglamorous but essential work that keeps Google competitive: the custom TPU chips for AI training and inference that give Google an edge over rivals like OpenAI, as well as Jupiter, the super-fast internal network that lets all of its servers talk to one another and move massive amounts of data around. (In a blog post late last year, Vahdat said that Jupiter now scales to 13 petabits per second, explaining that’s enough bandwidth to theoretically support a video call for all 8 billion people on Earth simultaneously.) It’s the invisible plumbing connecting everything from YouTube and Search to Google’s massive AI training operations across hundreds of data center fabrics worldwide.
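The video-call claim is easy to sanity-check with back-of-the-envelope math; the per-call bitrate it implies is the sketch’s own reading, not a figure Google published:

```python
# Back-of-the-envelope check of the "video call for all 8 billion people" claim.
# The per-person bitrate this works out to is in the range of a modest
# one-on-one video call; the exact per-call requirement is our assumption.

JUPITER_BPS = 13e15        # 13 petabits per second, per Vahdat's blog post
WORLD_POPULATION = 8e9     # 8 billion people

per_person_bps = JUPITER_BPS / WORLD_POPULATION
print(f"Bandwidth per person: {per_person_bps / 1e6:.2f} Mbps")
# -> about 1.6 Mbps per person, roughly what a standard-definition video call needs
```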
Vahdat has also been deeply involved in the ongoing development of Borg, Google’s cluster management software, which acts as the brain coordinating the work happening across its data centers, deciding which servers should run which tasks, when, and for how long. And he has said he oversaw the development of Axion, Google’s first custom Arm-based general-purpose CPU for data centers, which the company unveiled last year and continues to build.
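Borg itself is closed source and vastly more sophisticated, but the core job described above, matching tasks to machines with enough free capacity, can be sketched in a few lines. This is a toy first-fit placement loop with invented machine and task names, not anything resembling Google’s actual scheduler:

```python
# Toy illustration of the scheduling decision described above: given machines
# with spare CPU and memory, find a home for each task. Borg's real algorithms
# (priorities, preemption, scoring, constraints) are far more involved; all
# names and numbers here are invented for illustration.

machines = {
    "machine-a": {"cpu": 16, "ram_gb": 64},
    "machine-b": {"cpu": 8,  "ram_gb": 32},
}

tasks = [
    {"name": "search-frontend",   "cpu": 4,  "ram_gb": 8},
    {"name": "ai-training-shard", "cpu": 12, "ram_gb": 48},
    {"name": "batch-transcode",   "cpu": 6,  "ram_gb": 16},
]

def place(task, machines):
    """First-fit: assign the task to the first machine with enough free capacity."""
    for name, free in machines.items():
        if free["cpu"] >= task["cpu"] and free["ram_gb"] >= task["ram_gb"]:
            free["cpu"] -= task["cpu"]
            free["ram_gb"] -= task["ram_gb"]
            return name
    return None  # a real scheduler would queue or preempt instead

for task in tasks:
    chosen = place(task, machines)
    print(f'{task["name"]} -> {chosen or "pending (no capacity)"}')
```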
In short, Vahdat is central to Google’s AI story.
Indeed, in a market where top AI talent commands astronomical compensation and is constantly being courted by rivals, Google’s decision to elevate Vahdat to the C-suite may also be about retention. When you’ve spent 15 years building someone into a linchpin of your AI strategy, you make sure they stay.