Google is deepening its partnership with the AI startup Thinking Machines Lab through a new multi-billion-dollar cloud agreement, a move that escalates the costly battle with Amazon and Microsoft to supply the infrastructure for frontier artificial intelligence.
"By leveraging A4X Max and the AI Hypercomputer integrated stack, Google Cloud got us running at record speed with the reliability we demand," said Myle Ott, a founding researcher at Thinking Machines, in a statement.
The deal, valued in the single-digit billions of dollars, gives Thinking Machines access to Google Cloud's new A4X Max instances, which are built on Nvidia's GB300 chips. In early testing, Thinking Machines reported a 2x improvement in training and serving speed over prior-generation GPUs, aided by Google's Jupiter networking fabric. The startup will also adopt a suite of Google Cloud services, including Google Kubernetes Engine and the Spanner database, to support its reinforcement learning workloads.
This partnership underscores the escalating capital expenditures required to compete at the AI frontier. For Google, it's a critical move to lock in a high-growth client and showcase its AI infrastructure capabilities against rivals. For the broader market, it signals that the primary currency in the AI race is not just talent, but also access to massive-scale, high-performance computing.
Fierce Competition for AI Workloads
The agreement is the latest in a series of high-stakes deals by cloud providers to secure the business of promising AI startups. The computational demands of training and deploying large-scale AI models have created a massive new market for cloud infrastructure, and the top players are spending heavily to attract and retain key customers.
Just this week, Anthropic, another leading AI lab, signed a new agreement with Amazon to secure up to 5 gigawatts of capacity. That followed a separate deal Anthropic struck with Google and Broadcom for multiple gigawatts of Google's custom TPU capacity. The non-exclusive nature of these agreements highlights the multi-cloud strategy many AI developers are adopting, but the scale of the Google-Thinking Machines partnership suggests an unusually deep integration between the two companies.
Thinking Machines, founded in February 2025 by former OpenAI chief technology officer Mira Murati, has been a focal point of the AI industry's talent and capital churn. The company raised a $2 billion seed round at a $12 billion valuation shortly after its founding and launched its first product, Tinker, an API for fine-tuning large language models.
Talent Wars and Technical Muscle
While the cloud deal provides crucial infrastructure, Thinking Machines has also been at the center of the industry's "talent war." The startup has seen some of its founding members and key researchers depart for larger tech firms. Meta has reportedly hired at least seven members from the founding team, while OpenAI has also recruited from its ranks.
Despite these departures, Thinking Machines retains significant technical depth. The company's CTO, Soumith Chintala, is the creator of PyTorch, one of the most widely used open-source AI frameworks. The Google deal, with its access to Nvidia's top-tier hardware, is also a tool for retaining and attracting the elite researchers needed to build next-generation models.
The partnership provides a glimpse into the startup's technical approach, with Google noting its ability to support the reinforcement learning workloads that are central to Tinker's architecture. This training technique has been behind many recent AI breakthroughs and is notoriously computationally expensive, justifying the multi-billion-dollar scale of the cloud agreement.
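To see why reinforcement learning drives such heavy compute bills, consider that every policy update requires generating fresh rollouts under the current model before any gradient step can be taken. The toy sketch below (purely illustrative, not Thinking Machines' or Tinker's actual code) runs REINFORCE with a baseline on a three-armed bandit; at frontier scale, the "sample an action" line becomes full LLM generation across thousands of accelerators, repeated at every step.

```python
import math
import random

# Illustrative REINFORCE loop on a 3-armed bandit (hypothetical example).
# The structural point: each update needs new rollouts from the current
# policy, which is what makes RL training so expensive at model scale.

random.seed(0)
true_rewards = [0.2, 0.5, 0.9]   # hidden payoff of each arm
logits = [0.0, 0.0, 0.0]         # policy parameters
lr, baseline = 0.2, 0.0

def softmax(x):
    m = max(x)
    exps = [math.exp(v - m) for v in x]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs):
    r, cum = random.random(), 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

for step in range(3000):
    probs = softmax(logits)
    arm = sample(probs)                                # rollout: act under current policy
    reward = true_rewards[arm] + random.gauss(0, 0.1)  # environment feedback
    # REINFORCE: grad of log pi(arm) w.r.t. logits is one_hot(arm) - probs
    for i in range(3):
        grad = (1.0 if i == arm else 0.0) - probs[i]
        logits[i] += lr * (reward - baseline) * grad
    baseline += 0.1 * (reward - baseline)              # running-average baseline

best_arm = max(range(3), key=lambda i: logits[i])
print(best_arm)  # the policy should come to favor arm 2, the highest-payoff arm
```

Swap the bandit for an LLM emitting token sequences and the scalar reward for a learned or programmatic grader, and the same loop shape explains the appetite for GPU capacity described above.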
This article is for informational purposes only and does not constitute investment advice.