Nvidia’s biggest customers are becoming its biggest competitors, a strategic shift that could reshape the nearly $725 billion AI infrastructure market.

Nvidia Corp. shares fell more than 4 percent after two of its largest customers, Google and Amazon, signaled plans to sell their custom-developed AI accelerator chips directly to enterprise customers, opening a new competitive front against the market leader.
The move threatens to turn Nvidia’s most important partners into its direct rivals. “This could fundamentally disrupt Nvidia,” Seaport Research semiconductor analyst Jay Goldberg said. “I think this is a pretty significant risk.”
Google parent Alphabet announced it will sell its proprietary Tensor Processing Unit (TPU) chips to a select group of external customers for use in their own data centers this year. Morgan Stanley estimates that selling just 500,000 TPUs could add roughly $13 billion in revenue for Google by 2027. Amazon followed suit, with CEO Andy Jassy saying there’s a “good chance” the company will offer racks of its Trainium chips beyond its own cloud services within two years.
For Nvidia, which commands an estimated 90 percent of the AI accelerator market, the announcements represent a long-term challenge to its high-margin business. While the company’s leadership is not in immediate danger, the shift by hyperscale cloud providers to supply their own silicon marks a structural change in the chip landscape.
The strategic pivot by Google and Amazon centers on offering more cost-effective and specialized alternatives to Nvidia’s powerful but pricey GPUs, such as the H100, which has a bill of materials of over $3,000 and sells for far more. Both companies are framing their custom chips as better suited for specific AI workloads, particularly inference—the process of running trained models—which is becoming a larger portion of cloud computing costs as AI applications scale.
Google has already announced a new TPU designed specifically for inference. While its chips have historically been tailored for its internal services, offering them to external clients is a direct challenge to Nvidia’s hardware. Similarly, Amazon’s Trainium business has reportedly surpassed a $20 billion annualized revenue run-rate, according to Bloomberg, showing significant momentum. This expansion comes as tech giants are collectively expected to invest up to $725 billion in AI infrastructure by 2026.
However, not all analysts view this as a zero-sum game. Stacy Rasgon of Bernstein Research argued that the core issue for the AI industry is a lack of supply, not a lack of demand. With computational needs growing exponentially, he suggests that any company with viable chip production will likely sell all it can make. Nvidia itself holds $95.2 billion in supply commitments from major AI players including OpenAI, Anthropic, and Meta.
Beatriz Valle, a senior analyst at GlobalData, called the decision by Google and Amazon an “extraordinary move” that will diversify the chip sector. “This process will take years but it is irreversible now,” she said. The transition from being a chip consumer to a chip vendor is not simple. Analysts note that Google and Amazon will need to build out extensive support, education, and service ecosystems to compete with the deep moat Nvidia has built over the years.
“Selling products is very different than access to them,” said Alvin Nguyen, a senior analyst at Forrester, pointing out the robust software and support network that makes Nvidia an easy choice for enterprises. Furthermore, the custom chips from Google and Amazon are highly proprietary and designed for their own data center architectures, which could pose a challenge for mass adoption, according to Patrick Moorhead of Moor Insights & Strategy.
Still, the trend is clear. With Meta also pursuing its own custom MTIA silicon, the largest players in AI are aggressively vertically integrating. By developing their own chips, these companies can optimize for their specific software and workloads, control their own technology roadmap, and capture a larger slice of the burgeoning AI value chain.
This article is for informational purposes only and does not constitute investment advice.