Google is adding a third custom chip designer to its supply chain, aiming to slash the multi-billion dollar cost of AI inference by reducing its dependence on Broadcom.

A three-pronged chip strategy
Google is in talks with Marvell Technology to develop two new custom AI chips, adding a third design partner to a supply chain that includes Broadcom for high-performance silicon and MediaTek for cost-optimised variants. The move signals a strategic shift to diversify suppliers and reduce long-term dependence on any single partner for its Tensor Processing Units (TPUs).
The discussions, which have not yet resulted in a signed contract, focus on two new pieces of silicon: a memory processing unit (MPU) designed to work alongside existing TPUs and a new TPU built specifically for AI inference. According to people familiar with the plans, Google aims to finalise the MPU design as early as next year, with a potential production target of nearly 2 million units. That compares with a Morgan Stanley estimate that Google will produce roughly 6 million TPUs in 2027.
The talks come just weeks after Broadcom, which commands over 70% of the custom AI accelerator market, locked in a new agreement to supply Google with TPUs and networking components through 2031. This indicates Google’s strategy is one of diversification, not immediate replacement. “Broadcom is an excellent partner,” a Google spokesperson said in a related 2023 statement, adding the company is “productively engaged with Broadcom and multiple other suppliers for the long term.”
This multi-supplier approach is critical as the cost of AI shifts from training to inference. While training a large model is a massive but finite expense, inference costs scale directly with user engagement, becoming the dominant operational expenditure for services like Google Search and Gemini. Optimising chips for inference performance and cost is now a key competitive battleground.
Marvell brings substantial custom silicon experience, having worked on Amazon’s Trainium processors, Microsoft’s Maia AI accelerator, and Google’s own Arm-based Axion CPUs. The company’s custom silicon business is its fastest-growing segment, with a $1.5 billion annual run rate, and a Google deal would further solidify its position as the primary challenger to Broadcom.
The move also reflects pressure from competitors like Nvidia, which recently announced its own inference-focused Language Processing Unit (LPU), a chip category pioneered by Groq. Marvell was the design partner for Groq’s first-generation LPU, giving it proven expertise in the area. The broader market for custom ASICs is forecast by TrendForce to grow 45% in 2026, far outpacing the 16% growth projected for GPUs.
News of the potential partnership has already lifted Marvell’s stock, which has rallied approximately 50% year-to-date. Following the reports, Barclays analyst Tom O’Malley upgraded the stock to overweight and raised his price target from $105 to $150. For Broadcom, the 2031 deal secures its role, but the loss of exclusivity signals long-term pricing pressure. Mizuho analysts still project Broadcom will record $21 billion in AI revenue in 2026 from its Google and Anthropic relationships.
For Google, the strategy is about long-term cost control and supply chain resilience. By cultivating a competitive ecosystem with Broadcom, MediaTek, and now potentially Marvell, Google gains leverage to manage the multi-billion dollar annual cost of AI inference. While any Marvell-designed chip is likely years from production, the direction is clear: in the race to power AI, no company can afford to depend on a single supplier.
This article is for informational purposes only and does not constitute investment advice.