Google wants to build its own AI chips — and it's bringing in new friends

Google is in talks with Marvell Technology to develop two new custom AI chips, according to a report from The Information. The chips in question: a memory processing unit and an inference-optimized TPU (Tensor Processing Unit). If the deal goes through, Marvell would become Google's third design partner, joining Broadcom and MediaTek in the search giant's custom silicon supply chain.

The discussions are still early — no contract has been signed yet. But the timing is telling. The talks come just days after Broadcom locked in a massive deal to continue supplying Google with AI chips. That deal, reportedly worth billions, is a huge win for Broadcom. But it also highlights a problem: Google is heavily dependent on a single supplier for its most critical hardware.

Why diversify?

Relying on one vendor for custom silicon is risky. Supply chain disruptions, pricing pressure, and strategic misalignment can all bite you. Google learned this the hard way with its Pixel phones, where it relied on Qualcomm for years before finally switching to its own Tensor chips. Now it's applying the same logic to its data center hardware.

Adding MediaTek and, potentially, Marvell alongside Broadcom gives Google more leverage. It can play suppliers against each other, negotiate better terms, and ensure it has backup options if one partner falls short. It's a classic procurement strategy, but applied to the cutting edge of AI hardware.

What are these chips?

Let's break down the two chips Google wants Marvell to help design.

Memory Processing Unit (MPU): This is a chip designed to handle memory-intensive workloads. In AI, memory bandwidth is often the bottleneck — your compute units are starving for data while the memory bus struggles to keep up. An MPU aims to fix that by integrating processing logic directly into the memory subsystem. Think of it as a smart memory module that can do simple computations without shuffling data back and forth to the main processor.
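To see why memory bandwidth, rather than raw compute, is usually the limiter, here's a rough back-of-envelope sketch in Python. The peak-FLOPS and bandwidth figures are made-up placeholders (not Google, Marvell, or TPU specs); the point is the ratio between compute time and memory time, not the absolute numbers.

```python
# Back-of-envelope roofline check: is a matrix-vector multiply (the core of
# single-batch LLM inference) limited by compute or by memory bandwidth?
# The hardware numbers below are illustrative placeholders, not a real chip spec.

PEAK_FLOPS = 200e12       # assumed peak compute, FLOP/s
PEAK_BANDWIDTH = 1.0e12   # assumed memory bandwidth, bytes/s

def matvec_time(rows: int, cols: int, bytes_per_weight: int = 2) -> None:
    flops = 2 * rows * cols                       # one multiply + one add per weight
    bytes_moved = rows * cols * bytes_per_weight  # every weight read from memory once
    compute_time = flops / PEAK_FLOPS
    memory_time = bytes_moved / PEAK_BANDWIDTH
    bound = "memory" if memory_time > compute_time else "compute"
    print(f"{rows}x{cols}: compute {compute_time * 1e6:.1f} us, "
          f"memory {memory_time * 1e6:.1f} us -> {bound}-bound")

# One large transformer layer's weight matrix, fp16 weights.
matvec_time(8192, 8192)
# Memory time comes out roughly two orders of magnitude larger than compute time,
# which is why moving logic closer to memory (an MPU) is attractive.
```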

Inference-optimized TPU: Google already has TPUs for training and inference. But this new chip would be specifically tuned for inference — the process of running a trained model to make predictions. Inference is becoming a huge market as companies deploy AI models in production. An inference-optimized chip can be cheaper, more power-efficient, and faster for that specific task compared to a general-purpose TPU.
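As a rough illustration of why inference gets its own silicon, here's a toy NumPy sketch: inference is a forward pass only (no gradients, no optimizer state), and weights can be quantized to int8 to cut memory traffic and compute cost. This is a generic example, not Google's TPU software stack or anything Marvell would actually build.

```python
# Toy example: post-training int8 quantization for an inference-only forward pass.
import numpy as np

rng = np.random.default_rng(0)
weights_fp32 = rng.standard_normal((512, 512)).astype(np.float32)
activations = rng.standard_normal((1, 512)).astype(np.float32)

# Post-training quantization: store weights as int8 plus a single scale factor.
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.round(weights_fp32 / scale).astype(np.int8)

# Forward pass with quantized weights (dequantized on the fly here;
# inference-optimized hardware would do the int8 math natively).
logits_quant = activations @ (weights_int8.astype(np.float32) * scale)
logits_full = activations @ weights_fp32

print("int8 weights use", weights_int8.nbytes / weights_fp32.nbytes, "of the fp32 memory")
print("max output error:", np.abs(logits_quant - logits_full).max())
```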

What does this mean for developers?

For developers building on Google Cloud, this could mean access to more specialized hardware at lower costs. If Google can produce cheaper inference chips, it can pass those savings on to customers. That's good news for anyone running large-scale AI inference workloads.

But there's a cynical take: Google's custom silicon strategy has had mixed results. The Pixel's Tensor chips, while impressive, haven't exactly set the world on fire. And Google's previous attempts at custom server chips (like the Titan security chip) have been slow to roll out. So don't expect these Marvell-designed chips to appear overnight. Even if a deal is signed, production could be years away.

The bigger picture

Google is not alone in building custom AI chips. Amazon has its Trainium and Inferentia chips. Microsoft is reportedly working on its own AI silicon. Even Meta is designing custom chips for its data centers. The trend is clear: hyperscalers want to own their hardware stack from top to bottom.

For Broadcom, this diversification is a threat. Sure, it just signed a big deal, but Google is clearly looking for alternatives. Broadcom's recent acquisition of VMware has also ruffled some feathers, since Google competes with VMware in the cloud space. That might be another reason Google is hedging its bets.

The bottom line

Google's talks with Marvell are a smart move. Diversifying its custom silicon supply chain reduces risk and gives it more bargaining power. For developers, the promise of cheaper, more specialized AI hardware is tantalizing. But as always, the devil is in the details — and the timeline. Keep an eye on this space, but don't hold your breath.