Industry & Competition

Anthropic Locks Every Compute Channel While xAI Rents Out Its Castle

In February, Elon Musk called Anthropic “the most hypocritical company” on X, accusing it of “hating Western civilization.” By May, he wrote that he had “spent a lot of time last week with senior members of the Anthropic team and was impressed — no one set off my evil detector.” The reason for this 180-degree turn is a single deal: Anthropic signed for the full capacity of SpaceXAI’s Colossus 1 data center — 300 megawatts, 220,000 NVIDIA GPUs (CNBC). An entire supercomputer, leased to a competitor.

Place this news in the sequence of the past six months, and two mirroring stories emerge. On the surface, one is about locking up supply, the other about excess capacity. But their underlying logic points in the same direction — compute is transitioning from a strategic moat into a commodity. The two companies’ behaviors expose a deeper disagreement within the industry: what is compute becoming?

Four Deals, One Strategy

Over the past six months, Anthropic has signed four compute contracts. Taken together, they cover nearly every available chip architecture and cloud channel.

The first is Amazon’s Project Rainier. When this Indiana-based supercomputer came online in October 2025, it already held nearly 500,000 Trainium2 chips, all dedicated to training Claude; Amazon says it will scale to one million by year-end (CNBC). The deal was dramatically expanded in April 2026: Amazon invested $25 billion, and Anthropic committed over $100 billion in AWS spending over the next decade, with total capacity reaching 5 gigawatts (CNBC). Trainium is Amazon’s custom chip, not sold externally — it only runs inside Amazon’s own data centers. Through this channel, Anthropic locked up compute that no other company can buy.

The second is Google’s TPU, delivered through Broadcom. An initial October 2025 agreement provided over 1 GW of TPU capacity. In April 2026, Broadcom’s SEC filing revealed this had expanded to approximately 3.5 GW of next-generation TPUs, beginning delivery in 2027 (Broadcom 8-K). Combined with the initial portion, total Google-sourced capacity approaches 5 GW — roughly matching the Amazon deal. Anthropic CFO Krishna Rao called it “our most significant compute commitment to date” (TechCrunch). The Information reported Anthropic’s total Google Cloud commitment at $200 billion.

The third is today’s SpaceXAI Colossus 1. Unlike Amazon and Google, this data center uses NVIDIA GPUs — a mix of H100s, H200s, and GB200s, totaling approximately 220,000 (WIRED). At 300 MW, Anthropic is taking the full allocation. The company also “expressed interest” in collaborating with SpaceXAI on orbital data centers.

The fourth is CoreWeave. In April 2026, CoreWeave announced a multi-year agreement with Anthropic, with compute coming online later this year (CoreWeave). CoreWeave is an independent GPU cloud, primarily running NVIDIA hardware.

Four deals spanning three chip architectures — AWS custom Trainium, Google custom TPU, and NVIDIA commercial GPUs (Colossus 1 + CoreWeave) — across two hyperscaler clouds and two independent GPU suppliers. Risk diversification alone doesn't explain locking up 5 GW of TPUs, since each architecture requires its own independent engineering investment. Anthropic is executing supply-side systemic control.

The Counterpart: Two Signals From xAI

The picture on xAI’s side looks entirely different. Two independent pieces of information point in the same direction.

The first is today’s deal itself. Colossus 1, built in 122 days and billed as one of the largest AI supercomputers globally, has had its full capacity handed to a competitor. What does xAI get in return? Anthropic’s expressed interest in co-developing orbital data centers — a future-stage proposition with unresolved technical questions (thermal management in vacuum, debris shielding).

The second comes from a Business Insider report: xAI is leasing compute to Cursor, letting it train Composer 2.5 on tens of thousands of xAI GPUs (Business Insider). This effectively turns xAI into a cloud provider. The same article revealed that xAI president Michael Nicolls told staff in an internal memo that GPU utilization was “embarrassingly low” at 11%, with a target of 50%. Industry average is 35-45%.

The common thread: xAI’s compute buildout outpaced model demand. Colossus 1 was built too large, too fast, and Grok’s user base and utilization couldn’t keep up. Excess capacity is being sold off — first to Cursor, then the entire facility to Anthropic.

The Scarcity Narrative No Longer Holds

The dominant narrative of the past two years has been: GPUs are in short supply, whoever secures more compute wins. This remains partially true — Anthropic is indeed securing massive amounts. But the how and why need reexamination.

Anthropic's ARR has grown from $9 billion at the end of 2025 to over $30 billion (CNBC). CEO Dario Amodei described 80x growth in Q1 (CNBC). Claude Code hit $500 million in annualized revenue within two months of launch. Users complain about rate limits and outages at peak hours — Anthropic's compute problem isn't insufficient reserves; it's demand outgrowing the speed at which infrastructure can be built.

xAI tells the opposite story: not all the compute it built can be filled. Its expansion pace mirrors Musk's engineering philosophy — build first, find demand later. Colossus 1 went from groundbreaking to operational in 122 days, with a target of one million GPUs, but Grok's user scale hasn't kept pace. The result: a fortress built for xAI's own models now runs a competitor's.

Against this frame, consider OpenAI. It signed for 6 GW of AMD GPUs (CNBC) and partnered with Oracle on Stargate, but has no custom-silicon channel (no Trainium, no TPU). xAI acts as the counter-indicator — its excess is the first evidence that the industry's buildout pace has fallen out of step with demand. Three companies, three completely different compute strategies, revealing something more fundamental than differences in volume: what each believes compute is becoming.

Compute Is Moving From Moat to Commodity

Three companies face the same compute market. Their behavior reveals a deeper divergence: different judgments about the nature of compute itself.

Anthropic's strategy is the most counterintuitive. On the surface, it is stockpiling — three chip architectures, five suppliers, hundreds of billions in long-term commitments. But if the goal were simply "more compute," one supplier would suffice. Committing to three architectures, two of them non-NVIDIA (Trainium and TPU), does something else: it uses engineering adaptability to erase hardware differences, actively commoditizing compute.

When Anthropic proves its models can run efficiently across Trainium, TPU, and GPU architectures — potentially distributing training and inference across different silicon — NVIDIA's unilateral pricing power is broken. As long as Amazon's and Google's custom chip capacity keeps scaling, Anthropic gains negotiating leverage across suppliers. It isn't just hoarding chips; it is building a multi-supplier bidding dynamic, acting as a market participant that actively reshapes market structure.

xAI's actions form the other half of this narrative. The 11% GPU utilization and the full lease of Colossus 1 shouldn't be read simply as "Grok is failing." Under Musk's first-principles framework, idle compute isn't failure — it's an unpriced asset. Leasing to Cursor and Anthropic is compute arbitrage: use construction speed and scale to build clusters, then sell access to those who can't or won't build their own. This gives xAI a dual identity: both a model company and a cloud provider.

But this identity carries an internal contradiction. When your competitor trains better models on your best infrastructure — and you collect the rent — your competitive position enters a self-weakening loop. Musk’s announcement that xAI will be “dissolved as a separate company” into SpaceXAI effectively removes Grok from independent competition. Compute arbitrage may recover investment, but renting out infrastructure doesn’t build model competitiveness.

OpenAI's position is the easiest to underestimate. Its dual-track procurement (NVIDIA + AMD) appears to be the middle path — less aggressive than Anthropic, less contrarian than xAI. But the risk lies precisely in this "normalcy." If Anthropic succeeds in commoditizing compute, weakening NVIDIA's pricing and driving down compute rental costs, then OpenAI's deeply integrated, long-term, high-price contracts with Microsoft shift from advantage to high-cost liability. When competitors' marginal compute costs are declining while your fixed costs are locked in at a cycle peak, the token-margin disadvantage compounds.

Take "token margins will only get thinner" as a prism, and the three companies' strategic differences snap into a unified framework.

Anthropic is buying an option with fixed expenditure. The option: when compute commoditization arrives, it will already have deployment capability on every architecture and can choose whichever silicon is cheapest at that moment. The 5 GW TPU commitment isn't for today — it gets Google to reserve capacity for 2027. The cost falls today; the payoff accrues on every inference token from 2027 onward.

xAI is arbitraging construction speed. It built before anyone else, but Grok's quality and user scale didn't keep up. The excess capacity wasn't wasted — it was monetized directly on the market. The open question is whether Grok can eventually catch up. If it can't, xAI becomes a company that permanently provides infrastructure for its competitors.

OpenAI’s risk is an implicit assumption that GPUs will remain scarce. If that assumption fails — if Anthropic’s commoditization succeeds, if compute truly transitions from scarce resource to cheap utility — then OpenAI’s greatest advantage today (massive priority GPU supply) becomes its greatest cost structure liability tomorrow.

The final judgment isn't "whose strategy wins." It's "does compute transition from moat to commodity?" If it does, Anthropic's bet is directionally aligned — it is pushing the trend while preparing for it. xAI's position is neutral — it is already on the sell side. OpenAI's position implicitly denies the trend — it assumes sustained scarcity, which is the riskiest assumption of the three.