The GPU Wars Erupt: AMD Lands 12GW, NVIDIA Fires Back With 50x, and OpenAI Bets on Everyone
Our claim store has reached 150 claims from 124 sources with 128 cross-reference edges. This brief focuses exclusively on high-confidence signals — claims backed by official company accounts (authority 0.90+), C-suite executives, or multiple independent converging sources. Today’s picture: the datacenter GPU market is fracturing into a multi-vendor war, OpenAI is hedging its compute bets across every supplier, and the AI bull/bear debate has its sharpest data points yet.
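The high-confidence filter described above can be sketched in a few lines. This is a minimal illustration only, assuming a simple claim record with an authority score and a corroboration count; the field names and thresholds are hypothetical, not the actual claim-store schema:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    source: str
    authority: float      # 0.0-1.0 source-authority score (hypothetical field)
    corroborations: int   # count of independent converging sources (hypothetical field)

def high_confidence(claims, min_authority=0.90, min_corroborations=2):
    """Keep claims from high-authority sources, or claims where
    multiple independent sources converge on the same fact."""
    return [
        c for c in claims
        if c.authority >= min_authority or c.corroborations >= min_corroborations
    ]

claims = [
    Claim("AMD-Meta 6GW deal", "Lisa Su", 0.90, 1),
    Claim("OpenAI IPO rumored", "anonymous forum", 0.30, 1),
    Claim("NVIDIA Q4 revenue $68.1B", "earnings wires", 0.75, 6),
]
print([c.text for c in high_confidence(claims)])
# keeps the C-suite claim and the multi-source earnings claim, drops the rumor
```

The two-pronged test mirrors the brief's stated rule: a claim passes on authority alone (0.90+) or on independent convergence, which is why the 0.75-authority earnings figure with six sources still clears the bar.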
1. AMD’s Datacenter Breakout: 12GW in Confirmed Deals
AMD has quietly assembled the most impressive GPU deal pipeline outside of NVIDIA. All of these are confirmed by CEO Lisa Su or official AMD accounts:
- Meta signed a 6GW deal for AMD Instinct MI450 GPUs worth over $100B, with shipments starting H2 2026. Meta received a performance warrant for up to 160 million AMD shares. (Lisa Su, authority: 0.90)
- OpenAI signed a 6GW deal for AMD Instinct GPUs, confirmed by Lisa Su, making it AMD’s second major hyperscaler deal. OpenAI is now sourcing accelerators from NVIDIA, AMD, and Amazon (Trainium). (Lisa Su, authority: 0.90)
- Helios rack-scale architecture launched — powered by MI455X GPUs, EPYC CPUs, and Pensando Vulcano NICs. TCS is bringing it to India for sovereign AI deployment. (AMD official, authority: 0.95)
- Three-tier GPU portfolio revealed at CES 2026: MI455X for hyperscale training, MI440X for enterprise AI, MI430X for supercomputing. AMD VP called MI455X “the fastest GPU bringup in AMD’s history.” (Lisa Su + AMD VP, authority: 0.90–0.95)
- Saudi Arabia JV with Humain and Cisco to deliver MI450 series GPUs to the Kingdom for sovereign AI. (Lisa Su, authority: 0.90)
The numbers are staggering: 12 gigawatts of confirmed GPU deals from two of the world’s largest AI companies, plus sovereign AI deployments in India and Saudi Arabia. AMD’s datacenter GPU business has gone from “credible alternative” to “indispensable second source” in a single quarter.
AMD also posted record Q4 2025 earnings: $10.3B revenue, non-GAAP EPS of $1.53 (beating estimates by 23%), and guided Q1 2026 at $9.8B (+32% YoY). The company is targeting 60%+ annual datacenter growth with a path to tens of billions by 2027.
2. NVIDIA’s Response: 50x Performance, 35x Lower Cost
While AMD was signing deals, NVIDIA was shipping data. All of these come from NVIDIA’s official account or verified third-party benchmarks:
- GB300 NVL72 delivers 50x performance per watt vs Hopper, using the Dynamo + TensorRT-LLM software stack. NVIDIA bills this as the largest generational leap in its GPU history. (NVIDIA official, authority: 0.95)
- Blackwell Ultra delivers 35x lower cost per million tokens for agentic AI inference vs Hopper. Cloud providers are deploying GB300 NVL72 at scale for low-latency agentic coding. (NVIDIA official, authority: 0.95)
- SemiAnalysis InferenceX benchmark shows GB300 NVL72 with the lowest inference cost of any platform — third-party validation of NVIDIA’s “best performance = lowest cost” thesis, though the result reached us via NVIDIA’s official account. (NVIDIA official, authority: 0.95)
- LMSYS benchmarks show GB300 racks achieving 1.5x lower latency and 1.87x higher user throughput vs GB200 for long-context open-source inference, also cited via NVIDIA’s channels. (NVIDIA official, authority: 0.95)
- Record Q4 revenue: $68.1B, supported by 6 independent sources at maximum confidence. Guiding April quarter at $78B vs $72.7B consensus. (Multiple sources, authority: 0.75–0.95)
Chart: GB300 NVL72 inference economics with Dynamo + TensorRT-LLM. Sources: NVIDIA official, SemiAnalysis InferenceX, LMSYS benchmarks.
NVIDIA’s counter-narrative is clear: while competitors are signing future deals, NVIDIA is shipping the lowest-cost inference platform today. The 50x perf/watt claim, if sustained in production, would make it economically irrational not to use NVIDIA for new deployments.
The Eli Lilly AI factory — 1,016 Blackwell Ultra GPUs delivering over 9,000 petaFLOPs — shows NVIDIA pushing beyond hyperscalers into enterprise. Jensen Huang stated enterprise is “by far larger” than hyperscaler as a long-term market.
3. OpenAI’s $110B Round and the Multi-Vendor Compute Strategy
Sam Altman confirmed the largest private funding round in history, and the infrastructure strategy behind it reveals OpenAI is hedging every bet:
- $110B raised from Amazon, NVIDIA, and SoftBank — confirmed directly by Sam Altman. NVIDIA is both investor ($30B) and hardware supplier, a circular capital flow that Michael Burry flags as a red flag. (Altman, authority: 0.90)
- Total valuation: $840B ($730B pre-money + $110B new capital), nearly tripling in one year. Altman has expressed interest in an IPO. (Multiple sources, authority: 0.50–0.90)
- 13GW compute infrastructure across three vendors: 5GW NVIDIA (3GW inference + 2GW training), 6GW AMD Instinct, and 2GW Amazon Trainium. No single-vendor dependency. (Multiple C-suite + analyst sources)
- Codex users tripled since the start of 2026, with India as the fastest-growing market (4x growth in 2 weeks). (Altman, authority: 0.90)
- Department of War deal signed — deploying AI models in classified military networks. Altman stated the DoW “displayed a deep respect for safety,” directly contrasting with Anthropic’s refusal. (Altman, authority: 0.90)
The 13GW multi-vendor strategy is the most important structural signal in this brief. OpenAI is spending at nation-state scale while ensuring no single hardware supplier can become a chokepoint. This is a direct challenge to NVIDIA’s monopoly thesis and validation of AMD’s datacenter push.
Burry’s circular capital flow thesis deserves attention: NVIDIA invests $30B into OpenAI, which then spends $20B+ back on NVIDIA chips. Altman defended this, saying revenue keeps growing and demand is “just incredible.” The market will resolve this debate — eventually.
4. The Anthropic Divergence: Lawsuit vs. Pentagon Deal
Two AI leaders have taken diametrically opposed paths on defense — and this is now the most contested regulatory story in AI:
- Anthropic CEO Dario Amodei issued an official statement on the company’s discussions with the Department of War regarding the supply chain risk designation. (Anthropic official, authority: 0.95)
- Anthropic is suing the US government to challenge the designation in court, escalating from protest to litigation. (Confidence: 0.90)
- OpenAI publicly stated that Anthropic should NOT be designated as a supply chain risk, and communicated this directly to the Department of War. (OpenAI official, authority: 0.95)
- OpenAI signed a Pentagon deal for classified military AI networks while Anthropic refuses to work with defense. Same industry, opposite strategies. (Altman, authority: 0.90)
- Existential risk to distribution: If enforced, AWS and Google Cloud could be forced to remove Anthropic’s models to maintain government contracts. (Confidence: 0.85)
The cross-industry solidarity is remarkable: OpenAI defending its biggest competitor against government overreach suggests the industry sees this as a precedent that could target any AI company. But the strategic divergence — OpenAI embracing defense while Anthropic sues — creates a widening gap in their business models.
Meanwhile, Anthropic’s business is booming: ARR reportedly reached $14B, up from ~$9B at year-end, roughly $5B of new ARR added in the weeks since. The company’s inference margin thesis remains central to its valuation — but the supply chain designation adds regulatory overhang.
Signal Distribution Across 150 Claims
Our full claim store breaks down by topic with AI investment and capex dominating, reflecting the current mega-cycle of infrastructure buildout.
Highest-Conviction Signals to Watch
- NVIDIA April quarter ($78B guide) — if achieved, it confirms the 50x thesis is translating to revenue. The whisper number is $75B, so a beat here is genuinely bullish.
- AMD Instinct shipments (H2 2026) — Meta and OpenAI deals are signed but not shipping yet. Execution risk is the key variable.
- Anthropic lawsuit ruling — if the court blocks the designation, it sets precedent protecting AI companies from political retaliation. If it stands, cloud providers face a hard choice.
- OpenAI IPO timeline — at $840B valuation with tripling Codex growth, the IPO window is opening. Watch for S-1 filing.
- Inference cost convergence — NVIDIA claims 35x cheaper tokens, AMD claims its fastest-ever bringup, and the tinygrad camp argues inference prices collapse toward the cost of electricity. This debate resolves in 2026.