SGNL Intelligence.
7 min read

OpenAI Takes the Pentagon, DeepSeek V4 Targets Sonnet, and the CPU/GPU Ratio Flips

OpenAI · Pentagon · DeepSeek · CPU/GPU · NVIDIA · Coherent · Micron · PJM · Intel 18A

15 new claims from 14 sources landed today, adding 10 graph edges (6 from the proposer + 4 from healing). The headline signals: OpenAI wins a specific $200M Pentagon contract that Anthropic lost, DeepSeek drops V4 specs optimized for Chinese silicon, and an industry insider makes the case that CPUs will outnumber GPUs in inference data centers. Meanwhile, NVIDIA deepens its optical interconnect bet with a $2B Coherent investment, and a single Meta campus will consume 8 million miles of fiber.

Convergences Strengthened Today
  • CPU bottleneck thesis (2 sources)
  • OpenAI military engagement (3 sources)
  • NVIDIA-Coherent $2B deal (2 sources)
  • Micron-NVIDIA memory (3 sources)
  • DeepSeek V4 Huawei optimization (2 sources)
  • PJM capacity demand (2 sources)

1. OpenAI Wins $200M Pentagon Contract

Claim confidence: 0.95 | Source: @minchoi (popular fintwit) | Refines existing DoW deal narrative

The OpenAI-Pentagon relationship just got a price tag. Anthropic lost the $200 million contract to OpenAI, adding a specific dollar figure to the existing narrative of OpenAI deploying AI models on classified military networks.

  • $200M contract value — the first concrete dollar amount attached to the OpenAI-Pentagon relationship. Previously we only knew about the classified network deployment agreement. (@minchoi)
  • Anthropic explicitly lost — not just sitting out. This is a competitive loss, not a policy refusal. Adds context to the ongoing lawsuit and supply chain risk designation. (@minchoi)
  • Deepens the regulatory divergence — OpenAI now has a confirmed dollar-value defense contract while simultaneously publicly defending Anthropic against the supply chain designation. Competing in the market while supporting in the courts.

This claim now connects to 3 existing claims in the regulation cluster: OpenAI’s DoD pact with layered protections, the classified network deployment, and Anthropic’s supply chain risk designation. The $200M figure refines the existing narrative from “OpenAI is engaging with military” to “OpenAI won a specific contract that Anthropic lost.”


2. DeepSeek V4 Specs: Two Models, Two Strategies

2 claims | Sources: @zephyr_z9, @minchoi (popular fintwit) | Confidence: 0.90

DeepSeek is dropping two variants of V4, each targeting a different competitive axis:

  • V4 lite (285B parameters) — positioned to compete directly with Anthropic’s Sonnet 4.6. This is the “fast and efficient” model aimed at the Western inference market. (@zephyr_z9)
  • V4 big (1 trillion parameters) — multimodal (text + image + video), optimized for Huawei and Cambricon chips. This is the domestic Chinese silicon play, reducing dependence on NVIDIA hardware. (@minchoi)
  • Hardware divergence — DeepSeek is bypassing the usual step of letting NVIDIA and AMD optimize V4 for their hardware (existing claim in store). V4 big is designed China-silicon-first.

The trillion-parameter claim refines the existing store entry about DeepSeek V4’s Huawei/Cambricon optimization by adding the specific parameter count and multimodal capabilities. The V4 lite claim is a novel signal — first mention of a smaller model variant specifically targeting Sonnet 4.6.


3. The CPU/GPU Ratio Pivot

2 claims | Source: @BenBajarin (industry insider, 0.65) | Confidence: 0.85-0.90

Ben Bajarin makes a structural argument that could reshape how we think about inference data center economics:

  • CPUs will outnumber GPUs in pure inference data centers as agentic AI models evolve. The ratio flips — inference workloads need more orchestration compute than raw GPU throughput. (@BenBajarin)
  • Intel foundry thesis — if agentic CPUs drive massive demand, Intel’s foundry business has a lifeline. Bajarin positions this as a potential savior for Intel’s manufacturing strategy. (@BenBajarin)
  • Converges with existing evidence — the store already holds a claim that “agentic AI workloads are CPU-bottlenecked” and that doubling the CPU install base would be needed for just 1 hour/day of agent usage. Bajarin’s prediction supports this from an independent source.

This thesis, if correct, has massive implications. It means the AI infrastructure buildout isn’t just about GPUs — it’s about a parallel CPU supercycle that benefits Intel and AMD’s EPYC line. The healing pass linked both new claims to the existing CPU bottleneck thesis, strengthening it to 2 independent supporting sources.
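The "doubling the CPU install base for 1 hour/day of agent usage" arithmetic can be sketched as a toy model. Every input below (user count, cores per agent session, cores per CPU, utilization) is an illustrative assumption, not a figure from the source:

```python
# Toy back-of-envelope model for the CPU bottleneck thesis.
# All numbers below are illustrative assumptions, not sourced data.

def extra_cpus_needed(users, agent_hours_per_day, cores_per_session,
                      cores_per_cpu=64, utilization=0.5):
    """CPUs needed to absorb a new agent workload.

    core_hours_demanded: total core-hours/day the agents add.
    core_hours_per_cpu: usable core-hours/day one server CPU supplies
    at the given average utilization.
    """
    core_hours_demanded = users * agent_hours_per_day * cores_per_session
    core_hours_per_cpu = cores_per_cpu * 24 * utilization
    return core_hours_demanded / core_hours_per_cpu

# Hypothetical: 1B users, 1 hour/day of agents, 8 cores per session.
new_cpus = extra_cpus_needed(1e9, 1, 8)
print(f"{new_cpus:,.0f} additional CPUs")  # ~10.4 million
```

Whether the result amounts to doubling the install base depends entirely on the assumed baseline and per-session footprint, which is why the thesis hinges on agent adoption curves rather than any single number.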


4. The Physical Layer: Fiber, Memory, and Optical Interconnects

4 claims | Sources: @BenBajarin, @PatrickMoorhead, @hms1193 | Confidence: 0.95

While everyone debates GPU wars, the physical infrastructure layer is scaling at jaw-dropping rates:

  • Meta’s single DC campus: 8 million miles of Corning fiber — one campus, one company, enough optical fiber to circle the Earth more than 300 times. The copper-to-fiber transition in AI infrastructure is here. (@BenBajarin, industry insider)
  • NVIDIA invests $2B in Coherent ($COHR) and signs a multi-year, multi-product supply agreement. This locks in co-packaged optics supply through 2030, with revenue starting CY2027. @PatrickMoorhead independently confirms the deal already in our store.
  • Micron ships 256GB SOCAMM2 — the world’s first, designed specifically for agentic AI workloads. Specialized memory form factors for the agentic era. (@hms1193, financial journalist)
  • Micron-NVIDIA memory co-design — collaborative R&D on advanced memory for AI infrastructure. This supports the existing claim that Micron is expected to supply NVIDIA’s Vera Rubin platform. (@BenBajarin)

The physical layer tells a story the GPU headlines miss. NVIDIA isn’t just selling chips — it’s vertically integrating into optical interconnects (Coherent) and co-designing memory (Micron). Meta isn’t just buying GPUs — it’s laying 8 million miles of fiber for a single campus. The infrastructure moats are being built in glass and silicon, not just software.
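The fiber mileage above is easy to sanity-check against familiar distances (Earth's equatorial circumference and the average Earth-Moon distance are standard reference values):

```python
# Sanity-check the 8-million-mile fiber figure against familiar distances.
FIBER_MILES = 8_000_000          # Meta campus fiber (from the claim)
EARTH_CIRCUMFERENCE = 24_901     # miles, at the equator
EARTH_MOON_ONE_WAY = 238_855     # miles, average

print(f"Circles the Earth {FIBER_MILES / EARTH_CIRCUMFERENCE:,.0f} times")  # ~321
print(f"Moon round trips: {FIBER_MILES / (2 * EARTH_MOON_ONE_WAY):.1f}")    # ~16.7
```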


5. Power Grid Under Pressure: PJM 9.3x

1 claim | Source: @SemiAnalysis_ (industry insider, 0.65) | Confidence: 0.90

PJM capacity prices have risen 9.3x, driven by data center power demand and what SemiAnalysis calls “poor market design.” This refines the existing BloombergNEF projection that PJM data center capacity could reach 31 GW by 2030.

  • 9.3x capacity price growth — the most specific pricing metric we’ve seen for the data center power crunch. Not just demand growing, but the cost of securing capacity exploding. (@SemiAnalysis_)
  • Existing context — the store already holds PJM interconnection queue data (2,600 GW pending, 12-year delays) and BloombergNEF’s 106 GW US demand forecast. The 9.3x price growth is the financial consequence of those physical constraints.

The power story keeps getting worse. Physical constraints (transformers, cooling, grid interconnection) are the binding constraint on AI infrastructure deployment, and the price signal is now screaming.


Claim Store by Topic (113 Claims)
  • AI investment: 35
  • Product launch: 27
  • Supply chain: 23
  • Capex: 18
  • Regulation: 15
  • Macro: 15
  • Data centers: 13
  • Earnings: 12
  • Valuation: 10
  • Revenue: 9

Novel Signals: First Mentions

These claims have no prior edges in the store — potentially new information entering the picture:

  • Intel Clearwater Forest (Xeon 6+) — up to 288 Darkmont E-cores via 12 Intel 18A compute tiles. First Intel 18A product claim in the store. (@PatrickMoorhead)
  • Qualcomm AI200 — datacenter rack showcased at MWC 2026. A new entrant to the inference hardware market beyond NVIDIA and AMD. (@PatrickMoorhead)
  • AMD Zen 6 (Medusa) / Zen 7 (Grimlock) — next-gen CPU roadmap competing with Intel Nova Lake in gaming and AI. (@mooreslawisdead)
  • Junyang departs Qwen — key lead for Alibaba’s Qwen AI team leaves the company. Potential talent disruption at one of China’s leading AI labs. (@zephyr_z9)
  • AMD India Developer Program — using ROCm open software stack to accelerate AI development in India. (@AIatAMD, official account)

What to Watch

  • DeepSeek V4 lite benchmarks — if it truly competes with Sonnet 4.6, it validates the Chinese open-source AI thesis at frontier quality.
  • Intel 18A yields — Clearwater Forest with 288 cores is impressive on paper, but 18A manufacturing yield is the make-or-break for Intel foundry.
  • PJM capacity auctions — 9.3x price growth is unsustainable. Watch for policy responses or demand destruction signals.
  • Anthropic’s next move — lost the $200M Pentagon deal and is fighting a supply chain risk designation. Revenue is booming ($14B ARR) but the regulatory overhang deepens.
  • Micron SOCAMM2 adoption — first specialized memory form factor for agentic AI. If hyperscalers adopt, it signals the agentic infrastructure buildout is real.

This analysis is powered by GIKE (General Iterative Knowledge Engine). The database now holds 113 claims from 87 sources with 90 cross-reference edges. All claims cited are sourced from public tweets with authority scores and extraction confidence noted.
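A minimal sketch of how a claim store like the one described might be represented in code: claim nodes carrying source authority and extraction confidence, plus cross-reference edges tagged by the pass that created them (proposer or healing). All class and field names are hypothetical; the post does not describe GIKE's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    claim_id: str
    text: str
    source: str            # e.g. a tweet handle
    authority: float       # source authority score, 0-1 (values below are made up)
    confidence: float      # extraction confidence, 0-1
    topics: list = field(default_factory=list)

@dataclass
class ClaimStore:
    claims: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)   # (from_id, to_id, origin)

    def add_claim(self, claim):
        self.claims[claim.claim_id] = claim

    def add_edge(self, src, dst, origin):
        """origin: 'proposer' for edges found at ingest time,
        'healing' for edges recovered by a later linking pass."""
        self.edges.append((src, dst, origin))

store = ClaimStore()
store.add_claim(Claim("c1", "OpenAI wins $200M Pentagon contract",
                      "@minchoi", 0.55, 0.95, ["regulation"]))
store.add_claim(Claim("c2", "OpenAI deploys models on classified networks",
                      "@minchoi", 0.55, 0.90, ["regulation"]))
store.add_edge("c1", "c2", "proposer")
print(len(store.claims), len(store.edges))  # 2 1
```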
