SGNL Intelligence.

The New Power Stack

Data Centers · Power · Grid · Bloom Energy · Nuclear · 800V HVDC · Liquid Cooling · NVIDIA · Vera Rubin · Infrastructure

There are 2,600 gigawatts of power waiting in line to plug into America’s electric grid right now.

That’s more than twice everything the United States has ever built — every coal plant, every dam, every wind farm, every nuclear reactor since 1882. Stacked into a queue. Waiting. Some of those projects will wait twelve years.

If you’re trying to build the data center that trains the next GPT, that timeline is a death sentence. So a quiet thing has happened over the last eighteen months: the people building AI factories stopped waiting.

They started building their own power plants.


The Demand Wall

Let’s get the scale right, because the numbers are genuinely strange.

US Data Center Power Demand Projections

  • 2024: ≈55 GW (S&P Global)
  • 2026: 75.8 GW (S&P Global)
  • 2028: 108 GW (S&P Global)
  • 2030: 134.4 GW (S&P Global)
  • 2035: 106 GW (BloombergNEF)

US data center power demand. S&P Global projects 134.4 GW by 2030; BloombergNEF projects 106 GW by 2035 — the gap shows how uncertain the denominator is. The trend's slope is consensus.
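As a sanity check on that slope, the S&P Global endpoints imply a compound growth rate you can work out in a few lines. The data points are the ones above; the arithmetic is just illustration:

```python
# Implied compound annual growth rate (CAGR) between the S&P Global
# endpoints cited above. The projections are the source's; only the
# arithmetic here is ours.
def cagr(start: float, end: float, years: int) -> float:
    """Constant annual growth rate that takes `start` to `end`."""
    return (end / start) ** (1 / years) - 1

# 2024 ≈ 55 GW  ->  2030 = 134.4 GW (S&P Global, US data centers)
rate = cagr(55.0, 134.4, 2030 - 2024)
print(f"Implied CAGR: {rate:.1%}")  # roughly 16% per year
```

Sixteen percent a year, compounding, on a base already the size of several states: that is what "the denominator is uncertain" is hedging against.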

Picture it this way: 30 gigawatts is roughly the peak electrical demand of New York State. OpenAI — one company — wants that much for itself by 2030, and has already identified 8 GW of it.

Globally, data center electricity demand is projected to grow from 460 TWh in 2024 to roughly 1,000 TWh by 2030. It’s not just OpenAI. Microsoft is deploying 130,000 next-gen NVIDIA Rubin GPUs at one Nscale site. Oracle is pouring tens of billions into new capacity. Alphabet is putting 40% of its tech infrastructure capex into data centers and networking.

PJM, the grid operator covering 13 states, has 2,600 GW of pending interconnection requests. That’s more than 2× the entire installed US grid. Project waits stretch to 12 years.

The grid can’t keep up. So the data center industry is doing something it has never done before: it’s becoming a power industry.


The Five-Layer Stack

Here’s the mental model that unlocked this for me.

A traditional data center is a building. You plug it in. The grid feeds it. Done.

A 2026 AI factory is a chemical plant that happens to compute. It generates its own electricity, transforms voltage on-site with new physics, runs DC current through its bones, and talks back to the grid like a partner instead of a customer.

The Five Layers of the 2026 AI Factory (read bottom-up: source → distribution → rack → cooling → grid)

  1. Source: Nuclear PPAs, on-site fuel cells, gas turbines for the bridge. Not the grid alone. (Microsoft / Three Mile Island 835 MW · Bloom + Oracle 2.8 GW SOFC · NextEra nuclear fleet)
  2. Substation / MV: Solid-state transformers feed an 800 V DC busbar. No more AC inside the building. (Delta SST · 800V HVDC architecture · PUE 1.15)
  3. Rack: 120 kW today, 600 kW productized, 1 MW on the horizon. A megawatt rack costs $3–7M. (GB300 NVL72 · Vera Rubin NVL72 · Vertiv 600kW reference)
  4. Cooling: Two-phase direct-to-chip is the new reference. Coolant boils against silicon, condenses elsewhere. (Accelsius NeuCool HyperStart · Coherent Thermadite 800 · Microloops 40k cold plates/yr)
  5. Grid Relationship: Bidirectional. The data center talks back to the grid — throttling under stress, exporting power on demand. (NVIDIA + Emerald AI + 6 utilities: AES, Constellation, Invenergy, NextEra, Nscale, Vistra)

Let’s walk through each one.


Layer 1 — Source: Nukes, Fuel Cells, and a Little Honest Gas

The clean answer is nuclear.

Microsoft signed an 835 MW deal to restart Three Mile Island Unit 1. NextEra’s CEO said this week that the company sees its entire nuclear fleet as part of its data center strategy. There’s even a stealth startup, NX Atomics, designing a new reactor specifically for AI factories.

But Three Mile Island won’t restart until 2031. Nuclear is the right answer for the 2030s. It’s not the answer for next quarter.

The surprise winner of 2026? Solid oxide fuel cells.

Bloom Energy — yes, that Bloom Energy — just signed a 2.8 GW deal with Oracle for AI infrastructure. Their commercial backlog grew 135% year-over-year. They delivered an Oracle “AI factory” power module in 55 days against a 90-day commitment.

Think of a fuel cell like a battery that never runs out, as long as you keep feeding it natural gas or hydrogen. No combustion, no smokestack, no air-quality permit fight. You can deploy it next to the building. Bloom’s CEO says it bluntly: “the debate over on-site power is over.”

Bloom Energy signed a 2.8 GW deal with Oracle, grew its C&I backlog 135% YoY, and shipped one AI factory order in 55 days against a 90-day commitment. The fuel cell era arrived without a press conference.

For the bridge? Yes — gas turbines.

AWS just partnered with Siemens Energy on gigawatt-scale on-site generation and microgrids. Digital Realty is openly evaluating “bridge power” in markets where the utility just can’t deliver in time. The EIA expects fossil fuel capacity to grow because of this.

That’s the honest answer for 2026–2028. Anything else is greenwashing.


Layer 2 — The Substation Reset: 800 Volts of DC

Here’s the wonkiest layer, and I promise it’s the coolest.

Every data center you’ve ever seen takes high-voltage AC from the grid, drops it through a transformer to medium voltage, drops it again to 415 V three-phase AC, and then converts it to DC at the server. Each conversion loses energy.

The new playbook: a solid-state transformer — basically a refrigerator-sized power-electronics box from companies like Delta — takes the medium voltage straight to 800 V DC and runs it down a copper busbar to the rack. No more AC inside the building.

Why now? Because above ~250 kW per rack, the old 415 V AC busways become physically impractical. You either go to scary-high AC (and trip arc-flash safety codes) or you go DC. The industry is going DC. The result: a clean PUE of 1.15 in early reference designs — close to the theoretical floor.
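To see why fewer conversion stages matter, here is a back-of-envelope sketch. The per-stage efficiencies below are illustrative round numbers, not vendor specs:

```python
# Cumulative conversion efficiency: legacy AC chain vs. 800 V DC busbar.
# Each stage's efficiency is an assumed round number for illustration.
from math import prod

# HV->MV transformer, MV->415 V transformer, rack rectifier, DC-DC to chip
legacy_ac = [0.99, 0.98, 0.96, 0.97]
# Solid-state transformer MV->800 V DC, then a single DC-DC stage at the rack
dc_800v = [0.98, 0.98]

print(f"Legacy AC chain: {prod(legacy_ac):.1%} of grid power reaches silicon")
print(f"800 V DC chain:  {prod(dc_800v):.1%}")
```

Even with generous numbers for the old chain, collapsing four conversions into two claws back several points of efficiency — and at gigawatt scale, a few percent is a power plant.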

It’s the electrical-engineering equivalent of the iPhone moment: ten years from now, we’ll look at AC distribution the way we look at floppy disks.


Layer 3 — The Rack: From Toaster to Particle Accelerator

A 2018 server rack pulled about 8 kW. A nice steady toaster.

Rack Power Density: 125× in Eight Years

  • 2018 enterprise (≈ one toaster, idle): 8 kW
  • 2022 hyperscale (Hopper-era H100 row): 30 kW
  • GB200 NVL72 (today's mainstream AI rack): 120 kW
  • GB300 NVL72 (Blackwell Ultra, 288 GB HBM3E/GPU): 140 kW
  • Vertiv 600kW (productized reference design): 600 kW
  • Rubin / Kyber-class (industry trajectory, 2027+): 1,000 kW

Rack power density has 125×'d in 8 years. Above ~250 kW you can no longer run a 415 V AC busway — the industry's pivot to 800 V DC is forced, not chosen.

Today’s NVIDIA GB300 NVL72 rack pulls ~140 kW — call it twenty toasters welded together. Vertiv just productized a reference design that supports 600 kW per rack and 12.5 MW per system. The next NVIDIA generation, Vera Rubin, is being deployed at scales of 130,000 GPUs at a single Nscale site for Microsoft.

A single Vera Rubin rack costs $3–7 million. The rack is now a more valuable object than the building it’s sitting in.
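The density curve above reduces to plain arithmetic; the figures are the ones cited in this piece:

```python
# Rack power density progression, using the figures cited in the article.
racks_kw = {
    "2018 enterprise": 8,
    "2022 hyperscale (H100 row)": 30,
    "GB200 NVL72": 120,
    "GB300 NVL72": 140,
    "Vertiv 600kW reference": 600,
    "Rubin / Kyber-class (2027+)": 1000,
}

baseline = racks_kw["2018 enterprise"]
for name, kw in racks_kw.items():
    # Multiple relative to a 2018 enterprise rack
    print(f"{name:30s} {kw:5d} kW  ({kw / baseline:.0f}x)")
```

The 1,000 kW endpoint divided by the 8 kW baseline is where the "125×" headline comes from.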


Layer 4 — Cooling: Welcome to the Boiler Room

You can’t air-cool a megawatt rack. Physics says no.

So liquid coolant is now piped directly onto the chip. The new reference design isn’t even just liquid — it’s two-phase, meaning the coolant boils as it absorbs heat (like sweat on your skin) and condenses elsewhere. Accelsius just launched a hyperscaler validation program for it called NeuCool HyperStart, claiming 35–44% operational savings.
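A rough sense of what two-phase buys you: in boiling, most of the heat goes into the phase change itself, so the required coolant flow is approximately the heat load divided by the latent heat of vaporization. The latent-heat figure below is an assumed ballpark for a dielectric working fluid, not any product's spec:

```python
# Back-of-envelope coolant flow for two-phase direct-to-chip cooling.
# In two-phase cooling most heat is absorbed by the phase change, so
# required mass flow ~= Q / h_fg. Both numbers below are ballparks.
RACK_HEAT_W = 140_000            # GB300 NVL72-class rack, ~140 kW
LATENT_HEAT_J_PER_KG = 100_000   # ~100 kJ/kg, typical dielectric fluid (assumed)

flow_kg_s = RACK_HEAT_W / LATENT_HEAT_J_PER_KG
print(f"Required coolant flow: {flow_kg_s:.1f} kg/s")
```

Single-phase liquid, by contrast, only gets sensible heat (flow scales with the allowed temperature rise), which is why it needs far higher flow rates and pumping power for the same rack.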

The supply chain for cold plates is being rebuilt from the ground up:

  • Coherent shipped its Thermadite 800 cold plate this March
  • Microloops is scaling to 40,000 cold plate units per year across China and Vietnam by year-end
  • Asia Vital Components is expanding hard into liquid cooling
  • Supermicro is expanding capacity specifically for Vera Rubin in H2 2026

There’s even a beautiful efficiency loop emerging — combined heat and power (CHP) systems can use the waste heat from the fuel cells to drive absorption chillers, cutting total power demand by another 20%. Heat goes in a circle. Nothing wasted.
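The "another 20%" claim reduces to simple bookkeeping: if heat-driven absorption chillers displace electric ones, that slice of facility draw disappears. The load split below is illustrative, not taken from any cited design:

```python
# Illustrative CHP bookkeeping: waste heat from fuel cells drives
# absorption chillers, displacing electrically driven cooling.
# All load figures are assumed for illustration.
it_load_mw = 100.0   # compute load
cooling_mw = 25.0    # electric chiller load it would otherwise need

before = it_load_mw + cooling_mw
after = it_load_mw               # chillers now run on waste heat
savings = (before - after) / before
print(f"Total draw: {before:.0f} MW -> {after:.0f} MW ({savings:.0%} saved)")
```

The exact percentage depends on climate and the cooling share of total load; the point is that heat which was a disposal problem becomes a working fluid.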


Layer 5 — The Grid Relationship: From Load to Partner

This is the most genuinely new thing.

NVIDIA partnered with six major US power producers — AES, Constellation, Invenergy, NextEra, Nscale Energy, Vistra — through a startup called Emerald AI. The mission: build grid-responsive AI factories. Data centers that don’t just consume electricity but actively participate in grid management — throttling down when the grid is stressed, returning power when the grid needs it.

This flips fifty years of utility relationships on their head. A data center used to be a giant unmovable block of load that utilities had to plan around. Now it’s a dispatchable resource, a battery the grid can call on.
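In code, "dispatchable resource" behavior looks something like the toy controller below. The function, thresholds, and interface are all hypothetical, not Emerald AI's actual system:

```python
# Toy sketch of a grid-responsive AI factory: cap training load as a
# grid stress signal rises, and export surplus on-site generation.
# Names and thresholds are hypothetical illustrations.
def dispatch(grid_stress: float, onsite_gen_mw: float,
             max_load_mw: float = 500.0) -> dict:
    """grid_stress in [0, 1]; returns the load cap and any grid export."""
    if grid_stress < 0.5:
        cap = max_load_mw              # normal operation
    elif grid_stress < 0.8:
        cap = max_load_mw * 0.6        # throttle training runs
    else:
        cap = max_load_mw * 0.25       # emergency curtailment
    export = max(0.0, onsite_gen_mw - cap)  # surplus flows back to the grid
    return {"load_cap_mw": cap, "export_mw": export}

print(dispatch(grid_stress=0.9, onsite_gen_mw=400.0))
# high stress: run at 125 MW, export 275 MW of on-site generation
```

Training jobs are unusually good at this: checkpointing makes a GPU fleet one of the few gigawatt-class loads that can shed power in seconds without losing work.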

It’s also why the White House just declared transformers, transmission lines, substations, and high-voltage circuit breakers “essential to national defense”. Transformer prices are up 77% since 2019. The bottleneck has gotten serious enough to be a security issue.

The “data center as island” narrative is wrong. Constellation’s Joseph Dominguez was explicit this week: nuclear-co-located data centers stay grid-connected and return energy when the grid needs it. The hyperscale buildout is becoming a grid asset, not a grid drain.

What I’d Watch Next

  • First commercial 800 V DC GPU rack deployment — the spec isn’t standardized yet (NVIDIA vs OCP), and whoever wins shapes the next decade.
  • First SMR (small modular reactor) to break ground under a hyperscaler PPA — there are letters of intent everywhere; no shovels yet.
  • Bloom Energy’s capacity ramp — if 1.2 → 2 → 4+ GW executes, fuel cells move from “interesting” to “structural.”
  • PJM queue reform outcomes — the only policy lever that changes the demand picture inside five years.
  • Two-phase D2C TCO data from Accelsius HyperStart pilots — validates or kills the case that two-phase is worth the complexity over single-phase liquid.

The Future: When Computing Becomes Civil Engineering

Here’s the picture I want you to leave with.

In 2018, building a data center was a real-estate problem. You bought land near a fiber line, signed a power contract, and rolled in racks.

By 2030, building an AI factory will be a heavy industrial project. Picture it: a campus the size of an oil refinery. Rows of fuel cells humming next to a small modular reactor that came online during the Trump-Vance administration’s permitting reforms. Solid-state transformers the size of shipping containers. A river of 800-volt DC current pouring down copper busbars into liquid-cooled racks where coolant boils against silicon at 40 °C.

Outside, a control room watches a real-time price signal from the grid. When the wind dies in West Texas at 7 PM, this AI factory throttles its training runs and sells power back to keep your air conditioner on.

The grid doesn’t fight AI anymore. AI helps run the grid.

This is the trade we’re making. The AI revolution doesn’t just need silicon. It needs the biggest reinvention of industrial power infrastructure since the 1950s. And that’s actually wonderful news — because rebuilding the grid is something humans are extraordinarily good at. We did it once. We’re doing it again, faster, cleaner, and smarter.

The 12-year queue at PJM is real. The transformer shortage is real. The challenges are real.

But so are the engineers welding cold plates in Vietnam, the linemen pulling new transmission in Virginia, the fuel-cell technicians installing 2.8 gigawatts for Oracle in 55 days, and the policy folks writing the next chapter of grid regulation.

The bottleneck for the next decade of AI isn’t a chip. It’s a plug, a pipe, and a permit.

And we’re going to ship them all.

Sources

  1. 130,000 NVIDIA Rubin GPUs are being deployed at Nscale for Microsoft. (Source: @hms1193, surfaced Apr 2026)
  2. Alphabet's technical infrastructure capex was allocated 60% to servers and 40% to data centers and networking. (Source: GOOGL, surfaced Apr 2026)
  3. Bloom Energy expanded its partnership with Oracle to support up to 2.8 GW of fuel cell deployments for AI and cloud infrastructure. (Source: @wallstengine, surfaced Apr 2026)
