The New Power Stack
There are 2,600 gigawatts of power waiting in line to plug into America’s electric grid right now.
That’s more than twice everything the United States has ever built — every coal plant, every dam, every wind farm, every nuclear reactor since 1882. Stacked into a queue. Waiting. Some of those projects will wait twelve years.
If you’re trying to build the data center that trains the next GPT, that timeline is a death sentence. So a quiet thing has happened over the last eighteen months: the people building AI factories stopped waiting.
They started building their own power plants.
The Demand Wall
Let’s get the scale right, because the numbers are genuinely strange.
Picture it this way: 30 gigawatts is roughly the peak electrical demand of New York State. OpenAI — one company — wants that much for itself by 2030, and has already identified 8 GW of it.
Globally, data center electricity consumption more than doubles, from 460 TWh in 2024 to roughly 1,000 TWh by 2030. It’s not just OpenAI. Microsoft is deploying 130,000 next-gen NVIDIA Rubin GPUs at one Nscale site. Oracle is pouring tens of billions into new capacity. Alphabet’s putting 40% of its tech infrastructure capex into data centers and networking.
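For readers juggling units: gigawatts measure instantaneous power, terawatt-hours measure energy over time. A quick back-of-envelope, using only the figures above, shows how the two relate:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

# 460 TWh of annual consumption corresponds to this much average draw:
avg_draw_2024_gw = 460e12 / (HOURS_PER_YEAR * 1e9)   # ~52.5 GW
avg_draw_2030_gw = 1000e12 / (HOURS_PER_YEAR * 1e9)  # ~114 GW

# And OpenAI's 30 GW target, if it ran flat out for a full year:
openai_twh = 30e9 * HOURS_PER_YEAR / 1e12            # ~263 TWh

print(f"2024 average data center draw: {avg_draw_2024_gw:.1f} GW")
print(f"2030 average data center draw: {avg_draw_2030_gw:.1f} GW")
print(f"30 GW running continuously:    {openai_twh:.0f} TWh/year")
```

In other words, one company’s 2030 ambition, run continuously, would use energy on the same order as the entire global data center fleet did in 2024.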
The grid can’t keep up. So the data center industry is doing something it has never done before: it’s becoming a power industry.
The Five-Layer Stack
Here’s the mental model that unlocked this for me.
A traditional data center is a building. You plug it in. The grid feeds it. Done.
A 2026 AI factory is a chemical plant that happens to compute. It generates its own electricity, transforms voltage on-site with new physics, runs DC current through its bones, boils coolant against its chips, and talks back to the grid like a partner instead of a customer.
Let’s walk through each one.
Layer 1 — Source: Nukes, Fuel Cells, and a Little Honest Gas
The clean answer is nuclear.
Microsoft signed an 835 MW deal to restart Three Mile Island Unit 1. NextEra’s CEO said this week that the company sees its entire nuclear fleet as part of its data center strategy. There’s even a stealth startup, NX Atomics, designing a new reactor specifically for AI factories.
But Three Mile Island isn’t expected back online until 2027 at the earliest. Nuclear is the right answer for the 2030s. It’s not the answer for next quarter.
The surprise winner of 2026? Solid oxide fuel cells.
Bloom Energy — yes, that Bloom Energy — just signed a 2.8 GW deal with Oracle for AI infrastructure. Their commercial backlog grew 135% year-over-year. They delivered an Oracle “AI factory” power module in 55 days against a 90-day commitment.
Think of a fuel cell like a battery that never runs out, as long as you keep feeding it natural gas or hydrogen. No combustion, no smokestack, no air-quality permit fight. You can deploy it next to the building. Bloom’s CEO says it bluntly: “the debate over on-site power is over.”
For the bridge? Yes — gas turbines.
AWS just partnered with Siemens Energy on gigawatt-scale on-site generation and microgrids. Digital Realty is openly evaluating “bridge power” in markets where the utility just can’t deliver in time. The EIA expects fossil fuel capacity to grow because of this.
That’s the honest answer for 2026–2028. Anything else is greenwashing.
Layer 2 — The Substation Reset: 800 Volts of DC
Here’s the wonkiest layer, and I promise it’s the coolest.
Every data center you’ve ever seen takes high-voltage AC from the grid, drops it through a transformer to medium voltage, drops it again to 415 V three-phase AC, and then converts it to DC at the server. Each conversion loses energy.
The new playbook: a solid-state transformer — basically a refrigerator-sized power-electronics box from companies like Delta — takes the medium voltage straight to 800 V DC and runs it down a copper busbar to the rack. No more AC inside the building.
Why now? Because above ~250 kW per rack, the old 415 V AC busways become physically impractical. You either go to scary-high AC (and trip arc-flash safety codes) or you go DC. The industry is going DC. The result: a clean PUE of 1.15 in early reference designs — on par with the best hyperscale facilities ever built.
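The efficiency argument is easy to see with a toy calculation. The per-stage efficiencies below are my own illustrative assumptions, not vendor figures; the point is that losses compound multiplicatively, so every conversion stage you delete pays off:

```python
# Illustrative per-stage efficiencies (assumed, not vendor data).
# Legacy chain: HV AC -> MV transformer -> 415 V transformer -> rack AC/DC PSU
legacy_stages = [0.99, 0.985, 0.955]

# SST chain: MV AC -> solid-state transformer straight to 800 V DC -> rack DC/DC
sst_stages = [0.975, 0.985]

def chain_efficiency(stages):
    """Multiply per-stage efficiencies: losses compound down the chain."""
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

legacy = chain_efficiency(legacy_stages)
sst = chain_efficiency(sst_stages)
print(f"legacy AC chain: {legacy:.1%} delivered to the rack")
print(f"800 V DC chain:  {sst:.1%} delivered to the rack")
# At 1 GW of IT load, each percentage point saved is ~10 MW that
# never has to be generated -- or removed again as waste heat.
```

Under these assumed numbers the DC chain lands around three points better, and every point saved is power you neither buy nor cool.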
It’s the electrical-engineering equivalent of the iPhone moment: ten years from now, we’ll look at AC distribution the way we look at floppy disks.
Layer 3 — The Rack: From Toaster to Particle Accelerator
A 2018 server rack pulled about 8 kW — the draw of a handful of kitchen toasters.
Today’s NVIDIA GB300 NVL72 rack pulls ~140 kW — call it a hundred toasters welded together. Vertiv just productized a reference design that supports 600 kW per rack and 12.5 MW per system. The next NVIDIA generation, Vera Rubin, is being deployed at scales of 130,000 GPUs at a single Nscale site for Microsoft.
A single Vera Rubin rack costs $3–7 million. The rack is now a more valuable object than the building it’s sitting in.
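To see what those numbers mean at site level, here’s a back-of-envelope sketch. The 72-GPU rack size and 140 kW figure come from the GB300 NVL72 numbers above; treating the new generation the same way is my simplifying assumption:

```python
# Back-of-envelope for a 130,000-GPU site, assuming NVL72-style
# 72-GPU racks at ~140 kW each (both figures are estimates).
gpus = 130_000
gpus_per_rack = 72
kw_per_rack = 140

racks = -(-gpus // gpus_per_rack)        # ceiling division
it_load_mw = racks * kw_per_rack / 1000  # IT load alone, in MW

print(f"racks: {racks}, IT load: {it_load_mw:.0f} MW")
# ~1,800 racks and ~250 MW before cooling and conversion overhead.
# At a PUE of 1.15 that's roughly 290 MW at the meter -- a mid-size
# power plant's worth of demand from a single campus.
```

Which is the whole article in one number: a single GPU deployment now shows up on a utility’s planning map the way a new steel mill used to.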
Layer 4 — Cooling: Welcome to the Boiler Room
You can’t air-cool a rack pulling hundreds of kilowatts. Physics says no.
So liquid coolant is now piped directly onto the chip. The new reference design isn’t even just liquid — it’s two-phase, meaning the coolant boils as it absorbs heat (like sweat on your skin) and condenses elsewhere. Accelsius just launched a hyperscaler validation program for it called NeuCool HyperStart, claiming 35–44% operational savings.
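Why boiling matters: a coolant that changes phase absorbs its latent heat of vaporization on top of ordinary sensible heating. The fluid properties below are assumed ballpark values for a generic dielectric coolant, not any vendor’s datasheet:

```python
# Heat absorbed per kilogram of coolant, single-phase vs. two-phase.
# Assumed properties for a generic dielectric fluid (illustrative only).
cp = 1100.0          # J/(kg*K), liquid specific heat
latent = 90_000.0    # J/kg, heat of vaporization
delta_t = 10.0       # K of allowable temperature rise, single-phase

single_phase = cp * delta_t          # sensible heat only: 11,000 J/kg
two_phase = single_phase + latent    # plus latent heat: 101,000 J/kg

print(f"two-phase carries {two_phase / single_phase:.1f}x the heat per kg")
```

Under these numbers, each kilogram of boiling coolant hauls away roughly nine times the heat — which is why pumps, pipes, and flow rates can stay sane even as rack power climbs.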
The supply chain for cold plates is being rebuilt from the ground up:
- Coherent shipped its Thermadite 800 cold plate this March
- Microloops is scaling to 40,000 cold plate units per year across China and Vietnam by year-end
- Asia Vital Components is expanding hard into liquid cooling
- Supermicro is expanding capacity specifically for Vera Rubin in H2 2026
There’s even a beautiful efficiency loop emerging — combined heat and power (CHP) systems can use the waste heat from the fuel cells to drive absorption chillers, cutting total power demand by another 20%. Heat goes in a circle. Nothing wasted.
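Here’s a toy energy balance for that loop. Every number is illustrative (real fuel-cell and chiller efficiencies vary widely); it just shows where the “free” cooling comes from:

```python
# Toy energy balance for a fuel-cell CHP loop (all figures illustrative).
fuel_in_mw = 100           # chemical energy in the gas/hydrogen feed
electrical_eff = 0.55      # assumed solid-oxide electrical efficiency
waste_heat_frac = 0.35     # assumed recoverable heat; the rest is lost

power_mw = fuel_in_mw * electrical_eff   # 55 MW of electricity
heat_mw = fuel_in_mw * waste_heat_frac   # 35 MW of recoverable heat

# An absorption chiller turns that heat into cooling instead of
# drawing compressor power from the electrical bus.
chiller_cop = 0.7                        # typical single-effect absorption COP
cooling_mw = heat_mw * chiller_cop       # cooling delivered "for free"

# That cooling would otherwise cost electricity via vapor-compression
# chillers (COP ~4), so the avoided electrical load is:
offset_mw = cooling_mw / 4.0
print(f"power: {power_mw:.0f} MW, free cooling: {cooling_mw:.1f} MW, "
      f"avoided load: {offset_mw:.1f} MW")
```

With these assumptions the avoided load is around ten percent of output; push the recovered-heat fraction and chiller performance harder and you approach the 20% figure claimed above.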
Layer 5 — The Grid Relationship: From Load to Partner
This is the most genuinely new thing.
NVIDIA partnered with six major US power producers — AES, Constellation, Invenergy, NextEra, Nscale Energy, Vistra — through a startup called Emerald AI. The mission: build grid-responsive AI factories. Data centers that don’t just consume electricity but actively participate in grid management — throttling down when the grid is stressed, returning power when the grid needs it.
This flips fifty years of utility relationships on their head. A data center used to be a giant unmovable block of load that utilities had to plan around. Now it’s a dispatchable resource, a battery the grid can call on.
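As a sketch of the control idea — and to be clear, none of the names or thresholds below come from Emerald AI; this is purely hypothetical — the site watches a grid price signal and curtails its power cap between a “cheap” price and a “stressed” one:

```python
# Hypothetical grid-responsive throttle: scale the site's power cap
# down as the real-time grid price rises. All names and numbers are
# invented for illustration.
def compute_power_cap(price_per_mwh: float,
                      baseline_mw: float,
                      cheap: float = 40.0,
                      stressed: float = 200.0,
                      floor_frac: float = 0.5) -> float:
    """Below `cheap` $/MWh, run flat out; above `stressed`, drop to
    `floor_frac` of baseline (checkpoint training, shed deferrable
    jobs); interpolate linearly in between."""
    if price_per_mwh <= cheap:
        return baseline_mw
    if price_per_mwh >= stressed:
        return baseline_mw * floor_frac
    span = (price_per_mwh - cheap) / (stressed - cheap)
    return baseline_mw * (1.0 - span * (1.0 - floor_frac))

# A calm grid vs. a 7 PM West Texas scarcity spike, for a 500 MW site:
print(compute_power_cap(30.0, 500.0))    # full training throughput
print(compute_power_cap(120.0, 500.0))   # partial curtailment
print(compute_power_cap(900.0, 500.0))   # floor: capacity back to the grid
```

The hard part isn’t this function — it’s making a training job tolerate the throttle, which is exactly the workload-orchestration problem the Emerald AI partnership exists to solve.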
It’s also why the White House just declared transformers, transmission lines, substations, and high-voltage circuit breakers “essential to national defense”. Transformer prices are up 77% since 2019. The bottleneck has gotten serious enough to be a security issue.
What I’d Watch Next
- First commercial 800 V DC GPU rack deployment — the spec isn’t standardized yet (NVIDIA vs OCP), and whoever wins shapes the next decade.
- First SMR (small modular reactor) groundbreaking under a hyperscaler PPA — there are letters of intent everywhere; no shovels yet.
- Bloom Energy’s capacity ramp — if annual manufacturing capacity actually executes the 1.2 → 2 → 4+ GW ramp, fuel cells move from “interesting” to “structural.”
- PJM queue reform outcomes — the only policy lever that changes the demand picture inside five years.
- Two-phase direct-to-chip (D2C) TCO data from Accelsius HyperStart pilots — validates or kills the case that two-phase is worth the complexity over single-phase liquid.
The Future: When Computing Becomes Civil Engineering
Here’s the picture I want you to leave with.
In 2018, building a data center was a real-estate problem. You bought land near a fiber line, signed a power contract, and rolled in racks.
By 2030, building an AI factory will be a heavy industrial project. Picture it: a campus the size of an oil refinery. Rows of fuel cells humming next to a small modular reactor that came online during the Trump-Vance administration’s permitting reforms. Solid-state transformers the size of shipping containers. A river of 800-volt DC current pouring down copper busbars into liquid-cooled racks where coolant boils against silicon at 40 °C.
Outside, a control room watches a real-time price signal from the grid. When the wind dies in West Texas at 7 PM, this AI factory throttles its training runs and sells power back to keep your air conditioner on.
The grid doesn’t fight AI anymore. AI helps run the grid.
This is the trade we’re making. The AI revolution doesn’t just need silicon. It needs the biggest reinvention of industrial power infrastructure since the 1950s. And that’s actually wonderful news — because rebuilding the grid is something humans are extraordinarily good at. We did it once. We’re doing it again, faster, cleaner, and smarter.
The 12-year queue at PJM is real. The transformer shortage is real. The challenges are real.
But so are the engineers welding cold plates in Vietnam, the linemen pulling new transmission in Virginia, the fuel-cell technicians installing 2.8 gigawatts for Oracle in 55 days, and the policy folks writing the next chapter of grid regulation.
The bottleneck for the next decade of AI isn’t a chip. It’s a plug, a pipe, and a permit.
And we’re going to ship them all.