Artificial intelligence is reshaping the global energy landscape. Training frontier models and running inference at scale now demands power measured in gigawatts -- rivaling the electricity consumption of entire nations.
For decades, data centers operated at megawatt scale. A 50 MW campus was considered large. The AI revolution has changed the arithmetic entirely. A single frontier model training run can consume hundreds of megawatts over months. Inference demand scales with every new user, every new application, every new enterprise deployment.
The industry now plans in gigawatts. Individual campuses are being designed at 1 GW and above -- the power equivalent of a nuclear reactor or a city of one million people.
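The scale comparisons above can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch, using round illustrative reference values (roughly 1 GW electrical output per large nuclear reactor and roughly 1 kW of average total electric draw per US resident; neither figure comes from this page's sources):

```python
# Assumed round reference values, for illustration only.
REACTOR_OUTPUT_GW = 1.0    # a typical large nuclear reactor, ~1 GW electrical
PER_CAPITA_DRAW_KW = 1.0   # rough average total electric draw per US resident

def campus_equivalents(campus_gw: float) -> dict:
    """Express a campus's power draw in reactor and population equivalents."""
    return {
        "reactors": campus_gw / REACTOR_OUTPUT_GW,
        # campus_gw * 1e6 kW of load, divided by per-person draw, in millions
        "people_millions": campus_gw * 1e6 / PER_CAPITA_DRAW_KW / 1e6,
    }

print(campus_equivalents(1.0))   # a 1 GW campus: ~1 reactor, ~1M people
print(campus_equivalents(10.0))  # a 10 GW buildout such as Stargate's target
```

Under these assumptions, a 1 GW campus does indeed track with one reactor or a city of about one million people, which is where the industry's shorthand comes from.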
"The unit of measure has become gigawatts." -- On the scale of AI infrastructure investment required, Fortune, September 2025
Gigawatt-scale campuses require dedicated power sources: natural gas peaker plants, on-site nuclear (SMRs), solar farms, and grid interconnection agreements measured in billions of dollars and years of permitting.
Connecting loads of 500 MW or more to existing transmission infrastructure strains regional grids. FERC rulemaking on large-load interconnection is reshaping how data center developers access the power they need.
Liquid cooling, immersion cooling, and novel thermal management architectures become mandatory at gigawatt density. Air cooling alone cannot dissipate the heat generated by tens of thousands of GPUs operating simultaneously.
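The thermal arithmetic behind that claim can be sketched quickly. Assuming roughly 1 kW dissipated per accelerator (an illustrative figure, not a vendor specification) and standard heat-capacity values for air and water, the airflow required to carry the heat away becomes impractical long before liquid cooling does:

```python
# Why air cooling alone fails at gigawatt density: illustrative assumptions only.
N_GPUS = 50_000
GPU_POWER_KW = 1.0  # assumed ~1 kW of heat per accelerator

heat_load_mw = N_GPUS * GPU_POWER_KW / 1000  # total heat to reject, in MW

# Air: cp ~ 1.005 kJ/(kg*K), density ~ 1.2 kg/m^3, assumed 15 K temperature rise
air_kg_per_s = heat_load_mw * 1000 / (1.005 * 15)
air_m3_per_s = air_kg_per_s / 1.2

# Water: cp ~ 4.186 kJ/(kg*K), assumed 10 K temperature rise
water_kg_per_s = heat_load_mw * 1000 / (4.186 * 10)

print(f"{heat_load_mw:.0f} MW of heat to reject")
print(f"air:   ~{air_m3_per_s:,.0f} m^3/s of airflow required")
print(f"water: ~{water_kg_per_s:,.0f} kg/s of coolant required")
```

Even at these conservative assumptions, the air volume required is on the order of thousands of cubic meters per second, while water's far higher heat capacity moves the same load with a small fraction of the mass flow.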
At gigawatt scale, the network fabric connecting GPU clusters becomes a first-class engineering challenge. Ultra-low latency interconnects must span buildings, not just racks.
State-level legislation on data center energy use, environmental permitting, water consumption limits, and grid reliability standards creates a complex regulatory landscape for gigawatt-scale deployments.
Public and investor pressure demands that gigawatt-scale AI operations achieve carbon neutrality. Renewable energy procurement, carbon credit strategies, and clean energy generation are now table-stakes requirements.
ChatGPT's rapid adoption triggers a global race to build AI training and inference capacity. Hyperscaler capex projections begin climbing from tens of billions to hundreds of billions annually.
Data center developers secure land, power, and permits for campuses in the 100-500 MW range. Grid operators begin reporting unprecedented demand growth from AI workloads.
The first gigawatt-class AI campuses break ground. NVIDIA introduces the Gigawatt AI Factories concept, and the industry coalesces around the gigawatt as the standard unit for measuring AI infrastructure ambition. Hyperscaler capex crosses $500 billion.
FERC finalizes large-load interconnection rules. Regional grid operators confront multi-gigawatt shortfalls. The Stargate project scales toward its 10 GW target. Power -- not compute -- becomes the binding constraint on AI progress.
Individual operators manage portfolios measured in tens of gigawatts. Nuclear renaissance accelerates as SMRs deploy alongside AI campuses. The convergence of AI and energy becomes one of the defining industrial transformations of the decade.
The following table tracks publicly announced data center campuses and AI infrastructure projects with planned capacity of 1 GW or greater. This is not exhaustive -- many projects remain under NDA or in early permitting stages. Data sourced from company press releases, SEC filings, and verified reporting as of Q1 2026.
| Project | Developer | Capacity | Location | Investment | Status |
|---|---|---|---|---|---|
| Stargate Phase 1 | OpenAI / Oracle / SoftBank | 10 GW | Abilene, TX (initial); multi-site | $500B | Under construction (1.2 GW Phase 1) |
| Goodnight Campus | Crusoe Energy | 1 GW+ | Claude, TX | $29B | Announced Q1 2026 |
| Abilene Campus | Crusoe Energy / Lancium | 1.2 GW | Abilene, TX | $11.6B | Under construction |
| Wyoming JV | Crusoe Energy / Tallgrass | 1.8 GW | Wyoming | -- | Development |
| Polaris Forge 1 | Applied Digital (APLD) | 1 GW+ | Ellendale, ND | $11B+ lease | Phase 1 operational; expanding |
| Polaris Forge 2 | Applied Digital (APLD) | 1 GW | Undisclosed, US | $5B lease | Announced Q4 2025 |
| Project Horizon | CoreWeave | 2 GW | West Texas | -- | Development |
| Clean Campuses | Lancium | 1.2 GW+ | Abilene, TX (initial); multi-site | $600M debt | Under construction |
| Stargate UAE | OpenAI / G42 / MGX / Microsoft | 5 GW | Abu Dhabi, UAE | $60B+ | Announced Q1 2026 |
Sources: Company press releases, SEC filings (APLD 10-K), project announcements. Capacity figures represent planned maximum buildout, not current operational capacity. Investment figures include infrastructure, equipment, and long-term lease commitments where disclosed.
The transition from megawatt to gigawatt scale is not merely quantitative -- it fundamentally changes the economics and logistics of AI infrastructure deployment. Several interconnected dynamics are driving this shift.
Training compute is doubling roughly every 6-9 months. Each generation of frontier models requires more FLOPs than the last. GPT-4 training consumed an estimated 50-100 MW over several months. Next-generation models are projected to require sustained power draws measured in hundreds of megawatts -- pushing single-campus requirements past the gigawatt threshold.
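The compounding described above can be made concrete. A minimal sketch, assuming power draw scales with training compute, starting from roughly 75 MW (the midpoint of the GPT-4 estimate cited above); everything else is an assumption for illustration:

```python
# How quickly sustained training power crosses 1 GW under the cited
# doubling rates. Starting point and scaling assumption are illustrative.
START_MW = 75.0  # midpoint of the 50-100 MW GPT-4 estimate

def projected_power_mw(months: float, doubling_months: float) -> float:
    """Projected training power draw after `months` of compounding growth."""
    return START_MW * 2 ** (months / doubling_months)

for doubling in (6, 9):
    months_to_gw = 0
    while projected_power_mw(months_to_gw, doubling) < 1000:
        months_to_gw += 1
    print(f"doubling every {doubling} mo: ~{months_to_gw} months to 1 GW")
```

Under these assumptions, single-run power requirements cross the gigawatt threshold in roughly two to three years, which is consistent with the campus sizes now being announced.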
Inference is the long-tail cost. While training grabs headlines, inference (running trained models for end users) now accounts for the majority of AI compute demand. As AI applications proliferate across enterprise, consumer, and government use cases, inference demand scales with adoption -- and adoption is accelerating across every sector.
Power purchase agreements (PPAs) are being signed at unprecedented scale. Hyperscalers are locking in multi-gigawatt power supplies years in advance, competing with traditional industrial consumers and residential demand. This competition is driving up wholesale electricity prices in key markets, particularly in PJM Interconnection territory (the largest US grid operator, serving 65 million people across 13 states).
The "behind-the-meter" model is gaining traction. Rather than drawing from the public grid, several gigawatt-scale developers are building dedicated power generation facilities co-located with their data centers. Natural gas, nuclear (both existing plants and next-generation SMRs), and large-scale renewables are all being deployed in this configuration -- reducing grid dependency and permitting complexity at the cost of higher upfront capital expenditure.
"Power is the new compute bottleneck." -- On why energy constraints, not chip supply, now limit AI scaling, 2026
Gigawatt-scale AI infrastructure is attracting regulatory attention at the federal, state, and international levels. The following developments are shaping the operating environment for large-load data center deployment.
The Federal Energy Regulatory Commission initiated a rulemaking proceeding on large-load interconnection, addressing how gigawatt-scale facilities connect to the transmission grid. Key issues include co-location with generation assets, cost allocation for grid upgrades, and queue reform for large loads. Comments from major data center operators, utilities, and grid operators reveal deep tensions between rapid AI buildout and grid reliability obligations. Final rulemaking expected Q2 2026.
The Department of Energy has exercised emergency authority under Section 202(c) of the Federal Power Act in response to grid reliability concerns linked to data center load growth. This represents the first use of emergency grid authority directly attributable to AI infrastructure demand -- a precedent with significant implications for future siting decisions.
Over 200 state bills addressing data center energy consumption, water use, tax incentives, and environmental impact were introduced across US state legislatures in 2025. Key themes include mandatory renewable energy procurement percentages, water recycling requirements, local grid impact assessments, and clawback provisions on tax incentives tied to job creation thresholds. Virginia, Texas, Georgia, and Ohio are the primary battlegrounds.
PJM Interconnection's capacity auction results revealed a 6.6 GW shortfall against reliability targets, driven by accelerating data center load growth in Northern Virginia and adjacent markets. Capacity prices hit record levels. The results intensify the debate over whether AI infrastructure developers should bear a greater share of grid upgrade costs and whether existing market structures can accommodate gigawatt-scale load growth.
Gigawatt-scale AI infrastructure is becoming a geopolitical priority. The UAE's $60B+ Stargate partnership (OpenAI/G42/MGX/Microsoft) signals sovereign wealth funds competing for AI compute capacity. The EU's AI Factories initiative, Japan's MEXT AI compute procurement, and Saudi Arabia's NEOM-linked AI infrastructure investments each represent national-scale commitments to gigawatt-class AI capability -- extending the concept beyond the US market.
The following public sources inform the data and analysis presented on this page. All figures are drawn from official filings, agency publications, or verified reporting.
This page tracks the emergence of gigawatt-scale artificial intelligence infrastructure as a distinct category at the intersection of energy systems, data center engineering, and AI compute. The term "gigawatt AI" has entered mainstream industry vocabulary through adoption by semiconductor manufacturers (NVIDIA's Gigawatt AI Factories program), hyperscale cloud providers, infrastructure developers, energy analysts (Goldman Sachs, McKinsey, IEA), and regulatory bodies (FERC, DOE) worldwide.
Content is curated from public filings, government agency publications, industry analyst reports, and verified journalism. No proprietary data is included. Project data in the Market Data table is drawn from company press releases and SEC filings.
GigawattAI.com is held as a premium digital asset in the AI energy infrastructure category.