About 0.3 percent of US power was generated by microgrids in 2024, but data centers use about 4.4 percent of US power today, a figure expected to grow to about 12 percent by 2030. The urgent rush to develop new and more capable “frontier models,” which are critical to the functioning of AI applications, is viewed as an existential requirement for hyperscalers and is inherently linked to enormous energy consumption. These models are developed using power-hungry machine learning algorithms that run on graphics processing units (GPUs), tensor processing units (TPUs), and conventional central processing units (CPUs).
The power required to create these frontier models has become a limiting factor for hyperscalers seeking to remain relevant and competitive, driving them to increasingly act as their own utilities. Traditionally, data centers sourced power from utilities, but new hyperscale data centers are unwilling to wait through five-plus-year planning cycles to access grid power. For example, the Stargate data center currently under construction is planned for a power consumption of 1.2 GW at its flagship site in Abilene, Texas. Stargate is a portfolio of massive sites designed to reach a total commitment of 10 GW and $500 billion in investment across the US.
Crusoe Energy is building these data centers and is actively developing the power plants and underlying infrastructure required to support the initiative at the flagship Abilene campus. In this effort, Crusoe is acting as a vertically integrated AI infrastructure provider, handling both the power generation and the data center build.
The Hypergrid Regulation Problem
The regulation of microgrids has been problematic. FERC Order No. 2023 (issued July 2023) has helped reduce connection queues for new power sources by introducing the Cluster Study Process, the “First-Ready, First-Served” reform, and firm deadlines for grid operators to complete studies, including financial penalties for failure to process requests on time. FERC Order No. 2023 deals exclusively with the generator interconnection queue and applies to new gas and nuclear power plants, renewables such as wind and solar, and energy storage.
Historically, a data center’s primary function has been to act as a massive consumer (load) of electricity. Connecting a load, such as a factory or data center, has traditionally fallen under the authority of state public utility commissions (PUCs), not FERC. Because Order No. 2023 addresses only generator queues, it provides no relief for load interconnection queues, which are the primary source of the multiyear delays faced by data centers.
If a data center’s microgrid meets the regulatory requirements to sell power (export) to the wholesale interstate grid—for example, by qualifying as a Qualifying Facility or Exempt Wholesale Generator—the interconnection of that specific generating asset would be governed by FERC’s generator interconnection procedures, including the 2023 reforms. However, data centers can simultaneously be large loads, making them subject to state utility regulation as well as certain federal approvals.
The US Department of Energy (DOE) has formally urged FERC to initiate rulemaking to clarify federal jurisdiction and establish standardized rules for the interconnection of large electrical loads, typically defined as greater than 20 MW and including data centers. However, the jurisdictional boundary between state and federal authority remains unsettled as of the end of 2025.
The Hypergrid Interconnection Problem
In principle, utilities welcome additional business and the opportunity to sell power to data centers, but hyperscalers are not typical grid customers. In the current frenzied rush to build data centers, utilities are not prepared to meet the aggressive schedules that data center customers demand.
The Stargate project is a massive joint-venture data center complex involving OpenAI, Oracle, and SoftBank. The project relies on Crusoe to address the primary bottleneck facing new hyperscale AI data centers: the speed and availability of power. Crusoe is the developer and operator of Stargate’s flagship campus in Abilene, Texas, which is planned to scale up to 1.2 GW of power capacity. Crusoe’s core business model is to control the full stack, from power generation and energy procurement to data center design and hardware deployment, enabling sites to come online in months rather than years.
Bridge Power: For the Abilene site, Crusoe is installing GE Vernova LM2500XPRESS aeroderivative gas turbines. This on-site natural gas plant is a crucial component that allows the data center to energize quickly, bypassing slow utility interconnection queues. These units are flexible, highly efficient, and capable of providing nearly 1 GW of power.
Renewable Integration: The Abilene site is also strategically located to draw on the region’s abundant wind power, a key factor in Crusoe’s site selection, and uses large-scale behind-the-meter battery storage and solar resources.
Backup/Resilience: The gas turbines function as a highly responsive source of backup power for the data halls, replacing traditional, less efficient diesel generators and ensuring 24/7 reliability for highly sensitive AI workloads.
Future Plans: Crusoe has announced a long-term strategic partnership with Blue Energy to develop a massive, multi-gigawatt, nuclear-powered data center campus at the Port of Victoria, Texas, demonstrating its commitment to pioneering long-term, high-capacity generation solutions.
In short, Crusoe is not just building a building; it is building a Grid-Interactive Compute Plant (GICP)—a massive power generation and orchestration asset designed to serve the Stargate project’s unprecedented energy demands.
Stargate Data Center (Crusoe Energy)
The Utility Perspective on Power for Data Centers
Utilities have several key performance indicators that help them maintain reliable power, and they will assess whether a hypergrid improves or degrades these metrics. The electric grid (macrogrid) is designed to always have more power available than is being used at any given moment. This “excess generating capacity” is best measured by the Planning Reserve Margin (PRM). The reserve margin represents the amount of available generating capacity a region has above its anticipated peak demand.
The industry standard minimum target for reserve margin across most US regions has historically been around 15 percent. This reserve is intended to cover demand forecast errors and unplanned generator outages during peak periods. Spinning reserve, by contrast, is synchronized, online capacity of roughly 3 to 7 percent that can be deployed within seconds to help regulate grid frequency.
Both the reserve margin and spinning reserves are threatened by massive new loads, but with advanced grid control systems, hypergrids can be designed to improve both.
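As a back-of-the-envelope illustration of how a grid-interactive hypergrid changes this picture, the sketch below computes a regional planning reserve margin with and without a new 1 GW data center. Every capacity figure and the reserve_margin helper are hypothetical, chosen only to show the arithmetic, not drawn from any specific region.

```python
# Hypothetical illustration of how a hypergrid's dispatchable assets can
# raise a region's effective planning reserve margin (PRM).
# All numbers below are made up for the example.

def reserve_margin(available_capacity_mw: float, peak_demand_mw: float) -> float:
    """PRM = (available capacity - peak demand) / peak demand."""
    return (available_capacity_mw - peak_demand_mw) / peak_demand_mw

regional_capacity_mw = 80_000.0   # assumed installed, deliverable capacity
regional_peak_mw = 72_000.0       # assumed forecast summer peak

# A new 1,000 MW hypergrid adds load at peak...
hypergrid_load_mw = 1_000.0
# ...but can commit 700 MW of on-site generation/BESS discharge and
# 200 MW of curtailable compute (demand response) during grid stress.
hypergrid_dispatchable_mw = 700.0
hypergrid_dr_mw = 200.0

baseline = reserve_margin(regional_capacity_mw, regional_peak_mw)
with_passive_load = reserve_margin(regional_capacity_mw,
                                   regional_peak_mw + hypergrid_load_mw)
with_grid_interactive = reserve_margin(
    regional_capacity_mw + hypergrid_dispatchable_mw,
    regional_peak_mw + hypergrid_load_mw - hypergrid_dr_mw)

print(f"Baseline PRM:               {baseline:.1%}")
print(f"With passive 1 GW load:     {with_passive_load:.1%}")
print(f"With grid-interactive load: {with_grid_interactive:.1%}")
```

In this toy example, a passive 1 GW load pulls the margin from about 11 percent down to under 10 percent, while committing on-site generation and curtailable compute restores most of the lost margin.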
Conclusion and Outlook
In recent years in the US, non-dispatchable wind and solar power have dominated new power additions, but this new capacity has not kept pace with rising power demand, and both reserve margins and spinning reserves have declined. This is due in part to the retirement of generation assets such as steam turbines in coal and nuclear plants, as well as older gas generators. Advanced grid-forming inverters for solar PV and battery systems, along with advanced power converters for wind turbines and Static VAR Compensators (SVCs) and STATCOMs, can provide synthetic inertia and voltage regulation capabilities. While renewable power is not dispatchable, large grid-scale batteries are, and these batteries will play an increasingly important role for data centers, far beyond the function that traditional data center UPS systems served in the past.
Given the current crisis of rapidly rising data center power loads, aging infrastructure, and retiring firm generation, the most effective path to a more reliable grid requires new hypergrids to focus on advanced automation, grid-forming inverters, expanded battery storage, more effective demand response, and a more interconnected and digital grid.
Regulations for connecting hypergrids and microgrids to local macrogrids need to be improved through consistent rules that reduce connection queues without compromising grid stability or reliability. The split authority—where FERC regulates how power generation is added to the grid while state public utility commissions regulate how new loads are added—was established before microgrids were common. Today, the massive scale of hypergrids is placing significant pressure on these outdated regulatory structures. The US should strive to be more highly interconnected across North America to improve the effective reserve margin.
Ultimately, whether it is a 1 MW microgrid or a 700 MW hypergrid, designing these systems with advanced control technologies that enhance grid stability when connected to the macrogrid, while also meeting load requirements in island mode, would significantly ease interconnection. Both microgrids and hypergrids share these requirements:
Core Requirements
Protection and isolation (safety).
Limit harmonic distortion and voltage flicker.
Capability to absorb or inject reactive power (VARs) during both power import and export.
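As a concrete illustration of the reactive power requirement above, the sketch below implements a simple piecewise-linear volt-VAR droop of the kind smart inverters and STATCOMs provide: inject VARs when grid voltage is low, absorb them when it is high. The voltage breakpoints, deadband, and 100 MVAR rating are illustrative assumptions, not values from any real interconnection agreement.

```python
# Minimal volt-VAR droop sketch: the inverter fleet absorbs VARs when the
# grid voltage is high and injects VARs when it is low, which is the basic
# behavior a STATCOM (or a smart PV/BESS inverter) provides.
# Breakpoints and limits below are illustrative assumptions.

def volt_var_setpoint(v_pu: float, q_max_var: float) -> float:
    """Piecewise-linear volt-VAR curve (positive = inject VARs)."""
    v1, v2, v3, v4 = 0.95, 0.98, 1.02, 1.05  # assumed breakpoints (per unit)
    if v_pu <= v1:
        return q_max_var                      # full injection at low voltage
    if v_pu < v2:
        return q_max_var * (v2 - v_pu) / (v2 - v1)
    if v_pu <= v3:
        return 0.0                            # deadband: no reactive exchange
    if v_pu < v4:
        return -q_max_var * (v_pu - v3) / (v4 - v3)
    return -q_max_var                         # full absorption at high voltage

# Example: a 100 MVAR-capable inverter fleet responding to measured voltages.
for v in (0.94, 0.97, 1.00, 1.03, 1.06):
    print(f"V = {v:.2f} pu -> Q setpoint = {volt_var_setpoint(v, 100e6)/1e6:+.1f} MVAR")
```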
Advanced Requirements
The microgrid/hypergrid BESS and PV inverters should be capable of providing rapid, advanced voltage support to the utility’s distribution system, effectively acting as a high-speed STATCOM (Static Synchronous Compensator).
The microgrid/hypergrid should be able to modulate its real power output (MW) very quickly to participate in frequency regulation markets (see the droop-control sketch after this list).
Microgrids/hypergrids should have black start capability.
The microgrid/hypergrid must contractually offer spare generation and BESS capacity to participate in the utility’s demand response or virtual power plant (VPP) programs, agreeing to inject power or curtail load when the macrogrid is stressed.
Microgrids/hypergrids need to demonstrate that their advanced inverter controls are sophisticated enough to mimic the stabilizing effect of physical inertia, preventing severe frequency drops when a large generator trips offline.
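To make the frequency-related requirements concrete, the sketch below combines a proportional droop term (real power responds to the frequency deviation) with a synthetic-inertia term (real power responds to the rate of change of frequency), which is roughly how a grid-forming BESS controller helps arrest a frequency drop after a large generator trips. The droop percentage, deadband, inertia coefficient, and 300 MW rating are illustrative assumptions.

```python
# Minimal sketch of a droop + synthetic-inertia response for a hypergrid's
# BESS / grid-forming inverters. Parameter values are illustrative assumptions.

F_NOM_HZ = 60.0

def frequency_response_mw(f_hz: float, rocof_hz_per_s: float,
                          rated_mw: float,
                          droop_pct: float = 5.0,
                          deadband_hz: float = 0.036,
                          inertia_mw_s_per_hz: float = 200.0) -> float:
    """Return the real-power adjustment (positive = inject) requested from
    the BESS, combining proportional droop and a synthetic-inertia term."""
    dev = f_hz - F_NOM_HZ
    # Droop: outside the deadband, change output in proportion to deviation.
    if abs(dev) <= deadband_hz:
        droop_mw = 0.0
    else:
        effective_dev = dev - deadband_hz if dev > 0 else dev + deadband_hz
        droop_mw = -(effective_dev / (F_NOM_HZ * droop_pct / 100.0)) * rated_mw
    # Synthetic inertia: oppose the rate of change of frequency (ROCOF).
    inertia_mw = -inertia_mw_s_per_hz * rocof_hz_per_s
    # Clamp to the plant's rated capability.
    return max(-rated_mw, min(rated_mw, droop_mw + inertia_mw))

# Example: a large generator trips, frequency sags to 59.90 Hz and is still
# falling at 0.05 Hz/s; a 300 MW BESS contributes extra real power.
print(f"Requested injection: {frequency_response_mw(59.90, -0.05, rated_mw=300.0):+.1f} MW")
```

The same controller absorbs power (negative setpoint) when frequency runs high, which is what lets a hypergrid act as a two-way frequency regulation resource rather than just emergency backup.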
Where smaller microgrids typically relied on a mix of intermittent renewables (solar PV and wind), modest battery energy storage systems, and smaller, high-speed reciprocating diesel or gas engines for backup during island mode, hypergrids are defined by their sheer scale. These massive facilities integrate gigawatt-class gas turbines or large, modular fuel cell arrays alongside industrial-scale UPS systems and grid-scale BESS measured in tens or hundreds of megawatts (MW). The mission has shifted: traditional microgrids required a grid connection primarily to offload excess renewable generation that exceeded local load, whereas hypergrids are architected to become active partners in grid management, with significant potential to provide high-value grid services, including large-scale demand response (DR), frequency regulation, and dynamic voltage support through controlled injection and absorption of reactive power (VARs). In doing so, they transform the data center from a massive load into a dispatchable, revenue-generating asset.
Hyperscalers (Microsoft, Google, Amazon, Meta) continue to maintain ambitious public goals, such as achieving 100 percent renewable energy, yet many hypergrids are currently powered by natural gas. Hyperscalers are not abandoning their renewable commitments, but they are prioritizing “speed to power” over “immediacy of green power,” creating a significant and visible contradiction. They are not simply building gas plants; they are designing transitional, future-proof energy platforms in which the current reliance on natural gas is a deliberate, temporary step to address the speed-to-power constraint. This contradiction is driving a new hypergrid design philosophy centered on modularity, fuel flexibility, and long-term site viability for clean energy integration.
Hyperscalers are specifying natural gas turbines, often aeroderivative models, that are manufactured to be hydrogen-ready. Hypergrids are deploying BESS systems far larger than required for basic UPS backup. Power-first site selection has become a priority, and hyperscalers, together with their utility partners, are explicitly designing the hypergrid as a multi-phase energy complex intended to ultimately transition away from gas toward firm, zero-carbon energy sources. Site selection is based not only on available land, but also on access to underutilized high-voltage transmission lines or proximity to existing clean energy assets, such as retiring coal plants with established interconnection rights.
In summary, the hypergrid replaces the passive relationship characteristic of traditional microgrids with an active, contractual partnership with the utility, transforming a potentially disruptive massive load into a system-stabilizing asset. If designed correctly, hypergrids can reduce power costs and improve the reliability of the macrogrid on which everyone depends.
