AI’s real bottleneck is infrastructure: power, space and cooling

Giulia Rinaldi

02/02/2026

06/02/2026 - 15:21


Chips are arriving faster than data centers can power and cool them - reshaping where returns are likely to emerge.

AI talk focuses on better models and faster chips, but running systems at scale depends on electricity, floor space, and cooling. AI data centers increasingly look like industrial energy and thermal assets, not conventional IT. If a site can’t secure power and safely remove heat, new hardware doesn’t become usable compute. Capacity is productive only when physical systems are in place.

Why data centers are the pinch point

Global capacity looks ample on paper, but much sits behind permits, grid upgrades, and long‑lead equipment like transformers, switchgear, and backup systems. Even where regional power exists, delivering it at high rack density with Tier‑grade reliability requires redesigned electrical layouts, added redundancy, and scarce commissioning talent. Large facilities in major hubs often need 24–72 months to activate, with delays driven by interconnection queues, supply‑chain bottlenecks, and labor constraints. Developers and hyperscalers are shifting to secondary markets with faster power access and clearer approvals.

Read more: These 2 stocks signal that the AI bubble may be starting to crack

Cooling moves to centre stage

Cooling is the second binding limit. Air cooling is near its practical ceiling for AI training. Production racks routinely run at 50–120 kW, with newer systems higher; airflow becomes inefficient and costly at those densities. Direct‑to‑chip liquid and hybrid air‑liquid cooling are moving from pilots to standard practice in new builds and significant retrofits. Brownfield sites can often support inference with partial liquid solutions or rear‑door heat exchangers, while dense training clusters need purpose‑built, liquid‑ready designs. Investment in liquid‑cooling infrastructure is accelerating.
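To see why those densities strain air cooling, a back‑of‑envelope sketch helps. The numbers below are illustrative assumptions (a 100 kW rack within the 50–120 kW range above, a hypothetical 1,000‑rack hall, and an assumed PUE of 1.2), not figures from any specific facility:

```python
# Back-of-envelope heat-load estimate for a dense AI training hall.
# Assumptions (illustrative only): 100 kW per rack, 1,000 racks, PUE 1.2.

rack_kw = 100    # IT load per rack, kW (within the 50-120 kW range cited)
racks = 1_000    # hypothetical hall size
pue = 1.2        # power usage effectiveness: facility power / IT power

it_load_mw = rack_kw * racks / 1_000   # total IT load in MW
facility_mw = it_load_mw * pue         # total draw from the grid, incl. overhead
heat_to_reject_mw = it_load_mw         # essentially all IT power exits as heat

print(f"IT load: {it_load_mw:.0f} MW")
print(f"Facility draw at PUE {pue}: {facility_mw:.0f} MW")
print(f"Heat to reject: ~{heat_to_reject_mw:.0f} MW")
```

On these assumptions a single hall must continuously reject on the order of 100 MW of heat, which is the scale at which liquid loops displace air.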

Structural limits, not a cycle

These constraints stem from physics, grid planning, and construction timelines. A demand slowdown wouldn’t eliminate the need to upgrade power and thermal systems already near limits, and faster chip deliveries don’t create effective capacity if infrastructure isn’t ready.

The pick‑and‑shovel opportunity - plus risks

Value is shifting to suppliers of enabling infrastructure: switchgear, transformers, UPS, power distribution, heat exchangers, cold plates, coolant distribution units, chillers, and liquid‑loop integration. These companies work on multi‑year projects with milestone billing, offering better revenue visibility and less dependence on which chip or algorithm leads. Key risks are executional: project delays, component inflation, labor shortages, and integration errors. Industry expectations call for very large cumulative data‑center capex through 2030, with power and cooling taking a larger share.

A regional reality check

Constraints are most acute in Tier‑1 hubs like Northern Virginia, Dublin, Frankfurt, and Singapore, where grid capacity, permitting friction, and community opposition restrict growth. Expansion is shifting to secondary U.S. markets and power‑rich regions with hydro resources or favorable PPAs. To ease approvals and improve sustainability, developers are adopting water‑efficient and waterless cooling technologies, especially in water‑stressed areas.

Read more: The risks of artificial intelligence that no one talks about

Investor takeaways

Underwrite power, not promises: how many megawatts are secured, when do they arrive, and at what rack density? Backup power and peak‑load profiles matter as much as headline capacity.
Back proven integrators: execution is the moat; prioritise operators with a record of delivering high‑density, liquid‑ready capacity on time.
Track the cooling mix: expect direct‑to‑chip liquid to dominate near term; immersion will grow in the highest‑density zones. Hybrids will bridge legacy rooms.
Diversify exposure: pair core positions in compute and platforms with power and thermal enablers to reduce volatility and improve balance.
Mind geography and water: markets with shorter interconnection queues, proven grid capacity, and water‑efficient heat rejection offer faster revenue and lower execution risk.
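The "underwrite power, not promises" point reduces to simple arithmetic. A minimal sketch, using hypothetical inputs (60 MW secured, PUE 1.25, 100 kW racks — none of these figures come from the article):

```python
# Rough capacity underwriting: how many AI racks can secured power support?
# Hypothetical inputs -- 60 MW secured, PUE 1.25, 100 kW target rack density.

secured_mw = 60    # power contracted at the utility interconnect
pue = 1.25         # facility overhead (cooling, distribution losses)
rack_kw = 100      # target rack density

it_power_kw = secured_mw * 1_000 / pue   # power left for IT after overhead
racks = int(it_power_kw // rack_kw)      # whole racks supportable

print(f"IT power available: {it_power_kw:.0f} kW")
print(f"Racks supportable at {rack_kw} kW/rack: {racks}")
```

The same 60 MW supports far fewer racks as density rises, which is why headline megawatts mean little without the rack‑density and delivery‑date assumptions behind them.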

Bottom line

AI progress is governed by megawatts, floor space, and heat rejection. Software draws headlines; power and thermals set the timetable. Pricing power is migrating to scarce, commissioned capacity and reliable delivery. Assets that convert capex into live megawatts quickly achieve higher utilisation, longer contracts, and better cash‑flow visibility. Infrastructure readiness is becoming a determinant of return on invested capital, not a technical footnote.
