Rack-Level DC Power Architecture Using Cluster Mesh Turbines for AI Data Centers

Hyperscaler Rack-Level DC Power Generation Using Supercritical CO2 with Busbar and Buffer

Replace centralized AC power chains with rack-level DC generation: Cluster Mesh turbines paired with DC busbars and buffer storage redefine efficiency, resilience, and density for next-generation AI data centers.

1. Introduction: The Shift to Rack-Level Power Domains

Modern AI data centers are rapidly transitioning from centralized AC distribution toward rack-level power architectures. The traditional model—medium voltage AC stepped down through UPS systems and converted again at the server—introduces multiple inefficiencies, thermal losses, and infrastructure complexity.

A new approach leverages Cluster Mesh turbine generators integrated at the rack level to produce direct current (DC) power, distributed via busbars at 48 VDC or higher DC voltages. This eliminates per-server AC power supply units (PSUs) and significantly simplifies the electrical chain.

2. Cluster Mesh Rack-DC Architecture Overview

The proposed architecture places power generation adjacent to the rack (sidecar or rear-mounted module) rather than embedding it within compute slots.

Core Components

1. Cluster Mesh Turbine Module (25–100 kW per rack)

• Supercritical CO2 or organic Rankine cycle (ORC) microturbine array

• Outputs regulated DC power

• Modular (1–4 × 25 kW units)

2. DC Bus (48 V or High Voltage DC)

• Rack-level busbar distribution

• Designed for high current (48 V) or reduced current (higher voltage DC)

3. Energy Buffer Layer

• Lithium-titanate battery or supercapacitor

• Handles transient GPU loads and ride-through

• Stabilizes turbine output

4. Compute Sleds (DC Input Only)

• No AC PSU

• Local DC-DC conversion (point-of-load regulators)

5. Cooling Module (Dedicated Blade or Sidecar)

• Liquid-cooling coolant distribution unit (CDU)

• Removes heat from GPUs and turbine electronics

• Optionally integrates waste heat recovery loop

3. Electrical Architecture Flow

Conventional Path (Legacy)

MV AC → Transformer → UPS → PDU → Rack PSU → Server VRMs

Cluster Mesh Rack-DC Path

Thermal Input → Cluster Mesh Turbine → DC Regulation → DC Bus → Buffer → Compute Nodes (POL only)

4. Voltage Strategy: 48 V vs Higher Voltage DC

48 VDC (Baseline)

• Proven (OCP Open Rack standard)

• High current at scale

• Mature ecosystem

Higher Voltage DC (Emerging)

• Reduced copper losses

• Lower current → smaller conductors

• Better for 50–100 kW+ racks

Engineering Recommendation:

• Use 48 VDC for compatibility and early deployment

• Transition to higher-voltage DC bus (e.g., 300–800 VDC class) for ultra-high-density AI racks
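To make the conductor-sizing trade-off concrete, the sketch below compares bus current and resistive (I²R) loss for a single rack at 48 VDC versus higher-voltage DC buses. The rack power and busbar resistance are illustrative assumptions, not measured values.

```python
# Illustrative comparison of busbar current and I^2*R loss at different DC bus voltages.
# Rack power and busbar resistance are assumptions for illustration only.

def bus_loss(rack_power_w: float, bus_voltage_v: float, busbar_resistance_ohm: float):
    """Return (current in A, loss in W) for a simple single-loop DC bus model."""
    current = rack_power_w / bus_voltage_v          # I = P / V
    loss = current ** 2 * busbar_resistance_ohm     # P_loss = I^2 * R
    return current, loss

RACK_POWER_W = 100_000   # assumed 100 kW AI rack
BUSBAR_R_OHM = 0.0005    # assumed 0.5 milliohm round-trip busbar resistance

for v in (48, 400, 800):
    i, p_loss = bus_loss(RACK_POWER_W, v, BUSBAR_R_OHM)
    print(f"{v:>4} VDC bus: {i:7.0f} A, I^2R loss ~ {p_loss / 1000:6.2f} kW")
```

Because loss scales with the square of current, the same busbar that dissipates kilowatts at 48 V dissipates only a few watts at 800 V, which is why higher-voltage DC becomes attractive above roughly 50 kW per rack.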

5. Efficiency Gains

Each eliminated conversion stage improves total system efficiency.

Conventional Loss Points

• Transformer losses

• UPS double conversion

• AC distribution losses

• Server PSU losses

Cluster Mesh Gains

• Direct DC generation

• Fewer conversion steps

• Reduced thermal load

• Lower parasitic losses

Estimated improvement:

5% to 15% total system efficiency gain depending on configuration.
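A minimal sketch of how stacked conversion losses compound is shown below. The per-stage efficiencies are illustrative assumptions chosen to fall within typical published ranges; point-of-load conversion is excluded because it appears in both paths.

```python
# Compare end-to-end electrical efficiency of the two power chains by
# multiplying assumed per-stage efficiencies. All stage values are illustrative.
from math import prod

conventional_stages = {
    "transformer / switchgear": 0.99,
    "UPS double conversion":    0.97,
    "AC distribution / PDU":    0.995,
    "server PSU (AC-DC)":       0.965,
}

cluster_mesh_stages = {
    "turbine DC regulation": 0.975,
    "DC bus + buffer":       0.99,
}

eta_conv = prod(conventional_stages.values())
eta_mesh = prod(cluster_mesh_stages.values())

print(f"Conventional chain efficiency:   {eta_conv:.1%}")
print(f"Cluster Mesh rack-DC efficiency: {eta_mesh:.1%}")
print(f"Electrical gain: {(eta_mesh - eta_conv) * 100:.1f} percentage points")
```

Under these assumptions the electrical chain alone gains roughly 4 percentage points; the additional system-level benefit comes from the reduced thermal load that no longer has to be cooled.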

6. Thermal Integration Advantage

The Cluster Mesh architecture enables co-design of power and cooling:

• Turbine waste heat can be reused or rejected efficiently

• Cooling loops can serve both compute and power modules

• Enables closed-loop thermal optimization

This is particularly valuable for AI workloads where thermal density exceeds 100 kW per rack.
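As a rough illustration of the cooling co-design problem, the sketch below estimates the liquid coolant flow needed to remove 100 kW of rack heat using the basic heat-balance relation Q = m·cp·ΔT. The coolant temperature rise and fluid properties are assumptions, not design values.

```python
# Rough liquid coolant flow estimate for a 100 kW rack: Q = m_dot * cp * dT.
# Temperature rise and fluid properties are illustrative (water-like coolant).

HEAT_LOAD_W = 100_000        # assumed 100 kW rack thermal load
CP_J_PER_KG_K = 4186         # specific heat of water
DELTA_T_K = 10.0             # assumed coolant temperature rise across the rack
DENSITY_KG_PER_M3 = 997      # water density near room temperature

mass_flow = HEAT_LOAD_W / (CP_J_PER_KG_K * DELTA_T_K)        # kg/s
vol_flow_lpm = mass_flow / DENSITY_KG_PER_M3 * 1000 * 60     # liters per minute

print(f"Required coolant flow: {mass_flow:.2f} kg/s (~{vol_flow_lpm:.0f} L/min)")
```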

7. Transient Load Management

AI workloads exhibit rapid power swings as GPU utilization moves between idle and peak.

Solution:

• DC buffer layer absorbs spikes

• Turbine operates at steady-state optimal efficiency

• Decouples mechanical generation from digital load dynamics
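The sketch below sizes a rack-level buffer for an assumed GPU power swing and ride-through window. The swing magnitude, durations, and usable depth of discharge are hypothetical values for illustration only.

```python
# Size a rack-level DC buffer for transient absorption and ride-through.
# Power levels, durations, and usable fraction are illustrative assumptions.

RACK_STEADY_KW = 80      # assumed turbine steady-state output for this rack
PEAK_SWING_KW = 120      # assumed instantaneous GPU peak demand
SWING_DURATION_S = 5     # assumed duration of the transient above steady state
RIDE_THROUGH_S = 30      # assumed full-load ride-through target
USABLE_FRACTION = 0.8    # assumed usable depth of discharge

transient_wh = (PEAK_SWING_KW - RACK_STEADY_KW) * 1000 * SWING_DURATION_S / 3600
ride_through_wh = PEAK_SWING_KW * 1000 * RIDE_THROUGH_S / 3600

required_wh = max(transient_wh, ride_through_wh) / USABLE_FRACTION
print(f"Transient absorption energy: {transient_wh:.0f} Wh")
print(f"Ride-through energy:         {ride_through_wh:.0f} Wh")
print(f"Minimum buffer capacity:     {required_wh:.0f} Wh")
```

In this example the ride-through requirement, not the transient spike, dominates the buffer size, which is why even modest battery or supercapacitor modules are sufficient to decouple the turbine from load dynamics.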

8. Reliability and Modularity

Advantages

• Rack-level fault isolation

• Modular turbine replacement

• Reduced dependency on centralized UPS systems

• Black-start capable racks (with buffer)

Design Considerations

• Redundant turbine modules (N+1 per rack group)

• Hot-swappable power sidecar

• Sealed working fluid systems
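As a simple illustration of the N+1 recommendation, the sketch below estimates the probability that a rack group loses generation capacity, assuming independent module failures and a hypothetical per-module availability.

```python
# Probability that an N+1 turbine group cannot carry the load, assuming
# independent module failures. Per-module availability is a hypothetical figure.
from math import comb

def group_unavailability(n_required: int, n_installed: int, module_availability: float) -> float:
    """Probability that fewer than n_required of n_installed modules are healthy."""
    p, q = module_availability, 1.0 - module_availability
    return sum(
        comb(n_installed, k) * p**k * q**(n_installed - k)
        for k in range(n_required)  # states with too few healthy modules
    )

A_MODULE = 0.995   # assumed single-module availability
N_REQUIRED = 3     # e.g. 3 x 25 kW modules needed for a 75 kW rack group

print(f"N   (no spare):  {group_unavailability(N_REQUIRED, N_REQUIRED, A_MODULE):.2e}")
print(f"N+1 (one spare): {group_unavailability(N_REQUIRED, N_REQUIRED + 1, A_MODULE):.2e}")
```

Under these assumptions a single spare module reduces the chance of losing rack generation capacity by roughly two orders of magnitude.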

9. Deployment Model

Rack-Level Power Independence

Each rack becomes a semi-autonomous power node:

• Local generation

• Local buffering

• Local cooling

Data Center Impact

• Reduced central electrical infrastructure

• Faster deployment timelines

• Scalable power blocks

10. Architecture Comparison Table

| Parameter | Cluster Mesh Rack-DC Architecture | Conventional MV AC → UPS → PSU Architecture |
| --- | --- | --- |
| Primary Energy Input | Thermal (waste heat, NG, etc.) | Grid electricity (AC) |
| Power Conversion Stages | Minimal (DC generation → POL) | Multiple (AC→DC→AC→DC→POL) |
| Rack Input Power | DC (48 V or higher) | AC (208–480 V) |
| Server PSU Requirement | Eliminated | Required per server |
| Efficiency | High (fewer conversions) | Lower (stacked losses) |
| Thermal Load | Reduced | Higher due to PSU losses |
| Infrastructure Complexity | Lower | High (UPS, transformers, PDUs) |
| Scalability | Modular per rack | Centralized scaling |
| Fault Isolation | Rack-level | System-level dependencies |
| Transient Handling | Buffer-based (fast response) | UPS + PSU response |
| Cooling Integration | Integrated with power system | Separate systems |
| Space Utilization | Slight overhead (power sidecar) | PSU occupies server space |
| Maintenance | Modular turbine units | Distributed PSU failures |
| Deployment Speed | Fast, modular | Slow, infrastructure heavy |
| Suitability for AI Loads | High | Increasingly constrained |

11. Key Engineering Insight

The real innovation is not just replacing AC with DC—it is redefining the rack as the fundamental power unit.

Cluster Mesh turbines enable:

• Power generation at the point of consumption

• Elimination of redundant conversion hardware

• Integration of thermal and electrical systems

This aligns with the future of hyperscale design, where AI racks become self-contained compute, power, and cooling nodes.

12. Conclusion

A Cluster Mesh Rack-DC architecture using 48 V or higher DC with buffer storage represents a fundamental shift in data center design.

It delivers:

• Higher efficiency

• Lower complexity

• Improved scalability

• Better alignment with AI power density trends

While implementation requires careful attention to buffering, serviceability, and thermal design, the architecture is technically sound and aligned with the trajectory of hyperscale infrastructure.



100 MW Data Center Savings Using Cluster Mesh Rack-DC Power Architecture

Quantified savings for a 100 MW AI data center using Cluster Mesh rack-level DC power architecture. Reduce conversion losses, eliminate server PSUs, and achieve multi-megawatt efficiency gains.

A 100 MW AI data center can eliminate multiple megawatts of continuous losses by replacing traditional AC power chains with Cluster Mesh rack-level DC architecture—delivering millions in annual savings and redefining hyperscale efficiency.

1. Executive Summary

As AI data centers scale beyond 100 MW, traditional AC power architectures are becoming increasingly inefficient due to layered power conversions and distributed losses. A Cluster Mesh Rack-DC architecture, using sidecar turbine modules with 48 V or higher-voltage DC distribution and buffer storage, fundamentally changes the power delivery model.

By eliminating the per-server AC PSU stage and reducing total conversion steps, this architecture delivers roughly 2.2% to 5.6% total electrical savings, equivalent to 2.2 MW to 5.6 MW of continuous power reduction in a 100 MW facility.

2. Baseline: Conventional Power Architecture

Typical hyperscale data centers use:

MV AC → Transformer → UPS → PDU → Rack PSU → Server VRMs

This architecture introduces losses at every stage:

• Transformer and switchgear losses

• UPS double-conversion losses

• Distribution losses

• Server-level AC-DC PSU inefficiencies

These losses compound at scale and directly increase both electrical consumption and cooling demand.

3. Proposed Architecture: Cluster Mesh Rack-DC

The Cluster Mesh approach shifts power generation and conversion to the rack level:

Thermal Input → Cluster Mesh Turbine → DC Regulation → Rack DC Bus → Buffer → Compute Nodes (POL only)

Key Characteristics

• Sidecar-mounted 25–100 kW turbine modules per rack

• 48 VDC or higher-voltage DC busbar distribution

• Elimination of server PSUs

• Integrated energy buffer (battery or supercapacitor)

• Co-designed power + cooling architecture

4. 100 MW Data Center Model Assumptions

To quantify savings, the following efficiency ranges are used:

| Architecture | Efficiency Range |
| --- | --- |
| Conventional AC Chain | 92% – 94% |
| Cluster Mesh Rack-DC | 96% – 97% |

All results below assume 100 MW delivered to IT load.
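The sketch below reproduces the savings arithmetic from these assumptions: input power is the IT load divided by electrical efficiency, and each case pairs a conventional and a Cluster Mesh efficiency drawn from the ranges above. The specific pairings are assumptions consistent with those ranges.

```python
# Reproduce the 100 MW savings arithmetic: input power = IT load / efficiency.
# Case pairings are assumptions chosen to span the stated efficiency ranges.

IT_LOAD_MW = 100.0
HOURS_PER_YEAR = 8760

cases = {
    # case name: (conventional efficiency, Cluster Mesh efficiency)
    "Conservative": (0.94, 0.96),
    "Base":         (0.93, 0.965),
    "Aggressive":   (0.92, 0.97),
}

for name, (eta_conv, eta_mesh) in cases.items():
    input_conv = IT_LOAD_MW / eta_conv   # MW drawn by the conventional chain
    input_mesh = IT_LOAD_MW / eta_mesh   # MW drawn by the rack-DC chain
    saved_mw = input_conv - input_mesh
    saved_mwh = saved_mw * HOURS_PER_YEAR
    print(f"{name:12s}: {saved_mw:4.2f} MW continuous, {saved_mwh:8,.0f} MWh/year")
```

Running this yields approximately 2.2 MW, 3.9 MW, and 5.6 MW of continuous savings, matching the cases quantified below.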

5. Quantified Power Savings

Continuous Power Reduction

• Conservative Case: 2.22 MW saved

• Base Case: 3.90 MW saved

• Aggressive Case: 5.60 MW saved

Annual Energy Savings

• 19,415 MWh to 49,081 MWh per year

This represents a permanent reduction in facility power draw without reducing compute capacity.

6. Annual Cost Savings

At typical commercial electricity rates:

| Case | Energy Saved (MWh/year) | $0.08/kWh | $0.12/kWh | $0.20/kWh |
| --- | --- | --- | --- | --- |
| Conservative | 19,415 | $1.55M | $2.33M | $3.88M |
| Base | 34,163 | $2.73M | $4.10M | $6.83M |
| Aggressive | 49,081 | $3.93M | $5.89M | $9.82M |
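A short sketch of how these figures are derived, multiplying the annual energy savings by each illustrative commercial electricity rate:

```python
# Annual cost savings = energy saved (MWh) * 1000 kWh/MWh * rate ($/kWh).
savings_mwh = {"Conservative": 19_415, "Base": 34_163, "Aggressive": 49_081}
rates_per_kwh = (0.08, 0.12, 0.20)

for case, mwh in savings_mwh.items():
    costs = ", ".join(f"${mwh * 1000 * r / 1e6:.2f}M @ ${r:.2f}/kWh" for r in rates_per_kwh)
    print(f"{case:12s}: {costs}")
```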

7. Architecture Comparison Table (100 MW Model)

| Parameter | Cluster Mesh Rack-DC Architecture | Conventional MV AC → UPS → PSU |
| --- | --- | --- |
| IT Load Delivered | 100 MW | 100 MW |
| Total Input Power | 103.1 – 104.2 MW | 106.4 – 108.7 MW |
| Continuous Power Loss | 3.1 – 4.2 MW | 6.4 – 8.7 MW |
| Net Power Savings | 2.2 – 5.6 MW | Baseline |
| Annual Energy Consumption | 903,000 – 913,000 MWh | 932,000 – 952,000 MWh |
| Annual Energy Savings | Up to 49,081 MWh | — |
| Conversion Stages | Minimal (DC direct + POL) | Multiple AC/DC conversions |
| Server PSU Requirement | Eliminated | Required per server |
| Electrical Efficiency | 96% – 97% | 92% – 94% |
| Cooling Load Impact | Reduced | Higher (PSU + UPS losses) |
| Rack Power Density Support | High (optimized DC bus) | Limited by PSU constraints |
| Scalability | Modular per rack | Centralized infrastructure |
| Fault Isolation | Rack-level | System-wide dependencies |

8. Secondary Benefits

Reduced Cooling Load

Every watt of electrical loss becomes heat. By removing multiple conversion stages:

• Lower heat rejection requirement

• Improved liquid cooling efficiency

• Reduced HVAC and CDU load

Infrastructure Simplification

• Reduced need for large centralized UPS systems

• Less copper and fewer transformers

• Faster deployment timelines

Modular Scaling

Each rack becomes a semi-independent power node:

• Easier expansion

• Improved redundancy

• Reduced single points of failure

9. Strategic Positioning

This architecture aligns with broader hyperscale trends:

• Movement toward higher-voltage DC distribution

• Elimination of redundant conversion layers

• Increasing rack power density (50 kW → 100 kW → beyond)

The Cluster Mesh system extends this trend by introducing localized generation + DC distribution + integrated cooling at the rack level.

10. Conclusion

For a 100 MW AI data center, the Cluster Mesh Rack-DC architecture provides:

• 2.2 MW to 5.6 MW continuous power savings

• $1.5M to $9.8M annual cost reduction

• Reduced thermal load and infrastructure complexity

The key innovation is not just DC power—it is redefining the rack as a self-contained power, cooling, and compute unit.

This approach positions data centers for the next generation of AI workloads, where efficiency, density, and modularity are critical to scaling.






