Power Usage Effectiveness (PUE) is a metric used to determine the energy efficiency of a data center. Developed by The Green Grid consortium in 2007, PUE has become the industry standard for measuring how efficiently a data center uses its power: specifically, how much of that power reaches the computing equipment rather than being consumed by cooling and other overhead.

Definition and Calculation

Basic Formula

PUE is calculated using the following formula:

PUE = Total Facility Energy / IT Equipment Energy

Where:

  • Total Facility Energy: All energy used by the data center facility, including IT equipment, cooling, power distribution, lighting, and other infrastructure
  • IT Equipment Energy: Energy used by computing equipment (servers, storage, networking) for processing, storing, and transmitting data
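
As a minimal illustration of the formula, the sketch below computes PUE from two metered readings; the function name and the sample figures are hypothetical.

```python
def compute_pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return PUE given total facility energy and IT equipment energy (same units)."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    if total_facility_kwh < it_equipment_kwh:
        raise ValueError("Total facility energy cannot be less than IT energy")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly readings: 1,500,000 kWh at the utility meter,
# 1,000,000 kWh delivered to the IT equipment -> PUE = 1.5
print(compute_pue(1_500_000, 1_000_000))  # 1.5
```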

Interpretation

The theoretical ideal PUE value is 1.0, which would mean all energy entering the data center is used by IT equipment with zero overhead. Commonly cited bands (a small classifier sketch follows the benchmark list below):

  • PUE = 1.0: Perfect efficiency (theoretical only)
  • PUE < 1.5: Excellent efficiency
  • PUE = 1.5-2.0: Good efficiency
  • PUE = 2.0-2.5: Average efficiency
  • PUE > 2.5: Poor efficiency

Benchmark figures for context:

  • Global Average PUE: Approximately 1.58 (as of 2022)
  • Hyperscale Cloud Providers: Best performers, with PUE values of 1.1-1.2
  • Older Data Centers: Often have PUE values of 2.0 or higher
  • Improvement Over Time: Global average has improved from about 2.5 in 2007 to 1.58 in 2022
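
A small helper that maps a PUE reading onto the qualitative bands above; the band boundaries simply restate the list and are informal rather than part of any standard.

```python
def pue_band(pue: float) -> str:
    """Map a PUE value to the informal efficiency bands listed above."""
    if pue < 1.0:
        raise ValueError("PUE below 1.0 is not physically meaningful")
    if pue < 1.5:
        return "excellent"
    if pue <= 2.0:
        return "good"
    if pue <= 2.5:
        return "average"
    return "poor"

print(pue_band(1.58))  # "good" -- roughly the 2022 global average
```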

Components of Data Center Power

Understanding the components that contribute to total facility energy helps identify opportunities for PUE improvement:

IT Equipment Power (Denominator)

The core computing resources:

  • Servers: Processing units that run applications and services
  • Storage: Devices that store data (SSDs, HDDs, etc.)
  • Network Equipment: Switches, routers, load balancers, etc.
  • Other IT Hardware: Security appliances, KVM switches, etc.

Facility Overhead Power (Numerator minus Denominator)

Non-computing power consumption:

Cooling Systems (typically 30-40% of total power)

  • Air conditioning units
  • Chillers
  • Cooling towers
  • Computer Room Air Handlers (CRAHs) and Computer Room Air Conditioners (CRACs)
  • Pumps for water cooling systems
  • Fans and blowers

Power Delivery (typically 10-15% of total power)

  • Uninterruptible Power Supplies (UPS)
  • Power Distribution Units (PDUs)
  • Transformers
  • Switchgear
  • Generators (during testing)

Other Infrastructure

  • Lighting
  • Security systems
  • Fire suppression systems
  • Building Management Systems (BMS)
  • Office space within the data center building
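
To see how these components roll up into the metric, here is a sketch that builds total facility power from itemized loads; every figure is hypothetical, chosen only to land near the typical percentages quoted above.

```python
# Hypothetical steady-state loads for a mid-sized facility, in kW.
it_load_kw = 1000.0         # servers, storage, network
cooling_kw = 650.0          # chillers, CRAH/CRAC units, pumps, fans (~34% of total)
power_delivery_kw = 200.0   # UPS losses, PDUs, transformers (~11% of total)
other_kw = 50.0             # lighting, security, BMS, office space

total_kw = it_load_kw + cooling_kw + power_delivery_kw + other_kw
pue = total_kw / it_load_kw

print(f"Total facility load: {total_kw:.0f} kW")
print(f"PUE: {pue:.2f}")                              # 1.90
print(f"Cooling share: {cooling_kw / total_kw:.0%}")  # ~34%
```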

Measurement Methodology

The Green Grid defines several levels of PUE measurement, each with increasing accuracy. The categories differ both in how often energy is measured and in how close to the IT equipment the IT load is metered:

Category 0: Annual Calculation

  • Based on utility bills or similar high-level measurements
  • Lowest accuracy, used for basic reporting
  • Single measurement for the entire year

Category 1: Monthly Calculation

  • Based on monthly energy readings taken at the facility input and at the UPS output (the IT load)
  • Moderate accuracy, captures seasonal variations
  • Twelve measurements per year

Category 2: Daily Calculation

  • Based on daily energy readings, with the IT load typically metered at the PDU output
  • Higher accuracy, captures weekly patterns
  • 365 measurements per year

Category 3: Continuous Measurement

  • Based on continuous monitoring (15-minute intervals or better), with the IT load metered at the input of the IT equipment itself
  • Highest accuracy, captures all operational variations
  • At least 35,040 measurements per year
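
For Category 3-style continuous measurement, the annual figure should be computed from accumulated energy rather than by averaging per-interval PUE values, since lightly loaded intervals would otherwise be over-weighted. A minimal sketch, assuming each interval provides a pair of kWh readings:

```python
def annual_pue(intervals):
    """Compute annual PUE from (facility_kwh, it_kwh) pairs, one per interval.

    Energies are summed first and divided once, which weights each
    interval by its actual consumption.
    """
    total_facility = sum(f for f, _ in intervals)
    total_it = sum(i for _, i in intervals)
    return total_facility / total_it

# Two hypothetical 15-minute intervals: a lightly loaded night interval
# and a heavily loaded day interval.
readings = [(120.0, 60.0), (400.0, 300.0)]
print(f"{annual_pue(readings):.2f}")                     # 1.44 (energy-weighted)
print(f"{sum(f / i for f, i in readings) / 2:.2f}")      # 1.67 (naive average, overstates PUE)
```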

Factors Affecting PUE

Several factors influence a data center’s PUE value:

Climate and Location

  • Ambient Temperature: Hotter climates require more cooling energy
  • Humidity: High humidity locations may need more dehumidification
  • Altitude: Affects cooling efficiency and equipment performance
  • Regional Weather Patterns: Seasonal variations impact cooling needs

Data Center Design

  • Airflow Management: Hot/cold aisle containment, raised floors, rack arrangement
  • Building Envelope: Insulation, orientation, materials
  • Equipment Density: Higher density requires more focused cooling
  • Cooling System Design: Free cooling, liquid cooling, air-side economizers

Operational Practices

  • Temperature Setpoints: Higher acceptable temperatures reduce cooling needs
  • Equipment Utilization: Higher utilization improves overall efficiency
  • Maintenance Practices: Regular maintenance ensures optimal performance
  • Power Management: Server power management features, UPS efficiency modes

Scale

  • Size: Larger facilities often achieve better PUE due to economies of scale
  • Load Profile: Consistent high loads typically yield better PUE than variable loads
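
The load-profile point can be made concrete with a toy model in which part of the overhead (UPS standing losses, lighting, baseline cooling) is fixed and the rest scales with IT load; all numbers are hypothetical.

```python
def pue_at_load(it_load_kw: float,
                fixed_overhead_kw: float = 150.0,
                variable_overhead_ratio: float = 0.35) -> float:
    """PUE for a toy model with fixed plus load-proportional overhead."""
    overhead_kw = fixed_overhead_kw + variable_overhead_ratio * it_load_kw
    return (it_load_kw + overhead_kw) / it_load_kw

# The same facility at 30% and 90% of a 1 MW design IT load.
print(f"PUE at 300 kW IT load: {pue_at_load(300):.2f}")  # 1.85
print(f"PUE at 900 kW IT load: {pue_at_load(900):.2f}")  # 1.52
```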

Improving PUE

Strategies to improve data center PUE:

Cooling Optimization

  • Raise Temperature Setpoints: Operating at the upper end of ASHRAE recommendations
  • Hot/Cold Aisle Containment: Preventing mixing of hot and cold air
  • Free Cooling: Using outside air when temperature and humidity permit
  • Liquid Cooling: More efficient than air cooling, especially for high-density racks
  • Variable Speed Fans: Adjusting cooling capacity to match demand

Power Infrastructure Efficiency

  • High-Efficiency UPS Systems: Modern UPS systems with 95%+ efficiency
  • Modular UPS: Right-sizing UPS capacity to match load
  • Power Distribution at Higher Voltages: Reducing conversion losses
  • DC Power Distribution: Eliminating AC-DC conversion losses
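
As a rough, hypothetical illustration of the UPS point: moving from a 93%-efficient UPS to a 97%-efficient one cuts conversion losses, and every kilowatt of loss avoided also removes roughly a kilowatt of heat that the cooling plant no longer has to reject.

```python
def ups_loss_kw(it_load_kw: float, ups_efficiency: float) -> float:
    """Power lost in the UPS for a given downstream IT load."""
    return it_load_kw * (1.0 / ups_efficiency - 1.0)

it_load_kw = 1000.0
for eff in (0.93, 0.97):
    loss = ups_loss_kw(it_load_kw, eff)
    print(f"UPS efficiency {eff:.0%}: {loss:.0f} kW of loss")
# 93%: ~75 kW of loss; 97%: ~31 kW -- the difference shows up directly in the
# PUE numerator (and again, indirectly, as reduced cooling load).
```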

IT Equipment Optimization

  • Server Consolidation: Higher utilization of fewer servers
  • Virtualization: Increasing utilization of physical hardware
  • Equipment Refresh: Newer equipment is typically more energy-efficient
  • Power Management Features: Enabling CPU power states, storage spin-down
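
One subtlety worth noting (and revisited under the limitations below): because consolidation shrinks the PUE denominator, total energy can fall while the PUE ratio gets worse. A hypothetical before/after sketch:

```python
def pue(it_kw: float, overhead_kw: float) -> float:
    return (it_kw + overhead_kw) / it_kw

# Before: 1,000 kW of under-utilized servers; after: 600 kW of consolidated,
# virtualized servers doing the same work. Overhead falls, but not in proportion.
before_total = 1000 + 500
after_total = 600 + 350

print(f"Before: total {before_total} kW, PUE {pue(1000, 500):.2f}")  # PUE 1.50
print(f"After:  total {after_total} kW, PUE {pue(600, 350):.2f}")    # PUE 1.58
```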

Facility Design Improvements

  • Airflow Optimization: Eliminating hotspots and recirculation
  • Building Management System Integration: Intelligent control of all building systems
  • Economizer Modes: Using outside air or water when conditions permit
  • On-site Generation: Solar, wind, or fuel cells to offset grid power

Limitations and Criticisms of PUE

Despite its widespread adoption, PUE has several limitations:

Measurement Inconsistencies

  • Methodology Differences: Varying approaches to what’s included in measurements
  • Boundary Definition: Different interpretations of where the data center boundary lies
  • Timing of Measurements: Point-in-time vs. continuous measurement
  • Inclusion/Exclusion of Systems: Variations in what’s counted as IT load

Incomplete Picture of Efficiency

  • IT Equipment Efficiency Not Addressed: A data center with inefficient servers can have a good PUE
  • Workload Efficiency Not Reflected: No indication of useful work per watt
  • Water Usage Not Considered: Some cooling techniques improve PUE but increase water consumption
  • Carbon Impact Not Included: No consideration of energy sources or carbon intensity

System-Level Trade-offs Not Captured

  • Heat Reuse: Systems that capture and repurpose waste heat may have worse PUE but better overall efficiency
  • Climate Impact: Data centers in harsh climates face inherent challenges
  • Resilience Requirements: Redundancy needs may increase PUE

Enhanced and Alternative Metrics

To address PUE limitations, several complementary metrics have been developed:

Water Usage Effectiveness (WUE)

WUE = Annual Water Usage / IT Equipment Energy

Measures water efficiency in data centers, particularly important where cooling techniques use significant water.

Carbon Usage Effectiveness (CUE)

CUE = Total CO₂ Emissions from Energy / IT Equipment Energy

Addresses the carbon impact of the energy sources used.

Energy Reuse Effectiveness (ERE)

ERE = (Total Energy - Reused Energy) / IT Equipment Energy

Accounts for energy reused outside the data center (e.g., waste heat used for building heating).

Data Center Infrastructure Efficiency (DCiE)

DCiE = (IT Equipment Energy / Total Facility Energy) × 100% = (1 / PUE) × 100%

The inverse of PUE, expressed as a percentage.

Green Energy Coefficient (GEC)

GEC = Green Energy / Total Energy

Measures the proportion of energy from renewable sources.

IT Equipment Utilization (ITEU)

Measures how heavily the installed IT equipment is actually utilized, indicating whether the energy consumed by the IT hardware is producing useful work rather than idling.
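
A combined sketch of the complementary metrics above, using hypothetical annual totals; the input figures and variable names are illustrative only.

```python
# Hypothetical annual totals for one facility.
total_facility_kwh = 15_000_000.0
it_equipment_kwh = 10_000_000.0
water_litres = 18_000_000.0   # annual water consumption
co2_kg = 5_250_000.0          # emissions attributable to the energy consumed
reused_kwh = 1_500_000.0      # waste heat exported, e.g. for district heating
green_kwh = 6_000_000.0       # energy sourced from renewables

pue = total_facility_kwh / it_equipment_kwh
wue = water_litres / it_equipment_kwh                      # litres per kWh of IT energy
cue = co2_kg / it_equipment_kwh                            # kg CO2 per kWh of IT energy
ere = (total_facility_kwh - reused_kwh) / it_equipment_kwh
dcie = (1.0 / pue) * 100.0                                 # percent
gec = green_kwh / total_facility_kwh

print(f"PUE {pue:.2f}, WUE {wue:.2f} L/kWh, CUE {cue:.2f} kgCO2/kWh")
print(f"ERE {ere:.2f}, DCiE {dcie:.0f}%, GEC {gec:.0%}")
```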

PUE in Cloud Provider Data Centers

Major cloud providers have significantly invested in improving PUE:

Google

  • Average PUE: ~1.10 across all data centers
  • PUE Tracking: Publishes trailing twelve-month average PUE for all data centers
  • Key Strategies: Machine learning for cooling optimization, custom server design, advanced building management

Microsoft

  • Average PUE: ~1.12 for newer data centers
  • Innovations: Underwater data centers (Project Natick), hydrogen fuel cells
  • Approach: Standardized data center designs optimized for specific regions

Amazon Web Services

  • Average PUE: Estimated at 1.15-1.20 (AWS publishes fewer facility-level PUE figures than its peers)
  • Focus Areas: Renewable energy, custom cooling technologies
  • Scale Advantage: Large facilities with custom designs for efficiency

Facebook (Meta)

  • Average PUE: 1.10
  • Open Source: Published designs through Open Compute Project
  • Locations: Strategic placement in cold climates where possible