
    FEDERAL ENERGY MANAGEMENT PROGRAM

    Best Practices Guide

for Energy-Efficient Data Center Design

    February 2010

    Prepared by the National Renewable Energy Laboratory (NREL), a national laboratory

of the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy;

NREL is operated by the Alliance for Sustainable Energy, LLC.


    Acknowledgements

This report was prepared by John Bruschi, Peter Rumsey, Robin Anliker, Larry Chu, and Stuart Gregson of Rumsey Engineers under contract to the National Renewable Energy Laboratory. The work was supported by the Federal Energy Management Program, led by Will Lintner.

    Contacts

    William Lintner

U.S. Department of Energy FEMP
william.lintner@ee.doe.gov
202-586-3120

    Bill Tschudi

Lawrence Berkeley National Laboratory
wftschudi@lbl.gov

    Otto VanGeet

National Renewable Energy Laboratory
Otto.VanGeet@nrel.gov

    On the Cover

The Centers for Disease Control and Prevention's Arlen Specter Headquarters and Operations Center reached a LEED Silver rating through sustainable design and operations that decrease energy consumption by 20% and water consumption by 36% beyond standard codes. PIX 16419.

Employees of the Alliance for Sustainable Energy, LLC, under Contract No. DE-AC36-08GO28308 with the U.S. Dept. of Energy have authored this work. The United States Government retains, and the publisher, by accepting the article for publication, acknowledges that the United States Government retains, a non-exclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this work, or allow others to do so, for United States Government purposes.


Table of Contents

Summary
Background
Information Technology (IT) Systems
    Efficient Servers
    Storage Devices
    Network Equipment
    Power Supplies
    Consolidation
        Hardware Location
        Virtualization
Environmental Conditions
    2009 ASHRAE Guidelines and IT-Reliability
Air Management
    Implement Cable Management
    Aisle Separation and Containment
    Optimize Supply and Return Air Configuration
    Raising Temperature Set Points
Cooling Systems
    Direct Expansion (DX) Systems
    Air Handlers
        Central vs. Modular Systems
        Low Pressure Drop Air Delivery
    High-Efficiency Chilled Water Systems
        Efficient Equipment
        Optimize Plant Design and Operation
        Efficient Pumping
    Free Cooling
        Air-Side Economizer
        Water-Side Economizer
    Thermal Storage
    Direct Liquid Cooling
    Humidification
    Controls


Electrical Systems
    Power Distribution
        Uninterruptible Power Supplies (UPS)
        Power Distribution Units (PDU)
        Distribution Voltage Options
    Demand Response
    DC Power
    Lighting
Other Opportunities for Energy-Efficient Design
    On-Site Generation
        Co-generation Plants
        Reduce Standby Losses
    Use of Waste Heat
Data Center Metrics and Benchmarking
    Power Usage Effectiveness (PUE) and Data Center Infrastructure Efficiency (DCiE)
    Rack Cooling Index (RCI) and Return Temperature Index (RTI)
    Heating, Ventilation and Air-Conditioning (HVAC) System Effectiveness
    Airflow Efficiency
    Cooling System Efficiency
    On-Site Monitoring and Continuous Performance Measurement
Bibliography and Resources

Figures
    Figure 1: Efficiencies at varying load levels for typical power supplies
    Figure 2: 2009 ASHRAE environmental envelope for IT equipment air intake conditions
    Figure 3: Example of a hot aisle/cold aisle configuration
    Figure 4: Sealed hot aisle/cold aisle configuration
    Figure 5: Comparison of distributed air delivery to central air delivery
    Figure 6: Typical UPS efficiency curve for 100 kVA capacity and greater

Table
    Table 1: ASHRAE Recommended and Allowable Inlet Air Conditions for Class 1 and 2 Data Centers


    Summary

This guide provides an overview of best practices for energy-efficient data center design, spanning the categories of Information Technology (IT) systems and their environmental conditions, data center air management, cooling and electrical systems, on-site generation, and heat recovery. IT system energy efficiency and environmental conditions are presented first because measures taken in these areas have a cascading effect of secondary energy savings for the mechanical and electrical systems. The guide concludes with a section on metrics and benchmarking values by which a data center and its systems' energy efficiency can be evaluated. No design guide can offer the most energy-efficient design for every data center, but the guidelines that follow offer suggestions that provide efficiency benefits for a wide variety of data center scenarios.

    Background

Data center spaces can consume 100 to 200 times as much electricity as standard office spaces. With such large power consumption, they are prime targets for energy-efficient design measures that can save money and reduce electricity use. However, the critical nature of data center loads elevates many design criteria, chiefly reliability and high power density capacity, far above energy efficiency. Short design cycles often leave little time to fully assess efficient design opportunities or consider first cost versus life-cycle cost issues. This can lead to designs that are simply scaled-up versions of standard office space approaches, or that reuse strategies and specifications that worked well enough in the past without regard for energy performance. This Best Practices Guide has been created to provide viable alternatives to inefficient data center building practices.

Information Technology (IT) Systems

In a typical data center with a highly efficient cooling system, IT equipment loads can account for over half of the entire facility's energy use. Use of efficient IT equipment will significantly reduce these loads within the data center, which consequently will downsize the equipment needed to cool them. Purchasing servers equipped with energy-efficient processors, fans, and power supplies and high-efficiency network equipment, consolidating storage devices, consolidating power supplies, and implementing virtualization are the most advantageous ways to reduce IT equipment loads within a data center.

Efficient Servers

Rack servers tend to be the main perpetrators of wasted energy and represent the largest portion of the IT energy load in a typical data center. Servers take up most of the space and drive the entire operation. The majority of servers run at or below 20% utilization most of the time, yet still draw full power during the process. Recently, vast improvements have been made in the internal cooling systems and processor devices of servers to minimize this wasted energy.

When purchasing new servers, it is recommended to look for products that include variable speed fans, as opposed to a standard constant speed fan, for the internal cooling component. With variable speed fans it is possible to deliver sufficient cooling while running slower, thus consuming less energy. The Energy Star program aids consumers by recognizing high-efficiency servers; servers that meet Energy Star efficiency requirements will, on average, be 30% more efficient than standard servers.

Additionally, a throttle-down drive is a device that reduces energy consumption on idle processors, so that when a server is running at its typical 20% utilization it is not drawing full power. This is also sometimes referred to as power management. Many IT departments fear that throttling down servers or putting idle servers to sleep will negatively impact server reliability; however, the hardware itself is designed to handle tens of thousands of on-off cycles. Server power draw can also be modulated by installing power cycler software in servers. During low demand, the software can direct individual devices on the rack to power down. Potential power management risks include slower performance and possible system failure, which should be weighed against the potential energy savings.
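As a rough illustration of the savings potential described above, the sketch below estimates annual energy savings from server power management; the server count, power draws, and low-utilization fraction are hypothetical placeholders, not figures from this guide.

```python
# Rough estimate of annual energy savings from server power management
# ("throttle-down" of idle processors). All power-draw figures below are
# hypothetical placeholders for illustration only.

HOURS_PER_YEAR = 8760
ELECTRICITY_RATE = 0.12  # $/kWh, matching the rate used elsewhere in this guide

def annual_savings(num_servers, full_power_w, managed_power_w,
                   low_utilization_fraction=0.8):
    """Savings when servers spend `low_utilization_fraction` of the year at
    low utilization and power management trims their draw from
    `full_power_w` to `managed_power_w` during those hours."""
    saved_w = (full_power_w - managed_power_w) * num_servers
    saved_kwh = saved_w / 1000.0 * HOURS_PER_YEAR * low_utilization_fraction
    return saved_kwh, saved_kwh * ELECTRICITY_RATE

# Example: 100 servers that draw 300 W unmanaged but 200 W with power
# management during the ~80% of hours they sit near 20% utilization.
kwh, dollars = annual_savings(100, 300, 200)
print(f"{kwh:,.0f} kWh/yr, ${dollars:,.0f}/yr before cooling-side savings")
```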


Multi-core processor chips allow simultaneous processing of multiple tasks, which leads to higher efficiency in two ways. First, they offer improved performance within the same power and cooling load as compared to single-core processors. Second, they consolidate shared devices over a single processor core. Not all applications are capable of taking advantage of multi-core processors; graphics-intensive programs and high performance computing still require the higher clock-speed single-core designs.

Further energy savings can be achieved by consolidating IT system redundancies. Consider one power supply per server rack instead of providing power supplies for each server. For a given redundancy level, integrated rack-mounted power supplies will operate at a higher load factor (potentially 70%) compared to individual server power supplies (20% to 25%). This increase in power supply load factor vastly improves the power supply efficiency (see Figure 1 in the following section on power supplies). Sharing other IT resources such as Central Processing Units (CPU), disk drives, and memory optimizes electrical usage as well. Short-term load shifting combined with throttling resources up and down as demand dictates is another strategy for improving long-term hardware energy efficiency.

    Storage Devices

Power consumption is roughly proportional to the number of storage modules used. Storage redundancy needs to be rationalized and right-sized to avoid rapid scale-up in size and power consumption.

Consolidating storage drives into a Network Attached Storage or a Storage Area Network are two options that take data that does not need to be readily accessed and transport it offline. Taking superfluous data offline reduces the amount of data in the production environment, as well as all the copies. Consequently, fewer storage and CPU resources are needed on the servers, which directly corresponds to lower cooling and power needs in the data center.

For data that cannot be taken offline, it is recommended to upgrade from traditional storage methods to thin provisioning. In traditional storage systems, an application is allotted a fixed amount of anticipated storage capacity, which often results in poor utilization rates and wasted energy. Thin provisioning technology, in contrast, is a method of maximizing storage capacity utilization by drawing from a common pool of purchased shared storage on an as-needed basis, under the assumption that not all users of the storage pool will need the entire space simultaneously. This also allows for extra physical capacity to be installed at a later date as the data approaches the capacity threshold.

    Network Equipment

As newer generations of network equipment pack more throughput per unit of power, there are active energy management measures that can also be applied to reduce energy usage as network demand varies. Such measures include idle state logic, gate count optimization, memory access algorithms, and Input/Output buffer reduction.

As peak data transmission rates continue to increase, requiring dramatically more power, increasing energy is required to transmit small amounts of data over time. Ethernet network energy efficiency can be substantially improved by rapidly switching the speed of the network links to match the amount of data that is currently being transmitted.

    Power Supplies

Most data center equipment uses internal or rack-mounted alternating current/direct current (AC-DC) power supplies. Historically, a typical rack server's power supply converted AC power to DC power at efficiencies of around 60% to 70%. Today, through the use of higher-quality components and advanced engineering, it is possible to find power supplies with efficiencies up to 95%. Using higher efficiency power supplies will directly lower a data center's power bills and indirectly reduce cooling system cost and rack overheating issues. At $0.12/kWh, savings of $2,000 to $6,000 per year per rack (10 kW to 25 kW, respectively) are possible just from improving the power supply efficiency from 75% to 85%. These savings estimates include estimated secondary savings due to lower uninterruptible power supply (UPS) and cooling system loads.
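The rack-level savings quoted above can be checked with a short calculation. The sketch below reproduces the arithmetic for a 10 kW and a 25 kW rack at $0.12/kWh; the multiplier used for secondary UPS and cooling savings is an assumed value for illustration, since the guide does not state an exact factor.

```python
# Back-of-envelope check of the power supply savings cited above:
# improving supply efficiency from 75% to 85% for a 10 kW to 25 kW rack
# at $0.12/kWh. The 1.4x multiplier for secondary UPS/cooling savings is
# an assumption, not a figure from the guide.

HOURS_PER_YEAR = 8760
RATE = 0.12  # $/kWh

def psu_savings(it_load_kw, eff_old=0.75, eff_new=0.85,
                secondary_multiplier=1.4):
    input_old = it_load_kw / eff_old   # kW drawn at the old efficiency
    input_new = it_load_kw / eff_new   # kW drawn at the new efficiency
    direct_kwh = (input_old - input_new) * HOURS_PER_YEAR
    return direct_kwh * secondary_multiplier * RATE

for rack_kw in (10, 25):
    print(f"{rack_kw} kW rack: ~${psu_savings(rack_kw):,.0f}/yr")
# Prints roughly $2,300 and $5,800 per year, consistent with the
# $2,000 to $6,000 range quoted in the text.
```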


The impact of real operating loads should also be considered in order to select power supplies that offer the best efficiency at the load level at which they are expected to operate most frequently. The optimal power supply load level is typically in the mid-range of its performance curve, around 40% to 60%, as shown in Figure 1.

[Figure: power supply efficiency (70% to 100%) plotted against load (0% to 100%) for several supply types: 48Vdc-12Vdc 350W PSU, 380Vdc-12Vdc 1200W PSU, 277Vac-12Vdc 1000W PSU, 240Vac-12Vdc 1000W PSU, 208Vac-12Vdc 1000W PSU, and a legacy AC PSU.]

Figure 1. Efficiencies at varying load levels for typical power supplies (Source: Quantitative Efficiency Analysis of Power Distribution Configurations for Data Centers, The Green Grid)

Efficient power supplies usually have a minimal incremental cost at the server level. Power supplies that meet the recommended efficiency guidelines of the Server System Infrastructure (SSI) Initiative should be selected. There are also several certification programs currently in place that have standardized the efficiencies of power supplies in order for vendors to market their products. For example, the 80 PLUS program offers certifications for power supplies with efficiencies of 80% or greater at 20%, 50%, and 100% of their rated loads, with true power factors of 0.9 or greater.
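The point about matching a power supply to its expected operating load can be illustrated with a small selection helper. The efficiency curves below are invented placeholders shaped like Figure 1, not published certification or vendor data.

```python
# Illustrative sketch of choosing a power supply for the load level at
# which it will actually operate, rather than its peak-load rating.
# The efficiency curves are made-up placeholders, not published data.

import bisect

def efficiency_at(curve, load_fraction):
    """Linearly interpolate a (load_fraction -> efficiency) curve,
    given as a sorted list of (load, eff) points."""
    loads = [pt[0] for pt in curve]
    i = bisect.bisect_left(loads, load_fraction)
    if i == 0:
        return curve[0][1]
    if i >= len(curve):
        return curve[-1][1]
    (x0, y0), (x1, y1) = curve[i - 1], curve[i]
    return y0 + (y1 - y0) * (load_fraction - x0) / (x1 - x0)

candidate_psus = {
    "legacy AC": [(0.1, 0.62), (0.2, 0.70), (0.5, 0.75), (1.0, 0.72)],
    "80 PLUS":   [(0.1, 0.75), (0.2, 0.82), (0.5, 0.88), (1.0, 0.85)],
    "high-eff":  [(0.1, 0.80), (0.2, 0.88), (0.5, 0.94), (1.0, 0.91)],
}

expected_load = 0.30  # servers often idle near 20-30% of rated PSU load
best = max(candidate_psus,
           key=lambda name: efficiency_at(candidate_psus[name], expected_load))
for name, curve in candidate_psus.items():
    print(f"{name}: {efficiency_at(curve, expected_load):.1%} at {expected_load:.0%} load")
print("Best at expected load:", best)
```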

    Consolidation

    Hardware Location

Lower data center supply fan power and more efficient cooling system performance can be achieved when equipment with similar heat load densities and temperature requirements is grouped together. Isolating equipment by environmental requirements of temperature and humidity allows cooling systems to be controlled to the least energy-intensive set points for each location.

This concept can be expanded to data facilities in general. Consolidating underutilized data center spaces to a centralized location can ease the adoption of data center efficiency measures by condensing the implementation to one location, rather than several.

    Virtualization

Virtualization is a method of running multiple independent virtual operating systems on a single physical computer. It is a way of allowing the same amount of processing to occur on fewer servers by increasing server utilization. Instead of operating many servers at low CPU utilization, virtualization combines the processing power onto fewer servers that operate at higher utilization. Virtualization will drastically reduce the number of servers in a data center, reducing required server power and consequently the size of the necessary cooling equipment. Some overhead is required to implement virtualization, but this is minimal compared to the savings that can be achieved.
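The consolidation arithmetic behind virtualization can be sketched as follows; all server counts, utilizations, and power draws are illustrative assumptions rather than figures from this guide.

```python
# Minimal sketch of virtualization consolidation: many lightly loaded
# physical servers become virtual machines packed onto fewer, more highly
# utilized hosts. All numbers are illustrative assumptions.

import math

def consolidate(num_servers, avg_utilization, target_utilization,
                idle_power_w, peak_power_w):
    """Estimate host count and IT power before/after virtualization,
    assuming power scales linearly with utilization between idle and peak."""
    power_before = num_servers * (
        idle_power_w + (peak_power_w - idle_power_w) * avg_utilization)
    # round before ceil to dodge floating-point noise
    hosts_after = math.ceil(round(num_servers * avg_utilization / target_utilization, 6))
    power_after = hosts_after * (
        idle_power_w + (peak_power_w - idle_power_w) * target_utilization)
    return hosts_after, power_before / 1000, power_after / 1000

hosts, before_kw, after_kw = consolidate(
    num_servers=100, avg_utilization=0.15, target_utilization=0.60,
    idle_power_w=200, peak_power_w=350)
print(f"{hosts} hosts after consolidation; IT load {before_kw:.1f} kW -> {after_kw:.1f} kW")
```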


    Environmental Conditions

    2009 ASHRAE Guidelines and IT-Reliability

The first step in designing the cooling and air management systems in a data center is to look at the standardized operating environments for equipment set forth by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) or the Network Equipment Building System (NEBS). In 2008, ASHRAE, in collaboration with IT equipment manufacturers, expanded its recommended environmental envelope for inlet air entering IT equipment. The revision of this envelope allows greater flexibility in facility operations and contributes to reducing the overall energy consumption. The expanded recommended and allowable envelopes for Class 1 and 2 data centers are shown in Figure 2 and tabulated in Table 1 (for more details on data center type, different levels of altitude, etc., refer to the referenced ASHRAE publication, Thermal Guidelines for Data Processing Environments, 2nd Edition).

[Figure: psychrometric chart of humidity ratio (lbs H2O per lb dry air) versus dry bulb temperature (45°F to 100°F), showing the ASHRAE Class 1 and Class 2 recommended envelope and the Class 1 and Class 2 allowable envelopes, with reference lines at 20%, 50%, and 80% RH and 30 Btu/lb enthalpy.]

Figure 2. 2009 ASHRAE environmental envelope for IT equipment air intake conditions (Source: Rumsey Engineers)

                            Class 1 and Class 2      Class 1              Class 2
                            Recommended Range        Allowable Range      Allowable Range
  Low Temperature Limit     64.4°F DB                59°F DB              50°F DB
  High Temperature Limit    80.6°F DB                89.6°F DB            95°F DB
  Low Moisture Limit        41.9°F DP                20% RH               20% RH
  High Moisture Limit       60% RH & 59°F DP         80% RH & 62.6°F DP   80% RH & 69.8°F DP

Table 1. ASHRAE Recommended and Allowable Inlet Air Conditions for Class 1 and 2 Data Centers (Source: Rumsey Engineers)
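Table 1's recommended limits can be expressed as a simple compliance check for measured inlet conditions. The helper below is an illustrative sketch; it takes dry bulb, dew point, and relative humidity as already-measured inputs and does not perform psychrometric conversions.

```python
# Check measured IT inlet air conditions against the ASHRAE Class 1/2
# recommended envelope from Table 1. Illustrative sketch only.

def within_recommended_envelope(dry_bulb_f, dew_point_f, rh_percent):
    """Return (ok, reasons) for the recommended range: 64.4-80.6 F dry bulb,
    dew point >= 41.9 F, and at most 60% RH / 59 F dew point."""
    reasons = []
    if dry_bulb_f < 64.4:
        reasons.append("dry bulb below 64.4 F")
    if dry_bulb_f > 80.6:
        reasons.append("dry bulb above 80.6 F")
    if dew_point_f < 41.9:
        reasons.append("dew point below 41.9 F (too dry)")
    if rh_percent > 60 or dew_point_f > 59:
        reasons.append("above 60% RH / 59 F dew point (too moist)")
    return (not reasons), reasons

ok, why = within_recommended_envelope(dry_bulb_f=75.0, dew_point_f=50.0, rh_percent=42.0)
print("within recommended envelope" if ok else f"outside envelope: {', '.join(why)}")
```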


(vacant equipment slots) are blocked off on the intake side to create barriers that reduce recirculation, as shown in Figure 3 below. Additionally, cable openings in raised floors and ceilings should be sealed as tightly as possible. With proper isolation, the temperature of the hot aisle no longer impacts the temperature of the racks or the reliable operation of the data center; the hot aisle becomes a heat exhaust. The air-side cooling system is configured to supply cold air exclusively to the cold aisles and pull return air only from the hot aisles.

[Figure: hot aisle/cold aisle layout with a cold aisle between two hot aisles, supply air delivered through the raised floor to the cold aisle and hot air collected through a return air plenum.]

Figure 3. Example of a hot aisle/cold aisle configuration (Source: Rumsey Engineers)

The hot rack exhaust air is not mixed with cooling supply air and, therefore, can be directly returned to the air handler through various collection schemes, returning air at a higher temperature, often 85°F or higher. Depending on the type and loading of a server, the air temperature rise across a server can range from 10°F to more than 40°F. Thus, rack return air temperatures can exceed 100°F when racks are densely populated with highly loaded servers. Higher return temperatures extend economizer hours significantly and allow for a control algorithm that reduces supply air volume, saving fan power. If the hot aisle temperature is high enough, this air can be used as a heat source in many applications. In addition to energy savings, higher equipment power densities are also better supported by this configuration. The significant increase in economizer hours afforded by a hot aisle/cold aisle configuration can improve equipment reliability in mild climates by providing emergency compressor-free data center operation when outdoor air temperatures are below the data center equipment's top operating temperature (typically 90°F to 95°F).

Using flexible plastic barriers, such as plastic supermarket refrigeration covers (i.e., strip curtains), or other solid partitions to seal the space between the tops of the racks and the air return location can greatly improve hot aisle/cold aisle isolation while allowing flexibility in accessing, operating, and maintaining the computer equipment below. One recommended design configuration, shown in Figure 4, supplies cool air via an under-floor plenum to the racks; the air then passes through the equipment in the rack and enters a separated, semi-sealed area for return to an overhead plenum. This approach uses a baffle panel or barrier above the top of the rack and at the ends of the hot aisles to mitigate short-circuiting (the mixing of hot and cold air). These changes should reduce fan energy requirements by 20% to 25%, and could result in a 20% energy savings on the chiller side, provided these components are equipped with variable speed drives (VSDs).

Fan energy savings are realized by reducing fan speeds to supply only as much air as a given space requires. There are a number of different design strategies that reduce fan speeds. Among them is a fan speed control loop controlling the cold aisle temperature at the most critical locations: the top of racks for under-floor supply systems, the bottom of racks for overhead systems, the ends of aisles, etc. Note that many Direct Expansion (DX) Computer Room Air Conditioners (CRACs) use the return air temperature to indicate the space temperature, an approach that does not work in a hot aisle/cold aisle configuration where the return air is at a very different temperature than the cold aisle air being supplied to the equipment. Control of the fan speed based on the IT equipment needs is critical to achieving savings. Unfortunately, variable speed drives on DX CRAC unit supply fans are not generally available, despite being a common feature for commercial packaged DX air conditioning units. For more description of CRAC units and common energy-efficiency options, refer to the "Direct Expansion (DX) Systems" subsection of "Cooling Systems" below.
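The value of fan speed control comes largely from the fan affinity laws, under which fan power falls roughly with the cube of speed. The sketch below illustrates this ideal-fan relationship; actual savings depend on system pressure characteristics and drive losses.

```python
# Sketch of why reducing supply fan speed saves so much energy: with a
# variable speed drive, fan power falls roughly with the cube of speed
# (the fan affinity laws). Ideal-fan approximation only.

def fan_power_fraction(speed_fraction):
    """Approximate fan power as a fraction of full-speed power."""
    return speed_fraction ** 3

for speed in (1.0, 0.9, 0.8, 0.7, 0.6):
    print(f"{speed:.0%} speed -> ~{fan_power_fraction(speed):.0%} of full fan power")
# At 80% airflow a fan draws roughly half of its full-speed power, which is
# why controlling fan speed to the IT equipment's actual need is so valuable.
```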

[Figure: the same hot aisle/cold aisle layout with physical separation above the racks, so the hot aisles exhaust directly into the return air plenum while the raised floor supplies the cold aisle.]

Figure 4. Sealed hot aisle/cold aisle configuration (Source: Rumsey Engineers)

Optimize Supply and Return Air Configuration

Hot aisle/cold aisle configurations can be served by overhead or under-floor air distribution systems. When an overhead system is used, supply outlets that dump the air directly down should be used in place of traditional office diffusers that throw air to the sides, which results in undesirable mixing and recirculation with the hot aisles. The diffusers should be located directly in front of racks, above the cold aisle. In some cases return grilles or simply open ducts have been used. The temperature monitoring to control the air handlers should be located in areas in front of the computer equipment, not on a wall behind the equipment. Use of overhead variable air volume (VAV) allows equipment to be sized for excess capacity and yet provide optimized operation at part-load conditions with turndown of variable speed fans. Where a rooftop unit is being used, it should be located centrally over the served area; the required reduction in ductwork will lower cost and slightly improve efficiency. Also keep in mind that overhead delivery tends to reduce temperature stratification in cold aisles as compared to under-floor air delivery.

Under-floor air supply systems have a few unique concerns. The under-floor plenum often serves both as a duct and a wiring chase. Coordination throughout design, and into construction and operation throughout the life of the center, is necessary since paths for airflow can be blocked by electrical or data trays and conduit. The location of supply tiles needs to be carefully considered to prevent short circuiting of supply air and checked periodically if users are likely to reconfigure them. Removing or adding tiles to fix hot spots can cause problems throughout the system. Another important concern is high air velocity in the under-floor plenum. This can create localized negative pressure and induce room air back into the under-floor plenum. Equipment closer to downflow CRAC units or Computer Room Air Handlers (CRAHs) can receive too little cooling air due to this effect. Deeper plenums and careful layout of CRAC/CRAH units allow for a more uniform under-floor air static pressure. For more description of CRAH units as they relate to data center energy efficiency, refer to the "Air Handlers" subsection of "Cooling Systems" below.


    Raising Temperature Set Points

Higher supply air temperature and a higher difference between the return air and supply air temperatures increase the maximum load density possible in the space and can help reduce the size of the air-side cooling equipment required, particularly when lower-cost, mass-produced packaged air handling units are used. The lower required supply airflow due to raising the air-side temperature difference provides the opportunity for fan energy savings. Additionally, the lower supply airflow can ease the implementation of an air-side economizer by reducing the sizes of the penetrations required for outside air intake and heat exhaust.

Air-side economizer energy savings are realized by utilizing a control algorithm that brings in outside air whenever it is appreciably cooler than the return air and when humidity conditions are acceptable (refer to the "Air-Side Economizer" subsection of "Free Cooling" for further detail on economizer control optimization). In order to save energy, the temperature outside does not need to be below the data center's temperature set point; it only has to be cooler than the return air that is exhausted from the room. As the return air temperature is increased through the use of good air management, as discussed in the preceding sections, the temperature at which an air-side economizer will save energy is correspondingly increased. Designing for a higher return air temperature increases the number of hours that outside air, or a water-side economizer/free cooling, can be used to save energy.

A higher return air temperature also makes better use of the capacity of standard package units, which are designed to condition office loads. This means that a portion of their cooling capacity is configured to serve humidity (latent) loads. Data centers typically have very few occupants and small outside air requirements and, therefore, have negligible latent loads. While the best course of action is to select a unit designed for sensible-cooling loads only, or to increase the airflow, an increased return air temperature can convert some of a standard package unit's latent capacity into usable sensible capacity very economically. This may reduce the size and/or number of units required.

A warmer supply air temperature set point on chilled water air handlers allows for higher chilled water supply temperatures, which consequently improves the chilled water plant operating efficiency. Operation at warmer chilled water temperatures also increases the potential hours that a water-side economizer can be used (refer to the "Water-Side Economizer" subsection of the "Free Cooling" section for further detail).

    Cooling Systems

When beginning the design process and equipment selections for cooling systems in data centers, it is important to always consider initial and future loads, in particular part- and low-load conditions, as the need for digital data is ever-expanding.

    Direct Expansion (DX) Systems

Packaged DX air conditioners are likely the most common type of cooling equipment for smaller data centers. These units are generally available as off-the-shelf equipment from manufacturers (commonly described as CRAC units). There are, however, several options available to improve the energy efficiency of cooling systems employing DX units.

Packaged rooftop units are inexpensive and widely available for commercial use. Several manufacturers offer units with multiple and/or variable speed compressors to improve part-load efficiency. These units reject the heat from the refrigerant to the outside air via an air-cooled condenser. An enhancement to the air-cooled condenser is a device that sprays water over the condenser coils. The evaporative cooling provided by the water spray improves the heat rejection efficiency of the DX unit. Additionally, these units are commonly offered with air-side economizers. Depending on the data center's climate zone and air management, a DX unit with an air-side economizer can be a very energy-efficient cooling option for a small data center. (For further discussion, refer to the section "Raising Temperature Set Points" and the "Air-Side Economizer" subsection of the section on "Free Cooling".)


Indoor CRAC units are available with a few different heat rejection options. Air-cooled CRAC units include a remote air-cooled condenser. As with the rooftop units, adding an evaporative spray device can improve the air-cooled CRAC unit efficiency. For climate zones with a wide range of ambient dry bulb temperatures, apply parallel VSD control of the condenser fans to lower condenser fan energy compared to the standard staging control of these fans.

CRAC units packaged with water-cooled condensers are often paired with outdoor drycoolers. The heat rejection effectiveness of outdoor drycoolers depends on the ambient dry bulb temperature. A condenser water pump distributes the condenser water from the CRAC units to the drycoolers. Compared to the air-cooled condenser option, this water-cooled system requires an additional pump and an additional heat exchanger between the refrigerant loop and the ambient air. As a result, this type of water-cooled system is generally less efficient than the air-cooled option. A more efficient method for water-cooled CRAC unit heat rejection employs a cooling tower. To maintain a closed condenser water loop to the outside air, a closed loop cooling tower can be selected. A more expensive but more energy-efficient option would be to select an oversized open-loop tower and a separate heat exchanger, where the latter can be selected for a very low (less than 3°F) approach. In dry climates, a system composed of water-cooled CRAC units and cooling towers can be designed to be more energy efficient than air-cooled CRAC unit systems. (Refer to the "Efficient Equipment" subsection of the section on "High-Efficiency Chilled Water Systems" for more information on selecting an efficient cooling tower.)

A type of water-side economizer can be integrated with water-cooled CRAC units. A pre-cooling water coil can be added to the CRAC unit upstream of the evaporator coil. When ambient conditions allow the condenser water to be cooled (by either drycooler or cooling tower) to the point that it can provide a direct cooling benefit to the air entering the CRAC unit, condenser water is diverted to the pre-cooling coil. This will reduce, or at times eliminate, the need for compressor-based cooling from the CRAC unit. Some manufacturers offer this pre-cooling coil as a standard option for their water-cooled CRAC units.

    Air Handlers

    Central vs. Modular Systems

Better performance has been observed in data center air systems that utilize specifically designed central air handler systems. A centralized system offers many advantages over the traditional multiple distributed unit system that evolved as an easy, drop-in computer room cooling appliance (commonly referred to as a CRAH unit). Centralized systems use larger motors and fans that tend to be more efficient. They are also well suited for variable volume operation through the use of VSDs and maximize efficiency at part loads.

In Figure 5, the pie charts below show the electricity consumption distribution for two data centers. Both are large facilities with approximately equivalent data center equipment loads, located in adjacent buildings and operated by the same company.

[Figure: electricity consumption breakdown for the two facilities.
  Multiple distributed CRAC units: computer loads 38%, UPS losses 6%, HVAC 54%, lighting 2%.
  Central air handler: computer loads 63%, UPS losses 13%, HVAC air movement 9%, HVAC chilled water plant 14%, lighting 1%.]

Figure 5. Comparison of distributed air delivery to central air delivery (Source: Rumsey Engineers)


The facility on the left uses a multiple distributed unit system based on air-cooled CRAC units, while the facility on the right uses a central air handler system. An ideal data center would use 100% of its electricity to operate data center equipment; energy used to operate the fans, compressors, and power systems that support the data center is strictly overhead cost. The data center supported by a centralized air system (on the right) uses almost two thirds of its input power to operate revenue-generating data center equipment, compared to the multiple small unit system, which uses just over one third of its power to operate the actual data center equipment. The trend seen here has been consistently supported by benchmarking data. The two most significant energy saving methods are water-cooled equipment and efficient centralized air handler systems.
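The breakdowns in Figure 5 can be restated in terms of Power Usage Effectiveness (PUE), the metric discussed later in this guide, by treating the computer-load share as the IT fraction of total facility power. A short calculation:

```python
# Restating the Figure 5 breakdowns as PUE
# (PUE = total facility power / IT equipment power).

facilities = {
    "multiple distributed CRAC units": {"computer loads": 0.38, "UPS losses": 0.06,
                                        "HVAC": 0.54, "lighting": 0.02},
    "central air handler": {"computer loads": 0.63, "UPS losses": 0.13,
                            "HVAC air movement": 0.09,
                            "HVAC chilled water plant": 0.14, "lighting": 0.01},
}

for name, split in facilities.items():
    pue = 1.0 / split["computer loads"]   # IT share of total power -> PUE
    print(f"{name}: PUE ~= {pue:.2f}")
# Roughly 2.6 for the distributed-unit facility versus 1.6 for the centralized
# system, consistent with the one-third vs. two-thirds comparison above.
```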

Most data center loads do not vary appreciably over the course of the day, and the cooling system is typically significantly oversized. A centralized air handling system can improve efficiency by taking advantage of surplus and redundant capacity. The maintenance benefits of a central system are well known, and the reduced footprint and maintenance traffic in the data center are additional benefits. Implementation of an air-side economizer system is simplified with a central air handler system. Optimized air management, such as that provided by hot aisle/cold aisle configurations, is also easily implemented with a ducted central system. Modular units are notorious for battling each other to maintain data center humidity set points; that is, one unit can be observed to be dehumidifying while an adjacent unit is humidifying. Instead of independently controlled modular units, a centralized control system using shared sensors and set points ensures proper communication among the data center air handlers. Even with modular units, humidity control over make-up air should be all that is required.

    Low Pressure Drop Air Delivery

A low-pressure-drop design (oversized ductwork or a generous under-floor plenum) is essential to optimizing energy efficiency by reducing fan energy, and it facilitates long-term buildout flexibility. Ducts should be as short as possible and sized significantly larger than typical office systems, since 24-hour operation of the data center increases the value of energy use over time relative to first cost. Since loads often change only when new servers or racks are added or removed, periodic manual airflow balancing can be more cost effective than implementing an automated airflow balancing control scheme.

High-Efficiency Chilled Water Systems

Efficient Equipment

Use efficient water-cooled chillers in a central chilled water plant. A high-efficiency VFD-equipped chiller with an appropriate condenser water reset is typically the most efficient cooling option for large facilities. Chiller part-load efficiency should be considered, since data centers often operate at less than peak capacity. Chiller part-load efficiencies can be optimized with variable frequency drive compressors, high evaporator temperatures, and low entering condenser water temperatures.

Oversized cooling towers with VFD-equipped fans will lower water-cooled chiller plant energy. For a given cooling load, larger towers have a smaller approach to the ambient wet bulb temperature, thus allowing for operation at lower cold condenser water temperatures and improving chiller operating efficiency. The larger fans associated with the oversized towers can be operated at lower speeds to lower cooling tower fan energy compared to a smaller tower.

Condenser water and chilled water pumps should be selected for the highest pumping efficiency at typical operating conditions, rather than at full-load conditions.

    Optimize Plant Design and Operation

Data centers offer a number of opportunities in central plant optimization, both in design and operation. A medium-temperature, as opposed to low-temperature, chilled water loop design using a water supply temperature of 55°F or higher improves chiller efficiency and eliminates uncontrolled phantom dehumidification loads (refer to the "Humidification" and "Controls" sections). Higher temperature chilled water also allows more water-side economizer hours, in which the cooling towers can serve some or all of the load directly, reducing or eliminating the load on the chillers. The condenser water loop should also be optimized; a 5°F to 7°F approach cooling tower plant with a condenser water temperature reset pairs nicely with a variable speed chiller to offer large energy savings.


Efficient Pumping

A well-thought-out, efficient pumping design is an essential component of a high-efficiency chilled water system. Pumping efficiency can vary widely depending on the configuration of the system and whether the system is for an existing facility or new construction. Listed below are general guidelines for optimizing pumping efficiency for existing and new facilities of any configuration; a rough sketch of the underlying pump power arithmetic follows the lists.

Existing Facilities:

• Reduce the average chilled water flow rate corresponding to the typical load.
• Convert existing primary/secondary chilled water pumping system to primary-only.
• Convert existing system from constant flow to variable flow.
• Reduce the pressure drop of the chilled water distribution system by opening pump balancing valves and allowing pump VFDs to limit the flow rate.
• Reduce the chilled water supply pressure set point.
• Add a chilled water pumping differential pressure set point reset control sequence.
• Eliminate unnecessary bypassed chilled water by replacing 3-way chilled water valves with 2-way valves.

New Construction:

• Reduce the average chilled water flow rate corresponding to the typical load.
• Implement primary-only variable flow chilled water pumping.
• Specify an untrimmed impeller; do not install pump balancing valves and instead use a VFD to limit pump flow rate.
• Design for a low water supply pressure set point.
• Specify a water pumping differential pressure set point reset control sequence.
• Design a low pressure drop pipe layout for pumps.
• Specify 2-way chilled water valves instead of 3-way valves.
• Install VFDs on all pumps and run redundant pumps at lower speeds.
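As context for the flow and pressure-drop measures listed above, the sketch below shows the standard pump power relationship (water horsepower from flow and head, divided by pump and motor efficiency); the flows, heads, and efficiencies used are illustrative assumptions, not values from this guide.

```python
# Rough sketch of the pump power arithmetic behind these guidelines: shaft
# power grows with both flow rate and head (pressure drop), so reducing
# average flow and loop pressure drop both cut pumping energy.

def pump_power_kw(flow_gpm, head_ft, pump_efficiency=0.75, motor_efficiency=0.93):
    """Water horsepower converted to electrical kW:
    hp = gpm * ft / 3960, and 1 hp = 0.746 kW."""
    hydraulic_hp = flow_gpm * head_ft / 3960.0
    return hydraulic_hp * 0.746 / (pump_efficiency * motor_efficiency)

base = pump_power_kw(flow_gpm=1000, head_ft=80)
improved = pump_power_kw(flow_gpm=700, head_ft=55)   # lower flow, lower pressure drop
print(f"baseline: {base:.1f} kW, optimized: {improved:.1f} kW "
      f"({(1 - improved / base):.0%} pumping energy reduction)")
```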

    Free Cooling

Air-Side Economizer

The cooling load for a data center is independent of the outdoor air temperature. Most nights and during mild winter conditions, the lowest cost option to cool data centers is an air-side economizer; however, a proper engineering evaluation of the local climate conditions must be completed to evaluate whether this is the case for a specific data center. Due to the humidity and contamination concerns associated with data centers, careful control and design work may be required to ensure that cooling savings are not lost because of excessive humidification and filtration requirements, respectively. Data center professionals are split in their perception of risk when using this strategy. It is standard practice, however, in the telecommunications industry to equip facilities with air-side economizers. Some IT-based centers routinely use outside air without apparent complications, but others are concerned about contamination and environmental control for the IT equipment in the room. Nevertheless, outside air economizing is implemented in many data center facilities and results in energy-efficient operation. While ASHRAE Standard 90.1 currently does not require economizer use in data centers, a new version of this standard will likely add this requirement. Already some code authorities, such as the Department of Planning and Development for the city of Seattle, have mandated the use of economizers in data centers under certain conditions.

Control strategies to deal with temperature and humidity fluctuations must be considered along with contamination concerns over particulates or gaseous pollutants. For data centers with active humidity control, a dewpoint temperature lockout scheme should be used as part of the air-side economizer control strategy. This scheme prevents high outside air dehumidification and humidification loads by tracking the moisture content of the outside air and locking out the economizer when the air is either too dry or too moist. Mitigation steps may involve filtration or other measures. Other contamination concerns, such as salt or corrosive matter, should be evaluated. Generally, concern over contamination should be limited to unusually harsh environments such as pulp and paper mills or large chemical spills.
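The economizer logic described above, cooler-than-return outside air plus a dewpoint lockout, can be sketched as a simple enable function. The set points below are illustrative; the dewpoint band is borrowed from the recommended moisture limits in Table 1, and a real design would tune these values to the site.

```python
# Minimal sketch of air-side economizer enabling with a dewpoint lockout.
# Set points are illustrative assumptions, not values prescribed by the guide.

def economizer_enabled(outside_air_f, return_air_f, outside_dew_point_f,
                       min_dew_point_f=41.9, max_dew_point_f=59.0,
                       approach_deadband_f=2.0):
    cooler_than_return = outside_air_f < (return_air_f - approach_deadband_f)
    humidity_acceptable = min_dew_point_f <= outside_dew_point_f <= max_dew_point_f
    return cooler_than_return and humidity_acceptable

# With an 85 F return (good air management), a 70 F / 50 F-dewpoint day qualifies:
print(economizer_enabled(outside_air_f=70, return_air_f=85, outside_dew_point_f=50))  # True
# A 70 F day with a 20 F dewpoint is locked out to avoid a humidification load:
print(economizer_enabled(outside_air_f=70, return_air_f=85, outside_dew_point_f=20))  # False
```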


Wherever possible, outside air intakes should be located on the north side of buildings in the northern hemisphere, where there is significantly less solar heat gain compared to the south side.

    Water-Side Economizer

Free cooling can be provided via a water-side economizer, which uses the evaporative cooling capacity of a cooling tower to produce chilled water to cool the data center during mild outdoor conditions. Free cooling is usually best suited for climates that have wet bulb temperatures lower than 55°F for 3,000 or more hours per year. It most effectively serves chilled water loops designed for 50°F and above chilled water, or lower temperature chilled water loops with significant surplus air handler capacity in normal operation. A heat exchanger is typically installed to transfer heat from the chilled water loop to the cooling tower water loop while isolating these loops from each other. Locating the heat exchanger upstream from the chillers, rather than in parallel to them, allows for integration of the water-side economizer as a first stage of cooling the chilled water before it reaches the chillers. During those hours when the water-side economizer can remove enough heat to reach the chilled water supply set point, the chilled water can be bypassed around the chillers. When the water-side economizer can remove heat from the hot chilled water but not enough to reach set point, the chillers operate at reduced load to meet the chilled water supply set point.
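The integrated staging described above can be summarized as a small decision function; the temperatures used in the examples are illustrative assumptions.

```python
# Sketch of integrated water-side economizer staging: bypass the chillers
# when the economizer alone reaches set point, pre-cool and let the chillers
# trim when it helps but cannot reach set point, otherwise run chillers only.

def stage_plant(chw_return_f, chw_setpoint_f, wse_leaving_f):
    """wse_leaving_f: water temperature achievable leaving the water-side
    economizer heat exchanger under current tower conditions."""
    if wse_leaving_f <= chw_setpoint_f:
        return "economizer only (chillers bypassed)"
    if wse_leaving_f < chw_return_f:
        return "economizer pre-cooling + chillers at reduced load"
    return "chillers only (economizer isolated)"

print(stage_plant(chw_return_f=68, chw_setpoint_f=55, wse_leaving_f=53))
print(stage_plant(chw_return_f=68, chw_setpoint_f=55, wse_leaving_f=60))
print(stage_plant(chw_return_f=68, chw_setpoint_f=55, wse_leaving_f=70))
```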

    Thermal Storage

Thermal storage is a method of storing thermal energy in a reservoir for later use, and it is particularly useful in facilities with high cooling loads such as data centers. It can result in peak electrical demand savings and improve chilled water system reliability. In climates with cool, dry nighttime conditions, cooling towers can directly charge a chilled water storage tank, using a small fraction of the energy otherwise required by chillers. A thermal storage tank can also be an economical alternative to additional mechanical cooling capacity; for example, water storage provides the additional benefit of backup make-up water for cooling towers.

    Direct Liquid Cooling

Direct liquid cooling refers to a number of different cooling approaches that all share the same characteristic of transferring waste heat to a fluid at or very near the point the heat is generated, rather than transferring it to room air and then conditioning the room air. One current approach to implementing liquid cooling utilizes cooling coils installed directly onto the rack to capture and remove waste heat. The under-floor area is often used to run the coolant lines that connect to the rack coil via flexible hoses. Many other approaches are available or being pursued, ranging from water cooling of component heat sinks to bathing components with dielectric fluid cooled via a heat exchanger.

Liquid cooling can serve higher heat densities and be much more efficient than traditional air cooling, as water flow is a much more efficient method of transporting heat. Energy efficiencies will be realized when such systems allow the use of a medium temperature chilled water supply (55°F to 60°F rather than 44°F) and by reducing the size and power consumption of fans serving the data center. These warmer chilled water supply temperatures facilitate the pairing of liquid cooling with a water-side economizer, further increasing potential energy savings.

Humidification

Low-energy humidification techniques can replace traditional electric resistance humidifiers with an adiabatic approach that uses the heat present in the air, or recovered from the computer heat load, for humidification. Ultrasonic humidifiers, evaporative wetted media, and micro droplet spray are some examples of adiabatic humidifiers. An electric resistance humidifier requires about 430 Watts to boil one pound of 60°F water, while a typical ultrasonic humidifier only requires 30 Watts to atomize the same pound of water. These passive humidification approaches also cool the air, in contrast to an electric resistance humidifier heating the air, which further saves energy by reducing the load on the cooling system.
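A quick comparison of the humidifier figures quoted above, treating the 430 W and 30 W values as watt-hours per pound of water, shows the scale of the difference; the annual humidification load and run hours are assumptions for illustration.

```python
# Comparing electric resistance and ultrasonic humidification using the
# per-pound energy figures quoted above (treated as Wh per pound of water).
# The water rate and run hours are illustrative assumptions.

WH_PER_LB_RESISTANCE = 430
WH_PER_LB_ULTRASONIC = 30
RATE = 0.12  # $/kWh

def annual_humidifier_cost(lbs_water_per_hour, hours_per_year=2000):
    lbs = lbs_water_per_hour * hours_per_year
    resistance_kwh = lbs * WH_PER_LB_RESISTANCE / 1000
    ultrasonic_kwh = lbs * WH_PER_LB_ULTRASONIC / 1000
    return resistance_kwh * RATE, ultrasonic_kwh * RATE

res_cost, ultra_cost = annual_humidifier_cost(lbs_water_per_hour=20)
print(f"electric resistance: ${res_cost:,.0f}/yr, ultrasonic: ${ultra_cost:,.0f}/yr")
# The adiabatic unit also cools the air it humidifies, so the real gap is
# larger once the reduced cooling load is counted.
```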

    Controls

More options are now available for dynamically allocating IT resources as computing or storage demands vary. Within the framework of ensuring continuous availability, a control system should be programmed to maximize the energy efficiency of the cooling systems under variable ambient conditions as well as variable IT loads.


Variable speed drives on CRAH and CRAC units (if available for the latter) allow for varying the airflow as the cooling load fluctuates. For raised floor installations, the fan speed should be controlled to maintain an under-floor pressure set point. However, cooling air delivery via conventional raised floor tiles can be ill-suited for responding to the resulting dynamic heat load without either over-cooling the space or starving some areas of sufficient cooling. Variable air volume air delivery systems are a much better solution for consistently providing cooling when and where it is needed. Supply air and supply chilled water temperatures should be set as high as possible while maintaining the necessary cooling capacity.

Data centers often over-control humidity, which results in no real operational benefit and increases energy use. Tight humidity control is a carryover from the old mainframe and tape storage eras and generally can be relaxed or eliminated for many locations.

Humidity controls are frequently not centralized. This can result in adjacent units serving the same space fighting to meet the humidity set point, with one humidifying while the other is dehumidifying. Humidity sensor drift can also contribute to control problems if sensors are not regularly recalibrated. One very important consideration for reducing unnecessary humidification is to operate the cooling coils of the air-handling equipment above the dew point (usually by running chilled water temperatures above 50°F), thus eliminating unnecessary dehumidification.

On the chilled water plant side, variable flow pumping and chillers equipped with variable speed driven compressors should be installed to provide energy-efficient operation during low load conditions. Another option to consider for increasing chiller plant efficiency is to actively reset the chilled water supply temperature higher during low load conditions. In data centers located in relatively dry climates and which experience relatively low partial loads, implementing a water-side economizer can provide tremendous savings over the course of the year (see earlier discussion on "Water-Side Economizers").
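The control strategies described in this section (variable speed fan control to an under-floor pressure set point and chilled water supply temperature reset at low load) can be sketched as simple supervisory logic. The set points, gains, limits, and I/O functions below are hypothetical placeholders, not recommended values or a vendor API; they stand in for whatever points a site's control system actually exposes.

```python
# Illustrative supervisory control sketch: vary CRAH fan speed to hold an
# under-floor pressure set point, and reset the chilled water supply
# temperature upward as plant load drops. All values are assumptions.

UNDERFLOOR_SP_IN_WC = 0.05   # under-floor static pressure set point (in. w.c.), assumed
CHW_SUPPLY_MIN_F = 45.0      # coldest allowed chilled water supply temperature, assumed
CHW_SUPPLY_MAX_F = 55.0      # warmest allowed chilled water supply temperature, assumed

def fan_speed_command(pressure_in_wc: float, current_speed_pct: float,
                      gain: float = 200.0) -> float:
    """Proportional adjustment of fan speed to hold under-floor pressure."""
    error = UNDERFLOOR_SP_IN_WC - pressure_in_wc
    new_speed = current_speed_pct + gain * error
    return min(100.0, max(20.0, new_speed))  # clamp to keep a minimum airflow

def chw_supply_reset_f(plant_load_fraction: float) -> float:
    """Reset chilled water supply temperature higher during low load conditions."""
    load = min(1.0, max(0.0, plant_load_fraction))
    # Full load -> coldest supply; low load -> warmest supply.
    return CHW_SUPPLY_MAX_F - load * (CHW_SUPPLY_MAX_F - CHW_SUPPLY_MIN_F)

# Example control step with made-up readings:
print(fan_speed_command(pressure_in_wc=0.04, current_speed_pct=60.0))  # speeds fan up
print(chw_supply_reset_f(plant_load_fraction=0.3))                     # warmer supply at low load
```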

    Electrical Systems

Similar to cooling systems, it is important to always consider initial and future loads, in particular part- and low-load conditions, when designing and selecting equipment for a data center's electrical system.

Power Distribution

Data centers typically have an electrical power distribution path consisting of the utility service, switchboard, switchgear, alternate power sources (i.e., backup generator), paralleling equipment for redundancy (i.e., multiple UPSs and PDUs), and auxiliary conditioning equipment (i.e., line filters, capacitor bank). These components each have a heat output that is tied directly to the load in the data center. Efficiencies can range widely between manufacturers and variations in how the equipment is designed. However, operating efficiencies can be controlled and optimized through thoughtful selection of these components.

    Uninterruptible Power Supplies (UPS)

UPS systems provide backup power to data centers, and can be based on battery banks, rotary machines, fuel cells, or other technologies. A portion of all the power supplied to the UPS to operate the data center equipment is lost to inefficiencies in the system. The first step to minimize these losses is to evaluate which equipment, if not the entire data center, requires a UPS system. For instance, the percent of IT power required by a scientific computing facility can be significantly lower than the percent required for a financial institution.

Increasing the UPS system efficiency offers direct, 24-hour-a-day energy savings, both within the UPS itself and indirectly through lower heat loads and even reduced building transformer losses. Among double conversion systems (the most commonly used data center system), UPS efficiency ranges from 86% to 95%. When a full data center equipment load is served through a UPS system, even a small improvement in the efficiency of the system can yield a large annual cost savings. For example, a 15,000 square foot data center with IT equipment operating at 100 W/sf requires 13,140 MWh of energy annually for the IT equipment. If the UPS system supplying that power has its efficiency improved from 90% to 95%, the annual energy bill will be reduced by 768,421 kWh, or about $90,000 at $0.12/kWh, plus significant additional cooling system energy savings from the reduced cooling load. For battery-based UPS systems, use a design approach that keeps the UPS load factor as high as possible. This usually requires using multiple smaller units.
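The arithmetic in this example is easy to reproduce. The sketch below uses the same assumptions as the text: 15,000 square feet at 100 W/sf, continuous operation, UPS efficiency improved from 90% to 95%, and electricity at $0.12/kWh.

```python
# Reproduces the UPS efficiency example above.

HOURS_PER_YEAR = 8760

area_ft2 = 15_000
it_density_w_per_ft2 = 100
rate_usd_per_kwh = 0.12

it_load_kw = area_ft2 * it_density_w_per_ft2 / 1000       # 1,500 kW of IT load
it_energy_kwh = it_load_kw * HOURS_PER_YEAR               # ~13,140 MWh/yr

def ups_input_kwh(it_kwh: float, efficiency: float) -> float:
    """Energy drawn at the UPS input to deliver it_kwh to the IT load."""
    return it_kwh / efficiency

savings_kwh = ups_input_kwh(it_energy_kwh, 0.90) - ups_input_kwh(it_energy_kwh, 0.95)
print(f"IT energy:   {it_energy_kwh / 1000:,.0f} MWh/yr")
print(f"UPS savings: {savings_kwh:,.0f} kWh/yr "
      f"(~${savings_kwh * rate_usd_per_kwh:,.0f}/yr)")    # ~768,000 kWh, roughly $90,000/yr
```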

Redundancy in particular requires design attention; operating a single large UPS in parallel with a 100% capacity identical redundant UPS unit (n+1 design redundancy) results in very low load factor operation, at best no more than 50% at full design buildout. Consider a UPS system sized for two UPS units with n+1 redundancy, with both units operating at 30% load factor. If the same load is served by three smaller units (also sized for n+1 redundancy), then these units will operate at 40% load factor. This 10% increase in load factor can result in a 1.2% efficiency increase (see Figure 6). For a 100 kW load, this efficiency increase can result in savings of approximately 13,000 kWh annually.
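A rough check of this redundancy example is sketched below. The efficiencies at 30% and 40% load factor are approximate values read from a curve like Figure 6 and are assumptions, not manufacturer data.

```python
# Annual UPS loss comparison for the n+1 example above: the same 100 kW
# critical load carried at a 30% versus a 40% load factor. The two
# efficiencies are assumed, consistent with a ~1.2% gain per Figure 6.

HOURS_PER_YEAR = 8760
load_kw = 100.0

def annual_loss_kwh(load_kw: float, efficiency: float) -> float:
    """UPS losses over a year at a constant load and constant efficiency."""
    return (load_kw / efficiency - load_kw) * HOURS_PER_YEAR

eff_at_30_pct_lf = 0.900   # assumed efficiency at ~30% load factor
eff_at_40_pct_lf = 0.912   # assumed efficiency at ~40% load factor (+1.2%)

savings = annual_loss_kwh(load_kw, eff_at_30_pct_lf) - annual_loss_kwh(load_kw, eff_at_40_pct_lf)
print(f"Approximate annual savings: {savings:,.0f} kWh")   # on the order of 13,000 kWh
```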

Figure 6. Typical UPS efficiency curve for 100 kVA capacity and greater (Source: Rumsey Engineers). The figure plots UPS efficiency (roughly 82% to 94%) against load factor (0 to 1) and annotates how a 10% increase in load factor yields about a 1.2% increase in efficiency.

Evaluate the need for power conditioning. Line interactive systems often provide enough power conditioning for servers at a higher efficiency than typical double conversion UPS systems. Some traditional double conversion UPS systems (which offer the highest degree of power conditioning) have the ability to operate in the more efficient line conditioning mode, usually advertised as "economy" or "eco" mode.

New technologies currently being proven on the market, such as flywheel systems, should also be considered. Such systems eliminate replacement and disposal concerns associated with conventional lead-acid battery based UPS systems, the added costs of special ventilation systems and, often, conditioning systems required to maintain temperature requirements to ensure battery life.

    Power Distribution Units (PDU)

A PDU passes conditioned power that is sourced from a UPS or generator to provide reliable power distribution to multiple pieces of equipment. It provides many outlets to power servers, networking equipment, and other electronic devices that require conditioned and/or continuous power. Maintaining a higher voltage in the source power lines fed from a UPS or generator allows for a PDU to be located more centrally within a data center. As a result, the conductor lengths from the PDU to the equipment are reduced and less power is lost in the form of heat.


Specialty PDUs that convert higher voltage (208V AC or 480V AC) into lower voltage (120V AC) via a built-in step-down transformer for low voltage equipment are commonly used as well. Transformers lose power in the form of heat when voltage is being converted. The parameters of the transformer in this type of PDU can be specified such that the energy efficiency is optimized. A dry-type transformer with a 176°F temperature rise uses 13% to 21% less energy than a 302°F rise unit. The higher efficiency 176°F temperature rise unit has a first-cost premium; however, the cost is usually recovered in the energy cost savings. In addition, many transformers tend to operate most efficiently when they are loaded in the 20% to 50% range. Selecting a PDU with a transformer at an optimized load factor will reduce the loss of power through the transformer. Energy can also be saved by reducing the number of installed PDUs with built-in transformers.

    Distribution Voltage Options

Another source of electrical power loss for both AC and DC distribution is that of the conversions required from going from the original voltage supplied by the utility (usually a medium voltage of around 12 kV AC or more) to that of the voltage at each individual device within the data center (usually a low voltage around 120V AC to 240V AC). Designing a power distribution network that delivers all of the required voltages while minimizing power losses is often a challenging task. Following are general guidelines for delivering electrical power in the most energy-efficient manner possible:

• Minimize the resistance by increasing the cross-sectional area of the distribution path and making it as short as possible.

• Maintain a higher voltage for as long as possible to minimize the current (a simple resistive-loss comparison follows this list).

• Use switch-mode transistors for power conditioning.

• Locate all voltage regulators close to the load to minimize distribution losses at lower voltages.
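As a simple illustration of the second guideline, the sketch below compares conductor (I²R) losses for the same delivered power at two distribution voltages; the load and loop resistance are arbitrary round numbers chosen only for illustration.

```python
# Why higher distribution voltage reduces losses: for the same delivered
# power over the same conductor resistance, the resistive loss scales with
# the square of the current (P_loss = I^2 * R). Values below are arbitrary
# illustrative numbers, not design data.

def feeder_loss_w(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """I^2*R loss for a given delivered power, voltage, and loop resistance.

    Simplified: single circuit, unity power factor.
    """
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

power_w = 50_000        # 50 kW on the circuit (assumed)
resistance_ohm = 0.02   # conductor loop resistance (assumed)

for voltage in (208, 480):
    loss = feeder_loss_w(power_w, voltage, resistance_ohm)
    print(f"{voltage} V: {loss:,.0f} W of conductor loss")
```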

    Demand Response

Demand response refers to the process by which facility operators voluntarily curb energy use during times of peak demand. Many utility programs offer incentives to business owners that implement this practice on hot summer days or other times when energy demand is high and supply is short. Demand response programs can be executed by reducing loads through a building management system or switching to backup power generation.

For reducing loads when a demand response event is announced, data center operators can take certain reduction measures such as dimming a third of their lighting or powering off idle office equipment. Through automated network building system solutions this can be a simple, efficient, inexpensive process, and has been known to reduce peak loads during events by over 14% (refer to Cisco's site, Enterprise Automates Utility Demand Response, http://www.cisco.com/en/US/prod/collateral/ps6712/ps10447/ps10454/case_study_c36-543499.html).

    DC Power

In a conventional data center, power is supplied from the grid as AC power and distributed throughout the data center infrastructure as AC power. However, most of the electrical components within the data center, as well as the batteries storing the backup power in the UPS system, require DC power. As a result, the power must go through multiple conversions resulting in power loss and wasted energy.

One way to reduce the number of times power needs to be converted is by utilizing a DC power distribution. This has not yet become a common practice and, therefore, could carry significantly higher first costs, but it has been tested at several facilities. A study done by Lawrence Berkeley National Labs in 2007 compared the benefits of adopting a 380V DC power distribution for a datacom facility to a traditional 480V AC power distribution system. The results showed that the facility using the DC power had a 7% reduction in energy consumption compared to the typical facility with AC power distribution. Other DC distribution systems are available, including 575V DC and 48V DC. These systems offer energy savings as well.


    Lighting

Data center spaces are not uniformly occupied and, therefore, do not require full illumination during all hours of the year. UPS, battery, and switchgear rooms are examples of spaces that are infrequently occupied. Therefore, zone based occupancy sensors throughout a data center can have a significant impact on reducing the lighting electrical use. Careful selection of an efficient lighting layout (e.g., above aisles and not above the server racks), lamps, and ballasts will also reduce not only the lighting electrical usage but also the load on the cooling system. The latter leads to secondary energy savings.

Other Opportunities for Energy-Efficient Design

    On-Site Generation

The combination of a nearly constant electrical load and the need for a high degree of reliability make large data centers well suited for on-site electric generation. To reduce first costs, on-site generation equipment should replace the backup generator system. It provides both an alternative to grid power and waste heat that can be used to meet nearby heating needs or harvested to cool the data center through absorption or adsorption chiller technologies. In some situations, the surplus and redundant capacity of the on-site generation plant can be operated to sell power back to the grid, offsetting the generation plant capital cost.

    Co-generation Plants

Co-generation systems, also known as combined heat and power, involve the use of a heat engine or power station to simultaneously produce electricity and useful heat. In data centers it is very common to see a diesel generator as the source of backup power, which can easily be utilized as a co-generation system. The waste heat produced by the generator can be used to run an absorption chiller which provides cooling to the data center. (Refer to the Use of Waste Heat section.) Due to the significant air pollution impact of diesel generators, the number of hours of generator operation can be limited by air quality regulations.

    Reduce Standby Losses

Standby generators are typically specified with jacket and oil warmers that use electricity to maintain the system in standby at all times; these heaters use more electricity than the standby generator will ever produce. Careful consideration of redundancy configurations should be followed to minimize the number of standby generators. Using waste heat from the data center can minimize losses by block heaters. Solar panels could be considered as an alternative source for generator block heat. Another potential strategy is to work with generator manufacturers to reduce block heater output when conditions allow.

Use of Waste Heat

Waste heat can be used directly or to supply cooling required by the data center through the use of absorption or adsorption chillers, reducing chilled water plant energy costs by well over 50%. The higher the cooling air or water temperature leaving the server, the greater the opportunity for using waste heat. The direct use of waste heat for low temperature heating applications, such as preheating ventilation air for buildings or heating water, will provide the greatest energy savings. Heat recovery chillers may also provide an efficient means to recover and reuse heat from data center equipment environments for comfort heating of typical office environments.

Absorbers use low-grade waste heat to thermally compress the chiller vapor in lieu of the mechanical compression used by conventional chillers. Single stage, lithium bromide desiccant based chillers are capable of using the low grade waste heat that can be recovered from common onsite power generation options including microturbines, fuel cells, and natural gas reciprocating engines. Although absorption chillers have low coefficient of performance (COP) ratings compared to mechanical chillers, utilizing free waste heat from a generating plant to drive them increases the overall system efficiency. Earlier absorption chiller models have experienced reliability issues due to lithium bromide crystallization on the absorber walls when entering cooling tower water temperatures were not tightly controlled. Modern controls on newer absorption chiller models, though more complicated, remedy this problem. However, start-up and maintenance of absorption chillers have often been viewed as significantly more involved than that for electric chillers.


A potentially more efficient and more reliable thermally driven technology that has entered the domestic market is the adsorption chiller. An adsorption chiller is a silica gel desiccant based cooling system that uses waste heat to regenerate the desiccant and cooling towers to dissipate the removed heat. The process is similar to that of an absorption process but simpler and, therefore, more reliable. The silica gel based system uses water as the refrigerant and is able to use lower temperature waste heat than a lithium bromide based absorption chiller. Adsorption chillers include better automatic load matching capabilities for better part-load efficiency compared to absorption chillers. The silica gel adsorbent is non-corrosive and requires significantly less maintenance and monitoring compared to the corrosive lithium bromide absorbent counterpart. Adsorption chillers generally restart more quickly and easily compared to absorption chillers. While adsorption chillers have been in production for more than 20 years, they have only recently been introduced in the U.S. market.

    Data Center Metrics and Benchmarking

Energy efficiency metrics and benchmarks can be used to track the performance of and identify potential opportunities to reduce energy use in data centers. For each of the metrics listed in this section, benchmarking values are provided for reference. These values are based on a data center benchmarking study carried out by Lawrence Berkeley National Laboratory. The data from this survey can be found under LBNL's Self-Benchmarking Guide for Data Centers: http://hightech.lbl.gov/benchmarking-guides/data.html

Power Usage Effectiveness (PUE) and Data Center Infrastructure Efficiency (DCiE)

PUE is defined as the ratio of the total power to run the data center facility to the total power drawn by all IT equipment:

PUE = Total Facility Power / IT Equipment Power

Standard: 2.0    Good: 1.4    Better: 1.1

An average data center has a PUE of 2.0; however, several recent super-efficient data centers have been known to achieve a PUE as low as 1.1.

DCiE is defined as the ratio of the total power drawn by all IT equipment to the total power to run the data center facility, or the inverse of the PUE:

DCiE = 1 / PUE = IT Equipment Power / Total Facility Power

Standard: 0.5    Good: 0.7    Better: 0.9

The Green Grid developed a benchmarking protocol for these two metrics, for which references and URLs are provided at the end of this guide.

It is important to note that these two terms, PUE and DCiE, do not define the overall efficiency of an entire data center, but only the efficiency of the supporting equipment within a data center. These metrics could be alternatively defined using units of average annual power or annual energy (kWh) rather than an instantaneous power draw (kW). Using the annual measurements provides the advantage of accounting for variable free-cooling energy savings as well as the trend for dynamic IT loads due to practices such as IT power management.

PUE and DCiE are defined with respect to site power draw. Another alternative definition could use a source power measurement to account for different fuel source uses.

Energy Star defines a similar metric, defined with respect to source energy, Source PUE, as:

Source PUE = Total Facility Energy (kWh) / UPS Energy (kWh)
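These ratios reduce to one-line calculations. The sketch below implements PUE, DCiE, and Source PUE as defined above; the input power and energy values are illustrative placeholders rather than measurements from any particular facility.

```python
# PUE, DCiE, and Source PUE as defined above. Input values are illustrative.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Total facility power divided by IT equipment power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """IT equipment power divided by total facility power (1 / PUE)."""
    return it_equipment_kw / total_facility_kw

def source_pue(total_facility_source_kwh: float, ups_kwh: float) -> float:
    """Source PUE: total facility energy (source basis) over UPS energy."""
    return total_facility_source_kwh / ups_kwh

# Illustrative instantaneous readings (kW):
print(f"PUE  = {pue(1500.0, 1000.0):.2f}")    # 1.50
print(f"DCiE = {dcie(1500.0, 1000.0):.2f}")   # 0.67

# Annualized form (kWh/yr), which captures free-cooling and dynamic IT loads:
print(f"Annual PUE = {pue(12_000_000.0, 8_500_000.0):.2f}")
```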


As mentioned, the above metrics provide a measure of data center infrastructure efficiency in contrast to an overall data center efficiency. Several organizations are working on developing overall data center efficiency metrics with a protocol to account for the useful work produced by a data center per unit of energy or power. Examples of such metrics are the Data Center Productivity and Data Center Energy Productivity metrics proposed by The Green Grid, and the Corporate Average Data Center Efficiency metric proposed by The Uptime Institute.

    Rack Cooling Index (RCI) and Return Temperature Index (RTI)

RCI measures how effectively equipment racks are cooled according to equipment intake temperature guidelines established by ASHRAE/NEBS. By using the difference between the allowable and recommended intake temperatures from the ASHRAE Class 1 (2008) guidelines, the maximum (RCI_HI) and minimum (RCI_LO) limits for the RCI are defined. Using the rounded 80°F maximum recommended and 90°F maximum allowable intake temperatures, the high-side index is:

RCI_HI = [1 − Σ(Tx − 80) / ((90 − 80) × n)] × 100 [%], summed over all intakes with Tx > 80

where Tx is the intake temperature at rack x and n is the total number of intake temperature measurements. RCI_LO is defined analogously using the minimum recommended and minimum allowable intake temperatures.
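A minimal sketch of the RCI_HI calculation as reconstructed above is shown below, using the rounded 80°F recommended and 90°F allowable maximum intake temperatures; the sample intake readings are hypothetical.

```python
# RCI_HI as reconstructed above: over-temperature at each rack intake is
# summed and normalized by the gap between the maximum allowable and maximum
# recommended intake temperatures. Sample readings are hypothetical.

def rci_hi(intake_temps_f, t_rec_max=80.0, t_all_max=90.0):
    """Rack Cooling Index (high side), in percent."""
    n = len(intake_temps_f)
    over = sum(max(0.0, t - t_rec_max) for t in intake_temps_f)
    return (1.0 - over / ((t_all_max - t_rec_max) * n)) * 100.0

intakes = [72.0, 75.0, 78.0, 81.0, 84.0, 79.0]   # example rack intake temperatures (°F)
print(f"RCI_HI = {rci_hi(intakes):.1f}%")        # 100% would mean no intakes above 80°F
```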


Heating, Ventilation and Air-Conditioning (HVAC) System Effectiveness

This metric is defined as the ratio of the annual IT equipment energy to the annual HVAC system energy:

HVAC System Effectiveness = IT Equipment Energy (kWh/yr) / HVAC System Energy (kWh/yr)

Standard: 0.7    Good: 1.4    Better: 2.5

For a fixed value of IT equipment energy, a lower HVAC system effectiveness corresponds to a relatively high HVAC system energy use and, therefore, a high potential for improving HVAC system efficiency. Note that a low HVAC system effectiveness may indicate that server systems are far more optimized and efficient compared to the HVAC system. Thus, this metric is a coarse screen for HVAC efficiency potential. According to a database of data centers surveyed by Lawrence Berkeley National Laboratory, HVAC system effectiveness can range from 0.6 up to 3.5.

Airflow Efficiency

This metric characterizes overall airflow efficiency in terms of the total fan power required per unit of airflow. It provides an overall measure of how efficiently air is moved through the data center, from the supply to the return, and takes into account low pressure drop design as well as fan system efficiency.

Airflow Efficiency = Total Fan Power (W) / Total Fan Airflow (cfm)

Standard: 1.25 W/cfm    Good: 0.75 W/cfm    Better: 0.5 W/cfm

Cooling System Efficiency

There are several metrics that measure the efficiency of an HVAC system. The most common is the ratio of average cooling system power usage (kW) to the average data center cooling load (tons). A cooling system efficiency of 0.8 kW/ton is considered good practice, while an efficiency of 0.6 kW/ton is considered a better benchmark value.

Cooling System Efficiency = Average Cooling System Power (kW) / Average Cooling Load (tons)

Standard: 1.1 kW/ton    Good: 0.8 kW/ton    Better: 0.6 kW/ton
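The three HVAC-related metrics above can be screened with the simple calculations below; the annual energy, fan power, airflow, and cooling figures are illustrative placeholders to be replaced with metered data.

```python
# Screening calculations for the HVAC-related metrics defined above.
# All input values are illustrative placeholders, not measured data.

def hvac_effectiveness(it_kwh_per_yr: float, hvac_kwh_per_yr: float) -> float:
    """Annual IT equipment energy divided by annual HVAC system energy."""
    return it_kwh_per_yr / hvac_kwh_per_yr

def airflow_efficiency_w_per_cfm(total_fan_power_w: float, total_airflow_cfm: float) -> float:
    """Total fan power per unit of airflow (W/cfm)."""
    return total_fan_power_w / total_airflow_cfm

def cooling_efficiency_kw_per_ton(cooling_power_kw: float, cooling_load_tons: float) -> float:
    """Average cooling system power per ton of cooling load (kW/ton)."""
    return cooling_power_kw / cooling_load_tons

print(f"{hvac_effectiveness(8_760_000, 5_000_000):.2f}")              # ~1.75, above the 'Good' benchmark
print(f"{airflow_efficiency_w_per_cfm(90_000, 120_000):.2f} W/cfm")   # 0.75, at the 'Good' benchmark
print(f"{cooling_efficiency_kw_per_ton(320, 400):.2f} kW/ton")        # 0.80, at the 'Good' benchmark
```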

On-Site Monitoring and Continuous Performance Measurement

Ongoing energy-usage management can only be effective if sufficient metering is in place. There are many aspects to monitoring the energy performance of a data center that are necessary to ensure that the facility maintains the high efficiency that was carefully sought out in the design process. Below is a brief treatment of best practices for data center energy monitoring. For more detail, refer to the Self-Benchmarking Guide for Data Center Energy Performance in the Bibliography and Resources section.

Energy-efficiency benchmarking goals, based on appropriate metrics, first need to be established to determine which measured values need to be obtained for measuring the data center's efficiency. The metrics listed above provide a good starting point for a high-level energy-efficiency assessment. A more detailed assessment could include monitoring to measure losses along electrical power chain equipment such as transformers, UPS, and PDUs with transformers. (For a list of possible measured values, refer to the Self-Benchmarking Guide for High-Tech Buildings: Data Centers at Lawrence Berkeley National Laboratory's Web site: http://hightech.lbl.gov/benchmarking-guides/data.html.)


The accuracy of the monitoring equipment should be specified, including calibration status, to support the level of desired accuracy expected from the monitoring. The measurement range should be carefully considered when determining the minimum sensor accuracy. For example, a pair of +/- 1.5°F temperature sensors provides no value for determining the chilled water ΔT if the operating ΔT can be as low as 5°F. Electromagnetic flow meters and strap-on ultrasonic flow meters are among the most accurate water flow meters available. Three-phase power meters should be selected to measure true root mean square (RMS) power.

Ideally, the Energy Monitoring and Control System (EMCS) and Supervisory Control and Data Acquisition (SCADA) systems provide all of the sensors and calculations required to determine real-time efficiency measurements. All measured values should be continuously trended and data archived for a minimum of one year to obtain annual energy totals. An open protocol control system allows for adding more sensors after initial installation. IT equipment often includes on-board temperature sensors. A developing technology includes a communications interface which allows the integration of the on-board IT sensors with an EMCS.

Monitoring for performance measurement should include temperature and humidity sensors at the air inlet of IT equipment and at heights prescribed by ASHRAE's Thermal Guidelines for Data Processing Environments, 2009. New technologies are becoming more prevalent to allow a wireless network of sensors to be deployed throughout the IT equipment rack inlets.

Supply air temperature and humidity should be monitored for each CRAC or CRAH unit, as well as the dehumidification/humidification status, to ensure that integrated control of these units is successful.


    Bibliography and Resources

    General Bibliography

Design Recommendations for High Performance Data Centers. Rocky Mountain Institute, 2003.

High Performance Data Centers: A Design Guidelines Sourcebook. Pacific Gas and Electric, 2006. http://hightech.lbl.gov/documents/data_centers/06_datacenters-pge.pdf. Accessed December 3, 2009.

Best Practices for Datacom Facility Energy Efficiency. ASHRAE Datacom Series, 2008.

Design Considerations for Datacom Equipment Centers. ASHRAE Datacom Series, 2005.

U.S. Department of Energy, Energy Efficiency and Renewable Energy Industrial Technologies Program, Saving Energy in Data Centers. http://www1.eere.energy.gov/industry/datacenters/index.html. Accessed December 3, 2009.

    Resources

    IT Systems

Server System Infrastructure (SSI) Forum. http://www.ssiforum.org. Accessed December 3, 2009.

Efficiency of Power Supplies in the Active Mode. EPRI. http://www.efficientpowersupplies.org. Accessed December 3, 2009.

80 PLUS Energy-Efficient Technology Solutions. http://www.80plus.org, http://www.hightech.lbl.gov. Accessed December 3, 2009.

Program Requirements for Computer Servers, Version 1.0. Energy Star, 2009.

Data Processing and Electronic Areas, Chapter 17. ASHRAE HVAC Applications, 2007.

The Green Data Center 2.0, Chapter 2, Energy-Efficient Server Technologies, 2009. http://www.processor.com/editorial/article.asp?article=articles%2Fp3008%2F32p08%2F32p08.asp. Accessed December 3, 2009.

The Green Grid, Quantitative Efficiency Analysis of Power Distribution Configurations for Data Center. http://www.thegreengrid.org/en/Global/Content/white-papers/Quantitative-Efficiency-Analysis. Accessed December 3, 2009.

Juniper Networks. Energy Efficiency for Network Equipment: Two Steps Beyond Greenwashing. http://www.juniper.net/us/en/local/pdf/whitepapers/2000284-en.pdf. Accessed December 3, 2009.

Energy Star. Enterprise Server and Data Center Energy Efficiency Initiatives. http://www.energystar.gov/datacenters. Accessed December 3, 2009.

Energy Star. Enterprise Servers for Consumers. http://www.energystar.gov/index.cfm?c=ent_servers.enterprise_servers. Accessed December 3, 2009.

    Environmental Conditions

Thermal Guidelines for Data Processing Environments, 2nd Edition, ASHRAE Datacom Series 1, 2009.

Thermal Guidelines for Data Processing Environments, TC9.9 Mission Critical Facilities, ASHRAE, 2004.

    Air Management

Thermal Guidelines for Data Processing Environments, TC9.9 Mission Critical Facilities, ASHRAE, 2004.

Data Processing and Electronic Areas, Chapter 17, ASHRAE HVAC Applications, 2003.


    Cooling Systems

Best Practices Guide for Variable Speed Pumping in Data Centers, Ernest Orlando Lawrence Berkeley National Laboratory, 2009.

    Data Processing and Electronic Areas, Chapter 17, ASHRAE HVAC Applications, 2003.

Variable-Primary-Flow Systems Revisited, Schwedler, Mick, P.E., Trane Engineers Newsletter, Volume 31, No. 4, 2002.

Thermal Guidelines for Data Processing Environments, TC9.9 Mission Critical Facilities, ASHRAE, 2004.

The Green Grid. Free Cooling Estimated Savings. http://cooling.thegreengrid.org/namerica/WEB_APP/calc_index.html. Accessed December 3, 2009.

    Supervisory Controls Strategies and Optimization, Chapter 41, ASHRAE Applications Handbook, 2003.

ARI Standard 550/590-2003, Water Chilling Packages Using the Vapor Compression Cycle, Air-Conditioning and Refrigeration Institute, 2003.

High-Density Server Cooling, Engineered Systems, Christopher Kurkjian and Doug McLellan. http://www.esmagazine.com/Articles/Cover_Story/48aac855d6ea8010VgnVCM100000932a8c0. Accessed December 3, 2009.

Processor. Throwing Water at the Heat Problem, Bruce Gain. http://www.processor.com/editorial/article.asp?article=articles%2Fp2736%2F31p36%2F31p36.asp. Accessed December 3, 2009.

Liquid Cooling Guidelines for Datacom Equipment Centers, ASHRAE Datacom Series, 2006.

IBM Heat eXchanger water-cooled rack. https://www-03.ibm.com/systems/x/hardware/options/cooling.html.

Knurr CoolTherm. http://www.knuerr.com/web/zip-pdf/en/cooltherm/Knuerr-CoolTherm-4-35kW.pdf. Accessed December 3, 2009.

Fujitsu. Water-Cooled Primecenter LC Rack. https://sp.ts.fujitsu.com/dmsp/docs/ds_primecenter_lc.pdf. Accessed December 3, 2009.

Psychrometrics, Chapter 6, ASHRAE HVAC Fundamentals Handbook, 2005.

Best Practices for Data Centers: Lessons Learned from Benchmarking 22 Data Centers, ACEEE Summer Study on Energy Efficiency in Buildings, 2006.

Seattle.Gov. Seattle Nonresidential Energy Code, Chapter 14 Mechanical Systems, Section 1433. http://www.seattle.gov/DPD/Codes/Energy_Code/Nonresidential/Chapter_14/default.asp#Section1413. Accessed December 3, 2009.

High-Performance for High-Tech Buildings. Data Center Energy Benchmarking Case Study, LBNL and Rumsey Engineers, February 2003, Facility Report 8. http://hightech.lbl.gov/dc-benchmarking-results.html. Accessed December 3, 2009.