

A White Paper from the Experts in Business-Critical Continuity™

Seven Best Practices for Increasing Efficiency, Availability and Capacity: The Enterprise Data Center Design Guide


    Table of Contents

Introduction

Seven Best Practices

Maximize the return temperature at the cooling units to improve capacity and efficiency

Sidebar: The Fan-Free Data Center

Match cooling capacity and airflow with IT loads

Utilize cooling designs that reduce energy consumption

Sidebar: Determining Economizer Benefits Based on Geography

Select a power system to optimize your availability and efficiency needs

Sidebar: Additional UPS Redundancy Configurations

Design for flexibility using scalable architectures that minimize footprint

Increase visibility, control and efficiency with data center infrastructure management

Utilize local design and service expertise to extend equipment life, reduce costs and address your data center's unique challenges

Conclusion

Enterprise Design Checklist



    Figure 1. Top data center issues as reported by the Data Center Users Group.

    Introduction

The data center is one of the most dynamic and critical operations in any business. Complexity and criticality have only increased in recent years as data centers experienced steady growth in capacity and density, straining resources and increasing the consequences of poor performance. The 2011 National Study on Data Center Downtime revealed that the mean cost for any type of data center outage is $505,502, with the average cost of a partial data center shutdown being $258,149. A full shutdown costs more than $680,000.

Because the cost of downtime is so high, availability of IT capacity is generally the most important metric on which data centers are evaluated. However, data centers today must also operate efficiently, in terms of both energy and management resources, and be flexible enough to quickly and cost-effectively adapt to changes in business strategy and computing demand. The challenges data center managers face are reflected in the key issues identified each year by the Data Center Users Group (Figure 1).

Efficiency first emerged as an issue for data center management around 2005, as server proliferation caused data center energy consumption to skyrocket at the same time electricity prices and environmental awareness were rising. The industry responded to the increased costs and environmental impact of data center energy consumption with a new focus on efficiency; however, there was no consensus as to how to address the problem. A number of vendors offered solutions, but none took a holistic approach, and some achieved efficiency gains at the expense of data center and IT equipment availability, a compromise few businesses could afford to make.

In the traditional data center, approximately one-half of the energy consumed goes to support IT equipment, with the other half used by support systems (Figure 2). Emerson Network Power conducted a systematic analysis of data center energy use and the various approaches to reducing it to determine which were most effective.

While many organizations initially focused on specific systems within the data center, Emerson took a more strategic approach. The company documented the cascade effect that occurs as efficiency improvements at the server component level are amplified through reduced demand on support systems. Using this analysis, a ten-step approach for increasing data center efficiency, called Energy Logic, was developed.


Top data center issues by year (Figure 1):

Spring 2008: 1. Heat Density; 2. Power Density; 3. Availability; 4. Adequate Monitoring; 5. Efficiency
Spring 2009: 1. Heat Density; 2. Efficiency; 3. Adequate Monitoring; 4. Availability; 5. Power Density
Spring 2010: 1. Adequate Monitoring; 2. Heat Density; 3. Availability; 4. Efficiency; 5. Power Density
Spring 2011: 1. Availability; 2. Adequate Monitoring; 3. Heat Density; 4. Efficiency; 5. Power Density


According to the analysis detailed in the Emerson Network Power white paper, Energy Logic: Reducing Data Center Energy Consumption by Creating Savings that Cascade Across Systems, 1 W of savings at the server component level can create 2.84 W of savings at the facility level.
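As a quick illustration of that cascade factor, the multiplier can be applied directly; the 100 W component-level saving below is a hypothetical example, not a figure from the paper.

```python
# Worked example of the Energy Logic cascade factor quoted above: 1 W saved at
# the server component level yields roughly 2.84 W at the facility level.
# The 100 W component-level saving is a hypothetical illustration.
CASCADE_FACTOR = 2.84

component_savings_w = 100.0
facility_savings_w = component_savings_w * CASCADE_FACTOR
print(f"{component_savings_w:.0f} W saved at the component level "
      f"-> about {facility_savings_w:.0f} W saved at the facility level")
```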

This paper builds upon several of the steps in Energy Logic to define seven best practices that serve as the foundation for data center design. These best practices provide planners and operators with a roadmap for optimizing the efficiency, availability and capacity of new and existing facilities.

Seven Best Practices for Enterprise Data Center Design

The following best practices represent proven approaches to employing cooling, power and management technologies in the quest to improve overall data center performance.

Best Practice 1: Maximize the return temperature at the cooling units to improve capacity and efficiency

Maintaining appropriate conditions in the data center requires effectively managing the air conditioning loop comprising supply and return air. The laws of thermodynamics create opportunities for computer room air conditioning systems to operate more efficiently by raising the temperature of the return air entering the cooling coils.

This best practice is based on the hot-aisle/cold-aisle rack arrangement (Figure 3), which improves cooling unit performance by reducing mixing of hot and cold air, thus enabling higher return air temperatures. The relationship between return air temperature and sensible cooling capacity is illustrated in Figure 4. It shows that a 10 degree F increase in return air temperature typically results in a 30 to 38 percent increase in cooling unit capacity.

The racks themselves provide something of a barrier between the two aisles when blanking panels are used systematically to close openings. However, even with blanking panels, hot air can leak over the top and around the sides of the row and mix with the air in the cold aisle. This becomes more of an issue as rack density increases.

To mitigate the possibility of air mixing as it returns to the cooling unit, perimeter cooling units can be placed at the end of the hot aisle, as shown in Figure 3. If the cooling units cannot be positioned at the end of the hot aisle, a drop ceiling can be used as a plenum to prevent hot air from mixing with cold air as it returns to the cooling unit.

Figure 2. IT equipment accounts for over 50 percent of the energy used in a traditional data center, while power and cooling account for an additional 48 percent.

Computing equipment (52%): processor 15%, other services 15%, server power supply 14%, storage 4%, communication equipment 4%.
Power and cooling (48%): cooling 38%, UPS 5%, MV transformer and switchgear 3%, PDU 1%, lighting 1%.




Figure 3. In the hot-aisle/cold-aisle arrangement, racks are placed in rows face-to-face, with a recommended 48-inch aisle between them. Cold air is distributed in the aisle and used by racks on both sides. Hot air is expelled at the rear of each rack into the hot aisle.

    Figure 4. Cooling units operate at higher capacity and efficiency at higher return air temperatures.

(Chart: sensible cooling capacity versus return air temperature from 70 F to 100 F (21.1 C to 37.8 C) for chilled water and direct expansion CRAC units. A 10 F higher return air temperature typically enables 30-38 percent better CRAC efficiency.)


Cooling units can also be placed in a gallery or mechanical room.

In addition to ducting and plenums, air mixing can also be prevented by applying containment and by moving cooling closer to the source of heat.

Optimizing the Aisle with Containment and Row-Based Cooling

Containment involves capping the ends of the aisle, the top of the aisle, or both to isolate the air in the aisle (Figure 5).

Cold aisle containment is favored over hot aisle containment because it is simpler to deploy and reduces risk in the event of a breach of the containment system. With hot aisle containment, open doors or missing blanking panels allow hot air to enter the cold aisle, jeopardizing the performance of IT equipment (Figure 6). In a similar scenario with the cold aisle contained, cold air leaking into the hot aisle decreases the temperature of the return air, slightly compromising efficiency but not threatening IT reliability.

Row-based cooling units can operate within the contained environment to supplement or replace perimeter cooling. This brings temperature and humidity control closer to the source of heat, allowing more precise control and reducing the energy required to move air across the room. By placing the return air intakes of the precision cooling units directly in the hot aisle, air is captured at its highest temperature and cooling efficiency is maximized.

Figure 5. The hot-aisle/cold-aisle arrangement creates the opportunity to further increase cooling unit capacity by containing the cold aisle.


Figure 6. Cold aisle containment is recommended based on simpler installation and improved reliability in the event of a breach of the containment system.

Hot Aisle Containment
Efficiency: Highest potential efficiency
Installation: Requires ducting or additional row cooling units; adding new servers puts installers in a very hot space
Reliability: Hot air can leak into the cold aisle and into the server; need to add redundant units

Cold Aisle Containment
Efficiency: Good improvement in efficiency
Installation: Easy to add to existing data centers; can work with existing fire suppression
Reliability: Leaking cold air into the hot aisle only lowers efficiency; redundancy from other floor-mount units


The possible downside of this approach is that more floor space is consumed in the aisle. Row-based cooling can be used in conjunction with traditional perimeter-based cooling in higher density zones throughout the data center.

Supplemental Capacity through Sensible Cooling

For optimum efficiency and flexibility, a cooling system architecture that supports delivery of refrigerant cooling to the rack can work in either a contained or uncontained environment. This approach allows cooling modules to be positioned at the top, on the side or at the rear of the rack, providing focused cooling precisely where it is needed while keeping return air temperatures high to optimize efficiency.

The cooling modules remove air directly from the hot aisle, minimizing both the distance the air must travel and its chances to mix with cold air (Figure 7). Rear-door cooling modules can also be employed to neutralize the heat before it enters the aisle. They achieve even greater efficiency by using the server fans for air movement, eliminating the need for fans on the cooling unit. Rear-door heat exchanger solutions are not dependent on the hot-aisle/cold-aisle rack arrangement.

Properly designed supplemental cooling has been shown to reduce cooling energy costs by 35-50 percent compared to perimeter cooling only. In addition, the same refrigerant distribution system used by these solutions can be adapted to support cooling modules mounted directly on the servers, eliminating both cooling unit fans and server fans (see the sidebar, The Fan-Free Data Center).


Figure 7. Refrigerant-based cooling modules mounted above or alongside the rack increase efficiency and allow cooling capacity to be matched to IT load.


Best Practice 2: Match cooling capacity and airflow with IT loads

The most efficient cooling system is one that matches cooling capacity to actual need. This has proven to be a challenge in the data center because cooling units are sized for peak demand, which rarely occurs in most applications. This challenge is addressed through the use of intelligent cooling controls capable of understanding, predicting and adjusting cooling capacity and airflow based on conditions within the data center. In some cases, these controls work with the technologies in Best Practice 3 to adapt cooling unit performance based on current conditions (Figure 8).

Intelligent controls enable a shift from cooling control based on return air temperature to control based on conditions at the servers, which is essential to optimizing efficiency. This often allows temperatures in the cold aisle to be raised closer to the safe operating threshold now recommended by ASHRAE (max 80.5 degrees F). According to an Emerson Network Power study, a 10 degree increase in cold aisle temperature can generate a 20 percent reduction in cooling system energy usage.

    The Fan-Free Data Center

Moving cooling units closer to the source of heat increases efficiency by reducing the amount of energy required to move air from where heat is being produced to where it is being removed.

But what if the cooling system could eliminate the need to move air at all?

This is now possible using a new generation of cooling technologies that bring data center cooling inside the rack to remove heat directly from the device producing it.

The first of these systems commercially available, the Liebert XDS, works by equipping servers with cooling plates connected to a centralized refrigerant pumping unit. Heat from the server is transferred through heat risers to the server housing and then through a thermal lining to the cooling plate. The cooling plate uses refrigerant-filled microchannel tubing to absorb the heat, eliminating the need to expel air from the rack and into the data center.

In tests by Lawrence Berkeley National Labs, this approach was found to improve energy efficiency by 14 percent compared to the next best high-efficiency cooling solution. Significant additional savings are realized through reduced server energy consumption resulting from the elimination of server fans. This can actually create a net positive effect on data center energy consumption: the cooling system decreases data center energy consumption compared with running servers with no cooling. And, with no fans on the cooling units or the servers, the data center becomes as quiet as a library.

Figure 8. Intelligent controls like the Liebert iCOM system can manage airflow and cooling capacity independently.




The control system also contributes to efficiency by allowing multiple cooling units to work together as a single system, utilizing teamwork. The control system can shift workload to units operating at peak efficiency while preventing units in different locations from working at cross-purposes. Without this type of system, a unit in one area of the data center may add humidity to the room at the same time another unit is extracting humidity from the room. The control system provides visibility into conditions across the room and the intelligence to determine whether humidification, dehumidification or no action is required to maintain conditions in the room at target levels and match airflow to the load.

For supplemental cooling modules that focus cooling on one or two racks, the control system performs a similar function by shedding fans based on the supply and return air temperatures, further improving the efficiency of supplemental cooling modules.

Best Practice 3: Utilize cooling designs that reduce energy consumption

The third step in optimizing the cooling infrastructure is to take advantage of newer technologies that use less energy than previous-generation components.

Increasing Fan Efficiency

The fans that move air and pressurize the raised floor are a significant component of cooling system energy use. On chilled water cooling units, fans are the largest consumer of energy.

Fixed-speed fans have traditionally been used in precision cooling units. Variable frequency drives represent a significant improvement over fixed-speed fans, as they enable fan speed to be adjusted based on operating conditions. Adding variable frequency drives to the fan motor of a chilled-water precision cooling unit allows the fan's speed and power draw to be reduced as load decreases, resulting in a dramatic impact on fan energy consumption. A 20 percent reduction in fan speed provides almost 50 percent savings in fan power consumption.

Electronically commutated (EC) fans may provide an even better option for increasing cooling unit efficiency. EC plug fans are inherently more efficient than traditional centrifugal fans because they eliminate belt losses, which total approximately five percent.

The EC fan typically requires a minimum 24-inch raised floor to obtain maximum operating efficiency and may not be suitable for ducted upflow cooling units where higher static pressures are required. In these cases, variable frequency drive fans are a better choice.

In independent testing of the energy consumption of EC fans compared to variable drive fans, EC fans mounted inside the cooling unit created an 18 percent savings. When EC fans were mounted outside the unit, below the raised floor, savings increased to 30 percent.




Both options save energy, can be installed on existing cooling units or specified in new units, and work with the intelligent controls described in Best Practice 2 to match cooling capacity to IT load.

    Enhancing Heat Transfer

The heat transfer process within the cooling unit also consumes energy. New microchannel coils used in condensers have proven to be more efficient at transferring heat than previous-generation coil designs. They can reduce the amount of fan power required for heat transfer, creating efficiency gains of five to eight percent for the entire system. As new cooling units are specified, verify that they take advantage of the latest advances in coil design.

    Incorporating Economizers

Economizer systems use outside air to provide free-cooling cycles for data centers. This reduces or eliminates chiller operation or compressor operation in precision cooling units, enabling economizer systems to generate cooling unit energy savings of 30 to 50 percent, depending on the average temperature and humidity conditions of the site.

A fluid-side economizer (often called water-side) works in conjunction with a heat rejection loop comprising an evaporative cooling tower or drycooler to satisfy cooling requirements. It uses outside air to aid heat rejection, but does not introduce outside air into the data center. An air-side economizer uses a system of sensors, ducts and dampers to bring outside air into the controlled environment.

The effect of outside air on data center humidity should be carefully considered when evaluating economization options. The recommended relative humidity for a data center environment is 40 to 55 percent. Introducing outside air via an air-side economizer system (Figure 9) during cold winter months can lower humidity to unacceptable levels, causing equipment-damaging electrostatic discharge. A humidifier can be used to maintain appropriate humidity levels, but that offsets some of the energy savings provided by the economizer.

Fluid-side economizer systems eliminate this problem by using the cold outside air to cool the water/glycol loop, which in turn provides fluid cold enough for the cooling coils in the air conditioning system. This keeps the outside air out of the controlled environment and eliminates the need to condition that air. For that reason, fluid-side economizers are preferred for data center environments (see Figure 10).

Figure 9. Liebert CW precision cooling units utilize outside air to improve cooling efficiency when conditions are optimal.



Determining Economizer Benefits Based on Geography

Economizers obviously deliver greater savings in areas where temperatures are lower. Yet, designed properly, they can deliver significant savings in warmer climates as well.

Plotting weather data versus outdoor wet-bulb temperature allows the hours of operation for a fluid-side economizer with an open cooling tower to be predicted for a given geography. If the water temperature leaving the chiller is 45 degrees F, full economization can be achieved when the ambient wet-bulb temperature is below 35 degrees F. Partial economizer operation occurs between 35 and 43 degrees F wet-bulb temperature.
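A minimal sketch of that estimate, using the thresholds quoted above for a 45 degrees F leaving water temperature; the sample wet-bulb readings and the function name are illustrative placeholders, not data from the paper.

```python
# Bin hourly wet-bulb temperatures into full, partial, and no economization
# using the thresholds quoted above (full below 35 F, partial 35-43 F for a
# 45 F leaving water temperature). The sample readings are placeholders; a
# real analysis would use a full year of local weather data.
def economizer_hours(hourly_wet_bulb_f, full_below_f=35.0, partial_below_f=43.0):
    full = sum(1 for t in hourly_wet_bulb_f if t < full_below_f)
    partial = sum(1 for t in hourly_wet_bulb_f if full_below_f <= t < partial_below_f)
    return full, partial, len(hourly_wet_bulb_f) - full - partial

sample_wet_bulb_f = [28.0, 34.5, 39.0, 42.5, 47.0, 62.0]  # placeholder readings
full, partial, none = economizer_hours(sample_wet_bulb_f)
total = len(sample_wet_bulb_f)
print(f"full: {full / total:.0%}, partial: {partial / total:.0%}, none: {none / total:.0%}")
```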

In Chicago, conditions that enable full economization occur 27 percent of the year, with an additional 16 percent of the year supporting partial economization. If the water temperature leaving the chiller is increased to 55 degrees F, full economization can occur 43 percent of the year, with partial operation occurring an additional 21 percent of the year. This saves nearly 50 percent in cooling unit energy consumption.

In Atlanta, full economizer operation is possible 11 percent of the year and partial operation 14 percent with a leaving water temperature of 45 degrees F. When the leaving water temperature is increased to 55 degrees F, full economization is available 25 percent of the year and partial economizer operation is available an additional 25 percent of the year. This creates cooling energy savings of up to 43 percent.

Even in a climate as warm as Phoenix, energy savings are possible. With a leaving water temperature of 55 degrees F, full economization is possible 15.3 percent of the time, with partial economization available 52.5 percent of the time.

For a more detailed analysis of economization savings by geographic location, see the Emerson Network Power white paper Economizer Fundamentals: Smart Approaches to Energy-Efficient Free Cooling for Data Centers.


Figure 10. Air economizers are efficient but are more limited in their application and may require additional equipment to control humidity and contamination.

Air-Side Economization
Pros: Very efficient in some climates
Cons: Limited to moderate climates; complexity during change-over; humidity control can be a challenge (vapor barrier is compromised); dust, pollen and gaseous contamination sensors are required; hard to implement in high-density applications

Water-Side Economization
Pros: Can be used in any climate; can retrofit to current sites
Cons: Maintenance complexity; complexity during change-over; piping and control more complex; risk of pitting coils if untreated stagnant water sits in econo-coils


Best Practice 4: Select a power system to optimize your availability and efficiency needs

There are many options to consider in the area of power system design that affect efficiency, availability and scalability. In most cases, availability and scalability are the primary considerations. The data center is directly dependent on the critical power system, and electrical disturbances can have disastrous consequences in the form of increased downtime. In addition, a poorly designed system can limit expansion. Relative to other infrastructure systems, the power system consumes significantly less energy, and efficiency can be enhanced through new control options.

Data center professionals have long recognized that while every data center aspires to 100 percent availability, not every business is positioned to make the investments required to achieve that goal. The Uptime Institute defined four tiers of data center availability (which encompass the entire data center infrastructure of power and cooling) to help guide decisions in this area (Figure 11). Factors to consider related specifically to AC power include UPS design, module-level redundancy and power distribution design.

    UPS Design

There is growing interest in using transformer-free UPS modules in three-phase critical power applications. Large transformer-free UPS systems are typically constructed of smaller, modular building blocks that deliver high power at a lighter weight, with a smaller footprint and higher full load efficiency. In addition, some transformer-free UPS modules offer new scalability options that allow UPS modules and UPS systems to be paralleled to enable the power system to grow in a more flexible manner with simple paralleling methods, or internally scalable or modular designs.


Figure 11. The Uptime Institute defines four tiers of data center infrastructure availability to help organizations determine the level of investment required to achieve desired availability levels.

Tier I: Basic Data Center. Single path for power and cooling distribution without redundant components. May or may not have a UPS, raised floor or generator. Availability supported: 99.671%.

Tier II: Redundant Components. Single path for power and cooling distribution with redundant components. Will have a raised floor, UPS and generator, but the capacity design is N+1 with a single-wired distribution path throughout. Availability supported: 99.741%.

Tier III: Concurrently Maintainable. Multiple active power and cooling distribution paths, but only one path is active. Has redundant components and is concurrently maintainable. Sufficient capacity and distribution must be present to simultaneously carry the load on one path while performing maintenance on the other path. Availability supported: 99.982%.

Tier IV: Fault Tolerant. Provides infrastructure capacity and capability to permit any planned activity without disruption to the critical load. Infrastructure design can sustain at least one worst-case, unplanned failure or event with no critical load impact. Availability supported: 99.995%.
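One way to read these percentages is as allowable downtime per year; a quick conversion using 8,760 hours per year is sketched below.

```python
# Convert the Figure 11 availability percentages into approximate allowable
# downtime per year (8,760 hours), a common way to compare the tiers.
TIER_AVAILABILITY = {
    "Tier I":   0.99671,
    "Tier II":  0.99741,
    "Tier III": 0.99982,
    "Tier IV":  0.99995,
}
HOURS_PER_YEAR = 8760

for tier, availability in TIER_AVAILABILITY.items():
    downtime_h = (1.0 - availability) * HOURS_PER_YEAR
    print(f"{tier}: {availability:.3%} availability ~ {downtime_h:.1f} hours of downtime per year")
```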


Professionals who value full load efficiency and scalability above all other attributes may consider a power system design based on a transformer-free UPS. However, some transformer-free UPS designs utilize high component counts and extensive use of fuses and contactors compared to traditional transformer-based UPS, which can result in lower Mean Time Between Failures (MTBF), higher service rates and lower overall system availability.

For critical applications where maximizing availability is more important than achieving efficiency improvements in the power system, a state-of-the-art transformer-based UPS ensures the highest availability and robustness for mission-critical facilities. Transformers within the UPS provide fault and galvanic isolation as well as useful options for power distribution. Transformers serve as an impedance to limit arc flash potential within the UPS itself and in some cases within the downstream AC distribution system. Transformers also help to isolate faults to prevent them from propagating throughout the electrical distribution system.

Selecting the best UPS topology for a data center depends on multiple factors, such as country location, voltage, power quality, efficiency needs, availability demands and fault management, as well as other factors (Figure 12). A critical power infrastructure supplier who specializes in both designs is ideally suited to propose the optimal choice based on your unique needs.

UPS System Configurations

A variety of UPS system configurations are available to achieve the higher levels of availability defined in the Uptime Institute classification of data center tiers.

Tier IV data centers generally use a minimum of 2 (N + 1) systems that support a dual-bus architecture to eliminate single points of failure across the entire power distribution system (Figure 13). This approach includes two or more independent UPS systems, each capable of carrying the entire load with N capacity after any single failure within the electrical infrastructure. Each system provides power to its own independent distribution network, allowing 100 percent concurrent maintenance and bringing power system redundancy as close to the IT equipment input terminals as possible. This approach achieves the highest availability but may compromise UPS efficiency at low loads and is more complex to scale than other configurations.


Figure 12. Comparison of transformer-based and transformer-free UPS systems. For more on VFD mode and other UPS operating modes, see the Emerson white paper, UPS Operating Modes: A Global Standard.

Characteristics compared for transformer-free versus transformer-based designs (a "+" in the original figure marks the design holding the advantage for each row): Fault Management, Low Component Count, Robustness, Input/DC/Output Isolation, Scalability, In the Room/Row.
Double Conversion Efficiency: up to 96% (transformer-free) versus up to 94% (transformer-based).
VFD (Eco-Mode) Efficiency: up to 99% (transformer-free) versus up to 98% (transformer-based).



For other critical facilities, a parallel redundant configuration, such as the N + 1 architecture (in which N is the number of UPS units required to support the load and +1 is an additional unit for redundancy), is a good choice for balancing availability, cost and scalability (Figure 14). UPS units should be sized to limit the total number of modules in the system to reduce the risk of module failure. In statistical analysis of N + 1 systems, a 1 + 1 design has the highest data center availability, but if there is a need for larger or more scalable data center power systems, a system with up to 4 UPS cores (3 + 1) has greater availability than a single unit and still provides the benefits of scalability (Figure 15).
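A minimal model of why this is so is sketched below. It assumes independent modules and hypothetical MTBF and repair-time values; it illustrates the trend shown in Figure 15 and is not a reproduction of the statistical analysis referenced above.

```python
# Illustrative model of N + 1 availability versus module count (the trend in
# Figure 15). Assumes independent modules, a hypothetical 10-year module MTBF
# and a 24-hour mean time to repair; not the paper's own analysis.
from math import comb

MTBF_H = 10 * 8760               # assumed module MTBF, in hours
MTTR_H = 24                      # assumed mean time to repair, in hours
a = MTBF_H / (MTBF_H + MTTR_H)   # availability of a single module

def n_plus_1_availability(n: int) -> float:
    """Probability that at least n of n + 1 independent modules are available."""
    m = n + 1
    return sum(comb(m, k) * a**k * (1 - a)**(m - k) for k in range(n, m + 1))

for n in (1, 2, 3, 5, 10):
    print(f"{n}+1 system availability: {n_plus_1_availability(n):.8f}")
```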


Figure 13. Typical Tier IV high-availability power system configuration with dual power path to the server.

(Diagram: two mirrored power paths. Each path runs from a utility AC power input through TVSS, service entrance switchgear, power control switchgear and UPS input switchgear to UPS modules backed by DC energy sources, with N+1 generators connected through an ATS and building distribution switchgear, plus system control and bypass cabinets, computer load switchboards and PDUs feeding dual-cord loads.)


There are also other parallel configurations available, outlined in the sidebar, Additional UPS Redundancy Configurations.

UPS Efficiency Options

Today's high-availability double-conversion UPS systems can achieve efficiency levels similar to less robust designs through the use of advanced efficiency controls.

Approximately 4-6 percent of the energy passing through a double-conversion UPS is used in the conversion process. This has traditionally been accepted as a reasonable price to pay for the protection provided by the UPS system, but with new high-efficiency options the conversion process can be bypassed, and efficiency increased, when data center criticality is not as great or when utility power is known to be of the highest quality.

Here's how it works: the UPS systems incorporate an automatic static-switch bypass that operates at very high speeds to provide a break-free transfer of the load to a utility or backup system to enable maintenance and ensure uninterrupted power in the event of severe overload or instantaneous loss of bus voltage. The transfer is accomplished in under 4 ms to prevent any interruption that could shut down IT equipment. Using advanced intelligent controls, the bypass switch can be kept closed, bypassing the normal AC-DC-AC conversion process while the UPS monitors bypass power quality. When the UPS senses power quality falling outside accepted standards, the bypass opens and transfers power back to the inverter so anomalies can be corrected.

    Figure 14. N + 1 UPS system configuration.

(Diagram: three UPS cores and a static switch (SS) feeding common paralleling switchgear.)


Figure 15. Increasing the number of UPS modules in an N + 1 system increases the risk of failure. (Chart: relative availability versus system configuration, from 1+1 through 13+1, plotted for module MTBF values of 10, 15 and 20 years; the 1+1 configuration maximizes reliability.)


To work successfully, the inverter must be kept in a constant state of preparedness to accept the load and thus needs control power. The power requirement is below 2 percent of the rated power, creating potential savings of 4-4.5 percent compared with traditional operating modes.

For more on UPS operating modes, please see the Emerson white paper titled UPS Operating Modes: A Global Standard.

Another newer function enabled by UPS controls is intelligent paralleling, which improves the efficiency of redundant UPS systems by deactivating UPS modules that are not required to support the load and taking advantage of the inherent efficiency improvement available at higher loads. For example, a multi-module UPS system configured to support a 500 kVA load using three 250 kVA UPS modules can support loads below 400 kVA with only two modules while maintaining redundancy and improving the efficiency of the UPS by enabling it to operate at a higher load. This feature is particularly useful for data centers that experience extended periods of low demand, such as a corporate data center operating at low capacity on weekends and holidays (Figure 16).

    High-Voltage Distribution

There may also be opportunities to increase efficiency in the distribution system by distributing higher voltage power to IT equipment. A stepdown from 480 V to 208 V in the traditional power distribution architecture introduces losses that can be eliminated using an approach that distributes power in a 4-wire wye configuration at an elevated voltage, typically 415/240 V. In this case, IT equipment is powered from phase-to-neutral voltage rather than phase-to-phase. The server power supply receives 240 V power, which may improve the operating efficiencies of the server power supplies in addition to the lower losses in the AC distribution system.

Figure 16. Firmware intelligence for intelligent paralleling increases the efficiency of a multi-module system by idling unneeded inverters or whole modules and increases capacity on demand. (Diagram: three units at 25 percent load each yield 91.5 percent efficiency; idling the stand-by unit and running two units at 38 percent load each yields 93.5 percent efficiency. Operations can be scheduled based on load or distributed to all units.)



However, this higher efficiency comes at the expense of higher available fault currents and the added cost of running the neutral wire throughout the electrical system. For more information on high-voltage power distribution, refer to the Emerson Network Power white paper Balancing the Highest Efficiency with the Best Performance for Protecting the Critical Load.
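For reference, the voltage relationships behind the 415/240 V approach follow directly from three-phase wye arithmetic: the phase-to-neutral voltage equals the line-to-line voltage divided by the square root of three, as the short check below shows.

```python
# Phase-to-neutral voltages for common line-to-line distribution voltages in a
# 4-wire wye system: V_phase_to_neutral = V_line_to_line / sqrt(3).
from math import sqrt

for line_to_line in (480.0, 415.0, 208.0):
    phase_to_neutral = line_to_line / sqrt(3)
    print(f"{line_to_line:.0f} V line-to-line -> {phase_to_neutral:.0f} V phase-to-neutral")
# 415 V line-to-line yields roughly 240 V at the server power supply, avoiding
# the 480 V to 208 V stepdown transformer of the traditional architecture.
```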

Additional UPS Redundancy Configurations

The N+1 architecture is the most common approach for achieving module-level redundancy. However, there are other variations that can prove effective.

A Catcher Dual Bus UPS configuration (a variation of Isolated Redundant) is a method to incrementally add dual-bus performance to existing, multiple, single-bus systems. In this configuration, the static transfer switch (STS) switches to the redundant catcher UPS in the event of a failure of a single UPS system on a single bus. This is a good option for high-density environments with widely varying load requirements (Figure 17).

The 1+N architecture, common in Europe, is also becoming more popular globally. This is sometimes referred to as a Distributed Static Switch system, where each UPS has its own static switch built into the module to enable UPS redundancy (Figure 18). When comparing N+1 and 1+N designs, it is important to recognize that an N+1 design uses a single large static switch that must be sized initially for end-state growth requirements, while 1+N systems require that the switchgear be sized for end-state growth requirements. However, in 1+N systems the use of distributed static switches, rather than the single large switch, reduces initial capital costs.

An experienced data center specialist should be consulted to ensure the selected configuration meets requirements for availability and scalability. There are tradeoffs between the two designs, such as cost, maintenance and robustness, which need to be understood and applied correctly.

Figure 17. Catcher dual-bus UPS system configuration. (Diagram: UPS 1, UPS 2 and UPS 3 each feed a PDU through a static transfer switch; UPS 4 serves as the catcher unit backing up the buses.)

    Figure 18. 1+N UPS redundancy configuration.

(Diagram: three UPS cores, each with its own built-in static switch (SS), feeding common paralleling switchgear.)



Best Practice 5: Design for flexibility using scalable architectures that minimize footprint

One of the most important challenges that must be addressed in any data center design project is configuring systems to meet current requirements while ensuring the ability to adapt to future demands. In the past, this was accomplished by oversizing infrastructure systems and letting the data center grow into its infrastructure over time. That no longer works because it is inefficient in terms of both capital and energy costs. The new generation of infrastructure systems is designed for greater scalability, enabling systems to be right-sized during the design phase without risk.

Some UPS systems now enable modularity within the UPS module itself (vertical), across modules (horizontal) and across systems (orthogonal). Building on these highly scalable designs allows a system to scale from individual 200-1,200 kW modules to a multi-module system capable of supporting up to 5 MW.

The power distribution system also plays a significant role in scalability. Legacy power distribution used an approach in which the UPS fed a required number of power distribution units (PDUs), which then distributed power directly to equipment in the rack. This was adequate when the number of racks and servers was relatively low, but today, with the number of devices that must be supported, breaker space would be expended long before system capacity is reached.

Two-stage power distribution creates the scalability and flexibility required. In this approach, distribution is compartmentalized between the UPS and the server to enable greater flexibility and scalability (Figure 19). The first stage of the two-stage system provides mid-level distribution.

Figure 19. A two-tier power distribution system provides scalability and flexibility when adding power to racks. (Diagram: the utility input feeds the UPS system, which feeds stage one distribution; stage one feeds stage two distribution units, which connect to additional load distribution units as needed.)



The mid-level distribution unit includes most of the components that exist in a traditional PDU, but with an optimized mix of circuit and branch level distribution breakers. It typically receives 480 V or 600 V power from the UPS, but instead of doing direct load-level distribution, it feeds floor-mounted load-level distribution units. The floor-mounted remote panels provide the flexibility to add plug-in output breakers of different ratings as needed.

Rack-level flexibility can also be considered. Racks should be able to quickly adapt to changing equipment requirements and increasing densities. Rack PDUs increase power distribution flexibility within the rack and can also enable improved control by providing continuous measurement of volts, amps and watts being delivered through each receptacle. This provides greater visibility into increased power utilization driven by virtualization and consolidation. It can also be used for charge-backs, to identify unused rack equipment drawing power, and to help quantify data center efficiency.
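A minimal sketch of how receptacle-level readings might be rolled up for charge-back and used to flag unused equipment; the field names, the 5 W idle threshold and the sample readings are assumptions for illustration only.

```python
# Roll up hypothetical per-receptacle rack PDU readings for charge-back and
# flag receptacles that appear to power unused equipment. Field names, the
# idle threshold and the sample data are illustrative assumptions.
IDLE_WATTS = 5.0

readings = [
    {"rack": "A01", "receptacle": 1, "watts": 312.0},
    {"rack": "A01", "receptacle": 2, "watts": 2.1},
    {"rack": "A02", "receptacle": 1, "watts": 188.5},
]

per_rack = {}
for r in readings:
    per_rack[r["rack"]] = per_rack.get(r["rack"], 0.0) + r["watts"]
    if r["watts"] < IDLE_WATTS:
        print(f"Rack {r['rack']}, receptacle {r['receptacle']}: "
              f"{r['watts']:.1f} W, possible unused equipment")

for rack, watts in sorted(per_rack.items()):
    print(f"Rack {rack}: {watts:.1f} W total draw")
```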

Alternatively, a busway can be used to support distribution to the rack. The busway runs across the top of the row or below the raised floor. When run above the rack, the busway gets power distribution cabling out from under the raised floor, eliminating obstacles to cold air distribution. The busway provides the flexibility to add or modify rack layouts and change receptacle requirements without risking power system downtime. While still relatively new to the data center, busway distribution has proven to be an effective option that makes it easy to reconfigure and add power for new equipment.

Best Practice 6: Enable data center infrastructure management and monitoring to improve capacity, efficiency and availability

Data center managers have sometimes been flying blind, lacking visibility into the system performance required to optimize efficiency, capacity and availability. Availability monitoring and control has historically been used by leading organizations, but managing the holistic operations of IT and facilities has lagged. This is changing as new data center management platforms emerge that bring together operating data from IT, power and cooling systems to provide unparalleled real-time visibility into operations (Figure 20).

The foundation for data center infrastructure management is an instrumentation platform that enables monitoring and control of physical assets (Figure 21). Power and cooling systems should have instrumentation integrated into them, and these systems can be supplemented with additional sensors and controls to enable a centralized and comprehensive view of infrastructure systems.

At the UPS level, monitoring provides continuous visibility into system status, capacity, voltages, battery status and service events. Power monitoring should also be deployed at the branch circuit, power distribution unit and within the rack. Dedicated battery monitoring is particularly critical to preventing outages. According to Emerson Network Power's Liebert Services business, battery failure is the number one cause of UPS system dropped loads. A dedicated battery monitoring system that continuously tracks internal resistance within each battery provides the ability to predict and report batteries approaching end-of-life to enable proactive replacement prior to failure.
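The rule such a monitor applies can be sketched simply; the 25 percent rise threshold and the sample readings below are assumptions for illustration, not vendor criteria.

```python
# Flag batteries whose internal resistance has risen well above baseline,
# the basic rule behind dedicated battery monitoring. The threshold and
# sample values are illustrative assumptions.
RESISTANCE_RISE_LIMIT = 0.25   # flag at 25% above baseline (assumed)

baseline_mohm = {"battery-01": 4.1, "battery-02": 4.0, "battery-03": 4.2}
latest_mohm = {"battery-01": 4.3, "battery-02": 5.4, "battery-03": 4.2}

for battery, base in baseline_mohm.items():
    rise = (latest_mohm[battery] - base) / base
    if rise > RESISTANCE_RISE_LIMIT:
        print(f"{battery}: internal resistance up {rise:.0%}, schedule replacement")
```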



Installing a network of temperature sensors across the data center can be a valuable supplement to the supply and return air temperature data supplied by cooling units. By sensing temperatures at multiple locations, the airflow and cooling capacity can be more precisely controlled, resulting in more efficient operation.

Leak detection should also be considered as part of a comprehensive data center monitoring program. Using strategically located sensors, these systems provide early warning of potentially disastrous leaks across the data center from glycol pipes, humidification pipes, condensate pumps and drains, overhead piping, and unit and ceiling drip pans.

Communication with a management system or with other devices is provided through interfaces that deliver Ethernet connectivity and SNMP and telnet communications, as well as integration with building management systems through Modbus and BACnet.

Figure 20. True Data Center Infrastructure Management (DCIM) optimizes all subsystems of data center operations holistically. (Diagram: infrastructure management spanning the power system, cooling system, distributed infrastructure, the data center room and IT devices.)

Figure 21. Examples of collection points of an instrumented infrastructure, the first step in DCIM enablement: server controls, KVM switches, UPS web cards, power meters, leak detection, battery monitors, managed rack PDUs, cooling controls and temperature sensors.


When infrastructure data is consolidated into a central management platform, real-time operating data for systems across the data center can drive improvements in data center performance:

Improve availability: The ability to receive immediate notification of a failure, or an event that could ultimately lead to a failure, allows faster, more effective response to system problems. Taken a step further, data from the monitoring system can be used to analyze equipment operating trends and develop more effective preventive maintenance programs. Finally, the visibility and dynamic control of data center infrastructure provided by the monitoring can help prevent failures created by changing operating conditions. For example, the ability to turn off receptacles in a rack that is maxed out on power, but may still have physical space, can prevent a circuit overload. Alternatively, viewing a steady rise in server inlet temperatures may dictate the need for an additional row cooling unit before overheating brings down the servers.

Increase efficiency: Monitoring power at the facility, row, rack and device level provides the ability to more efficiently load power supplies and dynamically manage cooling. Greater visibility into infrastructure efficiency can drive informed decisions around the balance between efficiency and availability. In addition, the ability to automate data collection, consolidation and analysis allows data center staff to focus on more strategic IT issues.

Manage capacity: Effective demand forecasting and capacity planning have become critical to effective data center management. Data center infrastructure monitoring can help identify and quantify patterns impacting data center capacity. With continuous visibility into system capacity and performance, organizations are better equipped to recalibrate and optimize the utilization of infrastructure systems (without stretching them to the point where reliability suffers) as well as release stranded capacity.

DCIM technologies are evolving rapidly. Next-generation systems will begin to provide a true unified view of data center operations that integrates data from IT and infrastructure systems. As this is accomplished, a truly holistic data center can be achieved.

Best Practice 7: Utilize local design and service expertise to extend equipment life, reduce costs and address your data center's unique challenges

While best practices in optimizing availability, efficiency and capacity have emerged, there are significant differences in how these practices should be applied based on specific site conditions, budgets and business requirements. A data center specialist can be instrumental in helping apply best practices and technologies in the way that makes the most sense for your business, and should be consulted on all new builds and major expansions/upgrades.

For established facilities, preventive maintenance has proven to increase system reliability, while data center assessments can help identify vulnerabilities and inefficiencies resulting from constant change within the data center.

Emerson Network Power analyzed data from 185 million operating hours for more than 5,000 three-phase UPS units operating in the data center. The study found that the UPS Mean Time Between Failures (MTBF) for units that received two preventive service events a year is 23 times higher than for a machine with no preventive service events per year.




Preventive maintenance programs should be supplemented by periodic data center assessments. An assessment will help identify, evaluate and resolve power and cooling vulnerabilities that could adversely affect operations. A comprehensive assessment includes both thermal and electrical assessments, although each can be provided independently to address specific concerns.

Taking temperature readings at critical points is the first step in the thermal assessment and can identify hot spots and resolve problems that could result in equipment degradation. Readings will determine whether heat is successfully being removed from heat-generating equipment, including blade servers. These readings are supplemented by infrared inspections and airflow measurements. Cooling unit performance is also evaluated to ensure units are performing properly. Computational Fluid Dynamics (CFD) is also used to analyze airflow within the data center.

The electrical assessment includes a single-point-of-failure analysis to identify critical failure points in the electrical system. It also documents the switchgear capacity and the current draw from all UPS equipment and breakers, as well as the load per rack.

    Conclusion

The last ten years have been tumultuous within the data center industry. Facilities are expected to deliver more computing capacity while increasing efficiency, eliminating downtime and adapting to constant change. Infrastructure technologies evolved throughout this period as they adapted to higher density equipment and the need for greater efficiency and control.

The rapid pace of change caused many data center managers to take a wait-and-see attitude toward new technologies and practices. That was a wise strategy several years ago, but today those technologies have matured and the need for improved data center performance can no longer be ignored. Proper deployment of the practices discussed can deliver immediate TCO improvements, from capital savings to significant energy efficiency gains to easier adaptation of computing resources.

In the cooling system, traditional technologies now work with newer technologies to support higher efficiencies and capacities. Raising the return air temperature improves capacity and efficiency, while intelligent controls and high-efficiency components allow airflow and cooling capacity to be matched to dynamic IT loads.

In the power system, high-efficiency options work within proven system configurations to enhance efficiency while maintaining availability. Power distribution technologies provide increased flexibility to accommodate new equipment, while delivering the visibility into power consumption required to measure efficiency.

Most importantly, a new generation of infrastructure management technologies is emerging that bridges the gap between facilities and IT systems, and provides centralized control of the data center.

Working with data center design and service professionals to implement these best practices, and modify them based on changing conditions in the data center, creates the foundation for a data center in which availability, efficiency and capacity can all be optimized in ways that simply weren't possible five years ago.



    Data Center Design Checklist

Use this checklist to assess your own data center based on the seven best practices outlined in this paper.

Maximize the return temperature at the cooling units to improve capacity and efficiency
Increase the temperature of the air being returned to the cooling system by using the hot-aisle/cold-aisle rack arrangement and containing the cold aisle to prevent mixing of air. Perimeter cooling systems can be supported by row and rack cooling to support higher densities and achieve greater efficiency.

Match cooling capacity and airflow with IT loads
Use intelligent controls to enable individual cooling units to work together as a team and support more precise control of airflow based on server inlet and return air temperatures.

Utilize cooling designs that reduce energy consumption
Take advantage of energy-efficient components to reduce cooling system energy use, including variable-speed and EC plug fans, microchannel condenser coils and proper economizers.

Select a power system to optimize your availability and efficiency needs
Achieve required levels of power system availability and scalability by using the right UPS design in a redundant configuration that meets availability requirements. Use energy optimization features when appropriate and intelligent paralleling in redundant configurations.

Design for flexibility using scalable architectures that minimize footprint
Create a growth plan for power and cooling systems during the design phase. Consider vertical, horizontal and orthogonal scalability for the UPS system. Employ two-stage power distribution and a modular approach to cooling.

Enable data center infrastructure management and monitoring to improve capacity, efficiency and availability
Enable remote management and monitoring of all physical systems and bring data from these systems together through a centralized data center infrastructure management platform.

Utilize local design and service expertise to extend equipment life, reduce costs and address your data center's unique challenges
Consult with experienced data center support specialists before designing or expanding, and conduct timely preventive maintenance supplemented by periodic thermal and electrical assessments.



Emerson Network Power.
The global leader in enabling Business-Critical Continuity™. EmersonNetworkPower.com

    AC Power

    Connectivity

    DC Power

    Embedded Computing

    Embedded Power

    Infrastructure Management & Monitoring

    Outside Plant

    Power Switching & Controls

    Precision Cooling

    Racks & Integrated Cabinets

    Services

    Surge Protection

    Emerson Network Power

    1050 Dearborn Drive

    P.O. Box 29186

    Columbus, Ohio 43229

    800.877.9222 (U.S. & Canada Only)

    614.888.0246 (Outside U.S.)

    Fax: 614.841.6022

    EmersonNetworkPower.com

    Liebert.com

While every precaution has been taken to ensure accuracy and completeness in this literature, Liebert Corporation assumes no responsibility, and disclaims all liability for damages resulting from use of this information or for any errors or omissions.

© 2012 Liebert Corporation. All rights reserved throughout the world. Specifications subject to change without notice. All names referred to are trademarks or registered trademarks of their respective owners. Liebert and the Liebert logo are registered trademarks of the Liebert Corporation. Business-Critical Continuity, Emerson Network Power and the Emerson Network Power logo are trademarks and service marks of Emerson Electric Co. © 2012 Emerson Electric Co.

SL-24664 R10-12 Printed in USA