
missioncriticalpower.uk

ISSUE 12: August 2017

12 Ian Bitterlin identifies the weakest links in the power chain and offers a stark warning on the risks

08 Is it time for data centres to change their mindset and improve their green credentials?

22 Virtual reality and the data centre: could it support the next revolution in IT infrastructure?

Brewing trouble with a poor power factor? See page 14


IN THIS ISSUE

8 Energy efficiency: How green is your data centre? A change of mindset is needed, says the Green Grid
12 Viewpoint: Ian Bitterlin: what is the weakest link in your power chain?
14 Front cover: ABB on why a poor power factor is brewing business trouble
18 Infrastructure: Life on the edge and the need for resilience: Equinix MD discusses key data centre issues
22 Data centre design: Virtual reality: the next data centre revolution?
32 Training: The skills shortage: a perfect storm of complex issues?
42 Optimisation: ‘Data centres anonymous’ and the myth of the perfect site

Comment 4
News 6
Energy Efficiency 8
Cooling & Air Movement 16
Data Centre Design 22
Power Storage 25
Demand-side Response 30
Modular Solutions 34
Testing & Inspection 38
Data Centre Optimisation 42
Products 46
Q&A 50

To subscribe please contact: missioncriticalpower.uk/subscribe

COMMENT

Power to the people...

The UK has a long and impressive history of innovation in power engineering. In 1881 two electricians built the world’s first power station at Godalming in Surrey. The station employed two waterwheels to produce an alternating current that was used to supply seven Siemens arc lamps at 250 volts and 34 incandescent lamps at 40 volts. Winding forward to 2017, the UK has thrown down the gauntlet, once again, announcing its intention to lead the world in power engineering – specifically through power storage innovation.

The government is putting its money where its mouth is and pledging to invest £246m into battery technology to ensure the UK builds on its strengths and “leads the world in the design, development and manufacture of electric batteries”.

The government’s commitment will also include putting £45m towards establishing a battery research centre in a bid to bring down the costs of energy storage.

All this is good news for mission critical industries that rely heavily on battery technology to keep their operations resilient and efficient. It will also support technologies that could provide businesses with additional revenue streams through participation in demand-side response schemes, while supporting carbon reductions through renewables integration.

The Green Grid is calling on data centres to re-evaluate their sustainability practices, by reviewing how they can make their data centres more energy-efficient and determining how they can best use renewable energy to power their facilities. Battery technology will be an important piece in this puzzle.

In this issue, Riello’s Leo Craig highlights that more work needs to be done to reassure mission critical businesses that emergency back-up systems can be used in a demand response capacity without adding risk. Data centre operators such as Equinix have commented that, to date, the financial rewards have not provided sufficient incentive to come on board with DSR – something the UK government will need to address if it is to encourage greater participation.

However, Craig points out that the UK has more than 4GW of stored power in UPS units and this valuable, additional resource “could and should be exploited to help avert a capacity crisis”.

Ultimately, battery technology is the key to unlocking this potential: businesses can only consider UPS energy storage as a demand response option if their UPS is powered by lithium-ion batteries.

In the long run, the people of the UK stand to benefit from greater long-term security of electricity supply, fewer outages and protection from major price hikes. Whether your business depends on it or you are simply one of the many consumers who would feel the pinch in an energy crisis, a secure and sustainable energy supply is something we all hope for.

Louise Frampton, editor

Editor: Louise Frampton, [email protected], t: 020 3409 2043, m: 07824 317819

Managing Editor: Tim, [email protected]

Production: Paul, [email protected], m: 07790 434813

Sales director: Steve, [email protected], t: 020 3714 4451, m: 07818 574300

Commercial manager: Daniel Coyne, [email protected], t: 020 3751 7863, m: 07557 109476

Circulation: [email protected]

Energyst Media Ltd, PO BOX 420, Reigate, Surrey RH2 2DU

Registered in England & Wales – 8667229. Registered at Stationers Hall – ISSN 0964 8321. Printed by Warners (Midlands) plc

No part of this publication may be reproduced without the written permission of the publishers. The opinions expressed in this publication are not necessarily those of the publishers. Mission Critical Power is a controlled circulation magazine available to selected professionals interested in energy, who fall within the publisher’s terms of control. For those outside these terms, the annual subscription is £60 including postage in the UK. For all subscriptions outside the UK, the annual subscription is £120 including postage.



NEWS & COMMENT

KAO's Harlow data centre powers up for 2017 completion

KAO Data Campus in Harlow, the £200m science and technology data centre development at the heart of the London-Stansted-Cambridge corridor, has announced the full energisation of its data centre campus nearly two months ahead of schedule.

Paul Finch, chief operating officer at KAO Data, commented: "Taking power out of the critical path de-risks the delivery, positioning the project for further success. It is also an important step towards the realisation of phase one of the development... Energisation of the site means that installation of engineering infrastructure can go ahead without hindrance."

KAO Data Park has secured a 43.5MVA power supply served by its own UKPN adoptable substation. The company has made a significant investment in the power train, with three primary transformers in N+1 configuration independently fed from the primary grid to ensure maximum resilience for the site.

Finch added: "Energisation of the site is a serious tick in the box for any data centre owner and operator. We can now proceed through levels one to five of the commissioning process to keep the facility on target for its practical completion by December this year.

"Keeping this project on time and on budget is to the credit of a very accomplished and dedicated professional and contracting team. Particular credit goes to Matrix Networks in collaboration with JCA Engineering."

With power secured and available to cover the capacity requirements of the entire campus, KAO Data is about to enter negotiations for a power purchase agreement to supply 100% renewable energy for customers who wish their data centre operations to be based on a zero-carbon platform, bolstering their sustainability credentials at each of KAO's 16 data halls.

Schneider Electric, the global specialist in energy management and automation, has also signed an agreement with KAO for the use of its StruxureWare for Data Centers DCIM suite.

The data centre management software enables power management software (PMS), energy management software (EMS) and building management software (BMS) to be integrated with a DCIM overlay, providing data centre customers with a range of services from IT asset management to power use monitoring and intelligence down to branch circuits.

Tanuja Randery, Schneider Electric UK and Ireland country president, explained that the use of StruxureWare will "help KAO meet its goals for efficiency and reliability, as well as enhancing their customers' experience".

Matthew Baynes, colocation and telco segment director at Schneider Electric, added that the StruxureWare for Data Centers solution eliminates integration costs and time, reduces risk, and simplifies commissioning and operations. "Importantly, this will deliver value to KAO customers in the colocation space," he commented.

KAO is currently advanced in the construction phase of its DC1 data centre. At launch, one technology suite capable of delivering 2.2MW, with immediate capacity for 442 racks of up to 58U, will be available.

When complete, the campus will comprise four data centres, each with four technology suites, using indirect evaporative cooling for increased efficiency.

KAO has 'energised' ahead of schedule at its campus

GE and Burland Energy partner to deliver UPSaaS

GE Consumer and Industrial SA and Burland Energy SA have entered into a cooperation to deliver UPSaaS and Facilitate to mission critical facilities. The UPSaaS (Power Conditioning-as-a-Service) programme is a way of provisioning conditioned electricity to mission critical applications, such as data centres, medical facilities and telecommunications infrastructure. UPSaaS moves customers from buying physical assets to buying the output (conditioned electricity) at fixed energy unit (kWh) pricing, so they benefit from a pay-per-use model.

Facilitate (Electrical Infrastructure-as-a-Service) is a comprehensive service to maximise the operational availability of applications and facilities on a pay-per-use basis. It covers all components and services, allowing customers to better manage their current and future needs. Facilitate includes UPSaaS and COOLaaS (Cooling-as-a-Service), which are also available as separate products, along with back-up power and power distribution.

Customers never assume ownership of any assets (representing a move from capex to opex) and, rather than the fixed monthly payments of traditional leasing solutions, monthly billing is based on a fixed rate per kWh consumed. This includes all required products, installation, preventive maintenance, service, spare parts and battery changes for the entire duration of the contract.

News in brief

Energy storage business launched
Siemens and AES have launched an energy storage business which they claim will be a "game changer" due to their combined financial muscle and global footprint. The two said the business name 'Fluence' is derived from the "confluence of forces" that are redefining the global energy system, with the need to store energy becoming "critical" due to the increasing penetration of renewables.

Turnkey UPS project
NetApp, the $5.5bn integrated cloud data storage solution provider, has chosen Piller Power Systems technology for its new campus in Bangalore, India. The new 5.7ha campus is a substantial expansion of NetApp's footprint in India. A turnkey hybrid medium-voltage UPS system comprising four sets of Piller's highly efficient diesel-coupled rotary UPS was supplied for phase one of the development by Indian subsidiary Piller Power India. A further four units are expected for phase two of the development in 2018.

Facility to demonstrate data centre technology
Panduit EMEA has opened its latest 'customer briefing centre' in Schwalbach, Germany. The new centre contains an operational data centre to illustrate the company's and its partners' hardware and software. Customers can visit the new facility and see the physical aspects of the data centre at first hand, including contained environment cabinets, heat and cooling management systems, structured cabling (fibre and copper) and data centre management systems.

JCB wins major standby power contract

JCB Broadcrown has secured a contract to provide standby power at one of London's most striking mixed-use developments. Standing 27 storeys high, the 700,000 sq ft development – known as One Bank Street – is being built at Canary Wharf and, when completed in 2018, will be one of the capital's most prestigious commercial buildings.

The multimillion-pound contract includes the installation of five 2,500kVA/2MW diesel generators, plant room noise attenuation, mechanical and electrical installation, basement-level fuel tanks, a power management system and commissioning of the installation over the next two years. JCB Broadcrown will install the five G2500SMU5 diesel generators on the 26th floor of the building, creating an acoustically lined plant room to house the 10MW power supply – enough energy to power up to 5,000 domestic homes for 24 hours. The JCB Broadcrown-designed power management system (PMS) will provide sequenced load control for critical load supply, to avoid the chance of generator failure.

TeleData invests for high resilience

TeleData UK has announced an expansion at its Delta House data centre, with an additional 2,500 sq ft of premium colocation space. The expansion on the ground floor of Delta House will be TeleData's second in the past two years.

To bring the new expansion space into service, TeleData will be making a substantial investment in additional cooling and power infrastructure. Resilient, state-of-the-art air conditioning systems with cold aisle containment will be deployed to ensure the best possible energy efficiency, while also providing capacity to support an average of 7kW per rack. All rack power will be conditioned and protected by high-capacity UPS systems in up to 2(N+N) configuration.

This level of UPS resilience ensures that TeleData-hosted customers are protected against factors that could otherwise cause downtime to critical servers or telecommunications equipment, and surpasses the minimum resilience requirement of Tier 3 design – a typical minimum infrastructure design benchmark for data centre service providers.

As well as providing an extra level of resilience, this allows TeleData to carry out critical system maintenance without removing UPS power protection for its customers during maintenance windows on these key components.

Shield House contract win

Critical infrastructure specialist Sudlows has been awarded the contract to deliver a new state-of-the-art data centre for digital firm Indectron, which will provide highly secure colocation for a range of organisations. Following a competitive tender, Sudlows was appointed to design and build the new 3MW power capacity data centre, based in Gloucester.

Shield House is positioned directly on the UK's arterial fibre routes in an area that is fast developing into the cyber-security hub of the UK. The facility's intelligent design will deliver enhanced environmental efficiencies and will be capable of supporting high-density computing.

Andy Hirst, technical director at Sudlows, commented: "This is an outstanding facility that utilises the very best energy efficient technology. We are committed to providing the best energy efficient and resilient critical environments and this is reflected in the range of organisations this new facility will serve."

The Shield House facility is expected to be completed in late 2017.

ENERGY EFFICIENCY

Sustainability and the data centre: beyond the pipe dream...

The adoption of sustainable approaches to energy will require a cultural shift but if the data centre sector is to avoid a large increase in carbon footprint, it will need to change. The Green Grid is urging operators to adopt a 'green mind-set' and says that Scandinavian projects have shown that this can be more than just a pipe dream. Louise Frampton reports

As the global data centre construction market continues to grow, the Green Grid is urging data centre providers to keep long-term sustainability front of mind when building and maintaining their facilities. The global market for data centre construction is predicted to reach a total value of $73.87bn (£56.61bn) by 2021, according to Technavio.

Lance Rütimann, vice-president at the Green Grid, warns that this rise will mean that data centre providers can no longer rely solely on fossil fuels to power their facilities. He believes a 'green' mind-set must be adopted, requiring a renewed focus on sustainability and a holistic approach to the entire data centre infrastructure.

Rütimann explains: "This rise in data centre construction is what is to be expected as technology continues to advance; the Internet of Things, social media and digital transformation are all continuing to explode, in turn creating more data that needs to be stored. While fossil fuels were once viewed as an effective resource for powering data centres, this is no longer the case.

"Data centres now account for 5% of global CO2 emissions and this will only increase if we don't change our approach. What is more, with limited supplies of fossil fuels, renewable energy is clearly a longer term and more sustainable solution."

5G technology
The Green Grid also warns that while 5G is set to bring a range of benefits to business consumers, it will also increase the capacity to create and transfer more data than ever. Without a more significant move to sustainable practice, there will be a large increase in the carbon footprint of data centres across the world.

The next generation of mobile networks, 5G, is still in the early stages of development, with no formalised standards in place to define or govern its usage. It will yield a substantial increase in data transfer speeds – for example, a full HD movie currently takes several minutes to download over 4G, whereas with 5G it is estimated that this could be accomplished in less than 10 seconds. 5G will also reduce response time from about 50 milliseconds on 4G to about 1 millisecond.

The amount of data produced will require more bandwidth and storage – increasing the need to focus on sustainability and avoid a massive escalation in the carbon footprint of ICT.

"When building new data centres – and this applies to existing facilities as well – providers must re-evaluate their sustainability practices, by reviewing how they can make their data centres more energy-efficient and then determining how they can best use renewable energy to power their facilities," comments Rütimann.

Roel Castelein, the Green Grid’s EMEA marketing chair, says that enterprise data centres and colocation data centres are increasingly looking at how to lower their energy consumption; however, the bulk of operators are motivated by cost considerations.

Nevertheless, there are some operators that are willing to look at the whole of their data centre’s long-term sustainability, not just from an energy consumption perspective, but also in terms of how much water is consumed, the carbon footprint, how cool the data centre gets, as well as the life cycle in terms of end of life and waste. “Compared to four years ago, there is a lot more consciousness around this,” he comments.

Google is an enlightened example of how data centres can tackle the issue of sustainability: “The tech giant made a very rational financial calculation on the explosive growth rate of its data and the amount of energy it would take to store all its data as it continues to grow. The company then actioned two steps,” Rütimann explains.

“Firstly, it looked at how to make its data infrastructure, as well as all of its data networking and data components, more energy-efficient. Secondly, the company started buying or sourcing renewable energy to power its data centres. Google has pledged that between 2020 and 2025, all of its operations will be powered by renewable energy. Facebook and Apple have also made similar pledges.”

Tackling the basics
While an entirely green approach may not be possible for all at present, a feasible alternative is to use renewable energy as a secondary source of power. However, Castelein points out that there are still too many operators failing to even get the basics right – a surprising number of data centres are still mixing hot and cold air flows, for example, yet the solution is simple and low cost. Adding a few panels to provide hot and cold aisle containment requires very little effort or investment, but this is often overlooked.

Other more sophisticated approaches are coming to the fore – from the use of complex algorithms and software solutions, to energy efficient technology such as liquid cooling, for example.

However, Castelein adds that if you want to save energy in heating your house, you simply turn down the heating and put on a sweater. The principle is much the same with data centres, according to Castelein: if you are willing to push up the temperature of the facility, by 2-5 degrees, without affecting operations, you will consume less energy. This still relies on fossil fuel, however, and the Green Grid says that, in the long term, data centres will need to increase the use of clean energy.

Onsite power generation
“There are clearly more sustainable ways to design, power and operate data centre facilities. Rather than relying exclusively on unsustainable fossil fuel energies, providers should follow the lead of the larger hyperscalers and adopt renewable energy sources to power their data centres,” comments Rütimann.

Although there is increased interest in innovative approaches to onsite power generation, Castelein highlights a need for educating the market further. He points out that, within the retail industry, there have been some innovative business models that the data centre sector could learn from. For example, a medium-sized retailer based in Belgium established its own green energy company – firstly to supply its stores with renewable energy but also to sell energy to others in the long term.

“The retail industry is very tough with low margins. If they can do it, anyone can do it,” says Castelein.

Governments could also have a greater role in incentivising uptake of renewables in the data centre market, he believes. Indeed, some geographies are leading the way in the development of sustainable investment.

The Scandinavian data centre sector, for example, has been a pioneer in the use of renewable technology and sustainability, leading the way for others. Governments in the region have wanted to attract inward investment but the local communities have also benefitted at the same time. For example, a huge data centre has been built in Finland that uses heat recovery to provide free heating for the town’s residents during the country’s harsh winters.

“They actually made it happen. It wasn’t just a pipe dream,” says Castelein.

Ultimately, the biggest barrier to increased adoption of renewable energy, he believes, is cultural – there is a fear of change. “Operators need to stop thinking about it and just do it,” he concludes. ●


ONSITE POWER GENERATION

Connecting with the National Grid and complying with new regulations

Increasing demand for renewable energy, along with the requirement for a stable grid supply, is driving major changes within the power industry, with G59 and G83 regulations at the forefront of some of those changes. Deep Sea Electronics’ John Ruddock reports

The EU and surrounding countries have set ambitious targets to reduce greenhouse gas emissions by as much as 80-95% by 2050, with equally ambitious nearer-term targets for 2030. In order to meet these tough emissions targets, governments are busy securing initiatives to increase power from renewable energy sources, which, until recently, have formed a very small percentage of the total power produced.

Renewables are, by their very nature, much more variable because they depend on changing weather and climatic conditions, so the prospect of a far higher dependence on renewable energy has led to a review of grid protocols.

Supporting the European agenda for climate and energy, the European Network of Transmission System Operators for Electricity (ENTSO-E) has developed a series of network codes that are mandatory for all EU states.

The purpose of this article is to outline the key issues facing Europe’s power system today and the path it is taking to address future challenges. Part of this includes the Requirement for Generators Network Code (RfG) legislation, which covers protocols for grid-connected generators across the 41 transmission system operators (TSOs) that ENTSO-E represents, in 34 countries including Great Britain. Now each country has the task of interpreting the requirements of the legislation and assessing how its energy model can best be adapted.

Greener alternatives
With governments across and beyond the borders of Europe committed to finding greener alternatives to burning fossil fuels for generating power, the number of inverter-connected appliances feeding power into the grid from sources such as solar and wind installations is growing daily. Governments are actively encouraging private energy producers to supply the network in increasing numbers and they form a valuable and growing energy resource.

At present, all connected applications in Great Britain are governed by G59 or G83 regulations, but these are due to be replaced with G99 or G98, which incorporate the new protocols from the RfG. This will provide a much more secure and safe framework for modern energy technologies when connecting to the grid, while at the same time maintaining the robustness of the network.

As equipment differs across borders, so the need for different guidelines has evolved. Each country has its own regulations covering grid protocols specific to its operations, but these must all be updated to encompass the new network codes over the next few years.

In the shorter term, G83 and G59 are under constant review and changes are being developed to maintain the stability of the grid in the face of the challenges posed by the growth in renewable generation.

There are many applications that were intentionally designed as stand-alone systems and therefore had no requirement for G59 compliance. These are generally smaller installations primarily required to generate power for their own consumption, but an increasing number are incentivised to supply surplus power back to the grid as a way of offsetting investment costs and supporting growing energy demands. These applications have had to fit mains protection retrospectively in order to comply with G59. Installations that have chosen to be part of the STOR initiative, offering emergency reserves on an ‘as and when called for’ basis, are also required to comply.

Regulations compliance
There are several ways in which connected applications are able to comply with the new regulations: through an independent mains decoupling device or through built-in features within the inverter.

However it is done, the application will be required to disconnect from the grid in the event of a grid failure to prevent ‘islanding’, while at the same time avoiding ‘nuisance’ disconnections during temporary or fleeting power disturbances and other grid events not caused by islanding.

In effect, the windows have been broadened to prevent applications from being ‘thrown off’ the network and putting all the burden on the remaining producers, while still operating within recommended guidelines.
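For illustration only, the sketch below shows the kind of decision a mains decoupling function has to make: trip when the supply stays outside its voltage, frequency or ROCOF window for a sustained period, but ride through brief disturbances to avoid nuisance disconnections. Every threshold and delay is a placeholder assumption, not the statutory G59/G98/G99 settings, and this is not how any particular relay (such as the P100) is implemented.

```python
# Illustrative sketch only: the window limits below are placeholders, not the
# statutory G59/G98/G99 settings, and not any particular relay's firmware.

from dataclasses import dataclass

@dataclass
class Limits:
    v_min: float = 207.0       # under-voltage threshold (V), placeholder
    v_max: float = 253.0       # over-voltage threshold (V), placeholder
    f_min: float = 47.5        # under-frequency threshold (Hz), placeholder
    f_max: float = 51.5        # over-frequency threshold (Hz), placeholder
    rocof_max: float = 1.0     # rate-of-change-of-frequency limit (Hz/s), placeholder
    trip_delay_s: float = 0.5  # persistence required before tripping, so that
                               # fleeting disturbances are ridden through

def should_disconnect(samples, limits=Limits()):
    """samples: list of (time_s, voltage_v, frequency_hz) tuples, oldest first.

    Trip only if the supply stays outside the voltage/frequency window, or ROCOF
    stays above its limit, for longer than trip_delay_s -- i.e. disconnect on a
    genuine loss of mains while ignoring brief transients."""
    out_since = None
    for i, (t, v, f) in enumerate(samples):
        rocof = 0.0
        if i > 0:
            t0, _, f0 = samples[i - 1]
            rocof = abs(f - f0) / max(t - t0, 1e-9)
        outside = (v < limits.v_min or v > limits.v_max or
                   f < limits.f_min or f > limits.f_max or
                   rocof > limits.rocof_max)
        if outside:
            out_since = t if out_since is None else out_since
            if t - out_since >= limits.trip_delay_s:
                return True
        else:
            out_since = None
    return False

# Example: a short voltage dip is ridden through; a sustained frequency
# excursion trips the connection.
dip = [(i * 0.05, 180.0 if 4 <= i <= 6 else 230.0, 50.0) for i in range(40)]
drift = [(i * 0.05, 230.0, 50.0 + 0.1 * i) for i in range(40)]
print(should_disconnect(dip))    # False -- brief dip, stays connected
print(should_disconnect(drift))  # True  -- sustained excursion, disconnects
```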

New technology
Powerful independent devices such as DSE’s P100 mains decoupling relay can offer many benefits to these grid-connected applications, especially when faced with the impending changes outlined above.

Used to detect grid failures when operating in parallel with another supply, the microprocessor-based P100 has USB connectivity that allows parameter changes and product upgrades, so the application can keep pace with emerging requirements and can be configured to comply with different global regulations.

Also built into the product is the ability to record up to 250 events, providing useful trend analysis, and a large number of sophisticated protections, such as two-stage under- and over-frequency protection, five-stage under- and over-voltage protection, 10-second rolling average over-voltage protection, voltage asymmetry and vector shift protection, three separate ROCOF protections and incorrect phase sequence protection, plus a host of other powerful features. The product also has built-in security to prevent unauthorised or accidental configuration changes post-commissioning.

So, for installations with independent devices such as DSE’s P100, it is easy to adapt when regulations change, which makes them a cost-effective solution and benefits those companies wanting to standardise equipment across multiple national TSOs.

Changes within the new RfG legislation, which will be covered in the G98/G99 regulations, also take into account the need for intelligent active and reactive power control modes.

The fault ride through (FRT) immunity requirement is intended to provide greater grid stability by reducing unnecessary trips during short power dips caused by network faults. DSE’s 86xx MKII generator controls have frequency-dependent kW control and voltage-dependent kVAr control built in for exactly this purpose.

When changes of the magnitude associated with G98/99 are announced, it can be quite tough to understand what the requirements mean in real terms for the way organisations have to adjust their operations. Some of the difficulties can lie in the inability of equipment to adapt to new criteria.

Recognising the difference between grid events where the required outcome is to disconnect automatically and those where it is to stay connected now requires a much higher level of capability, so intelligent products that help with compliance can provide a very welcome and valuable solution when integrated within appropriate power systems.


VIEWPOINT

The weakest link in high availability power systems?

Human error is often cited as being among the top causes of high-profile outages, but what about failures in the power chain, and is human error ultimately at the heart of these too? Ian Bitterlin considers the potential risks and asks: what is the weakest link?

In every data centre power system there is a weakest link. The art is to minimise the weakness through smart design but, as most data centre failures are attributable to human error, we must also consider the ‘operations’ phase, including regular maintenance, repairs and emergency intervention. That said, there will always be a weakest link since, as we reinforce one element, another takes its place at the bottom of the league table of resilience. So, the question remains: which, if any, power system element is always the weakest link?

A clue as to where to search may lie in a conference presentation made a couple of years ago by the manager of a major ICT organisation’s North American data centre estate. He described the ‘uptime’ of the 40-plus facilities and stated that software error and human error accounted for 97% of issues, with just 3% being attributable to the physical M&E infrastructure.

Given the sensitivity of the ICT hardware and the lack of any time to correct an error in the power system, it would be reasonable to assume that the 3% was dominated by power problems.

In the same presentation, it was stated that the 40-plus facilities included all ‘types and generations’ from before the Uptime classification system to the latest Tier 4, and he stated that there was ‘no discernible difference’ in the reliability performance between the oldest/worst and newest/best. If we think about the 97%, including the human error (usually agreed to be 70%), then this statement is not that strange over 10-15 years of measurement. If you have poorly trained staff then even a Tier 4 system can be defeated annually.

First, let’s consider the cooling system and specifically the electrical system that drives it. The cooling system mechanicals can be deployed in N, N+1 or 2N architecture depending on your budget, appetite for risk, need for concurrent maintainability and acceptance (or not) of live electrical working.

However, we are considering high availability systems, so a basic N system can be ignored and the smart design would be to deploy automatic change-over switches in each element (CRACs, fans, pump-sets and chillers etc) and a 2N (A and B) power system with dual motor control centres (MCCs).

If we pay attention to fire-cells and physical segregation of pathways for cables and pipework, it is not hard to achieve resilience if we always have two sources of electrical power, A and B, eg from the utility and the emergency power generators. High load density or any other desire for continuous cooling does complicate things, such as installing UPS for fans, pumps or even chillers, or chilled water storage, but keeping a 2N architecture will provide sufficient resilience if the implementation is human-error proof as far as is practicable. It must be said that a single MCC in an N+1 system is a very common design error and produces an unfortunate single point of failure (SPoF).

Of course, electrical energy is not the only resource that can be vital to the cooling system and often water is used (eg in evaporative or adiabatic systems), so the resilience plan must extend to dual sources and/or on-site water storage, and maintenance issues can be important to emergency operations.

However, the cooling system controls the thermal conditions in the data rooms to the chosen limits and a brief excursion into the ‘allowable’ range isn’t going to negatively impact the load, so the operators often have time enough to correct (reverse) mistakes and avoid load interruptions. In this respect low density (and, somewhat perversely, a lack of cold-aisle containment) is favourable to cooling continuity.

But this ‘time enough to reverse errors’ certainly does not apply to the critical load power supply. With the zero-voltage immunity of the hardware sometimes being less than 10ms, the briefest fault/error will lead to an instantaneous load failure.

Luckily most loads are dual-corded (or protected by static transfer switches), so deploying A and B UPS without any (or at least without common) emergency power off buttons (EPOs) can avoid most potential UPS failures as well as protecting the load from an operative who makes a ‘brief’ mistake when under pressure.

If you really want to engineer out potential failures then use two different OEMs for your A and B UPS systems, with two different battery OEMs, and arrange for the batteries to be two years apart in age. Eco-mode, despite delivering huge energy cost savings, represents a small risk, but alternating it ‘enabled’ in A or B every week/month will offer half the savings with almost no risk at all. So, a dual-bus UPS system can provide a highly reliable supply to a dual-corded load if two conditions are met:
• There is always an emergency standby supply available to substitute for a failed utility, and that supply must be ready fast enough to avoid temperature excursions in the cooling system.
• At least one of the (2N) batteries’ autonomy is long enough to bridge the time gap between utility failure and standby generation being available.

So far, we have established that it is possible to protect the critical load from failure caused by UPS or critical cooling and to limit the likelihood of human error by duplicating systems and ‘engineering out’ inadvertent operations. This leaves the energy sources, the utility and the emergency power generation system, as where we need to look for a weak point, if not the weakest point. I am a firm believer in the design philosophy that utility failure should be treated as a ‘normal’ event, which is not unreasonable considering that the average northern European utility goes out of tolerance (due to switching surges, voltage depressions and other transients) every 250-400 hours.

With most utility failures being less than three seconds in duration, the gensets are rarely started and run in anger, but there are events where they will be required to run for longer periods, for example a facility with a single transformer that fails, a substation that fails, or one that has a radial utility feed (rather than a ring-main) that suffers a cable fault.

To mitigate extended genset operation, we can install dual transformers (in separate fire cells) and feed the facility from two points on the utility (two discrete substations) via diverse paths/routes into the facility. In that way, we are protected from physical utility failures that result in extended genset operation. That leaves us to look at the last element – the other source of energy, the gensets. The key point in genset starting reliability and successful running is regular maintenance and testing. This testing must be regular, on load, and include the operation of the transfer switchgear – which is not the norm. Without this we can almost guarantee an eventual failure. It may be ‘soon’ or in several years, but it will come at random, when the utility fails for longer than the UPS autonomy can support the critical load or the cooling system can keep it below a thermal shut-down. With cabinet loads rising (albeit relatively slowly compared to predictions) the thermal shut-down is a more likely scenario.

So, does it look like the genset system is the weak area? If so, where is the weakest point within the genset system? I would suggest that it is the fuel itself, usually the item that gets the least attention in many facilities. We don’t burn enough diesel oil (maybe only 12 hours per year if we test the system properly) and we generally store too much. The bulk fuel tanks have breather pipes and condensation builds up in the bottom. Where the fuel/water boundary exists, bacterial cells grow into molecular chains and if these, along with dead cells that sink to the bottom, are sucked into the injectors in sufficient quantity the engines will stop after a few minutes of running... game over.

Multiple tanks can mitigate the risk and there is no doubt that there are facilities that know the potential problem, proactively manage their deliveries (testing before filling) and regularly carry out fuel treatment (polishing) – but these are, in my experience, in the minority. If I had to choose only one ‘weakest’ point it would be fuel management – again an area where human error (by doing little) will be the root cause.
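As a footnote to the two ride-through conditions listed earlier, the fragment below sketches that check in code. Every number in it is a placeholder assumption for illustration (genset start time, battery autonomy, thermal ride-through), not a design value taken from the article.

```python
# Minimal sketch of the ride-through check implied by the two bullet-point
# conditions above. All figures are placeholder assumptions for illustration.

genset_start_and_accept_load_s = 45.0   # utility fails -> gensets carrying load
ups_autonomy_a_s = 300.0                # A-side battery autonomy at the actual load
ups_autonomy_b_s = 0.0                  # B-side assumed unavailable (worst case)
thermal_ride_through_s = 120.0          # time before the data hall reaches a
                                        # thermal shut-down with cooling off

def survives_utility_failure(gap_s, autonomy_a_s, autonomy_b_s, thermal_s):
    """Both conditions must hold:
    1) at least one (2N) battery bridges the gap until standby generation is on load;
    2) standby power returns before the cooling outage forces a thermal shut-down."""
    battery_bridges_gap = max(autonomy_a_s, autonomy_b_s) >= gap_s
    cooling_rides_through = thermal_s >= gap_s
    return battery_bridges_gap and cooling_rides_through

print(survives_utility_failure(genset_start_and_accept_load_s,
                               ups_autonomy_a_s, ups_autonomy_b_s,
                               thermal_ride_through_s))  # True with these assumptions
```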

COVER STORY

Why a poor power factor is brewing business trouble

Power factor should interest anyone seeking a profitable and reliable business with growth potential, writes Stephen Joyce, power quality business manager at ABB’s Power Grids business in the UK

Power factor is a subject that is often seen as the sole preserve of electrical engineers. Yet it should attract the attention of anyone interested in the profitability and smooth running of a business, from facility management to the boardroom, because a site with a low power factor is effectively burning money. And high energy bills – with the added risk of financial penalties from your utility supplier – are only part of the problem. Power factor also impacts both the reliability of the network and its capacity to add new loads when your business expands.

When we talk about a site’s power factor (or PF) we are referring to the relationship between the active and reactive power on the network. It measures how effectively you use the electricity you buy and in an ideal world it would be one (unity).

A useful analogy to help understand the concept is a frothy latte. The capacity of the glass is the total apparent power, as measured in kilovolt amps (kVA). The coffee body is the active power, measured in kilowatts (kW), that you can use to do work, while the froth on top is reactive power, measured in kVAR (kilovolt amps reactive) – some froth is useful but too much is a waste.

Most loads on an electrical distribution system are categorised as one of three types – resistive, inductive and capacitive. The most common in modern networks are inductive loads such as transformers, fluorescent lighting and AC (alternating current) induction motors. They need reactive power – the kVAR – to maintain the magnetising current they need to function.

One common example of reactive power is an unloaded AC motor. When all load is removed from the motor, you might expect the no-load current to drop close to zero. In reality, the magnetising current draws between 25 and 30% of full load current even when the motor is unloaded.

Why worry?
Generally, a value between 0.9 and 1.0 is considered a good power factor, meaning that metered power and used power are almost equal. From the consumer’s perspective, you are using what you paid for, with minimal wastage. However, when ABB’s service engineers survey customer sites it is very common to find a much lower PF – sometimes down to 0.5 or below.

To demonstrate why a low PF is a concern: when it drops from 1.0 to 0.9, 10% more current is required to handle the same load. But the relationship is not linear. A power factor of 0.7 requires approximately 43% more current – and a power factor of 0.5 requires approximately 200% (twice as much) current to handle the same load.

When your PF is low, the utility supplying the site must provide all the power needed – both productive and reactive. For the utility that means larger generators, transformers, conductors and other system devices, which push up its own capital expenditure and operating costs. These costs have to be passed on to industrial users. And, in some cases, they are made explicit in the form of power factor penalties.
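To make the current figures above concrete, the short sketch below works them through for an assumed three-phase 400V supply feeding a notional 100kW load; both of those values are illustrative assumptions, since only the ratios matter.

```python
import math

# For a fixed active power P and supply voltage, line current scales with
# 1 / power factor, because apparent power S = P / PF and, for a three-phase
# supply, I = S / (sqrt(3) * V_LL). The 100 kW and 400 V figures are
# assumptions chosen purely for illustration.
P_kw = 100.0
V_ll = 400.0

def line_current_amps(p_kw, pf, v_ll=V_ll):
    s_kva = p_kw / pf                     # apparent power the network must carry
    return s_kva * 1000.0 / (math.sqrt(3) * v_ll)

i_unity = line_current_amps(P_kw, 1.0)
for pf in (1.0, 0.9, 0.7, 0.5):
    i = line_current_amps(P_kw, pf)
    print(f"PF {pf:.1f}: {i:6.1f} A  ({(i / i_unity - 1) * 100:5.1f}% more current than at unity)")

# Approximate output:
#   PF 1.0:  144.3 A  (  0.0% more current than at unity)
#   PF 0.9:  160.4 A  ( 11.1% ...)   roughly the 10% quoted in the article
#   PF 0.7:  206.2 A  ( 42.9% ...)   the article's ~43%
#   PF 0.5:  288.7 A  (100.0% ...)   twice as much, as the article notes
```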

ABB’s test bay at its Bromborough facility uses the latest computerised testing technology to test all the power quality equipment manufactured at the site, including power factor correction equipment

Compelling reasons
Clearly, improving your power factor can contribute directly to your bottom line in terms of energy bills. But there are other compelling reasons to take action. First, reducing the load on the network will help improve the operating life of equipment, boosting reliability and reducing the need for maintenance and replacement.

However, the most significant reason is that optimising PF can help defer, or possibly even avoid completely, major capital investment to increase a site’s load capacity to facilitate the installation of new equipment.

As a typical example, ABB worked with a small manufacturing company that was planning to expand production by installing new manufacturing lines. But its site was already operating at maximum load and it could not afford the extra £150,000 needed to reinforce its connection to the local grid. It also faced major disruption and delay in digging up local roads to carry out the work. In contrast, our survey found that the site was operating at a power factor of 0.57. Installing specialised power factor correction (PFC) equipment restored this to 0.95 right away, effectively freeing up an extra 81A to more than meet the demands of the new facility.

How do you solve a low PF?
A low PF is solved by adding power factor correction (PFC) capacitors to the site distribution system. These capacitors work as reactive current generators that supply reactive power (kVAR) to the system.

By generating their own reactive power, industrial users free the utility from having to supply it. Therefore, the total apparent power (kVA) supplied by the utility will be less, which is immediately reflected in proportionately smaller bills. Capacitors also reduce the total current drawn from the distribution system and so increase system capacity.

PFC capacitors are rated in electrical units known as ‘VARs’. One VAR = one volt ampere of reactive power. As reactive power is usually measured in thousands of VARs, the prefix ‘k’ is added to create the more familiar ‘kVAR’ term. The capacitor’s kVAR rating shows how much reactive power it will supply, and each unit of kVAR supplied will decrease the inductive reactive power demand by the same amount.

Let’s take as an example a low voltage network that requires 410 kW of active power at full load, with a measured PF of 0.7. The system’s full load consumption of apparent power is therefore 579.5 kVA.

If 300 kVAR of capacitive reactive power is installed, the power factor will rise to 0.96 and the kVA demand will be reduced from 579.5 to 424.3 kVA. Savings can vary from 20 to 30% or even more in some cases. This cumulatively translates to considerable financial savings, with the PFC equipment often paying for itself in a matter of months.
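The arithmetic behind the worked example above follows from the standard relationships S = √(P² + Q²) and PF = P/S. The snippet below reproduces the article’s 410 kW / 579.5 kVA / 300 kVAR figures and also shows the usual way of sizing a capacitor bank for a target power factor; the 0.95 target in the last line is an assumption chosen purely for illustration.

```python
import math

# Reproduce the article's worked example: 410 kW of active power drawn at an
# apparent power of 579.5 kVA, then 300 kVAR of capacitive correction added.
P = 410.0          # active power, kW
S_before = 579.5   # apparent power before correction, kVA

pf_before = P / S_before                      # power factor = kW / kVA
Q_before = math.sqrt(S_before**2 - P**2)      # reactive power, kVAR

Q_after = Q_before - 300.0                    # capacitors supply 300 kVAR locally
S_after = math.sqrt(P**2 + Q_after**2)
pf_after = P / S_after

print(f"Before: PF = {pf_before:.2f}, demand = {S_before:.1f} kVA, Q = {Q_before:.0f} kVAR")
print(f"After : PF = {pf_after:.2f}, demand = {S_after:.1f} kVA")
# Gives roughly PF 0.71 -> 0.97 and 579.5 kVA -> 424 kVA, in line with the
# article's quoted 0.96 and 424.3 kVA (the small differences are rounding).

# Sizing a bank the other way round: kVAR needed to reach a target PF.
def kvar_to_reach(p_kw, pf_now, pf_target):
    phi_now, phi_target = math.acos(pf_now), math.acos(pf_target)
    return p_kw * (math.tan(phi_now) - math.tan(phi_target))

print(f"kVAR needed to lift 410 kW from PF 0.70 to 0.95: {kvar_to_reach(410, 0.70, 0.95):.0f}")
```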

Practical PFC installations
In practice, PFC is installed in one of three ways:
• Individual capacitor units for each inductive load (in most cases a motor)
• Banks of capacitor units grouped in an enclosure that is connected at a central point in the distribution system. They come in two types: fixed capacitor banks comprise multiple capacitors racked in a common enclosure with no switching, while automatic capacitor banks, also called ‘cap banks’, have capacitors in a common enclosure with a contactor or thyristor (SCR) switched by a controller
• Combination, where individual capacitors are installed on the larger inductive loads and banks are installed on main feeders or switchboards

Summary
Low power factor is a critical business issue that impacts a site’s profitability, reliability and growth potential. It is remedied by PFC solutions that offer rapid deployment and a fast return on investment.

A frothy latte is a useful analogy to help better understand the concept of power factor

COOLING & AIR MOVEMENT

Eight out of 10 UK data centres fail to meet thermal guidelines

James Kirkwood, head of critical services at EkkoSense, warns that many data centres are failing to follow best practice. Without a precise thermal monitoring strategy, and the technologies to support it, organisations will remain at risk

At a time when global data centre facilities are being asked to scale their activities, it is imperative that operators have both the capacity and the resilience to support increased volume requirements.

However, when EkkoSense recently analysed some 128 UK data centre halls and more than 16,500 IT equipment racks as part of an industry survey into data centre cooling, the results revealed that eight out of 10 UK data centres were not compliant with current best practice ASHRAE thermal guidelines.

The ASHRAE standard – published in the organisation’s Thermal Guidelines for Data Processing Environments, 4th Edition – is highly regarded as a best practice thermal guide for data centre operators, offering clear recommendations for effective data centre temperature testing.

ASHRAE suggests that simply positioning temperature sensors on data centre columns and walls is no longer enough, and that data centre operators should, as a minimum, be collecting temperature data from at least one point for every 3m to 9m of rack aisle. ASHRAE also goes on to suggest that unless components such as IT racks have their own dedicated thermal sensors, there is realistically no way for them to stay within target thermal limits.

Organisations at risk
Unfortunately, the problem for the majority of data centre operators that only monitor general data centre room/aisle temperatures is that average measurements can never effectively identify hot and cold spots. Without a more precise thermal monitoring strategy, and the technologies to support it, organisations will always remain at risk – and ASHRAE non-compliant – from individual racks that lie outside the recommended range.

ASHRAE’s recommendations speak directly to the risks that data centre operators face from non-compliance, and almost all operators use this as their stated standard.

The EkkoSense research revealed that 11% of IT racks in the 128 data centre halls surveyed were actually outside ASHRAE’s recommended rack inlet temperature range of 18-27°C – even though this range was the agreed performance window that clients were working towards.

The survey also found that 78% of data centres had at least one server rack that lay outside that range – effectively taking their whole data centre outside of thermal compliance.

This latest EkkoSense research follows on from other recent findings that suggested the current average cooling utilisation level for UK data centres is just 34%.

This study also found that less than 5% of data centres are actively monitoring and reporting individual rack temperatures and their compliance.

This means that the majority of data centre operators simply have no way of knowing whether they are truly compliant with best practice thermal management guidelines. And that is a major concern when it comes to data centre risk management.

IoT-enabled temperature sensors solve the problem
Given that UK data centre operators continue to invest significantly in expensive cooling equipment, I believe the cause of ASHRAE non-compliance is not one of limited cooling capacity but rather the poor management of airflow and cooling strategies. That is why the introduction of the latest generation of Internet of Things-enabled temperature sensors – introduced since the initial publication of ASHRAE’s report – is likely to prove instrumental in helping organisations to cost-effectively resolve their non-compliance issues.

The issue could be addressed by combining innovative software and sensors to help data centres gain a true real-time perspective through the modelling, visualisation and monitoring of thermal performance.

Using the latest 3D visualisation techniques and real-time inputs from Internet of Things sensors, it is possible, for the first time, to provide data centre operators with a 3D real-time picture of their data centre environment’s physical and thermal dynamics.

By tracking rack-level temperatures using thermal monitoring technology, and applying an optimisation process, ASHRAE non-compliant data centres can be returned to a compliant state. However, once compliant, the key is to maintain that status through a programme of regular ASHRAE audits.
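As an illustration of the rack-level compliance reporting described above, the sketch below checks a handful of invented rack inlet readings against the ASHRAE recommended 18-27°C envelope and flags the hall the way the survey does (one non-compliant rack takes the whole hall outside thermal compliance). It is a minimal sketch under those assumptions, not EkkoSense's software.

```python
# Sketch of a rack-level compliance check: compare each rack's inlet
# temperature with the ASHRAE recommended envelope (18-27 C) and summarise
# the hall. The readings below are invented purely for illustration.

ASHRAE_RECOMMENDED_C = (18.0, 27.0)

rack_inlet_temps_c = {
    "A01": 22.4, "A02": 24.1, "A03": 27.8,   # A03 is a hot spot
    "B01": 19.0, "B02": 17.2, "B03": 23.6,   # B02 is over-cooled
}

def hall_compliance(temps, envelope=ASHRAE_RECOMMENDED_C):
    low, high = envelope
    out_of_range = {rack: t for rack, t in temps.items() if not low <= t <= high}
    return {
        "racks_out_of_range": out_of_range,
        "percent_non_compliant": 100.0 * len(out_of_range) / len(temps),
        # A single non-compliant rack takes the whole hall outside compliance.
        "hall_compliant": not out_of_range,
    }

report = hall_compliance(rack_inlet_temps_c)
print(report["racks_out_of_range"])                                     # {'A03': 27.8, 'B02': 17.2}
print(f"{report['percent_non_compliant']:.0f}% of racks out of range")  # 33%
print("Hall compliant:", report["hall_compliant"])                      # False
```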

Page 17: Brewing trouble with a poor power factor?mcp.theenergyst.com/wp-content/uploads/2017/08/... · Data centre design Virtual reality: the next data centre revolution? Infrastructure
Page 18: Brewing trouble with a poor power factor?mcp.theenergyst.com/wp-content/uploads/2017/08/... · Data centre design Virtual reality: the next data centre revolution? Infrastructure

With 188 sites, in 44 markets worldwide, Equinix

specialises in enabling global interconnection between organisations and their employees, customers, partners, data and Clouds. More than 1,275 companies currently colocate in the 11 Equinix data centres based in London and Manchester alone. Ensuring uptime is critical to the business therefore.

Building resilience into a data centre is all about design, training and testing, and it has to be an ongoing strategy, according to Equinix’s UK managing director Russell Poole: “In simplistic terms, data centres are big buildings full of machines run by humans; machines break and humans make mistakes – you have to design for this,” he comments.

“A data centre built 10 years ago will have been built to a very different standard to a data centre built today – we invest significantly in updating infrastructure to deliver additional levels of resilience.

“When we build our latest facility, we go back retrospectively to the rest of the portfolio. Our standard level of resilience is 2N+1. This makes it very hard for a single issue to cause a problem,” says Poole. He adds that it is important to rigorously test for a wide variety of different scenarios. Equinix has a specialist team dedicated to testing the company’s data centres around the world, looking for single points of failure.

“It is quite an uncomfortable process – no one wants to be told that their ‘baby is ugly’,” comments Poole. He adds that it is important to demonstrate that the infrastructure works on the building load. “It needs to be tested in the real world – not just in a theoretical situation or test environment,” Poole continues. Mitigating the risk posed by human factors is also crucial, as recent high-profile outages experienced by British Airways and other enterprises have demonstrated.

"Switching is normally the most dangerous area – switching off equipment that shouldn't be switched off is surprisingly common in the data centre sector," comments Poole. "Often this is due to third-party engineers performing maintenance work. You need to supervise them correctly and have a 'script' for what is going to happen, with one person performing the work and another overseeing it. They need to follow the process of: 'this is what I'm going to do', 'do you approve it?'; and checks must be made before anything is done."

Training is crucial and must focus on ensuring processes are written, updated and, most importantly, followed. Beyond this, training for the future of the industry sector is also a key passion for Poole. He believes there is a pressing need to tackle the skills gap within the data centre sector. Within the past five years, Equinix has established an apprenticeship programme to address this issue and seen all 15 graduates accept positions within the organisation.

"I look at it as growing a youth team. From an educational perspective, it was obvious to us that the UK government had decided to maximise the number of people going to university. A lot of people were coming out having had a great time, with a degree of no use to them or any industry, while having accumulated a large amount of debt.

"We wanted to offer something for those who wanted to take a different, more vocational path and build a career outside of that academic framework. We will be growing our apprenticeship programme further, throughout 2017/18. Legislation now means that a certain amount of payroll has to be spent on apprenticeships and we are already ahead of this."

He adds that he is particularly interested in the potential of university technical colleges – children of 14 years old have a curriculum that is 60% academic and 40% vocational; then, at 16, this balance shifts in favour of vocational study.

"Students come out with qualifications that are actually useful," says Poole, pointing out that there are already successful schemes established with Jaguar Land Rover and Microsoft. These schemes are focused on developing the skills for the future of their respective industry sectors and Equinix is now looking at whether there is scope to become involved, with a view to supporting the development of a skills base for the future of the data centre sector.

He acknowledges that there is work to be done to raise the profile of data centre career pathways and the company is currently working with local schools to address this issue.

This includes hosting open evenings for children and parents with a view to offering an insight into the sector and its opportunities. Poole says that there is a need to encourage more women into the sector: “The apprenticeship is currently all boys; we have had just one female applicant in the whole five years. I would like to find a way of getting more girls interested in this as a career. There is no reason why they shouldn’t. It feels like a missed opportunity.

“We need some role models. In our company, we have an equal gender mix in the non-technical areas of the business, yet engineering and technical is almost 100% male,” he comments, adding: “I think there is a misconception that engineering is a ‘dirty world’, when in fact the data centre is a pleasant working environment.

“We are also a progressive company. People who started in technical engineering roles are now in leadership positions all over the world. There are career tracks where people can become master technicians, so they can follow a technical path that gives them seniority, with associated rewards. You have to create that journey for those who want it.”

Energy efficiency

Reducing energy use and improving sustainability are other key areas of interest for Equinix. "We are effectively ensuring that the amount of renewable power put into the grid is equal to the amount we consume," says Poole.

Equinix has a long-term goal of using 100% clean and renewable energy for its global platform and continues to make advancements in the way it designs, builds and operates its data centres with high energy efficiency standards.

For example, Equinix's Amsterdam data centres at Science Park realise significant energy savings and a reduction in their CO2 footprint by using in-ground aquifer thermal energy storage instead of mechanical cooling, and have one of the lowest operating PUEs in the retail colocation sector.

In the US, Equinix has invested heavily in renewables and the company was ranked 16th by the US Environmental Protection Agency (EPA) in the Top 100 List of the largest green power users. Equinix uses more than 571 million kilowatt-hours (kWh) of green power annually, which represents 43% of its total US power needs.

"In terms of energy consumption, we invest heavily in cooling technology, which offers greater levels of efficiency. This has been an interesting area for us; we are using free cooling when it is cold outside, rather than using refrigeration all the time, as well as using adiabatic cooling."




LD6, in Slough, is also designed to be as energy efficient as possible. The site has a borehole that is "as deep as the Shard is tall", which allows water to be extracted to support adiabatic cooling.

In addition, at sites where traditional chilled water systems operate, variable speed devices are used to maximise efficiency.

However, Poole believes that PUE has gone as low as it can "without magic" within the data centre sector. "We are dealing with physics at the end of the day… The technology will continue to improve but sub-one PUE is a long way off," he comments.

So are there other potential areas where Equinix could look to improve its sustainability credentials? In the past, Equinix has looked at demand-side response schemes but the perceived risk proved to be a factor in the company's decision not to participate.

"The economic arguments haven't stacked up, yet," argues Poole. "The risk/reward profile hasn't worked for us."

Poole agrees with the Uptime Institute's observation that there is increasing innovation and interest around onsite power generation in the data centre sector, however.

"One area that we are looking at is the use of modularised gas turbine power generation. We have some systems deployed in the US and we are considering them for other locations in Europe," he comments. While such systems are unsuitable for its UK sites, onsite generation is an area that Equinix will continue to watch closely. "One day, it may make sense to build our own power station," he adds.


A hybrid future

So how will the data centre sector, in general, evolve in the future? Poole believes that Cloud adoption will accelerate; "there is no question about that", he says. "However, it will accelerate in a hybrid way. The biggest growth area for us is the enterprise space as enterprises adopt Cloud technology."

Research by Right Scale (2016) suggests that 71% of enterprises have a hybrid structure. Nevertheless, a significant percentage of computing capacity is still in a basement, Poole points out. These environments are now "coming out of the basement" and the challenge, going forward, will be for businesses to move into a "Cloud-first world", says Poole.

Innovation through interconnection

Equinix helps businesses leverage the digital edge through its Interconnection Oriented Architecture (IOA) strategy, a repeatable engagement model that helps companies do business at the digital edge. The company recently hosted its second 'Innovation through Interconnection' conference in London to demonstrate how both enterprises and service providers can leverage IOA to directly and securely connect people, locations, clouds and data. IOA integrates the physical and virtual worlds where they meet. This year the conference programme focused on the digital edge.

"Earlier this year Gartner identified that it is essential for businesses to bring connectivity closer to end users, at the 'digital edge', to increase performance. This means that interconnection is becoming key. Being closer to the edge is particularly important in view of the Internet of Things, which makes data analytics and real-time connectivity matter more than ever," comments Poole.

The term ‘digital edge’, he explains, has been coined to describe where the suppliers of services and the users of services come together to interconnect, to solve the ‘problem of physics’, ie how to reduce latency, as well as to solve the ‘problem of security’, by eliminating the internet as the access mechanism.






"Equinix is pretty much where the digital edge lives. We are seeing a tremendous amount of activity around this," Poole continues.

Richard Warner, Microsoft networking partner lead, adds that companies are increasingly moving their applications into the Cloud. "Previously, one of the restrictions has been how to connect to the Cloud. Organisations are looking for answers and Equinix, with partners such as Microsoft, deliver these connectivity solutions," he explains.

Presentations at the Innovation through Interconnection conference included a case study from Coca-Cola European Partners (CCEP), which serves 300 million consumers across 13 countries and distributes 2.5 billion cases of drinks annually. CCEP was formed from the merger of three Coca-Cola bottlers, one of which, Coca-Cola Enterprises (CCE), adopted a Cloud-focused, interconnection-first strategy with the purpose of driving efficiencies across the business. By leveraging an IOA strategy deployed on Platform Equinix, CCE re-architected for a digital edge and is now more connected, more secure and more responsive.

CCE made the decision to use Amazon Web Services (AWS) but there was a hitch. “Amazon won’t allow customer hardware into its data centres; it’s too much of a risk. So we started looking to see how close we could get to Amazon,” explains Robin Ford, senior manager, Cloud services, Coca-Cola European Partners.

The company needed an efficient connection to AWS. Security and privacy were important considerations, and cost was a factor. Conversations with Equinix began once it became evident the Equinix data centre footprint best matched that of AWS.

Equinix installed a CCE cabinet in LD5 London International Business Exchange (IBX) data centre. The cabinet connects to the Equinix Cloud Exchange, providing direct access to AWS, along with other cloud service providers that can be utilised in the future.

“It’s one thing to connect quicker,” says Ford. “But it’s also a cost saving. Rather than having to pay for lots of physical connections, we have a virtual circuit.

"It's quicker to order and set up a new virtual circuit, should we need it. We end up paying for what we use, rather than paying for a 10GB link that is rarely fully used.

"We also put in the Equinix Connect solution to get internet connectivity," Ford continues. "To be honest, there's no business value in looking after hardware and operating systems. So we got out of the data centre game and went to hosted and cloud solutions, but we still have to connect our networks and solutions together."

Currently, CCEP has single racks with physical cross connects in IBX data centres in Amsterdam and Paris. The goal is to repeat the full Equinix Data Hub set-up, already in place at LD5, and have both Amsterdam and Paris built out as a Cloud Exchange by the end of 2017.

Equinix continues to invest in new facilities and has recently opened new data centres in Amsterdam and Frankfurt but Brexit is one issue that is not proving a concern for the business.

“If anything, we have seen an acceleration in growth for the UK market since Brexit,” Poole reveals.

The trends that have driven growth in the sector will continue to be present, he asserted, and Equinix is “putting its money where its mouth is” by continuing to invest in UK data centre capacity at its site in Slough, as well as expanding at Park Royal and other locations.

Ultimately, Cisco predicts that global IP traffic is set to increase nearly threefold in the next five years.

This growth in IP traffic will accelerate the need for greater connectivity for organisations hoping to create business value as they undergo digital transformation.



Virtual reality: the next data centre revolution?

Could virtual reality provide the answer to avoiding performance problems in data centres and support the next revolution, at the Edge? Louise Frampton recently visited the UK headquarters of Future Facilities and 'strapped' into the 3D world of data centre simulation to explore the potential of the technology

The use of virtual reality technology could change the way data centre operators design and manage their facilities, as well as how they train staff to avoid human error. However, according to Future Facilities chief operating officer Jon Leppard, the technology could "come into its own" as we see the next stage of the data centre revolution unfold.

Future Facilities has developed an interactive virtual reality platform that allows users to observe the effects of change to the data centre environment, which has the potential to help improve performance. The company has been pioneering simulation tools, used by data centre professionals to improve thermal management and reduce energy costs, since 2004, and this latest development builds on this expertise.

Simulation using the company's 6SigmaDCX platform has already helped high-profile operators eliminate hotspots, improve efficiency and increase computing capacity at their data centres. For example, Dell identified tactical and containment changes at its 15,480 square-foot data centre in Texas to improve PUE from 1.86 to 1.77, reducing chiller power consumption by 12% and overall power consumption by 5%, with potential annual savings of $100K. Capacity per cabinet could also be increased from 2.7kW to 3.2kW, aiding expansion.

When CBRE's global finance customer wanted to improve energy efficiency, it used Future Facilities' Virtual Facility – identifying improvements to save the bank an estimated $10m-plus through combined efficiency and capacity gains in a single data centre. Cisco has also used Virtual Facility analysis to achieve a 30% reduction in power required for cooling, as well as cost savings of $200,000 per year through an increase in chilled water set point.
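To put figures such as Dell's in context, the arithmetic behind a PUE improvement is straightforward. The sketch below assumes a purely hypothetical 1MW IT load (the article does not state the site's actual load), so the absolute numbers are illustrative only:

```python
# Rough sketch of what a PUE improvement from 1.86 to 1.77 means in facility terms.
# Assumes a hypothetical, constant 1MW IT load; not based on the site's real figures.
it_load_kw = 1000.0
pue_before, pue_after = 1.86, 1.77

total_before = it_load_kw * pue_before  # total facility power before
total_after = it_load_kw * pue_after    # total facility power after
saved_kw = total_before - total_after
print(f"Facility power: {total_before:.0f}kW -> {total_after:.0f}kW")
print(f"Saving: {saved_kw:.0f}kW ({100 * saved_kw / total_before:.1f}% of the total)")
# Around 4.8% of total facility power - consistent with the ~5% overall reduction quoted.
```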

Proof-of-concept

Having identified the potential of virtual reality to take simulation to the next level, the latest proof-of-concept platform allows users to explore data centre design in a safe offline environment, troubleshoot existing sites and run 'what-if' scenarios to support changes in infrastructure.



The first time you use the virtual reality program, you are struck by the immersiveness of the experience – it feels very different from seeing an image on screen. You can 'walk' through aisles of three-dimensional racks and choose which direction to go in, while viewing the assets and crucial information such as air flows. It certainly feels like the dawn of a new era. After experiencing the program for himself, Scott Payton, technical director of Global Data Centre Engineering, described the addition of virtual reality to the 6SigmaDCX simulation platform as doing for "engineering simulation what the flight simulator did for the aviation industry".

Future Facilities product manager Mark Fenton explains that the next stage of development will be to make the platform more interactive, so that, when you are immersed in the virtual facility, you can 'touch' devices, look at what applications are running, decommission equipment, and select from a menu of what is going to be installed. Rather than passively walking around the virtual environment, you will be able to interact and make changes live in this environment.

The next stage will be augmented reality, where the computer-generated image is superimposed on a user’s view of the real world – in this case their data centre. Operators will be able to see live data with visible air flows, while they walk the floor, to help them understand why they have an issue. “This is the final frontier of where we want to go… It will be a new way of interacting with engineering,” comments Fenton.

Technology potential

So how will virtual reality change the way data centres are designed and managed in the future? The technology has a variety of potential uses, depending on the main goal of the business. Virtual reality makes it possible for designers to give clients a virtual tour of their proposals; colocation operators can show customers a new cage layout and how it will operate; while operational sites can be optimised to improve performance, reliability and costs. Owner operators experiencing hot spots can use the technology to understand why they are having cooling problems, for example, or simulate the deployment of a new piece of hardware to see the impact on their data centre, or anticipate possible outcomes when performing maintenance.

Overlaying simulation and DCIM data enables greater understanding of data centre performance and it can be used for site assessment, analysis and training to reduce human errors and failures. Failure scenarios can also be run to establish how a data centre will cope with a specific cooling or power problem.

Data centres may also want to improve their efficiency profile, or look at the potential of raising the temperature of the facility. Using the technology, operators can trial the scenario, in the virtual world, with the peace of mind that they can avoid actual risk.
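At its simplest, a 'what-if' failure check of this kind compares the remaining cooling capacity against the IT heat load once a unit is taken out of service. The sketch below is a deliberately crude stand-in for that idea – it is not Future Facilities' 6SigmaDCX, which models full airflow – and the unit capacities and load are invented:

```python
# Crude 'what-if' failure scenario: does gross cooling capacity still cover the IT
# heat load if one cooling unit is lost? (A CFD tool models airflow in detail;
# this only checks headline capacity.) All figures are invented for illustration.
cooling_units_kw = {"CRAC-1": 150, "CRAC-2": 100, "CRAC-3": 100}
it_heat_load_kw = 220

for failed in cooling_units_kw:
    remaining = sum(kw for name, kw in cooling_units_kw.items() if name != failed)
    status = "OK" if remaining >= it_heat_load_kw else "SHORTFALL"
    print(f"Lose {failed}: {remaining}kW available vs {it_heat_load_kw}kW load -> {status}")
```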

"Virtual reality is an exciting area for owner operators – rather than having to walk the physical site they can 'strap in' to virtual reality, overlay the simulation, integrate data from DCIM tools and bring everything together in one place," comments Leppard.

“We can take the data centre operator on a virtual tour: show them the racks, the performance, tell them from the point of view of the environment how it is going to operate, where the access is going to be and the cooling equipment. The colocation provider may have one room segregated into cages. If they sell some high-performance computing in one corner, they can see how it will impact on the other neighbouring cages. They can use it as a marketing/pre-sales tool, as well as for engineering…It takes them on a full sales journey,” Fenton explains.

"Huge hyperscalers, such as Google and Facebook, may do less day-to-day management, but they will undertake big projects – for example, they may decide to retire half of a room and bring in new hardware, so they will use the tools to design and lay it out, to see what the performance is going to be like. If they lose a server because something overheats, the fact that someone can't poke or like for a few seconds isn't the end of the world so they are more interested in efficiencies, while a bank will be looking for as close to 100% resilience as possible," he continues.

Changing market

The use of computational fluid dynamics (CFD) and engineering simulation, in general, has been steadily growing but the market is changing, according to Leppard: "Three or four years ago, around 75% of our business was in design, but today around half of the business is with owner operators using simulation in-house. Most new data centres have used CFD simulation. Although the percentage of live sites using it on a day-to-day basis is still relatively small, it is significantly growing.

"People are no longer using the technology as a band aid to solve a problem. Rather than IT stating: 'You have two weeks to install this', and operators having no idea about what the impact is going to be, following a configuration change, people are getting wiser and utilising it to predict what is going to happen, to avoid problems.

“We are seeing a trend towards operational planning – mission critical facilities in the banking, insurance and government sectors are using simulation on a more regular basis, instead of just troubleshooting or using it for energy efficiency trending.

"You wouldn't buy a suit without trying it on first, yet we, in this industry, seem to think it is fair game. However, there are tools that give you an opportunity to 'try it on' first, without actually flicking on the switch and waiting for something to happen."


The number of enterprise sites has also been reducing; businesses such as Coca-Cola and Deutsche Bank have been moving away from owning data centres and are moving into colocation space.

This, according to Leppard, is further driving the need for virtual reality. "Colocation centres base their business on reliability and cannot afford to get it wrong. They need simulation to ensure the decisions they make will not affect their core business. It is their reputation on the line and colocation providers want to distinguish themselves as the best," Leppard explains.

The technology will also be crucial to supporting the next wave of change in the sector, he claims: "With the future of data centres requiring closer proximity to people and devices, we are seeing large hyperscale data centres supporting thousands of discrete edge sites.

"The next stage is being driven by IoT – the vast amounts of data produced from our phones, cars, watches and other devices will need to be transported back to hyperscale facilities and this is where the birth of small edge sites will emerge. It won't be possible to rely on a large-scale hub in the US, as there will be too much of a delay. In the future, there will be a box on every street corner."

This will still be architected around large facilities but they will be underpinned by hundreds and thousands of little edge sites, he believes. The ability to ‘transport’ staff to these remote sites, via virtual reality, will be a major driver for the technology in the future, therefore.

“Ultimately, simulation isn’t voodoo,” comments Leppard. “You can see power; you can see space, but you can’t see cooling. We are trying to find a way to communicate how cooling works and why it may fail. They say a picture paints a thousand words, but virtual reality will give you 10,000 words – you just need to decide which ones you are going to read.

"Making the technology easy for the lay person to use is crucial – all the user wants to know is 'should I, or shouldn't I?', 'where should it go?', 'yes or no?' If we can achieve this, we have done our job."

Investment pledge to drive UK battery innovation

The government has announced a major investment initiative for the battery technology sector, which will boost development of leading-edge innovation

Business and energy secretary Greg Clark has announced the launch of the first phase of a £246m government investment into battery technology to ensure the UK "builds on its strengths and leads the world in the design, development and manufacture of electric batteries".

Known as the Faraday Challenge, the four-year investment round is a key part of the government’s Industrial Strategy. It will deliver a coordinated programme of competitions that will aim to boost both the research and development of expertise in battery technology.

An overarching Faraday Challenge Advisory Board will be established to ensure the coherence and impact of the challenge, and the competitions will be divided into three key streams.

To support world-class research and training in battery materials, technologies and manufacturing processes, the government has opened a £45m competition, led by the Engineering and Physical Sciences Research Council (EPSRC), to bring the best minds and facilities together to create a virtual Battery Institute.

The successful consortium of universities will be responsible for undertaking research looking to address the key industrial challenges.

The most promising research completed by the institute will be moved closer to the market through collaborative research and development competitions, led by Innovate UK. The initial competitions will build on the best of the world-leading science already happening in the UK and help make the technology more accessible for UK businesses.

To further develop the real-world use and application of battery technology, the government has opened a competition, led by the Advanced Propulsion Centre, to identify the best proposition for a new state-of-the-art open access National Battery Manufacturing Development facility.

The announcement follows a review, commissioned as part of the Industrial Strategy green paper, by Sir Mark Walport in which he identified areas where the UK had strengths in battery technology and could benefit from linkage through this challenge fund.

Richard Parry-Jones, newly appointed chair of the Faraday Challenge Advisory Board, said: “The power of the Faraday Challenge derives from the joining-up of all three stages of research from the brilliant research in the university base, through innovation in commercial applications to scaling up for production. It will focus our best minds on the critical industrial challenges that are needed to establish the UK as one of the world leaders in advanced battery technologies and associated manufacturing capability.”

Endeco Technologies CEO and co-founder Michael Phelan commented that the promise of £246m towards battery technology development is crucial to developing an inclusive energy economy.

“Allowing individuals and businesses to participate in the energy market, as well as large generators and suppliers, is critical to the future of our electricity network. Generation, storage and use of power at the right time is essential, and incentivising these actions is a positive step, given the continued electrification of our lives, whether it’s cars, heating, air conditioning or entertainment systems.

“Energy prices are expected to rise by up to 40% by 2020, largely to support the changing electricity mix and upgrading of our infrastructure, meaning sharing the benefits of greater participation is a necessity, not a luxury. Batteries sit at the core of our future network, creating flexibility in when electricity is generated and where it is used – when previously this has not been possible.”

Phelan added that in conjunction with a high-end energy platform such as Endeco’s, batteries can also enhance a business’s ability to take part in demand-side response schemes and take advantage of additional revenues, further bolstering their strategy to reduce net energy spend.

"With the right technology made accessible, we know that battery technology will sustain and support carbon reductions through renewable integration, advance network and business resilience, and create an inclusive and cohesive energy economy for all," he concluded.


Choose the right track and help eliminate railway 'sudden death'

Saft’s Holger Schuh discusses the importance of battery power for the railway industry and introduces a new technology with the potential for condition-based monitoring

Uptime of railway control and communication equipment is vital for safety. Level crossing signals, barrier controls and points need continuous power in the case of a mains outage. Level crossing safety is a top safety priority for rail operators. One example that highlights the vital role of signalling systems is a fatal train crash in the Chinese city of Wenzhou in July 2011, when two trains collided due to a faulty trackside signalling installation.

According to operator Network Rail, there are around 6,000 pedestrian and vehicle level crossings and 20,000 sets of points in the UK. Uninterruptible power is critical for public safety at such sites and is typically provided by rechargeable batteries in trackside cabinets.

The challenge to operators is how to optimise their trackside infrastructure. Choice of battery is an important aspect of this as it sets the maintenance requirements for the life of the installation.

Engineers typically have a choice between two types of battery. Lead-acid batteries are relatively inexpensive but require regular testing and can suffer a failure mode called 'sudden death'. This phenomenon describes the potential for lead-acid batteries to stop working overnight.

Nickel-cadmium (Ni-Cd) batteries such as Saft’s Tel.X or Uptimax batteries embed high technology designs that prevent sudden death and, in addition, they require little or no maintenance.

Fear factor

The elimination of sudden death is a major advantage of nickel technology in critical applications. Sudden death can happen at any time during the life of a lead-acid battery system and is caused by softening and eventual failure of the lead electrodes inside the battery. It does not affect nickel batteries as they rely on a rigid steel structure to guarantee mechanical strength in spite of the potential for trackside vibration or mechanical knocks.

Some operators manage the risk of sudden death by installing a fully redundant second back-up battery. Other operators carry out capacity testing of batteries every six months – a gruelling task that requires either a constant round of site visits for maintenance engineers or significant cabling for remote testing.

For those who prefer to minimise total cost of ownership (TCO), nickel-based batteries have a clear advantage. Nickel batteries become less expensive after the first seven to eight years, which is helpful considering the operating life of a typical trackside installation is in excess of 20 years. However, this includes only the initial price, maintenance cost and replacement of the battery.

As no operator has the same approach to battery testing, the price of capacity testing has not been included in our TCO calculation, so some operators may find a faster payback on nickel batteries.

Another area that may narrow the difference is where air conditioning and ventilation systems are needed inside cabinets. High temperatures cause premature ageing for all battery chemistries. However, nickel is better at handling hot conditions.

At a constant 20˚C, nickel batteries have a potential to last 20 to 30 years and lead-acid five to 10 years. However, as the thermometer nudges higher, the life expectancy of lead-acid drops away. At 30˚C, lead-acid will last five years, compared with 16 years for Ni-Cd and the gap grows wider as it gets hotter.

Nickel batteries age more slowly at higher temperatures and more predictably, with no risk of sudden death.
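The lead-acid figures quoted here sit broadly in line with the widely used rule of thumb that VRLA service life roughly halves for every 10°C above 20°C. The sketch below applies that rule purely as an illustration; it is an approximation, not Saft data:

```python
# Illustration of the common rule of thumb that VRLA (lead-acid) battery life roughly
# halves for every 10C above 20C. Nominal life at 20C is taken as 10 years here
# (the article quotes five to 10 years); this is an approximation, not Saft data.
def vrla_life_years(temp_c, life_at_20c=10.0):
    return life_at_20c * 0.5 ** ((temp_c - 20.0) / 10.0)

for t in (20, 25, 30, 35, 40):
    print(f"{t}C: ~{vrla_life_years(t):.1f} years")
# At 30C this gives ~5 years, matching the lead-acid figure quoted above, against
# the 16 years the article cites for Ni-Cd at the same temperature.
```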

Operating conditions inside the cabinets can be tough for battery systems. Temperatures vary between winter chill and summer heat. Some sites can experience anything from -20˚C to 40˚C or even higher.

Although such conditions may only last a few days or weeks, they can reduce the life or require additional testing, either of which impacts the lifetime cost. So by selecting nickel batteries, an operator gains more control over ageing and also eliminates air conditioning and ventilation inside cabinets.

Low temperature extremes can also impact battery performance. Nickel batteries have the edge over lead-acid in the cold. As the temperature drops, batteries’ internal resistance rises, reducing the power output. Engineers overcome this by oversizing the battery so that it will deliver the power needed even on the coldest night of the year.

However, because nickel batteries have a smaller derating factor at lower operating temperatures, they require less oversizing. This allows for a smaller cabinet and fewer batteries to maintain (and avoids the need for a heater) – advantages which help to keep costs under control.
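In sizing terms, the oversizing penalty is simply the capacity required at the design load divided by the low-temperature derating factor. The factors below are invented for the purposes of the example; real values come from the manufacturer's discharge tables:

```python
# Illustrative battery sizing for a cold trackside cabinet. The derating factors are
# invented for the example; actual values come from manufacturer discharge tables.
required_ah = 100.0  # capacity needed to support the design load
derating_at_minus_20c = {"lead-acid": 0.6, "nickel-cadmium": 0.8}  # hypothetical

for chemistry, factor in derating_at_minus_20c.items():
    installed = required_ah / factor  # capacity that must actually be installed
    print(f"{chemistry}: install {installed:.0f}Ah to deliver {required_ah:.0f}Ah at -20C")
# A smaller derating penalty means fewer cells, a smaller cabinet and no heater.
```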

Recognising the advantages of nickel-cadmium, railway infrastructure operators in the Czech Republic and neighbouring Slovakia have both switched back to nickel technology for their trackside installations after trials with lead-acid batteries.

Since 2000, operators SŽDC and ŽSR have delivered major programmes of modernisation, including the introduction of a new crossing design concept with a slimline trackside cabinet. However, the operators experienced maintenance issues with the original valve regulated lead-acid (VRLA) batteries. In 2016, they switched to Saft's nickel-based Tel.X batteries after a trial. The Tel.X battery is a reliable, maintenance-free 'drop-in' replacement for the original batteries and is expected to deliver more than 16 years' service compared with only five years for VRLA batteries.

Even though nickel batteries are reliable by design, Saft recognises the fast-growing trend towards digital railways. Therefore, the company is now developing a digital monitoring system for batteries used in trackside applications. The new system will monitor the state of health of batteries and send updates to operators in control centres.

While nickel batteries have exceptionally high reliability, the system will prove popular with operators that want to achieve enhanced online visibility of all their assets. It will open up the potential for condition-based maintenance of battery installations, reducing maintenance cost, increasing fleet availability and ensuring that maintenance is only carried out when it is needed. The new tool is planned for launch in 2018.

Operators SŽDC and ŽSR’s modernisation programme includes a slimline trackside cabinet

Banking on big savings

Ebm-papst collaborated with Vertiv and CBRE to review energy efficiency at three of UBS’s London data centres and delivered significant gains

The simplest way to reduce the energy consumption in buildings is to ensure that all heating, ventilation and air conditioning equipment is fitted with the highest efficiency EC fans. Those involved in the data centre industry are quickly realising the energy reduction potential in their buildings through upgrading HVAC equipment to innovative electronically commutated (EC) fans.

The motor and control technology in GreenTech EC fans from Ebm-papst has enabled UBS to benefit from proven efficient upgrades to its data centre cooling systems.

Ebm-papst undertook an initial site survey to review the types of units being used and the potential solutions that were needed, along with an estimation of the payback period for any new kit.

The units that were in place before the project were chilled water, with an optional switch to lower performance, and used AC fan technology. In order to improve efficiency, Ebm-papst recommended upgrading the equipment with EC fan technology.

Based on the survey results, a trial was then agreed on a single 10UC and 14UC computer room air conditioning (CRAC) unit to establish actual performance and energy savings. Data was logged before the upgrade and again once the trial units were converted from AC to EC. Post-upgrade trial data revealed that less power was absorbed by Ebm-papst's EC fan motors than by their AC predecessors.

Based on this information, UBS decided to proceed with the conversion of all units, installing 191 fans within 76 CRAC units. Three different unit models were installed: 39x14UC units; 21x10UC units and 16xCCD900CW.


Vertiv then worked with CBRE (which project managed the upgrade) without causing disruption to the live data centre environment.

The main element of the upgrade project was the replacement of all fans with Ebm-papst's EC technology direct drive centrifugal fans, including the installation of EC fans within a floor void that required modification.

Nearly five years since the project took place, UBS has seen the following key metrics:
• 191 fans installed
• 76 CRAC units
• Three different models installed
• Energy saving of 10,657MWh (four years and 10 months)
• Financial saving of £667,836 after payback of total install costs (payback period of 23 months)




On average, UBS has seen a 48% energy saving across all units and a payback period of less than two years. Other project benefits include a CO2 reduction of 5,229 tonnes. In addition to these savings, new control strategy software was put in place that controls the EC fans on supply air temperature. This saw a further reduction of 14% in energy usage. UBS's data centres are now also benefiting from reduced noise levels, increased cooling capacity and extended fan and unit life.
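Working backwards from those headline figures – and assuming roughly constant monthly savings, which the article does not state – the implied monthly saving and install cost can be estimated as follows:

```python
# Back-of-envelope check on the UBS figures, assuming roughly constant monthly
# savings over the period; the article gives only the headline numbers.
months_elapsed = 4 * 12 + 10   # four years and 10 months
net_saving_gbp = 667_836       # saving reported after install costs were recovered
payback_months = 23

# net = monthly * months_elapsed - install_cost, with install_cost = monthly * payback
monthly_saving = net_saving_gbp / (months_elapsed - payback_months)
install_cost = monthly_saving * payback_months
print(f"Implied monthly saving: ~£{monthly_saving:,.0f}")
print(f"Implied install cost:   ~£{install_cost:,.0f}")
```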

Project challenges

UBS operates a 130,000 sq ft data centre in west London which is fundamental to the operation of the firm's global banking systems. Within this site there were a number of down flow units (DFUs) operating around the clock, making them crucial to sustaining the required operating conditions for the computer equipment in the data centre.

The challenge was to improve the energy efficiency of the data centre, freeing up additional electrical capacity to use on IT resource. In addition, the task was to improve the airflow and improve the controllability of the cooling units in the data hall.

Project restrictions were extensive given the live data environment and the upgrade teams were only allowed access to three halls, with only one unit switched off at any one time.

However, the upgrade was delivered on time and to budget, without disruption. Work took place while the data centres were live; the project managers had to factor in working space and access around constraints from existing equipment and infrastructure.

Ebm-papst replaced the existing DFUs in the data centre with high efficiency direct drive EC fans in the CRAC units. UBS's objective for the project was to reduce the drawn-down power by up to 30%, resulting in a 180kW reduction in load that could be allocated to IT equipment. The solution resulted in a load reduction of 250kW. As a consequence, UBS was able to increase IT power consumption in addition to reducing CO2 emissions and energy costs. The energy savings from the EC fan replacement project were exactly as predicted and there was no need to perform any additional analysis due to monthly energy reports being dramatically lower.

The EC fans have continued to deliver energy savings, through increased reliability, resulting in a reduced maintenance burden for CBRE and UBS. HVAC systems can be responsible for more than half of the energy consumed by data centres. In cases where energy is limited, improving the energy efficiency of HVAC equipment will result in an improved allocation of energy resource to IT equipment.

While many new data centre facilities built in the UK already incorporate EC fans in their HVAC systems, most older buildings continue to use inefficient equipment. Rather than spending capital on buying brand new equipment, often the more cost-effective option is to upgrade the fans in existing equipment to new, high efficiency EC fans.

The UBS project is an excellent example of how upgrading from AC to EC technology can deliver energy savings and CO2 reduction.


Unlocking the potential of UPS battery power

Could tapping into UPS battery storage signal the future for demand response in the UK? Leo Craig, general manager of Riello UPS, considers the possibilities and benefits of adopting battery-centred demand-side response

The UK has more than 4GW of stored power in UPS units and this valuable, additional resource could and should be exploited to help avert a capacity crisis. With electricity demand set to double by 2050, this form of demand response will be crucial in helping to balance the grid. As a renewable energy source, UPS battery power has obvious environmental benefits and can help businesses to reduce their carbon footprint. It can also open up additional revenue streams for businesses via trading on the capacity market.

It is likely to be a significant number of years before this type of demand response mechanism is widely deployed, however. The technology available holds huge potential when it comes to securing our energy future but adopting battery-centred demand response represents a massive leap of faith for businesses in the mission critical sector. We can’t expect a sea-change to happen overnight but information and best practice sharing can help to gradually increase buy-in.

There is a knowledge gap that needs to be plugged when it comes to demand-side response and UPS battery solutions. This need for additional information exists across the commercial sector as a whole – it is not unique to the mission critical industry. With mission critical sites, however, being able to mitigate concerns around UPS resilience is paramount. As specialist providers to the critical power sector, we are engaging with clients and prospects to help educate around the benefits of UPS battery demand response, talk through the technology involved and address any perceived risks.

Understanding the journey

There is still much distance to be covered when it comes to the take-up of demand response solutions in general. In a highly risk-averse sector, many mission critical businesses have, understandably, been reluctant to use their existing back-up generators as a demand response mechanism, for instance. The harnessing of power from back-up generators is viewed as one of the more straightforward ways of providing demand response and yet it is not being widely implemented. So asking the mission critical sector to take things a step further by investing in new UPS technology to support demand response is bound to meet with resistance.

Using UPS battery storage for demand response purposes has a key advantage over the back-up generator option, of course, in terms of green credentials: the emissions produced by generators defeat one of the objects of demand-side response – carbon footprint reduction.

Businesses can only consider UPS energy storage as a demand response option if their UPS is powered by lithium-ion batteries in the first place and here, again, there is room for more information regarding the benefits of switching battery types.

It is clear that we will need to see some major step changes. Firstly, the mission critical industry needs to consider demand-side response as part of its corporate social responsibility, and secondly, to explore the technologies available to achieve this before finally moving towards adopting a clean demand-side response solution. This is where the UPS with lithium-ion batteries comes into play.

Li-Ion batteries as part of a UPS solution offer numerous advantages over their SLA (sealed lead acid) counterparts. For starters, Li-Ion batteries have a much higher power density than SLA batteries, which offers around a 50% saving in space and weight. This means that twice as much battery autonomy can be located within the same amount of space as a traditional SLA installation. Li-Ion batteries also have much faster charging times than SLA batteries.

Where an SLA battery takes six to eight hours to reach 80% charge, for instance, a Li-Ion battery takes 30 minutes. Also, Li-Ion can be discharged and recharged up to 10,000 times, whereas SLAs can only be charged/recharged 500 times.

The installation of Li-Ion batteries in UPS can also reduce air conditioning costs. SLA batteries need to be kept in a 20°C atmosphere, whereas Li-Ion batteries can operate in temperatures of up to 40°C – the same as the UPS itself. It is these unique benefits of Li-Ion that make the UPS system a realistic prospect for demand-side response applications.
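Put side by side, the figures quoted above translate into a stark difference in lifetime throughput and recharge speed. The sketch below simply restates the article's numbers (taking the midpoint of the six-to-eight-hour SLA recharge time); it is not a manufacturer's data sheet:

```python
# Side-by-side restatement of the figures quoted in the article (indicative values,
# not a specific manufacturer's data sheet; 7h is the midpoint of the 6-8h quoted).
batteries = {
    "SLA":    {"cycles": 500,    "recharge_to_80pct_h": 7.0, "relative_footprint": 1.0},
    "Li-Ion": {"cycles": 10_000, "recharge_to_80pct_h": 0.5, "relative_footprint": 0.5},
}
sla, li = batteries["SLA"], batteries["Li-Ion"]
print(f"Cycle life:       {li['cycles'] / sla['cycles']:.0f}x more charge/discharge cycles for Li-Ion")
print(f"Recharge to 80%:  {sla['recharge_to_80pct_h'] / li['recharge_to_80pct_h']:.0f}x faster for Li-Ion")
print(f"Space and weight: ~{(1 - li['relative_footprint']) * 100:.0f}% saving with Li-Ion")
```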

Switching to Li-Ion batteries does have cost implications but this cost barrier is easy to overcome when you take into account the multiple benefits of Li-Ion batteries. The initial outlay is offset by the open demand response revenue streams and savings on offer. For example, Li-Ion comes with monitoring features as standard and so there is no requirement to install separate costly battery monitoring systems.

Long-term goals

Utilising the untapped potential of UPS battery power in demand response across the UK is a long-term goal. It will require a radical shift in the mindset of mission critical businesses if they are to be comfortable in using their UPS as an energy accumulator for use in demand response. Explaining the benefits, both in terms of financial reward and corporate responsibility achievement, is essential to winning mission critical sites over. Alleviating fears around risks to operations, when using a UPS beyond its primary back-up function, plays an important role here too.

Combined efforts from UPS manufacturers, aggregators and consultants to build awareness of the business drivers behind demand-side response in a straightforward manner will help to boost buy-in. Demonstrating how the theory works in practice is an effective way of communicating benefits to business. Mission critical operators will be keen to see peer-led examples of UPS batteries being successfully used for demand-side response in a risk-free manner.

As increasing numbers of businesses come on board, we need to tell their stories. Industry seminars, workshops and conferences that explore demand-side response and provide an opportunity for best-practice sharing will help to create impetus for change too. For a major sea-change to take place, we also need to see increased incentivisation from the policy-makers. For some time now we have heard positive noises from government around energy storage being a key part of the UK’s industrial strategy. Recommendations from the National Infrastructure Commission to support demand-side response must be realised, namely that: ‘The UK should make full use of demand flexibility by improving regulation, informing the public of benefits it can provide and piloting business models.’

Demand-side response is an integral part of the modern, flexible energy system evolving in the UK today. It offers a multitude of financial benefits to business by reducing energy bills, and providing revenue streams. From a long-term point of view, demand-side response will help to reduce carbon emissions, supporting responsible business practice and protecting the environment. It will also enhance the security of our electricity supply – reducing the potential for disruptive power outages and price hikes that we all want to avoid.

All that said, much more work needs to be done when it comes to reassuring mission critical businesses that the use of emergency back-up systems in a demand response capacity can be achieved in a risk-free manner.


Skill shortages: a perfect storm of complex issues

Research has highlighted the need for data centres to plug the skills gap in power management and other key technical areas. CNet Training’s Dr Terri Simpkin looks at the underlying issues and warns that ‘skills wastage’ is a major problem

There have been many recent data centre-related incidents that have raised public awareness of the centrality of mission critical infrastructure. While most people outside of the sector have little understanding of what a data centre actually is or how it underpins much of their day-to-day activity, when something goes horribly wrong the focus is squarely on the importance of 'keeping the lights on'.

However, it is becoming increasingly obvious that the role of humans in the operation of the data centre is still an operational imperative regardless of the advances in

automation of management tasks across the sector. So too, is the creeping fear that widely reported skills shortages are adding an extra element of complexity to the mismatch between rampant technical advances and sector growth and development of adequate organisational capability to keep up.

It is no secret that the data centre sector has been lamenting the lack of readily skilled and motivated staff; and for good reason. It is well recorded that the engineering sectors including IT, infrastructure and power are struggling to establish sufficient numbers of qualified engineers due to a significant shortfall of skills in specialist areas.

Of course, this is not just the case in the data centre sector, but across all industrial sectors. From construction to software, from mechanical to artificial intelligence, organisations are rallying a call for more exposure in schools, gender diversity and better university education to address graduate skills shortfalls. All industries are in the market actively shaking the ‘magical candidate tree’ to attract the brightest and best to their sectors.

Sadly, the data centre sector is well behind the curve and is coming to the party about a decade too late. And it is paying the price. A recent survey commissioned by Eaton suggests that a skills shortage in power management "is causing a lack of confidence in data centre resilience, as well as the ability to respond effectively to power-related incidents". The report suggests the skills of those working inside data centres are becoming outdated as technology develops to better manage power in particular. But this is indicative of a broader suite of issues that are contributing to a perfect storm of technical advances, outmoded traditional education models, shifts in business approaches and demographic pressures. In short, no single response is going to adequately address this issue.

Skills shortages: a complicated construct

Reports such as the Eaton survey illustrate clearly that a skills shortage exists. There are countless reports suggesting the same. Research by the Institution of Engineering and Technology (IET) also suggests that the education system will struggle to keep up with the demand for skilled employees. This is exacerbated by a reported increase in recruitment in growth sectors such as aerospace, communications, defence and transport. Of course, the data centre sector should position itself in this list too as growth continues globally.

However, it is not just a numbers game. While it is true that there are simply not enough people to fill the gaps available, what is more worrying is the issue of skills wastage; and it is particularly so in the data centre sector.

While people are undertaking traditional university education to become qualified and able to fill graduate positions, the IET’s report identifies a well-repeated lament that graduate capabilities are not matched to industry need. The report suggests that 62% of employers indicate that graduates expecting to take up IT, engineering or other technical roles do not meet reasonable employer expectations. School leavers and apprentices (53%) and postgraduates (45%) are also missing the mark.

What does this mean in reality? Time, effort, energy and money are being spent on individuals only to have their skills deemed inappropriate or inadequate for the workplace. Motivated, capable and interested people are putting effort into courses only to fall at the most important hurdle – employment.

What the sector needs then, is not more graduates but more appropriately designed, delivered and dynamic forms of education that bridge the divide between education and industry. Of course, this must continue on into the workplace with appropriately responsive professional development agendas. No one-size-fits-all university or vocational degree is going to replicate a well-crafted, on-the-job development programme.

Non-traditional forms of training and education, such as degree apprenticeships, largely remain a mystery, despite offering a perfect opportunity for the data centre sector to get in on the ground floor of creating higher education courses that actually meet the sector’s needs. The University Technical College (UTC) movement is a growing and highly dynamic mechanism for getting school leavers ready for the rapidly advancing technical demands of a career in data centres by working on real projects while finalising their secondary schooling.

This is no simple classroom-based project approach. It is an immersive, commercially oriented approach to getting talented secondary students aware of what’s needed for the world of work in a demanding technical occupation. And yet, few employers are aware of how to get involved and fewer make an investment to secure a future pipeline of work ready employees through this vehicle.

Gender agenda

The gender agenda is, of course, high profile but still, only around 9% of science, technology, engineering and maths occupations are filled by women. Again, it is not about training or education. A smaller pool of women enters into technical education and even fewer end up in the occupation for which they trained. About half of graduates end up in the science, technology, engineering and maths (Stem) occupations for which they trained, and this attrition continues throughout a career lifecycle.

Getting women is one challenge; keeping them is another. Diversity and inclusion initiatives are largely failing industry and a disruptive shift in embedded and underpinning culture is needed but not necessarily palatable. The nerdy, blokey image (or less attractive, ‘pale, stale and male’) is not appropriate and it should be challenged by a raft of policy, cultural and managerial approaches that better accommodate men, women and other non-traditional workers including those from low socio-economic backgrounds.

Overall, while the conversation is about skills shortages in a multitude of different occupations, from power engineering to cabling to software innovation, the underpinning issues have little to do with the number of people holding a certain suite of skills. So too, we need to consider that occupational decision making starts before children even reach school, and that they begin narrowing their choices from early primary school. Exposure to the sector and its opportunities has to begin around the age of seven and continue through school.

This makes broadly based initiatives, pitched at making sure that paths into the data centre sector are recognised, all the more urgent.

However, there is a long queue of other sectors already in schools and generating a good deal of career interest. Simply put, the data centre sector is doing too little, too late in a very crowded market.

The whole picture needs to be examined and underpinning issues addressed at the core if the data centre sector, as well as peripheral industries such as communications, facilities management, IT and engineering, are to diminish the risk to operations and limit critical infrastructure failures.


Dr Terri Simpkin

MODULAR SOLUTIONS

Empowering data centres: delivering flexibility and availability

MPower’s managing director Michael Brooks argues that flexible, modular systems will increasingly replace traditional standalone and parallel systems with the drive for high availability, fast repair and reduced system footprint

Data centres must eliminate risks that may cause the downtime of business-critical applications. Systems need to offer resilience to faults or external events, including those caused as a result of human error. To achieve this, designers must look to removing any single points of failure.

Even with routine maintenance there are risks, and this needs to be balanced against the risks of postponing maintenance, which could result in similar issues or worse. Untold financial and reputational damage can result from unplanned downtime, and therefore availability continues to be a major concern for data centre managers and those working in other critical environments.

This is where CumulusPower comes in, as it offers availability of 99.9999999%. MPower UPS has joined forces with Centiel to market CumulusPower for the first time in the UK. The three-phase UPS system offers continuous power availability, fault tolerance and a distributed active redundant architecture (Dara) which removes single points of failure.
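To put an availability figure of 99.9999999% into perspective, it can be converted into expected downtime per year. The short Python sketch below is illustrative arithmetic only, not a measured or vendor-verified figure for any particular installation:

# Convert a quoted availability percentage into expected downtime per year.
# Illustrative arithmetic only; real availability depends on the whole power chain.
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 seconds in a non-leap year

def annual_downtime_seconds(availability: float) -> float:
    """Expected downtime per year for a given availability expressed as 0-1."""
    return (1.0 - availability) * SECONDS_PER_YEAR

print(annual_downtime_seconds(0.99999))        # five nines: ~315 s (just over 5 minutes) a year
print(annual_downtime_seconds(0.999999999))    # nine nines: ~0.03 s a year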

As well as availability, data centres also need to have the flexibility to accommodate future changes in load demand or configuration. Similarly, the direct cost of energy to run equipment and provide cooling, as well as company reputation regarding environmental impact and carbon footprint, also needs to be considered, as does the capacity, sustainability and ongoing reliability of the UK power grid.

As a modular solution, CumulusPower offers the flexibility to pay as you grow, installing additional modules as the load increases and, at 96.7% efficiency even at low loads, it incurs lower energy bills and a reduced carbon footprint.

State of the market

The UPS market remains in a state of flux. Established manufacturers are merging or are being bought up by venture capital organisations, thereby reducing competition. There is a constant influx of low-cost products, particularly from the Far East, being rebranded by established companies – especially for the sub-10kVA market. This means the underlying UPS is very similar across many brand names.

It is also apparent that an increasing number of companies are advertising as UPS suppliers and maintainers but are actually only resellers with no in-house service capability, potentially leaving customers vulnerable.

From a product perspective, static UPS systems have now almost entirely migrated over to transformerless designs to reduce cost and weight and improve efficiency.

However, there is a noticeable lag in changing the associated electrical infrastructure design to fully account for the difference in technology. This has raised issues not only when customers have replaced existing systems but also in new builds that are still being based around traditional designs.

A further challenge for customers is the ongoing shift to closed protocol systems, which lock customers into maintenance contracts that can be costly and do not always offer the best customer service.

The design concept behind CumulusPower was to provide a single solution to all the main requirements of a UPS system in a modern, mission-critical environment. High availability is critical. This is not just the reliability of the equipment itself, which is obviously of great importance, but also its resilience to faults or events, both internal to the equipment and external. For example, a UPS system can be extremely reliable, but when a fault eventually does occur the system fails completely and loses load power, or transfers to bypass, leaving the critical load vulnerable on raw mains.

By utilising a true N+1 configuration, a failure in one module simply results in that module being isolated, leaving the remaining modules supporting the load. This is where Dara provides a vast improvement over previous system designs. Each module contains all the power elements of a UPS – rectifier, inverter, static switch, display and very importantly all the control and monitoring circuitry. This places it above other current designs that have a separate, single static switch assembly, and separate intelligence modules.

The single, separate static switch module, as used in some of the most common modular systems, is of most concern, as all load power must pass through it whether the system is on inverter or on static bypass – it becomes a single point of failure. The Dara technology therefore ensures that there is no single active component acting as a point of failure.

Another issue with many existing modular designs is that the synchronisation, current sharing and control communication between the different power modules, intelligence modules and static switch modules are at risk of disruption by a failure in any one of many components within the communication loop. In comparison, CumulusPower has multiple redundant communication paths between the modules. This ensures that a fault within one path does not disrupt system operation and simply generates a warning.

Looking further at availability, because any module can easily be removed from the UPS frame for maintenance while leaving the remainder to support the load, there is no requirement to transfer the system to external bypass, ie raw mains, for routine maintenance. This not only eliminates the risk to the critical load of being on mains, but also eliminates the risk of human error while carrying out the switching procedure between UPS and external bypass.

Intelligent module technology

CumulusPower offers high efficiency at low power levels when in N+1 mode. It is common practice for UPS manufacturers to quote equipment efficiencies based on high loads, as this is typically the point where the highest efficiencies are achieved. However, this does not tell a prospective customer what the efficiencies are at lower loads. This is not ideal because no system should be installed expecting to run close to 100% from the start, as this leaves no room for expansion or load variation.

A standard N+1 system – designed to provide redundancy and resilience in the event of a UPS failure – by its very nature, has additional capacity and therefore does not run at the high load levels required to achieve the stated efficiencies. For example, with a traditional system consisting of a parallel pair of UPS configured as 1+1 redundant, in normal operation, the maximum load on each UPS can only be 50%. Otherwise when one UPS goes off line for whatever reason, the remaining UPS would have to support over 100% load.
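The arithmetic behind this is straightforward. The short Python sketch below uses illustrative module counts (assumptions, not figures from Centiel or MPower) to show why redundant configurations spend their working lives at partial load, and therefore why the partial-load efficiency figures quoted in the next section matter more than headline full-load numbers:

# Per-module load at full design load in an N+X redundant UPS, assuming
# the load is shared equally across all installed modules.
def per_module_load_fraction(n_required: int, redundant: int = 1) -> float:
    return n_required / (n_required + redundant)

print(per_module_load_fraction(1))   # 1+1 parallel pair: each UPS at 50% of rating
print(per_module_load_fraction(2))   # 2+1 modular system: each module at ~67% of rating
print(per_module_load_fraction(4))   # 4+1 modular system: each module at 80% of rating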



Intelligent Power Modules

Modules that form the heart of the CumulusPower UPS were specifically designed to reach high operating efficiency at very low loads, keeping energy losses to an absolute minimum while still providing N+1 redundancy (25% load: 96.4% efficiency; 50% load: 96.8% efficiency; 75% load: 97.7% efficiency). Fault-tolerant parallel Dara is a key feature of the solution and refers to the system’s ability to continue supporting the load in the event of a fault, providing the highest level of availability.

This is achieved by utilising a majority load transfer decision in the event of a fault, as well as management of load sharing to prevent overload or backfeed between modules. Vital to this are the multiple redundant communication paths. The communication lines between modules are the heart of every parallel UPS system, whether a traditional system comprising separate UPS units or a modular system.

Traditionally, parallel UPS communication is a loop or ‘daisy chain’ system (eg for three units, UPS 1 connected to UPS 2, UPS 2 connected to UPS 3, UPS 3 connected back to UPS 1). This ensures against any single disconnection resulting in a loss of control. However, it still relies on a single communication bus, vulnerable to component failure at the interface to each UPS.

The Intelligent Power Modules not only operate daisy-chained control communication but have multiple independent communication paths between modules. Damage, disconnection, shorting or component failure within one pathway simply generates an alarm while the remaining pathways retain control of the modules. The 10kVA and 20kVA Intelligent Power Modules use dual independent redundant communication controls, while the 50kVA modules utilise three independent communication lines for even higher resilience.

Ease of maintenance

Unlike many existing modular UPS systems, CumulusPower incorporates separate bypass, output and battery isolators on the front of the frame for each Intelligent Power Module. This not only allows a module to be easily isolated and safely removed for maintenance or repair, but crucially also allows the module to be fully tested while remaining isolated from the critical load.

This is an improvement on existing systems where a power module can only power up once fully connected to the critical load and paralleled with other modules.

Many existing modular systems are considered to be sealed units. The traditional work of replacing DC and AC capacitors and cooling fans when they reach end of life is not possible, due to the very compact construction and the fact that these components are generally soldered directly to the control circuit boards to save space.

This drastically limits the operational life of a power module: it must eventually be replaced in its entirety, or the customer runs the risk of a catastrophic failure caused by a capacitor rupturing inside the module.

This is something often not considered at the time of purchase and as the power modules are generally the most expensive assembly in the UPS, it can come as an uncomfortable surprise.

In contrast, the Centiel Intelligent Power Modules have been designed to account for this and to allow the module life to be extended.

The 10kVA and 20kVA modules incorporate easily accessible and field replaceable AC capacitors and fans. The DC capacitors used within the modules are exceptionally high specification, with a minimum of 10 years operational life. DC and AC capacitors and fans within the larger 50kVA Intelligent Power Modules are all field replaceable.

The future

In the future, flexible, modular systems will increasingly replace traditional standalone and parallel systems with the drive for high availability, fast repair and commonality of parts, and the reduced system footprint. We will also see more attempts to move away from the traditional lead acid battery as the primary energy store for the UPS.

We have seen flywheels, compressed air, fuel cells, super capacitors, and more recently lithium ion batteries. However, the traditional lead acid battery has steadfastly remained the simple, cost-effective solution for the vast majority of installations.

I believe the increasing use of Li-ion technology in the automotive industry will drive down costs as volumes rise, and then we will see a breakthrough into the mainstream UPS market.


MODULAR SOLUTIONS

Going underground: taking modularity to new depths

Situated next to a deep, cold fjord, with access to an ample supply of hydroelectric energy, the Lefdal Mine Datacentre aims to be the greenest in Europe and was developed together with technology partners Rittal and IBM

The Lefdal Mine Datacentre (LMD) in Norway has an ambitious vision; it aims to be the most cost-effective, secure, flexible, and green data centre in Europe. LMD opened on 10 May this year, and is powered exclusively by low-cost, renewable energy.

Rittal is a strategic and technology partner, and has provided the data centre with its top-of-the-range, preconfigured, modular and scalable infrastructure. Located on the west coast of Norway, the giant data centre operates exclusively on renewable energy.

Flexible, scalable solutions

LMD is remarkably flexible in terms of available space and different technical solutions. The mountain halls have 16m-high ceilings and there is 120,000m² of net whitespace and 200-plus MW of IT capacity, delivered in container solutions or in traditional white space. The data centre can therefore offer customers a ‘pay as you grow’ model, and removes any risk of paying for unnecessary capacity.

Jørn Skaane, CEO at Lefdal Mine Datacentre, says: “We can accommodate any current white space requirement. Plus, our facility structure includes containers of different shapes and sizes along with customised power density, temperature and humidity control, and operational equipment.”

In what has been dubbed ‘the fourth industrial revolution’, companies increasingly need access to flexible IT resources. To this end, Rittal’s standardised data centre designs and containers enable the implementation of flexible, scalable IT infrastructures within just six weeks.

“The Lefdal Mine Datacentre project demonstrates how easy it can be to establish a secure, efficient and cost-effective data centre in a very short time,” explains Dr Karl-Ulrich Köhler, CEO of Rittal International.

“Its high degree of standardisation combined with the location advantages of the western coast of Norway result in an excellent TCO analysis. Significant cost savings of up to 40% can be achieved compared to siting a Cloud data centre, for example, in Germany,” he adds.

Cooling

The cooling solution is particularly energy efficient, and will lead to a PUE ranging from 1.08 to 1.15 – depending on UPS configuration and scale of capacity. Less than 3% of the power spent on IT is used for cooling with a 5kW/rack configuration. The proximity to the fjord ensures access to unlimited 8°C seawater all year round, which cools down the fresh water circuit from 30°C to 18°C.

IBM/CH2M Hill concluded that the Lefdal Mine Datacentre cooling solution will enable the facility to run with an industry-leading PUE under 1.1 once the design is fully developed. The cooling solution is claimed to offer a 20-30% improvement over current leading edge designs operational or under construction in Europe.
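As a rough sanity check on these figures, the sketch below combines the sub-3% cooling fraction quoted above with an assumed IT load and an assumed allowance for UPS and distribution losses (the latter two are illustrative assumptions, not LMD data):

# Back-of-the-envelope PUE from overhead fractions.
it_power_kw = 1000.0            # assumed IT load for the example
cooling_fraction = 0.03         # cooling as a fraction of IT power (article figure, 5kW/rack case)
other_overheads = 0.06          # assumed UPS losses, distribution, lighting, etc.

pue = (it_power_kw * (1 + cooling_fraction + other_overheads)) / it_power_kw
print(round(pue, 2))            # 1.09, within the quoted 1.08-1.15 range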

LMD will also be one of Europe’s most secure data centres, meeting all Tier III requirements. Security is tight within the mountain halls, with built-in electromagnetic pulse (EMP) protection and limited access through just two points of entry. The design and documentation of infrastructure installations are highly confidential and there are specially trained security staff onsite 24/7.

iNNOVO Cloud, a German Cloud provider, will be among the first clients to move in with high density IT containers.

The first installation will be up and running in September, serving both Norwegian and international clients with its Cloud portfolio, including HPC as a Service. IBM will also be providing customers with its Resiliency Services.


TESTING & INSPECTION

Making it to the A list of testing

Advanced Load Integrated Systems Testing (A:LIST) could help tackle the critical data centre testing gap, says Sudlows

Thorough testing of data centre facilities is vital to ensuring a reliable, resilient and efficient facility that can deliver at all times and support the day-to-day operations of a modern organisation.

The new A:LIST (Advanced Load Integrated Systems Testing) load bank, designed and developed by critical infrastructure specialist Sudlows, is a bespoke solution offering highly detailed and accurate analysis as part of this testing, and recently won the award for Data Centre Facilities Management Product of the Year at the 2017 DCS Awards.

Tackling the testing gap

The A:LIST is a new addition to Sudlows’ data centre testing toolkit and forms a key part of the journey from preliminary feasibility assessments through to final testing and commissioning. Crucially, the A:LIST is designed to fill the critical gap in the data centre final stage Integrated System Test (IST).

Sudlows associate director Zac Potts explains: “Traditional data centre testing has been undertaken with simple resistive load banks which are limited in accurately representing the modern IT loads that will be installed. As the cooling systems are designed more heavily on the characteristics of the IT, such as flow rates and temperatures, there has been a growing discrepancy between the design conditions and the testing conditions.”

This discrepancy has resulted in tests that should fail being passed, and tests that should pass being failed.

The Sudlows A:LIST addresses this inconsistency between the testing load banks and the IT infrastructure, resulting in an overall testing process which more accurately represents the design and operating conditions and therefore provides more overall confidence in the testing which has been carried out.

Potts adds: “The A:LIST is capable of testing up to 18kW of power density across each module with unique flexibility to rapidly simulate part and full IT load. What distinguishes the A:LIST from traditional heat banks is its ability to provide multiple metrics and realistic simulation of dual fed IT. This comprehensive power testing capacity can be individually scaled up and down and can be delivered within an IST to Uptime Institute Tier IV demonstration standards.”

Accurate representation

The A:LIST has been developed with the specific aim of delivering an accurate representation of the IT loads which will be installed in a data centre facility. The system forms a vital part of Sudlows’ professional services which guide customers through every critical testing stage. The A:LIST system is able to accurately match the flow rate, temperature and power consumption of the proposed IT systems and allows this to be remotely controlled from a single point via an integrated control network.

This remote control allows the testing process to be planned and programmed for modelling dynamic operation, including changes in fan speeds and load steps which might be seen where cloud-based systems experience peaks in demand.

Preprogrammed benefits

This programmatic approach to testing also allows the configuration required for the various test stages within the Site Acceptance Testing (SAT) and ISTs to be preprogrammed and loaded as required; not only saving time but improving accuracy and safety by minimising the requirement for switching and modifying temporary cabling on site.

One of the most advanced features of the A:LIST is the dual power architecture that allows each load bank to be provided with two supplies in line with an N+N architecture arrangement, and allows the system to be set to operate in either a manual selection of A or B power only or an automatic failover configuration. Multiple units can be configured to operate in a 50% A and 50% B configuration to demonstrate a close representation of the actual IT load and temperature which will be installed.

Innovation recognised

In winning the DCS Award for Data Centre Facilities Management Product of the Year, the A:LIST was recognised as the only product that could provide such a high level of intelligence and accuracy. The development and use of the advanced features within the product is now leading the industry towards testing methodologies with an acute consideration of the IT proposed for a data centre facility. In addition, the A:LIST is designed exclusively for use within data centres, with flexibility to be deployed in all feasible configurations, and encourages best practices in design and testing.

Potts adds: “Accurate data centre testing is a critical part of any major project and is commonly undervalued and under-specified. We believe that this innovation will drive the industry towards a more progressive attitude to testing.

“Ultimately, full load IST testing is something which most owner operators will only get one opportunity to do and this product not only allows the client and their technical teams to participate in a thorough, in-depth and accurate testing of the facility but positively encourages it.”


COOLING & AIR MOVEMENT

Is liquid cooling the new norm?

Asetek’s Larry Vertal says that air cooling alone is no longer sufficient for today’s data centres; liquid cooling is becoming increasingly necessary to cope with the changing demands being placed on facilities

Increasing wattage of central processing units (CPUs) and graphics processing units (GPUs) has been the trend for many years in all types of data centres. These step-wise increases have largely been a background issue, addressed by facilities and data centre management by limiting rack densities, patchworks of hot aisle/cold aisle add-ons, and by making sure there was sufficient heating, ventilation and air conditioning and computer room air conditioning (CRAC) capacity to handle server nodes blasting more heat out into the data centre. Bigger air heat sinks and higher air flows have historically been adequate to manage these incremental increases in heat in enterprise data centres, and sometimes for lower density high-performance computing (HPC) clusters.

One of the takeaways from the 2017 International Supercomputing Conference in Frankfurt is that for HPC clusters, the wattages of CPUs and GPUs are no longer addressable with air cooling alone. For HPC, in the near term, and for enterprise computing longer term, an inflection point has been reached in the relationship between server density, the wattage of key silicon components and heat rejection.

This, of course, is becoming critical in HPC as sustained computing throughput is paramount to the type of applications implemented in high density racks and clusters. Unlike most enterprise computing today, HPC is characterised by clusters and their nodes running at 100% utilisation for sustained periods.

Furthermore, as such applications are always compute limited, cutting-edge HPC requires the highest performance versions of the latest CPUs and GPUs. This means the highest frequency offerings of Intel’s Knights Landing, Nvidia’s P100 and Intel’s Skylake (Xeon) processors are becoming typical.

The wattage of Nvidia’s Tesla P100 GPU is listed at 300W, and both Intel’s ‘Knights Landing’ MIC-styled GPU and ‘Skylake’ Xeon CPU have been publicly announced at 200-plus watts. Everyone in the industry, even those without NDA views of the CPU roadmaps, anticipates much higher wattages coming sooner rather than later. These chip wattages translate into substantially higher wattages at node level, with even common cluster racks moving upward beyond 30kW to 50-70kW in HPC environments.
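To see how chip wattages of this order reach rack totals in the 30-70kW band, consider the rough estimate below. The node configuration, node count per rack and non-silicon overhead are assumptions chosen purely for illustration; only the per-device wattages come from the figures above:

# Rough node- and rack-level power estimate from published chip wattages.
cpu_w, gpu_w = 200, 300                 # 'Skylake'-class CPU, Tesla P100-class GPU
cpus_per_node, gpus_per_node = 2, 4     # assumed accelerated HPC node
overhead_w = 400                        # assumed memory, NICs, drives, fans, PSU losses

node_w = cpus_per_node * cpu_w + gpus_per_node * gpu_w + overhead_w
rack_kw = node_w * 24 / 1000            # assumed 24 such nodes per rack
print(node_w, rack_kw)                  # 2000 W per node, ~48 kW per rack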

Because of the need for sustained 100% compute throughput in HPC, cooling requirements cannot be satisfied by ‘good-enough’ levels of heat removal, which may be sufficient in most enterprise data centres today. To support the highest sustained throughput, cooling targets for HPC clusters require not just assured reliability like the enterprise data centre but also cooling that promises no down-clocking or throttling of the CPUs and GPUs. In some cases, the cooling must even enable overclocking of entire racks or clusters. Racks with air heat sinks cannot handle the heat to maintain this maximum sustained CPU throughput and CPU throttling occurs due to inefficient air cooling. In addition, air cooling does not allow reliable sustained overclocking of CPUs.

This wattage inflection point means that to cool high wattage nodes, there is little choice other than node-level liquid cooling to maintain reasonable rack densities. Assuming that massive air heat sinks with substantially higher air flows could cool some of the silicon on the horizon, the result would be extremely low compute density. This would require 2U-4U nodes and partially populated racks which take up expensive floor space, resulting in costly data centre build outs and expansions becoming the norm.

Moving to liquid cooling can seem daunting as many of today’s offerings require an all or nothing approach. On the far extreme is the immersion tub approach, which not only affects the facility and its layout but tends to require specialised server designs specifically for immersion use. So far, immersion has had limited adoption in the HPC segment overall.

A less extreme method of liquid cooling has liquid piped into servers for heat transfer from cold plates on the CPUs and GPUs. Because many of these systems use centralised pumping, they require the added expenses associated with high pressure systems, including expensive connectors and either copper or high pressure tubing systems. Furthermore, there can be a loss in the total number of square metres of floor space for computation in this approach due to the pumping infrastructure. Often a ‘rack’ needs to be added, which contains no servers but rather is used for the pumping infrastructure of centralised pumping. This includes not only the primary high pressure pumping system but also a redundant secondary pumping system. Since a single pump failure in this architecture affects one or more racks of computing nodes, this becomes a requirement.

Implementation of liquid cooling at its best requires an architecture that is flexible to a variety of heat rejection scenarios, is not cost prohibitive, can be adapted quickly to the latest server designs and allows for a smooth transition that can be incremental in moving the installation from air cooling to liquid cooling. The success of distributed liquid cooling and its accelerating adoption appears to be rooted largely in addressing all of these items. An example of a low pressure distributed pumping architecture is Asetek RackCDU, which is currently installed at the most powerful HPC system in Japan today. In addition, OEMs and a growing number of Top 500 HPC sites have addressed both the near term and anticipated cooling needs using hot water liquid cooling.

The direct-to-chip distributed cooling architecture addresses the full range of heat rejection scenarios. It is based on low pressure, redundant pumps and closed loop liquid cooling within each server node. This approach allows for a high level of flexibility. The distributed pumping approach is based on placing coolers (integrated pumps/cold plates) within the server and blade nodes themselves. These coolers replace the CPU/GPU heat sinks in the server nodes to remove heat with hot water rather than much less efficient air.

Unlike centralised pumping systems, this approach isolates the pumping function within each server node, allowing for very low pressures to be used (4psi typical). This mitigates failure risk and reduces the complexity, expense and high pressures required in centralised pumping systems. In most cases, there are multiple CPUs or GPUs in a given node, enabling redundancy at the individual server level, as a single pump is sufficient to do the cooling.

The lowest data centre impact is with server-level liquid enhanced air cooling (LEAC) solutions. Asetek ServerLSL replaces less efficient air coolers in the servers with redundant coolers (cold plate/pumps) and exhausts 100% of this hot air into the data centre. It can be viewed as a transitional stage in the introduction of liquid cooling or as a tool for HPC sites to instantly incorporate the highest performance computing into the data centre. At a site level, all the heat is handled by existing CRACs and chillers with no changes to the infrastructure. While LEAC solutions isolate the liquid cooling system within each server, the wattage trend is pushing the HPC industry toward all liquid cooled nodes and racks. Asetek’s RackCDU system provides a solution which is rack-level focused, enabling a much greater impact on cooling costs for the data centre. It provides the answer both at the node level and for the facility overall.

RackCDU D2C (direct-to-chip) utilises redundant pumps/cold plates atop server CPUs and GPUs, cooling those components and optionally memory and other high heat components. RackCDU D2C captures between 60% and 80% of server heat into liquid, reducing data centre cooling costs by over 50% and allowing 2.5x-5x increases in data centre server density. Heat management and removal is done by using heat exchangers to transfer heat, not liquid, to data centre facilities water.
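What a 60-80% capture fraction means for the remaining air-side load can be seen from a small sketch. The 40kW rack is an assumed example within the range discussed earlier; the capture fractions are those quoted above:

# Split of rack heat between facility water and room air for direct-to-chip cooling.
rack_power_kw = 40.0                        # assumed HPC rack
for capture in (0.60, 0.80):                # capture fractions quoted in the article
    to_liquid = rack_power_kw * capture
    to_air = rack_power_kw - to_liquid
    print(capture, to_liquid, to_air)       # at 80% capture only 8 kW is left for air cooling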

Most HPC clusters today utilising Asetek technology use VerticalRackCDU. This consists of a Zero-U rack-level CDU (cooling distribution unit) mounted in a 10.5-inch rack extension that includes space for three additional PDUs. Beyond the rack, the hot water cooling in this architecture has additional advantages in the overall cost of heat removal. Because hot water (up to 40°C) is used, the data centre does not require expensive CRACs and cooling towers but can utilise inexpensive dry coolers.

Ultimately, as HPC wattage trends continue to grow, HPC sites using the technology can confidently focus on getting the most performance from their systems rather than worrying about whether their cooling systems are up to the task.


Larry Vertal


DATA CENTRE OPTIMISATION

‘Data centres anonymous’: welcome to the real world

Speaking at a recent webinar, Vertiv’s Simon Brady pointed out that the ‘perfect data centre’ does not exist in the real world, yet there is a shroud of secrecy around the problems that are common in legacy facilities. He highlighted some of the low-cost solutions that can help optimise legacy data centres. Louise Frampton reports

“Power usage effectiveness (PUE) is a Marmite issue – you either love it or you hate it,” according to Simon Brady, head of data centre optimisation, Vertiv.

Whatever your view on the value of PUE as a measure of performance, it is clear there is a need for data centres to enhance their green efforts to tackle carbon emissions and create a more sustainable future.

Optimising legacy data centre infrastructure to maximise efficiencies and energy performance could have a significant contribution to make. Vertiv recently hosted a webinar on this issue, discussing some of the key factors that can be addressed, strategies for optimisation, and common problems found in many data centres that are all too often overlooked.

Brady commented that the perfect data centre would have a very low PUE – 1.1 is now being advertised, but below 1.3 is the ultimate goal for most facilities. The perfect data centre would also have 100% server utilisation from day one; limited use of large-scale UPS, which can be a significant part of energy consumption, covering less than 10% of total load; 100% free cooling with little to no use of mechanical cooling and very good thermal management; and best practice implemented from day one, with no use of duct tape. Designed for rapid change from day one, above all the perfect data centre would stay online at all times.

“This is something that we all aspire to, but I will now welcome you to the world of ‘data centres anonymous’. I visit two or three data centres every week and people are very secretive about the problems they face. I assure them that they are not alone – there are very few sites that I have visited that are perfect. The perfect data centre does not exist in the real world,” said Brady.

He believes that there is not a problem with PUE if it is measured correctly, reported and published openly, and measured as an average over a 12 month period.

“It shouldn’t be used as a marketing tool, although this is often what it is used for, and it should be used in conjunction with kWh measurements. There is no such thing as an average data centre – no two sites are the same, so why compare PUE?” Brady continued.

All too often, PUE is used as a “my data centre is better than your data centre” tool, in his view, and it has become divisive. Brady added that high availability and energy efficiency are mutually exclusive: “They don’t live in the same world. The business need for a hospital, a stock exchange or a bank is to have absolute 100% uptime. There is going to be an energy efficiency cost to that. The business need must come before energy efficiency – you cannot compare these sites to facilities such as Facebook or Google.”

“When you set your targets on PUE reduction, it is an easy number to falsely change. If you want to show a reduction you can just add load,” Brady cautioned.

Nevertheless, he believes that it is still relevant if used correctly and part of a robust optimisation plan. A survey, conducted during the presentation, revealed that around two-thirds of the respondents agree that PUE is still relevant to data centre optimisation.

Brady argued that PUE should be used as it is intended: an internal benchmark – a value to improve upon.

He pointed out that the ‘one’ in PUE represents the IT load. “There is a huge amount that can be done to reduce the kWh consumed by looking at your IT and the type of software that you run,” he advised. During the presentation, Brady highlighted some photos of real-world data centres – including an example of a modern facility with no defined hot and cold aisles. “This is a problem that we still see on a regular basis,” said Brady.
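A minimal sketch of the kind of measurement Brady describes, an annualised PUE built from monthly kWh meter readings rather than a single spot figure, might look like the following (the readings themselves are invented for illustration):

# Annualised PUE from 12 monthly kWh readings; the '1' in PUE is the IT energy itself.
facility_kwh = [410000, 395000, 402000, 380000, 375000, 390000,
                405000, 410000, 388000, 392000, 400000, 415000]   # invented example data
it_kwh       = [250000, 248000, 252000, 247000, 246000, 249000,
                251000, 252000, 248000, 249000, 250000, 253000]

annual_pue = sum(facility_kwh) / sum(it_kwh)   # energy-weighted, not an average of spot PUEs
print(round(annual_pue, 2))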

Air management is another problematic area: air grilles are often in the wrong place, too close to air-conditioning units or positioned randomly within the aisles. This can have a significant impact on the efficiency of air-conditioning in the room. Brady also regularly finds data centres that have failed to install blanking panels, adversely affecting thermal management.

“Changes in data centres can be rapid and sometimes the budget isn’t there to adapt the critical infrastructure. Some of the problems that we see are old, leaking CRAC units, with dirt and debris in the filters. This can cause failures – it is important to ensure facilities management companies are doing exactly as they are required,” warned Brady.

In some instances, Vertiv has found that facilities management companies are not changing filters on a regular basis, despite their claims to the contrary. Vertiv has also found that tiles are being lifted to allow for cabling, causing air flow problems. In addition, high temperatures can also arise when the raised floor is filled with duct work and cables, over a period of time, obstructing the air flow.

“All too often, air conditioning is rendered ineffective through changes to the data centre design,” continued Brady. He pointed out that enterprises may try to compensate by using temporary fans, but these can cost between £5,000 and £8,000 per fan, per year, to operate. This money could be better invested in cabling recovery and work within the facility, while improving the efficiency of the cooling system.

Cut-outs of floor tiles for cable entry are another issue encountered. Individually, these gaps may not seem significant, but collectively they can cause significant problems. Analysis of one data centre by Vertiv showed that the gaps added up to an 18m² hole.

“This can result in a huge amount of air loss,” Brady commented. “Air-conditioning has to work much harder, consuming a large amount of energy.”

Those taking part in the interactive webinar were asked whether they had seen open gaps in data centre floors: 36.4% said they had seen gaps on many occasions, 36.4% said sometimes, 18.2% said very rarely and 9.1% had never seen any gaps.

Having identified some of the problems that need to be addressed, Brady went on to offer the following advice on optimisation:
• Make a plan: no one knows your site better than you
• Engage with stakeholders: business need comes before efficiency
• Open your eyes: look up and down – start at the rack and work out/up
• Measure and monitor: create a baseline so you know where to improve from

Ultimately, optimisation can make a significant difference. One data centre, visited by Vertiv, implemented simple changes such as: cold aisle containment, an upgrade of the cooling system, and replacement of an ageing UPS.

The data centre went from a PUE of 2.17 to a PUE of 1.67, delivering a saving of 1,454,603 kWh/year – or £170,479 per year. This was achieved without making any changes to the IT load.
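Those figures hang together if the IT load is assumed to be constant, since annual facility energy is simply PUE multiplied by IT energy. The back-calculation below is an inference from the quoted numbers, not data reported by Vertiv, and implies an average IT load of roughly 330kW and an electricity price of around 11.7p/kWh:

# Back-calculation from the quoted saving, assuming constant IT load.
saving_kwh = 1_454_603
saving_gbp = 170_479
pue_before, pue_after = 2.17, 1.67

it_energy_kwh = saving_kwh / (pue_before - pue_after)   # ~2.9m kWh of IT energy a year
avg_it_load_kw = it_energy_kwh / 8760                   # ~332 kW average IT load
price_per_kwh = saving_gbp / saving_kwh                 # ~GBP 0.117/kWh implied tariff
print(round(it_energy_kwh), round(avg_it_load_kw), round(price_per_kwh, 3))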

Brady asked participants to identify the main reasons that were preventing them from undertaking optimisation. The time and effort involved in building a business case was the most common barrier – 33.3% said that this was a key issue; a further 22.2% cited budget and 22.2% said that fear of ‘breaking what is not broken’ was a factor.

A lot can be achieved at low cost or no cost, and a lack of knowledge does not appear to be an issue. Even small changes can make a difference, such as tackling air gaps, cable recovery and blanking, while thermal improvement is “the best place to start”, according to Brady.

“Data centre professionals know what needs to be done,” he concluded. “Avoid doing nothing – it really isn’t rocket science.”

DATA CENTRE OPTIMISATION

Delivering cool benefits

The latest digital monitoring and metering solutions deliver high accuracy, for optimised efficiency and maximum savings, and can be retrofitted quickly and easily, according to Socomec

The role of data centres as the engine room behind mission and IT critical operations means that a robust energy efficiency strategy is a real prerequisite. Data centres need to deliver quantifiable cost savings by improving efficiency to justify their share of internal resource. As a data centre’s cooling operations can consume up to 50% of the total energy usage, optimising its performance can significantly impact the bottom line.

David Bradley, energy efficiency sales manager with integrated power specialist Socomec, explains: “Most facilities are spending too much money to power too many cooling units to deliver too much cooling. By monitoring a facility in real time, less cooling is required – because you only need enough to address your current IT load.

“Real-time information enables facility managers and capacity planning teams to work together to more quickly identify where equipment or racks can be shifted to improve cooling capacity. Furthermore, it is possible to distinguish between hot spots caused by airflow issues and those that indicate that a facility is running at maximum capacity.”

As a result, additional IT load can frequently be added without the need for more cooling resources. Bradley continues: “Once you see how much cooling your current IT load requires, your cooling capacity team can determine how much additional IT load you can safely add to your existing facility.”
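A simple sketch of that headroom check is shown below. Every figure in it is invented for illustration; in practice the IT load would come from the live metering described here:

# Cooling headroom check from a real-time IT load reading.
installed_cooling_kw = 600.0      # assumed total cooling capacity
design_margin = 0.15              # assumed safety margin held back from that capacity
current_it_load_kw = 380.0        # measured IT load, almost all of which becomes heat

usable_cooling_kw = installed_cooling_kw * (1 - design_margin)
headroom_kw = usable_cooling_kw - current_it_load_kw
print(headroom_kw)                # ~130 kW of IT load could still be added safely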

The process of developing more efficient data centres starts with developing a deep understanding about the way that they use resources – and the way that those resources are monitored and managed. By accurately measuring and centrally monitoring energy consumption it is possible to improve efficiency across the entire estate.

One such system designed to meet these demands is Socomec’s Diris Digiware, enabling data centre managers to make fully informed decisions. Diris Digiware is a fully digital, multi-circuit plug and play measurement concept, with a common display for multi-circuit systems. Compact and quick to install, it provides accurate and effective metering, measurement and monitoring of electrical energy quality. Infinitely scalable, it is capable of monitoring thousands of connection points. The system offers an accuracy of class 0.5 to IEC61557-12 from 2% to 120% of the current sensor primary rating.

In order to address the most important industry challenges today, particularly in terms of efficiency and availability, and optimising the cooling operation, the latest product developments require a specialised and inter-disciplinary approach to provide high performance, reliable and cost-effective power solutions that are flexible enough to meet the rapidly changing demands of data centres.

Monitoring is key in this process; by delivering the most effective cooling solutions exactly when and where they are needed most, energy usage and the associated costs can be reduced. The majority of energy savings come from shutting off redundant cooling resources and reducing variable fan speeds. Additional savings come from raising average facility temperatures above their previous levels, while still maintaining them safely within ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) guidelines.

Bradley comments: “With ASHRAE TC9.9 recommending hot aisle temperatures as high as 45°C in certain circumstances, we look to monitor at the three-phase PDU, where it is cooler, and not in the rack. The reason for this is that LCDs in the rack-mounted PDUs could dry up, electronics/communications could fail and access to the rack for maintenance may become challenging or unsafe.

“All facilities have dynamic environments, making it difficult to manage thermal airflow. The challenge is to match the cooling delivered to a facility with the heat generated by the current IT load – all of which needs to be monitored.

“By retrofitting Socomec’s Digiware within any cooling system, it is now possible to obtain the detailed level of real-time data necessary to effectively manage performance.”

Maximising cost savings

Leading hosting and managed service provider Six Degrees Group specialises in delivering application performance management, monitoring, reporting and security, deployed on hybrid public/private Cloud platforms.

Trusted with high-profile mission critical technology around the world, everything Six Degrees Group does is underpinned by its own data centres, data network and voice switching infrastructure.

Timothy Arnold, colo technology director at Six Degrees Group, identified Socomec’s Digiware range for a number of critical applications – including the B-30, a three-phase, neutral voltage and current wireless power monitoring device, with analogue and digital I/O in one module.

Arnold explains: “We increasingly need to better understand the specific power utilisation within our facilities. Although we have historically been able to determine the power utilisation for an entire building, we have not been able to monitor the power utilisation across unique data halls and different pump sets within that building. Optimising chiller performance and energy consumption has not previously been possible.

“By retrofitting the Digiware B-30 within one of three data halls – each with two pump sets – we have been able to monitor and measure the power usage for that specific data hall, in turn delivering a far more advanced understanding of energy efficiency.

“Now, when making adjustments, we can confirm, conclusively, that they have been effective, enabling us to make more informed decisions in the longer term. Furthermore, as well as determining the energy usage for a specific hall, we can even drill down to individual pump set level, identifying whether one is running harder than the other, for example.

“As a standalone module installed directly into the pumpset panel, Digiware was easy to integrate. Rather than needing to have multiple controls and a larger system, the Digiware B-30 can be deployed in an isolated environment and into the end unit, instead of deploying full modules. The initial trial has been so successful that we are now rolling the solution out across our other two data halls.”

It has been found that this granular level of monitoring is particularly beneficial for colocation facilities, whose environments are continually evolving. Conversely, systems that are using lower levels of power can be consolidated, improving energy efficiency and, by association, lowering operating costs for either the provider or the end user.

Modular Digiware solutions (including the I-30 and U-30) have also been deployed by Six Degrees Group to monitor a larger number of circuits in one location.

Arnold comments: “Previously, with in-rack PDU monitoring, we experienced a number of issues as legacy equipment was operating at higher temperatures while not being designed specifically for this purpose.

“Rather than turn them off, we were able to retrofit Digiware live – without downtime – ensuring that our customers were not affected in any way. In this instance, the modularity of the Digiware solution was a significant benefit. Instead of having to deploy it for all racks across all customers, and because Digiware is mounted in the three-phase PDU rather than the rack, we have been able to scale up over time, therefore reducing capital cost.

“Furthermore, the installation was rapid; it actually took longer to unbox the equipment than it did to install it.”

It is vital to take control of all aspects of system design and operation in order to guarantee uptime and availability, both in the near and longer term – but also to better understand energy usage and opportunities for optimisation. A successful operation and an attractive ROI ultimately depend upon the optimised performance and flexibility of the system architecture.

Arnold continues: “We are also testing Digiware in other scenarios – in determining UPS efficiencies, for example. I am currently using Digiware as a power logger – a really cost-effective solution.

“Across all of these applications, we are now working with accurate and reliable data, which means that we can make more informed decisions on how to improve our facilities, particularly in terms of energy efficiency and meeting the terms of the climate change agreement. We can deploy our capital expenditure more effectively as we better understand how energy is being used, and there is zero downtime for monitoring.”

Are you prepared?

The changing power demands placed upon hard-working data centres are evolving continuously. We capture and use more data now than ever before, and the trend in terms of the Internet of Things means that this usage is forecast to continue at almost unimaginable rates.

Those responsible for managing the buildings and facilities that house big data have been plunged into the corporate spotlight; the optimisation of critical power availability and the protection of vital assets requires a careful balancing act between these changing power demands and the provision of greater efficiencies, and cost savings.

Page 46: Brewing trouble with a poor power factor?mcp.theenergyst.com/wp-content/uploads/2017/08/... · Data centre design Virtual reality: the next data centre revolution? Infrastructure

PRODUCTS


Smart grid optimisation technology wins innovation award

Endeco Technologies, a leading UK and Irish technology and demand-side response (DSR) aggregator, has received the prestigious ‘Seal of Excellence’ certificate for the development of its EnergyConnect platform. Presented under Horizon 2020, the EU’s largest ever research and innovation programme, the award recognises the company’s smart grid optimisation technology platform.

The platform allows electricity network operators, such as National Grid, to rapidly balance network frequency, make greater use of renewable technologies and minimise blackouts, while clients benefit from automated energy savings through demand peaks, optimised asset performance monitoring and automated switching between schemes to secure the highest revenues.

“We have the largest R&D team in the industry,” said Endeco CEO and co-founder Michael Phelan. He continued: “We’ve invested more than 40 years’ worth of man hours into developing EnergyConnect. This Seal of Excellence acknowledges the hard work that has gone into bringing this technology to market. Our award-winning platform empowers major energy users to take control of their energy flexibility, unlocking powerful, recurring revenue streams from National Grid, with zero disruption to their day-to-day operations.”

Horizon 2020 focuses on excellent science, industrial leadership and tackling societal challenges. With almost £71bn of funding made available from 2014 to 2020, plus additional private investment, the programme aims to fund more breakthroughs.

Endeco was one of thousands of organisations that applied to the Horizon 2020 programme, in which proposals are evaluated on three criteria: excellence; impact; and quality and efficiency of implementation. The Seal of Excellence is assessed by independent experts and represents a high-quality label awarded to projects deemed to deserve funding.
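To make the frequency-balancing role concrete, the sketch below shows the basic trigger logic that a demand-side response platform automates: when grid frequency strays outside a deadband, flexible load is shed or added until it recovers. This is a generic illustration rather than the EnergyConnect platform itself, and the 49.8/50.2Hz thresholds and the asset interface are assumptions.

```python
# Generic demand-side response trigger logic. This is an illustration, not the
# EnergyConnect platform; thresholds and the asset interface are assumptions.

NOMINAL_HZ = 50.0
LOW_TRIGGER_HZ = 49.8   # shed flexible load below this (illustrative)
HIGH_TRIGGER_HZ = 50.2  # absorb extra energy above this (illustrative)

def respond(frequency_hz: float, flexible_assets) -> str:
    """Decide what a site's flexible assets should do for one frequency sample."""
    if frequency_hz <= LOW_TRIGGER_HZ:
        for asset in flexible_assets:
            asset.shed()     # e.g. pause chillers or switch to on-site generation
        return "shed"
    if frequency_hz >= HIGH_TRIGGER_HZ:
        for asset in flexible_assets:
            asset.absorb()   # e.g. bring forward deferrable load
        return "absorb"
    for asset in flexible_assets:
        asset.restore()      # frequency back in band: return to normal running
    return "normal"
```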

Eco-efficient switchgear

ABB’s new ZX2 with AirPlus insulation gas is leading the way in eco-efficient switchgear technology as the company builds on its innovative portfolio. This latest addition to the AirPlus series shares the same compact dimensions and advantages as the established ZX2 switchgear, but boasts a global warming potential (GWP) of less than one.

The ZX2 AirPlus is suitable for demanding applications in single and double busbar configurations. It is available in IEC ratings up to 36kV, with 31.5kA short-circuit and 2,000A nominal current.

AirPlus is a fluoroketone-based gas mixture for medium-voltage gas-insulated switchgear (GIS) and a climate-friendly alternative to traditional SF6, which is a potent greenhouse gas. It is claimed to be the first and only ‘green’ alternative gas on the market for medium-voltage switchgear and is part of ABB’s ongoing strategy to develop eco-efficient technologies.

“It’s in all our interests that we embrace and invest in greener technologies to safeguard our planet for future generations,” commented Bruno Melles, managing director of ABB’s medium voltage business.
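A rough calculation shows why a GWP below one matters. SF6 is commonly assigned a 100-year GWP in the region of 23,500, so even a small leak carries a large CO2-equivalent penalty; the leak mass below is a hypothetical figure used purely for illustration.

```python
# Indicative arithmetic only: CO2-equivalent impact of a small insulation-gas leak.
# The SF6 GWP (~23,500 over 100 years) is a commonly cited figure; the leak mass
# is a hypothetical assumption.

GWP_SF6 = 23_500   # kg CO2e per kg of gas, 100-year horizon
GWP_AIRPLUS = 1.0  # "less than one" per the article; 1 used as an upper bound

leak_kg = 0.5      # hypothetical annual leakage from one switchgear panel

print(f"SF6 leak:       {leak_kg * GWP_SF6 / 1000:.1f} tonnes CO2e")  # ~11.8 t
print(f"GWP<1 gas leak: under {leak_kg * GWP_AIRPLUS:.1f} kg CO2e")
```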


Scalable UPS solution

MPower UPS has joined forces with Centiel to market CumulusPower for the first time in the UK. CumulusPower is a three-phase UPS system offering continuous power availability, fault tolerance and a Distributed Active Redundant Architecture (Dara) that removes single points of failure; it is now available exclusively through MPower UPS.

MPower UPS managing director Michael Brooks commented: “Availability continues to be a major concern for data centre managers and those working in other critical environments. Unlike traditional multi-module systems, the CumulusPower technology combines Intelligent Module Technology (IMT) with a fault-tolerant parallel Distributed Active Redundant Architecture to offer availability of 99.9999999%.

“This is achieved through fully independent and self-isolating intelligent modules, each with its own power unit, intelligence (CPU and communication logic), static bypass, control display and battery.

“In the unlikely event of a failure, modules can simply be swapped without transferring the load to raw mains.”

The solution has also been designed to reduce the total cost of ownership through low losses, with a high double-conversion efficiency of 97% at the module level.

Vertical and horizontal scalability means clients can pay as they grow: CumulusPower intelligent modules can be connected in parallel configurations to provide redundancy or to increase a system’s total capacity. Serviceability is also straightforward, with simple fault clearance and tool-less, hot-swappable modules.

The small footprint also contributes to a high power density of 412kW/m² with an input THD of less than 3%.
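To put those headline figures in context, the short calculation below converts them into more tangible terms: 99.9999999% availability corresponds to roughly three hundredths of a second of expected downtime per year, and 97% double-conversion efficiency to a little over 3kW of losses per 100kW of load. It is a back-of-the-envelope check, not vendor tooling, and the 100kW example load is an assumption.

```python
# Back-of-the-envelope check on the published availability and efficiency figures.
# Not vendor tooling; the 100 kW example load is an assumption.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def downtime_per_year_s(availability: float) -> float:
    """Expected unavailability per year, in seconds."""
    return (1.0 - availability) * SECONDS_PER_YEAR

def module_losses_kw(load_kw: float, efficiency: float) -> float:
    """Heat dissipated by a UPS module at a given load and double-conversion efficiency."""
    return load_kw * (1.0 / efficiency - 1.0)

print(f"Downtime at nine nines: {downtime_per_year_s(0.999999999):.3f} s/year")  # ~0.032
print(f"Losses at 97%, 100 kW load: {module_losses_kw(100.0, 0.97):.1f} kW")     # ~3.1
```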



Monitoring battery capacity in critical standby applications

Test sets in the new Torkel 900 range from Megger offer a convenient and dependable way of determining the true capacity of storage battery installations of the type frequently used in critical standby power applications. Suitable for use with battery systems from 12V to 300V, the new instruments use the discharge testing method, which is widely recognised as giving accurate and reliable results.

The Torkel 900 range currently includes two models: the Torkel 930, which can operate with discharge currents up to 220A, and the Torkel 910, which has a maximum discharge current rating of 110A. For applications where higher discharge currents are needed, the test sets can be used in conjunction with Megger’s TXL additional load units.

Both Torkel 900 models allow tests to be conducted without disconnecting the battery system from the load, and both support testing in constant current, constant power and constant resistance modes, or in accordance with a pre-selected load profile.

Other features common to both models include real-time monitoring during the test, an adjustable voltage alarm level and a battery-protection function that automatically terminates the test if the voltage drops so low that there is a risk of deep discharge.

All test data is stored in the instrument and, using an ordinary USB stick, can be conveniently transferred to a PC for further analysis and archiving using the Torkel viewer software supplied with the test sets.

As well as having a higher discharge current capability than the Torkel 910, the Torkel 930 offers additional functionality. This includes support for monitoring individual cell voltages using Megger BVM cell monitors. Up to 120 of these can be daisy-chained for easy connection, and the Torkel 930 can handle two BVM systems, allowing 240 cells to be monitored simultaneously.
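The arithmetic behind a constant-current discharge test is simple, and the sketch below shows it: delivered capacity is the discharge current multiplied by the time taken to reach the end-of-discharge voltage, compared against the nameplate rating. This illustrates the method only and is not Megger software; the 80% replacement threshold is a commonly used ageing criterion rather than a Torkel setting.

```python
# Constant-current discharge-test arithmetic. Illustration only, not Megger
# software; the 80% replacement threshold is a commonly used criterion and is
# an assumption here, not a Torkel setting.

def delivered_capacity_ah(discharge_current_a: float, runtime_minutes: float) -> float:
    """Ampere-hours delivered before the end-of-discharge voltage was reached."""
    return discharge_current_a * runtime_minutes / 60.0

def needs_replacement(delivered_ah: float, rated_ah: float, threshold: float = 0.80) -> bool:
    """Flag a battery string whose measured capacity has fallen below the threshold."""
    return delivered_ah / rated_ah < threshold

measured = delivered_capacity_ah(discharge_current_a=100.0, runtime_minutes=102.0)
print(f"Delivered capacity: {measured:.0f} Ah of a rated 200 Ah")           # 170 Ah
print(f"Below 80% threshold? {needs_replacement(measured, rated_ah=200)}")  # False
```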

Centralised control of UPS

Vertiv, formerly Emerson Network Power, has released an updated version of Trellis Power Insight, a data centre management application delivering centralised monitoring and control of uninterruptible power supply (UPS) systems and networked servers.

The solution offers a comprehensive set of alarms, notifications and automated actions, now including controlled server shutdown. The software is included with Liebert UPS systems, is available globally and already has more than 500 users.

Trellis Power Insight simplifies management of today’s ever-changing data centre environments with auto-discovery of new devices and compatibility and integration with a wide range of Liebert UPS systems.

“Data centres are increasingly complex, and everything we do is designed to simplify these environments and make management easy and efficient,” said Patrick Quirk, vice-president and general manager, global management systems at Vertiv. “With advanced management capabilities, including remote management of distributed IT environments and the ability to safely shut down servers in the event of an outage, Trellis Power Insight brings peace of mind to data centre managers.”

Most data centre applications allow for anywhere from five to 10 minutes of battery power from UPS units. Trellis Power Insight can monitor up to 50 separate Liebert UPS systems and systematically and safely shut down thousands of connected servers in the event of an outage. Controlled server shutdown prevents equipment damage and business interruptions when restarting.
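The value of controlled shutdown is easiest to see as a few lines of logic. The sketch below is a generic illustration of the pattern such tools automate, shutting servers down in priority order once a UPS reports it is on battery and the remaining runtime falls towards a safety margin. It is not the Trellis Power Insight API; the ups and server interfaces are assumptions.

```python
# Generic controlled-shutdown pattern. Illustration only, not the Trellis Power
# Insight API; ups.on_battery, ups.runtime_minutes, server.priority and
# server.shutdown() are assumed interfaces.

SAFETY_MARGIN_MINUTES = 3  # keep enough battery to complete the final shutdowns

def handle_outage(ups, servers):
    """Shut servers down, least critical first, while battery runtime remains."""
    if not ups.on_battery:
        return
    # Lower priority value = less critical, so those hosts are stopped first
    # and key services keep running for as long as possible.
    for server in sorted(servers, key=lambda s: s.priority):
        if ups.runtime_minutes <= SAFETY_MARGIN_MINUTES:
            break
        server.shutdown()  # graceful OS shutdown, e.g. via agent, SSH or IPMI
```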


Rack-based cooling system

CyberRack from Stulz UK is a new chilled-water rear-door heat exchanger with cooling capacities from 19 to 32kW across the range. Its adapter frame ensures the system fits most 19” server cabinets.

Stulz is expanding its portfolio of rack-based cooling systems with the CyberRack Active Rear Door. The heat exchanger door replaces the back panel of the rack, and its compact design enables the cooling of all server cabinets, including high-density systems, directly inside the data centre.

The space in the rack remains fully available for IT equipment. As the depth of the rack increases by less than 300mm, the door can also be retrofitted in existing installations, with no repositioning of server racks needed. Two versions of the product are available, with a cooling capacity of 19 or 32kW, and up to five EC fans ensure optimum airflow. Thanks to its individual adapter frame, the CyberRack can be installed in all commonly available 19” cabinets. The frame models are available in heights of 42U and 48U, and widths of 600mm and 800mm.

The cooling capacity of the CyberRack is automatically adapted to the heat load of the servers. This is achieved either directly, through continuous analysis of the measured temperatures, or indirectly, via differential pressure control.
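That control principle can be sketched in a few lines: fan speed is raised or lowered in proportion to how far the measured return-air temperature (or, in the indirect case, the differential pressure) sits from its setpoint. The loop below is a simplified proportional controller for illustration, not Stulz firmware, and the setpoint, base speed and gain are assumptions.

```python
# Simplified proportional fan control for a rear-door cooler. Illustration only,
# not Stulz firmware; setpoint, base speed and gain are assumptions.

SETPOINT_C = 35.0   # target return-air temperature at the rack exhaust
BASE_SPEED = 40.0   # % fan speed when the measured temperature is on setpoint
GAIN = 8.0          # % fan speed added per degree of error

def fan_speed_percent(measured_temp_c: float) -> float:
    """EC fan speed command, clamped to a 20-100% operating range."""
    error = measured_temp_c - SETPOINT_C
    return max(20.0, min(100.0, BASE_SPEED + GAIN * error))

print(fan_speed_percent(35.0))  # 40.0 - on setpoint
print(fan_speed_percent(38.5))  # 68.0 - hotter exhaust, fans ramp up
```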




PRODUCT & SERVICES DIRECTORY
To feature your company’s products or services on this page contact [email protected]

Genset controllers: Deep Sea Electronics – DSE8610 MKII synchronising controller with redundant MSC, extended PLC functionality and enhanced communications. www.deepseaplc.com, +44 (0)1723 890099, [email protected]

Power monitoring: Janitza – 3-in-1 monitoring system for a reliable and efficient power supply, covering energy management (ISO 50001), power quality (EN 50160) and residual current monitoring (RCM). www.janitza.com

UPS: Riello UPS – premium power protection for data centres; reliable power for a sustainable world. www.riello-ups.co.uk, 0800 269 394

Flow metering: Micronics – clamp-on flow and heat/energy metering solutions, made in Britain. www.micronicsflowmeters.com, +44 (0)1628 810456


Chris Dummett
Sudlows’ commercial director on the Egyptian pyramids, iPads and the existence of other life forms

Who would you least like to share a lift with? Any politician that stops us changing the law so that common sense can prevail.

You’re God for the day. What’s the first thing you do? Make sure every person on the planet has enough food, water and shelter. It is absolutely crazy in the age we live in that this is still an issue.

If you could travel back in time to a period in history, what would it be and why? I’d travel back to the time of the Egyptian Pharaohs. I’d love to see how they built the cities and the pyramids without the use of modern engineering equipment. Thousands of years on, it’s still amazing.

Who or what are you enjoying listening to? I’m a massive music fan and love all kinds of stuff. I’m not sure if it’s a middle-age crisis or the fact that I have young teenage kids, but I’ve found myself listening to Drake, with Passionfruit and Teenage Fever being particular favourites.



What should energy users be doing to help themselves in the current climate? I think we have a huge responsibility to change the way we consume energy and how we produce it. Investment in the research and development of energy-efficient technologies, and in their application, has to be a priority. In the meantime, we must embrace the technologies that are available and be willing to invest in them for the sake of energy reduction, even if there isn’t a strong financial reason for doing so.

What’s the best thing – work wise – that you did recently? We acquired a new head office building for our team and I’m looking forward to making it a cool place to work.

What unsolved mystery would you like the answers to? Are there aliens in Area 51? Surely we’re not alone in this universe, are we?

What would you take to a desert island and why? iPad. You just can’t beat the amount of applications in one box; you would never be bored.

What’s your favourite film (or book) and why? There are so many to choose from, but I’d say it’s Lone Survivor. The film is so intense, real edge-of-your-seat stuff. I like the fact that the soldier gets support from a local guy who, by doing so, puts his own family at risk; a real example of humanity.

If you could perpetuate a myth about yourself, what would it be? That I get to work on time. I really struggle with the concept of getting anywhere for a specific time.

What would your super power be and why? I’d have the power to heal. Some things seem so unfair, and the ability to make them good would be pretty cool.

What would you do with a million pounds? I’d use it to take the financial burden away from those that would otherwise struggle, and treat myself to a decent music system.

What’s your greatest extravagance? A watch. For something that you don’t really need to spend a lot of money on, I somehow managed to.

If you were blessed with any talent, what would your dream job be and why? I’d love to be able to sing (although, in my world, I can). Music is a big part of my life and I’d love to be involved in the making and production of it.

What is the best piece of advice you’ve ever been given? Treat people how you want to be treated yourself. It has looked after me my whole life and it’s probably no coincidence that, if you do, good things tend to happen.

What irritates you the most in life? Terrorism. I don’t understand it, no one benefits from it and it serves only to destroy life. ●
