HVAC measurements and data centre energy efficiency

By Anu Kätkä and Keith Dunnavant
In the first of three articles from Anu Kätkä from Vaisala and Keith Dunnavant of Munters, we explore the impact of HVAC measurements on energy efficiency

Vaisala is a global leader in weather, environmental, and industrial measurements, and Munters is a global leader in energy efficient and sustainable climate control solutions for mission-critical processes, including data centres. 

In Part 1, we look at data centre energy efficiency

In Part 2, we look at temperature & humidity control

In Part 3, we look at the importance of accurate measurements

Understanding, and improving, data centre PUE

Data centres use approximately 200 terawatt hours of electricity annually, which is around 1% of total global demand. It has been estimated that there are over 18 million servers in data centres globally. In addition to the power consumed by these IT devices, data centres also require supporting infrastructure such as cooling, power distribution, fire suppression, uninterruptible power supplies, and generators. 

To compare energy efficiency across data centres, it is common practice to use ‘power usage effectiveness’ (PUE) as the measure. This is defined as the ratio of the total energy used in a data centre to the energy used by IT equipment alone. The optimal PUE is 1, which would mean that all energy is spent on IT and the supporting infrastructure consumes none. To minimise PUE, the objective is therefore to reduce the consumption of the supporting infrastructure, such as cooling and power distribution. 
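The ratio described above can be expressed as a minimal sketch (the function name and figures are illustrative, not from the article):

```python
def pue(total_energy_kwh: float, it_energy_kwh: float) -> float:
    """Power usage effectiveness: total facility energy divided by IT energy."""
    if it_energy_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return total_energy_kwh / it_energy_kwh

# A facility drawing 1,200 MWh in total while its IT load uses 1,000 MWh
# spends 200 MWh on supporting infrastructure:
print(pue(1200, 1000))  # 1.2
```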

The typical PUE in traditional legacy data centres is around 2, whereas large hyperscale data centres can reach below 1.2. The global average was approximately 1.67 in 2020, meaning that, on average, 40% of total energy use is non-IT consumption. However, because PUE is a ratio, it says nothing about the absolute amount of energy consumed: a facility with a very high IT load relative to its cooling load will show a good PUE regardless of how much energy it uses in total. It is therefore important to also measure total power consumption, as well as the efficiency and lifecycle of the IT equipment. 
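The 40% figure quoted above follows directly from the 1.67 average: the non-IT share of total energy is 1 − 1/PUE. A quick check (function name is illustrative):

```python
def non_it_share(pue: float) -> float:
    """Fraction of total facility energy consumed by supporting
    infrastructure (cooling, power distribution, etc.), given the PUE."""
    if pue < 1:
        raise ValueError("PUE cannot be below 1")
    return 1 - 1 / pue

# The 2020 global average PUE of 1.67 implies roughly 40% non-IT consumption:
print(round(non_it_share(1.67) * 100))  # 40
```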

Additionally, from an environmental perspective, consideration should be given to the way in which the electricity is produced, how much water is being consumed (both in generating the electricity and at the site for cooling), and whether waste heat is being utilised. 

The PUE concept was originally developed by The Green Grid in 2006 and published as an ISO standard (ISO/IEC 30134-2) in 2016. The Green Grid is an open industry consortium of data centre operators, cloud providers, technology and equipment suppliers, facility architects, and end-users, working globally to improve the energy and resource efficiency of data centre ecosystems and to minimise carbon emissions. 

PUE remains the most common method for assessing data centre energy efficiency. At Munters, for example, PUE is evaluated on both a peak and an annualised basis for each project. When computing these metrics, only the IT load and the cooling load are considered; this is referred to as either partial PUE (pPUE) or mechanical PUE (PUEM).

The peak pPUE is used by electrical engineers to establish maximum loads and to size back-up generators, while the annualised pPUE is used to estimate how much electricity will be consumed during a typical year and to compare cooling options. While PUE may not be a perfect tool, it is increasingly supported by other measures such as WUE (water usage effectiveness) and CUE (carbon usage effectiveness), as well as by approaches that enhance its relevance, including SPUE (server PUE) and TUE (total PUE).

Predictions for global data centre energy consumption

In the past decade, efficient hyperscale data centres have increased their relative share of total data centre energy consumption, while many of the less efficient, traditional data centres have been shut down. Because these newly built hyperscale facilities have been designed for efficiency, total energy consumption has not yet increased dramatically.

However, demand for information services and compute-intensive applications will grow, driven by emerging trends such as AI, machine learning, automation, and driverless vehicles. Consequently, the energy demand from data centres is expected to increase; the level of that increase is the subject of debate.

In the best-case scenario, global data centre energy consumption will increase threefold by 2030 compared with current demand, but an eightfold increase is believed to be more likely. These projections include both IT and non-IT infrastructure. The majority of non-IT energy consumption comes from cooling, or more precisely, from rejecting the heat generated by the servers, and cooling alone can easily represent 25% or more of total annual energy costs. Cooling is of course a necessity for maintaining IT functionality, and it can be optimised through good design and the effective operation of building systems. 

An important recent trend is the increase in server rack power density, with some racks reaching 30 to 40 kilowatts and above. According to the 2020 State of the Data Centre report from AFCOM, the industry association for data centre professionals, average rack density jumped to 8.2 kW per rack, up from 7.3 kW in 2019 and 7.2 kW in 2018; about 68% of respondents reported that rack density had increased over the previous three years.

The shift towards cloud computing is certainly boosting the development of hyperscale and co-location data centres. Historically, a 1-megawatt data centre would have been designed to meet the needs of a bank, an airline, or a university, but many of these organisations are now shifting to cloud services hosted in hyperscale and co-location facilities. This growing demand brings an increased requirement for data speed, and because these data centres serve mission-critical applications, the reliability of the infrastructure is very important. 

There is also an increased focus on edge data centres to reduce latency, as well as a shift towards liquid cooling to accommodate high-performance chips and to reduce energy use.
