Why hybrid cooling is the future for data centres

Gordon Johnson, Senior CFD Manager at Subzero Engineering, discusses growing data centre demand and solutions that can ensure optimum facility function

Rising rack and power densities are driving significant interest in liquid cooling for many reasons. Yet the suggestion that one size fits all ignores a fundamental reality that could hinder adoption: many data centre applications will continue to utilise air as the most efficient and cost-effective solution for their cooling requirements. The future is undoubtedly hybrid, Gordon Johnson says, and by using air cooling, containment, and liquid cooling together, owners and operators can optimise and future-proof their data centre environments.

Johnson is Senior CFD Manager at Subzero Engineering, responsible for planning and managing all CFD-related jobs in the US and worldwide. He is experienced in data centre energy efficiency assessments, CFD modelling, and disaster recovery, and is a certified US Department of Energy Data Center Energy Practitioner (DCEP) and a Certified Data Centre Design Professional (CDCDP).

Gordon Johnson, Senior CFD Manager at Subzero Engineering. Credit: Subzero Engineering

He shares how many data centres today are experiencing increasing power density per IT rack, rising to levels that just a few years ago seemed extreme and out of reach but are now commonplace, all while continuing to deploy air cooling. In 2020, for example, the Uptime Institute found that, due to compute-intensive workloads, racks with densities of 20kW and higher are becoming a reality for many data centres.

Johnson details how this increase has left data centre stakeholders wondering if air-cooled IT equipment (ITE), along with containment used to separate the cold supply air from the hot exhaust air, has finally reached its limits and if liquid cooling is the long-term solution. The answer, he explains, is not as simple as yes or no.

Moving forward it’s expected that data centres will transition from 100% air cooling to a hybrid model encompassing air and liquid-cooled solutions with all new and existing air-cooled data centres requiring containment to improve efficiency, performance, and sustainability. Additionally, those moving to liquid cooling may still require containment to support their mission-critical applications, depending on the type of server technology deployed.

Johnson says those moving to liquid cooling may still require containment to support their mission-critical applications. Credit: Subzero Engineering

Why is the debate of air versus liquid cooling such a hot topic in the industry right now? To answer this question, Johnson says we need to understand what’s driving the need for liquid cooling, the other options available, and how those options can be evaluated while continuing to utilise air as the primary cooling mechanism.

With more than 12 years behind him at Subzero and a data centre career spanning more than 30 years, Johnson is well placed to weigh in.

Can air and liquid cooling co-exist?

For those who are newer to the industry, this is a position we’ve been in before, with air and liquid cooling successfully coexisting while removing substantial amounts of heat via intra-board air-to-water heat exchangers. This process continued until the industry shifted primarily to CMOS technology in the 1990s, and we’ve been using air cooling in our data centres ever since. 

With air being the primary medium used to cool data centres, ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) has worked towards making this technology as efficient and sustainable as possible. Since 2004, it has published a common set of criteria for cooling IT servers, developed with the participation of ITE and cooling system manufacturers, entitled “TC9.9 Thermal Guidelines for Data Processing Environments”.

ASHRAE has focused on the efficiency and reliability of cooling the ITE in the data centre. Several revisions have been published, with the latest released in 2021 (revision 5). This latest generation of TC9.9 highlights a new class of high-density air-cooled ITE (the H1 class), which focuses on cooling high-density servers and racks, trading off some energy efficiency because lower supply air temperatures are recommended to cool this class of equipment.

As to the question of whether or not air and liquid cooling can coexist in the data centre white space, it’s done so for decades already, and moving forward, many experts expect to see these two cooling technologies coexisting for years to come.

What do server power trends reveal?

It’s easy to assume that, when it comes to cooling, one size fits all in terms of power and cooling consumption, both now and in the future, but that’s not accurate. It’s more important to focus on the actual workload of the data centre we’re designing or operating.

In the past, a common assumption with air cooling was that once you went above 25kW per rack, it was time to transition to liquid cooling. But the industry has made advances in this area, enabling data centres to cool up to, and even beyond, 35kW per rack with traditional air cooling.

Scientific data centres, which include largely GPU-driven applications such as machine learning, AI, and heavy analytics workloads like crypto mining, are the areas of the industry typically transitioning towards liquid cooling. But for other workloads, such as cloud and most business applications, the growth rate is rising yet air cooling still makes sense in terms of cost. The key is to look at this issue from a business perspective: what are we trying to accomplish with each data centre?

What’s driving server power growth?

Up to around 2010, businesses utilised single-core processors; once they became available, they transitioned to multi-core processors. Power consumption remained relatively flat with these dual- and quad-core processors, which enabled server manufacturers to concentrate on lower airflow rates for cooling ITE, resulting in better overall efficiency.

Around 2018, with processor dies continuing to shrink, higher core-count processors became the norm. With these reaching their performance limits, the only way to continue achieving the levels of performance demanded by compute-intensive applications was to increase power consumption. Server manufacturers have been packing as much as they can into servers, but because of CPU power consumption, some data centres were having difficulty removing the heat with air cooling, creating a need for alternative cooling solutions such as liquid.

Server manufacturers have also been increasing the temperature delta across servers for several years now, which again has been great for efficiency, since the higher the temperature delta, the less airflow is needed to remove the heat. However, these deltas are now reaching their limits, leaving data centre operators having to increase airflow to cool high-density servers and keep up with increasing power consumption.
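The trade-off described here follows from the standard sensible-heat approximation for air at sea level, CFM ≈ 3.16 × W / ΔT(°F): halving the server's temperature delta roughly doubles the airflow required for the same heat load. A minimal sketch of that relationship (the rack powers and deltas below are illustrative assumptions, not figures from the article):

```python
def required_airflow_cfm(power_w: float, delta_t_f: float) -> float:
    """Airflow in CFM needed to remove `power_w` watts of heat from a
    server or rack with an air temperature rise of `delta_t_f` (F).

    Uses the common sensible-heat rule of thumb for air at sea level:
    CFM ~= 3.16 * W / dT(F).
    """
    return 3.16 * power_w / delta_t_f

# Illustrative 35kW rack: a higher delta-T needs noticeably less airflow.
print(round(required_airflow_cfm(35_000, 20)))  # 5530 CFM at a 20F delta
print(round(required_airflow_cfm(35_000, 30)))  # 3687 CFM at a 30F delta
```

This is why shrinking server deltas push operators towards more fan power, and eventually towards liquid, which carries far more heat per unit volume than air.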

What are additional options for air cooling?

Thankfully, there are several approaches the industry is embracing to successfully cool power densities up to, and even greater than, 35kW per rack, often with traditional air cooling. These options start with deploying either cold or hot aisle containment. Typically, if no containment is used, rack densities should be no higher than 5kW per rack, with additional supply airflow needed to compensate for recirculated air and hot spots.

What about lowering temperatures? In 2021, ASHRAE released their 5th generation TC9.9 which highlighted a new class of High-Density Air-Cooled IT equipment, which will need to use more restrictive supply temperatures than the previous class of servers. 

At some point, high-density servers and racks will also need to transition from air to liquid cooling, especially with CPUs and GPUs expected to exceed 500 watts per processor in the next few years. But this transition is not automatic and isn’t going to be for everyone.

Liquid cooling is not going to be the ideal solution or remedy for all future cooling requirements. Instead, the selection of liquid cooling instead of air cooling has to do with a variety of factors, including specific location, climate (temperature/humidity), power densities, workloads, efficiency, performance, heat reuse, and physical space available.

This highlights the need for data centre stakeholders to take a holistic approach to cooling their critical systems. It will not and should not be an approach where we’re considering only air or only liquid cooling moving forward. Instead, the key is to understand the trade-offs of each cooling technology and deploy only what makes the most sense for the application.

******

For more insights into the world of Data Centre - check out the latest edition of Data Centre Magazine and be sure to follow us on LinkedIn & Twitter.

