Engineering the Future of Sustainable Data Centre Cooling
Today, data centres provide the critical infrastructure that powers the world’s digital economy. From the Internet and smart devices to cloud, AI and compute-intensive research applications, the digital infrastructure sector has become integral to every facet of life.
At the same time, however, data centre energy consumption has continued to grow at a prolific rate, driven by the power and computing demands of Generative AI and the accompanying growth of the hyperscale cloud. A report from IDC, for example, found that global revenue for AI is expected to surpass US$300bn by 2026. Another study predicts that AI technologies could consume as much electrical energy as the country of Ireland – some 29.3 terawatt-hours (TWh) per year.
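To put that figure in context, a quick back-of-the-envelope conversion – a sketch using only the cited 29.3TWh figure and the number of hours in a year – shows the equivalent continuous electrical load:

```python
# Back-of-the-envelope: what does 29.3 TWh per year look like as a
# continuous electrical load? The only input is the figure cited above.
annual_consumption_twh = 29.3      # projected AI electricity use, TWh/year
hours_per_year = 365 * 24          # 8,760 hours

# 1 TWh = 1,000 GWh, so dividing GWh by hours gives average GW
average_load_gw = annual_consumption_twh * 1_000 / hours_per_year
print(f"Average continuous load: {average_load_gw:.2f} GW")  # ~3.34 GW
```

That works out to roughly 3.3GW of constant draw – comparable to the output of several large power stations running around the clock.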
“In many respects, the digital infrastructure sector is now at a crossroads,” says Maurizio Frizziero, Cooling Innovation and Strategy Director at Schneider Electric. “Legacy approaches to powering and cooling data centres can no longer provide the means for a sustainable future – something many of us have discussed for years. For us to meet the demands of a greener world and reduce the consumption of precious resources such as energy and water, innovation – especially where AI and high-density computing are concerned – is vital.”
Indeed, around 80% of the world’s carbon dioxide (CO2) emissions are now linked to the production and consumption of energy, while current fossil-based energy systems continue to incur losses of around 60%.
“The fact is that without change – be that via data centre design and operations, or through new breakthroughs in power and cooling systems architectures – we cannot meet the demands of net zero,” adds Maurizio.
Embracing AI, GPUs and sustainable cooling across data centres
While the acceleration of digital technologies once drove the first wave of growth in data centre physical infrastructure, disruptive trends such as the emergence of new, more powerful GPUs, alongside the growth of Generative AI, machine learning and quantum computing applications, are now driving data centre, data transfer and connectivity requirements into new territory.
“Over the next 20 years, for example, we at Schneider Electric expect there to be at least twice the amount of data centre capacity that exists today, with AI inference, large language models (LLMs) and high-density workloads making up nearly 20% of all capacity – around 18GW or more,” explains Maurizio. “Another key trend is the standardisation of designs and systems architectures, and the localisation of those designs based on regional considerations – such as external temperature and humidity levels – with specific technologies used to address them, alongside local standards in areas such as efficiency (PUE) and global-warming reduction through the transition to new refrigerants. From a technology perspective, this includes the migration from indirect air economisers, cooling towers and refrigerant-based systems to waterless, more efficient and liquid-cooling-ready chilled water systems.”
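For context, the PUE (Power Usage Effectiveness) metric Maurizio references is simply total facility energy divided by the energy delivered to the IT equipment. The sketch below uses purely illustrative values, not Schneider Electric figures:

```python
# PUE = total facility energy / IT equipment energy; 1.0 is the ideal.
# The kWh values below are illustrative assumptions only.
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: overhead factor above pure IT load."""
    return total_facility_kwh / it_kwh

print(pue(total_facility_kwh=1_800, it_kwh=1_000))  # 1.8 - a legacy air-cooled site
print(pue(total_facility_kwh=1_200, it_kwh=1_000))  # 1.2 - an efficient chilled-water design
# Everything above 1.0 is overhead: cooling, power-distribution losses, lighting.
```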
Greater standardisation is also accelerating the trend of co-development between the main players in the data centre ecosystem and the wider industry. Where once organisations collaborated to gain a competitive edge, the same companies are now working together to ensure that AI workloads are supported with the most energy-efficient and sustainable cooling solutions.
Liquid cooling – the AI data centres of the future
In terms of topology, there appears to be a growing bias today towards single-phase Direct-to-Chip liquid cooling. This is because of its ease of deployment, its ability to manage the higher power densities associated with new chips and CPUs, and its capability to handle ‘hybrid’ systems in which air- and liquid-cooled servers coexist. That said, innovation has long been a hallmark of the data centre industry, and there’s no doubt that this could change again quickly.
“NVIDIA’s new Blackwell GPU, for example, has surpassed all expectations where performance and energy consumption are concerned, allowing users to build and run real-time generative AI applications at up to 25 times lower cost and energy consumption than its predecessor,” he continues. “In combination with a portfolio that supports liquid cooling, Schneider Electric has established a new partnership with NVIDIA to optimise data centre infrastructure solutions for extreme-density clusters, leveraging high-power distribution and liquid-cooling systems to pave the way for the advancement of edge AI applications.”
Structural changes in servers and architectures
With energy efficiency and sustainability now a critical priority for data centre operators – all the more so in the era of AI – the shift towards liquid cooling is no longer merely a trend or a topical point of discussion, but a practical requirement.
Maurizio continues: “Traditional air heat rejection systems, for example, were once effective for cooling legacy data centre densities of 30kW per rack or less, but as rack densities exceed 50kW and grow towards the anticipated 100kW or more, air cooling alone is no longer viable. Air cooling will remain in place to dissipate the portion of heat not captured by liquid cooling, and to cool other equipment – such as UPSs and electrical devices – as well as the other rooms that support the white space.”
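The physics behind those thresholds can be sketched with the standard sensible-heat relation Q = ṁ·cp·ΔT. The assumptions below – a 10K coolant temperature rise and textbook fluid properties – are illustrative, not Schneider Electric design values:

```python
# Volume of coolant needed to carry away a rack's heat at a 10K temperature
# rise, using Q = m_dot * cp * dT. Properties are textbook values at ~20C.
CP_AIR, RHO_AIR = 1005.0, 1.2        # J/(kg*K), kg/m^3
CP_WATER, RHO_WATER = 4186.0, 998.0  # J/(kg*K), kg/m^3

def volume_flow(heat_w: float, cp: float, rho: float, dt_k: float = 10.0) -> float:
    """Volumetric flow (m^3/s) required to absorb heat_w watts at a dt_k rise."""
    return heat_w / (cp * dt_k) / rho

for rack_kw in (30, 50, 100):
    air = volume_flow(rack_kw * 1e3, CP_AIR, RHO_AIR)
    water = volume_flow(rack_kw * 1e3, CP_WATER, RHO_WATER)
    print(f"{rack_kw:>3}kW rack: {air:5.2f} m^3/s of air vs {water * 1e3:4.2f} L/s of water")
```

At 100kW, a single rack would need roughly 8m³/s of air (around 17,600 CFM) but only about 2.4 litres per second of water – the physics driving the shift to liquid cooling.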
Higher computing and processing power, particularly the emergence of ever more powerful GPUs and AI-focused processors, has resulted in a step-change in data centre systems architecture. In particular, the landscape requires that liquid-cooled servers are backed by an end-to-end, vendor-agnostic approach, one which leverages both air and liquid cooling methodologies.
“New design guidelines – whereby hybrid or purpose-specific architectures are used to handle the transition and to support servers with different requirements – mean there are challenges around operating temperatures, which can only be solved through higher-temperature heat rejection,” he says. “Moreover, designing data centre architectures without standards can result in wide-ranging variations, the impact being an unnecessarily wide portfolio of products needed to meet server requirements.”
A strategy for sustainable cooling deployment
Given the varying nature of server and architecture designs, a strategic approach to liquid cooling is vital.
“Firstly, this approach must be agnostic, ensuring interoperability with different technologies. It’s also important to remember that both legacy and newer data centres have varying cooling requirements, so it’s generally advisable that server manufacturers make the technical and chip-density decisions,” Maurizio says.
“Secondly, approaching liquid cooling from an end-to-end perspective is paramount: it requires a combination of differing building blocks, and deploying complete liquid-cooled or hybrid-cooled systems can each present challenges. Transitioning to liquid cooling successfully is, therefore, dependent on achieving a design with one main goal in mind – density versus efficiency, for example.
“Thirdly, differentiation is key to building a strong liquid cooling strategy. Focusing on a single element or approach, such as Direct-to-Chip, will enable operators to improve availability and simplify the design of their chosen liquid cooling methodology.”
As Schneider Electric looks to the future, liquid cooling undoubtedly requires a host of considerations and a wide portfolio of components. These include, but are not limited to, heat rejection units – such as high-temperature chillers and free coolers – coolant distribution units (CDUs), and a specific solution for the white space, be it Direct-to-Chip, chassis immersion or tank immersion.
“What’s clear, however, is that the entire data centre industry has a responsibility to combine density growth with efficiency, and it can do this by championing the adoption of sustainable cooling methods. The four main sustainability benefits of liquid cooling are reduced energy usage, lower greenhouse gas (GHG) emissions, zero water dependency and waste, and a holistic end-to-end view,” he says. “When it comes to minimising energy usage and transitioning to heat reuse, for example, operators can leverage the opportunities afforded by district heating and heat pumps, and use liquid cooling to increase energy efficiencies. Reducing GHGs will involve designs based on chilled water, migrating to new refrigerants and, in the near future, to natural fluids.”
Lastly, enabling a holistic end-to-end view will require Green Premium certification – adopting a circular economy approach during the design phase of data centres, completing life cycle assessments and taking an end-to-end view throughout the process.
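One way to quantify the heat-reuse opportunity Maurizio describes is The Green Grid’s Energy Reuse Effectiveness (ERE) metric, which discounts energy exported for reuse. The values in this sketch are illustrative assumptions:

```python
# ERE = (total facility energy - energy exported for reuse) / IT energy.
# Lower is better; with enough heat export, ERE can drop below PUE.
def ere(total_kwh: float, reused_kwh: float, it_kwh: float) -> float:
    return (total_kwh - reused_kwh) / it_kwh

print(ere(total_kwh=1_200, reused_kwh=0, it_kwh=1_000))    # 1.2 - no heat reuse
print(ere(total_kwh=1_200, reused_kwh=400, it_kwh=1_000))  # 0.8 - heat exported to district heating
# Liquid cooling raises return temperatures, making the exported heat far
# more useful to district-heating networks and heat pumps.
```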
“As AI becomes the norm within the data centres of the future, each of these steps will enable operators to cool their servers and architectures with sustainability front of mind and without compromising efficiency or performance,” Maurizio concludes.