Data centres need electricity to run their equipment and to keep their machines cool. While just how much electricity all these data centres use is up for debate, data storage and transmission in and from data centres are estimated to account for around 1% of global electricity use.
This share has hardly changed since 2010, despite the number of internet users doubling and global internet traffic increasing 15-fold since then, according to the International Energy Agency.
Many data centres are “colocation” centres, which are shared by users and managed by specialist companies. As specified on DW.com, these make up the majority of data centres, “but it is the mammoth ‘hyperscale’ data centres owned by Big Tech companies that get the most attention”.
As businesses get rid of their own on-site servers and instead rent space on cloud servers to focus on their core businesses without worrying about IT issues, ‘it is frequently cheaper and more efficient to farm out the costs of purchasing and maintaining such equipment to outside companies’.
Modular and mobile data centres
With the increasing demands of virtualised, high-density and cloud computing environments, modularity is now at the forefront of contemporary data centre construction: it provides flexibility and a scalable approach to data centre planning and design, and eliminates the need for traditional bricks-and-mortar builds.
Speaking with gulfbusiness.com, Sanjay Kumar Sainani, Global SVP and CTO of Huawei Digital, claims that large-scale data centre power solutions – requiring segment-based construction, distributed bidding, and onsite installation and testing – are being threatened by fully modular solutions, as they shorten construction time and improve O&M efficiency: “Traditional construction methods involve multiple vendors and complicated engineering designs, which can take months to draw up, usually resulting in complex communications during construction and multiple interface standards once the job is done. This is far from conducive to efficient, convenient maintenance.”
Huawei’s FusionPower6000 3.0, also known as PowerPod, provides power supply and distribution solutions for large-scale data centres. It is convergent and prefabricated in the factory, with AI-based management ensuring steady operations. The solution helps power supply and distribution systems move towards fully digital Operations and Maintenance (O&M).
With modular, hot-swappable components all prefabricated in the factory, Sainani claims that “Time To Market (TTM) is slashed by 75% and maintenance is simplified, while full-link convergence reduces the physical footprint by more than 30% and power link efficiency also reaches up to 95.5% to supply power in an environmentally-friendly way”.
Hyperscale data centres are business-critical facilities that support robust, scalable applications and are often associated with big data-producing companies such as Google, Amazon, Facebook, IBM, and Microsoft.
A standard data centre is either a space or a building that houses a company’s IT equipment and servers. The company can then use its data centre resources to operate its business or serve those resources up to the public as a service.
The best way to compare hyperscale and enterprise data centres is to look at their scale and performance. Firstly, in terms of size, hyperscale data centres are significantly larger than enterprise data centres and, because of the advantages of economies of scale and custom engineering, they significantly outperform them, too. Technically speaking, a hyperscale data centre should exceed 5,000 servers and 10,000 square feet. Some are as large as multiple football fields with thousands of servers running 24 hours a day, 365 days a year.
Renewable energy consultants Blanchard claim that “economies of scale mean that larger data centres are more energy-efficient than smaller ones”.
Electric car manufacturer Tesla is building a data centre as part of its US$5bn battery plant in Nevada. Meanwhile, Gigafactory Texas, a US manufacturing hub for the Model Y and the future home of the Cybertruck, will also serve as the company’s new global headquarters, covering 2,500 acres along the Colorado River with over 10 million sq ft of factory floor.
Why do data centres use so much energy?
According to the German statistics office, there are well over seven million data centres in the world, with more than 2,670 in the US alone. The UK follows with 452, then Germany with 443, and then China, the Netherlands, Australia, Canada, France and Japan.
Data centres require vast amounts of electricity to run their equipment, and a great deal more to keep those machines cool. Just how much electricity all these data centres use is up for debate.
We asked John Booth, Managing Director of Carbon3IT Ltd – an organisation that provides data centre support services such as ISO/IEC management standards, EU Code of Conduct for Data Centres (Energy Efficiency), sustainability and energy efficiency consultancy and training services – just how much energy is used for data centres in the UK.
“The sad fact is that we don’t really know, because we don’t really know how many data centres there are, and this is because the term data centre means different things to different people. They can range from a cloud hyperscaler of 50MW, via colocation sites of around 10MW, all the way down to what we call distributed IT, which can be as little as 50kW – a small server room. What we do know is the figure from the Climate Change Agreement for Data Centres 4th Period, which was 3.8TWh, or about 1% of the UK’s total electricity consumption.”
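The facility sizes Booth mentions can be turned into rough annual energy figures. The sketch below makes the simplifying (and hypothetical) assumption that each facility draws its rated power continuously; real loads vary, so these are upper-bound illustrations rather than measured figures.

```python
# Rough annual energy for the facility sizes Booth mentions, assuming
# (hypothetically) continuous operation at the rated power draw.
HOURS_PER_YEAR = 8760  # 365 days * 24 hours

def annual_energy_gwh(power_mw: float) -> float:
    """Annual energy in GWh for a facility drawing power_mw continuously."""
    return power_mw * HOURS_PER_YEAR / 1000  # MWh -> GWh

facilities = {
    "cloud hyperscaler (50 MW)": 50.0,
    "colocation site (10 MW)": 10.0,
    "distributed IT / small server room (50 kW)": 0.05,
}

for name, mw in facilities.items():
    print(f"{name}: ~{annual_energy_gwh(mw):,.2f} GWh/year")
```

At full rated load, a single 50MW hyperscaler would use around 438 GWh a year, which is why even a handful of such sites dominates any national estimate.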
But Booth reiterates that this figure has to be treated with considerable caution, because it only covers the commercial data centres present in the UK, and even then, not all of them. For instance, it specifically excludes enterprise data centres: those that belong to businesses, government, academia and so on.
“Research conducted in 2017 by Carbon3IT Ltd after the CCA 2nd period suggests that this figure is woefully incorrect and that the actual energy usage for UK Data Centres is closer to 41TWh, almost 12%. This is a truly alarming figure and, sooner or later, will need to be addressed by the Government and the Sector”.
When asked what can be done to reduce energy, Booth added: “The best approach is to adopt the best practices as contained within the EU Code of Conduct for Data Centres (Energy Efficiency) or the CLC TR EN 50600 99-1, which contains the same best practices but reformatted.
“These cover measures that can be taken across management, IT procurement, cooling, power systems, other data centre systems, design and build and, finally, monitoring and measurement. You cannot manage what you cannot measure,” said Booth.
Creating more efficient data centres with lower Power Usage Effectiveness (PUE)
Darren Watkins, Managing Director for VIRTUS Data Centres, said: “For a long time, we have recognised the need to produce and operate more efficient data centres to ensure we deliver the right service to our customers, at the right cost”.
VIRTUS, a low-cost colocation provider, is at the forefront of providing supporting infrastructure for the most powerful IT deployments. The company has spent many years refining its data centre designs to optimise performance and minimise PUE.
“We strive to produce a PUE of 1.0 and achieve varying PUEs across our estate, all of which – according to the Uptime Institute’s annual survey – are well below the average of 1.58. We have deployed well-established technology developments to tackle power usage such as liquid, evaporative and adiabatic cooling. Direct chip liquid cooling can offer some of the lowest PUEs possible, as the temperature at which such systems operate means that no mechanical or adiabatic cooling is required,” said Watkins.
More compute power may seem like it will result in significantly more power usage, but in fact, Watkins added that, “as it uses and produces higher temperatures, this leads to greater efficiency – not only in PUE, but also in other resources such as water”.
“Higher powered compute often uses greater intelligence in their software, so there is an opportunity to innovate to lower the PUE further.”
Watkins' opinion is that, in the future, it “may even mean that this kind of software could enable the removal of generators or UPSs completely”.
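The PUE figures discussed above follow from a simple ratio: total facility power divided by the power delivered to the IT equipment. A minimal sketch, with illustrative load figures (the 1.58 example mirrors the survey average quoted earlier, not any specific VIRTUS site):

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# A PUE of 1.0 would mean every watt reaches the IT load; cooling and power
# distribution overhead push the ratio higher.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Compute Power Usage Effectiveness from facility and IT power draws."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative: a 1 MW IT load with 580 kW of cooling/distribution
# overhead gives the industry-average PUE quoted above.
print(f"PUE = {pue(1580, 1000):.2f}")  # PUE = 1.58
```

The ratio makes clear why low-overhead cooling matters: every kilowatt saved in cooling or distribution moves the facility closer to the ideal PUE of 1.0.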