The Future is Open Source
Open source software has been an important aspect of technological development for decades. The ability to create, peer review and release code and applications to the community at large speeds the pace of innovation. However, until relatively recently, it was rare to find organisations and industries taking the same approach to hardware.
In 2009, Facebook undertook a fundamental transformation of the way the social networking giant approached its digital infrastructure. “As they began to outgrow their infrastructure, they made the decision to start building their own data centres,” explains Steve Helvie, VP of channel development at the Open Compute Project Foundation (OCP). “As they started to look at that process, they outlined a number of factors for building a facility from the ground up. They were looking at what they could get rid of - what a data centre doesn't need - how to run servers hotter, and so on.” A small team spent two years building a hyperscale data centre which ended up being 38% more energy efficient and 24% less expensive to run than the company’s previous facilities. Following the project’s success, Facebook spun out the Open Compute Project in collaboration with Intel, Rackspace, Goldman Sachs and Andy Bechtolsheim.
“They took the process one step further; they open-sourced their results to a foundation and a community, which was pretty much unheard of at the time,” Helvie continues. “At the time, open source software was quite prevalent, but no one had open-sourced hardware and data centre designs like that.”
Today, the OCP’s board of directors comprises seven members - six companies and Bechtolsheim. In 2020, revenues derived from OCP designs rose beyond $5bn (excluding revenues reported by its member companies), and that figure is forecast to reach $11.8bn by 2023. Together, the OCP’s members are working to change the face of the data centre industry in a very permanent way. “We currently have over 200 companies, and over 8,000 engineers working across 25-30 different common problems throughout the data centre,” says Helvie.
The OCP Process
The OCP’s contribution process is similar to established methodologies for open-sourcing software, but exercises tighter control over the process than, for example, a code-hosting platform like GitHub. “Companies will make a contribution to our open community. Contributors will come together and submit a specification for a cable, switch, rack, etc. That specification is then circulated within its particular project community, which then votes on it,” explains Helvie. “Any piece of approved hardware that ends up on our website has been through a really rigorous review process.”
Once approved, the specification is made into a physical product. In order to prevent itself from becoming a vast library of hardware specifications that aren’t available on the market, the OCP adds an interesting stipulation. “One of the things that we're quite diligent about is that, if you submit a contribution to the OCP, you have to have a supply chain ready to deliver the product within 100 days of the specification being approved,” he says.
Simple, Elegant and Efficient
The design philosophy behind OCP innovations is one of extreme simplicity and efficiency. “A lot of companies out there - particularly software-as-a-service companies - don't want or need a Tier III data centre. They're running hybrid clouds and want their private cloud to look very similar to their public cloud environment. Most of the public cloud out there is running open hardware. Huge, over-engineered Tier III data centres just aren't necessary,” Helvie says. “We approach an OCP-optimised data centre from a point of view where, instead of packing in additional systems, redundancies, bells and whistles, we ask what it is we don't need.”
For example, a traditional data centre server rack might use eight 40mm fans for in-rack cooling. An OCP rack, by contrast, uses just two 80mm fans. The larger fans can move the same volume of air at a much lower speed and, because fan power scales with the cube of fan speed (the fan cube law), they keep the rack at the same temperature while cutting the energy consumed by up to seven-eighths.
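The arithmetic behind that claim can be sketched in a few lines of Python. This is a back-of-the-envelope illustration only, assuming (as the article does) that the larger fans can deliver the rack's required airflow at half the speed of the smaller ones:

```python
def fan_power_fraction(speed_fraction: float) -> float:
    """Fan cube (affinity) law: fan power scales with the cube of fan speed.

    Returns the fraction of full-speed power drawn at a given
    fraction of full speed.
    """
    return speed_fraction ** 3

# Assumption: the 80mm fans move the same air at half the speed
# the 40mm fans needed. Power then falls to (1/2)^3 of the original:
print(fan_power_fraction(0.5))  # 0.125, i.e. a 7/8 (87.5%) reduction
```

The cube relationship is why even a modest reduction in required fan speed yields an outsized energy saving.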
OCP hardware is designed to have as few components as possible, with a firm emphasis on modularity. “OCP hardware needs to be tool-less. Technicians need to be able to repair or replace a part of a broken server without using a tool, and complete the process in under three to four minutes,” Helvie says. As a result, an OCP data centre needs far fewer technicians per server than a traditional facility. Facebook’s open source-designed facilities - like its hyperscale data centres in Prineville, Oregon - employ one technician for every 40,000 servers.
Creating a Circular Economy
With data centres expected to account for 8% of the world’s energy demands by 2030, the need for the industry to decarbonise its facilities has never been greater. Brian Johnson, global data centre leader at ABB, notes that, “Although data centres have managed to keep their collective power demand at about 2% of the world’s electricity use [so far], their energy consumption could grow exponentially as demand increases. Therefore, data centers will need to implement every possible strategy to maximize their energy efficiency.” Open source design principles have the potential to play a significant role in that process, not only by producing increasingly efficient hardware, but also by driving the industry towards a more circular economy.
With upgrade cycles getting shorter, as technologies like AI and high performance computing (HPC) drive data centre operators to regularly refit in order to increase density, the industry has a huge problem with e-waste. “Hyperscale data centre operators are getting rid of thousands and thousands of used servers every year,” says Helvie. Operators like Facebook, Google and Microsoft replace thousands of server racks every year. “Most of these servers leaving hyperscale data centres at the end of a hardware cycle are less than three years old.”
One of the OCP’s member companies, ITRenew, has worked for years in the e-waste disposal space. “Traditionally, they would remove a server from the system, disassemble it and move it into the secondary market,” Helvie explains. “What they've started doing recently, in response to removing whole racks at a time from hyperscalers like Facebook, is use a team of engineers to sanitise and install new applications on a whole rack of servers for resale.” ITRenew then sells these racks to companies looking to reduce their carbon footprint. “If you're a company with emissions targets, buying second user hardware is going to go a long way towards meeting those goals,” says Helvie.
The possibilities for sustainable, circular economic practice don’t stop there. In November, ITRenew partnered with Blockheating on a new project in Amsterdam. Second-user servers recovered from a hyperscale facility are packaged into edge-scale container units and refitted with liquid cooling technology; the excess heat from these new micro data centres is then used to heat local greenhouses.
The Netherlands is home to more than 3,700 hectares of commercial greenhouse space. According to Blockheating, the excess heat is enough to help grow “tonnes” of tomatoes every year in a single greenhouse, while even further reducing the carbon footprint of the data centres.
Chayora TJ1: OCP Ready
In October of last year, Chayora’s TJ1 hyperscale facility became the first data centre in China to be OCP Ready. Located in the Chinese city of Tianjin, the facility has a capacity of 25,000 racks and up to 300 MVA of gross power. TJ1 is also strategically located close enough to Beijing’s central business district to provide round-trip latencies averaging less than two milliseconds. It will have an average power usage effectiveness (PUE) rating of 1.2, and be cloud and carrier neutral.
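To put that PUE figure in context: power usage effectiveness is defined as total facility power divided by the power delivered to IT equipment. A minimal sketch, using hypothetical illustrative numbers rather than Chayora's actual load data:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    A PUE of 1.0 would mean every watt goes to IT; anything above 1.0
    is overhead from cooling, power distribution losses, lighting, etc.
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical example: a facility drawing 12MW in total
# to power 10MW of IT load has a PUE of 1.2 - i.e. 20%
# overhead on top of the IT power itself.
print(pue(12_000, 10_000))  # 1.2
```

For comparison, the industry-wide average PUE reported in recent Uptime Institute surveys sits noticeably higher, which is why a 1.2 design target is a selling point for a hyperscale facility.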
All these criteria and more have combined to earn TJ1 its OCP Ready status.
“An OCP Ready data centre has been through a thorough peer review process and achieved recognition for implementing the industry’s best practices for efficiency and scale. These facilities provide cost and efficiency-optimised operation now and well into the future,” commented Mark Dansie, leader of the OCP Ready program.
Helvie added that, “As the momentum for open hardware designs continues to grow in North Asia, having data centers that are optimised for OCP designs becomes increasingly important. Having Chayora as our first OCP Ready data center in China ensures those enterprises deploying OCP solutions that they will have a strong data center operator who understands open hardware and is committed to openness, scale and efficiency.”
The Future is Open Source
“Over the last three years, we've seen open source adoption move from cloud service providers into telecoms and now down again into the large scale enterprise. As with most hardware cycles, open source adoption takes time. It's not like a software update I can download and have up and running in an afternoon,” Helvie reflects. However, the pace of adoption the OCP is experiencing in the enterprise space is faster than expected. “It's still a relatively new idea, this approach to buying open hardware and using open source concepts. It's really interesting to watch this industry shift from open software towards open hardware and how it's impacting the data centre.”
3 ways crypto mining is impacting the data centre industry
Around the world - particularly in Russia, Eastern Europe and China - the global rise in cryptocurrency values has been driving the industrialisation of the mining process en masse. The trend has been bubbling away for several years, as the home mining rig has largely found itself edged out by hyperscale server farms comprising some of the largest data centres anywhere in the industry - all designed to mine crypto.
The demands placed on a facility built and run as a mining operation are somewhat different to those placed on a hyperscale cloud facility or enterprise data centre. Reliability isn’t so much of an issue; if a mine goes down for a few hours, money is lost, but your data centre won’t take half the websites in Western Europe down along with it.
On the flipside, density and cooling are much, much more important. To make a crypto mining operation profitable, the value of the cryptocurrency you harvest (be it Ethereum, Dogecoin, or the perennial Bitcoin) needs to exceed what you’re paying for electricity by a significant margin. As a result, some of the most efficient cooling and hyper-dense rack architecture of the past few years - like two-phase liquid cooling - originated as crypto mining solutions. Now, hyperscale cloud operators in particular are recognising the benefits of these innovations and applying them to other aspects of the data centre industry.
1. Liquid Cooling
Crypto data centres have always been as dense as possible, with their racks running at maximum capacity all day, all year round. By contrast, the average enterprise or cloud data centre isn’t necessarily running at peak capacity 24/7; workloads fluctuate with demand. However, as that demand has skyrocketed over the past year in particular, cloud and enterprise operators have looked to crypto’s preference for liquid cooling as a way to run data centres closer to the ragged edge of performance than ever before.
One example of this is LiquidStack. The Hong Kong startup makes a revolutionary two-phase liquid cooling solution for data centres, which was developed over a number of years inside Bitfury, one of the world’s leading crypto miners. “Bitfury is sharing our knowledge with the global data center community and we are excited that Microsoft and other internet giants can benefit from our years of experience and investment to best practice liquid cooling,” said Joe Capes, CEO of LiquidStack in an interview with Data Centre Magazine.
Now, LiquidStack is going mainstream, with credible reports that Microsoft is looking to adopt its DataTank solutions across its ever-expanding portfolio of hyperscale cloud regions.
2. Denser HPC
One of the issues that liquid cooling solves is how to create ultra-dense server racks that can function at high temperatures. Crypto miners have been grappling with this problem for about a decade now, and the lessons they’ve learned are being happily adopted by the burgeoning data centre HPC market - which is swelling in response to greater AI adoption and increasingly-sizable data sets.
With the density that mining rigs can achieve, server architects are cramming hundreds of kilowatts into individual racks - although it should be noted that this is still relatively rare. Nevertheless, a 2020 survey from the Uptime Institute found that the average density of data centre racks is growing rapidly.
“We expect density to keep rising. Our research shows that the use of virtualization and software containers pushes IT utilization up, in turn requiring more power and cooling. With Moore’s law slowing down, improvements in IT can require more multi-core processors and, consequently, more power consumption per operation, especially if utilization is low. Even setting aside new workloads, increases in density can be regarded as a long-term trend,” said the report.
In 2020, average rack densities of 20kW and higher became a reality for many data centre operators.
3. Sustainability Concerns
Now for the more worrying news. The industrial scale and massive power consumption inherent to the crypto mining business - and the negative attention that miners are now starting to receive from governments - could point towards a concerning future for data centre operators in the wider industry.
Last week, the Chinese government announced that it would open an inquiry into the participation of Beijing’s largest data centre operators - which include the country’s three largest telecom firms - in crypto mining. At a time when the PRC government is attempting a significant pivot towards sustainability, the heavy power draw of crypto mining activities may be one more hurdle than China cares to deal with.
The Indian government is mulling a blanket criminalisation of all crypto mining in the country and, in the US, the State of New York is also looking into tightening regulatory restrictions on the industry.
While crypto mining data centres are not the same as cloud or enterprise facilities, operators should be careful lest the ire of lawmakers be the latest trend to make its way from the crypto sector into the mainstream.