The Future is Open Source
Open source software has been an important aspect of technological development for decades. The ability to create, peer review and release code and applications to the community at large speeds the pace of innovation. However, until relatively recently, it was rare to find organisations and industries taking the same approach to hardware.
In 2009, Facebook undertook a fundamental transformation of the way the social networking giant approached its digital infrastructure. “As they began to outgrow their infrastructure, they made the decision to start building their own data centres,” explains Steve Helvie, VP of channel development at the Open Compute Project Foundation (OCP). “As they started to look at that process, they outlined a number of factors for building a facility from the ground up. They were looking at what they could get rid of - what a data centre doesn't need - how to run servers hotter and so on.” A small team spent two years building a hyperscale data centre which ended up being 38% more energy efficient and 24% less expensive to run than the company’s previous facilities. Following the project’s success, Facebook spun out the Open Compute Project in collaboration with Intel, Rackspace, Goldman Sachs and Andy Bechtolsheim.
“They took the process one step further; they open-sourced their results to a foundation and a community, which was pretty much unheard of at the time,” Helvie continues. “At the time, open source software was quite prevalent, but no one had open-sourced hardware and data centre designs like that.”
Today, the OCP’s board of directors comprises seven members - six companies and Bechtolsheim. In 2020, revenues from OCP hardware rose beyond $5bn (excluding revenues reported by its board member companies), and the forecast for 2023 is $11.8bn. The OCP community is working to change the face of the data centre industry in a very permanent way. “We currently have over 200 companies, and over 8,000 engineers working across 25-30 different common problems throughout the data centre,” says Helvie.
The OCP Process
The OCP’s contribution process resembles established methodologies for open-sourcing software, but exercises tighter control than, for example, an open repository on a platform like GitHub. “Companies will make a contribution to our open community. Contributors will come together and submit a specification for a cable, switch, rack, etc. That specification is then circulated within its particular project community, which then votes on it,” explains Helvie. “Any piece of approved hardware that ends up on our website has been through a really rigorous review process.”
Once approved, the specification is made into a physical product. In order to prevent itself from becoming a vast library of hardware specifications that aren’t available on the market, the OCP adds an interesting stipulation. “One of the things that we're quite diligent about is that, if you submit a contribution to the OCP, you have to have a supply chain ready to deliver the product within 100 days of the specification being approved,” he says.
Simple, Elegant and Efficient
The design philosophy behind OCP innovations is one of extreme simplicity and efficiency. “A lot of companies out there - particularly software-as-a-service companies - don't want or need a Tier III data centre. They're running hybrid clouds and want their private cloud to look very similar to their public cloud environment. Most of the public cloud out there is running open hardware. Huge, over-engineered Tier III data centres just aren't necessary,” Helvie says. “We approach an OCP-optimised data centre from a point of view where, instead of packing in additional systems, redundancies, bells and whistles, we ask what it is we don't need.”
For example, a traditional data centre server rack might use eight 40mm fans for in-rack cooling. An OCP rack, by contrast, uses just two 80mm fans. The larger fans can move the same air at a lower speed, keeping the rack at the same temperature while cutting fan energy consumption by as much as seven eighths (per the fan affinity, or “cube”, law).
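The “cube” law cited here is one of the fan affinity laws: a fan’s power draw scales with the cube of its rotational speed. A minimal sketch of the arithmetic (illustrative only; it ignores differences in fan efficiency, static pressure and duty):

```python
# Fan affinity (cube) law: power draw scales with the cube of rotational speed.
def relative_power(speed_ratio: float) -> float:
    """Power drawn at the new speed, as a fraction of the original power."""
    return speed_ratio ** 3

# A larger fan can move the same volume of air at a lower speed. Running at
# half speed, power falls to (1/2)^3 = 1/8 of the original - the "seven
# eighths" saving described above.
half_speed_power = relative_power(0.5)
saving = 1 - half_speed_power
print(f"power fraction: {half_speed_power:.3f}, saving: {saving:.3f}")
```

In practice the achievable saving depends on how much slower the larger fans can run while still meeting the rack’s airflow requirement; the cube law is why even modest speed reductions pay off disproportionately.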
OCP hardware is designed to have as few components as possible, with a firm emphasis on modularity. “OCP hardware needs to be tool-less. Technicians need to be able to repair or replace a part of a broken server without using a tool, and complete the process in under three to four minutes,” Helvie says. As a result, a single technician in an OCP data centre can look after far more servers than in a traditional facility. Facebook’s open-source designed facilities - like its hyperscale data centres in Prineville, Oregon - employ one technician for every 40,000 servers.
Creating a Circular Economy
With data centres expected to account for 8% of the world’s energy demands by 2030, the need for the industry to decarbonise its facilities has never been greater. Brian Johnson, global data centre leader at ABB, notes that, “Although data centres have managed to keep their collective power demand at about 2% of the world’s electricity use [so far], their energy consumption could grow exponentially as demand increases. Therefore, data centers will need to implement every possible strategy to maximize their energy efficiency.” Open source design principles have the potential to play a significant role in that process, not only by designing increasingly efficient hardware, but also by driving the industry towards adopting a more circular economy.
With upgrade cycles getting shorter, as technologies like AI and high performance computing (HPC) drive data centre operators to regularly refit in order to increase density, the industry has a huge problem with e-waste. “Hyperscale data centre operators are getting rid of thousands and thousands of used servers every year,” says Helvie. Operators like Facebook, Google and Microsoft replace thousands of server racks every year. “Most of these servers leaving hyperscale data centres at the end of a hardware cycle are less than three years old.”
One of the OCP’s member companies, ITRenew, has worked for years in the e-waste disposal space. “Traditionally, they would remove a server from the system, disassemble it and move it into the secondary market,” Helvie explains. “What they've started doing recently, in response to removing whole racks at a time from hyperscalers like Facebook, is use a team of engineers to sanitise and install new applications on a whole rack of servers for resale.” ITRenew then sells these racks to companies looking to reduce their carbon footprint. “If you're a company with emissions targets, buying second user hardware is going to go a long way towards meeting those goals,” says Helvie.
The possibilities for sustainable circular economic practice don’t stop there. In November, ITRenew partnered with Blockheating on a new project in Amsterdam. The project takes second-user servers recovered from a hyperscale facility, packages them into edge-scale container units refitted with liquid cooling technology, and uses the excess heat from the resulting micro data centres to warm local greenhouses.
The Netherlands is home to more than 3,700 hectares of commercial greenhouse space. According to Blockheating, the excess heat is enough to help grow “tonnes” of tomatoes every year in a single greenhouse, while even further reducing the carbon footprint of the data centres.
Chayora TJ1: OCP Ready
In October of last year, Chayora’s TJ1 hyperscale facility became the first data centre in China to be recognised as OCP Ready. Located in the Chinese city of Tianjin, the facility has a capacity of 25,000 racks and up to 300 MVA of gross power. TJ1 is also strategically located close enough to Beijing’s central business district to deliver average round-trip latencies of less than 2 milliseconds. It will have an average power usage effectiveness (PUE) rating of 1.2, and be cloud and carrier neutral.
All these criteria and more have combined to earn TJ1 its OCP Ready status.
“An OCP Ready data centre has been through a thorough peer review process and achieved recognition for implementing the industry’s best practices for efficiency and scale. These facilities provide cost and efficiency-optimised operation now and well into the future,” commented Mark Dansie, leader of the OCP Ready program.
Helvie added that, “As the momentum for open hardware designs continues to grow in North Asia, having data centers that are optimised for OCP designs becomes increasingly important. Having Chayora as our first OCP Ready data center in China assures those enterprises deploying OCP solutions that they will have a strong data center operator who understands open hardware and is committed to openness, scale and efficiency.”
The Future is Open Source
“Over the last three years, we've seen open source adoption move from cloud service providers into telecoms and now down again into the large scale enterprise. As with most hardware cycles, open source adoption takes time. It's not like a software update I can download and have up and running in an afternoon,” Helvie reflects. However, the pace of adoption the OCP is experiencing in the enterprise space is faster than expected. “It's still a relatively new idea, this approach to buying open hardware and using open source concepts. It's really interesting to watch this industry shift from open software towards open hardware and how it's impacting the data centre.”