The changing face of remote HPC

Maurizio Davini, CTO at the University of Pisa, discusses the changing demands placed on the university’s IT infrastructure.

Increasingly, the data centre is serving as the backbone of high performance computing (HPC) efforts around the world - an area of the industry that has undergone radical change over the past few years. Now, the combined effects of increased AI adoption and remote research driven by the COVID-19 pandemic are rewriting the rulebook once again.

“When it comes to HPC, you experience waves of different types of demand,” explains Maurizio Davini, who has served as the Chief Technology Officer at the University of Pisa since 1998, back when the job came with the far humbler title of IT Manager. “Five years ago, everything we were seeing was CFD, CAE and automotive simulations. Then came chemistry, and now - during the pandemic - genomics has obviously exploded in terms of the number of requests we get for our HPC resources.” Just as the applications for HPC, and the types of workloads required to run those applications, are always changing, so too is the infrastructure used to support them. “HPC requests are always changing, so we need to design a new kind of infrastructure. AI workloads and genomic analysis are, for example, completely different from CFD. We have to be flexible,” Davini adds.

From quantum chemistry to nanophysics and genome sequencing, researchers at the University of Pisa have come to rely increasingly on the university’s HPC resources to broaden the scope of human knowledge. Over the past few years, the university’s data centre has also allowed scholars to branch out into newer fields, like big data analytics, data visualisation and machine learning. “This growing array of HPC demands is creating new challenges for the University of Pisa’s IT Centre,” notes a case study by Intel, which provides software stack management tools to Davini and his department to better orchestrate an increasingly complex array of tasks.

From the Ground Up: The Whiteboard Approach  

The University of Pisa’s IT Centre has been working on a sweeping series of transformation initiatives since 2017, not only to better orchestrate its HPC workloads, but to more successfully support the multitude of IT requirements that come with running a modern university. “Before 2017, the IT infrastructure at the University of Pisa was distributed across various university departments. Our restructuring project involved redrawing that architecture with the construction of our new data centres, which saw us consolidate our operations from 20 small data centres down to three,” Davini explains. The restructuring process, he continues, has involved meeting some of the unique challenges that are part and parcel of applying 21st-century infrastructure to an institution founded in 1343.

“The University of Pisa is laid out similarly to Cambridge in the UK; it's a campus distributed throughout a town,” Davini says. All the buildings that make up the University of Pisa are connected to one another via a private fibre network owned by the university - a network that Davini himself had a hand in establishing. “That fibre network started to be put in place in the late 1990s, and now we have around 90 kilometres of cable, which contain around 9,000 kilometres of optical fibre running underneath the streets of Pisa,” he says. “The network was built using single-mode fibre, so now we can do almost whatever we want with regards to speed, latency and so on. We like to think of the university network as a whiteboard - a blank slate on which you can create any network that you want. On this whiteboard, we put our three data centres.”

This whiteboard approach to designing the university’s underlying infrastructure has been incredibly valuable, not only in the University of Pisa’s efforts to adapt to the evolving HPC landscape, but also in the face of the COVID-19 pandemic. 

Powering an HPC Evolution 

As the hardware needed to support changing HPC demand continues to evolve, Davini explains that IT departments are already having to restructure in response. When the University of Pisa began its data centre consolidation, the goal “was to make them as green as possible” - a goal that Davini and his department have achieved, hitting a power usage effectiveness (PUE) of 1.1.

However, he continues: “It's a challenge to maintain, because the latest generations of GPUs, for example, produce the kind of heat that would be better served with liquid cooling, and we're using closed-aisle air cooling. We're still able to maintain our PUE of 1.1, but we're definitely having to think about making the transition as changing HPC workloads, which demand higher and higher concentrations of computing power, make it harder to efficiently cool our data centre.” 
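For context, PUE is the ratio of a facility’s total power draw to the power that actually reaches the IT equipment, so lower is better and 1.0 is the theoretical ideal. As a rough illustration of what the university’s figure implies (the 1 MW IT load below is an assumed example, not a University of Pisa number):

\[
\mathrm{PUE} = \frac{P_{\text{total facility}}}{P_{\text{IT equipment}}}, \qquad \mathrm{PUE} = 1.1 \;\Rightarrow\; P_{\text{total}} = 1.1 \times 1\,\mathrm{MW} = 1.1\,\mathrm{MW}
\]

In other words, only around 0.1 MW of every 1.1 MW drawn - roughly 9% - goes to cooling, power conversion and other overheads, which is why denser, hotter GPU racks put that figure under pressure.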

Liquid cooling, which in its immersion form submerges IT components in a non-conductive fluid that can be chilled more efficiently than air, is “a completely different beast. You have to redesign everything to integrate those solutions,” Davini adds.

Going Remote in the Age of COVID-19

“The pandemic found us in the middle of this transformation process,” Davini recalls, reflecting that those efforts “have been very helpful to our efforts to address the problems that the university faced as a result of the crisis.” 

The first step, which the University of Pisa - along with academic institutions, enterprises and organisations all around the world - had to complete in just a few short, frantic days, was moving the entire institution’s operations online. “We had to transform the university's entire educational offering from being in-person to being fully remote,” says Davini. “This process didn’t relate only to lessons; it also required us to find a way to take our labs, workstations and other teaching activities online. The new infrastructure we'd been designing since 2017 was essential to offering these services. Thanks to this digital transformation project, the pandemic didn't find us unprepared.”

HPC in a Post-COVID World 

Increasingly, HPC is experiencing a similar journey to other enterprise and administrative functions: it’s headed to the cloud. “The ability to pay-per-use for HPC resources in the cloud makes its strategic advantages affordable for almost any organisation, including universities,” says Christopher Huggins, the EMEA Business Director for Data Centric Workloads & Solutions at Dell Technologies. “And, while some organisations may not be comfortable with every type of cloud computing, sharing HPC compute and storage resources over a network is hardly news to veteran IT shops.” 

Davini adds: “In the past, there was the idea that, to do HPC analysis, you would come to Pisa and do it on-site. Now, that perception has shifted. I think this shift to scientific research and HPC workloads being done remotely will lead to more collaboration between different academic institutions all over the world, especially while international travel is being limited by the effects of the pandemic.” 
