HPC and Ultra-HPC are the future of the digital world
There’s no denying that we live in a digital world, one in which Big Data and the Internet of Things (IoT) affect arguably every aspect of our lives - from being able to see who’s ringing your doorbell wherever you are in the world, to the black boxes on car dashboards.
However, this isn’t just about what consumers want. Beyond the headlines of connected devices and customer behaviour analysis, IoT and Big Data are being used to solve increasingly complex business problems. All businesses, whether born digital or going digital, are turning to IoT technology to manage the connections, devices and applications that underpin their organisations. Automated workflows - long a watchword of manufacturing strategy - are now being embraced by companies across many different sectors.
As we know, all things digital generate massive amounts of data, and IoT and big data are intimately connected: the IoT generates ‘big data’, and big data analytics turns the information gathered into something useful, actionable and - sometimes - automated. As well as enabling us to do more things, quickly, IoT provides a wealth of data which, with compute processing and intelligence, can generate invaluable insight for organisations to use to improve products, services, efficiencies and, ultimately, revenues.
With Big Data come big requirements for processing power
Of course, everything has a knock-on effect. So, are existing business systems ready to cope with the intense pressure that IoT and big data bring?
Their impact is already being felt all along the technology supply chain, and in response CIOs have placed increasing pressure on IT infrastructure and service providers to help fulfil these mounting business requirements. IT departments need to deploy more forward-looking capacity management to proactively meet the business priorities associated with IoT connections. And big data processing requires vast storage and computing resources.
Increasing end-user demands have forced data centres to evolve to keep pace with the changing business requirements asked of them, placing them firmly at the heart of the business. Beyond simply storing IoT-generated data, the ability to access and process it - very quickly - as meaningful, actionable information is vitally important, and will give a huge competitive advantage to those organisations that do it well.
Meeting the increasing requirements – High Density Computing
Historically, for a data centre to meet increasing requirements it would simply add floor space to accommodate more racks and servers. However, the growing need for IT resources and productivity has come hand in hand with demand for greater efficiency, better cost savings and lower environmental impact. Third-party colocation data centres are increasingly seen as the way to support this growth and innovation, rather than CIOs expending capital to build and run their own on-premise capability.
High Performance Computing (HPC) aggregates computing power to deliver much higher performance for solving complex science, engineering and business problems. Once seen as the preserve of the mega-corporation, it is now being looked at as a way to resolve this tension between IT budgets and performance. It requires data centres to adopt high density innovation strategies in order to maximise productivity and efficiency, increasing the available power density and the ‘per foot’ computing power of the data centre.
Industry views around HPC vary widely. Data centres built as recently as a few years ago were designed for a uniform energy distribution of around 2 to 4 kilowatts (kW) per IT rack. Some even added ‘high density zones’ capable of scaling up if required, but many of these needed additional footprint around the higher-power racks to balance cooling capability, or supplemental cooling equipment that raised the cost of supporting the increased kW density.
Gartner defined a high performance capability as one where the energy needed is more than 15kW per rack for a given set of rows, but this threshold is being revised upwards all the time, with some HPC platforms now requiring power in the 30-40kW range - sometimes referred to as Ultra High Performance.
Ultra High Performance provides customers with a financial advantage: because the data centre is built to operate at high density without any supplementary support technology, the cost per kW falls as density within the rack increases. The more densely computing power can be stacked in a rack, the better the data centre space can be optimised and offered to the customer, making a highly dense deployment significantly more cost effective. HPC can be particularly attractive for certain industry requirements: cloud service providers, digital media workload processing, big data research and core telecommunications network solutions.
Making HPC capabilities accessible to a wider group of organisations in sectors such as education, research, life sciences and government requires high performance solutions that, through greater efficiency, lower the total cost of the increased compute power. HPC is, therefore, a vital component in enabling organisations of varying sizes to affordably benefit from greater processing performance. Indeed, the denser the deployment, the more financially efficient it becomes for the customer.
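To see why cost per kW falls with density, consider a minimal sketch in Python. The figures are entirely hypothetical - a fixed annual overhead per rack position plus a variable cost per kW of capacity - but they illustrate the amortisation effect: the fixed cost of the rack position is spread over more kilowatts as density rises.

```python
# Illustrative sketch of rack-density economics (all figures hypothetical).
# Assumes a fixed annual cost per rack position (space, cabling, standard
# cooling) plus a variable annual cost per kW of power and cooling capacity.

FIXED_COST_PER_RACK = 12_000   # hypothetical annual cost of one rack position
COST_PER_KW = 900              # hypothetical annual cost per kW supported

def cost_per_kw(rack_density_kw: float) -> float:
    """Effective annual cost per kW for a rack of the given power density."""
    total = FIXED_COST_PER_RACK + COST_PER_KW * rack_density_kw
    return total / rack_density_kw

# Legacy density, Gartner's HPC threshold, and the Ultra-HPC range.
for density in (4, 15, 30, 40):
    print(f"{density:>2} kW/rack -> {cost_per_kw(density):,.0f} per kW per year")
```

On these assumed numbers, a 40kW rack works out at less than a third of the effective cost per kW of a 4kW rack; real facilities add step costs (for example, cooling upgrades at certain densities), but the direction of the effect is the same.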
Choosing the right partner
Many data centres will claim to support high performance computing – and technically speaking, a lot of them will – but only data centres that have been built from the ground up with HPC in mind will be able to do so cost-effectively.
With the Internet of Things and Big Data quickly becoming a reality, organisations across industries will need to ensure that their IT systems are ready and able to deal with the next generation of computing and performance needs if they are to remain competitive and cost efficient. It’s more important than ever to conduct due diligence before signing up with a data centre provider, to avoid the risk of being tied into a costly long-term contract that meets neither current nor future needs.
And, although the future of these innovative technologies may seem expensive, for many the possibilities are limited by issues of complexity and capacity. The benefits of IoT and big data will only come to fruition if businesses can run analytics that, with the growth of data, have become too complex and time-critical for standard enterprise servers to handle efficiently.
At VIRTUS, we believe that getting the data centre strategy right gives a company an intelligent, scalable asset that enables choice and growth. Get it wrong, and it becomes a fundamental constraint on innovation. So organisations must ensure their data centre strategy is ready and able to deal with the next generation of computing and performance needs - to remain not only competitive and cost efficient, but also ready for exponential growth.