Servers: the data centre engine

By Harry Menear
This month, Data Centre Magazine is taking a closer look at servers, exploring their applications and role as the engine powering the modern data centre...

Servers are the lifeblood of a data centre. They provide the processing power, memory, local storage and network connectivity that drive applications, underpinning the capabilities of every enterprise in every industry around the world. This month, Data Centre Magazine is taking a deep dive into the past, present and future of the server, exploring its applications and role as the engine powering the modern data centre. We’ll also be taking a look at physical servers versus virtual servers, and exploring some of the latest innovations driving the sector forward into Industry 4.0.

At face value, a server is a piece of computing hardware, much like any personal desktop computer, laptop or smartphone. However, servers have a very different function. They are designed to run 24 hours a day, seven days a week, 365 days a year, with as little downtime as possible, and unlike a personal computer, which serves a single user, a server’s computing power is dedicated to storing and delivering data, applications and other services to other computers, bolstering their memory and processing capabilities. There are many different types of servers, from mail servers and web servers to virtual and cloud servers, each performing different functions with its own advantages, drawbacks and specialisations.

If steel and steam were the backbone of the first industrial revolution, data and the servers that house it are the driving force behind the ongoing evolution of Industry 4.0. The evolution of the data centre, and the servers that fill it, began back in the 1970s and 80s, when a single computer typically had significantly less processing power than a 2009 Toyota Prius and took up an entire room. According to a report by Verdict, “Two key technologies were critical to the first formations of data centres as we think of them today. Both occurred in the early 1980s; the first was the advent of personal computers (PCs), which proliferated as Microsoft’s Windows operating software became the global standard. The second was the development of the network system protocol by Sun Microsystems, which enabled PC users to access network files. Thereafter, microcomputers begin to fill out mainframe rooms as servers, and the rooms become known as data centres.” Since then, quantum leaps in processing power, an explosion of data generation around the world and the rise of the public cloud have all had profound impacts on the way that data centres approach server architecture.

For example, the modern hyperscale data centre takes a much more bare-bones approach to server design. According to Bill Carter, CTO of the Open Compute Project, “You had the opportunity to strip things down to just what you need, and make it specific to your application. We stripped out video connectors, because there’s no video monitor. There’s no blinking lights because there’s no one walking the racks. There’s no screws.” Carter explained in an interview that, on average, one server in a hyperscale centre takes up the same amount of space as 3.75 servers in a conventional data centre.

The modern data centre can be home to tens of thousands of servers, and there are reportedly more than 7mn data centres worldwide, with that figure growing at a dizzying pace. Every enterprise - from SMEs to global conglomerates - and government entity needs access to its own servers. Some build their own, some colocate in carrier-neutral facilities (renting rack space as a service) and some entrust their data to cloud providers like AWS and Microsoft. This year, a report found that the global data centre rack server market is expected to grow from $52.1bn in 2019 to $102.5bn by 2024, at a CAGR of 14.5% during the forecast period.
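
For readers who want to check the arithmetic behind that forecast, the 14.5% figure follows directly from the standard compound annual growth rate formula. The short Python sketch below simply reproduces it from the report’s start and end values; the variable names are ours and purely illustrative.

```python
# Sanity-check the reported forecast using the standard CAGR formula:
#   CAGR = (end_value / start_value) ** (1 / years) - 1

start_value = 52.1   # 2019 market size, in $bn (figure from the report cited above)
end_value = 102.5    # projected 2024 market size, in $bn
years = 5            # 2019 to 2024

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # prints roughly 14.5%, matching the forecast
```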

Physical vs Virtual Servers

With advances in software, data centre customers now have a much greater range of choice when it comes to where to put their data. The traditional option is a physical server (also known as a bare metal server): a machine with its own CPU, RAM and internal storage from which the operating system is loaded and booted. There are several types of physical server, including towers (low-cost, low-power systems used in edge networks or where the operator can’t justify building a full rack), rackmount servers (the typical building blocks of a data centre, usually installed together in groups and organised in rows), and blade servers (easily the coolest-sounding type, these units are designed to be highly modular, allowing operators to scale quickly and easily).

On the other hand, virtual servers work by installing a hypervisor: software that allows a single physical server to run multiple computing workloads as though they were running on multiple separate servers. These virtual machines have become the industry standard on which the majority of companies host their environments. There are several benefits to using virtual servers instead of physical hardware, from provisioning, management and configuration to scalability and automation. Purchasing, installing and setting up a physical server can take days or even weeks. By contrast, provisioning a virtual server theoretically takes a few seconds, and when that capacity isn’t needed any more, the company simply stops paying for it.
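
To make that provisioning contrast concrete, here is a deliberately simplified Python sketch of what a hypervisor does: carving one physical host’s CPU and memory into isolated virtual machines that can be created and released in moments. The classes, names and numbers are hypothetical illustrations, not any real hypervisor’s API.

```python
# A deliberately simplified, hypothetical model of hypervisor provisioning.
# Real hypervisors (KVM, ESXi, Hyper-V) are far more sophisticated; this
# sketch only illustrates how one host's resources are shared out.
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    name: str
    vcpus: int
    ram_gb: int

@dataclass
class PhysicalHost:
    total_cpus: int
    total_ram_gb: int
    vms: list = field(default_factory=list)

    def provision(self, name: str, vcpus: int, ram_gb: int) -> VirtualMachine:
        """Allocate a VM if the host still has spare capacity; near-instant
        compared with racking and cabling new hardware."""
        used_cpus = sum(vm.vcpus for vm in self.vms)
        used_ram = sum(vm.ram_gb for vm in self.vms)
        if used_cpus + vcpus > self.total_cpus or used_ram + ram_gb > self.total_ram_gb:
            raise RuntimeError("Host is out of capacity")
        vm = VirtualMachine(name, vcpus, ram_gb)
        self.vms.append(vm)
        return vm

    def deprovision(self, name: str) -> None:
        """Release a VM's resources; the customer simply stops paying for them."""
        self.vms = [vm for vm in self.vms if vm.name != name]

host = PhysicalHost(total_cpus=64, total_ram_gb=512)
host.provision("web-01", vcpus=8, ram_gb=32)
host.provision("db-01", vcpus=16, ram_gb=128)
print(f"{len(host.vms)} workloads sharing one physical server")
```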

That’s not to say that virtual servers are without their drawbacks. Your own IT staff won’t have access to any physical resources, making problem solving a potentially complicated process. Also, since virtual servers are typically billed by usage over time, they can end up costing more in the long run than installing owned hardware. Virtual servers also place the security of the client’s data in the hands of the operator, which can be less than ideal if the data in question is particularly sensitive.
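
The long-run cost argument is easy to sketch as a back-of-the-envelope comparison: cumulative pay-as-you-go billing eventually overtakes a one-off hardware purchase plus its running costs. Every figure in the snippet below is a made-up placeholder; real pricing varies enormously by provider, specification and region.

```python
# Illustrative break-even comparison between renting a virtual server and
# buying hardware outright. All figures are hypothetical placeholders.

monthly_virtual_cost = 400.0      # hypothetical monthly bill for a rented virtual server
hardware_purchase_cost = 9_000.0  # hypothetical one-off cost of an owned physical server
monthly_running_cost = 150.0      # hypothetical power, space and maintenance per month

for month in range(1, 61):  # look five years ahead
    virtual_total = monthly_virtual_cost * month
    owned_total = hardware_purchase_cost + monthly_running_cost * month
    if virtual_total > owned_total:
        print(f"With these figures, owned hardware becomes cheaper after ~{month} months")
        break
```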

Dell and HPE - the big players

The global server market had a tough first quarter this year (who didn’t?), but nevertheless reported Q1 revenue figures in excess of $18.5bn. While Lenovo, IBM and Cisco are all significant players in the space, the market remains dominated by two companies: Dell Technologies and Hewlett Packard Enterprise (HPE), with market shares of 18.7% and 15.5%, respectively. 

Dell’s PowerEdge range of servers is an industry standard, favoured for its impressive power and scalability contained within a remarkably small package. "As organisations rapidly keep pace with growing sets of information and data, they're also adopting more advanced applications to generate greater insights with digital transformation efforts," said Ashley Gorakhpurwalla, President, Server Solutions Division at Dell EMC. "Our modern infrastructure solutions are a game changer in today's digital economy. With our cyber-resilient architecture and performance innovations, we will enable our customers to unleash their business potential.”

HPE’s ProLiant range of servers is described as the industry workhorse, built for reliability and affordability with the ability to scale easily. “HPE is committed to bringing new infrastructure innovations to the market that enable organisations to derive more value from their data," said Peter Schrady, SVP and General Manager, HPE ProLiant Servers and Enterprise & SMB Segments. "We are delivering on that commitment by delivering a complete persistent memory hardware and software ecosystem into our server portfolio, as well as delivering enhancements that will allow customers to increase agility, protect critical information and deliver new applications and services more quickly than ever before."

Crypto mining - virtual squatting

Cryptocurrency mining is big (if admittedly sometimes treacherous and risky) business. In 2019, blockchain miners made an estimated $5bn from a mixture of block rewards and transaction fees. However, as more people mine the rapidly diversifying array of cryptocurrencies, profit margins are getting smaller and smaller. This is largely due to the changing ratio between power costs and financial returns. “Miners must pay to build ever increasing bigger rigs capable of vast amounts of processing power, and then the rigs themselves must be powered with large quantities of electricity,” explains a recent report on the industry. As such, as well as flocking to regions with low tax and energy costs, crypto miners are locked in constant pursuit of more efficient and powerful setups.
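
That power-cost squeeze can be illustrated with a very rough daily profit model: revenue from block rewards and fees, minus the electricity the rig consumes. All of the numbers below are hypothetical placeholders rather than real network or hardware figures.

```python
# A rough, purely illustrative model of the power-cost squeeze described above.
# Every value here is a hypothetical placeholder.

rig_power_kw = 3.5        # hypothetical power draw of a mining rig, in kW
electricity_price = 0.10  # hypothetical electricity price per kWh, in dollars
daily_revenue = 12.0      # hypothetical block rewards plus fees earned per day, in dollars

daily_power_cost = rig_power_kw * 24 * electricity_price
daily_profit = daily_revenue - daily_power_cost
print(f"Power cost: ${daily_power_cost:.2f}/day, profit: ${daily_profit:.2f}/day")

# As network difficulty rises, the same rig earns less per day, so the margin
# shrinks unless the miner finds cheaper power or more efficient hardware.
```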

One interesting development has been the case of crypto miners hacking into virtual servers and forcing them to secretly mine currency without their owners’ knowledge. These are far from a few isolated incidents: even Microsoft SQL (MSSQL) databases were found to be victims of crypto squatting this year, when a malicious botnet was discovered that had spent two years taking over admin accounts and installing cryptocurrency mining scripts on the underlying operating systems without being detected. According to a report by Guardicore, the botnet is still active and targeting 3,000 new databases every day. Back in May, blogging platform Ghost was also the victim of a similar hack, and the popular infrastructure automation framework Salt (which is used by IBM, LinkedIn and eBay) was similarly targeted.

Lenovo - servers for the AI and Deep Learning era

“The constant change in information and ever evolving needs of customers means there must be faster and more efficient solutions to turn data into information that empowers businesses,” said Kamran Amini, Vice President and General Manager of Server, Storage and Software Defined Infrastructure, Lenovo Data Center Group, in a recent press release. In response to the growing need for the massive amounts of processing power that allows AI and machine learning applications to dissect and analyse gigantic datasets, Lenovo recently launched a new line of servers. The ThinkSystem SR860 V2 and SR850 V2 servers are built using 3rd Gen Intel Xeon Scalable processors with enhanced support for SAP HANA based on Intel Optane persistent memory 200 series. In short, the two units are designed to handle complex data management needs and deliver actionable business intelligence through artificial intelligence (AI) and analytics. “Our new ThinkSystem servers are designed to enhance mission-critical applications like SAP HANA and accelerate next-generation workloads like AI, analytics and machine learning, enabling mission critical performance and reliability for all data centres and maximum business value for our customers,” added Amini.
