Feb 7, 2021

Public cloud might be 3-4 times more costly than on-prem

Tom Christensen
Tom Christensen, CTO for EMEA at Hitachi Vantara, explains why moving everything to the public cloud isn't always the right move.

I frequently have conversations with CIOs, and they often bring up their cloud-first strategy, saying they’re considering moving new as well as existing workloads into the cloud. The walls of the data centre have fallen. We’re continuing to move into a hybrid world with edge, core and cloud offerings.

IT leaders will often ask: where should I run my workload? The answer isn’t straightforward. The infrastructure game is changing. It’s becoming more about a hybrid world: combining edge and core with private and public cloud. And it’s becoming increasingly important to interrogate the costs of such a strategy.

The edge, core or cloud conundrum

Firstly, it’s not really a question of choosing one over the other – the answer is often going to be a blend of different models. Today, companies have different workload placement options: you can run workloads in your on-prem data centres, off-prem at a hosting provider, or in the public cloud. It’s not that one option is better than the others; each has its place, and usually the ideal solution is a combination of them.

But not all workloads are equal. Different workloads have different characteristics, which will influence your choice. So it’s not always easy to decide where to place them.

Mission-critical workloads that require low latency and high performance should be placed in the on-prem data centre. The same goes for steady production environments running 24/7. Other use cases include security, compliance, or any situation where you need full control of the workload and its data.

Scenarios that make a case for public cloud, on the other hand, would include cloud native applications, test and development, sudden peak workloads, less heavy workloads or if you simply want to offer cloud service capabilities to internal users in the company. For small companies or start-ups, where budgets are particularly tight, public cloud is definitely going to be the go-to option to avoid spending time building in-house data centres.

Of course, for businesses of all sizes, budgetary considerations play a huge role in deciding whether public cloud is the way to go. I’ve noticed recently that some CIOs are starting to set more realistic business ambitions for their cloud-first strategy. They’re shifting away from simply putting forward cost arguments and instead focusing on promoting the value-add. The reason is that a public cloud implementation can actually be more expensive to run than a non-cloud one, and migration itself is costly – so they’re being forced to stop using cost savings to justify their cloud strategy.

A closer look at costs: on-premise vs. public cloud 

So let’s look at the cost for running a steady production environment 24/7 in-house compared to the public cloud. 

I spoke to a customer recently and they had the cloud-first strategy top of mind, with the goal of turning their existing data centre into a public cloud service. So we agreed to make a simple price comparison between the price of running a storage environment in their existing data centre over the last five years and the estimated price for running the same service in a public cloud.

Our professional services team developed four scenarios by using publicly available cloud pricing (not taking into account any commercial agreement). 


[Chart: the four public cloud pricing scenarios compared with on-prem. Source: Hitachi Vantara]

As highlighted, a lift and shift of a steady production environment is hard to justify from a cost perspective if you need the same performance 24/7. On average, it is between three and four times more expensive than running it in-house. If you drop the cloud performance guarantee (IOPS), the price does decrease, but still not to the same price point as on-prem.


[Chart: public cloud cost with and without the IOPS performance guarantee. Source: Hitachi Vantara]

Public cloud only offers an IOPS guarantee on the most expensive tier, and only 99.9% availability – allowing up to 8.7 hours of downtime per year. Furthermore, the data reduction guarantee you get on-prem is not offered by public cloud providers. So you pay for what you provision, not what you actually use or consume.
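To make that availability figure concrete, the downtime an SLA permits can be computed directly. A quick sketch – the 99.9% tier is the one discussed above; the higher tiers are included purely for comparison:

```python
# Convert an availability SLA into the downtime it permits per year.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def allowed_downtime_hours(availability: float) -> float:
    """Hours of downtime per year permitted by a given availability fraction."""
    return (1 - availability) * HOURS_PER_YEAR

for sla in (0.999, 0.9999, 0.99999):
    print(f"{sla * 100:g}% availability -> "
          f"{allowed_downtime_hours(sla):.2f} hours of downtime per year")
# 99.9% works out to roughly 8.76 hours of permitted downtime a year.
```

At three nines, nearly a full working day of outage per year is within the provider’s contract.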

One argument that keeps coming up is the anticipated drop in cloud pricing over time. But do some quick research and you’ll find the per-GB price has remained the same since 2015. The annual price erosion for on-prem storage is 10 to 15 per cent and, on top of that, you get data reduction technology, so you can store more data on the provisioned capacity you buy.
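That gap compounds over a five-year horizon. Here is a minimal model of the effect, using normalised prices: the flat cloud price reflects the observation above, while the 12.5% erosion rate (midpoint of the 10–15% range) and the 2:1 data reduction ratio are illustrative assumptions, not figures from the assessment:

```python
# Illustrative five-year price-per-usable-GB comparison (normalised figures).

def cloud_price_per_gb(start_price: float, years_elapsed: int) -> float:
    """Public cloud per-GB price, modelled as flat (unchanged since 2015)."""
    return start_price

def onprem_price_per_usable_gb(start_price: float, years_elapsed: int,
                               erosion: float = 0.125,      # midpoint of 10-15%/yr
                               data_reduction: float = 2.0  # assumed 2:1 ratio
                               ) -> float:
    """Effective on-prem price per usable GB after annual price erosion,
    divided by the data reduction ratio (each provisioned GB holds more data)."""
    return start_price * (1 - erosion) ** years_elapsed / data_reduction

start = 1.0  # normalised starting price per GB
for year in range(6):
    print(f"year {year}: cloud {cloud_price_per_gb(start, year):.3f}, "
          f"on-prem {onprem_price_per_usable_gb(start, year):.3f}")
```

Under these assumptions the effective on-prem price roughly halves again over five years while the cloud price stands still.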


[Chart: public cloud per-GB pricing since 2015 vs. on-prem price erosion. Source: Hitachi Vantara]

Get the ‘cloud feeling’ on-prem with a private cloud

I’ve carried out many of these assessments for our customers. What I’ve found is that you can still get some of the same features as public cloud – such as an agile cloud-based portal – if you consolidate your current environment and build a private cloud on-prem, rather than migrating to the public cloud.

From carrying out more than 1,200 enterprise data centre assessments, we’ve discovered there’s potential for additional cost savings through consolidation, optimisation and automation of the current on-prem environment via the adoption of a private cloud.

What we see is low asset utilisation in the current on-prem environment:


[Chart: asset utilisation of current on-prem environments. Source: Hitachi Vantara]

Note that, on average, the committed saving is a TCO reduction of more than 33%.

Adopting a private cloud infrastructure platform provides consolidation, optimisation and automation, ultimately giving you agile cloud functionality for all your workloads – traditional or modern. And this is where we start moving into a hybrid world.

This approach reduces complexity as you accelerate your business, giving you the mobility to move your workload anywhere between edge, core and private and public cloud. It’s really the best of both worlds. 

But perhaps the biggest bonus is that it will drive down costs and overcome those downsides to public cloud that I mentioned earlier. You can choose to use a consumption model, which means moving from owning IT infrastructure to acquiring IT as a service so that you only pay for what you actually use. 
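The difference between the two billing models can be sketched in a few lines. All the figures here are hypothetical – the 40% utilisation in particular is only an illustrative stand-in for the low asset utilisation described above:

```python
# Provisioned vs. consumption billing, side by side (hypothetical figures).

def provisioned_cost(provisioned_tb: float, price_per_tb: float) -> float:
    """Ownership model: pay for everything allocated, used or not."""
    return provisioned_tb * price_per_tb

def consumption_cost(used_tb: float, price_per_tb: float) -> float:
    """Consumption model: pay only for capacity actually consumed."""
    return used_tb * price_per_tb

price_per_tb = 20.0                    # hypothetical monthly price per TB
provisioned_tb, used_tb = 100.0, 40.0  # assumed 40% utilisation

print(provisioned_cost(provisioned_tb, price_per_tb))  # 2000.0
print(consumption_cost(used_tb, price_per_tb))         # 800.0
```

At 40% utilisation, the consumption model bills 40% of the provisioned-capacity cost; the lower the utilisation, the bigger the gap.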

So, before making any decisions about the future of your workload placement – whether in-house or in the public cloud – it’s worth getting deeper insight into your current data centre and finding out if you could save money on-prem. 


Jun 6, 2021

Unlocking the next chapter of the digital revolution

Tim Loake
Tim Loake, Vice President, Infrastructure Solutions Group, UK at Dell Technologies, highlights the importance of often-overlooked digital infrastructure.

As the world shifted to hybrid working in 2020, our reliance on technology took the spotlight. But it was the jazzy new social and video calling platforms that took the encore. Behind the scenes, our servers worked overtime, keeping us connected and maintaining the drumbeat of always-on, newly digital services. Let’s take a moment to pay our respects to the unsung technology heroes of the pandemic – the often-forgotten IT infrastructure keeping us connected come what may. After all, as we look ahead to more resilient futures, it will play a central role.

Servers could be likened to our plumbing – vital to a well-functioning home, but rarely top of mind so long as they work. Never seen, rarely heard – our servers do all the graft with little praise. But it is essential to reflect on the incremental advances in GPU and CPU power, which have paved the way for new workloads that previously were not possible. Chatbots and natural language processing, which provide essential customer touchpoints for businesses across the retail and banking sectors, rely on powerful servers. They also keep businesses competitive and customers happy in an always-on world.

Tim Loake, Vice President, Infrastructure Solutions Group, UK at Dell Technologies

Serving workplace transformation

But, as businesses grappled with pandemic disruptions, the focus was largely on adopting connected devices – and awe at the rapid growth of the datasphere. As they reined in their budgets and attempted to do more with less, one aspect was perhaps overlooked: those hard-working servers.

When it came to building resilience into a newly remote workforce, the initial concern was focused on the device endpoints – keeping employees productive.  Many companies did not initially consider whether they had the server infrastructure to enable the entire workforce to log in remotely at the same time. As a result, many experienced a plethora of teething problems: virtual office crashes, long waits to get on servers, and sluggish internet connectivity and application performance, often rendering the shiny new PC frustrating and useless.

Most businesses only had a few outward-facing servers that could authenticate remote workers – a vital gateway as the vector for cyber hacks and attacks increased exponentially. That’s not to mention the fact that many business applications simply weren’t designed to cope with the latency introduced when people work from home. What businesses discovered at that moment was that their plumbing was out of date.

Business and IT leaders quickly realised that to stay ahead of the curve in the hybrid working world, a renewed focus on building agile, adaptable, and flexible IT infrastructures was critical. More importantly, it accelerated the inevitable digital transformation that would keep them competitive in a data-driven economy. It is now abundantly clear to businesses that they need IT infrastructure to meet the demands of diverse workloads – derive intelligent insights from data, deploy applications effectively, and enhance data management and security.  

Ripe for a digital revolution

Unsurprisingly, IDC noted an increase in purchases of server infrastructure to support changing workloads – and it forecasts that this uptick will be sustained beyond the pandemic. As the economy begins to reopen, business leaders are looking ahead. IT will continue to play a crucial role in 2021 and beyond – and we have already set the foundations for the digital revolution with next-generation servers.

As we enter the zettabyte era, innovative new technologies are coming on stream, with 5G turbocharging IoT and putting edge computing to work. Exciting new services, improved day-to-day efficiencies and the transformation of our digital society will all be underpinned by resilient IT infrastructures. By embracing the technological innovations of next-generation servers, businesses can keep pace with the coming data deluge.

The next generation of server architecture promises more power with less heat, thanks to improved, directed airflow and direct liquid cooling, resulting in reduced operational costs and environmental impact. As we rebuild post-pandemic, manufacturers and customers alike are striving to achieve ever more challenging sustainability goals. With this in mind, a focus on environmentally responsible design is imperative for the servers of tomorrow – uniquely designed chassis for adaptive cooling and more efficient power consumption will be critical, improving energy efficiency generation over generation.

The most notable evolution is the configuration of these next-gen servers around more specific organisational needs. Unlike clunky and often unstable legacy infrastructure, the infrastructure of tomorrow will be sturdier and more modular. The next iteration is streamlined, and in this modular form, can be more easily tailored to business needs. This equates to essential cost savings as businesses only pay for what they use.  

Resolving the problem of the future, today

Tomorrow's IT challenges will centre on response times and latency as edge and 5G technologies go mainstream. As businesses develop new and innovative services that utilise supercharged connectivity and real-time analytics, staying on top of these challenges will give them a competitive edge. For example, in the world of retail, automation will power new virtual security guards, and even the slightest delay in the data relay could result in financial loss.

Similarly, in the smart cities of tomorrow, the network must be responsive. With city-centre traffic lights controlled by an AI-powered camera that monitors pedestrians, delays in data transfers could cost the life of an elderly pedestrian who has fallen in the road. The stakes are far higher in a 5G-enabled world. As our reliance on technology deepens, the margins for error narrow, placing greater emphasis on the efficiency of those critical underpinning technologies.

Fully enabling the hybrid work model today is just a stepping stone towards more fluid, tech-enabled lives. A work Zoom call from an automated vehicle en route to an intelligent transport hub is a highly probable vision of our future. But it requires incredible amounts of compute and seamless data transfers to make it possible. These glossy snapshots need super servers to come to life, making IT plumbing that glistens with next-gen innovation essential. Without exemplary server architecture, we put future tech advances – and the human progression they enable – at risk.
