Mar 31, 2021

The World According to ... Vertiv

Harry Menear
Experts from leading data centre operator Vertiv share their insight and analysis of the key trends affecting the industry.

As the COVID-19 pandemic wears on, the global data centre industry continues to experience unprecedented demand. As the hyperscale segment of the market continues to boom, the edge is also expanding, as technologies like 5G, AI and IoT continue to push heavier computational workloads farther away from the network core. 

With the industry undergoing a period of rapid evolution, we've gathered together expert insight and commentary from the executive team at global data centre leader Vertiv, to share their unique predictions for the year ahead. 

Digitalisation on Fast-Forward

COVID-19 will have a lasting effect on the workforce and the IT ecosystem supporting the new work-from-home model. Vertiv experts expect the pandemic-motivated investment in IT infrastructure to continue and expand, enabling more secure, reliable and efficient remote work. Remote visibility and management will become paramount to the success of these work-from-home models. Remote service capabilities have already emerged to minimise the need for on-site service calls, and those practices are likely to continue long after the pandemic.

Any cautious steps taken early in the crisis will be accelerated as the pandemic pushes into 2021 and organisations accept these changes not as a temporary detour, but as a permanent adjustment to the way we work and do business. Over time, the balance between what is done in person and what is done remotely will shift, driven by customers looking to minimise their on-site presence. That places a premium on connectivity, remote monitoring, data analytics, and even artificial intelligence to support decision-making.

“Recovery requires a change in mindset for most organisations,” said John-David Lovelock, distinguished research vice president at Gartner, in a recent statement. “There is no bouncing back. There needs to be a reset focused on moving forward.”

Bringing Large Data Centre Capabilities to Small Spaces and the Edge

Today’s edge is more critical and more complex, functionally an extension of the data centre rather than the glorified IT closet of the past. Cost and complexity have prevented implementation of data centre best practices in these spaces, but that is changing. Vertiv’s experts anticipate a continued focus on bringing hyperscale and enterprise-level capabilities to these edge sites. This includes greater intelligence and control, an increased emphasis on availability and thermal management, and more attention to energy efficiency across systems.

“Wherever there is a high density of data processing, there will be a demand for edge computing. That demand, and scale, will necessitate more resilient and intelligent edge infrastructure,” said Giordano Albertazzi, president of Vertiv in Europe, Middle East and Africa (EMEA). “We are seeing expansion of the edge in many countries and that will eventually extend to emerging markets. Edge deployments are also closely aligned to other key trends such as 5G and environmental sustainability, and the integration of edge sites with energy grids can support the transition towards renewables.” 

The 5G Conversation Turns to Energy Consumption and Efficiency

In this early stage of 5G planning and launches, the discussion has rightly focused on the ultimate benefits of the technology – increased bandwidth and reduced latency – and the applications it will enable. But as many countries begin their 5G rollouts in 2021, and the early adopters start to drive breadth and scale, the focus will shift to the significant increases in energy consumption that 5G brings, and to strategies for deploying it more efficiently and effectively. The network densification needed to fully realise the promise of 5G will unavoidably increase energy demands – estimated at 3.5 times those of 4G. The coming year will see greater focus on managing that significant increase in energy consumption through more efficient products and practices.

Sustainability Comes to the Forefront

5G is one piece of a broader sustainability story. As the proliferation of data centres continues and even accelerates, especially in the hyperscale space, those cloud and colocation providers are facing increased scrutiny for their energy and water usage. The amplification of the climate change conversation and shifting political winds in the United States and globally will only add to the focus on the data centre industry, which accounts for approximately 1% of global energy consumption. The coming year will see a wave of innovation focused on energy efficiency across the data centre ecosystem. The benefits for data centre operators are clear, starting with cost reduction, compliance with existing and anticipated regulations, and the goodwill that comes with establishing a leadership position in the global sustainability movement. Look for important innovations across the data centre infrastructure space and especially in the area of thermal management.


Jul 18, 2021

Four top tips for cloud-native transformation success

James Harvey
James Harvey, EMEAR CTO at Cisco AppDynamics, breaks down the key concerns facing technology leaders looking to execute on a cloud-native transformation.

Cloud offers a range of opportunities for innovation. However, it is not an easy path to embark on, as it also brings increased complexity across the IT estate. IT teams have to demonstrate incredible resilience to tackle spiralling IT complexity and the accelerated speed of digital transformation. So with heightened pressure on technologists, it is now more important than ever for businesses to remove the guesswork from their technology stack and move forward with certainty and intelligence.

This article provides the answers to some of the most pressing questions faced by technologists when considering a cloud-native approach.

Why cloud-native?

A cloud-native architecture is a design methodology that uses cloud services to make application development modular, agile and dynamic. It relies on a suite of cloud-based services, often architected as microservices, making it easier to scale with workloads and to update services independently, without causing downtime.

For DevOps-focused companies in particular, this approach is ideal. It enables development teams to choose the framework, language or system that best meets the specific objectives of a given set of services and their teams. Additionally, cloud-native applications lend themselves to constant evaluation based on how users are experiencing the service. Companies can move with speed to ensure services scale and adapt to varying workloads.
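To make the microservices idea concrete, here is a minimal sketch of a single self-contained service exposing a health endpoint, using only the Python standard library. In a real cloud-native deployment each such service would run in its own container behind an orchestrator and scale independently; the port and endpoint names here are purely illustrative, not any specific vendor's API.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class HealthHandler(BaseHTTPRequestHandler):
    """A tiny 'microservice': one independently deployable HTTP endpoint."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Silence default per-request logging for this sketch.
        pass


def start_service(port: int = 8081) -> HTTPServer:
    """Start the service on a background thread and return the server handle."""
    # HTTPServer binds and listens in its constructor, so clients can
    # connect as soon as this function returns.
    server = HTTPServer(("127.0.0.1", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server


if __name__ == "__main__":
    srv = start_service()
    with urllib.request.urlopen("http://127.0.0.1:8081/health") as resp:
        print(resp.read().decode())  # prints {"status": "ok"}
    srv.shutdown()
```

Because each service owns its own process and interface, a team could rewrite or redeploy this one without touching any other service, which is the independent-update property the paragraph above describes.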

How can you better manage the complexity of cloud-native approaches?

According to AppDynamics research (Agents of Transformation 2021: The rise of full-stack observability), 75% of technologists report they are already struggling to manage overwhelming ‘data noise’, much of which arises from managing and monitoring a web of different services and suppliers, and from having to control systems both within and outside the core IT estate. And as more businesses migrate to the cloud, this issue will become increasingly prevalent: 85% of technologists state that quickly cutting through the noise caused by ever-increasing volumes of data to identify the root causes of performance issues will represent a significant challenge in the year ahead.

The sheer amount of data itself isn’t a bad thing. In fact, it can be extremely useful, but companies need the tools to analyse it, understand it, and act on it in real time.

So, what do you do if you have too much data to process but a vital need to do so? Many enterprises are finding that manual anomaly detection across such high data volumes is painful and impossible to scale. As a result, data issues can go undetected or take too long to resolve, driving up mean time to resolution (MTTR) and putting SLOs, SLAs and customer trust at risk, while reducing IT’s bandwidth to innovate.

Machine learning helps automate the process of finding abnormal behaviour in an application or its underlying virtual infrastructure, enabling teams to fix issues before they affect users. Automation can help here too: by automatically baselining every metric collected before and after a migration, it’s possible to create a clear comparison of application performance. This makes troubleshooting during migration far easier than manual processes allow.
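The baselining idea above can be sketched in a few lines: compute a mean/standard-deviation baseline from pre-migration samples, then flag post-migration samples that deviate beyond a threshold. This is a simple z-score rule for illustration only, not any specific AppDynamics feature, and all metric values are made up.

```python
import statistics


def baseline(samples):
    """Summarise a window of metric samples as (mean, population stdev)."""
    return statistics.fmean(samples), statistics.pstdev(samples)


def anomalies(samples, mean, stdev, threshold=3.0):
    """Return (index, value) pairs deviating > threshold stdevs from baseline."""
    if stdev == 0:
        # A flat baseline: any change at all counts as anomalous.
        return [(i, v) for i, v in enumerate(samples) if v != mean]
    return [(i, v) for i, v in enumerate(samples)
            if abs(v - mean) / stdev > threshold]


# Pre-migration response times (ms) establish the baseline...
pre = [102, 98, 101, 99, 100, 103, 97, 100]
mean, stdev = baseline(pre)

# ...and post-migration samples are compared against it.
post = [101, 99, 250, 100, 98]
print(anomalies(post, mean, stdev))  # -> [(2, 250)]
```

A production system would use rolling windows, seasonality-aware baselines and learned thresholds per metric, but the comparison of before-and-after behaviour is the same shape as this sketch.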

Why is observability key to a cloud-native approach and operational efficiency?

Context is key to understanding the bigger picture that individual data sources and services contribute to. Full-stack observability is needed to gain a full view of what is happening across the IT stack at any time. It provides a single, unified platform for viewing and understanding the health and performance of any technology, and the relationships between them, instead of multiple, disjointed monitoring solutions.

Technologists are striving to connect the dots up and down the stack, so complete visibility allows them to understand how performance issues affect customers and business outcomes, and to prioritise decisions and actions based on what really matters to the organisation.

Currently, many businesses have no hard data to tell them where they should be focusing their attention and are instead having to rely on instinct and gut feeling when making decisions. As the AppDynamics report showed, 68% of technologists admit that they waste a lot of time because they can’t easily isolate where performance issues are actually happening. And even when they do identify issues, they don’t know which of them actually matter most. The result is long hours spent and frustration in the IT department. User experience also suffers.

Applying full-stack observability and real-time insights to a cloud-native environment helps IT teams to make sense of the chaos and become more informed and efficient at the same time.

How can the business yield the best ROI?

The overwhelming majority of technologists (92%) say the ability to link technology performance to business outcomes such as customer experience, sales transactions and revenue, will be what’s really important to delivering innovation goals over the next year.

Most IT teams will tell you they don’t have an unlimited budget. They therefore have to justify their spend by investing and innovating in the areas where they are likely to see the biggest return. Understanding which aspects of the technology stack affect customers and the business the most will help focus efforts and improve ROI. Full-stack observability allows you to gain the insights you need to make the right decisions, in real time and for the future.

