Apr 1, 2021

The world of the data-forward enterprise

Dean Yates, Rubrik’s VP for the UK, Ireland, Middle East and Africa, takes a look at the road ahead for the digitally enabled, data-forward enterprise.

From florists to hotels, big pharma to healthcare, every industry and every business has gone through, is in the throes of, or is about to dive into its own data revolution. It’s an exciting time for realising the potential of our collective data, and a necessary mind-shift to further edge the enterprise into the modern digital world.

Data is the lifeblood of the modern world, but not every business is up to the task of mining it for all it’s worth. IT organisations understand the importance of data management, yet most are not doing enough to prepare for future data challenges: a recent IDC survey found that 44.5% of CIOs anticipate that data sprawl will be a major, potentially devastating issue two years from now if their organisation continues with its current approach to data control and management.

Unsurprisingly, then, IT organisations are struggling to capitalise on the value of their data, which can drastically impact their bottom line. In fact, surveyed organisations without an enterprise-wide data management solution incur 66% more operational costs and are 67% slower to market than their innovative peers.

It’s clear that, while the recent uptake of cloud services driven by the need for remote working shows a positive shift in cloud adoption, IT leaders worldwide still aren’t seeing the cloud for what it is: an indispensable part of their IT infrastructure that will be a critical area of investment for effectively managing their growing ocean of data.

Overcoming modern data challenges

The solution, which will allow the enterprise to bridge the gap between its untamed data oceans and the value it knows it can mine, lies in a company-wide data management platform.

The modern enterprise sits on millions if not billions of files of data, yet without a new, ground-up approach, digital businesses will face the same challenges again and again: how do they create backup copies? How do they effectively move and store those copies? And how do they identify and recover the data they need when they need it? 

They’re on the precipice of data greatness: they know they need to shake up their data management strategy, and they know their current tape-based solution isn’t what’s going to shepherd them to a bright, data-powerful future. A modern cloud data management platform, on the other hand, is custom-built to usher in a new age of data manipulation for these organisations, comprising core features that turn data lakes into a tool rather than a hindrance:

  • Intuitive, accessible data archival: a solution worth its salt should make data archival a straightforward, seamless and easy-to-manage process - be that across public or private clouds. Ideally, this includes the ability for users to automate long-term data retention by moving a slider in the same policy engine as their backup and replication schedules, as well as automated SLA compliance reporting that instantly notifies users about capacity utilisation and growth (a minimal sketch of such a policy follows this list). These features are what separate a great cloud data management platform from a good one.
  • Data security and flexibility, wherever, whenever: the second feature to seek out, especially in this brave new remote world, is the ability to extend data management - and protection - to remote offices or wherever else your scattered workforce may be operating from. From there, these remote locations can back up data locally, replicate it to the central data hub of the enterprise, and archive it to the cloud - preferably via an intuitive, easy-to-navigate UI.
  • Instant backups, instant recovery: in a nutshell, this looks like continuous data protection and instant recovery of your data, without manual storage provisioning slowing you down. As radically simple as it sounds, this puts an end to job scheduling, delivers rapid recovery times, and equips users with the tools to effortlessly search for and identify data across an enterprise.
  • Data replication and recovery in the face of a crisis: finally, in preparation for the perimeter breach we’ll all one day be faced with, your cloud data management platform must offer an easy-to-implement, effective disaster recovery strategy. To shine here, your chosen solution will use asynchronous, deduplicated replication, native data recovery orchestration, and cloud instantiation - to name a few capabilities - to automate a complete disaster recovery plan.
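
As a rough illustration of the policy-driven approach described above, here is a minimal Python sketch of a single SLA-style policy that bundles backup frequency, replication target, archive tier and retention, and can flag workloads that have drifted out of compliance. All class, field and target names are hypothetical and not taken from any vendor’s API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional


@dataclass
class DataPolicy:
    """Hypothetical SLA-style policy covering backup, replication and archival."""
    name: str
    backup_every: timedelta      # how often a local backup copy is taken
    replicate_to: str            # central data hub the copy is replicated to
    archive_to: str              # long-term archive target (public or private cloud)
    archive_after: timedelta     # age at which copies move to the archive tier
    retain_for: timedelta        # total retention period before expiry

    def is_compliant(self, last_backup: datetime, now: Optional[datetime] = None) -> bool:
        """A workload is compliant if its most recent backup falls inside the backup window."""
        now = now or datetime.now(timezone.utc)
        return (now - last_backup) <= self.backup_every


# Example: a remote-office policy - back up locally every 4 hours, replicate to the
# central data hub, archive to cloud storage after 30 days, retain for 7 years.
remote_office = DataPolicy(
    name="remote-office-gold",
    backup_every=timedelta(hours=4),
    replicate_to="central-data-hub",
    archive_to="cloud-archive-tier",
    archive_after=timedelta(days=30),
    retain_for=timedelta(days=365 * 7),
)

last_backup = datetime.now(timezone.utc) - timedelta(hours=9)
print(remote_office.is_compliant(last_backup))  # False -> would surface in SLA compliance reporting
```

In a real platform, a policy like this would be attached to each workload and the engine would drive backup, replication and archival automatically, with compliance reporting built on the same checks.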

Well-architected cloud data management will be new to many organisations, but it’s a concept they’ll have to bring themselves up to speed with if they don’t want their data lakes turning into unmanageable data swamps in the future. In doing so, the modern digital business will champion digital transformation and data manipulation, better embrace the cloud - and further accelerate that all-important adoption - and begin indexing, organising and utilising its data in new ways that can only strengthen its proposition.


Jul 18, 2021

Four top tips for cloud-native transformation success

James Harvey, EMEAR CTO at Cisco AppDynamics, breaks down the key concerns facing technology leaders looking to execute on a cloud-native transformation.

Cloud offers a range of opportunities for innovation. However, it is not an easy path to embark on, as it also brings increased complexity across the IT estate. IT teams have to demonstrate incredible resilience to tackle spiralling IT complexity and the accelerating pace of digital transformation. With this heightened pressure on technologists, it is now more important than ever for businesses to remove the guesswork from their technology stack and move forward with certainty and intelligence.

This article provides the answers to some of the most pressing questions faced by technologists when considering a cloud-native approach.

Why cloud-native?

A cloud-native architecture is a design methodology that uses cloud services to make application development modular, agile and dynamic. It uses a suite of cloud-based services, often architected as microservices, making it easier to scale according to workloads and to update services independently, without causing downtime.

For DevOps-focused companies in particular, this approach is ideal. It enables development teams to choose the framework, language or system that best meets the specific objectives of a given set of services and their teams. Additionally, cloud-native applications lend themselves to constant evaluation based on how users are experiencing the service. Companies can move with speed, confident that their applications will scale and adapt to varying workloads.
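
To make the microservices idea concrete, below is a minimal sketch of one independently deployable service, written with FastAPI purely for illustration; the service and endpoint names are invented. Each service like this can be scaled, versioned and redeployed on its own, without taking the rest of the application offline.

```python
# A single, narrowly scoped service with its own health check - deployable,
# scalable and updatable independently of every other service in the application.
from fastapi import FastAPI

app = FastAPI(title="inventory-service")


@app.get("/health")
def health() -> dict:
    # Used by the platform (e.g. a Kubernetes probe) to decide whether to route traffic here.
    return {"status": "ok"}


@app.get("/items/{item_id}")
def get_item(item_id: int) -> dict:
    # In a real service this would query the service's own datastore.
    return {"item_id": item_id, "in_stock": True}

# Run locally with: uvicorn inventory_service:app --reload
```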

How can you better manage the complexity of cloud-native approaches?

According to AppDynamics research - Agents of Transformation 2021: The rise of full-stack observability - 75% of technologists report they are already struggling to manage overwhelming ‘data noise’, much of which arises from managing and monitoring a web of different services and suppliers, and having to control systems both within and outside of the core IT estate. And as more businesses migrate to the cloud, this issue will become increasingly prevalent - 85% of technologists state that quickly cutting through noise caused by the ever-increasing volumes of data to identify root causes of performance issues will represent a significant challenge in the year ahead.

The sheer amount of data itself isn’t a bad thing. In fact, it can be extremely useful, but companies need the tools to analyse it, understand it, and act on it in real time.

So, what do you do if you have too much data to process but a vital need to do so? Many enterprises are finding that manual anomaly detection amongst such high data volumes is painful and impossible to scale. As a result, data issues can go undetected or take too long to resolve - driving up mean time to resolution (MTTR), putting SLOs, SLAs and customer trust at risk, and reducing IT’s bandwidth to innovate.

Machine learning helps automate the process of finding abnormal behaviour in an application or its underlying virtual infrastructure, enabling you to fix the issue before it affects users. Automation helps during migration too. By automatically baselining every metric collected before and after the migration, it’s possible to create a clear comparison of application performance, which makes troubleshooting during migration much easier than relying on manual processes.
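
As a rough sketch of what automated baselining can look like, the snippet below learns a mean and standard deviation for a metric from a pre-migration window and flags post-migration samples that deviate beyond a z-score threshold. This is a deliberately simple illustration, not the algorithm used by any particular monitoring product.

```python
import statistics


def baseline(samples: list[float]) -> tuple[float, float]:
    """Learn a simple baseline (mean, standard deviation) from a window of metric samples."""
    return statistics.mean(samples), statistics.stdev(samples)


def anomalies(samples: list[float], mean: float, stdev: float, threshold: float = 3.0) -> list[int]:
    """Return the indices of samples whose z-score against the baseline exceeds the threshold."""
    if stdev == 0:
        return []
    return [i for i, x in enumerate(samples) if abs(x - mean) / stdev > threshold]


# Pre-migration response times (ms) establish the baseline ...
pre_migration = [120, 118, 125, 122, 119, 121, 124, 120]
mean, stdev = baseline(pre_migration)

# ... and post-migration samples are compared against it automatically.
post_migration = [123, 119, 340, 121, 125]
print(anomalies(post_migration, mean, stdev))  # [2] -> the 340 ms spike is flagged
```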

Why is observability key to a cloud-native approach and operational efficiency?

Context is key to understanding the bigger picture that the individual data sources and services are contributing to. Full-stack observability is needed to gain a full view of what is happening across the IT stack at any time. It provides a single, unified platform for viewing and understanding the health and performance of every technology and the relationships between them, instead of multiple, disjointed monitoring solutions.

Technologists are striving to connect the dots up and down the stack, so having complete visibility allows them to understand how performance issues impact customers and business outcomes, and to prioritise decision-making and actions based on what really matters to the organisation.

Currently, many businesses have no hard data to tell them where they should be focusing their attention and are instead having to rely on instinct and gut feeling when making decisions. As the AppDynamics report showed, 68% of technologists admit that they waste a lot of time because they can’t easily isolate where performance issues are actually happening. And even when they do identify issues, they don’t know which of them actually matter most. The result is long hours spent and frustration in the IT department. User experience also suffers.

Applying full-stack observability and real-time insights to a cloud-native environment helps IT teams to make sense of the chaos and become more informed and efficient at the same time.

How can the business yield the best ROI?

The overwhelming majority of technologists (92%) say that the ability to link technology performance to business outcomes - such as customer experience, sales transactions and revenue - will be what’s really important to delivering on innovation goals over the next year.

Most IT teams will tell you they don’t have an unlimited budget. They therefore have to justify their spend by investing and innovating in the areas where they are likely to see the biggest return. Understanding which aspects of the technology stack impact customers and the business the most will help focus efforts and improve ROI. Full-stack observability allows you to gain the insights you need to make the right decisions in real time and for the future.
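
As a hedged illustration of linking technology performance to business outcomes, the sketch below ranks performance issues by an estimated business impact - revenue exposed to failing requests - rather than by raw error rate. All figures, service names and fields are invented for the example.

```python
from dataclasses import dataclass


@dataclass
class ServiceIssue:
    service: str
    error_rate: float        # fraction of requests currently failing
    revenue_per_hour: float  # revenue flowing through this service (illustrative figure)

    @property
    def revenue_at_risk(self) -> float:
        # A simple proxy for business impact: revenue exposed to the failing requests.
        return self.error_rate * self.revenue_per_hour


issues = [
    ServiceIssue("search", error_rate=0.08, revenue_per_hour=1_000),
    ServiceIssue("checkout", error_rate=0.02, revenue_per_hour=50_000),
]

# Prioritise by business impact rather than by raw error rate.
for issue in sorted(issues, key=lambda i: i.revenue_at_risk, reverse=True):
    print(f"{issue.service}: ~£{issue.revenue_at_risk:,.0f}/hour at risk")
# checkout (2% errors) outranks search (8% errors) because far more revenue flows through it.
```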

 
