Jan 15, 2021

How to take your stakeholders to the edge

Edge Computing
Data Centres
Cloud
IoT
Martin Percival
6 min
Martin Percival, Manager, Solution Architecture at Red Hat, shares his advice for incorporating edge computing into your business

One of the most important phenomena set to transform IT and computing is edge computing. Edge computing refers to the idea of bringing computing power closer to where it's needed or where data is generated, as opposed to the cloud-based model that centralises computing power in data centres. However, the edge concept is not limited to computing services, and also extends to networking and storage services - in short, it refers to an ethos of moving IT services physically closer to where they'll be in demand.

Edge computing brings with it the flexibility and simplicity of cloud computing, while distributing computing power across many small sites. Edge solutions are highly varied, and range from deployments that span a few computing clusters to millions of edge devices. Edge infrastructure can include any combination of devices, gateways, servers, mini-clusters and small data centres. 

Whereas cloud infrastructure is often hardware-centric, edge infrastructure is usually software-defined and very flexible in practice. There are two major areas where the edge differs substantially from the cloud in terms of technology and operations:

Firstly, the edge does not enjoy the "illusion of infinite capacity" possessed by the cloud. In the cloud, supply leads demand and users can request more resources on demand, but this does not hold true for edge deployments, where capacity is provisioned for a smaller set of workloads. This means edge computing requires extremely careful capacity planning in advance, as opposed to the friction-free scalability of cloud infrastructure.
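
To make that planning concrete, here is a minimal sketch - with invented site names and figures - of the kind of check an edge capacity plan has to make up front: whether each site's installed capacity covers its projected peak load plus a safety margin, since there is no cloud-style bursting to fall back on.

```python
# Minimal capacity-planning sketch with hypothetical figures: unlike the
# cloud, an edge site cannot burst beyond what was physically provisioned,
# so peak demand plus headroom must fit inside the installed capacity.

def site_has_headroom(provisioned_cores: int,
                      peak_demand_cores: float,
                      headroom: float = 0.3) -> bool:
    """Return True if the site can absorb peak load plus a safety margin."""
    return provisioned_cores >= peak_demand_cores * (1 + headroom)

sites = {
    "factory-01": (16, 14.0),   # (installed cores, projected peak cores)
    "retail-007": (8, 5.5),
}

for name, (installed, peak) in sites.items():
    status = "ok" if site_has_headroom(installed, peak) else "UNDER-PROVISIONED"
    print(f"{name}: {status}")
```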

Secondly, the edge requires a team to provide not only a computing platform but also the management of the entire stack - firmware, hardware, software, and the upkeep of services. All of this has to be handled in a consistent and repeatable manner to keep the edge network a cohesive whole.

When making a plan to transition to the edge, you need to consider carefully how it affects all of the parties who'll interact with this infrastructure. In particular, you'll need to consider the knock-on effects that edge adoption will have for your business, your operations teams, and your developers.

The edge and business continuity

When it comes to IT systems, the thing a business cares about most is resilience. Especially where business-critical functions are concerned, it's essential that edge deployments are configured to be highly resilient to failure, with redundancies baked into your edge infrastructure and the ability to operate at reduced capability - such as an offline mode when network disruption strikes.
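
One common pattern for that reduced-capability mode is store-and-forward. The sketch below is illustrative only - `upload` stands in for whatever client a real deployment would use - and shows readings being queued locally while the network is down and drained once connectivity returns.

```python
# Store-and-forward sketch for an edge site's offline mode: records are
# buffered locally while the network is unavailable and flushed once it
# returns, so the site keeps operating at reduced capability rather than
# failing outright. `upload` is a hypothetical stand-in for a real client.
from collections import deque

class StoreAndForward:
    def __init__(self, upload):
        self.upload = upload          # callable that sends one record upstream
        self.backlog = deque()        # local buffer used while offline

    def submit(self, record):
        self.backlog.append(record)
        self.flush()

    def flush(self):
        while self.backlog:
            try:
                self.upload(self.backlog[0])
            except ConnectionError:
                return                # still offline; keep the backlog
            self.backlog.popleft()    # discard only after a confirmed send
```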

Security is another major business concern when it comes to the edge. Edge sites often have less physical access security than a data centre, which increases the risk of disruption - whether malicious or accidental. Moreover, if peripheral and less capable edge devices such as industrial microcontrollers or actuators are brought online without proper protection, you risk exposing your entire downstream infrastructure to virtual or physical attack. It's therefore essential that edge systems are hardened from the ground up - OS, memory subsystems, storage, and communication channels alike.

A business must also pay close attention to the cost implications of edge infrastructure. Edge is highly cost-sensitive at scale: in smaller deployments, fixed costs and overheads per site are often manageable, but as the number of edge sites increases, total cost multiplies with it. If a business has an extensive edge infrastructure, even a small change in a per-unit cost can have budgetary significance as it recurs across hundreds of thousands of sites.
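
To see why, consider a deliberately simple illustration (all figures invented): a small recurring per-site cost change, multiplied across a large estate.

```python
# Illustrative only: a small recurring per-site cost change multiplied
# across a large edge estate. The figures are invented.
sites = 100_000
extra_cost_per_site_per_month = 1.50   # e.g. a slightly pricier support contract

annual_impact = sites * extra_cost_per_site_per_month * 12
print(f"£{annual_impact:,.0f} per year")   # £1,800,000 per year
```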

The edge and operations teams

Managing an edge infrastructure falls to operations teams, and it presents distinct challenges in their day-to-day work. One of the most prominent is remote access to edge sites: in a larger business, an operations team might have to manage tens of thousands of them, each requiring deployment, patches, upgrades, and migrations.

All of this requires a team to get used to operating remotely from a central location, which in turn means investing in advanced remote-management capabilities. It also requires an organisation to invest in fully automated operations, allowing work on the edge sites themselves to proceed with little to no manual intervention.

This highlights something else operations teams should seek: highly reproducible site management operations. If management operations aren't easy to reproduce anywhere and everywhere, troubleshooting becomes a huge issue given the complexity that will inevitably arise. Configurations for edge infrastructure therefore need to be highly deterministic, following a universal, standardised plan, and any divergence from that plan should be centrally and rigorously documented. An "infrastructure as code" approach can provide significant benefits here, with standardised change control applied to divergent configurations, thereby enforcing a documentation trail.
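
As a rough sketch of what deterministic configuration with documented divergence can look like - in practice you would reach for an established tool such as Ansible or Terraform rather than hand-rolled code, and every name and field below is hypothetical:

```python
# Sketch of deterministic site configuration: every site starts from one
# standard baseline, and any divergence must be declared explicitly with a
# reason, giving you a built-in documentation trail. All names and fields
# are hypothetical.
BASELINE = {"os_image": "edge-os-1.4", "log_level": "warn", "telemetry": True}

# Divergences are data, not ad-hoc edits: each records what changed and why.
DIVERGENCES = {
    "factory-01": {"log_level": ("debug", "ticket OPS-1123: intermittent PLC fault")},
}

def render_config(site: str) -> dict:
    config = dict(BASELINE)
    for key, (value, reason) in DIVERGENCES.get(site, {}).items():
        config[key] = value
        print(f"{site}: {key} diverges from baseline ({reason})")
    return config

print(render_config("factory-01"))   # diverges, with a documented reason
print(render_config("retail-007"))   # pure baseline
```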

The edge and developers

The edge enables businesses to offer new classes of services based on location data, which may need to be made available to partners in real time. This means developers will need to facilitate the automated exchange of data between a business and its partners, which typically calls for well-defined, open application programming interfaces (APIs) through which data can be exchanged and services provided seamlessly. Adopting APIs will also help internally, by presenting a hardware- and driver-agnostic means of accessing data from edge devices.
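
A minimal sketch of that hardware- and driver-agnostic layer, using only Python's standard library (the device class and readings are hypothetical): consumers code against one common interface and never touch vendor drivers directly.

```python
# Sketch of a hardware-agnostic device interface: callers depend on one
# Protocol rather than on vendor drivers. Device names and readings are
# hypothetical.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Reading:
    device_id: str
    metric: str
    value: float

class EdgeDevice(Protocol):
    def read(self) -> Reading: ...

class VendorAThermometer:
    """Wraps one vendor's driver behind the common interface."""
    def read(self) -> Reading:
        return Reading("thermo-a-01", "temperature_c", 21.4)  # stubbed driver call

def collect(devices: list[EdgeDevice]) -> list[Reading]:
    return [d.read() for d in devices]

print(collect([VendorAThermometer()]))
```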

The complexity of the edge also means that interfaces to streamline application and device management will become a necessity. Developers will need to build these so that edge apps and devices can be easily developed, installed, configured, and shared across teams. These application management platforms will need to cover a range of scenarios, including deploying apps at the various tiers of the edge.
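
As an indicative sketch (tier names and resource figures are invented), the placement logic of such a platform might reduce to matching an application's declared needs against what each edge tier offers:

```python
# Invented example of tier-aware placement: an app declares what it needs,
# and the platform deploys it to the nearest edge tier that satisfies it.
TIERS = {                      # resources available at each tier, nearest first
    "device":   {"cpu": 1,  "gpu": False},
    "gateway":  {"cpu": 4,  "gpu": False},
    "regional": {"cpu": 64, "gpu": True},
}

def place(app: str, needs: dict) -> str:
    for tier, capacity in TIERS.items():
        if capacity["cpu"] >= needs["cpu"] and (capacity["gpu"] or not needs["gpu"]):
            return f"{app} -> {tier}"
    return f"{app} -> no suitable tier"

print(place("anomaly-detector", {"cpu": 2, "gpu": False}))   # lands on gateway
print(place("video-analytics",  {"cpu": 8, "gpu": True}))    # lands on regional
```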

The future of the edge

As mentioned at the start, the edge differs significantly from cloud computing as it requires more careful resource planning and management. An edge computing platform needs to manage the whole hardware and software stack, while also providing a consistent and repeatable way to approach deployment and operations.

When turning to the edge, companies should think about how they can use their existing toolset to manage edge deployments. For example, consider replicating the tooling you use to configure, manage, and provision your hybrid cloud infrastructure for your edge systems as well. Such an approach gives your team a consistent way to manage all your systems, reducing both the risk of error and the need for further training.

We've covered above some of the things stakeholders need to take into account when turning to the edge - on the business, operations, and developer fronts. These considerations are not irrelevant if you've yet to build out your edge infrastructure: although mass edge deployments remain several years away, the design and tooling decisions made today will have a lasting impact on future capabilities.


Jun 6, 2021

Unlocking the next chapter of the digital revolution

Dell
servers
IT
Technology
Tim Loake
5 min
Tim Loake, Vice President, Infrastructure Solutions Group, UK at Dell Technologies highlights the importance of often-overlooked digital infrastructure

As the world retreated into hybrid working in 2020, our reliance on technology took the spotlight. But it was the jazzy new social and video calling platforms that took the encore. Behind the scenes, our servers worked overtime, keeping us connected and maintaining the drumbeat of always-on, newly digital services. Let's take a moment to pay our respects to the unsung technology heroes of the pandemic – the often-forgotten IT infrastructure keeping us connected come what may. After all, as we look ahead to more resilient futures, it will play a central role.

Servers could be likened to our plumbing – vital to a well-functioning home, but rarely top of mind so long as everything works. Never seen, rarely heard, our servers do all the graft with little praise. But it is worth reflecting on the incremental advances in GPU and CPU power, which have paved the way for workloads that previously were not possible. Chatbots and natural language processing, which provide essential customer touchpoints for businesses across the retail and banking sectors, rely on powerful servers. They also keep businesses competitive and customers happy in an always-on world.

Serving workplace transformation

But, as businesses grappled with pandemic disruptions, the focus was largely on adopting connected devices – and awe at the rapid increase in the datasphere. As they reined in their budgets and attempted to do more with less, one aspect was perhaps overlooked – those hard-working servers.

When it came to building resilience into a newly remote workforce, the initial concern was focused on the device endpoints – keeping employees productive. Many companies did not initially consider whether they had the server infrastructure to enable the entire workforce to log in remotely at the same time. As a result, many experienced a plethora of teething problems: virtual office crashes, long waits to get onto servers, and sluggish internet connectivity and application performance, often rendering the shiny new PC frustrating and useless.

Most businesses had only a few outward-facing servers that could authenticate remote workers – a vital gateway at a time when cyber hacks and attacks were multiplying. That's not to mention that many business applications simply weren't designed to cope with the latency that comes with working from home. What businesses discovered at that moment was that their plumbing was out of date.

Business and IT leaders quickly realised that staying ahead of the curve in the hybrid working world demanded a renewed focus on building agile, adaptable, and flexible IT infrastructure. More importantly, the shift accelerated the inevitable digital transformation needed to keep them competitive in a data-driven economy. It is now abundantly clear to businesses that they need IT infrastructure that can meet the demands of diverse workloads – deriving intelligent insights from data, deploying applications effectively, and enhancing data management and security.

Ripe for a digital revolution

Unsurprisingly, IDC noted an increase in purchases of server infrastructure to support changing workloads. It also forecasts that this uptick will be sustained beyond the pandemic. As the economy begins to reopen, business leaders are looking ahead. IT will continue to play a crucial role in 2021 and beyond – and next-generation servers have already laid the foundations for the digital revolution.

As we enter the zettabyte era, innovative new technologies are coming on stream, with 5G turbocharging IoT and putting edge computing to work. Exciting new services, improved day-to-day efficiencies, and the transformation of our digital society will all be underpinned by resilient IT infrastructure. By embracing the technological innovations of next-generation servers, businesses can keep pace with the coming data deluge.

The next generation of server architecture promises more power with less heat, thanks to improved directed airflow and direct liquid cooling, resulting in reduced operational costs and environmental impact. As we rebuild post-pandemic, manufacturers and customers alike are striving to achieve ever more challenging sustainability goals. With this in mind, a focus on environmentally responsible design is imperative for the servers of tomorrow – chassis uniquely designed for adaptive cooling and more efficient power consumption will be critical, improving energy efficiency generation over generation.

The most notable evolution is the configuration of these next-gen servers around more specific organisational needs. Unlike clunky and often unstable legacy infrastructure, the infrastructure of tomorrow will be sturdier and more modular. The next iteration is streamlined and, in this modular form, can be more easily tailored to business needs. This equates to essential cost savings, as businesses only pay for what they use.

Resolving the problem of the future, today

Tomorrow's IT challenges will centre on response times and latency as edge and 5G technologies go mainstream. As businesses develop new and innovative services that utilise supercharged connectivity and real-time analytics, staying on top of these challenges will give them a competitive edge. In the world of retail, for example, automation will power new virtual security guards, and even the slightest delay in the data relay could result in financial loss.

Similarly, in the smart cities of tomorrow, the network must be responsive. With city-centre traffic lights controlled by AI-powered cameras that monitor pedestrians, a delay in data transfer could cost the life of an elderly pedestrian who has fallen in the road. The stakes are far higher in a 5G-enabled world. As our reliance on technology deepens, the margins for error narrow, placing greater emphasis on the efficiency of those critical underpinning technologies.
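
To put illustrative numbers on that margin (the figures below are invented round numbers), consider how far a vehicle travels while a decision makes a round trip to a distant cloud versus a nearby edge site:

```python
# Illustrative latency budget: how far a vehicle travels while waiting for
# a decision. All figures are invented round numbers.
speed_kmh = 50
speed_ms = speed_kmh * 1000 / 3600          # ~13.9 metres per second

for route, round_trip_ms in [("distant cloud", 150), ("nearby edge site", 10)]:
    metres = speed_ms * round_trip_ms / 1000
    print(f"{route}: {round_trip_ms} ms round trip ~ {metres:.1f} m travelled")
```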

Fully enabling the hybrid work model today is just a stepping stone towards more fluid, tech-enabled lives. A work Zoom call from an automated vehicle en route to an intelligent transport hub is a highly probable vision of our future. But it requires incredible amounts of compute and seamless data transfers to make it possible. These glossy snapshots need super servers to come to life, making it essential that our IT plumbing glistens with next-gen innovation. Without exemplary server architecture, we risk stalling future tech advances and the human progress they enable.
