How to take your stakeholders to the edge
One of the most important phenomena set to transform IT and computing is edge computing. Edge computing refers to the idea of bringing computing power closer to where it's needed or where data is generated, as opposed to the cloud-based model that centralises computing power in data centres. However, the edge concept is not limited to computing services: it also extends to networking and storage services. In short, it refers to an ethos of moving IT services physically closer to where they'll be in demand.
Edge computing brings with it the flexibility and simplicity of cloud computing, while distributing computing power across many small sites. Edge solutions are highly varied, and range from deployments that span a few computing clusters to millions of edge devices. Edge infrastructure can include any combination of devices, gateways, servers, mini-clusters and small data centres.
Whereas cloud infrastructure is often hardware-centric, edge infrastructure is usually software-defined and very flexible in practice. There are two major areas where the edge differs substantially from the cloud in terms of technology and operations:
Firstly, the edge does not enjoy the “illusion of infinite capacity” that the cloud provides. In the cloud, supply leads demand and users can request more resources on demand; this does not hold true for edge deployments, where capacity is provisioned for a smaller set of workloads. Edge computing therefore requires careful capacity planning in advance, as opposed to the friction-free scalability of cloud infrastructure.
Secondly, the edge requires a team not only to provide a computing platform, but also to manage the entire hardware and software stack: firmware, operating systems, applications, and the ongoing upkeep of services. All of this has to be handled in a consistent and repeatable manner, to keep the edge network a cohesive whole.
When making a plan to transition to the edge, you need to consider carefully how it affects all of the parties who'll interact with this infrastructure. In particular, you'll need to consider the knock-on effects that edge adoption will have on your business, your operations teams, and your developers.
The edge and business continuity
When it comes to IT systems, the primary thing a business cares about is resilience. Especially for business-critical functions, it's essential to ensure that edge deployments are configured to be highly resilient to failure. Redundancy should be baked into your edge infrastructure, along with the ability to operate at reduced capability, such as an offline mode when network disruption strikes.
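To make "reduced capability" concrete, here is a minimal Python sketch of one common pattern: an edge agent that buffers readings locally when its upstream link is unavailable and replays them once connectivity returns. The endpoint URL, buffer path and function names are assumptions for illustration, not part of any particular product.

```python
import json
import requests

UPSTREAM_URL = "https://central.example.com/ingest"  # hypothetical central endpoint
LOCAL_BUFFER = "edge_buffer.jsonl"                   # hypothetical local buffer file

def send_reading(reading: dict) -> None:
    """Try to forward a reading upstream; fall back to a local buffer on failure."""
    try:
        resp = requests.post(UPSTREAM_URL, json=reading, timeout=5)
        resp.raise_for_status()
    except requests.RequestException:
        # Offline mode: keep operating at reduced capability by queueing locally.
        with open(LOCAL_BUFFER, "a") as f:
            f.write(json.dumps(reading) + "\n")

def flush_buffer() -> None:
    """Replay buffered readings once connectivity is restored."""
    try:
        with open(LOCAL_BUFFER) as f:
            pending = [json.loads(line) for line in f if line.strip()]
    except FileNotFoundError:
        return
    remaining = []
    for reading in pending:
        try:
            requests.post(UPSTREAM_URL, json=reading, timeout=5).raise_for_status()
        except requests.RequestException:
            remaining.append(reading)
    with open(LOCAL_BUFFER, "w") as f:
        f.writelines(json.dumps(r) + "\n" for r in remaining)
```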
In addition, security is a major business concern when it comes to the edge. Edge sites often have less physical access security than a data centre, which increases the risk of disruption, whether malicious or accidental. And if peripheral, less capable edge devices such as industrial microcontrollers or actuators are brought online without proper protection, you risk exposing your entire downstream infrastructure to virtual or physical attack. It's therefore essential to harden edge systems from the ground up, from the operating system and memory subsystems to storage and communication channels.
A business must also pay close attention to the cost implications of edge infrastructure. Edge costs are highly sensitive to scale: in smaller edge deployments, fixed costs and overheads per site are often manageable, but as the number of edge sites grows, those per-site costs multiply across the whole estate. If a business has an extensive edge infrastructure, even a small change in a per-unit cost can have budgetary significance as it recurs across hundreds of thousands of sites.
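The arithmetic behind that sensitivity is simple but worth spelling out. The short Python sketch below uses made-up figures to show how a seemingly trivial per-site change compounds across a large fleet; substitute your own numbers.

```python
# Illustrative figures only; replace with your own per-site costs and fleet size.
sites = 200_000                  # number of edge sites in the estate
monthly_cost_per_site = 42.00    # e.g. connectivity + power + support, per site
increase_per_site = 0.50         # a "small" change of 50 cents per site per month

baseline = sites * monthly_cost_per_site * 12
delta = sites * increase_per_site * 12

print(f"Annual baseline cost: ${baseline:,.0f}")                 # $100,800,000
print(f"Annual impact of a $0.50/site change: ${delta:,.0f}")    # $1,200,000
```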
The edge and operations teams
Managing an edge infrastructure falls to operations teams, and it presents new challenges in their day-to-day work. One of the most prominent is remote access to edge sites: in a larger business, an operations team might have to manage tens of thousands of them, each one requiring deployment, patches, upgrades, and migrations.
All of this requires a team that is comfortable operating remotely from a central location, which in turn means investing in advanced remote-management capabilities across the organisation. It also means investing in fully automated operations, so that work on the edge sites themselves can proceed with little to no manual intervention.
This calls attention to something else operations teams should seek: highly reproducible site management operations. If management operations aren't easy to reproduce anywhere and everywhere, troubleshooting becomes a huge issue given the complexity that will inevitably arise. Configurations for edge infrastructure therefore need to be highly deterministic, following a universal and standardised plan, and any divergence from that plan should be centrally and rigorously documented. Using an “Infrastructure as code” approach here may provide significant benefits, with standardised change control applied to divergent configurations, thereby enforcing a documentation trail.
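As a rough illustration of what "deterministic, centrally documented configuration" might look like, the Python sketch below compares each site's reported configuration against a standard baseline and emits a record for every divergence. The field names and site identifiers are invented for the example; a real deployment would more likely express this through an infrastructure-as-code tool with version-controlled change control on top.

```python
from datetime import datetime, timezone

# The single, standardised plan every site is expected to follow (illustrative fields).
BASELINE = {
    "os_release": "22.04",
    "agent_version": "3.4.1",
    "telemetry_interval_s": 60,
}

def find_divergences(site_id: str, actual: dict) -> list[dict]:
    """Return a structured record for every setting that differs from the baseline."""
    records = []
    for key, expected in BASELINE.items():
        observed = actual.get(key)
        if observed != expected:
            records.append({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "site": site_id,
                "setting": key,
                "expected": expected,
                "observed": observed,
            })
    return records

# Example: two hypothetical sites, one compliant and one that has drifted.
fleet = {
    "site-0001": {"os_release": "22.04", "agent_version": "3.4.1", "telemetry_interval_s": 60},
    "site-0002": {"os_release": "20.04", "agent_version": "3.4.1", "telemetry_interval_s": 30},
}

for site, config in fleet.items():
    for record in find_divergences(site, config):
        print(record)  # in practice, append to a central audit log or change-control system
```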
The edge and developers
The edge enables businesses to offer new classes of services based on location data, which may need to be made available to partners in real time. Developers will inevitably need to facilitate the automated exchange of data between a business and its partners, which typically calls for well-defined, open application programming interfaces (APIs) that let both sides exchange data and provide services seamlessly. Adopting APIs will also help internally, by presenting a hardware- and driver-agnostic way to access data from edge devices.
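A minimal sketch of such an API, written in Python with the Flask framework, might look like the following. The route, site identifiers and payload shape are assumptions for illustration rather than a prescribed schema; a production API would add authentication, versioning and a formal specification such as OpenAPI.

```python
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Hypothetical in-memory store standing in for data collected from edge devices.
TELEMETRY = {
    "site-0001": {"temperature_c": 21.4, "humidity_pct": 48, "last_seen": "2024-01-01T12:00:00Z"},
}

@app.route("/v1/sites/<site_id>/telemetry")
def get_telemetry(site_id: str):
    """Expose the latest readings for a site in a hardware-agnostic, documented format."""
    if site_id not in TELEMETRY:
        abort(404)
    return jsonify(TELEMETRY[site_id])

if __name__ == "__main__":
    app.run(port=8080)
```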
The complexity of the edge also means that interfaces to streamline application and device management will become a necessity. Developers will have to build these interfaces so that edge devices and apps can be easily developed, installed, configured, and shared across teams. Such application management platforms will need to cover a range of scenarios, including deploying apps at the various tiers of the edge.
The future of the edge
As mentioned at the start, the edge differs significantly from cloud computing as it requires more careful resource planning and management. An edge computing platform needs to manage the whole hardware and software stack, while also providing a consistent and repeatable way to approach deployment and operations.
When turning to the edge, companies should be thinking about how they can use their existing toolset to manage their edge deployments. For example, consider trying to replicate the tooling that you use to configure, manage, and provision your hybrid cloud infrastructure for your edge systems as well. Such an approach would provide a consistent way for your team to approach managing all your systems, thus reducing the risk of error and the need for further training.
We've covered above some of the things stakeholders need to take into account when turning to the edge, on the business, operations, and developer fronts. These considerations shouldn't be dismissed as irrelevant just because you've yet to build out your edge infrastructure: although mass edge deployments remain several years away, the design and tooling decisions made today will have a lasting impact on future capabilities.