
On The Edge: Understanding Edge Computing

Over the last few years, edge computing has changed the landscape, and edge technologies have become a defining force of this period. They will undoubtedly continue to shape how the world does computing from here.

Until recently, the conversation around edge computing has mostly dealt in “what ifs”. The infrastructure needed to support edge computing has largely been unavailable, so we have been forced to hypothesise.

As we welcome a new wave of edge computing resources, this is changing. Application developers, entrepreneurs, and large enterprises are getting their hands on micro data centres, specialised processors, and the software abstractions needed to trial these processes and answer the “what ifs”. We can now go beyond the theoretical and directly answer questions about edge computing’s value and implications.

This real-world evidence adds weight to the debate over the hype around edge computing and its impact. Is it deserved, or is it misplaced?

Edge Computing. Not just about latency.

Edge computing brings computation and data storage closer to where they are needed. There is no fixed ‘edge’. Indeed, the edge can be anywhere that is closer to the end-user or device: 100 miles away, one mile away, on-premises, or even on the device itself. As such, edge computing stands in direct contrast to the traditional cloud computing model, in which computation is centralised in a handful of hyperscale data centres.

So far, conversations surrounding edge computing have emphasised that it has the power to minimise latency. This can be either to improve user experience or to enable new latency-sensitive applications. However, it has the potential to do so much more than this!  

While latency mitigation is an important use case, it is not the only one. Another use case for edge computing is to minimise network traffic going to and from the cloud. This is known as Cloud Offload and will probably deliver as much economic value as latency mitigation.  

Underpinning the importance of Cloud Offload is the sheer amount of data being generated. As our world becomes more connected, users, devices and sensors are creating more data than ever before.

Data and Edge Computing

Chetan Venkatesh, CEO of Macrometa, a startup tackling data challenges in edge computing, believes that the edge is a “data problem”. Cloud Offload has arisen because it costs money to move all this data, and many companies would rather not move it if they don’t have to.

Edge computing allows value to be extracted from data where it is generated, without ever moving it beyond the edge. If need be, the data can be reduced to a subset that is more economical to send to the cloud for storage or further analysis.
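To make the idea concrete, here is a minimal sketch of edge-side data reduction in Python. The sensor readings and the summary format are hypothetical; the point is simply that the full batch never leaves the edge, and only a compact summary is shipped upstream.

```python
import json
import statistics

def summarise_readings(readings):
    """Reduce a batch of raw sensor readings to a compact summary.

    The full batch never leaves the edge; only this summary is
    forwarded to the cloud for storage or further analysis.
    """
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "min": min(readings),
        "max": max(readings),
    }

# Hypothetical batch of raw readings generated at the edge.
raw_readings = [21.4, 21.6, 21.5, 35.2, 21.4, 21.3]

summary = summarise_readings(raw_readings)

# In a real deployment this payload would go to a cloud ingestion
# endpoint over HTTPS; here we just show what would be sent.
print(json.dumps(summary))  # a few bytes instead of the full raw stream
```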

Typically, Cloud Offload is used to process video or audio data, two of the most bandwidth-hungry data types. However, other data sources can be just as expensive to transmit to the cloud. Industrial equipment, for example, also generates a huge amount of data and is a prime candidate for Cloud Offload.

The Edge is an extension of the Cloud.

Early commentary suggested that the edge might replace the cloud. Instead, it has become more accurate to say that the edge expands the reach of the cloud; so far it has not put a dent in the ongoing migration of workloads to the cloud. Rather, work is underway to extend the traditional cloud formula of on-demand resource availability and abstraction of physical infrastructure to ‘edge’ locations. These locations are increasingly distant from traditional cloud data centres but will be managed using tools and approaches evolved from the cloud. As a result, over time the line between the cloud and the edge will blur.

The edge computing initiatives of public cloud providers like AWS and Microsoft Azure are a direct example of how the edge and the cloud form one continuum. For example, if you are an enterprise looking to do on-premises edge computing, Amazon will now send you an AWS Outpost: a fully assembled rack of compute and storage that mimics the hardware design of Amazon’s own data centres. It is installed in a customer’s data centre and is monitored, maintained, and upgraded by Amazon. Importantly, Outposts run many of the same services AWS users have come to rely on, such as the EC2 compute service. Microsoft has a similar aim with its Azure Stack Edge product: making the edge operationally similar to the cloud.
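To illustrate just how operationally similar this is, here is a minimal sketch using boto3, AWS’s Python SDK. Launching an instance onto an Outpost uses the same run_instances call as launching into a region; the difference is that you target a subnet that lives on the Outpost. All IDs below are placeholders, not real resources.

```python
import boto3

# The same EC2 client works whether the target capacity sits in an
# AWS region or on an Outpost rack in your own data centre.
ec2 = boto3.client("ec2", region_name="us-west-2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI ID
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # a subnet created on the Outpost
)

print(response["Instances"][0]["InstanceId"])
```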

These products from big-name providers send a clear signal that cloud and edge infrastructure can be unified under one umbrella.

If you’d like to read more about Cloud Computing then check out our article.

Edge infrastructure is arriving in phases.

Many application owners would like to reap the benefits of edge computing without having to support any on-premises footprint. This requires access to a new kind of infrastructure: something that looks like the cloud but is far more geographically distributed than the few dozen hyperscale data centres that make up the cloud today.

This kind of infrastructure is slowly becoming available, and it is likely to arrive in three distinct phases, each widening the edge’s geographical footprint and extending its reach.

Phase 1 – Multi-region and multi-cloud.

The first step is to leverage the multiple regions offered by the public cloud providers. For example, AWS has data centres in 22 geographic regions, with four more on the way. An AWS customer serving users in both North America and Europe might run its application in both the Northern California region and the Frankfurt region. Moving from one region to multiple regions can deliver a big drop in latency, and for a large set of applications this will be all that’s needed to provide a good user experience.
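As a rough sketch of how an application might exploit multiple regions, the Python below probes a set of candidate regions and picks the lowest-latency one. The health-check endpoints are hypothetical stand-ins; a real deployment would measure against its own regional endpoints.

```python
import time
import urllib.request

# Hypothetical health-check endpoints, one per deployed region.
REGION_ENDPOINTS = {
    "us-west-1": "https://us-west-1.example.com/health",
    "eu-central-1": "https://eu-central-1.example.com/health",
}

def measure_latency(url, timeout=2.0):
    """Return the round-trip time of one small request, in seconds."""
    start = time.perf_counter()
    urllib.request.urlopen(url, timeout=timeout).read()
    return time.perf_counter() - start

def nearest_region():
    """Pick the region that answered fastest from this client's location."""
    timings = {
        region: measure_latency(url)
        for region, url in REGION_ENDPOINTS.items()
    }
    return min(timings, key=timings.get)

print(nearest_region())
```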

Concurrently, there is a trend towards multi-cloud approaches, driven by an array of factors including cost efficiency, risk mitigation, avoidance of provider lock-in, and a desire to access the best-of-sector services offered by each provider. A multi-cloud approach is similar to the multi-region approach in that both distribute workloads along a spectrum that moves toward more decentralised edge computing.

Phase 2 – The Regional Edge.

The second phase extends the evolution a layer deeper by leveraging infrastructure in hundreds or thousands of locations instead of hyperscale data centres in just a few dozen cities.  

Content Delivery Networks (CDNs) already have an infrastructure footprint like this. CDNs have been engaged in a kind of pre-edge computing for around 20 years, caching static content closer to end-users in order to improve performance. While AWS has 22 regions, a typical CDN such as Cloudflare has 194 locations.

What is different now is that these CDNs have begun to open up their infrastructure to general-purpose workloads, not just static content caching. Today, CDNs like Cloudflare, Fastly, Limelight, StackPath, and Zenlayer all offer some combination of containers-as-a-service, VMs-as-a-service, and serverless functions. As such, they are starting to look more like cloud providers.
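Programming models vary from provider to provider, so rather than mimic any particular CDN’s API, here is a provider-neutral sketch in Python of the kind of small, general-purpose request handler these platforms now host. It uses only the standard library and stands in for logic that would run at an edge location rather than in a distant hyperscale data centre.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class EdgeHandler(BaseHTTPRequestHandler):
    """A toy request handler of the kind an edge platform might host."""

    def do_GET(self):
        # This logic runs at the edge location, one or two network
        # hops from the user, keeping response times low.
        body = b"Hello from the edge!"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), EdgeHandler).serve_forever()
```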

Forward-thinking cloud providers such as Packet and Ridge are also offering this kind of infrastructure, and the tech giants have taken early steps in the same direction. AWS, for example, has introduced the first of what it calls ‘Local Zones’ in Los Angeles, with more on the way.

Phase 3 – The Access Edge.

Phase 3 pushes the edge even further outward, to a point just one or two network hops away from the end-user or device. In telecommunications terminology, this is the Access portion of the network, so this type of architecture has been termed the Access Edge.

The typical form of the Access Edge is a micro data centre. This can vary in size. In some cases, it is as small as a single rack, in other cases, as large as a small lorry. It can also vary in location. It could be deployed at the side of a road for example, or at the base of a cellular network tower.  

The work involved here goes beyond software and relies on progress in other fields. Innovations in power and cooling, for example, are enabling ever-higher densities of infrastructure to be deployed in these small-footprint data centres.

These micro data centres have started to appear over the last couple of years. Companies like Vapor IO, EdgeMicro, and EdgePresence are building them across the US, for example. 2019 may have been the first buildout year, but their full value will only be realised later down the line; it is estimated that those who invested capital in edge data centres will begin to see a return by 2022.

Micro Data Centres vs The Regional Edge

At the moment, there is no clear answer as to how beneficial these micro data centres will be compared with the Regional Edge. Some companies are already leveraging the Regional Edge for a variety of Cloud Offload use cases, as well as latency mitigation in user-experience-sensitive domains such as gaming and e-commerce.

In contrast, the sectors that could use the very short network routes and ultra-low latencies of the Access Edge are somewhat further off. Autonomous vehicles, drones, AR/VR, smart cities, and remote-guided surgery are all applications that lend themselves well to the Access Edge, for example.

It could be that a prime Access Edge customer is waiting in the wings: something still in development and not yet in the spotlight. For this reason, we may have to wait to see just how effective the Access Edge will be.

New Software is needed to manage the Edge.

Ultimately, the goal is unification: an ecosystem in which the same tools and processes can be used to manage cloud and edge workloads regardless of where the edge resides. Getting there will require an evolution of the software used to deploy, scale, and manage applications in the cloud, software that has historically been built with a single data centre in mind.

Big-company initiatives such as Google’s Anthos, Microsoft’s Azure Arc, and VMware’s Tanzu are evolving cloud infrastructure software in this way. These products share a common thread: they are all based on Kubernetes, which has emerged as the dominant approach to managing containerised applications. But they go a step further, moving beyond the initial design of Kubernetes to support distributed fleets of Kubernetes clusters. These clusters sit atop heterogeneous pools of infrastructure comprising the “edge” (on-premises environments and public clouds, for example), and these evolved products allow them all to be managed uniformly.
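To give a flavour of what uniform management can look like, here is a minimal sketch using the official Kubernetes Python client. The kubeconfig context names are hypothetical; the point is that the same code addresses a cloud cluster and edge clusters identically.

```python
from kubernetes import client, config

# Hypothetical kubeconfig contexts: one cloud cluster, two edge clusters.
CONTEXTS = ["cloud-us-west", "edge-los-angeles", "edge-frankfurt"]

for context in CONTEXTS:
    # Point the client at each cluster in the fleet in turn.
    config.load_kube_config(context=context)
    apps = client.AppsV1Api()

    # The same query works identically against every cluster,
    # wherever its hardware happens to sit.
    deployments = apps.list_namespaced_deployment(namespace="default")
    names = [d.metadata.name for d in deployments.items]
    print(f"{context}: {names}")
```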

In conclusion: looking over the Edge.

The dawn of an era in which new resources support edge computing will instigate edge-oriented thinking among those who design and support applications. Until now, the defining trend has been centralisation in a small number of data centres; we are moving to a new way of thinking, one that favours increased decentralisation.

Edge computing is still in its infancy but it has moved away from the theoretical into the practical.  

It will continue to evolve at a rapid rate, thanks to an industry that moves quickly. The cloud as we know it is only around 14 years old, and in that short time it has transformed computing entirely. We can anticipate that it will not be long before edge computing leaves much the same fingerprint.