For years, companies have been moving to the cloud. With the ubiquity of internet-connected devices, it seems only natural to rely on cloud-based services for the majority of applications today. However, the rise of edge computing has demonstrated that there is also a need for hyper-local, distributed computing, which can offer latencies and resilience that the cloud cannot match. With these benefits comes ever-increasing complexity, both in individual application development and in overall infrastructure management.

In this article, we'll take a look at the unique benefits and challenges of edge computing, as well as how a lightweight, Kubernetes-based message queue can meet those challenges.

Edge Computing

Edge computing is a form of distributed computing that brings data and its associated processing closer to the point of use. To understand why edge computing matters, we first need to understand the limitations of cloud computing.

Limitations of the Cloud

Cloud computing has brought many advantages to modern application development. It is easier and faster than ever to spin up virtualized infrastructure from any number of IaaS providers. However, I would like to highlight two drawbacks of cloud infrastructure: latency and cost.

First, latency to your cloud can be inconsistent depending on the location and network context of your users. If your data center is in the Eastern US, users in Asia could experience more than 200ms of latency on every request to and from your cloud. Additionally, mobile users or users in remote locations will have inconsistent network connectivity. If your app depends on a constant connection to the cloud, these users could suffer a degraded experience.
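To make that latency cost concrete, here's a rough back-of-the-envelope sketch. The round-trip time and number of calls are illustrative assumptions, not measurements, but they show how quickly sequential cloud round trips add up for a distant user:

```python
# Illustrative estimate of user-facing delay caused purely by
# network round trips to a faraway data center. All numbers
# here are assumptions for the sake of the example.

RTT_MS = 200           # assumed round-trip time from Asia to an Eastern US data center
SEQUENTIAL_CALLS = 10  # assumed number of dependent API calls in one user interaction

added_delay_ms = RTT_MS * SEQUENTIAL_CALLS
print(f"Network wait alone: {added_delay_ms} ms")  # 2000 ms before any server work happens
```

Two full seconds of pure network wait, before the backend does any actual processing — which is exactly the kind of delay that moving computation to the edge can eliminate.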


Using KubeMQ Bridges to Communicate Between Edge and Cloud Clusters