In a microservices architecture, apps trade the rigidity and stability of the call stack for the flexibility and chaos of the network. Concerns such as latency, retries after outages, security, and traceability, which never arise with a call stack, become real with every service call. The service mesh is a pattern that has arisen to take these concerns out of the hands of coders so that they can stay focused on coding business solutions.

An API gateway and a service mesh overlap considerably. This article explores what a service mesh is, how it benefits your organization, how it differs from an API gateway, and offers recommendations for using a service mesh.

Executive Summary of Recommendations

Any application team building a large distributed componentized application running on containers should use a service mesh to manage, secure, and monitor their services. The traffic between these intra-application services is what a service mesh is best suited for. API gateways should, in contrast, be used to manage interactions between your business and your partners or between one internal business unit and another.

A service mesh comes in a variety of patterns, but the ideal pattern is a sidecar proxy running in containers. Although Istio is the most common service mesh product, Consul, Linkerd, the service mesh Red Hat bundles with OpenShift (a fork of Istio), and others are also options for Kubernetes-based containers. Before investing in a service mesh, you should evaluate the landscape of service mesh products, their maturity, and whether the industry has settled on a clear winner (as happened in the container space, where Kubernetes became the de facto industry standard).

Although a service mesh overlaps heavily with API management, security, resilience, and monitoring, it is best viewed as a cloud technology, since it is so intertwined with containers and is meant to support cloud-native apps. Note that by "cloud native" I include apps designed to run on public cloud as well as on private (on-premises) cloud containers.

What Is a Service Mesh?

Moving from the call stack of function invocation to a network call introduces issues with security, instability, and debugging. A service mesh is a set of architectural patterns and supporting tools for handling those concerns. For example, a function call can assume the callee is always available, whereas a network call cannot. A service mesh helps the client endpoint handle this network instability by executing retries transparently to the client app. It also helps the server endpoint by routing the request to the server node best able to handle it, based on configured traffic-routing policies.
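To make the retry behavior concrete, here is a minimal Python sketch of the kind of transparent retry policy a mesh sidecar applies on the client's behalf. The names (`call_with_retries`, `flaky_service`) are illustrative and do not correspond to any particular mesh's API; a real sidecar would also enforce timeouts and retry budgets.

```python
import time

def call_with_retries(send, max_attempts=3, backoff_s=0.1):
    """Retry a flaky network call with exponential backoff.

    `send` is any zero-argument callable that raises on failure.
    A sidecar proxy applies a policy like this without the
    application code being aware of it.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return send()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # retry budget exhausted; surface the failure
            time.sleep(backoff_s * 2 ** (attempt - 1))

# Simulate an upstream service that fails twice, then succeeds.
attempts = []
def flaky_service():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("upstream unavailable")
    return "200 OK"

print(call_with_retries(flaky_service))  # "200 OK" after two retries
```

The client app only ever sees the final "200 OK"; the two failures and the backoff happen entirely inside the proxy layer.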

A service mesh is usually implemented with two layers: a data plane and a control plane. The data plane acts as a proxy for both client and server endpoints of a connection, enforcing the policies received from the control plane and reporting back runtime metrics to the control plane’s monitoring tool. The control plane manages the service policies and orchestration of the data plane.
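The two-layer split can be sketched in a few lines of Python. This is a toy model, not any real mesh's interface: a `ControlPlane` object stands in for the policy store and metrics collector, and a `SidecarProxy` plays the data plane, enforcing policy around each call and reporting telemetry back.

```python
import time

class ControlPlane:
    """Holds service policies and collects metrics from proxies."""
    def __init__(self, policies):
        self.policies = policies  # e.g. which callers may reach a service
        self.metrics = []         # runtime reports from the data plane

    def policy_for(self, service):
        return self.policies.get(service, {"allow": []})

    def report(self, metric):
        self.metrics.append(metric)

class SidecarProxy:
    """Data plane: enforces control-plane policy around each call."""
    def __init__(self, service_name, handler, control_plane):
        self.service_name = service_name
        self.handler = handler
        self.control = control_plane

    def handle(self, caller, request):
        policy = self.control.policy_for(self.service_name)
        if caller not in policy["allow"]:
            self.control.report({"service": self.service_name, "status": 403})
            return "403 Forbidden"
        start = time.perf_counter()
        response = self.handler(request)
        self.control.report({
            "service": self.service_name,
            "status": 200,
            "latency_s": time.perf_counter() - start,
        })
        return response

# Wire up a toy mesh: only "web" may call "orders".
cp = ControlPlane({"orders": {"allow": ["web"]}})
orders = SidecarProxy("orders", lambda req: f"order {req} placed", cp)

print(orders.handle("web", "42"))    # allowed by policy
print(orders.handle("batch", "43"))  # denied by policy
```

Note that the service handler itself contains no security or monitoring code; both concerns live in the proxy, which is exactly the separation the data plane/control plane split provides.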

Topology of a service mesh: the client calls through its sidecar proxy to another proxy fronting the service, with both proxies managed by the service mesh control plane.

The most popular data plane is Envoy, an open source proxy created by Lyft that runs as a sidecar for cloud-native apps, including on-premises private cloud. The most popular control plane is Istio, an open source service mesh created jointly by Lyft, Google, and IBM that injects Envoy instances into cloud-native apps as container sidecars and manages them.

Below are some typical service mesh features, though not every service mesh implementation includes all of them.


Deciphering the Difference Between a Service Mesh and API Gateway