Istio & Service Mesh - Simply Explained in 15 Mins

Istio Service Mesh explained | Learn what Service Mesh and Istio are and how they work

In this video you will learn about Service Mesh and one of its implementations, Istio.
To understand the concepts, we will first look at the new challenges introduced by a Microservice Architecture.

Then we will see how different features of a Service Mesh solve these challenges.
We will look at how Istio implements a Service Mesh, learn about the Istio architecture, and see how to configure Istio for our microservice application.

▬▬▬▬▬▬ T I M E S T A M P S ⏰ ▬▬▬▬▬▬

  • 0:00 - Intro
  • 0:53 - Challenges of a microservice architecture
  • 5:11 - Solution: Service Mesh with Sidecar Pattern
  • 6:15 - Service Mesh Traffic Split feature
  • 7:25 - Istio Architecture
  • 9:05 - How to configure Istio?
  • 11:57 - Istio Features: Service Discovery, Security, Metrics & Tracing
  • 13:19 - Istio Gateway
  • 14:06 - Final Overview: Traffic Flow with Istio

#istio #microservice #developer

Roberta Ward

From Service Mess to Service Mesh

Introduction

Over the last 10 years, the rapid adoption of microservices architecture has resulted in enterprises running hundreds (or sometimes even thousands) of services. With the growth of containerization technologies like Docker and Kubernetes, microservice patterns have seen their strongest growth, resulting in a complex dependency matrix between these microservices. Monitoring, supporting, and maintaining these services is becoming a challenge, so most enterprises have invested in some kind of microservices management tool.

This article will explore some of the common aspects of microservice management. Then we’ll take a closer look at the centralized gateway pattern and its limitations (most enterprises started with, or still use, this pattern). Finally, we will look at a newer pattern called the “Service Mesh”, which has gained a lot of attention in the last 3–4 years and is often also referred to as the “sidecar proxy” pattern. So let’s get started!

Microservices Management

As enterprises build more and more microservices, it’s becoming clear that some aspects are common across all of them, so it makes sense to provide a common platform for managing them. Below are some of the key common aspects:

**Service Registration and Discovery:** A common place to register, document, search and discover microservices.

**Service Version Management:** Ability to run multiple versions of a microservice.

**Authentication and Authorization:** Handle authentication and authorization, including mutual TLS (mTLS) between services.

**Service Observability:** Ability to monitor end-to-end traffic between services and response times, and to quickly identify failures and bottlenecks.

**Rate Limiting:** Define thresholds for how much traffic services can handle.

**Circuit Breaker:** Ability to configure and introduce a circuit breaker for failure scenarios (to avoid flooding downstream services with requests).

**Retry Logic:** Ability to configure and introduce retry logic dynamically in services.

So it’s a good idea to build these concerns into a common framework or service management tool, so that microservice development teams don’t have to build them into each service themselves. A minimal sketch of what this kind of boilerplate looks like when written per service is shown below.
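This is an illustrative Go sketch only; the URL, attempt count, and backoff values are hypothetical, and a mesh or management layer would express the same policy as configuration applied at the proxy rather than as code inside every service.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// callWithRetry issues a GET request and retries on failure with a simple
// linear backoff. With a service mesh, this policy moves out of the app
// and into the sidecar proxy, configured declaratively.
func callWithRetry(url string, attempts int) (*http.Response, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil
		}
		if err != nil {
			lastErr = err
		} else {
			resp.Body.Close()
			lastErr = fmt.Errorf("server error: %d", resp.StatusCode)
		}
		time.Sleep(time.Duration(i+1) * 200 * time.Millisecond)
	}
	return nil, fmt.Errorf("all %d attempts failed: %v", attempts, lastErr)
}

func main() {
	// Hypothetical downstream service URL, used only for illustration.
	resp, err := callWithRetry("http://inventory.local/items", 3)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```

Every team that writes this by hand ends up with slightly different retry, rate-limiting, and circuit-breaking behavior, which is exactly the inconsistency a shared platform avoids.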

#service-mesh #istio-service-mesh #microservices #gateway-service #envoy-proxy

Fannie Zemlak

Open Service Mesh — Microsoft’s SMI based Open Source Service Mesh Implementation

Microsoft’s Open Service Mesh (OSM) is an SMI-compliant, lightweight service mesh run as an open source project. Backed by service mesh partners including HashiCorp, Solo.io, and Buoyant, Microsoft introduced the Service Mesh Interface (SMI) last year with the goal of helping end users and software vendors navigate the myriad choices presented by service mesh technology by providing a set of specification standards. OSM can be considered a reference implementation of SMI, one that builds on existing service mesh components and concepts.

The Open Service Mesh data plane is architecturally based on the Envoy proxy and implements the go-control-plane xDS v3 API. Although Envoy ships with OSM by default, the use of standard interfaces allows OSM to be integrated with other xDS-compatible reverse proxies.

SMI follows in the footsteps of existing Kubernetes resources like Ingress and NetworkPolicy, which also do not ship with an implementation: they define the interfaces through which providers plug in their products. The SMI specification likewise defines a set of common APIs that allow mesh providers to deliver their own implementations, meaning providers can either consume the SMI APIs directly or build operators that translate SMI to their native APIs.
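To make that provider-plug-in idea concrete, here is a small Go sketch; the type and method names are invented for illustration and do not come from the SMI SDK or any real mesh. It mirrors the pattern of a common traffic-split spec being handed to whatever provider implementation is plugged in behind it.

```go
package main

import "fmt"

// TrafficSplitSpec is a simplified, hypothetical stand-in for a common
// mesh API object: route a service's traffic across backends by weight.
type TrafficSplitSpec struct {
	Service string
	Weights map[string]int // backend name -> percentage
}

// MeshProvider is what each mesh implementation would plug in behind the
// common API, either natively or via a translating operator.
type MeshProvider interface {
	ApplyTrafficSplit(spec TrafficSplitSpec) error
}

// loggingProvider is a trivial provider that just prints what it would do.
type loggingProvider struct{ name string }

func (p loggingProvider) ApplyTrafficSplit(spec TrafficSplitSpec) error {
	for backend, w := range spec.Weights {
		fmt.Printf("[%s] %s -> %s: %d%%\n", p.name, spec.Service, backend, w)
	}
	return nil
}

func main() {
	spec := TrafficSplitSpec{
		Service: "checkout",
		Weights: map[string]int{"checkout-v1": 90, "checkout-v2": 10},
	}
	var provider MeshProvider = loggingProvider{name: "demo-mesh"}
	if err := provider.ApplyTrafficSplit(spec); err != nil {
		fmt.Println("apply failed:", err)
	}
}
```

An SMI adapter or operator would play the role of that translation layer, turning the common spec into a provider’s native resources.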

[Figure: SMI Implementation]

With OSM, users get a simplified service mesh implementation using SMI and Envoy on Kubernetes. The SMI ecosystem already has multiple providers, such as Istio, Linkerd, Consul Connect, and now Open Service Mesh. Some of them implement SMI compatibility through adapters (Istio, Consul Connect), while others (OSM, Linkerd) consume the SMI APIs directly.

The OSM implementation is very similar to Linkerd’s: both consume the SMI APIs directly, with no need for the kind of adapter Istio uses. One key difference is that OSM uses Envoy as its proxy and communication bus, whereas Linkerd uses linkerd2-proxy (written in Rust and lighter than Envoy).

Architecture & Components

The OSM control plane comprises four core components, all implemented as a single controller entity (a Kubernetes pod/deployment). This is much lighter weight than older versions of Istio, which ran four separate control plane components (Istio 1.6 introduced istiod, which unifies the control plane into one binary).

[Figure: OSM Architecture — Components]

OSM data plane — uses Envoy as the reverse proxy by default, like most other service mesh providers (Linkerd is the exception here, with its ultralight transparent proxy written in Rust). While OSM ships with Envoy by default, the design relies on interfaces (an interface type in Go is a kind of definition: it describes the exact methods some other type must have), which enable integration with any xDS-compatible reverse proxy. The dynamic configuration of all the proxies is handled by the OSM controller using the Envoy xDS go-control-plane.
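Because the paragraph above leans on Go’s interface concept, here is a tiny, generic illustration of it; the names are made up and are not taken from OSM’s source. Any proxy integration that implements the listed methods satisfies the interface implicitly, which is how a design can default to Envoy while staying open to other xDS-compatible proxies.

```go
package main

import "fmt"

// ProxyConfigurator describes the exact methods a proxy integration must
// provide; any type implementing them satisfies the interface implicitly.
type ProxyConfigurator interface {
	Name() string
	PushConfig(clusterID string, routes []string) error
}

// envoyLike is a stand-in for a default, xDS-style proxy integration.
type envoyLike struct{}

func (envoyLike) Name() string { return "envoy-like" }

func (envoyLike) PushConfig(clusterID string, routes []string) error {
	fmt.Printf("pushing %d routes to %s via xDS\n", len(routes), clusterID)
	return nil
}

func main() {
	// The variable could be swapped for any other implementation.
	var p ProxyConfigurator = envoyLike{}
	_ = p.PushConfig("cluster-a", []string{"/cart", "/inventory"})
	fmt.Println("configured proxy:", p.Name())
}
```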

#service-mesh #istio-service-mesh #kubernetes #azure #microsoft

Tamia Walter

Explaining Microservices and Service Mesh with Istio

When an application is broken down into multiple smaller service components, those components are known as microservices. Compared to the traditional monolithic approach, a Microservice Architecture treats each microservice as a standalone entity/module, which eases the maintenance of its code and related infrastructure. Each microservice of an application can be written in a different technology stack, and can be deployed, optimized and managed independently.

Though in theory a Microservice Architecture specifically benefits complex, large-scale applications, it is also widely used for small-scale application builds (for example, a simple shopping cart), with an eye to scaling further.

Benefits of a Microservice Architecture

  • Individual microservices within an application can be developed and deployed through different technology stacks.
  • Each microservice can be optimized, deployed or scaled independently.
  • Better fault handling and error detection.

Components of a Microservice Architecture

A modern cloud-native application running on Microservice Architecture relies on the following critical components:

  • Containerization (through platforms like Docker) - for effective management and deployment of services by breaking them into multiple processes.
  • Orchestration (through platforms like Kubernetes) - for configuration, assignment and management of available system resources to services.
  • Service Mesh (through platforms like Istio) - for inter-service communication through a mesh of service proxies to connect, manage and secure microservices.

The above three are the most important components of a microservice architecture that allow applications in a cloud-native stack to scale under load and perform even during partial failures of the cloud environment.

Complexities of a Microservice Architecture

When a large application is broken down into multiple microservices, each using a different technology stack (language, database, etc.) and requiring its own environment, the result is a complex architecture to manage. Though Docker containerization helps to manage and deploy individual microservices by running each as a separate process in its own container, inter-service communication remains critically complicated: you have to deal with overall system health, fault tolerance and multiple points of failure.

Let us understand this with how a shopping cart works on a Microservice Architecture. The microservices here would include the inventory database, the payment gateway service, the product suggestion algorithm based on the customer’s access history, and so on. While all these services remain standalone mini-modules in theory, they do need to interact with each other. It is important to note that service-to-service communication is what makes microservices possible.
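As a purely hypothetical sketch of such a service-to-service call, the Go snippet below shows a cart service asking an inventory service whether an item is in stock; the hostname, path and JSON shape are invented for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// stockResponse mirrors a hypothetical inventory API payload.
type stockResponse struct {
	SKU     string `json:"sku"`
	InStock bool   `json:"in_stock"`
}

// checkStock is the cart service calling the inventory service directly.
// Every such call is a point where timeouts, retries, TLS and tracing
// would otherwise have to be handled by hand.
func checkStock(sku string) (bool, error) {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://inventory:8080/stock/" + sku)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	var out stockResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return false, err
	}
	return out.InStock, nil
}

func main() {
	ok, err := checkStock("sku-123")
	if err != nil {
		fmt.Println("inventory unreachable:", err)
		return
	}
	fmt.Println("in stock:", ok)
}
```

Multiply this by every pair of communicating services and you get the fault-tolerance and observability burden described above.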

Why Do We Need a Service Mesh?

Now that you know the importance of service-to-service communication in a microservice architecture, it becomes apparent that the communication channel must remain fault-free, secure, highly available and robust. This is where a service mesh comes in as an infrastructure component: it ensures controlled service-to-service communication by deploying multiple service proxies. A Service Mesh is responsible for fine-tuning communication among different services rather than adding new functionality.

In a Service Mesh, the pattern of deploying proxies alongside individual services to enable inter-service communication is widely known as the Sidecar Pattern. The sidecars (proxies) can be designed to handle any functionality critical to inter-service communication, such as load balancing, circuit breaking and service discovery.

Through a Service Mesh, you can:

  • Maintain, configure and secure all service-to-service communication among all or selected microservices of an application.
  • Configure and perform network functions within microservices, such as network resiliency, load balancing, circuit breaking and service discovery.
  • Keep network functions maintained and implemented as a separate entity from the business logic, fulfilling the need for a dedicated service-to-service communication layer decoupled from application code.
  • Let developers focus on the application’s business logic, while all or most of the work related to network communication is handled by the service mesh.
  • Use any technology to develop individual services, since communication between a microservice and its service mesh proxy always happens over standard protocols such as HTTP/1.x, HTTP/2 and gRPC.
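To show how little the application needs to know once a sidecar is in place, here is a sketch of the same kind of call made through a local proxy; the port and header are illustrative and not specific to Istio or Envoy. In real meshes the redirection is usually transparent (for example via iptables rules), so the application typically keeps calling the service by name, but the explicit local hop makes the role of the proxy visible.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// callViaSidecar sends a plain HTTP request to a local sidecar proxy,
// which is assumed to forward it to the target service. The app carries
// no TLS, retry or load-balancing logic of its own.
func callViaSidecar(targetService, path string) (string, error) {
	// The sidecar listens locally; 15001 is just an illustrative port.
	req, err := http.NewRequest("GET", "http://127.0.0.1:15001"+path, nil)
	if err != nil {
		return "", err
	}
	// Hypothetical header telling the proxy which upstream service we mean.
	req.Header.Set("X-Target-Service", targetService)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	out, err := callViaSidecar("inventory", "/stock/sku-123")
	if err != nil {
		fmt.Println("call failed:", err)
		return
	}
	fmt.Println("response:", out)
}
```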

#serverless #microservice architecture #cloud native #istio #service mesh