Nat Grady

Easier Deployment and Upgrade of NGINX Service Mesh - NGINX

Service meshes are rapidly becoming a critical component of the cloud‑native stack, especially for users of the Kubernetes platform. A service mesh provides critical observability, security, and traffic control so that your Kubernetes apps don’t need to implement these features, freeing developers to focus on business logic.

NGINX Service Mesh is our fully integrated service mesh platform. It provides all the advantages of a service mesh while leveraging a data plane powered by NGINX Plus to enable key features like mTLS, traffic management, and high availability.

NGINX Service Mesh Release 1.1.0 introduces three key enhancements that make it easier to deploy and manage our production‑ready service mesh in Kubernetes: Helm support, air‑gap installation, and in‑place upgrades.

Helm Support

NGINX Service Mesh includes the nginx-meshctl CLI tool for fully scriptable installation, upgrade, and removal as part of any CI/CD pipeline. But a CLI is not always the preferred approach for managing Kubernetes services. NGINX Service Mesh Release 1.1.0 adds support for Helm, a popular and well‑supported tool for automating the creation, configuration, packaging, and deployment of applications and services to Kubernetes.
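
Before turning to Helm, here is a minimal sketch of the scripted CLI workflow (flags and defaults vary by release, so consult the nginx-meshctl reference for your version; the nginx-mesh namespace is the assumed default):

# Deploy the mesh from a CI/CD job
$ nginx-meshctl deploy

# Check that the control plane is up (assuming the default nginx-mesh namespace)
$ kubectl get pods -n nginx-mesh

# Tear the mesh down again
$ nginx-meshctl remove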

To use Helm with NGINX Service Mesh, first add the Helm repository:

$ helm repo add nginx-service-mesh https://helm.nginx.com/nginx-service-mesh
$ helm repo update
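
With the repository added, installing the mesh is a single chart install. A minimal sketch, assuming the chart is named nginx-service-mesh and the control plane lives in the default nginx-mesh namespace (check the chart documentation for current values):

$ helm install nsm nginx-service-mesh/nginx-service-mesh \
      --namespace nginx-mesh --create-namespace

An in‑place upgrade to a newer release would then be a matter of running helm upgrade against a newer chart version.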

#blog #tech #microservices #kubernetes

Autumn Blick

NGINX Announces Eight Solutions that Let Developers Run Safely with Scissors

Technology is hard. As technologists, we like it that way, I think. It’s built‑in job security, right? Well, unfortunately, the modern application world has become unproductively hard. We need to make it easier.

That’s why I like describing the current developer paradox as the need to run safely with scissors.

NGINX Balances Developer Choice with Infrastructure Guardrails

Running with scissors is a simple metaphor for the admittedly difficult ask we make of software engineers. Developers need to run. Time to market and feature velocity are critical to the success of digital businesses. As a result, we don’t want to encumber developers with processes or technology choices that slow them down. Instead, we empower them to pick tools and stacks that let them deliver code to customers as quickly as possible.

But there’s a catch. In the world of fast releases, multiple daily (or hourly or minutely!) changes, and fail‑fast development, we risk introducing application downtime into digital experiences – that risk is the metaphorical scissors that make it dangerous to run fast. On some level we know it’s wrong to make developers run with scissors. But the speed upside trumps the downtime downside.

That frames the dilemma of our era: we need our developers to run with scissors, but we don’t want anybody to get hurt. Is there a solution?

At NGINX, the answer is “yes”. I’m excited to announce eight new or significantly enhanced solutions built to unleash developer speed without sacrificing the governance, visibility, and control infrastructure teams require.

Load Balancing and Security DNS Solutions Empower Self‑Service

As my colleague, Gus Robertson, eloquently points out in his recent blog The Essence of Sprint Is Speed, self‑service is an important part of developer empowerment. He talks about developers as the engines of digital transformation. And if they’re not presented with easy-to-use, capable tools, they take matters into their own hands. The result is shadow IT and significant infrastructure risk.

Self‑service turns this on its head. It provides infrastructure teams with a way to release the application delivery and security technologies that developers need for A/B, canary, blue‑green, and circuit‑breaker patterns. But it does so within guardrails that provide the consistency, reliability, and security that keep your apps running once they are in production.

#blog #news #opinion #red hat #nginx controller #nginx app protect #nginx sprint 2020 #nginx ingress controller #nginx service mesh #f5 dns cloud services #nginx analytics cloud service

Autumn Blick

Introducing NGINX Service Mesh

We are pleased to introduce a development release of NGINX Service Mesh (NSM), a fully integrated lightweight service mesh that leverages a data plane powered by NGINX Plus to manage container traffic in Kubernetes environments. NSM is available for free download. We hope you will try it out in your development and test environments, and look forward to your feedback on GitHub.

Adopting microservices methodologies comes with challenges as deployments scale and become more complex. Communication between services is intricate, debugging problems can be harder, and more services mean more resources to manage.

NSM addresses these challenges by enabling you to centrally provision:

  • Security – Security is more critical now than ever. Data breaches can cost organizations millions of dollars every year in lost revenue and reputation. NSM ensures all communication is mTLS‑encrypted so that there is no sensitive data on the wire for hackers to steal. Access controls enable you to define policies about which services can talk to each other.
  • Traffic management – When deploying a new version of an application, you might want to limit the amount of traffic it receives at first, in case there is a bug. With NSM’s intelligent container traffic management you can specify policies that limit traffic to new services and slowly increase it over time; a sketch of such a policy appears after this list. Features like rate limiting and circuit breakers give you full control over the traffic flowing through your services.
  • Visualization – Managing thousands of services can be a debugging and visibility nightmare. NSM helps to rein that in with a built‑in Grafana dashboard displaying the full suite of metrics available in NGINX Plus. In addition, the OpenTracing integration enables fine‑grained transaction tracing.
  • Hybrid deployments – If your enterprise is like most, your entire infrastructure doesn’t run in Kubernetes. NSM ensures legacy applications are not left out. With the NGINX Kubernetes Ingress Controller integration, legacy services can communicate with mesh services and vice versa.
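
NSM supports the Service Mesh Interface (SMI) traffic specs, so the gradual rollout described above can be expressed as an SMI TrafficSplit. A minimal sketch, with illustrative service names (the SMI apiVersion varies by mesh release):

$ kubectl apply -f - <<EOF
apiVersion: split.smi-spec.io/v1alpha3
kind: TrafficSplit
metadata:
  name: my-app-canary        # hypothetical name
spec:
  service: my-app            # root service that clients address
  backends:
  - service: my-app-v1       # existing version keeps most traffic
    weight: 90
  - service: my-app-v2       # new version receives a small share
    weight: 10
EOF

Raising the new version’s weight over time shifts traffic gradually, with no change to application code.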

NSM secures applications in a zero‑trust environment by seamlessly applying encryption and authentication to container traffic. It delivers observability and insights into transactions to help organizations deploy applications and troubleshoot problems quickly and accurately. It also provides fine‑grained traffic control, enabling DevOps teams to deploy and optimize application components while empowering Dev teams to build and easily connect their distributed applications.

What Is NGINX Service Mesh?

NSM consists of a unified data plane for east‑west (service-to-service) traffic and a native integration of the NGINX Plus Ingress Controller for north‑south traffic, managed by a single control plane.

The control plane is designed and optimized for the NGINX Plus data plane and defines the traffic management rules that are distributed to the NGINX Plus sidecars.
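
In practice, workloads join the mesh when the NGINX Plus sidecar is injected into their pods. As a rough sketch of enabling automatic injection for a namespace (the label name below is an assumption; the exact mechanism depends on the NSM release, so consult the NSM documentation):

# Label name is an assumption for illustration; check the NSM docs for the real one
$ kubectl label namespace my-apps injector.nsm.nginx.com/auto-inject=enabled

# Re-create pods so the sidecars are injected
$ kubectl -n my-apps rollout restart deployment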

#blog #service mesh #nginx service mesh

Roberta Ward

From Service Mess to Service Mesh

Introduction

Over the last 10 years, the rapid adoption of microservices architecture has resulted in enterprises with hundreds (or sometimes even thousands) of services. With the growth of containerization technologies like Docker and Kubernetes, microservice patterns have seen the strongest growth, resulting in a complex dependency matrix between these microservices. Monitoring, supporting, and maintaining these services has become a challenge, so most enterprises have invested in some kind of microservices management tool.

This article will explore some of the common aspects of microservice management. Then we’ll take a closer look at the centralized gateway pattern and its limitations (most enterprises started with, or still use, this pattern). Then we will look into a newer pattern called the “service mesh”, which has gained a lot of attention in the last 3–4 years and is often also referred to as the “sidecar proxy” pattern. So let’s get started!

Microservices Management

As enterprises build more and more microservices, it’s becoming clear that some aspects are common across all of them, so it makes sense to provide a common platform for managing those aspects. Below are some of the key ones:

**Service Registration and Discovery:** A common place to register, document, search, and discover microservices.

**Service Version Management:** Ability to run multiple versions of a microservice.

**Authentication and Authorization:** Handle authentication and authorization, including Mutual TLS (mTLS), between services.

**Service Observability:** Ability to monitor end-to-end traffic between services and response times, and to quickly identify failures and bottlenecks.

**Rate Limiting:** Define threshold limits for the traffic that services can handle.

**Circuit Breaker:** Ability to configure and introduce a circuit breaker in failure scenarios (to avoid flooding downstream services with requests).

**Retry Logic:** Ability to configure and introduce retry logic dynamically in services.

So it’s a good idea to build these concerns into a common framework or service management tool, so that microservice development teams don’t have to build them into each service themselves.
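
To make one of these concerns concrete: in a mesh such as Istio, the retry logic described above is configured declaratively rather than coded into each service. A minimal sketch, assuming a hypothetical reviews service (names are illustrative):

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews              # hypothetical service
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
    retries:
      attempts: 3            # retry a failed call up to 3 times
      perTryTimeout: 2s      # give each attempt 2 seconds
      retryOn: 5xx           # retry only on server errors
EOF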

#service-mesh #istio-service-mesh #microservices #gateway-service #envoy-proxy

Brain Crist

Deploying Citrix ADC with Service Mesh on Rancher

Introduction

As a network of microservices changes and grows, the interactions between the services can be difficult to manage and understand. That’s why it’s handy to have a service mesh as a separate infrastructure layer. A service mesh is an approach to running microservices at scale: it handles traffic routing and termination, monitoring and tracing, service delivery, load balancing, circuit breaking, and mutual authentication. A service mesh makes these capabilities part of the underlying infrastructure layer, eliminating the need for developers to write specific code to enable them.

Istio is a popular open-source service mesh that is built into the Rancher Kubernetes management platform. This integration allows developers to focus on their business logic and leave the rest to Kubernetes and Istio.

Citrix ADC is a comprehensive application delivery and load balancing solution for monolithic and microservices-based applications. Its advanced traffic management capabilities enhance application performance and provide comprehensive security. Citrix ADC integrates with Istio as an ingress gateway to the service mesh environment and as a sidecar proxy to control inter-microservice communication. This integration allows you to tightly secure and optimize traffic into and within your microservice-based application environment. Citrix ADC’s Ingress deployment is configured as a load balancer for your Kubernetes services. As a sidecar proxy, Citrix ADC handles service-to-service communication and makes this communication reliable, secure, observable, and manageable.
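
Citrix publishes Helm charts for these deployment modes. As a rough sketch only (the repository URL and chart name below are assumptions; check Citrix’s documentation for current values), deploying Citrix ADC as the Istio ingress gateway might look like this:

# Repository URL and chart name are assumptions for illustration
$ helm repo add citrix https://citrix.github.io/citrix-helm-charts/
$ helm install citrix-ingress citrix/citrix-adc-istio-ingress-gateway \
      --namespace citrix-system --create-namespace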

#service mesh #rancher #istio service mesh #microservices and containers

Fannie Zemlak

Open Service Mesh — Microsoft’s SMI based Open Source Service Mesh Implementation

Microsoft’s Open Service Mesh (OSM) is an SMI‑compliant, lightweight service mesh run as an open source project. Backed by service-mesh partners including HashiCorp, Solo.io, and Buoyant, Microsoft introduced the Service Mesh Interface (SMI) last year; by providing a set of specification standards, it aims to help end users and software vendors navigate the myriad choices presented by service mesh technology. OSM can be considered a reference implementation of SMI, one that builds on existing service mesh components and concepts.

The Open Service Mesh data plane is architecturally based on the Envoy proxy and implements the go-control-plane xDS v3 API. Although Envoy comes with OSM by default, the use of standard interfaces allows OSM to be integrated with other xDS-compatible reverse proxies.

SMI follows in the footsteps of existing Kubernetes resources, like Ingress and NetworkPolicy, which define required interfaces rather than implementations, leaving providers to plug in their own products. The SMI specification likewise defines a set of common APIs that allow mesh providers to deliver their own implementations. This means mesh providers can either consume SMI APIs directly or build operators that translate SMI to their native APIs.
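
An SMI policy is just a Kubernetes resource. As a minimal sketch, a TrafficTarget that allows one workload to call another might look like this (names are illustrative, and the apiVersion varies across SMI releases):

$ kubectl apply -f - <<EOF
apiVersion: access.smi-spec.io/v1alpha2
kind: TrafficTarget
metadata:
  name: frontend-to-backend        # hypothetical policy name
  namespace: default
spec:
  destination:
    kind: ServiceAccount
    name: backend                  # workload receiving traffic
    namespace: default
  sources:
  - kind: ServiceAccount
    name: frontend                 # workload allowed to call it
    namespace: default
  rules:
  - kind: HTTPRouteGroup
    name: backend-routes           # HTTPRouteGroup defined separately
    matches:
    - all-routes
EOF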


SMI Implementation

With OSM, users can use SMI and Envoy on Kubernetes and get a simplified service-mesh implementation. The SMI ecosystem already has multiple providers, such as Istio, Linkerd, Consul Connect, and now Open Service Mesh. Some of them (Istio, Consul Connect) have implemented SMI compatibility using adapters, while others (OSM, Linkerd) consume the SMI APIs directly.
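
As a rough sketch of that simplified workflow using the osm CLI (the install and namespace subcommands exist in the OSM CLI, though flags and defaults vary by release; the namespace is illustrative):

# Install the OSM control plane into the cluster
$ osm install

# Enroll a namespace so its new pods get the Envoy sidecar
$ osm namespace add bookstore

# Restart workloads so the sidecar is injected
$ kubectl -n bookstore rollout restart deployment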

The OSM implementation is very similar to Linkerd’s, which also consumes SMI APIs directly without the need for an adapter (as Istio requires). One key difference is that OSM uses Envoy as its proxy and communication bus, whereas Linkerd uses linkerd2-proxy (Rust‑based and lighter than Envoy).

Architecture & Components

The OSM control plane comprises four core components, all implemented as a single controller entity (a Kubernetes pod/deployment). This is much lighter weight than older versions of Istio, which ran four separate control plane components (Istio 1.6 introduced istiod, which unifies all the control plane components into one binary).

OSM Architecture — Components

OSM Data Plane — Uses Envoy as its reverse proxy by default, like most other service mesh providers (Linkerd is unique here, with an ultralight transparent proxy written in Rust). While OSM ships with Envoy by default, the design uses interfaces (an interface type in Go is a kind of definition: it describes the exact methods that some other type must have), which enable integration with any xDS-compatible reverse proxy. The dynamic configuration of all the proxies is handled by the OSM controller using the Envoy xDS go-control-plane.

#service-mesh #istio-service-mesh #kubernetes #azure #microsoft