Luna Mosciski

Proxyless gRPC load balancing in Kubernetes

In this post, I will show how you can build proxyless load balancing for your gRPC services using the new xDS support.

You can find the complete code for this experiment in asishrs/proxyless-grpc-lb on github.com, an example repository demonstrating an xDS load balancer for a Go gRPC client in a Kubernetes cluster.

Why is load balancing in gRPC difficult?

If you are building gRPC-based applications, you may already be aware that gRPC uses HTTP/2 under the hood.

At a high level, I want you to understand two points.

  1. gRPC is built on HTTP/2, and HTTP/2 is designed to keep a single long-lived TCP connection, over which all requests are multiplexed.
  2. To do gRPC load balancing, we need to shift from connection balancing to request balancing: a connection-level (L4) balancer pins all of the multiplexed requests to whichever pod holds the connection, so only per-request balancing actually spreads the load.

What are the options?

Until recently, there were two options for load balancing gRPC requests in a Kubernetes cluster:

  • Headless service (see the sketch after this list)
  • Using a proxy (for example, Envoy, Istio, or Linkerd)
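For reference, the headless-service option keeps the balancing inside the client: a headless Service (clusterIP: None) makes cluster DNS return one record per pod, so the client can resolve every backend and round-robin across them. A minimal sketch in Go, with an assumed service name:

package main

import (
    "log"

    "google.golang.org/grpc"
)

func main() {
    // dns:/// resolution against a headless Service returns one address
    // per pod instead of a single virtual IP.
    conn, err := grpc.Dial(
        "dns:///my-service.default.svc.cluster.local:50051", // assumed name
        grpc.WithInsecure(),
        // Round-robin across the resolved pod addresses, i.e. balancing
        // per request on the client side, with no proxy in the path.
        grpc.WithDefaultServiceConfig(`{"loadBalancingConfig": [{"round_robin":{}}]}`),
    )
    if err != nil {
        log.Fatalf("dial failed: %v", err)
    }
    defer conn.Close()
    // ... create service stubs from conn as usual.
}

The catch is that the client only re-resolves DNS occasionally, so it can lag behind pod churn; that limitation is part of what makes the xDS option below attractive.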

Recently, gRPC announced support for xDS-based load balancing, and as of this writing, the gRPC team has added support in C-core, Java, and Go. This is an essential feature, as it opens a third option for load balancing in gRPC, and I will show how to use it in a Kubernetes cluster. gRPC will be moving from its original grpclb protocol to the new xDS protocol.

xDS API

The xDS API is a suite of APIs that is growing in popularity and evolving into a standard for configuring various kinds of data-plane software.

In the xDS API flow, the client uses the following main APIs:

  • Listener Discovery Service (LDS): Returns Listener resources. Used basically as a convenient root for the gRPC client’s configuration. Points to the RouteConfiguration.
  • Route Discovery Service (RDS): Returns RouteConfiguration resources. Provides data used to populate the gRPC service config. Points to the Cluster.
  • Cluster Discovery Service (CDS): Returns Cluster resources. Configures things like load balancing policy and load reporting. Points to the ClusterLoadAssignment.
  • Endpoint Discovery Service (EDS): Returns ClusterLoadAssignment resources. Configures the set of endpoints (backend servers) to load balance across and may tell the client to drop requests.
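To make the flow concrete, here is a minimal client-side sketch in Go; the target name hello-service, the port, and the management-server address are assumptions for illustration (the linked repository has the full setup). First, the client needs a bootstrap file, referenced through the GRPC_XDS_BOOTSTRAP environment variable, that points at the xDS management server:

xds_bootstrap.json

{
  "xds_servers": [
    {
      "server_uri": "xds-server.default.svc.cluster.local:18000",
      "channel_creds": [{ "type": "insecure" }]
    }
  ],
  "node": {
    "id": "grpc-client",
    "metadata": {}
  }
}

With that in place, the only code change on the client is the xds:/// target scheme plus a blank import that registers the xDS resolver and balancer:

package main

import (
    "context"
    "log"
    "time"

    "google.golang.org/grpc"
    _ "google.golang.org/grpc/xds" // registers the xds:/// resolver and balancer

    pb "google.golang.org/grpc/examples/helloworld/helloworld"
)

func main() {
    // The xds:/// scheme hands name resolution and load balancing over to
    // the LDS -> RDS -> CDS -> EDS flow described above.
    conn, err := grpc.Dial("xds:///hello-service:50051", grpc.WithInsecure())
    if err != nil {
        log.Fatalf("dial failed: %v", err)
    }
    defer conn.Close()

    ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    defer cancel()

    reply, err := pb.NewGreeterClient(conn).SayHello(ctx, &pb.HelloRequest{Name: "xds"})
    if err != nil {
        log.Fatalf("SayHello failed: %v", err)
    }
    log.Printf("reply: %s", reply.GetMessage())
}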

#grpc #xds #kubernetes #proxyless #load-balancing

Christa Stehr

50+ Useful Kubernetes Tools for 2020 - Part 2

Introduction

Last year, we provided a list of Kubernetes tools that proved so popular we have decided to curate another list of useful additions for working with the platform, among which are many tools that we personally use here at Caylent. Check out the original tools list in case you missed it.

According to a recent survey by StackRox, the dominance Kubernetes enjoys in the market continues to be reinforced, with 86% of respondents using it for container orchestration.

(State of Kubernetes and Container Security, 2020)

And as you can see below, more and more companies are jumping into containerization for their apps. If you’re among them, here are some tools to aid you going forward as Kubernetes continues its rapid growth.

(State of Kubernetes and Container Security, 2020)

#blog #tools #amazon elastic kubernetes service #application security #aws kms #botkube #caylent #cli #container monitoring #container orchestration tools #container security #containers #continuous delivery #continuous deployment #continuous integration #contour #developers #development #developments #draft #eksctl #firewall #gcp #github #harbor #helm #helm charts #helm-2to3 #helm-aws-secret-plugin #helm-docs #helm-operator-get-started #helm-secrets #iam #json #k-rail #k3s #k3sup #k8s #keel.sh #keycloak #kiali #kiam #klum #knative #krew #ksniff #kube #kube-prod-runtime #kube-ps1 #kube-scan #kube-state-metrics #kube2iam #kubeapps #kubebuilder #kubeconfig #kubectl #kubectl-aws-secrets #kubefwd #kubernetes #kubernetes command line tool #kubernetes configuration #kubernetes deployment #kubernetes in development #kubernetes in production #kubernetes ingress #kubernetes interfaces #kubernetes monitoring #kubernetes networking #kubernetes observability #kubernetes plugins #kubernetes secrets #kubernetes security #kubernetes security best practices #kubernetes security vendors #kubernetes service discovery #kubernetic #kubesec #kubeterminal #kubeval #kudo #kuma #microsoft azure key vault #mozilla sops #octant #octarine #open source #palo alto kubernetes security #permission-manager #pgp #rafay #rakess #rancher #rook #secrets operations #serverless function #service mesh #shell-operator #snyk #snyk container #sonobuoy #strongdm #tcpdump #tenkai #testing #tigera #tilt #vert.x #wireshark #yaml

Hal Sauer

Sample Load balancing solution with Docker and Nginx

Most of today’s business applications use load balancing to distribute traffic among different resources and avoid overloading any single resource.

One of the obvious advantages of a load-balancing architecture is increased availability and reliability of applications: when clients request resources from the backends, the load balancer sits between them and routes each request to the backend that best fits the routing criteria (least busy, healthiest, located in a given region, etc.).

There are a lot of routing criteria, but in this article we will focus on a fixed, weighted round-robin criterion, meaning each backend receives a fixed share of the traffic, which I think is rarely documented :).

To simplify, we will create two backend “applications” based on Flask Python files. We will use NGINX as a load balancer to distribute 60% of the traffic to application1 and 40% to application2.
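That 60/40 split maps directly onto NGINX upstream weights. Below is a minimal sketch of the configuration we are building toward; the upstream names app1 and app2, the file path, and port 5000 (Flask’s default) are assumptions at this point:

nginx/default.conf (mounted at /etc/nginx/conf.d/default.conf)

upstream loadbalancer {
    # Weighted round-robin: of every 5 requests, 3 go to app1 (60%)
    # and 2 go to app2 (40%).
    server app1:5000 weight=3;
    server app2:5000 weight=2;
}

server {
    listen 80;
    location / {
        # Forward every request to the weighted upstream group.
        proxy_pass http://loadbalancer;
    }
}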

Let’s start the coding; hereafter is the complete structure of our project:

app1/app1.py

from flask import Flask

app1 = Flask(__name__)

@app1.route('/')
def hello_world():
    return 'Salam alikom, this is App1 :) '

if __name__ == '__main__':
    # Bind to 0.0.0.0 so the app is reachable from outside its container.
    app1.run(debug=True, host='0.0.0.0')

app2/app2.py

from flask import Flask

app2 = Flask(__name__)

@app2.route('/')
def hello_world():
    return 'Salam alikom, this is App2 :) '

if __name__ == '__main__':
    app2.run(debug=True, host='0.0.0.0')

Then we have to dockerize both applications by adding a requirements.txt file. It will contain only the flask library, since we are using the python:3 image.
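A minimal sketch of those files, assuming the stock python:3 base image (app2 gets the same pair with app2.py in place of app1.py):

app1/requirements.txt

flask

app1/Dockerfile

FROM python:3
WORKDIR /app
# Flask is the only dependency.
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app1.py .
# app1.py binds to 0.0.0.0, so the app is reachable from outside the container.
CMD ["python", "app1.py"]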

#load-balancing #python-flask #docker-load-balancing #nginx #flask-load-balancing

Maud Rosenbaum

Kubernetes in the Cloud: Strategies for Effective Multi Cloud Implementations

Kubernetes is a highly popular container orchestration platform. Multi cloud is a strategy that leverages cloud resources from multiple vendors. Multi cloud strategies have become popular because they help prevent vendor lock-in and enable you to leverage a wide variety of cloud resources. However, multi cloud ecosystems are notoriously difficult to configure and maintain.

This article explains how you can leverage Kubernetes to reduce multi cloud complexities and improve stability, scalability, and velocity.

Kubernetes: Your Multi Cloud Strategy

Maintaining standardized application deployments becomes more challenging as your number of applications and the technologies they are based on increase. As environments, operating systems, and dependencies differ, management and operations require more effort and extensive documentation.

In the past, teams tried to get around these difficulties by creating isolated projects in the data center. Each project, including its configurations and requirements, was managed independently. This required accurately predicting performance and the number of users before deployment, and taking applications down to update operating systems or applications. There were many chances for error.

Kubernetes can provide an alternative to the old method, enabling teams to deploy applications independent of the environment in containers. This eliminates the need to create resource partitions and enables teams to operate infrastructure as a unified whole.

In particular, Kubernetes makes it easier to deploy a multi cloud strategy since it enables you to abstract away service differences. With Kubernetes deployments you can work from a consistent platform and optimize services and applications according to your business needs.

The Compelling Attributes of Multi Cloud Kubernetes

Multi cloud Kubernetes can provide multiple benefits beyond a single cloud deployment. Below are some of the most notable advantages.

Stability

In addition to the built-in scalability, fault tolerance, and auto-healing features of Kubernetes, multi cloud deployments can provide service redundancy. For example, you can mirror applications or split microservices across vendors. This reduces the risk of a vendor-related outage and enables you to create failovers.

#kubernetes #multicloud-strategy #kubernetes-cluster #kubernetes-top-story #kubernetes-cluster-install #kubernetes-explained #kubernetes-infrastructure #cloud

Configuring NGINX Plus as an External Load Balancer for Red Hat OCP and Kubernetes

In the world of container orchestration, there are two names that we run into all the time: Red Hat OpenShift Container Platform (OCP) and Kubernetes. OpenShift, as you probably know, uses Kubernetes underneath, as do many of the other container orchestration platforms. Routing external traffic into a Kubernetes or OpenShift environment has always been a little challenging, in two ways:

  • Exposing services deployed inside Kubernetes to the outside world. The solution is an Ingress controller like the NGINX Plus Ingress Controller for Kubernetes. You can read more about it in our blog Getting Started with NGINX Ingress Operator on Red Hat OpenShift.
  • Load balancing traffic across your Kubernetes nodes. To solve this problem, organizations usually choose an external hardware or virtual load balancer or a cloud‑native solution. However, NGINX Plus can also be used as the external load balancer, improving performance and simplifying your technology investment.

In this blog, I focus on how to solve the second problem using NGINX Plus in a way that is simple, efficient, and enables your App Dev teams to manage both the Ingress configuration inside Kubernetes and the external load balancer configuration outside. As a reference architecture to help you get started, I’ve created the nginx-lb-operator project in GitHub – the NGINX Load Balancer Operator (NGINX-LB-Operator) is an Ansible‑based Operator for NGINX Controller created using the Red Hat Operator Framework and SDK. NGINX-LB-Operator drives the declarative API of NGINX Controller to update the configuration of the external NGINX Plus load balancer when new services are added, Pods change, or deployments scale within the Kubernetes cluster.

Please note that NGINX-LB-Operator is not covered by your NGINX Plus or NGINX Controller support agreement. You can report bugs or request troubleshooting assistance on GitHub.

Kubernetes and NGINX Technologies – A Review

NGINX-LB-Operator relies on a number of Kubernetes and NGINX technologies, so I’m providing a quick review to get us all on the same page. If you’re already familiar with them, feel free to skip to The NGINX Load Balancer Operator.

Kubernetes Controllers and Operators

Kubernetes is an orchestration platform built around a loosely coupled central API. The API provides a collection of resource definitions, along with Controllers (which typically run as Pods inside the platform) to monitor and manage those resources. The Kubernetes API is extensible, and Operators (a type of Controller) can be used to extend the functionality of Kubernetes.

  • Controllers – A core part of the Kubernetes system. They create “watches” for specific Kubernetes resources and perform the necessary steps to reach the desired state of each resource as it changes. In customer conversations, the most common Kubernetes Controller discussed is the “Ingress Controller.”
  • Operators – Custom controllers which define and make use of custom resource definitions (CRDs) to manage applications and their components (a sketch of such a CRD follows this list).
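As a concrete illustration, an Operator typically starts by registering a CRD like the one below and then watches for instances of the new kind. The group, kind, and field names here are invented for the example; they are not NGINX-LB-Operator’s actual schema:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # The required naming convention is <plural>.<group>.
  name: loadbalancers.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: LoadBalancer
    plural: loadbalancers
    singular: loadbalancer
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                # An illustrative field the controller would reconcile on.
                upstreamPort:
                  type: integer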

#blog #tech #kubernetes #nginx kubernetes ingress controller #red hat openshift #nginx load balancer operator