Joel Kelly


Kuma: Service Mesh and the Future of Application Connectivity

Service mesh is the future of application connectivity. It delivers immediate value to any architecture by increasing the security, reliability, and observability of application traffic. However, understanding and deploying a service mesh in production can be a daunting undertaking. This session covers service mesh from the basic concepts to the most advanced topics and talks practically about how to implement a service mesh in production.

In this webinar, Marco dives into how service mesh can be easily used to solve common challenges in your application architecture from day one. He will cover:

Service mesh explained: concepts, benefits and pitfalls.
Adding security, reliability, and observability to service traffic with a mesh.
Live demo of deploying a service mesh in production in minutes on Kubernetes.

These concepts will be demoed live using Kuma, an open-source control plane built on top of Envoy.

#kubernetes #devops

Roberta Ward

From Service Mess to Service Mesh

Introduction

Over the last 10 years, the rapid adoption of microservices architecture has left enterprises with hundreds (or sometimes even thousands) of services. With the growth of containerization technologies like Docker and Kubernetes, microservice patterns have seen the strongest growth, resulting in a complex dependency matrix between these microservices. Monitoring, supporting, and maintaining these services has become a challenge, so most enterprises have invested in some kind of microservices management tool.

This article will explore some of the common aspects of microservice management. Then we’ll take a closer look at the centralized gateway pattern and its limitations (most enterprises started with, or still use, this pattern). Finally, we will look at a newer pattern called “Service Mesh”, often also referred to as the “sidecar proxy” pattern, which has gained a lot of attention in the last 3–4 years. So let’s get started!

Microservices Management

As enterprises build more and more microservices, it becomes clear that certain aspects are common to all of them, so it makes sense to provide a shared platform for managing those aspects. Below are some of the key ones:

**Service Registration and Discovery:** A common place to register, document, search, and discover microservices.

**Service Version Management:** Ability to run multiple versions of a microservice.

**Authentication and Authorization:** Handle authentication and authorization, including mutual TLS (mTLS), between services.

**Service Observability:** Ability to monitor end-to-end traffic between services and response times, and to quickly identify failures and bottlenecks.

**Rate Limiting:** Define threshold limits on the traffic that services can handle.

**Circuit Breaker:** Ability to configure and introduce a circuit breaker in failure scenarios, to avoid flooding downstream services with requests.

**Retry Logic:** Ability to configure and introduce retry logic dynamically in services (a minimal sketch of retry and circuit-breaker logic follows after this list).

So it’s a good idea to build these concerns into a common framework or service management tool, so that microservice development teams don’t have to build them into each service themselves.
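To make the last two aspects concrete, here is a minimal, purely illustrative Go sketch of the kind of retry and circuit-breaker logic each team would otherwise hand-roll inside its own service; the endpoint, thresholds, and timings are hypothetical.

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"time"
)

// breaker is a deliberately simplified circuit breaker: after maxFailures
// consecutive errors it stays open for cooldown before allowing new calls.
type breaker struct {
	failures    int
	maxFailures int
	openUntil   time.Time
	cooldown    time.Duration
}

func (b *breaker) call(fn func() error) error {
	if time.Now().Before(b.openUntil) {
		return errors.New("circuit open: refusing call to protect downstream service")
	}
	if err := fn(); err != nil {
		b.failures++
		if b.failures >= b.maxFailures {
			b.openUntil = time.Now().Add(b.cooldown)
			b.failures = 0
		}
		return err
	}
	b.failures = 0
	return nil
}

func main() {
	b := &breaker{maxFailures: 3, cooldown: 10 * time.Second}

	// Retry logic: up to 3 attempts with a fixed back-off, each guarded by the breaker.
	for attempt := 1; attempt <= 3; attempt++ {
		err := b.call(func() error {
			// Hypothetical downstream service; replace with a real endpoint.
			resp, err := http.Get("http://inventory-service.local/health")
			if err != nil {
				return err
			}
			defer resp.Body.Close()
			if resp.StatusCode >= 500 {
				return fmt.Errorf("downstream returned %d", resp.StatusCode)
			}
			return nil
		})
		if err == nil {
			fmt.Println("call succeeded")
			return
		}
		fmt.Println("attempt", attempt, "failed:", err)
		time.Sleep(500 * time.Millisecond)
	}
}
```

A service mesh lifts exactly this kind of policy out of application code and into proxy configuration, so it can be tuned or changed without redeploying the service.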

#service-mesh #istio-service-mesh #microservices #gateway-service #envoy-proxy

PostgreSQL Connection Pooling: Part 4 – PgBouncer vs. Pgpool-II

In our previous posts in this series, we spoke at length about using PgBouncer and Pgpool-II, the connection pool architecture, and the pros and cons of leveraging one for your PostgreSQL deployment. In this final post, we put them head-to-head in a detailed feature comparison and compare the results of PgBouncer vs. Pgpool-II performance for your PostgreSQL hosting!

The bottom line – Pgpool-II is a great tool if you need load-balancing and high availability. Connection pooling is almost a bonus you get alongside. PgBouncer does only one thing, but does it really well. If the objective is to limit the number of connections and reduce resource consumption, PgBouncer wins hands down.

It is also perfectly fine to use both PgBouncer and Pgpool-II in a chain – you can have a PgBouncer to provide connection pooling, which talks to a Pgpool-II instance that provides high availability and load balancing. This gives you the best of both worlds!
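From the application’s point of view this chain is transparent: the app simply points its connection string at PgBouncer instead of PostgreSQL. Here is a minimal Go sketch, assuming PgBouncer is listening on its default port 6432 on localhost and the standard lib/pq driver is available; the host, database, and credentials are placeholders.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // PostgreSQL driver; PgBouncer speaks the same wire protocol
)

func main() {
	// Port 6432 is PgBouncer's default listen_port; 5432 would hit PostgreSQL directly.
	dsn := "host=127.0.0.1 port=6432 user=app dbname=appdb password=secret sslmode=disable"

	db, err := sql.Open("postgres", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var version string
	if err := db.QueryRow("SELECT version()").Scan(&version); err != nil {
		log.Fatal(err)
	}
	fmt.Println("connected via PgBouncer to:", version)
}
```

PgBouncer in turn forwards the session to Pgpool-II (or straight to PostgreSQL), so the pooling and load-balancing layers stay invisible to the application.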

[Diagram: Using PgBouncer with Pgpool-II for connection pooling]


Performance Testing

While PgBouncer may seem to be the better option in theory, theory can often be misleading. So, we pitted the two connection poolers head-to-head, using the standard pgbench tool, to see which one provides better transactions per second throughput through a benchmark test. For good measure, we ran the same tests without a connection pooler too.

Testing Conditions

All of the PostgreSQL benchmark tests were run under the following conditions:

  1. Initialized pgbench using a scale factor of 100.
  2. Disabled auto-vacuuming on the PostgreSQL instance to prevent interference.
  3. No other workload was working at the time.
  4. Used the default pgbench script to run the tests.
  5. Used default settings for both PgBouncer and Pgpool-II, except max_children*. All PostgreSQL limits were also set to their defaults.
  6. All tests ran as a single thread, on a single-CPU, 2-core machine, for a duration of 5 minutes.
  7. Forced pgbench to create a new connection for each transaction using the -C option. This emulates modern web application workloads and is the whole reason to use a pooler! (A short code sketch of this per-transaction connection cost follows below.)

We ran each iteration for 5 minutes to ensure any noise averaged out. Here is how the middleware was installed:
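The cost that the -C option exposes is easy to reproduce outside pgbench. Below is a rough Go sketch (the DSN and query are placeholders, and the absolute numbers will vary by environment) that contrasts opening a fresh connection for every query with reusing a single pool:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	_ "github.com/lib/pq" // PostgreSQL driver
)

const dsn = "host=127.0.0.1 port=5432 user=app dbname=appdb sslmode=disable" // placeholder DSN
const iterations = 200

func main() {
	// Fresh connection per query: roughly what pgbench -C emulates.
	start := time.Now()
	for i := 0; i < iterations; i++ {
		db, err := sql.Open("postgres", dsn) // sql.Open is lazy; the Exec below forces a real connection
		if err != nil {
			log.Fatal(err)
		}
		if _, err := db.Exec("SELECT 1"); err != nil {
			log.Fatal(err)
		}
		db.Close() // tears the connection down again
	}
	fmt.Println("new connection per query:", time.Since(start))

	// Shared pool: connections are established once and reused across queries.
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	start = time.Now()
	for i := 0; i < iterations; i++ {
		if _, err := db.Exec("SELECT 1"); err != nil {
			log.Fatal(err)
		}
	}
	fmt.Println("reused pool:", time.Since(start))
}
```

Pointing the same program at a PgBouncer port instead of PostgreSQL directly should bring the per-query case much closer to the reused-pool case, which is essentially what the benchmarks below measure.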

  • For PgBouncer, we installed it on the same box as the PostgreSQL server(s). This is the configuration we use in our managed PostgreSQL clusters. Since PgBouncer is a very lightweight process, installing it on the box has no impact on overall performance.
  • For Pgpool-II, we tested both when the Pgpool-II instance was installed on the same machine as PostgreSQL (on box column), and when it was installed on a different machine (off box column). As expected, the performance is much better when Pgpool-II is off the box as it doesn’t have to compete with the PostgreSQL server for resources.

Throughput Benchmark

Here are the transactions per second (TPS) results for each scenario, across a range of client counts:

#database #developer #performance #postgresql #connection control #connection pooler #connection pooler performance #connection queue #high availability #load balancing #number of connections #performance testing #pgbench #pgbouncer #pgbouncer and pgpool-ii #pgbouncer vs pgpool #pgpool-ii #pooling modes #postgresql connection pooling #postgresql limits #resource consumption #throughput benchmark #transactions per second #without pooling

Rahim Makhani


Get the best web app for your business’s future

A web app is application software that runs on a web server. You can use a web app by opening it in a web browser, finding it through Google or any other search engine, or adding a shortcut to it on your smartphone.

A web app for your business helps you reach new customers, lets them learn about your firm and the services you provide, and lets you gather feedback and ratings for your organization. It can also help you advertise your app to a wider audience.

Do you want to develop a web app for your business? Then it would help to collaborate with Nevina Infotech, the best web application development company, whose dedicated developers will help you build a unique web app.

#web application development company #web application development services #web app development company #custom web application development company #web app development services #custom web application development services

Christa Stehr

50+ Useful Kubernetes Tools for 2020 - Part 2

Introduction

Last year, we provided a list of Kubernetes tools that proved so popular that we have decided to curate another list of useful additions for working with the platform, among which are many tools that we personally use here at Caylent. Check out the original tools list here in case you missed it.

According to a recent survey by StackRox, the dominance Kubernetes enjoys in the market continues to be reinforced, with 86% of respondents using it for container orchestration.

(State of Kubernetes and Container Security, 2020)

And as you can see below, more and more companies are jumping into containerization for their apps. If you’re among them, here are some tools to aid you going forward as Kubernetes continues its rapid growth.

(State of Kubernetes and Container Security, 2020)

#blog #tools #amazon elastic kubernetes service #application security #aws kms #botkube #caylent #cli #container monitoring #container orchestration tools #container security #containers #continuous delivery #continuous deployment #continuous integration #contour #developers #development #developments #draft #eksctl #firewall #gcp #github #harbor #helm #helm charts #helm-2to3 #helm-aws-secret-plugin #helm-docs #helm-operator-get-started #helm-secrets #iam #json #k-rail #k3s #k3sup #k8s #keel.sh #keycloak #kiali #kiam #klum #knative #krew #ksniff #kube #kube-prod-runtime #kube-ps1 #kube-scan #kube-state-metrics #kube2iam #kubeapps #kubebuilder #kubeconfig #kubectl #kubectl-aws-secrets #kubefwd #kubernetes #kubernetes command line tool #kubernetes configuration #kubernetes deployment #kubernetes in development #kubernetes in production #kubernetes ingress #kubernetes interfaces #kubernetes monitoring #kubernetes networking #kubernetes observability #kubernetes plugins #kubernetes secrets #kubernetes security #kubernetes security best practices #kubernetes security vendors #kubernetes service discovery #kubernetic #kubesec #kubeterminal #kubeval #kudo #kuma #microsoft azure key vault #mozilla sops #octant #octarine #open source #palo alto kubernetes security #permission-manager #pgp #rafay #rakess #rancher #rook #secrets operations #serverless function #service mesh #shell-operator #snyk #snyk container #sonobuoy #strongdm #tcpdump #tenkai #testing #tigera #tilt #vert.x #wireshark #yaml

Fannie Zemlak

Open Service Mesh — Microsoft’s SMI-based Open Source Service Mesh Implementation

Microsoft’s Open Service Mesh (OSM) is an SMI-compliant, lightweight service mesh run as an open source project. Microsoft introduced the Service Mesh Interface (SMI) last year, backed by service-mesh partners including HashiCorp, Solo.io, and Buoyant, with the goal of helping end users and software vendors navigate the myriad choices presented by service mesh technology by providing a set of specification standards. OSM can be considered a reference implementation of SMI, one that builds on existing service mesh components and concepts.

The Open Service Mesh data plane is architecturally based on the Envoy proxy and implements the go-control-plane xDS v3 API. Although OSM ships with Envoy by default, its use of standard interfaces allows it to be integrated with other xDS-compatible reverse proxies.

SMI follows in the footsteps of existing Kubernetes resources like Ingress and NetworkPolicy, which likewise define the required interfaces without providing an implementation, leaving providers to plug in their own products. The SMI specification defines a set of common APIs that allow mesh providers to deliver their own implementations, so mesh providers can either consume the SMI APIs directly or build operators that translate SMI to their native APIs.


SMI Implementation

With OSM, users can use SMI and Envoy on Kubernetes and get a simplified service mesh implementation. The SMI ecosystem already has multiple providers, such as Istio, Linkerd, Consul Connect, and now Open Service Mesh. Some of them implement SMI compatibility through adapters (Istio, Consul Connect), while others (OSM, Linkerd) consume the SMI APIs directly.

The OSM implementation is very similar to Linkerd, which also consumes the SMI APIs directly without the need for an adapter like Istio’s. One key difference is that OSM uses Envoy as its proxy and communication bus, whereas Linkerd uses linkerd2-proxy (Rust-based and lighter than Envoy).

Architecture & Components

The OSM control plane comprises four core components, all implemented as a single controller entity (a Kubernetes pod/deployment). This is much lighter weight than older versions of Istio, which had four separate control plane components (Istio 1.6 introduced istiod, which unifies all the control plane components into one binary).


OSM Architecture — Components

OSM Data Plane — Uses Envoy as the reverse proxy by default, similar to most other service mesh providers (Linkerd is unique here, using an ultralight transparent proxy written in Rust). While OSM ships with Envoy by default, the design uses interfaces (an interface type in Go is a definition: it describes the exact set of methods that some other type must implement), which enables integration with any xDS-compatible reverse proxy. The dynamic configuration of all the proxies is handled by the OSM controller using the Envoy xDS go-control-plane.
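To make the interface remark concrete, here is a purely illustrative Go sketch (the interface and type names are invented for this example, not OSM’s actual API) of how a control plane can program against an abstraction that any xDS-compatible proxy integration could satisfy:

```go
package main

import "fmt"

// ProxyConfigurator is a hypothetical interface: any type that implements
// these methods can serve as the data-plane proxy integration.
type ProxyConfigurator interface {
	Name() string
	ApplyConfig(clusters []string) error
}

// envoyProxy is one possible implementation, mirroring OSM's default choice of Envoy.
type envoyProxy struct{}

func (e envoyProxy) Name() string { return "envoy" }

func (e envoyProxy) ApplyConfig(clusters []string) error {
	// A real control plane would push xDS resources here; this sketch just prints them.
	fmt.Println("pushing xDS config for clusters:", clusters)
	return nil
}

// reconcile only knows about the interface, so swapping in another
// xDS-compatible proxy means providing another implementation, not changing this code.
func reconcile(p ProxyConfigurator, clusters []string) {
	fmt.Println("configuring proxy:", p.Name())
	if err := p.ApplyConfig(clusters); err != nil {
		fmt.Println("failed:", err)
	}
}

func main() {
	reconcile(envoyProxy{}, []string{"bookstore", "bookbuyer"})
}
```

Swapping Envoy for another xDS-compatible proxy then means supplying another implementation of the same interface, without touching the reconciliation logic.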

#service-mesh #istio-service-mesh #kubernetes #azure #microsoft