Colleen Little

Scaling Kubernetes: Intro to Kubernetes-based event-driven autoscaling (KEDA)

This blog series covers open source components that can be used on top of existing Kubernetes primitives to help scale Kubernetes clusters as well as applications.

Here is a breakdown of the blog posts in this series:

  • Part 1 (this post) will cover basic KEDA concepts
  • Part 2 will showcase KEDA auto-scaling in action with a practical example
  • Part 3 will introduce Virtual Kubelet
  • Part 4 will conclude the series with another example to demonstrate how KEDA and Virtual Kubelet can be combined to deliver scalability

In this post, you will get an overview of KEDA, its architecture, and how it works behind the scenes. This will serve as a good foundation for diving into the next post, where you will explore KEDA hands-on with a practical example.

KEDA (Kubernetes-based Event-driven Autoscaling) is an open source component developed by Microsoft and Red Hat to allow any Kubernetes workload to benefit from the event-driven architecture model. It is an official CNCF project and currently part of the CNCF Sandbox. KEDA works by horizontally scaling a Kubernetes Deployment or Job. It is built on top of the Kubernetes Horizontal Pod Autoscaler and lets you leverage External Metrics in Kubernetes to define autoscaling criteria based on information from any event source, such as Kafka topic lag, the length of an Azure Queue, or metrics obtained from a Prometheus query.

You can choose from a list of pre-defined triggers (also known as Scalers), which act as a source of events and metrics for autoscaling a Deployment (or a Job). These can be thought of as adapters that contain the necessary logic to connect to the external source (e.g., Kafka, Redis, Azure Queue) and fetch the required metrics to drive autoscaling operations. KEDA follows the Kubernetes Operator model and defines Custom Resource Definitions, such as ScaledObject, that you use to configure autoscaling properties.
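
To make this concrete, here is a minimal sketch of a ScaledObject that scales a Deployment based on Kafka consumer lag. All names (my-consumer, my-group, orders, kafka:9092) are hypothetical placeholders, and the field layout follows the KEDA v2 schema, so check the documentation for the version you install:

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: kafka-consumer-scaler
    spec:
      scaleTargetRef:
        name: my-consumer              # hypothetical Deployment to scale
      minReplicaCount: 0               # allow scale-to-zero when there is no lag
      maxReplicaCount: 10
      triggers:
        - type: kafka
          metadata:
            bootstrapServers: kafka:9092   # hypothetical broker address
            consumerGroup: my-group
            topic: orders
            lagThreshold: "50"             # target lag per replica

Applying this manifest is all that is needed; KEDA creates and drives the underlying Horizontal Pod Autoscaler for you.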

Pluggability is built into KEDA, and it can be extended to support new triggers/scalers.

At a high level, KEDA does two things to drive the autoscaling process:

  • Provides a component that activates and deactivates a Deployment, scaling it to and from zero when there are no events
  • Provides a Kubernetes Metrics Server to expose event data (e.g., queue length, topic lag); you can query this API directly, as shown below
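
As a quick sanity check (a usage sketch, assuming KEDA is already installed in the cluster), you can query the external metrics API that KEDA's metrics server registers with Kubernetes; the jq pipe is optional and only pretty-prints the output:

    # List the external metrics served by KEDA's metrics adapter
    kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1" | jq .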

KEDA uses three components to fulfill its tasks:

  • Scaler: Connects to an external component (e.g., Kafka) and fetches metrics (e.g., topic lag)
  • Operator (Agent): Responsible for “activating” a Deployment and creating a Horizontal Pod Autoscaler object
  • Metrics Adapter: Presents metrics from external sources to the Horizontal Pod Autoscaler
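
To see the Operator and the Metrics Adapter running as distinct workloads, you can list KEDA's own pods. This sketch assumes KEDA was installed into a namespace named keda (a common default); exact pod names vary by version and installation method:

    # KEDA's components run as ordinary Deployments in the cluster
    kubectl get pods -n keda

Scalers, by contrast, typically do not run as separate pods; their logic is built into the Operator and the Metrics Adapter.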

#architecture #kubernetes #open-source #docker #cloud

Christa Stehr

50+ Useful Kubernetes Tools for 2020 - Part 2

Introduction

Last year, we provided a list of Kubernetes tools that proved so popular that we decided to curate another list of useful additions for working with the platform, among which are many tools that we personally use here at Caylent. Check out the original tools list here in case you missed it.

According to a recent survey by StackRox, the dominance Kubernetes enjoys in the market continues to be reinforced, with 86% of respondents using it for container orchestration.

(State of Kubernetes and Container Security, 2020)

And as the survey also shows, more and more companies are jumping into containerization for their apps. If you’re among them, here are some tools to aid you going forward as Kubernetes continues its rapid growth.

(State of Kubernetes and Container Security, 2020)

Iliana Welch

Kubernetes-Based Event-Driven Autoscaling (KEDA)

Overview

Implement event-driven processing on Kubernetes using Kubernetes-Based Event-Driven Autoscaling (KEDA).

The IT industry is now moving towards Event-Driven Computing. It is becoming popular due to its ability to keep users engaged with an app. Popular games like PUBG and COD use this approach to give users quick, accurate responses, which results in a better user experience. But what is Event-Driven Computing, and what is the role of Serverless Architecture in it?

Event-Driven Computing is a computing model in which programs perform their jobs in response to the occurrence of events, such as user actions (mouse clicks, key presses), sensor output, or messages from other processes or threads. Such workloads require autoscaling based on the events triggered, and serverless platforms provide exactly that. Serverless does not mean running code without a server; the name is used because users don’t have to rent or buy servers for the background code to run. The background code is entirely managed by a third party (cloud providers).


KEDA (Kubernetes-Based Event-Driven Autoscaling)

Event-driven and serverless architecture are defining a new generation of apps and microservices. Containers are no exception; these containerized workloads and services are managed using an open-source tool called Kubernetes. Autoscaling is an integral part of event-driven and serverless architecture, and although Kubernetes provides autoscaling, it does not support serverless-style event-driven scaling out of the box. To allow users to build event-driven apps on top of Kubernetes, Red Hat and Microsoft joined forces and developed a project called KEDA (Kubernetes-Based Event-Driven Autoscaling). It is a step towards serverless Kubernetes and serverless on Kubernetes.

#kubernetes #kubernetes-based event-driven autoscaling #keda

Maud Rosenbaum

Kubernetes in the Cloud: Strategies for Effective Multi Cloud Implementations

Kubernetes is a highly popular container orchestration platform. Multi cloud is a strategy that leverages cloud resources from multiple vendors. Multi cloud strategies have become popular because they help prevent vendor lock-in and enable you to leverage a wide variety of cloud resources. However, multi cloud ecosystems are notoriously difficult to configure and maintain.

This article explains how you can leverage Kubernetes to reduce multi cloud complexities and improve stability, scalability, and velocity.

Kubernetes: Your Multi Cloud Strategy

Maintaining standardized application deployments becomes more challenging as your number of applications and the technologies they are based on increase. As environments, operating systems, and dependencies differ, management and operations require more effort and extensive documentation.

In the past, teams tried to get around these difficulties by creating isolated projects in the data center. Each project, including its configurations and requirements, was managed independently. This required accurately predicting performance and the number of users before deployment, and taking down applications to update operating systems or applications. There were many chances for error.

Kubernetes can provide an alternative to the old method, enabling teams to deploy applications independent of the environment in containers. This eliminates the need to create resource partitions and enables teams to operate infrastructure as a unified whole.

In particular, Kubernetes makes it easier to deploy a multi cloud strategy since it enables you to abstract away service differences. With Kubernetes deployments you can work from a consistent platform and optimize services and applications according to your business needs.

The Compelling Attributes of Multi Cloud Kubernetes

Multi cloud Kubernetes can provide multiple benefits beyond a single cloud deployment. Below are some of the most notable advantages.

Stability

In addition to the built-in scalability, fault tolerance, and auto-healing features of Kubernetes, multi cloud deployments can provide service redundancy. For example, you can mirror applications or split microservices across vendors. This reduces the risk of a vendor-related outage and enables you to create failovers.

#kubernetes #multicloud-strategy #kubernetes-cluster #kubernetes-top-story #kubernetes-cluster-install #kubernetes-explained #kubernetes-infrastructure #cloud

Auto-scaling + Kubernetes = KEDA

Prerequisites for this article: Kubernetes knowledge.

If you are a developer or a DevOps expert, you will definitely face the task of creating an auto-scaling job at some point.

A typical use case: you have file storage that needs to be processed as soon as new files arrive, which calls for parallel jobs. A single job won’t be fast enough, but hosting multiple always-on jobs could be too expensive, because they would sit idle and consume resources.

So the dilemma is “quick & expensive” vs “slow & cheap”.

This is where Kubernetes and specifically KEDA could help you.

KEDA is a Kubernetes-based Event Driven Autoscaler. With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed.
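
To connect this back to the file-processing scenario, here is a hedged sketch using KEDA’s ScaledJob resource (available in KEDA v2), which spawns Kubernetes Jobs in response to pending work. It assumes file arrivals are signaled via an Azure Storage Queue, and the queue name, container image, and environment variable are hypothetical:

    apiVersion: keda.sh/v1alpha1
    kind: ScaledJob
    metadata:
      name: file-processor
    spec:
      jobTargetRef:
        template:
          spec:
            containers:
              - name: processor
                image: example.com/file-processor:latest  # hypothetical image
            restartPolicy: Never
      maxReplicaCount: 20                  # cap on parallel Jobs
      triggers:
        - type: azure-queue
          metadata:
            queueName: incoming-files      # hypothetical queue of file events
            queueLength: "5"               # messages handled per Job
            connectionFromEnv: STORAGE_CONNECTION

When the queue is empty, no Jobs run, which resolves the dilemma: capacity exists only while there is work to do.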

#kubernetes #autoscaling #keda