Kubernetes Horizontal Pod Autoscaler (CPU Utilization | Based on Memory | Autoscaling | HPA)

The Horizontal Pod Autoscaler automatically scales the number of Pods in Deployments or StatefulSets based on CPU or memory utilization, or even on custom metrics specific to your application. The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller; the resource defines the behavior of the controller.
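For example, a minimal HPA resource that keeps average CPU utilization around 50% for a Deployment could look like the sketch below (the `nodejs-app` name and the targets are illustrative, not taken from the lesson):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nodejs-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nodejs-app           # hypothetical Deployment name
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # scale out when average CPU exceeds 50%
```

Note that resource-based scaling like this requires the Metrics Server to be running in the cluster.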

Did I help you out?
☕ Buy Me a Coffee: https://www.buymeacoffee.com/antonputra
🔴 Add me on LinkedIn: https://www.linkedin.com/in/anton-putra

=========
⏱️TIMESTAMPS⏱️
0:00 Intro
0:15 Demo
0:24 NodeJS App
0:50 Create EKS cluster with eksctl
1:09 Deploy Metrics Server
1:56 Deploy NodeJS App
2:29 Create Horizontal Pod Autoscaler for NodeJS Deployment
3:21 Test HPA with curl

=========
Source Code
🖥️ - GitHub: https://github.com/antonputra/tutorials/tree/main/lessons/071

=========
SOCIAL
🎙 - Twitter: https://twitter.com/antonvputra
📨 - Email: me@antonputra.com

#Kubernetes #K8s #DevOps

#kubernetes #k8s #devops

Christa Stehr
50+ Useful Kubernetes Tools for 2020 - Part 2

Introduction

Last year, we provided a list of Kubernetes tools that proved so popular we have decided to curate another list of some useful additions for working with the platform—among which are many tools that we personally use here at Caylent. Check out the original tools list here in case you missed it.

According to a recent survey by StackRox, the dominance Kubernetes enjoys in the market continues to be reinforced, with 86% of respondents using it for container orchestration.

(State of Kubernetes and Container Security, 2020)

And as you can see below, more and more companies are jumping into containerization for their apps. If you’re among them, here are some tools to aid you going forward as Kubernetes continues its rapid growth.

(State of Kubernetes and Container Security, 2020)

#blog #tools #kubernetes #containers #container orchestration tools #helm #kubernetes security


Easy and Fast Adjustment of Kubernetes CPU and Memory

Assigning and managing CPU and memory resources in Kubernetes can be both simple and tricky at the same time. Having done this task for numerous customers, I have decided to create a framework. I will show you what Kubernetes resources and limits are and how to manage them.

The framework contains the following steps.

  • An infographic guide that shows which algorithms to follow to determine and assign resources and limits.
  • Code templates that allow applying those algorithms with minimal adaptation.
  • Algorithms and tools to gather metrics about resource consumption and to set the limits.
  • Links to the official documentation where you can quickly grab examples and read more detailed information.
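Whatever algorithm you follow, the end result is a requests/limits block on each container. A minimal sketch (the image and all values here are purely illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: nginx:1.25        # example image
      resources:
        requests:
          cpu: 250m            # the scheduler reserves this much CPU
          memory: 128Mi
        limits:
          cpu: 500m            # the container is throttled above this
          memory: 256Mi        # the container is OOM-killed above this
```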

What this article doesn’t contain.

My goal here is simplicity. So you won’t find detailed descriptions of how resources, limit ranges, and quotas work. There are plenty of articles written about that, as well as Kubernetes’ own thorough documentation. Instead, here you will find information on how to quickly start adjusting Kubernetes resources in your projects.

#cpu-memory #azure-kubernetes-service #kubernetes-cluster #kubernetes #resources

Houston Sipes

Kafka Workers Autoscaling With Horizontal Pod Autoscaler

This is a step-by-step article that guides you from scratch. I assume you only have access to a vanilla Kubernetes cluster. In this article, I’m using Kubernetes external metrics to autoscale my Kafka consumers. There are other articles online that show you how to use custom metrics instead. I chose external metrics since they are more applicable to a real production environment, where you might want to scale based on other metrics that are available in Prometheus. For a great tutorial on custom metrics, see this Medium article.

Outline

There are a few steps we need to do to use HPA with custom and/or external metrics. You can skip some of them if you happen to already have Kafka or Prometheus running.

  1. Deploy Kafka
  2. Deploy Prometheus
  3. Expose Prometheus and Grafana services
  4. Deploy a Kafka consumer application
  5. Deploy Prometheus Adapter
  6. Deploy HPA
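The end result of the steps above is an HPA that targets an external metric served through the Prometheus Adapter. A sketch of what that final HPA might look like (the Deployment name, metric name, and target value are illustrative assumptions, not from this tutorial):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kafka-consumer
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kafka-consumer            # hypothetical consumer Deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: External
      external:
        metric:
          name: kafka_consumergroup_lag   # assumed metric exposed via the Prometheus Adapter
        target:
          type: AverageValue
          averageValue: "100"             # target lag per consumer replica
```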

Deploy Kafka

We will use Helm for most of our installation process, out of convenience. Kafka can be installed from a Helm chart (for example, `helm install kafka bitnami/kafka`), and the kube-prometheus-stack used in the Prometheus steps below is installed with:

$ helm install kp prometheus-community/kube-prometheus-stack \
    --set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false

#autoscaling #prometheus #kubernetes #hpa #kafka

Dejah Reinger

Building Your Own Custom Metrics API for Kubernetes Horizontal Pod Autoscaler

Preface

Kubernetes is a lot of fun, has lots of features, and usually supports most of one’s whims as a container orchestration tool in a straightforward fashion.

However, one request from my direct manager made me sweat during my attempts to achieve it: auto-scale pods according to a complicated logical criterion.

Trying to tackle the task, my online research yielded only partial solutions, and I ran into so many brick walls trying to crack this one that I had to write an article about it, in order to spare future confusion for all the poor souls who might try to scale up their microservices on a criterion that isn’t CPU/memory.

The Challenge

It all started when we needed to scale one of our deployments according to the number of pending messages in a certain queue of RabbitMQ.

That is a cool, not overly complicated task that can be achieved by utilizing Prometheus, Rabbitmq-exporter, and Prometheus-adapter together (hereinafter referred to as “the trio”).
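For that simple queue-length case, the trio is wired together through the Prometheus Adapter’s rules configuration. A minimal sketch, assuming the rabbitmq-exporter exposes a `rabbitmq_queue_messages` gauge with `namespace` and `queue` labels (the labels and query are illustrative):

```yaml
# prometheus-adapter values fragment (illustrative)
rules:
  custom:
    - seriesQuery: 'rabbitmq_queue_messages{namespace!="",queue!=""}'
      resources:
        overrides:
          namespace: {resource: "namespace"}
      name:
        as: "rabbitmq_queue_messages"
      metricsQuery: 'sum(rabbitmq_queue_messages{<<.LabelMatchers>>}) by (<<.GroupBy>>)'
```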

With much enthusiasm and anticipation, I jumped right into the implementation only to later discover that one of my manager’s magic light-bulbs had switched on in his brain. It happens quite often, fortunately for him, and less fortunately for me as this usually means stretching the capabilities of the technology at hand with advanced and not-often-supported demands.

He came up with a better, more accurate scaling criterion for our deployment. In a nutshell: it measures how long a message has been waiting in queue “A” using the message’s timestamp, and then performs some logic to determine the final value of the metric, which is always returned as a positive integer.

Well, that’s nice and all, but as far as my knowledge extends, the trio mentioned above is not able to perform the advanced logic my manager desired. After all it relies solely on metrics that RabbitMQ exposes, so I was left to figure out a solution.

The experience from trying to implement the trio has helped me gain a better view on how the Horizontal Pod Autoscaler works and reads data from sources.

As per the documentation, HPA works mainly against 3 APIs:

  • Metrics
  • Custom Metrics
  • External Metrics

My plan was to somehow harness the ‘custom metrics’ API and have it work against an internal application metrics API of our own, with the intention that the HPA would be able to read data from the internal API and scale accordingly.

This API could, in the future, be extended and serve as an application-metric for other deployments that need scaling based on internal application metrics or any kind of metrics for that matter.

This in essence involves the following tasks:

  1. Writing the code for our internal API
  2. Creating a Kubernetes deployment and service for our internal API
  3. Creating a Custom Metrics APIService in Kubernetes
  4. Creating the HPA resource
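Step 3 is what wires the HPA to the internal API: an APIService object registers a Service as the cluster’s custom-metrics endpoint. A sketch, assuming a hypothetical `custom-metrics-api` Service in the `custom-metrics` namespace:

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  service:
    name: custom-metrics-api      # hypothetical Service fronting the internal API
    namespace: custom-metrics
  group: custom.metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true     # fine for a demo; use a proper CA bundle in production
  groupPriorityMinimum: 100
  versionPriority: 100
```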

And with that in mind, let’s get to work.

Please note that for the sake of demonstration, I used the ‘custom-metrics’ namespace in all yaml definitions. However, it’s an arbitrary selection so feel free to deploy it anywhere you want.

#kubernetes #autoscaling #hpa #api-development #docker #api