Arno Bradtke


Kubernetes: HorizontalPodAutoscaler — an overview with examples

Kubernetes HorizontalPodAutoscaler automatically scales Kubernetes Pods under ReplicationController, Deployment, or ReplicaSet controllers based on their CPU, memory, or other metrics.

It was briefly discussed in the Kubernetes: running metrics-server in AWS EKS for a Kubernetes Pod AutoScaler post; now let’s go deeper and check all the options available for scaling.

For HPA you can use three API types:

  • Resource Metrics API: the default metrics, basically provided by the metrics-server
  • Custom Metrics API: metrics provided by adapters from inside of a cluster, for example the Microsoft Azure Adapter, Google Stackdriver, or the Prometheus Adapter (the Prometheus Adapter will be used later in this post); check the full list here
  • External Metrics API: similar to the Custom Metrics API, but metrics are provided by an external system, such as AWS CloudWatch

Documentation: Support for metrics APIs, and Custom and external metrics for autoscaling workloads.

Besides the HorizontalPodAutoscaler (HPA), you can also use the Vertical Pod Autoscaler (VPA), and they can be used together, although with some limitations; see Horizontal Pod Autoscaling Limitations.


Create HorizontalPodAutoscaler

Let’s start with a simple HPA which will scale pods based on CPU usage:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-example
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: deployment-example
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 10


  • apiVersion: autoscaling/v1: the API group autoscaling; pay attention to the API version, as in v1 at the time of writing scaling was available by CPU metrics only, so memory and custom metrics can be used only with the API v2beta2 (still, you can use v1 with annotations), see API Object
  • spec.scaleTargetRef: specifies for HPA which controller will be scaled (ReplicationController, Deployment, ReplicaSet); in this case, HPA will look for the Deployment object called deployment-example
  • spec.minReplicas, spec.maxReplicas: the minimum and maximum number of pods to be run by this HPA
  • targetCPUUtilizationPercentage: the CPU usage percentage of the requests at which HPA will add or remove pods
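The scaling decision behind these fields follows the algorithm described in the Kubernetes docs: desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue), clamped to the min/max range. A minimal sketch in Python (the function name and sample numbers are illustrative, not part of the HPA API):

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     min_replicas: int = 1,
                     max_replicas: int = 5) -> int:
    """Reproduce the documented HPA formula:
    desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric),
    clamped to the [minReplicas, maxReplicas] range."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# With targetCPUUtilizationPercentage: 10 and one pod at 30% CPU,
# HPA will scale the Deployment to 3 replicas:
print(desired_replicas(current_replicas=1, current_metric=30, target_metric=10))
```

Note that with a low target like 10%, even a small CPU spike multiplies the replica count quickly, until maxReplicas caps it.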

Create it:

$ kubectl apply -f hpa-example.yaml
horizontalpodautoscaler.autoscaling/hpa-example created


$ kubectl get hpa hpa-example
NAME          REFERENCE                       TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
hpa-example   Deployment/deployment-example   <unknown>/10%   1         5         0          89s

Currently, its TARGETS column has the <unknown> value as there are no pods created yet, but metrics are already available:

$ kubectl get --raw "/apis/metrics.k8s.io" | jq
{
  "kind": "APIGroup",
  "apiVersion": "v1",
  "name": "metrics.k8s.io",
  "versions": [
    {
      "groupVersion": "metrics.k8s.io/v1beta1",
      "version": "v1beta1"
    }
  ],
  "preferredVersion": {
    "groupVersion": "metrics.k8s.io/v1beta1",
    "version": "v1beta1"
  }
}

Add the Deployment called deployment-example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      application: deployment-example
  template:
    metadata:
      labels:
        application: deployment-example
    spec:
      containers:
      - name: deployment-example-pod
        image: nginx
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 100m
            memory: 100Mi

Here we defined a Deployment which will spin up one pod with NGINX, with requests for 100 CPU millicores and 100 mebibytes of memory, see Kubernetes best practices: Resource requests and limits.

Create it:

$ kubectl apply -f hpa-deployment-example.yaml
deployment.apps/deployment-example created

Check the HPA now:

$ kubectl get hpa hpa-example
NAME          REFERENCE                       TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-example   Deployment/deployment-example   0%/10%    1         5         1          14m

Our HPA found the deployment and started checking its pods’ metrics.

Let’s check those metrics — find a pod:

$ kubectl get pod | grep example | cut -d " " -f 1

And run the following API request:

$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods/deployment-example-86c47f5897-2mzjd | jq
{
  "kind": "PodMetrics",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "name": "deployment-example-86c47f5897-2mzjd",
    "namespace": "default",
    "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/deployment-example-86c47f5897-2mzjd",
    "creationTimestamp": "2020-08-07T10:41:21Z"
  },
  "timestamp": "2020-08-07T10:40:39Z",
  "window": "30s",
  "containers": [
    {
      "name": "deployment-example-pod",
      "usage": {
        "cpu": "0",
        "memory": "2496Ki"
      }
    }
  ]
}
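To see how HPA turns such a PodMetrics response into the utilization percentage shown in the TARGETS column, here is a small Python sketch; the quantity parser is deliberately simplified and only handles the suffixes that appear in this post:

```python
def parse_quantity(q: str) -> float:
    """Convert a Kubernetes resource quantity string to base units.
    Simplified: handles only the m, Ki, and Mi suffixes used in this post."""
    suffixes = {"m": 0.001, "Ki": 1024, "Mi": 1024 ** 2}
    for suffix, factor in suffixes.items():
        if q.endswith(suffix):
            return float(q[: -len(suffix)]) * factor
    return float(q)

def utilization_percent(usage: str, request: str) -> float:
    """Usage as a percentage of the container's request -- the value
    HPA compares against targetCPUUtilizationPercentage."""
    return 100 * parse_quantity(usage) / parse_quantity(request)

# Memory usage from the PodMetrics above against the 100Mi request:
print(round(utilization_percent("2496Ki", "100Mi"), 1))
```

With CPU usage at "0" and memory at roughly 2.4% of the request, the HPA correctly reports 0%/10% and keeps the Deployment at one replica.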

#monitoring #kubernetes #prometheus

Christa Stehr


50+ Useful Kubernetes Tools for 2020 - Part 2


Last year, we provided a list of Kubernetes tools that proved so popular we have decided to curate another list of some useful additions for working with the platform—among which are many tools that we personally use here at Caylent. Check out the original tools list here in case you missed it.

According to a recent survey done by Stackrox, the dominance Kubernetes enjoys in the market continues to be reinforced, with 86% of respondents using it for container orchestration.

(State of Kubernetes and Container Security, 2020)

And as you can see below, more and more companies are jumping into containerization for their apps. If you’re among them, here are some tools to aid you going forward as Kubernetes continues its rapid growth.

(State of Kubernetes and Container Security, 2020)

#blog #tools #amazon elastic kubernetes service #application security #aws kms #botkube #caylent #cli #container monitoring #container orchestration tools #container security #containers #continuous delivery #continuous deployment #continuous integration #contour #developers #development #developments #draft #eksctl #firewall #gcp #github #harbor #helm #helm charts #helm-2to3 #helm-aws-secret-plugin #helm-docs #helm-operator-get-started #helm-secrets #iam #json #k-rail #k3s #k3sup #k8s #keycloak #kiali #kiam #klum #knative #krew #ksniff #kube #kube-prod-runtime #kube-ps1 #kube-scan #kube-state-metrics #kube2iam #kubeapps #kubebuilder #kubeconfig #kubectl #kubectl-aws-secrets #kubefwd #kubernetes #kubernetes command line tool #kubernetes configuration #kubernetes deployment #kubernetes in development #kubernetes in production #kubernetes ingress #kubernetes interfaces #kubernetes monitoring #kubernetes networking #kubernetes observability #kubernetes plugins #kubernetes secrets #kubernetes security #kubernetes security best practices #kubernetes security vendors #kubernetes service discovery #kubernetic #kubesec #kubeterminal #kubeval #kudo #kuma #microsoft azure key vault #mozilla sops #octant #octarine #open source #palo alto kubernetes security #permission-manager #pgp #rafay #rakess #rancher #rook #secrets operations #serverless function #service mesh #shell-operator #snyk #snyk container #sonobuoy #strongdm #tcpdump #tenkai #testing #tigera #tilt #vert.x #wireshark #yaml

Maud Rosenbaum


Kubernetes in the Cloud: Strategies for Effective Multi Cloud Implementations

Kubernetes is a highly popular container orchestration platform. Multi cloud is a strategy that leverages cloud resources from multiple vendors. Multi cloud strategies have become popular because they help prevent vendor lock-in and enable you to leverage a wide variety of cloud resources. However, multi cloud ecosystems are notoriously difficult to configure and maintain.

This article explains how you can leverage Kubernetes to reduce multi cloud complexities and improve stability, scalability, and velocity.

Kubernetes: Your Multi Cloud Strategy

Maintaining standardized application deployments becomes more challenging as your number of applications and the technologies they are based on increase. As environments, operating systems, and dependencies differ, management and operations require more effort and extensive documentation.

In the past, teams tried to get around these difficulties by creating isolated projects in the data center. Each project, including its configurations and requirements, was managed independently. This required accurately predicting performance and the number of users before deployment, and taking down applications to update operating systems or applications. There were many chances for error.

Kubernetes can provide an alternative to the old method, enabling teams to deploy applications independent of the environment in containers. This eliminates the need to create resource partitions and enables teams to operate infrastructure as a unified whole.

In particular, Kubernetes makes it easier to deploy a multi cloud strategy since it enables you to abstract away service differences. With Kubernetes deployments you can work from a consistent platform and optimize services and applications according to your business needs.

The Compelling Attributes of Multi Cloud Kubernetes

Multi cloud Kubernetes can provide multiple benefits beyond a single cloud deployment. Below are some of the most notable advantages.


In addition to the built-in scalability, fault tolerance, and auto-healing features of Kubernetes, multi cloud deployments can provide service redundancy. For example, you can mirror applications or split microservices across vendors. This reduces the risk of a vendor-related outage and enables you to create failovers.

#kubernetes #multicloud-strategy #kubernetes-cluster #kubernetes-top-story #kubernetes-cluster-install #kubernetes-explained #kubernetes-infrastructure #cloud

Mitchel Carter


Microsoft Announces General Availability Of Bridge To Kubernetes

Recently, Microsoft announced the general availability of Bridge to Kubernetes, formerly known as Local Process with Kubernetes. It is an iterative development tool offered in Visual Studio and VS Code, which allows developers to write, test as well as debug microservice code on their development workstations while consuming dependencies and inheriting the existing configuration from a Kubernetes environment.

Nick Greenfield, Program Manager, Bridge to Kubernetes stated in an official blog post, “Bridge to Kubernetes is expanding support to any Kubernetes. Whether you’re connecting to your development cluster running in the cloud, or to your local Kubernetes cluster, Bridge to Kubernetes is available for your end-to-end debugging scenarios.”

Bridge to Kubernetes provides a number of compelling features. Some of them are mentioned below-

#news #bridge to kubernetes #developer tools #kubernetes #kubernetes platform #kubernetes tools #local process with kubernetes #microsoft

Houston Sipes


Did Google Open-Sourcing Kubernetes Backfire?

Over the last few years, Kubernetes has become the de-facto standard for container orchestration and has also won the race against Docker for being the most loved platform among developers. Released in 2014, Kubernetes has come a long way and is currently being used across the entire cloudscape. In fact, recent reports state that out of 109 tools to manage containers, 89% of them are leveraging Kubernetes versions.

Although inspired by Borg, Kubernetes is an open-source project by Google and has been donated to a vendor-neutral firm, the Cloud Native Computing Foundation. This could be attributed to Google’s vision of creating a platform that can be used by every firm in the world, including the large tech companies, and can host multiple cloud platforms and data centres. The entire reason for handing over the control to CNCF is to develop the platform in the best interest of its users without vendor lock-in.

#opinions #google open source #google open source tools #google opening kubernetes #kubernetes #kubernetes platform #kubernetes tools #open source kubernetes backfired

Kubernetes: Monitoring, Reducing, and Optimizing Your Costs

Over the past two years at Magalix, we have focused on building our system, introducing new features, and scaling our infrastructure and microservices. During this time, we had a look at our Kubernetes clusters utilization and found it to be very low. We were paying for resources we didn’t use, so we started a cost-saving practice to increase cluster utilization, use the resources we already had and pay less to run our cluster.

In this article, I will discuss the top five techniques we used to better utilize our Kubernetes clusters on the cloud and eliminate wasted resources, thus saving money. In the end, we were able to cut our monthly bill by more than 50%!

  • Applying Workload Right-Sizing
  • Choosing The Right Worker Nodes
  • Autoscaling Workloads
  • Autoscaling Worker Nodes
  • Purchasing Commitment/Saving Plans

#cloud-native #kubernetes #optimization #kubecost #kubernetes-cost-savings #kubernetes-cost-monitoring #kubernetes-reduce-cost #kubernetes-cost-analysis