Autoscaling Kubernetes pods has never been more straightforward, thanks to the Horizontal Pod Autoscaler (HPA). The HPA adds an intuitive abstraction to effortlessly scale pods up or down dynamically based on target metrics. According to the documentation, the HPA fetches these target metrics from the aggregated APIs, namely metrics.k8s.io, custom.metrics.k8s.io, and external.metrics.k8s.io. The **metrics-server** add-on made it even easier to autoscale pods with respect to CPU/memory usage, as it provides the metrics.k8s.io API that the HPA uses. For any other choice of metrics, one must leverage the custom.metrics.k8s.io or external.metrics.k8s.io APIs to extend the HPA's functionality.
To discuss a use case, suppose we want to scale our pods based on some complicated logic. Then we can go for either of the following:

1. Implement our own metrics API server that serves the custom.metrics.k8s.io API for the HPA to use.
2. Use an existing adapter that serves the custom.metrics.k8s.io API for the HPA to use.

Let's see how we can achieve #2 in a very simple way. To start off, here is the flow.
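Once an adapter is serving custom.metrics.k8s.io, the HPA can target an arbitrary per-pod metric. A sketch, assuming a hypothetical metric named `http_requests_per_second` exposed through the adapter and a Deployment named `my-app`:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                       # assumed Deployment name
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods                         # per-pod metric served via custom.metrics.k8s.io
    pods:
      metric:
        name: http_requests_per_second # assumed metric name exposed by the adapter
      target:
        type: AverageValue
        averageValue: "100"            # aim for ~100 req/s per pod on average
```

You can confirm the adapter is registered and serving by querying the aggregated API directly, e.g. `kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1"`.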
#kubernetes #docker #devops #monitoring #autoscaling