You might have previously used observability tools such as Prometheus, Azure Monitor, and AWS Container Insights, or commercial products such as LogicMonitor, to monitor your Kubernetes cluster. Let’s probe the Kubernetes magic that makes the beautiful CPU and memory dials tick on the monitoring dashboards.

Kubernetes has a built-in Metrics API (see spec) and a simple CLI query, kubectl top (documentation), that you can use to fetch a snapshot of the CPU and memory consumption of a Kubernetes object. The Metrics API depends on the Metrics Server cluster add-on, which gathers resource usage from the Kubelets of the cluster.

The primary consumer of the Metrics API is the Horizontal Pod Autoscaler (HPA), which uses the metrics served by the API to scale the number of pods based on observed resource usage. Apart from the Metrics API, HPA is also designed to consume metrics from your application running on the cluster (custom metrics) and from services outside the cluster (external metrics) to autoscale pods. Examples of external metrics providers for HPA include the popular open-source event-driven autoscaling service KEDA and, soon, LogicMonitor.

Similar to HPA, the Vertical Pod Autoscaler (VPA) relies on Metrics Server as well. VPA allows you to automatically adjust the CPU and memory constraints of the containers in a pod.
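As a quick sketch of what this looks like in practice, the commands below query the Metrics API and set up a CPU-based HPA. They assume a cluster with Metrics Server installed; "my-app" is a hypothetical deployment name used for illustration.

```shell
# Snapshot of node-level CPU and memory usage (served by Metrics Server)
kubectl top node

# Per-pod usage in a given namespace
kubectl top pod --namespace kube-system

# The same data, fetched directly from the Metrics API endpoint
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes

# Create an HPA that scales the hypothetical "my-app" deployment
# between 2 and 10 replicas, targeting 70% average CPU utilization
kubectl autoscale deployment my-app --cpu-percent=70 --min=2 --max=10
```

If Metrics Server is not installed, the `kubectl top` commands fail with "Metrics API not available", which is a quick way to check whether the add-on is running.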


Practical Top-down Resource Monitoring of a Kubernetes Cluster With Metrics Server