Kubernetes is great. It handles the deployment, scaling, and updating of containerized application clusters in a declarative, automated manner. But while automation reduces the operational burden of running applications in production, it also makes monitoring more necessary. Full-stack observability of metrics and events, including the container and Kubernetes orchestration layers, must be in place to keep applications functional and performant. Left unwatched, automation can hide issues until it’s too late and you’ve hit a breaking point.
Monitoring is considered a first-class property of any modern system, and Kubernetes is no different. There are two main mechanisms for deploying monitoring agents: as a DaemonSet or as a Sidecar container.
With a DaemonSet deployment, a node-level agent collects data from all pods running on that node. It is usually used to observe the cluster infrastructure itself: kubelet (node, container, and pod) metrics, network metrics, logs, traces, and error reports. When it comes to collecting metrics from a specific workload or application running in the containers, however, a Sidecar deployment is typically the better choice.
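As an illustrative sketch of the DaemonSet pattern (the image tag, namespace, and host paths here are assumptions, not from the original), a node-level agent such as Telegraf can be scheduled onto every node so that one collector pod observes everything running there:

```yaml
# Illustrative DaemonSet for a node-level monitoring agent.
# Image, namespace, and mounted paths are assumptions for this sketch.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-monitoring-agent
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: node-monitoring-agent
  template:
    metadata:
      labels:
        app: node-monitoring-agent
    spec:
      containers:
        - name: agent
          image: telegraf:1.30        # assumed agent image
          resources:
            limits:
              memory: 256Mi           # cap the agent's footprint per node
          volumeMounts:
            - name: varlog
              mountPath: /var/log     # read node and container logs
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```

Because the DaemonSet controller schedules exactly one pod per node, the agent scales with the cluster automatically as nodes join or leave.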
That is because a Sidecar monitoring agent lets you define custom metrics and monitoring for that specific application without affecting the shared monitoring framework used by other workloads. Over time, a growing number of Prometheus metrics endpoints exposed by application developers can create scalability problems for a DaemonSet deployment. See this blog post for more on using a Sidecar deployment to scale application monitoring on Kubernetes, giving IT Ops a way to let developers monitor their own applications.
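The Sidecar pattern can be sketched as a pod that runs the application container alongside its own dedicated agent, which scrapes the app's Prometheus endpoint over localhost. All names, images, and ports below are hypothetical placeholders:

```yaml
# Illustrative Sidecar pattern: the agent lives in the same pod as the app.
# Image names, the port, and the config contents are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-telegraf-config
data:
  telegraf.conf: |
    [[inputs.prometheus]]
      # Same network namespace as the app, so localhost works
      urls = ["http://localhost:9102/metrics"]
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:latest            # hypothetical application image
      ports:
        - containerPort: 9102         # app's Prometheus /metrics endpoint
    - name: telegraf-sidecar
      image: telegraf:1.30            # assumed agent image
      volumeMounts:
        - name: telegraf-config
          mountPath: /etc/telegraf
  volumes:
    - name: telegraf-config
      configMap:
        name: my-app-telegraf-config
```

Because the agent config lives with the workload, each team can tune its own inputs without touching the cluster-wide monitoring setup.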
#kubernetes #microservices #monitoring
Both DaemonSet and Sidecar deployments play an important role in monitoring Kubernetes. The Telegraf Operator complements them by letting you define a common output destination for metrics, so individual teams don't each have to configure where their data goes.
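The Telegraf Operator's workflow is annotation-driven: pods opt in via annotations, and the operator injects a Telegraf sidecar whose output section comes from a shared "class" definition, which is where the common output destination lives. The class name, destination URL, and secret layout below are illustrative assumptions; check the operator's documentation for the exact keys your version expects:

```yaml
# Hedged sketch of the Telegraf Operator pattern: output config is defined
# once in a shared class, and workloads reference it by annotation.
apiVersion: v1
kind: Secret
metadata:
  name: telegraf-operator-classes
  namespace: telegraf-operator
stringData:
  app: |
    [[outputs.influxdb_v2]]
      # Common destination for every pod using the "app" class (assumed URL)
      urls = ["http://influxdb.example:8086"]
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    telegraf.influxdata.com/class: "app"      # pick the shared output class
    telegraf.influxdata.com/inputs: |
      [[inputs.prometheus]]
        urls = ["http://localhost:9102/metrics"]
spec:
  containers:
    - name: app
      image: my-app:latest                    # hypothetical application image
```

The result is the best of both worlds described above: developers declare their own inputs per workload, while IT Ops controls the output destination centrally.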