How AI Observability Cuts Down Kubernetes Complexity

The Kubernetes era has made scaled-out applications across multiple cloud environments a reality. But it has also introduced a tremendous amount of complexity into IT departments.

My guest on this episode of The New Stack Makers podcast is Andreas Grabner from software intelligence platform Dynatrace, who recently noted that “in the enterprise Kubernetes environments I’ve seen, there are billions of interdependencies to account for.” Yes, billions.

Grabner, who describes himself as a “DevOps Activist,” argues that AI technology can tame this otherwise overwhelming Kubernetes complexity. As he put it in a contributed post, “AI-powered observability provides enterprises with a host of new capabilities to better deploy and manage their Kubernetes environments.”

During the podcast, we dig into how AI — and automation in general — is impacting observability in Kubernetes environments. To kick the show off, I asked Grabner to clarify what he means by “AI observability.”

“We call it a deterministic AI,” he replied, “and what that really means is, at the core, it’s about capturing a lot of data [from] a lot of different data silos, and then you need to figure out how can I put [that] data on dashboards and make sense out of it. What we mean by ‘AI observability,’ or maybe let’s better call it ‘deterministic AI observability,’ is how we can connect the data with contextual information.”

Grabner pointed out that data in a Kubernetes environment can come from many places: the hosts, pods, containers, applications and other services. The challenge is to understand how all of these pieces relate to one another.
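
A minimal sketch of that idea, assuming the official Kubernetes Python client (the `kubernetes` package on PyPI) and kubeconfig access to a cluster: it walks the pods in every namespace and links each one back to its node, owning workload and containers. This is an illustration of the kind of contextual stitching Grabner describes, not Dynatrace's actual implementation.

```python
# Sketch: build a topology map that links pods to their nodes, owning
# workloads and containers. Assumes the official Kubernetes Python client
# ("kubernetes" on PyPI) and a kubeconfig with read access to the cluster.
from collections import defaultdict

from kubernetes import client, config


def build_topology():
    """Return {node: {workload: [pod records]}} from live cluster metadata."""
    config.load_kube_config()  # or config.load_incluster_config() when running in a pod
    core = client.CoreV1Api()

    topology = defaultdict(lambda: defaultdict(list))
    for pod in core.list_pod_for_all_namespaces(watch=False).items:
        node = pod.spec.node_name or "<unscheduled>"

        # The owner reference (ReplicaSet, DaemonSet, Job, ...) is the context
        # that ties an individual pod back to the workload it belongs to.
        owners = pod.metadata.owner_references or []
        workload = f"{owners[0].kind}/{owners[0].name}" if owners else "<standalone>"

        topology[node][workload].append({
            "namespace": pod.metadata.namespace,
            "pod": pod.metadata.name,
            "containers": [c.image for c in pod.spec.containers],
        })
    return topology


if __name__ == "__main__":
    for node, workloads in build_topology().items():
        print(node)
        for workload, pods in workloads.items():
            print(f"  {workload}: {len(pods)} pod(s)")
```

Even this toy view hints at why the interdependencies multiply so quickly: every pod record only becomes meaningful once it is joined to node, workload and container context.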

