We provide guidance on where to deploy application services in a Kubernetes environment, using WAF as an example. Depending on your needs, it can make sense to deploy your WAF at the "front door" of the environment, on the Ingress Controller, per-service, or per-Pod.
In the previous blog in this series, we looked at the rising influence of DevOps in controlling how applications are deployed, managed, and delivered. Although this may appear to invite conflict with NetOps teams, enterprises instead need to recognize that each team has different responsibilities, goals, and modes of operation. Careful choices about where to locate application services such as load balancing and web application firewall (WAF), with duplication in some cases, are the key to specialization and operational efficiency.
There are two primary criteria to consider when determining where to locate an application service:

(a) Does the service need to be configured and tuned differently for each application?

(b) Is the service applied uniformly, shared across all applications on the platform?
When you lean towards (a), it often makes sense to deploy the service close to the applications that require it, and to give control to the DevOps team responsible for operating those applications.

When you lean more towards (b), it's generally best to deploy the service at the front door of the infrastructure, managed by the NetOps team responsible for the successful operation of the entire platform.
In addition, you need to consider the technical fit, because compromises may be necessary. Can the service be deployed and operated using the ecosystem tools that the DevOps or NetOps teams are comfortable with? Do those tools deliver the necessary functionality, configuration interfaces, and monitoring APIs?
In a Kubernetes environment, there are several locations where you might deploy application services:

- At the "front door" of the environment, before traffic enters the Kubernetes cluster
- On the Ingress Controller
- Per-service, in front of a group of Pods
- Per-Pod, alongside each application instance
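As a concrete reference point for the Ingress Controller tier, here is a minimal Ingress resource; in practice, an application service such as a WAF is often attached at this layer through controller-specific configuration. The hostname and Service name below are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress          # placeholder name
spec:
  rules:
  - host: app.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-app    # placeholder Service for the application
            port:
              number: 80
```

Any application service attached at this point sees all traffic entering the cluster for the routed hosts, which is what makes the Ingress Controller a natural location for shared, platform-wide services.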
Let’s take web application firewall (WAF) as an example. WAF policies implement advanced security measures to inspect and block undesirable traffic, but these policies often need to be fine‑tuned for specific applications in order to minimize the number of false positives.
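One way to fine-tune a WAF per application is to deploy it per-Pod, as a sidecar container, so the policy travels with the workload and is owned by the DevOps team. A minimal sketch, assuming a hypothetical `waf-proxy` image that listens on port 8080 and forwards approved traffic to the application container over localhost:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: waf                           # hypothetical WAF sidecar; inspects traffic first
        image: example/waf-proxy:latest     # placeholder image
        ports:
        - containerPort: 8080               # Service traffic enters here
      - name: app
        image: example/app:latest           # placeholder application image
        ports:
        - containerPort: 3000               # reached only via the sidecar on localhost
```

Because both containers share the Pod's network namespace, the sidecar reaches the application on localhost, and the Service is pointed at the sidecar's port rather than the application's.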