This is the first of two blog posts on AKS network isolation. In this part, I’ll elaborate on how to protect AKS from a networking perspective. In part 2, I’ll describe how to still access your cluster for management purposes, e.g. for CI/CD. You’ll see me use Terraform, PowerShell, Azure CLI, Azure Pipelines, Kusto & cloud-init. The complete solution for part 1 is on GitHub: geekzter/azure-aks: Network Isolated AKS (github.com).

When you create an Azure Kubernetes Service (AKS) cluster in the Azure Portal, or with tools such as the Azure CLI, the default configuration is open in the sense that traffic (both application & management) traverses public IP addresses. This is a challenge in enterprise environments, especially in regulated industries. Effort is needed to control all network paths, and in the case of AKS there are quite a few moving parts to consider.
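As an illustration of closing off the most visible public endpoint, the AKS API server can be made private at creation time. A minimal Terraform sketch, assuming the azurerm provider; the resource names, location and node pool sizing below are placeholders:

```terraform
# Hypothetical example: AKS cluster whose API server is only reachable
# over a private endpoint (Private Link), not a public IP.
# All names, the location and the VM size are illustrative placeholders.
resource "azurerm_kubernetes_cluster" "example" {
  name                    = "example-aks"
  location                = "westeurope"
  resource_group_name     = "example-rg"
  dns_prefix              = "exampleaks"
  private_cluster_enabled = true # API server gets a private IP via Private Link

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_D2s_v3"
  }

  identity {
    type = "SystemAssigned"
  }
}
```

This only addresses the management (API server) path; the other traffic paths are covered in the rest of this post.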

AKS Networking modes

AKS supports two networking ‘modes’. These modes control the IP address allocation of the agent nodes in the cluster. In short:

  • kubenet allocates pod IP addresses from a separate address space and uses NAT (network address translation) to expose the agent nodes. This is where the Kubernetes term ‘external IP’ comes from: in this case it is a private IP address known to the rest of the network.
  • Azure CNI uses the same address space for the agent nodes and pods as the rest of the virtual network. See the comparison in the AKS documentation.

I won’t go into detail on these modes, as the network mode is largely irrelevant for the isolation controls you need to implement: choosing one over the other does not make a significant difference for network isolation. I tested with Azure CNI.
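For reference, the network mode is selected via the `network_profile` block of the cluster resource. A sketch, again assuming the Terraform azurerm provider; the CIDR values are illustrative:

```terraform
# Inside the azurerm_kubernetes_cluster resource (values are placeholders):
network_profile {
  network_plugin = "azure" # Azure CNI; set to "kubenet" for kubenet mode
  service_cidr   = "10.0.16.0/22"
  dns_service_ip = "10.0.16.10" # must fall within service_cidr

  # With kubenet, pods instead get IPs from a separate, NATed address space:
  # network_plugin = "kubenet"
  # pod_cidr       = "10.244.0.0/16"
}
```

Whichever plugin you pick, the isolation controls described below apply the same way.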

#networking #kubernetes #aks #terraform #security

Network Isolated AKS — Part 1: Controlling network traffic