Traffic Director by Example: Part 2

Using Traffic Director with Google Kubernetes Engine (GKE).

This article is part of the series that starts with Traffic Director by Example: Part 1.

Sidebar: Automatic Envoy Deployments

In the last article, we manually installed the Envoy service proxy for the client on a GCE VM instance. Below, we will manually install it on a GKE pod. That said, Traffic Director also supports installing the Envoy service proxy automatically, for both GCE VM instances and GKE pods.
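To make the contrast concrete, here is roughly what the manual approach looks like on GKE: the Envoy container is declared explicitly in the workload's Pod template rather than being injected for us. This is only a sketch; the image tag, the _td-envoy-bootstrap_ ConfigMap name, and the mount path are placeholders I chose for illustration, and the bootstrap file itself (which points Envoy's xDS configuration at trafficdirector.googleapis.com) is assumed to exist already.

```yaml
# Sketch of a GKE Deployment with a manually declared Envoy sidecar.
# Names such as "td-envoy-bootstrap" and the Envoy image tag are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-client
  template:
    metadata:
      labels:
        app: demo-client
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "while true; do sleep 3600; done"]
      # Manually added Envoy sidecar; with automatic deployment this
      # container would be injected by the sidecar injector instead.
      - name: envoy
        image: envoyproxy/envoy:v1.16.0
        args: ["--config-path", "/etc/envoy/bootstrap.yaml"]
        volumeMounts:
        - name: td-envoy-bootstrap
          mountPath: /etc/envoy
      volumes:
      # Bootstrap config (assumed to exist) pointing Envoy at Traffic
      # Director's xDS API.
      - name: td-envoy-bootstrap
        configMap:
          name: td-envoy-bootstrap
```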

And there are clear advantages to the automated installs, as the documentation points out:

When you use automated Envoy deployment with Compute Engine VMs, the Envoy version installed is one that we have validated to work with Traffic Director. When a new VM is created using the instance template, the VM receives the latest version that we have validated. If you have a long-running VM, you can use a rolling update to replace your existing VMs and pick up the latest version.

When you use the Envoy sidecar injector with GKE, the injector is configured to use a recent version of Envoy that we have validated to work with Traffic Director. When a sidecar is injected alongside your workload Pod, it receives this version of Envoy. If you want to pick up a more recent version of Envoy, update the Envoy sidecar injector.

— Google Cloud, _Preparing for Traffic Director setup_

So, why did we do the manual installation?

It turns out that the option of configuring the service proxy to intercept only traffic destined for the service VIP CIDR block _10.0.0.0/16_ is not available (or at least not documented) for the automated installation. This is a critical problem if the workload needs to communicate with services other than the ones that we provide ourselves.
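For reference, the manual install expresses that restriction in the traffic-interception setup that runs before the workload starts. The fragment below (it belongs inside the Pod template spec, alongside the containers shown earlier) assumes the istio proxy_init-style init container used by the manual setup samples, where -i limits the iptables REDIRECT rules to the given CIDR and -p / -u are the Envoy listener port and UID excluded from redirection; treat the image name, tag, and exact flag set as assumptions to verify against the current docs.

```yaml
# Sketch of an init container that intercepts only traffic destined for the
# service VIP range 10.0.0.0/16, leaving all other egress untouched.
# Image name/tag and flags are assumptions based on the manual-setup samples.
initContainers:
- name: td-init
  image: docker.io/istio/proxy_init:1.4.0   # placeholder tag
  securityContext:
    capabilities:
      add: ["NET_ADMIN"]                    # needed to program iptables
  args:
  - "-p"
  - "15001"          # port Envoy listens on for intercepted traffic
  - "-u"
  - "1337"           # UID Envoy runs as, excluded from redirection
  - "-m"
  - "REDIRECT"
  - "-i"
  - "10.0.0.0/16"    # only this CIDR is redirected to the sidecar
```

Because we control this init container in the manual install, the include-CIDR is ours to set; with the automatic deployments there is no documented knob for it, which is exactly the limitation described above.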

