This article describes how to run a Kubernetes cluster in multiple zones on the AWS platform.

Introduction

Kubernetes 1.2 added support for running a single cluster in multiple failure zones (AWS calls them “availability zones”). Multizone support is deliberately limited: a single Kubernetes cluster can run in multiple zones, but only within the same region (and cloud provider).

Functionality

When nodes are started, the kubelet automatically adds labels to them with zone information.
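For example, once nodes have registered you can select them by zone label (us-west-2a here is just an example zone):

kubectl get nodes -l failure-domain.beta.kubernetes.io/zone=us-west-2a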

Kubernetes automatically spreads the pods in a replication controller or service across nodes in a single-zone cluster (to reduce the impact of failures).

With multiple-zone clusters, this spreading behavior is extended across zones (to reduce the impact of zone failures). This is achieved via SelectorSpreadPriority.
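As a quick illustration (a sketch: the name zone-spread-demo and the nginx image are placeholders, and on the older kubectl versions this article targets, kubectl run with --replicas creates a Deployment), start a few replicas and compare the nodes they land on with each node’s zone label:

kubectl run zone-spread-demo --image=nginx --replicas=3
kubectl get pods -o wide
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone

The NODE column of the second command, matched against the ZONE column of the third, shows how the replicas were spread across zones.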

When persistent volumes are created, the PersistentVolumeLabel admission controller automatically adds zone labels to them.
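As a minimal sketch (assuming a default StorageClass with dynamic provisioning is configured; the claim name claim1 is a placeholder), create a claim and then inspect the labels the admission controller added to the dynamically provisioned volume:

cat <<EOF | kubectl create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF

kubectl get pv --show-labels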

The scheduler (via the VolumeZonePredicate predicate) will then ensure that pods that claim a given volume are only placed into the same zone as that volume, as volumes cannot be attached across zones.
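To see this in action (a sketch that reuses the hypothetical claim1 from above; the pod and container names are placeholders), create a pod that claims the volume and check which node, and therefore which zone, it was scheduled into (substitute the node name reported by the first command):

cat <<EOF | kubectl create -f -
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: claim1
EOF

kubectl describe pod mypod | grep Node
kubectl get node <node-name> --show-labels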

Volume limitations

The following limitations are addressed with topology-aware volume binding.

  • StatefulSet volume zone spreading when using dynamic provisioning is currently not compatible with pod affinity or anti-affinity policies.
  • If the name of the StatefulSet contains dashes (“-”), volume zone spreading may not provide a uniform distribution of storage across zones.
  • When specifying multiple PVCs in a Deployment or Pod spec, the StorageClass needs to be configured for a specific single zone, or the PVs need to be statically provisioned in a specific zone (see the StorageClass sketch after this list).
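A minimal sketch of the zone-pinned StorageClass that the last limitation calls for, assuming the AWS EBS provisioner; the class name is a placeholder, and the zone parameter pins all volumes created from this class to a single zone:

cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-us-west-2a
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zone: us-west-2a
EOF

Alternatively, topology-aware volume binding (volumeBindingMode: WaitForFirstConsumer on the StorageClass) delays provisioning until a pod is scheduled, which is how the limitations above are addressed.

Walkthrough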

We’re now going to walk through setting up and using a multi-zone cluster on both GCE and AWS. To do so, you bring up a full cluster (specifying MULTIZONE=true), then you add nodes in additional zones by running kube-up again (specifying KUBE_USE_EXISTING_MASTER=true).

Bringing up your cluster

Create the cluster as normal, but pass MULTIZONE to tell the cluster to manage multiple zones, creating nodes in us-west-2a (AWS) or us-central1-a (GCE).

AWS:

curl -sS https://get.k8s.io | MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a NUM_NODES=3 bash

This step brings up a cluster as normal, still running in a single zone (but MULTIZONE=true has enabled multi-zone capabilities).

Nodes are labeled

View the nodes; you can see that they are labeled with zone information. They are all in us-central1-a (GCE) or us-west-2a (AWS) so far. The labels are failure-domain.beta.kubernetes.io/region for the region, and failure-domain.beta.kubernetes.io/zone for the zone:

kubectl get nodes --show-labels

The output is similar to this:

NAME                     STATUS                     ROLES    AGE   VERSION          LABELS
kubernetes-master        Ready,SchedulingDisabled   <none>   5m    v1.13.0          beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
kubernetes-minion-87yt   Ready                      <none>   5m    v1.13.0          beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87yt
kubernetes-minion-89hf   Ready                      <none>   5m    v1.13.0          beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-89hf
kubernetes-minion-n45g   Ready                      <none>   5m    v1.13.0          beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-n45g 
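To then add nodes in a second zone, as outlined above, rerun kube-up with KUBE_USE_EXISTING_MASTER=true and a new zone in the same region (us-west-2b here is an example second zone; the kube-up.sh path assumes the release directory downloaded in the first step):

KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2b NUM_NODES=3 kubernetes/cluster/kube-up.sh

Running kubectl get nodes --show-labels again should then show nodes labeled with both zones.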

#kubernetes #aws #programming #developer
