Nigel Uys

AWS vs. GCP vs. Azure - K8s/Kubernetes

Kubernetes (“K8s”) won the battle of container orchestration tools. Now AWS, Azure, and Google Cloud each offer a managed Kubernetes version. How do they compare?

Kubernetes (often stylized “K8s”) won the battle of container orchestration tools years ago. Nevertheless, there are still many ways to implement Kubernetes today and make it work with various infrastructures, and many tools—some better maintained than others. Perhaps the most interesting development on that front, though, is that the top cloud providers have decided to release their own managed Kubernetes versions:

  • Microsoft Azure offers the Azure Kubernetes Service (AKS)
  • AWS offers the Amazon Elastic Kubernetes Service (EKS)
  • Google Cloud offers the Google Kubernetes Engine (GKE)

From a DevOps perspective, what do these platforms offer? Do they live up to their promises? How do their creation time and other benchmarks compare? How well do they integrate with their respective platforms, especially their CLI tools? What’s it like maintaining and working with them? Below, we’ll delve into these questions, and more.

Note: For readers who would like the concepts of a Kubernetes cluster explained before they read on, Dmitriy Kononov offers an excellent introduction.

AKS vs. EKS vs. GKE: Advertised Features

We’ve decided to group the different features available for each managed Kubernetes version into silos:

  • Global Overview
  • Networking
  • Scalability and Performance
  • Security and Monitoring
  • Ecosystem
  • Pricing

Note: These details may change over time as cloud providers regularly update their products.

Global Overview

| Aspect | AKS | EKS | GKE |
| --- | --- | --- | --- |
| Year Released | 2017 | 2018 | 2014 |
| Latest Version | 1.15.11 (default) to 1.18.2 (preview) | 1.16.8 (default) | 1.14.10 (default) to 1.16.9 |
| Specific Components | oms-agent, tunnelfront | aws-node | fluentd, fluentd-gcp-scaler, event-exporter, l7-default-backend |
| Kubernetes Control Plane Upgrade | Manual | Manual | Automated (default) or manual |
| Worker Upgrades | Manual | Yes (easy with managed node groups) | Yes: automated and manual, fine-tuning possible |
| SLA | 99.95 percent with availability zones, 99.9 percent without | 99.9 percent for EKS (master), 99.99 percent for EC2 (nodes) | 99.95 percent within a region, 99.5 percent within a zone |
| Native Knative Support | No | No | No (but native Istio install) |
| Kubernetes Control Plane Price | Free | $0.10/hour | $0.10/hour |

Kubernetes itself was Google’s project, so it makes sense that they were the first to propose a hosted version in 2014.

Of the three being compared here, Azure was next with AKS and has had some time to improve: If you remember acs-engine, which had been used to provision Kubernetes on Azure a few years ago, you will appreciate Microsoft’s effort on its replacement, aks-engine.

AWS was the last one to roll out its own version, EKS, so it sometimes can appear to be behind on the feature front, but they are catching up.

In terms of pricing, of course, things are always moving: Google decided to join AWS at its price point of $0.10/hour, effective June 2020. Azure is the outlier here, offering the AKS control plane for free, but it's unclear how long that may last.

Another main difference lies in cluster upgrades. GKE's upgrades are the most automated, and they are turned on by default. AKS and EKS are similar to each other here, in that both require manual requests to upgrade the master or worker nodes.
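As a sketch of what that difference looks like day to day (cluster, group, and version names below are placeholders), the upgrade workflows compare roughly like this:

```shell
# AKS: list available versions, then trigger the upgrade manually.
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster
az aks upgrade --resource-group myResourceGroup --name myAKSCluster \
    --kubernetes-version 1.16.9

# EKS: upgrade the control plane manually (eksctl shown; the aws CLI also works).
eksctl upgrade cluster --name cluster-test --approve

# GKE: auto-upgrades by default; a manual master upgrade remains available.
gcloud container clusters upgrade myGCloudCluster --master
```

These are command sketches requiring live cloud credentials, not a runnable script.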

Networking

| Aspect | AKS | EKS | GKE |
| --- | --- | --- | --- |
| Network Policies | Yes: Azure Network Policies or Calico | Need to install Calico | Yes: native via Calico |
| Load Balancing | Basic or standard SKU load balancer | Classic and network load balancer | Container-native load balancer |
| Service Mesh | None out of the box | AWS App Mesh (based on Envoy) | Istio (out of the box, but beta) |
| DNS Support | CoreDNS customization | CoreDNS + Route53 inside VPC | CoreDNS + Google Cloud DNS |

On the network side of things, the three cloud providers are very close to each other. They all let customers implement network policies with Calico, for example. Concerning load balancing, they all implement their integration with their own load balancer resources and give engineers the choice of what to use.
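Whichever provider enables them, the policies themselves are plain Kubernetes. For instance, a minimal NetworkPolicy denying all ingress traffic to a namespace looks the same on AKS, EKS, or GKE once Calico (or the native implementation) is active:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}    # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress        # no ingress rules listed, so all inbound traffic is denied
```

Applied with `kubectl apply -f`, this works identically across the three services.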

The main difference found here is based on the added value of the service mesh. AKS does not support any service mesh out of the box (although engineers can manually install Istio). AWS has developed its own service mesh called App Mesh. Finally, Google has released its own integration with Istio (though still in beta) that customers can add directly when creating the cluster.

Best bet: GKE

Scalability and Performance

| Aspect | AKS | EKS | GKE |
| --- | --- | --- | --- |
| Bare-metal Nodes | No | Yes | No |
| Max Nodes per Cluster | 1,000 | 1,000 | 5,000 |
| High-availability Cluster | No | Yes for the control plane; manual across AZs for workers | Yes via regional clusters; masters and workers are replicated |
| Autoscaling | Yes, via cluster autoscaler | Yes, via cluster autoscaler | Yes, via cluster autoscaler |
| Vertical Pod Autoscaler | No | Yes | Yes |
| Node Pools | Yes | Yes | Yes |
| GPU Nodes | Yes | Yes | Yes |
| On-prem | Available via Azure Arc (beta) | No | GKE on-prem via Anthos GKE |

Concerning GKE vs. AKS vs. EKS performance and scalability, GKE seems to be ahead. Indeed, it supports the largest number of nodes (5,000) and offers extensive documentation on how to properly scale a cluster. All the features for high availability are available and easy to fine-tune. What's more, Google recently released Anthos, a project to create an ecosystem around GKE and its functionality; with Anthos, you can deploy GKE on-prem.

AWS does have a key advantage, though: It is the only one to allow bare-metal nodes to run your Kubernetes cluster.

As of June 2020, AKS lacks high availability for the master, which is an important aspect to consider. But, as always, that could soon change.

Best bet: GKE

Security and Monitoring

| Aspect | AKS | EKS | GKE |
| --- | --- | --- | --- |
| App Secrets Encryption | No | Yes, possible via AWS KMS | Yes, possible via Cloud KMS |
| Compliance | HIPAA, SOC, ISO, PCI DSS | HIPAA, SOC, ISO, PCI DSS | HIPAA, SOC, ISO, PCI DSS |
| RBAC | Yes | Yes, with strong IAM integration | Yes |
| Monitoring | Azure Monitor container health feature | Kubernetes control plane monitoring connected to CloudWatch; Container Insights metrics for nodes | Kubernetes Engine Monitoring and integration with Prometheus |

In terms of compliance, all three cloud providers are equivalent. However, in terms of security, EKS and GKE provide another layer of security with their embedded key management services.

As for monitoring, Azure and Google Cloud provide their own monitoring ecosystem around Kubernetes. It’s worth noting that the one from Google has been recently updated to use Kubernetes Engine Monitoring, which is specifically designed for Kubernetes.

Azure provides its own container monitoring system, which was originally made for a basic, non-Kubernetes container ecosystem. They’ve added monitoring for some Kubernetes-specific metrics and resources (cluster health, deployments)—in preview mode, as of June 2020.

AWS offers lightweight monitoring for the control plane directly in CloudWatch. To monitor the workers, you can use Kubernetes Container Insights metrics, provided via a specific CloudWatch agent you can install in the cluster.

Best bet: GKE

Ecosystem

| Aspect | AKS | EKS | GKE |
| --- | --- | --- | --- |
| Marketplace | Azure Marketplace (but no clear AKS integration) | AWS Marketplace (250+ apps) | Google Marketplace (90+ apps) |
| Infrastructure-as-Code (IaC) Support | Terraform and Ansible modules | Terraform and Ansible modules | Terraform and Ansible modules |
| Documentation | Weak but complete, with a strong community (2,000+ Stack Overflow posts) | Not very thorough, but a strong community (1,500+ Stack Overflow posts) | Extensive official documentation and a very strong community (4,000+ Stack Overflow posts) |
| CLI Support | Complete | Complete, plus the special separate tool eksctl (covered below) | Complete |

In terms of ecosystems, the three providers have different strengths and assets. AKS now has very complete documentation around its platform and is second in terms of posts on Stack Overflow. EKS has the fewest posts on Stack Overflow but benefits from the strength of the AWS Marketplace. GKE, as the oldest platform, has the most posts on Stack Overflow and a decent number of apps on its marketplace, as well as the most comprehensive documentation.

Best bets: GKE and EKS

Pricing

| Aspect | AKS | EKS | GKE |
| --- | --- | --- | --- |
| Free Usage Cap | $170 worth | Not eligible for free tier | $300 worth |
| Kubernetes Control Plane Cost | Free | $0.10/hour | $0.10/hour (as of June 2020) |
| Reduced Price (Spot Instances/Preemptible Nodes) | Yes | Yes | Yes |
| Example Price for One Month | $342 (3 D2 nodes) | $300 (3 t3.large nodes) | $190 (3 n1-standard-2 nodes) |

Concerning price overall, even after GKE's move to the $0.10/hour price point for any cluster, it remains by far the cheapest of the three clouds. This is thanks to something specific to Google: sustained-use discounts, which are applied whenever the monthly usage of on-demand resources meets a certain minimum.

It is important to note that the example price row doesn't take into account traffic to the Kubernetes cluster, which the cloud provider may charge for.

The reason AWS doesn't allow the use of its free tier to test an EKS cluster is that EKS requires bigger machines than the tX.micro tier, and EKS hourly pricing is not included in the free tier.

Nevertheless, it can still be economical to test any of these managed Kubernetes options with a decent load using the spot/preemptible nodes of each cloud provider—that tactic will easily save 80 to 90 percent on the final price. (Of course, it is not recommended to run stateful production loads on such machines!)
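To put rough numbers on that, here is a back-of-the-envelope sketch; the $300 figure is the 3x t3.large example from the pricing table, and the 80 percent discount is an assumption at the conservative end of the spot range cited above:

```shell
# ~730 hours in an average month (365 days * 24 hours / 12 months).
hours_per_month=730
rate_cents_per_hour=10              # the $0.10/hour control plane price (EKS, GKE)
control_plane_dollars=$(( hours_per_month * rate_cents_per_hour / 100 ))
echo "Control plane: ~\$${control_plane_dollars}/month"

on_demand_dollars=300               # example worker cost (3 t3.large nodes)
spot_dollars=$(( on_demand_dollars * 20 / 100 ))   # assuming an 80% spot discount
echo "Spot workers:  ~\$${spot_dollars}/month"
```

So on EKS or GKE, the fixed control-plane fee alone (~$73/month) can rival the cost of a small spot-backed worker pool.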

Advertised Features and Google’s Advantage

When looking at the different advertised features online, it seems there is a correlation between how long the managed Kubernetes version has been on the market and the number of features. As mentioned, Google having been the initiator of the Kubernetes project seems to be an undeniable advantage, resulting in better and stronger integration with its own cloud platform.

But AKS and EKS are not to be underestimated as they mature; both can take advantage of their unique features. For example, AWS is the only one to have bare-metal node integration, and also boasts the highest number of applications in its marketplace.

Now that the advertised features for each Kubernetes offering are clear, let’s do a deeper dive with some hands-on tests.

Kubernetes: AWS vs. GCP vs. Azure in Practice

Advertising is one thing, but how do the different platforms compare when it comes to serving production loads? As a cloud engineer, I know the importance of how long it takes to spawn and to take down a cluster when enforcing infrastructure-as-code. But I also wanted to explore the possibilities of each CLI and comment on how easy (or not) each cloud provider makes it to spawn a cluster.

Cluster Creation User Experience

AKS

On AKS, spawning a cluster is similar to creating an instance in AWS: Find the AKS menu and go through a succession of different menus. Once the configuration is validated, the cluster can be created in a two-step process. It's very straightforward, and engineers can easily and quickly launch a cluster with the default settings.

EKS

Cluster creation is definitely more complex on EKS than on AKS. First of all, by default, AWS requires a trip to IAM to create a new role for the Kubernetes control plane and assign the engineer to it. It is also important to note that this cluster creation does not include the creation of the nodes, so the 11 minutes I measured on average covers only master creation. Node group creation is another step for the administrator, again requiring a worker role with three necessary policies, created via the IAM control panel.

GKE

For me, the experience of creating a cluster manually is most pleasant on GKE. After finding the Kubernetes Engine in the Google Cloud Console, click to create a cluster. Different categories of settings appear in a menu on the left. Google will prepopulate the new cluster with an easily modifiable default node pool. Last but not least, GKE has the fastest cluster-spawning time, which brings us to the next table.

Time to Spawn a Cluster

| Aspect | AKS | EKS | GKE |
| --- | --- | --- | --- |
| Size | 3 DS2 v2 nodes, each with 2 vCPUs and 7 GB of RAM | 3 t3.large nodes | 3 n1-standard-2 nodes |
| Time (m:ss) | 5:45 average for a full cluster | 11:06 for the master plus 2:40 for the node group (13:46 total for a full cluster) | 2:42 average for a full cluster |

I performed these tests in the same region (Frankfurt; West Europe for AKS) to remove any regional impact on spawning time. I also tried to select the same node size for each cluster: three nodes, each with two vCPUs and seven or eight GB of memory, a standard size to run a small load on Kubernetes and start experimenting. I created each cluster three times to compute an average.

In these tests, GKE remained way ahead with a spawning time always under three minutes.

Kubernetes: AWS vs. GCP vs. Azure CLI Overview

Not all CLIs are created equal, but in this case, all three CLIs are actually modules of a larger CLI. What’s it like to get up and running with each cloud provider’s CLI toolchain?

AKS CLI (VIA az)

After installing az tooling, then the AKS module (via az aks install-cli), engineers need to authorize the CLI to communicate with the project’s Azure account. This is a matter of getting the credentials to update the local kubeconfig file via a simple az aks get-credentials --resource-group myResourceGroup --name myAKSCluster.

Similarly, to create a cluster: az aks create --resource-group myResourceGroup --name myAKSCluster
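Putting those pieces together, a minimal end-to-end AKS session might look like the following sketch (the resource group, cluster name, and location are placeholders):

```shell
az login                             # authenticate the CLI against your account
az group create --name myResourceGroup --location westeurope
az aks create --resource-group myResourceGroup --name myAKSCluster \
    --node-count 3 --generate-ssh-keys
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes                    # confirm the three workers report Ready
```

This is a command sketch requiring an Azure account, not a runnable script.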

EKS CLI (VIA aws OR eksctl)

On AWS, we find a different approach: There are two different official CLI tools for managing EKS clusters. The general-purpose aws CLI can manage AWS resources, including clusters. Getting credentials into a local kubeconfig can be done via: aws eks update-kubeconfig --name cluster-test.

However, engineers can also use eksctl, developed by Weaveworks and written in Go, to easily create and manage an EKS cluster. A major boon for cloud engineers is that eksctl can be combined with YAML configuration files to create infrastructure as code (IaC), since it works with CloudFormation under the hood. It's definitely an asset to consider when integrating an EKS cluster into a larger infrastructure on AWS.

Creating a cluster via eksctl is as easy as eksctl create cluster, no other parameters required.
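For the IaC workflow mentioned above, eksctl also accepts a YAML ClusterConfig in place of flags. A minimal sketch (the name, region, and sizing below are illustrative, mirroring the test setup used earlier in this article):

```yaml
# cluster.yaml — apply with: eksctl create cluster -f cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: cluster-test
  region: eu-central-1        # Frankfurt
nodeGroups:
  - name: workers
    instanceType: t3.large
    desiredCapacity: 3
```

Because the file fully describes the cluster, it can be versioned alongside the rest of your infrastructure code.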

GKE CLI (VIA gcloud)

For GKE, the steps are very similar: Install gcloud, then authenticate via gcloud init. The possibilities from there: Engineers can create, delete, describe, get credentials for, resize, update, or upgrade a cluster, or list clusters.

The syntax to create a cluster with gcloud is straightforward: gcloud container clusters create myGCloudCluster --num-nodes=1
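And since GKE's high availability comes from regional clusters, spawning a cluster with a replicated control plane is only a flag away. A sketch with placeholder names:

```shell
# Regional cluster: control plane and nodes are replicated across the
# region's zones. Note that --num-nodes is per zone, so three zones
# with --num-nodes=3 yields nine workers in total.
gcloud container clusters create myGCloudCluster \
    --region europe-west3 --num-nodes=3
gcloud container clusters get-credentials myGCloudCluster --region europe-west3
```

This is a command sketch requiring a Google Cloud account, not a runnable script.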

AKS vs. EKS vs. GKE: Test Drive Results

In practice, we can see that GKE is certainly the fastest to spin up a basic cluster, in terms of both console simplicity and cluster spawn time. UX-wise, the connect button next to each cluster also makes it the most straightforward to connect to.

In terms of CLI tooling, the three cloud providers have implemented similar functionality; however, the extra tool Weaveworks provides for EKS deserves emphasis. eksctl is the perfect tool for implementing infrastructure as code on top of preexisting AWS infrastructure, combining other services with EKS.

Managed Kubernetes Offerings Forge Ahead: AWS vs. GCP vs. Azure

For those just starting in the world of Kubernetes, the go-to implementation for me is GKE, since it’s the most straightforward. It’s easy to set up, it has a simple and fast UX for spawning, and it’s well-integrated into the Google Cloud Platform ecosystem.

Even though AWS was the last to join the race, it has a few undeniable advantages, such as bare metal nodes and the simple fact that it’s integrated with the provider with the largest mind-share.

Finally, AKS has made great progress since its creation. Reaching tooling and feature parity likely won't take long, leaving room to innovate in the process. And as with any managed Kubernetes offering, for those already on the parent platform, integration will be a selling point.

Once a team has chosen a Kubernetes cloud provider, it could be interesting to look at other teams’ experiences, particularly failures. These post-mortems are a reflection of real-world cases—always a good starting point for developing one’s own cutting-edge best practices. I look forward to your comments below!

Original article source at: https://www.toptal.com/

Christa  Stehr

Christa Stehr

1602964260

50+ Useful Kubernetes Tools for 2020 - Part 2

Introduction

Last year, we provided a list of Kubernetes tools that proved so popular we have decided to curate another list of some useful additions for working with the platform—among which are many tools that we personally use here at Caylent. Check out the original tools list here in case you missed it.

According to a recent survey done by Stackrox, the dominance Kubernetes enjoys in the market continues to be reinforced, with 86% of respondents using it for container orchestration.

(State of Kubernetes and Container Security, 2020)

And as you can see below, more and more companies are jumping into containerization for their apps. If you’re among them, here are some tools to aid you going forward as Kubernetes continues its rapid growth.

(State of Kubernetes and Container Security, 2020)

#blog #tools #amazon elastic kubernetes service #application security #aws kms #botkube #caylent #cli #container monitoring #container orchestration tools #container security #containers #continuous delivery #continuous deployment #continuous integration #contour #developers #development #developments #draft #eksctl #firewall #gcp #github #harbor #helm #helm charts #helm-2to3 #helm-aws-secret-plugin #helm-docs #helm-operator-get-started #helm-secrets #iam #json #k-rail #k3s #k3sup #k8s #keel.sh #keycloak #kiali #kiam #klum #knative #krew #ksniff #kube #kube-prod-runtime #kube-ps1 #kube-scan #kube-state-metrics #kube2iam #kubeapps #kubebuilder #kubeconfig #kubectl #kubectl-aws-secrets #kubefwd #kubernetes #kubernetes command line tool #kubernetes configuration #kubernetes deployment #kubernetes in development #kubernetes in production #kubernetes ingress #kubernetes interfaces #kubernetes monitoring #kubernetes networking #kubernetes observability #kubernetes plugins #kubernetes secrets #kubernetes security #kubernetes security best practices #kubernetes security vendors #kubernetes service discovery #kubernetic #kubesec #kubeterminal #kubeval #kudo #kuma #microsoft azure key vault #mozilla sops #octant #octarine #open source #palo alto kubernetes security #permission-manager #pgp #rafay #rakess #rancher #rook #secrets operations #serverless function #service mesh #shell-operator #snyk #snyk container #sonobuoy #strongdm #tcpdump #tenkai #testing #tigera #tilt #vert.x #wireshark #yaml

Rylan  Becker

Rylan Becker

1620513960

AWS v/s Google v/s Azure: Who will win the Cloud War?

In the midst of this pandemic, what is allowing us unprecedented flexibility in making faster technological advancements is the availability of various competent cloud computing systems. From delivering on-demand computing services for applications, processing and storage, now is the time to make the best use of public cloud providers. What’s more, with easy scalability there are no geographical restrictions either.

Machine Learning systems can be indefinitely supported by them as they are open-sourced and within reach now more than ever with increased affordability for businesses. In fact, public cloud providers are increasingly helpful in building Machine Learning models. So, the question that arises for us is – what are the possibilities for using them for deployment as well?

What do we mean by deployment?

Model building is very much like the process of designing any product. From ideation and data preparation to prototyping and testing. Deployment basically is the actionable point of the whole process, which means that we use the already trained model and make its predictions available to users or other systems in an automated, reproducible and auditable manner.

#cyber security #aws vs azure #google vs aws #google vs azure #google vs azure vs aws

AWS Fargate for Amazon Elastic Kubernetes Service | Caylent

On-demand cloud computing brings new ways to ensure scalability and efficiency. Rather than pre-allocating and managing certain server resources or having to go through the usual process of setting up a cloud cluster, apps and microservices can now rely on on-demand serverless computing blocks designed to be efficient and highly optimized.

Amazon Elastic Kubernetes Service (EKS) already makes running Kubernetes on AWS very easy. Support for AWS Fargate, which introduces the on-demand serverless computing element to the environment, makes deploying Kubernetes pods even easier and more efficient. AWS Fargate offers a wide range of features that make managing clusters and pods intuitive.

Utilizing Fargate
As with many other AWS services, using Fargate to manage Kubernetes clusters is very easy to do. To integrate Fargate and run a cluster on top of it, you only need to add the command –fargate to the end of your eksctl command.

EKS automatically configures the cluster to run on Fargate. It creates a pod execution role so that pod creation and management can be automated in an on-demand environment. It also patches coredns so the cluster can run smoothly on Fargate.

A Fargate profile is automatically created by the command. You can choose to customize the profile later or configure namespaces yourself, but the default profile is suitable for a wide range of applications already, requiring no human input other than a namespace for the cluster.

There are some prerequisites to keep in mind though. For starters, Fargate requires eksctl version 0.20.0 or later. Fargate also comes with some limitations, starting with support for only a handful of regions. For example, Fargate doesn’t support stateful apps, DaemonSets or privileged containers at the moment. Check out this link for Fargate limitations for your consideration.

Support for conventional load balancing is also limited, which is why ALB Ingress Controller is recommended. At the time of this writing, Classic Load Balancers and Network Load Balancers are not supported yet.

However, you can still be very meticulous in how you manage your clusters, including using different clusters to separate trusted and untrusted workloads.

Everything else is straightforward. Once the cluster is created, you can begin specifying pod execution roles for Fargate. You have the ability to use IAM console to create a role and assign it to a Fargate cluster. Or you can also create IAM roles and Fargate profiles via Terraform.

#aws #blog #amazon eks #aws fargate #aws management console #aws services #kubernetes #kubernetes clusters #kubernetes deployment #kubernetes pods

Divya Raj

1624523136

GCP Vs AWS in 2021 - A Cloud Computing Face Off

The world of data analytics and technology have been dramatically altered by cloud computing. The two companies which are known for providing tremendous cloud computing technologies are- Google Cloud Platform and Amazon Web Services.
This artcile highlights the comparison between these big companies.

https://blog.digitalogy.co/gcp-vs-aws-in-2021/

#aws #aws and gcp #aws google #aws or google cloud #aws vs gcp services #cloud application vendors

Nigel  Uys

Nigel Uys

1671248298

AWS vs. GCP vs. Azure - K8s/Kubernetes

Kubernetes (“K8s”) won the battle of container orchestration tools. Now AWS, Azure, and Google Cloud each offer a managed Kubernetes version. How do they compare?

Kubernetes (often stylized “K8s”) won the battle of container orchestration tools years ago. Nevertheless, there are still many ways to implement Kubernetes today and make it work with various infrastructures, and many tools—some better maintained than others. Perhaps the most interesting development on that front, though, is that the top cloud providers have decided to release their own managed Kubernetes versions:

  • Microsoft Azure offers the Azure Kubernetes Service (AKS)
  • AWS offers the Amazon Elastic Kubernetes Service (EKS)
  • Google Cloud offers the Google Kubernetes Engine (GKE)

From a DevOps perspective, what do these platforms offer? Do they live up to their promises? How do their creation time and other benchmarks compare? How well do they integrate with their respective platforms, especially their CLI tools? What’s it like maintaining and working with them? Below, we’ll delve into these questions, and more.

Note: For readers who would like the concepts of a Kubernetes cluster explained before they read on, Dmitriy Kononov offers an excellent introduction.

AKS vs. EKS vs. GKE: Advertised Features

We’ve decided to group the different features available for each managed Kubernetes version into silos:

  • Global Overview
  • Networking
  • Scalability and Performance
  • Security and Monitoring
  • Ecosystem
  • Pricing

Note: These details may change over time as cloud providers regularly update their products.

Global Overview

ServiceAspectAKSEKSGKE
Year Released201720182014
Latest Version1.15.11 (default) - 1.18.2 (preview)1.16.8 (default)1.14.10 (default) - 1.16.9
Specific Componentsoms-agent, tunnelfrontaws-nodefluentd, fluentd-gcp-scaler, event-exporter, l7-default-backend
Kubernetes Control Plane UpgradeManualManualAutomated (default) or manual
Worker UpgradesManualYes (easy with managed node groups)Yes: automated and manual, fine-tuning possible
SLA99.95 percent with availability zone, 99.9 percent without99.9 percent for EKS (master), 99.99 percent for EC2 (nodes)99.95 percent within a region, 99.5 percent within a zone
Native Knative SupportNoNoNo (but native Istio install)
Kubernetes Control Plane PriceFree$0.10/hour$0.10/hour

Kubernetes itself was Google’s project, so it makes sense that they were the first to propose a hosted version in 2014.

Of the three being compared here, Azure was next with AKS and has had some time to improve: If you remember acs-engine, which had been used to provision Kubernetes on Azure a few years ago, you will appreciate Microsoft’s effort on its replacement, aks-engine.

AWS was the last one to roll out its own version, EKS, so it sometimes can appear to be behind on the feature front, but they are catching up.

In terms of pricing, of course, things are always moving, and Google decided to join AWS in its price point of $0.10/hour, effective June 2020. Azure is the outsider here by giving out for free the AKS service, but it’s unclear how long that may last.

Another main difference lies in the upgrade feature of the cluster. The most automated upgrades are in GKE, and they are turned on by default. However, AKS vs. EKS are similar to each other here, in the sense that both require manual requests to be able to upgrade the master or worker nodes.

Networking

ServiceAspectAKSEKSGKE
Network PoliciesYes: Azure Network Policies or CalicoNeed to install CalicoYes: Native via Calico
Load BalancingBasic or standard SKU load balancerClassic and network load balancerContainer-native load balancer
Service MeshNone out of the boxAWS App Mesh (based on Envoy)Istio (out of the box, but beta)
DNS SupportCoreDNS customizationCoreDNS + Route53 inside VPCCoreDNS + Google Cloud DNS

On the network side of things, the three cloud providers are very close to each other. They all let customers implement network policies with Calico, for example. Concerning load balancing, they all implement their integration with their own load balancer resources and give engineers the choice of what to use.

The main difference found here is based on the added value of the service mesh. AKS does not support any service mesh out of the box (although engineers can manually install Istio). AWS has developed its own service mesh called App Mesh. Finally, Google has released its own integration with Istio (though still in beta) that customers can add directly when creating the cluster.

Best bet: GKE

Scalability and Performance

ServiceAspectAKSEKSGKE
Bare Metal NodesNoYesNo
Max Nodes per Cluster1,0001,0005,000
High Availability ClusterNoYes for control plan, manual across AZ for workersYes via regional cluster, master and worker are replicated
Auto ScalingYes via cluster autoscalerYes via cluster autoscalerYes via cluster autoscaler
Vertical Pod AutoscalerNoYesYes
Node PoolsYesYesYes
GPU NodesYesYesYes
On-premAvailable via Azure ARC (beta)NoGKE on-prem via Anthos GKE

Concerning GKE vs. AKS vs. EKS performance and scalability, GKE seems to be ahead. Indeed, it supports the biggest number of nodes (5,000) and offers extensive documentation on how to properly scale a cluster. All the features for high availability are available and are easy to fine-tune. What is more, GKE recently released Anthos, a project to create an ecosystem around GKE and its functionalities; with Anthos, you can deploy GKE on-prem.

AWS does have a key advantage, though: It is the only one to allow bare-metal nodes to run your Kubernetes cluster.

As of June 2020, AKS lacks high availability for the master, which is an important aspect to consider. But, as always, that could soon change.

Best bet: GKE

Security and Monitoring

ServiceAspectAKSEKSGKE
App Secrets EncryptionNoYes, possible via AWS KMSYes, possible via Cloud KMS
ComplianceHIPAA, SOC, ISO, PCI DSSHIPAA, SOC, ISO, PCI DSSHIPAA, SOC, ISO, PCI DSS
RBACYesYes, and strong integration with IAMYes
MonitoringAzure Monitor container health featureKubernetes control plane monitoring connected to Cloudwatch, Container Insights Metrics for nodesKubernetes Engine Monitoring and integration with Prometheus

In terms of compliance, all three cloud providers are equivalent. However, in terms of security, EKS and GKE provide another layer of security with their embedded key management services.

As for monitoring, Azure and Google Cloud provide their own monitoring ecosystem around Kubernetes. It’s worth noting that the one from Google has been recently updated to use Kubernetes Engine Monitoring, which is specifically designed for Kubernetes.

Azure provides its own container monitoring system, which was originally made for a basic, non-Kubernetes container ecosystem. They’ve added monitoring for some Kubernetes-specific metrics and resources (cluster health, deployments)—in preview mode, as of June 2020.

AWS offers lightweight monitoring for the control plane directly in Cloudwatch. To monitor the workers, you can use Kubernetes Container Insights Metrics provided via a specific CloudWatch agent you can install in the cluster.

Best bet: GKE

Ecosystem

ServiceAspectAKSEKSGKE
MarketplaceAzure Marketplace (but no clear AKS integration)AWS Marketplace (250+ apps)Google Marketplace (90+ apps)
Infrastructure-as-Code (IaC) SupportTerraform module
Ansible module
Terraform module
Ansible module
Terraform module
Ansible module
DocumentationWeak but complete and strong community (2,000+ Stack Overflow posts)Not very thorough but strong community (1,500+ Stack Overflow posts)Extensive official documentation and very strong community (4,000+ Stack Overflow posts)
CLI SupportCompleteComplete, plus special separate tool eksctl (covered below)Complete

In terms of ecosystems, the three providers have different strengths and assets. AKS now has very complete documentation around its platform and is the second in terms of posts on Stack Overflow. EKS has the least number of posts on Stack Overflow, but benefits from the strength of the AWS Marketplace. GKE, as the oldest platform, has the most posts on Stack Overflow, and a decent number of apps on its marketplace, but also the most comprehensive documentation.

Best bets: GKE and EKS

Pricing

| Service Aspect | AKS | EKS | GKE |
| --- | --- | --- | --- |
| Free Usage Cap | $170 worth | Not eligible for free tier | $300 worth |
| Kubernetes Control Plane Cost | Free | $0.10/hour | $0.10/hour (June 2020) |
| Reduced Price (Spot Instances/Preemptible Nodes) | Yes | Yes | Yes |
| Example Price for One Month | $342 (3 D2 nodes) | $300 (3 t3.large nodes) | $190 (3 n1-standard-2 nodes) |

Concerning price overall, even with GKE's move to charge $0.10/hour for any cluster's control plane, it remains by far the cheapest of the three. This is thanks to something specific to Google: sustained use discounts, which are applied whenever the monthly usage of on-demand resources meets a certain minimum.

It is important to note that the example price row doesn’t take into account the traffic to the Kubernetes cluster that the cloud provider can charge for.

The reason AWS doesn't allow the use of its free tier to test an EKS cluster is that EKS requires bigger machines than the tX.micro tier, and EKS hourly pricing is not covered by the free tier.

Nevertheless, it can still be economical to test any of these managed Kubernetes options with a decent load using the spot/preemptible nodes of each cloud provider—that tactic will easily save 80 to 90 percent on the final price. (Of course, it is not recommended to run stateful production loads on such machines!)
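As an illustration of that tactic, here is a minimal sketch of adding a preemptible node pool to an existing GKE cluster with gcloud; the cluster name, pool name, and zone are hypothetical placeholders:

```shell
# Add a preemptible node pool to an existing GKE cluster.
# "my-cluster" and "preemptible-pool" are hypothetical names.
gcloud container node-pools create preemptible-pool \
  --cluster=my-cluster \
  --zone=europe-west3-a \
  --preemptible \
  --num-nodes=3 \
  --machine-type=n1-standard-2
```

The other providers offer similar switches: AKS node pools can be created with `--priority Spot` via `az aks nodepool add`, and eksctl node groups can be declared as spot instances in their configuration.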

Advertised Features and Google’s Advantage

When looking at the different advertised features online, it seems there is a correlation between how long the managed Kubernetes version has been on the market and the number of features. As mentioned, Google having been the initiator of the Kubernetes project seems to be an undeniable advantage, resulting in better and stronger integration with its own cloud platform.

But AKS and EKS are not to be underestimated as they mature; both can take advantage of their unique features. For example, AWS is the only one to have bare-metal node integration, and also boasts the highest number of applications in its marketplace.

Now that the advertised features for each Kubernetes offering are clear, let’s do a deeper dive with some hands-on tests.

Kubernetes: AWS vs. GCP vs. Azure in Practice

Advertising is one thing, but how do the different platforms compare when it comes to serving production loads? As a cloud engineer, I know the importance of how long it takes to spawn and to take down a cluster when enforcing infrastructure-as-code. But I also wanted to explore the possibilities of each CLI and comment on how easy (or not) each cloud provider makes it to spawn a cluster.

Cluster Creation User Experience

AKS

On AKS, spawning a cluster is similar to creating an instance in AWS: just find the AKS menu and go through a succession of different menus. Once the config is validated, the cluster can be created; it's a two-step process. It's very straightforward, and engineers can easily and quickly launch a cluster with the default settings.

EKS

Cluster creation is definitely more complex on EKS than on AKS. First of all, by default, AWS requires a trip to IAM to create a new role for the Kubernetes control plane and assign the engineer to it. It is also important to note that this cluster creation does not include the creation of the nodes, so the 11 minutes I measured on average covers only master creation. Creating the node group is another step for the administrator, again requiring a role for the workers, with three necessary policies to be attached via the IAM control panel.
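For reference, that IAM prerequisite can also be scripted with the aws CLI. This is a sketch under the assumption of a hypothetical role name (`eksClusterRole`); the trust policy and managed policy ARN are the standard ones for the EKS control plane:

```shell
# Trust policy letting the EKS service assume the role.
cat > eks-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "eks.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

# Create the control-plane role ("eksClusterRole" is a hypothetical name).
aws iam create-role \
  --role-name eksClusterRole \
  --assume-role-policy-document file://eks-trust-policy.json

# Attach the managed policy required by the EKS control plane.
aws iam attach-role-policy \
  --role-name eksClusterRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
```

The worker node group needs a second role of its own, with the three managed policies AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, and AmazonEC2ContainerRegistryReadOnly attached the same way.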

GKE

For me, the experience of creating a cluster manually is most pleasant on GKE. After finding the Kubernetes Engine in the Google Cloud Console, click to create a cluster. Different categories of settings appear in a menu on the left. Google will prepopulate the new cluster with an easily modifiable default node pool. Last but not least, GKE has the fastest cluster-spawning time, which brings us to the next table.

Time to Spawn a Cluster

| Service Aspect | AKS | EKS | GKE |
| --- | --- | --- | --- |
| Size | 3 nodes (Ds2-v2), each having 2 vCPUs, 7 GB of RAM | 3 t3.large nodes | 3 n1-standard-2 nodes |
| Time (m:ss) | Average 5:45 for a full cluster | 11:06 for the master plus 2:40 for the node group (totalling 13:46 for a full cluster) | Average 2:42 for a full cluster |

I performed these tests in the same region (Frankfurt and West Europe for AKS) to remove this difference’s possible impact on spawning time. I also tried to select the same size for nodes for the cluster: Three nodes, each having two vCPUs and seven or eight GB of memory, a standard size to run a small load on Kubernetes and start experimenting. I created each cluster three times to compute an average.

In these tests, GKE remained way ahead with a spawning time always under three minutes.

Kubernetes: AWS vs. GCP vs. Azure CLI Overview

Not all CLIs are created equal, but in this case, all three CLIs are actually modules of a larger CLI. What’s it like to get up and running with each cloud provider’s CLI toolchain?

AKS CLI (VIA az)

After installing az tooling, then the AKS module (via az aks install-cli), engineers need to authorize the CLI to communicate with the project’s Azure account. This is a matter of getting the credentials to update the local kubeconfig file via a simple az aks get-credentials --resource-group myResourceGroup --name myAKSCluster.

Similarly, to create a cluster: az aks create --resource-group myResourceGroup --name myAKSCluster
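Putting those pieces together, the full AKS flow from the CLI looks roughly like this; the resource group and cluster names are the hypothetical ones used above, and the node size mirrors the benchmark cluster:

```shell
az login                                   # authenticate the CLI
az group create --name myResourceGroup --location westeurope

# Create the cluster (sizes match the benchmark: 3 x DS2 v2 nodes).
az aks create --resource-group myResourceGroup --name myAKSCluster \
  --node-count 3 --node-vm-size Standard_DS2_v2

# Merge the cluster's credentials into the local kubeconfig.
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

kubectl get nodes                          # verify access to the new cluster
```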

EKS CLI (VIA aws OR eksctl)

On AWS, we find a different approach—there are two different official CLI tools to manage EKS clusters. As always, aws can connect to AWS resources, particularly clusters. Getting credentials into a local kubeconfig can be done via: aws eks update-kubeconfig --name cluster-test.

However, engineers can also use eksctl, developed by Weaveworks and written in Go, to easily create and manage an EKS cluster. A major boon EKS provides for cloud engineers is that they can combine it with YAML configuration files to create infrastructure-as-code (IaC), since eksctl works with CloudFormation under the hood. It's definitely an asset to consider when integrating an EKS cluster into larger infrastructure on AWS.

Creating a cluster via eksctl is as easy as eksctl create cluster, no other parameters required.
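The IaC approach mentioned above can be sketched as follows; the cluster name, region, and node group here are hypothetical, with node sizes matching the benchmark cluster:

```shell
# Declare the cluster in a YAML file, then hand it to eksctl,
# which translates it into CloudFormation stacks.
cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: cluster-test
  region: eu-central-1
nodeGroups:
  - name: workers
    instanceType: t3.large
    desiredCapacity: 3
EOF

eksctl create cluster -f cluster.yaml
```

Because the file fully describes the cluster, it can be versioned alongside the rest of the infrastructure code.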

GKE CLI (VIA gcloud)

For GKE, the steps are very similar: Install gcloud, then authenticate via gcloud init. The possibilities from there: Engineers can create, delete, describe, get credentials for, resize, update, or upgrade a cluster, or list clusters.

The syntax to create a cluster with gcloud is straightforward: gcloud container clusters create myGCloudCluster --num-nodes=1
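The end-to-end gcloud flow is just as compact; the cluster name matches the example above, the zone and node size are hypothetical choices mirroring the benchmark cluster:

```shell
gcloud init                                # authenticate and pick a project

# Create the cluster (3 x n1-standard-2, as in the benchmark).
gcloud container clusters create myGCloudCluster \
  --zone=europe-west3-a --num-nodes=3 --machine-type=n1-standard-2

# Merge the cluster's credentials into the local kubeconfig.
gcloud container clusters get-credentials myGCloudCluster \
  --zone=europe-west3-a

kubectl get nodes                          # verify access to the new cluster
```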

AKS vs. EKS vs. GKE: Test Drive Results

In practice, we can see that GKE is certainly the fastest to spin up a basic cluster, in terms of both console simplicity and cluster spawn time. UX-wise, the Connect button next to each cluster also makes it the most straightforward to connect to.

In terms of CLI tooling, the three cloud providers have implemented similar functionality; however, it's worth highlighting the extra tool Weaveworks provides for EKS. eksctl is the perfect tool for implementing infrastructure-as-code on top of preexisting AWS infrastructure, combining other services with EKS.

Managed Kubernetes Offerings Forge Ahead: AWS vs. GCP vs. Azure

For those just starting in the world of Kubernetes, the go-to implementation for me is GKE, since it’s the most straightforward. It’s easy to set up, it has a simple and fast UX for spawning, and it’s well-integrated into the Google Cloud Platform ecosystem.

Even though AWS was the last to join the race, it has a few undeniable advantages, such as bare metal nodes and the simple fact that it’s integrated with the provider with the largest mind-share.

Finally, AKS has made great progress since its creation. Tooling and feature parity likely won't take long to achieve, leaving room to innovate along the way. And as with any managed Kubernetes offering, for those already on the parent platform, integration will be a selling point.

Once a team has chosen a Kubernetes cloud provider, it could be interesting to look at other teams’ experiences, particularly failures. These post-mortems are a reflection of real-world cases—always a good starting point for developing one’s own cutting-edge best practices. I look forward to your comments below!

Original article source at: https://www.toptal.com/

#aws #gcp #azure