1603945080
High availability (HA), performance, and developer efficiency have become table stakes for new developments and modernizations of legacy software applications. Users — wherever they may be — do not tolerate outages and expect low latency and high throughput; developers need to bring their applications to market fast and update them often. Those two trends work against each other: setting up an application for HA and performance and ensuring it stays so takes time. Luckily, Kubernetes and the cloud native ecosystem give developers the building blocks to deploy highly available container-based applications. (Note: in this blog post, “developers” includes operators.)
To further improve developer efficiency, tools like AWS Fargate on EKS remove node management from Kubernetes, leaving only the application management API: users can submit standard Kubernetes Deployments, Services, and Ingresses, and let AWS spin up right-sized micro-VMs to run the pods and Application Load Balancers (ALBs) to serve traffic. To complete the networking stack, you’d let external-dns configure Route 53, and maybe one day ACM will auto-provision certificates (for now, Ingress annotations must refer to existing ACM certificates).
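To make this concrete, here’s a rough sketch of the kind of Ingress you’d submit; the hostname, certificate ARN, and backend Service are placeholders, and the exact annotations depend on the version of the ALB Ingress Controller you run:

```shell
# Hypothetical example: expose an existing Service "web" through an ALB,
# let external-dns create the Route 53 record, and reference a pre-existing
# ACM certificate (the controller does not provision one for you).
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # IP targets are required for Fargate pods
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE
    external-dns.alpha.kubernetes.io/hostname: web.example.com
spec:
  rules:
    - host: web.example.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: web
              servicePort: 80
EOF
```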
In this blog post, we’ll discuss making Kubernetes serverless and global with AWS Fargate on EKS and Admiralty, using multiple Fargate-enabled Kubernetes clusters in multiple regions. In particular, we’ll look at scheduling and ingress in this context; alternatives will be considered. For a hands-on experience, check out the companion tutorial in the Admiralty documentation.
The tools discussed above assume that the interface with the developer — or more likely their continuous deployment (CD) platform of choice — is the Kubernetes API of a single cluster. However, most organizations run multiple clusters, mainly to make runtime isolation less of a headache, but also for HA, because clusters do fail. Furthermore, going back to low latency “wherever users may be”, organizations often run clusters in multiple regions to be closer to their users. This creates a new set of problems. In this blog post, we’ll focus on deploying applications to multiple clusters and routing ingress traffic to multiple clusters, possibly in multiple regions. If you want to enable cross-cluster traffic, you’ll also need a multicluster service mesh (or ad-hoc mTLS and service discovery) — but we’ll keep this topic for a future blog post. We also assume that storage is externalized to a global cloud database.
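To give a flavor of what the scheduling side looks like with Admiralty, here’s a hedged sketch of a plain Deployment whose pod template carries Admiralty’s election annotation so replicas can be placed across clusters; the names and image are placeholders, and the companion tutorial has the exact, up-to-date syntax:

```shell
# Hypothetical sketch: hand the pods of a standard Deployment to Admiralty's
# multicluster scheduler via the election annotation on the pod template.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 10
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
      annotations:
        multicluster.admiralty.io/elect: ""   # let Admiralty place the pods across clusters
    spec:
      containers:
        - name: web
          image: nginx   # placeholder image
          ports:
            - containerPort: 80
EOF
```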
#api management #kubernetes #serverless #contributed
1602964260
Last year, we provided a list of Kubernetes tools that proved so popular that we’ve decided to curate another list of useful additions for working with the platform, many of which we personally use here at Caylent. Check out the original tools list here in case you missed it.
According to a recent survey by StackRox, Kubernetes’ dominance of the market continues to be reinforced, with 86% of respondents using it for container orchestration.
(State of Kubernetes and Container Security, 2020)
And as you can see below, more and more companies are jumping into containerization for their apps. If you’re among them, here are some tools to aid you going forward as Kubernetes continues its rapid growth.
(State of Kubernetes and Container Security, 2020)
#blog #tools #amazon elastic kubernetes service #application security #aws kms #botkube #caylent #cli #container monitoring #container orchestration tools #container security #containers #continuous delivery #continuous deployment #continuous integration #contour #developers #development #developments #draft #eksctl #firewall #gcp #github #harbor #helm #helm charts #helm-2to3 #helm-aws-secret-plugin #helm-docs #helm-operator-get-started #helm-secrets #iam #json #k-rail #k3s #k3sup #k8s #keel.sh #keycloak #kiali #kiam #klum #knative #krew #ksniff #kube #kube-prod-runtime #kube-ps1 #kube-scan #kube-state-metrics #kube2iam #kubeapps #kubebuilder #kubeconfig #kubectl #kubectl-aws-secrets #kubefwd #kubernetes #kubernetes command line tool #kubernetes configuration #kubernetes deployment #kubernetes in development #kubernetes in production #kubernetes ingress #kubernetes interfaces #kubernetes monitoring #kubernetes networking #kubernetes observability #kubernetes plugins #kubernetes secrets #kubernetes security #kubernetes security best practices #kubernetes security vendors #kubernetes service discovery #kubernetic #kubesec #kubeterminal #kubeval #kudo #kuma #microsoft azure key vault #mozilla sops #octant #octarine #open source #palo alto kubernetes security #permission-manager #pgp #rafay #rakess #rancher #rook #secrets operations #serverless function #service mesh #shell-operator #snyk #snyk container #sonobuoy #strongdm #tcpdump #tenkai #testing #tigera #tilt #vert.x #wireshark #yaml
1594162113
On-demand cloud computing brings new ways to ensure scalability and efficiency. Rather than pre-allocating and managing certain server resources or having to go through the usual process of setting up a cloud cluster, apps and microservices can now rely on on-demand serverless computing blocks designed to be efficient and highly optimized.
Amazon Elastic Kubernetes Service (EKS) already makes running Kubernetes on AWS very easy. Support for AWS Fargate, which introduces the on-demand serverless computing element to the environment, makes deploying Kubernetes pods even easier and more efficient. AWS Fargate offers a wide range of features that make managing clusters and pods intuitive.
Utilizing Fargate
As with many other AWS services, using Fargate to manage Kubernetes clusters is very easy to do. To integrate Fargate and run a cluster on top of it, you only need to add the --fargate flag to your eksctl create cluster command.
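For example, assuming a hypothetical cluster name and region:

```shell
# Create an EKS cluster whose default and kube-system pods run on Fargate.
eksctl create cluster \
  --name demo-cluster \
  --region us-east-1 \
  --fargate
```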
EKS automatically configures the cluster to run on Fargate. It creates a pod execution role so that pod creation and management can be automated in an on-demand environment. It also patches CoreDNS so the cluster can run smoothly on Fargate.
A Fargate profile is automatically created by the command. You can customize the profile later or configure namespaces yourself, but the default profile (which selects the default and kube-system namespaces) already suits a wide range of applications, requiring no further input.
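If you do want to target your own namespaces, a sketch of a custom profile looks like this (the cluster, profile, and namespace names are placeholders):

```shell
# Add a custom Fargate profile so that pods created in the "apps" namespace
# are scheduled onto Fargate.
eksctl create fargateprofile \
  --cluster demo-cluster \
  --name apps-profile \
  --namespace apps
```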
There are some prerequisites to keep in mind, though. For starters, Fargate requires eksctl version 0.20.0 or later. Fargate also comes with some limitations, starting with support for only a handful of regions. Beyond that, Fargate doesn’t support stateful apps, DaemonSets, or privileged containers at the moment. Check out this link for the full list of Fargate limitations.
Support for conventional load balancing is also limited, which is why the ALB Ingress Controller is recommended. At the time of this writing, Classic Load Balancers and Network Load Balancers are not yet supported.
However, you can still be very meticulous in how you manage your clusters, including using different clusters to separate trusted and untrusted workloads.
Everything else is straightforward. Once the cluster is created, you can begin specifying pod execution roles for Fargate. You can use the IAM console to create a role and assign it to a Fargate cluster, or you can create IAM roles and Fargate profiles via Terraform.
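If you’d rather script it, here’s a rough AWS CLI equivalent; the role name is a placeholder, while the service principal and managed policy are the standard ones for EKS on Fargate:

```shell
# Trust policy that lets the EKS Fargate infrastructure assume the role.
cat > pod-execution-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks-fargate-pods.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the pod execution role and attach the AWS-managed policy that lets
# Fargate pull container images and ship logs on behalf of your pods.
aws iam create-role \
  --role-name DemoFargatePodExecutionRole \
  --assume-role-policy-document file://pod-execution-trust-policy.json

aws iam attach-role-policy \
  --role-name DemoFargatePodExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy
```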
#aws #blog #amazon eks #aws fargate #aws management console #aws services #kubernetes #kubernetes clusters #kubernetes deployment #kubernetes pods
1622214600
We all love containers for their scalability. But they can easily become an operational burden if you end up managing a large cluster.
This is where container orchestration comes in. When operating at scale, you need a platform that automates all the tasks related to the management, deployment and scaling of container clusters.
There’s a reason why almost 90% of containers are orchestrated today.
If you’re using Kubernetes on AWS, there are several options you can choose from:
Read on to find out which one is a better match for your workloads.
And if you know what’s what in the world of AWS Kubernetes, you could still probably use a few best practices to reduce your cloud bill.
#aws #aws-eks #fargate #kubernetes #aws-ec2
1616242800
The focus of most cloud services and infrastructure is not just making cloud resources available but also making sure that your applications can run smoothly and efficiently. The latter is very important because cost-efficiency has always been a challenge for developers and administrators alike. Everything from provisioning more resources than required to not destroying provisioned nodes when they are no longer in use can result in your cloud expenses ballooning without you even realizing it.
Cloud service providers are aware of this demand for better cost-efficiency, which is why they have been introducing features like elasticity and serverless services these past few years. In this article, however, we are going to focus on a specific service, AWS Fargate, and how it can be used to create a serverless Kubernetes infrastructure that supports your application. Let’s take a closer look, shall we?
AWS Fargate for EKS was first announced in 2019 and has since become the go-to service for developers and organizations who want to save money one pod at a time. As the name suggests, the service is built on EKS (there is also AWS Fargate for ECS), with Kubernetes as its foundation.
What Fargate does is abstract the underlying cluster away from pod operations. You don’t have to establish your own control plane, and you don’t even need to manage a data plane. You can go straight to creating a cluster and provisioning pods to run microservices or entire applications.
Since there is no need to allocate resources for the underlying cluster, AWS Fargate for EKS is highly cost-efficient: you pay only for the vCPU and memory your pods request, for as long as they run. The fact that it is a managed service further lowers your overhead, so a small team of developers can manage a complex cloud infrastructure with ease.
Fargate offers added flexibility too. For starters, you can run pods as they are, without managing any worker nodes of your own, or use Fargate pods in a mixed or hybrid way: you can still create an EKS cluster with regular node groups and then run additional pods on Fargate. For on-demand workloads, or when pods require more processing power than your nodes have available, this hybrid mode is extremely useful.
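A minimal sketch of that hybrid setup with eksctl might look like the following; the cluster, node group, and namespace names are placeholders:

```shell
# An EKS cluster with a small managed node group for steady-state workloads...
eksctl create cluster \
  --name hybrid-cluster \
  --region us-east-1 \
  --nodegroup-name steady \
  --nodes 2

# ...plus a Fargate profile so anything deployed to the "burst" namespace
# runs on Fargate instead of the node group.
eksctl create fargateprofile \
  --cluster hybrid-cluster \
  --name burst-profile \
  --namespace burst
```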
#aws #blog #aws fargate #serverless
1617756780
It’s possible to attach an IAM role to a Kubernetes pod without using third-party software such as kube2iam or kiam. This is thanks to the integration between AWS IAM and Kubernetes ServiceAccounts known as IAM Roles for Service Accounts (IRSA).
There are quite a few benefits to using IRSA with Kubernetes pods, such as least-privilege access scoped to a single pod’s ServiceAccount and no need to store long-lived AWS credentials in the cluster.
There are a few prerequisites that you’ll need to meet in order to use an IAM role in a pod.
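To give a feel for the flow once those prerequisites are met, here’s a hedged sketch using eksctl; the cluster name, namespace, ServiceAccount name, and attached policy are placeholders:

```shell
# 1. Associate an IAM OIDC provider with the cluster (required for IRSA).
eksctl utils associate-iam-oidc-provider \
  --cluster demo-cluster \
  --approve

# 2. Create an IAM role plus a Kubernetes ServiceAccount bound to it; eksctl
#    annotates the ServiceAccount with eks.amazonaws.com/role-arn.
eksctl create iamserviceaccount \
  --cluster demo-cluster \
  --namespace default \
  --name my-app \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
  --approve

# 3. Reference the ServiceAccount from the pod spec; the pod then gets
#    temporary credentials for the role via a projected web identity token.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  serviceAccountName: my-app
  containers:
    - name: app
      image: amazon/aws-cli   # placeholder image; try "aws s3 ls" inside it
      command: ["sleep", "3600"]
EOF
```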
#cloud #tutorial #aws #kubernetes #cloud security #k8s #eks #aws security #kubernetes security #aws iam