Policy-driven networking for Gravitational’s Private Kubernetes deployments

Today, we are excited to announce that Gravitational is working with Tigera, the company behind Project Calico, to ensure that its Private Kubernetes deployments will support Calico for secure, policy-driven networking.

Gravitational enables SaaS companies and independent software vendors to deploy and manage their applications on their customers’ infrastructure (on-premises). Our solution is based on Kubernetes, so it is crucial that we deliver flexible, secure, and reliable communication between the services within each private deployment.

Sasha Klizhentas, Gravitational’s CTO, explains the reasoning behind choosing Calico: “Calico has clearly become the de facto solution for Kubernetes networking, with a well-engineered system that uses proven, well-known technologies like kernel layer 3 routing and filtering for fine-grained policy. It scales well and sysadmins love it.”

Tigera’s work and expertise with Calico have been crucial to our ability to offer it as a networking option and have reduced our time to market. With their help, all Gravitational customers can now select Calico in their Application Manifest to best suit their deployment environments. Networking options include flat layer 3 networking without overlays, direct peering to private cloud infrastructure, and IP-in-IP or VXLAN for routing across public cloud regions and hybrid cloud scenarios. Calico also brings Kubernetes Network Policy capabilities to Gravity users for enhanced application security.
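
As an illustration, a minimal NetworkPolicy of the kind now available to Gravity users might look like the sketch below; the namespace, labels, and port are hypothetical, not taken from an actual Gravity manifest:

```yaml
# Illustrative NetworkPolicy (namespace, labels, and port are hypothetical):
# only pods labeled app=frontend may reach app=backend pods, and only on
# TCP port 8080. With Calico as the network plugin, a policy like this is
# enforced with kernel-level routing and filtering.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```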

#kubernetes

Christa Stehr

50+ Useful Kubernetes Tools for 2020 - Part 2

Introduction

Last year, we provided a list of Kubernetes tools that proved so popular we have decided to curate another list of useful additions for working with the platform, among them many tools that we personally use here at Caylent. Check out the original tools list here in case you missed it.

According to a recent survey by StackRox, Kubernetes’ dominance of the market continues to be reinforced, with 86% of respondents using it for container orchestration.

(Source: State of Kubernetes and Container Security, 2020)

The same survey shows that more and more companies are jumping into containerization for their apps. If you’re among them, here are some tools to aid you going forward as Kubernetes continues its rapid growth.



Improving Kubernetes Security with Open Policy Agent (OPA)

Many multinational organizations now run their applications on microservice architectures inside cloud environments, and administrators are responsible for defining multiple policies across those environments. These large IT organizations have extensive infrastructure, and each of their systems tends to ship its own policy module or built-in authorization system. This can solve the policy problem at enterprise scale (especially if you have the investment and resources to ensure best-practice implementation), but the overall ecosystem ends up fragmented: if you want to improve control and visibility over who can do what across the stack, you face a lot of complexity.

Why We Need OPA

Enforcing policy by hand is an approach of the past. It does not work in today’s environments, where everything is dynamic and ephemeral, the technology stack is heterogeneous, and every development team may use a different language. So the question is: how do you gain granular control over policies and automate and streamline their enforcement? The answer is Open Policy Agent (OPA).

OPA provides technology that helps unify policy enforcement across a wide range of software and empowers administrators with more control over their systems. Such policies are invaluable for maintaining security, compliance, and standardization across environments, and OPA lets you define and enforce them in a declarative way.
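
As a brief illustration, here is a minimal Rego policy of the kind OPA evaluates. The rule itself and the AdmissionReview-shaped input are assumptions for the example, not something taken from this article:

```rego
# Hypothetical admission rule: reject Deployments whose containers set no
# resource limits. The input shape assumes the Kubernetes AdmissionReview
# document that OPA admission-control integrations commonly receive.
package kubernetes.admission

deny[msg] {
    input.request.kind.kind == "Deployment"
    container := input.request.object.spec.template.spec.containers[_]
    not container.resources.limits
    msg := sprintf("container %q has no resource limits", [container.name])
}
```

Loaded into OPA as a Kubernetes admission controller, a rule like this would declaratively block any Deployment whose containers omit resource limits, with no manual review step.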

#blog #kubernetes #security #kubernetes open policy agent #opa #open policy agent #policy enforcement #policy implementation

Maud Rosenbaum

Kubernetes in the Cloud: Strategies for Effective Multi-Cloud Implementations

Kubernetes is a highly popular container orchestration platform. Multi-cloud is a strategy that leverages cloud resources from multiple vendors. Multi-cloud strategies have become popular because they help prevent vendor lock-in and give you access to a wide variety of cloud resources. However, multi-cloud ecosystems are notoriously difficult to configure and maintain.

This article explains how you can leverage Kubernetes to reduce multi-cloud complexity and improve stability, scalability, and velocity.

Kubernetes: Your Multi-Cloud Strategy

Maintaining standardized application deployments becomes more challenging as the number of applications, and of the technologies they are based on, increases. As environments, operating systems, and dependencies diverge, management and operations require more effort and extensive documentation.

In the past, teams tried to get around these difficulties by creating isolated projects in the data center. Each project, including its configurations and requirements, was managed independently. This required accurately predicting performance and the number of users before deployment, and taking applications down to update operating systems or applications. There were many chances for error.

Kubernetes provides an alternative to the old method, enabling teams to deploy applications in containers, independent of the environment. This eliminates the need to create resource partitions and enables teams to operate infrastructure as a unified whole.

In particular, Kubernetes makes it easier to implement a multi-cloud strategy because it abstracts away differences between services. With Kubernetes deployments, you can work from a consistent platform and optimize services and applications according to your business needs.
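
As a simple illustration, consider a minimal Deployment manifest (the name and image below are placeholders). Because nothing in it is vendor-specific, the same file can be applied unchanged to a cluster on any cloud:

```yaml
# Illustrative Deployment (name and image are placeholders). Nothing in
# this manifest references a particular cloud vendor, so it can be applied
# as-is to clusters on any provider or on-premises.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.19
          ports:
            - containerPort: 80
```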

The Compelling Attributes of Multi-Cloud Kubernetes

Multi-cloud Kubernetes can provide multiple benefits beyond those of a single cloud deployment. Below are some of the most notable advantages.

Stability

In addition to the built-in scalability, fault tolerance, and auto-healing features of Kubernetes, multi-cloud deployments can provide service redundancy. For example, you can mirror applications or split microservices across vendors. This reduces the risk of a vendor-related outage and enables you to create failovers.

#kubernetes #multicloud-strategy #kubernetes-cluster #kubernetes-top-story #kubernetes-cluster-install #kubernetes-explained #kubernetes-infrastructure #cloud

Understanding Kubernetes Operators

Automation is one of the fundamental capabilities that make Kubernetes so robust as a container orchestration engine. Even complex cloud infrastructure creation can be automated to simplify the management of cloud deployments. Despite leveraging so many resources and components to support an application, your cloud environment can still remain fairly manageable.

Despite the many tools already available for Kubernetes, the effort to make cloud infrastructure management more scalable and automated is ongoing. The Kubernetes operator is one of the tools designed to push automation past its current limits, letting you do much more without relying on manual input at every step.

Getting to Know Kubernetes Operators

A Kubernetes operator is, by definition, an orchestration framework: a tool that lets you orchestrate and maintain cloud infrastructure with little to no human input. Kubernetes defines operators as software extensions that use custom resources to manage applications and their components.

Kubernetes operators are not complex at all. Operators use controllers and the Kubernetes API to handle the packaging, deployment, management, and maintenance of applications and the custom resources they need. The whole process is fully automated, and you can still rely on kubectl tooling for commands and operations.

In other words, an operator is essentially a custom Kubernetes controller that manages applications through custom resources. You define parameters and configuration directly inside a custom resource, and the operator translates those parameters and acts on them autonomously, as sketched below. This continuous, autonomous reconciliation is what defines operators.
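
To make this concrete, here is a sketch of a hypothetical custom resource; the API group, kind, and fields are invented for illustration and would in practice be defined by the operator’s CustomResourceDefinition:

```yaml
# Hypothetical custom resource (API group, kind, and fields are invented).
# The operator watches objects of this kind and continuously reconciles
# the cluster to match the declared spec.
apiVersion: example.com/v1alpha1
kind: DatabaseCluster
metadata:
  name: orders-db
spec:
  version: "12.4"          # desired database version
  replicas: 3              # operator scales the cluster to three members
  backup:
    schedule: "0 2 * * *"  # operator schedules nightly backups
```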

#blog #kubernetes #automation #kubernetes api #kubernetes deployment #kubernetes operators

AWS Fargate for Amazon Elastic Kubernetes Service

On-demand cloud computing brings new ways to ensure scalability and efficiency. Rather than pre-allocating and managing certain server resources or having to go through the usual process of setting up a cloud cluster, apps and microservices can now rely on on-demand serverless computing blocks designed to be efficient and highly optimized.

Amazon Elastic Kubernetes Service (EKS) already makes running Kubernetes on AWS very easy. Support for AWS Fargate, which introduces the on-demand serverless computing element to the environment, makes deploying Kubernetes pods even easier and more efficient. AWS Fargate offers a wide range of features that make managing clusters and pods intuitive.

Utilizing Fargate

As with many other AWS services, using Fargate to manage Kubernetes clusters is very easy. To integrate Fargate and run a cluster on top of it, you only need to append the --fargate flag to your eksctl command.
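
For example, a Fargate-backed cluster can be created with a single command; the cluster name and region below are placeholders:

```sh
# Create an EKS cluster whose pods run on Fargate (name and region are
# placeholders). Requires eksctl 0.20.0 or later.
eksctl create cluster --name demo-cluster --region us-east-1 --fargate
```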

EKS automatically configures the cluster to run on Fargate. It creates a pod execution role so that pod creation and management can be automated in an on-demand environment. It also patches CoreDNS so the cluster can run smoothly on Fargate.

A Fargate profile is automatically created by the command. You can choose to customize the profile later or configure namespaces yourself, but the default profile is suitable for a wide range of applications already, requiring no human input other than a namespace for the cluster.

There are some prerequisites to keep in mind, though. For starters, Fargate requires eksctl version 0.20.0 or later. Fargate also comes with limitations, starting with support for only a handful of regions. It also doesn’t support stateful apps, DaemonSets, or privileged containers at the moment. Check out this link for Fargate limitations for your consideration.

Support for conventional load balancing is also limited, which is why the ALB Ingress Controller is recommended. At the time of this writing, Classic Load Balancers and Network Load Balancers are not yet supported.

However, you can still be very meticulous in how you manage your clusters, including using different clusters to separate trusted and untrusted workloads.

Everything else is straightforward. Once the cluster is created, you can begin specifying pod execution roles for Fargate. You can use the IAM console to create a role and assign it to a Fargate cluster, or you can create IAM roles and Fargate profiles via Terraform.
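
As a sketch of the Terraform route, a Fargate profile looks roughly like this; all names, the subnet ID, and the IAM role reference are placeholders, and the role itself is assumed to be defined elsewhere in the configuration:

```hcl
# Sketch of a Fargate profile in Terraform (names, subnet ID, and the
# pod execution role are placeholders; the aws_iam_role resource is
# assumed to be defined elsewhere in the configuration).
resource "aws_eks_fargate_profile" "default" {
  cluster_name           = "demo-cluster"
  fargate_profile_name   = "default"
  pod_execution_role_arn = aws_iam_role.fargate_pod_execution.arn
  subnet_ids             = ["subnet-0123456789abcdef0"]

  # Pods created in the selected namespace are scheduled onto Fargate.
  selector {
    namespace = "default"
  }
}
```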

#aws #blog #amazon eks #aws fargate #aws management console #aws services #kubernetes #kubernetes clusters #kubernetes deployment #kubernetes pods