Archie Clayton

Enforcing Policies and Governance for Kubernetes Workloads

TL;DR: In this article, you will learn about enforcing policies for your Kubernetes workloads using static tools such as conftest and in-cluster operators such as Gatekeeper.

Policies in Kubernetes allow you to prevent specific workloads from being deployed in the cluster.

While compliance is usually the reason for enforcing strict policies in the cluster, there are several recommended best practices that cluster admins should implement.

Examples of such guidelines are:

  1. Not running privileged pods.
  2. Not running pods as the root user.
  3. Always specifying resource limits.
  4. Not using the latest tag for the container images.
  5. Not allowing additional Linux capabilities by default.

In addition, you may want to enforce bespoke policies that all workloads should abide by, such as:

  • All workloads must have a “project” and “app” label.
  • All workloads must use container images from a specific container registry (e.g. my-company.com).

Finally, there is a third category of checks that you would want to implement as policies to avoid disruption in your services.

An example of such a check is ensuring that no two services can use the same ingress hostname.

In this article, you will learn about enforcing policies for your Kubernetes workloads using both out-of-cluster and in-cluster solutions.

These policies aim to reject workloads that do not satisfy the defined conditions.

The out-of-cluster approaches are accomplished by running static checks on the YAML manifests before they are submitted to the cluster.

There are multiple tools available for achieving this.

The in-cluster approaches make use of validating admission controllers which are invoked as part of the API request and before the manifest is stored in the database.

You may find this git repository handy as you work through the article.

A non-compliant Deployment

Let’s consider the following YAML manifest:

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-echo
  labels:
    app: http-echo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: http-echo
  template:
    metadata:
      labels:
        app: http-echo
    spec:
      containers:
      - name: http-echo
        image: hashicorp/http-echo          # no tag specified
        args: ["-text", "hello-world"]
        ports:
        - containerPort: 5678

      - name: http-echo-1
        image: hashicorp/http-echo:latest   # explicit latest tag
        args: ["-text", "hello-world"]
        ports:
        - containerPort: 5678

The above Deployment will create Pods consisting of two containers that use the same container image.

The first container doesn’t specify any tag and the second container specifies the latest tag.

Effectively, both containers will use the latest version of the image, hashicorp/http-echo.

This is considered a bad practice and you want to prevent such a deployment from being created in your cluster.

The best practice is to pin the container image to a tag such as hashicorp/http-echo:0.2.3.

Let’s see how you can detect the policy violation using a static check.

Since you want to prevent the resource from reaching the cluster, the right place for running this check is:

  • As a Git pre-commit hook, before the resource is committed to the repository.
  • As part of your CI/CD pipeline before the branch is merged into the main branch.
  • As part of the CI/CD pipeline before the resource is submitted to the cluster.

Enforcing policies using conftest

Conftest is a binary and a testing framework for configuration data that can be used to check and verify Kubernetes manifests.

Tests are written using the purpose-built query language, Rego.

You can install conftest following the instructions on the project website.

At the time of writing, the latest release is 0.19.0.

Let’s define two policies:

check_image_tag.rego

package main

deny[msg] {
  input.kind == "Deployment"
  image := input.spec.template.spec.containers[_].image
  not count(split(image, ":")) == 2
  msg := sprintf("image '%v' doesn't specify a valid tag", [image])
}

deny[msg] {
  input.kind == "Deployment"
  image := input.spec.template.spec.containers[_].image
  endswith(image, "latest")
  msg := sprintf("image '%v' uses latest tag", [image])
}

Can you guess what the two policies are checking?

Both checks apply only to Deployments and extract the image names from the spec.template.spec.containers section.

The former rule checks that there’s a tag defined on the image.


The latter checks that, if a tag is defined, it is not the latest tag.


The two deny blocks evaluate to a violation when true.

Notice that, when you have more than one deny block, conftest evaluates them independently, and the overall result is a violation if any of the blocks results in a violation.

Now, save the file as check_image_tag.rego in a directory named conftest-checks and run conftest against the deployment.yaml manifest:

bash

conftest test -p conftest-checks test-data/deployment.yaml
FAIL - test-data/deployment.yaml - image 'hashicorp/http-echo' doesn't specify a valid tag
FAIL - test-data/deployment.yaml - image 'hashicorp/http-echo:latest' uses latest tag

2 tests, 0 passed, 0 warnings, 2 failures

Great, it detected both violations.
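The same approach extends to the bespoke policies mentioned at the beginning of the article. The following is a rough sketch (the file name, rule bodies and messages are illustrative, not official policies) of rules requiring the "project" and "app" labels and restricting images to the my-company.com registry:

check_labels_and_registry.rego

package main

# Illustrative: require the "project" and "app" labels on every Deployment
deny[msg] {
  input.kind == "Deployment"
  required := {"project", "app"}
  provided := {label | input.metadata.labels[label]}
  missing := required - provided
  count(missing) > 0
  msg := sprintf("deployment '%v' is missing required labels: %v", [input.metadata.name, missing])
}

# Illustrative: only allow images pulled from the company registry
deny[msg] {
  input.kind == "Deployment"
  image := input.spec.template.spec.containers[_].image
  not startswith(image, "my-company.com/")
  msg := sprintf("image '%v' does not come from the approved registry", [image])
}

Saved alongside the other policies, the same conftest test command would report any Deployment that misses a label or pulls from another registry.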

Since conftest is a static binary, you can run your checks before you submit the YAML to the cluster.

If you already use a CI/CD pipeline to apply changes to your cluster, you could have an extra step that validates all resources against your conftest policies.
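That extra step could also enforce the best practices listed at the start of the article. Here is a minimal sketch (an illustrative file name, with simplified checks that only inspect the container-level fields of Deployments) of rules rejecting privileged containers and containers without resource limits:

check_best_practices.rego

package main

# Illustrative: reject containers that request privileged mode
deny[msg] {
  input.kind == "Deployment"
  container := input.spec.template.spec.containers[_]
  container.securityContext.privileged == true
  msg := sprintf("container '%v' must not run in privileged mode", [container.name])
}

# Illustrative: reject containers that don't declare resource limits
deny[msg] {
  input.kind == "Deployment"
  container := input.spec.template.spec.containers[_]
  not container.resources.limits
  msg := sprintf("container '%v' must specify resource limits", [container.name])
}

A production version would also need to cover other workload kinds (DaemonSets, StatefulSets, Jobs) and init containers.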

But does it really prevent someone from submitting a Deployment with the latest tag?

Of course, anyone with sufficient rights can still create the workload in your cluster and skip the CI/CD pipeline.

If you can run kubectl apply -f deployment.yaml successfully, you can ignore conftest, and your cluster will run images with the latest tag.

How can you prevent someone from working around your policies?

You could supplement the static check with dynamic policies deployed inside your cluster.

What if you could reject a resource after it is submitted to the cluster?

The Kubernetes API

Let’s recap what happens when you create a Pod like this in the cluster:

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: sise
    image: learnk8s/app:1.0.0
    ports:
    - containerPort: 8080

You could deploy the Pod to the cluster with:

bash

kubectl apply -f pod.yaml

The YAML definition is sent to the API server and:

  1. The YAML definition is stored in etcd.
  2. The scheduler assigns the Pod to a node.
  3. The kubelet retrieves the Pod spec and creates it.

At least that’s the high-level plan.

You use kubectl apply -f deployment.yaml to send a request to the control plane to deploy the two replicas defined in the manifest.

