This article is part of our Open Policy Agent (OPA) series, and assumes that you are familiar with Kubernetes and OPA. If you haven’t already done so, or if you need a refresher, please have a look at the previous articles published in this series.

Today we are going to use OPA to validate our Kubernetes Network Policies. In a nutshell, a network policy in Kubernetes enables you to enforce restrictions on pod intercommunication. For example, you can require that, for a pod to be able to connect to the database pods, it must have the app=web label. Such practices help reduce the attack surface of your cluster. However, a policy is only as good as its implementation: a well-crafted network policy that lives only in a YAML file and was never applied to the cluster is useless. Similarly, if important aspects were missed when the policy was created, that poses a risk as well. OPA can help you mitigate those risks. This article provides two hands-on labs explaining the process.

Use Case 1: Ensuring That A Network Policy Exists Prior To Creating Pods


In this situation, your application pods contain proprietary code that needs increased protection. As part of your security plan, you need to ensure that no pods are allowed to access your application, except the frontend ones.

Create The Network Policy

You create a network policy that enforces this restriction, which may look like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-inbound-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: prop
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 80
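Assuming the policy above is saved as app-inbound-policy.yaml (a file name chosen for this example), it can be applied and inspected like any other resource:

```shell
$ kubectl apply -f app-inbound-policy.yaml
$ kubectl get networkpolicy app-inbound-policy -n default
```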

Ensure That The Network Policy Is Working As Expected

Let’s set up a quick lab to verify that our policy is indeed in place. We create a Deployment that runs our protected pods. For simplicity, we’ll assume that nginx is the image used by our protected app. The deployment file may look as follows:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prop-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      app: prop
  replicas: 2
  template:
    metadata:
      labels:
        app: prop
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: prop-svc
  namespace: default
spec:
  selector:
    app: prop
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
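Assuming the two documents above are saved together as prop-app.yaml (a hypothetical file name), applying them and checking the replicas might look like:

```shell
$ kubectl apply -f prop-app.yaml
$ kubectl get pods -l app=prop   # both replicas should reach the Running state
```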

Apply the above definition and ensure that you have two pods running. Now let’s try to connect to the protected pods from a permitted client. The following definition creates a pod with the allowed label (app: frontend), using the alpine image:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: alpine
          image: alpine
          command:
            - sh
            - -c
            - sleep 100000

To prove that the pod created by this deployment can access our protected pods, let’s open a shell session to the container and establish an HTTP connection to the pod:

$ kubectl exec -it client-deployment-7666b46645-27psl -- sh
/ # apk add curl
/ # curl prop-svc:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
  body {
    width: 35em;

So, we were able to get HTML output, which means that the connection was successful. Now, let’s create another deployment that uses different labels for the client (you could equally change the labels of the existing pods). The deployment file, for a client pod that should not be allowed to access our protected pods, should look like this:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: illegal-client-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      app: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: alpine
          image: alpine
          command:
            - sh
            - -c
            - sleep 100000

Open a shell session to the container and try to connect to our target pods:

$ kubectl exec -it illegal-client-deployment-55b694c9df-2rznp -- sh
/ # apk add curl
/ # curl --connect-timeout 10 prop-svc:8080
curl: (28) Connection timed out after 10001 milliseconds

We used curl’s --connect-timeout command-line option to show that the connection could not be established even after ten seconds had passed. The network policy is doing exactly what it is supposed to do.
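This verifies the runtime behavior of the network policy itself. The OPA side of Use Case 1 is making sure such a policy exists before pods are admitted. As a rough sketch of the idea only (not the exact policy developed in this series), an admission rule in Rego might reject pods created in a namespace that has no NetworkPolicy at all. The data path data.kubernetes.networkpolicies is an assumption here; it presumes NetworkPolicy objects are replicated into OPA:

```rego
package kubernetes.admission

# Sketch only: deny pod creation in namespaces that contain no
# NetworkPolicy. Assumes NetworkPolicy objects are replicated into
# OPA as data.kubernetes.networkpolicies[namespace][name]; a real
# policy would also handle the case where the namespace key is
# missing entirely.
deny[msg] {
    input.request.kind.kind == "Pod"
    namespace := input.request.object.metadata.namespace
    count(data.kubernetes.networkpolicies[namespace]) == 0
    msg := sprintf("namespace %q has no NetworkPolicy; create one before deploying pods", [namespace])
}
```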
