The basics of Kubernetes networking

Originally published by Daniel Sanche at https://medium.com

Opening Container Ports

By default, pods are essentially isolated from the rest of the world. In order to route traffic to our application, we need to open the set of ports we plan to use for the container.

The software inside our Gitea container was designed to listen on port 3000 for HTTP requests, and 22 for SSH connections (to clone repositories). Let’s open up these ports in our container through the YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitea-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitea
  template:
    metadata:
      labels:
        app: gitea
    spec:
      containers:
      - name: gitea-container
        image: gitea/gitea:1.4
        ports:                                      #+
        - containerPort: 3000                       #+
          name: http                                #+
        - containerPort: 22                         #+
          name: ssh                                 #+

Apply the updated file to the cluster:

$ kubectl apply -f gitea.yaml

We should now be able to run kubectl describe deployment and see our newly opened ports listed in the deployment summary. Our pod should have ports 3000 and 22 open for connections.

$ kubectl describe deployment | grep Ports
    Ports:        3000/TCP, 22/TCP

Ports 3000 and 22 are now open on the container itself, but aren’t yet exposed to the open internet
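Before any networking is wired up, you can sanity-check that the server is actually listening inside the container. Here is a minimal check, assuming the gitea/gitea image ships busybox’s wget (Alpine-based images generally do):

# grab the name of the pod created by our deployment
$ PODNAME=$(kubectl get pods -l app=gitea \
     --output jsonpath='{.items[0].metadata.name}')

# fetch the front page from inside the container
$ kubectl exec $PODNAME -- wget -qO- http://localhost:3000 > /dev/null && echo OK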

Debugging with Port Forward

The ports on our container should now be open, but we still need a way to communicate with the pod in the cluster. For debugging purposes, we can attach to our pod using kubectl port-forward.

# grab the name of your active pod
$ PODNAME=$(kubectl get pods --output=template \
     --template="{{with index .items 0}}{{.metadata.name}}{{end}}")

# open a port-forward session to the pod
$ kubectl port-forward $PODNAME 3000:3000

Now, kubectl will forward all connections on port 3000 on your local machine into the pod running in the cloud. If you open http://localhost:3000 in your web browser, you should be able to interact with the server as though it were running locally.

“kubectl port-forward” creates a temporary direct connection between an individual pod and your localhost

Connecting to http://localhost:3000 should present you with the Gitea sign-up page
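
You can also exercise the tunnel from the terminal with curl, and forward the SSH port the same way. Mapping SSH to a local port above 1024 (2222 here is an arbitrary choice) avoids needing root privileges to bind the port:

# check the forwarded HTTP port
$ curl -I http://localhost:3000

# in a second terminal: forward the SSH port as well
$ kubectl port-forward $PODNAME 2222:22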

Creating an External LoadBalancer

Now that we know our pod is working, let’s make it accessible to the public internet. For this, we need a resource that will provision a public IP address and route incoming requests to our pod. This can be accomplished with a Kubernetes resource called a Service.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitea-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitea
  template:
    metadata:
      labels:
        app: gitea
    spec:
      containers:
      - name: gitea-container
        image: gitea/gitea:1.4
        ports:
        - containerPort: 3000
          name: http
        - containerPort: 22
          name: ssh
---
kind: Service             #+
apiVersion: v1            #+
metadata:                 #+
  name: gitea-service     #+
spec:                     #+
  selector:               #+
    app: gitea            #+
  ports:                  #+
  - protocol: TCP         #+
    targetPort: 3000      #+
    port: 80              #+
    name: http            #+
  - protocol: TCP         #+
    targetPort: 22        #+
    port: 22              #+
    name: ssh             #+
  type: LoadBalancer      #+

Like the Deployment, the Service makes use of a selector (the “selector” block in the Service definition). This selector tells the LoadBalancer which pods to route traffic to: when the LoadBalancer receives requests, it distributes the load across all pods that match the selector. In our case, load balancing is trivial because we have just one pod.
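
Since the Service and the Deployment share the same label, you can preview exactly which pods the LoadBalancer will route to by querying with that label:

# list all pods matching the Service's selector
$ kubectl get pods -l app=gitea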

The ports managed by the LoadBalancer are defined in the Service’s “ports” list. Along with a unique name and the protocol (TCP/UDP), you must also define “port” and “targetPort”. These two fields define a mapping from a port on the external IP (“port”) to the port used by the container (“targetPort”). The first entry says the LoadBalancer will listen for requests on port 80 (the default port your web browser uses to view websites) and pass them to port 3000 on our pod; the second forwards external port 22 straight through to the container’s SSH port.

Once again, we need to apply our updates to the cluster:

$ kubectl apply -f gitea.yaml

After waiting a couple of minutes for the changes to propagate, check on your service:

$ kubectl get services
NAME           TYPE          CLUSTER-IP    EXTERNAL-IP   AGE
gitea-service  LoadBalancer  10.27.240.34  35.192.x.x    2m

You should see an external IP automatically added to your service. Entering this IP into your web browser will let you interact with the web server hosted by your pod.
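
If you would rather script against the address than copy it from the table, the IP can also be read from the Service’s status once it has been provisioned. A small sketch, assuming your provider reports a bare IP (as GKE does; AWS load balancers report a hostname instead):

# extract the external IP from the Service's status
$ EXTERNAL_IP=$(kubectl get service gitea-service \
     --output jsonpath='{.status.loadBalancer.ingress[0].ip}')

# request the Gitea front page through the LoadBalancer
$ curl -I http://$EXTERNAL_IP/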

The new LoadBalancer exposes an external IP address. Incoming requests on port 80 will be routed to port 3000 on the Gitea pod. The Gitea sign-up page should now be accessible over the open internet.
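
To inspect the full mapping the Service recorded, including the node ports the cloud provider uses behind the scenes, you can describe it:

# show the port mapping and the pod endpoints behind the Service
$ kubectl describe service gitea-service | grep -E 'Port|Endpoints'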

Inter-Pod Communication: ClusterIP Service

If you try running through the Gitea sign-up page, you’ll see there’s still something missing: Gitea requires a database to function. To solve this, we could either add a MySQL container to the Gitea pod as a sidecar, or create a new pod solely for MySQL. Each approach has its own benefits and trade-offs, depending on your requirements. For the purposes of this tutorial, we will create a new pod.

Let’s start a new YAML file called mysql.yaml to manage the database:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        ports:
        - containerPort: 3306
        # Ignore this for now. It will be explained in the next article
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "true"
---
kind: Service
apiVersion: v1
metadata:
  name: mysql-service
spec:
  selector:
    app: mysql
  ports:
  - protocol: TCP
    port: 3306
  type: ClusterIP

Most of this should look familiar. Once again, we are declaring a Deployment to manage our single pod, and we are managing network connections through a Service. In this case, the Service is of type “ClusterIP”, which simply means the IP is exposed only within the cluster, rather than externally as with the LoadBalancer we made for the Gitea service.

Apply this new YAML file to the cluster:

$ kubectl apply -f mysql.yaml

You should now see a new pod, deployment, and service added to your cluster

$ kubectl get pods
NAME       READY  STATUS   RESTARTS  AGE
gitea-pod  1/1    Running  0         9m
mysql-pod  1/1    Running  0         9s

$ kubectl get deployments
NAME              DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
gitea-deployment  1        1        1           1          11m
mysql-deployment  1        1        1           1          5m

$ kubectl get services
NAME           TYPE          CLUSTER-IP    EXTERNAL-IP   AGE
gitea-service  LoadBalancer  10.27.240.34  35.192.x.x    2m
mysql-service  ClusterIP     10.27.254.69  <none>        6m

MySQL is now deployed as a separate pod within the cluster. Its ClusterIP service can be accessed by the Gitea pod within the cluster, but it is not exposed over the public internet
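
You can confirm that the new Service actually found the MySQL pod by listing its endpoints; an empty list would mean the selector matched no running pods:

# show the pod IP(s) backing mysql-service
$ kubectl get endpoints mysql-service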

The ClusterIP Service will automatically generate an internal IP address for us, listed in the console output as “CLUSTER-IP”. Any container within the cluster can reach our MySQL pod using this address. Using these internal IP addresses directly is bad practice, however, since they are ephemeral: a recreated Service gets a new one. Instead, Kubernetes offers an easier way to reach our new service: we can simply use the name “mysql-service” wherever an address is expected. This works thanks to a built-in pod called “kube-dns”, which manages internal DNS resolution for all services. In this way, you can ignore ephemeral internal IP addresses and instead use static, human-readable service names.
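
You can watch this DNS resolution in action from a short-lived debugging pod. The busybox:1.28 image is a common choice here because nslookup is known to be broken in some newer busybox builds:

# resolve the service name from inside the cluster, then clean up
$ kubectl run -it --rm --restart=Never dns-test \
     --image=busybox:1.28 -- nslookup mysql-service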

To allow Gitea to communicate with the MySQL pod, simply write the service’s name and port (mysql-service:3306) in the “host” field of Gitea’s web UI. If everything is working as expected, you should see an “access denied” error. This means our pods can communicate successfully; they just need more configuration before they can authenticate. Stay tuned for the next post to learn how.


Thanks for reading




