The basics of Kubernetes networking

Originally published by Daniel Sanche at https://medium.com

Opening Container Ports

By default, pods are essentially isolated from the rest of the world. In order to route traffic to our application, we need to open the set of ports we plan to use for the container.

The software inside our Gitea container was designed to listen on port 3000 for HTTP requests, and 22 for SSH connections (to clone repositories). Let’s open up these ports in our container through the YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitea-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitea
  template:
    metadata:
      labels:
        app: gitea
    spec:
      containers:
      - name: gitea-container
        image: gitea/gitea:1.4
        ports:                                      #+
        - containerPort: 3000                       #+
          name: http                                #+
        - containerPort: 22                         #+
          name: ssh                                 #+

Apply the updated file to the cluster:

$ kubectl apply -f gitea.yaml

We should now be able to run kubectl describe deployment and see our newly opened ports listed in the deployment summary. Our pod should have ports 3000 and 22 open for connections.

$ kubectl describe deployment | grep Ports
    Ports:        3000/TCP, 22/TCP

Ports 3000 and 22 are now open on the container itself, but they aren’t yet exposed to the open internet.

Debugging with Port Forward

The ports on our container are now open, but we still need a way to communicate with the pod in the cluster. For debugging purposes, we can attach to our pod using kubectl port-forward.

# grab the name of your active pod
# grab the name of your active pod
$ PODNAME=$(kubectl get pods --output=template \
     --template="{{with index .items 0}}{{.metadata.name}}{{end}}")

# open a port-forward session to the pod
$ kubectl port-forward $PODNAME 3000:3000

Now, kubectl will forward all connections on port 3000 on your local machine into the pod running in the cloud. If you open http://localhost:3000 in your web browser, you should be able to interact with the server as though it were running locally.

“kubectl port-forward” creates a temporary direct connection between an individual pod and your localhost

Connecting to http://localhost:3000 should present you with the Gitea sign up page
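With the port-forward session running in another terminal, you can also sanity-check the tunnel from the command line. A minimal check, assuming curl is installed and the session above is still active:

```shell
# Ask Gitea for its HTTP status code through the forwarded port.
# A 200 (or a 3xx redirect to the install page) means the tunnel works.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000
```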

Creating an External LoadBalancer

Now that we know our pod is working, let’s make it accessible to the public internet. For this, we need to add a new Kubernetes resource that will provision a public IP address and route incoming requests to our pod. This can be accomplished using a Kubernetes resource called a Service.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitea-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitea
  template:
    metadata:
      labels:
        app: gitea
    spec:
      containers:
      - name: gitea-container
        image: gitea/gitea:1.4
        ports:
        - containerPort: 3000
          name: http
        - containerPort: 22
          name: ssh
---
kind: Service             #+
apiVersion: v1            #+
metadata:                 #+
  name: gitea-service     #+
spec:                     #+
  selector:               #+
    app: gitea            #+
  ports:                  #+
  - protocol: TCP         #+
    targetPort: 3000      #+
    port: 80              #+
    name: http            #+
  - protocol: TCP         #+
    targetPort: 22        #+
    port: 22              #+
    name: ssh             #+
  type: LoadBalancer      #+

Like the Deployment, the Service makes use of a selector (the “selector” block). The selector tells the LoadBalancer which pods to route traffic to: when the LoadBalancer receives requests, it distributes the load across all pods that match the selector. In our case, load balancing is trivial because we have just one pod.

The ports managed by the LoadBalancer are defined in the “ports” list of the Service. Along with a unique name and the protocol (TCP/UDP), you must also define “port” and “targetPort”. These two fields define a mapping from a port on the external IP (port) to a port used by the container (targetPort). In the first entry, we are saying that the LoadBalancer will listen for requests on port 80 (the default port your web browser uses to view websites) and pass each request to port 3000 on our pod.
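As a rough mental model, the Service’s port list behaves like a lookup table from the externally exposed port to the container’s targetPort. This is purely illustrative (kube-proxy implements the real routing inside the cluster), but it captures the mapping our YAML declares:

```shell
# Toy sketch of the Service's mapping: service port -> container targetPort.
route() {
  case "$1" in
    80) echo 3000 ;;  # http: LoadBalancer port 80 -> containerPort 3000
    22) echo 22   ;;  # ssh: same port on both sides
    *)  echo "no mapping for port $1" >&2; return 1 ;;
  esac
}

route 80   # prints 3000
```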

Once again, we need to apply our updates to the cluster:

$ kubectl apply -f gitea.yaml

After waiting a couple of minutes for your changes to propagate, check on your service:

$ kubectl get service
NAME           TYPE          CLUSTER-IP    EXTERNAL-IP   AGE
gitea-service  LoadBalancer  10.27.240.34  35.192.x.x    2m

After a couple minutes, you should see an external IP automatically added to your service. Entering this IP into your web browser will allow you to interact with the web server hosted by your pod.
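If you would rather script this than scan the table, the external address can be pulled out with a jsonpath query. A sketch, using the standard status field for LoadBalancer services (on some providers the address appears under “hostname” rather than “ip”):

```shell
# Print only the external address assigned to the service.
# Empty output means the cloud provider is still provisioning the IP.
kubectl get service gitea-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```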

The new LoadBalancer exposes an external IP address. Incoming requests on port 80 will be routed to port 3000 on the Gitea pod. The Gitea sign up page should now be accessible over the open internet.

Inter-Pod Communication: ClusterIP Service

If you try running through the Gitea sign-up page, you’ll see there’s still something missing: Gitea requires a database to function. To solve this, we could either add a MySQL container into the Gitea pod as a side-car, or we can create a new pod solely for MySQL. Both approaches may have their benefits and trade-offs, depending on your requirements. For the purposes of this tutorial, we will create a new pod.

Let’s start a new YAML file called mysql.yaml to manage the database:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        ports:
        - containerPort: 3306
        # Ignore this for now. It will be explained in the next article
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "true"
---
kind: Service
apiVersion: v1
metadata:
  name: mysql-service
spec:
  selector:
    app: mysql
  ports:
  - protocol: TCP
    port: 3306
  type: ClusterIP

Most of this should look familiar. Once again, we are declaring a Deployment to manage our single pod, and we are managing network connections through a Service. In this case, the service is of type “ClusterIP”; this simply means that the IP is exposed only within the cluster, rather than externally like through the LoadBalancer we made for the Gitea service.

Apply this new YAML file to the cluster:

$ kubectl apply -f mysql.yaml

You should now see a new pod, deployment, and service added to your cluster:

$ kubectl get pods
NAME       READY    STATUS   RESTARTS  AGE
gitea-pod  1/1      Running  0         9m
mysql-pod  1/1      Running  0         9s

$ kubectl get deployments
NAME              DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
gitea-deployment  1        1        1           1          11m
mysql-deployment  1        1        1           1          5m

$ kubectl get services
NAME           TYPE          CLUSTER-IP    EXTERNAL-IP   AGE
gitea-service  LoadBalancer  10.27.240.34  35.192.x.x    2m
mysql-service  ClusterIP     10.27.254.69  <none>        6m

MySQL is now deployed as a separate pod within the cluster. Its ClusterIP service can be accessed by the Gitea pod within the cluster, but it is not exposed over the public internet.
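To convince yourself that the MySQL service is reachable from inside the cluster (but not from your laptop), you can launch a throwaway pod and probe the port. A sketch, assuming your cluster can pull the busybox image and that kubectl run is permitted:

```shell
# Start a temporary busybox pod, attempt a TCP connection to the MySQL
# service on port 3306, and delete the pod when the command exits.
kubectl run tmp-shell --rm -i --image=busybox --restart=Never -- \
  nc -z -w 2 mysql-service 3306 && echo "mysql-service is reachable"
```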

The ClusterIP Service automatically generates an internal IP address for us, listed in the console output as “CLUSTER-IP”. Any container within the cluster can reach our MySQL pod at this address. Relying on these internal IP addresses directly is bad practice, however, because they are ephemeral. Instead, Kubernetes offers an even easier way to reach the service: we can simply type “mysql-service” in the address field. This works thanks to a built-in pod called “kube-dns”, which manages internal DNS resolution for all services. In this way, you can ignore ephemeral internal IP addresses and instead use static, human-readable service names.
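The short name “mysql-service” works because kube-dns also serves a fully qualified name for every service, of the form <service>.<namespace>.svc.<cluster-domain>. A small sketch of how the pieces compose, assuming the default namespace and the default “cluster.local” domain:

```shell
# Build the fully qualified DNS name kube-dns serves for our service.
SERVICE=mysql-service
NAMESPACE=default
CLUSTER_DOMAIN=cluster.local
FQDN="${SERVICE}.${NAMESPACE}.svc.${CLUSTER_DOMAIN}"
echo "$FQDN"   # mysql-service.default.svc.cluster.local
```

Either the short name or the full name resolves from any pod in the cluster; the full form matters mainly when services live in different namespaces.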

To allow Gitea to communicate with the MySQL pod, simply write the name and port of the service in the “host” field of the web UI. If everything is working as expected, you should see an “access denied” error. This means our pods can successfully communicate, but they need more configuration to successfully authenticate. Stay tuned for the next post to learn how.


Thanks for reading

