Google Kubernetes Engine By Example

Exploring Google Kubernetes Engine by creating a complete Node.js / React solution. Serving our application through Google Kubernetes Engine. Securing our application using Google Kubernetes Engine. Beginning to incorporate data persistence. Data persistence in production.

In reading about Kubernetes, I have come to learn that it has the potential to disrupt the PaaS model by leveraging Docker to provide a similar developer-friendly mechanism to deploy applications.

> Kubernetes (k8s) is an open-source system for automating deployment, scaling, and management of containerized applications.
— Kubernetes

Goal

The goal is to create a complete solution, from development to production, using Google Kubernetes Engine, consisting of:

  • TypeScript
  • Node.js
  • Express
  • SQL database
  • Object-Relational-Mapping (ORM)

The final solution is available for download.

Prerequisites

The latest versions of Docker Engine and Docker Compose (both included in Docker Desktop); versions 18.09.1 and 1.23.2 as of this writing.

The latest version of Node.js LTS; version 10.15.0 LTS as of this writing.

A text editor; cannot recommend enough the no-cost Visual Studio Code.

Node.js / Express Project

We will start with a basic Node.js / Express project as described in the Backend section of another article that I wrote: Tech Stack 2019 Core.

Docker

The assumption is that the reader is already familiar with Docker basics; the Docker Get Started, Part 1: Orientation and Setup and Get Started, Part 2: Containers documentation is sufficient.

Let us start by simply building / running a Docker image / container providing the Node.js / Express application. We start by creating a Docker configuration file:

Dockerfile

FROM node:10.15.0-alpine
WORKDIR /app
COPY . /app
RUN ["npm", "install"]
RUN ["npm", "run", "build-ts"]
EXPOSE 3000
CMD ["npm", "run", "start"]

Much like with our .gitignore, we do not want to copy the node_modules and dist folders into the container; rather, we will install and build these files as part of building the image.

.dockerignore

node_modules
dist

We build the Docker image using the command:

docker build --tag=hellokubernetes .

and run a Docker container based on the image:

docker run -d -p 3000:3000 hellokubernetes

At this point, we can observe the Docker images and containers:

We can also open a browser and see the API result.
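If you prefer the command line, the following (assuming the application responds on the root path) accomplishes the same checks:

docker images
docker ps
curl http://localhost:3000/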

note: For now, we will hold off on publishing images as described in the Docker documentation.

Docker Compose

The assumption is that the reader is already familiar with Docker Compose basics; the Docker Overview of Docker Compose is sufficient.

Let us first stop / remove all the Docker containers and remove all the images from our previous steps.

note: These commands assume that you are not using Docker for any other purposes; otherwise these operations can be targeted.

docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
docker rmi $(docker images -a -q)

We can simplify building the Docker image and running the container by creating a Docker Compose configuration file:

docker-compose.yaml

version: '3'
services:
  web:
    build: .
    ports:
    - "3000:3000"

and then executing:

docker-compose up -d

At this point, we can observe the Docker images and containers:

And as before, we can also open a browser and see the API result.

Docker Compose Development

We will be using Kubernetes for non-development deployments; at the same time, we will be using Docker Compose exclusively for development. With this in mind, we need to refactor our previous implementation to support the live build / restart of the application required by our development workflow.

Let us first stop / remove all the Docker containers and remove all the images from our previous steps.

docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
docker rmi $(docker images -a -q)

In preparation for our updated Docker Compose configuration, we need to be able to run the development tools via a single npm script (currently this requires running both the watch-ts and watch-node scripts). We use the concurrently package to accomplish this.

npm install -D concurrently

And add a develop script:

{
  ...
  "scripts": {
    ...
    "develop": "npm run build-ts && concurrently 'npm:watch-ts' 'npm:watch-node'"
  }
  ...
}


We then create a separate Docker configuration file:

Dockerfile-develop

FROM node:10.15.0-alpine
WORKDIR /app
EXPOSE 3000
CMD ["npm", "run", "develop"]


We then update the Docker Compose configuration file:

docker-compose.yaml

version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile-develop
    ports:
    - "3000:3000"
    volumes:
    - .:/app

With this in place, we can execute:

docker-compose up -d

We can then open a browser and see the API result. Also, editing the source files will trigger a rebuild and restart of the application.

Google Container Registry

We will need an online location to store Docker images in order to deploy them using Google Kubernetes Engine; right now our images are stored in our local Docker registry. Also, it is likely that we will want to keep these images private; so we will need a private registry.

note: In my first pass at this article, I explored using a private registry at Docker Hub. I, however, discovered that while Kubernetes supports private registries, it was a more complicated solution (requiring setting up security keys and separate billing).

Given that we are going to be using Google Kubernetes Engine, the tightly integrated Google Container Registry is a natural solution for our private registry.

Before continuing, there are some important Docker concepts to understand (not well covered in the official Docker documentation):

These are the concepts of a registry (a service that stores images), a repository (a named collection of related images, usually versions of the same application), and a tag (a label identifying a particular image within a repository); see Adrian Mouat, Using Docker: Developing and Deploying Software with Containers.

Also, if you are going to follow along, you will need to follow the Quickstart for Container Registry to enable the feature in a Google Cloud Platform project (that you will also likely need to create) and install the gcloud command-line tool.

We next need to create an image in our local registry, in the hellokubernetes repository, and with the 1.0.0 tag:

docker build --tag=hellokubernetes:1.0.0 .

We then authorize Docker using credentials supplied to gcloud:

gcloud auth configure-docker

We tag the local image for Google Container Registry:

docker tag hellokubernetes:1.0.0 gcr.io/[PROJECT-ID]/hellokubernetes:1.0.0

Observations:

  • TypeScript
  • Node.js
  • Express
  • SQL database
  • Object-Relational-Mapping (ORM)

We finally push this tagged image to Google Container Registry:

docker push gcr.io/[PROJECT-ID]/hellokubernetes:1.0.0
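To confirm the push, we can list the repository and its tags (using the same [PROJECT-ID] placeholder):

gcloud container images list --repository=gcr.io/[PROJECT-ID]
gcloud container images list-tags gcr.io/[PROJECT-ID]/hellokubernetes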

Google Kubernetes Engine (GKE)

Now that we have stored our Docker image online with Google Container Registry, we will run a container based on it using GKE.

The assumption is that the reader is already familiar with GKE basics; for this there is an excellent article, Kubernetes 101: Pods, Nodes, Containers, and Clusters (it is actually a three-part series and all are helpful).

note: If you read the official Docker documentation, they define similar (but different) concepts: stacks, swarms, and clusters. Since we are using Kubernetes, these concepts do not apply.

Also, if you are looking to follow along, you will have to set up a GKE-enabled Google Cloud Platform project (the same project used for Google Container Registry) and have access to both the gcloud and kubectl command-line tools; instructions are available at GKE Quickstart.

We first create a cluster named mycluster:

gcloud container clusters create mycluster

and get the credentials so that we can interact with it:

gcloud container clusters get-credentials mycluster
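As an optional sanity check, we can confirm that kubectl is pointed at the new cluster and that its nodes are ready:

kubectl cluster-info
kubectl get nodes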

Because we have a stateless application, we create a Kubernetes deployment.

k8s/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hellokubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hellokubernetes
  template:
    metadata:
      labels:
        app: hellokubernetes
    spec:
      containers:
      - name: hellokubernetes
        image: gcr.io/[PROJECT-ID]/hellokubernetes:1.0.0
        ports:
        - containerPort: 3000


We apply this deployment:

kubectl apply -f deployment.yaml

We can see that we were successful:

kubectl get deployments

We now have the left side of this final diagram built:

Per the diagram, our last step will be to create a load balancer service with an external IP address. The load balancer exposes port 80, mapping it to port 3000 on Pods labeled app: hellokubernetes.

k8s/service-load-balancer.yaml

apiVersion: v1
kind: Service
metadata:
  name: hellokubernetes
spec:
  selector:
    app: hellokubernetes
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 3000

We apply the service:

kubectl apply -f service-load-balancer.yaml

We can see that we were successful:

kubectl get service

The final validation is opening the external IP address in a browser.
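Equivalently, from the command line (substituting the external IP reported by kubectl get service):

curl http://[EXTERNAL-IP]/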

Overview

As you may have noticed, the previous example was not secure; it used HTTP instead of HTTPS. This is because we used a LoadBalancer instead of an Ingress service:

In GKE, an Ingress is a resource that defines rules for routing external HTTP(S) traffic to Services inside the cluster; when you create an Ingress, GKE provisions an HTTP(S) load balancer on your behalf (see the GKE documentation: Setting up HTTP Load Balancing with Ingress).

We start by first deleting the hellokubernetes and kubernetes (automatically generated) services:

kubectl delete service hellokubernetes
kubectl delete service kubernetes

We, however, will continue to use the deployment that we created earlier.

The rest of this article closely follows the official tutorial Setting up HTTP Load Balancing with Ingress, providing concrete examples along the way.

NodePort

We first need to create a NodePort service in preparation to create an Ingress service.

The Kubernetes documentation on Services describes a NodePort Service as one that exposes the Service on each Node's IP at a static port (the NodePort), so that it can be reached from outside the cluster at <NodeIP>:<NodePort>.
— Kubernetes — Services

OK, if you do not fully understand this definition, I am with you; unfortunately, I could not find a better explanation in my searches. So let me take a shot at an explanation using the following diagram (what we are working towards):

First, NodePort services are low-level services used by other services, e.g., an Ingress service. They also interact with the actual underlying Nodes supporting the cluster.

note: Unless otherwise specified, clusters default to having three nodes.

For each Pod and exposed port (e.g., 3000) that a NodePort service specifies, it defines a mapping to a randomly assigned port (e.g., 30677) that is allocated on each Node (the same port on every Node). This mapping allows other services to direct traffic to a particular Pod without needing to know which Node it is running on.

The following is our NodePort configuration file:

k8s/service-node-port.yaml

apiVersion: v1
kind: Service
metadata:
  name: hellokubernetes
spec:
  selector:
    app: hellokubernetes
  type: NodePort
  ports:
  - port: 3000

and applying:

kubectl apply -f service-node-port.yaml

and listing:
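The listing command is the same as before; describing the service additionally shows the randomly assigned node port (e.g., the 30677 mentioned above):

kubectl get service
kubectl describe service hellokubernetes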

Ingress

Now that we have our NodePort service, we create an Ingress service that points to it:

k8s/service-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hellokubernetes
spec:
  backend:
    serviceName: hellokubernetes
    servicePort: 3000

and applying:

kubectl apply -f service-ingress.yaml

and listing:


Ingress with Static IP

In order to use HTTPS, we need a DNS entry. In order to use a DNS entry, we need a static IP address. We use Google Cloud to create a static IP address:

gcloud compute addresses create web-static-ip --global

and list it:
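The listing can be done with gcloud; the describe variant also shows the assigned address:

gcloud compute addresses list
gcloud compute addresses describe web-static-ip --global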

We then update our Ingress service by adding a reference to this static IP address:

k8s/service-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hellokubernetes
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "web-static-ip"
...

and re-applying:

kubectl apply -f service-ingress.yaml

and listing:

Secret

Now that we have a static IP address, the next step towards securing our API with HTTPS is to create a domain name mapping to the static IP address. In my case, I created an A record for my larkintuckerllc.com domain (hosted on GoDaddy) pointing to the static IP address.

note: I also created a CNAME record for www.larkintuckerllc.com pointing to the A record.

Next, we need an HTTPS certificate; I ended up paying $8 a year for a PositiveSSL certificate (it works for both larkintuckerllc.com and www.larkintuckerllc.com).

note: Getting the no-cost Let’s Encrypt to work with GKE seemed overly complicated; see Let’s Encrypt on GKE.

Having obtained a private key and HTTPS certificate, the next step is to Base64 encode them (as a single line). On macOS, the command is:

openssl base64 -A -in INFILE -out OUTFILE

Next we create a file:

k8s/secret.yaml

apiVersion: v1
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
kind: Secret
metadata:
  name: hellokubernetes-tls
  namespace: default
type: Opaque


and applying:

kubectl apply -f secret.yaml
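As an aside, kubectl can create an equivalent Secret directly from the certificate and key files (it produces type kubernetes.io/tls rather than Opaque; the GKE Ingress accepts either). The file names here are placeholders:

kubectl create secret tls hellokubernetes-tls --cert=tls.crt --key=tls.key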

Ingress with Static IP and TLS

We finally add a tls entry to our final Ingress configuration:

k8s/service-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hellokubernetes
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "web-static-ip"
spec:
  tls:
  - hosts:
    - larkintuckerllc.com
    - www.larkintuckerllc.com
    secretName: hellokubernetes-tls
  backend:
    serviceName: hellokubernetes
    servicePort: 3000

and re-applying:

kubectl apply -f service-ingress.yaml

Whew… We have now secured the API:
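A quick check from the command line (using my domain; substitute your own):

curl -v https://www.larkintuckerllc.com/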

Development Database

For development, we will run our database in a Docker container, using the Docker Compose instructions at Docker Hub: postgres. This simply amounts to adding the db and adminer sections to our Docker Compose configuration:

docker-compose.yml

version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile-develop
    ports:
    - "3000:3000"
    volumes:
    - .:/app

  db:
    image: postgres:11.1
    restart: always
    environment:
      POSTGRES_PASSWORD: example

  adminer:
    image: adminer
    restart: always
    ports:
    - 8080:8080


We rebuild and restart the Docker containers using:

docker-compose up
docker-compose start

Because we do not want to run our application using database administrator access, we create a new database, create a new user, and grant access to the user to the database. To accomplish this, we will use the command-line tool, psql, available in the postgres container.

docker exec -it hello-kubernetes_db_1 /bin/bash
runuser -l postgres -c psql

note: The SQL commands can also be executed using the web interface provided by the adminer container.

The SQL commands we need to execute are:

create database hellokubernetes;
create user hellouser with password 'hellopassword';
grant all privileges on database hellokubernetes to hellouser;

With this, we can access the database (from the web container) using the URL: postgres://hellouser:hellopassword@db/hellokubernetes

Development Updates

With the database in place, we update the application to use it via the TypeORM library. These steps closely follow the first article in another series that I wrote: TypeORM By Example: Part 1.

We first install the dependencies:

npm install pg
npm install @types/pg
npm install typeorm
npm install reflect-metadata

As TypeORM uses the experimental TypeScript decorator syntax, we need to update:

tsconfig.json

{
    "compilerOptions": {
        ...
        "emitDecoratorMetadata": true,
        "experimentalDecorators": true
    },
    ...
}

We also need to create a TypeORM configuration file, which, among other things, provides information about database access:

note: In practice, it is important to keep secrets out of configuration files; e.g., the following file incorrectly contains a password. For the purposes of keeping things simple, we will not worry about it now. The correct approach involves using an environment variable, which we would set as part of the Deployment configuration.

ormconfig.json

{
  "type": "postgres",
  "host": "db",
  "port": 5432,
  "username": "hellouser",
  "password": "hellopassword",
  "database": "hellokubernetes",
  "synchronize": false,
  "migrationsRun": true,
  "logging": false,
  "entities": [
     "dist/entity/**/*.js"
  ],
  "migrations": [
     "dist/migration/**/*.js"
  ],
  "subscribers": [
     "dist/subscriber/**/*.js"
  ],
  "cli": {
     "entitiesDir": "src/entity",
     "migrationsDir": "src/migration",
     "subscribersDir": "src/subscriber"
  }
}


Let us create a Todo entity:

src/entity/Todo.ts

import { Column, Entity, PrimaryGeneratedColumn } from 'typeorm';

@Entity()
export class Todo {
  @PrimaryGeneratedColumn()
  public id: number;

  @Column()
  public name: string = '';

  @Column()
  public isComplete: boolean = false;
}

export default Todo;

We log in to the web container and generate the migration:

docker exec -it hello-kubernetes_web_1 /bin/sh
./node_modules/.bin/typeorm migration:generate -n Initialize
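note: Because ormconfig.json sets migrationsRun to true, the migration will run automatically when the application starts; it can also be run by hand from inside the container if desired:

./node_modules/.bin/typeorm migration:run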


We update the server application to run the migration and use the Todo entity:

src/server.ts

import cors from 'cors';
import express from 'express';
import 'reflect-metadata';
import { createConnection } from 'typeorm';
import Todo from './entity/Todo';

createConnection()
  .then(async connection => {
    const app = express();
    app.use(cors());
    app.get('/', (req, res) => res.send({ hello: 'world' }));
    app.get('/create', async (req, res) => {
      const todo = new Todo();
      todo.name = 'A Todo';
      await connection.manager.save(todo);
      res.send(todo);
    });
    app.get('/read', async (req, res) => {
      const todos = await connection.manager.find(Todo);
      res.send(todos);
    });
    app.listen(3000, () => console.log('Example app listening on port 3000!'));
  })
.catch(error => console.log(error));

Remember, our development container automatically reloads on file changes. As we can see using the adminer tool, we now have two new tables: todos, holding the Todo entities, and migrations, keeping track of the executed migrations:

Also, we have two new endpoints:

  • /create
  • /read

to exercise the persistence.
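For example, from the command line against the development container:

curl http://localhost:3000/create
curl http://localhost:3000/read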

Finally, we update the application’s version:

package.json

{
  ...
  "version": "1.1.0",
  ...
}

Production Database — PersistentVolumeClaim

We will generally follow the instructions in Using Persistent Disks with WordPress and MySQL to set up our production database. The first step is to create a PersistentVolumeClaim because:

the filesystem of a container is ephemeral: when a Pod is deleted or rescheduled, any data written inside the container is lost. A PersistentVolumeClaim requests durable storage, backed by a persistent disk, that exists independently of any Pod (see the GKE tutorial: Using Persistent Disks with WordPress and MySQL).

We create a small 10Gi volume that we can use for our PostgreSQL instance:

k8s/persistent-volume-claim.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hellokubernetes-volumeclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi


We apply the configuration:

kubectl apply -f persistent-volume-claim.yaml

and view the result:

kubectl get pvc

Production Database — Secret

While our database will only be accessible from resources inside our GKE cluster, we will still want to change the default postgres (administrator) password used by the postgres Docker image.

We can accomplish this by creating a Secret:

k8s/secret-postgres.yaml

apiVersion: v1
data:
  password: base64 encoded password
kind: Secret
metadata:
  name: hellokubernetes-postgres
  namespace: default
type: Opaque
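The encoded value can be produced with base64; the -n flag matters, as a trailing newline would otherwise become part of the password (the password shown is just a placeholder):

echo -n 'hellopostgrespassword' | base64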


We apply the configuration:

kubectl apply -f secret-postgres.yaml

and view the result:

kubectl get secret

Production Database — Deployment

We now create the database Deployment configuration:

k8s/deployment-postgres.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hellokubernetes-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hellokubernetes-postgres
  template:
    metadata:
      labels:
        app: hellokubernetes-postgres
    spec:
      containers:
        - name: hellokubernetes-postgres
          image: postgres:11.1
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: hellokubernetes-postgres
                  key: password
            - name: PGDATA
              value: "/var/lib/postgresql/data/pgdata"
          ports:
            - containerPort: 5432 
          volumeMounts:
            - name: hellokubernetes-persistent-storage
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: hellokubernetes-persistent-storage
          persistentVolumeClaim:
            claimName: hellokubernetes-volumeclaim


We then apply:

kubectl apply -f deployment-postgres.yaml

and then view the result:

kubectl get deployment

Production Database — Initial Setup

Much like we did for the development database, we need to perform an initial setup (new database, new user, and a grant giving the user access to the database) for the production database. To accomplish this, we need to log in to the Container in the Pod generated by the Deployment. We first get the Pod's name:

kubectl get pod

We then log in to the Container using the Pod's name:

kubectl exec -it hellokubernetes-postgres-6dcf55cd85-868d7 -- /bin/bash


note: Similarly, one can access a Container's logs with the kubectl logs command, again using the Pod's name.
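For example, using the Pod name from above:

kubectl logs hellokubernetes-postgres-6dcf55cd85-868d7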

We connect to the database using the command:

runuser -l postgres -c psql

and execute the following SQL commands:

create database hellokubernetes;
create user hellouser with password 'hellopassword';
grant all privileges on database hellokubernetes to hellouser;

note: It was about this point that I panicked, recalling that Pods (and their Containers) are ephemeral (meaning changes made to them will not last); we seemingly just applied some changes directly on the database Container. Then I was relieved to remember that PostgreSQL maintains its metadata (tables, users, etc.) in the database itself, which is stored on the PersistentVolume.

Production Database — Service

Unlike with our local Docker Compose setup, Containers (in Pods) cannot reach Containers in other Pods by a stable name; we need to create a ClusterIP Service to accomplish this. We can visualize this as such:

We create the ClusterIP Service configuration:

k8s/service-cluster-ip.yaml

apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  type: ClusterIP
  ports:
    - port: 5432
  selector:
    app: hellokubernetes-postgres


We apply:

kubectl apply -f service-cluster-ip.yaml

and view:

kubectl get service

Production Updates

The last step is to update the image, with the new code using the database, in the deployment serving up the API.

We first build a new image and tag it locally:

docker build --tag=hellokubernetes:1.1.0 .
docker tag hellokubernetes:1.1.0 gcr.io/[PROJECT-ID]/hellokubernetes:1.1.0

We then push the image to the Google Container Registry:

docker push gcr.io/[PROJECT-ID]/hellokubernetes:1.1.0

Finally we update the image in the deployment:

kubectl set image deployment/hellokubernetes hellokubernetes=gcr.io/[PROJECT-ID]/hellokubernetes:1.1.0
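We can watch the rollout complete with:

kubectl rollout status deployment/hellokubernetes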

We can now see that the API has the new code, e.g., below is the read endpoint being viewed (after having used the create endpoint).

Wrap Up

I decided to stop the series here; it covers most of the core concepts needed to get going with GKE.


Learn More

Learn Kubernetes from a DevOps guru (Kubernetes + Docker)

Learn DevOps: The Complete Kubernetes Course

Kubernetes for the Absolute Beginners - Hands-on

Complete DevOps Gitlab & Kubernetes: Best Practices Bootcamp

Learn DevOps: On-Prem or Cloud Agnostic Kubernetes

Master Jenkins CI For DevOps and Developers

Docker Technologies for DevOps and Developers

DevOps Toolkit: Learn Kubernetes with Practical Exercises!


An illustrated guide to Kubernetes Networking [Part 1]

You’ve been running a bunch of services on a Kubernetes cluster and reaping the benefits. Or at least, you’re planning to. Even though there are a bunch of tools available to setup and manage a cluster, you’ve still wondered how it all works under the hood. And where do you look if it breaks? I know I did.

Sure Kubernetes is simple enough to start using it. But let’s face it — it’s a complex beast under the hood. There are a lot of moving parts, and knowing how they all fit in and work together is a must, if you want to be ready for failures. One of the most complex, and probably the most critical parts is the Networking.

So I set out to understand exactly how the Networking in Kubernetes works. I read the docs, watched some talks, even browsed the codebase. And here is what I found out.

Kubernetes Networking Model

At its core, Kubernetes Networking has one important fundamental design philosophy:

Every Pod has a unique IP.
This Pod IP is shared by all the containers in this Pod, and it's routable from all the other Pods. Ever notice some "pause" containers running on your Kubernetes nodes? They are called "sandbox containers", whose only job is to reserve and hold a network namespace (netns) which is shared by all the containers in a pod. This way, a pod IP doesn't change even if a container dies and a new one is created in its place. A huge benefit of this IP-per-pod model is there are no IP or port collisions with the underlying host. And we don't have to worry about what port the applications use.

With this in place, the only requirement Kubernetes has is that these Pod IPs are routable/accessible from all the other pods, regardless of what node they’re on.

Intra-node communication

The first step is to make sure pods on the same node are able to talk to each other. The idea is then extended to communication across nodes, to the internet and so on.

On every Kubernetes node, which is a linux machine in this case, there’s a root network namespace (root as in base, not the superuser) — root netns.

The main network interface eth0 is in this root netns.

Similarly, each pod has its own netns, with a virtual ethernet pair connecting it to the root netns. This is basically a pipe-pair with one end in root netns, and other in the pod netns.

We name the pod-end eth0, so the pod doesn’t know about the underlying host and thinks that it has its own root network setup. The other end is named something like vethxxx.

You may list all these interfaces on your node using ifconfig or ip a commands.

This is done for all the pods on the node. For these pods to talk to each other, a linux ethernet bridge cbr0 is used. Docker uses a similar bridge named docker0.

You may list the bridges using brctl show command.

Assume a packet is going from pod1 to pod2

1. It leaves pod1’s netns at eth0 and enters the root netns at vethxxx.

2. It’s passed on to cbr0, which discovers the destination using an ARP request, saying “who has this IP?”

3. vethyyy says it has that IP, so the bridge knows where to forward the packet.

4. The packet reaches vethyyy, crosses the pipe-pair and reaches pod2’s netns.

This is how containers on a node talk to each other. Obviously there are other ways, but this is probably the easiest, and what docker uses as well.

Inter-node communication

As I mentioned earlier, pods need to be reachable across nodes as well. Kubernetes doesn’t care how it’s done. We can use L2 (ARP across nodes), L3 (IP routing across nodes — like the cloud provider route tables), overlay networks, or even carrier pigeons. It doesn’t matter as long as the traffic can reach the desired pod on another node. Every node is assigned a unique CIDR block (a range of IP addresses) for pod IPs, so each pod has a unique IP that doesn’t conflict with pods on another node.

In most of the cases, especially in cloud environments, the cloud provider route tables make sure the packets reach the correct destination. The same thing could be accomplished by setting up correct routes on every node. There are a bunch of other network plugins that do their own thing.

Here we have two nodes, similar to what we saw earlier. Each node has various network namespaces, network interfaces and a bridge.

Assume a packet is going from pod1 to pod4 (on a different node).

  1. It leaves pod1’s netns at eth0 and enters the root netns at vethxxx.
  2. It’s passed on to cbr0, which makes the ARP request to find the destination.
  3. It comes out of cbr0 to the main network interface eth0 since nobody on this node has the IP address for pod4.
  4. It leaves the machine node1 onto the wire with src=pod1 and dst=pod4.
  5. The route table has routes setup for each of the node CIDR blocks, and it routes the packet to the node whose CIDR block contains the pod4 IP.
  6. So the packet arrives at node2 at the main network interface eth0.
  7. Now even though the pod4 IP isn't the IP of eth0, the packet is still forwarded to cbr0 since the nodes are configured with IP forwarding enabled.
  8. The node’s routing table is looked up for any routes matching the pod4 IP. It finds cbr0 as the destination for this node’s CIDR block.
  9. You may list the node route table using the route -n command, which will show a route for cbr0 like this:

  10. The bridge takes the packet, makes an ARP request and finds out that the IP belongs to vethyyy.

  11. The packet crosses the pipe-pair and reaches pod4 🏠

An illustrated guide to Kubernetes Networking [Part 2]

We’ll expand on these ideas and see how the overlay networks work. We will also understand how the ever-changing pods are abstracted away from apps running in Kubernetes and handled behind the scenes.

Overlay networks

Overlay networks are not required by default, however, they help in specific situations. Like when we don’t have enough IP space, or network can’t handle the extra routes. Or maybe when we want some extra management features the overlays provide. One commonly seen case is when there’s a limit of how many routes the cloud provider route tables can handle. For example, AWS route tables support up to 50 routes without impacting network performance. So if we have more than 50 Kubernetes nodes, AWS route table won’t be enough. In such cases, using an overlay network helps.

It is essentially encapsulating a packet-in-packet which traverses the native network across nodes. You may not want to use an overlay network since it may cause some latency and complexity overhead due to encapsulation-decapsulation of all the packets. It’s often not needed, so we should use it only when we know why we need it.

To understand how traffic flows in an overlay network, let’s consider an example of flannel, which is an open-source project by CoreOS.

Here we see that it's the same setup as before, but with a new virtual ethernet device called flannel0 added to the root netns. It's an implementation of Virtual Extensible LAN (VXLAN), but to Linux, it's just another network interface.

The flow for a packet going from pod1 to pod4 (on a different node) is something like this:

  1. The packet leaves pod1’s netns at eth0 and enters the root netns at vethxxx.

  2. It’s passed on to cbr0, which makes the ARP request to find the destination.

3a. Since nobody on this node has the IP address for pod4, the bridge sends it to flannel0, because the node's route table is configured with flannel0 as the target for the pod network range.

3b. As the flanneld daemon talks to the Kubernetes apiserver or the underlying etcd, it knows about all the pod IPs, and what nodes they're on. So flannel creates the mappings (in userspace) for pod IPs to node IPs.

flannel0 takes this packet and wraps it in a UDP packet with extra headers, changing the source and destination IPs to the respective nodes, and sends it to a special VXLAN port (generally 8472).

Even though the mapping is in userspace, the actual encapsulation and data flow happens in kernel space. So it happens pretty fast.

3c. The encapsulated packet is sent out via eth0 since it is involved in routing the node traffic.

  4. The packet leaves the node with node IPs as source and destination.

  5. The cloud provider route table already knows how to route traffic between nodes, so it sends the packet to the destination node2.

6a. The packet arrives at eth0 of node2. Due to the port being the special VXLAN port, the kernel sends the packet to flannel0.

6b. flannel0 de-capsulates and emits it back in the root network namespace.

6c. Since IP forwarding is enabled, kernel forwards it to cbr0 as per the route tables.

  7. The bridge takes the packet, makes an ARP request and finds out that the IP belongs to vethyyy.

  8. The packet crosses the pipe-pair and reaches pod4 🏠

There could be slight differences among different implementations, but this is how overlay networks in Kubernetes work. There’s a common misconception that we have to use overlays when using Kubernetes. The truth is, it completely depends on the specific scenarios. So make sure you use it only when it’s absolutely needed.

An illustrated guide to Kubernetes Networking [Part 3]

Cluster dynamics

Due to the ever-changing, dynamic nature of Kubernetes (and distributed systems in general), the pods (and consequently their IPs) change all the time. Reasons could range from desired rolling updates and scaling events to unpredictable pod or node crashes. This makes Pod IPs unreliable for direct use in communications.

Enter Kubernetes Services — a virtual IP with a group of Pod IPs as endpoints (identified via label selectors). These act as a virtual load balancer, whose IP stays the same while the backend Pod IPs may keep changing.

The whole virtual IP implementation is actually iptables (the recent versions have an option of using IPVS, but that’s another discussion) rules, that are managed by the Kubernetes component — kube-proxy. This name is actually misleading now. It used to work as a proxy pre-v1.0 days, which turned out to be pretty resource intensive and slower due to constant copying between kernel space and user space. Now, it’s just a controller, like many other controllers in Kubernetes, that watches the api server for endpoints changes and updates the iptables rules accordingly.

Due to these iptables rules, whenever a packet is destined for a service IP, it’s DNATed (DNAT=Destination Network Address Translation), meaning the destination IP is changed from service IP to one of the endpoints — pod IP — chosen at random by iptables. This makes sure the load is evenly distributed among the backend pods.

When this DNAT happens, this info is stored in conntrack — the Linux connection tracking table (stores 5-tuple translations iptables has done: protocol, srcIP, srcPort, dstIP, dstPort). This is so that when a reply comes back, it can un-DNAT, meaning change the source IP from the Pod IP to the Service IP. This way, the client is unaware of how the packet flow is handled behind the scenes.

So by using Kubernetes services, we can use the same ports without any conflicts (since we can remap ports to endpoints). This makes service discovery super easy. We can just use the internal DNS and hard-code the service hostnames. We can even use the service host and port environment variables preset by Kubernetes.

Protip: Take this second approach and save a lot of unnecessary DNS calls!
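For example, for a Service named my-svc (a made-up name for illustration), Kubernetes injects variables following the <SERVICE_NAME>_SERVICE_HOST / <SERVICE_NAME>_SERVICE_PORT pattern into Pods started after the Service; you can inspect them with:

kubectl exec -it <pod-name> -- env | grep MY_SVC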

Outbound traffic

The Kubernetes services we’ve talked about so far work within a cluster. However, in most of the practical cases, applications need to access some external api/website.

Generally, nodes can have both private and public IPs. For internet access, there is some sort of 1:1 NAT of these public and private IPs, especially in cloud environments.

For normal communication from a node to some external IP, the source IP is changed from the node's private IP to its public IP for outbound packets, and reversed for reply inbound packets. However, when a connection to an external IP is initiated by a Pod, the source IP is the Pod IP, which the cloud provider's NAT mechanism doesn't know about. It will just drop packets with source IPs other than the node IPs.

So we use, you guessed it, some more iptables! These rules, also added by kube-proxy, do the SNAT (Source Network Address Translation) aka IP MASQUERADE. This tells the kernel to use IP of the interface this packet is going out from, in place of the source Pod IP. A conntrack entry is also kept to un-SNAT the reply.

Inbound traffic

Everything’s good so far. Pods can talk to each other, and to the internet. But we’re still missing a key piece — serving the user request traffic. As of now, there are two main ways to do this:

NodePort/Cloud Loadbalancer (L4 — IP and Port): Setting the service type to NodePort assigns the service a nodePort in the range 30000-32767. This nodePort is open on every node, even if there's no pod running on a particular node. Inbound traffic on this NodePort would be sent to one of the pods (it may even be on some other node!) using, again, iptables.

A service type of LoadBalancer in cloud environments would create a cloud load balancer (ELB, for example) in front of all the nodes, hitting the same nodePort.

Ingress (L7 — HTTP/TCP)

A bunch of different implementations, like nginx, traefik, haproxy, etc., keep a mapping of HTTP hostnames/paths to the respective backends. Traffic still enters via a load balancer and nodePort as usual, but the advantage is that we can have one ingress handling inbound traffic for all the services instead of requiring multiple nodePorts and load balancers.

Network Policy

Think of this like security groups/ACLs for pods. The NetworkPolicy rules allow/deny traffic across pods. The exact implementation depends on the network layer/CNI, but most of them just use iptables.

That's all for now. In the previous parts we studied the foundation of Kubernetes Networking and how overlays work. Now we know how the Service abstraction helps in a dynamic cluster and makes discovery super easy. We also covered how the outbound and inbound traffic flows work and how network policy is useful for security within a cluster.


Guide to Spring Cloud Kubernetes for Beginners

1. Overview

When we build a microservices solution, both Spring Cloud and Kubernetes are optimal solutions, as they provide components for resolving the most common challenges. However, if we decide to choose Kubernetes as the main container manager and deployment platform for our solution, we can still use Spring Cloud's interesting features mainly through the Spring Cloud Kubernetes project.

In this tutorial, we’ll:

  • Install Minikube on our local machine
  • Develop a microservices architecture example with two independent Spring Boot applications communicating through REST
  • Set up the application on a one-node cluster using Minikube
  • Deploy the application using YAML config files
2. Scenario

In our example, we're using the scenario of travel agents offering various deals to clients who will query the travel agents service from time to time. We'll use it to demonstrate:

  • service discovery through Spring Cloud Kubernetes
  • configuration management and injecting Kubernetes ConfigMaps and secrets to application pods using Spring Cloud Kubernetes Config
  • load balancing using Spring Cloud Kubernetes Ribbon
3. Environment Setup

First and foremost, we need to install Minikube on our local machine and preferably a VM driver such as VirtualBox. It's also recommended to look at Kubernetes and its main features before following this environment setup.

Let's start the local single-node Kubernetes cluster:

minikube start --vm-driver=virtualbox

This command creates a Virtual Machine that runs a Minikube cluster using the VirtualBox driver. The default context in kubectl will now be minikube. However, to be able to switch between contexts, we use:

kubectl config use-context minikube

After starting Minikube, we can connect to the Kubernetes dashboard to access the logs and monitor our services, pods, ConfigMaps, and Secrets easily:

minikube dashboard

3.1. Deployment

Firstly, let's get our example from GitHub.

At this point, we can either run the “deployment-travel-client.sh” script from the parent folder, or else execute each instruction one by one to get a good grasp of the procedure:

### build the repository
mvn clean install
 
### set docker env
eval $(minikube docker-env)
 
### build the docker images on minikube
cd travel-agency-service
docker build -t travel-agency-service .
cd ../client-service
docker build -t client-service .
cd ..
 
### secret and mongodb
kubectl delete -f travel-agency-service/secret.yaml
kubectl delete -f travel-agency-service/mongo-deployment.yaml
 
kubectl create -f travel-agency-service/secret.yaml
kubectl create -f travel-agency-service/mongo-deployment.yaml
 
### travel-agency-service
kubectl delete -f travel-agency-service/travel-agency-deployment.yaml
kubectl create -f travel-agency-service/travel-agency-deployment.yaml
 
### client-service
kubectl delete configmap client-service
kubectl delete -f client-service/client-service-deployment.yaml
 
kubectl create -f client-service/client-config.yaml
kubectl create -f client-service/client-service-deployment.yaml
 
# Check that the pods are running
kubectl get pods
4. Service Discovery

This project provides us with an implementation for the ServiceDiscovery interface in Kubernetes. In a microservices environment, there are usually multiple pods running the same service. Kubernetes exposes the service as a collection of endpoints that can be fetched and reached from within a Spring Boot application running in a pod in the same Kubernetes cluster.

For instance, in our example, we have multiple replicas of the travel agent service, which is accessed from our client service as http://travel-agency-service:8080. However, this internally would translate into accessing different pods such as travel-agency-service-7c9cfff655-4hxnp.

Spring Cloud Kubernetes Ribbon uses this feature to load balance between the different endpoints of a service.

We can easily use Service Discovery by adding the spring-cloud-starter-kubernetes dependency on our client application:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes</artifactId>
</dependency>

Also, we should add @EnableDiscoveryClient and inject the DiscoveryClient into the ClientController by using @Autowired in our class:

@SpringBootApplication
@EnableDiscoveryClient
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
@RestController
public class ClientController {
    @Autowired
    private DiscoveryClient discoveryClient;
}
5. ConfigMaps

Typically, microservices require some kind of configuration management. For instance, in Spring Cloud applications, we would use a Spring Cloud Config Server.

However, we can achieve this by using ConfigMaps provided by Kubernetes – provided that we intend to use it for non-sensitive, unencrypted information only. Alternatively, if the information we want to share is sensitive, then we should opt to use Secrets instead.

In our example, we're using ConfigMaps on the client-service Spring Boot application. Let's create a client-config.yaml file to define the ConfigMap of the client-service:

apiVersion: v1
kind: ConfigMap
metadata:
  name: client-service
data:
  application.properties: |-
    bean.message=Testing reload! Message from backend is: %s <br/> Services : %s

It's important that the name of the ConfigMap matches the name of the application as specified in our "application.properties" file. In this case, it's client-service. Next, we should create the ConfigMap for client-service on Kubernetes:

kubectl create -f client-config.yaml
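We can verify the result (a standard kubectl check, not part of the original script):

kubectl get configmap client-service -o yaml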

Now, let's create a configuration class ClientConfig with @Configuration and @ConfigurationProperties, and inject it into the ClientController:

@Configuration
@ConfigurationProperties(prefix = "bean")
public class ClientConfig {
 
    private String message = "Message from backend is: %s <br/> Services : %s";
 
    // getters and setters
}
@RestController
public class ClientController {
 
    @Autowired
    private ClientConfig config;
 
    @GetMapping
    public String load() {
        return String.format(config.getMessage(), "", "");
    }
}

If we don't specify a ConfigMap, then we should expect to see the default message, which is set in the class. However, when we create the ConfigMap, this default message gets overridden by that property.

Additionally, every time we decide to update the ConfigMap, the message on the page changes accordingly:

kubectl edit configmap client-service
6. Secrets

Let's look at how Secrets work by looking at the specification of MongoDB connection settings in our example. We're going to create environment variables on Kubernetes, which will then be injected into the Spring Boot application.

6.1. Create a Secret

The first step is to create a secret.yaml file, encoding the username and password in Base64:

apiVersion: v1
kind: Secret
metadata:
  name: db-secret
data:
  username: dXNlcg==
  password: cDQ1NXcwcmQ=
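For reference, these values are simply the Base64 encodings of user and p455w0rd; they can be produced with:

echo -n 'user' | base64
echo -n 'p455w0rd' | base64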

Let's apply the Secret configuration on the Kubernetes cluster:

kubectl apply -f secret.yaml

6.2. Create a MongoDB Service

We should now create the MongoDB service and the deployment travel-agency-deployment.yaml file. In particular, in the deployment part, we'll use the Secret username and password that we defined previously:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        service: mongo
      name: mongodb-service
    spec:
      containers:
      - args:
        - mongod
        - --smallfiles
        image: mongo:latest
        name: mongo
        env:
          - name: MONGO_INITDB_ROOT_USERNAME
            valueFrom:
              secretKeyRef:
                name: db-secret
                key: username
          - name: MONGO_INITDB_ROOT_PASSWORD
            valueFrom:
              secretKeyRef:
                name: db-secret
                key: password

By default, the mongo:latest image will create a user with username and password on a database named admin.

6.3. Setup MongoDB on Travel Agency Service

It's important to update the application properties to add the database related information. While we can freely specify the database name admin, here we're hiding the most sensitive information such as the username and the password:

spring.cloud.kubernetes.reload.enabled=true
spring.cloud.kubernetes.secrets.name=db-secret
spring.data.mongodb.host=mongodb-service
spring.data.mongodb.port=27017
spring.data.mongodb.database=admin
spring.data.mongodb.username=${MONGO_USERNAME}
spring.data.mongodb.password=${MONGO_PASSWORD}

Now, let's take a look at our travel-agency-deployment property file to update the services and deployments with the username and password information required to connect to the mongodb-service.

Here's the relevant section of the file, with the part related to the MongoDB connection:

env:
  - name: MONGO_USERNAME
    valueFrom:
      secretKeyRef:
        name: db-secret
        key: username
  - name: MONGO_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-secret
        key: password
7. Communication with Ribbon

In a microservices environment, we generally need the list of pods where our service is replicated in order to perform load-balancing. This is accomplished by using a mechanism provided by Spring Cloud Kubernetes Ribbon. This mechanism can automatically discover and reach all the endpoints of a specific service, and subsequently, it populates a Ribbon ServerList with information about the endpoints.

Let's start by adding the spring-cloud-starter-kubernetes-ribbon dependency to our client-service pom.xml file:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId>
</dependency>

The next step is to add the annotation @RibbonClient to our client-service application:

@RibbonClient(name = "travel-agency-service")

When the list of the endpoints is populated, the Kubernetes client will search the registered endpoints living in the current namespace/project matching the service name defined using the @RibbonClient annotation.

We also need to enable the ribbon client in the application properties:

ribbon.http.client.enabled=true
8. Additional Features

8.1. Hystrix

Hystrix helps in building a fault-tolerant and resilient application. Its main aims are fail fast and rapid recovery.

In particular, in our example, we're using Hystrix to implement the circuit breaker pattern on the client-service by annotating the Spring Boot application class with @EnableCircuitBreaker.

Additionally, we're using the fallback functionality by annotating the method TravelAgencyService.getDeals() with @HystrixCommand(). This means that in case of a fallback, getFallbackName() will be called and the "Fallback" message returned:

@HystrixCommand(fallbackMethod = "getFallbackName", commandProperties = { 
    @HystrixProperty(name = "execution.isolation.thread.timeoutInMilliseconds", value = "1000") })
public String getDeals() {
    return this.restTemplate.getForObject("http://travel-agency-service:8080/deals", String.class);
}
 
private String getFallbackName() {
    return "Fallback";
}

8.2. Pod Health Indicator

We can take advantage of Spring Boot HealthIndicator and Spring Boot Actuator to expose health-related information to the user.

In particular, the Kubernetes health indicator provides:

  • pod name
  • IP address
  • namespace
  • service account
  • node name
  • a flag that indicates whether the Spring Boot application is internal or external to Kubernetes
9. Conclusion

In this article, we provide a thorough overview of the Spring Cloud Kubernetes project.

So why should we use it? If we root for Kubernetes as a microservices platform but still appreciate the features of Spring Cloud, then Spring Cloud Kubernetes gives us the best of both worlds.

The full source code of the example is available over on GitHub.

Build Microservice Architecture With Kubernetes, Spring Boot, and Docker

In this article, we learn how to start a Spring Boot microservices project and run it quickly with Kubernetes and Docker.

The topics covered in this article are:

  • Using Spring Boot 2.0 in cloud-native development

  • Providing service discovery for all microservices using a Spring Cloud Kubernetes project

  • Injecting configuration settings into application pods using Kubernetes Config Maps and Secrets

  • Building application images using Docker and deploying them on Kubernetes using YAML configuration files

  • Using Spring Cloud Kubernetes together with a Zuul proxy to expose a single Swagger API documentation for all microservices

Spring Cloud and Kubernetes may be treated as competing solutions when you build a microservices environment. Components like Eureka, Spring Cloud Config, or Zuul provided by Spring Cloud may be replaced by built-in Kubernetes objects like services, config maps, secrets, or ingresses. But even if you decide to use Kubernetes components instead of Spring Cloud, you can take advantage of some interesting features provided throughout the whole Spring Cloud project.

The one really interesting project that helps us in development is Spring Cloud Kubernetes. Although it is still in the incubation stage, it is definitely worth dedicating some time to it. It integrates Spring Cloud with Kubernetes. I'll show you how to use an implementation of the discovery client, inter-service communication with the Ribbon client, and Zipkin discovery using Spring Cloud Kubernetes.

Before we proceed to the source code, let's take a look at the following diagram. It illustrates the architecture of our sample system. It is quite similar to the architecture presented in the mentioned article about microservices on Spring Cloud. There are three independent applications (employee-service, department-service, organization-service), which communicate with each other through a REST API. These Spring Boot microservices use some built-in mechanisms provided by Kubernetes: config maps and secrets for distributed configuration, etcd for service discovery, and ingresses for the API gateway.

Let's proceed to the implementation. Currently, the newest stable version of Spring Cloud is Finchley.RELEASE. This version of spring-cloud-dependencies should be declared as a BOM for dependency management.

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-dependencies</artifactId>
      <version>Finchley.RELEASE</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

Spring Cloud Kubernetes is not released under the Spring Cloud Release Trains, so we need to define its version explicitly. Because we use Spring Boot 2.0, we have to include the newest SNAPSHOT version of the spring-cloud-kubernetes artifacts, which is 0.3.0.BUILD-SNAPSHOT.

The source code of sample applications presented in this article is available on GitHub in this repository.

Prerequisites

In order to be able to deploy and test our sample microservices, we need to prepare a development environment. We can do that in the following steps:

  • You need at least a single-node cluster instance of Kubernetes (Minikube) or OpenShift (Minishift) running on your local machine. You should start it and expose the embedded Docker client provided by both of them. The detailed instructions for Minishift may be found in my Quick guide to deploying Java apps on OpenShift. You can also use that description to run Minikube; just replace the word "minishift" with "minikube." In fact, it does not matter whether you choose Kubernetes or OpenShift; the next part of this tutorial is applicable to both of them.

  • Spring Cloud Kubernetes requires access to the Kubernetes API in order to be able to retrieve a list of addresses for pods running for a single service. If you use Kubernetes, you should just execute the following command:

$ kubectl create clusterrolebinding admin --clusterrole=cluster-admin --serviceaccount=default:default

If you deploy your microservices on Minishift, you should first enable admin-user add-on, then log in as a cluster admin and grant the required permissions.

$ minishift addons enable admin-user
$ oc login -u system:admin
$ oc policy add-role-to-user cluster-reader system:serviceaccount:myproject:default
The sample microservices store their data in MongoDB, which is deployed to the cluster with the following Deployment and Service definitions.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo:latest
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mongodb
              key: database-name
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-user
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  ports:
  - port: 27017
    protocol: TCP
  selector:
    app: mongodb
1. Inject the Configuration With Config Maps and Secrets

When using Spring Cloud, the most obvious choice for realizing a distributed configuration in your system is Spring Cloud Config. With Kubernetes, you can use a ConfigMap instead. It holds key-value pairs of configuration data that can be consumed in pods, and it is meant for storing and sharing non-sensitive, unencrypted configuration information. For sensitive information in your clusters, you must use Secrets. The use of both of these Kubernetes objects can be nicely demonstrated with MongoDB connection settings. Inside a Spring Boot application, we can easily inject them using environment variables. Here's a fragment of the application.yml file with the URI configuration.

spring:
  data:
    mongodb:
      uri: mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongodb/${MONGO_DATABASE}

While username and password are sensitive fields, a database name is not, so we can place it inside the config map.

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb
data:
  database-name: microservices

Of course, username and password are defined as secrets.

apiVersion: v1
kind: Secret
metadata:
  name: mongodb
type: Opaque
data:
  database-password: MTIzNDU2
  database-user: cGlvdHI=
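
Note that the data values in a Secret are base64-encoded rather than encrypted. As a quick illustration (a plain Java sketch, not part of the sample project), the values in the manifest above can be reproduced like this:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class EncodeSecretValues {
    public static void main(String[] args) {
        // Secret data must be base64-encoded; these calls reproduce the values above.
        System.out.println(Base64.getEncoder()
                .encodeToString("piotr".getBytes(StandardCharsets.UTF_8)));   // cGlvdHI=
        System.out.println(Base64.getEncoder()
                .encodeToString("123456".getBytes(StandardCharsets.UTF_8)));  // MTIzNDU2
    }
}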

To apply the configuration to the Kubernetes cluster, we run the following commands.

$ kubectl apply -f kubernetes/mongodb-configmap.yaml
$ kubectl apply -f kubernetes/mongodb-secret.yaml

After that, we should inject the configuration properties into the application's pods. When defining the container configuration inside the Deployment YAML file, we have to include references to environment variables and secrets, as shown below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: employee
  labels:
    app: employee
spec:
  replicas: 1
  selector:
    matchLabels:
      app: employee
  template:
    metadata:
      labels:
        app: employee
    spec:
      containers:
      - name: employee
        image: piomin/employee:1.0
        ports:
        - containerPort: 8080
        env:
        - name: MONGO_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mongodb
              key: database-name
        - name: MONGO_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-user
        - name: MONGO_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-password
2. Building Service Discovery With Kubernetes

We usually run microservices on Kubernetes using Docker containers. One or more containers are grouped into pods, which are the smallest deployable units created and managed in Kubernetes. A good practice is to run only one container inside a single pod. If you would like to scale up your microservice, you just have to increase the number of running pods. All running pods that belong to a single microservice are logically grouped by a Kubernetes Service. This service may be visible outside the cluster and is able to load balance incoming requests between all running pods. The following service definition groups all pods labeled with the field app equal to employee.

apiVersion: v1
kind: Service
metadata:
  name: employee
  labels:
    app: employee
spec:
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: employee

A Service can be used to access the application outside the Kubernetes cluster or for inter-service communication inside a cluster. However, the communication between microservices can be implemented more comfortably with Spring Cloud Kubernetes. First, we need to include the following dependency in the project pom.xml.

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-kubernetes</artifactId>
  <version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>

Then we should enable the discovery client for an application, the same as we have always done for discovery in Spring Cloud Netflix Eureka. This allows you to query Kubernetes endpoints (services) by name. This discovery feature is also used by Spring Cloud Kubernetes Ribbon or Zipkin projects to fetch, respectively, the list of the pods defined for a microservice to be load balanced or the Zipkin servers available to send the traces or spans.

@SpringBootApplication
@EnableDiscoveryClient
@EnableMongoRepositories
@EnableSwagger2
public class EmployeeApplication {
 public static void main(String[] args) {
  SpringApplication.run(EmployeeApplication.class, args);
 }
 // ...
}
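
To see the discovery client in action, you could expose a simple endpoint that lists the Kubernetes services visible to the application. The controller below is only a sketch and is not part of the sample repository:

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DiscoveryController {

    @Autowired
    private DiscoveryClient discoveryClient;

    // Returns the names of the Kubernetes services detected by Spring Cloud Kubernetes.
    @GetMapping("/discovery/services")
    public List<String> services() {
        return discoveryClient.getServices();
    }
}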

The last important thing in this section is to guarantee that the Spring application name will be exactly the same as the Kubernetes service name for the application. For the application employee-service, it is employee.

spring:
  application:
    name: employee
3. Building Microservices Using Docker and Deploying on Kubernetes

There is nothing unusual in our sample microservices. We have included some standard Spring dependencies for building REST-based microservices, integrating with MongoDB, and generating API documentation using Swagger2.

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
  <groupId>io.springfox</groupId>
  <artifactId>springfox-swagger2</artifactId>
  <version>2.9.2</version>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>

In order to integrate with MongoDB, we should create an interface that extends standard Spring Data CrudRepository.

public interface EmployeeRepository extends CrudRepository<Employee, String> {
 List<Employee> findByDepartmentId(Long departmentId);
 List<Employee> findByOrganizationId(Long organizationId);
}

The entity class should be annotated with Mongo @Document and a primary key field with @Id.

@Document(collection = "employee")
public class Employee {
 @Id
 private String id;
 private Long organizationId;
 private Long departmentId;
 private String name;
 private int age;
 private String position;
 // ...
}

The repository bean has been injected into the controller class. Here's the full implementation of our REST API inside employee-service.

@RestController
public class EmployeeController {
 private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);
 @Autowired
 EmployeeRepository repository;
 @PostMapping("/")
 public Employee add(@RequestBody Employee employee) {
  LOGGER.info("Employee add: {}", employee);
  return repository.save(employee);
 }
 @GetMapping("/{id}")
 public Employee findById(@PathVariable("id") String id) {
  LOGGER.info("Employee find: id={}", id);
  return repository.findById(id).get();
 }
 @GetMapping("/")
 public Iterable<Employee> findAll() {
  LOGGER.info("Employee find");
  return repository.findAll();
 }
 @GetMapping("/department/{departmentId}")
 public List<Employee> findByDepartment(@PathVariable("departmentId") Long departmentId) {
  LOGGER.info("Employee find: departmentId={}", departmentId);
  return repository.findByDepartmentId(departmentId);
 }
 @GetMapping("/organization/{organizationId}")
 public List<Employee> findByOrganization(@PathVariable("organizationId") Long organizationId) {
  LOGGER.info("Employee find: organizationId={}", organizationId);
  return repository.findByOrganizationId(organizationId);
 }
}

In order to run our microservices on Kubernetes, we should first build the whole Maven project with the mvn clean install command. Each microservice has a Dockerfile placed in the root directory. Here's the Dockerfile definition for employee-service.

FROM openjdk:8-jre-alpine
ENV APP_FILE employee-service-1.0-SNAPSHOT.jar
ENV APP_HOME /usr/apps
EXPOSE 8080
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]

Let's build Docker images for all three sample microservices.

$ cd employee-service
$ docker build -t piomin/employee:1.0 .
$ cd ../department-service
$ docker build -t piomin/department:1.0 .
$ cd ../organization-service
$ docker build -t piomin/organization:1.0 .

The last step is to deploy the Docker containers with the applications on Kubernetes. To do that, just execute kubectl apply on the YAML configuration files. The sample deployment file for employee-service was shown in step 1. All required deployment files are available inside the project repository in the kubernetes directory.

$ kubectl apply -f kubernetes\employee-deployment.yaml
$ kubectl apply -f kubernetes\department-deployment.yaml
$ kubectl apply -f kubernetes\organization-deployment.yaml
4. Communication Between Microservices With Spring Cloud Kubernetes Ribbon

All the microservices are deployed on Kubernetes. Now, it's worth discussing some aspects related to inter-service communication. The application employee-service, in contrast to the other microservices, does not invoke any other microservice. Let's take a look at the other microservices that call the API exposed by employee-service and communicate with each other (organization-service calls the department-service API).

First, we need to include some additional dependencies in the project. We use Spring Cloud Ribbon and OpenFeign. Alternatively, you can also use Spring's @LoadBalanced RestTemplate.

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId>
  <version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>

Here's the main class of department-service. It enables Feign client using the @EnableFeignClients annotation. It works the same as with discovery based on Spring Cloud Netflix Eureka. OpenFeign uses Ribbon for client-side load balancing. Spring Cloud Kubernetes Ribbon provides some beans that force Ribbon to communicate with the Kubernetes API through Fabric8 KubernetesClient.

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
@EnableMongoRepositories
@EnableSwagger2
public class DepartmentApplication {
 public static void main(String[] args) {
  SpringApplication.run(DepartmentApplication.class, args);
 }
 // ...
}

Here's the implementation of Feign client for calling the method exposed by employee-service.

@FeignClient(name = "employee")
public interface EmployeeClient {
 @GetMapping("/department/{departmentId}")
 List<Employee> findByDepartment(@PathVariable("departmentId") String departmentId);
}

Finally, we have to inject the Feign client bean into the REST controller. Now, we may call the methods defined inside EmployeeClient, which is equivalent to calling REST endpoints.

@RestController
public class DepartmentController {
 private static final Logger LOGGER = LoggerFactory.getLogger(DepartmentController.class);
 @Autowired
 DepartmentRepository repository;
 @Autowired
 EmployeeClient employeeClient;
 // ...
 @GetMapping("/organization/{organizationId}/with-employees")
 public List<Department> findByOrganizationWithEmployees(@PathVariable("organizationId") Long organizationId) {
  LOGGER.info("Department find: organizationId={}", organizationId);
  List<Department> departments = repository.findByOrganizationId(organizationId);
  departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
  return departments;
 }
}
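
For completeness, the @LoadBalanced RestTemplate alternative mentioned earlier could look like the sketch below. The configuration class and the sample call are illustrative and are not part of the sample repository.

import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestTemplateConfig {

    // The Ribbon-backed RestTemplate resolves the Kubernetes service name to pod addresses.
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

// Example usage inside a service or controller (hypothetical call):
// Employee[] employees = restTemplate.getForObject(
//         "http://employee/department/{departmentId}", Employee[].class, departmentId);
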
5. Building API Gateway Using Kubernetes Ingress

Ingress is a collection of rules that allow incoming requests to reach the downstream services. In our microservices architecture, the ingress plays the role of an API gateway. To create it, we should first prepare a YAML description file. The descriptor file should contain the hostname under which the gateway will be available and the mapping rules to the downstream services.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
  - host: microservices.info
    http:
      paths:
      - path: /employee
        backend:
          serviceName: employee
          servicePort: 8080
      - path: /department
        backend:
          serviceName: department
          servicePort: 8080
      - path: /organization
        backend:
          serviceName: organization
          servicePort: 8080

You have to execute the following command to apply the configuration above to the Kubernetes cluster.

$ kubectl apply -f kubernetes\ingress.yaml

To test this solution locally, we have to add a mapping between the IP address and the hostname set in the ingress definition to the hosts file, as shown below. After that, we can test the services through the ingress using the defined hostname, for example: http://microservices.info/employee.

192.168.99.100 microservices.info

You can check the details of the created ingress just by executing the command kubectl describe ing gateway-ingress.

6. Enabling API Specification on the Gateway Using Swagger2

What if we would like to expose a single Swagger documentation for all microservices deployed on Kubernetes? Well, here things get complicated... We could run a container with Swagger UI and map all the paths exposed by the ingress manually, but that is not a good solution...

In that case, we can use Spring Cloud Kubernetes Ribbon one more time, this time together with Spring Cloud Netflix Zuul. Zuul will act as a gateway only for serving the Swagger API.
Here's the list of dependencies used in my gateway-service project.

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-netflix-zuul</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-kubernetes</artifactId>
  <version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId>
  <version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
  <groupId>io.springfox</groupId>
  <artifactId>springfox-swagger-ui</artifactId>
  <version>2.9.2</version>
</dependency>
<dependency>
  <groupId>io.springfox</groupId>
  <artifactId>springfox-swagger2</artifactId>
  <version>2.9.2</version>
</dependency>

The Kubernetes discovery client will detect all services exposed in the cluster. We would like to display documentation only for our three microservices. That's why I defined the following routes for Zuul.

zuul:
  routes:
    department:
      path: /department/**
    employee:
      path: /employee/**
    organization:
      path: /organization/**

Now we can use the ZuulProperties bean to get the routes' addresses from Kubernetes discovery and configure them as Swagger resources, as shown below.

@Configuration
public class GatewayApi {
 @Autowired
 ZuulProperties properties;
 @Primary
 @Bean
 public SwaggerResourcesProvider swaggerResourcesProvider() {
  return () -> {
   List<SwaggerResource> resources = new ArrayList<>();
   properties.getRoutes().values().stream()
   .forEach(route -> resources.add(createResource(route.getId(), "2.0")));
   return resources;
  };
 }
 private SwaggerResource createResource(String location, String version) {
  SwaggerResource swaggerResource = new SwaggerResource();
  swaggerResource.setName(location);
  swaggerResource.setLocation("/" + location + "/v2/api-docs");
  swaggerResource.setSwaggerVersion(version);
  return swaggerResource;
 }
}
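
The gateway application itself is a standard Zuul proxy enriched with the Kubernetes discovery client; its main class might look like the following sketch (assumed, since the article does not show it):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

import springfox.documentation.swagger2.annotations.EnableSwagger2;

// Zuul routes the /employee, /department, and /organization paths defined above
// and serves the aggregated Swagger documentation.
@SpringBootApplication
@EnableDiscoveryClient
@EnableZuulProxy
@EnableSwagger2
public class GatewayApplication {
    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}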

The application gateway-service should be deployed on the cluster in the same way as the other applications. You can see the list of running services by executing the command kubectl get svc. The Swagger documentation is available under the address http://192.168.99.100:31237/swagger-ui.html.

Learn More

Thanks for reading!

Originally published by Piotr Mińkowski at dzone.com

Deploying a full-stack App with Spring Boot, MySQL, React on Kubernetes

Deploying a full-stack App with Spring Boot, MySQL, React on Kubernetes

In this article, you’ll learn how to deploy a Stateful app built with Spring Boot, MySQL, and React on Kubernetes. We’ll use a local minikube cluster to deploy the application.

Introduction

In this article, you’ll learn how to deploy a Stateful app built with Spring Boot, MySQL, and React on Kubernetes. We’ll use a local minikube cluster to deploy the application. Please make sure that you have kubectl and minikube installed in your system.

It is a full-stack Polling app where users can log in, create a Poll, and vote for a Poll.

To deploy this application, we'll use a few additional Kubernetes concepts called PersistentVolumes and Secrets. Let's first get a basic understanding of these concepts before moving on to the hands-on deployment guide.

Kubernetes Persistent Volume

We’ll use Kubernetes Persistent Volumes to deploy MySQL. A PersistentVolume (PV) is a piece of storage in the cluster. It is a resource in the cluster just like a node. A PersistentVolume’s lifecycle is independent of Pod lifecycles. It preserves data through the restarting, rescheduling, and even deletion of Pods.

PersistentVolumes are consumed by something called a PersistentVolumeClaim (PVC). A PVC is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). PVCs can request specific size and access modes (e.g. read-write or read-only).

Kubernetes Secrets

We’ll make use of Kubernetes Secrets to store the database credentials. A Secret is an object in Kubernetes that lets you store and manage sensitive information, such as passwords, tokens, and ssh keys. Secrets are stored in Kubernetes’ backing store, etcd. You can enable encryption to store secrets in encrypted form in etcd.

Deploying MySQL on Kubernetes using PersistentVolume and Secrets

Following is the Kubernetes manifest for MySQL deployment. I’ve added comments alongside each configuration to make sure that its usage is clear to you.

apiVersion: v1
kind: PersistentVolume            # Create a PersistentVolume
metadata:
  name: mysql-pv
  labels:
    type: local
spec:
  storageClassName: standard      # Storage class. A PV Claim requesting the same storageClass can be bound to this volume. 
  capacity:
    storage: 250Mi
  accessModes:
    - ReadWriteOnce
  hostPath:                       # hostPath PersistentVolume is used for development and testing. It uses a file/directory on the Node to emulate network-attached storage
    path: "/mnt/data"
  persistentVolumeReclaimPolicy: Retain  # Retain the PersistentVolume even after PersistentVolumeClaim is deleted. The volume is considered “released”. But it is not yet available for another claim because the previous claimant’s data remains on the volume. 
---    
apiVersion: v1
kind: PersistentVolumeClaim        # Create a PersistentVolumeClaim to request a PersistentVolume storage
metadata:                          # Claim name and labels
  name: mysql-pv-claim
  labels:
    app: polling-app
spec:                              # Access mode and resource limits
  storageClassName: standard       # Request a certain storage class
  accessModes:
    - ReadWriteOnce                # ReadWriteOnce means the volume can be mounted as read-write by a single Node
  resources:
    requests:
      storage: 250Mi
---
apiVersion: v1                    # API version
kind: Service                     # Type of kubernetes resource 
metadata:
  name: polling-app-mysql         # Name of the resource
  labels:                         # Labels that will be applied to the resource
    app: polling-app
spec:
  ports:
    - port: 3306
  selector:                       # Selects any Pod with labels `app=polling-app,tier=mysql`
    app: polling-app
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment                    # Type of the kubernetes resource
metadata:
  name: polling-app-mysql           # Name of the deployment
  labels:                           # Labels applied to this deployment 
    app: polling-app
spec:
  selector:
    matchLabels:                    # This deployment applies to the Pods matching the specified labels
      app: polling-app
      tier: mysql
  strategy:
    type: Recreate
  template:                         # Template for the Pods in this deployment
    metadata:
      labels:                       # Labels to be applied to the Pods in this deployment
        app: polling-app
        tier: mysql
    spec:                           # The spec for the containers that will be run inside the Pods in this deployment
      containers:
      - image: mysql:5.6            # The container image
        name: mysql
        env:                        # Environment variables passed to the container 
        - name: MYSQL_ROOT_PASSWORD 
          valueFrom:                # Read environment variables from kubernetes secrets
            secretKeyRef:
              name: mysql-root-pass
              key: password
        - name: MYSQL_DATABASE
          valueFrom:
            secretKeyRef:
              name: mysql-db-url
              key: database
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: mysql-user-pass
              key: username
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-user-pass
              key: password
        ports:
        - containerPort: 3306        # The port that the container exposes       
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage  # This name should match the name specified in `volumes.name`
          mountPath: /var/lib/mysql
      volumes:                       # A PersistentVolume is mounted as a volume to the Pod  
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim

We’re creating four resources in the above manifest file. A PersistentVolume, a PersistentVolumeClaim for requesting access to the PersistentVolume resource, a service for having a static endpoint for the MySQL database, and a deployment for running and managing the MySQL pod.

The MySQL container reads database credentials from environment variables. The environment variables access these credentials from Kubernetes secrets.

Let’s start a minikube cluster, create Kubernetes secrets to store the database credentials, and deploy the MySQL instance:

Starting a Minikube cluster

$ minikube start

Creating the secrets

You can create secrets manually from a literal or file using the kubectl create secret command, or you can create them from a generator using Kustomize.

In this article, we’re gonna create the secrets manually:

$ kubectl create secret generic mysql-root-pass --from-literal=password=R00t
secret/mysql-root-pass created

$ kubectl create secret generic mysql-user-pass --from-literal=username=callicoder --from-literal=password=<db-password>
secret/mysql-user-pass created

$ kubectl create secret generic mysql-db-url --from-literal=database=polls --from-literal=url='jdbc:mysql://polling-app-mysql:3306/polls?useSSL=false&serverTimezone=UTC&useLegacyDatetimeCode=false'
secret/mysql-db-url created

You can get the secrets like this -

$ kubectl get secrets
NAME                         TYPE                                  DATA   AGE
default-token-tkrx5          kubernetes.io/service-account-token   3      3d23h
mysql-db-url                 Opaque                                2      2m32s
mysql-root-pass              Opaque                                1      3m19s
mysql-user-pass              Opaque                                2      3m6s

You can also find more details about a secret like so -

$ kubectl describe secrets mysql-user-pass
Name:         mysql-user-pass
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
username:  10 bytes
password:  10 bytes

Deploying MySQL

Let’s now deploy MySQL by applying the yaml configuration -

$ kubectl apply -f deployments/mysql-deployment.yaml
service/polling-app-mysql created
persistentvolumeclaim/mysql-pv-claim created
deployment.apps/polling-app-mysql created

That’s it! You can check all the resources created in the cluster using the following commands -

$ kubectl get persistentvolumes
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
mysql-pv   250Mi      RWO            Retain           Bound    default/mysql-pv-claim   standard                30s

$ kubectl get persistentvolumeclaims
NAME             STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim   Bound    mysql-pv   250Mi      RWO            standard       50s

$ kubectl get services
NAME                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes          ClusterIP   10.96.0.1    <none>        443/TCP    5m36s
polling-app-mysql   ClusterIP   None         <none>        3306/TCP   2m57s

$ kubectl get deployments
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
polling-app-mysql   1/1     1            1           3m14s

Logging into the MySQL pod

You can get the MySQL pod name and use the kubectl exec command to log in to the Pod.

$ kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
polling-app-mysql-6b94bc9d9f-td6l4   1/1     Running   0          4m23s

$ kubectl exec -it polling-app-mysql-6b94bc9d9f-td6l4 -- /bin/bash
root@polling-app-mysql-6b94bc9d9f-td6l4:/#

Deploying the Spring Boot app on Kubernetes

All right! Now that we have the MySQL instance deployed, let's proceed with the deployment of the Spring Boot app.

Following is the deployment manifest for the Spring Boot app -

---
apiVersion: apps/v1           # API version
kind: Deployment              # Type of kubernetes resource
metadata:
  name: polling-app-server    # Name of the kubernetes resource
  labels:                     # Labels that will be applied to this resource
    app: polling-app-server
spec:
  replicas: 1                 # No. of replicas/pods to run in this deployment
  selector:
    matchLabels:              # The deployment applies to any pods matching the specified labels
      app: polling-app-server
  template:                   # Template for creating the pods in this deployment
    metadata:
      labels:                 # Labels that will be applied to each Pod in this deployment
        app: polling-app-server
    spec:                     # Spec for the containers that will be run in the Pods
      containers:
      - name: polling-app-server
        image: callicoder/polling-app-server:1.0.0
        imagePullPolicy: IfNotPresent
        ports:
          - name: http
            containerPort: 8080 # The port that the container exposes
        resources:
          limits:
            cpu: 0.2
            memory: "200Mi"
        env:                  # Environment variables supplied to the Pod
        - name: SPRING_DATASOURCE_USERNAME # Name of the environment variable
          valueFrom:          # Get the value of environment variable from kubernetes secrets
            secretKeyRef:
              name: mysql-user-pass
              key: username
        - name: SPRING_DATASOURCE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-user-pass
              key: password
        - name: SPRING_DATASOURCE_URL
          valueFrom:
            secretKeyRef:
              name: mysql-db-url
              key: url
---
apiVersion: v1                # API version
kind: Service                 # Type of the kubernetes resource
metadata:                     
  name: polling-app-server    # Name of the kubernetes resource
  labels:                     # Labels that will be applied to this resource
    app: polling-app-server
spec:                         
  type: NodePort              # The service will be exposed by opening a Port on each node and proxying it. 
  selector:
    app: polling-app-server   # The service exposes Pods with label `app=polling-app-server`
  ports:                      # Forward incoming connections on port 8080 to the target port 8080
  - name: http
    port: 8080
    targetPort: 8080

The above deployment uses the Secrets stored in mysql-user-pass and mysql-db-url that we created in the previous section.
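
The SPRING_DATASOURCE_* variables are picked up by Spring Boot's relaxed binding and mapped to spring.datasource.url, spring.datasource.username, and spring.datasource.password. If you want to verify this at startup, a small runner like the hypothetical sketch below (not part of the sample project) could log the resolved URL:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

@Component
public class DataSourceUrlLogger implements CommandLineRunner {

    // Resolved from the SPRING_DATASOURCE_URL environment variable injected by the Deployment above.
    @Value("${spring.datasource.url}")
    private String dataSourceUrl;

    @Override
    public void run(String... args) {
        System.out.println("Using datasource URL: " + dataSourceUrl);
    }
}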

Let’s apply the manifest file to create the resources -

$ kubectl apply -f deployments/polling-app-server.yaml
deployment.apps/polling-app-server created
service/polling-app-server created

You can check the created Pods like this -

$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
polling-app-mysql-6b94bc9d9f-td6l4    1/1     Running   0          21m
polling-app-server-744b47f866-s2bpf   1/1     Running   0          31s

Now, type the following command to get the polling-app-server service URL -

$ minikube service polling-app-server --url
http://192.168.99.100:31550

You can now use the above endpoint to interact with the service -

$ curl http://192.168.99.100:31550
{"timestamp":"2019-07-30T17:55:11.366+0000","status":404,"error":"Not Found","message":"No message available","path":"/"}

Deploying the React app on Kubernetes

Finally, let’s deploy the frontend app using Kubernetes. Here is the deployment manifest -

apiVersion: apps/v1             # API version
kind: Deployment                # Type of kubernetes resource
metadata:
  name: polling-app-client      # Name of the kubernetes resource
spec:
  replicas: 1                   # No of replicas/pods to run
  selector:                     
    matchLabels:                # This deployment applies to Pods matching the specified labels
      app: polling-app-client
  template:                     # Template for creating the Pods in this deployment
    metadata:
      labels:                   # Labels that will be applied to all the Pods in this deployment
        app: polling-app-client
    spec:                       # Spec for the containers that will run inside the Pods
      containers:
      - name: polling-app-client
        image: callicoder/polling-app-client:1.0.0
        imagePullPolicy: IfNotPresent
        ports:
          - name: http
            containerPort: 80   # Should match the Port that the container listens on
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
---
apiVersion: v1                  # API version
kind: Service                   # Type of kubernetes resource
metadata:
  name: polling-app-client      # Name of the kubernetes resource
spec:
  type: NodePort                # Exposes the service by opening a port on each node
  selector:
    app: polling-app-client     # Any Pod matching the label `app=polling-app-client` will be picked up by this service
  ports:                        # Forward incoming connections on port 80 to the target port 80 in the Pod
  - name: http
    port: 80
    targetPort: 80

Let’s apply the above manifest file to deploy the frontend app -

$ kubectl apply -f deployments/polling-app-client.yaml
deployment.apps/polling-app-client created
service/polling-app-client created

Let’s check all the Pods in the cluster -

$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
polling-app-client-6b6d979b-7pgxq     1/1     Running   0          26m
polling-app-mysql-6b94bc9d9f-td6l4    1/1     Running   0          21m
polling-app-server-744b47f866-s2bpf   1/1     Running   0          31s

Type the following command to open the frontend service in the default browser -

$ minikube service polling-app-client

You’ll notice that the backend API calls from the frontend app are failing because the frontend app tries to access the backend APIs at localhost:8080. Ideally, in a real-world setup, you’d have a public domain for your backend server. But since our entire setup is installed locally, we can use the kubectl port-forward command to map the localhost:8080 endpoint to the backend service -

$ kubectl port-forward service/polling-app-server 8080:8080

That’s it! Now, you’ll be able to use the frontend app. Here is how the app looks -


Docker Tutorial for Beginners - Learn Docker in 2020

Docker Tutorial for Beginners - Learn Docker in 2020

In this course, you will learn Docker through a series of lectures that use animation, illustration, and some fun analogies that simplify complex concepts. We have demos that will show how to install and get started with Docker, and most importantly, we have hands-on labs that you can access right in your browser.

In this course, you will learn Docker through a series of lectures that use animation, illustration, and some fun analogies that simplify complex concepts. We have demos that will show how to install and get started with Docker, and most importantly, we have hands-on labs that you can access right in your browser.

In this tutorial, you will learn:

  1. What is Docker ?
  2. How DOCKER works | Docker Architecture
  3. Benefits of DOCKER | Why to use DOCKER | Advantages of DOCKER
  4. How to install DOCKER on LINUX ? Step by Step
  5. How to install DOCKER on WINDOWS ? Step by Step
  6. How to install DOCKER on MAC ? Step by Step | How to Install Docker
  7. Docker FAQ | Docker Interview Questions | Docker for Beginners
  8. What are Docker Images | How to run Docker Images | Docker Images Beginner Tutorial
  9. What are Docker Containers | How to create Docker Containers | Basic Commands
  10. How to run Jenkins on Docker container | How to create Jenkins Volumes on Docker | Beginners
  11. What is Dockerfile | How to create and build Dockerfile | Dockerfile Basic Commands
  12. What is Docker Compose | How to create docker compose file | How to use Compose
  13. What is Docker Volume | How to create Volumes | What is Bind Mount | Docker Storage
1. What is Docker ?

Docker is the world’s leading software container platform
Docker makes the process of application deployment very easy and efficient and resolves a lot of issues related to deploying applications

Docker is a tool designed to make it easier to deploy and run applications by using containers

Docker gives you a standard way of packaging your application with all its dependencies in a container

Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package.

Understand Docker with an analogy to the shipping industry

How a real-world problem was resolved using containers

2. How DOCKER works | Docker Architecture
  • How Docker works ?
  • Understand a general workflow of docker
  • Difference between virtualization and containerization
  • Understand docker client server architecture
  • Understand:
  • Docker file
  • Docker images
  • Docker Containers
  • Docker Hub / Registry
  • Docker client
  • Docker server / daemon
  • Docker engine
3. Benefits of DOCKER | Why to use DOCKER | Advantages of DOCKER
  • Benefits of using Docker
  • Build app only once
  • No worries that the application will not perform the same way it did on testing env
  • Portability
  • Version Control
  • Isolation
  • Productivity
  • Docker simplifies
  • DevOps
4. How to install DOCKER on LINUX ? Step by Step
5. How to install DOCKER on WINDOWS ? Step by Step
6. How to install DOCKER on MAC ? Step by Step | How to Install Docker
7. Docker FAQ | Docker Interview Questions | Docker for Beginners
8. What are Docker Images | How to run Docker Images | Docker Images Beginner Tutorial
  • What are images
  • How to pull image
  • How to run a container using an image
  • Basic Commands
9. What are Docker Containers | How to create Docker Containers | Basic Commands
  • What are Containers
  • How to create Containers
  • How to start / stop Containers
  • Basic Commands
10. How to run Jenkins on Docker container | How to create Jenkins Volumes on Docker | Beginners
  1. How to start Jenkins on Docker Container
  2. Start and Stop Jenkins Container
  3. How to set Jenkins home on Docker Volume and Host Machine
    : docker pull jenkins
    : docker run -p 8080:8080 -p 50000:50000 jenkins
    : docker run --name MyJenkins -p 8080:8080 -p 50000:50000 -v /Users/raghav/Desktop/Jenkins_Home:/var/jenkins_home jenkins
    : docker run --name MyJenkins2 -p 9090:8080 -p 50000:50000 -v /Users/raghav/Desktop/Jenkins_Home:/var/jenkins_home jenkins
    : docker volume create myjenkins
    : docker volume ls
    : docker volume inspect myjenkins
    : docker run --name MyJenkins3 -p 9090:8080 -p 50000:50000 -v myjenkins:/var/jenkins_home jenkins
    : docker inspect MyJenkins3
    In case you face issues like installing plugins on this Jenkins, you can set up Jenkins with this command:
    $ docker run -u root --rm -p 8080:8080 -v /srv/jenkins-data:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock --name jenkins jenkinsci/blueocean
11. What is Dockerfile | How to create and build Dockerfile | Dockerfile Basic Commands
  1. What is Dockerfile
  2. How to create Dockerfile
  3. How to build image from Dockerfile
  4. Basic Commands
12. What is Docker Compose | How to create docker compose file | How to use Compose
  1. What | Why - Docker Compose
  2. How to install
  3. How to create docker compose file
  4. How to use docker compose file to create services
  5. Basic Commands
13. What is Docker Volume | How to create Volumes | What is Bind Mount | Docker Storage
  1. What are Volumes
  2. How to create / list / delete volumes
  3. How to attach volume to a container
  4. How to share volume among containers
  5. What are bind mounts
    Volumes are the preferred mechanism for persisting data generated by and used by Docker containers
    : docker volume //get information
    : docker volume create
    : docker volume ls
    : docker volume inspect
    : docker volume rm
    : docker volume prune