Kubernetes Tutorial - Step by Step Introduction to Basic Concepts

Learn about the basic Kubernetes concepts while deploying a sample application on a real cluster.

Preface

In this article, you will learn about Kubernetes and develop and deploy a sample application. To avoid being repetitive and to avoid conflicting with other resources, instead of addressing theoretical topics first, this article will focus on showing you what you need to do to deploy your first application on a Kubernetes cluster. The article will not avoid the theoretical topics though; you will learn about them on the fly when needed. This approach will prevent abstract discussions and explanations that might not make sense if introduced prematurely.

At the end of this article, you will have learned how to spin up a Kubernetes cluster (on DigitalOcean), and you will have an application up and running in your cluster. If you find this topic interesting, keep reading!

Quick Introduction to Kubernetes

Kubernetes, if you are not aware, is an open-source system for automating deployment, scaling, and managing containerized applications. With this platform, you can decompose your applications into smaller systems (called microservices) while developing; then you can compose (or orchestrate) these systems together while deploying. As you will learn, Kubernetes provides you with different objects that help you organize your applications' microservices into logical units that you can easily manage.

The explanation above, while correct, is probably too vague and too abstract if you are not familiar with Kubernetes and microservices. So, as the goal of this article is to avoid this kind of introduction, you will be better off getting started soon.

How to Spin Up a Kubernetes Cluster

Currently, several services around the globe provide different Kubernetes implementations, ranging from managed offerings by the major cloud providers to local options such as Minikube.

Note: Minikube is the only solution that is free forever (but it is also not that useful, as it runs locally only). Although some of the other solutions offer free tiers that will allow you to get started without paying a dime, they will eventually charge you money to keep your clusters running.

Why Choose DigitalOcean

You might have noticed that the list above did not mention DigitalOcean, even though this article stated that you will use it. The thing is, DigitalOcean just launched its Managed Kubernetes Service, and this service is still in limited availability mode.

What this means is that DigitalOcean Kubernetes provides full functionality and ample support, but the service is only partially production-ready (errors might occur). For this article, though, the current offering is robust enough. Besides that, you will see a referral link in this article that will give you a $100 USD, 60-day credit on DigitalOcean so you can spin up your cluster without paying anything.

Installing Kube Control (kubectl)

Before spinning up a Kubernetes cluster, you will need a tool called kubectl. This tool, popularly known as "Kube Control", is a command-line interface that will allow you to manage your Kubernetes cluster with ease from a terminal. Soon, you will get quite acquainted with kubectl.

To install kubectl, you can head to this resource and choose the instructions for your operating system (the list includes Linux, macOS, and Windows).

After following these instructions and installing kubectl in your machine, you can issue the following command to confirm that the tool is indeed available:

kubectl version

The output of the above command will show the client version (i.e., the release of kubectl) and a message saying that the "connection to the server localhost:8080 was refused." What this means is that you do have kubectl properly installed, but that you don't have a cluster available yet (expected, right?). In the next sections, you will learn how to spin up a Kubernetes cluster.
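For reference, the output will look something like this (the exact version details depend on your installation; this sample is illustrative only):

Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", ...}
The connection to the server localhost:8080 was refused - did you specify the right host or port?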

How to Create a Kubernetes Cluster on DigitalOcean

If you already have a Kubernetes cluster that you will use, you can skip this section. Otherwise, please follow the instructions here to create your Kubernetes cluster on DigitalOcean. For starters, as mentioned before, you will have to use this referral link. If you don't use a referral link, you will end up paying for your cluster from the very beginning.

After using this link to create your account on DigitalOcean, you will get an email confirmation. Use the link sent to you to confirm your email address. Confirming your address will make DigitalOcean ask you for a credit card. Don't worry about this. If you don't spend more than $100 USD, they won't charge you anything.

After inputting a valid credit card, you can use the next screen to create a project, or you can use this link to skip this unnecessary step and to head to the Kubernetes dashboard.

From the Kubernetes dashboard, you can hit the Create a Kubernetes cluster button (you might have to click on Enable Limited Access first). Then, DigitalOcean will show you a new page with a form that you can fill in as follows:

  • Select a Kubernetes version: The instructions in this article were tested with the 1.13.5-do.1 version. If you feel like testing other versions, feel free to go ahead. Just let us know how it went.
  • Choose a datacenter region: Feel free to choose whatever region you prefer.
  • Add node pool(s): Make sure you have just one node pool, that you choose the $10/Month per node option, and that you have at least three nodes.
  • Add Tags: Don't worry about tagging anything.
  • Choose a name: You can name your cluster whatever you want (e.g., "kubernetes-tutorial"). Just make sure DigitalOcean accepts the name (e.g., names can't contain spaces).

After filling in this form, you can click on the Create Cluster button. It will take a few minutes (roughly 4 mins) before DigitalOcean finishes creating your cluster for you. However, you can already download the cluster's config file.

This file contains the credentials needed for you to act as the admin of the cluster, and you can find it on the cluster's dashboard. After you clicked on the Create Cluster button, DigitalOcean redirected you to your cluster's dashboard. From there, if you scroll to the bottom, you will see a button called Download Config File. Click on this button to download the config file.

When you finish downloading this file, open a terminal and move the file to the .kube directory in your home dir (you might have to create it):

# make sure .kube exists
mkdir -p ~/.kube

# move the config file to it
mv ~/Downloads/kubernetes-tutorial-kubeconfig.yaml ~/.kube

If needed, adjust the last command with the correct path of the downloaded file.

The ~/.kube directory is a good place to keep your Kubernetes credentials. By default, kubectl will use a file named config (if it finds one inside the .kube dir) to communicate with clusters. To use a different file, you have three alternatives:

  • First, you can specify another file by using the --kubeconfig flag in your kubectl commands, but this is too cumbersome.
  • Second, you can define the KUBECONFIG environment variable to avoid having to type --kubeconfig all the time.
  • Third, you can merge contexts in the same config file and then you can switch contexts.

The second option (setting the KUBECONFIG environment variable) is the easiest one, but feel free to choose another approach if you prefer. To set this environment variable, you can issue the following command:

export KUBECONFIG=~/.kube/kubernetes-tutorial-kubeconfig.yaml

Note: Your file path might be different. Make sure the command above contains the right path.
Keep in mind that this command will set this environment variable only for the current terminal session. If you open a new terminal, you will have to execute this command again.
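If you are curious about the other two approaches, here is a minimal sketch of the commands involved (the do-nyc1-kubernetes-tutorial context name is just an illustrative value; yours will differ):

# first alternative: pass the credentials file explicitly on each call
kubectl --kubeconfig ~/.kube/kubernetes-tutorial-kubeconfig.yaml get nodes

# third alternative: after merging contexts into ~/.kube/config,
# list the available contexts and switch between them
kubectl config get-contexts
kubectl config use-context do-nyc1-kubernetes-tutorial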

How to Check the Nodes of Your Kubernetes Cluster

Now that you have a Kubernetes cluster and have defined what credentials kubectl will use, you can start communicating with your cluster. For starters, you can issue the following command to check the nodes that compose your cluster:

kubectl get nodes

After running this command, you will get a list with three or more nodes (depending on how many nodes you chose while creating your cluster). A node, in the context of Kubernetes, is a worker machine (virtual or physical, both apply) that Kubernetes uses to run applications (yours and those that Kubernetes needs to stay up and running).

No matter how many nodes you have in your cluster, the list that the command above outputs will show the name of these nodes, their statuses (which, hopefully, will be ready), their roles, ages, and versions. Don't worry about this information now; you will learn more about nodes in a Kubernetes cluster later.
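As an illustration, the output will look roughly like this (the names, ages, and versions below are made up):

NAME                    STATUS   ROLES    AGE   VERSION
kubernetes-tutorial-1   Ready    <none>   10m   v1.13.5
kubernetes-tutorial-2   Ready    <none>   10m   v1.13.5
kubernetes-tutorial-3   Ready    <none>   10m   v1.13.5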

If you are seeing the list of nodes and all of them are in the Ready status, you are good to go.

How to Deploy Your First Kubernetes Application

After all this setup, now it is time to deploy your first Kubernetes application. As you will see, doing so is not hard, but it does involve a good number of steps. As such, to speed up the process, instead of deploying some application that you might have around (which would need some preparation to run on Kubernetes) and instead of creating a brand new one for that, you will deploy a sample application that already exists. More specifically, you will deploy an app that allows users to share what they are thinking. Similar to what people can do on Twitter, but without authentication and way simpler.

How to Create Kubernetes Deployments

Back in the terminal, the first thing you will do is to create a directory that you will use to save a bunch of YAML files (you can name this directory anything you like, for example, kubernetes-tutorial). While using Kubernetes, you will often use this "markup language" to describe the resources that you will orchestrate in your clusters.

After creating a directory, create a file called deployment.yaml inside it and add the following code to it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-tutorial-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kubernetes-tutorial-deployment
  template:
    metadata:
      labels:
        app: kubernetes-tutorial-deployment
    spec:
      containers:
      - name: kubernetes-tutorial-application
        image: auth0blog/kubernetes-tutorial
        ports:
          - containerPort: 3000

This configuration file is not hard to understand. Basically, this file is defining a deployment object (kind: Deployment) that creates a container named kubernetes-tutorial-application. This container uses an image called [auth0blog/kubernetes-tutorial](https://cloud.docker.com/u/auth0blog/repository/docker/auth0blog/kubernetes-tutorial) to run the sample application.

In Kubernetes, to tell your cluster what to run, you usually use images from a registry. By default, Kubernetes will try to fetch images from the public Docker Hub registry. However, you can also use private registries if you prefer keeping your images, well, private.
Don't worry about the other properties of this file now; you will learn about them when the time comes. However, note that the sentences in the last paragraph introduced two new concepts:

  • Deployment: Basically speaking, in the context of Kubernetes, a deployment is a description of the desired state of the system. Through a deployment, you inform your Kubernetes cluster how many pods of a particular application you want running. In this case, you are specifying that you want two pods (replicas: 2).
  • Container: Containers are the lowest level of a microservice that holds running applications, their libraries, and their dependencies. Containers can be exposed to the world through an external IP address and are usually part of a pod.

Another important thing to learn about is what a pod is. A pod, as defined by the official documentation, is the smallest deployable unit of computing that can be created and managed in Kubernetes. For now, think of pods as groups of microservices (containers) that are so tightly related they cannot be deployed separately. In this case, your pods contain a single container, the sample application.

Note: Nowadays, deployments are the preferred way to orchestrate pods and replication. However, not that long ago, Kubernetes experts used to use Replication Controllers and Replica Sets. You don't need to learn about these other objects to follow along with this tutorial. However, if you are curious, you can read about their differences in this nice resource.
Then, to run this deployment in your Kubernetes cluster, you will have to issue the following command:

# make sure you are inside the kubernetes-tutorial directory
kubectl apply -f deployment.yaml

After running this command, your cluster will start working to make sure that it reaches the desired state. That is, the cluster will make an effort to run both pods (replicas: 2) on your cluster's nodes.

After that, you might be thinking, "cool, I just deployed a sample application into my cluster, now I can start using it through a browser". Well, things are not that simple. The problem is that pods are unreliable units of work that come and go all the time. Due to this ephemeral nature, a pod by itself is not accessible from the outside world.

In the previous command, you informed your cluster that you want two instances (pods) of the same application running. Each one of these pods has a different IP address inside your cluster and, if one of them stops working (for whatever reason), Kubernetes will launch a brand new pod that will get yet another IP address. Therefore, it would be difficult for you to keep track of these IP addresses manually. To solve this problem, you will use Kubernetes' services.

Note: Another situation that might make Kubernetes launch new pods for your deployments is if you ask your cluster to scale your application (to be able to support more users, for example).
However, before learning about services, issue the following command to confirm that your pods are indeed up and running:

kubectl get pods

By issuing this command, you will get a list of the available pods in your Kubernetes cluster. On that list, you can see that you have two pods (two rows) and that each pod contains one container (the 1/1 on the Ready column). You can also see their statuses, how many times they restarted (hopefully, zero), and their age.
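For illustration, the output will look similar to this (the hash suffixes in the pod names are generated by Kubernetes and will differ in your cluster):

NAME                                              READY   STATUS    RESTARTS   AGE
kubernetes-tutorial-deployment-7d9ff5c998-bqk4x   1/1     Running   0          1m
kubernetes-tutorial-deployment-7d9ff5c998-tm7sc   1/1     Running   0          1m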

Using Services and Ingresses to Expose Deployments

After learning about pods, deployments, and containers, you probably want to consume your new deployment, right? To do so, you will need to create ingress rules that expose your deployment to the external world. A [Kubernetes ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) is an "object that manages external access to services in a cluster, typically through HTTP". With an ingress, you can support load balancing, TLS termination, and name-based virtual hosting from within your cluster.

To configure ingress rules in your Kubernetes cluster, first, you will need an ingress controller. As you can see here, there are many different ingress controllers that you can use. In this tutorial, you will use one of the most popular, powerful, and easy-to-use ones: the NGINX ingress controller.

The process to install this controller in your cluster is quite simple. The first thing you will have to do is to run the following command to install some mandatory resources:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

Then, you will have to issue this command to install another set of resources needed for the controller:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml

Note: If you are running your Kubernetes cluster on a service other than DigitalOcean, you will probably need to run a different set of commands. Please, check out this resource to learn more about the differences.
To confirm that the above commands worked, you can issue the following command:

kubectl get pods -n ingress-nginx

This command should list a pod called nginx-ingress-controller-... with a status of Running. The -n ingress-nginx flag passed to this command states that you want to list pods in the ingress-nginx namespace. Namespaces are an excellent way to organize resources in a Kubernetes cluster. You will learn more about this Kubernetes feature later.

Having configured the ingress controller in your cluster, the next thing you will do is to create a service. Wait, a service? Why not an ingress?

The thing is, as your pods are ephemeral (they can die for whatever reason or Kubernetes can spin new ones based on replication rules), you need a static resource that represents all the related pods as a single element (or, in this case, that represents the deployment responsible for these pods). When you define a service for your pods, you will be able to create ingress rules that point to this service.

What you will need now is a ClusterIP service that opens a port for your deployment. To create one, create a file called service.yaml and add the following code to it:

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-tutorial-cluster-ip
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 3000
  selector:
    app: kubernetes-tutorial-deployment
  type: ClusterIP

Note: There are many different types of services available on Kubernetes. ClusterIP, the type you are using, helps you expose your deployments inside the cluster only. That is, this kind of service does not expose deployments to the outside world. There are other types that do that for you (you can learn about them here) but, in this series, you will not be using them.
Then, you can issue the following command to create this service in your cluster:

kubectl apply -f service.yaml

After running this command, Kubernetes will create a service to represent your deployment in your cluster. To tie this service to this particular deployment (or to its pods, actually), you added the selector.app property to the service description (service.yaml) pointing to kubernetes-tutorial-deployment. If you take another look at the deployment.yaml file, you will notice that it has a property called labels.app with the same value (kubernetes-tutorial-deployment). Kubernetes uses these properties to tie this service to the deployment's pods.

Another important thing to notice about the service you are creating is that you are defining that it will listen on port: 80 and forward requests to targetPort: 3000 on the pods. If you check your deployment file, you will see that you defined that your containers use this port (containerPort: 3000). As such, you must make sure that your service targets the correct port when redirecting requests to your pods.

Note: If you run kubectl get svc now, your cluster will list two services. The first one, called kubernetes, is the main service used by Kubernetes itself. The other one is the one you created: kubernetes-tutorial-cluster-ip. As you can see, both of them have internal IP addresses (CLUSTER-IP). But you won't need to know these addresses. As you will see, ingresses allow you to reference services more cleverly.
After creating your service, you can finally define an ingress (and some rules) to expose this service (and the deployment that it represents) to the outside world. To do this, create a file called ingress.yaml with the following code:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-tutorial-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-tutorial-cluster-ip
          servicePort: 80

In this file, you are defining an ingress resource with a single rule (spec.rules). This rule tells Kubernetes that you want requests pointing to the root path (path: /) to be redirected to the kubernetes-tutorial-cluster-ip service (this is the name of the service that you created before) on port 80 (servicePort: 80).

To deploy the new ingress in your cluster, you can issue the following command:

kubectl apply -f ingress.yaml

Then, to see the whole thing in action, you will need to grab the public IP address of your Kubernetes cluster. To do so, you can issue the following command:

kubectl get svc \
  -n ingress-nginx ingress-nginx \
  -o=jsonpath='{.status.loadBalancer.ingress[0].ip}'

Note: On the command above, you are using a Kubernetes feature called JSONPath to extract the exact property you want from the ingress-nginx service (in this case, its public IP address). Learn more about the JSONPath feature here.
This command will output an IP address (e.g., 104.248.109.181) that you can use in your browser to see your application. So, if you open your browser and navigate to this IP address, you will see the sample application you just deployed.

Note: This application is not very useful, it just emulates a much simpler Twitter application where users can share their thoughts. The app doesn't even have an identity management (user authentication) system.
That's it! You just finished configuring your local machine to start working with Kubernetes, and you just deployed your first application on Kubernetes. How cool is that?
Note: To avoid spending the whole credit DigitalOcean gave you, you might want to delete your cluster soon. To do so, head to the Kubernetes section of your DigitalOcean dashboard, click on the More button on the right-hand side of the screen and click on Destroy. DigitalOcean will ask you to confirm the process.

Conclusion

In this article, you created a Kubernetes cluster on DigitalOcean; then you used it to spin up a sample application. In deploying this app, you learned basic Kubernetes concepts like deployments, pods, containers, services, and ingresses. With this knowledge, you are now ready to move on and start learning about more advanced concepts that will let you orchestrate microservices applications on Kubernetes. If you enjoyed the article (and if you want more content about this topic), let us know in the discussion section below.

An illustrated guide to Kubernetes Networking

Everything I learned about Kubernetes Networking

An illustrated guide to Kubernetes Networking [Part 1]

You’ve been running a bunch of services on a Kubernetes cluster and reaping the benefits. Or at least, you’re planning to. Even though there are a bunch of tools available to setup and manage a cluster, you’ve still wondered how it all works under the hood. And where do you look if it breaks? I know I did.

Sure, Kubernetes is simple enough to start using. But let’s face it — it’s a complex beast under the hood. There are a lot of moving parts, and knowing how they all fit in and work together is a must if you want to be ready for failures. One of the most complex, and probably the most critical, parts is the Networking.

So I set out to understand exactly how the Networking in Kubernetes works. I read the docs, watched some talks, even browsed the codebase. And here is what I found out.

Kubernetes Networking Model

At its core, Kubernetes Networking has one important fundamental design philosophy:

Every Pod has a unique IP.
This Pod IP is shared by all the containers in this Pod, and it’s routable from all the other Pods. Ever notice some “pause” containers running on your Kubernetes nodes? They are called “sandbox containers”, whose only job is to reserve and hold a network namespace (netns) which is shared by all the containers in a pod. This way, a pod IP doesn’t change even if a container dies and a new one is created in its place. A huge benefit of this IP-per-pod model is there are no IP or port collisions with the underlying host. And we don’t have to worry about what port the applications use.

With this in place, the only requirement Kubernetes has is that these Pod IPs are routable/accessible from all the other pods, regardless of what node they’re on.

Intra-node communication

The first step is to make sure pods on the same node are able to talk to each other. The idea is then extended to communication across nodes, to the internet and so on.

On every Kubernetes node, which is a linux machine in this case, there’s a root network namespace (root as in base, not the superuser) — root netns.

The main network interface eth0 is in this root netns.

Similarly, each pod has its own netns, with a virtual ethernet pair connecting it to the root netns. This is basically a pipe-pair with one end in root netns, and other in the pod netns.

We name the pod-end eth0, so the pod doesn’t know about the underlying host and thinks that it has its own root network setup. The other end is named something like vethxxx.

You may list all these interfaces on your node using ifconfig or ip a commands.
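To make the pipe-pair idea concrete, here is a rough sketch of how such a pair could be wired up by hand (the pod1 namespace and interface names are illustrative; in a real cluster, Kubernetes and the container runtime do this for you):

# create a network namespace to stand in for a pod
ip netns add pod1

# create the virtual ethernet pipe-pair
ip link add vethxxx type veth peer name ceth0

# move one end into the pod netns and rename it to eth0
ip link set ceth0 netns pod1
ip netns exec pod1 ip link set ceth0 name eth0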

This is done for all the pods on the node. For these pods to talk to each other, a linux ethernet bridge cbr0 is used. Docker uses a similar bridge named docker0.

You may list the bridges using brctl show command.

Assume a packet is going from pod1 to pod2

1. It leaves pod1’s netns at eth0 and enters the root netns at vethxxx.

2. It’s passed on to cbr0, which discovers the destination using an ARP request, saying “who has this IP?”

3. vethyyy says it has that IP, so the bridge knows where to forward the packet.

4. The packet reaches vethyyy, crosses the pipe-pair and reaches pod2’s netns.

This is how containers on a node talk to each other. Obviously there are other ways, but this is probably the easiest, and what Docker uses as well.

Inter-node communication

As I mentioned earlier, pods need to be reachable across nodes as well. Kubernetes doesn’t care how it’s done. We can use L2 (ARP across nodes), L3 (IP routing across nodes — like the cloud provider route tables), overlay networks, or even carrier pigeons. It doesn’t matter as long as the traffic can reach the desired pod on another node. Every node is assigned a unique CIDR block (a range of IP addresses) for pod IPs, so each pod has a unique IP that doesn’t conflict with pods on another node.

In most of the cases, especially in cloud environments, the cloud provider route tables make sure the packets reach the correct destination. The same thing could be accomplished by setting up correct routes on every node. There are a bunch of other network plugins that do their own thing.

Here we have two nodes, similar to what we saw earlier. Each node has various network namespaces, network interfaces and a bridge.

Assume a packet is going from pod1 to pod4 (on a different node).

  1. It leaves pod1’s netns at eth0 and enters the root netns at vethxxx.
  2. It’s passed on to cbr0, which makes the ARP request to find the destination.
  3. It comes out of cbr0 to the main network interface eth0, since nobody on this node has the IP address for pod4.
  4. It leaves the machine node1 onto the wire with src=pod1 and dst=pod4.
  5. The route table has routes set up for each of the node CIDR blocks, and it routes the packet to the node whose CIDR block contains the pod4 IP.
  6. So the packet arrives at node2 at the main network interface eth0.
  7. Even though pod4’s IP isn’t the IP of eth0, the packet is still forwarded to cbr0, since the nodes are configured with IP forwarding enabled.
  8. The node’s routing table is looked up for any routes matching the pod4 IP. It finds cbr0 as the destination for this node’s CIDR block.
  9. You may list the node route table using the route -n command, which will show a route for cbr0 like the illustrative one below.
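This is roughly what that looks like (the 10.244.2.0/24 pod CIDR and the gateway address are hypothetical values for illustration):

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 eth0
10.244.2.0      0.0.0.0         255.255.255.0   U     0      0        0 cbr0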

  10. The bridge takes the packet, makes an ARP request, and finds out that the IP belongs to vethyyy.

  11. The packet crosses the pipe-pair and reaches pod2’s netns 🏠

An illustrated guide to Kubernetes Networking [Part 2]

We’ll expand on these ideas and see how the overlay networks work. We will also understand how the ever-changing pods are abstracted away from apps running in Kubernetes and handled behind the scenes.

Overlay networks

Overlay networks are not required by default; however, they help in specific situations, like when we don’t have enough IP space, or when the network can’t handle the extra routes. Or maybe when we want some extra management features the overlays provide. One commonly seen case is when there’s a limit on how many routes the cloud provider route tables can handle. For example, AWS route tables support up to 50 routes without impacting network performance. So if we have more than 50 Kubernetes nodes, the AWS route table won’t be enough. In such cases, using an overlay network helps.

An overlay essentially encapsulates a packet in another packet that traverses the native network across nodes. You may not want to use an overlay network, since it may cause some latency and complexity overhead due to the encapsulation-decapsulation of all the packets. It’s often not needed, so we should use it only when we know why we need it.

To understand how traffic flows in an overlay network, let’s consider an example of flannel, which is an open-source project by CoreOS.

Here we see that it’s the same setup as before, but with a new virtual ethernet device called flannel0 added to the root netns. It’s an implementation of Virtual Extensible LAN (VXLAN), but to Linux, it’s just another network interface.

The flow for a packet going from pod1 to pod4 (on a different node) is something like this:

  1. The packet leaves pod1’s netns at eth0 and enters the root netns at vethxxx.

  2. It’s passed on to cbr0, which makes the ARP request to find the destination.

3a. Since nobody on this node has the IP address for pod4, the bridge sends it to flannel0, because the node’s route table is configured with flannel0 as the target for the pod network range.

3b. As the flanneld daemon talks to the Kubernetes apiserver or the underlying etcd, it knows about all the pod IPs, and what nodes they’re on. So flannel creates the mappings (in userspace) between pod IPs and node IPs.

flannel0 takes this packet and wraps it in a UDP packet with extra headers changing the source and destination IPs to the respective nodes, and sends it to a special vxlan port (generally 8472).

Even though the mapping is in userspace, the actual encapsulation and data flow happens in kernel space. So it happens pretty fast.

3c. The encapsulated packet is sent out via eth0 since it is involved in routing the node traffic.

  4. The packet leaves the node with node IPs as source and destination.

  5. The cloud provider route table already knows how to route traffic between nodes, so it sends the packet to the destination node2.

6a. The packet arrives at eth0 of node2. Since the destination port is the special vxlan port, the kernel sends the packet to flannel0.

6b. flannel0 decapsulates the packet and emits it back into the root network namespace.

6c. Since IP forwarding is enabled, the kernel forwards it to cbr0 as per the route tables.

  7. The bridge takes the packet, makes an ARP request, and finds out that the IP belongs to vethyyy.

  8. The packet crosses the pipe-pair and reaches pod4 🏠

There could be slight differences among different implementations, but this is how overlay networks in Kubernetes work. There’s a common misconception that we have to use overlays when using Kubernetes. The truth is, it completely depends on the specific scenarios. So make sure you use it only when it’s absolutely needed.

An illustrated guide to Kubernetes Networking [Part 3]

Cluster dynamics

Due to the ever-changing dynamic nature of Kubernetes, and distributed systems in general, the pods (and consequently their IPs) change all the time. Reasons could range from desired rolling updates and scaling events to unpredictable pod or node crashes. This makes Pod IPs unreliable to use directly for communication.

Enter Kubernetes Services — a virtual IP with a group of Pod IPs as endpoints (identified via label selectors). These act as a virtual load balancer, whose IP stays the same while the backend Pod IPs may keep changing.

The whole virtual IP implementation is actually iptables rules (recent versions have an option of using IPVS, but that’s another discussion) that are managed by the Kubernetes component kube-proxy. This name is actually misleading now. It used to work as a proxy pre-v1.0 days, which turned out to be pretty resource-intensive and slower due to constant copying between kernel space and user space. Now, it’s just a controller, like many other controllers in Kubernetes, that watches the API server for endpoint changes and updates the iptables rules accordingly.

Due to these iptables rules, whenever a packet is destined for a service IP, it’s DNATed (DNAT=Destination Network Address Translation), meaning the destination IP is changed from service IP to one of the endpoints — pod IP — chosen at random by iptables. This makes sure the load is evenly distributed among the backend pods.

When this DNAT happens, this info is stored in conntrack — the Linux connection tracking table (stores 5-tuple translations iptables has done: protocol, srcIP, srcPort, dstIP, dstPort). This is so that when a reply comes back, it can un-DNAT, meaning change the source IP from the Pod IP to the Service IP. This way, the client is unaware of how the packet flow is handled behind the scenes.
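To give a flavor of what kube-proxy programs, here is a heavily simplified sketch (the real rules live in dedicated KUBE-SERVICES/KUBE-SVC-*/KUBE-SEP-* chains, and all IPs and ports below are hypothetical):

# DNAT traffic aimed at the service VIP 10.100.0.1:80 to a backend pod
iptables -t nat -A PREROUTING -p tcp -d 10.100.0.1 --dport 80 \
  -j DNAT --to-destination 10.244.1.5:3000

# inspect the connection tracking table that makes the un-DNAT possible
conntrack -L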

So by using Kubernetes services, we can use the same ports without any conflicts (since we can remap ports to endpoints). This makes service discovery super easy. We can just use the internal DNS and hard-code the service hostnames. We can even use the service host and port environment variables preset by Kubernetes.

Protip: Take this second approach and save a lot of unnecessary DNS calls!
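For a hypothetical service named backend-api, those preset variables would look like this (Kubernetes uppercases the service name and turns dashes into underscores; the addresses shown are illustrative):

echo $BACKEND_API_SERVICE_HOST   # e.g., 10.100.12.34
echo $BACKEND_API_SERVICE_PORT   # e.g., 80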

Outbound traffic

The Kubernetes services we’ve talked about so far work within a cluster. However, in most of the practical cases, applications need to access some external api/website.

Generally, nodes can have both private and public IPs. For internet access, there is some sort of 1:1 NAT of these public and private IPs, especially in cloud environments.

For normal communication from a node to some external IP, the source IP is changed from the node’s private IP to its public IP for outbound packets, and reversed for reply inbound packets. However, when a connection to an external IP is initiated by a Pod, the source IP is the Pod IP, which the cloud provider’s NAT mechanism doesn’t know about. It will just drop packets with source IPs other than the node IPs.

So we use, you guessed it, some more iptables! These rules, also added by kube-proxy, do the SNAT (Source Network Address Translation), aka IP MASQUERADE. This tells the kernel to use the IP of the interface this packet is going out from, in place of the source Pod IP. A conntrack entry is also kept to un-SNAT the reply.
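A simplified sketch of such a masquerade rule (the 10.244.0.0/16 pod CIDR is hypothetical, and kube-proxy actually uses packet marks and dedicated chains rather than one literal rule like this):

# SNAT pod-originated traffic that leaves the cluster network
iptables -t nat -A POSTROUTING -s 10.244.0.0/16 ! -d 10.244.0.0/16 -j MASQUERADE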

Inbound traffic

Everything’s good so far. Pods can talk to each other, and to the internet. But we’re still missing a key piece — serving the user request traffic. As of now, there are two main ways to do this:

NodePort/Cloud Loadbalancer (L4 — IP and Port)

Setting the service type to NodePort assigns the service a nodePort in the range 30000-32767. This nodePort is open on every node, even if there’s no pod running on a particular node. Inbound traffic on this nodePort would be sent to one of the pods (it may even be on some other node!) using, again, iptables.

A service type of LoadBalancer in cloud environments would create a cloud load balancer (ELB, for example) in front of all the nodes, hitting the same nodePort.
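A minimal sketch of a NodePort service (all names and ports here are illustrative; the nodePort must fall within the 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80          # the service's cluster-internal port
    targetPort: 8080  # the container port on the pods
    nodePort: 30080   # opened on every node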

Ingress (L7 — HTTP/TCP)

A bunch of different implementations, like nginx, traefik, haproxy, etc., keep a mapping of HTTP hostnames/paths to their respective backends. The traffic still enters through a load balancer and nodePort as usual, but the advantage is that we can have one ingress handling inbound traffic for all the services instead of requiring multiple nodePorts and load balancers.

Network Policy

Think of this like security groups/ACLs for pods. The NetworkPolicy rules allow/deny traffic across pods. The exact implementation depends on the network layer/CNI, but most of them just use iptables.
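As a sketch, a policy that only lets "api" pods reach "db" pods could look like this (all labels and names are hypothetical):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
spec:
  podSelector:
    matchLabels:
      app: db        # the pods this policy protects
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api   # only traffic from these pods is allowed in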

That’s all for now. In the previous parts, we studied the foundations of Kubernetes Networking and how overlays work. Now we know how the Service abstraction helps in a dynamic cluster and makes discovery super easy. We also covered how outbound and inbound traffic flows work and how network policies are useful for security within a cluster.

Learn More

Learn Kubernetes from a DevOps guru (Kubernetes + Docker)

Learn DevOps: The Complete Kubernetes Course

Kubernetes for the Absolute Beginners - Hands-on

Complete DevOps Gitlab & Kubernetes: Best Practices Bootcamp

Learn DevOps: On-Prem or Cloud Agnostic Kubernetes

Master Jenkins CI For DevOps and Developers

Docker Technologies for DevOps and Developers

DevOps Toolkit: Learn Kubernetes with Practical Exercises!

Guide to Spring Cloud Kubernetes for Beginners

1. Overview

When we build a microservices solution, both Spring Cloud and Kubernetes are optimal solutions, as they provide components for resolving the most common challenges. However, if we decide to choose Kubernetes as the main container manager and deployment platform for our solution, we can still use Spring Cloud's interesting features mainly through the Spring Cloud Kubernetes project.

In this tutorial, we’ll:

  • Install Minikube on our local machine
  • Develop a microservices architecture example with two independent Spring Boot applications communicating through REST
  • Set up the application on a one-node cluster using Minikube
  • Deploy the application using YAML config files

2. Scenario

In our example, we're using the scenario of travel agents offering various deals to clients who will query the travel agents service from time to time. We'll use it to demonstrate:

  • service discovery through Spring Cloud Kubernetes
  • configuration management and injecting Kubernetes ConfigMaps and secrets to application pods using Spring Cloud Kubernetes Config
  • load balancing using Spring Cloud Kubernetes Ribbon

3. Environment Setup

First and foremost, we need to install Minikube on our local machine and preferably a VM driver such as VirtualBox. It's also recommended to look at Kubernetes and its main features before following this environment setup.

Let's start the local single-node Kubernetes cluster:

minikube start --vm-driver=virtualbox

This command creates a Virtual Machine that runs a Minikube cluster using the VirtualBox driver. The default context in kubectl will now be minikube. However, to be able to switch between contexts, we use:

kubectl config use-context minikube

After starting Minikube, we can connect to the Kubernetes dashboard to access the logs and monitor our services, pods, ConfigMaps, and Secrets easily:

minikube dashboard

3.1. Deployment

Firstly, let's get our example from GitHub.

At this point, we can either run the “deployment-travel-client.sh” script from the parent folder, or else execute each instruction one by one to get a good grasp of the procedure:

### build the repository
mvn clean install
 
### set docker env
eval $(minikube docker-env)
 
### build the docker images on minikube
cd travel-agency-service
docker build -t travel-agency-service .
cd ../client-service
docker build -t client-service .
cd ..
 
### secret and mongodb
kubectl delete -f travel-agency-service/secret.yaml
kubectl delete -f travel-agency-service/mongo-deployment.yaml
 
kubectl create -f travel-agency-service/secret.yaml
kubectl create -f travel-agency-service/mongo-deployment.yaml
 
### travel-agency-service
kubectl delete -f travel-agency-service/travel-agency-deployment.yaml
kubectl create -f travel-agency-service/travel-agency-deployment.yaml
 
### client-service
kubectl delete configmap client-service
kubectl delete -f client-service/client-service-deployment.yaml
 
kubectl create -f client-service/client-config.yaml
kubectl create -f client-service/client-service-deployment.yaml
 
# Check that the pods are running
kubectl get pods

4. Service Discovery

This project provides us with an implementation for the ServiceDiscovery interface in Kubernetes. In a microservices environment, there are usually multiple pods running the same service. **Kubernetes exposes the service as a collection of endpoints** that can be fetched and reached from within a Spring Boot application running in a pod in the same Kubernetes cluster.

For instance, in our example, we have multiple replicas of the travel agent service, which is accessed from our client service as http://travel-agency-service:8080. However, this internally would translate into accessing different pods such as travel-agency-service-7c9cfff655-4hxnp.

Spring Cloud Kubernetes Ribbon uses this feature to load balance between the different endpoints of a service.

We can easily use Service Discovery by adding the spring-cloud-starter-kubernetes dependency to our client application:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes</artifactId>
</dependency>

Also, we should add @EnableDiscoveryClient and inject the DiscoveryClient into the ClientController by using @Autowired in our class:

@SpringBootApplication
@EnableDiscoveryClient
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
@RestController
public class ClientController {
    @Autowired
    private DiscoveryClient discoveryClient;
}
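With the client in place, we can, for example, list the endpoints behind a service. This is a minimal sketch (the /service-instances endpoint is our own illustration, not part of the original example; it requires importing java.util.List and java.util.stream.Collectors):

@GetMapping("/service-instances")
public List<String> travelAgencyInstances() {
    // each ServiceInstance corresponds to one pod endpoint behind the service
    return discoveryClient.getInstances("travel-agency-service")
            .stream()
            .map(instance -> instance.getHost() + ":" + instance.getPort())
            .collect(Collectors.toList());
}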

5. ConfigMaps

Typically, microservices require some kind of configuration management. For instance, in Spring Cloud applications, we would use a Spring Cloud Config Server.

However, we can achieve this by using ConfigMaps provided by Kubernetes – provided that we intend to use it for non-sensitive, unencrypted information only. Alternatively, if the information we want to share is sensitive, then we should opt to use Secrets instead.

In our example, we're using ConfigMaps on the client-service Spring Boot application. Let's create a client-config.yaml file to define the ConfigMap of the client-service:

apiVersion: v1
kind: ConfigMap
metadata:
  name: client-service
data:
  application.properties: |-
    bean.message=Testing reload! Message from backend is: %s <br/> Services : %s

**It's important that the name of the ConfigMap matches the name of the application** as specified in our “application.properties” file. In this case, it's client-service. Next, we should create the ConfigMap for client-service on Kubernetes:

kubectl create -f client-config.yaml

Now, let's create a configuration class, ClientConfig, with the @Configuration and @ConfigurationProperties annotations, and inject it into the ClientController:

@Configuration
@ConfigurationProperties(prefix = "bean")
public class ClientConfig {
 
    private String message = "Message from backend is: %s <br/> Services : %s";
 
    // getters and setters
}
@RestController
public class ClientController {
 
    @Autowired
    private ClientConfig config;
 
    @GetMapping
    public String load() {
        return String.format(config.getMessage(), "", "");
    }
}

If we don't specify a ConfigMap, then we should expect to see the default message, which is set in the class. However, when we create the ConfigMap, this default message gets overridden by that property.

Additionally, every time we decide to update the ConfigMap, the message on the page changes accordingly:

kubectl edit configmap client-service

6. Secrets

Let's look at how Secrets work by looking at the specification of MongoDB connection settings in our example. We're going to create environment variables on Kubernetes, which will then be injected into the Spring Boot application.

6.1. Create a Secret

The first step is to create a secret.yaml file, with the username and password encoded in Base64:

apiVersion: v1
kind: Secret
metadata:
  name: db-secret
data:
  username: dXNlcg==
  password: cDQ1NXcwcmQ=

Let's apply the Secret configuration on the Kubernetes cluster:

kubectl apply -f secret.yaml

6.2. Create a MongoDB Service

We should now create the MongoDB service and the deployment travel-agency-deployment.yaml file. In particular, in the deployment part, we'll use the Secret username and password that we defined previously:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        service: mongo
      name: mongodb-service
    spec:
      containers:
      - args:
        - mongod
        - --smallfiles
        image: mongo:latest
        name: mongo
        env:
          - name: MONGO_INITDB_ROOT_USERNAME
            valueFrom:
              secretKeyRef:
                name: db-secret
                key: username
          - name: MONGO_INITDB_ROOT_PASSWORD
            valueFrom:
              secretKeyRef:
                name: db-secret
                key: password

By default, the mongo:latest image will create a user with username and password on a database named admin.

6.3. Setup MongoDB on Travel Agency Service

It's important to update the application properties to add the database related information. While we can freely specify the database name admin, here we're hiding the most sensitive information such as the username and the password:

spring.cloud.kubernetes.reload.enabled=true
spring.cloud.kubernetes.secrets.name=db-secret
spring.data.mongodb.host=mongodb-service
spring.data.mongodb.port=27017
spring.data.mongodb.database=admin
spring.data.mongodb.username=${MONGO_USERNAME}
spring.data.mongodb.password=${MONGO_PASSWORD}

Now, let's take a look at our travel-agency-deployment property file to update the services and deployments with the username and password information required to connect to the mongodb-service.

Here's the relevant section of the file, with the part related to the MongoDB connection:

env:
  - name: MONGO_USERNAME
    valueFrom:
      secretKeyRef:
        name: db-secret
        key: username
  - name: MONGO_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-secret
        key: password

7. Communication with Ribbon

In a microservices environment, we generally need the list of pods where our service is replicated in order to perform load balancing. This is accomplished by using a mechanism provided by Spring Cloud Kubernetes Ribbon. This mechanism can automatically discover and reach all the endpoints of a specific service and, subsequently, populate a Ribbon ServerList with information about the endpoints.

Let's start by adding the spring-cloud-starter-kubernetes-ribbon dependency to our client-service pom.xml file:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId>
</dependency>

The next step is to add the annotation @RibbonClient to our client-service application:

@RibbonClient(name = "travel-agency-service")

When the list of the endpoints is populated, the Kubernetes client will search the registered endpoints living in the current namespace/project matching the service name defined using the @RibbonClient annotation.

We also need to enable the ribbon client in the application properties:

ribbon.http.client.enabled=true

8. Additional Features

8.1. Hystrix

Hystrix helps in building a fault-tolerant and resilient application. Its main aims are to fail fast and recover rapidly.

In particular, in our example, we're using Hystrix to implement the circuit breaker pattern on the client service by annotating the Spring Boot application class with @EnableCircuitBreaker.

Additionally, we're using the fallback functionality by annotating the method TravelAgencyService.getDeals() with @HystrixCommand(). This means that, in case of failure, getFallbackName() will be called and the “Fallback” message returned:

@HystrixCommand(fallbackMethod = "getFallbackName", commandProperties = { 
    @HystrixProperty(name = "execution.isolation.thread.timeoutInMilliseconds", value = "1000") })
public String getDeals() {
    return this.restTemplate.getForObject("http://travel-agency-service:8080/deals", String.class);
}
 
private String getFallbackName() {
    return "Fallback";
}

8.2. Pod Health Indicator

We can take advantage of Spring Boot HealthIndicator and Spring Boot Actuator to expose health-related information to the user.

In particular, the Kubernetes health indicator provides:

  • pod name
  • IP address
  • namespace
  • service account
  • node name
  • a flag that indicates whether the Spring Boot application is internal or external to Kubernetes
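For this information to be visible, the actuator health endpoint must expose details; here is a minimal application.properties sketch using standard Spring Boot 2 Actuator properties (not taken from the original example):

management.endpoints.web.exposure.include=health,info
management.endpoint.health.show-details=always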

9. Conclusion

In this article, we provided a thorough overview of the Spring Cloud Kubernetes project.

So why should we use it? If we root for Kubernetes as a microservices platform but still appreciate the features of Spring Cloud, then Spring Cloud Kubernetes gives us the best of both worlds.

The full source code of the example is available over on GitHub.

Build Microservice Architecture With Kubernetes, Spring Boot, and Docker

In this article, we learn how to start a Spring Boot microservice project and run it quickly with Kubernetes and Docker.

The topics covered in this article are:

  • Using Spring Boot 2.0 in cloud-native development

  • Providing service discovery for all microservices using a Spring Cloud Kubernetes project

  • Injecting configuration settings into application pods using Kubernetes Config Maps and Secrets

  • Building application images using Docker and deploying them on Kubernetes using YAML configuration files

  • Using Spring Cloud Kubernetes together with a Zuul proxy to expose a single Swagger API documentation for all microservices

Spring Cloud and Kubernetes may be treated as competing solutions when you build a microservices environment. Components like Eureka, Spring Cloud Config, or Zuul provided by Spring Cloud may be replaced by built-in Kubernetes objects like services, config maps, secrets, or ingresses. But even if you decide to use Kubernetes components instead of Spring Cloud, you can take advantage of some interesting features provided throughout the whole Spring Cloud project.

The one really interesting project that helps us in development is Spring Cloud Kubernetes. Although it is still in the incubation stage, it is definitely worth dedicating some time to it. It integrates Spring Cloud with Kubernetes. I'll show you how to use an implementation of the discovery client, inter-service communication with the Ribbon client, and Zipkin discovery using Spring Cloud Kubernetes.

Before we proceed to the source code, let's take a look at the following diagram. It illustrates the architecture of our sample system. It is quite similar to the architecture presented in the mentioned article about microservices on Spring Cloud. There are three independent applications (employee-service, department-service, organization-service), which communicate with each other through a REST API. These Spring Boot microservices use some built-in mechanisms provided by Kubernetes: config maps and secrets for distributed configuration, etcd for service discovery, and ingresses for the API gateway.

Let's proceed to the implementation. Currently, the newest stable version of Spring Cloud is Finchley.RELEASE. This version of spring-cloud-dependencies should be declared as a BOM for dependency management.

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>Finchley.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Spring Cloud Kubernetes is not released under Spring Cloud Release Trains, so we need to explicitly define its version. Because we use Spring Boot 2.0 we have to include the newest SNAPSHOT version of spring-cloud-kubernetes artifacts, which is 0.3.0.BUILD-SNAPSHOT.

The source code of sample applications presented in this article is available on GitHub in this repository.

Pre-Requirements

In order to be able to deploy and test our sample microservices, we need to prepare a development environment. We can do that in the following steps:

  • You need at least a single node cluster instance of Kubernetes (Minikube) or Openshift (Minishift) running on your local machine. You should start it and expose the embedded Docker client provided by both of them. The detailed instructions for Minishift may be found in my Quick guide to deploying Java apps on OpenShift. You can also use that description to run Minikube — just replace word "minishift" with "minikube." In fact, it does not matter if you choose Kubernetes or Openshift — the next part of this tutorial will be applicable for both of them.

  • Spring Cloud Kubernetes requires access to the Kubernetes API in order to be able to retrieve a list of addresses for pods running for a single service. If you use Kubernetes, you should just execute the following command:

$ kubectl create clusterrolebinding admin --clusterrole=cluster-admin --serviceaccount=default:default

If you deploy your microservices on Minishift, you should first enable admin-user add-on, then log in as a cluster admin and grant the required permissions.

$ minishift addons enable admin-user
$ oc login -u system:admin
$ oc policy add-role-to-user cluster-reader system:serviceaccount:myproject:default
With the environment ready, we can define the MongoDB deployment and service used by the sample microservices:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo:latest
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mongodb
              key: database-name
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-user
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  ports:
  - port: 27017
    protocol: TCP
  selector:
    app: mongodb

1. Inject the Configuration With Config Maps and Secrets

When using Spring Cloud, the most obvious choice for realizing a distributed configuration in your system is Spring Cloud Config. With Kubernetes, you can use Config Map. It holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data. It is used for storing and sharing non-sensitive, unencrypted configuration information. To use sensitive information in your clusters, you must use Secrets. Use of both these Kubernetes objects can be perfectly demonstrated based on the example of MongoDB connection settings. Inside a Spring Boot application, we can easily inject it using environment variables. Here's a fragment of application.yml file with URI configuration.

spring:
  data:
    mongodb:
      uri: mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongodb/${MONGO_DATABASE}

While username and password are sensitive fields, a database name is not, so we can place it inside the config map.

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb
data:
  database-name: microservices

Of course, the username and password are defined as secrets.

apiVersion: v1
kind: Secret
metadata:
  name: mongodb
type: Opaque
data:
  database-password: MTIzNDU2
  database-user: cGlvdHI=
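
Note that the values of a Secret's data fields must be Base64-encoded; the strings above decode to piotr and 123456. You can produce such values with the standard base64 tool:

$ echo -n 'piotr' | base64
cGlvdHI=
$ echo -n '123456' | base64
MTIzNDU2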

To apply the configuration to the Kubernetes cluster, we run the following commands.

$ kubectl apply -f kubernetes/mongodb-configmap.yaml
$ kubectl apply -f kubernetes/mongodb-secret.yaml
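
You can quickly confirm that both objects have been created:

$ kubectl get configmap mongodb
$ kubectl get secret mongodb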

After that, we should inject the configuration properties into the application's pods. When defining the container configuration inside the Deployment YAML file, we have to include references to the config map and the secrets, as shown below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: employee
  labels:
    app: employee
spec:
  replicas: 1
  selector:
    matchLabels:
      app: employee
  template:
    metadata:
      labels:
        app: employee
    spec:
      containers:
      - name: employee
        image: piomin/employee:1.0
        ports:
        - containerPort: 8080
        env:
        - name: MONGO_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mongodb
              key: database-name
        - name: MONGO_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-user
        - name: MONGO_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-password

2. Building Service Discovery With Kubernetes

We usually run microservices on Kubernetes using Docker containers. One or more containers are grouped into pods, which are the smallest deployable units created and managed in Kubernetes. A good practice is to run only one container inside a single pod. If you would like to scale up your microservice, you just have to increase the number of running pods. All running pods that belong to a single microservice are logically grouped by a Kubernetes Service. This service may be visible outside the cluster and is able to load balance incoming requests between all running pods. The following service definition groups all pods labeled with the field app equal to employee.

apiVersion: v1
kind: Service
metadata:
  name: employee
  labels:
    app: employee
spec:
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: employee
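
Scaling the employee microservice up is then just a matter of increasing the pod count behind this service, for example:

$ kubectl scale deployment employee --replicas=3
$ kubectl get pods -l app=employee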

A Service can be used to access the application outside the Kubernetes cluster or for inter-service communication inside a cluster. However, the communication between microservices can be implemented more comfortably with Spring Cloud Kubernetes. First, we need to include the following dependency in the project pom.xml.

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes</artifactId>
    <version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>

Then we should enable the discovery client for the application, the same as we have always done for discovery with Spring Cloud Netflix Eureka. This allows you to query Kubernetes endpoints (services) by name. The discovery feature is also used by the Spring Cloud Kubernetes Ribbon and Zipkin projects to fetch, respectively, the list of pods defined for a microservice to be load balanced, and the Zipkin servers available for sending traces or spans.

@SpringBootApplication
@EnableDiscoveryClient
@EnableMongoRepositories
@EnableSwagger2
public class EmployeeApplication {
 public static void main(String[] args) {
  SpringApplication.run(EmployeeApplication.class, args);
 }
 // ...
}
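
Under the hood, the discovery client reads the endpoints that Kubernetes maintains for every service. You can check what it will see for employee yourself:

$ kubectl get endpoints employee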

The last important thing in this section is to guarantee that the Spring application name is exactly the same as the Kubernetes service name for the application. For the application employee-service, it is employee.

spring:
  application:
    name: employee

3. Building Microservices Using Docker and Deploying on Kubernetes

There is nothing unusual in our sample microservices. We have included some standard Spring dependencies for building REST-based microservices, integrating with MongoDB, and generating API documentation using Swagger2.

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>2.9.2</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>

In order to integrate with MongoDB, we should create an interface that extends standard Spring Data CrudRepository.

public interface EmployeeRepository extends CrudRepository<Employee, String> {
 List<Employee> findByDepartmentId(Long departmentId);
 List<Employee> findByOrganizationId(Long organizationId);
}

The entity class should be annotated with Mongo's @Document, and the primary key field with @Id.

@Document(collection = "employee")
public class Employee {
 @Id
 private String id;
 private Long organizationId;
 private Long departmentId;
 private String name;
 private int age;
 private String position;
 // ...
}

The repository bean has been injected into the controller class. Here's the full implementation of our REST API inside employee-service.

@RestController
public class EmployeeController {
 private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);
 @Autowired
 EmployeeRepository repository;
 @PostMapping("/")
 public Employee add(@RequestBody Employee employee) {
  LOGGER.info("Employee add: {}", employee);
  return repository.save(employee);
 }
 @GetMapping("/{id}")
 public Employee findById(@PathVariable("id") String id) {
  LOGGER.info("Employee find: id={}", id);
  return repository.findById(id).get();
 }
 @GetMapping("/")
 public Iterable<Employee> findAll() {
  LOGGER.info("Employee find");
  return repository.findAll();
 }
 @GetMapping("/department/{departmentId}")
 public List<Employee> findByDepartment(@PathVariable("departmentId") Long departmentId) {
  LOGGER.info("Employee find: departmentId={}", departmentId);
  return repository.findByDepartmentId(departmentId);
 }
 @GetMapping("/organization/{organizationId}")
 public List<Employee> findByOrganization(@PathVariable("organizationId") Long organizationId) {
  LOGGER.info("Employee find: organizationId={}", organizationId);
  return repository.findByOrganizationId(organizationId);
 }
}

In order to run our microservices on Kubernetes, we should first build the whole Maven project with the mvn clean install command. Each microservice has a Dockerfile placed in the root directory. Here's the Dockerfile definition for employee-service.

FROM openjdk:8-jre-alpine
ENV APP_FILE employee-service-1.0-SNAPSHOT.jar
ENV APP_HOME /usr/apps
EXPOSE 8080
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]

Let's build Docker images for all three sample microservices.

$ cd employee-service
$ docker build -t piomin/employee:1.0 .
$ cd ../department-service
$ docker build -t piomin/department:1.0 .
$ cd ../organization-service
$ docker build -t piomin/organization:1.0 .

The last step is to deploy the Docker containers with our applications on Kubernetes. To do that, just execute kubectl apply on the YAML configuration files. The sample deployment file for employee-service was shown in step 1. All the required deployment files are available inside the project repository in the kubernetes directory.

$ kubectl apply -f kubernetes/employee-deployment.yaml
$ kubectl apply -f kubernetes/department-deployment.yaml
$ kubectl apply -f kubernetes/organization-deployment.yaml
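
To verify that everything has started correctly, you can list the created deployments, pods, and services:

$ kubectl get deployments
$ kubectl get pods
$ kubectl get svc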

4. Communication Between Microservices With Spring Cloud Kubernetes Ribbon

All the microservices are now deployed on Kubernetes, so it's worth discussing some aspects related to inter-service communication. The application employee-service, in contrast to the other microservices, does not invoke any other microservices. Let's take a look at the microservices that call the API exposed by employee-service and communicate with each other (organization-service calls the department-service API).

First, we need to include some additional dependencies in the project. We use Spring Cloud Ribbon and OpenFeign. Alternatively, you can also use Spring's @LoadBalanced RestTemplate.

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId>
    <version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>

Here's the main class of department-service. It enables the Feign client using the @EnableFeignClients annotation. It works the same as with discovery based on Spring Cloud Netflix Eureka. OpenFeign uses Ribbon for client-side load balancing. Spring Cloud Kubernetes Ribbon provides some beans that force Ribbon to communicate with the Kubernetes API through the Fabric8 KubernetesClient.

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
@EnableMongoRepositories
@EnableSwagger2
public class DepartmentApplication {
 public static void main(String[] args) {
  SpringApplication.run(DepartmentApplication.class, args);
 }
 // ...
}

Here's the implementation of the Feign client for calling the method exposed by employee-service.

@FeignClient(name = "employee")
public interface EmployeeClient {
 @GetMapping("/department/{departmentId}")
 List findByDepartment(@PathVariable("departmentId") String departmentId);
}

Finally, we have to inject the Feign client bean into the REST controller. Now, we may call the methods defined inside EmployeeClient, which is equivalent to calling the REST endpoints.

@RestController
public class DepartmentController {
 private static final Logger LOGGER = LoggerFactory.getLogger(DepartmentController.class);
 @Autowired
 DepartmentRepository repository;
 @Autowired
 EmployeeClient employeeClient;
 // ...
 @GetMapping("/organization/{organizationId}/with-employees")
 public List<Department> findByOrganizationWithEmployees(@PathVariable("organizationId") Long organizationId) {
  LOGGER.info("Department find: organizationId={}", organizationId);
  List<Department> departments = repository.findByOrganizationId(organizationId);
  departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
  return departments;
 }
}

5. Building API Gateway Using Kubernetes Ingress

Ingress is a collection of rules that allow incoming requests to reach the downstream services. In our microservices architecture, the ingress plays the role of an API gateway. To create it, we should first prepare a YAML descriptor file. The descriptor should contain the hostname under which the gateway will be available and the mapping rules to the downstream services.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
  - host: microservices.info
    http:
      paths:
      - path: /employee
        backend:
          serviceName: employee
          servicePort: 8080
      - path: /department
        backend:
          serviceName: department
          servicePort: 8080
      - path: /organization
        backend:
          serviceName: organization
          servicePort: 8080

You have to execute the following command to apply the configuration above to the Kubernetes cluster.

$ kubectl apply -f kubernetes/ingress.yaml

To test this solution locally, we have to add a mapping between the IP address and the hostname set in the ingress definition to the hosts file, as shown below. After that, we can test the services through the ingress using the defined hostname, e.g. http://microservices.info/employee.

192.168.99.100 microservices.info
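
If you run Minikube, you can append that entry in one step on Linux or macOS; minikube ip prints the cluster's address:

$ echo "$(minikube ip) microservices.info" | sudo tee -a /etc/hosts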

You can check the details of the created ingress just by executing the command kubectl describe ing gateway-ingress.

6. Enabling API Specification on the Gateway Using Swagger2

What if we would like to expose a single Swagger documentation site for all the microservices deployed on Kubernetes? Well, here things get complicated. We could run a container with Swagger UI and manually map all the paths exposed by the ingress, but that is not a good solution.

In that case, we can use Spring Cloud Kubernetes Ribbon one more time, this time together with Spring Cloud Netflix Zuul. Zuul will act as a gateway only for serving the Swagger API. Here's the list of dependencies used in my gateway-service project.

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-zuul</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes</artifactId>
    <version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId>
    <version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
    <version>2.9.2</version>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>2.9.2</version>
</dependency>

The Kubernetes discovery client detects all the services exposed in the cluster, but we would like to display documentation only for our three microservices. That's why I defined the following routes for Zuul.

zuul:
  routes:
    department:
      path: /department/**
    employee:
      path: /employee/**
    organization:
      path: /organization/**

Now we can use the ZuulProperties bean to get the routes' addresses from Kubernetes discovery and configure them as Swagger resources, as shown below.

@Configuration
public class GatewayApi {
 @Autowired
 ZuulProperties properties;
 @Primary
 @Bean
 public SwaggerResourcesProvider swaggerResourcesProvider() {
  return () -> {
   List<SwaggerResource> resources = new ArrayList<>();
   properties.getRoutes().values().stream()
   .forEach(route -> resources.add(createResource(route.getId(), "2.0")));
   return resources;
  };
 }
 private SwaggerResource createResource(String location, String version) {
  SwaggerResource swaggerResource = new SwaggerResource();
  swaggerResource.setName(location);
  swaggerResource.setLocation("/" + location + "/v2/api-docs");
  swaggerResource.setSwaggerVersion(version);
  return swaggerResource;
 }
}

The application gateway-service should be deployed on the cluster in the same way as the other applications. You can see the list of running services by executing the command kubectl get svc. The Swagger documentation is available under the address http://192.168.99.100:31237/swagger-ui.html.

Thanks for reading!

Originally published by Piotr Mińkowski at dzone.com

Deploying a full-stack App with Spring Boot, MySQL, React on Kubernetes

Introduction

In this article, you’ll learn how to deploy a stateful app built with Spring Boot, MySQL, and React on Kubernetes. We’ll use a local Minikube cluster to deploy the application. Please make sure that you have kubectl and minikube installed on your system.
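
You can quickly verify that both tools are installed:

$ kubectl version --client
$ minikube version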

It is a full-stack Polling app where users can log in, create a Poll, and vote for a Poll.

To deploy this application, we’ll use a few additional Kubernetes concepts called PersistentVolumes and Secrets. Let’s first get a basic understanding of these concepts before moving on to the hands-on deployment guide.

Kubernetes Persistent Volume

We’ll use Kubernetes Persistent Volumes to deploy MySQL. A PersistentVolume (PV) is a piece of storage in the cluster. It is a resource in the cluster, just like a node. The persistent volume’s lifecycle is independent of any Pod’s lifecycle. It preserves data through restarting, rescheduling, and even deleting Pods.

PersistentVolumes are consumed by something called a PersistentVolumeClaim (PVC). A PVC is a request for storage by a user, and it is similar to a Pod: Pods consume node resources, and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); PVCs can request a specific size and access modes (e.g. read-write or read-only).

Kubernetes Secrets

We’ll make use of Kubernetes Secrets to store the database credentials. A Secret is an object in Kubernetes that lets you store and manage sensitive information, such as passwords, tokens, SSH keys, etc. Secrets are stored in Kubernetes’ backing store, etcd. You can enable encryption to store secrets in encrypted form in etcd.

Deploying MySQL on Kubernetes using PersistentVolume and Secrets

Following is the Kubernetes manifest for MySQL deployment. I’ve added comments alongside each configuration to make sure that its usage is clear to you.

apiVersion: v1
kind: PersistentVolume            # Create a PersistentVolume
metadata:
  name: mysql-pv
  labels:
    type: local
spec:
  storageClassName: standard      # Storage class. A PV Claim requesting the same storageClass can be bound to this volume. 
  capacity:
    storage: 250Mi
  accessModes:
    - ReadWriteOnce
  hostPath:                       # hostPath PersistentVolume is used for development and testing. It uses a file/directory on the Node to emulate network-attached storage
    path: "/mnt/data"
  persistentVolumeReclaimPolicy: Retain  # Retain the PersistentVolume even after PersistentVolumeClaim is deleted. The volume is considered “released”. But it is not yet available for another claim because the previous claimant’s data remains on the volume. 
---    
apiVersion: v1
kind: PersistentVolumeClaim        # Create a PersistentVolumeClaim to request a PersistentVolume storage
metadata:                          # Claim name and labels
  name: mysql-pv-claim
  labels:
    app: polling-app
spec:                              # Access mode and resource limits
  storageClassName: standard       # Request a certain storage class
  accessModes:
    - ReadWriteOnce                # ReadWriteOnce means the volume can be mounted as read-write by a single Node
  resources:
    requests:
      storage: 250Mi
---
apiVersion: v1                    # API version
kind: Service                     # Type of kubernetes resource 
metadata:
  name: polling-app-mysql         # Name of the resource
  labels:                         # Labels that will be applied to the resource
    app: polling-app
spec:
  ports:
    - port: 3306
  selector:                       # Selects any Pod with labels `app=polling-app,tier=mysql`
    app: polling-app
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment                    # Type of the kubernetes resource
metadata:
  name: polling-app-mysql           # Name of the deployment
  labels:                           # Labels applied to this deployment 
    app: polling-app
spec:
  selector:
    matchLabels:                    # This deployment applies to the Pods matching the specified labels
      app: polling-app
      tier: mysql
  strategy:
    type: Recreate
  template:                         # Template for the Pods in this deployment
    metadata:
      labels:                       # Labels to be applied to the Pods in this deployment
        app: polling-app
        tier: mysql
    spec:                           # The spec for the containers that will be run inside the Pods in this deployment
      containers:
      - image: mysql:5.6            # The container image
        name: mysql
        env:                        # Environment variables passed to the container 
        - name: MYSQL_ROOT_PASSWORD 
          valueFrom:                # Read environment variables from kubernetes secrets
            secretKeyRef:
              name: mysql-root-pass
              key: password
        - name: MYSQL_DATABASE
          valueFrom:
            secretKeyRef:
              name: mysql-db-url
              key: database
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: mysql-user-pass
              key: username
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-user-pass
              key: password
        ports:
        - containerPort: 3306        # The port that the container exposes       
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage  # This name should match the name specified in `volumes.name`
          mountPath: /var/lib/mysql
      volumes:                       # A PersistentVolume is mounted as a volume to the Pod  
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim

We’re creating four resources in the above manifest file: a PersistentVolume, a PersistentVolumeClaim for requesting access to the PersistentVolume resource, a Service for having a static endpoint for the MySQL database, and a Deployment for running and managing the MySQL pod.

The MySQL container reads database credentials from environment variables. The environment variables access these credentials from Kubernetes secrets.

Let’s start a Minikube cluster, create Kubernetes secrets to store the database credentials, and deploy the MySQL instance:

Starting a Minikube cluster

$ minikube start

Creating the secrets

You can create secrets manually from a literal or file using the kubectl create secret command, or you can create them from a generator using Kustomize.

In this article, we’re gonna create the secrets manually (replace <your-password> below with a password of your choice):

$ kubectl create secret generic mysql-root-pass --from-literal=password=R00t
secret/mysql-root-pass created

$ kubectl create secret generic mysql-user-pass --from-literal=username=callicoder --from-literal=password=<your-password>
secret/mysql-user-pass created

$ kubectl create secret generic mysql-db-url --from-literal=database=polls --from-literal=url='jdbc:mysql://polling-app-mysql:3306/polls?useSSL=false&serverTimezone=UTC&useLegacyDatetimeCode=false'
secret/mysql-db-url created

You can get the secrets like this -

$ kubectl get secrets
NAME                         TYPE                                  DATA   AGE
default-token-tkrx5          kubernetes.io/service-account-token   3      3d23h
mysql-db-url                 Opaque                                2      2m32s
mysql-root-pass              Opaque                                1      3m19s
mysql-user-pass              Opaque                                2      3m6s

You can also find more details about a secret like so -

$ kubectl describe secrets mysql-user-pass
Name:         mysql-user-pass
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
username:  10 bytes
password:  10 bytes
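
The actual values are not displayed. If you ever need to read one back, you can decode it from the stored Secret, for example:

$ kubectl get secret mysql-user-pass -o jsonpath='{.data.username}' | base64 --decode
callicoder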

Deploying MySQL

Let’s now deploy MySQL by applying the yaml configuration -

$ kubectl apply -f deployments/mysql-deployment.yaml
service/polling-app-mysql created
persistentvolumeclaim/mysql-pv-claim created
deployment.apps/polling-app-mysql created

That’s it! You can check all the resources created in the cluster using the following commands -

$ kubectl get persistentvolumes
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
mysql-pv   250Mi      RWO            Retain           Bound    default/mysql-pv-claim   standard                30s

$ kubectl get persistentvolumeclaims
NAME             STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim   Bound    mysql-pv   250Mi      RWO            standard       50s

$ kubectl get services
NAME                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes          ClusterIP   10.96.0.1    <none>        443/TCP    5m36s
polling-app-mysql   ClusterIP   None         <none>        3306/TCP   2m57s

$ kubectl get deployments
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
polling-app-mysql   1/1     1            1           3m14s

Logging into the MySQL pod

You can get the MySQL pod name and use the kubectl exec command to log in to the Pod.

$ kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
polling-app-mysql-6b94bc9d9f-td6l4   1/1     Running   0          4m23s

$ kubectl exec -it polling-app-mysql-6b94bc9d9f-td6l4 -- /bin/bash
root@polling-app-mysql-6b94bc9d9f-td6l4:/#
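
From here, a quick way to sanity-check the instance is to connect with the mysql client bundled in the image, using the credentials stored in the secrets (username callicoder, database polls):

root@polling-app-mysql-6b94bc9d9f-td6l4:/# mysql -u callicoder -p polls
Enter password: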

Deploying the Spring Boot app on Kubernetes

All right! Now that we have the MySQL instance deployed, let’s proceed with the deployment of the Spring Boot app.

Following is the deployment manifest for the Spring Boot app -

---
apiVersion: apps/v1           # API version
kind: Deployment              # Type of kubernetes resource
metadata:
  name: polling-app-server    # Name of the kubernetes resource
  labels:                     # Labels that will be applied to this resource
    app: polling-app-server
spec:
  replicas: 1                 # No. of replicas/pods to run in this deployment
  selector:
    matchLabels:              # The deployment applies to any pods matching the specified labels
      app: polling-app-server
  template:                   # Template for creating the pods in this deployment
    metadata:
      labels:                 # Labels that will be applied to each Pod in this deployment
        app: polling-app-server
    spec:                     # Spec for the containers that will be run in the Pods
      containers:
      - name: polling-app-server
        image: callicoder/polling-app-server:1.0.0
        imagePullPolicy: IfNotPresent
        ports:
          - name: http
            containerPort: 8080 # The port that the container exposes
        resources:
          limits:
            cpu: 0.2
            memory: "200Mi"
        env:                  # Environment variables supplied to the Pod
        - name: SPRING_DATASOURCE_USERNAME # Name of the environment variable
          valueFrom:          # Get the value of environment variable from kubernetes secrets
            secretKeyRef:
              name: mysql-user-pass
              key: username
        - name: SPRING_DATASOURCE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-user-pass
              key: password
        - name: SPRING_DATASOURCE_URL
          valueFrom:
            secretKeyRef:
              name: mysql-db-url
              key: url
---
apiVersion: v1                # API version
kind: Service                 # Type of the kubernetes resource
metadata:                     
  name: polling-app-server    # Name of the kubernetes resource
  labels:                     # Labels that will be applied to this resource
    app: polling-app-server
spec:                         
  type: NodePort              # The service will be exposed by opening a Port on each node and proxying it. 
  selector:
    app: polling-app-server   # The service exposes Pods with label `app=polling-app-server`
  ports:                      # Forward incoming connections on port 8080 to the target port 8080
  - name: http
    port: 8080
    targetPort: 8080

The above deployment uses the Secrets stored in mysql-user-pass and mysql-db-url that we created in the previous section.

Let’s apply the manifest file to create the resources -

$ kubectl apply -f deployments/polling-app-server.yaml
deployment.apps/polling-app-server created
service/polling-app-server created

You can check the created Pods like this -

$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
polling-app-mysql-6b94bc9d9f-td6l4    1/1     Running   0          21m
polling-app-server-744b47f866-s2bpf   1/1     Running   0          31s

Now, type the following command to get the polling-app-server service URL -

$ minikube service polling-app-server --url
http://192.168.99.100:31550

You can now use the above endpoint to interact with the service -

$ curl http://192.168.99.100:31550
{"timestamp":"2019-07-30T17:55:11.366+0000","status":404,"error":"Not Found","message":"No message available","path":"/"}

Deploying the React app on Kubernetes

Finally, Let’s deploy the frontend app using Kubernetes. Here is the deployment manifest -

apiVersion: apps/v1             # API version
kind: Deployment                # Type of kubernetes resource
metadata:
  name: polling-app-client      # Name of the kubernetes resource
spec:
  replicas: 1                   # No of replicas/pods to run
  selector:                     
    matchLabels:                # This deployment applies to Pods matching the specified labels
      app: polling-app-client
  template:                     # Template for creating the Pods in this deployment
    metadata:
      labels:                   # Labels that will be applied to all the Pods in this deployment
        app: polling-app-client
    spec:                       # Spec for the containers that will run inside the Pods
      containers:
      - name: polling-app-client
        image: callicoder/polling-app-client:1.0.0
        imagePullPolicy: IfNotPresent
        ports:
          - name: http
            containerPort: 80   # Should match the Port that the container listens on
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
---
apiVersion: v1                  # API version
kind: Service                   # Type of kubernetes resource
metadata:
  name: polling-app-client      # Name of the kubernetes resource
spec:
  type: NodePort                # Exposes the service by opening a port on each node
  selector:
    app: polling-app-client     # Any Pod matching the label `app=polling-app-client` will be picked up by this service
  ports:                        # Forward incoming connections on port 80 to the target port 80 in the Pod
  - name: http
    port: 80
    targetPort: 80

Let’s apply the above manifest file to deploy the frontend app -

$ kubectl apply -f deployments/polling-app-client.yaml
deployment.apps/polling-app-client created
service/polling-app-client created

Let’s check all the Pods in the cluster -

$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
polling-app-client-6b6d979b-7pgxq     1/1     Running   0          26m
polling-app-mysql-6b94bc9d9f-td6l4    1/1     Running   0          21m
polling-app-server-744b47f866-s2bpf   1/1     Running   0          31s

Type the following command to open the frontend service in the default browser -

$ minikube service polling-app-client

You’ll notice that the backend API calls from the frontend app are failing because the frontend app tries to access the backend APIs at localhost:8080. Ideally, in a real-world setup, you’d have a public domain for your backend server. But since our entire setup is installed locally, we can use the kubectl port-forward command to map the localhost:8080 endpoint to the backend service -

$ kubectl port-forward service/polling-app-server 8080:8080

That’s it! Now you’ll be able to use the frontend app.


Add CI/CD to Your Spring Boot App with Jenkins X and Kubernetes

A lot has happened in the last five years of software development. What it means to build, deploy, and orchestrate software has changed drastically. There’s been a move from hosting software on-premise to public cloud and shift from virtual machines (VMs) to containers. Containers are cheaper to run than VMs because they require fewer resources and run as single processes. Moving to containers has reduced costs, but created the problem of how to run containers at scale.


Kubernetes was first open-sourced on June 6th, 2014. Google had been using containers for years and used a tool called Borg to manage containers at scale. Kubernetes is the open source version of Borg and has become the de facto standard in the last four years.

Its journey to becoming a standard was largely facilitated by all the big players jumping on board. Red Hat, IBM, Amazon, Microsoft, Oracle, and Pivotal – every major public cloud provider has Kubernetes support.

This is great for developers because it provides a single way to package applications (in a Docker container) and deploy them on any Kubernetes cluster.

High-Performance Development with CI/CD, Kubernetes, and Jenkins X

High performing teams are almost always a requirement for success in technology, and continuous integration, continuous deployment (CI/CD), small iterations, plus fast feedback are the building blocks. CI/CD can be difficult to set up for your cloud native app. By automating everything, developers can spend their precious time delivering actual business value.

How do you become a high performing team using containers, continuous delivery, and Kubernetes? This is where Jenkins X comes in.

“The idea of Jenkins X is to give all developers their own naval seafaring butler that can help you sail the seas of continuous delivery.” — James Strachan

Jenkins X helps you automate your CI/CD in Kubernetes – and you don’t even have to learn Docker or Kubernetes!

What Does Jenkins X Do?

Jenkins X automates the installation, configuration, and upgrading of Jenkins and other apps (Helm, Skaffold, Nexus, among others) on Kubernetes. It automates CI/CD of your applications using Docker images, Helm charts, and pipelines. It uses GitOps to manage promotion between environments and provides lots of feedback by commenting on pull requests as they hit staging and production.

Get Started with Jenkins X

To get started with Jenkins X, you first need to install the jx binary on your machine or cloud provider. You can get $300 in credits for Google Cloud, so I decided to start there.

Install Jenkins X on Google Cloud and Create a Cluster

Navigate to cloud.google.com and log in. If you don’t have an account, sign up for a free trial. Go to the console (there’s a link in the top right corner) and activate Google Cloud shell. Copy and paste the following commands into the shell.

curl -L https://github.com/jenkins-x/jx/releases/download/v1.3.79/jx-linux-amd64.tar.gz | tar xzv
sudo mv jx /usr/local/bin


NOTE: Google Cloud Shell terminates any changes made outside your home directory after an hour, so you might have to rerun the commands. The good news is they’ll be in your history, so you only need to hit the up arrow and enter. You can also eliminate the sudo mv command above and add the following to your .bashrc instead.

export PATH=$PATH:.


Create a cluster on GKE (Google Kubernetes Engine) using the following command. You may have to enable GKE for your account.

jx create cluster gke --skip-login


Confirm you want to install helm if you’re prompted to download it. You will be prompted to select a Google Cloud Zone. I’d suggest picking one close to your location. I chose us-west1-a since I live near Denver, Colorado. For Google Cloud Machine Type, I selected n1-standard-2, and used the defaults for the min (3) and max (5) number of nodes.

For the GitHub name, type your own (e.g., mraible) and an email you have registered with GitHub. I tried to use oktadeveloper (a GitHub organization), and I was unable to make it work.

NOTE: GitHub integration will fail if you have two-factor authentication enabled on your account. You’ll need to disable it on GitHub if you want the process to complete successfully. :-/

When prompted to install an ingress controller, hit Enter for Yes. Hit Enter again to select the default domain.

You’ll be prompted to create a GitHub API Token. Click on the provided URL and name it “Jenkins X”. Copy and paste the token’s value back into your console.

Grab a coffee, an adult beverage, or do some pushups while your install finishes. It can take several minutes.

The next step will be to copy the API token from Jenkins to your console. Follow the provided instructions in your console.

When you’re finished with that, run jx console and click on the link to log in to your Jenkins instance. Click on Administration and upgrade Jenkins, as well as all its plugins (Plugin Manager > scroll to the bottom and select all). If you fail to perform this step, you won’t be able to navigate from your GitHub pull request to your Jenkins X CI process for it.

Create a Spring Boot App

When I first started using Jenkins X, I tried to import an existing project. Even though my app used Spring Boot, it didn’t have a pom.xml in the root directory, so Jenkins X thought it was a Node.js app. For this reason, I suggest creating a blank Spring Boot app first to confirm Jenkins X is set up correctly.

Create a bare-bones Spring Boot app from Cloud Shell:

jx create spring -d web -d actuator


This command uses Spring Initializr, so you’ll be prompted with a few choices. Below are the answers I used:

Question    Answer
Language    java
Group       com.okta.developer
Artifact    okta-spring-jx-example

TIP: Picking a short name for your artifact will save you pain. Jenkins X has a 53-character limit for release names, and oktadeveloper/okta-spring-boot-jenkinsx-example would exceed it by two characters.

Select all the defaults for the git user name, initializing git, and the commit message. You can select an organization to use if you don’t want to use your personal account. Run the following command to watch the CI/CD pipeline of your app.

jx get activity -f okta-spring-jx-example -w


Run jx console, click the resulting link, and navigate to your project if you’d like a more visually rich view.

This process will perform a few tasks:

  1. Create a release for your project.
  2. Create a pull request for your staging environment project.
  3. Auto-deploy it to staging environment so you can see it in action.

Merge status checks all passed so the promotion worked!
Application is available at: http://okta-spring-jx-example.jx-staging.35.230.106.169.nip.io


NOTE: Since Spring Boot doesn’t provide a welcome page by default, you will get a 404 when you open the URL above.

Deploy Your Spring Boot App to Production with Jenkins X

By default, Jenkins X will only auto-deploy to staging. You can manually promote from staging to production using:

jx promote okta-spring-jx-example --version 0.0.1 --env production


You can change your production environment to use auto-deploy using jx edit environment (https://jenkins-x.io/commands/jx_edit_environment/).

Now that you know how to use Jenkins X with a bare-bones Spring Boot app, let’s see how to make it work with a more real-world example.

Secure Your Spring Boot App and Add an Angular PWA

Over the last several months, I’ve written a series of blog posts about building a PWA (progressive web app) with Ionic/Angular and Spring Boot.


This is the final blog post in the series. I believe this is an excellent example of a real-world app because it has numerous unit and integration tests, including end-to-end tests with Protractor. Let’s see how to automate its path to production with Jenkins X and Kubernetes!

Clone the Spring Boot project you just created from GitHub (make sure to change {yourUsername} in the URL):

git clone https://github.com/{yourUsername}/okta-spring-jx-example.git okta-jenkinsx


In an adjacent directory, clone the project created that has Spring Boot + Angular as a single artifact:

git clone https://github.com/oktadeveloper/okta-spring-boot-angular-auth-code-flow-example.git spring-boot-angular


In a terminal, navigate to okta-jenkinsx and remove the files that are no longer necessary:

cd okta-jenkinsx
rm -rf .mvn src mvnw* pom.xml


The result should be a directory structure with the following files:

$ tree .
.
├── charts
│   ├── okta-spring-jx-example
│   │   ├── Chart.yaml
│   │   ├── Makefile
│   │   ├── README.md
│   │   ├── templates
│   │   │   ├── deployment.yaml
│   │   │   ├── _helpers.tpl
│   │   │   ├── NOTES.txt
│   │   │   └── service.yaml
│   │   └── values.yaml
│   └── preview
│       ├── Chart.yaml
│       ├── Makefile
│       ├── requirements.yaml
│       └── values.yaml
├── Dockerfile
├── Jenkinsfile
└── skaffold.yaml

4 directories, 15 files


Copy all the files from spring-boot-angular into okta-jenkinsx.

cp -r ../spring-boot-angular/* .


When using Travis CI to test this app, I ran npm install as part of the process. With Jenkins X, it’s easier to do everything within one container (e.g. maven or nodejs), so add an execution to the frontend-maven-plugin (in holdings-api/pom.xml) to run npm install (hint: you need to add the execution with id 'npm install' to the existing pom.xml).

Now is a great time to open the okta-jenkinsx directory as a project in an IDE like IntelliJ IDEA, Eclipse, Netbeans, or VS Code! :)

<plugin>
   <groupId>com.github.eirslett</groupId>
   <artifactId>frontend-maven-plugin</artifactId>
   <version>${frontend-maven-plugin.version}</version>
   <configuration>
       <workingDirectory>../crypto-pwa</workingDirectory>
   </configuration>
   <executions>
       <execution>
           <id>install node and npm</id>
           <goals>
               <goal>install-node-and-npm</goal>
           </goals>
           <configuration>
               <nodeVersion>${node.version}</nodeVersion>
           </configuration>
       </execution>
       <execution>
           <id>npm install</id>
           <goals>
               <goal>npm</goal>
           </goals>
           <phase>generate-resources</phase>
           <configuration>
               <arguments>install --unsafe-perm</arguments>
           </configuration>
       </execution>
       ...
   </executions>
</plugin>


NOTE: The --unsafe-perm flag is necessary because Jenkins X runs the build as a root user. I figured out this workaround from node-sass’s troubleshooting instructions.

Add Actuator and Turn Off HTTPS

Jenkins X relies on Spring Boot’s Actuator for health checks. This means that if you don’t include it in your project (or have /actuator/health protected), Jenkins X will report that your app has failed to start up.

Add the Actuator starter as a dependency to holdings-api/pom.xml:

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>


You’ll also need to allow access to its health check endpoint. Jenkins X will deploy your app behind a Nginx server, so you’ll want to turn off forcing HTTPS as well, or you won’t be able to reach your app. Modify holdings-api/src/main/java/.../SecurityConfiguration.java to allow /actuator/health and to remove requiresSecure().

public class SecurityConfiguration extends WebSecurityConfigurerAdapter {

   @Override
   public void configure(WebSecurity web) throws Exception {
       web.ignoring().antMatchers("/**/*.{js,html,css}");
   }

   @Override
   protected void configure(HttpSecurity http) throws Exception {
       http
               .csrf().csrfTokenRepository(CookieCsrfTokenRepository.withHttpOnlyFalse())
           .and()
               .authorizeRequests()
               .antMatchers("/", "/home", "/api/user", "/actuator/health").permitAll()
               .anyRequest().authenticated();
   }
}


Adjust Paths in Dockerfile and Jenkinsfile

Since this project builds in a sub-directory rather than the root directory, update ./Dockerfile to look in holdings-api for files.

FROM openjdk:8-jdk-slim
ENV PORT 8080
ENV CLASSPATH /opt/lib
EXPOSE 8080

# copy pom.xml and wildcards to avoid this command failing if there's no target/lib directory
COPY holdings-api/pom.xml holdings-api/target/lib* /opt/lib/

# NOTE we assume there's only 1 jar in the target dir
# but at least this means we don't have to guess the name
# we could do with a better way to know the name - or to always create an app.jar or something
COPY holdings-api/target/*.jar /opt/app.jar
WORKDIR /opt
CMD ["java", "-jar", "app.jar"]


You’ll also need to update Jenkinsfile so it runs any mvn commands in the holdings-api directory. Add the -Pprod profile too. For example:

// in the 'CI Build and push snapshot' stage
steps {
 container('maven') {
   dir ('./holdings-api') {
     sh "mvn versions:set -DnewVersion=$PREVIEW_VERSION"
     sh "mvn install -Pprod"
   }
 }
 ...
}
// in the 'Build Release' stage
dir ('./holdings-api') {
  sh "mvn versions:set -DnewVersion=\$(cat ../VERSION)"
}
...
dir ('./holdings-api') {
  sh "mvn clean deploy -Pprod"
}


This should be enough to make this app work with Jenkins X. However, you won’t be able to log into it unless you have an Okta account and configure it accordingly.

Why Okta?

In short, we make identity management a lot easier, more secure, and more scalable than what you’re probably used to. Okta is a cloud service that allows developers to create, edit, and securely store user accounts and user account data, and connect them with one or multiple applications. Our API enables you to authenticate and authorize your users, store data about them, and much more.

Are you sold? Register for a forever-free developer account, and when you’ve finished, come on back so we can learn more about CI/CD with Spring Boot and Jenkins X!

Create a Web Application in Okta for Your Spring Boot App

After you’ve completed the setup process, log in to your account and navigate to Applications > Add Application. Click Web and Next. On the next page, enter the following values and click Done (you will have to click Done, then Edit to modify Logout redirect URIs).

Open holdings-api/src/main/resources/application.yml and paste the values from your org/app into it.

okta:
 client:
   orgUrl: https://{yourOktaDomain}
   token: XXX
security:
   oauth2:
     client:
       access-token-uri: https://{yourOktaDomain}/oauth2/default/v1/token
       user-authorization-uri: https://{yourOktaDomain}/oauth2/default/v1/authorize
       client-id: {clientId}
       client-secret: {clientSecret}
     resource:
       user-info-uri: https://{yourOktaDomain}/oauth2/default/v1/userinfo


You’ll notice the token value is XXX. This is because I prefer to read it from an environment variable rather than checking it into source control. You’ll likely want to do this for your client secret as well, but I’m only doing one property for brevity. To create an API token, go to API > Tokens in your Okta dashboard and click Create Token.


You’ll need to add a holdings attribute to your organization’s user profiles to store your cryptocurrency holdings in Okta. Navigate to Users > Profile Editor. Click on Profile for the first profile in the table (you can identify it by its Okta logo), then click Add Attribute to create the holdings attribute.

After performing these steps, you should be able to navigate to http://localhost:8080 and log in after running the following commands:

cd holdings-api
./mvnw -Pprod package
java -jar target/*.jar


Storing Secrets in Jenkins X

Storing environment variables locally is pretty straightforward. But how do you do it in Jenkins X? Look no further than its credentials feature. In your Jenkins console, add a secret text credential whose ID is OKTA_CLIENT_TOKEN and whose value is your Okta API token; the Jenkinsfile below reads it by that ID.


While you’re in there, add a few more secrets: OKTA_APP_ID, E2E_USERNAME, and E2E_PASSWORD. The first is the ID of the Jenkins X OIDC app you created. You can get its value by navigating to your app on Okta and copying it from the URL. The E2E_* secrets should be the credentials you want to use to run end-to-end (Protractor) tests. You might want to create a new user for this.

You can access these values in your Jenkinsfile by adding them to the environment section near the top.

environment {
  ORG               = 'mraible'
  APP_NAME          = 'okta-spring-jx-example'
  CHARTMUSEUM_CREDS = credentials('jenkins-x-chartmuseum')
  OKTA_CLIENT_TOKEN = credentials('OKTA_CLIENT_TOKEN')
  OKTA_APP_ID       = credentials('OKTA_APP_ID')
  E2E_USERNAME      = credentials('E2E_USERNAME')
  E2E_PASSWORD      = credentials('E2E_PASSWORD')
}


Transferring Environment Variables to Docker Containers

To transfer the OKTA_CLIENT_TOKEN environment variable to the Docker container, look for:

sh "make preview"


And change it to:

sh "make OKTA_CLIENT_TOKEN=\$OKTA_CLIENT_TOKEN preview"


At this point, you can create a branch, commit your changes, and verify everything works in Jenkins X.

cd ..
git checkout -b add-secure-app
git add .
git commit -m "Add Bootiful PWA"
git push origin add-secure-app


Open your browser, navigate to your repository on GitHub, and create a pull request.

If the tests pass for your pull request, you should see some greenery and a comment from Jenkins X that your app is available in a preview environment.

If you click on the here link and try to log in, you’ll likely get an error from Okta that the redirect URI hasn’t been whitelisted.

Automate Adding Redirect URIs in Okta

When you create apps in Okta and run them locally, it’s easy to know what the redirect URIs for your app will be. For this particular app, they’ll be http://localhost:8080/login for login, and http://localhost:8080 for logout. When you go to production, the URLs are generally well-known as well. However, with Jenkins X, the URLs are dynamic and created on-the-fly based on your pull request number.

To make this work with Okta, you can create a Java class that talks to the Okta API and dynamically adds URIs. Create holdings-api/src/test/java/.../cli/AppRedirectUriManager.java and populate it with the following code.

package com.okta.developer.cli;

import com.okta.sdk.client.Client;
import com.okta.sdk.lang.Collections;
import com.okta.sdk.resource.application.OpenIdConnectApplication;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

@SpringBootApplication
public class AppRedirectUriManager implements ApplicationRunner {
   private static final Logger log = LoggerFactory.getLogger(AppRedirectUriManager.class);

   private final Client client;

   @Value("${appId}")
   private String appId;

   @Value("${redirectUri}")
   private String redirectUri;

   @Value("${operation:add}")
   private String operation;

   public AppRedirectUriManager(Client client) {
       this.client = client;
   }

   public static void main(String[] args) {
       SpringApplication.run(AppRedirectUriManager.class, args);
   }

   @Override
   public void run(ApplicationArguments args) {
       log.info("Adjusting Okta settings: {appId: {}, redirectUri: {}, operation: {}}", appId, redirectUri, operation);
       OpenIdConnectApplication app = (OpenIdConnectApplication) client.getApplication(appId);

       String loginRedirectUri = redirectUri + "/login";

       // update redirect URIs
       List<String> redirectUris = app.getSettings().getOAuthClient().getRedirectUris();
       // use a set so values are unique
       Set<String> updatedRedirectUris = new LinkedHashSet<>(redirectUris);
       if (operation.equalsIgnoreCase("add")) {
           updatedRedirectUris.add(loginRedirectUri);
       } else if (operation.equalsIgnoreCase("remove")) {
           updatedRedirectUris.remove(loginRedirectUri);
       }

       // todo: update logout redirect URIs with redirectUri (not currently available in Java SDK)
       app.getSettings().getOAuthClient().setRedirectUris(Collections.toList(updatedRedirectUris));
       app.update();
       System.exit(0);
   }
}


This class uses Spring Boot’s CLI (command-line interface) support, which makes it possible to invoke it using the Exec Maven Plugin. To add support for running it from Maven, make the following modifications in holdings-api/pom.xml.


<properties>
    ...
   <exec-maven-plugin.version>1.6.0</exec-maven-plugin.version>
   <appId>default</appId>
   <redirectUri>override-me</redirectUri>
</properties>

<!-- dependencies -->

<build>
   <defaultGoal>spring-boot:run</defaultGoal>
   <finalName>holdings-app-${project.version}</finalName>
   <plugins>
       <!-- existing plugins -->
       <plugin>
           <groupId>org.codehaus.mojo</groupId>
           <artifactId>exec-maven-plugin</artifactId>
           <version>${exec-maven-plugin.version}</version>
           <executions>
               <execution>
                   <id>add-redirect</id>
                   <goals>
                       <goal>java</goal>
                   </goals>
               </execution>
           </executions>
           <configuration>
               <mainClass>com.okta.developer.cli.AppRedirectUriManager</mainClass>
               <classpathScope>test</classpathScope>
               <arguments>
                   <argument>appId ${appId} redirectUri ${redirectUri}</argument>
               </arguments>
           </configuration>
       </plugin>
   </plugins>
</build>


Then update Jenkinsfile to add a block that runs mvn exec:java after it builds the image.

dir ('./charts/preview') {
  container('maven') {
    sh "make preview"
    sh "make OKTA_CLIENT_TOKEN=\$OKTA_CLIENT_TOKEN preview"
    sh "jx preview --app $APP_NAME --dir ../.."
  }
}

// Add redirect URI in Okta
dir ('./holdings-api') {
  container('maven') {
    sh '''
      yum install -y jq
      previewURL=$(jx get preview -o json|jq  -r ".items[].spec | select (.previewGitInfo.name==\\"$CHANGE_ID\\") | .previewGitInfo.applicationURL")
      mvn exec:java@add-redirect -DappId=$OKTA_APP_ID -DredirectUri=$previewURL
    '''
  }
}


Commit and push your changes, and your app should be updated with a redirect URI for http://{yourPreviewURL}/login. You’ll need to manually add a logout redirect URI for http://{yourPreviewURL} since this is not currently supported by Okta’s Java SDK.

To promote your passing pull request to a staging environment, merge it, and the master branch will be pushed to staging. Unfortunately, you won’t be able to log in. That’s because no process registers the staging site’s redirect URIs with your Okta app. If you add the URIs manually, everything should work.

Running Protractor Tests in Jenkins X

Figuring out how to run end-to-end tests in Jenkins X was the hardest part for me. I started by adding a new Maven profile that would allow me to run the tests with Maven rather than npm.

NOTE: For this profile to work, you will need to add http://localhost:8000/login as a login redirect URI to your app, and http://localhost:8000 as a logout redirect URI.

<profile>
   <id>e2e</id>
   <properties>
       <!-- Hard-code port instead of using build-helper-maven-plugin. -->
       <!-- This way, you don't need to add a redirectUri to Okta app. -->
       <http.port>8000</http.port>
   </properties>
   <build>
       <plugins>
           <plugin>
               <groupId>org.springframework.boot</groupId>
               <artifactId>spring-boot-maven-plugin</artifactId>
               <executions>
                   <execution>
                       <id>pre-integration-test</id>
                       <goals>
                           <goal>start</goal>
                       </goals>
                       <configuration>
                           <arguments>
                               <argument>--server.port=${http.port}</argument>
                           </arguments>
                       </configuration>
                   </execution>
                   <execution>
                       <id>post-integration-test</id>
                       <goals>
                           <goal>stop</goal>
                       </goals>
                   </execution>
               </executions>
           </plugin>
           <plugin>
               <groupId>com.github.eirslett</groupId>
               <artifactId>frontend-maven-plugin</artifactId>
               <version>${frontend-maven-plugin.version}</version>
               <configuration>
                   <workingDirectory>../crypto-pwa</workingDirectory>
               </configuration>
               <executions>
                   <execution>
                       <id>webdriver update</id>
                       <goals>
                           <goal>npm</goal>
                       </goals>
                       <phase>pre-integration-test</phase>
                       <configuration>
                           <arguments>run e2e-update</arguments>
                       </configuration>
                   </execution>
                   <execution>
                       <id>ionic e2e</id>
                       <goals>
                           <goal>npm</goal>
                       </goals>
                       <phase>integration-test</phase>
                       <configuration>
                           <environmentVariables>
                               <PORT>${http.port}</PORT>
                               <CI>true</CI>
                           </environmentVariables>
                           <arguments>run e2e-test</arguments>
                       </configuration>
                   </execution>
               </executions>
           </plugin>
       </plugins>
   </build>
</profile>


TIP: You might notice that I had to specify two separate executions for e2e-update and e2e-test. I found that running npm run e2e doesn’t work with the frontend-maven-plugin because it just calls other npm run commands. It seems you need to invoke a binary directly when using the frontend-maven-plugin.

Instead of using a TRAVIS environment variable, you’ll notice I’m using CI here. This change requires updating crypto-pwa/test/protractor.conf.js to match:

baseUrl: (process.env.CI) ? 'http://localhost:' + process.env.PORT : 'http://localhost:8100',
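
Since CI and PORT now drive the configuration, you can emulate the integration-test phase by hand. This assumes the application is already running on port 8000 (for example, started by the e2e profile’s pre-integration-test execution):

cd crypto-pwa
# CI=true makes protractor.conf.js use the PORT-based baseUrl
CI=true PORT=8000 npm run e2e-test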


Make these changes, and you should be able to run ./mvnw verify -Pprod,e2e to run your end-to-end tests locally. Note that you’ll need to have E2E_USERNAME and E2E_PASSWORD defined as environment variables.
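
For example (the credentials below are placeholders for a valid user in your Okta org):

# Run from the holdings-api directory
export E2E_USERNAME={yourOktaUsername}
export E2E_PASSWORD={yourOktaPassword}
./mvnw verify -Pprod,e2e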

When I first tried this in Jenkins X, I discovered that the jenkins-maven agent didn’t have Chrome installed. It proved difficult to install, so I switched to [jenkins-nodejs](https://github.com/jenkins-x/builder-nodejs/blob/master/Dockerfile#L4 "jenkins-nodejs"), which has Chrome and Xvfb pre-installed. Even so, I encountered the following error:

[21:51:08] E/launcher - unknown error: DevToolsActivePort file doesn't exist


This error is caused by a Chrome-on-Linux issue. The workaround is to specify --disable-dev-shm-usage in chromeOptions for Protractor. I also added some additional flags that seem to be commonly recommended. I particularly like --headless when running locally, so a browser doesn’t pop up and get in my way; if I want to watch the tests run in real time, I can quickly remove that option.

If you’d like to see your project’s Protractor tests running on Jenkins X, you’ll need to modify crypto-pwa/test/protractor.conf.js to specify the following chromeOptions:

capabilities: {
  'browserName': 'chrome',
  'chromeOptions': {
    'args': ['--headless', '--disable-gpu', '--no-sandbox', '--disable-extensions', '--disable-dev-shm-usage']
  }
},


Then add a new “Run e2e tests” stage to Jenkinsfile that sits between the “CI Build” and “Build Release” stages. If it helps, you can see the final Jenkinsfile.

stage('Run e2e tests') {
  agent {
    label "jenkins-nodejs"
  }
  steps {
    container('nodejs') {
      sh '''
        yum install -y jq
        previewURL=$(jx get preview -o json | jq -r ".items[].spec | select (.previewGitInfo.name==\\"$CHANGE_ID\\") | .previewGitInfo.applicationURL")
        cd crypto-pwa && npm install --unsafe-perm && npm run e2e-update
        Xvfb :99 &
        sleep 60s
        DISPLAY=:99 npm run e2e-test -- --baseUrl=$previewURL
      '''
    }
  }
}


After making all these changes, create a new branch, check in your changes, and create a pull request on GitHub.

git checkout -b enable-e2e-tests
git add .
git commit -m "Add stage for end-to-end tests"
git push origin enable-e2e-tests


The pull request tests will likely fail on the first run because the logout redirect URI is not configured for the new preview environment. Update your Okta app’s logout redirect URIs to match your PR’s preview environment URI, replay the pull request tests, and everything should pass!

When your pull request build passes and you merge it, Jenkins X will:

  1. Create a release for your project.
  2. Create a pull request for your staging environment project.
  3. Auto-deploy it to the staging environment so you can see it in action.

You can find the source code for this example’s completed application on GitHub.

Thanks for reading ❤