Add CI/CD to Your Spring Boot App with Jenkins X and Kubernetes

A lot has happened in the last five years of software development. What it means to build, deploy, and orchestrate software has changed drastically. There’s been a move from hosting software on-premises to the public cloud, and a shift from virtual machines (VMs) to containers. Containers are cheaper to run than VMs because they require fewer resources and run as single processes. Moving to containers has reduced costs, but created the problem of how to run containers at scale.

Kubernetes was first open-sourced on June 6th, 2014. Google had been using containers for years and used a tool called Borg to manage containers at scale. Kubernetes is the open source version of Borg and has become the de facto standard in the last four years.

Its journey to becoming a standard was largely facilitated by all the big players jumping on board. Red Hat, IBM, Amazon, Microsoft, Oracle, and Pivotal – every major public cloud provider has Kubernetes support.

This is great for developers because it provides a single way to package applications (in a Docker container) and deploy them on any Kubernetes cluster.

High-Performance Development with CI/CD, Kubernetes, and Jenkins X

High-performing teams are almost always a requirement for success in technology, and continuous integration, continuous deployment (CI/CD), small iterations, and fast feedback are the building blocks. CI/CD can be difficult to set up for your cloud-native app; by automating everything, developers can spend their precious time delivering actual business value.

How do you become a high performing team using containers, continuous delivery, and Kubernetes? This is where Jenkins X comes in.

“The idea of Jenkins X is to give all developers their own naval seafaring butler that can help you sail the seas of continuous delivery.” — James Strachan

Jenkins X helps you automate your CI/CD in Kubernetes – and you don’t even have to learn Docker or Kubernetes!

What Does Jenkins X Do?

Jenkins X automates the installation, configuration, and upgrading of Jenkins and other apps (Helm, Skaffold, Nexus, among others) on Kubernetes. It automates CI/CD of your applications using Docker images, Helm charts, and pipelines. It uses GitOps to manage promotion between environments and provides lots of feedback by commenting on pull requests as they hit staging and production.

Get Started with Jenkins X

To get started with Jenkins X, you first need to install the jx binary on your machine or cloud provider. You can get $300 in credits for Google Cloud, so I decided to start there.

Install Jenkins X on Google Cloud and Create a Cluster

Navigate to cloud.google.com and log in. If you don’t have an account, sign up for a free trial. Go to the console (there’s a link in the top right corner) and activate Google Cloud shell. Copy and paste the following commands into the shell.

curl -L https://github.com/jenkins-x/jx/releases/download/v1.3.79/jx-linux-amd64.tar.gz | tar xzv
sudo mv jx /usr/local/bin


NOTE: Google Cloud Shell discards any changes made outside your home directory after an hour, so you might have to rerun the commands. The good news is they’ll be in your history, so you only need to hit the up arrow and Enter. You can also skip the sudo mv command above and add the following to your .bashrc instead.

export PATH=$PATH:.


Create a cluster on GKE (Google Kubernetes Engine) using the following command. You may have to enable GKE for your account.

jx create cluster gke --skip-login


Confirm you want to install helm if you’re prompted to download it. You will be prompted to select a Google Cloud Zone. I’d suggest picking one close to your location. I chose us-west1-a since I live near Denver, Colorado. For Google Cloud Machine Type, I selected n1-standard-2, and used the defaults for the min (3) and max (5) number of nodes.
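If you’d rather skip the interactive prompts, those choices can also be passed as flags. A sketch, assuming the flag names from the jx 1.3.x CLI (verify them with jx create cluster gke --help):

jx create cluster gke --skip-login \
  --cluster-name=my-jx-cluster \
  --zone=us-west1-a \
  --machine-type=n1-standard-2 \
  --min-num-nodes=3 \
  --max-num-nodes=5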

For the GitHub name, type your own (e.g., mraible) and an email you have registered with GitHub (e.g., [email protected]). I tried to use oktadeveloper (a GitHub organization), and I was unable to make it work.

NOTE: GitHub integration will fail if you have two-factor authentication enabled on your account. You’ll need to disable it on GitHub if you want the process to complete successfully. :-/

When prompted to install an ingress controller, hit Enter for Yes. Hit Enter again to select the default domain.

You’ll be prompted to create a GitHub API Token. Click on the provided URL and name it “Jenkins X”. Copy and paste the token’s value back into your console.

Grab a coffee, an adult beverage, or do some pushups while your install finishes. It can take several minutes.

The next step will be to copy the API token from Jenkins to your console. Follow the provided instructions in your console.

When you’re finished with that, run jx console and click on the link to log in to your Jenkins instance. Click on Administration and upgrade Jenkins, as well as all its plugins (Plugin Manager > scroll to the bottom and select all). If you fail to perform this step, you won’t be able to navigate from your GitHub pull request to your Jenkins X CI process for it.

Create a Spring Boot App

When I first started using Jenkins X, I tried to import an existing project. Even though my app used Spring Boot, it didn’t have a pom.xml in the root directory, so Jenkins X thought it was a Node.js app. For this reason, I suggest creating a blank Spring Boot app first to confirm Jenkins X is set up correctly.

Create a bare-bones Spring Boot app from Cloud Shell:

jx create spring -d web -d actuator


This command uses Spring Initializr, so you’ll be prompted with a few choices. Below are the answers I used:

Question | Answer
---------|-------
Language | java
Group | com.okta.developer
Artifact | okta-spring-jx-example

TIP: Picking a short name for your artifact will save you pain. Jenkins X has a 53-character limit for release names, and oktadeveloper/okta-spring-boot-jenkinsx-example exceeds it by two characters.

Select all the defaults for the git user name, initializing git, and the commit message. You can select an organization to use if you don’t want to use your personal account. Run the following command to watch the CI/CD pipeline of your app.

jx get activity -f okta-spring-jx-example -w


Run jx console, click the resulting link, and navigate to your project if you’d like a more visually rich view.

This process will perform a few tasks:

  1. Create a release for your project.
  2. Create a pull request for your staging environment project.
  3. Auto-deploy it to the staging environment so you can see it in action:

Merge status checks all passed so the promotion worked!
Application is available at: http://okta-spring-jx-example.jx-staging.35.230.106.169.nip.io


NOTE: Since Spring Boot doesn’t provide a welcome page by default, you will get a 404 when you open the URL above.
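Since the app was generated with the actuator dependency, you can confirm it’s actually running by hitting the health endpoint instead (substitute your own staging URL):

curl http://okta-spring-jx-example.jx-staging.35.230.106.169.nip.io/actuator/health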

Deploy Your Spring Boot App to Production with Jenkins X

By default, Jenkins X will only auto-deploy to staging. You can manually promote from staging to production using:

jx promote okta-spring-jx-example --version 0.0.1 --env production


You can change your production environment to use auto-deploy using [jx edit environment](https://jenkins-x.io/commands/jx_edit_environment/ "jx edit environment").
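For example, something like the following switches production to automatic promotion. The exact flags vary between jx versions, so treat this as a sketch and check jx edit environment --help:

jx edit environment production --promotion=Auto -b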

Now that you know how to use Jenkins X with a bare-bones Spring Boot app, let’s see how to make it work with a more real-world example.

Secure Your Spring Boot App and Add an Angular PWA

Over the last several months, I’ve written a series of blog posts about building a PWA (progressive web app) with Ionic/Angular and Spring Boot.

This is the final blog post in the series. I believe this is an excellent example of a real-world app because it has numerous unit and integration tests, including end-to-end tests with Protractor. Let’s see how to automate its path to production with Jenkins X and Kubernetes!

Clone the Spring Boot project you just created from GitHub (make sure to change {yourUsername} in the URL):

git clone https://github.com/{yourUsername}/okta-spring-jx-example.git okta-jenkinsx


In an adjacent directory, clone the project that packages Spring Boot + Angular as a single artifact:

git clone https://github.com/oktadeveloper/okta-spring-boot-angular-auth-code-flow-example.git spring-boot-angular


In a terminal, navigate to okta-jenkinsx and remove the files that are no longer necessary:

cd okta-jenkinsx
rm -rf .mvn src mvnw* pom.xml


The result should be a directory structure with the following files:

$ tree .
.
├── charts
│   ├── okta-spring-jx-example
│   │   ├── Chart.yaml
│   │   ├── Makefile
│   │   ├── README.md
│   │   ├── templates
│   │   │   ├── deployment.yaml
│   │   │   ├── _helpers.tpl
│   │   │   ├── NOTES.txt
│   │   │   └── service.yaml
│   │   └── values.yaml
│   └── preview
│       ├── Chart.yaml
│       ├── Makefile
│       ├── requirements.yaml
│       └── values.yaml
├── Dockerfile
├── Jenkinsfile
└── skaffold.yaml

4 directories, 15 files


Copy all the files from spring-boot-angular into okta-jenkinsx.

cp -r ../spring-boot-angular/* .


When using Travis CI to test this app, I ran npm install as part of the process. With Jenkins X, it’s easier to do everything with one container (e.g., maven or nodejs), so add an execution to the frontend-maven-plugin (in holdings-api/pom.xml) to run npm install (hint: you need to add the execution with id 'npm install' to the existing pom.xml).

Now is a great time to open the okta-jenkinsx directory as a project in an IDE like IntelliJ IDEA, Eclipse, Netbeans, or VS Code! :)

<plugin>
   <groupId>com.github.eirslett</groupId>
   <artifactId>frontend-maven-plugin</artifactId>
   <version>${frontend-maven-plugin.version}</version>
   <configuration>
       <workingDirectory>../crypto-pwa</workingDirectory>
   </configuration>
   <executions>
       <execution>
           <id>install node and npm</id>
           <goals>
               <goal>install-node-and-npm</goal>
           </goals>
           <configuration>
               <nodeVersion>${node.version}</nodeVersion>
           </configuration>
       </execution>
       <execution>
           <id>npm install</id>
           <goals>
               <goal>npm</goal>
           </goals>
           <phase>generate-resources</phase>
           <configuration>
               <arguments>install --unsafe-perm</arguments>
           </configuration>
       </execution>
       ...
   </executions>
</plugin>


NOTE: The --unsafe-perm flag is necessary because Jenkins X runs the build as a root user. I figured out this workaround from node-sass’s troubleshooting instructions.

Add Actuator and Turn Off HTTPS

Jenkins X relies on Spring Boot’s Actuator for health checks. This means that if you don’t include it in your project (or have /actuator/health protected), Jenkins X will report that your app failed to start up.
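Under the hood, the Helm chart Jenkins X generated for your app points its Kubernetes liveness and readiness probes at this endpoint. A sketch of what that probe configuration looks like in charts/okta-spring-jx-example/templates/deployment.yaml (the exact values in your generated chart may differ):

livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 60
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080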

Add the Actuator starter as a dependency to holdings-api/pom.xml:

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>


You’ll also need to allow access to its health check endpoint. Jenkins X will deploy your app behind an Nginx server, so you’ll want to turn off forcing HTTPS as well, or you won’t be able to reach your app. Modify holdings-api/src/main/java/.../SecurityConfiguration.java to allow /actuator/health and to remove requiresSecure().

public class SecurityConfiguration extends WebSecurityConfigurerAdapter {

   @Override
   public void configure(WebSecurity web) throws Exception {
       web.ignoring().antMatchers("/**/*.{js,html,css}");
   }

   @Override
   protected void configure(HttpSecurity http) throws Exception {
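       // no requiresSecure() here: Nginx terminates TLS in front of the app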
       http
               .csrf().csrfTokenRepository(CookieCsrfTokenRepository.withHttpOnlyFalse())
           .and()
               .authorizeRequests()
               .antMatchers("/", "/home", "/api/user", "/actuator/health").permitAll()
               .anyRequest().authenticated();
   }
}


Adjust Paths in Dockerfile and Jenkinsfile

Since this project builds in a sub-directory rather than the root directory, update ./Dockerfile to look in holdings-api for files.

FROM openjdk:8-jdk-slim
ENV PORT 8080
ENV CLASSPATH /opt/lib
EXPOSE 8080

# copy pom.xml and wildcards to avoid this command failing if there's no target/lib directory
COPY holdings-api/pom.xml holdings-api/target/lib* /opt/lib/

# NOTE we assume there's only 1 jar in the target dir
# but at least this means we don't have to guess the name
# we could do with a better way to know the name - or to always create an app.jar or something
COPY holdings-api/target/*.jar /opt/app.jar
WORKDIR /opt
CMD ["java", "-jar", "app.jar"]


You’ll also need to update Jenkinsfile so it runs any mvn commands in the holdings-api directory. Add the -Pprod profile too. For example:

// in the 'CI Build and push snapshot' stage
steps {
 container('maven') {
   dir ('./holdings-api') {
     sh "mvn versions:set -DnewVersion=$PREVIEW_VERSION"
     sh "mvn install -Pprod"
   }
 }
 ...
}
// in the 'Build Release' stage
dir ('./holdings-api') {
  sh "mvn versions:set -DnewVersion=\$(cat ../VERSION)"
}
...
dir ('./holdings-api') {
  sh "mvn clean deploy -Pprod"
}


This should be enough to make this app work with Jenkins X. However, you won’t be able to log into it unless you have an Okta account and configure it accordingly.

Why Okta?

In short, we make identity management a lot easier, more secure, and more scalable than what you’re probably used to. Okta is a cloud service that allows developers to create, edit, and securely store user accounts and user account data, and connect them with one or multiple applications. Our API enables you to authenticate and authorize your users, store data about them, perform password-based and social login, and secure your application with multi-factor authentication, among other things.

Are you sold? Register for a forever-free developer account, and when you’ve finished, come on back so we can learn more about CI/CD with Spring Boot and Jenkins X!

Create a Web Application in Okta for Your Spring Boot App

After you’ve completed the setup process, log in to your account and navigate to Applications > Add Application. Click Web and Next. On the next page, name the application (e.g., “Jenkins X”) and use http://localhost:8080/login as a Login redirect URI, then click Done. To add http://localhost:8080 as a Logout redirect URI, you will have to click Done, then Edit.

Open holdings-api/src/main/resources/application.yml and paste the values from your org/app into it.

okta:
 client:
   orgUrl: https://{yourOktaDomain}
   token: XXX
security:
   oauth2:
     client:
       access-token-uri: https://{yourOktaDomain}/oauth2/default/v1/token
       user-authorization-uri: https://{yourOktaDomain}/oauth2/default/v1/authorize
       client-id: {clientId}
       client-secret: {clientSecret}
     resource:
       user-info-uri: https://{yourOktaDomain}/oauth2/default/v1/userinfo


You’ll notice the token value is XXX. This is because I prefer to read it from an environment variable rather than being checked into source control. You’ll likely want to do this for your client secret as well, but I’m only doing one property for brevity. To create an API token:

  1. Log in to your Okta dashboard and navigate to API > Tokens.
  2. Click Create Token and give it a name (e.g., “Jenkins X”).
  3. Copy the token’s value and expose it to your app as an OKTA_CLIENT_TOKEN environment variable.
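Thanks to Spring Boot’s relaxed binding, an OKTA_CLIENT_TOKEN environment variable maps onto the okta.client.token property automatically; you can also make the wiring explicit with a placeholder. A minimal sketch of the relevant application.yml fragment:

okta:
 client:
   orgUrl: https://{yourOktaDomain}
   token: ${OKTA_CLIENT_TOKEN}

Locally, export OKTA_CLIENT_TOKEN before starting the app; in Jenkins X, the credentials feature described below supplies it.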

You’ll need to add a holdings attribute to your organization’s user profiles to store your cryptocurrency holdings in Okta. Navigate to Users > Profile Editor. Click on Profile for the first profile in the table. You can identify it by its Okta logo. Click Add Attribute and create a string attribute with Holdings as its display name and holdings as its variable name.

After performing these steps, you should be able to navigate to [http://localhost:8080](http://localhost:8080 "http://localhost:8080") and log in after running the following commands:

cd holdings-api
./mvnw -Pprod package
java -jar target/*.jar


Storing Secrets in Jenkins X

Storing environment variables locally is pretty straightforward. But how do you do it in Jenkins X? Look no further than its credentials feature. Here’s how to use it:

  1. Run jx console to get your Jenkins URL, then log in and navigate to Credentials > System > Global credentials (unrestricted) > Add Credentials.
  2. Select Secret text as the kind and enter OKTA_CLIENT_TOKEN for the ID.
  3. Paste the value of your Okta API token into the Secret field and save.

While you’re in there, add a few more secrets: OKTA_APP_ID, E2E_USERNAME, and E2E_PASSWORD. The first is the ID of the Jenkins X OIDC app you created. You can get its value by navigating to your app on Okta and copying the value from the URL. The E2E_* secrets should be credentials you want to use to run end-to-end (Protractor) tests. You might want to create a new user for this.

You can access these values in your Jenkinsfile by adding them to the environment section near the top.

environment {
  ORG               = 'mraible'
  APP_NAME          = 'okta-spring-jx-example'
  CHARTMUSEUM_CREDS = credentials('jenkins-x-chartmuseum')
  OKTA_CLIENT_TOKEN = credentials('OKTA_CLIENT_TOKEN')
  OKTA_APP_ID       = credentials('OKTA_APP_ID')
  E2E_USERNAME      = credentials('E2E_USERNAME')
  E2E_PASSWORD      = credentials('E2E_PASSWORD')
}


Transferring Environment Variables to Docker Containers

To transfer the OKTA_CLIENT_TOKEN environment variable to the Docker container, look for:

sh "make preview"


And change it to:

sh "make OKTA_CLIENT_TOKEN=\$OKTA_CLIENT_TOKEN preview"


At this point, you can create a branch, commit your changes, and verify everything works in Jenkins X.

cd ..
git checkout -b add-secure-app
git add .
git commit -m "Add Bootiful PWA"
git push origin add-secure-app


Open your browser, navigate to your repository on GitHub, and create a pull request.

If the tests pass for your pull request, you should see some greenery and a comment from Jenkins X that your app is available in a preview environment.

If you click on the here link and try to log in, you’ll likely get an error from Okta that the redirect URI hasn’t been whitelisted.

Automate Adding Redirect URIs in Okta

When you create apps in Okta and run them locally, it’s easy to know what the redirect URIs for your app will be. For this particular app, they’ll be [http://localhost:8080/login](http://localhost:8080/login "http://localhost:8080/login") for login, and [http://localhost:8080](http://localhost:8080 "http://localhost:8080") for logout. When you go to production, the URLs are generally well-known as well. However, with Jenkins X, the URLs are dynamic and created on-the-fly based on your pull request number.

To make this work with Okta, you can create a Java class that talks to the Okta API and dynamically adds URIs. Create holdings-api/src/test/java/.../cli/AppRedirectUriManager.java and populate it with the following code.

package com.okta.developer.cli;

import com.okta.sdk.client.Client;
import com.okta.sdk.lang.Collections;
import com.okta.sdk.resource.application.OpenIdConnectApplication;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

@SpringBootApplication
public class AppRedirectUriManager implements ApplicationRunner {
   private static final Logger log = LoggerFactory.getLogger(AppRedirectUriManager.class);

   private final Client client;

   @Value("${appId}")
   private String appId;

   @Value("${redirectUri}")
   private String redirectUri;

   @Value("${operation:add}")
   private String operation;

   public AppRedirectUriManager(Client client) {
       this.client = client;
   }

   public static void main(String[] args) {
       SpringApplication.run(AppRedirectUriManager.class, args);
   }

   @Override
   public void run(ApplicationArguments args) {
       log.info("Adjusting Okta settings: {appId: {}, redirectUri: {}, operation: {}}", appId, redirectUri, operation);
       OpenIdConnectApplication app = (OpenIdConnectApplication) client.getApplication(appId);

       String loginRedirectUri = redirectUri + "/login";

       // update redirect URIs
       List<String> redirectUris = app.getSettings().getOAuthClient().getRedirectUris();
       // use a set so values are unique
       Set<String> updatedRedirectUris = new LinkedHashSet<>(redirectUris);
       if (operation.equalsIgnoreCase("add")) {
           updatedRedirectUris.add(loginRedirectUri);
       } else if (operation.equalsIgnoreCase("remove")) {
           updatedRedirectUris.remove(loginRedirectUri);
       }

       // todo: update logout redirect URIs with redirectUri (not currently available in Java SDK)
       app.getSettings().getOAuthClient().setRedirectUris(Collections.toList(updatedRedirectUris));
       app.update();
       System.exit(0);
   }
}


This class uses Spring Boot’s CLI (command-line interface) support, which makes it possible to invoke it using the Exec Maven Plugin. To add support for running it from Maven, make the following modifications in holdings-api/pom.xml.


<properties>
    ...
   <exec-maven-plugin.version>1.6.0</exec-maven-plugin.version>
   <appId>default</appId>
   <redirectUri>override-me</redirectUri>
</properties>

<!-- dependencies -->

<build>
   <defaultGoal>spring-boot:run</defaultGoal>
   <finalName>holdings-app-${project.version}</finalName>
   <plugins>
       <!-- existing plugins -->
       <plugin>
           <groupId>org.codehaus.mojo</groupId>
           <artifactId>exec-maven-plugin</artifactId>
           <version>${exec-maven-plugin.version}</version>
           <executions>
               <execution>
                   <id>add-redirect</id>
                   <goals>
                       <goal>java</goal>
                   </goals>
               </execution>
           </executions>
           <configuration>
               <mainClass>com.okta.developer.cli.AppRedirectUriManager</mainClass>
               <classpathScope>test</classpathScope>
               <arguments>
                   <argument>appId ${appId} redirectUri ${redirectUri}</argument>
               </arguments>
           </configuration>
       </plugin>
   </plugins>
</build>


Then update Jenkinsfile to add a block that runs mvn exec:java after it builds the image.

dir ('./charts/preview') {
  container('maven') {
    sh "make OKTA_CLIENT_TOKEN=\$OKTA_CLIENT_TOKEN preview"
    sh "jx preview --app $APP_NAME --dir ../.."
  }
}

// Add redirect URI in Okta
dir ('./holdings-api') {
  container('maven') {
    sh '''
      yum install -y jq
      previewURL=$(jx get preview -o json|jq  -r ".items[].spec | select (.previewGitInfo.name==\\"$CHANGE_ID\\") | .previewGitInfo.applicationURL")
      mvn exec:java@add-redirect -DappId=$OKTA_APP_ID -DredirectUri=$previewURL
    '''
  }
}


Commit and push your changes, and your app should be updated with a redirect URI for [http://{yourPreviewURL}/login](http://{yourPreviewURL}/login "http://{yourPreviewURL}/login"). You’ll need to manually add a logout redirect URI for [http://{yourPreviewURL}](http://{yourPreviewURL} "http://{yourPreviewURL}") since this is not currently supported by Okta’s Java SDK.

To promote your passing pull request to a staging environment, merge it, and the master branch will be pushed to staging. Unfortunately, you won’t be able to log in. That’s because no process registers the staging site’s redirect URIs with your Okta app. If you add the URIs manually, everything should work.

Running Protractor Tests in Jenkins X

Figuring out how to run end-to-end tests in Jenkins X was the hardest part for me. I started by adding a new Maven profile that would allow me to run the tests with Maven, rather than npm.

NOTE: For this profile to work, you will need to add [http://localhost:8000/login](http://localhost:8000/login "http://localhost:8000/login") as a login redirect URI to your app, and [http://localhost:8000](http://localhost:8000 "http://localhost:8000") as a logout redirect URI.

<profile>
   <id>e2e</id>
   <properties>
       <!-- Hard-code port instead of using build-helper-maven-plugin. -->
       <!-- This way, you don't need to add a redirectUri to Okta app. -->
       <http.port>8000</http.port>
   </properties>
   <build>
       <plugins>
           <plugin>
               <groupId>org.springframework.boot</groupId>
               <artifactId>spring-boot-maven-plugin</artifactId>
               <executions>
                   <execution>
                       <id>pre-integration-test</id>
                       <goals>
                           <goal>start</goal>
                       </goals>
                       <configuration>
                           <arguments>
                               <argument>--server.port=${http.port}</argument>
                           </arguments>
                       </configuration>
                   </execution>
                   <execution>
                       <id>post-integration-test</id>
                       <goals>
                           <goal>stop</goal>
                       </goals>
                   </execution>
               </executions>
           </plugin>
           <plugin>
               <groupId>com.github.eirslett</groupId>
               <artifactId>frontend-maven-plugin</artifactId>
               <version>${frontend-maven-plugin.version}</version>
               <configuration>
                   <workingDirectory>../crypto-pwa</workingDirectory>
               </configuration>
               <executions>
                   <execution>
                       <id>webdriver update</id>
                       <goals>
                           <goal>npm</goal>
                       </goals>
                       <phase>pre-integration-test</phase>
                       <configuration>
                           <arguments>run e2e-update</arguments>
                       </configuration>
                   </execution>
                   <execution>
                       <id>ionic e2e</id>
                       <goals>
                           <goal>npm</goal>
                       </goals>
                       <phase>integration-test</phase>
                       <configuration>
                           <environmentVariables>
                               <PORT>${http.port}</PORT>
                               <CI>true</CI>
                           </environmentVariables>
                           <arguments>run e2e-test</arguments>
                       </configuration>
                   </execution>
               </executions>
           </plugin>
       </plugins>
   </build>
</profile>


TIP: You might notice that I had to specify two different executions for e2e-update and e2e-test. I found that running npm run e2e doesn’t work with the frontend-maven-plugin because it just calls other npm run commands. It seems you need to invoke a binary directly when using the frontend-maven-plugin.

Instead of using a TRAVIS environment variable, you’ll notice I’m using a CI one here. This change requires updating crypto-pwa/test/protractor.conf.js to match.

baseUrl: (process.env.CI) ? 'http://localhost:' + process.env.PORT : 'http://localhost:8100',


Make these changes, and you should be able to run ./mvnw verify -Pprod,e2e to run your end-to-end tests locally. Note that you’ll need to have E2E_USERNAME and E2E_PASSWORD defined as environment variables.
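For example (the credentials below are placeholders for a real Okta user):

export E2E_USERNAME=e2e-user@example.com
export E2E_PASSWORD=your-password
./mvnw verify -Pprod,e2e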

When I first tried this in Jenkins X, I discovered that the jenkins-maven agent didn’t have Chrome installed. I found it difficult to install and discovered that [jenkins-nodejs](https://github.com/jenkins-x/builder-nodejs/blob/master/Dockerfile#L4 "jenkins-nodejs") has Chrome and Xvfb pre-installed. When I first tried it, I encountered the following error:

[21:51:08] E/launcher - unknown error: DevToolsActivePort file doesn't exist


This error is caused by a Chrome on Linux issue. I figured out the workaround is to specify --disable-dev-shm-usage in chromeOptions for Protractor. I also added some additional flags that seem to be recommended. I particularly like --headless when running locally, so a browser doesn’t pop up and get in my way. If I want to see the process happening in real-time, I can quickly remove the option.

If you’d like to see your project’s Protractor tests running on Jenkins X, you’ll need to modify crypto-pwa/test/protractor.conf.js to specify the following chromeOptions:

capabilities: {
  'browserName': 'chrome',
  'chromeOptions': {
    'args': ['--headless', '--disable-gpu', '--no-sandbox', '--disable-extensions', '--disable-dev-shm-usage']
  }
},


Then add a new Run e2e tests stage to Jenkinsfile that sits between the “CI Build” and “Build Release” stages. If it helps, you can see the final Jenkinsfile.

stage('Run e2e tests') {
 agent {
   label "jenkins-nodejs"
 }
 steps {
   container('nodejs') {
     sh '''
       yum install -y jq
       previewURL=$(jx get preview -o json|jq  -r ".items[].spec | select (.previewGitInfo.name==\\"$CHANGE_ID\\") | .previewGitInfo.applicationURL")
       cd crypto-pwa && npm install --unsafe-perm && npm run e2e-update
       Xvfb :99 &
       sleep 60s
       DISPLAY=:99 npm run e2e-test -- --baseUrl=$previewURL
     '''
   }
 }
}


After making all these changes, create a new branch, check in your changes, and create a pull request on GitHub.

git checkout -b enable-e2e-tests
git add .
git commit -m "Add stage for end-to-end tests"
git push origin enable-e2e-tests


I did have to make a few additional adjustments to get all the Protractor tests to pass.

The tests will likely fail on the first run because the logout redirect URI is not configured for the new preview environment. Update your Okta app’s logout redirect URIs to match your PR’s preview environment URI, replay the pull request tests, and everything should pass!

You can find the source code for the completed application in this example on GitHub.

Thanks for reading ❤

Build Microservice Architecture With Kubernetes, Spring Boot, and Docker


In this article, we’ll learn how to create a Spring Boot microservices project and quickly run it with Kubernetes and Docker.

The topics covered in this article are:

  • Using Spring Boot 2.0 in cloud-native development

  • Providing service discovery for all microservices using a Spring Cloud Kubernetes project

  • Injecting configuration settings into application pods using Kubernetes Config Maps and Secrets

  • Building application images using Docker and deploying them on Kubernetes using YAML configuration files

  • Using Spring Cloud Kubernetes together with a Zuul proxy to expose a single Swagger API documentation for all microservices

Spring Cloud and Kubernetes can be seen as competing solutions when you build a microservices environment. Components provided by Spring Cloud, like Eureka, Spring Cloud Config, or Zuul, may be replaced by built-in Kubernetes objects like services, config maps, secrets, or ingresses. But even if you decide to use Kubernetes components instead of Spring Cloud, you can take advantage of some interesting features provided throughout the whole Spring Cloud project.

One really interesting project that helps us in development is Spring Cloud Kubernetes. Although it is still in the incubation stage, it is definitely worth dedicating some time to it. It integrates Spring Cloud with Kubernetes. I'll show you how to use an implementation of the discovery client, inter-service communication with the Ribbon client, and Zipkin discovery using Spring Cloud Kubernetes.

Before we proceed to the source code, let's take a look at the following diagram. It illustrates the architecture of our sample system. It is quite similar to the architecture presented in the mentioned article about microservices on Spring Cloud. There are three independent applications (employee-service, department-service, organization-service), which communicate with each other through a REST API. These Spring Boot microservices use some built-in mechanisms provided by Kubernetes: config maps and secrets for distributed configuration, etcd for service discovery, and ingresses for the API gateway.

Let's proceed to the implementation. Currently, the newest stable version of Spring Cloud is Finchley.RELEASE. This version of spring-cloud-dependencies should be declared as a BOM for dependency management.

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>Finchley.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Spring Cloud Kubernetes is not released under the Spring Cloud Release Train, so we need to define its version explicitly. Because we use Spring Boot 2.0, we have to include the newest SNAPSHOT version of the spring-cloud-kubernetes artifacts, which is 0.3.0.BUILD-SNAPSHOT.

The source code of sample applications presented in this article is available on GitHub in this repository.

Pre-Requirements

In order to be able to deploy and test our sample microservices, we need to prepare a development environment. We can do that with the following steps:

  • You need at least a single-node cluster instance of Kubernetes (Minikube) or OpenShift (Minishift) running on your local machine. You should start it and expose the embedded Docker client provided by both of them. The detailed instructions for Minishift may be found in my Quick guide to deploying Java apps on OpenShift. You can also use that description to run Minikube — just replace the word "minishift" with "minikube." In fact, it does not matter whether you choose Kubernetes or OpenShift — the rest of this tutorial applies to both of them.

  • Spring Cloud Kubernetes requires access to the Kubernetes API in order to be able to retrieve a list of addresses for pods running for a single service. If you use Kubernetes, you should just execute the following command:

$ kubectl create clusterrolebinding admin --clusterrole=cluster-admin --serviceaccount=default:default

If you deploy your microservices on Minishift, you should first enable admin-user add-on, then log in as a cluster admin and grant the required permissions.

$ minishift addons enable admin-user
$ oc login -u system:admin
$ oc policy add-role-to-user cluster-reader system:serviceaccount:myproject:default

You will also need a MongoDB instance running on the cluster. Below is the manifest with the Deployment and Service used to run it; the environment variables it injects are taken from the config map and secret defined in the next section.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo:latest
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mongodb
              key: database-name
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-user
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  ports:
  - port: 27017
    protocol: TCP
  selector:
    app: mongodb

1. Inject the Configuration With Config Maps and Secrets

When using Spring Cloud, the most obvious choice for realizing distributed configuration in your system is Spring Cloud Config. With Kubernetes, you can use a ConfigMap instead. It holds key-value pairs of configuration data that can be consumed in pods, and it is meant for storing and sharing non-sensitive, unencrypted configuration information. For sensitive information in your clusters, you must use Secrets. The use of both of these Kubernetes objects can be nicely demonstrated with the example of MongoDB connection settings. Inside a Spring Boot application, we can easily inject these settings using environment variables. Here's a fragment of the application.yml file with the URI configuration.

spring:
  data:
    mongodb:
      uri: mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongodb/${MONGO_DATABASE}

While username and password are sensitive fields, a database name is not, so we can place it inside the config map.

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb
data:
  database-name: microservices

Of course, username and password are defined as secrets.

apiVersion: v1
kind: Secret
metadata:
  name: mongodb
type: Opaque
data:
  database-password: MTIzNDU2
  database-user: cGlvdHI=
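The values under a Secret's data field must be base64-encoded. You can produce or verify them from any shell, for example:

$ echo -n 'piotr' | base64
cGlvdHI=
$ echo 'MTIzNDU2' | base64 --decode
123456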

To apply the configuration to the Kubernetes cluster, we run the following commands.

$ kubectl apply -f kubernetes/mongodb-configmap.yaml
$ kubectl apply -f kubernetes/mongodb-secret.yaml

After that, we should inject the configuration properties into the application's pods. When defining the container configuration inside the Deployment YAML file, we have to include references to environment variables and secrets, as shown below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: employee
  labels:
    app: employee
spec:
  replicas: 1
  selector:
    matchLabels:
      app: employee
  template:
    metadata:
      labels:
        app: employee
    spec:
      containers:
      - name: employee
        image: piomin/employee:1.0
        ports:
        - containerPort: 8080
        env:
        - name: MONGO_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mongodb
              key: database-name
        - name: MONGO_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-user
        - name: MONGO_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-password

2. Building Service Discovery With Kubernetes

We usually run microservices on Kubernetes using Docker containers. One or more containers are grouped into pods, which are the smallest deployable units created and managed in Kubernetes. A good practice is to run only one container inside a single pod. If you would like to scale up your microservice, you just have to increase the number of running pods. All running pods that belong to a single microservice are logically grouped by a Kubernetes Service. This service may be visible outside the cluster and is able to load balance incoming requests between all running pods. The following service definition groups all pods labeled with the field app equal to employee.

apiVersion: v1
kind: Service
metadata:
  name: employee
  labels:
    app: employee
spec:
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: employee

A Service can be used to access the application outside the Kubernetes cluster or for inter-service communication inside a cluster. However, the communication between microservices can be implemented more comfortably with Spring Cloud Kubernetes. First, we need to include the following dependency in the project pom.xml.

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes</artifactId>
    <version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>

Then we should enable the discovery client for an application, the same as we have always done for discovery in Spring Cloud Netflix Eureka. This allows you to query Kubernetes endpoints (services) by name. This discovery feature is also used by Spring Cloud Kubernetes Ribbon or Zipkin projects to fetch, respectively, the list of the pods defined for a microservice to be load balanced or the Zipkin servers available to send the traces or spans.

@SpringBootApplication
@EnableDiscoveryClient
@EnableMongoRepositories
@EnableSwagger2
public class EmployeeApplication {
 public static void main(String[] args) {
  SpringApplication.run(EmployeeApplication.class, args);
 }
 // ...
}
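
To see what the discovery client returns at runtime, you can inject it and list the endpoints registered behind a Kubernetes service by name. A minimal sketch; this runner is illustrative and not part of the sample project:

import org.springframework.boot.CommandLineRunner;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.context.annotation.Bean;

// add inside EmployeeApplication (or any @Configuration class)
@Bean
CommandLineRunner printEmployeeInstances(DiscoveryClient discoveryClient) {
 return args -> {
  // every ServiceInstance corresponds to a pod endpoint behind the "employee" service
  for (ServiceInstance instance : discoveryClient.getInstances("employee")) {
   System.out.println(instance.getHost() + ":" + instance.getPort());
  }
 };
}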

The last important thing in this section is to guarantee that the Spring application name will be exactly the same as the Kubernetes service name for the application. For the application employee-service, it is employee.

spring:
  application:
    name: employee

3. Building Microservices Using Docker and Deploying on Kubernetes

There is nothing unusual in our sample microservices. We have included some standard Spring dependencies for building REST-based microservices, integrating with MongoDB, and generating API documentation using Swagger2.

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>2.9.2</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>

In order to integrate with MongoDB, we should create an interface that extends standard Spring Data CrudRepository.

public interface EmployeeRepository extends CrudRepository<Employee, String> {
 List<Employee> findByDepartmentId(Long departmentId);
 List<Employee> findByOrganizationId(Long organizationId);
}

The entity class should be annotated with Mongo @Document and a primary key field with @Id.

@Document(collection = "employee")
public class Employee {
 @Id
 private String id;
 private Long organizationId;
 private Long departmentId;
 private String name;
 private int age;
 private String position;
 // ...
}

The repository bean has been injected into the controller class. Here's the full implementation of our REST API inside employee-service.

@RestController
public class EmployeeController {
 private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);
 @Autowired
 EmployeeRepository repository;
 @PostMapping("/")
 public Employee add(@RequestBody Employee employee) {
  LOGGER.info("Employee add: {}", employee);
  return repository.save(employee);
 }
 @GetMapping("/{id}")
 public Employee findById(@PathVariable("id") String id) {
  LOGGER.info("Employee find: id={}", id);
  return repository.findById(id).get();
 }
 @GetMapping("/")
 public Iterable<Employee> findAll() {
  LOGGER.info("Employee find");
  return repository.findAll();
 }
 @GetMapping("/department/{departmentId}")
 public List<Employee> findByDepartment(@PathVariable("departmentId") Long departmentId) {
  LOGGER.info("Employee find: departmentId={}", departmentId);
  return repository.findByDepartmentId(departmentId);
 }
 @GetMapping("/organization/{organizationId}")
 public List<Employee> findByOrganization(@PathVariable("organizationId") Long organizationId) {
  LOGGER.info("Employee find: organizationId={}", organizationId);
  return repository.findByOrganizationId(organizationId);
 }
}

In order to run our microservices on Kubernetes, we should first build the whole Maven project with the mvn clean install command. Each microservice has a Dockerfile placed in the root directory. Here's the Dockerfile definition for employee-service.

FROM openjdk:8-jre-alpine
ENV APP_FILE employee-service-1.0-SNAPSHOT.jar
ENV APP_HOME /usr/apps
EXPOSE 8080
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]

Let's build Docker images for all three sample microservices.

$ cd employee-service
$ docker build -t piomin/employee:1.0 .
$ cd department-service
$ docker build -t piomin/department:1.0 .
$ cd organization-service
$ docker build -t piomin/organization:1.0 .

The last step is to deploy the Docker containers with our applications on Kubernetes. To do that, just execute kubectl apply on the YAML configuration files. The sample deployment file for employee-service was shown in step 1. All the required deployment files are available inside the project repository in the kubernetes directory.

$ kubectl apply -f kubernetes\employee-deployment.yaml
$ kubectl apply -f kubernetes\department-deployment.yaml
$ kubectl apply -f kubernetes\organization-deployment.yaml
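
After applying the manifests, it's worth confirming that the deployments, pods, and services came up before moving on:

$ kubectl get deployments,pods,services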

4. Communication Between Microservices With Spring Cloud Kubernetes Ribbon

All the microservices are now deployed on Kubernetes, so it's worth discussing some aspects related to inter-service communication. The application employee-service, in contrast to the other microservices, does not invoke any other microservices. Let's take a look at the other microservices, which call the API exposed by employee-service and communicate with each other (organization-service calls the department-service API).

First, we need to include some additional dependencies in the project. We use Spring Cloud Ribbon and OpenFeign. Alternatively, you can also use Spring's @LoadBalanced RestTemplate.

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId>
    <version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>

Here's the main class of department-service. It enables Feign client using the @EnableFeignClients annotation. It works the same as with discovery based on Spring Cloud Netflix Eureka. OpenFeign uses Ribbon for client-side load balancing. Spring Cloud Kubernetes Ribbon provides some beans that force Ribbon to communicate with the Kubernetes API through Fabric8 KubernetesClient.

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
@EnableMongoRepositories
@EnableSwagger2
public class DepartmentApplication {
 public static void main(String[] args) {
  SpringApplication.run(DepartmentApplication.class, args);
 }
 // ...
}

Here's the implementation of Feign client for calling the method exposed by employee-service.

@FeignClient(name = "employee")
public interface EmployeeClient {
 @GetMapping("/department/{departmentId}")
 List findByDepartment(@PathVariable("departmentId") String departmentId);
}

Finally, we have to inject the Feign client bean into the REST controller. Now we may call the methods defined inside EmployeeClient, which is equivalent to calling REST endpoints.

@RestController
public class DepartmentController {
 private static final Logger LOGGER = LoggerFactory.getLogger(DepartmentController.class);
 @Autowired
 DepartmentRepository repository;
 @Autowired
 EmployeeClient employeeClient;
 // ...
 @GetMapping("/organization/{organizationId}/with-employees")
 public List<Department> findByOrganizationWithEmployees(@PathVariable("organizationId") Long organizationId) {
  LOGGER.info("Department find: organizationId={}", organizationId);
  List<Department> departments = repository.findByOrganizationId(organizationId);
  departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
  return departments;
 }
}

5. Building API Gateway Using Kubernetes Ingress

Ingress is a collection of rules that allow incoming requests to reach the downstream services. In our microservices architecture, ingress is playing the role of an API gateway. To create it, we should first prepare a YAML description file. The descriptor file should contain the hostname under which the gateway will be available and mapping rules to the downstream services.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
  - host: microservices.info
    http:
      paths:
      - path: /employee
        backend:
          serviceName: employee
          servicePort: 8080
      - path: /department
        backend:
          serviceName: department
          servicePort: 8080
      - path: /organization
        backend:
          serviceName: organization
          servicePort: 8080

You have to execute the following command to apply the configuration above to the Kubernetes cluster.

$ kubectl apply -f kubernetes\ingress.yaml

To test this solution locally, we have to add a mapping between the IP address and the hostname set in the ingress definition to the hosts file, as shown below. After that, we can test services through the ingress using the defined hostname, like this: http://microservices.info/employee.

192.168.99.100 microservices.info
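
With that hosts entry in place, you can smoke test the routing from the command line. Because of the rewrite-target annotation, the /employee prefix is rewritten to /, so this request lands on the employee findAll endpoint:

$ curl http://microservices.info/employee/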

You can check the details of the created ingress just by executing the command kubectl describe ing gateway-ingress.

6. Enabling API Specification on the Gateway Using Swagger2

What if we would like to expose a single Swagger documentation for all microservices deployed on Kubernetes? Well, here things are getting complicated... We can run a container with Swagger UI, and map all paths exposed by the ingress manually, but it is not a good solution...

In that case, we can use Spring Cloud Kubernetes Ribbon one more time, this time together with Spring Cloud Netflix Zuul. Zuul will act as a gateway only for serving the Swagger API.
Here's the list of dependencies used in my gateway-service project.

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-zuul</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes</artifactId>
    <version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId>
    <version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
    <version>2.9.2</version>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>2.9.2</version>
</dependency>

The Kubernetes discovery client will detect all services exposed on the cluster, but we would like to display documentation only for our three microservices. That's why I defined the following routes for Zuul.

zuul:
  routes:
    department:
      path: /department/**
    employee:
      path: /employee/**
    organization:
      path: /organization/**

Now we can use the ZuulProperties bean to get the routes' addresses from Kubernetes discovery and configure them as Swagger resources, as shown below.

@Configuration
public class GatewayApi {
 @Autowired
 ZuulProperties properties;
 @Primary
 @Bean
 public SwaggerResourcesProvider swaggerResourcesProvider() {
  return () -> {
    List<SwaggerResource> resources = new ArrayList<>();
   properties.getRoutes().values().stream()
   .forEach(route -> resources.add(createResource(route.getId(), "2.0")));
   return resources;
  };
 }
 private SwaggerResource createResource(String location, String version) {
  SwaggerResource swaggerResource = new SwaggerResource();
  swaggerResource.setName(location);
  swaggerResource.setLocation("/" + location + "/v2/api-docs");
  swaggerResource.setSwaggerVersion(version);
  return swaggerResource;
 }
}

The application gateway-service should be deployed on the cluster in the same way as the other applications. You can see the list of running services by executing the command kubectl get svc. The Swagger documentation is then available at http://192.168.99.100:31237/swagger-ui.html.

Thanks for reading!

Originally published by Piotr Mińkowski at dzone.com

Deploying a full-stack App with Spring Boot, MySQL, React on Kubernetes


In this article, you’ll learn how to deploy a Stateful app built with Spring Boot, MySQL, and React on Kubernetes. We’ll use a local minikube cluster to deploy the application.

Introduction

Please make sure that you have kubectl and minikube installed on your system.

It is a full-stack polling app where users can log in, create a poll, and vote for a poll.

To deploy this application, we’ll use a few additional Kubernetes concepts called PersistentVolumes and Secrets. Let’s first get a basic understanding of these concepts before moving on to the hands-on deployment guide.

Kubernetes Persistent Volume

We’ll use Kubernetes Persistent Volumes to deploy MySQL. A PersistentVolume (PV) is a piece of storage in the cluster. It is a resource in the cluster, just like a node. A PersistentVolume’s lifecycle is independent of any Pod’s lifecycle: it preserves data through the restarting, rescheduling, and even deletion of Pods.

PersistentVolumes are consumed by something called a PersistentVolumeClaim (PVC). A PVC is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). PVCs can request specific size and access modes (e.g. read-write or read-only).

Kubernetes Secrets

We’ll make use of Kubernetes Secrets to store the database credentials. A Secret is an object in Kubernetes that lets you store and manage sensitive information such as passwords, tokens, and SSH keys. Secrets are stored in Kubernetes’ backing store, etcd. You can enable encryption at rest to store secrets in encrypted form in etcd.

Deploying MySQL on Kubernetes using PersistentVolume and Secrets

Following is the Kubernetes manifest for the MySQL deployment. I’ve added comments alongside each configuration item to make its usage clear.

apiVersion: v1
kind: PersistentVolume            # Create a PersistentVolume
metadata:
  name: mysql-pv
  labels:
    type: local
spec:
  storageClassName: standard      # Storage class. A PV Claim requesting the same storageClass can be bound to this volume. 
  capacity:
    storage: 250Mi
  accessModes:
    - ReadWriteOnce
  hostPath:                       # hostPath PersistentVolume is used for development and testing. It uses a file/directory on the Node to emulate network-attached storage
    path: "/mnt/data"
  persistentVolumeReclaimPolicy: Retain  # Retain the PersistentVolume even after PersistentVolumeClaim is deleted. The volume is considered “released”. But it is not yet available for another claim because the previous claimant’s data remains on the volume. 
---    
apiVersion: v1
kind: PersistentVolumeClaim        # Create a PersistentVolumeClaim to request a PersistentVolume storage
metadata:                          # Claim name and labels
  name: mysql-pv-claim
  labels:
    app: polling-app
spec:                              # Access mode and resource limits
  storageClassName: standard       # Request a certain storage class
  accessModes:
    - ReadWriteOnce                # ReadWriteOnce means the volume can be mounted as read-write by a single Node
  resources:
    requests:
      storage: 250Mi
---
apiVersion: v1                    # API version
kind: Service                     # Type of kubernetes resource 
metadata:
  name: polling-app-mysql         # Name of the resource
  labels:                         # Labels that will be applied to the resource
    app: polling-app
spec:
  ports:
    - port: 3306
  selector:                       # Selects any Pod with labels `app=polling-app,tier=mysql`
    app: polling-app
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment                    # Type of the kubernetes resource
metadata:
  name: polling-app-mysql           # Name of the deployment
  labels:                           # Labels applied to this deployment 
    app: polling-app
spec:
  selector:
    matchLabels:                    # This deployment applies to the Pods matching the specified labels
      app: polling-app
      tier: mysql
  strategy:
    type: Recreate
  template:                         # Template for the Pods in this deployment
    metadata:
      labels:                       # Labels to be applied to the Pods in this deployment
        app: polling-app
        tier: mysql
    spec:                           # The spec for the containers that will be run inside the Pods in this deployment
      containers:
      - image: mysql:5.6            # The container image
        name: mysql
        env:                        # Environment variables passed to the container 
        - name: MYSQL_ROOT_PASSWORD 
          valueFrom:                # Read environment variables from kubernetes secrets
            secretKeyRef:
              name: mysql-root-pass
              key: password
        - name: MYSQL_DATABASE
          valueFrom:
            secretKeyRef:
              name: mysql-db-url
              key: database
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: mysql-user-pass
              key: username
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-user-pass
              key: password
        ports:
        - containerPort: 3306        # The port that the container exposes       
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage  # This name should match the name specified in `volumes.name`
          mountPath: /var/lib/mysql
      volumes:                       # A PersistentVolume is mounted as a volume to the Pod  
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim

We’re creating four resources in the above manifest file: a PersistentVolume, a PersistentVolumeClaim requesting access to that PersistentVolume, a Service giving the MySQL database a stable endpoint, and a Deployment running and managing the MySQL Pod.

The MySQL container reads the database credentials from environment variables, whose values come from Kubernetes Secrets.

Let’s start a Minikube cluster, create Kubernetes Secrets to store the database credentials, and deploy the MySQL instance:

Starting a Minikube cluster

$ minikube start
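Starting the cluster can take a minute or two. To confirm that kubectl can reach it before proceeding, you can run kubectl cluster-info; you should see output along these lines (the exact address depends on your Minikube setup) -

$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443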

Creating the secrets

You can create Secrets manually from a literal or a file using the kubectl create secret command, or you can generate them from a kustomization.yaml file using Kustomize, as sketched below.
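A minimal generator sketch might look like this (an illustrative kustomization.yaml; the password is a placeholder) -

# kustomization.yaml
secretGenerator:
- name: mysql-user-pass
  literals:
  - username=callicoder
  - password=changeme            # Placeholder; substitute your own value
generatorOptions:
  disableNameSuffixHash: true    # Keep the plain name so the manifests can reference it

You would then apply it with kubectl apply -k . (available in kubectl 1.14+).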

In this article, though, we’ll create the secrets manually:

$ kubectl create secret generic mysql-root-pass --from-literal=password=R00t
secret/mysql-root-pass created

$ kubectl create secret generic mysql-user-pass --from-literal=username=callicoder --from-literal=password=c@llic0der
secret/mysql-user-pass created

$ kubectl create secret generic mysql-db-url --from-literal=database=polls --from-literal=url='jdbc:mysql://polling-app-mysql:3306/polls?useSSL=false&serverTimezone=UTC&useLegacyDatetimeCode=false'
secret/mysql-db-url created

You can get the secrets like this -

$ kubectl get secrets
NAME                         TYPE                                  DATA   AGE
default-token-tkrx5          kubernetes.io/service-account-token   3      3d23h
mysql-db-url                 Opaque                                2      2m32s
mysql-root-pass              Opaque                                1      3m19s
mysql-user-pass              Opaque                                2      3m6s

You can also find more details about a secret like so -

$ kubectl describe secrets mysql-user-pass
Name:         mysql-user-pass
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
username:  10 bytes
password:  10 bytes
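Note that describe only shows the size of each value, never the value itself. If you need to read a value back, you can decode it from the Secret’s base64-encoded data -

$ kubectl get secret mysql-user-pass -o jsonpath='{.data.username}' | base64 --decode
callicoder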

Deploying MySQL

Let’s now deploy MySQL by applying the YAML configuration -

$ kubectl apply -f deployments/mysql-deployment.yaml
service/polling-app-mysql created
persistentvolume/mysql-pv created
persistentvolumeclaim/mysql-pv-claim created
deployment.apps/polling-app-mysql created

That’s it! You can check all the resources created in the cluster using the following commands -

$ kubectl get persistentvolumes
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
mysql-pv   250Mi      RWO            Retain           Bound    default/mysql-pv-claim   standard                30s

$ kubectl get persistentvolumeclaims
NAME             STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim   Bound    mysql-pv   250Mi      RWO            standard       50s

$ kubectl get services
NAME                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes          ClusterIP   10.96.0.1    <none>        443/TCP    5m36s
polling-app-mysql   ClusterIP   None         <none>        3306/TCP   2m57s

$ kubectl get deployments
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
polling-app-mysql   1/1     1            1           3m14s

Logging into the MySQL pod

You can get the MySQL Pod’s name with kubectl get pods and use the kubectl exec command to log in to the Pod.

$ kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
polling-app-mysql-6b94bc9d9f-td6l4   1/1     Running   0          4m23s

$ kubectl exec -it polling-app-mysql-6b94bc9d9f-td6l4 -- /bin/bash
root@polling-app-mysql-6b94bc9d9f-td6l4:/#
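Once inside, you can connect to the database with the mysql client that ships in the mysql:5.6 image, using the credentials from the Secrets we created earlier -

root@polling-app-mysql-6b94bc9d9f-td6l4:/# mysql -u callicoder -p polls
Enter password: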

Deploying the Spring Boot app on Kubernetes

All right! Now that we have the MySQL instance deployed, let’s proceed with the deployment of the Spring Boot app.

Following is the deployment manifest for the Spring Boot app -

---
apiVersion: apps/v1           # API version
kind: Deployment              # Type of kubernetes resource
metadata:
  name: polling-app-server    # Name of the kubernetes resource
  labels:                     # Labels that will be applied to this resource
    app: polling-app-server
spec:
  replicas: 1                 # No. of replicas/pods to run in this deployment
  selector:
    matchLabels:              # The deployment applies to any Pods matching the specified labels
      app: polling-app-server
  template:                   # Template for creating the pods in this deployment
    metadata:
      labels:                 # Labels that will be applied to each Pod in this deployment
        app: polling-app-server
    spec:                     # Spec for the containers that will be run in the Pods
      containers:
      - name: polling-app-server
        image: callicoder/polling-app-server:1.0.0
        imagePullPolicy: IfNotPresent
        ports:
          - name: http
            containerPort: 8080 # The port that the container exposes
        resources:
          limits:
            cpu: 0.2
            memory: "200Mi"
        env:                  # Environment variables supplied to the Pod
        - name: SPRING_DATASOURCE_USERNAME # Name of the environment variable
          valueFrom:          # Get the value of environment variable from kubernetes secrets
            secretKeyRef:
              name: mysql-user-pass
              key: username
        - name: SPRING_DATASOURCE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-user-pass
              key: password
        - name: SPRING_DATASOURCE_URL
          valueFrom:
            secretKeyRef:
              name: mysql-db-url
              key: url
---
apiVersion: v1                # API version
kind: Service                 # Type of the kubernetes resource
metadata:                     
  name: polling-app-server    # Name of the kubernetes resource
  labels:                     # Labels that will be applied to this resource
    app: polling-app-server
spec:                         
  type: NodePort              # The service will be exposed by opening a Port on each node and proxying it. 
  selector:
    app: polling-app-server   # The service exposes Pods with label `app=polling-app-server`
  ports:                      # Forward incoming connections on port 8080 to the target port 8080
  - name: http
    port: 8080
    targetPort: 8080

The above deployment uses the Secrets stored in mysql-user-pass and mysql-db-url that we created in the previous section.
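No application code changes are needed for this to work: Spring Boot’s relaxed binding maps environment variables such as SPRING_DATASOURCE_URL onto the corresponding spring.datasource.* properties. Conceptually, the injected values override defaults like these in application.properties (illustrative defaults, not necessarily the app’s exact file) -

# application.properties (illustrative defaults)
# These values are overridden at runtime by the SPRING_DATASOURCE_*
# environment variables injected from the Kubernetes Secrets.
spring.datasource.url=jdbc:mysql://localhost:3306/polls
spring.datasource.username=root
spring.datasource.password=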

Let’s apply the manifest file to create the resources -

$ kubectl apply -f deployments/polling-app-server.yaml
deployment.apps/polling-app-server created
service/polling-app-server created

You can check the created Pods like this -

$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
polling-app-mysql-6b94bc9d9f-td6l4    1/1     Running   0          21m
polling-app-server-744b47f866-s2bpf   1/1     Running   0          31s

Now, type the following command to get the polling-app-server service URL -

$ minikube service polling-app-server --url
http://192.168.99.100:31550

You can now use the above endpoint to interact with the service -

$ curl http://192.168.99.100:31550
{"timestamp":"2019-07-30T17:55:11.366+0000","status":404,"error":"Not Found","message":"No message available","path":"/"}

Deploying the React app on Kubernetes

Finally, let’s deploy the frontend app using Kubernetes. Here is the deployment manifest -

apiVersion: apps/v1             # API version
kind: Deployment                # Type of kubernetes resource
metadata:
  name: polling-app-client      # Name of the kubernetes resource
spec:
  replicas: 1                   # No of replicas/pods to run
  selector:                     
    matchLabels:                # This deployment applies to Pods matching the specified labels
      app: polling-app-client
  template:                     # Template for creating the Pods in this deployment
    metadata:
      labels:                   # Labels that will be applied to all the Pods in this deployment
        app: polling-app-client
    spec:                       # Spec for the containers that will run inside the Pods
      containers:
      - name: polling-app-client
        image: callicoder/polling-app-client:1.0.0
        imagePullPolicy: IfNotPresent
        ports:
          - name: http
            containerPort: 80   # Should match the Port that the container listens on
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
---
apiVersion: v1                  # API version
kind: Service                   # Type of kubernetes resource
metadata:
  name: polling-app-client      # Name of the kubernetes resource
spec:
  type: NodePort                # Exposes the service by opening a port on each node
  selector:
    app: polling-app-client     # Any Pod matching the label `app=polling-app-client` will be picked up by this service
  ports:                        # Forward incoming connections on port 80 to the target port 80 in the Pod
  - name: http
    port: 80
    targetPort: 80

Let’s apply the above manifest file to deploy the frontend app -

$ kubectl apply -f deployments/polling-app-client.yaml
deployment.apps/polling-app-client created
service/polling-app-client created

Let’s check all the Pods in the cluster -

$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
polling-app-client-6b6d979b-7pgxq     1/1     Running   0          26m
polling-app-mysql-6b94bc9d9f-td6l4    1/1     Running   0          21m
polling-app-server-744b47f866-s2bpf   1/1     Running   0          31s

Type the following command to open the frontend service in the default browser -

$ minikube service polling-app-client

You’ll notice that the backend API calls from the frontend app are failing, because the frontend tries to reach the backend APIs at localhost:8080. In a real-world setup you’d have a public domain for your backend server, but since our entire setup runs locally, we can use the kubectl port-forward command to map the localhost:8080 endpoint to the backend service -

$ kubectl port-forward service/polling-app-server 8080:8080

That’s it! Now you’ll be able to use the frontend app (keep the port-forward command running in a separate terminal while you do). Here is how the app looks -