An introduction to Git merge and rebase: what they are, and how to use them

As a Developer, many of us have to choose between Merge and Rebase. With all the references we get from the internet, everyone believes “Don’t use Rebase, it could cause serious problems.” Here I will explain what merge and rebase are, why you should (and shouldn’t) use them, and how to do so.


Git Merge and Git Rebase serve the same purpose. They are designed to integrate changes from multiple branches into one. Although the final goal is the same, those two methods achieve it in different ways, and it's helpful to know the difference as you become a better software developer.

This question has split the Git community. Some believe you should always rebase and others that you should always merge. Each side has some convincing benefits.

Git Merge

Merging is a common practice for developers using version control systems. Whether branches are created for testing, bug fixes, or other reasons, merging commits changes to another location. To be more specific, merging takes the contents of a source branch and integrates them with a target branch. In this process, only the target branch is changed. The source branch history remains the same.

Merge Master -> Feature branch

Pros

  • Simple and familiar
  • Preserves complete history and chronological order
  • Maintains the context of the branch

Cons

  • Commit history can become polluted by lots of merge commits
  • Debugging using git bisect can become harder

How to do it

Merge the master branch into the feature branch using the checkout and merge commands.


$ git checkout feature
$ git merge master
(or)
$ git merge master feature


This will create a new “Merge commit” in the feature branch that holds the history of both branches.
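To see the effect, here is a throwaway, self-contained sketch (repo location, file names, and commit messages are illustrative) that diverges two branches and then runs the merge from above:

```shell
#!/bin/sh
# Sketch: build a temporary repo with a diverged master and feature branch,
# then merge master into feature, as in the article's commands.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b master
git config user.email demo@example.com
git config user.name "Demo"
echo base > base.txt && git add base.txt && git commit -q -m "base"
git checkout -q -b feature
echo feature > feature.txt && git add feature.txt && git commit -q -m "feature work"
git checkout -q master
echo master > master.txt && git add master.txt && git commit -q -m "master work"

# the article's commands:
git checkout -q feature
git merge --no-edit master        # creates a new "Merge commit" on feature

git log --oneline --graph         # both lines of history, joined by the merge commit
```

The final `git log --oneline --graph` shows both lines of development joined by the new merge commit, while the source branch (master) is left untouched.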

Git Rebase

Rebase is another way to integrate changes from one branch into another. Rather than tying the two histories together with a merge commit, rebase takes each commit from the source branch and reapplies it, as a patch, on top of the target branch.

Unlike merging, rebasing flattens the history because it transfers the completed work from one branch onto another. In the process, the extra merge commits that would otherwise accumulate are avoided.

Rebases are how changes should pass from the top of the hierarchy downwards, and merges are how they flow back upwards.

Rebase feature branch onto master

Pros

  • Streamlines a potentially complex history
  • Manipulating a single commit is easy (e.g. reverting one)
  • Avoids merge commit “noise” in busy repos with busy branches
  • Cleans intermediate commits by making them a single commit, which can be helpful for DevOps teams

Cons

  • Squashing the feature down to a handful of commits can hide the context
  • Rebasing branches that others have already pulled from a public repository can be dangerous when working as a team
  • It’s more work: you have to keep rebasing your feature branch onto the updated target branch
  • Rebasing with remote branches requires you to force push. The biggest problem people face is that they force push without having set Git’s push.default configuration. With the old matching default, this updates all branches of the same name, both locally and remotely, and that is dreadful to deal with.
  • If you rebase incorrectly and unintentionally rewrite the history, it can lead to serious issues, so make sure you know what you are doing!
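Two habits blunt that force-push pitfall: scope pushes to the current branch, and prefer `--force-with-lease` over a bare `--force`. Here is a self-contained sketch using a local bare repository as a stand-in remote (remote, branch, and file names are illustrative, and an amend stands in for a rebase):

```shell
#!/bin/sh
# Sketch: safer force-pushing after history is rewritten.
set -e
work=$(mktemp -d)
git init -q --bare "$work/origin.git"
git init -q "$work/clone"
cd "$work/clone"
git checkout -q -b feature
git config user.email demo@example.com
git config user.name "Demo"
echo one > f.txt && git add f.txt && git commit -q -m "one"
git remote add origin "$work/origin.git"

# Limit `git push` to the current branch, so a force push can never touch
# other same-named branches (push.default has defaulted to "simple" since Git 2.0):
git config push.default current
git push -q -u origin feature

# Rewrite history locally (an amend stands in for a rebase here)...
git commit -q --amend -m "one, reworded"

# ...then force-push with a lease: unlike --force, this refuses to overwrite
# remote commits you haven't fetched yet.
git push -q --force-with-lease origin feature
```

If someone else had pushed to `feature` in the meantime, the last command would be rejected instead of silently discarding their work.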

How to do it

Rebase the feature branch onto the master branch using the following commands.


$ git checkout feature
$ git rebase master

This moves the entire feature branch on top of the master branch. It does this by rewriting the project history, creating brand-new commits for each commit in the original (feature) branch.
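A runnable sketch of this (throwaway repo, illustrative names) shows the commit being rewritten and the history staying linear:

```shell
#!/bin/sh
# Sketch: diverge master and feature, then rebase feature onto master.
# Note the rewritten commit hash and the absence of a merge commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b master
git config user.email demo@example.com
git config user.name "Demo"
echo base > base.txt && git add base.txt && git commit -q -m "base"
git checkout -q -b feature
echo feature > feature.txt && git add feature.txt && git commit -q -m "feature work"
old=$(git rev-parse HEAD)
git checkout -q master
echo master > master.txt && git add master.txt && git commit -q -m "master work"

# the article's commands:
git checkout -q feature
git rebase master                  # replays "feature work" on top of master

new=$(git rev-parse HEAD)
git log --oneline --graph          # linear history, no merge commit
echo "feature commit rewritten: $old -> $new"
```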

Interactive Rebasing

This allows altering the commits as they are moved to the new branch. This is more powerful than automated rebase, as it offers complete control over the branch’s commit history. Typically this is used to clean up a messy history before merging a feature branch into master.

$ git checkout feature
$ git rebase -i master

This will open an editor listing all the commits that are about to be moved.

pick 22d6d7c Commit message#1
pick 44e8a9b Commit message#2
pick 79f1d2h Commit message#3

This defines exactly what the branch will look like after the rebase is performed. By reordering the entries, you can make the history look like whatever you want. For example, you can use commands like fixup, squash, or edit in place of pick.
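For example, editing the todo list as follows folds the second and third commits into the first: squash combines a commit into the previous one and lets you edit the combined message, while fixup does the same but discards the folded commit's message.

```
pick   22d6d7c Commit message#1
squash 44e8a9b Commit message#2
fixup  79f1d2h Commit message#3
```

After saving and closing the editor, the branch contains a single commit in place of the original three.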

Which one to use

So what’s best? What do the experts recommend?


It’s hard to generalize and decide on one or the other, since every team is different. But we have to start somewhere.

Teams need to consider several questions when setting their Git rebase vs. merge policies, because as it turns out, one workflow strategy is not inherently better than the other: it depends on your team.

Consider the level of rebasing and Git competence across your organization. Determine the degree to which you value the simplicity of rebasing as compared to the traceability and history of merging.

Finally, decisions on merging and rebasing should be considered in the context of a clear branching strategy (refer to this article to understand more about branching strategies). A successful branching strategy is designed around the organization of your teams.

What do I recommend?

As the team grows, it will become hard to manage or trace development changes with an always-merge policy. To have a clean and understandable commit history, using rebase is reasonable and effective.

By considering the following circumstances and guidelines, you can get the best out of rebase:

  • You’re developing locally: If you have not shared your work with anyone else, you should prefer rebasing over merging to keep your history tidy. If you’ve got a personal fork of the repository that is not shared with other developers, you’re safe to rebase even after you’ve pushed to your branch.
  • Your code is ready for review: You created a pull request. Others are reviewing your work and are potentially fetching it into their fork for local review. At this point, you should not rebase your work. Instead, create ‘rework’ commits and update your feature branch. This helps with traceability in the pull request and prevents accidental history breakage.
  • The review is done and the work is ready to be integrated into the target branch: Congratulations! You’re about to delete your feature branch. Given that other developers won’t be fetch-merging in these changes from this point on, this is your chance to sanitize your history. At this point, you can rewrite history and fold the original commits and those pesky ‘pr rework’ and ‘merge’ commits into a small set of focused commits. Creating an explicit merge for these commits is optional, but has value: it records when the feature graduated to master.
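As a sketch of that last step, the following folds a feature branch down to a single commit with a scripted `git rebase -i` (GIT_SEQUENCE_EDITOR replaces the manual todo-list edit so the example runs unattended; GNU sed is assumed) and then records an explicit merge into master. All branch names and messages are illustrative:

```shell
#!/bin/sh
# Sketch: squash a feature branch to one commit, then merge it explicitly.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b master
git config user.email demo@example.com
git config user.name "Demo"
echo base > base.txt && git add base.txt && git commit -q -m "base"
git checkout -q -b feature
echo a > a.txt && git add a.txt && git commit -q -m "feature: first cut"
echo b > b.txt && git add b.txt && git commit -q -m "pr rework"

# Turn every todo line after the first from "pick" into "fixup",
# folding the branch into a single commit:
GIT_SEQUENCE_EDITOR='sed -i -e "2,\$s/^pick/fixup/"' git rebase -i master

# Record an explicit merge commit so the graduation to master is visible:
git checkout -q master
git merge -q --no-ff -m "Merge feature X into master" feature
git log --oneline --graph
```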

Conclusion

I hope this explanation has given some insights on Git merge and Git rebase. The merge vs. rebase strategy is always debatable. But perhaps this article will help dispel your doubts and allow you to adopt an approach that works for your team.

I’m looking forward to writing on Git workflows and concepts of Git. Do comment on the topics that you want me to write about next. Cheers!

code = coffee + developer

Here is another useful reference


Build Docker Images and Host a Docker Image Repository with GitLab


In this tutorial, you'll learn how to build Docker images and host a Docker image repository with GitLab: you'll set up a new GitLab runner to build Docker images, create a private Docker registry to store them in, and update a Node.js app to be built and tested inside of Docker containers.


Introduction

Containerization is quickly becoming the most accepted method of packaging and deploying applications in cloud environments. The standardization it provides, along with its resource efficiency (when compared to full virtual machines) and flexibility, make it a great enabler of the modern DevOps mindset. Many interesting cloud native deployment, orchestration, and monitoring strategies become possible when your applications and microservices are fully containerized.

Docker containers are by far the most common container type today. Though public Docker image repositories like Docker Hub are full of containerized open source software images that you can docker pull and use today, for private code you'll need to either pay a service to build and store your images, or run your own software to do so.

GitLab Community Edition is a self-hosted software suite that provides Git repository hosting, project tracking, CI/CD services, and a Docker image registry, among other features. In this tutorial we will use GitLab's continuous integration service to build Docker images from an example Node.js app. These images will then be tested and uploaded to our own private Docker registry.

Prerequisites

Before we begin, we need to set up a secure GitLab server, and a GitLab CI runner to execute continuous integration tasks. The sections below will provide links and more details.

A GitLab Server Secured with SSL

To store our source code, run CI/CD tasks, and host the Docker registry, we need a GitLab instance installed on an Ubuntu 16.04 server. GitLab currently recommends a server with at least 2 CPU cores and 4GB of RAM. Additionally, we'll secure the server with SSL certificates from Let's Encrypt. To do so, you'll need a domain name pointed at the server.

A GitLab CI Runner

Set Up Continuous Integration Pipelines with GitLab CI on Ubuntu 16.04 will give you an overview of GitLab's CI service, and show you how to set up a CI runner to process jobs. We will build on top of the demo app and runner infrastructure created in this tutorial.

Step 1 — Setting Up a Privileged GitLab CI Runner

In the prerequisite GitLab continuous integration tutorial, we set up a GitLab runner using sudo gitlab-runner register and its interactive configuration process. This runner is capable of running builds and tests of software inside of isolated Docker containers.

However, in order to build Docker images, our runner needs full access to a Docker service itself. The recommended way to configure this is to use Docker's official docker-in-docker image to run the jobs. This requires granting the runner a special privileged execution mode, so we'll create a second runner with this mode enabled.

Note: Granting the runner privileged mode basically disables all of the security advantages of using containers. Unfortunately, the other methods of enabling Docker-capable runners also carry similar security implications. Please look at the official GitLab documentation on Docker Build to learn more about the different runner options and which is best for your situation.


Because there are security implications to using a privileged runner, we are going to create a project-specific runner that will only accept Docker jobs on our hello_hapi project (GitLab admins can always manually add this runner to other projects at a later time). From your hello_hapi project page, click Settings at the bottom of the left-hand menu, then click CI/CD in the submenu:

Build Docker Images and Host a Docker Image Repository with GitLab

Now click the Expand button next to the Runners settings section:

Build Docker Images and Host a Docker Image Repository with GitLab

There will be some information about setting up a Specific Runner, including a registration token. Take note of this token. When we use it to register a new runner, the runner will be locked to this project only.

Build Docker Images and Host a Docker Image Repository with GitLab

While we're on this page, click the Disable shared Runners button. We want to make sure our Docker jobs always run on our privileged runner. If a non-privileged shared runner was available, GitLab might choose to use that one, which would result in build errors.

Log in to the server that has your current CI runner on it. If you don't have a machine set up with runners already, go back and complete the Installing the GitLab CI Runner Service section of the prerequisite tutorial before proceeding.

Now, run the following command to set up the privileged project-specific runner:

    sudo gitlab-runner register -n \
      --url https://gitlab.example.com/ \
      --registration-token your-token \
      --executor docker \
      --description "docker-builder" \
      --docker-image "docker:latest" \
      --docker-privileged

Output

Registering runner... succeeded                     runner=61SR6BwV
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!

Be sure to substitute your own information. We set all of our runner options on the command line instead of using the interactive prompts, because the prompts don't allow us to specify --docker-privileged mode.

Your runner is now set up, registered, and running. To verify, switch back to your browser. Click the wrench icon in the main GitLab menu bar, then click Runners in the left-hand menu. Your runners will be listed:

Build Docker Images and Host a Docker Image Repository with GitLab

Now that we have a runner capable of building Docker images, let's set up a private Docker registry for it to push images to.


Step 2 — Setting Up GitLab's Docker Registry

Setting up your own Docker registry lets you push and pull images from your own private server, increasing security and reducing the dependencies your workflow has on outside services.

GitLab will set up a private Docker registry with just a few configuration updates. First we'll set up the URL where the registry will reside. Then we will (optionally) configure the registry to use an S3-compatible object storage service to store its data.

SSH into your GitLab server, then open up the GitLab configuration file:

sudo nano /etc/gitlab/gitlab.rb

Scroll down to the Container Registry settings section. We're going to uncomment the registry_external_url line and set it to our GitLab hostname with a port number of 5555:

/etc/gitlab/gitlab.rb

registry_external_url 'https://gitlab.example.com:5555'

Next, add the following two lines to tell the registry where to find our Let's Encrypt certificates:

/etc/gitlab/gitlab.rb

registry_nginx['ssl_certificate'] = "/etc/letsencrypt/live/gitlab.example.com/fullchain.pem"
registry_nginx['ssl_certificate_key'] = "/etc/letsencrypt/live/gitlab.example.com/privkey.pem"

Save and close the file, then reconfigure GitLab:

sudo gitlab-ctl reconfigure

Output

gitlab Reconfigured!

Update the firewall to allow traffic to the registry port:

sudo ufw allow 5555

Now switch to another machine with Docker installed, and log in to the private Docker registry. If you don’t have Docker on your local development computer, you can use whichever server is set up to run your GitLab CI jobs, as it has Docker installed already:

docker login gitlab.example.com:5555

You will be prompted for your username and password. Use your GitLab credentials to log in.

Output
Login Succeeded 

Success! The registry is set up and working. Currently it will store files on the GitLab server's local filesystem. If you'd like to use an object storage service instead, continue with this section. If not, skip down to Step 3.

To set up an object storage backend for the registry, we need to know the following information about our object storage service:

  • Access Key
  • Secret Key
  • Region (for example, us-east-1) if using Amazon S3, or Region Endpoint (for example, https://nyc.digitaloceanspaces.com) if using an S3-compatible service
  • Bucket Name

If you're using DigitalOcean Spaces, you can find out how to set up a new Space and get the above information by reading How To Create a DigitalOcean Space and API Key.

When you have your object storage information, open the GitLab configuration file:

sudo nano /etc/gitlab/gitlab.rb

Once again, scroll down to the container registry section. Look for the registry['storage'] block, uncomment it, and update it to the following, again making sure to substitute your own information where appropriate:

/etc/gitlab/gitlab.rb

registry['storage'] = {
  's3' => {
    'accesskey' => 'your-key',
    'secretkey' => 'your-secret',
    'bucket' => 'your-bucket-name',
    'region' => 'nyc3',
    'regionendpoint' => 'https://nyc3.digitaloceanspaces.com'
  }
}

If you're using Amazon S3, you only need region and not regionendpoint. If you're using an S3-compatible service like Spaces, you'll need regionendpoint. In this case region doesn't actually configure anything and the value you enter doesn't matter, but it still needs to be present and not blank.

Save and close the file.

Note: There is currently a bug where the registry will shut down after thirty seconds if your object storage bucket is empty. To avoid this, put a file in your bucket before running the next step. You can remove it later, after the registry has added its own objects.

If you are using DigitalOcean Spaces, you can drag and drop to upload a file using the Control Panel interface.

Reconfigure GitLab one more time:

sudo gitlab-ctl reconfigure

On your other Docker machine, log in to the registry again to make sure all is well:

docker login gitlab.example.com:5555

You should get a Login Succeeded message.

Now that we've got our Docker registry set up, let's update our application's CI configuration to build and test our app, and push Docker images to our private registry.

Step 3 — Updating .gitlab-ci.yml and Building a Docker Image

Note: If you didn't complete the prerequisite article on GitLab CI you'll need to copy over the example repository to your GitLab server. Follow the Copying the Example Repository From GitHub section to do so.

To get our app building in Docker, we need to update the .gitlab-ci.yml file. You can edit this file right in GitLab by clicking on it from the main project page, then clicking the Edit button. Alternately, you could clone the repo to your local machine, edit the file, then git push it back to GitLab. That would look like this:

    git clone git@gitlab.example.com:sammy/hello_hapi.git
    cd hello_hapi
    # edit the file w/ your favorite editor
    git commit -am "updating ci configuration"
    git push

First, delete everything in the file, then paste in the following configuration:

.gitlab-ci.yml

image: docker:latest
services:
- docker:dind

stages:
- build
- test
- release

variables:
  TEST_IMAGE: gitlab.example.com:5555/sammy/hello_hapi:$CI_COMMIT_REF_NAME
  RELEASE_IMAGE: gitlab.example.com:5555/sammy/hello_hapi:latest

before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN gitlab.example.com:5555

build:
  stage: build
  script:
    - docker build --pull -t $TEST_IMAGE .
    - docker push $TEST_IMAGE

test:
  stage: test
  script:
    - docker pull $TEST_IMAGE
    - docker run $TEST_IMAGE npm test

release:
  stage: release
  script:
    - docker pull $TEST_IMAGE
    - docker tag $TEST_IMAGE $RELEASE_IMAGE
    - docker push $RELEASE_IMAGE
  only:
    - master

Be sure to update the highlighted URLs and usernames with your own information, then save with the Commit changes button in GitLab. If you're updating the file outside of GitLab, commit the changes and git push back to GitLab.

This new config file tells GitLab to use the latest docker image (image: docker:latest) and link it to the docker-in-docker service (docker:dind). It then defines build, test, and release stages. The build stage builds the Docker image using the Dockerfile provided in the repo, then uploads it to our Docker image registry. If that succeeds, the test stage will download the image we just built and run the npm test command inside it. If the test stage is successful, the release stage will pull the image, tag it as hello_hapi:latest and push it back to the registry.

Depending on your workflow, you could also add additional test stages, or even deploy stages that push the app to a staging or production environment.

Updating the configuration file should have triggered a new build. Return to the hello_hapi project in GitLab and click on the CI status indicator for the commit:

Build Docker Images and Host a Docker Image Repository with GitLab

On the resulting page you can then click on any of the stages to see their progress:

Build Docker Images and Host a Docker Image Repository with GitLab

Build Docker Images and Host a Docker Image Repository with GitLab

Eventually, all stages should indicate they were successful by showing green check mark icons. We can find the Docker images that were just built by clicking the Registry item in the left-hand menu:

Build Docker Images and Host a Docker Image Repository with GitLab

If you click the little "document" icon next to the image name, it will copy the appropriate docker pull ... command to your clipboard. You can then pull and run your image:

    docker pull gitlab.example.com:5555/sammy/hello_hapi:latest
    docker run -it --rm -p 3000:3000 gitlab.example.com:5555/sammy/hello_hapi:latest

Output

> [email protected] start /usr/src/app
> node app.js

Server running at: http://56fd5df5ddd3:3000

The image has been pulled down from the registry and started in a container. Switch to your browser and connect to the app on port 3000 to test. In this case we're running the container on our local machine, so we can access it via localhost at the following URL:

http://localhost:3000/hello/test

Output

Hello, test!

Success! You can stop the container with CTRL-C. From now on, every time we push new code to the master branch of our repository, we'll automatically build and test a new hello_hapi:latest image.

Conclusion

In this tutorial we set up a new GitLab runner to build Docker images, created a private Docker registry to store them in, and updated a Node.js app to be built and tested inside of Docker containers.

Learn how to CI/CD with GitHub Actions and Docker


In this post, you'll learn how to CI and CD a Node.js application using GitHub Actions

Originally published by Abhinav Dhasmana at https://blog.bitsrc.io

This article will cover the following:

  • Use Docker instead of bare metal deployment
  • Use GitHub actions for continuous integration of your app
  • Use GitHub actions for continuous deployment by pushing the Docker image to a Docker registry (Docker Hub)

Our workflow will look like this

A workflow of a Node.js app deployed using GitHub actions

The complete source code is available on GitHub

Use Docker instead of bare metal deployment

Dockerizing an existing app is easy. All we need is a Dockerfile and an optional .dockerignore file. Below is a Dockerfile for our app.

FROM node:10.16.0-alpine

WORKDIR /source/github-action-example-node

COPY package.json /source/github-action-example-node

RUN cd /source/github-action-example-node && npm i --only=production

COPY . .

EXPOSE 3000
CMD ["sh", "-c", "node server.js"]

It copies our package.json, runs npm install, and starts the server. To make sure our file is correct, we can run docker build -t abhinavdhasmana/github-action-example-node . from the root folder. If we run docker images, we will see our latest image. We can also run our container with docker run -d -p 3000:3000 abhinavdhasmana/github-action-example-node. Point the browser to http://localhost:3000/ and the app's text will appear.

What are GitHub Actions and how do they work

‘GitHub Actions’ is an API that can react to any event, whether one of GitHub’s or our own. For example, for every push event on the repository, we want our test cases to run.

For GitHub Actions to work, we need to create a .github/workflows folder. We need to create our workflows inside this folder. Let’s create push.yml. Here is what we want from our workflow:

On every push, perform these actions in the given order

  1. git clone the repo
  2. run npm install
  3. run npm lint
  4. run npm test
  5. build the docker image
  6. login to docker hub
  7. Push the image to docker hub

Since each of these commands runs inside a Docker container, we would have to declare a Dockerfile for every one of these actions and run the command in that container. This is, of course, very tedious and error-prone. Remember, GitHub Actions are code, so we can just reuse, edit, and fork them as we do with any other piece of code.

This is what our push.yml looks like:

on: push
name: npm build, lint, test and publish
jobs:
  build-and-publish:
    name: build and publish
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@master
    - name: npm install
      uses: actions/npm@master
      with:
        args: install
    - name: npm test
      uses: actions/npm@master
      with:
        args: run test
    - name: npm lint
      uses: actions/npm@master
      with:
        args: run lint
    - name: docker build
      uses: actions/docker/cli@master
      with:
        args: build -t abhinavdhasmana/github-action-example-node .
    - name: docker login
      uses: actions/docker/login@master
      env:
        DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}
        DOCKER_USERNAME: ${{ secrets.DOCKER_USERNAME }}
    - name: docker push
      uses: actions/docker/cli@master
      with:
        args: push abhinavdhasmana/github-action-example-node

GitHub actions file for npm actions and push to docker hub

Let’s dissect this file

line 1: We want to trigger our workflow when someone pushes the code to our repo

line 3–6: We are defining a job build-and-publish which runs on ubuntu-latest. Each job runs in a fresh instance of a virtual environment. A job can contain one or more steps.

line 8: This is step 1 of our workflow. Here we want to get our source code. We can write our own code to pull the source, or reuse an open-source action. The repo link is https://github.com/actions/checkout

line 9-12: This is step 2 of our workflow where we run npm install on our codebase. Again, we use an open source action at https://github.com/actions/npm and pass install as an argument.

line 13–20: These are the same as the last step, except for the argument passed to the npm command.

line 21–24: We build a docker image of our code with the help of docker action and tag the image as abhinavdhasmana/github-action-example-node

line 25-29: This one is a little different: here we log in to Docker Hub. We use secrets, which are passed as environment variables to our build. We can set these env variables in many ways. To set them up via GitHub, go to Settings -> Secrets and create new secrets

Store secrets in GitHub

line 30-33: We push the image to Docker Hub with the tag we created in line 24.

If we commit these changes, GitHub Actions will come into play and start running all the steps in our job. We should see something like this

GitHub Actions running our job

To validate that a new image has been pushed to Docker Hub, we should see the new image appear in our Docker Hub repository

Docker Hub image

Full source code is available on GitHub.

Thanks for reading

If you liked this post, share it with all of your programming buddies!

Follow me on Facebook | Twitter

Further reading

Docker and Kubernetes: The Complete Guide

Docker Mastery: The Complete Toolset From a Docker Captain

Docker for the Absolute Beginner - Hands On - DevOps

Docker for Absolute Beginners

How to debug Node.js in a Docker container?

Docker Containers for Beginners

Deploy Docker Containers With AWS CodePipeline

Build Docker Images and Host a Docker Image Repository with GitLab

How to create a full stack React/Express/MongoDB app using Docker



20+ Outstanding Vue.js Open Source Projects

20+ Outstanding Vue.js Open Source Projects

There are more than 20 Vue.js open-source projects in this article. The goal was to make this list as varied as possible.


In this short intro, I won’t go back over the history of Vue.js or cite statistics on the use of this framework. It is now a matter of fact that Vue is gaining popularity, and the projects listed below are the best evidence of its prevalence.

So here we go!

Prettier

Opinionated code formatter

Website: https://prettier.io

Demo: https://prettier.io/playground/

GitHub: https://github.com/prettier/prettier

GitHub Stars: 32 343

Prettier reprints your code in a consistent style according to its own set of rules. Using a code formatter, you don’t have to format manually or argue about the right coding style anymore. Prettier integrates with most editors (Atom, Emacs, Visual Studio, WebStorm, etc.) and works with many of your favorite languages such as JavaScript, CSS, HTML, GraphQL, etc. And last year Prettier started to run in the browser and support .vue files.

Image source: https://prettier.io


Vuetify

Material Component Framework

Website: https://vuetifyjs.com/en/

GitHub: https://github.com/vuetifyjs/vuetify

GitHub Stars: 20 614

This framework allows you to customize visual components and complies with Google's Material Design guidelines. Vuetify combines all the advantages of Vue.js and Material. What's more, Vuetify is constantly evolving, since it is improved by both communities on GitHub. The framework is compatible with RTL and Vue CLI 3. You can build an interactive and attractive frontend using Vuetify.

Image source: https://vuetifyjs.com/en/

iView

A set of UI components

Website: https://iviewui.com/

GitHub: https://github.com/iview/iview

GitHub Stars: 21 643

Developers of all skill levels can use iView, but you have to be familiar with Single File Components (https://vuejs.org/v2/guide/single-file-components.html). A friendly API and constant fixes and upgrades make it easy to use. You can use separate components (navigation, charts, etc.) or you can use a Starter Kit. The solid documentation of iView is a big plus and, of course, it is compatible with the latest Vue.js. Please note that it doesn’t support IE8.

Image source: https://iviewui.com/


Epiboard

A tab page

GitHub: https://github.com/Alexays/Epiboard

GitHub Stars: 124

A tab page gives easy access to RSS feeds, weather, downloads, etc. Epiboard focuses on customizability to provide a personalized experience: you can synchronize your settings across devices, change the look and feel, and add your favorite bookmarks. The project follows Material Design guidelines. You can find the full list of current cards on the GitHub page.

Image source: https://github.com/Alexays/Epiboard

Light Blue Vue Admin

Vue JS Admin Dashboard Template

Website: https://flatlogic.com/admin-dashboards/light-blue-vue-lite

Demo: https://flatlogic.com/admin-dashboards/light-blue-vue-lite/demo

GitHub: https://github.com/flatlogic/light-blue-vue-admin

GitHub Stars: 28

Light Blue is built with the latest Vue.js and Bootstrap, has detailed documentation, and sports a transparent, modern design. The template is easy to navigate, has user-friendly functions and a variety of UI elements. All the components of this template fit together impeccably and provide a great user experience. Easy customization is another big plus, dramatically cutting development time.

Image source: https://flatlogic.com/admin-dashboards/light-blue-vue-lite

Image source: https://flatlogic.com/admin-dashboards/light-blue-vue-lite

Beep

Account Security Scanner

Website: https://beep.modus.app

GitHub: https://github.com/ModusCreateOrg/beep

GitHub Stars: 110

This security scanner was built with Vue.js and Ionic. It runs security checks and keeps passwords safe. So how does the check work? Beep simply compares your data against the information in leaked-credentials databases. Your passwords are safe with Beep thanks to the use of the SHA-1 algorithm; plus, the app never stores your login and password as-is.

Image source: https://beep.modus.app

Sing App Vue Dashboard

Vue.JS admin dashboard template

Website: https://flatlogic.com/admin-dashboards/sing-app-vue-dashboard

Demo: https://flatlogic.com/admin-dashboards/sing-app-vue-dashboard/demo

GitHub: https://github.com/flatlogic/sing-app-vue-dashboard

GitHub Stars: 176

What do you need from an admin template? You definitely need a classic look, awesome typography, and the usual set of components. Sing App fits all these criteria, plus it has a very soft color scheme. The free version of this template has all the necessary features to start your project with minimal work. This elegantly designed dashboard can be useful for most web apps, such as a CMS, a CRM, or a simple website admin panel.

Image source: https://flatlogic.com/admin-dashboards/sing-app-vue-dashboard

Image source: https://flatlogic.com/admin-dashboards/sing-app-vue-dashboard

Vue Storefront

PWA for the eCommerce

Website: https://www.vuestorefront.io

GitHub: https://github.com/DivanteLtd/vue-storefront

GitHub Stars: 5 198

This PWA storefront can connect to almost any eCommerce backend because it uses a headless architecture. That includes the popular BigCommerce platform, Magento, Shopware, etc. Vue Storefront isn't easy to learn at once because it is a complex solution, but it gives you lots of possibilities, and it is always improving thanks to a growing community of professionals. Some of the advantages of Vue Storefront include a mobile-first approach, Server-Side Rendering (good for SEO), and an offline mode.

Image source: https://www.vuestorefront.io

DynamoDb GUI Client

Cross-platform GUI client for DynamoDB

GitHub: https://github.com/Arattian/DynamoDb-GUI-Client

GitHub Stars: 178

DynamoDB is a NoSQL database applicable in cases where you have to deal with large amounts of data or serverless apps with AWS Lambda. This GUI client provides remote access and supports several databases at the same time.

Image source: https://github.com/Arattian/DynamoDb-GUI-Client

vueOrgChart

Interactive organization chart

Demo: https://hoogkamer.github.io/vue-org-chart/#/

GitHub: https://github.com/Hoogkamer/vue-org-chart

GitHub Stars: 44

With this solution, no web server, installation, or database is needed. This simple chart can be edited in Excel or on the webpage, and you can easily search for a particular manager or department. There are two usage options. The first is as a static website, which is suitable if you want to use vueOrgChart without modification. If you plan to build your own chart on top of this project, you will have to study the "Build Setup" section.

Image source: https://hoogkamer.github.io/vue-org-chart/#/

Faviator

Favicon generator

Website: https://www.faviator.xyz

Demo: https://www.faviator.xyz/playground

GitHub: https://github.com/faviator/faviator

GitHub Stars: 63

This library helps you create a simple icon. The first step is to pass in a configuration; the second is to choose the format of your icon: JPG, PNG, or SVG. As you can see in the screenshot, you can choose any font from Google Fonts.

Image source: https://www.faviator.xyz

Minimal Notes

Web app for PC or Tablet

Demo: https://vladocar.github.io/Minimal-Notes/

GitHub: https://github.com/vladocar/Minimal-Notes

GitHub Stars: 48

There is not much to say about this app. It is minimalistic, works locally in the browser, stores data in localStorage, and the file is only 4 KB. It is also available for macOS, where the file grows from 4 KB to about 0.45 MB. But it is still very lightweight.

Image source: https://vladocar.github.io/Minimal-Notes/

Directus

CMS built with Vue.js

Website: https://directus.io

Demo: https://directus.app/?ref=madewithvuejs.com#/login

GitHub: https://github.com/directus/directus

GitHub Stars: 4 607

Directus is a very lightweight and simple CMS. It has been modularized to give developers the opportunity to customize it in every aspect. The main peculiarity of this CMS is that it stores your data in SQL databases, so the data stays synchronized with every change you make and can be easily customized. It also supports multilingual content.

Image source: https://directus.io

VuePress

Static Site Generator

Website: https://vuepress.vuejs.org

GitHub: https://github.com/vuejs/vuepress

GitHub Stars: 12 964

The creator of Vue.js, Evan You, created this simple site generator. Minimalistic and SEO-friendly, it has multi-language support and easy Google Analytics integration. A VuePress site uses Vue, Vue Router, and webpack. If you have worked with Nuxt or Gatsby, you will notice some similarities. The main difference is that Nuxt was created to develop applications, while VuePress is for building static websites.

Image source: https://vuepress.vuejs.org

Docsify

Documentation site generator

Website: https://docsify.js.org/#/

GitHub: https://github.com/docsifyjs/docsify

GitHub Stars: 10 105

This project has an impressive showcase list. The main peculiarity of this generator lies in the way pages are generated: it simply grabs your Markdown files and displays them as pages of your site. Other big pluses are full-text search and API plugins. It supports multiple themes and is really lightweight.

Image source: https://docsify.js.org/#/

vue-cli

Standard Tooling for Vue.js Development

Website: https://cli.vuejs.org

GitHub: https://github.com/vuejs/vue-cli

GitHub Stars: 21 263

This well-known tooling was released by the Vue team. Please note that before starting to use it you should install the latest versions of Vue.js, Node.js, npm, and a code editor. Vue CLI has a GUI tool and instant prototyping. Instant prototyping is a relatively new feature that allows you to develop a single component in isolation, and this component will have all the "Vue powers" of a full Vue.js project.

Image source: https://cli.vuejs.org

SheetJS

Spreadsheet Parser and Writer

Website: https://sheetjs.com/

Demo: https://sheetjs.com/demos

GitHub: https://github.com/SheetJS/js-xlsx

GitHub Stars: 16 264

SheetJS is a JS library that helps you work with data stored in Excel files. For example, you can export a workbook on the browser side or convert any HTML table. In other words, SheetJS doesn't require a server-side script or, say, AJAX. It is a great solution for front-end manipulation of two-dimensional tables: it can export and parse data and runs both in a Node terminal and on the browser side.

Image source: https://sheetjs.com/

Vue-devtools

Browser devtools extension

GitHub: https://github.com/vuejs/vue-devtools

GitHub Stars: 13 954

Almost every framework provides developers with a suitable devtool, typically an additional panel in the browser that differs greatly from the standard one. You don't have to install Vue-devtools as a browser extension; there is also an option to install it as a standalone application. You can activate it by right-clicking an element, choosing "Inspect Vue component," and navigating the tree of components. The left menu of this tool will show you the data and the props of the component.

Image source: https://github.com/vuejs/vue-devtools

Handsontable

Data Grid Component

Website: https://handsontable.com

GitHub: https://github.com/handsontable/handsontable

GitHub Stars: 12 049

This component has a spreadsheet look, can be easily modified with plugins, and binds to almost any data source. It supports all the standard operations: create, read, update, and delete. Plus, you can sort and filter your records. What is more, you can include data summaries and assign a type to a cell. This project has exemplary documentation and was designed to be as customizable as needed.

Image source: https://handsontable.com

Vue webpack boilerplate

Website: http://vuejs-templates.github.io/webpack/

GitHub: https://github.com/vuejs-templates/webpack

GitHub Stars: 9 052

Vue.js provides great templates to help you start the development process with your favorite stack. This boilerplate is a solid foundation for your project. It includes the best project structure and configuration, optimal tools and best development practices. Make sure this template has more or less the same features that you need for your project. Otherwise, it is better to use Vue CLI due to its flexibility.

Image source: http://vuejs-templates.github.io/webpack/

Material design for Vue.js

Website: http://vuematerial.io

GitHub: https://github.com/vuematerial/vue-material

GitHub Stars: 7 984

What is great about this Material Design framework is its truly thorough documentation. The framework is very lightweight, with a full array of components, and fully in line with the Google Material Design guidelines. The design fits every screen and supports every modern browser.

Image source: http://vuematerial.io

CSSFX

Click-to-copy CSS effects

Website: https://cssfx.dev

GitHub: https://github.com/jolaleye/cssfx

GitHub Stars: 4 569

This project is very simple and does exactly what the description says: it's a collection of CSS effects. You can see a preview of each effect, and clicking one opens a pop-up with a code snippet that you can copy.

Image source: https://cssfx.dev

uiGradients

Website: http://uigradients.com/

GitHub: https://github.com/ghosh/uiGradients

GitHub Stars: 4 323

This is a collection of linear gradients that lets you copy CSS code. The collection is community-contributed and allows you to filter gradients by preferred color.

Image source: http://uigradients.com/

Vuestic

Demo: https://vuestic.epicmax.co/#/admin/dashboard

GitHub: https://github.com/epicmaxco/vuestic-admin

GitHub Stars: 5 568

Vuestic is a responsive admin template that is already proving popular on GitHub. Made with Bootstrap 4, this template doesn't require jQuery. With 36 ready-to-use UI elements and 18 pages, Vuestic offers multiple options for customization. The code is constantly evolving, not only through the efforts of the author but also thanks to the support of the Vue community on GitHub.

Image source: https://vuestic.epicmax.co/#/admin/dashboard

How to implement CI/CD into Spring-Boot-based Java Applications?


You'll learn how to implement CI/CD on a Spring Boot Java app using Maven, GitHub, Travis CI, Docker, Codecov, SonarCloud, and Heroku.

I am very excited to share my experiences building Continuous Integration/Continuous Delivery (CI/CD) into Spring-Boot-based Java applications. First, let's establish everything we will learn in this tutorial:

Step 1) Create a Spring Boot Java App using Spring Initializr

Step 2) Create a GitHub repository

Step 3) Use Travis CI and Docker to implement CI/CD

Step 4) Add Codecov to provide code coverage

Step 5) Use SonarCloud to write stellar code

Step 6) Build a project site using GitHub site-maven-plugin

Step 7) Deploy the app on Heroku using heroku-maven-plugin

Step 8) Manage topics

Gradually, we'll add badges to the README.md file so that we can be notified in real-time on the state of Travis CI, Docker, Codecov, and SonarCloud. Also, we'll add the license badge.

Are you ready? If not, take some time to prepare and come back to this tutorial later. The code is available here, so just fork it; it's all yours!

Step 1: Create a Spring Boot Java App Using Spring Initializr

In this project, I used Spring Tool Suite 4 (STS 4) IDE; you are free to use whatever tool you find suitable for this project. STS 4 has the Spring Initializr built-in, so that's why I chose it for this project.

This is what STS 4 dark theme looks like:

STS 4 - Home

Click on File -> New -> Spring Starter Project

You will get:

STS 4 - Form

Please fill out the form as follows:

Name: cicd-applied-to-spring-boot-java-app

Group: com.cicd

Artifact: cicd-applied-to-spring-boot-java-app

Description: Implementing CI/CD on Spring Boot Java App

Package: com.cicd.cicd-applied-to-spring-boot-java-app

By default:

Type: Maven

Packaging: jar

Java Version: 8

Language: Java

You will get:

STS 4 - Form completed

Then, click Next.

Click on Spring Web:

STS 4 - Spring Web

Click on Finish. The new project will appear:

STS 4 - New project finally created

Next, please open the CicdAppliedToSpringBootJavaAppApplication.java file.

We can then add a basic endpoint:

STS 4 - CicdAppliedToSpringBootJavaAppApplication edited
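The screenshot shows the edited main class. As a hedged sketch (the mapping path and the returned message are assumptions, not necessarily what the screenshot contains), a basic endpoint in that class might look like:

```java
package com.cicd.cicdappliedtospringbootjavaapp;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Main application class doubling as a REST controller for one basic endpoint
@SpringBootApplication
@RestController
public class CicdAppliedToSpringBootJavaAppApplication {

    public static void main(String[] args) {
        SpringApplication.run(CicdAppliedToSpringBootJavaAppApplication.class, args);
    }

    // Responds to GET / with a plain greeting
    @GetMapping("/")
    public String hello() {
        return "Hello! CI/CD tutorial app is running.";
    }
}
```

This fragment assumes the Spring Web starter selected later in this step is on the classpath.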

Right click -> Run As -> Maven build

STS 4 - Maven build

Then you will receive:

STS 4 - Edit configuration

To run the app, please add the following:

Goals -> spring-boot:run

STS 4 - Goals

Click Run:

STS 4 - Run

The final result can be found here: http://localhost:8080/

STS 4 - Final result

Now, on to the next step!

Step 2: Create a GitHub Repository

First, you need to sign in or sign up. I'm already a GitHub user so I just signed in. You will be directed to the homepage:

GitHub - Home

To create a new repository, click on the green "New" button or click here. You will then be directed here:

GitHub - New repository

Please fill out the form as follows:

Repository name: cicd-applied-to-spring-boot-java-app (I chose to set the same name as the artifact field from step one)

Description: Implementing Continuous Integration/Continuous Delivery on Spring Boot Java App

Click on Public

Click on Initialize this repository with a README

Select the MIT license.

Why? It's very simple. The following links are helpful for understanding the choice: how to choose an open-source license, and how open-source licenses work and how to add them to your projects.

Later, we'll add the .gitignore file.

Then, click on Create repository:

GitHub - Repository form completed

This is the new repository:

GitHub - New repository

I suggest you add a file named RESEARCHES.md. Why? While working on a project, you may face difficulties and need to search for help; keeping track of those searches saves time when solving the same problems or fixing bugs later.

To create it, please click on Create new file:

GitHub - Create new file

Then, fill the name field with RESEARCHES.md and edit the file as follows. "CI/CD" is an example of a research topic, and the links represent results. Note that "##" produces a second-level heading in Markdown.

GitHub - RESEARCHES.md
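As a hedged illustration (the exact topics and entries are up to you), RESEARCHES.md could look like:

```markdown
## CI/CD

- How to implement CI/CD on a Spring Boot app (link to the result you found)
- Travis CI build lifecycle (link)

## Docker

- Dockerfile reference (link)
```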

Furthermore, click on the green "Commit new file" button at the bottom of the page:

GitHub - Commit new file

This is what we get:

GitHub - RESEARCHES.md is created

Now, please install Git (Git installation can be found here) and GitHub Desktop (GitHub Desktop installation can be found here).

After installing these two tools, it's time to clone the project we started in step one.

Open GitHub Desktop and select the repository we created previously as follows:

Click on File -> Clone repository...:

GitHub - Desktop clone repository

You'll get this pop-up:

GitHub - Desktop pop-up

Just fill the search bar with "cicd" and you will find the repository "cicd-applied-to-spring-boot-java-app" among the results:

GitHub - Desktop search bar

Select the repository and click on Clone:

GitHub - repository selected

GitHub Desktop is cloning the repository:

GitHub - Cloning repository

The repository is already cloned:

GitHub - Repository already cloned

At this stage, open the repository folder. This is my path:

My repository folder contains three files: LICENSE, README.md, and RESEARCHES.md shown below:

GitHub - Repository folder

It's time to open the folder where the code is saved:

GitHub - STS 4 project folder

Copy the content from the code folder and paste it into the repository folder. The repository folder looks as follows:

Github - Repository folder changed 1

It's important to ignore generated files and folders that we will not directly modify while working on the project. To do that, we'll make some changes to the .gitignore file in the repository folder. I used Sublime Text to edit that file.

Here's what it should look like before any changes:

GitHub - .gitignore before

Here's what it will look like after making changes.

First, add: .gitignore. It should look like:

GitHub - .gitignore after

Now, this is what the repository folder looks like on GitHub Desktop:

GitHub - 5 changed files

Fill the summary field with "First Upload" and click "Commit to master":

GitHub - First Upload

So what's next? Click on Push origin:

GitHub - Before push origin

GitHub - After Push origin

The repository is now up-to-date on GitHub:

GitHub - up-to-date

Step 3: Use Travis CI and Docker to Implement CI/CD

Note: If you're not familiar with either of these tools, check out this Travis CI Tutorial and Docker Getting Started tutorial to help you get started.

Sign up or sign in with GitHub and make sure Travis CI has access to your repository. Then, create a file named .travis.yml, which contains instructions that Travis CI will follow:
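As a rough sketch (the JDK version and caching options are assumptions; your file may differ), a minimal .travis.yml for a Maven project could be:

```yaml
language: java
jdk: openjdk8

# Cache the local Maven repository to speed up subsequent builds
cache:
  directories:
    - $HOME/.m2

script:
  - mvn clean install
```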

At first, this is what I get:

Travis - Create .travis.yml 1

Then, click on .travis.yml file:

Travis - Create .travis.yml 2

This is the repository on Travis CI:

Travis - First build passing

Now, we'll add a Travis CI badge so that we are notified about changes, etc.

To edit the README.md file, please click on the pencil icon:

Travis - Click on pencil icon

We'll get this page:

Travis - Opening README.md

Add this text but replace "FanJups" with your Travis CI username:

Travis - Adding Travis build status
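The badge line is a Markdown image wrapped in a link. A hedged sketch (replace FanJups with your own username; whether the host is travis-ci.com or travis-ci.org depends on your account):

```markdown
[![Build Status](https://travis-ci.com/FanJups/cicd-applied-to-spring-boot-java-app.svg?branch=master)](https://travis-ci.com/FanJups/cicd-applied-to-spring-boot-java-app)
```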

Then, add a commit description "Adding Travis CI badge" and click on the Commit changes button:

Travis - Adding Badge Commit description

Then, we get:

Travis - Badge already added

It's important to know that, for every change you make, Travis CI will trigger a build and send an email. It's a continuous process:

Travis - Build related to adding badge

We successfully added Travis CI and its badge. Next, we'll focus on Docker.

First, sign in or sign up on Docker Hub:

Docker - Docker Hub Home Page

Click on the Create Repository button:

Docker - Create Repository Page

Fill out the form as follows:

Name: cicd-applied-to-spring-boot-java-app (GitHub repository name)

Description: Implementing Continuous Integration/Continuous Delivery on Spring Boot Java App (GitHub repository description)

Visibility: choose Public

Build Settings: select GitHub

After clicking on the Create button:

Docker - Repository already created

It's time to link our Docker repository to our GitHub repository. Click on Builds:

Docker - Builds

Then, click on Link to GitHub:

Docker - Link source providers

Select your GitHub repository:

Docker - Selecting GitHub Repository

Now that the GitHub repository is selected, we need to make some changes:

Autotest: select Internal and External Pull Requests

Repository links: select Enable for Base Image

Docker - Configuring builds 1

Docker - Configuring builds 2

Click on Save:

Docker - Github is now linked to Docker

We succeeded in linking our GitHub repository to the Docker repository. If you need help with Docker builds, this link is helpful.

What's next? First, we'll install Docker. Then we'll make some changes to the code and Travis CI.

To install Docker, go to Docker's Get Started page, select Docker for Developers and click on Download Desktop and Take a Tutorial:

Docker - Download Desktop

To make sure Docker is installed and running properly, open your command line and run "docker":

Docker - Command Line

Now, go back to your IDE or text editor; we'll make some changes to the code.

Create a file named "Dockerfile." In short, the Dockerfile tells Docker how to build an image. To better understand the purpose of this file, this Dockerfile reference will help you.

To keep things simple, I use this Callicoder Dockerfile example and make a few small changes. This is what the Dockerfile looks like:
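Based on the Callicoder example mentioned above, the Dockerfile is likely close to the following sketch (the jar name is an assumption derived from the project's artifact and version):

```dockerfile
# Start from a slim JDK 8 base image
FROM openjdk:8-jdk-alpine

# The jar built by Maven; the exact name comes from the pom.xml artifact/version
ARG JAR_FILE=target/cicd-applied-to-spring-boot-java-app-0.0.1-SNAPSHOT.jar

# Copy the jar into the image as app.jar
COPY ${JAR_FILE} app.jar

# Spring Boot listens on 8080 by default
EXPOSE 8080

ENTRYPOINT ["java", "-jar", "/app.jar"]
```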

Here's the Dockerfile creation process using STS 4:

Select the project, then click on New -> File

Fill the file name field with "Dockerfile" and click the Finish button:

Docker - Dockerfile 2

Copy and paste the content of Dockerfile presented previously:

Docker - Dockerfile is ready

Before making changes to the pom.xml, let's look at its current content:

We add Spotify's dockerfile-maven-plugin to push the project on Docker Hub:

"... to ensure the jar is unpacked before the Docker image is created, we add some configuration for the dependency plugin."
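A hedged sketch of the Spotify plugin configuration (the plugin version and the repository naming are assumptions; DOCKER_USERNAME and DOCKER_PASSWORD are read from environment variables set up in a later step):

```xml
<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>dockerfile-maven-plugin</artifactId>
  <version>1.4.13</version>
  <executions>
    <execution>
      <id>default</id>
      <goals>
        <!-- build the image and push it during the deploy phase -->
        <goal>build</goal>
        <goal>push</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <!-- Docker Hub repository: username/image-name -->
    <repository>${env.DOCKER_USERNAME}/cicd-applied-to-spring-boot-java-app</repository>
    <tag>latest</tag>
    <username>${env.DOCKER_USERNAME}</username>
    <password>${env.DOCKER_PASSWORD}</password>
  </configuration>
</plugin>
```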

To continue, we will link Travis CI to Docker from our GitHub repository.

Do you remember your Docker username and password? You'll need them to proceed. We will create two environment variables in Travis CI.

To get there, just copy and paste this (https://travis-ci.com/GITHUBUSERNAME/cicd-applied-to-spring-boot-java-app) in your browser. But replace GITHUBUSERNAME with your correct username or click on your Travis CI badge present in README.md:

Travis CI-Docker - Travis CI Badge

Travis CI-Docker - Travis CI repository

Click on More options -> Settings:

Travis CI-Docker - Travis CI Settings

Travis CI-Docker - Travis CI Environment variables

Fill in the form as follows:

Name: DOCKER_PASSWORD

Value: yourdockerpassword

Click Add button

Name: DOCKER_USERNAME

Value: yourdockerusername

Click Add button

Travis CI-Docker - Docker environment variables added

To deploy on Docker, we'll use "mvn deploy" as explained by Spotify. The Apache Maven Project explains the role of the Apache Maven Deploy Plugin as a plugin used to "add artifacts to a remote repository."

But we don't want to add artifacts to a remote repository; we just want to deploy to Docker. When the deploy phase runs, Maven normally requires a valid distributionManagement section in the POM, and that's not our purpose here. Thus, we'll add this property to the pom.xml:

<maven.deploy.skip>true</maven.deploy.skip>

If we don't add this property, this error will occur:

"**[ERROR] Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.8.2:deploy (default-deploy) on project cicd-applied-to-spring-boot-java-app: Deployment failed: repository element was not specified in the POM inside distributionManagement element or in -DaltDeploymentRepository=id::layout::url parameter -> [Help 1]**"

At this stage, it's time to use those two Docker environment variables. Just copy and paste this new .travis.yml and push it on GitHub:

Travis CI-Docker - Updating .travis.yml
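A minimal sketch of the updated file (details are assumptions; running mvn deploy triggers the Spotify plugin's build and push goals, which is why Docker must be available in the build environment):

```yaml
language: java
jdk: openjdk8

# Docker must be available to build and push the image
services:
  - docker

cache:
  directories:
    - $HOME/.m2

# mvn deploy runs the dockerfile-maven-plugin goals bound to the deploy phase
script:
  - mvn deploy
```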

Commit description: "Linking Travis CI to Docker"

We received a beautiful red cross on the Travis CI badge, meaning a beautiful error! Ignore it for now; we'll correct it later!

Travis CI-Docker - red cross

Travis CI-Docker - Red Travis CI badge failure

Ladies and gentlemen, I'm happy to present to you: our beautiful error! Just go to the Travis CI repository and check out the beautiful build log:

The command "mvn deploy" exited with 1.

Travis CI-Docker - mvn deploy failed

We've already added the Travis CI badge. Now, it's time to do the same for Docker. Go to https://shields.io/.

On the search bar, write "docker." Then, we'll get the following results:

Travis CI-Docker - Shields.io 1

Click on Docker Cloud Build Status:

Travis CI-Docker - Shields.io 2

What if I told you we'll get an error here also?

Nevermind, just fill the following form:

Travis CI-Docker - Docker Build inaccessible

Click on the Copy Badge URL.

Now, go back to the GitHub repository and edit README.md. We'll add the following Docker badge:

Travis CI-Docker - Adding Docker badge

Commit description: "Adding Docker badge"

Travis CI-Docker - Adding Docker badge commit description

Ladies and gentlemen, we are all winners here. So let's get it right!

Previously, we made changes on the pom.xml and created a Dockerfile.

All of those errors occurred because Maven didn't know how to handle the deployment to Docker, and the Dockerfile was absent, so it was impossible to push images.

The time has come to push those changes (Dockerfile and pom.xml) to GitHub using GitHub Desktop:

Travis CI-Docker - Let's get it right

Ladies and gentlemen, who are we now? Winners!

Now, we have two ugly green badges meaning success! Just kidding! That's beautiful.

Travis CI - Docker - Success

To be sure, check your emails. You should have received two emails: one from Travis CI and the other from Docker.

Before moving on to step four, we'll run the app using Docker. Just remember to replace "fanjups" with your own Docker Hub username:

Travis CI - Docker - Running Docker Image

I got the following error: "Invalid or corrupt jarfile /app.jar." It's all about encoding, so I'll add those two properties to the pom.xml.
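The two properties are presumably the standard Maven encoding properties (shown here as a sketch; they go inside the existing properties section of pom.xml):

```xml
<!-- Inside the existing <properties> section of pom.xml -->
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
```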

Now, it's time to commit on GitHub. If you're confused about writing useful commit messages, one of the many guides on writing good Git commit messages will help.

Before running the app again, it's important to list all containers using docker ps.

Then check the "CONTAINER ID," stop (docker stop "CONTAINER ID"), and remove it (docker rm "CONTAINER ID") because it's persisted, as explained by this post on Spring Boot with Docker.

Then, we'll run the app again to ensure that everything works well:

Travis CI - Docker - Running Docker Image Successfully 1

Travis CI - Docker - Running Docker Image Successfully 2

I was so happy when I solved this problem!

The core steps are now over. We've successfully implemented the CI/CD. Now, let's add some useful tools!

Step 4: Add Codecov for Code Coverage

First, make sure you've updated the project on your computer:

Codecov - Pull Origin 2

Click on Pull Origin:

Codecov - Pull Origin 3

Copy the modified files from the GitHub folder and paste them into the IDE workspace. In this case, we'll only copy and paste the pom.xml.

Don't forget to refresh the project on STS 4 and do whatever it takes to include changes.

To better use this tool, we make some changes by adding a unit test.

First, create a new package — com.cicd.cicdappliedtospringbootjavaapp.controller.

Secondly, create a new class HelloController.java and change CicdAppliedToSpringBootJavaAppApplication.java as follows:

The folder looks like:

Codecov - folder

When running the app on your computer, you can skip the entire dockerfile plugin, because the deployment will take place from the GitHub repository via Travis CI.

To do this, just add this option (-Ddockerfile.skip), as explained by Spotify dockerfile-maven-plugin's usage, to your Maven command. Finally, we get mvn spring-boot:run -Ddockerfile.skip.

Now, log in or sign up to Codecov with GitHub.

Click on Account -> Repositories -> Add new repository

Just choose your GitHub repository or follow this link: https://codecov.io/gh/GITHUB_USERNAME/GITHUB_REPOSITORY. Remember to replace GITHUB_REPOSITORY with cicd-applied-to-spring-boot-java-app and GITHUB_USERNAME with yours:

Codecov -  cicd-applied-to-spring-boot-java-app

Last time, we added two Docker environment variables. Now, we'll add a Codecov environment variable, CODECOV_TOKEN, as well. Copy your token and add it to your Travis CI repository.

We made some changes to the pom.xml by adding the jacoco-maven-plugin.
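A hedged sketch of that plugin section (the version is an assumption):

```xml
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.4</version>
  <executions>
    <!-- Attach the JaCoCo agent so coverage is recorded during tests -->
    <execution>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
    </execution>
    <!-- Generate the XML/HTML coverage report after tests run -->
    <execution>
      <id>report</id>
      <phase>test</phase>
      <goals>
        <goal>report</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Codecov then picks up the generated report; on the Travis CI side this is typically a `bash <(curl -s https://codecov.io/bash)` line under after_success.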

Go back to the GitHub repository and edit .travis.yml.

What Time Is It? Codecov Badge Time!

Go to your Codecov repository and click on Settings -> Badge -> Copy (from the Markdown section).

Then, go to your GitHub repository and paste it into README.md.

Finally, push your changes from your computer to GitHub.

Code Coverage: 60%

Codecov - Badge 60%

Perhaps you want to deactivate the coverage check and activate it later. If so, go ahead and create a file named codecov.yml. For now, it's useful to see coverage, so I'll comment out each line with "#".

If you wish to learn more, click here to read the docs.

Now, on to step 5!

Step 5: Use SonarCloud to Write Great Code

To start, log in or sign up with GitHub.

Click on + (Analyze new project or create new organization) -> Analyze new project -> Import another organization -> Choose an organization on GitHub

SonarCloud - Home

Next, make sure SonarCloud has access to your GitHub repository.

Now that we're back to SonarCloud, choose a Key. I suggest using "cicd-applied-to-spring-boot-java-app" as the Key.

Then, click on Continue -> Choose Free plan -> Create Organization -> Analyze new project -> Select your GitHub repository -> Set Up -> With Travis CI -> Provide and encrypt your token -> Copy

Go back to Travis CI and create a SonarCloud environment variable named SONAR_TOKEN. As a value, paste the token you've just copied.

Now, back to SonarCloud and click on Continue -> Edit your .travis.yml file -> Choose Maven as build technology -> Configure your platform -> Configure the scanner -> Copy.

I chose to write SonarCloud script under after_success instead of script because I focus on deployment here. You are free to place it where you want.

Also, create a file named sonar-project.properties and edit as follows: sonar.projectKey=GITHUBUSERNAME_cicd-applied-to-spring-boot-java-app
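As a hedged sketch (the organization key and the placement under after_success are assumptions; the SONAR_TOKEN environment variable created above is picked up automatically), the SonarCloud pieces of .travis.yml could look like:

```yaml
addons:
  sonarcloud:
    organization: "your-sonarcloud-organization-key"

after_success:
  # Runs the SonarCloud analysis after a successful build
  - mvn sonar:sonar
```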

Go back to SonarCloud and click on Finish.

To end, we add a SonarCloud badge into README.md.

Here's the SonarCloud badge already added:

SonarCloud - Badge

Now, on to step 6!

Step 6: Build a Project Site Using the GitHub site-maven-plugin

To get started, open pom.xml on your computer. We add:

1) OAuth token and GitHub servers as properties

2) org.apache.maven.plugins:maven-site-plugin

3) com.github.github:site-maven-plugin

4) org.apache.maven.plugins:maven-project-info-reports-plugin

5) developers section

6) organization section

7) issueManagement section

8) Software Configuration Management (SCM) section

"The important configuration is to allow the OAuth token to be read from an environment variable (excerpt from pom.xml)," as explained by Michael Lanyon's blog. "To create the token, follow these instructions."

Copy the token, then create a new environment variable named GITHUB_OAUTH_TOKEN.

Push pom.xml to GitHub and edit .travis.yml by adding "- mvn site" under after_success.

After pushing all changes, gh-pages branch and project site are created. Each time you push, the site will be updated if necessary.

GitHub Site - gh-pages branch

To see the site, click on environment -> View deployment (under Deployed to github-pages).

Here's a link to my GitHub repo.

GitHub Site - Home

Ok, great. Now, let's move on to step 7!

Step 7: Deploy the App on Heroku Using heroku-maven-plugin

Here we go! Log in or sign up for Heroku.

Heroku - Home

Click on New -> Create new app. To continue, enter an app name (cicd-spring-boot-java-app); cicd-applied-to-spring-boot-java-app is too long for an app name. Choose a region and click Create app.

Next, click Connect to GitHub.

Search the GitHub repository. Once you find it, click Connect.

Check Wait for CI to pass before deploy.

Click Enable Automatic Deploys.

Go to Account settings.

Heroku - Account settings

Copy your API KEY and create a new Travis CI environment variable named HEROKU_API_KEY. This is the last environment variable linked to this project.

Heroku - HEROKU_API_KEY

It's time to edit pom.xml and push to GitHub. We add:

1) full-artifact-name as a property

2) com.heroku.sdk:heroku-maven-plugin
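A hedged sketch of the plugin configuration (the version and the process-type command are assumptions; full-artifact-name is the property mentioned above):

```xml
<plugin>
  <groupId>com.heroku.sdk</groupId>
  <artifactId>heroku-maven-plugin</artifactId>
  <version>2.0.16</version>
  <configuration>
    <!-- The Heroku app name created earlier -->
    <appName>cicd-spring-boot-java-app</appName>
    <includeTarget>false</includeTarget>
    <includes>
      <include>${project.build.directory}/${full-artifact-name}</include>
    </includes>
    <processTypes>
      <!-- Heroku assigns the HTTP port through $PORT -->
      <web>java $JAVA_OPTS -Dserver.port=$PORT -jar target/${full-artifact-name}</web>
    </processTypes>
  </configuration>
</plugin>
```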

Now, we focus on .travis.yml.

  1. To deploy on Docker Hub, we used mvn deploy.

  2. To deploy on Heroku, we'll use mvn heroku:deploy.

  3. In order to deploy on Docker and Heroku, we'll repeat the deploy phase twice, and risk exceeding timeout.

  4. To avoid that, we'll only use mvn heroku:deploy.

We succeeded in deploying on Heroku! Hooray! Now, go to https://cicd-spring-boot-java-app.herokuapp.com/

Heroku - Successful Deployment

Now, it's time for the final step.

Step 8: Manage Topics

We've reached the last step! Topics are helpful for getting a quick overview of a project.

Go back to the GitHub repository, click on Manage topics, and add whatever you want.

By the way, we added an MIT license badge to the README.md and a license section to the pom.xml!

Final GitHub Repository 1

Final GitHub Repository 2

Conclusion

Congratulations! You're all done. To sum things up, you learned how to implement CI/CD on a Spring Boot Java app using Maven, GitHub, Travis CI, Docker, Codecov, SonarCloud, and Heroku. This is a template you are free to use.

If you're confused, please ask in the comments. I also suggest reading the available docs as many times as necessary.

The code is available here. So just fork, it's all yours!

Kubernetes Tutorial: How to deploy Gitea using the Google Kubernetes Engine


This tutorial will go over how to deploy Gitea, an open-source Git hosting service, using Google Kubernetes Engine.

Originally published by Daniel Sanche at https://medium.com

If you’ve read "An Introduction to Kubernetes", you should have a good foundational understanding of the basic pieces that make up Kubernetes. If you’re anything like me, however, you won’t fully understand a concept until you get hands-on with it.

There’s nothing too special about Gitea specifically, but going through the process of deploying an arbitrary open source application to the cloud will give us some practical hands-on experience with using Kubernetes. Plus, at the end you will be left with a great self-hosted service you can use to host your future projects!

Setting Up a Cluster

kubectl and gcloud

The most important tool you use when setting up a Kubernetes environment is the kubectl command. This command allows you to interact with the Kubernetes API. It is used to create, update, and delete Kubernetes resources like pods, deployments, and load balancers.

There is a catch, however: kubectl can’t be used to directly provision the nodes or clusters your pods are run on. This is because Kubernetes was designed to be platform agnostic. Kubernetes doesn’t know or care where it is running, so there is no built in way for it to communicate with your chosen cloud provider to rent nodes on your behalf. Because we are using Google Kubernetes Engine for this tutorial, we will need to use the gcloud command for these tasks.

In brief, gcloud is used to provision the resources listed under “Hardware”, and kubectl is used to manage the resources listed under “Software”.

This tutorial assumes you already have kubectl and gcloud installed on your system. If you’re starting completely fresh, you will first want to check out the first part of the Google Kubernetes Engine Quickstart to sign up for a GCP account, set up a project, enable billing, and install the command line tools.

Once you have your environment ready to go, you can create a cluster by running the following commands:

# create the cluster
# by default, 3 standard nodes are created for our cluster
# creating a cluster can take a few minutes to complete
$ gcloud container clusters create my-cluster --zone us-west1-a

# get the credentials so we can manage it locally through kubectl
$ gcloud container clusters get-credentials my-cluster \
     --zone us-west1-a

We now have a provisioned cluster made up of three n1-standard-1 nodes.

Along with the gcloud command, you can manage your resources through the Google Cloud Console page. After running the previous commands, you should see your cluster appear under the GKE section. You should also see a list of the VMs provisioned as your nodes under the GCE section. Note that although the GCE UI allows you to delete the VMs from this page, they are being managed by your cluster, which will re-create them when it notices they are missing. When you are finished with this tutorial and want to permanently remove the VMs, you can remove everything at once by deleting the cluster itself.


Deploying An App

YAML: Declarative Infrastructure

Now that our cluster is live, it’s time to put it to work. There are two ways to add resources to Kubernetes: interactively through the command line using kubectl create, and declaratively, by defining resources in YAML files.

While interactive deployment with kubectl create is great for experimenting, YAML is the way to go when you want to build something maintainable. By writing all of your Kubernetes resources into YAML files, you can record the entire state of your cluster in a set of easily maintainable files, which can be version-controlled and managed like any other part of your system. In this way, all the instructions needed to host your service can be saved right alongside the code itself.

Adding a Pod

To show a basic example of what a Kubernetes YAML file looks like, let’s add a pod to our cluster. Create a new file called gitea.yaml and fill it with the following text:

apiVersion: v1
kind: Pod
metadata:
  name: gitea-pod
spec:
  containers:
  - name: gitea-container
    image: gitea/gitea:1.4

This pod is fairly basic. Line 2 declares that the type of resource we are creating is a pod; line 1 says that this resource is defined in v1 of the Kubernetes API. Lines 3–8 describe the properties of our pod. In this case, the pod is unoriginally named “gitea-pod”, and it contains a single container we’re calling “gitea-container”.

Line 8 is the most interesting part. This line defines which container image we want to run; in this case, the image tagged 1.4 in the gitea/gitea repository. Kubernetes will tell the built-in container runtime to find the requested container image and pull it down into the pod. Because the default container runtime is Docker, it will find the gitea repository hosted on Docker Hub and pull down the requested image.

Now that we have the YAML written out, we apply it to our cluster:

kubectl apply -f gitea.yaml

This command will cause Kubernetes to read our YAML file, and update any resources in our cluster accordingly. To see the newly created pod in action, you can run kubectl get pods. You should see the pod running.

$ kubectl get pods
NAME        READY     STATUS    RESTARTS   AGE
gitea-pod   1/1       Running   0          9m

Gitea is now running in a pod on the cluster

If you want even more information, you can view the standard output of the container with the following command:

$ kubectl logs -f gitea-pod
Generating /data/ssh/ssh_host_ed25519_key...

Feb 13 21:22:00 syslogd started: BusyBox v1.27.2

Generating /data/ssh/ssh_host_rsa_key...

Generating /data/ssh/ssh_host_dsa_key...

Generating /data/ssh/ssh_host_ecdsa_key...

/etc/ssh/sshd_config line 32: Deprecated option UsePrivilegeSeparation

Feb 13 21:22:01 sshd[12]: Server listening on :: port 22.

Feb 13 21:22:01 sshd[12]: Server listening on 0.0.0.0 port 22.

2018/02/13 21:22:01 [T] AppPath: /app/gitea/gitea

2018/02/13 21:22:01 [T] AppWorkPath: /app/gitea

2018/02/13 21:22:01 [T] Custom path: /data/gitea

2018/02/13 21:22:01 [T] Log path: /data/gitea/log

2018/02/13 21:22:01 [I] Gitea v1.4.0+rc1-1-gf61ef28 built with: bindata, sqlite

2018/02/13 21:22:01 [I] Log Mode: Console(Info)

2018/02/13 21:22:01 [I] XORM Log Mode: Console(Info)

2018/02/13 21:22:01 [I] Cache Service Enabled

2018/02/13 21:22:01 [I] Session Service Enabled

2018/02/13 21:22:01 [I] SQLite3 Supported

2018/02/13 21:22:01 [I] Run Mode: Development

2018/02/13 21:22:01 Serving [::]:3000 with pid 14

2018/02/13 21:22:01 [I] Listen: http://0.0.0.0:3000

As you can see, there is now a server running inside the container on our cluster! Unfortunately, we won’t be able to access it until we start opening up ingress channels (coming in a future post).

Deployment

As explained in "An Introduction to Kubernetes", pods aren’t typically run directly in Kubernetes. Instead, we should define a deployment to manage our pods.

First, let’s delete the pod we already have running:

kubectl delete -f gitea.yaml

This command removes all resources defined in the YAML file from the cluster. We can now modify our YAML file to look like this:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gitea-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitea
  template:
    metadata:
      labels:
        app: gitea
    spec:
      containers:
      - name: gitea-container
        image: gitea/gitea:1.4

This one looks a bit more complicated than the pod we made earlier. That’s because we are really defining two different objects here: the deployment itself (lines 1–9), and the template of the pod it is managing (lines 10–17).

Line 6 is the most important part of our deployment. It defines the number of copies of the pods we want running. In this example, we are only requesting one copy, because Gitea wasn’t designed with multiple pods in mind.

There is one other new concept introduced here: labels and selectors. Labels are simply user-defined key-value pairs associated with Kubernetes resources. Selectors are used to retrieve the resources that match a given label query. In this example, line 13 assigns the label “app=gitea” to all pods created by this deployment. Now, if the deployment ever needs to retrieve the list of all pods that it created (to make sure they are all healthy, for example), it will use the selector defined on lines 8–9. In this way, the deployment can always keep track of which pods it manages by searching for which ones have been assigned the “app=gitea” label.

For the most part, labels are user-defined. In the example above, “app” doesn’t mean anything special to Kubernetes, it is just a way that we may find useful to organize our system. Having said that, there are certain labels that are automatically applied by Kubernetes, containing information about the system.
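To see both kinds side by side, here is a sketch of the label metadata one of our deployment’s pods might carry (the pod-template-hash label is one that Kubernetes adds automatically; the hash value shown is illustrative):

```yaml
# labels on a pod created by gitea-deployment (illustrative)
metadata:
  labels:
    app: gitea                      # user-defined, copied from the pod template
    pod-template-hash: "8944989b8"  # added automatically by Kubernetes
```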

Now that we have created our new YAML file, we can re-apply it to our cluster:

kubectl apply -f gitea.yaml

Now, our pod is managed by a deployment

Now, if we run kubectl get pods, we can see our new pod running, as specified in our deployment:

$ kubectl get pods
NAME                              READY    STATUS    RESTARTS
gitea-deployment-8944989b8-5kmn2  0/1      Running   0

We can see information about the deployment itself:

$ kubectl get deployments
NAME              DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
gitea-deployment  1        1        1           1          4m

To make sure everything’s working, try deleting the pod with kubectl delete pod <pod_name>. You should quickly see a new one pop back up in its place. That’s the magic of deployments!

You may have noticed that the new pod has a weird, partially randomized name. That’s because pods are now created in bulk by the deployment, and are meant to be ephemeral. When wrapped in a deployment, pods should be thought of as cattle rather than pets.

Thanks for reading

If you liked this post, share it with all of your programming buddies!

Follow me on Facebook | Twitter

Further reading about Kubernetes

Docker and Kubernetes: The Complete Guide

Learn DevOps: The Complete Kubernetes Course


Kubernetes Certification Course with Practice Tests

An illustrated guide to Kubernetes Networking

An Introduction to Kubernetes: Pods, Nodes, Containers, and Clusters

An Introduction to the Kubernetes DNS Service

Kubernetes Deployment Tutorial For Beginners

Kubernetes Tutorial - Step by Step Introduction to Basic Concepts