How to Install and Use Docker Community Edition on Ubuntu 18?

In this Docker tutorial, you’ll learn how to install and use Docker Community Edition (CE) on Ubuntu 18.04. You’ll install Docker itself, work with containers and images, and push an image to a Docker Repository.

Introduction

Docker is an application that simplifies the process of managing application processes in containers. Containers let you run your applications in resource-isolated processes. They’re similar to virtual machines, but containers are more portable, more resource-friendly, and more dependent on the host operating system.

In this tutorial, you’ll install and use Docker Community Edition (CE) on Ubuntu 18.04. You’ll install Docker itself, work with containers and images, and push an image to a Docker Repository.

Step 1 — Installing Docker

The Docker installation package available in the official Ubuntu repository may not be the latest version. To ensure we get the latest version, we’ll install Docker from the official Docker repository. To do that, we’ll add a new package source, add the GPG key from Docker to ensure the downloads are valid, and then install the package.

First, update your existing list of packages:

sudo apt update

Next, install a few prerequisite packages which let apt use packages over HTTPS:

sudo apt install apt-transport-https ca-certificates curl software-properties-common

Then add the GPG key for the official Docker repository to your system:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the Docker repository to APT sources:

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"

Next, update the package database with the Docker packages from the newly added repo:

sudo apt update

Make sure you are about to install from the Docker repo instead of the default Ubuntu repo:

apt-cache policy docker-ce

You’ll see output like this, although the version number for Docker may be different:

Output of apt-cache policy docker-ce

docker-ce:
  Installed: (none)
  Candidate: 18.03.1~ce~3-0~ubuntu
  Version table:
     18.03.1~ce~3-0~ubuntu 500
        500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages

Notice that docker-ce is not installed, but the candidate for installation is from the Docker repository for Ubuntu 18.04 (bionic).

Finally, install Docker:

sudo apt install docker-ce

Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it’s running:

sudo systemctl status docker

The output should be similar to the following, showing that the service is active and running:

Output
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2018-07-05 15:08:39 UTC; 2min 55s ago
     Docs: https://docs.docker.com
 Main PID: 10096 (dockerd)
    Tasks: 16
   CGroup: /system.slice/docker.service
           ├─10096 /usr/bin/dockerd -H fd://
           └─10113 docker-containerd --config /var/run/docker/containerd/containerd.toml

Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We’ll explore how to use the docker command later in this tutorial.
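
As a quick check that the client and the daemon can talk to each other, you can ask both for their version information (the exact numbers will differ on your system):

docker version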

Step 2 — Executing the Docker Command Without Sudo (Optional)

By default, the docker command can only be run by the root user or by a user in the docker group, which is automatically created during Docker’s installation process. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you’ll get output like this:

Output
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
See 'docker run --help'.

If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:

sudo usermod -aG docker ${USER}

To apply the new group membership, log out of the server and back in, or type the following:

su - ${USER}

You will be prompted to enter your user’s password to continue.

Confirm that your user is now added to the docker group by typing:

id -nG

Output
sammy sudo docker

If you need to add a user to the docker group that you’re not logged in as, declare that username explicitly using:

sudo usermod -aG docker username

The rest of this article assumes you are running the docker command as a user in the docker group. If you choose not to, please prepend the commands with sudo.

Let’s explore the docker command next.

Step 3 — Using the Docker Command

Using docker consists of passing it a chain of options and commands followed by arguments. The syntax takes this form:

docker [option] [command] [arguments]

To view all available subcommands, type:

docker

As of Docker 18, the complete list of available subcommands includes:

Output
  attach      Attach local standard input, output, and error streams to a running container
  build       Build an image from a Dockerfile
  commit      Create a new image from a container's changes
  cp          Copy files/folders between a container and the local filesystem
  create      Create a new container
  diff        Inspect changes to files or directories on a container's filesystem
  events      Get real time events from the server
  exec        Run a command in a running container
  export      Export a container's filesystem as a tar archive
  history     Show the history of an image
  images      List images
  import      Import the contents from a tarball to create a filesystem image
  info        Display system-wide information
  inspect     Return low-level information on Docker objects
  kill        Kill one or more running containers
  load        Load an image from a tar archive or STDIN
  login       Log in to a Docker registry
  logout      Log out from a Docker registry
  logs        Fetch the logs of a container
  pause       Pause all processes within one or more containers
  port        List port mappings or a specific mapping for the container
  ps          List containers
  pull        Pull an image or a repository from a registry
  push        Push an image or a repository to a registry
  rename      Rename a container
  restart     Restart one or more containers
  rm          Remove one or more containers
  rmi         Remove one or more images
  run         Run a command in a new container
  save        Save one or more images to a tar archive (streamed to STDOUT by default)
  search      Search the Docker Hub for images
  start       Start one or more stopped containers
  stats       Display a live stream of container(s) resource usage statistics
  stop        Stop one or more running containers
  tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
  top         Display the running processes of a container
  unpause     Unpause all processes within one or more containers
  update      Update configuration of one or more containers
  version     Show the Docker version information
  wait        Block until one or more containers stop, then print their exit codes

To view the options available to a specific command, type:

docker docker-subcommand --help
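
For example, to see the options available for the ps subcommand, you would type:

docker ps --help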

To view system-wide information about Docker, use:

docker info

Let’s explore some of these commands. We’ll start by working with images.

Step 4 — Working with Docker Images

Docker containers are built from Docker images. By default, Docker pulls these images from Docker Hub, a Docker registry managed by Docker, the company behind the Docker project. Anyone can host their Docker images on Docker Hub, so most applications and Linux distributions you’ll need will have images hosted there.

To check whether you can access and download images from Docker Hub, type:

docker run hello-world

The output will indicate that Docker is working correctly:

Output
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9bb5a5d4561a: Pull complete
Digest: sha256:3e1764d0f546ceac4565547df2ac4907fe46f007ea229fd7ef2718514bcec35d
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.
...

Docker was initially unable to find the hello-world image locally, so it downloaded the image from Docker Hub, which is the default repository. Once the image downloaded, Docker created a container from the image and the application within the container executed, displaying the message.

You can search for images available on Docker Hub by using the docker command with the search subcommand. For example, to search for the Ubuntu image, type:

docker search ubuntu

The script will crawl Docker Hub and return a listing of all images whose name matches the search string. In this case, the output will be similar to this:

Output
NAME                                                      DESCRIPTION                                     STARS               OFFICIAL            AUTOMATED
ubuntu                                                    Ubuntu is a Debian-based Linux operating sys…   7917                [OK]
dorowu/ubuntu-desktop-lxde-vnc                            Ubuntu with openssh-server and NoVNC            193                                     [OK]
rastasheep/ubuntu-sshd                                    Dockerized SSH service, built on top of offi…   156                                     [OK]
ansible/ubuntu14.04-ansible                               Ubuntu 14.04 LTS with ansible                   93                                      [OK]
ubuntu-upstart                                            Upstart is an event-based replacement for th…   87                  [OK]
neurodebian                                               NeuroDebian provides neuroscience research s…   50                  [OK]
ubuntu-debootstrap                                        debootstrap --variant=minbase --components=m…   38                  [OK]
1and1internet/ubuntu-16-nginx-php-phpmyadmin-mysql-5      ubuntu-16-nginx-php-phpmyadmin-mysql-5          36                                      [OK]
nuagebec/ubuntu                                           Simple always updated Ubuntu docker images w…   23                                      [OK]
tutum/ubuntu                                              Simple Ubuntu docker images with SSH access     18
i386/ubuntu                                               Ubuntu is a Debian-based Linux operating sys…   13
ppc64le/ubuntu                                            Ubuntu is a Debian-based Linux operating sys…   12
1and1internet/ubuntu-16-apache-php-7.0                    ubuntu-16-apache-php-7.0                        10                                      [OK]
1and1internet/ubuntu-16-nginx-php-phpmyadmin-mariadb-10   ubuntu-16-nginx-php-phpmyadmin-mariadb-10       6                                       [OK]
eclipse/ubuntu_jdk8                                       Ubuntu, JDK8, Maven 3, git, curl, nmap, mc, …   6                                       [OK]
codenvy/ubuntu_jdk8                                       Ubuntu, JDK8, Maven 3, git, curl, nmap, mc, …   4                                       [OK]
darksheer/ubuntu                                          Base Ubuntu Image -- Updated hourly             4                                       [OK]
1and1internet/ubuntu-16-apache                            ubuntu-16-apache                                3                                       [OK]
1and1internet/ubuntu-16-nginx-php-5.6-wordpress-4         ubuntu-16-nginx-php-5.6-wordpress-4             3                                       [OK]
1and1internet/ubuntu-16-sshd                              ubuntu-16-sshd                                  1                                       [OK]
pivotaldata/ubuntu                                        A quick freshening-up of the base Ubuntu doc…   1
1and1internet/ubuntu-16-healthcheck                       ubuntu-16-healthcheck                           0                                       [OK]
pivotaldata/ubuntu-gpdb-dev                               Ubuntu images for GPDB development              0
smartentry/ubuntu                                         ubuntu with smartentry                          0                                       [OK]
ossobv/ubuntu
...

In the OFFICIAL column, OK indicates an image built and supported by the company behind the project. Once you’ve identified the image that you would like to use, you can download it to your computer using the pull subcommand.

Execute the following command to download the official ubuntu image to your computer:

docker pull ubuntu

You’ll see the following output:

Output
Using default tag: latest
latest: Pulling from library/ubuntu
6b98dfc16071: Pull complete
4001a1209541: Pull complete
6319fc68c576: Pull complete
b24603670dc3: Pull complete
97f170c87c6f: Pull complete
Digest: sha256:5f4bdc3467537cbbe563e80db2c3ec95d548a9145d64453b06939c4592d67b6d
Status: Downloaded newer image for ubuntu:latest

After an image has been downloaded, you can then run a container using the downloaded image with the run subcommand. As you saw with the hello-world example, if an image has not been downloaded when docker is executed with the run subcommand, the Docker client will first download the image, then run a container using it.

To see the images that have been downloaded to your computer, type:

docker images

The output should look similar to the following:

Output
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              latest              113a43faa138        4 weeks ago         81.2MB
hello-world         latest              e38bc07ac18e        2 months ago        1.85kB

As you’ll see later in this tutorial, images that you use to run containers can be modified and used to generate new images, which may then be uploaded (pushed is the technical term) to Docker Hub or other Docker registries.

Let’s look at how to run containers in more detail.

Step 5 — Running a Docker Container

The hello-world container you ran in the previous step is an example of a container that runs and exits after emitting a test message. Containers can be much more useful than that, and they can be interactive. After all, they are similar to virtual machines, only more resource-friendly.

As an example, let’s run a container using the latest image of Ubuntu. The combination of the -i and -t switches gives you interactive shell access into the container:

docker run -it ubuntu

Your command prompt should change to reflect the fact that you’re now working inside the container and should take this form:

root@d9b100f2f636:/#

Note the container id in the command prompt. In this example, it is d9b100f2f636. You’ll need that container ID later to identify the container when you want to remove it.

Now you can run any command inside the container. For example, let’s update the package database inside the container. You don’t need to prefix any command with sudo, because you’re operating inside the container as the root user:

apt update

Then install any application in it. Let’s install Node.js:

apt install nodejs

This installs Node.js in the container from the official Ubuntu repository. When the installation finishes, verify that Node.js is installed:

node -v

You’ll see the version number displayed in your terminal:

Output
v8.10.0

Any changes you make inside the container only apply to that container.

To exit the container, type exit at the prompt.
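
If you then start a second container from the same ubuntu image, Node.js will not be installed in it, because every new container starts from the unmodified image. For example, running the following from the host should fail with a message that node cannot be found:

docker run ubuntu node -v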

Let’s look at managing the containers on our system next.

Step 6 — Managing Docker Containers

After using Docker for a while, you’ll have many active (running) and inactive containers on your computer. To view the active ones, use:

docker ps

You will see output similar to the following:

Output
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES

In this tutorial, you started two containers: one from the hello-world image and another from the ubuntu image. Both containers are no longer running, but they still exist on your system.

To view all containers, active and inactive, run docker ps with the -a switch:

docker ps -a

You’ll see output similar to this:

Output
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
d9b100f2f636        ubuntu              "/bin/bash"         About an hour ago   Exited (0) 8 minutes ago                           sharp_volhard
01c950718166        hello-world         "/hello"            About an hour ago   Exited (0) About an hour ago                       festive_williams

To view the latest container you created, pass it the -l switch:

docker ps -l

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
d9b100f2f636        ubuntu              "/bin/bash"         About an hour ago   Exited (0) 10 minutes ago                       sharp_volhard

To start a stopped container, use docker start, followed by the container ID or the container’s name. Let’s start the Ubuntu-based container with the ID of d9b100f2f636:

docker start d9b100f2f636

The container will start, and you can use docker ps to see its status:

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
d9b100f2f636        ubuntu              "/bin/bash"         About an hour ago   Up 8 seconds                            sharp_volhard

To stop a running container, use docker stop, followed by the container ID or name. This time, we’ll use the name that Docker assigned the container, which is sharp_volhard:

docker stop sharp_volhard

Once you’ve decided you no longer need a container, remove it with the docker rm command, again using either the container ID or the name. Use the docker ps -a command to find the container ID or name for the container associated with the hello-world image and remove it.

docker rm festive_williams

You can start a new container and give it a name using the --name switch. You can also use the --rm switch to create a container that removes itself when it’s stopped. See the docker run help command for more information on these options and others.
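
For instance, the following would start an interactive Ubuntu container named quick-test (the name here is just an example) and remove it automatically as soon as you exit its shell:

docker run --rm -it --name quick-test ubuntu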

Containers can be turned into images which you can use to build new containers. Let’s look at how that works.

Step 7 — Committing Changes in a Container to a Docker Image

When you start up a Docker image, you can create, modify, and delete files just like you can with a virtual machine. The changes that you make will only apply to that container. You can start and stop it, but once you destroy it with the docker rm command, the changes will be lost for good.

This section shows you how to save the state of a container as a new Docker image.

After installing Node.js inside the Ubuntu container, you now have a container running off an image, but the container is different from the image you used to create it. You might want to reuse this Node.js container as the basis for new images later.

To do so, commit the changes to a new Docker image instance using the following command:

docker commit -m "What you did to the image" -a "Author Name" container_id repository/new_image_name

The -m switch is for the commit message that helps you and others know what changes you made, while -a is used to specify the author. The container_id is the one you noted earlier in the tutorial when you started the interactive Docker session. Unless you created additional repositories on Docker Hub, the repository is usually your Docker Hub username.

For example, for the user sammy, with the container ID of d9b100f2f636, the command would be:

docker commit -m "added Node.js" -a "sammy" d9b100f2f636 sammy/ubuntu-nodejs

When you commit an image, the new image is saved locally on your computer. Later in this tutorial, you’ll learn how to push an image to a Docker registry like Docker Hub so others can access it.

Listing the Docker images again will show the new image, as well as the old one that it was derived from:

docker images

You’ll see output like this:

Output
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
sammy/ubuntu-nodejs   latest              7c1f35226ca6        7 seconds ago       179MB
ubuntu                latest              113a43faa138        4 weeks ago         81.2MB
hello-world           latest              e38bc07ac18e        2 months ago        1.85kB

In this example, ubuntu-nodejs is the new image, which was derived from the existing ubuntu image from Docker Hub. The size difference reflects the changes that were made; in this case, the change was that Node.js was installed. So the next time you need to run a container using Ubuntu with Node.js pre-installed, you can just use the new image.

You can also build images from a Dockerfile, which lets you automate the installation of software in a new image. However, that’s outside the scope of this tutorial.

Now let’s share the new image with others so they can create containers from it.

Step 8 — Pushing Docker Images to a Docker Repository

The next logical step after creating a new image from an existing image is to share it with a select few of your friends, the whole world on Docker Hub, or another Docker registry that you have access to. To push an image to Docker Hub or any other Docker registry, you must have an account there.

To push your image, first log into Docker Hub.

docker login -u docker-registry-username

You’ll be prompted to authenticate using your Docker Hub password. If you specified the correct password, authentication should succeed.

Note: If your Docker registry username is different from the local username you used to create the image, you will have to tag your image with your registry username. For the example given in the last step, you would type:

docker tag sammy/ubuntu-nodejs docker-registry-username/ubuntu-nodejs

Then you may push your own image using:

docker push docker-registry-username/docker-image-name

To push the ubuntu-nodejs image to the sammy repository, the command would be:

docker push sammy/ubuntu-nodejs

The process may take some time to complete as it uploads the images, but when completed, the output will look like this:

Output
The push refers to a repository [docker.io/sammy/ubuntu-nodejs]
e3fbbfb44187: Pushed
5f70bf18a086: Pushed
a3b5c80a4eba: Pushed
7f18b442972b: Pushed
3ce512daaf78: Pushed
7aae4540b42d: Pushed

...

After pushing an image to a registry, it should be listed on your account’s dashboard.

If a push attempt results in an error of this sort, then you likely did not log in:

Output
The push refers to a repository [docker.io/sammy/ubuntu-nodejs]
e3fbbfb44187: Preparing
5f70bf18a086: Preparing
a3b5c80a4eba: Preparing
7f18b442972b: Preparing
3ce512daaf78: Preparing
7aae4540b42d: Waiting
unauthorized: authentication required

Log in with docker login and repeat the push attempt. Then verify that it exists on your Docker Hub repository page.

You can now use docker pull sammy/ubuntu-nodejs to pull the image to a new machine and use it to run a new container.
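
On that new machine, the workflow might look like this, assuming the sammy/ubuntu-nodejs image from the earlier example:

docker pull sammy/ubuntu-nodejs
docker run -it sammy/ubuntu-nodejs

Inside the new container, node -v should report the version you installed earlier.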

Conclusion

In this tutorial you installed Docker, worked with images and containers, and pushed a modified image to Docker Hub.

Originally published by Brian Hogan at https://www.digitalocean.com

How to Install Docker Compose on Ubuntu 18.04?

In this Docker Compose tutorial, we’ll show you how to install the latest version of Docker Compose on Ubuntu 18.04 to help you manage multi-container applications. You’ll test your installation by running a Hello World example, and then remove the test image and container.

Introduction

Docker is a great tool for automating the deployment of Linux applications inside software containers, but to take full advantage of its potential each component of an application should run in its own individual container. For complex applications with a lot of components, orchestrating all the containers to start up, communicate, and shut down together can quickly become unwieldy.

The Docker community came up with a popular solution called Fig, which allowed you to use a single YAML file to orchestrate all your Docker containers and configurations. This became so popular that the Docker team decided to make Docker Compose based on the Fig source, which is now deprecated. Docker Compose makes it easier for users to orchestrate the processes of Docker containers, including starting up, shutting down, and setting up intra-container linking and volumes.

In this tutorial, we’ll show you how to install the latest version of Docker Compose to help you manage multi-container applications.

Step 1 — Installing Docker Compose

Although we can install Docker Compose from the official Ubuntu repositories, it is several minor versions behind the latest release, so we’ll install Docker Compose from Docker’s GitHub repository. The command below is slightly different than the one you’ll find on the Releases page. By using the -o flag to specify the output file first rather than redirecting the output, this syntax avoids a permission denied error, which would otherwise occur because the shell handles the redirection without sudo’s elevated privileges.

We’ll check the current release and if necessary, update it in the command below:

sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

Next we’ll set the permissions:

sudo chmod +x /usr/local/bin/docker-compose

Then we’ll verify that the installation was successful by checking the version:

docker-compose --version

This will print out the version we installed:

Output
docker-compose version 1.21.2, build a133471

Now that we have Docker Compose installed, we’re ready to run a “Hello World” example.

Step 2 — Running a Container with Docker Compose

The public Docker registry, Docker Hub, includes a Hello World image for demonstration and testing. It illustrates the minimal configuration required to run a container using Docker Compose: a YAML file that calls a single image.

First, we’ll create a directory for the YAML file and move into it:

mkdir hello-world
cd hello-world

Then, we’ll create the YAML file:

nano docker-compose.yml

Put the following contents into the file, save the file, and exit the text editor:

docker-compose.yml
my-test:
 image: hello-world

The first line in the YAML file is used as part of the container name. The second line specifies which image to use to create the container. When we run the command docker-compose up, it will look for a local image by the name we specified, hello-world.
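
If you’d like to double-check the file for syntax mistakes before running it, Docker Compose can parse and validate it for you. It should print the resolved configuration back, or an error describing what is wrong:

docker-compose config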

We can look manually at images on our system with the docker images command:

docker images

When there are no local images at all, only the column headings display:

Output
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

Now, while still in the ~/hello-world directory, we’ll execute the following command:

docker-compose up

The first time we run the command, if there’s no local image named hello-world, Docker Compose will pull it from the Docker Hub public repository:

Output
Pulling my-test (hello-world:latest)...
latest: Pulling from library/hello-world
c04b14da8d14: Pull complete
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Downloaded newer image for hello-world:latest
. . .

After pulling the image, docker-compose creates a container, attaches, and runs the hello program, which in turn confirms that the installation appears to be working:

Output
. . .
Creating helloworld_my-test_1...
Attaching to helloworld_my-test_1
my-test_1 |
my-test_1 | Hello from Docker.
my-test_1 | This message shows that your installation appears to be working correctly.
my-test_1 |
. . .

Then it prints an explanation of what it did:

Output of docker-compose up
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.

Docker containers only run as long as the command is active, so once hello finished running, the container stopped. Consequently, when we look at active processes, the column headers will appear, but the hello-world container won’t be listed because it’s not running.

docker ps

Output
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES

We can see the container information, which we’ll need in the next step, by using the -a flag which shows all containers, not just the active ones:

docker ps -a

Output
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
06069fd5ca23        hello-world         "/hello"            35 minutes ago      Exited (0) 35 minutes ago                       drunk_payne

This displays the information we’ll need to remove the container when we’re done with it.

Step 3 — Removing the Image (Optional)

To avoid using unnecessary disk space, we’ll remove the local image. To do so, we’ll need to delete all the containers that reference the image using the docker rm command, followed by either the CONTAINER ID or the NAME. Below, we’re using the CONTAINER ID from the docker ps -a command we just ran. Be sure to substitute the ID of your container:

docker rm 06069fd5ca23

Once all containers that reference the image have been removed, we can remove the image:

docker rmi hello-world
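
To confirm that both are gone, you can list containers and images once more; each listing should now show only the column headings:

docker ps -a
docker images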

Conclusion

We’ve now installed Docker Compose, tested our installation by running a Hello World example, and removed the test image and container.

Originally published by Melissa Anderson at https://www.digitalocean.com

How to Use Ansible to Install and Set Up Docker on Ubuntu 18.04

This Docker tutorial explains how to use Ansible to automate the steps contained in our guide on How To Install and Use Docker on Ubuntu 18.04, so that Docker is installed and set up on a remote server without manual intervention. The playbook installs aptitude (preferred by Ansible as an alternative to the apt package manager), adds the Docker GPG APT key and the official Docker repository to the apt sources, installs Docker, and installs the Python Docker module via pip.

Introduction

Server automation now plays an essential role in systems administration, due to the disposable nature of modern application environments. Configuration management tools such as Ansible are typically used to streamline the process of automating server setup by establishing standard procedures for new servers while also reducing human error associated with manual setups.

Ansible offers a simple architecture that doesn’t require special software to be installed on nodes. It also provides a robust set of features and built-in modules which facilitate writing automation scripts.

This guide explains how to use Ansible to automate the steps contained in our guide on How To Install and Use Docker on Ubuntu 18.04. Docker is an application that simplifies the process of managing containers, resource-isolated processes that behave in a similar way to virtual machines, but are more portable, more resource-friendly, and depend more heavily on the host operating system.

What Does this Playbook Do?

This Ansible playbook provides an alternative to manually running through the procedure outlined in our guide on How To Install and Use Docker on Ubuntu 18.04.

Running this playbook will perform the following actions on your Ansible hosts:

  1. Install aptitude, which is preferred by Ansible as an alternative to the apt package manager.
  2. Install the required system packages.
  3. Install the Docker GPG APT key.
  4. Add the official Docker repository to the apt sources.
  5. Install Docker.
  6. Install the Python Docker module via pip.
  7. Pull the default image specified by default_container_image from Docker Hub.
  8. Create the number of containers defined by the create_containers variable, each using the image defined by default_container_image, and execute the command defined in default_container_command in each new container.

Once the playbook has finished running, you will have a number of containers created based on the options you defined within your configuration variables.

How to Use this Playbook

The first thing we need to do is obtain the Docker playbook and its dependencies from the do-community/ansible-playbooks repository. We need to clone this repository to a local folder inside the Ansible Control Node.

In case you have cloned this repository before while following a different guide, access your existing ansible-playbooks copy and run a git pull command to make sure you have updated contents:

cd ~/ansible-playbooks
git pull

If this is your first time using the do-community/ansible-playbooks repository, you should start by cloning the repository to your home folder with:

cd ~
git clone https://github.com/do-community/ansible-playbooks.git
cd ansible-playbooks

The files we’re interested in are located inside the docker_ubuntu1804 folder, which has the following structure:

docker_ubuntu1804
├── vars
│   └── default.yml
├── playbook.yml
└── readme.md

Here is what each of these files is:

  • vars/default.yml: Variable file for customizing playbook settings.
  • playbook.yml: The playbook file, containing the tasks to be executed on the remote server(s).
  • readme.md: A text file containing information about this playbook.

We’ll edit the playbook’s variable file to customize our Docker setup. Access the docker_ubuntu1804 directory and open the vars/default.yml file using your command line editor of choice:

cd docker_ubuntu1804
nano vars/default.yml

This file contains a few variables that require your attention:

---
create_containers: 4
default_container_name: docker
default_container_image: ubuntu
default_container_command: sleep 1d

The following list contains a brief explanation of each of these variables and how you might want to change them:

  • create_containers: The number of containers to create.
  • default_container_name: Default container name.
  • default_container_image: Default Docker image to be used when creating containers.
  • default_container_command: Default command to run on new containers.

Once you’re done updating the variables inside vars/default.yml, save and close this file. If you used nano, do so by pressing CTRL + X, Y, then ENTER.

You’re now ready to run this playbook on one or more servers. Most playbooks are configured to be executed on every server in your inventory, by default. We can use the -l flag to make sure that only a subset of servers, or a single server, is affected by the playbook. We can also use the -u flag to specify which user on the remote server we’re using to connect and execute the playbook commands on the remote hosts.
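
If you’d like to make sure the playbook file has no syntax errors before running it against any servers, you can ask Ansible to validate it first:

ansible-playbook playbook.yml --syntax-check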

To execute the playbook only on server1, connecting as root, you can use the following command:

ansible-playbook playbook.yml -l server1 -u root

You will get output similar to this:

Output
...
TASK [Add Docker GPG apt Key] ********************************************************************************************************************
changed: [server1]

TASK [Add Docker Repository] *********************************************************************************************************************
changed: [server1]

TASK [Update apt and install docker-ce] **********************************************************************************************************
changed: [server1]

TASK [Install Docker Module for Python] **********************************************************************************************************
changed: [server1]

TASK [Pull default Docker image] *****************************************************************************************************************
changed: [server1]

TASK [Create default containers] *****************************************************************************************************************
changed: [server1] => (item=1)
changed: [server1] => (item=2)
changed: [server1] => (item=3)
changed: [server1] => (item=4)

PLAY RECAP ***************************************************************************************************************************************
server1                  : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

When the playbook is finished running, log in via SSH to the server provisioned by Ansible and run docker ps -a to check if the containers were successfully created:

sudo docker ps -a

You should see output similar to this:

Output
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
a3fe9bfb89cf        ubuntu              "sleep 1d"          5 minutes ago       Created                                 docker4
8799c16cde1e        ubuntu              "sleep 1d"          5 minutes ago       Created                                 docker3
ad0c2123b183        ubuntu              "sleep 1d"          5 minutes ago       Created                                 docker2
b9350916ffd8        ubuntu              "sleep 1d"          5 minutes ago       Created                                 docker1

This means the containers defined in the playbook were created successfully. Since this was the last task in the playbook, it also confirms that the playbook was fully executed on this server.

The Playbook Contents

You can find the Docker server setup featured in this tutorial in the docker_ubuntu1804 folder inside the DigitalOcean Community Playbooks repository. To copy or download the script contents directly, click the Raw button towards the top of each script.

The full contents of the playbook as well as its associated files are also included here for your convenience.

vars/default.yml

The default.yml variable file contains values that will be used when setting up Docker on your server.

---
create_containers: 4
default_container_name: docker
default_container_image: ubuntu
default_container_command: sleep 1d

playbook.yml

The playbook.yml file is where all tasks from this setup are defined. It starts by defining the group of servers that should be the target of this setup (all), after which it uses become: true to define that tasks should be executed with privilege escalation (sudo) by default. Then, it includes the vars/default.yml variable file to load configuration options.

---
- hosts: all
  become: true
  vars_files:
    - vars/default.yml

  tasks:
    - name: Install aptitude using apt
      apt: name=aptitude state=latest update_cache=yes force_apt_get=yes

    - name: Install required system packages
      apt: name={{ item }} state=latest update_cache=yes
      loop: [ 'apt-transport-https', 'ca-certificates', 'curl', 'software-properties-common', 'python3-pip', 'virtualenv', 'python3-setuptools']

    - name: Add Docker GPG apt Key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present

    - name: Add Docker Repository
      apt_repository:
        repo: deb https://download.docker.com/linux/ubuntu bionic stable
        state: present

    - name: Update apt and install docker-ce
      apt: update_cache=yes name=docker-ce state=latest

    - name: Install Docker Module for Python
      pip:
        name: docker

    - name: Pull default Docker image
      docker_image:
        name: "{{ default_container_image }}"
        source: pull

    # Creates the number of containers defined by the variable create_containers, using values from vars file
    - name: Create default containers
      docker_container:
        name: "{{ default_container_name }}{{ item }}"
        image: "{{ default_container_image }}"
        command: "{{ default_container_command }}"
        state: present
      with_sequence: count={{ create_containers }}

Feel free to modify this playbook to best suit your individual needs within your own workflow. For example, you could use the docker_image module to push images to Docker Hub or the docker_container module to set up container networks.

Conclusion

Automating your infrastructure setup can not only save you time, but it also helps to ensure that your servers will follow a standard configuration that can be customized to your needs. With the distributed nature of modern applications and the need for consistency between different staging environments, automation like this has become a central component in many teams’ development processes.

In this guide, we demonstrated how to use Ansible to automate the process of installing and setting up Docker on a remote server. Because each individual typically has different needs when working with containers, we encourage you to check out the official Ansible documentation for more information and use cases of the docker_container Ansible module.

Originally published by Erika Heidi at https://www.digitalocean.com