Docker Compose Tutorial for Beginners

1. Overview

When using Docker extensively, the management of several different containers quickly becomes cumbersome.

Docker Compose is a tool that helps us overcome this problem and easily handle multiple containers at once.

In this tutorial, we’ll have a look at its main features and powerful mechanisms.

2. The YAML Configuration Explained

In short, Docker Compose works by applying many rules declared within a single docker-compose.yml configuration file.

These YAML rules, both human-readable and machine-optimized, provide us with an effective way to snapshot the whole project from ten thousand feet in just a few lines.

Almost every rule replaces a specific Docker command, so that in the end we just need to run:

docker-compose up

We can get dozens of configurations applied by Compose under the hood. This will save us the hassle of scripting them with Bash or something else.

In this file, we need to specify the version of the Compose file format, at least one service, and optionally volumes and networks:

version: "3.7"
services:
  ...
volumes:
  ...
networks:
  ...

Let’s see what these elements actually are.

2.1. Services

First of all, services refer to containers’ configuration.

For example, let’s take a dockerized web application consisting of a front end, a back end, and a database. We’d likely split those components into three images and define them as three different services in the configuration:

services:
  frontend:
    image: my-vue-app
    ...
  backend:
    image: my-springboot-app
    ...
  db:
    image: postgres
    ...

There are multiple settings that we can apply to services, and we’ll explore them deeply later on.

2.2. Volumes & Networks

Volumes, on the other hand, are physical areas of disk space shared between the host and a container, or even between containers. In other words, a volume is a shared directory in the host, visible from some or all containers.

Similarly, networks define the communication rules between containers, and between a container and the host. Common network zones will make containers’ services discoverable by each other, while private zones will segregate them in virtual sandboxes.

Again, we’ll learn more about them in the next section.

3. Dissecting a Service

Let’s now begin to inspect the main settings of a service.

3.1. Pulling an Image

Sometimes, the image we need for our service has already been published (by us or by others) in Docker Hub, or another Docker Registry.

If that’s the case, then we refer to it with the image attribute, by specifying the image name and tag:

services: 
  my-service:
    image: ubuntu:latest
    ...

3.2. Building an Image

Instead, we might need to build an image from the source code by reading its Dockerfile.

This time, we’ll use the build keyword, passing the path to the Dockerfile as the value:

services: 
  my-custom-app:
    build: /path/to/dockerfile/
    ...

We can also use a URL instead of a path:

services: 
  my-custom-app:
    build: https://github.com/my-company/my-project.git
    ...

Additionally, we can specify an image name in conjunction with the build attribute, which will name the image once created, making it available to be used by other services:

services: 
  my-custom-app:
    build: https://github.com/my-company/my-project.git
    image: my-project-image
    ...

3.3. Configuring the Networking

Docker containers communicate with each other over networks created, implicitly or through configuration, by Docker Compose. A service can communicate with another service on the same network by simply referencing it by container name and port (for example network-example-service:80), provided that we’ve made the port accessible through the expose keyword:

services:
  network-example-service:
    image: karthequian/helloworld:latest
    expose:
      - "80"

In this case, by the way, it would also work without the expose keyword, because the EXPOSE instruction is already present in the image’s Dockerfile.
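
For reference, here’s a minimal sketch of what that looks like inside a Dockerfile (the base image and port below are just illustrative, not taken from the helloworld image):

# Hypothetical Dockerfile for a small web image
FROM nginx:alpine

# Document that the container listens on port 80
EXPOSE 80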

To reach a container from the host, its ports must be published declaratively through the ports keyword, which also lets us map a container port to a different port on the host:

services:
  network-example-service:
    image: karthequian/helloworld:latest
    ports:
      - "80:80"
    ...
  my-custom-app:
    image: myapp:latest
    ports:
      - "8080:3000"
    ...
  my-custom-app-replica:
    image: myapp:latest
    ports:
      - "8081:3000"
    ...

Port 80 will now be visible from the host, while port 3000 of the other two containers will be available on ports 8080 and 8081 in the host. This powerful mechanism allows us to run different containers exposing the same ports without collisions.

Finally, we can define additional virtual networks to segregate our containers:

services:
  network-example-service:
    image: karthequian/helloworld:latest
    networks: 
      - my-shared-network
    ...
  another-service-in-the-same-network:
    image: alpine:latest
    networks: 
      - my-shared-network
    ...
  another-service-in-its-own-network:
    image: alpine:latest
    networks: 
      - my-private-network
    ...
networks:
  my-shared-network: {}
  my-private-network: {}

In this last example, we can see that another-service-in-the-same-network will be able to ping and to reach port 80 of network-example-service, while another-service-in-its-own-network won’t.
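
As a quick sketch of how we might verify that (using the service names from the example above, and assuming network-example-service is already up), docker-compose run starts a one-off container attached to the service’s networks:

docker-compose up -d network-example-service

# Succeeds: both services share my-shared-network, so the name resolves
docker-compose run --rm another-service-in-the-same-network ping -c 1 network-example-service

# Fails: my-private-network cannot resolve the other service's name
docker-compose run --rm another-service-in-its-own-network ping -c 1 network-example-service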

3.4. Setting Up the Volumes

There are three types of volumes: anonymous, named, and host ones.

Docker manages both anonymous and named volumes, automatically mounting them in self-generated directories in the host. While anonymous volumes were useful with older versions of Docker (pre 1.9), named ones are the suggested way to go nowadays. Host volumes also allow us to specify an existing folder in the host.

We can configure host volumes at the service level and named volumes at the outer level of the configuration, in order to make the latter visible to other containers, and not only to the one they belong to:

services:
  volumes-example-service:
    image: alpine:latest
    volumes: 
      - my-named-global-volume:/my-volumes/named-global-volume
      - /tmp:/my-volumes/host-volume
      - /home:/my-volumes/readonly-host-volume:ro
    ...
  another-volumes-example-service:
    image: alpine:latest
    volumes:
      - my-named-global-volume:/another-path/the-same-named-global-volume
    ...
volumes:
  my-named-global-volume: 

Here, both containers will have read/write access to the my-named-global-volume shared folder, no matter the different paths they’ve mapped it to. The two host volumes, instead, will be available only to volumes-example-service.

The /tmp folder of the host’s file system is mapped to the /my-volumes/host-volume folder of the container.

This portion of the file system is writeable, which means that the container can not only read but also write (and delete) files in the host machine.

We can mount a volume in read-only mode by appending :ro to the rule, like for the /home folder (we don’t want a Docker container erasing our users by mistake).

3.5. Declaring the Dependencies

Often, we need to create a dependency chain between our services, so that some services get loaded before (and unloaded after) other ones. We can achieve this result through the depends_on keyword:

services:
  kafka:
    image: wurstmeister/kafka:2.11-0.11.0.3
    depends_on:
      - zookeeper
    ...
  zookeeper:
    image: wurstmeister/zookeeper
    ...

We should be aware, however, that Compose will not wait for the zookeeper service to finish loading before starting the kafka service: it will simply wait for it to start. If we need a service to be fully loaded before starting another service, we need to get deeper control of startup and shutdown order in Compose.
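
One common way to get that deeper control is to combine depends_on with a healthcheck. Here’s a rough sketch; this long depends_on syntax is supported by the Compose file format 2.1 and by the newer Compose specification (not by the 3.x format used with Swarm), and the healthcheck command below assumes the image ships with nc:

services:
  kafka:
    image: wurstmeister/kafka:2.11-0.11.0.3
    depends_on:
      zookeeper:
        condition: service_healthy
  zookeeper:
    image: wurstmeister/zookeeper
    healthcheck:
      # consider ZooKeeper ready once its client port answers
      test: ["CMD-SHELL", "nc -z localhost 2181"]
      interval: 10s
      timeout: 5s
      retries: 5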

4. Managing Environment Variables

Working with environment variables is easy in Compose. We can define static environment variables, and also define dynamic variables with the ${} notation:

services:
  database: 
    image: "postgres:${POSTGRES_VERSION}"
    environment:
      DB: mydb
      USER: "${USER}"

There are different methods to provide those values to Compose.

For example, one is setting them in a .env file in the same directory, structured like a .properties file, key=value:

POSTGRES_VERSION=alpine
USER=foo

Otherwise, we can set them in the OS before calling the command:

export POSTGRES_VERSION=alpine
export USER=foo
docker-compose up

Finally, we might find it handy to use a simple one-liner in the shell:

POSTGRES_VERSION=alpine USER=foo docker-compose up

We can mix these approaches, but let’s keep in mind that Compose uses the following priority order, with higher-priority sources overriding lower-priority ones (a short illustration follows the list):

  1. Compose file
  2. Shell environment variables
  3. Environment file
  4. Dockerfile
  5. Variable not defined
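
As a quick illustration of this order (assuming the .env file above sets POSTGRES_VERSION=alpine), a value exported in the shell wins over the one in the environment file; docker-compose config prints the resolved configuration, so we can check which value was picked:

POSTGRES_VERSION=13-alpine docker-compose config
# the database image resolves to "postgres:13-alpine",
# even though .env says POSTGRES_VERSION=alpine
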
5. Scaling & Replicas

In older Compose versions, we were allowed to scale the instances of a container through the docker-compose scale command. Newer versions deprecated it and replaced it with the --scale option.
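
As a quick sketch (the service name below is just a placeholder, and the scaled service shouldn’t publish a fixed host port, or the replicas would collide):

docker-compose up -d --scale my-worker-service=3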

On the other hand, we can exploit Docker Swarm (a cluster of Docker Engines) and scale our containers declaratively through the replicas attribute of the deploy section:

services:
  worker:
    image: dockersamples/examplevotingapp_worker
    networks:
      - frontend
      - backend
    deploy:
      mode: replicated
      replicas: 6
      resources:
        limits:
          cpus: '0.50'
          memory: 50M
        reservations:
          cpus: '0.25'
          memory: 20M
      ...

Under deploy, we can also specify many other options, like the resources thresholds. Compose, however, considers the whole deploy section only when deploying to Swarm, and ignores it otherwise.
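
As a sketch, assuming a Swarm has already been initialized with docker swarm init and the file above is saved as docker-compose.yml, we’d honor the deploy section by deploying it as a stack (the stack name here is arbitrary):

docker stack deploy --compose-file docker-compose.yml voting-app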

6. A Real-World Example: Spring Cloud Data Flow

While small experiments help us understand the individual gears, seeing real-world code in action will definitely unveil the big picture.

Spring Cloud Data Flow is a complex project, but simple enough to be understandable. Let’s download its YAML file and run:

DATAFLOW_VERSION=2.1.0.RELEASE SKIPPER_VERSION=2.0.2.RELEASE docker-compose up 

Compose will download, configure, and start every component, and then interleave the containers’ logs into a single stream in the current terminal.

It’ll also apply a unique color to each of them for a great user experience.

We might get the following error running a brand new Docker Compose installation:

lookup registry-1.docker.io: no such host

While there are different solutions to this common pitfall, using 8.8.8.8 as DNS is probably the simplest.
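
One way to apply that fix on a Linux host, as a rough sketch (this overwrites any existing /etc/docker/daemon.json, so merge by hand if you already have one), is to point the Docker daemon at Google’s DNS and restart it:

echo '{ "dns": ["8.8.8.8"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker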

7. Lifecycle Management

Let’s finally take a closer look at the syntax of Docker Compose:

docker-compose [-f ...] [options] [COMMAND] [ARGS...]

While there are many options and commands available, we at least need to know the ones that activate and deactivate the whole system correctly.

7.1. Startup

We’ve seen that we can create and start the containers, the networks, and the volumes defined in the configuration with up:

docker-compose up

After the first time, however, we can simply use start to start the services:

docker-compose start

In case our file has a different name than the default one (docker-compose.yml), we can exploit the -f and --file flags to specify an alternate file name:

docker-compose -f custom-compose-file.yml start

Compose can also run in the background as a daemon when launched with the -d option:

docker-compose up -d

7.2. Shutdown

To safely stop the active services, we can use stop, which will preserve containers, volumes, and networks, along with every modification made to them:

docker-compose stop

To reset the status of our project, instead, we simply run down, which will destroy everything with only the exception of external volumes:

docker-compose down
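
If we also want to remove the named volumes declared in the configuration, down accepts the -v (or --volumes) flag:

docker-compose down -v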

8. Conclusion

In this tutorial, we’ve learned about Docker Compose and how it works.

As usual, we can find the source docker-compose.yml file, along with a helpful battery of tests, over on GitHub.

WordPress in Docker. Part 1: Dockerization

This entry-level guide will tell you why and how to Dockerize your WordPress projects.

What is Docker | Docker Tutorial for Beginners

This DevOps Docker tutorial on what Docker is will help you understand how to use Docker Hub, Docker images, Docker containers, and Docker Compose. It also explains Docker’s working architecture and the Docker Engine in detail.

This Docker tutorial includes a hands-on session, by the end of which you will learn to pull a CentOS Docker image and spin up your own Docker container. You will also see how to launch multiple Docker containers using Docker Compose. Finally, it explains the role Docker plays in the DevOps life cycle.

The hands-on session is performed on a 64-bit Ubuntu machine with Docker installed.

Deploying Dockerized .NET Apps Without Being a DevOps Guru

This article will demonstrate first using the tooling to publish a simple ASP.NET Core API in an image to Docker Hub, and then creating a Linux virtual machine in Azure to host the API.

Originally published by Julie Lerman at https://blog.docker.com

.NET Developers who use Visual Studio have access to a great extension to help them create Docker images for their apps. The Visual Studio Tools for Docker simplify the task of developing and debugging apps destined for Docker images. But what happens when you are ready to move from debugging in Visual Studio to deploying your image to a container in the cloud?

This blog post will demonstrate first using the tooling to publish a simple ASP.NET Core API in an image to the Docker hub, and then creating a Linux virtual machine in Azure to host the API. It will also engage Docker Compose and Microsoft SQL Server for Linux in a Docker container, along with a Docker Volume for persistence. The goal is to create a simple test environment and a low-stress path to getting your first experience with publishing an app in Docker.

Using the Docker Tools to aid in building and debugging the API is the focus of a series of articles published in the April, May, and June 2019 issues of MSDN Magazine, so I’ll provide only a high-level look at the solution here.

Overview of the Sample App

The API allows me to track the names of Docker Captains. It’s not a real-world solution, but enough to give me something to work with. You can download the solution from github.com/julielerman/dockercaptains. I’ll provide a few highlights here.

   public class Captain
   {
       public int CaptainId { get; set; }
       public string Name { get; set; }
   }

The API leverages Entity Framework Core (EF Core) for its data persistence. This requires a class that inherits from the EF Core DbContext. My class, CaptainContext, specifies a DbSet to work from and defines a bit of seed data for the database.
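
The class itself isn’t shown in the post; as a rough sketch (the constructor, property, and seed values here are illustrative, not copied from the repository), it could look something like this:

using Microsoft.EntityFrameworkCore;

public class CaptainContext : DbContext
{
    public CaptainContext(DbContextOptions<CaptainContext> options) : base(options) { }

    // The set of captains the API tracks
    public DbSet<Captain> Captains { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Illustrative seed data so a freshly created database isn't empty
        modelBuilder.Entity<Captain>().HasData(
            new Captain { CaptainId = 1, Name = "Example Captain" });
    }
}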

Enabling a Dynamic Connection String

The startup.cs file uses ASP.NET Core’s dependency injection to configure a SQL Server provider for the CaptainContext. There is also code to read a connection string from an environment variable within the Docker container and update a password placeholder that’s less visible to prying eyes.

public void ConfigureServices(IServiceCollection services)
{
 services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
 var conn = Configuration["ConnectionStrings:CaptainDB"];
 conn = conn.Replace("ENVPW", Configuration["DB_PW"]);
 services.AddDbContext<CaptainContext>(options => options.UseSqlServer(conn));
}

The VS Tools generated a Dockerfile and I only made one change to the default — adding the CaptainDB connection string ENV variable with its ENVPW placeholder:

ENV ConnectionStrings:CaptainDB "Server=db;Database=CaptainDB;User=sa;Password=ENVPW;"

ASP.NET Core can discover Docker environment variables when running in a Docker container.

Orchestrating with a docker-compose file

Finally comes the docker-compose.yml file. This sets up a service for the API image, another for the database server image and a volume for persisting the data.

version: '3.4'

services:
 dataapidocker:
   image: ${DOCKER_REGISTRY-}dataapidocker
   build:
     context: .
     dockerfile: DataAPIDocker/Dockerfile
   environment:
     - DB_PW
   depends_on:
     - db
   ports:
     - 80:80
 db:
   image: mcr.microsoft.com/mssql/server
   volumes:
     - mssql-server-julie-data:/var/opt/mssql/data
   environment:
     SA_PASSWORD: "${DB_PW}"
     ACCEPT_EULA: "Y"
   ports:
     - "1433:1433"
volumes:
 mssql-server-julie-data: {}

Notice that I’m declaring the DB_PW environment variable in the API’s service definition and referencing it in the db’s service definition.

There’s also an .env file in the solution where the value of DB_PW is hidden.

DB_PW=...

Docker will read that file by default.

I got this solution set up and running from within Visual Studio on my development machine. And I love that even when the debugger publishes the app to a local container, I can still debug while it’s running in that container. That’s a super-power of the tools extension.

Using the Tools to Publish to Docker Hub

Once I was happy with my progress, I wanted to get this demo running in the cloud. Although I can easily use the CLI to push and pull, I love that the Docker Tools in VS can handle this part. The Dockerfile created by the tool has instructions for a multi-stage build. When you target Visual Studio to a release build, the tools will build the release image described in the Dockerfile. Publishing will rebuild that release image and publish it to your destination registry.

You can see my full solution in the screenshot below. My API project is called DataAPIDocker. Notice there is also a docker-compose project. This was created by the Docker Tools. But it is the DataAPIDocker project that will be published first into an image and then to a repository.

Right-clicking the DataAPIDocker project and choosing Publish will present a Publish page where you can choose to create a New Profile. A publish profile lets you define where to publish your app and also predefine any needed credentials. Creating a profile begins with selecting from a list of targets; for publishing a Docker image, select Container Registry. That option then gives you predefined registries to choose from, such as Azure Container Registry, Docker Hub, or a custom registry, which could be an instance of Docker Trusted Registry.

I’ll choose Docker Hub and click Publish. 

The last step is to provide your Docker Hub repository name. If you don’t already have docker.config set up with your credentials, then you also need to supply your password. 

After creating a profile, it gets stored in the Visual Studio project.

You’ll be returned to the Publish overview page with this profile selected, where you can edit the default “latest” tag name. Click the Publish button to trigger the Docker Tools to do their job. 

A window will open up showing the progress of the docker push command run by the tools.

After the push is complete, you can open Docker Hub to see your new repository, which by default is public.

Setting up an Azure Linux VM to Host the Containers

Now that the image is hosted in the cloud, you can turn your sights to hosting a container instance for running the app. Since my Visual Studio Subscription includes credits on Azure, I’ll use those. I will create a Linux Virtual Machine on Azure with Docker and Docker Compose, then run an instance of my new image along with a SQL Server and a data volume.

I found two interesting paths for doing this at the command line. One is using the Azure CLI on Windows, macOS, or Linux, which is so much easier than doing it through the Azure Portal.

I found this doc to be really helpful as I was doing this for the first time. The article walks you through installing the Azure CLI, logging into Azure, creating a Linux VM with Docker already installed, and then installing Docker Compose. Keep in mind that this will create a default "Standard DS1 v2" machine (1 vCPU, 3.5 GB memory). That VM size has an estimated cost of about $54 (USD) per month.
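
As a rough sketch of that Azure CLI path (the resource group name, VM name, and location below are placeholders; the linked doc also covers installing Docker and Docker Compose on the VM):

# log in and confirm which subscription you're using
az login
az account show --query id -o tsv

# create a resource group and an Ubuntu VM to host the containers
az group create --name myDockerGroup --location eastus
az vm create --resource-group myDockerGroup --name mylinuxvm \
  --image UbuntuLTS --admin-username azureuser --generate-ssh-keys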

Alternatively, you can use Docker Machine, a Docker tool for installing Docker on virtual hosts and managing the hosts. This path is a little more automated but it does require that you use bash and that you start by using the Azure CLI to log into your Azure account using the command az login.

Once that’s done, you can use parameters of docker-machine to tell it you’re creating this in Azure, and specify your subscription, ssh username, port, and the size of the machine to create. The size parameter uses standard Azure VM size names.

I found it interesting to use the Azure CLI workflow which was educational and then consider the docker-machine workflow as a shortcut version.

Since I was still working on my Windows machine, and don’t have the Windows Subsystem for Linux installed there, I opened up Visual Studio Code and switched my terminal shell to use bash. That let me use docker-machine without issue. I also have the Azure Login extension in VS Code, so I was already logged in to Azure.

I first had to get the subscription ID of my Azure account, which I did using the CLI. Then I plugged the ID into the docker-machine command:

docker-machine create -d azure \
   --azure-subscription-id [this is where I pasted my subscription id] \
   --azure-ssh-user azureuser \
   --azure-open-port 80 \
   --azure-size "Standard_DS1_v2" \
   mylinuxvm

There are more settings you can apply, such as defining the resource group and location. The output from this command will pause, providing you with details on how to grant docker-machine authorization to your Azure account by plugging a provided code into a browser window. Once that’s done, the command will continue its work and the output will forge ahead.

When it’s finished, you’ll see the message "Docker is up and running!" (on the new VM), followed by a very important message telling you to configure your shell to talk to the Docker engine on that VM by running:

"C:\Program Files\Docker\Docker\Resources\bin\docker-machine.exe" env mylinuxvm

Recall that I’m doing these tasks on Windows, so docker-machine is ensuring that I know where to find the executable. After performing this task, I can see the machine up and running in the Azure Portal. This lets me inspect other default configuration choices made because I didn’t specify them in the docker-machine command.

By default, all of the needed ports are set up for access, such as 80 for HTTP and 22 for SSH.

Re-Creating Docker-Compose and .env on the VM

We only need two files on this machine: the docker-compose.yml and the .env file.

Docker Machine allows you to easily ssh into the VM so that the commands you type execute on that machine.

docker-machine ssh mylinuxvm

Then you can use a Linux editor such as nano to re-create the two files.

nano docker-compose.yml

Then you can paste in the contents of your docker-compose file; this is the docker-compose file from my solution for the sample app. However, there are two edits you’ll need to make.

  1. The original file depends on a variable supplied by the VS Docker Tools for the registry location. Change the value of image to point to your Docker Hub image: image: julielerman/dataapidocker:formylinuxvm
  2. You’ll also need to change the version of docker-compose specified at the top of the file to 2.0 since you’re moving from hosting on Windows to hosting on Linux.

In nano, you can save the docker-compose file with ^O. Then exit nano and run it again to create the .env file using the command:

nano .env

Paste the key=value pair for the environment variable from the app and save the .env file.

Running the Container

I still had to install docker-compose on the new machine. Docker is nice enough to feed you the command for that if you try to run docker-compose before installing it.

 sudo apt install docker-compose

Then I was able to run my containers with: 

 sudo docker-compose up

One important thing I learned: The VS Docker tooling doesn’t define port mapping for the API service in docker-compose. That’s hidden in a docker-compose.override.yml file used by the debugger. If you look at the docker-compose file listed earlier in this article, you’ll see that I added it myself. Without it, when you try to browse to the API, you will get a Connection refused error.

My ASP.NET Core API is now running, and I can browse to it at the public IP address assigned to the VM. The HTTP GET of my Captains controller returns the list of captains seeded in the database.

DevOps are for Devs, Too

As a developer who is often first in line to claim “I don’t do DevOps”, I was surprised at how simple it turned out to be to deploy the app I had created. So often I have allowed my development machine to be a gate that defined the limitations of my expertise. I can build the apps and watch them work on my development machine but I’ve usually left deployment to someone else.

While I have ventured into the Azure Portal frequently, the fact that the Docker Tools and the Azure CLI made it so simple to create the assets I needed for deploying the app made me wonder why I’d waited so long to try that out. And in reality, I didn’t have to deploy the app, just an image and then a docker-compose file. That the Docker Machine made it even easier to create those cloud assets was something of a revelation. 

Part of this workflow leveraged the Docker Tools for Visual Studio on Windows. But because I spend a lot of time in Visual Studio Code on my MacBook, I now have the confidence to explore using the Docker CLI for publishing the image to Docker Hub. After that I can just repeat the Docker Machine path to create the Azure VM where I can run my containers. 
