Getting started with Docker Compose

If you're already using Docker, you might have come across Docker Compose. The idea of Docker Compose is to help you define and build application stacks. These application stacks are basically multiple containers that can be linked together to provide a multi-container service.

Originally published by Matthias at


For instance, you can use Docker Compose to launch a three container application stack that has a load balancer, a web application and a database. Each application runs in its own container, but Docker Compose allows you to start the complete application stack without having to link and configure each container.

In a Docker Compose configuration file you can define environment variables, networks or volumes. A developer can simply use docker-compose up to start the application stack, which makes it easy to create a development environment that behaves identically on every developer's machine. I think every developer knows the situation where she or he joins a new team and wants to set up the new project immediately - Docker Compose can speed this up.

Another scenario for Docker Compose is single-host deployments. Docker Compose was originally created for development purposes, but it also offers production-oriented features like restart policies or container scaling. There is a more detailed description available in the official Docker documentation (which is always a good resource for getting help). For more complex setups, however, one would use Docker Swarm or even Kubernetes.
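For example, a restart policy is a single additional line in a service definition. A sketch, using the WordPress service defined later in this article (unless-stopped restarts the container automatically unless you stopped it yourself):

```yaml
services:
  wordpress:
    image: wordpress:4.9.8
    # Restart the container on failure or daemon restart,
    # unless it was explicitly stopped.
    restart: unless-stopped
```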

If you're using Docker for Mac or Docker for Windows, Docker Compose is already installed on your machine. If you want to run Docker Compose on a Linux server, please check the installation instructions.

In your terminal, you can type docker-compose -v to check if Docker Compose is installed and which version you are currently using. If everything is correct, you should see something like docker-compose version 1.22.0, build f46880f.

This article will explain the basic usage of Docker Compose. You will create a MySQL database, a WordPress instance and an NGINX proxy.

Here is a quick overview of what we are going to do in this guide:

  1. Create a docker-compose.yml file
  2. Define services (MySQL, WordPress and NGINX)
  3. Add networks which allow communication between the services
  4. Add volumes for the NGINX configuration files
  5. Use docker-compose up to run the application stack


Prerequisites

For this guide you'll need:

  • Docker for Mac or Docker for Windows installed
  • Docker Compose working (you can check if it is working by typing docker-compose -v in your terminal)
  • An empty directory, where you can create a docker-compose.yml file.

Configure the project

This is the desired project structure:

├── default.conf
└── docker-compose.yml
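The structure above can be created from the terminal like this (the directory name docker-compose-getting-started is just an assumption; any name works):

```shell
# Create the project directory and both (for now empty) configuration files.
mkdir -p docker-compose-getting-started
touch docker-compose-getting-started/docker-compose.yml
touch docker-compose-getting-started/default.conf
```

Change into the directory afterwards, because docker-compose looks for docker-compose.yml in the current working directory.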

Create the docker-compose.yml file and insert the application stack configuration (we're going to take a closer look at each section later in this article):

version: '3'

services:
  database:
    image: mysql:5.7
    environment:
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=password
      - MYSQL_DATABASE=wordpress
      - MYSQL_RANDOM_ROOT_PASSWORD=true
    networks:
      - backend
  wordpress:
    image: wordpress:4.9.8
    depends_on:
      - database
    environment:
      - WORDPRESS_DB_HOST=database
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=password
    networks:
      - backend
      - frontend
  nginx:
    image: nginx:1.15
    depends_on:
      - wordpress
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - 8080:80
    networks:
      - frontend

networks:
  backend:
  frontend:

Create the default.conf and insert the NGINX configuration:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    location / {
        proxy_read_timeout 90;
        proxy_connect_timeout 90;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
        proxy_set_header X-Forwarded-Port 8080;
        proxy_set_header Host $host:8080;
        proxy_pass http://wordpress:80;
    }
}

Database service

The first section defines the database service. The user credentials as well as the default database are configured with environment variables. For security reasons, the root user is created with a random password.

There is no need to access the database from the outside (only the WordPress container needs to access it), so no ports are exposed.

The database service is a member of the backend network, which makes it possible for WordPress to read and store data.

  database:
    image: mysql:5.7
    environment:
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=password
      - MYSQL_DATABASE=wordpress
      - MYSQL_RANDOM_ROOT_PASSWORD=true
    networks:
      - backend

WordPress service

We are using the official WordPress image from the Docker Hub. As you have already seen in the database service, we use some environment variables to configure the container - in this case, the database connection is configured.

WordPress is not directly accessible, so there is no need to open any ports.

The container is a member of both networks (backend and frontend).

  wordpress:
    image: wordpress:4.9.8
    depends_on:
      - database
    environment:
      - WORDPRESS_DB_HOST=database
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=password
    networks:
      - backend
      - frontend

NGINX service

The NGINX service acts as a proxy for WordPress - all requests are routed through it.

To make the NGINX proxy available to the outside, it is necessary to publish a port (in this case we are mapping port 8080 on the host to port 80 in the container). Furthermore, it needs to communicate with the wordpress service, which is why it is a member of the frontend network.

There is also a volume mapping which mounts the NGINX configuration file into the correct directory.

  nginx:
    image: nginx:1.15
    depends_on:
      - wordpress
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - 8080:80
    networks:
      - frontend


Networks

If applications need to communicate with each other, you have to define networks. In our docker-compose.yml the networks are defined in the last three lines: there are separate networks for the backend and the frontend.


The WordPress container is assigned to both networks (backend and frontend), because WordPress needs access to the MySQL server and the NGINX instance will proxy requests to WordPress.

The services database and nginx cannot communicate with each other, because they are not in the same network. A simple ping from the nginx service can prove that:

root@<container-id>:/# ping -c 5 wordpress
PING wordpress (…) 56(84) bytes of data.
64 bytes from docker-compose-getting-started_wordpress_1.docker-compose-getting-started_frontend (…): icmp_seq=1 ttl=64 time=0.107 ms
64 bytes from docker-compose-getting-started_wordpress_1.docker-compose-getting-started_frontend (…): icmp_seq=2 ttl=64 time=0.084 ms
64 bytes from docker-compose-getting-started_wordpress_1.docker-compose-getting-started_frontend (…): icmp_seq=3 ttl=64 time=0.117 ms
64 bytes from docker-compose-getting-started_wordpress_1.docker-compose-getting-started_frontend (…): icmp_seq=4 ttl=64 time=0.115 ms
64 bytes from docker-compose-getting-started_wordpress_1.docker-compose-getting-started_frontend (…): icmp_seq=5 ttl=64 time=0.117 ms

--- wordpress ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4082ms
rtt min/avg/max/mdev = 0.084/0.108/0.117/0.012 ms
root@<container-id>:/# ping -c 5 database
ping: database: Name or service not known
root@<container-id>:/#

If two or more services are members of the same network, they can reach each other via their defined names (in this example database, wordpress and nginx).


Volumes

Typically, your files are deleted once you stop and remove your application stack. If you need your data to be persistent, you can use volumes. Your data is then stored inside the volume, which is independent of your container's lifetime.

Besides making data persistent, volumes can be used to mount files into a running container. In this example, the NGINX default.conf is mounted into the NGINX configuration directory.
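For example, to keep the MySQL data across stack restarts, you could mount a named volume into the database service. A sketch (db_data is an arbitrary volume name; /var/lib/mysql is the data directory of the mysql image):

```yaml
services:
  database:
    image: mysql:5.7
    volumes:
      # Store the MySQL data files in a named volume so they
      # survive removing and recreating the container.
      - db_data:/var/lib/mysql

volumes:
  db_data:
```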

For a more detailed explanation of volumes, see the use volumes section in the official documentation.


Startup order

There are situations where it is mandatory that one container is already running before another is started. To ensure this, you can use the depends_on property. In our Docker Compose configuration file, the NGINX container will only be started once the WordPress container has been started.

However, Docker Compose won't wait for your database to be ready (even if you're using depends_on) - it only makes sure that the services are started in the correct order.

If you have to wait for a database to be successfully started and accepting connections, you will need to use a tool such as wait-for-it or dockerize. For further information about this topic, Control startup order in Compose is a good read.
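The idea behind these tools is simple enough to sketch. The following shell function is an illustration, not the actual wait-for-it script; it assumes nc (netcat) is available in the container. It polls a TCP port once per second until it accepts connections or a timeout is reached:

```shell
#!/bin/sh
# wait_for HOST PORT TIMEOUT_SECONDS
# Returns 0 as soon as HOST:PORT accepts TCP connections,
# or 1 if TIMEOUT_SECONDS elapse first.
wait_for() {
  host=$1
  port=$2
  timeout=$3
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if nc -z "$host" "$port" 2>/dev/null; then
      return 0
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 1
}

# Example: block until the MySQL service accepts connections, then continue.
# wait_for database 3306 60 && echo "database is up"
```

In a real setup you would run such a check in the container's entrypoint, before starting the application itself.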

Environment variables

According to the Twelve-Factor App, it is good practice to pass the app config as environment variables.

If you have a look at the definition of the database and wordpress services, you'll notice that both services are configured with environment variables.

Typically, the description of a Docker image provides a very good overview of the available environment variables. Have a look at the MySQL and WordPress images on Docker Hub.
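If you'd rather not hard-code credentials in docker-compose.yml, Compose can also read environment variables from a file via the env_file option. A sketch (database.env is an assumed file name containing KEY=value lines):

```yaml
services:
  database:
    image: mysql:5.7
    # Load MYSQL_USER, MYSQL_PASSWORD etc. from a separate file
    # instead of listing them under "environment".
    env_file:
      - ./database.env
    networks:
      - backend
```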

Run the application stack

It's now time to start the application. You can start your stack by typing docker-compose up --build. Of course, you need to make sure that you execute the command from within the directory where the docker-compose.yml is located.

By default docker-compose uses the docker-compose.yml, but you can also specify an alternate Docker Compose configuration file by using the -f (or --file) option (e.g. docker-compose -f docker-compose.production.yml up --build).

Once the application stack is successfully started, you can navigate to http://localhost:8080 which will bring up the WordPress setup wizard.

Once you have completed the setup routine, you can log in to the admin interface or open the blog.


Conclusion

Docker Compose significantly simplifies your development workflow. Whether you're using it to set up a one-container stack or a complex multi-service architecture, you can easily configure your application and share it with other developers in your team. Think about how much time you've spent helping others start the application on their machine - now you just need to share the docker-compose.yml. Another benefit is that reading the docker-compose.yml gives others a first insight into how your application stack behaves and which services communicate with each other.

If you don't need automatic scaling or multi-server environments, you can even use Docker Compose to run your application in production.
