Getting started with Docker Compose

If you're already using Docker, you might have come across Docker Compose. The idea of Docker Compose is to help you define and build application stacks. These application stacks consist of multiple containers that are linked together to provide a multi-container service.

Originally published by Matthias at dev.to

Introduction

For instance, you can use Docker Compose to launch a three container application stack that has a load balancer, a web application and a database. Each application runs in its own container, but Docker Compose allows you to start the complete application stack without having to link and configure each container.

In a Docker Compose configuration file you can define environment variables, networks or volumes. A developer can simply use docker-compose up to start the application stack, which makes it easy to create a development environment that behaves identically on every developer's machine. I think every developer knows the situation where she or he joins a new team and wants to set up the new project immediately - Docker Compose can speed this up.

Another scenario for Docker Compose is single-host deployments. Docker Compose was originally created for development purposes, but there are production environment features like restart policies or container scaling. There is a more detailed description available in the official Docker documentation (which is always a good resource for getting help). However, for more complex setups, one would use Docker Swarm or even Kubernetes.
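As a sketch of what such a production feature looks like (the restart value shown here is an illustrative choice, not part of this guide's final configuration), a restart policy is just one extra line in a service definition:

```yaml
services:
  wordpress:
    image: wordpress:4.9.8
    restart: unless-stopped   # restart the container on failure, unless it was stopped manually
```

Scaling works without any configuration change at all - for example, docker-compose up --scale wordpress=3 would start three instances of the wordpress service.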

If you're using Docker for Mac or Docker for Windows, Docker Compose is already installed on your machine. If you want to run Docker Compose on a Linux server, please check the installation instructions.

In your terminal, you can type docker-compose -v to check if Docker Compose is installed and which version you are currently using. If everything is correct, you should see something like docker-compose version 1.22.0, build f46880f.

The following article will explain the basic usage of Docker Compose. You will create a MySQL database, a WordPress instance and an NGINX proxy.

Here is a quick overview of what we are going to do in this guide:

  1. Create a docker-compose.yml file
  2. Define services (MySQL, WordPress and NGINX)
  3. Add networks which allow communication between the services
  4. Add volumes for the NGINX configuration files
  5. Use docker-compose up to run the application stack
Prerequisites

For this guide you'll need:

  • Docker for Mac or Docker for Windows installed
  • Docker Compose working (you can check if it is working by typing docker-compose -v in your terminal)
  • An empty directory, where you can create a docker-compose.yml file.
Configure the project

This is the desired project structure:

.
├── default.conf
└── docker-compose.yml

Create the docker-compose.yml file and insert the application stack configuration (we're going to have a closer look at each section later in this article):

version: '3'

services:
  database:
    image: mysql:5.7
    environment:
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=password
      - MYSQL_DATABASE=wordpress
      - MYSQL_RANDOM_ROOT_PASSWORD=true
    networks:
      - backend
  wordpress:
    image: wordpress:4.9.8
    depends_on:
      - database
    environment:
      - WORDPRESS_DB_HOST=database
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=password
    networks:
      - backend
      - frontend
  nginx:
    image: nginx:1.15
    depends_on:
      - wordpress
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - 8080:80
    networks:
      - frontend

networks:
  backend:
  frontend:

Create the default.conf and insert the NGINX configuration:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    location / {
        proxy_read_timeout 90;
        proxy_connect_timeout 90;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
        proxy_set_header X-Forwarded-Port 8080;
        proxy_set_header Host $host:8080;
        proxy_pass http://wordpress:80;
    }
}
Database service

The first section defines the database service. The user credentials as well as the default database are configured with environment variables. For security reasons, the root user is created with a random password.

There is no need to access the database from the outside (only the WordPress container needs to access it), so no ports are exposed.

The database service is a member of the backend network, which makes it possible for WordPress to read and store data.

database:
  image: mysql:5.7
  environment:
    - MYSQL_USER=wordpress
    - MYSQL_PASSWORD=password
    - MYSQL_DATABASE=wordpress
    - MYSQL_RANDOM_ROOT_PASSWORD=true
  networks:
    - backend
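If you ever do need to inspect the database from the host during development, you could temporarily publish its port (a debugging tweak and an assumption on my part, not part of this guide's setup):

```yaml
database:
  image: mysql:5.7
  ports:
    - 3306:3306   # publish MySQL to the host for debugging only - remove again afterwards
```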
WordPress service

We are using the official WordPress image from the Docker Hub. As you have already seen in the database service, we use some environment variables to configure the container - in this case, the database connection is configured.

WordPress is not directly accessible, so there is no need to open any ports.

The container is a member of both networks (backend and frontend).

wordpress:
  image: wordpress:4.9.8
  depends_on:
    - database
  environment:
    - WORDPRESS_DB_HOST=database
    - WORDPRESS_DB_USER=wordpress
    - WORDPRESS_DB_PASSWORD=password
  networks:
    - backend
    - frontend
NGINX service

The NGINX service acts as a proxy for WordPress - all requests are routed through it.

To make the NGINX proxy available to the outside, it is necessary to expose a port (in this case we are mapping port 8080 to 80). Furthermore, it needs to communicate with the wordpress service, that's why it is a member of the frontend network.

There is also a volume mapping which mounts the NGINX configuration file into the correct directory.

nginx:
  image: nginx:1.15
  depends_on:
    - wordpress
  volumes:
    - ./default.conf:/etc/nginx/conf.d/default.conf
  ports:
    - 8080:80
  networks:
    - frontend
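As a small optional hardening (the :ro suffix is an addition of mine, not part of this guide's configuration), the mount could be made read-only so that nothing inside the container can modify the file:

```yaml
nginx:
  image: nginx:1.15
  volumes:
    - ./default.conf:/etc/nginx/conf.d/default.conf:ro   # :ro mounts the file read-only
```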
Networks

If applications need to communicate with each other, you'll have to define networks. In our docker-compose.yml the networks are defined in the last three lines. There are different networks for the backend and frontend.

networks:
  backend:
  frontend:
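Networks defined like this use Docker's default settings; the equivalent explicit form (shown purely for illustration) would be:

```yaml
networks:
  backend:
    driver: bridge   # bridge is the default driver on a single host
  frontend:
    driver: bridge
```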

The WordPress container is assigned to both networks (backend and frontend), because WordPress needs access to the MySQL server and the NGINX instance will proxy requests to WordPress.

The services database and nginx cannot communicate with each other, because they are not in the same network. A simple ping from the nginx service can prove that:

root@<container-id>:/# ping -c 5 wordpress
PING wordpress (172.20.0.2) 56(84) bytes of data.
64 bytes from docker-compose-getting-started_wordpress_1.docker-compose-getting-started_frontend (172.20.0.2): icmp_seq=1 ttl=64 time=0.107 ms
64 bytes from docker-compose-getting-started_wordpress_1.docker-compose-getting-started_frontend (172.20.0.2): icmp_seq=2 ttl=64 time=0.084 ms
64 bytes from docker-compose-getting-started_wordpress_1.docker-compose-getting-started_frontend (172.20.0.2): icmp_seq=3 ttl=64 time=0.117 ms
64 bytes from docker-compose-getting-started_wordpress_1.docker-compose-getting-started_frontend (172.20.0.2): icmp_seq=4 ttl=64 time=0.115 ms
64 bytes from docker-compose-getting-started_wordpress_1.docker-compose-getting-started_frontend (172.20.0.2): icmp_seq=5 ttl=64 time=0.117 ms

--- wordpress ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4082ms
rtt min/avg/max/mdev = 0.084/0.108/0.117/0.012 ms
root@<container-id>:/# ping -c 5 mysql
ping: mysql: Name or service not known
root@<container-id>:/#

If two or more services are members of the same network, they can reach each other via their defined service names (in this example database, wordpress and nginx).

Volumes

Typically, your files are deleted once you stop and remove your application stack. If you need your data to persist, you can use volumes. Your data is then stored inside the volume, which is independent of your container's lifetime.
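Note that the stack in this guide does not persist the MySQL data yet - removing the stack would delete the database. As a sketch (the volume name db_data is my own choice), a named volume could be added like this:

```yaml
services:
  database:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql   # MySQL stores its data files in this directory

volumes:
  db_data:   # named volume, survives docker-compose down (unless you pass -v)
```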

Besides making data persistent, volumes can be used to mount files into a running container. In this example, the NGINX default.conf is mounted into the NGINX configuration directory.

For a more detailed explanation of volumes, I refer to the use volumes section in the official documentation.

Dependencies

There are situations where it is mandatory that one container is already running before another is started. To ensure this, you can use the depends_on property. In our Docker Compose configuration file, the NGINX container will be started once the WordPress container has been started.

However, Docker Compose won't wait for your database to be ready (even if you're using depends_on) - it will only make sure that the services are started in the correct order.

If you have to wait for a database to be successfully started and accepting connections, you will need to use a tool such as wait-for-it or dockerize. For further information on this topic, Control startup order in Compose is a good read.
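As an illustrative sketch of the wait-for-it approach (assuming the wait-for-it.sh script has been downloaded next to the docker-compose.yml and is executable), the startup command of the wordpress service could be wrapped like this:

```yaml
wordpress:
  image: wordpress:4.9.8
  depends_on:
    - database
  volumes:
    - ./wait-for-it.sh:/usr/local/bin/wait-for-it.sh   # mount the script into the PATH
  # wait until the database accepts TCP connections on 3306, then start Apache
  command: ["wait-for-it.sh", "database:3306", "--", "apache2-foreground"]
```

Be aware that overriding command can interact with the image's entrypoint script, so treat this as a starting point rather than a drop-in solution.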

Environment variables

According to the Twelve-Factor App it is a good practice to pass the app config as environment variables.

If you have a look at the definition of the database and wordpress services, you'll notice that both services are configured with environment variables.

Typically, the description of a Docker image provides a very good overview of the available environment variables. Have a look at the MySQL and WordPress images on Docker Hub.
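Instead of listing the variables inline, you could also keep them in a separate file via the env_file option (a sketch; the filename database.env is an assumption of mine):

```yaml
database:
  image: mysql:5.7
  env_file:
    - ./database.env   # plain KEY=value lines, e.g. MYSQL_USER=wordpress
```

This keeps credentials out of the docker-compose.yml, which is handy if the Compose file is committed to version control.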

Run the application stack

It's now time to start the application. You can start your stack by typing docker-compose up --build. Of course, you need to make sure that you execute the command from within the directory where the docker-compose.yml is located.

By default docker-compose uses the docker-compose.yml, but you can also specify an alternate Docker Compose configuration file by using the -f (or --file) option (e.g. docker-compose -f docker-compose.production.yml up --build).

Once the application stack is successfully started, you can navigate to http://localhost:8080 which will bring up the WordPress setup wizard.

Once you have completed the setup routine, you can log in to the admin interface or open the blog.

Conclusion

Docker Compose significantly simplifies your development workflow. Whether you're using it to set up a one-container stack or a complex multi-service architecture, you can easily configure your application and share it with other developers in your team. Think about how much time you've spent helping others start the application on their machine - now you just need to share the docker-compose.yml. Another benefit is that reading the docker-compose.yml gives others a first insight into how your application stack behaves and which services communicate with each other.

If you don't need automatic scaling or multi-server environments, you can even use Docker Compose to run your application in production.

