In this lesson, we are going to learn how Docker Compose works and how it can be used to deploy and manage multiple containers in a production environment.

In the previous lessons, we covered the basics of Docker: the anatomy of Docker containers, the structure of a Dockerfile, how to build images, how to manage containers, and so on. That is the baseline knowledge we need in order to operate Docker.

If our application is as simple as an HTTP server, we can run it inside a single Docker container. We can create a custom Docker image, copy the application code into it, and run a container from that image. We can mount a volume for persistent data storage and bind a port on the host to a port on the container to make the service public. That’s it. After that, the only remaining task is to manually monitor the container in case it goes down.
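As a concrete sketch of that single-container workflow, here are the corresponding Docker CLI commands. The image name, ports, and volume path below are illustrative placeholders, not values from this lesson:

```shell
# Build a custom image from the Dockerfile in the current directory.
docker build -t my-http-server .

# Run a container from it in the background:
#  -p binds host port 8080 to container port 80 (makes the service public)
#  -v mounts a named volume for persistent data storage
docker run -d \
  --name my-http-server \
  -p 8080:80 \
  -v app-data:/var/lib/app \
  my-http-server

# Manually check on the container afterwards.
docker ps
docker logs my-http-server
```

If the container crashes, nothing restarts it for us; we would have to notice and intervene by hand, which is exactly the gap discussed next.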

However, in reality, our application is made up of different services. For example, we might have a frontend service that serves the web application, a backend service that provides a REST API to the frontend, and a database service that stores the user data.

These services can be dependent on each other. For example, the frontend service depends on the backend service and the backend service depends on the database service. Unless we have all the services up and running, our application won’t function properly. They might need to be started in the right order for things to go well.

Doing this manually is cumbersome. Not everyone on your team will have the context of the entire application; that is a lot of operational detail to carry around, and you can’t afford to mess things up. You need some sort of orchestration tool to manage your services automatically: one that configures them, handles startup and shutdown, and recovers from failures. This is where Docker Compose comes in.
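The frontend/backend/database example above maps directly onto a Compose file. Here is a minimal sketch (image names, ports, and credentials are illustrative placeholders; note that `depends_on` controls start order, not application readiness):

```yaml
# docker-compose.yml
services:
  frontend:
    image: my-frontend          # serves the web application
    ports:
      - "80:3000"
    depends_on:
      - backend                 # started after the backend

  backend:
    image: my-backend           # REST API consumed by the frontend
    depends_on:
      - db                      # started after the database
    restart: unless-stopped     # restart automatically on failure

  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder; use secrets in production
    volumes:
      - db-data:/var/lib/postgresql/data   # persistent storage

volumes:
  db-data:
```

With this file in place, `docker compose up -d` starts all three services in dependency order, and `docker compose down` shuts them down again, so no one on the team needs to remember the ordering by hand.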


A beginner’s guide to deploying a Docker application to production using Docker Compose