How to Use Both Volumes and Bind Mounts in a Docker Container

Learn to use both volumes and bind mounts to persist your Docker data and avoid data loss after your Docker container is destroyed.

When a Docker container is destroyed, everything written to its writable layer goes with it; creating a new container from the same image starts from a clean state. Therefore, you’ll lose data any time you destroy one container and create a new one.

To avoid losing data, Docker provides volumes and bind mounts, two mechanisms for persisting data in your Docker container. In this tutorial, we’ll examine volumes and bind mounts before looking at some examples and use cases for each.

Let’s get started!


August Murray


Docker: Manage Data in Docker - Understanding “Docker Volumes” and “Bind Mounts”


By design, Docker containers don’t hold persistent data. Any data you write inside the container’s writable layer is no longer available once the container is removed. It can also be difficult to get the data out of the container if another process needs it.

Also, a container’s writable layer is tightly coupled to the host machine where the container is running. You can’t easily move the data somewhere else.

Docker has two options for containers to store files on the host machine, so that the files persist even after the container stops: volumes and bind mounts.

  • Volumes are stored in a part of the host filesystem which is managed by Docker (/var/lib/docker/volumes/ on Linux). Non-Docker processes should not modify this part of the filesystem. Volumes are the best way to persist data in Docker.
  • Bind mounts may be stored anywhere on the host system. They may even be important system files or directories. Non-Docker processes on the Docker host or a Docker container can modify them at any time.
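As a minimal command-line sketch of the two mechanisms (the image names and paths are illustrative, and the commands assume a running Docker daemon):

```shell
# Volume: Docker manages the storage under /var/lib/docker/volumes/
docker volume create app-data
docker run -d --name db -v app-data:/var/lib/mysql mysql:8

# Bind mount: any host path you choose, managed by you rather than by Docker
docker run -d --name web -v /srv/site/html:/usr/share/nginx/html nginx
```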

Let’s understand them in detail one by one.

#docker-container #docker #docker-volume #containerization

Mikel Okuneva


Ever Wondered Why We Use Containers In DevOps?

At some point we’ve all said the words, “But it works on my machine.” It usually happens during testing or when you’re trying to get a new project set up. Sometimes it happens when you pull down changes from an updated branch.

Every machine has different underlying states depending on the operating system, other installed programs, and permissions. Getting a project to run locally could take hours or even days because of weird system issues.

The worst part is that this can also happen in production. If the server is configured differently than what you’re running locally, your changes might not work as you expect and cause problems for users. There’s a way around all of these common issues using containers.

What is a container?

A container is a piece of software that packages code and its dependencies so that the application can run in any computing environment. It essentially creates a little unit that you can put on any operating system and run the application reliably and consistently. You don’t have to worry about any of those underlying system issues creeping in later.

Although containers have been used in Linux for years, they have become far more popular recently. Most of the time when people talk about containers, they’re referring to Docker containers. These containers are built from images that include all of the dependencies needed to run an application.

When you think of containers, virtual machines might also come to mind. They are very similar, but the big difference is that containers virtualize the operating system instead of the hardware. That’s what makes them so easy to run on all of the operating systems consistently.

What containers have to do with DevOps

Since odd things happen when code moves from one computing environment to another, the same issue arises when moving code between the different environments in a DevOps process. You don’t want to deal with system differences between staging and production; that would require more work than it should.

Once you have an artifact built, you should be able to use it in any environment from local to production. That’s the reason we use containers in DevOps. It’s also invaluable when you’re working with microservices. Docker containers used with something like Kubernetes will make it easier for you to handle larger systems with more moving pieces.

#devops #containers #containers-devops #devops-containers #devops-tools #devops-docker #docker #docker-image

August Murray


Manage Docker Storage & Volumes

Intro to volumes and storage.

TL;DR Overview of how docker storage & volumes work and how to manage them.

In this fourth part of the Dockerventure series, we are going to focus on how docker handles storage, how it manages container file systems, and showcase how we can effectively manage our data with volumes.

Check here for the first three parts: Intro to Docker, Dockerfiles, and Useful Docker commands.

Default Docker File System

By default, Docker creates the directory /var/lib/docker, where it stores all of its data regarding containers, images, volumes, etc.

When a new container is started, a new **read-write container layer** is added on top of the read-only image layers that were created during the build phase.

This container layer exists only for the lifetime of the container; when the container is removed, the layer, along with all the changes made on top of it, is lost.
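A quick way to see this, assuming a Docker daemon and the alpine image (the filename is illustrative):

```shell
# Write a file into the container's writable layer, then remove the container
docker run --name demo alpine sh -c 'echo hello > /data.txt'
docker rm demo

# A fresh container from the same image starts from the image layers only,
# so the file is gone
docker run --rm alpine ls /data.txt   # error: No such file or directory
```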

#docker-mount #docker-volume #docker-storage #docker

Hudson Kunde


Docker Compose Syntax: Volume or Bind Mount?

Docker Compose allows you to configure volumes and bind mounts using a short syntax. A few examples:
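The original snippets did not survive extraction, but short-syntax entries under a service typically look like the following (paths and names are illustrative):

```yaml
services:
  db:
    volumes:
      - ./config:/etc/mysql/conf.d
      - db-data:/var/lib/mysql
      - /var/lib/mysql
```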
Which of these are volumes and which are bind mounts?

Whenever I had to read a docker-compose.yml file, I had to look up the official documentation or run a quick local experiment to figure out how Docker Compose would mount the directories into the container.

I wrote this article so that next time you read a Docker Compose file, you won’t have to guess anymore. You’ll simply know by looking at the syntax whether a volume or a bind mount is used behind the scenes.

These variations boil down to three unique forms, which I list and explain below.

Two volumes keys in docker-compose.yml

Before we talk about the different ways to specify volumes, let’s first clarify which volumes key we’re referring to. In docker-compose.yml, the volumes key can appear in two different places.

version: "3.7"

services:
  my-service:  # illustrative service name
    # ...
    volumes: # Nested key. Configures volumes for a particular service.

volumes: # Top-level key. Declares volumes which can be referenced from multiple services.
  # ...

In this article, we’ll talk about the nested volumes key. That’s where you configure volumes for a specific service/container such as a database or web server. This configuration has a short (and a long) syntax format.

Short syntax format and its variations

The volume configuration has a short syntax format that is defined as:

[SOURCE:]TARGET[:MODE]

SOURCE can be a named volume or a (relative or absolute) path on the host system. TARGET is an absolute path in the container. MODE is a mount option, which can be read-only (ro) or read-write (rw). Brackets mean the argument is optional.

This optionality leads to three unique variations you can use to configure a container’s volumes. Docker Compose is smart about recognising which variation is used and whether to create a volume or use a bind mount.
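A sketch of the three variations side by side (service and volume names are illustrative):

```yaml
services:
  db:
    volumes:
      - /var/lib/mysql                            # TARGET only: anonymous volume
      - db-data:/var/lib/mysql                    # named SOURCE: volume
      - ./init:/docker-entrypoint-initdb.d:ro    # path SOURCE (+ MODE): read-only bind mount

volumes:
  db-data:
```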

#docker #syntax #volume #bind mount

Turner Crona


How to access files outside a Docker container

If containers are isolated, how can they communicate with the host machine, perhaps to store data? After all, when we create a container from an image, any data generated is lost when the container is removed.

So we need a way to have permanent storage.

We can do so using Bind Mounts and Volumes.

There’s not a lot of difference between the two, except Bind Mounts can point to any folder on the host computer, and are not managed by Docker directly.

Let’s start with bind mounts. One classic example is logs. Suppose your app creates a log file inside the container, in /usr/src/app/logs. You can map that to a folder on the host machine using the -v (same as --volume) flag when you run the container with docker run, like this: -v ~/logs:/usr/src/app/logs

This maps the container’s log folder to the logs subfolder in the user’s home directory on the host.

Note: the _--mount_ flag works in a very similar way

This is the flag used with the examplenode image we created previously:

docker run -d -p 80:3000 -v ~/logs:/usr/src/app/logs --name node-app examplenode

So now we can run our Node app, and any log will be stored in the host computer, rather than inside the Docker container.

Note that the _examplenode_ app does not generate any logs in _/usr/src/app/logs_; it’s just an example, and you would need to set that logging up first.
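To verify the mapping yourself, you could write a file into the mounted directory from inside the container and read it on the host (assuming the node-app container from the command above is running):

```shell
docker exec node-app sh -c 'echo test >> /usr/src/app/logs/app.log'
cat ~/logs/app.log   # the file shows up on the host side
```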

The details about the volume will be listed when you run docker inspect on the container name, under “Mounts”:

"Mounts": [
    {
        "Type": "bind",
        "Source": "/Users/flavio/logs",
        "Destination": "/usr/src/app/logs",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    }
]
#docker #container #volumes #bind mounts