Docker Image vs Container 💪💪💪

In this article, we'll take a look at the basics of Docker and what distinguishes an image from a container.

Docker is a powerful tool for creating and deploying applications. It simplifies rolling out applications across multiple systems and is a useful tool for integrating new technologies. An application that runs using Docker will start up the same every time on every system. This means that if the application works on your local computer, it’ll work anywhere that supports Docker. That’s great news! It simplifies your development process and can be a powerful tool for continuous delivery.

As you begin working with Docker, there are two key concepts you need to grasp: images and containers. The distinction can be a little confusing at first, so in this post we break them down and make them easy to understand.


What are Docker images?

If you’ve ever worked with virtual machines, Docker images will feel familiar. In other virtual machine environments, images would be called something like “snapshots”: a picture of a virtual machine at a specific point in time. Docker images are a little different, though. For starters, a Docker image can’t ever change. Once you’ve made one, you can delete it, but you can’t modify it. If you need a new version of the snapshot, you create an entirely new image.

This inability to change (called “immutability”) is a powerful feature of Docker images. Because an image can never change, if you get your Docker virtual machine into a working state and create an image, you know that image will always work, forever. This makes it easy to try out additions to your environment: you might experiment with new software packages, or try reorganizing your project files. When you do, you can be sure you won’t break your working instance, because you can’t. You will always be able to shut down your Docker virtual machine and restart it from your existing image, and it’ll be like nothing ever changed.

Sharing images

What’s more, you can share images with other people. Once you’ve created an image of a Docker virtual machine, you can send that image to someone else. That person can start a new virtual machine using your image, and their Docker virtual machine will run exactly the same as yours. This is why Docker is so powerful for creating and deploying applications. Every developer on a team will have the exact same development instance. Each testing instance is exactly the same as the development instance. Your production instance is exactly the same as the testing instance. Because your systems are identical, you don’t need to spend time troubleshooting issues that only exist in one environment.

Developers around the world capitalized on this ability to share images with one another to create Docker Hub. Docker Hub holds images for a plethora of different Docker virtual machines. There are images for just about any common software system in the world. Setting up a new application that runs on Docker is as simple as inserting a few lines into a Docker configuration setup file and waiting for a short download. Much like other repositories, it’s possible for anyone to publish an image on Docker Hub. This means that it’s easy for your team to create a new Docker image and share it with team members around the world. Your testing and production servers can also download from Docker Hub to get their images. And like we mentioned, that image will run exactly the same no matter where it’s been downloaded.
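As an illustration of how little effort this takes (the image name here is just an example), pulling and running a public image from Docker Hub is a couple of commands:

```shell
# Download the official nginx image from Docker Hub
docker pull nginx

# List the images now available locally
docker images

# Start a container from the image, mapping host port 8080 to the container's port 80
docker run -d -p 8080:80 --name my-web nginx
```

The same `docker pull` works identically on your laptop, a teammate's machine, or a production server, which is exactly the guarantee described above.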


What’s a Docker container?

We use the phrase “Docker virtual machine,” but the better way to say that is “Docker container.” If a Docker image is a digital photograph, a Docker container is like a printout of that photograph. In technical terms, we call it an “instance” of the image. Each Docker container runs separately, and you can modify the container while it’s running. Modifications to a Docker container aren’t saved unless you create another image, as we noted. Most Docker images include full operating systems to allow you to do whatever you need on them. This makes it easy to start up a program—like a command line—on the running container. Inside that command line, you can do some work like installing a new software package or configuring the system’s security. Then you can save another image and upload it to somewhere like Docker Hub to share it with people who can make use of your work.
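To make the image-to-container relationship concrete, here's a hypothetical session (the container and image names are invented for illustration) that starts a container, changes it, and then freezes that change into a brand-new image with `docker commit`:

```shell
# Start an interactive container from the ubuntu image
docker run -it --name my-sandbox ubuntu bash

# ...inside the container, do some work, then exit:
#   apt-get update && apt-get install -y curl
#   exit

# Freeze the container's current state into a new image
docker commit my-sandbox my-sandbox-image:v1

# The original ubuntu image is untouched; the new image contains your changes
docker images
```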

Sometimes, you need to do things on a container that you need to save, but can’t become part of the Docker image. A good example is building a web application, where you’ll probably have a Docker container that holds your database. Databases need to be able to write data to the hard drive so that it can be retrieved later. Docker has the ability to configure containers and “share” folders between the container and the host computer. One of the most common use cases for this is to share a directory that holds your application code. You modify the application code on your host machine, and those changes are detected by the application server running inside the Docker container.
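A minimal sketch of that folder-sharing idea (the paths are placeholders): the `-v` flag mounts a host directory into the container, so anything written there outlives the container:

```shell
# Mount ./data on the host at /data/db inside the container,
# so the database files survive container restarts
docker run -d \
  -v "$(pwd)/data:/data/db" \
  --name my-mongo \
  mongo
```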

Working with and sharing Docker containers

Unlike images, Docker containers aren’t meant to be shared. They’re much larger, containing all sorts of installed applications and configuration information. Also, as we mentioned, they don’t save their state. If you were to transfer a Docker container from one computer to another, you’d find that when you started it on the second computer, it would revert to the original image. Instead, if you need to share your work on a container, create an image and share that. Conversely, your work inside a container shouldn’t modify the container itself. As previously mentioned, files that you need to keep past the end of a container’s life should go in a shared folder. Modifying the contents of a running container throws away the benefits Docker provides: once one container differs from another, your guarantee that every container will work in every situation is gone.

Using containers has a number of other benefits for developers. For instance, Docker isolates each container from the others running on the same machine. Need to set things up in a very specific way to support your database server? You don’t have to worry about those changes bleeding over to your web application server, because the two will never share memory or a file system.

Docker Containers and Docker images work together

Docker containers and images work together to unlock the potential of Docker. Each image provides an infinitely reproducible virtual environment, shareable across the room or around the world. Containers build on those images to run applications, simple and complex alike. What’s more, tools like Docker Compose make it simple to “compose” Docker systems encompassing multiple containers from a small config file. You can combine images for a database, web server, caching server, and message queue to configure a complete web application, for instance. Once all of those pieces come together, monitoring them with a Docker-aware application monitoring platform like Retrace is simple. Docker shortens your development times and accelerates your testing processes by making it easy to set up new, identical systems.
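As a sketch of that multi-container idea (the image names and settings here are illustrative, not prescriptive), a small Compose file describing a web server plus a database could look like this:

```yaml
version: '3.5'

services:
  web:
    image: nginx          # illustrative: any web or app server image works here
    ports:
      - 8080:80
  db:
    image: postgres       # illustrative database image
    environment:
      POSTGRES_PASSWORD: example
```

A single `docker-compose up` then starts both containers together on a shared network.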





Docker Basics: Docker Compose

Create, configure, and run a multi-container application using Docker Compose and this introductory tutorial.

Docker Compose is a tool that allows you to run multi-container applications. With Compose, we can use YAML files to configure our application’s services and then create and start all of them with a single command. I use this tool a lot for local development in a microservice environment. It is also lightweight and requires little effort to adopt. Instead of figuring out how to run each service while developing, you can have the environment and the services you need preconfigured, and focus on the service you are currently developing.

With Docker Compose, we can configure networks for our services, volumes, mount points, environment variables: just about everything.

To showcase this, we are going to solve a problem. Our goal is to extract data from MongoDB using Grafana. Grafana does not have out-of-the-box support for MongoDB, so we will have to use a plugin.

The first step is to create our networks. Creating a network is not strictly necessary, since your services will join the default network once started, but we want to showcase custom networks: one network for backend services and one for frontend services. Of course, network configuration can get more advanced; you can specify custom network drivers or even configure static addresses.

version: '3.5'

networks:
  frontend:
    name: frontend-network
  backend:
    name: backend-network
    internal: true


The backend network is going to be internal, so containers attached to it will have no external connectivity.

Then we will set up our MongoDB instance.

version: '3.5'

services:
  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
    volumes:
      - ${DB_PATH}:/data/db
    networks:
      - backend


As you can see, we specified a volume. Volumes can also be declared separately and attached to a service. We used environment variables for the root account; you might have noticed that the password is provided through an environment variable, and the same applies to the volume path. You can also have a more advanced volume configuration in your Compose file and reference it from your service.
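For instance (a sketch, with an invented volume name), the host-path mount above could be replaced by a named volume declared at the top level and referenced from the service:

```yaml
version: '3.5'

services:
  mongo:
    image: mongo
    volumes:
      - mongo-data:/data/db   # reference the named volume declared below

volumes:
  mongo-data:                 # Docker manages where this lives on the host
```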

Our next goal is to set up the proxy server that will sit between Grafana and our MongoDB server. Since it needs a custom Dockerfile, we will build it through Docker Compose, which can build a service’s image from a specified Dockerfile.

So let’s start with the Dockerfile.

FROM node

WORKDIR /usr/src/mongografanaproxy

COPY . /usr/src/mongografanaproxy

EXPOSE 3333

RUN npm install
ENTRYPOINT ["npm","run","server"]

Then let’s add it to compose.

version: '3.5'

services:
  mongo-proxy:
    build:
      context: .
      dockerfile: ProxyDockerfile
    restart: always
    networks:
      - backend


And we will do the same for the Grafana image. Instead of using a stock Grafana image, we will create one with the MongoDB plugin preinstalled.

FROM grafana/grafana

COPY . /var/lib/grafana/plugins/mongodb-grafana

EXPOSE 3000

version: '3.5'

services:
  grafana:
    build:
      context: .
      dockerfile: GrafanaDockerfile
    restart: always
    ports:
      - 3000:3000
    networks:
      - backend
      - frontend


Let’s wrap them all together:

version: '3.5'

services:
  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
    volumes:
      - ${DB_PATH}:/data/db
    networks:
      - backend
  mongo-proxy:
    build:
      context: .
      dockerfile: ProxyDockerfile
    restart: always
    networks:
      - backend
  grafana:
    build:
      context: .
      dockerfile: GrafanaDockerfile
    restart: always
    ports:
      - 3000:3000
    networks:
      - backend
      - frontend
networks:
  frontend:
    name: frontend-network
  backend:
    name: backend-network
    internal: true


So let’s run them all together.

docker-compose -f stack.yaml build
MONGO_USER=root MONGO_PASSWORD=root DB_PATH=~/grafana-mongo  docker-compose -f stack.yaml up
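Rather than passing the variables inline, Docker Compose also reads a `.env` file from the working directory automatically (the file contents below mirror the values above):

```shell
# Create a .env file next to stack.yaml; docker-compose picks it up automatically
cat > .env <<'EOF'
MONGO_USER=root
MONGO_PASSWORD=root
DB_PATH=~/grafana-mongo
EOF

# Now the variables no longer need to be passed on the command line:
#   docker-compose -f stack.yaml up
```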


This code can be found on GitHub; for more, check out the Docker Images, Docker Containers, and Docker Registry posts.

Originally published by Emmanouil Gkatziouras at https://dzone.com
