What is Docker, and why is it so popular?

If you’ve been anywhere near the IT industry over the last five years, you’ve very likely heard of the container platform Docker. Docker and containers are a new way of running software that is revolutionizing software development and delivery.


What is Docker?

Docker is a new technology that allows development teams to build, manage, and secure apps anywhere.

It’s not possible to explain what Docker is without explaining what containers are, so let’s look at a quick explanation of containers and how they work.


A container is a special type of process that is isolated from other processes. Containers are assigned resources that no other process can access, and they cannot access any resources not explicitly assigned to them.

So what’s the big deal?

Processes that are not “containerized” can ask the operating system for access to any file on disk or any network socket.

Until containers became widely available, there was no reliable, guaranteed way to isolate a process to its own set of resources. A properly functioning container has absolutely no way to reach outside its resource “sandbox” to touch resources that were not explicitly assigned to it.

For example, two containers running on the same computer might as well be on two completely different computers, miles away from each other. They are entirely and effectively isolated from each other.

This isolation has several advantages:

  • Two containerized processes can run side-by-side on the same computer, but they can’t interfere with each other.

  • They can’t access each other’s data unless explicitly configured to do so.

  • Two different applications can run containers on the same hardware with confidence that their processes and data are secure.

  • Shared hardware means less hardware. Gone are the days when a company needs thousands of servers to run applications. That hardware can be shared between different business units or entirely different enterprise clients. The result is massive new economies of scale for private and public data centers alike.

Docker explained

Now that you know what containers are, let’s get to Docker.


Docker is both a company and a product. Docker Inc. makes Docker, the container toolkit.

Containers aren’t a singular technology. They are a collection of technologies that have been developed over more than ten years. The underlying Linux features (such as namespaces and cgroups) have been available for quite some time, since about 2008.

Why, then, have containers not been widely used all that time?

The answer is that very few people knew how to make them. Only the most powerful Level-20 Linux Systems Developer Warrior Mage understood all the various technologies needed to create a container.

In those early days, just understanding the underlying technologies, let alone creating containers with them, was a complex chore. The stakes were high: get it wrong, and the benefits of containers turn into liabilities.


If containers don’t contain, they can become the root cause of the latest Hacker News security breach headline.

The masses needed consistent, reliable container creation before containers could go mainstream.

Enter Docker Inc.

The primary features of Docker are:

  • The Docker command-line interface (CLI)
  • The Docker Engine

Docker made it easier to create containers by “wrapping” the complexity of the underlying OS syscalls needed to make them work. Docker’s popularity snowballed, to put it mildly.
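
To get a feel for how much complexity the CLI hides, here is a minimal sketch of starting and stopping a fully isolated process, using the official nginx image from Docker Hub:

# Start an isolated container in the background, publishing host port 8080
# to port 80 inside the container
docker run -d --name web -p 8080:80 nginx

# The container has its own filesystem, process table, and network stack;
# stop and remove it when you are done
docker stop web
docker rm web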

In March 2013, dotCloud, the creator of Docker, open-sourced the project; later that year, the company renamed itself Docker Inc. In just a few years, containers went from relative obscurity to transforming an industry. Docker’s impact rivals the introduction of Virtual Machines in the early 2000s.


How popular is Docker?

Here’s a Google Trends graph of searches for the term “docker” over the last five years:

You can see that Google searches for Docker have seen steady, sustained growth since its introduction in 2013. Docker has established itself as the de facto standard for containerization. There are a few competing products, such as CoreOS’s rkt, but they remain far behind Docker in popularity and market awareness.

Docker’s popularity was buoyed recently when Microsoft announced support for it in both Windows 10 and Windows Server 2016.


Why is Docker so popular, and what explains the rise of containers?

Docker is popular because of the possibilities it opens for software delivery and deployment. Many common problems and inefficiencies are resolved with containers.

The six main reasons for Docker’s popularity are:


1. Ease of use

A large part of Docker’s popularity is how easy it is to use. Docker can be learned quickly, thanks to the many resources available for learning how to create and manage containers. Docker is open-source, so all you need to get started is a computer with an operating system that supports VirtualBox or Docker for Mac/Windows, or one that supports containers natively, such as Linux.
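
As a quick illustration, once Docker is installed, verifying that everything works takes a single command (hello-world is the standard Docker Hub test image):

# Pull and run the tiny test image; it prints a greeting and exits
docker run hello-world

# Confirm the versions of the CLI and the Engine
docker version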


2. Faster scaling of systems

Containers allow much more work to be done by far less computing hardware. In the early days of the Internet, the only way to scale a website was to buy or lease more servers. The cost of popularity was bound, linearly, to the cost of scaling up. Popular sites became victims of their own success, shelling out tens of thousands of dollars for new hardware. Containers allow data center operators to cram far more workloads into less hardware. Shared hardware means lower costs. Operators can bank those profits or pass the savings along to their customers.


3. Better software delivery

Software delivery using containers can also be more efficient. Containers are portable. They are also entirely self-contained. Containers include an isolated disk volume. That volume goes with the container as it is developed and deployed to various environments. The software dependencies (libraries, runtimes, etc.) ship with the container. If a container works on your machine, it will run the same way in a Development, Staging, and Production environment. Containers can eliminate the configuration variance problems common when deploying binaries or raw code.
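
For example, the very same image that works on a developer’s machine can be promoted unchanged through each environment. A sketch, where registry.example.com/myapp:1.0 is a hypothetical image name:

# Build the image once, with all dependencies baked in
docker build -t registry.example.com/myapp:1.0 .

# Push the image to a registry...
docker push registry.example.com/myapp:1.0

# ...then run the identical artifact in dev, staging, or production
docker run -d registry.example.com/myapp:1.0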


4. Flexibility

Operating containerized applications is more flexible and resilient than operating non-containerized ones. Container orchestrators handle the running and monitoring of hundreds or thousands of containers.

Container orchestrators are very powerful tools for managing large deployments and complex systems. Perhaps the only thing more popular than Docker right now is Kubernetes, currently the most popular container orchestrator.
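
As a taste of what an orchestrator offers, scaling a workload in Kubernetes is a single declarative command (the deployment name myapp is hypothetical):

# Declare the desired number of replicas; Kubernetes starts or stops
# containers until reality matches the declaration
kubectl scale deployment myapp --replicas=10

# Watch the orchestrator converge on the desired state
kubectl get pods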


5. Software-defined networking

Docker supports software-defined networking. The Docker CLI and Engine allow operators to define isolated networks for containers, without having to touch a single router. Developers and operators can design systems with complex network topologies and define the networks in configuration files. This is a security benefit, as well. An application’s containers can run in an isolated virtual network, with tightly-controlled ingress and egress paths.
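
A minimal sketch of this with the Docker CLI, assuming made-up network, container, and image names (app-net, api, db, myapp-api):

# Create an isolated bridge network; containers attached to it can reach
# each other by name, invisible to containers on other networks
docker network create --driver bridge app-net

# Attach containers to the network without touching a single router
docker run -d --name api --network app-net myapp-api
docker run -d --name db --network app-net postgres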


6. The rise of microservices architecture

The rise of microservices has also contributed to the popularity of Docker. Microservices are simple functions, usually accessed via HTTP/HTTPS, that do one thing — and do it well.

Software systems typically start as “monoliths,” in which a single binary supports many different system functions. As they grow, monoliths can become difficult to maintain and deploy. Microservices break a system down into simpler functions that can be deployed independently. Containers are terrific hosts for microservices. They are self-contained, easily deployed, and efficient.
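
For illustration, here is a sketch of two independently deployable microservices defined with Docker Compose (the service and image names are invented):

version: '3.5'

services:
  users:
    image: example/users-api    # one small service per business function
  orders:
    image: example/orders-api   # deployed and scaled independently of users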


Should you use Docker?

A question like this is almost always best answered with caution and circumspection. No technology is a panacea. Each technology has drawbacks, tradeoffs, and caveats.

Having said all that…

Yes, use Docker.

I’m making some assumptions with this answer:

  1. That you develop distributed software with the intent of squeezing every last cycle of processing power and byte of RAM out of your infrastructure.

  2. You’re designing your software for high loads and performance, even if you don’t yet have high loads or need the best performance.

  3. You want to achieve high deployment velocity and reap the benefits of same. If you aspire to DevOps practices in software delivery, containers are a key tool in that toolbox.

  4. You either want the benefits of containers, need them, or both. If you already run high-load, distributed, monolithic or microservice applications, you need containers. If you aspire to someday run these high-load, high-performance applications, now is the time to get started with containers.

When you should not use Docker or containers

Developing, deploying, and operating software in containers is very different from traditional development and delivery. It is not without trials and tribulations.

There are tradeoffs to be considered:


If your team needs significant training

Your team’s existing skillset is a significant consideration. If you lack the time or resources to take up containers slowly or to bring on a consulting partner to get you ramped up, you should wait. Container development and operations is not something you want to “figure out as you go,” unless you move very slowly and deliberately.


When you have a high-risk profile

Your risk profile is another major consideration. If you are in a regulated industry or running revenue-generating workloads, be cautious with containers. Operating containers at scale with container orchestrators is very different from operating non-containerized systems. The benefits of containers come with additional complexity in the systems that deliver, operate, and monitor them.


If you can’t hire the talent

For all its popularity, Docker is a very new way of developing and delivering software. The ecosystem is constantly changing, and the population of engineers who are experts in it is still relatively small. During this early stage, many companies are opting to work with Enterprise ISV partners to get started with Docker and its related systems. If this is not an option for you, you’ll want to balance the cost of taking up Docker on your own against the potential benefits.


Consider your system’s complexity

Finally, consider your overall requirements. Are your systems sufficiently complex to justify the additional burden of containerization? If your business is, for example, centered around creating static websites, you may not need containers at all.


In conclusion, Docker is popular because it has revolutionized development

Docker, and the containers it makes possible, have revolutionized the software industry; in five short years, their popularity as a tool and platform has skyrocketed.

The main reason is that containers create vast economies of scale. Systems that used to require expensive, dedicated hardware resources can now share hardware with other systems. Another is that containers are self-contained and portable. If a container works on one host, it will work just as well on any other, as long as that host provides a compatible runtime.

It’s important to consider that Docker isn’t a panacea (no technology is). There are tradeoffs to weigh when planning a technology strategy. Moving to containers is not a trivial undertaking.

Consider the tradeoffs before committing to a Docker-based strategy. A careful accounting of the benefits and costs of containerization may well lead you to adopt Docker. If the numbers add up, Docker and containers have the potential to open up new opportunities for your enterprise.

Wondering how you can monitor microservices for performance problems? Raygun APM, Real User Monitoring and Crash Reporting are designed with modern development practices in mind. See how the Raygun platform can help keep your containers performant.


By David Swersky


Docker Basics: Docker Compose

Create, configure, and run a multi-container application using Docker Compose and this introductory tutorial.

Docker Compose is a tool that allows you to run multi-container applications. With Compose, we can use YAML files to configure our application’s services and then create and start all of the configured services with a single command. I use this tool a lot for local development in a microservice environment. It is also lightweight and requires only a small effort. Instead of working out how to run each service while developing, you can have the environment and the services you need preconfigured, and focus on the service you are currently developing.

With Docker Compose, we can configure a network for our services, volumes, mount points, environment variables: just about everything.

To showcase this, we are going to solve a problem. Our goal is to extract data from MongoDB using Grafana. Grafana does not have out-of-the-box support for MongoDB, so we will have to use a plugin.

The first step is to create our networks. Creating a network is not strictly necessary, since your services will join the default network once started, but to showcase custom networks we will define one network for backend services and one for frontend services. Network configuration can also get more advanced: you can specify custom network drivers or even configure static addresses.

version: '3.5'

networks:
  frontend:
    name: frontend-network
  backend:
    name: backend-network
    internal: true


The backend network is going to be internal so there won’t be any outbound connectivity to the containers attached to it.

Then we will set up our MongoDB instance.

version: '3.5'

services:
  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
    volumes:
      - ${DB_PATH}:/data/db
    networks:
      - backend


As you can see, we specified a volume. Volumes can also be declared separately and attached to a service. We used environment variables for the root account; notice that the password is supplied through an environment variable, and the same applies to the volume path. You can also have a more advanced volume configuration in your Compose file and reference it from your service, as sketched below.
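
For example, a named volume can be declared at the top level and referenced from the service; a minimal sketch in the same Compose file format:

version: '3.5'

services:
  mongo:
    image: mongo
    volumes:
      - mongo-data:/data/db     # reference the named volume by name

volumes:
  mongo-data:                   # declared once; Docker manages its storage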

Our next goal is to set up the proxy server that will sit between Grafana and the MongoDB server. Since it needs a custom Dockerfile, we will build it through docker-compose; Compose can build and spin up a service from a specified Dockerfile.

So let’s start with the Dockerfile.

FROM node

# All build steps below run relative to this directory
WORKDIR /usr/src/mongografanaproxy

COPY . /usr/src/mongografanaproxy

EXPOSE 3333

# Install the proxy's dependencies and define its startup command
RUN npm install
ENTRYPOINT ["npm","run","server"]

Then let’s add it to compose.

version: '3.5'

services:
  mongo-proxy:
    build:
      context: .
      dockerfile: ProxyDockerfile
    restart: always
    networks:
      - backend


We will do the same for the Grafana image. Instead of using a stock Grafana image, we will build one with the MongoDB plugin preinstalled.

FROM grafana/grafana

COPY . /var/lib/grafana/plugins/mongodb-grafana

EXPOSE 3000

And here is the corresponding Compose service:

version: '3.5'

services:
  grafana:
    build:
      context: .
      dockerfile: GrafanaDockerfile
    restart: always
    ports:
      - 3000:3000
    networks:
      - backend
      - frontend


Let’s wrap them all together:

version: '3.5'

services:
  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
    volumes:
      - ${DB_PATH}:/data/db
    networks:
      - backend
  mongo-proxy:
    build:
      context: .
      dockerfile: ProxyDockerfile
    restart: always
    networks:
      - backend
  grafana:
    build:
      context: .
      dockerfile: GrafanaDockerfile
    restart: always
    ports:
      - 3000:3000
    networks:
      - backend
      - frontend
networks:
  frontend:
    name: frontend-network
  backend:
    name: backend-network
    internal: true


So let’s run them all together.

docker-compose -f stack.yaml build
MONGO_USER=root MONGO_PASSWORD=root DB_PATH=~/grafana-mongo  docker-compose -f stack.yaml up
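
Once the stack is up, you can sanity-check it with Compose itself (service names as defined above; Grafana is reachable on the published port 3000):

# List the state of the services in the stack
docker-compose -f stack.yaml ps

# Tail the logs of a single service, e.g. grafana
docker-compose -f stack.yaml logs -f grafana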


This code can be found on GitHub; for more, check out the Docker Images, Docker Containers, and Docker Registry posts.

Originally published by Emmanouil Gkatziouras at https://dzone.com
