Getting Started with Docker

This tutorial will explain the fundamentals of Docker and get you started with some basic usage.

What is Docker?

Docker is open source software to pack, ship, and run any application as a lightweight container. Containers bundle an application with everything it needs, so they run consistently anywhere a Docker runtime is available and you don’t have to worry about whether what you are creating will run everywhere.

Table of Contents

  • What is Docker?
  • Installation
  • Creating Your First Docker Image
  • Your First Dockerfile
  • Conclusion

In the past, virtual machines were used to accomplish many of these same goals. However, Docker containers are smaller and have far less overhead than VMs. VMs are also not very portable, since different VM runtime environments vary widely, while Docker containers are extremely portable. Finally, VMs were not built with software developers in mind: they have no concept of versioning, and logging/monitoring is very difficult. Docker images, on the other hand, are built from layers that can be version controlled, and Docker has logging functionality readily available for use.

You might be wondering what could go into a “container”. Well, anything! You can isolate pieces of your system into separate containers. You could potentially have a container for nginx, a container for MongoDB, and one for Redis. Containers are very easy to set up. Major projects like nginx, MongoDB, and Redis all offer free Docker images for you to use; you can install and run any of these containers with just one shell command, as shown below. This is much easier than using a virtual machine (even with something like Vagrant).
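For example, here is a quick sketch of running an nginx container in one command (the name webserver and the 8080 host port are arbitrary choices for illustration):

$ docker run -d --name webserver -p 8080:80 nginx

This downloads the official nginx image if you don’t already have it, starts it in the background (-d), and maps port 8080 on your host to port 80 in the container.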


Installation

Installing Docker is very easy. Visit the official Docker installation page and follow the instructions tailored for your operating system. There are simple installers for both Mac OS X and Windows.

After you’ve installed Docker, open the terminal and type the following:

$ docker info

If your installation worked, you will see a bunch of information about your Docker installation. If not, you will need to revisit the install docs.

Creating Your First Docker Image

Every Docker container is an “instance” of a Docker image. There is a massive library of pre-built Docker images. However, in order to really understand Docker, you should create an image as an exercise.

Let’s create a Docker image for running Redis. Redis is an easy-to-use in-memory key/value store. It is commonly used as an object cache across many different platforms, environments, and programming languages.

Remember how I said Docker images are built from layers? Well, every Docker image has to start with a base layer. Common base layers are Ubuntu and CentOS. Let’s use Ubuntu. (In production I would use Debian since it is much smaller.)

The following command will start a Docker container based on the ubuntu:latest image. :latest is called the image tag and in this case refers to the latest version of Ubuntu. If you don’t have the image locally, Docker will download it first. The container will start in a bash terminal. Run the following:

$ docker run --name my-redis -it ubuntu:latest bash

-it lets us interact with our container via the command line. --name just gives us a convenient way to reference our container. You should now be inside your container, at a bash prompt that looks something like this:

root@<container-id>:/#

As you can see, you are logged in as root inside the container, so there is no need for sudo. The Ubuntu base image is very bare bones; an important strategy for creating Docker images is keeping them as light as possible. Therefore, you have to install a lot of things you would normally just have. First, let’s install wget:



$ apt-get update
$ apt-get install wget

We need a few other things to build Redis from source and run it:


$ apt-get install build-essential tcl8.5

Now let’s install Redis:



$ wget http://download.redis.io/releases/redis-stable.tar.gz
$ tar xzf redis-stable.tar.gz
$ cd redis-stable
$ make
$ make install
$ ./utils/install_server.sh

This downloads the latest stable version of Redis, builds it from source, and runs the installer. You will need to answer some configuration questions; just use all the defaults. Now start Redis by running the following (it might already be started):


$ service redis_6379 start
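
You can check from inside the container that Redis is answering (redis-cli was installed alongside redis-server by make install):

$ redis-cli ping
PONG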

You now have Redis started in a Docker container. The next step is saving your image. We want to be able to save the image as it is so we can distribute it and use it elsewhere.

Note: this container is just an example and is missing some things, such as port mapping, that would make it truly usable. We will make a production-ready image in the next section.

Exit your container by running:


$ exit

Note that your container is now stopped, since exiting bash ended the container’s main process. You can easily run containers in the background, though.
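
If you ever need to get back into the stopped container, you can start it again and reattach (exit again when you are done):

$ docker start -ai my-redis

-a attaches your terminal to the container’s output and -i reconnects your input, dropping you back into the same bash session.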

Run the following command:


$ docker ps -a

This command shows us all of our Docker containers, running or stopped. Find the container named my-redis; that’s the one we created! Now let’s commit our container as an image:


$ docker commit -m "Added Redis" -a "Your Name" my-redis tlovett1/my-redis:latest

This command compiles our container’s changes into an image. -m specifies a commit message, and -a lets us specify an author. tlovett1/my-redis:latest is formatted author/name:version, where author refers to your username on Docker Hub. If you don’t want to push your image to Docker Hub, this doesn’t matter, and you can use anything you want. If you do, you will need to create an account and use docker push to push the image upstream.
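
If you do want to publish the image, the flow is short (assuming you have a Docker Hub account whose username matches the author part of the tag):

$ docker login
$ docker push tlovett1/my-redis:latest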

docker commit creates an image containing the changes we made to the original ubuntu image. This makes distributing Docker images fast, since people won’t have to re-download layers (such as ubuntu:latest) that they already have. In a container, every time you run a command, add a file or directory, create an environment variable, etc., a new layer is created. docker commit groups these layers into an image. When distributing Docker images, you should carefully optimize your layers to keep them as small as possible. This tutorial does not cover layer optimization.
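You can see these layers for yourself by inspecting the image we just committed:

$ docker history tlovett1/my-redis:latest

Each row in the output is a layer, along with the command that created it and its size.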

You might be thinking that this is somewhat messy, since your container is basically a black box. What if you want to redo your image? Would you have to write down the steps to reproduce the entire thing? What if you wanted to recreate your image from CentOS instead of Ubuntu? You would be right: creating Docker images this way is not the best idea. Instead, you should use Dockerfiles.

Your First Dockerfile

A Dockerfile is a plain text file containing a set of instructions, much like a shell script, for building a Docker image. Let’s create a Dockerfile that generates an image like the one we just created manually, but with some important additions.

Create a file called Dockerfile. Paste the following into the new file:


# Start from the latest Ubuntu base image
FROM ubuntu:latest

# Install the tools needed to download and build Redis
RUN apt-get update
RUN apt-get install -y wget
RUN apt-get install -y build-essential tcl8.5

# Download and compile Redis from source
RUN wget http://download.redis.io/releases/redis-stable.tar.gz
RUN tar xzf redis-stable.tar.gz
RUN cd redis-stable && make && make install

# Note: install_server.sh is skipped here; it is an interactive script,
# and the ENTRYPOINT below starts redis-server directly anyway.

# Document the standard Redis port
EXPOSE 6379

# Run redis-server whenever a container starts from this image
ENTRYPOINT ["redis-server"]

There are some special things in this Dockerfile. FROM tells Docker which image to start from; as you can see, we are starting with Ubuntu. RUN simply runs a shell command. EXPOSE documents the port the container listens on (6379 is the standard Redis port), but note that to reach it from outside you still need to publish the port with -p when you run the container. ENTRYPOINT designates the command or application to be run when a container is started. In this case, whenever a container is created from our image, redis-server will be run.

Now that we’ve written our Dockerfile, let’s build an image from it. Run the following command from within the folder of your Dockerfile:


$ docker build -t redis .

This command will create an image tagged redis from your Dockerfile.
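
You can confirm the build worked by listing your local images:

$ docker images redis

You should see your new redis image along with its tag, ID, and size.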

Finally, let’s create a running container from our image. Run the following command:


$ docker run -d -p 6379:6379 redis

That’s it! Now you have Redis up and running on your machine. This container/image is production-ready.
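
As a quick sanity check (assuming you have the redis-cli client installed on your host machine):

$ docker ps
$ redis-cli -p 6379 ping
PONG

docker ps should show your container with the 6379 port mapping, and the PONG reply confirms Redis is answering through the mapped port.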


Conclusion

Docker is a powerful tool for creating and running distributable, lightweight applications both locally and in production.

There are many tools and services in the Docker ecosystem. For example, Dockunit is a tool powered by Docker that lets you test your software across different environments. This tutorial has just scratched the surface of the Docker world.


Originally published by Taylor Lovett at scotch.io

=========================================




WordPress in Docker. Part 1: Dockerization

This entry-level guide will tell you why and how to Dockerize your WordPress projects.


What is Docker | Docker Tutorial for Beginners

This DevOps Docker tutorial on what Docker is will help you understand how to use Docker Hub, Docker images, Docker containers, and Docker Compose. It explains Docker’s architecture and the Docker Engine in detail.

The tutorial also includes a hands-on session, by the end of which you will learn to pull a CentOS Docker image and spin up your own Docker container. You will also see how to launch multiple Docker containers using Docker Compose. Finally, it covers the role Docker plays in the DevOps lifecycle.

The hands-on session is performed on a 64-bit Ubuntu machine with Docker installed.

Docker Basics: Docker Compose

Create, configure, and run a multi-container application using Docker Compose and this introductory tutorial.


Docker Compose is a tool that allows you to run multi-container applications. With Compose, we can use YAML files to configure our application’s services and then create and start all of the configured services with a single command. I use this tool a lot when it comes to local development in a microservice environment. It is also lightweight and requires only a small effort to set up. Instead of managing how to run each service while developing, you can have the environment and services you need preconfigured and focus on the service that you are currently developing.
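
For example, with a docker-compose.yml file in the current directory, bringing up every configured service is a single command:

docker-compose up -d

The -d flag starts the services in the background.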

With Docker Compose, we can configure a network for our services, volumes, mount points, environment variables, and just about everything else.

To showcase this, we are going to solve a problem. Our goal is to extract data from MongoDB using Grafana. Grafana does not have out-of-the-box support for MongoDB, so we will have to use a plugin.

The first step is to create our networks. Creating a network is not strictly necessary, since your services will join the default network once started; we do it here to showcase custom networks, with one network for backend services and one for frontend services. Of course, network configuration can get more advanced: you can specify custom network drivers or even configure static addresses.

version: '3.5'

networks:
  frontend:
    name: frontend-network
  backend:
    name: backend-network
    internal: true


The backend network is going to be internal, so the containers attached to it won’t have any external connectivity.
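
Once the stack is running, you can verify the internal flag on the network we declared:

docker network inspect backend-network

The output includes an "Internal": true entry for this network.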

Then we will set up our MongoDB instance.

version: '3.5'

services:
  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
    volumes:
      - ${DB_PATH}:/data/db
    networks:
      - backend


As you see, we specified a volume. Volumes can also be declared separately and attached to a service. We used environment variables for the root account, so the username and password, like the volume path, are provided through environment variables at run time. You can also have a more advanced volume configuration in your Compose file and reference named volumes from your services.
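
For instance, here is a minimal sketch of a named volume declared at the top level and referenced by the service (mongo-data is a hypothetical name):

version: '3.5'

services:
  mongo:
    image: mongo
    volumes:
      - mongo-data:/data/db

volumes:
  mongo-data: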

Our next goal is to set up the proxy server that will sit between Grafana and our MongoDB server. Since it needs a custom image, we will build it through docker-compose: Compose can build a service’s image from a specified Dockerfile.

So let’s start with the Dockerfile.

# Start from the official Node.js image
FROM node

# All subsequent commands run relative to this directory
WORKDIR /usr/src/mongografanaproxy

# Copy the proxy source code (the build context) into the image
COPY . /usr/src/mongografanaproxy

# Document the port the proxy listens on
EXPOSE 3333

# Install dependencies; no cd needed, since WORKDIR already applies
RUN npm install

# Start the proxy when the container runs
ENTRYPOINT ["npm","run","server"]

Then let’s add it to compose.

version: '3.5'

services:
  mongo-proxy:
    build:
      context: .
      dockerfile: ProxyDockerfile
    restart: always
    networks:
      - backend


We will do the same for the Grafana image that we want to use: instead of using a stock Grafana image, we will create one with the plugin preinstalled.

# Start from the official Grafana image
FROM grafana/grafana

# Copy the plugin source (the build context) into Grafana's plugin directory
COPY . /var/lib/grafana/plugins/mongodb-grafana

# Document Grafana's default port
EXPOSE 3000

version: '3.5'

services:
  grafana:
    build:
      context: .
      dockerfile: GrafanaDockerfile
    restart: always
    ports:
      - 3000:3000
    networks:
      - backend
      - frontend


Let’s wrap them all together:

version: '3.5'

services:
  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
    volumes:
      - ${DB_PATH}:/data/db
    networks:
      - backend
  mongo-proxy:
    build:
      context: .
      dockerfile: ProxyDockerfile
    restart: always
    networks:
      - backend
  grafana:
    build:
      context: .
      dockerfile: GrafanaDockerfile
    restart: always
    ports:
      - 3000:3000
    networks:
      - backend
      - frontend
networks:
  frontend:
    name: frontend-network
  backend:
    name: backend-network
    internal: true


So let’s build and run them all together.

docker-compose -f stack.yaml build
MONGO_USER=root MONGO_PASSWORD=root DB_PATH=~/grafana-mongo  docker-compose -f stack.yaml up
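
When you are finished, one command stops and removes the containers along with the networks Compose created:

docker-compose -f stack.yaml down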


This code can be found on GitHub, and for more, check out the Docker Images, Docker Containers, and Docker Registry posts.

Originally published by Emmanouil Gkatziouras at https://dzone.com
