Docker for Absolute Beginners

Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications, whether on laptops, data center VMs, or the cloud. In this article, you will learn Docker through hands-on coding exercises, aimed at beginners in DevOps.

Whether you are planning to start your career in DevOps or you are already into it, if you do not have **Docker** listed on your resume, it’s undoubtedly time to think about it, as **Docker** is one of the most critical skills for anyone in the **DevOps** arena.

In this post, I will try my best to explain **Docker** in the simplest way I can.

Let’s begin by understanding: what is Docker?

In simple terms, **Docker** is a software platform that simplifies the process of building, running, managing, and distributing applications. It does this by virtualizing the operating system of the computer on which it is installed and running.

The first edition of **Docker** was released in 2013.

Docker is developed using the Go programming language.

Looking at the rich set of functionality Docker has to offer, it has been widely adopted by some of the world’s leading organizations and universities, such as Visa, PayPal, Cornell University, and Indiana University (just to name a few), to run and manage their applications.

Now, let’s try to understand the problem, and the solution Docker has to offer.

The Problem

Let’s say you have three different **Python-based applications **that you plan to host on a single server (which could either be a physical or a virtual machine).

Each of these applications makes use of a different version of Python, and the associated libraries and dependencies also differ from one application to another.

Since we cannot have different versions of **Python** installed system-wide on the same machine, this prevents us from hosting all three applications on the same computer.

The Solution

Let’s look at how we could solve this problem without making use of Docker. In such a scenario, we could solve this problem either by having three physical machines, or a single physical machine, which is powerful enough to host and run three virtual machines on it.

Both the options would allow us to install different versions of Python on each of these machines, along with their associated dependencies.

Irrespective of which solution we choose, the costs associated with procuring and maintaining the hardware are quite expensive.

Now, let’s check out how Docker could be an efficient and cost-effective solution to this problem.

To understand this, we need to take a look at how exactly Docker functions.

The machine on which Docker is installed and running is usually referred to as a Docker Host or Host in simple terms.

So, whenever you plan to deploy an application on the host, it would create a logical entity on it to host that application. In Docker terminology, we call this logical entity a Container or Docker Container to be more precise.

A Docker Container doesn’t have any operating system installed and running on it. But it would have a virtual copy of the process table, network interface(s), and the file system mount point(s). These have been inherited from the operating system of the host on which the container is hosted and running.

The kernel of the host’s operating system, however, is shared across all the containers that are running on it.

This allows each container to be isolated from the others present on the same host. As a result, multiple containers with different application requirements and dependencies can run on the same host, as long as they have the same operating system requirements.

To understand how Docker has been beneficial in solving this problem, you need to refer to the next section, which discusses the advantages and disadvantages of using Docker.

In short, Docker would virtualize the operating system of the host on which it is installed and running, rather than virtualizing the hardware components.
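To make this concrete, here is a minimal sketch, assuming Docker is already installed and using the official python images from Docker Hub (the docker run command is covered later in this guide). The two commands below run two different Python versions side by side on the same host, each in its own container, while both share the host’s kernel:

$ docker run python:2.7 python --version

$ docker run python:3.8 python --version

Each container carries its own Python installation and libraries, which is exactly how Docker resolves the three-application problem described above.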

The Advantages and Disadvantages of using Docker

Advantages of using Docker

Some of the key benefits of using Docker are listed below:

  • Docker allows multiple applications with different requirements and dependencies to be hosted together on the same host, as long as they have the same operating system requirements.
  • Storage optimized. A large number of applications can be hosted on the same host, as containers are usually just a few megabytes in size and consume very little disk space.
  • Lightweight. A container does not have an operating system installed on it. Thus, it consumes very little memory in comparison to a virtual machine (which would have a complete operating system installed and running on it). This also reduces the boot-up time to just a few seconds, compared to the couple of minutes required to boot up a virtual machine.
  • Reduced costs. Docker is less demanding when it comes to the hardware required to run it.

Disadvantages of using Docker

  • Applications with different operating system requirements cannot be hosted together on the same Docker Host. For example, let’s say we have 4 different applications, out of which 3 applications require a Linux-based operating system and the other application requires a Windows-based operating system. In such a scenario, the 3 applications that require a Linux-based operating system can be hosted on a single Docker Host, whereas the application that requires a Windows-based operating system needs to be hosted on a different Docker Host.

Core Components of Docker

Docker Engine is one of the core components of Docker. It is responsible for the overall functioning of the Docker platform.

Docker Engine is a client-server based application and consists of 3 main components.

  1. Server
  2. REST API
  3. Client

Image Source: https://docs.docker.com

The Server runs a daemon known as dockerd (Docker Daemon), which is nothing but a process. It is responsible for creating and managing Docker Images, Containers, Networks and Volumes on the Docker platform.

The REST API specifies how the applications can interact with the Server, and instruct it to get their job done.

The Client is nothing but a command line interface that allows users to interact with Docker using commands.
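A quick way to see this client-server split on your own machine is the docker version command, whose output is divided into a Client section (the CLI) and a Server section (the Docker daemon):

$ docker version

If the daemon isn’t running, the Client section still prints, which makes this a handy first diagnostic.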

Docker Terminology

Let us take a quick look at some of the terminology associated with Docker.

Docker Images and Docker Containers are the two essential things that you will come across daily while working with Docker.

In simple terms, a Docker Image is a template that contains the application, and all the dependencies required to run that application on Docker.

On the other hand, as stated earlier, a Docker Container is a logical entity. In more precise terms, it is a running instance of the Docker Image.
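One image can back any number of containers. As a small sketch (using the docker create command explained below), running the same command twice produces two independent containers from the single ubuntu image, each with its own container ID:

$ docker create ubuntu

$ docker create ubuntu

Two separate containers, one shared image.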

What is Docker Hub?

Docker Hub is the official online repository where you could find all the Docker Images that are available for us to use.

Docker Hub also allows us to store and distribute our custom images as well if we wish to do so. We could also make them either public or private, based on our requirements.

Please Note: Free users are only allowed to keep one Docker Image as private. If we wish to keep more than one Docker Image as private, we need to subscribe to a paid subscription plan.

Docker Editions

Docker is available in 2 different editions, as listed below:

  • Community Edition (CE)
  • Enterprise Edition (EE)

The Community Edition is suitable for individual developers and small teams. It offers limited functionality, in comparison to the Enterprise Edition.

The **Enterprise Edition**, on the other hand, is suitable for large teams and for using Docker in production environments.

The Enterprise Edition is further categorized into three different editions, as listed below:

  • Basic Edition
  • Standard Edition
  • Advanced Edition

Installing Docker

One last thing that we need to know before we go ahead and get our hands dirty with Docker is actually to have Docker installed.

You can follow the official Docker CE installation guides to install Docker on your machine; they are simple and straightforward.

Want to skip installation and head off straight to practicing Docker?

Just in case you are feeling too lazy to install Docker, or you don’t have enough resources available on your computer, you need not worry; here’s the solution to your problem.

You can head over to Play with Docker, an online playground for Docker. It allows users to practice Docker commands immediately, without having to install anything on their machines. The best part is that it’s simple to use and available free of cost.

Docker Commands

Now it’s time to get our hands dirty with the Docker commands we have all been waiting for.

docker create

The first command we will be looking at is the **docker create** command.

This command allows us to create a new container.

The syntax for this command is as shown below:

docker create [options] IMAGE [commands] [arguments]

Please Note: Anything enclosed within square brackets is optional. This is applicable to all the commands in this guide.

Some of the examples of using this command are shown below:

$ docker create fedora
02576e880a2ccbb4ce5c51032ea3b3bb8316e5b626861fc87d28627c810af03

In the above example, the docker create command would create a new container using the latest Fedora image.

Before creating the container, Docker checks whether the latest official Fedora image is available on the Docker Host. If it isn’t, Docker downloads the Fedora image from Docker Hub before creating the container. If the Fedora image is already present on the Docker Host, it uses that image to create the container.

If the container was created successfully, Docker will return the container ID. For instance, in the above example 02576e880a2ccbb4ce5c51032ea3b3bb8316e5b626861fc87d28627c810af03 is the container ID returned by Docker.

Each container has a unique container ID. We refer to the container using its container ID for performing various operations on the container, such as starting, stopping, restarting, and so on.

Now, let us refer to another example of docker create command, which has options and commands being passed to it.

$ docker create -t -i ubuntu bash
30986b73dc0022dbba81648d9e35e6e866b4356f026e75660460c3474f1ca005

In the above example, the docker create command creates a container using the Ubuntu image (As stated earlier, if the image isn’t available on the Docker Host, it will go ahead and download the latest image from the Docker Hub before creating the container).

The options -t and -i instruct Docker to allocate a pseudo-terminal to the container and keep its standard input open, so that the user can interact with the container. The bash argument instructs Docker to execute the bash command whenever the container is started.

docker ps

The next command we will look at is the docker ps command.

The docker ps command allows us to view all the containers that are running on the Docker Host.

$ docker ps
CONTAINER ID   IMAGE    COMMAND   CREATED          STATUS              PORTS   NAMES
30986b73dc00   ubuntu   "bash"    45 minutes ago   Up About a minute           elated_franklin

It only displays the containers that are presently running on the Docker Host.

If you want to view all the containers that were created on this Docker Host, irrespective of their current status (running or exited), then you need to include the option -a, which displays all of them.

$ docker ps -a
CONTAINER ID   IMAGE    COMMAND       CREATED             STATUS          PORTS   NAMES
30986b73dc00   ubuntu   "bash"        About an hour ago   Up 29 minutes           elated_franklin
02576e880a2c   fedora   "/bin/bash"   About an hour ago   Created                 hungry_sinoussi

Before we proceed further, let’s try to decode and understand the output of the docker ps command.

CONTAINER ID: A unique string consisting of alpha-numeric characters, associated with each container.

IMAGE: Name of the Docker Image used to create this container.

COMMAND: Any application specific command(s) that needs to be executed when the container is started.

CREATED: This shows the time elapsed since the container was created.

STATUS: This shows the current status of the container, along with the time elapsed in its present state.

If the container is running, it will display as Up along with the time period elapsed (for example, Up About an hour or Up 3 minutes).

If the container is stopped, then it will display as Exited followed by the exit status code within round brackets, along with the time period elapsed (for example, Exited (0) 3 weeks ago or Exited (137) 15 seconds ago, where 0 and 137 are the exit codes).

PORTS: This displays any port mappings defined for the container.

NAMES: Apart from the CONTAINER ID, each container is also assigned a unique name. We can refer to a container either using its container ID or its unique name. Docker automatically assigns a random two-word name to each container it creates. If you want to specify your own name for the container, you can do so by including the --name (double hyphen followed by name) option with the docker create or docker run command (we will look at the docker run command later).
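For example, here is a sketch that assigns a name of our own choosing (the name web-app is just an illustration) instead of the auto-generated one:

$ docker create --name web-app ubuntu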

I hope this gives you a better understanding of the output of the docker ps command.

docker start

The next command we will look at is the docker start command.

This command starts any stopped container(s).

The syntax for this command is as shown below:

docker start [options] CONTAINER ID/NAME [CONTAINER ID/NAME…]

We can start a container either by specifying the first few unique characters of its container ID or by specifying its name.

Some of the examples of using this command are shown below:

$ docker start 30986

In the above example, Docker starts the container beginning with the container ID 30986.

$ docker start elated_franklin

Whereas in this example, Docker starts the container named elated_franklin.

docker stop

The next command on the list is the **docker stop** command.

This command stops any running container(s).

The syntax for this command is as shown below:

docker stop [options] CONTAINER ID/NAME [CONTAINER ID/NAME…]

It is similar to the docker start command.

We can stop the container either by specifying the first few unique characters of its container ID or by specifying its name.

Some of the examples of using this command are shown below:

$ docker stop 30986

In the above example, Docker will stop the container beginning with the container ID 30986.

$ docker stop elated_franklin

Whereas in this example, Docker will stop the container named elated_franklin.

docker restart

The next command we will look at is the **docker restart** command.

This command restarts any running container(s).

The syntax for this command is as shown below:

docker restart [options] CONTAINER ID/NAME [CONTAINER ID/NAME…]

We can restart the container either by specifying the first few unique characters of its container ID or by specifying its name.

Some of the examples of using this command are shown below:

$ docker restart 30986

In the above example, Docker will restart the container beginning with the container ID 30986.

$ docker restart elated_franklin

Whereas in this example, Docker will restart the container named elated_franklin.

docker run

The next command we will be looking at is the docker run command.

This command first creates the container, and then starts it. In short, this command is a combination of the docker create and docker start commands.

The syntax for this command is as shown below:

docker run [options] IMAGE [commands] [arguments]

It has a syntax similar to that of the docker create command.

Some of the examples of using this command are shown below:

$ docker run ubuntu
30fa018c72682d78cf168626b5e6138bb3b3ae23015c5ec4bbcc2a088e67520

In the above example, Docker will create the container using the latest Ubuntu image and then immediately start the container.

If we execute the above command, it would start the container and immediately stop it; we wouldn’t get any chance to interact with the container at all, because the container’s default bash process exits as soon as it finds no terminal attached.

If we want to interact with the container, we need to specify the option -it (hyphen followed by i and t) with the docker run command. This presents us with a terminal, through which we can interact with the container by typing in appropriate commands. Below is an example of the same.

$ docker run -it ubuntu
root@<container-id>:/#

In order to come out of the container, you need to type exit in the terminal.

docker rm

Moving on to the next command: if we want to delete a container, we use the docker rm command.

The syntax for this command is as shown below:

docker rm [options] CONTAINER ID/NAME [CONTAINER ID/NAME...]

Some of the examples of using this command are shown below:

$ docker rm 30fa elated_franklin

In the above example, we are instructing Docker to delete 2 containers within a single command. The first container to be deleted is specified using its container ID, and the second container to be deleted is specified using its name.

Please Note: The containers need to be in a stopped state in order to be deleted.
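So, if a container is still running, stop it first and then delete it, for example:

$ docker stop elated_franklin
$ docker rm elated_franklin

Alternatively, the docker rm command accepts a -f option that force-removes a running container in one step.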

docker images

**docker images** is the next command on the list.

This command lists out all the Docker Images that are present on your Docker Host.

$ docker images
REPOSITORY  TAG      IMAGE ID       CREATED        SIZE
mysql       latest   7bb2586065cd   38 hours ago   477MB
httpd       latest   5eace252f2f2   38 hours ago   132MB
ubuntu      16.04    9361ce633ff1   2 weeks ago    118MB
ubuntu      trusty   390582d83ead   2 weeks ago    188MB
fedora      latest   d09302f77cfc   2 weeks ago    275MB
ubuntu      latest   94e814e2efa8   2 weeks ago    88.9MB

Let us decode the output of the docker images command.

REPOSITORY: This represents the unique name of the Docker Image.

TAG: Each image is associated with a unique tag. A tag basically represents a version of the image.

A tag is usually represented either using a word, a set of numbers, or a combination of alphanumeric characters.

IMAGE ID: A unique string consisting of alpha-numeric characters, associated with each image.

CREATED: This shows the time elapsed since this image has been created.

SIZE: This shows the size of the image.
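Tags also come into play when downloading images. As a quick sketch using the docker pull command (which downloads an image to the Docker Host without creating a container), you can request a specific version by its tag; if you omit the tag, Docker assumes latest:

$ docker pull ubuntu:16.04

$ docker pull ubuntu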

docker rmi

The next command on the list is the docker rmi command.

The docker rmi command allows us to remove one or more images from the Docker Host.

The syntax for this command is as shown below:

docker rmi [options] IMAGE NAME/ID [IMAGE NAME/ID...]

Some of the examples of using this command are shown below:

docker rmi mysql

The above command removes the image named mysql from the Docker Host.

docker rmi httpd fedora

The above command removes the images named httpd and fedora from the Docker Host.

docker rmi 94e81

The above command removes the image starting with the image ID 94e81 from the Docker Host.

docker rmi ubuntu:trusty

The above command removes the image named ubuntu, with the tag trusty from the Docker Host.

These were some of the basic Docker commands you will see. There are many more Docker commands to explore.

Wrap-Up

Containerization has recently gotten the attention it deserves, although it has been around for a long time. Some of the top tech companies like Google, Amazon Web Services (AWS), Intel, Tesla, and Juniper Networks have their own custom version of container engines. They heavily rely on them to build, run, manage, and distribute their applications.

Docker is an extremely powerful containerization engine, and it has a lot to offer when it comes to building, running, managing, and distributing your applications efficiently.

You have just seen Docker at a very high level. There is a lot more to learn about Docker, such as:

  • Docker commands (More powerful commands)
  • Docker Images (Build your own custom images)
  • Docker Networking (Setup and configure networking)
  • Docker Services (Grouping containers that use the same image)
  • Docker Stack (Grouping services required by an application)
  • Docker Compose (Tool for managing and running multiple containers)
  • Docker Swarm (Grouping and managing one or more machines on which Docker is running)
  • And much more…

If you have found Docker fascinating and are interested in learning more about it, I would recommend enrolling in the courses listed in the Further reading section below. I found them to be very informative and straight to the point.

**Docker** is a future-proof skill and is just picking up momentum.

WordPress in Docker. Part 1: Dockerization

This entry-level guide will tell you why and how to Dockerize your WordPress projects.

What is Docker | Docker Tutorial for Beginners

This DevOps Docker tutorial on what Docker is will help you understand how to use Docker Hub, Docker Images, Docker Containers, and Docker Compose. It also explains Docker’s architecture and Docker Engine in detail.

This Docker tutorial also includes a hands-on session, by the end of which you will learn to pull a CentOS Docker Image and spin up your own Docker Container. You will also see how to launch multiple Docker containers using Docker Compose. Finally, it covers the role Docker plays in the DevOps life cycle.

The Hands-On session is performed on an Ubuntu-64bit machine in which Docker is installed.

Deploying Dockerized .NET Apps Without Being a DevOps Guru

Originally published by Julie Lerman at https://blog.docker.com

.NET Developers who use Visual Studio have access to a great extension to help them create Docker images for their apps. The Visual Studio Tools for Docker simplify the task of developing and debugging apps destined for Docker images. But what happens when you are ready to move from debugging in Visual Studio to deploying your image to a container in the cloud?

This blog post will demonstrate first using the tooling to publish a simple ASP.NET Core API in an image to the Docker hub, and then creating a Linux virtual machine in Azure to host the API. It will also engage Docker Compose and Microsoft SQL Server for Linux in a Docker container, along with a Docker Volume for persistence. The goal is to create a simple test environment and a low-stress path to getting your first experience with publishing an app in Docker.

Using the Docker Tools to aid in building and debugging the API is the focus of a series of articles published in the April, May, and June 2019 issues of MSDN Magazine, so I’ll provide only a high-level look at the solution.

Overview of the Sample App

The API allows me to track the names of Docker Captains. It’s not a real-world solution, but enough to give me something to work with. You can download the solution from github.com/julielerman/dockercaptains. I’ll provide a few highlights here.

   public class Captain
   {
       public int CaptainId { get; set; }
       public string Name { get; set; }
   }

The API leverages Entity Framework Core (EF Core) for its data persistence. This requires a class that inherits from the EF Core DbContext. My class, CaptainContext, specifies a DbSet to work from and defines a bit of seed data for the database.

Enabling a Dynamic Connection String

The startup.cs file uses ASP.NET Core’s dependency injection to configure a SQL Server provider for the CaptainContext. There is also code to read a connection string from configuration and replace a password placeholder with the real password supplied via an environment variable within the Docker container, keeping the password less visible to prying eyes.

public void ConfigureServices(IServiceCollection services)
{
 services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
 // Read the connection string, then swap the ENVPW placeholder
 // for the real password supplied via the DB_PW environment variable.
 var conn = Configuration["ConnectionStrings:CaptainDB"];
 conn = conn.Replace("ENVPW", Configuration["DB_PW"]);
 services.AddDbContext<CaptainContext>(options => options.UseSqlServer(conn));
}

The VS Tools generated a Dockerfile, and I made only one change to the default: adding the CaptainDB connection string ENV variable with its ENVPW placeholder:

ENV ConnectionStrings:CaptainDB "Server=db;Database=CaptainDB;User=sa;Password=ENVPW;"

ASP.NET Core can discover Docker environment variables when running in a Docker container.
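As a standalone sketch of that mechanism (the image name dataapidocker comes from the docker-compose file below; the password value is purely illustrative), the placeholder would be filled at run time by passing the environment variable to the container:

$ docker run -e DB_PW=my-secret-pw dataapidocker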

Orchestrating with a docker-compose file

Finally comes the docker-compose.yml file. This sets up a service for the API image, another for the database server image and a volume for persisting the data.

version: '3.4'

services:
  dataapidocker:
    image: ${DOCKER_REGISTRY-}dataapidocker
    build:
      context: .
      dockerfile: DataAPIDocker/Dockerfile
    environment:
      - DB_PW
    depends_on:
      - db
    ports:
      - 80:80
  db:
    image: mcr.microsoft.com/mssql/server
    volumes:
      - mssql-server-julie-data:/var/opt/mssql/data
    environment:
      SA_PASSWORD: "${DB_PW}"
      ACCEPT_EULA: "Y"
    ports:
      - "1433:1433"
volumes:
  mssql-server-julie-data: {}

Notice that I’m declaring the DB_PW environment variable in the API’s service definition and referencing it in the db’s service definition.

There’s also an .env file in the solution where the value of DB_PW is defined, keeping it out of the docker-compose file (the password is shown here as a placeholder):

DB_PW=<your-strong-password>

Docker will read that file by default.
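To verify that the substitution is working, docker-compose can print the fully resolved file, with values from .env applied:

$ docker-compose config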

I got this solution set up and running from within Visual Studio on my development machine. And I love that even when the debugger publishes the app to a local container, I can still debug while it’s running in that container. That’s a super-power of the tools extension.

Using the Tools to Publish to Docker Hub

Once I was happy with my progress, I wanted to get this demo running in the cloud. Although I can easily use the CLI to push and pull, I love that the Docker Tools in VS can handle this part. The Dockerfile created by the tool has instructions for a multi-stage build. When you target Visual Studio to a release build, the tools will build the release image described in the Dockerfile. Publishing will rebuild that release image and publish it to your destination registry.
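For comparison, a rough CLI equivalent of what the tooling does behind the scenes might look like this (the image name and tag mirror the ones used later in this article, and are otherwise illustrative):

$ docker build -f DataAPIDocker/Dockerfile -t julielerman/dataapidocker:formylinuxvm .
$ docker push julielerman/dataapidocker:formylinuxvm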

You can see my full solution in the screenshot below. My API project is called DataAPIDocker. Notice there is also a docker-compose project. This was created by the Docker Tools. But it is the DataAPIDocker project that will be published first into an image and then to a repository.

Publishing the project (right-click the project in Solution Explorer and choose Publish) presents a Publish page where you can choose to create a New Profile. A publish profile lets you define where to publish your app and also predefine any needed credentials. Creating a profile begins with selecting from a list of targets; for publishing a Docker image, select Container Registry. That option then gives you predefined registries to choose from, such as Azure Container Registry, Docker Hub, or a custom registry, which could be an instance of Docker Trusted Registry.

I’ll choose Docker Hub and click Publish. 

The last step is to provide your Docker Hub repository name. If you don’t already have docker.config set up with your credentials, then you also need to supply your password. 

After creating a profile, it gets stored in the Visual Studio project.

You’ll be returned to the Publish overview page with this profile selected, where you can edit the default “latest” tag name. Click the Publish button to trigger the Docker Tools to do their job. 

A window will open up showing the progress of the docker push command run by the tools.

After the push is complete, you can open Docker Hub to see your new repository, which by default is public.

Setting up an Azure Linux VM to Host the Containers

Now that the image is hosted in the cloud, you can turn your sights to hosting a container instance for running the app. Since my Visual Studio Subscription includes credits on Azure, I’ll use those. I will create a Linux Virtual Machine on Azure with Docker and Docker Compose, then run an instance of my new image along with a SQL Server and a data volume.

I found two interesting paths for doing this at the command line. One was by using the Azure CLI at the command line in Windows, macOS or Linux. It is so much easier than doing it through the Azure Portal.

I found this doc to be really helpful as I was doing this for the first time. The article walks you through installing the Azure CLI, logging into Azure, creating a Linux VM with Docker already installed then installing Docker Compose. Keep in mind that this will create a default machine using “Standard DS1 v2 (1 vcpus, 3.5 GB memory)” setup. That VM size has an estimated cost of about $54 (USD) per month. 

Alternatively, you can use Docker Machine, a Docker tool for installing Docker on virtual hosts and managing the hosts. This path is a little more automated but it does require that you use bash and that you start by using the Azure CLI to log into your Azure account using the command az login.

Once that’s done, you can use parameters of docker-machine to tell it you’re creating this in Azure, specify your subscription, ssh username, port and size of the machine to create. The last uses standard Azure VM size names. 

I found it interesting to use the Azure CLI workflow which was educational and then consider the docker-machine workflow as a shortcut version.

Since I was still working on my Windows machine, and don’t have the Windows Subsystem for Linux installed there, I opened up Visual Studio Code and switched my terminal shell to use bash. That let me use docker-machine without issue. I also have the Azure Login extension in VS Code, so I was already logged in to Azure.

I first had to get the subscription ID of my Azure Account which I did using the CLI. Then I plugged the id into the docker-machine command:

docker-machine create -d azure \
   --azure-subscription-id [this is where I pasted my subscription id] \
   --azure-ssh-user azureuser \
   --azure-open-port 80 \
   --azure-size "Standard_DS1_v2" \
   mylinuxvm

There are more settings you can apply, such as defining the resource and location. The output from this command will pause, providing you with details for how to allow docker-machine authorization to the VM by plugging a provided code into a browser window. Once that’s done the command will continue its work and the output will forge ahead.

When it’s finished, you’ll see the message “Docker is up and running!” (on the new VM), followed by a very important message to configure a shell on the VM by running:

"C:\Program Files\Docker\Docker\Resources\bin\docker-machine.exe" env mylinuxvm

Recall that I’m doing these tasks on Windows, so docker-machine is ensuring that I know where to find the executable. After performing this task, I can see the machine up and running in the Azure Portal. This lets me inspect other default configuration choices made because I didn’t specify them in the docker-machine command.

By default, all of the needed ports are set up for access such as 80 for http and 22 for ssh.

Re-Creating Docker-Compose and .env on the VM

We only need two files on this machine: the docker-compose.yml and the .env file.

Docker-machine allows you to easily ssh into the VM, so that the commands you type execute on that machine.

docker-machine ssh mylinuxvm

Then you can use a Linux editor such as nano to re-create the two files.

nano docker-compose.yml

And you can paste the contents of your docker-compose file into there. This is the docker-compose file in my solution for the sample app. However, there are two edits you’ll need to make.

  1. The original file depends on a variable supplied by the VS Docker Tools for the registry location. Change the value of image to point to your Docker Hub image: image: julielerman/dataapidocker:formylinuxvm
  2. You’ll also need to change the version of docker-compose specified at the top of the file to 2.0 since you’re moving from hosting on Windows to hosting on Linux.

In nano, you can save the docker-compose file with ^O. Then exit nano and run it again to create the .env file using the command:

nano .env

Paste the key-value pair environment variable from the app and save the .env file.

Running the Container

I still had to install docker-compose on the new machine. Docker is nice enough to feed you the command for that if you try to run docker-compose before installing it.

 sudo apt install docker-compose

Then I was able to run my containers with: 

 sudo docker-compose up

One important thing I learned: The VS Docker tooling doesn’t define port mapping for the API service in docker-compose. That’s hidden in a docker-compose.override.yml file used by the debugger. If you look at the docker-compose file listed earlier in this article, you’ll see that I added it myself. Without it, when you try to browse to the API, you will get a Connection refused error.

My ASP.NET Core API is now running, and I can browse to it at the public IP address specified for the VM. The HTTP GET of my Captains controller returns the list of captains seeded in the database.
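As a quick smoke test from any machine (assuming the default ASP.NET Core route convention for a controller named Captains; substitute your VM’s actual public IP), something like this should return the seeded captains as JSON:

$ curl http://<vm-public-ip>/api/captains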

DevOps Is for Devs, Too

As a developer who is often first in line to claim “I don’t do DevOps”, I was surprised at how simple it turned out to be to deploy the app I had created. So often I have allowed my development machine to be a gate that defined the limitations of my expertise. I can build the apps and watch them work on my development machine but I’ve usually left deployment to someone else.

While I have ventured into the Azure Portal frequently, the fact that the Docker Tools and the Azure CLI made it so simple to create the assets I needed for deploying the app made me wonder why I’d waited so long to try that out. And in reality, I didn’t have to deploy the app, just an image and then a docker-compose file. That the Docker Machine made it even easier to create those cloud assets was something of a revelation. 

Part of this workflow leveraged the Docker Tools for Visual Studio on Windows. But because I spend a lot of time in Visual Studio Code on my MacBook, I now have the confidence to explore using the Docker CLI for publishing the image to Docker Hub. After that I can just repeat the Docker Machine path to create the Azure VM where I can run my containers. 

Thanks for reading

Further reading

Docker and Kubernetes: The Complete Guide

Docker Mastery: The Complete Toolset From a Docker Captain

Docker for the Absolute Beginner - Hands On - DevOps

Docker for Absolute Beginners

How to debug Node.js in a Docker container?

Docker Containers for Beginners

Deploy Docker Containers With AWS CodePipeline

Build Docker Images and Host a Docker Image Repository with GitLab

How to create a full stack React/Express/MongoDB app using Docker