How To Build a Node.js Application with Docker

Introduction

The Docker platform allows developers to package and run applications as containers. A container is an isolated process that runs on a shared operating system, offering a lighter weight alternative to virtual machines. Though containers are not new, they offer benefits — including process isolation and environment standardization — that are growing in importance as more developers use distributed application architectures.

When building and scaling an application with Docker, the starting point is typically creating an image for your application, which you can then run in a container. The image includes your application code, libraries, configuration files, environment variables, and runtime. Using an image ensures that the environment in your container is standardized and contains only what is necessary to build and run your application.

In this tutorial, you will create an application image for a static website that uses the Express framework and Bootstrap. You will then build a container using that image and push it to Docker Hub for future use. Finally, you will pull the stored image from your Docker Hub repository and build another container, demonstrating how you can recreate and scale your application.

Prerequisites

To follow this tutorial, you will need:

  • One server or development machine with a non-root sudo user and an active firewall, as described in an initial server setup guide
  • Docker installed
  • Node.js and npm installed
  • A Docker Hub account

Step 1 — Installing Your Application Dependencies

To create your image, you will first need to make your application files, which you can then copy to your container. These files will include your application's static content, code, and dependencies.

First, create a directory for your project in your non-root user's home directory. We will call ours node_project, but you should feel free to replace this with something else:

mkdir node_project

Navigate to this directory:

cd node_project

This will be the root directory of the project.

Next, create a package.json file with your project's dependencies and other identifying information. Open the file with nano or your favorite editor:

nano package.json

Add the following information about the project, including its name, author, license, entrypoint, and dependencies. Be sure to replace the author information with your own name and contact details:

~/node_project/package.json

{
  "name": "nodejs-image-demo",
  "version": "1.0.0",
  "description": "nodejs image demo",
  "author": "Sammy the Shark <[email protected]>",
  "license": "MIT",
  "main": "app.js",
  "keywords": [
    "nodejs",
    "bootstrap",
    "express"
  ],
  "dependencies": {
    "express": "^4.16.4"
  }
}

This file includes the project name, author, and license under which it is being shared. npm recommends making your project name short and descriptive, and avoiding duplicates in the npm registry. We've listed the MIT license in the license field, permitting the free use and distribution of the application code.

Additionally, the file specifies:

  • "main": The entrypoint for the application, app.js. You will create this file next.
  • "dependencies": The project dependencies — in this case Express, version 4.16.4 or a newer 4.x release (the caret in ^4.16.4 pins the major version).

Though this file does not list a repository, you can add one by following these guidelines on adding a repository to your package.json file. This is a good addition if you are versioning your application.

Save and close the file when you've finished making changes.

To install your project's dependencies, run the following command:

npm install

This will install the packages you've listed in your package.json file in your project directory.
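
If you'd like to confirm that the installation worked, you can ask npm to list the package — a quick, optional check:

npm ls express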

We can now move on to building the application files.

Step 2 — Creating the Application Files

We will create a website that offers users information about sharks. Our application will have a main entrypoint, app.js, and a views directory that will include the project's static assets. The landing page, index.html, will offer users some preliminary information and a link to a page with more detailed shark information, sharks.html. In the views directory, we will create both the landing page and sharks.html.

First, open app.js in the main project directory to define the project's routes:

nano app.js

The first part of the file will create the Express application and Router objects, and define the base directory, port, and host as variables:

~/node_project/app.js

var express = require("express");
var app = express();
var router = express.Router();

var path = __dirname + '/views/';
const PORT = 8080;
const HOST = '0.0.0.0';

The require function loads the express module, which we then use to create the app and router objects. The router object will perform the routing function of the application, and as we define HTTP method routes we will add them to this object to define how our application will handle requests.

This section of the file also sets a few variables, path, PORT, and HOST:

  • path: Defines the base directory, which will be the views subdirectory within the current project directory.
  • HOST: Defines the address that the application will bind to and listen on. Setting this to 0.0.0.0, meaning all IPv4 addresses, matches Docker's default behavior of exposing containers on 0.0.0.0 unless otherwise instructed.
  • PORT: Tells the app to listen on and bind to port 8080.

Next, set the routes for the application using the router object:

~/node_project/app.js

...

router.use(function (req, res, next) {
  console.log("/" + req.method);
  next();
});

router.get("/", function (req, res) {
  res.sendFile(path + "index.html");
});

router.get("/sharks", function (req, res) {
  res.sendFile(path + "sharks.html");
});

The router.use function loads a middleware function that will log the router's requests and pass them on to the application's routes. These are defined in the subsequent functions, which specify that a GET request to the base project URL should return the index.html page, while a GET request to the /sharks route should return sharks.html.

Finally, mount the router middleware and the application's static assets and tell the app to listen on the host and port we defined:

~/node_project/app.js

...

app.use(express.static(path));
app.use("/", router);

app.listen(PORT, HOST, function () {
  console.log('Example app listening on port ' + PORT + '!')
})

The finished app.js file will look like this:

~/node_project/app.js

var express = require("express");
var app = express();
var router = express.Router();

var path = __dirname + '/views/';
const PORT = 8080;
const HOST = '0.0.0.0';

router.use(function (req, res, next) {
  console.log("/" + req.method);
  next();
});

router.get("/", function (req, res) {
  res.sendFile(path + "index.html");
});

router.get("/sharks", function (req, res) {
  res.sendFile(path + "sharks.html");
});

app.use(express.static(path));
app.use("/", router);

app.listen(PORT, HOST, function () {
  console.log('Example app listening on port ' + PORT + '!')
})

Save and close the file when you are finished.

Next, let's add some static content to the application. Start by creating the views directory:

mkdir views

Open the landing page file, index.html:

nano views/index.html

Add the following code to the file, which will import Bootstrap and create a jumbotron component with a link to the more detailed sharks.html info page:

~/node_project/views/index.html

<!DOCTYPE html>
<html lang="en">

<head>
<title>About Sharks</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
<link href="css/styles.css" rel="stylesheet">
<link href="https://fonts.googleapis.com/css?family=Merriweather:400,700" rel="stylesheet" type="text/css">
</head>

<body>
<nav class="navbar navbar-dark bg-dark navbar-static-top navbar-expand-md">
<div class="container">
<button type="button" class="navbar-toggler collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1" aria-expanded="false"> <span class="sr-only">Toggle navigation</span>
</button> <a class="navbar-brand" href="#">Everything Sharks</a>
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<ul class="nav navbar-nav mr-auto">
<li class="active nav-item"><a href="/" class="nav-link">Home</a>
</li>
<li class="nav-item"><a href="/sharks" class="nav-link">Sharks</a>
</li>
</ul>
</div>
</div>
</nav>
<div class="jumbotron">
<div class="container">
<h1>Want to Learn About Sharks?</h1>
<p>Are you ready to learn about sharks?</p>
<br>
<p><a class="btn btn-primary btn-lg" href="/sharks" role="button">Get Shark Info</a>
</p>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-6">
<h3>Not all sharks are alike</h3>
<p>Though some are dangerous, sharks generally do not attack humans. Out of the 500 species known to researchers, only 30 have been known to attack humans.
</p>
</div>
<div class="col-lg-6">
<h3>Sharks are ancient</h3>
<p>There is evidence to suggest that sharks lived up to 400 million years ago.
</p>
</div>
</div>
</div>
</body>

</html>

The top-level navbar here allows users to toggle between the Home and Sharks pages. In the navbar-nav subcomponent, we are using Bootstrap's active class to indicate the current page to the user. We've also specified the routes to our static pages, which match the routes we defined in app.js:

~/node_project/views/index.html

...
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<ul class="nav navbar-nav mr-auto">
<li class="active nav-item"><a href="/" class="nav-link">Home</a>
</li>
<li class="nav-item"><a href="/sharks" class="nav-link">Sharks</a>
</li>
</ul>
</div>
...

Additionally, we've created a link to our shark information page in our jumbotron's button:

~/node_project/views/index.html

...
<div class="jumbotron">
<div class="container">
<h1>Want to Learn About Sharks?</h1>
<p>Are you ready to learn about sharks?</p>
<br>
<p><a class="btn btn-primary btn-lg" href="/sharks" role="button">Get Shark Info</a>
</p>
</div>
</div>
...

There is also a link to a custom style sheet in the header:

~/node_project/views/index.html

...
<link href="css/styles.css" rel="stylesheet">
...

We will create this style sheet at the end of this step.

Save and close the file when you are finished.

With the application landing page in place, we can create our shark information page, sharks.html, which will offer interested users more information about sharks.

Open the file:

nano views/sharks.html

Add the following code, which imports Bootstrap and the custom style sheet and offers users detailed information about certain sharks:

~/node_project/views/sharks.html

<!DOCTYPE html>
<html lang="en">

<head>
<title>About Sharks</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
<link href="css/styles.css" rel="stylesheet">
<link href="https://fonts.googleapis.com/css?family=Merriweather:400,700" rel="stylesheet" type="text/css">
</head>

<body>
<nav class="navbar navbar-dark bg-dark navbar-static-top navbar-expand-md">
<div class="container">
<button type="button" class="navbar-toggler collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1" aria-expanded="false"> <span class="sr-only">Toggle navigation</span>
</button> <a class="navbar-brand" href="/">Everything Sharks</a>
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<ul class="nav navbar-nav mr-auto">
<li class="nav-item"><a href="/" class="nav-link">Home</a>
</li>
<li class="active nav-item"><a href="/sharks" class="nav-link">Sharks</a>
</li>
</ul>
</div>
</div>
</nav>
<div class="jumbotron text-center">
<h1>Shark Info</h1>
</div>
<div class="container">
<div class="row">
<div class="col-lg-6">
<p>
<div class="caption">Some sharks are known to be dangerous to humans, though many more are not. The sawshark, for example, is not considered a threat to humans.
</div>
<img src="https://assets.digitalocean.com/articles/docker_node_image/sawshark.jpg" alt="Sawshark">
</p>
</div>
<div class="col-lg-6">
<p>
<div class="caption">Other sharks are known to be friendly and welcoming!</div>
<img src="https://assets.digitalocean.com/articles/docker_node_image/sammy.png" alt="Sammy the Shark">
</p>
</div>
</div>
</div>

</body>

</html>

Note that in this file, we again use the active class to indicate the current page.

Save and close the file when you are finished.

Finally, create the custom CSS style sheet that you've linked to in index.html and sharks.html by first creating a css folder in the views directory:

mkdir views/css

Open the style sheet:

nano views/css/styles.css

Add the following code, which will set the desired color and font for our pages:

~/node_project/views/css/styles.css

.navbar {
  margin-bottom: 0;
}

body {
  background: #020A1B;
  color: #ffffff;
  font-family: 'Merriweather', sans-serif;
}

h1,
h2 {
  font-weight: bold;
}

p {
  font-size: 16px;
  color: #ffffff;
}

.jumbotron {
  background: #0048CD;
  color: white;
  text-align: center;
}

.jumbotron p {
  color: white;
  font-size: 26px;
}

.btn-primary {
  color: #fff;
  border-color: white;
  margin-bottom: 5px;
}

img,
video,
audio {
  margin-top: 20px;
  max-width: 80%;
}

div.caption {
  float: left;
  clear: both;
}

In addition to setting font and color, this file also limits the size of the images by specifying a max-width of 80%. This will prevent them from taking up more room than we would like on the page.

Save and close the file when you are finished.

With the application files in place and the project dependencies installed, you are ready to start the application.

If you followed the initial server setup tutorial in the prerequisites, you will have an active firewall permitting only SSH traffic. To permit traffic to port 8080, run:

sudo ufw allow 8080
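
If you'd like to confirm that the new rule is active, you can list the firewall's rules — an optional check:

sudo ufw status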

To start the application, make sure that you are in your project's root directory:

cd ~/node_project

Start the application with node app.js:

node app.js

Navigate your browser to http://your_server_ip:8080 to view the application landing page. From there, click the Get Shark Info button to visit the shark information page.
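
If you are working directly on the server, you can also verify the app from a second terminal session instead of a browser — an optional check:

curl -I localhost:8080

A 200 OK response confirms that the app is serving the landing page.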

You now have an application up and running. When you are ready, quit the server by typing CTRL+C. We can now move on to creating the Dockerfile that will allow us to recreate and scale this application as desired.

Step 3 — Writing the Dockerfile

Your Dockerfile specifies what will be included in your application container when it is executed. Using a Dockerfile allows you to define your container environment and avoid discrepancies with dependencies or runtime versions.

Following these guidelines on building optimized containers, we will make our image as efficient as possible by minimizing the number of image layers and restricting the image's function to a single purpose — recreating our application files and static content.

In your project's root directory, create the Dockerfile:

nano Dockerfile

Docker images are created using a succession of layered images that build on one another. Our first step will be to add the base image for our application that will form the starting point of the application build.

Let's use the node:10-alpine image, since, at the time of writing, this is the recommended LTS version of Node.js. The alpine image is derived from the Alpine Linux project, and will help us keep our image size down. For more information about whether or not the alpine image is the right choice for your project, please see the full discussion under the Image Variants section of the Docker Hub Node image page.

Add the following FROM instruction to set the application's base image:

~/node_project/Dockerfile

FROM node:10-alpine

This image includes Node.js and npm. Each Dockerfile must begin with a FROM instruction.

By default, the Docker Node image includes a non-root node user that you can use to avoid running your application container as root. It is a recommended security practice to avoid running containers as root and to restrict capabilities within the container to only those required to run its processes. We will therefore use the node user's home directory as the working directory for our application and set node as our user inside the container. For more information about best practices when working with the Docker Node image, see this best practices guide.

To fine-tune the permissions on our application code in the container, let's create the node_modules subdirectory in /home/node along with the app directory. Creating these directories will ensure that they have the permissions we want, which will be important when we create local node modules in the container with npm install. In addition to creating these directories, we will set ownership on them to our node user:

~/node_project/Dockerfile

...
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app

For more information on the utility of consolidating RUN instructions, see this discussion of how to manage container layers.

Next, set the working directory of the application to /home/node/app:

~/node_project/Dockerfile

...
WORKDIR /home/node/app

Setting the WORKDIR explicitly is a good idea: it creates the directory if it doesn't already exist, and it avoids relying on a default inherited from the base image.

Next, copy the package.json and package-lock.json (for npm 5+) files:

~/node_project/Dockerfile

...
COPY package*.json ./

Adding this COPY instruction before running npm install or copying the application code allows us to take advantage of Docker's caching mechanism. At each stage in the build, Docker will check to see if it has a layer cached for that particular instruction. If we change package.json, this layer will be rebuilt, but if we don't, this instruction will allow Docker to use the existing image layer and skip reinstalling our node modules.

After copying the project dependencies, we can run npm install:

~/node_project/Dockerfile

...
RUN npm install
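
As an aside: if your npm version is 5.7 or later, npm ci is an alternative worth knowing about here, since it installs exactly what package-lock.json specifies and makes builds more reproducible. This is a variation, not what this tutorial uses:

RUN npm ci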

Next, copy your application code to the working directory on the container. To ensure that the application files are owned by the non-root node user rather than by root, include the --chown flag with the COPY instruction:

~/node_project/Dockerfile

...
COPY --chown=node:node . .

Set the user to node:

~/node_project/Dockerfile

...
USER node

Expose port 8080 on the container and start the application:

~/node_project/Dockerfile

...
EXPOSE 8080

CMD [ "node", "app.js" ]

EXPOSE does not publish the port, but instead functions as a way of documenting which ports on the container will be published at runtime. CMD runs the command to start the application — in this case, node app.js. Note that there should only be one CMD instruction in each Dockerfile. If you include more than one, only the last will take effect.

There are many things you can do with the Dockerfile. For a complete list of instructions, please refer to Docker's Dockerfile reference documentation.

The complete Dockerfile looks like this:

~/node_project/Dockerfile


FROM node:10-alpine

RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app

WORKDIR /home/node/app

COPY package*.json ./

RUN npm install

COPY --chown=node:node . .

USER node

EXPOSE 8080

CMD [ "node", "app.js" ]

Save and close the file when you are finished editing.

Before building the application image, let's add a .dockerignore file. Working in a similar way to a .gitignore file, .dockerignore specifies which files and directories in your project directory should not be copied over to your container.

Open the .dockerignore file:

nano .dockerignore

Inside the file, add your local node modules, npm logs, Dockerfile, and .dockerignore file:

~/node_project/.dockerignore

node_modules
npm-debug.log
Dockerfile
.dockerignore

If you are working with Git then you will also want to add your .git directory and .gitignore file.

Save and close the file when you are finished.

You are now ready to build the application image using the docker build command. Using the -t flag with docker build will allow you to tag the image with a memorable name. Because we are going to push the image to Docker Hub, let's include our Docker Hub username in the tag. We will tag the image as nodejs-image-demo, but feel free to replace this with a name of your own choosing. Remember to also replace your_dockerhub_username with your own Docker Hub username:

docker build -t your_dockerhub_username/nodejs-image-demo .

The . specifies that the build context is the current directory.

It will take a minute or two to build the image. Once it is complete, check your images:

docker images

You will see the following output:

Output

REPOSITORY                                   TAG         IMAGE ID       CREATED         SIZE
your_dockerhub_username/nodejs-image-demo    latest      1c723fb2ef12   8 seconds ago   73MB
node                                         10-alpine   f09e7c96b6de   3 weeks ago     70.7MB

It is now possible to create a container with this image using docker run. We will include three flags with this command:

  • -p: This publishes the port on the container and maps it to a port on our host. We will use port 80 on the host, but you should feel free to modify this as necessary if you have another process running on that port. For more information about how this works, see this discussion in the Docker docs on port binding.
  • -d: This runs the container in the background.
  • --name: This allows us to give the container a memorable name.

Run the following command to create and start the container:

docker run --name nodejs-image-demo -p 80:8080 -d your_dockerhub_username/nodejs-image-demo

Once your container is up and running, you can inspect a list of your running containers with docker ps:

docker ps

You will see the following output:

Output

CONTAINER ID   IMAGE                                       COMMAND         CREATED         STATUS         PORTS                  NAMES
e50ad27074a7   your_dockerhub_username/nodejs-image-demo   "node app.js"   8 seconds ago   Up 7 seconds   0.0.0.0:80->8080/tcp   nodejs-image-demo

With your container running, you can now visit your application by navigating your browser to http://your_server_ip. You will see your application landing page once again.

Now that you have created an image for your application, you can push it to Docker Hub for future use.

Step 4 — Using a Repository to Work with Images

By pushing your application image to a registry like Docker Hub, you make it available for subsequent use as you build and scale your containers. We will demonstrate how this works by pushing the application image to a repository and then using the image to recreate our container.

The first step to pushing the image is to log in to the Docker Hub account you created in the prerequisites:

docker login -u your_dockerhub_username -p your_dockerhub_password

Logging in this way will create a ~/.docker/config.json file in your user's home directory with your Docker Hub credentials.

You can now push the application image to Docker Hub using the tag you created earlier, your_dockerhub_username/nodejs-image-demo:

docker push your_dockerhub_username/nodejs-image-demo
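
This pushes the default latest tag. If you also want to keep a versioned copy of the image, you can add a second tag before pushing — the v1.0 tag here is just an example:

docker tag your_dockerhub_username/nodejs-image-demo your_dockerhub_username/nodejs-image-demo:v1.0
docker push your_dockerhub_username/nodejs-image-demo:v1.0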

Let's test the utility of the image registry by destroying our current application container and image and rebuilding them with the image in our repository.

First, list your running containers:

docker ps

You will see the following output:

Output

CONTAINER ID   IMAGE                                       COMMAND         CREATED         STATUS         PORTS                  NAMES
e50ad27074a7   your_dockerhub_username/nodejs-image-demo   "node app.js"   3 minutes ago   Up 3 minutes   0.0.0.0:80->8080/tcp   nodejs-image-demo

Using the CONTAINER ID listed in your output, stop the running application container. Be sure to replace the ID below with your own CONTAINER ID:

docker stop e50ad27074a7

List all of your images with the -a flag:

docker images -a

You will see the following output with the name of your image, your_dockerhub_username/nodejs-image-demo, along with the node image and the other images from your build:

Output

REPOSITORY                                   TAG         IMAGE ID       CREATED         SIZE
your_dockerhub_username/nodejs-image-demo    latest      1c723fb2ef12   7 minutes ago   73MB
<none>                                       <none>      2e3267d9ac02   4 minutes ago   72.9MB
<none>                                       <none>      8352b41730b9   4 minutes ago   73MB
<none>                                       <none>      5d58b92823cb   4 minutes ago   73MB
<none>                                       <none>      3f1e35d7062a   4 minutes ago   73MB
<none>                                       <none>      02176311e4d0   4 minutes ago   73MB
<none>                                       <none>      8e84b33edcda   4 minutes ago   70.7MB
<none>                                       <none>      6a5ed70f86f2   4 minutes ago   70.7MB
<none>                                       <none>      776b2637d3c1   4 minutes ago   70.7MB
node                                         10-alpine   f09e7c96b6de   3 weeks ago     70.7MB

Remove the stopped container and all of the images, including unused or dangling images, with the following command:

docker system prune -a

Type y when prompted in the output to confirm that you would like to remove the stopped container and images. Be advised that this will also remove your build cache.
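
Note that docker system prune -a is a broad cleanup. If you would rather remove only what this tutorial created, a narrower alternative is to delete the stopped container and the tagged image individually, substituting your own container ID and username:

docker rm e50ad27074a7
docker rmi your_dockerhub_username/nodejs-image-demo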

You have now removed both the container running your application image and the image itself. For more information on removing Docker containers, images, and volumes, please see How To Remove Docker Images, Containers, and Volumes.

With all of your images and containers deleted, you can now pull the application image from Docker Hub:

docker pull your_dockerhub_username/nodejs-image-demo

List your images once again:

docker images

You will see your application image:

Output

REPOSITORY                                  TAG      IMAGE ID       CREATED          SIZE
your_dockerhub_username/nodejs-image-demo   latest   1c723fb2ef12   11 minutes ago   73MB

You can now rebuild your container using the command from Step 3:

docker run --name nodejs-image-demo -p 80:8080 -d your_dockerhub_username/nodejs-image-demo

List your running containers:

docker ps

Output

CONTAINER ID   IMAGE                                       COMMAND         CREATED         STATUS         PORTS                  NAMES
f6bc2f50dff6   your_dockerhub_username/nodejs-image-demo   "node app.js"   4 seconds ago   Up 3 seconds   0.0.0.0:80->8080/tcp   nodejs-image-demo

Visit http://your_server_ip once again to view your running application.

Conclusion

In this tutorial you created a static web application with Express and Bootstrap, as well as a Docker image for this application. You used this image to create a container and pushed the image to Docker Hub. From there, you were able to destroy your image and container and recreate them using your Docker Hub repository.

If you are interested in learning more about how to work with tools like Docker Compose and Docker Machine to create multi-container setups, in general tips on working with container data, or in other Docker-related topics, please see our complete library of Docker tutorials.


By: katjuell



Crafting multi-stage builds with Docker in Node.js

Learn how you can use a multi-stage Docker build for your Node.js application. Docker multi-stage builds enable us to create more complex build pipelines without having to resort to magic tricks.

Everyone knows about Docker. It’s the ubiquitous tool for packaging and distribution of applications that seemed to come from nowhere and take over our industry! If you are reading this, it means you already understand the basics of Docker and are now looking to create a more complex build pipeline.

In the past, optimizing our Docker images has been a challenging experience. All sorts of magic tricks were employed to reduce the size of our applications before they went to production. Things are different now because support for multi-stage builds has been added to Docker.

In this post, we explore how you can use a multi-stage build for your Node.js application. As an example, we'll use a TypeScript build process, but the same approach works for any build pipeline. So even if you'd prefer to use Babel, or maybe you need to build a React client, a Docker multi-stage build can work for you as well.

A basic, single-stage Dockerfile for Node.js

Let’s start by looking at a basic Dockerfile for Node.js. We can visualize the normal Docker build process as shown in Figure 1 below.

Figure 1: Normal Docker build process.

We use the docker build command to turn our Dockerfile into a Docker image. We then use the docker run command to instantiate our image to a Docker container.

The Dockerfile in Listing 1 below is just a standard, run-of-the-mill Dockerfile for Node.js. You have probably seen this kind of thing before. All we are doing here is copying the package.json, installing production dependencies, copying the source code, and finally starting the application.

This Dockerfile is for regular JavaScript applications, so we don’t need a build process yet. I’m only showing you this simple Dockerfile so you can compare it to the multi-stage Dockerfile I’ll be showing you soon.

Listing 1: A run-of-the-mill Dockerfile for Node.js

FROM node:10.15.2

WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY ./src ./src
EXPOSE 3000
CMD npm start

Listing 1 is a quite ordinary-looking Dockerfile. In fact, all Dockerfiles looked pretty much like this before multi-stage builds were introduced. Now that Docker supports multi-stage builds, we can visualize our simple Dockerfile as the single-stage build process illustrated in Figure 2.
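
To make the build-and-run flow from Figure 1 concrete, the commands for this Dockerfile might look like the following — the image name and port mapping are arbitrary examples:

docker build -t my-node-app .
docker run -p 3000:3000 -d my-node-app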


Figure 2: A single-stage build pipeline.

The need for multiple stages

We can already run whatever commands we want in the Dockerfile when building our image, so why do we even need a multi-stage build?

To find out why, let's upgrade our simple Dockerfile to include a TypeScript build process. Listing 2 shows the upgraded Dockerfile: it now copies tsconfig.json, installs all dependencies (including dev dependencies), and runs npm run build.

Listing 2: We have upgraded our simple Dockerfile to include a TypeScript build process

FROM node:10.15.2

WORKDIR /usr/src/app
COPY package*.json ./
COPY tsconfig.json ./
RUN npm install
COPY ./src ./src
RUN npm run build
EXPOSE 80
CMD npm start

The problem this causes isn't obvious at first glance. To see it for yourself, instantiate a container from this image, then shell into it and inspect its file system.

I did this and used the Linux tree command to list all the directories and files in the container. You can see the result in Figure 3.
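
If tree isn't available in your container, ls -R gives a similar, if noisier, view. Assuming the image was tagged ts-app-image, the inspection might look like this:

docker run -d --name ts-app ts-app-image
docker exec -it ts-app sh -c "ls -R /usr/src/app"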

Notice that we have unwittingly included in our production image all the debris of development and the build process. This includes our original TypeScript source code (which we don’t use in production), the TypeScript compiler itself (which, again, we don’t use in production), plus any other dev dependencies we might have installed into our Node.js project.


Figure 3: The debris from development and the build process is bloating our production Docker image.

Bear in mind this is only a trivial project, so we aren't actually seeing too much cruft left in our production image. But you can imagine how bad this would be for a real application with many source files, many dev dependencies, and a more complex build process that generates temporary files!

We don’t want this extra bloat in production. The extra size makes our containers bigger. When our containers are bigger than needed, it means we aren’t making efficient use of our resources. The increased surface area of the container can also be a problem for security, where we generally prefer to minimize the attackable surface area of our application.

Wouldn’t it be nice if we could throw away the files we don’t want and just keep the ones we do want? This is exactly what a Docker multi-stage build can do for us.

Crafting a Dockerfile with a multi-stage build

We are going to split our Dockerfile into two stages. Figure 4 shows what our build pipeline looks like after the split.


Figure 4: A multi-stage Docker build pipeline to build TypeScript.

Our new multi-stage build pipeline has two stages: Build stage 1 is what builds our TypeScript code; Build stage 2 is what creates our production Docker image. The final Docker image produced at the end of this pipeline contains only what it needs and omits the cruft we don’t want.

To create our two-stage build pipeline, we are basically just going to create two Dockerfiles in one. Listing 3 shows our Dockerfile with multiple stages added. The first FROM command initiates the first stage, and the second FROM command initiates the second stage.

Compare this to a regular single-stage Dockerfile, and you can see that it actually looks like two Dockerfiles squished together in one.

Listing 3: A multi-stage Dockerfile for building TypeScript code

# 
# Build stage 1.
# This stage builds our TypeScript and produces an intermediate Docker image containing the compiled JavaScript code.
#
FROM node:10.15.2

WORKDIR /usr/src/app
COPY package*.json ./
COPY tsconfig.json ./
RUN npm install
COPY ./src ./src
RUN npm run build

#
# Build stage 2.
# This stage pulls the compiled JavaScript code from the stage 1 intermediate image.
# This stage builds the final Docker image that we'll use in production.
#
FROM node:10.15.2

WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY --from=0 /usr/src/app/build ./build
EXPOSE 80
CMD npm start

To create this multi-stage Dockerfile, I simply took Listing 2 and divided it up into separate Dockerfiles. The first stage contains only what is needed to build the TypeScript code. The second stage contains only what is needed to produce the final production Docker image. I then merged the two Dockerfiles into a single file.

The most important thing to note is the use of --from in the second stage (the COPY --from=0 line in Listing 3). This is the syntax we use to pull the built files from our first stage, which we refer to here as stage 0. We are pulling the compiled JavaScript files from the first stage into the second stage.

We can easily check to make sure we got the desired result. After creating the new image and instantiating a container, I shelled in to check the contents of the file system. You can see in Figure 5 that we have successfully removed the debris from our production image.


Figure 5: We have removed the debris of development from our Docker image.

We now have fewer files in our image, it’s smaller, and it has less surface area. Yay! Mission accomplished.

But what, specifically, does this mean?

The effect of the multi-stage build

What exactly is the effect of the new build pipeline on our production image?

I measured the results before and after. Our single-stage image produced by Listing 2 weighs in at 955MB. After converting to the multi-stage build in Listing 3, the image now comes to 902MB. That’s a reasonable reduction — we removed 53MB from our image!

While 53MB seems like a lot, we have actually shaved off only a little more than 5 percent of the size. I know what you're going to say now: But Ash, our image is still monstrously huge! There's still way too much bloat in that image.

Well, to make our image even smaller, we now need to use the alpine, or slimmed-down, Node.js base image. We can do this by changing our second build stage from node:10.15.2 to node:10.15.2-alpine.

This reduces our production image down to 73MB — that’s a huge win! Now the savings we get from discarding our debris is more like a whopping 60 percent. Alright, we are really getting somewhere now!

This highlights another benefit of multi-stage builds: we can use separate Docker base images for each of our build stages. This means you can customize each build stage by using a different base image.

Say you have one stage that relies on some tools that are in a different image, or you have created a special Docker image that is custom for your build process. This gives us a lot of flexibility when constructing our build pipelines.

How does it work?

You probably already guessed this: each stage or build process produces its own separate Docker image. You can see how this works in Figure 6.

The Docker image produced by a stage can be used by the following stages. Once the final image is produced, all the intermediate images are discarded; we take what we want for the final image, and the rest gets thrown away.


Figure 6: Each stage of a multi-stage Docker build produces an image.

Adding more stages

There’s no need to stop at two stages, although that’s often all that’s needed; we can add as many stages as we need. A specific example is illustrated in Figure 7.

Here we are building TypeScript code in stage 1 and our React client in stage 2. In addition, there’s a third stage that produces the final image from the results of the first two stages.


Figure 7: Using a Docker multi-stage build, we can create more complex build pipelines.

Pro tips

Now it's time to leave you with a few advanced tips to explore on your own:

  1. You can name your build stages! You don't have to leave them as the default 0, 1, etc. Naming your build stages will make your Dockerfile more readable (see the sketch after this list).
  2. Understand the options you have for base images. Using the right base image can relieve a lot of confusion when constructing your build pipeline.
  3. Build a custom base image if the complexity of your build process is getting out of hand.
  4. You can pull from external images! Just like you pull files from earlier stages, you can also pull files from images that are published to a Docker repository. This gives you an option to pre-bake an early build stage if it’s expensive and doesn’t change very often.
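
As a sketch of tips 1 and 2 combined, here is how Listing 3 might look with named stages and a slimmed-down base image for the final stage — the stage name builder is arbitrary:

#
# Build stage 1 (named "builder"): compiles the TypeScript code.
#
FROM node:10.15.2 AS builder

WORKDIR /usr/src/app
COPY package*.json ./
COPY tsconfig.json ./
RUN npm install
COPY ./src ./src
RUN npm run build

#
# Build stage 2: the production image, now based on alpine,
# pulling the compiled code from the "builder" stage.
#
FROM node:10.15.2-alpine

WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY --from=builder /usr/src/app/build ./build
EXPOSE 80
CMD npm start

A named stage can also be built on its own with docker build --target builder -t ts-builder ., which is handy for debugging one part of the pipeline in isolation.
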
Conclusion and resources

Docker multi-stage builds enable us to create more complex build pipelines without having to resort to magic tricks. They help us slim down our production Docker images and remove the bloat. They also allow us to structure and modularize our build process, which makes it easier to test parts of our build process in isolation.

So please have some fun with Docker multi-stage builds, and don’t forget to have a look at the example code on GitHub.

Here’s the Docker documentation on multi-stage builds, too.

Dockerizing a Node.js web application

In this article, we will see how to dockerize a Node.js application.

Originally published by ganeshmani009 at cloudnweb.dev

What is Docker?

Firstly, Docker is a containerization platform where developers can package an application and run it as a container.

In simple words, Docker runs each application in an isolated environment that shares the host's resources, such as the operating system kernel and memory, with other containers.

Virtual Machine vs Docker

Here we can see the difference between Docker and virtual machines: each virtual machine runs a full guest operating system, while Docker containers share the host's operating system and isolate only the application and its dependencies.

To read more about Docker, see the Docker Docs.

We are going to see how to dockerize a Node.js application. Before that, Docker has to be installed on the machine; see the Docker Installation docs.

After installing Docker, we need to initialize the Node application.

npm init --yes
npm install express body-parser

The first command initializes the package.json file, which contains the details about the application and its dependencies. The second one installs Express and body-parser.

Create a file called server.js and paste in the following code:

'use strict';

const express = require('express');

// Constants
const PORT = 8080;
const HOST = '0.0.0.0';

// App
const app = express();
app.get('/', (req, res) => {
  res.send('You have done it !!!!!\n');
});

app.listen(PORT, HOST, () => {
  console.log(`Running on http://${HOST}:${PORT}`);
});

This runs a basic Express application server. Now we need to create the Docker image file. Create a file named Dockerfile and add the following commands.

FROM node:8

First, we pull the Node image from Docker Hub to serve as the base of our image.

WORKDIR /usr/src/app

Next, we set /usr/src/app as the working directory in the Docker image.

COPY package*.json ./
RUN npm install

Then we copy package.json from the local machine to the Docker image. Copying already-installed dependencies from the local machine to the image would be inefficient,

so we copy just package.json and install all of the dependencies inside the Docker image.

COPY . .
EXPOSE 8080

CMD [ "npm" , "start" ]

This copies all the source code from the local directory to the Docker image and documents that the app listens on port 8080 inside the image; that port can then be mapped to a port on the local machine at runtime. Finally, CMD sets the command that runs when the container starts. Note that npm start runs the start script from package.json; if none is defined, npm falls back to node server.js, which is why it works here.

Your Dockerfile should now look like:

# this installs the node image from docker hub
FROM node:8

# this is the current working directory in the docker image

WORKDIR /usr/src/app
#copy package.json from local to docker image
COPY package*.json ./
#run npm install commands
RUN npm install
#copy all the files from local directory to docker image
COPY . .
#this port exposed to the docker to map.
EXPOSE 8080

CMD [ "npm" , "start" ]

Create a .dockerignore file with the following content:

node_modules
npm-debug.log

Now, we need to build our image from the command line:

docker build -t <your username>/node-web-app .

The -t flag is used to tag the image with a name, so it will be easy to identify it by name instead of by ID. Note: the dot at the end of the command is important; it tells Docker to use the current directory as the build context (without it, the command won't work).

We can run the image using the following command:

docker run -p 49160:8080 -d <your username>/node-web-app
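
You can confirm that the container is up, and see the port mapping, with:

docker ps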

We can check that it's working using:

 curl -i localhost:49160

The output should be:

HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/html; charset=utf-8
Content-Length: 23
ETag: W/"17-C2jfoqVpuUrcmNFogd/3pZ5xds8"
Date: Mon, 08 Apr 2019 17:29:12 GMT
Connection: keep-alive

You have done it !!!!!

To read more

https://github.com/nodejs/docker-node/blob/master/docs/BestPractices.md




Docker Best Practices for Node Developers

Welcome to the best course on the planet for using Docker with Node.js! With your basic knowledge of Docker and Node.js in hand, Docker Mastery for Node.js is a course for anyone on the Node.js path. This course will help you master them together.

My talk on all the best of Docker for Node.js developers and DevOps dealing with Node apps. From DockerCon 2019. Get the full 9-hour training course with my coupon at http://bit.ly/365ogba

Get the source code for this talk at https://github.com/BretFisher/dockercon19

Some of the many cool things you'll do in this course:
  • Build Node.js Images that auto-scan for security vulnerabilities
  • Use Docker's cutting-edge BuildKit with SSH Agents and NPM Caches for better image building
  • Use docker-compose with Visual Studio Code for full Node.js debug support
  • Use BuildKit and Multi-stage Builds to create minimal and flexible Dockerfiles
  • Build custom Node.js images using distro's like CentOS and Alpine
  • Test Docker init, tini, and Node.js as a PID 1 process in containers
  • Create Node.js apps that properly startup and respond to healthchecks
  • Develop ARM based Node.js apps with Docker Desktop, and deploy to AWS A1 Servers
  • Build graceful shutdown code into your apps for zero-downtime deploys
  • Dig into HTTP connections with orchestration, and how Proxies can help
  • Study examples of Docker Swarm and Kubernetes deployments for Node.js
  • Spend time Migrating traditional (legacy) Node.js apps into containers
  • Simplify your microservice solutions with advanced Docker Compose features
What you will learn in this course

You'll start with a quick review about getting set up with Docker, as well as Docker Compose basics. That way we're on the same page for the basics.

Then you'll jump into Node.js Dockerfile basics, so that you'll have a good Dockerfile foundation for the new features we'll add throughout the course.

You'll be building on all the different things you learn from each lecture in the course. Once you have the basics of Compose, Dockerfile, and Docker images down, you'll focus on nuances, like how Docker and Linux control the Node process, what options there are for starting up and shutting down Node.js, and the right way to do it in different scenarios.

We'll cover advanced, newer features around making the Dockerfile as efficient and flexible as possible, using things like BuildKit and multi-stage builds.

Then we'll talk about distributed computing and cloud design to ensure your Node.js apps have 12-factor design in your containers, as well as learning how to migrate old apps into this new way of doing things.

Next we cover Compose and its awesome features to get really efficient local development and test set-up using the Docker Compose command line and Docker Compose YAML file.

With all this knowledge, you'll progress to production concerns and making images production-ready.

Then we'll jump into deploying those containers and running them in production. Whether you use Docker Engine or orchestration with Kubernetes or Swarm, I've got you covered. In addition, we'll cover HTTP connections and reverse proxies for connection handling and routing with multi-container systems.

Lastly, you'll get a final, big assignment where you'll be building and deploying a large, complex solution, including multiple Node.js containers that are doing different things. You'll build Docker images, Dockerfiles, and compose files, and deploy them to a server to test. You'll need to check whether connections failover properly. You'll basically take everything you've learned and apply it in one big project!