Using Docker for Node.js in Development and Production

My current primary tech stack is Node.js/JavaScript and, like many teams, I moved our development and production environments into Docker containers. However, when I started to learn Docker, I realized that most articles focused on either development or production environments, and I could find nothing about how to organize your Docker configuration to be flexible for both cases.

In this article, I demonstrate different use cases and examples of Node.js Dockerfiles, explain the decision making process, and help envision how your flow should be using Docker. Starting with a simple example, we then review more complicated scenarios and workarounds to keep your development experience consistent with or without Docker.

Disclaimer: This guide is long and written for audiences with varying levels of Docker experience; at some points the instructions may seem obvious to you, but I will make relevant points alongside them to give a complete picture of the final setup.

Described cases

  • Basic Node.js Dockerfile and docker-compose
  • Nodemon in development, Node in production
  • Keeping the production Docker image away from devDependencies
  • Using a multi-stage build for images requiring node-gyp support

Add .dockerignore file

Before we start configuring our Dockerfile, let's add a .dockerignore file to your app folder. The .dockerignore file excludes the files and folders listed in it from the COPY/ADD instructions during the build. Read more here.

node_modules
npm-debug.log
Dockerfile*
docker-compose*
.dockerignore
.git
.gitignore
README.md
LICENSE
.vscode

Basic Node.js Dockerfile

To ensure a clear understanding, we will start with a basic Dockerfile you could use for simple Node.js projects. By simple, I mean that your code does not have any extra native dependencies or build logic.

FROM node:10-alpine

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install

COPY . .

CMD [ "npm", "start" ]

You will find something like this in every Node.js Docker article. Let’s briefly go through it.

WORKDIR /usr/src/app

The workdir is the default directory used for any subsequent RUN, CMD, ENTRYPOINT, COPY, and ADD instructions. In some articles you will see people do mkdir /app and then set it as the workdir, but this is not best practice. Use the pre-existing folder /usr/src/app, which is better suited for this.

COPY package*.json ./
RUN npm install

Here's another best-practice adjustment: copy your package.json and package-lock.json before you copy your code into the container. Docker caches the installed node_modules as a separate layer; if you then change your app code and run the build command, node_modules will not be reinstalled as long as package.json is unchanged. Generally speaking, even if you forget to add these lines, you will not run into many problems: usually you only need to run docker build when package.json has changed, which triggers an install from scratch anyway, and beyond that you don't run docker build very often in a development environment.
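To see the caching in action, you can build twice, changing only application code in between; the second build should serve the npm install layer from cache. A quick sketch (the image tag my-app is just a placeholder):

# first build: installs node_modules from scratch
docker build -t my-app .

# edit a source file (but not package.json), then rebuild:
# the RUN npm install step is now taken from the layer cache
docker build -t my-app .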

The moment when docker-compose comes in

Before we can run our app in production, we have to develop it. The best way of orchestrating and running your Docker environment is docker-compose: in a YAML file, you define the list of containers/services you want to run, with instructions for each, in an easy-to-use syntax.

version: '3'

services:
  example-service:
    build: .
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    ports:
      - 3000:3000
      - 9229:9229
    command: npm start

In the basic docker-compose.yaml configuration above, the image is built using the Dockerfile inside your app folder; your app folder is then mounted into the container, and the node_modules installed inside the container during the build are not overridden by your current folder. Port 3000 is exposed to your localhost, assuming you have a web server running; port 9229 is used to expose the debugger. Read more here.

Now run your app with:

docker-compose up

Or use the VS Code extension for the same purpose.

With this command, we expose ports 3000 and 9229 of the Dockerized app to localhost, mount the current app folder to /usr/src/app, and use a hack to prevent the node modules installed inside the container from being overridden by those on the local machine.

So can you use that Dockerfile in development and production?

Yes and no.

Differences in CMD

First of all, in development you usually want your app to reload on file changes. For that purpose, you can use nodemon. But in production, you want to run without it. That means the CMD (command) for your development and production environments has to be different.

There are a few different options for this:

1. Replace CMD with the command for running your app without nodemon, which can be a separately defined command in your package.json file, such as:

 "scripts": {
"start": "nodemon --inspect=0.0.0.0 src/index.js",
"start:prod": "node src/index.js"
}

In that case, your Dockerfile could look like this:

FROM node:10-alpine

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install

COPY . .

CMD [ "npm", “run”, "start:prod" ]

However, because you use a docker-compose file for your development environment, we can use a different command inside it, exactly as in the previous example:

version: '3'

services:
  example-service:
    # ... previous instructions ...
    command: npm start

2. If there is a bigger difference, or you use docker-compose for both development and production, you can create multiple docker-compose files or Dockerfiles depending on your differences, such as docker-compose.dev.yml or Dockerfile.dev, as shown below.
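Both tools accept a -f flag for pointing at a non-default file, so a dev-specific setup along those lines could be driven like this (the file and tag names are placeholders):

# run the development stack from a dev-specific compose file
docker-compose -f docker-compose.dev.yml up

# or build an image from a dev-specific Dockerfile
docker build -f Dockerfile.dev -t example-service:dev .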

Managing package installation

It's generally preferable to keep your production image size as small as possible, and you don't want to install node-module dependencies that are unnecessary in production. Solving this is still possible while keeping one unified Dockerfile.

Revisit your package.json file and split devDependencies apart from dependencies. Read more here. In brief, if you run npm install with the --production flag, or with NODE_ENV set to production, no devDependencies will be installed.
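As a reminder of what that split looks like, here is a minimal package.json sketch (the packages shown are just examples):

{
  "dependencies": {
    "express": "^4.16.4"
  },
  "devDependencies": {
    "nodemon": "^1.18.10"
  }
}

We will add extra lines to our Dockerfile to handle that: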

FROM node:10-alpine

ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install

COPY . .

CMD [ "npm", “run”, "start:prod" ]

To customize the behaviour, we use:

ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}

Docker supports passing build arguments through the docker command or docker-compose. NODE_ENV=development will be used by default until we override it with a different value. A good explanation can be found here.

Now when you build your containers with the docker-compose file, all dependencies will be installed; when you build for production, you can pass the build argument as production and devDependencies will be skipped. Because I use CI services for building containers, I simply add that option to their configuration. Read more here.
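As a sketch, passing the build argument from the command line could look like this (the image tag is a placeholder):

docker build --build-arg NODE_ENV=production -t my-app:prod .

In a docker-compose file, the same argument goes under the build key:

version: '3'

services:
  example-service:
    build:
      context: .
      args:
        NODE_ENV: production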

Using multi-stage build for images requiring node-gyp support

Not every app you try to run in Docker will use purely JavaScript dependencies; some require node-gyp and extra natively installed OS libraries to build.

To help solve that problem, we can use multi-stage builds, which let us install and build all dependencies in a separate container and move only the result of the installation, without any garbage, to the final container. The Dockerfile could look like this:

# The instructions for the first stage
FROM node:10-alpine as builder

ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}

RUN apk --no-cache add python make g++

COPY package*.json ./
RUN npm install

# The instructions for the second stage

FROM node:10-alpine

WORKDIR /usr/src/app
COPY --from=builder node_modules node_modules

COPY . .

CMD [ "npm", “run”, "start:prod" ]

In this example, we install and compile all dependencies, based on the environment, in the first stage; then we copy the node_modules into the second stage, which we use in both the development and production environments.

The line RUN apk --no-cache add python make g++ may differ from project to project, most likely because you will need extra dependencies.

COPY --from=builder node_modules node_modules

In that line, we copy the node_modules folder from the first stage into the node_modules folder of the second stage. Because we set WORKDIR to /usr/src/app in the second stage, the node_modules are copied into that folder.
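Note that the first stage above never sets a WORKDIR, so npm install runs in the image root and node_modules ends up at /node_modules, which is what the relative source path in COPY --from relies on. If you prefer explicit paths, a sketch of the same Dockerfile with a WORKDIR in the builder stage could look like this:

# The instructions for the first stage
FROM node:10-alpine as builder

ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}

WORKDIR /usr/src/app
RUN apk --no-cache add python make g++
COPY package*.json ./
RUN npm install

# The instructions for the second stage
FROM node:10-alpine

WORKDIR /usr/src/app
COPY --from=builder /usr/src/app/node_modules ./node_modules
COPY . .

CMD [ "npm", "run", "start:prod" ]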

Summary

I hope this guide helped you understand how to organize your Dockerfile and have it serve your needs for both development and production environments. We can sum up our advice as follows:

  • Try to unify your Dockerfile for dev and production environments; if it does not work, split them.
  • Don’t install dev node_modules for production builds.
  • Don't leave the native build dependencies required for node-gyp and node-module installation in the final image.
  • Use docker-compose to orchestrate your development setup.


By: Alex Barashkov


Crafting multi-stage builds with Docker in Node.js

Learn how you can use a multi-stage Docker build for your Node.js application. Docker multi-stage builds enable us to create more complex build pipelines without having to resort to magic tricks.

Everyone knows about Docker. It’s the ubiquitous tool for packaging and distribution of applications that seemed to come from nowhere and take over our industry! If you are reading this, it means you already understand the basics of Docker and are now looking to create a more complex build pipeline.

In the past, optimizing our Docker images has been a challenging experience. All sorts of magic tricks were employed to reduce the size of our applications before they went to production. Things are different now because support for multi-stage builds has been added to Docker.

In this post, we explore how you can use a multi-stage build for your Node.js application. As an example, we'll use a TypeScript build process, but the same approach will work for any build pipeline. So even if you'd prefer to use Babel, or maybe you need to build a React client, a Docker multi-stage build can work for you as well.

A basic, single-stage Dockerfile for Node.js

Let’s start by looking at a basic Dockerfile for Node.js. We can visualize the normal Docker build process as shown in Figure 1 below.

Figure 1: Normal Docker build process.

We use the docker build command to turn our Dockerfile into a Docker image. We then use the docker run command to instantiate our image to a Docker container.

The Dockerfile in Listing 1 below is just a standard, run-of-the-mill Dockerfile for Node.js. You have probably seen this kind of thing before. All we are doing here is copying the package.json, installing production dependencies, copying the source code, and finally starting the application.

This Dockerfile is for regular JavaScript applications, so we don’t need a build process yet. I’m only showing you this simple Dockerfile so you can compare it to the multi-stage Dockerfile I’ll be showing you soon.

Listing 1: A run-of-the-mill Dockerfile for Node.js

FROM node:10.15.2

WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY ./src ./src
EXPOSE 3000
CMD npm start

Listing 1 is a quite ordinary-looking Dockerfile. In fact, all Dockerfiles looked pretty much like this before multi-stage builds were introduced. Now that Docker supports multi-stage builds, we can visualize our simple Dockerfile as the single-stage build process illustrated in Figure 2.


Figure 2: A single-stage build pipeline.

The need for multiple stages

We can already run whatever commands we want in the Dockerfile when building our image, so why do we even need a multi-stage build?

To find out why, let's upgrade our simple Dockerfile to include a TypeScript build process. Listing 2 shows the upgraded Dockerfile; the updated lines copy tsconfig.json, install all dependencies rather than production-only ones, and run the build script.

Listing 2: We have upgraded our simple Dockerfile to include a TypeScript build process

FROM node:10.15.2

WORKDIR /usr/src/app
COPY package*.json ./
COPY tsconfig.json ./
RUN npm install
COPY ./src ./src
RUN npm run build
EXPOSE 80
CMD npm start

We can see the problem this causes directly. To see it for yourself, instantiate a container from this image, then shell into it and inspect its file system.
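If you want to reproduce the check, a rough sequence looks like this (the image tag is a placeholder, and tree may need to be installed inside the container first):

# build the image and open a shell in a throwaway container
docker build -t my-ts-app .
docker run --rm -it my-ts-app bash

# inside the container: install tree, then list the file system
apt-get update && apt-get install -y tree
tree -L 2 /usr/src/app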

I did this and used the Linux tree command to list all the directories and files in the container. You can see the result in Figure 3.

Notice that we have unwittingly included in our production image all the debris of development and the build process. This includes our original TypeScript source code (which we don’t use in production), the TypeScript compiler itself (which, again, we don’t use in production), plus any other dev dependencies we might have installed into our Node.js project.


Figure 3: The debris from development and the build process is bloating our production Docker image.

Bear in mind this is only a trivial project, so we aren't actually seeing too much cruft left in our production image. But you can imagine how bad this would be for a real application with many source files, many dev dependencies, and a more complex build process that generates temporary files!

We don’t want this extra bloat in production. The extra size makes our containers bigger. When our containers are bigger than needed, it means we aren’t making efficient use of our resources. The increased surface area of the container can also be a problem for security, where we generally prefer to minimize the attackable surface area of our application.

Wouldn’t it be nice if we could throw away the files we don’t want and just keep the ones we do want? This is exactly what a Docker multi-stage build can do for us.

Crafting a Dockerfile with a multi-stage build

We are going to split our Dockerfile into two stages. Figure 4 shows what our build pipeline looks like after the split.


Figure 4: A multi-stage Docker build pipeline to build TypeScript.

Our new multi-stage build pipeline has two stages: Build stage 1 is what builds our TypeScript code; Build stage 2 is what creates our production Docker image. The final Docker image produced at the end of this pipeline contains only what it needs and omits the cruft we don’t want.

To create our two-stage build pipeline, we are basically just going to create two Dockerfiles in one. Listing 3 shows our Dockerfile with multiple stages added. The first FROM command initiates the first stage, and the second FROM command initiates the second stage.

Compare this to a regular single-stage Dockerfile, and you can see that it actually looks like two Dockerfiles squished together in one.

Listing 3: A multi-stage Dockerfile for building TypeScript code

# 
# Build stage 1.
# This stage builds our TypeScript and produces an intermediate Docker image containing the compiled JavaScript code.
#
FROM node:10.15.2

WORKDIR /usr/src/app
COPY package*.json ./
COPY tsconfig.json ./
RUN npm install
COPY ./src ./src
RUN npm run build

#
# Build stage 2.
# This stage pulls the compiled JavaScript code from the stage 1 intermediate image.
# This stage builds the final Docker image that we'll use in production.
#
FROM node:10.15.2

WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY --from=0 /usr/src/app/build ./build
EXPOSE 80
CMD npm start

To create this multi-stage Dockerfile, I simply took Listing 2 and divided it up into separate Dockerfiles. The first stage contains only what is needed to build the TypeScript code. The second stage contains only what is needed to produce the final production Docker image. I then merged the two Dockerfiles into a single file.

The most important thing to note is the use of --from in the second stage of Listing 3. This is the syntax we use to pull the built files from our first stage, which we refer to here as stage 0: we are pulling the compiled JavaScript files from the first stage into the second stage.

We can easily check to make sure we got the desired result. After creating the new image and instantiating a container, I shelled in to check the contents of the file system. You can see in Figure 5 that we have successfully removed the debris from our production image.


Figure 5: We have removed the debris of development from our Docker image.

We now have fewer files in our image, it’s smaller, and it has less surface area. Yay! Mission accomplished.

But what, specifically, does this mean?

The effect of the multi-stage build

What exactly is the effect of the new build pipeline on our production image?

I measured the results before and after. Our single-stage image produced by Listing 2 weighs in at 955MB. After converting to the multi-stage build in Listing 3, the image now comes to 902MB. That’s a reasonable reduction — we removed 53MB from our image!

While 53MB seems like a lot, we have actually only shaved off just more than 5 percent of the size. I know what you’re going to say now: But Ash, our image is still monstrously huge! There’s still way too much bloat in that image.

Well, to make our image even smaller, we now need to use the alpine, or slimmed-down, Node.js base image. We can do this by changing our second build stage from node:10.15.2 to node:10.15.2-alpine.
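Only the base image of the second stage needs to change; the first stage keeps the full image, since it is the one that needs the complete build toolchain. A sketch of the relevant lines:

# Build stage 1: full image with the complete build toolchain
FROM node:10.15.2

# ... build steps unchanged from Listing 3 ...

# Build stage 2: slim alpine image for the production artifact
FROM node:10.15.2-alpine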

This reduces our production image down to 73MB — that’s a huge win! Now the savings we get from discarding our debris is more like a whopping 60 percent. Alright, we are really getting somewhere now!

This highlights another benefit of multi-stage builds: we can use separate Docker base images for each of our build stages. This means you can customize each build stage by using a different base image.

Say you have one stage that relies on some tools that are in a different image, or you have created a special Docker image that is custom for your build process. This gives us a lot of flexibility when constructing our build pipelines.

How does it work?

You probably already guessed this: each stage or build process produces its own separate Docker image. You can see how this works in Figure 6.

The Docker image produced by a stage can be used by the following stages. Once the final image is produced, all the intermediate images are discarded; we take what we want for the final image, and the rest gets thrown away.


Figure 6: Each stage of a multi-stage Docker build produces an image.
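A practical consequence: you can ask Docker to build only part of the pipeline with the --target flag, which is handy for inspecting an intermediate image. This assumes the first stage has been given a name such as builder via FROM ... as builder (see the pro tips below):

# build only up to (and including) the stage named "builder"
docker build --target builder -t my-app:builder .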

Adding more stages

There’s no need to stop at two stages, although that’s often all that’s needed; we can add as many stages as we need. A specific example is illustrated in Figure 7.

Here we are building TypeScript code in stage 1 and our React client in stage 2. In addition, there’s a third stage that produces the final image from the results of the first two stages.


Figure 7: Using a Docker multi-stage build, we can create more complex build pipelines.

Pro tips

Now it's time to leave you with a few advanced tips to explore on your own:

  1. You can name your build stages! You don't have to leave them as the default 0, 1, etc. Naming your build stages will make your Dockerfile more readable (see the sketch after this list).
  2. Understand the options you have for base images. Using the right base image can relieve a lot of confusion when constructing your build pipeline.
  3. Build a custom base image if the complexity of your build process is getting out of hand.
  4. You can pull from external images! Just like you pull files from earlier stages, you can also pull files from images that are published to a Docker repository. This gives you an option to pre-bake an early build stage if it’s expensive and doesn’t change very often.
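Tying tips 1 and 4 together, here is a hedged sketch of naming a stage and copying from it, as well as from an external published image (the image and path names are hypothetical):

# tip 1: name the stage with "as"
FROM node:10.15.2 as builder
# ... build steps ...

FROM node:10.15.2-alpine
WORKDIR /usr/src/app
# refer to the named stage instead of index 0
COPY --from=builder /usr/src/app/build ./build
# tip 4: pull files from any published image
# COPY --from=some/published-image /path/in/image ./local-path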
Conclusion and resources

Docker multi-stage builds enable us to create more complex build pipelines without having to resort to magic tricks. They help us slim down our production Docker images and remove the bloat. They also allow us to structure and modularize our build process, which makes it easier to test parts of our build process in isolation.

So please have some fun with Docker multi-stage builds, and don’t forget to have a look at the example code on GitHub.

Here’s the Docker documentation on multi-stage builds, too.

Dockerizing a Node.js web application

In this article, we will see how to dockerize a Node.js application.

Originally published by ganeshmani009 at cloudnweb.dev

What is Docker?

Docker is a containerization platform where developers can package an application and run it as a container.

In simple words, Docker runs each application in a separate, isolated environment; containers share only host resources such as the OS kernel, memory, etc.

Virtual Machine vs Docker

Here we can see the difference between Docker and virtual machines. To read more about Docker, see the Docker Docs.

Docker and Node.js setup

Now we are going to see how to dockerize a Node.js application. Before that, Docker has to be installed on the machine: Docker Installation

After installing Docker, we need to initialize the Node application.

npm init --yes
npm install express body-parser

The first command initializes a package.json file, which contains details about the application and its dependencies. The second installs express and body-parser.

Create a file called server.js and paste in the following code:

'use strict';

const express = require('express');

// Constants
const PORT = 8080;
const HOST = '0.0.0.0';

// App
const app = express();
app.get('/', (req, res) => {
  res.send('You have done it !!!!!\n');
});

app.listen(PORT, () => {
  console.log(`Running on http://${HOST}:${PORT}`);
});

This runs a basic Express application server. Now we need to create the Docker image file: create a file named Dockerfile and add the following commands.

FROM node:8

First, we pull the node image from Docker Hub as the base of our image.

WORKDIR /usr/src/app

Next, we set /usr/src/app as the working directory in the Docker image.

COPY package*.json ./
RUN npm install

This copies package.json from the local machine to the Docker image. Copying node_modules straight from the local machine would not be efficient, so we copy only package.json and install all the dependencies inside the Docker image.

COPY . .
EXPOSE 8080

CMD [ "npm" , "start" ]

This copies all the source code from the local machine to the Docker image and declares that the app listens on port 8080 inside the container; that container port can later be mapped to a port on the local machine. Finally, CMD defines the command that runs the app.

Your Dockerfile should now look like:

# this installs the node image from docker hub
FROM node:8

# this is the current working directory in the docker image
WORKDIR /usr/src/app
#copy package.json from local to docker image
COPY package*.json ./
#run npm install commands
RUN npm install
#copy all the files from local directory to docker image
COPY . .
#this port exposed to the docker to map.
EXPOSE 8080

CMD [ "npm" , "start" ]

Create a .dockerignore file with the following content:

node_modules
npm-debug.log

Now we can build our image from the command line:

$ docker build -t <your username>/node-web-app .

The -t flag tags the image with a name, so it is easier to identify than by ID. Note: the dot at the end of the command is important (without it, the build won't work).

We can run the image using the following command:

docker run -p 49160:8080 -d <your username>/node-web-app
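To confirm the container actually started, docker ps lists running containers along with their port mappings, and docker logs shows the app's output (the container id comes from the docker ps output):

# list running containers and their port mappings
docker ps

# print the app's console output
docker logs <container id>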

We can check it using:

curl -i localhost:49160

The output should be:

HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/html; charset=utf-8
Content-Length: 23
ETag: W/"17-C2jfoqVpuUrcmNFogd/3pZ5xds8"
Date: Mon, 08 Apr 2019 17:29:12 GMT
Connection: keep-alive

You have done it !!!!!

To read more

https://github.com/nodejs/docker-node/blob/master/docs/BestPractices.md



Docker Best Practices for Node Developers


Welcome to the best course on the planet for using Docker with Node.js! With your basic knowledge of Docker and Node.js in hand, Docker Mastery for Node.js is a course for anyone on the Node.js path. This course will help you master them together.

This is my talk on all the best of Docker for Node.js developers and DevOps dealing with Node apps, from DockerCon 2019. Get the full 9-hour training course with my coupon at http://bit.ly/365ogba

Get the source code for this talk at https://github.com/BretFisher/dockercon19

Some of the many cool things you'll do in this course:
  • Build Node.js Images that auto-scan for security vulnerabilities
  • Use Docker's cutting-edge BuildKit with SSH Agents and NPM Caches for better image building
  • Use docker-compose with Visual Studio Code for full Node.js debug support
  • Use BuildKit and Multi-stage Builds to create minimal and flexible Dockerfiles
  • Build custom Node.js images using distros like CentOS and Alpine
  • Test Docker init, tini, and Node.js as a PID 1 process in containers
  • Create Node.js apps that properly startup and respond to healthchecks
  • Develop ARM based Node.js apps with Docker Desktop, and deploy to AWS A1 Servers
  • Build graceful shutdown code into your apps for zero-downtime deploys
  • Dig into HTTP connections with orchestration, and how Proxies can help
  • Study examples of Docker Swarm and Kubernetes deployments for Node.js
  • Spend time Migrating traditional (legacy) Node.js apps into containers
  • Simplify your microservice solutions with advanced Docker Compose features
What you will learn in this course

You'll start with a quick review about getting set up with Docker, as well as Docker Compose basics. That way we're on the same page for the basics.

Then you'll jump into Node.js Dockerfile basics, so you'll have a good Dockerfile foundation for the new features we'll add throughout the course.

You'll build on all the different things you learn from each lecture in the course. Once you have the basics of Compose, the Dockerfile, and Docker images down, you'll focus on nuances like how Docker and Linux control the Node process, and how Docker changes that, so you know your options for starting up and shutting down Node.js and the right way to do it in different scenarios.

We'll cover advanced, newer features that make the Dockerfile as efficient and flexible as possible, using things like BuildKit and multi-stage builds.

Then we'll talk about distributed computing and cloud design to ensure your containerized Node.js apps follow 12-factor design, as well as learning how to migrate old apps into this new way of doing things.

Next, we cover Compose and its awesome features to get a really efficient local development and test setup using the Docker Compose command line and YAML file.

With all this knowledge, you'll progress to production concerns and making images production-ready.

Then we'll jump into deploying those containers and running them in production. Whether you use Docker Engine or orchestration with Kubernetes or Swarm, I've got you covered. In addition, we'll cover HTTP connections and reverse proxies for connection handling and routing with multi-container systems.

Lastly, you'll get a final, big assignment where you'll be building and deploying a large, complex solution, including multiple Node.js containers that are doing different things. You'll build Docker images, Dockerfiles, and compose files, and deploy them to a server to test. You'll need to check whether connections fail over properly. You'll basically take everything you've learned and apply it in one big project!