Containerizing a Node.js Application for Development With Docker Compose

Introduction

If you are actively developing an application, using Docker can simplify your workflow and the process of deploying your application to production. Working with containers in development offers the following benefits:

  • Environments are consistent, meaning that you can choose the languages and dependencies you want for your project without worrying about system conflicts.
  • Environments are isolated, making it easier to troubleshoot issues and onboard new team members.
  • Environments are portable, allowing you to package and share your code with others.

This tutorial will show you how to set up a development environment for a Node.js application using Docker. You will create two containers — one for the Node application and another for the MongoDB database — with Docker Compose. Because this application works with Node and MongoDB, our setup will do the following:

  • Synchronize the application code on the host with the code in the container to facilitate changes during development.

  • Ensure that changes to the application code work without a restart.

  • Create a user and password-protected database for the application’s data.

  • Persist this data.

At the end of this tutorial, you will have a working shark information application running on Docker containers.

Prerequisites

To follow this tutorial, you will need:

  • A development server running Ubuntu 18.04, along with a non-root user with sudo privileges and an active firewall. For guidance on how to set these up, please see this Initial Server Setup guide.

  • Docker installed on your server, following Steps 1 and 2 of How To Install and Use Docker on Ubuntu 18.04.

  • Docker Compose installed on your server, following Step 1 of How To Install Docker Compose on Ubuntu 18.04.

Step 1 — Cloning the Project and Modifying Dependencies

The first step in building this setup will be cloning the project code and modifying its package.json file, which includes the project’s dependencies. We will add nodemon to the project’s devDependencies, specifying that we will be using it during development. Running the application with nodemon ensures that it will be automatically restarted whenever you make changes to your code.

First, clone the nodejs-mongo-mongoose repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in How To Integrate MongoDB with Your Node Application, which explains how to integrate a MongoDB database with an existing Node application using Mongoose.

Clone the repository into a directory called node_project:

git clone https://github.com/do-community/nodejs-mongo-mongoose.git node_project


Navigate to the node_project directory:

cd node_project


Open the project’s package.json file using nano or your favorite editor:

nano package.json


Beneath the project dependencies and above the closing curly brace, create a new devDependencies object that includes nodemon:

~/node_project/package.json

...
"dependencies": {
    "ejs": "^2.6.1",
    "express": "^4.16.4",
    "mongoose": "^5.4.10"
  },
  "devDependencies": {
    "nodemon": "^1.18.10"
  }    
}


Save and close the file when you are finished editing.
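Optionally, if you also want to run nodemon directly on your host while testing, you could add a dev script to the scripts section of package.json. This is purely a convenience and is not required for the containerized workflow; the start script shown here is an assumption about the repository's existing scripts:

"scripts": {
  "start": "node app.js",
  "dev": "nodemon app.js"
},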

With the project code in place and its dependencies modified, you can move on to refactoring the code for a containerized workflow.

Step 2 — Configuring Your Application to Work with Containers

Modifying our application for a containerized workflow means making our code more modular. Containers offer portability between environments, and our code should reflect that by remaining as decoupled from the underlying operating system as possible. To achieve this, we will refactor our code to make greater use of Node’s process.env property, which returns an object with information about your user environment at runtime. We can use this object in our code to dynamically assign configuration information at runtime with environment variables.

Let’s begin with app.js, our main application entrypoint. Open the file:

nano app.js


Inside, you will see a definition for a port constant, as well as a listen function that uses this constant to specify the port the application will listen on:

~/node_project/app.js

...
const port = 8080;
...
app.listen(port, function () {
  console.log('Example app listening on port 8080!');
});


Let’s redefine the port constant to allow for dynamic assignment at runtime using the process.env object. Make the following changes to the constant definition and listen function:

~/node_project/app.js

...
const port = process.env.PORT || 8080;
...
app.listen(port, function () {
  console.log(`Example app listening on ${port}!`);
});


Our new constant definition assigns port dynamically using the value passed in at runtime or 8080. Similarly, we’ve rewritten the listen function to use a template literal, which will interpolate the port value when listening for connections. Because we will be mapping our ports elsewhere, these revisions will prevent our having to continuously revise this file as our environment changes.

When you are finished editing, save and close the file.
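If you would like to see this behavior before containerizing, and you have Node and the project dependencies installed locally, you can start the application with an explicit PORT value; the port number here is only an example:

PORT=3000 node app.js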

Next, we will modify our database connection information to remove any configuration credentials. Open the db.js file, which contains this information:

nano db.js


Currently, the file does the following things:

  • Imports Mongoose, the Object Document Mapper (ODM) that we’re using to create schemas and models for our application data.

  • Sets the database credentials as constants, including the username and password.

  • Connects to the database using the mongoose.connect method.

For more information about the file, please see Step 3 of How To Integrate MongoDB with Your Node Application.

Our first step in modifying the file will be redefining the constants that include sensitive information. Currently, these constants look like this:

~/node_project/db.js

...
const MONGO_USERNAME = 'sammy';
const MONGO_PASSWORD = 'your_password';
const MONGO_HOSTNAME = '127.0.0.1';
const MONGO_PORT = '27017';
const MONGO_DB = 'sharkinfo';
...


Instead of hardcoding this information, you can use the process.env object to capture the runtime values for these constants. Modify the block to look like this:

~/node_project/db.js

...
const {
  MONGO_USERNAME,
  MONGO_PASSWORD,
  MONGO_HOSTNAME,
  MONGO_PORT,
  MONGO_DB
} = process.env;
...


Save and close the file when you are finished editing.

At this point, you have modified db.js to work with your application’s environment variables, but you still need a way to pass these variables to your application. Let’s create an .env file with values that you can pass to your application at runtime.

Open the file:

nano .env


This file will include the information that you removed from db.js: the username and password for your application’s database, as well as the port setting and database name. Remember to update the username, password, and database name listed here with your own information:

~/node_project/.env

MONGO_USERNAME=sammy
MONGO_PASSWORD=your_password
MONGO_PORT=27017
MONGO_DB=sharkinfo


Note that we have removed the host setting that originally appeared in db.js. We will now define our host at the level of the Docker Compose file, along with other information about our services and containers.

Save and close this file when you are finished editing.

Because your .env file contains sensitive information, you will want to ensure that it is included in your project’s .dockerignore and .gitignore files so that it does not copy to your version control or containers.

Open your .dockerignore file:

nano .dockerignore


Add the following line to the bottom of the file:

~/node_project/.dockerignore

...
.gitignore
.env


Save and close the file when you are finished editing.

The .gitignore file in this repository already includes .env, but feel free to check that it is there:

nano .gitignore


~/node_project/.gitignore

...
.env
...


At this point, you have successfully extracted sensitive information from your project code and taken measures to control how and where this information gets copied. Now you can add more robustness to your database connection code to optimize it for a containerized workflow.

Step 3 — Modifying Database Connection Settings

Our next step will be to make our database connection method more robust by adding code that handles cases where our application fails to connect to our database. Introducing this level of resilience to your application code is a recommended practice when working with containers using Compose.

Open db.js for editing:

nano db.js


You will see the code that we added earlier, along with the url constant for Mongo’s connection URI and the Mongoose connect method:

~/node_project/db.js

...
const {
  MONGO_USERNAME,
  MONGO_PASSWORD,
  MONGO_HOSTNAME,
  MONGO_PORT,
  MONGO_DB
} = process.env;

const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;

mongoose.connect(url, {useNewUrlParser: true});


Currently, our connect method accepts an option that tells Mongoose to use Mongo’s new URL parser. Let’s add a few more options to this method to define parameters for reconnection attempts. We can do this by creating an options constant that includes the relevant information, in addition to the new URL parser option. Below your Mongo constants, add the following definition for an options constant:

~/node_project/db.js

...
const {
  MONGO_USERNAME,
  MONGO_PASSWORD,
  MONGO_HOSTNAME,
  MONGO_PORT,
  MONGO_DB
} = process.env;

const options = {
  useNewUrlParser: true,
  reconnectTries: Number.MAX_VALUE,
  reconnectInterval: 500, 
  connectTimeoutMS: 10000,
};
...


The reconnectTries option tells Mongoose to continue trying to connect indefinitely, while reconnectInterval defines the period between connection attempts in milliseconds. connectTimeoutMS defines 10 seconds as the period that the Mongo driver will wait before failing the connection attempt.

We can now use the new options constant in the Mongoose connect method to fine tune our Mongoose connection settings. We will also add a promise to handle potential connection errors.

Currently, the Mongoose connect method looks like this:

~/node_project/db.js

...
mongoose.connect(url, {useNewUrlParser: true});


Delete the existing connect method and replace it with the following code, which includes the options constant and a promise:

~/node_project/db.js

...
mongoose.connect(url, options).then( function() {
  console.log('MongoDB is connected');
})
  .catch( function(err) {
  console.log(err);
});


In the case of a successful connection, our function logs an appropriate message; otherwise it will catch and log the error, allowing us to troubleshoot.

The finished file will look like this:

~/node_project/db.js

const mongoose = require('mongoose');

const {
  MONGO_USERNAME,
  MONGO_PASSWORD,
  MONGO_HOSTNAME,
  MONGO_PORT,
  MONGO_DB
} = process.env;

const options = {
  useNewUrlParser: true,
  reconnectTries: Number.MAX_VALUE,
  reconnectInterval: 500,
  connectTimeoutMS: 10000,
};

const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;

mongoose.connect(url, options).then( function() {
  console.log('MongoDB is connected');
})
  .catch( function(err) {
  console.log(err);
});


Save and close the file when you have finished editing.

You have now added resiliency to your application code to handle cases where your application might fail to connect to your database. With this code in place, you can move on to defining your services with Compose.

Step 4 — Defining Services with Docker Compose

With your code refactored, you are ready to write the docker-compose.yml file with your service definitions. A service in Compose is a running container, and service definitions — which you will include in your docker-compose.yml file — contain information about how each container image will run. The Compose tool allows you to define multiple services to build multi-container applications.

Before defining our services, however, we will add a tool to our project called wait-for to ensure that our application only attempts to connect to our database once the database startup tasks are complete. This wrapper script uses netcat to poll whether or not a specific host and port are accepting TCP connections. Using it allows you to control your application’s attempts to connect to your database by testing whether or not the database is ready to accept connections.

Though Compose allows you to specify dependencies between services using the depends_on option, this order is based on whether or not the container is running rather than its readiness. Using depends_on won’t be optimal for our setup, since we want our application to connect only when the database startup tasks, including adding a user and password to the admin authentication database, are complete. For more information on using wait-for and other tools to control startup order, please see the relevant recommendations in the Compose documentation.
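For contrast, a depends_on based ordering would look like the sketch below. It only guarantees that the db container has been started, not that MongoDB inside it is ready to accept authenticated connections, which is why we use wait-for instead:

services:
  nodejs:
    ...
    depends_on:
      - db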

Open a file called wait-for.sh:

nano wait-for.sh


Paste the following code into the file to create the polling function:

~/node_project/wait-for.sh

#!/bin/sh

# original script: https://github.com/eficode/wait-for/blob/master/wait-for

TIMEOUT=15
QUIET=0

echoerr() {
  if [ "$QUIET" -ne 1 ]; then printf "%s\n" "$*" 1>&2; fi
}

usage() {
  exitcode="$1"
  cat << USAGE >&2
Usage:
  $cmdname host:port [-t timeout] [-- command args]
  -q | --quiet                        Do not output any status messages
  -t TIMEOUT | --timeout=timeout      Timeout in seconds, zero for no timeout
  -- COMMAND ARGS                     Execute command with args after the test finishes
USAGE
  exit "$exitcode"
}

wait_for() {
  for i in `seq $TIMEOUT` ; do
    nc -z "$HOST" "$PORT" > /dev/null 2>&1

    result=$?
    if [ $result -eq 0 ] ; then
      if [ $# -gt 0 ] ; then
        exec "[email protected]"
      fi
      exit 0
    fi
    sleep 1
  done
  echo "Operation timed out" >&2
  exit 1
}

while [ $# -gt 0 ]
do
  case "$1" in
    *:* )
    HOST=$(printf "%s\n" "$1"| cut -d : -f 1)
    PORT=$(printf "%s\n" "$1"| cut -d : -f 2)
    shift 1
    ;;
    -q | --quiet)
    QUIET=1
    shift 1
    ;;
    -t)
    TIMEOUT="$2"
    if [ "$TIMEOUT" = "" ]; then break; fi
    shift 2
    ;;
    --timeout=*)
    TIMEOUT="${1#*=}"
    shift 1
    ;;
    --)
    shift
    break
    ;;
    --help)
    usage 0
    ;;
    *)
    echoerr "Unknown argument: $1"
    usage 1
    ;;
  esac
done

if [ "$HOST" = "" -o "$PORT" = "" ]; then
  echoerr "Error: you need to provide a host and port to test."
  usage 2
fi

wait_for "[email protected]"


Save and close the file when you are finished adding the code.

Make the script executable:

chmod +x wait-for.sh


Next, open the docker-compose.yml file:

nano docker-compose.yml


First, define the nodejs application service by adding the following code to the file:

~/node_project/docker-compose.yml

version: '3'

services:
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    image: nodejs
    container_name: nodejs
    restart: unless-stopped
    env_file: .env
    environment:
      - MONGO_USERNAME=$MONGO_USERNAME
      - MONGO_PASSWORD=$MONGO_PASSWORD
      - MONGO_HOSTNAME=db
      - MONGO_PORT=$MONGO_PORT
      - MONGO_DB=$MONGO_DB 
    ports:
      - "80:8080"
    volumes:
      - .:/home/node/app
      - node_modules:/home/node/app/node_modules
    networks:
      - app-network
    command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js


The nodejs service definition includes the following options:

  • build: This defines the configuration options, including the context and dockerfile, that will be applied when Compose builds the application image. If you wanted to use an existing image from a registry like Docker Hub, you could use the image instruction instead, with information about your username, repository, and image tag.

  • context: This defines the build context for the image build — in this case, the current project directory.

  • dockerfile: This specifies the Dockerfile in your current project directory as the file Compose will use to build the application image. For more information about this file, please see How To Build a Node.js Application with Docker.

  • image, container_name: These apply names to the image and container.

  • restart: This defines the restart policy. The default is no, but we have set the container to restart unless it is stopped.

  • env_file: This tells Compose that we would like to add environment variables from a file called .env, located in our build context.

  • environment: Using this option allows you to add the Mongo connection settings you defined in the .env file. Note that we are not setting NODE_ENV to development, since this is Express’s default behavior if NODE_ENV is not set. When moving to production, you can set this to production to enable view caching and less verbose error messages. Also note that we have specified the db database container as the host, as discussed in Step 2.

  • ports: This maps port 80 on the host to port 8080 on the container.

  • volumes: We are including two types of mounts here:

  • The first is a bind mount that mounts our application code on the host to the /home/node/app directory on the container. This will facilitate rapid development, since any changes you make to your host code will be populated immediately in the container.

  • The second is a named volume, node_modules. When Docker runs the npm install instruction listed in the application Dockerfile, npm will create a new node_modules directory on the container that includes the packages required to run the application. The bind mount we just created will hide this newly created node_modules directory, however. Since node_modules on the host is empty, the bind will map an empty directory to the container, overriding the new node_modules directory and preventing our application from starting. The named node_modules volume solves this problem by persisting the contents of the /home/node/app/node_modules directory and mounting it to the container, hiding the bind.

Keep the following points in mind when using this approach:

  • Your bind will mount the contents of the node_modules directory on the container to the host and this directory will be owned by root, since the named volume was created by Docker.

  • If you have a pre-existing node_modules directory on the host, it will override the node_modules directory created on the container. The setup that we’re building in this tutorial assumes that you do not have a pre-existing node_modules directory and that you won’t be working with npm on your host. This is in keeping with a twelve-factor approach to application development, which minimizes dependencies between execution environments.

  • networks: This specifies that our application service will join the app-network network, which we will define at the bottom of the file.

  • command: This option lets you set the command that should be executed when Compose runs the image. Note that this will override the CMD instruction that we set in our application Dockerfile. Here, we are running the application using the wait-for script, which will poll the db service on port 27017 to test whether or not the database service is ready. Once the readiness test succeeds, the script will execute the command we have set, /home/node/app/node_modules/.bin/nodemon app.js, to start the application with nodemon. This will ensure that any future changes we make to our code are reloaded without our having to restart the application.
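For reference, the application Dockerfile from the prerequisite tutorial ends with instructions along these lines; the exact contents depend on how you wrote that file, so treat this fragment as an assumption:

...
EXPOSE 8080

CMD [ "node", "app.js" ]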

Next, create the db service by adding the following code below the application service definition:

~/node_project/docker-compose.yml

...
  db:
    image: mongo:4.1.8-xenial
    container_name: db
    restart: unless-stopped
    env_file: .env
    environment:
      - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
      - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
    volumes:  
      - dbdata:/data/db   
    networks:
      - app-network  


Some of the settings we defined for the nodejs service remain the same, but we’ve also made the following changes to the image, environment, and volumes definitions:

  • image: To create this service, Compose will pull the 4.1.8-xenial Mongo image from Docker Hub. We are pinning a particular version to avoid possible future conflicts as the Mongo image changes. For more information about version pinning, please see the Docker documentation on Dockerfile best practices.

  • MONGO_INITDB_ROOT_USERNAME, MONGO_INITDB_ROOT_PASSWORD: The mongo image makes these environment variables available so that you can modify the initialization of your database instance. MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD together create a root user in the admin authentication database and ensure that authentication is enabled when the container starts. We have set MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD using the values from our .env file, which we pass to the db service using the env_file option. Doing this means that our sammy application user will be a root user on the database instance, with access to all of the administrative and operational privileges of that role. When working in production, you will want to create a dedicated application user with appropriately scoped privileges.

Note: Keep in mind that these variables will not take effect if you start the container with an existing data directory in place.

  • dbdata:/data/db: The named volume dbdata will persist the data stored in Mongo’s default data directory, /data/db. This will ensure that you don’t lose data in cases where you stop or remove containers.

We’ve also added the db service to the app-network network with the networks option.

As a final step, add the volume and network definitions to the bottom of the file:

~/node_project/docker-compose.yml

...
networks:
  app-network:
    driver: bridge

volumes:
  dbdata:
  node_modules:  


The user-defined bridge network app-network enables communication between our containers since they are on the same Docker daemon host. This streamlines traffic and communication within the application, as it opens all ports between containers on the same bridge network, while exposing no ports to the outside world. Thus, our db and nodejs containers can communicate with each other, and we only need to expose port 80 for front-end access to the application.
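Once the services are running (Step 5), you can inspect this network and confirm that both containers are attached to it. The network name is prefixed with the Compose project name, which defaults to the project directory, node_project:

docker network inspect node_project_app-network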

Our top-level volumes key defines the volumes dbdata and node_modules. When Docker creates volumes, the contents of the volume are stored in a part of the host filesystem, /var/lib/docker/volumes/, that’s managed by Docker. The contents of each volume are stored in a directory under /var/lib/docker/volumes/ and get mounted to any container that uses the volume. In this way, the shark information data that our users will create will persist in the dbdata volume even if we remove and recreate the db container.
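Similarly, after the services are up you can confirm where Docker stores the dbdata volume on the host; the node_project_ prefix below follows the same project-name convention as the network:

docker volume inspect node_project_dbdata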

The finished docker-compose.yml file will look like this:

~/node_project/docker-compose.yml

version: '3'

services:
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    image: nodejs
    container_name: nodejs
    restart: unless-stopped
    env_file: .env
    environment:
      - MONGO_USERNAME=$MONGO_USERNAME
      - MONGO_PASSWORD=$MONGO_PASSWORD
      - MONGO_HOSTNAME=db
      - MONGO_PORT=$MONGO_PORT
      - MONGO_DB=$MONGO_DB
    ports:
      - "80:8080"
    volumes:
      - .:/home/node/app
      - node_modules:/home/node/app/node_modules
    networks:
      - app-network
    command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js 

  db:
    image: mongo:4.1.8-xenial
    container_name: db
    restart: unless-stopped
    env_file: .env
    environment:
      - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
      - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
    volumes:     
      - dbdata:/data/db
    networks:
      - app-network  

networks:
  app-network:
    driver: bridge

volumes:
  dbdata:
  node_modules:  


Save and close the file when you are finished editing.

With your service definitions in place, you are ready to start the application.

Step 5 — Testing the Application

With your docker-compose.yml file in place, you can create your services with the docker-compose up command. You can also test that your data will persist by stopping and removing your containers with docker-compose down.

First, build the container images and create the services by running docker-compose up with the -d flag, which will then run the nodejs and db containers in the background:

docker-compose up -d


You will see output confirming that your services have been created:

Output...
Creating db ... done
Creating nodejs ... done


You can also get more detailed information about the startup processes by displaying the log output from the services:

docker-compose logs 


You will see something like this if everything has started correctly:

Output...
nodejs    | [nodemon] starting `node app.js`
nodejs    | Example app listening on 8080!
nodejs    | MongoDB is connected
...
db        | 2019-02-22T17:26:27.329+0000 I ACCESS   [conn2] Successfully authenticated as principal sammy on admin


You can also check the status of your containers with docker-compose ps:

docker-compose ps


You will see output indicating that your containers are running:

Output
Name     Command                          State   Ports
----------------------------------------------------------------------
db       docker-entrypoint.sh mongod      Up      27017/tcp
nodejs   ./wait-for.sh db:27017 --  ...   Up      0.0.0.0:80->8080/tcp


With your services running, you can visit http://your_server_ip in the browser. You will see the application landing page.

Click on the Get Shark Info button. You will see a page with an entry form where you can enter a shark name and a description of that shark’s general character.

In the form, add a shark of your choosing. For the purpose of this demonstration, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field.

Click on the Submit button. You will see a page with this shark information displayed back to you.

As a final step, we can test that the data you’ve just entered will persist if you remove your database container.

Back at your terminal, type the following command to stop and remove your containers and network:

docker-compose down


Note that we are not including the --volumes option; hence, our dbdata volume is not removed.
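If you ever want to start from scratch and discard the shark data as well, you can pass the --volumes flag, which removes the named volumes along with the containers and network:

docker-compose down --volumes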

The following output confirms that your containers and network have been removed:

Output
Stopping nodejs ... done
Stopping db     ... done
Removing nodejs ... done
Removing db     ... done
Removing network node_project_app-network


Recreate the containers:

docker-compose up -d


Now head back to the shark information form.

Enter a new shark of your choosing. We’ll go with Whale Shark and Large.

Once you click Submit, you will see that the new shark has been added to the shark collection in your database without the loss of the data you’ve already entered.

Your application is now running on Docker containers with data persistence and code synchronization enabled.

Conclusion

By following this tutorial, you have created a development setup for your Node application using Docker containers. You’ve made your project more modular and portable by extracting sensitive information and decoupling your application’s state from your application code. You have also configured a boilerplate docker-compose.yml file that you can revise as your development needs and requirements change.

As you develop, you may be interested in learning more about designing applications for containerized and Cloud Native workflows. Please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes for more information on these topics.

To learn more about the code used in this tutorial, please see How To Build a Node.js Application with Docker and How To Integrate MongoDB with Your Node Application. For information about deploying a Node application with an Nginx reverse proxy using containers, please see How To Secure a Containerized Node.js Application with Nginx, Let’s Encrypt, and Docker Compose.

Originally published by Kathleen Juell at https://www.digitalocean.com

Learn more

The Complete Node.js Developer Course (3rd Edition)

React Node FullStack - Social Network from Scratch to Deploy

NodeJS - The Complete Guide (incl. MVC, REST APIs, GraphQL)

Angular & NodeJS - The MEAN Stack Guide

Docker Mastery: The Complete Toolset From a Docker Captain

Docker and Kubernetes: The Complete Guide

Docker Crash Course for busy DevOps and Developers

Jenkins, From Zero To Hero: Become a DevOps Jenkins Master

DOCKER | Step by Step for Beginners | with Sample Project

Crafting multi-stage builds with Docker in Node.js

Learn how you can use a multi-stage Docker build for your Node.js application. Docker multi-stage builds enable us to create more complex build pipelines without having to resort to magic tricks.

Everyone knows about Docker. It’s the ubiquitous tool for packaging and distribution of applications that seemed to come from nowhere and take over our industry! If you are reading this, it means you already understand the basics of Docker and are now looking to create a more complex build pipeline.

In the past, optimizing our Docker images has been a challenging experience. All sorts of magic tricks were employed to reduce the size of our applications before they went to production. Things are different now because support for multi-stage builds has been added to Docker.

In this post, we explore how you can use a multi-stage build for your Node.js application. For an example, we’ll use a TypeScript build process, but the same kind of thing will work for any build pipeline. So even if you’d prefer to use Babel, or maybe you need to build a React client, then a Docker multi-stage build can work for you as well.

A basic, single-stage Dockerfile for Node.js

Let’s start by looking at a basic Dockerfile for Node.js. We can visualize the normal Docker build process as shown in Figure 1 below.

Figure 1: Normal Docker build process.

We use the docker build command to turn our Dockerfile into a Docker image. We then use the docker run command to instantiate our image to a Docker container.

The Dockerfile in Listing 1 below is just a standard, run-of-the-mill Dockerfile for Node.js. You have probably seen this kind of thing before. All we are doing here is copying the package.json, installing production dependencies, copying the source code, and finally starting the application.

This Dockerfile is for regular JavaScript applications, so we don’t need a build process yet. I’m only showing you this simple Dockerfile so you can compare it to the multi-stage Dockerfile I’ll be showing you soon.

Listing 1: A run-of-the-mill Dockerfile for Node.js

FROM node:10.15.2

WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY ./src ./src
EXPOSE 3000
CMD npm start

Listing 1 is a quite ordinary-looking Dockerfile. In fact, all Dockerfiles looked pretty much like this before multi-stage builds were introduced. Now that Docker supports multi-stage builds, we can visualize our simple Dockerfile as the single-stage build process illustrated in Figure 2.


Figure 2: A single-stage build pipeline.

The need for multiple stages

We can already run whatever commands we want in the Dockerfile when building our image, so why do we even need a multi-stage build?

To find out why, let’s upgrade our simple Dockerfile to include a TypeScript build process. Listing 2 shows the upgraded Dockerfile. The updated lines copy tsconfig.json into the image, install all dependencies (including dev dependencies) rather than only production ones, and run the TypeScript build.

Listing 2: We have upgraded our simple Dockerfile to include a TypeScript build process

FROM node:10.15.2

WORKDIR /usr/src/app
COPY package*.json ./
COPY tsconfig.json ./
RUN npm install
COPY ./src ./src
RUN npm run build
EXPOSE 80
CMD npm start
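
For the RUN npm run build and CMD npm start instructions to work, package.json needs matching scripts. Here is a minimal sketch, assuming the TypeScript compiler writes its output to a build directory (which matches the COPY --from=0 /usr/src/app/build line in Listing 3 below) and that the compiled entry point is build/index.js; both are assumptions for illustration:

"scripts": {
  "build": "tsc",
  "start": "node ./build/index.js"
}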

We can easily and directly see the problem this causes. To see it for yourself, you should instantiate a container from this image and then shell into it and inspect its file system.

I did this and used the Linux tree command to list all the directories and files in the container. You can see the result in Figure 3.

Notice that we have unwittingly included in our production image all the debris of development and the build process. This includes our original TypeScript source code (which we don’t use in production), the TypeScript compiler itself (which, again, we don’t use in production), plus any other dev dependencies we might have installed into our Node.js project.


Figure 3: The debris from development and the build process is bloating our production Docker image.
Bear in mind this is only a trivial project, so we aren’t actually seeing too much cruft left in our production image. But you can imagine how bad this would be for a real application with many source files, many dev dependencies, and a more complex build process that generates temporary files!

We don’t want this extra bloat in production. The extra size makes our containers bigger. When our containers are bigger than needed, it means we aren’t making efficient use of our resources. The increased surface area of the container can also be a problem for security, where we generally prefer to minimize the attackable surface area of our application.

Wouldn’t it be nice if we could throw away the files we don’t want and just keep the ones we do want? This is exactly what a Docker multi-stage build can do for us.

Crafting a Dockerfile with a multi-stage build

We are going to split our Dockerfile into two stages. Figure 4 shows what our build pipeline looks like after the split.


Figure 4: A multi-stage Docker build pipeline to build TypeScript.

Our new multi-stage build pipeline has two stages: Build stage 1 is what builds our TypeScript code; Build stage 2 is what creates our production Docker image. The final Docker image produced at the end of this pipeline contains only what it needs and omits the cruft we don’t want.

To create our two-stage build pipeline, we are basically just going to create two Docker files in one. Listing 3 shows our Dockerfile with multiple stages added. The first FROM command initiates the first stage, and the second FROM command initiates the second stage.

Compare this to a regular single-stage Dockerfile, and you can see that it actually looks like two Dockerfiles squished together in one.

Listing 3: A multi-stage Dockerfile for building TypeScript code

# 
# Build stage 1.
# This stage builds our TypeScript and produces an intermediate Docker image containing the compiled JavaScript code.
#
FROM node:10.15.2

WORKDIR /usr/src/app
COPY package*.json ./
COPY tsconfig.json ./
RUN npm install
COPY ./src ./src
RUN npm run build

#
# Build stage 2.
# This stage pulls the compiled JavaScript code from the stage 1 intermediate image.
# This stage builds the final Docker image that we'll use in production.
#
FROM node:10.15.2

WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY --from=0 /usr/src/app/build ./build
EXPOSE 80
CMD npm start

To create this multi-stage Dockerfile, I simply took Listing 2 and divided it up into separate Dockerfiles. The first stage contains only what is needed to build the TypeScript code. The second stage contains only what is needed to produce the final production Docker image. I then merged the two Dockerfiles into a single file.

The most important thing to note is the use of --from in the second stage, on the COPY --from=0 line in Listing 3. This is the syntax we use to pull the built files from our first stage, which we refer to here as stage 0. We are pulling the compiled JavaScript files from the first stage into the second stage.
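Rather than referring to the first stage by its index, you can give it a name with AS and use that name in --from. This is a sketch of the relevant lines only, with builder as an illustrative stage name:

FROM node:10.15.2 AS builder
...

FROM node:10.15.2
...
COPY --from=builder /usr/src/app/build ./build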

We can easily check to make sure we got the desired result. After creating the new image and instantiating a container, I shelled in to check the contents of the file system. You can see in Figure 5 that we have successfully removed the debris from our production image.


Figure 5: We have removed the debris of development from our Docker image.

We now have fewer files in our image, it’s smaller, and it has less surface area. Yay! Mission accomplished.

But what, specifically, does this mean?

The effect of the multi-stage build

What exactly is the effect of the new build pipeline on our production image?

I measured the results before and after. Our single-stage image produced by Listing 2 weighs in at 955MB. After converting to the multi-stage build in Listing 3, the image now comes to 902MB. That’s a reasonable reduction — we removed 53MB from our image!

While 53MB seems like a lot, we have actually only shaved off just more than 5 percent of the size. I know what you’re going to say now: But Ash, our image is still monstrously huge! There’s still way too much bloat in that image.

Well, to make our image even smaller, we now need to use the alpine, or slimmed-down, Node.js base image. We can do this by changing our second build stage from node:10.15.2 to node:10.15.2-alpine.

This reduces our production image down to 73MB — that’s a huge win! Now the savings we get from discarding our debris is more like a whopping 60 percent. Alright, we are really getting somewhere now!
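The only change is to the FROM line that begins the second build stage:

# Build stage 2 now uses the slimmer Alpine variant as its base image.
FROM node:10.15.2-alpine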

This highlights another benefit of multi-stage builds: we can use separate Docker base images for each of our build stages. This means you can customize each build stage by using a different base image.

Say you have one stage that relies on some tools that are in a different image, or you have created a special Docker image that is custom for your build process. This gives us a lot of flexibility when constructing our build pipelines.

How does it work?

You probably already guessed this: each stage or build process produces its own separate Docker image. You can see how this works in Figure 6.

The Docker image produced by a stage can be used by the following stages. Once the final image is produced, all the intermediate images are discarded; we take what we want for the final image, and the rest gets thrown away.


Figure 6: Each stage of a multi-stage Docker build produces an image.

Adding more stages

There’s no need to stop at two stages, although that’s often all that’s needed; we can add as many stages as we need. A specific example is illustrated in Figure 7.

Here we are building TypeScript code in stage 1 and our React client in stage 2. In addition, there’s a third stage that produces the final image from the results of the first two stages.


Figure 7: Using a Docker multi-stage build, we can create more complex build pipelines.

Pro tips

Now time to leave you with a few advanced tips to explore on your own:

  1. You can name your build stages! You don’t have to leave them as the default 0, 1, etc. Naming your build stages will make your Dockerfile more readable.
  2. Understand the options you have for base images. Using the right base image can relieve a lot of confusion when constructing your build pipeline.
  3. Build a custom base image if the complexity of your build process is getting out of hand.
  4. You can pull from external images! Just like you pull files from earlier stages, you can also pull files from images that are published to a Docker repository. This gives you an option to pre-bake an early build stage if it’s expensive and doesn’t change very often.
Conclusion and resources

Docker multi-stage builds enable us to create more complex build pipelines without having to resort to magic tricks. They help us slim down our production Docker images and remove the bloat. They also allow us to structure and modularize our build process, which makes it easier to test parts of our build process in isolation.

So please have some fun with Docker multi-stage builds, and don’t forget to have a look at the example code on GitHub.

Here’s the Docker documentation on multi-stage builds, too.

How to Use Express.js, Node.js and MongoDB.js

In this post, I will show you how to use Express.js, Node.js and MongoDB. We will be creating a very simple Node application that will allow users to input data they want to store in a MongoDB database. It will also show all items that have been entered into the database.

Creating a Node Application

To get started I would recommend creating a new directory that will contain our application. For this demo I am creating a directory called node-demo. After creating the directory you will need to change into it.

mkdir node-demo
cd node-demo

Once we are in the directory we will need to create an application and we can do this by running the command
npm init

This will ask you a series of questions. Here are the answers I gave to the prompts.

The first step is to create a file that will contain our code for our Node.js server.

touch app.js

In our app.js we are going to add the following code to build a very simple Node.js Application.

var express = require("express");
var app = express();
var port = 3000;
 
app.get("/", (req, res) => {
  res.send("Hello World");
});
 
app.listen(port, () => {
  console.log("Server listening on port " + port);
});

What the code does is require the Express.js module. It then creates app by calling express. We define our port to be 3000.

The app.get line will listen for requests from the browser and will return the text “Hello World” back to the browser.

The last line actually starts the server and tells it to listen on port 3000.

Installing Express

Our app.js required the Express.js module. We need to install express in order for this to work properly. Go to your terminal and enter this command.

npm install express --save

This command will install the express module into our package.json. The module is installed as a dependency in our package.json as shown below.

To test our application you can go to the terminal and enter the command

node app.js

Open up a browser and navigate to the url http://localhost:3000

You will see the following in your browser

Creating Website to Save Data to MongoDB Database

Instead of showing the text “Hello World” when people view your application, what we want to do is to show a place for user to save data to the database.

We are going to allow users to enter a first name and a last name that we will be saving in the database.

To do this we will need to create a basic HTML file. In your terminal enter the following command to create an index.html file.

touch index.html

In our index.html file we will be creating an input field where users can input data that they want to have stored in the database. We will also need a button for users to click on that will add the data to the database.

Here is what our index.html file looks like.

<!DOCTYPE html>
<html>
  <head>
    <title>Intro to Node and MongoDB</title>
  </head>

  <body>
    <h1>Intro to Node and MongoDB</h1>
    <form method="post" action="/addname">
      <label>Enter Your Name</label><br>
      <input type="text" name="firstName" placeholder="Enter first name..." required>
      <input type="text" name="lastName" placeholder="Enter last name..." required>
      <input type="submit" value="Add Name">
    </form>
  </body>
</html>

If you are familiar with HTML, you will not find anything unusual in our code for our index.html file. We are creating a form where users can input their first name and last name and then click an “Add Name” button.

The form will do a post call to the /addname endpoint. We will be talking about endpoints and post later in this tutorial.

Displaying our Website to Users

We were previously displaying the text “Hello World” to users when they visited our website. Now we want to display our html file that we created. To do this we will need to change the route handler in our app.js file.

We will be using the sendFile command to show the index.html file. We will need to tell the server exactly where to find the index.html file. We can do that by using a Node global called __dirname. The __dirname variable provides the directory where the command was run. We will then append the path to our index.html file.

The route handler will need to be changed to:

app.use("/", (req, res) => {
  res.sendFile(__dirname + "/index.html");
});

Once you have saved your app.js file, we can test it by going to terminal and running node app.js

Open your browser and navigate to “http://localhost:3000”. You will see the following

Installing Mongoose

Now we need to add our database to the application. We will be connecting to a MongoDB database. I am assuming that you already have MongoDB installed and running on your computer.

To connect to the MongoDB database we are going to use a module called Mongoose. We will need to install the mongoose module just like we did with express. Go to your terminal and enter the following command.
npm install mongoose --save

This will install the mongoose module and add it as a dependency in our package.json.

Connecting to the Database

Now that we have the mongoose module installed, we need to connect to the database in our app.js file. MongoDB, by default, runs on port 27017. You connect to the database by telling it the location of the database and the name of the database.

In our app.js file after the line for the port and before the app.use line, enter the following lines to get access to mongoose and to connect to the database. For the database, I am going to use “node-demo”.

var mongoose = require("mongoose");
mongoose.Promise = global.Promise;
mongoose.connect("mongodb://localhost:27017/node-demo");

Creating a Database Schema

Once the user enters data in the input field and clicks the add button, we want the contents of the input field to be stored in the database. In order to know the format of the data in the database, we need to have a Schema.

For this tutorial, we will need a very simple Schema that has only two fields. I am going to call the fields firstName and lastName. The data stored in both fields will be a String.

After connecting to the database in our app.js we need to define our Schema. Here are the lines you need to add to the app.js.
var nameSchema = new mongoose.Schema({
  firstName: String,
  lastName: String
});

Once we have built our Schema, we need to create a model from it. I am going to call my model “User”. Here is the line you will add next to create our model.
var User = mongoose.model("User", nameSchema);

Creating a RESTful API

Now that we have a connection to our database, we need to create the mechanism by which data will be added to the database. This is done through our REST API. We will need to create an endpoint that will be used to send data to our server. Once the server receives this data then it will store the data in the database.

An endpoint is a route that our server will be listening to in order to get data from the browser. We have already created one route in the application: the route listening at the endpoint “/”, which is the homepage of our application.

HTTP Verbs in a REST API

The communication between the client (the browser) and the server is done through an HTTP verb. The most common HTTP verbs are GET, PUT, POST, and DELETE.

The following table explains what each HTTP verb does.

HTTP Verb   Operation
GET         Read
POST        Create
PUT         Update
DELETE      Delete

As you can see from these verbs, they form the basis of CRUD operations that I talked about previously.
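In Express, each of these verbs corresponds to a routing method with the same name. As a rough sketch (the "/names" path is only an illustration, not part of our application):

app.get("/names", (req, res) => { /* Read */ });
app.post("/names", (req, res) => { /* Create */ });
app.put("/names", (req, res) => { /* Update */ });
app.delete("/names", (req, res) => { /* Delete */ });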

Building a CRUD endpoint

If you remember, the form in our index.html file used a post method to call this endpoint. We will now create this endpoint.

In our previous endpoint we used a “GET” http verb to display the index.html file. We are going to do something very similar but instead of using “GET”, we are going to use “POST”. To get started this is what the framework of our endpoint will look like.

app.post("/addname", (req, res) => {
 
});
Express Middleware

To fill out the contents of our endpoint, we want to store the firstName and lastName entered by the user into the database. The values for firstName and lastName are in the body of the request that we send to the server. We want to capture that data, convert it to JSON and store it into the database.

Express.js version 4 no longer bundles body-parsing middleware. To parse the data in the body we will need to add middleware to our application to provide this functionality. We will be using the body-parser module. We need to install it, so in your terminal window enter the following command.

npm install body-parser --save

Once it is installed, we will need to require this module and configure it. The configuration will allow us to pass the data for firstName and lastName in the body to the server. It can also convert that data into JSON format. This will be handy because we can take this formatted data and save it directly into our database.

To add the body-parser middleware to our application and configure it, we can add the following lines directly after the line that sets our port.

var bodyParser = require('body-parser');
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
Saving data to database

Mongoose provides a save function that will take a JSON object and store it in the database. Our body-parser middleware will convert the user’s input into the JSON format for us.

To save the data into the database, we need to create a new instance of the model that we created earlier. We will pass the user’s input into this instance. Once we have it, we just need to call its save method.

Mongoose will return a promise on a save to the database. A promise is what is returned when the save to the database completes. This save will either finish successfully or it will fail. A promise provides two methods that will handle both of these scenarios.

If this save to the database was successful it will return to the .then segment of the promise. In this case we want to send text back the user to let them know the data was saved to the database.

If it fails it will return to the .catch segment of the promise. In this case, we want to send text back to the user telling them the data was not saved to the database. It is best practice to also change the statusCode that is returned from the default 200 to a 400. A 400 statusCode signifies that the operation failed.

Now putting all of this together here is what our final endpoint will look like.

app.post("/addname", (req, res) => {
  var myData = new User(req.body);
  myData.save()
    .then(item => {
      res.send("item saved to database");
    })
    .catch(err => {
      res.status(400).send("unable to save to database");
    });
});
Testing our code

Save your code. Go to your terminal and enter the command node app.js to start our server. Open up your browser and navigate to the URL “http://localhost:3000”. You will see our index.html file displayed to you.

Make sure you have mongo running.

Enter your first name and last name in the input fields and then click the “Add Name” button. You should get back text that says the name has been saved to the database like below.
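If you want to confirm that the document really landed in the database, you can query it from the mongo shell. Mongoose pluralizes and lowercases the model name, so the documents should be in a users collection; the command below assumes that convention and a local MongoDB instance:

mongo node-demo --eval "db.users.find().pretty()"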

Access to Code

The final version of the code is available in my GitHub repo. To access the code click here. Thank you for reading!

Dockerizing a Node.js web application

In this article, we will see how to dockerize a Node.js web application.

Originally published by ganeshmani009 at cloudnweb.dev

What is Docker?

Firstly, Docker is a containerization platform where developers can package an application and run it as a container.

In simple words, Docker runs each application in a separate environment that shares only resources such as the OS, memory, and so on.

Virtual Machine vs Docker

Here, we can find the difference between Docker and virtual machines.

To read more about Docker, see the Docker Docs.

Docker and Node.js setup

We are going to see how to dockerize a Node.js application. Before that, Docker has to be installed on the machine. See Docker Installation.

After installing Docker, we need to initialize the Node application.

npm init --yes
npm install express body-parser

The first command initializes the package.json file, which contains the details about the application and its dependencies. The second one installs express and body-parser.

Create a file called server.js and paste in the following code:

'use strict';

const express = require('express');

// Constants
const PORT = 8080;
const HOST = '0.0.0.0';

// App
const app = express();
app.get('/', (req, res) => {
  res.send('You have done it !!!!!\n');
});

app.listen(PORT, () => {
  console.log(`Running on http://${HOST}:${PORT}`);
});

This runs a basic Express application server. Now, we need to create the Docker image file. Create a file named Dockerfile and add the following commands.

FROM node:8

First, we pull the Node 8 image from Docker Hub to use as the base of our image.

WORKDIR /usr/src/app

Next, we set /usr/src/app as the working directory in the Docker image.

COPY package*.json ./
RUN npm install

This copies package.json from the local machine to the Docker image. Copying the node_modules dependencies from the local machine to the image would not be efficient, so we copy only package.json and install all of the dependencies inside the Docker image.

COPY . .
EXPOSE 8080

CMD [ "npm" , "start" ]

This copies all of the source code from the local directory into the Docker image and exposes port 8080 in the image, so that it can be mapped to a port on the local machine. Then we set the command that starts the application.

Your Dockerfile should now look like:

# this installs the node image from docker hub
FROM node:8

# this is the current working directory in the docker image
WORKDIR /usr/src/app
#copy package.json from local to docker image
COPY package*.json ./
#run npm install commands
RUN npm install
#copy all the files from local directory to docker image
COPY . .
#this port exposed to the docker to map.
EXPOSE 8080

CMD [ "npm" , "start" ]

Create a .dockerignore file with the following content:

node_modules
npm-debug.log

Now, we need to build our image on the command line:

$ docker build -t <your username>/node-web-app .

The -t flag tags the image with a name, so it is easier to identify it by name instead of by ID. Note: the dot at the end of the command is important (the build won’t work without it).
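You can confirm that the image was built and tagged by listing it:

docker images <your username>/node-web-app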

We can run the image using the following command:

docker run -p 49160:8080 -d <your username>/node-web-app

We can check it using:

 curl -i localhost:49160

The output should be:

HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/html; charset=utf-8
Content-Length: 23
ETag: W/"17-C2jfoqVpuUrcmNFogd/3pZ5xds8"
Date: Mon, 08 Apr 2019 17:29:12 GMT
Connection: keep-alive

You have done it !!!!!

To read more

https://github.com/nodejs/docker-node/blob/master/docs/BestPractices.md


Further reading

☞ The Complete Node.js Developer Course (3rd Edition)

☞ Angular & NodeJS - The MEAN Stack Guide

☞ NodeJS - The Complete Guide (incl. MVC, REST APIs, GraphQL)

☞ Best 50 Nodejs interview questions from Beginners to Advanced in 2019

☞ Node.js 12: The future of server-side JavaScript

☞ Docker for Absolute Beginners

☞ How to debug Node.js in a Docker container?

☞ Docker Containers for Beginners

☞ Deploy Docker Containers With AWS CodePipeline