Build a Node.js Docker Workflow for Beginners

The Docker platform allows developers to package and run applications as containers. A container is an isolated process that runs on a shared operating system, offering a lighter weight alternative to virtual machines. Though containers are not new, they offer benefits — including process isolation and environment standardization — that are growing in importance as more developers use distributed application architectures.

I’ve been using Docker for approximately a year now, and after some time getting used to it, I am a huge fan of how it can improve the whole lifecycle of an application, from development to production.

Docker is a great tool that helps developers build, deploy, and run applications more efficiently and in a standardized way. You can develop locally in the same environment your app uses in production, which speeds up debugging and even helps prevent bugs before they reach production.

In this article I cover three parts of building an app that Docker can bring to a new level:

1. Optimize the production artifact with Docker

One of Docker’s main features is to package your app so that it can be deployed in any Docker-compatible environment. Your Docker image should include everything your app needs to run.

But when you and your IT team release your app to production with Docker, there are certain optimizations you can make to improve your app’s performance, increase security, and reduce the footprint of your package.

  • Use an Alpine-based image

Alpine Linux is a lightweight Linux distribution based on musl libc and BusyBox. The main benefit of using Alpine is the size of the Docker image (node:alpine weighs 24 MB, compared to 267 MB for node:latest).

The small footprint of the Alpine distribution also means less attack surface for hackers.

Beware, though, that you might encounter some issues with software compiled specifically against glibc, as stated in the node-alpine repository.

But this should not impact your app if you’re running a single stack inside your container (like Node), which is highly recommended for cloud-native applications.

  • Include only what the application needs to run

This means installing only production dependencies, not development dependencies:

RUN npm install --only=production

Also use a .dockerignore file to exclude files not needed in production, like the node_modules directory (it will be rebuilt inside the Dockerfile), test files, documentation, the Docker files themselves, and so on.
If you are using a transpiler like Babel to use ES6 or newer syntax in your Node app, run the transpile step in your npm run build script inside your Dockerfile, and remove your source once the build has succeeded. These steps can be made more elegant using a Docker multi-stage build, which you can see in the code below (see the Docker multi-stage build docs).
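
As mentioned above, here is a minimal .dockerignore sketch; the entries are illustrative and depend on your project layout:

# Dependencies are reinstalled inside the image, and build output is produced there too
node_modules
build
npm-debug.log
# Not needed at runtime
test
docs
.git
.gitignore
Dockerfile*
docker-compose*
.dockerignore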

  • Run npm install before copying your source to the container image

This lets Docker cache the image layer containing all your dependencies below the layer containing your source. That means that if your source code is updated more frequently than your dependency configuration (which is likely), your Docker builds will be much faster on average.
The official Node.js documentation has a clean tutorial on how to build a Docker image for a Node application, where this technique is covered: https://nodejs.org/en/docs/guides/nodejs-docker-webapp/
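
A minimal sketch of this layer ordering, assuming your sources live in ./src:

# Copy the dependency manifests first so the npm install layer can be cached
COPY package*.json ./
RUN npm install --only=production
# Copying the source afterwards means editing it does not invalidate the cached layer above
COPY ./src ./src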

  • Use a specific version of the Node Docker image

Even if you might not be aware of it, your application probably has some tight coupling to a specific version of your language runtime (Node or any other application stack). To prevent your application from breaking when the runtime gets updated during a new Docker build, you should pin the version of Node you want running on your production platform.
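
For example, pin an exact tag rather than a floating one (the version below is only an illustration; use whatever your app is tested against):

# Instead of node:latest or node:alpine
FROM node:9.11.1-alpine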

Here is a gist containing basic files for a dockerized Node application that uses ES6 and Babel as a transpiler:

.babelrc

{
    "presets": [
        "env"
    ]
}

.env

MYSQL_ROOT_PASSWORD=123456
MYSQL_DATABASE=database

docker-compose.yml


version: '3'

services:
  reverse-proxy:
    image: traefik # The official Traefik docker image
    command: --api --docker.exposedbydefault=false # Enables the web UI and tells Træfik to listen to docker, without exposing by default
    ports:
      - "80:80"     # The HTTP port
      - "8080:8080" # The Web UI (enabled by --api)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events

  db:
    image: mysql:5
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD
      - MYSQL_DATABASE

  redis:
    image: redis:alpine

  app:
    build:
      dockerfile: Dockerfile-dev
      context: .
    command: npm run dev
    volumes:
      - "./src:/home/node/app/src"
    environment:
      - DB_HOST=db
      - DB_NAME=${MYSQL_DATABASE}
      - DB_USER=root
      - DB_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - REDIS_HOST=redis
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:app.test.localhost.tv"
    depends_on:
      - db
      - redis


Dockerfile


###############################################################################
# Step 1 : Builder image
#
FROM node:9-alpine AS builder

# Define working directory and copy source
WORKDIR /home/node/app
COPY . .
# Install dependencies and build whatever you have to build 
# (babel, grunt, webpack, etc.)
RUN npm install && npm run build

###############################################################################
# Step 2 : Run image
#
FROM node:9-alpine
ENV NODE_ENV=production
WORKDIR /home/node/app

# Install deps for production only
COPY ./package* ./
RUN npm install && \
    npm cache clean --force
# Copy the built source from the builder stage above
COPY --from=builder /home/node/app/build ./build

# Expose ports (for orchestrators and dynamic reverse proxies)
EXPOSE 3000

# Start the app
CMD npm start

Dockerfile-dev


FROM node:9-alpine

WORKDIR /home/node/app

# Install deps
COPY ./package* ./
RUN npm install && \
    npm cache clean --force

COPY . .

# Expose ports (for orchestrators and dynamic reverse proxies)
EXPOSE 3000

# Start the app
CMD npm start

src/index.js

import express from 'express'

const app = express()

app.get('/', function (req, res) {
  res.send('Hello World!')
})

app.listen(3000, function () {
  console.log('Example app listening on port 3000!')
})

package.json


{
  "name": "Docker-node",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "clean": "rm -rf build && mkdir build",
    "build-babel": "babel -d ./build ./src -s",
    "build": "npm run clean && npm run build-babel",
    "start": "node ./build/index.js",
    "dev": "babel-node src/index.js"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "babel-cli": "^6.26.0",
    "babel-preset-env": "^1.7.0"
  },
  "dependencies": {
    "express": "^4.16.3"
  }
}

2. Normalize environments with Docker Compose

Docker Compose is a tool from Docker that lets you define your whole application stack (app services, databases, cache layer, …) as containers inside a single file (docker-compose.yml), and manage the state of these containers as well as the underlying resources (volumes, networks) using a CLI.

What is cool about docker-compose, in my opinion, is that it makes it easy to run a full production-like environment on your development machine.

Let’s imagine you have an application that consists of the following components:

  • An API in Node.JS
  • Talking to a MySQL Database
  • Using Redis as a cache and session layer
  • Traefik as a reverse proxy for your API

If you don’t know Traefik, I would recommend you check it out: it is a dynamic reverse proxy that can inspect your running web containers and reverse proxy them on the fly.

Docker Compose allows you to set up this stack for all your environments (dev, staging, even production if the ops team feels like it) quite easily and in a somewhat factorized way.
Here are the steps I came up with to facilitate a production-like setup and configuration sharing between environments:

  • Use a configuration library for your Node.js app

This allows you to store your configuration in a centralized place, and make it overridable in multiple ways, such as dotenv files or environment variables.

By doing so, the only thing that should change in your Node.js app when running it in a different environment is a dotenv file or a list of environment variables.

In our example, the configuration should contain at least the MySQL and Redis connection information.
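
A minimal sketch of such a centralized configuration; the file name and the defaults are illustrative, not part of the gist below:

// src/config.js: every value can be overridden through an environment variable
export default {
  db: {
    host: process.env.DB_HOST || 'localhost',
    name: process.env.DB_NAME || 'database',
    user: process.env.DB_USER || 'root',
    password: process.env.DB_PASSWORD || ''
  },
  redis: {
    host: process.env.REDIS_HOST || 'localhost'
  }
}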

  • Define your whole stack configuration in a single place

This can be a sourced environment script, or a .env file (which makes things easier, as it can be read by docker-compose).
In our example, this file should contain the same variables as the configuration file in the Node.js app.

  • Create your docker-compose.yml file using variables

Docker Compose can substitute environment variables in the configuration file, which makes it convenient to keep a single docker-compose file across all environments.

The only difference between dev and prod is that in development I am using a different Dockerfile for the Node.js app so that I can live-reload changes to my code (mounted inside a Docker volume).

Here are the docker-compose.yml and docker-compose.dev.yml files, the .env file, and the Dockerfile for development:

.env file:

MYSQL_ROOT_PASSWORD=123456
MYSQL_DATABASE=database
APP_HOST=app.test.localhost.tv

Dockerfile-dev:


FROM node:9-alpine
WORKDIR /home/node/app
# Install deps
COPY ./package* ./
RUN npm install && \
    npm cache clean --force
COPY . .
# Expose ports (for orchestrators and dynamic reverse proxies)
EXPOSE 3000
# Start the app
CMD npm start

docker-compose.yml file:


version: '3'
services:
  reverse-proxy:
    image: traefik # The official Traefik docker image
    command: --api --docker.exposedbydefault=false # Enables the web UI and tells Træfik to listen to docker, without exposing by default
    ports:
      - "80:80"     # The HTTP port
      - "8080:8080" # The Web UI (enabled by --api)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
  db:
    image: mysql:5
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD
      - MYSQL_DATABASE
  redis:
    image: redis:alpine
  app:
    build: .
    environment:
      - DB_HOST=db
      - DB_NAME=${MYSQL_DATABASE}
      - DB_USER=root
      - DB_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - REDIS_HOST=redis
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:${APP_HOST}"
    depends_on:
      - db
      - redis

The docker-compose.dev.yml file:

version: '3'
services:
  app:
    build:
      dockerfile: Dockerfile-dev
      context: .
    command: npm run dev
    volumes:
      - "./src:/home/node/app/src"

You can see in the “app” section of the docker-compose.yml file that I am using localhost.tv, a nice remote DNS service that binds every .localhost.tv name to your localhost. I use it to avoid relative paths for application endpoints (like localhost/api), which always come with undesirable side effects when moving to a subdomain in production (embedded links, inner routing, things like that).

The separate Dockerfile for the development image is a bit annoying, as it makes the development configuration diverge from the production one, and so introduces some work (and thus some risk) when deploying the app to another environment. So far the only solution I’ve come up with is to use a templating system (a simple script, or more evolved provisioning tools such as Ansible) to make the Dockerfile dynamic.

With all these files set up, you can use the following commands to run your stack in the development environment.

First, build your app container from the Dockerfile-dev file:

docker-compose -f docker-compose.yml -f docker-compose.dev.yml build


Then, run your stack with the following:

docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d


You now have a dockerized, reverse-proxied, production-like development environment running with live-reloading in Node.js.
You can find the full example app here:

3. Smooth out delivery and integration with CI/CD

Now that you have a portable and customizable app environment, you can use it for all the steps of continuous integration and deployment.

Here is what I try to do for each project in terms of tests when using Docker with Node.js:

  • Run unit tests when building the Docker image. You can also build a custom image for this, such as:

# Use the builder image as base image
FROM builder
# Copy the test files
COPY tests tests
# Override the NODE_ENV environment variable to 'dev', in order to get required test packages
ENV NODE_ENV dev
# Refresh the packages (dev dependencies included now that NODE_ENV=dev)
# and install our test framework (mocha) globally
RUN npm update && \
    npm install -g mocha
# Override the command, to run the test instead of the application
CMD ["mocha", "tests/test.js", "--reporter", "spec"]

You can check the exit code of the docker run command to determine whether the CI pipeline can go on or not.
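
Here is a rough sketch of that check in a CI shell step; the Dockerfile-test file and the image name are assumptions, not files from the gist:

# Build and run the test image, then check its exit code
docker build -t myapp-tests -f Dockerfile-test .
docker run --rm myapp-tests
if [ $? -ne 0 ]; then
  echo "Unit tests failed, stopping the pipeline"
  exit 1
fi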

  • Run integration tests using docker-compose inside the CI tool: for example, run docker-compose up so the full stack is operational, then call a dedicated endpoint to check that the Node.js app can correctly reach its required components (the database and Redis in our example), as sketched after this list.

  • Run real API tests using docker-compose inside the CI tool, with tools such as Sequelize fixtures to populate the database before running the tests.
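
A rough sketch of the integration check mentioned above; the /health endpoint is an assumption, so expose whatever status route your app provides:

docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
# Give the stack a few seconds to boot, then hit the (assumed) health-check endpoint through Traefik
sleep 10
curl --fail http://app.test.localhost.tv/health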

You can run all these steps inside your CI provider (Jenkins, GitLab CI, Travis) if it can run a dockerized environment. For example, in GitLab CI you can use a Docker-in-Docker image that includes docker-compose.
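
As a sketch, a GitLab CI job using Docker-in-Docker could look like the following; the way docker-compose is installed here is just one option among others, and the job name is arbitrary:

# .gitlab-ci.yml (illustrative)
integration-tests:
  image: docker:latest
  services:
    - docker:dind            # provides a Docker daemon inside the job
  variables:
    DOCKER_HOST: tcp://docker:2375
  before_script:
    - apk add --no-cache py-pip
    - pip install docker-compose
  script:
    - docker-compose up -d   # start the full stack, then run your integration/API tests against it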

This is the end of the article! I hope these insights will be helpful to anyone who is considering Docker for Node.js application development or deployment.

Thanks for reading!

#node #nodejs #docker #javascript
