Docker Compose offers a great local development setup for designing and developing container solutions. Whether you are a tester, developer, or a DevOps operator, Docker Compose has got you covered.
If you want to create an excellent local development and test environment for Node.js using Docker Compose, I have the following 10 tips.

1. Use the Correct Version in Your Docker Compose File
The docker-compose.yml file is a YAML file that defines the services, networks, and volumes of a Docker application. The first line of the file contains the version keyword and tells Docker Compose which version of its file format you are using.
There are two major versions that you can use, version 2 and version 3; both have a different use case.
The Docker Compose development team created version 2 for local development and version 3 to be compatible with container orchestrators such as Swarm and Kubernetes.
As we are talking about local Node.js development, I always use the latest version 2 release, at the time of writing, v2.4.
```yaml
version: "2.4"
services:
  web:
```

2. Use Bind Mounts Correctly
My first tip for your bind mounts is to always mount your Node.js source code from your host using relative paths.
Using relative paths allows other developers to use this Compose file even when they have a different folder structure on their host.
```yaml
volumes:
  - ./src:/home/nodeapp/src
```
Use named volumes to mount your databases
Almost all Node.js applications are deployed to production using a Linux container. If you use a Linux container and develop your application on Windows or a Mac, you shouldn’t bind-mount your database files. In that situation, the database server has to cross operating system boundaries when reading or writing the database. Instead, you should use a named volume and let Docker handle the database files.
```yaml
version: '2.4'
services:
  workflowdb:
    image: 'mongo:4.0.14'
    environment:
      - MONGO_INITDB_ROOT_USERNAME=mveroot
      - MONGO_INITDB_ROOT_PASSWORD=2020minivideoencoder!
      - MONGO_INITDB_DATABASE=workflow-db
    volumes:
      - workflowdatabase:/data.db
    ports:
      - '27017:27017'
volumes:
  workflowdatabase:
```
Mounting a MongoDB database using a named volume
The top-level volumes: keyword defines the named volumes of your docker-compose file. Here, we define the named volume workflowdatabase and use it in the workflowdb service.
Use delegated configuration for improved performance
I always add the delegated configuration to my volume mounts to improve performance. By using a delegated configuration on your bind mount, you tell Docker that it may delay updates from the container to appear in the host.
Usually, with local development, there is no need for writes performed in a container to be reflected immediately on the host. The delegated flag is an option that is specific to Docker Desktop for Mac.
```yaml
volumes:
  - ./src:/home/app/src:delegated
```
Depending on the level of consistency you need between the container and your host, there are two other options to consider: consistent and cached.
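For example, a cached mount tells Docker that the host’s view of the directory is authoritative and the container’s view may lag behind, which speeds up reads from the container:

```yaml
volumes:
  - ./src:/home/app/src:cached
```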
3. Don’t Bind Mount node_modules

You can’t bind mount the node_modules directory from your host on macOS or Windows into your container because of the difference in operating system. Some npm modules perform dynamic compilation during npm install, and these dynamically compiled modules from macOS won’t run on Linux.
There are two different solutions to solve this:

1. Fill the node_modules directory on the host via the Docker container.

You can fill the node_modules directory on the host by running npm install via the docker-compose run command. This installs the correct node_modules using the operating system of the container.
For example, take a standard Node.js app with the following Dockerfile:

```dockerfile
FROM node:12-alpine
WORKDIR /app
COPY . .
CMD [ "node", "index.js" ]
```
Standard Dockerfile for a Node.js app
```yaml
version: '2.4'
services:
  workflowengine:
    build: .
    ports:
      - 8080:8080
    volumes:
      - .:/app
```
Standard docker-compose.yml file
By executing the command docker-compose run workflowengine npm install, I install the node_modules on the host via the running Docker container. This means that the node_modules on the host are now built for the architecture and operating system of the container and cannot be used from your host anymore.
2. Hide the host’s node_modules using an empty bind mount.
The second solution is more flexible than the first, as you can still run and develop your application both from the host and from the Docker container. This is known as the node_modules volume trick.
We have to change the Dockerfile so that the node_modules are installed one directory higher than the Node.js app. The package.json is copied and installed in the /node directory, while the application is installed in the /node/app directory. Node.js applications look for the node_modules directory upwards from the current application folder.
```dockerfile
FROM node:12-alpine
WORKDIR /node
COPY package*.json ./
RUN npm install && npm cache clean --force --loglevel=error
WORKDIR /node/app
COPY ./index.js index.js
CMD [ "node", "index.js" ]
```
The node_modules from the host are in the same folder as the application source code.
To make sure that the node_modules from the host don't bind mount into the Docker image, we mount an empty volume using this docker-compose.yml:
```yaml
version: '2.4'
services:
  workflowengine:
    build: .
    ports:
      - 8080:8080
    volumes:
      - .:/node/app
      - /node/app/node_modules
```
The second statement in the volumes section actually hides the node_modules directory from the host.

4. Using Tools With Docker Compose
If you want to run your tools when developing with Docker Compose, you have two options: use docker-compose run or use docker-compose exec. Both behave differently.

docker-compose run [service] [command] starts a new container from the image of the service and runs the command.
docker-compose exec [service] [command] runs the command in the currently running container of that service.

5. Using nodemon for File Watching
I always use [nodemon](https://www.npmjs.com/package/nodemon) for watching file changes and restarting Node.js. When you are developing using Docker Compose, you can install nodemon via the following Compose run command:
```shell
docker-compose run workflowengine npm install nodemon --save-dev
```
You start nodemon by specifying it as the command below the workflowengine service in the docker-compose.yml file. You also have to set NODE_ENV to development so that the dev dependencies are installed.
```yaml
version: '2.4'
services:
  workflowengine:
    build: .
    command: /app/node_modules/.bin/nodemon ./index.js
    ports:
      - 8080:8080
    volumes:
      - .:/app
    environment:
      - NODE_ENV=development
```

6. Specify the Startup Order of Services
Docker Compose does not use a specific order when starting its services. If your services need a specific startup order, you can specify this using the
depends_on keyword in your docker-compose file.
With depends_on, you can specify that your service A depends on service B. Docker Compose then starts service B before service A and makes sure that service B can be reached through DNS before starting service A.
If you are using version 2 of the Docker Compose YAML, depends_on can be combined with the healthcheck keyword to make sure that the service you depend on is started and healthy. If you want your service to start only after its dependency has started and is healthy, you have to combine depends_on with health checks.
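A health check can also be baked into the image itself with the Dockerfile HEALTHCHECK instruction. A sketch, assuming the service exposes a /health endpoint on port 8080 (the endpoint is an assumption, not from the original article):

```dockerfile
# Mark the container unhealthy when the HTTP endpoint stops responding.
HEALTHCHECK --interval=10s --timeout=3s --retries=3 \
  CMD wget -qO- http://localhost:8080/health || exit 1
```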
```yaml
version: '2.4'
services:
  workflowengine:
    image: 'workflowengine:0.6.0'
    depends_on:
      workflowdb:
        condition: service_healthy
    environment:
      - STORAGE_HOST=mongodb://mve-workflowengine:[email protected]:27017/workflow-db?authMechanism=DEFAULT&authSource=workflow-db
    ports:
      - '8181:8181'
    networks:
      - mve-network
  workflowdb:
    image: 'mongo:4.0.14'
    healthcheck:
      test: echo 'db.runCommand("ping").ok' | mongo localhost:27017/test --quiet
    environment:
      - MONGO_INITDB_ROOT_USERNAME=mveroot
      - MONGO_INITDB_ROOT_PASSWORD=2020minivideoencoder!
      - MONGO_INITDB_DATABASE=workflow-db
    volumes:
      - ./WorkflowDatabase/init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
      - ./WorkflowDatabase/data/workflow-db.db:/data.db
    ports:
      - '27017:27017'
    networks:
      - mve-network
networks:
  mve-network:
```
Combining depends_on with a health check
You have to add condition: service_healthy to depends_on to indicate that the service you depend on should be healthy before starting this service.
The health check specified for the MongoDB database makes sure that the database server has started and is accepting connections before reporting healthy.

8. Shrinking Compose Files Using Extension Fields
You can increase the flexibility of your Compose files using environment variables and extension fields. Environment variables can be set in your Compose file using the ${VARIABLE} syntax, for example, to change the connection string of the database or the port that your API is listening to. See my article Node.js with Docker in production on how to configure and use environment variables in your Node.js application.
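As a sketch (the API_PORT variable name is just an example, not from the original article), variable substitution in a Compose file looks like this:

```yaml
services:
  workflowengine:
    ports:
      # API_PORT is read from the shell environment or an .env file
      - '${API_PORT}:8080'
```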
Extension fields let you define a block of text in a Compose file that can be reused in that same file. This way, you decrease the size of your Compose file and make it more DRY.
```yaml
version: '2.4'

# template:
x-base: &base-service-template
  build: .
  networks:
    - mve-network

services:
  workflowengine:
    <<: *base-service-template
    ports:
      - 8080:8080
    volumes:
      - .:/node/app
      - /node/app/node_modules

networks:
  mve-network:
```
I define a template that includes the build and networks keys, which are the same for each service. By using the syntax <<: *base-service-template, I inject the defined template into the service definition.
9. Using a Reverse Proxy

Once you have multiple services defined in your Compose file that expose an HTTP endpoint, you should start using a reverse proxy. Instead of having to manage all the ports and port mappings for your HTTP endpoints, you can start performing host header routing.
If you plan to use NGINX, I suggest the jwilder/nginx-proxy Docker container from Jason Wilder. Nginx-proxy uses docker-gen to generate NGINX configuration templates based on the services in your Compose file.
Every time you add or remove a service from your Compose file, Nginx-proxy regenerates the templates and automatically restarts NGINX. Automatically regenerating and restarting means that you always have an up-to-date reverse proxy configuration that includes all your services.
You can specify the DNS name of your service by adding the VIRTUAL_HOST environment variable to your service definition.
```yaml
version: '2.4'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  workflowengine:
    image: 'workflowengine:0.6.0'
    depends_on:
      workflowdb:
        condition: service_healthy
    environment:
      - VIRTUAL_HOST=workflowengine.localhost
      - STORAGE_HOST=mongodb://mve-workflowengine:[email protected]:27017/workflow-db?authMechanism=DEFAULT&authSource=workflow-db
    ports:
      - '8181:8181'
    networks:
      - mve-network
  workflowencoder:
    image: 'videoencoder:0.6.0'
    depends_on:
      workflowdb:
        condition: service_healthy
    environment:
      - VIRTUAL_HOST=videoencoder.localhost
    ports:
      - '8181:8181'
    networks:
      - mve-network
```
Using jwilder/nginx-proxy as a reverse proxy for your services
The Nginx-proxy service mounts the Docker socket; this enables it to respond to containers being added or removed. In the VIRTUAL_HOST environment variable, I use a .localhost domain because Chrome automatically points .localhost domains to 127.0.0.1.
Traefik is a specialized open-source reverse proxy container image for HTTP and TCP-based applications.
Using Traefik as a reverse proxy inside our Docker Compose is more or less the same as Nginx-proxy. Traefik offers an HTTP-based dashboard to show you the currently active routes handled by Traefik.
```yaml
version: '2.4'
services:
  traefik:
    image: traefik:v1.7.20-alpine
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command:
      - --docker
      - --docker.domain=traefik
      - --docker.watch
      - --api
      - --defaultentrypoints=http,https
    labels:
      - traefik.port=8080
      - traefik.frontend.rule=Host:traefik.localhost
  workflowengine:
    image: 'workflowengine:0.6.0'
    depends_on:
      workflowdb:
        condition: service_healthy
    environment:
      - STORAGE_HOST=mongodb://mve-workflowengine:[email protected]:27017/workflow-db?authMechanism=DEFAULT&authSource=workflow-db
    labels:
      - traefik.port=8080
      - traefik.frontend.rule=Host:workflowengine.localhost
    ports:
      - '8181:8181'
    networks:
      - mve-network
  workflowencoder:
    image: 'videoencoder:0.6.0'
    depends_on:
      workflowdb:
        condition: service_healthy
    labels:
      - traefik.port=8081
      - traefik.frontend.rule=Host:videoencoder.localhost
    ports:
      - '8181:8181'
    networks:
      - mve-network
```
Traefik uses labels instead of environment variables to define your DNS names. See the example above.
Traefik offers a lot more functionality than shown above. If you are interested, I direct you to their website, which offers complete documentation on features such as load balancing and automatic requesting and renewing of Let’s Encrypt certificates.
Thank you for reading. I hope these tips help with Node.js development using Docker Compose. If you have any questions, feel free to leave a response!
Node is fast, scalable, and easy to get started with. Its default package manager is npm, which means it also sports the largest ecosystem of open-source libraries. Node is used by companies such as NASA, Uber, Netflix, and Walmart.
But Node doesn't come alone. It comes with a plethora of frameworks. A Node framework can be pictured as the external scaffolding that you can build your app in. These frameworks are built on top of Node and extend the technology's functionality, mostly by making apps easier to prototype and develop, while also making them faster and more scalable.
Below are 7 of the most popular Node frameworks at this point in time (ranked from high to low by GitHub stars).

Express
With over 43,000 GitHub stars, Express is the most popular Node framework. It brands itself as a fast, unopinionated, and minimalist framework. Express acts as middleware: it helps set up and configure routes to send and receive requests between the front-end and the database of an app.
Express provides lightweight, powerful tools for HTTP servers. It's a great framework for single-page apps, websites, hybrids, or public HTTP APIs. It supports over fourteen different template engines, so developers aren't forced into any specific one.

Meteor
The project has over 41,000 GitHub stars and is built to power large projects. Meteor is used by companies such as Mazda, Honeywell, Qualcomm, and IKEA. It has excellent documentation and a strong community behind it.

Koa
Koa is built by the same team that built Express. It uses ES6 methods that allow developers to work without callbacks. Developers also have more control over error-handling. Koa has no middleware within its core, which means that developers have more control over configuration, but which means that traditional Node middleware (e.g. req, res, next) won't work with Koa.
Koa already has over 26,000 GitHub stars. The Express developers built Koa because they wanted a lighter framework that was more expressive and more robust than Express. You can find out more about the differences between Koa and Express here.

Sails
Sails is a real-time, MVC framework for Node that's built on Express. It supports auto-generated REST APIs and comes with an easy WebSocket integration.
The project has over 20,000 stars on GitHub and is compatible with almost all databases (MySQL, MongoDB, PostgreSQL, Redis). It's also compatible with most front-end technologies (Angular, iOS, Android, React, and even Windows Phone).

Nest
Nest is packaged in such a way that it serves as a complete development kit for writing enterprise-level apps. The framework uses Express, but is compatible with a wide range of other libraries.

LoopBack
LoopBack is a framework that allows developers to quickly create REST APIs. It has an easy-to-use CLI wizard and allows developers to create models either based on their schema or dynamically. It also has a built-in API explorer.
LoopBack has over 12,000 GitHub stars and is used by companies such as GoDaddy, Symantec, and Bank of America. It's compatible with many REST services and a wide variety of databases (MongoDB, Oracle, MySQL, PostgreSQL).

Hapi
Similar to Express, hapi serves data by intermediating between the server side and the client side. As such, it can serve as a substitute for Express. Hapi allows developers to focus on writing reusable app logic in a modular and prescriptive fashion.
The project has over 11,000 GitHub stars. It has built-in support for input validation, caching, authentication, and more. Hapi was originally developed to handle all of Walmart's mobile traffic during Black Friday.
Learn how you can use a multi-stage Docker build for your Node.js application. Docker multi-stage builds enable us to create more complex build pipelines without having to resort to magic tricks.
Everyone knows about Docker. It’s the ubiquitous tool for packaging and distribution of applications that seemed to come from nowhere and take over our industry! If you are reading this, it means you already understand the basics of Docker and are now looking to create a more complex build pipeline.
In the past, optimizing our Docker images has been a challenging experience. All sorts of magic tricks were employed to reduce the size of our applications before they went to production. Things are different now because support for multi-stage builds has been added to Docker.
In this post, we explore how you can use a multi-stage build for your Node.js application. For an example, we’ll use a TypeScript build process, but the same kind of thing will work for any build pipeline. So even if you’d prefer to use Babel, or maybe you need to build a React client, then a Docker multi-stage build can work for you as well.

A basic, single-stage Dockerfile for Node.js
Let’s start by looking at a basic Dockerfile for Node.js. We can visualize the normal Docker build process as shown in Figure 1 below.
Figure 1: Normal Docker build process.
We use the docker build command to turn our Dockerfile into a Docker image. We then use the docker run command to instantiate our image to a Docker container.
The Dockerfile in Listing 1 below is just a standard, run-of-the-mill Dockerfile for Node.js. You have probably seen this kind of thing before. All we are doing here is copying the
package.json, installing production dependencies, copying the source code, and finally starting the application.
```dockerfile
FROM node:10.15.2
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY ./src ./src
EXPOSE 3000
CMD npm start
```
Listing 1 is a quite ordinary-looking Dockerfile. In fact, all Dockerfiles looked pretty much like this before multi-stage builds were introduced. Now that Docker supports multi-stage builds, we can visualize our simple Dockerfile as the single-stage build process illustrated in Figure 2.
Figure 2: A single-stage build pipeline.

The need for multiple stages
We can already run whatever commands we want in the Dockerfile when building our image, so why do we even need a multi-stage build?
To find out why, let’s upgrade our simple Dockerfile to include a TypeScript build process. Listing 2 shows the upgraded Dockerfile: it copies tsconfig.json, installs all dependencies (including dev dependencies), and runs the build script.
```dockerfile
FROM node:10.15.2
WORKDIR /usr/src/app
COPY package*.json ./
COPY tsconfig.json ./
RUN npm install
COPY ./src ./src
RUN npm run build
EXPOSE 80
CMD npm start
```
We can’t easily see the problem this causes just by reading the Dockerfile. To see it for yourself, instantiate a container from this image, then shell into it and inspect its file system.
I did this and used the Linux tree command to list all the directories and files in the container. You can see the result in Figure 3.
Notice that we have unwittingly included in our production image all the debris of development and the build process. This includes our original TypeScript source code (which we don’t use in production), the TypeScript compiler itself (which, again, we don’t use in production), plus any other dev dependencies we might have installed into our Node.js project.
Figure 3: The debris from development and the build process is bloating our production Docker image.
Bear in mind this is only a trivial project, so we aren’t actually seeing too much cruft left in our production image. But you can imagine how bad this would be for a real application with many source files, many dev dependencies, and a more complex build process that generates temporary files!
We don’t want this extra bloat in production. The extra size makes our containers bigger. When our containers are bigger than needed, it means we aren’t making efficient use of our resources. The increased surface area of the container can also be a problem for security, where we generally prefer to minimize the attackable surface area of our application.
Wouldn’t it be nice if we could throw away the files we don’t want and just keep the ones we do want? This is exactly what a Docker multi-stage build can do for us.

Crafting a Dockerfile with a multi-stage build
We are going to split our Dockerfile into two stages. Figure 4 shows what our build pipeline looks like after the split.
Figure 4: A multi-stage Docker build pipeline to build TypeScript.
Our new multi-stage build pipeline has two stages: Build stage 1 is what builds our TypeScript code; Build stage 2 is what creates our production Docker image. The final Docker image produced at the end of this pipeline contains only what it needs and omits the cruft we don’t want.
To create our two-stage build pipeline, we are basically just going to create two Dockerfiles in one. Listing 3 shows our Dockerfile with multiple stages added. The first FROM command initiates the first stage, and the second FROM command initiates the second stage.
Compare this to a regular single-stage Dockerfile, and you can see that it actually looks like two Dockerfiles squished together in one.
To create this multi-stage Dockerfile, I simply took Listing 2 and divided it up into separate Dockerfiles. The first stage contains only what is needed to build the TypeScript code. The second stage contains only what is needed to produce the final production Docker image. I then merged the two Dockerfiles into a single file.
The most important thing to note is the use of the --from argument to the COPY command, which copies files from an earlier stage into the current one.
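The listing itself did not survive the extraction of this article; based on Listings 1 and 2, the multi-stage Dockerfile would look something like this (the stage name builder and the ./build output directory are assumptions):

```dockerfile
# Stage 1: build the TypeScript code (includes dev dependencies).
FROM node:10.15.2 AS builder
WORKDIR /usr/src/app
COPY package*.json ./
COPY tsconfig.json ./
RUN npm install
COPY ./src ./src
RUN npm run build

# Stage 2: production image, with only what we need at runtime.
FROM node:10.15.2
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
# Copy the compiled JavaScript from stage 1, leaving the TypeScript
# source and compiler behind.
COPY --from=builder /usr/src/app/build ./build
EXPOSE 80
CMD npm start
```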
We can easily check to make sure we got the desired result. After creating the new image and instantiating a container, I shelled in to check the contents of the file system. You can see in Figure 5 that we have successfully removed the debris from our production image.
Figure 5: We have removed the debris of development from our Docker image.
We now have fewer files in our image, it’s smaller, and it has less surface area. Yay! Mission accomplished.
But what, specifically, does this mean?

The effect of the multi-stage build
What exactly is the effect of the new build pipeline on our production image?
I measured the results before and after. Our single-stage image produced by Listing 2 weighs in at 955MB. After converting to the multi-stage build in Listing 3, the image now comes to 902MB. That’s a reasonable reduction — we removed 53MB from our image!
While 53MB seems like a lot, we have actually only shaved off a bit more than 5 percent of the size. I know what you’re going to say now: But Ash, our image is still monstrously huge! There’s still way too much bloat in that image.
Well, to make our image even smaller, we now need to use the alpine, or slimmed-down, Node.js base image. We can do this by changing the base image of our second build stage from node:10.15.2 to node:10.15.2-alpine.
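The change is confined to the FROM line that opens the second stage:

```dockerfile
# Second (production) stage now uses the slimmed-down Alpine variant.
FROM node:10.15.2-alpine
```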
This reduces our production image down to 73MB — that’s a huge win! Now the savings we get from discarding our debris is more like a whopping 60 percent. Alright, we are really getting somewhere now!
This highlights another benefit of multi-stage builds: we can use separate Docker base images for each of our build stages. This means you can customize each build stage by using a different base image.
Say you have one stage that relies on some tools that are in a different image, or you have created a special Docker image that is custom for your build process. This gives us a lot of flexibility when constructing our build pipelines.

How does it work?
You probably already guessed this: each stage or build process produces its own separate Docker image. You can see how this works in Figure 6.
The Docker image produced by a stage can be used by the following stages. Once the final image is produced, all the intermediate images are discarded; we take what we want for the final image, and the rest gets thrown away.
Figure 6: Each stage of a multi-stage Docker build produces an image.

Adding more stages
There’s no need to stop at two stages, although that’s often all that’s needed; we can add as many stages as we need. A specific example is illustrated in Figure 7.
Here we are building TypeScript code in stage 1 and our React client in stage 2. In addition, there’s a third stage that produces the final image from the results of the first two stages.
Figure 7: Using a Docker multi-stage build, we can create more complex build pipelines.
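A sketch of what such a three-stage Dockerfile could look like (stage names and paths are assumptions, not from the original article):

```dockerfile
# Stage 1: compile the TypeScript server code.
FROM node:10.15.2 AS build-server
WORKDIR /usr/src/app
COPY package*.json tsconfig.json ./
RUN npm install
COPY ./src ./src
RUN npm run build

# Stage 2: build the React client.
FROM node:10.15.2 AS build-client
WORKDIR /usr/src/client
COPY client/package*.json ./
RUN npm install
COPY ./client ./
RUN npm run build

# Stage 3: assemble the production image from the first two stages.
FROM node:10.15.2-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY --from=build-server /usr/src/app/build ./build
COPY --from=build-client /usr/src/client/build ./client
EXPOSE 80
CMD npm start
```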
Now it’s time to leave you with a few advanced tips to explore on your own.
Docker multi-stage builds enable us to create more complex build pipelines without having to resort to magic tricks. They help us slim down our production Docker images and remove the bloat. They also allow us to structure and modularize our build process, which makes it easier to test parts of our build process in isolation.
So please have some fun with Docker multi-stage builds, and don’t forget to have a look at the example code on GitHub.
Here’s the Docker documentation on multi-stage builds, too.