Use Docker to Create a Node Development Environment

Leverage Docker images and containers to create an isolated Node development environment that runs a server

In this tutorial, instead of creating and running a Node app locally, you’ll take advantage of the Debian Linux operating system that official Docker Node images are based on. You’ll create a portable Node development environment that solves the “But it runs on my machine” problem that constantly trolls developers, since containers are created predictably from the execution of Docker images on any platform.

Throughout this tutorial, you’ll be working in two realms:

  • Local Operating System: Using a CLI application, such as Terminal or PowerShell, you’ll use a local installation of Docker to build images and run them as containers.

  • Container Operating System: Using Docker commands, you’ll access the base operating system of a running container. Within that context, you’ll use a container shell to issue commands to create and run a Node app.

The container operating system runs in isolation from the local operating system. Any files created within the container won’t be accessible locally. Any servers running within the container can’t listen to requests made from a local web browser. This is not ideal for local development. To overcome these limitations, you’ll bridge these two systems by doing the following:

  • Mount a local folder to the container filesystem: Using this mount point as your container working directory, you'll persist locally any files created within the container and you'll make the container aware of any local changes made to project files.

  • Allow the host to interact with the container network: By mapping a local port to a container port, any HTTP requests made to the local port will be redirected by Docker to the container port.

To see this Docker-based Node development strategy in action, you'll create a basic Node Express web server. Let's get started!

Removing the Burden of Installing Node

To run a simple "Hello World!" Node app, the typical tutorial asks you to:

  • Download and install Node
  • Download and install Yarn
  • To use different versions of Node, uninstall Node and install nvm
  • Install NPM packages globally

Each operating system has its own quirks making the aforementioned installations non-standard. However, access to the Node ecosystem can be standardized using Docker images. The only installation requirement for this tutorial is Docker. If you need to install Docker, choose your operating system from this Docker installation document and follow the steps provided.

Similar to how NPM works, Docker gives us access to a large registry of Docker images called Docker Hub. From Docker Hub, you can pull and run different versions of Node as images. You can then run these images as local processes that don’t overlap or conflict with each other. You can simultaneously create a cross-platform project that depends on Node 8 with NPM and another one that depends on Node 11 with Yarn.
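As a sketch of that idea (assuming Docker is installed locally), the following commands run two Node versions side by side without installing either one on the host:

```shell
# Each command runs a throwaway container and prints that container's Node
# version; nothing is installed on the host. The images are pulled from
# Docker Hub on first use. The guard keeps this a no-op if Docker is absent.
if command -v docker >/dev/null 2>&1; then
  docker run --rm node:8 node --version
  docker run --rm node:11 node --version
fi
```

The `--rm` flag removes each container as soon as the command finishes, so these one-off version checks leave nothing behind.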

Creating the Project Foundation

To start, anywhere in your system, create a node-docker folder. This is the project directory.

With the goal of running a Node Express server, under the node-docker project directory, create a server.js file, populate it as follows and save it:

// server.js
const express = require("express");
const app = express();

const PORT = process.env.PORT || 8080;

app.get("/", (req, res) => {
  res.send(`
    <h1>Docker + Node</h1>
    <span>A match made in the cloud</span>
  `);
});

app.listen(PORT, () => {
  console.log(`Server listening on port ${PORT}...`);
});
A Node project needs a package.json file and a node_modules folder. Assuming that Node is not installed in your system, you’ll use Docker to create those files following a structured workflow.

Accessing the Container Operating System

You can gain access to the container OS with any of the following methods:

  • Using a single but long docker run command.
  • Using a Dockerfile combined with a short docker run command.
  • Using a Dockerfile in combination with Docker Compose.

Using a single docker run command

Execute the following command:

docker run --rm -it --name node-docker \
-v $PWD:/home/app -w /home/app \
-e "PORT=3000" -p 8080:3000  \
-u node node:latest /bin/bash

Let’s break down this docker run command to understand how it helps you access the container shell:

docker run --rm -it

docker run creates a new container instance. The --rm flag automatically stops and removes the container once the container exits. The combined -i and -t flags run interactive processes such as a shell. The -i flag keeps STDIN (Standard Input) open, while the -t flag lets the process pretend it is a text terminal and pass along signals.

Think of --rm as "out of sight, out of mind".

Without the -it team, you won't see anything on the screen.

docker run --rm -it --name node-docker

The --name flag assigns a friendly name to the container so you can easily identify it in logs and tables, for example when you run docker ps.

docker run --rm -it --name node-docker \
-v $PWD:/home/app -w /home/app

The -v flag mounts a local folder into a container folder using a <local-path>:<container-path> mapping as its argument. Here, $PWD:/home/app mounts the current directory at /home/app inside the container.

A shell variable expands to the current working directory when the command is executed: $PWD on Mac and Linux, %CD% on Windows (Command Prompt). The -w flag sets the mount point as the container working directory.

docker run --rm -it --name node-docker \
-v $PWD:/home/app -w /home/app \
-e "PORT=3000" -p 8080:3000

The -e flag sets an environment variable, PORT, with a value of 3000. The -p flag maps the local port 8080 to the container port 3000, matching the PORT environment variable that is consumed within server.js:

const PORT = process.env.PORT || 8080;

docker run --rm -it --name node-docker \
-v $PWD:/home/app -w /home/app \
-e "PORT=3000" -p 8080:3000  \
-u node node:latest /bin/bash

For security and to avoid file permission problems, the -u flag sets the non-root user node, available in the Node image, as the user that runs the container processes. After the flags, the image to execute is specified: node:latest. The last argument is a command to execute inside the container once it’s running; /bin/bash invokes the container shell.

If the image is not present locally, Docker issues docker pull in the background to download it from Docker Hub.

Once the command executes, you'll see the container shell prompt:

node@<CONTAINER ID>:/home/app$

Before moving to the next method, exit the container terminal by typing exit and pressing <ENTER>.

Using a Dockerfile

The docker run command from the previous section is made of image build time and container runtime flags and elements:

docker run --rm -it --name node-docker \
-v $PWD:/home/app -w /home/app \
-e "PORT=3000" -p 8080:3000  \
-u node node:latest /bin/bash

Anything related to image build time can be defined as a custom image using a Dockerfile as follows:

  • FROM specifies the container base image: node:latest
  • WORKDIR defines -w
  • USER defines -u
  • ENV defines -e
  • ENTRYPOINT specifies the command to execute once the container runs: /bin/bash

Based on this, under the node-docker project directory, create a file named Dockerfile, populate it as follows, and save it:

FROM node:latest

WORKDIR /home/app
USER node

ENV PORT=3000
EXPOSE 3000

ENTRYPOINT /bin/bash

EXPOSE 3000 documents the port to expose at runtime. However, the container runtime flags that define the container name, port mapping, and volume mounting still need to be specified with docker run.

The custom image defined within the Dockerfile needs to be built using docker build before it can be run. In your local terminal, execute:

docker build -t node-docker .

docker build gives your image the friendly name node-docker via the -t flag. This is different from the container name. To verify that the image was created, run docker images.

With the image built, execute this shorter command to run the server:

docker run --rm -it --name node-docker \
-v $PWD:/home/app -p 8080:3000 \
node-docker

The container shell prompt comes up with the following format:

node@<CONTAINER ID>:/home/app$

Once again, before moving to the next method, exit the container terminal by typing exit and pressing <ENTER>.

Using Docker Compose

For Linux, Docker Compose is installed separately.

Based on the Dockerfile and the shorter docker run command of the previous section, you can create a Docker Compose YAML file to define your Node development environment as a service:


FROM node:latest

WORKDIR /home/app
USER node

ENV PORT=3000
EXPOSE 3000

ENTRYPOINT /bin/bash


docker run --rm -it --name node-docker \
-v $PWD:/home/app -p 8080:3000 \
node-docker

The only elements left to abstract from the docker run command are the container name, the volume mounting, and the port mapping.

Under the node-docker project directory, create a file named docker-compose.yml, populate it with the following content, and save it:

version: "3"

services:
  nod_dev_env:
    build: .
    container_name: node-docker
    ports:
      - 8080:3000
    volumes:
      - ./:/home/app

  • nod_dev_env gives the service a name to easily identify it
  • build specifies the path to the Dockerfile
  • container_name provides a friendly name to the container
  • ports configures host-to-container port mapping
  • volumes defines the mounting point of a local folder into a container folder

To start and run this service, execute the following command:

docker-compose up

up builds its own images and containers, separate from those created by the docker run and docker build commands used before. To verify this, run:

docker images
# Notice the image named <project-folder>_nod_dev_env
docker ps -a
# Notice the container named <project-folder>_nod_dev_env_<number>

up created an image and a container, but the container shell prompt didn’t come up. What happened? up starts the full service composition defined in docker-compose.yml. However, it doesn’t present interactive output; it only shows static service logs. To get interactive output, you use docker-compose run instead to run nod_dev_env individually.

First, to clean the images and containers created by up, execute the following command in your local terminal:

docker-compose down

Then, execute the following command to run the service:

docker-compose run --rm --service-ports nod_dev_env

The run command acts like docker run -it; however, by default it doesn't map and expose any container ports to the host. The --service-ports flag makes run use the port mapping configured in the Docker Compose file. The container shell prompt comes up once again with the following format:

node@<CONTAINER ID>:/home/app$

If for any reason the ports specified in the Docker Compose file are already in use, you can use the --publish (-p) flag to manually specify a different port mapping. For example, the following command maps the host port 4000 to the container port 3000:

docker-compose run --rm -p 4000:3000 nod_dev_env

Installing Dependencies and Running the Server

If you don't have an active container shell, use any of the previous sections' methods to open one.

In the container shell, initialize the Node project and install dependencies by issuing the following commands (if you prefer, use npm):

yarn init -y
yarn add express
yarn add -D nodemon

Verify that package.json and node_modules are now present under your local node-docker project directory.

nodemon streamlines your development workflow by restarting the server automatically anytime you make changes to the source code. To configure nodemon, update package.json as follows:

{
  // Other properties...
  "scripts": {
    "start": "nodemon server.js"
  }
}
In the container shell, execute yarn start to run the Node server.

To test the server, visit [http://localhost:8080/](http://localhost:8080/ "http://localhost:8080/") in your local browser. Docker intelligently redirects the request from the host port 8080 to the container port 3000.
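You can also check the mapping from the local terminal. As a sketch (it assumes the container from this tutorial is currently running with the -p 8080:3000 mapping):

```shell
# The request hits host port 8080; Docker forwards it to container port 3000.
# The guard keeps this a no-op when curl is unavailable, and the fallback
# message covers the case where the server is not running.
if command -v curl >/dev/null 2>&1; then
  curl -s http://localhost:8080/ || echo "server not reachable on port 8080"
fi
```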

To test the link between local file content and the server, open server.js locally, update the response as follows, and save the changes:

// server.js

// package and constant definitions...

app.get("/", (req, res) => {
  res.send(`
    <h1>Hello From Node Running Inside Docker</h1>
  `);
});
// server listening...

Refresh the browser window and observe the new response.

Modifying and Extending the Project

Assuming that Node is not installed in your local system, you can use the local terminal to modify project structure and file content but you can’t issue any Node-related commands, such as yarn add. As the server runs within the container, you are also not able to make server requests to the internal container port 3000.

In the event that you want to interact with the server within the container or modify the Node project, you need to execute commands against the running container using docker exec and the running container ID. You don’t use docker run, as that command would create a new, isolated container.

Getting the running container ID is easy.

  • If you already have a container shell open, the container ID is present in the shell prompt:
    node@<CONTAINER ID>:/home/app$

  • You can also get the container ID programmatically using docker ps to filter containers based on name and return the CONTAINER ID of any match:

docker ps -qf "name=node-docker"

The -f flag filters containers by the name=node-docker condition. The -q (--quiet) flag limits the output to the ID of the match, effectively plugging the CONTAINER ID of node-docker into the docker exec command.

Once you have the container ID, you can use docker exec to:

  • Open a new instance of the running container shell:
docker exec -it $(docker ps -qf "name=node-docker") /bin/bash

  • Make a server request using the internal port 3000:
docker exec -it $(docker ps -qf "name=node-docker") curl localhost:3000

  • Install or remove dependencies:

docker exec -it $(docker ps -qf "name=node-docker") yarn add body-parser

Once you have another active container shell open, you can easily run curl and yarn add there instead.

Recap... and Uncovering Little Lies

You've learned how to create an isolated Node development environment through different levels of complexity: by running a single docker run command, using a Dockerfile to build and run a custom image, and using Docker Compose to run a container as a Docker service.

Each level requires more file configuration but a shorter command to run the container. This is a worthy trade-off as encapsulating configuration in files makes the environment portable and easier to maintain. Additionally, you learned how to interact with a running container to extend your project.

You may still need to install Node locally for IDEs to provide syntax assistance, or you can use a CLI editor like vim within the container.

Even so, you can still benefit from this isolated development environment. If you restrict project setup, installation, and runtime steps to be executed within the container, you can standardize those steps for your team as everyone would be executing commands using the same version of Linux. Also, all the cache and hidden files created by Node tools stay within the container and don’t pollute your local system. Oh, and you also get yarn for free!

JetBrains is starting to offer the capability to use Docker images as remote interpreters for Node and Python when running and debugging applications. In the future, we may become entirely free from downloading and installing these tools directly in our systems. Stay tuned to see what the industry provides us to make our developer environments standard and portable.

Crafting multi-stage builds with Docker in Node.js

Learn how you can use a multi-stage Docker build for your Node.js application. Docker multi-stage builds enable us to create more complex build pipelines without having to resort to magic tricks.

Everyone knows about Docker. It’s the ubiquitous tool for packaging and distribution of applications that seemed to come from nowhere and take over our industry! If you are reading this, it means you already understand the basics of Docker and are now looking to create a more complex build pipeline.

In the past, optimizing our Docker images has been a challenging experience. All sorts of magic tricks were employed to reduce the size of our applications before they went to production. Things are different now because support for multi-stage builds has been added to Docker.

In this post, we explore how you can use a multi-stage build for your Node.js application. For an example, we’ll use a TypeScript build process, but the same kind of thing will work for any build pipeline. So even if you’d prefer to use Babel, or maybe you need to build a React client, then a Docker multi-stage build can work for you as well.

A basic, single-stage Dockerfile for Node.js

Let’s start by looking at a basic Dockerfile for Node.js. We can visualize the normal Docker build process as shown in Figure 1 below.

Figure 1: Normal Docker build process.

We use the docker build command to turn our Dockerfile into a Docker image. We then use the docker run command to instantiate our image to a Docker container.

The Dockerfile in Listing 1 below is just a standard, run-of-the-mill Dockerfile for Node.js. You have probably seen this kind of thing before. All we are doing here is copying the package.json, installing production dependencies, copying the source code, and finally starting the application.

This Dockerfile is for regular JavaScript applications, so we don’t need a build process yet. I’m only showing you this simple Dockerfile so you can compare it to the multi-stage Dockerfile I’ll be showing you soon.

Listing 1: A run-of-the-mill Dockerfile for Node.js

FROM node:10.15.2

WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY ./src ./src
CMD npm start

Listing 1 is a quite ordinary-looking Dockerfile. In fact, all Dockerfiles looked pretty much like this before multi-stage builds were introduced. Now that Docker supports multi-stage builds, we can visualize our simple Dockerfile as the single-stage build process illustrated in Figure 2.

Figure 2: A single-stage build pipeline.

The need for multiple stages

We can already run whatever commands we want in the Dockerfile when building our image, so why do we even need a multi-stage build?

To find out why, let’s upgrade our simple Dockerfile to include a TypeScript build process. Listing 2 shows the upgraded Dockerfile: it now copies tsconfig.json, installs all dependencies (including dev dependencies), and runs the build step.

Listing 2: We have upgraded our simple Dockerfile to include a TypeScript build process

FROM node:10.15.2

WORKDIR /usr/src/app
COPY package*.json ./
COPY tsconfig.json ./
RUN npm install
COPY ./src ./src
RUN npm run build
CMD npm start
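For this Dockerfile to work, the project's package.json needs build and start scripts along these lines (a sketch; the script contents, output folder build, entry file index.js, and version numbers are assumptions, but the build output must land where the start script expects it):

```json
{
  "scripts": {
    "build": "tsc",
    "start": "node build/index.js"
  },
  "dependencies": {
    "express": "^4.16.0"
  },
  "devDependencies": {
    "typescript": "^3.2.0"
  }
}
```

The matching tsconfig.json would set "outDir": "build" so the compiled JavaScript ends up in the folder the start script runs.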

It’s easy to see the problem this causes. To see it for yourself, instantiate a container from this image, then shell into it and inspect its file system.

I did this and used the Linux tree command to list all the directories and files in the container. You can see the result in Figure 3.

Notice that we have unwittingly included in our production image all the debris of development and the build process. This includes our original TypeScript source code (which we don’t use in production), the TypeScript compiler itself (which, again, we don’t use in production), plus any other dev dependencies we might have installed into our Node.js project.

Figure 3: The debris from development and the build process is bloating our production Docker image.

Bear in mind this is only a trivial project, so we aren’t actually seeing too much cruft left in our production image. But you can imagine how bad this would be for a real application with many source files, many dev dependencies, and a more complex build process that generates temporary files!

We don’t want this extra bloat in production. The extra size makes our containers bigger. When our containers are bigger than needed, it means we aren’t making efficient use of our resources. The increased surface area of the container can also be a problem for security, where we generally prefer to minimize the attackable surface area of our application.

Wouldn’t it be nice if we could throw away the files we don’t want and just keep the ones we do want? This is exactly what a Docker multi-stage build can do for us.

Crafting a Dockerfile with a multi-stage build

We are going to split our Dockerfile into two stages. Figure 4 shows what our build pipeline looks like after the split.

Figure 4: A multi-stage Docker build pipeline to build TypeScript.

Our new multi-stage build pipeline has two stages: Build stage 1 is what builds our TypeScript code; Build stage 2 is what creates our production Docker image. The final Docker image produced at the end of this pipeline contains only what it needs and omits the cruft we don’t want.

To create our two-stage build pipeline, we are basically just going to create two Docker files in one. Listing 3 shows our Dockerfile with multiple stages added. The first FROM command initiates the first stage, and the second FROM command initiates the second stage.

Compare this to a regular single-stage Dockerfile, and you can see that it actually looks like two Dockerfiles squished together in one.

Listing 3: A multi-stage Dockerfile for building TypeScript code

# Build stage 1.
# This stage builds our TypeScript and produces an intermediate Docker image containing the compiled JavaScript code.
FROM node:10.15.2

WORKDIR /usr/src/app
COPY package*.json ./
COPY tsconfig.json ./
RUN npm install
COPY ./src ./src
RUN npm run build

# Build stage 2.
# This stage pulls the compiled JavaScript code from the stage 1 intermediate image.
# This stage builds the final Docker image that we'll use in production.
FROM node:10.15.2

WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY --from=0 /usr/src/app/build ./build
CMD npm start

To create this multi-stage Dockerfile, I simply took Listing 2 and divided it into two separate Dockerfiles. The first stage contains only what is needed to build the TypeScript code. The second stage contains only what is needed to produce the final production Docker image. I then merged the two Dockerfiles into a single file.

The most important thing to note is the use of --from in the second stage. This is the syntax we use to pull the built files from our first stage, which we refer to here as stage 0. We are pulling the compiled JavaScript files from the first stage into the second stage.

We can easily check to make sure we got the desired result. After creating the new image and instantiating a container, I shelled in to check the contents of the file system. You can see in Figure 5 that we have successfully removed the debris from our production image.

Figure 5: We have removed the debris of development from our Docker image.

We now have fewer files in our image, it’s smaller, and it has less surface area. Yay! Mission accomplished.

But what, specifically, does this mean?

The effect of the multi-stage build

What exactly is the effect of the new build pipeline on our production image?

I measured the results before and after. Our single-stage image produced by Listing 2 weighs in at 955MB. After converting to the multi-stage build in Listing 3, the image now comes to 902MB. That’s a reasonable reduction — we removed 53MB from our image!

While 53MB seems like a lot, we have actually only shaved off a little more than 5 percent of the size. I know what you’re going to say now: But Ash, our image is still monstrously huge! There’s still way too much bloat in that image.

Well, to make our image even smaller, we now need to use the alpine, or slimmed-down, Node.js base image. We can do this by changing our second build stage from node:10.15.2 to node:10.15.2-alpine.

This reduces our production image down to 73MB — that’s a huge win! Now the savings we get from discarding our debris is more like a whopping 60 percent. Alright, we are really getting somewhere now!
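The change touches only the second stage's base image; the first stage keeps the full image because the TypeScript compiler needs the complete tooling. A sketch of the relevant lines from Listing 3:

```dockerfile
# Build stage 1: full image, since the build needs the complete toolchain.
FROM node:10.15.2
# ...build steps as in Listing 3...

# Build stage 2: slimmed-down Alpine variant for the production image.
FROM node:10.15.2-alpine
# ...production steps as in Listing 3...
```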

This highlights another benefit of multi-stage builds: we can use separate Docker base images for each of our build stages. This means you can customize each build stage by using a different base image.

Say you have one stage that relies on some tools that are in a different image, or you have created a special Docker image that is custom for your build process. This gives us a lot of flexibility when constructing our build pipelines.

How does it work?

You probably already guessed this: each stage or build process produces its own separate Docker image. You can see how this works in Figure 6.

The Docker image produced by a stage can be used by the following stages. Once the final image is produced, all the intermediate images are discarded; we take what we want for the final image, and the rest gets thrown away.

Figure 6: Each stage of a multi-stage Docker build produces an image.

Adding more stages

There’s no need to stop at two stages, although that’s often all that’s needed; we can add as many stages as we need. A specific example is illustrated in Figure 7.

Here we are building TypeScript code in stage 1 and our React client in stage 2. In addition, there’s a third stage that produces the final image from the results of the first two stages.

Figure 7: Using a Docker multi-stage build, we can create more complex build pipelines.

Pro tips

Now it’s time to leave you with a few advanced tips to explore on your own:

  1. You can name your build stages! You don’t have to leave them as the default 0, 1, etc. Naming your build stages will make your Dockerfile more readable.
  2. Understand the options you have for base images. Using the right base image can relieve a lot of confusion when constructing your build pipeline.
  3. Build a custom base image if the complexity of your build process is getting out of hand.
  4. You can pull from external images! Just like you pull files from earlier stages, you can also pull files from images that are published to a Docker repository. This gives you an option to pre-bake an early build stage if it’s expensive and doesn’t change very often.
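As a sketch of tip 1, the stages in Listing 3 could be named so the second stage pulls files by name instead of the positional --from=0 (the stage name builder is my choice, not required):

```dockerfile
# Name the first stage "builder" with FROM ... AS.
FROM node:10.15.2 AS builder
WORKDIR /usr/src/app
COPY package*.json tsconfig.json ./
RUN npm install
COPY ./src ./src
RUN npm run build

FROM node:10.15.2-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
# Refer to the first stage by name instead of by index.
COPY --from=builder /usr/src/app/build ./build
CMD npm start
```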

Conclusion and resources

Docker multi-stage builds enable us to create more complex build pipelines without having to resort to magic tricks. They help us slim down our production Docker images and remove the bloat. They also allow us to structure and modularize our build process, which makes it easier to test parts of our build process in isolation.

So please have some fun with Docker multi-stage builds, and don’t forget to have a look at the example code on GitHub.

Here’s the Docker documentation on multi-stage builds, too.

How to Use Express.js, Node.js and MongoDB.js

In this post, I will show you how to use Express.js, Node.js and MongoDB.js. We will be creating a very simple Node application, that will allow users to input data that they want to store in a MongoDB database. It will also show all items that have been entered into the database.

Creating a Node Application

To get started I would recommend creating a new directory that will contain our application. For this demo I am creating a directory called node-demo. After creating the directory you will need to change into that directory.

mkdir node-demo
cd node-demo

Once we are in the directory we will need to create an application, and we can do this by running the command:

npm init

This will ask you a series of questions. Here are the answers I gave to the prompts.

The first step is to create a file that will contain our code for our Node.js server.

touch app.js

In our app.js we are going to add the following code to build a very simple Node.js Application.

var express = require("express");
var app = express();
var port = 3000;

app.get("/", (req, res) => {
  res.send("Hello World");
});

app.listen(port, () => {
  console.log("Server listening on port " + port);
});

What the code does is require the Express.js module. It then creates app by calling express. We define our port to be 3000.

The app.get line will listen to requests from the browser and will return the text “Hello World” back to the browser.

The last line actually starts the server and tells it to listen on port 3000.

Installing Express

Our app.js required the Express.js module. We need to install express in order for this to work properly. Go to your terminal and enter this command.

npm install express --save

This command will install the express module into our package.json. The module is installed as a dependency in our package.json as shown below.
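The dependencies section of package.json will then look something like this (the exact version number will vary with the release current at install time):

```json
{
  "dependencies": {
    "express": "^4.16.4"
  }
}
```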

To test our application you can go to the terminal and enter the command

node app.js

Open up a browser and navigate to the url http://localhost:3000

You will see the following in your browser

Creating Website to Save Data to MongoDB Database

Instead of showing the text “Hello World” when people view your application, what we want to do is to show a place for users to save data to the database.

We are going to allow users to enter a first name and a last name that we will be saving in the database.

To do this we will need to create a basic HTML file. In your terminal enter the following command to create an index.html file.

touch index.html

In our index.html file we will be creating an input field where users can input data that they want to have stored in the database. We will also need a button for users to click on that will add the data to the database.

Here is what our index.html file looks like.

<!DOCTYPE html>
<html>
  <head>
    <title>Intro to Node and MongoDB</title>
  </head>
  <body>
    <h1>Intro to Node and MongoDB</h1>
    <form method="post" action="/addname">
      <label>Enter Your Name</label><br>
      <input type="text" name="firstName" placeholder="Enter first name..." required>
      <input type="text" name="lastName" placeholder="Enter last name..." required>
      <input type="submit" value="Add Name">
    </form>
  </body>
</html>
If you are familiar with HTML, you will not find anything unusual in our code for our index.html file. We are creating a form where users can input their first name and last name and then click an “Add Name” button.

The form will do a post call to the /addname endpoint. We will be talking about endpoints and post later in this tutorial.

Displaying our Website to Users

We were previously displaying the text “Hello World” to users when they visited our website. Now we want to display the HTML file that we created. To do this we will need to change the app.get line in our app.js file.

We will be using the sendFile command to show the index.html file. We will need to tell the server exactly where to find the index.html file. We can do that by using a Node global called __dirname. The __dirname variable provides the current directory where the command was run. We will then append the path to our index.html file.

The app.get line will need to be changed to:

app.use("/", (req, res) => {
  res.sendFile(__dirname + "/index.html");
});

Once you have saved your app.js file, we can test it by going to terminal and running node app.js

Open your browser and navigate to “http://localhost:3000”. You will see the following

Connecting to the Database

Now we need to add our database to the application. We will be connecting to a MongoDB database. I am assuming that you already have MongoDB installed and running on your computer.

To connect to the MongoDB database we are going to use a module called Mongoose. We will need to install the Mongoose module just like we did with Express. Go to your terminal and enter the following command.
npm install mongoose --save

This will install the Mongoose module and add it as a dependency in our package.json.

Connecting to the Database

Now that we have the mongoose module installed, we need to connect to the database in our app.js file. MongoDB, by default, runs on port 27017. You connect to the database by telling it the location of the database and the name of the database.

In our app.js file, after the line for the port and before the app.use line, enter the following lines to get access to Mongoose and to connect to the database. For the database name, I am going to use “node-demo”.

var mongoose = require("mongoose");
mongoose.Promise = global.Promise;
mongoose.connect("mongodb://localhost:27017/node-demo");
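The connection string packs the host, port, and database name into one URL. As an illustration (using Node's built-in WHATWG URL parser, not anything Mongoose-specific), you can pull the pieces apart:

```javascript
// Anatomy of the MongoDB connection string used above.
const conn = new URL("mongodb://localhost:27017/node-demo");

console.log(conn.hostname); // "localhost"  -> where MongoDB is running
console.log(conn.port);     // "27017"      -> MongoDB's default port
console.log(conn.pathname); // "/node-demo" -> the database name
```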

Creating a Database Schema

Once the user enters data in the input field and clicks the add button, we want the contents of the input field to be stored in the database. In order to know the format of the data in the database, we need to have a Schema.

For this tutorial, we will need a very simple Schema that has only two fields. I am going to call the fields firstName and lastName. The data stored in both fields will be Strings.

After connecting to the database in our app.js we need to define our Schema. Here are the lines you need to add to the app.js.
var nameSchema = new mongoose.Schema({
  firstName: String,
  lastName: String
});

Once we have built our Schema, we need to create a model from it. I am going to call my model “User”. Here is the line you will add next to create our model.
var User = mongoose.model("User", nameSchema);

Creating a RESTful API

Now that we have a connection to our database, we need to create the mechanism by which data will be added to the database. This is done through our REST API. We will need to create an endpoint that will be used to send data to our server. Once the server receives this data, it will store it in the database.

An endpoint is a route that our server listens to in order to receive data from the browser. We have already created one route in the application: the route listening at the endpoint “/”, which is the homepage of our application.

HTTP Verbs in a REST API

The communication between the client (the browser) and the server is done through HTTP verbs. The most common HTTP verbs are GET, POST, PUT, and DELETE.

The following table explains what each HTTP verb does.

HTTP Verb   Operation
GET         Read
POST        Create
PUT         Update
DELETE      Delete
As you can see from these verbs, they form the basis of CRUD operations that I talked about previously.
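The mapping can be written down as a simple lookup in code. DELETE is included here to complete the CRUD set, although this tutorial only uses GET and POST:

```javascript
// Conventional mapping between HTTP verbs and CRUD operations.
const verbToCrud = {
  GET: "Read",
  POST: "Create",
  PUT: "Update",
  DELETE: "Delete"
};

// Our form submits with method="post", so the server will create a record.
console.log(verbToCrud.POST); // "Create"
```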

Building a CRUD endpoint

If you remember, the form in our index.html file submits with the POST method to the /addname endpoint. We will now create this endpoint.

In our previous endpoint we used the “GET” HTTP verb to display the index.html file. We are going to do something very similar, but instead of using “GET”, we are going to use “POST”. To get started, this is what the skeleton of our endpoint will look like:

app.post("/addname", (req, res) => {
});
Express Middleware

To fill out the contents of our endpoint, we want to store the firstName and lastName entered by the user into the database. The values for firstName and lastName are in the body of the request that we send to the server. We want to capture that data, convert it to JSON and store it into the database.

Express.js version 4 removed all bundled middleware from the framework. To parse the data in the body, we will need to add middleware to our application to provide this functionality. We will be using the body-parser module. We need to install it, so in your terminal window enter the following command.

npm install body-parser --save

Once it is installed, we will need to require this module and configure it. The configuration will allow the values for firstName and lastName sent in the body of the request to reach the server, parsed into a JavaScript object. This is handy because we can take this parsed data and save it directly into our database.

To add the body-parser middleware to our application and configure it, we can add the following lines directly after the line that sets our port.

var bodyParser = require('body-parser');
app.use(bodyParser.urlencoded({ extended: true }));
Saving data to database

Mongoose provides a save function that takes a JSON object and stores it in the database. Our body-parser middleware will have already converted the user’s input into this format for us.

To save the data into the database, we need to create a new instance of the model we created earlier and pass in the user’s input. Once we have that instance, we just need to call its save method.

Mongoose will return a promise from a save to the database. A promise is an object that settles when the save completes: the save will either finish successfully or fail, and a promise provides two methods that handle both of these scenarios.

If the save to the database was successful, the .then handler of the promise will run. In this case, we want to send text back to the user to let them know the data was saved to the database.

If it fails, the .catch handler of the promise will run. In this case, we want to send text back to the user telling them the data was not saved to the database. It is best practice to also change the status code that is returned from the default 200 to 400. A 400 status code signifies that the request could not be processed.

Now, putting all of this together, here is what our final endpoint will look like:

app.post("/addname", (req, res) => {
  var myData = new User(req.body);
  myData.save()
    .then(item => {
      res.send("item saved to database");
    })
    .catch(err => {
      res.status(400).send("unable to save to database");
    });
});
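You can exercise the success and failure paths of this promise without a database. The fakeSave function below is a hypothetical stand-in for Mongoose's save, written only to show when .then and .catch run:

```javascript
// A stand-in for Mongoose's save(): resolves when both fields are present,
// rejects otherwise.
function fakeSave(data) {
  return new Promise((resolve, reject) => {
    if (data.firstName && data.lastName) {
      resolve(data);
    } else {
      reject(new Error("unable to save to database"));
    }
  });
}

fakeSave({ firstName: "John", lastName: "Doe" })
  .then(item => console.log("item saved to database"))   // this path runs
  .catch(err => console.log(err.message));

fakeSave({ firstName: "John" })
  .then(item => console.log("item saved to database"))
  .catch(err => console.log(err.message));               // this path runs
```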
Testing our code

Save your code. Go to your terminal and enter the command node app.js to start our server. Open up your browser and navigate to the URL “http://localhost:3000”. You will see our index.html file displayed to you.

Make sure you have MongoDB running.

Enter your first name and last name in the input fields and then click the “Add Name” button. You should get back text saying the name has been saved to the database.

Access to Code

The final version of the code is available in my GitHub repo. Thank you for reading!

Dockerizing a Node.js web application


In this article, we will see how to dockerize a Node.js application.

Originally published by ganeshmani009.

What is Docker?

Docker is a containerization platform where developers can package an application and run it as a container.

In simple words, Docker runs each application in a separate, isolated environment that shares only the host’s resources, such as the OS kernel and memory.

Virtual Machine vs Docker

Here we can see the difference between Docker and virtual machines: a virtual machine runs a complete guest operating system on top of a hypervisor, while Docker containers share the host operating system’s kernel, which makes them much lighter weight.

To read more about Docker, see the Docker Docs.

Docker and Node.js setup

Now we are going to see how to dockerize a Node.js application. Before that, Docker has to be installed on the machine: Docker Installation

After installing Docker, we need to initialize the Node application.

npm init --yes
npm install express body-parser

The first command initializes the package.json file, which contains the details about the application and its dependencies. The second one installs the express and body-parser modules.
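One detail to note: npm init --yes does not add a start script, but the Dockerfile we write later launches the app with npm start. Assuming the entry file is named server.js (as it is in the next step), package.json needs a scripts entry like this:

```json
{
  "scripts": {
    "start": "node server.js"
  }
}
```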

Create a file called server.js and paste in the following code:

'use strict';

const express = require('express');

// Constants
const PORT = 8080;
const HOST = '0.0.0.0';

// App
const app = express();
app.get('/', (req, res) => {
  res.send('You have done it !!!!!\n');
});

app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);

This runs a basic Express application server. Now we need to create the Docker image definition. Create a file named Dockerfile and add the following commands.

FROM node:8

First, we base our image on the official node:8 image from Docker Hub.

WORKDIR /usr/src/app

Next, we set /usr/src/app as the working directory inside the Docker image.

COPY package*.json ./
RUN npm install

Then we copy package.json (and package-lock.json, via the package*.json glob) from the local machine into the Docker image and install all the dependencies there. Copying only package*.json before the rest of the source code lets Docker cache the npm install layer: as long as the dependencies have not changed, rebuilding the image after a source-code edit skips the reinstall.

COPY . .

EXPOSE 8080

CMD [ "npm", "start" ]

This copies all of the source code from the local machine into the Docker image and exposes port 8080 inside the image so that it can be mapped to a local machine port. Finally, CMD sets the command Docker runs when the container starts.

Your Dockerfile should now look like:

# install the node base image from Docker Hub
FROM node:8

# set the working directory in the docker image
WORKDIR /usr/src/app

# copy package.json from local to docker image
COPY package*.json ./

# run npm install to install dependencies
RUN npm install

# copy all the files from the local directory to the docker image
COPY . .

# this port is exposed for docker to map
EXPOSE 8080

CMD [ "npm", "start" ]

Create a .dockerignore file with the following content:

node_modules
npm-debug.log

This keeps your local node_modules folder and npm debug logs from being copied into the image, so the dependencies installed by RUN npm install are the ones that are used.
Now we need to build our image from the command line:

$ docker build -t <your username>/node-web-app .

The -t flag tags the image with a name, so it is easier to identify than an ID. Note: the dot at the end of the command is important; it tells Docker to use the current directory as the build context.

We can run the image using the following command (the -p flag maps local port 49160 to container port 8080, and -d runs the container in detached mode):

docker run -p 49160:8080 -d <your username>/node-web-app

We can check it using:

 curl -i localhost:49160

output should be:

HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/html; charset=utf-8
Content-Length: 23
ETag: W/"17-C2jfoqVpuUrcmNFogd/3pZ5xds8"
Date: Mon, 08 Apr 2019 17:29:12 GMT
Connection: keep-alive

You have done it !!!!!




Thanks for reading
