A Complete Guide on Deploying a Node app to AWS with Docker

In this guide, I'll walk you through dockerizing a Node.js application and then deploying it to Amazon Web Services (AWS) using Amazon Elastic Container Registry (ECR) and Amazon Elastic Container Service (ECS).

Once you've got a web application running locally on your machine, if you want to access it on the internet, you've got to deploy it. And instead of just deploying it manually on a virtual machine in the cloud, let's dockerize the app and then deploy it to the cloud.

Table of Contents

1. Introduction

2. Prerequisites

3. A quick primer on Docker and AWS

4. What we’ll be deploying

5. Creating a Dockerfile

6. Building a docker image

7. Running a docker container

8. Creating the Registry (ECR) and uploading the app image to it

9. Creating a new task definition

10. Creating a cluster

11. Creating a service to run it

12. Conclusion

1. Introduction

Writing code that does stuff is something most developers are familiar with. Sometimes, though, we need to take on the responsibility of a SysAdmin or DevOps engineer and deploy our codebase to production, where it will help a business solve problems for customers.

In this tutorial, I’ll show you how to dockerize a Node.js application and deploy it to Amazon Web Services (AWS) using Amazon ECR (Elastic Container Registry) and ECS (Elastic Container Service).

2. Prerequisites

To follow along with this tutorial, you’ll need the following:

  1. Node and Npm: Follow this link to install the latest versions.
  2. Basic knowledge of Node.js.
  3. Docker: The installation provides Docker Engine, the Docker CLI client, and other cool stuff. Follow the instructions for your operating system. To check if the installation worked, fire this on the terminal:

docker --version

The command above should display the version number. If it doesn’t, the installation didn’t complete properly.

  4. AWS account: Sign up for the free tier. There is a waiting period to verify your phone number and bank card. After this, you will have access to the console.

  5. AWS CLI: Follow the instructions for your OS. You need Python installed.

3. A quick primer on Docker and AWS

Docker is open source software that allows you to package an application together with its required dependencies and environment in a ‘container’ that you can ship and run anywhere. It is independent of platforms or hardware, so a containerized application can run in any environment in an isolated fashion.

Docker containers solve many issues, such as when an app works on a co-worker’s computer but doesn’t run on yours, or it works in the local development environment but doesn’t work when you deploy it to a server.

Amazon Web Services (AWS) offers a reliable, scalable, and inexpensive cloud computing service for businesses. As I mentioned before, this tutorial will focus on using the ECR and ECS services.

4. What we’ll be deploying

Let’s quickly build a sample app that we’ll use for the purpose of this tutorial. It’s going to be a very simple Node.js app.

Enter the following in your terminal:

# create a new directory
mkdir sample-nodejs-app
# change to the new directory
cd sample-nodejs-app
# initialize npm
npm init -y
# install express
npm install express
# create a server.js file
touch server.js

Open server.js and paste the code below into it:

// server.js
const express = require('express')
const app = express()

app.get('/', (req, res) => {
  res.send('Hello world from a Node.js app!')
})

app.listen(3000, () => {
  console.log('Server is up on 3000')
})

Start the app with:

node server.js

Access it on http://localhost:3000. You should get Hello world from a Node.js app! displayed in your browser. The complete code is available on GitHub.

Now let’s take our very important app to production 😄.

5. Creating a Dockerfile

We are going to start dockerizing the app by creating a single file called a Dockerfile at the root of our project directory.

The Dockerfile is the blueprint from which our images are built. And then images turn into containers, in which we run our apps.

Every Dockerfile starts with a base image as its foundation. There are two ways to approach creating your Dockerfile:

  1. Use a plain OS base image (For example, Ubuntu OS, Debian, CentOS etc.) and install an application environment in it such as Node.js OR
  2. Use an environment-ready base image to get an OS image with an application environment already installed.

We will proceed with the second approach. We can use the official Node.js image hosted on Docker Hub, in its Alpine Linux variant.

Write this in the Dockerfile:

FROM node:8-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . .
RUN npm install
EXPOSE 3000
CMD [ "node", "server.js" ]

Let’s walk through this line by line to see what is happening here, and why.

FROM node:8-alpine

Here, we are building our Docker image using the official Node.js image from Docker Hub (a registry of base images).

  • Start your Dockerfile with a [**FROM**](https://docs.docker.com/reference/builder/#from) statement. This is where you specify your base image.
  • The [**RUN**](https://docs.docker.com/reference/builder/#run) statement lets us execute any command we want. Here we created a subdirectory, /usr/src/app, that will hold our application code within the Docker image.
  • The [**WORKDIR**](https://docs.docker.com/engine/reference/builder/#workdir) instruction establishes the subdirectory we created as the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. /usr/src/app is our working directory.
  • [**COPY**](https://docs.docker.com/engine/reference/builder/#copy) lets us copy files from a source to a destination. We copied the contents of our Node application code (server.js and package.json) from our current directory to the working directory in our Docker image.
  • The [**EXPOSE**](https://docs.docker.com/engine/reference/builder/#expose) instruction informs Docker that the container listens on the specified network ports at runtime. We specified port 3000.
  • Last but not least, the [**CMD**](https://docs.docker.com/reference/builder/#cmd) statement specifies the command to start our application. This tells Docker how to run your application. Here we use node server.js, which is typically how files are run in Node.js.

With this completed file, we are now ready to build a new Docker image.
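
One caveat worth noting: COPY . . copies everything in the project directory, including a local node_modules folder if one exists. A common safeguard, which I'm adding here as a suggestion rather than part of the original steps, is a .dockerignore file at the project root:

node_modules
npm-debug.log

With that in place, npm install inside the image installs dependencies fresh instead of inheriting whatever happened to be on your machine.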

6. Building a docker image

Make sure you have Docker up and running. Now that we have defined our Dockerfile, let’s build the image, tagging it with -t:

docker build -t sample-nodejs-app .

This will output a series of hashes and alphanumeric strings that identify the intermediate containers and images, ending with “Successfully built” on the last line:

Sending build context to Docker daemon  1.966MB
Step 1/7 : FROM node:8-alpine
 ---> 998971a692ca
Step 2/7 : RUN mkdir -p /usr/src/app
 ---> Using cache
 ---> f1aa1c112188
Step 3/7 : WORKDIR /usr/src/app
 ---> Using cache
 ---> b4421b83357b
Step 4/7 : COPY . .
 ---> 836112e1d526
Step 5/7 : RUN npm install
 ---> Running in 1c6b36b5381c
npm WARN [email protected] No description
npm WARN [email protected] No repository field.
Removing intermediate container 1c6b36b5381c
 ---> 93999e6c807f
Step 6/7 : EXPOSE 3000
 ---> Running in 7419020927f1
Removing intermediate container 7419020927f1
 ---> ed4ac8a31f83
Step 7/7 : CMD [ "node", "server.js" ]
 ---> Running in c77d34f4c873
Removing intermediate container c77d34f4c873
 ---> eaf97859f909
Successfully built eaf97859f909

Note: don’t expect the same hash values in your terminal.

7. Running a Docker Container

We’ve built the docker image. To see previously created images, run:

docker images

You should see the image we just created as the most recent, based on the creation time.

Copy the image Id. To run the container, we write on the terminal:

docker run -p 80:3000 {image-id}

Replace {image-id} with your own image ID.


By default, Docker containers can make connections to the outside world, but the outside world cannot connect to containers. -p publishes a container port to the host. Here we map host port 80 to container port 3000. Because we are running Docker locally, go to http://localhost to view the app.
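
A side note: instead of the image ID, you can also run the container by the tag we gave it, and add -d to run it detached in the background:

docker run -d -p 80:3000 sample-nodejs-app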

At any moment, you can check running Docker containers by typing:

docker container ls

Finally, you can stop the container with:

docker stop {container-id}

Note: docker stop takes the container ID shown by docker container ls, not the image ID.

Leave the Docker daemon running.

8. Create Registry (ECR) and upload the app image to it

Amazon Elastic Container Registry (ECR) is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR is integrated with Amazon Elastic Container Service (ECS), simplifying your development to production workflow.

The keyword “Elastic” means you can scale the capacity or reduce it as desired.

Steps:

  1. Go to the AWS console and sign in.
  2. Select the EC2 container service and Get started

3. The first-run page appears; scroll down, click cancel, and you’ll land on the ECS dashboard.

4. To ensure your CLI can connect with your AWS account, run on the terminal:

aws configure

If your AWS CLI was properly installed, aws configure will ask for the following:

$ aws configure
AWS Access Key ID [None]: accesskey
AWS Secret Access Key [None]: secretkey
Default region name [None]: us-west-2
Default output format [None]:

Get the security credentials from your AWS account under your username > Access keys. Run aws configure again and fill in the values.

5. Create a new repository and enter a name (preferably the same name as your local image, for consistency).

For example, use sample-nodejs-app.

Follow the 5 instructions from the AWS console for building, tagging, and pushing Docker images:

Note: The arguments of the following are mine and will differ from yours, so just follow the steps outlined on your console.

  1. Retrieve the Docker login command that you can use to authenticate your Docker client to your registry. Note: if you receive an “Unknown options: --no-include-email” error, install the latest version of the AWS CLI. Learn more here.

aws ecr get-login --no-include-email --region us-east-2

2. Run the docker login command that was returned in the previous step (just copy and paste). Note: If you are using Windows PowerShell, run the following command instead:

Invoke-Expression -Command (aws ecr get-login --no-include-email --region us-east-2)

It should output: Login Succeeded.
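
Note: on version 2 of the AWS CLI, get-login has been removed in favor of get-login-password. If the command above isn’t available, the equivalent (with your own account ID and region substituted) is:

aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-east-2.amazonaws.com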

3. Build your Docker image using the following command. For information on building a Dockerfile from scratch, see the instructions here. You can skip this step since our image is already built:

docker build -t sample-nodejs-app .

4. With a completed build, tag your image with a keyword (For example, latest) so you can push the image to this repository:

docker tag sample-nodejs-app:latest 559908478199.dkr.ecr.us-east-2.amazonaws.com/sample-nodejs-app:latest

5. Run the following command to push this image to your newly created AWS repository:

docker push 559908478199.dkr.ecr.us-east-2.amazonaws.com/sample-nodejs-app:latest

9. Create a new task definition

Tasks function like the docker run command of the Docker CLI but for multiple containers. They define:

  • Container images (to use)
  • Volumes (if any)
  • Networks
  • Environment variables
  • Port mappings

From Task Definitions in the ECS dashboard, click the Create new Task Definition (ECS) button:

Set a task name and use the following steps:

  • Add Container: sample-nodejs-app (the one we pushed).
  • Image: the URL to your container. Mine is 559908478199.dkr.ecr.us-east-2.amazonaws.com/sample-nodejs-app
  • Soft limit: 512
  • Map 80 (host) to 3000 (container) for sample-nodejs-app
  • Env Variables:

NODE_ENV: production
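
For reference, here is a rough sketch of the task definition these console settings correspond to. The JSON follows the ECS task definition schema, but treat the exact values (and my account’s image URL) as illustrative:

{
  "family": "demo-nodejs-app",
  "containerDefinitions": [
    {
      "name": "sample-nodejs-app",
      "image": "559908478199.dkr.ecr.us-east-2.amazonaws.com/sample-nodejs-app",
      "memoryReservation": 512,
      "portMappings": [
        { "hostPort": 80, "containerPort": 3000, "protocol": "tcp" }
      ],
      "environment": [
        { "name": "NODE_ENV", "value": "production" }
      ],
      "essential": true
    }
  ]
}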

10. Create a Cluster

A cluster is the place where your AWS containers run. With the EC2 launch type, it is backed by EC2 instances. Define the following:

  • Cluster name: demo-nodejs-app-cluster
  • EC2 instance type: t2.micro

(Note: you select the instances based on the size of your application. Here we’ve selected the smallest. Your selection affects how much money you are billed at the end of the month. Visit here for more information). Thank you Nicholas Kolatsis for pointing out that the previous selection of m4.large was expensive for this tutorial.

  • Number of instances: 1
  • EBS storage: 22 GiB
  • Key pair: None
  • VPC: New

When the process is complete, you may choose to click on “View cluster.”

11. Create a service to run it

Go to Task Definitions > click demo-nodejs-app > click on the latest revision.

Inside the task definition, click on the Actions dropdown and select Create service.

Use the following:

  • Launch type: EC2
  • Service name: demo-nodejs-app-service
  • Number of tasks: 1

Skip through the remaining options, click Create service, and then View service.

You’ll see its status as PENDING. Give it a little time and it will indicate RUNNING.

Go to Cluster (through a link from the service we just created) > EC2 instances > Click on the container instance to reveal the public DNS.

Visit the public DNS to view our app! Mine is [ec2–18–219–113–111.us-east-2.compute.amazonaws.com](http://ec2-18-219-113-111.us-east-2.compute.amazonaws.com/)

12. Conclusion

Congrats on finishing this post! Grab the code for the Docker part from GitHub.

Crafting multi-stage builds with Docker in Node.js

Learn how you can use a multi-stage Docker build for your Node.js application. Docker multi-stage builds enable us to create more complex build pipelines without having to resort to magic tricks.

Everyone knows about Docker. It’s the ubiquitous tool for packaging and distribution of applications that seemed to come from nowhere and take over our industry! If you are reading this, it means you already understand the basics of Docker and are now looking to create a more complex build pipeline.

In the past, optimizing our Docker images has been a challenging experience. All sorts of magic tricks were employed to reduce the size of our applications before they went to production. Things are different now because support for multi-stage builds has been added to Docker.

In this post, we explore how you can use a multi-stage build for your Node.js application. For an example, we’ll use a TypeScript build process, but the same kind of thing will work for any build pipeline. So even if you’d prefer to use Babel, or maybe you need to build a React client, then a Docker multi-stage build can work for you as well.

A basic, single-stage Dockerfile for Node.js

Let’s start by looking at a basic Dockerfile for Node.js. We can visualize the normal Docker build process as shown in Figure 1 below.

Figure 1: Normal Docker build process.

We use the docker build command to turn our Dockerfile into a Docker image. We then use the docker run command to instantiate our image to a Docker container.

The Dockerfile in Listing 1 below is just a standard, run-of-the-mill Dockerfile for Node.js. You have probably seen this kind of thing before. All we are doing here is copying the package.json, installing production dependencies, copying the source code, and finally starting the application.

This Dockerfile is for regular JavaScript applications, so we don’t need a build process yet. I’m only showing you this simple Dockerfile so you can compare it to the multi-stage Dockerfile I’ll be showing you soon.

Listing 1: A run-of-the-mill Dockerfile for Node.js

FROM node:10.15.2

WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY ./src ./src
EXPOSE 3000
CMD npm start

Listing 1 is a quite ordinary-looking Dockerfile. In fact, all Dockerfiles looked pretty much like this before multi-stage builds were introduced. Now that Docker supports multi-stage builds, we can visualize our simple Dockerfile as the single-stage build process illustrated in Figure 2.


Figure 2: A single-stage build pipeline.

The need for multiple stages

We can already run whatever commands we want in the Dockerfile when building our image, so why do we even need a multi-stage build?

To find out why, let’s upgrade our simple Dockerfile to include a TypeScript build process. Listing 2 shows the upgraded Dockerfile; the updated lines are the tsconfig.json copy, the full npm install (dev dependencies included, since we now need the TypeScript compiler), and the npm run build step.

Listing 2: We have upgraded our simple Dockerfile to include a TypeScript build process

FROM node:10.15.2

WORKDIR /usr/src/app
COPY package*.json ./
COPY tsconfig.json ./
RUN npm install
COPY ./src ./src
RUN npm run build
EXPOSE 80
CMD npm start
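
For this to work, package.json needs build and start scripts. A minimal sketch, assuming tsc compiles into a build directory; the entry file name is my guess, not from the listing:

"scripts": {
  "build": "tsc",
  "start": "node ./build/index.js"
}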

This change introduces a problem, though you can’t see it from the Dockerfile alone. To see it for yourself, instantiate a container from this image, then shell into it and inspect its file system.

I did this and used the Linux tree command to list all the directories and files in the container. You can see the result in Figure 3.

Notice that we have unwittingly included in our production image all the debris of development and the build process. This includes our original TypeScript source code (which we don’t use in production), the TypeScript compiler itself (which, again, we don’t use in production), plus any other dev dependencies we might have installed into our Node.js project.


Figure 3: The debris from development and the build process is bloating our production Docker image.

Bear in mind this is only a trivial project, so we aren’t actually seeing too much cruft left in our production image. But you can imagine how bad this would be for a real application with many source files, many dev dependencies, and a more complex build process that generates temporary files!

We don’t want this extra bloat in production. The extra size makes our containers bigger. When our containers are bigger than needed, it means we aren’t making efficient use of our resources. The increased surface area of the container can also be a problem for security, where we generally prefer to minimize the attackable surface area of our application.

Wouldn’t it be nice if we could throw away the files we don’t want and just keep the ones we do want? This is exactly what a Docker multi-stage build can do for us.

Crafting a Dockerfile with a multi-stage build

We are going to split our Dockerfile into two stages. Figure 4 shows what our build pipeline looks like after the split.


Figure 4: A multi-stage Docker build pipeline to build TypeScript.

Our new multi-stage build pipeline has two stages: Build stage 1 is what builds our TypeScript code; Build stage 2 is what creates our production Docker image. The final Docker image produced at the end of this pipeline contains only what it needs and omits the cruft we don’t want.

To create our two-stage build pipeline, we are basically just going to create two Docker files in one. Listing 3 shows our Dockerfile with multiple stages added. The first FROM command initiates the first stage, and the second FROM command initiates the second stage.

Compare this to a regular single-stage Dockerfile, and you can see that it actually looks like two Dockerfiles squished together in one.

Listing 3: A multi-stage Dockerfile for building TypeScript code

# 
# Build stage 1.
# This stage builds our TypeScript and produces an intermediate Docker image containing the compiled JavaScript code.
#
FROM node:10.15.2

WORKDIR /usr/src/app
COPY package*.json ./
COPY tsconfig.json ./
RUN npm install
COPY ./src ./src
RUN npm run build

#
# Build stage 2.
# This stage pulls the compiled JavaScript code from the stage 1 intermediate image.
# This stage builds the final Docker image that we'll use in production.
#
FROM node:10.15.2

WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY --from=0 /usr/src/app/build ./build
EXPOSE 80
CMD npm start

To create this multi-stage Dockerfile, I simply took Listing 2 and divided it up into separate Dockerfiles. The first stage contains only what is needed to build the TypeScript code. The second stage contains only what is needed to produce the final production Docker image. I then merged the two Dockerfiles into a single file.

The most important thing to note is the use of --from in the second stage (the COPY --from=0 line in Listing 3). This is the syntax we use to pull the built files from our first stage, which we refer to here as stage 0. We are pulling the compiled JavaScript files from the first stage into the second stage.

We can easily check to make sure we got the desired result. After creating the new image and instantiating a container, I shelled in to check the contents of the file system. You can see in Figure 5 that we have successfully removed the debris from our production image.


Figure 5: We have removed the debris of development from our Docker image.

We now have fewer files in our image, it’s smaller, and it has less surface area. Yay! Mission accomplished.

But what, specifically, does this mean?

The effect of the multi-stage build

What exactly is the effect of the new build pipeline on our production image?

I measured the results before and after. Our single-stage image produced by Listing 2 weighs in at 955MB. After converting to the multi-stage build in Listing 3, the image now comes to 902MB. That’s a reasonable reduction — we removed 53MB from our image!

While 53MB seems like a lot, we have actually only shaved off just more than 5 percent of the size. I know what you’re going to say now: But Ash, our image is still monstrously huge! There’s still way too much bloat in that image.

Well, to make our image even smaller, we now need to use the alpine, or slimmed-down, Node.js base image. We can do this by changing our second build stage from node:10.15.2 to node:10.15.2-alpine.
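
That change is a single line in the second stage:

# Build stage 2: use the slimmed-down Alpine variant for the production image.
FROM node:10.15.2-alpine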

This reduces our production image down to 73MB — that’s a huge win! Now the savings we get from discarding our debris is more like a whopping 60 percent. Alright, we are really getting somewhere now!

This highlights another benefit of multi-stage builds: we can use separate Docker base images for each of our build stages. This means you can customize each build stage by using a different base image.

Say you have one stage that relies on some tools that are in a different image, or you have created a special Docker image that is custom for your build process. This gives us a lot of flexibility when constructing our build pipelines.

How does it work?

You probably already guessed this: each stage or build process produces its own separate Docker image. You can see how this works in Figure 6.

The Docker image produced by a stage can be used by the following stages. Once the final image is produced, all the intermediate images are discarded; we take what we want for the final image, and the rest gets thrown away.


Figure 6: Each stage of a multi-stage Docker build produces an image.

Adding more stages

There’s no need to stop at two stages, although that’s often all that’s needed; we can add as many stages as we need. A specific example is illustrated in Figure 7.

Here we are building TypeScript code in stage 1 and our React client in stage 2. In addition, there’s a third stage that produces the final image from the results of the first two stages.


Figure 7: Using a Docker multi-stage build, we can create more complex build pipelines.

Pro tips

Now it’s time to leave you with a few advanced tips to explore on your own:

  1. You can name your build stages! You don’t have to leave them as the default 0, 1, etc. Naming your build stages will make your Dockerfile more readable (see the sketch after this list).
  2. Understand the options you have for base images. Using the right base image can relieve a lot of confusion when constructing your build pipeline.
  3. Build a custom base image if the complexity of your build process is getting out of hand.
  4. You can pull from external images! Just like you pull files from earlier stages, you can also pull files from images that are published to a Docker repository. This gives you an option to pre-bake an early build stage if it’s expensive and doesn’t change very often.
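
To illustrate tip 1, here is Listing 3 reworked with a named first stage; the name build is my choice, not something from the original listing:

FROM node:10.15.2 AS build
WORKDIR /usr/src/app
COPY package*.json ./
COPY tsconfig.json ./
RUN npm install
COPY ./src ./src
RUN npm run build

FROM node:10.15.2-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY --from=build /usr/src/app/build ./build
EXPOSE 80
CMD npm start

As a bonus, a named stage can be built on its own with docker build --target build ., which is handy for testing a stage in isolation.
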
Conclusion and resources

Docker multi-stage builds enable us to create more complex build pipelines without having to resort to magic tricks. They help us slim down our production Docker images and remove the bloat. They also allow us to structure and modularize our build process, which makes it easier to test parts of our build process in isolation.

So please have some fun with Docker multi-stage builds, and don’t forget to have a look at the example code on GitHub.

Here’s the Docker documentation on multi-stage builds, too.

How to Use Express.js, Node.js and MongoDB.js

In this post, I will show you how to use Express.js, Node.js and MongoDB.js. We will be creating a very simple Node application, that will allow users to input data that they want to store in a MongoDB database. It will also show all items that have been entered into the database.

Creating a Node Application

To get started I would recommend creating a new directory that will contain our application. For this demo I am creating a directory called node-demo. After creating the directory you will need to change into it.

mkdir node-demo
cd node-demo

Once we are in the directory we will need to create an application and we can do this by running the command
npm init

This will ask you a series of questions. Here are the answers I gave to the prompts.

The first step is to create a file that will contain our code for our Node.js server.

touch app.js

In our app.js we are going to add the following code to build a very simple Node.js Application.

var express = require("express");
var app = express();
var port = 3000;
 
app.get("/", (req, res) => {
  res.send("Hello World");
});
 
app.listen(port, () => {
  console.log("Server listening on port " + port);
});

What the code does is require the express module. It then creates our app by calling express(). We define our port to be 3000.

The app.get line listens for requests from the browser at the root route and returns the text “Hello World” back to the browser.

The last line actually starts the server and tells it to listen on port 3000.

Installing Express

Our app.js required the Express.js module. We need to install express in order for this to work properly. Go to your terminal and enter this command.

npm install express --save

This command will install the express module and record it as a dependency in our package.json, as shown below.
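
It will look something like this; your version number may differ:

"dependencies": {
  "express": "^4.17.1"
}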

To test our application you can go to the terminal and enter the command

node app.js

Open up a browser and navigate to the URL http://localhost:3000

You will see “Hello World” displayed in your browser.

Creating Website to Save Data to MongoDB Database

Instead of showing the text “Hello World” when people view your application, what we want to do is show a place for users to save data to the database.

We are going to allow users to enter a first name and a last name that we will be saving in the database.

To do this we will need to create a basic HTML file. In your terminal enter the following command to create an index.html file.

touch index.html

In our index.html file we will be creating an input field where users can input data that they want to have stored in the database. We will also need a button for users to click on that will add the data to the database.

Here is what our index.html file looks like.

<!DOCTYPE html>
<html>
  <head>
    <title>Intro to Node and MongoDB</title>
  </head>

  <body>
    <h1>Intro to Node and MongoDB</h1>
    <form method="post" action="/addname">
      <label>Enter Your Name</label><br>
      <input type="text" name="firstName" placeholder="Enter first name..." required>
      <input type="text" name="lastName" placeholder="Enter last name..." required>
      <input type="submit" value="Add Name">
    </form>
  </body>
</html>

If you are familiar with HTML, you will not find anything unusual in our code for our index.html file. We are creating a form where users can input their first name and last name and then click an “Add Name” button.

The form will do a post call to the /addname endpoint. We will be talking about endpoints and post later in this tutorial.

Displaying our Website to Users

We were previously displaying the text “Hello World” to users when they visited our website. Now we want to display the HTML file that we created. To do this we will need to change the app.get line in our app.js file.

We will be using the sendFile command to show the index.html file. We will need to tell the server exactly where to find the index.html file. We can do that by using a Node global called __dirname, which provides the directory where the current script lives. We will then append the path to our index.html file.

The app.get line will need to be changed to:

app.get("/", (req, res) => {
  res.sendFile(__dirname + "/index.html");
});

Once you have saved your app.js file, we can test it by going to terminal and running node app.js

Open your browser and navigate to “http://localhost:3000”. You will see the form from our index.html file.

Installing Mongoose

Now we need to add our database to the application. We will be connecting to a MongoDB database. I am assuming that you already have MongoDB installed and running on your computer.

To connect to the MongoDB database we are going to use a module called Mongoose. We will need to install the mongoose module just like we did with express. Go to your terminal and enter the following command.

npm install mongoose --save

This will install the mongoose module and add it as a dependency in our package.json.

Connecting to the Database

Now that we have the mongoose module installed, we need to connect to the database in our app.js file. MongoDB, by default, runs on port 27017. You connect to the database by telling it the location of the database and the name of the database.

In our app.js file after the line for the port and before the app.use line, enter the following two lines to get access to mongoose and to connect to the database. For the database, I am going to use “node-demo”.

var mongoose = require("mongoose");
mongoose.Promise = global.Promise;
mongoose.connect("mongodb://localhost:27017/node-demo");

Creating a Database Schema

Once the user enters data in the input field and clicks the add button, we want the contents of the input field to be stored in the database. In order to know the format of the data in the database, we need to have a Schema.

For this tutorial, we will need a very simple Schema that has only two fields. I am going to call the fields firstName and lastName. The data stored in both fields will be a String.

After connecting to the database in our app.js we need to define our Schema. Here are the lines you need to add to the app.js.
var nameSchema = new mongoose.Schema({
  firstName: String,
  lastName: String
});

Once we have built our Schema, we need to create a model from it. I am going to call my model “User”. Here is the line you will add next to create our model.

var User = mongoose.model("User", nameSchema);

Creating RESTful API

Now that we have a connection to our database, we need to create the mechanism by which data will be added to the database. This is done through our REST API. We will need to create an endpoint that will be used to send data to our server. Once the server receives this data then it will store the data in the database.

An endpoint is a route that our server listens to in order to get data from the browser. We have already created one route in the application: the route listening at the endpoint “/”, which is the homepage of our application.

HTTP Verbs in a REST API

The communication between the client (the browser) and the server is done through HTTP verbs. The most common HTTP verbs are GET, PUT, POST, and DELETE.

The following table explains what each HTTP verb does.

HTTP Verb   Operation
GET         Read
POST        Create
PUT         Update
DELETE      Delete

As you can see from these verbs, they form the basis of CRUD operations that I talked about previously.
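
To make this concrete, here is an illustrative sketch of one Express route per verb; the /users paths and handler bodies are placeholders of my own, not part of this tutorial’s app:

// Illustrative only: one Express route per HTTP verb.
app.get("/users", (req, res) => { /* read users */ });
app.post("/users", (req, res) => { /* create a user */ });
app.put("/users/:id", (req, res) => { /* update user :id */ });
app.delete("/users/:id", (req, res) => { /* delete user :id */ });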

Building a CRUD endpoint

If you remember, the form in our index.html file used a post method to call this endpoint. We will now create this endpoint.

In our previous endpoint we used a “GET” HTTP verb to display the index.html file. We are going to do something very similar, but instead of using “GET”, we are going to use “POST”. To get started, this is what the skeleton of our endpoint will look like.

app.post("/addname", (req, res) => {
 
});

Express Middleware

To fill out the contents of our endpoint, we want to store the firstName and lastName entered by the user into the database. The values for firstName and lastName are in the body of the request that we send to the server. We want to capture that data, convert it to JSON and store it into the database.

Express.js version 4 removed its bundled middleware. To parse the data in the body, we will need to add middleware to our application to provide this functionality. We will be using the body-parser module. We need to install it, so in your terminal window enter the following command.

npm install body-parser --save

Once it is installed, we will need to require this module and configure it. The configuration will allow us to pass the data for firstName and lastName in the body to the server. It can also convert that data into JSON format. This will be handy because we can take this formatted data and save it directly into our database.

To add the body-parser middleware to our application and configure it, we can add the following lines directly after the line that sets our port.

var bodyParser = require('body-parser');
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
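
Note: if you are on Express 4.16 or newer, equivalent parsers are built into Express itself, so body-parser is optional and you could instead write:

app.use(express.json());
app.use(express.urlencoded({ extended: true }));
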
Saving data to database

Mongoose provides a save function that will take a JSON object and store it in the database. Our body-parser middleware will convert the user’s input into the JSON format for us.

To save the data into the database, we need to create a new instance of the model that we created earlier. We will pass the user’s input into this instance. Once we have it, we just need to call its save method.

Mongoose’s save returns a promise. The save will either finish successfully or it will fail, and a promise provides two methods that handle both of these scenarios.

If this save to the database was successful it will return to the .then segment of the promise. In this case we want to send text back the user to let them know the data was saved to the database.

If it fails it will return to the .catch segment of the promise. In this case, we want to send text back to the user telling them the data was not saved to the database. It is best practice to also change the statusCode that is returned from the default 200 to a 400. A 400 statusCode signifies that the operation failed.

Now putting all of this together here is what our final endpoint will look like.

app.post("/addname", (req, res) => {
  var myData = new User(req.body);
  myData.save()
    .then(item => {
      res.send("item saved to database");
    })
    .catch(err => {
      res.status(400).send("unable to save to database");
    });
});

Testing our code

Save your code. Go to your terminal and enter the command node app.js to start our server. Open up your browser and navigate to the URL “http://localhost:3000”. You will see our index.html file displayed to you.

Make sure you have mongo running.

Enter your first name and last name in the input fields and then click the “Add Name” button. You should get back text that says the name has been saved to the database like below.

Access to Code

The final version of the code is available in my GitHub repo. To access the code click here. Thank you for reading!

Intro to Docker on AWS

Serverless containers with AWS Fargate. Running a serverless Node.js app on AWS ECS. Learn how to use Docker with key AWS services to deploy and manage container-based applications. Running Docker on AWS provides developers and admins a highly reliable, low-cost way to build, ship, and run distributed applications at any scale.

Project Summary

We're going to containerize a node.js project that renders a simple static site and deploy it to an Amazon ECS Fargate cluster. I will supply all the code at https://github.com/austinloveless/Docker-on-AWS.

Installing Prerequisites

Downloading Docker Desktop

If you are on a Mac go to https://docs.docker.com/docker-for-mac/install/ or Windows go to https://docs.docker.com/docker-for-windows/install/. Follow the installation instructions and account setup.

Installing node.js

Download node.js here.

Installing the AWS CLI

Follow the instructions here.

Project setup

Now that we have our prerequisites installed, we can build our application. This project isn't going to focus on the application code; the point is to get more familiar with Docker and AWS. So you can download the repo and change directories into the Docker-on-AWS directory.

If you want to run the app locally and say screw Docker, you can run npm install inside the Docker-on-AWS directory, then run node app.js. To see the site running locally visit http://localhost:80.

Now that we have docker installed and the repo downloaded we can look at the Dockerfile. You can think of it as a list of instructions for docker to execute when building a container or the blueprints for the application.

FROM node:12.4-alpine

RUN mkdir /app
WORKDIR /app

COPY package.json package.json
RUN npm install && mv node_modules /node_modules

COPY . .

LABEL maintainer="Austin Loveless"

CMD node app.js

At the top we are declaring our runtime which is node:12.4-alpine. This is basically our starting point for the application. We're grabbing this base image "FROM" the official docker hub node image.

If you go to the link you can see 12.4-alpine. The "-alpine" is a much smaller base image and is recommended by docker hub "when final image size being as small as possible is desired". Our application is very small so we're going to use an alpine image.

Next in the Dockerfile we're creating an /app directory and setting our working directory within the docker container to run in /app.

After that we're going to "COPY" the package.json file into the docker container. We then install our dependencies with npm install (the Dockerfile also moves node_modules to /node_modules). Finally we "COPY" the entire directory, and the CMD instruction runs node app.js to start the node app when the container launches.

Using Docker

Now that we've gone over the boring details of a Dockerfile, let's actually build the thing.

So when you installed Docker Desktop it came with a few tools: the Docker command line, Docker Compose, and the Docker Notary command line.

We're going to use the Docker CLI to:

  • Build a docker image

  • Run the container locally

Building an image

The command for building an image is docker build [OPTIONS] PATH | URL | -. You can go to the docs to see all the options.

In the root directory of the application you can run docker build -t docker-on-aws . (note the trailing dot, which sets the build context to the current directory). This will tag our image as "docker-on-aws".

To verify you successfully created the image you can run docker images. Mine looks like docker-on-aws latest aa68c5e51a8e About a minute ago 82.8MB.

Running a container locally

Now we are going to run our newly created image and see Docker in action. Run docker run -p 80:80 docker-on-aws. The -p flag maps a host port to a container port, in host:container order, so this runs the application on port 80.

You can now visit http://localhost:80.

To see if your container is running via the CLI, you can open up another terminal window and run docker container ls. To stop the container, run docker container stop <CONTAINER ID>. Verify it stopped with docker container ls again, or docker ps.

Docker on Amazon ECS

We're going to push the image we just created to Amazon ECR, Elastic Container Registry, create an ECS cluster and download the image from ECR onto the ECS cluster.

Before we can do any of that we need to create an IAM user and setup our AWS CLI.

Configuring the AWS CLI

We're going to build everything with the AWS CLI.

Go to the AWS Console and search for IAM. Then go to "Users" and click the blue button "Add User".

Create a user name like "ECS-User" and select "Programmatic Access".

Click "Next: Permissions" and select "Attach exisiting policies directly" at the top right. Then you should see "AdministratorAccess", we're keeping this simple and giving admin access.

Click "Next: Tags" and then "Next: Review", we're not going to add any tags, and "Create user".

Now you should see a success page and an "Access key ID" and a "Secret access key".

Take note of both the Access Key ID and Secret Access key. We're going to need that to configure the AWS CLI.

Open up a new terminal window and type aws configure and input the keys when prompted. Set your region as us-east-1.

Creating an ECS Cluster

To create an ECS Cluster you can run the command aws ecs create-cluster --cluster-name docker-on-aws.

We can validate that our cluster is created by running aws ecs list-clusters.

If you wanted to delete the cluster, you can run aws ecs delete-cluster --cluster docker-on-aws.

Pushing an Image to Amazon ECR

Now that the CLI is configured we can tag our docker image and upload it to ECR.

First, we need to login to ECR.

Run the command aws ecr get-login --no-include-email. The output should be docker login -u AWS -p followed by a token that is valid for 12 hours. Copy and run that command as well. This will authenticate you with Amazon ECR. If successful you should see "Login Succeeded".

Create an ECR Repository by running aws ecr create-repository --repository-name docker-on-aws/nodejs. That's the cluster name followed by the image name. Take note of the repositoryUri in the output.

We have to tag our image so we can push it up to ECR.

Run the command docker tag docker-on-aws <ACCOUNT ID>.dkr.ecr.us-east-1.amazonaws.com/docker-on-aws/nodejs. Verify you tagged it correctly with docker images.

Now push the image to your ECR repo. Run docker push <ACCOUNT ID>.dkr.ecr.us-east-1.amazonaws.com/docker-on-aws/nodejs. Verify you pushed the image with aws ecr list-images --repository-name docker-on-aws/nodejs.

Uploading a node.js app to ECS

The last few steps involve pushing our node.js app to the ECS cluster. To do that we need to create and run a task definition and a service. Before we can do that we need to create an IAM role to allow us access to ECS.

Creating an ecsTaskExecutionRole with the AWS CLI

I have created a file called task-execution-assume-role.json that we will use to create the ecsTaskExecutionRole from the CLI.

    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": "ecs-tasks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

You can run aws iam create-role --role-name ecsTaskExecutionRole --assume-role-policy-document file://task-execution-assume-role.json to create the role. Take note of the "Arn" in the output.

Then run aws iam attach-role-policy --role-name ecsTaskExecutionRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy to attach the "AmazonECSTaskExecutionRolePolicy".

Take the "Arn" you copied earlier and paste it into the node-task-definition.json file for the executionRoleArn.

{
    "family": "nodejs-fargate-task",
    "networkMode": "awsvpc",
    "executionRoleArn": "arn:aws:iam::xxxxx:role/ecsTaskExecutionRole",
    "containerDefinitions": [
        {
            "name": "nodejs-app",
            "image": "xxxxx.dkr.ecr.us-east-1.amazonaws.com/docker-on-aws/nodejs:latest",
            "portMappings": [
                {
                    "containerPort": 80,
                    "hostPort": 80,
                    "protocol": "tcp"
                }
            ],
            "essential": true
        }
    ],
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "cpu": "256",
    "memory": "512"
}

Registering an ECS Task Definition

Once your IAM role is created and you have updated the node-task-definition.json file with your repositoryUri and executionRoleArn, you can register your task.

Run aws ecs register-task-definition --cli-input-json file://node-task-definition.json.

Creating an ECS Service

The final step to this process is creating a service that will run our task on the ECS Cluster.

We need to create a security group with port 80 open and we need a list of public subnets for our network configuration.

To create the security group run aws ec2 create-security-group --group-name ecs-security-group --description "Security Group us-east-1 for ECS". That will output a security group ID. Take note of this ID. You can see information about the security group by running aws ec2 describe-security-groups --group-id <YOUR SG ID>.

It will show that we don't have any IpPermissions so we need to add one to allow port 80 for our node application. Run aws ec2 authorize-security-group-ingress --group-id <YOUR SG ID> --protocol tcp --port 80 --cidr 0.0.0.0/0 to add port 80.

Now we need to get a list of our public subnets and then we can create the ECS Service.

Run aws ec2 describe-subnets. In the output you should see a "SubnetArn" for each subnet; at the end of that line is the subnet ID ("subnet-XXXXXX"). Take note of those subnet IDs. Note: if you are in us-east-1, you should have 6 subnets.

Finally we can create our service.

Replace the subnets and security group Id with yours and run aws ecs create-service --cluster docker-on-aws --service-name nodejs-service --task-definition nodejs-fargate-task:1 --desired-count 1 --network-configuration "awsvpcConfiguration={subnets=[ subnet-XXXXXXXXXX, subnet-XXXXXXXXXX, subnet-XXXXXXXXXX, subnet-XXXXXXXXXX, subnet-XXXXXXXXXX, subnet-XXXXXXXXXX],securityGroups=[sg-XXXXXXXXXX],assignPublicIp=ENABLED}" --launch-type "FARGATE".

Running this will create the service nodejs-service and run the task nodejs-fargate-task:1. The :1 is the revision count. When you update the task definition the revision count will go up.
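
If you later push a new image and register an updated task definition (revision 2), you can roll the service onto it. A sketch, assuming the same names as above:

aws ecs update-service --cluster docker-on-aws --service nodejs-service --task-definition nodejs-fargate-task:2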

Viewing your nodejs application

Now that you have everything configured and running it's time to view the application in the browser.

To view the application we need to get the public IP address. Go to the ECS dashboard, in the AWS Console, and click on your cluster.

Then click the "tasks" tab and click your task ID.

From there you should see a network section and the "Public IP".

Paste the IP address in the browser and you can see the node application.

Bam! We have a simple node application running in an Amazon ECS cluster powered by Fargate.

If you don't want to use AWS and just want to learn how to use Docker check out my last blog

Also, I attached some links here for more examples of task definitions you could use for other applications.

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/example_task_definitions.html

https://github.com/aws-samples/aws-containers-task-definitions/blob/master/