Deploy Docker Containers With AWS CodePipeline

Docker containers may be deployed on one of several cloud platforms, Amazon Elastic Container Service (ECS) being one of them. AWS CodePipeline is a DevOps service for Continuous Integration, Continuous Delivery and Continuous Deployment of applications hosted on various AWS platforms.

Amazon Elastic Container Service (ECS) is an AWS managed service for running containerized applications in Docker containers.

Amazon Fargate is a serverless launch type for Amazon Elastic Container Service (ECS).

For the example application deployed to ECS, the AWS CodePipeline consists of a source code repository such as a GitHub repo, AWS CodeBuild for the Build stage, and an AWS ECS (Fargate) service for the Staging stage.

The benefit of using an AWS CodePipeline for an AWS ECS service is that the ECS service continues to run while a new Docker image is built and deployed.


A Docker container deployment may need to be updated or modified due to changes in the Docker image source code or code build. Any modification in the source code for a Docker image would require that the Docker image be rebuilt and the Docker service be redeployed. Without a mechanism to integrate, build, deliver and deploy source code while an AWS ECS deployment is running, an update would involve stopping the ECS service tasks and, as a result, incurring downtime for the ECS service. With high availability of an ECS service being a priority, stopping tasks to redeploy an application is not a suitable option.


AWS CodePipeline is a DevOps service for Continuous Integration, Continuous Delivery and Continuous Deployment of applications hosted on various AWS platforms, including Amazon ECS and Fargate as deployment platforms. An ECS service may be updated or modified without stopping the ECS service tasks. AWS CodePipeline provides high availability of an ECS service in a dynamic environment in which source code changes for a Docker image are common. A CodePipeline consists of three phases: source code integration, source code build, and deployment, as shown in Figure 1.

Figure 1. CodePipeline Phases

For source code we shall use a GitHub repository. For the source code build we shall use an AWS CodeBuild project. For deployment we shall use an ECS service of launch type Fargate.

Creating and deploying a CodePipeline application to ECS Fargate involves the following procedure:

  1. Create an ECS Fargate Task Definition and Service
  2. Configure connectivity on Task Subnets
  3. Create or Configure an S3 Bucket for Output Artifacts from the CodePipeline Build Stage
  4. Create a CodePipeline to deploy a Docker platform application (image) on ECS Fargate
  5. Modify Input/Output Settings for Stage Artifacts
  6. Run the CodePipeline
  7. Make source code modifications to re-run the CodePipeline

Setting the Environment

The only prerequisite is an AWS account. The application deployed by a CodePipeline on ECS Fargate is a Docker application. Any Docker image that has a source code repository could be used; we have used the Docker image dvohra/node-server, whose source code is hosted in a GitHub repository.

Creating a GitHub Code Repository

If a new GitHub source code repository were to be used, it must include a Dockerfile from which to build the Docker image. The Dockerfile for the dvohra/node-server image is based on the Docker image node:4.6. The Dockerfile instructions copy a server.js file, which is used to create a Node server, to the current directory, expose port 8080 for the Node server to listen on, and run a node command on the server.js script. The server.js file creates a Node server and handles HTTP requests and responses.
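The Dockerfile described above can be sketched roughly as follows; the exact file is in the repo, and the COPY destination and CMD form here are reconstructed from the description, not copied from it:

```dockerfile
# Base image: official Node.js 4.6 image, as described above
FROM node:4.6

# Copy the Node server script to the current directory
COPY server.js .

# Port the Node server listens on
EXPOSE 8080

# Run the node command on the server.js script
CMD ["node", "server.js"]
```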

Adding a Build Spec for CodeBuild Project

A build spec is a YAML syntax file with build commands and settings used by a CodeBuild project to run a build. The build spec file must be called “buildspec.yml” and must be copied to the root of the source code repository. A buildspec.yml file consists of key/value pairs to describe the various phases of a build. The build phases are represented with the phases sequence, which is a required mapping in a buildspec.yml. The version is the other required mapping in a buildspec.yml. The buildspec.yml file is listed on the GitHub repo.
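A buildspec.yml for this kind of build follows the general shape sketched below. This is a hedged sketch only: the two required mappings (version and phases) are shown, while the Docker Hub login handling and exact commands are assumptions, not the repo's actual listing:

```yaml
version: 0.2

phases:
  pre_build:
    commands:
      # Assumption: registry credentials are supplied as CodeBuild environment variables
      - docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
  build:
    commands:
      # Build the image from the Dockerfile at the repo root
      - docker build -t dvohra/node-server .
  post_build:
    commands:
      # Upload the built image to Docker Hub
      - docker push dvohra/node-server
```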

Adding an Image Definitions File

For deploying container-based applications such as those deployed to ECS, AWS CodePipeline requires an image definitions file in JSON format. The image definitions file is called imagedefinitions.json by default but could be given another name. The image definitions file describes the container application and consists of two attributes: name and imageUri. The name attribute specifies the Docker container name, and the container must be running prior to running the CodePipeline. The imageUri attribute specifies the Docker image to be run in the Docker container. The Docker image would typically be the same as the Docker image already running in an ECS container; the image could be different, and the variation would typically be in the image tag. The imagedefinitions.json used for the Node server application deployed to ECS Fargate is listed in the GitHub repo.
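For this application the image definitions file would look like the sketch below; the container name matches the node-server container in the task definition, and the :latest tag is an assumption:

```json
[
  {
    "name": "node-server",
    "imageUri": "dvohra/node-server:latest"
  }
]
```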

Creating a Task Definition

A task definition in an ECS application describes the container(s) in the ECS deployment. In this section, we shall create a task definition for a Node server container to be deployed on ECS Fargate. Open the ECS Console and log in if not already logged in. Click on Get started to access the ECS Console. Click on Task Definitions in the navigation margin. In Task Definitions, click on Create new Task Definition. In Create new Task Definition, select the launch type compatibility as Fargate and click on Next step. Next, configure the task and container definitions. In the Add container dialog, specify a Container name (node-server) and specify the Image as dvohra/node-server. The task definition is shown in Figure 2.

Figure 2. Task Definition

Configuring Connectivity in Task Subnets

Before creating a service, we need to configure connectivity to the Internet in the subnets to be used when configuring the service. The route table lists the routes. We need to add a default route to an Internet gateway: add a route with Destination 0.0.0.0/0 and select an Internet gateway as the Target.
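If you prefer the AWS CLI to the console, the same default route can be added with a command along these lines; the route table and Internet gateway IDs are placeholders:

```shell
# Add a default route (0.0.0.0/0) pointing at an Internet gateway
aws ec2 create-route \
  --route-table-id rtb-xxxxxxxx \
  --destination-cidr-block 0.0.0.0/0 \
  --gateway-id igw-xxxxxxxx
```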

Creating and Testing the Container Service

Next, create an ECS container service in the default cluster as shown in Figure 3.

Figure 3. Cluster with 1 Service

With the Fargate launch type, an Elastic Network Interface (ENI) is provisioned for each task. Copy the Public IP of the task, which is the same as the Public IP of the Elastic Network Interface, from either the Network section of the task Details page or the ENI Console. Open the URL <Public IP>:8080 in a browser to invoke the Node server application. The Node server returns a message as shown in Figure 4.
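The same check can be made from a terminal, substituting the task's actual Public IP for the placeholder:

```shell
# Invoke the Node server running in the Fargate task
curl http://<Public IP>:8080
```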

Figure 4. Node Server Response

Creating or Configuring an S3 Bucket

The CodePipeline that builds the source code for the Node server application and deploys a Docker image to an ECS service requires that the CodeBuild project generate "Output Artifacts". The output artifacts are stored in an S3 bucket, which is selected when creating a CodePipeline. Create a new S3 bucket in the S3 Console, or alternatively select an S3 bucket that may have been created by an earlier run of a CodePipeline.
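Creating the bucket from the AWS CLI is a one-liner; the bucket name below is a placeholder and must be globally unique:

```shell
# Create an S3 bucket for the CodeBuild output artifacts
aws s3 mb s3://codebuild-node-server-artifacts
```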

Creating a CodeBuild Project

Next, create a CodeBuild project that is to be used to build the source code into a Docker image. The source code for the Docker image dvohra/node-server and the buildspec.yml file used to build the source code into the Docker image were discussed earlier. After a Docker image is built, it is uploaded to Docker Hub by CodeBuild. To create a CodeBuild project, open the CodeBuild Console in a web browser. Select Build projects and click on Create project as shown in Figure 5.

Figure 5. Build projects>Create project

The Create project wizard gets started. In Configure project, specify a Project name (node-server). In Source>Source provider, select GitHub. For Repository, select Use a repository in my account and choose the dvohra/docker-node-server repo. Keep the default setting for Git clone depth as 1.

Select the options for Webhook, Insecure SSL and Build Badge. Selecting Webhook makes the code get rebuilt every time a code update is made in the GitHub repo. The Insecure SSL option makes the code build ignore SSL warnings when connecting to the project source. The Build Badge option makes the project's build status visible and embeddable. In Environment: How to build, select Environment image as Use an image managed by AWS CodeBuild. Select Operating system as Ubuntu. For Runtime, select Docker as shown in Figure 6.

Figure 6. Selecting Runtime as Docker

For Runtime version, select aws/codebuild/docker:17.09.0, which represents Docker version 17.09. If a later version is available, select the later version. The Privileged option gets selected automatically for the Docker runtime, as it is required to build a Docker image. For Build specification, select Use the buildspec.yml in the source code root directory as shown in Figure 7. The Buildspec name is buildspec.yml by default.

Figure 7. Configuring Environment: How to build

For Certificate, select Do not install any certificate. A certificate is not required by CodePipeline, but for a more secure CodeBuild a self-signed certificate could be installed from S3. Next, configure the output artifacts in the Artifacts section; output artifacts are required by CodePipeline. Select Type as Amazon S3. Specify an S3 bucket folder name to use. Set the Path as "/", which creates the folder at the bucket root. Select Namespace type as None. Select Bucket name as the bucket configured earlier, as shown in Figure 8. Select Cache as No cache.

Figure 8. Configuring S3 Bucket for Artifacts

In Service role select the option Create a service role in your account if the CodeBuild project is being created for the first time. If the CodeBuild project was created before, select Choose an existing service role from your account. Select the option Allow AWS CodeBuild to modify this service role so it can be used with this build project as shown in Figure 9. For VPC select No VPC. Click on Continue.

Figure 9. Configuring Service Role and VPC settings

In Review, review the Source and Build environment. Scroll down and click on Create to create the CodeBuild project. A CodeBuild project gets created and listed in Build projects as shown in Figure 10.

Figure 10. CodeBuild Project

The service role created by default by CodeBuild does not include some of the required permissions. The service role needs to be modified by adding an inline policy that adds permissions s3:GetObject and s3:PutObject. The inline policy to add is listed:
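A minimal sketch of such an inline policy, granting the two permissions named above; the broad Resource scope is an assumption and could be narrowed to the artifacts bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "*"
    }
  ]
}
```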

Testing the CodeBuild Project

Test the CodeBuild project before creating and configuring the CodeBuild in a CodePipeline so that if any errors exist they may be fixed. Click on Start build to start the build as shown in Figure 11.

Figure 11. Start build

The Start new build wizard gets started. Click on Start build. The CodeBuild project gets started and the code starts to get built. When the CodeBuild project gets completed, the Phase details indicate the same as shown in Figure 12.

Figure 12. Phase details and Build logs indicate that the CodeBuild has completed

The Docker image dvohra/node-server generated and uploaded to Docker Hub by the CodeBuild project is shown in Figure 13.

Figure 13. Docker Image generated and uploaded by CodeBuild on Docker Hub

Creating a CodePipeline

Having created projects for each of the CodePipeline stages (GitHub code repository for Source, CodeBuild for Build, and ECS Fargate service for Staging), next we shall create a CodePipeline. Open the CodePipeline Console and click on Get started as shown in Figure 14.

Figure 14. CodePipeline>Get started

The Create pipeline wizard gets started as shown in Figure 15. First, specify a Pipeline name (node-server-fargate) and click on Next step.

Figure 15. Specifying Pipeline Name

Next, configure the Source location. For Source provider select GitHub as shown in Figure 16.

Figure 16. Selecting Source provider

Next, connect to GitHub by clicking Connect to GitHub as shown in Figure 17.

Figure 17. Connect to GitHub

Select the GitHub Repository as shown in Figure 18.

Figure 18. Selecting GitHub Repo

Select the repo Branch as shown in Figure 19. Click on Next step.

Figure 19. Source location>Next step

Next, configure the Build. Select Build provider as AWS CodeBuild as shown in Figure 20.

Figure 20. Selecting Build provider as AWS CodeBuild

The subsequent section displayed is based on the Build provider selected. For AWS CodeBuild, a section to provide details about the CodeBuild gets displayed. For Configure your project select Select an existing project as shown in Figure 21. Select Project name as the CodeBuild project node-server created earlier.

Figure 21. Selecting CodeBuild Project

Click on Next step. Next, configure the Deploy stage of the CodePipeline as shown in Figure 22. Select Deployment provider as Amazon ECS.

Figure 22. Selecting Deployment provider as Amazon ECS

The subsequent section displayed is based on the Deployment provider selected, as indicated by the Amazon ECS section in Figure 23. In the Amazon ECS section, select Cluster name as the cluster in which the ECS service to deploy to was created: select default.

Figure 23. Selecting ECS Cluster

Select the Service name as the ECS service to deploy to, which is node-server-service, as shown in Figure 24.

Figure 24. Selecting ECS Service

Specify the Image filename as imagedefinitions.json as shown in Figure 25. If omitted, the Image filename defaults to imagedefinitions.json, and the file should be available in the source code GitHub repo. Click on Next step.

Figure 25. Specifying Image filename

Next, select the Service Role name as shown in Figure 26. A new service role is created in IAM automatically the first time a CodePipeline is created. Subsequently, the Service role gets listed to be selected.

Figure 26. Selecting Service Role

Click on Next step. Review the CodePipeline and click on Create pipeline as shown in Figure 27.

Figure 27. Create pipeline

A CodePipeline gets created as shown in Figure 28. After getting created the CodePipeline starts to run automatically as shown by In Progress status for Source stage.

Figure 28. CodePipeline Source stage in Progress

When the Source stage completes, its status becomes Succeeded as shown in Figure 29. And the Build stage starts to run as indicated by the In Progress status.

Figure 29. Source stage completed and Build stage In Progress

The Build stage also gets completed as indicated by the Succeeded status in Figure 30. The Staging stage starts to run.

Figure 30. Build stage succeeded and Staging stage In Progress

Modifying the Input/Output Artifacts

We had already configured all the CodePipeline stages; why do we need to modify the settings for Input/Output artifacts? Because for the example ECS application deployed by CodePipeline, the default Input/Output artifacts settings are not as required to run the CodePipeline. By default, the input artifacts to the Staging stage are the output artifacts from the Build stage. For a CodeBuild project that only builds a Docker image and uploads the Docker image to Docker Hub or Amazon ECR, no output artifacts are generated by the Build stage. The input artifacts to the Staging stage need instead to be set to the output artifacts from the Source stage. Without modifying the settings for the Input/Output artifacts, the Staging stage would fail, and the CodePipeline that was started automatically after being created does fail. To modify the Input/Output artifacts, click on Edit as shown in Figure 31.

Figure 31. CodePipeline>Edit

After modifying the Input/Output artifacts, click on Save pipeline changes as shown in Figure 32.

Figure 32. Save pipeline changes

Running the CodePipeline

To run the CodePipeline after modification, click on Release change as shown in Figure 33.

Figure 33. Release change

In the Release change confirmation dialog, click on Release. The modifications made to the CodePipeline get applied and the CodePipeline starts to run from the start, as indicated by the In Progress status for the Source stage in Figure 34. The statuses for the Build and Staging stages are from the previous run and do not indicate the current status of those stages.

Figure 34. Source stage In Progress

All stages of the CodePipeline get completed with status Succeeded as shown in Figure 35.

Figure 35. All stages of CodePipeline completed successfully

The previous service task gets stopped and a task based on the revised task definition starts to run as indicated by the RUNNING task status in Figure 36.

Figure 36. New task running

To invoke the new task, find its Public IP as before from the Elastic Network Interface for the task. Open the URL <Public IP>:8080 in a web browser to invoke the task. The Node server application response is shown in Figure 37.

Figure 37. Node Server application response

Modifying Source Code to Re-Run CodePipeline

What is the advantage of running a CodePipeline if the ECS service response is the same as when invoking the service directly without a CodePipeline? Typically, the source code for an ECS service deployed in production would need to be updated periodically, which implies that the Docker image would need to be rebuilt. The Docker image deployed in an ECS service task would also need to be updated, without any discontinuation of the ECS based service. With a CodePipeline, a new run of the CodePipeline is started automatically every time source code modifications are made. Without any user intervention, the ECS deployment gets updated when source code changes are made.

To demonstrate, make a slight modification to source code in the GitHub repo, such as make the Hello message in server.js different. Click on Commit changes in the repo. The CodePipeline starts to re-run automatically as shown by the Source stage having completed successfully and the Build stage in progress in Figure 38.

Figure 38. CodePipeline restarted automatically

After the Build stage completes successfully, the Staging starts to run as indicated by the status messages in Figure 39.

Figure 39. Build stage completed and Staging stage started

The Staging stage also completes successfully and the new task gets started. Using the Public IP of the new task, open the URL <Public IP>:8080 in a web browser. The new task gets invoked and the Node server response gets displayed, as shown in Figure 40. The Node server response is the modified response from server.js.

Figure 40. Modified Node Server Response from new task


In this article, we discussed deploying a Docker container application to ECS Fargate. We have demonstrated updating source code for a Docker image and deploying the new Docker image to a running container service without stopping the container service, all using an AWS CodePipeline.

AWS DevOps: Introduction to DevOps on AWS

This is the story of how DevOps met AWS, and how their union can benefit you.

Technology has evolved over time, and with it, the ways and needs to handle technology have also evolved. The last two decades have seen a great shift in computation and software development lifecycles. As a result, we have seen a huge demand for online DevOps training and AWS certification.

This blog focuses on the following points:

  1. What Is DevOps?
  2. What Is AWS?
  3. AWS DevOps
What Is DevOps?

In these fast-paced times, we see more emphasis being placed on faster delivery of software deployment. In order to stay competitive in the market, companies are expected to deploy quality software in defined timelines. Thus, the roles of software developers and system admins have become very important. A lot of juggling of responsibilities happens between the two teams. Let us take a look at how these individuals contribute to the deployment process.

A programmer or a software developer is responsible for developing the software. In simple words, they are supposed to develop software that has:

  • New features
  • Security Upgrades
  • Bug Fixes

But a developer may have to wait for weeks for the product to get deployed, which is also known as "time to market" in business terms. This delay may put pressure on the developer, who is forced to re-adjust dependent activities like:

  • Pending code
  • Old code
  • New products
  • New features

When the product is put into the production environment, the product may show some unforeseen errors. This is because the developer writes code in the development environment, which may be different from the production environment.

Let us go ahead and take a look at this process from the operations point of view. Now the operations team or the system administrating team is responsible for maintaining and ensuring the uptime of the production environment. As the company invests time and money in more products and services, the number of servers admins have to take care of also keeps growing.

This gives rise to more challenges because the tools that were used to manage the previous number of servers may not be sufficient to cater to the needs of upcoming and growing number of servers. The operations team also needs to make slight changes to the code so that it fits into the production environment. Hence, the need to schedule these deployments accordingly also grows, which leads to time delays.

When the code is deployed, the operations team is also responsible for handling code changes or minor errors in the code. At times, the operations team may feel pressured, and it may seem like developers have pushed their responsibilities to operations' side of the responsibility wall. As you may come to realize, neither side can be held up as the culprit.

What if these two teams could work together? What if they:

  • Could break down silos?
  • Share responsibilities?
  • Start thinking alike?
  • Work as a team?

Well, this is what DevOps does. It helps you get software developers and operations in sync to improve productivity. DevOps is the process of integrating development and operations teams in order to improve collaboration and productivity. This is done through automation of workflows and continuous measurement of application performance.

DevOps focuses on automating everything, which lets teams write small chunks of code that can be tested, monitored and deployed in hours, as opposed to writing large chunks of code that take weeks to deploy. Let us move ahead and understand more about AWS and how it forms a crucial pairing with DevOps to give you AWS DevOps.

What Is AWS?

If you go back a decade, the scenario of handling and storing data was different. Companies preferred storing data using their private servers. However, with more and better usage of the internet, the trend has seen a paradigm shift for companies, as they are moving their data to the cloud. This enables companies to focus more on core competencies and stop worrying about storing and computation.

For example, Netflix is a popular video streaming service that the whole world uses today. Back in 2008, Netflix suffered a major database corruption, and for three days their operations were halted. The problem was scaling up, which is when they realized the need for highly reliable, horizontally scalable, distributed systems in the cloud. They began using cloud services, and since then their growth has been off the charts.

Gartner says that by 2020, a corporate “no-cloud” policy will be as rare as a “no-internet” policy today. Interesting, isn’t it?

Almost every company has started to adopt cloud services, and AWS, in particular, is the leading cloud service provider in the market. Let us understand more about it.


Amazon's AWS has built a strong customer base, ranging from small-scale companies to big enterprises like D-Link.

AWS DevOps

AWS is one of the best cloud service providers and DevOps is the popular and efficient implementation of the software development lifecycle, making AWS DevOps a highly popular amalgamation.

AWS CloudFormation

DevOps teams are required to create and release cloud instances and services more frequently than traditional development teams. AWS CloudFormation enables you to do just that. Templates of AWS resources like EC2 instances, ECS containers, and S3 storage buckets let you set up the entire stack without you having to bring everything together yourself.
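As a tiny illustration, a CloudFormation template that stands up a single S3 bucket looks like this; the bucket name is a placeholder:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal stack with a single S3 bucket
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-devops-artifact-bucket
```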


AWS EC2

AWS EC2 speaks for itself. You can run containers inside EC2 instances, so you can leverage the AWS security and management features, yet another reason why AWS DevOps is a lethal combo.

AWS CloudWatch

This monitoring tool lets you track every resource that AWS has to offer. Plus it makes it very easy to use third-party tools for monitoring.

AWS CodePipeline

CodePipeline is one popular feature from AWS which simplifies the way you manage your CI/CD toolset. It lets you integrate with tools like GitHub, Jenkins, and CodeDeploy, enabling you to visually control the flow of app updates from build to production.

Instances In AWS

AWS frequently creates and adds new instance types to its lineup, and the level of customization available with these instances makes AWS and DevOps easy to use together.

All these reasons make AWS one of the best platforms for DevOps.

Originally published by Vishal Padghan at

WordPress in Docker. Part 1: Dockerization

This entry-level guide will tell you why and how to Dockerize your WordPress projects.

Intro to Docker on AWS

Serverless containers with AWS Fargate: running a serverless Node.js app on AWS ECS. Learn how to use Docker with key AWS services to deploy and manage container-based applications. Running Docker on AWS provides developers and admins a highly reliable, low-cost way to build, ship, and run distributed applications at any scale.

Project Summary

We're going to containerize a node.js project that renders a simple static site and deploy it to an Amazon ECS Fargate cluster. I will supply all the code at

Installing Prerequisites

Installing Docker Desktop

Whether you are on a Mac or on Windows, go to the Docker Desktop download page for your platform, then follow the installation instructions and account setup.

Installing node.js

Download node.js here.

Installing the AWS CLI

Follow the instructions here.

Project setup

Now that we have our prerequisites installed, we can build our application. This project isn't going to focus on the application code; the point is to get more familiar with Docker and AWS. So you can download the repo and change directories into the Docker-on-AWS directory.

If you want to run the app locally and say screw Docker, you can run npm install inside the Docker-on-AWS directory, then run node app.js. To see the site running locally, visit http://localhost:80.

Now that we have Docker installed and the repo downloaded, we can look at the Dockerfile. You can think of it as a list of instructions for Docker to execute when building a container, or as the blueprints for the application.

FROM node:12.4-alpine

RUN mkdir /app
WORKDIR /app

COPY package.json package.json
RUN npm install && mv node_modules /node_modules

COPY . .

LABEL maintainer="Austin Loveless"

CMD node app.js

At the top we are declaring our runtime which is node:12.4-alpine. This is basically our starting point for the application. We're grabbing this base image "FROM" the official docker hub node image.

If you go to the link you can see 12.4-alpine. The "-alpine" is a much smaller base image and is recommended by docker hub "when final image size being as small as possible is desired". Our application is very small so we're going to use an alpine image.

Next in the Dockerfile we're creating an /app directory and setting our working directory within the docker container to run in /app.

After that, we're going to "COPY" the package.json file to package.json on the Docker container. We then install our dependencies with npm install and move node_modules to /node_modules. Finally, we "COPY" the entire directory and run the command node app.js to start the Node app within the Docker container.

Using Docker

Now that we've gone over the boring details of a Dockerfile, let's actually build the thing.

So when you installed Docker Desktop, it came with a few tools: the Docker command line, Docker Compose, and the Docker Notary command line.

We're going to use the Docker CLI to:

  • Build a docker image

  • Run the container locally

Building an image

The command for building an image is docker build [OPTIONS] PATH | URL | -. You can go to the docs to see all the options.

In the root directory of the application, run docker build -t docker-on-aws . (note the trailing dot, which sets the build context to the current directory). This will tag our image as "docker-on-aws".

To verify you successfully created the image you can run docker images. Mine looks like docker-on-aws latest aa68c5e51a8e About a minute ago 82.8MB.

Running a container locally

Now we are going to run our newly created image and see Docker in action. Run docker run -p 80:80 docker-on-aws. The -p flag maps a port on your host to a port inside the container (host:container), defining what port you want your application running on.

You can now visit http://localhost:80.

To see if your container is running via the CLI, you can open up another terminal window and run docker container ls. To stop the container, run docker container stop <CONTAINER ID>. Verify it stopped with docker container ls again, or docker ps.

Docker on Amazon ECS

We're going to push the image we just created to Amazon ECR, Elastic Container Registry, create an ECS cluster and download the image from ECR onto the ECS cluster.

Before we can do any of that we need to create an IAM user and setup our AWS CLI.

Configuring the AWS CLI

We're going to build everything with the AWS CLI.

Go to the AWS Console and search for IAM. Then go to "Users" and click the blue button "Add User".

Create a user name like "ECS-User" and select "Programmatic Access".

Click "Next: Permissions" and select "Attach existing policies directly" at the top right. Then you should see "AdministratorAccess"; we're keeping this simple and giving admin access.

Click "Next: Tags" and then "Next: Review", we're not going to add any tags, and "Create user".

Now you should see a success page and an "Access key ID" and a "Secret access key".

Take note of both the Access Key ID and Secret Access key. We're going to need that to configure the AWS CLI.

Open up a new terminal window and type aws configure and input the keys when prompted. Set your region as us-east-1.

Creating an ECS Cluster

To create an ECS Cluster you can run the command aws ecs create-cluster --cluster-name docker-on-aws.

We can validate that our cluster is created by running aws ecs list-clusters.


If you wanted to delete the cluster, you can run aws ecs delete-cluster --cluster docker-on-aws.

Pushing an Image to Amazon ECR

Now that the CLI is configured we can tag our docker image and upload it to ECR.

First, we need to login to ECR.

Run the command aws ecr get-login --no-include-email. The output should be docker login -u AWS -p followed by a token that is valid for 12 hours. Copy and run that command as well. This will authenticate you with Amazon ECR. If successful you should see "Login Succeeded".
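Note that on newer versions of the AWS CLI (v2), get-login has been removed; an equivalent login flow, assuming the us-east-1 region configured earlier, is:

```shell
# AWS CLI v1: evaluate the emitted `docker login` command directly
$(aws ecr get-login --no-include-email --region us-east-1)

# AWS CLI v2 equivalent: pipe a short-lived password into docker login
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin <ACCOUNT ID>.dkr.ecr.us-east-1.amazonaws.com
```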

Create an ECR Repository by running aws ecr create-repository --repository-name docker-on-aws/nodejs. That's the cluster name followed by the image name. Take note of the repositoryUri in the output.

We have to tag our image so we can push it up to ECR.

Run the command docker tag docker-on-aws <ACCOUNT ID>.dkr.ecr.us-east-1.amazonaws.com/docker-on-aws/nodejs, where the tag is the repositoryUri you noted earlier. Verify you tagged it correctly with docker images.

Now push the image to your ECR repo. Run docker push <ACCOUNT ID>.dkr.ecr.us-east-1.amazonaws.com/docker-on-aws/nodejs. Verify you pushed the image with aws ecr list-images --repository-name docker-on-aws/nodejs.
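The full image name used in both the tag and push commands is just the repositoryUri from the create-repository output, and it follows a predictable pattern. A small sketch of how it's built (the account ID below is a placeholder):

```shell
# Placeholder values for illustration -- substitute your own account ID
ACCOUNT_ID=123456789012
REGION=us-east-1
REPO=docker-on-aws/nodejs

# repositoryUri pattern: <account>.dkr.ecr.<region>.amazonaws.com/<repo>
IMAGE_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}"
echo "$IMAGE_URI"
```

Keeping the URI in a variable also makes the later docker tag and docker push commands less error-prone.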

Uploading a Node.js app to ECS

The last few steps involve pushing our node.js app to the ECS cluster. To do that we need to create and run a task definition and a service. Before we can do that we need to create an IAM role to allow us access to ECS.

Creating an ecsTaskExecutionRole with the AWS CLI

I have created a file called task-execution-assume-role.json that we will use to create the ecsTaskExecutionRole from the CLI.

    "Version": "2012-10-17",
    "Statement": [
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": ""
            "Action": "sts:AssumeRole"

You can run aws iam create-role --role-name ecsTaskExecutionRole --assume-role-policy-document file://task-execution-assume-role.json to create the role. Take note of the "Arn" in the output.

Then run aws iam attach-role-policy --role-name ecsTaskExecutionRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy to attach the "AmazonECSTaskExecutionRolePolicy".

Take the "Arn" you copied earlier and paste it into the node-task-definition.json file for the executionRoleArn.

    "family": "nodejs-fargate-task",
    "networkMode": "awsvpc",
    "executionRoleArn": "arn:aws:iam::xxxxx:role/ecsTaskExecutionRole",
    "containerDefinitions": [
            "name": "nodejs-app",
            "image": "",
            "portMappings": [
                    "containerPort": 80,
                    "hostPort": 80,
                    "protocol": "tcp"
            "essential": true
    "requiresCompatibilities": [
    "cpu": "256",
    "memory": "512"

Registering an ECS Task Definition

Once your IAM role is created and you've updated the node-task-definition.json file with your repositoryUri and executionRoleArn, you can register your task.

Run aws ecs register-task-definition --cli-input-json file://node-task-definition.json. You can confirm the registration with aws ecs list-task-definitions.

Creating an ECS Service

The final step to this process is creating a service that will run our task on the ECS Cluster.

We need to create a security group with port 80 open and we need a list of public subnets for our network configuration.

To create the security group, run aws ec2 create-security-group --group-name ecs-security-group --description "Security Group us-east-1 for ECS". That will output a security group ID. Take note of this ID. You can see information about the security group by running aws ec2 describe-security-groups --group-ids <YOUR SG ID>.

It will show that we don't have any IpPermissions, so we need to add a rule to allow port 80 for our node application. Run aws ec2 authorize-security-group-ingress --group-id <YOUR SG ID> --protocol tcp --port 80 --cidr 0.0.0.0/0 to open port 80 (to all IPv4 addresses, which is fine for this demo).

Now we need to get a list of our public subnets and then we can create the ECS Service.

Run aws ec2 describe-subnets. In the output you should see a "SubnetArn" for each subnet; at the end of that line you'll see "subnet-XXXXXX". Take note of those subnet IDs. (You can also print just the IDs with aws ec2 describe-subnets --query "Subnets[*].SubnetId" --output text.) Note: if you are in us-east-1, you should have six default subnets, one per Availability Zone.

Finally we can create our service.

Replace the subnets and security group ID with yours and run aws ecs create-service --cluster docker-on-aws --service-name nodejs-service --task-definition nodejs-fargate-task:1 --desired-count 1 --network-configuration "awsvpcConfiguration={subnets=[subnet-XXXXXXXXXX,subnet-XXXXXXXXXX,subnet-XXXXXXXXXX,subnet-XXXXXXXXXX,subnet-XXXXXXXXXX,subnet-XXXXXXXXXX],securityGroups=[sg-XXXXXXXXXX],assignPublicIp=ENABLED}" --launch-type "FARGATE".

Running this will create the service nodejs-service and run the task nodejs-fargate-task:1. The :1 is the revision number; each time you update the task definition, the revision number goes up.
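The long --network-configuration value is just a shorthand-syntax string, and it's easier to get right if you build it up from variables. A sketch with placeholder IDs:

```shell
# Placeholder subnet and security group IDs -- substitute your own
SUBNETS="subnet-aaaa1111,subnet-bbbb2222,subnet-cccc3333"
SG_ID="sg-dddd4444"

# assignPublicIp=ENABLED is required here so the Fargate task gets a public IP
NETWORK_CONFIG="awsvpcConfiguration={subnets=[${SUBNETS}],securityGroups=[${SG_ID}],assignPublicIp=ENABLED}"
echo "$NETWORK_CONFIG"
```

You can then pass "$NETWORK_CONFIG" to aws ecs create-service instead of hand-typing the whole string.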

Viewing your Node.js application

Now that you have everything configured and running it's time to view the application in the browser.

To view the application, we need to get the public IP address. Go to the ECS dashboard in the AWS Console and click on your cluster.

Then click the "tasks" tab and click your task ID.

From there you should see a network section and the "Public IP".

Paste the IP address in the browser and you can see the node application.

Bam! We have a simple node application running in an Amazon ECS cluster powered by Fargate.

If you don't want to use AWS and just want to learn how to use Docker, check out my last blog post.

Also, I've attached some links here for more examples of task definitions you could use for other applications.