How can AWS AppSync and GraphQL be made to comply with DynamoDB best practices?

DynamoDB operates best with a single table per application (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-general-nosql-design.html), yet AppSync by default breaks that rule through the way it auto-generates code from the GraphQL schema (which AWS recommends users let the API do). So, to use AppSync with GraphQL while upholding DynamoDB's best practices (assuming DynamoDB is the sole data source for the GraphQL API), would the following approach work?

First, create a blank DynamoDB table (TheTable in this example) and give it a partition key (partitionKey) and a sort key (sortKey).

Second, manually map every GraphQL type to that single table (TheTable). This is where you diverge from AppSync's automatic code generation, which would otherwise create a table per type.

GraphQL schema:

type Pineapple {
    partitionKey: String!
    sortKey: String!
    name: String!
}

# create varying types as long as they all map to the same table
type MachineGun {
    partitionKey: String!
    sortKey: String!
    name: String!
}

input CreatePineappleInput {
    partitionKey: String!
    sortKey: String!
    name: String!
}

type Mutation {
    createPineapple(input: CreatePineappleInput!): Pineapple
}
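To be valid, the schema also needs a Query root type that reads from the same table; a minimal sketch (the getPineapple field is an assumption for illustration, not part of the original design):

type Query {
    getPineapple(partitionKey: String!, sortKey: String!): Pineapple
}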

Third, write your own resolvers to handle the schema (again, avoiding the auto-generated code):

Resolver:

{
    "version" : "2017-02-28",
    "operation" : "PutItem",
    "key" : {
        "partitionKey": $util.dynamodb.toDynamoDBJson($ctx.args.input.partitionKey),
        "sortKey": $util.dynamodb.toDynamoDBJson($ctx.args.input.sortKey)
    },
    "attributeValues" : $util.dynamodb.toMapValuesJson($ctx.args.input)
}
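Note that this is only the request mapping template; the resolver also needs a response mapping template, which for a simple PutItem is just:

$util.toJson($ctx.result)

Reads against the same table follow the same pattern. A sketch of a GetItem request template for the assumed getPineapple query above:

{
    "version" : "2017-02-28",
    "operation" : "GetItem",
    "key" : {
        "partitionKey": $util.dynamodb.toDynamoDBJson($ctx.args.partitionKey),
        "sortKey": $util.dynamodb.toDynamoDBJson($ctx.args.sortKey)
    }
}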

And when we run the mutation in the AppSync console:

GraphQL operation:

mutation createPineapple($createPineappleInput: CreatePineappleInput!) {
    createPineapple(input: $createPineappleInput) {
        name
    }
}

Variables:

{
    "createPineappleInput": {
        "partitionKey": "attraction123",
        "sortKey": "meta",
        "name": "Looking OK"
    }
}

We get the result we hoped for:

{
    "data": {
        "createPineapple": {
            "name": "Looking OK"
        }
    }
}

Is there a reason why this wouldn't achieve single-table efficiency using AppSync?

Amazon Web Services Tutorial – Learn Amazon Web Services from Experts

This AWS tutorial is meant for beginners learning AWS. Through it you will understand the AWS architecture and various AWS products such as S3, EC2, VPC, Route 53, Lambda, IAM, Redshift, RDS, DynamoDB, and others. You will also learn the advantages of AWS, migration from on-premise infrastructure to the AWS cloud, AWS administration, and more. If you want to master AWS and get certified, check out the Intellipaat AWS training course.

What is AWS?

First of all, what does AWS stand for? Amazon Web Services, of course. Amazon Web Services is the world's biggest cloud provider, owned by Amazon. It provides a set of on-demand services to customers via the internet, or "the cloud," with pay-as-you-go pricing where you pay only for what you use. AWS lets you run applications on virtual machines much as you would on your own computer.

To put it very simply, AWS provides its services over the internet, running on infrastructure installed in Amazon's own data centers.

Market share and Popularity

AWS holds a 41% share of the cloud market. Azure, the next major cloud provider, lags behind with only 29% of the market. So AWS leads by a wide margin.

AWS is the most popular cloud provider in the world, and the most used in both the USA and India.

So, what makes AWS so special? Let’s take a look.

AWS Well-Architected Framework

The Well-Architected Framework is designed to help cloud architects build secure, reliable, high-performing, and efficient infrastructure for their applications. Its five pillars give customers and clients a consistent approach for designing, evaluating, and implementing architectures that will serve them well in the long run.

5 pillars of AWS

When creating infrastructure in AWS, you can never afford to miss these five pillars. These guidelines are provided by AWS itself.

  • Operational Excellence
  • Security
  • Reliability
  • Performance Efficiency
  • Cost Optimization

Operational Excellence:

This pillar covers the ability to run and monitor systems that deliver business value, and to continually improve the supporting processes and procedures.

To achieve operational excellence, there are six design principles to follow:

  • Perform operations as code
  • Annotate documentation
  • Make frequent, small, reversible changes
  • Refine operations procedures frequently
  • Anticipate failure
  • Learn from all operational failures

Security:

The security pillar covers the ability to secure and protect the data and applications stored in the cloud, and to provide business value through contingency plans, risk assessments, and mitigation strategies.

To achieve a secure architecture in the cloud, follow these design principles:

  • Implement a strong identity foundation
  • Enable traceability
  • Apply security at all layers
  • Automate security best practices
  • Protect data in transit and at rest
  • Prepare for security events

Reliability:

This pillar covers the ability of a system to recover from service or infrastructure disruptions, and how quickly it can do so.

The principles provided are:

  • Test recovery procedures
  • Automatically recover from failure
  • Scale horizontally to increase aggregate system availability
  • Stop guessing capacity
  • Manage change in automation

Performance Efficiency:

This pillar is about using computing resources efficiently to meet system requirements.

The design principles for this pillar are:

  • Democratize advanced technologies
  • Go global in minutes
  • Use serverless architectures
  • Experiment more often
  • Mechanical sympathy

Cost Optimization:

Cost optimization means minimizing or avoiding costs by getting rid of unneeded resources and cleverly designing an architecture that cuts costs.

The principles to be remembered are:

  • Adopt a consumption model
  • Measure overall efficiency
  • Stop spending money on data centre operations
  • Analyze and attribute expenditure
  • Use managed services to reduce the cost of ownership

When building an AWS architecture, keep all of these key pillars in mind to make it effective.

Services provided by AWS

We have already seen what AWS is and how popular it is. Now let us look at the services that made AWS what it is today. AWS offers a variety of services in the following categories:

  • Compute and Networking Services
  • Amazon EC2 – Amazon Elastic Compute Cloud (EC2) provides resizable compute capacity, i.e. virtual servers, in the cloud.
  • Amazon VPC – Amazon Virtual Private Cloud lets you run AWS services inside a logically isolated virtual network, providing an additional layer of security.
  • AWS Elastic Beanstalk – AWS Elastic Beanstalk is a compute service which makes it easy for developers to quickly deploy and manage applications they upload to the AWS cloud.
  • Amazon ELB – Elastic Load Balancing is a service which uniformly distributes network traffic and workloads across multiple servers or clusters of servers.
  • Storage and Content Delivery Services
  • Amazon S3 – Amazon Simple Storage Service (S3) is storage that can be maintained and accessed over the internet.
  • Amazon CloudFront – A Content Delivery Network (CDN) which delivers apps, videos, and other data with low latency to customers.
  • Security and Identity Services
  • Amazon Cognito – Provides sign-up, sign-in, and access control for your mobile and web apps.
  • AWS IAM – AWS Identity and Access Management enables you to manage access to AWS services and resources securely.
  • Database Services
  • Amazon RDS – Amazon Relational Database Service makes it simple to set up, operate, and scale a relational database in the cloud.
  • Amazon Redshift – Amazon Redshift is a data warehouse service which is fully managed by AWS.
  • Amazon Aurora – Amazon Aurora is a relational database engine which combines the simplicity and cost-effectiveness of an open-source database with the power, performance, and reliability of a high-end commercial database.

  • Analytics Services
  • Amazon Kinesis – For real-time analytics, Amazon Kinesis makes it easy to collect, process, and analyze streaming data such as IoT telemetry data, application logs, and website clickstreams.
  • Amazon QuickSight – A fast, cloud-powered business analytics service that makes it easy to build stunning visualizations and rich dashboards that can be accessed from any browser or mobile device.
  • Application Services
  • Amazon SES – Amazon Simple Email Service is an email sending service in the cloud, developed mainly for marketers and developers to send business and transactional emails.
  • Amazon SNS – Amazon Simple Notification Service (SNS) is a messaging service. SNS is a highly available, durable, secure, and fully managed publisher/subscriber service.
  • Management Tools
  • AWS Management Console – The AWS Management Console is a web application that allows users to access and manage any of the resources/services running on the AWS infrastructure.
  • AWS CloudWatch – Amazon CloudWatch is AWS's monitoring and management service, designed to maintain the services and resources that are in use.

The most used services are compute, storage, and security. Most businesses that opt for the cloud start with storage, that is, migrating their data to the cloud.

Who uses AWS?

The world's biggest companies in their respective domains use AWS for their cloud needs.

AWS has more than 1,000,000 active users. About 10% of them are enterprise-scale users, and the rest are small and mid-sized companies.

Some firms have gone "all-in," removing their data centres and moving entirely to AWS. Netflix went all-in in 2015 and uses a huge amount of AWS resources. Hertz and Time Inc. are two other companies that went all-in.

Tata Motors and Wipro are two of the biggest Indian MNCs using AWS.

Why do businesses prefer AWS?

Mainly it is because of all the general advantages of cloud computing, but what does AWS have that makes it better than other providers like Azure and GCP?

AWS started its cloud business in 2006, way before any other major company even thought of it. It has failed a lot, gained experience from those failures, and is the leader today because of it. It provides an overwhelming number of services which many businesses can use to their advantage.

The brand name built on the success of Amazon.com is one of the main reasons businesses trust AWS. In addition, anybody can easily set up a profile and start using AWS.

Starting off with AWS

It is very simple to start off with AWS; there are no complex procedures. Here are the steps to create an AWS account and access the AWS Management Console.

Step 1: Create an AWS account by providing personal and credit/debit card details. INR 2 will be deducted to validate your card.

Step 2: Choose the plan you want: Basic, Developer, or Business. Basic is free for 12 months with limits; Developer costs $29/month and Business costs $100/month.

Step 3: You will now have access to the AWS Management Console; you just have to learn how to use the services. For detailed information on creating an account and the AWS Management Console, click here.

Here ends this AWS tutorial on Amazon Web Services. Wish to learn more? Check out Intellipaat's AWS Course to get an in-depth understanding of Amazon Web Services!

Table of Contents

  • Definition of Cloud Computing – What is Cloud Computing?
  • Advantages and Disadvantages of Cloud Computing – Pros and Cons of Cloud Computing
  • Global Infrastructure – Global Infrastructure of AWS
  • AWS Introduction – What is AWS?
  • AWS Security – Security in AWS
  • AWS Compliance – AWS Compliance Overview
  • Amazon Web Services Cloud Platform – AWS Cloud Platform
  • Compute – AWS Computation
  • Storage and Content Delivery – Storage Requirements
  • Database – Database Management Systems in AWS
  • Networking – AWS Networking
  • Developer Tools – AWS Development Tools Overview
  • Management Tools – Popular Management Tools Offered by AWS
  • Security and Identity – AWS Security and Identity Overview
  • Analytics – AWS Analytics Overview
  • Internet of Things – AWS Internet of Things
  • Mobile Services – Amazon Cognito
  • AWS Cheat Sheet – AWS Quick Reference Guide
  • AWS Lambda Tutorial – Introduction

Originally published at www.intellipaat.com on May 16, 2016.

How to deploy a Node.js application to Amazon Web Services using Docker

Plug: Originally published at Zeolearn magazine.

Table of Contents

1. Introduction

2. Prerequisites

3. A quick primer on Docker and AWS

4. What we’ll be deploying 

5. Creating a Dockerfile

6. Building a docker image

7. Running a docker container

8. Creating the Registry (ECR) and uploading the app image to it

9. Creating a new task definition

10. Creating a cluster

11. Creating a service to run it

12. Conclusion

1. Introduction

Writing code that does stuff is something most developers are familiar with. Sometimes, we need to take the responsibility of a SysAdmin or DevOps engineer and deploy our codebase to production where it will help a business solve problems for customers.

In this tutorial, I’ll show you how to dockerize a Node.js application and deploy it to Amazon Web Services (AWS) using Amazon ECR (Elastic Container Registry) and ECS (Elastic Container Service).

2. Prerequisites

To follow through this tutorial, you’ll need the following:


  1. Node and npm: Follow this link to install the latest versions.
  2. Basic knowledge of Node.js.
  3. Docker: The installation provides Docker Engine, Docker CLI client, and other cool stuff. Follow the instructions for your operating system. To check if the installation worked, fire this on the terminal:
docker --version

The command above should display the version number. If it doesn’t, the installation didn’t complete properly.

4. AWS account: Sign up for a free tier. There is a waiting period to verify your phone number and bank card. After this, you will have access to the console.

5. AWS CLI: Follow the instructions for your OS. You need Python installed.

3. A quick primer on Docker and AWS

Docker is open-source software that allows you to pack an application together with its required dependencies and environment in a ‘container’ that you can ship and run anywhere. It is independent of platform and hardware, so the containerized application runs in any environment in an isolated fashion.

Docker containers solve many issues, such as when an app works on a co-worker’s computer but doesn’t run on yours, or it works in the local development environment but doesn’t work when you deploy it to a server.

Amazon Web Services (AWS) offers a reliable, scalable, and inexpensive cloud computing service for businesses. As I mentioned before, this tutorial will focus on using the ECR and ECS services.

4. What we’ll be deploying

Let’s quickly build a sample app that we’ll use for the purpose of this tutorial. It’s going to be a very simple Node.js app.

Enter the following in your terminal:

# create a new directory
mkdir sample-nodejs-app
# change to the new directory
cd sample-nodejs-app
# initialize npm
npm init -y
# install express
npm install express
# create a server.js file
touch server.js


Open server.js and paste the code below into it:

// server.js
const express = require('express')
const app = express()

app.get('/', (req, res) => {
  res.send('Hello world from a Node.js app!')
})

app.listen(3000, () => {
  console.log('Server is up on 3000')
})

Start the app with:

node server.js

Access it on http://localhost:3000. You should get Hello world from a Node.js app! displayed in your browser. The complete code is available on GitHub.
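You can also hit it from another terminal, assuming curl is installed:

curl http://localhost:3000
# Hello world from a Node.js app!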

Now let’s take our very important app to production 😄.

5. Creating a Dockerfile

We are going to start dockerizing the app by creating a single file called a Dockerfile in the base of our project directory.

The Dockerfile is the blueprint from which our images are built. And then images turn into containers, in which we run our apps.

Every Dockerfile starts with a base image as its foundation. There are two ways to approach creating your Dockerfile:

  1. Use a plain OS base image (For example, Ubuntu OS, Debian, CentOS etc.) and install an application environment in it such as Node.js OR
  2. Use an environment-ready base image to get an OS image with an application environment already installed.

We will proceed with the second approach. We can use the official Node.js image hosted on Dockerhub which is based on Alpine Linux.

Write this in the Dockerfile:

FROM node:8-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . .
RUN npm install
EXPOSE 3000
CMD [ "node", "server.js" ]

Let’s walk through this line by line to see what is happening here, and why.

FROM node:8-alpine

Here, we are building our Docker image using the official Node.js image from Dockerhub (a repository for base images).

  • Start your Dockerfile with a FROM statement. This is where you specify your base image.
  • The RUN statement lets us execute any command we want. Here we created a subdirectory /usr/src/app that will hold our application code within the Docker image.
  • WORKDIR establishes the subdirectory we created as the working directory for any RUN, CMD, ENTRYPOINT, COPY, and ADD instructions that follow it in the Dockerfile. /usr/src/app is our working directory.
  • COPY lets us copy files from a source to a destination. We copied the contents of our Node application code (server.js and package.json) from our current directory to the working directory in our Docker image (see the note on .dockerignore after this list).
  • The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. We specified port 3000.
  • Last but not least, the CMD statement specifies the command to start our application. This tells Docker how to run your application. Here we use node server.js, which is typically how files are run in Node.js.
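One extra detail worth knowing: because COPY . . copies the entire build context, it is common to add a .dockerignore file next to the Dockerfile so that local artifacts are not baked into the image. A minimal sketch (this file is an addition, not part of the original steps):

node_modules
npm-debug.log
.git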

With this completed file, we are now ready to build a new Docker image.

6. Building a docker image

Make sure you have Docker up and running. Now that we have defined our Dockerfile, let’s build the image, naming it with the -t flag:

docker build -t sample-nodejs-app .

This will output hashes and alphanumeric strings that identify intermediate containers and images, ending with “Successfully built” on the last line:

Sending build context to Docker daemon  1.966MB
Step 1/7 : FROM node:8-alpine
 ---> 998971a692ca
Step 2/7 : RUN mkdir -p /usr/src/app
 ---> Using cache
 ---> f1aa1c112188
Step 3/7 : WORKDIR /usr/src/app
 ---> Using cache
 ---> b4421b83357b
Step 4/7 : COPY . .
 ---> 836112e1d526
Step 5/7 : RUN npm install
 ---> Running in 1c6b36b5381c
npm WARN sample-nodejs-app@1.0.0 No description
npm WARN sample-nodejs-app@1.0.0 No repository field.
Removing intermediate container 1c6b36b5381c
 ---> 93999e6c807f
Step 6/7 : EXPOSE 3000
 ---> Running in 7419020927f1
Removing intermediate container 7419020927f1
 ---> ed4ac8a31f83
Step 7/7 : CMD [ "node", "server.js" ]
 ---> Running in c77d34f4c873
Removing intermediate container c77d34f4c873
 ---> eaf97859f909
Successfully built eaf97859f909
# don't expect the same values from your terminal.


7. Running a Docker Container

We’ve built the docker image. To see previously created images, run:


docker images

You should see the image we just created listed as the most recent, based on time.

Copy the image ID. To run the container, we write on the terminal:

docker run -p 80:3000 {image-id}
# fill in your image-id


By default, Docker containers can make connections to the outside world, but the outside world cannot connect to containers. The -p flag publishes a container port on the host interface; here we map host port 80 to container port 3000. Because we are running Docker locally, go to http://localhost to view the app.
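As an aside, you can also run the container by image tag instead of image ID, and detach it so your terminal stays free. A sketch (the --name value is just an example):

docker run -d -p 80:3000 --name sample-app sample-nodejs-app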

At any moment, you can check running Docker containers by typing:

docker container ls

Finally, you can stop the container using the container ID shown by docker container ls (note: the container ID, not the image ID):

docker stop {container-id}

Leave the Docker daemon running.

8. Create Registry (ECR) and upload the app image to it

Amazon Elastic Container Registry (ECR) is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR is integrated with Amazon Elastic Container Service (ECS), simplifying your development to production workflow.

The keyword “Elastic” means you can scale the capacity or reduce it as desired.

Steps:

  1. Go to the AWS console and sign in.
  2. Select the EC2 Container Service and click Get started

3. When the first-run page shows, scroll down, click cancel, and enter the ECS dashboard.

4. To ensure your CLI can connect with your AWS account, run on the terminal:

aws configure

If your AWS CLI was properly installed, aws configure will ask for the following:

$ aws configure
AWS Access Key ID [None]: accesskey
AWS Secret Access Key [None]: secretkey
Default region name [None]: us-west-2
Default output format [None]:

Get the security credentials from your AWS account under your username > Access keys. Run aws configure again and fill in the values correctly.
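A quick way to confirm the CLI can now talk to your account (an extra sanity check, not one of the console's steps):

aws sts get-caller-identity

This prints the account ID and user ARN the CLI is authenticated as.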

5. Create a new repository and enter a name (preferably the same container name as in your local dev environment, for consistency).

For example, use sample-nodejs-app.

Follow the 5 instructions from the AWS console for building, tagging, and pushing Docker images:

Note: The arguments of the following are mine and will differ from yours, so just follow the steps outlined on your console.

  1. Retrieve the Docker login command that you can use to authenticate your Docker client to your registry. Note: If you receive an “Unknown options: --no-include-email” error, install the latest version of the AWS CLI. Learn more here.

aws ecr get-login --no-include-email --region us-east-2

2. Run the docker login command that was returned in the previous step (just copy and paste). Note: If you are using Windows PowerShell, run the following command instead:

Invoke-Expression -Command (aws ecr get-login --no-include-email --region us-east-2)

It should output: Login Succeeded.

3. Build your Docker image using the following command. For information on building a Docker file from scratch, see the instructions here. You can skip this step since our image is already built:

docker build -t sample-nodejs-app .

4. With a completed build, tag your image with a keyword (For example, latest) so you can push the image to this repository:

docker tag sample-nodejs-app:latest 559908478199.dkr.ecr.us-east-2.amazonaws.com/sample-nodejs-app:latest

5. Run the following command to push this image to your newly created AWS repository:

docker push 559908478199.dkr.ecr.us-east-2.amazonaws.com/sample-nodejs-app:latest

9. Create a new task definition

Tasks function like the docker run command of the Docker CLI but for multiple containers. They define:

  • Container images (to use)
  • Volumes (if any)
  • Networks
  • Environment variables
  • Port mappings

From Task Definitions in the ECS dashboard, press on the Create new Task Definition (ECS) button:

Set a task name and use the following steps:

  • Add Container: sample-nodejs-app (the one we pushed).
  • Image: the URL to your container. Mine is 559908478199.dkr.ecr.us-east-2.amazonaws.com/sample-nodejs-app
  • Soft memory limit: 512 MiB
  • Map 80 (host) to 3000 (container) for sample-nodejs-app
  • Env Variables:

NODE_ENV: production
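Behind the scenes the console produces a task definition document roughly like the following sketch, using the values above (the family name is assumed for illustration, and the image URL is mine and will differ from yours):

{
  "family": "demo-nodejs-app",
  "containerDefinitions": [
    {
      "name": "sample-nodejs-app",
      "image": "559908478199.dkr.ecr.us-east-2.amazonaws.com/sample-nodejs-app:latest",
      "memoryReservation": 512,
      "portMappings": [
        { "hostPort": 80, "containerPort": 3000, "protocol": "tcp" }
      ],
      "environment": [
        { "name": "NODE_ENV", "value": "production" }
      ],
      "essential": true
    }
  ]
}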

10. Create a Cluster

A cluster is the place where your AWS containers run; it is backed by EC2 instances whose configuration you choose. Define the following:

  • Cluster name: demo-nodejs-app-cluster
  • EC2 instance type: t2.micro

(Note: you select the instances based on the size of your application. Here we’ve selected the smallest; your selection affects how much you are billed at the end of the month. Visit here for more information.) Thank you Nicholas Kolatsis for pointing out that the previous selection of m4.large was expensive for this tutorial.

  • Number of instances: 1
  • EBS storage: 22 GiB
  • Key pair: None
  • VPC: New

When the process is complete, you may choose to click on “View cluster.”
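If you prefer the terminal, the cluster itself can also be created with the AWS CLI, though note that this one command does not provision the EC2 instances the console wizard sets up for you:

aws ecs create-cluster --cluster-name demo-nodejs-app-cluster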

11. Create a service to run it

Go to Task Definition > click demo-nodejs-app > click on the latest revision.

Inside the task definition, click on the Actions dropdown and select Create service.

Use the following:

  • Launch type: EC2
  • Service name: demo-nodejs-app-service
  • Number of tasks: 1

Skip through options and click Create service and View service.
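For reference, a CLI sketch equivalent to this step, using the names chosen above:

aws ecs create-service \
  --cluster demo-nodejs-app-cluster \
  --service-name demo-nodejs-app-service \
  --task-definition demo-nodejs-app \
  --desired-count 1 \
  --launch-type EC2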

You’ll see its status as PENDING. Give it a little time and it will indicate RUNNING.

Go to Cluster (through a link from the service we just created) > EC2 instances > Click on the container instance to reveal the public DNS.

12. Conclusion

Congrats on finishing this post! Grab the code for the Docker part from GitHub.


Feel free to support me (devapparel.co) and look good while at it. Also, comment on or share this post. Thanks for reading!


By: Emmanuel Yusufu