Deploy Web Services on GKE Cluster with Node.js

Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.

We will first briefly look at some Kubernetes concepts that you may come across while doing the hands-on work.

An image is an executable package of your application that includes your code, libraries, configuration files, runtime, environment variables, and so on. Running an image launches a container. These containers run in a container cluster, which is managed using Kubernetes.

A container cluster is nothing but a group of Compute Engine VM (virtual machine) instances. In a container cluster, there are two types of VM instances:

  1. Master
  2. Node instances

For example, a cluster with one master and three node instances consists of four VM instances in total.

The master is the supervising machine: it manages the cluster. Each node runs a kubelet agent, which is used to communicate with the master. Pods contain containers; inside each pod, there can be multiple containers running. All the containers inside a pod share the same underlying resources. That means they all have the same IP address, share the same disk volumes, etc. A service is a grouping of pods that are running on the cluster.
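To make the pod concept concrete, here is a minimal sketch of a pod manifest with two containers. The names are hypothetical, and this manifest is not part of the hands-on below; it just illustrates that containers in one pod share the pod's IP address and can share volumes:

```yaml
# Sketch only: two containers in one pod, sharing the pod's network and a volume.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # hypothetical name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}             # a scratch volume both containers can mount
  containers:
  - name: app                # reachable from "sidecar" via localhost
    image: node:8
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "sleep 3600"]   # keep the container alive
    volumeMounts:
    - name: shared-data
      mountPath: /data
```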

If you want more in-depth knowledge of Kubernetes, I would recommend referring to their docs.


For this practical, I will be using Ubuntu 18.04.1 and Node.js.


  1. Sign in to your Google account. If you do not have one, sign up for a new account.

  2. Install the Google Cloud SDK.

  3. Install kubectl:

sudo snap install kubectl --classic

Go to the Google Cloud Platform Console and create a new project.

Open the newly created project.

In the Navigation menu, click the APIs & Services and go to the Library page.

Search for Kubernetes Engine API and click Enable.

Now let us create a cluster.

Go to the Navigation menu, select Kubernetes Engine -> Clusters.

Then you will be prompted to Create a Cluster. You can customize your cluster according to your needs, but for this project, I will be using all the default settings.

Click the create button, and wait till the cluster is created.

Configure kubectl command line access by running the following command:

gcloud container clusters get-credentials <cluster name> --zone <zone name> --project <project ID>
eg: gcloud container clusters get-credentials standard-cluster-service --zone us-central1-a --project k8s-api-service-project

Open the gcloud shell, and type

gcloud container clusters list

This will list out all the existing clusters for running containers. In our case, we have a cluster with three nodes.

Now, on your local machine, open the terminal and navigate to the folder where your application is.

Create a text file named “Dockerfile” inside the folder you are now at.

touch Dockerfile

Open the created text file, copy and paste the following, and save it.

FROM node:8
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
CMD [ "npm", "start" ]

A Dockerfile is a text file that contains a series of instructions on how to build your image. It supports a simple set of commands:

> *WORKDIR <path>*: Sets the working directory for any *RUN*, *CMD*, *ENTRYPOINT*, *COPY* and *ADD* instructions that follow it in the *Dockerfile*. If the *WORKDIR* doesn't exist, it will be created even if it's not used in any subsequent *Dockerfile* instruction.

> *COPY <src>... <dest>*: Copies new files or directories from *<src>* and adds them to the filesystem of the container at the path *<dest>*.

> *RUN <command>*: The command is run in a shell, which by default is */bin/sh -c* on Linux.

> *CMD ["executable","param1","param2"]*: Sets the command to be executed when running the image.

Similarly, create a ".dockerignore" file using touch .dockerignore.
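The .dockerignore contents didn't survive in this copy of the article; for a Node.js app, the usual entries (as in the official Node.js Docker guide) are:

```
node_modules
npm-debug.log
```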


In most cases, you'll be copying the source code of your application into a Docker image. Typically you would do that by adding *COPY src/ dest/* or similar to your *Dockerfile*. That's a great way to do it, but it will also pull in things like your *.git/* directory or */tmp* folders that belong to your project, which you really do not need for building the Docker image. Including such files increases the image size unnecessarily.

> We can exclude files and directories we do not need from the final image. All you have to do is create a *.dockerignore* file alongside your *Dockerfile*.

> At this point, it's pretty similar to what a *.gitignore* file does for your git repos. You just need to tell it what you want to ignore.

Then, build the container image for the data service application.

docker image build -t <image repository name>:<tag name> .
eg: docker image build -t dataserver:v2 . 

Note the "dot" at the end of the line. It specifies the current working directory you are in; here, that is the directory where you are running the ‘docker image build’ command from, which is also where your Dockerfile is. To see your built image, type docker images

Now, let's push this image to Docker Hub. (If you do not have a Docker Hub account, you first need to sign up there.)

Create a new repository. Mine would be “dataservice”

Now push the image we built to Docker Hub.

Log in to the Docker Hub from the terminal

docker login

Enter your username and password

Get your image ID by typing docker images

Tag your image

docker image tag <image ID> <docker hub repository name>:<tag>
eg: docker image tag 377026348163 varuni95/dataservice:v2

If you do not specify the tag, it will always default to “latest”. Push your image

docker image push <docker hub repository name>
eg: docker image push varuni95/dataservice

Now let’s pull our image and create a new container from it.

Go to the Gcloud shell,

kubectl run <container name> --image=<docker hub repository name>:<tag name> --port=<port number>
eg: kubectl run data --image=varuni95/dataservice:v2 --port=3101

Now expose the Kubernetes deployment through a load balancer

kubectl expose deployment data --type=LoadBalancer
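For reference, kubectl expose generates a Kubernetes Service object behind the scenes. A roughly equivalent manifest would be the following sketch, matching the names and port used above (the run: data selector is the label that kubectl run of this era applies to the pods it creates):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: data
spec:
  type: LoadBalancer     # asks GKE to provision an external load balancer
  selector:
    run: data            # matches the pods created by "kubectl run data"
  ports:
  - port: 3101           # the container port exposed earlier
    targetPort: 3101
```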

Get the external IP address

kubectl get svc

Copy that IP address and paste it as the request URL in your api_service application (go inside api_service > routes > states_hash.js; do the same for api_service > routes > states_titlecase.js).
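The routes files themselves aren't shown in the article, but a hedged sketch of what the change amounts to looks like this (the variable name, helper function, and example IP are made up for illustration; substitute the external IP you copied):

```javascript
// Hypothetical sketch: builds the request URL the api service uses to call
// the data service. DATA_SERVICE_HOST and dataServiceUrl are illustrative
// names, not the article's actual code.
const DATA_SERVICE_HOST = process.env.DATA_SERVICE_HOST || '35.202.0.10'; // your LoadBalancer IP

function dataServiceUrl(stateCode) {
  // the data service exposed earlier listens on port 3101
  return `http://${DATA_SERVICE_HOST}:3101/${stateCode}`;
}

module.exports = { dataServiceUrl };
```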

Now let’s follow the same steps as mentioned above to expose the api service

docker image build -t apiserver:v2 .

Go to the Docker Hub and create a new repository (eg: apiservice)

Get your image ID from docker images

Tag your image

docker image tag c174a3d43afa varuni95/apiservice:v2

Push your image

docker image push varuni95/apiservice

Pull our image and run a new container in the cluster

kubectl run api --image=varuni95/apiservice:v2 --port=3100

Expose the deployment through a load balancer

kubectl expose deployment api --type=LoadBalancer

Get your external IP address

You should now be able to access the service by pointing your browser to that external IP address.

Try out with different US state codes 😉
