How To Scale a Node.js Application with MongoDB Using Helm

Originally published by Kathleen Juell at digitalocean.com: https://www.digitalocean.com/community/tutorials/how-to-scale-a-node-js-application-with-mongodb-using-helm

Introduction

Kubernetes is a system for running modern, containerized applications at scale. With it, developers can deploy and manage applications across clusters of machines. And though it can be used to improve efficiency and reliability in single-instance application setups, Kubernetes is designed to run multiple instances of an application across groups of machines.

When creating multi-service deployments with Kubernetes, many developers opt to use the Helm package manager. Helm streamlines the process of creating multiple Kubernetes resources by offering charts and templates that coordinate how these objects interact. It also offers pre-packaged charts for popular open-source projects.

In this tutorial, you will deploy a Node.js application with a MongoDB database onto a Kubernetes cluster using Helm charts. You will use the official Helm MongoDB replica set chart to create a StatefulSet object consisting of three Pods, a Headless Service, and three PersistentVolumeClaims. You will also create a chart to deploy a multi-replica Node.js application using a custom application image. The setup you will build in this tutorial will mirror the functionality of the code described in Containerizing a Node.js Application for Development With Docker Compose and will be a good starting point to build a resilient Node.js application with a MongoDB data store that can scale with your needs.

Prerequisites

To complete this tutorial, you will need:

  • A Kubernetes cluster (this tutorial uses a DigitalOcean Kubernetes cluster, but any cluster will work if you configure an appropriate StorageClass, as discussed in Step 3).
  • The kubectl command-line tool installed on your local machine and configured to connect to your cluster.
  • Docker installed on your local machine, along with a Docker Hub account.
  • Helm installed on your local machine and Tiller installed on your cluster, since this tutorial uses Helm 2-style commands such as helm install --name.

Step 1 — Cloning and Packaging the Application

To use our application with Kubernetes, we will need to package it so that the kubelet agent can pull the image. Before packaging the application, however, we will need to modify the MongoDB connection URI in the application code to ensure that our application can connect to the members of the replica set that we will create with the Helm mongodb-replicaset chart.

Our first step will be to clone the node-mongo-docker-dev repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in Containerizing a Node.js Application for Development With Docker Compose, which uses a demo Node.js application with a MongoDB database to demonstrate how to set up a development environment with Docker Compose. You can find more information about the application itself in the series From Containers to Kubernetes with Node.js.

Clone the repository into a directory called node_project:

git clone https://github.com/do-community/node-mongo-docker-dev.git node_project

Navigate to the node_project directory:

cd node_project

The node_project directory contains files and directories for a shark information application that works with user input. It has been modernized to work with containers: sensitive and specific configuration information has been removed from the application code and refactored to be injected at runtime, and the application's state has been offloaded to a MongoDB database.

For more information about designing modern, containerized applications, please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes.

When we deploy the Helm mongodb-replicaset chart, it will create:

  • A StatefulSet object with three Pods — the members of the MongoDB replica set. Each Pod will have an associated PersistentVolumeClaim and will maintain a fixed identity in the event of rescheduling.
  • A MongoDB replica set made up of the Pods in the StatefulSet. The set will include one primary and two secondaries. Data will be replicated from the primary to the secondaries, ensuring that our application data remains highly available.

For our application to interact with the database replicas, the MongoDB connection URI in our code will need to include both the hostnames of the replica set members as well as the name of the replica set itself. We therefore need to include these values in the URI.

The file in our cloned repository that specifies database connection information is called db.js. Open that file now using nano or your favorite editor:

nano db.js

Currently, the file includes constants that are referenced in the database connection URI at runtime. The values for these constants are injected using Node’s process.env property, which returns an object with information about your user environment at runtime. Setting values dynamically in our application code allows us to decouple the code from the underlying infrastructure, which is necessary in a dynamic, stateless environment. For more information about refactoring application code in this way, see Step 2 of Containerizing a Node.js Application for Development With Docker Compose and the relevant discussion in The 12-Factor App.

The constants for the connection URI and the URI string itself currently look like this:

~/node_project/db.js

...
const {
  MONGO_USERNAME,
  MONGO_PASSWORD,
  MONGO_HOSTNAME,
  MONGO_PORT,
  MONGO_DB
} = process.env;

...

const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
...

In keeping with a 12FA approach, we do not want to hard code the hostnames of our replica instances or our replica set name into this URI string. The existing MONGO_HOSTNAME constant can be expanded to include multiple hostnames — the members of our replica set — so we will leave that in place. We will need to add a replica set constant to the options section of the URI string, however.

Add MONGO_REPLICASET to both the URI constant object and the connection string:

~/node_project/db.js

...
const {
  MONGO_USERNAME,
  MONGO_PASSWORD,
  MONGO_HOSTNAME,
  MONGO_PORT,
  MONGO_DB,
  MONGO_REPLICASET
} = process.env;

...
const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?replicaSet=${MONGO_REPLICASET}&authSource=admin`;
...

Using the replicaSet option in the options section of the URI allows us to pass in the name of the replica set, which, along with the hostnames defined in the MONGO_HOSTNAME constant, will allow us to connect to the set members.
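
For illustration, once these variables are populated at runtime, the assembled URI will look something like the following; the username, password, and hostnames here are placeholders rather than values from this tutorial:

mongodb://sammy:password@host-1:27017,host-2:27017,host-3:27017/sharkinfo?replicaSet=db&authSource=admin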

Save and close the file when you are finished editing.

With your database connection information modified to work with replica sets, you can now package your application, build the image with the docker build command, and push it to Docker Hub.

Build the image with docker build and the -t flag, which allows you to tag the image with a memorable name. In this case, tag the image with your Docker Hub username and name it node-replicas or a name of your own choosing:

docker build -t your_dockerhub_username/node-replicas .

The . in the command specifies that the build context is the current directory.

It will take a minute or two to build the image. Once it is complete, check your images:

docker images

You will see the following output:

Output

REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE
your_dockerhub_username/node-replicas   latest              56a69b4bc882        7 seconds ago       90.1MB
node                                    10-alpine           aa57b0242b33        6 days ago          71MB

Next, log in to the Docker Hub account you created in the prerequisites:

docker login -u your_dockerhub_username 

When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json file in your non-root user's home directory with your Docker Hub credentials.

Push the application image to Docker Hub with the docker push command. Remember to replace your_dockerhub_username with your own Docker Hub username:

docker push your_dockerhub_username/node-replicas

You now have an application image that you can pull to run your replicated application with Kubernetes. The next step will be to configure specific parameters to use with the MongoDB Helm chart.

Step 2 — Creating Secrets for the MongoDB Replica Set

The stable/mongodb-replicaset chart provides different options when it comes to using Secrets, and we will create two to use with our chart deployment:

  • A Secret for our replica set keyfile that will function as a shared password between replica set members, allowing them to authenticate other members.
  • A Secret for our MongoDB admin user, who will be created as a root user on the admin database. This role will allow you to create subsequent users with limited permissions when deploying your application to production.

With these Secrets in place, we will be able to set our preferred parameter values in a dedicated values file and create the StatefulSet object and MongoDB replica set with the Helm chart.

First, let's create the keyfile. We will use the openssl command with the rand option to generate 756 bytes of random data, base64 encoded, for the keyfile:

openssl rand -base64 756 > key.txt

The output generated by the command will be base64 encoded, ensuring uniform data transmission, and redirected to a file called key.txt, following the guidelines stated in the mongodb-replicaset chart authentication documentation. The key itself must be between 6 and 1024 characters long, consisting only of characters in the base64 set.

You can now create a Secret called keyfilesecret using this file with kubectl create:

kubectl create secret generic keyfilesecret --from-file=key.txt

This will create a Secret object in the default namespace, since we have not created a specific namespace for our setup.

You will see the following output indicating that your Secret has been created:

Output
secret/keyfilesecret created
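
If you would like to confirm that the Secret was stored as expected, you can inspect it with kubectl; the key data will appear base64-encoded in the output:

kubectl get secret keyfilesecret -o yaml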

Remove key.txt:

rm key.txt

Alternatively, if you would like to save the file, be sure to restrict its permissions and add it to your .gitignore file to keep it out of version control.

Next, create the Secret for your MongoDB admin user. The first step will be to convert your desired username and password to base64.

Convert your database username:

echo -n 'your_database_username' | base64

Note down the value you see in the output.

Next, convert your password:

echo -n 'your_database_password' | base64

Take note of the value in the output here as well.
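
If you ever need to double-check an encoded value, you can reverse the operation; the string below is a hypothetical example:

echo 'c2FtbXk=' | base64 --decode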

Open a file for the Secret:

nano secret.yaml

Note: Kubernetes objects are typically defined using YAML, which strictly forbids tabs and requires two spaces for indentation. If you would like to check the formatting of any of your YAML files, you can use a linter or test the validity of your syntax using kubectl create with the --dry-run and --validate flags:

kubectl create -f your_yaml_file.yaml --dry-run --validate=true

In general, it is a good idea to validate your syntax before creating resources with kubectl.

Add the following code to the file to create a Secret that will define a user and password with the encoded values you just created. Be sure to replace the dummy values here with your own encoded username and password:

~/node_project/secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
data:
  user: your_encoded_username
  password: your_encoded_password

Here, we're using the key names that the mongodb-replicaset chart expects: user and password. We have named the Secret object mongo-secret, but you are free to name it anything you would like.

Save and close the file when you are finished editing.

Create the Secret object with the following command:

kubectl create -f secret.yaml

You will see the following output:

Output
secret/mongo-secret created

Again, you can either remove secret.yaml or restrict its permissions and add it to your .gitignore file.

With your Secret objects created, you can move on to specifying the parameter values you will use with the mongodb-replicaset chart and creating the MongoDB deployment.

Step 3 — Configuring the MongoDB Helm Chart and Creating a Deployment

Helm comes with an actively maintained repository called stable that contains the chart we will be using: mongodb-replicaset. To use this chart with the Secrets we've just created, we will create a file with configuration parameter values called mongodb-values.yaml and then install the chart using this file.

Our mongodb-values.yaml file will largely mirror the default values.yaml file in the mongodb-replicaset chart repository. We will, however, make the following changes to our file:

  • We will set the auth parameter to true to ensure that our database instances start with authorization enabled. This means that all clients will be required to authenticate for access to database resources and operations.
  • We will add information about the Secrets we created in the previous Step so that the chart can use these values to create the replica set keyfile and admin user.
  • We will decrease the size of the PersistentVolumes associated with each Pod in the StatefulSet to use the minimum viable DigitalOcean Block Storage unit, 1GB, though you are free to modify this to meet your storage requirements.

Before writing the mongodb-values.yaml file, however, you should first check that you have a StorageClass created and configured to provision storage resources. Each of the Pods in your database StatefulSet will have a sticky identity and an associated PersistentVolumeClaim, which will dynamically provision a PersistentVolume for the Pod. If a Pod is rescheduled, the PersistentVolume will be mounted to whichever node the Pod is scheduled on (though each Volume must be manually deleted if its associated Pod or StatefulSet is permanently deleted).

Because we are working with DigitalOcean Kubernetes, our default StorageClass provisioner is set to dobs.csi.digitalocean.com (DigitalOcean Block Storage), which we can check by typing:

kubectl get storageclass

If you are working with a DigitalOcean cluster, you will see the following output:

Output
NAME                         PROVISIONER                 AGE
do-block-storage (default)   dobs.csi.digitalocean.com   21m

If you are not working with a DigitalOcean cluster, you will need to create a StorageClass and configure a provisioner of your choice. For details about how to do this, please see the official documentation.
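
As a minimal sketch, a StorageClass manifest has the following shape. The provisioner and parameters shown here are examples only; substitute values appropriate for your environment:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
reclaimPolicy: Delete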

Now that you have ensured that you have a StorageClass configured, open mongodb-values.yaml for editing:

nano mongodb-values.yaml

You will set values in this file that will do the following:

  • Enable authorization.
  • Reference your keyfilesecret and mongo-secret objects.
  • Specify 1Gi for your PersistentVolumes.
  • Set your replica set name to db.
  • Specify 3 replicas for the set.
  • Pin the mongo image to the latest version at the time of writing: 4.1.9.

Paste the following code into the file:

~/node_project/mongodb-values.yaml

replicas: 3
port: 27017
replicaSetName: db
podDisruptionBudget: {}
auth:
  enabled: true
  existingKeySecret: keyfilesecret
  existingAdminSecret: mongo-secret
imagePullSecrets: []
installImage:
  repository: unguiculus/mongodb-install
  tag: 0.7
  pullPolicy: Always
copyConfigImage:
  repository: busybox
  tag: 1.29.3
  pullPolicy: Always
image:
  repository: mongo
  tag: 4.1.9
  pullPolicy: Always
extraVars: {}
metrics:
  enabled: false
  image:
    repository: ssalaues/mongodb-exporter
    tag: 0.6.1
    pullPolicy: IfNotPresent
  port: 9216
  path: /metrics
  socketTimeout: 3s
  syncTimeout: 1m
  prometheusServiceDiscovery: true
  resources: {}
podAnnotations: {}
securityContext:
  enabled: true
  runAsUser: 999
  fsGroup: 999
  runAsNonRoot: true
init:
  resources: {}
  timeout: 900
resources: {}
nodeSelector: {}
affinity: {}
tolerations: []
extraLabels: {}
persistentVolume:
  enabled: true
  #storageClass: "-"
  accessModes:
    - ReadWriteOnce
  size: 1Gi
  annotations: {}
serviceAnnotations: {}
terminationGracePeriodSeconds: 30
tls:
  enabled: false
configmap: {}
readinessProbe:
  initialDelaySeconds: 5
  timeoutSeconds: 1
  failureThreshold: 3
  periodSeconds: 10
  successThreshold: 1
livenessProbe:
  initialDelaySeconds: 30
  timeoutSeconds: 5
  failureThreshold: 3
  periodSeconds: 10
  successThreshold: 1

The persistentVolume.storageClass parameter is commented out here: removing the comment and setting its value to "-" would disable dynamic provisioning. Because we are leaving this value undefined, the chart will use the default StorageClass and its provisioner (dobs.csi.digitalocean.com in our case).

Also note the accessModes associated with the persistentVolume key: ReadWriteOnce means that the provisioned volume can be mounted as read-write by only a single node. Please see the documentation for more information about different access modes.

To learn more about the other parameters included in the file, see the configuration table included with the repo.

Save and close the file when you are finished editing.

Before deploying the mongodb-replicaset chart, you will want to update the stable repo with the helm repo update command:

helm repo update

This will get the latest chart information from the stable repository.

Finally, install the chart with the following command:

helm install --name mongo -f mongodb-values.yaml stable/mongodb-replicaset

Note: Before installing a chart, you can run helm install with the --dry-run and --debug options to check the generated manifests for your release:

helm install --name your_release_name -f your_values_file.yaml --dry-run --debug your_chart

Note that we are naming the Helm release mongo. This name will refer to this particular deployment of the chart with the configuration options we've specified. We've pointed to these options by including the -f flag and our mongodb-values.yaml file.

Also note that because we did not include the --namespace flag with helm install, our chart objects will be created in the default namespace.

Once you have created the release, you will see output about its status, along with information about the created objects and instructions for interacting with them:

Output
NAME:   mongo
LAST DEPLOYED: Tue Apr 16 21:51:05 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                              DATA  AGE
mongo-mongodb-replicaset-init     1     1s
mongo-mongodb-replicaset-mongodb  1     1s
mongo-mongodb-replicaset-tests    1     0s
...

You can now check on the creation of your Pods with the following command:

kubectl get pods

You will see output like the following as the Pods are being created:

Output
NAME                         READY   STATUS     RESTARTS   AGE
mongo-mongodb-replicaset-0   1/1     Running    0          67s
mongo-mongodb-replicaset-1   0/1     Init:0/3   0          8s

The READY and STATUS outputs here indicate that the Pods in our StatefulSet are not fully ready: the Init Containers associated with the Pod's containers are still running. Because StatefulSet members are created in sequential order, each Pod in the StatefulSet must be Running and Ready before the next Pod will be created.

Once the Pods have been created and all of their associated containers are running, you will see this output:

Output
NAME                         READY   STATUS    RESTARTS   AGE
mongo-mongodb-replicaset-0   1/1     Running   0          2m33s
mongo-mongodb-replicaset-1   1/1     Running   0          94s
mongo-mongodb-replicaset-2   1/1     Running   0          36s

The Running STATUS indicates that your Pods are bound to nodes and that the containers associated with those Pods are running. READY indicates how many containers in a Pod are running. For more information, please consult the documentation on Pod lifecycles.

Note:

If you see unexpected phases in the STATUS column, remember that you can troubleshoot your Pods with the following commands:

kubectl describe pods your_pod
kubectl logs your_pod

Each of the Pods in your StatefulSet has a name that combines the name of the StatefulSet with the ordinal index of the Pod. Because we created three replicas, our StatefulSet members are numbered 0-2, and each has a stable DNS entry composed of the following elements: $(statefulset-name)-$(ordinal).$(service name).$(namespace).svc.cluster.local.

In our case, the StatefulSet and the Headless Service created by the mongodb-replicaset chart have the same names:

kubectl get statefulset
Output

NAME                       READY   AGE
mongo-mongodb-replicaset   3/3     4m2s
kubectl get svc
Output

NAME                              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
kubernetes                        ClusterIP   10.245.0.1   <none>        443/TCP     42m
mongo-mongodb-replicaset          ClusterIP   None         <none>        27017/TCP   4m35s
mongo-mongodb-replicaset-client   ClusterIP   None         <none>        27017/TCP   4m35s

This means that the first member of our StatefulSet will have the following DNS entry:

mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local

Because we need our application to connect to each MongoDB instance, it's essential that we have this information so that we can communicate directly with the Pods, rather than with the Service. When we create our custom application Helm chart, we will pass the DNS entries for each Pod to our application using environment variables.

With your database instances up and running, you are ready to create the chart for your Node application.

Step 4 — Creating a Custom Application Chart and Configuring Parameters

We will create a custom Helm chart for our Node application and modify the default files in the standard chart directory so that our application can work with the replica set we have just created. We will also create files to define ConfigMap and Secret objects for our application.

First, create a new chart directory called nodeapp with the following command:

helm create nodeapp

This will create a directory called nodeapp in your ~/node_project folder with the following resources:

  • A Chart.yaml file with basic information about your chart.
  • A values.yaml file that allows you to set specific parameter values, as you did with your MongoDB deployment.
  • A .helmignore file with file and directory patterns that will be ignored when packaging charts.
  • A templates/ directory with the template files that will generate Kubernetes manifests.
  • A templates/tests/ directory for test files.
  • A charts/ directory for any charts that this chart depends on.

The first file we will modify out of these default files is values.yaml. Open that file now:

nano nodeapp/values.yaml

The values that we will set here include:

  • The number of replicas.
  • The application image we want to use. In our case, this will be the node-replicas image we created in Step 1.
  • The ServiceType. In this case, we will specify LoadBalancer to create a point of access to our application for testing purposes. Because we are working with a DigitalOcean Kubernetes cluster, this will create a DigitalOcean Load Balancer when we deploy our chart. In production, you can configure your chart to use Ingress Resources and Ingress Controllers to route traffic to your Services.
  • The targetPort to specify the port on the Pod where our application will be exposed.

We will not enter environment variables into this file. Instead, we will create templates for ConfigMap and Secret objects and add these values to our application Deployment manifest, located at ~/node_project/nodeapp/templates/deployment.yaml.

Configure the following values in the values.yaml file:

~/node_project/nodeapp/values.yaml

# Default values for nodeapp.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 3

image:
  repository: your_dockerhub_username/node-replicas
  tag: latest
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

service:
  type: LoadBalancer
  port: 80
  targetPort: 8080
...

Save and close the file when you are finished editing.

Next, open a secret.yaml file in the nodeapp/templates directory:

nano nodeapp/templates/secret.yaml

In this file, add values for your MONGO_USERNAME and MONGO_PASSWORD application constants. These are the constants that your application will expect to have access to at runtime, as specified in db.js, your database connection file. As you add the values for these constants, remember to use the base64-encoded values that you used earlier in Step 2 when creating your mongo-secret object. If you need to recreate those values, you can return to Step 2 and run the relevant commands again.

Add the following code to the file:

~/node_project/nodeapp/templates/secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
data:
  MONGO_USERNAME: your_encoded_username
  MONGO_PASSWORD: your_encoded_password

The name of this Secret object will depend on the name of your Helm release, which you will specify when you deploy the application chart. For example, if you name your release nodejs, as we will later in this tutorial, the Secret will be named nodejs-auth.

Save and close the file when you are finished.

Next, open a file to create a ConfigMap for your application:

nano nodeapp/templates/configmap.yaml

In this file, we will define the remaining variables that our application expects: MONGO_HOSTNAME, MONGO_PORT, MONGO_DB, and MONGO_REPLICASET. Our MONGO_HOSTNAME variable will include the DNS entry for each instance in our replica set, since this is what the MongoDB connection URI requires.

According to the Kubernetes documentation, when an application implements liveness and readiness checks, SRV records should be used when connecting to the Pods. As discussed in Step 3, our Pod SRV records follow this pattern: $(statefulset-name)-$(ordinal).$(service name).$(namespace).svc.cluster.local. Since our MongoDB StatefulSet implements liveness and readiness checks, we should use these stable identifiers when defining the values of the MONGO_HOSTNAME variable.

Add the following code to the file to define the MONGO_HOSTNAME, MONGO_PORT, MONGO_DB, and MONGO_REPLICASET variables. You are free to use another name for your MONGO_DB database, but your MONGO_HOSTNAME and MONGO_REPLICASET values must be written as they appear here:

~/node_project/nodeapp/templates/configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  MONGO_HOSTNAME: "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local"  
  MONGO_PORT: "27017"
  MONGO_DB: "sharkinfo"
  MONGO_REPLICASET: "db"

Because we have already created the StatefulSet object and replica set, the hostnames that are listed here must be listed in your file exactly as they appear in this example. If you destroy these objects and rename your MongoDB Helm release, then you will need to revise the values included in this ConfigMap. The same applies for MONGO_REPLICASET, since we specified the replica set name with our MongoDB release.

Also note that the values listed here are quoted, which is the expectation for environment variables in Helm.

Save and close the file when you are finished editing.

With your chart parameter values defined and your Secret and ConfigMap manifests created, you can edit the application Deployment template to use your environment variables.

Step 5 — Integrating Environment Variables into Your Helm Deployment

With the files for our application Secret and ConfigMap in place, we will need to make sure that our application Deployment can use these values. We will also customize the liveness and readiness probes that are already defined in the Deployment manifest.

Open the application Deployment template for editing:

nano nodeapp/templates/deployment.yaml

Though this is a YAML file, Helm templates use a different syntax from standard Kubernetes YAML files in order to generate manifests. For more information about templates, see the Helm documentation.

In the file, first add an env key to your application container specifications, below the imagePullPolicy key and above ports:

~/node_project/nodeapp/templates/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
...
  spec:
    containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        env:
        ports:

Next, add the following keys to the list of env variables:

~/node_project/nodeapp/templates/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
...
  spec:
    containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        env:
        - name: MONGO_USERNAME
          valueFrom:
            secretKeyRef:
              key: MONGO_USERNAME
              name: {{ .Release.Name }}-auth
        - name: MONGO_PASSWORD
          valueFrom:
            secretKeyRef:
              key: MONGO_PASSWORD
              name: {{ .Release.Name }}-auth
        - name: MONGO_HOSTNAME
          valueFrom:
            configMapKeyRef:
              key: MONGO_HOSTNAME
              name: {{ .Release.Name }}-config
        - name: MONGO_PORT
          valueFrom:
            configMapKeyRef:
              key: MONGO_PORT
              name: {{ .Release.Name }}-config
        - name: MONGO_DB
          valueFrom:
            configMapKeyRef:
              key: MONGO_DB
              name: {{ .Release.Name }}-config      
        - name: MONGO_REPLICASET
          valueFrom:
            configMapKeyRef:
              key: MONGO_REPLICASET
              name: {{ .Release.Name }}-config        

Each variable includes a reference to its value, defined either by a secretKeyRef key, in the case of Secret values, or configMapKeyRef for ConfigMap values. These keys point to the Secret and ConfigMap files we created in the previous Step.

Next, under the ports key, modify the containerPort definition to specify the port on the container where our application will be exposed:

~/node_project/nodeapp/templates/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
...
  spec:
    containers:
    ...
      env:
    ...
      ports:
        - name: http
          containerPort: 8080
          protocol: TCP
      ...

Next, let's modify the liveness and readiness checks that are included in this Deployment manifest by default. These checks ensure that our application Pods are running and ready to serve traffic:

  • Readiness probes assess whether or not a Pod is ready to serve traffic, stopping all requests to the Pod until the checks succeed.
  • Liveness probes check basic application behavior to determine whether or not the application in the container is running and behaving as expected. If a liveness probe fails, Kubernetes will restart the container.

For more about both, see the relevant discussion in Architecting Applications for Kubernetes.

In our case, we will build on the httpGet request that Helm has provided by default and test whether or not our application is accepting requests on the /sharks endpoint. The kubelet service will perform the probe by sending a GET request to the Node server running in the application Pod's container and listening on port 8080. If the status code for the response is between 200 and 400, then the kubelet will conclude that the container is healthy. Otherwise, in the case of a 400 or 500 status, kubelet will either stop traffic to the container, in the case of the readiness probe, or restart the container, in the case of the liveness probe.

Add the following modification to the stated path for the liveness and readiness probes:

~/node_project/nodeapp/templates/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
...
  spec:
    containers:
    ...
      env:
    ...
      ports:
        - name: http
          containerPort: 8080
          protocol: TCP
      livenessProbe:
        httpGet:
          path: /sharks
          port: http
      readinessProbe:
        httpGet:
          path: /sharks
          port: http

Save and close the file when you are finished editing.

You are now ready to create your application release with Helm. Run the following helm install command, which includes the name of the release and the location of the chart directory:

helm install --name nodejs ./nodeapp

Remember that you can run helm install with the --dry-run and --debug options first, as discussed in Step 3, to check the generated manifests for your release.

Again, because we are not including the --namespace flag with helm install, our chart objects will be created in the default namespace.

You will see the following output indicating that your release has been created:

Output
NAME:   nodejs
LAST DEPLOYED: Wed Apr 17 18:10:29 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME           DATA  AGE
nodejs-config  4     1s

==> v1/Deployment
NAME            READY  UP-TO-DATE  AVAILABLE  AGE
nodejs-nodeapp  0/3    3           0          1s

...

Again, the output will indicate the status of the release, along with information about the created objects and how you can interact with them.

Check the status of your Pods:

kubectl get pods
Output
NAME                              READY   STATUS    RESTARTS   AGE
mongo-mongodb-replicaset-0        1/1     Running   0          57m
mongo-mongodb-replicaset-1        1/1     Running   0          56m
mongo-mongodb-replicaset-2        1/1     Running   0          55m
nodejs-nodeapp-577df49dcc-b5fq5   1/1     Running   0          117s
nodejs-nodeapp-577df49dcc-bkk66   1/1     Running   0          117s
nodejs-nodeapp-577df49dcc-lpmt2   1/1     Running   0          117s

Once your Pods are up and running, check your Services:

kubectl get svc
Output

NAME                              TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)        AGE
kubernetes                        ClusterIP      10.245.0.1     <none>            443/TCP        96m
mongo-mongodb-replicaset          ClusterIP      None           <none>            27017/TCP      58m
mongo-mongodb-replicaset-client   ClusterIP      None           <none>            27017/TCP      58m
nodejs-nodeapp                    LoadBalancer   10.245.33.46   your_lb_ip        80:31518/TCP   3m22s

The EXTERNAL-IP associated with the nodejs-nodeapp Service is the IP address where you can access the application from outside of the cluster. If you see a <pending> status in the EXTERNAL-IP column, this means that your load balancer is still being created.

Once you see an IP in that column, navigate to it in your browser: http://your_lb_ip.
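
You can also test the application from the command line; substitute the external IP from your own output:

curl -I http://your_lb_ip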

You should see the following landing page:

Now that your replicated application is working, let's add some test data to ensure that replication is working between members of the replica set.

Step 6 — Testing MongoDB Replication

With our application running and accessible through an external IP address, we can add some test data and ensure that it is being replicated between the members of our MongoDB replica set.

First, make sure you have navigated your browser to the application landing page:

Click on the Get Shark Info button. You will see a page with an entry form where you can enter a shark name and a description of that shark's general character:

In the form, add an initial shark of your choosing. To demonstrate, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

Click on the Submit button. You will see a page with this shark information displayed back to you:

Now head back to the shark information form by clicking on Sharks in the top navigation bar:

Enter a new shark of your choosing. We'll go with Whale Shark and Large:

Once you click Submit, you will see that the new shark has been added to the shark collection in your database:

Let's check that the data we've entered has been replicated between the primary and secondary members of our replica set.

Get a list of your Pods:

kubectl get pods
Output
NAME                              READY   STATUS    RESTARTS   AGE
mongo-mongodb-replicaset-0        1/1     Running   0          74m
mongo-mongodb-replicaset-1        1/1     Running   0          73m
mongo-mongodb-replicaset-2        1/1     Running   0          72m
nodejs-nodeapp-577df49dcc-b5fq5   1/1     Running   0          5m4s
nodejs-nodeapp-577df49dcc-bkk66   1/1     Running   0          5m4s
nodejs-nodeapp-577df49dcc-lpmt2   1/1     Running   0          5m4s

To access the mongo shell on your Pods, you can use the kubectl exec command and the username you used to create your mongo-secret in Step 2. Access the mongo shell on the first Pod in the StatefulSet with the following command:

kubectl exec -it mongo-mongodb-replicaset-0 -- mongo -u your_database_username -p --authenticationDatabase admin

When prompted, enter the password associated with this username:

Output
MongoDB shell version v4.1.9
Enter password: 

You will be dropped into an administrative shell:

Output
MongoDB server version: 4.1.9
Welcome to the MongoDB shell.
...

db:PRIMARY>

Though the prompt itself includes this information, you can manually check to see which replica set member is the primary with the rs.isMaster() method:

rs.isMaster()

You will see output like the following, indicating the hostname of the primary:

Output
db:PRIMARY> rs.isMaster()
{
        "hosts" : [
                "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
                "mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
                "mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local:27017"
        ],
        ...
        "primary" : "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
        ...

Next, switch to your sharkinfo database:

use sharkinfo
Output
switched to db sharkinfo

List the collections in the database:

show collections
Output
sharks

Output the documents in the collection:

db.sharks.find()

You will see the following output:

Output
{ "_id" : ObjectId("5cb7702c9111a5451c6dc8bb"), "name" : "Megalodon Shark", "character" : "Ancient", "__v" : 0 }
{ "_id" : ObjectId("5cb77054fcdbf563f3b47365"), "name" : "Whale Shark", "character" : "Large", "__v" : 0 }

Exit the MongoDB Shell:

exit

Now that we have checked the data on our primary, let's check that it's being replicated to a secondary. kubectl exec into mongo-mongodb-replicaset-1 with the following command:

kubectl exec -it mongo-mongodb-replicaset-1 -- mongo -u your_database_username -p --authenticationDatabase admin

Once in the administrative shell, we will need to use the db.setSlaveOk() method to permit read operations from the secondary instance:

db.setSlaveOk(1)

Switch to the sharkinfo database:

use sharkinfo
Output
switched to db sharkinfo

Output the documents in the collection:

db.sharks.find()

You should now see the same information that you saw when running this method on your primary instance:

Output
db:SECONDARY> db.sharks.find()
{ "_id" : ObjectId("5cb7702c9111a5451c6dc8bb"), "name" : "Megalodon Shark", "character" : "Ancient", "__v" : 0 }
{ "_id" : ObjectId("5cb77054fcdbf563f3b47365"), "name" : "Whale Shark", "character" : "Large", "__v" : 0 }

This output confirms that your application data is being replicated between the members of your replica set.
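
If you would like to inspect replication health in more detail, you can also run the rs.status() method from the mongo shell, which reports the state and last heartbeat of each replica set member:

rs.status()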

Conclusion

You have now deployed a replicated, highly available shark information application on a Kubernetes cluster using Helm charts. This demo application and the workflow outlined in this tutorial can act as a starting point as you build custom charts for your application and take advantage of Helm's stable repository and other chart repositories.

As you move toward production, consider replacing the LoadBalancer Service used for testing here with Ingress Resources and an Ingress Controller, as discussed in Step 4.

To learn more about Helm, see An Introduction to Helm, the Package Manager for Kubernetes, How To Install Software on Kubernetes Clusters with the Helm Package Manager, and the Helm documentation.

How to Use Express.js, Node.js and MongoDB.js

In this post, I will show you how to use Express.js, Node.js and MongoDB.js. We will be creating a very simple Node application that will allow users to input data they want to store in a MongoDB database. It will also show all items that have been entered into the database.

Creating a Node Application

To get started I would recommend creating a new directory that will contain our application. For this demo I am creating a directory called node-demo. After creating the directory you will need to change into that directory.

mkdir node-demo
cd node-demo

Once we are in the directory we will need to create an application and we can do this by running the command
npm init

This will ask you a series of questions. Here are the answers I gave to the prompts.
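
The prompts look roughly like the following; the answers shown here are examples, and pressing ENTER accepts the defaults in parentheses:

package name: (node-demo)
version: (1.0.0)
description: A simple node demo
entry point: (index.js) app.js
test command:
git repository:
keywords:
author:
license: (ISC)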

The first step is to create a file that will contain our code for our Node.js server.

touch app.js

In our app.js we are going to add the following code to build a very simple Node.js Application.

var express = require("express");
var app = express();
var port = 3000;
 
app.get("/", (req, res) => {
  res.send("Hello World");
});
 
app.listen(port, () => {
  console.log("Server listening on port " + port);
});

The code requires the express.js module. It then creates an app by calling express(). We define our port to be 3000.

The app.get line will listen for requests from the browser and will return the text “Hello World” back to the browser.

The last line actually starts the server and tells it to listen on port 3000.

Installing Express

Our app.js required the Express.js module. We need to install express in order for this to work properly. Go to your terminal and enter this command.

npm install express --save

This command will install the express module and record it as a dependency in our package.json, as shown below.
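
After the install completes, the dependencies section of your package.json will contain an entry similar to this one; your version number may differ:

"dependencies": {
  "express": "^4.16.3"
}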

To test our application you can go to the terminal and enter the command

node app.js

Open up a browser and navigate to the url http://localhost:3000

You will see the following in your browser

Creating Website to Save Data to MongoDB Database

Instead of showing the text “Hello World” when people view your application, what we want to do is show a place for users to save data to the database.

We are going to allow users to enter a first name and a last name that we will be saving in the database.

To do this we will need to create a basic HTML file. In your terminal enter the following command to create an index.html file.

touch index.html

In our index.html file we will be creating an input field where users can input data that they want to have stored in the database. We will also need a button for users to click on that will add the data to the database.

Here is what our index.html file looks like.

<!DOCTYPE html>
<html>
  <head>
    <title>Intro to Node and MongoDB</title>
  </head>

  <body>
    <h1>Intro to Node and MongoDB</h1>
    <form method="post" action="/addname">
      <label>Enter Your Name</label><br>
      <input type="text" name="firstName" placeholder="Enter first name..." required>
      <input type="text" name="lastName" placeholder="Enter last name..." required>
      <input type="submit" value="Add Name">
    </form>
  </body>
</html>

If you are familiar with HTML, you will not find anything unusual in our code for our index.html file. We are creating a form where users can input their first name and last name and then click an “Add Name” button.

The form will do a POST call to the /addname endpoint. We will be talking about endpoints and POST requests later in this tutorial.

Displaying our Website to Users

We were previously displaying the text “Hello World” to users when they visited our website. Now we want to display the html file that we created. To do this we will need to change the app.get line in our app.js file.

We will be using the sendFile command to show the index.html file. We will need to tell the server exactly where to find the index.html file. We can do that by using a Node global called __dirname. The __dirname global provides the current directory where the command was run. We will then append the path to our index.html file.

The app.get line will need to be changed to:

app.get("/", (req, res) => {
  res.sendFile(__dirname + "/index.html");
});

Once you have saved your app.js file, we can test it by going to terminal and running node app.js

Open your browser and navigate to “http://localhost:3000”. You will see the following

Installing Mongoose

Now we need to add our database to the application. We will be connecting to a MongoDB database. I am assuming that you already have MongoDB installed and running on your computer.

To connect to the MongoDB database we are going to use a module called Mongoose. We will need to install the mongoose module just like we did with express. Go to your terminal and enter the following command.

npm install mongoose --save

This will install the mongoose module and add it as a dependency in our package.json.

Connecting to the Database

Now that we have the mongoose module installed, we need to connect to the database in our app.js file. MongoDB, by default, runs on port 27017. You connect to the database by telling it the location of the database and the name of the database.

In our app.js file after the line for the port and before the app.get line, enter the following lines to get access to mongoose and to connect to the database. For the database, I am going to use “node-demo”.

var mongoose = require("mongoose");
mongoose.Promise = global.Promise;
mongoose.connect("mongodb://localhost:27017/node-demo");

Creating a Database Schema

Once the user enters data in the input field and clicks the add button, we want the contents of the input field to be stored in the database. In order to know the format of the data in the database, we need to have a Schema.

For this tutorial, we will need a very simple Schema that has only two fields. I am going to call the fields firstName and lastName. The data stored in both fields will be a String.

After connecting to the database in our app.js we need to define our Schema. Here are the lines you need to add to the app.js.

var nameSchema = new mongoose.Schema({
  firstName: String,
  lastName: String
});

Once we have built our Schema, we need to create a model from it. I am going to call my model “User”. Here is the line you will add next to create our model.

var User = mongoose.model("User", nameSchema);

Creating RESTful API

Now that we have a connection to our database, we need to create the mechanism by which data will be added to the database. This is done through our REST API. We will need to create an endpoint that will be used to send data to our server. Once the server receives this data then it will store the data in the database.

An endpoint is a route that our server will be listening to in order to get data from the browser. We already have one route created in the application: the route listening at the endpoint “/”, which is the homepage of our application.

HTTP Verbs in a REST API

The communication between the client (the browser) and the server is done through an HTTP verb. The most common HTTP verbs are GET, PUT, POST, and DELETE.

The following table explains what each HTTP verb does.

HTTP Verb   Operation
GET         Read
POST        Create
PUT         Update
DELETE      Delete

As you can see from these verbs, they form the basis of CRUD operations that I talked about previously.

Building a CRUD endpoint

If you remember, the form in our index.html file used a post method to call this endpoint. We will now create this endpoint.

In our previous endpoint we used a “GET” http verb to display the index.html file. We are going to do something very similar but instead of using “GET”, we are going to use “POST”. To get started this is what the framework of our endpoint will look like.

app.post("/addname", (req, res) => {
 
});

Express Middleware

To fill out the contents of our endpoint, we want to store the firstName and lastName entered by the user into the database. The values for firstName and lastName are in the body of the request that we send to the server. We want to capture that data, convert it to JSON and store it into the database.

Express.js version 4 removed all of its bundled middleware. To parse the data in the body we will need to add middleware into our application to provide this functionality. We will be using the body-parser module. We need to install it, so in your terminal window enter the following command.

npm install body-parser --save

Once it is installed, we will need to require this module and configure it. The configuration will allow us to pass the data for firstName and lastName in the body to the server. It can also convert that data into JSON format. This will be handy because we can take this formatted data and save it directly into our database.

To add the body-parser middleware to our application and configure it, we can add the following lines directly after the line that sets our port.

var bodyParser = require('body-parser');
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));

Saving data to database

Mongoose provides a save function that will take a JSON object and store it in the database. Our body-parser middleware will convert the user’s input into the JSON format for us.

To save the data into the database, we need to create a new instance of the model that we created earlier. We will pass the user’s input into this instance. Once we have it, we just need to call the save method.

Mongoose will return a promise on a save to the database. The save will either finish successfully or it will fail, and the promise provides two methods that will handle both of these scenarios.

If this save to the database was successful it will return to the .then segment of the promise. In this case we want to send text back the user to let them know the data was saved to the database.

If it fails it will return to the .catch segment of the promise. In this case, we want to send text back to the user telling them the data was not saved to the database. It is best practice to also change the statusCode that is returned from the default 200 to a 400. A 400 statusCode signifies that the operation failed.

Now putting all of this together here is what our final endpoint will look like.

app.post("/addname", (req, res) => {
  var myData = new User(req.body);
  myData.save()
    .then(item => {
      res.send("item saved to database");
    })
    .catch(err => {
      res.status(400).send("unable to save to database");
    });
});
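
If you would like to exercise the endpoint without using the form, you can also send a request from the command line; this assumes the server is running locally on port 3000:

curl -d "firstName=Jane&lastName=Doe" http://localhost:3000/addname
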
Testing our code

Save your code. Go to your terminal and enter the command node app.js to start our server. Open up your browser and navigate to the URL “http://localhost:3000”. You will see our index.html file displayed to you.

Make sure you have mongo running.

Enter your first name and last name in the input fields and then click the “Add Name” button. You should get back text that says the name has been saved to the database like below.
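
You can also confirm the record in the mongo shell; Mongoose stores documents from our User model in a users collection by default:

mongo
use node-demo
db.users.find()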

Access to Code

The final version of the code is available in my GitHub repo. Thank you for reading!

Build a REST API using Node.js, Express.js, Mongoose.js and MongoDB

Node.js, Express.js, Mongoose.js, and MongoDB is a great combination for building easy and fast REST APIs. You will see how much faster this combination is than other existing frameworks, because Node.js is a packaged compilation of Google’s V8 JavaScript engine that works on non-blocking, event-driven I/O. Express.js is a JavaScript web server framework with complete web development functionality, including REST APIs.

This tutorial is divided into several steps:

Step #1. Create Express.js Application and Install Required Modules
Step #2. Add Mongoose.js Module as ORM for MongoDB
Step #3. Create Product Mongoose Model
Step #4. Create Routes for the REST API endpoint
Step #5. Test REST API Endpoints

Source codes here:
https://github.com/didinj/NodeRestApi...

Node.js, ExpressJs, MongoDB and Vue.js (MEVN Stack) Application Tutorial

In this tutorial, you'll learn how to integrate Vue.js with a Node.js backend (using the Express framework) and MongoDB, and how to build an application with Node.js, Express.js, MongoDB and Vue.js.

Vue.js is a JavaScript framework with a growing number of users. Released 4 years ago, it’s now one of the most popular front-end frameworks. There are some reasons why people like Vue.js. Using Vue.js is very simple if you are already familiar with HTML and JavaScript. The project also provides clear documentation and examples, making it easy for beginners to learn the framework. Vue.js can be used for both simple and complex applications. If your application is quite complex, you can use Vuex for state management, which is officially supported. In addition, it’s also very flexible: you can write templates in HTML, JavaScript or JSX.

This tutorial shows you how to integrate Vue.js with a Node.js backend (using the Express framework) and MongoDB. As an example, we’re going to create a simple application for managing posts, which includes listing posts, creating a post, updating a post and deleting a post (basic CRUD functionality). I divide this tutorial into two parts. The first part is setting up the Node.js back-end and database. The other part is writing Vue.js code, including how to build .vue code using Webpack.

Dependencies

There are some dependencies required for this project. Add the dependencies below to your package.json. Then run npm install to install these dependencies.

  "dependencies": {
    "body-parser": "~1.17.2",
    "dotenv": "~4.0.0",
    "express": "~4.16.3",
    "lodash": "~4.17.10",
    "mongoose": "~5.2.9",
    "morgan": "~1.9.0"
  },
  "devDependencies": {
    "axios": "~0.18.0",
    "babel-core": "~6.26.3",
    "babel-loader": "~7.1.5",
    "babel-preset-env": "~1.7.0",
    "babel-preset-stage-3": "~6.24.1",
    "bootstrap-vue": "~2.0.0-rc.11",
    "cross-env": "~5.2.0",
    "css-loader": "~1.0.0",
    "vue": "~2.5.17",
    "vue-loader": "~15.3.0",
    "vue-router": "~3.0.1",
    "vue-style-loader": "~4.1.2",
    "vue-template-compiler": "~2.5.17",
    "webpack": "~4.16.5",
    "webpack-cli": "^3.1.0"
  },

Project Structure

Below is the overview of directory structure for this project.

  app
    config
    controllers
    models
    queries
    routes
    views
  public
    dist
    src

The app directory contains all files related to the server side. The public directory contains two sub-directories: dist and src. dist is used for the build output, while src is for the front-end code files.

Model

First, we define a model for Post using Mongoose. To make it simple, it only has two properties: title and content.

app/models/Post.js

  const mongoose = require('mongoose');

  const { Schema } = mongoose;

  const PostSchema = new Schema(
    {
      title: { type: String, trim: true, index: true, default: '' },
      content: { type: String },
    },
    {
      collection: 'posts',
      timestamps: true,
    },
  );

  module.exports = mongoose.model('Post', PostSchema);

Queries

After defining the model, we write some queries that will be needed in the controllers.

app/queries/posts.js

  const Post = require('../models/Post');

  /**
   * Save a post.
   *
   * @param {Object} post - Javascript object or Mongoose object
   * @returns {Promise.<Post>}
   */
  exports.save = (post) => {
    if (!(post instanceof Post)) {
      post = new Post(post);
    }

    return post.save();
  };

  /**
   * Get post list.
   * @param {object} [criteria] - Filter options
   * @returns {Promise.<Array.<Post>>}
   */
  exports.getPostList = (criteria = {}) => Post.find(criteria);

  /**
   * Get post by ID.
   * @param {string} id - Post ID
   * @returns {Promise.<Post>}
   */
  exports.getPostById = id => Post.findOne({ _id: id });

  /**
   * Delete a post.
   * @param {string} id - Post ID
   * @returns {Promise}
   */
  exports.deletePost = id => Post.findByIdAndRemove(id);

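For illustration, these helpers could be exercised as follows once a Mongoose connection is open. This snippet is only a sketch for quick manual testing, not part of the tutorial code, and the connection URL is an assumption:

  const mongoose = require('mongoose');
  const postQueries = require('./app/queries/posts');

  // Example connection URL only; adjust it to your environment.
  mongoose.connect('mongodb://localhost:27017/mevn-tutorial')
    .then(() => postQueries.save({ title: 'First post', content: 'Hello' }))
    .then(saved => postQueries.getPostById(saved._id))
    .then(found => console.log(found.title)) // 'First post'
    .then(() => mongoose.disconnect())
    .catch(err => console.error(err));
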
Controllers

We need API controllers for creating a post, listing posts, getting the details of a post, updating a post and deleting a post.

app/controllers/api/posts/create.js

  const postQueries = require('../../../queries/posts');

  module.exports = (req, res) => postQueries.save(req.body)
    .then((post) => {
      if (!post) {
        return Promise.reject(new Error('Post not created'));
      }

      return res.status(200).send(post);
    })
    .catch((err) => {
      console.error(err);

      return res.status(500).send('Unable to create post');
    });

app/controllers/api/posts/delete.js

  const postQueries = require('../../../queries/posts');

  module.exports = (req, res) => postQueries.deletePost(req.params.id)
    .then(() => res.status(200).send())
    .catch((err) => {
      console.error(err);

      return res.status(500).send('Unable to delete post');
    });

app/controllers/api/posts/details.js

  const postQueries = require('../../../queries/posts');

  module.exports = (req, res) => postQueries.getPostById(req.params.id)
    .then((post) => {
      if (!post) {
        return Promise.reject(new Error('Post not found'));
      }

      return res.status(200).send(post);
    })
    .catch((err) => {
      console.error(err);

      return res.status(500).send('Unable to get post');
    });

app/controllers/api/posts/list.js

  const postQueries = require('../../../queries/posts');

  // This route has no URL params, so list all posts.
  module.exports = (req, res) => postQueries.getPostList()
    .then(posts => res.status(200).send(posts))
    .catch((err) => {
      console.error(err);

      return res.status(500).send('Unable to get post list');
    });

app/controllers/api/posts/update.js

  const _ = require('lodash');

  const postQueries = require('../../../queries/posts');

  module.exports = (req, res) => postQueries.getPostById(req.params.id)
    .then(async (post) => {
      if (!post) {
        return Promise.reject(new Error('Post not found'));
      }

      const { title, content } = req.body;

      _.assign(post, {
        title, content
      });

      await postQueries.save(post);

      return res.status(200).send({
        success: true,
        data: post,
      })
    })
    .catch((err) => {
      console.error(err);

      return res.status(500).send('Unable to update post');
    });

Routes

We need some pages for user interaction and some API endpoints for processing HTTP requests. To keep the app scalable, it’s better to separate the routes for pages and for the API.

app/routes/index.js

  const express = require('express');

  const routes = express.Router();

  routes.use('/api', require('./api'));
  routes.use('/', require('./pages'));

  module.exports = routes;


Below are the API routes.

app/routes/api/index.js

  const express = require('express');

  const router = express.Router();

  router.get('/posts/', require('../../controllers/api/posts/list'));
  router.get('/posts/:id', require('../../controllers/api/posts/details'));
  router.post('/posts/', require('../../controllers/api/posts/create'));
  router.patch('/posts/:id', require('../../controllers/api/posts/update'));
  router.delete('/posts/:id', require('../../controllers/api/posts/delete'));

  module.exports = router;

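As a quick sanity check once the server is running, the endpoints can be exercised with axios (already listed in devDependencies). The snippet below is only an illustrative sketch and assumes the default port 4000:

  const axios = require('axios');

  const api = 'http://localhost:4000/api';

  // Create a post, update it, then delete it and list what remains.
  axios.post(`${api}/posts/`, { title: 'Test', content: 'Body' })
    .then(res => res.data._id)
    .then(id => axios.patch(`${api}/posts/${id}`, { title: 'Updated' })
      .then(() => axios.delete(`${api}/posts/${id}`)))
    .then(() => axios.get(`${api}/posts/`))
    .then(res => console.log(`${res.data.length} post(s) remaining`))
    .catch(err => console.error(err.message));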

For the pages in this tutorial, we use a plain HTML file; you can easily replace it with any HTML template engine if you want. The HTML file contains a div whose id is app. Later, the Vue.js application will use the element with id app to render its content. What is rendered on each page is configured in the Vue.js router in part 2 of this tutorial.

app/routes/pages/index.js

  const express = require('express');

  const router = express.Router();

  router.get('/posts/', (req, res) => {
    res.sendFile(`${__basedir}/views/index.html`);
  });

  router.get('/posts/create', (req, res) => {
    res.sendFile(`${__basedir}/views/index.html`);
  });

  router.get('/posts/:id', (req, res) => {
    res.sendFile(`${__basedir}/views/index.html`);
  });

  module.exports = router;

Below is the HTML file.

app/views/index.html

  <!DOCTYPE html>
  <html>
    <head>
      <meta charset="utf-8">
      <title>VueJS Tutorial by Woolha.com</title>
      <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/font-awesome/4.4.0/css/font-awesome.min.css" type="text/css" media="all" />
      <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" integrity="sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm" crossorigin="anonymous">
      <script src="https://code.jquery.com/jquery-3.2.1.slim.min.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script>
    </head>
    <body>
      <div id="app"></div>
      <script src="/dist/js/main.js"></script>
    </body>
  </html>

Below is the main script of the application; run this file to start the server-side application.

app/index.js

  require('dotenv').config();

  const bodyParser = require('body-parser');
  const express = require('express');
  const http = require('http');
  const mongoose = require('mongoose');
  const morgan = require('morgan');
  const path = require('path');

  const dbConfig = require('./config/database');
  const routes = require('./routes');

  const app = express();
  const port = process.env.PORT || 4000;

  global.__basedir = __dirname;

  mongoose.Promise = global.Promise;

  mongoose.connect(dbConfig.url, dbConfig.options, (err) => {
    if (err) {
      console.error(err.stack || err);
    }
  });

  /* General setup */
  app.use(morgan('dev')); // HTTP request logging
  app.use(bodyParser.json());
  app.use(bodyParser.urlencoded({ extended: true }));

  app.use('/', routes);

  const MAX_AGE = 86400000;

  // Select which directories or files under public can be served to users
  app.use('/', express.static(path.join(__dirname, '../public'), { maxAge: MAX_AGE }));

  // Error handler
  app.use((err, req, res, next) => { // eslint-disable-line no-unused-vars
    res.status(err.status || 500);

    if (err.status === 404) {
      res.locals.page = {
        title: 'Not Found',
        noIndex: true,
      };

      console.error(`Not found: ${req.url}`);

      return res.status(404).send();
    }

    console.error(err.stack || err);

    return res.status(500).send();
  });

  http
    .createServer(app)
    .listen(port, () => {
      console.info(`HTTP server started on port ${port}`);
    })
    .on('error', (err) => {
      console.error(err.stack || err);
    });

  process.on('uncaughtException', (err) => {
    if (err.name === 'MongoError') {
      mongoose.connection.emit('error', err);
    } else {
      console.error(err.stack || err);
    }
  });

  module.exports = app;

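Note that app/index.js requires ./config/database, which is not shown in this tutorial. A minimal sketch of app/config/database.js might look like the following; the MONGODB_URI variable name and the fallback URL are assumptions, not part of the original code:

  // app/config/database.js (hypothetical sketch)
  // The URL is read from the environment (loaded by dotenv in app/index.js),
  // with a local MongoDB instance as the fallback.
  module.exports = {
    url: process.env.MONGODB_URI || 'mongodb://localhost:27017/mevn-tutorial',
    options: {
      useNewUrlParser: true,
    },
  };
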
That’s all for the server-side preparation. In the next part, we set up the Vue.js client-side application and build the code with Webpack into a single JavaScript file, ready to be loaded from the HTML. We’re building a simple application with basic CRUD functionality for managing posts.

Create Vue.js Components

For managing posts, we’re going to create three components. The first is for creating a new post, the second is for editing a post, and the third is for managing posts (displaying the list of posts and allowing post deletion).

First, here is the component for creating a new post. It has one method, createPost, which validates the data and sends an HTTP request to the server. We use axios for sending HTTP requests.

public/src/components/Posts/Create.vue

  <template>
    <b-container>
      <h1 class="d-flex justify-content-center">Create a Post</h1>
      <p v-if="errors.length">
        <b>Please correct the following error(s):</b>
        <ul>
          <li v-for="error in errors">{{ error }}</li>
        </ul>
      </p>
      <b-form @submit.prevent>
        <b-form-group>
          <b-form-input type="text" class="form-control" placeholder="Title of the post" v-model="post.title"></b-form-input>
        </b-form-group>
        <b-form-group>
          <b-form-textarea class="form-control" placeholder="Write the content here" v-model="post.content"></b-form-textarea>
        </b-form-group>
        <b-button variant="primary" v-on:click="createPost">Create Post</b-button>
      </b-form>
    </b-container>
  </template>

  <script>
    import axios from 'axios';

    export default {
      data: () => ({
        errors: [],
        post: {
          title: '',
          content: '',
        },
      }),
      methods: {
        createPost(event) {
          if (event) {
            event.preventDefault();
          }

          this.errors = [];

          if (!this.post.title) {
            this.errors.push('Title required.');

            return;
          }

          const url = 'http://localhost:4000/api/posts';
          const param = this.post;

          axios
            .post(url, param)
            .then((response) => {
              console.log(response);
              window.location.href = 'http://localhost:4000/posts';
            }).catch((error) => {
              console.log(error);
            });
        },
      }
    }
  </script>


Below is the component for editing a post. Of course, we need the post’s current data before editing it, so there’s a fetchPost method that is called when the component is created. There’s also an updatePost method, which validates the data and calls the API to update the post.

public/src/components/Posts/Edit.vue

  <template>
    <b-container>
      <h1 class="d-flex justify-content-center">Edit a Post</h1>
      <p v-if="errors.length">
        <b>Please correct the following error(s):</b>
        <ul>
          <li v-for="error in errors">{{ error }}</li>
        </ul>
      </p>
      <b-form @submit.prevent>
        <b-form-group>
          <b-form-input type="text" class="form-control" placeholder="Title of the post" v-model="post.title"></b-form-input>
        </b-form-group>
        <b-form-group>
          <b-form-textarea class="form-control" placeholder="Write the content here" v-model="post.content"></b-form-textarea>
        </b-form-group>
        <b-button variant="primary" v-on:click="updatePost">Update Post</b-button>
      </b-form>
    </b-container>
  </template>

  <script>
    import axios from 'axios';

    export default {
      data: () => ({
        errors: [],
        post: {
          _id: '',
          title: '',
          content: '',
        },
      }),
      created: function() {
        this.fetchPost();
      },
      methods: {
        fetchPost() {
          const postId = this.$route.params.id;
          const url = `http://localhost:4000/api/posts/${postId}`;

          axios
            .get(url)
            .then((response) => {
              this.post = response.data;
              console.log(this.post);
          });
        },
        updatePost(event) {
          if (event) {
            event.preventDefault();
          }

          this.errors = [];

          if (!this.post.title) {
            this.errors.push('Title required.');

            return;
          }

          const url = `http://localhost:4000/api/posts/${this.post._id}`;
          const param = this.post;

          axios
            .patch(url, param)
            .then((response) => {
              console.log(response);
              window.alert('Post successfully saved');
            }).catch((error) => {
              console.log(error);
            });
        },
      }
    }
  </script>


For managing posts, we first need to fetch the list of posts. Similar to the edit component, this component has a fetchPosts method that is called when the component is created. For deleting a post, there’s also a deletePost method. If a post is successfully deleted, fetchPosts is called again to refresh the post list.

public/src/components/Posts/List.vue

  <template>
    <b-container>
      <h1 class="d-flex justify-content-center">Post List</h1>
      <b-button variant="primary" style="color: #ffffff; margin: 20px;"><a href="/posts/create" style="color: #ffffff;">Create New Post</a></b-button>
      <b-container fluid v-if="posts.length">
        <table class="table">
          <thead>
            <tr class="d-flex">
              <td class="col-8">Titleqqqqqqqqq</td>
              <td class="col-4">Actions</td>
            </tr>
          </thead>
          <tbody>
            <tr v-for="post in posts" class="d-flex">
              <td class="col-8">{{ post.title }}</td>
              <td class="col-2"><a v-bind:href="'http://localhost:4000/posts/' + post._id"><button type="button" class="btn btn-primary"><i class="fa fa-edit" aria-hidden="true"></i></button></a></td>
              <td class="col-2"><button type="button" class="btn btn-danger" v-on:click="deletePost(post._id)"><i class="fa fa-remove" aria-hidden="true"></i></button></td>
            </tr>
          </tbody>
        </table>
      </b-container>
    </b-container>
  </template>

  <script>
    import axios from 'axios';

    export default {
      data: () => ({
        posts: [],
      }),
      // Use a regular function (not an arrow) so `this` is the component.
      created() {
        this.fetchPosts();
      },
      methods: {
        fetchPosts() {
          const url = 'http://localhost:4000/api/posts/';

          axios
            .get(url)
            .then((response) => {
              console.log(response.data);
              this.posts = response.data;
          });
        },
        deletePost(id) {
          const url = `http://localhost:4000/api/posts/${id}`;

          // axios.delete takes a config object as its second argument,
          // not a request body; this endpoint needs neither.
          axios
            .delete(url)
            .then(() => {
              console.log('Post successfully deleted');

              this.fetchPosts();
            }).catch((error) => {
              console.log(error);
            });
        },
      }
    }
  </script>


All of the components above are wrapped in a root component that serves as the base template. The root component renders the navbar, which is the same across all pages. The component for each route is rendered inside router-view.

public/src/App.vue

  <template>
    <div>
      <b-navbar toggleable="md" type="dark" variant="dark">
        <b-navbar-toggle target="nav_collapse"></b-navbar-toggle>
        <b-navbar-brand to="/">My Vue App</b-navbar-brand>
        <b-collapse is-nav id="nav_collapse">
          <b-navbar-nav>
            <b-nav-item to="/">Home</b-nav-item>
            <b-nav-item to="/posts">Manage Posts</b-nav-item>
          </b-navbar-nav>
        </b-collapse>
      </b-navbar>
      <!-- routes will be rendered here -->
      <router-view />
    </div>
  </template>

  <script>

  export default {
    name: 'app',
  };
  </script>


To determine which component should be rendered, we use Vue.js’ router. For each route, we define the path, the route name and the component itself. A component is rendered when the current URL matches its path. Note that the router below uses history mode, which works here because the Express page routes defined earlier serve index.html for each of these paths.

public/src/router/index.js

  import Vue from 'vue'
  import Router from 'vue-router'

  import CreatePost from '../components/Posts/Create.vue';
  import EditPost from '../components/Posts/Edit.vue';
  import ListPost from '../components/Posts/List.vue';

  Vue.use(Router);

  let router = new Router({
    mode: 'history',
    routes: [
      {
        path: '/posts',
        name: 'ListPost',
        component: ListPost,
      },
      {
        path: '/posts/create',
        name: 'CreatePost',
        component: CreatePost,
      },
      {
        path: '/posts/:id',
        name: 'EditPost',
        component: EditPost,
      },
    ]
  });

  export default router;


Lastly, we need a main script as the entry point. It imports the main App component and the router, then creates a new Vue instance that renders App into the element with id app.

public/src/main.js

  import BootstrapVue from 'bootstrap-vue';
  import Vue from 'vue';

  import App from './App.vue';
  import router from './router';

  Vue.use(BootstrapVue);
  Vue.config.productionTip = false;
  new Vue({
    el: '#app',
    router,
    render: h => h(App),
  });

Configure Webpack

To build the code into a single JavaScript file, we need a Webpack configuration. Below is a basic configuration for Webpack 4.

webpack.config.js

  const { VueLoaderPlugin } = require('vue-loader');

  module.exports = {
    entry: './public/src/main.js',
    output: {
      path: `${__dirname}/public/dist/js/`,
      filename: '[name].js',
    },
    resolve: {
      modules: [
        'node_modules',
      ],
      alias: {
        // vue: './vue.js'
      }
    },
    module: {
      rules: [
        {
          test: /\.css$/,
          use: [
            'vue-style-loader',
            'css-loader'
          ]
        },
        {
          test: /\.vue$/,
          loader: 'vue-loader',
          options: {
            loaders: {
            }
            // other vue-loader options go here
          }
        },
        {
          test: /\.js$/,
          loader: 'babel-loader',
          exclude: /node_modules/
        },
      ]
    },
    plugins: [
      new VueLoaderPlugin(),
    ],
  };

After that, run ./node_modules/webpack/bin/webpack.js. You can add the command to the scripts section of package.json so that you can run Webpack with the shorter command npm run build, as shown below.

  "dependencies": {
    ...
  },
  "devDependencies": {
    ...
  },
  "scripts": {
    "build": "./node_modules/webpack/bin/webpack.js",
    "start": "node app/index.js"
  },

Finally, you can try the application. This code is also available on Woolha.com’s GitHub.
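
To run everything locally, assuming MongoDB is running and reachable at the configured URL: install the dependencies with npm install, build the front end with npm run build, start the server with npm start, and open http://localhost:4000/posts in your browser.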