How To Scale a Node.js Application with MongoDB Using Helm

Originally published by Kathleen Juell at digitalocean.com: https://www.digitalocean.com/community/tutorials/how-to-scale-a-node-js-application-with-mongodb-using-helm

Introduction

Kubernetes is a system for running modern, containerized applications at scale. With it, developers can deploy and manage applications across clusters of machines. And though it can be used to improve efficiency and reliability in single-instance application setups, Kubernetes is designed to run multiple instances of an application across groups of machines.

When creating multi-service deployments with Kubernetes, many developers opt to use the Helm package manager. Helm streamlines the process of creating multiple Kubernetes resources by offering charts and templates that coordinate how these objects interact. It also offers pre-packaged charts for popular open-source projects.

In this tutorial, you will deploy a Node.js application with a MongoDB database onto a Kubernetes cluster using Helm charts. You will use the official Helm MongoDB replica set chart to create a StatefulSet object consisting of three Pods, a Headless Service, and three PersistentVolumeClaims. You will also create a chart to deploy a multi-replica Node.js application using a custom application image. The setup you will build in this tutorial will mirror the functionality of the code described in Containerizing a Node.js Application with Docker Compose and will be a good starting point to build a resilient Node.js application with a MongoDB data store that can scale with your needs.

Prerequisites

To complete this tutorial, you will need:

  • A Kubernetes cluster (this tutorial uses a DigitalOcean Kubernetes cluster).
  • The kubectl command-line tool installed on your local machine and configured to connect to your cluster.
  • Helm installed on your local machine and Tiller installed on your cluster.
  • Docker installed on your local machine.
  • A Docker Hub account, which you will use to push and pull the application image.

Step 1 — Cloning and Packaging the Application

To use our application with Kubernetes, we will need to package it so that the kubelet agent can pull the image. Before packaging the application, however, we will need to modify the MongoDB connection URI in the application code to ensure that our application can connect to the members of the replica set that we will create with the Helm mongodb-replicaset chart.

Our first step will be to clone the node-mongo-docker-dev repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in Containerizing a Node.js Application for Development With Docker Compose, which uses a demo Node.js application with a MongoDB database to demonstrate how to set up a development environment with Docker Compose. You can find more information about the application itself in the series From Containers to Kubernetes with Node.js.

Clone the repository into a directory called node_project:

git clone https://github.com/do-community/node-mongo-docker-dev.git node_project

Navigate to the node_project directory:

cd node_project

The node_project directory contains files and directories for a shark information application that works with user input. It has been modernized to work with containers: sensitive and specific configuration information has been removed from the application code and refactored to be injected at runtime, and the application's state has been offloaded to a MongoDB database.

For more information about designing modern, containerized applications, please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes.

When we deploy the Helm mongodb-replicaset chart, it will create:

  • A StatefulSet object with three Pods — the members of the MongoDB replica set. Each Pod will have an associated PersistentVolumeClaim and will maintain a fixed identity in the event of rescheduling.
  • A MongoDB replica set made up of the Pods in the StatefulSet. The set will include one primary and two secondaries. Data will be replicated from the primary to the secondaries, ensuring that our application data remains highly available.

For our application to interact with the database replicas, the MongoDB connection URI in our code will need to include both the hostnames of the replica set members as well as the name of the replica set itself. We therefore need to include these values in the URI.

The file in our cloned repository that specifies database connection information is called db.js. Open that file now using nano or your favorite editor:

nano db.js

Currently, the file includes constants that are referenced in the database connection URI at runtime. The values for these constants are injected using Node’s process.env property, which returns an object with information about your user environment at runtime. Setting values dynamically in our application code allows us to decouple the code from the underlying infrastructure, which is necessary in a dynamic, stateless environment. For more information about refactoring application code in this way, see Step 2 of Containerizing a Node.js Application for Development With Docker Compose and the relevant discussion in The 12-Factor App.
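For example, if the container's environment includes MONGO_PORT=27017, then process.env.MONGO_PORT will evaluate to the string "27017" at runtime:

console.log(process.env.MONGO_PORT); // "27017" when the variable is set in the environment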

The constants for the connection URI and the URI string itself currently look like this:

~/node_project/db.js

...
const {
  MONGO_USERNAME,
  MONGO_PASSWORD,
  MONGO_HOSTNAME,
  MONGO_PORT,
  MONGO_DB
} = process.env;

...

const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
...

In keeping with a 12FA approach, we do not want to hard code the hostnames of our replica instances or our replica set name into this URI string. The existing MONGO_HOSTNAME constant can be expanded to include multiple hostnames — the members of our replica set — so we will leave that in place. We will need to add a replica set constant to the options section of the URI string, however.

Add MONGO_REPLICASET to both the URI constant object and the connection string:

~/node_project/db.js

...
const {
  MONGO_USERNAME,
  MONGO_PASSWORD,
  MONGO_HOSTNAME,
  MONGO_PORT,
  MONGO_DB,
  MONGO_REPLICASET
} = process.env;

...
const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?replicaSet=${MONGO_REPLICASET}&authSource=admin`;
...

Using the replicaSet option in the options section of the URI allows us to pass in the name of the replica set, which, along with the hostnames defined in the MONGO_HOSTNAME constant, will allow us to connect to the set members.
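For example, with placeholder values, the fully assembled URI would look something like this (hosts listed without an explicit port default to 27017):

mongodb://your_username:your_password@host-0.example.com,host-1.example.com,host-2.example.com:27017/your_database?replicaSet=your_replica_set&authSource=admin

Here, the comma-separated hostnames come from MONGO_HOSTNAME and the set name comes from MONGO_REPLICASET.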

Save and close the file when you are finished editing.

With your database connection information modified to work with replica sets, you can now package your application, build the image with the docker build command, and push it to Docker Hub.

Build the image with docker build and the -t flag, which allows you to tag the image with a memorable name. In this case, tag the image with your Docker Hub username and name it node-replicas or a name of your own choosing:

docker build -t your_dockerhub_username/node-replicas .

The . in the command specifies that the build context is the current directory.

It will take a minute or two to build the image. Once it is complete, check your images:

docker images

You will see the following output:

Output

REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE
your_dockerhub_username/node-replicas   latest              56a69b4bc882        7 seconds ago       90.1MB
node                                    10-alpine           aa57b0242b33        6 days ago          71MB

Next, log in to the Docker Hub account you created in the prerequisites:

docker login -u your_dockerhub_username 

When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json file in your non-root user's home directory with your Docker Hub credentials.

Push the application image to Docker Hub with the docker push command. Remember to replace your_dockerhub_username with your own Docker Hub username:

docker push your_dockerhub_username/node-replicas

You now have an application image that you can pull to run your replicated application with Kubernetes. The next step will be to configure specific parameters to use with the MongoDB Helm chart.

Step 2 — Creating Secrets for the MongoDB Replica Set

The stable/mongodb-replicaset chart provides different options when it comes to using Secrets, and we will create two to use with our chart deployment:

  • A Secret for our replica set keyfile that will function as a shared password between replica set members, allowing them to authenticate other members.
  • A Secret for our MongoDB admin user, who will be created as a root user on the admin database. This role will allow you to create subsequent users with limited permissions when deploying your application to production.

With these Secrets in place, we will be able to set our preferred parameter values in a dedicated values file and create the StatefulSet object and MongoDB replica set with the Helm chart.

First, let's create the keyfile. We will use the openssl command with the rand option to generate a 756-byte random string for the keyfile:

openssl rand -base64 756 > key.txt

The output generated by the command will be base64 encoded, ensuring uniform data transmission, and redirected to a file called key.txt, following the guidelines stated in the mongodb-replicaset chart authentication documentation. The key itself must be between 6 and 1024 characters long, consisting only of characters in the base64 set.
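If you would like to confirm the size of the generated key before creating the Secret, you can check the byte count of the file:

wc -c key.txt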

You can now create a Secret called keyfilesecret using this file with kubectl create:

kubectl create secret generic keyfilesecret --from-file=key.txt

This will create a Secret object in the default namespace, since we have not created a specific namespace for our setup.

You will see the following output indicating that your Secret has been created:

Output
secret/keyfilesecret created

Remove key.txt:

rm key.txt

Alternatively, if you would like to save the file, be sure to restrict its permissions and add it to your .gitignore file to keep it out of version control.

Next, create the Secret for your MongoDB admin user. The first step will be to convert your desired username and password to base64.

Convert your database username:

echo -n 'your_database_username' | base64

Note down the value you see in the output.

Next, convert your password:

echo -n 'your_database_password' | base64

Take note of the value in the output here as well.

Open a file for the Secret:

nano secret.yaml

Note: Kubernetes objects are typically defined using YAML, which strictly forbids tabs and requires two spaces for indentation. If you would like to check the formatting of any of your YAML files, you can use a linter or test the validity of your syntax using kubectl create with the --dry-run and --validate flags:

kubectl create -f your_yaml_file.yaml --dry-run --validate=true

In general, it is a good idea to validate your syntax before creating resources with kubectl.

Add the following code to the file to create a Secret that will define a user and password with the encoded values you just created. Be sure to replace the dummy values here with your own encoded username and password:

~/node_project/secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
data:
  user: your_encoded_username
  password: your_encoded_password

Here, we're using the key names that the mongodb-replicaset chart expects: user and password. We have named the Secret object mongo-secret, but you are free to name it anything you would like.

Save and close the file when you are finished editing.

Create the Secret object with the following command:

kubectl create -f secret.yaml

You will see the following output:

Output
secret/mongo-secret created

Again, you can either remove secret.yaml or restrict its permissions and add it to your .gitignore file.
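If you would like to confirm that both Secrets now exist in the default namespace, you can list them:

kubectl get secrets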

With your Secret objects created, you can move on to specifying the parameter values you will use with the mongodb-replicaset chart and creating the MongoDB deployment.

Step 3 — Configuring the MongoDB Helm Chart and Creating a Deployment

Helm comes with an actively maintained repository called stable that contains the chart we will be using: mongodb-replicaset. To use this chart with the Secrets we've just created, we will create a file with configuration parameter values called mongodb-values.yaml and then install the chart using this file.

Our mongodb-values.yaml file will largely mirror the default values.yaml file in the mongodb-replicaset chart repository. We will, however, make the following changes to our file:

  • We will set the auth parameter to true to ensure that our database instances start with authorization enabled. This means that all clients will be required to authenticate for access to database resources and operations.
  • We will add information about the Secrets we created in the previous Step so that the chart can use these values to create the replica set keyfile and admin user.
  • We will decrease the size of the PersistentVolumes associated with each Pod in the StatefulSet to use the minimum viable DigitalOcean Block Storage unit, 1GB, though you are free to modify this to meet your storage requirements.

Before writing the mongodb-values.yaml file, however, you should first check that you have a StorageClass created and configured to provision storage resources. Each of the Pods in your database StatefulSet will have a sticky identity and an associated PersistentVolumeClaim, which will dynamically provision a PersistentVolume for the Pod. If a Pod is rescheduled, the PersistentVolume will be mounted to whichever node the Pod is scheduled on (though each Volume must be manually deleted if its associated Pod or StatefulSet is permanently deleted).

Because we are working with DigitalOcean Kubernetes, our default StorageClass provisioner is set to dobs.csi.digitalocean.com, the DigitalOcean Block Storage provisioner, which we can check by typing:

kubectl get storageclass

If you are working with a DigitalOcean cluster, you will see the following output:

Output
NAME                         PROVISIONER                 AGE
do-block-storage (default)   dobs.csi.digitalocean.com   21m

If you are not working with a DigitalOcean cluster, you will need to create a StorageClass and configure a provisioner of your choice. For details about how to do this, please see the official documentation.
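For reference, a minimal StorageClass manifest for a cluster running on AWS, assuming the in-tree kubernetes.io/aws-ebs provisioner is available in your cluster, might look like this (adjust the provisioner and parameters for your environment):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2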

Now that you have ensured that you have a StorageClass configured, open mongodb-values.yaml for editing:

nano mongodb-values.yaml

You will set values in this file that will do the following:

  • Enable authorization.
  • Reference your keyfilesecret and mongo-secret objects.
  • Specify 1Gi for your PersistentVolumes.
  • Set your replica set name to db.
  • Specify 3 replicas for the set.
  • Pin the mongo image to the latest version at the time of writing: 4.1.9.

Paste the following code into the file:

~/node_project/mongodb-values.yaml

replicas: 3
port: 27017
replicaSetName: db
podDisruptionBudget: {}
auth:
  enabled: true
  existingKeySecret: keyfilesecret
  existingAdminSecret: mongo-secret
imagePullSecrets: []
installImage:
  repository: unguiculus/mongodb-install
  tag: 0.7
  pullPolicy: Always
copyConfigImage:
  repository: busybox
  tag: 1.29.3
  pullPolicy: Always
image:
  repository: mongo
  tag: 4.1.9
  pullPolicy: Always
extraVars: {}
metrics:
  enabled: false
  image:
    repository: ssalaues/mongodb-exporter
    tag: 0.6.1
    pullPolicy: IfNotPresent
  port: 9216
  path: /metrics
  socketTimeout: 3s
  syncTimeout: 1m
  prometheusServiceDiscovery: true
  resources: {}
podAnnotations: {}
securityContext:
  enabled: true
  runAsUser: 999
  fsGroup: 999
  runAsNonRoot: true
init:
  resources: {}
  timeout: 900
resources: {}
nodeSelector: {}
affinity: {}
tolerations: []
extraLabels: {}
persistentVolume:
  enabled: true
  #storageClass: "-"
  accessModes:
    - ReadWriteOnce
  size: 1Gi
  annotations: {}
serviceAnnotations: {}
terminationGracePeriodSeconds: 30
tls:
  enabled: false
configmap: {}
readinessProbe:
  initialDelaySeconds: 5
  timeoutSeconds: 1
  failureThreshold: 3
  periodSeconds: 10
  successThreshold: 1
livenessProbe:
  initialDelaySeconds: 30
  timeoutSeconds: 5
  failureThreshold: 3
  periodSeconds: 10
  successThreshold: 1

The persistentVolume.storageClass parameter is commented out here: removing the comment and setting its value to "-" would disable dynamic provisioning. In our case, because we are leaving this value undefined, the chart will choose the default provisioner — in our case, dobs.csi.digitalocean.com.
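If you would rather pin the chart to a specific StorageClass instead of relying on the default, you can uncomment the parameter and set its value explicitly. For example, on a DigitalOcean cluster the relevant portion of mongodb-values.yaml might look like this:

persistentVolume:
  enabled: true
  storageClass: "do-block-storage"
  accessModes:
    - ReadWriteOnce
  size: 1Gi
  annotations: {}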

Also note the accessMode associated with the persistentVolume key: ReadWriteOnce means that the provisioned volume can be mounted as read-write by a single node only. Please see the documentation for more information about different access modes.

To learn more about the other parameters included in the file, see the configuration table included with the repo.

Save and close the file when you are finished editing.

Before deploying the mongodb-replicaset chart, you will want to update the stable repo with the helm repo update command:

helm repo update

This will get the latest chart information from the stable repository.

Finally, install the chart with the following command:

helm install --name mongo -f mongodb-values.yaml stable/mongodb-replicaset

Note: Before installing a chart, you can run helm install with the --dry-run and --debug options to check the generated manifests for your release:

helm install --name your_release_name -f your_values_file.yaml --dry-run --debug your_chart

Note that we are naming the Helm release mongo. This name will refer to this particular deployment of the chart with the configuration options we've specified. We've pointed to these options by including the -f flag and our mongodb-values.yaml file.

Also note that because we did not include the --namespace flag with helm install, our chart objects will be created in the default namespace.

Once you have created the release, you will see output about its status, along with information about the created objects and instructions for interacting with them:

Output
NAME:   mongo
LAST DEPLOYED: Tue Apr 16 21:51:05 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                              DATA  AGE
mongo-mongodb-replicaset-init     1     1s
mongo-mongodb-replicaset-mongodb  1     1s
mongo-mongodb-replicaset-tests    1     0s
...

You can now check on the creation of your Pods with the following command:

kubectl get pods

You will see output like the following as the Pods are being created:

Output
NAME                         READY   STATUS     RESTARTS   AGE
mongo-mongodb-replicaset-0   1/1     Running    0          67s
mongo-mongodb-replicaset-1   0/1     Init:0/3   0          8s

The READY and STATUS outputs here indicate that the Pods in our StatefulSet are not fully ready: the Init Containers associated with the Pod's containers are still running. Because StatefulSet members are created in sequential order, each Pod in the StatefulSet must be Running and Ready before the next Pod will be created.

Once the Pods have been created and all of their associated containers are running, you will see this output:

Output
NAME                         READY   STATUS    RESTARTS   AGE
mongo-mongodb-replicaset-0   1/1     Running   0          2m33s
mongo-mongodb-replicaset-1   1/1     Running   0          94s
mongo-mongodb-replicaset-2   1/1     Running   0          36s

The Running STATUS indicates that your Pods are bound to nodes and that the containers associated with those Pods are running. READY indicates how many containers in a Pod are running. For more information, please consult the documentation on Pod lifecycles.
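You can also confirm that the chart created a PersistentVolumeClaim for each StatefulSet member, as described earlier in this Step, by listing the claims:

kubectl get pvc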

Note:

If you see unexpected phases in the STATUS column, remember that you can troubleshoot your Pods with the following commands:

kubectl describe pods your_pod
kubectl logs your_pod

Each of the Pods in your StatefulSet has a name that combines the name of the StatefulSet with the ordinal index of the Pod. Because we created three replicas, our StatefulSet members are numbered 0-2, and each has a stable DNS entry comprised of the following elements: $(statefulset-name)-$(ordinal).$(service name).$(namespace).svc.cluster.local.

In our case, the StatefulSet and the Headless Service created by the mongodb-replicaset chart have the same names:

kubectl get statefulset
Output

NAME                       READY   AGE
mongo-mongodb-replicaset   3/3     4m2s
kubectl get svc
Output

NAME                              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
kubernetes                        ClusterIP   10.245.0.1   <none>        443/TCP     42m
mongo-mongodb-replicaset          ClusterIP   None         <none>        27017/TCP   4m35s
mongo-mongodb-replicaset-client   ClusterIP   None         <none>        27017/TCP   4m35s

This means that the first member of our StatefulSet will have the following DNS entry:

mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local

Because we need our application to connect to each MongoDB instance, it's essential that we have this information so that we can communicate directly with the Pods, rather than with the Service. When we create our custom application Helm chart, we will pass the DNS entries for each Pod to our application using environment variables.

With your database instances up and running, you are ready to create the chart for your Node application.

Step 4 — Creating a Custom Application Chart and Configuring Parameters

We will create a custom Helm chart for our Node application and modify the default files in the standard chart directory so that our application can work with the replica set we have just created. We will also create files to define ConfigMap and Secret objects for our application.

First, create a new chart directory called nodeapp with the following command:

helm create nodeapp

This will create a directory called nodeapp in your ~/node_project folder with the following resources:

  • A Chart.yaml file with basic information about your chart.
  • A values.yaml file that allows you to set specific parameter values, as you did with your MongoDB deployment.
  • A .helmignore file with file and directory patterns that will be ignored when packaging charts.
  • A templates/ directory with the template files that will generate Kubernetes manifests.
  • A templates/tests/ directory for test files.
  • A charts/ directory for any charts that this chart depends on.

The first file we will modify out of these default files is values.yaml. Open that file now:

nano nodeapp/values.yaml

The values that we will set here include:

  • The number of replicas.
  • The application image we want to use. In our case, this will be the node-replicas image we created in Step 1.
  • The ServiceType. In this case, we will specify LoadBalancer to create a point of access to our application for testing purposes. Because we are working with a DigitalOcean Kubernetes cluster, this will create a DigitalOcean Load Balancer when we deploy our chart. In production, you can configure your chart to use Ingress Resources and Ingress Controllers to route traffic to your Services.
  • The targetPort to specify the port on the Pod where our application will be exposed.

We will not enter environment variables into this file. Instead, we will create templates for ConfigMap and Secret objects and add these values to our application Deployment manifest, located at ~/node_project/nodeapp/templates/deployment.yaml.

Configure the following values in the values.yaml file:

~/node_project/nodeapp/values.yaml

# Default values for nodeapp.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 3

image:
  repository: your_dockerhub_username/node-replicas
  tag: latest
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

service:
  type: LoadBalancer
  port: 80
  targetPort: 8080
...

Save and close the file when you are finished editing.

Next, open a secret.yaml file in the nodeapp/templates directory:

nano nodeapp/templates/secret.yaml

In this file, add values for your MONGO_USERNAME and MONGO_PASSWORD application constants. These are the constants that your application will expect to have access to at runtime, as specified in db.js, your database connection file. As you add the values for these constants, remember to use the base64-encoded values that you used earlier in Step 2 when creating your mongo-secret object. If you need to recreate those values, you can return to Step 2 and run the relevant commands again.

Add the following code to the file:

~/node_project/nodeapp/templates/secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
data:
  MONGO_USERNAME: your_encoded_username
  MONGO_PASSWORD: your_encoded_password

The name of this Secret object will depend on the name of your Helm release, which you will specify when you deploy the application chart. For example, if you name your release nodejs, as we will do in Step 5, this Secret will be created as nodejs-auth.

Save and close the file when you are finished.

Next, open a file to create a ConfigMap for your application:

nano nodeapp/templates/configmap.yaml

In this file, we will define the remaining variables that our application expects: MONGO_HOSTNAME, MONGO_PORT, MONGO_DB, and MONGO_REPLICASET. Our MONGO_HOSTNAME variable will include the DNS entry for each instance in our replica set, since this is what the MongoDB connection URI requires.

According to the Kubernetes documentation, when an application implements liveness and readiness checks, SRV records should be used when connecting to the Pods. As discussed in Step 3, our Pod SRV records follow this pattern: $(statefulset-name)-$(ordinal).$(service name).$(namespace).svc.cluster.local. Since our MongoDB StatefulSet implements liveness and readiness checks, we should use these stable identifiers when defining the values of the MONGO_HOSTNAME variable.

Add the following code to the file to define the MONGO_HOSTNAME, MONGO_PORT, MONGO_DB, and MONGO_REPLICASET variables. You are free to use another name for your MONGO_DB database, but your MONGO_HOSTNAME and MONGO_REPLICASET values must be written as they appear here:

~/node_project/nodeapp/templates/configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  MONGO_HOSTNAME: "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local"  
  MONGO_PORT: "27017"
  MONGO_DB: "sharkinfo"
  MONGO_REPLICASET: "db"

Because we have already created the StatefulSet object and replica set, the hostnames that are listed here must be listed in your file exactly as they appear in this example. If you destroy these objects and rename your MongoDB Helm release, then you will need to revise the values included in this ConfigMap. The same applies for MONGO_REPLICASET, since we specified the replica set name with our MongoDB release.

Also note that the values listed here are quoted, which is the expectation for environment variables in Helm.

Save and close the file when you are finished editing.

With your chart parameter values defined and your Secret and ConfigMap manifests created, you can edit the application Deployment template to use your environment variables.

Step 5 — Integrating Environment Variables into Your Helm Deployment

With the files for our application Secret and ConfigMap in place, we will need to make sure that our application Deployment can use these values. We will also customize the liveness and readiness probes that are already defined in the Deployment manifest.

Open the application Deployment template for editing:

nano nodeapp/templates/deployment.yaml

Though this is a YAML file, Helm templates use a different syntax from standard Kubernetes YAML files in order to generate manifests. For more information about templates, see the Helm documentation.

In the file, first add an env key to your application container specifications, below the imagePullPolicy key and above ports:

~/node_project/nodeapp/templates/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
...
  spec:
    containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        env:
        ports:

Next, add the following keys to the list of env variables:

~/node_project/nodeapp/templates/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
...
  spec:
    containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        env:
        - name: MONGO_USERNAME
          valueFrom:
            secretKeyRef:
              key: MONGO_USERNAME
              name: {{ .Release.Name }}-auth
        - name: MONGO_PASSWORD
          valueFrom:
            secretKeyRef:
              key: MONGO_PASSWORD
              name: {{ .Release.Name }}-auth
        - name: MONGO_HOSTNAME
          valueFrom:
            configMapKeyRef:
              key: MONGO_HOSTNAME
              name: {{ .Release.Name }}-config
        - name: MONGO_PORT
          valueFrom:
            configMapKeyRef:
              key: MONGO_PORT
              name: {{ .Release.Name }}-config
        - name: MONGO_DB
          valueFrom:
            configMapKeyRef:
              key: MONGO_DB
              name: {{ .Release.Name }}-config      
        - name: MONGO_REPLICASET
          valueFrom:
            configMapKeyRef:
              key: MONGO_REPLICASET
              name: {{ .Release.Name }}-config        

Each variable includes a reference to its value, defined either by a secretKeyRef key, in the case of Secret values, or configMapKeyRef for ConfigMap values. These keys point to the Secret and ConfigMap files we created in the previous Step.

Next, under the ports key, modify the containerPort definition to specify the port on the container where our application will be exposed:

~/node_project/nodeapp/templates/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
...
  spec:
    containers:
    ...
      env:
    ...
      ports:
        - name: http
          containerPort: 8080
          protocol: TCP
      ...

Next, let's modify the liveness and readiness checks that are included in this Deployment manifest by default. These checks ensure that our application Pods are running and ready to serve traffic:

  • Readiness probes assess whether or not a Pod is ready to serve traffic, stopping all requests to the Pod until the checks succeed.
  • Liveness probes check basic application behavior to determine whether or not the application in the container is running and behaving as expected. If a liveness probe fails, Kubernetes will restart the container.

For more about both, see the relevant discussion in Architecting Applications for Kubernetes.

In our case, we will build on the httpGet request that Helm has provided by default and test whether or not our application is accepting requests on the /sharks endpoint. The kubelet service will perform the probe by sending a GET request to the Node server running in the application Pod's container and listening on port 8080. If the status code for the response is between 200 and 400, then the kubelet will conclude that the container is healthy. Otherwise, in the case of a 400 or 500 status, kubelet will either stop traffic to the container, in the case of the readiness probe, or restart the container, in the case of the liveness probe.

Add the following modification to the stated path for the liveness and readiness probes:

~/node_project/nodeapp/templates/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
...
  spec:
    containers:
    ...
      env:
    ...
      ports:
        - name: http
          containerPort: 8080
          protocol: TCP
      livenessProbe:
        httpGet:
          path: /sharks
          port: http
      readinessProbe:
        httpGet:
          path: /sharks
          port: http
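If your application needs additional time to start before it can respond on the /sharks endpoint, you can also add the standard initialDelaySeconds field to either probe. For example:

      livenessProbe:
        httpGet:
          path: /sharks
          port: http
        initialDelaySeconds: 10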

Save and close the file when you are finished editing.

You are now ready to create your application release with Helm. Run the following helm install command, which includes the name of the release and the location of the chart directory:

helm install --name nodejs ./nodeapp

Remember that you can run helm install with the --dry-run and --debug options first, as discussed in Step 3, to check the generated manifests for your release.

Again, because we are not including the --namespace flag with helm install, our chart objects will be created in the default namespace.

You will see the following output indicating that your release has been created:

Output
NAME:   nodejs
LAST DEPLOYED: Wed Apr 17 18:10:29 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME           DATA  AGE
nodejs-config  4     1s

==> v1/Deployment
NAME            READY  UP-TO-DATE  AVAILABLE  AGE
nodejs-nodeapp  0/3    3           0          1s

...

Again, the output will indicate the status of the release, along with information about the created objects and how you can interact with them.

Check the status of your Pods:

kubectl get pods
Output
NAME                              READY   STATUS    RESTARTS   AGE
mongo-mongodb-replicaset-0        1/1     Running   0          57m
mongo-mongodb-replicaset-1        1/1     Running   0          56m
mongo-mongodb-replicaset-2        1/1     Running   0          55m
nodejs-nodeapp-577df49dcc-b5fq5   1/1     Running   0          117s
nodejs-nodeapp-577df49dcc-bkk66   1/1     Running   0          117s
nodejs-nodeapp-577df49dcc-lpmt2   1/1     Running   0          117s
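Before moving on, you can optionally verify that the Secret and ConfigMap values were injected into the application containers by inspecting the environment of one of the application Pods. Replace your_nodejs_pod with one of the nodejs-nodeapp Pod names from the output above:

kubectl exec your_nodejs_pod -- env | grep MONGO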

Once your Pods are up and running, check your Services:

kubectl get svc
Output

NAME                              TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)        AGE
kubernetes                        ClusterIP      10.245.0.1     <none>            443/TCP        96m
mongo-mongodb-replicaset          ClusterIP      None           <none>            27017/TCP      58m
mongo-mongodb-replicaset-client   ClusterIP      None           <none>            27017/TCP      58m
nodejs-nodeapp                    LoadBalancer   10.245.33.46   your_lb_ip        80:31518/TCP   3m22s

The EXTERNAL-IP associated with the nodejs-nodeapp Service is the IP address where you can access the application from outside of the cluster. If you see a <pending> status in the EXTERNAL-IP column, this means that your load balancer is still being created.

Once you see an IP in that column, navigate to it in your browser: http://your_lb_ip.

You should see the following landing page:

Now that your replicated application is working, let's add some test data to ensure that replication is working between members of the replica set.

Step 6 — Testing MongoDB Replication

With our application running and accessible through an external IP address, we can add some test data and ensure that it is being replicated between the members of our MongoDB replica set.

First, make sure you have navigated your browser to the application landing page:

Click on the Get Shark Info button. You will see a page with an entry form where you can enter a shark name and a description of that shark's general character:

In the form, add an initial shark of your choosing. To demonstrate, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

Click on the Submit button. You will see a page with this shark information displayed back to you:

Now head back to the shark information form by clicking on Sharks in the top navigation bar:

Enter a new shark of your choosing. We'll go with Whale Shark and Large:

Once you click Submit, you will see that the new shark has been added to the shark collection in your database:

Let's check that the data we've entered has been replicated between the primary and secondary members of our replica set.

Get a list of your Pods:

kubectl get pods
Output
NAME                              READY   STATUS    RESTARTS   AGE
mongo-mongodb-replicaset-0        1/1     Running   0          74m
mongo-mongodb-replicaset-1        1/1     Running   0          73m
mongo-mongodb-replicaset-2        1/1     Running   0          72m
nodejs-nodeapp-577df49dcc-b5fq5   1/1     Running   0          5m4s
nodejs-nodeapp-577df49dcc-bkk66   1/1     Running   0          5m4s
nodejs-nodeapp-577df49dcc-lpmt2   1/1     Running   0          5m4s

To access the mongo shell on your Pods, you can use the kubectl exec command and the username you used to create your mongo-secret in Step 2. Access the mongo shell on the first Pod in the StatefulSet with the following command:

kubectl exec -it mongo-mongodb-replicaset-0 -- mongo -u your_database_username -p --authenticationDatabase admin

When prompted, enter the password associated with this username:

Output
MongoDB shell version v4.1.9
Enter password: 

You will be dropped into an administrative shell:

Output
MongoDB server version: 4.1.9
Welcome to the MongoDB shell.
...

db:PRIMARY>

Though the prompt itself includes this information, you can manually check to see which replica set member is the primary with the rs.isMaster() method:

rs.isMaster()

You will see output like the following, indicating the hostname of the primary:

Output
db:PRIMARY> rs.isMaster()
{
        "hosts" : [
                "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
                "mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
                "mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local:27017"
        ],
        ...
        "primary" : "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
        ...

Next, switch to your sharkinfo database:

use sharkinfo
Output
switched to db sharkinfo

List the collections in the database:

show collections
Output
sharks

Output the documents in the collection:

db.sharks.find()

You will see the following output:

Output
{ "_id" : ObjectId("5cb7702c9111a5451c6dc8bb"), "name" : "Megalodon Shark", "character" : "Ancient", "__v" : 0 }
{ "_id" : ObjectId("5cb77054fcdbf563f3b47365"), "name" : "Whale Shark", "character" : "Large", "__v" : 0 }

Exit the MongoDB Shell:

exit

Now that we have checked the data on our primary, let's check that it's being replicated to a secondary. kubectl exec into mongo-mongodb-replicaset-1 with the following command:

kubectl exec -it mongo-mongodb-replicaset-1 -- mongo -u your_database_username -p --authenticationDatabase admin

Once in the administrative shell, we will need to use the db.setSlaveOk() method to permit read operations from the secondary instance:

db.setSlaveOk(1)

Switch to the sharkinfo database:

use sharkinfo
Output
switched to db sharkinfo

Permit the read operation of the documents in the sharks collection:

db.setSlaveOk(1)

Output the documents in the collection:

db.sharks.find()

You should now see the same information that you saw when running this method on your primary instance:

Output
db:SECONDARY> db.sharks.find()
{ "_id" : ObjectId("5cb7702c9111a5451c6dc8bb"), "name" : "Megalodon Shark", "character" : "Ancient", "__v" : 0 }
{ "_id" : ObjectId("5cb77054fcdbf563f3b47365"), "name" : "Whale Shark", "character" : "Large", "__v" : 0 }

This output confirms that your application data is being replicated between the members of your replica set.

Conclusion

You have now deployed a replicated, highly available shark information application on a Kubernetes cluster using Helm charts. This demo application and the workflow outlined in this tutorial can act as a starting point as you build custom charts for your application and take advantage of Helm's stable repository and other chart repositories.

As you move toward production, consider implementing the following:

To learn more about Helm, see An Introduction to Helm, the Package Manager for Kubernetes, How To Install Software on Kubernetes Clusters with the Helm Package Manager, and the Helm documentation.

Angular 7 CRUD with Nodejs and MySQL Example

Hey there! Today we will create a CRUD demo with MySQL, Express, Angular 7 (MEAN), and Node.js from scratch using the Angular CLI.

Below are the requirements for creating the CRUD app on the MEAN stack:

  • Node.js
  • Angular CLI
  • Angular 7
  • Mysql
  • IDE or Text Editor

We assume that you already have the above tools and frameworks installed, and that you are familiar with what each of them does.

So now we will proceed step by step to achieve the task.

1. Update Angular CLI and Create Angular 7 Application

First, we have to update the Angular CLI to the latest version. Open the terminal, go to your projects folder, and type the command below to update the Angular CLI:

sudo npm install -g @angular/cli

Once the above task finishes, the next task is to create a new Angular application with the command below. Go to your projects folder and type:

ng new angular7-crud

Then go to the newly created folder of the Angular application with cd angular7-crud and type ng serve. Now open the browser and go to http://localhost:4200; you should see this page.

2. Create a server with node.js express and Mysql for REST APIs

Create a separate folder named server for the server-side code. Then move inside that folder and create server.js by typing touch server.js.

Let's have a look at the server.js file:

let app = require('express')(),
server = require('http').Server(app),
bodyParser = require('body-parser'),
express = require('express'),
cors = require('cors'),
http = require('http'),
path = require('path');
 
let articleRoute = require('./Routes/article'),
util = require('./Utilities/util');
 
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({extended: false }));
 
app.use(cors());
 
app.use(function(err, req, res, next) {
return res.send({ "statusCode": util.statusCode.ONE, "statusMessage": util.statusMessage.SOMETHING_WENT_WRONG });
});
 
app.use('/article', articleRoute);
 
// catch 404 and forward to error handler
app.use(function(req, res, next) {
next();
});
 
/*first API to check if server is running*/
app.get('*', (req, res) => {
res.sendFile(path.join(__dirname, '../server/client/dist/index.html'));
})
 
 
server.listen(3000,function(){
console.log('app listening on port: 3000');
});

In the above file we can see that the required packages for the app are loaded at the top. Below that, body parsing, middleware, and routing are set up.

The next task is to create the routes in a file called article.js. Create a folder named Routes and add article.js within it.

Add the code below for routing in article.js inside the Routes folder:

let express = require('express'),
router = express.Router(),
util = require('../Utilities/util'),
articleService = require('../Services/article');
 
/**Api to create article */
router.post('/create-article', (req, res) => {
articleService.createArticle(req.body, (data) => {
res.send(data);
});
});
 
// /**Api to update article */
router.put('/update-article', (req, res) => {
articleService.updateArticle(req.body, (data) => {
res.send(data);
});
});
 
// /**Api to delete the article */
router.delete('/delete-article', (req, res) => {
articleService.deleteArticle(req.query, (data) => {
res.send(data);
});
});
 
/**Api to get the list of article */
router.get('/get-article', (req, res) => {
articleService.getArticle(req.query, (data) => {
res.send(data);
});
});
 
// /**API to get the article by id... */
router.get('/get-article-by-id', (req, res) => {
articleService.getArticleById(req.query, (data) => {
res.send(data);
});
});
 
module.exports = router;

Now create a folder named Utilities for all the config, common methods, and the MySQL connection config.

Now add the config values in a file named config.js:

let environment = "dev";
 
let serverURLs = {
"dev": {
"NODE_SERVER": "http://localhost",
"NODE_SERVER_PORT": "3000",
"MYSQL_HOST": 'localhost',
"MYSQL_USER": 'root',
"MYSQL_PASSWORD": 'password',
'MYSQL_DATABASE': 'demo_angular7_crud',
}
}
 
let config = {
"DB_URL_MYSQL": {
"host": `${serverURLs[environment].MYSQL_HOST}`,
"user": `${serverURLs[environment].MYSQL_USER}`,
"password": `${serverURLs[environment].MYSQL_PASSWORD}`,
"database": `${serverURLs[environment].MYSQL_DATABASE}`
},
"NODE_SERVER_PORT": {
"port": `${serverURLs[environment].NODE_SERVER_PORT}`
},
"NODE_SERVER_URL": {
"url": `${serverURLs[environment].NODE_SERVER}`
}
};
 
module.exports = {
config: config
};

Now configure the MySQL connection. We will keep the database connection in a separate file, so create a file named mysqlConfig.js under the Utilities folder and add the lines of code below for the MySQL connection:

var config = require("../Utilities/config").config;
var mysql = require('mysql');
var connection = mysql.createConnection({
host: config.DB_URL_MYSQL.host,
user: config.DB_URL_MYSQL.user,
password: config.DB_URL_MYSQL.password,
database: config.DB_URL_MYSQL.database,
});
 
connection.connect(() => {
require('../Models/Article').initialize();
});
 
let getDB = () => {
return connection;
}
 
module.exports = {
getDB: getDB
}

Now create a separate file named util.js to hold common methods and the common status codes/messages:

// Define Error Codes
let statusCode = {
OK: 200,
FOUR_ZERO_FOUR: 404,
FOUR_ZERO_THREE: 403,
FOUR_ZERO_ONE: 401,
FIVE_ZERO_ZERO: 500,
// referenced by the service layer when a database call fails
INTERNAL_SERVER_ERROR: 500
};
 
// Define Error Messages
let statusMessage = {
SERVER_BUSY : 'Our Servers are busy. Please try again later.',
DATA_UPDATED: 'Data updated successfully.',
DELETE_DATA : 'Delete data successfully',
// referenced by the service layer when a required parameter is not supplied
PARAMS_MISSING : 'Required parameters are missing.'
 
};
 
module.exports = {
statusCode: statusCode,
statusMessage: statusMessage
}

Now the next part is the model. Create a folder named Models, create a file Article.js inside it, and add the code below:

let mysqlConfig = require("../Utilities/mysqlConfig");
 
let initialize = () => {
mysqlConfig.getDB().query("create table IF NOT EXISTS article (id INT auto_increment primary key, category VARCHAR(30), title VARCHAR(24))");
 
}
 
module.exports = {
initialize: initialize
}

Now create a DAO folder and add a file articleDAO.js for writing the common MySQL query functions:

let dbConfig = require("../Utilities/mysqlConfig");


 
let getArticle = (criteria, callback) => {
//criteria.aricle_id ? conditions += ` and aricle_id = '${criteria.aricle_id}'` : true;
dbConfig.getDB().query(`select * from article where 1`,criteria, callback);
}
 
let getArticleDetail = (criteria, callback) => {
    let conditions = "";
criteria.id ? conditions += ` and id = '${criteria.id}'` : true;
dbConfig.getDB().query(`select * from article where 1 ${conditions}`, callback);
}
 
let createArticle = (dataToSet, callback) => {
console.log("insert into article set ? ", dataToSet,'pankaj')
dbConfig.getDB().query("insert into article set ? ", dataToSet, callback);
}
 
let deleteArticle = (criteria, callback) => {
let conditions = "";
criteria.id ? conditions += ` and id = '${criteria.id}'` : true;
console.log(`delete from article where 1 ${conditions}`);
dbConfig.getDB().query(`delete from article where 1 ${conditions}`, callback);
 
}
 
let updateArticle = (criteria,dataToSet,callback) => {
    let conditions = "";
let setData = "";
criteria.id ? conditions += ` and id = '${criteria.id}'` : true;
dataToSet.category ? setData += `category = '${dataToSet.category}'` : true;
dataToSet.title ? setData += `, title = '${dataToSet.title}'` : true;
console.log(`UPDATE article SET ${setData} where 1 ${conditions}`);
dbConfig.getDB().query(`UPDATE article SET ${setData} where 1 ${conditions}`, callback);
}
module.exports = {
getArticle : getArticle,
createArticle : createArticle,
deleteArticle : deleteArticle,
updateArticle : updateArticle,
getArticleDetail : getArticleDetail
}

Now create a Services folder and add a file article.js for all the logic of the APIs:

let async = require('async'),
parseString = require('xml2js').parseString;
 
let util = require('../Utilities/util'),
articleDAO = require('../DAO/articleDAO');
//config = require("../Utilities/config").config;
 
 
/**API to create the atricle */
let createArticle = (data, callback) => {
async.auto({
article: (cb) => {
var dataToSet = {
"category":data.category?data.category:'',
"title":data.title,
}
console.log(dataToSet);
articleDAO.createArticle(dataToSet, (err, dbData) => {
if (err) {
cb(null, { "statusCode": util.statusCode.FOUR_ZERO_ONE, "statusMessage": util.statusMessage.SERVER_BUSY });
return;
}
 
cb(null, { "statusCode": util.statusCode.OK, "statusMessage": util.statusMessage.DATA_UPDATED,"result":dataToSet });
});
}
//]
}, (err, response) => {
callback(response.article);
});
}
 
/**API to update the article */
let updateArticle = (data,callback) => {
async.auto({
articleUpdate :(cb) =>{
if (!data.id) {
cb(null, { "statusCode": util.statusCode.FOUR_ZERO_ONE, "statusMessage": util.statusMessage.PARAMS_MISSING })
return;
}
console.log('phase 1');
var criteria = {
id : data.id,
}
var dataToSet={
"category": data.category,
"title":data.title,
}
console.log(criteria,'test',dataToSet);
                    articleDAO.updateArticle(criteria, dataToSet, (err, dbData)=>{
                        if(err){
cb(null,{"statusCode":util.statusCode.FOUR_ZERO_ONE,"statusMessage":util.statusMessage.SERVER_BUSY});
                        return; 
                        }
                        else{
cb(null, { "statusCode": util.statusCode.OK, "statusMessage": util.statusMessage.DATA_UPDATED,"result":dataToSet });                        
                        }
                    });
}
}, (err,response) => {
callback(response.articleUpdate);
});
}
 
/**API to delete the subject */
let deleteArticle = (data,callback) => {
console.log(data,'data to set')
async.auto({
removeArticle :(cb) =>{
if (!data.id) {
cb(null, { "statusCode": util.statusCode.FOUR_ZERO_ONE, "statusMessage": util.statusMessage.PARAMS_MISSING })
return;
}
var criteria = {
id : data.id,
}
articleDAO.deleteArticle(criteria,(err,dbData) => {
if (err) {
console.log(err);
cb(null, { "statusCode": util.statusCode.FOUR_ZERO_ONE, "statusMessage": util.statusMessage.SERVER_BUSY });
return;
}
cb(null, { "statusCode": util.statusCode.OK, "statusMessage": util.statusMessage.DELETE_DATA });
});
}
}, (err,response) => {
callback(response.removeArticle);
});
}
 
/***API to get the article list */
let getArticle = (data, callback) => {
async.auto({
article: (cb) => {
articleDAO.getArticle({},(err, data) => {
if (err) {
cb(null, {"errorCode": util.statusCode.INTERNAL_SERVER_ERROR,"statusMessage": util.statusMessage.SERVER_BUSY});
return;
}
cb(null, data);
return;
});
}
}, (err, response) => {
callback(response.article);
})
}
 
/***API to get the article detail by id */
let getArticleById = (data, callback) => {
async.auto({
article: (cb) => {
let criteria = {
"id":data.id
}
articleDAO.getArticleDetail(criteria,(err, data) => {
if (err) {
console.log(err,'error----');
cb(null, {"errorCode": util.statusCode.INTERNAL_SERVER_ERROR,"statusMessage": util.statusMessage.SERVER_BUSY});
return;
}
cb(null, data[0]);
return;
});
}
}, (err, response) => {
callback(response.article);
})
}
 
module.exports = {
createArticle : createArticle,
updateArticle : updateArticle,
deleteArticle : deleteArticle,
getArticle : getArticle,
getArticleById : getArticleById
};
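With the server-side files in place, you can install the required packages and start the API server from inside the server folder (this assumes you have already initialized the folder with npm init and that the MySQL credentials and the demo_angular7_crud database in Utilities/config.js match your local setup):

npm install express body-parser cors mysql async xml2js
node server.js

You can then exercise the create endpoint with a quick request, for example:

curl -X POST http://localhost:3000/article/create-article -H "Content-Type: application/json" -d '{"title":"Test","category":"demo"}'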

3. Create an Angular component for performing the CRUD tasks on articles

ng g component article

The above command will generate all the required files to build the article component and will also automatically add this component to app.module.ts.

create src/app/article/article.component.css (0 bytes)
create src/app/article/article.component.html (23 bytes)
create src/app/article/article.component.spec.ts (614 bytes)
create src/app/article/article.component.ts (321 bytes)
update src/app/app.module.ts (390 bytes)

Now we need to add HttpModule and ReactiveFormsModule to app.module.ts. Open and edit src/app/app.module.ts, add these imports, and register them in the @NgModule imports array after BrowserModule. Our app.module.ts will then have the following code:

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { ReactiveFormsModule } from '@angular/forms';
import { HttpModule } from '@angular/http';
 
import { AppComponent } from './app.component';
import { ArticleComponent } from './article/article.component';
import { ArticleService } from './article/article.service';
 
@NgModule({
imports: [
BrowserModule,
HttpModule,
ReactiveFormsModule
],
declarations: [
AppComponent,
ArticleComponent
],
providers: [
ArticleService
],
bootstrap: [
AppComponent
]
})
export class AppModule { }

Now create a service file where we will make all the requests to the server for the CRUD operations. The command for creating a service is ng g service article; here we have simply created a file named article.service.ts. Let's have a look at the code inside this file:

import { Injectable } from '@angular/core';
import { Http, Response, Headers, URLSearchParams, RequestOptions } from '@angular/http';
import { Observable } from 'rxjs';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/catch';
 
import { Article } from './article';
 
@Injectable()
export class ArticleService {
//URL for CRUD operations
    articleUrl = "http://localhost:3000/article";
    //Create constructor to get Http instance
    constructor(private http:Http) {
    }
    
    //Fetch all articles
getAllArticles(): Observable<Article[]> {
return this.http.get(this.articleUrl+"/get-article")
              .map(this.extractData)
         .catch(this.handleError);
 
}
    //Create article
createArticle(article: Article):Observable<number> {
     let cpHeaders = new Headers({ 'Content-Type': 'application/json' });
let options = new RequestOptions({ headers: cpHeaders });
return this.http.post(this.articleUrl+"/create-article", article, options)
.map(success => success.status)
.catch(this.handleError);
}
    //Fetch article by id
getArticleById(articleId: string): Observable<Article> {
        let cpHeaders = new Headers({ 'Content-Type': 'application/json' });
        let options = new RequestOptions({ headers: cpHeaders });
        console.log(this.articleUrl +"/get-article-by-id?id="+ articleId);
        return this.http.get(this.articleUrl +"/get-article-by-id?id="+ articleId)
             .map(this.extractData)
             .catch(this.handleError);
}   
    //Update article
updateArticle(article: Article):Observable<number> {
     let cpHeaders = new Headers({ 'Content-Type': 'application/json' });
        let options = new RequestOptions({ headers: cpHeaders });
return this.http.put(this.articleUrl +"/update-article", article, options)
.map(success => success.status)
.catch(this.handleError);
}
//Delete article    
deleteArticleById(articleId: string): Observable<number> {
        let cpHeaders = new Headers({ 'Content-Type': 'application/json' });
        let options = new RequestOptions({ headers: cpHeaders });
        return this.http.delete(this.articleUrl +"/delete-article?id="+ articleId)
             .map(success => success.status)
             .catch(this.handleError);
}   
    private extractData(res: Response) {
        let body = res.json();
return body;
}
private handleError (error: Response | any) {
        console.error(error.message || error);
        return Observable.throw(error.status);
}
}

In the above file we have made all the HTTP requests for the CRUD operations. Observables from the rxjs library have been used to handle the data fetched from the HTTP requests.
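Both the service and the component below import an Article model from ./article, which is not shown in this post. A minimal src/app/article/article.ts that matches the fields used here could look like the following:

export class Article {
  id: number;
  title: string;
  category: string;
}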

Now let's move to the next file, article.component.ts. Here we have all the logic of the app. Let's have a look at the code inside this file:

import { Component, OnInit } from '@angular/core';
import { FormControl, FormGroup, Validators } from '@angular/forms';
 
import { ArticleService } from './article.service';
import { Article } from './article';
 
@Component({
    selector: 'app-article',
    templateUrl: './article.component.html',
    styleUrls: ['./article.component.css']
})
export class ArticleComponent implements OnInit {
    // Component properties
    allArticles: Article[];
    statusCode: number;
    requestProcessing = false;
    articleIdToUpdate = null;
    processValidation = false;

    // Create the form
    articleForm = new FormGroup({
        title: new FormControl('', Validators.required),
        category: new FormControl('', Validators.required)
    });

    // Inject the service instance
    constructor(private articleService: ArticleService) {
    }

    // Load articles on init
    ngOnInit(): void {
        this.getAllArticles();
    }

    // Fetch all articles
    getAllArticles() {
        this.articleService.getAllArticles()
            .subscribe(
                data => this.allArticles = data,
                errorCode => this.statusCode = errorCode);
    }

    // Handle create and update article
    onArticleFormSubmit() {
        this.processValidation = true;
        if (this.articleForm.invalid) {
            return; // Validation failed, exit from method.
        }
        // Form is valid, now perform create or update
        this.preProcessConfigurations();
        let article = this.articleForm.value;
        if (this.articleIdToUpdate === null) {
            // Generate an article id from the last article in the list, then create the article
            this.articleService.getAllArticles()
                .subscribe(articles => {
                    // If there are no articles yet, start the ids at 1
                    article.id = articles.length === 0 ? 1 : articles[articles.length - 1].id + 1;
                    // Create article
                    this.articleService.createArticle(article)
                        .subscribe(successCode => {
                            this.statusCode = successCode;
                            this.getAllArticles();
                            this.backToCreateArticle();
                        },
                        errorCode => this.statusCode = errorCode);
                });
        } else {
            // Handle update article
            article.id = this.articleIdToUpdate;
            this.articleService.updateArticle(article)
                .subscribe(successCode => {
                    this.statusCode = successCode;
                    this.getAllArticles();
                    this.backToCreateArticle();
                },
                errorCode => this.statusCode = errorCode);
        }
    }

    // Load article by id to edit
    loadArticleToEdit(articleId: string) {
        this.preProcessConfigurations();
        this.articleService.getArticleById(articleId)
            .subscribe(article => {
                this.articleIdToUpdate = article.id;
                this.articleForm.setValue({ title: article.title, category: article.category });
                this.processValidation = true;
                this.requestProcessing = false;
            },
            errorCode => this.statusCode = errorCode);
    }

    // Delete article
    deleteArticle(articleId: string) {
        this.preProcessConfigurations();
        this.articleService.deleteArticleById(articleId)
            .subscribe(successCode => {
                // Expecting success code 204 from the server
                this.statusCode = 204;
                this.getAllArticles();
                this.backToCreateArticle();
            },
            errorCode => this.statusCode = errorCode);
    }

    // Perform preliminary processing configurations
    preProcessConfigurations() {
        this.statusCode = null;
        this.requestProcessing = true;
    }

    // Go back from update to create
    backToCreateArticle() {
        this.articleIdToUpdate = null;
        this.articleForm.reset();
        this.processValidation = false;
    }
}

Now we have to render everything in the browser, so let's have a look inside the article.component.html file.

<h1 class="text-center">Angular 7 CRUD Demo App</h1>
<h3 class="text-center" *ngIf="articleIdToUpdate; else create">
Update Article for Id: {{articleIdToUpdate}}
</h3>
<ng-template #create>
<h3 class="text-center"> Create New Article </h3>
</ng-template>
<div>
<form [formGroup]="articleForm" (ngSubmit)="onArticleFormSubmit()">
<table class="table-striped" style="margin:0 auto;">
<tr><td>Enter Title</td><td><input formControlName="title">
   <label *ngIf="articleForm.get('title').invalid && processValidation" [ngClass] = "'error'"> Title is required. </label>
 </td></tr>
<tr><td>Enter Category</td><td><input formControlName="category">
   <label *ngIf="articleForm.get('category').invalid && processValidation" [ngClass] = "'error'"> Category is required. </label>
  </td></tr>  
<tr><td colspan="2">
   <button class="btn btn-default" *ngIf="!articleIdToUpdate">CREATE</button>
    <button class="btn btn-default" *ngIf="articleIdToUpdate">UPDATE</button>
   <button (click)="backToCreateArticle()" *ngIf="articleIdToUpdate">Go Back</button>
  </td></tr>
</table>
</form>
<br/>
<div class="text-center" *ngIf="statusCode; else processing">
<div *ngIf="statusCode === 201" [ngClass] = "'success'">
   Article added successfully.
</div>
<div *ngIf="statusCode === 409" [ngClass] = "'success'">
Article already exists.
</div>   
<div *ngIf="statusCode === 200" [ngClass] = "'success'">
Article updated successfully.
</div>   
<div *ngIf="statusCode === 204" [ngClass] = "'success'">
Article deleted successfully.
</div>   
<div *ngIf="statusCode === 500" [ngClass] = "'error'">
Internal Server Error.
</div> 
</div>
<ng-template #processing>
  <img *ngIf="requestProcessing" src="assets/images/loading.gif">
</ng-template>
</div>
<h3 class="text-center">Article List</h3>
<table class="table-striped" style="margin:0 auto;" *ngIf="allArticles">
<tr><th> Id</th> <th>Title</th><th>Category</th><th></th><th></th></tr>
<tr *ngFor="let article of allArticles" >
<td>{{article.id}}</td> <td>{{article.title}}</td> <td>{{article.category}}</td>
  <td><button class="btn btn-default" type="button" (click)="loadArticleToEdit(article.id)">Edit</button> </td>
  <td><button class="btn btn-default" type="button" (click)="deleteArticle(article.id)">Delete</button></td>
</tr>
</table>

Since the Node.js API and the Angular app live in two separate folders, server and client, we will run both apps with npm start in two separate terminal tabs.
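
For example, assuming the API lives in the server folder and the Angular app in the client folder, the two terminal tabs would roughly run:

# terminal tab 1 - Node.js API
cd server
npm start

# terminal tab 2 - Angular app
cd client
npm start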

In the browser, open http://localhost:4200. The app will look like the screenshot below.

Angular CRUD with Nodejs and MySQL Example

That’s all for now. Thank you for reading, and I hope this post is helpful for creating CRUD operations with Angular 7, Node.js and MySQL.

================================================


Kubernetes Fundamentals - Learn Kubernetes from the Beginning

Kubernetes Fundamentals - Learn Kubernetes from the Beginning

Kubernetes is about orchestrating containerized apps. Docker is great for your first few containers. As soon as you need to run on multiple machines, scale up/down, distribute the load and so on, you need an orchestrator - you need Kubernetes.


This is the first part of a series of articles on Kubernetes, cause this topic is BIG!

  • Part I - From the beginning, Part I, Basics, Deployment and Minikube: In this part, we cover why Kubernetes, some history and some basic concepts like deploying, Nodes, Pods.
  • Part II - Introducing Services and Labeling: In this part, we deepen our knowledge of Pods and Nodes. We also introduce Services and labeling using labels to query our artifacts.
  • Part III - Scaling: Here we cover how to scale our app
  • Part IV - Auto scaling: In this part we look at how to set up auto-scaling so we can handle sudden large increases of incoming requests
Kubernetes

So what do we know about Kubernetes?

It's an open-source system for automating deployment, scaling, and management of containerized applications

Let's start with the name. It's Greek for helmsman, the person who steers the ship, which is why the logo looks like this, a steering wheel on a boat:

It's also called K8s: take Kubernetes, keep the K and the s, and replace the 8 characters in the middle with an 8. Now you can impress your friends with knowing why it's referred to as K8s.

Here is some more Jeopardy knowledge on its origin: Kubernetes was born out of Google's internal systems called Borg and Omega. It was donated to the CNCF, the Cloud Native Computing Foundation, in 2014. It's written in Go/Golang.

If we see past all this trivia knowledge, it was built by Google as a response to their own experience handling a ton of containers. It's also Open Source and battle-tested to handle really large systems, like planet-scale large systems.

So the sales pitch is:

Run billions of containers a week, Kubernetes can scale without increasing your ops team

Sounds amazing, right? Billions of containers, cause we are all Google-sized. No? :) Well, even if you have something like 10-100 containers, it's for you.

In this part I hope to cover the following:

  • Why Kubernetes and Orchestration in General
  • Hello world: Minikube basics, talking through Minikube, simple deploy example
  • Cluster and basic commands, Nodes,
  • Deployments, what it is and deploying an app
  • Pods and Nodes, explain concepts and troubleshooting
Part I - From the beginning, Part I, Basics, Deployment and Minikube

Why Orchestration

Well, it all started with containers. Containers gave us the ability to create repeatable environments so dev, staging, and prod all looked and functioned the same way. We got predictability, and containers were also lightweight as they drew resources from the host operating system. Such a great breakthrough for developers and ops, but the container API is really only good for managing a few containers at a time. Larger systems might consist of hundreds or thousands of containers, and these need to be managed as well so we can do things like scheduling, load balancing, distribution and more.

At this point, we need orchestration: the ability for a system to handle all of these container instances. This is where Kubernetes comes in.

Getting started

Ok ok, let's say I buy into all of this, how do I get started?

Impatient, eh? Sure, let's start doing something practical with Minikube.

Ok, sounds good I'm a coder, I like practical stuff. What is Minikube?

Minikube is a tool that lets us run Kubernetes locally

Oh, sweet, millions of containers on my little machine?

Well, no, let's start with a few and learn Kubernetes basics while at it.

Installation

To install Minikube, let's go to this installation page.

It's just a few short steps, which means we install:

  • a Hypervisor
  • Kubectl (Kube control tool)
  • Minikube

Run

Get that thing up and running by typing:

minikube start

It should look something like this:

You can also ensure that kubectl has been correctly installed and is running:

kubectl version

Should give you something like this in response:

Ok, now we are ready to learn Kubernetes.

Learning kubectl and basic concepts

Let's learn Kubernetes by learning more about kubectl, a command line program that lets us interact with our Cluster and deploy and manage applications on said Cluster.

The word Cluster just means a group of similar things, but in the context of Kubernetes it means a Master and multiple worker machines called Nodes. Nodes were historically called Minions, but not anymore.

The master decides what will run on the Nodes, which includes things like scheduled workloads or containerized apps. Which brings us to our next command:

kubectl get nodes

This should give us a result like this:

This tells us what Nodes we have available to do work.

Next up let's try to run our first app on Kubernetes with the run command like so:

kubectl run kubernetes-first-app --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080

This should give us a response like so:

Next up lets check that everything is up and running with the command:

kubectl get deployments

This shows the following in the terminal:

In putting our app on the cluster by invoking the run command, Kubernetes performed a few things behind the scenes. It:

  • searched for a suitable node where an instance of the application could be run, there was only one node so it got chosen
  • scheduled the application to run on that Node
  • configured the cluster to reschedule the instance on a new Node when needed

Next up we are going to introduce the concept of a Pod. So what is a Pod?

A Pod is the smallest deployable unit and consists of one or many containers, for example, Docker containers. That's all we are going to say about Pods at the moment but if you really really want to know more have a read here

The reason for mentioning Pods at this point is that our container and app are placed inside of a Pod. Furthermore, Pods run in a private, isolated network that, although visible from other Pods and Services, cannot be accessed from outside the cluster. This means we can't reach our app with, say, a curl command.

We can change that, though. There is more than one way to expose our application to the outside world; for now, however, we will use a proxy.

Now open up a 2nd terminal window and type:

kubectl proxy

This will expose the Kubernetes API over HTTP so that we can query it with HTTP requests. The result should look like:

Instead of typing kubectl version we can now type curl http://localhost:8001/version and get the same results:

The API server inside of Kubernetes has created an endpoint for each Pod, keyed by its Pod name. So the next step is to find out the Pod name:

kubectl get pods

This will list all the pods you have, it should just be one pod at this point and look something like this:

Then you can just save that down to a variable like so:
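
One way to do that (just a sketch; you can equally copy the name by hand from the kubectl get pods output) is:

export POD_NAME=$(kubectl get pods -o jsonpath='{.items[0].metadata.name}')
echo $POD_NAME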

Lastly, we can now do an HTTP call to learn more about our pod:

curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME

This will give us a long JSON response back (I trimmed it a bit but it goes on and on...)

Maybe that's not super interesting for us as app developers. We want to know how our app is doing. Best way to know that is looking at the logs. Let's do that with this command:

kubectl logs $POD_NAME

As you can see below, we now get logs from our app:

Now that we know the Pod's name, we can do all sorts of things, like checking its environment variables or even stepping inside the container to look at its contents.

kubectl exec $POD_NAME env

This yields the following result:

Now let's step inside the container:

kubectl exec -ti $POD_NAME bash

We are inside! This means we can see what the source code looks like even:

cat server.js

Inside of our container, we can now reach the running app by typing:

curl http://localhost:8080

Summary Part I

This is where we will stop for now.
What did we actually learn?

  • Kubernetes, its origin and what it is
  • Orchestration and why you will soon need it
  • Concepts like Master, Nodes and Pods
  • Minikube, kubectl and how to deploy an image onto our Cluster
Part II - Introducing Services and Labeling

In this part we will cover the following:

  • Deepen our knowledge on Pods and Nodes
  • Introduce Services and labeling
  • Perform an exercise involving setting labels on Pods and using labels to query our artifacts
Concepts revisited

When we create a Deployment on Kubernetes, that Deployment creates Pods with containers inside them. So Pods are tied to Nodes and will continue to exist until terminated or deleted. Let's try to educate ourselves a bit more on Pods, Nodes and let's also introduce a new topic Services.

Pods

Pods are the atomic unit on the Kubernetes platform, i.e smallest possible deployable unit

We've stated the above before but it's worth mentioning again.

What else is there to know?

A Pod is an abstraction that represents a group of one or more containers, for example, Docker or rkt, and some shared resources for those containers. Those resources include:

  • Shared storage, as Volumes
  • Networking, as a unique cluster IP address
  • Information about how to run each container, such as the container image version or specific ports to use

A Pod can have more than one container. If it does, it is so that the other containers can support the primary application.
Typical examples of helper applications are data pullers, data pushers, and proxies. You can read more on that use case here

The containers in a Pod share an IP address and port space and are:

  • Always co-located
  • Co-scheduled

Let me show you an image to make it easier to visualize:

As we can see above a Pod can have a lot of different artifacts in them that are able to communicate and support the app in some way.

Nodes

A Pod always runs on a Node

So Node is the Pods parent?

Yes.

A Node is a worker machine and may be either a virtual or a physical machine, depending on the cluster

Each Node is managed by the Master. A Node can have multiple pods.

So it's a one to many relationship

The Kubernetes master automatically handles scheduling the pods across the Nodes in the cluster

Every Kubernetes Node runs at least a:

  • Kubelet, which is responsible for the Pod spec and talks to the CRI (Container Runtime Interface)

  • Kube proxy, which is the main interface for network communication between Nodes

  • A container runtime, (like Docker, rkt) responsible for pulling the container image from a registry, unpacking the container, and running the application.

Ok so a Node contains a Kubelet and container runtime and one to many Pods. I think I got it.

Let's show an image to make this info stick, cause it's quite important that we know what goes on, at least at a high level:

Services

Pods are mortal, they can die. Pods, in fact, have a lifecycle.

When a worker node dies, the Pods running on the Node are also lost.

What happens to our apps? :(

You might think they and their data are lost, but not so. The whole point of Kubernetes is to not let that happen. We normally deploy something like a ReplicaSet.

A ReplicaSet, what do you mean?

A ReplicaSet is a high-level artifact that can drive the cluster back to desired state via the creation of new Pods to keep your application running.

Ok, so if a Pod goes down, the ReplicaSet just creates a new Pod (or Pods) in its place?

Yes, exactly that. If you focus on defining a desired state the rest is up to Kubernetes.

Phew sounds really great then.

This concept of desired state is a very important one. You need to specify how many containers you want of each kind, at all times.

Oh so 4 database containers, 3 services etc?

Yes exactly.

So you don't have to care about the details; just tell Kubernetes what state you want and it does the rest. If something goes down, Kubernetes ensures it comes back up again to the desired state.
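
To make the idea of desired state a bit more concrete, here is a rough, illustrative sketch of a Deployment manifest (the names are made up) where the desired state is simply declared as a replica count:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4              # the desired state: always keep 4 Pods of this app running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0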

Each Pod in a Kubernetes cluster has a unique IP address, even Pods on the same Node, so there needs to be a way of automatically reconciling changes among Pods so that your applications continue to function.

Ok?

Yea, think of it like this: if a Pod containing your app goes down and another Pod is created in its place, running your app, users should still be able to use your app after that.

Ok I got it. Makes me think...

The motivation for a Service

You should never refer to a Pod by its IP address; just think about what happens when a Pod goes down and comes back up again, but this time with a different IP. It is for that reason that Services exist.

A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which to access them.

Makes me think of routers and subnets

Yea I guess you can say there is a resemblance in there somewhere.

Services enable a loose coupling between dependent Pods and are defined using YAML or JSON file, just like all Kubernetes objects.

That's handy, just JSON and YAML :)

Services and Labels

The set of Pods targeted by a Service is usually determined by a LabelSelector.

Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. We can expose them through a proxy though as we showed in part I.

Wait, go back a second here, you said LabelSelector. I wasn't quite following?

Remember how we couldn't refer to Pods by IP, cause Pods might go down and a new Pod could come back in its place?

Yes

Well, labels are the answer to how Services and Pods are able to communicate. This is what we mean by loose coupling. By applying labels like, for example, frontend, backend, release and so on to Pods, we are able to refer to Pods by their logical name rather than their specifics, i.e. their IP address.

Oh I get it, so it's a high-level domain language

Mm, kind of.

Services and Traffic

Services allow your applications to receive traffic.

Services can be exposed in different ways by specifying a type in the ServiceSpec (the Service specification); a sketch of such a definition follows the list below.

  • ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
  • NodePort - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
  • LoadBalancer - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.
  • ExternalName - Exposes the Service using an arbitrary name (specified by externalName in the spec) by returning a CNAME record with the name. No proxy is used. This type requires v1.7 or higher of kube-dns.
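
As a rough sketch, a NodePort Service for the kubernetes-first-app deployment from Part I (whose Pods carry the label run=kubernetes-first-app) could be declared like this; it is roughly what the expose command used in the lab below does for us:

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-first-app
spec:
  type: NodePort
  selector:
    run: kubernetes-first-app   # the LabelSelector that picks the Pods
  ports:
    - port: 8080
      targetPort: 8080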

Ok I think I get it. Ensure I'm speaking externally to a Service instead of specific Pods. Depending on what I expose the Service as, that leads to different behavior?

Yea that's correct.

You said something about labels though, how do we create and apply them to Pods?

Yea lets talk about that next.

Labels

As we just mentioned, Services are the abstraction that allows pods to die and replicate in Kubernetes without impacting your application.

Now, Services match a set of Pods using labels and selectors, which allows us to operate on Pods as a group.

Labels are key/value pairs attached to objects and can be used in any number of ways:

  • Designate objects for development, test, and production
  • Embed version tags
  • Classify an object using tags

Labels can be attached to objects at creation time or later on. They can be modified at any time.

Lab - Fun with Labels and kubectl

It's a good idea to have read the first part of this series where we create a deployment. If you haven't you need to first create a deployment like so:

kubectl run kubernetes-first-app --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080

Now we should be good to go.

Ok. I know you are probably all tired from all theory by now.

I bet you are just itching to learn more hands on Kubernetes with kubectl.

Well, the time for that has come :). We will do two things:

  1. Create a Service and learn how we can expose our app using said Service
  2. Learn about Labeling and how we can improve our querying game by having appropriate labels on our artifacts.

Let's create a new service.

We will get acquainted with the expose command.

Let's check for existing pods,

kubectl get pods

Next let's see what services we have:

kubectl get services

Next lets create a Service like so:

kubectl expose deployment/kubernetes-first-app --type="NodePort" --port 8080

As you can see above, we are just targeting one of our deployments, kubernetes-first-app, referring to it as [type]/[name], with the type here being deployment.

We expose it as a Service of type NodePort and, finally, we choose to expose it at port 8080.

Now run kubectl get services again and see the results:

As you can see, we now have two Services in use: the built-in kubernetes Service and our newly created kubernetes-first-app.

Next up we need to grab the port of our service and assign that to a variable:

export NODE_PORT=$(kubectl get services/kubernetes-first-app -o go-template='{{(index .spec.ports 0).nodePort}}')
echo NODE_PORT=$NODE_PORT

We now have our port stored in the environment variable NODE_PORT and we are ready to start communicating with our Service like so:

curl $(minikube ip):$NODE_PORT

Which leads to the following output:

Creating and applying Labels

When we created our deployment and our Pod, it was automatically assigned a label.

By typing

kubectl describe deployment

we can see the name of said label.

Next up we can query the pods by that same label

kubectl get pods -l run=kubernetes-first-app

Above we are using -l to query for a specific label, run=kubernetes-first-app. This gives us the following result:

You can do a similar query to your services:

kubectl get services -l run=kubernetes-first-app

That just shows that you can query on different levels, for specific Pods or Services that have Pods with that label.

Next up we will look at how to change the label

First let's get the name of the pod, like so:

POD_NAME=kubernetes-first-app-669789f4f8-6glpx

Above I'm just assigning the name of my Pod to a variable POD_NAME. Check with kubectl get pods what your Pod is called.

Then we can add/apply the new label like so:

kubectl label pod $POD_NAME app=v1

Verify that the new label has been set, like so:

kubectl describe pod

or

kubectl describe pods $POD_NAME

As you can see from the result our new label app=v1 has been appended to existing labels.

Now we can query like so:

kubectl get pods -l app=v1

That's pretty much how labeling works: how to see existing labels, apply new ones, and use them in a query. Be sure to give them descriptive names, like an app version, a certain environment, or a name like frontend or backend - something that makes sense for your situation.

Clean up

Ok, so we created a service. We should learn how to clean up after ourselves. Run the following command to remove our service:

kubectl delete service -l run=kubernetes-first-app

Verify the service is no longer there with:

kubectl get services

also, ensure our exposed IP and port can no longer be reached:

curl $(minikube ip):$NODE_PORT

Just because the service is gone doesn't mean the app is gone. The app should still be reachable from inside the Pod:

kubectl exec -ti $POD_NAME curl localhost:8080

Summary Part II

So what did we learn? We learned a bit more on Pods and Nodes. Furthermore, we learned that we shouldn't speak directly to Pods but rather use a high-level abstraction such as Services. Services use labels as a way to define a domain language and apply those to different Pods.

Ok, so we understand a bit more on Kubernetes and how different concepts relate. We mentioned something called desired state a number of times but we didn't go into detail on how to set such a state. That's our next part in this series where we will cover how to set the desired state and how Kubernetes maintains it, so stay tuned.

Part III - Scaling

This third part aims to show how you scale your application. We can easily set the number of Replicas we want of a certain application and let Kubernetes figure out how to do that. This is us defining a so-called desired state.

When traffic increases, we will need to scale the application to keep up with user demand. We've talked about deployments and services, now lets talk scaling.

What does scaling mean in the context of Kubernetes?

We get more Pods. More Pods that are scheduled to nodes.

Now it's time to talk about desired state again, that we mentioned in previous parts.

This is where we relinquish control to Kubernetes. All we need to do is tell Kubernetes how many Pods we want and Kubernetes does the rest.

So we tell Kubernetes about the number of Pods we want, what does that mean? What does Kubernetes do for us?

It means we get multiple instances of our application. It also means traffic is being distributed to all of our Pods, ie. load balancing.

Furthermore, Kubernetes, or more specifically, services within Kubernetes will monitor which Pods are available and send traffic to those Pods.

Scaling demo Lab

If you haven't followed the first two parts I do recommend you go back and have a read. What you need for the following to work is at least a deployment. So if you haven't created one, here is how:

kubectl run kubernetes-first-app --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080

Let's have a look at our deployments:

kubectl get deployments

Let's look closer at the response we get:

We have three pieces of information that are important to us. First, we have the READY column, which we should read as CURRENT STATE/DESIRED STATE. Next up is the UP-TO-DATE column, which shows the number of replicas that were updated to match the desired state.
Lastly, we have the AVAILABLE column that shows how many replicas we have available to do work.

Let's scale

Now, let's do some scaling. For that we will use the scale command like so:

kubectl scale deployments/kubernetes-first-app --replicas=4 

As we can see above, the number of replicas was increased to 4 and Kubernetes is thereby ready to load balance any incoming requests.

Let's have a look at our Pods next:

When we asked for 4 replicas we got 4 Pods.

We can see that this scaling operation took place by using the describe command, like so:

kubectl describe deployments/kubernetes-first-app

In the above image, we are given quite a lot of information on our Replicas for example, but there is some other information in there that we will explain later on.

Does it load balance?

The whole point of scaling was so that we could balance the load of incoming requests. That means the same Pod would not handle all the requests; different Pods would be hit. We can easily try this out now that we have scaled our app to contain 4 replicas of itself.

So far we have used the describe command to describe the deployment, but we can also use it to find the IP and port of our Service. Once we have the IP and port, we can hit it with different HTTP requests.

kubectl describe services/kubernetes-first-app

Especially look at the NodePort and the Endpoints. NodePort is the port value that we want to hit with an HTTP request.

Now we will actually invoke the cURL command and ensure that it hits a different Pod each time, thereby proving our load balancing is working. Let's do the following:

NODE_PORT=30450

Next up the cURL call:

curl $(minikube ip):$NODE_PORT

As you can see above, we make the call 4 times. Judging by the output and the name of the instance, we see that we are hitting a different Pod for each request, and thereby that the load balancing is working.
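
If you prefer to script the repeated calls instead of typing curl four times, a small loop (reusing the NODE_PORT variable from above) does the same thing:

for i in 1 2 3 4; do curl $(minikube ip):$NODE_PORT; done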

Scaling down

So far we have scaled up. We managed to go from one Pod to 4 Pods thanks to the scale command. We can use the same command to scale down, like so:

kubectl scale deployments/kubernetes-first-app --replicas=2 

Now, if we are really fast running the next command, we can see how the Pods are being removed as Kubernetes adjusts to the desired state.

2 out of 4 Pods are saying Terminating as only 2 Pods are needed to maintain the new desired state.

Running our command again, we see that only 2 Pods remain and thereby our new desired state has been reached:

We can also look at our deployment to see that our scale instruction has been parsed correctly:

Self-healing

Self-healing is Kubernetes' way of ensuring that the desired state is maintained. Individual Pods don't self-heal, cause Pods can die. What happens is that a new Pod appears in their place, thanks to Kubernetes.

So how do we test this?

Glad you asked, we can delete a Pod and see what happens. So how do we do that? We use the delete command. We need to know the name of our Pod though so we need to call get pods for that. So let's start with that:

kubectl get pods

Then lets pick one of our two Pods kubernetes-first-app-669789f4f8-6glpx and assign it to a variable:

POD_NAME=kubernetes-first-app-669789f4f8-6glpx

Now remove it:

kubectl delete pods $POD_NAME

Let's be quick about it and check our Pod status with get pods. It should say Terminating like so:

Wait some time and then echo out our variable $POD_NAME followed by get pods. That should give you a result similar to the below.

So what does the above image tell us? It tells us that the Pod we deleted is truly deleted but it also tells us that the desired state of two replicas has been achieved by spinning up a new Pod. What we are seeing is self-healing at work.

Different ways to scale

Ok, we looked at a way to scale by explicitly saying how many replicas we want of a certain deployment. Sometimes, however, we might want a different way to scale, namely auto-scaling. Auto-scaling is about not having to set the exact number of replicas you want, but rather relying on Kubernetes to create the number of replicas it thinks it needs. So how would Kubernetes know that? Well, it can look at more than one thing, but a common metric is CPU utilization. So let's say you have a booking site and suddenly someone releases Bruce Springsteen tickets; you are likely to want to rely on auto-scaling, cause the next day when the tickets are all sold out you want the number of Pods to go back to normal, and you wouldn't want to do this manually.

Auto-scaling is a topic I plan to cover more in detail in a future article so if you are really curious how that is done I recommend you have a look here

Summary Part III

Ok. So we did it. We managed to scale an app by creating replicas of it. It wasn't so hard to accomplish. We showed how we only needed to provide Kubernetes with a desired state and it would do its utmost to preserve said state, also called self-healing. Furthermore, we mentioned that there was another way to scale, namely auto-scaling, but decided to leave that topic for another article. Hopefully, you are now more in awe of how amazing Kubernetes is and how easy it is to scale your app.

Part IV - Auto scaling

In this article, we will cover the following:

  • Why auto scaling, we will discuss different scenarios in which it makes sense to rely on auto scaling over defining it statically like we do with desired state
  • How - let's talk about Horizontal Auto Scaling, the concept/feature that allows us to scale in an elastic way.
  • Lab - let's scale: we will look at how to actually set this up with kubectl and simulate a ton of incoming requests. We will then inspect the results and see that Kubernetes acts the way we think it should.
Why

So in our last part, we talked about desired state. That's an OK strategy until something unforeseen happens and suddenly you get a great influx of traffic. This is likely to happen to businesses such as an e-commerce site around a big sale or a ticket vendor releasing tickets to a popular event.

Events like these are an anomaly which forces you to quickly scale up. The other side of the coin, though, is that at some point you need to scale down or you suddenly have overcapacity you might need to pay for. What you really want is for the scaling to act in an elastic way, so it scales up when you need it to and scales down when there is less traffic.

How

Horizontal auto-scaling, what does it mean?

It's a concept in Kubernetes that can scale the number of Pods we need. It can do so on a replication controller, deployment or replica set. It usually looks at CPU utilization but can be made to look at other things by using something called custom metrics support, so it's customizable.

It consists of two parts: a resource and a controller. The controller checks utilization, or whatever metric you decided, to ensure that the number of replicas matches your specification. If need be it spins up more Pods or removes them. The default is checking every 15 seconds but you can change that by looking at a flag called --horizontal-pod-autoscaler-sync-period.

The underlying algorithm that decides the number of replicas looks like this:

desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]
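
As a quick illustrative example (numbers made up): with 1 current replica, a measured CPU utilization of 200% and a target of 50%, the autoscaler aims for ceil[1 * (200 / 50)] = 4 replicas; if utilization then drops to 25%, it scales back towards ceil[4 * (25 / 50)] = 2.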

Lab - let's scale

Ok, the first thing we need to do is to scale our deployment to use something other than desired state.

We have two things we need to specify when we do autoscaling:

  • min/max, we define a minimum and maximum in terms of how many Pods we want
  • CPU, in this version we set a certain CPU utilization percentage. When it goes above that, it scales out as needed. Think of this one as an IF clause: if the CPU value is greater than the threshold, try to scale

Set up

Before we can attempt our scaling experiment we need to make sure we have the correct add-ons enabled. You can easily see what add-ons you have enabled by typing:

minikube addons list

If it looks like the above, we are all good. Why am I saying that? Well, what we need to be able to auto-scale is for the heapster and metrics-server add-ons to be enabled.

Heapster enables Container Cluster Monitoring and Performance Analysis.

Metrics server provides metrics via the Resource Metrics API. The Horizontal Pod Autoscaler uses this API to collect metrics.

We can easily enable them both with the following commands (we will need to for auto-scaling to show correct data):

minikube addons enable heapster

and

minikube addons enable metrics-server

We need to do one more thing, namely enable custom metrics, which we do by starting Minikube with a flag, like so:

minikube start --extra-config kubelet.EnableCustomMetrics=true

Ok, now we are good to go.

Running the experiment

We need to do the following to run our experiment

  • Create a deployment
  • Apply autoscaling
  • Bombard the deployment with incoming requests
  • Watch the auto scaling how it changes

Create a deployment

kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --expose --port=80

Above we are creating a deployment php-apache and exposing it as a service on port 80. We can see that we are using the image k8s.gcr.io/hpa-example.

It should tell us the following:

service/php-apache created
deployment.apps/php-apache created

Autoscaling

Next up we will use the command autoscale. We will use it like so:

kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

It should say something like:

horizontalpodautoscaler.autoscaling/php-apache autoscaled

Above we are applying auto-scaling to the deployment php-apache, and as you can see we are applying both min/max and CPU-based auto scaling, which means we give a rule for how the auto scaling should happen:

If CPU load is >= 50%, create a new Pod, but only up to a maximum of 10 Pods. If the load is low, go back gradually to one Pod.

Bombard with requests

Next step is to send a ton of requests against our deployment and see our auto-scaling doing its work. So how do we do that?

First off let's check the current status of our horizontal pod auto-scaler or hpa for short by typing:

kubectl get hpa

This should give us something like this:

The above shows us two pieces of information. The first is the TARGETS column which shows our CPU utilization, actual usage/trigger value. The next bit of interest is the column REPLICAS that shows us the number of copies, which is 1 at the moment.

For our next trick, open up a separate terminal tab. What we need to do is set things up so we can send a ton of requests.

Next up we create a container using this command:

kubectl run -i --tty load-generator --image=busybox /bin/sh

This should take us to a prompt within the container. This is followed by:

while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done

The command above should result in something looking like this.

This will just go on and on until you hit CTRL+C, but leave it be for now.

This throws a ton of requests at the service in a while true loop.

I thought while true loops were bad?

They are, but we are only going to run it for a minute or so, just long enough for the auto scaling to kick in. Yes, your CPU fan will sound a lot, but don't worry :)

Let this go on for a minute or so, then enter the following command into the first terminal tab (not the one running the requests), like so:

kubectl get hpa

It should now show something like this:

As you can see from the above, the TARGETS column looks different and now says 339%/50%, which is the current load on the CPU versus the trigger value. REPLICAS is now 7, which means it has gone from 1 to 7 replicas. So as you can see, we have been bombarding it pretty hard.

Now go to the second terminal and hit CTRL+C or you will have a situation like this:

It will actually take a few minutes for Kubernetes to cool off and get the values back to normal. A first look at the Pods situation shows us the following:

kubectl get pods

As you can see we have 7 Pods up and running still but let's wait a minute or two and it should look like this:

Ok, now we are back to normal.

Summary Part IV

We did some great stuff in this article. We managed to set up auto-scaling and bombard it with requests, without frying our CPU, hopefully ;)

We also managed to learn some new Kubernetes commands while at it and got to see auto-scaling at work giving us new Pods based on our specification.

Docker Best Practices for Node Developers

Docker Best Practices for Node Developers

Welcome to the "Docker Best Practices for Node Developers"! With your basic knowledge of Docker and Node.js in hand, Docker Mastery for Node.js is a course for anyone on the Node.js path. This course will help you master them together.

Welcome to the best course on the planet for using Docker with Node.js! With your basic knowledge of Docker and Node.js in hand, Docker Mastery for Node.js is a course for anyone on the Node.js path. This course will help you master them together.

My talk on all the best of Docker for Node.js developers and DevOps dealing with Node apps. From DockerCon 2019. Get the full 9-hour training course with my coupon at http://bit.ly/365ogba

Get the source code for this talk at https://github.com/BretFisher/dockercon19

Some of the many cool things you'll do in this course
  • Build Node.js Images that auto-scan for security vulnerabilities
  • Use Docker's cutting-edge BuildKit with SSH Agents and NPM Caches for better image building
  • Use docker-compose with Visual Studio Code for full Node.js debug support
  • Use BuildKit and Multi-stage Builds to create minimal and flexible Dockerfiles
  • Build custom Node.js images using distro's like CentOS and Alpine
  • Test Docker init, tini, and Node.js as a PID 1 process in containers
  • Create Node.js apps that properly startup and respond to healthchecks
  • Develop ARM based Node.js apps with Docker Desktop, and deploy to AWS A1 Servers
  • Build graceful shutdown code into your apps for zero-downtime deploys
  • Dig into HTTP connections with orchestration, and how Proxies can help
  • Study examples of Docker Swarm and Kubernetes deployments for Node.js
  • Spend time Migrating traditional (legacy) Node.js apps into containers
  • Simplify your microservice solutions with advanced Docker Compose features
What you will learn in this course

You'll start with a quick review about getting set up with Docker, as well as Docker Compose basics. That way we're on the same page for the basics.

Then you'll jump into Node.js Dockerfile basics, that way you'll have a good Dockerfile foundation for new features we'll add throughout the course.

You'll be building on all the different things you learn from each lecture in the course. Once you have the basics of Compose, Dockerfile, and Docker images down, you'll focus on nuances like how Docker and Linux control the Node process, and how Docker changes that, so you know what options there are for starting up and shutting down Node.js and the right way to do it in different scenarios.

We'll cover advanced, newer features around making the Dockerfile the most efficient and flexible as possible using things like BuildKit and Multi-stage.

Then we'll talk about distributed computing and cloud design to ensure your Node.js apps have 12-factor design in your containers, as well as learning how to migrate old apps into this new way of doing things.

Next we cover Compose and its awesome features to get really efficient local development and test set-up using the Docker Compose command line and Docker Compose YAML file.

With all this knowledge, you'll progress to production concerns and making images production-ready.

Then we'll jump into deploying those containers and running them in production. Whether you use Docker Engine or orchestration with Kubernetes or Swarm, I've got you covered. In addition, we'll cover HTTP connections and reverse proxies for connection handling and routing with multi-container systems.

Lastly, you'll get a final, big assignment where you'll be building and deploying a large, complex solution, including multiple Node.js containers that are doing different things. You'll build Docker images, Dockerfiles, and compose files, and deploy them to a server to test. You'll need to check whether connections failover properly. You'll basically take everything you've learned and apply it in one big project!

What's new capabilities in Node.js 13?

What's new capabilities in Node.js 13?

Node.js 13 brings programming enhancements and worker threads. Node.js 13, the latest version of the popular JavaScript runtime, was released this week, emphasizing worker threads, programming enhancements, and internationalization capabilities.


Node.js 13 replaces Node.js 12 as the “current” release but Node.js 12 remains the long-term support (LTS) release. Thus Node.js 13 is not recommended for production use. Nevertheless, Node.js 13 will be useful for building and testing the latest features. Developers can use Node.js 13 to ensure that their packages and applications will be compatible with future versions.

These are the key new capabilities in Node.js 13:

  • Worker threads for performing CPU-intensive JavaScript operations are now stable in both Node.js 12 and Node.js 13.
  • Node.js releases now are built with default full-ICU (International Components for Unicode) support. All locales supported by ICU are included and Intl-related APIs may return different values than before.
  • N-API, for building native add-ons, has been updated with additional supported functions.
  • If the validation function passed to assert.throws() or assert.rejects() returns a value besides true, an assertion error will be thrown instead of the original error. This will highlight the programming mistake. Also, if a constructor function is passed to validate the instance of errors thrown in assert.throws() or assert.rejects(), an assertion error will be thrown instead of the original error.
  • The minimum supported version of Xcode is now Xcode 10. Xcode is Apple’s integrated development environment, available only for MacOS. Developers can continue to use Xcode 8 for now, but this may change in a future Node.js 13.x release.
  • The Google V8 JavaScript engine used in Node.js has been updated to version 7.8, which brings performance improvements for object destructuring, memory usage, and WebAssembly startup time.
  • For HTTP communications, data will no longer be emitted after a socket error. In addition, the legacy HTTP parser has been removed and the request.connection and response.connection properties have been runtime deprecated. The equivalent request.socket and response.socket should be used instead (see the short example after this list).
  • The timing and behavior of streams was consolidated for several edge cases.
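
As a tiny illustrative example of the request.connection deprecation mentioned above (not taken from the release notes), code that used to read request.connection would now use request.socket:

const http = require('http');

http.createServer((request, response) => {
  // request.connection is runtime deprecated; request.socket is the supported equivalent
  const remote = request.socket.remoteAddress;
  response.end(`Hello from ${remote}\n`);
}).listen(3000);
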
Where to download Node.js

You can download Node.js from the project website.

How to using Docker Compose for NodeJS Development

How to using Docker Compose for NodeJS Development

Docker is an amazing tool for developers. It allows us to build and replicate images on any host, removing the inconsistencies of dev environments and reducing onboarding timelines considerably.


To provide an example of how you might move to containerized development, I built a simple todo API using NodeJS, Express, and PostgreSQL using Docker Compose for development, testing, and eventually in my CI/CD pipeline.

In a two-part series, I will cover the development and pipeline creation steps. In this post, I will cover the first part: developing and testing with Docker Compose.

Requirements for This Tutorial

This tutorial requires you to have a few items before you can get started.

The todo app here is essentially a stand-in, and you could replace it with your own application. Some of the setup here is specific for this application, and the needs of your application may not be covered, but it should be a good starting point for you to get the concepts needed to Dockerize your own applications.

Once you have everything set up, you can move on to the next section.

Creating the Dockerfile

At the foundation of any Dockerized application, you will find a Dockerfile. The Dockerfile contains all of the instructions used to build out the application image. You can set this up by installing NodeJS and all of its dependencies; however the Docker ecosystem has an image repository (the Docker Store) with a NodeJS image already created and ready to use.

In the root directory of the application, create a new Dockerfile

/> touch Dockerfile

Open the newly created Dockerfile in your favorite editor. The first instruction, FROM, will tell Docker to use the prebuilt NodeJS image. There are several choices, but this project uses the node:7.7.2-alpine image.

FROM node:7.7.2-alpine

If you run docker build ., you will see something similar to the following:

Sending build context to Docker daemon 249.3 kB
Step 1/1 : FROM node:7.7.2-alpine
7.7.2-alpine: Pulling from library/node
709515475419: Pull complete
1a7746e437f7: Pull complete
662ac7b95f9d: Pull complete
Digest: sha256:6dcd183eaf2852dd8c1079642c04cc2d1f777e4b34f2a534cc0ad328a98d7f73
Status: Downloaded newer image for node:7.7.2-alpine
 ---> 95b4a6de40c3
Successfully built 95b4a6de40c3

With only one instruction in the Dockerfile, this doesn’t do too much, but it does show you the build process without too much happening. At this point, you now have an image created, and running docker images will show you the images you have available:

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
node                7.7.2-alpine        95b4a6de40c3        6 weeks ago         59.2 MB

The Dockerfile needs more instructions to build out the application. Currently it’s only creating an image with NodeJS installed, but we still need our application code to run inside the container. Let’s add some more instructions to do this and build this image again.

This particular Docker file uses RUN, COPY, and WORKDIR. You can read more about those on Docker’s reference page to get a deeper understanding.

Let’s add the instructions to the Dockerfile now:

FROM node:7.7.2-alpine

WORKDIR /usr/app

COPY package.json .
RUN npm install --quiet

COPY . .

Here is what is happening:

  • Set the working directory to /usr/app
  • Copy the package.json file to /usr/app
  • Install node_modules
  • Copy all the files from the project’s root to /usr/app

You can now run docker build . again and see the results:

Sending build context to Docker daemon 249.3 kB
Step 1/5 : FROM node:7.7.2-alpine
  ---> 95b4a6de40c3
Step 2/5 : WORKDIR /usr/app
 ---> e215b737ca38
Removing intermediate container 3b0bb16a8721
Step 3/5 : COPY package.json .
 ---> 930082a35f18
Removing intermediate container ac3ab0693f61
Step 4/5 : RUN npm install --quiet
 ---> Running in 46a7dcbba114

 ### NPM MODULES INSTALLED ###

 ---> 525f662aeacf
 ---> dd46e9316b4d
Removing intermediate container 46a7dcbba114
Step 5/5 : COPY . .
 ---> 1493455bcf6b
Removing intermediate container 6d75df0498f9
Successfully built 1493455bcf6b

You have now successfully created the application image using Docker. Currently, however, our app won’t do much since we still need a database, and we want to connect everything together. This is where Docker Compose will help us out.

Docker Compose Services

Now that you know how to create an image with a Dockerfile, let’s create an application as a service and connect it to a database. Then we can run some setup commands and be on our way to creating that new todo list.

Create the file docker-compose.yml:

/> touch docker-compose.yml

The Docker Compose file will define and run the containers based on a configuration file. We are using Compose file version 2 syntax (https://docs.docker.com/compose/compose-file/compose-file-v2/), and you can read up on it on Docker’s site.

An important concept to understand is that Docker Compose spans “buildtime” and “runtime.” Up until now, we have been building images using docker build ., which is “buildtime.” This is when our containers are actually built. We can think of “runtime” as what happens once our containers are built and being used.

Compose triggers “buildtime” — instructing our images and containers to build — but it also populates data used at “runtime,” such as env vars and volumes. This is important to be clear on. For instance, when we add things like volumes and command, they will override the same things that may have been set up via the Dockerfile at “buildtime.”

Open your docker-compose.yml file in your editor and copy/paste the following lines:

version: '2'
services:
  web:
    build: .
    command: npm run dev
    volumes:
      - .:/usr/app/
      - /usr/app/node_modules
    ports:
      - "3000:3000"
    depends_on:
      - postgres
    environment:
      DATABASE_URL: postgres://[email protected]/todos
  postgres:
    image: postgres:9.6.2-alpine
    environment:
      POSTGRES_USER: todoapp
      POSTGRES_DB: todos

This will take a bit to unpack, but let’s break it down by service.

The web service

The first directive in the web service is to build the image based on our Dockerfile. This will recreate the image we used before, but it will now be named according to the project we are in, nodejsexpresstodoapp. After that, we are giving the service some specific instructions on how it should operate:

  • command: npm run dev – Once the image is built, and the container is running, the npm run dev command will start the application.

  • volumes: – This section will mount paths between the host and the container.

  • .:/usr/app/ – This will mount the root directory to our working directory in the container.

  • /usr/app/node_modules – This will mount the node_modules directory to the host machine using the buildtime directory.

  • environment: – The application itself expects the environment variable DATABASE_URL to run. This is set in db.js.

  • ports: – This will publish the container’s port, in this case 3000, to the host as port 3000.

The DATABASE_URL is the connection string. postgres://[email protected]/todos connects using the todoapp user, on the host postgres, using the database todos.
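
The application reads that variable in db.js. The repository's actual file may look different, but the idea is roughly this kind of sketch:

// db.js (sketch) - build a connection pool from the DATABASE_URL env var
const { Pool } = require('pg');

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

module.exports = pool;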

The Postgres service

Like the NodeJS image we used, the Docker Store has a prebuilt image for PostgreSQL. Instead of using a build directive, we can use the name of the image, and Docker will grab that image for us and use it. In this case, we are using postgres:9.6.2-alpine. We could leave it like that, but it has environment variables that let us customize it a bit.

  • environment: – This particular image accepts a couple of environment variables so we can customize things to our needs.

  • POSTGRES_USER: todoapp – This creates the user todoapp as the default user for PostgreSQL.

  • POSTGRES_DB: todos – This will create the default database as todos.

Running The Application

Now that we have our services defined, we can build the application using docker-compose up. This will show the images being built and eventually starting. After the initial build, you will see the names of the containers being created:

Pulling postgres (postgres:9.6.2-alpine)...
9.6.2-alpine: Pulling from library/postgres
627beaf3eaaf: Pull complete
e351d01eba53: Pull complete
cbc11f1629f1: Pull complete
2931b310bc1e: Pull complete
2996796a1321: Pull complete
ebdf8bbd1a35: Pull complete
47255f8e1bca: Pull complete
4945582dcf7d: Pull complete
92139846ff88: Pull complete
Digest: sha256:7f3a59bc91a4c80c9a3ff0430ec012f7ce82f906ab0a2d7176fcbbf24ea9f893
Status: Downloaded newer image for postgres:9.6.2-alpine
Building web
...
Creating nodejsexpresstodoapp_postgres_1
Creating nodejsexpresstodoapp_web_1
...
web_1       | Your app is running on port 3000

At this point, the application is running, and you will see log output in the console. You can also run the services as a background process, using docker-compose up -d. During development, I prefer to run without -d and create a second terminal window to run other commands. If you want to run it as a background process and view the logs, you can run docker-compose logs.

At a new command prompt, you can run docker-compose ps to view your running containers. You should see something like the following:

            Name                            Command              State           Ports
------------------------------------------------------------------------------------------------
nodejsexpresstodoapp_postgres_1   docker-entrypoint.sh postgres   Up      5432/tcp
nodejsexpresstodoapp_web_1        npm run dev                     Up      0.0.0.0:3000->3000/tcp

This will tell you the name of the services, the command used to start it, its current state, and the ports. Notice nodejsexpresstodoapp_web_1 has listed the port as 0.0.0.0:3000->3000/tcp. This tells us that you can access the application using localhost:3000/todos on the host machine.

/> curl localhost:3000/todos

[]

The package.json file has a script to automatically build the code and migrate the schema to PostgreSQL. The schema and all of the data in the container will persist as long as the postgres:9.6.2-alpine image is not removed.

Eventually, however, it would be good to check how your app will build with a clean setup. You can run docker-compose down, which will clear things that are built and let you see what is happening with a fresh start.

Feel free to check out the source code, play around a bit, and see how things go for you.

Testing the Application

The application itself includes some integration tests built using jest. There are various ways to go about testing, including creating something like Dockerfile.test and docker-compose.test.yml files specific to the test environment. That’s a bit beyond the current scope of this article, but I want to show you how to run the tests using the current setup.

The current containers are running using the project name nodejsexpresstodoapp. This is a default from the directory name. If we attempt to run commands, it will use the same project, and containers will restart. This is what we don’t want.

Instead, we will use a different project name to run the application, isolating the tests into their own environment. Since containers are ephemeral (short-lived), running your tests in a separate set of containers makes certain that your app is behaving exactly as it should in a clean environment.

In your terminal, run the following command:

/> docker-compose -p tests run -p 3000 --rm web npm run watch-tests

You should see jest run through integration tests and wait for changes.

The docker-compose command accepts several options, followed by a command. In this case, you are using -p tests to run the services under the tests project name. The command being used is run, which will execute a one-time command against a service.

Since the docker-compose.yml file already publishes port 3000, we use -p 3000 to publish the container’s port 3000 on a random host port and prevent a port collision. The --rm option will remove the containers when we stop them. Finally, we are running npm run watch-tests in the web service.
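
Because the tests run under their own project name, you can also tear those containers down separately when you are finished; the -p flag has to match the one used above:

docker-compose -p tests down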

Thanks for reading!