Using Docker Secrets with NodeJS

I hope to convince you to start using Docker secrets for storing sensitive information in production.

Docker Secrets

Docker secrets help you manage sensitive data that a container needs at runtime, such as usernames, passwords, and certificates. A single secret can be at most 500 KB.

You can use Docker secrets when your containers run inside a cluster such as Docker Swarm. Docker has supported secrets on all types of containers since version 17.06.

Creating Docker secrets

I prefer creating Docker secrets using the command line.

There are two options: use echo or use a file that contains your secret. The following command creates a Docker secret called DB_PASSWORD that holds the string “secretpassword” using echo.

echo "secretpassword" | docker secret create DB_PASSWORD -

The other way is using a text file that contains the value of the secret.

docker secret create DB_PASSWORD db_password.txt

Both commands have the same result: a Docker secret called DB_PASSWORD is created that contains the password. As with any Docker resource, you can use the inspect command to get the details of the secret.

docker secret inspect [secret id or name]

If you created the secret successfully, the command returns a JSON object with the details of the secret. The details do not include the secret's value.

[
    {
        "ID": "lefggw7gfeahqz34b70dws8jx",
        "Version": {
            "Index": 15020
        },
        "CreatedAt": "2020-02-02T11:24:00.151051151Z",
        "UpdatedAt": "2020-02-02T11:24:00.151051151Z",
        "Spec": {
            "Name": "DB_PASSWORD",
            "Labels": {}
        }
    }
]

The details of a Docker Secret when executing docker secret inspect
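
If you want a quick overview of all the secrets that exist in the swarm, docker secret ls lists them by name:

docker secret ls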

Reading and using Docker secrets in Node.js

Docker uses an in-memory filesystem for storing secrets. Docker secrets look like regular files inside your container.

Docker stores each secret as a single file in /run/secrets/. The filename is the name of the secret. When your Node.js app runs inside a container, it can read a secret like a regular file.
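
You can check this from inside a running container. Replace <container id> below with the id of a container that has been granted access to the secret:

docker exec -it <container id> ls /run/secrets
docker exec -it <container id> cat /run/secrets/DB_PASSWORD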

I developed a Node.js module that reads a file from /run/secrets and returns the contents of the file.

// dependencies
const fs = require('fs');
const log = require('../log');

const dockerSecret = {};

dockerSecret.read = function read(secretName) {
  try {
    return fs.readFileSync(`/run/secrets/${secretName}`, 'utf8');
  } catch(err) {
    if (err.code !== 'ENOENT') {
      log.error(`An error occurred while trying to read the secret: ${secretName}. Err: ${err}`);
    } else {
      log.debug(`Could not find the secret, probably not running in swarm mode: ${secretName}. Err: ${err}`);
    }    
    return false;
  }
};

module.exports = dockerSecret;

secrets.js, a Node.js module to read a Docker Secret
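
Using the module is a single call. Below is a minimal sketch of how I consume it from another module, falling back to an environment variable when the secret file does not exist:

// example usage: read the DB_PASSWORD secret, fall back to an environment variable
const dockerSecret = require('./secrets');

const dbPassword = dockerSecret.read('DB_PASSWORD') || process.env.DB_PASSWORD;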

For local development, I want to use the .env file as described earlier.

But once the app is running in production, it should read settings from Docker secrets. By combining secrets.js with my standard configuration object, I get the best of both worlds.

/*
 * Create and export configuration variables used by the API
 *
 */
const constants = require('./constants');
const secrets = require('./secrets');

// Container for all environments
const environments = {};

environments.production = {
  httpPort: process.env.HTTP_PORT,
  httpAddress: process.env.HOST,
  envName: 'production',
  log: {
    level: process.env.LOG_LEVEL,
  },
  database: {
    url: secrets.read('STORAGE_HOST') || process.env.STORAGE_HOST,
    name: 'workflow-db',
    connectRetry: 5, // seconds
  },
  workflow: {
    pollingInterval: 10, // Seconds
  },
  authprovider: {
    domain: secrets.read('AUTH_DOMAIN') || process.env.AUTH_DOMAIN,
    secret: secrets.read('AUTH_SECRET') || process.env.AUTH_SECRET
  }
};

// Determine which environment was set via the NODE_ENV environment variable
const currentEnvironment = typeof process.env.NODE_ENV === 'string' ? process.env.NODE_ENV.toLowerCase() : '';

// Check that the current environment is one of the environments defined above,
// if not, default to production
const environmentToExport = typeof environments[currentEnvironment] === 'object' ? environments[currentEnvironment] : environments.production;

// export the module
module.exports = environmentToExport;

Config object that combines Docker Secrets and Environment variables

In the database section, for example, I combine reading a secret with an environment setting: secrets.read('STORAGE_HOST') || process.env.STORAGE_HOST. The secret takes priority over the environment variable.

Instead of developing your own, there are existing npm libraries that help with reading Docker secrets.

For example, docker-swarm-secrets, docker-secret, and @cloudreach/docker-secret. These modules read all the Docker secrets and expose them via a JavaScript object.
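
If you prefer not to add a dependency, reading all secrets into a single object takes only a few lines. The sketch below is my own simplified version and not the API of any of the modules mentioned above:

// sketch: read every file in /run/secrets into a plain object, keyed by secret name
const fs = require('fs');
const path = require('path');

function readAllSecrets(secretsDir = '/run/secrets') {
  const result = {};
  try {
    fs.readdirSync(secretsDir).forEach((name) => {
      // trim to remove a trailing newline when the secret was created with echo
      result[name] = fs.readFileSync(path.join(secretsDir, name), 'utf8').trim();
    });
  } catch (err) {
    // the directory does not exist when the app is not running in swarm mode
  }
  return result;
}

module.exports = readAllSecrets;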

Assigning Docker secrets to services

We have to give a container explicit permission before it can access a secret. There are two ways to grant permission: add the secret when creating a service, or declare it in your docker-compose.yml file.

docker service create --name myservice --secret STORAGE_HOST myimage 

The service myservice now has access to the secret STORAGE_HOST, which was created on the host using the command line.

A different, and my preferred, way is to use a compose file, as shown below.

The compose file declares each secret in a separate block under the top-level secrets: keyword. The external: true setting indicates that these secrets already exist and were created externally using the command line. Each service also lists the secrets it needs under its own secrets: key; without that, the container does not get access to them.

version: "3.5"

services:
  workflow-engine:
    image: pkalkman/mve-workflowengine:0.4.6
    secrets:
      - AUTH_DOMAIN
      - AUTH_SECRET
      - STORAGE_HOST
    environment:
      AUTH_DOMAIN: /run/secrets/AUTH_DOMAIN
      AUTH_SECRET: /run/secrets/AUTH_SECRET
      STORAGE_HOST: /run/secrets/STORAGE_HOST
      HOST: 0.0.0.0
      HTTP_PORT: 8181
      LOG_LEVEL: 1
secrets:
  AUTH_DOMAIN:
    external: true
  AUTH_SECRET:
    external: true
  STORAGE_HOST:
    external: true

docker-compose.yml that defines and uses Docker Secrets

Docker Secrets in Official Docker Images

If you are wondering how official Docker images handle Docker secrets, there is a typical pattern. Most official images that use an environment setting also accept the same setting with a _FILE suffix.

For example, the MongoDB image uses the MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD environment variables.

It also accepts the MONGO_INITDB_ROOT_USERNAME_FILE and MONGO_INITDB_ROOT_PASSWORD_FILE environment variables. If you set the latter ones to /run/secrets/[secret name], the image will read and use that secret.

The official Postgres image follows the same _FILE pattern.
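
As an illustration, here is a compose fragment that feeds the DB_PASSWORD secret created earlier to the official MongoDB image via the _FILE variable (the rest of the service definition is omitted):

services:
  workflowdb:
    image: mongo:4.0.14
    environment:
      MONGO_INITDB_ROOT_USERNAME: mveroot
      MONGO_INITDB_ROOT_PASSWORD_FILE: /run/secrets/DB_PASSWORD
    secrets:
      - DB_PASSWORD
secrets:
  DB_PASSWORD:
    external: true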

Using the _FILE Pattern in Node.js

Although the earlier module to read secrets works fine, it would be better to use the same pattern as official images. We only have to make a small change to the secrets module and the config object.

// dependencies
const fs = require('fs');
const log = require('../log');

const dockerSecret = {};

dockerSecret.read = function read(secretNameAndPath) {
  try {
    return fs.readFileSync(`${secretNameAndPath}`, 'utf8');
  } catch(err) {
    if (err.code !== 'ENOENT') {
      log.error(`An error occurred while trying to read the secret: ${secretNameAndPath}. Err: ${err}`);
    } else {
      log.debug(`Could not find the secret, probably not running in swarm mode: ${secretNameAndPath}. Err: ${err}`);
    }    
    return false;
  }
};

module.exports = dockerSecret;

Reading Docker Secrets by specifying the complete path

Instead of accepting the name of the secret and prepending the path, the module now receives the complete path. The config object first reads the file that the _FILE setting points to, and if that is not available, it falls back to the regular environment setting.

/*
 * Create and export configuration variables used by the API
 *
 */
const constants = require('./constants');
const secrets = require('./secrets');

// Container for all environments
const environments = {};

environments.production = {
  httpPort: process.env.HTTP_PORT,
  httpAddress: process.env.HOST,
  envName: 'production',
  log: {
    level: process.env.LOG_LEVEL,
  },
  database: {
    url: secrets.read(process.env.STORAGE_HOST_FILE) || process.env.STORAGE_HOST,
    name: 'workflow-db',
    connectRetry: 5, // seconds
  },
  workflow: {
    pollingInterval: 10, // Seconds
  },
  authprovider: {
    domain: secrets.read(process.env.AUTH_DOMAIN_FILE) || process.env.AUTH_DOMAIN,
    secret: secrets.read(process.env.AUTH_SECRET_FILE) || process.env.AUTH_SECRET
  }
};

// Determine which environment was set via the NODE_ENV environment variable
const currentEnvironment = typeof process.env.NODE_ENV === 'string' ? process.env.NODE_ENV.toLowerCase() : '';

// Check that the current environment is one of the environments defined above,
// if not, default to production
const environmentToExport = typeof environments[currentEnvironment] === 'object' ? environments[currentEnvironment] : environments.production;

// export the module
module.exports = environmentToExport;

Config object that uses the same pattern as the official Docker images

For example, domain: secrets.read(process.env.AUTH_DOMAIN_FILE) || process.env.AUTH_DOMAIN first tries to read the secret file that the AUTH_DOMAIN_FILE environment variable points to, and if that does not succeed, it uses the AUTH_DOMAIN environment setting.
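
In the compose file, this means pointing the _FILE environment variables at the mounted secret files, for example:

    environment:
      AUTH_DOMAIN_FILE: /run/secrets/AUTH_DOMAIN
      AUTH_SECRET_FILE: /run/secrets/AUTH_SECRET
      STORAGE_HOST_FILE: /run/secrets/STORAGE_HOST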

I hope that I have convinced you to start using Docker secrets for storing sensitive information in production.

Thank you for reading and if you have any questions or remarks, feel free to leave a response.

Tips: Use Docker Compose for Local Node.js Development

Docker Compose offers a great local development setup for designing and developing container solutions. Whether you are a tester, developer, or a DevOps operator, Docker Compose has got you covered.

If you want to create an excellent local development and test environment for Node.js using Docker Compose, I have the following nine tips.

1. Use the Correct Version in Your Docker Compose File

The docker-compose.yml file is a YAML file that defines services, networks, and volumes for a Docker application. The first line of the file contains the version keyword and tells Docker Compose which version of its file format you are using.

There are two major versions that you can use, version 2 and version 3; both have a different use case.

The Docker Compose development team created version 2 for local development and version 3 to be compatible with container orchestrators such as Swarm and Kubernetes.

As we are talking about local Node.js development, I always use the latest version 2 release, at the time of writing, v2.4.

version: "2.4"
services:
  web:

2. Use Bind Mounts Correctly

My first tip for your bind mounts is to always mount your Node.js source code from your host using relative paths.

Using relative paths allows other developers to use this Compose file even when they have a different folder structure on their host.

volumes:
  - ./src:/home/nodeapp/src

Use named volumes to mount your databases

Almost all Node.js applications are deployed to production using a Linux container. If you use a Linux container and develop your application on Windows or a Mac you shouldn’t bind-mount your database files.

In this situation, the database server has to cross the operating system boundaries when reading or writing the database. Instead, you should use a named volume, and let Docker handle the database files.

version: '2.4'
services:
  workflowdb:
    image: 'mongo:4.0.14'
    environment:
      - MONGO_INITDB_ROOT_USERNAME=mveroot
      - MONGO_INITDB_ROOT_PASSWORD=2020minivideoencoder!
      - MONGO_INITDB_DATABASE=workflow-db
    volumes:
      - workflowdatabase:/data/db
    ports:
      - '27017:27017'
volumes:
  workflowdatabase:

Mounting a MongoDB database using a named volume

The volumes: keyword defines the named volumes of your docker-compose file. Here, we define the named volume workflowdatabase and use it in the workflowdb service.

Use delegated configuration for improved performance

I always add the delegated configuration to my volume mounts to improve performance. By using a delegated configuration on your bind mount, you tell Docker that it may delay updates from the container to appear in the host.

Usually, with local development, there is no need for writes performed in a container to be reflected immediately on the host. The delegated flag is an option that is specific to Docker Desktop for Mac.

volumes:
  - ./src:/home/app/src:delegated

Depending on the level of consistency you need between the container and your host, there are two other options to consider, [consistent](https://docs.docker.com/docker-for-mac/osxfs-caching/) and cached.

3. Correctly Handle Your node_modules

You can’t bind mount the node_modules directory from your host on macOS or Windows into your container because of the difference in operating systems.

Some npm modules perform dynamic compilation during npm install, and these dynamically compiled modules from macOS won’t run on Linux.

There are two different solutions to solve this:

  1. Fill the node_modules directory on the host via the Docker container.

You can fill the node_modules directory on the host via the Docker container by running npm install via the docker-compose run command. This installs the correct node_modules using the operating system of the container.

For example, take a standard Node.js app with the following Dockerfile and docker-compose.yml file:

FROM node:12-alpine 

WORKDIR /app

COPY . .

CMD [ "node", "index.js"]

Standard Dockerfile for a Node.js app

version: '2.4'

services: 
  workflowengine:
    build: .
    ports: 
      - 8080:8080
    volumes:
      - .:/app

Standard docker-compose.yml file

By executing the command docker-compose run workflowengine npm install, I install the node_modules on the host via the running Docker container.

This means that the node_modules on the host are now for the architecture and operating system of the Dockerfile and cannot be used from your host anymore.

2. Hide the host’s node_modules using an anonymous volume.

The second solution is more flexible than the first one, as you can still run and develop your application from the host as well as from the Docker container. This is known as the node_modules volume trick.

We have to change the Dockerfile so that the node_modules are installed one directory higher than the Node.js app.

The package.json is copied and installed in the /node directory, while the application itself lives in the /node/app directory. Node.js looks for the node_modules directory in the current folder and then walks up the directory tree.

FROM node:12-alpine
  
WORKDIR /node

COPY package*.json ./

RUN npm install && npm cache clean --force --loglevel=error

WORKDIR /node/app

COPY ./index.js index.js

CMD [ "node", "index.js"]

Dockerfile that installs the node_modules one directory above the application source code

To make sure that the node_modules from the host are not bind-mounted into the container, we mount an empty volume over them using this docker-compose file.

version: '2.4'

services: 
  workflowengine:
    build: .
    ports: 
      - 8080:8080
    volumes:
      - .:/node/app
      - /node/app/node_modules

The second entry in the volumes section mounts an anonymous volume over /node/app/node_modules, which hides the node_modules directory from the host.

4. Using Tools With Docker Compose

If you want to run your tools when developing with Docker Compose, you have two options: use docker-compose run or use docker-compose exec. Both behave differently.

docker-compose run [service] [command] starts a new container from the image of the service and runs the command.

docker-compose exec [service] [command] runs the command in the currently running container of that service.
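
For example, with the workflowengine service from the compose files above, the following two commands illustrate the difference (npm test is only an example script):

docker-compose run --rm workflowengine npm test
docker-compose exec workflowengine sh

The first starts a fresh container from the workflowengine image, runs the command, and removes the container afterwards; the second opens a shell in the container that is already running.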

5. Using nodemon for File Watching

I always use [nodemon](https://www.npmjs.com/package/nodemon) for watching file changes and restarting Node.js. When you are developing using Docker Compose, you can use nodemon by installing nodemon via the following Compose run command:

docker-compose run workflowengine npm install nodemon --save-dev

Then add the command keyword below the workflowengine service in the docker-compose.yml file. You also have to set NODE_ENV to development so that the dev dependencies are installed.

version: '2.4'

services: 
  workflowengine:
    build: .
    command: /app/node_modules/.bin/nodemon ./index.js
    ports: 
      - 8080:8080
    volumes:
      - .:/app
    environment:
      - NODE_ENV=development

6. Specify the Startup Order of Services

Docker Compose does not use a specific order when starting its services. If your services need a specific startup order, you can specify this using the depends_on keyword in your docker-compose file.

With depends_on you can specify that your service A depends on service B. Docker Compose starts service B before service A and makes sure that service B can be reached through DNS before starting service A.

If you are using version 2 of the Docker Compose YAML, depends_on can be combined with the HEALTHCHECK command to make sure that the service you depend on is started and healthy.

7. Healthchecks in Combination With depends_on

If you want your service to start after the service it depends on has started and is healthy, you have to combine depends_on with health checks.

version: '2.4'
services:
  workflowengine:
    image: 'workflowengine:0.6.0'
    depends_on: 
      workflowdb:
        condition: service_healthy
    environment: 
      - STORAGE_HOST=mongodb://mve-workflowengine:[email protected]:27017/workflow-db?authMechanism=DEFAULT&authSource=workflow-db
    ports:
      - '8181:8181'
    networks:
      - mve-network
  workflowdb:
    image: 'mongo:4.0.14'
    healthcheck:
      test: echo 'db.runCommand("ping").ok' | mongo localhost:27017/test --quiet
    environment:
      - MONGO_INITDB_ROOT_USERNAME=mveroot
      - MONGO_INITDB_ROOT_PASSWORD=2020minivideoencoder!
      - MONGO_INITDB_DATABASE=workflow-db
    volumes:
      - ./WorkflowDatabase/init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
      - ./WorkflowDatabase/data/workflow-db.db:/data/db
    ports:
      - '27017:27017'
    networks: 
      - mve-network

networks:
    mve-network:

Combining depends_on with a health check

You have to add condition: service_healthy to depends_on to indicate that the service you depend on should be healthy before starting this service.

The health check specified for the MongoDB database makes sure that the database server has started and is accepting connections before reporting healthy.

8. Shrinking Compose Files Using Extension Fields

You can increase the flexibility of your Compose files using environment variables and extension fields. Environment variables can be set using the environment keyword.

For example, to change the connection string of the database or the port that your API is listening to. See my article Node.js with Docker in production on how to configure and use environment variables in your Node.js application.

Extension fields let you define a block of text in a Compose file that can be reused in that same file. This way, you decrease the size of your Compose file and make it more DRY.

version: '2.4'

# template:
x-base: &base-service-template
  build: .
  networks:
    - mve-network
  
services: 
  workflowengine:
    <<: *base-service-template
    ports: 
      - 8080:8080
    volumes:
      - .:/node/app
      - /node/app/node_modules

networks:
  mve-network:

I define a template that includes build and networks, which are the same for each service. Using the syntax <<: *base-service-template, I inject the template into the service definition.

9. Add a Reverse Proxy Service

Once you have multiple services defined in your Compose file that expose an HTTP endpoint, you should start using a reverse proxy. Instead of having to manage all the ports and port mappings for your HTTP endpoints, you can start performing host header routing.

Instead of different ports, you can use DNS names to route between different services. The most common reverse proxies used in container solutions are NGINX, HAProxy, and Traefik.

Using NGINX

If you plan to use NGINX, I suggest the jwilder/nginx-proxy Docker container from Jason Wilder. Nginx-proxy uses docker-gen to generate NGINX configuration templates based on the services in your Compose file.

Every time you add or remove a service from your Compose file, Nginx-proxy regenerates the templates and automatically restarts NGINX. Automatically regenerating and restarting means that you always have an up-to-date reverse proxy configuration that includes all your services.

You can specify the DNS name of your service by adding the VIRTUAL_HOST environment variable to your service definition.

version: '2.4'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock

  workflowengine:
    image: 'workflowengine:0.6.0'
    depends_on: 
      workflowdb:
        condition: service_healthy
    environment: 
      - VIRTUAL_HOST=workflowengine.localhost
      - STORAGE_HOST=mongodb://mve-workflowengine:[email protected]:27017/workflow-db?authMechanism=DEFAULT&authSource=workflow-db
    ports:
      - '8181:8181'
    networks:
      - mve-network

  workflowencoder:
    image: 'videoencoder:0.6.0'
    depends_on: 
      workflowdb:
        condition: service_healthy
    environment: 
      - VIRTUAL_HOST=videoencoder.localhost  
    ports:
      - '8181:8181'
    networks:
      - mve-network

Using jwilder/nginx-proxy as a reverse proxy for your services

The nginx-proxy service mounts the Docker socket, which enables it to respond to containers being added or removed. In the VIRTUAL_HOST environment variable, I use *.localhost domains.

Chrome automatically points .localhost domains to 127.0.0.1.

Using Traefik

Traefik is a specialized open-source reverse proxy for HTTP and TCP-based applications that is distributed as a ready-to-use container image.

Using Traefik as a reverse proxy inside our Docker Compose file is more or less the same as using nginx-proxy. Traefik also offers an HTTP-based dashboard that shows the currently active routes it handles.

version: '2.4'
services:
  traefik:
    image: traefik:v1.7.20-alpine
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    command:
      - --docker
      - --docker.domain=traefik
      - --docker.watch
      - --api
      - --defaultentrypoints=http,https
    labels:
      - traefik.port=8080
      - traefik.frontend.rule=Host:traefik.localhost

  workflowengine:
    image: 'workflowengine:0.6.0'
    depends_on: 
      workflowdb:
        condition: service_healthy
    environment:
      - STORAGE_HOST=mongodb://mve-workflowengine:[email protected]:27017/workflow-db?authMechanism=DEFAULT&authSource=workflow-db
    labels:
      - traefik.port=8080
      - traefik.frontend.rule=Host:workflowengine.localhost
    ports:
      - '8181:8181'
    networks:
      - mve-network

  workflowencoder:
    image: 'videoencoder:0.6.0'
    depends_on: 
      workflowdb:
        condition: service_healthy
    labels:
      - traefik.port=8081
      - traefik.frontend.rule=Host:videoencoder.localhost
    ports:
      - '8181:8181'
    networks:
      - mve-network

Traefik uses labels instead of environment variables to define your DNS names. See the example above.

Traefik offers a lot more functionality than shown above. If you are interested, their website offers complete documentation on other features such as load balancing and automatic requesting and renewing of Let’s Encrypt certificates.

Thank you for reading. I hope these nine tips help with Node.js development using Docker Compose. If you have any questions, feel free to leave a response!

Top 7 Most Popular Node.js Frameworks You Should Know

Node.js is an open-source, cross-platform, runtime environment that allows developers to run JavaScript outside of a browser.

One of the main advantages of Node is that it enables developers to use JavaScript on both the front-end and the back-end of an application. This not only makes the source code of any app cleaner and more consistent, but it significantly speeds up app development too, as developers only need to use one language.

Node is fast, scalable, and easy to get started with. Its default package manager is npm, which means it also sports the largest ecosystem of open-source libraries. Node is used by companies such as NASA, Uber, Netflix, and Walmart.

But Node doesn't come alone. It comes with a plethora of frameworks. A Node framework can be pictured as the external scaffolding that you can build your app in. These frameworks are built on top of Node and extend the technology's functionality, mostly by making apps easier to prototype and develop, while also making them faster and more scalable.

Below are 7 of the most popular Node frameworks at this point in time (ranked from high to low by GitHub stars).

Express

With over 43,000 GitHub stars, Express is the most popular Node framework. It brands itself as a fast, unopinionated, and minimalist framework. Express acts as middleware: it helps set up and configure routes to send and receive requests between the front-end and the database of an app.

Express provides lightweight, powerful tools for HTTP servers. It's a great framework for single-page apps, websites, hybrids, or public HTTP APIs. It supports over fourteen different template engines, so developers aren't forced into a specific one.

Meteor

Meteor is a full-stack JavaScript platform. It allows developers to build real-time web apps, i.e. apps where code changes are pushed to all browsers and devices in real-time. Additionally, servers send data over the wire, instead of HTML. The client renders the data.

The project has over 41,000 GitHub stars and is built to power large projects. Meteor is used by companies such as Mazda, Honeywell, Qualcomm, and IKEA. It has excellent documentation and a strong community behind it.

Koa

Koa is built by the same team that built Express. It uses ES6 methods that allow developers to work without callbacks. Developers also have more control over error-handling. Koa has no middleware within its core, which means that developers have more control over configuration, but which means that traditional Node middleware (e.g. req, res, next) won't work with Koa.

Koa already has over 26,000 GitHub stars. The Express developers built Koa because they wanted a lighter framework that was more expressive and more robust than Express. You can find out more about the differences between Koa and Express here.

Sails

Sails is a real-time, MVC framework for Node that's built on Express. It supports auto-generated REST APIs and comes with an easy WebSocket integration.

The project has over 20,000 stars on GitHub and is compatible with almost all databases (MySQL, MongoDB, PostgreSQL, Redis). It's also compatible with most front-end technologies (Angular, iOS, Android, React, and even Windows Phone).

Nest

Nest has over 15,000 GitHub stars. It uses progressive JavaScript and is built with TypeScript, which means it comes with strong typing. It combines elements of object-oriented programming, functional programming, and functional reactive programming.

Nest is packaged in such a way that it serves as a complete development kit for writing enterprise-level apps. The framework uses Express under the hood but is compatible with a wide range of other libraries.

LoopBack

LoopBack is a framework that allows developers to quickly create REST APIs. It has an easy-to-use CLI wizard and allows developers to create models based on an existing schema or dynamically. It also has a built-in API explorer.

LoopBack has over 12,000 GitHub stars and is used by companies such as GoDaddy, Symantec, and the Bank of America. It's compatible with many REST services and a wide variety of databases (MongoDB, Oracle, MySQL, PostgreSQL).

Hapi

Similar to Express, hapi serves data by intermediating between the server side and the client side. As such, it can serve as a substitute for Express. Hapi allows developers to focus on writing reusable app logic in a modular and prescriptive fashion.

The project has over 11,000 GitHub stars. It has built-in support for input validation, caching, authentication, and more. Hapi was originally developed to handle all of Walmart's mobile traffic during Black Friday.

Node.js for Beginners - Learn Node.js from Scratch (Step by Step)

Node.js for Beginners

Learn Node.js from Scratch (Step by Step)

Welcome to my course "Node.js for Beginners - Learn Node.js from Scratch". This course will guide you step by step so that you learn the basics and theory of every part. It contains hands-on examples so that you can better understand coding in Node.js. If you have no previous knowledge or experience with Node.js, you will like that the course begins with the basics. If you already have some experience programming in Node.js, this course can still teach you something new. The course covers practical, hands-on examples without neglecting theory and the basics. Learn to use Node.js like a professional. This comprehensive course will allow you to work in the real world as an expert!
What you’ll learn:

  • Basic Of Node
  • Modules
  • NPM In Node
  • Event
  • Email
  • Uploading File
  • Advance Of Node