A brief introduction to NodeJS, its history, usage, and popularity in the current web world.

BEST PRACTICES

  • Structure the Project
    Break down the project into folders and files. Structure the app according to the API endpoints. Don’t make one huge file.
  • Use Code Formatters
    Use Prettier or Beautify
  • Avoid anonymous functions
    Create reusable functions
  • Use camelCase
    Follow a naming convention for methods, objects, and variables
  • Avoid callbacks - Use async-await
    Do not run into callback hell
  • Handle errors - Use try-catch
  • Use a logger
  • Use a process monitor
  • Use environment variables
  • Always use LTS
  • Prevent XSS attacks
    Escape HTML, JS and CSS scripts for special characters. Avoid JS eval.
  • Learn to crash - Dodge DOS attacks
  • Use ESLint
    Catch anti-patterns and follow standards

Introduction To NodeJS

NodeJS is an open-source, cross-platform JavaScript run-time environment that executes JavaScript code outside of a browser. That is the official definition of NodeJS. But in simpler words, NodeJS allows you to write and execute JavaScript code on a server or inside an application that does not have to work inside the browser. NodeJS is primarily used to write system utilities and web servers.

In recent years, NodeJS has become tremendously popular and is one of the most widely used runtime environments for web servers. It is an open-source, cross-platform runtime environment for developing server-side applications, or more precisely, code that runs on a server.

NodeJS is a platform built on Chrome’s JavaScript runtime for easily building fast and scalable network applications. It uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

In this article, we are going to have a look at some of the best practices that developers should follow to keep their huge NodeJS projects maintainable.

BEST PRACTICES

Structure the Project

Structuring the code base is very important for projects that scale and involve large teams. It ensures that the code is manageable and that developers across teams can understand it. For this, there is a need to establish some ground rules that will be followed by all developers and even testers. As the project grows, the code needs to be broken down, both logically and physically, so that it stays manageable as the project scales up. A few guidelines on how to break down a huge code base into smaller, manageable chunks follow.

The biggest hurdle of large projects is that they tend to grow with each new feature or bug fix. This requires breaking the project down into different files and then structuring those files into directories. The specifics vary from project to project, but the concept is that the project should be organized so that, to make a small change or add a small feature to one part of the codebase, a developer only needs to navigate to one branch of the directory tree.

With large codebases comes a huge set of dependencies. One big piece of software with many dependencies is hard to reason about, and this often leads to spaghetti code that is tough or almost impossible to manage. The solution is to break down the beast into smaller, tamable beasts: divide the whole stack into self-contained components that do not share files with others, each consisting of very few files and therefore much more manageable. Also, changes to one are completely opaque to another.

This opacity ensures that changes to one module or component in the project do not break the other ones that in some way are using that module or component.

Use code formatters

This one is not really specific to NodeJS, but tools like Prettier and Beautify improve the visual aesthetics of the codebase, which can otherwise look very messy and stressful to read. They also help you quickly find and fix common syntactical errors. Although this is more about the visual aesthetics of the code, these tools really help because they work constantly as you code. They work according to the language you are coding in (JavaScript or TypeScript), color certain keywords, format the code, and flag common errors like missing braces or semicolons.

Most of the popular code formatters support popular code editors as well. For example, both Prettier and Beautify support Visual Studio Code, which is my personal favorite where NodeJS is concerned. Prettier has a VSCode extension called prettier-vscode, and Beautify has unibeautify-vscode.

The official extensions are fully configurable and auto-format the code when the file is saved, thereby saving a lot of time and ensuring that developers do not lose focus.

Not using any of these code formatters will not affect how the code runs in any way, but it definitely affects developers’ efficiency. Developers end up focusing on tedious spacing and line-width concerns, and time might be wasted overthinking the project’s code style.

Avoid Anonymous Functions

According to this principle, named functions are preferred over anonymous inline functions. Always create named functions, including closures and callbacks, and name them logically. An example is the best way to explain this, so let’s have a look at one.

Suppose that once the createOrder method is initiated, it executes certain operations in a certain order, all inside an anonymous function. Instead of using an anonymous function, we can create a named function called postOrderTasks() that performs those tasks in the same order. The primary advantage is that this function can be reused over and over again in other parts of the project, which reduces code redundancy.

Moreover, if you find a bug in the code that creates an order, you fix that bug in only one place, i.e. the postOrderTasks() method, and the fix is reflected everywhere, which saves time.

Another advantage of using named functions is that when you are profiling a NodeJS app, they allow you to easily understand what you are looking at when checking a memory snapshot for memory leaks or inefficient code. Debugging production issues using a core dump (memory snapshot) can become challenging if you use too many anonymous functions.
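
As a sketch of the refactor described above (the order-processing steps here are hypothetical):

```javascript
// Hypothetical post-order steps pulled out into a named function, so
// they can be reused and show up by name in profiles and stack traces.
function postOrderTasks(order) {
  // ...update inventory, notify the customer, write an audit log...
  return { ...order, status: 'processed' };
}

function createOrder(items) {
  const order = { items, createdAt: Date.now() };
  return postOrderTasks(order);
}
```

A bug in the post-order logic is now fixed in one place, postOrderTasks(), and every caller benefits.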

Use camelCase

It is generally considered good practice to follow a naming convention when naming all variables, objects, and classes, and the case of the names is also very important, as lowercase and uppercase letters convey different meanings.

It is recommended to use lowerCamelCase when naming constants, variables, and functions and UpperCamelCase (capital first letter as well) when naming classes.

These are universally accepted conventions, and developers across the globe can easily identify and differentiate objects and classes when this naming convention is used.

const article = new Article();

const firstArticle = new Article();

Avoid callbacks - Use async-await

NodeJS 8 and higher have full async-await support in their LTS releases. It is often a good practice to use async-await operators instead of multiple callbacks. Inspired by C#’s async-await operators, NodeJS’s async-await appears to block on asynchronous operations, waiting for the results before continuing with the following statement where synchronous execution order is a requirement.

Async-await in NodeJS is completely non-blocking, although it may appear blocking. It is fast and efficient and works very much like C#’s async-await operators.

Let’s have a look at an example. Following is code that uses the traditional callback pattern to execute asynchronous functions sequentially.
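
A sketch of that staircase, using a hypothetical asynchronous step() simulated with setTimeout:

```javascript
// Hypothetical async operation: calls back with value + 1 after 10 ms.
function step(value, callback) {
  setTimeout(() => callback(null, value + 1), 10);
}

// Three asynchronous calls that must run in order: each one nests
// inside the previous callback, producing the "staircase" shape.
function foo(callback) {
  step(0, (err, a) => {
    if (err) return callback(err);
    step(a, (err, b) => {
      if (err) return callback(err);
      step(b, (err, c) => {
        if (err) return callback(err);
        callback(null, c);
      });
    });
  });
}
```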

In the above code block, there are 3 asynchronous functions executing in order and therefore the code looks like a staircase. With more functions, this looks terrible and code readability becomes very poor. Async-await comes to the rescue from callback hell. Now, let’s rewrite the code using the async-await operators.
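
A sketch of the async-await version, using a hypothetical step() that now returns a Promise:

```javascript
// Hypothetical async operation, now returning a Promise.
function step(value) {
  return new Promise((resolve) => setTimeout(() => resolve(value + 1), 10));
}

// Reads top to bottom with no nesting, yet the event loop stays free
// to run other code while each step's timer is pending.
async function foo() {
  const a = await step(0);
  const b = await step(a);
  const c = await step(b);
  return c;
}
```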

As you can see, the code is so much more readable. The function foo() does not block NodeJS’s event loop, in spite of its synchronous appearance. Execution within the function is suspended during each of its three asynchronous operations, but NodeJS’s event loop can execute other code whilst those operations are being executed.

Using the await operator next to an asynchronous function call ensures that the next line of code will only be executed when the awaited function is executed successfully.

It is recommended to use try-catch blocks when using async-await operators which we will discuss next.

Handle errors - Use try-catch

This is more of a requirement than a recommendation. When using async-await operators, we no longer have access to the promises’ success and failure callbacks, then() and catch(). How do we handle errors then? Well, we fall back to the old try-catch method of handling errors in the code.

Wrap the code with the await keywords in a try block and write the corresponding error-handling code in the catch block. Simple, right? Let’s rewrite the above code block that uses async-await, this time with try-catch.
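
A sketch with the awaited calls wrapped in try-catch (step() is again a hypothetical Promise-returning operation):

```javascript
// Hypothetical async operation returning a Promise.
function step(value) {
  return new Promise((resolve) => setTimeout(() => resolve(value + 1), 10));
}

async function foo() {
  try {
    const a = await step(0);
    const b = await step(a);
    const c = await step(b);
    return c;
  } catch (err) {
    // Any rejected await above lands here instead of a .catch() callback.
    console.error('step failed:', err);
    return null;
  }
}
```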

Simple, elegant, and the code stays readable. Keep in mind that if you do not wrap your awaited calls in try blocks, you may not be able to tell whether a function executed successfully or failed. In such a situation, you will not be able to handle errors.

Use a logger

Whether the software is working perfectly or not, logs are an invaluable piece of information. Always use a logger tool that logs information about database accesses, crashes, user access patterns and other information that might be useful to the teams. There are many sophisticated logging tools available for NodeJS. Some of the most popular ones are -

  • Node-Loggly
  • Bunyan
  • Winston
  • Morgan

I have used Winston in numerous projects and it is one of the best but it varies from project to project and from developer to developer as well.

With Winston, you can:

  • Use multiple means of transport
  • Create custom transports
  • Perform profiling
  • Handle exceptions
  • Use one of a range of pre-defined error levels
  • Create custom error levels

Multiple modes of transport allow you to log to files, the network, or pretty much anything else. Several core transports are included in Winston, which leverage the built-in networking and file I/O offered by NodeJS core. In addition, there are transports written by members of the community.
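
To make the idea concrete, here is a minimal stand-in, not Winston’s actual API, just a sketch of leveled logging with pluggable transports:

```javascript
// Log levels ordered by severity, lower number = more severe.
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };

// A "transport" here is simply a function that receives formatted lines
// (it could append to a file, send over the network, or print).
function createLogger(level, transports) {
  const threshold = LEVELS[level];
  const log = (lvl, msg) => {
    if (LEVELS[lvl] <= threshold) {
      const line = `${new Date().toISOString()} [${lvl}] ${msg}`;
      transports.forEach((t) => t(line));
    }
  };
  return {
    error: (msg) => log('error', msg),
    warn: (msg) => log('warn', msg),
    info: (msg) => log('info', msg),
    debug: (msg) => log('debug', msg),
  };
}
```

A real logger like Winston adds timestamps, serialization, querying, and battle-tested transports on top of this core idea.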

Without a logger, skimming through console output or a messy text file without querying tools or a decent log viewer can be a pain when your software is down.

Use a process monitor

There may be cases when the server crashes or runs into errors that it cannot handle because we, as developers, have not considered certain edge cases. In these situations, we need to ensure that the process is terminated gracefully and restarted immediately, with no or minimal downtime. Information about the crash should also be written to logs so that we can handle the crash if it occurs again in the future, or eliminate it altogether.

Process monitors or process management tools like PM2 are available for NodeJS and handle all of these things for us. The PM2 utility also integrates a load balancer. You can keep application server processes alive and reload or restart them with zero downtime.

PM2 allows you to easily manage your application’s logs as well. You can display the logs coming from all your applications in real-time, flush them, and reload them. There are also different ways to configure how PM2 will handle your logs (separated in different files, merged, with timestamp…) without modifying anything in your code.

Primary features of PM2 are -

  • Process management including automatic app restarts on failure or system reboots
  • Application monitoring
  • Declarative configuration via JSON file
  • Log management
  • Built-in cluster mode
  • Startup script generation for *nix systems
  • Seamless updates
  • Integrated module system

You can also check out forever or supervisor which offer similar features for process monitoring.
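
As a sketch of the declarative JSON/JS configuration mentioned above, a PM2 ecosystem file might look like this (the app name and script path are placeholders for your own project):

```javascript
// ecosystem.config.js - a sketch of a PM2 ecosystem file.
const config = {
  apps: [
    {
      name: 'my-api', // hypothetical app name
      script: './server.js', // hypothetical entry point
      instances: 'max', // one process per CPU core
      exec_mode: 'cluster', // built-in cluster mode
      env: { NODE_ENV: 'development' },
      env_production: { NODE_ENV: 'production' },
    },
  ],
};

module.exports = config;
```

With this file in place, `pm2 start ecosystem.config.js --env production` starts the app in cluster mode with production environment variables.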

Use environment variables

Environment variables are a fundamental part of developing with NodeJS, allowing your app to behave differently based on the environment you want it to run in. They are what the name implies: variables stored in the environment in which the code is being executed. They can be used to store sensitive information like API keys or secret strings, and also to indicate whether the app is in test, dev or production mode. Based on the values of these variables, different code blocks can be executed to perform different operations, or to perform operations differently.

Environment variables can be used to store app-specific keys, software version and build information, file and folder paths, ports, host information, etc. When the app is deployed on another server or on the cloud, the environment variables ensure that the app still works; since the environment is different, it will adapt and work exactly as it is supposed to. You are saved the trouble of switching variables before deploying the app.

There is a popular package called dotenv that can be used to read .env files, letting you define environment variables in a file. Environment variables ensure that your code is always aware of the environment it is running in and can behave as intended.
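
A sketch of reading configuration from process.env, which is built into Node core (with dotenv, a single `require('dotenv').config()` call at startup would populate process.env from a .env file first; the variable names below are hypothetical):

```javascript
// Read configuration from the environment, with safe defaults for
// local development; secrets like API_KEY have no default on purpose.
const config = {
  port: parseInt(process.env.PORT || '3000', 10),
  nodeEnv: process.env.NODE_ENV || 'development',
  apiKey: process.env.API_KEY, // secrets come only from the environment
};

const isProduction = config.nodeEnv === 'production';
```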

Keep in mind that .env files should be ignored (for example via .gitignore) when you push the code to a version control host like GitHub.

Always use LTS

It is always recommended to use the latest available LTS version of NodeJS. LTS stands for Long Term Support, which means this version will keep receiving support from the NodeJS team for an extended, clearly defined period. It is future-proof and therefore suitable for use in production.

The other variant, called the Current (previously Stable) release line, receives frequent updates, bug fixes and performance improvements, which can be breaking at times. If you want the most stable version, use the latest available LTS.

According to Rod Vagg from NodeJS LTS team, the point of establishing an LTS plan for Node is to build on top of an existing stable release cycle by delivering new versions on a predictable schedule that have a clearly defined extended support lifecycle. While this may seem at odds with the open source tradition of “release early, release often”, it is an essential requirement for enterprise application development and operations teams.

Prevent XSS Attacks

Cross-Site Scripting (XSS) attacks are a type of injection, in which malicious scripts are injected into otherwise benign and trusted websites. XSS attacks occur when an attacker uses a web application to send malicious code, generally in the form of a browser side script, to a different end user.

In NodeJS, XSS attacks can be prevented by using dedicated modules. There are a lot of modules available for this very purpose. The application should be using secure headers to deter attackers; these can be configured easily using modules like helmet. There are other packages like sanitizer and dompurify that ensure the content sent down to the client is pure content that cannot be evaluated. Basically, this is mitigated by using dedicated libraries that explicitly mark the data as pure content that should never get executed.

Another common cause is the JavaScript’s eval(), setTimeout(), setInterval() methods. These methods and new Function() are global functions, often used in NodeJS, which accept a string parameter representing a JavaScript expression, that can be evaluated to perform an operation on the client. The security concern of using these functions is the possibility that untrusted user input might find its way into code execution leading to server compromise, as evaluating user code essentially allows an attacker to perform any actions that you can. It is therefore suggested to refactor code to not rely on the usage of these functions where user input could be passed to the function and executed. Another alternative is to sanitize the user input before passing it to one of these methods.
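
As a minimal sketch of what escaping HTML special characters means (dedicated libraries like sanitizer or dompurify do this and much more; prefer them in real code):

```javascript
// Replace characters that have special meaning in HTML with their
// entity equivalents, so injected markup renders as inert text.
function escapeHtml(str) {
  return String(str).replace(/[&<>"'`]/g, (ch) => ({
    '&': '&amp;',
    '<': '&lt;',
    '>': '&gt;',
    '"': '&quot;',
    "'": '&#39;',
    '`': '&#96;',
  }[ch]));
}
```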

Learn to crash - Dodge DOS attacks

The Node process will crash when errors are not handled. Many best-practice guides even recommend exiting after an error has been caught and handled. The danger is that crashing opens a very sweet attack spot: an attacker who recognizes what input makes the process crash can repeatedly send the same request.

A denial-of-service attack (DoS attack) is a cyber-attack in which the perpetrator seeks to make a machine or network resource unavailable to its intended users by temporarily or indefinitely disrupting services of a host connected to the Internet.

There is no one solution to this because there is a human sitting at the attacking end but there are a few things that can help.

  • Alert whenever a process crashes due to an unhandled error
  • Validate and sanitize the input
  • Avoid crashing the process due to invalid user input
  • Wrap all routes with a catch clause and consider not crashing when an error originated within a request

Crashing does not look all bad now, does it?
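
A sketch of the distinction drawn above: truly unknown errors crash the process so a monitor can restart it, while invalid user input is rejected per-request (the parseAmount helper is hypothetical):

```javascript
// Truly unexpected errors: log and exit, letting a process monitor
// (PM2, systemd, etc.) restart a clean process.
process.on('uncaughtException', (err) => {
  console.error('Fatal, exiting:', err);
  process.exit(1);
});

// Invalid user input, by contrast, should be rejected, not crash us.
function parseAmount(input) {
  const n = Number(input);
  if (!Number.isFinite(n) || n < 0) {
    throw new RangeError('invalid amount'); // caught per-request by a route wrapper
  }
  return n;
}
```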

Use ESLint

ESLint is a standard tool for checking possible code errors and fixing code style issues. It is used not only to identify minor spacing issues but also to detect serious code anti-patterns, like developers throwing errors without classification or omitting a return statement in a method that is supposed to return something. ESLint can automatically fix code style issues, and it is often used together with formatters like Prettier and Beautify.

Linting forces developers to follow standard practices and therefore makes development easier and coherent for everyone working on the project. In my personal experience, linting has made me a better developer overall.


Node Js Best Practices