JavaScript Testing using Selenium WebDriver, Mocha and NodeJS

In case you are looking to write a functional test in JavaScript, the following tutorial provides UI automation engineers with the perfect structural reference material for JavaScript testing with Selenium WebDriver 3, Mocha and NodeJS.

These days, JavaScript is a ubiquitous web language which seems to have overcome its ‘notorious’ past and become a solid platform not only for the client side, but for server domains too. Mocha.js, or simply Mocha, is a feature-rich JavaScript test framework running on Node.js, the platform that provides the API for building standalone server-side applications with Google’s V8 JavaScript engine at its base.

Note: to get started with this JavaScript tutorial, you’ll need to be familiar with the basics of NodeJS and the JavaScript programming language.

Tutorial Overview:

1. Mocha Test Construction
  • Introduction
  • Installation
  • Installing Chai Assertion Module
  • Test suite and Test Case Structure
  • Constructing Tests with Mocha
  • Running Mocha’s Test Suite and Test Cases
  • Managing Syncing of Async Testing Code
2. Using the JavaScript Selenium 3 API Integrated with MochaJS
  • Selenium Introduction
  • Selenium Installation
  • WebDriver Construction
  • Integrating MochaJS with Selenium WebDriver 3

Versions used:

  • Node version used: 6.10.1 (LTS)
  • Mocha: 2.5.3
  • WebDriverJS: 3.3.0
1. Constructing Tests with Mocha

Introduction to Mocha

As mentioned, Mocha is a JavaScript test framework that runs tests on Node. Mocha comes in the form of a Node package via npm, allowing you to use any library for assertions as a replacement for Node’s standard ‘assert’ function, such as ChaiJS.

Mocha provides an API, which specifies a way to structure the testing code into test suites and test case modules for execution, and later on to produce a test report. Mocha provides two modes for running: either by command line (CLI) or programmatically (Mocha API).

Install Mocha

If Mocha is to be used from the CLI, it should be installed globally as a Node.js package:

npm install -g mocha 

Install Chai Assertion Module

npm install --save chai 

The --save option is used to install the module in the project’s scope and not globally.

Test Suite and Test Case Structure

In Mocha, a test suite is defined by the ‘describe’ keyword which accepts a callback function. A test suite can contain child / inner test suites, which can contain their own child test suites, etc. A test case is denoted by the ‘it’ function, which accepts a callback function and contains the testing code.

Mocha supports test suite setup and test case setup functions. A test suite setup is denoted by before, while a test case setup uses beforeEach. beforeEach is effectively a common setup for every case in the suite and will be executed before each case.

As with the setup, Mocha supports test suite and test case teardown functions. A test suite teardown is denoted by after, while a test case teardown is implemented with afterEach, functions that are executed after a test suite and after each test case respectively.

Create a file that will ‘host’ the test suite, e.g. test_suite.js, and write the following to it:

describe("Inner Suite 1", function(){

    before(function(){

        // do something before test suite execution
        // no matter if there are failed cases

    });

    after(function(){

        // do something after test suite execution is finished
        // no matter if there are failed cases

    });

    beforeEach(function(){

        // do something before test case execution
        // no matter if there are failed cases

    });

    afterEach(function(){

        // do something after test case execution is finished
        // no matter if there are failed cases

    });

    it("Test-1", function(){

        // test Code
        // assertions

    });

    it("Test-2", function(){

        // test Code
        // assertions

    });

    it("Test-3", function(){

        // test Code
        // assertions

    });

});

Running Mocha Test Suite and Test Cases

Mocha supports three ways of executing tests: a whole test suite file, tests filtered by “grep” patterns, and grep-filtered tests searched for recursively in a directory tree (the recursive option).

Run whole Test Suite file:

mocha /path/to/test_suite.js 

Run a specific suite or test from a specific suite file.

If a suite is selected then all the child suites and/or tests will be executed.

mocha -g "Test-2" /path/to/test_suite.js 

Run a specific suite or test file by searching recursively in a directory tree.

mocha --recursive -g "Test-2" /directory/ 

For extensive CLI options:

mocha --help 

Managing Syncing of Async Testing Code

If async functions used with Mocha are not handled properly, you may find yourself struggling. If async code (e.g. HTTP requests, files, Selenium, etc.) is to be used in a test case, follow these guidelines to avoid unexpected results:

1. The done Function

In your test function (it) you need to pass the done function down the callback chain — this ensures it is executed after your last step.

The example below demonstrates the done functionality. In this case, a three-second timeout will occur at the end of the test function.

it('Test-1', function(done){

    setTimeout(function(){

        console.log("timeout!");

        // mocha will wait for done to be called before exiting the function
        done();
    }, 3000);

});

2. Return Promise

Returning a promise is another way to ensure that Mocha has executed all code lines when async functions are used (the ‘done’ function is not needed in this case).

it('Test-1', function(){

    var promise;
    promise = new Promise(function(resolve, reject){
        setTimeout(function(){

            console.log("Timeout");
            resolve();

        }, 3000);

    });
    // mocha will wait for the promise to be resolved before exiting
    return promise;  
});

2. JavaScript Selenium 3 Integration with MochaJS

Selenium Introduction

Selenium is a library that controls a web browser and emulates user behavior. More specifically, Selenium offers language-specific library APIs called ‘bindings’. These ‘bindings’ act as a client, performing requests to intermediate components that act as servers, in order to finally control a browser.

The intermediate components could be the actual WebDriver, found natively in each Selenium package, the selenium-standalone-server, as well as the vendors’ native browser-controlling drivers, such as geckodriver for Mozilla Firefox and chromedriver for Chrome. Moreover, Selenium WebDriver communicates with the browser drivers via the ‘JSON Wire Protocol’, which formed the basis of the W3C WebDriver standard.
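
As a concrete illustration of these intermediate components, once the selenium-webdriver package (installed in the next section) is available, a driver can be routed through a running selenium-standalone-server rather than a locally spawned browser driver. A minimal sketch, with a placeholder server URL:

var webdriver = require('selenium-webdriver');

// Commands are sent to the standalone server, which forwards them
// to the appropriate vendor driver (geckodriver, chromedriver, ...)
var driver = new webdriver.Builder()
    .usingServer('http://localhost:4444/wd/hub') // placeholder URL
    .forBrowser('chrome')
    .build();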

Selenium Installation

Before diving any deeper into Selenium integration with MochaJS, we will take a quick look into Selenium implementation with NodeJS.

In order to use the Selenium API for JavaScript (or Selenium JavaScript bindings), we should install the appropriate module:

npm install selenium-webdriver 

At this point, it should be clarified that the JavaScript Selenium WebDriver can also be referred to as WebDriverJS (although not in npm). WebDriverJS is different from other libs/modules such as WebdriverIO, Protractor, etc.: selenium-webdriver is the official open-source base JavaScript Selenium library, while the others are wrapper libraries/frameworks built on top of the WebDriverJS API, claiming to enhance usability and maintenance.

In NodeJS code, the module is used by:

require('selenium-webdriver') 

WebDriver Construction

In order to be able to use Selenium, we should build the appropriate ‘webdriver’ object which will then control our browser. Below, we can see how we use the “Builder” pattern to construct a webdriver object by chaining several functions.

Builder with Options

var webdriver = require('selenium-webdriver');
var chrome = require('selenium-webdriver/chrome');
var firefox = require('selenium-webdriver/firefox');

var driver = new webdriver.Builder()
    .forBrowser('firefox')
    .setFirefoxOptions( /* … */)
    .setChromeOptions( /* … */)
    .build();

In the code above, we have managed to build a WebDriver object which aggregates configuration for more than one browser (notice the ‘options’ methods), despite the fact that the forBrowser() method explicitly sets firefox.

The user can set the SELENIUM_BROWSER environment variable at runtime to select the desired browser. It will override the browser set by forBrowser, since we have already configured capabilities for multiple browsers via the setOptions calls.
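
For example, assuming the script above is saved as a file named test.js (the file name is illustrative), the target browser can be switched at launch time without touching the code:

SELENIUM_BROWSER=chrome node test.js 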

The browser properties can have several types of information depending on the browser under test. For example, in Mozilla’s properties we can set the desired ‘profile’ configuration as follows:

var profile = new firefox.Profile( /* … path to firefox local profile … */ );
var firefoxOptions = new firefox.Options().setProfile(profile);

Then, in the above Builder snippet we can add:

setFirefoxOptions( firefoxOptions ) 
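
Putting the pieces together, a minimal sketch of the Builder configured with a Firefox profile might look like this (the profile path is a placeholder):

var webdriver = require('selenium-webdriver');
var firefox = require('selenium-webdriver/firefox');

var profile = new firefox.Profile('/path/to/local/firefox/profile'); // placeholder path
var firefoxOptions = new firefox.Options().setProfile(profile);

var driver = new webdriver.Builder()
    .forBrowser('firefox')
    .setFirefoxOptions(firefoxOptions)
    .build();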

Builder with Capabilities

Selenium WebDriver JavaScript API documents several ways that a webdriver could be built. One more possible way is by setting all the required driver configurations in capabilities:

var driver = new webdriver.Builder()
    .withCapabilities({ 'browserName': 'firefox' })
    .build();

Note that if setOptions is called after withCapabilities, the configurations will be overridden (e.g. proxy configurations).

Selenium WebDriver Control Flow and Promise Management

Since JavaScript and NodeJS are based on asynchronous principles, Selenium WebDriver behaves in a similar way. In order to avoid callback pyramids and to assist a test engineer with the scripting experience as well as code readability and maintainability, Selenium WebDriver objects incorporate a promise manager that uses a ‘ControlFlow’. ‘ControlFlow’ is a class responsible for the sequential execution of the asynchronous webdriver commands.

Practically, each command is executed on the driver object and a promise is returned. The next commands do not need to be nested in ‘thens’, unless there is a need to handle a promise resolved value as follows:

driver.get("http://www.google.com");
driver.getTitle().then(function( title ) {

    // google page title should be printed 
    console.log(title)

});

driver.quit();

Pointers for JavaScript Testing with Selenium WebDriver and Mocha

  1. driver is a webdriver object, not a promise object
  2. driver.getTitle() or driver.get(url), or any other Selenium command, returns a promise object!

This means that we can perform the following:

var titlePromise = driver.getTitle();
titlePromise.then(function(title){

    console.log(title);

});
  3. Additionally, since driver is async at its base, the following will not work:

var title = driver.getTitle();
expect(title).equals("Google");

Note: title is a promise object and not an actual resolved value.
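
A minimal corrected sketch resolves the promise first and asserts on the actual value (using the Chai expect style assumed above):

driver.getTitle().then(function(title){
    // 'title' here is the resolved string, safe to assert on
    expect(title).equals("Google");
});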

MochaJS + Selenium WebDriver

Generally speaking, Selenium WebDriver can be integrated with MochaJS just as it is used in any plain NodeJS script. However, since Mocha doesn’t know when an asynchronous function has finished until done() is called or a promise is returned, we have to be very careful with handling.

Promise Based

Selenium commands are registered automatically; to assure that the WebDriver commands are executed in the correct sequential order, a promise should be returned.

The code below returns a promise from Mocha’s hooks (before, beforeEach, after, afterEach) and from the test case body (it):

describe( 'Test Suite' , function(){

    before(function(){

        driver.get( my_service );
        driver.findElement(webdriver.By.id(username)).sendKeys(my_username);

        // a promise is returned while the 'click' action
        // is registered in the 'driver' object
        return driver.findElement(webdriver.By.id(submit)).click();
    });

    after(function(){

        return driver.quit();

    });

    it( 'Test Case', function(){

        return driver.getTitle().then(function(title){
            expect(title).equals(my_title);
        });

    });

});

The following actions will be executed:

  1. Browser page of “my_service” is loaded
  2. Text field with id ‘username’ is located
  3. Text field with id ‘username’ is filled with ‘my_username’
  4. Button with id ‘submit’ is clicked
  5. Page title is retrieved and checked for equality against ‘my_title’
  6. WebDriver quits and the browser window is closed. The browser process is terminated.

Selenium Webdriver Support for MochaJS

In order to perform JavaScript testing with Selenium WebDriver and Mocha in a simple way, WebDriver facilitates usage with MochaJS by wrapping the MochaJS test functions (before, beforeEach, it, etc.) with a test object. This creates a scope that provides awareness that WebDriver is being used, so there is no need to return promises.

First, the corresponding module should be loaded:

var test = require('selenium-webdriver/testing'); 

All of Mocha’s functions are preceded by ‘test.’ as follows:

test.before()
test.describe()

And so on. Then, the above code is fully re-written as:

test.describe( 'Test Suite' , function(){

    test.before(function(){

        driver.get( my_service );
        driver.findElement(webdriver.By.id(username)).sendKeys(my_username);
        driver.findElement(webdriver.By.id(submit)).click();
    });

    test.after(function(){
        driver.quit();
    });

    test.it( 'Test Case' , function(){

        driver.getTitle().then(function(title){
            expect(title).equals(my_title);
        })

        driver.sleep(1000); // pause for one second
    });

});
Conclusion

In this tutorial we got a chance to experience JavaScript testing with Selenium WebDriver and MochaJS. We should keep in mind the main difference from other programming languages’ bindings: the asynchronous nature of NodeJS, MochaJS and Selenium WebDriver.

As long as we keep returning promises in any function which creates a promise (either a custom test lib function or a MochaJS hook/testcase), Mocha will execute them in the correct order.

Other frameworks such as WebdriverIO, Protractor and CodeceptJS provide wrapper solutions that hide some configuration from the user and offer promise-enhanced handling for a better scripting experience, which many test automation experts might find helpful.

How to Test Node.js Apps using Mocha, Chai and SinonJS

In this article, we'll look into testing Node.js apps. We'll use Mocha, Chai and SinonJS, and delve into using spies, stubs and mocks.

Tests help document the core features of an application. Properly written tests ensure that new features do not introduce changes that break the application.

An engineer maintaining a codebase might not necessarily be the same engineer that wrote the initial code. If the code is properly tested another engineer can confidently add new code or modify existing code with the expectation that the new changes do not break other features or, at the very least, do not cause side effects to other features.

JavaScript and Node.js have many testing and assertion libraries, such as Jest, Jasmine, QUnit, and Mocha. However, in this article, we will look at how to use Mocha for testing, Chai for assertions and Sinon for mocks, spies, and stubs.

Mocha

Mocha is a feature-rich JavaScript test framework running on Node.js and in the browser. It encapsulates tests in test suites (describe-block) and test cases (it-block).

Mocha has lots of interesting features:

  • browser support
  • simple async support, including promises
  • test coverage reporting
  • async test timeout support
  • before, after, beforeEach, afterEach Hooks, etc.
Chai

To make equality checks or compare expected results against actual results, we can use the Node.js built-in assert module. However, its failure reporting is limited, so Mocha recommends using other assertion libraries, and for this tutorial we will be using Chai.

Chai exposes three assertion interfaces: expect(), assert() and should(). Any of them can be used for assertions.
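
For instance, a quick sketch of the same equality check written in each of the three styles:

const chai = require('chai');
const assert = chai.assert;
const expect = chai.expect;
chai.should(); // extends Object.prototype with 'should'

const user = { name: 'Ada' };

assert.equal(user.name, 'Ada');       // assert style
expect(user.name).to.equal('Ada');    // expect style
user.name.should.equal('Ada');        // should style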

Sinon

Often, the method that is being tested is required to interact with or call other external methods. Therefore you need a utility to spy, stub, or mock those external methods. This is exactly what Sinon does for you.

Stubs, mocks, and spies make tests more robust and less prone to breakage should dependent code evolve or have its internals modified.

Spy

A spy is a fake function that keeps track of the arguments, return value, the value of this, and the exception thrown (if any) for each of its calls.
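
A brief sketch of a Sinon spy in action (the greet function here is hypothetical, not part of the app we build below):

const sinon = require('sinon');

function greet(formatter, name) {
  return formatter(name);
}

const formatterSpy = sinon.spy(name => `Hello ${name}`);
greet(formatterSpy, 'Ada');

console.log(formatterSpy.calledOnce);            // true
console.log(formatterSpy.firstCall.args);        // [ 'Ada' ]
console.log(formatterSpy.firstCall.returnValue); // 'Hello Ada'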

Stub

A stub is a spy with predetermined behavior.

We can use a stub to:

  • Take a predetermined action, like throwing an exception
  • Provide a predetermined response
  • Prevent a specific method from being called directly (especially when it triggers undesired behaviors like HTTP requests)
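
A short sketch covering these uses (the paymentClient object is hypothetical):

const sinon = require('sinon');

const paymentClient = { charge: () => { /* would make a real HTTP request */ } };

// Provide a predetermined response instead of hitting the network
const chargeStub = sinon.stub(paymentClient, 'charge').returns({ status: 'ok' });
console.log(paymentClient.charge('order-1')); // { status: 'ok' }

// Or take a predetermined action, like throwing an exception
chargeStub.throws(new Error('payment service down'));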

Mock

A mock is a fake function (like a spy) with pre-programmed behavior (like a stub) as well as pre-programmed expectations.

We can use a mock to:

  • Verify the contract between the code under test and the external methods that it calls
  • Verify that an external method is called the correct number of times
  • Verify an external method is called with the correct parameters

The rule of thumb for a mock is: if you are not going to add an assertion for some specific call, don’t mock it. Use a stub instead.
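
A minimal sketch of a mock with a pre-programmed expectation (the mailer object is hypothetical):

const sinon = require('sinon');

const mailer = { send: () => {} };

const mailerMock = sinon.mock(mailer);
// pre-programmed expectation: 'send' must be called exactly once with this argument
mailerMock.expects('send').once().withArgs('welcome@example.com');

mailer.send('welcome@example.com');

mailerMock.verify(); // throws if the expectation was not met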

Writing tests

To demonstrate what we have explained above we will be building a simple node application that creates and retrieves a user. The complete code sample for this article can be found on CodeSandbox.

Project setup

Let’s create a new project directory for our user app project:

mkdir mocha-unit-test && cd mocha-unit-test
mkdir src

Create a package.json file in the project root and add the code below:

// package.json
{
  "name": "mocha-unit-test",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "scripts": {
    "test": "mocha './src/**/*.test.js'",
    "start": "node src/app.js"
  },
  "keywords": [
    "mocha",
    "chai"
  ],
  "author": "Godwin Ekuma",
  "license": "ISC",
  "dependencies": {
    "body-parser": "^1.18.3",
    "dotenv": "^6.2.0",
    "express": "^4.16.4",
    "jsonwebtoken": "^8.4.0",
    "morgan": "^1.9.1",
    "pg": "^7.12.1",
    "pg-hstore": "^2.3.3",
    "sequelize": "^5.19.6"
  },
  "devDependencies": {
    "chai": "^4.2.0",
    "mocha": "^6.2.1",
    "sinon": "^7.5.0",
    "faker": "^4.1.0"
  }
}

Run npm install to install project dependencies.

Notice that the test-related packages mocha, chai, sinon, and faker are saved in the devDependencies.

The test script uses a custom glob (./src/**/*.test.js) to configure the file path of the test files. Mocha will look for test files (files ending with .test.js) within the directories and subdirectories of the src folder.

Repositories, services, and controllers

We will structure our application using the controller, service, and repository pattern, so our app will be broken into repositories, services, and controllers. The Repository-Service-Controller pattern breaks up the business layer of the app into three distinct layers:

  • The repository class handles getting data into and out of our data store. A repository is used between the service layer and the model layer. For example, in the UserRepository you would create methods that write/read a user to and from the database
  • The service class calls the repository class and can combine their data to form new, more complex business objects. It is an abstraction between the controller and the repository. For example, the UserService would be responsible for performing the required logic in order to create a new user
  • A controller contains very little logic and is used to make calls to services. Rarely does the controller make direct calls to the repositories unless there’s a valid reason. The controller will perform basic checks on the data returned from the services in order to send a response back to the client

Breaking down applications this way makes testing easy.

UserRepository class

Let’s begin by creating a repository class:

// src/user/user.repository.js
const { UserModel } = require("../database");
class UserRepository {
  constructor() {
    this.user = UserModel;
    this.user.sync({ force: true });
  }
  async create(name, email) {
    return this.user.create({
      name,
      email
    });
  }
  async getUser(id) {
    return this.user.findOne({ id });
  }
}
module.exports = UserRepository;

The UserRepository class has two methods, create and getUser. The create method adds a new user to the database, while the getUser method retrieves a user from the database.

Let’s test the userRepository methods below:

// src/user/user.repository.test.js
const chai = require("chai");
const sinon = require("sinon");
const expect = chai.expect;
const faker = require("faker");
const { UserModel } = require("../database");
const UserRepository = require("./user.repository");
describe("UserRepository", function() {
  const stubValue = {
    id: faker.random.uuid(),
    name: faker.name.findName(),
    email: faker.internet.email(),
    createdAt: faker.date.past(),
    updatedAt: faker.date.past()
  };
  describe("create", function() {
    it("should add a new user to the db", async function() {
      const stub = sinon.stub(UserModel, "create").returns(stubValue);
      const userRepository = new UserRepository();
      const user = await userRepository.create(stubValue.name, stubValue.email);
      expect(stub.calledOnce).to.be.true;
      expect(user.id).to.equal(stubValue.id);
      expect(user.name).to.equal(stubValue.name);
      expect(user.email).to.equal(stubValue.email);
      expect(user.createdAt).to.equal(stubValue.createdAt);
      expect(user.updatedAt).to.equal(stubValue.updatedAt);
    });
  });
});

The above code tests the create method of the UserRepository. Notice that we are stubbing the UserModel.create method. The stub is necessary because our goal is to test the repository, not the model. We use faker (https://github.com/marak/Faker.js/) for the test fixtures:

// src/user/user.repository.test.js

const chai = require("chai");
const sinon = require("sinon");
const expect = chai.expect;
const faker = require("faker");
const { UserModel } = require("../database");
const UserRepository = require("./user.repository");

describe("UserRepository", function() {
  const stubValue = {
    id: faker.random.uuid(),
    name: faker.name.findName(),
    email: faker.internet.email(),
    createdAt: faker.date.past(),
    updatedAt: faker.date.past()
  };
   describe("getUser", function() {
    it("should retrieve a user with specific id", async function() {
      const stub = sinon.stub(UserModel, "findOne").returns(stubValue);
      const userRepository = new UserRepository();
      const user = await userRepository.getUser(stubValue.id);
      expect(stub.calledOnce).to.be.true;
      expect(user.id).to.equal(stubValue.id);
      expect(user.name).to.equal(stubValue.name);
      expect(user.email).to.equal(stubValue.email);
      expect(user.createdAt).to.equal(stubValue.createdAt);
      expect(user.updatedAt).to.equal(stubValue.updatedAt);
    });
  });
});

To test the getUser method, we also have to stub UserModel.findOne. We use expect(stub.calledOnce).to.be.true to assert that the stub is called exactly once. The other assertions check the value returned by the getUser method.

UserService class

// src/user/user.service.js

const UserRepository = require("./user.repository");
class UserService {
  constructor(userRepository) {
    this.userRepository = userRepository;
  }
  async create(name, email) {
    return this.userRepository.create(name, email);
  }
  getUser(id) {
    return this.userRepository.getUser(id);
  }
}
module.exports = UserService;

The UserService class also has two methods, create and getUser. The create method calls the repository’s create method, passing the name and email of a new user as arguments. The getUser method calls the repository’s getUser method.

Let’s test the userService methods below:

// src/user/user.service.test.js

const chai = require("chai");
const sinon = require("sinon");
const UserRepository = require("./user.repository");
const expect = chai.expect;
const faker = require("faker");
const UserService = require("./user.service");
describe("UserService", function() {
  describe("create", function() {
    it("should create a new user", async function() {
      const stubValue = {
        id: faker.random.uuid(),
        name: faker.name.findName(),
        email: faker.internet.email(),
        createdAt: faker.date.past(),
        updatedAt: faker.date.past()
      };
      const userRepo = new UserRepository();
      const stub = sinon.stub(userRepo, "create").returns(stubValue);
      const userService = new UserService(userRepo);
      const user = await userService.create(stubValue.name, stubValue.email);
      expect(stub.calledOnce).to.be.true;
      expect(user.id).to.equal(stubValue.id);
      expect(user.name).to.equal(stubValue.name);
      expect(user.email).to.equal(stubValue.email);
      expect(user.createdAt).to.equal(stubValue.createdAt);
      expect(user.updatedAt).to.equal(stubValue.updatedAt);
    });
  });
});

The code above is testing the UserService create method. We have created a stub for the repository create method. The code below will test the getUser service method:

const chai = require("chai");
const sinon = require("sinon");
const UserRepository = require("./user.repository");
const expect = chai.expect;
const faker = require("faker");
const UserService = require("./user.service");
describe("UserService", function() {
  describe("getUser", function() {
    it("should return a user that matches the provided id", async function() {
      const stubValue = {
        id: faker.random.uuid(),
        name: faker.name.findName(),
        email: faker.internet.email(),
        createdAt: faker.date.past(),
        updatedAt: faker.date.past()
      };
      const userRepo = new UserRepository();
      const stub = sinon.stub(userRepo, "getUser").returns(stubValue);
      const userService = new UserService(userRepo);
      const user = await userService.getUser(stubValue.id);
      expect(stub.calledOnce).to.be.true;
      expect(user.id).to.equal(stubValue.id);
      expect(user.name).to.equal(stubValue.name);
      expect(user.email).to.equal(stubValue.email);
      expect(user.createdAt).to.equal(stubValue.createdAt);
      expect(user.updatedAt).to.equal(stubValue.updatedAt);
    });
  });
});

Again we are stubbing the UserRepository getUser method. We also assert that the stub is called exactly once and then assert that the return value of the method is correct.

UserController class

// src/user/user.controller.js

class UserController {
  constructor(userService) {
    this.userService = userService;
  }
  async register(req, res, next) {
    const { name, email } = req.body;
    if (
      !name ||
      typeof name !== "string" ||
      (!email || typeof email !== "string")
    ) {
      return res.status(400).json({
        message: "Invalid Params"
      });
    }
    const user = await this.userService.create(name, email);
    return res.status(201).json({
      data: user
    });
  }
  async getUser(req, res) {
    const { id } = req.params;
    const user = await this.userService.getUser(id);
    return res.json({
      data: user
    });
  }
}
module.exports = UserController;

The UserController class has register and getUser methods as well. Each of these methods accepts the req and res objects as parameters.

// src/user/user.controller.test.js

describe("UserController", function() {
  describe("register", function() {
    let status, json, res, userController, userService;
    beforeEach(() => {
      status = sinon.stub();
      json = sinon.spy();
      res = { json, status };
      status.returns(res);
      const userRepo = sinon.spy();
      userService = new UserService(userRepo);
    });
    it("should not register a user when name param is not provided", async function() {
      const req = { body: { email: faker.internet.email() } };
      await new UserController().register(req, res);
      expect(status.calledOnce).to.be.true;
      expect(status.args[0][0]).to.equal(400);
      expect(json.calledOnce).to.be.true;
      expect(json.args[0][0].message).to.equal("Invalid Params");
    });
    it("should not register a user when name and email params are not provided", async function() {
      const req = { body: {} };
      await new UserController().register(req, res);
      expect(status.calledOnce).to.be.true;
      expect(status.args[0][0]).to.equal(400);
      expect(json.calledOnce).to.be.true;
      expect(json.args[0][0].message).to.equal("Invalid Params");
    });
    it("should not register a user when email param is not provided", async function() {
      const req = { body: { name: faker.name.findName() } };
      await new UserController().register(req, res);
      expect(status.calledOnce).to.be.true;
      expect(status.args[0][0]).to.equal(400);
      expect(json.calledOnce).to.be.true;
      expect(json.args[0][0].message).to.equal("Invalid Params");
    });
    it("should register a user when email and name params are provided", async function() {
      const req = {
        body: { name: faker.name.findName(), email: faker.internet.email() }
      };
      const stubValue = {
        id: faker.random.uuid(),
        name: faker.name.findName(),
        email: faker.internet.email(),
        createdAt: faker.date.past(),
        updatedAt: faker.date.past()
      };
      const stub = sinon.stub(userService, "create").returns(stubValue);
      userController = new UserController(userService);
      await userController.register(req, res);
      expect(stub.calledOnce).to.be.true;
      expect(status.calledOnce).to.be.true;
      expect(status.args[0][0]).to.equal(201);
      expect(json.calledOnce).to.be.true;
      expect(json.args[0][0].data).to.equal(stubValue);
    });
  });
});

In the first three it blocks, we are testing that a user will not be created when one or both of the required parameters (email and name) are not provided. Notice that we are stubbing the res.status and spying on res.json:

describe("UserController", function() {
  describe("getUser", function() {
    let req;
    let res;
    let userService;
    beforeEach(() => {
      req = { params: { id: faker.random.uuid() } };
      res = { json: function() {} };
      const userRepo = sinon.spy();
      userService = new UserService(userRepo);
    });
    it("should return a user that matches the id param", async function() {
      const stubValue = {
        id: req.params.id,
        name: faker.name.findName(),
        email: faker.internet.email(),
        createdAt: faker.date.past(),
        updatedAt: faker.date.past()
      };
      const mock = sinon.mock(res);
      mock
        .expects("json")
        .once()
        .withExactArgs({ data: stubValue });
      const stub = sinon.stub(userService, "getUser").returns(stubValue);
      userController = new UserController(userService);
      const user = await userController.getUser(req, res);
      expect(stub.calledOnce).to.be.true;
      mock.verify();
    });
  });
});

For the getUser test we mocked the json method. Notice that we also had to use a spy in place of the UserRepository while creating a new instance of the UserService.

Conclusion

Run the tests using the command below:

npm test

You should see the tests passing.

We have seen how we can use a combination of Mocha, Chai, and Sinon to create robust tests for a Node application. Be sure to check out their respective documentation to broaden your knowledge of these tools. Got a question or comment? Please drop them in the comment section below.

JavaScript and Node.js Testing Best Practices

Why this guide can take your testing skills to the next level

Originally published by Yoni Goldberg at https://github.com

This is a guide for JavaScript & Node.js reliability from A-Z. It summarizes and curates for you dozens of the best blog posts, books and tools the market has to offer

Advanced: Goes 10,000 miles beyond the basics

Hop on a journey that travels way beyond the basics into advanced topics like testing in production, mutation testing, property-based testing and many other strategic & professional tools. Should you read every word in this guide, your testing skills are likely to go way above the average.

Full-stack: front, backend, CI, anything

Start by understanding the ubiquitous testing practices that are the foundation for any application tier. Then, delve into your area of choice: frontend/UI, backend, CI or maybe all of them?

Table of contents

  • Section 0: The Golden Rule
    A single piece of advice that inspires all the others (1 special bullet)

  • Section 1: The Test Anatomy
    The foundation: structuring clean tests (12 bullets)

  • Section 2: Backend
    Writing backend and Microservices tests efficiently (8 bullets)

  • Section 3: Frontend, UI, E2E
    Writing tests for web UI including component and E2E tests (11 bullets)

  • Section 4: Measuring Tests Effectiveness
    Watching the watchman: measuring test quality (4 bullets)

  • Section 5: Continuous Integration
    Guidelines for CI in the JS world (9 bullets)

Section 0: The Golden Rule

0. The Golden Rule: Design for lean testing

Do: Testing code is not like production-code - design it to be dead-simple, short, abstraction-free, flat, delightful to work with, lean. One should look at a test and get the intent instantly.

Our minds are full of the main production code and we don’t have ‘headspace’ for additional complexity. Should we try to squeeze yet another challenging piece of code into our poor brains, it will slow the team down, which works against the very reason we do testing. Practically, this is where many teams just abandon testing.

The tests are an opportunity for something else: a friendly and smiley assistant, one that is delightful to work with and delivers great value for such a small investment. Science tells us we have two brain systems: system 1, used for effortless activities like driving a car on an empty road, and system 2, which is meant for complex and conscious operations like solving a math equation. Design your test for system 1: when looking at test code, it should feel as easy as modifying an HTML document and not like solving 2X(17 × 24).

This can be achieved by selectively cherry-picking techniques, tools and test targets that are cost-effective and provide great ROI. Test only as much as needed, strive to keep it nimble; sometimes it’s even worth dropping some tests and trading reliability for agility and simplicity.

Most of the advice below is derived from this principle.

Ready to start?

Section 1. The Test Anatomy

1.1 Include 3 parts in each test name

Do: A test report should tell whether the current application revision satisfies the requirements for the people who are not necessarily familiar with the code: the tester, the DevOps engineer who is deploying and the future you two years from now. This can be achieved best if the tests speak at the requirements level and include 3 parts:

(1) What is being tested? For example, the ProductsService.addNewProduct method

(2) Under what circumstances and scenario? For example, no price is passed to the method

(3) What is the expected result? For example, the new product is not approved

Otherwise: A deployment just failed, a test named “Add product” failed. Does this tell you what exactly is malfunctioning?

Code Examples

Doing It Right Example: A test name that constitutes 3 parts

//1. unit under test
describe('Products Service', function() {
  describe('Add new product', function() {
    //2. scenario and 3. expectation
    it('When no price is specified, then the product status is pending approval', ()=> {
      const newProduct = new ProductService().add(...);
      expect(newProduct.status).to.equal('pendingApproval');
    });
  });
});

1.2 Structure tests by the AAA pattern

Do: Structure your tests with 3 well-separated sections Arrange, Act & Assert (AAA). Following this structure guarantees that the reader spends no brain CPU on understanding the test plan:

1st A - Arrange: All the setup code to bring the system to the scenario the test aims to simulate. This might include instantiating the unit under test constructor, adding DB records, mocking/stubbing on objects and any other preparation code

2nd A - Act: Execute the unit under test. Usually 1 line of code

3rd A - Assert: Ensure that the received value satisfies the expectation. Usually 1 line of code

Otherwise: Not only do you spend long daily hours understanding the main code, but what should have been the simplest part of the day (testing) also stretches your brain

Code Examples

Doing It Right Example: A test structured with the AAA pattern

describe('Customer classifier', () => {
    test('When customer spent more than 500$, should be classified as premium', () => {
        //Arrange
        const customerToClassify = {spent:505, joined: new Date(), id:1}
        const DBStub = sinon.stub(dataAccess, "getCustomer")
            .returns({id:1, classification: 'regular'});

        //Act
        const receivedClassification = customerClassifier.classifyCustomer(customerToClassify);

        //Assert
        expect(receivedClassification).toMatch('premium');
    });
});

Anti Pattern Example: No separation, one bulk, harder to interpret

test('Should be classified as premium', () => {
        const customerToClassify = {spent:505, joined: new Date(), id:1}
        const DBStub = sinon.stub(dataAccess, "getCustomer")
            .returns({id:1, classification: 'regular'});
        const receivedClassification = customerClassifier.classifyCustomer(customerToClassify);
        expect(receivedClassification).toMatch('premium');
    });

1.3 Describe expectations in a product language: use BDD-style assertions

Do: Coding your tests in a declarative style allows the reader to grasp the intent instantly, without spending even a single brain-CPU cycle. When you write imperative code that is packed with conditional logic, the reader is thrown into an effortful mental mode. In that sense, code the expectation in a human-like, declarative BDD style, using expect or should and not custom code. If Chai & Jest don’t include the desired assertion and it’s highly repeatable, consider extending the Jest matchers (Jest) or writing a custom Chai plugin

Otherwise: The team will write fewer tests and decorate the annoying ones with .skip()

Code Examples

Anti Pattern Example: The reader must skim through not-so-short imperative code just to get the test story

test("When asking for an admin, ensure only ordered admins in results", () => {
    //assuming we've added here two admins "admin1", "admin2" and "user1"
    const allAdmins = getUsers({adminOnly:true});

    let admin1Found = false, admin2Found = false;

    allAdmins.forEach(aSingleUser => {
        if(aSingleUser === "user1"){
            assert.notEqual(aSingleUser, "user1", "A user was found and not admin");
        }
        if(aSingleUser==="admin1"){
            admin1Found = true;
        }
        if(aSingleUser==="admin2"){
            admin2Found = true;
        }
    });

    if(!admin1Found || !admin2Found ){
        throw new Error("Not all admins were returned");
    }
});

Doing It Right Example: Skimming through the following declarative test is a breeze

it("When asking for an admin, ensure only ordered admins in results", () => {
    //assuming we've added here two admins
    const allAdmins = getUsers({adminOnly:true});

    expect(allAdmins).to.include.ordered.members(["admin1" , "admin2"])
  .but.not.include.ordered.members(["user1"]);
});

1.4 Stick to black-box testing: Test only public methods

Do: Testing the internals brings huge overhead for almost nothing. If your code/API delivers the right results, should you really invest your next 3 hours in testing HOW it worked internally and then maintain these fragile tests? Whenever a public behavior is checked, the private implementation is also implicitly tested and your tests will break only if there is a certain problem (e.g. wrong output). This approach is also referred to as behavioral testing. On the other side, should you test the internals (the white-box approach), your focus shifts from planning the component outcome to nitty-gritty details, and your test might break because of minor code refactors although the results are fine. This dramatically increases the maintenance burden

Otherwise: Your test behaves like the boy who cried wolf: it shouts false-positive cries (e.g., a test fails because a private variable name was changed). Unsurprisingly, people will soon start to ignore the CI notifications until someday a real bug gets ignored…

Code Examples

Anti Pattern Example: A test case is testing the internals for no good reason

class ProductService{
  //this method is only used internally
  //Changing this name will make the tests fail
  calculateVATAdd(priceWithoutVAT){
    return {finalPrice: priceWithoutVAT * 1.2};
    //Changing the result format or key name above will make the tests fail
  }
  //public method
  getPrice(productId){
    const desiredProduct = DB.getProduct(productId);
    const finalPrice = this.calculateVATAdd(desiredProduct.price).finalPrice;
    return finalPrice;
  }
}

it("White-box test: When the internal method gets 0 VAT, it returns a 0 response", async () => {
    //There's no requirement to allow users to calculate the VAT, only to show the final price. Nevertheless we falsely insist here on testing the class internals
    expect(new ProductService().calculateVATAdd(0).finalPrice).to.equal(0);
});

1.5 Choose the right test doubles: Avoid mocks in favor of stubs and spies

Do: Test doubles are a necessary evil because they are coupled to the application internals, yet some provide an immense value. However, the various techniques were not born equal: some of them, spies and stubs, are focused on testing the requirements but as an inevitable side-effect they also slightly touch the internals. Mocks, on the contrary side, are focused on testing the internals — this brings huge overhead as explained in the bullet “Stick to black box testing”.

Before using test doubles, ask a very simple question: Do I use it to test functionality that appears, or could appear, in the requirements document? If no, it’s a smell of white-box testing.

For example, if you want to test that your app behaves reasonably when the payment service is down, you might stub the payment service and trigger some ‘No Response’ return to ensure that the unit under test returns the right value. This checks our application’s behavior/response/outcome under certain scenarios. You might also use a spy to assert that an email was sent when that service is down; this is again a behavioral check which is likely to appear in a requirements doc (“Send an email if payment couldn’t be saved”). On the flip side, if you mock the payment service and ensure that it was called with the right JavaScript types, then your test is focused on internal things that have nothing to do with the application functionality and are likely to change frequently

Otherwise: Any refactoring of code mandates searching for all the mocks in the code and updating accordingly. Tests become a burden rather than a helpful friend

Code Examples

Anti-pattern example: Mocks focus on the internals

it("When a valid product is about to be deleted, ensure data access DAL was called once, with the right product and right config", async () => {
    //Assume we already added a product
    const dataAccessMock = sinon.mock(DAL);
    //hmmm BAD: testing the internals is actually our main goal here, not just a side-effect
    dataAccessMock.expects("deleteProduct").once().withArgs(DBConfig, theProductWeJustAdded, true, false);
    new ProductService().deletePrice(theProductWeJustAdded);
    dataAccessMock.verify();
});

Doing It Right Example: spies are focused on testing the requirements, but as a side-effect they unavoidably touch the internals

it("When a valid product is about to be deleted, ensure an email is sent", async () => {
    //Assume we already added here a product
    const spy = sinon.spy(Emailer.prototype, "sendEmail");
    new ProductService().deletePrice(theProductWeJustAdded);
    //hmmm OK: we deal with internals? Yes, but as a side effect of testing the requirements (sending an email)
    expect(spy.calledOnce).to.be.true;
});

1.6 Don’t “foo”, use realistic input data

Do: Often production bugs are revealed under some very specific and surprising input: the more realistic the test input is, the greater the chances are to catch bugs early. Use dedicated libraries like Faker to generate pseudo-real data that resembles the variety and form of production data. For example, such libraries can generate realistic phone numbers, usernames, credit cards, company names, and even ‘lorem ipsum’ text. You may also create some tests (on top of unit tests, not instead of them) that randomize faker data to stretch your unit under test, or even import real data from your production environment. Want to take it to the next level? See the next bullet (property-based testing).

Otherwise: All your development testing will falsely seem green when you use synthetic inputs like “Foo”, but then production might turn red when a hacker passes in a nasty string like “@3e2ddsf . ##’ 1 fdsfds . fds432 AAAA”

Code Examples

Anti-Pattern Example: A test suite that passes due to non-realistic data

const addProduct = (name, price) =>{
  const productNameRegexNoSpace = /^\S*$/; //no white-space allowed

  if(!productNameRegexNoSpace.test(name))
    return false;//this path never reached due to dull input

    //some logic here
    return true;
};

test("Wrong: When adding new product with valid properties, get successful confirmation", async () => {
    //The string "Foo" which is used in all tests never triggers a false result
    const addProductResult = addProduct("Foo", 5);
    expect(addProductResult).toBe(true);
    //False positive: the operation succeeded because we never tried a long
    //product name that includes spaces
});

Doing It Right Example: Randomizing realistic input

it("Better: When adding new valid product, get successful confirmation", async () => {
    const addProductResult = addProduct(faker.commerce.productName(), faker.random.number());
    //Generated random input: {'Sleek Cotton Computer',  85481}
    expect(addProductResult).to.be.true;
    //Test failed, the random input triggered some path we never planned for.
    //We discovered a bug early!
});

1.7 Test many input combinations using Property-based testing

Do: Typically we choose a few input samples for each test. Even when the input format resembles real-world data (see bullet ‘Don’t foo’), we cover only a few input combinations (method('', true, 1), method("string", false, 0)). However, in production, an API that is called with 5 parameters can be invoked with thousands of different permutations, and one of them might bring our process down (see Fuzz Testing). What if you could write a single test that sends 1000 permutations of different inputs automatically and catches the inputs for which our code fails to return the right response? Property-based testing is a technique that does exactly that: by sending all the possible input combinations to your unit under test, it increases the serendipity of finding a bug. For example, given a method addNewProduct(id, name, isDiscount), the supporting libraries will call this method with many combinations of (number, string, boolean) like (1, “iPhone”, false), (2, “Galaxy”, true). You can run property-based testing using your favorite test runner (Mocha, Jest, etc.) with libraries like jsverify or testcheck (which has much better documentation). Update: Nicolas Dubien suggests in the comments below to check out fast-check, which seems to offer some additional features and to be actively maintained

Otherwise: Unconsciously, you choose the test inputs that cover only code paths that work well. Unfortunately, this decreases the efficiency of testing as a vehicle to expose bugs

Code Examples

Doing It Right Example: Testing many input permutations with “mocha-testcheck”

require('mocha-testcheck').install();
const {expect} = require('chai');
const faker = require('faker');

describe('Product service', () => {
  describe('Adding new', () => {
    //this will run 100 times with different random properties
    check.it('Add new product with random yet valid properties, always successful',
      gen.int, gen.string, (id, name) => {
        expect(addNewProduct(id, name).status).to.equal('approved');
      });
  })
});
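
For comparison, here is a rough sketch of the same property written with the fast-check library mentioned above (assuming the same addNewProduct under test):

const fc = require('fast-check');
const {expect} = require('chai');

describe('Product service', () => {
  describe('Adding new', () => {
    it('Add new product with random yet valid properties, always successful', () => {
      // fast-check generates many (id, name) pairs and runs the property for each
      fc.assert(
        fc.property(fc.integer(), fc.string(), (id, name) => {
          expect(addNewProduct(id, name).status).to.equal('approved');
        })
      );
    });
  });
});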

1.8 If needed, use only short & inline snapshots

Do: When there is a need for snapshot testing, use only short and focused snapshots (i.e. 3-7 lines) that are included as part of the test (Inline Snapshot) and not within external files. Keeping this guideline will ensure your tests remain self-explanatory and less fragile.

On the other hand, ‘classic snapshot’ tutorials and tools encourage storing big files (e.g. component rendering markup, API JSON result) on some external medium, and comparing the received result with the saved version on each test run. This, for example, can implicitly couple our test to 1000 lines with 3000 data values that the test writer never read or reasoned about. Why is this wrong? By doing so, there are 1000 reasons for your test to fail: it’s enough for a single line to change for the snapshot to become invalid, and this is likely to happen a lot. How frequently? For every space, comment or minor CSS/HTML change. Not only that, the test name wouldn’t give a clue about the failure, as it just checks that 1000 lines didn’t change; it also encourages the test writer to accept as the desired baseline a long document he couldn’t inspect and verify. All of these are symptoms of an obscure and eager test that is not focused and aims to achieve too much

It’s worth noting that there are a few cases where long & external snapshots are acceptable: when asserting on schema and not data (extracting out values and focusing on fields), or when the received document rarely changes

Otherwise: A UI test fails. The code seems right, the screen renders perfect pixels; what happened? Your snapshot testing just found a difference between the original document and the currently received one: a single space character was added to the markdown...

Code Examples

Anti-Pattern Example: Coupling our test to unseen 2000 lines of code

it('TestJavaScript.com is rendered correctly', () => {

//Arrange

//Act
const receivedPage = renderer
  .create(<DisplayPage page="http://www.testjavascript.com">Test JavaScript</DisplayPage>)
  .toJSON();

//Assert
expect(receivedPage).toMatchSnapshot();
//We now implicitly maintain a 2000 lines long document
//every additional line break or comment - will break this test

});

Doing It Right Example: Expectations are visible and focused

it('When visiting TestJavaScript.com home page, a menu is displayed', () => {
//Arrange

//Act
const receivedPage = renderer
  .create(<DisplayPage page="http://www.testjavascript.com">Test JavaScript</DisplayPage>)
  .toJSON();

//Assert
const menu = receivedPage.content.menu;
expect(menu).toMatchInlineSnapshot(`
<ul>
  <li>Home</li>
  <li>About</li>
  <li>Contact</li>
</ul>
`);
});

1.9 Avoid global test fixtures and seeds, add data per-test

Do: Going by the golden rule (bullet 0), each test should add and act on its own set of DB rows to prevent coupling and to make it easy to reason about the test flow. In reality, this is often violated by testers who seed the DB with data before running the tests (also known as a ‘test fixture’) for the sake of performance improvement. While performance is indeed a valid concern, it can be mitigated (see the “Component testing” bullet); however, test complexity is a much more painful sorrow that should govern other considerations most of the time. Practically, make each test case explicitly add the DB records it needs and act only on those records. If performance becomes a critical concern, a balanced compromise might come in the form of seeding only the suite of tests that are not mutating data (e.g. queries)

Otherwise: A few tests fail, a deployment is aborted, and our team is going to spend precious time now. Do we have a bug? Let’s investigate. Oh no, it seems that two tests were mutating the same seed data

Code Examples

Anti Pattern Example: tests are not independent and rely on some global hook to feed global DB data

before(async () => {
  //adding sites and admins data to our DB. Where is the data? Outside. In some external json or migration framework
  await DB.AddSeedDataFromJson('seed.json');
});
it("When updating site name, get successful confirmation", async () => {
  //I know that site name "portal" exists - I saw it in the seed files
  const siteToUpdate = await SiteService.getSiteByName("Portal");
  const updateNameResult = await SiteService.changeName(siteToUpdate, "newName");
  expect(updateNameResult).to.be.true;
});
it("When querying by site name, get the right site", async () => {
  //I know that site name "portal" exists - I saw it in the seed files
  const siteToCheck = await SiteService.getSiteByName("Portal");
  expect(siteToCheck.name).to.be.equal("Portal"); //Failure! The previous test changed the name :[
});

Doing It Right Example: We can stay within the test, each test acts on its own set of data

it("When updating site name, get successful confirmation", async () => {
  //test is adding a fresh new records and acting on the records only
  const siteUnderTest = await SiteService.addSite({
    name: "siteForUpdateTest"
  });
  const updateNameResult = await SiteService.changeName(siteUnderTest, "newName");
  expect(updateNameResult).to.be.true;
});

1.10 Don’t catch errors, expect them

Do: When trying to assert that some input triggers an error, it might look right to use try-catch-finally and asserts that the catch clause was entered. The result is an awkward and verbose test case (example below) that hides the simple test intent and the result expectations

A more elegant alternative is using the one-line dedicated Chai assertion: expect(method).to.throw (or in Jest: expect(method).toThrow()). It’s absolutely mandatory to also ensure the exception contains a property that tells the error type, otherwise, given just a generic error, the application won’t be able to do much more than show a disappointing message to the user

Otherwise: It will be challenging to infer from the test reports (e.g. CI reports) what went wrong

Code Examples

Anti-pattern Example: A long test case that tries to assert the existence of error with try-catch

it("When no product name, it throws error 400", async() => {
  let errorWeExceptFor = null;
  try {
    const result = await addNewProduct({name:'nest'});
  }
  catch (error) {
    expect(error.code).to.equal('InvalidInput');
    errorWeExceptFor = error;
  }
  expect(errorWeExceptFor).not.to.be.null;
  //if this assertion fails, the test results/reports will only show
  //that some value is null, there won't be a word about a missing exception
});

Doing It Right Example: A human-readable expectation that could be understood easily, maybe even by QA or technical PM

it("When no product name, it throws error 400", async() => {
  await expect(addNewProduct()).to.eventually.throw(AppError).with.property('code', "InvalidInput");
});

1.11 Tag your tests

Do: Different tests must run on different scenarios: quick smoke and IO-less tests should run when a developer saves or commits a file, full end-to-end tests usually run when a new pull request is submitted, etc. This can be achieved by tagging tests with keywords like #cold #api #sanity, so you can grep with your testing harness and invoke the desired subset. For example, this is how you would invoke only the sanity test group with Mocha: mocha --grep 'sanity'

Otherwise: Running all the tests, including tests that perform dozens of DB queries, any time a developer makes a small change can be extremely slow and keeps developers away from running tests

Code Examples

Doing It Right Example: Tagging tests as '#cold-test' allows the test runner to execute only fast tests (Cold === quick tests that are doing no IO and can be executed frequently even as the developer is typing)

//this test is fast (no DB) and we're tagging it correspondingly
//now the user/CI can run it frequently
describe('Order service', function() {
  describe('Add new order #cold-test #sanity', function() {
    it('Scenario - no currency was supplied. Expectation - Use the default currency #sanity', function() {
      //code logic here
    });
  });
});

1.12 Other generic good testing hygiene

Do: This post is focused on testing advice that is related to, or at least can be exemplified with, Node JS. This bullet, however, groups a few non-Node-related tips that are well-known

Learn and practice TDD principles - they are extremely valuable for many, but don't get intimidated if they don't fit your style; you're not the only one. Consider writing the tests before the code in a red-green-refactor style; ensure each test checks exactly one thing; when you find a bug, write a test that will detect it in the future before fixing it; let each test fail at least once before turning green; start a module by writing quick and simplistic code that satisfies the test, then refactor gradually and take it to a production-grade level; avoid any dependency on the environment (paths, OS, etc.)

Otherwise: You‘ll miss pearls of wisdom that were collected for decades

Section 2: Backend Testing

2.1 Enrich your testing portfolio: Look beyond unit tests and the pyramid

Do: The testing pyramid, though more than 10 years old, is a great and relevant model that suggests three testing types and influences most developers' testing strategy. At the same time, more than a handful of shiny new testing techniques emerged and are hiding in the shadows of the testing pyramid. Given all the dramatic changes that we've seen in the recent 10 years (Microservices, cloud, serverless), is it even possible that one quite-old model will suit all types of applications? Shouldn't the testing world consider welcoming new testing techniques?

Don't get me wrong, in 2019 the testing pyramid, TDD and unit tests are still a powerful technique and are probably the best match for many applications. But like any other model, despite its usefulness, it must be wrong sometimes. For example, consider an IoT application that ingests many events into a message bus like Kafka/RabbitMQ, which then flow into some data warehouse and are eventually queried by some analytics UI. Should we really spend 50% of our testing budget on writing unit tests for an application that is integration-centric and has almost no logic? As the diversity of application types increases (bots, crypto, Alexa skills), the greater the chances of finding scenarios where the testing pyramid is not the best match.

It's time to enrich your testing portfolio and become familiar with more testing types (the next bullets suggest a few ideas). Mind models like the testing pyramid, but also match testing types to the real-world problems you're facing ('Hey, our API is broken, let's write consumer-driven contract testing!'). Diversify your tests like an investor who builds a portfolio based on risk analysis: assess where problems might arise and match some prevention measures to mitigate those potential risks

A word of caution: the TDD argument in the software world takes a typical false-dichotomy face, some preach to use it everywhere, others think it’s the devil. Everyone who speaks in absolutes is wrong :]

Otherwise: You're going to miss some tools with amazing ROI; some, like fuzzing, linting, and mutation testing, can provide value in 10 minutes

Code Examples

Doing It Right Example: Cindy Sridharan suggests a rich testing portfolio in her amazing post ‘Testing Microservices — the sane way’

2.2 Component testing might be your best affair

Do: Each unit test covers a tiny portion of the application, and it's expensive to cover the whole, whereas end-to-end testing easily covers a lot of ground but is flaky and slower. Why not apply a balanced approach and write tests that are bigger than unit tests but smaller than end-to-end testing? Component testing is the unsung song of the testing world: it provides the best of both worlds, reasonable performance and the possibility to apply TDD patterns, plus realistic and great coverage.

Component tests focus on the Microservice 'unit'. They work against the API and don't mock anything that belongs to the Microservice itself (e.g. a real DB, or at least an in-memory version of it), but stub anything that is external, like calls to other Microservices. By doing so, we test what we deploy, approach the app from outwards to inwards and gain great confidence in a reasonable amount of time.

Otherwise: You may spend long days on writing unit tests to find out that you got only 20% system coverage

Code Examples

Doing It Right Example: Supertest allows approaching an Express API in-process (fast and covers many layers)
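
The original code example isn't reproduced in this copy; the following is a minimal sketch of the idea (the app and its route are illustrative assumptions):

const request = require('supertest');
const express = require('express');
const { expect } = require('chai');

// a tiny in-process Express app standing in for the real Microservice (illustrative)
const app = express();
app.get('/api/orders', (req, res) => res.json([{ id: 1, status: 'approved' }]));

describe('Orders API (component test)', () => {
  it('When fetching orders, should get 200 and a JSON list', async () => {
    // supertest invokes the app in-process: no listening port, no network
    const response = await request(app)
      .get('/api/orders')
      .expect('Content-Type', /json/)
      .expect(200);
    expect(response.body).to.have.lengthOf(1);
  });
});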

2.3 Ensure new releases don't break the API using contract tests

Do: So your Microservice has multiple clients, and you run multiple versions of the service for compatibility reasons (keeping everyone happy). Then you change some field and 'boom!', some important client who relies on this field is angry. This is the Catch-22 of the integration world: it's very challenging for the server side to consider all the multiple client expectations; on the other hand, the clients can't perform any testing because the server controls the release dates. Consumer-driven contracts and the framework PACT were born to formalize this process with a very disruptive approach: it's not the server that defines the test plan for itself, rather the client defines the tests of the… server! PACT can record the client expectations and put them in a shared location, a 'broker', so the server can pull the expectations and run them on every build, using the PACT library to detect broken contracts, i.e. a client expectation that is not met. By doing so, all the server-client API mismatches are caught early during build/CI and might save you a great deal of frustration

Otherwise: The alternatives are exhausting manual testing or deployment fear

Code Examples

Doing It Right Example:
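
The original example is missing from this copy; here is a minimal consumer-side sketch with pact-js (the service names, port and route are illustrative assumptions):

const { Pact } = require('@pact-foundation/pact');
const axios = require('axios');
const { expect } = require('chai');

// the mock provider records the consumer's expectations into a pact file
const provider = new Pact({ consumer: 'SiteUI', provider: 'SiteService', port: 8989 });

describe('Sites API contract', () => {
  before(() => provider.setup());
  after(() => provider.finalize()); // writes the pact that the broker/server build will verify

  it('When requesting a site by name, the agreed schema is returned', async () => {
    await provider.addInteraction({
      state: 'a site named Portal exists',
      uponReceiving: 'a request for the site Portal',
      withRequest: { method: 'GET', path: '/sites/Portal' },
      willRespondWith: { status: 200, body: { name: 'Portal' } },
    });

    const response = await axios.get('http://localhost:8989/sites/Portal');
    expect(response.data.name).to.equal('Portal');

    await provider.verify(); // fails if the recorded expectation was not met
  });
});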

2.4 Test your middlewares in isolation

Do: Many avoid middleware testing because it represents a small portion of the system and requires a live Express server. Both reasons are wrong: middlewares are small but affect all or most of the requests, and they can be tested easily as pure functions that get {req,res} JS objects. To test a middleware function one should just invoke it and spy (using Sinon, for example) on the interaction with the {req,res} objects to ensure the function performed the right action. The library node-mocks-http takes it even further and fabricates the {req,res} objects along with spying on their behavior. For example, it can assert whether the http status that was set on the res object matches the expectation (see example below)

Otherwise: A bug in Express middleware === a bug in all or most requests

Code Examples

Doing It Right Example: Testing middleware in isolation without issuing network calls and waking up the entire Express machine

//the middleware we want to test
const unitUnderTest = require('./middleware')
const httpMocks = require('node-mocks-http');
//Jest syntax, equivalent to describe() & it() in Mocha
test('A request without authentication header, should return http status 403', () => {
  const request = httpMocks.createRequest({
    method: 'GET',
    url: '/user/42',
    headers: {
      authentication: ''
    }
  });
  const response = httpMocks.createResponse();
  unitUnderTest(request, response);
  expect(response.statusCode).toBe(403);
});

2.5 Measure and refactor using static analysis tools

Do: Using static analysis tools helps by giving objective ways to improve code quality and keep your code maintainable. You can add static analysis tools to your CI build to abort when it finds code smells. Its main selling points over plain linting are the ability to inspect quality in the context of multiple files (e.g. detect duplications), perform advanced analysis (e.g. code complexity) and follow the history and progress of code issues. Two examples of tools you can use are Sonarqube (2,600+ stars) and Code Climate (1,500+ stars)

Credit: Keith Holliday

Otherwise: With poor code quality, bugs and performance will always be an issue that no shiny new library or state of the art features can fix

Code Examples

Doing It Right Example: Code Climate, a commercial tool that can identify complex methods:
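
The Code Climate report is a screenshot that isn't reproduced here. As a complementary sketch of the setup side for the other tool mentioned, a minimal Sonarqube scanner configuration could look like this (all values are placeholders):

# sonar-project.properties - a minimal sketch, values are placeholders
sonar.projectKey=my-org:my-app
sonar.sources=src
sonar.tests=test
# feed the scanner with the coverage produced by the test run
sonar.javascript.lcov.reportPaths=coverage/lcov.info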

2.6 Check your readiness for Node-related chaos

Do: Weirdly, most software testing is about logic & data only, but some of the worst things that happen (and are really hard to mitigate) are infrastructural issues. For example, did you ever test what happens when your process memory is overloaded, or when the server/process dies, or does your monitoring system realize when the API becomes 50% slower? To test and mitigate these types of bad things, chaos engineering was born at Netflix. It aims to provide awareness, frameworks and tools for testing our app's resiliency against chaotic issues. For example, one of its famous tools, the chaos monkey, randomly kills servers to ensure that our service can still serve users without relying on a single server (there is also a Kubernetes version, kube-monkey, that kills pods). All these tools work on the hosting/platform level, but what if you wish to test and generate pure Node chaos, like checking how your Node process copes with uncaught errors, unhandled promise rejections, v8 memory overloaded beyond the max allowed 1.7GB, or whether your UX stays satisfactory when the event loop gets blocked often? To address this, I've written node-chaos (alpha), which provides all sorts of Node-related chaotic acts

Otherwise: No escape here, Murphy’s law will hit your production without mercy

Code Examples

Doing It Right Example: Node-chaos can generate all sorts of Node.js pranks so you can test how resilient your app is to chaos

Section 3: Frontend Testing

3.1. Separate UI from functionality

Do: When focusing on testing component logic, UI details become noise that should be extracted, so your tests can focus on pure data. Practically, extract the desired data from the markup in an abstract way that is not too coupled to the graphic implementation, assert only on pure data (vs HTML/CSS graphic details), and disable animations that slow things down. You might get tempted to avoid rendering and test only the back part of the UI (e.g. services, actions, store), but this will result in fictional tests that don't resemble reality and won't reveal cases where the right data doesn't even arrive at the UI

Otherwise: The pure calculated data of your test might be ready in 10ms, but then the whole test will last 500ms (100 tests = 1 min) due to some fancy and irrelevant animation

Code Examples

Doing It Right Example: Separating out the UI details

test('When users-list is flagged to show only VIP, should display only VIP members', () => {
  // Arrange
  const allUsers = [
    { id: 1, name: 'Yoni Goldberg', vip: false },
    { id: 2, name: 'John Doe', vip: true }
  ];

  // Act
  const { getAllByTestId } = render(<UsersList users={allUsers} showOnlyVIP={true}/>);

  // Assert - Extract the data from the UI first
  const allRenderedUsers = getAllByTestId('user').map(uiElement => uiElement.textContent);
  const allRealVIPUsers = allUsers.filter((user) => user.vip).map((user) => user.name);
  expect(allRenderedUsers).toEqual(allRealVIPUsers); //compare data with data, no UI here
});

Anti Pattern Example: Assertions mix UI details and data

test('When flagging to show only VIP, should display only VIP members', () => {
  // Arrange
  const allUsers = [
   {id: 1, name: 'Yoni Goldberg', vip: false },
   {id: 2, name: 'John Doe', vip: true }
  ];

  // Act
  const { getAllByTestId } = render(<UsersList users={allUsers} showOnlyVIP={true}/>);

  // Assert - Mix UI & data in assertion
  expect(getAllByTestId('user')).toEqual('[<li data-testid="user">John Doe</li>]');
});

3.2 Query HTML elements based on attributes that are unlikely to change

Do: Query HTML elements based on attributes that are likely to survive graphic changes (e.g. form labels), unlike CSS selectors. If the designated element doesn't have such attributes, create a dedicated test attribute like 'test-id-submit-button'. Going this route not only ensures that your functional/logic tests never break because of look & feel changes, but it also becomes clear to the entire team that this element and attribute are utilized by tests and shouldn't get removed

Otherwise: You want to test the login functionality that spans many components, logic and services, everything is set up perfectly - stubs, spies, Ajax calls are isolated. All seems perfect. Then the test fails because the designer changed the div CSS class from 'thick-border' to 'thin-border'

Code Examples

Doing It Right Example: Querying an element using a dedicated attribute for testing

// the markup code (part of React component)
<h3>
  <Badge pill className="fixed_badge" variant="dark">
    <span data-testid="errorsLabel">{value}</span> {/* note the data-testid attribute */}
  </Badge>
</h3>
// this example is using react-testing-library
test('Whenever no data is passed to metric, show 0 as default', () => {
  // Arrange
  const metricValue = undefined;

  // Act
  const { getByTestId } = render(<DashboardMetric value={metricValue}/>);

  // Assert
  expect(getByTestId('errorsLabel').textContent).toBe("0");
});

Anti-Pattern Example: Relying on CSS attributes

<!-- the markup code (part of React component) -->
<span id="metric" className="d-flex-column">{value}</span> <!-- what if the designer changes the class? -->
// this example is using enzyme
test('Whenever no data is passed, error metric shows zero', () => {
    // ...

    expect(wrapper.find("[className='d-flex-column']").text()).toBe("0");
});

3.3 Whenever possible, test with a realistic and fully rendered component

Do: Whenever reasonably sized, test your component from outside like your users do: fully render the UI, act on it and assert that the rendered UI behaves as expected. Avoid all sorts of mocking, partial and shallow rendering - this approach might result in untrapped bugs due to lack of detail and makes maintenance harder as the tests mess with the internals (see bullet 'Favour blackbox testing'). If one of the child components is significantly slowing things down (e.g. animation) or complicating the setup - consider explicitly replacing it with a fake

With all that said, a word of caution is in order: this technique works for small/medium components that pack a reasonable number of child components. Fully rendering a component with too many children will make it hard to reason about test failures (root cause analysis) and might get too slow. In such cases, write only a few tests against that fat parent component and more tests against its children

Otherwise: When poking into a component's internals by invoking its private methods and checking the inner state, you would have to refactor all tests when refactoring the component's implementation. Do you really have the capacity for this level of maintenance?

Code Examples

Doing It Right Example: Working realistically with a fully rendered component

class Calendar extends React.Component {
  static defaultProps = {showFilters: false}

  render() {
    return (
      <div>
        A filters panel with a button to hide/show filters
        <FiltersPanel showFilter={this.props.showFilters} title='Choose Filters'/>
      </div>
    )
  }
}

//Examples use React & Enzyme
test('Realistic approach: When clicked to show filters, filters are displayed', () => {
    // Arrange
    const wrapper = mount(<Calendar showFilters={false} />)

    // Act
    wrapper.find('button').simulate('click');

    // Assert
    expect(wrapper.text()).toContain('Choose Filters');
    // This is how the user will approach this element: by text
})

Anti-Pattern Example: Mocking the reality with shallow rendering

test('Shallow/mocked approach: When clicked to show filters, filters are displayed', () => {
    // Arrange
    const wrapper = shallow(<Calendar showFilters={false} title='Choose Filter'/>)

    // Act
    wrapper.find('FiltersPanel').instance().showFilters();
    // Tap into the internals, bypass the UI and invoke a method. White-box approach

    // Assert
    expect(wrapper.find('FiltersPanel').props()).toEqual({title: 'Choose Filter'});
    // what if we change the prop name or don't pass anything relevant?
})

3.4 Don't sleep, use frameworks' built-in support for async events. Also try to speed things up

Do: In many cases, the unit under test's completion time is just unknown (e.g. an animation suspends element appearance) - in that case, avoid sleeping (e.g. setTimeout) and prefer the more deterministic methods that most platforms provide. Some libraries allow awaiting operations (e.g. Cypress cy.request('url')), others provide an API for waiting, like the @testing-library/dom method wait(expect(element)). Sometimes a more elegant way is to stub the slow resource, like an API for example; then, once the response moment becomes deterministic, the component can be explicitly re-rendered. When depending upon some external component that sleeps, it might prove useful to hurry up the clock (e.g. with fake timers). Sleeping is a pattern to avoid because it forces your test to be slow or risky (when waiting for too short a period). Whenever sleeping and polling are inevitable and there's no support from the testing framework, some npm libraries like wait-for-expect can help with a semi-deterministic solution

Otherwise: When sleeping for a long time, tests will be an order of magnitude slower. When trying to sleep for small numbers, tests will fail when the unit under test didn't respond in a timely fashion. So it boils down to a trade-off between flakiness and bad performance

Code Examples

Doing It Right Example: E2E API that resolves only when the async operation is done (Cypress)

// using Cypress
cy.get('#show-products').click() // navigate
cy.wait('@products') // wait for the aliased route to respond (assumes an earlier alias, e.g. cy.route('/api/products').as('products'))
// this line will get executed only when the route is ready

Doing It Right Example: Testing library that waits for DOM elements

// @testing-library/dom
test('movie title appears', async () => {
    // element is initially not present...

    // wait for appearance
    await wait(() => {
        expect(getByText('the lion king')).toBeInTheDocument()
    })

    // wait for appearance and return the element
    const movie = await waitForElement(() => getByText('the lion king'))
})

Anti-Pattern Example: custom sleep code

test('movie title appears', async () => {
    // element is initially not present...

    // custom wait logic (caution: simplistic, no timeout)
    const interval = setInterval(() => {
        const found = getByText('the lion king');
        if(found){
            clearInterval(interval);
            expect(getByText('the lion king')).toBeInTheDocument();
        }
       
    }, 100);

    // wait for appearance and return the element
    const movie = await waitForElement(() => getByText('the lion king'))
})

3.5. Watch how the content is served over the network

Do: Apply some active monitoring that ensures page load under a real network is optimized - this includes any UX concern like slow page load or an un-minified bundle. The inspection tools market is not short of options: basic tools like Pingdom, AWS CloudWatch and GCP StackDriver can be easily configured to watch whether the server is alive and responds under a reasonable SLA. This only scratches the surface of what might go wrong, hence it's preferable to opt for tools that specialize in frontend (e.g. Lighthouse, PageSpeed) and perform richer analysis. The focus should be on symptoms, metrics that directly affect the UX, like page load time, meaningful paint, and time until the page gets interactive (TTI). On top of that, one may also watch for technical causes like ensuring the content is compressed, time to first byte, optimizing images, ensuring a reasonable DOM size, SSL and many others. It's advisable to have these rich monitors both during development, as part of the CI, and most importantly - 24x7 over the production servers/CDN

Otherwise: It must be disappointing to realize that after such great care for crafting a UI, 100% functional tests passing and sophisticated bundling - the UX is horrible and slow due to CDN misconfiguration

Code Examples

Doing It Right Example: Lighthouse page load inspection report
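
The report is a screenshot not reproduced here; generating one is a one-liner with the Lighthouse CLI (the URL and output path are illustrative):

# audit a page and save an HTML report with page-load and UX metrics
npx lighthouse https://mysite.com/home --output html --output-path ./lighthouse-report.html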

3.6 Stub flaky and slow resources like backend APIs

Do: When coding your mainstream tests (not E2E tests), avoid involving any resource that is beyond your responsibility and control, like a backend API, and use stubs instead (i.e. test doubles). Practically, instead of real network calls to APIs, use some test double library (like Sinon (https://sinonjs.org/), testdouble, etc.) for stubbing the API response. The main benefit is preventing flakiness - testing or staging APIs by definition are not highly stable and will fail your tests from time to time although YOUR component behaves just fine (the production env was not meant for testing and it usually throttles requests). Doing this will allow simulating various API behaviors that should drive your component's behavior, such as when no data is found or when the API throws an error. Last but not least, network calls will greatly slow down the tests

Otherwise: The average test runs for no longer than a few ms, while a typical API call lasts 100ms+, which makes each test ~20x slower

Code Examples

Doing It Right Example: Stubbing or intercepting API calls

// unit under test
import React, { useState, useEffect } from 'react';
import axios from 'axios';

export default function ProductsList() {
    const [products, setProducts] = useState(false);

    const fetchProducts = async () => {
      const { data: products } = await axios.get('/api/products');
      setProducts(products);
    };

    useEffect(() => {
      fetchProducts();
    }, []);

    return products ? <div>{products}</div> : <div data-testid='no-products-message'>No products</div>;
}

// test (assumes nock and @testing-library/react)
test('When no products exist, show the appropriate message', () => {
    // Arrange - intercept the HTTP call and reply with 404
    // (the host is an assumption; in jsdom, axios resolves relative URLs against it)
    nock('http://localhost')
        .get('/api/products')
        .reply(404);

    // Act
    const { getByTestId } = render(<ProductsList/>);

    // Assert
    expect(getByTestId('no-products-message')).toBeTruthy();
});

3.7 Have very few end-to-end tests that span the whole system

Do: Although E2E (end-to-end) usually means UI-only testing with a real browser (see bullet 3.6), for others it means tests that stretch the entire system, including the real backend. The latter type of test is highly valuable as it covers integration bugs between frontend and backend that might happen due to a wrong understanding of the exchange schema. It is also an efficient method to discover backend-to-backend integration issues (e.g. Microservice A sends the wrong message to Microservice B) and even to detect deployment failures - there are no backend frameworks for E2E testing that are as friendly and mature as UI frameworks like Cypress and Puppeteer. The downside of such tests is the high cost of configuring an environment with so many components, and mostly their brittleness - given 50 microservices, even if one fails then the entire E2E just failed. For that reason, we should use this technique sparingly and probably have 1-10 of those and no more. That said, even a small number of E2E tests are likely to catch the type of issues they are targeted for - deployment & integration faults. It's advisable to run them over a production-like staging environment

Otherwise: The UI team might invest much in testing its functionality only to realize very late that the backend's returned payload (the data schema the UI has to work with) is very different than expected

3.8 Speed-up E2E tests by reusing login credentials

Do: In E2E tests that involve a real backend and rely on a valid user token for API calls, it doesn't pay off to isolate the tests to the level where a user is created and logged in for every request. Instead, log in only once before the tests' execution starts (i.e. a before-all hook), save the token in some local storage and reuse it across requests. This seems to violate one of the core testing principles - keep the test autonomous without resource coupling. While this is a valid worry, in E2E tests performance is a key concern and creating 1-3 API requests before starting each individual test might lead to horrible execution times. Reusing credentials doesn't mean the tests have to act on the same user records - if relying on user records (e.g. testing a user's payments history) then make sure to generate those records as part of the test and avoid sharing their existence with other tests. Also remember that the backend can be faked - if your tests are focused on the frontend it might be better to isolate it and stub the backend API (see bullet 3.6).

Otherwise: Given 200 test cases and assuming login takes 100ms, that's 20 seconds spent just logging in again and again

Code Examples

Doing It Right Example: Logging-in before-all and not before-each

let authenticationToken;

// happens before ALL tests run
before(() => {
  cy.request('POST', 'http://localhost:3000/login', {
    username: Cypress.env('username'),
    password: Cypress.env('password'),
  })
  .its('body')
  .then((responseFromLogin) => {
    authenticationToken = responseFromLogin.token;
  })
})

// happens before EACH test
beforeEach(() => {
  cy.visit('/home', {
    onBeforeLoad (win) {
      win.localStorage.setItem('token', JSON.stringify(authenticationToken))
    },
  })
})

3.9 Have one E2E smoke test that just travels across the site map

Do: For production monitoring and development-time sanity checks, run a single E2E test that visits all/most of the site pages and ensures none of them breaks. This type of test brings a great return on investment as it's very easy to write and maintain, yet it can detect any kind of failure, including functional, network and deployment issues. Other styles of smoke and sanity checking are not as reliable and exhaustive - some ops teams just ping the home page (production), while some developers run many integration tests which don't discover packaging and browser issues. It goes without saying that the smoke test doesn't replace functional tests; it just aims to serve as a quick smoke detector

Otherwise: Everything might seem perfect, all tests pass, production health-check is also positive but the Payment component had some packaging issue and only the /Payment route is not rendering

Code Examples

Doing It Right Example: Smoke travelling across all pages

it('When doing smoke testing over all pages, should load them all successfully', () => {
    // exemplified using Cypress but can be implemented easily
    // using any E2E suite
    cy.visit('https://mysite.com/home');
    cy.contains('Home');
    cy.visit('https://mysite.com/Login');
    cy.contains('Login');
    cy.visit('https://mysite.com/About');
    cy.contains('About');
  })

3.10 Expose the tests as a live collaborative document

Do: Besides increasing app reliability, tests bring another attractive opportunity to the table - serving as live app documentation. Since tests inherently speak a less technical, product/UX language, using the right tools they can serve as a communication artifact that greatly aligns all the peers - developers and their customers. For example, some frameworks allow expressing the flow and expectations (i.e. the test plan) using a human-readable language, so any stakeholder, including product managers, can read, approve and collaborate on the tests, which just became the live requirements document. This technique is also referred to as 'acceptance tests', as it allows the customer to define their acceptance criteria in plain language. This is BDD (behavior-driven testing) at its purest form. One of the popular frameworks that enables this is Cucumber, which has a JavaScript flavor; see the example below. Another similar yet different opportunity, Storybook, allows exposing UI components as a graphic catalog where one can walk through the various states of each component (e.g. render a grid w/o filters, render that grid with multiple rows or with none, etc.), see how it looks, and how to trigger that state - this can also appeal to product folks but mostly serves as live doc for developers who consume those components.

Otherwise: After investing top resources on testing, it's just a pity not to leverage this investment and win great value

Code Examples

Doing It Right Example: Describing tests in human language using cucumber-js

// this is how one can describe tests using cucumber: plain language that allows anyone to understand and collaborate

Feature: Twitter new tweet

  I want to tweet something in Twitter

  @focus
  Scenario: Tweeting from the home page
    Given I open Twitter home
    When I click on "New tweet" button
    And I type "Hello followers!" in the textbox
    And I click on "Submit" button
    Then I see message "Tweet saved"
   

Doing It Right Example: Visualizing our components, their various states and inputs using Storybook
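
The catalog itself is graphical, but for a sense of the code side, here is a minimal story sketch in Storybook's Component Story Format (the UsersList component is borrowed from the earlier example; the file name is an assumption):

// UsersList.stories.js - each named export becomes a browsable state in Storybook
import React from 'react';
import UsersList from './UsersList';

export default { title: 'UsersList' };

export const withVipOnly = () => (
  <UsersList users={[{ id: 1, name: 'John Doe', vip: true }]} showOnlyVIP={true} />
);

export const empty = () => <UsersList users={[]} />;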

3.11 Detect visual issues with automated tools

Do: Set up automated tools to capture UI screenshots when changes are introduced and detect visual issues like content overlapping or breaking. This ensures that not only is the right data prepared, but also that the user can conveniently see it. This technique is not widely adopted; our testing mindset leans toward functional tests, but it's the visuals that the user experiences, and with so many device types it's very easy to overlook some nasty UI bug. Some free tools can provide the basics - generate and save screenshots for the inspection of human eyes. While this approach might be sufficient for small apps, it's as flawed as any other manual testing that demands human labor anytime something changes. On the other hand, it's quite challenging to detect UI issues automatically due to the lack of a clear definition - this is where the field of 'Visual Regression' chimes in and solves this puzzle by comparing the old UI with the latest changes and detecting differences. Some OSS/free tools can provide some of this functionality (e.g. wraith, PhantomCSS) but might demand significant setup time. The commercial line of tools (e.g. Applitools, Percy.io) takes it a step further by smoothing the installation and packing advanced features like a management UI, alerting, smart capturing by eliminating 'visual noise' (e.g. ads, animations) and even root cause analysis of the DOM/CSS changes that led to the issue

Otherwise: How good is a content page that displays great content (100% of tests passed) and loads instantly, but half of the content area is hidden?

Code Examples

Anti Pattern Example: A typical visual regression - right content that is served badly


Doing It Right Example: Configuring wraith to capture and compare UI snapshots

# Add as many domains as necessary. Key will act as a label
domains:
  english: "http://www.mysite.com"

# Type screen widths below, here are a couple of examples
screen_widths:
  - 600
  - 768
  - 1024
  - 1280

# Type page URL paths below, here are a couple of examples
paths:
  about:
    path: /about
    selector: '.about'
  subscribe:
    path: /subscribe
    selector: '.subscribe'

Doing It Right Example: Using Applitools to get snapshot comparison and other advanced features

import * as todoPage from '../page-objects/todo-page';

describe('visual validation', () => {
  before(() => todoPage.navigate());
  beforeEach(() => cy.eyesOpen({ appName: 'TAU TodoMVC' }));
  afterEach(() => cy.eyesClose());

  it('should look good', () => {
    cy.eyesCheckWindow('empty todo list');

    todoPage.addTodo('Clean room');
    todoPage.addTodo('Learn javascript');
    cy.eyesCheckWindow('two todos');

    todoPage.toggleTodo(0);
    cy.eyesCheckWindow('mark as completed');
  });
});

Section 4: Measuring Test Effectiveness

4.1 Get enough coverage for being confident, ~80% seems to be the lucky number

Do: The purpose of testing is to gain enough confidence to move fast; obviously, the more code is tested, the more confident the team can be. Coverage is a measure of how many code lines (and branches, statements, etc.) are being reached by the tests. So how much is enough? 10-30% is obviously too low to get any sense of the build's correctness; on the other side, 100% is very expensive and might shift your focus from the critical paths to the exotic corners of the code. The long answer is that it depends on many factors, like the type of application - if you're building the next generation of the Airbus A380, then 100% is a must; for a cartoon pictures website, 50% might be too much. Although most testing enthusiasts claim that the right coverage threshold is contextual, most of them also mention the number 80% as a rule of thumb (Fowler: "in the upper 80s or 90s") that presumably should satisfy most applications.

Implementation tips: You may want to configure your continuous integration (CI) with a coverage threshold (Jest link) and stop a build that doesn't meet this standard (it's also possible to configure a threshold per component, see code example below). On top of this, consider detecting build coverage decreases (when newly committed code has less coverage) - this will push developers to raise, or at least preserve, the amount of tested code. All that said, coverage is only one measure, a quantitative one, that is not enough to tell the robustness of your testing. It can also be fooled, as illustrated in the next bullets

Otherwise: Confidence and numbers go hand in hand; without really knowing that you tested most of the system, there will also be some fear, and fear will slow you down

Code Examples

Example: A typical coverage report
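
The report itself was a screenshot; producing one locally is a single command with either runner (a sketch, assuming Jest or nyc with Mocha):

# Jest ships with built-in (Istanbul) coverage
npx jest --coverage

# with Mocha, the nyc wrapper produces the same style of report
npx nyc mocha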

Doing It Right Example: Setting up coverage per component (using Jest)

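A sketch of that per-component threshold configuration (paths and numbers are illustrative; it lives in jest.config.js or the jest section of package.json):

// jest.config.js - the build fails when coverage drops below the thresholds
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80
    },
    // a stricter bar for a critical component (illustrative path)
    './src/components/': {
      lines: 90
    }
  }
};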

4.2 Inspect coverage reports to detect untested areas and other oddities

Do: Some issues sneak just under the radar and are really hard to find using traditional tools. These are not really bugs but more like surprising application behaviors that might have a severe impact. For example, often some code areas are never or rarely invoked - you thought that the 'PricingCalculator' class always sets the product price, but it turns out it is actually never invoked although we have 10,000 products in the DB and many sales... Code coverage reports help you realize whether the application behaves the way you believe it does. Other than that, they can also highlight which parts of the code are not tested - being informed that 80% of the code is tested doesn't tell you whether the critical parts are covered. Generating reports is easy - just run your app in production or during testing with coverage tracking and then see colorful reports that highlight how frequently each code area is invoked. If you take your time to glimpse into this data - you might find some gotchas

Otherwise: If you don’t know which parts of your code are left un-tested, you don’t know where the issues might come from

Code Examples

Anti-Pattern Example: What's wrong with this coverage report? Based on a real-world scenario where we tracked our application usage in QA and found interesting login patterns (hint: the amount of login failures is non-proportional, something is clearly wrong. Finally, it turned out that some frontend bug kept hitting the backend login API)


4.3 Measure logical coverage using mutation testing

Do: The traditional coverage metric often lies: it may show you 100% code coverage while none of your functions, not even one, returns the right response. How come? It simply measures which lines of code the tests visited, but it doesn't check whether the tests actually tested anything - asserted for the right response. It's like someone who travels for business and shows off his passport stamps - this doesn't prove any work done, only that he visited a few airports and hotels.

Mutation-based testing is here to help by measuring the amount of code that was actually TESTED not just VISITED. Stryker is a JavaScript library for mutation testing and the implementation is really neat:

(1) it intentionally changes the code and "plants bugs". For example, the code newOrder.price===0 becomes newOrder.price!=0. These "bugs" are called mutations

(2) it runs the tests; if all succeed then we have a problem - the tests didn't serve their purpose of discovering the bugs, so the mutations are said to have survived. If the tests failed, then great, the mutations were killed.

Knowing that all or most of the mutations were killed gives much higher confidence than traditional coverage and the setup time is similar

Otherwise: You’ll be fooled to believe that 85% coverage means your test will detect bugs in 85% of your code

Code Examples

Anti Pattern Example: 100% coverage, 0% testing

function addNewOrder(newOrder) {
    logger.log(`Adding new order ${newOrder}`);
    DB.save(newOrder);
    Mailer.sendMail(newOrder.assignee, `A new order was placed ${newOrder}`);

    return {approved: true};
}

it("Test addNewOrder, don't use such test names", () => {
    addNewOrder({assignee: "[email protected]", price: 120});
}); //Triggers 100% code coverage, but it doesn't check anything

Doing It Right Example: Stryker reports, a tool for mutation testing, detects and counts the amount of code that is not tested (Mutations)

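The report above is a screenshot; on the setup side, a minimal Stryker configuration might look like the following sketch (assumes the Mocha runner; keys as in recent @stryker-mutator versions):

// stryker.conf.js
module.exports = {
  mutate: ['src/**/*.js'],   // the code that will receive the planted mutations
  testRunner: 'mocha',
  reporters: ['html', 'clear-text', 'progress'],
  coverageAnalysis: 'perTest'
};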

4.4 Preventing test code issues with Test linters

Do: A set of ESLint plugins were built specifically for inspecting test code patterns and discovering issues. For example, eslint-plugin-mocha will warn when a test is written at the global level (not a child of a describe() statement) or when tests are skipped, which might lead to a false belief that all tests are passing. Similarly, eslint-plugin-jest can, for example, warn when a test has no assertions at all (not checking anything)

Otherwise: Seeing 90% code coverage and 100% green tests will make your face wear a big smile only until you realize that many tests aren’t asserting for anything and many test suites were just skipped. Hopefully, you didn’t deploy anything based on this false observation

Code Examples

Anti Pattern Example: A test case full of errors, luckily all are caught by Linters

describe("Too short description", () => {
const userToken = userService.getDefaultToken() // error:no-setup-in-describe, use hooks (sparingly) instead
it("Some description", () => {});//
error: valid-test-description. Must include the word "Should" + at least 5 words
});

it.skip("Test name", () => {// *error:no-skipped-tests, error:error:no-global-tests. Put tests only under describe or suite
expect("somevalue"); // error:no-assert
});

it("Test name", () => {*//error:no-identical-title. Assign unique titles to tests
});

Section 5: CI and Other Quality Measures

5.1 Enrich your linters and abort builds that have linting issues

Do: Linters are a free lunch; with a 5 min setup you get a free auto-pilot guarding your code and catching significant issues as you type. Gone are the days where linting was about cosmetics (no semi-colons!). Nowadays, linters can catch severe issues like errors that are not thrown correctly and losing information. On top of your basic set of rules (like ESLint standard or Airbnb style), consider including some specialized linters: eslint-plugin-chai-expect can discover tests without assertions, eslint-plugin-promise can discover promises with no resolve (your code will never continue), eslint-plugin-security can discover eager regex expressions that might be used for DoS attacks, and eslint-plugin-you-dont-need-lodash-underscore is capable of alerting when the code uses utility library methods that are part of the V8 core methods, like Lodash._map(…)

Otherwise: Consider a rainy day where your production keeps crashing but the logs don't display the error stack trace. What happened? Your code mistakenly threw a non-error object and the stack trace was lost, a good reason for banging your head against a brick wall. A 5 min linter setup could detect this typo and save your day

Code Examples

Anti Pattern Example: The wrong Error object is thrown mistakenly, no stack-trace will appear for this error. Luckily, ESLint catches the next production bug
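
The screenshot isn't reproduced in this copy, but the mistake itself looks like the snippet below, and ESLint's core no-throw-literal rule flags it (productToAdd is a hypothetical variable):

// Anti-pattern: throwing a plain string loses the stack trace and the error type
if (!productToAdd) {
  throw ('nothing to add'); // ESLint: no-throw-literal
}

// The fix: always throw an Error (or a subclass that carries an error code)
if (!productToAdd) {
  throw new Error('nothing to add');
}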

5.2 Shorten the feedback loop with local developer-CI

Do: Using a CI with shiny quality inspections like testing, linting, vulnerability checks, etc.? Help developers run this pipeline locally as well, to solicit instant feedback and shorten the feedback loop. Why? An efficient testing process constitutes many iterative loops: (1) try-outs -> (2) feedback -> (3) refactor. The faster the feedback, the more improvement iterations a developer can perform per module and the better the results. On the flip side, when the feedback is late to come, fewer improvement iterations can be packed into a single day, the team might already move forward to another topic/task/module and might not be up for refining that module.

Practically, some CI vendors (example: the CircleCI local CLI) allow running the pipeline locally. Some commercial tools like Wallaby provide highly valuable testing insights as a developer prototypes (no affiliation). Alternatively, you may just add an npm script to package.json that runs all the quality commands (e.g. test, lint, vulnerabilities) - use tools like concurrently for parallelization and a non-zero exit code if one of the tools failed. Now the developer should just invoke one command - e.g. 'npm run quality' - to get instant feedback. Consider also aborting a commit if the quality check failed, using a githook (husky can help)

Otherwise: When the quality results arrive the day after the code was written, testing doesn't become a fluent part of development but rather an after-the-fact formal artifact

Code Examples

Doing It Right Example: npm scripts that perform code quality inspection, all are run in parallel on demand or when a developer is trying to push new code

"scripts": {
    "inspect:sanity-testing": "mocha /--test.js --grep "sanity"",
    "inspect:lint": "eslint .",
    "inspect:vulnerabilities": "npm audit",
    "inspect:license": "license-checker --failOn GPLv2",
    "inspect:complexity": "plato .",

    "inspect:all": "concurrently -c "bgBlue.bold,bgMagenta.bold,yellow" "npm:inspect:quick-testing" "npm:inspect:lint" "npm:inspect:vulnerabilities" "npm:inspect:license""
  },
  "husky": {
    "hooks": {
      "precommit": "npm run inspect:all",
      "prepush": "npm run inspect:all"
    }
}

5.3 Perform e2e testing over a true production-mirror

Do: End-to-end (e2e) testing is the main challenge of every CI pipeline - creating an identical, ephemeral production mirror on the fly with all the related cloud services can be tedious and expensive. Finding the best compromise is your game: Docker-compose allows crafting an isolated dockerized environment with identical containers using a single plain-text file, but the backing technology (e.g. networking, deployment model) is different from real-world production. You may combine it with 'AWS Local' to work with stubs of the real AWS services. If you went serverless, multiple frameworks like Serverless and AWS SAM allow the local invocation of FaaS code.

The huge Kubernetes ecosystem has yet to formalize a standard, convenient tool for local and CI mirroring, though many new tools are launched frequently. One approach is running a 'minimized Kubernetes' using tools like Minikube and MicroK8s, which resemble the real thing but come with less overhead. Another approach is testing over a remote 'real Kubernetes': some CI providers (e.g. Codefresh) have native integration with Kubernetes environments and make it easy to run the CI pipeline over the real thing, while others allow custom scripting against a remote Kubernetes.

Otherwise: Using different technologies for production and testing demands maintaining two deployment models and keeps the developers and the ops team separated

Code Examples

Example: a CI pipeline that generates Kubernetes cluster on the fly (Credit: Dynamic-environments Kubernetes)

deploy:
  stage: deploy
  image: registry.gitlab.com/gitlab-examples/kubernetes-deploy
  script:
    - ./configureCluster.sh $KUBE_CA_PEM_FILE $KUBE_URL $KUBE_TOKEN
    - kubectl create ns $NAMESPACE
    - kubectl create secret -n $NAMESPACE docker-registry gitlab-registry --docker-server="$CI_REGISTRY" --docker-username="$CI_REGISTRY_USER" --docker-password="$CI_REGISTRY_PASSWORD" --docker-email="$GITLAB_USER_EMAIL"
    - mkdir .generated
    - echo "$CI_BUILD_REF_NAME-$CI_BUILD_REF"
    - sed -e "s/TAG/$CI_BUILD_REF_NAME-$CI_BUILD_REF/g" templates/deals.yaml | tee ".generated/deals.yaml"
    - kubectl apply --namespace $NAMESPACE -f .generated/deals.yaml
    - kubectl apply --namespace $NAMESPACE -f templates/my-sock-shop.yaml
  environment:
    name: test-for-ci

5.4 Parallelize test execution

Do: When done right, testing is your 24/7 friend, providing almost instant feedback. In practice, executing 500 CPU-bound unit tests on a single thread can take too long. Luckily, modern test runners and CI platforms (like Jest, AVA and Mocha extensions) can parallelize the tests into multiple processes and achieve significant improvements in feedback time. Some CI vendors also parallelize tests across containers (!), which shortens the feedback loop even further. Whether locally over multiple processes or over some cloud CLI using multiple machines - parallelizing demands keeping the tests autonomous, as each might run on a different process

Otherwise: Getting test results 1 hour after pushing new code, when you are already coding the next features, is a great recipe for making testing less relevant

Code Examples

Doing It Right Example: Mocha parallel & Jest easily outrun the traditional Mocha thanks to testing parallelization (Credit: JavaScript Test-Runners Benchmark)
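
The benchmark itself is an image; in practice, switching parallelization on is mostly a flag (a sketch; note that Mocha gained its built-in --parallel flag only in later versions, 8+):

# Jest runs test files in parallel worker processes by default; tune the worker count
npx jest --maxWorkers=4

# newer Mocha versions (8+) can parallelize too
npx mocha --parallel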

5.5 Stay away from legal issues using license and plagiarism check

Do: Licensing and plagiarism issues are probably not your main concern right now, but why not tick this box as well in 10 minutes? A bunch of npm packages like license-check and plagiarism-check (commercial with a free plan) can easily be baked into your CI pipeline and inspect for sorrows like dependencies with restrictive licenses or code that was copy-pasted from Stack Overflow and apparently violates some copyrights

Otherwise: Unintentionally, developers might use packages with inappropriate licenses or copy paste commercial code and run into legal issues

Code Examples

Doing It Right Example:

# install license-checker in your CI environment or also locally
npm install -g license-checker

# ask it to scan all licenses and fail with an exit code other than 0 if it finds an unauthorized license. The CI system should catch this failure and stop the build
license-checker --summary --failOn BSD

5.6 Constantly inspect for vulnerable dependencies

Do: Even the most reputable dependencies such as Express have known vulnerabilities. This can get easily tamed using community tools such as npm audit, or commercial tools like snyk (which also offers a free community version). Both can be invoked from your CI on every build

Otherwise: Keeping your code clean from vulnerabilities without dedicated tools will require you to constantly follow online publications about new threats. Quite tedious

Code Examples

Example: NPM Audit result
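
The result shown in the original is a screenshot; wiring the scan into CI is a single command (the audit level is an illustrative choice):

# exits with a non-zero code when vulnerabilities at or above the given level are found,
# so the CI system fails the build
npm audit --audit-level=moderate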

5.7 Automate dependency updates

Do: Yarn's and npm's recent introduction of lock files (package-lock.json) presented a serious challenge (the road to hell is paved with good intentions) - by default now, packages no longer get updates. Even a team running many fresh deployments with 'npm install' & 'npm update' won't get any new updates. This leads to subpar dependency versions at best, or to vulnerable code at worst. Teams now rely on developers' goodwill and memory to manually update package.json, or use tools like ncu manually. A more reliable way would be to automate the process of getting the most reliable dependency versions; though there are no silver-bullet solutions yet, there are two possible automation roads: (1) The CI can fail builds that have obsolete dependencies - using tools like 'npm outdated' or 'npm-check-updates (ncu)'. Doing so will force developers to update dependencies. (2) Use commercial tools that scan the code and automatically send pull requests with updated dependencies. One interesting question remaining is what the dependency update policy should be - updating on every patch generates too much overhead, while updating right when a major version is released might point to an unstable version (many packages are found vulnerable in the very first days after being released, see the eslint-scope incident). An efficient update policy may allow some 'vesting period' - let the code lag behind @latest for some time and versions before considering the local copy as obsolete (e.g. the local version is 1.3.1 and the repository version is 1.3.8)

Otherwise: Your production will run packages that have been explicitly tagged by their author as risky

Code Examples

Example: ncu can be used manually or within a CI pipeline to detect to what extent the code lags behind the latest versions
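
A usage sketch for both automation roads mentioned above:

# see to what extent dependencies lag behind the latest versions
npx npm-check-updates

# npm's built-in check; exits with a non-zero code when outdated packages exist, so CI can fail the build
npm outdated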

5.8 Other, non-Node related, CI tips

Do: This post is focused on testing advice that is related to, or at least can be exemplified with, Node JS. This bullet, however, groups a few non-Node-related tips that are well-known

  1. Use a declarative syntax. This is the only option for most vendors, but older versions of Jenkins allow using code or a UI
  2. Opt for a vendor that has native Docker support
  3. Fail early, run your fastest tests first. Create a ‘Smoke testing’ step/milestone that groups multiple fast inspections (e.g. linting, unit tests) and provide snappy feedback to the code committer
  4. Make it easy to skim-through all build artifacts including test reports, coverage reports, mutation reports, logs, etc
  5. Create multiple pipelines/jobs for each event, and reuse steps between them. For example, configure a job for feature branch commits and a different one for master PRs. Let each reuse logic using shared steps (most vendors provide some mechanism for code reuse)
  6. Never embed secrets in a job declaration, grab them from a secret store or from the job’s configuration
  7. Explicitly bump version in a release build or at least ensure the developer did so
  8. Build only once and perform all the inspections over the single build artifact (e.g. Docker image)
  9. Test in an ephemeral environment that doesn’t drift state between builds. Caching node_modules might be the only exception

Otherwise: You‘ll miss years of wisdom

5.9 Build matrix: Run the same CI steps using multiple Node versions

Do: Quality checking is about serendipity; the more ground you cover, the luckier you get at detecting issues early. When developing reusable packages or running a multi-customer production with various configurations and Node versions, the CI must run the pipeline of tests over all the permutations of configurations. For example, assuming we use MySQL for some customers and Postgres for others - some CI vendors support a feature called 'Matrix' which allows running the suite of tests against all permutations of MySQL, Postgres and multiple Node versions like 8, 9 and 10. This is done using configuration only, without any additional effort (assuming you have testing or any other quality checks). Other CIs that don't support Matrix might have extensions or tweaks to allow that

Otherwise: So after doing all that hard work of writing tests, are we going to let bugs sneak in only because of configuration issues?

Code Examples

Example: Using Travis (CI vendor) build definition to run the same test over multiple Node versions

language: node_js
node_js:
  - "7"
  - "6"
  - "5"
  - "4"
install:
  - npm install
script:
  - npm run test

Thanks for reading



Top 7 Most Popular Node.js Frameworks You Should Know


Node.js is an open-source, cross-platform, runtime environment that allows developers to run JavaScript outside of a browser.

One of the main advantages of Node is that it enables developers to use JavaScript on both the front-end and the back-end of an application. This not only makes the source code of any app cleaner and more consistent, but it significantly speeds up app development too, as developers only need to use one language.

Node is fast, scalable, and easy to get started with. Its default package manager is npm, which means it also sports the largest ecosystem of open-source libraries. Node is used by companies such as NASA, Uber, Netflix, and Walmart.

But Node doesn't come alone. It comes with a plethora of frameworks. A Node framework can be pictured as the external scaffolding that you can build your app in. These frameworks are built on top of Node and extend the technology's functionality, mostly by making apps easier to prototype and develop, while also making them faster and more scalable.

Below are 7 of the most popular Node frameworks at this point in time (ranked from high to low by GitHub stars).

Express

With over 43,000 GitHub stars, Express is the most popular Node framework. It brands itself as a fast, unopinionated, and minimalist framework. Express acts as middleware: it helps set up and configure routes to send and receive requests between the front-end and the database of an app.

Express provides lightweight, powerful tools for HTTP servers. It's a great framework for single-page apps, websites, hybrids, or public HTTP APIs. It supports over fourteen different template engines, so developers aren't locked into any specific one.
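
To make the routing role concrete, here is a minimal sketch of an Express server with a single route; the port and the route path are arbitrary examples:

// minimal Express app (illustrative sketch)
const express = require('express');
const app = express();

// Express matches the path and hands the request to this handler
app.get('/users/:id', (req, res) => {
  res.json({ id: req.params.id });
});

app.listen(3000, () => console.log('Listening on port 3000'));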

Meteor

Meteor is a full-stack JavaScript platform. It allows developers to build real-time web apps, i.e. apps where code changes are pushed to all browsers and devices in real-time. Additionally, servers send data over the wire, instead of HTML. The client renders the data.

The project has over 41,000 GitHub stars and is built to power large projects. Meteor is used by companies such as Mazda, Honeywell, Qualcomm, and IKEA. It has excellent documentation and a strong community behind it.

Koa

Koa is built by the same team that built Express. It relies on ES6 generators and, since Koa 2, async/await, which let developers write middleware without callbacks. Developers also have more control over error-handling. Koa has no middleware within its core, which means developers have more control over configuration, but also that Express-style middleware (the req, res, next signature) won't work with Koa.

Koa already has over 26,000 GitHub stars. The Express developers built Koa because they wanted a lighter framework that was more expressive and more robust than Express.
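
A small sketch of what callback-free middleware looks like in Koa 2; the logging middleware and the route body are illustrative:

// minimal Koa 2 app (illustrative sketch)
const Koa = require('koa');
const app = new Koa();

// middleware are async functions; `await next()` runs the downstream middleware
app.use(async (ctx, next) => {
  const start = Date.now();
  await next();
  console.log(`${ctx.method} ${ctx.url} took ${Date.now() - start}ms`);
});

app.use(async (ctx) => {
  ctx.body = 'Hello from Koa';
});

app.listen(3000);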

Sails

Sails is a real-time, MVC framework for Node that's built on Express. It supports auto-generated REST APIs and comes with an easy WebSocket integration.

The project has over 20,000 stars on GitHub and is compatible with almost all databases (MySQL, MongoDB, PostgreSQL, Redis). It's also compatible with most front-end technologies (Angular, iOS, Android, React, and even Windows Phone).

Nest

Nest has over 15,000 GitHub stars. It uses progressive JavaScript and is built with TypeScript, which means it comes with strong typing. It combines elements of object-oriented programming, functional programming, and functional reactive programming.

Nest is packaged in such a way it serves as a complete development kit for writing enterprise-level apps. The framework uses Express, but is compatible with a wide range of other libraries.

LoopBack

LoopBack is a framework that allows developers to quickly create REST APIs. It has an easy-to-use CLI wizard and allows developers to create models either on their schema or dynamically. It also has a built-in API explorer.

LoopBack has over 12,000 GitHub stars and is used by companies such as GoDaddy, Symantec, and the Bank of America. It's compatible with many REST services and a wide variety of databases (MongoDB, Oracle, MySQL, PostgreSQL).

Hapi

Similar to Express, hapi serves data by intermediating between server-side and client-side. As such, it can serve as a substitute for Express. Hapi allows developers to focus on writing reusable app logic in a modular and prescriptive fashion.

The project has over 11,000 GitHub stars. It has built-in support for input validation, caching, authentication, and more. Hapi was originally developed to handle all of Walmart's mobile traffic during Black Friday.
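
As a sketch of that built-in input validation, here is a minimal hapi server; it assumes hapi v18 with @hapi/joi, and the route and schema are illustrative, since the exact validation wiring differs between hapi versions:

// minimal hapi server with input validation (illustrative sketch)
const Hapi = require('@hapi/hapi');
const Joi = require('@hapi/joi');

const init = async () => {
  const server = Hapi.server({ port: 3000 });

  server.route({
    method: 'GET',
    path: '/users/{id}',
    options: {
      validate: {
        // hapi rejects any request whose params fail this schema
        params: Joi.object({ id: Joi.number().integer().required() })
      }
    },
    handler: (request, h) => ({ id: request.params.id })
  });

  await server.start();
  console.log('Server running at ' + server.info.uri);
};

init();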