How to Test Node.js Apps using Mocha, Chai and SinonJS

In this article, we'll look into testing Node.js apps. We'll use Mocha, Chai and SinonJS, and delve into using spies, stubs and mocks.

Tests help document the core features of an application. Properly written tests ensure that new features do not introduce changes that break the application.

An engineer maintaining a codebase might not be the same engineer who wrote the initial code. If the code is properly tested, another engineer can confidently add new code or modify existing code with the expectation that the changes do not break other features or, at the very least, do not cause side effects in them.

JavaScript and Node.js have many testing and assertion libraries, such as Jest, Jasmine, QUnit, and Mocha. In this article, however, we will look at how to use Mocha for testing, Chai for assertions, and Sinon for mocks, spies, and stubs.

Mocha

Mocha is a feature-rich JavaScript test framework running on Node.js and in the browser. It encapsulates tests in test suites (describe-block) and test cases (it-block).

Mocha has lots of interesting features:

  • browser support
  • simple async support, including promises
  • test coverage reporting
  • async test timeout support
  • before, after, beforeEach, afterEach Hooks, etc.
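
As a quick sketch of how these pieces fit together, a Mocha suite groups test cases with describe, defines them with it, and runs hooks around them. The Array example below is hypothetical:

// a hypothetical example of Mocha's suite structure and hooks
const assert = require("assert");

describe("Array", function() {
  let items;

  before(function() {
    // runs once before all tests in this describe block
  });

  beforeEach(function() {
    // runs before every test case
    items = [1, 2, 3];
  });

  it("removes the last element with pop()", function() {
    items.pop();
    assert.strictEqual(items.length, 2);
  });

  it("supports async tests by returning a promise", async function() {
    const value = await Promise.resolve(42);
    assert.strictEqual(value, 42);
  });
});
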
Chai

To make equality checks or compare expected results against actual results, we can use the Node.js built-in assert module. However, even when an error occurs, the test cases would still pass, so Mocha recommends using other assertion libraries. For this tutorial, we will be using Chai.

Chai exposes three assertion interfaces: expect(), assert() and should(). Any of them can be used for assertions.
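
As a quick, hypothetical illustration, the same check written with each interface looks like this:

const chai = require("chai");
chai.should(); // the should interface extends Object.prototype

const { expect, assert } = chai;
const user = { name: "Ada" };

expect(user.name).to.equal("Ada");    // expect interface
assert.strictEqual(user.name, "Ada"); // assert interface
user.name.should.equal("Ada");        // should interface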

Sinon

Often, the method that is being tested is required to interact with or call other external methods. Therefore you need a utility to spy, stub, or mock those external methods. This is exactly what Sinon does for you.

Stubs, mocks, and spies make tests more robust and less prone to breakage should dependent code evolve or have its internals modified.

Spy

A spy is a fake function that keeps track of the arguments, the return value, the value of this, and the exception thrown (if any) for each of its calls.
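
For instance, here is a minimal sketch (the greet function is made up) of using a spy as a callback:

const sinon = require("sinon");

// hypothetical function under test that invokes a callback
function greet(name, callback) {
  callback(`Hello, ${name}`);
}

const callback = sinon.spy();
greet("Ada", callback);

console.log(callback.calledOnce);        // true
console.log(callback.firstCall.args[0]); // "Hello, Ada"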

Stub

A stub is a spy with predetermined behavior.

We can use a stub to:

  • Take a predetermined action, like throwing an exception
  • Provide a predetermined response
  • Prevent a specific method from being called directly (especially when it triggers undesired behaviors like HTTP requests)
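
As a small sketch (the paymentService object is hypothetical), a stub can replace a method that would otherwise make an HTTP request:

const sinon = require("sinon");

// hypothetical service whose real implementation calls a remote API
const paymentService = {
  charge(amount) { /* issues an HTTP request in production */ }
};

const stub = sinon.stub(paymentService, "charge").returns({ status: "success" });

const result = paymentService.charge(100); // no HTTP request is made
console.log(result.status);        // "success"
console.log(stub.calledWith(100)); // true

stub.restore(); // put the original method back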

Mock

A mock is a fake function (like a spy) with pre-programmed behavior (like a stub) as well as pre-programmed expectations.

We can use a mock to:

  • Verify the contract between the code under test and the external methods that it calls
  • Verify that an external method is called the correct number of times
  • Verify an external method is called with the correct parameters

The rule of thumb for a mock is: if you are not going to add an assertion for some specific call, don’t mock it. Use a stub instead.
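
A minimal sketch (again with a made-up mailer object) of pre-programming expectations on a mock and verifying them:

const sinon = require("sinon");

// hypothetical dependency that would talk to an SMTP server
const mailer = {
  send(to, body) { /* sends an email in production */ }
};

const mock = sinon.mock(mailer);
mock.expects("send").once().withArgs("ada@example.com");

mailer.send("ada@example.com", "Welcome!");

mock.verify();  // throws if the expectations above were not met
mock.restore();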

Writing tests

To demonstrate what we have explained above we will be building a simple node application that creates and retrieves a user. The complete code sample for this article can be found on CodeSandbox.

Project setup

Let’s create a new project directory for our user app project:

mkdir mocha-unit-test && cd mocha-unit-test
mkdir src

Create a package.json file in the project root and add the code below:

// package.json
{
  "name": "mocha-unit-test",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "scripts": {
    "test": "mocha './src/**/*.test.js'",
    "start": "node src/app.js"
  },
  "keywords": [
    "mocha",
    "chai"
  ],
  "author": "Godwin Ekuma",
  "license": "ISC",
  "dependencies": {
    "body-parser": "^1.18.3",
    "dotenv": "^6.2.0",
    "express": "^4.16.4",
    "jsonwebtoken": "^8.4.0",
    "morgan": "^1.9.1",
    "pg": "^7.12.1",
    "pg-hstore": "^2.3.3",
    "sequelize": "^5.19.6"
  },
  "devDependencies": {
    "chai": "^4.2.0",
    "mocha": "^6.2.1",
    "sinon": "^7.5.0",
    "faker": "^4.1.0"
  }
}

Run npm install to install project dependencies.

Notice that the test-related packages mocha, chai, sinon, and faker are saved in devDependencies.

The test script uses a custom glob (./src/**/*.test.js) to configure the file paths of test files. Mocha will look for test files (files ending with .test.js) within the directories and subdirectories of the src folder.

Repositories, services, and controllers

We will structure our application using the controller, service, and repository pattern, so our app will be broken into repositories, services, and controllers. The Repository-Service-Controller pattern breaks up the business layer of the app into three distinct layers:

  • The repository class handles getting data into and out of our data store. A repository is used between the service layer and the model layer. For example, in the UserRepository you would create methods that write/read a user to and from the database
  • The service class calls the repository class and can combine their data to form new, more complex business objects. It is an abstraction between the controller and the repository. For example, the UserService would be responsible for performing the required logic in order to create a new user
  • A controller contains very little logic and is used to make calls to services. Rarely does the controller make direct calls to the repositories unless there’s a valid reason. The controller will perform basic checks on the data returned from the services in order to send a response back to the client

Breaking down applications this way makes testing easy.

UserRepository class

Let’s begin by creating a repository class:

// src/user/user.repository.js
const { UserModel } = require("../database");
class UserRepository {
  constructor() {
    this.user = UserModel;
    this.user.sync({ force: true });
  }
  async create(name, email) {
    return this.user.create({
      name,
      email
    });
  }
  async getUser(id) {
    // Sequelize expects query filters inside a "where" object
    return this.user.findOne({ where: { id } });
  }
}
module.exports = UserRepository;

The UserRepository class has two methods, create and getUser. The create method adds a new user to the database, while the getUser method retrieves a user from the database.

Let’s test the UserRepository methods, starting with create:

// src/user/user.repository.test.js
const chai = require("chai");
const sinon = require("sinon");
const expect = chai.expect;
const faker = require("faker");
const { UserModel } = require("../database");
const UserRepository = require("./user.repository");
describe("UserRepository", function() {
  const stubValue = {
    id: faker.random.uuid(),
    name: faker.name.findName(),
    email: faker.internet.email(),
    createdAt: faker.date.past(),
    updatedAt: faker.date.past()
  };
  describe("create", function() {
    it("should add a new user to the db", async function() {
      const stub = sinon.stub(UserModel, "create").returns(stubValue);
      const userRepository = new UserRepository();
      const user = await userRepository.create(stubValue.name, stubValue.email);
      expect(stub.calledOnce).to.be.true;
      expect(user.id).to.equal(stubValue.id);
      expect(user.name).to.equal(stubValue.name);
      expect(user.email).to.equal(stubValue.email);
      expect(user.createdAt).to.equal(stubValue.createdAt);
      expect(user.updatedAt).to.equal(stubValue.updatedAt);
    });
  });
});

The above code tests the create method of the UserRepository. Notice that we are stubbing the UserModel.create method. The stub is necessary because our goal is to test the repository, not the model. We use faker (https://github.com/marak/Faker.js/) for the test fixtures. Next, let's test the getUser method:

// src/user/user.repository.test.js

const chai = require("chai");
const sinon = require("sinon");
const expect = chai.expect;
const faker = require("faker");
const { UserModel } = require("../database");
const UserRepository = require("./user.repository");

describe("UserRepository", function() {
  const stubValue = {
    id: faker.random.uuid(),
    name: faker.name.findName(),
    email: faker.internet.email(),
    createdAt: faker.date.past(),
    updatedAt: faker.date.past()
  };
   describe("getUser", function() {
    it("should retrieve a user with specific id", async function() {
      const stub = sinon.stub(UserModel, "findOne").returns(stubValue);
      const userRepository = new UserRepository();
      const user = await userRepository.getUser(stubValue.id);
      expect(stub.calledOnce).to.be.true;
      expect(user.id).to.equal(stubValue.id);
      expect(user.name).to.equal(stubValue.name);
      expect(user.email).to.equal(stubValue.email);
      expect(user.createdAt).to.equal(stubValue.createdAt);
      expect(user.updatedAt).to.equal(stubValue.updatedAt);
    });
  });
});

To test the getUser method, we also have to stub UserModel.findOne. We use expect(stub.calledOnce).to.be.true to assert that the stub is called exactly once. The other assertions check the value returned by the getUser method.

UserService class

// src/user/user.service.js

const UserRepository = require("./user.repository");
class UserService {
  constructor(userRepository) {
    this.userRepository = userRepository;
  }
  async create(name, email) {
    return this.userRepository.create(name, email);
  }
  getUser(id) {
    return this.userRepository.getUser(id);
  }
}
module.exports = UserService;

The UserService class also has two methods, create and getUser. The create method calls the repository's create method, passing the name and email of a new user as arguments. The getUser method calls the repository's getUser method.

Let’s test the userService methods below:

// src/user/user.service.test.js

const chai = require("chai");
const sinon = require("sinon");
const UserRepository = require("./user.repository");
const expect = chai.expect;
const faker = require("faker");
const UserService = require("./user.service");
describe("UserService", function() {
  describe("create", function() {
    it("should create a new user", async function() {
      const stubValue = {
        id: faker.random.uuid(),
        name: faker.name.findName(),
        email: faker.internet.email(),
        createdAt: faker.date.past(),
        updatedAt: faker.date.past()
      };
      const userRepo = new UserRepository();
      const stub = sinon.stub(userRepo, "create").returns(stubValue);
      const userService = new UserService(userRepo);
      const user = await userService.create(stubValue.name, stubValue.email);
      expect(stub.calledOnce).to.be.true;
      expect(user.id).to.equal(stubValue.id);
      expect(user.name).to.equal(stubValue.name);
      expect(user.email).to.equal(stubValue.email);
      expect(user.createdAt).to.equal(stubValue.createdAt);
      expect(user.updatedAt).to.equal(stubValue.updatedAt);
    });
  });
});

The code above is testing the UserService create method. We have created a stub for the repository create method. The code below will test the getUser service method:

const chai = require("chai");
const sinon = require("sinon");
const UserRepository = require("./user.repository");
const expect = chai.expect;
const faker = require("faker");
const UserService = require("./user.service");
describe("UserService", function() {
  describe("getUser", function() {
    it("should return a user that matches the provided id", async function() {
      const stubValue = {
        id: faker.random.uuid(),
        name: faker.name.findName(),
        email: faker.internet.email(),
        createdAt: faker.date.past(),
        updatedAt: faker.date.past()
      };
      const userRepo = new UserRepository();
      const stub = sinon.stub(userRepo, "getUser").returns(stubValue);
      const userService = new UserService(userRepo);
      const user = await userService.getUser(stubValue.id);
      expect(stub.calledOnce).to.be.true;
      expect(user.id).to.equal(stubValue.id);
      expect(user.name).to.equal(stubValue.name);
      expect(user.email).to.equal(stubValue.email);
      expect(user.createdAt).to.equal(stubValue.createdAt);
      expect(user.updatedAt).to.equal(stubValue.updatedAt);
    });
  });
});

Again, we are stubbing the UserRepository getUser method. We assert that the stub is called exactly once and then assert that the return value of the method is correct.

UserController class

// src/user/user.controller.js

class UserController {
  constructor(userService) {
    this.userService = userService;
  }
  async register(req, res, next) {
    const { name, email } = req.body;
    if (
      !name ||
      typeof name !== "string" ||
      (!email || typeof email !== "string")
    ) {
      return res.status(400).json({
        message: "Invalid Params"
      });
    }
    const user = await this.userService.create(name, email);
    return res.status(201).json({
      data: user
    });
  }
  async getUser(req, res) {
    const { id } = req.params;
    const user = await this.userService.getUser(id);
    return res.json({
      data: user
    });
  }
}
module.exports = UserController;

The UserController class has register and getUser methods as well. Each of these methods accepts the req and res objects as parameters. Let's test the register method first:

// src/user/user.controller.test.js

const chai = require("chai");
const sinon = require("sinon");
const expect = chai.expect;
const faker = require("faker");
const UserController = require("./user.controller");
const UserService = require("./user.service");

describe("UserController", function() {
  describe("register", function() {
    let status, json, res, userController, userService;
    beforeEach(() => {
      status = sinon.stub();
      json = sinon.spy();
      res = { json, status };
      status.returns(res);
      const userRepo = sinon.spy();
      userService = new UserService(userRepo);
    });
    it("should not register a user when name param is not provided", async function() {
      const req = { body: { email: faker.internet.email() } };
      await new UserController().register(req, res);
      expect(status.calledOnce).to.be.true;
      expect(status.args[0][0]).to.equal(400);
      expect(json.calledOnce).to.be.true;
      expect(json.args[0][0].message).to.equal("Invalid Params");
    });
    it("should not register a user when name and email params are not provided", async function() {
      const req = { body: {} };
      await new UserController().register(req, res);
      expect(status.calledOnce).to.be.true;
      expect(status.args[0][0]).to.equal(400);
      expect(json.calledOnce).to.be.true;
      expect(json.args[0][0].message).to.equal("Invalid Params");
    });
    it("should not register a user when email param is not provided", async function() {
      const req = { body: { name: faker.name.findName() } };
      await new UserController().register(req, res);
      expect(status.calledOnce).to.be.true;
      expect(status.args[0][0]).to.equal(400);
      expect(json.calledOnce).to.be.true;
      expect(json.args[0][0].message).to.equal("Invalid Params");
    });
    it("should register a user when email and name params are provided", async function() {
      const req = {
        body: { name: faker.name.findName(), email: faker.internet.email() }
      };
      const stubValue = {
        id: faker.random.uuid(),
        name: faker.name.findName(),
        email: faker.internet.email(),
        createdAt: faker.date.past(),
        updatedAt: faker.date.past()
      };
      const stub = sinon.stub(userService, "create").returns(stubValue);
      userController = new UserController(userService);
      await userController.register(req, res);
      expect(stub.calledOnce).to.be.true;
      expect(status.calledOnce).to.be.true;
      expect(status.args[0][0]).to.equal(201);
      expect(json.calledOnce).to.be.true;
      expect(json.args[0][0].data).to.equal(stubValue);
    });
  });
});

In the first three it blocks, we test that a user will not be created when one or both of the required parameters (email and name) are missing. Notice that we are stubbing res.status and spying on res.json. The next block tests the getUser controller method:

describe("UserController", function() {
  describe("getUser", function() {
    let req;
    let res;
    let userController;
    let userService;
    beforeEach(() => {
      req = { params: { id: faker.random.uuid() } };
      res = { json: function() {} };
      const userRepo = sinon.spy();
      userService = new UserService(userRepo);
    });
    it("should return a user that matches the id param", async function() {
      const stubValue = {
        id: req.params.id,
        name: faker.name.findName(),
        email: faker.internet.email(),
        createdAt: faker.date.past(),
        updatedAt: faker.date.past()
      };
      const mock = sinon.mock(res);
      mock
        .expects("json")
        .once()
        .withExactArgs({ data: stubValue });
      const stub = sinon.stub(userService, "getUser").returns(stubValue);
      userController = new UserController(userService);
      const user = await userController.getUser(req, res);
      expect(stub.calledOnce).to.be.true;
      mock.verify();
    });
  });
});

For the getUser test, we mocked the json method. Notice that we also had to use a spy in place of the UserRepository while creating a new instance of the UserService.

Conclusion

Run the test using the command below:

npm test

You should see the tests passing.

We have seen how to use a combination of Mocha, Chai, and Sinon to create robust tests for a Node application. Be sure to check out their respective documentation to broaden your knowledge of these tools. Got a question or comment? Please drop it in the comment section below.

JavaScript and Node.js Testing Best Practices

Why this guide can take your testing skills to the next level

Originally published by Yoni Goldberg at https://github.com

This is a guide for JavaScript & Node.js reliability from A-Z. It summarizes and curates for you dozens of the best blog posts, books and tools the market has to offer

Advanced: Goes 10,000 miles beyond the basics

Hop into a journey that travels way beyond the basics into advanced topics like testing in production, mutation testing, property-based testing and many other strategic & professional tools. Should you read every word in this guide, your testing skills are likely to go way above average.

Full-stack: front, backend, CI, anything

Start by understanding the ubiquitous testing practices that are the foundation for any application tier. Then, delve into your area of choice: frontend/UI, backend, CI or maybe all of them?

Table of contents

  • Section 0: The Golden Rule

A single advice that inspires all the others (1 special bullet)

  • Section 1: The Test Anatomy

The foundation - structuring clean tests (12 bullets)

  • Section 2: Backend

Writing backend and Microservices tests efficiently (8 bullets)

  • Section 3: Frontend, UI, E2E

Writing tests for web UI including component and E2E tests (11 bullets)

  • Section 4: Measuring Tests Effectiveness

Watching the watchman - measuring test quality (4 bullets)

  • Section 5: Continuous Integration

Guidelines for CI in the JS world (9 bullets)

Section 0: The Golden Rule

0. The Golden Rule: Design for lean testing

Do: Testing code is not like production-code - design it to be dead-simple, short, abstraction-free, flat, delightful to work with, lean. One should look at a test and get the intent instantly.

Our minds are already full with the main production code; we don't have 'headspace' for additional complexity. Should we try to squeeze yet more challenging code into our poor brains, it will slow the team down, which works against the very reason we do testing. Practically, this is where many teams just abandon testing.

The tests are an opportunity for something else - a friendly and smiley assistant, one that is delightful to work with and delivers great value for a small investment. Science tells us we have two brain systems: system 1, which is used for effortless activities like driving a car on an empty road, and system 2, which is meant for complex and conscious operations like solving a math equation. Design your tests for system 1: when looking at test code, it should feel as easy as modifying an HTML document and not like solving 2X(17 × 24).

This can be achieved by selectively cherry-picking techniques, tools and test targets that are cost-effective and provide great ROI. Test only as much as needed, strive to keep it nimble; sometimes it's even worth dropping some tests and trading reliability for agility and simplicity.

Most of the advice below derives from this principle.

Ready to start?

Section 1. The Test Anatomy

1.1 Include 3 parts in each test name

Do: A test report should tell whether the current application revision satisfies the requirements for the people who are not necessarily familiar with the code: the tester, the DevOps engineer who is deploying and the future you two years from now. This can be achieved best if the tests speak at the requirements level and include 3 parts:

(1) What is being tested? For example, the ProductsService.addNewProduct method

(2) Under what circumstances and scenario? For example, no price is passed to the method

(3) What is the expected result? For example, the new product is not approved

Otherwise: A deployment just failed, a test named “Add product” failed. Does this tell you what exactly is malfunctioning?

Note: Each bullet has code examples and sometimes also an image illustration.

Code Examples

Doing It Right Example: A test name that constitutes 3 parts

//1. unit under test
describe('Products Service', function() {
  describe('Add new product', function() {
    //2. scenario and 3. expectation
    it('When no price is specified, then the product status is pending approval', ()=> {
      const newProduct = new ProductService().add(...);
      expect(newProduct.status).to.equal('pendingApproval');
    });
  });
});

1.2 Structure tests by the AAA pattern

Do: Structure your tests with 3 well-separated sections Arrange, Act & Assert (AAA). Following this structure guarantees that the reader spends no brain CPU on understanding the test plan:

1st A - Arrange: All the setup code to bring the system to the scenario the test aims to simulate. This might include instantiating the unit under test constructor, adding DB records, mocking/stubbing on objects and any other preparation code

2nd A - Act: Execute the unit under test. Usually 1 line of code

3rd A - Assert: Ensure that the received value satisfies the expectation. Usually 1 line of code

Otherwise: Not only do you spend long daily hours understanding the main code, but what should have been the simplest part of the day (testing) also stretches your brain

Code Examples

Doing It Right Example: A test structured with the AAA pattern

describe('Customer classifier', () => {
    test('When customer spent more than 500$, should be classified as premium', () => {
        //Arrange
        const customerToClassify = {spent:505, joined: new Date(), id:1}
        const DBStub = sinon.stub(dataAccess, "getCustomer")
            .returns({id:1, classification: 'regular'});

        //Act
        const receivedClassification = customerClassifier.classifyCustomer(customerToClassify);

        //Assert
        expect(receivedClassification).toMatch('premium');
    });
});

Anti Pattern Example: No separation, one bulk, harder to interpret

test('Should be classified as premium', () => {
        const customerToClassify = {spent:505, joined: new Date(), id:1}
        const DBStub = sinon.stub(dataAccess, "getCustomer")
            .returns({id:1, classification: 'regular'});
        const receivedClassification = customerClassifier.classifyCustomer(customerToClassify);
        expect(receivedClassification).toMatch('premium');
    });

1.3 Describe expectations in a product language: use BDD-style assertions

Do: Coding your tests in a declarative style allows the reader to grasp the intent instantly, without spending even a single brain-CPU cycle. When you write imperative code that is packed with conditional logic, the reader is forced into an effortful mental mode. In that sense, code the expectation in a human-like, declarative BDD style using expect or should and not using custom code. If Chai & Jest don't include the desired assertion and it's highly repeatable, consider extending the Jest matchers or writing a custom Chai plugin

Otherwise: The team will write fewer tests and decorate the annoying ones with .skip()

Code Examples

Anti Pattern Example: The reader must skim through not-so-short, imperative code just to get the test story

test("When asking for an admin, ensure only ordered admins in results" , ()={
    //assuming we've added here two admins "admin1", "admin2" and "user1"
    const allAdmins = getUsers({adminOnly:true});

    let admin1Found = false, admin2Found = false;

    allAdmins.forEach(aSingleUser => {
        if(aSingleUser === "user1"){
            assert.notEqual(aSingleUser, "user1", "A user was found and not admin");
        }
        if(aSingleUser==="admin1"){
            admin1Found = true;
        }
        if(aSingleUser==="admin2"){
            admin2Found = true;
        }
    });

    if(!admin1Found || !admin2Found ){
        throw new Error("Not all admins were returned");
    }
});

Doing It Right Example: Skimming through the following declarative test is a breeze

it("When asking for an admin, ensure only ordered admins in results" , ()={
    //assuming we've added here two admins
    const allAdmins = getUsers({adminOnly:true});

    expect(allAdmins).to.include.ordered.members(["admin1" , "admin2"])
  .but.not.include.ordered.members(["user1"]);
});

1.4 Stick to black-box testing: Test only public methods

Do: Testing the internals brings huge overhead for almost nothing. If your code/API delivers the right results, should you really invest your next 3 hours in testing HOW it worked internally and then maintain these fragile tests? Whenever a public behavior is checked, the private implementation is also implicitly tested and your tests will break only if there is a certain problem (e.g. wrong output). This approach is also referred to as behavioral testing. On the other side, should you test the internals (white-box approach), your focus shifts from planning the component outcome to nitty-gritty details, and your tests might break because of minor code refactors although the results are fine. This dramatically increases the maintenance burden

Otherwise: Your test behaves like the child who cries wolf: shouting out loud false-positive cries (e.g., a test fails because a private variable name was changed). Unsurprisingly, people will soon start to ignore the CI notifications until someday a real bug gets ignored…

Code Examples

Anti Pattern Example: A test case is testing the internals for no good reason

class ProductService{
  //this method is only used internally
  //Changing this name will make the tests fail
  calculateVATAdd(priceWithoutVAT){
    return {finalPrice: priceWithoutVAT * 1.2};
    //Changing the result format or key name above will make the tests fail
  }
  //public method
  getPrice(productId){
    const desiredProduct = DB.getProduct(productId);
    const finalPrice = this.calculateVATAdd(desiredProduct.price).finalPrice;
    return finalPrice;
  }
}

it("White-box test: When the internal methods get 0 vat, it return 0 response", async () => {
    //There's no requirement to allow users to calculate the VAT, only show the final price. Nevertheless we falsely insist here to test the class internals
    expect(new ProductService().calculateVATAdd(0).finalPrice).to.equal(0);
});

1.5 Choose the right test doubles: Avoid mocks in favor of stubs and spies

Do: Test doubles are a necessary evil because they are coupled to the application internals, yet some provide an immense value. However, the various techniques were not born equal: some of them, spies and stubs, are focused on testing the requirements but as an inevitable side-effect they also slightly touch the internals. Mocks, on the contrary side, are focused on testing the internals — this brings huge overhead as explained in the bullet “Stick to black box testing”.

Before using test doubles, ask a very simple question: Do I use it to test functionality that appears, or could appear, in the requirements document? If no, it’s a smell of white-box testing.

For example, if you want to test that your app behaves reasonably when the payment service is down, you might stub the payment service and trigger some ‘No Response’ return to ensure that the unit under test returns the right value. This checks our application's behavior/response/outcome under certain scenarios. You might also use a spy to assert that an email was sent when that service is down — this is again a behavioral check which is likely to appear in a requirements doc (“Send an email if payment couldn’t be saved”). On the flip side, if you mock the Payment service and ensure that it was called with the right JavaScript types — then your test is focused on internal things that have nothing to do with the application functionality and are likely to change frequently

Otherwise: Any refactoring of code mandates searching for all the mocks in the code and updating accordingly. Tests become a burden rather than a helpful friend

Code Examples

Anti-pattern example: Mocks focus on the internals

it("When a valid product is about to be deleted, ensure data access DAL was called once, with the right product and right config", async () => {
    //Assume we already added a product
    const dataAccessMock = sinon.mock(DAL);
    //hmmm BAD: testing the internals is actually our main goal here, not just a side-effect
    dataAccessMock.expects("deleteProduct").once().withArgs(DBConfig, theProductWeJustAdded, true, false);
    new ProductService().deletePrice(theProductWeJustAdded);
    dataAccessMock.verify();
});

Doing It Right Example: spies are focused on testing the requirements but as a side-effect are unavoidably touching the internals

it("When a valid product is about to be deleted, ensure an email is sent", async () => {
    //Assume we already added here a product
    const spy = sinon.spy(Emailer.prototype, "sendEmail");
    new ProductService().deletePrice(theProductWeJustAdded);
    //hmmm OK: we deal with internals? Yes, but as a side effect of testing the requirements (sending an email)
    expect(spy.calledOnce).to.be.true;
});

1.6 Don’t “foo”, use realistic input data

Do: Often, production bugs are revealed under some very specific and surprising input — the more realistic the test input is, the greater the chances are to catch bugs early. Use dedicated libraries like Faker to generate pseudo-real data that resembles the variety and form of production data. For example, such libraries can generate realistic phone numbers, usernames, credit cards, company names, and even ‘lorem ipsum’ text. You may also create some tests (on top of unit tests, not instead) that randomize faker's data to stretch your unit under test, or even import real data from your production environment. Want to take it to the next level? See the next bullet (property-based testing).

Otherwise: All your development testing will falsely seem green when you use synthetic inputs like “Foo”, but then production might turn red when a hacker passes in a nasty string like “@3e2ddsf . ##’ 1 fdsfds . fds432 AAAA”

Code Examples

Anti-Pattern Example: A test suite that passes due to non-realistic data

const addProduct = (name, price) =>{
  const productNameRegexNoSpace = /^\S*$/;//no white-space allowed

  if(!productNameRegexNoSpace.test(name))
    return false;//this path never reached due to dull input

    //some logic here
    return true;
};

test("Wrong: When adding new product with valid properties, get successful confirmation", async () => {
    //The string "Foo" which is used in all tests never triggers a false result
    const addProductResult = addProduct("Foo", 5);
    expect(addProductResult).toBe(true);
    //Positive-false: the operation succeeded because we never tried with long
    //product name including spaces
});

Doing It Right Example: Randomizing realistic input

it("Better: When adding new valid product, get successful confirmation", async () => {
    const addProductResult = addProduct(faker.commerce.productName(), faker.random.number());
    //Generated random input: {'Sleek Cotton Computer',  85481}
    expect(addProductResult).to.be.true;
    //Test failed, the random input triggered some path we never planned for.
    //We discovered a bug early!
});

1.7 Test many input combinations using Property-based testing

Do: Typically we choose a few input samples for each test. Even when the input format resembles real-world data (see bullet ‘Don’t foo’), we cover only a few input combinations (method('', true, 1), method("string", false, 0)). However, in production, an API that is called with 5 parameters can be invoked with thousands of different permutations, and one of them might bring our process down (see Fuzz Testing). What if you could write a single test that sends 1,000 permutations of different inputs automatically and catches the inputs for which our code fails to return the right response? Property-based testing is a technique that does exactly that: by sending all the possible input combinations to your unit under test, it increases the serendipity of finding a bug. For example, given a method — addNewProduct(id, name, isDiscount) — the supporting libraries will call this method with many combinations of (number, string, boolean) like (1, “iPhone”, false), (2, “Galaxy”, true). You can run property-based testing using your favorite test runner (Mocha, Jest, etc.) with libraries like js-verify or testcheck (much better documentation). Update: Nicolas Dubien suggests in the comments below to check out fast-check, which seems to offer some additional features and is also actively maintained

Otherwise: Unconsciously, you choose the test inputs that cover only code paths that work well. Unfortunately, this decreases the efficiency of testing as a vehicle to expose bugs

Code Examples

Doing It Right Example: Testing many input permutations with “mocha-testcheck”

require('mocha-testcheck').install();
const {expect} = require('chai');
const faker = require('faker');

describe('Product service', () => {
  describe('Adding new', () => {
    //this will run 100 times with different random properties
    check.it('Add new product with random yet valid properties, always successful',
      gen.int, gen.string, (id, name) => {
        expect(addNewProduct(id, name).status).to.equal('approved');
      });
  })
});

1.8 If needed, use only short & inline snapshots

Do: When there is a need for snapshot testing, use only short and focused snapshots (i.e. 3-7 lines) that are included as part of the test (Inline Snapshot) and not within external files. Keeping this guideline will ensure your tests remain self-explanatory and less fragile.

On the other hand, ‘classic snapshot’ tutorials and tools encourage storing big files (e.g. component rendering markup, an API JSON result) on some external medium and, on every test run, comparing the received result with the saved version. This, for example, can implicitly couple our test to 1,000 lines with 3,000 data values that the test writer never read or reasoned about. Why is this wrong? By doing so, there are 1,000 reasons for your test to fail: it's enough for a single line to change for the snapshot to become invalid, and this is likely to happen a lot. How frequently? For every space, comment or minor CSS/HTML change. Not only this, the test name wouldn't give a clue about the failure, as it just checks that 1,000 lines didn't change; it also encourages the test writer to accept a long document he could never inspect and verify as the desired truth. All of these are symptoms of an obscure and eager test that is not focused and aims to achieve too much

It's worth noting that there are a few cases where long & external snapshots are acceptable - when asserting on schema and not data (extracting out values and focusing on fields), or when the received document rarely changes

Otherwise: A UI test fails. The code seems right, the screen renders perfect pixels, so what happened? Your snapshot testing just found a difference between the original document and the currently received one - a single space character was added to the markdown...

Code Examples

Anti-Pattern Example: Coupling our test to unseen 2000 lines of code

it('TestJavaScript.com is rendered correctly', () => {

//Arrange

//Act
const receivedPage = renderer
  .create(<DisplayPage page="http://www.testjavascript.com">Test JavaScript</DisplayPage>)
  .toJSON();

//Assert
expect(receivedPage).toMatchSnapshot();
//We now implicitly maintain a 2000 lines long document
//every additional line break or comment - will break this test

});

Doing It Right Example: Expectations are visible and focused

it('When visiting TestJavaScript.com home page, a menu is displayed', () => {
//Arrange

//Act
const receivedPage = renderer
  .create(<DisplayPage page="http://www.testjavascript.com">Test JavaScript</DisplayPage>)
  .toJSON();

//Assert

const menu = receivedPage.content.menu;
expect(menu).toMatchInlineSnapshot(`<ul><li>Home</li><li>About</li><li>Contact</li></ul>`);
});

1.9 Avoid global test fixtures and seeds, add data per-test

Do: Going by the golden rule (bullet 0), each test should add and act on its own set of DB rows to prevent coupling and so you can easily reason about the test flow. In reality, this is often violated by testers who seed the DB with data before running the tests (also known as a ‘test fixture’) for the sake of performance improvement. While performance is indeed a valid concern, it can be mitigated (see the “Component testing” bullet); however, test complexity is a far more painful sorrow that should govern other considerations most of the time. Practically, make each test case explicitly add the DB records it needs and act only on those records. If performance becomes a critical concern, a balanced compromise might come in the form of seeding only the suite of tests that are not mutating data (e.g. queries)

Otherwise: A few tests fail, a deployment is aborted, and our team is going to spend precious time now. Do we have a bug? Let's investigate… oh no, it seems that two tests were mutating the same seed data

Code Examples

Anti Pattern Example: tests are not independent and rely on some global hook to feed global DB data

before(async () => {
  //adding sites and admins data to our DB. Where is the data? outside. At some external json or migration framework
  await DB.AddSeedDataFromJson('seed.json');
});
it("When updating site name, get successful confirmation", async () => {
  //I know that site name "portal" exists - I saw it in the seed files
  const siteToUpdate = await SiteService.getSiteByName("Portal");
  const updateNameResult = await SiteService.changeName(siteToUpdate, "newName");
  expect(updateNameResult).to.be.true;
});
it("When querying by site name, get the right site", async () => {
  //I know that site name "portal" exists - I saw it in the seed files
  const siteToCheck = await SiteService.getSiteByName("Portal");
  expect(siteToCheck.name).to.be.equal("Portal"); //Failure! The previous test changed the name :[
});

Doing It Right Example: We can stay within the test, each test acts on its own set of data

it("When updating site name, get successful confirmation", async () => {
  //the test adds fresh new records and acts on those records only
  const siteUnderTest = await SiteService.addSite({
    name: "siteForUpdateTest"
  });
  const updateNameResult = await SiteService.changeName(siteUnderTest, "newName");
  expect(updateNameResult).to.be.true;
});

1.10 Don’t catch errors, expect them

Do: When trying to assert that some input triggers an error, it might look right to use try-catch-finally and asserts that the catch clause was entered. The result is an awkward and verbose test case (example below) that hides the simple test intent and the result expectations

A more elegant alternative is using the one-line dedicated Chai assertion: expect(method).to.throw (or in Jest: expect(method).toThrow()). It's absolutely mandatory to also ensure the exception contains a property that tells the error type; otherwise, given just a generic error, the application won't be able to do much more than show a disappointing message to the user

Otherwise: It will be challenging to infer from the test reports (e.g. CI reports) what went wrong

Code Examples

Anti-pattern Example: A long test case that tries to assert the existence of error with try-catch

it("When no product name, it throws error 400", async () => {
  let errorWeExceptFor = null;
  try {
    const result = await addNewProduct({ name: 'nest' });
  } catch (error) {
    expect(error.code).to.equal('InvalidInput');
    errorWeExceptFor = error;
  }
  expect(errorWeExceptFor).not.to.be.null;
  //if this assertion fails, the tests results/reports will only show
  //that some value is null, there won't be a word about a missing Exception
});

Doing It Right Example: A human-readable expectation that could be understood easily, maybe even by QA or technical PM

it("When no product name, it throws error 400", async () => {
  await expect(addNewProduct()).to.eventually.throw(AppError).with.property('code', "InvalidInput");
});

1.11 Tag your tests

Do: Different tests must run in different scenarios: quick smoke and IO-less tests should run when a developer saves or commits a file, while full end-to-end tests usually run when a new pull request is submitted, etc. This can be achieved by tagging tests with keywords like #cold #api #sanity so you can grep with your testing harness and invoke the desired subset. For example, this is how you would invoke only the sanity test group with Mocha: mocha --grep 'sanity'

Otherwise: Running all the tests, including tests that perform dozens of DB queries, any time a developer makes a small change can be extremely slow and keeps developers away from running tests

Code Examples

Doing It Right Example: Tagging tests as ‘#cold-test’ allows the test runner to execute only fast tests (Cold===quick tests that are doing no IO and can be executed frequently even as the developer is typing)

//this test is fast (no DB) and we're tagging it correspondingly
//now the user/CI can run it frequently
describe('Order service', function() {
  describe('Add new order #cold-test #sanity', function() {
    test('Scenario - no currency was supplied. Expectation - Use the default currency #sanity', function() {
      //code logic here
    });
  });
});

1.12 Other generic good testing hygiene

Do: This post is focused on testing advice that is related to, or at least can be exemplified with, Node.js. This bullet, however, groups a few non-Node-related tips that are well known

Learn and practice TDD principles — they are extremely valuable for many, but don't get intimidated if they don't fit your style; you're not the only one. Consider writing the tests before the code in a red-green-refactor style, ensure each test checks exactly one thing, when you find a bug — before fixing it — write a test that will detect this bug in the future, let each test fail at least once before turning green, start a module by writing quick and simplistic code that satisfies the test, then refactor gradually and take it to a production-grade level, and avoid any dependency on the environment (paths, OS, etc)

Otherwise: You‘ll miss pearls of wisdom that were collected for decades

Section 2: Backend Testing

2.1 Enrich your testing portfolio: Look beyond unit tests and the pyramid

Do: The testing pyramid, though more than 10 years old, is a great and relevant model that suggests three testing types and influences most developers' testing strategy. At the same time, more than a handful of shiny new testing techniques emerged and are hiding in the shadows of the testing pyramid. Given all the dramatic changes that we've seen in the recent 10 years (Microservices, cloud, serverless), is it even possible that one quite-old model will suit all types of applications? Shouldn't the testing world consider welcoming new testing techniques?

Don't get me wrong, in 2019 the testing pyramid, TDD and unit tests are still a powerful technique and are probably the best match for many applications. But like any other model, despite its usefulness, it must be wrong sometimes. For example, consider an IoT application that ingests many events into a message bus like Kafka/RabbitMQ, which then flow into some data warehouse and are eventually queried by some analytics UI. Should we really spend 50% of our testing budget on writing unit tests for an application that is integration-centric and has almost no logic? As the diversity of application types increases (bots, crypto, Alexa skills), greater are the chances of finding scenarios where the testing pyramid is not the best match.

It's time to enrich your testing portfolio and become familiar with more testing types (the next bullets suggest a few ideas), mind models like the testing pyramid but also match testing types to the real-world problems that you're facing ('Hey, our API is broken, let's write consumer-driven contract testing!'), and diversify your tests like an investor that builds a portfolio based on risk analysis — assess where problems might arise and match some prevention measures to mitigate those potential risks

A word of caution: the TDD argument in the software world takes a typical false-dichotomy face, some preach to use it everywhere, others think it’s the devil. Everyone who speaks in absolutes is wrong :]

Otherwise: You’re going to miss some tools with amazing ROI, some like Fuzz, lint, and mutation can provide value in 10 minutes

Code Examples

Doing It Right Example: Cindy Sridharan suggests a rich testing portfolio in her amazing post ‘Testing Microservices — the sane way’

2.2 Component testing might be your best affair

Do: Each unit test covers a tiny portion of the application and it's expensive to cover the whole, whereas end-to-end testing easily covers a lot of ground but is flaky and slower. Why not apply a balanced approach and write tests that are bigger than unit tests but smaller than end-to-end tests? Component testing is the unsung song of the testing world — it provides the best of both worlds: reasonable performance and the possibility to apply TDD patterns, plus realistic and great coverage.

Component tests focus on the Microservice ‘unit’, they work against the API, don’t mock anything which belongs to the Microservice itself (e.g. real DB, or at least the in-memory version of that DB) but stub anything that is external like calls to other Microservices. By doing so, we test what we deploy, approach the app from outwards to inwards and gain great confidence in a reasonable amount of time.

Otherwise: You may spend long days on writing unit tests to find out that you got only 20% system coverage

Code Examples

Doing It Right Example: Supertest allows approaching an Express API in-process (fast and covers many layers)
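
A minimal sketch of such a component-style test with Supertest; the ./app module (an Express app exported without calling listen) and the /user/:id route are assumptions for illustration:

const request = require("supertest");
const { expect } = require("chai");
const app = require("./app"); // hypothetical Express app, exported without .listen()

describe("GET /user/:id", () => {
  it("responds with the user as JSON and status 200", async () => {
    const response = await request(app)
      .get("/user/1")
      .expect("Content-Type", /json/)
      .expect(200);

    expect(response.body.data).to.have.property("id");
  });
});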

2.3 Ensure new releases don’t break the API using consumer-driven contracts

Do: So your Microservice has multiple clients, and you run multiple versions of the service for compatibility reasons (keeping everyone happy). Then you change some field and ‘boom!’, some important client who relies on this field is angry. This is the Catch-22 of the integration world: it's very challenging for the server side to consider all the multiple client expectations; on the other hand, the clients can't perform any testing because the server controls the release dates. Consumer-driven contracts and the framework PACT were born to formalize this process with a very disruptive approach — it is not the server that defines its own test plan; rather, the client defines the tests of the… server! PACT can record the client's expectations and put them in a shared location, a “broker”, so the server can pull the expectations and run them on every build using the PACT library to detect broken contracts — a client expectation that is not met. By doing so, all the server-client API mismatches are caught early during build/CI and might save you a great deal of frustration

Otherwise: The alternatives are exhausting manual testing or deployment fear

Code Examples

Doing It Right Example:
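
As a rough, hypothetical sketch of the consumer side with the Pact JS library (the service names, port, and route below are made up; a real setup also publishes the contract to a broker and runs provider verification):

const path = require("path");
const { Pact } = require("@pact-foundation/pact");

// the consumer declares what it expects from the provider
const provider = new Pact({
  consumer: "WebClient",
  provider: "UsersService",
  port: 1234,
  dir: path.resolve(process.cwd(), "pacts") // generated contract files, ready to publish to a broker
});

describe("Users API contract", () => {
  before(() => provider.setup());
  after(() => provider.finalize());

  it("returns user 1", async () => {
    await provider.addInteraction({
      state: "a user with id 1 exists",
      uponReceiving: "a request for user 1",
      withRequest: { method: "GET", path: "/users/1" },
      willRespondWith: { status: 200, body: { id: 1, name: "Ada" } }
    });

    // the client under test would now call http://localhost:1234/users/1
    await provider.verify(); // fails if the recorded expectations were not met
  });
});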

2.4 Test your middlewares in isolation

Do: Many avoid middleware testing because it represents a small portion of the system and requires a live Express server. Both reasons are wrong — middlewares are small but affect all or most of the requests and can be tested easily as pure functions that get {req, res} JS objects. To test a middleware function, one should just invoke it and spy (using Sinon, for example) on the interaction with the {req, res} objects to ensure the function performed the right action. The library node-mocks-http takes it even further and creates the {req, res} objects along with spying on their behavior. For example, it can assert whether the http status that was set on the res object matches the expectation (see the example below)

Otherwise: A bug in Express middleware === a bug in all or most requests

Code Examples

Doing It Right Example: Testing middleware in isolation without issuing network calls and waking-up the entire Express machine

//the middleware we want to test
const unitUnderTest = require('./middleware')
const httpMocks = require('node-mocks-http');
//Jest syntax, equivalent to describe() & it() in Mocha
test('A request without authentication header, should return http status 403', () => {
  const request = httpMocks.createRequest({
    method: 'GET',
    url: '/user/42',
    headers: {
      authentication: ''
    }
  });
  const response = httpMocks.createResponse();
  unitUnderTest(request, response);
  expect(response.statusCode).toBe(403);
});

2.5 Measure and refactor using static analysis tools

Do: Using static analysis tools helps by giving objective ways to improve code quality and keep your code maintainable. You can add static analysis tools to your CI build to abort when it finds code smells. Its main selling points over plain linting are the ability to inspect quality in the context of multiple files (e.g. detect duplications), perform advanced analysis (e.g. code complexity) and follow the history and progress of code issues. Two examples of tools you can use are Sonarqube (2,600+ stars) and Code Climate (1,500+ stars)

Credit: Keith Holliday

Otherwise: With poor code quality, bugs and performance will always be an issue that no shiny new library or state of the art features can fix

Code Examples

Doing It Right Example: Code Climate, a commercial tool that can identify complex methods:

2.6 Check your readiness for Node-related chaos

Do: Weirdly, most software testing is about logic & data only, but some of the worst things that happen (and are really hard to mitigate) are infrastructural issues. For example, did you ever test what happens when your process memory is overloaded, or when the server/process dies, or whether your monitoring system realizes when the API becomes 50% slower? To test and mitigate these types of bad things, chaos engineering was born at Netflix. It aims to provide awareness, frameworks and tools for testing our app's resiliency against chaotic issues. For example, one of its famous tools, the chaos monkey, randomly kills servers to ensure that our service can still serve users without relying on a single server (there is also a Kubernetes version, kube-monkey, that kills pods). All these tools work at the hosting/platform level, but what if you wish to test and generate pure Node chaos, like checking how your Node process copes with uncaught errors, unhandled promise rejections, v8 memory overloaded with the maximum allowed of 1.7GB, or whether your UX stays satisfactory when the event loop gets blocked often? To address this, I've written node-chaos (alpha), which provides all sorts of Node-related chaotic acts

Otherwise: No escape here, Murphy’s law will hit your production without mercy

Code Examples

Doing It Right Example: Node-chaos can generate all sorts of Node.js pranks so you can test how resilient your app is to chaos

Section 3: Frontend Testing

3.1. Separate UI from functionality

Do: When focusing on testing component logic, UI details become noise that should be extracted, so your tests can focus on pure data. Practically, extract the desired data from the markup in an abstract way that is not too coupled to the graphic implementation, assert only on pure data (vs HTML/CSS graphic details) and disable animations that slow things down. You might get tempted to avoid rendering and test only the back part of the UI (e.g. services, actions, store), but this will result in fictional tests that don't resemble reality and won't reveal cases where the right data doesn't even arrive in the UI

Otherwise: The pure calculated data of your test might be ready in 10ms, but then the whole test will last 500ms (100 tests = 1 min) due to some fancy and irrelevant animation

Code Examples

Doing It Right Example: Separating out the UI details

test('When users-list is flagged to show only VIP, should display only VIP members', () => {
  // Arrange
 const allUsers = [
   { id: 1, name: 'Yoni Goldberg', vip: false },
   { id: 2, name: 'John Doe', vip: true }
  ];

  // Act
  const { getAllByTestId } = render(<UsersList users={allUsers} showOnlyVIP={true}/>);

  // Assert - Extract the data from the UI first
  const allRenderedUsers = getAllByTestId('user').map(uiElement => uiElement.textContent);
  const allRealVIPUsers = allUsers.filter((user) => user.vip).map((user) => user.name);
  expect(allRenderedUsers).toEqual(allRealVIPUsers); //compare data with data, no UI here
});

Anti Pattern Example: Assertions mix UI details and data

test('When flagging to show only VIP, should display only VIP members', () => {
  // Arrange
  const allUsers = [
   {id: 1, name: 'Yoni Goldberg', vip: false },
   {id: 2, name: 'John Doe', vip: true }
  ];

  // Act
  const { getAllByTestId } = render(<UsersList users={allUsers} showOnlyVIP={true}/>);

  // Assert - Mix UI & data in assertion
  expect(getAllByTestId('user')).toEqual('[<li data-testid="user">John Doe</li>]');
});

3.2 Query HTML elements based on attributes that are unlikely to change

Do: Query HTML elements based on attributes that are likely to survive graphic changes, such as form labels, rather than CSS selectors. If the designated element doesn't have such attributes, create a dedicated test attribute like 'test-id-submit-button'. Going this route not only ensures that your functional/logic tests never break because of look & feel changes, it also makes clear to the entire team that this element and attribute are used by tests and shouldn't be removed

Otherwise: You want to test the login functionality that spans many components, logic and services, everything is set up perfectly - stubs, spies, Ajax calls are isolated. All seems perfect. Then the test fails because the designer changed the div CSS class from 'thick-border' to 'thin-border'

Code Examples

Doing It Right Example: Querying an element using a dedicated attribute for testing

// the markup code (part of React component)
<h3>
  <Badge pill className="fixed_badge" variant="dark">
    <span data-testid="errorsLabel">{value}</span> {/* note the data-testid attribute */}
  </Badge>
</h3>

// this example is using react-testing-library
test('Whenever no data is passed to metric, show 0 as default', () => {
  // Arrange
  const metricValue = undefined;

  // Act
  const { getByTestId } = render(<DashboardMetric value={metricValue} />);

  // Assert
  expect(getByTestId('errorsLabel').textContent).toBe('0');
});

Anti-Pattern Example: Relying on CSS attributes

<!-- the markup code (part of React component) -->
<span id="metric" className="d-flex-column">{value}</span> <!-- what if the designer changes the class? -->

// this example is using enzyme
test('Whenever no data is passed, error metric shows zero', () => {
    // ...

    expect(wrapper.find("[className='d-flex-column']").text()).toBe("0");
  });

3.3 Whenever possible, test with a realistic and fully rendered component

Do: Whenever reasonably sized, test your component from the outside like your users do: fully render the UI, act on it and assert that the rendered UI behaves as expected. Avoid all sorts of mocking, partial and shallow rendering - this approach might result in untrapped bugs due to lack of detail and makes maintenance harder as the tests mess with internals (see bullet 'Favour blackbox testing'). If one of the child components significantly slows things down (e.g. animation) or complicates the setup - consider explicitly replacing it with a fake

With all that said, a word of caution is in order: this technique works for small/medium components that contain a reasonable number of child components. Fully rendering a component with too many children will make it hard to reason about test failures (root cause analysis) and might get too slow. In such cases, write only a few tests against that fat parent component and more tests against its children

Otherwise: When poking into a component's internals by invoking its private methods and checking the inner state, you would have to refactor all tests whenever the component implementation is refactored. Do you really have the capacity for this level of maintenance?

Code Examples

Doing It Right Example: Working realistically with a fully rendered component

class Calendar extends React.Component {
  static defaultProps = {showFilters: false}

  render() {
    return (
      <div>
        A filters panel with a button to hide/show filters
        <FiltersPanel showFilter={this.props.showFilters} title='Choose Filters'/>
      </div>
    )
  }
}

//Examples use React & Enzyme
test('Realistic approach: When clicked to show filters, filters are displayed', () => {
    // Arrange
    const wrapper = mount(<Calendar showFilters={false} />)

    // Act
    wrapper.find('button').simulate('click');

    // Assert
    expect(wrapper.text().includes('Choose Filters')).toBe(true);
    // This is how the user will approach this element: by text
})

Anti-Pattern Example: Mocking the reality with shallow rendering

test('Shallow/mocked approach: When clicked to show filters, filters are displayed', () => {
    // Arrange
    const wrapper = shallow(<Calendar showFilters={false} title='Choose Filter'/>)

    // Act
    wrapper.find('filtersPanel').instance().showFilters();
    // Tap into the internals, bypass the UI and invoke a method. White-box approach

    // Assert
    expect(wrapper.find('Filter').props()).toEqual({title: 'Choose Filter'});
    // what if we change the prop name or don't pass anything relevant?
})

3.4 Don't sleep, use your framework's built-in support for async events. Also try to speed things up

Do: In many cases, the unit under test's completion time is simply unknown (e.g. an animation suspends an element's appearance) - in that case, avoid sleeping (e.g. setTimeout) and prefer the more deterministic methods most platforms provide. Some libraries allow awaiting on operations (e.g. Cypress cy.request('url')), others provide an API for waiting, like the @testing-library/dom method wait(expect(element)). Sometimes a more elegant way is to stub the slow resource, an API for example, so that once the response moment becomes deterministic the component can be explicitly re-rendered. When depending on an external component that sleeps, it can be useful to hurry up the clock with fake timers (see the sketch after the code examples below). Sleeping is a pattern to avoid because it forces your test to be slow or risky (when waiting for too short a period). Whenever sleeping and polling are inevitable and there's no support from the testing framework, some npm libraries like wait-for-expect can help with a semi-deterministic solution

Otherwise: When sleeping for a long time, tests will be an order of magnitude slower. When trying to sleep for short periods, the test will fail when the unit under test doesn't respond in time. So it boils down to a trade-off between flakiness and bad performance

Code Examples

Doing It Right Example: E2E API that resolves only when the async operation is done (Cypress)

// using Cypress
cy.intercept('GET', '/api/products').as('products'); // alias the route first (path is illustrative)
cy.get('#show-products').click(); // navigate
cy.wait('@products'); // wait for the aliased route to respond
// this line will get executed only when the route is ready

Doing It Right Example: Testing library that waits for DOM elements

// @testing-library/dom
test('movie title appears', async () => {
    // element is initially not present...

    // wait for appearance
    await wait(() => {
        expect(getByText('the lion king')).toBeInTheDocument()
    })

    // wait for appearance and return the element
    const movie = await waitForElement(() => getByText('the lion king'))
})

Anti-Pattern Example: custom sleep code

test('movie title appears', async () => {
    // element is initially not present...

    // custom wait logic (caution: simplistic, no timeout)
    const interval = setInterval(() => {
        const found = getByText('the lion king');
        if(found){
            clearInterval(interval);
            expect(getByText('the lion king')).toBeInTheDocument();
        }
       
    }, 100);

    // wait for appearance and return the element
    const movie = await waitForElement(() => getByText('the lion king'))
})
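
Doing It Right Example: Hurrying up the clock instead of sleeping - a minimal sketch using Sinon fake timers; the debounce helper is a hypothetical unit under test

const sinon = require('sinon');
const { expect } = require('chai');

// hypothetical unit under test: calls its callback only after one second of silence
const debounce = (fn, ms) => {
  let timer;
  return () => {
    clearTimeout(timer);
    timer = setTimeout(fn, ms);
  };
};

it('When the debounce delay passes, the callback fires - without really waiting', () => {
  const clock = sinon.useFakeTimers(); // take control of setTimeout and friends
  const save = sinon.spy();
  const debouncedSave = debounce(save, 1000);

  debouncedSave();
  clock.tick(1000); // fast-forward one second instantly

  expect(save.calledOnce).to.be.true;
  clock.restore();
});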

3.5. Watch how the content is served over the network

Do: Apply some active monitor that ensures the page load under real network is optimized - this includes any UX concern like slow page load or un-minified bundle. The inspection tools market is no short: basic tools like pingdom, AWS CloudWatch, gcp StackDriver can be easily configured to watch whether the server is alive and response under a reasonable SLA. This only scratches the surface of what might get wrong, hence it's preferable to opt for tools that specialize in frontend (e.g. lighthouse, pagespeed) and perform richer analysis. The focus should be on symptoms, metrics that directly affect the UX, like page load time, meaningful paint, time until the page gets interactive (TTI). On top of that, one may also watch for technical causes like ensuring the content is compressed, time to the first byte, optimize images, ensuring reasonable DOM size, SSL and many others. It's advisable to have these rich monitors both during development, as part of the CI and most important - 24x7 over the production's servers/CDN

Otherwise: It must be disappointing to realize that after such great care for crafting a UI, 100% functional tests passing and sophisticated bundling - the UX is horrible and slow due to CDN misconfiguration

Code Examples

Doing It Right Example: Lighthouse page load inspection report
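
As a minimal starting point, the Lighthouse CLI can be run from a CI step or a scheduled job against the deployed site and the report kept as a build artifact (the URL below is a placeholder):

# run Lighthouse headlessly and save a report for later inspection
npx lighthouse https://www.mysite.com --output=json --output-path=./lighthouse-report.json --chrome-flags="--headless"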

3.6 Stub flaky and slow resources like backend APIs

Do: When coding your mainstream tests (not E2E tests), avoid involving any resource that is beyond your responsibility and control, like a backend API, and use stubs instead (i.e. test doubles). Practically, instead of making real network calls to APIs, use a test double library (like [Sinon](https://sinonjs.org/), Test doubles, etc.) to stub the API response. The main benefit is preventing flakiness - testing or staging APIs are by definition not highly stable and will, from time to time, fail your tests even though YOUR component behaves just fine (a production env was not meant for testing and it usually throttles requests). Stubbing also allows simulating various API behaviors that should drive your component's behavior, such as when no data is found or when the API throws an error. Last but not least, network calls will greatly slow down the tests

Otherwise: The average test runs no longer than a few ms, while a typical API call lasts 100ms+, which makes each test ~20x slower

Code Examples

Doing It Right Example: Stubbing or intercepting API calls

// unit under test
import React, { useState, useEffect } from 'react';
import axios from 'axios';

export default function ProductsList() {
    const [products, setProducts] = useState(false)

    const fetchProducts = async() => {
      const products = await axios.get('api/products')
      setProducts(products);
    }

    useEffect(() => {
      fetchProducts();
    }, []);

  return products ? <div>{products}</div> : <div data-testid='no-products-message'>No products</div>
}

// test
test('When no products exist, show the appropriate message', () => {
    // Arrange
    nock("api")
            .get(/products)
            .reply(404);

    // Act
    const {getByTestId} = render(<ProductsList/>);

    // Assert
    expect(getByTestId('no-products-message')).toBeTruthy();
});

3.7 Have very few end-to-end tests that span the whole system

Do: Although E2E (end-to-end) usually means UI-only testing with a real browser (see bullet 3.6), for others it means tests that stretch the entire system, including the real backend. The latter type of test is highly valuable as it covers integration bugs between frontend and backend that might happen due to a wrong understanding of the exchange schema. It is also an efficient method to discover backend-to-backend integration issues (e.g. Microservice A sends the wrong message to Microservice B) and even to detect deployment failures - there are no backend frameworks for E2E testing that are as friendly and mature as UI frameworks like Cypress and Puppeteer. The downside of such tests is the high cost of configuring an environment with so many components, and mostly their brittleness - given 50 microservices, even if one fails then the entire E2E just failed. For that reason, we should use this technique sparingly and probably have 1-10 of those and no more. That said, even a small number of E2E tests is likely to catch the type of issues they target - deployment & integration faults. It's advisable to run them over a production-like staging environment

Otherwise: The UI team might invest heavily in testing its functionality only to realize very late that the payload returned by the backend (the data schema the UI has to work with) is very different than expected
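
Code Examples

Doing It Right Example: a whole-system E2E sketch - Cypress against a production-like staging backend with no stubbing (the URL and test ids are hypothetical)

it('When a new order is submitted, it shows up in the orders list', () => {
  // no network stubbing here - the request really hits the staging backend
  cy.visit('https://staging.mysite.com/orders');
  cy.get('[data-testid=new-order-button]').click();
  cy.get('[data-testid=order-name]').type('My first order');
  cy.get('[data-testid=submit-order]').click();
  cy.contains('My first order'); // the backend persisted the order and returned it
});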

3.8 Speed-up E2E tests by reusing login credentials

Do: In E2E tests that involve a real backend and rely on a valid user token for API calls, it doesn't pay off to isolate the tests to the level where a user is created and logged in for every test. Instead, log in only once before the test execution starts (i.e. a before-all hook), save the token in some local storage and reuse it across requests. This seems to violate one of the core testing principles - keep the test autonomous without coupling resources. While this is a valid worry, in E2E tests performance is a key concern and issuing 1-3 API requests before each individual test might lead to horrible execution times. Reusing credentials doesn't mean the tests have to act on the same user records - if a test relies on user records (e.g. testing a user's payments history), make sure to generate those records as part of the test and avoid sharing them with other tests. Also remember that the backend can be faked - if your tests focus on the frontend it might be better to isolate it and stub the backend API (see bullet 3.6).

Otherwise: Given 200 test cases and a login that takes 100ms, you'd spend 20 seconds just logging in again and again

Code Examples

Doing It Right Example: Logging-in before-all and not before-each

let authenticationToken;

// happens before ALL tests run
before(() => {
  cy.request('POST', 'http://localhost:3000/login', {
    username: Cypress.env('username'),
    password: Cypress.env('password'),
  })
  .its('body')
  .then((responseFromLogin) => {
    authenticationToken = responseFromLogin.token;
  })
})

// happens before EACH test
beforeEach(() => {
  cy.visit('/home', {
    onBeforeLoad (win) {
      win.localStorage.setItem('token', JSON.stringify(authenticationToken))
    },
  })
})

3.9 Have one E2E smoke test that just travels across the site map

Do: For production monitoring and a development-time sanity check, run a single E2E test that visits all/most of the site's pages and ensures none of them breaks. This type of test brings a great return on investment as it's very easy to write and maintain, yet it can detect any kind of failure including functional, network and deployment issues. Other styles of smoke and sanity checks are not as reliable and exhaustive - some ops teams just ping the home page (production), while some developers run many integration tests which don't discover packaging and browser issues. It goes without saying that the smoke test doesn't replace functional tests; it just aims to serve as a quick smoke detector

Otherwise: Everything might seem perfect, all tests pass, production health-check is also positive but the Payment component had some packaging issue and only the /Payment route is not rendering

Code Examples

Doing It Right Example: Smoke travelling across all pages

it('When doing smoke testing over all pages, should load them all successfully', () => {
    // exemplified using Cypress but can be implemented easily
    // using any E2E suite
    cy.visit('https://mysite.com/home');
    cy.contains('Home');
    cy.visit('https://mysite.com/Login');
    cy.contains('Login');
    cy.visit('https://mysite.com/About');
    cy.contains('About');
  })

3.10 Expose the tests as a live collaborative document

Do: Besides increasing app reliability, tests bring another attractive opportunity to the table - serving as live app documentation. Since tests inherently speak in a less technical, product/UX language, the right tools let them serve as a communication artifact that aligns all peers - developers and their customers. For example, some frameworks allow expressing the flow and expectations (i.e. the test plan) in a human-readable language, so any stakeholder, including product managers, can read, approve and collaborate on the tests, which just became the live requirements document. This technique is also referred to as 'acceptance testing' as it allows the customer to define acceptance criteria in plain language. This is BDD (behavior-driven testing) at its purest. One of the popular frameworks that enables this is Cucumber, which has a JavaScript flavor; see the example below. Another similar yet different opportunity, Storybook, allows exposing UI components as a graphic catalog where one can walk through the various states of each component (e.g. render a grid w/o filters, render that grid with multiple rows or with none, etc.), see what it looks like, and how to trigger that state - this can appeal to product folks as well, but mostly serves as live documentation for developers who consume those components.

Otherwise: After investing top resources on testing, it's just a pity not to leverage this investment and win great value

Code Examples

Doing It Right Example: Describing tests in human language using cucumber-js

// this is how one can describe tests using Cucumber: plain language that allows anyone to understand and collaborate

Feature: Twitter new tweet

  I want to tweet something in Twitter

  @focus
  Scenario: Tweeting from the home page
    Given I open Twitter home
    When I click on "New tweet" button
    And I type "Hello followers!" in the textbox
    And I click on "Submit" button
    Then I see message "Tweet saved"
   

Doing It Right Example: Visualizing our components, their various states and inputs using Storybook
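
For illustration, a minimal Storybook story (Component Story Format) for the UsersList component from bullet 3.1 - a sketch rather than code from an actual catalog:

// UsersList.stories.js - each named export becomes a browsable state in Storybook
import React from 'react';
import UsersList from './UsersList';

export default { title: 'UsersList' };

export const Empty = () => <UsersList users={[]} showOnlyVIP={false} />;
export const OnlyVIPs = () => (
  <UsersList
    users={[{ id: 1, name: 'Yoni Goldberg', vip: true }, { id: 2, name: 'John Doe', vip: false }]}
    showOnlyVIP={true}
  />
);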

3.11 Detect visual issues with automated tools

Do: Set up automated tools to capture UI screenshots when changes are introduced and detect visual issues like content overlapping or breaking. This ensures that not only is the right data prepared but also that the user can conveniently see it. This technique is not widely adopted; our testing mindset leans toward functional tests, but it's the visuals that the user actually experiences, and with so many device types it's very easy to overlook some nasty UI bug. Some free tools can provide the basics - generate and save screenshots for inspection by human eyes. While this approach might be sufficient for small apps, it's as flawed as any other manual testing that demands human labor anytime something changes. On the other hand, it's quite challenging to detect UI issues automatically due to the lack of a clear definition - this is where the field of 'Visual Regression' chimes in, solving this puzzle by comparing the old UI with the latest changes and detecting differences. Some OSS/free tools provide some of this functionality (e.g. wraith, PhantomCSS) but might demand significant setup time. The commercial line of tools (e.g. Applitools, Percy.io) takes it a step further by smoothing the installation and packing advanced features like a management UI, alerting, smart capturing that eliminates 'visual noise' (e.g. ads, animations) and even root cause analysis of the DOM/CSS changes that led to the issue

Otherwise: How good is a content page that displays great content (100% of tests passed) and loads instantly, but half of the content area is hidden?

Code Examples

Anti Pattern Example: A typical visual regression - right content that is served badly


Doing It Right Example: Configuring wraith to capture and compare UI snapshots

# Add as many domains as necessary. Key will act as a label

domains:
  english: "http://www.mysite.com"

# Type screen widths below, here are a couple of examples
screen_widths:
  - 600
  - 768
  - 1024
  - 1280

# Type page URL paths below, here are a couple of examples
paths:
  about:
    path: /about
    selector: '.about'
  subscribe:
    path: /subscribe
    selector: '.subscribe'

Doing It Right Example: Using Applitools to get snapshot comparison and other advanced features

import * as todoPage from '../page-objects/todo-page';

describe('visual validation', () => {
  before(() => todoPage.navigate());
  beforeEach(() => cy.eyesOpen({ appName: 'TAU TodoMVC' }));
  afterEach(() => cy.eyesClose());

  it('should look good', () => {
    cy.eyesCheckWindow('empty todo list');

    todoPage.addTodo('Clean room');
    todoPage.addTodo('Learn javascript');
    cy.eyesCheckWindow('two todos');

    todoPage.toggleTodo(0);
    cy.eyesCheckWindow('mark as completed');
  });
});

Section 4: Measuring Test Effectiveness

4.1 Get enough coverage for being confident, ~80% seems to be the lucky number

Do: The purpose of testing is to gain enough confidence to move fast; obviously, the more code is tested, the more confident the team can be. Coverage is a measure of how many code lines (and branches, statements, etc.) are reached by the tests. So how much is enough? 10-30% is obviously too low to get any sense of the build's correctness; on the other side, 100% is very expensive and might shift your focus from the critical paths to the exotic corners of the code. The long answer is that it depends on many factors, like the type of application - if you're building the next generation of the Airbus A380 then 100% is a must; for a cartoon pictures website 50% might be too much. Although most testing enthusiasts claim that the right coverage threshold is contextual, most of them also mention the number 80% as a rule of thumb (Fowler: "in the upper 80s or 90s") that presumably should satisfy most applications.

Implementation tips: You may want to configure your continuous integration (CI) to have a coverage threshold (Jest link) and stop a build that doesn't meet this standard (it's also possible to configure thresholds per component, see the code example below). On top of this, consider detecting build coverage decreases (when newly committed code has less coverage) - this will push developers to raise, or at least preserve, the amount of tested code. All that said, coverage is only one measure, a quantitative one, and it is not enough to tell the robustness of your testing. It can also be fooled, as illustrated in the next bullets

Otherwise: Confidence and numbers go hand in hand; without really knowing that you tested most of the system there will also be some fear, and fear will slow you down

Code Examples

Example: A typical coverage report

Doing It Right Example: Setting up coverage per component (using Jest)

![alt text](assets/bp-18-code-coverage2.jpeg "Setting up coverage per component (using Jest)")
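
In textual form, such a per-component threshold roughly looks like the sketch below (the paths and numbers are illustrative):

// jest.config.js
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: { branches: 80, functions: 80, lines: 80 },
    './src/components/': { branches: 40 }, // a less critical area gets a lower bar
    './src/api/': { branches: 90, lines: 90 } // critical code gets a stricter bar
  }
};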

4.2 Inspect coverage reports to detect untested areas and other oddities

Do: Some issues sneak just under the radar and are really hard to find using traditional tools. These are not really bugs but more like surprising application behavior that might have a severe impact. For example, often some code areas are never or rarely invoked - you thought the 'PricingCalculator' class is always setting the product price, but it turns out it is actually never invoked even though we have 10,000 products in the DB and many sales… Code coverage reports help you realize whether the application behaves the way you believe it does. Beyond that, they can also highlight which parts of the code are not tested - being told that 80% of the code is tested doesn't say whether the critical parts are covered. Generating reports is easy - just run your app in production or during testing with coverage tracking and then view the colorful reports that highlight how frequently each code area is invoked. If you take the time to glimpse into this data, you might find some gotchas

Otherwise: If you don’t know which parts of your code are left un-tested, you don’t know where the issues might come from

Code Examples

Anti-Pattern Example: What's wrong with this coverage report? It is based on a real-world scenario where we tracked our application usage in QA and found interesting login patterns (Hint: the number of login failures is disproportionate - something is clearly wrong. It finally turned out that some frontend bug kept hitting the backend login API)

![alt text](assets/bp-19-coverage-yoni-goldberg-nodejs-consultant.png "What's wrong with this coverage report?")

4.3 Measure logical coverage using mutation testing

Do: The traditional coverage metric often lies: it may show you 100% code coverage while none of your functions, not even one, returns the right response. How come? It simply measures which lines of code the tests visited, but it doesn't check whether the tests actually tested anything - asserted for the right response. It's like someone traveling for business and showing their passport stamps - this doesn't prove any work was done, only that they visited a few airports and hotels.

Mutation-based testing is here to help by measuring the amount of code that was actually TESTED not just VISITED. Stryker is a JavaScript library for mutation testing and the implementation is really neat:

(1) it intentionally changes the code and "plants bugs". For example, the code newOrder.price===0 becomes newOrder.price!=0. These "bugs" are called mutations

(2) it runs the tests; if all succeed then we have a problem - the tests didn't serve their purpose of discovering bugs, and the mutations are said to have survived. If the tests failed, then great, the mutations were killed.

Knowing that all or most of the mutations were killed gives much higher confidence than traditional coverage and the setup time is similar

Otherwise: You’ll be fooled to believe that 85% coverage means your test will detect bugs in 85% of your code

Code Examples

Anti Pattern Example: 100% coverage, 0% testing

function addNewOrder(newOrder) {
    logger.log(`Adding new order ${newOrder}`);
    DB.save(newOrder);
    Mailer.sendMail(newOrder.assignee, `A new order was placed ${newOrder}`);

    return {approved: true};
}

it("Test addNewOrder, don't use such test names", () => {
    addNewOrder({assignee: "[email protected]", price: 120});
}); //Triggers 100% code coverage, but it doesn't check anything

Doing It Right Example: Stryker reports, a tool for mutation testing, detects and counts the amount of code that is not tested (Mutations)

![alt text](assets/bp-20-yoni-goldberg-mutation-testing.jpeg "Stryker reports, a tool for mutation testing, detects and counts the amount of code that is not tested (Mutations)")
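
To try it out, a minimal Stryker configuration sketch (option names follow the Stryker docs; the globs are illustrative):

// stryker.conf.js
module.exports = {
  mutate: ['src/**/*.js', '!src/**/*.test.js'], // which files get mutated
  testRunner: 'mocha',
  reporters: ['clear-text', 'progress'],
  coverageAnalysis: 'perTest'
};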

4.4 Preventing test code issues with Test linters

Do: A set of ESLint plugins was built specifically for inspecting test code patterns and discovering issues. For example, eslint-plugin-mocha will warn when a test is written at the global level (not a child of a describe() statement) or when tests are skipped, which might lead to a false belief that all tests are passing. Similarly, eslint-plugin-jest can, for example, warn when a test has no assertions at all (not checking anything)

Otherwise: Seeing 90% code coverage and 100% green tests will make your face wear a big smile only until you realize that many tests aren’t asserting for anything and many test suites were just skipped. Hopefully, you didn’t deploy anything based on this false observation

Code Examples

Anti Pattern Example: A test case full of errors, luckily all are caught by Linters

describe("Too short description", () => {
const userToken = userService.getDefaultToken() // error:no-setup-in-describe, use hooks (sparingly) instead
it("Some description", () => {});//
error: valid-test-description. Must include the word "Should" + at least 5 words
});

it.skip("Test name", () => {// *error:no-skipped-tests, error:error:no-global-tests. Put tests only under describe or suite
expect("somevalue"); // error:no-assert
});

it("Test name", () => {*//error:no-identical-title. Assign unique titles to tests
});

Section 5: CI and Other Quality Measures

5.1 Enrich your linters and abort builds that have linting issues

Do: Linters are a free lunch; with a 5-minute setup you get, for free, an auto-pilot guarding your code and catching significant issues as you type. Gone are the days when linting was about cosmetics (no semi-colons!). Nowadays, linters can catch severe issues like errors that are not thrown correctly and losing information. On top of your basic set of rules (like ESLint standard or Airbnb style), consider including some specialized linters: eslint-plugin-chai-expect can discover tests without assertions, eslint-plugin-promise can discover promises with no resolve (your code will never continue), eslint-plugin-security can discover eager regex expressions that might be used for DoS attacks, and eslint-plugin-you-dont-need-lodash-underscore is capable of warning when the code uses utility library methods that are part of the V8 core methods, like Lodash._map(…)

Otherwise: Consider a rainy day where your production keeps crashing but the logs don't display the error stack trace. What happened? Your code mistakenly threw a non-error object and the stack trace was lost - a good reason for banging your head against a brick wall. A 5-minute linter setup could detect this typo and save your day

Code Examples

Anti Pattern Example: The wrong Error object is thrown mistakenly, no stack-trace will appear for this error. Luckily, ESLint catches the next production bug
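
To illustrate the kind of mistake meant here, a small sketch (the productToAdd variable is hypothetical) - ESLint's no-throw-literal rule flags the first variant:

// Anti-pattern: throwing a plain string - no Error object, so the stack trace is lost
if (!productToAdd) {
  throw ('nothing to add'); // flagged by no-throw-literal
}

// Better: throw a real Error so the stack trace is preserved
if (!productToAdd) {
  throw new Error('nothing to add');
}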

5.2 Shorten the feedback loop with local developer-CI

Do: Using a CI with shiny quality inspections like testing, linting, vulnerability checks, etc.? Help developers run this pipeline locally as well, to solicit instant feedback and shorten the feedback loop. Why? An efficient testing process constitutes many iterative loops: (1) try-outs -> (2) feedback -> (3) refactor. The faster the feedback is, the more improvement iterations a developer can perform per module and the better the results. On the flip side, when the feedback arrives late, fewer improvement iterations can be packed into a single day, and the team might have already moved on to another topic/task/module and might not be up for refining that module.

Practically, some CI vendors (e.g. the CircleCI local CLI) allow running the pipeline locally. Some commercial tools like wallaby provide highly valuable testing insights as you type (no affiliation). Alternatively, you may just add an npm script to package.json that runs all the quality commands (e.g. test, lint, vulnerabilities) - use a tool like concurrently for parallelization and return a non-zero exit code if one of the tools failed. Now the developer only has to invoke one command - e.g. 'npm run quality' - to get instant feedback. Consider also aborting a commit if the quality check failed, using a git hook (husky can help)

Otherwise: When the quality results arrive the day after the code was written, testing doesn't become a fluent part of development but rather an after-the-fact formal artifact

Code Examples

Doing It Right Example: npm scripts that perform code quality inspection, all are run in parallel on demand or when a developer is trying to push new code

"scripts": {
    "inspect:sanity-testing": "mocha /--test.js --grep "sanity"",
    "inspect:lint": "eslint .",
    "inspect:vulnerabilities": "npm audit",
    "inspect:license": "license-checker --failOn GPLv2",
    "inspect:complexity": "plato .",

    "inspect:all": "concurrently -c "bgBlue.bold,bgMagenta.bold,yellow" "npm:inspect:quick-testing" "npm:inspect:lint" "npm:inspect:vulnerabilities" "npm:inspect:license""
  },
  "husky": {
    "hooks": {
      "precommit": "npm run inspect:all",
      "prepush": "npm run inspect:all"
    }
}

5.3 Perform e2e testing over a true production-mirror

Do: End-to-end (e2e) testing is the main challenge of every CI pipeline - creating an identical, ephemeral production mirror on the fly with all the related cloud services can be tedious and expensive. Finding the best compromise is your game: Docker-compose allows crafting an isolated dockerized environment with identical containers using a single plain text file, but the backing technology (e.g. networking, deployment model) is different from real-world production. You may combine it with 'AWS Local' to work with a stub of the real AWS services. If you went serverless, multiple frameworks like serverless and AWS SAM allow the local invocation of FaaS code.

The huge Kubernetes ecosystem has yet to formalize a standard, convenient tool for local and CI mirroring, though many new tools are launched frequently. One approach is running a 'minimized Kubernetes' using tools like Minikube and MicroK8s, which resemble the real thing but come with less overhead. Another approach is testing over a remote 'real Kubernetes': some CI providers (e.g. Codefresh) have native integration with Kubernetes environments and make it easy to run the CI pipeline over the real thing, while others allow custom scripting against a remote Kubernetes cluster.

Otherwise: Using different technologies for production and testing demands maintaining two deployment models and keeps the developers and the ops team separated

Code Examples

Example: a CI pipeline that generates Kubernetes cluster on the fly (Credit: Dynamic-environments Kubernetes)

deploy:
  stage: deploy
  image: registry.gitlab.com/gitlab-examples/kubernetes-deploy
  script:
    - ./configureCluster.sh $KUBE_CA_PEM_FILE $KUBE_URL $KUBE_TOKEN
    - kubectl create ns $NAMESPACE
    - kubectl create secret -n $NAMESPACE docker-registry gitlab-registry --docker-server="$CI_REGISTRY" --docker-username="$CI_REGISTRY_USER" --docker-password="$CI_REGISTRY_PASSWORD" --docker-email="$GITLAB_USER_EMAIL"
    - mkdir .generated
    - echo "$CI_BUILD_REF_NAME-$CI_BUILD_REF"
    - sed -e "s/TAG/$CI_BUILD_REF_NAME-$CI_BUILD_REF/g" templates/deals.yaml | tee ".generated/deals.yaml"
    - kubectl apply --namespace $NAMESPACE -f .generated/deals.yaml
    - kubectl apply --namespace $NAMESPACE -f templates/my-sock-shop.yaml
  environment:
    name: test-for-ci
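
For comparison, a bare-bones docker-compose sketch that spins up an ephemeral, production-like environment for local or CI runs (image names, ports and credentials are placeholders):

version: "3"
services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DB_HOST=db
    depends_on:
      - db
  db:
    image: postgres:11
    environment:
      - POSTGRES_PASSWORD=test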

5.4 Parallelize test execution

Do: When done right, testing is your 24/7 friend, providing almost instant feedback. In practice, executing 500 CPU-bound unit tests on a single thread can take too long. Luckily, modern test runners and CI platforms (like Jest, AVA and Mocha extensions) can parallelize the tests into multiple processes and achieve a significant improvement in feedback time. Some CI vendors also parallelize tests across containers (!), which shortens the feedback loop even further. Whether locally over multiple processes, or over some cloud CLI using multiple machines - parallelizing demands keeping the tests autonomous, as each might run in a different process

Otherwise: Getting test results an hour after pushing new code, when you're already coding the next feature, is a great recipe for making testing less relevant

Code Examples

Doing It Right Example: Mocha parallel & Jest easily outrun the traditional Mocha thanks to testing parallelization (Credit: JavaScript Test-Runners Benchmark)
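
For example, Mocha 8+ and Jest can parallelize out of the box - a sketch of npm scripts that switch it on (the glob and worker count are illustrative):

"scripts": {
  "test:mocha-parallel": "mocha --parallel --jobs 4 'test/**/*.test.js'",
  "test:jest-parallel": "jest --maxWorkers=50%"
}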

5.5 Stay away from legal issues using license and plagiarism check

Do: Licensing and plagiarism issues are probably not your main concern right now, but why not tick this box as well in 10 minutes? A bunch of npm packages like license-check and plagiarism-check (commercial with a free plan) can easily be baked into your CI pipeline and inspect for issues like dependencies with restrictive licenses or code that was copy-pasted from Stack Overflow and apparently violates some copyrights

Otherwise: Unintentionally, developers might use packages with inappropriate licenses or copy paste commercial code and run into legal issues

Code Examples

Doing It Right Example:

//install license-checker in your CI environment or also locally
npm install -g license-checker

//ask it to scan all licenses and fail with an exit code other than 0 if it finds an unauthorized license. The CI system should catch this failure and stop the build
license-checker --summary --failOn BSD

5.6 Constantly inspect for vulnerable dependencies

Do: Even the most reputable dependencies, such as Express, have known vulnerabilities from time to time. This can easily be tamed using community tools such as npm audit, or commercial tools like snyk (which also offers a free community version). Both can be invoked from your CI on every build

Otherwise: Keeping your code clean from vulnerabilities without dedicated tools will require you to constantly follow online publications about new threats - quite tedious

Code Examples

Example: NPM Audit result
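
A minimal way to wire this into the CI is a single command that fails the build when vulnerabilities at or above a chosen severity are found (the threshold is up to you):

# exits with a non-zero code when vulnerabilities of the given level (or higher) are found
npm audit --audit-level=high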

5.7 Automate dependency updates

Do: Yarn and npm's recent introduction of package-lock.json presented a serious challenge (the road to hell is paved with good intentions) - by default, packages no longer get updated. Even a team running many fresh deployments with 'npm install' & 'npm update' won't get any new updates. This leads to subpar dependent package versions at best, or to vulnerable code at worst. Teams now rely on developers' goodwill and memory to manually update package.json, or use tools like ncu manually. A more reliable way is to automate the process of getting the most reliable dependency versions; although there are no silver-bullet solutions yet, there are two possible automation roads: (1) The CI can fail builds that have obsolete dependencies - using tools like 'npm outdated' or 'npm-check-updates (ncu)'. Doing so will push developers to update dependencies. (2) Use commercial tools that scan the code and automatically send pull requests with updated dependencies. One interesting remaining question is what the dependency update policy should be - updating on every patch generates too much overhead, while updating right when a major version is released might point to an unstable version (many packages are found vulnerable in the very first days after being released, see the eslint-scope incident). An efficient update policy may allow some 'vesting period' - let the code lag behind @latest for some time and a few versions before considering the local copy obsolete (e.g. the local version is 1.3.1 and the repository version is 1.3.8)

Otherwise: Your production will run packages that have been explicitly tagged by their author as risky

Code Examples

Example: ncu can be used manually or within a CI pipeline to detect to what extent the code lags behind the latest versions
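
For instance, a sketch of using it as a CI gate (the flag makes the command exit non-zero when updates are available; plain 'npm outdated' behaves similarly):

# list available updates and fail the step if the code lags behind
npx npm-check-updates --errorLevel 2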

5.8 Other, non-Node related, CI tips

Do: This post is focused on testing advice that is related to, or at least can be exemplified with, Node.js. This bullet, however, groups a few well-known tips that are not Node-related

  1. Use a declarative syntax. This is the only option for most vendors, but older versions of Jenkins allow using code or a UI
  2. Opt for a vendor that has native Docker support
  3. Fail early, run your fastest tests first. Create a ‘Smoke testing’ step/milestone that groups multiple fast inspections (e.g. linting, unit tests) and provide snappy feedback to the code committer
  4. Make it easy to skim-through all build artifacts including test reports, coverage reports, mutation reports, logs, etc
  5. Create multiple pipelines/jobs for each event, and reuse steps between them. For example, configure a job for feature branch commits and a different one for master PRs. Let each reuse logic using shared steps (most vendors provide some mechanism for code reuse)
  6. Never embed secrets in a job declaration, grab them from a secret store or from the job’s configuration
  7. Explicitly bump version in a release build or at least ensure the developer did so
  8. Build only once and perform all the inspections over the single build artifact (e.g. Docker image)
  9. Test in an ephemeral environment that doesn’t drift state between builds. Caching node_modules might be the only exception

Otherwise: You'll miss years of wisdom

5.9 Build matrix: Run the same CI steps using multiple Node versions

Do: Quality checking is about serendipity: the more ground you cover, the luckier you get at detecting issues early. When developing reusable packages or running a multi-customer production with various configurations and Node versions, the CI must run the pipeline of tests over all the permutations of configurations. For example, assuming we use MySQL for some customers and Postgres for others - some CI vendors support a feature called 'Matrix' which allows running the suite of tests against all permutations of MySQL, Postgres and multiple Node versions like 8, 9 and 10. This is done using configuration only, without any additional effort (assuming you have testing or any other quality checks). Other CIs that don't support Matrix might have extensions or tweaks to allow it

Otherwise: So after doing all that hard work of writing tests, are we going to let bugs sneak in only because of configuration issues?

Code Examples

Example: Using Travis (CI vendor) build definition to run the same test over multiple Node versions

language: node_js
node_js:
  - "7"
  - "6"
  - "5"
  - "4"
install:
  - npm install
script:
  - npm run test

Thanks for reading


Further reading about JavaScript

The Complete JavaScript Course 2019: Build Real Projects!

Vue JS 2 - The Complete Guide (incl. Vue Router & Vuex)

JavaScript Bootcamp - Build Real World Applications

The Web Developer Bootcamp

JavaScript Programming Tutorial - Full JavaScript Course for Beginners

New ES2019 Features Every JavaScript Developer Should Know

Best JavaScript Frameworks, Libraries and Tools to Use in 2019

React vs Angular vs Vue.js by Example

The Complete Node.js Developer Course (3rd Edition)

Angular & NodeJS - The MEAN Stack Guide

NodeJS - The Complete Guide (incl. MVC, REST APIs, GraphQL)

Best 50 Nodejs interview questions from Beginners to Advanced in 2019

Node.js 12: The future of server-side JavaScript

An Introduction to Node.js Design Patterns

Introduction to Unit and Integration Testing for Node.js Applications

Introduction to Unit and Integration Testing for Node.js Applications

This post is going to introduce unit and integration testing of Node.js applications.

With any application, testing is an integral part of the development process.

Building tests with your application enables you to:

  • Quickly verify that changes to a project do not break expected behavior
  • Act as pseudo documentation as path flows are documented
  • Easily demonstrate application behaviors
  • Quickly take a review of your application’s health and codebase


We’re going to be reviewing my Express.js API ms-starwars, which is on GitHub here. I recommend doing a git clone of my project and following along as I discuss different ways to unit test the application.

An overview of testing

When testing with Node.js, you'll typically use a test runner (Mocha), an assertion library (Chai), and a library for spies, stubs, and mocks (Sinon) - the tools covered in the sections below.

The term testing also typically refers to the following:

  • unit testing – testing your application code and logic. This is anything that your code actually does and is not reliant upon external services and data to accomplish.
  • integration testing – testing your application as it connects with services inside (or outside) of your application. This could include connecting different parts of your application, or connecting two different applications in a larger umbrella project.
  • regression testing – testing your application behaviors after a set of changes have been made. This is typically something you do before major product releases.
  • end-to-end testing – testing the full end-to-end flow of your project. This includes external HTTP calls and complete flows within your project.

Beyond these four, there are also other forms of testing specific to applications and frameworks.

In this post, we’re going to focus on unit and integration testing.

First, let's discuss the different frameworks we'll be using.

What is mocha?

Mocha is a test runner that enables you to exercise your Node.js code. It works well with any Node.js project and follows a Jasmine-like syntax, similar to the following (borrowed from the Mocha getting started docs):

describe('Array', function() {
  describe('#indexOf()', function() {
    it('should return -1 when the value is not present', function() {
      assert.equal([1, 2, 3].indexOf(4), -1);
    });
  });
});

With mocha, you can also include the use of assertion libraries like assert, expect, and others.

Mocha also has many features within the test runner itself. I highly recommend reading A quick and complete guide to Mocha testing by Glad Chinda for more information.

What is chai and chai-http?

Chai offers an assertion library for Node.js.

Chai includes basic assertions that you can use to verify behavior. Some of the more popular ones include:

  • should
  • expect
  • assert

These can be used in your tests to evaluate conditions of code you’re testing, such as the following borrowed from chai’s homepage:

chai.should();

foo.should.be.a('string');
foo.should.equal('bar');
foo.should.have.lengthOf(3);
tea.should.have.property('flavors')
  .with.lengthOf(3);

Chai-http is a plugin that lets you spin up your application and test its endpoints directly over HTTP:

describe('GET /films-list', () => {
  it('should return a list of films when called', done => {
    chai
      .request(app)
      .get('/films-list')
      .end((err, res) => {
        res.should.have.status(200);
        expect(res.body).to.deep.equal(starwarsFilmListMock);
        done();
      });
  });
});

With chai-http, the test runner starts your application, calls the requested endpoint, and then brings it down all in one command.

This is really powerful and helps with integration testing of your application.

What is sinon?

In addition to having a test runner and assertions, testing also requires spying, stubbing, and mocking. Sinon provides a framework for spies, stubs, and mocks in your Node.js tests.

Sinon is fairly straightforward, and you just use the associated spy, stub, and mock objects for different tests in your application.

A simple test with some stubs from sinon would look like this:

describe('Station Information', function() {
  afterEach(function() {
    wmata.stationInformation.restore();
  });
  it('should return station information when called', async function() {
    const lineCode = 'SV';
    const stationListStub = sinon
      .stub(wmata, 'stationInformation')
      .withArgs(lineCode)
      .returns(wmataStationInformationMock);
    const response = await metro.getStationInformation(lineCode);
    expect(response).to.deep.equal(metroStationInformationMock);
  });
});

I know there's a lot going on here, but let's just pay attention to this:

const stationListStub = sinon
      .stub(wmata, 'stationInformation')
      .withArgs(lineCode)
      .returns(wmataStationInformationMock);

This is creating a stub for the wmata service’s method stationInformation with args lineCode that will return the mock at wmataStationInformationMock.

This lets you build out basic stubs so that the test runner will use your stubs in lieu of methods it runs across. This is good because you can isolate behavior.

Sinon can do a lot more than just stubs.
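
For instance, a spy simply records how a function was called - a minimal sketch (the mailer object is hypothetical):

const sinon = require('sinon');
const { expect } = require('chai');

const mailer = { send: () => {} }; // hypothetical dependency
const sendSpy = sinon.spy(mailer, 'send');

mailer.send('hello@example.com', 'Welcome!');

expect(sendSpy.calledOnce).to.be.true;
expect(sendSpy.calledWith('hello@example.com', 'Welcome!')).to.be.true;

sendSpy.restore();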

For more on testing with sinon, I recommend reading How to best use Sinon with Chai by Leighton Wallace.

Demo

Before I dive into actually building tests, I want to give a brief description of my project.

ms-starwars is actually an orchestration of API calls to the Star Wars API (SWAPI), which is available here. SWAPI is a very good API unto itself, and provides a wealth of data on a large part of the Star Wars canon.

What’s even cooler is that SWAPI is community-driven. So if you see somewhere missing information, you can open a PR to their project here and add it yourself.

When you call endpoints for SWAPI, the API returns additional endpoints you can call to get more information. This keeps the REST calls somewhat lightweight.

Here’s a response from the film endpoint:

{
    "title": "A New Hope",
    "episode_id": 4,
    "opening_crawl": "It is a period of civil war.\r\nRebel spaceships, striking\r\nfrom a hidden base, have won\r\ntheir first victory against\r\nthe evil Galactic Empire.\r\n\r\nDuring the battle, Rebel\r\nspies managed to steal secret\r\nplans to the Empire's\r\nultimate weapon, the DEATH\r\nSTAR, an armored space\r\nstation with enough power\r\nto destroy an entire planet.\r\n\r\nPursued by the Empire's\r\nsinister agents, Princess\r\nLeia races home aboard her\r\nstarship, custodian of the\r\nstolen plans that can save her\r\npeople and restore\r\nfreedom to the galaxy....",
    "director": "George Lucas",
    "producer": "Gary Kurtz, Rick McCallum",
    "release_date": "1977-05-25",
    "characters": [
        "https://swapi.co/api/people/1/",
        "https://swapi.co/api/people/2/",
        "https://swapi.co/api/people/3/",
        "https://swapi.co/api/people/4/",
        "https://swapi.co/api/people/5/",
        "https://swapi.co/api/people/6/",
        "https://swapi.co/api/people/7/",
        "https://swapi.co/api/people/8/",
        "https://swapi.co/api/people/9/",
        "https://swapi.co/api/people/10/",
        "https://swapi.co/api/people/12/",
        "https://swapi.co/api/people/13/",
        "https://swapi.co/api/people/14/",
        "https://swapi.co/api/people/15/",
        "https://swapi.co/api/people/16/",
        "https://swapi.co/api/people/18/",
        "https://swapi.co/api/people/19/",
        "https://swapi.co/api/people/81/"
    ],
    "planets": [
        "https://swapi.co/api/planets/2/",
        "https://swapi.co/api/planets/3/",
        "https://swapi.co/api/planets/1/"
    ],
    "starships": [
        "https://swapi.co/api/starships/2/",
        "https://swapi.co/api/starships/3/",
        "https://swapi.co/api/starships/5/",
        "https://swapi.co/api/starships/9/",
        "https://swapi.co/api/starships/10/",
        "https://swapi.co/api/starships/11/",
        "https://swapi.co/api/starships/12/",
        "https://swapi.co/api/starships/13/"
    ],
    "vehicles": [
        "https://swapi.co/api/vehicles/4/",
        "https://swapi.co/api/vehicles/6/",
        "https://swapi.co/api/vehicles/7/",
        "https://swapi.co/api/vehicles/8/"
    ],
    "species": [
        "https://swapi.co/api/species/5/",
        "https://swapi.co/api/species/3/",
        "https://swapi.co/api/species/2/",
        "https://swapi.co/api/species/1/",
        "https://swapi.co/api/species/4/"
    ],
    "created": "2014-12-10T14:23:31.880000Z",
    "edited": "2015-04-11T09:46:52.774897Z",
    "url": "https://swapi.co/api/films/1/"
}

Additional API endpoints are returned for various areas including characters, planets, etc.

To get all the data about a specific film, you’d have to call:

  • the film endpoint
  • all endpoints for characters
  • all endpoints for planets
  • all endpoints for starships
  • all endpoints for vehicles
  • all endpoints for species

I built ms-starwars as an attempt to bundle HTTP calls to the returned endpoints, and enable you to make single requests and get associated data for any of the endpoints.

To setup this orchestration, I created Express.js routes and associated controllers.

I also added a cache mechanism for each of the SWAPI calls. This boosted my API's performance, since the bundled HTTP calls no longer incur the latency of making multiple real HTTP calls every time.

Within the project, the unit tests are available at /test/unit. The integration tests are available at test/integration. You can run them with my project's npm scripts:
npm run unit-tests and npm run integration-tests.

In the next sections, we’ll walk through writing unit and integration tests. Then we’ll cover some considerations and optimizations you can make.

Let’s get to the code.

Unit tests

First, let’s create a new file in the sample project at /test/firstUnit.js

At the top of your test, let’s add the following:

const sinon = require('sinon');
const chai = require('chai');
const expect = chai.expect;
const swapi = require('../apis/swapi');
const starwars = require('../controllers/starwars');
// swapi mocks
const swapiFilmListMock = require('../mocks/swapi/film_list.json');
// starwars mocks
const starwarsFilmListMock = require('../mocks/starwars/film_list.json');

What’s this doing? Well, the first several lines are pulling in the project’s dependencies:

const sinon = require('sinon');
const chai = require('chai');
const expect = chai.expect;
const swapi = require('../apis/swapi');
const starwars = require('../controllers/starwars');
  • Pulling in the sinon framework.
  • Pulling in the chai framework.
  • Defining expect so we can use it for assertions.
  • Pulling in the swapi API service that is defined in the project. These are direct calls to the SWAPI endpoints.
  • Pulling in the starwars controllers that are defined in the project. These are orchestrations of the SWAPI endpoints.

Next, you’ll notice all the mocks pulled in:

// swapi mocks
const swapiFilmListMock = require('../mocks/swapi/film_list.json');
// starwars mocks
const starwarsFilmListMock = require('../mocks/starwars/film_list.json');

These are JSON responses from both the SWAPI endpoints and results returned from the project’s controllers.

Since our unit tests are just testing our actual code and not dependent on the actual flows, mocking data enables us to just test the code without relying on the running services.

Next, let’s define our first test with the following:

describe('Film List', function() {
  afterEach(function() {
    swapi.films.restore();
  });
  it('should return all the star wars films when called', async function() {
    sinon.stub(swapi, 'films').returns(swapiFilmListMock);
    const response = await starwars.filmList();
    expect(response).to.deep.equal(starwarsFilmListMock);
  });
});

Here, the describe block is defining an occurrence of the test.

You would normally wrap one or more it blocks inside a describe. This lets you group tests: describe can be thought of as the name for the group and it as the individual tests that will be run.

You’ll also notice we have an afterEach function.

There are several of these type of functions that work with Mocha.

Typically, the ones you'll see most often are afterEach and beforeEach. These are basically lifecycle hooks that enable you to set up data for a test, and then free resources after a test is run.

There’s a swapi.films.restore() call within the afterEach.

This frees up the SWAPI films endpoint for stubbing and future tests. This is necessary since the starwars controller that I’m testing is calling the SWAPI films endpoint.

In the it block, you'll notice there is a definition followed by an async function. The async keyword here indicates to the runner that there is asynchronous behavior to be tested. This enables us to use the await call that you see on line 7.

Finally, we get to the test itself.

First, we define a stub with:

sinon.stub(swapi, 'films').returns(swapiFilmListMock);

This stub signals to Mocha to use the mock file whenever the films method is called on the swapi API service.

To free up this method in your test runner, you'll need to call restore.

This isn’t really a problem for us here since we’re just running one test, but if you had many tests defined then you’d want to do this. I’ve included it here just to indicate convention.

Finally, we have our actual method call and an expect to check the result:

const response = await starwars.filmList();
expect(response).to.deep.equal(starwarsFilmListMock);

When you run this test, it should call the filmList controller, and return what would be expected with the starwarsFilmListMock response.

Let’s run it.

Install Mocha globally in your terminal with:

npm i mocha --global

Then, run the test with:

mocha test/firstUnit

You should see the following:

On a high level, this is what you can expect with any unit tests.

Notice that we did the following:

  1. Arrange – we set up our data by creating a stub
  2. Act – we made a call to our controller method to act on the test
  3. Assert – we asserted that the response from the controller equals our saved mock value

This pattern of Arrange, Act, and Assert is a good thing to keep in mind when running any test.

A more complicated unit test

This first test showed you the basic set-up — you now have a basic understanding of arrange, act, and assert.

Let’s consider a more complicated test:

describe('Film', function() {
  afterEach(function() {
    // restore every stubbed swapi method so later tests can stub them again
    swapi.film.restore();
    swapi.people.restore();
    swapi.planet.restore();
    swapi.starship.restore();
    swapi.vehicle.restore();
    swapi.species.restore();
  });
  it('should return all the metadata for a film when called', async function() {
    const filmId = '1';
    const peopleId = '1';
    const planetId = '1';
    const starshipId = '2';
    const vehicleId = '4';
    const speciesId = '1';
    sinon
      .stub(swapi, 'film')
      .withArgs(filmId)
      .resolves(swapiFilmMock);
    sinon
      .stub(swapi, 'people')
      .withArgs(peopleId)
      .resolves(swapiPeopleMock);
    sinon
      .stub(swapi, 'planet')
      .withArgs(planetId)
      .resolves(swapiPlanetMock);
    sinon
      .stub(swapi, 'starship')
      .withArgs(starshipId)
      .resolves(swapiStarshipMock);
    sinon
      .stub(swapi, 'vehicle')
      .withArgs(vehicleId)
      .resolves(swapiVehicleMock);
    sinon
      .stub(swapi, 'species')
      .withArgs(speciesId)
      .resolves(swapiSpeciesMock);
    const response = await starwars.film(filmId);
    expect(response).to.deep.equal(starwarsFilmMock);
  });
});

Wow, that’s a lot of stubs! But it’s not as scary as it looks — this test basically does the same thing as our previous example.

I wanted to highlight this test because it uses multiple stubs (with args).

As I mentioned before, ms-starwars bundles several HTTP calls under the hood. A single call to the film endpoint actually makes calls to film, people, planet, starship, vehicle, and species, so all of these stubs and mocks are needed.
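If restoring each method by hand gets tedious, Sinon’s sandbox API can restore everything in one call. Here’s a minimal sketch of that alternative (this isn’t how the project is written, and the withArgs matching is dropped for brevity):

const sandbox = sinon.createSandbox();

describe('Film (sandbox variant)', function() {
  afterEach(function() {
    // one call restores every method stubbed through the sandbox
    sandbox.restore();
  });
  it('should return all the metadata for a film when called', async function() {
    sandbox.stub(swapi, 'film').resolves(swapiFilmMock);
    sandbox.stub(swapi, 'people').resolves(swapiPeopleMock);
    sandbox.stub(swapi, 'planet').resolves(swapiPlanetMock);
    sandbox.stub(swapi, 'starship').resolves(swapiStarshipMock);
    sandbox.stub(swapi, 'vehicle').resolves(swapiVehicleMock);
    sandbox.stub(swapi, 'species').resolves(swapiSpeciesMock);
    const response = await starwars.film('1');
    expect(response).to.deep.equal(starwarsFilmMock);
  });
});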

Generally speaking, this is what your unit tests will look like. You can apply the same approach to PUT, POST, and DELETE calls.

The key is that you’re testing your own code. Notice that we used stubs for the external calls and mock JSON files for the return values.

We were testing the application logic, and not concerned with the application working in its entirety. Tests that test full flows are typically integration or end-to-end tests.

Integration tests

For the unit tests, we were just focused on testing the code itself without being concerned with end-to-end flows.

We were only focused on making sure the application methods have the expected outputs from the expected input.

With integration tests (and also for end-to-end tests), we’re testing flows.

Integration tests are important because they make sure individual components of your application are able to work together.

This matters with microservices because you’ll have different classes that together make up a single service.

You might also have a single project with multiple services, and you’d write integration tests to make sure they work well together.

For the ms-starwars project, we’re just going to make sure that the orchestration provided by the controllers works with the individual API calls to the SWAPI endpoints.

Go ahead and create a new file at test/firstIntegration.js.

Add the following to the top of the file:

const chai = require('chai');
const chaiHttp = require('chai-http');
chai.use(chaiHttp);
const app = require('../server');
const should = chai.should();
const expect = chai.expect;
// starwars mocks
const starwarsFilmListMock = require('../mocks/starwars/film_list.json');

What is this doing?

First, we’re defining an instance of chai and chai-http. Next, we’re defining an instance of the actual app itself from the server.js file.

Then we’re pulling in should and expect, and finally we’re pulling in a mock that we’re going to use to compare the response.

Let’s build our test:

describe('GET /films-list', () => {
  it('should return a list of films when called', done => {
    chai
      .request(app)
      .get('/films-list')
      .end((err, res) => {
        res.should.have.status(200);
        expect(res.body).to.deep.equal(starwarsFilmListMock);
        done();
      });
  });
});

So what’s this doing?

Well, this is similar to the syntax we saw before: a describe block wrapping an it block that contains the actual test.

Then we make a call to chai.request and pass it a reference to our app (the server.js file). This is how we engage the chai-http library to make our HTTP call.

We then issue a GET request to the /films-list endpoint of our API.

Then we call end to define what should happen when the call completes.

We expect a status of 200 with:

res.should.have.status(200);

Then we expect a body to equal our mock with:

expect(res.body).to.deep.equal(starwarsFilmListMock);

Finally, we call done() to signal to Mocha that the asynchronous test has finished.

The really cool part about this is that chai-http starts your application locally, runs the request you specify (GET, POST, PUT, DELETE, etc.), lets you capture the response, and then shuts the locally running application down.

So now with our integration test set up, run it with the following:

mocha --exit test/firstIntegration

> Note that the `--exit` flag is passed here to tell the test runner to stop once the test finishes. You can run it without `--exit`, but the process would then wait for you to cancel it manually.
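If you don’t want to retype these commands, you could also add npm scripts for the two suites. The script names below are just suggestions, not part of the project:

// package.json (script names are suggestions)
"scripts": {
  "test:unit": "mocha test/firstUnit",
  "test:integration": "mocha --exit test/firstIntegration"
}

Then run them with npm run test:unit or npm run test:integration.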

Then you should see output showing the integration test passing.

There are other frameworks that can run your application alongside your test runner.

However, chai-http is clean, easy to add to any of your projects, and generally doesn’t require additional frameworks.

I recommend playing with the chai-http library and your application, and consulting the documentation when you have questions.
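As an illustration, a POST flow would look much the same. The /films endpoint and request body below are hypothetical, not actual routes in ms-starwars:

describe('POST /films', () => {
  it('should create a film and return it when called', done => {
    chai
      .request(app)
      .post('/films') // hypothetical endpoint
      .send({ title: 'A New Hope' }) // hypothetical request body
      .end((err, res) => {
        res.should.have.status(201);
        expect(res.body.title).to.equal('A New Hope');
        done();
      });
  });
});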

Testing strategies

With any test suite, we should also consider an overall strategy. Ask yourself: what do you want to test? Have you covered all the application flows? Are there specific edge conditions you want to test? Do you need to provide reports for your Product Owner or Team Lead?

The frameworks I’ve covered so far enable you to run tests, but there are many options for test reporters. Additionally, there are several test tools out there that provide code coverage.
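For example, nyc (the command-line client for Istanbul) can wrap your existing Mocha command and print a coverage report. Assuming nyc is installed as a dev dependency, the invocation is typically something like:

npx nyc mocha test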

One failure mode I’ve seen on teams is the assumption that if the coverage tool reports 90% coverage, you’re good. That isn’t really accurate.

When you write your tests, you should consider odd behavior and testing for specific inputs. Just because your code has been covered doesn’t mean that the outliers and edge cases have been covered.

With any test suite, you should consider not just the “happy path” and “sad path” scenarios, but also edge cases and specific cases to your customers.

Additionally, integration and end-to-end tests often rely on external HTTP calls.

This can be problematic if the external APIs are down.

I recently built another microservice that ran into exactly that problem. I employed a mock server to run my tests, and used start-server-and-test to run both together.

This proved to be a great experience because I could run my tests in isolation, and it freed me up from relying on the external APIs.
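For reference, the wiring for that looks roughly like the package.json scripts below. The script names, mock-server command, and port are assumptions, not taken from that project:

// package.json (hypothetical script names and port)
"scripts": {
  "mock-server": "node mock-server.js",
  "test:integration": "mocha --exit test/firstIntegration",
  "test": "start-server-and-test mock-server http://localhost:3000 test:integration"
}

Here, start-server-and-test runs the mock-server script, waits until the given URL responds, runs the test:integration script, and then shuts the mock server down.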

Overall, your test strategy is going to be based on your situation. I recommend you look beyond just “happy path” or “expected cases” and consider everything else.

Conclusion

I hope my post here has given you a good introduction to testing your Node.js applications.

We’ve discussed the different frameworks and technologies you can use in your Node.js applications. We’ve also walked through unit and integration tests for your Node.js applications.

The framework I used here was Express.js, but these patterns could apply to other Node.js frameworks as well. I recommend checking out the links I’ve provided above, as well as the documentation for each framework.
