How to Run Cypress Tests in Parallel

In the following tutorial, we walk you through configuring Cypress to run tests in parallel with CircleCI.

Want to see the final project in action? Check out the video.

Project Setup

Let's start by setting up a basic Cypress project:

$ mkdir cypress-parallel && cd cypress-parallel
$ npm init -y
$ npm install cypress --save-dev
$ ./node_modules/.bin/cypress open

This creates a new project folder, adds a package.json file, installs Cypress, opens the Cypress GUI, and scaffolds out the following files and folders:

├── cypress
│   ├── fixtures
│   │   └── example.json
│   ├── integration
│   │   └── examples
│   │       ├── actions.spec.js
│   │       ├── aliasing.spec.js
│   │       ├── assertions.spec.js
│   │       ├── connectors.spec.js
│   │       ├── cookies.spec.js
│   │       ├── cypress_api.spec.js
│   │       ├── files.spec.js
│   │       ├── local_storage.spec.js
│   │       ├── location.spec.js
│   │       ├── misc.spec.js
│   │       ├── navigation.spec.js
│   │       ├── network_requests.spec.js
│   │       ├── querying.spec.js
│   │       ├── spies_stubs_clocks.spec.js
│   │       ├── traversal.spec.js
│   │       ├── utilities.spec.js
│   │       ├── viewport.spec.js
│   │       ├── waiting.spec.js
│   │       └── window.spec.js
│   ├── plugins
│   │   └── index.js
│   └── support
│       ├── commands.js
│       └── index.js
└─── cypress.json

Close the Cypress GUI. Then, remove the "cypress/integration/examples" folder and add four sample spec files:

sample1.spec.js

describe('Cypress parallel run example - 1', () => {
  it('should display the title', () => {
    cy.visit(`https://mherman.org`);
    cy.get('a').contains('Michael Herman');
  });
});

sample2.spec.js

describe('Cypress parallel run example - 2', () => {
  it('should display the blog link', () => {
    cy.visit(`https://mherman.org`);
    cy.get('a').contains('Blog');
  });
});

sample3.spec.js

describe('Cypress parallel run example - 3', () => {
  it('should display the about link', () => {
    cy.visit(`https://mherman.org`);
    cy.get('a').contains('About');
  });
});

sample4.spec.js

describe('Cypress parallel run example - 4', () => {
  it('should display the rss link', () => {
    cy.visit(`https://mherman.org`);
    cy.get('a').contains('RSS');
  });
});

Your project should now have the following structure:

├── cypress
│   ├── fixtures
│   │   └── example.json
│   ├── integration
│   │   ├── sample1.spec.js
│   │   ├── sample2.spec.js
│   │   ├── sample3.spec.js
│   │   └── sample4.spec.js
│   ├── plugins
│   │   └── index.js
│   └── support
│       ├── commands.js
│       └── index.js
├── cypress.json
├── package-lock.json
└── package.json

Make sure the tests pass before moving on:

$ ./node_modules/.bin/cypress run

      Spec                                                Tests  Passing  Failing  Pending  Skipped
  ┌────────────────────────────────────────────────────────────────────────────────────────────────┐
  │ ✔ sample1.spec.js                           00:02        1        1        -        -        - │
  ├────────────────────────────────────────────────────────────────────────────────────────────────┤
  │ ✔ sample2.spec.js                           00:01        1        1        -        -        - │
  ├────────────────────────────────────────────────────────────────────────────────────────────────┤
  │ ✔ sample3.spec.js                           00:02        1        1        -        -        - │
  ├────────────────────────────────────────────────────────────────────────────────────────────────┤
  │ ✔ sample4.spec.js                           00:01        1        1        -        -        - │
  └────────────────────────────────────────────────────────────────────────────────────────────────┘
    All specs passed!                           00:08        4        4        -        -        -

Once done, add a .gitignore file:

node_modules/
cypress/videos/
cypress/screenshots/

Create a new repository on GitHub called cypress-parallel, init a new git repo locally, and then commit and push your code up to GitHub.
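
If you haven't done this before, the Git side of that step looks something like the following (the remote URL uses a <your-username> placeholder, and the default branch is assumed to be master):

$ git init
$ git add -A
$ git commit -m "initial commit"
$ git remote add origin git@github.com:<your-username>/cypress-parallel.git
$ git push -u origin master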

CircleCI Setup

Sign up for a CircleCI account if you don't already have one. Then, add cypress-parallel as a new project on CircleCI.

Review the Getting Started guide for info on how to set up and work with projects on CircleCI.

Add a new folder to the project root called ".circleci", and then add a new file to that folder called config.yml:

version: 2

jobs:
  build:
    working_directory: ~/tmp
    docker:
      - image: 'cypress/base:10'
        environment:
          TERM: xterm
    steps:
      - checkout
      - run: pwd
      - run: ls
      - restore_cache:
          keys:
            - 'v2-deps-{{ .Branch }}-{{ checksum "package-lock.json" }}'
            - 'v2-deps-{{ .Branch }}-'
            - v2-deps-
      - run: npm ci
      - save_cache:
          key: 'v2-deps-{{ .Branch }}-{{ checksum "package-lock.json" }}'
          paths:
            - ~/.npm
            - ~/.cache
      - persist_to_workspace:
          root: ~/
          paths:
            - .cache
            - tmp
  test:
    working_directory: ~/tmp
    docker:
      - image: 'cypress/base:10'
        environment:
          TERM: xterm
    steps:
      - attach_workspace:
          at: ~/
      - run: ls -la cypress
      - run: ls -la cypress/integration
      - run:
          name: Running cypress tests
          command: $(npm bin)/cypress run
      - store_artifacts:
          path: cypress/videos
      - store_artifacts:
          path: cypress/screenshots

workflows:
  version: 2
  build_and_test:
    jobs:
      - build
      - test:
          requires:
            - build

Here, we configured two jobs: build and test. The build job installs Cypress, and the test job runs the tests. Both jobs run inside Docker containers based on the cypress/base image.

For more on CircleCI configuration, review the Configuration Introduction guide.

Commit and push your code to trigger a new build. Make sure both jobs pass. You should be able to see the Cypress recorded videos within the "Artifacts" tab on the test job:

circleci dashboard

With that, let's look at how to split the tests up using the config file, so the Cypress tests can be run in parallel.

Parallelism

We'll start by manually splitting them up. Update the config file like so:

version: 2

jobs:
  build:
    working_directory: ~/tmp
    docker:
      - image: 'cypress/base:10'
        environment:
          TERM: xterm
    steps:
      - checkout
      - run: pwd
      - run: ls
      - restore_cache:
          keys:
            - 'v2-deps-{{ .Branch }}-{{ checksum "package-lock.json" }}'
            - 'v2-deps-{{ .Branch }}-'
            - v2-deps-
      - run: npm ci
      - save_cache:
          key: 'v2-deps-{{ .Branch }}-{{ checksum "package-lock.json" }}'
          paths:
            - ~/.npm
            - ~/.cache
      - persist_to_workspace:
          root: ~/
          paths:
            - .cache
            - tmp
  test1:
    working_directory: ~/tmp
    docker:
      - image: 'cypress/base:10'
        environment:
          TERM: xterm
    steps:
      - attach_workspace:
          at: ~/
      - run: ls -la cypress
      - run: ls -la cypress/integration
      - run:
          name: Running cypress tests 1
          command: $(npm bin)/cypress run --spec cypress/integration/sample1.spec.js
      - store_artifacts:
          path: cypress/videos
      - store_artifacts:
          path: cypress/screenshots
  test2:
    working_directory: ~/tmp
    docker:
      - image: 'cypress/base:10'
        environment:
          TERM: xterm
    steps:
      - attach_workspace:
          at: ~/
      - run: ls -la cypress
      - run: ls -la cypress/integration
      - run:
          name: Running cypress tests 2
          command: $(npm bin)/cypress run --spec cypress/integration/sample2.spec.js
      - store_artifacts:
          path: cypress/videos
      - store_artifacts:
          path: cypress/screenshots
  test3:
    working_directory: ~/tmp
    docker:
      - image: 'cypress/base:10'
        environment:
          TERM: xterm
    steps:
      - attach_workspace:
          at: ~/
      - run: ls -la cypress
      - run: ls -la cypress/integration
      - run:
          name: Running cypress tests 3
          command: $(npm bin)/cypress run --spec cypress/integration/sample3.spec.js
      - store_artifacts:
          path: cypress/videos
      - store_artifacts:
          path: cypress/screenshots
  test4:
    working_directory: ~/tmp
    docker:
      - image: 'cypress/base:10'
        environment:
          TERM: xterm
    steps:
      - attach_workspace:
          at: ~/
      - run: ls -la cypress
      - run: ls -la cypress/integration
      - run:
          name: Running cypress tests 4
          command: $(npm bin)/cypress run --spec cypress/integration/sample4.spec.js
      - store_artifacts:
          path: cypress/videos
      - store_artifacts:
          path: cypress/screenshots

workflows:
  version: 2
  build_and_test:
    jobs:
      - build
      - test1:
          requires:
            - build
      - test2:
          requires:
            - build
      - test3:
          requires:
            - build
      - test4:
          requires:
            - build

Here, we created four test jobs, each of which runs a single spec file on a separate machine on CircleCI. Commit your code and push it up to GitHub. This time, once the build job finishes, you should see each of the test jobs running at the same time:

circleci dashboard

Next, let's look at how to generate the config file dynamically.

Generate CircleCI Config

Create a "lib" folder in the project root, and then add the following files to that folder:

  1. circle.json
  2. generate-circle-config.js

Add the config for the build job to circle.json:

{
  "version": 2,
  "jobs": {
    "build": {
      "working_directory": "~/tmp",
      "docker": [
        {
          "image": "cypress/base:10",
          "environment": {
            "TERM": "xterm"
          }
        }
      ],
      "steps": [
        "checkout",
        {
          "run": "pwd"
        },
        {
          "run": "ls"
        },
        {
          "restore_cache": {
            "keys": [
              "v2-deps-{{ .Branch }}-{{ checksum \"package-lock.json\" }}",
              "v2-deps-{{ .Branch }}-",
              "v2-deps-"
            ]
          }
        },
        {
          "run": "npm ci"
        },
        {
          "save_cache": {
            "key": "v2-deps-{{ .Branch }}-{{ checksum \"package-lock.json\" }}",
            "paths": [
              "~/.npm",
              "~/.cache"
            ]
          }
        },
        {
          "persist_to_workspace": {
            "root": "~/",
            "paths": [
              ".cache",
              "tmp"
            ]
          }
        }
      ]
    }
  },
  "workflows": {
    "version": 2,
    "build_and_test": {
      "jobs": [
        "build"
      ]
    }
  }
}

Essentially, we'll use this config as the base, add the test jobs to it dynamically, and then write the final config file out as YAML.

Add the code to generate-circle-config.js that:

  1. Gets the name of the spec files from the "cypress/integration" directory
  2. Reads the circle.json file as an object
  3. Adds the test jobs to the object
  4. Converts the object to YAML and writes it to disk as .circleci/config.yml

Code:

const path = require('path');
const fs = require('fs');

const yaml = require('write-yaml');


/*
  helpers
*/

function createJSON(fileArray, data) {
  for (const [index, value] of fileArray.entries()) {
    data.jobs[`test${index + 1}`] = {
      working_directory: '~/tmp',
      docker: [
        {
          image: 'cypress/base:10',
          environment: {
            TERM: 'xterm',
          },
        },
      ],
      steps: [
        {
          attach_workspace: {
            at: '~/',
          },
        },
        {
          run: 'ls -la cypress',
        },
        {
          run: 'ls -la cypress/integration',
        },
        {
          run: {
            name: `Running cypress tests ${index + 1}`,
            command: `$(npm bin)/cypress run --spec cypress/integration/${value}`,
          },
        },
        {
          store_artifacts: {
            path: 'cypress/videos',
          },
        },
        {
          store_artifacts: {
            path: 'cypress/screenshots',
          },
        },
      ],
    };
    data.workflows.build_and_test.jobs.push({
      [`test${index + 1}`]: {
        requires: [
          'build',
        ],
      },
    });
  }
  return data;
}

function writeFile(data) {
  yaml(path.join(__dirname, '..', '.circleci', 'config.yml'), data, (err) => {
    if (err) {
      console.log(err);
    } else {
      console.log('Success!');
    }
  });
}


/*
  main
*/

// get spec files as an array
const files = fs.readdirSync(path.join(__dirname, '..', 'cypress', 'integration')).filter(fn => fn.endsWith('.spec.js'));
// read circle.json
const circleConfigJSON = require(path.join(__dirname, 'circle.json'));
// add cypress specs to object as test jobs
const data = createJSON(files, circleConfigJSON);
// write file to disk
writeFile(data);

Review (and refactor) this on your own.
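
If you want a starting point, here's one possible refactor (a sketch only; the executor and buildSteps helper names are made up for illustration, the ls debug steps are dropped for brevity, and the rest of the tutorial continues from the original version):

function executor() {
  return {
    working_directory: '~/tmp',
    docker: [
      { image: 'cypress/base:10', environment: { TERM: 'xterm' } },
    ],
  };
}

function buildSteps(index, value) {
  return [
    { attach_workspace: { at: '~/' } },
    {
      run: {
        name: `Running cypress tests ${index + 1}`,
        command: `$(npm bin)/cypress run --spec cypress/integration/${value}`,
      },
    },
    { store_artifacts: { path: 'cypress/videos' } },
    { store_artifacts: { path: 'cypress/screenshots' } },
  ];
}

function createJSON(fileArray, data) {
  for (const [index, value] of fileArray.entries()) {
    // each job gets a fresh executor object plus its own steps
    data.jobs[`test${index + 1}`] = Object.assign(executor(), { steps: buildSteps(index, value) });
    data.workflows.build_and_test.jobs.push({
      [`test${index + 1}`]: { requires: ['build'] },
    });
  }
  return data;
}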

Install write-yaml and then generate the new config file:

$ npm install write-yaml --save-dev
$ node lib/generate-circle-config.js

Commit your code again and push it up to GitHub to trigger a new build. Again, four test jobs should run in parallel after the build job finishes.

Mochawesome

Moving along, let's add mochawesome as a Cypress custom reporter so we can generate a nice report after all test jobs finish running.

Install:

$ npm install mochawesome mocha --save-dev

Update the following run step in the createJSON function in generate-circle-config.js:

run: {
  name: `Running cypress tests ${index + 1}`,
  command: `$(npm bin)/cypress run --spec cypress/integration/${value} --reporter mochawesome --reporter-options "reportFilename=test${index + 1}"`,
},

Then, add a new step to createJSON to store the generated report as an artifact:

{
  store_artifacts: {
    path: 'mochawesome-report',
  },
},

createJSON should now look like:

function createJSON(fileArray, data) {
  for (const [index, value] of fileArray.entries()) {
    data.jobs[`test${index + 1}`] = {
      working_directory: '~/tmp',
      docker: [
        {
          image: 'cypress/base:10',
          environment: {
            TERM: 'xterm',
          },
        },
      ],
      steps: [
        {
          attach_workspace: {
            at: '~/',
          },
        },
        {
          run: 'ls -la cypress',
        },
        {
          run: 'ls -la cypress/integration',
        },
        {
          run: {
            name: `Running cypress tests ${index + 1}`,
            command: `$(npm bin)/cypress run --spec cypress/integration/${value} --reporter mochawesome --reporter-options "reportFilename=test${index + 1}"`,
          },
        },
        {
          store_artifacts: {
            path: 'cypress/videos',
          },
        },
        {
          store_artifacts: {
            path: 'cypress/screenshots',
          },
        },
        {
          store_artifacts: {
            path: 'mochawesome-report',
          },
        },
      ],
    };
    data.workflows.build_and_test.jobs.push({
      [`test${index + 1}`]: {
        requires: [
          'build',
        ],
      },
    });
  }
  return data;
}

Now, each test run will generate a mochawesome report with a unique name. Try it out. Generate the new config. Commit and push your code. Each test job should store a copy of the generated mochawesome report in the "Artifacts" tab:

circleci dashboard

The actual report should look something like:

mochawesome report

Combine Reports

The next step is to combine the separate reports into a single report. Start by adding a new step to the createJSON function that stores the generated report in a workspace:

{
  persist_to_workspace: {
    root: 'mochawesome-report',
    paths: [
      `test${index + 1}.json`,
      `test${index + 1}.html`,
    ],
  },
},

Also, add a new job to lib/circle.json called combine_reports, which attaches the workspace and then runs an ls command to display the contents of the directory:

"combine_reports": {
  "working_directory": "~/tmp",
  "docker": [
    {
      "image": "cypress/base:10",
      "environment": {
        "TERM": "xterm"
      }
    }
  ],
  "steps": [
    {
      "attach_workspace": {
        "at": "/tmp/mochawesome-report"
      }
    },
    {
      "run": "ls /tmp/mochawesome-report"
    }
  ]
}

The purpose of the ls is just to make sure that we're persisting and attaching the workspace correctly. In other words, when run, you should see all the reports in the "/tmp/mochawesome-report" directory.
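
When everything is wired up correctly, the output of that ls step should look something like this (a sketch based on the file names persisted above):

test1.html  test1.json  test2.html  test2.json
test3.html  test3.json  test4.html  test4.json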

Since this job depends on the test jobs, update createJSON again, like so:

function createJSON(fileArray, data) {
  const jobs = [];
  for (const [index, value] of fileArray.entries()) {
    jobs.push(`test${index + 1}`);
    data.jobs[`test${index + 1}`] = {
      working_directory: '~/tmp',
      docker: [
        {
          image: 'cypress/base:10',
          environment: {
            TERM: 'xterm',
          },
        },
      ],
      steps: [
        {
          attach_workspace: {
            at: '~/',
          },
        },
        {
          run: 'ls -la cypress',
        },
        {
          run: 'ls -la cypress/integration',
        },
        {
          run: {
            name: `Running cypress tests ${index + 1}`,
            command: `$(npm bin)/cypress run --spec cypress/integration/${value} --reporter mochawesome --reporter-options "reportFilename=test${index + 1}"`,
          },
        },
        {
          store_artifacts: {
            path: 'cypress/videos',
          },
        },
        {
          store_artifacts: {
            path: 'cypress/screenshots',
          },
        },
        {
          store_artifacts: {
            path: 'mochawesome-report',
          },
        },
        {
          persist_to_workspace: {
            root: 'mochawesome-report',
            paths: [
              `test${index + 1}.json`,
              `test${index + 1}.html`,
            ],
          },
        },
      ],
    };
    data.workflows.build_and_test.jobs.push({
      [`test${index + 1}`]: {
        requires: [
          'build',
        ],
      },
    });
  }
  data.workflows.build_and_test.jobs.push({
    combine_reports: {
      'requires': jobs,
    },
  });
  return data;
}

Generate the config:

$ node lib/generate-circle-config.js

The config file should now look like:

version: 2
jobs:
  build:
    working_directory: ~/tmp
    docker:
      - image: 'cypress/base:10'
        environment:
          TERM: xterm
    steps:
      - checkout
      - run: pwd
      - run: ls
      - restore_cache:
          keys:
            - 'v2-deps-{{ .Branch }}-{{ checksum "package-lock.json" }}'
            - 'v2-deps-{{ .Branch }}-'
            - v2-deps-
      - run: npm ci
      - save_cache:
          key: 'v2-deps-{{ .Branch }}-{{ checksum "package-lock.json" }}'
          paths:
            - ~/.npm
            - ~/.cache
      - persist_to_workspace:
          root: ~/
          paths:
            - .cache
            - tmp
  combine_reports:
    working_directory: ~/tmp
    docker:
      - image: 'cypress/base:10'
        environment:
          TERM: xterm
    steps:
      - attach_workspace:
          at: /tmp/mochawesome-report
      - run: ls /tmp/mochawesome-report
  test1:
    working_directory: ~/tmp
    docker:
      - image: 'cypress/base:10'
        environment:
          TERM: xterm
    steps:
      - attach_workspace:
          at: ~/
      - run: ls -la cypress
      - run: ls -la cypress/integration
      - run:
          name: Running cypress tests 1
          command: >-
            $(npm bin)/cypress run --spec cypress/integration/sample1.spec.js
            --reporter mochawesome --reporter-options "reportFilename=test1"
      - store_artifacts:
          path: cypress/videos
      - store_artifacts:
          path: cypress/screenshots
      - store_artifacts:
          path: mochawesome-report
      - persist_to_workspace:
          root: mochawesome-report
          paths:
            - test1.json
            - test1.html
  test2:
    working_directory: ~/tmp
    docker:
      - image: 'cypress/base:10'
        environment:
          TERM: xterm
    steps:
      - attach_workspace:
          at: ~/
      - run: ls -la cypress
      - run: ls -la cypress/integration
      - run:
          name: Running cypress tests 2
          command: >-
            $(npm bin)/cypress run --spec cypress/integration/sample2.spec.js
            --reporter mochawesome --reporter-options "reportFilename=test2"
      - store_artifacts:
          path: cypress/videos
      - store_artifacts:
          path: cypress/screenshots
      - store_artifacts:
          path: mochawesome-report
      - persist_to_workspace:
          root: mochawesome-report
          paths:
            - test2.json
            - test2.html
  test3:
    working_directory: ~/tmp
    docker:
      - image: 'cypress/base:10'
        environment:
          TERM: xterm
    steps:
      - attach_workspace:
          at: ~/
      - run: ls -la cypress
      - run: ls -la cypress/integration
      - run:
          name: Running cypress tests 3
          command: >-
            $(npm bin)/cypress run --spec cypress/integration/sample3.spec.js
            --reporter mochawesome --reporter-options "reportFilename=test3"
      - store_artifacts:
          path: cypress/videos
      - store_artifacts:
          path: cypress/screenshots
      - store_artifacts:
          path: mochawesome-report
      - persist_to_workspace:
          root: mochawesome-report
          paths:
            - test3.json
            - test3.html
  test4:
    working_directory: ~/tmp
    docker:
      - image: 'cypress/base:10'
        environment:
          TERM: xterm
    steps:
      - attach_workspace:
          at: ~/
      - run: ls -la cypress
      - run: ls -la cypress/integration
      - run:
          name: Running cypress tests 4
          command: >-
            $(npm bin)/cypress run --spec cypress/integration/sample4.spec.js
            --reporter mochawesome --reporter-options "reportFilename=test4"
      - store_artifacts:
          path: cypress/videos
      - store_artifacts:
          path: cypress/screenshots
      - store_artifacts:
          path: mochawesome-report
      - persist_to_workspace:
          root: mochawesome-report
          paths:
            - test4.json
            - test4.html
workflows:
  version: 2
  build_and_test:
    jobs:
      - build
      - test1:
          requires:
            - build
      - test2:
          requires:
            - build
      - test3:
          requires:
            - build
      - test4:
          requires:
            - build
      - combine_reports:
          requires:
            - test1
            - test2
            - test3
            - test4

Commit and push to GitHub again. Make sure combine_reports runs at the end:

circleci dashboard

Next, add a script to combine the reports:

const fs = require('fs');
const path = require('path');

const shell = require('shelljs');
const uuidv1 = require('uuid/v1');


function getFiles(dir, ext, fileList = []) {
  const files = fs.readdirSync(dir);
  files.forEach((file) => {
    const filePath = `${dir}/${file}`;
    if (fs.statSync(filePath).isDirectory()) {
      getFiles(filePath, ext, fileList);
    } else if (path.extname(file) === ext) {
      fileList.push(filePath);
    }
  });
  return fileList;
}

function traverseAndModifyTimedOut(target, deep) {
  if (target['tests'] && target['tests'].length) {
    target['tests'].forEach(test => {
      test.timedOut = false;
    });
  }
  if (target['suites']) {
    target['suites'].forEach(suite => {
      traverseAndModifyTimedOut(suite, deep + 1);
    })
  }
}

function combineMochaAwesomeReports() {
  const reportDir = path.join('/', 'tmp', 'mochawesome-report');
  const reports = getFiles(reportDir, '.json', []);
  const suites = [];
  let totalSuites = 0;
  let totalTests = 0;
  let totalPasses = 0;
  let totalFailures = 0;
  let totalPending = 0;
  let startTime;
  let endTime;
  let totalskipped = 0;
  reports.forEach((report, idx) => {
    const rawdata = fs.readFileSync(report);
    const parsedData = JSON.parse(rawdata);
    if (idx === 0) { startTime = parsedData.stats.start; }
    if (idx === (reports.length - 1)) { endTime = parsedData.stats.end; }
    totalSuites += parseInt(parsedData.stats.suites, 10);
    totalskipped += parseInt(parsedData.stats.skipped, 10);
    totalPasses += parseInt(parsedData.stats.passes, 10);
    totalFailures += parseInt(parsedData.stats.failures, 10);
    totalPending += parseInt(parsedData.stats.pending, 10);
    totalTests += parseInt(parsedData.stats.tests, 10);

    if (parsedData && parsedData.suites && parsedData.suites.suites) {
      parsedData.suites.suites.forEach(suite => {
        suites.push(suite)
      })
    }
  });
  return {
    totalSuites,
    totalTests,
    totalPasses,
    totalFailures,
    totalPending,
    startTime,
    endTime,
    totalskipped,
    suites,
  };
}

function getPercentClass(pct) {
  if (pct <= 50) {
    return 'danger';
  } else if (pct > 50 && pct < 80) {
    return 'warning';
  }
  return 'success';
}

function writeReport(obj, uuid) {
  const sampleFile = path.join(__dirname, 'sample.json');
  const outFile = path.join(__dirname, '..', `${uuid}.json`);
  fs.readFile(sampleFile, 'utf8', (err, data) => {
    if (err) throw err;
    const parsedSampleFile = JSON.parse(data);
    const stats = parsedSampleFile.stats;
    stats.suites = obj.totalSuites;
    stats.tests = obj.totalTests;
    stats.passes = obj.totalPasses;
    stats.failures = obj.totalFailures;
    stats.pending = obj.totalPending;
    stats.start = obj.startTime;
    stats.end = obj.endTime;
    stats.duration =  new Date(obj.endTime) - new Date(obj.startTime);
    stats.testsRegistered = obj.totalTests - obj.totalPending;
    stats.passPercent = Math.round((stats.passes / (stats.tests - stats.pending)) * 1000) / 10;
    stats.pendingPercent = Math.round((stats.pending / stats.testsRegistered) * 1000) /10;
    stats.skipped = obj.totalskipped;
    stats.hasSkipped = obj.totalskipped > 0;
    stats.passPercentClass = getPercentClass(stats.passPercent);
    stats.pendingPercentClass = getPercentClass(stats.pendingPercent);

    obj.suites.forEach(suit => {
      traverseAndModifyTimedOut(suit, 0);
    });

    parsedSampleFile.suites.suites = obj.suites;
    parsedSampleFile.suites.uuid = uuid;
    fs.writeFile(outFile, JSON.stringify(parsedSampleFile), { flag: 'wx' }, (error) => {
      if (error) throw error;
    });
  });
}

const data = combineMochaAwesomeReports();
const uuid = uuidv1();
writeReport(data, uuid);
shell.exec(`./node_modules/.bin/marge ${uuid}.json --reportDir mochareports --reportTitle ${uuid}`, (code, stdout, stderr) => {
  if (stderr) {
    console.log(stderr);
  } else {
    console.log('Success!');
  }
});

Save this as combine.js in "lib".

This script will gather up all the mochawesome JSON files (which contain the raw JSON output for each mochawesome report), combine them, and generate a new mochawesome report.
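
If you'd like to try the script locally (after installing the dependencies and adding sample.json below), keep in mind that, as written, it expects the per-job JSON reports to live in the "/tmp/mochawesome-report" directory:

$ node lib/combine.js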

If interested, hop back to CircleCI to view one of the generated mochawesome JSON files in the "Artifacts" tab from one of the test jobs.

Install the dependencies:

$ npm install shelljs uuid --save-dev

Add sample.json to the "lib" directory:

{
  "stats": {
    "suites": 0,
    "tests": 0,
    "passes": 0,
    "pending": 0,
    "failures": 0,
    "start": "",
    "end": "",
    "duration": 0,
    "testsRegistered": 0,
    "passPercent": 0,
    "pendingPercent": 0,
    "other": 0,
    "hasOther": false,
    "skipped": 0,
    "hasSkipped": false,
    "passPercentClass": "success",
    "pendingPercentClass": "success"
  },
  "suites": {
    "uuid": "",
    "title": "",
    "fullFile": "",
    "file": "",
    "beforeHooks": [],
    "afterHooks": [],
    "tests": [],
    "suites": [],
    "passes": [],
    "failures": [],
    "pending": [],
    "skipped": [],
    "duration": 0,
    "root": true,
    "rootEmpty": true,
    "_timeout": 2000
  },
  "copyrightYear": 2019
}

Update combine_reports in circle.json to run the combine.js script and then save the new reports as an artifact:

"combine_reports": {
  "working_directory": "~/tmp",
  "docker": [
    {
      "image": "cypress/base:10",
      "environment": {
        "TERM": "xterm"
      }
    }
  ],
  "steps": [
    "checkout",
    {
      "attach_workspace": {
        "at": "~/"
      }
    },
    {
      "attach_workspace": {
        "at": "/tmp/mochawesome-report"
      }
    },
    {
      "run": "ls /tmp/mochawesome-report"
    },
    {
      "run": "node ./lib/combine.js"
    },
    {
      "store_artifacts": {
        "path": "mochareports"
      }
    }
  ]
}

To test, generate the new config, commit, and push your code. All jobs should pass and you should see the combined final report.

circleci dashboard

circleci dashboard

mochawesome report

Handle Test Failures

What happens if a test fails?

Change cy.get('a').contains('Blog'); to cy.get('a').contains('Not Real'); in sample2.spec.js:

describe('Cypress parallel run example - 2', () => {
  it('should display the blog link', () => {
    cy.visit(`https://mherman.org`);
    cy.get('a').contains('Not Real');
  });
});

Commit and push your code. Since the combine_reports job depends on the test jobs, it won't run if any one of them fails.

circleci dashboard

circleci dashboard

So, how do you get the combine_reports job to run even if a previous job in the workflow fails?

Unfortunately, this functionality is not currently supported by CircleCI. See this discussion for more info. Since we really only care about the mochawesome JSON reports, we can get around this issue by suppressing the exit code for the test jobs. The test jobs will still run and generate the mochawesome reports--they will just always pass regardless of whether the underlying tests pass or fail.

Update the following run step in createJSON again:

run: {
  name: `Running cypress tests ${index + 1}`,
  command: `if $(npm bin)/cypress run --spec cypress/integration/${value} --reporter mochawesome --reporter-options "reportFilename=test${index + 1}"; then echo 'pass'; else echo 'fail'; fi`,
},

The single-line bash if/else is a bit hard to read; refactor it on your own (one possible approach is sketched below).
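
For instance, the same logic could live in a small helper script (the lib/run-spec.sh name and its arguments below are hypothetical), which the generated command would then call with something like bash lib/run-spec.sh sample1.spec.js test1:

#!/bin/bash
# lib/run-spec.sh (hypothetical helper): run a single spec with the mochawesome
# reporter, but always exit 0 so the CircleCI job itself never fails.
spec_file=$1
report_name=$2

if $(npm bin)/cypress run \
    --spec "cypress/integration/${spec_file}" \
    --reporter mochawesome \
    --reporter-options "reportFilename=${report_name}"; then
  echo 'pass'
else
  echo 'fail'
fi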

Does it work? Generate the new config file, commit, and push your code. All test jobs should pass and the final mochawesome report should show the failing spec.

circleci dashboard

mochawesome report

One last thing: We should probably still fail the entire build if any tests fail. The quickest way to implement this is within the shell.exec callback in combine.js:

shell.exec(`./node_modules/.bin/marge ${uuid}.json --reportDir mochareports --reportTitle ${uuid}`, (code, stdout, stderr) => {
  if (stderr) {
    console.log(stderr);
  } else {
    console.log('Success!');
    if (data.totalFailures > 0) {
      process.exit(1);
    } else {
      process.exit(0);
    }
  }
});

Test this out. Then, try testing a few other scenarios, like skipping a test or adding more than four spec files.
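
For example, to try the skipped-test scenario, you could temporarily mark one of the sample tests with Mocha's it.skip, regenerate the config, and push again:

describe('Cypress parallel run example - 4', () => {
  // it.skip marks the test as pending, so it should show up in the pending/skipped
  // counts of the run output and in the combined mochawesome report
  it.skip('should display the rss link', () => {
    cy.visit(`https://mherman.org`);
    cy.get('a').contains('RSS');
  });
});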

Conclusion

This tutorial looked at how to run Cypress tests in parallel, without using the Cypress record feature, on CircleCI. It's worth noting that you can implement the exact same workflow with any of the CI services that offer parallelism--like GitLab CI, Travis, and Semaphore, to name a few--as well as your own custom CI platform with Jenkins or Concourse. If your CI service does not offer parallelism, then you can use Docker to run jobs in parallel. Contact us for more details on this.

Looking for some challenges?

  1. Create a Slack bot that notifies a channel when the tests are done running and adds a link to the mochawesome report as well as any screenshots or videos of failed test specs
  2. Upload the final report to an S3 bucket (see cypress-mochawesome-s3)
  3. Track the number of failed tests over time by storing the test results in a database
  4. Run the entire test suite multiple times as a nightly job and then only indicate whether or not a test has failed if it fails X number of times--this will help surface flaky tests and eliminate unnecessary developer intervention

Grab the final code from the cypress-parallel repo. Cheers!

Original article source at: https://testdriven.io/

#cypress #tests #parallel 


React-fix-it: Automagically Generate Tests From Errors

React Fix It

Automagically generate tests from errors.

⚠️ This package uses react-component-errors to wrap the lifecycle methods into a try...catch block, which affects the performance of your components. Therefore it should not be used in production.

How to use it

  • Enhance your components with fixIt
  • Write some bugs (or wait for your components to fail)
  • Open the console and copy the test snippet
  • Paste the code to reproduce the error
  • Fix the bugs and celebrate

Demo

https://michelebertoli.github.io/react-fix-it/

Preview

Installation

You can either install it with npm or yarn.

npm install --save-dev react-fix-it

or

yarn add --dev react-fix-it

Example

import React, { Component } from 'react'
import fixIt, { options } from 'react-fix-it'

// defaults to console.log
options.log = (test) => {
  console.warn(test)
  doWhateverYouWant(test)
}

class MyComponent extends Component {
  render() {
    return <div>Hello ⚛</div>
  }
}

export default fixIt(MyComponent)

💡 The easiest way to automatically patch all the components in development mode is by using babel-plugin-react-fix-it with the following configuration:

{
  "env": {
    "development": {
      "plugins": ["react-fix-it"]
    }
  }
}

Test

npm test

or

yarn test

Download Details:

Author: MicheleBertoli
Source Code: https://github.com/MicheleBertoli/react-fix-it 
License: MIT license

#javascript #react #tests 


Memstore: in-memory Store for Gorilla/sessions For Use in Tests

memstore 

In-memory implementation of gorilla/sessions for use in tests and dev environments

How to install

go get github.com/quasoft/memstore

How to use

package main

import (
    "fmt"
    "log"
    "net/http"

    "github.com/quasoft/memstore"
)

func main() {
    // Create a memory store, providing authentication and
    // encryption key for securecookie
    store := memstore.NewMemStore(
        []byte("authkey123"),
        []byte("enckey12341234567890123456789012"),
    )

    http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
        // Get session by name.
        session, err := store.Get(r, "session1")
        if err != nil {
            log.Printf("Error retrieving session: %v", err)
        }

        // The name should be 'foobar' if home page was visited before that and 'Guest' otherwise.
        user, ok := session.Values["username"]
        if !ok {
            user = "Guest"
        }
        fmt.Fprintf(w, "Hello %s", user)
    })

    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        // Get session by name.
        session, err := store.Get(r, "session1")
        if err != nil {
            log.Printf("Error retrieving session: %v", err)
        }

        // Add values to the session object
        session.Values["username"] = "foobar"
        session.Values["email"] = "spam@eggs.com"

        // Save values
        err = session.Save(r, w)
        if err != nil {
            log.Fatalf("Error saving session: %v", err)
        }
    })

    log.Printf("listening on http://%s/", "127.0.0.1:9090")
    log.Fatal(http.ListenAndServe("127.0.0.1:9090", nil))
}

Documentation

Documentation, as usual, can be found at godoc.org.

The interface of gorilla/sessions is described at http://www.gorillatoolkit.org/pkg/sessions.

Download Details:

Author: quasoft
Source Code: https://github.com/quasoft/memstore 
License: BSD-3-Clause license

#go #golang #tests 


File Watcher in Julia - Can Be Used to Auto-run Unit Tests Etc

Watcher 

This package allows you to run a custom command every time a file in the specified directories changes. It was initially written to auto-run unit tests every time a file gets saved.

The default invocation is very simple:

julia -e "using Watcher"

This will watch all .jl files in the current directory and in subdirectories, and run "julia test/runtests.jl" when a file changes.

You can change this behaviour:

julia -e "using Watcher" -- [-f=jl,txt] [-w=src,test] [--now] [--run echo "something changed"]

-f=type1,type2 specifies which file types to watch, default is jl

-w=dir1,dir2 tells it to look only in these two directories, default is the current directory and all its subdirectories

--now will run the command already once on startup, and then continue watching for changes

Everything after --run is the command that will get executed, with the default being julia test/runtests.jl.

Tips

It is recommended to put println statements at the beginning and end of your unit test file, to get immediate feedback that the tests started running (executing using statements can take some time):

println("Starting runtests.jl ...")
using FactCheck, YourPackage

# run your tests

println(" ... finished runtests.jl")
FactCheck.exitstatus()

Download Details:

Author: Rened
Source Code: https://github.com/rened/Watcher.jl 
License: View license

#julia #tests 


RunTests.jl: A Test Running Framework for Julia

RunTests.jl

RunTests.jl is a test running framework for Julia. In its simplest form RunTests.jl saves you from writing test/runtests.jl scripts that look like this:

my_tests = ["sometests.jl",
            "somemoretests.jl",
            "evenmoretests.jl"]

println("Running tests:")
for my_test in my_tests
  include(my_test)
end

and allows you to write them, more simply, like this:

using RunTests
exit(run_tests())

Or, if you wish to be explicit about which test files are run and in what order (not recommended; ideally the order tests are run in should not matter), you can do this:

using RunTests
exit(run_tests(["sometests.jl",
                "somemoretests.jl",
                "evenmoretests.jl"]))

But it has more to offer than that! RunTests.jl builds on top of Julia's Base.Test library to make it easy to add structure to your tests. Structuring your tests with RunTests.jl gives the following advantages:

  • All the tests are run - the tests script doesn't bomb out after the first failure so you can see all your test results at once.
  • A summary of how many tests passed/failed is produced so you can judge at a glance how the test run went.
  • The STDOUT and STDERR output from each test is captured and reported along with the details of the test failure.
  • You get a progress bar showing how far through the tests you are; it is green while all the tests are passing and goes red if any fail.
  • Using modules and functions to structure test files gives you a natural isolation between tests.
  • You can selectively skip tests with @skipif and mark failing tests with @xfail.
  • Using @parameterize you can run the same test again and again with different parameters and see which pass and which fail.

Here is an example test file written using RunTests.jl that demonstrates a number of features of the package:

using RunTests
using Base.Test

@testmodule ExampleTests begin

  function test_one()
      @test true
  end

  function test_two()
    println("seen")
    @test true
    println("also seen")
    @test false
    println("never seen")
  end

  @skipif false function test_not_skipped()
    @test true
  end

  @skipif true function test_skipped()
    @test true
  end

  @xfail function test_xfails()
    @test false
  end

  @xfail function test_xpasses()
    @test true
  end

  @parameterize 1:4 function test_parameterized(x)
    @test x<3
  end

end

Running the file will run the tests and you will get this output:

Running 10 tests 100%|##############################| Time: 0:00:01

Tests:
======

ExampleTests.test_not_skipped PASSED
ExampleTests.test_one PASSED
ExampleTests.test_parameterized[1] PASSED
ExampleTests.test_parameterized[2] PASSED
ExampleTests.test_parameterized[3] FAILED
ExampleTests.test_parameterized[4] FAILED
ExampleTests.test_skipped SKIPPED
ExampleTests.test_two FAILED
ExampleTests.test_xfails XFAILED
ExampleTests.test_xpasses XPASSED

=================================== Failures ===================================

---------------------- ExampleTests.test_parameterized[3] ---------------------

test failed: :((x<3))
 in error at error.jl:21
 in default_handler at test.jl:19
 in do_test at test.jl:39

 --------------------------------------------------------------------------------

---------------------- ExampleTests.test_parameterized[4] ---------------------

test failed: :((x<3))
 in error at error.jl:21
 in default_handler at test.jl:19
 in do_test at test.jl:39

--------------------------------------------------------------------------------

----------------------------- ExampleTests.test_two ----------------------------

test failed: false
 in error at error.jl:21
 in default_handler at test.jl:19

Captured Output:
================

seen
also seen

--------------------------------------------------------------------------------    

================ 3 failed 4 passed 1 skipped 1 xfailed 1 xpassed ===============

But you can also run the file along with many others by putting them under the same directory (sub directories work too) and running them all together with:

using RunTests
exit(run_tests(<path_to_directory_containing_tests>))

When you run many test files together, like this, all their tests are pooled and you get one report for them all. If you don't specify a directory run_tests will default to running tests from the "test" folder.

RunTests.jl is extensible, in fact @xfail, @skipif and @parameterize are implemented as extensions. You can extend RunTests.jl to add further types of tests or categories of test result.

Download Details:

Author: Burrowsa
Source Code: https://github.com/burrowsa/RunTests.jl 
License: View license

#julia #run #tests #framework 


Automated integrated Regression Tests for Graphics Libraries

VisualRegressionTests.jl

Easy regression testing for visual packages. Automated tests compare similarity between a newly generated image and a reference image using the Images package. While in interactive mode, the tests can optionally pop up a Gtk GUI window showing a side-by-side comparison of the test and reference image, and then optionally overwrite the reference image with the test image. This allows for straightforward regression testing of image data, even when the "correct" images change over time.

Usage:

Two macros are provided that can be used to perform visual regression. The first macro is for general visual objects:

@visualtest testfun refimg popup tol

where:

  • testfun is a function that takes a filename as input, produces a visual, and saves it to disk:
function testfun(fname)
  visual = produce() # produce some visual object
  save(fname, visual) # save visual to file using filename
end

  • refimg is the filename where the reference image for regression testing is saved

  • popup tells whether or not a Gtk popup window should be shown in case of mismatch (defaults to true)

  • tol is the tolerance of the comparison (defaults to 0.02)

The second macro is for plots generated with Plots.jl:

@plottest plotfun refimg popup tol

where the only difference is in the plotfun function. In this case, the function should take no argument, and produce a plot, without saving it. The macro will take care of saving the image as a PNG in the disk. Alternatively, the plotfun argument can be an entire sequence of commands (i.e. a function body):

@plottest begin
  plot([1.,2.,3.])
  plot!([3.,2.,1.])
  # ...
end "foo.png"

Example GUI popup:

popup

Download Details:

Author: JuliaPlots
Source Code: https://github.com/JuliaPlots/VisualRegressionTests.jl 
License: View license

#julia #tests #graphic 


Guard-minitest: Guard::Minitest Automatically Run Your Tests

Guard::Minitest 

Guard::Minitest allows you to automatically & intelligently launch tests with the minitest framework when files are modified.

  • Compatible with minitest >= 3.0 (optimal support for 5.x).
  • Tested against Ruby 1.9.3, 2.0.0, JRuby and Rubinius (1.9 mode).

IMPORTANT NOTE: guard-minitest does not depend on guard due to obscure issues - you must either install guard first or add it explicitly in your Gemfile (see: 131 for details)

Install

Please be sure to have Guard installed before you continue.

The simplest way to install Guard::Minitest is to use Bundler.

Add Guard::Minitest to your Gemfile:

group :development do
  gem 'guard' # NOTE: this is necessary in newer versions
  gem 'guard-minitest'
end

and install it by running Bundler:

$ bundle

Add guard definition to your Guardfile by running the following command:

guard init minitest

Ruby on Rails

Spring

Due to complexities in how arguments are handled when running tests for selected files, it's best to use the following Spring command:

guard "minitest", spring: "bin/rails test" do
  # ...
end

(For details see issue #130).

Rails gem dependencies

Ruby on Rails lazy loads gems as needed in its test suite. As a result Guard::Minitest may not be able to run all tests until the gem dependencies are resolved.

To solve the issue either add the missing dependencies or remove the tests.

Example:

Specify ruby-prof as application's dependency in Gemfile to run benchmarks.

Rails automatically generates a performance test stub in the test/performance directory which can trigger this error. Either add ruby-prof to your Gemfile (inside the test group):

group :test do
   gem 'ruby-prof'
end

Or remove the test (or even the test/performance directory if it isn't necessary).

Usage

Please read Guard usage doc

Guardfile

Guard::Minitest can be adapted to all kinds of projects. Please read the Guard documentation for more info about the Guardfile DSL.

Standard Guardfile when using Minitest::Unit

guard :minitest do
  watch(%r{^test/(.*)\/?test_(.*)\.rb$})
  watch(%r{^lib/(.*/)?([^/]+)\.rb$})     { |m| "test/#{m[1]}test_#{m[2]}.rb" }
  watch(%r{^test/test_helper\.rb$})      { 'test' }
end

Standard Guardfile when using Minitest::Spec

guard :minitest do
  watch(%r{^spec/(.*)_spec\.rb$})
  watch(%r{^lib/(.+)\.rb$})         { |m| "spec/#{m[1]}_spec.rb" }
  watch(%r{^spec/spec_helper\.rb$}) { 'spec' }
end

Options

List of available options

all_on_start: false               # run all tests in group on startup, default: true
all_after_pass: true              # run all tests in group after changed specs pass, default: false
cli: '--test'                     # pass arbitrary Minitest CLI arguments, default: ''
test_folders: ['tests']           # specify an array of paths that contain test files, default: %w[test spec]
include: ['lib']                  # specify an array of include paths to the command that runs the tests
test_file_patterns: %w[test_*.rb] # specify an array of patterns that test files must match in order to be run, default: %w[*_test.rb test_*.rb *_spec.rb]
spring: true                      # enable spring support, default: false
zeus: true                        # enable zeus support; default: false
drb: true                         # enable DRb support, default: false
bundler: false                    # don't use "bundle exec" to run the minitest command, default: true
rubygems: true                    # require rubygems when running the minitest command (only if bundler is disabled), default: false
env: {}                           # specify some environment variables to be set when the test command is invoked, default: {}
all_env: {}                       # specify additional environment variables to be set when all tests are being run, default: false
autorun: false                    # require 'minitest/autorun' automatically, default: true

Options usage examples

:test_folders and :test_file_patterns

You can change the default location of test files using the :test_folders option and change the pattern of test files using the :test_file_patterns option:

guard :minitest, test_folders: 'test/unit', test_file_patterns: '*_test.rb' do
  # ...
end

:cli

You can pass any of the standard MiniTest CLI options using the :cli option:

guard :minitest, cli: '--seed 123456 --verbose' do
  # ...
end

:spring

Spring is supported (Ruby 1.9.X / Rails 3.2+ only), but you must enable it:

guard :minitest, spring: true do
  # ...
end

Since version 2.3.0, the default Spring command is bin/rake test, making the integration with your Rails >= 4.1 app effortless.

If you're using an older version of Rails (or no Rails at all), you might want to customize the Spring command, e.g.:

guard :minitest, spring: 'spring rake test' do
  # ...
end

:zeus

Zeus is supported, but you must enable it. Please note that notification support is very basic when using Zeus: the zeus client exit status is evaluated and a Guard :success or :failed notification is triggered, but it does not include the test results.

If you're interested in improving it, please open a new issue.

If your test helper matches the test_file_patterns, it can lead to problems as guard-minitest will submit the test helper itself to the zeus test command when running all tests. For example, if the test helper is called test/test_helper.rb it will match test_*.rb. In this case you can either change the test_file_patterns or rename the test helper.

guard :minitest, zeus: true do
  # ...
end

:drb

Spork / spork-testunit is supported, but you must enable it:

guard :minitest, drb: true do
  # ...
end

The drb test runner honors the :include option, but does not (unlike the default runner) automatically include :test_folders. If you want to include the test paths, you must explicitly add them to :include.

Development

Pull requests are very welcome! Please try to follow these simple rules if applicable:

  • Please create a topic branch for every separate change you make.
  • Make sure your patches are well tested. All specs run by Travis CI must pass.
  • Update the README.
  • Please do not change the version number.

For questions please join us in our Google group or on #guard (irc.freenode.net).

Download Details:

Author: Guard
Source Code: https://github.com/guard/guard-minitest 
License: MIT license

#ruby #tests 


HypothesisTests.jl: Hypothesis Tests for Julia

HypothesisTests.jl

HypothesisTests.jl is a Julia package that implements a wide range of hypothesis tests.

Quick start

Some examples:

using HypothesisTests

pvalue(OneSampleTTest(x))
pvalue(OneSampleTTest(x), tail=:left)
pvalue(OneSampleTTest(x), tail=:right)
confint(OneSampleTTest(x))
confint(OneSampleTTest(x, tail=:left))
confint(OneSampleTTest(x, tail=:right))
OneSampleTTest(x).t
OneSampleTTest(x).df

pvalue(OneSampleTTest(x, y))
pvalue(EqualVarianceTTest(x, y))
pvalue(UnequalVarianceTTest(x, y))

pvalue(MannWhitneyUTest(x, y))
pvalue(SignedRankTest(x, y))
pvalue(SignedRankTest(x))


Author: JuliaStats
Source Code: https://github.com/JuliaStats/HypothesisTests.jl 
License: View license

#julia #statistics #tests #hacktoberfest 


Jest-test-gen: CLI tool To Generate A Test File with Test Scaffold

JestTestGen 

Automates creation of initial unit test files taking dependencies into account.

Supported exports:

  • React Functional components 🆕
  • React Class based components 🆕
  • ES6 Classes default export or named exports
  • Exported named functions and arrow functions
  • Exported POJOs with methods
  • Async functions and methods

This tool will take a js/ts file as input and generate a jest unit test file next to it, with all imports mocked and test stubs for every exported class method and function.

This project was inspired by, and started as a fork of, jasmine-unit-test-generator.

Preview

Basic ES6 Class example:

Basic

React Component example:

ReactComponent

Usage

Installation

run npm i -g jest-test-gen

Basic Usage

run jest-test-gen <path-to-file>

TODO

  • Custom test output for React components
  • Enhance jest.mock support
  • TS unit test output for Typescript sources

Development

It's probably best to:

  • add an input file in spec/fixtures folder test.js
  • add expected output file, e.g. expected.test.js
  • link them in integration.spec.ts

Alternatively, you can:

  • run npm link
  • run npm run build:dev
  • run jest-test-gen <option> in your project of choice

Release

run npm run build, then run npm publish

Author: Egm0121
Source Code: https://github.com/egm0121/jest-test-gen 
License: View license

#javascript #typescript #jest #tests 


Jest-dynamodb: Jest Preset for DynamoDB Local Server

jest-dynamodb

Jest preset to run DynamoDB Local

Usage

0. Install

$ yarn add @shelf/jest-dynamodb --dev

Make sure aws-sdk is installed as a peer dependency and that a Java runtime is available for running DynamoDBLocal.jar.

1. Create jest.config.js

module.exports = {
  preset: '@shelf/jest-dynamodb'
};

2. Create jest-dynamodb-config.js

2.1 Properties

tables

  • Type: object[]
  • Required: true

Array of createTable params.

port

  • Type: number
  • Required: false

Port number. The default port number is 8000.

options

  • Type: string[]
  • Required: false

Additional arguments for dynamodb-local. The default value is ['-sharedDb'].

clientConfig

  • Type: object
  • Required: false

Constructor params of DynamoDB client.

installerConfig

Type: {installPath?: string, downloadUrl?: string}

Required: false

installPath defines the location where dynamodb-local is installed or will be installed.

downloadUrl defines the url of dynamodb-local package.

The default value is defined at https://github.com/rynop/dynamodb-local/blob/2e6c1cb2edde4de0dc51a71c193c510b939d4352/index.js#L16-L19

2.2 Examples

You can set up tables as an object:

module.exports = {
  tables: [
    {
      TableName: `files`,
      KeySchema: [{AttributeName: 'id', KeyType: 'HASH'}],
      AttributeDefinitions: [{AttributeName: 'id', AttributeType: 'S'}],
      ProvisionedThroughput: {ReadCapacityUnits: 1, WriteCapacityUnits: 1}
    }
    // etc
  ],
  port: 8000
};

Or as an async function (particularly useful when resolving DynamoDB setup dynamically from serverless.yml):

module.exports = async () => {
  const serverless = new (require('serverless'))();
  // If using monorepo where DynamoDB serverless.yml is in another directory
  // const serverless = new (require('serverless'))({ servicePath: '../../../core/data' });

  await serverless.init();
  const service = await serverless.variables.populateService();
  const resources = service.resources.filter(r => Object.keys(r).includes('Resources'))[0];

  const tables = Object.keys(resources)
    .map(name => resources[name])
    .filter(r => r.Type === 'AWS::DynamoDB::Table')
    .map(r => r.Properties);

  return {
    tables,
    port: 8000
  };
};

Or read table definitions from a CloudFormation template (example handles a !Sub on TableName, i.e. TableName: !Sub "${env}-users" ):

const yaml = require('js-yaml');
const fs = require('fs');
const {CLOUDFORMATION_SCHEMA} = require('cloudformation-js-yaml-schema');

module.exports = async () => {
  const cf = yaml.safeLoad(fs.readFileSync('../cf-templates/example-stack.yaml', 'utf8'), {
    schema: CLOUDFORMATION_SCHEMA
  });
  var tables = [];
  Object.keys(cf.Resources).forEach(item => {
    tables.push(cf.Resources[item]);
  });

  tables = tables
    .filter(r => r.Type === 'AWS::DynamoDB::Table')
    .map(r => {
      let table = r.Properties;
      if (typeof table.TableName === 'object') {
        table.TableName = table.TableName.data.replace('${env}', 'test');
      }
      delete table.TimeToLiveSpecification; //errors on dynamo-local
      return table;
    });

  return {
    tables,
    port: 8000
  };
};

3.1 Configure DynamoDB client (from aws-sdk v2)

const {DocumentClient} = require('aws-sdk/clients/dynamodb');

const isTest = process.env.JEST_WORKER_ID;
const config = {
  convertEmptyValues: true,
  ...(isTest && {
    endpoint: 'localhost:8000',
    sslEnabled: false,
    region: 'local-env',
    credentials: {
      accessKeyId: 'fakeMyKeyId',
      secretAccessKey: 'fakeSecretAccessKey'
    }
  })
};

const ddb = new DocumentClient(config);

3.2 Configure DynamoDB client (from aws-sdk v3)

const {DynamoDB} = require('@aws-sdk/client-dynamodb');
const {DynamoDBDocument} = require('@aws-sdk/lib-dynamodb');

const isTest = process.env.JEST_WORKER_ID;

const ddb = DynamoDBDocument.from(
  new DynamoDB({
    ...(isTest && {
      endpoint: 'localhost:8000',
      sslEnabled: false,
      region: 'local-env',
      credentials: {
        accessKeyId: 'fakeMyKeyId',
        secretAccessKey: 'fakeSecretAccessKey'
      }
    })
  }),
  {
    marshallOptions: {
      convertEmptyValues: true
    }
  }
);

4. PROFIT! Write tests

it('should insert item into table', async () => {
  await ddb.put({TableName: 'files', Item: {id: '1', hello: 'world'}}).promise();

  const {Item} = await ddb.get({TableName: 'files', Key: {id: '1'}}).promise();

  expect(Item).toEqual({
    id: '1',
    hello: 'world'
  });
});
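
Note that the .promise() calls above assume the aws-sdk v2 DocumentClient from section 3.1. With the v3 DynamoDBDocument client from section 3.2, the commands return promises directly, so the same test would look roughly like this (a sketch, not from the original README):

it('should insert item into table (v3 client)', async () => {
  // DynamoDBDocument methods return promises directly, no .promise()
  await ddb.put({TableName: 'files', Item: {id: '1', hello: 'world'}});

  const {Item} = await ddb.get({TableName: 'files', Key: {id: '1'}});

  expect(Item).toEqual({
    id: '1',
    hello: 'world'
  });
});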

Monorepo Support

By default, jest-dynamodb-config.js is read from the current working directory. This might not be suitable for monorepos with nested Jest projects, where jest.config.* files live in subdirectories.

If your jest-dynamodb-config.js file is not located at {cwd}/jest-dynamodb-config.js or you are using nested jest projects, you can define the environment variable JEST_DYNAMODB_CONFIG with the absolute path of the respective jest-dynamodb-config.js file.

Example Using JEST_DYNAMODB_CONFIG in nested project

// src/nested/project/jest.config.js
const path = require('path');

// Define path of project level config - extension not required as file will be imported via `require(process.env.JEST_DYNAMODB_CONFIG)`
process.env.JEST_DYNAMODB_CONFIG = path.resolve(__dirname, './jest-dynamodb-config');

module.exports = {
  preset: '@shelf/jest-dynamodb',
  displayName: 'nested-project',
};

Troubleshooting

  • UnknownError: Not Found
  • com.almworks.sqlite4java.Internal log WARNING: [sqlite] cannot open DB[1]:

Alternatives

  • jest-dynalite - a much lighter version which spins up an instance for each runner & doesn't depend on Java


Publish

$ git checkout master
$ yarn version
$ yarn publish
$ git push origin master --tags

Author: Shelfio
Source Code: https://github.com/shelfio/jest-dynamodb 
License: MIT license

#javascript #jest #node #dynamodb #tests 

Lawrence Lesch

Marko-jest: Jest Marko Transformer, Import .marko Files in Jest Tests

Marko Jest  

[DEPRECATED] Jest Marko transformer and rendering test utility.

DEPRECATED ⚠

This module is deprecated and no longer maintained, in favour of the official @marko/jest 🖖

What is this?

Transformer and rendering test library for Marko 4 components with Jest & JSDOM.

  • Renders Marko component on JSDOM
  • Supports rendering and client-side behaviour testing
  • Snapshot testing
  • TypeScript support

Requirements

  • Jest: 23.x
  • Marko: ^4.9.0

Setup

Add marko-jest to your dev dependencies. You can do this with yarn add marko-jest --dev or npm install marko-jest --save-dev.

Register the Marko preprocessor/transformer in your Jest config. This allows Jest to process and compile Marko files. Add the following lines to the Jest transform section:

// package.json or jest config
{
  ...

  "jest": {
    "transform": {
      "^.+\\.marko$": "<rootDir>/node_modules/marko-jest/preprocessor.js"
    },
    ...
  },

  ...
}

Quick Start

Here are the quick steps to test a Marko component with marko-jest:

Require the marko-jest module and use the init function to initiate the Marko component you want to test. This is the way to 'require' a Marko component in test files.

The init function returns a render function which you can use to render the initiated Marko component.

// __tests__/component.spec.js
import * as path from 'path';
import { init } from 'marko-jest';
// or const { init } = require('marko-jest');

// init() requires full path to Marko component
const componentPath = path.resolve(__dirname, '../index.marko');
const { render } = init(componentPath);

describe('test-button', () => {
  ...
});

The render function returns a RenderResult object which allows you to get the component instance. Use the component instance to access its properties (e.g. el, els, or state) or methods (e.g. getEl(), update(), rerender()) for testing.

// __tests__/component.spec.js
import * as path from 'path';
import { init, cleanup } from 'marko-jest';

const componentPath = path.resolve(__dirname, '../index.marko');
const { render } = init(componentPath);

describe('test-button', () => {
  let renderResult;

  afterEach(cleanup);

  describe('on rendering', () => {
    const input = { label: 'Click here' };

    beforeEach(async () => {
      renderResult = await render(input);
    });

    it('should render a link given default input', () => {
      const button = renderResult.component.el.querySelector('a');
      expect(button).toBeDefined();
    });
  });
});

Component Rendering Test

One way to test a component is to test its generated HTML. You can access it from the RenderResult object returned by the render function.

You can use the following methods/property from the RenderResult object:

  • Property component: the component instance. You can access the output HTML element using the Marko component instance's properties (such as el or els) or methods (getEl(key) or getEls(key)).
  • Property container: the test container element, which is a div element. Behind the scenes, the marko-jest render function automatically creates a test container and renders the component inside it.
  • Method getNodes: returns the list of rendered HTML elements. Usually useful for snapshot testing (see the next section).

Once you get the HTML element, you can use any native HTML methods to assert whether a certain element or class exists.

Examples:

// container
it('should render icon DOWN', () => {
  const iconEl = renderResult.container.querySelector('.btn__icon');
  expect(iconEl.innerHTML).toContain('#svg-icon-chevron-down');
});

// component instance with property el
it('should render a link given default input', () => {
  const ctaLink = renderResult.component.el.querySelector('main-cta');
  expect(ctaLink.textContent).toBe('Shop Now');
});

// component instance with getEl() if you have key attribute inside Marko template
it('should render a link given default input', () => {
  const button = component.getEl('main-cta');
  expect(button).toBeDefined();
});

// component instance with getEls()
it('should render benefit links', () => {
  const benefitLinks = component.getEls('benefits');
  expect(benefitLinks.lenth).toBeGreaterThan(2);
});

Accessing Non-Element Nodes

Marko's getEl() and getEls() return HTML elements only, which means they do not return non-element nodes such as text and comment nodes. If you want to access elements and non-elements (e.g. for snapshot testing), you can use RenderResult's getNodes(), which returns an array of all nodes, including HTML elements, text, and comment nodes.

it('should render text node', () => {
  const nodes = renderResult.getNodes();

  expect(nodes[0].nodeType).toEqual(Node.TEXT_NODE);
});

A use case for this is a component which can render a text node without any HTML element as a container:

// span-or-text-component.marko
<span body-only-if(!input.showSpan)>
  ${input.text}
</span>

// test-span-or-text-component.spec.js
import * as path from 'path';
import { init, cleanup } from 'marko-jest';

const { render } = init(path.resolve(__dirname, './resources/body-only-item/index.marko'));

describe('span-or-text component', () => {
  let renderResult;

  afterEach(cleanup);

  it('should render component as a span element', async () => {
    renderResult = await render({ showSpan: true, text: 'test' });
    const nodes = renderResult.getNodes();

    expect(nodes[0].nodeName).toEqual('SPAN');
    expect(nodes[0].nodeType).toEqual(Node.ELEMENT_NODE);
  });

  it('should render component as a text node', async () => {
    renderResult = await render({ showSpan: false, text: 'test' });
    const nodes = renderResult.getNodes();

    expect(nodes[0].nodeName).toEqual('#text');
    expect(nodes[0].nodeType).toEqual(Node.TEXT_NODE);
  });
});

Snapshot testing

You can utilize Jest snapshot testing to test component rendering. RenderResult's getNodes() returns an array of nodes which you can pass to Jest's snapshot feature.

Example:

// __tests__/component.spec.js
import * as path from 'path';
import { init, cleanup } from 'marko-jest';

const componentPath = path.resolve(__dirname, '../index.marko');
const { render } = init(componentPath);

describe('test-button', () => {
  afterEach(cleanup);

  it('should render correctly given default input', async () => {
    const input = { label: 'Click here' };
    const renderResult = await render(input);

    expect(renderResult.getNodes()).toMatchSnapshot();
  });
});

Behaviour Testing

You can test component behaviour (e.g. a click handler) by triggering events through the HTML element.

Example on testing a toggle button:

// index.marko
class {
  onCreate() {
    this.state = {
      clicked: false
    };
  }

  toggleButton() {
    this.state.clicked = !this.state.clicked;
  }
}

<button class="btn" key="rootButton" on-click('toggleButton')>
  <span if(state.clicked)>DONE</span>
  <span else>Click me</span>
</button>

You can access the button element and trigger the click:

// __tests__/index.spec.js
import * as path from 'path';
import { init, cleanup } from 'marko-jest';

const componentPath = path.resolve(__dirname, '../index.marko');
const { render } = init(componentPath);

describe('test-simple-button', () => {
  let component;

  afterEach(cleanup);

  describe('on rendering', () => {
    let mainButton;

    beforeEach(async () => {
      const renderResult = await render({ });

      component = renderResult.component;
      mainButton = component.getEl('rootButton');
    });

    it('should render a button', () => {
      expect(mainButton).toBeTruthy();
    });

    it('should render default label', () => {
      const buttonLabel = mainButton.textContent;
      expect(buttonLabel).toEqual('Click me');
    });

    describe('when clicked', () => {
      beforeEach(() => {
        mainButton.click();
        component.update();
      });

      it('should change the button label', () => {
        const buttonLabel = mainButton.textContent;
        expect(buttonLabel).toEqual('DONE');
      });
    });
  });
});

You can also combine it with snapshot testing:

import * as path from 'path';
import { init, cleanup } from 'marko-jest';

const componentPath = path.resolve(__dirname, '../index.marko');
const { render } = init(componentPath);

describe('test-simple-button', () => {
  let renderResult;
  let component;

  afterEach(cleanup);

  describe('on rendering', () => {
    beforeEach(async () => {
      renderResult = await render({ });
      component = renderResult.component;
    });

    it('should render correctly', () => {
      expect(renderResult.getNodes()).toMatchSnapshot();
    });

    describe('when clicked', () => {
      beforeEach(() => {
        component.getEl('rootButton').click();
        component.update();
      });

      it('should update the element', () => {
        expect(renderResult.getNodes()).toMatchSnapshot();
      });
    });
  });
});

TypeScript Support

The marko-jest module provides a TypeScript type definition. Make sure you also install the type definitions for Marko by adding the @types/marko module to your project.
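
For example, with npm:

npm install --save-dev @types/marko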

Shallow Rendering

marko-jest can do shallow rendering of external components. If you use an external Marko component module/library (such as ebayui-core), you can exclude those components from being rendered deeply by adding the module name to the Jest globals config taglibExcludePackages. marko-jest will use Marko's taglibFinder.excludePackage() to prevent any components from those modules from being rendered.

For example, if you want to do shallow rendering on all components from @ebay/ebayui-core module, add the module name to Jest globals config:

// package.json
{
  ...

  "jest": {
    "transform": {
      ...
    },
    ...
    "globals": {
      "marko-jest": {
        "taglibExcludePackages": [
          "@ebay/ebayui-core",
          "marko-material"
        ]
      }
    }
  },

  ...
}

Now Marko Jest will render your Marko component:

// cta-component.marko
<section>
  <ebay-button priority="primary" on-click('toggleButton')>
    PAY
  </ebay-button>
</section>

As:

<section>
  <ebay-button priority="primary">PAY</ebay-button>
</section>

Instead of

<section>
  <button type="button" class="btn btn--primary">PAY</button>
</section>

One of the advantages of shallow rendering is that it isolates your unit test so you can focus on testing your component instead of the external ones. In the example above, if the ebay-button implementation changes (e.g. a CSS class name changes or a new attribute is added), your snapshot test will not fail.

Current Limitation of marko-jest shallow rendering

  • Shallow rendering affects ALL test suites; you cannot turn it on or off at runtime.
  • You can only do shallow rendering on external modules. Unfortunately, you cannot do shallow rendering on components from the same project. The only workaround so far is to separate your UI components into an external module (npm package) and consume it in your project.

marko-jest APIs

The marko-jest API provides two high-level functions: init and cleanup.

init(fullPathToMarkoComponent: string): InitResult

This is a way to 'require' a Marko component in a test file. It requires the full path to the Marko component.

At the moment, you can't easily require a Marko component on Node.js with JSDOM. By default, when a Marko component is required on Node.js, you only get a server-side-only component. This means you can render the component as HTML, but without any browser-side features such as rendering to the virtual DOM, DOM event handling, or the browser-side lifecycle.

The init function 'tricks' Marko into requiring a component on Node.js as if it were done in the browser. Therefore, the required component will have all browser-side features, including component rendering.

The init function returns an InitResult object which has:

  • property componentClass: Component: the class of the require/init-ed Marko component. Quite useful if you want to spy on Marko component lifecycle methods (see the sketch after this list).
  • function render(input: any): Promise<RenderResult>: asynchronously renders the component using the given input. This returns a promise which will be resolved with an instance of RenderResult.
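
For instance, a rough sketch of spying on a lifecycle method via componentClass might look like this (onMount is just an example; whether spying on the prototype works depends on how your component is defined):

// __tests__/lifecycle.spec.js
import * as path from 'path';
import { init, cleanup } from 'marko-jest';

const componentPath = path.resolve(__dirname, '../index.marko');
const { componentClass, render } = init(componentPath);

describe('lifecycle', () => {
  afterEach(cleanup);

  it('should call onMount after rendering', async () => {
    // assumes the component defines an onMount() method on its class
    const onMountSpy = jest.spyOn(componentClass.prototype, 'onMount');

    await render({});

    expect(onMountSpy).toHaveBeenCalled();
  });
});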

The RenderResult is the result of component rendering, which has:

  • property component: Component: the rendered component instance. Use this instance to access any Marko component properties or methods.
  • property container: HTMLElement: the test container element, which is a div element. Behind the scenes, the marko-jest render function automatically creates a test container and renders the component inside it.
  • method getNodes(): HTMLElement[]: returns the list of rendered nodes. Unlike Marko's getEl() and getEls(), which return HTML elements only, getNodes() returns an array of all nodes, including HTML elements, text, and comment nodes, which is useful e.g. for snapshot testing.

cleanup(): void

Removes all test containers created by the render function. It is strongly recommended to call cleanup in Jest's afterEach.

For more info about the marko-jest API, you can check the TypeScript type definition here.

Known Issues

  • Failed rendering Marko component with custom transformer
  • Limited support of shallow rendering, see Shallow Rendering above or #1

Roadmap

Planned new features and improvements:

  • Better support of shallow and deep rendering.

Contributing

Contributing guidelines are still a WIP, but you're welcome to contribute by creating issues or pull requests.

Author: Abiyasa
Source Code: https://github.com/abiyasa/marko-jest 
License: MIT license

#javascript #jest #tests 

Gordon Taylor

Jest-allure: Generate Allure Report for Jest

Jest-Allure reporting plugin

Add more power to your tests using Jest-Allure. Easily generate nice reports at the end of the execution.


Allure Report

Allure Framework is a flexible, lightweight, multi-language test report tool that not only shows a very concise representation of what has been tested in a neat web report form, but also allows everyone participating in the development process to extract the maximum of useful information from everyday execution of tests.

Installation

yarn add -D jest-allure

or

npm install --save-dev jest-allure

Using jest-circus or jest >= 27?

jest-allure doesn't support jest-circus (but PRs are welcome).

Since jest@27 uses jest-circus as the default test runner, you must update jest.config.js and set:

"testRunner": "jest-jasmine2"

jest -v >24 ?

Then add jest-allure/dist/setup to the setupFilesAfterEnv section of your config.

setupFilesAfterEnv: ["jest-allure/dist/setup"]
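
Putting the two settings together, a minimal jest.config.js for jest >= 27 might look like this (a sketch based on the instructions above):

module.exports = {
  // jest-allure does not support jest-circus, so fall back to jasmine2
  testRunner: 'jest-jasmine2',
  // register the allure reporter setup after the test framework is installed
  setupFilesAfterEnv: ['jest-allure/dist/setup']
};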

jest -v < 24 ?

add reporter to jest.config.js

reporters: ["default", "jest-allure"],

Run your tests and enjoy 🥤🚀


How to get a report

You need to install the Allure CLI in order to obtain a report.
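
One common way to install it is via the npm distribution of the Allure CLI (assuming you want a project-local install):

npm install --save-dev allure-commandline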

To see a report in the browser, run in the console:

allure serve

If you want to generate the HTML version, run in the console:

allure generate

Advanced features

You can add description, screenshots, steps, severity and lots of other fancy stuff to your reports.

A global variable reporter is available in your tests with the following methods:

    description(description: string): this;
    severity(severity: Severity): this;
    epic(epic: string): this;
    feature(feature: string): this;
    story(story: string): this;
    startStep(name: string): this;
    endStep(status?: Status): this;
    addArgument(name: string): this;
    addEnvironment(name: string, value: string): this;
    addAttachment(name: string, buffer: any, type: string): this;
    addLabel(name: string, value: string): this;
    addParameter(paramName: string, name: string, value: string): this;

Example

import { Severity } from "jest-allure/dist/Reporter";
import { Feature } from "somewhere in your project";

describe("Fancy test", () => {
        ...
        
        it("Test your amazing feature", async () => {
            reporter
                .description("Feature should work cool")
                .severity(Severity.Critical)
                .feature(Feature.Betting)
                .story("BOND-007");

            reporter.startStep("Check it's fancy");
            // expect that it's fancy
            reporter.endStep();
            
            reporter.startStep("Check it's cool");
            // expect that it's cool
            reporter.endStep();

            const screenshotBuffer = await page.screenshot();
            reporter.addAttachment("Screenshot", screenshotBuffer, "image/png");
        });
        
        ...
    }
);

What's next

  •  Generate report from Jest results
  •  Add steps support
  •  Add labels support
  •  Add attachments support
  •  Add more examples

Additional projects

visual-unit-tests

jest-allure-image-snapshot

Warning

The jest-allure reporter dynamically configures the "setupTestFrameworkScriptFile" option in the Jest configuration. If you have your own setupTestFrameworkScriptFile file, you need to manually register the Allure reporter; to do so, import jest-allure/dist/setup.

import "jest-allure/dist/setup";

If you have Jest version > 24, just add jest-allure/dist/setup to the setupFilesAfterEnv section of your config.


Examples


Author: zaqqaz
Source Code: https://github.com/zaqqaz/jest-allure 
License: MIT license

#javascript #jest #tests 


Mocha vs Jest Comparison of Testing Tools in 2022

https://www.blog.duomly.com/mocha-vs-jest/

It’s hard to believe that it’s been only 10 years since Jasmine was created. In that time, the JavaScript testing landscape has changed dramatically. 

There are now dozens of options for choosing a testing tool, each with its own advantages and disadvantages. This article will compare two of the most popular options: Mocha and Jest.

#testing #tests #tdd #mocha #jest #react #javascript #typescript #frontend 


Integrate Tests with Jenkins - WebdriverIO #19

In this video, we will integrate our tests with Jenkins. We will set up a new Jenkins job, install JUnit reporting, and run tests in Jenkins.

Timestamps:
0:00 - Introduction
0:20 - JUnit Reporter Installation and Configuration
3:20 - Jenkins Job Setup
7:35 - Run Tests
8:28 - Jenkins JUnit Setup
12:40 - Run Tests with JUnit Reporting
13:45 - Wrap up

In this tutorial series, we will be building a fully functional test automation framework in JavaScript using WebdriverIO and integrating our tests with Mocha, Chai, Allure, BrowserStack, JUnit, and Jenkins.
https://www.youtube.com/playlist?list=PL6AdzyjjD5HBbt9amjf3wIVMaobb28ZYN

All the code is available on Github - https://github.com/automationbro/webdriverio-tutorial

Thanks for watching :)

Automation Bro

#webdriverio #jenkins #tests


Write Tests - WebdriverIO Tutorial | #3

In this video, we will begin to write our tests from scratch. We will cover different ways to access elements, learn about CSS selectors, and create our own custom selectors for our tests.

In this tutorial series, we will be building a fully functional test automation framework in JavaScript using WebdriverIO and integrating our tests with Mocha, Chai, Allure, BrowserStack, JUnit, and Jenkins.
https://www.youtube.com/playlist?list=PL6AdzyjjD5HBbt9amjf3wIVMaobb28ZYN

All the code is available on Github - https://github.com/automationbro/webdriverio-tutorial

Thanks for watching :)

Automation Bro

#webdriverio #tests #javascript
