A Quick and Practical Example of Kafka Testing

1. Introduction

In this tutorial, we will quickly explore some basic to high-level approaches for testing microservice applications built using Kafka. Also, we will learn about the advantages of the declarative way of testing Kafka applications over the traditional/existing way of testing.

For everything explained here, we can find running code examples in the "Conclusion" section of this post.

To keep the tutorial concise, we will demonstrate only the below aspects.

  1. Producer Testing
  2. Consumer Testing
  3. Hooking Both Producer and Consumer Tests
  4. Producing RAW records and JSON records
  5. Consuming RAW records and JSON records
  6. Traditional Testing Challenges.
  7. Advantages of Declarative Style Testing (IEEE Paper)
  8. Combining REST API Testing with Kafka Testing
  9. Spinning Up Kafka in Docker - Single Node and Multi-Node

I strongly recommend reading through the Minimum Things We Need To Know For Kafka Testing post before proceeding with this tutorial.

For more details about Kafka streams and how to develop a streaming application, please visit Developing Streaming Applications Tutorial by Confluent.

2. Kafka Testing Challenges

The difficult part is that one part of the application logic or a DB procedure keeps producing records to a topic, while another part of the application keeps consuming the records and continuously processing them based on business rules.

The records, partitions, offsets, exception scenarios, etc. keep on changing, making it difficult to think in terms of what to test, when to test, and how to test.

3. Testing Solution Approach

We can go for an end-to-end testing approach which validates producing, consuming, and DLQ records as well as the application processing logic. This will give us good confidence in releasing our application to higher environments.

We can do this by bringing up Kafka in dockerized containers or by pointing our tests to any integrated test environment somewhere in our Kubernetes-Kafka cluster or any other microservices infrastructure.

Here we pick a functionality, produce the desired record and validate it, then consume the intended record and validate it, alongside the HTTP REST or SOAP API validation, which helps keep our tests much cleaner and less noisy.

4. Producer Testing

When we produce a record to a topic, we can verify the acknowledgment from the Kafka broker. This acknowledgment comes in the form of recordMetadata.

For example, visualizing the "recordMetaData" as JSON would look like:

Response from the broker after a successful "produce".
{
    "recordMetadata": {
        "offset": 0,
        "timestamp": 1547760760264,
        "serializedKeySize": 13,
        "serializedValueSize": 34,
        "topicPartition": {
            "hash": 749715182,
            "partition": 0,   //<--- To which partition the record landed
            "topic": "demo-topic"
        }
    }
}

5. Consumer Testing

When we read or consume from a topic we can verify the record(s) fetched from the topics. Here we can validate/assert some of the metadata too, but most of the time you might need to deal with the records only (not the metadata).

There may be times, for instance, when we validate only the number of records, i.e. the size only, not the actual records.

For example, visualizing the fetched "records" as JSON would look like:

Records fetched after a successful "consume".
{
    "records": [
        {
            "topic": "demo-topic",
            "key": "1547792460796",
            "value": "Hello World 1"
        },
        {
            // ...
        }
    ]
}

The full record(s) with the meta-data information looks like what we've got below, which we can also validate/assert if we have a test requirement to do so.

The fetched records with the metadata from the broker.
{
    "records": [
        {
            "topic": "demo-topic",
            "partition": 0,
            "offset": 3,
            "key": "1547792460796", //<---- Record key
            "value": "Hello World" //<---- Record value
        }
    ]
}

6. Producer and Consumer Testing

In the same end-to-end test, we can perform two steps like below for the same record(s):

  • Step 1: Produce to the topic "demo-topic" and validate the received recordMetadata from the broker.
    For example, produce a record with "key": "1234", "value": "Hello World".
  • Step 2: Consume from the same topic "demo-topic" and validate the records.
    Assert that the same record is present in the response, i.e. "key": "1234", "value": "Hello World".

We might have consumed more than one record if they were produced to the same topic before we started consuming.

7. Challenges With the Traditional Style of Testing
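
Expressed in the declarative style discussed in the coming sections, these two steps can be combined into a single scenario. The sketch below assumes the same scenario/steps structure shown in the REST-plus-Kafka example later in this tutorial; scenario and step names are illustrative.

{
    "scenarioName": "produce_then_consume_same_record",
    "steps": [
        {
            "name": "produce_step",
            "url": "kafka-topic:demo-topic",
            "operation": "produce",
            "request": {
                "records": [
                    {
                        "key": "1234",
                        "value": "Hello World"
                    }
                ]
            },
            "assertions": {
                "status": "Ok",
                "recordMetadata": "$NOT.NULL"
            }
        },
        {
            "name": "consume_step",
            "url": "kafka-topic:demo-topic",
            "operation": "consume",
            "request": { },
            "assertions": {
                "records": [
                    {
                        "key": "1234",
                        "value": "Hello World"
                    }
                ]
            }
        }
    ]
}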

Point 1

In the first place, there is nothing wrong with the traditional style. But it has a steep learning curve when it comes to dealing with Kafka brokers.

For instance, when we deal with the brokers, we need to get thoroughly acquainted with the Kafka client APIs, e.g. key SerDe, value SerDe, timeouts while polling records, commitSync, record types, etc., and many more things at the API level.

For functional testing, we don't really need to know these concepts at the API level.

Point 2

Our test code gets tightly coupled with the client API code. This means we introduce many challenges in maintaining the test suites along with the test framework's code.

8. Advantages of the Declarative Style of Testing

To draw an analogy, the interesting way 'docker-compose' works is called the "declarative way." We tell the Docker Compose framework (in a YAML file) to spin up certain things at certain ports, link certain services to other services, etc., and things are done for us by the framework. We can drive our tests in a similar declarative fashion, which we are going to see in the next sections.

How neat is that? Just think how much of a hassle it would be if we had to write code/shell scripts for the same repetitive tasks.

Point 1

In the declarative style, we can completely skip the API level that deals with brokers and only focus on test scenarios. But still, we have the flexibility to use the Kafka Client APIs and to add our own flavors to it.

Point 2

This contributes to finding more defects because we don't spend time in writing code, but spend more time in writing tests and covering more business scenarios/user journeys.

How?

  • Here, we tell the test to use the Kafka-Topic which is our "end point" or "url"

    i.e. "url": "kafka-topic: demo-topic"

  • Next, we tell the test to use operation "produce"

    i.e. "operation":"produce"

  • Next, we need to send the records to the request payload:

"request": {
    "records": [
        {
            "key": "KEY-1234",
            "value": "Hello World"
        }
    ]
}

  • Then, we tell the test that we are expecting the response "status" to be returned as "Ok" and some record metadata from the broker, i.e. a not-null value. This is the "assertions" part of our test.
"assertions": {
    "status" : "Ok",
    "recordMetadata" : "$NOT.NULL"
}

  • Note: We can even assert all of the 'recordMetadata' at once, which we will see in the later sections. For now, let's keep it simple and proceed.
  • Once we are done, our full test will look like the code below:
{
    "name": "produce_a_record",
    "url": "kafka-topic:demo-topic",
    "operation": "produce",
    "request": {
        "recordType" : "RAW",
        "records": [
            {
                "key": 101,
                "value": "Hello World"
            }
        ]
    },
    "assertions": {
        "status": "Ok",
        "recordMetadata": "$NOT.NULL"
    }
}

And that's it. We are done with the test case and ready to run.

Now, looking at the test above, anyone can easily figure out what scenario is being tested.
Note that:

  1. We eliminated the coding hassles using the client API to deal with Kafka brokers.
  2. We eliminated the coding hassles of asserting each field key/value by traversing through their object path, parsing request-payloads, parsing response-payloads, etc.

At the same time, we used the JSON comparison feature of the framework to assert the outcome at once, therefore, making the tests a lot easier and cleaner.

We escaped two major hassles while testing.
And, the order of the fields doesn't matter here. The below code is also correct (field order swapped).

"assertions": {
        "recordMetadata": "$NOT.NULL",
        "status": "Ok"
}

9. Running a Single Test Using JUnit

It's super easy. We just need to point our JUnit @Test method to the JSON file. That's it really.

@TargetEnv("kafka_servers/kafka_test_server.properties")
@RunWith(ZeroCodeUnitRunner.class)
public class KafkaProduceTest {
    @Test
    @JsonTestCase("kafka/produce/test_kafka_produce.json")
    public void testProduce() throws Exception {
         // No code is needed here. What? 
         // Where are the 'assertions' gone ?
    }
}

In the above code:

  • 'test_kafka_produce.json' is the test case which contains the JSON step(s) we talked about earlier.
  • 'kafka_test_server.properties' contains the "Broker" details and producer/consumer configs.
  • '@RunWith(ZeroCodeUnitRunner.class)' is a JUnit custom runner to run the test.

Also, we can use the Suite runner or Package runner to run the entire test suite.

Please visit these RAW and JSON examples and explanations.

10. Writing Our First Producer Test

We learned in the above section how to produce a record and assert the broker response/acknowledgment.

But we don't have to stop there. We can go further and ask our test to assert the "recordMetadata" field-by-field to verify it was written to the correct "partition" of the correct "topic" and much more, as shown below.

"assertions": {
    "status": "Ok",
    "recordMetadata": {
        "offset": 0,   //<--- This is the record 'offset' in the partition
        "topicPartition": {
            "partition": 0,   //<--- This is the partition number
            "topic": "demo-topic"  //<--- This is the topic name
        }
    }
}

That's it. In the above "assertions" block, we finished comparing the expected vs. actual values.

Note: The comparisons and assertions are done instantly. The "assertions" block is compared against the actual "status" and "recordMetadata" received from the Kafka broker. The order of the fields doesn't really matter here. The test only fails if the field values or structures don't match.
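
Folding the field-by-field assertions into a complete producer test gives a sketch like the one below (the key, offset, and partition values are illustrative):

{
    "name": "produce_and_verify_metadata",
    "url": "kafka-topic:demo-topic",
    "operation": "produce",
    "request": {
        "recordType" : "RAW",
        "records": [
            {
                "key": "KEY-1234",
                "value": "Hello World"
            }
        ]
    },
    "assertions": {
        "status": "Ok",
        "recordMetadata": {
            "offset": 0,
            "topicPartition": {
                "partition": 0,
                "topic": "demo-topic"
            }
        }
    }
}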

11. Writing Our First Consumer Test

Similarly, to write a "consumer" test, we need to know:

  • The topic name 'demo-topic' is our "end point," a.k.a. "url", i.e. "url": "kafka-topic:demo-topic".
  • The operation, i.e. 'consume': "operation": "consume".
  • While consuming message(s) from the topic, we need to send the request as below: "request": { }

The above 'request' means to do nothing but consume without doing a 'commit'.

Or we can mention in our test to do certain things while consuming or after consuming the records.

"request": {
    "consumerLocalConfigs": {
        "commitSync": true,
        "maxNoOfRetryPollsOrTimeouts": 3
    }
}

  • "commitSync": true: Here, we are telling the test to do a commitSync after consuming the messages; that means it won't read the same messages again the next time we poll. It will only read new messages that arrive on the topic.
  • "maxNoOfRetryPollsOrTimeouts": 3: Here, we are telling the test to retry the poll a maximum of three times, then stop polling. If we have more records, we can set this to a larger value. The default value is 1.
  • "pollingTime": 500: Here, we are telling the test to poll for 500 milliseconds each time it polls. The default value is 100 milliseconds if you skip this flag.
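
Putting these configs into a complete consumer test, a minimal sketch could look like this (the expected record is illustrative, and the sketch assumes a matching record was produced earlier and is still unread on the topic):

{
    "name": "consume_a_record",
    "url": "kafka-topic:demo-topic",
    "operation": "consume",
    "request": {
        "consumerLocalConfigs": {
            "commitSync": true,
            "maxNoOfRetryPollsOrTimeouts": 3,
            "pollingTime": 500
        }
    },
    "assertions": {
        "records": [
            {
                "key": "1234",
                "value": "Hello World"
            }
        ]
    }
}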

Visit this page for All configurable keys - ConsumerLocalConfigs from the source code.

Visit the HelloWorld Kafka examples repo to try it at home.

Note: These config values can be set in the properties file globally for all the tests, which means they will apply to all the tests in our test pack. Also, we can override any of the configs for a particular test or tests inside the suite. Hence, it gives us the flexibility to cover all kinds of test scenarios.

Well, setting up these properties is not a big deal, and we have to do this to externalize them anyway. Hence, the simpler they are kept, the better for us! But we must get an idea of what goes inside them.

We will discuss this in the coming sections.

12. Combining REST API Testing With Kafka Testing

Most of the time in a microservices architecture, we build applications using RESTful services, SOAP services (probably legacy), and Kafka.

Therefore, we need to cover all API contract validations in our end-to-end test scenarios, including Kafka.

But it's not a big deal as, after all, nothing changes here, except we just point our "url" to the HTTP endpoint for our REST or SOAP service, then manipulate payload/assertions block accordingly. That's it really.

Please visit Combining Kafka testing with REST API testing for a full step-by-step approach.

Suppose we have a use case:

Step 1: Kafka call - We send an "Address" record with id "id-lon-123" to the "address-topic," which eventually gets processed and written to the "Address" database (e.g. Postgres or Hadoop). We then assert the broker acknowledgment.

Step 2: REST call - Query (GET) the "Address" REST API by using "/api/v1/addresses/id-lon-123" and assert the response.

The corresponding test case looks like below.

{
    "scenarioName": "Kafka and REST api validation example",
    "steps": [
        {
            "name": "produce_to_kafka",
            "url": "kafka-topic:people-address",
            "operation": "produce",
            "request": {
                "recordType" : "JSON",
                "records": [
                    {
                        "key": "id-lon-123",
                        "value": {
                            "id": "id-lon-123",
                            "postCode": "UK-BA9"
                        }
                    }
                ]
            },
            "assertions": {
                "status": "Ok",
                "recordMetadata" : "$NOT.NULL"
            }
        },
        {
            "name": "verify_updated_address",
            "url": "/api/v1/addresses/${$.produce_to_kafka.request.records[0].value.id}",
            "operation": "GET",
            "request": {
                "headers": {
                    "X-GOVT-API-KEY": "top-key-only-known-to-secu-cleared"
                }
            },
            "assertions": {
                "status": 200,
                "value": {
                    "id": "${$.produce_to_kafka.request.records[0].value.id}",
                    "postCode": "${$.produce_to_kafka.request.records[0].value.postCode}"
                }
            }
        }
    ]
}

Easy to read! Easy to write!

Field values are reused via JSON path instead of hardcoding them. It's a great time saver!

13. Producing RAW Records vs. JSON Records
  1. In the case of RAW, we simply set:
"recordType" : "RAW",

Then, our test case looks like below:

{
    "name": "produce_a_record",
    "url": "kafka-topic:demo-topic",
    "operation": "produce",
    "request": {
        "recordType" : "RAW",
        "records": [
            {
                "key": 101,
                "value": "Hello World"
            }
        ]
    },
    "assertions": {
        "status": "Ok",
        "recordMetadata": "$NOT.NULL"
    }
}

  2. And for a JSON record, we mention it in the same way:
"recordType" : "JSON"

And, our test case looks like below:

{
    "name": "produce_a_record",
    "url": "kafka-topic:demo-topic",
    "operation": "produce",
    "request": {
        "recordType" : "JSON",
        "records": [
            {
                "key": 101,
                "value": { 
                    "name" : "Jey"
                }
            }
        ]
    },
    "assertions": {
        "status": "Ok",
        "recordMetadata": "$NOT.NULL"
    }
}

Note: The "value" section has a JSON record this time.

14. Kafka in a Docker Container

Ideally, this section should have been at the beginning. But what's the point of just running a docker-compose file without even knowing the outcome of it? We have kept it here to make everyone's life easy!

We can find the docker-compose files and the step-by-step instructions below.

  1. Single Node Kafka in Docker
  2. Multi-Node Kafka Cluster in Docker
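
For orientation, a single-node broker setup in docker-compose typically looks something like the sketch below. The image names, versions, and environment values here are assumptions for illustration; prefer the docker-compose files linked above.

version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

Then a 'docker-compose up' brings the broker up on localhost:9092 for the tests to point at.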

15. Conclusion

In this tutorial, we learned some of the fundamental aspects of Kafka testing in a declarative way. Also, we learned how easily we can test microservices involving both Kafka and REST.

Using this approach, we have tested and validated clustered Kafka data pipelines to Hadoop as well as HTTP REST/SOAP APIs deployed in Kubernetes-orchestrated pods. We found this approach very straightforward, and it reduced the complexity of maintaining and promoting the artifacts to the higher environments.

With this approach, we were able to cover a lot of test scenarios with full clarity and find more defects in the early stages of the development cycle, even without writing any test code. This helped us to build up and maintain our regression pack in an easy and clean manner.

The complete source code of these examples is available in the GitHub repo (Try at Home) linked below.

To run any test(s), we can directly navigate to their corresponding JUnit @Test under 'src/test/java'. We need to bring up Docker with Kafka prior to running any JUnit tests.

Use "kafka-schema-registry.yml" (see the Wiki) to be able to run all the tests.

If you found this page helpful for testing Kafka and HTTP APIs, please leave a "star" on GitHub!

Happy testing!

Performance Testing: How to Load Test

Check your performance trend over many iterations of the same set of tests. Run a stress test to see how your application performs under a high load. Then iterate your stress test.

Objective

The purpose of this article is to help you design and run load tests 👍

Why Learn?

As developers, we already have the foundational knowledge, and with a little effort we could expand our skill set.

  • Your company cannot afford to hire performance engineers
  • Not enough testers compared to developers
  • The skill & knowledge could help you write better and scalable code
  • Less dependent on others' expertise

Why Load Testing?

While unit & integration tests ensure code is functionally correct, load testing measures its performance, which is equally important.

Only a load test can shed light on concurrency issues: whether the database queries make good use of indexes instead of full table scans, where the bottlenecks are, whether the application scales efficiently, what the application's response time and throughput are, and so on.

Getting Started Apache JMeter

In this section we will design and run Apache JMeter load tests.

Environment Setup

For the environment, either find a suitable online resource (not recommended), come up with your own simple service (Node, Python, whatever), or simply use the web service provided in this article.

We will be using a simple Java-based Spring Boot web service that exposes four (4) endpoints. The requirements are Java 1.8 and Apache Maven.

git clone https://github.com/rhamedy/tinyservice4loadtest.git
cd tinyservice4loadtest
mvn spring-boot:run

Apache JMeter

Please go ahead and install Apache JMeter from the download site, unzip it, and execute the following command

apache-jmeter/bin/jmeter.sh       // Linux & MacOS
apache-jmeter/bin/jmeter.bat      // Windows

Design a Test Plan for Apache JMeter

Let’s load test the following APIs

http://localhost:8080/students/list  - [GET] List students
http://localhost:8080/students/id    - [GET] Get student by id
http://localhost:8080/students       - [POST] Create student
http://localhost:8080/students/id    - [DELETE] Delete student by id

All the sample tests for above endpoints are available in the GitHub repository.

Step 1

Right click on the test plan and pick Add > Threads (Users) > Thread Group. A test plan must have at least 1 Thread Group.

  1. The Number of Threads (users)
  2. The Ramp-Up period (in seconds). How long to reach the max users?
  3. How many times or for how long to run the test

Step 2

Let’s specify what to test. Right Click on ThreadGroup and select Add > Config Elements > HTTP Request Defaults option.

Config Elements are useful if you wish to share configuration among one or more requests, for example, server address, port number, token, etc.

Let’s fill out the HTTP Request Defaults Config element

Also, let’s add an HTTP Header Manager via Add > Config Elements > HTTP Header Manager for the Content-Type header

Step 3

Let’s configure the HTTP requests. Right Click on the ThreadGroup and select Add > Sampler > HTTP Request

  1. What is the method type, i.e. GET or POST, etc.
  2. What is the API path, i.e. /students/list or /students, etc.

Right click to duplicate a request and update it.

Step 4

The Listeners are used to collect results. Right Click on ThreadGroup and select Add > Listener > Summary Report option.

To put the test plan in words: we create an Apache JMeter Test Plan to load test two APIs with 50 users, with a ramp-up, for a duration of 30 seconds.

If you have the tinyservice4loadtest (or your own service) running, then let’s hit the play button and see the results in the Summary Report

Run Apache JMeter Test Using CLI

The GUI is not recommended for running complex tests. Let’s open a complex sample test from here.

The above test plan has more elements

  • The Random Variable generates a value between x and y
  • The Loop Controller executes the content of loop x times

To generate a meaningful report, we need user.properties (Source) in the directory where the .jmx test is.

jmeter.reportgenerator.report_title=Apache JMeter Dashboard
jmeter.reportgenerator.overall_granularity=60000
jmeter.reportgenerator.graph.responseTimeDistribution.property.set_granularity=100
jmeter.reportgenerator.apdex_satisfied_threshold=1500
jmeter.reportgenerator.apdex_tolerated_threshold=3000
jmeter.reportgenerator.exporter.html.property.output_dir=/tmp/test-
jmeter.reportgenerator.exporter.html.filters_only_sample_series=true

Run the test script using the following command (the output-directory should be empty)

jmeter.sh -n -t loadtest.jmx -l log.jtl -e -o output-directory

In the above command, -n stands for no GUI, -t indicates the script loadtest.jmx, -l specifies the log file log.jtl, and -e and -o are for report generation.

The output-directory will contain a bunch of files, including an index.html that opens the graphical results of the test, as shown below.

In this graph, the left side is APDEX and the right side is the Request summary. The red indicates our 404 errors and the green indicates successful 200 requests.

Some numbers relating to Number of samples, Response time, Throughput

Most importantly the Response Time and Throughput

Lastly, it’s worth mentioning that Apache JMeter can be configured to listen to a browser’s activity and capture the network requests.

Getting Started with Taurus

Now that we know how to use Apache JMeter to run a basic load test, let’s explore the open-source framework Taurus. In short, one of the reasons for the birth of Taurus was that Apache JMeter has a steep learning curve, and Taurus makes things a lot simpler.

Taurus is an abstraction layer (or a wrapper) on top of Apache JMeter, which means you can run an Apache JMeter script using Taurus. So go ahead and install Taurus using the easy-to-follow install instructions

The Taurus scripts can be written in YAML or JSON using the following blocks

Scenarios is basically where one or more requests are defined. For each scenario, an execution is defined with props such as number of users, duration, ramp-up period, and so on. The modules block allows us to configure the executor, which could be Apache JMeter, Selenium, and so on. Likewise, the reporting block allows configuring how the report should be generated, i.e. CSV, live reporting in the console, or pushing the results to the BlazeMeter website.

scenarios: 
 StudentList: 
  requests: 
   - url: http://localhost:8080/students/list
     label: Student List
execution: 
 - scenario: StudentList 
   concurrency: 15
   ramp-up: 2s
   hold-for: 10s
reporting:
 - module: console 
 - module: final-stats 
   summary: true 
   percentiles: true
   test-duration: true 
   dump-csv: single_scenario_single_req.csv

Load test the /students/list API, reaching 15 users within 2s (ramp-up), for a duration of 10s, and display live results in the console as well as in a CSV file.

To run the Taurus test, simply run the command bzt test.yaml

In a Taurus test you can also configure a scenario to point to an Apache JMeter script and override execution and other parameters.

scenarios:
  JMeterExample:
    script: student_crud_jmeter.jmx
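
For instance, a self-contained config that reuses a JMeter script with overridden load settings might look like this sketch (the concurrency and duration values are illustrative):

scenarios:
  JMeterExample:
    script: student_crud_jmeter.jmx
execution:
  - scenario: JMeterExample
    concurrency: 50
    ramp-up: 30s
    hold-for: 1m

Taurus then drives JMeter under the hood with these settings instead of the ones baked into the .jmx file.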

Taurus seems to be a very interesting framework, and it is worth checking out. It is very well documented here.

Conclusion

I highly recommend checking out the Apache JMeter and Taurus documentation if you wish to learn more techniques and tricks.

Thanks, Keep Visiting. If you liked this post, share it with all of your programming buddies!


How to use React Test Renderer to test React components

Test Driven Development (TDD) with React Test Renderer: Find out how to use React Test Renderer to test React components

It’s no secret that Enzyme has become the de facto standard for React components testing, but there are other good options around.

For example: React Test Renderer.

I personally like Test Renderer because of the way it works–it renders React components into pure JavaScript objects that are easy to use and understand.

Another advantage of React Test Renderer is that it is maintained by a core team at Facebook and is always up-to-date.

React Test Renderer has great documentation, so I won’t duplicate it. Instead, I’d like to illustrate a few of the most common use cases in an example with a Test Driven Development (TDD) approach.

Setup

Test Renderer has a really easy setup process–just install the lib and you’re ready to go:

npm install --save-dev react-test-renderer

Testing with TDD

Ordinarily we’d need a component in order to start writing a test, but React Test Renderer enables us to write a test before the component is implemented.

Side Note: The reason for this is that TDD works like a charm when you test functions, so taking into account that most of the React components are pure functional components, TDD is applied really well here, especially with React Test Renderer. Sometimes it’s even faster to write your component starting with tests in case of complex logic because you need fewer iterations and debugging.

Let’s consider the requirements for a simple component:

  • It needs to have a class btn-group
  • It should be able to render its children
Testing className

First, we need to test the class of an empty component (as we follow TDD):

import React from "react";
// [ 1 ] import the React Test Renderer
import { create } from "react-test-renderer";

const BtnGroup = () => null;

test("the className of the component includes btn-group", () => {
  // [ 2 ] boilerplate code
  const root = create(<BtnGroup />).root;

  // [ 3 ] query for element
  const element = root.findByType("div");

  // [ 4 ] assert that the className includes btn-group
  expect(element.props.className.includes("btn-group")).toBe(true);
});

The test has 3 steps: test instance creation, element querying, and assertion.

Let’s skip over the more in-depth explanation of that for now and focus on fixing the test.
At first, it will break (as expected):

No instances found with node type: "undefined"

That means we need to add some node with some type. In our case, the type should be <div>:

const BtnGroup = () => <div />;

Once we change the code, the file watcher runs the test again and we receive an updated message:

expect(received).toEqual(expected) // deep equality

Expected: "btn-group"
Received: undefined

We’re already asserting. To pass the first test, all we need to do now is add a className prop.

const BtnGroup = () => <div className="btn-group" />;

After this change, we’ll see that rewarding green message:

As soon as the test is green we can slow down a bit and revisit the code of the test line by line. Here’s that code again:

import React from "react";
// [ 1 ] import the React Test Renderer
import { create } from "react-test-renderer";

const BtnGroup = () => null;

test("the className of the component includes btn-group", () => {
  // [ 2 ] boilerplate code
  const root = create(<BtnGroup />).root;

  // [ 3 ] query for element
  const element = root.findByType("div");

  // [ 4 ] assert that the className includes btn-group
  expect(element.props.className.includes("btn-group")).toBe(true);
});

[ 1 ] Test Renderer has only one way of creating a component — the create method, so just import and use it.

[ 2 ] When creating a component, getting a test instance is a standard boilerplate code for React Test Renderer.

[ 3 ] There are 2 main ways to query for an element in Test Renderer: by type and by props. I prefer querying by type when there are no other containers around like in the current example. We’ll get to other methods a bit later.

[ 4 ] This assertion is pretty self-explanatory: just check that the ‘className’ prop value includes btn-group and you’re good to go.

Testing children

Let’s continue adding functionality to the BtnGroup component we already have since we know we need to meet the following requirement:

It should be able to render its children.

Testing the children prop is very straightforward. We just need to make sure that the passed value matches the result rendered:

import React from "react";
import { create } from "react-test-renderer";

const BtnGroup = () => <div className="btn-group" />;

test("renders BtnGroup component with children", () => {
  // [ 6 ] child text
  const text = "child";

  // boilerplate code, already mentioned in [ 2 - 3 ] above
  const instance = create(<BtnGroup>{text}</BtnGroup>).root;

  // query for element
  const element = instance.findByType("div");

  // assert child to match text passed
  expect(element.props.children).toEqual(text);
});

[ 6 ] The value we pass to the component and the value we use to assert against it should be the same.

Since we’re using TDD, you might expect the test to break here. However, React supports passing children to components out of the box, so our test will be green.

If you’re wondering if the test is running successfully, you can print the element value with console.log.

The output is as follows:

Testing any props

Let’s continue adding requirements for our component:

should render any props passed.

Here’s a test:

import React from "react";
import { create } from "react-test-renderer";

// the component is still not updated as we use TDD
const BtnGroup = () => <div className="btn-group" />;

test("renders BtnGroup component with custom props", () => {
  // generate some custom props
  const props = { id: "awesome-button-id", className: "mb-3", children: "child" };

  // boilerplate code
  const instance = create(<BtnGroup {...props} />).root;

  // get element by component name
  const element = instance.findByType("div");

  // assert if an additional className was added to the existing one
  expect(element.props.className).toEqual("btn-group mb-3");
  // assert "id" prop to match passed one
  expect(element.props.id).toEqual(props.id);
  // assert "children" to match passed
  expect(element.props.children).toEqual(props.children);
});

The code of the test already looks familiar: we’re just checking that the prop values match the ones we passed.

Now, the test will break and issue the following message:

Expected: "btn-group mb-3"
Received: "btn-group"

Now we need to actually start passing props through to the div; otherwise, the hardcoded btn-group className is all it will ever have:

const BtnGroup = props => <div className="btn-group" {...props} />;

Here’s where having tests comes in handy. The next message tells us that className is a special case:

Expected: "btn-group mb-3"
Received: "mb-3"

Now, the passed props replace the props that our component already has: in our case, btn-group is replaced with mb-3.

We should change the code of the component to fix this so that it handles className differently:

const BtnGroup = ({className = "", ...rest}) =>
    <div {...rest} className={`btn-group ${className}`} />;

The trick here is to destructure props so that items needing special treatment are pulled out by name, while all other props are consolidated into a rest object.
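The same destructuring trick can be sketched in plain JavaScript (no JSX): `mergeProps` here is a hypothetical helper, not part of React, that shows how `className` is pulled out by name while every other prop flows through untouched.

```javascript
// Pull className out by name, collect everything else into `rest`,
// then emit a merged props object with the base class prepended.
const mergeProps = ({ className = "", ...rest }) => ({
  ...rest,
  className: `btn-group ${className}`.trim(),
});

console.log(mergeProps({ id: "x", className: "mb-3" }));
// { id: "x", className: "btn-group mb-3" }
```

The `.trim()` keeps the className clean when no extra class is passed at all.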

Again, there is no special approach needed for the children prop, although they’re passed now as a regular prop instead of in the body of the component.

Now, the test should be green again. All of the previously written tests will also be green:

Note: I left a console.log here to show how you can check the output at any time.

As you can see, all of the assertions we’ve done — for now — are just checks that strings match.

But if there’s a need to check the number of items, we can use this handy method in Test Renderer: testInstance.findAllByType().

Let’s see how it works.

Testing the amount of items

To demonstrate how to count items in React Test Renderer, we should have some component that renders an array, or list. The requirement for it is something like this:

should render a list with correct items count.

To follow TDD, we’ll start with an empty functional component that renders an empty ul tag:

const ProductList = ({ list }) => <ul />;

Here’s a test we could write:

import React from "react";
import { create } from "react-test-renderer";

test("renders a list of items with correct items count", () => {
  // prepare the list for testing
  const list = [{ id: 1, text: "first item" }, { id: 2, text: "second item" }];

  // boilerplate code
  const root = create(<ProductList list={list} />).root;

  // [ 7 ] get list items
  const elementList = root.findAllByType("li");

  // assert that the length matches the original list passed as a prop
  expect(elementList.length).toEqual(list.length);
});

The goal of this test is to check if the number of rendered nodes equals the number of passed items.

Initially, the test will break with the following message:

To fix the test, we should render list items with li tags inside the container:

const ProductList = ({ list }) => <ul>
    {list.map(li => <li key={li.id}>{li.text}</li>)}
</ul>;
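As a side note, what the map above produces can be modeled in plain JavaScript; the element-like objects below are a rough illustration of the idea, not real React elements.

```javascript
// One element-like object per list item, keyed by id, mirroring
// what the list.map(...) call in the component emits.
const list = [{ id: 1, text: "first item" }, { id: 2, text: "second item" }];
const renderList = items =>
  items.map(item => ({ type: "li", key: String(item.id), children: item.text }));

console.log(renderList(list).length); // 2
```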

Now the test is green and we can talk about the code.

[ 7 ] To query specifically for nodes with type li, I use the testInstance.findAllByType() method that returns all elements with tag “li”.

There are also some other methods to search for multiple items: testInstance.findAll() and testInstance.findAllByProps().

The first one is useful when you need to check the overall amount, while the second one comes in handy when you want to count a specific prop, e.g., all nodes with a specific className.
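To illustrate how these predicate-based queries conceptually work, here is a hypothetical mini element tree and a `findAll`-style recursive search; this is a sketch of the idea, not Test Renderer’s actual implementation.

```javascript
// A tiny hand-built tree standing in for a rendered component.
const tree = {
  type: "ul",
  props: {},
  children: [
    { type: "li", props: { className: "item" }, children: [] },
    { type: "li", props: { className: "item active" }, children: [] },
  ],
};

// Walk the tree depth-first, collecting every node the predicate accepts.
const findAll = (node, predicate, acc = []) => {
  if (predicate(node)) acc.push(node);
  (node.children || []).forEach(child => findAll(child, predicate, acc));
  return acc;
};

console.log(findAll(tree, n => n.type === "li").length); // 2
console.log(findAll(tree, n => n.props.className === "item active").length); // 1
```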

Testing text

In most cases, a test for only the item count is not sufficient, and you’ll also want to test the actual text a user can read.

There’s no specific functionality in React Test Renderer for that purpose, but that’s pretty easy to write if you consider that text can only be found in children.

import React from "react";
import { create } from "react-test-renderer";

test("renders all items with correct text", () => {
  // [ 8 ] prepare the list for testing
  const list = [{ id: 1, text: "first item" }, { id: 2, text: 33 }];

  // boilerplate code
  const root = create(<ProductList list={list} />).root;

  // get list items
  const elementList = root.findAllByType("li");

  // [ 10 ] iterate over all items and search for the text occurrence in children
  elementList.forEach((el, index) => {
    // [ 11 ] convert text to string
    expect(el.children.includes(`${list[index].text}`)).toBe(true);
  });
});

Having a list of all items [ 8 ], we can iterate over the component’s nodes and make sure every text is found [ 10 ].

This test is green right away, since the component doesn’t have any filtering or sorting logic inside and just renders the list as it is, so we don’t have to change any code.

The only nit to add here is that rendered text is always a string regardless of the value type you pass [ 11 ].
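The coercion in [ 11 ] matters because Array.prototype.includes uses strict equality; assuming Test Renderer exposes the rendered number 33 as the string "33" in children, a plain-JS check looks like this:

```javascript
// Rendered children arrive as strings, so a numeric prop value only
// matches after template-literal coercion.
const renderedChildren = ["33"]; // assumed Test Renderer output for a rendered 33
const passedValue = 33;

console.log(renderedChildren.includes(`${passedValue}`)); // true
console.log(renderedChildren.includes(passedValue)); // false (strict equality)
```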

Testing event handlers and hooks

Some of the functional components rely on more than just props and have their own state management thanks to the Hooks API.
Consider a classic example of a toggler component with the following requirements:

  • should render a button
  • should toggle children on button click

That means that children visibility should change on click.

Here’s an example of a test you could write:

import React, { useState } from "react";
import { create, act } from "react-test-renderer";

// let the component be a fragment to start with
const VisibilityToggler = () => <></>;

test("should toggle children nodes on button click", () => {
  const root = create(
    <VisibilityToggler>
      <div>awesome content</div>
    </VisibilityToggler>
  ).root;

  // helper to get nodes other than "button"
  const getChildrenCount = () =>
    root.findAll(node => node.type !== "button").length;

  // assert that button exists
  expect(root.findAllByType("button").length).toEqual(1);

  // query for a button
  const button = root.findAllByType("button")[0];

  // remember initial nodes count (before toggle)
  const initialCount = getChildrenCount();

  // [ 12 ] trigger a hook by calling onClick of a button
  act(button.props.onClick);
  const countAfterFirstClick = getChildrenCount();

  // assert that nodes count after a click is greater than before
  expect(countAfterFirstClick > initialCount).toBe(true);

  // trigger another click
  act(button.props.onClick);
  const countAfterSecondClick = getChildrenCount();

  // check that nodes were toggled off and the count of rendered nodes match initial
  expect(countAfterSecondClick === initialCount).toBe(true);
});

The test looks huge, so let’s not try to fix it right away. First, let’s discuss the code a bit.

[ 12 ] Here one new thing happens: the act() method is used to wrap event handler calls.

Why should we? And how are we supposed to remember to do so? The second answer is easy: there’s no need to remember, because React Test Renderer checks the code and prints a warning with the reason:

When writing UI tests, tasks like rendering, user events, or data fetching can be considered as “units” of interaction with a user interface.

React provides a helper called act() that makes sure all updates related to these “units” have been processed and applied to the DOM before you make any assertions ~ from the docs.

In other words, the act() method “awaits” React updates and makes otherwise asynchronous code look synchronous, much like await from ES2017.
At this stage, the test can’t find a button and breaks:

To resolve this issue, let’s add a button:

const VisibilityToggler = () => <><button /></>;

The button exists, but the onClick method is not found:

Let’s add an onClick handler next:

const VisibilityToggler = () => <><button onClick={() => {}} /></>;

This is the next message you’ll receive after adding an onClick handler:

Finally, we’re at the point where we’re ready to add some state management with Hooks:

const VisibilityToggler = ({ children }) => {
  const [isVisible, setVisibility] = useState(false);
  const toggle = () => setVisibility(!isVisible);
  return (
    <>
      <button onClick={toggle}>toggle</button>
      {isVisible && children}
    </>
  );
};

Clicking the button now toggles the state variable isVisible to the opposite value (true or false), which in turn renders the children when it is true and skips rendering them when it is false.
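Stripped of React, the toggling the hook performs boils down to this plain-JS sketch; `isVisible` and `setVisibility` here are stand-ins for the useState pair, not the hook itself.

```javascript
// Each toggle() call flips isVisible, which is what decides whether
// the children would be rendered on the next pass.
let isVisible = false;
const setVisibility = value => { isVisible = value; };
const toggle = () => setVisibility(!isVisible);

toggle();
console.log(isVisible); // true
toggle();
console.log(isVisible); // false
```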

All tests should be green now.

Conclusion

Although React Test Renderer is usually associated with Snapshot testing, it can still be used to make specific assertions against your components with sufficient accuracy for most common use cases.

I personally like it because it has a clean API, it’s simple, and it’s easy to use along with TDD. I hope you like it too!

Selenium Testing For Effective Test Automation

This article has been republished from pCloudy

While the mobile app market share has grown substantially, web apps are still prevalent, with a significant user base. Enterprises are focusing on quality at speed for web apps before deployment, and this is where testing has an important role to play. UI testing is still mostly a manual process, unlike some functional testing, which can be automated. But it is sensible to automate testing, as it saves time and effort. When it comes to automation, Selenium is the first thing that comes to mind, as it is the most popular test automation tool in the world. So let’s learn more about Selenium testing.

What is Selenium Testing

Selenium is an open-source testing tool that enables users to drive interactions with a page and test an app across various platforms. It allows users to control a browser from their preferred language, such as Java, JavaScript, C#, PHP, or Python. Selenium has many tools and APIs for automating user interactions in HTML/JavaScript apps in browsers like IE, Chrome, Firefox, Safari, and Opera.

A Selenium framework is a code structure that helps simplify and reuse code. Without a framework, we would place the code and the data in the same place, which is neither reusable nor readable. Selenium automation frameworks bring higher portability, increased code reuse, higher code readability, reduced script maintenance cost, and more.

What is Selenium Web Driver

Selenium WebDriver accepts commands via the client API and sends them to the browser. It relies on browser-specific drivers to access and launch the different browsers, such as Chrome, Firefox, and IE. WebDriver provides an interface to create and run automation scripts, and every browser has its own driver to run the tests: the IE Driver, Firefox Driver, Safari Driver, Chrome Driver, and so on.

Selenium WebDriver was introduced to overcome the limitations of Selenium RC, and it offers enhanced support for web pages where elements change without the page reloading. Many browsers support Selenium WebDriver, and it uses each browser’s native support for automation. Which features are supported, and how direct calls are made, depends on the browser being used.

Continue Reading...