Brain Crist

Testing young children’s computational thinking

Computational thinking (CT) comprises a set of skills that are fundamental to computing and that are being taught in more and more schools across the world. There has been much debate about the details of what CT is and how it should be approached in education, particularly for younger students.

A girl doing digital making on a tablet

In our research seminar this week, we were joined by María Zapata Cáceres from the Universidad Rey Juan Carlos in Madrid. María shared research she and her colleagues have done around CT. Specifically, she presented work on how we can understand what CT skills young children are developing. Building on existing work on assessing CT, she and her colleagues have developed a reliable test for CT skills that can be used with children as young as 5.

María Zapata Cáceres

Why do we need to test computational thinking?

Until we can assess something, María argues, we don’t know what children have or haven’t learned or what they are capable of. While testing is often associated with the final stages of learning, in order to teach something well, educators need to understand where their students’ skills currently are and what they are aiming for them to learn. With CT being taught in increasing numbers of schools and in many different ways, María argues that it is imperative to be able to test learners on it.

Screenshot from an online research seminar about computational thinking with María Zapata Cáceres

How was the test developed?

One of the key challenges for assessing learning is knowing whether the activities or questions you present to learners are actually testing what you intend them to. To make sure this is the case, assessments go through a process of validation: they are tried out with large groups to ensure that the results they give are valid. María and her colleagues’ CT test for beginners is based on a CT test developed by researcher Marcos Román González. That test had been validated, but since it is aimed at 10- to 16-year-olds, María and her colleagues needed to adapt it for younger children and then validate the adapted test.

Developing the first version

The new test for beginners consists of 25 questions, each of which has four possible responses, which are to be answered within 40 minutes. The questions are of two types: one that involves using instructions to draw on a canvas, and one that involves moving characters through mazes. Since the test is for younger children, María and her colleagues designed it so it involves as little text as possible to reduce the need for reading; instead the test includes self-explanatory symbols.

Screenshot from an online research seminar about computational thinking with María Zapata Cáceres

Developing a second version based on feedback

To refine the test, the researchers consulted a group of 45 experts about the difficulty of the questions and the length of the test. The general feedback was very positive.

Drawing on the experts’ feedback, María and her colleagues made some very specific improvements to the test to make it more appropriate for younger children:

  • The improved test mandates that a verbal explanation be given to children at the start, to make sure they clearly understand how to take the test and don’t have to rely on reading the instructions.
  • In some areas, the researchers added written explanations where experts had identified that questions contained ambiguity that could cause the children to misinterpret them.
  • A key improvement was to adapt the grids in the original test to include pathways between each box of the maze. It was found that children could misinterpret the maze, for example as allowing diagonal moves between squares; the added pathways are visual cues that make it clear this is not possible.

Screenshot from an online research seminar about computational thinking with María Zapata Cáceres

Validating the test

After these improvements, the test was validated with 299 primary school students aged 5-12. To assess the differences the improvements might make, the students were given different versions of the test. María and her colleagues found that the younger students benefited from the improvements, and the improvements made the test more reliable for testing students’ computational thinking: students made fewer errors due to ambiguity and misinterpretation.

Statistical analysis of the test results showed that the improved version of the test is reliable and can be used with confidence to assess the skills of younger children.

What can you use this test for?

Firstly, the test is a tool for educators who want to assess the skills young people have and how these develop over time. Secondly, the test is also valuable for researchers: it can be used in projects that evaluate the outcomes of different approaches to teaching computational thinking, as well as in projects investigating the effectiveness of specific learning resources, because the test can be given to children before and again after they engage with the resources.

Assessment is one of the many tools educators use to shape their teaching and promote the learning of their students, and tools like this CT test developed by María and her colleagues allow us to better understand what children are learning.

Find out more & join our next seminar

The video and slides of María’s presentation are available on our seminars page. To find out more about this test, and the process used to create and validate it, read the paper by María and her colleagues.

Our final seminar of this series takes place Tuesday 28 July before we take a break for the summer. In the session, we will explore gender balance in computing, led by Katharine Childs, who works on the Gender Balance in Computing research project at the Raspberry Pi Foundation. You can find out more and sign up to attend for free on our Computing Education Research Seminars page.

#education #research #computational thinking #computing education #research seminar #assessment

Mike Kozey

Test_cov_console: Flutter Console Coverage Test

Flutter Console Coverage Test

This small Dart tool generates a Flutter coverage test report in the console.

How to install

Add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

dev_dependencies:
  test_cov_console: ^0.2.2

How to run

Run the following command to make sure all Flutter libraries are up to date:

flutter pub get
Running "flutter pub get" in coverage...                            0.5s

Run the following command to generate lcov.info in the coverage directory:

flutter test --coverage
00:02 +1: All tests passed!

Run the tool to generate the report from lcov.info:

flutter pub run test_cov_console
---------------------------------------------|---------|---------|---------|-------------------|
File                                         |% Branch | % Funcs | % Lines | Uncovered Line #s |
---------------------------------------------|---------|---------|---------|-------------------|
lib/src/                                     |         |         |         |                   |
 print_cov.dart                              |  100.00 |  100.00 |   88.37 |...,149,205,206,207|
 print_cov_constants.dart                    |    0.00 |    0.00 |    0.00 |    no unit testing|
lib/                                         |         |         |         |                   |
 test_cov_console.dart                       |    0.00 |    0.00 |    0.00 |    no unit testing|
---------------------------------------------|---------|---------|---------|-------------------|
 All files with unit testing                 |  100.00 |  100.00 |   88.37 |                   |
---------------------------------------------|---------|---------|---------|-------------------|

Optional parameters

If not given a FILE, "coverage/lcov.info" will be used.
-f, --file=<FILE>                      The target lcov.info file to be reported
-e, --exclude=<STRING1,STRING2,...>    A list of substrings; files whose paths contain any of them
                                       will be excluded from the report
-l, --line                             It will print Lines & Uncovered Lines only
                                       Branch & Functions coverage percentage will not be printed
-i, --ignore                           It will not print any file without unit testing
-m, --multi                            Report from multiple lcov.info files
-c, --csv                              Output to CSV file
-o, --output=<CSV-FILE>                Full path of output CSV file
                                       If not given, "coverage/test_cov_console.csv" will be used
-t, --total                            Print only the total coverage
                                       Note: it will ignore all other options (if any), except -m
-p, --pass=<MINIMUM>                   Print only whether the total coverage passes the MINIMUM value
                                       If the value >= MINIMUM, it will print PASSED, otherwise FAILED
                                       Note: it will ignore all other options (if any), except -m
-h, --help                             Show this help

Example: run the tool with parameters

flutter pub run test_cov_console --file=coverage/lcov.info --exclude=_constants,_mock
---------------------------------------------|---------|---------|---------|-------------------|
File                                         |% Branch | % Funcs | % Lines | Uncovered Line #s |
---------------------------------------------|---------|---------|---------|-------------------|
lib/src/                                     |         |         |         |                   |
 print_cov.dart                              |  100.00 |  100.00 |   88.37 |...,149,205,206,207|
lib/                                         |         |         |         |                   |
 test_cov_console.dart                       |    0.00 |    0.00 |    0.00 |    no unit testing|
---------------------------------------------|---------|---------|---------|-------------------|
 All files with unit testing                 |  100.00 |  100.00 |   88.37 |                   |
---------------------------------------------|---------|---------|---------|-------------------|
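
For example, to gate a CI job on a minimum total coverage you could use the --pass option (a sketch; the threshold of 80 and the single-line output are assumptions based on the option description above):

flutter pub run test_cov_console --pass=80
PASSED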

Report for multiple lcov.info files (-m, --multi)

It supports running against multiple lcov.info files with the following directory structures:
1. No root module
<root>/<module_a>
<root>/<module_a>/coverage/lcov.info
<root>/<module_a>/lib/src
<root>/<module_b>
<root>/<module_b>/coverage/lcov.info
<root>/<module_b>/lib/src
...
2. With root module
<root>/coverage/lcov.info
<root>/lib/src
<root>/<module_a>
<root>/<module_a>/coverage/lcov.info
<root>/<module_a>/lib/src
<root>/<module_b>
<root>/<module_b>/coverage/lcov.info
<root>/<module_b>/lib/src
...
You must run test_cov_console in the <root> directory; the report will be grouped by module. Here is
the sample output for the 'with root module' directory structure:
flutter pub run test_cov_console --file=coverage/lcov.info --exclude=_constants,_mock --multi
---------------------------------------------|---------|---------|---------|-------------------|
File                                         |% Branch | % Funcs | % Lines | Uncovered Line #s |
---------------------------------------------|---------|---------|---------|-------------------|
lib/src/                                     |         |         |         |                   |
 print_cov.dart                              |  100.00 |  100.00 |   88.37 |...,149,205,206,207|
lib/                                         |         |         |         |                   |
 test_cov_console.dart                       |    0.00 |    0.00 |    0.00 |    no unit testing|
---------------------------------------------|---------|---------|---------|-------------------|
 All files with unit testing                 |  100.00 |  100.00 |   88.37 |                   |
---------------------------------------------|---------|---------|---------|-------------------|
---------------------------------------------|---------|---------|---------|-------------------|
File - module_a -                            |% Branch | % Funcs | % Lines | Uncovered Line #s |
---------------------------------------------|---------|---------|---------|-------------------|
lib/src/                                     |         |         |         |                   |
 print_cov.dart                              |  100.00 |  100.00 |   88.37 |...,149,205,206,207|
lib/                                         |         |         |         |                   |
 test_cov_console.dart                       |    0.00 |    0.00 |    0.00 |    no unit testing|
---------------------------------------------|---------|---------|---------|-------------------|
 All files with unit testing                 |  100.00 |  100.00 |   88.37 |                   |
---------------------------------------------|---------|---------|---------|-------------------|
---------------------------------------------|---------|---------|---------|-------------------|
File - module_b -                            |% Branch | % Funcs | % Lines | Uncovered Line #s |
---------------------------------------------|---------|---------|---------|-------------------|
lib/src/                                     |         |         |         |                   |
 print_cov.dart                              |  100.00 |  100.00 |   88.37 |...,149,205,206,207|
lib/                                         |         |         |         |                   |
 test_cov_console.dart                       |    0.00 |    0.00 |    0.00 |    no unit testing|
---------------------------------------------|---------|---------|---------|-------------------|
 All files with unit testing                 |  100.00 |  100.00 |   88.37 |                   |
---------------------------------------------|---------|---------|---------|-------------------|

Output to CSV file (-c, --csv, -o, --output)

flutter pub run test_cov_console -c --output=coverage/test_coverage.csv

Sample CSV output file:
File,% Branch,% Funcs,% Lines,Uncovered Line #s
lib/,,,,
test_cov_console.dart,0.00,0.00,0.00,no unit testing
lib/src/,,,,
parser.dart,100.00,100.00,97.22,"97"
parser_constants.dart,100.00,100.00,100.00,""
print_cov.dart,100.00,100.00,82.91,"29,49,51,52,171,174,177,180,183,184,185,186,187,188,279,324,325,387,388,389,390,391,392,393,394,395,398"
print_cov_constants.dart,0.00,0.00,0.00,no unit testing
All files with unit testing,100.00,100.00,86.07,""

Installing

Use this package as an executable

Install it

You can install the package from the command line:

dart pub global activate test_cov_console

Use it

The package has the following executables:

$ test_cov_console

Use this package as a library

Depend on it

Run this command:

With Dart:

 $ dart pub add test_cov_console

With Flutter:

 $ flutter pub add test_cov_console

This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

dependencies:
  test_cov_console: ^0.2.2

Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:test_cov_console/test_cov_console.dart';

example/lib/main.dart

import 'package:flutter/material.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        // This is the theme of your application.
        //
        // Try running your application with "flutter run". You'll see the
        // application has a blue toolbar. Then, without quitting the app, try
        // changing the primarySwatch below to Colors.green and then invoke
        // "hot reload" (press "r" in the console where you ran "flutter run",
        // or simply save your changes to "hot reload" in a Flutter IDE).
        // Notice that the counter didn't reset back to zero; the application
        // is not restarted.
        primarySwatch: Colors.blue,
        // This makes the visual density adapt to the platform that you run
        // the app on. For desktop platforms, the controls will be smaller and
        // closer together (more dense) than on mobile platforms.
        visualDensity: VisualDensity.adaptivePlatformDensity,
      ),
      home: MyHomePage(title: 'Flutter Demo Home Page'),
    );
  }
}

class MyHomePage extends StatefulWidget {
  MyHomePage({Key? key, required this.title}) : super(key: key);

  // This widget is the home page of your application. It is stateful, meaning
  // that it has a State object (defined below) that contains fields that affect
  // how it looks.

  // This class is the configuration for the state. It holds the values (in this
  // case the title) provided by the parent (in this case the App widget) and
  // used by the build method of the State. Fields in a Widget subclass are
  // always marked "final".

  final String title;

  @override
  _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  int _counter = 0;

  void _incrementCounter() {
    setState(() {
      // This call to setState tells the Flutter framework that something has
      // changed in this State, which causes it to rerun the build method below
      // so that the display can reflect the updated values. If we changed
      // _counter without calling setState(), then the build method would not be
      // called again, and so nothing would appear to happen.
      _counter++;
    });
  }

  @override
  Widget build(BuildContext context) {
    // This method is rerun every time setState is called, for instance as done
    // by the _incrementCounter method above.
    //
    // The Flutter framework has been optimized to make rerunning build methods
    // fast, so that you can just rebuild anything that needs updating rather
    // than having to individually change instances of widgets.
    return Scaffold(
      appBar: AppBar(
        // Here we take the value from the MyHomePage object that was created by
        // the App.build method, and use it to set our appbar title.
        title: Text(widget.title),
      ),
      body: Center(
        // Center is a layout widget. It takes a single child and positions it
        // in the middle of the parent.
        child: Column(
          // Column is also a layout widget. It takes a list of children and
          // arranges them vertically. By default, it sizes itself to fit its
          // children horizontally, and tries to be as tall as its parent.
          //
          // Invoke "debug painting" (press "p" in the console, choose the
          // "Toggle Debug Paint" action from the Flutter Inspector in Android
          // Studio, or the "Toggle Debug Paint" command in Visual Studio Code)
          // to see the wireframe for each widget.
          //
          // Column has various properties to control how it sizes itself and
          // how it positions its children. Here we use mainAxisAlignment to
          // center the children vertically; the main axis here is the vertical
          // axis because Columns are vertical (the cross axis would be
          // horizontal).
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            Text(
              'You have pushed the button this many times:',
            ),
            Text(
              '$_counter',
              style: Theme.of(context).textTheme.headline4,
            ),
          ],
        ),
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: _incrementCounter,
        tooltip: 'Increment',
        child: Icon(Icons.add),
      ), // This trailing comma makes auto-formatting nicer for build methods.
    );
  }
}

Author: DigitalKatalis
Source Code: https://github.com/DigitalKatalis/test_cov_console 
License: BSD-3-Clause license

#flutter #dart #test 

Tamia Walter

Testing Microservices Applications

The shift towards microservices and modular applications makes testing more important and more challenging at the same time. You have to make sure that the microservices running in containers perform well and as intended, but you can no longer rely on conventional testing strategies to get the job done.

This is where new testing approaches are needed. Testing your microservices applications requires the right approach, a suitable set of tools, and immense attention to detail. This article will guide you through the process of testing your microservices and talk about the challenges you will have to overcome along the way. Let’s get started, shall we?

A Brave New World

Traditionally, testing a monolith application meant configuring a test environment and setting up all of the application components in a way that matched the production environment. It took time to set up the testing environment, and there were a lot of complexities around the process.

Testing a monolith also requires the application to run in full. It is not possible to test monolith apps on a per-component basis, mainly because there is usually a shared codebase that ties everything together, and the app is designed to run as a whole in order to work properly.

Microservices running in containers offer one particular advantage: universal compatibility. You don’t have to match the testing environment with the deployment architecture exactly, and you can get away with testing individual components rather than the full app in some situations.

Of course, you will have to embrace the new cloud-native approach across the pipeline. Rather than creating critical dependencies between microservices, you need to treat each one as a semi-independent module.

The only monolith or centralized portion of the application is the database, but this too is an easy challenge to overcome. As long as you have a persistent database running on your test environment, you can perform tests at any time.

Keep in mind that there are additional things to focus on when testing microservices.

  • Microservices rely on network communications to talk to each other, so network reliability and requirements must be part of the testing.
  • Automation and infrastructure elements are now added as code, and you have to make sure that they also run properly when microservices are pushed through the pipeline.
  • While containerization is universal, you still have to pay attention to specific dependencies and create a testing strategy that allows for those dependencies to be included.

Test containers are the method of choice for many developers. Unlike monolith apps, which let you use stubs and mocks for testing, microservices need to be tested in test containers. Many CI/CD pipelines actually integrate production microservices as part of the testing process.

Contract Testing as an Approach

As mentioned before, there are many ways to test microservices effectively, but the one approach that developers now use reliably is contract testing. Loosely coupled microservices can be tested in an effective and efficient way using contract testing, mainly because this testing approach focuses on contracts; in other words, it focuses on how components or microservices communicate with each other.

Syntax and semantics define how components communicate with each other. By defining both in a standardized way, and by testing each microservice on its ability to generate the right message formats and meet behavioral expectations, you can rest assured that the microservices will behave as intended when deployed.
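
As a rough sketch of what such a check can look like in code (using Dart's test and http packages; the endpoint, port, and field names are purely illustrative assumptions, and a real project would more likely use a dedicated contract-testing framework), the consumer's expectations about the message format can be written directly as a test against the provider:

// contract_test.dart
// A minimal contract-style test: the "contract" is the set of fields and
// types the consumer relies on, and the provider is checked against it.
import 'dart:convert';

import 'package:http/http.dart' as http;
import 'package:test/test.dart';

void main() {
  // Hypothetical provider endpoint agreed between the consumer and provider teams.
  const userEndpoint = 'http://localhost:8080/users/42';

  test('user service honours the agreed user contract', () async {
    final response = await http.get(Uri.parse(userEndpoint));

    // The contract fixes the status code and the message format.
    expect(response.statusCode, equals(200));

    final body = jsonDecode(response.body) as Map<String, dynamic>;

    // Field names and types the consumer depends on.
    expect(body['id'], isA<int>());
    expect(body['userName'], isA<String>());
    expect(body['email'], isA<String>());
  });
}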

Ways to Test Microservices

It is easy to fall into the trap of making testing microservices complicated, but there are ways to avoid this problem. Testing microservices doesn’t have to be complicated at all when you have the right strategy in place.

There are several ways to test microservices too, including:

  • Unit testing: allows developers to test microservices in a granular way. Testing is not limited to whole individual microservices; developers can take a more granular approach, such as testing individual features or functions (a minimal sketch follows this list).
  • Integration testing: handles the testing of microservices in an interactive way. Microservices still need to work with each other when they are deployed, and integration testing is a key process in making sure that they do.
  • End-to-end testing: as the name suggests, tests the microservices together as a complete app. This type of testing covers features, UI, communications, and the other components that make up the app.
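
To make the unit-testing bullet concrete, here is a minimal sketch using Dart's test package; the applyDiscount function is a hypothetical piece of business logic inside one microservice, tested with no network, no containers, and no other services involved:

// pricing_test.dart
import 'package:test/test.dart';

/// Hypothetical business rule owned by a pricing microservice.
double applyDiscount(double price, int quantity) {
  if (quantity >= 10) return price * 0.9; // 10% off bulk orders
  return price;
}

void main() {
  test('bulk orders get a 10% discount', () {
    expect(applyDiscount(100.0, 10), closeTo(90.0, 0.001));
  });

  test('small orders pay full price', () {
    expect(applyDiscount(100.0, 2), equals(100.0));
  });
}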

What’s important to note is the fact that these testing approaches allow for asynchronous testing. After all, asynchronous development is what makes developing microservices very appealing in the first place. By allowing for asynchronous testing, you can also make sure that components or microservices can be updated independently of one another.

#blog #microservices #testing #caylent #contract testing #end-to-end testing #hoverfly #integration testing #microservices #microservices architecture #pact #testing #unit testing #vagrant #vcr

Software Testing 101: Regression Tests, Unit Tests, Integration Tests

Automation and segregation can help you build better software
If you write automated tests and deliver them to the customer, they can make sure the software is working properly. And, at the end of the day, they paid for it.

Ok. We can segregate or separate the tests according to some criteria. For example, “white box” tests are used to measure the internal quality of the software, in addition to the expected results. They are very useful for knowing the percentage of lines of code executed, the cyclomatic complexity, and several other software metrics. Unit tests are white box tests.

#testing #software testing #regression tests #unit tests #integration tests

Dejah Reinger

How to Do API Testing?

Nowadays, API testing is an integral part of testing. There are a lot of tools for it, like Postman and Insomnia. Many articles explain what an API is and what API testing is, but the real questions are: how do you do API testing, and what do you need to validate?

Note: In this article, I am going to use Postman assertions for all the examples, since it is the most popular tool. But this article is not intended only for Postman.

Let’s directly jump to the topic.

Let’s say you have an API endpoint, for example http://dzone.com/getuserDetails/{{username}}. When you send a GET request to that URL, it returns a JSON response.

My API endpoint is http://dzone.com/getuserDetails/{{username}}

The response is in JSON format like below

JSON

{
  "jobTitle": "string",
  "userid": "string",
  "phoneNumber": "string",
  "password": "string",
  "email": "user@example.com",
  "firstName": "string",
  "lastName": "string",
  "userName": "string",
  "country": "string",
  "region": "string",
  "city": "string",
  "department": "string",
  "userType": 0
}

In the JSON we can see there are properties and associated values.

Now, for example, if we need the details of the user with the username ‘ganeshhegde’, we need to send a GET request to http://dzone.com/getuserDetails/ganeshhegde.

Now there are two scenarios; a test sketch covering both follows the list below.

1. Valid use case: the user is available in the database, and the API returns the user details with status code 200.

2. Invalid use case: the user is unavailable or invalid; in this case the API returns status code 404 with a not-found message.
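
To validate both scenarios, you assert on the status code and on the response body. The article uses Postman assertions for this; as a hedged illustration, the sketch below does the same checks in Dart with the http and test packages (the expected fields come from the JSON above, while the unknown username is an assumption for the example):

// api_test.dart
// Covers the two scenarios above: a valid user returns 200 with user details,
// an unknown user returns 404.
import 'dart:convert';

import 'package:http/http.dart' as http;
import 'package:test/test.dart';

const baseUrl = 'http://dzone.com/getuserDetails';

void main() {
  test('valid user returns 200 with user details', () async {
    final response = await http.get(Uri.parse('$baseUrl/ganeshhegde'));

    expect(response.statusCode, equals(200));

    final body = jsonDecode(response.body) as Map<String, dynamic>;
    expect(body['userName'], equals('ganeshhegde'));
    expect(body['email'], isNotEmpty);
  });

  test('unknown user returns 404', () async {
    // 'no_such_user' is a made-up username assumed not to exist.
    final response = await http.get(Uri.parse('$baseUrl/no_such_user'));

    expect(response.statusCode, equals(404));
  });
}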

#tutorial #performance #api #test automation #api testing #testing and qa #application programming interface #testing as a service #testing tutorial #api test