Automated testing of web applications is bread and butter for any Quality Assurance specialist. But with so many testing suites available, and so many opinions about each of them, it's easy to lose your bearings. We went our own way, trying out a few end-to-end testing tools, and ultimately chose Cypress. This article won't be a paean to Cypress.io (although it might seem like one), nor will I draw comparisons between it and other testing suites. You can find plenty of such content on the Web.
Over the course of this brief article, I'd like to show you why we started looking for a new automated testing suite in the first place, how Cypress eventually worked out, and why we find it the best fit for our QA team at Monterail.
Intrigued whether we've found the Holy Grail of end-to-end testing tools? Let's dive into our real-life case and see for yourself.
Before we get to how and why we selected Cypress, let's start by going over some relevant background on the case study in question. The web app I'm writing about is based on Vue.js (yep, JavaScript again!) plus a mock server, and has been in development for over twelve months now. The first automated testing framework we chose for the project was Nightwatch.js, with which we reached a test coverage level of around thirty percent, covering very basic features such as logging in, navigating between views, and filling out basic forms.
When the app's development was advanced enough that we began thinking about the next release, our client (operating in the network security industry) started asking questions about the stability and security of the application with increasing frequency.
It became clear that 30% test coverage was not a satisfactory outcome for any party involved.
The problem was that we'd hit a wall with Nightwatch-based testing. The Selenium WebDriver, for example, was very issue-prone, we had problems with test stability, and we were dissatisfied with the pace of writing new tests. It quickly became apparent that if we were to boost our test coverage to include most of the app's features, we would have to find an alternative testing suite. And a good one at that.
This serverless plugin is a wrapper for amplify-appsync-simulator made for testing AppSync APIs built with serverless-appsync-plugin.
Install
npm install serverless-appsync-simulator
# or
yarn add serverless-appsync-simulator
Usage
This plugin relies on your serverless.yml file and on the serverless-offline plugin.
plugins:
- serverless-dynamodb-local # only if you need dynamodb resolvers and you don't have an external dynamodb
- serverless-appsync-simulator
- serverless-offline
Note: Order is important: serverless-appsync-simulator must go before serverless-offline.
To start the simulator, run the following command:
sls offline start
You should see something like the following in the logs:
...
Serverless: AppSync endpoint: http://localhost:20002/graphql
Serverless: GraphiQl: http://localhost:20002
...
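Once the simulator is up, you can sanity-check the endpoint with a plain GraphQL request. Here is a minimal sketch (not part of the plugin itself), assuming the default port and apiKey from the configuration table below, and Node 18+ for the global fetch:
// Smoke test against the local AppSync simulator.
// Assumes the default port (20002) and default apiKey (0123456789).
(async () => {
  const res = await fetch('http://localhost:20002/graphql', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': '0123456789', // AppSync API_KEY authentication header
    },
    body: JSON.stringify({ query: '{ __typename }' }),
  });
  console.log(await res.json());
})();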
Configuration
Put options under custom.appsync-simulator in your serverless.yml file.
| option | default | description |
| ------ | ------- | ----------- |
| apiKey | 0123456789 | When using API_KEY as the authentication type, the key to authenticate to the endpoint. |
| port | 20002 | AppSync operations port; if using multiple APIs, the value of this option will be used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20002, 20012, 20022, etc.) |
| wsPort | 20003 | AppSync subscriptions port; if using multiple APIs, the value of this option will be used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20003, 20013, 20023, etc.) |
| location | . (base directory) | Location of the lambda functions handlers. |
| refMap | {} | A mapping of resource resolutions for the Ref function |
| getAttMap | {} | A mapping of resource resolutions for the GetAtt function |
| importValueMap | {} | A mapping of resource resolutions for the ImportValue function |
| functions | {} | A mapping of external functions, providing an invoke URL for each external function |
| dynamoDb.endpoint | http://localhost:8000 | DynamoDB endpoint. Specify it if you're not using serverless-dynamodb-local; otherwise, the port is taken from the dynamodb-local config |
| dynamoDb.region | localhost | DynamoDB region. Specify it if you're connecting to a remote DynamoDB instance. |
| dynamoDb.accessKeyId | DEFAULT_ACCESS_KEY | AWS Access Key ID to access DynamoDB |
| dynamoDb.secretAccessKey | DEFAULT_SECRET | AWS Secret Key to access DynamoDB |
| dynamoDb.sessionToken | DEFAULT_ACCESS_TOKEEN | AWS Session Token to access DynamoDB, only if you have temporary security credentials configured on AWS |
| dynamoDb.* | | Any other configuration accepted by the DynamoDB SDK |
| rds.dbName | | Name of the database |
| rds.dbHost | | Database host |
| rds.dbDialect | | Database dialect. Possible values: mysql, postgres |
| rds.dbUsername | | Database username |
| rds.dbPassword | | Database password |
| rds.dbPort | | Database port |
| watch | ["*.graphql", "*.vtl"] | Array of glob patterns to watch for hot-reloading. |
Example:
custom:
appsync-simulator:
location: '.webpack/service' # use webpack build directory
dynamoDb:
endpoint: 'http://my-custom-dynamo:8000'
Hot-reloading
By default, the simulator will hot-reload when changes to *.graphql or *.vtl files are detected. Changes to *.yml files are not supported (yet? This is a Serverless Framework limitation), so you will need to restart the simulator each time you change your yml files.
Hot-reloading relies on watchman. Make sure it is installed on your system.
You can change the files being watched with the watch option, which is then passed to watchman as the match expression.
e.g.
custom:
appsync-simulator:
watch:
- ["match", "handlers/**/*.vtl", "wholename"] # => array is interpreted as the literal match expression
- "*.graphql" # => string like this is equivalent to `["match", "*.graphql"]`
Alternatively, you can opt out by providing an empty array or setting the option to false.
Note: Functions should not require hot-reloading, unless you are using a transpiler or a bundler (such as webpack, Babel, or TypeScript), in which case you should delegate hot-reloading to that tool instead.
Resource CloudFormation functions resolution
This plugin supports resolution of some resources from the Ref, Fn::GetAtt and Fn::ImportValue functions in your yaml file. It also supports some other CFN functions, such as Fn::Join, Fn::Sub, etc.
Note: Under the hood, this feature relies on the cfn-resolver-lib package. For more info on supported CFN functions, refer to its documentation.
You can reference resources in your functions' environment variables (that will be accessible from your lambda functions) or datasource definitions. The plugin will automatically resolve them for you.
provider:
environment:
BUCKET_NAME:
Ref: MyBucket # resolves to `my-bucket-name`
resources:
Resources:
MyDbTable:
Type: AWS::DynamoDB::Table
Properties:
TableName: myTable
...
MyBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: my-bucket-name
...
# in your appsync config
dataSources:
- type: AMAZON_DYNAMODB
name: dynamosource
config:
tableName:
Ref: MyDbTable # resolves to `myTable`
Sometimes, some references cannot be resolved because they come from an Output of another CloudFormation stack; or you might want to use mocked values in your local environment.
In those cases, you can define (or override) those values using the refMap, getAttMap and importValueMap options:
- refMap takes a mapping of resource name to value pairs
- getAttMap takes a mapping of resource name to attribute/value pairs
- importValueMap takes a mapping of import name to value pairs
Example:
custom:
appsync-simulator:
refMap:
# Override `MyDbTable` resolution from the previous example.
MyDbTable: 'mock-myTable'
getAttMap:
# define ElasticSearchInstance DomainName
ElasticSearchInstance:
DomainEndpoint: 'localhost:9200'
importValueMap:
other-service-api-url: 'https://other.api.url.com/graphql'
# in your appsync config
dataSources:
- type: AMAZON_ELASTICSEARCH
name: elasticsource
config:
# endpoint resolves as 'https://localhost:9200'
endpoint:
Fn::Join:
- ''
- - https://
- Fn::GetAtt:
- ElasticSearchInstance
- DomainEndpoint
In some special cases you will need to use the key-value mock notation. A good example is when you need to include the serverless stage value (${self:provider.stage}) in the import name.
This notation can be used with all mocks: refMap, getAttMap and importValueMap.
provider:
environment:
FINISH_ACTIVITY_FUNCTION_ARN:
Fn::ImportValue: other-service-api-${self:provider.stage}-url
custom:
serverless-appsync-simulator:
importValueMap:
- key: other-service-api-${self:provider.stage}-url
value: 'https://other.api.url.com/graphql'
This plugin only tries to resolve the following parts of the yml tree:
provider.environment
functions[*].environment
custom.appSync
If you need other parts resolved, feel free to open an issue and explain your use case.
For now, the supported resources to be automatically resolved by Ref: are:
Feel free to open a PR or an issue to extend them as well.
External functions
When a function is not defined within the current serverless file, you can still call it by providing an invoke URL, which should point to a REST method. Make sure you specify "get" or "post" for the method. The default is "get", but you probably want "post".
custom:
appsync-simulator:
functions:
addUser:
url: http://localhost:3016/2015-03-31/functions/addUser/invocations
method: post
addPost:
url: https://jsonplaceholder.typicode.com/posts
method: post
Supported Resolver types
This plugin supports resolvers implemented by amplify-appsync-simulator, as well as custom resolvers.
From AWS Amplify:
Implemented by this plugin:
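The following sample templates target a relational (RDS) data source. First, a sample request mapping template for a create mutation: it converts camelCase input fields to snake_case columns and builds an INSERT statement (<name-of-table> is a placeholder to replace with your table name):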
#set( $cols = [] )
#set( $vals = [] )
#foreach( $entry in $ctx.args.input.keySet() )
#set( $regex = "([a-z])([A-Z]+)")
#set( $replacement = "$1_$2")
#set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
#set( $discard = $cols.add("$toSnake") )
#if( $util.isBoolean($ctx.args.input[$entry]) )
#if( $ctx.args.input[$entry] )
#set( $discard = $vals.add("1") )
#else
#set( $discard = $vals.add("0") )
#end
#else
#set( $discard = $vals.add("'$ctx.args.input[$entry]'") )
#end
#end
#set( $valStr = $vals.toString().replace("[","(").replace("]",")") )
#set( $colStr = $cols.toString().replace("[","(").replace("]",")") )
#if ( $valStr.substring(0, 1) != '(' )
#set( $valStr = "($valStr)" )
#end
#if ( $colStr.substring(0, 1) != '(' )
#set( $colStr = "($colStr)" )
#end
{
"version": "2018-05-29",
"statements": ["INSERT INTO <name-of-table> $colStr VALUES $valStr", "SELECT * FROM <name-of-table> ORDER BY id DESC LIMIT 1"]
}
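Next, a sample request mapping template for an update mutation, building the SET clause from the input fields in the same way: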
#set( $update = "" )
#set( $equals = "=" )
#foreach( $entry in $ctx.args.input.keySet() )
#set( $cur = $ctx.args.input[$entry] )
#set( $regex = "([a-z])([A-Z]+)")
#set( $replacement = "$1_$2")
#set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
#if( $util.isBoolean($cur) )
#if( $cur )
#set ( $cur = "1" )
#else
#set ( $cur = "0" )
#end
#end
#if ( $util.isNullOrEmpty($update) )
#set($update = "$toSnake$equals'$cur'" )
#else
#set($update = "$update,$toSnake$equals'$cur'" )
#end
#end
{
"version": "2018-05-29",
"statements": ["UPDATE <name-of-table> SET $update WHERE id=$ctx.args.input.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.input.id"]
}
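A sample request mapping template for a "soft" delete mutation, which sets deleted_at instead of removing the row: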
{
"version": "2018-05-29",
"statements": ["UPDATE <name-of-table> set deleted_at=NOW() WHERE id=$ctx.args.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.id"]
}
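Finally, a sample response mapping template that normalizes timestamptz columns to ISO 8601 and converts snake_case column names back to camelCase before returning the record: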
#set ( $index = -1)
#set ( $result = $util.parseJson($ctx.result) )
#set ( $meta = $result.sqlStatementResults[1].columnMetadata)
#foreach ($column in $meta)
#set ($index = $index + 1)
#if ( $column["typeName"] == "timestamptz" )
#set ($time = $result["sqlStatementResults"][1]["records"][0][$index]["stringValue"] )
#set ( $nowEpochMillis = $util.time.parseFormattedToEpochMilliSeconds("$time.substring(0,19)+0000", "yyyy-MM-dd HH:mm:ssZ") )
#set ( $isoDateTime = $util.time.epochMilliSecondsToISO8601($nowEpochMillis) )
$util.qr( $result["sqlStatementResults"][1]["records"][0][$index].put("stringValue", "$isoDateTime") )
#end
#end
#set ( $res = $util.parseJson($util.rds.toJsonString($util.toJson($result)))[1][0] )
#set ( $response = {} )
#foreach($mapKey in $res.keySet())
#set ( $s = $mapKey.split("_") )
#set ( $camelCase="" )
#set ( $isFirst=true )
#foreach($entry in $s)
#if ( $isFirst )
#set ( $first = $entry.substring(0,1) )
#else
#set ( $first = $entry.substring(0,1).toUpperCase() )
#end
#set ( $isFirst=false )
#set ( $stringLength = $entry.length() )
#set ( $remaining = $entry.substring(1, $stringLength) )
#set ( $camelCase = "$camelCase$first$remaining" )
#end
$util.qr( $response.put("$camelCase", $res[$mapKey]) )
#end
$utils.toJson($response)
Variable map support is limited and does not differentiate between number and string data types; please inject values directly if needed. Only null, true, and false values will be escaped properly.
{
"version": "2018-05-29",
"statements": [
"UPDATE <name-of-table> set deleted_at=NOW() WHERE id=:ID",
"SELECT * FROM <name-of-table> WHERE id=:ID and unix_timestamp > $ctx.args.newerThan"
],
"variableMap": {
":ID": $ctx.args.id,
## ":TIMESTAMP": $ctx.args.newerThan -- This will be handled as a string!!!
}
}
Requires
Author: Serverless-appsync
Source Code: https://github.com/serverless-appsync/serverless-appsync-simulator
License: MIT License
Generis
Versatile Go code generator.
Generis is a lightweight code preprocessor adding the following features to the Go language:
package main;
// -- IMPORTS
import (
"html"
"io"
"log"
"net/http"
"net/url"
"strconv"
);
// -- DEFINITIONS
#define DebugMode
#as true
// ~~
#define HttpPort
#as 8080
// ~~
#define WriteLine( {{text}} )
#as log.Println( {{text}} )
// ~~
#define local {{variable}} : {{type}};
#as var {{variable}} {{type}};
// ~~
#define DeclareStack( {{type}}, {{name}} )
#as
// -- TYPES
type {{name}}Stack struct
{
ElementArray []{{type}};
}
// -- INQUIRIES
func ( stack * {{name}}Stack ) IsEmpty(
) bool
{
return len( stack.ElementArray ) == 0;
}
// -- OPERATIONS
func ( stack * {{name}}Stack ) Push(
element {{type}}
)
{
stack.ElementArray = append( stack.ElementArray, element );
}
// ~~
func ( stack * {{name}}Stack ) Pop(
) {{type}}
{
local
element : {{type}};
element = stack.ElementArray[ len( stack.ElementArray ) - 1 ];
stack.ElementArray = stack.ElementArray[ : len( stack.ElementArray ) - 1 ];
return element;
}
#end
// ~~
#define DeclareStack( {{type}} )
#as DeclareStack( {{type}}, {{type:PascalCase}} )
// -- TYPES
DeclareStack( string )
DeclareStack( int32 )
// -- FUNCTIONS
func HandleRootPage(
response_writer http.ResponseWriter,
request * http.Request
)
{
local
boolean : bool;
local
natural : uint;
local
integer : int;
local
real : float64;
local
escaped_html_text,
escaped_url_text,
text : string;
local
integer_stack : Int32Stack;
boolean = true;
natural = 10;
integer = 20;
real = 30.0;
text = "text";
escaped_url_text = "&escaped text?";
escaped_html_text = "<escaped text/>";
integer_stack.Push( 10 );
integer_stack.Push( 20 );
integer_stack.Push( 30 );
#write response_writer
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title><%= request.URL.Path %></title>
</head>
<body>
<% if ( boolean ) { %>
<%= "URL : " + request.URL.Path %>
<br/>
<%@ natural %>
<%# integer %>
<%& real %>
<br/>
<%~ text %>
<%^ escaped_url_text %>
<%= escaped_html_text %>
<%= "<%% ignored %%>" %>
<%% ignored %%>
<% } %>
<br/>
Stack :
<br/>
<% for !integer_stack.IsEmpty() { %>
<%# integer_stack.Pop() %>
<% } %>
</body>
</html>
#end
}
// ~~
func main()
{
http.HandleFunc( "/", HandleRootPage );
#if DebugMode
WriteLine( "Listening on http://localhost:HttpPort" );
#end
log.Fatal(
http.ListenAndServe( ":HttpPort", nil )
);
}
Constants and generic code can be defined with the following syntax:
#define old code
#as new code
#define old code
#as
new
code
#end
#define
old
code
#as new code
#define
old
code
#as
new
code
#end
The #define directive can contain one or several parameters:
{{variable name}} : hierarchical code (with properly matching brackets and parentheses)
{{variable name#}} : statement code (hierarchical code without semicolon)
{{variable name$}} : plain code
{{variable name:boolean expression}} : conditional hierarchical code
{{variable name#:boolean expression}} : conditional statement code
{{variable name$:boolean expression}} : conditional plain code
They can include a boolean expression requiring that they match specific conditions:
HasText text
HasPrefix prefix
HasSuffix suffix
HasIdentifier text
false
true
!expression
expression && expression
expression || expression
( expression )
The #define directive must not start or end with a parameter.
The #as directive can use the value of the #define parameters:
{{variable name}}
{{variable name:filter function}}
{{variable name:filter function:filter function:...}}
Their value can be changed through one or several filter functions:
LowerCase
UpperCase
MinorCase
MajorCase
SnakeCase
PascalCase
CamelCase
RemoveComments
RemoveBlanks
PackStrings
PackIdentifiers
ReplacePrefix old_prefix new_prefix
ReplaceSuffix old_suffix new_suffix
ReplaceText old_text new_text
ReplaceIdentifier old_identifier new_identifier
AddPrefix prefix
AddSuffix suffix
RemovePrefix prefix
RemoveSuffix suffix
RemoveText text
RemoveIdentifier identifier
Conditional code can be defined with the following syntax:
#if boolean expression
#if boolean expression
...
#else
...
#end
#else
#if boolean expression
...
#else
...
#end
#end
The boolean expression can use the following operators:
false
true
!expression
expression && expression
expression || expression
( expression )
Templated HTML code can be sent to a stream writer using the following syntax:
#write writer expression
<% code %>
<%@ natural expression %>
<%# integer expression %>
<%& real expression %>
<%~ text expression %>
<%= escaped text expression %>
<%! removed content %>
<%% ignored tags %%>
#end
The --join option requires the statements to end with a semicolon.
The #write directive is only available for the Go language.
Installation
Install the DMD 2 compiler (using the MinGW setup option on Windows).
Build the executable with the following command line:
dmd -m64 generis.d
Command line
generis [options]
--prefix # : set the command prefix
--parse INPUT_FOLDER/ : parse the definitions of the Generis files in the input folder
--process INPUT_FOLDER/ OUTPUT_FOLDER/ : reads the Generis files in the input folder and writes the processed files in the output folder
--trim : trim the HTML templates
--join : join the split statements
--create : create the output folders if needed
--watch : watch the Generis files for modifications
--pause 500 : time to wait before checking the Generis files again
--tabulation 4 : set the tabulation space count
--extension .go : generate files with this extension
Examples
generis --process GS/ GO/
Reads the Generis files in the GS/ folder and writes Go files in the GO/ folder.
generis --process GS/ GO/ --create
Reads the Generis files in the GS/ folder and writes Go files in the GO/ folder, creating the output folders if needed.
generis --process GS/ GO/ --create --watch
Reads the Generis files in the GS/ folder and writes Go files in the GO/ folder, creating the output folders if needed and watching the Generis files for modifications.
generis --process GS/ GO/ --trim --join --create --watch
Reads the Generis files in the GS/ folder and writes Go files in the GO/ folder, trimming the HTML templates, joining the split statements, creating the output folders if needed and watching the Generis files for modifications.
Version: 2.0
Author: Senselogic
Source Code: https://github.com/senselogic/GENERIS
License: View license
The shift towards microservices and modular applications makes testing more important and more challenging at the same time. You have to make sure that the microservices running in containers perform well and as intended, but you can no longer rely on conventional testing strategies to get the job done.
This is where new testing approaches are needed. Testing your microservices application requires the right approach, a suitable set of tools, and immense attention to detail. This article will guide you through the process of testing your microservices and talk about the challenges you will have to overcome along the way. Let's get started, shall we?
Traditionally, testing a monolith application meant configuring a test environment and setting up all of the application components in a way that matched the production environment. It took time to set up the testing environment, and there were a lot of complexities around the process.
Testing also requires the application to run in full. It is not possible to test monolith apps on a per-component basis, mainly because there is usually a base code that ties everything together, and the app is designed to run as a whole in order to work properly.
Microservices running in containers offer one particular advantage: universal compatibility. You don’t have to match the testing environment with the deployment architecture exactly, and you can get away with testing individual components rather than the full app in some situations.
Of course, you will have to embrace the new cloud-native approach across the pipeline. Rather than creating critical dependencies between microservices, you need to treat each one as a semi-independent module.
The only monolith or centralized portion of the application is the database, but this too is an easy challenge to overcome. As long as you have a persistent database running on your test environment, you can perform tests at any time.
Keep in mind that there are additional things to focus on when testing microservices.
Test containers are the method of choice for many developers. Unlike monolith apps, which let you use stubs and mocks for testing, microservices need to be tested in test containers. Many CI/CD pipelines actually integrate production microservices as part of the testing process.
As mentioned before, there are many ways to test microservices effectively, but the one approach that developers now use reliably is contract testing. Loosely coupled microservices can be tested in an effective and efficient way using contract testing, mainly because this testing approach focuses on contracts; in other words, it focuses on how components or microservices communicate with each other.
Syntax and semantics construct how components communicate with each other. By defining syntax and semantics in a standardized way and testing microservices based on their ability to generate the right message formats and meet behavioral expectations, you can rest assured knowing that the microservices will behave as intended when deployed.
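To make this concrete, here is a minimal consumer-side sketch using Pact's JavaScript DSL (v9-style API). The consumer/provider names, port, and the /users/42 endpoint are illustrative, and mocha-style hooks plus node-fetch v2 are assumed:
const path = require('path');
const { Pact } = require('@pact-foundation/pact');
const fetch = require('node-fetch'); // node-fetch v2-style require

// The Pact mock provider stands in for the real UserService during the test.
const provider = new Pact({
  consumer: 'WebApp',
  provider: 'UserService',
  port: 1234,
  dir: path.resolve(process.cwd(), 'pacts'), // where the contract file is written
});

describe('GET /users/42', () => {
  before(() => provider.setup());
  after(() => provider.finalize());

  it('generates the agreed message format', async () => {
    // The contract pins down both syntax (response shape) and
    // semantics (behavior for a known provider state).
    await provider.addInteraction({
      state: 'user 42 exists',
      uponReceiving: 'a request for user 42',
      withRequest: { method: 'GET', path: '/users/42' },
      willRespondWith: {
        status: 200,
        headers: { 'Content-Type': 'application/json' },
        body: { id: 42, name: 'Jane Doe' },
      },
    });

    const res = await fetch('http://localhost:1234/users/42');
    const body = await res.json();
    if (body.id !== 42) throw new Error('unexpected payload');

    // Fails the test if any declared interaction was not exercised.
    await provider.verify();
  });
});
The recorded contract can then be replayed against the provider, so both sides are tested asynchronously without spinning up the whole system.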
It is easy to fall into the trap of making testing microservices complicated, but there are ways to avoid this problem. Testing microservices doesn’t have to be complicated at all when you have the right strategy in place.
There are several ways to test microservices, too.
What's important to note is that these testing approaches allow for asynchronous testing. After all, asynchronous development is what makes developing microservices so appealing in the first place. By allowing for asynchronous testing, you can also make sure that components or microservices can be updated independently of one another.
Cypress automates everything that runs in the browser, and many times we have a use case where we need to validate our UI behavior against the browser's network calls; this is where Cypress comes into the picture. So, as per Cypress best practices, we created a REST-API-Testing.spec.js file, and inside that spec file we defined our test cases for performing CRUD operations:
C: Create
R: Read
U: Update
D: Delete
We have also added some assertions on the response, as we would when testing backend APIs with different REST clients.
it('GET-list user', () => {
  // Assumes a baseUrl (e.g. a reqres.in-style demo API) is configured in
  // cypress.json, so that relative paths like '/users?page=2' resolve against it.
  cy.request('GET', '/users?page=2').then((response) => {
    expect(response.status).equal(200)
    expect(response.body.data[0].first_name).equal('Michael')
    expect(response.body).to.not.be.null
    expect(response.body.data).to.have.length(6)
  })
})
Here we have defined a GET request that fetches the users' details.
it('POST-Create user', () => {
  // Request body for the user to be created.
  var user = {
    "name": "Vandana Yadav",
    "job": "QA"
  }
  cy.request('POST', '/users', user).then((response) => {
    expect(response.status).equal(201)
    expect(response.body.name).equal(user.name)
    expect(response.body.job).equal(user.job)
  })
  // The same assertion, written in Cypress's chained .its()/.should() style.
  cy.request('POST', '/users', user).its('body').should('include', { name: 'Vandana Yadav' })
})
Here we have defined a POST request that creates a new user.
it('Update user', () => {
  // Updated attributes for the existing user.
  var user1 = {
    "name": "Samantha",
    "job": "DevOps"
  }
  cy.request('PUT', '/users/2', user1).then((response) => {
    expect(response.status).equal(200)
    expect(response.body.name).equal(user1.name)
    expect(response.body.job).equal(user1.job)
  })
})
Here we have defined a PUT request that updates the user's details.
it('Delete user', () => {
  // No request body is needed for the delete call.
  cy.request('DELETE', '/users/2').then((response) => {
    expect(response.status).equal(204)
  })
})
}) // closes the describe block of the spec file
Here we have defined a DELETE request that deletes a user.
We can execute our Cypress tests as per our requirements: if we want to run the test cases in a visible browser, we pass the --headed argument along with the cypress run command, and if we want to run the tests in the console only, we pass --headless instead.
The command for executing our tests in headed mode:
npx cypress run --spec "cypress/integration/REST-API-Testing.spec.js" --headed
The command for executing our tests in headless mode:
npx cypress run --spec "cypress/integration/REST-API-Testing.spec.js" --headless
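Alternatively, the spec can be opened in the interactive Cypress Test Runner, which is handy for debugging:
npx cypress open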
Automation and segregation can help you build better software
If you write automated tests and deliver them to the customer, they can verify that the software is working properly. And, at the end of the day, they paid for it.
OK. We can segregate, or separate, the tests according to some criteria. For example, "white box" tests are used to measure the internal quality of the software, in addition to checking the expected results. They are very useful for measuring the percentage of lines of code executed, the cyclomatic complexity, and several other software metrics. Unit tests are white box tests, as the sketch below illustrates.
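As a hedged illustration, here is a tiny white-box unit test in JavaScript; the isAdult function is made up for the example, and exercising both of its branches is exactly the kind of internal metric (line coverage, branch coverage, a cyclomatic complexity of 2) that these tests expose:
const assert = require('assert');

// Function under test: one decision point, so a cyclomatic complexity of 2.
function isAdult(age) {
  if (age >= 18) {
    return true;
  }
  return false;
}

// Covering both branches drives line and branch coverage to 100%
// for this function.
assert.strictEqual(isAdult(21), true);
assert.strictEqual(isAdult(16), false);
console.log('all assertions passed');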