Improve Your End to End Tests with Cypress Intercept

If you have used Cypress in the past, you should be familiar with the cy.server() and cy.route() methods. They enabled engineers to handle XHR requests within the browser for patterns such as mocking responses, aborting requests, and smart waiting.

Deprecated in Cypress 6.0.0, both cy.route() and cy.server() have been replaced by a new method: cy.intercept().

Released in November 2020, the cy.intercept() method allows engineers to monitor all network traffic, not just XHR requests. Simply put, this is a game changer: it places Cypress on the same level as Puppeteer and Playwright in terms of its ability to manipulate requests within the browser.

This tutorial will showcase the power behind the intercept() method by providing examples of request interceptions, mocks, and assertions. We will use the DemoQA Bookstore application as a base.
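For a first taste of the API, here is a minimal sketch of a stubbed, aliased, and awaited request against the DemoQA Bookstore app. The route pattern, fixture name, and alias are illustrative assumptions rather than code from the tutorial:

describe('Bookstore', () => {
  it('stubs the book list and waits for it', () => {
    // Stub the books request with a local fixture and give it an alias
    cy.intercept('GET', '**/BookStore/v1/Books', { fixture: 'books.json' }).as('getBooks');

    cy.visit('https://demoqa.com/books');

    // Smart waiting: continue only once the stubbed response has resolved
    cy.wait('@getBooks').its('response.statusCode').should('eq', 200);
  });
});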


Hermann Frami


A Simple Wrapper Around Amplify AppSync Simulator

This serverless plugin is a wrapper for amplify-appsync-simulator made for testing AppSync APIs built with serverless-appsync-plugin.


npm install serverless-appsync-simulator
# or
yarn add serverless-appsync-simulator


This plugin relies on your serverless.yml file and on the serverless-offline plugin.

plugins:
  - serverless-dynamodb-local # only if you need dynamodb resolvers and you don't have an external dynamodb
  - serverless-appsync-simulator
  - serverless-offline

Note: Order is important: serverless-appsync-simulator must go before serverless-offline

To start the simulator, run the following command:

sls offline start

You should see something like this in the logs:

Serverless: AppSync endpoint: http://localhost:20002/graphql
Serverless: GraphiQl: http://localhost:20002


Put options under custom.appsync-simulator in your serverless.yml file.

| option | default | description |
| ------ | ------- | ----------- |
| apiKey | 0123456789 | When using API_KEY as the authentication type, the key to authenticate to the endpoint. |
| port | 20002 | AppSync operations port; if using multiple APIs, this value is used as a starting point, and each other API gets a port of lastPort + 10 (e.g. 20002, 20012, 20022, etc.) |
| wsPort | 20003 | AppSync subscriptions port; if using multiple APIs, this value is used as a starting point, and each other API gets a port of lastPort + 10 (e.g. 20003, 20013, 20023, etc.) |
| location | . (base directory) | Location of the lambda functions handlers. |
| refMap | {} | A mapping of resource resolutions for the Ref function |
| getAttMap | {} | A mapping of resource resolutions for the GetAtt function |
| importValueMap | {} | A mapping of resource resolutions for the ImportValue function |
| functions | {} | A mapping of external functions for providing invoke urls for external functions |
| dynamoDb.endpoint | http://localhost:8000 | DynamoDB endpoint. Specify it if you're not using serverless-dynamodb-local. Otherwise, the port is taken from the dynamodb-local conf |
| dynamoDb.region | localhost | DynamoDB region. Specify it if you're connecting to a remote DynamoDB instance. |
| dynamoDb.accessKeyId | DEFAULT_ACCESS_KEY | AWS Access Key ID to access DynamoDB |
| dynamoDb.secretAccessKey | DEFAULT_SECRET | AWS Secret Key to access DynamoDB |
| dynamoDb.sessionToken | DEFAULT_ACCESS_TOKEEN | AWS Session Token to access DynamoDB, only if you have temporary security credentials configured on AWS |
| dynamoDb.* | | You can add every configuration accepted by the DynamoDB SDK |
| rds.dbName | | Name of the database |
| rds.dbHost | | Database host |
| rds.dbDialect | | Database dialect. Possible values: mysql, postgres |
| rds.dbUsername | | Database username |
| rds.dbPassword | | Database password |
| rds.dbPort | | Database port |
| watch | - *.graphql, - *.vtl | Array of glob patterns to watch for hot-reloading. |


custom:
  appsync-simulator:
    location: '.webpack/service' # use webpack build directory
    dynamoDb:
      endpoint: 'http://my-custom-dynamo:8000'


By default, the simulator will hot-reload when changes to *.graphql or *.vtl files are detected. Changes to *.yml files are not supported (yet? - this is a Serverless Framework limitation). You will need to restart the simulator each time you change your yml files.

Hot-reloading relies on watchman. Make sure it is installed on your system.

You can change the files being watched with the watch option, which is then passed to watchman as the match expression.


      - ["match", "handlers/**/*.vtl", "wholename"] # => array is interpreted as the literal match expression
      - "*.graphql"                                 # => string like this is equivalent to `["match", "*.graphql"]`

Or you can opt out by passing an empty array or setting the option to false.

Note: Functions should not require hot-reloading, unless you are using a transpiler or a bundler (such as webpack, babel or typescript), in which case you should delegate hot-reloading to that tool instead.

Resource CloudFormation functions resolution

This plugin supports some resource resolution from the Ref, Fn::GetAtt and Fn::ImportValue functions in your yaml file. It also supports some other Cfn functions such as Fn::Join, Fn::Sub, etc.

Note: Under the hood, this feature relies on the cfn-resolver-lib package. For more info on supported cfn functions, refer to its documentation.

Basic usage

You can reference resources in your functions' environment variables (which will be accessible from your lambda functions) or datasource definitions. The plugin will automatically resolve them for you.

provider:
  environment:
    BUCKET_NAME:
      Ref: MyBucket # resolves to `my-bucket-name`

resources:
  Resources:
    MyDbTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: myTable
    MyBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-bucket-name

# in your appsync config
dataSources:
  - type: AMAZON_DYNAMODB
    name: dynamosource
    config:
      tableName:
        Ref: MyDbTable # resolves to `myTable`

Override (or mock) values

Sometimes, some references cannot be resolved, as they come from an Output from CloudFormation; or you might want to use mocked values in your local environment.

In those cases, you can define (or override) those values using the refMap, getAttMap and importValueMap options.

  • refMap takes a mapping of resource name to value pairs
  • getAttMap takes a mapping of resource name to attribute/value pairs
  • importValueMap takes a mapping of import name to value pairs


custom:
  appsync-simulator:
    refMap:
      # Override `MyDbTable` resolution from the previous example.
      MyDbTable: 'mock-myTable'
    getAttMap:
      # define ElasticSearchInstance DomainName
      ElasticSearchInstance:
        DomainEndpoint: 'localhost:9200'
    importValueMap:
      other-service-api-url: ''

# in your appsync config
dataSources:
  - type: AMAZON_ELASTICSEARCH
    name: elasticsource
    config:
      # endpoint resolves as 'http://localhost:9200'
      endpoint:
        Fn::Join:
          - ''
          - - https://
            - Fn::GetAtt:
                - ElasticSearchInstance
                - DomainEndpoint

Key-value mock notation

In some special cases you will need to use the key-value mock notation. A good example is when you need to include the serverless stage value (${self:provider.stage}) in the import name.

This notation can be used with all mocks - refMap, getAttMap and importValueMap

provider:
  environment:
    API_URL:
      Fn::ImportValue: other-service-api-${self:provider.stage}-url

custom:
  appsync-simulator:
    importValueMap:
      - key: other-service-api-${self:provider.stage}-url
        value: ''


This plugin only tries to resolve the following parts of the yml tree:

  • provider.environment
  • functions[*].environment
  • custom.appSync

If you need others to be resolved, feel free to open an issue and explain your use case.

For now, the supported resources to be automatically resolved by Ref: are:

  • DynamoDb tables
  • S3 Buckets

Feel free to open a PR or an issue to extend them as well.

External functions

When a function is not defined within the current serverless file, you can still call it by providing an invoke url, which should point to a REST method. Make sure you specify "get" or "post" for the method. The default is "get", but you probably want "post".

custom:
  appsync-simulator:
    functions:
      addUser:
        url: http://localhost:3016/2015-03-31/functions/addUser/invocations
        method: post

Supported Resolver types

This plugin supports resolvers implemented by amplify-appsync-simulator, as well as custom resolvers.

From AWS Amplify:

  • NONE
  • AWS_LAMBDA
  • AMAZON_DYNAMODB
  • PIPELINE

Implemented by this plugin:

  • AMAZON_ELASTIC_SEARCH
  • HTTP
  • RELATIONAL_DATABASE

Relational Database

Sample VTL for a create mutation

#set( $cols = [] )
#set( $vals = [] )
#foreach( $entry in $ctx.args.input.keySet() )
  #set( $regex = "([a-z])([A-Z]+)")
  #set( $replacement = "$1_$2")
  #set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
  #set( $discard = $cols.add("$toSnake") )
  #if( $util.isBoolean($ctx.args.input[$entry]) )
      #if( $ctx.args.input[$entry] )
        #set( $discard = $vals.add("1") )
      #else
        #set( $discard = $vals.add("0") )
      #end
  #else
      #set( $discard = $vals.add("'$ctx.args.input[$entry]'") )
  #end
#end
#set( $valStr = $vals.toString().replace("[","(").replace("]",")") )
#set( $colStr = $cols.toString().replace("[","(").replace("]",")") )
#if ( $valStr.substring(0, 1) != '(' )
  #set( $valStr = "($valStr)" )
#end
#if ( $colStr.substring(0, 1) != '(' )
  #set( $colStr = "($colStr)" )
#end
{
  "version": "2018-05-29",
  "statements":   ["INSERT INTO <name-of-table> $colStr VALUES $valStr", "SELECT * FROM <name-of-table> ORDER BY id DESC LIMIT 1"]
}

Sample VTL for an update mutation

#set( $update = "" )
#set( $equals = "=" )
#foreach( $entry in $ctx.args.input.keySet() )
  #set( $cur = $ctx.args.input[$entry] )
  #set( $regex = "([a-z])([A-Z]+)")
  #set( $replacement = "$1_$2")
  #set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
  #if( $util.isBoolean($cur) )
      #if( $cur )
        #set ( $cur = "1" )
      #else
        #set ( $cur = "0" )
      #end
  #end
  #if ( $util.isNullOrEmpty($update) )
      #set($update = "$toSnake$equals'$cur'" )
  #else
      #set($update = "$update,$toSnake$equals'$cur'" )
  #end
#end
{
  "version": "2018-05-29",
  "statements":   ["UPDATE <name-of-table> SET $update WHERE id=${ctx.args.input.id}", "SELECT * FROM <name-of-table> WHERE id=${ctx.args.input.id}"]
}

Sample resolver for delete mutation

  "version": "2018-05-29",
  "statements":   ["UPDATE <name-of-table> set deleted_at=NOW() WHERE id=$", "SELECT * FROM <name-of-table> WHERE id=$"]

Sample mutation response VTL with support for handling AWSDateTime

#set ( $index = -1)
#set ( $result = $util.parseJson($ctx.result) )
#set ( $meta = $result.sqlStatementResults[1].columnMetadata)
#foreach ($column in $meta)
    #set ($index = $index + 1)
    #if ( $column["typeName"] == "timestamptz" )
        #set ($time = $result["sqlStatementResults"][1]["records"][0][$index]["stringValue"] )
        #set ( $nowEpochMillis = $util.time.parseFormattedToEpochMilliSeconds("$time.substring(0,19)+0000", "yyyy-MM-dd HH:mm:ssZ") )
        #set ( $isoDateTime = $util.time.epochMilliSecondsToISO8601($nowEpochMillis) )
        $util.qr( $result["sqlStatementResults"][1]["records"][0][$index].put("stringValue", "$isoDateTime") )
    #end
#end
#set ( $res = $util.parseJson($util.rds.toJsonString($util.toJson($result)))[1][0] )
#set ( $response = {} )
#foreach($mapKey in $res.keySet())
    #set ( $s = $mapKey.split("_") )
    #set ( $camelCase="" )
    #set ( $isFirst=true )
    #foreach($entry in $s)
        #if ( $isFirst )
          #set ( $first = $entry.substring(0,1) )
        #else
          #set ( $first = $entry.substring(0,1).toUpperCase() )
        #end
        #set ( $isFirst=false )
        #set ( $stringLength = $entry.length() )
        #set ( $remaining = $entry.substring(1, $stringLength) )
        #set ( $camelCase = "$camelCase$first$remaining" )
    #end
    $util.qr( $response.put("$camelCase", $res[$mapKey]) )
#end
$utils.toJson($response)

Using Variable Map

Variable map support is limited and does not differentiate between number and string data types; inject them directly if needed.

null, true, and false values will be escaped properly.

  "version": "2018-05-29",
  "statements":   [
    "UPDATE <name-of-table> set deleted_at=NOW() WHERE id=:ID",
    "SELECT * FROM <name-of-table> WHERE id=:ID and unix_timestamp > $ctx.args.newerThan"
  variableMap: {
    ":ID": $,
##    ":TIMESTAMP": $ctx.args.newerThan -- This will be handled as a string!!!


Author: Serverless-appsync
Source Code: 
License: MIT License

#serverless #sync #graphql 

Generis: Versatile Go Code Generator


Versatile Go code generator.


Generis is a lightweight code preprocessor adding the following features to the Go language :

  • Generics.
  • Free-form macros.
  • Conditional compilation.
  • HTML templating.
  • Allman style conversion.


package main;

// -- IMPORTS

import (
    "log"
    "net/http"
    );

// -- DEFINITIONS

#define DebugMode
#as true

// ~~

#define HttpPort
#as 8080

// ~~

#define WriteLine( {{text}} )
#as log.Println( {{text}} )

// ~~

#define local {{variable}} : {{type}};
#as var {{variable}} {{type}};

// ~~

#define DeclareStack( {{type}}, {{name}} )
#as
    // -- TYPES

    type {{name}}Stack struct
    {
        ElementArray []{{type}};
    }

    // -- INQUIRIES

    func ( stack * {{name}}Stack ) IsEmpty(
        ) bool
    {
        return len( stack.ElementArray ) == 0;
    }

    // -- OPERATIONS

    func ( stack * {{name}}Stack ) Push(
        element {{type}}
        )
    {
        stack.ElementArray = append( stack.ElementArray, element );
    }

    // ~~

    func ( stack * {{name}}Stack ) Pop(
        ) {{type}}
    {
        local
            element : {{type}};

        element = stack.ElementArray[ len( stack.ElementArray ) - 1 ];

        stack.ElementArray = stack.ElementArray[ : len( stack.ElementArray ) - 1 ];

        return element;
    }
#end

// ~~

#define DeclareStack( {{type}} )
#as DeclareStack( {{type}}, {{type:PascalCase}} )

// -- TYPES

DeclareStack( string )
DeclareStack( int32 )

// -- FUNCTIONS

func HandleRootPage(
    response_writer http.ResponseWriter,
    request * http.Request
    )
{
    local
        boolean : bool;
        natural : uint;
        integer : int;
        real : float64;
        text : string;
        escaped_url_text : string;
        escaped_html_text : string;
        integer_stack : Int32Stack;

    boolean = true;
    natural = 10;
    integer = 20;
    real = 30.0;
    text = "text";
    escaped_url_text = "&escaped text?";
    escaped_html_text = "<escaped text/>";

    integer_stack.Push( 10 );
    integer_stack.Push( 20 );
    integer_stack.Push( 30 );

    #write response_writer
        <!DOCTYPE html>
        <html lang="en">
            <head>
                <meta charset="utf-8">
                <title><%= request.URL.Path %></title>
            </head>
            <body>
                <% if ( boolean ) { %>
                    <%= "URL : " + request.URL.Path %>
                    <%@ natural %>
                    <%# integer %>
                    <%& real %>
                    <%~ text %>
                    <%^ escaped_url_text %>
                    <%= escaped_html_text %>
                    <%= "<%% ignored %%>" %>
                    <%% ignored %%>
                <% } %>
                Stack :
                <% for !integer_stack.IsEmpty() { %>
                    <%# integer_stack.Pop() %>
                <% } %>
            </body>
        </html>
    #end
}

// ~~

func main()
{
    http.HandleFunc( "/", HandleRootPage );

    #if DebugMode
        WriteLine( "Listening on http://localhost:HttpPort" );
    #end

    http.ListenAndServe( ":HttpPort", nil );
}


#define directive

Constants and generic code can be defined with the following syntax :

#define old code
#as new code

#define old code
#as
    new
    code
#end


#define parameter

The #define directive can contain one or several parameters :

{{variable name}} : hierarchical code (with properly matching brackets and parentheses)
{{variable name#}} : statement code (hierarchical code without semicolon)
{{variable name$}} : plain code
{{variable name:boolean expression}} : conditional hierarchical code
{{variable name#:boolean expression}} : conditional statement code
{{variable name$:boolean expression}} : conditional plain code

They can have a boolean expression to require that they match specific conditions :

HasText text
HasPrefix prefix
HasSuffix suffix
HasIdentifier text
expression && expression
expression || expression
( expression )

The #define directive must not start or end with a parameter.

#as parameter

The #as directive can use the value of the #define parameters :

{{variable name}}
{{variable name:filter function}}
{{variable name:filter function:filter function:...}}

Their value can be changed through one or several filter functions :

ReplacePrefix old_prefix new_prefix
ReplaceSuffix old_suffix new_suffix
ReplaceText old_text new_text
ReplaceIdentifier old_identifier new_identifier
AddPrefix prefix
AddSuffix suffix
RemovePrefix prefix
RemoveSuffix suffix
RemoveText text
RemoveIdentifier identifier

#if directive

Conditional code can be defined with the following syntax :

#if boolean expression
    #if boolean expression
        ...
    #end

    #if boolean expression
        ...
    #end
#end

The boolean expression can use the following operators :

expression && expression
expression || expression
( expression )

#write directive

Templated HTML code can be sent to a stream writer using the following syntax :

#write writer expression
    <% code %>
    <%@ natural expression %>
    <%# integer expression %>
    <%& real expression %>
    <%~ text expression %>
    <%^ escaped url text expression %>
    <%= escaped html text expression %>
    <%! removed content %>
    <%% ignored tags %%>
#end


Limitations

  • There is no operator precedence in boolean expressions.
  • The --join option requires ending the statements with a semicolon.
  • The #write directive is only available for the Go language.


Installation

Install the DMD 2 compiler (using the MinGW setup option on Windows).

Build the executable with the following command line :

dmd -m64 generis.d

Command line

generis [options]


--prefix # : set the command prefix
--parse INPUT_FOLDER/ : parse the definitions of the Generis files in the input folder
--process INPUT_FOLDER/ OUTPUT_FOLDER/ : reads the Generis files in the input folder and writes the processed files in the output folder
--trim : trim the HTML templates
--join : join the split statements
--create : create the output folders if needed
--watch : watch the Generis files for modifications
--pause 500 : time to wait before checking the Generis files again
--tabulation 4 : set the tabulation space count
--extension .go : generate files with this extension


generis --process GS/ GO/

Reads the Generis files in the GS/ folder and writes Go files in the GO/ folder.

generis --process GS/ GO/ --create

Reads the Generis files in the GS/ folder and writes Go files in the GO/ folder, creating the output folders if needed.

generis --process GS/ GO/ --create --watch

Reads the Generis files in the GS/ folder and writes Go files in the GO/ folder, creating the output folders if needed and watching the Generis files for modifications.

generis --process GS/ GO/ --trim --join --create --watch

Reads the Generis files in the GS/ folder and writes Go files in the GO/ folder, trimming the HTML templates, joining the split statements, creating the output folders if needed and watching the Generis files for modifications.



Author: Senselogic
Source Code: 
License: View license

#go #golang #code 

Tamia Walter


Testing Microservices Applications

The shift towards microservices and modular applications makes testing more important and more challenging at the same time. You have to make sure that the microservices running in containers perform well and as intended, but you can no longer rely on conventional testing strategies to get the job done.

This is where new testing approaches are needed. Testing your microservices applications requires the right approach, a suitable set of tools, and immense attention to detail. This article will guide you through the process of testing your microservices and talk about the challenges you will have to overcome along the way. Let's get started, shall we?

A Brave New World

Traditionally, testing a monolith application meant configuring a test environment and setting up all of the application components in a way that matched the production environment. It took time to set up the testing environment, and there were a lot of complexities around the process.

Testing also requires the application to run in full. It is not possible to test monolith apps on a per-component basis, mainly because there is usually a shared base of code that ties everything together, and the app is designed to run as a whole to work properly.

Microservices running in containers offer one particular advantage: universal compatibility. You don’t have to match the testing environment with the deployment architecture exactly, and you can get away with testing individual components rather than the full app in some situations.

Of course, you will have to embrace the new cloud-native approach across the pipeline. Rather than creating critical dependencies between microservices, you need to treat each one as a semi-independent module.

The only monolith or centralized portion of the application is the database, but this too is an easy challenge to overcome. As long as you have a persistent database running on your test environment, you can perform tests at any time.

Keep in mind that there are additional things to focus on when testing microservices.

  • Microservices rely on network communications to talk to each other, so network reliability and requirements must be part of the testing.
  • Automation and infrastructure elements are now defined as code, and you have to make sure that they also run properly when microservices are pushed through the pipeline.
  • While containerization is universal, you still have to pay attention to specific dependencies and create a testing strategy that allows for those dependencies to be included.

Test containers are the method of choice for many developers. Unlike monolith apps, which let you use stubs and mocks for testing, microservices need to be tested in test containers. Many CI/CD pipelines actually integrate production microservices as part of the testing process.
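As a rough sketch of that idea, assuming the Node.js testcontainers package and a service that depends on Redis (both illustrative choices, not from this article), a test can spin up a real dependency in a throwaway container instead of stubbing it:

import { GenericContainer, StartedTestContainer } from 'testcontainers';

// Start a disposable Redis container, hand its address to the test,
// and always stop the container afterwards.
async function withRedis(test: (host: string, port: number) => Promise<void>): Promise<void> {
  const container: StartedTestContainer = await new GenericContainer('redis:7')
    .withExposedPorts(6379)
    .start();
  try {
    await test(container.getHost(), container.getMappedPort(6379));
  } finally {
    await container.stop();
  }
}

// usage: await withRedis(async (host, port) => { /* point the service under test here */ });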

Contract Testing as an Approach

As mentioned before, there are many ways to test microservices effectively, but the one approach that developers now use reliably is contract testing. Loosely coupled microservices can be tested in an effective and efficient way using contract testing, mainly because this testing approach focuses on contracts; in other words, it focuses on how components or microservices communicate with each other.

Syntax and semantics define how components communicate with each other. By defining syntax and semantics in a standardized way and testing microservices based on their ability to generate the right message formats and meet behavioral expectations, you can rest assured knowing that the microservices will behave as intended when deployed.
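A minimal, library-free sketch can make this concrete (tools such as Pact formalize the idea; every name below is invented for illustration). The consumer pins down the request it makes and the response shape it relies on, and the provider is then verified against that contract:

// The contract: what the consumer sends and the response shape it depends on.
interface Contract {
  method: 'GET' | 'POST';
  path: string;
  responseStatus: number;
  requiredFields: string[]; // only the fields the consumer actually reads
}

const userContract: Contract = {
  method: 'GET',
  path: '/users/123',
  responseStatus: 200,
  requiredFields: ['id', 'userName', 'email'],
};

// Provider-side verification: check a real response against the contract.
function verifyContract(contract: Contract, status: number, body: Record<string, unknown>): string[] {
  const failures: string[] = [];
  if (status !== contract.responseStatus) {
    failures.push(`expected status ${contract.responseStatus}, got ${status}`);
  }
  for (const field of contract.requiredFields) {
    if (!(field in body)) {
      failures.push(`missing field: ${field}`);
    }
  }
  return failures; // empty means the provider honors the contract
}

console.log(verifyContract(userContract, 200, { id: '123', userName: 'jdoe' }));
// -> [ 'missing field: email' ]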

Ways to Test Microservices

It is easy to fall into the trap of making testing microservices complicated, but there are ways to avoid this problem. Testing microservices doesn’t have to be complicated at all when you have the right strategy in place.

There are several ways to test microservices too, including:

  • Unit testing: allows developers to test microservices in a granular way. It doesn't limit testing to individual microservices, but rather lets developers take a more granular approach, such as testing individual features or runtimes.
  • Integration testing: handles the testing of microservices in an interactive way. Microservices still need to work with each other when they are deployed, and integration testing is a key process in making sure that they do.
  • End-to-end testing: as the name suggests, tests microservices as a complete app. This type of testing enables the testing of features, UI, communications, and other components that construct the app.

What's important to note is that these testing approaches allow for asynchronous testing. After all, asynchronous development is what makes developing microservices so appealing in the first place. By allowing for asynchronous testing, you can also make sure that components or microservices can be updated independently of one another.

#blog #microservices #testing #caylent #contract testing #end-to-end testing #hoverfly #integration testing #microservices #microservices architecture #pact #testing #unit testing #vagrant #vcr

Software Testing 101: Regression Tests, Unit Tests, Integration Tests

Automation and segregation can help you build better software
If you write automated tests and deliver them to the customer, they can make sure the software is working properly. And, at the end of the day, they paid for it.

Ok. We can segregate the tests according to some criteria. For example, "white box" tests are used to measure the internal quality of the software, in addition to checking the expected results. They are very useful for knowing the percentage of lines of code executed, the cyclomatic complexity, and several other software metrics. Unit tests are white box tests.
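As a small illustration (the function and assertions below are invented for this example), a unit test pins down the expected result of one isolated piece of code, and coverage tools can then report which lines and branches it executed:

import assert from 'node:assert';

// Unit under test: an isolated function with no external dependencies.
function applyDiscount(price: number, percent: number): number {
  if (percent < 0 || percent > 100) {
    throw new RangeError('percent must be between 0 and 100');
  }
  return price - (price * percent) / 100;
}

// White box unit tests: exercise both the happy path and the error branch.
assert.strictEqual(applyDiscount(200, 10), 180);
assert.throws(() => applyDiscount(200, 150), RangeError);
console.log('all unit tests passed');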

#testing #software testing #regression tests #unit tests #integration tests

Dejah Reinger


How to Do API Testing?

Nowadays, API testing is an integral part of testing. There are a lot of tools like Postman, Insomnia, etc. Many articles cover what an API is and what API testing is, but the real question is: how do you do API testing, and what do you need to validate?

Note: In this article, I am going to use Postman assertions for all the examples, since it is the most popular tool. But this article is not intended only for Postman users.

Let’s directly jump to the topic.

Let's consider you have an API endpoint that takes a {{username}} path parameter. When you send a GET request to that URL, it returns a JSON response.

The response is in JSON format, like the one below:


  "jobTitle": "string",
  "userid": "string",
  "phoneNumber": "string",
  "password": "string",
  "email": "",
  "firstName": "string",
  "lastName": "string",
  "userName": "string",
  "country": "string",
  "region": "string",
  "city": "string",
  "department": "string",
  "userType": 0

In the JSON we can see there are properties and associated values.

Now, for example, if we need the details of the user with the username 'ganeshhegde', we send a GET request to that endpoint with the username in the path.

Now there are two scenarios.

1. Valid use case: The user exists in the database, and the API returns the user details with status code 200.

2. Invalid use case: The user is unavailable or invalid; in this case, the API returns status code 404 with a "Not Found" message.
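Here is roughly what those two checks look like as Postman assertions (a sketch; the field names follow the sample response above). These snippets go in the request's Tests tab:

// Valid use case: the user exists
pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});

pm.test("Response contains the requested user", function () {
    const json = pm.response.json();
    pm.expect(json.userName).to.eql("ganeshhegde");
    pm.expect(json).to.have.property("email");
});

// Invalid use case: run the request with an unknown username
pm.test("Unknown user returns 404", function () {
    pm.response.to.have.status(404);
});

In practice, you would attach the 200 assertions to the request with a valid username and the 404 assertion to a separate request with an invalid one.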

#tutorial #performance #api #test automation #api testing #testing and qa #application programming interface #testing as a service #testing tutorial #api test