Daisy Rees

Automated End-To-End Testing with Selenium and Docker

Automated end-to-end testing is the most effective way to test your app. It is also the cheapest way to get the benefits of testing if you currently have no tests at all. And you don’t need a ton of infrastructure or cloud servers to get there. In this guide we’ll see how to overcome the two main hurdles of end-to-end testing.

The first hurdle is Selenium. The API you use to write tests is ugly and non-intuitive. But it is not that difficult to use, and with a few convenience functions writing Selenium tests can become a breeze. The effort is well rewarded because you can automatically test your end users’ flows from end to end.

The second hurdle is putting the components together in an isolated environment: the frontend, the backend, the database and everything else your app uses. We will use Docker Compose to wire things together and automate the tests. It is easy even if your components live in different Git repositories.

Writing end to end tests in Selenium

Even if you are an API-only business, you have a frontend somewhere, be it only an admin or back-office interface. So end-to-end tests ultimately talk to a frontend application.
The industry standard tool is Selenium. Selenium provides an API to talk to the web browser and interact with the DOM. You can check what elements are displayed, fill inputs and click around. Anything a real user would do with your application, you can automate.

Selenium uses something called the WebDriver API. It is not very handy to use at first glance. But the learning curve is not steep. Creating a few convenience functions will get you productive in no time. I won’t go into the details of the WebDriver API here.
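To make this concrete, here is a minimal sketch of such convenience functions using the selenium-webdriver npm package. The helper names, element ids and Selenium server address are illustrative assumptions, not code from a real project:

// e2e/helpers.js -- small wrappers around the raw WebDriver API
const { Builder, By, until } = require('selenium-webdriver');

// Create a driver that talks to a remote Selenium server instead of a local browser
async function createDriver() {
  return new Builder()
    .forBrowser('chrome')
    .usingServer('http://localhost:4444/wd/hub')
    .build();
}

// Wait for an element to show up in the DOM, then return it
function waitForElement(driver, id, timeoutMs = 5000) {
  return driver.wait(until.elementLocated(By.id(id)), timeoutMs);
}

// Clear an input and type a value into it
async function fillInput(driver, id, value) {
  const input = await waitForElement(driver, id);
  await input.clear();
  await input.sendKeys(value);
}

module.exports = { createDriver, waitForElement, fillInput };

With helpers like these, a test reads as a sequence of user actions rather than a pile of raw WebDriver calls.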

There are also libraries to make your life easier. Nightwatch is the most popular.

If you have an Angular application, Protractor is your best friend. It integrates with the Angular event loop and allows you to use selectors based on your model. That is gold.
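For example, with an ngModel binding, a test can target the binding name directly instead of a CSS selector. A tiny sketch, where the binding name, the URL and the expected value are made up for illustration:

it('pre-fills the email field', async () => {
  await browser.get('/settings');
  // by.model finds the input bound to the 'user.email' model, no CSS selector involved
  const emailInput = element(by.model('user.email'));
  expect(await emailInput.getAttribute('value')).toEqual('ada@example.com');
});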

Writing a test for the most critical user flow of your app should take only a few hours, so go ahead. It will then run automatically ever after. Let’s see how.

Running your tests in Docker

We need to run our tests in an isolated environment so the outcome is predictable, and so we can enable continuous integration easily. We’ll use Docker Compose.

Selenium provides Docker images out of the box to test with one or several browsers. Each image spawns a Selenium server with the browser running underneath.

Let’s start with a single browser for now: headless Chrome. The docker-compose.yml file looks as follows (the commands are from an Angular example).

Note: If you’ve never used Docker you can easily install it on your computer. Docker has the troublesome tendency of forcing you to sign up for an account just to download the thing. But you actually don’t have to. Go to the release notes (link for Windows and link for Mac) and download not the latest version but the one right before. This is a direct download link.

version: '3.1'

services:
  app-serve:
    build: .
    image: myapp
    command: npm run serve:production
    expose:
      - 4200

  app-e2e-tests:
    image: myapp
    command: dockerize -wait tcp://app-serve:4200
              -wait tcp://selenium-chrome-standalone:4444
              -timeout 10s -wait-retry-interval 1s bash -c "npm run e2e"
    depends_on:
      - app-serve
      - selenium-chrome-standalone

  selenium-chrome-standalone:
    image: selenium/standalone-chrome
    expose:
      - 4444

The file above tells Docker to spin up an environment with 3 containers:

  • Our app to test: the container uses the myapp image which we’ll build right below
  • A container running the tests: the container uses the same myapp image. It uses dockerize to wait for the servers to be up before running the tests
  • The Selenium server: the container uses the official Selenium image. Nothing to do here. We could run the tests from the same container as the app, but splitting them keeps things clearer. It also lets you separate the output of the two containers in the result logs.

The containers will live inside a private virtual network and see each other as http://the-container-name (more here on networking in Docker).
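This matters for the configuration behind npm run e2e: inside the Compose network the tests must target the service names, not localhost. With Protractor, for example, the relevant part of the configuration could look roughly like this:

// protractor.conf.js (sketch)
exports.config = {
  seleniumAddress: 'http://selenium-chrome-standalone:4444/wd/hub',
  baseUrl: 'http://app-serve:4200',
};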

We need a Dockerfile to build the myapp image used for the containers. It should turn your frontend code into a bundle as close to production as possible. Running unit tests and linting is a good idea at that stage. After all, there is no need to run end-to-end tests if the basics do not work.

In the Dockerfile below we use a node image as the base, install dockerize and bundle the application. It is important to build for production. You don’t want to test a development build that has skipped the production optimizations. Too many things can go wrong there.

FROM node:12.13.0 AS base

ENV DOCKERIZE_VERSION v0.6.0

# Install dockerize so containers can wait for each other at startup
RUN wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
    && tar -C /usr/local/bin -xzvf dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
    && rm dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz

RUN mkdir -p /app
WORKDIR /app

COPY package.json .
COPY package-lock.json .

FROM base AS dependencies

RUN npm install

FROM dependencies AS runtime

COPY . .
RUN npm run lint
RUN npm run test:ci
RUN npm run build --production

Now that we have all the pieces together let’s run the tests using this command:

docker-compose up --build --abort-on-container-exit

It is long-ish, so script it in your project somehow. What it will do is build the myapp image based on the provided Dockerfile and then start all the containers. Dockerize makes sure your app and Selenium are up before executing the tests.

The --abort-on-container-exit option will tear down the environment as soon as one container exits. Since only our testing container is meant to exit (the others are servers), that is exactly what we want.

The docker-compose command will have the same exit code as the exiting container. It means you can easily detect from the command line if the tests succeeded or not.
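A small script committed to the repository keeps the command short and makes the result explicit in CI. A sketch (the script name is up to you; if your Compose version does not propagate the exit code, the --exit-code-from app-e2e-tests flag pins it to the test container):

#!/usr/bin/env bash
# run-e2e.sh -- wraps the long docker-compose command for local runs and CI
set -e  # a non-zero exit code from docker-compose fails the script, and the CI job with it
docker-compose up --build --abort-on-container-exit
echo "End-to-end tests passed"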

You are now ready to run end to end tests locally and on any server supporting Docker. That’s pretty good!

Tests run with only one browser for now, though. Let’s add more.

Testing on different browsers

The standalone Selenium image spins up a Selenium server with the browser you want. To run the tests on different browsers you need to update your tests’ configuration and use the selenium/hub Docker image.

The image creates a hub between your application and the standalone Selenium images. Replace the selenium container section in your docker-compose.yml as follows:

  selenium-hub:
    image: selenium/hub
    container_name: selenium-hub
    expose:
      - 4444
  chrome:
    image: selenium/node-chrome
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
  firefox:
    image: selenium/node-firefox
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444

We now have 3 containers: Chrome, Firefox and the Selenium hub.
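On the test side, the configuration now points at the hub and lists the browsers to run against. A Protractor sketch, where the spec pattern is an assumption and multiCapabilities runs the whole suite once per browser:

// protractor.conf.js (sketch)
exports.config = {
  seleniumAddress: 'http://selenium-hub:4444/wd/hub',
  baseUrl: 'http://app-serve:4200',
  multiCapabilities: [
    { browserName: 'chrome' },
    { browserName: 'firefox' },
  ],
  specs: ['./e2e/**/*.e2e-spec.ts'],
};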

All the Docker images provided by Selenium are in this repository.

Careful! There is a tricky timing effect to consider. We use dockerize to make our test container wait for the Selenium hub to be up. That is not enough, because we also need the browser nodes to be ready, or more precisely, to have registered themselves with the hub.

We could wait for the node containers to expose a port, but that does not guarantee they have registered. It is easier to wait a few seconds before starting the tests. Update your test script to wait five seconds before the tests start (add a sleep command after the dockerize call).
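For example, the test container’s command from the first Compose file could become the following once the hub is in place (the five-second pause is a guess to tune for your setup):

  app-e2e-tests:
    image: myapp
    command: dockerize -wait tcp://app-serve:4200
              -wait tcp://selenium-hub:4444
              -timeout 10s -wait-retry-interval 1s
              bash -c "sleep 5 && npm run e2e"
    depends_on:
      - app-serve
      - selenium-hub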

Now you can be sure that your tests won’t start until all the browsers are ready. Waiting is not ideal, but a few seconds are worth it. There is nothing more annoying than tests failing because of unstable automation.

Good. We have now covered the frontend part. Let’s add the backend.

Add the backend as containers or git modules

The above might seem overkill to test only the front end part of your app. But we’re aiming for much more. We want to test the whole system.

Let’s add a database and a backend to our Docker compose environment.

If you are a frontend developer you might think: “we are the frontend team, we don’t care about testing the backend”. Are you sure?

The frontend is always the last part to integrate all the other pieces. That means crunch time. Crunch time that would disappear if you could test the frontend against the backend continuously and catch errors sooner.

The technique I describe here is very easy to apply even if your backend is in a different repository.

This is what the docker-compose.yml looks like:

version: '3.1'

services:
  db:
    image: postgres
    environment:
      POSTGRES_USER: john
      POSTGRES_PASSWORD: mysecretpassword
    expose:
      - 5432

  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    image: mybackend
    command: dockerize
        -wait tcp://db:5432 -timeout 10s
        bash -c "./seed_db.sh && ./start_server.sh"
    environment:
      APP_DB_HOST: db
      APP_DB_USER: john
      APP_DB_PASSWORD: mysecretpassword
    expose:
      - 8000
    depends_on:
      - db

  app-serve:
    build: .
    image: myapp
    command: npm run serve:sw
    expose:
      - 4200

  app-e2e-tests:
    image: myapp
    command: dockerize -wait tcp://app-serve:4200
         -wait tcp://backend:8000
         -wait tcp://selenium-chrome-standalone:4444 -timeout 10s
         -wait-retry-interval 1s bash -c "npm run e2e:docker"
    depends_on:
      - app-serve
      - selenium-chrome-standalone

  selenium-chrome-standalone:
    image: selenium/standalone-chrome
    expose:
      - 4444

In this example we added a Postgres database and a container running the backend. Dockerize synchronizes the order in which the containers’ commands run.

If your system has more than one backend component, add as many containers as you need. You then have to wire the dependencies properly: pass the right hostnames as environment variables to each component, and order the startup if some components depend on others.

The Selenium tests you have written should not need any modification. You might, however, need to put test data in the database. To keep that step inside the test environment, the seeding script runs before the backend startup script (a sketch follows the list below). This way we are sure that things happen in the proper order:

  • The DB starts and is ready to accept connections
  • A script seeds the DB data
  • The backend and the frontend start, and the tests can then run
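The seeding script itself depends entirely on your stack. As a rough sketch for the Postgres setup above, assuming the backend image ships the psql client and a SQL fixtures file (test-fixtures.sql is an illustrative name):

#!/usr/bin/env bash
# seed_db.sh -- load test fixtures into the database before the server starts
set -e
export PGPASSWORD="$APP_DB_PASSWORD"
psql -h "$APP_DB_HOST" -U "$APP_DB_USER" -d postgres -f ./test-fixtures.sql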

Monorepository

If you look at the backend container you can see there is a catch. It uses an image called mybackend built from a file located at backend/Dockerfile. This implies that your backend is in the same git repository in a folder called backend. The name is just an example of course.

If your backend and frontend are in the same repository you are good. Define the Dockerfile to build your backend and adjust the startup command to what you need.

That’s all good but usually the backend is not in the same repository. Or you can have many backend components in different repositories. What do you do then?

Multiple repositories

The super clean solution is to have a CI process on each backend component repository.

The CI process for each component runs automated tests. Upon success it pushes a docker image with the component to a private Docker registry. The backend container in our docker-compose.yml file above would use this image.
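The publish step of that CI process boils down to a couple of Docker commands. A sketch, where the registry URL and the commit variable are placeholders for whatever your CI provides:

docker build -t registry.example.com/mybackend:"$CI_COMMIT_SHA" .
docker push registry.example.com/mybackend:"$CI_COMMIT_SHA"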

This solution requires a private Docker registry to store your images. You can use Docker Hub but then your images become public. If you don’t have a private registry already and don’t plan to set one up, this approach is not worth the effort.

The other solution is to use the submodules feature in git. Your backend repository becomes a virtual child of your frontend repository. You just need to add the file .gitmodules like this to your frontend repository:

[submodule "backend"]
  path = backend
  url = git@github.com:backend/repository.git
  branch = develop

Run the command git submodule update --remote, which will pull the specified branch of the backend repository into a folder called “backend”. Add as many submodules as you need if you have more than one backend component.

That’s it. Have your CI run the submodule command, and from a file system perspective you are working as in a monorepository.

If you don’t want the backend code locally while developing the frontend just don’t run the command. You’ll have an empty backend folder.

Versioning and backend/frontend incompatibilities

The two techniques above test the frontend against the latest “CI tests passed” version of your backend. That may lead to broken builds when your components are temporarily incompatible.

If they are compatible more often than not, stick to the “always test with the latest versions” approach. You’ll fix the occasional incompatibilities on the fly.

That won’t work, though, if incompatibilities are business as usual. In this case you need to manually control version updates. That is very easy to do.

You can lock the version of a component in the docker-compose.yml file or in the .gitmodules file. When pushing to the Docker registry you would tag the component image with the commit number of the corresponding code. The relevant docker-compose.yml file section becomes:

backend:
  image: backendapp:34028fc

Similarly the .gitmodules file would not target a branch head but a given commit:

[submodule "backend"]
  path = backend
  url = git@github.com:backend/repository.git
  branch = 34028fc

Bonus: version updates are versioned with your code. You can track which version was used for each build. This is useful when fixing failed builds or trying to reproduce old bugs.

We could push the approach even further: a dedicated repository that wires all your components together as Git submodules. Bumping the versions there could be a form of delivery and handover to the test/QA team.

In theory it is best to keep the latest versions of the components working together most of the time, and drop the need for manual versioning. If that is not the case, that is OK. Ignore the purists who will tell you that you are not following best practices.

If you are just starting don’t aim for the stars at first. Pick what works best for you to enjoy the benefits of automated testing right now. Then keep improving your process along the way.

Bonus on writing maintainable Selenium tests

Back to Selenium, with three important pieces of advice to help you write good UI tests.

First, avoid CSS selectors if you can. Selenium works on the DOM and can identify elements by ID, CSS selector or XPath. Use IDs as much as possible, even if you have to add them to your app code for this sole purpose. CSS and XPath selectors are shaky: as soon as your application structure changes, they break.

Second, use the Page Object approach: encapsulate your application so selectors are not used directly in the tests. Without it, any HTML/CSS change forces you to rewrite your tests with new selectors. Page Objects abstract the selectors away and turn them into user actions. Here is a great article on how to use Page Objects properly.
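Here is a minimal Page Object sketch with the raw WebDriver API (the file name, ids and flow are illustrative):

// pages/login.page.js
const { By, until } = require('selenium-webdriver');

class LoginPage {
  constructor(driver) {
    this.driver = driver;
  }

  async open(baseUrl) {
    await this.driver.get(`${baseUrl}/login`);
  }

  // Tests call "login", not "find #email, then #password, then click #submit":
  // when the markup changes, only this class needs updating.
  async login(email, password) {
    await this.driver.findElement(By.id('email')).sendKeys(email);
    await this.driver.findElement(By.id('password')).sendKeys(password);
    await this.driver.findElement(By.id('submit')).click();
    await this.driver.wait(until.elementLocated(By.id('dashboard')), 5000);
  }
}

module.exports = { LoginPage };

A test then reads await new LoginPage(driver).login('ada@example.com', 'secret') instead of a series of selector lookups.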

Third, don’t build long user journeys in your tests. If a test fails at the 50th action, the failure is going to be difficult to reproduce and fix. Create test suites that each play part of a scenario, starting from the login page. This way you are always only a few clicks away from the bug your tests catch.

Also don’t risk writing tests that rely on state from previous actions. Test suite coupling is something you want to avoid.

Let’s take a practical example. Say you are testing a SaaS application for schools. The use cases could be:

  • Create a class
  • Register kids’ and parents’ data
  • Setup the weekly plan for the class
  • Check presence/absences
  • Input grades

Along the way you will have the login process and some navigation checks.

You could write a test that goes through the whole chain as described above. That would be convenient: to register kids you need a class to exist, to check presence/absences you need a weekly plan in place, and you need kids registered before you can input grades. It’s a quick win to build one test suite that does all these things at first.

If you have nothing at the moment and want to achieve good test coverage quickly: go for it! Done is better than perfect if it allows you to catch errors in your application now.

The cleaner solution is to use a baseline scenario as the starting point for smaller test suites. In the example above, the baseline scenario would be to create a class and register the kids’ data.

Create a test suite that does exactly that: create a class and register kids’ and parents’ data. Always run it first. If this suite stops working, there is no need to go further: this version of the code should never reach end users anyway.

Then create a function that encapsulates the baseline scenario. It will duplicate the previous test suite’s code to some extent, but it gives you a one-line setup hook for all the other test suites. This is the best of both worlds: every test scenario starts from a fresh application state with minimal effort.
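As a sketch of that idea (the ClassPage page object and the driver setup are assumed to exist, along the lines of the Page Object example above):

// The baseline scenario, extracted into a plain function
async function createClassWithKids(driver) {
  const classPage = new ClassPage(driver);  // assumed page object
  await classPage.createClass('4th grade');
  await classPage.registerKid('Ada Lovelace', 'parent@example.com');
}

// Every other suite reuses it as a one-line setup hook
describe('weekly plan', () => {
  beforeAll(async () => {
    await createClassWithKids(driver);
  });

  it('lets the teacher set up the weekly plan for the class', async () => {
    // the test starts only a few clicks away from the feature under test
  });
});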

Conclusion

I hope this gave you a good insight into how you can quickly set up end-to-end tests for a complex system. Multiple components in multiple repositories should not be a barrier. Docker Compose makes it easy to put things together.

End-to-end tests are the best way to avoid crunch time. In complex systems, late delivery of one component puts a burden on the other teams: integrations are done in a rush, code quality drops, and the vicious circle continues. Testing often and catching cross-component errors early is the way out.

Selenium tests can be done quick and dirty to get going fast. That is perfectly OK. Automate things. Then improve. Remember:

Done is better than perfect any day of the year.

Thanks for reading!

#selenium #testing #docker
