What is the best practice to use sequential test-cases?

I know that, in test automation, we have to avoid sequential test-cases (see https://stackoverflow.com/questions/5215285/how-to-pre-define-the-running-sequences-of-junit-test-cases), so that the order in which the test-cases run does not matter.

However, I believe that in some cases sequential test-cases are unavoidable:

1. Consider a scenario in which a user needs to take some previous steps in order to complete a final goal. For example, a user needs to be logged in so that he can make a purchase:

   Given User is logged in
   When  User adds an item into his basket
   And   User completes his purchase
   Then  He receives an email

So, in the above scenario, each part (Given, When, And, or Then) is a separate test-case, but the order of the test-cases is still crucial.

2. Also, the JUnit team provides an annotation, @FixMethodOrder(MethodSorters.NAME_ASCENDING), for such usage, but I am not sure when we are allowed to use it.
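For reference, a minimal sketch of how that annotation is applied (the class and method names here are illustrative):

import org.junit.FixMethodOrder;
import org.junit.Test;
import org.junit.runners.MethodSorters;

// JUnit 4 runs these test methods in ascending lexicographic order of their names.
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class PurchaseFlowTest {

    @Test
    public void step1_userLogsIn() { /* ... */ }

    @Test
    public void step2_userAddsItemToBasket() { /* ... */ }

    @Test
    public void step3_userCompletesPurchase() { /* ... */ }

    @Test
    public void step4_userReceivesEmail() { /* ... */ }
}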

So, how do you write independent test-cases in end-to-end testing?

Running Android Tests in Docker

In this article, you'll learn how to run your automated Android mobile application tests in a Docker container.

As part of the project I'm currently engaged in, my team is writing automated tests for an application which has a web interface as well as two mobile apps, one for Android and one for iOS. We've built a test automation pipeline which runs our tests against our application to ensure the changes we're making don't impact other tests. Yes, we're testing our tests. One of the challenges we ran into was ensuring we could verify, in a timely fashion, that our Android and iOS tests still worked. The solution we eventually found worked best was to create our own emulators inside a Docker container to test on.

The Setup

If you’re not familiar with Docker, check out some of our many excellent posts on it. It also works great for testing. Essentially, instead of spinning up a web application inside a container, we wanted to spin up an Android emulator, running our app. We would then connect to the container, the same way we would a tethered physical device, a local emulator, or something in the cloud. This made it incredibly cheap (and fast) to parallelize our testing. All we needed to do was launch multiple Docker containers, and then connect using ADB to each one.
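As a rough sketch of what that looks like (the image and container names here are hypothetical):

# launch two emulator containers (building the image is covered below)
docker run -d --name emu1 android-test-image
docker run -d --name emu2 android-test-image

# look up a container's IP and attach ADB to it, as if it were a device
EMU1_IP=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' emu1)
adb connect ${EMU1_IP}:5555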

The Dockerfile

Unfortunately, building the Docker container wasn't straightforward. We needed a Docker container with the Android SDK installed, an Android VM created, and Appium. Because I had gotten everything running just fine locally (Ubuntu), I decided to start with a similar base image (maven:3.5.2-jdk-8). We then installed the SDK, set up the emulator, and installed Appium. Finally, we copied over the APK and test suite. We soon discovered that the base image didn't have KVM set up, so we needed to enable this. Not an easy task, and one which unfortunately required a manual step. As a result, I decided to split this into two Dockerfiles: a new base one with KVM set up, and one with all of the installs.

DockerfileKVM

This Docker container was relatively straightforward, though not entirely simple. I built it once, uploaded it to my local Docker repository, and then built on top of it for the Docker Android container. All I did was install KVM and manually copy over the proper lib modules. The Dockerfile looked like:

FROM maven:3.5.2-jdk-8
#debian based

RUN apt-get update -qqy \
    && apt-get -qqy install libglu1 qemu-kvm libvirt-dev virtinst bridge-utils msr-tools kmod \
    && wget -q http://security.ubuntu.com/ubuntu/pool/main/c/cpu-checker/cpu-checker_0.7-0ubuntu7_amd64.deb \
    && dpkg -i cpu-checker_0.7-0ubuntu7_amd64.deb \
    && apt-get install -f \
    && kvm-ok


From there, I built and tagged the Docker image. I then ran it, logged in (-it), and copied over my machine's lib modules (/lib/modules). Please note, your mileage might vary based on your machine's kernel and version. Then I pushed this image into my local Docker repository.
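Roughly, those steps looked like the following (the local registry address is an assumption, and the kernel modules must match your host):

# build and tag the KVM base image
docker build -f DockerfileKVM -t kvm:maven-3.5.2-jdk-8 .

# run it and copy the host's kernel modules into the container
docker run -d -it --name kvm-base kvm:maven-3.5.2-jdk-8 bash
docker cp /lib/modules kvm-base:/lib/modules

# persist the manual change, then tag and push to the local registry
docker commit kvm-base kvm:maven-3.5.2-jdk-8
docker tag kvm:maven-3.5.2-jdk-8 localhost:5000/kvm:maven-3.5.2-jdk-8
docker push localhost:5000/kvm:maven-3.5.2-jdk-8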

DockerfileAndroid

This Docker container had a lot more steps, but ultimately wasn't too much more complex. I set up the Android SDK, loaded up my emulator, installed Appium, and copied over our APK and tests. The Dockerfile looked like this:

# the base image is the tag we gave to DockerfileKVM
FROM kvm:maven-3.5.2-jdk-8
# debian based

ENV UDIDS=""

#=====================
# Install android sdk
#=====================
ARG ANDROID_SDK_VERSION=4333796
ENV ANDROID_SDK_VERSION=$ANDROID_SDK_VERSION
ARG ANDROID_PLATFORM="android-25"
ARG BUILD_TOOLS="26.0.0"
ENV ANDROID_PLATFORM=$ANDROID_PLATFORM
ENV BUILD_TOOLS=$BUILD_TOOLS

# install adk
RUN mkdir -p /opt/adk \
    && wget -q https://dl.google.com/android/repository/sdk-tools-linux-${ANDROID_SDK_VERSION}.zip \
    && unzip sdk-tools-linux-${ANDROID_SDK_VERSION}.zip -d /opt/adk \
    && rm sdk-tools-linux-${ANDROID_SDK_VERSION}.zip \
    && wget -q https://dl.google.com/android/repository/platform-tools-latest-linux.zip \
    && unzip platform-tools-latest-linux.zip -d /opt/adk \
    && rm platform-tools-latest-linux.zip \
    && yes | /opt/adk/tools/bin/sdkmanager --licenses \
    && /opt/adk/tools/bin/sdkmanager "emulator" "build-tools;${BUILD_TOOLS}" "platforms;${ANDROID_PLATFORM}" "system-images;${ANDROID_PLATFORM};google_apis;armeabi-v7a" \
    && echo no | /opt/adk/tools/bin/avdmanager create avd -n "Android" -k "system-images;${ANDROID_PLATFORM};google_apis;armeabi-v7a" \
    && mkdir -p ${HOME}/.android/ \
    && ln -s /root/.android/avd ${HOME}/.android/avd \
    && ln -s /opt/adk/tools/emulator /usr/bin \
    && ln -s /opt/adk/platform-tools/adb /usr/bin
ENV ANDROID_HOME /opt/adk

#====================================
# Install latest nodejs, npm, appium
#====================================
ARG NODE_VERSION=v8.11.3
ENV NODE_VERSION=$NODE_VERSION
ARG APPIUM_VERSION=1.9.1
ENV APPIUM_VERSION=$APPIUM_VERSION

# install appium
RUN wget -q https://nodejs.org/dist/${NODE_VERSION}/node-${NODE_VERSION}-linux-x64.tar.xz \
    && tar -xJf node-${NODE_VERSION}-linux-x64.tar.xz -C /opt/ \
    && ln -s /opt/node-${NODE_VERSION}-linux-x64/bin/npm /usr/bin/ \
    && ln -s /opt/node-${NODE_VERSION}-linux-x64/bin/node /usr/bin/ \
    && ln -s /opt/node-${NODE_VERSION}-linux-x64/bin/npx /usr/bin/ \
    && npm install -g appium@${APPIUM_VERSION} --allow-root --unsafe-perm=true \
    && ln -s /opt/node-${NODE_VERSION}-linux-x64/bin/appium /usr/bin/

EXPOSE 4723 2251 5555

# copy in the entrypoint script (shown below) so CMD can find it
COPY docker-entrypoint.sh /usr/local/bin/
CMD ["docker-entrypoint.sh"]


What you'll notice is that there was one last piece to this puzzle: the docker-entrypoint, which we used to launch the emulator and get everything connected with Appium. Luckily, this was pretty easy once you knew what to do.

docker-entrypoint.sh

#!/bin/bash

# launch the emulator in the background
/opt/adk/tools/emulator -avd Android -no-audio -no-window &

# wait for the emulator to register with ADB, then grab its udid
while [ -z "$udid" ]; do
    udid=$(adb devices | grep emulator | cut -f 1)
done

# run appium in the foreground so the container stays alive
exec appium -p 4723 -bp 2251 --default-capabilities '{"udid":"'${udid}'"}'


And that was it; all we needed to do was launch our tests.

Test Execution

Without getting into the specifics of the project, we could do that either locally or through another Docker container, but the simplest way was definitely locally, using Maven. We could do this by simply specifying the Docker container's IP, the same way you would if you had the emulator running locally or a device tethered. Swapping ports is simple enough as well.
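For example, a sketch (the appium.server.url property is a made-up name for however your test suite is told where Appium lives):

# find the container's IP and point the Maven test run at its Appium server
CONTAINER_IP=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' android-emu)
mvn test -Dappium.server.url=http://${CONTAINER_IP}:4723/wd/hub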

Final Thoughts

We did run into one last issue: trying to run all of this in AWS. We've been trying to keep our pipelines as fast-moving as possible, and a large portion of that means dynamically provisioning machines in AWS when we need them, which greatly increased our throughput. Unfortunately, AWS machines don't support nested KVM (yes, I'm aware of bare metal instances, but we wanted to avoid that cost increase), which meant a large portion of this couldn't be used in that fashion. Stay tuned for the next blog post, in which I get into the workaround for this issue.

Find this useful? Got stuck? As always, please leave some comments below.


AndroidX + Truth + Guava test compile issue

I have an Android library (called api) gradle module as part of a larger project. I just migrated the whole project to AndroidX. I now have this error when running instrumentation tests on the api lib:

> Task :api:checkDebugAndroidTestDuplicateClasses FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':api:checkDebugAndroidTestDuplicateClasses'.
> 1 exception was raised by workers:
  java.lang.RuntimeException: java.lang.RuntimeException: Duplicate class com.google.common.util.concurrent.ListenableFuture found in modules jetified-guava-25.1-android.jar (com.google.guava:guava:25.1-android) and listenablefuture-1.0.jar (com.google.guava:listenablefuture:1.0)

If I check the runtime class path for the debugAndroidTest variant:

./gradlew api:dependencies --configuration debugAndroidTestRuntimeClasspath | grep --color -E "guava|$"

I get this output. I can see the problem:

------------------------------------------------------------
Project :api

debugAndroidTestRuntimeClasspath - Resolved configuration for runtime for variant: debugAndroidTest
+--- project :test_utils
| +--- project :core
...
| +--- project :api (*)
| +--- com.google.android.material:material:1.1.0-alpha03
| | +--- androidx.annotation:annotation:1.0.1 -> 1.1.0-alpha01
| | +--- androidx.appcompat:appcompat:1.1.0-alpha01
| | | +--- androidx.annotation:annotation:1.0.0 -> 1.1.0-alpha01
| | | +--- androidx.core:core:1.1.0-alpha01 -> 1.1.0-alpha03
| | | | +--- com.google.guava:listenablefuture:1.0 // <------ GUAVA
| | | | +--- androidx.annotation:annotation:1.0.1 -> 1.1.0-alpha01
...
+--- com.google.truth:truth:0.42
| +--- com.google.guava:guava:25.1-android // <------ MORE GUAVA
| | +--- com.google.code.findbugs:jsr305:3.0.2
| | +--- com.google.errorprone:error_prone_annotations:2.1.3 -> 2.3.1
| | +--- com.google.j2objc:j2objc-annotations:1.1
| | \--- org.codehaus.mojo:animal-sniffer-annotations:1.14
| +--- org.checkerframework:checker-compat-qual:2.5.3
| +--- org.checkerframework:checker-qual:2.5.3
| +--- junit:junit:4.12 (*)
| +--- com.googlecode.java-diff-utils:diffutils:1.3.0
| +--- com.google.auto.value:auto-value-annotations:1.6.2
| \--- com.google.errorprone:error_prone_annotations:2.3.1
...

AndroidX core depends on the new "ListenableFuture-only" build of Guava, and Truth depends on the full Guava 25.

I think I understand the underlying problem with ListenableFuture: https://groups.google.com/forum/#!topic/guava-announce/Km82fZG68Sw

What is the right solution here?

I don't want to exclude Guava from Truth entirely (otherwise Truth won't compile):

androidTestImplementation("com.google.truth:truth:0.42") {
    exclude group: 'com.google.guava', module: 'guava'
}

Can I exclude it and force an update to Guava 27 by making it a first-level dependency, like this:

androidTestImplementation("com.google.truth:truth:$rootProject.ext.truthVersion") {
    exclude group: 'com.google.guava', module: 'guava'
}
// must add guava as top level dependency to force Truth to use latest version
androidTestImplementation 'com.google.guava:guava:27.0.1-android'

If I do this, should I be using the android or JRE version of guava?

Side question:

Why don't I see the Guava dependency when looking at the compile classpath? The error is a compile-time error, not a runtime error:

./gradlew api:dependencies --configuration debugAndroidTestCompileClasspath | grep --color -E "guava|$"

Resulting deps:

debugAndroidTestCompileClasspath - Resolved configuration for compilation for variant: debugAndroidTest
+--- project :test_utils // <----------- why are test_utils deps not listed here???
...
+--- com.google.truth:truth:0.42
| +--- com.google.guava:guava:25.1-android // <------ GUAVA
| | +--- com.google.code.findbugs:jsr305:3.0.2
| | +--- org.checkerframework:checker-compat-qual:2.0.0 -> 2.5.3
| | +--- com.google.errorprone:error_prone_annotations:2.1.3 -> 2.3.1
| | +--- com.google.j2objc:j2objc-annotations:1.1
| | \--- org.codehaus.mojo:animal-sniffer-annotations:1.14
| +--- org.checkerframework:checker-compat-qual:2.5.3
| +--- org.checkerframework:checker-qual:2.5.3
| +--- junit:junit:4.12 (*)
| +--- com.googlecode.java-diff-utils:diffutils:1.3.0
| +--- com.google.auto.value:auto-value-annotations:1.6.2
| \--- com.google.errorprone:error_prone_annotations:2.3.1
...
+--- com.google.truth:truth:{strictly 0.42} -> 0.42 (c)
+--- com.google.guava:guava:{strictly 25.1-android} -> 25.1-android (c) // <--------- why is this listed again here at top level?


Performance Testing: How to Load Test


Objective

The purpose of this article is to help you design and run load tests 👍

Why Learn?

As developers, we already have the foundational knowledge, and with a little effort we can expand our skillset:

  • Your company cannot afford to hire performance engineers
  • There are not enough testers compared to developers
  • The skills & knowledge could help you write better, more scalable code
  • You become less dependent on others' expertise

Why Load Testing?

While unit & integration tests ensure code is functionally correct, load testing measures its performance, which is equally important.

Only a load test can shed light on concurrency issues: whether the database queries make good use of indexes instead of full table scans, where the bottlenecks are, whether the application scales efficiently, what the application's response time and throughput are, and so on.

Getting Started with Apache JMeter

In this section we will design and run Apache JMeter load tests.

Environment Setup

For the environment, either find a suitable online resource (not recommended), come up with your own simple service (Node, Python, whatever), or simply use the web service provided in this article.

We will be using a simple Java-based Spring Boot web service that exposes four (4) endpoints. The requirements are Java 1.8 and Apache Maven.

git clone https://github.com/rhamedy/tinyservice4loadtest.git
cd tinyservice4loadtest
mvn spring-boot:run
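Once the service is up, a quick sanity check against one of its endpoints (listed below) might look like:

curl http://localhost:8080/students/list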

Apache JMeter

Please go ahead and install Apache JMeter from the download site, unzip it, and execute the following command:

apache-jmeter/bin/jmeter.sh       # Linux & macOS
apache-jmeter/bin/jmeter.bat      # Windows

Design a Test Plan for Apache JMeter

Let's load test the following APIs:

http://localhost:8080/students/list  - [GET] List students
http://localhost:8080/students/id    - [GET] Get student by id
http://localhost:8080/students       - [POST] Create student
http://localhost:8080/students/id    - [DELETE] Delete student by id

All the sample tests for the above endpoints are available in the GitHub repository.

Step 1

Right-click on the test plan and pick Add > Threads (Users) > Thread Group. A test plan must have at least one Thread Group. A Thread Group defines:

  1. The Number of Threads (users)
  2. The Ramp-Up period (in seconds): how long it takes to reach the max users
  3. How many times, or for how long, to run the test

Step 2

Let's specify what to test. Right-click on the Thread Group and select the Add > Config Elements > HTTP Request Defaults option.

Config Elements are useful if you wish to share configuration among one or more requests, e.g. server address, port number, token, etc.

Let's fill out the HTTP Request Defaults config element.

Also, let's add an HTTP Header Manager via Add > Config Elements > HTTP Header Manager for the Content-Type header.

Step 3

Let's configure the HTTP requests. Right-click on the Thread Group and select Add > Sampler > HTTP Request. For each request, specify:

  1. The method type, e.g. GET or POST
  2. The API path, e.g. /students/list or /students

Right-click to duplicate a request and update it.

Step 4

Listeners are used to collect results. Right-click on the Thread Group and select the Add > Listener > Summary Report option.

To put the test plan in words: we create an Apache JMeter test plan to load test two APIs with 50 users and a ramp-up, for a duration of 30 seconds.

If you have tinyservice4loadtest (or your own service) running, let's hit the play button and see the results in the Summary Report.

Run Apache JMeter Test Using CLI

The GUI is not recommended for running complex tests. Let's open a more complex sample test from here.

The above test plan has more elements:

  • The Random Variable generates a value between x and y
  • The Loop Controller executes the content of the loop x times

To generate a meaningful report, we need a user.properties file (Source) in the same directory as the .jmx test:

jmeter.reportgenerator.report_title=Apache JMeter Dashboard
jmeter.reportgenerator.overall_granularity=60000
jmeter.reportgenerator.graph.responseTimeDistribution.property.set_granularity=100
jmeter.reportgenerator.apdex_satisfied_threshold=1500
jmeter.reportgenerator.apdex_tolerated_threshold=3000
jmeter.reportgenerator.exporter.html.property.output_dir=/tmp/test-report
jmeter.reportgenerator.exporter.html.filters_only_sample_series=true

Run the test script using the following command (output-directory should be empty):

jmeter.sh -n -t loadtest.jmx -l log.jtl -e -o output-directory

In the above command, -n stands for no GUI, -t points to the test script loadtest.jmx, -l writes the log to log.jtl, and -e and -o are for report generation and its output directory.

The output-directory will contain a bunch of files, including an index.html that opens the graphical results of the test as shown below.

In this graph, the left side is APDEX and the right side is the request summary. The red indicates our 404 errors and the green indicates successful (200) requests.

The report also shows numbers for the number of samples, response time, and throughput; the response time and throughput matter most.

Lastly, it's worth mentioning that Apache JMeter can be configured to listen to a browser's activity and capture its network requests.

Getting Started with Taurus

Now that we know how to use Apache JMeter to run a basic load test, let's explore the open-source framework Taurus. In short, one of the reasons for the birth of Taurus was that Apache JMeter has a steep learning curve, and Taurus makes things a lot simpler.

Taurus is an abstraction layer (or a wrapper) on top of Apache JMeter, which means you can run an Apache JMeter script using Taurus. So go ahead and install Taurus using its installation instructions.
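For instance, since Taurus is distributed as a Python package, one common install route (per its docs) is:

pip install bzt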

Taurus scripts can be written in YAML or JSON using the following blocks.

The scenarios block is basically where one or more requests are defined. For each scenario, an execution is defined with properties such as number of users, duration, ramp-up period, and so on. The modules block allows us to configure the executor, which could be Apache JMeter, Selenium, and so on. Likewise, reporting allows configuring how the report should be generated, e.g. CSV, live reporting in the console, or pushing the results to the BlazeMeter website.

scenarios:
  StudentList:
    requests:
      - url: http://localhost:8080/students/list
        label: Student List
execution:
  - scenario: StudentList
    concurrency: 15
    ramp-up: 2s
    hold-for: 10s
reporting:
  - module: console
  - module: final-stats
    summary: true
    percentiles: true
    test-duration: true
    dump-csv: single_scenario_single_req.csv

This test loads the /students/list API, reaching 15 users within 2s (ramp-up), holding for 10s, and displays live results in the console as well as dumping them to a CSV file.

To run the Taurus test, simply run the command bzt test.yaml

In a Taurus test you can also configure a scenario to point to an Apache JMeter script and override execution and other parameters.

scenarios:
  JMeterExample:
    script: student_crud_jmeter.jmx
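You could then, for example, override the load parameters without touching the .jmx file; a sketch reusing the execution block from earlier (the values are illustrative):

execution:
  - scenario: JMeterExample
    concurrency: 25
    ramp-up: 5s
    hold-for: 30s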

Taurus seems to be a very interesting framework and is worth checking out. It is very well documented here.

Conclusion

I highly recommend checking out the Apache JMeter and Taurus documentation if you wish to learn more techniques and tricks.

Thanks, Keep Visiting. If you liked this post, share it with all of your programming buddies!
