Kyle M. Farish

Performance Testing: How to Load Test

Objective

The purpose of this article is to help you design and run load tests 👍

Why Learn?

As developers, we already have the foundational knowledge, and with a little effort we can expand our skillset. Consider why:

  • Your company may not be able to afford to hire performance engineers
  • There are often not enough testers compared to developers
  • The skills and knowledge help you write better, more scalable code
  • You become less dependent on others’ expertise

Why Load Testing?

While unit and integration tests ensure code is functionally correct, load testing measures its performance, which is equally important.

Only a load test can shed light on concurrency issues: whether database queries make good use of indexes instead of full table scans, where the bottlenecks are, whether the application scales efficiently, and what the application’s response time and throughput look like.

Getting Started with Apache JMeter

In this section we will design and run Apache JMeter load tests.

Environment Setup

For a test environment, either find a suitable online resource (not recommended), write your own simple service (Node, Python, whatever), or simply use the web service provided in this article.

We will be using a simple Java-based Spring Boot web service that exposes four (4) endpoints. The requirements are Java 1.8 and Apache Maven.
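You can quickly verify both prerequisites from a terminal before cloning the service:

java -version     # should report 1.8 or later
mvn -version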

git clone https://github.com/rhamedy/tinyservice4loadtest.git
cd tinyservice4loadtest
mvn spring-boot:run

Apache JMeter

Go ahead and download Apache JMeter from the download site, unzip it, and execute the following command:

apache-jmeter/bin/jmeter.sh       # Linux & macOS
apache-jmeter/bin/jmeter.bat      # Windows

Design a Test Plan for Apache JMeter

Let’s load test the following APIs:

http://localhost:8080/students/list   - [GET] List students
http://localhost:8080/students/{id}   - [GET] Get student by id
http://localhost:8080/students        - [POST] Create student
http://localhost:8080/students/{id}   - [DELETE] Delete student by id

All the sample tests for above endpoints are available in the GitHub repository.
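Before designing the test plan, it’s worth a quick sanity check that the service responds. A minimal sketch with curl (the POST body fields are a guess at the student model; adjust them to match the service):

curl http://localhost:8080/students/list
curl http://localhost:8080/students/1
curl -X POST -H "Content-Type: application/json" \
     -d '{"firstName":"Jane","lastName":"Doe"}' \
     http://localhost:8080/students
curl -X DELETE http://localhost:8080/students/1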

Step 1

Right-click on the Test Plan and pick Add > Threads (Users) > Thread Group. A test plan must have at least one Thread Group, which defines three things (stored in the saved .jmx file as shown in the sketch after this list):

  1. The number of threads (users)
  2. The ramp-up period (in seconds), i.e. how long to take to reach the maximum number of users
  3. How many times, or for how long, to run the test
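For reference, a trimmed sketch of how those three settings end up in the saved .jmx file (JMeter’s XML format; non-essential elements and attributes omitted):

<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Thread Group">
  <elementProp name="ThreadGroup.main_controller" elementType="LoopController">
    <stringProp name="LoopController.loops">1</stringProp>    <!-- 3. loop count -->
  </elementProp>
  <stringProp name="ThreadGroup.num_threads">50</stringProp>  <!-- 1. users -->
  <stringProp name="ThreadGroup.ramp_time">30</stringProp>    <!-- 2. ramp-up, seconds -->
</ThreadGroup>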

Step 2

Let’s specify what to test. Right-click on the Thread Group and select Add > Config Elements > HTTP Request Defaults.

Config Elements are useful if you wish to share configuration among one or more requests, e.g. server address, port number, token, etc.

Let’s fill out the HTTP Request Defaults config element.

Let’s also add an HTTP Header Manager via Add > Config Elements > HTTP Header Manager to set the Content-Type header.

Step 3

Let’s configure the HTTP requests. Right-click on the Thread Group, select Add > Sampler > HTTP Request, and specify:

  1. The method type, e.g. GET or POST
  2. The API path, e.g. /students/list or /students

Right-click to duplicate a request and update it.

Step 4

Listeners are used to collect results. Right-click on the Thread Group and select Add > Listener > Summary Report.

To put the test plan in words: we created an Apache JMeter test plan to load test two APIs with 50 users and a ramp-up period of 30 seconds.

If you have tinyservice4loadtest (or your own service) running, hit the play button and see the results in the Summary Report.

Run Apache JMeter Test Using CLI

The GUI is not recommended for running complex tests. Let’s open a more complex sample test (included in the GitHub repository).

The above test plan has more elements:

  • The Random Variable generates a value between x and y
  • The Loop Controller executes the content of the loop x times

To generate a meaningful report, we need a user.properties file (source) in the directory where the .jmx test is:

jmeter.reportgenerator.report_title=Apache JMeter Dashboard
jmeter.reportgenerator.overall_granularity=60000
jmeter.reportgenerator.graph.responseTimeDistribution.property.set_granularity=100
jmeter.reportgenerator.apdex_satisfied_threshold=1500
jmeter.reportgenerator.apdex_tolerated_threshold=3000
jmeter.reportgenerator.exporter.html.property.output_dir=/tmp/test
jmeter.reportgenerator.exporter.html.filters_only_sample_series=true

Run the test script using the following command (output-directory must be empty or not yet exist):

jmeter.sh -n -t loadtest.jmx -l log.jtl -e -o output-directory

In the above command, -n stands for non-GUI mode, -t points to the test script loadtest.jmx, -l specifies the results log log.jtl, and -e and -o tell JMeter to generate an HTML report into the given output directory.

The output-directory will contain a bunch of files, including an index.html that opens the graphical results of the test as shown below.

In this graph, the left side is the APDEX score and the right side is the request summary; red indicates our 404 errors and green indicates successful 200 responses.

The report also includes statistics such as the number of samples, response time, and throughput; the response time and throughput are the most important.

Lastly, it’s worth mentioning that Apache JMeter can be configured to listen to a browser’s activity and record the network requests (via the HTTP(S) Test Script Recorder).

Getting Started with Taurus

Now that we know how to use Apache JMeter to run a basic load test, let’s explore the open-source framework Taurus. In short, one of the reasons for the birth of Taurus was that Apache JMeter has a steep learning curve, and Taurus makes things a lot simpler.

Taurus is an abstraction layer (or a wrapper) on top of Apache JMeter, which means you can run an Apache JMeter script using Taurus. Go ahead and install Taurus by following its installation instructions.

Taurus scripts can be written in YAML or JSON using the following blocks.

The scenarios block is where one or more requests are defined. For each scenario, an execution is defined with properties such as the number of users, duration, ramp-up period, and so on. The modules block lets us configure the executor, which could be Apache JMeter, Selenium, and so on. Likewise, reporting configures how the report should be generated, e.g. a CSV file, live reporting in the console, or pushing the results to the BlazeMeter website.

scenarios:
  StudentList:
    requests:
      - url: http://localhost:8080/students/list
        label: Student List

execution:
  - scenario: StudentList
    concurrency: 15
    ramp-up: 2s
    hold-for: 10s

reporting:
  - module: console
  - module: final-stats
    summary: true
    percentiles: true
    test-duration: true
    dump-csv: single_scenario_single_req.csv

This load tests the /students/list API, reaching 15 users within a 2s ramp-up, holding the load for 10s, and displaying live results in the console as well as writing the final stats to a CSV file.

To run the Taurus test, simply run the command bzt test.yaml

In a Taurus test you can also configure a scenario to point to an Apache JMeter script and override execution and other parameters.

scenarios:
  JMeterExample:
    script: student_crud_jmeter.jmx
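From there the execution parameters can be overridden just as with a native Taurus scenario; a minimal sketch (the values here are illustrative):

execution:
  - scenario: JMeterExample
    concurrency: 50
    ramp-up: 30s
    hold-for: 1m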

Taurus is a very interesting framework and well worth checking out. It is also very well documented.

Conclusion

I highly recommend checking out the Apache JMeter and Taurus documentation if you wish to learn more techniques and tricks.

Thanks, Keep Visiting. If you liked this post, share it with all of your programming buddies!


#java #testing #web-development


Alex Verh


More info about load testing software would be useful.

Dotnet Script: Run C# Scripts From The .NET CLI

dotnet script

Run C# scripts from the .NET CLI, define NuGet packages inline and edit/debug them in VS Code - all of that with full language services support from OmniSharp.

NuGet Packages

Name                                  Version   Framework(s)
dotnet-script (global tool)           NuGet     net6.0, net5.0, netcoreapp3.1
Dotnet.Script (CLI as Nuget)          NuGet     net6.0, net5.0, netcoreapp3.1
Dotnet.Script.Core                    NuGet     netcoreapp3.1, netstandard2.0
Dotnet.Script.DependencyModel         NuGet     netstandard2.0
Dotnet.Script.DependencyModel.Nuget   NuGet     netstandard2.0

Installing

Prerequisites

The only thing we need to install is the .NET Core 3.1 or .NET 5.0 SDK.

.NET Core Global Tool

.NET Core 2.1 introduced the concept of global tools meaning that you can install dotnet-script using nothing but the .NET CLI.

dotnet tool install -g dotnet-script

Tool 'dotnet-script' (version '0.22.0') was successfully installed.

You can invoke the tool using the following command: dotnet-script

The advantage of this approach is that you can use the same command for installation across all platforms. .NET Core SDK also supports viewing a list of installed tools and their uninstallation.

dotnet tool list -g

Package Id         Version      Commands
---------------------------------------------
dotnet-script      0.22.0       dotnet-script

dotnet tool uninstall dotnet-script -g

Tool 'dotnet-script' (version '0.22.0') was successfully uninstalled.

Windows

choco install dotnet.script

We also provide a PowerShell script for installation.

(new-object Net.WebClient).DownloadString("https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.ps1") | iex

Linux and Mac

curl -s https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.sh | bash

If permission is denied we can try with sudo

curl -s https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.sh | sudo bash

Docker

A Dockerfile for running dotnet-script in a Linux container is available. Build:

cd build
docker build -t dotnet-script -f Dockerfile ..

And run:

docker run -it dotnet-script --version

Github

You can manually download all the releases in zip format from the GitHub releases page.

Usage

Our typical helloworld.csx might look like this:

Console.WriteLine("Hello world!");

That is all it takes and we can execute the script. Args are accessible via the global Args array.

dotnet script helloworld.csx

Scaffolding

Simply create a folder somewhere on your system and issue the following command.

dotnet script init

This will create main.csx along with the launch configuration needed to debug the script in VS Code.

.
├── .vscode
│   └── launch.json
├── main.csx
└── omnisharp.json

We can also initialize a folder using a custom filename.

dotnet script init custom.csx

Instead of main.csx which is the default, we now have a file named custom.csx.

.
├── .vscode
│   └── launch.json
├── custom.csx
└── omnisharp.json

Note: Executing dotnet script init inside a folder that already contains one or more script files will not create the main.csx file.

Running scripts

Scripts can be executed directly from the shell as if they were executables.

foo.csx arg1 arg2 arg3

OSX/Linux

Just like all scripts, on OSX/Linux you need to have a #! and mark the file as executable via chmod +x foo.csx. If you use dotnet script init to create your csx it will automatically have the #! directive and be marked as executable.

The OSX/Linux shebang directive should be #!/usr/bin/env dotnet-script

#!/usr/bin/env dotnet-script
Console.WriteLine("Hello world");

You can execute your script using dotnet script or dotnet-script, both of which allow you to pass arguments to control your script execution.

foo.csx arg1 arg2 arg3
dotnet script foo.csx -- arg1 arg2 arg3
dotnet-script foo.csx -- arg1 arg2 arg3

Passing arguments to scripts

All arguments after -- are passed to the script in the following way:

dotnet script foo.csx -- arg1 arg2 arg3

Then you can access the arguments in the script context using the global Args collection:

foreach (var arg in Args)
{
    Console.WriteLine(arg);
}

All arguments before -- are processed by dotnet script. For example, the following command-line

dotnet script -d foo.csx -- -d

will pass the -d before -- to dotnet script and enable the debug mode whereas the -d after -- is passed to script for its own interpretation of the argument.

NuGet Packages

dotnet script has built-in support for referencing NuGet packages directly from within the script.

#r "nuget: AutoMapper, 6.1.0"


Note: Omnisharp needs to be restarted after adding a new package reference
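To illustrate, a small self-contained script that references a package and uses it right away might look like this (the Foo/Bar types are made up for the example):

#r "nuget: AutoMapper, 6.1.0"

using AutoMapper;

// Hypothetical types, just to have something to map.
class Foo { public string Name { get; set; } }
class Bar { public string Name { get; set; } }

// Configure a simple mapping and use it right inside the script.
var config = new MapperConfiguration(cfg => cfg.CreateMap<Foo, Bar>());
var mapper = config.CreateMapper();

var bar = mapper.Map<Bar>(new Foo { Name = "dotnet-script" });
Console.WriteLine(bar.Name); // prints "dotnet-script"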

Package Sources

We can define package sources using a NuGet.Config file in the script root folder. In addition to being used during execution of the script, it will also be used by OmniSharp that provides language services for packages resolved from these package sources.

As an alternative to maintaining a local NuGet.Config file we can define these package sources globally either at the user level or at the computer level as described in Configuring NuGet Behaviour

It is also possible to specify package sources when executing the script.

dotnet script foo.csx -s https://SomePackageSource

Multiple packages sources can be specified like this:

dotnet script foo.csx -s https://SomePackageSource -s https://AnotherPackageSource

Creating DLLs or Exes from a CSX file

Dotnet-Script can create a standalone executable or DLL for your script.
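For example, publishing a hypothetical foo.csx with the defaults, or as a DLL into a specific folder (see the switches below):

dotnet script publish foo.csx
dotnet script publish foo.csx --dll -o ./publish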

Switch   Long switch       Description
-o       --output          Directory where the published executable should be placed. Defaults to a 'publish' folder in the current directory.
-n       --name            The name for the generated DLL (executable not supported at this time). Defaults to the name of the script.
         --dll             Publish to a .dll instead of an executable.
-c       --configuration   Configuration to use for publishing the script [Release/Debug]. Default is "Debug".
-d       --debug           Enables debug output.
-r       --runtime         The runtime used when publishing the self-contained executable. Defaults to your current runtime.

The executable can be run directly, independent of a dotnet installation, while the DLL can be run using the dotnet CLI like this:

dotnet script exec {path_to_dll} -- arg1 arg2

Caching

We provide two types of caching, the dependency cache and the execution cache, both explained in detail below. In order for either of these caches to be enabled, all NuGet package references must be specified using an exact version number. The reason for this constraint is that we need to make sure that we don't execute a script with a stale dependency graph.

Dependency Cache

In order to resolve the dependencies for a script, a dotnet restore is executed under the hood to produce a project.assets.json file from which we can figure out all the dependencies we need to add to the compilation. This is an out-of-process operation and represents a significant overhead to the script execution. So this cache works by looking at all the dependencies specified in the script(s), either in the form of NuGet package references or assembly file references. If these dependencies match the dependencies from the last script execution, we skip the restore and read the dependencies from the already generated project.assets.json file. If any of the dependencies have changed, we must restore again to obtain the new dependency graph.

Execution cache

In order to execute a script it needs to be compiled first and since that is a CPU and time consuming operation, we make sure that we only compile when the source code has changed. This works by creating a SHA256 hash from all the script files involved in the execution. This hash is written to a temporary location along with the DLL that represents the result of the script compilation. When a script is executed the hash is computed and compared with the hash from the previous compilation. If they match there is no need to recompile and we run from the already compiled DLL. If the hashes don't match, the cache is invalidated and we recompile.

You can override this automatic caching by passing --no-cache flag, which will bypass both caches and cause dependency resolution and script compilation to happen every time we execute the script.

Cache Location

The temporary location used for caches is a sub-directory named dotnet-script under (in order of priority):

  1. The path specified for the value of the environment variable named DOTNET_SCRIPT_CACHE_LOCATION, if defined and value is not empty.
  2. Linux distributions only: $XDG_CACHE_HOME if defined otherwise $HOME/.cache
  3. macOS only: ~/Library/Caches
  4. The value returned by Path.GetTempPath for the platform.
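For example, to pin the cache to a specific directory for a single run (the path is illustrative):

DOTNET_SCRIPT_CACHE_LOCATION=/tmp/csx-cache dotnet script foo.csx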

 

Debugging

The days of debugging scripts using Console.WriteLine are over. One major feature of dotnet script is the ability to debug scripts directly in VS Code. Just set a breakpoint anywhere in your script file(s) and hit F5 (start debugging).


Script Packages

Script packages are a way of organizing reusable scripts into NuGet packages that can be consumed by other scripts. This means that we now can leverage scripting infrastructure without the need for any kind of bootstrapping.

Creating a script package

A script package is just a regular NuGet package that contains script files inside the content or contentFiles folder.

The following example shows how the scripts are laid out inside the NuGet package according to the standard convention.

└── contentFiles
    └── csx
        └── netstandard2.0
            └── main.csx

This example contains just the main.csx file in the root folder, but packages may have multiple script files either in the root folder or in subfolders below the root folder.

When loading a script package we will look for an entry point script to be loaded. This entry point script is identified by one of the following.

  • A script called main.csx in the root folder
  • A single script file in the root folder

If the entry point script cannot be determined, we will simply load all the script files in the package.

The advantage of using an entry point script is that we can control the loading of other scripts from the package, as shown in the sketch below.
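For instance, a hypothetical main.csx entry point could decide exactly which of the package's scripts get loaded:

// main.csx - entry point of the script package
#load "helpers.csx"          // hypothetical script in the package root
#load "sub/more-helpers.csx" // hypothetical script in a subfolder

Console.WriteLine("script package initialized");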

Consuming a script package

To consume a script package, all we need to do is specify the NuGet package in the #load directive.

The following example loads the simple-targets package that contains script files to be included in our script.

#load "nuget:simple-targets-csx, 6.0.0"

using static SimpleTargets;
var targets = new TargetDictionary();

targets.Add("default", () => Console.WriteLine("Hello, world!"));

Run(Args, targets);

Note: Debugging also works for script packages so that we can easily step into the scripts that are brought in using the #load directive.

Remote Scripts

Scripts don't actually have to exist locally on the machine. We can also execute scripts that are made available on an http(s) endpoint.

This means that we can create a Gist on Github and execute it just by providing the URL to the Gist.

This Gist contains a script that prints out "Hello World"

We can execute the script like this

dotnet script https://gist.githubusercontent.com/seesharper/5d6859509ea8364a1fdf66bbf5b7923d/raw/0a32bac2c3ea807f9379a38e251d93e39c8131cb/HelloWorld.csx

That is a pretty long URL, so why not make it a TinyURL, like this:

dotnet script https://tinyurl.com/y8cda9zt

Script Location

A pretty common scenario is that we have logic that is relative to the script path. We don't want to require the user to be in a certain directory for these paths to resolve correctly so here is how to provide the script path and the script folder regardless of the current working directory.

public static string GetScriptPath([CallerFilePath] string path = null) => path;
public static string GetScriptFolder([CallerFilePath] string path = null) => Path.GetDirectoryName(path);

Tip: Put these methods as top level methods in a separate script file and #load that file wherever access to the script path and/or folder is needed.
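For example (the file names are illustrative):

// scriptPath.csx - shared helpers
using System.Runtime.CompilerServices;

public static string GetScriptPath([CallerFilePath] string path = null) => path;
public static string GetScriptFolder([CallerFilePath] string path = null) => Path.GetDirectoryName(path);

// app.csx - consumer
#load "scriptPath.csx"

// [CallerFilePath] is filled in at the call site, so this resolves relative
// to app.csx regardless of the current working directory.
Console.WriteLine(Path.Combine(GetScriptFolder(), "data.json"));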

REPL

This release contains a C# REPL (Read-Evaluate-Print-Loop). The REPL mode ("interactive mode") is started by executing dotnet-script without any arguments.

The interactive mode allows you to supply individual C# code blocks and have them executed as soon as you press Enter. The REPL is configured with the same default set of assembly references and using statements as regular CSX script execution.

Basic usage

Once dotnet-script starts you will see a prompt for input. You can start typing C# code there.

~$ dotnet script
> var x = 1;
> x+x
2

If you submit an unterminated expression into the REPL (no ; at the end), it will be evaluated and the result will be serialized using a formatter and printed in the output. This is a bit more interesting than just calling ToString() on the object, because it attempts to capture the actual structure of the object. For example:

~$ dotnet script
> var x = new List<string>();
> x.Add("foo");
> x
List<string>(1) { "foo" }
> x.Add("bar");
> x
List<string>(2) { "foo", "bar" }
>

Inline Nuget packages

REPL also supports inline Nuget packages - meaning the Nuget packages can be installed into the REPL from within the REPL. This is done via our #r and #load from Nuget support and uses identical syntax.

~$ dotnet script
> #r "nuget: Automapper, 6.1.1"
> using AutoMapper;
> typeof(MapperConfiguration)
[AutoMapper.MapperConfiguration]
> #load "nuget: simple-targets-csx, 6.0.0";
> using static SimpleTargets;
> typeof(TargetDictionary)
[Submission#0+SimpleTargets+TargetDictionary]

Multiline mode

Using Roslyn syntax parsing, we also support multiline REPL mode. This means that if you have an uncompleted code block and press Enter, we will automatically enter the multiline mode. The mode is indicated by the * character. This is particularly useful for declaring classes and other more complex constructs.

~$ dotnet script
> class Foo {
* public string Bar {get; set;}
* }
> var foo = new Foo();

REPL commands

Aside from the regular C# script code, you can invoke the following commands (directives) from within the REPL:

Command   Description
#load     Load a script into the REPL (same as #load usage in CSX)
#r        Load an assembly into the REPL (same as #r usage in CSX)
#reset    Reset the REPL back to initial state (without restarting it)
#cls      Clear the console screen without resetting the REPL state
#exit     Exits the REPL

Seeding REPL with a script

You can execute a CSX script and, at the end of it, drop yourself into the context of the REPL. This way, the REPL becomes "seeded" with your code - all the classes, methods or variables are available in the REPL context. This is achieved by running a script with an -i flag.

For example, given the following CSX script:

var msg = "Hello World";
Console.WriteLine(msg);

When you run this with the -i flag, Hello World is printed, REPL starts and msg variable is available in the REPL context.

~$ dotnet script foo.csx -i
Hello World
>

You can also seed the REPL from inside the REPL - at any point - by invoking a #load directive pointed at a specific file. For example:

~$ dotnet script
> #load "foo.csx"
Hello World
>

Piping

The following example shows how we can pipe data in and out of a script.

The UpperCase.csx script simply converts the standard input to upper case and writes it back out to standard output.

using (var streamReader = new StreamReader(Console.OpenStandardInput()))
{
    Write(streamReader.ReadToEnd().ToUpper());
}

We can now simply pipe the output from one command into our script like this.

echo "This is some text" | dotnet script UpperCase.csx
THIS IS SOME TEXT

Debugging

The first thing we need to do is add the following to the launch.json file, which allows VS Code to debug a running process.

{
    "name": ".NET Core Attach",
    "type": "coreclr",
    "request": "attach",
    "processId": "${command:pickProcess}"
}

To debug this script we need a way to attach the debugger in VS Code and the simplest thing we can do here is to wait for the debugger to attach by adding this method somewhere.

public static void WaitForDebugger()
{
    Console.WriteLine("Attach Debugger (VS Code)");
    while(!Debugger.IsAttached)
    {
    }
}

To debug the script when executing it from the command line we can do something like

WaitForDebugger();
using (var streamReader = new StreamReader(Console.OpenStandardInput()))
{
    Write(streamReader.ReadToEnd().ToUpper()); // <- SET BREAKPOINT HERE
}

Now when we run the script from the command line we will get

$ echo "This is some text" | dotnet script UpperCase.csx
Attach Debugger (VS Code)

This gives us a chance to attach the debugger before stepping into the script: from VS Code, select the .NET Core Attach debugger and pick the process that represents the executing script.

Once that is done we should see our breakpoint being hit.

Configuration(Debug/Release)

By default, scripts will be compiled using the debug configuration. This is to ensure that we can debug a script in VS Code as well as attaching a debugger for long running scripts.

There are however situations where we might need to execute a script that is compiled with the release configuration. For instance, running benchmarks using BenchmarkDotNet is not possible unless the script is compiled with the release configuration.

We can specify this when executing the script.

dotnet script foo.csx -c release

 

Nullable reference types

Starting from version 0.50.0, dotnet-script supports .NET Core 3.0 and all the C# 8 features. The way we deal with nullable reference types in dotnet-script is that we turn every warning related to nullable reference types into a compiler error. This means every warning between CS8600 and CS8655 is treated as an error when compiling the script.

Nullable reference types are turned off by default; we enable them using the #nullable enable compiler directive. This means existing scripts will continue to work, but we can now opt in to this new feature.

#!/usr/bin/env dotnet-script

#nullable enable

string name = null;

Trying to execute the script will result in the following error

main.csx(5,15): error CS8625: Cannot convert null literal to non-nullable reference type.

We will also see this when working with scripts in VS Code under the problems panel.


Download Details:
Author: filipw
Source Code: https://github.com/filipw/dotnet-script
License: MIT License

#dotnet  #aspdotnet  #csharp 

Rahul Hickle

Is API Load Testing Right for Your App? - DZone Integration

API load testing has been around for decades. There are lots of robust tools you can choose from, both commercial and open-source, and many of these tools have large communities and extensive documentation around how to script the most common cases. It’s a far cry from the browser-level testing space, which is relatively new and sparsely populated by comparison.
API load testing is one of the most cost-efficient ways you can get started with load testing, allowing you to scale up your load relatively cheaply while getting immediate results.
How to Get Started With API Load Testing
API load testing isn’t for every application, but depending on your test scenario, it may be the easiest way to test application performance.

#integration #performance testing #load testing #jmeter #puppeteer #api load testing #load testing difficulties

Dejah Reinger

How to Do API Testing?

Nowadays, API testing is an integral part of testing, and there are a lot of tools for it, like Postman, Insomnia, etc. Many articles cover what an API is and what API testing is, but the real question is: how do you do API testing? What do you need to validate?

Note: In this article, I am going to use postman assertions for all the examples since it is the most popular tool. But this article is not intended only for the postman tool.

Let’s directly jump to the topic.

Let’s say you have an API endpoint, for example http://dzone.com/getuserDetails/{{username}}; when you send a GET request to that URL, it returns a JSON response.

My API endpoint is http://dzone.com/getuserDetails/{{username}}

The response is in JSON format, like below:

{
  "jobTitle": "string",
  "userid": "string",
  "phoneNumber": "string",
  "password": "string",
  "email": "user@example.com",
  "firstName": "string",
  "lastName": "string",
  "userName": "string",
  "country": "string",
  "region": "string",
  "city": "string",
  "department": "string",
  "userType": 0
}

In the JSON we can see there are properties and associated values.

Now, for example, if we need the details of the user with the username ‘ganeshhegde’, we need to send a GET request to http://dzone.com/getuserDetails/ganeshhegde.

Now there are two scenarios.

1. Valid use case: the user exists in the database, and the API returns the user’s details with status code 200.

2. Invalid use case: the user is unavailable/invalid, and the API returns status code 404 with a not-found message. Both can be checked with assertions, as sketched below.
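A minimal sketch of postman assertions covering both scenarios (the userName property comes from the response above; the 404 check belongs in the invalid-user request):

// Valid user: expect 200 and the matching username
pm.test("status is 200", () => pm.response.to.have.status(200));
pm.test("returns the requested user", () => {
    const user = pm.response.json();
    pm.expect(user.userName).to.eql("ganeshhegde");
});

// Invalid user request: expect 404
pm.test("status is 404 for an unknown user", () => pm.response.to.have.status(404));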

#tutorial #performance #api #test automation #api testing #testing and qa #application programming interface #testing as a service #testing tutorial #api test

Mikel Okuneva

Where To Learn Test Programming — July 2020 Edition

What do you do when you have lots of free time on your hands? Why not learn test programming strategies and approaches?

When you’re looking for places to learn test programming, Test Automation University has you covered. From API testing through visual validation, you can hone your skills and learn new approaches on TAU.

We introduced five new TAU courses from April through June, and each of them can help you expand your knowledge, learn a new approach, and improve your craft as a test automation engineer; two of them are covered in detail below.

These courses add to the other three courses we introduced in January through March 2020:

  • IntelliJ for Test Automation Engineers (3 hrs 41 min)
  • Cucumber with JavaScript (1 hr 22 min)
  • Python Programming (2 hrs)

Each of these courses can give you a new set of skills.

Let’s look at each in a little detail.

Mobile Automation With Appium in JavaScript

Orane Findley teaches Mobile Automation with Appium in JavaScript. Orane walks through all the basics of Appium, starting with what it is and where it runs.


“Appium is an open-source tool for automating native, web, and hybrid applications on different platforms.”

In the introduction, Orane describes the course parts:

  • Setup and Dependencies — installing Appium and setting up your first project
  • Working with elements by finding them, sending values, clicking, and submitting
  • Creating sessions, changing screen orientations, and taking screenshots
  • Timing, including TimeOuts and Implicit Waits
  • Collecting attributes and data from an element
  • Selecting and using element states
  • Reviewing everything to make it all make sense

The first chapter, broken into five parts, gets your system ready for the rest of the course. You’ll download and install a Java Developer Kit, a stable version of Node.js, Android Studio and Emulator (for a mobile device emulator), Visual Studio Code for an IDE, Appium Server, and a sample Appium Android Package Kit. If you get into trouble, you can use the Test Automation University Slack channel to get help from Orane. Each subchapter contains the links to get to the proper software. Finally, Orane has you customize your configuration for the course project.

Chapter 2 deals with elements and screen interactions for your app. You can find elements on the page, interact with those elements, and scroll the page to make other elements visible. Orane breaks the chapter into three distinct subchapters so you can become competent with each part of finding, scrolling, and interacting with the app. The quiz comes at the end of the third subchapter.

The remaining chapters each deal with specific bullets listed above: sessions and screen capture, timing, element attributes, and using element states. The final summary chapter ensures you have internalized the key takeaways from the course. Each of these chapters includes its quiz.

When you complete this course successfully, you will have both a certificate of completion and the code infrastructure available on your system to start testing mobile apps using Appium.

Selenium WebDriver With Python

Andrew Knight, who blogs as The Automation Panda, teaches the course on Selenium WebDriver with Python. As Andrew points out, Python has become a popular language for test automation. If you don’t know Python at all, he points you to Jess Ingrassellino’s great course, Python for Test Programming, also on Test Automation University.


In the first chapter, Andrew has you write your first test. Not in Python, but Gherkin. If you have never used Gherkin syntax, it helps you structure your tests in pseudocode that you can translate into any language of your choice. Andrew points out that it’s important to write your test steps before you write test code — and Gherkin makes this process straightforward.


The second chapter goes through setting up pytest, the test framework Andrew uses. He assumes you already have Python 3.8 installed. Depending on your machine, you may need to do some work (Macs come with Python 2.7.16 installed, which is old and won’t work). Andrew also goes through the pip package manager to install pipenv. He gives you a GitHub link to his test code for the project. And, finally, he creates a test using the Gherkin code as comments to show you how a test runs in pytest.

In the third chapter, you set up Selenium Webdriver to work with specific browsers, then create your test fixture in the pytest. Andrew reminds you to download the appropriate browser driver for the browser you want to test — for example, chromedriver to drive Chrome and geckodriver to drive Firefox. Once you use pipenv to install Selenium, you begin your test fixture. One thing to remember is to call an explicit quit for your webdriver after a test.

Chapter 4 goes through page objects, and how you abstract page object details to simplify your test structure. Chapter 5 goes through element locator structures and how to use these in Python. And, in Chapter 6, Andrew goes through some common webdriver calls and how to use them in your tests. These first six chapters cover the basics of testing with Python and Selenium.

Now that you have the basics down, the final three chapters review some advanced ideas: testing with multiple browsers, handling race conditions, and running your tests in parallel. This course gives you specific skills around Python and Selenium on top of what you can get from the Python for Test Programming course.

#tutorial #performance #testing #automation #test automation #automated testing #visual testing #visual testing best practices #testing tutorial

Carmen Grimes

How to Test Mobile App Performance: 3 Key Components - DZone Performance

You’ve probably interacted with an app on your phone or tablet that’s slow, takes a long time to load, freezes, or even crashes on you altogether.

Frustrating, right?

On the flip side, you can probably think of an app that you love to use because from day one, it’s never given you any trouble.

Or maybe you never paid any mind to an app that works quickly, because isn’t that how it’s supposed to be?

So, what causes one app to be crash-prone and another, fast, and reliable?

Whether an app has good or bad performance depends on three factors: the backend, the network, and the app itself running on the device.

A developer or mobile tester can measure the performance of an application in different scenarios.

For example, they can test for when there’s a concurrency of users on the app at the same time, on different devices (which vary in hardware resources and screen sizes), and multiple networks such as 3G, 4G, Wifi, and more.

The reality is that many variables affect the performance of a mobile application. Moreover, a user may have a very bad experience with your app and the cause might not even have anything to do with the code or its implementation.

But, by running performance tests for each of these three factors, you’ll be able to identify problems and optimize your app for the best user experience possible.

Keep reading as we’ll cover the different types of tests for each factor, what to measure, and what tools are available to help you along the way.

1st Mobile Performance Factor: The Backend

A mobile app’s backend architecture is generally based on an application server, a web server, and a database.

When it comes to the backend, the things related to performance that are important to know when an app is under load are the server’s response times, database query times, and the server’s resource usage.

Using this information, it’s easier to detect issues such as:

  • High server response times
  • Bottlenecks or breakpoints in the database and application server resources
  • Poor implementation of escalation policies

So what kind of tests are normally run to check the app’s backend performance? Load tests.

This is when you simulate load on the backend in different ways, whether it be through stress testing, peak testing, endurance testing, load testing, etc.

In general, the objective of these tests is to understand how the backend systems of an app behave and handle a certain volume of concurrent users.

Several tools allow you to load test your mobile app. The most commonly used ones include:

Apache JMeter – the number one open-source load testing tool

Gatling – a developer-friendly, open-source load testing tool with scripts written in Scala

BlazeMeter – a cloud performance testing platform that scales your JMeter or Gatling tests for a greater number of concurrent users

2nd Mobile Performance Factor: The Network

With regards to the network that the device is connected to, there are two key things to measure: latency and bandwidth.

  • Latency is the time that elapses when information is sent on the network (measured in milliseconds).
  • Bandwidth is the maximum capacity (the amount of data) that can be transmitted through the network (measured in bits per second).

For mobile performance, the lower the latency and the higher the bandwidth, the better.
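To make these concrete: fetching a 1 MB response over a 4 Mbps connection needs roughly 2 seconds of transfer time (8 megabits ÷ 4 Mbps) on top of the round-trip latency, before the app can even render the data.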

An app’s performance can vary depending on, for example, whether it’s connected to a 3G network or a 4G network, and unfortunately, this is beyond an app developer or tester’s control.

But, it is possible to incorporate the network during the mobile app performance testing process, simulating the different types of networks and measuring their impact on the response times, both on the server-side and the client-side.

#tutorial #performance #mobile apps #load testing #mobile testing #mobile app performance #client side performance