Experimentation at Scale: Fuel Your Application’s Growth Through A/B Testing

You’ve worked long hours, shipped your application, and achieved moderate traction. Now what? See how you can use the “Build, Measure, Learn” loop to identify growth opportunities. In this talk, we will explore the core mechanics of A/B testing and how it can be used to grow your user base. We’ll also discuss how to adjust your experimentation strategy to scale alongside your growing teams and codebases.

#testing #developer #programming

Tamia Walter

2020-08-06

Testing Microservices Applications

The shift towards microservices and modular applications makes testing more important and more challenging at the same time. You have to make sure that the microservices running in containers perform well and as intended, but you can no longer rely on conventional testing strategies to get the job done.

This is where new testing approaches are needed. Testing your microservices applications requires the right approach, a suitable set of tools, and close attention to detail. This article will guide you through the process of testing your microservices and discuss the challenges you will have to overcome along the way. Let’s get started, shall we?

A Brave New World

Traditionally, testing a monolith application meant configuring a test environment and setting up all of the application components in a way that matched the production environment. It took time to set up the testing environment, and there were a lot of complexities around the process.

Testing also required the application to run in full. It is not possible to test monolith apps on a per-component basis, mainly because a base layer of code ties everything together, and the app is designed to run as a whole to work properly.

Microservices running in containers offer one particular advantage: universal compatibility. You don’t have to match the testing environment with the deployment architecture exactly, and you can get away with testing individual components rather than the full app in some situations.

Of course, you will have to embrace the new cloud-native approach across the pipeline. Rather than creating critical dependencies between microservices, you need to treat each one as a semi-independent module.

The only monolith or centralized portion of the application is the database, but this too is an easy challenge to overcome. As long as you have a persistent database running on your test environment, you can perform tests at any time.

Keep in mind that there are additional things to focus on when testing microservices.

  • Microservices rely on network communications to talk to each other, so network reliability and requirements must be part of the testing.
  • Automation and infrastructure elements are now defined as code, and you have to make sure that they also run properly when microservices are pushed through the pipeline.
  • While containerization is universal, you still have to pay attention to service-specific dependencies and create a testing strategy that allows those dependencies to be included.

Test containers are the method of choice for many developers. Unlike monolith apps, which let you rely on stubs and mocks for testing, microservices need to be tested in test containers. Many CI/CD pipelines actually integrate production microservices as part of the testing process.

Contract Testing as an Approach

As mentioned before, there are many ways to test microservices effectively, but the one approach that developers now rely on is contract testing. Loosely coupled microservices can be tested effectively and efficiently with contract testing, mainly because this approach focuses on contracts; in other words, on how components or microservices communicate with each other.

Syntax and semantics define how components communicate with each other. By defining both in a standardized way, and by testing microservices on their ability to generate the right message formats and meet behavioral expectations, you can rest assured that the microservices will behave as intended when deployed.
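As a minimal sketch of the idea (the field names and helper below are illustrative, not the API of any particular contract-testing framework such as Pact), a contract check can be as simple as validating that a service’s messages carry the agreed fields with the agreed types:

```python
import json

# Hypothetical contract: a "user-service" must publish user-created
# events with these fields and types (names are illustrative only).
USER_CREATED_CONTRACT = {
    "userId": str,
    "email": str,
    "createdAt": str,
}

def satisfies_contract(message: str, contract: dict) -> bool:
    """Check that a JSON message contains every contracted field
    with the expected type (extra fields are tolerated)."""
    payload = json.loads(message)
    return all(
        field in payload and isinstance(payload[field], expected_type)
        for field, expected_type in contract.items()
    )

# Producer-side check: the service's actual output is validated
# against the consumer's expectations before deployment.
event = json.dumps({"userId": "u-42", "email": "a@b.com", "createdAt": "2020-08-06"})
assert satisfies_contract(event, USER_CREATED_CONTRACT)
assert not satisfies_contract('{"userId": "u-42"}', USER_CREATED_CONTRACT)
```

Real contract-testing tools add consumer/provider workflows on top, but the core assertion is exactly this shape: agreed message format in, pass/fail out.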

Ways to Test Microservices

It is easy to fall into the trap of making testing microservices complicated, but there are ways to avoid this problem. Testing microservices doesn’t have to be complicated at all when you have the right strategy in place.

There are several ways to test microservices too, including:

  • Unit testing: allows developers to test microservices in a granular way. It doesn’t limit testing to individual microservices; developers can take an even more granular approach, such as testing individual features or runtimes.
  • Integration testing: handles the testing of microservices in interaction. Microservices still need to work with each other when they are deployed, and integration testing is a key process in making sure that they do.
  • End-to-end testing: as the name suggests, tests the microservices as a complete app. This type of testing covers the features, UI, communications, and other components that make up the app.

What’s important to note is that these testing approaches allow for asynchronous testing. After all, asynchronous development is what makes developing microservices so appealing in the first place. By allowing for asynchronous testing, you can also make sure that components or microservices can be updated independently of one another.
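At the unit level, a test can exercise one service’s business logic in complete isolation, with no other service running. A sketch, using a made-up discount function as the unit under test:

```python
import unittest

# Hypothetical business logic inside a single microservice -- the kind of
# granular unit that can be tested without any other service running.
def apply_discount(price: float, user_type: int) -> float:
    """Premium users (user_type == 1) get 10% off; negative prices are rejected."""
    if price < 0:
        raise ValueError("price must be non-negative")
    return round(price * 0.9, 2) if user_type == 1 else price

class ApplyDiscountTest(unittest.TestCase):
    def test_premium_user_gets_discount(self):
        self.assertEqual(apply_discount(100.0, 1), 90.0)

    def test_regular_user_pays_full_price(self):
        self.assertEqual(apply_discount(100.0, 0), 100.0)

    def test_negative_price_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(-1.0, 0)

# Run the suite programmatically so the block also works outside a test runner.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=2).run(suite)
```

Integration and end-to-end tests then layer real network calls and deployed containers on top of units like this one.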

#blog #microservices #testing #caylent #contract testing #end-to-end testing #hoverfly #integration testing #microservices architecture #pact #unit testing #vagrant #vcr

Dejah Reinger

2020-09-11

How to Do API Testing?

Nowadays, API testing is an integral part of software testing. There are a lot of tools, like Postman, Insomnia, etc. Many articles explain what an API is and what API testing is, but the real problem is: how do you do API testing? What do you need to validate?

Note: In this article, I am going to use Postman assertions for all the examples, since it is the most popular tool. However, this article is not intended only for Postman users.

Let’s directly jump to the topic.

Let’s consider an API endpoint, for example http://dzone.com/getuserDetails/{{username}}. When you send a GET request to that URL, it returns a response in JSON format like the one below:

{
  "jobTitle": "string",
  "userid": "string",
  "phoneNumber": "string",
  "password": "string",
  "email": "user@example.com",
  "firstName": "string",
  "lastName": "string",
  "userName": "string",
  "country": "string",
  "region": "string",
  "city": "string",
  "department": "string",
  "userType": 0
}

In the JSON, we can see the properties and their associated values.

Now, for example, if we need the details of the user with the username ‘ganeshhegde’, we need to send a GET request to http://dzone.com/getuserDetails/ganeshhegde.

Now there are two scenarios.

1. Valid use case: The user is available in the database, and the API returns the user details with status code 200.

2. Invalid use case: The user is unavailable or invalid; in this case, the API returns a 404 status code with a “Not Found” message.
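As a rough Python equivalent of the Postman assertions these two scenarios call for (run against a canned response rather than the live endpoint, since the URL above is only an example):

```python
import json

# A canned response standing in for a real GET to
# http://dzone.com/getuserDetails/ganeshhegde -- no live endpoint is called,
# and the status code and body below are illustrative.
status_code = 200
body = json.loads('{"userName": "ganeshhegde", "email": "user@example.com", "userType": 0}')

# Valid use case: HTTP 200 and the expected user in the payload.
assert status_code == 200, "expected HTTP 200 for an existing user"
assert body["userName"] == "ganeshhegde"
assert "@" in body["email"]
assert isinstance(body["userType"], int)

# Invalid use case: an unknown user should yield a 404 with a message.
missing_status, missing_body = 404, {"message": "Not Found"}
assert missing_status == 404
assert missing_body["message"] == "Not Found"
```

In Postman itself, the same checks would live in the Tests tab as pm.test blocks; the validations are identical, only the syntax differs.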

#tutorial #performance #api #test automation #api testing #testing and qa #application programming interface #testing as a service #testing tutorial #api test

The Ultimate Guide to Multiclass A/B Testing

One essential skill that is certainly useful for any data analytics professional to have is the ability to perform an A/B test and gather conclusions accordingly.

Before we proceed further, it might be useful to have a quick refresher on the definition of A/B testing in the first place. As the name suggests, we can think of A/B testing as the act of testing two alternatives, A and B, and using the test result to choose which alternative is superior. For convenience, let’s call this type of A/B testing binary A/B testing.

Despite its name, A/B testing can in fact be made more general, i.e., extended to include more than two alternatives/classes. To name a few, analyzing the click-through rate (CTR) of a multisegment digital campaign and the redemption rates of various tiers of promos are two nice examples of such multiclass A/B testing.

The difference in the number of classes involved between binary and multiclass A/B testing also results in a slight difference in the statistical methods used to draw conclusions. While in binary tests one would straightforwardly use a simple t-test, an additional (preliminary) step is needed for their multiclass counterparts.
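For the binary case, a common large-sample stand-in for the t-test on conversion rates is the two-proportion z-test. A pure-Python sketch (the conversion counts are made up for illustration):

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates,
    using the pooled-proportion standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value, normal approximation
    return z, p_value

# Illustrative numbers: variant A converts 200/2000, variant B 260/2000.
z, p = two_proportion_z_test(200, 2000, 260, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With a p-value below the chosen significance level (commonly 0.05), you would conclude that the two alternatives genuinely differ.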

In this post, I will give one possible strategy for dealing with (gathering conclusions from) multiclass A/B tests. I will demonstrate the step-by-step process through a concrete example so you can follow along. Are you ready?
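One common choice for that preliminary (omnibus) step is a chi-square test of independence across all classes at once, before any pairwise comparison. A minimal sketch with illustrative CTR data for three variants:

```python
# Omnibus step for a multiclass test: chi-square test of independence on a
# clicks-vs-non-clicks contingency table. The counts are illustrative.
# With 3 variants, df = (2 - 1) * (3 - 1) = 2, and the 5% critical value of
# the chi-square distribution with 2 degrees of freedom is 5.991.
clicks = [30, 45, 60]
impressions = [1000, 1000, 1000]

no_clicks = [n - c for c, n in zip(clicks, impressions)]
total = sum(impressions)
chi2 = 0.0
for row in (clicks, no_clicks):
    row_total = sum(row)
    for j, observed in enumerate(row):
        expected = row_total * impressions[j] / total
        chi2 += (observed - expected) ** 2 / expected

print(f"chi-square statistic = {chi2:.2f}")
if chi2 > 5.991:  # 95th percentile, 2 degrees of freedom
    print("At least one CTR differs; follow up with pairwise tests.")
```

Only if this omnibus test rejects do you proceed to pairwise comparisons to find which class is the winner.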

#hypothesis-testing #a-b-testing #click-through-rate #t-test #chi-square-test #testing

Vern Greenholt

2020-08-27

The A’s and B’s of A/B Testing — A Beginner’s guide to experimentation

I recently completed Udacity’s course on A/B testing. The course started with a high-level understanding of what a typical A/B test entails before diving into the specifics of each stage of the experimentation process. Needless to say, it was a great learning experience! In this post, I am going to summarize my key learnings from the course and explain how A/B testing benefits companies focused on improving user experience.

So, let us dive in!

So, tell me what is an A/B Test?

A/B testing is a method of experimentation for understanding how user experience changes following variations in the way users interact with a website, a mobile app, etc. It is a very popular method for gauging the current customer experience and improving it based on certain relevant metrics.

Possible changes that companies may introduce include changing the layout of the website, changing the appearance or position of certain buttons on a particular webpage, changing the ranking of options on the website, etc. A/B testing helps companies evaluate whether these changes would be successful post-launch. This is done by testing the changes on a subset of viewers for a certain period of time.

Okay, gotcha! But how will I evaluate these tests?

Evaluation takes place on the basis of select metrics.

  1. Sanity metrics: These are also called invariant metrics. They should not change across the control and experiment groups during the course of the experiment. If they do change, something is fundamentally wrong with the experiment setup.
  2. Evaluation metrics: These can be chosen on the basis of business needs.
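The sanity check in point 1 can be made concrete with a sample-ratio-mismatch test on the group sizes themselves; a sketch in plain Python (the counts are illustrative):

```python
from math import sqrt, erfc

def srm_check(n_control, n_experiment, expected_ratio=0.5):
    """Sanity (invariant) check on group sizes: a chi-square goodness-of-fit
    test for sample ratio mismatch. A tiny p-value suggests the experiment
    assignment itself is broken, regardless of any evaluation metric."""
    total = n_control + n_experiment
    exp_c = total * expected_ratio
    exp_e = total * (1 - expected_ratio)
    chi2 = (n_control - exp_c) ** 2 / exp_c + (n_experiment - exp_e) ** 2 / exp_e
    return erfc(sqrt(chi2 / 2))  # p-value, chi-square with 1 degree of freedom

print(srm_check(5000, 5050))  # near-50/50 split: large p-value, setup looks sane
print(srm_check(5000, 6000))  # lopsided split: tiny p-value, investigate first
```

If the sanity check fails, the evaluation metrics should not be trusted until the assignment mechanism is fixed.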

#product-analytics #data-science #a-b-testing #product-management #experimentation #big data
