1623722580
Case study: Bayesian analysis of A/B test results
Let’s recall our example from the previous article, where Fisher’s exact test indicated that a 0.3% drop in conversion wasn’t statistically significant:
We’ve been developing this amazing arcade game for the last couple of years, and things seem to be going pretty well. But at some point, the player community started asking for a co-op mode. After some discussion, the game team decided to develop the new mode and run an A/B test to check how it affects the metrics.
We ran an A/B test for the test group of 2500 users (the same size as the control group) and got these results:
- Average session length increased from 8 to 9 minutes
- Day-1 retention increased from 40% to 45%
- At the same time, conversion declined from 2% to 1.7%
After applying statistical tests to all three metrics, we found that average session length and retention showed statistically significant growth, while the 0.3% decrease in conversion was not flagged as significant by Fisher’s exact test.
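The excerpt doesn’t show the computation itself, but the numbers above are enough to reproduce both the frequentist check and a simple Bayesian one. Here is a minimal sketch in Python with scipy, assuming the reported rates round to 50 (control) and 43 (test) conversions out of 2500 per group (1.7% of 2500 is 42.5):
Python
# Reproducing the conversion check. Counts are assumptions derived from the
# reported rates: 2.0% and ~1.7% of 2500 users.
import numpy as np
from scipy.stats import beta, fisher_exact

n = 2500
conv_control, conv_test = 50, 43

# Frequentist check: Fisher's exact test on the 2x2 contingency table
_, p_value = fisher_exact([[conv_control, n - conv_control],
                           [conv_test,    n - conv_test]])
print(f"Fisher's exact p-value: {p_value:.3f}")  # far above 0.05

# Bayesian check: with a flat Beta(1, 1) prior, each group's conversion rate
# has a Beta posterior; sampling gives P(test group converts worse).
samples_control = beta.rvs(1 + conv_control, 1 + n - conv_control, size=100_000)
samples_test    = beta.rvs(1 + conv_test,    1 + n - conv_test,    size=100_000)
print(f"P(conversion actually dropped): {np.mean(samples_test < samples_control):.2f}")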
#ab-testing #a/b testing #bayesian approach #testing
1591879620
One essential skill for any data analytics professional is the ability to perform an A/B test and draw conclusions from it.
Before we proceed further, it might be useful to have a quick refresher on the definition of A/B testing in the first place. As the name suggests, we can think of A/B testing as the act of testing two alternatives, A and B, and using the test result to choose which alternative is superior. For convenience, let’s call this type of A/B testing binary A/B testing.
Despite its name, A/B testing can in fact be made more general, i.e., it can include more than two alternatives/classes to be tested. Analyzing the click-through rate (CTR) of a multisegment digital campaign and the redemption rates of various tiers of promos are two nice examples of such multiclass A/B testing.
The difference in the number of classes involved between binary and multiclass A/B testing also results in a slight difference in the statistical methods used to draw conclusions from them. While in a binary test one would straightforwardly use a simple t-test, it turns out that an additional (preliminary) step is needed for the multiclass counterpart.
In this post, I will give one possible strategy for drawing conclusions from multiclass A/B tests. I will demonstrate the step-by-step process through a concrete example so you can follow along. Are you ready?
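The excerpt doesn’t reveal the preliminary step, but judging by the #chi-square-test tag, it is a chi-square test of independence run before any pairwise comparison. A minimal sketch with made-up CTR counts for three campaign segments:
Python
# Hypothetical numbers for a three-segment campaign: the preliminary step is a
# chi-square test of independence on the clicks / no-clicks contingency table.
import numpy as np
from scipy.stats import chi2_contingency

clicks    = np.array([120,  95, 140])   # clicks per ad variant (made up)
no_clicks = np.array([880, 905, 860])   # impressions minus clicks (made up)

chi2, p, dof, _ = chi2_contingency(np.stack([clicks, no_clicks]))
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
# A small p suggests at least one variant's CTR differs; pairwise tests
# (e.g. t-tests or proportion tests) can then pinpoint which one.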
#hypothesis-testing #a-b-testing #click-through-rate #t-test #chi-square-test #testing
1596754901
The shift towards microservices and modular applications makes testing more important and more challenging at the same time. You have to make sure that the microservices running in containers perform well and behave as intended, but you can no longer rely on conventional testing strategies to get the job done.
This is where new testing approaches are needed. Testing your microservices applications requires the right approach, a suitable set of tools, and close attention to detail. This article will guide you through the process of testing your microservices and talk about the challenges you will have to overcome along the way. Let’s get started, shall we?
Traditionally, testing a monolith application meant configuring a test environment and setting up all of the application components in a way that matched the production environment. It took time to set up the testing environment, and there were a lot of complexities around the process.
Testing also requires the application to run in full. It is not possible to test a monolith app on a per-component basis, mainly because there is usually base code that ties everything together, and the app is designed to run as a whole in order to work properly.
Microservices running in containers offer one particular advantage: universal compatibility. You don’t have to match the testing environment with the deployment architecture exactly, and you can get away with testing individual components rather than the full app in some situations.
Of course, you will have to embrace the new cloud-native approach across the pipeline. Rather than creating critical dependencies between microservices, you need to treat each one as a semi-independent module.
The only monolith or centralized portion of the application is the database, but this too is an easy challenge to overcome. As long as you have a persistent database running on your test environment, you can perform tests at any time.
Keep in mind that there are additional things to focus on when testing microservices.
Test containers are the method of choice for many developers. Unlike monolith apps, which let you use stubs and mocks for testing, microservices need to be tested in test containers. Many CI/CD pipelines actually integrate production microservices as part of the testing process.
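As an illustration of the idea (not necessarily the exact setup the author has in mind), here is a minimal sketch using the Python testcontainers package, which assumes Docker is available on the test machine:
Python
# A throwaway Postgres spun up in Docker for the duration of one test, using
# the `testcontainers` package. Assumes a local Docker daemon is running.
import sqlalchemy
from testcontainers.postgres import PostgresContainer

def test_repository_against_a_real_database():
    with PostgresContainer("postgres:16") as postgres:
        engine = sqlalchemy.create_engine(postgres.get_connection_url())
        with engine.connect() as conn:
            # Placeholder query; a real test would exercise the repository code
            assert conn.execute(sqlalchemy.text("SELECT 1")).scalar() == 1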
As mentioned before, there are many ways to test microservices effectively, but the one approach that developers now use reliably is contract testing. Loosely coupled microservices can be tested in an effective and efficient way using contract testing, mainly because this testing approach focuses on contracts; in other words, it focuses on how components or microservices communicate with each other.
Syntax and semantics define how components communicate with each other. By defining syntax and semantics in a standardized way and testing microservices on their ability to generate the right message formats and meet behavioral expectations, you can rest assured that the microservices will behave as intended when deployed.
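Here is a minimal sketch of that idea in Python: a consumer-side check that pins down the message format it expects from a provider. The endpoint and fields are hypothetical, and a real setup would typically use a dedicated tool such as Pact:
Python
# A consumer-side contract check: the consumer encodes the message format it
# expects, and the provider's response is validated against that expectation.
# Endpoint and fields are hypothetical.
import requests

EXPECTED_CONTRACT = {   # field name -> expected JSON type
    "id": int,
    "name": str,
    "email": str,
}

def test_user_service_honours_contract():
    resp = requests.get("http://localhost:8080/users/1")  # hypothetical provider
    assert resp.status_code == 200
    body = resp.json()
    for field, expected_type in EXPECTED_CONTRACT.items():
        assert field in body, f"missing field: {field}"
        assert isinstance(body[field], expected_type), f"wrong type for {field}"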
It is easy to fall into the trap of making testing microservices complicated, but there are ways to avoid this problem. Testing microservices doesn’t have to be complicated at all when you have the right strategy in place.
There are several ways to test microservices too, including:
- Unit testing
- Integration testing
- Contract testing
- End-to-end testing
What’s important to note is that these testing approaches allow for asynchronous testing. After all, asynchronous development is what makes developing microservices so appealing in the first place. By allowing for asynchronous testing, you can also make sure that components or microservices can be updated independently of one another.
#blog #microservices #testing #caylent #contract testing #end-to-end testing #hoverfly #integration testing #microservices architecture #pact #unit testing #vagrant #vcr
1620983255
Automation and segregation can help you build better software
If you write automated tests and deliver them to the customer, they can verify that the software works properly. And, at the end of the day, they paid for it.
Ok. We can segregate, or separate, the tests according to some criteria. For example, “white box” tests are used to measure the internal quality of the software, in addition to checking expected results. They are very useful for knowing the percentage of lines of code executed, the cyclomatic complexity, and several other software metrics. Unit tests are white box tests.
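As a quick illustration, here is a minimal white box unit test in Python. The discount function is hypothetical; each test exercises one branch, which is exactly what coverage metrics count:
Python
# A hypothetical function under test: each test below covers one branch, so a
# coverage tool would report both paths as executed.
import unittest

def discount(price: float, is_member: bool) -> float:
    """Apply a 10% member discount."""
    return price * 0.9 if is_member else price

class DiscountTest(unittest.TestCase):
    def test_member_gets_discount(self):
        self.assertAlmostEqual(discount(100.0, True), 90.0)

    def test_non_member_pays_full_price(self):
        self.assertAlmostEqual(discount(100.0, False), 100.0)

if __name__ == "__main__":
    unittest.main()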
#testing #software testing #regression tests #unit tests #integration tests
1599859380
Nowadays, API testing is an integral part of software testing, and there are a lot of tools for it, like Postman, Insomnia, etc. Many articles cover what an API is and what API testing is, but the real problem is how to do API testing, and what you need to validate.
Note: In this article, I am going to use Postman assertions for all the examples, since it is the most popular tool. But this article is not intended only for Postman.
Let’s directly jump to the topic.
Let’s say you have an API endpoint, for example http://dzone.com/getuserDetails/{{username}}. When you send a GET request to that URL, it returns a JSON response like the one below:
JSON
{
  "jobTitle": "string",
  "userid": "string",
  "phoneNumber": "string",
  "password": "string",
  "email": "user@example.com",
  "firstName": "string",
  "lastName": "string",
  "userName": "string",
  "country": "string",
  "region": "string",
  "city": "string",
  "department": "string",
  "userType": 0
}
In the JSON, we can see properties and their associated values.
Now, for example, if we need the details of the user with the username ‘ganeshhegde’, we need to send a **GET** request to **http://dzone.com/getuserDetails/ganeshhegde**.
Now there are two scenarios.
1. Valid use case: the user is available in the database, and the API returns the user details with status code 200.
2. Invalid use case: the user is unavailable or invalid, and the API returns status code 404 with a “not found” message.
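The article demonstrates these checks with Postman assertions; the equivalent logic, sketched in Python with requests against the example endpoint (the invalid username is made up):
Python
# Status-code checks for both scenarios, against the example endpoint.
import requests

BASE_URL = "http://dzone.com/getuserDetails"

def test_valid_user_returns_200():
    resp = requests.get(f"{BASE_URL}/ganeshhegde")
    assert resp.status_code == 200
    assert resp.json()["userName"] == "ganeshhegde"  # field from the sample response

def test_invalid_user_returns_404():
    resp = requests.get(f"{BASE_URL}/no_such_user")  # hypothetical invalid username
    assert resp.status_code == 404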
#tutorial #performance #api #test automation #api testing #testing and qa #application programming interface #testing as a service #testing tutorial #api test