Before we delve into the intuition behind the Bayesian approach to estimation, we need to understand a few concepts. These concepts include:
Inferential statistics is when you _infer_ something about a whole population based on a **sample** of that population, as opposed to descriptive statistics, which _describes_ something about the whole population.
When it comes to inferential statistics, there are two main philosophies: frequentist inference and Bayesian inference. The frequentist approach is the more traditional one and is therefore covered more heavily in most statistics courses (especially introductory ones). However, many would argue that the Bayesian approach is much closer to the way humans naturally perceive probability.
“1132 — Frequentists vs. Bayesians” by PhilWolff is licensed under CC BY-SA 2.0
The Bayesian approach involves updating one’s beliefs based on new evidence. For instance, suppose you’re at the doctor’s because you’re feeling unwell and believe you have a certain illness. A couple of doctors examine you, and each has a different belief about what you may have. These are known as prior beliefs (prior probabilities). After the examination, they run a blood test. Based on the results, they rule out some of the illnesses they initially suspected and update their beliefs accordingly. These updated beliefs are known as posterior beliefs (posterior probabilities).
I know, I know, a lot of jargon but I’ll try to explain everything in the example.
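The update in the doctor example follows Bayes’ rule: posterior ∝ likelihood × prior. Here is a minimal sketch; the illness names, priors, and test likelihoods are invented purely for illustration:

```python
# Bayes' rule: posterior is proportional to likelihood times prior.
# Hypothetical prior beliefs over three candidate illnesses, and the
# likelihood of a positive blood test under each one (invented numbers).
priors = {"flu": 0.5, "mono": 0.3, "strep": 0.2}
likelihood_positive = {"flu": 0.1, "mono": 0.8, "strep": 0.3}

# Unnormalized posterior for each illness given a positive test result
unnorm = {h: priors[h] * likelihood_positive[h] for h in priors}
evidence = sum(unnorm.values())  # P(positive test), the normalizing constant
posterior = {h: unnorm[h] / evidence for h in unnorm}

print(posterior)  # most of the belief has shifted onto "mono"
```

The positive test shifts belief away from “flu” (which rarely produces a positive result here) and toward “mono”, even though “flu” had the highest prior.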
Case study: Bayesian analysis of A/B test results
Let’s recall our example from the previous article, where Fisher’s exact test indicated that a 0.3% drop in conversion wasn’t statistically significant:
We’ve been developing this amazing arcade game for the last couple of years, and things seem to be going pretty well. But at some point, the player community started asking for a co-op mode. After some discussion, the game team decided to develop the new mode and run an A/B test to check how it affects the metrics.
We ran an A/B test with a test group of 2,500 users (the same size as the control group) and got these results:
- Average session length increased from 8 to 9 minutes
- Day-1 retention increased from 40% to 45%
- At the same time, conversion declined from 2% to 1.7%
After applying statistical tests to all three metrics, we found that average session length and retention showed statistically significant growth, while the 0.3% decrease in conversion was not flagged as significant by Fisher’s exact test.
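A Bayesian analysis of the conversion metric can be sketched with the standard library alone. The exact conversion counts are not given in the text, so the figures below are approximated from the quoted rates (2% and 1.7% of 2,500 users ≈ 50 and 43 conversions) and should be treated as illustrative:

```python
import random

random.seed(0)

# Group size from the case study; conversion counts approximated from
# the quoted rates (hypothetical exact values).
n = 2500
control_conv, test_conv = 50, 43  # ~2.0% and ~1.7% of 2,500 users

def posterior_sample(successes: int, trials: int) -> float:
    """Draw one sample from a Beta(1,1)-prior posterior over the conversion rate."""
    return random.betavariate(1 + successes, 1 + trials - successes)

# Monte Carlo estimate of P(test conversion rate < control conversion rate)
draws = 100_000
worse = sum(
    posterior_sample(test_conv, n) < posterior_sample(control_conv, n)
    for _ in range(draws)
)
print(f"P(test group converts worse): {worse / draws:.2f}")
```

Instead of a binary significant/not-significant verdict, this gives a direct probability that the co-op mode hurt conversion, which is often easier to weigh against the gains in session length and retention.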
One essential skill that is certainly useful for any data analytics professional is the ability to perform an A/B test and draw conclusions from it.
Before we proceed further, it might be useful to have a quick refresher on the definition of A/B testing in the first place. As the name suggests, we can think of A/B testing as the act of testing two alternatives, A and B, and using the test result to decide which alternative is superior. For convenience, let’s call this type of A/B testing binary A/B testing.
Despite its name, A/B testing can in fact be made more general, i.e., extended to more than two alternatives/classes. Analyzing the click-through rate (CTR) of a multisegment digital campaign and the redemption rates of various promo tiers are two nice examples of such multiclass A/B testing.
The difference in the number of classes between binary and multiclass A/B testing also leads to a slight difference in the statistical methods used to draw conclusions. While in a binary test one would straightforwardly use a simple t-test, it turns out that an additional (preliminary) step is needed for the multiclass counterpart.
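That preliminary step is typically a chi-square test of homogeneity: check whether the rates differ across any of the classes before running pairwise comparisons. A minimal sketch with invented CTR numbers for a three-segment campaign (the statistic is computed by hand so no external library is needed; 5.991 is the 5% critical value for 2 degrees of freedom):

```python
# Hypothetical click data for a three-segment campaign (invented numbers)
clicks = [120, 90, 150]
impressions = [1000, 1000, 1000]
no_clicks = [n - c for n, c in zip(impressions, clicks)]

observed = [clicks, no_clicks]  # 2 x 3 contingency table
row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Chi-square statistic: sum over cells of (observed - expected)^2 / expected,
# where expected = row total * column total / grand total
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / grand) ** 2
    / (row_totals[i] * col_totals[j] / grand)
    for i in range(2)
    for j in range(3)
)

# df = (rows - 1) * (cols - 1) = 2; the 5% critical value is about 5.991
print(f"chi2 = {chi2:.2f}, segments differ: {chi2 > 5.991}")
```

Only if this omnibus test rejects homogeneity does it make sense to proceed to pairwise comparisons between individual segments.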
In this post, I will present one possible strategy for drawing conclusions from multiclass A/B tests. I will demonstrate the step-by-step process through a concrete example so you can follow along. Are you ready?
The shift towards microservices and modular applications makes testing more important and more challenging at the same time. You have to make sure that the microservices running in containers perform well and as intended, but you can no longer rely on conventional testing strategies to get the job done.
This is where new testing approaches come in. Testing your microservices applications requires the right approach, a suitable set of tools, and close attention to detail. This article will guide you through the process of testing your microservices and the challenges you will have to overcome along the way. Let’s get started, shall we?
Traditionally, testing a monolith application meant configuring a test environment and setting up all of the application components in a way that matched the production environment. It took time to set up the testing environment, and there were a lot of complexities around the process.
Testing also requires the application to run in full. It is not possible to test a monolith app on a per-component basis, mainly because a base code usually ties everything together and the app is designed to run as a whole to work properly.
Microservices running in containers offer one particular advantage: universal compatibility. You don’t have to match the testing environment with the deployment architecture exactly, and you can get away with testing individual components rather than the full app in some situations.
Of course, you will have to embrace the new cloud-native approach across the pipeline. Rather than creating critical dependencies between microservices, you need to treat each one as a semi-independent module.
The only monolith or centralized portion of the application is the database, but this too is an easy challenge to overcome. As long as you have a persistent database running on your test environment, you can perform tests at any time.
Keep in mind that there are additional things to focus on when testing microservices.
Test containers are the method of choice for many developers. Unlike monolith apps, which let you rely on stubs and mocks for testing, microservices are best tested in disposable test containers. Many CI/CD pipelines actually integrate production microservices as part of the testing process.
As mentioned before, there are many ways to test microservices effectively, but the one approach that developers now use reliably is contract testing. Loosely coupled microservices can be tested in an effective and efficient way using contract testing, mainly because this testing approach focuses on contracts; in other words, it focuses on how components or microservices communicate with each other.
Syntax and semantics define how components communicate with each other. By defining both in a standardized way and testing microservices on their ability to generate the right message formats and meet behavioral expectations, you can rest assured that the microservices will behave as intended when deployed.
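The idea can be illustrated with a hand-rolled consumer-side contract check; in practice a dedicated tool such as Pact would manage the contract, and the field names and the sample responses below are invented for illustration:

```python
# A consumer-driven contract: the fields and types the consumer expects
# in the provider's response message (hypothetical schema).
CONTRACT = {
    "user_id": int,
    "status": str,
    "items": list,
}

def check_contract(message: dict) -> list:
    """Return a list of violations of the agreed message format."""
    violations = []
    for field, expected_type in CONTRACT.items():
        if field not in message:
            violations.append(f"missing field: {field}")
        elif not isinstance(message[field], expected_type):
            violations.append(f"wrong type for {field}")
    return violations

# Simulated provider responses; in a real pipeline these would come
# from the service under test.
good_response = {"user_id": 42, "status": "active", "items": []}
assert check_contract(good_response) == []

bad_response = {"user_id": "42", "status": "active"}
print(check_contract(bad_response))  # flags the type error and the missing field
```

The provider can run the same checks in its own CI, so both sides of the integration are verified without ever deploying the two services together.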
It is easy to fall into the trap of making testing microservices complicated, but there are ways to avoid this problem. Testing microservices doesn’t have to be complicated at all when you have the right strategy in place.
There are several ways to test microservices too, including:
What’s important to note is that these testing approaches allow for asynchronous testing. After all, asynchronous development is what makes developing microservices so appealing in the first place. By allowing for asynchronous testing, you can also make sure that components or microservices can be updated independently of one another.
This article will be interesting for IT directors, product managers, project managers, and anyone who wants to understand the processes of project quality assurance better.
At Qualitica, we test large web and mobile projects, both commercial and national. Before the separate testing agency was established, I spent 10 years as a specialist and head of several digital studios. Usually, in any IT project (websites, applications, games, corporate software), testing starts out as a formal procedure. But testing normally evolves with the project: the more people are involved, the more complex the process becomes.
There are 7 stages of testing evolution, which may differ from company to company:
Let’s learn about each stage in more detail.
It’s the simplest, “instinctive” approach to testing, common in small companies. When hiring a professional tester is impossible or considered undesirable, this part of the work is performed in house, but this approach is inappropriate and problematic for the following reasons:
An extreme case is when no testing is done in the company at all and the error reports come from … the client. Then more and more errors/bugs appear. In effect, clients become testers at their own expense.
Automation and segregation can help you build better software
If you write automated tests and deliver them to the customer, they can verify that the software works properly. And, at the end of the day, they paid for it.
OK. We can segregate, or separate, tests according to some criteria. For example, “white box” tests measure the internal quality of the software, in addition to checking expected results. They are very useful for knowing the percentage of lines of code executed, the cyclomatic complexity, and several other software metrics. Unit tests are white box tests.
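Here is a small sketch of what such white box unit tests look like with Python’s built-in `unittest`; the `discount` function and its business rule are invented for illustration, with one test per branch so a coverage tool would report every line executed:

```python
import unittest

def discount(total: float, is_member: bool) -> float:
    """Hypothetical function under test: members get 10% off orders over 100."""
    if is_member and total > 100:
        return round(total * 0.9, 2)
    return total

class DiscountTest(unittest.TestCase):
    # White box tests: one case per branch of the function
    def test_member_over_threshold(self):
        self.assertEqual(discount(200, True), 180.0)

    def test_member_under_threshold(self):
        self.assertEqual(discount(80, True), 80)

    def test_non_member(self):
        self.assertEqual(discount(200, False), 200)

if __name__ == "__main__":
    unittest.main(argv=["discount_test"], exit=False, verbosity=0)
```

Because each branch has its own test, a line-coverage report for `discount` reads 100%, which is exactly the kind of internal metric white box testing is meant to expose.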