Significant results are just the beginning. Obtaining significant results is a tremendous accomplishment in itself, but it does not tell the entire story behind your results.
Congratulations, your experiment has yielded significant results! You can be sure (well, 95% sure) that the independent variable influenced your dependent variable. I guess all you have left to do is write up your discussion and submit your results to a scholarly journal. Right?
Not quite. I want to take this time to discuss statistical significance, sample size, statistical power, and effect size, all of which have an enormous impact on how we interpret our results.
First and foremost, let’s discuss statistical significance, as it forms the cornerstone of inferential statistics. We’ll discuss significance in the context of true experiments, as that is where it is most relevant and most easily understood. A true experiment is used to test a specific hypothesis (or hypotheses) about the causal relationship between variables. Specifically, we hypothesize that one or more variables (i.e., independent variables) produce a change in another variable (i.e., the dependent variable). Observing that change is what allows us to infer causality. If you would like to learn more about the various research design types, visit my article (LINK).
For example, suppose we want to test the hypothesis that an authoritative teaching style produces higher test scores in students. To test this accurately, we randomly select a group of students and randomly assign each one to one of two classrooms: one taught by an authoritarian teacher and one taught by an authoritative teacher. Throughout the semester, we collect all the test scores from both classrooms. At the end of the semester, we average the scores to produce a grand average for each classroom. Let’s assume the average test score for the authoritarian classroom was 80%, and for the authoritative classroom it was 88%. It would seem our hypothesis was correct: the students taught by the authoritative teacher scored, on average, 8 percentage points higher than the students taught by the authoritarian teacher. However, if we ran this experiment 100 times, each time with different groups of students, do you think we would obtain similar results? What is the likelihood that this effect of teaching style on test scores occurred by chance, or was driven by a latent (i.e., unmeasured) variable? And last but not least, is an 8-point difference large enough to matter?
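One way to answer the "did this happen by chance?" question is with a permutation test: if teaching style had no effect, then shuffling the scores between the two classrooms should produce differences as large as the one we observed fairly often. Here is a minimal sketch in Python; the individual scores are made-up numbers chosen only so the classroom averages match the 80% and 88% in the example.

```python
import random
from statistics import mean

# Hypothetical test scores (percent) for each classroom -- illustrative
# numbers chosen to match the 80% vs. 88% averages from the example.
authoritarian = [78, 82, 79, 81, 80, 83, 77, 80]
authoritative = [86, 90, 87, 89, 88, 91, 85, 88]

observed_diff = mean(authoritative) - mean(authoritarian)  # 8.0 points

# Permutation test: under the null hypothesis (no effect of teaching style),
# any shuffling of scores between classrooms is equally likely. Count how
# often a random shuffle yields a difference at least as large as observed.
random.seed(0)
pooled = authoritarian + authoritative
n = len(authoritarian)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[n:]) - mean(pooled[:n])
    if diff >= observed_diff:
        count += 1

p_value = count / trials
print(f"observed difference: {observed_diff:.1f} points, p = {p_value:.4f}")
```

With scores this cleanly separated, very few (if any) shuffles reach an 8-point gap, so the p-value falls well below the conventional 0.05 threshold. In practice you would more likely reach for a two-sample t-test, but the permutation approach makes the logic of "significant versus chance" explicit.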
In this article, we will use graphs to demonstrate how statistical power and effect size relate to sample size. Specifically, we will walk through different scenarios using one-tailed hypothesis tests.
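The relationship between power and sample size can be sketched numerically before graphing it. Below is a rough stdlib-only approximation for a one-sample, one-tailed z-test at alpha = 0.05; the effect size of 0.3 is an arbitrary illustration, not from the original example.

```python
from math import erf, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

Z_ALPHA = 1.645  # one-tailed critical value at alpha = 0.05

def power_one_tailed(effect_size: float, n: int) -> float:
    """Approximate power of a one-sample, one-tailed z-test.

    Power = P(reject H0 | H1 true) = Phi(d * sqrt(n) - z_alpha).
    """
    return norm_cdf(effect_size * sqrt(n) - Z_ALPHA)

# Power grows with sample size for a fixed effect size (d = 0.3):
for n in (10, 25, 50, 100, 200):
    print(f"n = {n:3d}  power = {power_one_tailed(0.3, n):.3f}")
```

Plotting these values for several effect sizes produces the familiar power curves: larger samples and larger effects both push power toward 1.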
Statistical tests allow researchers to make inferences because they can show whether an observed pattern is due to the intervention or to chance. There is a wide range of statistical tests to choose from, depending on the design and the data.
Data science draws heavily on advanced statistical and machine learning methods. As long as there is data to analyze, the need to investigate it is obvious.
You will discover Exploratory Data Analysis (EDA), the techniques and tactics that you can use, and why you should be performing EDA on your next problem.
Global Terrorism Database Analysis was a quick project for understanding and implementing various descriptive statistics and exploratory data analysis techniques.
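Descriptive statistics like these are the usual starting point for EDA. As a minimal sketch, here is how the basic summaries can be computed with Python's standard library; the numbers are made up for illustration and are not taken from the Global Terrorism Database.

```python
from statistics import mean, median, stdev

# Hypothetical yearly incident counts -- made-up values, for illustration
# only (not actual Global Terrorism Database figures).
incidents = [120, 135, 150, 160, 145, 170, 155, 180, 165, 175]

# Core descriptive statistics: central tendency and spread.
print("mean:  ", mean(incidents))
print("median:", median(incidents))
print("stdev: ", round(stdev(incidents), 2))
print("range: ", max(incidents) - min(incidents))
```

In a real project you would typically load the dataset with pandas and use `describe()` for the same summaries, but the underlying quantities are exactly these.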