Effect Size vs. Inferential Statistics
What is the difference between a statistically significant result and a meaningful result?

Most doctoral students are exposed to the standard canon of quantitative research methods – null hypothesis testing using statistical inference.  Using a variety of test methods, one can test hypotheses to determine whether relationships between variables exist, within a reasonable margin of error, and thereby whether the null hypothesis can be rejected or fails to be rejected.

While this sort of hypothesis testing is ubiquitous in research, a statistically significant result does not always equate to a meaningful result.  Particularly in large samples, a tested relationship can reach statistical significance even when the effect of one variable on another is minor, even trivial.  In such cases, we need a better approach to determine not just whether statistical significance is present, but whether the effects are large enough to matter.
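A quick simulation makes the point concrete.  The sketch below (a hypothetical example, not drawn from the resource itself) compares two groups whose true means differ by a trivial amount – 0.02 standard deviations – but with a very large sample in each group.  The p-value comes out highly "significant," while Cohen's d, a common standardized effect-size measure, shows the effect is negligible:

```python
import math
import random

random.seed(42)

# Two groups whose true means differ by only 0.02 standard
# deviations -- a trivial effect -- observed in very large samples.
n = 100_000
group_a = [random.gauss(0.00, 1.0) for _ in range(n)]
group_b = [random.gauss(0.02, 1.0) for _ in range(n)]

mean_a = sum(group_a) / n
mean_b = sum(group_b) / n
var_a = sum((x - mean_a) ** 2 for x in group_a) / (n - 1)
var_b = sum((x - mean_b) ** 2 for x in group_b) / (n - 1)

# Two-sample z statistic; with n this large, the normal
# approximation to the t distribution is essentially exact.
z = (mean_b - mean_a) / math.sqrt(var_a / n + var_b / n)
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-tailed

# Cohen's d: the mean difference scaled by the pooled standard
# deviation, so it is independent of sample size.
pooled_sd = math.sqrt((var_a + var_b) / 2)
cohens_d = (mean_b - mean_a) / pooled_sd

print(f"p-value   = {p_value:.2e}")   # well below 0.05: "significant"
print(f"Cohen's d = {cohens_d:.3f}")  # far below 0.2: a trivial effect
```

Doubling the sample size would shrink the p-value further, yet Cohen's d would stay roughly the same – which is exactly why effect sizes, not p-values, answer the question of practical importance.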

We have just added a resource to our “guides, tools, and worksheets” library that explores how measuring and reporting effect sizes can provide a stronger interpretation of a relationship between variables than the usual “p-value” approach in inferential statistics.  You can find the discussion here: