Statistics is a broad and complex field, and for any given research question a number of different statistical approaches could be taken. In an article published last year, researchers asked 61 analysts to use the same dataset to address the same question: were referees more likely to give dark-skinned players a red card than light-skinned players? They got 61 different responses. Each analysis had its advantages and disadvantages, and I’m sure each analyst would have defended their work. However, as many statisticians and economists will know, the merit of an approach is not the only factor that matters in its adoption.

There has, for decades, been criticism about
the misunderstanding and misuse of null hypothesis significance testing (NHST).
*P*-values have been a common topic on
this blog. Despite this, NHST remains the predominant paradigm for most
statistical work. If used appropriately this needn’t be a problem, but if it
were being used appropriately it wouldn’t be used nearly as much: *p*-values can’t perform the inferential
role many expect of them. It’s not difficult to understand why things are this
way: most published work uses NHST, we teach students NHST in order to
understand the published work, students become researchers who use NHST, and so
on. Part of statistical education involves teaching the arbitrary conventions that
have gone before, such as that *p*-values
are ‘significant’ if below 0.05 or that a study is ‘adequately powered’ if power is
above 80%. One of the most pernicious consequences is that these
heuristics become a substitute for thinking. The presence of these key figures
is expected, and their absence is often met with a request from reviewers and
other readers for their inclusion.

I have argued on this blog and elsewhere for a wider use of Bayesian methods (and less NHST) and I try to practice what I preach. For an ongoing randomised trial I am involved with, I adopted a Bayesian approach to design and analysis. Instead of the usual power calculation, I conducted a Bayesian assurance analysis (which Anthony O’Hagan has written some good articles on for those wanting more information). I’ll try to summarise the differences between ‘power’ and ‘assurance’ calculations by attempting to define them, which is actually quite hard!

*Power calculation*. If we were to repeat a trial infinitely many times, what sample size would we need so that, in *x*% of those trials, the assumed data generating model produces data falling in the *α*% most extreme quantiles of the distribution of data that would be produced by the same data generating model with one parameter set to exactly zero (or an equivalent null hypothesis)? Typically we set *x*% to be 80% (power) and *α*% to be 5% (the statistical significance threshold).
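To make that definition concrete, here is a minimal simulation sketch (all values are assumed for illustration: a two-arm trial with Normal outcomes, SD of 1, a true standardised effect of 0.3, and a two-sided z-test at the 5% level). It estimates, by repeated simulation, the proportion of trials in which the data would be declared ‘significant’:

```python
import numpy as np

rng = np.random.default_rng(1)

def power_by_simulation(n_per_arm, effect=0.3, alpha_z=1.96, n_sims=5000):
    """Estimate power for a two-arm trial by repeated simulation.

    Assumed setup (illustration only): Normal outcomes with SD 1, true
    difference in means = `effect`, two-sided z-test at alpha = 5%.
    """
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_arm)
        treated = rng.normal(effect, 1.0, n_per_arm)
        diff = treated.mean() - control.mean()
        se = np.sqrt(control.var(ddof=1) / n_per_arm
                     + treated.var(ddof=1) / n_per_arm)
        if abs(diff / se) > alpha_z:  # would be declared 'significant'
            hits += 1
    return hits / n_sims
```

Increasing `n_per_arm` until this proportion exceeds 80% recovers the usual power-based sample size.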

*Assurance calculation*. For a given data generating model, what sample size do we need so that there is an *x*% probability that we will be at least (1 − *α*) certain (e.g. 95% certain) that the parameter is positive (or an equivalent statement)?

The assurance calculation could be reframed in a decision framework: what sample size do we need so that there is an *x*% probability that we will make the right decision about whether a parameter is positive (or an equivalent decision), given the costs of making the wrong decision?
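An assurance calculation can also be sketched by Monte Carlo simulation. The values below are again assumed for illustration: a Normal(0.3, 0.15²) design prior on the effect, Normal outcomes with SD 1, and a flat analysis prior, so the posterior probability that the effect is positive has a simple closed form. Each iteration draws a ‘true’ effect from the prior, simulates the trial, and checks whether we would end up at least 97.5% certain the effect is positive:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

def norm_cdf(x):
    """Standard Normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def assurance(n_per_arm, prior_mean=0.3, prior_sd=0.15,
              certainty=0.975, n_sims=4000):
    """Estimate assurance for a two-arm trial (illustrative sketch)."""
    successes = 0
    for _ in range(n_sims):
        # Draw a 'true' effect from the design prior, then simulate the trial.
        effect = rng.normal(prior_mean, prior_sd)
        control = rng.normal(0.0, 1.0, n_per_arm)
        treated = rng.normal(effect, 1.0, n_per_arm)
        diff = treated.mean() - control.mean()
        se = np.sqrt(control.var(ddof=1) / n_per_arm
                     + treated.var(ddof=1) / n_per_arm)
        # Posterior Pr(effect > 0) under a flat analysis prior.
        if norm_cdf(diff / se) >= certainty:
            successes += 1
    return successes / n_sims
```

One notable difference from power: assurance does not approach 100% as the sample size grows, but plateaus at the prior probability that the effect really is positive, which is one way it better reflects our design-stage uncertainty.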

Both of these are complex, but I would argue it is the assurance calculation that gives us what we want to know most of the time when designing a trial. The assurance analysis also better represents uncertainty, since we specify distributions over all the uncertain parameters rather than choosing exact values. Despite this, the funder of the trial mentioned above, who shall remain nameless, insisted on the results of a power calculation in order to determine whether the trial was worth continuing with, because that’s “what they’re used to.”

The main culprit for this issue is, I believe, communication. A simpler explanation with better presentation may have been easier to understand and accept. That is not to say the funder wasn’t substituting the heuristic ‘80% or more power = good’ for actually thinking about what we could learn from the trial; I believe they were. But until statisticians, economists, and other data-analytic researchers start communicating better, how can we expect others to listen?

*Image credit: Geralt*