Sam Watson’s journal round-up for 6th May 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Channeling Fisher: randomisation tests and the statistical insignificance of seemingly significant experimental results. Quarterly Journal of Economics Published May 2019

Anyone who pays close attention to the statistics literature may feel that a paradigm shift is underway. While papers cautioning against the use of null hypothesis significance testing (NHST) have been published for decades, a number of articles in recent years have highlighted a large number of problems in published studies. For example, only 39% of replications of 100 experiments in social psychology were considered successful. Publication in prestigious journals like Science and Nature is no guarantee of replicability either. A growing number of voices are calling for improvements in study reporting and conduct, changes to the use of p-values, or even their abandonment altogether.

Some of the failures of studies using NHST methods are due to poor experimental design, poorly defined interventions, or “noise-mining”. But even well-designed experiments that are, in theory, correctly analysed are not immune to false inferences in the NHST paradigm. This article looks at the reliability of statistical significance claims in 53 experimental studies published in the journals of the American Economic Association.

Statistical significance is typically determined in experimental economics papers using the econometric techniques widely taught to all economics students. In particular, the t-statistic of a regression coefficient is calculated using either homoskedastic or robust standard errors and compared to a t-distribution with the appropriate degrees of freedom. An alternative way to determine p-values is a permutation or randomisation test, which we have featured in a previous Method of the Month. The permutation test provides the exact distribution of the test statistic under the null hypothesis and is therefore highly reliable. This article compares the results of permutation tests, conducted by the author, with the p-values reported in the 53 selected experimental studies. It finds between 13% and 22% fewer statistically significant results than reported in the papers, and, in tests of multiple treatment effects, 33% to 49% fewer.
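
To make the comparison concrete, here is a minimal sketch of the two approaches on simulated data (the data, effect size, and variable names are purely illustrative, and the regression t-test is reduced to its equivalent two-sample form):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated experiment: binary treatment, modest true effect, noisy outcome
n = 100
treat = rng.integers(0, 2, size=n)
y = 0.3 * treat + rng.normal(scale=2.0, size=n)

# Conventional approach: t-test on the treatment effect
# (equivalent to the t-statistic on the treatment coefficient in OLS)
t_obs, p_ttest = stats.ttest_ind(y[treat == 1], y[treat == 0])

# Randomisation test: re-assign the treatment labels many times and see how
# often the permuted difference in means is as extreme as the observed one
obs_diff = y[treat == 1].mean() - y[treat == 0].mean()
n_perm = 10_000
perm_diffs = np.empty(n_perm)
for i in range(n_perm):
    shuffled = rng.permutation(treat)
    perm_diffs[i] = y[shuffled == 1].mean() - y[shuffled == 0].mean()
p_perm = np.mean(np.abs(perm_diffs) >= np.abs(obs_diff))

print(f"t-test p-value:      {p_ttest:.3f}")
print(f"permutation p-value: {p_perm:.3f}")
```

With a balanced design and well-behaved errors the two p-values will usually agree closely; the paper’s point is that, in published experiments with multiple treatments, interactions, and influential observations, they often do not.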

This discrepancy is explained in part by the leverage of certain observations in each study. Results are often sensitive to the removal of single observations. The more of an impact an observation has on the estimated coefficients, the greater its leverage; in balanced experimental designs, leverage is uniformly distributed across observations. In regressions with multiple treatments and treatment interactions, leverage becomes concentrated in a small number of observations and standard errors become volatile. Needless to say, this article presents yet another piece of compelling evidence that NHST is unreliable and strengthens the case for abandoning statistical significance as the primary inferential tool.
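
As a toy illustration of how leverage concentrates (my own example, not taken from the paper), the diagonal of the hat matrix can be computed for a balanced single-treatment design and for a design with a second, rarer treatment plus an interaction:

```python
import numpy as np

def leverage(X):
    # Diagonal of the hat matrix H = X (X'X)^{-1} X'
    return np.einsum('ij,jk,ik->i', X, np.linalg.inv(X.T @ X), X)

n = 60
t1 = np.repeat([0, 1], n // 2)                    # balanced primary treatment
X_simple = np.column_stack([np.ones(n), t1])

# Add a second treatment given to only 5 units in each arm, plus the interaction
t2 = np.zeros(n)
t2[:5] = 1
t2[n // 2:n // 2 + 5] = 1
X_interact = np.column_stack([np.ones(n), t1, t2, t1 * t2])

print("max leverage, balanced design:   ", leverage(X_simple).max().round(3))    # ~0.033
print("max leverage, interacted design: ", leverage(X_interact).max().round(3))  # 0.2
```

A single observation with leverage of 0.2 has far more pull on the estimated interaction effect, and hence on its standard error, than any observation in the balanced design.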

Effect of a resuscitation strategy targeting peripheral perfusion status vs serum lactate levels on 28-day mortality among patients with septic shock. The ANDROMEDA-SHOCK randomized clinical trial. Journal of the American Medical Association [PubMed] Published 17th February 2019

This article gets a mention in this round-up not for its health or economic content but because it is a very good example of how not to use statistical significance. In previous articles on the blog we’ve discussed the misuse and misinterpretation of p-values, but I generally don’t go as far as advocating their complete abandonment, as a recent mass-signed letter in Nature did. What is crucial is that researchers stop making the mistake of treating statistical insignificance as evidence of no effect, an error that can have pernicious consequences for patient treatment and for the adoption of effective and cost-effective technologies. This article makes exactly that error.

I first saw this ridiculous use of statistical significance when it was tweeted by David Spiegelhalter. The trial (in JAMA, no less) compares two different methods of managing resuscitation in patients with septic shock. The key result is:

By day 28, 74 patients (34.9%) in the peripheral perfusion group and 92 patients (43.4%) in the lactate group had died (hazard ratio, 0.75 [95% CI, 0.55 to 1.02]; P = .06; risk difference, −8.5% [95% CI, −18.2% to 1.2%]).

And the conclusion?

Among patients with septic shock, a resuscitation strategy targeting normalization of capillary refill time, compared with a strategy targeting serum lactate levels, did not reduce all-cause 28-day mortality.


Which is determined solely on the basis of statistical significance. Certainly, it is possible that the result is just chance variation. But the study was conducted because it was believed that there was a difference in survival between these methods, and a 25% relative reduction in the risk of death would be a substantial effect indeed. Rather than take an abductive or Bayesian approach, which would see this result as providing some degree of evidence in support of one treatment, the authors abandon any attempt at thinking and just mechanically follow statistical significance logic. This is a good case study for anyone wanting to discuss the interpretation of p-values, but more significantly (every pun intended), the reliance on statistical significance may well be jeopardising patient lives.
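
As a rough back-of-the-envelope illustration of that point (my own calculation, not the trial’s analysis), the reported interval can be converted into an approximate probability of benefit, assuming a normal distribution for the log hazard ratio and a flat prior:

```python
import numpy as np
from scipy import stats

# Reported result: hazard ratio 0.75, 95% CI 0.55 to 1.02
log_hr = np.log(0.75)
se = (np.log(1.02) - np.log(0.55)) / (2 * 1.96)  # CI width on the log scale

# With a flat prior, the approximate posterior for the log hazard ratio
# is Normal(log_hr, se^2), so P(HR < 1) = P(log HR < 0)
p_benefit = stats.norm.cdf(0, loc=log_hr, scale=se)
print(f"Approximate probability that peripheral perfusion reduces mortality: {p_benefit:.2f}")
```

On this crude reading, the data put the probability of a mortality benefit well above 0.9, which sits very uneasily with a flat conclusion of “did not reduce mortality”.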

Value of information: sensitivity analysis and research design in Bayesian evidence synthesis. Journal of the American Statistical Association Published 30th April 2019

Three things are necessary to make a decision in the decision-theoretic sense. First, a set of possible decisions; second, a set of parameters describing the state of the world; and third, a loss (or utility) function. Given these three things, the decision chosen is the one that minimises losses (or maximises utility) given the state of the world. Of course, the state of the world may not be known for sure. There can be some uncertainty about the parameters and hence about the best course of action, which might lead to losses relative to the decision we would make if we knew everything perfectly. Thus, we can determine the benefits of collecting more information. This is the basis of value of information (VoI) analysis.

We can distinguish between different quantities of interest in VoI analyses. The expected value of perfect information (EVPI) is the difference between the expected loss under the optimal decision made with current information and the expected loss under the decision we would make if we knew all the parameters exactly. The expected value of partial perfect information (EVPPI) is similar, except that it considers only the value of knowing one of the parameters (or a subset of them) exactly. Finally, the expected value of sample information (EVSI) compares the losses under our current decision to those under the decision we would make if we had the information on our parameters provided by a particular study design. If we know the costs of conducting a given study, then we can subtract them from the EVSI to get the expected net benefit of sampling.
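
In symbols (my notation, not the paper’s, and written in terms of net benefit rather than loss), writing theta for all model parameters, phi for the subset of interest, X for the data from a proposed study, and NB(d, theta) for the net benefit of decision d, the three quantities are:

```latex
\begin{align*}
\mathrm{EVPI} &= \mathbb{E}_{\theta}\left[\max_d \mathrm{NB}(d,\theta)\right]
  - \max_d \mathbb{E}_{\theta}\left[\mathrm{NB}(d,\theta)\right] \\
\mathrm{EVPPI}(\phi) &= \mathbb{E}_{\phi}\left[\max_d \mathbb{E}_{\theta\mid\phi}\left[\mathrm{NB}(d,\theta)\right]\right]
  - \max_d \mathbb{E}_{\theta}\left[\mathrm{NB}(d,\theta)\right] \\
\mathrm{EVSI} &= \mathbb{E}_{X}\left[\max_d \mathbb{E}_{\theta\mid X}\left[\mathrm{NB}(d,\theta)\right]\right]
  - \max_d \mathbb{E}_{\theta}\left[\mathrm{NB}(d,\theta)\right]
\end{align*}
```

The expected net benefit of sampling is then the EVSI minus the cost of the proposed study.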

Calculating EVPPI and EVSI is no easy feat though, particularly for more complex models. This article proposes a relatively straightforward and computationally feasible way of estimating these quantities for complex evidence synthesis models. For their example, the authors use a model commonly used to estimate overall HIV prevalence. Since not all HIV cases are known or disclosed, one has to combine different sets of data to get a reliable estimate. For example, it is known how many people attend sexual health clinics and what proportion of those have HIV; it is therefore also known how many people do not attend sexual health clinics, just not how many of them might be HIV positive. There are many epidemiological parameters in this complex model, and the aim of the paper is to demonstrate how the principal sources of uncertainty can be identified in terms of EVPPI and EVSI.


Rita Faria’s journal round-up for 10th December 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Calculating the expected value of sample information using efficient nested Monte Carlo: a tutorial. Value in Health [PubMed] Published 17th July 2018

The expected value of sample information (EVSI) represents the added benefit from collecting new information on specific parameters in future studies. It can be compared to the cost of conducting these future studies to calculate the expected net benefit of sampling. The objective is to help inform which study design is best, given the information it can gather and its costs. The theory and methods to calculate EVSI have been around for some time, but we rarely see them applied in economic evaluations.

In this paper, Anna Heath and Gianluca Baio present a tutorial on how to implement a method they had previously published, which is more computationally efficient than the standard nested Monte Carlo simulations.
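
For context, the baseline they improve upon looks something like the sketch below: a standard nested Monte Carlo EVSI estimator for a deliberately simple one-parameter decision model (the model, priors, and numbers are all illustrative; this is not the authors’ efficient method):

```python
import numpy as np

rng = np.random.default_rng(2)
wtp = 20_000                      # willingness to pay per QALY (illustrative)
n_outer, n_inner = 2_000, 2_000   # nested simulation sizes
study_n = 50                      # sample size of the proposed future study

def net_benefit(effect, inc_cost=1500.0):
    # Incremental net benefit of treatment B vs A in a toy model
    return wtp * effect - inc_cost

# Prior for the incremental effect and assumed individual-level outcome SD
prior_mu, prior_sd, obs_sd = 0.10, 0.05, 0.3

# Expected net benefit of the decision made with current information
prior_draws = rng.normal(prior_mu, prior_sd, 100_000)
enb_current = max(0.0, net_benefit(prior_draws).mean())

# Nested Monte Carlo: the outer loop simulates future datasets, the inner
# loop evaluates the expected net benefit of each option given that dataset
enb_with_data = 0.0
for _ in range(n_outer):
    true_effect = rng.normal(prior_mu, prior_sd)
    data_mean = rng.normal(true_effect, obs_sd / np.sqrt(study_n))
    # Conjugate normal-normal update of the effect given the simulated data
    post_var = 1.0 / (1.0 / prior_sd**2 + study_n / obs_sd**2)
    post_mu = post_var * (prior_mu / prior_sd**2 + study_n * data_mean / obs_sd**2)
    post_draws = rng.normal(post_mu, np.sqrt(post_var), n_inner)
    enb_with_data += max(0.0, net_benefit(post_draws).mean())
enb_with_data /= n_outer

print(f"EVSI per patient: {enb_with_data - enb_current:,.0f}")
```

Even in this trivial example the nested structure requires millions of model evaluations; with a realistic health economic model in the inner loop, the computational burden quickly becomes the binding constraint, which is exactly the problem the authors’ method addresses.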

The authors start by explaining the method in theory, then illustrate it with a simple worked example. I’ll admit that I got a bit lost with the theory, but I found that the example made it much clearer. They demonstrate the method’s performance using a previously published cost-effectiveness model. Additionally, they have very helpfully published a suite of functions to apply this method in practice.

I really enjoyed reading this paper, as it takes the reader step by step through the method. However, I wasn’t sure when this method is applicable, given that the authors note that it requires a large number of probabilistic simulations to perform well and that it is only appropriate when the EVPPI is high. The issue is, how large is large and how high is high? Hopefully, these and other practical questions are on the list for this brilliant research team.

As an applied researcher, I find tutorial papers such as this one incredibly useful for learning new methods and implementing them in practice. Thanks to work like this, we’re getting closer to making value of information analysis a standard element of cost-effectiveness studies.

Future costs in cost-effectiveness analyses: past, present, future. PharmacoEconomics [PubMed] Published 26th November 2018

Linda de Vries, Pieter van Baal and Werner Brouwer help illuminate the debate on future costs with this fascinating paper. Future costs are the costs of resources used by patients during the years of life added by the technology under evaluation. Future costs can be classified as related or unrelated, depending on whether the resources are used for the target disease, and as medical or non-medical, depending on whether the costs fall on the healthcare budget.

The authors very skilfully summarise the theoretical literature on the inclusion of future costs. They conclude that future related and unrelated medical costs should be included and present compelling arguments to do so.

They also discuss empirical research, such as studies that estimate future unrelated costs. The references are a useful starting point for other researchers. For example, I noted that there is a tool to include future unrelated medical costs in the Netherlands and some studies on their estimation in the UK (see, for example, here).

There is a thought-provoking section on ethical concerns. If unrelated costs are included, technologies that increase the life expectancy of people who need a lot of resources will look less cost-effective. The authors suggest that these issues should not be concealed in the analysis, but instead dealt with in the decision-making process.

This is an enjoyable paper that provides an overview of the literature on future costs. I highly recommend it to get up to speed with the arguments and the practical implications. There is clearly a case for including future costs; the question now is whether cost-effectiveness practice follows suit.

Cost-utility analysis using EQ-5D-5L data: does how the utilities are derived matter? Value in Health Published 4th July 2018

We’ve recently become spoilt for choice when it comes to the EQ-5D. To obtain utility values, just in the UK, there are a few options: the 3L tariff, the 5L tariff, and crosswalk tariffs by Ben van Hout and colleagues and Mónica Hernandez and colleagues [PDF]. Which one to choose? And does it make any difference?

Fan Yang and colleagues have done a good job of getting us closer to the answer. They estimated utilities from EQ-5D-5L data using both the 5L value set and the crosswalk tariffs to the EQ-5D-3L, and tested the resulting values in cost-effectiveness models of hemodialysis compared with peritoneal dialysis.

Reassuringly, hemodialysis always had greater utilities than peritoneal dialysis. However, the magnitude of the difference varied with the approach. Therefore, the choice between the EQ-5D-5L value set and the crosswalk tariff to the EQ-5D-3L can influence the cost-effectiveness results. These results are in line with earlier work by Mónica Hernandez and colleagues, who compared the EQ-5D-3L with the EQ-5D-5L.

The message is clear: both the type of EQ-5D questionnaire and the tariff used to value it make a difference to the cost-effectiveness results. This can have huge policy implications, as decisions by HTA agencies, such as NICE, depend on these results.

Which EQ-5D version to use in a new primary research study remains an open question. In the meantime, NICE recommends the use of the EQ-5D-3L or, if the EQ-5D-5L was collected, Ben van Hout and colleagues’ mapping function to the EQ-5D-3L. Hopefully, a definitive answer won’t be long in coming.


Thesis Thursday: Anna Heath

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Anna Heath, who has a PhD from University College London. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Bayesian computations for value of information measures using Gaussian processes, INLA and Moment Matching
Supervisors
Gianluca Baio, Ioanna Manolopoulou
Repository link
http://discovery.ucl.ac.uk/id/eprint/10050229

Why are new methods needed for value of information analysis?

Value of Information (VoI) has been around for a really long time – it was first mentioned in a book published in 1959! More recently, it has been suggested that VoI methods can be used in health economics to direct and design future research strategies. There are several different concepts in VoI analysis and each of these can be used to answer different questions. The VoI measure with the most potential calculates the economic benefit of collecting additional data to inform a health economic model (known as the EVSI). The EVSI can be compared with the cost of collecting data and allows us to make sure that our clinical research is “cost-effective”.

The problem is that, mathematically, VoI measures are almost impossible to calculate, so we have to use simulation. Traditionally, these simulation methods have been very slow (in my PhD, one example took over 300 days to compute 10 VoI measures) so we need simulation methods that speed up the computation significantly before VoI can be used for decisions about research design and funding.

Do current EVPPI and EVSI estimation methods give different results?

For most examples, the current estimation methods give similar results but the computational time to obtain these results differs significantly. Since starting my PhD, different estimation methods for the EVPPI and the EVSI have been published. The differences between these methods lie in their assumptions and ease of use. The results seem to be pretty stable for all the different methods, which is good!

The EVPPI determines which model parameters have the biggest impact on the cost-effectiveness of the different treatments. This is used to direct possible avenues of future research, i.e. we should focus on gaining more information about parameters with a large impact on cost-effectiveness. The EVPPI is calculated based only on simulations of the model parameters so the number of methods for EVPPI calculation is quite small. To calculate the EVSI, you need to consider how to collect additional data, through a clinical trial, observational study etc, so there is a wider range of available methods.

How does the Gaussian process you develop improve EVPPI estimation?

Before my PhD started, Mark Strong and colleagues at the University of Sheffield developed a method to calculate the EVPPI based on flexible regression. This method is accurate but when you want to calculate the value of a group of model parameters, the computational time increases significantly. A Gaussian process is a method for very flexible regression but could be slow when trying to calculate the EVPPI for a group of parameters. The method we developed adapted the Gaussian process to speed up computation when calculating the EVPPI for a group of parameters. The size of the group of parameters does not really make a difference to the computation for this method, so we allowed for fast EVPPI computation in nearly all practical examples!

What is moment matching, and how can it be used to estimate EVSI?

Moments define the shape of a distribution – the first moment is the mean, the second the variance, the third is the skewness and so on. To estimate the EVSI, we need to estimate a distribution with some specific properties. We can show that this distribution is similar to the distribution of the net benefit from a probabilistic sensitivity analysis. Moment matching is a fancy way of saying that we estimate the EVSI by changing the distribution of the net benefit so it has the same variance as the distribution needed to estimate the EVSI. This significantly decreases the computation time for the EVSI because traditionally we would estimate the distribution for the EVSI using a large number of simulations (I’ve used 10 billion simulations for one estimate).

The really cool thing about this method is that we extended it to use the EVSI to find the trial design and sample size that gives the maximum value for money from research investment resources. The computation time for this analysis was around 5 minutes whereas the traditional method took over 300 days!

Do jobbing health economists need to be experts in value of information analysis to use your BCEA and EVSI software?

The BCEA software uses the costs and effects calculated from a probabilistic health economic model alongside the probabilistic analysis for the model parameters to give standard graphics and summaries. It is based in R and can be used to calculate the EVPPI without being an expert in VoI methods and analysis. All you need is to decide which model parameters you are interested in valuing. We’ve put together a Web interface, BCEAweb, which allows you to use BCEA without using R.

The EVSI software requires a model that incorporates how the data from the future study will be analysed. This can be complicated to design although I’m currently putting together a library of standard examples. Once you’ve designed the study, the software calculates the EVSI without any input from the user, so you don’t need to be an expert in the calculation methods. The software also provides graphics to display the EVSI results and includes text to help interpret the graphical results. An example of the graphical output can be seen here.