Rita Faria’s journal round-up for 10th December 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Calculating the expected value of sample information using efficient nested Monte Carlo: a tutorial. Value in Health [PubMed] Published 17th July 2018

The expected value of sample information (EVSI) represents the added benefit from collecting new information on specific parameters in future studies. It can be compared with the cost of conducting those future studies to calculate the expected net benefit of sampling. The objective is to help identify which study design is best, given the information it can gather and its costs. The theory and methods to calculate EVSI have been around for some time, but we rarely see them in applied economic evaluations.

In this paper, Anna Heath and Gianluca Baio present a tutorial on how to implement a method they previously published, which is more computationally efficient than the standard nested Monte Carlo simulation.
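For context, here is a minimal sketch of the standard nested Monte Carlo approach that the paper improves on, using an invented one-parameter, two-strategy model with a conjugate prior so the inner loop stays cheap; none of this is the authors’ code.

```r
# Toy sketch of standard (nested) Monte Carlo EVSI.
# Everything here (model, prior, sample sizes) is invented for illustration.
set.seed(1)

wtp <- 20000                                        # willingness to pay per QALY
nb  <- function(theta) cbind(0, wtp * theta - 5000) # net benefit: comparator vs new

n_outer <- 1000   # outer loop: candidate future datasets
n_inner <- 1000   # inner loop: posterior simulations per dataset

# Prior for the effectiveness parameter theta
theta_prior <- rnorm(n_outer, mean = 0.3, sd = 0.2)

evsi_outer <- numeric(n_outer)
for (i in 1:n_outer) {
  # Simulate a future study of 50 patients given a theta drawn from the prior
  x <- rnorm(50, mean = theta_prior[i], sd = 1)
  # Inner loop: sample from the posterior of theta given the simulated data
  # (normal-normal conjugacy: prior N(0.3, 0.2^2), likelihood sd = 1)
  post_var   <- 1 / (1 / 0.2^2 + 50 / 1^2)
  post_mean  <- post_var * (0.3 / 0.2^2 + sum(x) / 1^2)
  theta_post <- rnorm(n_inner, post_mean, sqrt(post_var))
  # Value of the optimal decision after seeing the data
  evsi_outer[i] <- max(colMeans(nb(theta_post)))
}

evsi <- mean(evsi_outer) - max(colMeans(nb(theta_prior)))
evsi
```

In a real model, each inner iteration would mean re-running the probabilistic analysis rather than evaluating a formula, which is why the nested approach gets so expensive and why more efficient methods are welcome.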

The authors start by explaining the method in theory, then illustrate it with a simple worked example. I’ll admit that I got a bit lost with the theory, but I found that the example made it much clearer. They demonstrate the method’s performance using a previously published cost-effectiveness model. Additionally, they have very helpfully published a suite of functions to apply this method in practice.

I really enjoyed reading this paper, as it takes the reader step-by-step through the method. However, I wasn’t sure when this method is applicable, given that the authors note that it requires a large number of probabilistic simulations to perform well and is only appropriate when the EVPPI is high. The issue is, how large is large and how high is high? Hopefully, these and other practical questions are on the list for this brilliant research team.

As an applied researcher, I find tutorial papers such as this one incredibly useful for learning new methods and implementing them in practice. Thanks to work such as this, we’re getting close to making value of information analysis a standard element of cost-effectiveness studies.

Future costs in cost-effectiveness analyses: past, present, future. PharmacoEconomics [PubMed] Published 26th November 2018

Linda de Vries, Pieter van Baal and Werner Brouwer help illuminate the debate on future costs with this fascinating paper. Future costs are the costs of resources used by patients during the years of life added by the technology under evaluation. They can be classified as related or unrelated, depending on whether the resources are used to treat the target disease, and as medical or non-medical, depending on whether the costs fall on the healthcare budget.

The authors very skilfully summarise the theoretical literature on the inclusion of future costs. They conclude that future related and unrelated medical costs should be included and present compelling arguments to do so.
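To see why this matters in practice, here is a toy calculation with invented numbers: a treatment that extends survival also commits the healthcare budget to the unrelated care consumed during the added years.

```r
# Toy illustration (all numbers invented) of how future unrelated
# medical costs change an ICER.
inc_qalys         <- 0.5     # incremental QALYs from the new treatment
inc_related_costs <- 10000   # incremental costs of the treatment itself
added_life_years  <- 1.0     # extra survival generated by the treatment
unrelated_cost_py <- 3000    # unrelated medical spend per added life-year

icer_excluding <- inc_related_costs / inc_qalys
icer_including <- (inc_related_costs +
                   added_life_years * unrelated_cost_py) / inc_qalys

icer_excluding   # 20,000 per QALY
icer_including   # 26,000 per QALY
```

Across a portfolio of technologies, this systematically penalises life-extending treatments relative to quality-of-life-improving ones, which links to the ethical concerns discussed below.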

They also discuss empirical research, such as studies that estimate future unrelated costs. The references are a useful starting point for other researchers. For example, I noted that there is a tool to include future unrelated medical costs in the Netherlands and some studies on their estimation in the UK (see, for example, here).

There is a thought-provoking section on ethical concerns. If unrelated costs are included, technologies that increase the life expectancy of people who need a lot of resources will look less cost-effective. The authors suggest that these issues should not be concealed in the analysis, but instead dealt with in the decision-making process.

This is an enjoyable paper that provides an overview of the literature on future costs. I highly recommend it to get up to speed with the arguments and the practical implications. There is clearly a case for including future costs; the question now is whether cost-effectiveness practice follows suit.

Cost-utility analysis using EQ-5D-5L data: does how the utilities are derived matter? Value in Health Published 4th July 2018

We’ve recently become spoilt for choice when it comes to the EQ-5D. To obtain utility values in the UK alone, there are a few options: the 3L tariff, the 5L tariff, and the crosswalk tariffs by Ben van Hout and colleagues and by Mónica Hernandez and colleagues [PDF]. Which one to choose? And does it make any difference?

Fan Yang and colleagues have done a good job in getting us closer to the answer. They estimated utilities from EQ-5D-5L data using both the 5L value set and the crosswalk tariffs to the EQ-5D-3L, and tested the values in cost-effectiveness models of hemodialysis compared with peritoneal dialysis.

Reassuringly, hemodialysis always had greater utilities than peritoneal dialysis. However, the magnitude of the difference varied with the approach, so the choice between the EQ-5D-5L value set and the crosswalk tariff to the EQ-5D-3L can influence the cost-effectiveness results. These results are in line with earlier work by Mónica Hernandez and colleagues, who compared the EQ-5D-3L with the EQ-5D-5L.
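As a hedged sketch of the mechanism (utility values invented, not taken from any real tariff): even when the ordering of treatments is preserved, a different tariff changes the size of the QALY gain and therefore the ICER.

```r
# Invented utilities for the same two health profiles under two tariffs,
# showing how tariff choice propagates to the ICER.
u_5L        <- c(haemo = 0.80, peritoneal = 0.70)  # 5L value set (invented)
u_crosswalk <- c(haemo = 0.72, peritoneal = 0.60)  # crosswalk to 3L (invented)

inc_costs <- 5000   # incremental cost of hemodialysis (invented)
years     <- 2      # time horizon, undiscounted for simplicity

icer <- function(u) unname(inc_costs / ((u["haemo"] - u["peritoneal"]) * years))
icer(u_5L)          # 25,000 per QALY
icer(u_crosswalk)   # ~20,833 per QALY
```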

The message is clear: both the type of EQ-5D questionnaire and the EQ-5D tariff make a difference to the cost-effectiveness results. This can have huge policy implications, as decisions by HTA agencies, such as NICE, depend on these results.

Which EQ-5D to use in a new primary research study remains an open question. In the meantime, NICE recommends using the EQ-5D-3L or, if EQ-5D-5L data were collected, Ben van Hout and colleagues’ mapping function to the EQ-5D-3L. Hopefully, a definitive answer won’t be long in coming.

Chris Sampson’s journal round-up for 19th November 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Valuation of health states considered to be worse than death—an analysis of composite time trade-off data from 5 EQ-5D-5L valuation studies. Value in Health Published 12th November 2018

I have a problem with the idea of health states being ‘worse than dead’, and I’ve banged on about it on this blog. Happily, this new article provides an opportunity for me to continue my campaign. Health state valuation methods estimate the strength of a person’s preference for being in one health state rather than another. Positive values are easy to understand; 1.0 is twice as good as 0.5. But what about the negative values? Is -1.0 twice as bad as -0.5? How much worse than being dead is that? The purpose of this study is to evaluate whether or not negative EQ-5D-5L values meaningfully discriminate between different health states.

The study uses data from EQ-5D-5L valuation studies conducted in Singapore, the Netherlands, China, Thailand, and Canada. Altogether, more than 5000 people provided valuations of 10 states each. As a simple measure of severity, the authors summed the number of steps from full health in all domains, giving a value from 0 (11111) to 20 (55555). We’d expect this measure of severity of states to correlate strongly with the mean utility values derived from the composite time trade-off (TTO) exercise.
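To make the severity measure concrete, here is a short sketch of it (my own illustration, not the authors’ code): the score is simply the sum of each dimension’s level minus one.

```r
# The paper's simple severity measure: total steps from full health
# across the five dimensions, so 11111 -> 0 and 55555 -> 20.
severity <- function(state) {
  levels <- as.integer(strsplit(as.character(state), "")[[1]])
  sum(levels - 1)
}

severity(11111)  # 0
severity(21345)  # 1 + 0 + 2 + 3 + 4 = 10
severity(55555)  # 20
```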

Taking Singapore as an example, the mean of positive values (states better than dead) decreased from 0.89 to 0.21 with increasing severity, which is reassuring. The mean of negative values, on the other hand, ranged from -0.98 to -0.89. Negative values were clustered between -0.5 and -1.0. Results were similar across the other countries. In all except Thailand, observed negative values were indistinguishable from random noise. There was no decreasing trend in mean utility values as severity increased for states worse than dead. A linear mixed model with participant-specific intercepts and an ANOVA model confirmed the findings.

What this means is that we can’t say much about states worse than dead except that they are worse than dead. How much worse doesn’t relate to severity, which is worrying if we’re using these values in trade-offs against states better than dead. Mostly, the authors frame this lack of discriminative ability as a practical problem, rather than anything more fundamental. The discussion section provides some interesting speculation, but my favourite part of the paper is an analogy, which I’ll be quoting in future: “it might be worse to be lost at sea in deep waters than in a pond, but not in any way that truly matters”. Dead is dead is dead.

Determining value in health technology assessment: stay the course or tack away? PharmacoEconomics [PubMed] Published 9th November 2018

The cost-per-QALY approach to value in health care is no stranger to assault. The majority of criticisms are ill-founded special pleading, but, sometimes, reasonable tweaks and alternatives have been proposed. The aim of this paper was to bring together a supergroup of health economists to review and discuss these reasonable alternatives. Specifically, the questions they sought to address were: i) what should health technology assessment achieve, and ii) what should be the approach to value-based pricing?

The paper provides an unstructured overview of a selection of possible adjustments or alternatives to the cost-per-QALY method. We’re very briefly introduced to QALY weighting, efficiency frontiers, and multi-criteria decision analysis. The authors don’t tell us why we ought (or ought not) to adopt these alternatives. I was hoping that the paper would provide tentative answers to the normative questions posed, but it doesn’t do that. It doesn’t even outline the thought processes required to answer them.

The purpose of this paper seems to be to argue that alternative approaches aren’t sufficiently developed to replace the cost-per-QALY approach. But it’s hardly a strong defence. I’m a big fan of the cost-per-QALY as a necessary (if not sufficient) part of decision making in health care, and I agree with the authors that the alternatives are lacking in support. But the lack of conviction in this paper scares me. It’s tempting to make a comparison between the EU and the QALY.

How can we evaluate the cost-effectiveness of health system strengthening? A typology and illustrations. Social Science & Medicine [PubMed] Published 3rd November 2018

Health care is more than the sum of its parts. This is particularly evident in low- and middle-income countries that might lack strong health systems and which therefore can’t benefit from a new intervention in the way a strong system could. Thus, there is value in health system strengthening. But, as the authors of this paper point out, this value can be difficult to identify. The purpose of this study is to provide new methods to model the impact of health system strengthening in order to support investment decisions in this context.

The authors introduce standard cost-effectiveness analysis and economies of scope as relevant pieces of the puzzle. In essence, this paper is trying to marry the two. An intervention is more likely to be cost-effective if it helps to provide economies of scope, either by making use of an underused platform or providing a new platform that would improve the cost-effectiveness of other interventions. The authors provide a typology with three types of health system strengthening: i) investing in platform efficiency, ii) investing in platform capacity, and iii) investing in new platforms. Examples are provided for each. Simple mathematical approaches to evaluating these are described, using scaling factors and disaggregated cost and outcome constraints. Numerical demonstrations show how these approaches can reveal differences in cost-effectiveness that arise through changes in technical efficiency or the opportunity cost linked to health system strengthening.
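To make the scaling-factor idea concrete, here is a hedged numerical sketch with invented numbers: strengthening a delivery platform multiplies the health gains of the interventions that run on it, so the joint investment has to be evaluated together.

```r
# Invented numbers illustrating the scaling-factor idea.
intervention_dalys <- 1000     # DALYs averted at current platform quality
intervention_cost  <- 200000
platform_cost      <- 150000
scaling_factor     <- 1.3      # strengthened platform boosts delivery by 30%

cost_per_daly_alone <- intervention_cost / intervention_dalys
cost_per_daly_joint <- (intervention_cost + platform_cost) /
                       (intervention_dalys * scaling_factor)

cost_per_daly_alone  # 200 per DALY averted
cost_per_daly_joint  # ~269 per DALY averted for this intervention alone,
                     # before counting spillovers to other interventions
                     # delivered on the strengthened platform
```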

This paper is written with international development investment decisions in mind, and in particular the challenge of investments that can mostly be characterised as health system strengthening. But it’s easy to see how many – perhaps all – health services are interdependent. If anything, the broader impact of new interventions on health systems should be considered as standard. The methods described in this paper provide a useful framework to tackle these issues, with food for thought for anybody engaged in cost-effectiveness analysis.

Thesis Thursday: Anna Heath

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Anna Heath, who has a PhD from University College London. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Bayesian computations for value of information measures using Gaussian processes, INLA and Moment Matching
Supervisors
Gianluca Baio, Ioanna Manolopoulou
Repository link
http://discovery.ucl.ac.uk/id/eprint/10050229

Why are new methods needed for value of information analysis?

Value of Information (VoI) has been around for a really long time – it was first mentioned in a book published in 1959! More recently, it has been suggested that VoI methods can be used in health economics to direct and design future research strategies. There are several different concepts in VoI analysis, and each of these can be used to answer different questions. The VoI measure with the most potential calculates the economic benefit of collecting additional data to inform a health economic model, known as the EVSI. The EVSI can be compared with the cost of collecting the data, allowing us to make sure that our clinical research is “cost-effective”.

The problem is that VoI measures are almost impossible to calculate analytically, so we have to use simulation. Traditionally, these simulation methods have been very slow (in my PhD, one example took over 300 days to compute 10 VoI measures), so we need simulation methods that speed up the computation significantly before VoI can be used for decisions about research design and funding.

Do current EVPPI and EVSI estimation methods give different results?

For most examples, the current estimation methods give similar results, but the computational time to obtain these results differs significantly. Since I started my PhD, different estimation methods for the EVPPI and the EVSI have been published. The differences between these methods lie in their assumptions and their ease of use. The results seem to be pretty stable across all the different methods, which is good!

The EVPPI determines which model parameters have the biggest impact on the cost-effectiveness of the different treatments. This is used to direct possible avenues of future research, i.e. we should focus on gaining more information about parameters with a large impact on cost-effectiveness. The EVPPI is calculated based only on simulations of the model parameters, so the number of methods for EVPPI calculation is quite small. To calculate the EVSI, you also need to consider how the additional data would be collected (through a clinical trial, an observational study, etc.), so there is a wider range of available methods.

How does the Gaussian process you develop improve EVPPI estimation?

Before my PhD started, Mark Strong and colleagues at the University of Sheffield developed a method to calculate the EVPPI based on flexible regression. This method is accurate, but when you want to calculate the value of a group of model parameters, the computational time increases significantly. A Gaussian process is a very flexible regression method, but it can be slow when calculating the EVPPI for a group of parameters. The method we developed adapted the Gaussian process to speed up this computation. The size of the group of parameters makes little difference to the computation time for this method, so it allows for fast EVPPI computation in nearly all practical examples!
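To make this concrete, here is a minimal sketch of regression-based EVPPI in the spirit of that approach, using a GAM as the flexible regression on a toy two-strategy model; everything here is invented for illustration.

```r
# Regression-based EVPPI sketch (toy model, invented numbers).
library(mgcv)
set.seed(1)

n      <- 5000
theta1 <- rnorm(n, 0.3, 0.1)   # parameter of interest
theta2 <- rnorm(n, 0.1, 0.1)   # remaining uncertainty
wtp    <- 20000

nb <- cbind(0, wtp * (theta1 + theta2) - 7000)  # net benefit per strategy

# Regressing each strategy's net benefit on the parameter of interest
# estimates E[NB | theta1]; strategy 1 is constant, so its fit is 0
fitted_nb <- cbind(0, fitted(gam(nb[, 2] ~ s(theta1))))

evppi <- mean(apply(fitted_nb, 1, max)) - max(colMeans(nb))
evppi
```

For a group of parameters, s(theta1) becomes a multivariate smooth, which is exactly where standard smoothers slow down and where the Gaussian process adaptation pays off.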

What is moment matching, and how can it be used to estimate EVSI?

Moments define the shape of a distribution: the first moment is the mean, the second central moment is the variance, the third relates to skewness, and so on. To estimate the EVSI, we need to estimate a distribution with some specific properties. We can show that this distribution is similar to the distribution of the net benefit from a probabilistic sensitivity analysis. Moment matching is a fancy way of saying that we estimate the EVSI by rescaling the distribution of the net benefit so that it has the same variance as the distribution needed to estimate the EVSI. This significantly decreases the computation time because, traditionally, we would estimate the distribution for the EVSI using a very large number of simulations (I’ve used 10 billion simulations for one estimate).
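As a heavily simplified sketch of the variance-matching step (my own toy illustration, not the thesis code): shift and rescale the probabilistic net-benefit samples so their variance matches a target variance estimated from a small number of nested simulations.

```r
# Conceptual sketch of moment matching for EVSI (invented numbers).
set.seed(1)
inb        <- rnorm(10000, mean = 500, sd = 2000)  # PSA incremental net benefit
var_target <- 1200^2  # in the real method, estimated from a few nested sims

# Rescale inb about its mean so its variance matches var_target
inb_rescaled <- mean(inb) + (inb - mean(inb)) * sqrt(var_target / var(inb))

# Two-strategy EVSI from the rescaled distribution
evsi <- mean(pmax(inb_rescaled, 0)) - max(0, mean(inb_rescaled))
evsi
```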

The really cool thing about this method is that we extended it to use the EVSI to find the trial design and sample size that give the maximum value for money from research investment. The computation time for this analysis was around 5 minutes, whereas the traditional method took over 300 days!

Do jobbing health economists need to be experts in value of information analysis to use your BCEA and EVSI software?

The BCEA software takes the costs and effects calculated from a probabilistic health economic model, alongside the probabilistic simulations of the model parameters, and produces standard graphics and summaries. It is written in R and can be used to calculate the EVPPI without being an expert in VoI methods and analysis; all you need to do is decide which model parameters you are interested in valuing. We’ve also put together a web interface, BCEAweb, which allows you to use BCEA without using R.
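For readers who haven’t used it, here is a minimal sketch of what a BCEA session can look like, with invented inputs; the function names reflect my reading of the package, and the exact evppi() signature varies across versions, so do check the documentation.

```r
# A minimal BCEA session with invented PSA output (1000 simulations,
# two interventions); check argument names against your BCEA version.
library(BCEA)
set.seed(1)

n_sim <- 1000
e <- cbind(rnorm(n_sim, 5.0, 0.5), rnorm(n_sim, 5.3, 0.5))    # QALYs
c <- cbind(rnorm(n_sim, 8000, 500), rnorm(n_sim, 11000, 700)) # costs

m <- bcea(e, c, ref = 2, interventions = c("standard", "new"))

summary(m)      # cost-effectiveness summaries
ceac.plot(m)    # cost-effectiveness acceptability curve
evi.plot(m)     # expected value of perfect information

# EVPPI for chosen parameters, given a matrix of parameter simulations;
# commented out because the signature differs between BCEA versions:
# ev <- evppi(parameter = c("theta1", "theta2"), input = psa_params, he = m)
```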

The EVSI software requires a model that incorporates how the data from the future study will be analysed. This can be complicated to design, although I’m currently putting together a library of standard examples. Once you’ve designed the study, the software calculates the EVSI without any further input from the user, so you don’t need to be an expert in the calculation methods. The software also provides graphics to display the EVSI results, with accompanying text to help interpret them. An example of the graphical output can be seen here.