Rita Faria’s journal round-up for 28th January 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Appraising the value of evidence generation activities: an HIV modelling study. BMJ Global Health [PubMed] Published 7th December 2018

How much should we spend on implementing our health care strategy versus getting more information to devise a better strategy? Should we devolve budgets to regions or administer the budget centrally? These are difficult questions and this new paper by Beth Woods et al has a brilliant stab at answering them.

The paper looks at the HIV prevention and treatment policies in Zambia. It starts by finding the most cost-effective strategy and the corresponding budget in each region, given what is currently known about the prevalence of the infection, the effectiveness of interventions, etc. The idea is that the regions receive a cost-effective budget to implement a cost-effective strategy. The issue is that the cost-effective strategy and budget are devised according to what we currently know. In practice, regions might face a situation on the ground which is different from what was expected. Regions might not have enough budget to implement the strategy or might have some left over.

What if we spend some of the budget to get more information to make a better decision? This paper considers the value of perfect information given the costs of research. Depending on the size of the budget and the cost of research, it may be worthwhile to divert some funds to get more information. But what if we had more flexibility in the budgetary policy? This paper tests two more budgetary options: a national hard budget with the flexibility to transfer funds from underspending to overspending regions, and a regional hard budget with a contingency fund.

The results are remarkable. The best budgetary policy is to have a national budget with the flexibility to reallocate funds across regions. This is a fascinating paper, with implications not only for prioritisation and budget setting in LMICs but also for high-income countries. For example, the 2012 Health and Social Care Act broke down primary care trusts (PCTs) into smaller clinical commissioning groups (CCGs) and gave them hard budgets. Some CCGs went into deficit, and there are reports that some interventions have been cut back as a result. There are probably many reasons for the deficit, but this paper shows that hard regional budgets clearly have negative consequences.

Health economics methods for public health resource allocation: a qualitative interview study of decision makers from an English local authority. Health Economics, Policy and Law [PubMed] Published 11th January 2019

Our first paper looked at how to use cost-effectiveness to allocate resources between regions and across health care services and research. Emma Frew and Katie Breheny look at how decisions are actually made in practice, this time in a local authority in England. Another change brought in by the 2012 Health and Social Care Act was to move public health responsibilities from the NHS to local authorities. Local authorities are now given a ring-fenced budget to implement cost-effective interventions that best match their needs. How do they make decisions? Thanks to this paper, we’re about to find out.

This paper is an enjoyable read and quite an eye-opener. It was startling that health economics evidence was not much used in practice. But the barriers that were cited are not insurmountable. And the interviewees’ suggestions were really useful: economic evaluations should consider the local context, to get a fair picture of the impact of the intervention on services and on the population, and should move beyond the trial into the real world. Equity was mentioned too, as well as broadening the outcomes beyond health. Fortunately, the health economics community is working on many of these issues.

Lastly, there was a clear message to make economic evidence accessible to lay audiences. This is a topic really close to my heart, and something I’d like to help improve. We have to make our work easy to understand and use. Otherwise, it may stay locked away in papers rather than do what we intended it for, which is, at least in my view, to help inform decisions and to improve people’s lives.

I found this paper reassuring in that there is clearly a need for economic evidence and a desire to use it. Yes, there are some teething issues, but we’re working in the right direction. In sum, the future for health economics is bright!

Survival extrapolation in cancer immunotherapy: a validation-based case study. Value in Health Published 13th December 2018

Often, the cost-effectiveness of cancer drugs hinges on the method used to extrapolate overall survival. This is because many cancer drugs receive their marketing authorisation before most patients in the trial have died. Extrapolation is tested extensively in the sensitivity analysis, and this is the subject of many discussions in NICE appraisal committees. Ultimately, at the point of making the decision, the correct method to extrapolate is a known unknown. Only in hindsight can we know for sure what the best choice was.

Ash Bullement and colleagues take advantage of hindsight to identify the best extrapolation method for a clinical trial of an immunotherapy drug. Survival after treatment with immunotherapy drugs is more difficult to predict because some patients can survive for a very long time, while others have much poorer outcomes. They fitted survival models to the 3-year data cut, which was available at the time of the NICE technology appraisal. Then they compared their predictions to the observed survival in the 5-year data cut and to long-term survival trends from registry data. They found that a piecewise model and a mixture-cure model had the best predictions at 5 years.
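The mixture-cure idea can be sketched in a few lines of code: a fraction of patients is assumed to be long-term survivors, so the survival curve flattens out at that "cure fraction" rather than dropping to zero. The numbers below are invented for illustration and are not taken from the paper, where models were fitted to trial data:

```python
import numpy as np

def mixture_cure_survival(t, cure_fraction, rate):
    """Mixture-cure survival: a 'cured' fraction never experiences the
    event; the rest follow (here) an exponential survival distribution."""
    return cure_fraction + (1 - cure_fraction) * np.exp(-rate * t)

# Hypothetical values: 20% long-term survivors, and an exponential
# hazard of 0.5 per year for the non-cured group.
t = np.array([0.0, 1.0, 3.0, 5.0])
s = mixture_cure_survival(t, cure_fraction=0.2, rate=0.5)
# Survival declines towards, but never below, the cure fraction of 0.2.
```

This plateau is exactly why such models can extrapolate immunotherapy trials well: the tail of the curve is governed by the cure fraction rather than by the hazard observed during follow-up.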

This is a relevant paper for those of us who work in the technology appraisal world. I have to admit that I can be sceptical of piecewise and mixture-cure models, but they definitely have a role in our toolbox for survival extrapolation. Ideally, we’d have a study like this for all the technology appraisals hinging on survival extrapolation, so that we can take learnings across cancers and classes of drugs. With time, we would get to know more about what works best for which condition or drug. Ultimately, we may be able to get to a stage where we can approach the extrapolation with less inherent uncertainty.

Credits

Rita Faria’s journal round-up for 10th December 2018


Calculating the expected value of sample information using efficient nested Monte Carlo: a tutorial. Value in Health [PubMed] Published 17th July 2018

The expected value of sample information (EVSI) represents the added benefit from collecting new information on specific parameters in future studies. It can be compared to the cost of conducting these future studies to calculate the expected net benefit of sampling. The objective is to help inform which study design is best, given the information it can gather and its costs. The theory and methods to calculate EVSI have been around for some time, but we rarely see it in applied economic evaluations.

In this paper, Anna Heath and Gianluca Baio present a tutorial about how to implement a method they had previously published on, which is more computationally efficient than the standard nested Monte Carlo simulations.
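As a point of reference, the standard nested Monte Carlo approach that the authors improve upon can be sketched on a toy problem. Everything below (the decision model, the prior, and the proposed study design) is invented for illustration, and a conjugate normal model is used so the posterior update is available in closed form; Heath and Baio's contribution is precisely to avoid the expensive inner loop:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy decision problem (invented for illustration): the incremental
# net benefit of a new treatment is theta, with prior theta ~ N(0, 1);
# the comparator's net benefit is fixed at 0.
mu0, sd0 = 0.0, 1.0
n, sd_obs = 10, 2.0  # proposed study: n patients, known outcome sd

def posterior(xbar):
    """Conjugate normal-normal update for the mean, given a sample mean."""
    prec = 1 / sd0**2 + n / sd_obs**2
    mean = (mu0 / sd0**2 + n * xbar / sd_obs**2) / prec
    return mean, prec**-0.5

outer, inner = 2000, 500
best_given_data = []
for _ in range(outer):
    theta = rng.normal(mu0, sd0)                 # a plausible 'true' value
    xbar = rng.normal(theta, sd_obs / n**0.5)    # simulate the study result
    m, s = posterior(xbar)
    post = rng.normal(m, s, inner)               # inner posterior simulation
    best_given_data.append(max(post.mean(), 0))  # optimal choice post-study

# EVSI: expected value of deciding after the study, minus the value
# of the best decision under current information.
evsi = np.mean(best_given_data) - max(mu0, 0)
```

Even on this trivial model the nested structure requires outer × inner simulations; in a realistic health economic model, each of those would be a full model run, which is what makes the standard approach so slow.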

The authors start by explaining the method in theory, then illustrate it with a simple worked example. I’ll admit that I got a bit lost with the theory, but I found that the example made it much clearer. They demonstrate the method’s performance using a previously published cost-effectiveness model. Additionally, they have very helpfully published a suite of functions to apply this method in practice.

I really enjoyed reading this paper, as it takes the reader step-by-step through the method. However, I wasn’t sure about when this method is applicable, given that the authors note that it requires a large number of probabilistic simulations to perform well, and it is only appropriate when EVPPI is high. The issue is, how large is large and how high is high? Hopefully, these and other practical questions are on the list for this brilliant research team.

As an applied researcher, I find tutorial papers such as this one incredibly useful to learn new methods and help implement them in practice. Thanks to work such as this one and others, we’re getting close to making value of information analysis a standard element of cost-effectiveness studies.

Future costs in cost-effectiveness analyses: past, present, future. PharmacoEconomics [PubMed] Published 26th November 2018

Linda de Vries, Pieter van Baal and Werner Brouwer help illuminate the debate on future costs with this fascinating paper. Future costs are the costs of resources used by patients during the years of life added by the technology under evaluation. Future costs can be classified as related or unrelated, depending on whether the resources are used for the target disease. They can also be classified as medical or non-medical, depending on whether the costs fall on the healthcare budget.

The authors very skilfully summarise the theoretical literature on the inclusion of future costs. They conclude that future related and unrelated medical costs should be included and present compelling arguments to do so.

They also discuss empirical research, such as studies that estimate future unrelated costs. The references are a useful starting point for other researchers. For example, I noted that there is a tool to include future unrelated medical costs in the Netherlands and some studies on their estimation in the UK (see, for example, here).

There is a thought-provoking section on ethical concerns. If unrelated costs are included, technologies that increase the life expectancy of people who need a lot of resources will look less cost-effective. The authors suggest that these issues should not be concealed in the analysis, but instead dealt with in the decision-making process.

This is an enjoyable paper that provides an overview of the literature on future costs. I highly recommend it to get up to speed with the arguments and the practical implications. There is clearly a case for including future costs, and the question now is whether cost-effectiveness practice will follow suit.

Cost-utility analysis using EQ-5D-5L data: does how the utilities are derived matter? Value in Health Published 4th July 2018

We’ve recently become spoilt for choice when it comes to the EQ-5D. To obtain utility values, just in the UK, there are a few options: the 3L tariff, the 5L tariff, and crosswalk tariffs by Ben van Hout and colleagues and Mónica Hernandez and colleagues [PDF]. Which one to choose? And does it make any difference?

Fan Yang and colleagues have done a good job in getting us closer to the answer. They estimated utilities obtained from EQ-5D-5L data using the 5L value set and crosswalk tariffs to EQ-5D-3L and tested the values in cost-effectiveness models of hemodialysis compared to peritoneal dialysis.

Reassuringly, hemodialysis always had greater utilities than peritoneal dialysis. However, the magnitude of the difference varied with the approach. Therefore, using either the EQ-5D-5L value set or the crosswalk tariff to the EQ-5D-3L can influence the cost-effectiveness results. These results are in line with earlier work by Mónica Hernandez and colleagues, who compared the EQ-5D-3L with the EQ-5D-5L.

The message is clear: both the type of EQ-5D questionnaire and the EQ-5D tariff make a difference to the cost-effectiveness results. This can have huge policy implications, as decisions by HTA agencies, such as NICE, depend on these results.

Which EQ-5D-5L to use in a new primary research study remains an open question. In the meantime, NICE recommends the use of the EQ-5D-3L or, if EQ-5D-5L was collected, Ben van Hout and colleagues’ mapping function to the EQ-5D-3L. Hopefully, a definite answer won’t be long in coming.


Thesis Thursday: Anna Heath

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Anna Heath, who has a PhD from University College London. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Bayesian computations for value of information measures using Gaussian processes, INLA and Moment Matching
Supervisors
Gianluca Baio, Ioanna Manolopoulou
Repository link
http://discovery.ucl.ac.uk/id/eprint/10050229

Why are new methods needed for value of information analysis?

Value of Information (VoI) has been around for a really long time – it was first mentioned in a book published in 1959! More recently, it has been suggested that VoI methods can be used in health economics to direct and design future research strategies. There are several different concepts in VoI analysis and each of these can be used to answer different questions. The VoI measure with the most potential calculates the economic benefit of collecting additional data to inform a health economic model (known as the EVSI). The EVSI can be compared with the cost of collecting the data, allowing us to make sure that our clinical research is “cost-effective”.

The problem is that, mathematically, VoI measures are almost impossible to calculate, so we have to use simulation. Traditionally, these simulation methods have been very slow (in my PhD, one example took over 300 days to compute 10 VoI measures) so we need simulation methods that speed up the computation significantly before VoI can be used for decisions about research design and funding.

Do current EVPPI and EVSI estimation methods give different results?

For most examples, the current estimation methods give similar results, but the computational time needed to obtain these results differs significantly. Since starting my PhD, different estimation methods for the EVPPI and the EVSI have been published. The differences between these methods lie in the assumptions they make and their ease of use. The results seem to be pretty stable across all the different methods, which is good!

The EVPPI determines which model parameters have the biggest impact on the cost-effectiveness of the different treatments. This is used to direct possible avenues of future research, i.e. we should focus on gaining more information about parameters with a large impact on cost-effectiveness. The EVPPI is calculated based only on simulations of the model parameters, so the number of methods for EVPPI calculation is quite small. To calculate the EVSI, you need to consider how the additional data would be collected (through a clinical trial, an observational study, etc.), so there is a wider range of available methods.

How does the Gaussian process you develop improve EVPPI estimation?

Before my PhD started, Mark Strong and colleagues at the University of Sheffield developed a method to calculate the EVPPI based on flexible regression. This method is accurate, but when you want to calculate the value of a group of model parameters, the computational time increases significantly. A Gaussian process is a very flexible regression method but can be slow when trying to calculate the EVPPI for a group of parameters. The method we developed adapted the Gaussian process to speed up computation when calculating the EVPPI for a group of parameters. The size of the group does not really make a difference to the computation time for this method, so it allows fast EVPPI computation in nearly all practical examples!
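The regression idea behind these methods can be illustrated with a simplified stand-in. Here, ordinary polynomial regression plays the role of the flexible (Gaussian process) regression, and the two-option model and parameter distributions are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
S = 5000

# Invented probabilistic sensitivity analysis: the net benefit of two
# options depends on two uncertain parameters.
theta1 = rng.normal(5.0, 2.0, S)
theta2 = rng.normal(4.0, 3.0, S)
nb = np.column_stack([theta1 + 0.5 * theta1**2,  # option A
                      theta2 + 8.0])             # option B

def fitted(y, x, degree=3):
    """Stand-in for flexible regression: fitted values of y on x,
    estimating the conditional expectation E[y | x]."""
    return np.polyval(np.polyfit(x, y, degree), x)

# E[NB | theta1] for each option, then the EVPPI identity:
# mean of the conditional maxima minus maximum of the overall means.
cond = np.column_stack([fitted(nb[:, 0], theta1),
                        fitted(nb[:, 1], theta1)])
evppi = cond.max(axis=1).mean() - nb.mean(axis=0).max()
```

The key property is that only the existing probabilistic simulations are needed; the regression replaces any further model runs. A Gaussian process generalises `fitted` to groups of parameters, which is where the computational challenge (and Heath's contribution) lies.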

What is moment matching, and how can it be used to estimate EVSI?

Moments define the shape of a distribution – the first moment is the mean, the second is the variance, the third is the skewness, and so on. To estimate the EVSI, we need to estimate a distribution with some specific properties. We can show that this distribution is similar to the distribution of the net benefit from a probabilistic sensitivity analysis. Moment matching is a fancy way of saying that we estimate the EVSI by changing the distribution of the net benefit so it has the same variance as the distribution needed to estimate the EVSI. This significantly decreases the computation time for the EVSI because traditionally we would estimate the distribution for the EVSI using a large number of simulations (I’ve used 10 billion simulations for one estimate).
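A minimal sketch of the variance-matching step might look like the following. The net benefit draws and the target standard deviation are invented for illustration; in the actual method, the target variance comes from the EVSI calculation itself (e.g. from a small number of nested simulations):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical draws of incremental net benefit from a probabilistic
# sensitivity analysis (values invented for illustration).
nb_psa = rng.normal(200.0, 400.0, 10_000)

def moment_match(draws, target_sd):
    """Rescale draws about their mean so their standard deviation
    equals target_sd (i.e. match the second moment)."""
    m = draws.mean()
    return m + (draws - m) * (target_sd / draws.std())

# Suppose we have established that the posterior mean net benefit
# after the proposed study should have a standard deviation of 250.
matched = moment_match(nb_psa, target_sd=250.0)

# EVSI-style summary: expected value of deciding after seeing the
# data, minus the value of the best decision under current information.
evsi = np.maximum(matched, 0).mean() - max(matched.mean(), 0)
```

The rescaling reuses the probabilistic sensitivity analysis draws rather than generating billions of new simulations, which is where the speed-up comes from.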

The really cool thing about this method is that we extended it to use the EVSI to find the trial design and sample size that gives the maximum value for money from research investment resources. The computation time for this analysis was around 5 minutes whereas the traditional method took over 300 days!

Do jobbing health economists need to be experts in value of information analysis to use your BCEA and EVSI software?

The BCEA software uses the costs and effects calculated from a probabilistic health economic model, alongside the probabilistic analysis for the model parameters, to give standard graphics and summaries. It is written in R and can be used to calculate the EVPPI without being an expert in VoI methods and analysis. All you need is to decide which model parameters you are interested in valuing. We’ve put together a Web interface, BCEAweb, which allows you to use BCEA without using R.

The EVSI software requires a model that incorporates how the data from the future study will be analysed. This can be complicated to design although I’m currently putting together a library of standard examples. Once you’ve designed the study, the software calculates the EVSI without any input from the user, so you don’t need to be an expert in the calculation methods. The software also provides graphics to display the EVSI results and includes text to help interpret the graphical results. An example of the graphical output can be seen here.