# Brendan Collins’s journal round-up for 18th March 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Evaluation of intervention impact on health inequality for resource allocation. Medical Decision Making [PubMed] Published 28th February 2019

How should decision-makers factor equity impacts into economic decisions? Can we trade off an intervention’s cost-effectiveness with its impact on unfair health inequalities? Is a QALY just a QALY or should we weight it more if it is gained by someone from a disadvantaged group? Can we assume that, because people of lower socioeconomic position lose more QALYs through ill health, most interventions should, by default, reduce inequalities?

I really like the health equity plane. This is where you show health impacts (usually including a summary measure of cost-effectiveness like net health benefit or net monetary benefit) and equity impacts (which might be a change in slope index of inequality [SII] or relative index of inequality) on the same plane. This enables decision-makers to identify potential trade-offs between interventions that produce a greater benefit, but have less impact on inequalities, and those that produce a smaller benefit, but increase equity. I think there has been a debate over whether the ‘win-win’ quadrant should be south-east (which would be consistent with the dominant quadrant of the cost-effectiveness plane) or north-east, which is what seems to have been adopted as the consensus and is used here.
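The quadrant logic can be put in code. A minimal sketch, taking the north-east quadrant as the ‘win-win’ region; the intervention names and all numbers are hypothetical:

```python
# Minimal sketch of classifying interventions on the health equity plane.
# Assumes net health benefit (NHB, in QALYs) on the x-axis and equity
# impact (e.g. a reduction in the slope index of inequality) on the
# y-axis, so the north-east quadrant is the 'win-win' region.

def equity_plane_quadrant(net_health_benefit, equity_impact):
    """Return a quadrant label for one intervention."""
    if net_health_benefit >= 0 and equity_impact >= 0:
        return "NE: more health, less inequality (win-win)"
    if net_health_benefit >= 0:
        return "SE: more health, more inequality (trade-off)"
    if equity_impact >= 0:
        return "NW: less health, less inequality (trade-off)"
    return "SW: less health, more inequality (lose-lose)"

# Hypothetical interventions: (name, NHB in QALYs, equity impact)
interventions = [
    ("smoking cessation service", 1200.0, 0.010),
    ("workplace health checks", 800.0, -0.004),
]
for name, nhb, eq in interventions:
    print(f"{name}: {equity_plane_quadrant(nhb, eq)}")
```

The decision-relevant trade-offs are the off-diagonal (NW and SE) quadrants, where health gains and equity gains point in opposite directions.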

This paper showcases a reproducible method to estimate the equity impact of interventions. It considers public health interventions recommended by NICE from 2006 to 2016, with equity impacts estimated based on whether they targeted specific diseases, risk factors or populations. The disease distributions were based on hospital episode statistics data by deprivation (IMD). The study used equity weights to convert QALYs gained by different social groups into net social welfare; in this case, the health of the most disadvantaged fifth of people was valued at around 6-7 times that of the least disadvantaged fifth. I think there might still be work to be done around reaching consensus on equity weights.
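As a toy illustration of the weighting step, this sketch multiplies QALY gains by deprivation quintile by equity weights and sums them into a single welfare figure. Both the gains and the weights are made up for illustration, not the paper’s values:

```python
# Toy illustration of equity weighting: QALY gains by deprivation
# quintile (most to least disadvantaged) are multiplied by equity
# weights and summed into a single welfare figure. Both the gains and
# the weights are made-up numbers, not the paper's values.

qaly_gains = [100.0, 80.0, 60.0, 40.0, 20.0]  # hypothetical, by quintile
equity_weights = [6.5, 4.0, 2.5, 1.5, 1.0]    # most -> least disadvantaged

unweighted_total = sum(qaly_gains)
net_social_welfare = sum(q * w for q, w in zip(qaly_gains, equity_weights))

print(f"Unweighted QALY gain: {unweighted_total:.0f}")
print(f"Equity-weighted welfare: {net_social_welfare:.0f}")
```

With these numbers, the same 300 QALYs are worth 1,200 welfare units because most of the gain accrues to the more disadvantaged quintiles; a different choice of weights would change the ranking of interventions, which is why consensus on the weights matters.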

The total expected effect on inequalities is small – full implementation of all recommendations would reduce the quality-adjusted life expectancy gap between the healthiest and least healthy from 13.78 to 13.34 QALYs. But maybe this is to be expected; NICE does not typically look at vaccinations or screening and has not looked at large-scale public health programmes like the Healthy Child Programme as a whole. Reassuringly, where recommended interventions were likely to increase inequality, the trade-off between efficiency and equity fell within the social welfare function the authors used. The increase in inequality might be acceptable because the interventions were cost-effective – producing 5.6 million QALYs while increasing the SII by 0.005. If these interventions are buying health at a good price, then you would hope this might release money for other interventions that would reduce inequalities.

I suspect that public health folks might not like equity trade-offs at all – trading off equity and cost-effectiveness might be the moral equivalent of trading off human rights – you can’t choose between them. But the reality is that these kinds of trade-offs do happen and, like a lot of economic methods, this approach is about revealing implicit trade-offs so that they become explicit, and having ‘accountability for reasonableness’.

Future unrelated medical costs need to be considered in cost effectiveness analysis. The European Journal of Health Economics [PubMed] [RePEc] Published February 2019

This editorial says that NICE should include unrelated future medical costs in its decision making. At the moment, if NICE looks at a cardiovascular disease (CVD) drug, it might look at future costs related to CVD but it won’t include changes in the future costs of cancer or dementia, which may occur because individuals live longer. But unrelated QALY gains will usually be implicitly included, so there is an inconsistency. If you are a health economic modeller, you know that including unrelated costs properly is technically difficult. You might weight average population costs by disease prevalence, so that you get cost estimates for people with coronary heart disease, people with diabetes, and people with neither disease. Or you might have a general healthcare running cost that you can apply to future years. But accounting for a full matrix of competing causes of morbidity and mortality is very tricky, if not impossible. To help with this, this group of authors produced the excellent PAID tool, which helps with doing this for the Netherlands (can we have one for the UK, please?).
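The prevalence-weighting approach can be sketched as follows. The disease groups, prevalences, and annual costs are illustrative assumptions, not outputs of the PAID tool:

```python
# Minimal sketch of prevalence-weighting average healthcare costs to
# approximate unrelated medical costs for a cohort. The disease groups,
# prevalences, and annual costs are illustrative assumptions, not
# outputs of the PAID tool.

def expected_annual_cost(groups):
    """groups: (prevalence, mean annual cost) pairs that partition the
    cohort; returns the prevalence-weighted mean annual cost."""
    total_prevalence = sum(p for p, _ in groups)
    assert abs(total_prevalence - 1.0) < 1e-9, "groups must partition the cohort"
    return sum(p * c for p, c in groups)

# Hypothetical 70-year-old cohort: CHD, diabetes, neither condition.
groups = [(0.15, 4000.0), (0.10, 3000.0), (0.75, 1500.0)]
print(f"Expected unrelated cost per person-year: {expected_annual_cost(groups):.2f}")
```

In a real model you would need one such table per age band (and ideally per sex), which is exactly the bookkeeping a tool like PAID takes off your hands.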

To me, including unrelated future costs means that in some cases ICERs might be driven more by the ratio of future costs to QALYs gained, whereas currently ICERs are often driven by the ratio of intervention costs to QALYs gained. So it might be that a lot of treatments that are currently cost-effective no longer would be, or we would need to judge all interventions against a higher willingness-to-pay threshold or value of a QALY. The authors suggest that, although including unrelated medical costs usually pushes up the ICER, it should ultimately result in better decisions that increase health.
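A bit of toy arithmetic shows how this plays out. All figures here are hypothetical:

```python
# Worked arithmetic (all figures hypothetical): including unrelated
# future medical costs raises the ICER when an intervention extends life,
# because survivors accrue extra years of unrelated healthcare spending.

intervention_cost = 20_000.0       # incremental intervention cost
related_future_costs = 5_000.0     # incremental related (e.g. CVD) costs
qalys_gained = 2.0
life_years_gained = 2.5
unrelated_cost_per_year = 3_000.0  # average unrelated spend per extra year

icer_excluding = (intervention_cost + related_future_costs) / qalys_gained
icer_including = (
    intervention_cost
    + related_future_costs
    + life_years_gained * unrelated_cost_per_year
) / qalys_gained

print(f"ICER excluding unrelated costs: {icer_excluding:,.0f} per QALY")
print(f"ICER including unrelated costs: {icer_including:,.0f} per QALY")
```

The treatment looks 30% less cost-effective once unrelated costs are in, and the effect grows with the survival gain, which is why life-extending treatments are hit hardest.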

There are real ethical issues here. I worry that including future unrelated costs might be used for an integrated care agenda in the NHS, moving towards a capitation system where the total healthcare spend on any one individual is capped, which I don’t necessarily think should happen in a health insurance system. Future developments around big data mean we will be able to segment the population a lot better and estimate who will benefit from treatments. But I think if someone is unlucky enough to need a lot of healthcare spending, maybe they should have it. This is risk sharing and, without it, you may get the ‘double jeopardy’ problem.

For health economic modellers and decision-makers, a compromise might be to present analyses with related and unrelated medical costs and to consider both for investment decisions.

Overview of cost-effectiveness analysis. JAMA [PubMed] Published 11th March 2019

This paper probably won’t offer anything new to academic health economists in terms of methods, but I think it might be a useful teaching resource. It gives an interesting example of a model of ovarian cancer screening in the US that was published in February 2018. There has been a large-scale trial of ovarian cancer screening in the UK (the UKCTOCS), which has been extended because the results have been promising but mortality reductions were not statistically significant. The model gives a central ICER estimate of \$106,187/QALY (based on \$100 per screen) which would probably not be considered cost-effective in the UK.

I would like to explore one statement that I found particularly interesting, around the willingness-to-pay threshold: “This willingness to pay is often represented by the largest ICER among all the interventions that were adopted before current resources were exhausted, because adoption of any new intervention would require removal of an existing intervention to free up resources.”

The Culyer bookshelf model is similar to this, although as well as the ICER you also need to consider the burden of disease or size of the investment. Displacing a \$110,000/QALY intervention for 1000 people with a \$109,000/QALY intervention for a million people will bust your budget.
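The arithmetic behind that example, with an illustrative assumption of 1 QALY gained per person treated:

```python
# Worked arithmetic for the bookshelf example: a marginally 'better'
# ICER can still blow the budget if the eligible population is much
# larger. Assumes, for illustration, 1 QALY gained per person treated.

def budget_impact(icer, qalys_per_person, n_people):
    """Total spend implied by treating n_people at a given ICER."""
    return icer * qalys_per_person * n_people

freed = budget_impact(110_000, 1.0, 1_000)         # displaced intervention
required = budget_impact(109_000, 1.0, 1_000_000)  # new intervention

print(f"Budget freed by displacement:  ${freed:,.0f}")
print(f"Budget required for adoption: ${required:,.0f}")
print(f"Shortfall: ${required - freed:,.0f}")
```

Displacing the old intervention frees \$110 million, but adopting the new one costs \$109 billion: a roughly thousandfold shortfall despite the lower ICER, which is the bookshelf point about burden of disease and size of investment.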

This idea works intuitively – if Liverpool FC are signing a new player then I might hope they are better than all of the other players, or at least better than the average player. But actually, as long as they are better than the worst player then the team will be improved (leaving aside issues around different positions, how they play together, etc.).

However, I think that saying that the reference ICER should be the largest current ICER might be a bit dangerous. Leaving aside inefficient legacy interventions (like unnecessary tonsillectomies), it is likely that the intervention being considered for investment and the current maximum-ICER intervention to be displaced may both be new, expensive immunotherapies. It might be last in, first out. But I can’t see this happening; people are loss averse, so decision-makers and patients might not accept what is seen as a fantastic new drug for pancreatic cancer being approved and then quickly usurped by a fantastic new leukaemia drug.

There has been a lot of debate around what the threshold should be in the UK; in England, NICE currently uses £20,000 – £30,000 per QALY, up to a hypothetical maximum of £300,000/QALY in very specific circumstances. The UK Treasury values QALYs at £60,000. Work by Karl Claxton and colleagues suggests that marginal productivity (the ‘shadow price’) in the NHS is nearer to £5,000 – £15,000 per QALY.

I don’t know what the answer to this is. I don’t think the willingness-to-pay threshold for a new treatment should be the maximum ICER of a current portfolio of interventions; maybe it should be the marginal health production cost in a health system, as might be inferred from the Claxton work. Of course, investment decisions are made on other factors, like impact on health inequalities, not just on the ICER.

Credits

# Rita Faria’s journal round-up for 28th January 2019


Appraising the value of evidence generation activities: an HIV modelling study. BMJ Global Health [PubMed] Published 7th December 2018

How much should we spend on implementing our health care strategy versus getting more information to devise a better strategy? Should we devolve budgets to regions or administer the budget centrally? These are difficult questions and this new paper by Beth Woods et al has a brilliant stab at answering them.

The paper looks at the HIV prevention and treatment policies in Zambia. It starts by finding the most cost-effective strategy and the corresponding budget in each region, given what is currently known about the prevalence of the infection, the effectiveness of interventions, etc. The idea is that the regions receive a cost-effective budget to implement a cost-effective strategy. The issue is that the cost-effective strategy and budget are devised according to what we currently know. In practice, regions might face a situation on the ground which is different from what was expected. Regions might not have enough budget to implement the strategy or might have some leftover.

What if we spend some of the budget on getting more information to make a better decision? This paper considers the value of perfect information given the costs of research. Depending on the size of the budget and the cost of research, it may be worthwhile to divert some funds to get more information. But what if we had more flexibility in the budgetary policy? This paper tests two more budgetary options: a national hard budget but with the flexibility to transfer funds from under- to overspending regions, and a regional hard budget with a contingency fund.
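The core value-of-information comparison can be sketched with toy numbers. The strategies, states of the world, probabilities, and net benefits below are all hypothetical:

```python
# Minimal sketch of the value-of-information logic: compare the expected
# value of perfect information (EVPI) with the cost of research. The two
# strategies, two states of the world, and all net benefits (in QALYs)
# are hypothetical.

net_benefit = {
    "strategy A": {"high prevalence": 1000.0, "low prevalence": 400.0},
    "strategy B": {"high prevalence": 600.0, "low prevalence": 900.0},
}
p_state = {"high prevalence": 0.5, "low prevalence": 0.5}

# Best we can do now: pick the strategy with the highest expected net benefit.
ev_current = max(
    sum(p_state[s] * nb[s] for s in p_state) for nb in net_benefit.values()
)

# With perfect information we would pick the best strategy in each state.
ev_perfect = sum(
    p_state[s] * max(nb[s] for nb in net_benefit.values()) for s in p_state
)

evpi = ev_perfect - ev_current
print(f"EV (current information): {ev_current:.0f} QALYs")
print(f"EV (perfect information): {ev_perfect:.0f} QALYs")
print(f"EVPI: {evpi:.0f} QALYs")
```

Research is worth commissioning only if its health opportunity cost is below the EVPI; the paper’s contribution is to embed this calculation inside different budgetary policies rather than treat the budget as fixed.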

The results are remarkable. The best budgetary policy is to have a national budget with the flexibility to reallocate funds across regions. This is a fascinating paper, with implications not only for prioritisation and budget setting in LMICs but also for high-income countries. For example, the 2012 Health and Social Care Act broke down PCTs into smaller CCGs and gave them hard budgets. Some CCGs went into deficit, and there are reports that some interventions have been cut back as a result. There are probably many reasons for the deficit, but this paper shows that hard regional budgets clearly have negative consequences.

Health economics methods for public health resource allocation: a qualitative interview study of decision makers from an English local authority. Health Economics, Policy and Law [PubMed] Published 11th January 2019

Our first paper looked at how to use cost-effectiveness to allocate resources between regions and across health care services and research. Emma Frew and Katie Breheny look at how decisions are actually made in practice, but this time in a local authority in England. Another change of the 2012 Health and Social Care Act was to move public health responsibilities from the NHS to local authorities. Local authorities are now given a ring-fenced budget to implement cost-effective interventions that best match their needs. How do they make decisions? Thanks to this paper, we’re about to find out.

This paper is an enjoyable read and quite an eye-opener. It was startling that health economics evidence was not much used in practice. But the barriers that were cited are not insurmountable. And the suggestions by the interviewees were really useful. There were suggestions about how economic evaluations should consider the local context to give a fair picture of the impact of the intervention on services and on the population, and should move beyond the trial into the real world. Equity was mentioned too, as was broadening the outcomes beyond health. Fortunately, the health economics community is working on many of these issues.

Lastly, there was a clear message to make economic evidence accessible to lay audiences. This is a topic really close to my heart, and something I’d like to help improve. We have to make our work easy to understand and use. Otherwise, it may stay locked away in papers rather than do what we intended it for. Which is, at least in my view, to help inform decisions and to improve people’s lives.

I found this paper reassuring in that there is clearly a need for economic evidence and a desire to use it. Yes, there are some teething issues, but we’re working in the right direction. In sum, the future for health economics is bright!

Survival extrapolation in cancer immunotherapy: a validation-based case study. Value in Health Published 13th December 2018

Often, the cost-effectiveness of cancer drugs hangs on the method used to extrapolate overall survival. This is because many cancer drugs receive their marketing authorisation before most patients in the trial have died. Extrapolation is tested extensively in the sensitivity analysis, and this is the subject of many discussions in NICE appraisal committees. Ultimately, at the point of making the decision, the correct method to extrapolate is a known unknown. Only in hindsight can we know for sure what the best choice was.

Ash Bullement and colleagues take advantage of hindsight to know the best method for extrapolation of a clinical trial of an immunotherapy drug. Survival after treatment with immunotherapy drugs is more difficult to predict because some patients can survive for a very long time, while others have much poorer outcomes. They fitted survival models to the 3-year data cut, which was available at the time of the NICE technology appraisal. Then they compared their predictions to the observed survival in the 5-year data cut and to long-term survival trends from registry data. They found that the piecewise model and a mixture-cure model had the best predictions at 5 years.

This is a relevant paper for those of us who work in the technology appraisal world. I have to admit that I can be sceptical of piecewise and mixture-cure models, but they definitely have a role in our toolbox for survival extrapolation. Ideally, we’d have a study like this for all the technology appraisals hanging on the survival extrapolation so that we can take learnings across cancers and classes of drugs. With time, we would get to know more about what works best for which condition or drug. Ultimately, we may be able to get to a stage where we can look at the extrapolation with less inherent uncertainty.


# Sam Watson’s journal round-up for 9th July 2018


Evaluating the 2014 sugar-sweetened beverage tax in Chile: an observational study in urban areas. PLoS Medicine [PubMed] Published 3rd July 2018

Sugar taxes are one of the public health policy options currently in vogue. Countries including Mexico, the UK, South Africa, and Sri Lanka all have sugar taxes. The aim of such levies is to reduce demand for the most sugary drinks, or if the tax is absorbed on the supply side, which is rare, to encourage producers to reduce the sugar content of their drinks. One may also view it as a form of Pigouvian taxation to internalise the public health costs associated with obesity. Chile has long had an ad valorem tax on soft drinks fixed at 13%, but in 2014 decided to pursue a sugar tax approach. Drinks with more than 6.25g/100ml saw their tax rate rise to 18% and the tax on those below this threshold dropped to 10%. To understand what effect this change had, we would want to know three key things along the causal pathway from tax policy to sugar consumption: did people know about the tax change, did prices change, and did consumption behaviour change. On this latter point, we can consider both the overall volume of soft drinks and whether people substituted low sugar for high sugar beverages. Using the Kantar Worldpanel, a household panel survey of purchasing behaviour, this paper examines these questions.

Everyone in Chile was affected by the tax so there is no control group. We must rely on time series variation to identify the effect of the tax. Sometimes, looking at plots of the data reveals a clear step-change when an intervention is introduced (e.g. the plot in this post); not so in this paper. We therefore rely heavily on the results of the model for our inferences, and I have a couple of small gripes with it. First, the model captures household fixed effects, but no consideration is given to dynamic effects. Some households may be more or less likely to buy drinks, but their decisions are also likely to be affected by how much they’ve recently bought. Similarly, the errors may be correlated over time. Ignoring dynamic effects can lead to large biases. Second, the authors choose among different functional form specifications of time using the Akaike Information Criterion (AIC). While AIC and the Bayesian Information Criterion (BIC) are often thought to be interchangeable, they are not; AIC estimates predictive performance on future data, while BIC estimates goodness of fit to the data. Thus, I would think BIC would be more appropriate. Additional results show the estimates are very sensitive to the choice of functional form, varying by an order of magnitude and even in sign. The authors estimate a fairly substantial decrease of around 22% in the volume of high-sugar drinks purchased, but find evidence that the price paid changed very little (~1.5%) and there was little change in other drinks. While the analysis is generally careful and well thought out, I am not wholly convinced by the authors’ conclusions that “Our main estimates suggest a significant, sizeable reduction in the volume of high-tax soft drinks purchased.”
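The AIC/BIC distinction comes down to the penalty term. A small sketch with hypothetical log-likelihoods and parameter counts shows how the two criteria can disagree on the same pair of specifications:

```python
import math

# Minimal sketch of how AIC and BIC can disagree when choosing a
# functional form for time. Log-likelihoods and parameter counts are
# hypothetical; only the penalty terms differ between the criteria.

def aic(log_lik, k):
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    return k * math.log(n) - 2 * log_lik

n = 10_000  # hypothetical number of household-level observations
models = [
    ("linear time trend", -5010.0, 5),  # (name, max log-likelihood, k)
    ("cubic time trend", -5004.0, 9),
]
for name, ll, k in models:
    print(f"{name}: AIC = {aic(ll, k):.1f}, BIC = {bic(ll, k, n):.1f}")
```

With these numbers AIC’s flat penalty of 2 per parameter favours the richer model, while BIC’s penalty of ln(n) ≈ 9.2 per parameter favours the simpler one; with a large panel, the choice of criterion can therefore change which functional form wins.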

A Bayesian framework for health economic evaluation in studies with missing data. Health Economics [PubMed] Published 3rd July 2018

Missing data is a ubiquitous problem. I’ve never used a data set where no observations were missing and I doubt I’m alone. Despite its pervasiveness, it’s often only afforded an acknowledgement in the discussion or perhaps, in more complete analyses, something like multiple imputation will be used. Indeed, the majority of trials in the top medical journals don’t handle it correctly, if at all. The majority of the methods used for missing data in practice assume the data are ‘missing at random’ (MAR). One interpretation is that this means that, conditional on the observable variables, the probability of data being missing is independent of unobserved factors influencing the outcome. Another interpretation is that the distribution of the potentially missing data does not depend on whether they are actually missing. This interpretation comes from factorising the joint distribution of the outcome $Y$ and an indicator of whether the datum is observed $R$, along with some covariates $X$, into a conditional and marginal model: $f(Y,R|X) = f(Y|R,X)f(R|X)$, a so-called pattern mixture model. This contrasts with the ‘selection model’ approach: $f(Y,R|X) = f(R|Y,X)f(Y|X)$.
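A toy version of the pattern-mixture idea applies a simple mean shift (delta) to the missing pattern and averages over the two patterns; delta = 0 recovers MAR. All numbers here, including delta, are illustrative rather than elicited:

```python
# Toy version of the pattern-mixture idea: shift the mean of the outcome
# for the missing pattern by delta, and average over the observed and
# missing patterns. delta = 0 recovers MAR; in practice the shift would
# come from expert elicitation. All numbers here are illustrative.

observed = [0.70, 0.65, 0.80, 0.75]  # hypothetical observed outcomes
n_missing = 2                        # patients with a missing outcome
delta = -0.10                        # assumed shift for the missing pattern

mean_obs = sum(observed) / len(observed)
p_missing = n_missing / (len(observed) + n_missing)

# f(Y|X) = f(Y|R=1,X)P(R=1|X) + f(Y|R=0,X)P(R=0|X),
# with E[Y|R=0,X] = E[Y|R=1,X] + delta under this simple shift model.
overall_mean = (1 - p_missing) * mean_obs + p_missing * (mean_obs + delta)

print(f"MAR estimate (delta = 0): {mean_obs:.3f}")
print(f"Pattern-mixture estimate: {overall_mean:.4f}")
```

Varying delta over a plausible range (or, as in the paper, giving it a prior) turns an untestable missingness assumption into an explicit sensitivity parameter.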

This paper considers a Bayesian approach using the pattern mixture model for missing data in health economic evaluation. Specifically, the authors specify a multivariate normal model for the data with an additional term in the mean if it is missing, i.e. the model of $f(Y|R,X)$. A model is not specified for $f(R|X)$. If it were, then you would typically allow for correlation between the errors in this model and the main outcomes model. But one could view the additional term in the outcomes model as some function of the error from the observation model, somewhat akin to a control function. Instead, this article uses expert elicitation methods to generate a prior distribution for the unobserved terms in the outcomes model. While this is certainly a legitimate way forward in my eyes, I do wonder how specification of a full observation model would affect the results. The approach of this article is useful and the authors show that it works; I don’t want to detract from that, but, given the lack of literature on missing data in this area, I am curious to compare approaches, including selection models. You could even add shared parameter models as an alternative; all of these are feasible. Perhaps an idea for a follow-up study. As a final point, the models were run in WinBUGS, but regular readers will know I think Stan is the future for estimating Bayesian models, especially in light of the problems with MCMC we’ve discussed previously. So equivalent Stan code would have been a bonus.

This is an economics blog. But focusing solely on economics papers in these round-ups would mean missing out on some papers from related fields that may provide insight into our own work. Thus I present to you a politics and sociology paper. It is not my field and I can’t give a reliable appraisal of the methods, but the results are of interest. In the global fight against non-communicable diseases, there is a range of policy tools available to governments, including the sugar tax of the paper at the top. The WHO recommends a large number. However, there is ongoing debate about whether trade rules and agreements are used to undermine this public health legislation. One agreement, the Technical Barriers to Trade (TBT) Agreement that World Trade Organization (WTO) members all sign, states that members may not impose ‘unnecessary trade costs’ or barriers to trade, especially if the intended aim of the measure can be achieved without doing so. For example, Philip Morris cited a bilateral trade agreement when it sued the Australian government for introducing plain packaging, claiming it violated the terms of trade. Philip Morris eventually lost, but not before substantial costs had been incurred. In another example, the Thai government were deterred from introducing a traffic light warning system for food after threats of a trade dispute from the US, which cited WTO rules. However, there was no clear evidence on the extent to which trade disputes have undermined public health measures.

This article presents results from a new database of all TBT WTO challenges. Between 1995 and 2016, 93 challenges were raised concerning food, beverage, and tobacco products, the number per year growing over time. The most frequent challenges were over labelling products and then restricted ingredients. The paper presents four case studies, including Indonesia delaying food labelling of fat, sugar, and salt after challenge by several members including the EU, and many members including the EU again and the US objecting to the size and colour of a red STOP sign that Chile wanted to put on products containing high sugar, fat, and salt.

We have previously discussed the politics and political economy around public health policy relating to e-cigarettes, among other things. Understanding the political economy of public health and phenomena like government failure can be as important as understanding markets and market failure in designing effective interventions.
