Chris Sampson’s journal round-up for 19th November 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Valuation of health states considered to be worse than death—an analysis of composite time trade-off data from 5 EQ-5D-5L valuation studies. Value in Health Published 12th November 2018

I have a problem with the idea of health states being ‘worse than dead’, and I’ve banged on about it on this blog. Happily, this new article provides an opportunity for me to continue my campaign. Health state valuation methods estimate the strength of people’s preferences for health states, on a scale anchored at 1 (full health) and 0 (dead). Positive values are easy to understand; 1.0 is twice as good as 0.5. But what about negative values? Is -1.0 twice as bad as -0.5? How much worse than being dead is that? The purpose of this study is to evaluate whether or not negative EQ-5D-5L values meaningfully discriminate between different health states.

The study uses data from EQ-5D-5L valuation studies conducted in Singapore, the Netherlands, China, Thailand, and Canada. Altogether, more than 5000 people provided valuations of 10 states each. As a simple measure of severity, the authors summed the number of steps from full health in all domains, giving a value from 0 (11111) to 20 (55555). We’d expect this severity measure to correlate strongly (and negatively) with the mean utility values derived from the composite time trade-off (TTO) exercise.
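As an aside, the severity measure is easy to reproduce; here is a minimal sketch (my own illustration, not the authors’ code):

```python
def misery_score(state: str) -> int:
    """Sum of steps from full health across the five EQ-5D-5L domains.

    '11111' (full health) scores 0; '55555' (the worst state) scores 20.
    """
    return sum(int(level) - 1 for level in state)

misery_score("11111")  # 0
misery_score("55555")  # 20
misery_score("35412")  # 10
```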

Taking Singapore as an example, the mean of positive values (states better than dead) decreased from 0.89 to 0.21 with increasing severity, which is reassuring. The mean of negative values, on the other hand, ranged from -0.98 to -0.89. Negative values were clustered between -0.5 and -1.0. Results were similar across the other countries. In all except Thailand, observed negative values were indistinguishable from random noise. There was no decreasing trend in mean utility values as severity increased for states worse than dead. A linear mixed model with participant-specific intercepts and an ANOVA model confirmed the findings.
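For anyone wanting to probe their own valuation data in the same way, a minimal sketch of the kind of mixed model described (my own illustration, with hypothetical file and column names) could look like this:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per valuation, with columns
# 'tto_value', 'severity' (the 0-20 misery score) and 'participant_id'.
df = pd.read_csv("ctto_valuations.csv")
worse_than_dead = df[df["tto_value"] < 0]

# Linear mixed model with participant-specific random intercepts, as described in
# the paper; a severity coefficient near zero is consistent with the finding that
# negative values do not discriminate between states of differing severity.
model = smf.mixedlm("tto_value ~ severity", worse_than_dead,
                    groups=worse_than_dead["participant_id"]).fit()
print(model.summary())
```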

What this means is that we can’t say much about states worse than dead except that they are worse than dead. How much worse doesn’t relate to severity, which is worrying if we’re using these values in trade-offs against states better than dead. Mostly, the authors frame this lack of discriminative ability as a practical problem, rather than anything more fundamental. The discussion section provides some interesting speculation, but my favourite part of the paper is an analogy, which I’ll be quoting in future: “it might be worse to be lost at sea in deep waters than in a pond, but not in any way that truly matters”. Dead is dead is dead.

Determining value in health technology assessment: stay the course or tack away? PharmacoEconomics [PubMed] Published 9th November 2018

The cost-per-QALY approach to value in health care is no stranger to assault. The majority of criticisms are ill-founded special pleading, but, sometimes, reasonable tweaks and alternatives have been proposed. The aim of this paper was to bring together a supergroup of health economists to review and discuss these reasonable alternatives. Specifically, the questions they sought to address were: i) what should health technology assessment achieve, and ii) what should be the approach to value-based pricing?

The paper provides an unstructured overview of a selection of possible adjustments or alternatives to the cost-per-QALY method. We’re very briefly introduced to QALY weighting, efficiency frontiers, and multi-criteria decision analysis. The authors don’t tell us why we ought (or ought not) to adopt these alternatives. I was hoping that the paper would provide tentative answers to the normative questions posed, but it doesn’t do that. It doesn’t even outline the thought processes required to answer them.

The purpose of this paper seems to be to argue that alternative approaches aren’t sufficiently developed to replace the cost-per-QALY approach. But it’s hardly a strong defence. I’m a big fan of the cost-per-QALY as a necessary (if not sufficient) part of decision making in health care, and I agree with the authors that the alternatives are lacking in support. But the lack of conviction in this paper scares me. It’s tempting to make a comparison between the EU and the QALY.

How can we evaluate the cost-effectiveness of health system strengthening? A typology and illustrations. Social Science & Medicine [PubMed] Published 3rd November 2018

Health care is more than the sum of its parts. This is particularly evident in low- and middle-income countries that might lack strong health systems and which therefore can’t benefit from a new intervention in the way a strong system could. Thus, there is value in health system strengthening. But, as the authors of this paper point out, this value can be difficult to identify. The purpose of this study is to provide new methods to model the impact of health system strengthening in order to support investment decisions in this context.

The authors introduce standard cost-effectiveness analysis and economies of scope as relevant pieces of the puzzle. In essence, this paper is trying to marry the two. An intervention is more likely to be cost-effective if it helps to provide economies of scope, either by making use of an underused platform or providing a new platform that would improve the cost-effectiveness of other interventions. The authors provide a typology with three types of health system strengthening: i) investing in platform efficiency, ii) investing in platform capacity, and iii) investing in new platforms. Examples are provided for each. Simple mathematical approaches to evaluating these are described, using scaling factors and disaggregated cost and outcome constraints. Numerical demonstrations show how these approaches can reveal differences in cost-effectiveness that arise through changes in technical efficiency or the opportunity cost linked to health system strengthening.
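To make the logic concrete, here is a deliberately stripped-down sketch of the scaling-factor idea, with made-up names and numbers rather than the paper’s own notation:

```python
def strengthening_net_health_benefit(platform_qalys, scale_factor,
                                     strengthening_cost, threshold):
    """Illustrative net health benefit of investing in a delivery platform.

    Strengthening is assumed to multiply the health produced by each intervention
    delivered on the platform by `scale_factor`; its value is the extra health
    generated, minus the health opportunity cost of the investment itself.
    """
    extra_qalys = sum(q * (scale_factor - 1) for q in platform_qalys)
    return extra_qalys - strengthening_cost / threshold

# Two interventions on the platform, each currently producing 1,000 QALYs per year:
strengthening_net_health_benefit([1_000, 1_000], scale_factor=1.2,
                                 strengthening_cost=2_000_000, threshold=5_000)
# 400 extra QALYs against 400 QALYs of opportunity cost: exactly break-even.
```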

This paper is written with international development investment decisions in mind, and in particular the challenge of investments that can mostly be characterised as health system strengthening. But it’s easy to see how many – perhaps all – health services are interdependent. If anything, the broader impact of new interventions on health systems should be considered as standard. The methods described in this paper provide a useful framework to tackle these issues, with food for thought for anybody engaged in cost-effectiveness analysis.

Credits

Chris Sampson’s journal round-up for 15th October 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Reliability and validity of the contingent valuation method for estimating willingness to pay: a case of in vitro fertilisation. Applied Health Economics and Health Policy [PubMed] Published 13th October 2018

In vitro fertilisation (IVF) is a challenge for standard models of valuation in health economics. Mostly, that’s because, despite it falling within the scope of health care, and despite infertility being a health problem, many of the benefits of IVF can’t be considered health-specific. QALYs can’t really do the job, so there’s arguably a role for cost-benefit analysis, and for using stated preference methods to determine the value of IVF. This study adds to an existing literature on willingness to pay (WTP) for IVF, but differs in that it tries to identify WTP from the general population. The study is set in Australia, where IVF is part-funded by universal health insurance, so asking the public is arguably the right thing to do.

Three contingent valuation surveys were conducted online with 1,870 people from the general public. The first survey used a starting point bid of $10,000, and then, 10 months later, two more surveys were conducted with starting point bids of $4,000 and $10,000. Each included questions for a 10%, 20%, and 50% success rate. Respondents were asked to adopt an ex-post perspective, assuming that they were infertile and could conceive by IVF. Individuals could respond to starting bids with ‘yes’, ‘no’, ‘not sure’, or ‘I am not willing to pay anything’. WTP for one IVF cycle with a 20% success rate ranged from $6,353 in the $4,000 survey to $11,750 in the first $10,000 survey. WTP for a year of treatment ranged from $18,433 to $28,117. The method was reliable insofar as there were no differences between the first and second $10,000 surveys. WTP values corresponded to the probability of success, providing support for the internal construct validity of the survey. However, the big difference between values derived using the alternative starting point bids indicates a strong anchoring bias. The authors also tested the external criterion validity by comparing the number of respondents willing to pay more than $4,000 for a cycle with a 20% success rate (roughly equivalent to the out of pocket cost in Australia) with the number of people who actually choose to pay for IVF in Australia. Around 63% of respondents were willing to pay at that price, which is close to the estimated 60% in Australia.

This study provides some support for the use of contingent valuation methods in the context of IVF, and for their use in general population samples. But the anchoring effect is worrying and justifies further research to identify appropriate methods to counteract this bias. The exclusion of the “not sure” and “I am not willing to pay anything” responses from the analysis – as ‘non-demanders’ – arguably undermines the ‘societal valuation’ aspect of the estimates.

Pharmaceutical expenditure and gross domestic product: evidence of simultaneous effects using a two‐step instrumental variables strategy. Health Economics [PubMed] Published 10th October 2018

The question of how governments determine spending on medicines is pertinent in the UK right now, as the Pharmaceutical Price Regulation Scheme approaches its renewal date. The current agreement includes a cap on pharmaceutical expenditure. It should go without saying that GDP ought to have some influence on how much public spending is dedicated to medicines. But, when medicines expenditure might also influence GDP, the actual relationship is difficult to estimate. In this paper, the authors seek to identify both effects: the income elasticity of government spending on pharmaceuticals and the effect of that spending on income.

The authors use a variety of data sources from the World Health Organization, World Bank, and International Monetary Fund to construct an unbalanced panel for 136 countries from 1995 to 2006. To get around the challenge of two-way causality, the authors implement a two-step instrumental variable approach. In the first step of the procedure, a model estimates the impact of GDP per capita on government spending on pharmaceuticals. International tourist receipts are used as an instrument, expected to correlate strongly with GDP per capita but to be unrelated to medicines expenditure (except through its correlation with GDP). The model attempts to control for health care expenditure, life expectancy, and other important country-specific variables. In the second step, a reverse causality model is used to assess the impact of pharmaceutical expenditure on GDP per capita, with pharmaceutical expenditure adjusted to partial out the response to GDP estimated in the first step.
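My reading of the two steps, sketched with hypothetical variable names (and done ‘by hand’ with OLS for clarity, which ignores the standard-error corrections and panel structure a proper implementation would need), is roughly as follows:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("country_panel.csv")   # hypothetical country-year panel

# Step 1: income elasticity of public pharmaceutical spending, instrumenting
# log GDP per capita with log international tourist receipts (manual 2SLS).
first_stage = smf.ols("log_gdp ~ log_tourism + log_health_exp + life_exp", df).fit()
df["log_gdp_hat"] = first_stage.fittedvalues
income_eq = smf.ols("log_pharma ~ log_gdp_hat + log_health_exp + life_exp", df).fit()
income_elasticity = income_eq.params["log_gdp_hat"]

# Step 2: reverse-causality model, with pharmaceutical spending adjusted to
# partial out the GDP-driven component estimated in step 1.
df["log_pharma_adj"] = df["log_pharma"] - income_elasticity * df["log_gdp"]
reverse_eq = smf.ols("log_gdp ~ log_pharma_adj + log_health_exp + life_exp", df).fit()
print(reverse_eq.params["log_pharma_adj"])
```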

The headline average results are that GDP increases pharmaceutical expenditure and that pharmaceutical expenditure reduces GDP. A 1% increase in GDP per capita increases public pharmaceutical expenditure per capita by 1.4%, suggesting that pharmaceuticals are a luxury good. A 1% increase in public pharmaceutical expenditure is associated with a 0.09% decrease in GDP per capita. But the results are more nuanced than that. The authors outline various sources of heterogeneity. The positive effect of GDP on pharmaceutical expenditure only holds for high-income countries, and the negative effect of pharmaceutical expenditure on GDP only holds for low-income countries. Quantile regressions show that income elasticity decreases for higher quantiles of expenditure. GDP only influences pharmaceutical spending in countries classified as ‘free’ on the Economic Freedom of the World index, and pharmaceutical expenditure only has a negative impact on GDP in countries that are ‘not free’.

I’ve never come across this kind of two-step approach before, so I’m still trying to get my head around whether the methods and the data are adequate. But a series of robustness checks provide some reassurance. In particular, an analysis of intertemporal effects using lagged GDP and lagged pharmaceutical expenditure demonstrates the robustness of the main findings. Arguably, the findings of this study are more important for policymaking in low- and middle-income countries, where pharmaceutical expenditures might have important consequences for GDP. In high-income (and ‘free’) economies that spend a lot on medicines, like the UK, there is probably less at stake. This could be because of effective price regulation and monitoring, and better adherence, ensuring that pharmaceutical expenditure is not wasteful.

Parental health spillover in cost-effectiveness analysis: evidence from self-harming adolescents in England. PharmacoEconomics [PubMed] [RePEc] Published 8th October 2018

Any intervention has the potential for spillover effects, whereby people other than the recipient of care are positively or negatively affected by its consequences. Where a child is the recipient of care, it stands to reason that any intervention could affect the well-being of the parents and that these impacts should be considered in economic evaluation. But how should parental spillovers be incorporated? Should parental utilities simply be added to the child patient’s utility? Or should a multiplier effect be used with reference to the effect of an intervention on the child’s utility?

The study reports on a trial-based economic evaluation of family therapy for self-harming adolescents aged 11-17. Data collection included EQ-5D-3L for the adolescents and HUI2 for the main caregiver (86% mothers) at baseline, 6-month follow-up, and 12-month follow-up, collected from 731 patient-parent pairs. The authors outline six alternative methods for including parental health spillovers: i) relative health spillover, ii) relative health spillover per treatment arm, iii) absolute health spillover, iv) absolute global health spillover per treatment arm, v) additive accrued health benefits, and vi) household equivalence scales. These differ according to whether parental utility is counted as depending on the adolescent’s utility, treatment allocation, the primary outcome of the study, or some combination thereof. But the authors’ primary focus (and the main contribution of this study) is the equivalence scale option. This involves adding together the spillover effects for other members of the household and using alternative weightings depending on the importance of parental utility compared with adolescent utility.

Using Tobit models, controlling for a variety of factors, the authors demonstrate that parental utility is associated with adolescent utility. Then, economic evaluations are conducted using each of the alternative spillover accounting methods. The base case of including only adolescents’ utility delivers an ICER of £40,453. Employing the alternative methods gives quite different results, with the intervention dominated in two of the cases and an ICER below £30,000 per QALY in others. For the equivalence scale approach, the authors employ several elasticities for spillover utility, ranging from 0 (where parental utility is of equivalent value to adolescent utility and therefore additive) to 1 (where the average health spillover per household member is estimated for each patient). The ICER estimates using the equivalence scale approach ranged from £27,166 to £32,504. Higher elasticities implied lower cumulative QALYs.
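A hypothetical sketch of the equivalence-scale weighting (my own simplification; the paper’s exact formula may differ) helps to see what the elasticity is doing:

```python
def household_qaly_gain(patient_gain, spillover_gains, elasticity):
    """Aggregate QALY gains across the household with an equivalence-scale weight.

    elasticity = 0: spillovers are simply added to the patient's gain;
    elasticity = 1: spillovers enter as an average per household member.
    """
    household_size = 1 + len(spillover_gains)   # patient plus other members
    return patient_gain + sum(spillover_gains) / household_size ** elasticity

household_qaly_gain(0.10, [0.04], elasticity=0)   # 0.14: fully additive
household_qaly_gain(0.10, [0.04], elasticity=1)   # 0.12: spillover averaged over two people
```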

The paper’s contribution is methodological, and I wouldn’t read too much into the magnitude of the results. For starters, the use of HUI2 (a measure for children) in adults and the use of EQ-5D-3L (a measure for adults) in the children is somewhat confusing. The authors argue that health gains should only be aggregated at the household level if the QALY gain for the patient is greater than or equal to zero, because the purpose of treatment is to benefit the adolescents, not the parents. And they argue in favour of using an equivalence scale approach. By requiring an explicit judgement to set the elasticity within the estimation, the method provides a useful and transparent approach to including parental spillovers.

Credits

Simon McNamara’s journal round-up for 1st October 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A review of NICE appraisals of pharmaceuticals 2000-2016 found variation in establishing comparative clinical effectiveness. Journal of Clinical Epidemiology [PubMed] Published 17th September 2018

The first paper in this week’s round-up is on the topic of single-arm studies; specifically, the way in which the comparative effectiveness of medicines granted a marketing authorisation on the basis of single-arm studies has been evaluated in NICE appraisals. If you are interested in comparative effectiveness, single-arm studies are difficult to deal with. If you don’t have a control arm to refer to, how do you know what the impact of the intervention is? If you don’t know how effective the intervention is, how can you say whether it is cost-effective?

In this paper, the authors conduct a review of the way this problem has been dealt with in NICE appraisals. They do this by searching through the 489 NICE technology appraisals conducted between 2010 and 2016. The search identified 22 relevant appraisals (4% of the total). The most commonly used way of estimating comparative effectiveness (19 of 22 appraisals) was simulation of a control arm using external data – be that from an observational study or a randomised trial. Of these, 14 appraisals featured naïve comparisons across studies, with no attempt made to adjust for potential differences between population groups. The three appraisals that didn’t use external data relied upon expert opinion, or the assumption that non-responders in the single-arm intervention study could be used as a proxy for those who would receive the comparator intervention.

Interestingly, the authors find little difference in the proportion of medicines approved by NICE between those reliant on non-RCT data (83%) and those with RCT data (86%); however, the likelihood of receiving an “optimised” (aka subgroup) approval was substantially higher for medicines with solely non-RCT data (41% vs 19%). These findings demonstrate that NICE do accept models based on single-arm studies – even though more than 75% of the comparative effectiveness estimates underpinning these models relied upon naïve indirect comparisons or other less robust methods.

The paper concludes by noting that single-arm studies are becoming more common (50% of the appraisals identified were conducted in 2015-2016), and by suggesting that HTA and regulatory bodies should work together to develop guidance on how to evaluate comparative effectiveness based on single-arm studies.

I thought this paper was great, and it made me reflect on a couple of things. Firstly, the fact that NICE completed such a high volume of appraisals (489) between 2010 and 2016 is extremely impressive – well done NICE. Secondly, should the EMA, or EUnetHTA, play a larger role in providing estimates of comparative effectiveness for single arm studies? Whilst different countries may reasonably make different value judgements about different health outcomes, comparative effectiveness is – at least in theory – a matter of fact, rather than values, so can’t we assess it centrally?

A QALY loss is a QALY loss is a QALY loss: a note on independence of loss aversion from health states. The European Journal of Health Economics [PubMed] Published 18th September 2018

If I told you that you would receive £10 in return for doing some work for me, and then I only paid you £5, how annoyed would you be? What about if I told you I would give you £10 but then gave you £15? How delighted would you be? If you are economically rational, then these two impacts (annoyance vs delight) should be symmetrical; but, if you are a human, your annoyance in the first scenario would likely outweigh the delight you would experience in the second. This is the basic idea behind Kahneman and Tversky’s seminal work on “loss aversion” – we dislike changes we perceive as losses more than we like equivalent changes we perceive as gains. The second paper in this week’s roundup explores loss aversion in the context of health. Applying loss aversion to health is a really interesting idea, because it calls into question the assumption that people value all QALYs equally – perhaps QALYs perceived as losses are valued more highly than QALYs perceived as gains.

In the introduction to this paper, the authors note that existing evidence suggests loss aversion is present for duration of life and for quality of life, but that nobody has explored whether it remains constant when the two elements change together – simply put, when it comes to loss aversion, is “a QALY loss a QALY loss a QALY loss”? The authors test this idea via a choice experiment fielded in a sample of 111 Dutch students. In this experiment, the loss aversion of each participant was independently elicited for four EQ-5D-5L health states – ranging from perfect health down to a health state utility value of 0.46.

As you might have guessed from the title of the paper, the authors found that, at the aggregate level, loss aversion was not significantly different between the four health states – albeit with some variation at the individual level. For each health state, perceived losses were weighted around two times as highly as perceived gains.
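For intuition, a stylised piecewise-linear version of the Kahneman–Tversky value function, using the loss-aversion coefficient of roughly two reported here, looks like this (the paper’s elicitation is more sophisticated than this sketch):

```python
def prospect_value(delta_qaly, loss_aversion=2.0):
    """Reference-dependent value of a change in health, with losses weighted more heavily."""
    return delta_qaly if delta_qaly >= 0 else loss_aversion * delta_qaly

prospect_value(0.1)    #  0.1 -> a gain of 0.1 QALYs
prospect_value(-0.1)   # -0.2 -> an equal-sized loss 'hurts' about twice as much
```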

I enjoyed this paper, and it prompted me to think about the consequences of loss aversion for health economics more generally. Do decision makers in health care treat the outcomes associated with a new technology as a reference point, and so feel loss aversion when considering not funding it? From a normative perspective, should we accept asymmetry in the valuation of health? Is this simply a behavioural quirk that we should override in our analyses, or should we be conforming to it and granting differential weight to outcomes depending upon whether the recipient perceives them as gains or losses?

Advanced therapy medicinal products and health technology assessment principles and practices for value-based and sustainable healthcare. The European Journal of Health Economics [PubMed] Published 18th September 2018

The final paper in this week’s roundup is on “Advanced Therapy Medicinal Products” (ATMPs). According to the European Union Regulation 1394/2007, an ATMP is a medicine which is either (1) a gene therapy, (2) a somatic-cell therapy, (3) a tissue-engineered therapy, or (4) a combination of these approaches. I don’t pretend to understand the nuances of how these medicines work, but in simple terms ATMPs aim to replace, or regenerate, human cells, tissues and organs in order to treat ill health. Whilst ATMPs are thought to have great potential in improving health and providing long-term survival gains, they present a number of challenges for Health Technology Assessment (HTA) bodies.

This paper details a meeting of a panel of experts from the UK, Germany, France and Sweden, who were tasked with identifying and discussing these challenges. The experts identified three key challenges: (1) uncertainty about long-term benefit, and consequently cost-effectiveness, (2) discount rates, and (3) capturing the broader “value” of these therapies – including the incremental value associated with potentially curative therapies. These three challenges stem from the fact that, at the point of HTA, ATMPs are likely to have immature data and only an uncertain prospect of long-term benefits. The experts suggest a range of solutions to these problems, including the use of outcomes-based reimbursement schemes, initiating a multi-disciplinary forum to consider different approaches to discounting, and further research into elements of “value” not captured by current HTA processes.
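The discounting challenge is easy to illustrate with a back-of-the-envelope calculation: for a hypothetical therapy delivering one QALY per year for 50 years, the choice between the discount rates most often discussed in the UK context makes a substantial difference to the present value.

```python
def present_value(annual_qalys, years, rate):
    """Discounted sum of a constant annual QALY gain over a fixed horizon."""
    return sum(annual_qalys / (1 + rate) ** t for t in range(1, years + 1))

present_value(1, 50, 0.035)   # ~23.5 QALYs at a 3.5% discount rate
present_value(1, 50, 0.015)   # ~35.0 QALYs at a 1.5% discount rate
```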

Whilst there is undoubtedly merit to some of these suggestions, I couldn’t help but feel a bit uneasy about this paper due to its funder – an ATMP manufacturer. Would the authors have written this paper if they hadn’t been paid to by a company with a vested interest in changing HTA systems to suit their agenda? Whilst I don’t doubt the paper was written independently of the company, and don’t mean to cast aspersions on the authors, this does make me question how industry shapes the areas of discourse in our field – even if it doesn’t shape the specific details of that discourse.

Many of the problems raised in this paper are not unique to ATMPs; they apply equally to any intervention with the uncertain prospect of a cure or long-term benefit (e.g. therapies for early-stage cancer, public health interventions, or immunotherapies). Science aside, funder aside, what makes ATMPs any different from these prior interventions?

Credits