Simon McNamara’s journal round-up for 1st October 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A review of NICE appraisals of pharmaceuticals 2000-2016 found variation in establishing comparative clinical effectiveness. Journal of Clinical Epidemiology [PubMed] Published 17th September 2018

The first paper in this week’s round-up is on the topic of single-arm studies; specifically, the way in which the comparative effectiveness of medicines granted a marketing authorisation on the basis of single-arm studies has been evaluated in NICE appraisals. If you are interested in comparative effectiveness, single-arm studies are difficult to deal with. If you don’t have a control arm to refer to, how do you know what the impact of the intervention is? And if you don’t know how effective the intervention is, how can you say whether it is cost-effective?

In this paper, the authors conduct a review of the way this problem has been dealt with in NICE appraisals. They do this by searching through the 489 NICE technology appraisals conducted between 2010 and 2016. The search identified 22 relevant appraisals (4% of the total). The most commonly used way of estimating comparative effectiveness (19 of 22 appraisals) was simulation of a control arm using external data – be that from an observational study or a randomised trial. Of these, 14 of the appraisals featured naïve comparison across studies, with no attempt made to adjust for potential differences between population groups. The three appraisals that didn’t use external data relied upon expert opinion, or upon the assumption that non-responders in the single-arm intervention study could serve as a proxy for those who would receive the comparator intervention.

Interestingly, the authors find little difference between the proportion of medicines reliant on non-RCT data that were approved by NICE (83%) and the proportion approved among those with RCT data (86%); however, the likelihood of receiving an “optimised” (i.e. subgroup) approval was substantially higher for medicines with solely non-RCT data (41% vs 19%). These findings demonstrate that NICE do accept models based on single-arm studies – even though more than 75% of the comparative effectiveness estimates underlying these models relied upon naïve indirect comparisons or other less robust methods.

The paper concludes by noting that single-arm studies are becoming more common (50% of the appraisals identified were conducted in 2015–2016), and by suggesting that HTA and regulatory bodies should work together to develop guidance on how to evaluate comparative effectiveness based on single-arm studies.

I thought this paper was great, and it made me reflect on a couple of things. Firstly, the fact that NICE completed such a high volume of appraisals (489) between 2010 and 2016 is extremely impressive – well done NICE. Secondly, should the EMA, or EUnetHTA, play a larger role in providing estimates of comparative effectiveness for single-arm studies? Whilst different countries may reasonably make different value judgements about different health outcomes, comparative effectiveness is – at least in theory – a matter of fact rather than values, so can’t we assess it centrally?

A QALY loss is a QALY loss is a QALY loss: a note on independence of loss aversion from health states. The European Journal of Health Economics [PubMed] Published 18th September 2018

If I told you that you would receive £10 in return for doing some work for me, and then I only paid you £5, how annoyed would you be? What if I told you I would give you £10 but then gave you £15? How delighted would you be? If you are economically rational, then these two reactions (annoyance vs delight) should be symmetrical; but, if you are a human, your annoyance in the first scenario would likely outweigh the delight you would experience in the second. This is the basic idea behind Kahneman and Tversky’s seminal work on “loss aversion” – we dislike changes we perceive as losses more than we like equivalent changes we perceive as gains. The second paper in this week’s round-up explores loss aversion in the context of health. The application of loss aversion to health is a really interesting idea, because it calls into question the idea that people value all QALYs equally – perhaps QALYs perceived as losses are valued more highly than QALYs perceived as gains.
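For the curious, this asymmetry can be sketched with Kahneman and Tversky’s prospect-theory value function. This is purely illustrative – the parameter values below are the commonly cited estimates from Tversky and Kahneman (1992), not figures from the paper reviewed here:

```python
# Illustrative sketch of the prospect-theory value function.
# Parameters are the often-quoted Tversky & Kahneman (1992) estimates,
# used here only to make the loss/gain asymmetry concrete.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain (x > 0) or a loss (x < 0)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# The £5 shortfall stings more than the £5 windfall pleases:
print(abs(value(-5)) > value(5))  # True: losses loom larger than gains
```

With these parameters a loss is weighted roughly twice as heavily as an equivalent gain – strikingly close to the two-to-one ratio the paper reports for health states.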

In the introduction of this paper, the authors note that existing evidence suggests loss aversion is present for duration of life and for quality of life, but that nobody has explored whether loss aversion remains constant when the two elements change together – simply put, when it comes to loss aversion, is “a QALY loss a QALY loss a QALY loss”? The authors test this idea via a choice experiment fielded in a sample of 111 Dutch students. In this experiment, the loss aversion of each participant was independently elicited for four EQ-5D-5L health states – ranging from perfect health down to a health state with a utility value of 0.46.

As you might have guessed from the title of the paper, the authors found that, at the aggregate level, loss aversion was not significantly different between the four health states – albeit with some variation at the individual level. For each health state, perceived losses were weighted around two times as highly as perceived gains.

I enjoyed this paper, and it prompted me to think about the consequences of loss aversion for health economics more generally. Do health-related decision makers treat the outcomes associated with a new technology as a reference point, and so feel loss aversion when considering not funding it? From a normative perspective, should we accept asymmetry in the valuation of health? Is this simply a behavioural quirk that we should over-ride in our analyses, or should we conform to it and grant differential weight to outcomes depending upon whether the recipient perceives them as gains or losses?

Advanced therapy medicinal products and health technology assessment principles and practices for value-based and sustainable healthcare. The European Journal of Health Economics [PubMed] Published 18th September 2018

The final paper in this week’s roundup is on “Advanced Therapy Medicinal Products” (ATMPs). According to the European Union Regulation 1394/2007, an ATMP is a medicine which is either (1) a gene therapy, (2) a somatic-cell therapy, (3) a tissue-engineered therapy, or (4) a combination of these approaches. I don’t pretend to understand the nuances of how these medicines work, but in simple terms ATMPs aim to replace, or regenerate, human cells, tissues and organs in order to treat ill health. Whilst ATMPs are thought to have great potential in improving health and providing long-term survival gains, they present a number of challenges for Health Technology Assessment (HTA) bodies.

This paper details a meeting of a panel of experts from the UK, Germany, France and Sweden, who were tasked with identifying and discussing these challenges. The experts identified three key challenges: (1) uncertainty about long-term benefit, and hence about cost-effectiveness, (2) discount rates, and (3) capturing the broader “value” of these therapies – including the incremental value associated with potentially curative therapies. These three challenges stem from the fact that, at the point of HTA, ATMPs are likely to have immature data and an uncertain prospect of long-term benefits. The experts suggest a range of solutions to these problems, including the use of outcomes-based reimbursement schemes, initiating a multi-disciplinary forum to consider different approaches to discounting, and further research into elements of “value” not captured by current HTA processes.

Whilst there is undoubtedly merit to some of these suggestions, I couldn’t help but feel a bit uneasy about this paper due to its funder – an ATMP manufacturer. Would the authors have written this paper if they hadn’t been paid to by a company with a vested interest in changing HTA systems to suit their agenda? Whilst I don’t doubt the paper was written independently of the company, and don’t mean to cast aspersions on the authors, this does make me question how industry shapes the areas of discourse in our field – even if it doesn’t shape the specific details of that discourse.

Many of the problems raised in this paper are not unique to ATMPs; they apply equally to all interventions with an uncertain prospect of cure or long-term benefit (e.g. therapies for early-stage cancer, public health interventions, or immunotherapies). Science aside, funder aside, what makes ATMPs any different from these other interventions?


Rita Faria’s journal round-up for 13th August 2018


Analysis of clinical benefit, harms, and cost-effectiveness of screening women for abdominal aortic aneurysm. The Lancet [PubMed] Published 26th July 2018

This study is an excellent example of the power and flexibility of decision models to help inform decisions on screening policies.

In many countries, screening for abdominal aortic aneurysm is offered to older men but not to women. This is because screening was found to be beneficial and cost-effective, based on evidence from RCTs in older men. In contrast, there is no direct evidence for women. To inform this question, the study team developed a decision model to simulate the benefits and costs of screening women.

This study has many fascinating features. Not only does it simulate the outcomes of expanding the current UK screening policy for men to include women, but also those of other policies with different age parameters, diagnostic thresholds, and treatment thresholds.

Curiously, the most cost-effective policy for women is not the current UK policy for men. This shows the importance of including the full range of options in the evaluation, rather than just what is done now. Unfortunately, the paper is sparse on detail about how the various policies were devised and about whether other, more cost-effective policies may have been left out.

The key cost-effectiveness driver is the probability of having the disease and its presentation (i.e. the distribution of the aortic diameter), as is often the case in cost-effectiveness analyses of diagnostic tests. Neither of these parameters requires an RCT to be estimated. This means that, in principle, we could reduce the uncertainty about which policy to fund by conducting a study on the prevalence of the disease, rather than an RCT on whether a specific policy works.

An exciting aspect is that treatment itself could be better targeted, in particular, that lowering the threshold for treatment could reduce non-intervention rates and operative mortality. The implication is that there may be scope to improve the cost-effectiveness of management, which in turn will leave greater scope for investment in screening. Could this be the next question to be tackled by this remarkable model?

Establishing the value of diagnostic and prognostic tests in health technology assessment. Medical Decision Making [PubMed] Published 13th March 2018

Keeping on the topic of the cost-effectiveness of screening and diagnostic tests, this is a paper on how to evaluate tests in a manner consistent with health technology assessment principles. This paper has been around for a few months, but it’s only now that I’ve had the chance to give it the careful read that such a well thought out paper deserves.

Marta Soares and colleagues lay out an approach to determine the most cost-effective way to use diagnostic and prognostic tests. They start by explaining that the value of the test is mostly in informing better management decisions. This means that the cost-effectiveness of testing necessarily depends on the cost-effectiveness of management.

The paper also spells out that the cost-effectiveness of testing depends on the prevalence of the disease, as we saw in the paper above on screening for abdominal aortic aneurysm. Clearly, the cost-effectiveness of testing depends on the accuracy of the test.

Importantly, the paper highlights that the evaluation should compare all possible ways of using the test. A decision problem with 1 test and 1 treatment yields 6 strategies, of which 3 are relevant: no test and treat all; no test and treat none; test and treat if positive. If the reference test is added, another 3 strategies need to be considered. This shows how complex a cost-effectiveness analysis of a test can quickly become! In my paper with Marta and others, for example, we ended up with 383 testing strategies.
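The strategy count above can be reproduced with a quick enumeration. The labels are my own shorthand, not the paper’s:

```python
from itertools import product

# Enumerate the strategy space for one test and one treatment:
# either don't test or test, then pick a treatment rule.
test_options = ["no test", "test"]
treatment_rules = ["treat all", "treat none", "treat if positive"]

strategies = list(product(test_options, treatment_rules))
print(len(strategies))  # 6 strategies in total

# "Treat if positive" is incoherent without a test result, and testing
# is wasteful if the result never changes treatment — leaving 3 relevant:
relevant = [
    (t, r) for (t, r) in strategies
    if (t == "no test") != (r == "treat if positive")
]
print(len(relevant))  # 3
```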

The discussion is excellent, particularly about the limitations of end-to-end studies (which compare testing strategies in terms of their end outcomes e.g. health). End-to-end studies can only compare a limited subset of testing strategies and may not allow for the modelling of the outcomes of strategies beyond those compared in the study. Furthermore, end-to-end studies are likely to be inefficient given the large sample sizes and long follow-up required to detect differences in outcomes. I wholeheartedly agree that primary studies should focus on the prevalence of the disease and the accuracy of the test, leaving the evaluation of the best way to use the test to decision modelling.

Reasonable patient care under uncertainty. Health Economics [PubMed] Published 22nd August 2018

And for my third paper for the week, something completely different. But so worth reading! Charles Manski provides an overview of his work on how to use the available evidence to make decisions under uncertainty. It is accompanied by comments from Karl Claxton, Emma McIntosh, and Anirban Basu, together with Manski’s response. The set is a superb read and great food for thought.

Manski starts with the premise that we make decisions about which course of action to take without having full information about what is best; i.e. under uncertainty. This is uncontroversial and well accepted, ever since Arrow’s seminal paper.

More contentious is Manski’s view that clinicians’ decisions for individual patients may be better than guideline recommendations aimed at the ‘average’ patient, because clinicians can take into account more information about the specific individual. I would contend that it is unrealistic to expect clinicians to keep pace with new knowledge in medicine, given how fast and how much of it is generated. Furthermore, clinicians, like all other people, are unlikely to be fully rational in their decision-making.

Most fascinating was Section 6 on decision theory under uncertainty. Manski focussed on the minimax-regret criterion. I had not heard about these approaches before, so Manski’s explanations were quite the eye-opener.
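For readers who, like me, hadn’t met it before, the minimax-regret criterion is easy to illustrate with a toy example. The outcomes below are made up and are not from Manski’s paper:

```python
# Toy minimax-regret example. Rows: candidate treatments; columns:
# two unknown states of the world. Entries: health outcome (higher
# is better). All numbers are invented for illustration.
outcomes = {
    "treatment A": [10, 1],
    "treatment B": [7, 6],
}
states = range(2)

# Regret = shortfall versus the best achievable outcome in that state.
best_in_state = [max(outcomes[t][s] for t in outcomes) for s in states]
regret = {
    t: [best_in_state[s] - outcomes[t][s] for s in states]
    for t in outcomes
}

# Choose the treatment whose worst-case regret is smallest.
choice = min(outcomes, key=lambda t: max(regret[t]))
print(choice)  # treatment B: worst-case regret of 3, versus 5 for A
```

Treatment A is best if state 0 obtains but disastrous in state 1; minimax-regret hedges by picking the option you would regret least in hindsight, whatever the truth turns out to be.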

Manski concludes by recommending that central health care planners take a portfolio approach to their guidelines (adaptive diversification), coupled with the minimax-regret criterion to update the guidelines as more information emerges (adaptive minimax-regret). Whether the minimax-regret criterion is the best is a question that I will leave to better brains than mine. A more immediate question is how feasible it is to implement this adaptive diversification, particularly in instituting a process in which data are systematically collected and analysed to update the guideline. In his response, Manski suggests that specialists in decision analysis should become members of the multidisciplinary clinical team and that decision analysis should be taught in medicine courses. This resonates with my own view that we need to do better in helping people use information to make better decisions.


James Lomas’s journal round-up for 21st May 2018


Decision making for healthcare resource allocation: joint v. separate decisions on interacting interventions. Medical Decision Making [PubMed] Published 23rd April 2018

While it may be uncontroversial that including all of the relevant comparators in an economic evaluation is crucial, a careful examination of this statement raises some interesting questions. Which comparators are relevant? For those that are relevant, how crucial is it that they are not excluded? The answer to the first of these questions may seem obvious – that all feasible, mutually exclusive interventions should be compared – but this is in fact deceptive. Dakin and Gray highlight inconsistency between guidelines as to what constitutes interventions that are ‘mutually exclusive’ and so try to re-frame the distinction according to whether interventions are ‘incompatible’ – when it is physically impossible to implement both interventions simultaneously – and, if not, whether interventions are ‘interacting’ – where the costs and effects of the simultaneous implementation of A and B do not equal the sum of these parts. What I really like about this paper is its very pragmatic focus. Inspired by policy arrangements, for example single technology appraisals, and the difficulty of capturing all interactions, Dakin and Gray provide a reader-friendly flow diagram to illustrate cases where excluding interacting interventions from a joint evaluation is likely to have a big impact, and furthermore propose a sequencing approach that avoids the major problems of evaluating separately what should be considered jointly. Essentially, when we have interacting interventions at different points of the disease pathway, evaluating them separately may not be problematic if we start at the end of the pathway and move backwards, similar to the method of backward induction used in sequential games. There are additional related questions that I’d like to see these authors turn to next, such as how to include interaction effects between interventions and, in particular, how to evaluate system-wide policies that may interact with a very large number of interventions. This paper makes a great contribution to answering all of these questions by establishing a framework that clearly distinguishes concepts that had previously been subject to muddied thinking.

When cost-effective interventions are unaffordable: integrating cost-effectiveness and budget impact in priority setting for global health programs. PLoS Medicine [PubMed] Published 2nd October 2017

In my opinion, there are many things that health economists shouldn’t try to include when they conduct cost-effectiveness analysis. Affordability is not one of them. This paper is great, because Bilinski et al. shine a light on the worldwide phenomenon of interventions being found to be ‘cost-effective’ but not affordable. A particular quote – that it would be financially impossible to implement all interventions found to be ‘very cost-effective’ in many low- and middle-income countries – is quite shocking. Bilinski et al. compare and contrast cost-effectiveness analysis and budget impact analysis, and argue that there are four key reasons why something could be ‘cost-effective’ but not affordable: 1) judging cost-effectiveness with reference to an inappropriate cost-effectiveness ‘threshold’; 2) adoption of a societal perspective that includes costs not falling upon the payer’s budget; 3) failing to make explicit consideration of the distribution of costs over time; and 4) the use of an inappropriate discount rate that may not accurately reflect the borrowing and investment opportunities facing the payer. They then argue that, because of this, cost-effectiveness analysis should be presented alongside budget impact analysis, so that the decision-maker can base a decision on both. I don’t disagree with this as a pragmatic interim solution, but – by highlighting these four reasons for divergence of results with such important economic consequences – I think this paper will have further-reaching implications. To my mind, it essentially serves as a call to arms for researchers to come up with frameworks and estimates to improve the conduct of cost-effectiveness analysis, so that paradoxical results are no longer produced, decisions are more usefully informed, and the opportunity costs of large budget impacts are properly evaluated – especially in the context of low- and middle-income countries, where the forgone health from poor decisions can be so significant.

Patient cost-sharing, socioeconomic status, and children’s health care utilization. Journal of Health Economics [PubMed] Published 16th April 2018

This paper evaluates a policy using a combination of regression discontinuity design and difference-in-differences methods. Not only does it do that, but it tackles an important policy question using a detailed population-wide dataset (a set of linked datasets, more accurately). As if that weren’t enough, one of the policy reforms was actually implemented as a result of a vote in which two politicians ‘accidentally pressed the wrong button’, reducing concerns that the policy may have been in some way endogenous. Needless to say, I found the identification strategy employed in this paper pretty convincing. The policy question at hand is whether demand for GP visits for children in the Swedish county of Scania (Skåne) is affected by cost-sharing. Cost-sharing for GP visits has applied to different age groups over different periods of time, providing the basis for regression discontinuities around the age threshold and for treated and control groups over time. Nilsson and Paul find results suggesting that when health care is free of charge, doctor visits by children increase by 5–10%. In this context, doctor visits were subject to telephone triage by a nurse, so in this sense it can be argued that all of these visits were ‘needed’. Further, Nilsson and Paul find that the sensitivity to price is concentrated in low-income households and is greater among sicker children. The authors contextualise their results very well and, in addition to that context, I can’t deny that it particularly resonated with me to read this approaching the 70th birthday of the NHS – a system where cost-sharing has never been implemented for GP visits by children. This paper is clearly highly relevant to that debate, which has surfaced again and again in the UK.
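The regression discontinuity logic can be sketched with simulated data. Everything below is invented – the threshold age, effect size and age trend are illustrative assumptions, not the authors’ estimates:

```python
import random

# Toy simulation of the regression-discontinuity idea: compare GP-visit
# rates just either side of the age threshold at which cost-sharing
# begins. All numbers are made up for illustration.
random.seed(0)

THRESHOLD = 12         # hypothetical age at which visits stop being free
TRUE_EFFECT = 0.08     # free care raises the visit rate by 8 points

def simulated_visit_rate(age):
    """Visit probability: a smooth age trend plus a jump at the cut-off."""
    base = 0.30 + 0.005 * age
    jump = TRUE_EFFECT if age < THRESHOLD else 0.0
    return base + jump + random.gauss(0, 0.05)

# Naive RD estimate: mean difference in a narrow bandwidth around the
# threshold.
below = [simulated_visit_rate(a) for a in (10, 11) for _ in range(2000)]
above = [simulated_visit_rate(a) for a in (12, 13) for _ in range(2000)]
rd_estimate = sum(below) / len(below) - sum(above) / len(above)
print(round(rd_estimate, 3))
```

A real analysis would, among other things, model the age trend on each side of the cut-off rather than take raw means, which is why this naive bandwidth comparison slightly understates the true jump.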
