Rachel Houten’s journal round-up for 22nd April 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

To HTA or not to HTA: identifying the factors influencing the rapid review outcome in Ireland. Value in Health [PubMed] Published 6th March 2019

National health services are constantly under pressure to provide access to new medicines as soon as marketing authorisation is granted. The National Centre for Pharmacoeconomics (NCPE) in the Republic of Ireland has a rapid review process for selecting the medicines that require a full health technology assessment (HTA); the rest – approximately 45% – can be reimbursed without such an in-depth analysis.

Formal criteria do not exist. However, it has previously been suggested that the decision takes into account the robustness of the clinical evidence of at least equivalence; whether the drug costs the same as or less than its comparators; an annual (or estimated) budget impact of less than €0.75 million to €1 million; and the ability of the current health system to restrict usage.

The authors of this paper used the allocation over the past eight years to explore the factors that drive the decision to embark on a full HTA. They found, unsurprisingly, that first-in-class medicines are more likely to require an HTA, as are those with orphan status. Interestingly, the clinical area influenced the requirement for a full HTA, but the authors consider all of these factors to indicate that high-cost drugs are more likely to require a full assessment. Drug cost information is not publicly available, so the authors used the data available on the Scottish Medicines Consortium website as a surrogate for costs in Ireland. In doing so, they were able to establish a relationship between the cost per person for each drug and the likelihood of the drug having a full HTA, further supporting the idea that more expensive drugs are more likely to require HTA. On the face of it, this seems eminently sensible. However, my concern is that, in a system that is designed to deliberately measure cost per unit of health care (usually QALYs), there is the potential for lower-cost but ineffective drugs to become commonplace while more expensive medicines are subject to more rigour.

The paper provides some insight into what drives a decision to undertake a full HTA in Ireland. The NICE fast-track appraisal system operates as an opt-in system where manufacturers can ask to follow this shorter appraisal route if their drug is likely to produce an ICER of £10,000 per QALY or less. As my day job is for an Evidence Review Group (opinions my own), how things are done elsewhere – unsurprisingly – captured my attention. The desire to speed up the HTA process is obvious, but the most appropriate mechanisms by which to do so are far from it. Whether or not the same decision is ultimately made is what concerns me.

NHS joint working with industry is out of public sight. BMJ [PubMed] Published 27th March 2019

This paper suggests that ‘joint working arrangements’ – a government-supported initiative between pharmaceutical companies and the NHS – are not being implemented according to guidelines on transparency. These arrangements are designed to promote collaborative research between the NHS and industry and help advance NHS provision of services.

The authors used freedom of information requests to obtain details on how many trusts were involved in joint working arrangements in 2016 and 2017. The declarations of payments made by drug companies are disclosed, but the corresponding information from trusts is less readily accessible and, in some cases, was withheld entirely. Theoretically, the joint working arrangements are supposed to be void of any commercial influence on what is prescribed, but my thoughts are echoed in this paper when it asks "what's in it for the private sector?" The sheer fact that some NHS trusts were unwilling to provide the BMJ with the information requested due to 'commercial interest' rings huge alarm bells.

I’m not completely cynical of these arrangements in principle, though, and the paper cites a couple of projects that involved building new facilities for age-related macular degeneration, which likely offer benefits to patients, and possibly much faster than could have been achieved with NHS funding alone. Some of the arrangements intend to push the implementation of national guidance, which, as a small cog in the guidance generation machine, I unashamedly (and predictably) think is a good thing.

Does it matter to us? As economists, it means that any work based on national practice and costs is likely to be unrepresentative of what actually happens. This, however, has always been the case to some extent, with variations in local service provision and the negotiation power of trusts with large volumes of patients. A national register of the arrangements would have the potential to feed into economic analysis, even if just as a statement of awareness.

Can the NHS survive without getting into bed with industry? Probably not. I think the paper does a good job of presenting the arguments on all sides and pushing for increased transparency about what is happening.

Estimating joint health condition utility values. Value in Health [PubMed] Published 22nd February 2019

I’m really interested in how this area is developing. Multi-morbidity is the norm, especially as we age. Single condition models are criticised for their lack of representation of patients in the real world. Appropriately estimating the quality of life of people with several chronic conditions, when only individual condition data are available, is incredibly difficult.

In this paper, parametric and non-parametric methods were tested on a dataset from a large primary care patient survey in the UK. The multiplicative approach was the best performing for two conditions. When more than two conditions were considered, the linear index (which incorporates additive, multiplicative, and minimum models with the use of linear regression and parameter weights derived from the underlying data) achieved the best results.
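The combination rules being compared are simple to sketch. Below is a minimal illustration of the additive, multiplicative, and minimum approaches, plus a linear-index-style weighted blend – all utility values and weights are invented for this sketch, and the real linear index derives its weights by regression on the underlying joint-condition data:

```python
def additive(u1, u2):
    # Subtract both conditions' decrements from full health (1.0)
    return 1.0 - ((1.0 - u1) + (1.0 - u2))

def multiplicative(u1, u2):
    # Treat each condition's utility as a proportional multiplier
    return u1 * u2

def minimum(u1, u2):
    # Assume the worse of the two conditions drives quality of life
    return min(u1, u2)

def linear_index(u1, u2, w=(0.3, 0.5, 0.2)):
    # Weighted blend of the three estimators; in the paper the weights
    # are estimated from observed data -- these are invented here
    return (w[0] * additive(u1, u2)
            + w[1] * multiplicative(u1, u2)
            + w[2] * minimum(u1, u2))

# Invented single-condition utilities for two hypothetical conditions
u_a, u_b = 0.80, 0.70
print(round(additive(u_a, u_b), 2))        # 0.5
print(round(multiplicative(u_a, u_b), 2))  # 0.56
print(round(minimum(u_a, u_b), 2))         # 0.7
```

Note how the additive rule gives the most pessimistic joint estimate and the minimum rule the most optimistic, with the multiplicative rule in between – which is one intuition for why it performed best for condition pairs.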

Including long-term mental health within the co-morbidities for which utility was estimated produced biased estimates. The authors discuss some possible explanations for this, including the fact that the anxiety and depression question in the EQ-5D is the only one which directly maps to an individual condition, and that mental health may have a causal effect on physical health. This is a fascinating finding, which has left me somewhat scratching my head as to how this oddity could be addressed and if separate methods of estimation will need to be used for any population with multi-morbidity including mental health conditions.

It did make me wonder if more precise EQ-5D data could be helpful to uncover the true interrelationships between joint health conditions and quality of life. The EQ-5D asks patients to think about their health state ‘today’. Although the primary care dataset used includes 16 chronic health conditions, it doesn’t, as far as I know, contain any information on the symptoms apparent on the day of quality of life assessment, which could be flaring or absent at any given time. This is a common problem with the EQ-5D, and I don’t think a readily available data source of this type exists, so this is more an ideal than a practical suggestion. Unsurprisingly, the more joint health conditions to be considered, the larger the error in estimation from individual conditions. This may be due to the increasing likelihood of overlap in the symptoms experienced across conditions, and thus a violation of the assumption that quality of life for an individual condition is independent of any other condition.

Whether the methodology remains robust for populations outside of the UK or for other measures of utility would need to be tested, and the authors are keen to highlight the need for caution before running away and using the methods verbatim. The paper does present a nice summary of the evidence to date in this area, what the authors did, and what it adds to the topic, so worth a read.

Credits

Simon McNamara’s journal round-up for 8th April 2019

National Institute for Health and Care Excellence, social values and healthcare priority setting. Journal of the Royal Society of Medicine [PubMed] Published 2nd April 2019

As is traditional, this week’s round-up starts with an imaginary birthday party. After much effort, we have finally managed to light the twenty candles, have agreed our approach to the distribution of the cake, and are waiting in anticipation of the entrance of the birthday “quasi-autonomous non-governmental body”. The door opens. You clear your throat. Here we go…

Happy Birthday to you,

Happy Birthday to you,

Happy Birthday dear National Institute for Health and Care Excellence,

Happy Birthday to you.

NICE smiles happily. It is no longer a teenager. It has made it to 20 – despite its parents challenging it a few times (cough, Cancer Drug Fund, cough). After the candles have been blown out, someone at the back shouts: “Speech! Speech!”. NICE coughs, thanks everyone politely, and (admittedly slightly strangely) takes the opportunity to announce that they are revising their “Social Value Judgements” paper – a document that outlines the principles they use to develop guidance. They then proceed to circle the room, proudly handing out draft copies of the new document – “The principles that guide the development of NICE guidance and standards” (PDF). They look excited. Your fellow guests start to read.

“Surely not?”, “What the … ?”, “Why?” – they don’t seem pleased. You jump into the document. All of this is about process. Where are all the bits about justice, and inequalities, and bioethics, and the rest? “Why have you taken out loads of the good stuff?” you ask. “This is too vague, too procedural”. Your disappointment is obvious to those in the room.

Your phone pings – it’s your favourite WhatsApp group. One of the other guests has already started drafting a “critical friend” paper in the corner of the room. They want to know if you want to be involved. “I’m in”, you respond, “This is important, we need to make sure NICE knows what we think”. Your phone pings again. Another guest is in: “I want to be involved, this matters. Also, this is exactly the kind of paper that will get picked up by the AHE blog. If we are lucky, we might even be the first paper in one of their journal round-ups”. You pause, think, and respond hopefully: “Fingers crossed”.

I don’t know if NICE had an actual birthday party – if they did I certainly wasn’t invited. I also highly doubt that the authors of this week’s first paper, or indeed any paper, had the AHE blog in mind when writing. What I do know is that the first article is indeed a “critical friend” paper which outlines the authors’ concerns with NICE’s proposal to “revise” (read: delete) their social value judgements guidance. This paper is relatively short, so if you are interested in these changes I suggest you read it, rather than relying on my imaginary birthday party version of their concerns.

I am highly sympathetic to the views expressed in this paper. The existing “social value judgements” document is excellent, and (to me at least) seems to be the gold standard in setting the values by which an HTA body should develop guidance. Reducing this down to solely procedural elements seems unnecessary, and potentially harmful if the other core values are forgotten, or deprioritised.

As I reflect on this paper, I can’t help think of the old adage: “If it ain’t broke, don’t fix it”. NICE – this ain’t broke.

Measuring survival benefit in health technology assessment in the presence of nonproportional hazards. Value in Health Published 22nd March 2019

Dear HTA bodies that don’t routinely look for violations of proportional hazards in oncology data: 2005 called, they want their methods back.

Seriously though, it’s 2019. Things have moved on. If a new drug has a different mode of action to its comparator, is given for a different duration, or has differing levels of treatment effect in different population subgroups, there are good reasons to think that the trial data for that drug might violate proportional hazards. So why not look? It’s easy enough, and could change the way you think about both the costs and the benefits of that medicine.

If you haven’t worked in oncology before, there is a good chance you are currently asking yourself two questions: “what does proportional hazards mean?” and “why does it matter?”. In massively simplified terms, when we say the hazards in a trial are “proportional” we mean that the treatment effect of the new intervention (typically on survival) is constant over time. If a treatment takes some time to work (e.g. immunotherapies), or is given for only a few weeks before being stopped (e.g. some chemotherapies), there are good reasons to think that the treatment effect of that intervention may vary over time. If this is the case, there will be a violation of proportional hazards (they will be “nonproportional”).
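The delayed-effect case can be sketched in a few lines. All numbers below are invented for illustration: a treatment that does nothing for the first three months and then halves the control-arm hazard produces a hazard ratio that changes over time, which is exactly the violation being described:

```python
H_CONTROL = 0.10  # invented constant monthly hazard in the control arm

def treatment_hazard(t, delay=3.0, late_hr=0.5):
    # A treatment that only starts working after `delay` months
    # (e.g. an immunotherapy): same hazard as control at first, then halved
    return H_CONTROL if t < delay else H_CONTROL * late_hr

def hazard_ratio(t):
    return treatment_hazard(t) / H_CONTROL

# The hazard ratio varies over time, so proportional hazards is violated
print(hazard_ratio(1.0))   # 1.0 -- no effect yet
print(hazard_ratio(12.0))  # 0.5 -- full effect
```

Any single "average" hazard ratio fitted to data like this lands somewhere between 1.0 and 0.5 and misrepresents both periods.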

If you are an HTA body, this is important for at least three reasons. First, if hazards are non-proportional, the average hazard ratio (treatment effect) from the trial may be a poor representation of what is likely to happen beyond the trial period – a big issue if you are extrapolating data in an economic model. Second, if hazards are non-proportional, the median survival benefit from the trial may be a poor representation of the mean benefit (e.g. in the case of a curve with a “big tail”). If you don’t account for this and rely on medians (as some HTA bodies do), your evaluation may under-estimate, or over-estimate, the true benefits and costs of the medicine. Third, most approaches to including indirect comparisons in economic models rely on proportionality, so, if this doesn’t hold, your model might be a poor representation of reality. Given these issues, it makes sense that HTA bodies should be looking for violations of proportional hazards when evaluating oncology data.
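The median-versus-mean point can be made concrete with a toy mixture-cure model – every parameter here is invented for the sketch. Suppose 20% of treated patients are long-term survivors (the "big tail") while the rest behave like the control arm:

```python
import math

def surv_control(t):
    # Exponential survival, mean 12 months
    return math.exp(-t / 12.0)

def surv_treatment(t):
    # 80% of patients behave like control; 20% are long-term survivors
    # with mean 120 months -- the "big tail"
    return 0.8 * math.exp(-t / 12.0) + 0.2 * math.exp(-t / 120.0)

def median_survival(surv):
    # Bisection for the time at which the survival curve crosses 0.5
    lo, hi = 0.0, 2000.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if surv(mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

mean_gain = (0.8 * 12.0 + 0.2 * 120.0) - 12.0  # 33.6 - 12.0 = 21.6 months
median_gain = median_survival(surv_treatment) - median_survival(surv_control)

print(round(mean_gain, 1))    # 21.6
print(round(median_gain, 1))  # roughly 2.8
```

In this toy example the median benefit is under three months while the mean benefit is over twenty – an HTA body relying on medians would miss most of the value (and most of the cost) sitting in the tail.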

In this week’s second paper, the authors review the way different HTA bodies approach the issue of non-proportionality in their methods guides, and in a sample of their appraisals. Of the HTA bodies considered, they find that only NICE (UK), CADTH (Canada), and PBAC (Australia) recommend testing for proportional hazards. Notably, the authors report that the Transparency Committee (France), IQWiG (Germany), and TLV (Sweden) don’t recommend testing for proportionality. Interestingly, despite these recommendations, the authors find that the majority of the NICE appraisals they reviewed included these tests, but only 20% of the PBAC appraisals and 8% of the CADTH appraisals did. This suggests that the vast majority of oncology drug evaluations do not include consideration of non-proportionality – a big concern given the issues outlined above.

I liked this paper, although I was a bit shocked at the results. If you work for an HTA body that doesn’t recommend testing for non-proportionality, or doesn’t enforce their existing recommendations, I suggest you think very carefully about this issue – particularly if you rely on the extrapolation of survival curves in your assessments. If you aren’t looking for violations of proportional hazards, there is a good chance that you aren’t reflecting the true costs and benefits of many medicines in your evaluations. So, why not look for them?

The challenge of antimicrobial resistance: what economics can contribute. Science Published 5th April 2019

Health Economics doesn’t normally make it into Science (the journal). If it does, it probably means the paper is an important one. This one certainly is.

Antimicrobial resistance (AMR) is scary – really scary. One source cited in this paper predicts that by 2050, 10 million people a year will die due to AMR. I don’t know about you, but I find this pretty worrying (how’s that for a bit of British understatement?). Given these predicted consequences, you would think that there would be quite a lot of work from economists on this issue. Well, there isn’t. According to this article, there are only 55 papers on EconLit that “broadly relate” to AMR.

This paper contributes to this literature in two important ways. First, it is a call to arms to economists to do more work on AMR. If there are only 55 papers on this topic, this suggests we are only scratching the surface of the issue and could do more as a field to contribute to solving the problem. Second, it neatly demonstrates how economics could be applied to the problem of AMR – including analysis of both the supply side (not enough new antibiotics being developed) and demand side (too much antibiotic use) of the problem.

In the main body of the paper, the authors draw parallels between the economics of AMR and the economics of climate change: both are global instances of the ‘tragedy of the commons’, both are subject to significant uncertainty about the future, and both are highly sensitive to inter-temporal discounting. They then go on to suggest that many of the ideas developed in the context of climate change could be applied to AMR – including the potential for use of antibiotic prescribing quotas (analogous to carbon quotas) and taxation of antibiotic prescriptions (analogous to the idea of a carbon tax). There are many other ideas in the paper, and if you are interested in these I suggest you take the time to read it in full.

I think this is an important paper and one that has made me think more about the economics of both AMR and, inadvertently, climate change. With both issues, I can’t help but think we might be sleepwalking into a world where we have royally screwed over future generations because we didn’t take the actions we needed to take. If economists can help stop these things happening, we need to act. If we don’t, what will you say in 2050 when you turn on the news and see that 10 million people are dying from AMR each year? That is, assuming you aren’t one of those who has died as a result. Scary stuff indeed.

Chris Sampson’s journal round-up for 19th November 2018

Valuation of health states considered to be worse than death—an analysis of composite time trade-off data from 5 EQ-5D-5L valuation studies. Value in Health Published 12th November 2018

I have a problem with the idea of health states being ‘worse than dead’, and I’ve banged on about it on this blog. Happily, this new article provides an opportunity for me to continue my campaign. Health state valuation methods estimate how much a person prefers being in a more healthy state. Positive values are easy to understand; 1.0 is twice as good as 0.5. But how about the negative values? Is -1.0 twice as bad as -0.5? How much worse than being dead is that? The purpose of this study is to evaluate whether or not negative EQ-5D-5L values meaningfully discriminate between different health states.

The study uses data from EQ-5D-5L valuation studies conducted in Singapore, the Netherlands, China, Thailand, and Canada. Altogether, more than 5000 people provided valuations of 10 states each. As a simple measure of severity, the authors summed the number of steps from full health in all domains, giving a value from 0 (11111) to 20 (55555). We’d expect this measure of severity of states to correlate strongly with the mean utility values derived from the composite time trade-off (TTO) exercise.
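The severity measure is easy to reproduce. For a five-digit EQ-5D-5L state such as '21345', sum how many levels each dimension sits above level 1 – a sketch of the scoring described above, not the authors' code:

```python
def severity(state: str) -> int:
    """Sum of steps from full health across the five EQ-5D-5L dimensions.
    '11111' (full health) -> 0; '55555' (worst state) -> 20."""
    levels = [int(ch) for ch in state]
    assert len(levels) == 5 and all(1 <= lv <= 5 for lv in levels)
    return sum(lv - 1 for lv in levels)

print(severity("11111"))  # 0
print(severity("55555"))  # 20
print(severity("21345"))  # 10
```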

Taking Singapore as an example, the mean of positive values (states better than dead) decreased from 0.89 to 0.21 with increasing severity, which is reassuring. The mean of negative values, on the other hand, ranged from -0.98 to -0.89. Negative values were clustered between -0.5 and -1.0. Results were similar across the other countries. In all except Thailand, observed negative values were indistinguishable from random noise. There was no decreasing trend in mean utility values as severity increased for states worse than dead. A linear mixed model with participant-specific intercepts and an ANOVA model confirmed the findings.

What this means is that we can’t say much about states worse than dead except that they are worse than dead. How much worse doesn’t relate to severity, which is worrying if we’re using these values in trade-offs against states better than dead. Mostly, the authors frame this lack of discriminative ability as a practical problem, rather than anything more fundamental. The discussion section provides some interesting speculation, but my favourite part of the paper is an analogy, which I’ll be quoting in future: “it might be worse to be lost at sea in deep waters than in a pond, but not in any way that truly matters”. Dead is dead is dead.

Determining value in health technology assessment: stay the course or tack away? PharmacoEconomics [PubMed] Published 9th November 2018

The cost-per-QALY approach to value in health care is no stranger to assault. The majority of criticisms are ill-founded special pleading, but, sometimes, reasonable tweaks and alternatives have been proposed. The aim of this paper was to bring together a supergroup of health economists to review and discuss these reasonable alternatives. Specifically, the questions they sought to address were: i) what should health technology assessment achieve, and ii) what should be the approach to value-based pricing?

The paper provides an unstructured overview of a selection of possible adjustments or alternatives to the cost-per-QALY method. We’re very briefly introduced to QALY weighting, efficiency frontiers, and multi-criteria decision analysis. The authors don’t tell us why we ought (or ought not) to adopt these alternatives. I was hoping that the paper would provide tentative answers to the normative questions posed, but it doesn’t do that. It doesn’t even outline the thought processes required to answer them.

The purpose of this paper seems to be to argue that alternative approaches aren’t sufficiently developed to replace the cost-per-QALY approach. But it’s hardly a strong defence. I’m a big fan of the cost-per-QALY as a necessary (if not sufficient) part of decision making in health care, and I agree with the authors that the alternatives are lacking in support. But the lack of conviction in this paper scares me. It’s tempting to make a comparison between the EU and the QALY.

How can we evaluate the cost-effectiveness of health system strengthening? A typology and illustrations. Social Science & Medicine [PubMed] Published 3rd November 2018

Health care is more than the sum of its parts. This is particularly evident in low- and middle-income countries that might lack strong health systems and which therefore can’t benefit from a new intervention in the way a strong system could. Thus, there is value in health system strengthening. But, as the authors of this paper point out, this value can be difficult to identify. The purpose of this study is to provide new methods to model the impact of health system strengthening in order to support investment decisions in this context.

The authors introduce standard cost-effectiveness analysis and economies of scope as relevant pieces of the puzzle. In essence, this paper is trying to marry the two. An intervention is more likely to be cost-effective if it helps to provide economies of scope, either by making use of an underused platform or providing a new platform that would improve the cost-effectiveness of other interventions. The authors provide a typology with three types of health system strengthening: i) investing in platform efficiency, ii) investing in platform capacity, and iii) investing in new platforms. Examples are provided for each. Simple mathematical approaches to evaluating these are described, using scaling factors and disaggregated cost and outcome constraints. Numerical demonstrations show how these approaches can reveal differences in cost-effectiveness that arise through changes in technical efficiency or the opportunity cost linked to health system strengthening.
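The economies-of-scope intuition behind type (i), investing in platform efficiency, can be sketched crudely – all numbers below are invented and this is not the authors' model. The point is simply that an intervention delivered on an underused shared platform looks very different depending on whether the platform's fixed cost is loaded onto it or treated as shared:

```python
# Invented figures for a new intervention delivered on an existing,
# underused platform (e.g. a clinic network)
platform_fixed_cost = 100_000.0   # annual running cost of the platform
new_variable_cost = 30_000.0      # incremental delivery cost of the new service
new_dalys_averted = 150.0         # health gain from the new service

# Naive evaluation: attribute the full platform cost to the new intervention
naive_icer = (new_variable_cost + platform_fixed_cost) / new_dalys_averted

# Economies-of-scope evaluation: the platform is shared, so only the
# incremental (variable) cost is attributed to the new intervention
scope_icer = new_variable_cost / new_dalys_averted

print(round(naive_icer))  # 867 per DALY averted
print(round(scope_icer))  # 200 per DALY averted
```

The same arithmetic run in reverse is what gives platform investment its value: a new platform that lowers the delivery cost of several interventions can be cost-effective even if no single service on it would justify the fixed cost alone.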

This paper is written with international development investment decisions in mind, and in particular the challenge of investments that can mostly be characterised as health system strengthening. But it’s easy to see how many – perhaps all – health services are interdependent. If anything, the broader impact of new interventions on health systems should be considered as standard. The methods described in this paper provide a useful framework to tackle these issues, with food for thought for anybody engaged in cost-effectiveness analysis.
