Rachel Houten’s journal round-up for 22nd April 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

To HTA or not to HTA: identifying the factors influencing the rapid review outcome in Ireland. Value in Health [PubMed] Published 6th March 2019

National health services are constantly under pressure to provide access to new medicines as soon as marketing authorisation is granted. The NCPE in the Republic of Ireland has a rapid review process for selecting the medicines that require a full health technology assessment (HTA); the remainder, approximately 45%, can be reimbursed without such an in-depth analysis.

Formal criteria do not exist. However, it has previously been suggested that several factors feed into the decision: robust clinical evidence of at least equivalence; a price that is the same as or lower than the comparator's; an annual (or estimated) budget impact of less than €0.75 million to €1 million; and the ability of the current health system to restrict usage.

The authors of this paper used the allocation over the past eight years to explore the factors that drive the decision to embark on a full HTA. They found, unsurprisingly, that first-in-class medicines are more likely to require an HTA, as are those with orphan status. Interestingly, the clinical area influenced the requirement for a full HTA, but the authors consider all of these factors to indicate that high-cost drugs are more likely to require a full assessment. Drug cost information is not publicly available, so the authors used the data available on the Scottish Medicines Consortium website as a surrogate for costs in Ireland. In doing so, they were able to establish a relationship between the cost per person for each drug and the likelihood of the drug having a full HTA, further supporting the idea that more expensive drugs are more likely to require one. On the face of it, this seems eminently sensible. However, my concern is that, in a system deliberately designed to measure cost per unit of health care (usually QALYs), there is the potential for lower-cost but ineffective drugs to become commonplace while more expensive medicines are subject to more rigour.

The paper provides some insight into what drives a decision to undertake a full HTA in Ireland. The NICE fast-track appraisal system operates as an opt-in system whereby manufacturers can ask to follow this shorter appraisal route if their drug is likely to produce an ICER of £10,000 or less. As my day job is for an Evidence Review Group (opinions my own), how things are done elsewhere – unsurprisingly – captured my attention. The desire to speed up the HTA process is obvious, but the most appropriate mechanisms by which to do so are far from it. Whether or not the same decision is ultimately made is what concerns me.

NHS joint working with industry is out of public sight. BMJ [PubMed] Published 27th March 2019

This paper suggests that ‘joint working arrangements’ – a government-supported initiative between pharmaceutical companies and the NHS – are not being implemented according to guidelines on transparency. These arrangements are designed to promote collaborative research between the NHS and industry and help advance NHS provision of services.

The authors used freedom of information requests to obtain details on how many trusts were involved in joint working arrangements in 2016 and 2017. The declarations of payments made by drug companies are disclosed, but the corresponding information from trusts is less readily accessible, and in some cases access to any details was prevented. Theoretically, the joint working arrangements are supposed to be free of any commercial influence on what is prescribed, but my thoughts are echoed in this paper when it asks “what’s in it for the private sector?” The very fact that some NHS trusts were unwilling to provide the BMJ with the information requested, due to ‘commercial interest’, rings huge alarm bells.

I’m not completely cynical about these arrangements in principle, though, and the paper cites a couple of projects that involved building new facilities for age-related macular degeneration, which likely offer benefits to patients, and possibly much faster than could have been achieved with NHS funding alone. Some of the arrangements intend to push the implementation of national guidance, which, as a small cog in the guidance generation machine, I unashamedly (and predictably) think is a good thing.

Does it matter to us? As economists, it means that any work based on national practice and costs is likely to be unrepresentative of what actually happens. This, however, has always been the case to some extent, with variations in local service provision and the negotiation power of trusts with large volumes of patients. A national register of the arrangements would have the potential to feed into economic analysis, even if just as a statement of awareness.

Can the NHS survive without getting into bed with industry? Probably not. I think the paper does a good job of presenting the arguments on all sides and pushing for greater transparency about what is happening.

Estimating joint health condition utility values. Value in Health [PubMed] Published 22nd February 2019

I’m really interested in how this area is developing. Multi-morbidity is the norm, especially as we age. Single condition models are criticised for their lack of representation of patients in the real world. Appropriately estimating the quality of life of people with several chronic conditions, when only individual condition data are available, is incredibly difficult.

In this paper, parametric and non-parametric methods were tested on a dataset from a large primary care patient survey in the UK. The multiplicative approach was the best performing for two conditions. When more than two conditions were considered, the linear index (which incorporates additive, multiplicative, and minimum models with the use of linear regression and parameter weights derived from the underlying data) achieved the best results.
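As a rough sketch of what these combination rules look like in practice, here are the additive, multiplicative, and minimum models. The utility values and condition labels below are my own invention for illustration, not data from the study:

```python
# Illustrative sketch of the combination rules compared in the paper.
# The utility values below are made up for demonstration; they are not
# taken from the study's data.

def additive(utils, baseline=1.0):
    """Subtract each condition's utility decrement from the baseline."""
    return baseline - sum(baseline - u for u in utils)

def multiplicative(utils, baseline=1.0):
    """Multiply the proportional decrements together."""
    result = baseline
    for u in utils:
        result *= u / baseline
    return result

def minimum(utils):
    """Assume the worst single condition determines overall utility."""
    return min(utils)

# Hypothetical single-condition utilities for two chronic conditions
condition_a, condition_b = 0.85, 0.78

print(round(additive([condition_a, condition_b]), 3))        # 0.63
print(round(multiplicative([condition_a, condition_b]), 3))  # 0.663
print(round(minimum([condition_a, condition_b]), 3))         # 0.78
```

As I read the paper, the linear index then combines the outputs of models like these via regression, with weights estimated from the underlying data.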

Including long-term mental health within the co-morbidities for which utility was estimated produced biased estimates. The authors discuss some possible explanations for this, including the fact that the anxiety and depression question in the EQ-5D is the only one which directly maps to an individual condition, and that mental health may have a causal effect on physical health. This is a fascinating finding, which has left me somewhat scratching my head as to how this oddity could be addressed and if separate methods of estimation will need to be used for any population with multi-morbidity including mental health conditions.

It did make me wonder if more precise EQ-5D data could be helpful to uncover the true interrelationships between joint health conditions and quality of life. The EQ-5D asks patients to think about their health state ‘today’. Although the primary care dataset used includes 16 chronic health conditions, it doesn’t, as far as I know, contain any information on the symptoms apparent on the day of quality of life assessment, which could be flaring or absent at any given time. This is a common problem with the EQ-5D and I don’t think a readily available data source of this type exists, so it’s a thought on ideals. Unsurprisingly, the more joint health conditions to be considered, the larger the error in terms of estimation from individual conditions. This may be due to the increasing likelihood of overlap in the symptoms experienced across conditions and thus a violation of the assumption that quality of life for an individual condition is independent of any other condition.

Whether the methodology remains robust for populations outside of the UK or for other measures of utility would need to be tested, and the authors are keen to highlight the need for caution before running away and using the methods verbatim. The paper does present a nice summary of the evidence to date in this area, what the authors did, and what it adds to the topic, so worth a read.

Credits

Simon McNamara’s journal round-up for 8th April 2019


National Institute for Health and Care Excellence, social values and healthcare priority setting. Journal of the Royal Society of Medicine [PubMed] Published 2nd April 2019

As is traditional, this week’s round-up starts with an imaginary birthday party. After much effort, we have finally managed to light the twenty candles, have agreed our approach to the distribution of the cake, and are waiting in anticipation of the entrance of the birthday “quasi-autonomous non-governmental body”. The door opens. You clear your throat. Here we go…

Happy Birthday to you,

Happy Birthday to you,

Happy Birthday dear National Institute for Health and Care Excellence,

Happy Birthday to you.

NICE smiles happily. It is no longer a teenager. It has made it to 20 – despite its parents challenging it a few times (cough, Cancer Drugs Fund, cough). After the candles have been blown out, someone at the back shouts: “Speech! Speech!”. NICE coughs, thanks everyone politely, and (admittedly slightly strangely) takes the opportunity to announce that they are revising their “Social Value Judgements” paper – a document that outlines the principles they use to develop guidance. They then proceed to circle the room, proudly handing out draft copies of the new document – “The principles that guide the development of NICE guidance and standards” (PDF). They look excited. Your fellow guests start to read.

“Surely not?”, “What the … ?”, “Why?” – they don’t seem pleased. You jump into the document. All of this is about process. Where are all the bits about justice, and inequalities, and bioethics, and the rest? “Why have you taken out loads of the good stuff?” you ask. “This is too vague, too procedural”. Your disappointment is obvious to those in the room.

Your phone pings – it’s your favourite WhatsApp group. One of the other guests has already started drafting a “critical friend” paper in the corner of the room. They want to know if you want to be involved. “I’m in”, you respond, “This is important, we need to make sure NICE knows what we think”. Your phone pings again. Another guest is in: “I want to be involved, this matters. Also, this is exactly the kind of paper that will get picked up by the AHE blog. If we are lucky, we might even be the first paper in one of their journal round-ups”. You pause, think, and respond hopefully: “Fingers crossed”.

I don’t know if NICE had an actual birthday party – if they did I certainly wasn’t invited. I also highly doubt that the authors of this week’s first paper, or indeed any paper, had the AHE blog in mind when writing. What I do know, is that the first article is indeed a “critical friend” paper which outlines the authors’ concerns with NICE’s proposal to “revise” (read: delete) their social value judgements guidance. This paper is relatively short, so if you are interested in these changes I suggest you read it, rather than relying on my imaginary birthday party version of their concerns.

I am highly sympathetic to the views expressed in this paper. The existing “social value judgements” document is excellent, and (to me at least) seems to be the gold standard in setting the values by which an HTA body should develop guidance. Reducing this down to solely procedural elements seems unnecessary, and potentially harmful if the other core values are forgotten, or deprioritised.

As I reflect on this paper, I can’t help think of the old adage: “If it ain’t broke, don’t fix it”. NICE – this ain’t broke.

Measuring survival benefit in health technology assessment in the presence of nonproportional hazards. Value in Health Published 22nd March 2019

Dear HTA bodies that don’t routinely look for violations of proportional hazards in oncology data: 2005 called, they want their methods back.

Seriously though, it’s 2019. Things have moved on. If a new drug has a different mode of action to its comparator, is given for a different duration, or has differing levels of treatment effect in different population subgroups, there are good reasons to think that the trial data for that drug might violate proportional hazards. So why not look? It’s easy enough, and could change the way you think about both the costs and the benefits of that medicine.

If you haven’t worked in oncology before, there is a good chance you are currently asking yourself two questions: “what does proportional hazards mean?” and “why does it matter?”. In massively simplified terms, when we say the hazards in a trial are “proportional” we mean that the treatment effect of the new intervention (typically on survival) is constant over time. If a treatment takes some time to work (e.g. immunotherapies), or is given for only a few weeks before being stopped (e.g. some chemotherapies), there are good reasons to think that the treatment effect of that intervention may vary over time. If this is the case, there will be a violation of proportional hazards (they will be “nonproportional”).

If you are an HTA body, this is important for at least three reasons. First, if hazards are non-proportional, the average hazard ratio (treatment effect) from the trial may be a poor representation of what is likely to happen beyond the trial period – a big issue if you are extrapolating data in an economic model. Second, if hazards are non-proportional, the median survival benefit from the trial may be a poor representation of the mean benefit (e.g. in the case of a curve with a “big tail”). If you don’t account for this, and rely on medians (as some HTA bodies do), your evaluation may under-estimate, or over-estimate, the true benefits and costs of the medicine. Third, most approaches to indirect comparison in economic models rely on proportionality, so, if this doesn’t hold, your model might be a poor representation of reality. Given these issues, it makes sense that HTA bodies should be looking for violations of proportional hazards when evaluating oncology data.
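To make the median-versus-mean point concrete, here is a toy simulation of a delayed treatment effect. The hazard rates and six-month delay are invented for illustration; this is purely my sketch, not anything from the paper:

```python
import random

# Toy simulation of a delayed treatment effect (e.g. an immunotherapy).
# All hazard rates and the six-month delay are invented for illustration.
random.seed(1)

CONTROL_HAZARD = 0.10  # events per month, constant over time
TREATED_HAZARD = 0.05  # hazard once the treatment effect 'switches on'
DELAY = 6.0            # months before the treatment has any effect

def sample_control():
    return random.expovariate(CONTROL_HAZARD)

def sample_treated():
    # Identical to control before the delay; halved hazard afterwards,
    # so the hazard ratio is 1.0 early and 0.5 late (non-proportional).
    t = random.expovariate(CONTROL_HAZARD)
    if t < DELAY:
        return t
    return DELAY + random.expovariate(TREATED_HAZARD)

control = [sample_control() for _ in range(100_000)]
treated = [sample_treated() for _ in range(100_000)]

mean = lambda xs: sum(xs) / len(xs)
median = lambda xs: sorted(xs)[len(xs) // 2]

# Mean survival gain is large (~5.5 months) because the benefit sits in
# the tail of the treated curve; the median gain is under one month.
# Relying on medians here would badly understate the benefit.
print(f"mean:   {mean(control):.1f} vs {mean(treated):.1f} months")
print(f"median: {median(control):.1f} vs {median(treated):.1f} months")
```

An extrapolation that applied the early (null) hazard ratio to the whole time horizon would miss the tail entirely, which is the first of the three problems above.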

In this week’s second paper, the authors review the way different HTA bodies approach the issue of non-proportionality in their methods guides, and in a sample of their appraisals. Of the HTA bodies considered, they find that only NICE (UK), CADTH (Canada), and PBAC (Australia) recommend testing for proportional hazards. Notably, the authors report that the Transparency Committee (France), IQWiG (Germany), and TLV (Sweden) don’t recommend testing for proportionality. Interestingly, despite these recommendations, the authors find that, while the majority of the NICE appraisals they reviewed included these tests, only 20% of the PBAC appraisals and 8% of the CADTH appraisals did. This suggests that the vast majority of oncology drug evaluations do not include consideration of non-proportionality – a big concern given the issues outlined above.

I liked this paper, although I was a bit shocked at the results. If you work for an HTA body that doesn’t recommend testing for non-proportionality, or doesn’t enforce their existing recommendations, I suggest you think very carefully about this issue – particularly if you rely on the extrapolation of survival curves in your assessments. If you aren’t looking for violations of proportional hazards, there is a good chance that you aren’t reflecting the true costs and benefits of many medicines in your evaluations. So, why not look for them?

The challenge of antimicrobial resistance: what economics can contribute. Science Published 5th April 2019

Health Economics doesn’t normally make it into Science (the journal). If it does, it probably means the paper is an important one. This one certainly is.

Antimicrobial resistance (AMR) is scary – really scary. One source cited in this paper predicts that by 2050, 10 million people a year will die due to AMR. I don’t know about you, but I find this pretty worrying (how’s that for a bit of British understatement?). Given these predicted consequences, you would think that there would be quite a lot of work from economists on this issue. Well, there isn’t. According to this article, there are only 55 papers on EconLit that “broadly relate” to AMR.

This paper contributes to this literature in two important ways. First, it is a call to arms to economists to do more work on AMR. If there are only 55 papers on this topic, we are only scratching the surface of the issue and could, as a field, do more to help solve the problem. Second, it neatly demonstrates how economics could be applied to the problem of AMR – including analysis of both the supply side (not enough new antibiotics being developed) and the demand side (too much antibiotic use) of the problem.

In the main body of the paper, the authors draw parallels between the economics of AMR and the economics of climate change: both are global instances of the ‘tragedy of the commons’, both are subject to significant uncertainty about the future, and both are highly sensitive to inter-temporal discounting. They then go on to suggest that many of the ideas developed in the context of climate change could be applied to AMR – including the potential for use of antibiotic prescribing quotas (analogous to carbon quotas) and taxation of antibiotic prescriptions (analogous to the idea of a carbon tax). There are many other ideas in the paper, and if you are interested in these I suggest you take the time to read it in full.

I think this is an important paper and one that has made me think more about the economics of both AMR and, inadvertently, climate change. With both issues, I can’t help but think we might be sleepwalking into a world where we have royally screwed over future generations because we didn’t take the actions we needed to take. If economists can help stop these things happening, we need to act. If we don’t, what will you say in 2050 when you turn on the news and see that 10 million people are dying from AMR each year? That is, assuming you aren’t one of those who has died as a result. Scary stuff indeed.


Brendan Collins’s journal round-up for 18th March 2019


Evaluation of intervention impact on health inequality for resource allocation. Medical Decision Making [PubMed] Published 28th February 2019

How should decision-makers factor equity impacts into economic decisions? Can we trade off an intervention’s cost-effectiveness with its impact on unfair health inequalities? Is a QALY just a QALY, or should we weight it more if it is gained by someone from a disadvantaged group? Can we assume that, because people of lower socioeconomic position lose more QALYs through ill health, most interventions should, by default, reduce inequalities?

I really like the health equity plane. This is where you show health impacts (usually including a summary measure of cost-effectiveness like net health benefit or net monetary benefit) and equity impacts (which might be a change in slope index of inequality [SII] or relative index of inequality) on the same plane. This enables decision-makers to identify potential trade-offs between interventions that produce a greater benefit, but have less impact on inequalities, and those that produce a smaller benefit, but increase equity. I think there has been a debate over whether the ‘win-win’ quadrant should be south-east (which would be consistent with the dominant quadrant of the cost-effectiveness plane) or north-east, which is what seems to have been adopted as the consensus and is used here.

This paper showcases a reproducible method to estimate the equity impact of interventions. It considers public health interventions recommended by NICE from 2006 to 2016, with equity impacts estimated based on whether they targeted specific diseases, risk factors, or populations. The disease distributions were based on hospital episode statistics data by deprivation (IMD). The study used equity weights to convert QALYs gained by different social groups into net social welfare, in this case valuing the health of the most disadvantaged fifth of people at around 6-7 times that of the least disadvantaged fifth. I think there might still be work to be done around reaching consensus on equity weights.
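As a sketch of how such weighting changes the arithmetic, here is a minimal example. Both the QALY figures and the weights below are mine, invented only to sit roughly in the 6-7x range the paper uses:

```python
# Sketch of equity-weighted aggregation of QALY gains across deprivation
# fifths. Both the QALY figures and the weights are invented here; the
# paper's actual weights value the most disadvantaged fifth's health at
# roughly 6-7 times that of the least disadvantaged fifth.

# Hypothetical QALY gains by IMD fifth, most deprived first
qaly_gains = [120.0, 100.0, 90.0, 85.0, 80.0]

# Hypothetical equity weights, ~6.5x at the most deprived end
equity_weights = [6.5, 5.1, 3.8, 2.4, 1.0]

unweighted_total = sum(qaly_gains)
weighted_total = sum(w * q for w, q in zip(equity_weights, qaly_gains))

# An intervention skewed towards deprived groups scores far better on
# the weighted welfare measure than the raw QALY count suggests.
print(unweighted_total)          # 475.0
print(round(weighted_total, 1))  # 1916.0
```

The choice of weights clearly drives the result, which is why consensus on them matters.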

The total expected effect on inequalities is small – full implementation of all recommendations would reduce the quality-adjusted life expectancy gap between the healthiest and least healthy from 13.78 to 13.34 QALYs. But maybe this is to be expected; NICE does not typically look at vaccinations or screening and has not looked at large-scale public health programmes like the Healthy Child Programme as a whole. Reassuringly, where recommended interventions were likely to increase inequality, the trade-off between efficiency and equity fell within the social welfare function they had used. The increase in inequality might be acceptable because the interventions were cost-effective – producing 5.6 million QALYs while increasing the SII by 0.005. If these interventions are buying health at a good price, then you would hope this might release money for other interventions that would reduce inequalities.

I suspect that public health folks might not like equity trade-offs at all – trading off equity and cost-effectiveness might be the moral equivalent of trading off human rights – you can’t choose between them. But the reality is that these kinds of trade-offs do happen, and like a lot of economic methods, it is about revealing these implicit trade-offs so that they become explicit, and having ‘accountability for reasonableness’.

Future unrelated medical costs need to be considered in cost effectiveness analysis. The European Journal of Health Economics [PubMed] [RePEc] Published February 2019

This editorial says that NICE should include unrelated future medical costs in its decision making. At the moment, if NICE looks at a cardiovascular disease (CVD) drug, it might look at future costs related to CVD but it won’t include changes in future costs of cancer, or dementia, which may occur because individuals live longer. But usually unrelated QALY gains will be implicitly included; so there is an inconsistency. If you are a health economic modeller, you know that including unrelated costs properly is technically difficult. You might weight average population costs by disease prevalence so you get a cost estimate for people with coronary heart disease, diabetes, and people without either disease. Or you might have a general healthcare running cost that you can apply to future years. But accounting for a full matrix of competing causes of morbidity and mortality is very tricky if not impossible. To help with this, this group of authors produced the excellent PAID tool, which helps with doing this for the Netherlands (can we have one for the UK please?).

To me, including unrelated future costs means that in some cases ICERs might be driven more by the ratio of future costs to QALYs gained, whereas currently ICERs are often driven by the ratio of intervention costs to QALYs gained. So it might be that a lot of treatments that are currently cost-effective no longer are, or that we need to judge all interventions against a higher willingness-to-pay threshold or value of a QALY. The authors suggest that, although including unrelated medical costs usually pushes up the ICER, it should ultimately result in better decisions that increase health.
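A toy example, with numbers I have made up entirely, shows how easily this can flip a decision at a £20,000/QALY threshold:

```python
# Toy numbers illustrating the editorial's point: adding unrelated
# future medical costs to the numerator can push an ICER across the
# willingness-to-pay threshold. All figures are invented.

intervention_cost = 20_000.0       # incremental treatment cost
related_future_costs = 5_000.0     # extra disease-related costs of added survival
unrelated_future_costs = 12_000.0  # e.g. other-disease care in added life-years
qalys_gained = 1.5

icer_excluding = (intervention_cost + related_future_costs) / qalys_gained
icer_including = (intervention_cost + related_future_costs
                  + unrelated_future_costs) / qalys_gained

print(round(icer_excluding))  # 16667 -> under a £20,000 threshold
print(round(icer_including))  # 24667 -> over it
```

The same treatment, the same health gain, but a different reimbursement decision depending purely on which costs the analyst counts.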

There are real ethical issues here. I worry that including future unrelated costs might be used for an integrated care agenda in the NHS, moving towards a capitation system where the total healthcare spend on any one individual is capped, which I don’t necessarily think should happen in a health insurance system. Future developments around big data mean we will be able to segment the population a lot better and estimate who will benefit from treatments. But I think if someone is unlucky enough to need a lot of healthcare spending, maybe they should have it. This is risk sharing and, without it, you may get the ‘double jeopardy’ problem.

For health economic modellers and decision-makers, a compromise might be to present analyses with related and unrelated medical costs and to consider both for investment decisions.

Overview of cost-effectiveness analysis. JAMA [PubMed] Published 11th March 2019

This paper probably won’t offer anything new to academic health economists in terms of methods, but I think it might be a useful teaching resource. It gives an interesting example of a model of ovarian cancer screening in the US that was published in February 2018. There has been a large-scale trial of ovarian cancer screening in the UK (the UKCTOCS), which has been extended because the results have been promising but mortality reductions were not statistically significant. The model gives a central ICER estimate of $106,187/QALY (based on $100 per screen) which would probably not be considered cost-effective in the UK.

I would like to explore one statement that I found particularly interesting, around the willingness to pay threshold; “This willingness to pay is often represented by the largest ICER among all the interventions that were adopted before current resources were exhausted, because adoption of any new intervention would require removal of an existing intervention to free up resources.”

The Culyer bookshelf model is similar to this, although as well as the ICER you also need to consider the burden of disease or size of the investment. Displacing a $110,000/QALY intervention for 1000 people with a $109,000/QALY intervention for a million people will bust your budget.
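A quick back-of-envelope calculation makes the point, under the simplifying assumption (mine, for arithmetic convenience) that each treated person gains exactly one QALY:

```python
# Back-of-envelope version of the bookshelf point: two interventions
# with near-identical ICERs can have wildly different budget impacts.
# Assumes, purely for simplicity, one QALY gained per person treated,
# so total cost = ICER x QALYs per person x population size.

def total_budget(icer, qalys_per_person, n_people):
    return icer * qalys_per_person * n_people

displaced = total_budget(110_000, 1.0, 1_000)        # $110 million
replacement = total_budget(109_000, 1.0, 1_000_000)  # $109 billion

# The intervention with the 'better' ICER costs ~990 times more overall.
print(replacement / displaced)
```

This is why the bookshelf model asks about the size of the investment as well as the ICER: a marginally better price per QALY is no comfort if the total bill is a thousand times larger.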

This idea works intuitively – if Liverpool FC are signing a new player then I might hope they are better than all of the other players, or at least better than the average player. But actually, as long as they are better than the worst player then the team will be improved (leaving aside issues around different positions, how they play together, etc.).

However, I think that saying that the reference ICER should be the largest current ICER might be a bit dangerous. Leaving aside inefficient legacy interventions (like unnecessary tonsillectomies etc), it is likely that the intervention being considered for investment and the current maximum ICER intervention to be displaced may both be new, expensive immunotherapies. It might be last in, first out. But I can’t see this happening; people are loss averse, so decision-makers and patients might not accept what is seen as a fantastic new drug for pancreatic cancer being approved then quickly usurped by a fantastic new leukaemia drug.

There has been a lot of debate around what the threshold should be in the UK; in England, NICE currently uses £20,000 – £30,000 per QALY, up to a hypothetical maximum of £300,000/QALY in very specific circumstances. The UK Treasury values QALYs at £60,000. Work by Karl Claxton and colleagues suggests that marginal productivity (the ‘shadow price’) in the NHS is nearer to £5,000 – £15,000 per QALY.

I don’t know what the answer to this is. I don’t think the willingness-to-pay threshold for a new treatment should be the maximum ICER of a current portfolio of interventions; maybe it should be the marginal health production cost in a health system, as might be inferred from the Claxton work. Of course, investment decisions are made on other factors, like impact on health inequalities, not just on the ICER.
