Chris Sampson’s journal round-up for 16th December 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

MCDA-based deliberation to value health states: lessons learned from a pilot study. Health and Quality of Life Outcomes [PubMed] Published 1st July 2019

The rejection of the EQ-5D-5L value set for England indicates something of a crisis in health state valuation. Evidently, there is a lack of trust in the quantitative data and methods used. This is despite decades of methodological development. Perhaps we need a completely different approach. Could we instead develop a value set using qualitative methods?

A value set based on qualitative research aligns with an idea forwarded by Daniel Hausman, who has argued for the use of deliberative approaches. This could circumvent the problems associated with asking people to give instant (and possibly ill-thought-out) responses to preference elicitation surveys. The authors of this study report on the first ever (pilot) attempt to develop a consensus value set using methods of multi-criteria decision analysis (MCDA) and deliberation. The study attempts to identify a German value set for the SF-6D.

The study included 34 students in a one-day conference setting. A two-step process was followed for the MCDA using MACBETH (the Measuring Attractiveness by a Categorical Based Evaluation Technique), which uses pairwise comparisons to derive numerical scales without quantitative assessments. First, a scoring procedure was conducted for each of the six dimensions. Second, a weighting was identified for each dimension. After an introductory session, participants were allocated into groups of five or six and each group was tasked with scoring one SF-6D dimension. Within each group, consensus was achieved. After these group sessions, all participants were brought together to present and validate the results. In this deliberation process, consensus was achieved for all domains except pain. Then the weighting session took place, but resulted in no consensus. Subsequent to the one-day conference, a series of semi-structured interviews were conducted with moderators. All the sessions and interviews were recorded, transcribed, and analysed qualitatively.

In short, the study failed. A consensus value set could not be identified. Part of the problem was probably in the SF-6D descriptive system, particularly in relation to pain, which was interpreted differently by different people. But the main issue was that people had different opinions and didn’t seem willing to move towards consensus with a societal perspective in mind. Participants broadly fell into three groups – one in favour of prioritising pain and mental health, one opposed to trading-off SF-6D dimensions and favouring equal weights, and another group that was not willing to accept any trade-offs.

Despite its apparent failure, this seems like an extremely useful and important study. The authors provide a huge amount of detail regarding what they did, what went well, and what might be done differently next time. I’m not sure it will ever be possible to get a group of people to reach a consensus on a value set. The whole point of preference-based measures is surely that different people have different priorities, and they should be expected to disagree. But I think we should expect that the future of health state valuation lies in mixed methods. There might be more success in a qualitative and deliberative approach to scoring combined with a quantitative approach to weighting, or perhaps a qualitative approach informed by quantitative data that demands trade-offs. Whatever the future holds, this study will be a valuable guide.

Preference-based health-related quality of life outcomes associated with preterm birth: a systematic review and meta-analysis. PharmacoEconomics [PubMed] Published 9th December 2019

Premature and low birth weight babies can experience a whole host of negative health outcomes. Most studies in this context look at short-term biomedical assessments or behavioural and neurodevelopmental indicators. But some studies have sought to identify the long-term consequences on health-related quality of life by identifying health state utility values. This study provides us with a review and meta-analysis of such values.

The authors screened 2,139 articles from their search and included 20 in the review. Lots of data were extracted from the articles, and these are helpfully tabulated in the paper. The majority of the studies included adolescents and focussed on children born very preterm or at very low birth weight.

For the meta-analysis, the authors employed a linear mixed-effects meta-regression, which is an increasingly routine approach in this context. The models were used to estimate the decrement in utility values associated with preterm birth or low birth weight, compared with matched controls. Conveniently, all but one of the studies used either the HUI2 or the HUI3, so the analysis was restricted to these two measures. Preterm birth was associated with an average decrement of 0.066 and extremely low birth weight with a decrement of 0.068. The mean estimated utility score for the study groups was 0.838, compared with 0.919 for the control groups.
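To give a feel for how utility decrements from several studies are pooled, here is a deliberately simplified fixed-effect inverse-variance sketch. The study estimates are made up, and this is not the authors' mixed-effects meta-regression, which additionally accounts for between-study heterogeneity and covariates.

```python
# Simplified fixed-effect pooling of utility decrements by inverse-variance
# weighting. Hypothetical numbers, not the data from the review.

def pool_fixed_effect(estimates):
    """estimates: list of (decrement, standard_error) tuples."""
    weights = [1 / se ** 2 for _, se in estimates]
    pooled = sum(w * d for (d, _), w in zip(estimates, weights)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Hypothetical utility decrements (preterm vs. control) from three studies
studies = [(0.05, 0.02), (0.08, 0.03), (0.07, 0.025)]
decrement, se = pool_fixed_effect(studies)
print(f"pooled decrement: {decrement:.3f} (SE {se:.3f})")
```

The intuition carries over to the mixed-effects case: more precise studies get more weight in the pooled decrement.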

Reviews of utility values are valuable as they provide modellers with a catalogue of potential parameters that can be selected in a meaningful and transparent way. Even though this is a thorough and well-reported study, it’s a bit harder to see how its findings will be used. Most reviews of utility values relate to a particular disease, which might be prevented or ameliorated by treatment, and the value of this treatment depends on the utility values selected. But how will these utility values be used? The avoidance of preterm or low-weight birth is not the subject of most evaluations in the neonatal setting. Even if it were, how valuable are estimates from a single point in adolescence? The authors suggest that future research should seek to identify a trajectory of utility values over the life course. But, even if we could achieve this, it’s not clear to me how this should complement utility values identified in relation to the specific health problems experienced by these people.

The new and non-transparent Cancer Drugs Fund. PharmacoEconomics [PubMed] Published 12th December 2019

Not many (any?) health economists liked the Cancer Drugs Fund (CDF). It was set up to give special treatment to cancer drugs, meaning they weren’t appraised on the same basis as other drugs coming before NICE. In 2016, the CDF was brought within NICE’s remit, with medicines available through the CDF requiring a managed access agreement. This includes agreements on data collection and on payments by the NHS during the period. In this article, the authors contend that the new CDF process is not sufficiently transparent.

Three main issues are raised: i) lack of transparency relating to the value of CDF drugs, ii) lack of transparency relating to the cost of CDF drugs, and iii) the amount of time that medicines remain on the CDF. The authors tabulate the reporting of ICERs according to the decisions made, showing that the majority of treatment comparisons do not report ICERs. Similarly, the time in the CDF is tabulated, with many indications being in the CDF for an unknown amount of time. In short, we don’t know much about medicines going through the CDF, except that they’re probably costing a lot.

I’m a fan of transparency, in almost all contexts. I think it is inherently valuable to share information widely. It seems that the authors of this paper do too. A lack of transparency in NICE decision-making is a broader problem that arises from the need to protect commercially sensitive pricing agreements. But what this paper doesn’t manage to do is to articulate why anybody who doesn’t support transparency in principle should care about the CDF in particular. Part of the authors’ argument is that the lack of transparency prevents independent scrutiny. But surely NICE is the independent scrutiny? The authors argue that it is a problem that commissioners and the public cannot assess the value of the medicines, but it isn’t clear why that should be a problem if they are not the arbiters of value. The CDF has quite rightly faced criticism over the years, but I’m not convinced that its lack of transparency is its main problem.

Credits

Jason Shafrin’s journal round-up for 15th July 2019


Understanding price growth in the market for targeted oncology therapies. American Journal of Managed Care [PubMed] Published 14th June 2019

In the media, you hear that drug prices – particularly for oncology – are on the rise. High prices make it difficult for payers to afford effective treatments. In countries where patients bear significant costs, patients may even go without treatment. Are pharmaceutical firms making money hand over fist with these rising prices?

Recent research by Sussell et al. argues that, despite rising drug prices, pharmaceutical manufacturers are actually making less money on every new cancer drug they produce. The reason? Precision medicine.

The authors use data from both the IQVIA National Sales Perspective (NSP) data set and the Medicare Current Beneficiary Survey (MCBS) to examine changes in the price, quantity, and total revenue over time. Price is measured as episode price (price over a fixed line of therapy) rather than the price per unit of drug. The time period for the core analysis covers 1997-2015.

The authors find that drug prices roughly tripled between 1997 and 2015. Despite this price increase, pharmaceutical manufacturers are actually making less money. The number of eligible (i.e., indicated) patients per new oncology drug launch fell by 85% to 90% over this period. On net, median pharmaceutical manufacturer revenues fell by about half over this period.
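The headline numbers hang together on back-of-the-envelope arithmetic: if episode prices roughly triple while the eligible population per launch falls by 85% to 90%, revenue per launch falls to somewhere between a third and a half of its former level. A two-line illustrative check (not the paper's actual calculation):

```python
# Back-of-the-envelope check: price up 3x, eligible population down 85-90%
# implies per-launch revenue falls to roughly 0.30-0.45x its former level,
# consistent with the reported ~halving of median revenues.

price_multiplier = 3.0
for population_drop in (0.85, 0.90):
    revenue_multiplier = price_multiplier * (1 - population_drop)
    print(f"population down {population_drop:.0%}: "
          f"revenue x{revenue_multiplier:.2f}")
```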

Oncology may be a case where high-cost drugs are a good thing: rather than developing treatments indicated for large numbers of people that are less effective on average per patient, manufacturers develop highly effective drugs targeted at small groups of people. Patients don’t get unnecessary treatments, and overall costs to payers fall. Of course, manufacturers still need to justify that these treatments represent high value, but some of my research has shown that the quality-adjusted cost of care in oncology has remained flat or even fallen for some tumors despite rising drug prices.

Do cancer treatments have option value? Real‐world evidence from metastatic melanoma. Health Economics [PubMed] [RePEc] Published 24th June 2019

Cost effectiveness models done from a societal perspective aim to capture all benefits and costs of a given treatment relative to a comparator. Are standard CEA approaches really capturing all costs and benefits? A 2018 ISPOR Task Force examines some novel components of value that are not typically captured, such as real option value. The Task Force describes real option value as value that is “…generated when a health technology that extends life creates opportunities for the patient to benefit from other future advances in medicine.” Previous studies (here and here) have shown that patients who received treatments for chronic myeloid leukemia and non-small cell lung cancer lived longer than expected since they were able to live long enough to reach the next scientific advance.

A question remains, however, of whether individuals’ behaviors actually take this option value into account. A paper by Li et al. 2019 aims to answer this question by examining whether patients were more likely to get surgical resection after the advent of a novel immuno-oncology treatment (ipilimumab). Using claims data (Marketscan), the authors apply an interrupted time series design to examine whether Phase II and Phase III clinical trial read-outs affected the likelihood of surgical resection. The model is a multinomial logit regression. Their preferred specification finds that

“Phase II result was associated with a nearly twofold immediate increase (SD: 0.61; p = .033) in the probability of undergoing surgical resection of metastasis relative to no treatment and a 2.5‐fold immediate increase (SD: 1.14; p = .049) in the probability of undergoing both surgical resection of metastasis and systemic therapy relative to no treatment.”
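The core of the design can be sketched in a deliberately stripped-down way: compare the share of patients undergoing resection before and after a trial read-out date. This toy two-period comparison on simulated data is not the authors' multinomial logit interrupted time series, and the probabilities below are invented purely for illustration.

```python
import random

# Toy two-period comparison: share of simulated patients undergoing
# resection before vs. after a trial read-out. Simulated data and assumed
# probabilities; the actual study uses a multinomial logit interrupted
# time series on claims data.

random.seed(1)
p_before, p_after = 0.05, 0.10  # assumed resection probabilities
before = [random.random() < p_before for _ in range(2000)]
after = [random.random() < p_after for _ in range(2000)]

share_before = sum(before) / len(before)
share_after = sum(after) / len(after)
print(f"resection share before: {share_before:.3f}, "
      f"after: {share_after:.3f}, "
      f"ratio: {share_after / share_before:.1f}x")
```

The real design adds a time trend and multiple outcome categories, which is what lets the authors attribute the jump to the read-out rather than to a secular increase in resection.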

The finding is striking, but could benefit from further testing. For instance, the impact of the Phase III results is (incrementally) small relative to the Phase II results. This may be reasonable if one believes that Phase II is a sufficiently reliable indicator of drug benefit, but many people focus on Phase III results. One test would be to see whether physicians in academic medical centers were more likely to respond to this news. If one believes that physicians at academic medical centers are more up to speed on the literature, one would expect to see a larger option value for patients treated at academic rather than community medical centers. Further, the study would benefit from some falsification tests. If the authors could use data from other tumors, one would expect that the ipilimumab Phase II results would not have a material impact on surgical resection for other tumor types.

Overall, however, the study is worthwhile as it looks at treatment benefits not just in a static sense, but in a dynamically evolving innovation landscape.

Aggregate distributional cost-effectiveness analysis of health technologies. Value in Health [PubMed] Published 1st May 2019

In general, health economists would like to have health insurers cover treatments that are welfare improving in the Pareto sense. This means that if a treatment provides more expected benefits than costs and no one is worse off (in expectation), then it should certainly be covered. It could be the case, however, that people care who gains these benefits. For instance, consider the case of a new technology that helped people with serious diseases move around more easily inside a mansion. Assume this technology had more benefits than costs. Some (many) people, however, may not like covering a treatment that only benefits people who are very well-off. This issue is especially relevant in single-payer systems – like the United Kingdom’s National Health Service (NHS) – which are funded by taxpayers.

One option is to consider both the average net health benefits (i.e., benefits less costs) to a population as well as its effect on inequality. If a society doesn’t care at all about inequality, then this reduces to just measuring net health benefit overall; if a society has a strong preference for equality, treatments that provide benefits only to the better-off will be considered less valuable.

A paper by Love-Koh et al. 2019 provides a nice quantitative way to estimate these tradeoffs. The approach uses both the Atkinson inequality index and the Kolm index to measure inequality. The authors then use these indices to calculate the equally distributed equivalent (EDE), which is the level of population health (in QALYs) in a completely equal distribution that yields the same amount of social welfare as the distribution under investigation.
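The Atkinson EDE has a clean closed form, which makes the approach easy to sketch. The QALY figures below are made up, and only the Atkinson index is shown (the paper also uses the Kolm index, which has a different functional form).

```python
# Equally distributed equivalent (EDE) under the Atkinson social welfare
# function: the equal level of health that yields the same social welfare
# as the observed distribution. Made-up quality-adjusted life expectancies.

def atkinson_ede(qalys, epsilon):
    """EDE for inequality-aversion parameter epsilon (epsilon != 1)."""
    n = len(qalys)
    return (sum(q ** (1 - epsilon) for q in qalys) / n) ** (1 / (1 - epsilon))

health = [50, 60, 70, 80, 90]  # hypothetical QALY distribution
mean = sum(health) / len(health)
for eps in (0.0, 0.5, 2.0):
    ede = atkinson_ede(health, eps)
    print(f"epsilon={eps}: EDE={ede:.1f}, "
          f"Atkinson index={1 - ede / mean:.3f}")
```

With epsilon = 0 (no inequality aversion) the EDE equals the mean; as epsilon rises, the EDE falls further below the mean, so an unequal distribution is valued less than an equal one with the same total health.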

Using this approach, the authors find the following:

“Twenty-seven interventions were evaluated. Fourteen interventions were estimated to increase population health and reduce health inequality, 8 to reduce population health and increase health inequality, and 5 to increase health and increase health inequality. Among the latter 5, social welfare analysis, using inequality aversion parameters reflecting high concern for inequality, indicated that the health gain outweighs the negative health inequality impact.”

Despite the attractive features of this approach analytically, there are issues related to how it would be implemented. In this case, inequality is based solely on quality-adjusted life expectancy. However, others could take a more holistic approach and look at socioeconomic status, including other factors (e.g., income, employment, etc.). In theory, one could perform the same exercise measuring individual overall utility including these other aspects, but few (rightly) would want the government to assess individuals’ overall happiness to make treatment decisions. Second, the authors estimate quality-adjusted life expectancy by patients’ sex, primary diagnosis, and postcode. Thus, you could have a system that prioritizes treatments for men – since men’s life expectancy is generally shorter than women’s. Third, this model assumes disease is exogenous. In many cases this is true, but in some cases individual behavior could increase the likelihood of having a disease. For instance, would citizens want to discount treatments for diseases that are preventable (e.g., lung cancer due to smoking, diabetes due to poor eating habits/exercise), even if treatments for these diseases reduced inequality? Typically, no disease is fully exogenous or fully the fault of the individual, so this is a slippery slope.

What the Love-Koh paper contributes is an easy to implement method for quantifying how inequality preferences should affect the value of different treatments. What the paper does not answer is whether this approach should be implemented.


Simon McNamara’s journal round-up for 8th April 2019


National Institute for Health and Care Excellence, social values and healthcare priority setting. Journal of the Royal Society of Medicine [PubMed] Published 2nd April 2019

As is traditional, this week’s round-up starts with an imaginary birthday party. After much effort, we have finally managed to light the twenty candles, have agreed our approach to the distribution of the cake, and are waiting in anticipation of the entrance of the birthday “quasi-autonomous non-governmental body”. The door opens. You clear your throat. Here we go…

Happy Birthday to you,

Happy Birthday to you,

Happy Birthday dear National Institute for Health and Care Excellence,

Happy Birthday to you.

NICE smiles happily. It is no longer a teenager. It has made it to 20 – despite its parents challenging it a few times (cough, Cancer Drug Fund, cough). After the candles have been blown out, someone at the back shouts: “Speech! Speech!”. NICE coughs, thanks everyone politely, and (admittedly slightly strangely) takes the opportunity to announce that they are revising their “Social Value Judgements” paper – a document that outlines the principles they use to develop guidance. They then proceed to circle the room, proudly handing out draft copies of the new document – “The principles that guide the development of NICE guidance and standards” (PDF). They look excited. Your fellow guests start to read.

“Surely not?”, “What the … ?”, “Why?” – they don’t seem pleased. You jump into the document. All of this is about process. Where are all the bits about justice, and inequalities, and bioethics, and the rest? “Why have you taken out loads of the good stuff?” you ask. “This is too vague, too procedural”. Your disappointment is obvious to those in the room.

Your phone pings – it’s your favourite WhatsApp group. One of the other guests has already started drafting a “critical friend” paper in the corner of the room. They want to know if you want to be involved. “I’m in”, you respond, “This is important, we need to make sure NICE knows what we think”. Your phone pings again. Another guest is in: “I want to be involved, this matters. Also, this is exactly the kind of paper that will get picked up by the AHE blog. If we are lucky, we might even be the first paper in one of their journal round-ups”. You pause, think, and respond hopefully: “Fingers crossed”.

I don’t know if NICE had an actual birthday party – if they did I certainly wasn’t invited. I also highly doubt that the authors of this week’s first paper, or indeed any paper, had the AHE blog in mind when writing. What I do know is that the first article is indeed a “critical friend” paper which outlines the authors’ concerns with NICE’s proposal to “revise” (read: delete) their social value judgements guidance. This paper is relatively short, so if you are interested in these changes I suggest you read it, rather than relying on my imaginary birthday party version of their concerns.

I am highly sympathetic to the views expressed in this paper. The existing “social value judgements” document is excellent, and (to me at least) seems to be the gold standard in setting the values by which an HTA body should develop guidance. Reducing this down to solely procedural elements seems unnecessary, and potentially harmful if the other core values are forgotten, or deprioritised.

As I reflect on this paper, I can’t help think of the old adage: “If it ain’t broke, don’t fix it”. NICE – this ain’t broke.

Measuring survival benefit in health technology assessment in the presence of nonproportional hazards. Value in Health Published 22nd March 2019

Dear HTA bodies that don’t routinely look for violations of proportional hazards in oncology data: 2005 called, they want their methods back.

Seriously though, it’s 2019. Things have moved on. If a new drug has a different mode of action to its comparator, is given for a different duration, or has differing levels of treatment effect in different population subgroups, there are good reasons to think that the trial data for that drug might violate proportional hazards. So why not look? It’s easy enough, and could change the way you think about both the costs and the benefits of that medicine.

If you haven’t worked in oncology before, there is a good chance you are currently asking yourself two questions: “what does proportional hazards mean?” and “why does it matter?”. In massively simplified terms, when we say the hazards in a trial are “proportional” we mean that the treatment effect of the new intervention (typically on survival) is constant over time. If a treatment takes some time to work (e.g. immunotherapies), or is given for only a few weeks before being stopped (e.g. some chemotherapies), there are good reasons to think that the treatment effect of that intervention may vary over time. If this is the case, there will be a violation of proportional hazards (they will be “nonproportional”).

If you are an HTA body, this is important for at least three reasons. First, if hazards are non-proportional, the average hazard ratio (treatment effect) from the trial can be a poor representation of what is likely to happen beyond the trial period – a big issue if you are extrapolating data in an economic model. Second, if hazards are non-proportional, the median survival benefit from the trial can be a poor representation of the mean benefit (e.g. in the case of a curve with a “big tail”). If you don’t account for this, and rely on medians (as some HTA bodies do), your evaluation can underestimate, or overestimate, the true benefits and costs of the medicine. Third, most approaches to including indirect comparisons in economic models rely on proportionality, so, if this doesn’t hold, your model might be a poor representation of reality. Given these issues, it makes sense that HTA bodies should be looking for violations of proportional hazards when evaluating oncology data.
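The second point, that medians can mislead when a survival curve has a "big tail", is easy to demonstrate with a small invented cohort: a treatment that leaves median survival unchanged but produces a minority of long-term survivors substantially raises the mean.

```python
# Illustrative (made-up) survival times in months for 10 patients per arm.
# The treated arm has three long-term survivors: the median is unchanged,
# but the mean roughly doubles.

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

control = [6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
treated = [6, 7, 8, 9, 10, 11, 12, 40, 50, 60]  # "big tail"

print(f"median: control {median(control)}, treated {median(treated)}")
print(f"mean:   control {sum(control)/10:.1f}, treated {sum(treated)/10:.1f}")
```

An HTA body relying on the median here would conclude the treatment offers no survival benefit at all, despite the mean gain accruing to the long-term survivors.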

In this week’s second paper, the authors review the way different HTA bodies approach the issue of non-proportionality in their methods guides, and in a sample of their appraisals. Of the HTA bodies considered, they find that only NICE (UK), CADTH (Canada), and PBAC (Australia) recommend testing for proportional hazards. Notably, the authors report that the Transparency Committee (France), IQWiG (Germany), and TLV (Sweden) don’t recommend testing for proportionality. Interestingly, despite these recommendations, the authors find that only a majority of the NICE appraisals they reviewed included these tests, and that just 20% of the PBAC appraisals and 8% of the CADTH appraisals did. This suggests that the vast majority of oncology drug evaluations do not include consideration of non-proportionality – a big concern given the issues outlined above.

I liked this paper, although I was a bit shocked at the results. If you work for an HTA body that doesn’t recommend testing for non-proportionality, or doesn’t enforce their existing recommendations, I suggest you think very carefully about this issue – particularly if you rely on the extrapolation of survival curves in your assessments. If you aren’t looking for violations of proportional hazards, there is a good chance that you aren’t reflecting the true costs and benefits of many medicines in your evaluations. So, why not look for them?

The challenge of antimicrobial resistance: what economics can contribute. Science Published 5th April 2019

Health Economics doesn’t normally make it into Science (the journal). If it does, it probably means the paper is an important one. This one certainly is.

Antimicrobial resistance (AMR) is scary – really scary. One source cited in this paper predicts that by 2050, 10 million people a year will die due to AMR. I don’t know about you, but I find this pretty worrying (how’s that for a bit of British understatement?). Given these predicted consequences, you would think that there would be quite a lot of work from economists on this issue. Well, there isn’t. According to this article, there are only 55 papers on EconLit that “broadly relate” to AMR.

This paper contributes to this literature in two important ways. First, it is a call to arms to economists to do more work on AMR. If there are only 55 papers on this topic, this suggests we are only scratching the surface of the issue and could do more as a field to contribute to solving the problem. Second, it neatly demonstrates how economics could be applied to the problem of AMR – including analysis of both the supply side (not enough new antibiotics being developed) and demand side (too much antibiotic use) of the problem.

In the main body of the paper, the authors draw parallels between the economics of AMR and the economics of climate change: both are global instances of the ‘tragedy of the commons’, both are subject to significant uncertainty about the future, and both are highly sensitive to inter-temporal discounting. They then go on to suggest that many of the ideas developed in the context of climate change could be applied to AMR – including the potential for use of antibiotic prescribing quotas (analogous to carbon quotas) and taxation of antibiotic prescriptions (analogous to the idea of a carbon tax). There are many other ideas in the paper, and if you are interested in these I suggest you take the time to read it in full.

I think this is an important paper and one that has made me think more about the economics of both AMR and, inadvertently, climate change. With both issues, I can’t help but think we might be sleepwalking into a world where we have royally screwed over future generations because we didn’t take the actions we needed to take. If economists can help stop these things happening, we need to act. If we don’t, what will you say in 2050 when you turn on the news and see that 10 million people are dying from AMR each year? That is, assuming you aren’t one of those who has died as a result. Scary stuff indeed.
