Jason Shafrin’s journal round-up for 15th July 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Understanding price growth in the market for targeted oncology therapies. American Journal of Managed Care [PubMed] Published 14th June 2019

In the media, you hear that drug prices—particularly for oncology—are on the rise. High prices make it difficult for payers to afford effective treatments, and in countries where patients bear significant cost, patients may even go without treatment. Are pharmaceutical firms making money hand over fist with these rising prices?

Recent research by Sussell et al. argues that, despite rising drug prices, pharmaceutical manufacturers are actually making less money on every new cancer drug they produce. The reason? Precision medicine.

The authors use data from both the IQVIA National Sales Perspective (NSP) data set and the Medicare Current Beneficiary Survey (MCBS) to examine changes in the price, quantity, and total revenue over time. Price is measured as episode price (price over a fixed line of therapy) rather than the price per unit of drug. The time period for the core analysis covers 1997-2015.

The authors find that drug prices roughly tripled between 1997 and 2015. Despite this price increase, pharmaceutical manufacturers are actually making less money. The number of eligible (i.e., indicated) patients per new oncology drug launch fell by 85% to 90% over this time period. On net, median pharmaceutical manufacturer revenues fell by about half.
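To see how these two findings fit together, here is a back-of-the-envelope sketch in Python. The numbers are illustrative stand-ins, not the paper's actual estimates: if the episode price triples while the eligible population per launch falls to 10-15% of its former size, per-drug revenue lands at 30-45% of its baseline, roughly the halving of median revenues the authors report.

```python
# Illustrative (not actual) numbers: price roughly triples while the
# eligible patient population per launch falls by 85-90%.
price_growth = 3.0             # episode price index, 2015 vs 1997
patient_share = [0.10, 0.15]   # 10-15% of the 1997-era eligible population

# Revenue index = price index x quantity index
revenue_index = [price_growth * s for s in patient_share]
for s, r in zip(patient_share, revenue_index):
    print(f"patients at {s:.0%} of baseline -> revenue at {r:.0%} of baseline")
```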

Oncology may be a case where high-cost drugs are a good thing: rather than developing treatments indicated for large numbers of people that are less effective on average per patient, manufacturers develop highly effective drugs targeted at small groups of people. Patients don’t get unnecessary treatments, and overall costs to payers fall. Of course, manufacturers still need to justify that these treatments represent high value, but some of my research has shown that quality-adjusted cost of care in oncology has remained flat or even fallen for some tumors despite rising drug prices.

Do cancer treatments have option value? Real‐world evidence from metastatic melanoma. Health Economics [PubMed] [RePEc] Published 24th June 2019

Cost-effectiveness models done from a societal perspective aim to capture all benefits and costs of a given treatment relative to a comparator. Are standard CEA approaches really capturing all costs and benefits? A 2018 ISPOR Task Force examines some novel components of value that are not typically captured, such as real option value. The Task Force describes real option value as value that is “…generated when a health technology that extends life creates opportunities for the patient to benefit from other future advances in medicine.” Previous studies (here and here) have shown that patients who received treatments for chronic myeloid leukemia and non-small cell lung cancer lived longer than expected since they were able to live long enough to reach the next scientific advance.

A question remains, however, of whether individuals’ behaviors actually take into account this option value. A paper by Li et al. 2019 aims to answer this question by examining whether patients were more likely to get surgical resection after the advent of a novel immuno-oncology treatment (ipilimumab). Using claims data (MarketScan), the authors use an interrupted time series design to examine whether Phase II and Phase III clinical trial read-outs affected the likelihood of surgical resection. The model is a multinomial logit regression. Their preferred specification finds that

“Phase II result was associated with a nearly twofold immediate increase (SD: 0.61; p = .033) in the probability of undergoing surgical resection of metastasis relative to no treatment and a 2.5‐fold immediate increase (SD: 1.14; p = .049) in the probability of undergoing both surgical resection of metastasis and systemic therapy relative to no treatment.”
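To make the design concrete, here is a stylised sketch of an interrupted time series with a level shift at the read-out. It uses invented monthly resection shares and a simple OLS step model rather than the authors' multinomial logit, so it illustrates the logic of the design, not their actual estimates.

```python
import numpy as np

# Stylised monthly resection shares with a level shift at the (hypothetical)
# Phase II read-out after month 5. All numbers are invented for illustration.
t = np.arange(12)
post = (t >= 6).astype(float)
y = np.array([0.040, 0.038, 0.042, 0.041, 0.039, 0.040,
              0.078, 0.081, 0.080, 0.079, 0.082, 0.080])

# OLS with an intercept, a linear trend, and a post-read-out step dummy;
# the step coefficient captures the immediate jump in resection probability.
X = np.column_stack([np.ones_like(t, dtype=float), t, post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated step change at read-out: {beta[2]:.3f}")
```

Here the step coefficient recovers (roughly) the doubling of the resection share built into the fake data. The real analysis additionally distinguishes resection alone from resection plus systemic therapy, which is what the multinomial structure buys.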

The finding is striking, but could benefit from further testing. For instance, the impact of the Phase III results is (incrementally) small relative to the Phase II results. This may be reasonable if one believes that Phase II is a sufficiently reliable indicator of drug benefit, but many people focus on Phase III results. One test would be to see whether physicians in academic medical centers are more likely to respond to this news. If one believes that physicians at academic medical centers are more up to speed on the literature, one would expect to see a larger option value for patients treated at academic compared to community medical centers. Further, the study would benefit from some falsification tests. If the authors could use data from other tumors, one would expect that the ipilimumab Phase II results would not have a material impact on surgical resection for other tumor types.

Overall, however, the study is worthwhile as it looks at treatment benefits not just in a static sense, but in a dynamically evolving innovation landscape.

Aggregate distributional cost-effectiveness analysis of health technologies. Value in Health [PubMed] Published 1st May 2019

In general, health economists would like to have health insurers cover treatments that are welfare improving in the Pareto sense. This means, if a treatment provides more expected benefits than costs and no one is worse off (in expectation), then this treatment should certainly be covered. It could be the case, however, that people care who gains these benefits. For instance, consider the case of a new technology that helped people with serious diseases move around more easily inside a mansion. Assume this technology had more benefits than cost. Some (many) people, however, may not like covering a treatment that only benefits people who are very well-off. This issue is especially relevant in single payer systems—like the United Kingdom’s National Health Service (NHS)—which are funded by taxpayers.

One option is to consider both the average net health benefits (i.e., benefits less cost) to a population as well as its effect on inequality. If a society doesn’t care at all about inequality, then this is reduced to just measuring net health benefit overall; if a society has a strong preference for equality, treatments that provide benefits to only the better-off will be considered less valuable.

A paper by Love-Koh et al. 2019 provides a nice quantitative way to estimate these tradeoffs. The approach uses both the Atkinson inequality index and the Kolm index to measure inequality. The authors then use these indices to calculate the equally distributed equivalent (EDE), which is the level of population health (in QALYs) in a completely equal distribution that yields the same amount of social welfare as the distribution under investigation.
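As a rough illustration of the EDE idea, the sketch below computes Atkinson and Kolm EDEs for an invented distribution of quality-adjusted life expectancy across subgroups. The inequality-aversion parameters are arbitrary choices for the example, not those used by the authors.

```python
import numpy as np

# Invented quality-adjusted life expectancy (QALE) by population subgroup.
qale = np.array([62.0, 68.0, 71.0, 74.0, 77.0])

def atkinson_ede(x, epsilon):
    """Equally distributed equivalent under the Atkinson social welfare
    function: the equal level of health yielding the same welfare as x."""
    if epsilon == 1.0:
        return np.exp(np.mean(np.log(x)))  # limiting (geometric mean) case
    return np.mean(x ** (1 - epsilon)) ** (1 / (1 - epsilon))

def kolm_ede(x, alpha):
    """EDE under the Kolm index (absolute inequality aversion)."""
    return x.mean() - np.log(np.mean(np.exp(alpha * (x.mean() - x)))) / alpha

print(f"mean QALE:              {qale.mean():.2f}")
print(f"Atkinson EDE (eps=1):   {atkinson_ede(qale, 1.0):.2f}")
print(f"Kolm EDE (alpha=0.15):  {kolm_ede(qale, 0.15):.2f}")
```

With any inequality aversion, the EDE sits below the mean; the gap between the two is the welfare cost of the unequal distribution, and a treatment's impact on the EDE (rather than on mean health alone) is what drives its value in this framework.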

Using this approach, the authors find the following:

“Twenty-seven interventions were evaluated. Fourteen interventions were estimated to increase population health and reduce health inequality, 8 to reduce population health and increase health inequality, and 5 to increase health and increase health inequality. Among the latter 5, social welfare analysis, using inequality aversion parameters reflecting high concern for inequality, indicated that the health gain outweighs the negative health inequality impact.”

Despite the attractive features of this approach analytically, there are issues related to how it would be implemented. In this case, inequality is based solely on quality-adjusted life expectancy. However, others could take a more holistic approach and look at socioeconomic status, including other factors (e.g., income, employment, etc.). In theory, one could perform the same exercise measuring individuals’ overall utility including these other aspects, but few (rightly) would want the government to assess individuals’ overall happiness to make treatment decisions. Second, the authors condition expected life expectancy on patients’ sex, primary diagnosis and postcode. Thus, you could have a system that prioritizes treatments for men—since men’s life expectancy is generally less than women’s. Third, this model assumes disease is exogenous. In many cases this is true, but in some cases individual behavior could increase the likelihood of having a disease. For instance, would citizens want to discount treatments for diseases that are preventable (e.g., lung cancer due to smoking, diabetes due to poor eating habits/exercise), even if treatments for these diseases reduced inequality? In reality, few diseases are either fully exogenous or fully the fault of the individual, so this is a slippery slope.

What the Love-Koh paper contributes is an easy-to-implement method for quantifying how inequality preferences should affect the value of different treatments. What the paper does not answer is whether this approach should be implemented.

Credits

Simon McNamara’s journal round-up for 8th April 2019


National Institute for Health and Care Excellence, social values and healthcare priority setting. Journal of the Royal Society of Medicine [PubMed] Published 2nd April 2019

As is traditional, this week’s round-up starts with an imaginary birthday party. After much effort, we have finally managed to light the twenty candles, have agreed our approach to the distribution of the cake, and are waiting in anticipation of the entrance of the birthday “quasi-autonomous non-governmental body”. The door opens. You clear your throat. Here we go…

Happy Birthday to you,

Happy Birthday to you,

Happy Birthday dear National Institute for Health and Care Excellence,

Happy Birthday to you.

NICE smiles happily. It is no longer a teenager. It has made it to 20 – despite its parents challenging it a few times (cough, Cancer Drug Fund, cough). After the candles have been blown out, someone at the back shouts: “Speech! Speech!”. NICE coughs, thanks everyone politely, and (admittedly slightly strangely) takes the opportunity to announce that they are revising their “Social Value Judgements” paper – a document that outlines the principles they use to develop guidance. They then proceed to circle the room, proudly handing out draft copies of the new document – “The principles that guide the development of NICE guidance and standards” (PDF). They look excited. Your fellow guests start to read.

“Surely not?”, “What the … ?”, “Why?” – they don’t seem pleased. You jump into the document. All of this is about process. Where are all the bits about justice, and inequalities, and bioethics, and the rest? “Why have you taken out loads of the good stuff?” you ask. “This is too vague, too procedural”. Your disappointment is obvious to those in the room.

Your phone pings – it’s your favourite WhatsApp group. One of the other guests has already started drafting a “critical friend” paper in the corner of the room. They want to know if you want to be involved. “I’m in”, you respond, “This is important, we need to make sure NICE knows what we think”. Your phone pings again. Another guest is in: “I want to be involved, this matters. Also, this is exactly the kind of paper that will get picked up by the AHE blog. If we are lucky, we might even be the first paper in one of their journal round-ups”. You pause, think, and respond hopefully: “Fingers crossed”.

I don’t know if NICE had an actual birthday party – if they did I certainly wasn’t invited. I also highly doubt that the authors of this week’s first paper, or indeed any paper, had the AHE blog in mind when writing. What I do know, is that the first article is indeed a “critical friend” paper which outlines the authors’ concerns with NICE’s proposal to “revise” (read: delete) their social value judgements guidance. This paper is relatively short, so if you are interested in these changes I suggest you read it, rather than relying on my imaginary birthday party version of their concerns.

I am highly sympathetic to the views expressed in this paper. The existing “social value judgements” document is excellent, and (to me at least) seems to be the gold standard in setting the values by which an HTA body should develop guidance. Reducing this down to solely procedural elements seems unnecessary, and potentially harmful if the other core values are forgotten, or deprioritised.

As I reflect on this paper, I can’t help think of the old adage: “If it ain’t broke, don’t fix it”. NICE – this ain’t broke.

Measuring survival benefit in health technology assessment in the presence of nonproportional hazards. Value in Health Published 22nd March 2019

Dear HTA bodies that don’t routinely look for violations of proportional hazards in oncology data: 2005 called, they want their methods back.

Seriously though, it’s 2019. Things have moved on. If a new drug has a different mode of action to its comparator, is given for a different duration, or has differing levels of treatment effect in different population subgroups, there are good reasons to think that the trial data for that drug might violate proportional hazards. So why not look? It’s easy enough, and could change the way you think about both the costs and the benefits of that medicine.

If you haven’t worked in oncology before, there is a good chance you are currently asking yourself two questions: “what does proportional hazards mean?” and “why does it matter?”. In massively simplified terms, when we say the hazards in a trial are “proportional” we mean that the treatment effect of the new intervention (typically on survival) is constant over time. If a treatment takes some time to work (e.g. immunotherapies), or is given for only a few weeks before being stopped (e.g. some chemotherapies), there are good reasons to think that the treatment effect of that intervention may vary over time. If this is the case, there will be a violation of proportional hazards (they will be “nonproportional”).

If you are an HTA body, this is important for at least three reasons. First, if hazards are non-proportional, this can mean that the average hazard ratio (treatment effect) from the trial is a poor representation of what is likely to happen beyond the trial period – a big issue if you are extrapolating data in an economic model. Second, if hazards are non-proportional, this can mean that the median survival benefit from the trial is a poor representation of the mean benefit (e.g. in the case of a curve with a “big tail”). If you don’t account for this, and rely on medians (as some HTA bodies do), this can result in your evaluation underestimating, or overestimating, the true benefits and costs of the medicine. Third, most approaches to including indirect comparisons in economic models rely on proportionality so, if this doesn’t hold, your model might be a poor representation of reality. Given these issues, it makes sense that HTA bodies should be looking for violations of proportional hazards when evaluating oncology data.
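The median-versus-mean point is easy to demonstrate by simulation. In the sketch below (invented numbers, not from any appraisal), a new drug leaves most patients' survival unchanged but gives a 20% fraction much longer survival. This is exactly the kind of "big tail" that violates proportional hazards, and it makes the median gain look modest while the mean gain, which is what an economic model needs, is several times larger.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Comparator: exponential survival with a median of ~12 months.
comparator = rng.exponential(scale=12 / np.log(2), size=n)

# New drug: same survival for most patients, but a 20% long-survivor
# fraction -- a heavy tail inconsistent with proportional hazards.
cured = rng.random(n) < 0.20
new_drug = np.where(cured,
                    rng.exponential(scale=60.0, size=n),
                    rng.exponential(scale=12 / np.log(2), size=n))

median_gain = np.median(new_drug) - np.median(comparator)
mean_gain = new_drug.mean() - comparator.mean()
print(f"median survival gain: {median_gain:5.1f} months")
print(f"mean survival gain:   {mean_gain:5.1f} months")
```

An HTA body summarising this trial by its median benefit would miss most of the value (and most of the time on treatment, hence most of the cost) sitting in the tail.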

In this week’s second paper, the authors review the way different HTA bodies approach the issue of non-proportionality in their methods guides, and in a sample of their appraisals. Of the HTA bodies considered, they find that only NICE (UK), CADTH (Canada), and PBAC (Australia) recommend testing for proportional hazards. Notably, the authors report that the Transparency Committee (France), IQWiG (Germany), and TLV (Sweden) don’t recommend testing for proportionality. Interestingly, despite these recommendations, the authors find that these tests were included in only a majority of the NICE appraisals they reviewed, in just 20% of the PBAC appraisals, and in only 8% of the CADTH appraisals. This suggests that the vast majority of oncology drug evaluations do not include consideration of non-proportionality – a big concern given the issues outlined above.

I liked this paper, although I was a bit shocked at the results. If you work for an HTA body that doesn’t recommend testing for non-proportionality, or doesn’t enforce their existing recommendations, I suggest you think very carefully about this issue – particularly if you rely on the extrapolation of survival curves in your assessments. If you aren’t looking for violations of proportional hazards, there is a good chance that you aren’t reflecting the true costs and benefits of many medicines in your evaluations. So, why not look for them?

The challenge of antimicrobial resistance: what economics can contribute. Science Published 5th April 2019

Health Economics doesn’t normally make it into Science (the journal). If it does, it probably means the paper is an important one. This one certainly is.

Antimicrobial resistance (AMR) is scary – really scary. One source cited in this paper predicts that by 2050, 10 million people a year will die due to AMR. I don’t know about you, but I find this pretty worrying (how’s that for a bit of British understatement?). Given these predicted consequences, you would think that there would be quite a lot of work from economists on this issue. Well, there isn’t. According to this article, there are only 55 papers on EconLit that “broadly relate” to AMR.

This paper contributes to this literature in two important ways. First, it is a call to arms to economists to do more work on AMR. If there are only 55 papers on this topic, this suggests we are only scratching the surface of the issue and could do more as a field to contribute to solving the problem. Second, it neatly demonstrates how economics could be applied to the problem of AMR – including analysis of both the supply side (not enough new antibiotics being developed) and demand side (too much antibiotic use) of the problem.

In the main body of the paper, the authors draw parallels between the economics of AMR and the economics of climate change: both are global instances of the ‘tragedy of the commons’, both are subject to significant uncertainty about the future, and both are highly sensitive to inter-temporal discounting. They then go on to suggest that many of the ideas developed in the context of climate change could be applied to AMR – including the potential for use of antibiotic prescribing quotas (analogous to carbon quotas) and taxation of antibiotic prescriptions (analogous to the idea of a carbon tax). There are many other ideas in the paper, and if you are interested in these I suggest you take the time to read it in full.

I think this is an important paper and one that has made me think more about the economics of both AMR and, inadvertently, climate change. With both issues, I can’t help but think we might be sleepwalking into a world where we have royally screwed over future generations because we didn’t take the actions we needed to take. If economists can help stop these things happening, we need to act. If we don’t, what will you say in 2050 when you turn on the news and see that 10 million people are dying from AMR each year? That is, assuming you aren’t one of those who has died as a result. Scary stuff indeed.


Chris Sampson’s journal round-up for 1st April 2019


Toward a centralized, systematic approach to the identification, appraisal, and use of health state utility values for reimbursement decision making: introducing the Health Utility Book (HUB). Medical Decision Making [PubMed] Published 22nd March 2019

Every data point reported in research should be readily available to us all in a structured knowledge base. Most of us waste most of our time retreading old ground, meaning that we don’t have the time to do the best research possible. One instance of this is in the identification of health state utility values to plug into decision models. Everyone who builds a model in a particular context goes searching for utility values – there is no central source. The authors of this paper are hoping to put an end to that.

The paper starts with an introduction to the importance of health state utility values in cost-effectiveness analysis, which most of us don’t need to read. Of course, the choice of utility values in a model is very important and can dramatically alter estimates of cost-effectiveness. The authors also discuss issues around the identification of utility values and the assessment of their quality and applicability. Then we get into the objectives of the ‘Health Utility Book’, which is designed to tackle these issues.

The Health Utility Book will consist of a registry (I like registries), backed by a systematic approach to the identification and inclusion (registration?) of utility values. The authors plan to develop a quality assessment tool for studies that report utility values, using a Delphi panel method to identify appropriate indicators of quality to be included. The quality assessment tool will be complemented by a tool to assess applicability, which will be developed through interviews with stakeholders involved in the reimbursement process.

Initially, the Health Utility Book will only compile utility values for cancer, and some of the funding for the project is cancer-specific. To survive, the project will need more money from more sources. To be sustainable, the project will need to attract funding indefinitely. Or perhaps it could morph into a crowd-sourced platform. Either way, the Health Utility Book has my support.

A review of attitudes towards the reuse of health data among people in the European Union: the primacy of purpose and the common good. Health Policy Published 21st March 2019

We all agree that data protection is important. We all love the GDPR. Organisations such as the European Council and the OECD are committed to facilitating the availability of health data as a means of improving population health. And yet, there often seem to be barriers to accessing health data, and we occasionally hear stories of patients opposing data sharing (e.g. care.data). Maybe people don’t want researchers to be using their data, and we just need to respect that. Or, more likely, we need to figure out what it is that people are opposed to, and design systems that recognise this.

This study reviews research on attitudes towards the sharing of health data for purposes other than treatment, among people living in the EU, employing a ‘configurative literature synthesis’ (a new one for me). From 5,691 abstracts, 29 studies were included. Most related to the use of health data in research in general, while some focused on registries. A few studies looked at other uses, such as for planning and policy purposes. And most were from the UK.

An overarching theme was a low awareness among the population about the reuse of health data. However, in some studies, a desire to be better informed was observed. In general, views towards the use of health data were positive. But this was conditional on the data being used to serve the common good. This includes such purposes as achieving a better understanding of diseases, improving treatments, or achieving more efficient health care.

Participants weren’t so happy with health data reuse if it was seen to conflict with the interests of patients providing the data. Commercialisation is a big concern, including the sale of data and private companies profiting from the data. Employers and insurance companies were also considered a threat to patients’ interests. There were conflicting views about whether it is positive for pharmaceutical companies to have access to health data. A minority of people were against sharing data altogether. Certain types of data are seen as being particularly sensitive, including those relating to mental health or sexual health. In general, people expressed concern about data security and the potential for leaks.

The studies also looked at the basis for consent that people would prefer. A majority accepted that their data could be used without consent so long as the data were anonymised. But there were no clear tendencies of preference for the various consent models.

It’s important to remember that – on the whole – patients want their data to be used to further the common good. But support can go awry if the data are used to generate profits for private firms or used in a way that might be perceived to negatively affect patients.

Health-related quality of life in injury patients: the added value of extending the EQ-5D-3L with a cognitive dimension. Quality of Life Research [PubMed] Published 18th March 2019

I’m currently working on a project to develop a cognition ‘bolt-on’ for the EQ-5D. Previous research has demonstrated that a cognition bolt-on could provide additional information to distinguish meaningful differences between health states, and that cognition might be a more important candidate than other bolt-ons. Injury – especially traumatic brain injury – can be associated with cognitive impairments. This study explores the value of a cognition bolt-on in this context.

The authors sought to find out whether cognition is sufficiently independent of other dimensions, whether the impact of cognitive problems is reflected in the EuroQol visual analogue scale (EQ VAS), and how a cognition bolt-on affects the overall explanatory power of the EQ-5D-3L. The data used are from the Dutch Injury Surveillance System, which surveys people who have attended an emergency department with an injury, including EQ-5D-3L. The survey adds a cognitive bolt-on relating to memory and concentration.

Data were available for 16,624 people at baseline, with 5,346 complete responses at 2.5-month follow-up. The cognition item was the least affected, with around 20% reporting any problems (though it’s worth noting that the majority of the cohort had injuries to parts of the body other than the head). The frequency of different responses suggests that cognition is dominant over other dimensions in the sense that severe cognitive problems tend to be observed alongside problems in other dimensions, but not vice versa. The mean EQ VAS for people reporting severe cognitive impairment was 41, compared with a mean of 75 for those reporting no problems. Regression analysis showed that moderate and severe cognitive impairment explained 8.7% and 6.2% of the variance of the EQ VAS. Multivariate analysis suggested that the cognitive dimension added roughly the same explanatory power as any other dimension. This was across the whole sample. Interestingly (or, perhaps, worryingly) when the authors looked at the subset of people with traumatic brain injury, the explanatory power of the cognitive dimension was slightly lower than overall.
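For intuition on what "explanatory power" means here, the sketch below simulates an EQ VAS score from five dimensions plus a cognition bolt-on and compares the R-squared of regressions with and without the bolt-on. Everything is invented (the data, the coefficients, and the linear 0/1/2 severity coding, which is simpler than the paper's approach), so it shows the mechanics of the comparison, not the study's results.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Invented data: five EQ-5D-3L dimensions plus a cognition bolt-on, each
# coded 0/1/2 (no/moderate/severe problems), and a simulated EQ VAS score.
dims = rng.integers(0, 3, size=(n, 6)).astype(float)
true_effects = np.array([5.0, 4.0, 6.0, 5.0, 4.0, 7.0])  # last = cognition
vas = 85.0 - dims @ true_effects - rng.normal(0.0, 10.0, n)

def r_squared(X, y):
    """R^2 from an OLS fit of y on X (plus an intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_without = r_squared(dims[:, :5], vas)  # core dimensions only
r2_with = r_squared(dims, vas)            # plus the cognition bolt-on
print(f"R^2 without cognition: {r2_without:.3f}")
print(f"R^2 with cognition:    {r2_with:.3f}")
```

The bolt-on's value, on this metric, is the gap between the two R-squared values: how much VAS variation the core five dimensions leave on the table.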

There’s enough in this paper to justify further research into the advantages and disadvantages of using a cognition bolt-on. But I would say that. Whether or not the bolt-on descriptors used in this study are meaningful to patients remains an open question.

Developing the role of electronic health records in economic evaluation. The European Journal of Health Economics [PubMed] Published 14th March 2019

One way that we can use patients’ routinely collected data is to support the conduct of economic evaluations. In this commentary, the authors set out some of the ways to make the most of these data and discuss some of the methodological challenges.

Large datasets have the advantage of being large. When this is combined with the collection of sociodemographic data, estimates for sub-groups can be produced. The data can also facilitate the capture of outcomes not otherwise available. For example, the impact of bariatric surgery on depression outcomes could be identified beyond the timeframe of a trial. The datasets also have the advantage of being representative, where trials are not. This could mean more accurate estimates of costs and outcomes.

But there are things to bear in mind when using the data, such as the fact that coding might not always be very accurate, and coding practices could vary between observations. Missing data are likely to be missing for a reason (i.e. not at random), which creates challenges for the analyst.

I had hoped that this paper would discuss novel uses of routinely collected data systems, such as the embedding of economic evaluations within them, rather than simply their use to estimate parameters for a model. But if you’re just getting started with using routine data, I suppose you could do worse than start with this paper.
