Rita Faria’s journal round-up for 20th January 2020

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Opportunity cost neglect in public policy. Journal of Economic Behavior & Organization Published 10th January 2020

Opportunity cost is a key concept in economics, and health economics is no exception. We all agree that policy-makers should consider the opportunity cost alongside the benefits of the various policy options. The question is… do they? This fascinating paper by Emil Persson and Gustav Tinghög suggests that they may not.

The paper reports two studies: one in the general population, and the other in a sample of experts on priority setting in health. In both studies, the participants were asked to choose between making a purchase or not, and were randomised to choices with and without a reminder about the opportunity cost. The reminder consisted of the “no” option having the comment “saving the money for other purchases”. There were choices about private consumption (e.g. buying a new mobile phone) and health care policy (e.g. funding a new cancer screening programme).

In the study in the general population, the participants were 6% less likely to invest in public policies if they were reminded of the opportunity cost. There was no effect in private consumption decisions. In the study with experts on health care priority setting, the participants were 10% less likely to invest in a health programme when reminded about opportunity costs, although the result was “marginally significant”. For private consumption, there was a numerical difference of -6%, but it was not statistically significant. The authors concluded that both lay people and experts neglect opportunity cost in public policy but much less so in their own private consumption decisions.

It struck me that this effect is driven by quite a small difference between the scenarios – simply stating that choosing to reject the policy means that the money will be saved for future purchases. I wonder how this information affects the decision. After all, the scenarios only quantify the costs of the policy, without information about the benefits or the opportunity cost. For example, the benefits of the cancer screening programme were that “cancer treatment will be more effective, lives will be saved and human suffering will be avoided” and the cost was 48 million SEK per year. Whether this policy is good or bad value for money depends entirely on how much suffering it avoids and how much would be avoided by investing the money in something else. It would have been interesting to couple the survey with interviews to understand how the participants interpreted the information and reached their decisions.

On a wider note, this paper agrees with health economists’ anecdotal experience that policy-makers find it hard to think about opportunity cost. This is not helped by settings where they hear from people who would benefit from a positive recommendation and from doctors who would like to have the new drug in their medical arsenal, but little from the people who will bear the opportunity cost. The message is clear: we need to do better at communicating the opportunity cost of public policies!

Assessment of progression-free survival as a surrogate end point of overall survival in first-line treatment of ovarian cancer. JAMA Network Open [PubMed] Published 10th January 2020

A study about the relationship between progression-free survival and overall survival may seem an odd choice for a health economics journal round-up, but it is actually quite relevant. In cost-effectiveness analysis of new cancer drugs, the trial primary endpoint may be progression-free survival (PFS). Data on overall survival (OS) may be too immature to assess the treatment effect or for extrapolation to the longer term. To predict QALYs and lifetime costs with and without the new drug, the cost-effectiveness model may need to assume a surrogate relationship between PFS and OS. That is, that an effect on PFS is reflected, to some extent, in an effect on OS. The question is, how strong is that surrogate relationship? This study tries to answer this question in advanced ovarian cancer.

Xavier Paoletti and colleagues conducted a systematic review and meta-analysis using individual patient data from 11,029 people who took part in 17 RCTs of first-line therapy in advanced ovarian cancer. They assessed the surrogate relationship at the individual and at the trial-level. The individual-level surrogate relationship refers to the correlation between PFS and OS for the individual patient. As the authors note, this may only reflect that people who have longer life expectancy also take longer to progress. At the trial-level, they looked at the correlation between the hazard ratio (HR) on OS and the HR on PFS. This reflects how much of the effect on OS could be predicted by the effect on PFS. They used the surrogate criteria proposed by the Follicular Lymphoma Analysis of Surrogacy Hypothesis initiative. As this is outside my area of expertise, I won’t comment on the methodology.

One of their results is quite striking: in 16 of the 17 RCTs, the experimental drug’s HRs for PFS and OS were not statistically different from the control. In other words, almost none of the new drugs showed statistically significant benefits! In terms of the surrogate relationship, they found an individual-level association – that is, people who take longer to progress also survive for longer. In contrast, they did not find a surrogate relationship between PFS and OS at the trial-level. Given that the HRs were centred around 1, the poor correlation may be partly due to the lack of variation in HRs rather than a genuinely poor surrogate relationship.
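To make the trial-level idea concrete, here is a minimal sketch (with made-up hazard ratios, not the study’s data, and a simple correlation rather than the formal surrogacy criteria used by the authors) of how one might gauge how well treatment effects on PFS predict treatment effects on OS across trials:

```python
import numpy as np

# Hypothetical trial-level data: hazard ratios (experimental vs control)
# for PFS and OS in each of 8 trials. All values are made up for illustration.
hr_pfs = np.array([0.95, 1.02, 0.88, 1.10, 0.97, 1.05, 0.92, 1.00])
hr_os = np.array([1.01, 0.98, 0.94, 1.08, 1.03, 0.99, 0.96, 1.02])

# Trial-level surrogacy is usually assessed on the log-HR scale:
# how well does the treatment effect on PFS predict the effect on OS?
log_pfs, log_os = np.log(hr_pfs), np.log(hr_os)

r = np.corrcoef(log_pfs, log_os)[0, 1]   # trial-level correlation
r_squared = r ** 2
slope, intercept = np.polyfit(log_pfs, log_os, 1)  # surrogacy regression

print(f"trial-level R^2 = {r_squared:.2f}")
```

Note that if the HRs are all clustered around 1, the log-HRs have very little spread, so a low R² can reflect that lack of variation as much as genuinely poor surrogacy.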

Now the challenge remains in cost-effectiveness modelling when OS is immature. Extrapolate OS with high uncertainty? Use a poor surrogate relationship with PFS? Or formal expert elicitation? Hopefully methodologists are looking into this! In the meantime, regulators may wish to think again about licensing drugs with evidence only on PFS.

After 20 years of using economic evaluation, should NICE be considered a methods innovator? PharmacoEconomics [PubMed] Published 13th January 2020

NICE is currently starting a review of the methods and process for health technology assessment. Mark Sculpher and Steve Palmer take this opportunity to reflect on how NICE’s methods have evolved over time and to propose areas ripe for an update.

It was very enjoyable to read about the history of the Methods Guide and how NICE has responded to its changing context, responsibilities, and new challenges. For example, the cost-effectiveness threshold of £20k-£30k/QALY was introduced by the 2004 Methods Guide. This threshold was reinforced by the 2019 Voluntary Scheme for Branded Medicines Pricing and Access. The funny thing is, although NICE is constrained to the £20k-£30k/QALY threshold, the Department of Health and Social Care routinely uses Claxton et al.’s £13k/QALY benchmark.

Mark and Steve go through five key topics in health technology assessment to pick out the areas that should be considered for an update. The topics are: health measurement and valuation, broader benefits, perspective, modelling, and uncertainty. For example, they highlight whether and how to consider caregiver burden – the benefits (and opportunity costs) that fall on caregivers – as well as guidance on model validation and the formal incorporation of value of information methods. These updates are all sorely needed and would cement NICE’s position as the international standard-setter for health technology assessment.

Beyond NICE and the UK, I found that this paper provides a good overview on hot topics in cost-effectiveness for the next few years. Must read for cost-effectiveness analysts!


Rita Faria’s journal round-up for 30th December 2019


Value in hepatitis C virus treatment: a patient-centered cost-effectiveness analysis. PharmacoEconomics [PubMed] Published 2nd December 2019

There have been many economic evaluations of treatments for hepatitis C. The usual outcomes are costs and a measure of quality-adjusted survival, such as QALYs. But health-related quality of life and life expectancy may not be the only outcomes that matter to patients. This fascinating paper by Joe Mattingly II and colleagues fills this gap by collaborating with patients in the development of an economic evaluation of hepatitis C treatments.

Patient engagement was guided by a stakeholder advisory board including health care professionals, four patients and a representative of a national patient advocacy organisation. This board reviewed the model design, model inputs and presentation of results. To ensure that the economic evaluation included what is important to patients, the team conducted a Delphi process with patients who had received treatment or were considering treatment. This is reported in a separate paper.

The feedback from patients led to the inclusion of two outcomes beyond QALYs and costs: infected life-years, which relate to the patient’s fear of infecting others, and workdays missed, which relate to financial issues and impact on work and career.

I was impressed with the effort put into engaging with patients and stakeholders. For example, there were 11 meetings with the stakeholder advisory board. This shows that engaging with stakeholders takes time and energy to do right! The challenge with the patient-centric outcome measures is in using them to make decisions. From an individual or an employer’s perspective, it may be useful to have results in terms of cost per workday absence avoided, for example, if these can then be compared to a maximum acceptable cost. As suggested by the authors, an interesting next step would be to seek feedback from managed care organisations. Whether such measures would be useful to inform decisions in publicly funded healthcare services is less clear.
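As an illustration of how an employer might apply such a decision rule, here is a tiny sketch with entirely hypothetical numbers (the costs, workdays, and threshold are all assumptions, not figures from the paper):

```python
# Hypothetical per-patient costs and workday absences under the new
# treatment vs standard care (all numbers invented for illustration).
cost_new, cost_std = 12_000.0, 4_000.0
workdays_missed_new, workdays_missed_std = 30.0, 55.0

incremental_cost = cost_new - cost_std        # extra spend per patient
workdays_avoided = workdays_missed_std - workdays_missed_new

# Cost per workday absence avoided, compared against a maximum
# acceptable cost the employer is willing to pay (assumed).
icer = incremental_cost / workdays_avoided
max_acceptable = 400.0

adopt = icer <= max_acceptable
print(f"£{icer:.0f} per workday absence avoided; adopt = {adopt}")
```

The same arithmetic underlies any cost-per-outcome ratio; the hard part, as noted above, is deciding what a defensible maximum acceptable cost would be.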

Patient engagement is all the rage at present, but there’s not much guidance on how to do it in practice. This paper is a great example of how to go about it.

TECH-VER: a verification checklist to reduce errors in models and improve their credibility. PharmacoEconomics [PubMed] [RePEc] Published 8th November 2019

Looking for help in checking your decision model? Fear not, there’s a new tool on the block! The TECH-VER checklist lists a set of steps to assess the internal validity of your model.

I have to admit that I’m getting a bit weary of checklists, but this one is truly useful. It’s divided into five areas: model inputs, event/state calculations, results, uncertainty analysis, and overall validation and other supplementary checks. Each area includes an assessment of the completeness of the calculations in the electronic model, their consistency with the technical report, and then steps to check their correctness.

Correctness is assessed with a series of black-box, white-box, and replication-based tests. Black-box tests involve changing parameters in the model and checking whether the results change as expected. For example, if the HRQoL weights are set to 1 and the decrements to 0, the QALYs should equal the life years. White-box testing involves checking the calculations one by one. Replication-based tests involve redoing the calculations independently.
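A black-box test of this kind is easy to automate. The sketch below uses a deliberately tiny two-state cohort model (my own toy example, not TECH-VER’s code) to show the QALYs-equal-life-years check:

```python
# A minimal two-state (alive/dead) cohort model with annual cycles,
# built only to illustrate a black-box verification test.
def run_model(n_cycles=10, p_die=0.1, utility=0.8):
    alive = 1.0
    life_years = qalys = 0.0
    for _ in range(n_cycles):
        life_years += alive        # person-years lived this cycle
        qalys += alive * utility   # quality-adjusted person-years
        alive *= (1 - p_die)       # some of the cohort dies each cycle
    return life_years, qalys

# Black-box test: with a utility weight of 1 (and no decrements),
# QALYs must equal life years. Any discrepancy flags a calculation error.
ly, qaly = run_model(utility=1.0)
assert abs(ly - qaly) < 1e-12, "black-box check failed: QALYs != LYs"
```

In a spreadsheet model the same check is done by hand – set the utility inputs to 1 and compare the QALY and life-year outputs – but in a scripted model it can run automatically on every change.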

The authors’ handy tip is to apply the checks in ascending order of effort and time: starting first with black-box tests, then conducting white-box tests only for priority calculations or if there are unexpected results. I recommend this paper to all cost-effectiveness modellers. TECH-VER will definitely feature in my toolbox!

Proposals on Kaplan-Meier plots in medical research and a survey of stakeholder views: KMunicate. BMJ Open [PubMed] Published 30th September 2019

What’s your view of the Kaplan-Meier plot? I find it quite difficult to explain to non-specialist audiences, particularly the uncertainty in the differences in survival time between treatment groups. It seems that I’m not the only one!

Tim Morris and colleagues agree that Kaplan-Meier plots can be difficult to interpret. To address this, they proposed improvements to better show the status of patients over time and the uncertainty around the estimates. They then assessed the proposed improvements with a survey of researchers. In line with my own views, the majority of respondents preferred a table with the number of patients who had the event and who were censored, to show the status of patients over time, and confidence intervals to show the uncertainty.

The Kaplan-Meier plot with confidence intervals and the table would definitely help me to interpret and explain Kaplan-Meier plots. Also, the proposed improvements seem to be straightforward to implement. One way to make it easy for researchers to implement these plots in practice would be to publish the code to replicate the preferred plots.
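As a sketch of what the underlying estimate involves, here is a minimal Kaplan-Meier estimator with toy data (my own illustration, not the KMunicate code – their proposals concern the presentation, such as at-risk tables and confidence intervals, rather than the estimation itself):

```python
import numpy as np

# A minimal Kaplan-Meier (product-limit) estimator.
# times: follow-up time per patient; events: 1 = event observed, 0 = censored.
def kaplan_meier(times, events):
    times, events = np.asarray(times), np.asarray(events)
    surv = 1.0
    step_times, step_surv = [], []
    for t in np.unique(times):                    # event/censoring times, sorted
        d = np.sum((times == t) & (events == 1))  # events at time t
        n = np.sum(times >= t)                    # patients still at risk
        if d > 0:
            surv *= 1 - d / n                     # product-limit update
            step_times.append(t)
            step_surv.append(surv)
    return step_times, step_surv

# Toy data: six patients, two censored (event = 0).
t, s = kaplan_meier([2, 3, 3, 5, 7, 8], [1, 0, 1, 1, 0, 1])
print(list(zip(t, s)))
```

In practice one would of course use an established implementation (e.g. the `survival` package in R or `lifelines` in Python), which also provides confidence intervals and the number-at-risk tables that the survey respondents preferred.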

There is a broader question, outside the scope of this project, about how to convey survival times and their uncertainty to untrained audiences, from health care professionals and managers to patients. Would audience-specific tools be the answer? Or should we try to up-skill the audience to understand a Kaplan-Meier plot?

Better communication is surely key if we want to engage stakeholders with research and if our research is to have an impact on policy. I, for one, would be grateful for more guidance on how to communicate research. This study is an excellent first step in making a specialist tool – the Kaplan-Meier plot – easier to understand.

Cost-effectiveness of strategies preventing late-onset infection in preterm infants. Archives of Disease in Childhood [PubMed] Published 13th December 2019

And lastly, a plug for my own paper! This article reports the cost-effectiveness analysis conducted for a ‘negative’ trial. The PREVAIL trial found that the experimental intervention – anti-microbial impregnated peripherally inserted central catheters (AM-PICCs) – had no effect compared to the standard PICCs, which are used in the NHS. AM-PICCs are more costly than standard PICCs. Clearly, AM-PICCs are not cost-effective. So, you may ask, why conduct a cost-effectiveness analysis and develop a new model?

Developing a model to evaluate the cost-effectiveness of AM-PICCs was one of the project’s objectives. We started the economic work pretty early on. By the time that the trial reported, the model was already built, tested with data from the literature, and all ready to receive the trial data. Wasted effort? Not at all!

Thanks to this cost-effectiveness analysis, we concluded that avoiding neurodevelopmental impairment in children born preterm is very beneficial, and hence warrants a large investment by the NHS. If we believe the observational evidence that infection causes neurodevelopmental impairment, interventions that reduce the risk of infection can be cost-effective.

The linkage to Hospital Episode Statistics, National Neonatal Research Database and Paediatric Intensive Care Audit Network allowed us to get a good picture of the hospital care and costs of the babies in the PREVAIL trial. This informed some of the cost inputs in the cost-effectiveness model.

If you’re planning a cost-effectiveness analysis of strategies to prevent infections and/or neurodevelopmental impairment in preterm babies, do feel free to get in touch!


Meeting round-up: ISPOR Europe 2019

For many health economists, November is ISPOR Europe month, and this year was no exception! We gathered in the fantastic Bella Center in Copenhagen to debate, listen and breathe health economics and outcomes research from the 2nd to the 6th November. Missed it? Would like a recap? Stay tuned for the #ISPOREurope 2019 round-up!


My ISPOR week started with the fascinating course ‘Tools for reproducible real-world data analysis’ by Blythe Adamson and Rachael Sorg. My key take-home messages? Use an interface like R Markdown to produce a document with code and results automatically. Use a version control platform like Phabricator to make code review easy. Write a detailed protocol, write the code to follow the protocol, and then check the code side by side with the protocol.

Monday started with the impressive workshop on translating oncology clinical trial endpoints to real-world data (RWD) for decision making.

Keith Abrams set the scene. Electronic health records (EHRs) may be used to derive the overall survival (OS) benefit given the observed benefit on progression-free survival (PFS). Sylwia Bujkiewicz showed an example where a bivariate meta-analysis of RCTs was used to estimate the surrogate relationship between PFS and OS (paper here). Jessica Davies discussed some of the challenges, such as the lack of data on exposure to treatments in a way that matches the data recorded in trials. Federico Felizzi presented a method to determine the optimal treatment duration of a cancer drug (see here for the code).

Next up, the Women in HEOR session! Women in HEOR is an ISPOR initiative that aims to support the growth, development, and contribution of women. It included various initiatives at ISPOR Europe, such as dinners, receptions and, of course, this session.

Shelby Reed introduced the session, and Olivia Wu presented the overwhelming evidence on the benefits of diversity and on how to foster it in our work environments. Nancy Berg presented on ISPOR’s commitment to diversity and equality. We then heard from Sabina Hutchison about how to network in a conference environment, how to develop a personal brand, and how to present our pitch. Have a look at my Twitter thread for the tips. For more information on the Women in HEOR activities at ISPOR Europe, search #WomenInHEOR on Twitter. Loads of cool information!

My Monday afternoon started with the provocatively titled ‘Time for change? Has time come for the pharma industry to accept modest prices?’. Have a look here for my live Twitter thread. Kate Dion started by noting that the pressure is on for the pharmaceutical industry to reduce drug prices. Sarah Garner argued that lower prices lead to more patients being able to access the drug, which in turn increases the company’s income. Michael Schröter argued that innovative products should have a premium price, as with Hemlibra. Lastly, Jens Grueger supported the implementation of value-based pricing, given the cost-effectiveness threshold.

Keeping with the drug pricing theme, my next session was on indication-based pricing. Mireia Jofre Bonet tackled the question of whether a single price is stifling innovation. Adrian Towse was supportive of indication-based pricing because it allows the price to depend on the value of each indication and expands access to the full licensed population. Andrew Briggs argued against indication-based pricing for three reasons. First, it would give companies the maximum value-based price across all indications. Second, it would lead to greater drug expenditure, and hence greater opportunity costs. Third, it would be difficult to enforce, given that it would require the cooperation of all payers. Francis Arickx explained the pricing system in Belgium. Remarkably, prices can be renegotiated over time depending on new entrants to the market and new evidence. Another excellent session at ISPOR Europe!

My final session on Monday was about the timely and important topic of approaches for OS extrapolation. Elisabeth Fenwick introduced the session by noting that innovations in oncology have given rise to different patterns of survival, with implications for extrapolation. Sven Klijn presented on the various available methods for survival extrapolation. John Whalen focused on mixture cure models for cost-effectiveness analysis. Steve Palmer argued that, although new methods, such as mixture cure models, may provide additional insight, the approach should be justified, evidence-based and alternatives explored. In sum, there is no single optimal method.

On Tuesday, my first session was the impressive workshop on estimating cost-effectiveness thresholds based on the opportunity cost (Twitter thread). Nancy Devlin set the scene by explaining the importance of getting the cost-effectiveness threshold right. James Lomas explained how to estimate the opportunity cost to the health care system following the seminal work by Karl Claxton et al., also touching on some of James’s recent work. Martin Henriksson noted that, by itself, the opportunity cost is not sufficient to define the threshold if we wish to consider solidarity and need alongside cost-effectiveness. The advantage of knowing the opportunity cost is that we can make informed trade-offs between health maximisation and other elements of value. Danny Palnoch finished the panel by explaining the challenges when deciding what to pay for a new treatment.

Clearly there is a tension between the price that pharmaceutical companies feel is reasonable, the opportunity cost to the health care service, and the desire by stakeholders to use the drug. I feel this in every session of the NICE appraisal committee!

My next session was the compelling panel on the use of RWD to revisit HTA decisions (Twitter thread). Craig Brooks-Rooney noted that, as regulators increasingly license technologies based on weaker evidence, HTA agencies are under pressure to adapt their methods to the available evidence. Adrian Towse proposed a conceptual framework to use RWD to revisit decisions based on value of information analysis. Jeanette Kusel went through examples where RWD has been used to inform NICE decisions, such as brentuximab vedotin. Anna Halliday discussed the many practical challenges of implementing RWD collection to inform re-appraisals. Anna finished with a caution against prolonging negotiations and appraisals, which could lead to delays in patient access.

My Wednesday started with the stimulating panel on drugs with tumour agnostic indications. Clarissa Higuchi Zerbini introduced the panel and proposed some questions to be addressed. Rosa Giuliani contributed with the clinical perspective. Jacoline Bouvy discussed the challenges faced by NICE and ways forward in appraising tumour-agnostic drugs. Marc van den Bulcke finished the panel with an overview of how next generation sequencing has been implemented in Belgium.

My last session was the brilliant workshop on HTA methods for antibiotics.

Mark Sculpher introduced the topic. Antibiotic resistance is a major challenge for humanity, but the development of new antibiotics is declining. Beth Woods presented a new framework for HTA of antibiotics. The goal is to reflect the full value of antibiotics whilst accounting for the opportunity cost and uncertainties in the evidence (see this report for more details). Angela Blake offered the industry perspective. She argued that revenues should be delinked from volume, that value assessment should be holistic, and that we should be mindful of the incentives faced by drug companies. Nick Crabb finished by introducing a new project, by NICE and NHS England, on the feasibility of innovative value assessments for antibiotics.

And this is the end of the absolutely outstanding ISPOR Europe 2019! If you’re eager for more, have a look at the video below with my conference highlights!