Rita Faria’s journal round-up for 20th January 2020

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Opportunity cost neglect in public policy. Journal of Economic Behavior & Organization Published 10th January 2020

Opportunity cost is a key concept in economics, and health economics is no exception. We all agree that policy-makers should consider the opportunity cost alongside the benefits of the various policy options. The question is… do they? This fascinating paper by Emil Persson and Gustav Tinghög suggests that they may not.

The paper reports two studies: one in the general population and the other in a sample of experts on priority setting in health. In both studies, participants were asked to choose whether or not to make a purchase, and were randomised to choices with or without a reminder about the opportunity cost. The reminder consisted of adding the comment “saving the money for other purchases” to the “no” option. There were choices about private consumption (e.g. buying a new mobile phone) and health care policy (e.g. funding a new cancer screening programme).

In the general population study, participants were 6% less likely to invest in public policies if they were reminded of the opportunity cost. There was no effect on private consumption decisions. In the study with experts on health care priority setting, participants were 10% less likely to invest in a health programme when reminded about opportunity costs, although the result was only “marginally significant”. For private consumption there was a numerical difference of −6%, but it was not statistically significant. The authors concluded that both lay people and experts neglect opportunity cost in public policy, but much less so in their own private consumption decisions.

It struck me that this effect is driven by quite a small difference between the scenarios – simply stating that choosing to reject the policy means that the money will be saved for future purchases. I wonder how this information affects the decision. After all, the scenarios only quantify the costs of the policy, without information about the benefits or the opportunity cost. For example, the benefits of the cancer screening programme were that “cancer treatment will be more effective, lives will be saved and human suffering will be avoided”, and the cost was 48 million SEK per year. Whether this policy is good or bad value for money depends entirely on how much suffering it avoids and how much suffering would be avoided by investing the money in something else. It would have been interesting to couple the survey with interviews to understand how the participants interpreted the information and their decision-making process.
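
To see why the missing quantities matter, here is a back-of-the-envelope calculation using a purely illustrative willingness-to-pay value (my assumption, not a figure from the paper). At, say, 500,000 SEK per QALY, the screening programme would need to generate at least

$$\frac{48\,000\,000\ \text{SEK per year}}{500\,000\ \text{SEK per QALY}} = 96\ \text{QALYs per year}$$

to be worth its opportunity cost. Without some benchmark of this kind, respondents cannot really weigh the benefits against what is forgone.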

On a wider note, this paper agrees with health economists’ anecdotal experience that policy-makers find it hard to think about opportunity cost. This is not helped by settings in which they hear from people who would benefit from a positive recommendation, and from doctors who would like to have the new drug in their medical arsenal, but hear little about the people who will bear the opportunity cost. The message is clear: we need to do better at communicating the opportunity cost of public policies!

Assessment of progression-free survival as a surrogate end point of overall survival in first-line treatment of ovarian cancer. JAMA Network Open [PubMed] Published 10th January 2020

A study about the relationship between progression-free survival and overall survival may seem an odd choice for a health economics journal round-up, but it is actually quite relevant. In cost-effectiveness analyses of new cancer drugs, the trial’s primary endpoint may be progression-free survival (PFS). Data on overall survival (OS) may be too immature to assess the treatment effect or to extrapolate to the longer term. To predict QALYs and lifetime costs with and without the new drug, the cost-effectiveness model may need to assume a surrogate relationship between PFS and OS; that is, that an effect on PFS is reflected, to some extent, in an effect on OS. The question is, how strong is that surrogate relationship? This study tries to answer that question in advanced ovarian cancer.

Xavier Paoletti and colleagues conducted a systematic review and meta-analysis using individual patient data from 11,029 people who took part in 17 RCTs of first-line therapy in advanced ovarian cancer. They assessed the surrogate relationship at the individual level and at the trial level. The individual-level relationship refers to the correlation between PFS and OS for the individual patient. As the authors note, this may only reflect that people with a longer life expectancy also take longer to progress. At the trial level, they looked at the correlation between the hazard ratio (HR) on OS and the HR on PFS. This reflects how much of the effect on OS can be predicted from the effect on PFS. They used the surrogacy criteria proposed by the Follicular Lymphoma Analysis of Surrogacy Hypothesis initiative. As this is outside my area of expertise, I won’t comment on the methodology.
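
To make the trial-level idea concrete, here is a minimal sketch of the general approach (illustrative numbers and a simple weighted regression, not the authors’ exact meta-analytic model): regress the log HR on OS against the log HR on PFS across trials, then use the fitted line to predict the OS effect implied by a new trial’s PFS effect.

```python
import numpy as np

# Hypothetical trial-level data: hazard ratios from a handful of RCTs
hr_pfs = np.array([0.75, 0.85, 0.90, 0.95, 1.00, 1.05])
hr_os = np.array([0.88, 0.92, 0.95, 0.98, 1.01, 1.03])
n_patients = np.array([600, 450, 800, 500, 700, 550])  # trial sizes as weights

log_pfs, log_os = np.log(hr_pfs), np.log(hr_os)

# Weighted least squares: log HR(OS) = a + b * log HR(PFS)
# (np.polyfit weights multiply the residuals, so sqrt(n) gives n-weighted LS)
b, a = np.polyfit(log_pfs, log_os, deg=1, w=np.sqrt(n_patients))

# Trial-level R^2 summarises the strength of the surrogate relationship
pred = a + b * log_pfs
mean_os = np.average(log_os, weights=n_patients)
r2 = 1 - np.sum(n_patients * (log_os - pred) ** 2) / \
        np.sum(n_patients * (log_os - mean_os) ** 2)

# Predict the OS effect implied by a new trial's PFS effect
new_hr_pfs = 0.80
predicted_hr_os = np.exp(a + b * np.log(new_hr_pfs))
print(f"trial-level R^2 = {r2:.2f}; predicted HR(OS) = {predicted_hr_os:.2f}")
```

A weak trial-level R² means the fitted line tells us little about the OS effect of a new drug, which is exactly the problem this paper documents.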

One of their results is quite striking: in 16 of the 17 RCTs, the experimental drug’s HRs for PFS and OS were not statistically different from the control. In other words, hardly any new drugs have shown statistically significant benefits! In terms of the surrogate relationship, they found that there is an individual-level association – that is, people who take longer to progress also survive for longer. In contrast, they did not find a surrogate relationship between PFS and OS at the trial level. Given that the HRs were centred around 1, the poor correlation may be partly due to the lack of variation in the HRs rather than to a genuinely poor surrogate relationship.

The challenge remains for cost-effectiveness modelling when OS data are immature. Extrapolate OS with high uncertainty? Use a poor surrogate relationship with PFS? Or turn to formal expert elicitation? Hopefully methodologists are looking into this! In the meantime, regulators may wish to think again about licensing drugs with evidence only on PFS.

After 20 years of using economic evaluation, should NICE be considered a methods innovator? PharmacoEconomics [PubMed] Published 13th January 2020

NICE is starting a review of its methods and processes for health technology assessment. Mark Sculpher and Steve Palmer take this opportunity to reflect on how NICE’s methods have evolved over time and to propose areas ripe for an update.

It was very enjoyable to read about the history of the Methods Guide and how NICE has responded to its changing context, responsibilities, and new challenges. For example, the cost-effectiveness threshold of £20k-£30k/QALY was introduced by the 2004 Methods Guide. This threshold was reinforced by the 2019 Voluntary Scheme for Branded Medicines Pricing and Access. The funny thing is, although NICE is constrained to the £20k-£30k/QALY threshold, the Department of Health and Social Care routinely uses Claxton et al’s £13k/QALY benchmark.

Mark and Steve go through five key topics in health technology assessment to pick out the areas that should be considered for an update: health measurement and valuation, broader benefits, perspective, modelling, and uncertainty. Suggested updates include whether and how to consider caregiver burden and the benefits (and opportunity costs) falling on caregivers, guidance on model validation, and the formal incorporation of value of information methods. These are all sorely needed and would cement NICE’s position as the international standard-setter for health technology assessment.

Beyond NICE and the UK, I found that this paper provides a good overview of the hot topics in cost-effectiveness analysis for the next few years. A must-read for cost-effectiveness analysts!

Meeting round-up: ISPOR Europe 2019

For many health economists, November is ISPOR Europe month, and this year was no exception! We gathered in the fantastic Bella Center in Copenhagen to debate, listen and breathe health economics and outcomes research from the 2nd to the 6th November. Missed it? Would like a recap? Stay tuned for the #ISPOREurope 2019 round-up!

My ISPOR week started with the fascinating course ‘Tools for reproducible real-world data analysis’ by Blythe Adamson and Rachael Sorg. My key take-home messages? Use an interface like R-markdown to produce a document with code and results automatically. Use a version control platform like Phabricator to make code review easy. Write a detailed protocol, write the code to follow the protocol, and then check the code side by side with the protocol.

Monday started with the impressive workshop on translating oncology clinical trial endpoints to real-world data (RWD) for decision making.

Keith Abrams set the scene. Electronic health records (EHRs) may be used to derive the overall survival (OS) benefit given the observed benefit on progression-free survival (PFS). Sylwia Bujkiewicz showed an example where a bivariate meta-analysis of RCTs was used to estimate the surrogate relationship between PFS and OS (paper here). Jessica Davies discussed some of the challenges, such as the lack of data on exposure to treatments in a way that matches the data recorded in trials. Federico Felizzi presented a method to determine the optimal treatment duration of a cancer drug (see here for the code).

Next up, the Women in HEOR session! Women in HEOR is an ISPOR initiative that aims to support the growth, development, and contribution of women. It included various initiatives at ISPOR Europe, such as dinners, receptions and, of course, this session.

Shelby Reed introduced the session, and Olivia Wu presented the overwhelming evidence on the benefits of diversity and on how to foster it in our work environment. Nancy Berg presented on ISPOR’s commitment to diversity and equality. We then heard from Sabina Hutchison about how to network in a conference environment, how to develop a personal brand and how to present our pitch. Have a look at my twitter thread for the tips. For more information on the Women in HEOR activities at ISPOR Europe, search #WomenInHEOR on twitter. Loads of cool information!

My Monday afternoon started with the provocatively titled ‘Time for change? Has time come for the pharma industry to accept modest prices?’. Have a look here for my live twitter thread. Kate Dion started by noting that the pressure is on for the pharmaceutical industry to reduce drug prices. Sarah Garner argued that lower prices lead to more patients being able to access the drug, which in turn increases the company’s income. Michael Schröter argued that innovative products should have a premium price, as with Hemlibra. Lastly, Jens Grueger supported the implementation of value-based pricing, given the cost-effectiveness threshold.

Keeping with the drug pricing theme, my next session was on indication-based pricing. Mireia Jofre Bonet tackled the question of whether a single price is stifling innovation. Adrian Towse was supportive of indication-based pricing because it allows the price to depend on the value of each indication and expands access to the full licensed population. Andrew Briggs argued against indication-based pricing for three reasons. First, it would give companies the maximum value-based price across all indications. Second, it would lead to greater drug expenditure, and hence greater opportunity costs. Third, it would be difficult to enforce, given that it would require the cooperation of all payers. Francis Arickx explained the pricing system in Belgium. Remarkably, prices can be renegotiated over time depending on new entrants to the market and new evidence. Another excellent session at ISPOR Europe!

My final session on Monday was about the timely and important topic of approaches for OS extrapolation. Elisabeth Fenwick introduced the session by noting that innovations in oncology have given rise to different patterns of survival, with implications for extrapolation. Sven Klijn presented the various available methods for survival extrapolation. John Whalen focused on mixture cure models for cost-effectiveness analysis. Steve Palmer argued that, although new methods such as mixture cure models may provide additional insight, the chosen approach should be justified and evidence-based, and alternatives should be explored. In sum, there is no single optimal method.
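
For readers unfamiliar with mixture cure models, the textbook formulation (a general definition, not any presenter’s specific model) splits patients into a cured fraction π and an uncured fraction whose survival follows a standard parametric distribution:

$$S(t) = \pi + (1 - \pi)\,S_u(t)$$

where $S_u(t)$ is the survival function of the uncured. In practice, the ‘cured’ group is usually assumed to follow general-population background mortality rather than to be immortal.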

On Tuesday, my first session was the impressive workshop on estimating cost-effectiveness thresholds based on the opportunity cost (twitter thread). Nancy Devlin set the scene by explaining the importance of getting the cost-effectiveness threshold right. James Lomas explained how to estimate the opportunity cost to the health care system, following the seminal work by Karl Claxton et al and touching on some of James’s own recent work. Martin Henriksson noted that, by itself, the opportunity cost is not sufficient to define the threshold if we wish to consider solidarity and need alongside cost-effectiveness. The advantage of knowing the opportunity cost is that we can make informed trade-offs between health maximisation and other elements of value. Danny Palnoch finished the panel by explaining the challenges of deciding what to pay for a new treatment.

Clearly there is a tension between the price that pharmaceutical companies feel is reasonable, the opportunity cost to the health care service, and the desire by stakeholders to use the drug. I feel this in every session of the NICE appraisal committee!

My next session was the compelling panel on the use of RWD to revisit HTA decisions (twitter thread). Craig Brooks-Rooney noted that, as regulators increasingly license technologies based on weaker evidence, HTA agencies are under pressure to adapt their methods to the available evidence. Adrian Towse proposed a conceptual framework for using RWD to revisit decisions, based on value of information analysis. Jeanette Kusel went through examples where RWD has been used to inform NICE decisions, such as brentuximab vedotin. Anna Halliday discussed the many practical challenges of implementing RWD collection to inform re-appraisals. Anna finished with a caution against prolonging negotiations and appraisals, which could delay patient access.

My Wednesday started with the stimulating panel on drugs with tumour-agnostic indications. Clarissa Higuchi Zerbini introduced the panel and proposed some questions to be addressed. Rosa Giuliani contributed the clinical perspective. Jacoline Bouvy discussed the challenges faced by NICE and ways forward in appraising tumour-agnostic drugs. Marc van den Bulcke finished the panel with an overview of how next-generation sequencing has been implemented in Belgium.

My last session was the brilliant workshop on HTA methods for antibiotics.

Mark Sculpher introduced the topic: antibiotic resistance is a major challenge for humanity, yet the development of new antibiotics is declining. Beth Woods presented a new framework for the HTA of antibiotics. The goal is to reflect the full value of antibiotics whilst accounting for the opportunity cost and the uncertainties in the evidence (see this report for more details). Angela Blake offered the industry perspective. She argued that revenues should be delinked from volume, that value assessment should be holistic, and that we should be mindful of the incentives faced by drug companies. Nick Crabb finished by introducing a new project, by NICE and NHS England, on the feasibility of innovative value assessments for antibiotics.

And this is the end of the absolutely outstanding ISPOR Europe 2019! If you’re eager for more, have a look at the video below with my conference highlights!

Rita Faria’s journal round-up for 4th November 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The marginal benefits of healthcare spending in the Netherlands: estimating cost-effectiveness thresholds using a translog production function. Health Economics [PubMed] Published 30th August 2019

The marginal productivity of the healthcare sector or, as it is commonly known, the supply-side cost-effectiveness threshold, is a hot topic right now. A few years ago, we could only guess at the magnitude of health displaced by reimbursing expensive and not-that-beneficial drugs. Since the seminal work by Karl Claxton and colleagues, we have started to have a pretty good idea of what we’re giving up.

This paper by Niek Stadhouders and colleagues adds to this literature by estimating the marginal productivity of hospital care in the Netherlands. Spoiler alert: they estimated that hospital care generates 1 QALY for around €74,000 at the margin, with a 95% confidence interval of €53,000 to €94,000. Remarkably, this is close to the Dutch upper reference value for the cost-effectiveness threshold of €80,000!

The approach to estimation is quite elaborate, because it required constructing QALYs and costs and accounting for the effect of mortality on costs. The diagram in Figure 1 explains it well. Their approach differs from the Claxton et al method in that they corrected for the costs due to changes in mortality directly, rather than via an instrumental variable analysis. To estimate the marginal effect of spending on health, they use a translog function. The confidence intervals are generated with Monte Carlo simulation, and various robustness checks are presented.
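
For context, a translog (transcendental logarithmic) production function is a flexible second-order approximation in logs. In a stylised single-input version (a simplification; the paper’s specification includes more inputs and controls), health output Q as a function of spending S would be:

$$\ln Q = \beta_0 + \beta_1 \ln S + \beta_2 (\ln S)^2$$

so the elasticity of health with respect to spending is $\partial \ln Q / \partial \ln S = \beta_1 + 2\beta_2 \ln S$, and the cost per QALY at the margin is the reciprocal of the marginal product $\partial Q / \partial S$. Unlike a simple log-linear model, the quadratic term lets marginal productivity vary with the level of spending.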

This is a fantastic paper, which is sure to have important policy implications. Analysts conducting cost-effectiveness analyses in the Netherlands, do take note.

Mixed-effects models for health care longitudinal data with an informative visiting process: a Monte Carlo simulation study. Statistica Neerlandica Published 5th September 2019

Electronic health records are the current big thing in health economics research, but they’re not without challenges. One issue is that the data reflect clinical management rather than a trial protocol. This means that doctors may test more severely ill patients more often. For example, people with higher cholesterol may get more frequent cholesterol tests. The challenge is that traditional methods for longitudinal data assume independence between observation times and disease severity.

Alessandro Gasparini and colleagues set out to solve this problem. They propose using inverse intensity of visit weighting within a mixed-effects model framework. Importantly, they provide a Stata package that implements the method. It’s part of the wide-ranging and super-useful merlin package.

It was great to see how the method works, illustrated with a directed acyclic graph. Essentially, after controlling for confounders, the longitudinal outcome and the observation process are associated through shared random effects. By assuming a distribution for the shared random effects, the model blocks the path between the outcome and the observation process. It makes it sound easy!
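
In notation, a generic shared random-effects specification (my sketch of the general idea, not necessarily the exact model in the paper) looks something like:

$$y_i(t) = x_i(t)^\top \beta + b_i + \varepsilon_i(t), \qquad \lambda_i(t) = \lambda_0(t)\,\exp\{z_i^\top \gamma + \alpha b_i\}$$

where $y_i(t)$ is the longitudinal outcome for patient i, $\lambda_i(t)$ is the intensity of that patient’s visit process, and the shared random effect $b_i$ (scaled by the association parameter $\alpha$) induces the dependence between the two. Conditioning on $b_i$ is what blocks the path between the outcome and the observation process.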

The paper goes through the method, compares it with other methods from the literature in a simulation study, and applies it to a real case study. It’s a brilliant paper that deserves a close look from all of those using electronic health records.

Alternative approaches for confounding adjustment in observational studies using weighting based on the propensity score: a primer for practitioners. BMJ [PubMed] Published 23rd October 2019

Would you like to use a propensity score method but don’t know where to start? Look no further! This paper by Rishi Desai and Jessica Franklin provides a practical guide to propensity score methods.

They start by explaining what a propensity score is and how it can be used, from matching to reweighting and regression adjustment. I particularly enjoyed reading about the importance of conceptualising the target of inference, that is, which treatment effect we are trying to estimate. In the medical literature, it is rare to see a paper that is clear on whether it estimates the average treatment effect (ATE) or the average treatment effect in the treated (ATT).
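
The distinction matters because it changes the weights. As a minimal sketch (with simulated propensity scores standing in for ones estimated from confounders; this is not code from the paper): inverse probability weights target the ATE, whereas odds-of-treatment weights for the controls target the ATT.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000
treated = rng.integers(0, 2, size=n)               # 1 = treated, 0 = control
ps = np.clip(rng.beta(2, 2, size=n), 0.05, 0.95)   # stand-in for estimated propensity scores

# ATE: weight both groups up to the whole population
w_ate = np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))

# ATT: treated keep weight 1; controls are reweighted to resemble the treated
w_att = np.where(treated == 1, 1.0, ps / (1.0 - ps))

print("ATE weights (first 5):", np.round(w_ate[:5], 2))
print("ATT weights (first 5):", np.round(w_att[:5], 2))
```

In practice, the propensity scores would come from a model such as a logistic regression of treatment on confounders, and extreme weights are often stabilised or trimmed.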

I found the algorithm for method selection really useful. Here, Rishi and Jessica describe the steps in choosing a propensity score method and recommend their preferred method for each situation. The paper also includes the application of each method to the example of dabigatran versus warfarin for atrial fibrillation. Thanks to the graphs, we can visualise how the distribution of the propensity score changes with each method and with the target of inference.

This is an excellent paper for those starting their propensity score analyses, or for those who would like a refresher. It’s a keeper!
