Meeting round-up: ISPOR Europe 2019

For many health economists, November is ISPOR Europe month, and this year was no exception! We gathered in the fantastic Bella Center in Copenhagen to debate, listen and breathe health economics and outcomes research from the 2nd to the 6th of November. Missed it? Would you like a recap? Stay tuned for the #ISPOREurope 2019 round-up!

Bella Center

My ISPOR week started with the fascinating course ‘Tools for reproducible real-world data analysis’ by Blythe Adamson and Rachael Sorg. My key take-home messages? Use an interface like R Markdown to automatically produce a document with both code and results. Use a version control platform like Phabricator to make code review easy. Write a detailed protocol, write the code to follow the protocol, and then check the code side by side with the protocol.

Monday started with the impressive workshop on translating oncology clinical trial endpoints to real-world data (RWD) for decision making.

Keith Abrams set the scene. Electronic health records (EHRs) may be used to derive the overall survival (OS) benefit given the observed benefit on progression-free survival (PFS). Sylwia Bujkiewicz showed an example where a bivariate meta-analysis of RCTs was used to estimate the surrogate relationship between PFS and OS (paper here). Jessica Davies discussed some of the challenges, such as the lack of data on exposure to treatments in a way that matches the data recorded in trials. Federico Felizzi presented a method to determine the optimal treatment duration of a cancer drug (see here for the code).

Next up, the Women in HEOR session! Women in HEOR is an ISPOR initiative that aims to support the growth, development, and contribution of women. It included various initiatives at ISPOR Europe, such as dinners, receptions and, of course, this session.

Shelby Reed introduced the session, and Olivia Wu presented the overwhelming evidence on the benefits of diversity and on how to foster it in our work environments. Nancy Berg presented on ISPOR’s commitment to diversity and equality. We then heard from Sabina Hutchison about how to network in a conference environment, how to develop a personal brand and how to present our pitch. Have a look at my Twitter thread for the tips. For more information on the Women in HEOR activities at ISPOR Europe, search #WomenInHEOR on Twitter. Loads of cool information!

My Monday afternoon started with the provocatively titled ‘Time for change? Has time come for the pharma industry to accept modest prices?’. Have a look here for my live Twitter thread. Kate Dion started by noting that the pressure is on for the pharmaceutical industry to reduce drug prices. Sarah Garner argued that lower prices mean that more patients can access the drug, which in turn increases the company’s income. Michael Schröter argued that innovative products should command a premium price, as with Hemlibra. Lastly, Jens Grueger supported the implementation of value-based pricing, given the cost-effectiveness threshold.

Keeping with the drug pricing theme, my next session was on indication-based pricing. Mireia Jofre Bonet tackled the question of whether a single price is stifling innovation. Adrian Towse was supportive of indication-based pricing because it allows the price to depend on the value of each indication and expands access to the full licensed population. Andrew Briggs argued against indication-based pricing for three reasons. First, it would give companies the maximum value-based price across all indications. Second, it would lead to greater drug expenditure and, therefore, greater opportunity costs. Third, it would be difficult to enforce, given that it would require the cooperation of all payers. Francis Arickx explained the pricing system in Belgium. Remarkably, prices can be renegotiated over time depending on new entrants to the market and new evidence. Another excellent session at ISPOR Europe!

My final session on Monday was about the timely and important topic of approaches for OS extrapolation. Elisabeth Fenwick introduced the session by noting that innovations in oncology have given rise to different patterns of survival, with implications for extrapolation. Sven Klijn presented on the various available methods for survival extrapolation. John Whalen focused on mixture cure models for cost-effectiveness analysis. Steve Palmer argued that, although new methods, such as mixture cure models, may provide additional insight, the chosen approach should be justified and evidence-based, and alternatives should be explored. In sum, there is no single optimal method.

On Tuesday, my first session was the impressive workshop on estimating cost-effectiveness thresholds based on opportunity cost (Twitter thread). Nancy Devlin set the scene by explaining the importance of getting the cost-effectiveness threshold right. James Lomas explained how to estimate the opportunity cost to the health care system, following the seminal work by Karl Claxton et al. and touching on some of James’s recent work. Martin Henriksson noted that, by itself, the opportunity cost is not sufficient to define the threshold if we wish to consider solidarity and need alongside cost-effectiveness. The advantage of knowing the opportunity cost is that we can make informed trade-offs between health maximisation and other elements of value. Danny Palnoch finished the panel by explaining the challenges of deciding what to pay for a new treatment.

Clearly there is a tension between the price that pharmaceutical companies feel is reasonable, the opportunity cost to the health care service, and the desire by stakeholders to use the drug. I feel this in every session of the NICE appraisal committee!

My next session was the compelling panel on the use of RWD to revisit HTA decisions (Twitter thread). Craig Brooks-Rooney noted that, as regulators increasingly license technologies based on weaker evidence, HTA agencies are under pressure to adapt their methods to the available evidence. Adrian Towse proposed a conceptual framework for using RWD to revisit decisions, based on value of information analysis. Jeanette Kusel went through examples where RWD has been used to inform NICE decisions, such as brentuximab vedotin. Anna Halliday discussed the many practical challenges of implementing RWD collection to inform re-appraisals. Anna finished with a caution against prolonging negotiations and appraisals, which could delay patient access.

My Wednesday started with the stimulating panel on drugs with tumour agnostic indications. Clarissa Higuchi Zerbini introduced the panel and proposed some questions to be addressed. Rosa Giuliani contributed with the clinical perspective. Jacoline Bouvy discussed the challenges faced by NICE and ways forward in appraising tumour-agnostic drugs. Marc van den Bulcke finished the panel with an overview of how next generation sequencing has been implemented in Belgium.

My last session was the brilliant workshop on HTA methods for antibiotics.

Mark Sculpher introduced the topic. Antibiotic resistance is a major challenge for humanity, yet the development of new antibiotics is declining. Beth Woods presented a new framework for HTA of antibiotics. The goal is to reflect the full value of antibiotics whilst accounting for the opportunity cost and uncertainties in the evidence (see this report for more details). Angela Blake offered the industry perspective. She argued that revenues should be delinked from volume, that value assessment should be holistic, and that we should be mindful of the incentives faced by drug companies. Nick Crabb finished by introducing a new project, by NICE and NHS England, on the feasibility of innovative value assessments for antibiotics.

And this is the end of the absolutely outstanding ISPOR Europe 2019! If you’re eager for more, have a look at the video below with my conference highlights!

Rita Faria’s journal round-up for 29th July 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

All-male panels and gender diversity of issue panels and plenary sessions at ISPOR Europe. PharmacoEconomics – Open [PubMed] Published 22nd July 2019

All male panels and other diversity considerations for ISPOR. PharmacoEconomics – Open [PubMed] Published 22nd July 2019

What is the gender balance like at ISPOR Europe conferences? This fascinating paper by Jacoline Bouvy and Michelle Mujoomdar kick-started a debate among the #HealthEconomics Twitterati by showing that the gender distribution is far from balanced.

Jacoline and Michelle found that, between 2016 and 2018, 30% of the 346 speakers at issue panels and plenary sessions were women. Of the 85 panels and sessions, 29% were manels and 64% were composed mainly of men, whereas only 2% were all-women panels (‘famels’?).

The ISPOR president Nancy Devlin had a positive and constructive response. For example, I was very pleased to know that ISPOR is taking the issue seriously and no longer has all-male plenary sessions. Issue panels, however, are proposed by members. The numbers show that the gender imbalance in the panels that do get accepted reflects the imbalance of the panels that are proposed.

These two papers raise quite a lot of questions. Why are fewer women participating in abstracts for issue panels? Does the gender distribution in abstracts reflect the distribution in membership, conference attendance, and submission of other types of abstracts? And how does it compare with other conferences in health economics and in other disciplines? Could we learn from other disciplines for effective action? If there is a gender imbalance in conference attendance, providing childcare may help (see here for a discussion). If women tend to submit more abstracts for posters rather than for organised sessions, more networking opportunities both online and at conferences could be an effective action.

I haven’t studied this phenomenon, so I really don’t know. I’d like to suggest that ISPOR starts collecting data systematically and implements initiatives in a way that is amenable to evaluation. After all, doing an evaluation is the health economist way!

Seamless interactive language interfacing between R and Stata. The Stata Journal [RePEc] Published 14th March 2019

Are you a Stata-user, but every so often you’d like to use a function only available in R? This brilliant package is for you!

E.F. Haghish created the rcall package to use R from Stata. It can be used to call R from Stata, or to call R for a specific function. With the console mode, we call R to perform an action. The interactive mode allows us to call R from within a Stata do-file. The vanilla mode invokes a fresh R session. The sync mode automatically synchronises objects between R and Stata. Additionally, rcall can transfer various types of data, such as locals, globals and datasets, between Stata and R. Lastly, you can write ado-commands that embed R functions in Stata programs.

This package opens up loads of possibilities. Obviously, it does require that Stata users also know R. But it does make it easy to use R from the comfort of Stata. Looking forward to trying it out more!

Development of the summary of findings table for network meta-analysis. Journal of Clinical Epidemiology [PubMed] Published 2nd May 2019

Whilst the previous paper expands your analytical toolbox, this paper helps you present the results in the context of network meta-analysis. Juan José Yepes-Nuñez and colleagues propose a new summary of findings table to present the results of network meta-analysis. This new table reports all the relevant findings in a way that works for readers.

This study is remarkable because the authors actually tested the new table with 32 users in four rounds of testing and revision. The limitation is that the users were mostly methodologists, although I imagine that recruiting other users, such as clinicians, may have been difficult. The new format comprises three sections. The upper section details the PICO (Population; Intervention; Comparison; Outcome) and shows the diagram of the evidence network. The middle section summarises the results in terms of the comparisons, number of studies, participants, relative effect, absolute outcomes and absolute difference, certainty of evidence, rankings, and interpretation of the findings. The lower section defines the terminology and provides some details on the calculations.

It was interesting to read that users felt confused and overwhelmed if the results for all comparisons were shown. Therefore, the table shows the results for one main comparator vs other interventions. The issue is that, as the authors discuss, one comparator needs to be chosen as the main comparator, which is not ideal. Nonetheless, I agree that this is a compromise worth making to achieve a table that works!

I really enjoyed reading about the process to get to this table. I’m wondering if it would be useful to conduct a similar exercise to standardise the presentation of cost-effectiveness results. It would be great to know your thoughts!


Chris Sampson’s journal round-up for 11th March 2019


Identification, review, and use of health state utilities in cost-effectiveness models: an ISPOR Good Practices for Outcomes Research Task Force report. Value in Health [PubMed] Published 1st March 2019

When modellers select health state utility values to plug into their models, they often do it in an ad hoc and unsystematic way. This ISPOR Task Force report seeks to address that.

The authors discuss the process of searching, reviewing, and synthesising utility values. Searches need to use iterative techniques because evidence requirements develop as a model develops. Because of the scope of models, it may be necessary to develop multiple search strategies (for example, for different aspects of disease pathways). Searches needn't be exhaustive, but they should be systematic and transparent. The authors provide a list of factors that should be considered in defining search criteria. In reviewing utility values, both quality and appropriateness should be considered. Quality is indicated by the precision of the evidence, the response rate, and missing data. Appropriateness relates to the extent to which the evidence being reviewed conforms to the context of the model in which it is to be used. This includes factors such as the characteristics of the study population, the measure used, the value sets used, and the timing of data collection.

When it comes to synthesis, the authors suggest it might not be meaningful in most cases because of variation in methods. We can't pool values if they aren't (at least roughly) equivalent. One approach, therefore, is to employ strict inclusion criteria (e.g. only EQ-5D, only a particular value set), but this isn't likely to leave you with much. Meta-regression can be used to analyse more dissimilar utility values and provide insight into the impact of methodological differences. But the extent to which this can provide pooled values for a model is questionable, and the authors concede that more research is needed.
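As an aside, the mechanics of pooling a set of roughly equivalent utility values can be sketched with a fixed-effect inverse-variance average; all the values and standard errors below are invented for illustration, not taken from any study:

```python
import numpy as np

# Hypothetical EQ-5D utility values (mean, standard error) for one health
# state, as if selected under strict inclusion criteria (only EQ-5D, only
# one value set). All numbers are invented for illustration.
means = np.array([0.72, 0.68, 0.75])
ses = np.array([0.03, 0.05, 0.04])

# Fixed-effect inverse-variance pooling: each study is weighted by the
# inverse of its variance, so more precise studies count for more.
weights = 1 / ses**2
pooled_mean = np.sum(weights * means) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))

print(f"pooled utility = {pooled_mean:.3f} (SE {pooled_se:.3f})")
```

Meta-regression extends this idea by letting the pooled mean depend on study-level covariates (for example, the measure or value set used), which is how it can shed light on the impact of methodological differences.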

This paper can inform that future research. Not least in its attempt to specify minimum reporting standards. We have another checklist, with another acronym (SpRUCE). The idea isn’t so much that this will guide publications of systematic reviews of utility values, but rather that modellers (and model reviewers) can use it to assess whether the selection of utility values was adequate. The authors then go on to offer methodological recommendations for using utility values in cost-effectiveness models, considering issues such as modelling technique, comorbidities, adverse events, and sensitivity analysis. It’s early days, so the recommendations in this report ought to be changed as methods develop. Still, it’s a first step away from the ad hoc selection of utility values that (no doubt) drives the results of many cost-effectiveness models.

Estimating the marginal cost of a life year in Sweden’s public healthcare sector. The European Journal of Health Economics [PubMed] Published 22nd February 2019

It’s only recently that health economists have gained access to data that enables the estimation of the opportunity cost of health care expenditure on a national level; what is sometimes referred to as a supply-side threshold. We’ve seen studies in the UK, Spain, Australia, and here we have one from Sweden.

The authors use data on health care expenditure at the national (1970-2016) and regional (2003-2016) level, alongside estimates of remaining life expectancy by age and gender (1970-2016). First, they try a time series analysis, testing the nature of causality. Finding an apparently causal relationship between longevity and expenditure, the authors don’t take it any further. Instead, the results are based on a panel data analysis, employing similar methods to estimates generated in other countries. The authors propose a conceptual model to support their analysis, which distinguishes it from other studies. In particular, the authors assert that the majority of the impact of expenditure on mortality operates through morbidity, which changes how the model should be specified. The number of newly graduated nurses is used as an instrument indicative of a supply-shift at the national rather than regional level. The models control for socioeconomic and demographic factors and morbidity not amenable to health care.

The authors estimate the marginal cost of a life year by dividing health care expenditure by the expenditure elasticity of life expectancy, finding an opportunity cost of €38,812 (with a massive 95% confidence interval). Using Swedish population norms for utility values, this would translate into around €45,000/QALY.
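The back-of-the-envelope conversion from a cost per life year to a cost per QALY can be sketched as follows; the utility weight of 0.86 is my own illustrative assumption, not the paper's exact figure:

```python
# Converting a marginal cost per life year into a cost per QALY by
# weighting the life year by a population-norm utility value.
# The 0.86 utility weight is an assumed illustrative figure; the paper
# uses Swedish population norms.
cost_per_life_year = 38_812          # EUR, the paper's central estimate
population_norm_utility = 0.86       # assumed average utility weight

# A life year lived at utility 0.86 yields 0.86 QALYs, so the cost per
# QALY is the cost per life year divided by the utility weight.
cost_per_qaly = cost_per_life_year / population_norm_utility
print(f"{cost_per_qaly:,.0f} EUR per QALY")  # around 45,000
```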

The analysis is carefully considered and makes plain the difficulty of estimating the marginal productivity of health care expenditure. It looks like a nail in the coffin for the idea of estimating opportunity costs using time series. For now, at least, estimates of opportunity cost will be based on variation according to geography, rather than time. In their excellent discussion, the authors are candid about the limitations of their model. Their instrument wasn’t perfect, and it looks like there may have been important confounding variables that they couldn’t control for.

Frequentist and Bayesian meta‐regression of health state utilities for multiple myeloma incorporating systematic review and analysis of individual patient data. Health Economics [PubMed] Published 20th February 2019

The first paper in this round-up was about improving practice in the systematic review of health state utility values, and it indicated the need for more research on the synthesis of values. Here, we have some. In this study, the authors conduct a meta-analysis of utility values alongside an analysis of registry and clinical study data for multiple myeloma patients.

A literature search identified 13 ‘methodologically appropriate’ papers, providing 27 health state utility values. The EMMOS registry included data for 2,445 patients in 22 countries and the APEX clinical study included 669 patients, all with EQ-5D-3L data. The authors implement both a frequentist meta-regression and a Bayesian model. In both cases, the models were run including all values and then with a limited set of only EQ-5D values. These models predicted utility values based on the number of treatment classes received and the rate of stem cell transplant in the sample. The priors used in the Bayesian model were based on studies that reported general utility values for the presence of disease (rather than according to treatment).

The frequentist models showed that utility was low at diagnosis, higher at first treatment, and lower at each subsequent treatment. Stem cell transplant had a positive impact on utility values independent of the number of previous treatments. The results of the Bayesian analysis were very similar, which the authors suggest is due to weak priors. An additional Bayesian model was run with preferred data but vague priors, to assess the sensitivity of the model to the priors. At later stages of disease (for which data were more sparse), there was greater uncertainty. The authors provide predicted values from each of the five models, according to the number of treatment classes received. The models provide slightly different results, except in the case of newly diagnosed patients (where the difference was 0.001). For example, the ‘EQ-5D only’ frequentist model gave a value of 0.659 for one treatment, while the Bayesian model gave a value of 0.620.
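The observation that weak priors barely move the results can be illustrated with a conjugate normal-normal update; all numbers here are invented for illustration, not taken from the paper:

```python
# A precision-weighted (conjugate normal-normal) posterior mean shows why
# a vague prior leaves the data-driven estimate almost unchanged.
# All numbers are invented for illustration.
prior_mean, prior_sd = 0.60, 0.20    # vague prior on the utility value
data_mean, data_se = 0.659, 0.02     # estimate from the pooled data

# Precisions (inverse variances) act as weights in the posterior mean.
w_prior = 1 / prior_sd**2            # 25
w_data = 1 / data_se**2              # 2500

posterior_mean = (w_prior * prior_mean + w_data * data_mean) / (w_prior + w_data)
print(f"posterior mean = {posterior_mean:.3f}")  # close to the data mean
```

With a much tighter prior (say, a standard deviation equal to that of the data), the posterior would sit roughly halfway between prior and data, which is why checking sensitivity to the priors matters.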

I’m not sure that the study satisfies the recommendations outlined in the ISPOR Task Force report described above (though that would be an unfair challenge, given the timing of publication). We’re told very little about the nature of the studies that are included, so it’s difficult to judge whether they should have been combined in this way. However, the authors state that they have made their data extraction and source code available online, which means I could check that out (though, having had a look, I can’t find the material that the authors refer to, reinforcing my hatred for the shambolic ‘supplementary material’ ecosystem). The main purpose of this paper is to progress the methods used to synthesise health state utility values, and it does that well. Predictably, the future is Bayesian.