Chris Sampson’s journal round-up for 11th March 2019

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Identification, review, and use of health state utilities in cost-effectiveness models: an ISPOR Good Practices for Outcomes Research Task Force report. Value in Health [PubMed] Published 1st March 2019

When modellers select health state utility values to plug into their models, they often do it in an ad hoc and unsystematic way. This ISPOR Task Force report seeks to address that.

The authors discuss the process of searching, reviewing, and synthesising utility values. Searches need to use iterative techniques because evidence requirements develop as a model develops. Given the scope of most models, it may be necessary to develop multiple search strategies (for example, for different aspects of disease pathways). Searches needn’t be exhaustive, but they should be systematic and transparent. The authors provide a list of factors that should be considered in defining search criteria.

In reviewing utility values, both quality and appropriateness should be considered. Quality is indicated by the precision of the evidence, the response rate, and missing data. Appropriateness relates to the extent to which the evidence being reviewed conforms to the context of the model in which it is to be used. This includes factors such as the characteristics of the study population, the measure used, the value sets used, and the timing of data collection.

When it comes to synthesis, the authors suggest it might not be meaningful in most cases, because of variation in methods. We can’t pool values if they aren’t (at least roughly) equivalent. Therefore, one approach is to employ strict inclusion criteria (e.g. only EQ-5D, only a particular value set), but this isn’t likely to leave you with much. Meta-regression can be used to analyse more dissimilar utility values and provide insight into the impact of methodological differences. But the extent to which this can provide pooled values for a model is questionable, and the authors concede that more research is needed.
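To make the meta-regression idea concrete, here’s a minimal sketch – my own toy illustration with invented data, not the Task Force’s tooling – of an inverse-variance-weighted meta-regression of reported utility values on a study-level covariate flagging the measure used:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical extracted values: mean utility, its standard error, and a
# dummy for whether the study used EQ-5D (1) or another measure (0).
utility = np.array([0.71, 0.68, 0.75, 0.62, 0.66])
se = np.array([0.02, 0.03, 0.04, 0.03, 0.05])
eq5d = np.array([1, 1, 1, 0, 0])

X = sm.add_constant(eq5d)      # intercept + measure dummy
weights = 1.0 / se**2          # precision (inverse-variance) weights
result = sm.WLS(utility, X, weights=weights).fit()

# The coefficient on the dummy estimates the systematic shift in values
# attributable to the measure; the intercept is the pooled non-EQ-5D value.
print(result.params, result.bse)
```

In practice you’d want covariates for the value set, population, and timing of data collection too, plus a random-effects specification to allow for between-study heterogeneity – which is exactly where the ‘more research is needed’ caveat bites.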

This paper can inform that future research. Not least in its attempt to specify minimum reporting standards. We have another checklist, with another acronym (SpRUCE). The idea isn’t so much that this will guide the publication of systematic reviews of utility values, but rather that modellers (and model reviewers) can use it to assess whether the selection of utility values was adequate. The authors then go on to offer methodological recommendations for using utility values in cost-effectiveness models, considering issues such as modelling technique, comorbidities, adverse events, and sensitivity analysis. It’s early days, so the recommendations in this report ought to be revisited as methods develop. Still, it’s a first step away from the ad hoc selection of utility values that (no doubt) drives the results of many cost-effectiveness models.

Estimating the marginal cost of a life year in Sweden’s public healthcare sector. The European Journal of Health Economics [PubMed] Published 22nd February 2019

It’s only recently that health economists have gained access to data that enable the estimation of the opportunity cost of health care expenditure at a national level – what is sometimes referred to as a supply-side threshold. We’ve seen studies from the UK, Spain, and Australia, and here we have one from Sweden.

The authors use data on health care expenditure at the national (1970-2016) and regional (2003-2016) level, alongside estimates of remaining life expectancy by age and gender (1970-2016). First, they try a time series analysis, testing the direction of causality. Finding that causality apparently runs from longevity to expenditure, rather than the other way around, the authors don’t take the time series analysis any further. Instead, the results are based on a panel data analysis, employing similar methods to estimates generated in other countries. The authors propose a conceptual model to support their analysis, which distinguishes it from other studies. In particular, the authors assert that the majority of the impact of expenditure on mortality operates through morbidity, which changes how the model should be specified. The number of newly graduated nurses is used as an instrument indicative of a supply shift at the national rather than regional level. The models control for socioeconomic and demographic factors and for morbidity not amenable to health care.
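As a stylised picture of that identification strategy – my own toy, cross-sectional and stripped of the paper’s panel structure, with invented variable names and coefficients – a two-stage least squares set-up might look like this, using the third-party linearmodels package:

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS  # pip install linearmodels

rng = np.random.default_rng(1)
n = 500
nurses = rng.normal(size=n)                  # instrument: supply shift
demand = rng.normal(size=n)                  # unobserved confounder
log_spend = 0.6 * nurses + 0.5 * demand + rng.normal(size=n)
life_exp = 80.0 + 0.3 * log_spend + 0.4 * demand + rng.normal(size=n)

data = pd.DataFrame({"life_exp": life_exp, "log_spend": log_spend,
                     "nurses": nurses, "const": 1.0})

# First stage: log_spend ~ nurses; second stage: life_exp ~ fitted log_spend.
res = IV2SLS(data["life_exp"], data[["const"]],
             data["log_spend"], data["nurses"]).fit()
print(res.params["log_spend"])  # should be close to the true 0.3
```

The point of the instrument is that nurse graduations shift expenditure for reasons unrelated to unobserved demand for care, so the second stage recovers the expenditure effect despite the confounding.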

The authors estimate the marginal cost of a life year by dividing health care expenditure by the expenditure elasticity of life expectancy, finding an opportunity cost of €38,812 per life year (with a massive 95% confidence interval). Using Swedish population norms for utility values, this would translate into around €45,000 per QALY.
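The conversion is simple arithmetic. Back-calculating from the two figures (my own working; the paper applies age- and gender-specific norms rather than a single average), the implied mean utility weight is about 38,812 / 45,000 ≈ 0.86, so

$$
\text{cost per QALY} \approx \frac{\text{cost per life year}}{\bar{u}} \approx \frac{38{,}812}{0.86} \approx 45{,}000 \ \text{(EUR)}.
$$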

The analysis is considered and makes plain the difficulty of estimating the marginal productivity of health care expenditure. It looks like a nail in the coffin for the idea of estimating opportunity costs using time series. For now, at least, estimates of opportunity cost will be based on variation according to geography, rather than time. In their excellent discussion, the authors are candid about the limitations of their model. Their instrument wasn’t perfect and it looks like there may have been important confounding variables that they couldn’t control for.

Frequentist and Bayesian meta‐regression of health state utilities for multiple myeloma incorporating systematic review and analysis of individual patient data. Health Economics [PubMed] Published 20th February 2019

The first paper in this round-up was about improving practice in the systematic review of health state utility values, and it indicated the need for more research on the synthesis of values. Here, we have some. In this study, the authors conduct a meta-analysis of utility values alongside an analysis of registry and clinical study data for multiple myeloma patients.

A literature search identified 13 ‘methodologically appropriate’ papers, providing 27 health state utility values. The EMMOS registry included data for 2,445 patients in 22 countries and the APEX clinical study included 669 patients, all with EQ-5D-3L data. The authors implement both a frequentist meta-regression and a Bayesian model. In both cases, the models were run including all values and then with a limited set of only EQ-5D values. These models predicted utility values based on the number of treatment classes received and the rate of stem cell transplant in the sample. The priors used in the Bayesian model were based on studies that reported general utility values for the presence of disease (rather than according to treatment).
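For flavour, here’s a minimal sketch of that kind of Bayesian meta-regression – my own illustration in PyMC, with invented data and priors; it is not the authors’ specification, which also modelled stem cell transplant:

```python
import numpy as np
import pymc as pm
import arviz as az

# Hypothetical study-level inputs: mean utility, its standard error, and
# the number of treatment classes received in each study's sample.
utility = np.array([0.77, 0.66, 0.61, 0.57, 0.52])
se = np.array([0.02, 0.03, 0.03, 0.04, 0.05])
n_treatments = np.array([0, 1, 2, 3, 4])

with pm.Model():
    # Weak priors, so the data rather than the priors drive the posterior.
    alpha = pm.Normal("alpha", mu=0.7, sigma=0.2)  # utility at diagnosis
    beta = pm.Normal("beta", mu=0.0, sigma=0.1)    # shift per treatment class
    pm.Normal("obs", mu=alpha + beta * n_treatments,
              sigma=se, observed=utility)
    idata = pm.sample(2000, tune=1000, chains=2, random_seed=7)

print(az.summary(idata, var_names=["alpha", "beta"]))
```

Each study’s standard error enters as the likelihood’s sigma, so sparse, noisy values at later treatment lines automatically carry more posterior uncertainty.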

The frequentist models showed that utility was low at diagnosis, higher at first treatment, and lower at each subsequent treatment. Stem cell transplant had a positive impact on utility values independent of the number of previous treatments. The results of the Bayesian analysis were very similar, which the authors suggest is due to weak priors. An additional Bayesian model was run with preferred data but vague priors, to assess the sensitivity of the model to the priors. At later stages of disease (for which data were more sparse), there was greater uncertainty. The authors provide predicted values from each of the five models, according to the number of treatment classes received. The models provide slightly different results, except in the case of newly diagnosed patients (where the difference was 0.001). For example, the ‘EQ-5D only’ frequentist model gave a value of 0.659 for one treatment, while the Bayesian model gave a value of 0.620.

I’m not sure that the study satisfies the recommendations outlined in the ISPOR Task Force report described above (though that would be an unfair challenge, given the timing of publication). We’re told very little about the nature of the studies that are included, so it’s difficult to judge whether they should have been combined in this way. However, the authors state that they have made their data extraction and source code available online, which means I could check that out (though, having had a look, I can’t find the material that the authors refer to, reinforcing my hatred for the shambolic ‘supplementary material’ ecosystem). The main purpose of this paper is to progress the methods used to synthesise health state utility values, and it does that well. Predictably, the future is Bayesian.


Meeting round-up: ISPOR Europe 2018 (part 2)

Did you miss ISPOR Europe 2018, and are you eager to know all about it? Time to continue reading! In yesterday’s post, I wrote about ISPOR’s outstanding short course on causal inference and the superb sessions I attended on day 1. This blog post is about day 2, Tuesday 13th, which was another big day.

The second plenary session was on fairness in pharmaceutical pricing. It was moderated by Sarah Garner, with presentations by many key stakeholders. The thought-provoking discussion highlighted the importance of pharmaceutical pricing policy and the large role that HTA can have in shaping it.

Communicating cost-effectiveness analysis was the next session, in which Rob Hettle, Gabriel Rogers, Mike Drummond and I discussed the pitfalls of, and approaches to, explaining cost-effectiveness models to non-health economists. This was a hugely popular session! We were delighted by the incredibly positive feedback we received, which reassured us that we are clearly not alone in finding it difficult to communicate cost-effectiveness analysis to a lay audience. We certainly feel incentivised to continue working on this topic. The slides are available here, and for the audience’s feedback, search Twitter for #communicateCEA.

Lunch was followed by the open meeting of the ISPOR Women in HEOR Initiative, with Shelby Reed, Olivia Wu and Louise Timlin. It is really encouraging to see ISPOR taking a proactive stance on gender balance!

The most popular session of the afternoon was Valuing a cure: Are new approaches needed, with Steve Pearson, Jens Grueger, Sarah Garner and Mark Sculpher. The panel set out the various stakeholder perspectives on the pricing of curative therapies. Payers call for a sustainable pricing model, whilst pharma warns that pricing policy is necessarily linked to the incentives for investment in research. I agree with Mark that these challenges are not unique to curative therapies. As pharmaceutical therapies achieve greater health benefits at ever larger costs, it is pressing that cost-effectiveness assessments are able to consider the opportunity cost of funding more costly treatments. See here for a round-up of the estimates already available.

I then attended the excellent session on Drug disinvestment: is it needed and how could it work, moderated by Richard Macaulay. Andrew Walker explained that HTA agencies’ advice does not always go down well with local payers, highlighting this with an amusing imaginary dialogue between NICE and a hospital. Detlev Parow argued that payers often find prices unaffordable, hence payment schemes should consider other options, such as payment linked to treatment success, risk-sharing agreements, and payment by instalments. Bettina Ryll made an impressive case from the patients’ perspective, for whom these decisions have a real impact.

The conference continued late into the evening and, I suspect, long into the early hours of Wednesday, with the ever-popular conference dinner. Wednesday was another day full of fascinating sessions. The plenary was titled Budget Impact and Expenditure Caps: Potential or Pitfall, moderated by Guillem López-Casasnovas. It was followed by inspiring sessions that explored a wide range of topics, presented by top experts in the relevant fields. These really delved into the nitty-gritty of subjects such as using R to build decision models, the value of diagnostic information, and expert elicitation, to name just a few.

I don’t think I’m speaking only for myself when I say that ISPOR Barcelona was an absolutely brilliant conference! I’ve mentioned here a few of the most outstanding sessions, but there were many, many more. There were so many sessions running at the same time that it was physically impossible to attend all of those with direct relevance to my research. Fortunately, we can access all the presentations by downloading them from the ISPOR website. I’ll leave a suggestion here for ISPOR: think about filming some of the key sessions and broadcasting them as webinars after the conference. This could create a further key resource for our sector.

As in previous editions, ISPOR Barcelona confirmed ISPOR Europe’s place among the top HTA conferences in Europe, if not the world. It expertly combines cutting-edge methodological research with outstanding applied work, all with a view to better informing decision making. As I’m sure you can guess, I’m already looking forward to the next ISPOR Europe in Copenhagen on the 2nd-6th November 2019, and the amazing sessions that will no doubt be featured!


Meeting round-up: ISPOR Europe 2018 (part 1)

ISPOR Europe 2018, which took place in Barcelona on the 10th-14th November, was an exceptional conference. It had a jam-packed programme on the latest developments and most pressing challenges in health technology assessment (HTA), economic evaluation and outcomes research. In two blog posts, I’ll tell you about the outstanding sessions and thought-provoking discussions at this always superb conference.

For me, proceedings started on Sunday with the excellent short course Adjusting for Time-Dependent Confounding and Treatment Switching Bias in Observational Studies and Clinical Trials: Purpose, Methods, Good Practices and Acceptance in HTA, by Uwe Siebert, Felicitas Kühne and Nick Latimer. Felicitas Kühne explained that causal inference methods aim to estimate the effect of a treatment, risk factor, etc. on our outcome of interest, controlling for other exposures that may affect it and hence bias our estimate. Uwe Siebert and Nick Latimer provided a really useful overview of the methods to overcome this challenge in observational studies and in RCTs with treatment switching. This was an absolutely brilliant course. Highly recommended to any health economist!

ISPOR conferences usually start early and finish late with loads of exceptional sessions. On Monday, I started the conference proper with the plenary Joint Assessment of Relative Effectiveness: “Trick or Treat” for Decision Makers in EU Member States, moderated by Finn Børlum Kristensen. There were presentations from representatives of payers, HTA agencies, EUnetHTA, the pharmaceutical industry, and patients. The prevailing mood seemed to be of cautious anticipation. Avoiding duplication of efforts in the clinical assessment was greatly welcomed, but there were some concerns voiced about the practicalities of implementation. The proposal was due to be discussed soon by the European Commission, so undoubtedly we can look forward to knowing more in the near future.


My next session was the fascinating panel on the perils and opportunities of advanced computing techniques with the tongue-in-cheek title Will machines soon make health economists obsolete?, by David Thompson, Bill Marder, Gerry Oster and Mike Drummond. Don’t panic yet as, despite the promises of artificial intelligence, I’d wager that our jobs are quite safe. For example, Gerry Oster predicted that demand for health economic models is actually likely to increase, as computers make our models quicker and cheaper to build. Mike Drummond finished with the sensible suggestion to simply keep calm and carry on modelling, as computing advances will liberate our time to explore other areas, such as the interface with decision-makers. This session left us all in a very positive mood as we headed for a well-earned lunch!

There were many interesting sessions in the afternoon. I chose to pop over to the ISPOR Medical Device and Diagnostic Special Interest Group open meeting and the ISPOR Portugal chapter meeting, along with taking in the podium presentations on conceptual papers. Many of the presentations will be made available in the ISPOR database, which I recommend exploring. I had a wonderful experience moderating the engaging podium session on cancer models, with outstanding presentations delivered by Hedwig Blommestein, Ash Bullement, and Ilse van Oostrum.

The workshop Adjusting for post-randomisation confounding and switching in phase 3 and pragmatic trials to get the estimands right: needs, methods, sub-optimal use, and acceptance in HTA by Uwe Siebert, Felicitas Kühne, Nick Latimer and Amanda Adler is one worth highlighting. The panellists showed that some HTAs do not include any adjustments for treatment switching, whilst adjustments can sometimes be incorrectly applied. It reinforced the idea that we need to learn more about these methods, to be able to apply them in practice and critically appraise them.

The afternoon finished with the second session of the day on posters. Alessandro Grosso, Laura Bojke and I had a poster on the impact of structural uncertainty in the expected value of perfect information. Alessandro did an amazing job encapsulating the poster and presenting it live to camera, which you can watch here.


In tomorrow’s blog post, I’ll tell you about day 2 of ISPOR Europe 2018 in Barcelona. Tuesday was another big day, with loads of outstanding sessions on the key topics in HTA. It featured my very own workshop, with Rob Hettle, Gabriel Rogers and Mike Drummond on communicating cost-effectiveness analysis. I hope you will stay tuned for the ISPOR meeting round-up part 2!