Meeting round-up: ISPOR Europe 2018 (part 1)

ISPOR Europe 2018, which took place in Barcelona from the 10th to the 14th of November, was an exceptional conference. It had a jam-packed programme on the latest developments and most pressing challenges in health technology assessment (HTA), economic evaluation and outcomes research. In two blog posts, I’ll tell you about the outstanding sessions and thought-provoking discussions at this always superb conference.

For me, proceedings started on Sunday, with the excellent short course Adjusting for Time-Dependent Confounding and Treatment Switching Bias in Observational Studies and Clinical Trials: Purpose, Methods, Good Practices and Acceptance in HTA, by Uwe Siebert, Felicitas Kühne and Nick Latimer. Felicitas Kühne explained that causal inference methods aim to estimate the effect of a treatment, risk factor, etc. on our outcome of interest, controlling for other exposures that may affect it and hence bias our estimate. Uwe Siebert and Nick Latimer provided a really useful overview of the methods to overcome this challenge in observational studies and in RCTs with treatment switching. This was an absolutely brilliant course. Highly recommended to any health economist!

ISPOR conferences usually start early and finish late with loads of exceptional sessions. On Monday, I started the conference proper with the plenary Joint Assessment of Relative Effectiveness: “Trick or Treat” for Decision Makers in EU Member States, moderated by Finn Børlum Kristensen. There were presentations from representatives of payers, HTA agencies, EUnetHTA, pharmaceutical industry and patients. The prevailing mood seemed to be of cautious anticipation. Avoiding duplication of efforts in the clinical assessment was greatly welcomed, but there were some concerns voiced about the practicalities of implementation. The proposal was due to be discussed soon by the European Commission, so undoubtedly we can look forward to knowing more in the near future.


My next session was the fascinating panel on the perils and opportunities of advanced computing techniques with the tongue-in-cheek title Will machines soon make health economists obsolete?, by David Thompson, Bill Marder, Gerry Oster and Mike Drummond. Don’t panic yet as, despite the promises of artificial intelligence, I’d wager that our jobs are quite safe. For example, Gerry Oster predicted that demand for health economic models is actually likely to increase, as computers make our models quicker and cheaper to build. Mike Drummond finished with the sensible suggestion to simply keep calm and carry on modelling, as computing advances will liberate our time to explore other areas, such as the interface with decision-makers. This session left us all in a very positive mood as we headed for a well-earned lunch!

There were many interesting sessions in the afternoon. I chose to pop over to the ISPOR Medical Device and Diagnostic Special Interest Group Open Meeting and the ISPOR Portugal chapter meeting, as well as taking in the podium presentations on conceptual papers. Many of the presentations will be made available in the ISPOR database, which I recommend exploring. I had a wonderful experience moderating the engaging podium session on cancer models, with outstanding presentations delivered by Hedwig Blommestein, Ash Bullement, and Ilse van Oostrum.

The workshop Adjusting for post-randomisation confounding and switching in phase 3 and pragmatic trials to get the estimands right: needs, methods, sub-optimal use, and acceptance in HTA by Uwe Siebert, Felicitas Kühne, Nick Latimer and Amanda Adler is one worth highlighting. The panellists showed that some HTAs do not include any adjustments for treatment switching, whilst adjustments can sometimes be incorrectly applied. It reinforced the idea that we need to learn more about these methods, to be able to apply them in practice and critically appraise them.

The afternoon finished with the second poster session of the day. Alessandro Grosso, Laura Bojke and I had a poster on the impact of structural uncertainty on the expected value of perfect information. Alessandro did an amazing job encapsulating the poster and presenting it live to camera, which you can watch here.


In tomorrow’s blog post, I’ll tell you about day 2 of ISPOR Europe 2018 in Barcelona. Tuesday was another big day, with loads of outstanding sessions on the key topics in HTA. It featured my very own workshop, with Rob Hettle, Gabriel Rogers and Mike Drummond on communicating cost-effectiveness analysis. I hope you will stay tuned for the ISPOR meeting round-up part 2!

Rita Faria’s journal round-up for 22nd October 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Economically efficient hepatitis C virus treatment prioritization improves health outcomes. Medical Decision Making [PubMed] Published 22nd August 2018

Hepatitis C treatment was in the news a couple of years ago when the new direct-acting antivirals first appeared on the scene. These drugs are very effective but also incredibly expensive. This prompted a flurry of cost-effectiveness analyses and discussions of the role of affordability in cost-effectiveness (my views here).

This compelling study by Lauren Cipriano and colleagues joins the debate by comparing various strategies to prioritise patients for treatment when the budget is not enough to meet patient demand. This is a clear example of the health losses due to the opportunity cost.

The authors compare various prioritisation schedules in terms of the number of patients treated, the distribution of treatment by severity and age, time to treatment, impact on end-stage liver disease, QALYs, costs and net benefit.

The differences between prioritisation schedules in terms of these various outcomes were remarkable. Reassuringly, the optimal prioritisation schedule on the basis of net benefit (the “optimisation” schedule) was the one that achieved the most QALYs and the greatest net benefit. This was even though the cost-effectiveness threshold did not reflect the opportunity cost, as it was set at $100,000 per QALY gained.
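For readers who would like the formula, the standard definition of net monetary benefit (my own refresher, not notation taken from the paper) is:

$$\text{NMB} = \lambda \times \Delta\text{QALYs} - \Delta\text{costs}$$

where λ is the cost-effectiveness threshold. With λ set at $100,000 per QALY gained, the “optimisation” schedule is simply the one that maximises total NMB given the available budget.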

This study is fascinating. It shows how the optimal policy depends on what we are trying to maximise. The “first come first serve” schedule treats the most patients, but it is the “optimisation” schedule that achieves the most health benefits net of the opportunity cost.

Since their purpose was not to compare treatments, the authors used a representative price depending on whether patients had progressed to cirrhosis. A future study could include a comparison between drugs, as our previous work found that there are clear differences in cost-effectiveness between treatment strategies. The more cost-effective the treatment strategies, the more patients can be treated with a given budget.

The authors made the Excel model available as supporting material, together with documentation. This is excellent practice! It disseminates the work and shows openness to independent validation. Well done!

Long-term survival and value of chimeric antigen receptor T-cell therapy for pediatric patients with relapsed or refractory leukemia. JAMA Pediatrics [PubMed] Published 8th October 2018

This fascinating study looks at the cost-effectiveness of tisagenlecleucel in the treatment of children with relapsed or refractory leukaemia compared to chemotherapy.

Tisagenlecleucel is the first chimeric antigen receptor T-cell (CAR-T) therapy. CAR-T therapy is the new kid on the block in cancer treatment. It involves modifying the patient’s own immune system cells to recognise and kill the patient’s cancer (see here for details). Such high-tech treatment comes with a hefty price tag. Tisagenlecleucel is listed at $475,000 for a one-off administration.

The key challenge was to obtain the effectiveness inputs for the chemotherapy option. This was because tisagenlecleucel has only been studied in single-arm trials and individual-level data were not available to the research team. The research team selected a single-arm study on the outcomes with clofarabine monotherapy, since its patients at baseline were most similar in terms of demographics and number of prior therapies to those in the tisagenlecleucel study.

This study is brilliant in approaching a difficult decision problem and conducting extensive sensitivity analysis. In particular, it tests the impact of common drivers of the cost-effectiveness of potentially curative therapies in children, such as the discount rate, duration of benefit, treatment initiation, and the inclusion of future health care costs. Ideally, the sensitivity analysis would also have tested the assumption that the studies informing the effectiveness inputs for tisagenlecleucel and clofarabine monotherapy were comparable, and whether clofarabine monotherapy still represents the current standard of care, although these assumptions would be difficult to parameterise.

This outstanding study highlights the challenges posed by the approval of treatments based on single-arm studies. Had individual-level data been available, an adjusted comparison may have been possible, which would improve the degree of confidence in the cost-effectiveness of tisagenlecleucel. Regulators and trial sponsors should work together to make anonymised individual-level data available to bona fide researchers.

Researcher requests for inappropriate analysis and reporting: a U.S. survey of consulting biostatisticians. Annals of Internal Medicine [PubMed] Published 10th October 2018

This study reports a survey of biostatisticians on the frequency and severity of requests for inappropriate analysis and reporting. The results are stunning!

The top 3 requests in terms of severity were to falsify statistical significance to support a desired result, to change data to achieve the desired outcome, and to remove or alter data records to better support the research hypothesis. Fortunately, these sorts of requests appear to be rare.

The top 3 requests in terms of frequency were: not showing a plot because it does not show an effect as strong as had been hoped; stressing only the significant findings while under-reporting the non-significant ones; and reporting results before the data have been cleaned and validated.

Given the frequency and severity of the requests, the authors recommend that researchers should be better educated in good statistical practice and research ethics. I couldn’t agree more, and would suggest that cost-effectiveness analysis be included, given that it informs policy decisions and is generally conducted by multidisciplinary teams.

I’m now wondering what the responses would be if we did a similar survey of health economists, particularly those working in health technology assessment! Something for HESG, iHEA or ISPOR to look at in the future?


Rita Faria’s journal round-up for 24th September 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Methodological issues in assessing the economic value of next-generation sequencing tests: many challenges and not enough solutions. Value in Health [PubMed] Published 8th August 2018

This month’s issue of Value in Health includes a themed section on assessing the value of next-generation sequencing. Next-generation sequencing is sometimes hailed as the holy grail of medicine. The promise is that our individual genome can indicate how at-risk we are for many diseases. The question is whether the information obtained by these tests is worth their costs and their potentially harmful consequences for well-being and health-related quality of life. This largely remains unexplored, so I expect to see more economic evaluations of next-generation sequencing in the future.

This paper caught my eye given an ongoing project on cascade testing protocols for familial hypercholesterolaemia. Next-generation sequencing can be used to identify the genetic cause of familial hypercholesterolaemia, thereby identifying the patients whose relatives should be offered testing for the disease. I read this paper with the hope of finding inspiration for our economic evaluation.

This thought-provoking paper discusses the challenges in conducting economic evaluations of next-generation sequencing, such as complex model structures, the inclusion of upstream and downstream costs, identifying comparators, identifying costs and outcomes related to the test, measuring those costs and outcomes, evidence synthesis, and data availability and quality.

I agree with the authors that these are important challenges, and it was useful to see them explained in a systematic way. Another valuable feature of this paper is the summary of applied studies which have encountered these challenges and their approaches to overcome them. It’s encouraging to read about how other studies have dealt with complex decision problems!

I’d argue that the challenges are applicable to economic evaluations of many other interventions. For example, identifying the relevant comparators can be a challenge in the evaluations of treatments: in an evaluation of hepatitis C drugs, we compared 633 treatment sequences in 14 subgroups. I view the challenges as the issues to think about when planning an economic evaluation of any intervention: what the comparators are, the scope of the evaluation, the model conceptualisation, data sources and their statistical analysis. Therefore, I’d recommend this paper as an addition to your library about the conceptualisation of economic evaluations.

Compliance with requirement to report results on the EU Clinical Trials Register: cohort study and web resource. BMJ [PubMed] Published 12th September 2018

You may be puzzled by the choice of the latest paper by Ben Goldacre and colleagues, as it does not include an economic component. This study investigates compliance with the European Commission’s requirement that all trials on the EU Clinical Trials Register post results to the registry within 12 months of completion. At first sight, the economic implications may not be obvious, but they do exist and are quite important.

Clinical trials are a large investment of resources, not only financial but also in the health of the patients who agree to take part in an experiment that may affect their health adversely. Clinical trials therefore carry a huge sunk cost in both money and health. The payoff is only realised if the trial is reported: if it isn’t, the benefits of the investment cannot be realised. In sum, an unreported trial is clearly a cost-ineffective use of resources.

The solution is simple: ensure that trial results are reported. This way we can all benefit from the information collected by the trial. The issue is that, as Goldacre and colleagues have revealed, compliance is far from perfect.

Remarkably, only around half of the 7,274 trials that were due to publish results had actually done so. The worst offenders are non-commercial sponsors, where only 11% of trials had their results reported (compared with 68% of trials with a commercial sponsor).

The authors provide a web tool to look up unreported trials by institution. I looked up my very own University of York. It was reassuring to know that my institution has no trials due to report results. Nonetheless, many others are less compliant.

This is an exciting study on the world of clinical trials. I’d suggest that a possible next step would be to estimate the health lost and costs from failing to report trial results.

Network meta-analysis of diagnostic test accuracy studies identifies and ranks the optimal diagnostic tests and thresholds for health care policy and decision-making. Journal of Clinical Epidemiology [PubMed] Published 13th March 2018

Diagnostic tests are an emerging area of methodological development. This timely paper by Rhiannon Owen and colleagues addresses the important topic of evidence synthesis of diagnostic test accuracy studies.

Diagnostic test studies cannot be meta-analysed with the standard techniques used for treatment effectiveness. This is because there are two quantities of interest (sensitivity and specificity), which are correlated, and vary depending on the test threshold (that is, the value at which we say the test result is positive or negative).
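To make the two quantities concrete (these are the standard definitions, not notation taken from the paper):

$$\text{sensitivity} = \frac{TP}{TP + FN} \qquad \text{specificity} = \frac{TN}{TN + FP}$$

where TP, FN, TN and FP are the numbers of true positives, false negatives, true negatives and false positives at a given threshold. Raising the positivity threshold typically trades sensitivity off against specificity, which is why the two must be synthesised jointly rather than in two separate meta-analyses.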

Owen and colleagues propose a new approach to synthesising diagnostic test accuracy studies using network meta-analysis methodology. This innovative method allows for comparing multiple tests, evaluated at various test threshold values.

I cannot comment on the method itself as evidence synthesis is not my area of expertise. My interest comes from my experience in the economic evaluation of diagnostic tests, where we often wish to combine evidence from various studies.

With this in mind, I recommend having a look at the NIHR Complex Reviews Support Unit website for more handy tools and the latest research on methods for evidence synthesis. For example, the CRSU has a web tool for meta-analysis of diagnostic tests and a web tool to conduct network meta-analysis for those of us who are not evidence synthesis experts. Providing web tools is a brilliant way of helping analysts using these methods so, hopefully, we’ll see greater use of evidence synthesis in the future.
