Meeting round-up: iHEA Congress 2019

Missed iHEA 2019? Or were you there but could not make it to all of the amazing sessions? Read on for my conference highlights!

iHEA started on Saturday 13th with pre-congress sessions on fascinating research, as well as more practical activities such as early-career networking sessions with senior health economists. All attendees got a super useful plastic bottle – great idea, iHEA team!

The conference proper launched on Sunday evening with the brilliant plenary session by Raj Chetty from Harvard University.

Monday morning started bright and early with the thought-provoking session on validation of CE models. It was chaired and discussed by Stefan Lhachimi and featured presentations by Isaac Corro Ramos, Talitha Feenstra and Salah Ghabri. I’m pleased to see that validation is coming to the forefront of current topics! Clearly, we need to do better in validating our models and documenting code, but we’re on the right track and engaged in making this happen.

Next up, the superb session on the societal perspective for cost-effectiveness analysis. It was an all-star cast with Mark Sculpher, Simon Walker, Susan Griffin, Peter Neumann, Lisa Robinson, and Werner Brouwer. I live-tweeted it here.

The case was expertly made that taking a single sector perspective can be misleading when evaluating policies with cross-sectoral effects, hence the impact inventory by Simon and colleagues is a useful tool to guide the choice of sectors to include. At the same time, we should be mindful of the requirements of the decision-maker for whom CEA is intended. This was a compelling session, which will definitely set the scene for much more research to come.

After a tasty lunch (well done catering team!), I headed to the session on evaluations using non-randomised data. The presenters included Maninie Molatseli, Fernando Antonio Postali, James Love-Koh and Taufik Hidayat, with case studies from South Africa, Brazil and Indonesia. Marc Suhrcke chaired. I really enjoyed hearing about the practicalities of applying econometric methods to estimate treatment effects of system-wide policies. And James’s presentation was a great application of distributional cost-effectiveness analysis.

I was in the presenter’s chair next, discussing the challenges in implementing policies in the south-west quadrant of the CE plane. This session was chaired by Anna Vassall and discussed by Gesine Meyer-Rath. Jack Dowie started by convincingly arguing that the decision rule should be the same regardless of where in the CE plane the policy falls. David Bath and Sergio Torres-Rueda presented fascinating case studies of south-west policies. And I argued that the barrier is essentially a problem of communication (presentation available here). An energetic discussion followed, showing that, even in our field, the matter is far from settled.

The day finished with the memorial session for the wonderful Alan Maynard and Uwe Reinhardt, both of whom did so much for health economics. It was a beautiful session, where people got together to share incredible stories from these health economics heroes. And if you’d like to know more, both Alan and Uwe have published books here and here.

Tuesday started with the session on precision medicine, chaired by Dean Regier, and featuring Rosalie Viney, Chris McCabe and Stuart Peacock. Rather than slides, the screen was filled with a video of a cosy fireplace, inviting the audience to take part in the discussion.

Under debate was whether precision medicine is a completely different type of technology, with added benefits over and above improvement to health, and needing a different CE framework. The panellists were absolutely outstanding in debating the issues! Although I understand the benefits beyond health that these technologies can offer, I side with the view that, as with other technologies, value is about whether the added benefits are worth the losses given the opportunity cost.

My final session of the day was by the great Mike Drummond, comparing how HTA has influenced the uptake of new anticancer drugs in Spain versus England (summary in thread below). Mike and colleagues found that positive recommendations do increase utilisation, but the magnitude of change differs by country and region. Work is ongoing to check that utilisation has been picked up accurately in the routine data sources.

The conference dinner was at the Markthalle, with plenty of drinks and loads of international food to choose from. I had to have an early night given that I was presenting at 8:30 the next morning. Others, though, enjoyed the party until the early hours!

Indeed, Wednesday started with my session on cost-effectiveness analysis of diagnostic tests. Alison Smith presented on her remarkable work on measurement uncertainty while Hayley Jones gave a masterclass on her new method for meta-analysis of test accuracy across multiple thresholds. I presented on the CEA of test sequences (available here). Simon Walker and James Buchanan added insightful points as discussants. We had a fantastically engaged audience, with great questions and comments. It shows that the CEA of diagnostic tests is becoming a hugely important topic.

Sadly, some other morning sessions were not as well attended. One session, also on CEA, was even cancelled due to lack of audience! For future conferences, I’d suggest scheduling the sessions on the day after the conference dinner a bit later, as well as having fewer sessions to choose from.

Next up on my agenda was the exceptional session on equity, chaired by Paula Lorgelly, and with presentations by Richard Cookson, Susan Griffin and Ijeoma Edoka. I was unable to attend, but I watched it at home via YouTube (from 1:57:10)! That’s right, some sessions were live streamed and are still available via the iHEA website. Do have a look!

My last session of the conference was on end-of-life care, with Charles Normand chairing, discussed by Helen Mason, Eric Finkelstein, and Mendwas Dzingina, and presentations by Koonal Shah, Bridget Johnson and Nikki McCaffrey. It was a really thought-provoking session, raising questions about the value of interventions at the end of life compared with other stages of the life course.

Lastly, the outstanding plenary session by Lise Rochaix and Joseph Kutzin on how to translate health economics research into policy. Lise and Joseph had pragmatic suggestions and insightful comments on the communication of health economics research to policy makers. Superb! Also available on the live stream here (from 06:09:44).

iHEA 2019 was truly an amazing conference. Expertly organised, well thought-out and with lots of interesting sessions to choose from. iHEA 2021 in Cape Town is firmly in my diary!

How to explain cost-effectiveness models for diagnostic tests to a lay audience

Non-health economists (henceforth referred to as ‘lay stakeholders’) are often asked to use the outputs of cost-effectiveness models to inform decisions, but they can find them difficult to understand. Conversely, health economists may have limited experience of explaining cost-effectiveness models to lay stakeholders. How can we do better?

This article shares my experience of explaining cost-effectiveness models of diagnostic tests to lay stakeholders such as researchers in other fields, clinicians, managers, and patients, and suggests some approaches to make models easier to understand. It is the condensed version of my presentation at ISPOR Europe 2018.

Why are cost-effectiveness models of diagnostic tests difficult to understand?

Models designed to compare diagnostic strategies are particularly challenging. In my view, this is for two reasons.

Firstly, there is the sheer number of possible diagnostic strategies that a cost-effectiveness model allows us to compare. Even if we are looking at only a couple of tests, we can use them in various combinations and at many diagnostic thresholds. See, for example, this cost-effectiveness analysis of diagnosis of prostate cancer.
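To give a feel for how quickly the number of strategies grows, here is a minimal sketch in Python. The tests, thresholds and combination rules are made up for illustration and are not those of the prostate cancer analysis linked above.

```python
from itertools import product

# Two hypothetical tests, each usable at two thresholds/variants (assumed).
tests = {"MRI": ["PI-RADS >= 3", "PI-RADS >= 4"],
         "Biopsy": ["standard", "targeted"]}

single = [(name, variant) for name, variants in tests.items() for variant in variants]

# Strategies: a single test, or a first test followed by the other test
# (applied only to people whose first test is positive).
strategies = list(single)
for (t1, v1), (t2, v2) in product(single, single):
    if t1 != t2:
        strategies.append(((t1, v1), "then, if positive", (t2, v2)))

print(len(strategies))  # 12 strategies from just two tests with two variants each
```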

Secondly, diagnostic tests can affect costs and health outcomes in multiple ways. They can have a direct effect on people’s health-related quality of life and mortality risk, as well as on costs through their acquisition and the consequences of side effects. Furthermore, diagnostic tests can have an indirect effect via the consequences of the subsequent management decisions. This indirect effect is often the key driver of cost-effectiveness.
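To make the indirect effect concrete, below is a tiny decision-tree calculation in Python. It is a minimal sketch with invented inputs, not a real analysis: the test’s accuracy only matters through who ends up (rightly or wrongly) treated, so the downstream treatment costs and outcomes dominate the result.

```python
# Minimal sketch: expected cost and QALYs of one testing strategy, computed
# as a tiny decision tree. All inputs are invented for illustration.

def expected_outcomes(prevalence, sensitivity, specificity,
                      test_cost, treat_cost,
                      qaly_treated, qaly_missed, qaly_healthy):
    """Expected cost and QALYs per person when test-positives are treated.

    Treatment side effects in false positives are ignored for brevity.
    """
    p_tp = prevalence * sensitivity              # diseased, detected, treated
    p_fn = prevalence * (1 - sensitivity)        # diseased, missed
    p_fp = (1 - prevalence) * (1 - specificity)  # healthy, treated unnecessarily
    p_tn = (1 - prevalence) * specificity        # healthy, correctly reassured

    cost = test_cost + (p_tp + p_fp) * treat_cost
    qalys = p_tp * qaly_treated + p_fn * qaly_missed + (p_fp + p_tn) * qaly_healthy
    return cost, qalys

# Illustrative inputs only
print(expected_outcomes(prevalence=0.2, sensitivity=0.9, specificity=0.8,
                        test_cost=100, treat_cost=2_000,
                        qaly_treated=8.0, qaly_missed=6.0, qaly_healthy=9.0))
```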

As a result, a cost-effectiveness analysis of diagnostic tests can include many strategies, with multiple effects modelled over the short and long term. This makes the model and its results difficult to understand.

Map out the effect of the test on health outcomes or costs

The first step in developing any cost-effectiveness model is to understand how the new technology, such as a diagnostic test or a drug, can impact the patient and the health care system. Ferrante di Ruffano et al and Kip et al are two studies that can be used as a starting point to understand the possible effects of a test on health outcomes and/or costs.

Ferrante di Ruffano et al reviewed the mechanisms by which diagnostic tests can affect health outcomes and provide a list of their possible effects.

Kip et al suggest a checklist for the reporting of cost-effectiveness analyses of diagnostic tests and biomarkers. Although the checklist is intended for reporting a cost-effectiveness analysis that has already been conducted, it can also be used as a prompt to define the possible effects of a test.

Reach a shared understanding of the clinical pathway

The parallel step is to understand the clinical pathway into which the diagnostic strategies would be integrated and which they would affect. This consists of conceptualising the elements of the health care service relevant to the decision problem. If you’d like to know more about model conceptualisation, I suggest this excellent paper by Paul Tappenden.

These conceptual models are necessarily simplifications of reality. They need to be as simple as possible, yet accurate enough that lay stakeholders recognise them as valid. As Einstein said: “to make the irreducible basic elements as simple and as few as possible, without having to surrender the adequate representation of a single datum of experience.”

Agree which impacts to include in the cost-effectiveness model

What to include and to exclude from the model is, at present, more of an art than a science. For example, Chilcott et al conducted a series of interviews with health economists and found that their approach to model development varied widely.

I find that the best approach is to design the model in consultation with the relevant stakeholders, such as clinicians, patients, and health care managers. This ensures that the cost-effectiveness model has face validity for those who will ultimately be its end users and (hopefully) advocates of its results.

Decouple the model diagram from the mathematical model

When we have a reasonable idea of the model that we are going to build, we can draw its diagram. A model diagram is not only a recommended component of the reporting of a cost-effectiveness model but also helps lay stakeholders understand it.

The temptation is often to draw the model diagram as similar as possible to the mathematical model. In cost-effectiveness models of diagnostic tests, the mathematical model tends to be a decision tree. Therefore, we often see a decision tree diagram.

The problem is that decision trees can easily become unwieldy when we have various test combinations and decision nodes. We can try to condense a gigantic decision tree into a simpler diagram, but unless you have great graphic design skills, it might be a futile exercise (see, for example, here).

An alternative approach is to decouple the model diagram from the mathematical model and break down the decision problem into steps. The figure below shows an example of how the model diagram can be decoupled from the mathematical model.

The diagram breaks the problem down into steps that relate to the clinical pathway and, therefore, to the stakeholders. In this example, the diagram follows the questions that clinicians and patients may ask: which test should be done first? Given the result of the first test, should a second test be done? If so, which one?

Simplified model diagram on the cost-effectiveness analysis of magnetic resonance imaging (MRI) and biopsy to diagnose prostate cancer

Relate the results to the model diagram

The next point of contact between health economists and lay stakeholders is likely to be when the first cost-effectiveness results are available.

The typical chart for the probabilistic results is the cost-effectiveness acceptability curve (CEAC). In my experience, the CEAC is challenging for lay stakeholders. It plots results over a range of cost-effectiveness thresholds, which are not quantities that most people outside cost-effectiveness analysis relate to. Additionally, CEACs showing the results of multiple strategies can have many lines and some discontinuities, which are difficult for the untrained eye to interpret.

An alternative approach is to re-use the model diagram to present the results. The model diagram can show the strategy that is expected to be cost-effective and its probability of cost-effectiveness at the relevant threshold. For example, the probability that strategies starting with a specific test are cost-effective is X%; the probability that strategies using that test at a specific cut-off are cost-effective is Y%; and so on.
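As a rough sketch of what this could look like in practice, the snippet below computes the probability of cost-effectiveness at a single threshold from probabilistic sensitivity analysis output, and then aggregates it over all strategies that start with the same test. The strategy names, distributions and threshold are invented for illustration and are not taken from any real analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000  # number of probabilistic sensitivity analysis simulations

# Hypothetical PSA output: (costs, QALYs) per strategy, simulated for illustration.
strategies = {
    "MRI, then biopsy if positive": (rng.normal(900, 150, n), rng.normal(8.10, 0.05, n)),
    "Biopsy first": (rng.normal(1200, 200, n), rng.normal(8.09, 0.05, n)),
    "MRI alone": (rng.normal(600, 100, n), rng.normal(8.05, 0.05, n)),
}

threshold = 20_000  # cost-effectiveness threshold in £ per QALY (assumed)

# Net monetary benefit of each strategy in each simulation
nmb = np.column_stack([threshold * qalys - costs for costs, qalys in strategies.values()])

# Probability that each strategy has the highest net benefit at this threshold
best = nmb.argmax(axis=1)
for i, name in enumerate(strategies):
    print(f"{name}: {np.mean(best == i):.0%} probability of being cost-effective")

# Probability that any strategy starting with MRI is the cost-effective one
mri_first = [i for i, name in enumerate(strategies) if name.startswith("MRI")]
print(f"Strategies starting with MRI: {np.isin(best, mri_first).mean():.0%}")
```

These probabilities can then be written directly onto the relevant branches of the model diagram.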

Next steps for practice and research

Research about the communication of cost-effectiveness analysis is sparse, and guidance is lacking. Beyond the general recommendation to speak in plain English and avoid jargon, there is little to go on. Hence, health economists find themselves developing their own approaches and techniques.

In my experience, the key aspects for effective communication are to engage with lay stakeholders from the start of the model development, to explain the intuition behind the model in simplified diagrams, and to find a balance between scientific accuracy and clarity which is appropriate for the audience.

More research and guidance are clearly needed to develop communication methods that are effective and straightforward to use in applied cost-effectiveness analysis. Perhaps this is where patient and public involvement can really make a difference!

Rita Faria’s journal round-up for 24th September 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Methodological issues in assessing the economic value of next-generation sequencing tests: many challenges and not enough solutions. Value in Health [PubMed] Published 8th August 2018

This month’s issue of Value in Health includes a themed section on assessing the value of next-generation sequencing. Next-generation sequencing is sometimes hailed as the holy grail in medicine. The promise is that our individual genome can indicate how at risk we are of many diseases. The question is whether the information obtained by these tests is worth their costs and potentially harmful consequences for well-being and health-related quality of life. This remains largely unexplored, so I expect to see more economic evaluations of next-generation sequencing in the future.

This paper caught my eye given an ongoing project on cascade testing protocols for familial hypercholesterolaemia. Next-generation sequencing can be used to identify the genetic cause of familial hypercholesterolaemia, thereby identifying patients whose relatives should be offered testing for the disease. I read this paper with the hope of finding inspiration for our economic evaluation.

This thought-provoking paper discusses the challenges in conducting economic evaluations of next-generation sequencing, such as complex model structures, the inclusion of upstream and downstream costs, the identification of comparators and of the costs and outcomes related to the test, the measurement of costs and outcomes, evidence synthesis, and data availability and quality.

I agree with the authors that these are important challenges, and it was useful to see them explained in a systematic way. Another valuable feature of this paper is the summary of applied studies which have encountered these challenges and their approaches to overcome them. It’s encouraging to read about how other studies have dealt with complex decision problems!

I’d argue that the challenges are applicable to economic evaluations of many other interventions. For example, identifying the relevant comparators can be a challenge in the evaluations of treatments: in an evaluation of hepatitis C drugs, we compared 633 treatment sequences in 14 subgroups. I view the challenges as the issues to think about when planning an economic evaluation of any intervention: what the comparators are, the scope of the evaluation, the model conceptualisation, data sources and their statistical analysis. Therefore, I’d recommend this paper as an addition to your library about the conceptualisation of economic evaluations.

Compliance with requirement to report results on the EU Clinical Trials Register: cohort study and web resource. BMJ [PubMed] Published 12th September 2018

You may be puzzled by the choice of the latest paper by Ben Goldacre and colleagues, as it does not include an economic component. This study investigates compliance with the European Commission’s requirement that all trials on the EU Clinical Trials Register post results to the registry within 12 months of completion. At first sight, the economic implications may not be obvious, but they do exist and are quite important.

Clinical trials are a large investment of resources, not only money but also the health of patients who agree to take part in an experiment that may affect their health adversely. Clinical trials therefore involve a huge sunk cost in both money and health, and the payoff only materialises if the trial is reported. If the trial is not reported, the benefits from the investment cannot be realised. In sum, an unreported trial is clearly a cost-ineffective use of resources.

The solution is simple: ensure that trial results are reported, so that we can all benefit from the information collected by the trial. The issue is that, as Goldacre and colleagues have revealed, compliance is far from perfect.

Remarkably, only around half of the 7,274 studies due to report results had actually done so. The worst offenders are non-commercial sponsors, for which only 11% of trials had their results reported (compared with 68% of trials with a commercial sponsor).

The authors provide a web tool to look up unreported trials by institution. I looked up my very own University of York. It was reassuring to know that my institution has no trials due to report results. Nonetheless, many others are less compliant.

This is an exciting study on the world of clinical trials. I’d suggest that a possible next step would be to estimate the health and money lost by failing to report trial results.

Network meta-analysis of diagnostic test accuracy studies identifies and ranks the optimal diagnostic tests and thresholds for health care policy and decision-making. Journal of Clinical Epidemiology [PubMed] Published 13th March 2018

Diagnostic tests are an emerging area of methodological development. This timely paper by Rhiannon Owen and colleagues addresses the important topic of evidence synthesis of diagnostic test accuracy studies.

Diagnostic test accuracy studies cannot be meta-analysed with the standard techniques used for treatment effectiveness. This is because there are two quantities of interest (sensitivity and specificity), which are correlated and vary depending on the test threshold (that is, the value at which we say the test result is positive or negative).
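For readers less familiar with the problem, the toy simulation below illustrates the threshold dependence: as the positivity threshold rises, sensitivity falls while specificity rises, which is why the two quantities need to be analysed jointly. The data are simulated purely for illustration; this is not the method proposed by Owen and colleagues.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy data: a continuous test result in diseased and healthy people
# (distributions invented for illustration).
diseased = rng.normal(2.0, 1.0, 500)
healthy = rng.normal(0.0, 1.0, 500)

# Sensitivity and specificity move in opposite directions as the positivity
# threshold changes, so they cannot be pooled as two independent quantities.
for threshold in (0.5, 1.0, 1.5, 2.0):
    sensitivity = np.mean(diseased >= threshold)  # true positives / diseased
    specificity = np.mean(healthy < threshold)    # true negatives / healthy
    print(f"threshold {threshold:.1f}: sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```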

Owen and colleagues propose a new approach to synthesising diagnostic test accuracy studies using network meta-analysis methodology. This innovative method allows for comparing multiple tests, evaluated at various test threshold values.

I cannot comment on the method itself as evidence synthesis is not my area of expertise. My interest comes from my experience in the economic evaluation of diagnostic tests, where we often wish to combine evidence from various studies.

With this in mind, I recommend having a look at the NIHR Complex Reviews Support Unit website for more handy tools and the latest research on methods for evidence synthesis. For example, the CRSU has a web tool for meta-analysis of diagnostic tests and a web tool to conduct network meta-analysis for those of us who are not evidence synthesis experts. Providing web tools is a brilliant way of helping analysts use these methods, so hopefully we’ll see greater use of evidence synthesis in the future.
