How to explain cost-effectiveness models for diagnostic tests to a lay audience

Non-health economists (henceforth referred to as ‘lay stakeholders’) are often asked to use the outputs of cost-effectiveness models to inform decisions, but they can find these models difficult to understand. Conversely, health economists may have limited experience of explaining cost-effectiveness models to lay stakeholders. How can we do better?

This article shares my experience of explaining cost-effectiveness models of diagnostic tests to lay stakeholders such as researchers in other fields, clinicians, managers, and patients, and suggests some approaches to make models easier to understand. It is the condensed version of my presentation at ISPOR Europe 2018.

Why are cost-effectiveness models of diagnostic tests difficult to understand?

Models designed to compare diagnostic strategies are particularly challenging. In my view, this is for two reasons.

Firstly, there is the sheer number of possible diagnostic strategies that a cost-effectiveness model allows us to compare. Even if we are looking at only a couple of tests, we can use them in various combinations and at many diagnostic thresholds. See, for example, this cost-effectiveness analysis of the diagnosis of prostate cancer.
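To get a feel for how quickly the strategy space grows, here is a minimal sketch that enumerates the strategies available from just two tests, each usable at two cutoffs, alone or in sequence. The tests and thresholds are invented for illustration, not taken from any real analysis:

```python
from itertools import product

# Hypothetical example: two tests, each usable at two thresholds.
tests = {"MRI": ["PI-RADS>=3", "PI-RADS>=4"],
         "biopsy": ["10 cores", "12 cores"]}

strategies = []
# Single-test strategies: one test at one threshold.
for test, cutoffs in tests.items():
    for cutoff in cutoffs:
        strategies.append((test, cutoff))
# Sequential strategies: a first (triage) test followed by a different
# confirmatory test, each at one of its thresholds.
options = [(t, c) for t, cs in tests.items() for c in cs]
for (t1, c1), (t2, c2) in product(options, repeat=2):
    if t1 != t2:
        strategies.append((t1, c1, "then", t2, c2))

print(len(strategies))  # 4 single-test + 8 sequential = 12 strategies
```

Even this toy example yields 12 strategies; adding a third test or a few more thresholds multiplies the count quickly, which is why diagnostic models compare so many comparators.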

Secondly, diagnostic tests can affect costs and health outcomes in multiple ways. Specifically, diagnostic tests have direct effects through their acquisition costs, their impact on people’s health-related quality of life and mortality risk, and the consequences of any side effects. Furthermore, diagnostic tests have an indirect effect via the consequences of the subsequent management decisions. This indirect effect is often the key driver of cost-effectiveness.

As a result, a cost-effectiveness analysis of diagnostic tests can compare many strategies, with multiple effects modelled over the short and long term. This makes the model and its results difficult to understand.
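To make the direct and indirect effects concrete, here is a minimal sketch with entirely made-up numbers for a “test, then treat if positive” strategy. The test contributes its own cost and a small procedural disutility directly, while its accuracy determines who goes on to be treated, which drives the downstream (indirect) costs and QALYs:

```python
# Toy example with assumed inputs (not from any real analysis).
prevalence = 0.3
sensitivity, specificity = 0.85, 0.90    # assumed test accuracy
test_cost, test_disutility = 300.0, 0.01  # direct effects of testing

treat_cost = 10_000.0
qalys_treated_sick, qalys_untreated_sick = 8.0, 6.0
qalys_healthy = 10.0  # treating a false positive adds cost, not QALYs

# Probabilities of the four test outcomes.
p_tp = prevalence * sensitivity
p_fn = prevalence * (1 - sensitivity)
p_tn = (1 - prevalence) * specificity
p_fp = (1 - prevalence) * (1 - specificity)

# Indirect effect: only test-positives receive treatment.
expected_cost = test_cost + (p_tp + p_fp) * treat_cost
expected_qalys = (p_tp * qalys_treated_sick
                  + p_fn * qalys_untreated_sick
                  + (p_tn + p_fp) * qalys_healthy
                  - test_disutility)

print(round(expected_cost, 2), round(expected_qalys, 3))  # 3550.0 9.3
```

Notice that most of the expected cost comes from the treatment decisions the test triggers, not from the test itself — exactly the indirect effect that tends to drive cost-effectiveness.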

Map out the effect of the test on health outcomes or costs

The first step in developing any cost-effectiveness model is to understand how the new technology, such as a diagnostic test or a drug, can impact the patient and the health care system. Ferrante di Ruffano et al and Kip et al are two studies that can be used as a starting point to understand the possible effects of a test on health outcomes and/or costs.

Ferrante di Ruffano et al conducted a review of the mechanisms by which diagnostic tests can affect health outcomes, and their review provides a list of the possible effects of diagnostic tests.

Kip et al suggest a checklist for the reporting of cost-effectiveness analyses of diagnostic tests and biomarkers. Although the checklist is intended for reporting a cost-effectiveness analysis that has already been conducted, it can also be used as a prompt to define the possible effects of a test.

Reach a shared understanding of the clinical pathway

The parallel step is to understand the clinical pathway that the diagnostic strategies fit into and affect. This consists of conceptualising the elements of the health care service relevant to the decision problem. If you’d like to know more about model conceptualisation, I suggest this excellent paper by Paul Tappenden.

These conceptual models are necessarily simplifications of reality. They need to be as simple as possible, yet accurate enough that lay stakeholders recognise them as valid. As Einstein put it: “to make the irreducible basic elements as simple and as few as possible, without having to surrender the adequate representation of a single datum of experience.”

Agree which impacts to include in the cost-effectiveness model

What to include in, and exclude from, the model is, at present, more of an art than a science. For example, Chilcott et al conducted a series of interviews with health economists and found that their approaches to model development varied widely.

I find that the best approach is to design the model in consultation with the relevant stakeholders, such as clinicians, patients, and health care managers. This ensures that the cost-effectiveness model has face validity for those who will ultimately be its end users and (hopefully) advocates of its results.

Decouple the model diagram from the mathematical model

When we have a reasonable idea of the model that we are going to build, we can draw its diagram. A model diagram is not only a recommended component of the reporting of a cost-effectiveness model, but it also helps lay stakeholders understand the model.

The temptation is often to draw the model diagram as similar as possible to the mathematical model. In cost-effectiveness models of diagnostic tests, the mathematical model tends to be a decision tree. Therefore, we often see a decision tree diagram.

The problem is that decision trees can easily become unwieldy when there are many test combinations and decision nodes. We can try to condense a gigantic decision tree into a simpler diagram, but unless you have great graphic design skills, it might be a futile exercise (see, for example, here).

An alternative approach is to decouple the model diagram from the mathematical model and break the decision problem down into steps. The figure below shows an example.

The diagram breaks the problem down into steps that relate to the clinical pathway and, therefore, to the stakeholders. In this example, the diagram follows the questions that clinicians and patients may ask: Which test should be done first? Given the result of the first test, should a second test be done? And if so, which one?

Simplified model diagram on the cost-effectiveness analysis of magnetic resonance imaging (MRI) and biopsy to diagnose prostate cancer

Relate the results to the model diagram

The next point of contact between the health economists and lay stakeholders is likely to be at the point when the first cost-effectiveness results are available.

The typical chart for presenting probabilistic results is the cost-effectiveness acceptability curve (CEAC). In my experience, the CEAC is challenging for lay stakeholders. It plots results over a range of cost-effectiveness thresholds, which are not quantities that most people outside cost-effectiveness analysis relate to. Additionally, CEACs showing the results of multiple strategies can have many lines and some discontinuities, which can be difficult for the untrained eye to interpret.

An alternative approach is to re-use the model diagram to present the results. The model diagram can show the strategy that is expected to be cost-effective and its probability of cost-effectiveness at the relevant threshold. For example, the probability that the strategies starting with a specific test are cost-effective is X%; and the probability that strategies using the specific test at a specific cut-off are cost-effective is Y%, etc.
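The calculation behind those probabilities is straightforward to sketch. Assuming we have paired (cost, QALY) samples per strategy from a probabilistic sensitivity analysis (the strategy names and numbers below are invented for illustration), the probability that a strategy is cost-effective at a given threshold is the share of draws in which it has the highest net monetary benefit:

```python
import random

random.seed(1)
threshold = 20_000  # assumed cost-effectiveness threshold (per QALY)

# Hypothetical PSA output: mean (cost, QALYs) per strategy, used here
# to simulate paired samples for each strategy.
strategies = {
    "MRI first, then biopsy": (4000, 9.2),
    "Biopsy first, then MRI": (4500, 9.25),
    "Biopsy only": (3500, 9.0),
}
n = 5000
samples = {
    name: [(random.gauss(c, 500), random.gauss(q, 0.1)) for _ in range(n)]
    for name, (c, q) in strategies.items()
}

# In each PSA draw, the cost-effective strategy is the one with the
# highest net monetary benefit at the chosen threshold.
wins = dict.fromkeys(strategies, 0)
for draws in zip(*(samples[name] for name in strategies)):
    nmb = {name: threshold * q - c
           for name, (c, q) in zip(strategies, draws)}
    wins[max(nmb, key=nmb.get)] += 1

for name, count in wins.items():
    print(f"P({name} is cost-effective) = {count / n:.2f}")
```

The same per-draw counts can then be pooled over groups of strategies — for example, over all strategies that start with a particular test — to report figures such as “the probability that strategies starting with this test are cost-effective is X%” directly on the model diagram.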

Next steps for practice and research

Research on the communication of cost-effectiveness analysis is sparse, and formal guidance is lacking. Beyond the general advice to speak in plain English and avoid jargon, health economists are left to develop their own approaches and techniques.

In my experience, the key aspects for effective communication are to engage with lay stakeholders from the start of the model development, to explain the intuition behind the model in simplified diagrams, and to find a balance between scientific accuracy and clarity which is appropriate for the audience.

More research and guidance are clearly needed to develop communication methods that are effective and straightforward to use in applied cost-effectiveness analysis. Perhaps this is where patient and public involvement can really make a difference!

Meeting round-up: Health Economists’ Study Group (HESG) Winter 2018

Last week’s biannual intellectual knees-up for UK health economists took place at City, University of London. We’ve written before about HESG, but if you need a reminder of the format you can read Lucy Abel’s blog post on the subject. This was the first HESG I’ve been to in a while that took place in an actual university building.

The conference kicked off for me with my colleague Grace Hampson‘s first ever HESG discussion. It was an excellent discussion of Toby Watt‘s paper on the impact of price promotions for cola, in terms of quantities purchased (they increase) and, by extension, sugar consumption. It was a nice paper with a clear theoretical framework and empirical strategy, which generated a busy discussion. Nutrition is a subject that I haven’t seen represented much at past HESG meetings, but there were several related papers on the schedule this time around, including work by Jonathan James and Ben Gershlick. I expect it’s something we’ll see becoming more prevalent as policy attention to the area intensifies.

The second and third sessions I attended were on the relationship between health and social care, which is a pressing matter in the UK, particularly with regard to achieving integrated care. Ben Zaranko‘s paper considered substitution effects arising from changes in the relative budgets of health and social care. Jonathan Stokes and colleagues attempted to identify whether the Better Care Fund has achieved its goal of reducing secondary care use. That paper got a blazing discussant performance from Andrew Street, which triggered an insightful discussion in the room.

A recurring theme in many sessions was the challenge of communicating with local decision-makers, and the apparent difficulty in working without a reference case to fall back on (such as that of NICE). This is something that I have heard regularly discussed at least since the Winter 2016 meeting in Manchester. At City, this was most clearly discussed in Emma Frew‘s paper describing the researchers’ experiences working with local government. Qualitative research has clearly broken through at HESG, including Emma’s paper and a study by Hareth Al-Janabi on the subject of treatment spillovers on family carers.

I also saw a few papers that related primarily to matters of research conduct and publishing. Charitini Stavropoulou‘s paper explored whether highly-cited researchers are more likely to receive public funding, while the paper I chaired by Anum Shaikh explored the potential for recycling cost-effectiveness models. The latter was a joy for me, with much discussion of model registries!

There were plenty of papers that satisfied my own particular research interests. Right up my research street was Mauro Laudicella‘s paper, which used real-world data to assess the cost savings associated with redirecting cancer diagnoses to GP referral rather than emergency presentation. I wasn’t quite as optimistic about the potential savings, with the standard worries about lead time bias and selection effects. But it was a great paper nonetheless. Also using real-world evidence was Ewan Gray‘s study, which supported the provision of adjuvant chemotherapy for early-stage breast cancer but delivered some perplexing findings about patient-GP decision-making. Ewan’s paper explored technical methodological challenges, though the prize for the most intellectually challenging paper undoubtedly goes to Manuel Gomes, who continued his crusade to make health economists better at dealing with missing data – this time for the case of quality of life data. Milad Karimi‘s paper asked whether preferences over health states are informed. This is the kind of work I enjoy thinking about – whether measures like the EQ-5D capture what really matters and how we might do better.

As usual, many delegates worked hard and played hard. I took a beating from the schedule at this HESG, with my discussion taking place during the first session after the conference dinner (where we walked in the footsteps of the Spice Girls) and my chairing responsibilities falling on the last session of the last day. But in both cases, the audience was impressive.

I’ll leave the final thought for the blog post with Peter Smith’s plenary, which considered the role of health economists in a post-truth world. Happily, for me, Peter’s ideas chimed with my own view that we ought to be taking our message to the man on the Clapham omnibus and supporting public debate. Perhaps our focus on (national) policymakers is too strong. If not explicit, this was a theme that could be seen throughout the meeting, whether it be around broader engagement with stakeholders, recognising local decision-making processes, or harnessing the value of storytelling through qualitative research. HESG members are STRETCHing the truth.


Meeting round-up: Health Economists’ Study Group (HESG) Winter 2017

The perfect tonic to the January blues, this year’s winter HESG took us to Birmingham. Continuing the trend of recent years, 100+ health economists gathered in a major chain hotel to discuss 50-odd papers currently in progress in our little corner of academia. The first thing I’ll say is that it was a great conference. It was flawlessly organised and the team helped create that unmistakable HESG buzz.

As we’ve come to expect from HESG, there was an impressive breadth of subject matter and methodologies on offer across the 4 or 5 parallel sessions throughout each day. From mental health to dentistry, from financial incentive schemes to integrated care, and from small-scale preference elicitation studies to regression analyses of millions of data points – that was just the first day.

I did the usual hat-trick duties of having a paper, giving a discussion and doing a bit of chairing; nothing compared to our own Sam Watson‘s herculean effort to tackle ‘the quad’ with two papers accepted. Despite my concern that it might just be a bit too boring, my paper – Systematic review and meta-analysis of health state utility values for diabetic retinopathy: implications for model-based economic evaluation – was well received on the first day. We discussed the reason and basis for a meta-analysis of utility values, and whether it makes more sense to target specific values or adopt a blanket approach. I’m very grateful to my discussant, Anthony Hatswell, and to the rest of the room for their feedback. The other highlight of the first day’s sessions for me was a paper by Uma Thomas that was discussed by Hareth Al-Janabi. The paper tried to tackle the very difficult problem of identifying ‘sophistication’ in the context of present bias and commitment contracts. Some people will be able to anticipate their own time-inconsistent preferences and should therefore demand commitment contracts. But as the discussion testified, identifying sophistication (or even understanding it) is no mean feat.

Day one ended with a very engaging plenary in which 4 speakers – Judith Smith, Matt Sutton, Andrew Street and Paula Lorgelly – discussed their short to medium-term priorities for the NHS. Generally, things looked bleak. Judith discussed the need to ‘get through the winter’, while Matt highlighted the apparent lack of attention given to evidence in the policy-making process. Andy warned us against getting sick in 2017 as the government demands impossible efficiency savings. Paula mentioned the ‘p’ word, attracting (jovial) hisses and boos. But she’s right – we really could do a better job of optimising NHS links with the private sector. The substance of the plenary as a whole was a call to arms. Health economists need to improve their communication to decision makers at all levels of the health service and of government. Numerous suggestions came from the floor and something seemed to be sparked in the room. I suspect we’ll hear more about this in the future.

My discussion on day 2 was of a paper by John Brazier and co, which fortuitously related to a paper that I previously discussed here on the blog. I was badly behaved, going well over time, but there were a lot of issues to grapple with around whether or not we should use ‘patient preferences’ in economic evaluation. The room was packed and provided a lively discussion. It’s a question that we’ll no doubt return to on this blog. I chaired a session in which Yan Feng discussed Liz Camacho‘s paper on the suitability of the EQ-5D for people at risk of developing psychosis. The take-home message of the discussion was that we need to stop considering ‘mental illness’ as a single diagnosis, and that while the EQ-5D might be valid in some groups it might not be in others.

A well-attended members’ meeting touched on some of the issues raised in the plenary, around the idea that HESG and its members might do more to influence decision makers and inform interested parties. What’s more, we learnt of some exciting news about HESG’s future that might facilitate action on this. There was the inevitable discussion of HESG’s controversial trip away, with the conclusion being that we probably won’t do it again for a few years (at least). This presents the exciting prospect that next year’s meeting – to be hosted by City University – might just end up in Cleethorpes.

The high quality of discussion was maintained into the last day. For me there was Penny Mullen’s discussion of Jytte Nielsen‘s paper describing a novel method by which to elicit people’s preferences for end-of-life treatment, without taking into account distributional concerns. And everything was wrapped up with champion HESG organiser Phil Kinghorn‘s discussion of Padraig Dixon‘s paper about the challenges of including carer spillover effects in economic evaluation. Phil gets the prize for inducing the most laughs during a presentation.

Yet another brilliant HESG that left me physically drained and mentally invigorated.
