How to explain cost-effectiveness models for diagnostic tests to a lay audience

Non-health economists (henceforth referred to as ‘lay stakeholders’) are often asked to use the outputs of cost-effectiveness models to inform decisions, but they can find them difficult to understand. Conversely, health economists may have limited experience of explaining cost-effectiveness models to lay stakeholders. How can we do better?

This article shares my experience of explaining cost-effectiveness models of diagnostic tests to lay stakeholders such as researchers in other fields, clinicians, managers, and patients, and suggests some approaches to make models easier to understand. It is the condensed version of my presentation at ISPOR Europe 2018.

Why are cost-effectiveness models of diagnostic tests difficult to understand?

Models designed to compare diagnostic strategies are particularly challenging. In my view, this is for two reasons.

Firstly, there is the sheer number of possible diagnostic strategies that a cost-effectiveness model allows us to compare. Even if we are looking at only a couple of tests, we can use them in various combinations and at many diagnostic thresholds. See, for example, this cost-effectiveness analysis of diagnosis of prostate cancer.
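To get a sense of how quickly the strategy space grows, consider a toy enumeration. This is a hypothetical sketch (the test names and threshold counts are illustrative, not taken from the cited analysis): with just two tests, each usable alone or followed by the other, the strategies already number in the dozens.

```python
from itertools import product

# Hypothetical example: two tests, each with a few candidate
# positivity thresholds (the counts are illustrative).
tests = {"test_A": 3, "test_B": 4}  # test -> number of thresholds

strategies = []
for first, second in product(tests, tests):
    if first == second:
        continue
    for t1 in range(tests[first]):
        # one-test strategy: do the first test at threshold t1 and stop
        strategies.append((first, t1, None, None))
        # two-test strategies: follow up with the second test
        for t2 in range(tests[second]):
            strategies.append((first, t1, second, t2))

print(len(strategies))  # 31 distinct strategies from just two tests
```

Add a third test, or a branch that retests negatives rather than positives, and the count multiplies again, which is why the full decision space can overwhelm both the model and its audience.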

Secondly, diagnostic tests can affect costs and health outcomes in multiple ways. A test can have direct effects: on people’s health-related quality of life and mortality risk through the testing process and any side effects, and on costs through its acquisition price. Furthermore, diagnostic tests can have an indirect effect via the consequences of the subsequent management decisions. This indirect effect is often the key driver of cost-effectiveness.

As a result, a cost-effectiveness analysis of diagnostic tests can have many strategies, with multiple effects modelled over the short and long term. This makes the model and its results difficult to understand.

Map out the effect of the test on health outcomes or costs

The first step in developing any cost-effectiveness model is to understand how the new technology, such as a diagnostic test or a drug, can impact the patient and the health care system. Ferrante di Ruffano et al and Kip et al are two studies that can be used as a starting point to understand the possible effects of a test on health outcomes and/or costs.

Ferrante di Ruffano et al reviewed the mechanisms by which diagnostic tests can affect health outcomes and provide a list of the possible effects of diagnostic tests.

Kip et al suggest a checklist for the reporting of cost-effectiveness analyses of diagnostic tests and biomarkers. Although the checklist is intended for reporting a cost-effectiveness analysis that has already been conducted, it can also be used as a prompt to define the possible effects of a test.

Reach a shared understanding of the clinical pathway

The parallel step is to understand the clinical pathway into which the diagnostic strategies will be integrated and which they will affect. This consists of conceptualising the elements of the health care service that are relevant to the decision problem. If you’d like to know more about model conceptualisation, I suggest this excellent paper by Paul Tappenden.

These conceptual models are necessarily simplifications of reality. They need to be as simple as possible, yet accurate enough that lay stakeholders recognise them as valid. As Einstein said: “to make the irreducible basic elements as simple and as few as possible, without having to surrender the adequate representation of a single datum of experience.”

Agree which impacts to include in the cost-effectiveness model

What to include in and exclude from the model is, at present, more of an art than a science. For example, Chilcott et al conducted a series of interviews with health economists and found that their approaches to model development varied widely.

I find that the best approach is to design the model in consultation with the relevant stakeholders, such as clinicians, patients, and health care managers. This ensures that the cost-effectiveness model has face validity to those who will ultimately be its end users and (hopefully) advocates of its results.

Decouple the model diagram from the mathematical model

When we have a reasonable idea of the model that we are going to build, we can draw its diagram. A model diagram is not only a recommended component of the reporting of a cost-effectiveness model but also helps lay stakeholders understand it.

The temptation is often to draw the model diagram as similar as possible to the mathematical model. In cost-effectiveness models of diagnostic tests, the mathematical model tends to be a decision tree. Therefore, we often see a decision tree diagram.

The problem is that decision trees can easily become unwieldy when we have various test combinations and decision nodes. We can try to condense a gigantic decision tree into a simpler diagram, but unless you have great graphic design skills, it might be a futile exercise (see, for example, here).

An alternative approach is to decouple the model diagram from the mathematical model and break down the decision problem into steps. The figure below shows an example of how the model diagram can be decoupled from the mathematical model.

The diagram breaks the problem down into steps that relate to the clinical pathway, and therefore, to the stakeholders. In this example, the diagram follows the questions that clinicians and patients may ask: which test to do first? Given the result of the first test, should a second test be done? If a second test is done, which one?

Simplified model diagram for the cost-effectiveness analysis of magnetic resonance imaging (MRI) and biopsy to diagnose prostate cancer

Relate the results to the model diagram

The next point of contact between health economists and lay stakeholders is likely to be when the first cost-effectiveness results become available.

The typical chart for the probabilistic results is the cost-effectiveness acceptability curve (CEAC). In my experience, the CEAC is challenging for lay stakeholders. It plots results over a range of cost-effectiveness thresholds, which are not quantities that most people outside cost-effectiveness analysis relate to. Additionally, CEACs showing the results of multiple strategies can have many lines and some discontinuities, which the untrained eye can find difficult to interpret.

An alternative approach is to re-use the model diagram to present the results. The model diagram can show the strategy that is expected to be cost-effective and its probability of cost-effectiveness at the relevant threshold. For example, the probability that the strategies starting with a specific test are cost-effective is X%; and the probability that strategies using the specific test at a specific cut-off are cost-effective is Y%, etc.
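For readers who want to see where such probabilities come from, here is a minimal sketch with made-up numbers (the strategy names, costs and QALYs are purely illustrative): at a given threshold, a strategy’s probability of cost-effectiveness is simply the proportion of probabilistic sensitivity analysis (PSA) simulations in which it has the highest net monetary benefit.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims = 10_000
threshold = 20_000  # cost-effectiveness threshold (£ per QALY), illustrative

# Made-up PSA samples of (cost, QALY) pairs for three strategies.
strategies = {
    "no_test":      (rng.normal(1_000, 200, n_sims), rng.normal(8.00, 0.3, n_sims)),
    "test_A_first": (rng.normal(1_500, 250, n_sims), rng.normal(8.10, 0.3, n_sims)),
    "test_B_first": (rng.normal(2_500, 300, n_sims), rng.normal(8.15, 0.3, n_sims)),
}

# Net monetary benefit per simulation: NMB = threshold * QALYs - costs.
nmb = np.column_stack([threshold * q - c for c, q in strategies.values()])
best = nmb.argmax(axis=1)  # index of the winning strategy in each simulation

for i, name in enumerate(strategies):
    prob = (best == i).mean()
    print(f"P({name} is cost-effective at £{threshold:,}/QALY) = {prob:.2f}")
```

The same proportions can then be attached to the branches of the simplified model diagram instead of being read off a CEAC.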

Next steps for practice and research

Research on the communication of cost-effectiveness analysis is sparse. Beyond the general advice to speak in plain English and avoid jargon, there is little guidance. Hence, health economists find themselves developing their own approaches and techniques.

In my experience, the key aspects for effective communication are to engage with lay stakeholders from the start of the model development, to explain the intuition behind the model in simplified diagrams, and to find a balance between scientific accuracy and clarity which is appropriate for the audience.

More research and guidance are clearly needed to develop communication methods that are effective and straightforward to use in applied cost-effectiveness analysis. Perhaps this is where patient and public involvement can really make a difference!

Rita Faria’s journal round-up for 13th May 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Communicating uncertainty about facts, numbers and science. Royal Society Open Science Published 8th May 2019

This remarkable paper by Anne Marthe van der Bles and colleagues, including the illustrious David Spiegelhalter, covers two of my favourite topics: communication and uncertainty. They focus on epistemic uncertainty: the uncertainty about facts, numbers and science due to limited knowledge (rather than due to the randomness of the world). This is what we could know more about if we spent more resources finding it out.

The authors propose a framework for communicating uncertainty and apply it to two case studies, one in climate change and the other in economic statistics. They also review the literature on the effects of communicating uncertainty. It is so wide-ranging and exhaustive that, if I have any criticism, it is that its 42 pages are not conducive to a leisurely read.

I found the distinction between direct and indirect uncertainty fascinating and incredibly relevant to health economics. Direct uncertainty is about the precision of the evidence whilst indirect uncertainty is about its quality. For example, evidence based on a naïve comparison of patients in a Phase 2 trial with historical controls in another country (yup, this happens!).

So, how should we communicate the uncertainty in our findings? I’m afraid that this paper is not a practical guide but rather a brilliant ground-clearing exercise on how to start thinking about this. Nevertheless, Box 5 (p35) does give some good advice! I do hope this paper kick-starts research on how to explain uncertainty beyond an academic audience. Looking forward to more!

Was Brexit triggered by the old and unhappy? Or by financial feelings? Journal of Economic Behavior & Organization [RePEc] Published 18th April 2019

Not strictly health economics – although arguably Brexit affects our health – is this impressive study about the factors that contributed to the Leave win in the Brexit referendum. Federica Liberini and colleagues used data from the Understanding Society survey to look at the predictors of people’s views about whether or not the UK should leave the EU. The main results come from regressing whether or not a person was pro-Brexit on life satisfaction, their feelings about their financial situation, and other characteristics.

Their conclusions are staggering. They found that people’s views were generally unrelated to their age, their life satisfaction or their income. Instead, a person’s feelings about their financial situation were the strongest predictor. For economists, it may be a bit cringe-worthy to see OLS used for a categorical dependent variable. But to be fair, the authors mention that the results are similar with non-linear models and they report extensive supplementary analyses. Remarkably, they’re making the individual-level data available on the 18th of June here.

As the authors discuss, it is not clear if we’re looking at predictive estimates of characteristics related to pro-Brexit feeling or at causal estimates of factors that led to the pro-Brexit feeling. That is, if we could improve someone’s perceived financial situation, would we reduce their probability of feeling pro-Brexit? In any case, the message is clear. Feelings matter!

How does treating chronic hepatitis C affect individuals in need of organ transplants in the United Kingdom? Value in Health Published 8th March 2019

Anupam Bapu Jena and colleagues looked at the spillover benefits of curing hepatitis C given its consequences on the supply of and demand for livers and other organs for transplant in the UK. They compare three policies: the status quo, in which there is no screening for hepatitis C and organ donation by people with hepatitis C is rare; a universal screen-and-treat policy in which cured people opt in to organ donation; and the same policy with opt-out organ donation.

To do this, they adapted a previously developed queuing model. For the status quo, the model inputs were estimated by calibrating the model outputs to reported NHS performance. They then changed the model inputs to reflect the anticipated impact of the new policies. Importantly, they assumed that all patients with hepatitis C would be cured and no longer require a transplanted organ, and that cured patients would donate organs at similar rates to the general population. They predict that curing hepatitis C would directly reduce the waiting list for organ transplants by reducing the number of patients needing them. Also, there would be an indirect benefit via increasing organ availability for other patients. These consequences aren’t typically included in the cost-effectiveness analysis of treatments for hepatitis C, which means that estimates of their comparative benefits and costs may be inaccurate.
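As a toy illustration of the queuing logic (this is not the authors’ model, and all rates are made up): in a simple M/M/1 queue the mean wait is W = 1/(μ − λ), so curing patients lowers the arrival rate λ while extra donated organs raise the service rate μ, and both effects shorten the expected wait.

```python
# Toy M/M/1 waiting-list model (not the authors' model; rates are made up).
# Patients join the list at rate lam per month; organs arrive at rate mu.
def mean_wait(lam: float, mu: float) -> float:
    """Mean time on the list for a stable M/M/1 queue: W = 1 / (mu - lam)."""
    assert lam < mu, "the queue must be stable (arrivals slower than service)"
    return 1.0 / (mu - lam)

baseline = mean_wait(lam=9.0, mu=10.0)   # status quo
direct   = mean_wait(lam=8.5, mu=10.0)   # cured patients no longer join the list
combined = mean_wait(lam=8.5, mu=10.5)   # opt-out donation also adds organs

print(baseline, direct, combined)  # each effect shortens the expected wait
```

Because the wait depends on the gap μ − λ, even small changes on both sides of the queue can compound, which is the intuition behind the predicted spillover benefits.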

Keeping with the theme of uncertainty, it was disappointing that the paper includes neither confidence bounds on its results nor sensitivity analyses of its assumptions, which, in my view, were quite favourable towards a universal screen-and-treat policy. This is an interesting application of a queuing model, which is something I don’t often see in cost-effectiveness analysis. It is also timely and relevant, given the recent drive by the NHS to eliminate hepatitis C. In a few years’ time, we’ll hopefully know to what extent the predicted spillover benefits were realised.


Rita Faria’s journal round-up for 15th April 2019


Emulating a trial of joint dynamic strategies: an application to monitoring and treatment of HIV‐positive individuals. Statistics in Medicine [PubMed] Published 18th March 2019

Have you heard about the target trial approach? This is a causal inference method for using observational evidence to compare strategies. This outstanding paper by Ellen Caniglia and colleagues is a great way to get introduced to it!

The question is: what is the best test-and-treat strategy for HIV-positive individuals? Given that patients weren’t randomised to each of the four alternative strategies, chances are that their treatment was informed by their prognostic factors. And these also influence their outcome. It’s a typical situation of bias due to confounding. The target trial approach consists of designing the RCT that would estimate the causal effect of interest, and then thinking through how its design can be emulated with the observational data. Here, it would be a trial in which patients are randomly assigned to one of the four joint monitoring and treatment strategies. The goal is to estimate the difference in outcomes if all patients had followed their assigned strategies.

The method is fascinating albeit a bit complicated. It involves censoring individuals, fitting survival models, estimating probability weights, and replicating data. It is worthy of a detailed read! I’m very excited about the target trial methodology for cost-effectiveness analysis with observational data. But I haven’t come across any application yet. Please do get in touch via comments or Twitter if you know of a cost-effectiveness application.
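To give a flavour of the weighting step, here is a hypothetical miniature (not the authors’ code; the prognostic factor, adherence probabilities and effect sizes are all invented). Adherent patients are up-weighted by the inverse of their probability of remaining uncensored, so they stand in for similar patients who deviated from their assigned strategy.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical data: a binary prognostic factor that affects both
# adherence to the assigned strategy and the outcome.
poor_prognosis = rng.random(n) < 0.4
# Patients with poor prognosis are more likely to deviate (be censored).
p_adhere = np.where(poor_prognosis, 0.5, 0.9)
adherent = rng.random(n) < p_adhere
# Outcome depends on prognosis (made-up effect sizes).
outcome = rng.normal(loc=np.where(poor_prognosis, 1.0, 2.0), scale=0.5)

# Naive mean among adherent patients over-represents good-prognosis patients.
naive = outcome[adherent].mean()

# IPW: weight each adherent patient by 1 / P(adherent | prognosis).
weights = 1.0 / p_adhere[adherent]
ipw = np.average(outcome[adherent], weights=weights)

truth = outcome.mean()  # what the full, uncensored population experiences
print(naive, ipw, truth)  # the weighted mean recovers the population mean
```

In the actual method the weights come from fitted models of censoring over time rather than known probabilities, but the principle is the same.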

Achieving integrated care through commissioning of primary care services in the English NHS: a qualitative analysis. BMJ Open [PubMed] Published 1st April 2019

Are you confused about the set-up of primary health care services in England? Look no further than Imelda McDermott and colleagues’ paper.

The paper starts by telling the story of how primary care has been organised in England over time, from its creation in 1948 to the present day. For example, I didn’t know that there are new plans to allow clinical commissioning groups (CCGs) to design local incentive schemes as an alternative to the Quality and Outcomes Framework pay-for-performance scheme. The research proper is a qualitative study using interviews, telephone surveys and analysis of policy documents to understand how CCGs commission primary care services. CCG commissioning is intended to make better and more efficient use of resources to address increasing demand for health care services, staff shortages and financial pressures. The issue is that it is not easy to implement in practice. Furthermore, there seems to be some “reinvention of the wheel”. For example, from one of the interviewees: “…it’s no great surprise to me that the three STPs that we’ve got are the same as the three PCT clusters that we broke up to create CCGs…” Hmm, shall we just go back to pre-2012 then?

Even if CCG commissioning does achieve all it sets out to do, I wonder about its value for money given the costs of setting it up. This paper is an exceptional read about the practicalities of implementing this policy in practice.

The dark side of coproduction: do the costs outweigh the benefits for health research? Health Research Policy and Systems [PubMed] Published 28th March 2019

Last month, I covered the excellent paper by Kathryn Oliver and Paul Cairney about how to get our research to influence policy. This week I’d like to suggest another remarkable paper by Kathryn, this time with Anita Kothari and Nicholas Mays, on the costs and benefits of coproduction.

If you are in the UK, you have certainly heard about public and patient involvement, or PPI. In this paper, coproduction refers to any collaborative working between academics and non-academics, of which PPI is one type; it includes working with professionals, policy makers and any other people affected by the research. The authors discuss a wide range of costs of coproduction: the direct costs of doing collaborative research, such as organising meetings and travel arrangements; the personal costs to an individual researcher of managing conflicting views and disagreements between collaborators, of having research products seen to be of lower quality, and of being seen as partisan; and the costs to the stakeholders themselves.

As a detail, I loved the term “hit-and-run research” to describe the current climate: get funding, do research, achieve impact, leave. Indeed, the way that research is funded, with budgets only available for the period in which the research is being developed, does not help academics foster relationships.

This paper reinforced my view that there may well be benefits to coproduction, but that there are also quite a lot of costs. And not much attention tends to be paid to the magnitude of those costs, on whom they fall, and what is displaced. I found the authors’ advice about the questions to ask oneself when thinking about coproduction really useful. I’ll keep it to hand when writing my next funding application, and I recommend you do too!