Rita Faria’s journal round-up for 21st October 2019

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Quantifying how diagnostic test accuracy depends on threshold in a meta-analysis. Statistics in Medicine [PubMed] Published 30th September 2019

A diagnostic test is often based on a continuous measure, e.g. cholesterol, which is dichotomised at a certain threshold to classify people as ‘test positive’, who should be treated, or ‘test negative’, who should not. In an economic evaluation, we may wish to compare the costs and benefits of using the test at different thresholds: for example, the cost-effectiveness of offering lipid-lowering therapy to people with cholesterol over 7 mmol/L versus over 5 mmol/L. This is straightforward if we have access to a large dataset comparing the test to its gold standard, from which we can estimate sensitivity and specificity at each threshold. It is quite a challenge if we only have aggregate data from multiple publications.
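To make this concrete, here’s a minimal sketch of the individual-level case, using simulated data purely for illustration: with the continuous measure and the gold-standard diagnosis in hand, sensitivity and specificity at each candidate threshold are simple proportions.

```python
import numpy as np
import pandas as pd

# Simulated, purely illustrative data: a continuous measure (cholesterol, mmol/L)
# and the gold-standard diagnosis (1 = disease present).
rng = np.random.default_rng(1)
n = 1000
disease = rng.binomial(1, 0.2, n)
cholesterol = rng.normal(5.0 + 1.5 * disease, 1.0)
data = pd.DataFrame({"cholesterol": cholesterol, "disease": disease})

def accuracy_at_threshold(df, threshold):
    """Sensitivity and specificity when 'test positive' means measure >= threshold."""
    positive = df["cholesterol"] >= threshold
    diseased = df["disease"] == 1
    sensitivity = (positive & diseased).sum() / diseased.sum()
    specificity = (~positive & ~diseased).sum() / (~diseased).sum()
    return sensitivity, specificity

for threshold in (5.0, 6.0, 7.0):
    sens, spec = accuracy_at_threshold(data, threshold)
    print(f"Threshold {threshold} mmol/L: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

The method in the paper tackles the much harder problem of estimating this threshold-accuracy relationship when only published aggregate data are available.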

In this brilliant paper, Hayley Jones and colleagues report on a new method to synthesise diagnostic accuracy data from multiple studies. It consists of a multinomial meta-analysis model that can estimate how accuracy depends on the diagnostic threshold. This method produces estimates that can be used to parameterise an economic model.

These new developments in evidence synthesis are very exciting and really important for improving the data going into economic models. My only concern is that the model is implemented in WinBUGS, which few applied analysts use. Would it be possible to have a tutorial or, even better, to include this method among the online tools available on the Complex Reviews Support Unit website?

Early economic evaluation of diagnostic technologies: experiences of the NIHR Diagnostic Evidence Co-operatives. Medical Decision Making [PubMed] Published 26th September 2019

Keeping with the diagnostic theme, this paper by Lucy Abel and colleagues reports on the experience of the Diagnostic Evidence Co-operatives in conducting early modelling of diagnostic tests. These were established in 2013 to help developers of diagnostic tests link up with clinical and academic experts.

The paper discusses eight projects where economic modelling was conducted at an early stage of project development. It was fascinating to read about the collaboration between academics and test developers. One of the positive aspects was the buy-in of the developers, while a less positive one was the pressure to produce evidence quickly, and evidence that supported the product.

The paper is excellent in discussing the strengths and challenges of these projects. Of note, there were challenges in mapping out a clinical pathway, selecting the appropriate comparators, and establishing the consequences of testing. Furthermore, they found that the parameters around treatment effectiveness were the key driver of cost-effectiveness in many of the evaluations. This is not surprising, given that the benefits of a test usually come from better informing management decisions rather than from its direct costs and benefits. It definitely resonates with my own experience in conducting economic evaluations of diagnostic tests (see, for example, here).

Following on from the challenges, the authors suggest areas for methodological research: mapping the clinical pathway, ensuring model transparency, and modelling sequential tests. They finish with advice for researchers doing early modelling of tests, although I’d say that it would be applicable to any economic evaluation. I completely agree that we need better methods for economic evaluation of diagnostic tests. This paper is a useful first step in setting up a research agenda.

A second chance to get causal inference right: a classification of data science tasks. Chance [arXiv] Published 14th March 2019

This impressive paper by Miguel Hernan, John Hsu and Brian Healy is an essential read for all researchers, analysts and scientists. Miguel and colleagues classify data science tasks into description, prediction and counterfactual prediction. Description is using data to quantitatively summarise some features of the world. Prediction is using the data to know some features of the world given our knowledge about other features. Counterfactual prediction is using the data to know what some features of the world would have been if something hadn’t happened; that is, causal inference.

I found the explanation of the difference between prediction and causal inference quite enlightening. It is not about the amount of data or the statistical/econometric techniques. The key difference is in the role of expert knowledge. Prediction requires expert knowledge to specify the research question, the inputs, the outputs and the data sources. Additionally, causal inference requires expert knowledge “also to describe the causal structure of the system under study”. This causal knowledge is reflected in the assumptions, in the ideas for the data analysis, and in the interpretation of the results.

The section on implications for decision-making makes some important points. First, that the goal of data science is to help people make better decisions. Second, that predictive algorithms can tell us that decisions need to be made but not which decision is most beneficial – for that, we need causal inference. Third, many of us work on complex systems for which we don’t know everything (the human body is a great example). Because we don’t know everything, it is impossible to predict with certainty, from routine health records, what the consequences of an intervention would be for a specific individual. At most, we can estimate the average causal effect, but even for that we need assumptions. The relevance to the latest developments in data science is obvious, given all the hype around real-world data, artificial intelligence and machine learning.

I absolutely loved reading this paper and wholeheartedly recommend it to any health economist. It’s a must-read!


Meeting round-up: iHEA Congress 2019

Missed iHEA 2019? Or were you there but could not make it to all of the amazing sessions? Stay tuned for my conference highlights!

iHEA started on Saturday 13th with pre-congress sessions on fascinating research as well as more prosaic topics, such as early-career networking with senior health economists. All attendees got a super useful plastic bottle – great idea, iHEA team!

The conference proper launched on Sunday evening with the brilliant plenary session by Raj Chetty from Harvard University.

Monday morning started bright and early with a thought-provoking session on the validation of CE models. It was chaired and discussed by Stefan Lhachimi and featured presentations by Isaac Corro Ramos, Talitha Feenstra and Salah Ghabri. I’m pleased to see that validation is coming to the forefront! Clearly, we need to do better in validating our models and documenting code, but we’re on the right track and engaged in making this happen.

Next up, the superb session on the societal perspective for cost-effectiveness analysis. It was an all-star cast with Mark Sculpher, Simon Walker, Susan Griffin, Peter Neumann, Lisa Robinson, and Werner Brouwer. I’ve live-tweeted it here.

The case was expertly made that taking a single sector perspective can be misleading when evaluating policies with cross-sectoral effects, hence the impact inventory by Simon and colleagues is a useful tool to guide the choice of sectors to include. At the same time, we should be mindful of the requirements of the decision-maker for whom CEA is intended. This was a compelling session, which will definitely set the scene for much more research to come.

After a tasty lunch (well done, catering team!), I headed to the session on evaluations using non-randomised data. The presenters included Maninie Molatseli, Fernando Antonio Postali, James Love-Koh and Taufik Hidayat, with case studies from South Africa, Brazil and Indonesia. Marc Suhrcke chaired. I really enjoyed hearing about the practicalities of applying econometric methods to estimate the treatment effects of system-wide policies. And James’s presentation was a great application of distributional cost-effectiveness analysis.

I was in the presenter’s chair next, discussing the challenges in implementing policies in the south-west quadrant of the CE plane. This session was chaired by Anna Vassall and discussed by Gesine Meyer-Rath. Jack Dowie started by convincingly arguing that the decision rule should be the same regardless of where in the CE plane the policy falls. David Bath and Sergio Torres-Rueda presented fascinating case studies of south-west policies. And I argued that the barrier is essentially a problem of communication (presentation available here). An energetic discussion followed, showing that, even in our field, the matter is far from settled.

The day finished with the memorial session for the wonderful Alan Maynard and Uwe Reinhardt, both of whom did so much for health economics. It was a beautiful session, where people got together to share incredible stories from these health economics heroes. And if you’d like to know more, both Alan and Uwe have published books here and here.

Tuesday started with the session on precision medicine, chaired by Dean Regier, and featuring Rosalie Viney, Chris McCabe and Stuart Peacock. Rather than slides, the screen was filled with a video of a cosy fireplace, inviting the audience to take part in the discussion.

Under debate was whether precision medicine is a completely different type of technology, with added benefits over and above improvements to health, and therefore needs a different CE framework. The panellists were absolutely outstanding in debating the issues! Although I understand the benefits beyond health that these technologies can offer, I side with the view that, as with other technologies, value is about whether the added benefits are worth the losses given the opportunity cost.

My final session of the day was by the great Mike Drummond, comparing how HTA has influenced the uptake of new anticancer drugs in Spain versus England (summary in the thread below). Mike and colleagues found that positive recommendations do increase utilisation, but the magnitude of the change differs by country and region. Work is ongoing to check that utilisation has been captured accurately in the routine data sources.

The conference dinner was at the Markthalle, with plenty of drinks and loads of international food to choose from. I had to have an early night given that I was presenting at 8:30 the next morning. Others, though, enjoyed the party until the early hours!

Indeed, Wednesday started with my session on cost-effectiveness analysis of diagnostic tests. Alison Smith presented on her remarkable work on measurement uncertainty while Hayley Jones gave a masterclass on her new method for meta-analysis of test accuracy across multiple thresholds. I presented on the CEA of test sequences (available here). Simon Walker and James Buchanan added insightful points as discussants. We had a fantastically engaged audience, with great questions and comments. It shows that the CEA of diagnostic tests is becoming a hugely important topic.

Sadly, some other morning sessions were not as well attended. One session, also on CEA, was even cancelled due to lack of audience! For future conferences, I’d suggest scheduling the sessions on the day after the conference dinner a bit later, as well as having fewer sessions to choose from.

Next up on my agenda was the exceptional session on equity, chaired by Paula Lorgelly, and with presentations by Richard Cookson, Susan Griffin and Ijeoma Edoka. I was unable to attend, but I have watched it at home via YouTube (from 1:57:10)! That’s right, some sessions were live streamed and are still available via the iHEA website. Do have a look!

My last session of the conference was on end-of-life care, with Charles Normand chairing, Helen Mason, Eric Finkelstein, and Mendwas Dzingina as discussants, and presentations by Koonal Shah, Bridget Johnson and Nikki McCaffrey. It was a really thought-provoking session, raising questions on the value of interventions at the end of life compared to other stages of the life course.

Lastly, the outstanding plenary session by Lise Rochaix and Joseph Kutzin on how to translate health economics research into policy. Lise and Joseph had pragmatic suggestions and insightful comments on the communication of health economics research to policy makers. Superb! Also available on the live stream here (from 06:09:44).

iHEA 2019 was truly an amazing conference. Expertly organised, well thought-out and with lots of interesting sessions to choose from. iHEA 2021 in Cape Town is firmly in my diary!

How to explain cost-effectiveness models for diagnostic tests to a lay audience

Non-health economists (henceforth referred to as ‘lay stakeholders’) are often asked to use the outputs of cost-effectiveness models to inform decisions, but they can find these models difficult to understand. Conversely, health economists may have limited experience of explaining cost-effectiveness models to lay stakeholders. How can we do better?

This article shares my experience of explaining cost-effectiveness models of diagnostic tests to lay stakeholders such as researchers in other fields, clinicians, managers, and patients, and suggests some approaches to make models easier to understand. It is the condensed version of my presentation at ISPOR Europe 2018.

Why are cost-effectiveness models of diagnostic tests difficult to understand?

Models designed to compare diagnostic strategies are particularly challenging. In my view, this is for two reasons.

Firstly, there is the sheer number of possible diagnostic strategies that a cost-effectiveness model allows us to compare. Even if we are looking at only a couple of tests, we can use them in various combinations and at many diagnostic thresholds. See, for example, this cost-effectiveness analysis of diagnosis of prostate cancer.

Secondly, diagnostic tests can affect costs and health outcomes in multiple ways. Specifically, a test can have a direct effect on people’s health-related quality of life and mortality risk, as well as through its acquisition costs and the consequences of its side effects. Furthermore, diagnostic tests can have an indirect effect via the consequences of the subsequent management decisions. This indirect effect is often the key driver of cost-effectiveness.

As a result, a cost-effectiveness analysis of diagnostic tests can involve many strategies, with multiple effects modelled over the short and long term. This makes the model and the results difficult to understand.
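To illustrate the direct and indirect effects, here’s a minimal sketch of how a single ‘test then treat if positive’ strategy might be evaluated in a simple decision tree. All parameter values are illustrative placeholders rather than estimates from any study; the point is that the downstream consequences of the management decisions, not the test’s own cost, tend to dominate.

```python
def expected_cost_qalys(prevalence, sensitivity, specificity, test_cost, outcomes):
    """Expected cost and QALYs per person for a 'test then treat if positive' strategy.

    `outcomes` maps each test result (true/false positive/negative) to the
    long-term (cost, QALYs) of the management decision that follows.
    All values are illustrative placeholders, not estimates from any study.
    """
    probabilities = {
        "true_positive": prevalence * sensitivity,
        "false_negative": prevalence * (1 - sensitivity),
        "true_negative": (1 - prevalence) * specificity,
        "false_positive": (1 - prevalence) * (1 - specificity),
    }
    cost = test_cost + sum(p * outcomes[branch][0] for branch, p in probabilities.items())
    qalys = sum(p * outcomes[branch][1] for branch, p in probabilities.items())
    return cost, qalys

# Illustrative placeholder inputs: the indirect effect of the test works through
# the treatment decision taken on each branch of the tree.
outcomes = {
    "true_positive":  (2000, 9.0),   # disease present, treated
    "false_negative": (5000, 7.5),   # disease present, missed
    "true_negative":  (500, 9.5),    # no disease, no treatment
    "false_positive": (2500, 9.3),   # no disease, treated unnecessarily
}
cost, qalys = expected_cost_qalys(prevalence=0.2, sensitivity=0.9, specificity=0.8,
                                  test_cost=100, outcomes=outcomes)
print(f"Expected cost per person: {cost:.0f}; expected QALYs: {qalys:.2f}")
```

Each competing strategy (different tests, thresholds, or sequences) gets its own version of this calculation, which is where the size and complexity come from.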

Map out the effect of the test on health outcomes or costs

The first step in developing any cost-effectiveness model is to understand how the new technology, such as a diagnostic test or a drug, can impact the patient and the health care system. The studies by Ferrante di Ruffano et al and Kip et al are a good starting point for understanding the possible effects of a test on health outcomes and/or costs.

Ferrante di Ruffano et al conducted a review of the mechanisms by which diagnostic tests can affect health outcomes, which provides a list of the possible effects of diagnostic tests.

Kip et al suggest a checklist for the reporting of cost-effectiveness analyses of diagnostic tests and biomarkers. Although this is a checklist for reporting a cost-effectiveness analysis that has already been conducted, it can also be used as a prompt to define the possible effects of a test.

Reach a shared understanding of the clinical pathway

The parallel step is to understand the clinical pathway that the diagnostic strategies will fit into and affect. This consists of conceptualising the elements of the health care service that are relevant to the decision problem. If you’d like to know more about model conceptualisation, I suggest this excellent paper by Paul Tappenden.

These conceptual models are necessarily simplifications of reality. They need to be as simple as possible, but accurate enough that lay stakeholders recognise them as valid. As Einstein said: “to make the irreducible basic elements as simple and as few as possible, without having to surrender the adequate representation of a single datum of experience.”

Agree which impacts to include in the cost-effectiveness model

What to include in and what to exclude from the model is, at present, more of an art than a science. For example, Chilcott et al conducted a series of interviews with health economists and found that their approaches to model development varied widely.

I find that the best approach is to design the model in consultation with the relevant stakeholders, such as clinicians, patients, and health care managers. This ensures that the cost-effectiveness model has face validity for those who will ultimately be its end users and (hopefully) advocates of the results.

Decouple the model diagram from the mathematical model

When we have a reasonable idea of the model that we are going to build, we can draw its diagram. A model diagram is not only a recommended component of the reporting of a cost-effectiveness model but also helps lay stakeholders understand it.

The temptation is often to make the model diagram as similar as possible to the mathematical model. In cost-effectiveness models of diagnostic tests, the mathematical model tends to be a decision tree, so we often see a decision tree diagram.

The problem is that decision trees can easily become unwieldy when we have various test combinations and decision nodes. We can try to condense a gigantic decision tree into a simpler diagram but, unless you have great graphic design skills, it might be a futile exercise (see, for example, here).
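To see how quickly the tree grows, here’s a toy enumeration under made-up assumptions (two hypothetical tests, a handful of cut-offs each, and an optional confirmatory second test). Even this stripped-down decision problem produces 17 strategies, each a separate limb of the decision tree.

```python
from itertools import permutations

# Two hypothetical tests, each usable at a few cut-offs; names and values are
# made up purely for illustration.
tests = {"test_A": [3, 4], "test_B": [10, 12, 14]}

# A strategy = a first test at a given cut-off, then either stop or confirm a
# positive result with the other test at one of its cut-offs.
strategies = []
for first, second in permutations(tests, 2):
    for first_cutoff in tests[first]:
        strategies.append((first, first_cutoff, None, None))  # no confirmatory test
        for second_cutoff in tests[second]:
            strategies.append((first, first_cutoff, second, second_cutoff))

print(f"{len(strategies)} strategies from just two tests")  # 17
```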

An alternative approach is to decouple the model diagram from the mathematical model and break down the decision problem into steps. The figure below shows an example of how the model diagram can be decoupled from the mathematical model.

The diagram breaks the problem down into steps that relate to the clinical pathway and, therefore, to the stakeholders. In this example, the diagram follows the questions that clinicians and patients may ask: Which test should be done first? Given the result of the first test, should a second test be done? If so, which one?

Simplified model diagram on the cost-effectiveness analysis of magnetic resonance imaging (MRI) and biopsy to diagnose prostate cancer

Relate the results to the model diagram

The next point of contact between health economists and lay stakeholders is likely to be when the first cost-effectiveness results are available.

The typical chart for the probabilistic results is the cost-effectiveness acceptability curve (CEAC). In my experience, the CEAC is challenging for lay stakeholders. It plots results over a range of cost-effectiveness thresholds, which are not quantities that most people outside cost-effectiveness analysis relate to. Additionally, CEACs showing the results of multiple strategies can have many lines and some discontinuities, which can be difficult for the untrained eye to follow.

An alternative approach is to re-use the model diagram to present the results. The model diagram can show the strategy that is expected to be cost-effective and its probability of cost-effectiveness at the relevant threshold. For example: the probability that strategies starting with a specific test are cost-effective is X%; the probability that strategies using that test at a specific cut-off are cost-effective is Y%; and so on.
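As a rough sketch of where these probabilities come from, suppose we have probabilistic sensitivity analysis samples of each strategy’s costs and QALYs (simulated placeholders below, loosely echoing the MRI/biopsy example). At a given threshold, a strategy’s probability of being cost-effective is the proportion of samples in which it has the highest net monetary benefit, and strategies can be grouped, for instance by first test, before summing.

```python
import numpy as np

rng = np.random.default_rng(7)
n_samples = 5000
threshold = 20000  # assumed cost-effectiveness threshold, £ per QALY

# Simulated PSA samples of cost and QALYs per strategy (placeholders only).
strategies = ["MRI first, cut-off 3", "MRI first, cut-off 4", "Biopsy first"]
costs = {s: rng.normal(mean, 300, n_samples)
         for s, mean in zip(strategies, [2500, 2600, 2800])}
qalys = {s: rng.normal(mean, 0.05, n_samples)
         for s, mean in zip(strategies, [9.05, 9.08, 9.02])}

# Net monetary benefit per sample; the cost-effective strategy in each sample
# is the one with the highest NMB at this threshold.
nmb = np.column_stack([threshold * qalys[s] - costs[s] for s in strategies])
best = nmb.argmax(axis=1)

for i, s in enumerate(strategies):
    print(f"P({s} is cost-effective) = {(best == i).mean():.2f}")

# Grouped result for the diagram: probability that any MRI-first strategy wins.
mri_first = [i for i, s in enumerate(strategies) if s.startswith("MRI first")]
print(f"P(an MRI-first strategy is cost-effective) = {np.isin(best, mri_first).mean():.2f}")
```

The grouped probabilities can then be written directly onto the branches of the simplified diagram, so the results are read in the same frame as the decision problem.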

Next steps for practice and research

Research about the communication of cost-effectiveness analysis is sparse. Beyond the general advice to speak in plain English and avoid jargon, there is little guidance. Hence, health economists find themselves developing their own approaches and techniques.

In my experience, the key aspects for effective communication are to engage with lay stakeholders from the start of the model development, to explain the intuition behind the model in simplified diagrams, and to find a balance between scientific accuracy and clarity which is appropriate for the audience.

More research and guidance are clearly needed to develop communication methods that are effective and straightforward to use in applied cost-effectiveness analysis. Perhaps this is where patient and public involvement can really make a difference!