R for trial and model-based cost-effectiveness analysis: workshop

Background and objectives

It is our pleasure to announce a workshop and training event on the use of R for trial and model-based cost-effectiveness analysis (CEA). This follows our successful workshop on R for CEA in 2018.

Our event will begin with a half-day short course on R for decision trees and Markov models and the use of the BCEA package for graphical and statistical analysis of results; this will be delivered by Gianluca Baio of UCL and Howard Thom of Bristol University.

This will be followed by a one-day workshop in which experts from academia, industry, and government institutions (including NICE) will present a wide variety of technical topics. These will include decision trees, Markov models, discrete event simulation, integration of network meta-analysis, extrapolation of survival curves, and development of R packages.

We will include a pre-workshop virtual code challenge on a problem set by our scientific committee. This will take place on GitHub and a Slack channel, with participants encouraged to submit final R code solutions for peer review on efficiency, flexibility, elegance, and transparency. Prizes will be provided for the best entry.

Participants are also invited to submit abstracts for potential oral presentations. An optional dinner and networking event will be held on the evening of 8th July.

Registration is open until 1 June 2019 at https://onlinestore.ucl.ac.uk/conferences-and-events/faculty-of-mathematical-physical-sciences-c06/department-of-statistical-science-f61/f61-workshop-on-r-for-trial-modelbased-costeffectiveness-analysis

To submit an abstract, please send it to howard.thom@bristol.ac.uk with the subject “R for CEA abstract”. The word limit is 300. The abstract submission deadline is 15 May 2019, and the scientific committee will make decisions on acceptance by 1 June 2019.

Preliminary Programme

Day 2: Workshop. Tuesday 9th July.

  • 9:30-9:45. Howard Thom. Welcome
  • 9:45-10:15. Nathan Green. Imperial College London. Simple, pain-free decision trees in R for the Excel user
  • 10:15-10:35. Pedro Saramago. Centre for Health Economics, University of York. Using R for Markov modelling: an introduction
  • 10:35-10:55. Alison Smith. University of Leeds. Discrete event simulation models in R
  • 10:55-11:10. Coffee
  • 11:10-12:20. Participant oral presentation session (4 speakers, 15 minutes each)
  • 12:20-13:45. Lunch
  • 13:45-14:00. Gianluca Baio. University College London. Packing up, shacking up’s (going to be) all you wanna do! Building packages in R and GitHub
  • 14:00-14:15. Jeroen Jansen. Innovation and Value Initiative. State transition models and integration with network meta-analysis
  • 14:15-14:25. Ash Bullement. Delta Hat Analytics, UK. Fitting and extrapolating survival curves for CEA models
  • 14:25-14:45. Iryna Schlackow. Nuffield Department of Public Health, University of Oxford. Generic R methods to prepare routine healthcare data for disease modelling
  • 14:45-15:00. Coffee
  • 15:00-15:15. Initiatives for the future and challenges in gaining R acceptance (ISPOR Taskforce, ISPOR Special Interest Group, future of the R for CEA workshop)
  • 15:15-16:30. Participant discussion.
  • 16:30-16:45. Anthony Hatswell. Close and conclusions


R for trial and model-based cost-effectiveness analysis: short course

Background and objectives

It is our pleasure to announce a workshop and training event on the use of R for trial and model-based cost-effectiveness analysis (CEA). This follows our successful workshop on R for CEA in 2018.

Our event will begin with a half-day short course on R for decision trees and Markov models and the use of the BCEA package for graphical and statistical analysis of results; this will be delivered by Gianluca Baio of UCL and Howard Thom of Bristol University.

This will be followed by a one-day workshop in which experts from academia, industry, and government institutions (including NICE) will present a wide variety of technical topics. These will include decision trees, Markov models, discrete event simulation, integration of network meta-analysis, extrapolation of survival curves, and development of R packages.

We will include a pre-workshop virtual code challenge on a problem set by our scientific committee. This will take place on GitHub and a Slack channel, with participants encouraged to submit final R code solutions for peer review on efficiency, flexibility, elegance, and transparency. Prizes will be provided for the best entry.

Participants are also invited to submit abstracts for potential oral presentations. An optional dinner and networking event will be held on the evening of 8th July.

Registration is open until 1 June 2019 at https://onlinestore.ucl.ac.uk/conferences-and-events/faculty-of-mathematical-physical-sciences-c06/department-of-statistical-science-f61/f61-short-course-on-r-for-decision-trees-markov-models-the-use-of-bcea


Preliminary Programme

Day 1: Introduction to R for Cost-Effectiveness Modelling. Monday 8th July.

  • 13:00-13:15. Howard Thom. Welcome and introductions
  • 13:15-13:45. Howard Thom. Building a decision tree in R
  • 13:45-14:15. Gianluca Baio. Using BCEA to summarise outputs of an economic model
  • 14:15-14:45. Practical 1 (Decision trees)
  • 14:45-15:00. Coffee break
  • 15:00-15:45. Howard Thom. R for building Markov models
  • 15:45-16:15. Gianluca Baio. Further use of BCEA
  • 16:15-17:00. Practical 2 (Markov models)

How to explain cost-effectiveness models for diagnostic tests to a lay audience

Non-health economists (henceforth referred to as ‘lay stakeholders’) are often asked to use the outputs of cost-effectiveness models to inform decisions, but they can find them difficult to understand. Conversely, health economists may have limited experience of explaining cost-effectiveness models to lay stakeholders. How can we do better?

This article shares my experience of explaining cost-effectiveness models of diagnostic tests to lay stakeholders such as researchers in other fields, clinicians, managers, and patients, and suggests some approaches to make models easier to understand. It is the condensed version of my presentation at ISPOR Europe 2018.

Why are cost-effectiveness models of diagnostic tests difficult to understand?

Models designed to compare diagnostic strategies are particularly challenging. In my view, this is for two reasons.

Firstly, there is the sheer number of possible diagnostic strategies that a cost-effectiveness model allows us to compare. Even if we are looking at only a couple of tests, we can use them in various combinations and at many diagnostic thresholds. See, for example, this cost-effectiveness analysis of diagnosis of prostate cancer.

Secondly, diagnostic tests can affect costs and health outcomes in multiple ways. Tests can have a direct effect through their impact on people’s health-related quality of life and mortality risk, through their acquisition costs, and through the consequences of their side effects. They can also have an indirect effect via the consequences of the subsequent management decisions. This indirect effect is often the key driver of cost-effectiveness.

As a result, the cost-effectiveness analysis of diagnostic tests can have many strategies, with multiple effects modelled in the short and long-term. This makes the model and the results difficult to understand.
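To make the combinatorial point concrete, here is a minimal sketch of how quickly the strategy space grows with just two tests. The tests, thresholds, and counts are hypothetical, not taken from any specific analysis:

```python
# Hypothetical example: an imaging test interpretable at three diagnostic
# thresholds, plus a biopsy. Each combination of ordering, threshold, and
# follow-up rule is a distinct strategy the model must compare.
imaging_thresholds = ["score>=3", "score>=4", "score>=5"]

strategies = []
for thr in imaging_thresholds:
    # Imaging first: follow a positive result with biopsy, or with nothing
    strategies.append(f"imaging({thr}) -> biopsy if positive")
    strategies.append(f"imaging({thr}) -> no further test")
    # Biopsy first: follow a negative result with imaging at this threshold
    strategies.append(f"biopsy -> imaging({thr}) if negative")
# Single-test strategy
strategies.append("biopsy only")

print(len(strategies))  # → 10 strategies from only two tests
```

Adding a third test, or a few more thresholds, multiplies this count further, which is why the full strategy list is rarely a helpful starting point for a lay audience.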

Map out the effect of the test on health outcomes or costs

The first step in developing any cost-effectiveness model is to understand how the new technology, such as a diagnostic test or a drug, can impact the patient and the health care system. Ferrante di Ruffano et al and Kip et al are two studies that can be used as a starting point to understand the possible effects of a test on health outcomes and/or costs.

Ferrante di Ruffano et al conducted a review of the mechanisms by which diagnostic tests can affect health outcomes and provide a list of the possible effects of diagnostic tests.

Kip et al suggest a checklist for the reporting of cost-effectiveness analyses of diagnostic tests and biomarkers. Although the checklist is intended for reporting a completed cost-effectiveness analysis, it can also be used as a prompt to define the possible effects of a test.

Reach a shared understanding of the clinical pathway

The parallel step is to understand the clinical pathway into which the diagnostic strategies will be integrated and which they will affect. This consists of conceptualising the elements of the health care service relevant to the decision problem. If you’d like to know more about model conceptualisation, I suggest this excellent paper by Paul Tappenden.

These conceptual models are necessarily simplifications of reality. They need to be as simple as possible, but accurate enough that lay stakeholders recognise them as valid. As Einstein said, the aim is “to make the irreducible basic elements as simple and as few as possible, without having to surrender the adequate representation of a single datum of experience.”

Agree which impacts to include in the cost-effectiveness model

What to include in and what to exclude from the model is, at present, more of an art than a science. For example, Chilcott et al conducted a series of interviews with health economists and found that their approaches to model development varied widely.

I find that the best approach is to design the model in consultation with the relevant stakeholders, such as clinicians, patients, and health care managers. This ensures that the cost-effectiveness model has face validity to those who will ultimately be its end users and (hopefully) advocates of its results.

Decouple the model diagram from the mathematical model

When we have a reasonable idea of the model that we are going to build, we can draw its diagram. A model diagram is not only a recommended component of the reporting of a cost-effectiveness model but also helps lay stakeholders understand it.

The temptation is often to draw the model diagram as similar as possible to the mathematical model. In cost-effectiveness models of diagnostic tests, the mathematical model tends to be a decision tree. Therefore, we often see a decision tree diagram.
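As a minimal illustration of that underlying mathematical model, the sketch below computes the branch probabilities and expected cost of a single “test, then treat if positive” strategy. All inputs (prevalence, test accuracy, costs) are hypothetical and chosen only for illustration:

```python
# Hypothetical inputs, not taken from the article or any real analysis
prevalence = 0.30       # P(disease)
sensitivity = 0.85      # P(test positive | disease)
specificity = 0.90      # P(test negative | no disease)
cost_test = 100.0       # cost of the test itself
cost_treat = 2000.0     # treatment cost, given to everyone who tests positive

# Terminal-branch probabilities of the decision tree
p_tp = prevalence * sensitivity              # true positive
p_fn = prevalence * (1 - sensitivity)        # false negative
p_fp = (1 - prevalence) * (1 - specificity)  # false positive
p_tn = (1 - prevalence) * specificity        # true negative
assert abs(p_tp + p_fn + p_fp + p_tn - 1) < 1e-12  # branches sum to 1

# Expected cost per patient: everyone is tested; positives are treated
p_positive = p_tp + p_fp
expected_cost = cost_test + p_positive * cost_treat
print(round(expected_cost, 2))  # → 750.0
```

Each extra test or decision rule multiplies the number of such branches, which is precisely why the full tree quickly stops being a useful communication device.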

The problem is that decision trees can easily become unwieldy when we have various test combinations and decision nodes. We can try to condense a gigantic decision tree into a simpler diagram, but unless you have great graphic design skills, it might be a futile exercise (see, for example, here).

An alternative approach is to decouple the model diagram from the mathematical model and break down the decision problem into steps. The figure below shows an example of how the model diagram can be decoupled from the mathematical model.

The diagram breaks the problem down into steps that relate to the clinical pathway, and therefore, to the stakeholders. In this example, the diagram follows the questions that clinicians and patients may ask: which test to do first? Given the result of the first test, should a second test be done? If a second test is done, which one?

Simplified model diagram on the cost-effectiveness analysis of magnetic resonance imaging (MRI) and biopsy to diagnose prostate cancer

Relate the results to the model diagram

The next point of contact between the health economists and lay stakeholders is likely to be at the point when the first cost-effectiveness results are available.

The typical chart for presenting probabilistic results is the cost-effectiveness acceptability curve (CEAC). In my experience, the CEAC is challenging for lay stakeholders. It plots results over a range of cost-effectiveness thresholds, which are not quantities that most people outside cost-effectiveness analysis relate to. Additionally, CEACs showing the results of multiple strategies can have many lines and some discontinuities, which can be difficult for the untrained eye to interpret.

An alternative approach is to re-use the model diagram to present the results. The model diagram can show the strategy that is expected to be cost-effective and its probability of cost-effectiveness at the relevant threshold. For example, the probability that the strategies starting with a specific test are cost-effective is X%; and the probability that strategies using the specific test at a specific cut-off are cost-effective is Y%, etc.
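As a sketch of where such probabilities come from, the toy example below computes each strategy’s probability of being cost-effective at a given threshold from probabilistic sensitivity analysis (PSA) draws, via net monetary benefit. The draws, strategies, and threshold are made up; real analyses use thousands of draws:

```python
import numpy as np

# Made-up PSA output: cost and QALY draws for two hypothetical strategies
costs = np.array([
    [1000.0, 1200.0,  900.0, 1100.0],   # strategy A (e.g. imaging first)
    [6000.0, 6200.0, 5900.0, 6100.0],   # strategy B (e.g. biopsy first)
])
qalys = np.array([
    [10.2, 10.4, 10.1, 10.3],
    [10.4, 10.6, 10.2, 10.6],
])

threshold = 20000.0                  # cost-effectiveness threshold (per QALY)
nmb = threshold * qalys - costs      # net monetary benefit of each draw
best = nmb.argmax(axis=0)            # cost-effective strategy in each draw
prob_ce = np.bincount(best, minlength=len(costs)) / best.size
print(prob_ce)  # → [0.75 0.25]
```

These per-strategy probabilities can then be grouped (for example, by the first test in each strategy) and written directly onto the model diagram, instead of being read off a multi-line CEAC.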

Next steps for practice and research

Research on the communication of cost-effectiveness analysis is sparse, and guidance is lacking. Beyond the general advice to use plain English and avoid jargon, health economists find themselves developing their own approaches and techniques.

In my experience, the key aspects for effective communication are to engage with lay stakeholders from the start of the model development, to explain the intuition behind the model in simplified diagrams, and to find a balance between scientific accuracy and clarity which is appropriate for the audience.

More research and guidance are clearly needed to develop communication methods that are effective and straightforward to use in applied cost-effectiveness analysis. Perhaps this is where patient and public involvement can really make a difference!