Thesis Thursday: Logan Trenaman

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Logan Trenaman who has a PhD from the University of British Columbia. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Economic evaluation of interventions to support shared decision-making: an extension of the valuation framework
Supervisors
Nick Bansback, Stirling Bryan
Repository link
http://hdl.handle.net/2429/66769

What is shared decision-making?

Shared decision-making is a process whereby patients and health care providers work together to make decisions. For most health care decisions, where there is no ‘best’ option, the most appropriate course of action depends on the clinical evidence and the patient’s informed preferences. In effect, shared decision-making is about reducing information asymmetry, by allowing providers to inform patients about the potential benefits and harms of alternative tests or treatments, and patients to express their preferences to their provider. The goal is to reach agreement on the most appropriate decision for that patient.

My thesis focused on individuals with advanced osteoarthritis who were considering whether to undergo total hip or knee replacement, or to use non-surgical treatments such as pain medication, exercise, or mobility aids. Joint replacement alleviates pain and improves mobility for most patients; however, as many as 20-30% of recipients report little improvement in symptoms and/or dissatisfaction with the results. Shared decision-making can help ensure that those considering joint replacement are aware of alternative treatments and have realistic expectations about the potential benefits and harms of each option.

There are different types of interventions available to help support shared decision-making, some of which target the patient (e.g. patient decision aids) and some of which target providers (e.g. skills training). My thesis focused on a randomized controlled trial that evaluated a pre-consultation patient decision aid, which generated a summary report for the surgeon that outlined the patient’s knowledge, values, and preferences.

How can the use of decision aids influence health care costs?

The use of patient decision aids can impact health care costs in several ways. Some patient decision aids, such as the one evaluated in my thesis, are designed for use by patients in preparation for a consultation where a treatment decision is made. Others are designed to be used during the consultation with the provider. There is some evidence that decision aids may increase up-front costs, by lengthening consultations, requiring investment to integrate decision aids into routine care, or requiring clinician training. These interventions may also impact downstream costs by influencing treatment decision-making. For example, the Cochrane review of patient decision aids found that, across 18 studies in major elective surgery, those exposed to decision aids were less likely to choose surgery than those receiving usual care (RR: 0.86, 95% CI: 0.75 to 1.00).
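To see how a lower surgery rate feeds through to downstream costs, here is a stylised back-of-the-envelope calculation in R. Only the risk ratio (0.86) comes from the Cochrane review; the baseline surgery rate and the unit cost are hypothetical numbers chosen purely for illustration.

```r
# Stylised example: downstream cost impact of a lower surgery rate.
# Only the risk ratio (0.86) is from the Cochrane review; the baseline
# surgery rate and unit cost below are hypothetical.
rr           <- 0.86    # risk ratio of choosing surgery (decision aid vs usual care)
p_usual      <- 0.70    # hypothetical surgery rate under usual care
cost_surgery <- 10000   # hypothetical cost per joint replacement

p_aid <- p_usual * rr                            # implied rate with decision aid
saving_per_patient <- (p_usual - p_aid) * cost_surgery
p_aid               # 0.602
saving_per_patient  # 980
```

The point is simply the mechanism: even a modest relative reduction in uptake of a high-cost procedure translates into a meaningful expected saving per patient.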

This was observed in the trial-based economic evaluation that constituted the first chapter of my thesis. This analysis found that decision aids were highly cost-effective, largely because a smaller proportion of patients underwent joint replacement. Of course, this conclusion could change over time. One of the challenges of previous cost-effectiveness analyses (CEAs) of patient decision aids has been a lack of long-term follow-up. Patients who choose not to have surgery in the short term may go on to have surgery later. To look at the longer-term impact of decision aids, the third chapter of my thesis linked trial participants to administrative data with an average of seven years of follow-up. I found that, from a resource use perspective, the conclusion was the same as observed during the trial: fewer patients exposed to decision aids had undergone surgery, resulting in lower costs.

What is it about shared decision-making that patients value?

On the whole, the evidence suggests that patients value being informed, listened to, and offered the opportunity to participate in decision-making (should they wish!). To better understand how much shared decision-making is valued, I performed a systematic review of discrete choice experiments (DCEs) that had valued elements of shared decision-making. This review found that survey respondents (primarily patients) were willing to wait longer, to pay more, and in some cases to accept poorer health outcomes in exchange for greater shared decision-making.
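Such willingness-to-wait (or willingness-to-pay) figures are typically derived as marginal rates of substitution: the ratio of the coefficient on shared decision-making to the coefficient on waiting time (or cost) from the fitted choice model. A minimal sketch, with entirely hypothetical coefficients:

```r
# Marginal rate of substitution from DCE coefficients (hypothetical values).
beta_sdm  <-  0.90   # hypothetical utility gain from full shared decision-making
beta_wait <- -0.15   # hypothetical disutility per extra month of waiting

# Extra months of waiting a respondent would accept in exchange for full
# shared decision-making: the negative of the ratio of the coefficients.
wtw_months <- -beta_sdm / beta_wait
wtw_months  # 6
```

With these illustrative numbers, respondents would accept six additional months of waiting for a fully shared decision-making process.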

It is important to consider preference heterogeneity in this context. The last chapter of my PhD performed a DCE to value shared decision-making in the context of advanced knee osteoarthritis. The DCE included three attributes: waiting time, health outcomes, and shared decision-making. The latent class analysis found four distinct subgroups of patients. Two groups were balanced, and traded between all attributes, while one group had a strong preference for shared decision-making, and another had a strong preference for better health outcomes. One important finding from this analysis was that having a strong preference for shared decision-making was not associated with demographic or clinical characteristics. This highlights the importance of each clinical encounter in determining the appropriate level of shared decision-making for each patient.

Is it meaningful to estimate the cost-per-QALY of shared decision-making interventions?

One of the challenges of my thesis was grappling with the potential conflict between the objectives of CEA using QALYs (maximizing health) and shared decision-making interventions (improved decision-making). Importantly, encouraging shared decision-making may result in patients choosing alternatives that do not maximize QALYs. For example, informed patients may choose to delay or forego elective surgery due to potential risks, despite it providing more QALYs (on average).

In cases where a CEA finds that shared decision-making interventions result in poorer health outcomes at lower cost, I think this is perfectly acceptable (provided patients are making informed choices). However, it becomes more complicated when shared decision-making interventions increase costs, result in poorer health outcomes, but provide other, non-health benefits such as informing patients or involving them in treatment decisions. In such cases, decision-makers need to consider whether it is justified to allocate scarce health care resources to encourage shared decision-making when it requires sacrificing health outcomes elsewhere. The latter part of my thesis tried to inform this trade-off, by valuing the non-health benefits of shared decision-making which would not otherwise be captured in a CEA that uses QALYs.

How should the valuation framework be extended, and is this likely to indicate different decisions?

I extended the valuation framework by attempting to value the non-health benefits of shared decision-making. I followed guidelines from the Canadian Agency for Drugs and Technologies in Health, which state that “the value of non-health effects should be based on being traded off against health” and that societal preferences should be used for this valuation. Requiring non-health benefits to be valued relative to health reflects the opportunity cost of allocating resources toward these outcomes. While these guidelines do not specifically state how to do this, I chose to value shared decision-making relative to life-years using a chained (or two-stage) valuation approach, so that it could be incorporated within the QALY.
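To illustrate one stylised way the chaining can work (the thesis's actual elicitation may differ, and every number below is hypothetical): in a first stage, respondents trade time lived in an anchor health state for the shared decision-making process; in a second stage, that anchor state is valued against life-years (e.g. by time trade-off); multiplying the two puts the process value on the QALY scale.

```r
# Stylised two-stage (chained) valuation - all numbers are hypothetical.

# Stage 1: fraction of time in an anchor health state a respondent would
# give up in exchange for experiencing shared decision-making.
f_time   <- 0.02

# Stage 2: QALY weight of the anchor state on the dead (0) to
# full health (1) scale, e.g. from a time trade-off exercise.
u_anchor <- 0.80

# Chained value: QALYs attributable to the process, per year in the state.
process_value <- f_time * u_anchor
process_value  # 0.016
```

The chaining matters because stage 1 alone only values the process relative to an arbitrary health state; anchoring that state to the QALY scale is what makes the result usable in a CEA.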

Ultimately, I found that the value of the process of shared decision-making was small; however, it may still have an impact on cost-effectiveness. The reasons for this are twofold. First, there are few cases where shared decision-making interventions improve health outcomes. A 2018 sub-analysis of the Cochrane review of patient decision aids found little evidence that they impact health-related quality of life. Second, the up-front cost of implementing shared decision-making interventions may be small. Thus, in cases where shared decision-making interventions require only a small investment but provide no health benefit, the non-health value of shared decision-making may tip the balance of cost-effectiveness. One recent example, from Dr Victoria Brennan, found that incorporating the process utility associated with improved consultation quality, resulting from a new online assessment tool, increased the probability that the intervention was cost-effective from 35% to 60%.

Visualising PROMs data

The patient reported outcome measures (PROMs) data comprise before-and-after health-related quality of life (HRQoL) measures for a large number of patients undergoing four key procedures: hip replacement, knee replacement, varicose vein surgery, and groin hernia surgery. The outcome measures are the EQ-5D index and visual analogue scale (and a disease-specific measure for three of the interventions). The data also identify the provider of each operation. Being publicly available, these data allow us to look at a range of different questions: what’s the average effect of the surgery on HRQoL? What are the differences between providers in HRQoL gains or in patient casemix? Great!

The first thing we should always do with new data is look at them. This might be done in an exploratory way, to determine the questions to ask of the data, or in an analytical way, to get an idea of the relationships between variables. Plotting the data communicates more about what’s going on than any table of statistics alone. However, the plots on the NHS Digital website might be accused of being a little uninspired, as they collapse a lot of the variation into simple charts that conceal much of what’s going on. For example:

So let’s consider other ways of visualising these data. A walk-through of the code for all of these plots is at the end of this post.

Now, I’m not a regular user of PROMs data, so what I think are the interesting features may not reflect what the data are generally used for. For my purposes, the interesting features are:

  • The joint distribution of pre- and post-op scores
  • The marginal distributions of pre- and post-op scores
  • The relationship between pre- and post-op scores over time

We will pool five years’ worth of PROMs data, giving us over 200,000 observations. A scatter plot of this information is useless, as the density of the points would be very high. A useful alternative is hexagonal binning, which is like a two-dimensional histogram. Hexagonal tiles, which usefully tessellate and are more interesting to look at than squares, can be shaded or coloured according to the number of observations in each bin across the support of the joint distribution of pre- and post-op scores (which is [-0.5, 1] × [-0.5, 1]). We can add the marginal distributions to the axes and then add smoothed trend lines for each year. Since the data are constrained between -0.5 and 1, the mean may not be a very good summary statistic, so we’ll plot a smoothed median trend line for each year. Finally, we’ll add a line on the diagonal: patients above this line have improved, and patients below it have deteriorated.

Hip replacement results

There’s a lot going on in the graph, but I think it reveals a number of key points about the data that we wouldn’t have seen from the standard plots on the website:

  • There appear to be four clusters of patients:
    • Those who were in close to full health prior to the operation and were in ‘perfect’ health (score = 1) after;
    • Those who were in close to full health pre-op and who didn’t really improve post-op;
    • Those who were in poor health (score close to zero) and made a full recovery;
    • Those who were in poor health and who made a partial recovery.
  • The median change is an improvement in health.
  • The median change improves modestly from year to year for a given pre-op score.
  • There are ceiling effects for the EQ-5D.

None of this is news to those who study these data. But this way of presenting the data certainly tells more of a story than the current plots on the website.

R code

We’re going to consider hip replacement, but the code is easily modified for the other procedures. First, we will take the pre- and post-op scores and their difference, and pool them into one data frame.

# Helper: load one year's file, standardise the provider, pre- and post-op
# columns, and compute the change score. The older files (10/11 and 11/12)
# use different column names and need their first column renaming to
# Provider.Code.
load_year <- function(file, pre_col, post_col, year, rename_first = FALSE) {
  df <- read.csv(file)
  if (rename_first) names(df)[1] <- "Provider.Code"
  df <- df[!is.na(df[[pre_col]]), ]   # drop rows with a missing pre-op score
  df$pre  <- df[[pre_col]]
  df$post <- df[[post_col]]
  df$diff <- df$post - df$pre
  df <- df[, c("Provider.Code", "pre", "post", "diff")]
  df$year <- year
  df
}

# Load each year and combine into one data frame
df <- rbind(
  load_year("C:/docs/proms/Record Level Hip Replacement 1415.csv",
            "Pre.Op.Q.EQ5D.Index", "Post.Op.Q.EQ5D.Index", "2014/15"),
  load_year("C:/docs/proms/Record Level Hip Replacement 1314.csv",
            "Pre.Op.Q.EQ5D.Index", "Post.Op.Q.EQ5D.Index", "2013/14"),
  load_year("C:/docs/proms/Record Level Hip Replacement 1213.csv",
            "Pre.Op.Q.EQ5D.Index", "Post.Op.Q.EQ5D.Index", "2012/13"),
  load_year("C:/docs/proms/Hip Replacement 1112.csv",
            "Q1_EQ5D_INDEX", "Q2_EQ5D_INDEX", "2011/12", rename_first = TRUE),
  load_year("C:/docs/proms/Record Level Hip Replacement 1011.csv",
            "Q1_EQ5D_INDEX", "Q2_EQ5D_INDEX", "2010/11", rename_first = TRUE)
)

write.csv(df, "C:/docs/proms/eq5d.csv")

Now, for the plot. We will need the packages ggplot2, ggExtra, and extrafont. The last of these is just to change the plot fonts: not essential, but aesthetically pleasing.

require(ggplot2)
require(ggExtra)
require(extrafont)
font_import()               # only needed once per machine; this can be slow
loadfonts(device = "win")

p <- ggplot(data = df, aes(x = pre, y = post)) +
  # two-dimensional histogram: hexagonal bins shaded by count
  stat_bin_hex(bins = 15, color = "white", alpha = 0.8) +
  # 45-degree line: points above improved, points below deteriorated
  geom_abline(intercept = 0, slope = 1, color = "black") +
  # smoothed median of post-op score conditional on pre-op score, by year
  geom_quantile(aes(color = year), method = "rqss", lambda = 2,
                quantiles = 0.5, size = 1) +
  scale_fill_gradient2(name = "Count (000s)", low = "light grey",
                       midpoint = 15000, mid = "blue", high = "red",
                       breaks = c(5000, 10000, 15000, 20000),
                       labels = c(5, 10, 15, 20)) +
  theme_bw() +
  labs(x = "Pre-op EQ-5D index score", y = "Post-op EQ-5D index score") +
  scale_color_discrete(name = "Year") +
  theme(legend.position = "bottom",
        text = element_text(family = "Gill Sans MT"))

# add the marginal distributions of pre- and post-op scores to the axes
ggMarginal(p, type = "histogram")