Using Discrete Choice Experiments in Health Economics Course

This popular course, offered by the Health Economics Research Unit (HERU) at the University of Aberdeen, Scotland, covers theoretical and practical issues in the use of discrete choice experiments (DCEs) in health economics. The course takes place annually and in 2018 was fully booked.

The course provides:

  • An introduction to the theoretical basis for the development and application of DCEs in health economics.
  • A step-by-step guide to the design of DCEs, questionnaire development, data input, data analysis, and interpretation of results.
  • An update on methodological issues raised in the application of DCEs in health economics.

Thesis Thursday: David Mott

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr David Mott who has a PhD from Newcastle University. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
How do preferences for public health interventions differ? A case study using a weight loss maintenance intervention
Supervisors
Luke Vale, Laura Ternent
Repository link
http://hdl.handle.net/10443/4197

Why is it important to understand variation in people’s preferences?

It’s not all that surprising that people’s preferences for health care interventions vary, but we don’t have a great understanding of what might drive these differences. Increasingly, preference information is being used to support regulatory decisions and, to a lesser but increasing extent, health technology assessments. It could be the case that certain subgroups of individuals would not accept the risks associated with a particular health care intervention, whereas others would. Therefore, identifying differences in preferences is important. However, it’s also useful to try to understand why this heterogeneity might occur in the first place.

The debate on whose preferences to elicit for health state valuation has traditionally focused on those with experience (e.g. patients) and those without (e.g. the general population). This dichotomy is problematic, though: it has been shown that health state utilities systematically differ between these two groups, presumably due to the difference in relative experience. My project aimed to explore whether experience also affects people's preferences for health care interventions.

How did you identify different groups of people, whose preferences might differ?

The initial plan for the project was to elicit preferences for a health care intervention from general population and patient samples. However, after reviewing the literature, it seemed highly unlikely that anyone would advocate for preferences for treatments to be elicited from general population samples. It has long been suggested that discrete choice experiments (DCEs) could be used to incorporate patient preferences into decision-making, and it turned out that patients were the focus of the majority of the DCE studies that I reviewed. Given this, I took a more granular approach in my empirical work.

We recruited a very experienced group of 'service users' from a randomised controlled trial (RCT). In this case, it was a novel weight loss maintenance (WLM) intervention aimed at helping obese adults who had lost at least 5% of their overall weight to maintain their weight loss. We also recruited an additional three groups from an online panel. The first group were 'potential service users': those who met the trial criteria but could not have experienced the intervention. The second group were 'potential beneficiaries': those who were obese or overweight but did not meet the trial criteria. The final group were 'non-users': those with a normal BMI.

What can your study tell us about preferences in the context of a weight loss maintenance intervention?

The empirical part of my study involved a DCE and an open-ended contingent valuation (CV) task. The DCE was focused on the delivery of the trial intervention, which was a technology-assisted behavioural intervention. It had a number of different components but, briefly, it involved participants weighing themselves regularly on a set of ‘smart scales’, which enabled the trial team to access and monitor the data. Participants received text messages from the trial team with feedback, reminders to weigh themselves (if necessary), and links to online tools and content to support the maintenance of their weight loss.

The DCE results suggested that preferences for the various components of the intervention varied significantly between individuals and between the different groups, and not all of the components were important to respondents. In contrast, the efficacy and cost attributes were important across the board. The CV results suggested that a substantial proportion of individuals would be willing to pay for an effective intervention (i.e. one that avoided weight regain), with very few respondents expressing a willingness to pay for an intervention that led to more than 10-20% weight regain.
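To make the link between DCE estimates and willingness to pay concrete: under a standard linear-in-attributes utility specification, the marginal WTP for an attribute is the negative of its coefficient divided by the cost coefficient. Here is a minimal Python sketch; the attribute names and coefficient values are entirely hypothetical, not the thesis estimates.

```python
# Marginal WTP from conditional logit coefficients (illustrative only).
# Under a linear utility specification, WTP_k = -(beta_k / beta_cost).
# All attribute names and coefficient values below are hypothetical.

coefficients = {
    "cost": -0.04,           # utility per GBP of programme cost
    "weight_regain": -0.15,  # utility per percentage point regained
    "text_feedback": 0.30,   # utility of receiving text-message feedback
}

beta_cost = coefficients["cost"]
for attribute, beta in coefficients.items():
    if attribute == "cost":
        continue
    wtp = -beta / beta_cost  # GBP per unit of the attribute
    print(f"Marginal WTP for {attribute}: {wtp:+.2f} GBP")
```

A negative value indicates that respondents would need to be compensated to accept more of that attribute (e.g. additional weight regain).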

Do alternative methods for preference elicitation provide a consistent picture of variation in preferences?

Existing evidence suggests that willingness to pay (WTP) estimates from CV tasks might differ from those derived from DCE data, but there aren't many empirical studies on this in health. Comparisons were planned in my study, but the approach taken in the end was suboptimal and ultimately inconclusive. The original plan was to obtain WTP estimates for an entire WLM intervention using the DCE and to compare these with the estimates from the CV task. Due to data limitations, it wasn't possible to make this comparison. However, the CV task was a bit unusual in that we asked for respondents' WTP at various different efficacy levels. So the comparison made instead was between average WTP values for a percentage point of weight regain. The differences were not statistically significant.
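For intuition about the kind of comparison described here, per-respondent WTP values per percentage point of weight regain from the two methods can be compared with a simple two-sample test. A minimal sketch with simulated, purely hypothetical data:

```python
# Comparing mean WTP per percentage point of weight regain derived from
# the DCE versus the CV task, using a Welch two-sample t-test.
# The data are simulated placeholders, not the thesis data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wtp_dce = rng.normal(loc=3.2, scale=1.5, size=120)  # GBP per point (DCE)
wtp_cv = rng.normal(loc=3.0, scale=1.8, size=115)   # GBP per point (CV)

t_stat, p_value = stats.ttest_ind(wtp_dce, wtp_cv, equal_var=False)
print(f"DCE mean: {wtp_dce.mean():.2f} GBP, CV mean: {wtp_cv.mean():.2f} GBP")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
```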

Are some people’s preferences ‘better defined’ than others’?

We hypothesised that those with experience of the trial intervention would have 'better defined' preferences. To explore this, we compared data quality across the different user groups. Even from a quick glance at the DCE results, it is clear that the data were much better for the most experienced group; the coefficients were larger, and a much higher proportion of them was statistically significant. More interestingly, we found that the most experienced group were 23% more likely to have passed all of the rationality tests embedded in the DCE. Therefore, if you accept that better quality data are an indicator of 'better defined' preferences, then the data do seem reasonably supportive of the hypothesis. That being said, there were no significant differences between the other three groups, raising the question: was it the difference in experience, or some other difference between RCT participants and online panel respondents?
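The pass-rate comparison can be illustrated with a standard test of proportions; the counts below are hypothetical placeholders, not the thesis data.

```python
# Does the proportion passing all embedded rationality tests differ
# between the trial (most experienced) group and an online panel group?
# Counts are hypothetical placeholders.
from scipy.stats import chi2_contingency

#              passed  failed
trial_group = [88, 12]
panel_group = [65, 35]

chi2, p_value, dof, expected = chi2_contingency([trial_group, panel_group])
print(f"chi-squared = {chi2:.2f} (dof = {dof}), p = {p_value:.4f}")
```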

What does your research imply for the use of preferences in resource allocation decisions?

While there are still many unanswered questions, and there is always a need for further research, the results from my PhD project suggest that preferences for health care interventions can differ significantly between respondents with differing levels of experience. Had my project focused on a more clinical intervention, one that is harder for an average person to imagine experiencing, I would expect the differences to have been much larger. I'd love to see more research in this area in future, especially in the context of benefit-risk trade-offs.

The key message is that the level of experience of the participants matters. It is quite reasonable to believe that a preference study focusing on a particular subgroup of patients will not generalise to the broader patient population. As preference data, typically elicited from patients, are increasingly used in decision-making (which is great), it becomes ever more important for researchers to make sure that their respondent samples are appropriate to support the decisions being made.

Thesis Thursday: Logan Trenaman

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Logan Trenaman who has a PhD from the University of British Columbia. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Economic evaluation of interventions to support shared decision-making: an extension of the valuation framework
Supervisors
Nick Bansback, Stirling Bryan
Repository link
http://hdl.handle.net/2429/66769

What is shared decision-making?

Shared decision-making is a process whereby patients and health care providers work together to make decisions. For most health care decisions, where there is no ‘best’ option, the most appropriate course of action depends on the clinical evidence and the patient’s informed preferences. In effect, shared decision-making is about reducing information asymmetry, by allowing providers to inform patients about the potential benefits and harms of alternative tests or treatments, and patients to express their preferences to their provider. The goal is to reach agreement on the most appropriate decision for that patient.

My thesis focused on individuals with advanced osteoarthritis who were considering whether to undergo total hip or knee replacement, or to use non-surgical treatments such as pain medication, exercise, or mobility aids. Joint replacement alleviates pain and improves mobility for most patients; however, as many as 20-30% of recipients have reported insignificant improvement in symptoms and/or dissatisfaction with the results. Shared decision-making can help ensure that those considering joint replacement are aware of alternative treatments and have realistic expectations about the potential benefits and harms of each option.

There are different types of interventions available to help support shared decision-making, some of which target the patient (e.g. patient decision aids) and some of which target providers (e.g. skills training). My thesis focused on a randomized controlled trial that evaluated a pre-consultation patient decision aid, which generated a summary report for the surgeon that outlined the patient’s knowledge, values, and preferences.

How can the use of decision aids influence health care costs?

The use of patient decision aids can impact health care costs in several ways. Some patient decision aids, such as the one evaluated in my thesis, are designed for use by patients in preparation for a consultation where a treatment decision is made. Others are designed to be used during the consultation with the provider. There is some evidence that decision aids may increase up-front costs by lengthening consultations, requiring investment to integrate decision aids into routine care, or requiring clinician training. These interventions may also impact downstream costs by influencing treatment decision-making. For example, the Cochrane review of patient decision aids found that, across 18 studies in major elective surgery, those exposed to decision aids were less likely to choose surgery compared to those in usual care (RR: 0.86, 95% CI: 0.75 to 1.00).

This was observed in the trial-based economic evaluation that constituted the first chapter of my thesis. This analysis found that decision aids were highly cost-effective, largely due to a smaller proportion of patients undergoing joint replacement. Of course, this conclusion could change over time. One of the challenges of previous cost-effectiveness analyses (CEAs) of patient decision aids has been a lack of long-term follow-up. Patients who choose not to have surgery in the short term may go on to have surgery later. To look at the longer-term impact of decision aids, the third chapter of my thesis linked trial participants to administrative data with an average of 7 years of follow-up. I found that, from a resource use perspective, the conclusion was the same as observed during the trial: fewer patients exposed to decision aids had undergone surgery, resulting in lower costs.
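For readers less familiar with trial-based CEAs, the arms are compared on incremental costs and incremental QALYs; when one arm is cheaper and at least as effective, it dominates. A toy illustration with entirely hypothetical numbers, not the trial's results:

```python
# Toy trial-based cost-effectiveness comparison (hypothetical numbers).
mean_cost_aid, mean_cost_usual = 9_500.0, 11_200.0  # mean cost per patient
mean_qaly_aid, mean_qaly_usual = 0.71, 0.70         # mean QALYs per patient

delta_cost = mean_cost_aid - mean_cost_usual
delta_qaly = mean_qaly_aid - mean_qaly_usual

if delta_cost < 0 and delta_qaly >= 0:
    # Cheaper and at least as effective: the decision aid dominates.
    print("Decision aid dominates usual care")
else:
    print(f"ICER = {delta_cost / delta_qaly:,.0f} per QALY gained")
```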

What is it about shared decision-making that patients value?

On the whole, the evidence suggests that patients value being informed, listened to, and offered the opportunity to participate in decision-making (should they wish!). To better understand how much shared decision-making is valued, I performed a systematic review of discrete choice experiments (DCEs) that had valued elements of shared decision-making. This review found that survey respondents (primarily patients) were willing to wait longer, to pay, and, in some cases, to accept poorer health outcomes in exchange for greater shared decision-making.

It is important to consider preference heterogeneity in this context. The last chapter of my PhD used a DCE to value shared decision-making in the context of advanced knee osteoarthritis. The DCE included three attributes: waiting time, health outcomes, and shared decision-making. The latent class analysis found four distinct subgroups of patients. Two groups were balanced and traded across all attributes, while one group had a strong preference for shared decision-making, and another had a strong preference for better health outcomes. One important finding from this analysis was that having a strong preference for shared decision-making was not associated with demographic or clinical characteristics. This highlights the importance of each clinical encounter in determining the appropriate level of shared decision-making for each patient.
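For readers unfamiliar with the method: a latent class logit assumes each respondent belongs to one of C unobserved classes, each with its own preference weights. A stylised form of the choice likelihood (the standard model, not necessarily the exact thesis specification) is:

```latex
% Latent class logit (stylised). Respondent i belongs to class c with
% probability \pi_c; within class c, each of T choices follows a
% conditional logit with class-specific coefficients \beta_c.
P(y_i) = \sum_{c=1}^{C} \pi_c \prod_{t=1}^{T}
  \frac{\exp\left(\beta_c^{\top} x_{it,y_{it}}\right)}
       {\sum_{j \in J_t} \exp\left(\beta_c^{\top} x_{itj}\right)}
```

The class shares and class-specific coefficients are estimated jointly, and respondents' posterior class probabilities can then be related to observed characteristics, which is how an association (or, here, its absence) with demographics is assessed.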

Is it meaningful to estimate the cost-per-QALY of shared decision-making interventions?

One of the challenges of my thesis was grappling with the potential conflict between the objectives of CEA using QALYs (maximizing health) and shared decision-making interventions (improved decision-making). Importantly, encouraging shared decision-making may result in patients choosing alternatives that do not maximize QALYs. For example, informed patients may choose to delay or forego elective surgery due to potential risks, despite it providing more QALYs (on average).

In cases where a CEA finds that shared decision-making interventions result in poorer health outcomes at lower cost, I think this is perfectly acceptable (provided patients are making informed choices). However, it becomes more complicated when shared decision-making interventions increase costs, result in poorer health outcomes, but provide other, non-health benefits such as informing patients or involving them in treatment decisions. In such cases, decision-makers need to consider whether it is justified to allocate scarce health care resources to encourage shared decision-making when it requires sacrificing health outcomes elsewhere. The latter part of my thesis tried to inform this trade-off, by valuing the non-health benefits of shared decision-making which would not otherwise be captured in a CEA that uses QALYs.

How should the valuation framework be extended, and is this likely to indicate different decisions?

I extended the valuation framework by attempting to value the non-health benefits of shared decision-making. I followed guidelines from the Canadian Agency for Drugs and Technologies in Health, which state that “the value of non-health effects should be based on being traded off against health” and that societal preferences should be used for this valuation. Requiring non-health benefits to be valued relative to health reflects the opportunity cost of allocating resources toward these outcomes. While these guidelines do not specifically state how to do this, I chose to value shared decision-making relative to life-years using a chained (or two-stage) valuation approach, so that its value could be incorporated within the QALY.
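A stylised version of the chained approach, sketched from the general two-stage technique rather than the exact thesis protocol: first value the target (care with shared decision-making) on a scale anchored by an intermediate health state and full health, then value that anchor state on the conventional QALY scale, and map the first value through the second:

```latex
% Two-stage (chained) valuation, stylised sketch.
% Stage 1: value the target on a scale where anchor state h = 0 and
%   full health = 1, giving v.
% Stage 2: value anchor state h on the conventional QALY scale
%   (dead = 0, full health = 1), e.g. via time trade-off, giving u(h).
% Chaining maps the stage-1 value onto the QALY scale:
u = u(h) + v \left(1 - u(h)\right)
```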

Ultimately, I found that the value of the process of shared decision-making was small; however, it may still have an impact on cost-effectiveness. The reasons for this are twofold. First, there are few cases where shared decision-making interventions improve health outcomes. A 2018 sub-analysis of the Cochrane review of patient decision aids found little evidence that they impact health-related quality of life. Second, the up-front cost of implementing shared decision-making interventions may be small. Thus, in cases where shared decision-making interventions require a small investment but provide no health benefit, the non-health value of shared decision-making may impact cost-effectiveness. One recent example, from Dr Victoria Brennan, found that incorporating the process utility associated with improved consultation quality, resulting from a new online assessment tool, increased the probability that the intervention was cost-effective from 35% to 60%.