Thesis Thursday: David Mott

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr David Mott, who has a PhD from Newcastle University. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
How do preferences for public health interventions differ? A case study using a weight loss maintenance intervention
Supervisors
Luke Vale, Laura Ternent
Repository link
http://hdl.handle.net/10443/4197

Why is it important to understand variation in people’s preferences?

It’s not all that surprising that people’s preferences for health care interventions vary, but we don’t have a great understanding of what might drive these differences. Increasingly, preference information is being used to support regulatory decisions and, to a lesser but increasing extent, health technology assessments. It could be the case that certain subgroups of individuals would not accept the risks associated with a particular health care intervention, whereas others would. Therefore, identifying differences in preferences is important. However, it’s also useful to try to understand why this heterogeneity might occur in the first place.

The debate on whose preferences to elicit for health state valuation has traditionally focused on those with experience (e.g. patients) and those without (e.g. the general population). This dichotomy is problematic, though: it has been shown that health state utilities systematically differ between the two groups, presumably due to the difference in relative experience. My project aimed to explore whether experience also affects people’s preferences for health care interventions.

How did you identify different groups of people whose preferences might differ?

The initial plan for the project was to elicit preferences for a health care intervention from general population and patient samples. However, after reviewing the literature, it seemed highly unlikely that anyone would advocate for preferences for treatments to be elicited from general population samples. It has long been suggested that discrete choice experiments (DCEs) could be used to incorporate patient preferences into decision-making, and it turned out that patients were the focus of the majority of the DCE studies that I reviewed. Given this, I took a more granular approach in my empirical work.

We recruited a very experienced group of ‘service users’ from a randomised controlled trial (RCT). In this case, it was a novel weight loss maintenance intervention aimed at helping obese adults who had lost at least 5% of their overall weight to maintain their weight loss. We also recruited an additional three groups from an online panel. The first group were ‘potential service users’ – those who met the trial criteria but could not have experienced the intervention. The second group were ‘potential beneficiaries’ – those who were obese or overweight but did not meet the trial criteria. The final group were ‘non-users’ – those with a normal BMI.

What can your study tell us about preferences in the context of a weight loss maintenance intervention?

The empirical part of my study involved a DCE and an open-ended contingent valuation (CV) task. The DCE was focused on the delivery of the trial intervention, which was a technology-assisted behavioural intervention. It had a number of different components but, briefly, it involved participants weighing themselves regularly on a set of ‘smart scales’, which enabled the trial team to access and monitor the data. Participants received text messages from the trial team with feedback, reminders to weigh themselves (if necessary), and links to online tools and content to support the maintenance of their weight loss.

The DCE results suggested that preferences for the various components of the intervention varied significantly between individuals and between the different groups – and not all of the components were considered important. In contrast, the efficacy and cost attributes were important across the board. The CV results suggested that a substantial proportion of individuals would be willing to pay for an effective intervention (i.e. one that avoided weight regain), with very few respondents expressing a willingness to pay for an intervention that led to more than 10-20% weight regain.

Do alternative methods for preference elicitation provide a consistent picture of variation in preferences?

Existing evidence suggests that willingness to pay (WTP) estimates from CV tasks might differ from those derived from DCE data, but there aren’t many empirical studies on this in health. Comparisons were planned in my study, but the approach taken in the end was suboptimal and ultimately inconclusive. The original plan was to obtain WTP estimates for an entire weight loss maintenance (WLM) intervention using the DCE and to compare these with the estimates from the CV task. Due to data limitations, it wasn’t possible to make this comparison. However, the CV task was a bit unusual in that we asked for respondents’ WTP at several different efficacy levels. So the comparison made instead was between average WTP values for a percentage point of weight regain. The differences were not statistically significant.
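To illustrate how such a comparison can be constructed – with invented numbers, not the thesis’s estimates – here is a minimal sketch. In a conditional logit fitted to DCE data, marginal WTP is the (negative) ratio of an attribute’s coefficient to the cost coefficient, which can then be set against the slope of WTP over efficacy levels from the CV task:

```python
# Illustrative sketch of a DCE-vs-CV comparison of willingness to pay
# (WTP) per percentage point of weight regain. All numbers are invented.

# Hypothetical conditional logit coefficients from a DCE:
beta_regain = -0.08   # disutility per percentage point of weight regain
beta_cost = -0.01     # disutility per £1 of cost

# WTP for a one-unit increase in an attribute is -(beta_attr / beta_cost),
# so WTP to *avoid* one point of regain is beta_regain / beta_cost
# (positive here, since both coefficients are negative).
wtp_dce = beta_regain / beta_cost
print(f"DCE-based WTP to avoid 1 point of regain: £{wtp_dce:.2f}")

# A CV task eliciting WTP at several efficacy levels yields, for each
# pair of levels, a slope of WTP against regain; averaging the slopes
# gives a comparable per-point figure.
cv_wtp = {0: 120.0, 10: 60.0, 20: 10.0}  # hypothetical mean WTP by % regain
levels = sorted(cv_wtp)
slopes = [(cv_wtp[a] - cv_wtp[b]) / (b - a) for a, b in zip(levels, levels[1:])]
wtp_cv = sum(slopes) / len(slopes)
print(f"CV-based WTP to avoid 1 point of regain: £{wtp_cv:.2f}")
```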

Are some people’s preferences ‘better defined’ than others’?

We hypothesised that those with experience of the trial intervention would have ‘better defined’ preferences. To explore this, we compared data quality across the different user groups. From a quick glance at the DCE results, it is pretty clear that the data were much better for the most experienced group; the coefficients were larger, and a much higher proportion was statistically significant. More interestingly, we found that the most experienced group were 23% more likely to have passed all of the rationality tests that were embedded in the DCE. Therefore, if you accept that better-quality data are an indicator of ‘better defined’ preferences, then the data do seem reasonably supportive of the hypothesis. That being said, there were no significant differences between the other three groups, raising the question: was it the difference in experience, or some other difference between RCT participants and online panel respondents?
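For readers unfamiliar with such tests: one common rationality check embedded in DCEs is a dominance task, in which one alternative is at least as good on every attribute. A minimal sketch with invented attribute codings (the thesis’s actual tests may have differed):

```python
# Sketch of a dominance test, a common rationality check in DCEs.
# Attributes are coded so that higher values are better for the
# respondent; these codings are invented for illustration.

def dominates(a, b):
    """True if alternative a is at least as good as b on every
    attribute and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# (efficacy, convenience, -cost) for two hypothetical alternatives:
alt_a = (0.9, 3, -10)   # more effective, more convenient, cheaper
alt_b = (0.7, 2, -20)

# A respondent who picks the dominated alternative fails the check;
# repeated failures suggest inattentive or poorly defined preferences.
chosen = alt_b
passed = not dominates(alt_a, chosen)
print(passed)  # False: alt_b is dominated by alt_a
```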

What does your research imply for the use of preferences in resource allocation decisions?

While there are still many unanswered questions, and there is always a need for further research, the results from my PhD project suggest that preferences for health care interventions can differ significantly between respondents with differing levels of experience. Had my project been applied to a more clinical intervention that is harder for an average person to imagine experiencing, I would expect the differences to have been much larger. I’d love to see more research in this area in future, especially in the context of benefit-risk trade-offs.

The key message is that the level of experience of the participants matters. It is quite reasonable to believe that a preference study focusing on a particular subgroup of patients will not be generalisable to the broader patient population. As preference data, typically elicited from patients, are increasingly being used in decision-making – which is great – it is ever more important for researchers to make sure that their respondent samples are appropriate to support the decisions being made.

Thesis Thursday: Alastair Irvine

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Alastair Irvine, who has a PhD from the University of Aberdeen. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Time preferences and the patient-doctor interaction
Supervisors
Marjon van der Pol, Euan Phimister
Repository link
http://digitool.abdn.ac.uk/webclient/DeliveryManager?pid=238373

How can people’s time preferences affect the way they use health care?

Time preferences are a way of thinking about how people choose between things that happen over time. Some people prefer a treatment with large side effects and a long chain of future benefits; others prefer smaller benefits but fewer side effects. Time preferences influence a wide range of health outcomes and decisions. One of the most interesting questions I had coming into the PhD was around non-adherence.

Non-adherence can’t be captured by ‘standard’ exponential time preferences, because under exponential discounting there is no way for something you prefer now to become ‘less preferred’ in the future if everything else is held constant. Instead, present-biased preferences can capture non-adherent behaviour. With these preferences, people place a higher weight on the ‘current period’ relative to all future periods, but weight all future periods consistently. What that means is you can have a situation where you plan to do something – eat healthily, take your medication – but end up not doing it. When planning, you placed less relative weight on the near-term ‘cost’ (like medication side effects) than you do when the decision arrives.
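A common way to formalise this is quasi-hyperbolic (‘beta-delta’) discounting, in which a factor β < 1 down-weights everything beyond the current period. Here is a minimal sketch with invented parameter values (not numbers from the thesis) showing how a plan to adhere can reverse when the moment arrives:

```python
# Minimal sketch of present bias under quasi-hyperbolic (beta-delta)
# discounting; all parameter values are invented for illustration.
# Value of a utility stream evaluated in the current period:
#   V = x_0 + beta * sum_{t>=1} delta**t * x_t

BETA, DELTA = 0.6, 0.95   # present bias and per-period discount factor

def value(stream, beta=BETA, delta=DELTA):
    """Discounted value of a stream of per-period utilities,
    where stream[0] is the current period."""
    return stream[0] + beta * sum(
        delta ** t * x for t, x in enumerate(stream[1:], start=1)
    )

# Plan made today: take medication tomorrow (side-effect cost -6),
# then receive a health benefit (+10) the period after.
print(value([0, -6, 10]) > value([0, 0, 0]))   # True: today, adhering looks best

# Tomorrow arrives: the same side-effect cost is now in the current
# period, so it is no longer shrunk by beta.
print(value([-6, 10]) > value([0, 0]))         # False: when the time comes, you skip
```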

In what way might the patient-doctor interaction affect a patient’s adherence to treatment?

There’s asymmetric information between doctors and patients, leading to an agency relationship. Doctors in general know more about treatment options than patients do, but don’t know their patients’ preferences. So if doctors are making recommendations to patients, this asymmetry can lead to recommendations that are accepted by the patient but not adhered to. For example, present-biased patients accept the same treatments as exponential discounters but, depending on the treatment parameters, will fail to adhere to some of them. If the doctor doesn’t anticipate this when making recommendations, the result is non-adherence.

One of the issues from a contracting perspective is that naive present-biased people don’t anticipate their own non-adherence, so we can’t write traditional ‘separating contracts’ that lead present-biased people to one treatment and exponential discounters to another. However, if the doctor can offer a lower level of treatment to all patients – one that has fewer side effects and a concomitantly lower benefit – then everyone sticks to that treatment. This clearly comes at the expense of the exponential discounters’ health but, if the proportion of present-biased patients is high enough, it can be an efficient outcome.
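To illustrate that logic with invented numbers (not parameters from the thesis): suppose the intense treatment delivers more health, but only if completed, while a milder treatment is completed by everyone. Which blanket recommendation maximises expected health then depends on the share of present-biased patients:

```python
# Back-of-envelope sketch of the pooling argument; all numbers invented.
H_INTENSE = 10.0   # health gain if the intense treatment is completed
H_MILD = 7.0       # health gain from the milder treatment (everyone completes it)
H_DROPOUT = 0.0    # health gain for a present-biased patient who quits

def expected_health(share_pb, recommend_intense):
    """Average health gain when a fraction share_pb of patients is
    (naively) present-biased."""
    if recommend_intense:
        # Exponential discounters complete it; present-biased patients quit.
        return (1 - share_pb) * H_INTENSE + share_pb * H_DROPOUT
    return H_MILD

for p in (0.1, 0.5):
    best = "milder" if expected_health(p, False) > expected_health(p, True) else "intense"
    print(f"{p:.0%} present-biased -> recommend the {best} treatment")
# 10% present-biased -> recommend the intense treatment
# 50% present-biased -> recommend the milder treatment
```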

Were you able to compare the time preferences of patients and of doctors?

Not this time! The ‘grand plan’ at the start of the PhD had been to compare matched doctor and patient time preferences and then link them to treatment choices, but that was far too ambitious for the time available. There had also been very little work establishing how time preferences operate in the patient-doctor interaction, so I felt we had a lot to do.

One interesting question we did ask was whether doctors’ time preferences for themselves were the same as for their patients. A lot of the existing evidence asks doctors for their own time preferences, but surely the important time preference is the one they apply to their patients?

We found that while there was little difference between these professional and private time preferences, a lot of the responses displayed increasing impatience. This means that as the start of treatment was pushed further into the future, doctors increasingly preferred shorter-but-sooner benefits, both for themselves and for their patients. For example, a respondent might choose the longer-lasting benefit when treatment starts next month, but switch to the shorter-but-sooner one when both options are pushed a year into the future. We’re still thinking about whether this reflects the fact that, in the real world (outside the survey), doctors already account for the time patients have spent with symptoms when assessing how quickly a treatment benefit should arrive.

How could doctors alter their practice to reduce non-adherence?

We really only have two options – make ‘the right thing’ easier or make the ‘wrong thing’ more costly. The implication of present bias is that you need to use less intense treatments, because the problem is the (relative) over-weighting of the side effects. The key thing we need for that is good information on adherence.

We could pay people to adhere to treatment. However, my gut feeling is that payments are hard to implement on the patient side without being coercive (e.g. making non-adherence costly with charges) or expensive for the implementer when identifying completion is tricky (e.g. giving bonuses to doctors based on patient health outcomes). So doctors can reduce non-adherence by anticipating it and offering less ‘painful’ treatments.

It’s important to say I was only looking at one kind of non-adherence. If patients have bad experiences then whatever we do shouldn’t keep them taking a treatment they don’t want. However, the fact that stopping treatment is always an option for the patient makes non-adherence hard to address because as an economist you would like to separate different reasons for stopping. This is a difficulty for analysing non-adherence as a problem of temptation. In temptation preferences we would like to change the outcome set so that ‘no treatment’ is not a tempting choice, but there are real ethical and practical difficulties with that.

To what extent did the evidence generated by your research support theoretical predictions?

I designed a lab experiment that put students in the role of the doctor, facing patients who may or may not be present-biased. The participants had to recommend treatments to a series of hypothetical patients, and the experiment was set up so that adapting to non-adherence by offering less intense treatments was the best strategy. Participants got feedback on their previous patients, allowing them to learn over the rounds which treatments patients stuck to.

We paid one arm a salary and the other a ‘performance payment’. The latter only got paid when patients stuck to treatment, so pay was correlated with patient outcomes. In both arms, patients’ outcomes were reflected in a charity donation.

The main result is that there was a lot of adaptation to non-adherence in both arms. The adaptation was stronger under the performance payment; because that payment perfectly aligns patient and doctor incentives, it reflects the upper limit of the adaptation we can expect.

In the experimental setting, even when there is no direct financial benefit of doing so, participants adapted to non-adherence in the way I predicted.

Chris Sampson’s journal round-up for 31st December 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Perspectives of patients with cancer on the quality-adjusted life year as a measure of value in healthcare. Value in Health Published 29th December 2018

Patients should have the opportunity to understand how decisions are made about which treatments they are and are not allowed to use, given their coverage. This study reports on a survey of cancer patients and survivors, with the aim of identifying patients’ awareness, understanding, and opinions about the QALY as a measure of value.

Participants were recruited from a (presumably US-based) patient advocacy group, and 774 people – mostly well-educated, mostly white, mostly women – responded. The online survey asked about cancer status and included a couple of measures of health literacy. Fewer than 7% of participants had ever heard of the QALY, with awareness more likely among those with greater health literacy. The survey explained the QALY to the participants and then asked whether the concept of the QALY makes sense. Around half said it did, and 24% thought that it was a good way to measure value in health care. The researchers report a variety of ‘significant’ differences in tendencies to understand or support the use of QALYs, but I’m not convinced that they’re meaningful, because the differences aren’t big and the subgroup samples are relatively small.

At the end of the survey, respondents were asked to provide opinions on QALYs and value in health care. 165 people provided responses and these were coded and analysed qualitatively. The researchers identified three themes from this one free-text question: i) measuring value, ii) opinions on QALY, and iii) value in health care and decision making. I’m not sure that they’re meaningful themes that help us to understand patients’ views on QALYs. A significant proportion of respondents rejected the idea of using numbers to quantify value in health care. On the other hand, some suggested that the QALY could be a useful decision aid for patients. There was opposition to ‘external decision makers’ having any involvement in health care decision making. Unless you’re paying for all of your care out of pocket, that’s tough luck. But the most obvious finding from the qualitative analysis is that respondents didn’t understand what QALYs were for. That’s partly because health economists in general need to be better at communicating concepts like the QALY. But I think it’s also in large part because the authors failed to provide a clear explanation. They didn’t even use my lovely Wikipedia graphic. Many of the points made by respondents are entirely irrelevant to the appropriateness of QALYs as they’re used (or in the case of the US, aren’t yet used) in practice. For example, several discussed the use of QALYs in clinical decision making. Patients think that they should maintain autonomy, which is fair enough but has nothing to do with how QALYs are used to assess health technologies.

QALYs are built on the idea of trade-offs. They measure the trade-off between life extension and life improvement. They are used to guide trade-offs between different treatments for different people. But the researchers didn’t explain how or why QALYs are used to make trade-offs, so the elicited views aren’t well-informed.
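To make the two trade-offs concrete, here is a toy calculation (all numbers invented):

```python
# Toy QALY arithmetic (all numbers invented). QALYs weight life-years
# by health-state utility, where 1 = full health and 0 = dead.

def qalys(years, utility):
    """Quality-adjusted life-years for a constant health state."""
    return years * utility

# Trade-off 1: life extension vs life improvement.
print(qalys(4.0, 0.5))   # 2.0 QALYs: longer life in poorer health
print(qalys(2.5, 0.8))   # 2.0 QALYs: shorter, healthier life (treated as equivalent)

# Trade-off 2: comparing treatments on a common scale via the
# incremental cost-effectiveness ratio (ICER).
cost_a, qalys_a = 10_000, qalys(4.0, 0.5)   # hypothetical comparator
cost_b, qalys_b = 22_000, qalys(5.0, 0.6)   # hypothetical new treatment
icer = (cost_b - cost_a) / (qalys_b - qalys_a)
print(f"ICER: £{icer:,.0f} per QALY gained")  # £12,000 per QALY
```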

Measuring multivariate risk preferences in the health domain. Journal of Health Economics Published 27th December 2018

Health preferences research is now a substantial field in itself. But there’s still a lot of work left to be done on understanding risk preferences with respect to health. Gradually, we’re coming round to the idea that people tend to be risk-averse. But risk preferences aren’t (necessarily) so simple. Recent research has proposed that ‘higher order’ preferences such as prudence and temperance play a role. A person exhibiting univariate prudence for longevity would be better able to cope with risk if they are going to live longer. Univariate temperance is characterised by a preference for prospects that disaggregate risk across different possible outcomes. Risk preferences can also be multivariate – defined across health and wealth, for example – with traits such as correlation aversion, cross-prudence, and cross-temperance describing how preferences over risk in one attribute depend on the level of, or the risk in, the other. Many articles from the Arthur Attema camp demand a great deal of background knowledge. This paper isn’t an exception, but it does provide a very clear and intuitive description of the various kinds of uni- and multivariate risk preferences that the researchers are considering.
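For readers who want formal anchors: for a smooth utility function, these traits are usually characterised by the signs of successive (cross-)derivatives. This is the standard characterisation from the multivariate risk literature (e.g. Eeckhoudt and Schlesinger), not notation taken from the paper itself:

```latex
% Sign conditions for u(w, h), utility over wealth w and health/longevity h:
\begin{align*}
  \text{risk aversion:}            && u_{ww} \le 0,\; u_{hh} \le 0 \\
  \text{prudence:}                 && u_{www} \ge 0 \\
  \text{temperance:}               && u_{wwww} \le 0 \\
  \text{correlation aversion:}     && u_{wh} \le 0 \\
  \text{cross-prudence in wealth:} && u_{whh} \ge 0 \\
  \text{cross-temperance:}         && u_{wwhh} \le 0
\end{align*}
```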

For this study, an experiment was conducted with 98 people, who were asked to make 69 choices, corresponding to 3 choices about each risk preference trait being tested, for both gains and losses. Participants were told that they had €240,000 in wealth and 40 years of life to play with. The number of times that an individual made choices in line with a particular trait was used as an indicator of their strength of preference.

For gains, risk aversion was common for both wealth and longevity, and prudence was a common trait. There was no clear tendency towards temperance. For losses, risk aversion and prudence tended to neutrality. For multivariate risk preferences, a majority of people were correlation averse for gains and correlation seeking for losses. For gains, 76% of choices were compatible with correlation aversion, suggesting that people prefer to disaggregate fixed wealth and health gains. For losses, the opposite was true in 68% of choices. There was evidence for cross-prudence in wealth gains but not longevity gains, suggesting that people are more willing to bear health risk when their wealth is higher. For losses, the researchers observed cross-prudence and cross-temperance neutrality. The authors go on to explore associations between different traits.

A key contribution is in understanding how risk preferences differ in the health domain as compared with the monetary domain (which is what most economists study). Conveniently, there are a lot of similarities between risk preferences in the two domains, suggesting that health economists can learn from the wider economics literature. Risk aversion and prudence seem to apply to longevity as well as monetary gains, with a shift to neutrality in losses. The potential implications of these findings are far-reaching, but this is just a small experimental study. More research needed (and anticipated).

Prospective payment systems and discretionary coding—evidence from English mental health providers. Health Economics [PubMed] Published 27th December 2018

If you’ve conducted an economic evaluation in the context of mental health care in England, you’ll have come across mental health care clusters. Patients undergoing mental health care are allocated to one of 20 clusters, classed as either ‘psychotic’, ‘non-psychotic’, or ‘organic’, which forms the basis of an episodic payment model. In 2013/14, these episodes were associated with an average cost of between £975 and £9,354 per day. Doctors determine the clusters and the clusters determine reimbursement. Perverse incentives abound. Or do they?

This study builds on the fact that patients are allocated by clinical teams with guidance from the algorithm-based Mental Health Clustering Tool (MHCT). Clinical teams might exhibit upcoding, whereby patients are allocated to clusters that attract a higher price than the one recommended by the MHCT. Data were analysed for 148,471 patients from the Mental Health Services Data Set for 2011-2015. For each patient, their allocated cluster is known, along with a variety of socioeconomic indicators and the HoNOS and SARN instruments, which feed into the MHCT algorithm. Mixed-effects logistic regression was used to look at whether individual patients were or were not allocated to the cluster recommended as ‘best fit’ by the MHCT, controlling for patient and provider characteristics. Further to this, multilevel multinomial logit models were used to categorise decisions that don’t match the MHCT recommendation as either under- or overcoding.
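As a rough illustration of the first model’s structure – a sketch only, with placeholder variable names, and not the authors’ actual code or dataset – a mixed-effects logistic regression with provider-level random intercepts might look like this in Python:

```python
# Minimal sketch of a mixed-effects logistic regression for the
# probability that a patient's allocated cluster mismatches the MHCT
# 'best fit' recommendation. The file and column names are placeholders,
# not the study's actual data.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("mhct_allocations.csv")  # hypothetical dataset
df["mismatch"] = (df["allocated_cluster"] != df["mhct_best_fit"]).astype(int)

model = BinomialBayesMixedGLM.from_formula(
    "mismatch ~ age + honos_total + provider_activity",  # fixed effects
    {"provider": "0 + C(provider_id)"},                  # random intercepts by provider
    df,
)
result = model.fit_vb()  # variational Bayes estimation
print(result.summary())
```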

Average agreement between the MHCT and clinicians across clusters was 36%. In most cases, patients were allocated to a cluster one step higher or one step lower in terms of the level of need, and there isn’t an obvious tendency to overcode. The authors are able to identify a few ways in which observable provider and patient characteristics influence the tendency to under- or over-cluster patients. For example, providers with higher activity are less likely to deviate from the MHCT best-fit recommendation. However, the dominant finding – identified by using median odds ratios for the probability of a mismatch between two random providers – seems to be that unobserved heterogeneity determines variation in behaviour.
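For reference, the median odds ratio (MOR) is a standard way of putting the between-provider variance from a multilevel logistic model on the odds ratio scale; this is the textbook formula (e.g. Merlo et al.), not one quoted from the paper:

```latex
% MOR for a multilevel logistic model with provider-level
% random-intercept variance \sigma_u^2:
\[
  \mathrm{MOR}
    = \exp\!\left(\sqrt{2\sigma_u^2}\,\Phi^{-1}(0.75)\right)
    \approx \exp\!\left(0.954\,\sigma_u\right)
\]
% \Phi^{-1} is the standard normal quantile function. The MOR is the
% median odds ratio of mismatch between two randomly drawn providers
% for otherwise identical patients; MOR = 1 means no provider-level
% variation.
```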

The study provides clues about the ways in which providers could manipulate coding to their advantage and identifies the need for further data collection for a proper assessment. But reimbursement wasn’t linked to clustering during the time period of the study, so it remains to be seen how clinicians actually respond to these potentially perverse incentives.
