Thesis Thursday: Kevin Momanyi

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Kevin Momanyi, who has a PhD from the University of Aberdeen. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Enhancing quality in social care through economic analysis
Paul McNamee
Repository link

What are reablement and telecare services and why should economists study them?

Reablement and telecare are two types of homecare service that enable individuals to live independently in their own homes with little or no assistance from other people. Reablement focuses on helping individuals relearn the skills needed for independent living after an illness or injury. It is a short-term intervention, lasting about 6 to 12 weeks, that usually involves several health care professionals and social care workers working together to meet a set of objectives. Telecare, on the other hand, entails the use of devices (e.g. community alarms and linked pill dispensers) to facilitate communication between homecare clients and their care providers in the event of an accident or negative health shock. Economists should study reablement and telecare to determine whether the services offer value for money, and to develop policies that would reduce social care costs without compromising the welfare of the populace.

In what ways did your study reach beyond the scope of previous research?

My study extended previous research in three main ways. Firstly, I estimated the treatment effects in a non-experimental setting, unlike previous studies, which used either randomised controlled trials or quasi-experiments. Secondly, I used linked administrative health and social care data for Scotland for the 2010/2011 financial year. The data covered the administrative records of the entire Scottish population and were larger and more robust than those used in previous studies. Thirdly, the previous studies were concerned simply with quantifying the treatment effects and so did not provide a rationale for how the interventions affect the outcomes of interest. My thesis addressed this knowledge gap by formulating an econometric model that links the demand for reablement/telecare to several outcomes.

How did you go about trying to estimate treatment effects from observational data?

I used a theory-driven approach combined with specialised econometric techniques to estimate the treatment effects. The theoretical model drew on the Almost Ideal Demand System (AIDS), Andersen’s Behavioural Model of Health Services Use, the Grossman Model of the demand for health capital, and Samuelson’s Revealed Preference Theory, while the estimation strategy simultaneously controlled for unexplained trend variations, potential endogeneity of key variables, potential sample selection bias, and potential unobserved heterogeneity. For a more substantive discussion of the theoretical model and estimation strategy, see Momanyi, 2018. Although the majority of studies in the econometric literature advocate quasi-experimental study designs for estimating treatment effects from observational data, I provided several proofs in my thesis showing that these designs do not always yield consistent results, and that estimating the econometric models in the way that I did is preferable, since it nests several study designs and estimation strategies as special cases.
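To give a flavour of why controlling for endogeneity matters in this setting, the toy simulation below shows a case where telecare uptake is driven partly by an unobserved confounder, so a naive comparison of users and non-users overstates the effect, while an instrumental-variables (two-stage least squares) estimate recovers it. This is a deliberately stripped-down sketch with simulated data and made-up parameters, not the thesis’s actual model, which is considerably richer:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
z = rng.normal(size=n)   # instrument: shifts uptake but not the outcome directly
u = rng.normal(size=n)   # unobserved confounder (e.g. unmeasured frailty)
d = (0.8 * z + u + rng.normal(size=n) > 0).astype(float)  # telecare uptake
y = 1.0 * d + u + rng.normal(size=n)                      # outcome; true effect = 1.0

# Naive OLS of y on d is biased because uptake d is correlated with u
X = np.column_stack([np.ones(n), d])
naive = np.linalg.lstsq(X, y, rcond=None)[0][1]

# 2SLS: first stage projects d onto the instrument; second stage uses the fit
Z = np.column_stack([np.ones(n), z])
d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]
iv = np.linalg.lstsq(np.column_stack([np.ones(n), d_hat]), y, rcond=None)[0][1]

print(f"naive OLS: {naive:.2f}, 2SLS: {iv:.2f}")  # naive is roughly double the truth
```

The same logic is why quasi-experimental designs are popular; the thesis’s point is that a structural model can nest this kind of correction as a special case.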

Are there key groups of people that could benefit from greater use of reablement and telecare services?

The empirical results of my thesis provide sufficient evidence to conclude that certain groups within the population could benefit from greater use of telecare. For instance, one empirical study investigating the effect of telecare use on the expected length of stay in hospital showed that community alarm users with physical disabilities are more likely than other community alarm users to have a shorter length of stay in hospital, holding other factors constant. Correspondingly, the results also showed that frail elderly individuals who use telecare devices more advanced than the community alarm are expected to have a relatively shorter length of stay in hospital compared to other telecare users in the population, all else equal. A discussion of various econometric models that can be used to link telecare use to the length of stay in hospital can be found in Momanyi, 2017.

What would be your main recommendation for policymakers in Scotland?

The main recommendation for policymakers is that they ought to subsidise the cost of telecare services, especially in regions that currently have relatively low utilisation levels, so as to increase the uptake of telecare in Scotland. This was informed by a decomposition analysis that I conducted in the first empirical study to shed light on what could be driving the observed direct relationship between telecare use and independent living at home. The analysis showed that the treatment effect was partly due to underlying differences (both observable and unobservable) between telecare users and non-users, and thus policymakers could stimulate telecare use in the population by addressing these differences. In addition, policymakers should advise local authorities to target telecare services at the groups of people most likely to benefit from them, as well as sensitise the population to the benefits of using community alarms. This is because the econometric analyses in my thesis showed that the treatment effects are not homogeneous across the population, and that the use of a community alarm is expected to reduce the likelihood of unplanned hospitalisation, whereas the use of other telecare devices has the opposite effect, all else equal.

Can you name one thing that you wish you could have done as part of your PhD, which you weren’t able to do?

I would have liked to include in my thesis an empirical study on the effects of reablement services. My analyses focused only on telecare use as the treatment variable due to data limitations. This additional study would have been vital in validating the econometric model that I developed in the first chapter of the thesis as well as addressing the gaps in knowledge that were identified by the literature review. In particular, it would have been worthwhile to determine whether reablement services should be offered to individuals discharged from hospital or to individuals who have been selected into the intervention directly from the community.

Thesis Thursday: David Mott

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr David Mott, who has a PhD from Newcastle University. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

How do preferences for public health interventions differ? A case study using a weight loss maintenance intervention
Luke Vale, Laura Ternent
Repository link

Why is it important to understand variation in people’s preferences?

It’s not all that surprising that people’s preferences for health care interventions vary, but we don’t have a great understanding of what might drive these differences. Increasingly, preference information is being used to support regulatory decisions and, to a lesser but increasing extent, health technology assessments. It could be the case that certain subgroups of individuals would not accept the risks associated with a particular health care intervention, whereas others would. Therefore, identifying differences in preferences is important. However, it’s also useful to try to understand why this heterogeneity might occur in the first place.

The debate on whose preferences to elicit for health state valuation has traditionally focused on those with experience (e.g. patients) and those without (e.g. the general population). This dichotomy is problematic, though: it has been shown that health state utilities systematically differ between these two groups, presumably due to the difference in relative experience. My project aimed to explore whether experience also affects people’s preferences for health care interventions.

How did you identify different groups of people, whose preferences might differ?

The initial plan for the project was to elicit preferences for a health care intervention from general population and patient samples. However, after reviewing the literature, it seemed highly unlikely that anyone would advocate for preferences for treatments to be elicited from general population samples. It has long been suggested that discrete choice experiments (DCEs) could be used to incorporate patient preferences into decision-making, and it turned out that patients were the focus of the majority of the DCE studies that I reviewed. Given this, I took a more granular approach in my empirical work.

We recruited a very experienced group of ‘service users’ from a randomised controlled trial (RCT). In this case, it was a novel weight loss maintenance intervention aimed at helping obese adults who had lost at least 5% of their overall weight to maintain their weight loss. We also recruited an additional three groups from an online panel. The first group were ‘potential service users’ – those that met the trial criteria but could not have experienced the intervention. The second group were ‘potential beneficiaries’ – those that were obese or overweight and did not meet the trial criteria. The final group were ‘non-users’ – those with a normal BMI.

What can your study tell us about preferences in the context of a weight loss maintenance intervention?

The empirical part of my study involved a DCE and an open-ended contingent valuation (CV) task. The DCE was focused on the delivery of the trial intervention, which was a technology-assisted behavioural intervention. It had a number of different components but, briefly, it involved participants weighing themselves regularly on a set of ‘smart scales’, which enabled the trial team to access and monitor the data. Participants received text messages from the trial team with feedback, reminders to weigh themselves (if necessary), and links to online tools and content to support the maintenance of their weight loss.

The DCE results suggested that preferences for the various components of the intervention varied significantly between individuals and between the different groups – and not all components were considered important. In contrast, the efficacy and cost attributes were important across the board. The CV results suggested that a substantial proportion of individuals would be willing to pay for an effective intervention (i.e. one that avoided weight regain), with very few respondents expressing a willingness to pay for an intervention that led to more than 10-20% weight regain.

Do alternative methods for preference elicitation provide a consistent picture of variation in preferences?

Existing evidence suggests that willingness to pay (WTP) estimates from CV tasks might differ from those derived from DCE data, but there aren’t many empirical studies on this in health. Comparisons were planned in my study, but the approach taken in the end was suboptimal and ultimately inconclusive. The original plan was to obtain WTP estimates for an entire weight loss maintenance (WLM) intervention using the DCE and to compare these with the estimates from the CV task. Due to data limitations, it wasn’t possible to make this comparison. However, the CV task was a bit unusual in that we asked for respondents’ WTP at various different efficacy levels. So instead the comparison made was between average WTP values for a percentage point of weight regain. The differences were statistically insignificant.
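For readers unfamiliar with how a DCE yields WTP values: the marginal WTP for an attribute is the ratio of its coefficient to the cost coefficient from the fitted choice model, which can then be put on the same footing as a CV estimate. A minimal sketch with made-up coefficients (these are illustrative numbers, not the study’s estimates):

```python
# Hypothetical conditional-logit coefficients (illustrative only):
beta_regain = -0.04   # utility per percentage point of weight regain
beta_cost = -0.02     # utility per £1 of cost

# Holding utility constant, the money a respondent would give up to avoid
# one percentage point of regain is the ratio of the two coefficients:
wtp_dce = beta_regain / beta_cost   # £2 per percentage point avoided

# CV comparison: mean stated WTP divided by the percentage points avoided
mean_cv_wtp = 40.0                  # hypothetical mean WTP to avoid 20 points
wtp_cv = mean_cv_wtp / 20.0         # £2 per percentage point avoided

print(wtp_dce, wtp_cv)
```

With both methods expressed per percentage point of regain, the two estimates become directly comparable, which is the form the comparison took in the study.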

Are some people’s preferences ‘better defined’ than others’?

We hypothesised that those with experience of the trial intervention would have ‘better defined’ preferences. To explore this, we compared data quality across the different user groups. From a quick glance at the DCE results, it is pretty clear that the data were much better for the most experienced group; the coefficients were larger, and a much higher proportion of them were statistically significant. More interestingly, we found that the most experienced group were 23% more likely to have passed all of the rationality tests that were embedded in the DCE. Therefore, if you accept that better quality data are an indicator of ‘better defined’ preferences, then the data do seem reasonably supportive of the hypothesis. That being said, there were no significant differences between the other three groups, raising the question: was it the difference in experience, or some other difference between RCT participants and online panel respondents?

What does your research imply for the use of preferences in resource allocation decisions?

While there are still many unanswered questions, and there is always a need for further research, the results from my PhD project suggest that preferences for health care interventions can differ significantly between respondents with differing levels of experience. Had my project been applied to a more clinical intervention that is harder for an average person to imagine experiencing, I would expect the differences to have been much larger. I’d love to see more research in this area in future, especially in the context of benefit-risk trade-offs.

The key message is that the level of experience of the participants matters. It is quite reasonable to believe that a preference study focusing on a particular subgroup of patients will not be generalisable to the broader patient population. As preference data, typically elicited from patients, is increasingly used in decision-making – which is great – it is becoming ever more important for researchers to make sure that their respondent samples are appropriate to support the decisions being made.

Thesis Thursday: Logan Trenaman

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Logan Trenaman, who has a PhD from the University of British Columbia. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Economic evaluation of interventions to support shared decision-making: an extension of the valuation framework
Nick Bansback, Stirling Bryan
Repository link

What is shared decision-making?

Shared decision-making is a process whereby patients and health care providers work together to make decisions. For most health care decisions, where there is no ‘best’ option, the most appropriate course of action depends on the clinical evidence and the patient’s informed preferences. In effect, shared decision-making is about reducing information asymmetry, by allowing providers to inform patients about the potential benefits and harms of alternative tests or treatments, and patients to express their preferences to their provider. The goal is to reach agreement on the most appropriate decision for that patient.

My thesis focused on individuals with advanced osteoarthritis who were considering whether to undergo total hip or knee replacement, or to use non-surgical treatments such as pain medication, exercise, or mobility aids. Joint replacement alleviates pain and improves mobility for most patients; however, as many as 20-30% of recipients have reported insignificant improvement in symptoms and/or dissatisfaction with the results. Shared decision-making can help ensure that those considering joint replacement are aware of alternative treatments and have realistic expectations about the potential benefits and harms of each option.

There are different types of interventions available to help support shared decision-making, some of which target the patient (e.g. patient decision aids) and some of which target providers (e.g. skills training). My thesis focused on a randomized controlled trial that evaluated a pre-consultation patient decision aid, which generated a summary report for the surgeon that outlined the patient’s knowledge, values, and preferences.

How can the use of decision aids influence health care costs?

The use of patient decision aids can impact health care costs in several ways. Some patient decision aids, such as those evaluated in my thesis, are designed for use by patients in preparation for a consultation where a treatment decision is made. Others are designed to be used during the consultation with the provider. There is some evidence that decision aids may increase up-front costs, by increasing the length of consultations and by requiring investments to integrate decision aids into routine care or to train clinicians. These interventions may also impact downstream costs by influencing treatment decision-making. For example, the Cochrane review of patient decision aids found that, across 18 studies in major elective surgery, those exposed to decision aids were less likely to choose surgery compared to those in usual care (RR: 0.86, 95% CI: 0.75 to 1.00).

This was observed in the trial-based economic evaluation, which constituted the first chapter of my thesis. This analysis found that decision aids were highly cost-effective, largely due to a smaller proportion of patients undergoing joint replacement. Of course, this conclusion could change over time. One of the challenges of previous cost-effectiveness analyses (CEAs) of patient decision aids has been a lack of long-term follow-up. Patients who choose not to have surgery in the short term may go on to have surgery later. To look at the longer-term impact of decision aids, the third chapter of my thesis linked trial participants to administrative data with an average of 7 years of follow-up. I found that, from a resource use perspective, the conclusion was the same as observed during the trial: fewer patients exposed to decision aids had undergone surgery, resulting in lower costs.

What is it about shared decision-making that patients value?

On the whole, the evidence suggests that patients value being informed, listened to, and offered the opportunity to participate in decision-making (should they wish!). To better understand how much shared decision-making is valued, I performed a systematic review of discrete choice experiments (DCEs) that had valued elements of shared decision-making. This review found that survey respondents (primarily patients) were willing to wait longer, to pay more, and in some cases to accept poorer health outcomes in exchange for greater shared decision-making.

It is important to consider preference heterogeneity in this context. The last chapter of my PhD performed a DCE to value shared decision-making in the context of advanced knee osteoarthritis. The DCE included three attributes: waiting time, health outcomes, and shared decision-making. The latent class analysis found four distinct subgroups of patients. Two groups were balanced, and traded between all attributes, while one group had a strong preference for shared decision-making, and another had a strong preference for better health outcomes. One important finding from this analysis was that having a strong preference for shared decision-making was not associated with demographic or clinical characteristics. This highlights the importance of each clinical encounter in determining the appropriate level of shared decision-making for each patient.
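To illustrate what a latent class model implies, here is a toy two-class version (the thesis found four classes; the coefficients, attribute levels, and class shares below are all invented for illustration): each class has its own conditional-logit coefficients over the three attributes, and the population choice probability is the class-share-weighted mixture.

```python
import numpy as np

# Attributes: waiting time (months), health gain (0-1 scale), SDM level (0/1)
betas = {
    "values_sdm":    np.array([-0.05, 1.0, 2.0]),   # cares most about SDM
    "values_health": np.array([-0.05, 4.0, 0.2]),   # cares most about outcomes
}
shares = {"values_sdm": 0.3, "values_health": 0.7}  # class membership shares

# Two alternatives: A = short wait, modest gain, high SDM; B = the reverse
alt_a = np.array([3.0, 0.2, 1.0])
alt_b = np.array([6.0, 0.6, 0.0])

def p_choose_a(beta):
    """Conditional-logit probability of choosing alternative A over B."""
    ua, ub = beta @ alt_a, beta @ alt_b
    return np.exp(ua) / (np.exp(ua) + np.exp(ub))

# Population-level probability mixes the two classes by their shares
p_mix = sum(shares[c] * p_choose_a(b) for c, b in betas.items())
print(f"P(choose A): SDM class {p_choose_a(betas['values_sdm']):.2f}, "
      f"health class {p_choose_a(betas['values_health']):.2f}, mixed {p_mix:.2f}")
```

The same average choice data can therefore mask sharply opposed subgroups, which is why the latent class results matter for tailoring the clinical encounter.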

Is it meaningful to estimate the cost-per-QALY of shared decision-making interventions?

One of the challenges of my thesis was grappling with the potential conflict between the objectives of CEA using QALYs (maximizing health) and shared decision-making interventions (improved decision-making). Importantly, encouraging shared decision-making may result in patients choosing alternatives that do not maximize QALYs. For example, informed patients may choose to delay or forego elective surgery due to potential risks, despite it providing more QALYs (on average).

In cases where a CEA finds that shared decision-making interventions result in poorer health outcomes at lower cost, I think this is perfectly acceptable (provided patients are making informed choices). However, it becomes more complicated when shared decision-making interventions increase costs, result in poorer health outcomes, but provide other, non-health benefits such as informing patients or involving them in treatment decisions. In such cases, decision-makers need to consider whether it is justified to allocate scarce health care resources to encourage shared decision-making when it requires sacrificing health outcomes elsewhere. The latter part of my thesis tried to inform this trade-off, by valuing the non-health benefits of shared decision-making which would not otherwise be captured in a CEA that uses QALYs.
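One way decision-makers can formalise that trade-off is through net monetary benefit, which makes the ‘lower cost, poorer outcomes’ case explicit. A toy calculation with invented per-patient numbers (not the trial’s results):

```python
# Hypothetical per-patient means: the decision-aid arm costs less because
# fewer patients choose surgery, but yields slightly fewer QALYs.
cost_da, qaly_da = 9_000.0, 6.90    # decision aid + usual care
cost_uc, qaly_uc = 11_000.0, 6.95   # usual care alone

d_cost = cost_da - cost_uc          # -£2,000 (a saving)
d_qaly = qaly_da - qaly_uc          # -0.05 QALYs (health forgone)

# In this 'south-west quadrant' case the ICER is a ratio of two negatives,
# so it is clearer to compute net monetary benefit at a chosen threshold:
threshold = 30_000.0                        # £ per QALY
nmb = d_qaly * threshold - d_cost           # -1,500 + 2,000 = +£500
print(f"NMB at £30,000/QALY: £{nmb:.0f}")   # positive => cost-effective
```

Here the health given up is worth less (at the threshold) than the resources freed, so the intervention would be deemed cost-effective despite the poorer outcomes; any additional non-health value of shared decision-making would only strengthen that conclusion.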

How should the valuation framework be extended, and is this likely to indicate different decisions?

I extended the valuation framework by attempting to value non-health benefits of shared decision-making. I followed guidelines from the Canadian Agency for Drugs and Technologies in Health, which state that “the value of non-health effects should be based on being traded off against health” and that societal preferences be used for this valuation. Requiring non-health benefits to be valued relative to health reflects the opportunity cost of allocating resources toward these outcomes. While these guidelines do not specifically state how to do so, I chose to value shared decision-making relative to life-years using a chained (or two-stage) valuation approach so that they could be incorporated within the QALY.
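As a stylised illustration of the chaining (the numbers below are invented, not the thesis’s estimates): the process value is first elicited on the scale of an anchor health state, the anchor state is then valued against life-years, and the two stages are multiplied together to land on the QALY scale.

```python
# Stage 1: respondents value the shared decision-making process against an
# anchor health state. Suppose the process is worth 10% of the gap between
# the anchor state and full health on that stage's scale.
process_share_of_gap = 0.10

# Stage 2: the anchor state is valued against life-years via time trade-off:
# indifference between 10 years in the anchor state and 9 years in full health.
u_anchor = 9.0 / 10.0               # 0.9 on the dead = 0 / full health = 1 scale

# Chaining: the process value expressed in QALY terms
u_process = process_share_of_gap * (1.0 - u_anchor)   # 0.10 * 0.1 = 0.01
print(f"process utility on the QALY scale: {u_process:.3f}")
```

Because both stages are anchored to the same scale, the resulting process utility can be added to conventional health-state utilities within a CEA.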

Ultimately, I found that the value of the process of shared decision-making was small; however, it may still have an impact on cost-effectiveness. The reasons for this are twofold. First, there are few cases where shared decision-making interventions improve health outcomes. A 2018 sub-analysis of the Cochrane review of patient decision aids found little evidence that they impact health-related quality of life. Second, the up-front cost of implementing shared decision-making interventions may be small. Thus, in cases where shared decision-making interventions require a small investment but provide no health benefit, the non-health value of shared decision-making may impact cost-effectiveness. One recent example from Dr Victoria Brennan found that incorporating process utility associated with improved consultation quality, resulting from a new online assessment tool, increased the probability that the intervention was cost-effective from 35% to 60%.