Thesis Thursday: David Mott

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr David Mott who has a PhD from Newcastle University. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title: How do preferences for public health interventions differ? A case study using a weight loss maintenance intervention
Supervisors: Luke Vale, Laura Ternent
Repository link: http://hdl.handle.net/10443/4197

Why is it important to understand variation in people’s preferences?

It’s not all that surprising that people’s preferences for health care interventions vary, but we don’t have a great understanding of what might drive these differences. Increasingly, preference information is being used to support regulatory decisions and, to a lesser but increasing extent, health technology assessments. It could be the case that certain subgroups of individuals would not accept the risks associated with a particular health care intervention, whereas others would. Therefore, identifying differences in preferences is important. However, it’s also useful to try to understand why this heterogeneity might occur in the first place.

The debate on whose preferences to elicit for health state valuation has traditionally focused on those with experience (e.g. patients) and those without (e.g. the general population). This dichotomy is problematic, though: it has been shown that health state utilities systematically differ between these two groups, presumably due to the difference in relative experience. My project aimed to explore whether experience also affects people’s preferences for health care interventions.

How did you identify different groups of people whose preferences might differ?

The initial plan for the project was to elicit preferences for a health care intervention from general population and patient samples. However, after reviewing the literature, it seemed highly unlikely that anyone would advocate for preferences for treatments to be elicited from general population samples. It has long been suggested that discrete choice experiments (DCEs) could be used to incorporate patient preferences into decision-making, and it turned out that patients were the focus of the majority of the DCE studies that I reviewed. Given this, I took a more granular approach in my empirical work.

We recruited a very experienced group of ‘service users’ from a randomised controlled trial (RCT). In this case, it was a novel weight loss maintenance intervention aimed at helping obese adults who had lost at least 5% of their overall weight to maintain that weight loss. We also recruited an additional three groups from an online panel. The first group were ‘potential service users’ – those who met the trial criteria but could not have experienced the intervention. The second group were ‘potential beneficiaries’ – those who were obese or overweight but did not meet the trial criteria. The final group were ‘non-users’ – those with a normal BMI.

What can your study tell us about preferences in the context of a weight loss maintenance intervention?

The empirical part of my study involved a DCE and an open-ended contingent valuation (CV) task. The DCE was focused on the delivery of the trial intervention, which was a technology-assisted behavioural intervention. It had a number of different components but, briefly, it involved participants weighing themselves regularly on a set of ‘smart scales’, which enabled the trial team to access and monitor the data. Participants received text messages from the trial team with feedback, reminders to weigh themselves (if necessary), and links to online tools and content to support the maintenance of their weight loss.

The DCE results suggested that preferences for the various components of the intervention varied significantly between individuals and between the different groups – and not all components were considered important. In contrast, the efficacy and cost attributes were important across the board. The CV results suggested that a substantial proportion of individuals would be willing to pay for an effective intervention (i.e. one that prevented weight regain), with very few respondents expressing a willingness to pay for an intervention that led to more than 10-20% weight regain.

Do alternative methods for preference elicitation provide a consistent picture of variation in preferences?

Existing evidence suggests that willingness to pay (WTP) estimates from CV tasks might differ from those derived from DCE data, but there aren’t many empirical studies on this in health. Comparisons were planned in my study, but the approach taken in the end was suboptimal and ultimately inconclusive. The original plan was to obtain WTP estimates for an entire weight loss maintenance (WLM) intervention from the DCE and to compare these with the estimates from the CV task. Due to data limitations, it wasn’t possible to make this comparison. However, the CV task was a little unusual in that we asked for respondents’ WTP at various different efficacy levels. So the comparison made instead was between average WTP values for a percentage point of weight regain. The differences were not statistically significant.
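As a rough illustration of how such a comparison can work: in a linear-utility DCE, marginal WTP is the ratio of an attribute’s coefficient to the cost coefficient, while the CV task yields stated WTP at several efficacy levels. All coefficients and stated values below are invented for illustration, not estimates from the thesis.

```python
# Illustrative comparison of WTP per percentage point of weight regain
# from a DCE versus a CV task. All numbers are hypothetical.

def dce_wtp_per_point(beta_regain: float, beta_cost: float) -> float:
    """WTP (£) to avoid one percentage point of weight regain: the ratio
    of the regain coefficient to the cost coefficient in a linear-utility
    discrete choice model."""
    return beta_regain / beta_cost

# Hypothetical DCE coefficients: utility falls with regain and with cost.
wtp_dce = dce_wtp_per_point(beta_regain=-0.08, beta_cost=-0.02)

# Hypothetical CV responses: stated WTP (£) for interventions that limit
# regain to 0%, 10%, and 20% of the weight previously lost.
cv_wtp = {0: 60.0, 10: 25.0, 20: 5.0}

# Average WTP per percentage point of regain avoided, taken from the
# slopes between adjacent efficacy levels.
levels = sorted(cv_wtp)
slopes = [(cv_wtp[a] - cv_wtp[b]) / (b - a) for a, b in zip(levels, levels[1:])]
wtp_cv = sum(slopes) / len(slopes)

print(f"DCE: £{wtp_dce:.2f}/point; CV: £{wtp_cv:.2f}/point")
```

Dividing by the cost coefficient converts utility units into monetary units, which is what puts the two elicitation methods on the same scale for comparison.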

Are some people’s preferences ‘better defined’ than others’?

We hypothesised that those with experience of the trial intervention would have ‘better defined’ preferences. To explore this, we compared the data quality across the different user groups. From a quick glance at the DCE results, it is pretty clear that the data were much better for the most experienced group; the coefficients were larger, and a much higher proportion was statistically significant. However, more interestingly, we found that the most experienced group were 23% more likely to have passed all of the rationality tests embedded in the DCE. Therefore, if you accept that better quality data are an indicator of ‘better defined’ preferences, then the data do seem reasonably supportive of the hypothesis. That being said, there were no significant differences between the other three groups, raising the question: was it the difference in experience, or some other difference between RCT participants and online panel respondents?

What does your research imply for the use of preferences in resource allocation decisions?

While there are still many unanswered questions, and there is always a need for further research, the results from my PhD project suggest that preferences for health care interventions can differ significantly between respondents with differing levels of experience. Had my project been applied to a more clinical intervention that is harder for an average person to imagine experiencing, I would expect the differences to have been much larger. I’d love to see more research in this area in future, especially in the context of benefit-risk trade-offs.

The key message is that the level of experience of the participants matters. It is quite reasonable to believe that a preference study focusing on a particular subgroup of patients will not be generalisable to the broader patient population. As preference data, typically elicited from patients, is increasingly being used in decision-making – which is great – it is becoming increasingly important for researchers to make sure that their respondent samples are appropriate to support the decisions that are being made.

Sam Watson’s journal round-up for 26th November 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Alcohol and self-control: a field experiment in India. American Economic Review, forthcoming

Addiction is complex. For many people it is characterised by a need or compulsion to take something, often to prevent withdrawal, and often in conflict with a desire not to take it. This conflicts with Gary Becker’s much-maligned rational theory of addiction, which views addiction as a choice that maximises utility in the long term. Under Becker’s model, one could use market-based mechanisms to end repeated, long-term drug or alcohol use: if the cost of continued use were made high enough, people would choose to stop. This has led to the development of interventions like conditional payment or cost mechanisms, under which a user receives a payment on condition of sobriety. Previous studies, however, have found little evidence that people would be willing to pay for such sobriety contracts. This article reports a randomised trial among rickshaw drivers in Chennai, India, a group with a high prevalence of heavy alcohol use and dependency. The three trial arms consisted of a control arm, who received an unconditional daily payment; a treatment arm, who received a small payment plus extra if they passed a breathalyser test; and a third arm, who could choose between the two payment mechanisms. Two findings are of particular interest. First, the incentive payments significantly increased daytime sobriety. Second, over half the participants preferred the conditional sobriety payments over the unconditional payments even when the conditional payments were weakly dominated, and a third still preferred them when the unconditional payments exceeded the maximum possible conditional payment. This conflicts with a market-based conception of addiction and its treatment. Indeed, the nature of addiction means it can override all intrinsic motivation to stop, or, frankly, to do anything else. So it makes sense that individuals are willing to pay for extrinsic motivation, which in this case did make a difference.

Heterogeneity in long term health outcomes of migrants within Italy. Journal of Health Economics [PubMed] [RePEc] Published 2nd November 2018

We’ve discussed neighbourhood effects a number of times on this blog (here and here, for example). In the absence of a randomised allocation to different neighbourhoods or areas, it is very difficult to discern why people living there or who have moved there might be better or worse off than elsewhere. This article is another neighbourhood effects analysis, this time framed through the lens of immigration. It looks at those who migrated within Italy in the 1970s during a period of large northward population movements. The authors, in essence, identify the average health and mental health of people who moved to different regions, conditional on duration spent in origin destinations and a range of other factors. The analysis is conceptually similar to that of two papers we discussed at length on internal migration in the US and labour market outcomes, in that it accounts for the duration of ‘exposure’ to poorer areas and differences between destinations. In the case of the labour market outcomes papers, the analysis couldn’t really differentiate between a causal effect of a neighbourhood increasing human capital, differences in labour market conditions, and unobserved heterogeneity between migrating people and families. This article on Italian migration looks instead at health outcomes, such as the SF-12, which narrows the candidate explanations, since one cannot ‘earn’ more health by moving elsewhere. Nevertheless, the labour market can still strongly affect health.

The authors carefully discuss the difficulties in identifying causal effects here. A number of model extensions are also estimated to try to deal with some of the issues discussed, including a type of propensity score weighting approach, although I would emphasise that this categorically does not deal with unobserved heterogeneity. A finite mixture model is also estimated. Generally, it is a well-thought-through analysis. However, there is a reliance on statistical significance here. I know I bang on about statistical significance a lot, but it is widely used inappropriately. A rule of thumb I’ve adopted for reviewing papers for journals is that if the conclusions would change when the statistical significance threshold is changed, then there’s probably an issue. This article would fail that test. The authors use a threshold of p<0.10, which seems inappropriate for an analysis with a sample size in the tens of thousands, and they build a concluding narrative around what is and isn’t statistically significant. This is not to detract from the analysis, merely its interpretation. In future, this could be helped by banning asterisks in tables, as the AER has done, or, better yet, by developing submission guidelines around their use.

Credits

Chris Sampson’s journal round-up for 5th November 2018


Stratified treatment recommendation or one-size-fits-all? A health economic insight based on graphical exploration. The European Journal of Health Economics [PubMed] Published 29th October 2018

Health care is increasingly personalised. This creates the need to evaluate interventions for smaller and smaller subgroups as patient heterogeneity is taken into account. And this usually means we lack the statistical power to have confidence in our findings. The purpose of this paper is to consider the usefulness of a tool that hasn’t previously been employed in economic evaluation – the subpopulation treatment effect pattern plot (STEPP). STEPP works by assessing the interaction between treatments and covariates in different subgroups, which can then be presented graphically. Imagine your X-axis with the values defining the subgroups and your Y-axis showing the treatment outcome. This information can then be used to determine which subgroups exhibit positive outcomes.
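The mechanics described above can be sketched as a sliding window over patients ordered by the subgroup-defining covariate, computing the mean outcome difference between arms within each window. Everything below – the simulated data, the window and step sizes, the choice of baseline risk as the covariate – is a hypothetical illustration, not the paper’s implementation.

```python
# Minimal STEPP-style sketch: overlapping subgroups along a baseline
# covariate, with the between-arm mean outcome difference per subgroup.
import random

random.seed(1)

# Simulated patients: (baseline_risk, arm, outcome). The treatment
# effect is constructed to fade as baseline risk rises, so the plot
# would show declining benefit along the X-axis.
patients = []
for _ in range(400):
    risk = random.random()
    arm = random.choice(["treatment", "control"])
    effect = max(0.0, 0.5 - risk) if arm == "treatment" else 0.0
    patients.append((risk, arm, effect + random.gauss(0, 0.1)))

patients.sort()  # order by baseline risk

window, step = 100, 50  # subgroup size and overlap (arbitrary choices)
stepp_points = []
for start in range(0, len(patients) - window + 1, step):
    sub = patients[start:start + window]
    treat = [o for _, a, o in sub if a == "treatment"]
    ctrl = [o for _, a, o in sub if a == "control"]
    mid_risk = sub[window // 2][0]  # covariate value labelling the subgroup
    diff = sum(treat) / len(treat) - sum(ctrl) / len(ctrl)
    stepp_points.append((mid_risk, diff))

# Plotting mid_risk (X) against diff (Y) gives the STEPP figure.
for risk, diff in stepp_points:
    print(f"risk≈{risk:.2f}  effect={diff:+.3f}")
```

The overlap between windows is what smooths the pattern; the trade-off is that adjacent points are correlated, which is one reason interpretation tends to be informal rather than rule-based.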

This study uses data from a trial-based economic evaluation in heart failure, where patients’ 18-month all-cause mortality risk was estimated at baseline before allocation to one of three treatment strategies. For the STEPP procedure, the authors use baseline risk to define subgroups and adopt net monetary benefit at the patient level as the outcome. The study makes two comparisons (between three alternative strategies) and therefore presents two STEPP figures. The STEPP figures are used to identify subgroups, which the authors apply in a stratified cost-effectiveness analysis, estimating net benefit in each defined risk subgroup.
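For reference, patient-level net monetary benefit is simply the health gain valued at a cost-effectiveness threshold, minus cost. A minimal sketch, where the £20,000-per-QALY threshold and the patient’s QALY gain and cost are assumed values for illustration, not figures from the study:

```python
# Patient-level net monetary benefit (NMB), the outcome plotted on the
# Y-axis of the STEPP figures. Threshold and patient values are
# illustrative assumptions.

def net_monetary_benefit(qalys: float, cost: float,
                         threshold: float = 20_000.0) -> float:
    """NMB = threshold * QALYs - cost; positive values favour treatment."""
    return threshold * qalys - cost

nmb = net_monetary_benefit(qalys=0.05, cost=750.0)
print(f"NMB = £{nmb:.0f}")  # roughly £250 under these assumptions
```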

Interpretation of the STEPPs is a bit loose, with no hard decision rules. The authors suggest that one of the STEPPs shows no clear relationship between net benefit and baseline risk in terms of the cost-effectiveness of the intervention (care as usual vs basic support). The other STEPP shows that, on average, people with baseline risk below 0.16 have a positive net benefit from the intervention (intensive support vs basic support), while those with higher risk do not. The authors evaluate this stratification strategy against an alternative stratification strategy (based on the patient’s New York Heart Association class) and find that the STEPP-based approach is expected to be more cost-effective. So the key message seems to be that STEPP can be used as a basis for defining subgroups as cost-effectively as possible.

I’m unsure about the extent to which this is a method that deserves to have its own name, insofar as it is used in this study. I’ve seen plenty of studies present a graph with net benefit on the Y-axis and some patient characteristic on the X-axis. But my main concern is about defining subgroups on the basis of net monetary benefit rather than some patient characteristic. Is it OK to deny treatment to subgroup A because treatment costs are higher than in subgroup B, even if treatment is cost-effective for the entire population of A+B? Maybe, but I think that creates more challenges than stratification on the basis of treatment outcome.

Using post-market utilisation analysis to support medicines pricing policy: an Australian case study of aflibercept and ranibizumab use. Applied Health Economics and Health Policy [PubMed] Published 25th October 2018

The use of ranibizumab and aflibercept has been a hot topic in the UK, where NHS providers feel that they’ve been bureaucratically strong-armed into using an incredibly expensive drug to treat certain eye conditions when a cheaper and just-as-effective alternative is available. Seeing how other countries have managed prices in this context could, therefore, be valuable to the NHS and other health services internationally. This study uses data from Australia, where decisions about subsidising medicines are informed by research into how drugs are used after they come to market. Both ranibizumab (in 2007) and aflibercept (in 2012) were supported for the treatment of age-related macular degeneration. These decisions were based on clinical trials and modelling studies, which also showed that the benefit of ~6 aflibercept prescriptions equated to the benefit of ~12 ranibizumab prescriptions, justifying a higher price-per-injection for aflibercept.

In the UK and US, aflibercept attracts a higher price. The authors assume that this is because of the aforementioned trial data relating to the number of doses. However, in Australia, the same price is paid for aflibercept and ranibizumab. This is because a post-market analysis showed that, in practice, ranibizumab and aflibercept had a similar dose frequency. The purpose of this study is to see whether this is because different groups of patients are being prescribed the two drugs. If they are, then we might anticipate heterogeneous treatment outcomes and thus a justification for differential pricing. Data were drawn from an administrative claims database for 208,000 Australian veterans for 2007-2017. The monthly number of aflibercept and ranibizumab prescriptions was estimated for each person, showing that total prescriptions increased steadily over the period, with aflibercept taking around half the market within a year of its approval. Ranibizumab initiators were slightly older in the post-aflibercept era but, aside from that, there were no real differences identified. When it comes to the prescription of ranibizumab or aflibercept, gender, being in residential care, remoteness of location, and co-morbidities don’t seem to be important. Dispensing rates were similar, at around 3 prescriptions during the first 90 days and around 9 prescriptions during the following 12 months.

The findings seem to support Australia’s decision to treat ranibizumab and aflibercept as substitutes at the same price. More generally, they support the idea that post-market utilisation assessments can (and perhaps should) be used as part of the health technology assessment and reimbursement process.

Do political factors influence public health expenditures? Evidence pre- and post-great recession. The European Journal of Health Economics [PubMed] Published 24th October 2018

There is mixed evidence about the importance of partisanship in public spending, and very little relating specifically to health care. I’d be worried if political factors didn’t influence public spending on health, given that that’s a definitively political issue. How the situation might be different before and after a recession is an interesting question.

The authors combined OECD data for 34 countries from 1970-2016 with the Database of Political Institutions. This allowed for the creation of variables relating to the ideology of the government and the proximity of elections. Stationary panel data models were identified as the most appropriate method for analysing these data. A variety of political factors were included in the models, for which the authors present marginal effects. The more left-wing a government, the higher its public spending on health care, though this is only statistically significant in the period before the crisis of 2007. Before the crisis, coalition governments tended to spend more, while governments with more years in office tended to spend less. These effects also seem to disappear after 2007. Throughout the whole period, governing parties with a stronger majority tended to spend less on health care. Several of the non-political factors included in the models show the results that we would expect: GDP per capita is positively associated with health care expenditure, for example. The findings relating to the importance of political factors appear to be robust to the inclusion of other (non-political) variables, and there are similar findings when the authors look at public health expenditure as a percentage of total health expenditure. In contradiction with some previous studies, proximity to elections does not appear to be important.

The most interesting finding here is that the effect of partisanship seems to have mostly disappeared – or, at least, reduced – since the crisis of 2007. Why did left-wing parties and right-wing parties converge? The authors suggest that it’s because adverse economic circumstances restrict the extent to which governments can make decisions on the basis of ideology. Though I dare say readers of this blog could come up with plenty of other (perhaps non-economic) explanations.
