Chris Sampson’s journal round-up for 2nd December 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The treatment decision under uncertainty: the effects of health, wealth and the probability of death. Journal of Health Economics Published 16th November 2019

It’s important to understand how people make decisions about treatment. At the end of life, the question can become a matter of whether to have treatment or to let things take their course such that you end up dead. In order to consider this scenario, the author of this paper introduces the probability of death to some existing theoretical models of decision-making under uncertainty.

The diagnostic risk model and the therapeutic risk model can be used to identify risk thresholds that determine decisions about treatment. The diagnostic model relates to the probability that disease is present and the therapeutic model relates to the probability that treatment is successful. The new model described in this paper builds on these models to consider the impact on the decision thresholds of i) initial health state, ii) probability of death, and iii) wealth. The model includes wealth after death, in the form of a bequest. Limited versions of the model are also considered, excluding the bequest and excluding wealth (described as a ‘QALY model’). Both an individual perspective and an aggregate perspective are considered by excluding and including the monetary cost of diagnosis and treatment, to allow for a social insurance type setting.
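As a reminder of the underlying threshold logic (a stylised Pauker–Kassirer-type sketch, not the paper's extended model): writing U(T,D) for the utility of treating when disease is present, and similarly for the other treat/no-treat and disease/no-disease combinations, the diagnostic threshold is the probability of disease p* at which the expected utilities of treating and not treating are equal:

```latex
% Stylised diagnostic threshold (illustrative; not the paper's model).
% Treat iff  p U(T,D) + (1-p) U(T,\bar{D}) \ge p U(\bar{T},D) + (1-p) U(\bar{T},\bar{D}).
% Solving at equality for the threshold probability of disease:
\[
  p^{*}
  = \frac{U(\bar{T},\bar{D}) - U(T,\bar{D})}
         {\bigl[U(\bar{T},\bar{D}) - U(T,\bar{D})\bigr]
          + \bigl[U(T,D) - U(\bar{T},D)\bigr]}
  = \frac{\text{net harm of unnecessary treatment}}
         {\text{net harm} + \text{net benefit}}
\]
```

The paper's contribution is to let these utilities depend on initial health, wealth (including a bequest), and the probability of death, all of which shift this threshold.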

The comparative statics show a lot of ambiguity, but there are a few things that the model can tell us. The author identifies treatment as having an ‘insurance effect’, by reducing diagnostic risk, a ‘protective effect’, by lowering the probability of death, and a risk-increasing effect associated with therapeutic risk. A higher probability of death increases the propensity for treatment in both the no-bequest model and the QALY model, because of the protective effect of treatment. In the bequest model, the impact is ambiguous, because treatment costs reduce the bequest. In the full model, wealthier individuals will choose to undergo treatment at a lower probability of success because of a higher marginal utility for survival, but the effect becomes ambiguous if the marginal utility of wealth depends on health (which it obviously does).


I am no theoretician, so it can take me a long time to figure these things out in my head. For now, I’m not convinced that it is meaningful to consider death in this way using a one-period life model. In my view, the very definition of death is a loss of time, which plays little or no part in this model. But I think my main bugbear is the idea that anybody’s decision about life-saving treatment is partly determined by the amount of money they will leave behind. I find this hard to believe. The author links the finding that a higher probability of death increases treatment propensity to NICE’s end-of-life premium, though I’m not convinced that the model has anything to do with NICE’s reasoning on this matter.

Moving toward evidence-based policy: the value of randomization for program and policy implementation. JAMA [PubMed] Published 15th November 2019

Evidence-based policy is a nice idea. We should figure out whether something works before rolling it out. But decision-makers (especially politicians) tend not to think in this way, because doing something is usually seen to be better than doing nothing. The authors of this paper argue that randomisation is the key to understanding whether a particular policy creates value.

Without evidence based on random allocation, it’s difficult to know whether a policy works. This, the authors argue, can undermine the success of effective interventions and allow harmful policies to persist. A variety of positive examples are provided from US healthcare, including trials of Medicare bundled payments. Apparently, such trials increased confidence in the programmes’ effects in a way that post hoc evaluations cannot, though no evidence of this increased confidence is actually provided. Policy evaluation is not always easy, so the authors describe four preconditions for the success of such studies: i) early engagement with policymakers, ii) willingness from policy leaders to support randomisation, iii) timing the evaluation in line with policymakers’ objectives, and iv) designing the evaluation in line with the realities of policy implementation.

These are sensible suggestions, but it is not clear why the authors focus on randomisation. The paper doesn’t do what it says on the tin, i.e. describe the value of randomisation. Rather, it explains the value of pre-specified policy evaluations. Randomisation may or may not deserve special treatment compared with other analytical tools, but this paper provides no explanation for why it should. The authors also suggest that people are becoming more comfortable with randomisation, as large companies employ experimental methods, particularly on the Internet with A/B testing. I think this perception is way off and that most people feel creeped out knowing that the likes of Facebook are experimenting on them without any informed consent. In the authors’ view, it being possible to randomise is a sufficient basis on which to randomise. But, considering the ethics, as well as possible methodological contraindications, it isn’t clear that randomisation should become the default.

A new tool for creating personal and social EQ-5D-5L value sets, including valuing ‘dead’. Social Science & Medicine Published 30th November 2019

Nobody can agree on the best methods for health state valuation. Or, at least, some people have disagreed loudly enough to make it seem that way. Novel approaches to health state valuation are therefore welcome. Even more welcome is the development and testing of methods that you can try at home.

This paper describes the PAPRIKA method (Potentially All Pairwise RanKings of all possible Alternatives) of discrete choice experiment, implemented using 1000Minds software. Participants are presented with two health states that are defined in terms of just two dimensions, each lasting for 10 years, and asked to choose between them. Using the magical power of computers, an adaptive process identifies further choices, automatically ranking states using transitivity so that people don’t need to complete unnecessary tasks. In order to identify where ‘dead’ sits on the scale, a binary search procedure asks participants to compare EQ-5D states with being dead. What’s especially cool about this process is that everybody who completes it is able to view their own personal value set. These personal value sets can then be averaged to identify a social value set.
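To see how transitivity keeps the number of questions down, here is a minimal sketch of the idea (my own illustration, with hypothetical function names – not the 1000Minds implementation): states are ranked by binary insertion, so each answered question halves the remaining candidate positions and transitively implied comparisons are never asked; ‘dead’ is then placed among the ranked states by the same kind of binary search.

```python
# Illustrative sketch of adaptive pairwise ranking with transitivity pruning,
# and binary-search placement of 'dead'. Not the 1000Minds/PAPRIKA software.

def rank_states(states, ask):
    """Rank states best-first, calling ask(a, b) (True if a is preferred to b)
    only when the answer cannot be inferred from earlier answers.
    Binary insertion keeps the number of questions near n log n."""
    ranked = []  # best first
    for s in states:
        lo, hi = 0, len(ranked)
        while lo < hi:                 # each answer halves the candidate
            mid = (lo + hi) // 2       # interval, so comparisons implied
            if ask(s, ranked[mid]):    # by transitivity are never asked
                hi = mid
            else:
                lo = mid + 1
        ranked.insert(lo, s)
    return ranked

def place_dead(ranked, prefers_state_to_dead):
    """Binary search for the first ranked state considered worse than dead."""
    lo, hi = 0, len(ranked)
    while lo < hi:
        mid = (lo + hi) // 2
        if prefers_state_to_dead(ranked[mid]):
            lo = mid + 1
        else:
            hi = mid
    return lo  # states ranked[lo:] are valued below dead
```

In the real task the `ask` calls are questions put to a participant; here they could be answered from a participant’s (latent) utilities, and averaging the resulting personal value sets gives the social value set described above.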

The authors used their tool to develop an EQ-5D-5L value set for New Zealand (which is where the researchers are based). They recruited 5,112 people in an online panel, such that the sample was representative of the general public. Participants answered 20 DCE questions each, on average, and almost half of them said that they found the questions difficult to answer. The NZ value set showed that anxiety/depression was associated with the greatest disutility, though each dimension had a notably similar level of impact at each level. The value set correlates well with numerous existing value sets.

The main limitation of this research seems to be that only levels 1, 3, and 5 of each EQ-5D-5L domain were included. Including levels 2 and 4 would more than double the number of questions that would need to be answered. It is also concerning that more than half of the sample was excluded due to low data quality. But the authors do a pretty good job of convincing us that this is for the best. Adaptive designs of this kind could be the future of health state valuation, especially if they can be implemented online, at low cost. I expect we’ll be seeing plenty more from PAPRIKA.

Thesis Thursday: Matthew Quaife

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Matthew Quaife who has a PhD from the London School of Hygiene and Tropical Medicine. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Using stated preferences to estimate the impact and cost-effectiveness of new HIV prevention products in South Africa
Supervisors
Fern Terris-Prestholt, Peter Vickerman
Repository link
http://researchonline.lshtm.ac.uk/4646708

Stated preferences for what?

Our main study looked at preferences for new HIV prevention products in South Africa – estimating the uptake and cost-effectiveness of multi-purpose prevention products, which protect against HIV, pregnancy and STIs. You’ll notice that condoms do this, so why even bother? Condom use needs both partners to agree (for the duration of a given activity) and, whilst female partners tend to prefer condom-protected sex, there is lots of evidence that male partners – who also have greater bargaining power in many contexts – do not.

Oral pre-exposure prophylaxis (PrEP), microbicide gels, and vaginal rings are new products which prevent HIV infection. More importantly, they are female-initiated and can generally be used without a male partner’s knowledge. But trials and demonstration projects among women at high risk of HIV in sub-Saharan Africa have shown low levels of uptake and adherence. We used a DCE to inform the development of attractive and usable profiles for these products, and also to estimate how much additional demand – and therefore protection – would be gained from adding contraceptive or STI-protective attributes.

We also elicited the stated preferences of female sex workers for client risk, condom use, and payments for sex. Sex workers can earn more for risky unprotected sex, and we used a repeated DCE to predict risk compensation (i.e. how much condom use would change) if they were to use HIV prevention products.

What did you find most influenced people’s preferences in your research?

Unsurprisingly for products, HIV protection was most important to people, followed by STI and then pregnancy protection. But digging below these averages with a latent class analysis, we found some interesting variation among female respondents: over a third were not concerned with HIV protection at all, instead caring strongly about pregnancy and STI protection. Worryingly, these were more likely to be respondents from high-incidence adolescent and sex worker groups. The remainder of the sample overwhelmingly chose based on HIV protection.

In the second sex worker DCE, we found that using a new HIV prevention product made condoms less important and price more important. We predict that the price premium for unprotected sex would fall by two thirds, and that the amount of condomless sex would double. This is an interesting labour market/economic finding, but – if true – it also has real public health implications. Because these economic changes would move sex workers from multi-purpose condoms to single-purpose products which need high levels of adherence, we thought this would be interesting to model.

How did you use information about people’s preferences to inform estimates of cost-effectiveness?

In two ways. First, we used simple uptake predictions from DCEs to parameterise an HIV transmission model, allowing condom substitution and uptake to vary between condom users and non-users (uptake was double among the latter). We were also able to model the potential uptake of multipurpose products which don’t exist yet – e.g. a pill protecting from HIV and pregnancy. We predict that this combination, in particular, would double uptake among high-risk young women.

Second, we predicted risk compensation among sex workers who chose new products instead of condoms. We were also able to calculate the price elasticity of supply of unprotected sex, which we built into a dynamic transmission model as a determinant of behaviour.

Can discrete choice experiments accurately predict the kinds of behaviours that you were looking at?

To be honest, when I started the PhD I was really sceptical – and I still am to an extent. But two things make me think DCEs can be useful in predicting behaviours.

First is the data. We published a meta-analysis of how well DCEs predict real-world health choices at an individual level. We only found six studies with individual-level data, but these showed DCEs predict with an 88% sensitivity but just a 34% specificity. If a DCE says you’ll do something, you more than likely will – which is important for modelling heterogeneity in uptake. We desperately need more studies following up DCE participants making real-world choices.
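With some made-up counts (not the meta-analysis data), the arithmetic behind those figures looks like this; strictly, ‘if a DCE says you’ll do something, you more than likely will’ is a statement about the positive predictive value, which also depends on how common the behaviour is in the sample:

```python
# Illustrative only: hypothetical counts, not the meta-analysis data.
def dce_accuracy(tp, fn, tn, fp):
    """Summarise how well stated choices predicted revealed choices.
    tp/fn: people who actually chose the option, predicted yes/no;
    tn/fp: people who did not choose it, predicted no/yes."""
    sensitivity = tp / (tp + fn)  # actual choosers the DCE caught
    specificity = tn / (tn + fp)  # actual non-choosers the DCE ruled out
    ppv = tp / (tp + fp)          # P(actually chooses | DCE predicted yes)
    return sensitivity, specificity, ppv

# Hypothetical sample: 100 people take up the product, 100 do not.
sens, spec, ppv = dce_accuracy(tp=88, fn=12, tn=34, fp=66)
# With these counts, sensitivity is 0.88 and specificity 0.34;
# PPV is 88/154 – just over half, consistent with 'more than likely'.
```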

Second is the lack of alternative inputs. Where products are new and potential users are inexperienced, modellers pick an uptake number/range and hope for the best. Where we don’t know efficacy, we may assume that uptake and efficacy are linearly related – but they may not be (e.g. if proportionately more people use a 95% effective product than a 45% effective one). Instead, we might assume uptake and efficacy are independent, but that might sound even less realistic. I think that DCEs can tell us something about these behaviours that are useful for the parameters and structures of models, even if they are not perfect predictors.

You tread the waters of infectious disease modelling in your research – was the incorporation of economic factors a challenge?

It was pretty tricky, though not as challenging as building the simple dynamic transmission model as a first exposure to R. In general, behaviours are pretty crudely modelled in transmission models, largely due to assumptions like random mixing and other population-level dynamics. We made a simple mechanistic model of sex work based on the supply elasticities estimated in the DCE, and ran a few scenarios, each time estimating the impact of prevention products. We simulated the price of unprotected sex falling and quantity rising as above, but also overlaid a few behavioural rules (e.g. Camerer’s constant income hypothesis) to simulate behavioural responses to a fall in overall income. Finally, we thought about competition between product users and non-users, and how much the latter may be affected by the market behaviours of the former. Look out for the paper at Bristol HESG!

How would you like to see research build on your work to improve HIV prevention?

I did a public engagement event last year based on one statistic: if you are a 16-year old girl living in Durban, you have an 80% lifetime risk of acquiring HIV. I find it unbelievable that, in 2018, when millions have been spent on HIV prevention and we have a range of interventions that can prevent HIV, incidence among some groups is still so dramatically and persistently high.

I think research has a really important role in understanding how people want to protect themselves from HIV, STIs, and pregnancy. In addition to highlighting the populations where interventions will be most cost-effective, we show that variation in preferences drives impact. I hope we can keep banging the drum to make attractive and effective options available to those at high risk.

Thesis Thursday: Caroline Vass

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Caroline Vass who has a PhD from the University of Manchester. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Using discrete choice experiments to value benefits and risks in primary care
Supervisors
Katherine Payne, Stephen Campbell, Daniel Rigby
Repository link
https://www.escholar.manchester.ac.uk/uk-ac-man-scw:295629

Are there particular challenges associated with asking people to trade-off risks in a discrete choice experiment?

The challenge of communicating risk in general, not just in DCEs, was one of the things which drew me to the PhD. I’d heard a TED talk discussing a study which asked about people’s understanding of weather forecasts. Although most people think they understand a simple statement like “there’s a 30% chance of rain tomorrow”, few correctly interpreted it as meaning that it will rain on 30% of days like tomorrow. Most interpreted it to mean that it will rain 30% of the time, or in 30% of the area.

My first ever publication was reviewing the risk communication literature, which confirmed our suspicions; even highly educated samples don’t always interpret information as we expect. Therefore, testing if the communication of risk mattered when making trade-offs in a DCE seemed a pretty important topic and formed the overarching research question of my PhD.

Most of your study used data relating to breast cancer screening. What made this a good context in which to explore your research questions?

In the UK, all women are invited to participate in breast screening (either via a GP referral or from 47–50 years of age). This makes every woman a potential consumer and a potential ‘patient’. I conducted a lot of qualitative research to ensure the survey text was easily interpretable, and having a disease which many people had heard of made this easier and allowed us to focus on the risk communication formats. My supervisor Prof. Katherine Payne had also been working on a large evaluation of stratified screening, which made contacting experts, patients and charities easier.

There are also national screening participation figures so we were able to test if the DCE had any real-world predictive value. Luckily, our estimates weren’t too far off the published uptake rates for the UK!

How did you come to use eye-tracking as a research method, and were there any difficulties in employing a method not widely used in our field?

I have to credit my supervisor Prof. Dan Rigby with planting the seed and introducing me to the method. I did a bit of reading into what psychologists thought you could measure using eye-movements and thought it was worth further investigation. I literally found people publishing with the technology at our institution and knocked on doors until someone would let me use it! If the University of Manchester didn’t already have the equipment, it would have been much more challenging to collect these data.

I then discovered the joys of lab-based work which I think many health economists, fortunately, don’t encounter in their PhDs. The shared bench, people messing with your experiment set-up, restricted lab time which needs to be booked weeks in advance etc. I’m sure it will all be worth it… when the paper is finally published.

What are the key messages from your research in terms of how we ought to be designing DCEs in this context?

I had a bit of a null result on the risk communication formats: I found that they didn’t affect preferences. Looking back, I think that might have been due to the types of numbers I was presenting (5%, 10% and 20% are easier to understand), and maybe people already have a lot of knowledge about the risks of breast screening. It certainly warrants further research to see if my finding holds in other settings. There is a lot of support for visual risk communication formats like icon arrays in other literatures, and their addition didn’t seem to do any harm.

Some of the most interesting results came from the think-aloud interviews I conducted with female members of the public. Although I originally wanted to focus on their interpretation of the risk attributes, people started verbalising all sorts of interesting behaviour and strategies. Some of it aligned with economic concepts I hadn’t thought of such as feelings of regret associated with opting-out and discounting both the costs and health benefits of later screens in the programme. But there were also some glaring violations, like ignoring certain attributes, associating cost with quality, using other people’s budget constraints to make choices, and trying to game the survey with protest responses. So perhaps people designing DCEs for benefit-risk trade-offs specifically or in healthcare more generally should be aware that respondents can and do adopt simplifying heuristics. Is this evidence of the benefits of qualitative research in this context? I make that argument here.

Your thesis describes a wealth of research methods and findings, but is there anything that you wish you could have done that you weren’t able to do?

Achieved a larger sample size for my eye-tracking study!