Thesis Thursday: Caroline Vass

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Caroline Vass who has a PhD from the University of Manchester. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Using discrete choice experiments to value benefits and risks in primary care
Supervisors
Katherine Payne, Stephen Campbell, Daniel Rigby
Repository link
https://www.escholar.manchester.ac.uk/uk-ac-man-scw:295629

Are there particular challenges associated with asking people to trade off risks in a discrete choice experiment?

The challenge of communicating risk in general, not just in DCEs, was one of the things that drew me to the PhD. I’d heard a TED talk discussing a study which tested people’s understanding of weather forecasts. Although most people think they understand a simple statement like “there’s a 30% chance of rain tomorrow”, few correctly interpret it as meaning that it will rain on 30% of days like tomorrow. Most take it to mean that it will rain 30% of the time, or over 30% of the area.

My first ever publication was a review of the risk communication literature, which confirmed our suspicions: even highly educated samples don’t always interpret information as we expect. Therefore, testing whether the communication of risk mattered when making trade-offs in a DCE seemed a pretty important topic, and it formed the overarching research question of my PhD.
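
A brief aside from us (not part of the interview): for readers unfamiliar with how such a question might be tested, below is a minimal sketch of a conditional logit fitted to simulated choice data, with a risk-by-format interaction standing in for “does the risk communication format change how people weight risk?”. Everything in it – the attribute levels, the format dummy, the coefficients – is made up for illustration; it is not the analysis from the thesis.

```python
# Illustrative only: a hand-rolled conditional logit on simulated DCE data,
# with a risk-by-format interaction term. All numbers, attribute levels and
# the format dummy are hypothetical.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_resp, n_tasks, n_alts = 300, 8, 2          # respondents, choice tasks, alternatives per task
true_beta = np.array([-0.8, 0.5, 0.0])       # risk, benefit, risk x format (null format effect here)

# Simulated attribute levels and a respondent-level format dummy (1 = shown icon arrays)
risk = rng.choice([0.05, 0.10, 0.20], size=(n_resp, n_tasks, n_alts))
benefit = rng.choice([1.0, 2.0, 3.0], size=(n_resp, n_tasks, n_alts))
fmt = rng.integers(0, 2, size=n_resp)[:, None, None]
X = np.stack([risk * 10, benefit, risk * 10 * fmt], axis=-1)   # risk scaled to per 10 percentage points

# Simulate choices from a logit model (Gumbel errors), then recover beta by maximum likelihood
utility = X @ true_beta + rng.gumbel(size=(n_resp, n_tasks, n_alts))
choice = utility.argmax(axis=-1)

def neg_loglik(beta):
    v = X @ beta                                                   # deterministic utilities
    log_p = v - np.log(np.exp(v).sum(axis=-1, keepdims=True))      # log choice probabilities
    return -np.take_along_axis(log_p, choice[..., None], axis=-1).sum()

result = minimize(neg_loglik, np.zeros(3), method="BFGS")
print("estimated coefficients (risk, benefit, risk x format):", result.x)
```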

Most of your study used data relating to breast cancer screening. What made this a good context in which to explore your research questions?

In the UK, all women are invited to participate in breast screening (either via a GP referral or at 47-50 years old). This makes every woman a potential consumer and a potential ‘patient’. I conducted a lot of qualitative research to ensure the survey text was easily interpretable, and having a disease which many people had heard of made this easier and allowed us to focus on the risk communication formats. My supervisor Prof. Katherine Payne had also been working on a large evaluation of stratified screening, which made contacting experts, patients and charities easier.

There are also national screening participation figures, so we were able to test whether the DCE had any real-world predictive value. Luckily, our estimates weren’t too far off the published uptake rates for the UK!
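
Another illustrative aside (not from the interview): under a logit model, turning a fitted utility difference into a predicted uptake share is just a logistic transform. The numbers below are invented purely to show the arithmetic.

```python
# Made-up numbers only: converting a fitted utility difference into a
# predicted uptake share under a logit model.
import numpy as np

v_screening = 0.9          # illustrative deterministic utility of attending screening
v_opt_out = 0.0            # opt-out normalised to zero
uptake = 1 / (1 + np.exp(-(v_screening - v_opt_out)))
print(f"predicted uptake: {uptake:.0%}")   # about 71% with these invented values
```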

How did you come to use eye-tracking as a research method, and were there any difficulties in employing a method not widely used in our field?

I have to credit my supervisor Prof. Dan Rigby with planting the seed and introducing me to the method. I did a bit of reading into what psychologists thought you could measure using eye movements and thought it was worth further investigation. I literally found people publishing with the technology at our institution and knocked on doors until someone would let me use it! If the University of Manchester hadn’t already had the equipment, it would have been much more challenging to collect these data.

I then discovered the joys of lab-based work, which I think many health economists, fortunately, don’t encounter in their PhDs: the shared bench, people messing with your experiment set-up, restricted lab time that needs to be booked weeks in advance, and so on. I’m sure it will all be worth it… when the paper is finally published.

What are the key messages from your research in terms of how we ought to be designing DCEs in this context?

I had a bit of a null result on the risk communication formats: I found that the format didn’t affect preferences. Looking back, I think that might have been due to the types of numbers I was presenting (5%, 10%, 20% are easier to understand), and maybe people already have a lot of knowledge about the risks of breast screening. It certainly warrants further research to see if my finding holds in other settings. There is a lot of support for visual risk communication formats like icon arrays in other literatures, and their addition didn’t seem to do any harm.

Some of the most interesting results came from the think-aloud interviews I conducted with female members of the public. Although I originally wanted to focus on their interpretation of the risk attributes, people started verbalising all sorts of interesting behaviour and strategies. Some of it aligned with economic concepts I hadn’t thought of, such as feelings of regret associated with opting out, and discounting of both the costs and health benefits of later screens in the programme. But there were also some glaring violations, like ignoring certain attributes, associating cost with quality, using other people’s budget constraints to make choices, and trying to game the survey with protest responses. So perhaps people designing DCEs, whether for benefit-risk trade-offs specifically or in healthcare more generally, should be aware that respondents can and do adopt simplifying heuristics. Is this evidence of the benefits of qualitative research in this context? I make that argument here.

Your thesis describes a wealth of research methods and findings, but is there anything that you wish you could have done that you weren’t able to do?

Achieved a larger sample size for my eye-tracking study!

Chris Sampson’s journal round-up for 25th September 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Good practices for real-world data studies of treatment and/or comparative effectiveness: recommendations from the Joint ISPOR-ISPE Special Task Force on Real-World Evidence in Health Care Decision Making. Value in Health Published 15th September 2017

I have an instinctive mistrust of buzzwords. They’re often used to avoid properly defining something, either because it’s too complicated or – worse – because it isn’t worth defining in the first place. For me, ‘real-world evidence’ falls foul. If your evidence isn’t from the real world, then it isn’t evidence at all. But I do like a good old ISPOR Task Force report, so let’s see where this takes us. Real-world evidence (RWE) and its sibling buzzword real-world data (RWD) relate to observational studies and other data not collected in an experimental setting. The purpose of this ISPOR task force (joint with the International Society for Pharmacoepidemiology) was to prepare some guidelines about the conduct of RWE/RWD studies, with a view to improving decision-makers’ confidence in them. Essentially, the hope is to try and create for RWE the kind of ecosystem that exists around RCTs, with procedures for study registration, protocols, and publication: a noble aim. The authors distinguish between 2 types of RWD study: ‘Exploratory Treatment Effectiveness Studies’ and ‘Hypothesis Evaluating Treatment Effectiveness Studies’. The idea is that the latter test a priori hypotheses, and these are the focus of this report. Seven recommendations are presented: i) pre-specify the hypotheses, ii) publish a study protocol, iii) publish the study with reference to the protocol, iv) enable replication, v) test hypotheses on a separate dataset from the one used to generate the hypotheses, vi) publicly address methodological criticisms, and vii) involve key stakeholders. Fair enough. But these are just good practices for research generally. It isn’t clear how they are in any way specific to RWE. Of course, that was always going to be the case. RWE-specific recommendations would be entirely contingent on whether or not one chose to define a study as using ‘real-world evidence’ (which you shouldn’t, because it’s meaningless). The authors are trying to fit a bag of square pegs into a hole of undefined shape. It isn’t clear to me why retrospective observational studies, prospective observational studies, registry studies, or analyses of routinely collected clinical data should all be treated the same, yet differently to randomised trials. Maybe someone can explain why I’m mistaken, but this report didn’t do it.

Are children rational decision makers when they are asked to value their own health? A contingent valuation study conducted with children and their parents. Health Economics [PubMed] [RePEc] Published 13th September 2017

Obtaining health state utility values for children presents all sorts of interesting practical and theoretical problems, especially if we want to use them in decisions about trade-offs with adults. For this study, the researchers conducted a contingent valuation exercise to elicit the preferences of children (aged 7-19) for reduced risk of asthma attacks in terms of willingness to pay. The study was informed by two preceding studies that sought to identify the best way in which to present health risk and financial information to children. The participating children (n=370) completed questionnaires at school, which asked about socio-demographics, experience of asthma, risk behaviours and altruism. They were reminded (in child-friendly language) about the idea of opportunity cost, and to consider their own budget constraint. Baseline asthma attack risk and 3 risk-reduction scenarios were presented graphically. Two weeks later, the parents completed similar questionnaires. Only 9% of children were unwilling to pay for risk reduction, and most of those said that it was the mayor’s problem! In some senses, the children did a better job than their parents. The authors conducted 3 tests for ‘incorrect’ responses – 14% of adults failed at least one, while only 4% of children did so. Older children demonstrated better scope sensitivity. Of course, children’s willingness to pay was much lower in absolute terms than their parents’, because children have a much smaller budget. As a percentage of the budget, parents were – on average – willing to pay more than children. That seems reassuringly predictable. Boys and fathers were willing to pay more than girls and mothers. Having experience of frequent asthma attacks increased willingness to pay. Interestingly, teenagers were willing to pay less (as a proportion of their budget) than younger children… and so were the teenagers’ parents! Children’s willingness to pay was correlated with their own parents’ at the higher risk reductions, but not at the lowest. This study reports lots of interesting findings and opens up plenty of avenues for future research. But the take-home message is obvious. Kids are smart. We should spend more time asking them what they think.

Journal of Patient-Reported Outcomes: aims and scope. Journal of Patient-Reported Outcomes Published 12th September 2017

Here we have a new journal that warrants a mention. The journal is sponsored by the International Society for Quality of Life Research (ISOQOL), making it a sister journal of Quality of Life Research. One of its Co-Editors-in-Chief is the venerable David Feeny, of HUI fame. They’ll be looking to publish research using PRO(M) data from trials or routine settings, studies of the determinants of PROs, qualitative studies in the development of PROs; anything PRO-related, really. This could be a good journal for more thorough reporting of PRO data that can get squeezed out of a study’s primary outcome paper. Also, “JPRO” is fun to say. The editors don’t mention that the journal is open access, but the website states that it is, so APCs at the ready. ISOQOL members get a discount.

Research and development spending to bring a single cancer drug to market and revenues after approval. JAMA Internal Medicine [PubMed] Published 11th September 2017

We often hear that new drugs are expensive because they’re really expensive to develop. Then we hear about how much money pharmaceutical companies spend on marketing, and we baulk. The problem is, pharmaceutical companies aren’t forthcoming with their accounts, so researchers have to come up with more creative ways to estimate R&D spending. Previous studies have reported divergent estimates. Whether R&D costs ‘justify’ high prices remains an open question. For this study, the authors looked at public data from the US for 10 companies that had only one cancer drug approved by the FDA between 2007 and 2016. Not very representative, perhaps, but useful because it allows for the isolation of the development costs associated with a single drug reaching the market. The median time for drug development was 7.3 years. The most generous estimate of the mean cost of development came in at under a billion dollars, substantially less than some previous estimates. This looks like a bargain; the mean revenue for the 10 companies up to December 2016 was over $6.5 billion. This study may seem a bit back-of-the-envelope in nature. But that doesn’t mean it isn’t accurate. If anything, it warrants more confidence than some previous studies because the methods are entirely transparent.


Chris Sampson’s journal round-up for 1st August 2016

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Individualised and personalised QALYs in exceptional treatment decisions. Journal of Medical Ethics [PubMed] Published 22nd July 2016

I’ve written previously about the notion of individualised cost-effectiveness analysis – or iCEA. With the rise of personalised medicine it will become an increasingly important idea. But it’s one that needs more consideration and research. So I was very pleased to see this essay in JME. The starting point for the author’s argument is that – in some cases – people will be denied treatment that would be cost-effective for them, because it has not been judged to be cost-effective for the target population on average. The author’s focus is upon people at the extremes of the distribution in terms of treatment effectiveness or costs: exceptional cases. There are two features to the argument. First, cost-effectiveness should be individualised in the sense that we should be providing treatment according to the costs and effects for that individual. Second, QALYs should be ‘personalised’ in the sense that the individual’s own (health) preferences should be used to determine whether or not treatment is cost-effective. The author argues that ‘individual funding requests’ (where patients apply for eligibility for treatment that is not normally approved) represent an ideal context in which to use individualised and personalised QALYs. Unfortunately, there are a lot of problems with the arguments presented in this essay, both in terms of their formulation and their practical implications. Some of the ideas are a bit dangerous. That there is no discussion of uncertainty or expectations is telling. If I can find the time I’ll write a full response to the journal. Nevertheless, it’s good to see discussion around this issue.

The value of medicines: a crucial but vague concept. PharmacoEconomics [PubMed] Published 21st July 2016

That we can’t define value is perhaps why the practice of value-based pricing has floundered in the UK. Yes, there’s cost-per-QALY, but none of us really think that’s the end of the value story. This article reports on a systematic review to try and identify how value has been defined in a number of European countries. Apparently none of the identified articles in the published literature included an explicit definition of value. This may not come as a surprise – value is in the eye of the beholder, and analysts defer to decision makers. Some vague definitions were found in the grey literature. The paper highlights a number of studies that demonstrate the ways in which different stakeholders might define value. In the countries that consider costs in reimbursement decisions, QALYs were (unsurprisingly) the most common way of measuring “the value of healthcare products”. But the authors note that most also take into account wider societal benefits and broader aspects of value. The review also identifies safety as being important. The authors seem to long for a universal definition of value, but acknowledge that it cannot be a fixed target. Value is heavily dependent on the context of a decision, so it makes sense to me that there should be inconsistencies. We just need to make sure we know what these inconsistencies are, and that we feel they are just.

The value of mortality risk reductions. Pure altruism – a confounder? Journal of Health Economics Published 19th July 2016

Only the most belligerent of old-school economists would argue that all human choices can be accounted for in purely selfish terms. There’s been much economic research into altruistic preferences. Pure altruism is the idea that people might be concerned with the general welfare of others, rather than just specific aspects of it. In the context of tax-funded initiatives it can be either positive or negative, as people could either be willing to pay more for benefits to other people or less due to a reluctance to enforce higher costs (to say nothing of sadism). This study reports on a discrete choice experiment regarding mortality reductions through traffic safety. Pure altruism is tested by the randomised inclusion of a statement about the amount paid by other people. An additional question about what the individual thinks the average citizen would choose is used to identify the importance of pure altruism (if it exists). The findings are both heartening and disappointing. People are considerate of other people’s preferences, but unfortunately they think that other people don’t value mortality reductions as highly as they do. Therefore, individuals reduce their own willingness to pay, resulting in negative altruism. Furthermore, the analysis suggests that this is due to (negative) pure altruism, because the stated values increase when the notion of coercive taxation is removed.

Realism and resources: towards more explanatory economic evaluation. Evaluation Published July 2016

This paper was doing the rounds on Twitter, having piqued people’s interest with an apparently alternative approach to economic evaluation. Realist evaluation – we are told – is expressed primarily as a means of answering the question ‘what works for whom, under what circumstances and why?’ Economic evaluation, on the other hand, might be characterised as ‘does this work for these people under these circumstances?’ We’re not really bothered why. Realist evaluation is concerned with the theory underlying the effectiveness of an intervention – it is seen as necessary to identify the cause of the benefit. This paper argues for more use of realist evaluation approaches in economic evaluation, providing an overview of the two approaches. The authors present an example of shared care and review literature relating to cost-effectiveness-specific ‘programme theories’: the mechanisms affecting resource use. The findings are vague and inconclusive, and for me this is a problem – I’m not sure what we’ve learned. I am somewhat on the fence. I agree with the people who think we need more data to help us identify causality and support theories. I agree with the people who say we need to better recognise context and complexity. But alternative approaches to economic evaluation like PBMA could handle this better without any express use of ‘realist evaluation’. And I agree that we could learn a lot from more qualitative analysis. I agree with most of what this article’s authors say. But I still don’t see how realist evaluation helps us get there any more than just doing economic evaluation better. If understanding the causal pathways is relevant to decision-making (i.e., understanding it could change decisions in certain contexts) then we ought to be considering it in economic evaluation. If it isn’t, then why would we bother? This article demonstrates that it is possible to carry out realist evaluation to support cost-effectiveness analysis, but it isn’t clear why we should. But then, that might just be because I don’t understand realist evaluation.

Photo credit: Antony Theobald (CC BY-NC-ND 2.0)