Thesis Thursday: Caroline Vass

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Caroline Vass who has a PhD from the University of Manchester. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Using discrete choice experiments to value benefits and risks in primary care
Supervisors
Katherine Payne, Stephen Campbell, Daniel Rigby
Repository link
https://www.escholar.manchester.ac.uk/uk-ac-man-scw:295629

Are there particular challenges associated with asking people to trade-off risks in a discrete choice experiment?

The challenge of communicating risk in general, not just in DCEs, was one of the things that drew me to the PhD. I’d heard a TED talk discussing a study which asked about people’s understanding of weather forecasts. Although most people think they understand a simple statement like “there’s a 30% chance of rain tomorrow”, few correctly interpret it as meaning it will rain on 30% of days like tomorrow. Most take it to mean it will rain 30% of the time, or over 30% of the area.

My first ever publication was a review of the risk communication literature, which confirmed our suspicions: even highly educated samples don’t always interpret risk information as we expect. Testing whether the communication of risk mattered when making trade-offs in a DCE therefore seemed a pretty important topic, and it formed the overarching research question of my PhD.

Most of your study used data relating to breast cancer screening. What made this a good context in which to explore your research questions?

In the UK, all women are invited to participate in breast screening (either from a GP referral or at 47-50 years old). This makes every woman a potential consumer and a potential ‘patient’. I conducted a lot of qualitative research to ensure the survey text was easily interpretable, and focusing on a disease which many people had heard of made this easier and allowed us to concentrate on the risk communication formats. My supervisor Prof. Katherine Payne had also been working on a large evaluation of stratified screening, which made contacting experts, patients and charities easier.

There are also national screening participation figures so we were able to test if the DCE had any real-world predictive value. Luckily, our estimates weren’t too far off the published uptake rates for the UK!

How did you come to use eye-tracking as a research method, and were there any difficulties in employing a method not widely used in our field?

I have to credit my supervisor Prof. Dan Rigby with planting the seed and introducing me to the method. I did a bit of reading into what psychologists thought you could measure using eye-movements and thought it was worth further investigation. I literally found people publishing with the technology at our institution and knocked on doors until someone would let me use it! If the University of Manchester didn’t already have the equipment, it would have been much more challenging to collect these data.

I then discovered the joys of lab-based work which I think many health economists, fortunately, don’t encounter in their PhDs. The shared bench, people messing with your experiment set-up, restricted lab time which needs to be booked weeks in advance etc. I’m sure it will all be worth it… when the paper is finally published.

What are the key messages from your research in terms of how we ought to be designing DCEs in this context?

I had a bit of a null result on the risk communication formats: I found they didn’t affect preferences. Looking back, I think that might have been down to the types of numbers I was presenting (5%, 10% and 20% are easier to understand), and maybe people already have a lot of knowledge about the risks of breast screening. It certainly warrants further research to see if my finding holds in other settings. There is a lot of support for visual risk communication formats like icon arrays in other literatures, and their addition didn’t seem to do any harm.

Some of the most interesting results came from the think-aloud interviews I conducted with female members of the public. Although I originally wanted to focus on their interpretation of the risk attributes, people started verbalising all sorts of interesting behaviours and strategies. Some aligned with economic concepts I hadn’t thought of, such as feelings of regret associated with opting out and discounting of both the costs and health benefits of later screens in the programme. But there were also some glaring violations, like ignoring certain attributes, associating cost with quality, using other people’s budget constraints to make choices, and trying to game the survey with protest responses. So perhaps people designing DCEs for benefit-risk trade-offs specifically, or in healthcare more generally, should be aware that respondents can and do adopt simplifying heuristics. Is this evidence of the benefits of qualitative research in this context? I make that argument here.

Your thesis describes a wealth of research methods and findings, but is there anything that you wish you could have done that you weren’t able to do?

Achieved a larger sample size for my eye-tracking study!

Chris Sampson’s journal round-up for 25th September 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Good practices for real-world data studies of treatment and/or comparative effectiveness: recommendations from the Joint ISPOR-ISPE Special Task Force on Real-World Evidence in Health Care Decision Making. Value in Health Published 15th September 2017

I have an instinctive mistrust of buzzwords. They’re often used to avoid properly defining something, either because it’s too complicated or – worse – because it isn’t worth defining in the first place. For me, ‘real-world evidence’ falls foul of this. If your evidence isn’t from the real world, then it isn’t evidence at all. But I do like a good old ISPOR Task Force report, so let’s see where this takes us. Real-world evidence (RWE) and its sibling buzzword real-world data (RWD) relate to observational studies and other data not collected in an experimental setting. The purpose of this ISPOR task force (joint with the International Society for Pharmacoepidemiology) was to prepare some guidelines about the conduct of RWE/RWD studies, with a view to improving decision-makers’ confidence in them. Essentially, the hope is to create for RWE the kind of ecosystem that exists around RCTs, with procedures for study registration, protocols, and publication: a noble aim. The authors distinguish between two types of RWE study: ‘Exploratory Treatment Effectiveness Studies’ and ‘Hypothesis Evaluating Treatment Effectiveness Studies’. The idea is that the latter test a priori hypotheses, and these are the focus of this report. Seven recommendations are presented: i) pre-specify the hypotheses, ii) publish a study protocol, iii) publish the study with reference to the protocol, iv) enable replication, v) test hypotheses on a dataset separate from the one used to generate the hypotheses, vi) publicly address methodological criticisms, and vii) involve key stakeholders. Fair enough. But these are just good practices for research generally. It isn’t clear how they are in any way specific to RWE. Of course, that was always going to be the case. RWE-specific recommendations would be entirely contingent on whether or not one chose to define a study as using ‘real-world evidence’ (which you shouldn’t, because it’s meaningless).
The authors are trying to fit a bag of square pegs into a hole of undefined shape. It isn’t clear to me why retrospective observational studies, prospective observational studies, registry studies, or analyses of routinely collected clinical data should all be treated the same, yet differently to randomised trials. Maybe someone can explain why I’m mistaken, but this report didn’t do it.

Are children rational decision makers when they are asked to value their own health? A contingent valuation study conducted with children and their parents. Health Economics [PubMed] [RePEc] Published 13th September 2017

Obtaining health state utility values for children presents all sorts of interesting practical and theoretical problems, especially if we want to use them in decisions about trade-offs with adults. For this study, the researchers conducted a contingent valuation exercise to elicit children’s (aged 7-19) preferences for reduced risk of asthma attacks in terms of willingness to pay. The study was informed by two preceding studies that sought to identify the best way in which to present health risk and financial information to children. The participating children (n=370) completed questionnaires at school, which asked about socio-demographics, experience of asthma, risk behaviours and altruism. They were reminded (in child-friendly language) about the idea of opportunity cost, and to consider their own budget constraint. Baseline asthma attack risk and 3 risk-reduction scenarios were presented graphically. Two weeks later, the parents completed similar questionnaires. Only 9% of children were unwilling to pay for risk reduction, and most of those said that it was the mayor’s problem! In some senses, the children did a better job than their parents. The authors conducted 3 tests for ‘incorrect’ responses – 14% of adults failed at least one, while only 4% of children did so. Older children demonstrated better scope sensitivity. Of course, children’s willingness to pay was much lower in absolute terms than their parents’, because children have a much smaller budget. As a percentage of the budget, parents were – on average – willing to pay more than children. That seems reassuringly predictable. Boys and fathers were willing to pay more than girls and mothers. Having experience of frequent asthma attacks increased willingness to pay. Interestingly, teenagers were willing to pay less (as a proportion of their budget) than younger children… and so were the teenagers’ parents! 
Children’s willingness to pay was correlated with their parents’ at the higher risk reductions, but not at the lowest. This study reports lots of interesting findings and opens up plenty of avenues for future research. But the take-home message is obvious. Kids are smart. We should spend more time asking them what they think.

Journal of Patient-Reported Outcomes: aims and scope. Journal of Patient-Reported Outcomes Published 12th September 2017

Here we have a new journal that warrants a mention. The journal is sponsored by the International Society for Quality of Life Research (ISOQOL), making it a sister journal of Quality of Life Research. One of its Co-Editors-in-Chief is the venerable David Feeny, of HUI fame. They’ll be looking to publish research using PRO(M) data from trials or routine settings, studies of the determinants of PROs, and qualitative studies in the development of PROs; anything PRO-related, really. This could be a good journal for more thorough reporting of PRO data that gets squeezed out of a study’s primary outcome paper. Also, “JPRO” is fun to say. The editors don’t mention that the journal is open access, but the website states that it is, so APCs at the ready. ISOQOL members get a discount.

Research and development spending to bring a single cancer drug to market and revenues after approval. JAMA Internal Medicine [PubMed] Published 11th September 2017

We often hear that new drugs are expensive because they’re really expensive to develop. Then we hear about how much money pharmaceutical companies spend on marketing, and we baulk. The problem is, pharmaceutical companies aren’t forthcoming with their accounts, so researchers have to come up with more creative ways to estimate R&D spending. Previous studies have reported divergent estimates. Whether R&D costs ‘justify’ high prices remains an open question. For this study, the authors looked at public data from the US for 10 companies that had only one cancer drug approved by the FDA between 2007 and 2016. Not very representative, perhaps, but useful because it allows for the isolation of the development costs associated with a single drug reaching the market. The median time for drug development was 7.3 years. The most generous estimate of the mean cost of development came in at under a billion dollars; substantially less than some previous estimates. This looks like a bargain: the mean revenue for the 10 companies up to December 2016 was over $6.5 billion. This study may seem a bit back-of-the-envelope in nature. But that doesn’t mean it isn’t accurate. If anything, it inspires more confidence than some previous studies because the methods are entirely transparent.


Thesis Thursday: Sara Machado

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Sara Machado who graduated with a PhD from Boston University. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Essays on the economics of blood donations
Supervisors
Daniele Paserman, Johannes Schmieder, Albert Ma
Repository link
https://open.bu.edu/pdfpreview/bitstream/handle/2144/19216/Machado_bu_0017E_12059.pdf

What makes blood donation an interesting context for economic research?

I’m generally interested in markets in which there is no price mechanism to help supply and demand meet. There are several examples of such markets in the health field, such as organ, bone marrow, and blood donation. In general, all altruistic markets share this feature. I define altruistic markets as markets with volunteer supply and no market price, which are therefore mainly driven by social preferences.

In a way, the absence of a price leads to a very traditional coordination problem. However, it requires not-so-traditional solutions, such as market design, registries, and different types of incentives, due to many historical, political, and ethical constraints (which leads us to the concept of repugnant markets, per Roth (2007)). The specific constraints for blood donation are outlined in Slonim et al’s The Market for Blood, which also summarises the main experimental findings regarding the effects of incentives on blood donations. The blood donation market is the perfect setup for studying altruistic markets, not only because of its volunteer supply but also because donation is a potentially repeated behaviour. Moreover, a donation is not made to a specific patient, but to the supply of blood in general. Social preferences, as well as risk and time preferences, play a key role in minimizing market imbalances.

How did you come to identify the specific research questions for your PhD?

I was quite fortunate, due to an unfortunate situation… There was a notorious blood shortage in Portugal when I started thinking about possible topics for my dissertation. It got a lot of media coverage, possibly due to political factors, since the shortage happened shortly after a change in the incentives for blood donors. My first question, which eventually became the main chapter of my dissertation, was whether there was a causal relationship.

The second chapter is the outcome of spending many hours cleaning the data, to tell you the truth. I started to realize that there are many other factors determining blood donation behaviour. All non-monetary aspects of the donation process are very relevant in determining future donation behaviour (also highlighted by Slonim et al (2014) and Lacetera et al (2010)). I show that time can be a far more important currency than other forms of incentives.

Finally, I realized how important it would be for me to be able to measure social preferences to continue my research on altruistic markets, and I joined a team led by Matteo Galizzi, which is working on measuring the preferences of a representative sample of the UK population. My third chapter is the first installment of our work in this domain.

Your research looked at people’s behaviour. How does it relate to the growing recognition that people make ‘irrational’ choices?

The more I look into this, the more I think we have to be careful about generalizing irrationality. There is nothing “irrational” in blood donors’ behaviour, for the most part. So far, I have only resorted to very neoclassical models to explain donors’ behaviour – and they have worked just fine.

The way I see it, there are two separate aspects to take into account. First, the market response. It is worrisome if we find market responses that are only possible if the majority of agents are making “irrational choices”. Those markets need tailored interventions to inform the decision-making process.

The second aspect zooms in on individual decision-making. In this case, it is important to determine whether there are psychological biases leading to suboptimal, or irrational, choices.

One might argue that a blood donation due to an emotional response to some stimuli is “irrational”. I strongly disagree with that categorization. For example, there is nothing suboptimal in donating blood as a sign of gratitude to previous blood donors.

The main message is that it is important to identify behavioural biases that lead to inefficient market outcomes, but “irrational choices” is too wide an umbrella term and should be used with caution.

Are any of your key findings generalisable to settings other than blood donation?

I think two key findings are quite general. The first is that it is possible to design incentive schemes that bypass the question of crowding out intrinsic motivation. This is a fairly general issue, ranging from motivating employees in the workplace, to designing incentive schemes for physicians, to eliciting charitable giving, to name a few examples. As long as the behaviour is repeated, the result holds. This highlights a related point: the importance of placing lab and isolated field experimental evidence in perspective when informing policy making. There is extensive experimental literature on the crowding out of intrinsic motivation, but very little has been done at the market level or with a longitudinal component. This has limited the ability to take into account the advantages of focusing on repeated blood donation, on the one hand, and of incorporating demand-side responses (namely, by increasing the number of blood drives), on the other.

The second key aspect is the advantage of using time as the main opportunity cost faced by a volunteer supply, in the context of prosocial behaviour.

Based on your research, what might an optimal blood donation policy look like?

I believe there are two key ingredients in the design of the optimal blood donation policy: 1) promoting blood donation as a repeated behaviour; and 2) increasing the responsiveness of blood donation services in order to minimize demand and supply imbalances.

The first aspect can be addressed by designing incentive schemes targeted at repeat donors, with no rewards for non-regular behaviour. The second would greatly benefit from the existence of a blood donor registry, similar to the one already in place for bone marrow donation. This registry would allow regular blood donors to be called to donate when their blood is needed, minimizing waste in the system. The organization of blood drives would also be more efficient if such a system were in place.

These two components contribute to the development of the blood donor identity, which guarantees a steady supply of blood, whenever necessary.