Thesis Thursday: Edward Webb

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Edward Webb who graduated with a PhD from the University of Copenhagen. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Attention and perception in decision-making and interactions
Supervisors
Alexander Sebald, Peter Norman Sørensen
Repository link
http://www.econ.ku.dk/forskning/Publikationer/ph.d_serie_2007-/Ph.D.181.pdf

Attention and perception aren’t things we often talk about in health economics. Why are they important?

There’s been a lot of work done on attention and perception in economics recently, which I think is a great development. They are really vital topics: unless you know how people perceive the information available to them, and which aspects of their environment are most likely to command their attention, it’s difficult to forecast their behaviour.

I think attention and perception will become more widely talked about in health in future, as there are many cases in which they have a lot of relevance. For example, you might want to know whether rare symptoms grab doctors’ attention because they’re unusual, or whether they go unnoticed because doctors aren’t expecting them. (There’s a great study by Drew, Vo and Wolfe where radiologists looking at CT scans of the chest failed to notice a picture of a gorilla embedded in them by the experimenters.)

Or if you’re planning some dietary intervention, you might want to take into account how unhealthy food such as pizza and chips attracts people’s attention much more than healthy food, and to look at why this is the case.

What can the new theoretical frameworks described in your thesis tell us about individual behaviour?

Most of the literature in psychology is about how individuals behave. In my thesis I tried to move beyond studying individual decision-making and to look at how the effects of attention and perception change across different economic environments, as these effects can often be counter-intuitive.

As an example, in one of the chapters of my thesis I explore the effects of individuals having limited ability to tell the quality of different products apart. It turns out that the effects on a market can be radically different depending on whether there are fixed or marginal costs of quality.

I was also very interested in looking at how individuals with limited or biased attention interact with profit-maximising firms. There’s an expectation that companies will rip people off and exploit them, and that can certainly happen, but I was able to show that it isn’t necessarily the case. The case I mentioned above, where individuals have limited ability to tell products’ quality apart, is a good example. When firms rely on product differentiation to earn profits, they’re actually harmed by people with this limitation, rather than exploiting them.

Did you find yourself reaching beyond the economics literature for guidance, either in the subject matter or the techniques that you used?

Yes, I read quite a lot outside the standard economics literature during my thesis. Behavioural and experimental economics more or less sits on the boundary between economics and psychology, so it felt very natural to seek guidance from other disciplines. This was especially the case for the eye-tracking experiment that I carried out with the help of my co-authors Andreas Gotfredsen, Carsten S. Nielsen and Alexander Sebald. I needed to learn quite a bit about psychological work on visual attention.

I like that economics is as much a set of analytic tools as a subject area, which gives it the advantage of being able to take on nontraditional topics.

You studied in Denmark, yet your thesis is written in English. Did this raise any additional challenges in completing your PhD?

Danish people speak better English than I do! Language really wasn’t a problem at all at work, since English is very much the language of academia. Seminars were in English, PhD students and a lot of master’s students wrote their theses in English, and nearly all postgraduate and some undergraduate teaching was in English. I felt quite privileged to have the advantage of being a native speaker, and appreciative that most of my colleagues were fine with working in a second language. That’s why I was always very willing to help people out with proofreading English. I only hope I didn’t make too many mistakes!

On the social side, you can get away with living in Denmark without speaking Danish, and many people do. Indeed, I probably wouldn’t have made the effort of becoming a (moderate) Danish speaker if my partner wasn’t Danish.

Copenhagen, and Denmark in general, is a fantastic place to live and work, and I’d urge anyone who is thinking about moving there not to be put off by the language barrier.

How did your experiences during your PhD contribute to your decision to work in the field of health economics?

The question makes it sound like I had a coherent plan! In reality, I’m terrible at thinking about the long term. (I must be a natural Keynesian.) Ironically, I ended up moving back to the UK after I graduated because of my Danish partner, as she had found a job here. She also works in health, as a medical physicist and cancer researcher at Leeds. I applied for economics jobs in the area and was over the moon to secure a place at the Academic Unit of Health Economics at Leeds.

It’s a little more applied and hands-on than what I was working on before, which is great. I came into economics because I was interested in finding out how people act and interact, and so it’s fantastic to have the opportunity now to work principally with discrete choice experiments, trying to work out patients’ and clinicians’ preferences.

I’ve really enjoyed my time since I started at Leeds a few months ago. The environment is very stimulating, and all my colleagues are extremely friendly and easy-going, always willing to help out or discuss an interesting new idea.


Chris Sampson’s journal round-up for 6th February 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A review of NICE methods and processes across health technology assessment programmes: why the differences and what is the impact? Applied Health Economics and Health Policy [PubMed] Published 27th January 2017

Depending on the type of technology under consideration, NICE adopts a variety of approaches in coming up with their recommendations. Different approaches might result in different decisions, which could undermine allocative efficiency. This study explores this possibility. Data were extracted from the manuals and websites for 5 programmes, under the themes of ‘remit and scope’, ‘process of assessment’, ‘methods of evaluation’ and ‘appraisal of evidence’. Semi-structured interviews were conducted with 5 people with expertise in each of the 5 programmes. Results are presented in a series of tables – one for each theme – outlining the essential characteristics of the 5 programmes. In their discussion, the authors then go on to consider how the identified differences might impact on efficiency from either a ‘utilitarian’ health-maximisation perspective or NICE’s egalitarian aim of ensuring adequate levels of health care. Not all programmes deliver recommendations with mandatory funding status, and it is only the ones that do that have a formal appeals process. Allowing for local rulings on funding could be good or bad news for efficiency, depending on the capacity of local decision makers to conduct economic evaluations (so that means probably bad news). At the same time, regional variation could undermine NICE’s fairness agenda. The evidence considered by the programmes varies, from a narrow focus on clinical and cost-effectiveness to the incorporation of budget impact and wider ethical and social values. Only some of the programmes have reference cases, and those that do are the ones that use cost-per-QALY analysis, which probably isn’t a coincidence. The fact that some programmes use outcomes other than QALYs obviously has the potential to undermine health-maximisation. Most differences are born of practicality; there’s no point in insisting on a CUA if there is no evidence at all to support one – the appraisal would simply not happen. The very existence of alternative programmes indicates that NICE is not simply concerned with health-maximisation. Additional weight is given to rare conditions, for example. And NICE want to encourage research and innovation. So it’s no surprise that we need to take into account NICE’s egalitarian view to understand the type of efficiency for which it strives.

Economic evaluations alongside efficient study designs using large observational datasets: the PLEASANT trial case study. PharmacoEconomics [PubMed] Published 21st January 2017

One of the worst things about working on trial-based economic evaluations is going to lots of effort to collect lots of data, then finding that at the end of the day you don’t have much to show for it. Nowadays, the health service routinely collects a lot of data for other purposes. There have been proposals to use these data – instead of prospectively collecting data – to conduct clinical trials. This study explores the potential for doing an economic evaluation alongside such a trial. The study uses CPRD data, including diagnostic, clinical and resource use information, for 8,608 trial participants. The intervention was a letter sent out in the hope of reducing unscheduled medical contacts due to asthma exacerbation in children starting a new school year. QALYs couldn’t be estimated using the CPRD data, so values were derived from the literature and estimated on the basis of exacerbations indicated by changes in prescriptions or hospitalisations. Note here the potentially artificial correlation between costs and outcomes that this creates, thus somewhat undermining the benefit of some good old bootstrapping. The results suggest the intervention is cost-saving with little impact on QALYs. Lots of sensitivity analyses are conducted, which are interesting in themselves and say something about the concerns around some of the structural assumptions. The authors outline the pros and cons of the approach. It’s an important discussion as it seems that studies like this are going to become increasingly common. Regarding data collection, there’s little doubt that this approach is more efficient, and it should be particularly valuable in the evaluation of public health and service delivery type interventions. The problem is that the study is not able to use individual-level cost and outcome data from the same people, which is what sets a trial-based economic evaluation apart from a model-based study. So for me, this isn’t really a trial-based economic evaluation. Indeed, the analysis incorporates a Markov-type model of exacerbations. It’s a different kind of beast, which incorporates aspects of modelling and aspects of trial-based analysis, along with some unique challenges of its own. There’s a lot more methodological work that needs to be done in this area, but this study demonstrates that it could be fruitful.
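To see why that matters, it helps to recall what bootstrapping buys you in a conventional trial-based analysis: patients are resampled with their cost and QALY values kept together, so each bootstrap draw preserves the within-patient correlation between costs and outcomes. Here is a minimal sketch in Python; the data, cost values and willingness-to-pay threshold are all invented for illustration and have nothing to do with the PLEASANT study itself.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical individual-level trial data (invented, not from PLEASANT):
# per-patient costs and QALYs in a control arm and an intervention arm.
n = 500
cost_ctrl = rng.gamma(shape=2.0, scale=150.0, size=n)
cost_int = rng.gamma(shape=2.0, scale=140.0, size=n)
qaly_ctrl = rng.normal(loc=0.900, scale=0.05, size=n)
qaly_int = rng.normal(loc=0.901, scale=0.05, size=n)

def bootstrap_ce(c0, q0, c1, q1, reps=5000):
    """Resample patients with replacement, keeping each patient's
    (cost, QALY) pair intact so their correlation is preserved."""
    out = np.empty((reps, 2))
    for r in range(reps):
        i0 = rng.integers(0, len(c0), size=len(c0))
        i1 = rng.integers(0, len(c1), size=len(c1))
        out[r] = (c1[i1].mean() - c0[i0].mean(),
                  q1[i1].mean() - q0[i0].mean())
    return out

deltas = bootstrap_ce(cost_ctrl, qaly_ctrl, cost_int, qaly_int)

# Probability of cost-effectiveness at a £20,000 per QALY threshold.
wtp = 20_000
net_benefit = wtp * deltas[:, 1] - deltas[:, 0]
print(f"P(cost-effective at £{wtp:,}/QALY) = {(net_benefit > 0).mean():.2f}")
```

When QALYs are instead imputed from the same prescription and hospitalisation events that drive the costs, the resampled pairs no longer carry independent information about that correlation, which is the concern raised above.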

“Too much medicine”: insights and explanations from economic theory and research. Social Science & Medicine [PubMed] Published 18th January 2017

Overconsumption of health care represents an inefficient use of resources, and so we wouldn’t recommend it. But is that all we – as economists – have to say on the matter? This study sought to dig a little deeper. A literature search was conducted to establish a working definition of overconsumption. Related notions such as overdiagnosis, overtreatment, overuse, low-value care, overmedicalisation and even ‘pharmaceuticalisation’ all crop up. The authors introduce ‘need’ as a basis for understanding overconsumption, which represents health care that should never be considered ‘needed’. A useful distinction is identified between misconsumption – where an individual’s own consumption is detrimental to their own well-being – and overconsumption, which can be understood as having a negative effect on social welfare. Note that in a collectively funded system the two concepts aren’t entirely distinguishable. Misconsumption becomes the focus of the paper, as avoiding harm to patients has been the subject of the “too much medicine” movement. I think this is a shame, and not really consistent with an economist’s usual perspective. The authors go on to discuss issues such as moral hazard, supplier-induced demand, provider payment mechanisms, ‘indication creep’, regret theory, and physicians’ positional consumption, and whether or not such phenomena might lead to individual welfare losses and thus be considered causes of misconsumption. The authors provide a neat diagram showing the various causes of misconsumption on a plane. One dimension represents the extent to which the cause is imperfect knowledge or imperfect agency, and the other the degree to which the cause is at the individual or market level. There’s a big gap in the top right, where market level causes meet imperfect knowledge. This area could have included patent systems, research fraud and dodgy Pharma practices. Or maybe just a portrait of Ben Goldacre for shorthand. There are some warnings about the (limited) extent to which market reforms might address misconsumption, and the proposed remedy for overconsumption is not really an economic one. Rather, a change in culture is prescribed. More research looking at existing treatments rather than technology adoption, and investigating subgroup effects, is also recommended. The authors further suggest collaboration between health economists and ecological economists.


Sam Watson’s journal round-up for 23rd January 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Short-term and long-term effects of GDP on traffic deaths in 18 OECD countries, 1960–2011. Journal of Epidemiology and Community Health [PubMed] Published February 2017

Understanding the relationships between different aspects of the economy or society in the aggregate can reveal knowledge about the world. However, such analyses are more complicated than analyses of individuals who either did or did not receive an intervention, as the objects of aggregate analyses don’t ‘exist’ per se but are rather descriptions of the average behaviour of the system. To make sense of these analyses, an understanding of the system is therefore required. On these grounds I am a little unsure of the results of this paper, which estimates the effect of GDP on road traffic fatalities in OECD countries over time. It is noted that previous studies have shown that in the short run, road traffic deaths are procyclical, but in the long run they have declined, likely as a result of improved road and car safety. Indeed, this is what they find with their data and models. But what does this result mean in the long run? Have they picked up anything more than a correlation with time? Time is not included in the otherwise carefully specified models, so is the conclusion to policy makers, ‘just keep doing what you’re doing, whatever that is…’? Models of aggregate phenomena can be among the most interesting, but also among the least convincing (my own included!). That being said, this is better than most.
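To make the ‘correlation with time’ worry concrete, here is a stylised sketch in Python (using statsmodels) of a country fixed-effects regression of traffic deaths on GDP, estimated with and without a time trend. The panel is simulated purely for illustration and bears no relation to the paper’s data; the point is that when both series trend, the GDP coefficient can soak up a secular decline that has nothing to do with the business cycle.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented illustrative panel: 18 countries, 1960-2011 (mimicking the
# paper's setting, not its data).
rng = np.random.default_rng(0)
countries = [f"c{i}" for i in range(18)]
years = np.arange(1960, 2012)
df = pd.DataFrame(
    [(c, y) for c in countries for y in years], columns=["country", "year"]
)
# GDP grows steadily; deaths decline steadily (safety improvements).
# Neither depends on the other beyond sharing a trend.
df["log_gdp"] = 10 + 0.02 * (df["year"] - 1960) + rng.normal(0, 0.05, len(df))
df["log_deaths"] = 5 - 0.03 * (df["year"] - 1960) + rng.normal(0, 0.05, len(df))

# Country fixed effects, no time trend: GDP soaks up the secular decline.
m1 = smf.ols("log_deaths ~ log_gdp + C(country)", data=df).fit()
# Adding a linear trend separates the secular decline from GDP per se.
m2 = smf.ols("log_deaths ~ log_gdp + year + C(country)", data=df).fit()

print("GDP coefficient without trend:", round(m1.params["log_gdp"], 3))
print("GDP coefficient with trend:   ", round(m2.params["log_gdp"], 3))
```

In this simulated example the first coefficient is large and negative while the second is near zero, despite GDP having no effect at all on deaths by construction.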

Sources of geographic variation in health care: Evidence from patient migration. Quarterly Journal of Economics [RePEc] Published November 2016

There are large geographic differences in health care utilisation, both between countries and within countries. In the US, for example, the average Medicare enrollee spent around $14,400 in 2010 in Miami, Florida compared with around $7,800 in Minneapolis, Minnesota, even after adjusting for demographic differences. However, higher health care spending is generally not associated with better health outcomes. There is therefore an incentive for policy makers to legislate to reduce this disparity, but what will be effective depends on the causes of the variation. On one side, doctors may be dispensing treatments differently; for example, we previously featured a paper looking at the variation in overuse of medical testing by doctors. On the other side, patients may be sicker or have differing preferences on the intensity of their treatment. To try to distinguish between these two possible sources of variation, this paper uses geographical migration to look at utilisation among people who move from one area to another. They find that (a very specific) 47% of the difference in use of health care is attributable to patient characteristics. However, I (as ever) remain sceptical: a previous post brought up the challenge of ‘transformative treatments’, which may apply here as this paper has to rely on the assumption that patient preferences remain the same when they move. If moving from one city to another changes your preferences over health care, then their identification strategy no longer works well.
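The logic of the movers design fits in a few lines. The sketch below (Python, with invented data) is a deliberate simplification of the paper’s event-study specification, but it captures the idea: if utilisation jumps all the way to the destination area’s average when someone moves, place explains the variation; if it barely moves, patient characteristics do.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stylised movers data (invented): each mover's log utilisation is
# observed before and after moving, along with origin and destination
# area averages.
n_movers = 2000
origin_avg = rng.normal(9.0, 0.5, n_movers)   # origin area average
dest_avg = rng.normal(9.0, 0.5, n_movers)     # destination area average
theta_true = 0.5                               # true place share
patient_effect = rng.normal(0, 0.4, n_movers)  # persistent patient component

y_before = patient_effect + theta_true * origin_avg + rng.normal(0, 0.1, n_movers)
y_after = patient_effect + theta_true * dest_avg + rng.normal(0, 0.1, n_movers)

# Regress the change in utilisation on the destination-origin gap.
# The slope estimates the share of area variation due to place;
# one minus the slope is the share due to patients.
gap = dest_avg - origin_avg
delta_y = y_after - y_before
slope = np.polyfit(gap, delta_y, 1)[0]
print(f"Place share ~ {slope:.2f}; patient share ~ {1 - slope:.2f}")
```

The ‘transformative treatments’ critique bites exactly here: the patient component is assumed to stay fixed through the move, and if it doesn’t, the slope no longer cleanly separates place from patient.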

Seeing beyond 2020: an economic evaluation of contemporary and emerging strategies for elimination of Trypanosoma brucei gambiense. Lancet Global Health Published November 2016

African sleeping sickness, or human African trypanosomiasis, is targeted for eradication in the next decade. However, the strategy to do so has not been determined, nor whether any such strategy would be a cost-effective use of resources. This paper aims to model these different strategies to estimate incremental cost-effectiveness ratios (ICERs). Infectious disease presents an interesting challenge for health economic evaluation, as the disease transmission dynamics need to be captured over time, which they achieve here with a ‘standard’ epidemiological model using ordinary differential equations. They find that, to reach elimination targets, an approach incorporating case detection, treatment and vector control would be required.
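For readers who haven’t met one, a ‘standard’ epidemiological model of this kind can be sketched very compactly. The Python example below is a generic SIR model with invented parameters, far simpler than anything you would need for a vector-borne disease like this one, but it shows how an intervention enters the dynamics and how the resulting trajectory becomes the input to a cost-effectiveness calculation.

```python
import numpy as np
from scipy.integrate import odeint

def sir(state, t, beta, gamma):
    """Generic susceptible-infected-recovered dynamics (population shares)."""
    s, i, r = state
    ds = -beta * s * i              # new infections
    di = beta * s * i - gamma * i   # infections minus recoveries
    dr = gamma * i
    return [ds, di, dr]

beta, gamma = 0.3, 0.1              # invented transmission and recovery rates
t = np.linspace(0, 365, 366)        # one year, daily steps
y0 = [0.99, 0.01, 0.0]              # initial susceptible/infected/recovered

baseline = odeint(sir, y0, t, args=(beta, gamma))
# An intervention (e.g. vector control) enters by lowering beta.
with_control = odeint(sir, y0, t, args=(0.2, gamma))

# With daily steps, summing the infected share approximates person-days of
# infection; a cost-effectiveness model attaches costs and health outcomes
# (e.g. DALYs) to the trajectory under each strategy to compute ICERs.
averted = baseline[:, 1].sum() - with_control[:, 1].sum()
print(f"Infection burden averted: {averted:.2f}")
```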

A conceptual introduction to Hamiltonian Monte Carlo. arXiv Published 10th January 2017

It is certainly possible to drive a car without understanding how the engine works. But if we want to get more out of the car or modify its components then we will have to start learning some mechanics. The same is true of statistical software. We can knock out a simple logistic regression without ever really knowing the theory or what the computer is doing. But this ‘black box’ approach to statistics has clear problems. How do we know the numbers on the screen mean what we think they mean? If it doesn’t work, or if it is running slowly, how do we diagnose the problem? Programs for Bayesian inference can sometimes seem even more opaque than others: one might well ask what those chains are actually exploring, and whether it’s even the distribution of interest. Well, over the last few years a new piece of kit, Stan, has become a brilliant and popular tool for Bayesian inference. It achieves fast convergence with less autocorrelation within its chains, and so delivers a high effective sample size from relatively few iterations. This is due to its implementation of Hamiltonian Monte Carlo. But it’s founded in the mathematics of differential geometry, which has restricted the understanding of how it works to a limited few. This paper provides an excellent account of Hamiltonian Monte Carlo, how it works, and when it fails, all replete with figures. While it’s not necessary to become a theoretical or computational statistician, it is important, I think, to have a grasp of what the engine is doing if we’re going to play around with it.
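For anyone curious about what that engine is doing, the core of the algorithm is surprisingly compact. Below is a toy Python implementation of HMC with a leapfrog integrator, targeting a one-dimensional standard normal. It is nothing like Stan’s adaptive machinery, but it shows the basic mechanics: give the current position a random momentum, simulate the physics for a fixed number of steps, then accept or reject based on the change in total energy.

```python
import numpy as np

rng = np.random.default_rng(7)

# Target: a standard normal. HMC needs the negative log density
# (potential energy) and its gradient.
def neg_log_p(q):
    return 0.5 * q**2

def grad_neg_log_p(q):
    return q

def hmc_step(q, step_size=0.2, n_leapfrog=20):
    """One HMC transition: sample momentum, leapfrog, accept/reject."""
    p = rng.normal()                 # fresh momentum from a standard normal
    q_new, p_new = q, p
    # Leapfrog integration of Hamilton's equations: half step in momentum,
    # alternating full steps, then a final half step in momentum.
    p_new -= 0.5 * step_size * grad_neg_log_p(q_new)
    for _ in range(n_leapfrog - 1):
        q_new += step_size * p_new
        p_new -= step_size * grad_neg_log_p(q_new)
    q_new += step_size * p_new
    p_new -= 0.5 * step_size * grad_neg_log_p(q_new)
    # Accept with probability exp(old energy - new energy); the leapfrog
    # nearly conserves energy, so acceptance rates are high even for
    # distant proposals, which is what kills the autocorrelation.
    h_old = neg_log_p(q) + 0.5 * p**2
    h_new = neg_log_p(q_new) + 0.5 * p_new**2
    return q_new if rng.random() < np.exp(h_old - h_new) else q

draws = np.empty(5000)
q = 0.0
for i in range(len(draws)):
    q = hmc_step(q)
    draws[i] = q
print(f"mean {draws.mean():.2f}, sd {draws.std():.2f}")  # roughly 0 and 1
```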
