Chris Sampson’s journal round-up for 28th October 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Spatial competition and quality: evidence from the English family doctor market. Journal of Health Economics [RePEc] Published 17th October 2019

Researchers will never stop asking questions about the role of competition in health care. There’s a substantial body of literature now suggesting that greater competition in the context of regulated prices may bring some quality benefits. But with weak indicators of quality and limited generalisability, it isn’t a closed case. One context in which evidence has been lacking is health care beyond the hospital. In the NHS, an individual’s choice of GP practice is perhaps the context in which quality can be observed and choice most readily (and meaningfully) exercised. That’s where this study comes in. Aside from the horrible format of a ‘proper economics’ paper (where we start with spoilers and climax with robustness tests), it’s a good read.

The study relies on a measure of competition based on the number of rival GPs within a 2km radius. Number of GPs, that is, rather than number of practices. This is important, as the number of GPs per practice has been increasing. About 75% of a practice’s revenues are linked to the number of patients registered, wherein lies the incentive to compete with other practices for patients. And, in this context, research has shown that patient choice is responsive to indicators of quality. The study uses data for 2005-2012 from all GP practices in England, making it an impressive data set.
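The competition measure is, in essence, a spatial count. Below is a minimal sketch of how such a measure might be constructed from practice locations and GP headcounts; the field names and the use of straight-line (haversine) distance are my own illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def rival_gps_within_radius(practices, radius_km=2.0):
    """Count GPs employed by *other* practices within radius_km of each practice.

    `practices` is a list of dicts with hypothetical keys 'id', 'lat', 'lon', 'n_gps'.
    """
    counts = {}
    for p in practices:
        rivals = 0
        for q in practices:
            if q["id"] == p["id"]:
                continue
            if haversine_km(p["lat"], p["lon"], q["lat"], q["lon"]) <= radius_km:
                rivals += q["n_gps"]  # count GPs, not practices
        counts[p["id"]] = rivals
    return counts
```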

The measures of quality come from the Quality and Outcomes Framework (QOF) and the General Practice Patient Survey (GPPS) – the former providing indicators of clinical quality and the latter providing indicators of patient experience. A series of OLS regressions are run on the different outcome measures, with practice fixed effects and various characteristics of the population. The models show that all of the quality indicators are improved by greater competition, but the effect is very small. For example, an extra competing GP within a 2km radius results in a 0.035% increase in the percentage of the population for whom the QOF indicators have been achieved. The effects are a little stronger for the patient satisfaction indicators.
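To make the estimation strategy concrete, here is a rough sketch of the kind of fixed-effects OLS described above, using statsmodels in Python. The variable names, the inclusion of year dummies, and the clustering of standard errors by practice are illustrative assumptions on my part, not the authors’ exact specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical practice-year panel: one row per practice per year, with a
# quality outcome (e.g. % of QOF indicators achieved), the count of rival GPs
# within 2km, and population characteristics.
df = pd.read_csv("gp_panel.csv")  # placeholder file name

# Practice fixed effects via C(practice_id); year dummies absorb common shocks.
model = smf.ols(
    "qof_achievement ~ rival_gps_2km + deprivation + mean_age"
    " + C(practice_id) + C(year)",
    data=df,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["practice_id"]})
print(result.params["rival_gps_2km"])  # effect of one extra rival GP nearby
```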

The paper reports a bunch of important robustness checks. For instance, the authors try to test whether practices select their locations based on the patient casemix, finding no evidence that they do. The authors even go so far as to test the impact of a policy change, which resulted in an exogenous increase in the number of GPs in some areas but not others. The main findings seem to have withstood all the tests. They also try out a lagged model, which gives similar results.

The findings from this study slot in comfortably with the existing body of research on the role of competition in the NHS. More competition might help to achieve quality improvement, but it hardly seems worth dedicating much effort or, importantly, much expense to the cause.

Worth living or worth dying? The views of the general public about allowing disabled children to die. Journal of Medical Ethics [PhilPapers] [PubMed] Published 15th October 2019

Recent years have seen a series of cases in the UK where (usually very young) children have been so unwell and with such a severe prognosis that someone (usually a physician) has judged that continued treatment is not warranted and that the child should be allowed to die. These cases have generated debate and outrage in the media. But what do people actually think?

This study recruited members of the public in the UK (n=130) to an online panel and asked about the decisions that participants would support. The survey had three parts. The first part set out six scenarios of hospitalised infants, which varied in terms of the infants’ physical and sensory abilities, cognitive capacity, level of suffering, and future prospects. Some of the cases approximated real cases that have received media coverage, and the participants were asked whether they thought that withdrawing treatment was justified in each case. In the second part of the survey, participants were asked about the factors that they believed were important in making such decisions. In the third part, participants answered a few questions about themselves and completed the Oxford Utilitarianism Scale.

The authors set up the concept of a ‘life not worth living’, based on the idea that net future well-being is ‘negative’, as the individual themselves would judge it were they able to do so. In the first part of the survey, 88% of participants indicated that life would be worse than death in at least one of the cases. In such cases, 65% thought that treatment withdrawal was ethically obligatory, while 33% thought that either decision was acceptable. Pain was considered the most important factor in making such decisions, followed by the presence of pleasure. Perhaps predictably for health economists familiar with the literature, about 42% of people thought that resources should be considered in the decision, while 40% thought they shouldn’t.

The paper includes an extensive discussion, with plenty of food for thought. In particular, it discusses the ways in which the findings might inform the debate between the ‘zero line view’, whereby treatment should be withdrawn at the point where life has no benefit, and the ‘threshold view’, which establishes a grey zone of ethical uncertainty, in which either decision is ethically acceptable. To some extent, the findings of this study support the need for a threshold approach. Ethical questions are rarely black and white.

How is the trade-off between adverse selection and discrimination risk affected by genetic testing? Theory and experiment. Journal of Health Economics [PubMed] [RePEc] Published 1st October 2019

A lot of people are worried about how knowledge of their genetic information could be used against them. The most obvious scenario is one in which insurers increase premiums – or deny coverage altogether – on the basis of genetic risk factors. There are two key regulatory options in this context: disclosure duty, whereby individuals are obliged to tell insurers about the outcome of genetic tests, and consent law, whereby people can keep the findings to themselves. This study explores how people behave under each of these regulations.

The authors set up a theoretical model in which individuals can choose whether to purchase a genetic test that identifies them as being at either high or low risk of developing some generic illness. The authors outline utility functions under disclosure duty and consent law. Under disclosure duty, individuals face a choice between the certainty of not knowing their risk and facing pooled insurance premiums, or a lottery in which they have to disclose their level of risk and receive a higher or lower premium accordingly. Under consent law, individuals will only reveal their test results if they are at low risk, thus securing lower premiums and contributing to adverse selection. As a result, individuals will be more willing to take a test under consent law than under disclosure duty, all else equal.
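The intuition can be illustrated with a toy expected-utility calculation. The sketch below uses made-up parameters and a simple CRRA utility function, not the authors’ model (which, among other things, lets the pooled premium respond to adverse selection): under disclosure duty a risk-averse individual may decline the test, while under consent law the test carries option value because an unfavourable result can be concealed.

```python
def crra_utility(wealth, gamma=2.0):
    """CRRA utility; gamma > 1 implies risk aversion."""
    return wealth ** (1 - gamma) / (1 - gamma)

# Hypothetical parameters (not from the paper).
w = 100.0               # initial wealth
p_high = 0.5            # prior probability of being high risk
prem_pooled = 20.0      # premium when the insurer cannot tell risk types apart
prem_low, prem_high = 10.0, 30.0
test_price = 1.0
u = crra_utility

# Disclosure duty: not testing gives the pooled premium with certainty;
# testing is a lottery over the two risk-rated premiums.
eu_no_test = u(w - prem_pooled)
eu_test_dd = (p_high * u(w - test_price - prem_high)
              + (1 - p_high) * u(w - test_price - prem_low))

# Consent law: reveal the result only if favourable, otherwise pay the pooled
# premium, so the test has option value.
eu_test_cl = (p_high * u(w - test_price - prem_pooled)
              + (1 - p_high) * u(w - test_price - prem_low))

print(eu_test_dd > eu_no_test)  # False here: the risk-averse agent declines the test under disclosure duty
print(eu_test_cl > eu_test_dd)  # True: testing is more attractive under consent law
print(eu_test_cl > eu_no_test)  # True here: the test is worth buying under consent law
```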

After setting out their model (at great length), the authors go on to describe an experiment that they conducted with 67 economics students, to elicit preferences within and between the different regulatory settings. The experiment was set up in a very generic way, not related to health at all. Participants were presented with a series of tasks across which the parameters representing the price of the test and the pooled premium were varied. All of the authors’ hypotheses were supported by the experiment. More people took tests under consent law. Higher test prices reduced the number of people taking tests. If prices were high enough, people preferred disclosure duty. The likelihood of taking a test under consent law increased with the level of adverse selection. And people were very sensitive to the level of discrimination risk under disclosure duty.

It’s an interesting study, but I’m not sure how much it can tell us about genetic testing. Framing the experiment as entirely unrelated to health seems especially unwise. People’s risk preferences may be very different in the domain of real health than in the hypothetical monetary domain. In the real world, there’s a lot more at stake.


Rita Faria’s journal round-up for 21st October 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Quantifying how diagnostic test accuracy depends on threshold in a meta-analysis. Statistics in Medicine [PubMed] Published 30th September 2019

A diagnostic test is often based on a continuous measure, e.g. cholesterol, which is dichotomised at a certain threshold to classify people as ‘test positive’, who should be treated, or ‘test negative’, who should not. In an economic evaluation, we may wish to compare the costs and benefits of using the test at different thresholds. For example, we might compare the cost-effectiveness of offering lipid-lowering therapy to people with cholesterol over 7 mmol/L versus over 5 mmol/L. This is straightforward to do if we have access to a large dataset comparing the test to its gold standard, from which we can estimate its sensitivity and specificity at various thresholds. It is quite the challenge if we only have aggregate data from multiple publications.
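With individual-level data, estimating accuracy at any threshold really is straightforward; something like the sketch below would do. This is written for the easy case described above, with made-up variable names; it is not the meta-analysis method the paper proposes.

```python
import numpy as np

def accuracy_at_thresholds(measure, disease, thresholds):
    """Sensitivity and specificity of a continuous test dichotomised at each threshold.

    measure: continuous test result per person (e.g. cholesterol in mmol/L).
    disease: 1 if the gold standard classifies the person as diseased, else 0.
    Assumes higher values indicate disease ('test positive' at or above the threshold).
    """
    measure = np.asarray(measure, dtype=float)
    disease = np.asarray(disease, dtype=bool)
    results = []
    for t in thresholds:
        test_pos = measure >= t
        sensitivity = (test_pos & disease).sum() / disease.sum()
        specificity = (~test_pos & ~disease).sum() / (~disease).sum()
        results.append((t, sensitivity, specificity))
    return results

# e.g. accuracy_at_thresholds(cholesterol, cvd_event, thresholds=[5.0, 7.0])
```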

In this brilliant paper, Hayley Jones and colleagues report on a new method to synthesise diagnostic accuracy data from multiple studies. It consists of a multinomial meta-analysis model that can estimate how accuracy depends on the diagnostic threshold. This method produces estimates that can be used to parameterise an economic model.

These new developments in evidence synthesis are very exciting and really important for improving the data going into economic models. My only concern is that the model is implemented in WinBUGS, which is software that few applied analysts use. Would it be possible to have a tutorial or, even better, to include this method in the online tools available on the Complex Reviews Support Unit website?

Early economic evaluation of diagnostic technologies: experiences of the NIHR Diagnostic Evidence Co-operatives. Medical Decision Making [PubMed] Published 26th September 2019

Keeping with the diagnostic theme, this paper by Lucy Abel and colleagues reports on the experience of the Diagnostic Evidence Co-operatives in conducting early modelling of diagnostic tests. These were established in 2013 to help developers of diagnostic tests link up with clinical and academic experts.

The paper discusses eight projects where economic modelling was conducted at an early stage of development. It was fascinating to read about the collaboration between academics and test developers. One of the positive aspects was the buy-in of the developers, while a less positive one was the pressure to produce evidence quickly, and evidence that supported the product.

The paper is excellent in discussing the strengths and challenges of these projects. Of note, there were challenges in mapping out a clinical pathway, selecting the appropriate comparators, and establishing the consequences of testing. Furthermore, they found that the parameters around treatment effectiveness were the key driver of cost-effectiveness in many of the evaluations. This is not surprising, given that the benefits of a test usually lie in better informing management decisions rather than in its direct costs and benefits. It definitely resonates with my own experience in conducting economic evaluations of diagnostic tests (see, for example, here).

Following on from the challenges, the authors suggest areas for methodological research: mapping the clinical pathway, ensuring model transparency, and modelling sequential tests. They finish with advice for researchers doing early modelling of tests, although I’d say that it would be applicable to any economic evaluation. I completely agree that we need better methods for economic evaluation of diagnostic tests. This paper is a useful first step in setting up a research agenda.

A second chance to get causal inference right: a classification of data science tasks. Chance [arXiv] Published 14th March 2019

This impressive paper by Miguel Hernan, John Hsu and Brian Healy is an essential read for all researchers, analysts and scientists. Miguel and colleagues classify data science tasks into description, prediction and counterfactual prediction. Description is using data to quantitatively summarise some features of the world. Prediction is using the data to know some features of the world given our knowledge about other features. Counterfactual prediction is using the data to know what some features of the world would have been if something hadn’t happened; that is, causal inference.

I found the explanation of the difference between prediction and causal inference quite enlightening. It is not about the amount of data or the statistical/econometric techniques. The key difference is in the role of expert knowledge. Prediction requires expert knowledge to specify the research question, the inputs, the outputs and the data sources. Additionally, causal inference requires expert knowledge “also to describe the causal structure of the system under study”. This causal knowledge is reflected in the assumptions, in the ideas for the data analysis, and in the interpretation of the results.

The section on implications for decision-making makes some important points. First, that the goal of data science is to help people make better decisions. Second, that predictive algorithms can tell us that decisions need to be made, but not which decision is most beneficial – for that, we need causal inference. Third, many of us work on complex systems about which we don’t know everything (the human body is a great example). Because we don’t know everything, it is impossible to predict with certainty, from routine health records, what the consequences of an intervention would be for a specific individual. At most, we can estimate the average causal effect, but even that requires assumptions. The relevance to the latest developments in data science is obvious, given all the hype around real world data, artificial intelligence and machine learning.

I absolutely loved reading this paper and wholeheartedly recommend it for any health economist. It’s a must read!


Thesis Thursday: Luke Wilson

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Luke Wilson who has a PhD from Lancaster University. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Essays on the economics of alcohol and risky behaviours
Supervisors
Colin P. Green, Bruce Hollingsworth, Céu Caixeiro Mateus
Repository link
https://doi.org/10.17635/lancaster/thesis/636

What inspired your research and how did ‘attractiveness’ enter the picture?

Without trying to sound like I have a problem, I find the subject of alcohol fascinating. The history of it, how it is perceived in society, how our behaviours around it have changed over time, not to mention it tastes pretty damn good!

Our attitude to alcohol is fascinating and diverse. Over 6.5 million people have visited Munich in the last month alone to attend the world’s largest beer festival, Oktoberfest, drinking more than 7.3 million litres of beer. However, 2020 will be the 100th anniversary of the introduction of prohibition in the United States. Throughout history, alcohol consumption has been portrayed as both a positive and a negative commodity in society.

For my thesis, I wanted to understand individuals’ current attitudes to drinking alcohol: whether they are affected by legal restrictions such as the UK’s minimum legal drinking age of 18, whether their attitudes have changed over their life course, and how alcohol fits among a wider variety of risky behaviours such as smoking and illicit drug use.

As for how ‘attractiveness’ entered the picture, I was searching for datasets that allow for longitudinal analysis and contain information on risky behaviours, and I stumbled upon data in which interviewers were asked to rate the attractiveness of the respondent. My first thought was what a barbaric question to ask, but I quickly realised that the question is used a lot in determining ‘beauty premia’ in the labour market. However, nobody had examined how these ‘beauty premia’ might come into effect while individuals are still at school.

Are people perceived to be more attractive at an advantage or a disadvantage in this context?

The current literature provides a compelling view that there are sizeable labour market returns to attractiveness in the United States (Fletcher, 2009; Stinebrickener et al., 2019). What is not well understood, and where our research fits in, is how physical attractiveness influences earlier, consequential decisions. The previous literature seeks to estimate, in essence, the effect of attractiveness on labour market outcomes conditional on individual characteristics, both demographic and ‘pre-market’. However, attractiveness is also likely to change both the opportunities and the costs of a variety of behaviours during adolescence.

Exploiting the interviewer variation in ratings of attractiveness, we found that the attractiveness of adolescents has marked effects on a range of risky behaviours. For instance, more attractive teens are less likely to smoke than teens of average or lower attractiveness. However, attractiveness is associated with higher teen alcohol consumption. Attractive females, in particular, are substantially more likely to have consumed alcohol in the past twelve months than those of average or below-average attractiveness.

How did you model the role of the minimum legal drinking age in the UK?

I was highly unoriginal and estimated the effect of the minimum legal drinking age in the UK using a regression discontinuity design approach, like that of Carpenter and Dobkin (2009). I jest, but it is one of the most effective ways to estimate the causal effect of a particular law or policy that is triggered by age, especially for the UK, which has not changed its legal drinking age.

Where our research deviates is that we focus on the law itself and analyse how an individual’s consumption of alcohol in a particular school year may differ at the cut-off (aged 18). For example, do those born in September purchase alcohol for themselves and their younger friends, or do we all adhere to the laws that govern us and wait patiently…

Are younger people drinking less nowadays?

The short answer is yes! Evidence from multiple British surveys shows a consistent pattern over 10-15 years of reduced participation in drinking, reduced consumption levels among drinkers, reduced prevalence of drunkenness, and less positive attitudes towards alcohol in young adults aged 16 to 24.

Friends of mine at the University of Sheffield (Oldham et al., 2018) have sought to unravel the decline in youth drinking further and find evidence that younger drinkers are consuming alcohol less often and in smaller quantities. They find that, among those who were drinkers, the percentage of 16-24 year-olds who drank in the last week fell from 76% to 60% between 2002 and 2016, while for 11-15 year-olds it fell from 35% to 19%. Additionally, alongside declines in youth drinking, the proportion of young adults who had ever tried smoking fell from 43% in 1998 to 17% in 2016.

While we are witnessing this decline, the jury is still out as to why it is happening. Explanations so far include that increases in internet use (social media) and online gaming are changing the way young people spend their leisure time. Economic factors may also play a role, such as the increase in the cost of alcohol, as well as increases in tuition fees and housing costs, meaning that young adults have less disposable income.

What were some of the key methodological challenges you faced in your research?

The largest methodological problem I faced throughout my PhD was finding suitable data with which to examine the effect of the minimum legal drinking age in the UK setting. One of the key components of a regression discontinuity design is the running variable. The running variable I use is the respondent’s age in months, calculated from the date on which the survey interview took place and the respondent’s month and year of birth. Unfortunately, because such data are potentially disclosive, it is very difficult to obtain data that contain these variables as well as suitable questions on alcohol consumption. Luckily, the General Household Survey (Special Licence version) had the variables I needed to conduct the analysis, albeit only between 1998 and 2007.
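For readers unfamiliar with the design, here is a rough sketch of this kind of regression discontinuity set-up, with the running variable built from the interview date and the month and year of birth. The variable names, bandwidth and local linear specification are my own illustrative choices, not those used in the thesis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey extract with interview date, month/year of birth, and a
# drinking outcome; the variable names are placeholders, not the GHS's own.
df = pd.read_csv("ghs_extract.csv")

# Running variable: age in months at interview, centred on the 18th birthday.
age_months = ((df["interview_year"] - df["birth_year"]) * 12
              + (df["interview_month"] - df["birth_month"]))
df["run"] = age_months - 18 * 12
df["over18"] = (df["run"] >= 0).astype(int)

# Local linear regression within +/- 24 months of the cut-off, allowing the
# slope to differ on either side.
window = df[df["run"].abs() <= 24]
rdd = smf.ols("units_last_week ~ over18 + run + over18:run", data=window).fit()
print(rdd.params["over18"])  # estimated jump in consumption at age 18
```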

How might your research inform policymakers seeking to discourage risky behaviours?

Definitely a difficult question to answer, especially given that one of my chapters uses interviewer variation in ratings of the respondents’ attractiveness, so I have stayed well clear of drawing policy recommendations from that chapter. That said, these results are important for a number of interrelated reasons. Previous labour market research demonstrates marked effects of attractiveness. My results suggest that important pre-market effects of attractiveness on individual behaviour are likely to be consequential both for labour market performance and for important pre-market investments. In this sense, the findings suggest that physical attractiveness provides another avenue for understanding the non-cognitive traits that are important in child and adolescent development and carry lifetime consequences.

The chapter on the minimum legal drinking age provides intriguing results on the effectiveness of age-restrictive policies that impose limits on consumption: whether they are enough on their own or merely delay consumption. This is especially relevant given the current discussion about raising the minimum legal tobacco purchasing age to 21 and raising the age at which you can buy a National Lottery ticket from 16 to 18 in the UK.