Rita Faria’s journal round-up for 21st October 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Quantifying how diagnostic test accuracy depends on threshold in a meta-analysis. Statistics in Medicine [PubMed] Published 30th September 2019

A diagnostic test is often based on a continuous measure, e.g. cholesterol, which is dichotomised at a certain threshold to classify people as ‘test positive’, who should be treated, or ‘test negative’, who should not. In an economic evaluation, we may wish to compare the costs and benefits of using the test at different thresholds: for example, the cost-effectiveness of offering lipid-lowering therapy to people with cholesterol over 7 mmol/L vs over 5 mmol/L. This is straightforward to do if we have access to a large dataset comparing the test to its gold standard, from which we can estimate its sensitivity and specificity at various thresholds. It is quite the challenge if we only have aggregate data from multiple publications.
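
To see why the individual-level case is straightforward, here is a minimal sketch with simulated (entirely hypothetical) data:

```python
import numpy as np

# Simulated (hypothetical) data: cholesterol for 800 disease-negative and
# 200 disease-positive people, plus gold-standard disease status.
rng = np.random.default_rng(1)
cholesterol = np.concatenate([rng.normal(5.2, 1.0, 800),
                              rng.normal(6.8, 1.2, 200)])
disease = np.concatenate([np.zeros(800, dtype=bool), np.ones(200, dtype=bool)])

# With individual-level data, accuracy at any threshold is a simple count.
for threshold in (5.0, 6.0, 7.0):  # mmol/L
    test_positive = cholesterol > threshold
    sensitivity = (test_positive & disease).sum() / disease.sum()
    specificity = (~test_positive & ~disease).sum() / (~disease).sum()
    print(f">{threshold} mmol/L: sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```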

In this brilliant paper, Hayley Jones and colleagues report on a new method to synthesise diagnostic accuracy data from multiple studies. It consists of a multinomial meta-analysis model that can estimate how accuracy depends on the diagnostic threshold. This method produces estimates that can be used to parameterise an economic model.

These new developments in evidence synthesis are very exciting and really important for improving the data going into economic models. My only concern is that the model is implemented in WinBUGS, which few applied analysts use. Would it be possible to have a tutorial or, even better, to include this method in the online tools available on the Complex Reviews Support Unit website?

Early economic evaluation of diagnostic technologies: experiences of the NIHR Diagnostic Evidence Co-operatives. Medical Decision Making [PubMed] Published 26th September 2019

Keeping with the diagnostic theme, this paper by Lucy Abel and colleagues reports on the experience of the Diagnostic Evidence Co-operatives in conducting early modelling of diagnostic tests. These were established in 2013 to help developers of diagnostic tests link up with clinical and academic experts.

The paper discusses eight projects where economic modelling was conducted at an early stage of project development. It was fascinating to read about the collaboration between academics and test developers. One of the positive aspects was the buy-in of the developers, while a less positive one was the pressure to produce evidence quickly, and evidence that supported the product.

The paper is excellent in discussing the strengths and challenges of these projects. Of note, there were challenges in mapping out a clinical pathway, selecting the appropriate comparators, and establishing the consequences of testing. Furthermore, they found that the parameters around treatment effectiveness were the key driver of cost-effectiveness in many of the evaluations. This is not surprising, given that the benefits of a test usually lie in better informing management decisions, rather than in its direct costs and benefits. It definitely resonates with my own experience of conducting economic evaluations of diagnostic tests (see, for example, here).
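
To make that point concrete, here is a toy test-and-treat calculation (all numbers hypothetical, not from the paper), in which the test only generates benefit through the treatment decisions it triggers:

```python
def test_and_treat(prevalence, sens, spec, effect_qaly, cost_test, cost_treat):
    """Expected cost and QALY gain per patient under a test-and-treat strategy."""
    true_pos = prevalence * sens                 # cases correctly treated
    false_pos = (1 - prevalence) * (1 - spec)    # non-cases treated unnecessarily
    cost = cost_test + (true_pos + false_pos) * cost_treat
    qalys = true_pos * effect_qaly               # benefit arises only via treatment
    return cost, qalys

# The QALY gain scales one-for-one with the treatment effect, which is why
# treatment effectiveness tends to dominate the cost-effectiveness results.
for effect in (0.1, 0.3, 0.5):
    cost, qalys = test_and_treat(0.10, 0.85, 0.90, effect, 50.0, 1_000.0)
    print(f"treatment effect {effect:.1f} QALYs: cost {cost:.0f}, QALY gain {qalys:.3f}")
```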

Following on from the challenges, the authors suggest areas for methodological research: mapping the clinical pathway, ensuring model transparency, and modelling sequential tests. They finish with advice for researchers doing early modelling of tests, although I’d say that it would be applicable to any economic evaluation. I completely agree that we need better methods for economic evaluation of diagnostic tests. This paper is a useful first step in setting up a research agenda.

A second chance to get causal inference right: a classification of data science tasks. Chance [arXiv] Published 14th March 2019

This impressive paper by Miguel Hernan, John Hsu and Brian Healy is an essential read for all researchers, analysts and scientists. Miguel and colleagues classify data science tasks into description, prediction and counterfactual prediction. Description is using data to quantitatively summarise some features of the world. Prediction is using the data to know some features of the world given our knowledge about other features. Counterfactual prediction is using the data to know what some features of the world would have been if something hadn’t happened; that is, causal inference.

I found the explanation of the difference between prediction and causal inference quite enlightening. It is not about the amount of data or the statistical/econometric techniques. The key difference is in the role of expert knowledge. Predicting requires expert knowledge to specify the research question, the inputs, the outputs and the data sources. Additionally, causal inference requires expert knowledge “also to describe the causal structure of the system under study”. This causal knowledge is reflected in the assumptions, the ideas for the data analysis, and for the interpretation of the results.
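
As a toy illustration of that difference (my own simulation, with made-up numbers, not the authors’ example): the same dataset supports prediction whichever model we fit, but only knowledge of the causal structure tells us that we must adjust for the confounder to recover the causal effect.

```python
import numpy as np

# Simulated (hypothetical) data: a confounder drives both treatment and outcome.
rng = np.random.default_rng(0)
n = 100_000
confounder = rng.normal(size=n)                      # e.g. underlying severity
treatment = (confounder + rng.normal(size=n)) > 0    # sicker people get treated more
outcome = 1.0 * treatment + 2.0 * confounder + rng.normal(size=n)

# Both models predict the outcome; only the adjusted one recovers the causal effect.
X_naive = np.column_stack([np.ones(n), treatment])
X_adjusted = np.column_stack([np.ones(n), treatment, confounder])
naive_coef = np.linalg.lstsq(X_naive, outcome, rcond=None)[0][1]
adjusted_coef = np.linalg.lstsq(X_adjusted, outcome, rcond=None)[0][1]
print(f"naive: {naive_coef:.2f} (biased), adjusted: {adjusted_coef:.2f} (true effect = 1.0)")
```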

The section on implications for decision-making makes some important points. First, that the goal of data science is to help people make better decisions. Second, that predictive algorithms can tell us that decisions need to be made, but not which decision is most beneficial – for that, we need causal inference. Third, that many of us work on complex systems about which we don’t know everything (the human body is a great example). Because we don’t know everything, it is impossible to predict with certainty, from routine health records, what the consequences of an intervention would be for a specific individual. At most, we can estimate the average causal effect, but even for that we need assumptions. The relevance to the latest developments in data science is obvious, given all the hype around real world data, artificial intelligence and machine learning.

I absolutely loved reading this paper and wholeheartedly recommend it for any health economist. It’s a must read!


Thesis Thursday: Luke Wilson

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Luke Wilson, who has a PhD from Lancaster University. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Essays on the economics of alcohol and risky behaviours
Supervisors
Colin P. Green, Bruce Hollingsworth, Céu Caixeiro Mateus
Repository link
https://doi.org/10.17635/lancaster/thesis/636

What inspired your research and how did ‘attractiveness’ enter the picture?

Without trying to sound like I have a problem, I find the subject of alcohol fascinating. The history of it, how it is perceived in society, how our behaviours around it have changed over time, not to mention it tastes pretty damn good!

Our attitude to alcohol is fascinating and diverse. Over 6.5 million people have visited Munich in the last month alone to attend the world’s largest beer festival, Oktoberfest, drinking more than 7.3 million litres of beer. However, 2020 will mark the 100-year anniversary of the introduction of prohibition in the United States. Throughout history, alcohol has been portrayed as both a positive and a negative commodity in society.

For my thesis, I wanted to understand individuals’ current attitudes to drinking alcohol: whether they are affected by legal restrictions such as the UK’s minimum legal drinking age of 18, whether their attitudes have changed over the life course, and how alcohol fits among a wider variety of risky behaviours such as smoking and illicit drug use.

As for how ‘attractiveness’ entered the picture: I was searching for datasets that allow for longitudinal analysis and contain information on risky behaviours, and I stumbled upon data in which interviewers were asked to rate the attractiveness of the respondent. My first thought was what a barbaric question to ask, but I quickly realised that the question is used a lot in estimating ‘beauty premia’ in the labour market. However, nobody had examined how these ‘beauty premia’ might emerge while individuals are still at school.

Are people perceived to be more attractive at an advantage or a disadvantage in this context?

The current literature provides a compelling view that there are sizeable labour market returns to attractiveness in the United States (Fletcher, 2009; Stinebrickner et al., 2019). What is not well understood, and where our research fits in, is how physical attractiveness influences earlier, consequential decisions. The previous literature seeks to estimate, in essence, the effect of attractiveness on labour market outcomes conditional on individual characteristics, both demographic and ‘pre-market’. However, attractiveness is also likely to change both the opportunities and costs of a variety of behaviours during adolescence.

Exploiting interviewer variation in ratings of attractiveness, we found that the attractiveness of adolescents has marked effects on a range of risky behaviours. For instance, more attractive teens are less likely to smoke than teens of average or lower attractiveness. However, attractiveness is associated with higher teen alcohol consumption. Attractive females, in particular, are substantially more likely to have consumed alcohol in the past twelve months than those of average or below-average attractiveness.

How did you model the role of the minimum legal drinking age in the UK?

I was highly unoriginal and estimated the effect of the minimum legal drinking age in the UK using a regression discontinuity design, like that of Carpenter and Dobkin (2009). I jest, but it is one of the most effective ways to estimate the causal effect of a particular law or policy that is triggered by age, especially for the UK, which has not changed its legal drinking age.

Where our research deviates is that we focus on the law itself and analyse how an individual’s consumption of alcohol in a particular school year may differ at the cut-off (aged 18). For example, do those born in September purchase alcohol for themselves and their younger friends, or do we all adhere to the laws that govern us and wait patiently…

Are younger people drinking less, nowadays?

The short answer is yes! Evidence from multiple British surveys shows a consistent pattern over 10-15 years of reduced participation in drinking, reduced consumption levels among drinkers, reduced prevalence of drunkenness, and less positive attitudes towards alcohol in young adults aged 16 to 24.

Friends of mine at the University of Sheffield (Oldham et al., 2018) have sought to unravel the decline in youth drinking further and find evidence that younger drinkers are consuming alcohol less often and in smaller quantities. They find that, among those who were drinkers, the percentage of 16-24 year-olds who drank in the last week fell from 76% to 60% between 2002 and 2016, while for 11-15 year-olds it fell from 35% to 19%. Additionally, alongside declines in youth drinking, the proportion of young adults who had ever tried smoking fell from 43% in 1998 to 17% in 2016.

While we are witnessing this decline, the jury is still out as to why it is happening. Explanations offered so far include increases in internet use (social media) and online gaming changing the way young people spend their leisure time. Economic factors may also play a role, such as the increase in the cost of alcohol, as well as rising tuition fees and housing costs leaving young adults with less disposable income.

What were some of the key methodological challenges you faced in your research?

The largest methodological problem I faced throughout my PhD was finding suitable data to examine the effect of the minimum legal drinking age in the setting of the UK. One of the key underlying components of a regression discontinuity design is the running variable. The running variable I use is the respondent’s age in months, calculated using the date on which the survey interview took place together with the respondent’s month and year of birth. Unfortunately, because such data are disclosive, it is very difficult to obtain data that contain these variables as well as suitable questions about alcohol consumption. Luckily, the General Household Survey (Special Licence version) had the variables I needed to conduct the analysis, albeit only between 1998 and 2007.
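
As a rough sketch (with simulated data and hypothetical variable names, not the actual General Household Survey variables), constructing the running variable and fitting a simple local linear RD regression looks something like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data with hypothetical names; the GHS variables differ.
rng = np.random.default_rng(42)
n = 5_000
df = pd.DataFrame({
    "interview_year": 2003, "interview_month": rng.integers(1, 13, n),
    "birth_year": rng.integers(1980, 1990, n), "birth_month": rng.integers(1, 13, n),
})
# Running variable: age in months at interview, centred on the age-18 cutoff.
age_months = (df.interview_year - df.birth_year) * 12 + (df.interview_month - df.birth_month)
df["running"] = age_months - 18 * 12
df["over18"] = (df.running >= 0).astype(int)
# Simulated outcome with a built-in discontinuity of 0.15 at the cutoff.
df["drank_last_week"] = (rng.random(n) < 0.40 + 0.15 * df.over18 + 0.001 * df.running).astype(int)

# Local linear RD within a 24-month bandwidth, slopes allowed to differ by side.
rd = smf.ols("drank_last_week ~ over18 + running + over18:running",
             data=df.loc[df.running.abs() <= 24]).fit()
print(rd.params["over18"])  # estimated jump in drinking at age 18
```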

How might your research inform policymakers seeking to discourage risky behaviours?

Definitely a difficult question to answer, especially given that one of my chapters uses interviewer variation in ratings of the respondents’ attractiveness, so I have steered well clear of drawing individual policy recommendations from that chapter. That said, these results are important for a number of interrelated reasons. Previous labour market research demonstrates marked effects of attractiveness. My results suggest that important pre-market effects of attractiveness on individual behaviour are likely to be consequential for both labour market performance and important pre-market investments. In this sense, the findings suggest that physical attractiveness provides another avenue for understanding non-cognitive traits that are important in child and adolescent development and carry lifetime consequences.

The chapter on the minimum legal drinking age provides intriguing results regarding the effectiveness of age-restrictive policies that impose limits on consumption: whether they are enough on their own or merely delay consumption. This is especially relevant given current discussions in the UK about raising the minimum legal tobacco purchasing age to 21 and raising the age at which you can buy a National Lottery ticket from 16 to 18.

Chris Sampson’s journal round-up for 14th October 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Transparency in health economic modeling: options, issues and potential solutions. PharmacoEconomics [PubMed] Published 8th October 2019

Reading this paper was a strange experience. The purpose of the paper, and its content, is much the same as a paper of my own, which was published in the same journal a few months ago.

The authors outline what they see as the options for transparency in the context of decision modelling, with a focus on open source models and on the question of for whom the details are transparent. Models might be transparent to a small number of researchers (e.g. in peer review), to HTA agencies, or to the public at large. The paper includes a figure showing the two aspects of transparency, termed ‘reach’ and ‘level’, which relate to the number of people who can access the information and the level of detail made available. We provided a similar figure in our paper, using the terms ‘breadth’ and ‘depth’, which is at least some validation of our idea. The authors then go on to discuss five ‘issues’ with transparency: copyright, model misuse, confidential data, software, and time/resources. These issues are framed as questions, to which the authors posit some answers as solutions.

Perhaps inevitably, I think our paper does a better job, and so I’m probably over-critical of this article. Ours is more comprehensive, if nothing else. But I also think the authors make a few missteps. There’s a focus on models created by academic researchers, which oversimplifies the discussion somewhat. Open source modelling is framed as a more complete solution than it really is. The ‘issues’ that are discussed are at points framed as drawbacks or negative features of transparency, which they aren’t. Certainly, they’re challenges, but they aren’t reasons not to pursue transparency. ‘Copyright’ seems to be used as a synonym for intellectual property, and transparency is considered to be a threat to this. The authors’ proposed solution here is to use licensing fees. I think that’s a bad idea. Levying a fee creates an incentive to disregard copyright, not respect it.

It’s a little ironic that both this paper and my own were published, when both describe the benefits of transparency in terms of reducing “duplication of efforts”. No doubt, I read this paper with a far more critical eye than I normally would. Had I not published a paper on precisely the same subject, I might’ve thought this paper was brilliant.

If we recognize heterogeneity of treatment effect can we lessen waste? Journal of Comparative Effectiveness Research [PubMed] Published 1st October 2019

This commentary starts from the premise that a pervasive overuse of resources creates a lot of waste in health care, which I guess might be true in the US. Apparently, this is because clinicians have an insufficient understanding of heterogeneity in treatment effects and therefore assume average treatment effects for their patients. The authors suggest that this situation is reinforced by clinical trial publications tending to only report average treatment effects. I’m not sure whether the authors are arguing that clinicians are too knowledgeable and dependent on the research, or that they don’t know the research well enough. Either way, it isn’t a very satisfying explanation of the overuse of health care. Certainly, patients could benefit from more personalised care, and I would support the authors’ argument in favour of stratified studies and the reporting of subgroup treatment effects. The most insightful part of this paper is the argument that these stratifications should be on the basis of observable characteristics. It isn’t much use to your general practitioner if personalisation requires genome sequencing. In short, I agree with the authors’ argument that we should do more to recognise heterogeneity of treatment effects, but I’m not sure it has much to do with waste.
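
As a toy illustration (a simulation with made-up numbers, not from the paper) of how an average treatment effect can mask heterogeneity across an observable characteristic:

```python
import numpy as np

# Simulated trial (hypothetical numbers): treatment only benefits one subgroup.
rng = np.random.default_rng(3)
n = 10_000
biomarker_high = rng.integers(0, 2, n).astype(bool)  # observable characteristic
treated = rng.integers(0, 2, n).astype(bool)
outcome = 2.0 * (biomarker_high & treated) + rng.normal(size=n)

# The average effect (~1.0) hides the subgroup contrast (~2.0 vs ~0.0).
ate = outcome[treated].mean() - outcome[~treated].mean()
print(f"average treatment effect: {ate:.2f}")
for mask, label in ((biomarker_high, "biomarker-high"), (~biomarker_high, "biomarker-low")):
    effect = outcome[mask & treated].mean() - outcome[mask & ~treated].mean()
    print(f"{label}: {effect:.2f}")
```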

No evidence for a protective effect of education on mental health. Social Science & Medicine Published 3rd October 2019

When it comes to the determinants of health and well-being, I often think back to my MSc dissertation research. As part of that, I learned that a) stuff that you might imagine to be important often isn’t and b) methodological choices matter a lot. Though it wasn’t the purpose of my study, it seemed from this research that higher education has a negative effect on people’s subjective well-being. But there isn’t much research out there to help us understand the association between education and mental health in general.

This study adds to a small body of literature on the impact of changes in compulsory schooling on mental health. In (West) Germany, education policy was determined at the state level, so when compulsory schooling was extended from eight to nine years, different states implemented the change at different times between 1949 and 1969. This study includes 5,321 people, with 20,290 person-year observations, from the German Socio-Economic Panel survey (SOEP). Inclusion was based on people being born within seven years either side of the cutoff birth year for which the longer compulsory schooling was enacted, with a further restriction to people aged between 50 and 85. The SOEP includes the SF-12 questionnaire, which provides a mental health component score (MCS). There is also an 11-point life satisfaction scale. The authors use an instrumental variable approach, with the policy change as an instrument for years of schooling, estimating a standard two-stage least squares model. The MCS score, the life satisfaction score, and a binary indicator for an MCS score lower than or equal to 45.6 are all modelled as separate outcomes.
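
As a sketch of that design (a simulation with hypothetical names and numbers, not the SOEP data), the two-stage logic looks like this:

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: unobserved 'ability' confounds schooling and mental health,
# while the staggered reform shifts schooling but not the outcome directly.
rng = np.random.default_rng(7)
n = 20_000
ability = rng.normal(size=n)
reform = rng.integers(0, 2, n)                       # exposed to the ninth compulsory year
schooling = 8 + reform + 0.5 * ability + rng.normal(scale=0.5, size=n)
mcs = 50 + 0.0 * schooling + 2.0 * ability + rng.normal(scale=5, size=n)  # true effect: zero

# Stage 1: predict schooling from the instrument (the policy change).
stage1 = sm.OLS(schooling, sm.add_constant(reform)).fit()
# Stage 2: regress the outcome on predicted schooling.
stage2 = sm.OLS(mcs, sm.add_constant(stage1.fittedvalues)).fit()
naive = sm.OLS(mcs, sm.add_constant(schooling)).fit()
print(f"OLS: {naive.params[1]:.2f} (biased upward), 2SLS: {stage2.params[1]:.2f} (near zero)")
# In practice a dedicated 2SLS routine should be used, as it also corrects
# the second-stage standard errors.
```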

Estimates from an OLS model show a positive and highly significant effect of years of schooling on all three outcomes. But when the instrumental variable model is used, this effect disappears. In this model, an additional year of schooling is associated with a statistically and clinically insignificant decrease in the MCS score. The findings that more years of schooling increase the likelihood of developing symptoms of a mental health disorder (as indicated by the MCS threshold of 45.6) and slightly lower life satisfaction were also insignificant. The same model shows a positive effect on physical health, which corresponds with previous research and provides some reassurance that the model could detect an effect if one existed.

The specification of the model seems reasonable and a host of robustness checks are reported. The only potential issue I could spot is that a person’s state of residence at the time of schooling is not observed, and so their location at entry into the sample is used. Given that education is associated with mobility, this could be a problem, and I would have liked to see the authors subject it to more testing. The overall finding – that an additional year of school for people who might otherwise only stay at school for eight years does not improve mental health – is persuasive. But the extent to which we can say anything more general about the impact of education on well-being is limited. What if it had been three years of additional schooling, rather than one? There is still much work to be done in this area.

Scientific sinkhole: the pernicious price of formatting. PLoS One [PubMed] Published 26th September 2019

This study is based on a survey that asked 372 researchers from 41 countries about the time they spent formatting manuscripts for journal submission. Let’s see how I can frame this as health economics… Well, some of the participants are health researchers. The time they spend on formatting journal submissions is time not spent on health research. The opportunity cost of time spent formatting could be measured in terms of health.

The authors focused on the time and wage costs of formatting. The results showed that formatting took a median time of 52 hours per person per year, at a cost of $477 per manuscript or $1,908 per person per year. Researchers spend – on average – 14 hours on formatting a manuscript. That’s outrageous. I have never spent that long on formatting. If you do, you only have yourself to blame. Or maybe it’s just because of what I consider to constitute formatting. The survey asked respondents to consider formatting of figures, tables, and supplementary files. Improving the format of a figure or a table can add real value to a paper. A good figure or table can change a bad paper to a good paper. I’d love to know how the time cost differed for people using LaTeX.
