Chris Sampson’s journal round-up for 13th January 2020

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A vision ‘bolt-on’ increases the responsiveness of EQ-5D: preliminary evidence from a study of cataract surgery. The European Journal of Health Economics [PubMed] Published 4th January 2020

The EQ-5D is insensitive to differences in how well people can see, despite this seeming to be an important aspect of health. In contexts where the impact of visual impairment may be important, we could potentially use a ‘bolt-on’ item that asks about a person’s vision. I’m working on the development of a vision bolt-on at the moment. But ours won’t be the first. A previously-developed bolt-on has undergone some testing and has been shown to be sensitive to differences between people with different levels of visual function. However, there is little or no evidence to support its responsiveness to changes in visual function, which might arise from treatment.

For this study, 63 individuals were recruited prior to receiving cataract surgery in Singapore. Participants completed the EQ-5D-3L and EQ-5D-5L, both with and without a vision bolt-on that matched the wording of the other EQ-5D dimensions. The SF-6D, HUI3, and VF-12 were also completed, along with a LogMAR assessment of visual acuity. The authors sought to compare the responsiveness of the EQ-5D with a vision bolt-on against that of the standard EQ-5D and the other measures, so all measures were completed before and after cataract surgery. Preference weights can be generated for the EQ-5D-3L with a vision bolt-on, but not for the EQ-5D-5L, so the authors used rescaled sum scores to compare across all measures. Responsiveness was assessed using indicators such as the standardised effect size and the standardised response mean.
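
For readers unfamiliar with these indices, here is a minimal sketch (in Python, with made-up numbers) of how the two standard responsiveness statistics are computed: the standardised effect size divides the mean change by the standard deviation of scores at baseline, while the standardised response mean divides it by the standard deviation of the change scores.

```python
import numpy as np

def responsiveness(before, after):
    """Standardised effect size (SES) and standardised response mean (SRM)
    for paired before/after scores."""
    before, after = np.asarray(before, float), np.asarray(after, float)
    change = after - before
    ses = change.mean() / before.std(ddof=1)  # mean change / baseline SD
    srm = change.mean() / change.std(ddof=1)  # mean change / SD of change
    return ses, srm

# Hypothetical utility scores for five patients, pre and post surgery
before = [0.62, 0.70, 0.55, 0.68, 0.74]
after = [0.71, 0.73, 0.60, 0.69, 0.80]
print(responsiveness(before, after))
```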

Visual acuity improved dramatically after surgery for almost everybody. The authors found that the vision bolt-on was considerably more responsive to this change than the EQ-5D without it. For instance, the mean change in the EQ-5D-3L index score was 0.018 without the vision bolt-on and 0.031 with it. The HUI3 came out with a mean change of 0.105 and showed the highest responsiveness across all analyses.

Does this mean that we should all be using a vision bolt-on, or perhaps the HUI3? Not exactly. Something I see a lot in papers of this sort – including in this one – is the framing of ‘superior responsiveness’ as an indication that the measure is doing a better job. That isn’t true if the measure is responding to things to which we don’t want it to respond. As the authors point out, the HUI3 has quite different foundations from the EQ-5D. We also don’t want a situation where analysts can pick and choose measures according to whichever is most responsive to the thing to which they want it to be most responsive. In EuroQol parlance, what goes into the descriptive system is very important.

The causal effect of social activities on cognition: evidence from 20 European countries. Social Science & Medicine Published 9th January 2020

Plenty of studies have shown that cognitive abilities are correlated with social engagement, but few have attempted to demonstrate causality in a large sample. The challenge, of course, is that people who engage in more social activities are likely to have greater cognitive abilities for other reasons, and people’s decision to engage in social activities might depend on their cognitive abilities. This study tackles the question of causality using a novel (to me, at least) methodology.

The analysis uses data from five waves of SHARE (the Survey of Health, Ageing and Retirement in Europe). Survey respondents are asked about whether they engage in a variety of social activities, such as voluntary work, training, sports, or community-related organisations. From this, the authors generate an indicator for people participating in zero, one, or two or more of these activities. The survey also uses a set of tests to measure people’s cognitive abilities in terms of immediate recall capacity, delayed recall capacity, fluency, and numeracy. The authors look at each of these four outcomes, with 231,407 observations for the first three and 124,381 for numeracy (for which the questions were missing from some waves). Confirming previous findings, a strong positive correlation is found between engagement in social activities and each of the cognition indicators.

The empirical strategy, which I had never heard of, is partial identification. This is a non-parametric method that identifies bounds for the average treatment effect; it is ‘partial’ because it doesn’t identify a point estimate. Fewer assumptions mean wider and less informative bounds. The authors start with a model with no assumptions, for which the lower bound for the treatment effect goes below zero. They then incrementally add assumptions: i) monotone treatment response, assuming that social participation does not reduce cognitive abilities on average; ii) monotone treatment selection, assuming that people who choose to be socially active tend to have higher cognitive capacities; and iii) a monotone instrumental variable assumption that body mass index is negatively associated with cognitive abilities. The authors argue that their methodology is not likely to be undermined by unobservables in the way that previous studies might be.
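
To make the logic concrete, here is a rough sketch of Manski-style worst-case bounds and how the monotonicity assumptions tighten them, using simulated data with a binary treatment and an outcome rescaled to [0, 1]. This is a simplified illustration of the general approach, not the authors' exact estimator.

```python
import numpy as np

def manski_bounds(y, d, y_min=0.0, y_max=1.0):
    """Worst-case (no-assumption) bounds on the average treatment effect of a
    binary treatment d on an outcome y bounded in [y_min, y_max]."""
    p = d.mean()
    ey1, ey0 = y[d == 1].mean(), y[d == 0].mean()
    # Bound E[Y(1)] and E[Y(0)] by filling the unobserved arm with extremes
    lb1, ub1 = p * ey1 + (1 - p) * y_min, p * ey1 + (1 - p) * y_max
    lb0, ub0 = (1 - p) * ey0 + p * y_min, (1 - p) * ey0 + p * y_max
    return lb1 - ub0, ub1 - lb0

rng = np.random.default_rng(1)
d = rng.integers(0, 2, 50_000)  # social participation (invented data)
y = np.clip(0.4 + 0.1 * d + rng.normal(0, 0.15, 50_000), 0, 1)  # cognition score

lo, hi = manski_bounds(y, d)
naive = y[d == 1].mean() - y[d == 0].mean()
print(f"no assumptions: [{lo:.3f}, {hi:.3f}]")           # straddles zero
print(f"+ MTR:          [{max(lo, 0):.3f}, {hi:.3f}]")   # effect cannot be negative
print(f"+ MTS:          [{max(lo, 0):.3f}, {naive:.3f}]")  # naive gap becomes the ceiling
```

Under monotone treatment response the lower bound rises to zero, and adding monotone treatment selection caps the upper bound at the naive treated-untreated difference, which mirrors why these assumptions had such identifying power in the paper.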

The various models show that engaging in social activities has a positive impact on all four of the cognitive indicators. The assumption of monotone treatment response had the greatest identifying power. For all models that included it, the 95% confidence intervals around the estimates showed a statistically significant positive impact of social activities on cognition. What is perhaps most interesting about this approach is the huge amount of uncertainty in the estimates: social activities might have a huge effect on cognition or they might have a tiny effect. A basic OLS-type model, assuming exogenous selection, provides very narrow confidence intervals, whereas the confidence intervals on the partial identification models are almost as wide as the bounds themselves.

One shortcoming of this study for me is that it doesn’t seek to identify the causal channels that have been proposed in previous literature (e.g. loneliness, physical activity, self-care). So it’s difficult to paint a clear picture of what’s going on. But then, maybe that’s the point.

Do research groups align on an intervention’s value? Concordance of cost-effectiveness findings between the Institute for Clinical and Economic Review and other health system stakeholders. Applied Health Economics and Health Policy [PubMed] Published 10th January 2020

Aside from having the most inconvenient name imaginable, ICER has been a welcome addition to the US health policy scene, appraising health technologies in order to provide guidance on coverage. ICER has become influential, with some pharmacy benefit managers using their assessments as a basis for denying coverage for low-value medicines. ICER identify technologies as falling into one of three categories – high, intermediate, or low long-term value – according to whether the ICER (grr) falls below, within, or above the threshold range of $50,000-$175,000 per QALY. ICER conduct their own evaluations, but so do plenty of other people. This study sought to find out whether other analyses in the literature agree with ICER’s categorisations.
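
The categorisation rule itself amounts to a simple threshold check; a sketch, using the range described above:

```python
def icer_value_category(icer_per_qaly: float) -> str:
    """ICER's long-term value category from a cost-per-QALY ratio,
    using the $50,000-$175,000 per QALY threshold range."""
    if icer_per_qaly < 50_000:
        return "high long-term value"
    if icer_per_qaly > 175_000:
        return "low long-term value"
    return "intermediate long-term value"

print(icer_value_category(120_000))  # intermediate long-term value
```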

The authors consider 18 assessments published by ICER between 2015 and 2017, covering 76 interventions. For each of these, the authors searched the literature for other comparative studies. Specifically, they went looking for cost-effectiveness analyses that employed the same perspectives and outcomes. Unfortunately, they were only able to identify studies for six disease areas and 14 of the 76 interventions, across 25 studies. It isn’t clear whether this is because there is a lack of literature out there – which would be an interesting finding in itself – or because their search strategy or selection criteria weren’t up to scratch. Of the 14 interventions compared, 10 received a more favourable assessment in the published studies than in their corresponding ICER evaluations, with most being categorised as intermediate value instead of low value. The authors go on to conduct one case study, comparing an ICER evaluation in the context of migraine with a published study by some of the authors of this paper. There were methodological differences: in some respects, it seems as if ICER did a more thorough job, while in others the published study seemed to use more defensible assumptions.

I agree with the authors that these kinds of comparisons are important. Not least, we need to be sure that ICER’s approach to appraisal is valid. The findings of this study suggest that maybe ICER should be looking at multiple studies and combining all available data in a more meaningful way. But the authors excluded too many studies. Some imperfect comparisons would have been more useful than exclusion – 14 of 76 is kind of pitiful and probably not representative. And I’m not sure why the authors set out to identify studies that are ‘more favourable’, rather than just different. That perspective seems to reveal an assumption that ICER are unduly harsh in their assessments.


Chris Sampson’s journal round-up for 6th January 2020


Child sleep and mother labour market outcomes. Journal of Health Economics [PubMed] [RePEc] Published January 2020

It’s pretty clear that sleep is important to almost all aspects of our lives and our well-being. So it is perhaps surprising that economists have paid relatively little attention to the ways in which the quality of sleep influences the ‘economic’ aspects of our lives. Part of the explanation might be that almost anything that you can imagine having an effect on your sleep is also likely to be affected by your sleep. Identifying causality is a challenge. This paper shows us how it’s done.

The study is focussed on the relationship between sleep and labour market outcomes in new mothers. There’s good reason to care about new mothers’ sleep, because many new mothers report that lack of sleep is a problem and many suffer from mental and physical health problems that might relate to this. But the major benefit of this study is that the context provides a very nice instrument to help identify causality – children’s sleep. The study uses data from the Avon Longitudinal Study of Parents and Children (ALSPAC), which seems like an impressive data set. ALSPAC recruited 14,541 pregnant women with due dates between 1991 and 1993, collecting data on mothers’ and children’s sleep quality and mothers’ labour market activity. The authors demonstrate that children’s sleep (in terms of duration and disturbances) affects the amount of sleep that mothers get. No surprise there. They then demonstrate that the amount of sleep that mothers get affects their labour market outcomes, in terms of their likelihood of being in employment, the number of hours they work, and household income. The authors also demonstrate that children’s sleep quality does not have a direct impact on mothers’ labour market outcomes except through its effect on mothers’ sleep. The causal mechanism seems difficult to refute.

Using a two-stage least squares model with a child’s sleep as an instrument for their mother’s sleep, the authors estimate the effect of mothers’ sleep on labour market outcomes. On average, a 30-minute increase in a mother’s sleep duration increases the number of hours she works by 8.3% and household income by 3.1%. But the study goes further (much further) by identifying the potential mechanisms for this effect, with numerous exploratory analyses. Less sleep makes mothers more likely to self-report having problems at work. It also makes mothers less likely to work full-time. Going even further, the authors test the impact of the UK Employment Rights Act 1996, which gave mothers the right to request flexible working. The Act reduced the impact of mothers’ sleep duration on labour market outcomes, including a 6 percentage point lower probability that mothers drop out of the labour force.
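
For the uninitiated, the logic of the identification strategy can be sketched in a few lines. Below is a minimal two-stage least squares example on simulated data, with the child’s sleep standing in as the instrument; all of the variable names and coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
u = rng.normal(size=n)                # unobserved factors (e.g. stress)
child_sleep = rng.normal(7, 1, n)     # instrument: child's sleep (hours)
mother_sleep = 2 + 0.5 * child_sleep - 0.3 * u + rng.normal(0, 0.5, n)
hours_worked = 10 + 1.2 * mother_sleep - 0.8 * u + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), mother_sleep])
Z = np.column_stack([np.ones(n), child_sleep])

# OLS is biased because u drives both the mother's sleep and her work hours
ols = np.linalg.lstsq(X, hours_worked, rcond=None)[0]

# 2SLS: the first stage predicts the mother's sleep from the instrument;
# the second stage regresses the outcome on the fitted values
first_stage = np.linalg.lstsq(Z, mother_sleep, rcond=None)[0]
X_hat = np.column_stack([np.ones(n), Z @ first_stage])
tsls = np.linalg.lstsq(X_hat, hours_worked, rcond=None)[0]

print(f"true effect: 1.20, OLS: {ols[1]:.2f}, 2SLS: {tsls[1]:.2f}")
```

The exclusion restriction is the claim defended in the paper: the child’s sleep affects the mother’s labour market outcomes only through the mother’s sleep.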

My only criticism of this paper is that the copy-editing is pretty poor! There are so many things in this study that are interesting in their own right but that also signal a need for further research. Unsurprisingly, the study identifies gender inequalities. No wonder men’s wages increase while women’s plateau. Personally, I don’t much care about labour market outcomes except insofar as they affect individuals’ well-being. Thanks to the impressive data set, the study can also show that the impact on women’s labour market outcomes is not simply a response to changing priorities with respect to work, implying that it is actually a problem. The study provides a lot of food for thought for policy-makers.

Health years in total: a new health objective function for cost-effectiveness analysis. Value in Health Published 23rd December 2019

It’s common for me to complain about papers on this blog, usually in relation to one of my (many) pet peeves. This paper is in a different category. It’s dangerous. I’m angry.

The authors introduce the concept of ‘health years in total’. It’s a simple idea that involves separating the QA and the LY parts of the QALY in order to make quality of life and life years additive instead of multiplicative. This creates the possibility of attaching value to life years over and above their value in terms of the quality of life that is experienced in them. ‘Health years’ can be generated at a rate of two per year because each life year is worth 1 and that 1 is added to what the authors call a ‘modified QALY’. This ‘modified QALY’ is based on the supposition that the number of life years in its estimation corresponds to the maximum number of life years available under any treatment scenario being considered. So, if treatment A provides 2 life years and treatment B provides 3 life years, you multiply the quality of life value of treatment A by 3 years and then add the number of actual life years (i.e. 2). On the face of it, this is as stupid as it sounds.
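
To see the arithmetic, here is a small worked version of the comparison described above; the quality of life values (0.8 and 0.6) are my own invented inputs.

```python
def health_years_in_total(qol, life_years, max_life_years):
    """'Health years in total' as described above: quality of life is applied
    to the maximum life years available across the treatments being compared
    (the 'modified QALY'), and the treatment's own life years are added on."""
    return qol * max_life_years + life_years

# Treatment A: 2 life years at QoL 0.8; treatment B: 3 life years at QoL 0.6
max_ly = 3
print("QALYs:", 0.8 * 2, "vs", 0.6 * 3)  # 1.6 vs 1.8
print("HYT:  ", health_years_in_total(0.8, 2, max_ly),
      "vs", health_years_in_total(0.6, 3, max_ly))  # 4.4 vs 4.8
```

Note how the life-extending treatment B widens its advantage under ‘health years in total’, because each life year counts once through quality of life and again in its own right.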

So why do it? Well, some people don’t like QALYs. A cabal of organisations, supposedly representing patients, has sought to undermine the use of cost-effectiveness analysis. For whatever reason, they have decided to pursue the argument that the QALY discriminates against people with disabilities, or anybody else who happens to be unwell. Depending on the scenario this is either untrue or patently desirable. But the authors of this paper seem happy to entertain the cabal. The foundation for the development of the ‘health years in total’ framework is explicitly based in the equity arguments forwarded by these groups. It’s designed to be a more meaningful alternative to the ‘equal value of life’ measure; a measure that has been used in the US context, which adds a value of 1 to life years regardless of their quality.

The paper does a nice job of illustrating the ‘health years in total’ approach compared with the QALY approach and the ‘equal value of life’ approach. There’s merit in considering alternatives to the QALY model, and there may be value in an ‘additive’ approach that in some way separates the valuation of life years from the valuation of health states. There may even be some ethical justification for the ‘health years in total’ framework. But, if there is, it isn’t provided by this paper. To frame the QALY as discriminatory in the way that the authors do, describing this feature as a ‘limitation’ of the QALY approach, and to present an alternative with no basis in ethics is, at best, foolish. In practice, the ‘health years in total’ calculation would favour life-extending treatments over those that improve health. There are some organisations with vested interests in this. Expect to see ‘health years in total’ obscuring decision-making in the United States in the near future.

The causal effect of education on chronic health conditions in the UK. Journal of Health Economics Published 23rd December 2019

Since the dawn of health economics, researchers have been interested in the ways in which education and health outcomes depend on one another. People with more education tend to be healthier. But identifying causal relationships in this context is almost impossible. Some studies have claimed that education has a positive (causal) effect on both general and specific health outcomes. But there are just as many studies that show no impact. This study attempts to solve the problem by throwing a lot of data at it.

The authors analyse the impact of two sets of reforms in the UK: first, the raising of the school leaving age in 1972, from 15 to 16 years; second, the broader set of reforms implemented in the 1990s that resulted in a major increase in the number of people entering higher education. The study’s weapon is the Quarterly Labour Force Survey (QLFS), which includes over 5 million observations from 1.5 million people. Part of the challenge of identifying the impact of education on health outcomes is that the effects can be expected to be observed over the long term and can therefore be obscured by other long-term trends. To address this, the authors limit their analyses to people in narrow age ranges corresponding to the timing of the reforms. Thanks to the size of the data set, they still have more than 350,000 observations for each reform. The QLFS asks people to self-report having any of a set of 17 different chronic health conditions, which can be grouped in a variety of ways or looked at individually. The analysis uses a regression discontinuity framework to test the impact of raising the school leaving age, with birth date acting as an instrument for the number of years spent in education. The analysis of the second reform is less precise, as there is no single discontinuity, so the model identifies variation between the relevant cohorts over the period. The models are used to test a variety of combinations of the chronic condition indicators.
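
As a sketch of the first analysis, a fuzzy regression discontinuity can be boiled down to a Wald ratio within a bandwidth around the reform cohort: the jump in the outcome divided by the jump in schooling. The simulation below is illustrative only, with (deliberately) no true effect of schooling on health, in line with the paper’s findings; a real analysis would also control for trends in the running variable.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
birth_q = rng.uniform(-20, 20, n)  # quarters of birth around the 1972 cutoff
post = birth_q >= 0                # cohorts required to stay in school to 16
years_ed = 10 + 0.5 * post + rng.normal(0, 1.5, n)  # first stage: ~0.5 extra years
sick = rng.uniform(size=n) < 0.30  # chronic condition; no true effect of schooling

bw = 5  # bandwidth (quarters) around the cutoff
s = np.abs(birth_q) < bw
reduced_form = sick[s & post].mean() - sick[s & ~post].mean()
first_stage = years_ed[s & post].mean() - years_ed[s & ~post].mean()
print(f"effect of a year of schooling: {reduced_form / first_stage:+.3f}")  # ~0
```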

In short, the study finds that education does not seem to have a causal effect on health, in terms of the number of chronic conditions or the probability of having any chronic condition. But, even with their massive data set, the authors cannot exclude the possibility that education does have an effect on health (whether positive or negative). This non-finding is consistent across both reforms and is robust to various specifications. There is one potentially important exception to this. Diabetes. Looking at the school leaving age reform, an additional year of schooling reduces the likelihood of having diabetes by 3.6 percentage points. Given the potential for diabetes to depend heavily on an individual’s behaviour and choices, this seems to make sense. Kids, stay in school. Just don’t do it for the good of your health.


Sam Watson’s journal round-up for 29th October 2018


Researcher Requests for Inappropriate Analysis and Reporting: A U.S. Survey of Consulting Biostatisticians. Annals of Internal Medicine. [PubMed] Published October 2018.

I have spent a fair bit of time masquerading as a statistician. While I frequently try to push for Bayesian analyses where appropriate, I have still had to do frequentist work, including power and sample size calculations. In principle, these power calculations serve a good purpose: if a study is likely to produce very uncertain results, it won’t contribute much to scientific knowledge and so won’t justify its cost. A power calculation can also indicate that a two-arm trial would be preferred over a three-arm trial, despite losing an important comparison. But many power analyses, I suspect, are purely for show; all that is wanted is the false assurance of some official-looking statistics to demonstrate that a particular design is good enough. Now, I’ve never worked on economic evaluation, but I can imagine that the same pressures sometimes exist to achieve a certain result. This study presents a survey of 400 US-based statisticians, asking how frequently they receive requests for inappropriate analysis or reporting and how egregious they judge each request to be. For example, the most severe request is judged to be falsifying statistical significance. But the list includes common requests, such as not showing plots because they don’t reveal an effect to be as significant as hoped, downplaying ‘insignificant’ findings, or dressing up post hoc power calculations as a priori analyses. I would think that those responding to this survey are less likely to be the ones who comply with such requests, and the survey does not ask whether they did. But it wouldn’t be a big leap to suggest that there are those who do comply, career pressures being what they are. We already know that statistics are widely misused and misreported, especially p-values. Whether this is due to ignorance or malfeasance, I’ll let the reader decide.
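
As an aside, a legitimate a priori power calculation is not hard to produce; here’s a minimal sketch using statsmodels, with invented design parameters.

```python
from statsmodels.stats.power import TTestIndPower

# Sample size per arm for a two-arm trial to detect a standardised
# effect size of 0.4 with 80% power at a two-sided 5% alpha
n_per_arm = TTestIndPower().solve_power(effect_size=0.4, power=0.8, alpha=0.05)
print(f"{n_per_arm:.0f} participants per arm")  # ~100
```

The inappropriate version flagged in the survey runs this calculation after the data are in, plugging in the observed effect size and presenting the result as if it had been planned.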

Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results. Advances in Methods and Practices in Psychological Science. [PsyArXiv] Published August 2018.

Every data analysis requires a large number of decisions. From receiving the raw data, the analyst must decide what to do with missing or outlying values, which observations to include or exclude, whether any transformations of the data are required, how to code and combine categorical variables, how to define the outcome(s), and so forth. Each of these decisions leads to a different analysis, and if all possible analyses were enumerated there could be a myriad. Gelman and Loken called this the ‘garden of forking paths’, after the short story by Jorge Luis Borges. They identify it as the source of the problem called p-hacking. It’s not that researchers are conducting thousands of analyses and publishing the one with the statistically significant result, but that each decision along the way may be favourable towards finding a statistically significant result. Do the outliers go against what you were hypothesising? Exclude them. Is there a nice long tail in the distribution in the treatment group? Don’t take logs.
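
The forking paths are easy to demonstrate. The sketch below (invented data and decision rules) enumerates just two such decisions, whether to drop outliers and whether to top-code the outcome, and gets four different estimates from one data set.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=200)
y = 0.1 * x + rng.standard_t(df=3, size=200)  # noisy, heavy-tailed outcome

def estimate(x, y, drop_outliers, top_code):
    if drop_outliers:  # forking decision 1
        keep = np.abs(y - y.mean()) < 2 * y.std()
        x, y = x[keep], y[keep]
    if top_code:       # forking decision 2
        y = np.minimum(y, np.quantile(y, 0.95))
    return np.polyfit(x, y, 1)[0]  # slope of y on x

# Four paths through the garden, four estimates of the 'same' effect
for drop, top in itertools.product([False, True], repeat=2):
    print(f"drop_outliers={drop!s:<5} top_code={top!s:<5} "
          f"slope={estimate(x, y, drop, top):+.3f}")
```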

This article explores the garden of forking paths by getting a number of analysts to try to answer the same question with the same data set. The question was: are darker-skinned soccer players more likely to receive a red card than their lighter-skinned counterparts? The data set provided had information on league, country, position, skin tone (based on subjective ratings), and previous cards. Unsurprisingly, there was a large range of results, with point estimates ranging from odds ratios of 0.89 to 2.93, and a similar range of standard errors. Looking at the list of analyses, I see a couple that I might have pursued, both producing vastly different results. The authors see this as demonstrating the usefulness of crowdsourcing analyses. At the very least, it should be a stark warning to any analyst to be transparent about every decision and to consider its consequences.

Front-Door Versus Back-Door Adjustment With Unmeasured Confounding: Bias Formulas for Front-Door and Hybrid Adjustments With Application to a Job Training Program. Journal of the American Statistical Association. Published October 2018.

Econometricians love instrumental variables. Without any supporting evidence, I would be willing to conjecture that it is the most widely used type of analysis in empirical economic causal inference. When the assumptions are met, it is a great tool, but decent instruments are hard to come by. We’ve covered a number of unconvincing applications on this blog, where the instrument might be weak or not exogenous, and some of my own analyses have been (rightfully) criticised on these grounds. But, and we often forget, there are other causal inference techniques. One of these, which I think is unfamiliar to most economists, is the ‘front-door’ adjustment. Consider the following diagram:

[Figure: two causal diagrams, the front-door model on the left and the instrumental variable model on the right]

On the right is the instrumental variable type of causal model. Provided Z satisfies an exclusion restriction (i.e. it is independent of U) and some other assumptions, it can be used to estimate the causal effect of A on Y. The front-door approach, on the left, shows a causal diagram in which there is a post-treatment variable, M, unrelated to U, which causes the outcome Y. Pearl showed that, under a set of assumptions similar to those for instrumental variables (namely, that the effect of A on Y is entirely mediated by M, and that there are no common causes of A and M or of M and Y), M can be used to identify the causal effect of A on Y. This article discusses the front-door approach in the context of estimating the effect of a job training program (a favourite of James Heckman). The instrumental variable approach uses random assignment to the program, while the front-door analysis, in the absence of randomisation, uses program enrollment as its mediating variable. The paper considers the effect of the assumptions breaking down, and shows the front-door estimator to be fairly robust.
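
Pearl’s front-door formula is simple enough to verify by simulation. The sketch below uses invented binary variables with an unobserved confounder U that the front-door adjustment never touches; under this data-generating process the true effect of A on Y (via M) is 0.4 × (0.8 − 0.1) = 0.28.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
u = rng.integers(0, 2, n)                     # unobserved confounder
a = rng.binomial(1, 0.3 + 0.4 * u)            # treatment depends on U
m = rng.binomial(1, 0.1 + 0.7 * a)            # mediator depends only on A
y = rng.binomial(1, 0.2 + 0.4 * m + 0.3 * u)  # outcome depends on M and U

def front_door(a_val):
    """P(Y=1 | do(A=a_val)) = sum_m P(m|a) * sum_a' P(Y=1|m,a') * P(a')"""
    total = 0.0
    for m_val in (0, 1):
        p_m_given_a = (m[a == a_val] == m_val).mean()
        inner = sum(y[(m == m_val) & (a == a2)].mean() * (a == a2).mean()
                    for a2 in (0, 1))
        total += p_m_given_a * inner
    return total

print(f"front-door estimate: {front_door(1) - front_door(0):+.3f}")  # ~+0.28
naive = y[a == 1].mean() - y[a == 0].mean()
print(f"naive difference:    {naive:+.3f}")  # confounded upwards by U
```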

