Sam Watson’s journal round-up for 11th December 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Can incentives improve survey data quality in developing countries?: results from a field experiment in India. Journal of the Royal Statistical Society: Series A. Published 17th November 2017

I must admit a keen interest in the topic of this paper. As part of a large project looking at the availability of health services in slums and informal settlements around the world, we are designing a household survey. Much like the Demographic and Health Surveys, which are perhaps the gold standard of household surveys in low-income countries, interviewers will go door to door to sampled households to complete surveys. One problem with household surveys is that they take a long time to complete, and so non-response can be an issue. A potential solution is to offer respondents incentives, cash or otherwise, either before the survey or conditional on completing it. But any change in survey response as a result of an incentive might create suspicion around data quality. Work in high-income countries suggests incentives to participate have little or no effect on data quality, but there is little evidence about these effects in low-income countries. We might suspect the consequences of survey incentives to differ in poorer settings. For a start, many surveys are conducted on behalf of the government or an NGO, and respondents may misrepresent themselves if they believe further investment in their area might be forthcoming if they appear sufficiently badly off. There may also be larger differences between the interviewer and interviewee in terms of education or cultural background. And finally, incentives can affect the balance between a respondent’s so-called intrinsic and extrinsic motivations for doing something. This study presents the results of a randomised trial in which the ‘treatment’ was a small conditional payment for completing a survey and the ‘control’ was no incentive. In both arms the response rate was very high (>96%), but it was higher in the treatment arm. More importantly, the authors compare responses to a broad range of socioeconomic and demographic questions between the study arms. Aside from the frequent criticism that statistical significance is interpreted here as the existence of a difference, there are some interesting results. The key observed difference is that respondents in the incentive arm consistently reported lower wealth across a number of categories. This may result from any of the aforementioned effects of incentives, but it may also be evidence that incentives can affect data quality and should be used with caution.

Association of US state implementation of newborn screening policies for critical congenital heart disease with early infant cardiac deaths. JAMA [PubMed] Published 5th December 2017

Writing these journal round-ups obviously requires reading the papers that you choose. This can be quite an undertaking for papers published in economics journals, which are often very long, but they provide substantial detail allowing for a thorough appraisal. The opposite is true for articles in medical journals: they are pleasingly concise, but often at the expense of detail or additional analyses. This paper falls into the latter camp. Using detailed panel data on infant deaths by cause, by year, and by state in the US, it estimates the effect of mandated screening policies for infant congenital heart defects on deaths from this condition. Given these data and more space, one might expect to see more flexible models than the difference-in-differences type analysis presented here, such as allowing for state-level correlated time trends. The results seem clear and robust – the policies were associated with a reduction in deaths from congenital heart conditions of around a third. Given this, one might ask: if it’s so effective, why weren’t doctors doing it anyway? Additional analyses reveal little to no association of the policies with deaths from other conditions, which may suggest that doctors didn’t have to reallocate their time from other beneficial functions. Perhaps, then, the screening bore other costs. In the discussion, the authors mention that a previous economic evaluation showed that universal screening was relatively costly (approximately $40,000 per life year saved), but that this may be an overestimate in light of these new results. Certainly an updated economic evaluation is warranted. However, the models used in the paper may lead one to be cautious about causal interpretations and hence about using the estimates in an evaluation. Given some more space the authors may have added additional analyses, but then I might not have read it…

Subsidies and structure: the lasting impact of the Hill-Burton program on the hospital industry. Review of Economics and Statistics [RePEc] Published 29th November 2017

The Hill-Burton program was enacted as part of the Hospital Survey and Construction Act of 1946 in the United States. A reaction to the perceived lack of health care services for workers during World War 2, the program provided subsidies of up to a third for building nonprofit and local hospitals, with poorer areas prioritised. This article examines the consequences of this subsidy program for the structure of the hospital market and for health care utilisation. The main result is that the program increased hospital beds per capita and that this increase was lasting. More specific analyses are also presented. Firstly, the increase in beds took a number of years to materialise and showed a dose-response relationship: higher-funded counties had bigger increases. Secondly, the funding reduced private hospital bed capacity. The net effect on overall hospital beds was nevertheless positive, so the program affected the composition of the hospital sector – although this would be expected given that it substantially affected the relative costs of different types of hospital bed. And thirdly, hospital utilisation increased in line with the increases in capacity, indicating a previously unmet need for health care. Again, this was expected given the motivation for the program in the first place. It isn’t often that results turn out as neatly as this – the effects are exactly as one would expect and are large in magnitude. If only all research projects turned out this way.

Sam Watson’s journal round-up for 13th November 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Scaling for economists: lessons from the non-adherence problem in the medical literature. Journal of Economic Perspectives [RePEc] Published November 2017

It has often been said that development economics has been at the vanguard of the use of randomised trials within economics. Other areas of economics have slowly caught up; the internal validity, and causal interpretation, offered by experimental randomised studies can provide reliable estimates of the effects of particular interventions. Health economics, though, has perhaps an even longer history with randomised controlled trials (RCTs), and economic evaluation is now often expected alongside clinical trials. RCTs of physician incentives and payments, investment programmes in child health, and treatment provision in schools all feature as other examples. However, even experimental studies can suffer from the same biases in the data analysis process as observational studies. The multiple decisions made in the data analysis and publication stages of research can lead to over-inflated estimates. Beyond that, the experimental conditions of the trial may not pertain in the real world – the study may lack external validity. The medical literature has long recognised this issue: as many as 50% of patients don’t take the medicines prescribed to them by a doctor. As a result, there has been considerable effort to develop an understanding of, and interventions to remedy, the lack of transferability between RCTs and real-world outcomes. This article summarises this literature and develops lessons for economists, who are only just starting to deal with what the authors term ‘the scaling problem’. For example, there are many reasons people don’t respond to incentives as expected: there are psychological costs to switching; people are hyperbolic discounters and often prefer small short-term gains despite larger long-term costs; and people can often fail to understand the implications of sets of complex options. We have also previously discussed the importance of social preferences in decision making. The key point is that, as policy becomes more and more informed by randomised studies, we need to be careful about over-optimistic effect sizes and start to understand adherence to different policies in the real world. Only then are recommendations reliable.

Estimating the opportunity costs of bed-days. Health Economics [PubMed] Published 6th November 2017

The economic evaluation of health service delivery interventions is becoming an important issue in health economics. We’ve discussed on many occasions questions surrounding the implementation of seven-day health services in England and Wales, for example. Other service delivery interventions might include changes to staffing levels more generally, medical IT technology, or an incentive to improve hand washing. Key to the evaluation of these interventions is that they are all generally targeted at improving quality of care – that is, at reducing preventable harm. The vast majority of patients who experience some sort of preventable harm do not die but are likely to experience longer lengths of stay in hospital – consider a person suffering from bed sores or a fall in hospital. We therefore need to be able to value those extra bed-days in order to say what the value of improving hospital quality is. Typically, we use reference costs or average accounting costs for the opportunity cost of a bed-day, mainly for pragmatic reasons, but also on the assumption that this is equivalent to the value of the second-best alternative foregone. This requires the assumption that health care markets operate properly, which they almost certainly do not. This paper explores the different ways economists have thought about opportunity costs and applies them to the question of the opportunity cost of a hospital bed-day. These include definitions such as “Net health benefit forgone for the second-best patient‐equivalents”, “Net monetary benefit forgone for the second-best treatment-equivalents”, and “Expenditure incurred + highest net revenue forgone.” The key takeaway is that there is wide variation in the estimated opportunity costs across the different methods and that, given the assumptions underpinning the most widely used methodologies are unlikely to hold, we may be routinely under- or over-valuing the effects of different interventions.

Universal investment in infants and long-run health: evidence from Denmark’s 1937 Home Visiting Program. American Economic Journal: Applied Economics [RePEc] Published October 2017

We have covered a raft of studies that look at the effects of in-utero health on later life outcomes – the so-called fetal origins hypothesis. A smaller, though by no means small, literature has considered the impact of improving infant and childhood health on adult outcomes. While many of these studies consider programmes that occurred decades ago in the US or Europe, their findings remain relevant today as many countries grapple with high infant and childhood mortality. In many low-income countries, programmes involving community health workers – lay community members provided with some basic public health training – who offer home visits, education, and referral services are being widely adopted. This article looks at the later life impacts of an infant health programme, the Home Visiting Program, implemented in Denmark in the 1930s and 40s. The aim of the programme was to provide home visits to every newborn in each district to provide education on feeding and hygiene practices and to monitor infant progress. The programme was implemented in a trial-based fashion, with different districts adopting the programme at different times and some districts remaining as controls, although selection into treatment and control was not random. Data were obtained on the health outcomes, over the period 1980-2012, of people born in 1935-49. In short, the analyses suggest that the programme improved adult longevity and health outcomes, although the effects are small. For example, they estimate the programme reduced hospitalisations by half a day between the ages of 45 and 64, and that 2 to 6 more people per 1,000 survived past 60 years of age. However, these effect sizes may be large enough to justify what may be a reasonably low-cost programme when scaled across the population.

Method of the month: Synthetic control

Once a month we discuss a particular research method that may be of interest to people working in health economics. We’ll consider widely used key methodologies, as well as more novel approaches. Our reviews are not designed to be comprehensive but provide an introduction to the method, its underlying principles, some applied examples, and where to find out more. If you’d like to write a post for this series, get in touch. This month’s method is synthetic control.

Principles

Health researchers are often interested in estimating the effect of a policy or change at the aggregate level. This might include a change in admissions policies at a particular hospital, or a new public health policy applied to a state or city. A common approach to inference in these settings is the difference-in-differences (DiD) method: pre- and post-intervention outcomes in a treated unit are compared with outcomes in the same periods for a control unit. The aim is to estimate a counterfactual outcome for the treated unit in the post-intervention period. To do this, DiD assumes that the trend over time in the outcome is the same for both treated and control units.
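
In its simplest two-group, two-period form, the DiD estimate is the coefficient on the interaction term in a regression of the following sort (a standard textbook formulation, not specific to any one paper discussed here):

y_{it} = \alpha + \beta D_i + \gamma P_t + \tau (D_i \times P_t) + \epsilon_{it}

where D_i indicates a treated unit, P_t indicates a post-intervention period, and \tau is the treatment effect of interest. The common trends assumption is what licenses interpreting \tau causally.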

It is often the case in practice that we have multiple possible control units and multiple time periods of data. To predict the post-intervention counterfactual outcomes, we can note that there are three sources of information: i) the outcomes in the treated unit prior to the intervention, ii) the behaviour of other time series that are predictive of outcomes in the treated unit, including outcomes in similar but untreated units and exogenous predictors, and iii) prior knowledge of the effect of the intervention. The last of these only really comes into play in Bayesian set-ups of this method. With longitudinal data we could just throw all this into a regression model and estimate the parameters, but generally this doesn’t allow unobserved confounders to vary over time. The synthetic control method does.

Implementation

Abadie, Diamond, and Hainmueller motivate the synthetic control method using the following model:

y_{it} = \delta_t + \theta_t Z_i + \lambda_t \mu_i + \epsilon_{it}

where y_{it} is the outcome for unit i at time t, \delta_t are common time effects, Z_i are observed covariates with time-varying parameters \theta_t, \lambda_t are unobserved common factors with \mu_i as unobserved factor loadings, and \epsilon_{it} is an error term. Abadie et al show in this paper that one can derive a set of weights on the outcomes of control units that can be used to estimate the post-intervention counterfactual outcomes in the treated unit. The weights are estimated as those that minimise the distance between the pre-intervention outcomes and covariates of the treated unit and the weighted pre-intervention outcomes and covariates of the control units. Kreif et al (2016) extended this idea to multiple treated units.
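
To make this concrete, let X_1 denote the vector of pre-intervention outcomes and covariates for the treated unit and X_0 the corresponding matrix for the J control units. In the Abadie et al formulation, the weight vector W = (w_1, \dots, w_J)' solves

W^* = \arg\min_W \; (X_1 - X_0 W)' V (X_1 - X_0 W) \quad \text{subject to} \quad w_j \geq 0, \; \sum_j w_j = 1

where V is a matrix weighting each variable by its predictive importance. The non-negativity and adding-up constraints keep the synthetic control within the convex hull of the control units, which guards against extrapolation.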

Inference is difficult in this framework, so to produce confidence intervals ‘placebo’ methods are proposed. The essence of these is to re-estimate the models using a non-intervention point in time as the intervention date, in order to determine the frequency with which differences of a given order of magnitude are observed.
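
As a sketch of this placebo-in-time idea in R – where estimate_gap is a hypothetical stand-in for whatever routine fits the synthetic control and returns the post-‘intervention’ treated-minus-synthetic difference:

# estimate_gap() is a hypothetical helper: fit the synthetic control as if
# the intervention occurred at time t0 and return the post-t0 gap between
# the treated unit and its synthetic control
placebo_dates <- 10:60  # candidate dates well before the true intervention
placebo_gaps <- sapply(placebo_dates, function(t0) estimate_gap(data, t0))

# The effect estimate is 'significant' if the gap at the true intervention
# date is rarely matched by the placebo gaps (a permutation-style p-value)
true_gap <- estimate_gap(data, t0 = 70)
p_value <- mean(abs(placebo_gaps) >= abs(true_gap))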

Brodersen et al take a different approach to motivating these models. They begin with a structural time-series model, which is a form of state-space model:

y_t = Z'_t \alpha_t + \epsilon_t

\alpha_{t+1} = T_t \alpha_t + R_t \eta_t

where in this case y_t is the outcome at time t, \alpha_t is the state vector, and Z_t is an output vector, with \epsilon_t as an error term. The second equation is the state equation, which governs the evolution of the state vector over time; T_t is a transition matrix, R_t is a diffusion matrix, and \eta_t is the system error.
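
As a concrete example of this general form, a local linear trend model has state vector \alpha_t = (\mu_t, \delta_t)' and Z_t = (1, 0)', with state equations

\mu_{t+1} = \mu_t + \delta_t + \eta_{\mu,t}

\delta_{t+1} = \delta_t + \eta_{\delta,t}

where \mu_t is the current level of the series and \delta_t its slope.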

From this setup, Brodersen et al expand the model to allow for control time series (e.g. Z_t = X'_t \beta), local linear time trends, seasonal components, and dynamic effects of covariates. In this sense the model is perhaps more flexible than that of Abadie et al. Not all of the large number of covariates may be necessary, so they propose a ‘spike and slab’ prior, which combines a point mass at zero with a weakly informative distribution over the non-zero values. This lets the data select the coefficients, as it were.
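
One common way of writing such a prior for a coefficient \beta_j (a generic formulation, not necessarily the exact specification used by Brodersen et al) is

\beta_j \sim (1 - \pi_j) \delta_0 + \pi_j \mathcal{N}(0, \sigma_j^2)

where \delta_0 is a point mass at zero (the ‘spike’) and \pi_j is the prior probability that covariate j enters the model (the Gaussian component being the ‘slab’).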

Inference in this framework is simpler than above. The posterior predictive distribution can be ‘simply’ estimated for the counterfactual time series to give posterior probabilities of differences of various magnitudes.

Software

Stata

  • Synth: implements the method of Abadie et al.

R

  • Synth: implements the method of Abadie et al.
  • CausalImpact: implements the method of Brodersen et al (a usage sketch follows below).
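
As an illustration of the Brodersen et al approach, here is a minimal sketch using CausalImpact on simulated data, adapted from the package’s documented example; the control series, intervention point, and effect size are all invented for illustration:

library(CausalImpact)

# Simulate a control series (x1) and a response (y) that depends on it
set.seed(1)
x1 <- 100 + arima.sim(model = list(ar = 0.999), n = 100)
y <- 1.2 * x1 + rnorm(100)

# Add a made-up intervention effect of +10 to the final 30 periods
y[71:100] <- y[71:100] + 10
data <- cbind(y, x1)

# Define pre- and post-intervention periods and fit the model: the
# pre-period relationship between y and x1 is used to predict the
# counterfactual y in the post-period
pre.period <- c(1, 70)
post.period <- c(71, 100)
impact <- CausalImpact(data, pre.period, post.period)

summary(impact)  # posterior estimates of the causal effect
plot(impact)     # observed vs counterfactual, pointwise and cumulative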

Applications

Kreif et al (2016) estimate the effect of pay-for-performance schemes in hospitals in England and compare the synthetic control method to DiD. Pieters et al (2016) estimate the effects of democratic reform on under-five mortality. We previously covered this paper in a journal round-up and a subsequent post, for which we also used the Brodersen et al method described above. We recently featured a paper by Lépine et al (2017) in a discussion of user fees; the synthetic control method was used to estimate the impact of the removal of user fees on the use of health care in various districts of Zambia.
