Sam Watson’s journal round-up for 11th December 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Can incentives improve survey data quality in developing countries?: results from a field experiment in India. Journal of the Royal Statistical Society: Series A. Published 17th November 2017

I must admit a keen interest in the topic of this paper. As part of a large project looking at the availability of health services in slums and informal settlements around the world, we are designing a household survey. Much like the Demographic and Health Surveys, which are perhaps the gold standard of household surveys in low-income countries, interviewers will go door to door to sampled households to complete surveys. One of the issues with household surveys is that they take a long time, and so non-response can be a problem. A potential solution is to offer respondents incentives, cash or otherwise, either before the survey or conditional on completing it. But any change in survey response as a result of an incentive might create suspicion around data quality. Work in high-income countries suggests that incentives to participate have little or no effect on data quality, but there is little evidence about these effects in low-income countries. We might suspect the consequences of survey incentives to differ in poorer settings. For a start, many surveys are conducted on behalf of a government or an NGO, and respondents may misrepresent themselves if they believe that further investment in their area might be forthcoming should they appear sufficiently badly off. There may also be larger differences between interviewer and interviewee in terms of education or cultural background. And finally, incentives can affect the balance between a respondent’s so-called intrinsic and extrinsic motivations for doing something.

This study presents the results of a randomised trial in which the ‘treatment’ was a small conditional payment for completing a survey and the ‘control’ was no incentive. The response rate was very high in both arms (>96%), but it was higher in the treatment arm. More importantly, the authors compare responses to a broad range of socioeconomic and demographic questions between the study arms.
Aside from the familiar criticism that statistical significance is interpreted here as the existence of a difference, there are some interesting results. The key observed difference is that respondents in the incentive arm consistently reported lower wealth across a number of categories. This may result from any of the aforementioned effects of incentives, but it may also be evidence that incentives can affect data quality, suggesting they should be used with caution.
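The kind of between-arm comparison at issue here can be illustrated with a toy two-proportion z-test. Every number below is invented for illustration and has nothing to do with the study’s data; the point is that the z-statistic flags whether a difference is detectable, while the size of the difference is what matters substantively.

```python
# Toy comparison of a binary survey response between two randomised arms.
# All counts are hypothetical, not taken from the paper.
import math

n_inc, own_inc = 1000, 520   # incentive arm: n respondents, n reporting an asset
n_ctl, own_ctl = 1000, 580   # no-incentive arm

p_inc, p_ctl = own_inc / n_inc, own_ctl / n_ctl
pooled = (own_inc + own_ctl) / (n_inc + n_ctl)          # pooled proportion under H0
se = math.sqrt(pooled * (1 - pooled) * (1 / n_inc + 1 / n_ctl))
z = (p_inc - p_ctl) / se                                # normal-approximation z-statistic

print(f"difference = {p_inc - p_ctl:.3f}, z = {z:.2f}")
# here z is about -2.7: 'significant', but the 6-point gap is the substantive finding
```

A z-statistic past the conventional threshold only tells us a difference is unlikely to be noise; it says nothing about whether a six-percentage-point gap in reported wealth is large enough to worry about.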

Association of US state implementation of newborn screening policies for critical congenital heart disease with early infant cardiac deaths. JAMA [PubMed] Published 5th December 2017

Writing these journal round-ups obviously requires reading the papers that you choose. This can be quite an undertaking for papers published in economics journals, which are often very long, but they provide substantial detail, allowing for a thorough appraisal. The opposite is true of articles in medical journals: they are pleasingly concise, but often at the expense of detail or additional analyses. This paper falls into the latter camp. Using detailed panel data on infant deaths by cause, year, and state in the US, it estimates the effect of mandated screening policies for infant congenital heart defects on deaths from this condition. Given these data and more space, one might expect to see more flexible models than the difference-in-differences-type analysis presented here, such as one allowing for state-level correlated time trends. The results seem clear and robust – the policies were associated with a reduction in deaths from congenital heart conditions of around a third. Given this, one might ask: if it’s so effective, why weren’t doctors doing it anyway? Additional analyses reveal little to no association between the policies and deaths from other conditions, which may suggest that doctors didn’t have to reallocate their time away from other beneficial functions. Perhaps, then, the screening bore other costs. In the discussion, the authors mention that a previous economic evaluation found universal screening to be relatively costly (approximately $40,000 per life year saved), but that this may be an overestimate in light of these new results. An updated economic evaluation is certainly warranted. However, the models used in the paper may lead one to be cautious about causal interpretations, and hence about using the estimates in an evaluation. Given some more space the authors may have added further analyses, but then I might not have read it…
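The difference-in-differences logic can be sketched with simulated state-year data. Everything here is made up (states, years, a true policy effect of −2.0 deaths per some unit): the estimator simply nets out common time trends and group-level differences by comparing the pre/post change in treated states with the same change in untreated states. This is an illustration of the general technique, not the paper’s model.

```python
# Illustrative difference-in-differences on simulated state-year death rates.
# All parameters are invented; this is not the paper's data or specification.
import random
random.seed(1)

STATES, YEARS, POLICY_YEAR = 20, range(2007, 2014), 2011
treated = set(range(10))   # hypothetical: first 10 states mandate screening in 2011

def death_rate(state, year):
    base = 8.0 + 0.3 * (state % 5)       # state-level baseline differences
    trend = -0.1 * (year - 2007)         # common downward time trend
    effect = -2.0 if state in treated and year >= POLICY_YEAR else 0.0
    return base + trend + effect + random.gauss(0, 0.2)

def mean(xs):
    return sum(xs) / len(xs)

def period_mean(group, post):
    return mean([death_rate(s, y) for s in group for y in YEARS
                 if (y >= POLICY_YEAR) == post])

control = [s for s in range(STATES) if s not in treated]
did = ((period_mean(treated, True) - period_mean(treated, False))
       - (period_mean(control, True) - period_mean(control, False)))
print(round(did, 2))  # should recover the true effect of -2.0, up to noise
```

The concern raised above is that this cancellation only works if treated and control states share the same underlying trend; allowing state-specific correlated trends is one way to probe that assumption.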

Subsidies and structure: the lasting impact of the Hill-Burton program on the hospital industry. Review of Economics and Statistics [RePEc] Published 29th November 2017

The Hill-Burton program was enacted as part of the Hospital Survey and Construction Act of 1946 in the United States. A reaction to the perceived lack of health care services for workers during World War II, the program provided subsidies of up to a third of the cost of building nonprofit and local hospitals, with poorer areas prioritised. This article examines the consequences of the subsidy program for the structure of the hospital market and for health care utilisation. The main result is that the program increased hospital beds per capita and that this increase was lasting. More specific analyses are also presented. Firstly, the increase in beds took a number of years to materialise and showed a dose-response relationship: higher-funded counties had bigger increases. Secondly, the funding reduced private hospital bed capacity; the net effect on overall hospital beds was still positive, so the program changed the composition of the hospital sector, although this would be expected given that it substantially altered the relative costs of different types of hospital bed. And thirdly, hospital utilisation increased in line with the increases in capacity, indicating a previously unmet need for health care. Again, this was expected given the motivation for the program in the first place. It isn’t often that results turn out as neatly as this – the effects are exactly as one would expect and are large in magnitude. If only all research projects turned out this way.

Credits

By

  • Sam Watson

    Health economics, statistics, and health services research at the University of Warwick. Also likes rock climbing and making noise on the guitar.
