Sam Watson’s journal round-up for 11th December 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Can incentives improve survey data quality in developing countries?: results from a field experiment in India. Journal of the Royal Statistical Society: Series A. Published 17th November 2017

I must admit a keen interest in the topic of this paper. As part of a large project looking at the availability of health services in slums and informal settlements around the world, we are designing a household survey. Much like the Demographic and Health Surveys, which are perhaps the gold standard of household surveys in low-income countries, interviewers will go door to door to sampled households to complete surveys. One of the problems with household surveys is that they take a long time, and so non-response can be an issue. A potential solution is to offer respondents incentives, cash or otherwise, either before the survey or conditional on completing it. But any change in survey response as a result of an incentive might create suspicion around data quality. Work in high-income countries suggests incentives to participate have little or no effect on data quality. But there is little evidence about these effects in low-income countries. We might suspect the consequences of survey incentives to differ in poorer settings. For a start, many surveys are conducted on behalf of the government or an NGO, and respondents may misrepresent themselves if they believe further investment in their area might be forthcoming if they are sufficiently badly off. There may also be larger differences between the interviewer and interviewee in terms of education or cultural background. And finally, incentives can affect the balance between a respondent’s so-called intrinsic and extrinsic motivations for doing something. This study presents the results of a randomised trial where the ‘treatment’ was a small conditional payment for completing a survey, and the ‘control’ was no incentive. In both arms, the response rate was very high (>96%), but it was higher in the treatment arm. More importantly, the authors compare responses to a broad range of socioeconomic and demographic questions between the study arms.
Aside from the familiar criticism that statistical significance is interpreted here as implying the existence of a difference, there are some interesting results. The key observed difference is that respondents in the incentive arm consistently reported having lower wealth across a number of categories. This may result from any of the aforementioned effects of incentives, but it may also be evidence that incentives can affect data quality and should be used with caution.
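The arm-by-arm comparisons amount to tests for differences in the proportion of respondents giving a particular answer. A minimal sketch in Python, with entirely hypothetical numbers (the share of households reporting ownership of some asset in each arm; sample sizes invented), shows the kind of calculation involved:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference in proportions (pooled SE)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a - p_b, p_value

# Hypothetical: 420/1000 incentive-arm households vs 470/1000 control-arm
# households report owning the asset (not the paper's data).
diff, p = two_proportion_z(420, 1000, 470, 1000)
```

As the paper's critics would note, a small p-value here establishes only that a difference is unlikely under the null, not that the difference is substantively meaningful.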

Association of US state implementation of newborn screening policies for critical congenital heart disease with early infant cardiac deaths. JAMA [PubMed] Published 5th December 2017

Writing these journal round-ups obviously requires reading the papers that you choose. This can be quite an undertaking for papers published in economics journals, which are often very long, but they provide substantial detail allowing for a thorough appraisal. The opposite is true for articles in medical journals. They are pleasingly concise, but often at the expense of including detail or additional analyses. This paper falls into the latter camp. Using detailed panel data on infant deaths by cause, by year, and by state in the US, it estimates the effect of mandated screening policies for infant congenital heart defects on deaths from this condition. Given these data and more space, one might expect to see more flexible models than the difference-in-differences-type analysis presented here, such as allowing for state-level correlated time trends. The results seem clear and robust – the policies were associated with a reduction in deaths from congenital heart conditions of around a third. Given this, one might ask: if it’s so effective, why weren’t doctors doing it anyway? Additional analyses reveal little to no association of the policies with death from other conditions, which may suggest that doctors didn’t have to reallocate their time from other beneficial functions. Perhaps then the screening bore other costs. In the discussion, the authors mention that a previous economic evaluation showed that universal screening was relatively costly (approximately $40,000 per life year saved), but that this may be an overestimate in light of these new results. Certainly then an updated economic evaluation is warranted. However, the models used in the paper may lead one to be cautious about causal interpretations and hence about using the estimates in an evaluation. Given some more space the authors may have added additional analyses, but then I might not have read it…
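The core of a difference-in-differences estimate is simple enough to sketch in a few lines. A toy version in Python, with invented death rates (deaths per 100,000 births) for policy and non-policy states before and after implementation, illustrates the comparison the paper is making:

```python
# Made-up group means: deaths per 100,000 births, pre- and post-policy.
# These are illustrative numbers, not the paper's data.
pre = {"policy_states": 12.0, "control_states": 11.5}
post = {"policy_states": 8.0, "control_states": 11.0}

# Difference-in-differences: the change in policy states net of the
# change in control states over the same period.
did = (post["policy_states"] - pre["policy_states"]) - (
    post["control_states"] - pre["control_states"]
)
# → -3.5: policy states fell by 3.5 more deaths per 100,000 than controls
```

The regression version adds state and year fixed effects and covariates, but rests on the same parallel-trends assumption – which is exactly why one might want the state-level correlated time trends mentioned above.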

Subsidies and structure: the lasting impact of the Hill-Burton program on the hospital industry. Review of Economics and Statistics [RePEc] Published 29th November 2017

As part of the Hospital Survey and Construction Act of 1946 in the United States, the Hill-Burton program was enacted. As a reaction to the perceived lack of health care services for workers during World War 2, the program provided subsidies of up to a third for building nonprofit and local hospitals. Poorer areas were prioritised. This article examines the consequences of this subsidy program for the structure of the hospital market and health care utilisation. The main result is that the program increased hospital beds per capita and that this increase was lasting. More specific analyses are presented. Firstly, the increase in beds took a number of years to materialise and showed a dose-response: higher-funded counties had bigger increases. Secondly, the funding reduced private hospital bed capacity. The net effect on overall hospital beds was positive, so the program affected the composition of the hospital sector – as would be expected, given that it substantially affected the relative costs of different types of hospital bed. And thirdly, hospital utilisation increased in line with the increases in capacity, indicating a previously unmet need for health care. Again, this was expected given the motivation for the program in the first place. It isn’t often that results turn out as neatly as this – the effects are exactly as one would expect and are large in magnitude. If only all research projects turned out this way.

Credits

Chris Sampson’s journal round-up for 22nd May 2017


The effect of health care expenditure on patient outcomes: evidence from English neonatal care. Health Economics [PubMed] Published 12th May 2017

Recently, people have started trying to identify opportunity cost in the NHS by assessing the health gains associated with current spending. Studies have thrown up a wide range of values in different clinical areas, including in neonatal care. This study uses individual-level data for infants treated in 32 neonatal intensive care units from 2009 to 2013, along with the NHS Reference Cost for an intensive care cot day. A model is constructed to assess the impact of changes in expenditure, controlling for a variety of variables available in the National Neonatal Research Database. Two outcomes are considered: the in-hospital mortality rate and morbidity-free survival. The main finding is that a £100 increase in the cost per cot day is associated with a reduction in the mortality rate of 0.36 percentage points. This translates into a marginal cost per infant life saved of around £420,000. Assuming an average life expectancy of 81 years, this equates to a present value cost per life year gained of £15,200. Reductions in the mortality rate are associated with similar increases in morbidity. The estimated cost contradicts a much higher estimate presented in the Claxton et al modern classic on searching for the threshold.
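The conversion from cost per life saved to present-value cost per life year is a standard discounting calculation. A back-of-envelope sketch in Python, assuming the conventional UK 3.5% annual discount rate (the paper's exact assumptions may differ, which is why this lands near rather than exactly on the quoted £15,200):

```python
# Back-of-envelope: cost per life saved -> present-value cost per life
# year. The 3.5% discount rate is an assumption (the standard UK rate);
# the paper's own timing and rate assumptions may differ slightly.
cost_per_life_saved = 420_000  # £, from the paper
life_expectancy = 81           # years, from the paper
r = 0.035                      # assumed annual discount rate

# Annuity factor: present value of one life year received each year
annuity = (1 - (1 + r) ** -life_expectancy) / r   # ≈ 26.8
cost_per_life_year = cost_per_life_saved / annuity  # ≈ £15,700
```

Without discounting the figure would simply be £420,000 / 81 ≈ £5,200 per life year, which shows how much the discount rate matters over an 81-year horizon.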

A comparison of four software programs for implementing decision analytic cost-effectiveness models. PharmacoEconomics [PubMed] Published 9th May 2017

Markov models: TreeAge vs Excel vs R vs MATLAB. This paper compares the alternative programs in terms of transparency and validation, the associated learning curve, capability, processing speed and cost. A benchmarking assessment is conducted using a previously published model (originally developed in TreeAge). Excel is rightly identified as the ‘ubiquitous workhorse’ of cost-effectiveness modelling. It’s transparent in theory, but in practice can include cell relations that are difficult to disentangle. TreeAge, on the other hand, includes valuable features to aid model transparency and validation, though the workings of the software itself are not always clear. Being based on programming languages, MATLAB and R may be entirely transparent but challenging to validate. The authors assert that TreeAge is the easiest to learn due to its graphical nature and the availability of training options. Save for complex VBA, Excel is also simple to learn. R and MATLAB are more difficult to learn, but clearly worth the time saving for anybody expecting to work on multiple complex modelling studies. R and MATLAB both come top in terms of capability, with Excel falling behind due to having fewer statistical facilities. TreeAge has clearly defined capabilities limited to the features that the company chooses to support. MATLAB and R were both able to complete 10,000 simulations in a matter of seconds, while Excel took 15 minutes and TreeAge took over 4 hours. For a value of information analysis requiring 1,000 runs, this could translate into 6 months for TreeAge! MATLAB has some advantage over R in processing time that might make its cost ($500 for academics) worthwhile to some. Excel and TreeAge are both identified as particularly useful as educational tools for people getting to grips with the concepts of decision modelling. Though the take-home message for me is that I really need to learn R.
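For a sense of what those 10,000 simulations involve, here is a minimal sketch (in Python rather than any of the four benchmarked packages) of a probabilistic sensitivity analysis loop: a toy three-state Markov cohort model rerun many times with transition probabilities drawn from beta distributions. All state names and parameter values are invented for illustration.

```python
import random

def markov_cohort(p_well_sick, p_sick_dead, cycles=40):
    """Toy three-state Markov cohort model; returns total life years."""
    well, sick, dead = 1.0, 0.0, 0.0
    life_years = 0.0
    for _ in range(cycles):
        well, sick, dead = (
            well * (1 - p_well_sick),
            well * p_well_sick + sick * (1 - p_sick_dead),
            dead + sick * p_sick_dead,
        )
        life_years += well + sick  # alive fraction this cycle
    return life_years

random.seed(1)
# Probabilistic sensitivity analysis: sample the transition probabilities
# and rerun the whole model 10,000 times (hypothetical beta parameters).
results = [
    markov_cohort(random.betavariate(2, 18), random.betavariate(3, 27))
    for _ in range(10_000)
]
mean_life_years = sum(results) / len(results)
```

In a scripting language this loop runs in seconds; the paper's point is that the same loop driven through a spreadsheet or a GUI package carries orders of magnitude more overhead per run.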

Economic evaluation of factorial randomised controlled trials: challenges, methods and recommendations. Statistics in Medicine [PubMed] Published 3rd May 2017

Factorial trials randomise participants to at least 2 alternative levels (for example, different doses) of at least 2 alternative treatments (possibly in combination). Very little has been written about how economic evaluations ought to be conducted alongside such trials. This study starts by outlining some key challenges for economic evaluation in this context. First, there may be interactions between combined therapies, which might exist for costs and QALYs even if not for the primary clinical endpoint. Second, transformation of the data may not be straightforward; for example, it may not be possible to disaggregate a net benefit estimation into its components using alternative transformations. Third, regression analysis of factorial trials may be tricky for the purpose of constructing CEACs and conducting value of information analysis. Finally, defining the study question may not be simple. The authors simulate a 2×2 factorial trial (0 vs A vs B vs A+B) to demonstrate these challenges. The first analysis compares A and B against placebo separately in what’s known as an ‘at-the-margins’ approach. Both A and B are shown to be cost-effective, with the implication that A+B should be provided. The next analysis uses regression, with interaction terms that are unlikely to be statistically significant for costs or net benefit. ‘Inside-the-table’ analysis is used to separately evaluate the 4 alternative treatments, with an associated loss in statistical power. The findings of this analysis contradict the findings of the at-the-margins analysis. A variety of regression-based analyses is presented, with the discussion focussed on the variability in the estimated standard errors and the implications of this for value of information analysis. The authors then go on to present their conception of the ‘opportunity cost of ignoring interactions’ as a new basis for value of information analysis.
A set of 14 recommendations is provided for people conducting economic evaluations alongside factorial trials, which could be used as a bolt-on to CHEERS and CONSORT guidelines.
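How an at-the-margins analysis and an inside-the-table analysis can disagree is easy to reproduce with a toy example. In the Python sketch below, all net monetary benefit figures are invented; a negative interaction between A and B makes each treatment look worthwhile at the margins even though the combination is not the best arm.

```python
# Toy 2x2 factorial trial: mean net monetary benefit (£) per arm.
# All figures are invented for illustration; "AB" is the combined arm.
nb = {"none": 0.0, "A": 1_000.0, "B": 800.0, "AB": 900.0}

# At-the-margins: average effect of each treatment across levels of the
# other; this implicitly assumes no interaction.
effect_a = ((nb["A"] - nb["none"]) + (nb["AB"] - nb["B"])) / 2  # 550
effect_b = ((nb["B"] - nb["none"]) + (nb["AB"] - nb["A"])) / 2  # 350
# Both positive, so at-the-margins implies providing A+B together.

# Inside-the-table: compare all four arms directly.
best_arm = max(nb, key=nb.get)  # "A" alone, not "AB"

# The interaction term is what the at-the-margins approach ignores.
interaction = nb["AB"] - nb["A"] - nb["B"] + nb["none"]  # -900
```

With sampling error added, the interaction term may be far from statistically significant while still being large enough to flip the decision – which is the tension the authors' 'opportunity cost of ignoring interactions' is designed to capture.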

Credits