Sam Watson’s journal round-up for 16th April 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The impact of NHS expenditure on health outcomes in England: alternative approaches to identification in all‐cause and disease specific models of mortality. Health Economics [PubMed] Published 2nd April 2018

Studies looking at the relationship between health care expenditure and patient outcomes have exploded in popularity. A recent systematic review identified 65 studies on the topic by 2014 – and recent experience from these journal round-ups suggests this number has increased significantly since then. The relationship between national spending and health outcomes is important for informing policy and health care budgets, not least through the specification of a cost-effectiveness threshold. Karl Claxton and colleagues released a big study in 2015 looking at all the programmes of care in the NHS, purporting to estimate exactly this. I wrote at the time that: (i) these estimates are only truly an opportunity cost if the health service is allocatively efficient, which it isn’t; and (ii) their statistical identification method, in which they used a range of socio-economic variables as instruments for expenditure, was flawed, as the instruments were neither strong determinants of expenditure nor (conditionally) independent of population health. I also noted that their tests would be unlikely to detect this problem. In response to the first point, Tony O’Hagan commented to say that they did not assume NHS efficiency, nor even that the NHS was assumed to be trying to maximise health. This may well have been the case, but I would still, perhaps pedantically, argue that this is therefore not an opportunity cost. On the question of instrumental variables, an alternative method was proposed by Martyn Andrews and co-authors, using information that feeds into the budget allocation formula as instruments for expenditure. In this new article, Claxton, Lomas, and Martin adopt Andrews’s approach and apply it across four key programmes of care in the NHS to try to derive cost-per-QALY thresholds.
First off, many of my original criticisms also apply to this paper, to which I’d add one more: (Statistical significance being used inappropriately complaint alert!!!) The authors use what seems to be some form of stepwise regression, including and excluding regressors on the basis of statistical significance – this is a big no-no and just introduces large biases (see this article for a list of reasons why). Beyond that, the instruments issue – I think – is still a problem, as it’s hard to justify, for example, an input price index (which translates to larger budgets) as an instrument here. It is certainly correlated with higher expenditure – inputs are more expensive in higher price areas, after all – but this instrument won’t be correlated with greater inputs for this same reason. Thus, it’s the ‘wrong kind’ of correlation for this study. That said, perhaps I am letting the perfect be the enemy of the good. Is this evidence strong enough to warrant a change in a cost-effectiveness threshold? My inclination is that it is not, but that is not to deny its relevance to the debate.
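To see why significance-based variable selection is so problematic, here is a minimal simulation (the sample sizes, screening rule, and everything else here are my own illustrative assumptions, not anything from the paper): the outcome is pure noise, unrelated to every candidate regressor, yet the coefficients that survive a |t| > 2 screen are, by construction, far from their true value of zero.

```python
import numpy as np

def significance_screen(n=100, p=200, seed=0):
    """Regress pure noise on each of p candidate regressors and keep
    those with |t| > 2, mimicking significance-based selection."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, p))
    y = rng.standard_normal(n)          # outcome unrelated to every regressor

    betas = np.empty(p)
    tstats = np.empty(p)
    for j in range(p):
        x = X[:, j]
        b = (x @ y) / (x @ x)           # univariate OLS slope
        resid = y - b * x
        se = np.sqrt((resid @ resid) / (n - 2)) / np.sqrt(x @ x)
        betas[j], tstats[j] = b, b / se

    return betas, betas[np.abs(tstats) > 2]

all_betas, kept_betas = significance_screen()
# Every true slope is zero, yet the survivors of the significance
# screen are systematically large in magnitude.
print(np.mean(np.abs(all_betas)), np.mean(np.abs(kept_betas)))
```

Conditioning on significance guarantees the retained coefficients are overestimated in magnitude, which is the bias the linked article catalogues.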

Risk thresholds for alcohol consumption: combined analysis of individual-participant data for 599 912 current drinkers in 83 prospective studies. The Lancet Published 14th April 2018

“Moderate drinkers live longer” is the adage of the casual drinker, as if to justify a hedonistic pursuit as purely pragmatic. But where does this idea come from? Studies that have compared risk of cardiovascular disease to level of alcohol consumption have shown that disease risk is lower in those that drink moderately compared to those that don’t drink. But correlation does not imply causation – non-drinkers might differ from those that drink. They may be abstinent after experiencing health issues related to alcohol, or may otherwise have been advised not to drink to protect their health. If we truly believed moderate alcohol consumption was better for your health than none at all, we’d advise people who don’t drink to start. Moreover, if this relationship were true then there would be an ‘optimal’ level of consumption at which any protective effect was maximised before being outweighed by the adverse effects. This new study pools data from three large consortia, each containing data from multiple studies or centres, on individual alcohol consumption, cardiovascular disease (CVD), and all-cause mortality, to look at these outcomes among drinkers, excluding non-drinkers for the aforementioned reasons. Reading the methods section, it’s not wholly clear what was done, at least not if replicability were the standard. I believe that, for each database, a hazard ratio or odds ratio for the risk of CVD or mortality was estimated for eight groups of alcohol consumption; these ratios were then pooled in a random-effects meta-analysis. However, it’s not clear to me why you would need to do this in two steps when you could just estimate a hierarchical model that achieves the same thing while also propagating the uncertainty through all the levels.
Anyway, a polynomial was then fitted through the pooled ratios – again, why not just do this in the main stage and estimate some kind of hierarchical semi-parametric model, instead of a three-stage model, to get the curve of interest? I don’t know. The key finding is that risk generally increases above around 100g/week of alcohol (around 5-6 UK glasses of wine per week), below which it is fairly flat (although whether it differs from that of non-drinkers we don’t know). However, the picture the article paints is complicated: the risks of stroke and heart failure go up with increased alcohol consumption, but the risk of myocardial infarction goes down. This would suggest some kind of competing risk: the mechanism by which alcohol works increases your overall risk of CVD, and your proportional risk of non-myocardial-infarction CVD given CVD.
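For reference, the second, pooling stage as I understand it can be sketched as a standard DerSimonian-Laird random-effects meta-analysis. The per-study estimates below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical per-study log hazard ratios for one alcohol-consumption
# band, each with a standard error -- the output of stage one.
log_hr = np.array([0.12, 0.05, 0.20, 0.08, 0.15])
se     = np.array([0.06, 0.09, 0.07, 0.05, 0.10])

# Stage two: DerSimonian-Laird random-effects pooling.
w_fixed = 1 / se**2
theta_f = np.sum(w_fixed * log_hr) / np.sum(w_fixed)    # fixed-effect mean
q = np.sum(w_fixed * (log_hr - theta_f) ** 2)           # Cochran's Q
df = len(log_hr) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)                           # between-study variance

w_re = 1 / (se**2 + tau2)
theta_re = np.sum(w_re * log_hr) / np.sum(w_re)         # pooled log HR
se_re = np.sqrt(1 / np.sum(w_re))
print(np.exp(theta_re), se_re)                          # pooled hazard ratio
```

A one-stage hierarchical model would estimate the study-level effects and the pooled curve jointly, so the uncertainty in each stage would propagate automatically rather than being fixed at its stage-one value.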

Family ruptures, stress, and the mental health of the next generation [comment] [reply]. American Economic Review [RePEc] Published April 2018

I’m not sure I will write out the full blurb again about studies of in utero exposure to difficult or stressful conditions and later life outcomes. There are a lot of them and they continue to make the top journals. Admittedly, I continue to cover them in these round-ups – so much so that we could write a literature review on the topic on the basis of the content of this blog. Needless to say, exposure in the womb to stressors likely increases the risk of low birth weight, neonatal and childhood disease, poor educational outcomes, and worse labour market outcomes. So what does this new study (and the comments) contribute? Firstly, it uses a new type of stressor – maternal stress caused by a death in the family, which apparently has a dose-response, as stronger ties to the deceased are more stressful – and secondly, it looks at mental health outcomes of the child, which are less common in these sorts of studies. The identification strategy compares the effect of the death on infants who are in the womb to those infants who experience it shortly after birth. Herein lies the interesting discussion raised in the above linked comment and reply papers: in this paper the sample contains all births up to one year post birth, and to be in the ‘treatment’ group the death had to have occurred between conception and the expected date of birth, so babies born preterm were less likely to end up in the control group than those born after the expected date. This spurious correlation could potentially lead to bias. In their reply, the authors re-estimate their models, redefining the control group on the basis of the expected date of birth rather than the actual one. They find that their estimates for the effect of their stressor on physical outcomes, like low birth weight, are much smaller in magnitude, and I’m not sure they’re clinically significant.
For mental health outcomes, the estimates remain similar to those in the original paper, though again qualitatively small in magnitude, but this choice phrase pops up (Statistical significance being used inappropriately complaint alert!!!): “We cannot reject the null hypothesis that the mental health coefficients presented in panel C of Table 3 are statistically the same as the corresponding coefficients in our original paper.” Statistically the same! I can see they’re different! Anyway, given all the other evidence on the topic, I don’t need to explain the results in detail – the methods discussion is far more interesting.

Credits

Sam Watson’s journal round-up for 21st August 2017


Multidimensional performance assessment of public sector organisations using dominance criteria. Health Economics [RePEc] Published 18th August 2017

The empirical assessment of the performance or quality of public organisations such as health care providers is an interesting and oft-tackled problem. Despite the development of sophisticated methods in a large and growing literature, public bodies continue to use demonstrably inaccurate or misleading statistics such as the standardised mortality ratio (SMR). Apart from the issue that these statistics may not be very well correlated with underlying quality, organisations may improve on a given measure by sacrificing their performance on another outcome valued by different stakeholders. One example from a few years ago showed how hospital rankings based upon SMRs shifted significantly once readmission rates and their correlation with SMRs were taken into account. This paper advances this thinking a step further by considering multiple outcomes potentially valued by stakeholders and using dominance criteria to compare hospitals. A hospital dominates another if it performs at least as well on every outcome and strictly better on at least one. Importantly, the correlation between these measures is captured in a multilevel model. I am an advocate of this type of approach, that is, the use of multilevel models to combine information across multiple ‘dimensions’ of quality. Indeed, my only real criticism would be that it doesn’t go far enough! The multivariate normal model used in the paper assumes a linear relationship between outcomes in their conditional distributions. Similarly, an instrumental variable model is also used (with the now routine distance-to-health-facility instrument) that also assumes a linear relationship between outcomes and ‘unobserved heterogeneity’. The complex behaviour of health care providers may well mean these assumptions do not hold – for example, failing institutions may show poor performance across the board, while other facilities are able to trade off outcomes with one another. This would suggest a non-linear relationship.
I’m also finding it hard to get my head around the IV model: in particular what the covariance matrix for the whole model is and if correlations are permitted in these models at multiple levels as well. Nevertheless, it’s an interesting take on the performance question, but my faith that decent methods like this will be used in practice continues to wane as organisations such as Dr Foster still dominate quality monitoring.
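The dominance idea itself is simple to operationalise. A rough sketch, with entirely hypothetical hospitals and outcome scores (and ignoring the multilevel modelling of correlation and uncertainty that is the real substance of the paper):

```python
import numpy as np

def dominates(a, b):
    """True if provider a performs at least as well as b on every outcome
    (higher is better here) and strictly better on at least one."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a >= b) and np.any(a > b))

# Hypothetical hospitals scored on three outcomes (higher = better),
# e.g. survival, avoided readmission, patient experience.
hospitals = {
    "A": [0.95, 0.80, 0.70],
    "B": [0.93, 0.78, 0.65],   # worse than A on all three outcomes
    "C": [0.90, 0.85, 0.75],   # trades off survival against the others
}

# Hospitals not dominated by any other hospital.
undominated = [h for h in hospitals
               if not any(dominates(hospitals[g], hospitals[h])
                          for g in hospitals if g != h)]
print(undominated)
```

Here B is dominated by A, while A and C cannot be ranked against each other: that incompleteness of the ordering is exactly what distinguishes the dominance approach from a single composite score.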

A simultaneous equation approach to estimating HIV prevalence with nonignorable missing responses. Journal of the American Statistical Association [RePEc] Published August 2017

Non-response is a problem encountered more often than not in survey-based data collection. For many public health applications, though, surveys are the primary way of determining the prevalence and distribution of disease, knowledge of which is required for effective public health policy. Methods such as multiple imputation can be used in the face of missing data, but these require the assumption that the data are missing at random. For disease surveys this is unlikely to be true. For example, the stigma around HIV may make many people choose not to respond to an HIV survey, leading to a situation where data are missing not at random. This paper tackles the question of estimating HIV prevalence in the face of informative non-response. Most economists are familiar with the Heckman selection model, which is a way of correcting for sample selection bias. The Heckman model is typically estimated or viewed as a control function approach, in which the residuals from a selection model are used in a model for the outcome of interest to control for unobserved heterogeneity. An alternative way of representing this model is as a copula between the survey response (selection) variable and the outcome variable itself. This representation is more flexible and permits a variety of models for both selection and outcomes. This paper includes spatial effects (given the nature of disease transmission) not only in the selection and outcome models, but also in the model for the mixing parameter between the two marginal distributions, which allows the degree of informative non-response to differ by location and be correlated over space. The instrumental variable used is the identity of the interviewer, since different interviewers are expected to be more or less successful at collecting data, independent of the status of the individual being interviewed.
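A quick simulation shows why informative non-response matters here (the prevalence and response rates are invented for illustration): when HIV-positive individuals are less likely to respond, the complete-case estimate understates the true prevalence, which is the bias that the selection model, with its interviewer instrument, is designed to correct.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical population with a true HIV prevalence of 10%.
hiv = rng.random(n) < 0.10

# Non-ignorable non-response: positives respond at 50%, negatives at 90%.
p_respond = np.where(hiv, 0.5, 0.9)
responded = rng.random(n) < p_respond

true_prev = hiv.mean()
naive_prev = hiv[responded].mean()   # complete-case ("responders only") estimate
print(true_prev, naive_prev)
```

Multiple imputation built only on observed covariates cannot fix this, because response depends on the unobserved outcome itself; that is precisely the missing-not-at-random situation the paper addresses.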

Clustered multistate models with observation level random effects, mover–stayer effects and dynamic covariates: modelling transition intensities and sojourn times in a study of psoriatic arthritis. Journal of the Royal Statistical Society: Series C [ArXiv] Published 25th July 2017

Modelling the progression of disease accurately is important for economic evaluation. A delicate balance between bias and variance should be sought: a model that is too simple will be wrong for most people; a model that is too complex will be too uncertain. A huge range of models therefore exists, from ‘simple’ decision trees to ‘complex’ patient-level simulations. Popular choices are multistate models, such as Markov models, which provide a convenient framework for examining the evolution of stochastic processes and systems. A common feature of such models is the Markov property: the probability of moving to a given state is independent of what has happened previously. This can be relaxed by adding covariates to the transition intensities that capture event history or other salient features. This paper provides a neat example of extending this approach further in the case of arthritis. The development of arthritic damage in a hand joint can be described by a multistate model, but there are obviously multiple joints in one hand. What is more, the outcomes in any one joint are not likely to be independent of one another. This paper describes a multilevel model of transition intensities for multiple correlated processes, along with other extensions like dynamic covariates and different mover-stayer probabilities.
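As a reminder of the basic machinery being extended, here is a discrete-time sketch of a simple progressive multistate model. The transition probabilities are invented, and the paper itself works in continuous time with transition intensities rather than a one-step matrix:

```python
import numpy as np

# Hypothetical 3-state progressive model of joint damage:
# 0 = none, 1 = moderate, 2 = severe (absorbing). Rows sum to 1.
P = np.array([
    [0.90, 0.08, 0.02],
    [0.00, 0.85, 0.15],
    [0.00, 0.00, 1.00],
])

dist = np.array([1.0, 0.0, 0.0])   # everyone starts undamaged
for _ in range(10):                # ten annual cycles
    dist = dist @ P                # Markov property: the next state depends
                                   # only on the current state
print(dist)                        # state occupancy after ten cycles

# Under the Markov assumption, the expected sojourn time in a transient
# state with self-transition probability p is 1 / (1 - p) cycles.
sojourn_none = 1 / (1 - P[0, 0])
print(sojourn_none)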


Paul Mitchell’s journal round-up for 17th April 2017


Is foreign direct investment good for health in low and middle income countries? An instrumental variable approach. Social Science & Medicine [PubMed] Published 28th March 2017

Foreign direct investment (FDI) is considered a key benefit of globalisation for the economic development of countries with developing economies. The effect FDI has on the population health of countries is less well understood. In this paper, the authors draw on a large panel of data, primarily from World Bank and UN sources, covering 85 low and middle income countries between 1974 and 2012, to assess the relationship between FDI and population health, proxied by life expectancy at birth as well as child and adult mortality data. They explain clearly the problem of using basic regression analysis to examine this relationship, given the endogeneity between FDI and health outcomes. By introducing two instrumental variables, gross fixed capital formation and volatility of exchange rates in FDI origin countries, as well as controlling for GDP per capita, education, quality of institutions and urban population, the study shows that FDI is weakly statistically associated with life expectancy, estimated to amount to a 4.15-year increase in life expectancy over the study period. FDI also appears to reduce adult mortality, but to have a negligible effect on child mortality. The authors also produce some evidence that FDI linked to manufacturing could lead to reductions in life expectancy, although these findings are not as robust as the others using instrumental variables, so they recommend that this relationship between FDI type and population health be explored further. The paper also clearly shows the benefit of robust analysis using instrumental variables, as the results without these variables in the regression would have led to misleading inferences: no relationship between life expectancy and FDI would have been found had the analysis not adjusted for the underlying endogeneity bias.
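The logic of the instrumental variable strategy can be illustrated with a toy simulation (the coefficients and variable names are my own invention, not estimates from the paper): an unobserved confounder biases OLS, while an instrument that affects the outcome only through the endogenous regressor recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical setup: u drives both FDI and life expectancy, so a plain
# regression of health on FDI is confounded; z (think: capital formation
# in origin countries) shifts FDI but affects health only through it.
z = rng.standard_normal(n)                    # instrument
u = rng.standard_normal(n)                    # unobserved confounder
fdi = 0.8 * z + u + rng.standard_normal(n)    # endogenous regressor
life_exp = 1.0 * fdi + 2.0 * u + rng.standard_normal(n)   # true effect = 1.0

# All variables are mean zero, so no intercepts are needed.
beta_ols = (fdi @ life_exp) / (fdi @ fdi)     # biased upward by u
beta_iv = (z @ life_exp) / (z @ fdi)          # Wald / 2SLS estimate
print(beta_ols, beta_iv)
```

The IV estimate is only as good as the exclusion restriction, of course, which is exactly the ground on which such instruments are usually contested.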

Uncovering waste in US healthcare: evidence from ambulance referral patterns. Journal of Health Economics [PubMed] Published 22nd March 2017

This study looks to unpick some of the reasons behind the estimated waste in US healthcare spending, by focusing on mortality rates across the country following emergency admission to hospital by ambulance. The authors argue that patients admitted for emergency care by ambulance act as a good instrument for assessing hospital quality, given that the nature of emergency admissions limits selection bias in which types of patients end up in different hospitals. Using linear regressions, the study primarily measures the relationship between the hospitals patients are assigned to, the 90-day spending on these patients, and mortality. The authors also consider one-year mortality and the effect that downstream payments for post-acute care (excluding pharmaceuticals outside the hospital setting) have on this outcome. Through a lengthy data cleaning process, the study arrives at over 1.5 million admissions between 2002 and 2011, with patients of a high average age (82) who are predominantly female and white. Approximately $27,500 per patient was spent in the first 90 days post-admission, with inpatient spending accounting for the majority of this amount (≈$16,000). The authors argue initially that the higher 90-day spending in some hospitals produces only modestly lower mortality rates. Spending over one year is estimated to cost more than $300,000 per life year, which the authors use to argue that current spending levels do not lead to improved outcomes. But when the authors dig deeper, it seems clear there is an association between hospitals with higher spending on inpatient care and reduced mortality, approximately 10% lower. This leads the authors to turn their attention to post-acute care as their main target for reducing waste, and they find an association between mortality and patients receiving specialised nursing care.
However, this target seems somewhat strange to me, as post-acute care is not controlled for in the same way as in their initial, insightful approach of quasi-randomisation based on ambulance referrals. I imagine those in such care are likely to be a different mix from those receiving other types of care beyond 90 days after the initial event. I feel there really is not enough in their analysis to support recommendations about specialist nursing care being the key driver of waste, as it says nothing, beyond mortality, about the quality of care these elderly patients receive in specialist nursing facilities. After reading this paper, one way I would suggest to reduce the inefficiency identified in their primary analysis would be to send patients to the most appropriate hospital for their needs in the first place, which seems difficult given the complexity of the mix of private and hospital-provided ambulance care currently offered in the US.

Population health and the economy: mortality and the Great Recession in Europe. Health Economics [PubMed] Published 27th March 2017

Understanding how economic recessions affect population health is of great research interest, given that the recent global financial crisis led to the worst downturn in economic performance in the West since the 1930s. This study uses data from 27 European countries between 2004 and 2010, collected by the WHO and the World Bank, to study the relationship between economic performance and population health by comparing national unemployment and mortality rates before and after 2007. Regression analyses appropriate for time-series data are applied, under a number of different specifications. The authors find that the more severe the economic downturn, the greater the increase in life expectancy at birth. Cause-specific mortality rates follow a similar trend in their analysis, with the largest improvements observed in countries where the recession was most severe. The only exception the authors note is the data on suicide, where they argue the relationship is less clear, but points towards higher rates of suicide with greater unemployment. The message the authors were trying to get across was not very clear throughout most of the paper, and some lay readers of the abstract alone could easily be misled into thinking recessions themselves were responsible for better population health. Mortality rates fell across all six years, but at a faster rate in the recession years. Although the results appeared consistent across all models, question marks remain for me over their initial variable selection. Although the discussion mentions evidence that health care may not have a short-term effect on mortality, the authors did not consider any potentially lagged effect that record investment in healthcare as a proportion of GDP up until 2007 may have had on the initial recession years.
The authors rule out earlier comparisons with countries in the post-Soviet era, but do not consider the effect of recent EU accession for many of the countries, and the more regulated national policies that came as a consequence. Another issue is the differing potential of countries’ mortality rates to improve: countries with lower existing life expectancy have more room to move in the right direction. However, one interesting discussion point raised by the authors in trying to explain their findings is the potential impact of economic activity on pollution levels and the knock-on health effects of this (and, to a lesser extent, on occupational health), which may have some plausibility in explaining better mortality rates linked to physical health during recessions.
