Sam Watson’s journal round-up for 16th April 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The impact of NHS expenditure on health outcomes in England: alternative approaches to identification in all‐cause and disease specific models of mortality. Health Economics [PubMed] Published 2nd April 2018

Studies looking at the relationship between health care expenditure and patient outcomes have exploded in popularity. A recent systematic review identified 65 studies on the topic by 2014 – and recent experience from these journal round-ups suggests this number has increased significantly since then. The relationship between national spending and health outcomes is important for informing policy and health care budgets, not least through the specification of a cost-effectiveness threshold. Karl Claxton and colleagues released a big study in 2015 looking at all the programmes of care in the NHS, purporting to estimate exactly this. I wrote at the time that: (i) these estimates are only truly an opportunity cost if the health service is allocatively efficient, which it isn’t; and (ii) their statistical identification method, in which they used a range of socio-economic variables as instruments for expenditure, was flawed, as the instruments were neither strong determinants of expenditure nor (conditionally) independent of population health. I also noted that their tests would be unlikely to detect this problem. In response to the first point, Tony O’Hagan commented to say that they did not assume NHS efficiency, nor even that the NHS is trying to maximise health. This may well have been the case, but I would still, perhaps pedantically, argue that this is therefore not an opportunity cost. On the question of instrumental variables, an alternative method was proposed by Martyn Andrews and co-authors, using information that feeds into the budget allocation formula as instruments for expenditure. In this new article, Claxton, Lomas, and Martin adopt Andrews’s approach and apply it across four key programmes of care in the NHS to try to derive cost-per-QALY thresholds. First off, many of my original criticisms also apply to this paper, to which I’d add one more: (Statistical significance being used inappropriately complaint alert!!!) the authors use what seems to be some form of stepwise regression, including and excluding regressors on the basis of statistical significance – this is a big no-no and just introduces large biases (see this article for a list of reasons why). Beyond that, the instruments issue is, I think, still a problem, as it’s hard to justify, for example, an input price index (which translates to larger budgets) as an instrument here. It is certainly correlated with higher expenditure – inputs are more expensive in higher price areas, after all – but for that same reason it won’t be correlated with greater inputs. Thus, it’s the ‘wrong kind’ of correlation for this study. Needless to say, perhaps I am letting the perfect be the enemy of the good. Is this evidence strong enough to warrant a change in a cost-effectiveness threshold? My inclination would be that it is not, but that is not to deny its relevance to the debate.
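To make the weak-instrument worry concrete, here is a minimal simulation sketch (entirely invented numbers and variable names, not the paper’s data or model): an instrument that only weakly shifts expenditure leaves the two-stage least squares estimate at the mercy of residual confounding, and the first-stage F-statistic is the usual diagnostic.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Simulated data (illustrative only): z is a candidate instrument, e.g. an
# input price index; u is unobserved population health need, which raises
# both expenditure and mortality and so confounds a naive regression.
u = rng.normal(size=n)
z = rng.normal(size=n)
gamma = 0.1                                          # a weak first stage
spend = u + gamma * z + rng.normal(size=n)
mort = -0.5 * spend + 2.0 * u + rng.normal(size=n)   # true effect = -0.5

def ols(y, X):
    """Return OLS coefficients and residuals."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b, y - X @ b

# First stage: regress expenditure on the instrument
X1 = np.column_stack([np.ones(n), z])
b1, e1 = ols(spend, X1)

# First-stage F-statistic for the instrument (rule of thumb: want F >> 10)
_, e0 = ols(spend, np.ones((n, 1)))
F = (e0 @ e0 - e1 @ e1) / (e1 @ e1 / (n - 2))

# Second stage: regress mortality on the fitted values of expenditure
b2, _ = ols(mort, np.column_stack([np.ones(n), X1 @ b1]))
print(f"first-stage F = {F:.1f}, 2SLS estimate = {b2[1]:.2f} (truth -0.5)")
```

With a strong first stage (say gamma = 1.0) the estimate lands near −0.5; with the weak one above, the F-statistic is small and, across repeated runs, the estimate tends to drift towards the confounded OLS answer.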

Risk thresholds for alcohol consumption: combined analysis of individual-participant data for 599 912 current drinkers in 83 prospective studies. The Lancet. Published 14th April 2018

“Moderate drinkers live longer” is the adage of the casual drinker, as if to justify a hedonistic pursuit as purely pragmatic. But where does this idea come from? Studies comparing the risk of cardiovascular disease with the level of alcohol consumption have shown that disease risk is lower in those who drink moderately than in those who don’t drink. But correlation does not imply causation – non-drinkers might differ from those who drink. They may be abstinent after experiencing health issues related to alcohol, or have been advised not to drink to protect their health. If we truly believed moderate alcohol consumption was better for your health than no alcohol consumption, we’d advise people who don’t drink to start drinking. Moreover, if this relationship were true, then there would be an ‘optimal’ level of consumption at which any protective effect was maximised before being outweighed by the adverse effects. This new study pools data from three large consortia, each containing data from multiple studies or centres on individual alcohol consumption, cardiovascular disease (CVD), and all-cause mortality, to look at these outcomes among drinkers, excluding non-drinkers for the aforementioned reasons. Reading the methods section, it’s not wholly clear what was done, at least not to a standard that would permit replication. I believe that for each database a hazard ratio or odds ratio for the risk of CVD or mortality was estimated for eight groups of alcohol consumption; these ratios were then pooled in a random-effects meta-analysis. However, it’s not clear to me why you would need to do this in two steps when you could just estimate a hierarchical model that achieves the same thing while also propagating the uncertainty through all the levels. Anyway, a polynomial was then fitted through the pooled ratios – again, why not just estimate some kind of hierarchical semi-parametric model in one stage, instead of a three-stage model, to get the curve of interest? I don’t know. The key finding is that risk generally increases above around 100g/week of alcohol (around 5-6 UK glasses of wine per week), below which it is fairly flat (although whether it differs from non-drinkers we don’t know). However, the picture the article paints is complicated: the risks of stroke and heart failure go up with increased alcohol consumption, but myocardial infarction goes down. This would suggest some kind of competing risk: the mechanism by which alcohol works increases your overall risk of CVD, and your proportional risk of non-myocardial-infarction CVD given CVD.
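For illustration, here is a minimal sketch of the multi-stage approach as I read it (hypothetical numbers, not the paper’s data, with a DerSimonian-Laird estimator standing in for whatever random-effects pooling was actually used): per-group log hazard ratios from each study are pooled, then a polynomial is fitted through the pooled points – with the step 1 and 2 uncertainty left behind, which is the complaint.

```python
import numpy as np

def pool_dl(theta, se):
    """DerSimonian-Laird random-effects pooled estimate of effects theta."""
    w = 1 / se**2
    theta_fe = np.sum(w * theta) / np.sum(w)
    q = np.sum(w * (theta - theta_fe) ** 2)
    tau2 = max(0.0, (q - (len(theta) - 1)) /
               (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (se**2 + tau2)
    return np.sum(w_re * theta) / np.sum(w_re)

rng = np.random.default_rng(7)
dose = np.array([25, 75, 125, 175, 250, 350])  # g alcohol/week, group midpoints
true = 0.002 * (dose - 100).clip(0)            # flat below 100g, rising above

# Step 1 output: one log hazard ratio per consumption group per 'study'
log_hr = true + rng.normal(0, 0.05, size=(10, len(dose)))
se = np.full_like(log_hr, 0.05)

# Step 2: pool each group across studies
pooled = np.array([pool_dl(log_hr[:, j], se[:, j]) for j in range(len(dose))])

# Step 3: a quadratic through the pooled point estimates; the uncertainty
# from steps 1 and 2 is not propagated into this curve.
coef = np.polyfit(dose, pooled, deg=2)
print(np.round(coef, 6))
```

A single hierarchical model would estimate the curve and the between-study variation jointly, carrying the uncertainty through in one pass.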

Family ruptures, stress, and the mental health of the next generation [comment] [reply]. American Economic Review [RePEc] Published April 2018

I’m not sure I will write out the full blurb again about studies of in utero exposure to difficult or stressful conditions and later life outcomes. There are a lot of them and they continue to make the top journals. Admittedly, I continue to cover them in these round-ups – so much so that we could write a literature review on the topic on the basis of the content of this blog. Needless to say, exposure in the womb to stressors likely increases the risk of low birth weight, neonatal and childhood disease, poor educational outcomes, and worse labour market outcomes. So what does this new study (and the comments) contribute? Firstly, it uses a new type of stressor – maternal stress caused by a death in the family, which apparently has a dose-response, as stronger ties to the deceased are more stressful. Secondly, it looks at mental health outcomes of the child, which are less common in these sorts of studies. The identification strategy compares the effect of the death on infants who are in the womb with the effect on infants who experience it shortly after birth. Herein lies the interesting discussion raised in the comment and reply papers linked above: in this paper the sample contains all births up to one year post birth, and to be in the ‘treatment’ group the death had to have occurred between conception and the expected date of birth, so babies born preterm were less likely to end up in the control group than those born after the expected date. This spurious correlation could potentially lead to bias. In the authors’ reply, they re-estimate their models, redefining the control group on the basis of the expected date of birth rather than the actual one. They find that their estimates for the effect of the stressor on physical outcomes, like low birth weight, are much smaller in magnitude, and I’m not sure they’re clinically significant. For mental health outcomes, the estimates are again qualitatively small in magnitude but remain similar to the original paper. And then this choice phrase pops up (Statistical significance being used inappropriately complaint alert!!!): “We cannot reject the null hypothesis that the mental health coefficients presented in panel C of Table 3 are statistically the same as the corresponding coefficients in our original paper.” Statistically the same! I can see they’re different! Anyway, given all the other evidence on the topic, I don’t need to explain the results in detail – the methods discussion is far more interesting.
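A stylised simulation shows the mechanical correlation at issue (this is my reading of the design; the gestation and death-timing distributions are entirely invented): because treatment is defined by the expected rather than actual date of birth, preterm babies end up in the treatment group more often.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Invented distributions: gestation length in days (preterm < 259 days),
# and a family death occurring uniformly at some point between conception
# and one year after the actual birth.
gestation = rng.normal(280, 13, n).clip(200, 310)
expected = 280.0  # expected date of birth, in days from conception
death_day = rng.uniform(0, gestation + 365)

# Assignment as in the original paper: 'treated' if the death occurred
# before the *expected* date of birth.
treated = death_day < expected
preterm = gestation < 259

print("P(treated | preterm)     =", treated[preterm].mean().round(3))
print("P(treated | not preterm) =", treated[~preterm].mean().round(3))
```

Since preterm birth itself predicts worse outcomes, this imbalance alone can manufacture a ‘treatment effect’ – hence the reply’s re-estimation with the control group redefined.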


Sam Watson’s journal round-up for 26th June 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Future and potential spending on health 2015–40: development assistance for health, and government, prepaid private, and out-of-pocket health spending in 184 countries. The Lancet [PubMed] Published 20th May 2017

The colossal research collaboration that is the Global Burden of Disease Study is well known for producing estimates of deaths and DALYs lost across the world due to a huge range of diseases. These figures have proven invaluable as a source of information to inform disease modelling studies and to help guide the development of public health programmes. In this study, the collaboration turn their hands to modelling future health care expenditure. Predicting the future of any macroeconomic variable is tricky, to say the least. The approach taken here is to (1) model GDP to 2040 using an ensemble method, taking the ‘best performing’ models from over 1,000 candidates (134 were included); (2) model all-sector government spending, out-of-pocket spending, and private health spending as proportions of GDP in the same way, but with GDP as an input; and then (3) use a stochastic frontier approach to model maximum ‘potential’ spending. This last step is an attempt to make the results more useful by analysing different scenarios that might change overall health care expenditure, by considering different frontiers. Conceptually, all of these steps add a lot of uncertainty: the differing probability of each model in the ensemble, and the prediction uncertainty from each model, including uncertainty in inputs such as population size and demographic structure, all of which should be propagated through the three-step process. And this is without taking into account that health care spending at a national level is the result of a complex political decision-making process, which can affect national income and the prioritisation of health care in unforeseen ways (Brexit, anyone?). Despite this, the predictions seem quite certain: health spending per capita is predicted to rise from $1,279 in 2014 to $2,872 in 2040, with a 95% confidence interval (or do they mean prediction interval?) of $2,426 to $3,522. It may well be a good model for average spending, but I suspect the uncertainty (at least of a Bayesian kind) should be greater for a model predicting 25 years into the future on the basis of 20 years of data. The non-standard use of stochastic frontier analysis, which is typically a way of estimating technical efficiency, is also tricky to follow. The frontier is argued in this paper to be the maximum amount that a country at a similar level of development spends on health care. This would also suggest it is assumed that spending cannot go higher than that of a country’s highest-spending peer – a potentially strong assumption. Needless to say, these are the best predictions we currently have for future health care expenditure.
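To see why the intervals feel narrow, here is a toy Monte Carlo sketch (all distributions invented, bearing no relation to the study’s models) of how a three-step pipeline can propagate uncertainty honestly: sample each step and carry the whole distribution forward, rather than passing point estimates from stage to stage.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 10_000

# Step 1: GDP per capita in 2040, drawn from an (invented) ensemble predictive
# distribution rather than taken as a single best estimate.
gdp_2040 = rng.lognormal(mean=10.5, sigma=0.15, size=n_sims)

# Step 2: health spending as a share of GDP, conditional on GDP
# (again invented; centred around 10%).
share = rng.beta(a=8, b=72, size=n_sims)

# Step 3: a 'frontier' adjustment bounding potential spending from above.
frontier_gap = rng.uniform(0.8, 1.0, size=n_sims)

spend = gdp_2040 * share * frontier_gap
lo, mid, hi = np.percentile(spend, [2.5, 50, 97.5])
print(f"median {mid:.0f}, 95% interval ({lo:.0f}, {hi:.0f})")
```

Even with modest uncertainty at each stage, the compounded interval is wide; if fitting each stage to the previous stage’s point predictions narrows it, that narrowness is an artefact of the pipeline, not the data.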

Discovering effect modification in an observational study of surgical mortality at hospitals with superior nursing. Journal of the Royal Statistical Society: Series A [ArXiv] Published June 2017

An applied econometrician can find endogeneity everywhere. Such is the complexity of the social, political, and economic world: everything is connected in some way. It’s one of the reasons I’ve argued before against null hypothesis significance testing: no effect is going to be exactly zero. Our job is one of measuring the size of an effect and, crucially for this paper, what might affect the magnitude of that effect. This might start with a graphical or statistical exploratory analysis before proceeding to a confirmatory analysis. This paper proposes a method of exploratory analysis for treatment effect modifiers and uses it to examine the effect of superior nursing on treatment outcomes – a sensible scientific approach, I think. But how does it propose to do it? Null hypothesis significance testing! Oh no! Essentially, the method involves a novel procedure for testing whether treatment effects differ by group, allowing for potential unobserved confounding, where the groups are also formed in a novel way. For example, the authors ask how much bias would need to be present for their conclusions to change. In terms of the effects of superior nurse staffing, the authors estimate that its beneficial treatment effect is least sensitive to bias in the group of patients with the most serious conditions.
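For a flavour of this kind of sensitivity analysis, here is a minimal sketch of a Rosenbaum-style bound for matched pairs with a binary outcome (not the authors’ exact procedure, and all the numbers are invented): we ask how strong an unobserved confounder would have to be before the apparent effect could be explained away.

```python
from scipy.stats import binom

# In discordant matched pairs (exactly one death per pair), under no hidden
# bias the treated unit is the death with probability 1/2. With hidden bias
# of magnitude gamma, that probability can be as large as gamma / (1 + gamma).
n_discordant = 100   # invented: number of discordant pairs
t_treated = 70       # invented: pairs in which the treated unit died

for gamma in [1.0, 1.5, 2.0, 2.5]:
    p_max = gamma / (1 + gamma)
    # Upper bound on the one-sided p-value for the null of no treatment effect
    p_upper = binom.sf(t_treated - 1, n_discordant, p_max)
    print(f"gamma = {gamma:.1f}: p-value bound = {p_upper:.4f}")

# The gamma at which the bound crosses 0.05 measures how much hidden bias
# would be needed to overturn the conclusion: larger is more robust.
```

Saying an effect in one subgroup is ‘least sensitive to bias’ then means its critical gamma is the largest.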

Incorporation of a health economic modelling tool into public health commissioning: Evidence use in a politicised context. Social Science & Medicine [PubMed] Published June 2017

Last up, a qualitative research paper (on an economics blog! I know…). Many health economists are involved in trying to encourage the incorporation of research findings into health care decision making and commissioning. The political decision-making process often ends in inefficient or inequitable outcomes despite good evidence on what makes good policy. This paper explored how commissioners in an English local authority viewed a health economics decision tool for planning diabetes services. This is a key bit of research if we are to make headway in designing tools that actually improve commissioning decisions. Two key groups of stakeholders were involved: public health managers and politicians. The latter prioritised intelligence, local opinion, and social care agendas over the scientific evidence from research preferred by the former. The push and pull between the different approaches meant the health economics tool was used as a way of supporting the agendas of different stakeholders rather than as a means of addressing complex decisions. For a tool to be successful, it would seem to need to speak to or about the local population to which it is going to be applied. Well, that’s my interpretation. I’ll leave you with this quote from an interview with a manager in the study:

Public health, what they bring is a, I call it a kind of education scholarly kind of approach to things … whereas ‘social care’ sometimes are not so evidence-based-led. It’s a bit ‘well I thought that’ or, it’s a bit more fly by the seat of pants in social care.


Are we estimating the effects of health care expenditure correctly?

It is a contentious issue in philosophy whether an omission can be the cause of an event. At the very least, it seems we should consider causation by omission differently from ‘ordinary’ causation. Consider Sarah McGrath’s example. Billy promised Alice to water the plant while she was away, but he did not water it. Billy’s not watering the plant caused its death. But there are good reasons to suppose that Billy did not cause its death: if Billy’s lack of watering caused the death of the plant, it would seem reasonable to conclude that Vladimir Putin, and indeed anyone else who did not water the plant, was also a cause. McGrath argues that there is a normative consideration here: Billy ought to have watered the plant, and that is why we judge his omission to be a cause and not anyone else’s. Similarly, consider an example from L.A. Paul and Ned Hall’s excellent book Causation: A User’s Guide. Billy and Suzy are playing soccer on rival teams. One of Suzy’s teammates scores a goal. Both Billy and Suzy were nearby and could easily have prevented the goal. But our judgement is that the goal should only be credited to Billy’s failure to block it, as Suzy had no responsibility to do so.

These arguments may appear far removed from the world of health economics, but they have practical implications. Consider the estimation of the effect that increasing health care expenditure has on public health outcomes. The government, or relevant health authority, makes a decision about how the budget is allocated. It is often the case that there are allocative inefficiencies: greater gains could be had by reallocating the budget to more effective programmes of care. In this case there would seem to be a relevant omission: the budget has not been spent where it could have provided benefits. These omissions are often seen as causes of a loss of health. Karl Claxton wrote of the Cancer Drugs Fund, a pool of money diverted from the National Health Service to provide cancer drugs otherwise considered cost-ineffective, that it was associated with

a net loss of at least 14,400 quality adjusted life years in 2013/14.

Similarly, the authors of an analysis of the lack of spending on effective HIV treatment and prevention by the Mbeki administration in South Africa wrote that

More than 330,000 lives or approximately 2.2 million person-years were lost because a feasible and timely ARV treatment program was not implemented in South Africa.

But our analyses of the effects of health care expenditure typically do not take these omissions into account.

Causal inference methods are founded on a counterfactual theory of causation. The aim of a causal inference method is to estimate the potential outcomes that would have been observed under different treatment regimes. In our case this would be what would have happened under different levels of expenditure. This is typically estimated by examining the relationship between population health and levels of expenditure, perhaps using some exogenous determinant of expenditure to identify the causal effects of interest. But this only identifies those changes caused by expenditure and not those changes caused by not spending.

Consider the following toy example. There are two causes of death in the population, a and b, with associated programmes of care and prevention A and B. The total health care expenditure is x, of which a proportion p \in P \subseteq [0,1] is spent on A and 1-p on B. The deaths due to each cause are y_a and y_b, so the total deaths are y = y_a + y_b. Finally, the effects of a unit increase in expenditure in each programme are \beta_a and \beta_b. The question is to determine the causal effect of expenditure. If Y_x is the potential outcome for level of expenditure x, then the average treatment effect is given by E(\frac{\partial Y_x}{\partial x}).

The country has chosen an allocation between the programmes of care of p_0. If causation by omission is not a concern then, given linear, additive models (and that all the model assumptions are met), y_a = \alpha_a + \beta_a p x + f_a(t) + u_a and y_b = \alpha_b + \beta_b (1-p) x + f_b(t) + u_b, the causal effect is E(\frac{\partial Y_x}{\partial x}) = \beta = \beta_a p_0 + \beta_b (1-p_0). But if causation by omission is relevant, then the net effect of expenditure is the lives gained, \beta_a p_0 + \beta_b (1-p_0), less the lives lost. The lives lost are those under all the possible things we did not do, so the estimator of the causal effect is \beta' = \beta_a p_0 + \beta_b (1-p_0) - \int_{P \setminus \{p_0\}} [ \beta_a p + \beta_b (1-p) ] dG(p). Now, clearly \beta \neq \beta' unless P \setminus \{p_0\} is the empty set, i.e. there was no other option. Indeed, the choice of possible alternatives involves a normative judgement, as we’ve suggested: for an omission to count as a cause, there needs to be a judgement about what ought to have been done. For health care expenditure this may mean that the only viable alternative is the allocatively efficient distribution, in which case any allocation that is not allocatively efficient will result in a net loss of life, which some may argue is reasonable. An alternative view is that the government simply has to do no worse than in the past, and perhaps that it is also reasonable for the government not to make significant changes to the allocation, for whatever reason. In that case we might say that P = [p_0,1] and g(p) might be a distribution truncated below p_0 with most of its mass around p_0 and a small variance.
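A quick worked example makes the gap between \beta and \beta' concrete (a sketch with invented numbers; the uniform choice of G is just one possible normative stance):

```python
from scipy.integrate import quad

# Invented numbers, purely illustrative: programme A saves 2 lives per unit
# of expenditure, programme B saves 1; 30% of the budget goes to A.
beta_a, beta_b, p0 = 2.0, 1.0, 0.3

# The 'ordinary' causal effect of a unit of expenditure at the chosen allocation
beta = beta_a * p0 + beta_b * (1 - p0)  # = 1.3

# The net effect when omissions count, taking G to be uniform over all
# alternative allocations P = [0, 1] (the normative choice doing the work)
forgone, _ = quad(lambda p: beta_a * p + beta_b * (1 - p), 0.0, 1.0)
beta_net = beta - forgone               # 1.3 - 1.5 = -0.2: a net loss of life

print(f"beta = {beta:.2f}, beta' = {beta_net:.2f}")
```

Note that \beta' = (\beta_a - \beta_b)(p_0 - E_G(p)), so with \beta_a > \beta_b any allocation with p_0 below the mean of G comes out as a net loss of life – which is exactly why the choice of P and G is a normative judgement.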

The problem is that we generally do not observe the effect of expenditure in each programme of care, nor do we know the distribution of possible budget allocations. The normative judgements are also a contentious issue. Claxton clearly believes the government ought not to have initiated the Cancer Drugs Fund, but he does not go so far as to say that any allocative inefficiency results in a net loss of life. Some working out of the underlying normative principles is warranted. But if it’s not possible to estimate these net causal effects, why discuss it? Perhaps because of the lack of consistency. We estimate the ‘ordinary’ causal effect in our empirical work, but we often discuss opportunity costs and losses due to inefficiencies as being due to, or caused by, the spending decisions that are made. As the examples at the beginning illustrate, the normative question of responsibility seeps into our judgements about whether an omission is the cause of an outcome. For health care expenditure, the government or other health care body does have a relevant responsibility. I would argue, then, that causation by omission is important and that perhaps we need to reconsider the inferences we make.
