# Are we estimating the effects of health care expenditure correctly?

It is a contentious issue in philosophy whether an omission can be the cause of an event. At the very least, it seems we should treat causation by omission differently from ‘ordinary’ causation. Consider Sarah McGrath’s example. Billy promised Alice to water the plant while she was away, but he did not water it. Billy’s not watering the plant caused its death. But there are good reasons to suppose that Billy did not cause its death: if Billy’s lack of watering caused the death of the plant, it may well be reasonable to assume that Vladimir Putin, and indeed anyone else who did not water the plant, was also a cause. McGrath argues that there is a normative consideration here: Billy ought to have watered the plant, and that is why we judge his omission, and not anyone else’s, to be a cause. Similarly, consider an example from L.A. Paul and Ned Hall’s excellent book Causation: A User’s Guide. Billy and Suzy are playing soccer on rival teams. One of Suzy’s teammates scores a goal. Both Billy and Suzy were nearby and could easily have prevented the goal. But our judgement is that the goal should be credited only to Billy’s failure to block it, as Suzy had no responsibility to do so.

These arguments may appear far removed from the world of health economics. But they have practical implications. Consider the estimation of the effect that increasing health care expenditure has on public health outcomes. The government, or relevant health authority, makes a decision about how the budget is allocated. It is often the case that there are allocative inefficiencies: greater gains could be had by reallocating the budget to more effective programmes of care. In this case there would seem to be a relevant omission: the budget has not been spent where it could have provided benefits. These omissions are often seen as causes of a loss of health. Karl Claxton wrote of the Cancer Drugs Fund, a pool of money diverted from the National Health Service to provide cancer drugs otherwise considered cost-ineffective, that it was associated with

> a net loss of at least 14,400 quality adjusted life years in 2013/14.

Similarly, an analysis of the lack of spending on effective HIV treatment and prevention by the Mbeki administration in South Africa concluded that

> More than 330,000 lives or approximately 2.2 million person-years were lost because a feasible and timely ARV treatment program was not implemented in South Africa.

But our analyses of the effects of health care expenditure typically do not take these omissions into account.

Causal inference methods are founded on a counterfactual theory of causation. The aim of a causal inference method is to estimate the potential outcomes that would have been observed under different treatment regimes. In our case this would be what would have happened under different levels of expenditure. This is typically estimated by examining the relationship between population health and levels of expenditure, perhaps using some exogenous determinant of expenditure to identify the causal effects of interest. But this only identifies those changes caused by expenditure and not those changes caused by not spending.

Consider the following toy example. There are two causes of death in the population, $a$ and $b$, with associated programmes of care and prevention $A$ and $B$. The total health care expenditure is $x$, of which a proportion $p \in P \subseteq [0,1]$ is spent on $A$ and $1-p$ on $B$. The deaths due to each cause are $y_a$ and $y_b$, so the total deaths are $y = y_a + y_b$. Finally, the effects of a unit increase in expenditure in each programme are $\beta_a$ and $\beta_b$. The question is to determine the causal effect of expenditure. If $Y_x$ is the potential outcome for level of expenditure $x$, then the average treatment effect is given by $E(\frac{\partial Y_x}{\partial x})$.

The country has chosen an allocation between the programmes of care of $p_0$. If causation by omission is not a concern then, given linear, additive models (and that all the model assumptions are met), $y_a = \alpha_a + \beta_a p x + f_a(t) + u_a$ and $y_b = \alpha_b + \beta_b (1-p) x + f_b(t) + u_b$, the causal effect is $E(\frac{\partial Y_x}{\partial x}) = \beta = \beta_a p_0 + \beta_b (1-p_0)$. But if causation by omission is relevant, then the net effect of expenditure is the lives gained, $\beta_a p_0 + \beta_b (1-p_0)$, less the lives lost. The lives lost are those under all the possible things we did not do, so the estimator of the causal effect is $\beta' = \beta_a p_0 + \beta_b (1-p_0) - \int_{P \setminus \{p_0\}} [ \beta_a p + \beta_b (1-p) ] \, dG(p)$. Now, clearly $\beta \neq \beta'$ unless $P \setminus \{p_0\}$ is the empty set, i.e. there was no other option.

Indeed, the choice of possible alternatives involves a normative judgement, as we’ve suggested. For an omission to count as a cause, there needs to be a judgement about what ought to have been done. For health care expenditure this may mean that the only viable alternative is the allocatively efficient distribution, in which case any allocation that is not allocatively efficient will result in a net loss of life, which some may argue is reasonable. An alternative view is that the government has only to do no worse than in the past, and perhaps it is also reasonable for the government not to make significant changes to the allocation, for whatever reason. In that case we might say that $P = [p_0, 1]$ and $g(p)$ might be a distribution truncated below at $p_0$ with most of its mass around $p_0$ and a small variance.
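The gap between $\beta$ and $\beta'$ can be made concrete with a small numerical sketch of the toy model. All the numbers below ($\beta_a$, $\beta_b$, $p_0$, and a uniform $G$ over a grid of feasible alternative allocations) are invented for illustration:

```python
# Numerical sketch of the toy model. All parameter values are
# illustrative assumptions, not estimates from any real data.

beta_a, beta_b = 2.0, 1.0   # lives gained per unit spent on A and B
p0 = 0.3                    # the allocation actually chosen

# 'Ordinary' causal effect of expenditure at the chosen allocation
beta = beta_a * p0 + beta_b * (1 - p0)

# Net effect when omissions count: subtract the expected effect over
# the alternatives not chosen. Here G(p) is taken to be uniform over
# a grid of feasible allocations excluding p0 (a loud assumption).
alternatives = [p / 100 for p in range(101) if abs(p / 100 - p0) > 1e-9]
forgone = sum(beta_a * p + beta_b * (1 - p) for p in alternatives) / len(alternatives)
beta_net = beta - forgone

print(beta)      # the 'ordinary' causal effect
print(beta_net)  # the net effect once foregone allocations count
```

With these (invented) values, $\beta = 1.3$ lives per unit of expenditure while $\beta' \approx -0.2$: the ordinary effect is positive while the net effect is negative, because the chosen allocation falls short of the average foregone alternative. This is exactly the kind of judgement at stake in the Cancer Drugs Fund example.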

The problem is that we generally do not observe the effect of expenditure in each programme of care, nor do we know the distribution of possible budget allocations. The normative judgements are also contentious. Claxton clearly believes the government ought not to have initiated the Cancer Drugs Fund, but he does not go so far as to say that any allocative inefficiency results in a net loss of life. Some working out of the underlying normative principles is warranted. But if it is not possible to estimate these net causal effects, why discuss them? Perhaps because of a lack of consistency. We estimate the ‘ordinary’ causal effect in our empirical work, but we often discuss opportunity costs and losses due to inefficiencies as being caused by the spending decisions that are made. As the examples at the beginning illustrate, the normative question of responsibility seeps into our judgements about whether an omission is the cause of an outcome. For health care expenditure, the government or other health care body does have a relevant responsibility. I would argue, then, that causation by omission is important and that perhaps we need to reconsider the inferences we make.

Credits

# Transformative treatments: a big methodological challenge for health economics

Social scientists, especially economists, are concerned with causal inference: understanding whether and how an event causes a certain effect. Typically, we subscribe to the view that causal relations are reducible to sets of counterfactuals, and we use ever more sophisticated methods, such as instrumental variables and propensity score matching, to estimate these counterfactuals. Under the right set of assumptions, such as that unobserved differences between study subjects are time invariant, or that a treatment causes its effect through a certain mechanism, we can derive estimators for average treatment effects. All uncontroversial stuff indeed.

A recent paper from L.A. Paul and Kieran Healy introduces an argument of potential importance to how we interpret studies investigating causal relations. In particular, they argue that we don’t know whether individual preferences persist through treatment in a study. It is in general not possible to distinguish between the case where a treatment has satisfied an underlying revealed preference and the case where it has transformed an individual’s preferences. If preferences are changed, or transformed, rather than revealed, then the treated individuals are, in effect, a different population and, in a causal inference study, no longer comparable to the control population.

To quote their thought experiment:

> Vampires: In the 21st century, vampires begin to populate North America. Psychologists decide to study the implications this could have for the human population. They put out a call for undergraduates to participate in a randomized controlled experiment, and recruit a local vampire with scientific interests. After securing the necessary permissions, they randomize and divide their population of undergraduates into a control group and a treatment group. At t1, members of each group are given standard psychological assessments measuring their preferences about vampires in general and about becoming a vampire in particular. Then members of the experimental group are bitten by the lab vampire.
>
> Members of both groups are left to go about their daily lives for a period of time. At t2, they are assessed. Members of the control population do not report any difference in their preferences at t2. All members of the treated population, on the other hand, report living richer lives, enjoying rewarding new sensory experiences, and having a new sense of meaning at t2. As a result, they now uniformly report very strong pro-vampire preferences. (Some members of the treatment group also expressed pro-vampire preferences before the experiment, but these were a distinct minority.) In exit interviews, all treated subjects also testify that they have no desire to return to their previous condition.
>
> Should our psychologists conclude that being bitten by a vampire somehow satisfies people’s underlying, previously unrecognized, preferences to become vampires? No. They should conclude that being bitten by a vampire causes you to become a vampire (and thus, to prefer being one). Being bitten by a vampire and then being satisfied with the result does not satisfy or reveal your underlying preference to be a vampire. Being bitten by a vampire transforms you: it changes your preferences in a deep and fundamental way, by replacing your underlying human preferences with vampire preferences, no matter what your previous preferences were.

In our latest journal round-up, I featured a paper that used German reunification, following the fall of the Berlin Wall in 1989, as a natural experiment to explore the impact of novel food items in the market on consumption and weight gain. The transformative treatments argument comes into play here. Did reunification reveal the preferences of East Germans for the novel foodstuffs, or did it change their preferences for foodstuffs overall due to the significant cultural change? If the latter is true, then West Germans do not constitute an appropriate control group. The causal mechanism at play is also important to the development of policy: for example, without reunification there may not have been any impact from novel food products.

This argument is also sometimes skirted around with regards to the valuing of health states. Should it be the preferences of healthy people, or the experienced utility of sick people, that determine health state values? Do physical trauma and disease reveal our underlying preferences for different health states, or do they transform us to have different preferences entirely? Any study looking at the effect of disease on health status or quality of life could not distinguish between the two. Yet the two cases are akin to using the same or different groups of people to do the valuation of health states.

Consider also something like estimating the impact of retirement on health and quality of life. If self-reported quality of life is observed to improve in one of these studies, we don’t know whether that is because retirement has satisfied a pre-existing preference for the retired lifestyle or because retirement has transformed the person’s preferences. In the latter case, the appropriate control group for evaluating the causal effect of retirement is not non-retired persons.

Paul and Healy do not make their argument to try to prevent or undermine research in the social sciences; they interpret their conclusion as a “methodological challenge”. The full implications of the above arguments have not been explored, but they are potentially great, and innovations in methodology to estimate average causal effects may be warranted. How this might be achieved, I’ll have to admit, I do not know.


# Sam Watson’s journal round-up for 9th January 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Non-separable time preferences, novelty consumption and body weight: Theory and evidence from the East German transition to capitalism. Journal of Health Economics [PubMed] [RePEc] Published January 2017

Obesity is an ever growing (excuse the pun) problem associated with numerous health risks including diabetes and hypertension. It was recently reported that eight in ten middle-aged Britons are overweight or exercise too little. A strong correlation between economic development and obesity rates has been widely observed, both over time within the same countries and between countries across the world. One potential explanation for this correlation is the innovation of novel food products that are often energy dense and of little nutritional benefit. However, exploring this hypothesis is difficult, as over the long time horizons associated with changing consumer habits and economic development, a multitude of confounding factors also change. This paper attempts to delve into this question by making use of the natural experiment of German reunification. After the fall of the Berlin Wall in 1989, a wave of products previously available only in West Germany became available to East Germans, almost overnight. The paper provides a nice in-depth theoretical model, which is then linked to data and an empirical analysis to provide a comprehensive study of the effect of novel food products in both the short and medium terms. At first glance, the effect of reunification on dietary habits and weight gain appears fairly substantial, both in absolute and relative terms, and these results appear robust and, theoretically speaking, well-founded. A question that remains in my mind is whether preferences in this case are endogenous or state dependent, a question that has important implications for policy. Similarly, did reunification reveal East German preferences for fast food and the like, or were those preferences changed as a result of the significant cultural shift? Sadly, this last question is unanswerable, but it affects whether we can interpret these results as causal – a thought I shall expand upon in an upcoming blog post.

Ontology, methodological individualism, and the foundations of the social sciences. Journal of Economic Literature [RePEc] Published December 2016

It is not often that we feature philosophically themed papers. But, I am a keen proponent of keeping abreast of advances in our understanding of what exactly it is we are doing day to day. Are we actually producing knowledge of the real world? This review essay discusses the book The Ant Trap by Brian Epstein. Epstein argues that social scientists must get the social ontology right in order to generate knowledge of the social world. A view I think it would be hard to disagree with. But, he argues, economists have not got the social ontology right. In particular, economists are of the belief that social facts are built out of individual people, much like an ant colony is built of ants (hence the title), when in fact a less anthropocentric view should be adopted. In this essay, Robert Sugden argues that Epstein’s arguments against ontological individualism – that social facts are reducible to the actions of individuals – are unconvincing, particularly given Epstein’s apparent lack of insight into what social scientists actually do. Epstein also developed an ontological model for social facts on the basis of work by John Searle, a model which Sugden finds to be overly ambitious and ultimately unsuccessful. There is not enough space here to flesh out any of the arguments, needless to say it is an interesting debate, and one which may or may not make a difference to the methods we use, depending on who you agree with.

Heterogeneity in smokers’ responses to tobacco control policies. Health Economics [PubMed] Published 4th January 2016

In an ideal world, public health policy with regards to drugs and alcohol would be designed to minimise harm. However, it is often the case that policy is concerned with reducing the prevalence of use, rather than harm. Prevalence-reducing policies, such as a Pigouvian tax, reduce overall use, but only among those with the most elastic demand, who are also likely to be those whose use leads to the least harm. In this light, this study assesses the heterogeneity of tobacco users’ responses to tobacco control policies. Using quantile regression techniques, Erik Nesson finds that the effects of tobacco taxes are most pronounced in those who consume lower numbers of cigarettes, as we might expect. This is certainly not the first study to look at this (e.g. here and here), but reproduction of research findings is an essential part of the scientific process, and this study certainly provides further robust evidence that taxes alone may not be the optimal harm reduction strategy.
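The mechanism at work here can be illustrated with a minimal simulation (a sketch of the intuition only, not the paper’s quantile regression method). All the numbers are invented, under the assumption that lighter smokers have more elastic demand:

```python
import random

random.seed(1)

# Invented illustration of heterogeneous tax responses: lighter
# smokers are assumed to have more elastic demand than heavy smokers.
smokers = []
for _ in range(10_000):
    daily = random.uniform(1, 40)            # cigarettes per day
    elasticity = -0.8 + 0.7 * (daily / 40)   # heavy smokers: less elastic
    smokers.append((daily, elasticity))

price_rise = 0.10  # a 10% tax-induced price increase
after = [(d, d * (1 + e * price_rise)) for d, e in smokers]

# Proportional reduction in consumption for light vs heavy smokers
light = [(b - a) / b for b, a in after if b < 10]
heavy = [(b - a) / b for b, a in after if b >= 30]
print(sum(light) / len(light))   # larger proportional fall among light smokers
print(sum(heavy) / len(heavy))   # smaller proportional fall among heavy smokers
```

Under these assumptions, the tax cuts consumption proportionally more among light smokers than among the heavy smokers whose use presumably causes the most harm, which is the pattern the quantile regression results point to.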
