# The irrelevance of inference: (almost) 20 years on, is it still irrelevant?

The Irrelevance of Inference was a seminal paper published by Karl Claxton in 1999. In it, he outlines a stochastic decision-making approach to the evaluation of health technologies. A key point he makes is that we need only examine the posterior mean incremental net benefit of one technology compared to another to make a decision. Other aspects of the distribution of incremental net benefits are irrelevant – hence the title.

I hated this idea. From a Bayesian perspective, estimation and inference are decision problems. Surely uncertainty matters! But, in the extra-welfarist framework in which we generally conduct cost-effectiveness analysis, the argument is irrefutable. To see why, let’s consider a basic decision-making framework.

There are three aspects to a decision problem. Firstly, there is a state of the world, $\theta \in \Theta$ with density $\pi(\theta)$. In this instance it is the net benefits in the population, but in other contexts it could be the state of the economy or the effectiveness of a medical intervention, for example. Secondly, there are the possible actions, denoted by $a \in \mathcal{A}$; there might be a discrete set of actions or a continuum of possibilities. Finally, there is the loss function $L(a,\theta)$, which describes the losses or costs associated with taking action $a$ given that $\theta$ is the state of nature. The action that should be taken is the one that minimises expected losses, $\rho(\theta,a)=E_\theta(L(a,\theta))$. Minimising losses can be seen as analogous to maximising utility. We also observe data $x=[x_1,...,x_N]'$ that provide information on the parameter $\theta$. Our state of knowledge regarding this parameter is then captured by the posterior distribution $\pi(\theta|x)$, and our expected losses should be calculated with respect to this distribution.

Given the data and posterior distribution of incremental net benefits, we need to choose a value (a Bayes estimator) that minimises expected losses. The opportunity loss from making the wrong decision is “the difference in net benefit between the best choice and the choice actually made.” So the decision comes down to deciding whether the incremental net benefits are positive or negative (and hence whether to invest), $\mathcal{A}=\{a^+,a^-\}$. The losses are linear if we make the wrong decision:

$L(a^+,\theta) = 0$ if $\theta >0$ and $L(a^+,\theta) = -\theta$ if $\theta <0$

$L(a^-,\theta) = \theta$ if $\theta >0$ and $L(a^-,\theta) = 0$ if $\theta <0$

So we should decide that the incremental net benefits are positive if

$E_\theta(L(a^-,\theta)) - E_\theta(L(a^+,\theta)) > 0$

which is equivalent to

$\int_0^\infty \theta dF^{\pi(\theta|x)}(\theta) - \int_{-\infty}^0 -\theta dF^{\pi(\theta|x)}(\theta) = \int_{-\infty}^\infty \theta dF^{\pi(\theta|x)}(\theta) > 0$

which is obviously equivalent to $E(\theta|x)>0$ – the posterior mean!
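This equivalence is easy to check numerically. Below is a minimal Monte Carlo sketch; the shifted lognormal posterior is an arbitrary assumption chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical posterior draws for incremental net benefit theta:
# a skewed distribution shifted so it has mass on both sides of zero
theta = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000) - 1.2

# Expected losses of each action under the opportunity-loss function:
#   L(a+, theta) = -theta if theta < 0, else 0  (invested, net benefit negative)
#   L(a-, theta) =  theta if theta > 0, else 0  (rejected, benefit forgone)
loss_invest = np.where(theta < 0, -theta, 0.0).mean()
loss_reject = np.where(theta > 0, theta, 0.0).mean()

# The difference in expected losses equals the posterior mean, so
# 'invest' is optimal exactly when the posterior mean is positive
print(loss_reject - loss_invest)
print(theta.mean())
```

The two printed values agree up to floating-point error: the sign of the posterior mean alone determines the decision, whatever the shape of the posterior.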

If our aim is simply the estimation of net benefits (so $\mathcal{A} \subseteq \mathbb{R}$), different loss functions lead to different estimators. If we have a squared loss function $L(a, \theta)=|\theta-a|^2$ then again we should choose the posterior mean. However, other choices of loss function lead to other estimators. The linear loss function, $L(a, \theta)=|\theta-a|$ leads to the posterior median. And a ‘0-1’ loss function: $L(a, \theta)=0$ if $a=\theta$ and $L(a, \theta)=1$ if $a \neq \theta$, gives the posterior mode, which is also the maximum likelihood estimator (MLE) if we have a uniform prior. This latter point does suggest that MLEs will not give the ‘correct’ answer if the net benefit distribution is asymmetric. The loss function is therefore important. But for the purposes of the decision between technologies I see no good reason to reject our initial loss function.
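A small numerical sketch makes the estimation point concrete. On a skewed posterior (lognormal, chosen arbitrarily), minimising expected squared loss over a grid of candidate estimates recovers the posterior mean, while minimising expected absolute loss recovers the median:

```python
import numpy as np

rng = np.random.default_rng(0)

# A right-skewed posterior, so the mean and median differ noticeably
draws = rng.lognormal(mean=0.0, sigma=1.0, size=50_000)

# Grid of candidate point estimates a
candidates = np.linspace(draws.min(), np.quantile(draws, 0.99), 500)

# Expected loss of each candidate under the two loss functions
sq_loss = [np.mean((draws - a) ** 2) for a in candidates]
abs_loss = [np.mean(np.abs(draws - a)) for a in candidates]

best_sq = candidates[np.argmin(sq_loss)]
best_abs = candidates[np.argmin(abs_loss)]

print(best_sq, draws.mean())       # squared loss recovers the mean
print(best_abs, np.median(draws))  # absolute loss recovers the median
```

With this asymmetric distribution the two estimators differ substantially, which is exactly why the maximum likelihood estimator (the mode under a uniform prior) need not give the ‘correct’ answer for decision making.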

Claxton also noted that equity considerations could be incorporated through ‘adjustments to the measure of outcome’. This could be some kind of weighting scheme. However, this is where I might begin to depart from the claim of the irrelevance of inference. I prefer a social decision maker approach to evaluation in the vein of cost-benefit analysis, as discussed by the brilliant Alan Williams. This approach allows for non-market outcomes that extra-welfarism might include but classical welfarism would exclude; their valuations could be arrived at by a political, democratic process or by other means. It also permits inequality aversion and other features that are perhaps a more accurate reflection of a political decision-making approach. However, one must be aware of all the flaws and failures of this approach, which Williams so neatly describes.

In a social decision maker framework, the decision that should be made is the one that maximises a social welfare function. The social welfare function aggregates individual utilities and so expresses social preferences over the distribution of utility in the population; it is usually assumed to be linear in utilities (utilitarian). If the function is inequality averse, then the variance obviously does matter. But in making this claim I am moving away from the arguments of Claxton’s paper and towards a discussion of the relative merits of extra-welfarism and other approaches.
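A toy example shows why the variance matters under inequality aversion (all numbers invented): two utility distributions with the same mean are ranked equally by a utilitarian social welfare function, but a concave transformation ranks the more equal one higher.

```python
import numpy as np

# Two hypothetical utility distributions with the same mean but
# different spread (numbers invented for illustration)
equal = np.array([5.0, 5.0, 5.0, 5.0])
unequal = np.array([1.0, 3.0, 7.0, 9.0])

# A utilitarian (linear) SWF sees no difference: only the mean matters
print(equal.mean(), unequal.mean())

# An inequality-averse SWF applies a concave transformation (log, here)
# before aggregating, so the spread lowers measured welfare
def swf_inequality_averse(u):
    return np.log(u).mean()

print(swf_inequality_averse(equal))
print(swf_inequality_averse(unequal))
```

Under the concave aggregation the unequal distribution scores lower, so a decision maker with these preferences cannot look at the posterior mean alone.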

Perhaps the statement that inference was irrelevant was made just to capture our attention. After all, the process of updating our knowledge of the net benefits of alternatives from data is inference. But Claxton’s statement refers more to the process of hypothesis testing and p-values (or their Bayesian equivalents), the use of which has no place in decision making. On this point I wholeheartedly agree.

# Well-being and gross national happiness for policy

In the early years of the coalition government, David Cameron lauded the measurement of happiness and well-being as an indicator of national performance. Data on life satisfaction have been collected and published by the Office for National Statistics every year since 2012. Despite this, very little is said about well-being. It is not discussed at spending or policy reviews and rarely in the media. Gross domestic product (GDP) continues to dominate the coverage of national performance and the potential impact of policies such as Brexit. Nevertheless, a cursory glance at the data can reveal an interesting picture of national well-being.

Proportion of respondents reporting their life satisfaction to be ‘high’ or ‘very high’. [Data source: ONS; .csv data; R code]

The map above plots the proportion of people reporting their life satisfaction to be ‘high’ or ‘very high’ across England and Wales. This corresponds to a score of seven or more on the 0–10 scale used in response to the question:

Overall, how satisfied are you with your life nowadays? Where 0 is ‘not at all satisfied’ and 10 is ‘completely satisfied’.

There are clearly variations across the country, with the most obvious being the urban/rural divide. The proportion of people reporting ‘high’ or ‘very high’ life satisfaction in the UK has also increased over time, from 76.1% to 81.2% between 2012/3 and 2015/6, corresponding to a mean life satisfaction rating rising from 7.42 to 7.65.

Well-being data can also be used to evaluate the impact of policies or interventions in a cost-benefit analysis. Typically an in-depth analysis may model the impact of a policy on household incomes. But, these changes in income are only valuable insofar as they are instrumental for changes in well-being or welfare. Hence the attraction of well-being data. To derive a monetary valuation of a change in life satisfaction economists consider either compensating surplus or equivalent surplus. The former is the amount of money that someone would need to pay or receive to return them to their initial welfare position following a change in life satisfaction; the latter is the amount they would need to move them to their subsequent welfare position in the absence of a change. For example, to estimate the compensating surplus for a change in life satisfaction, one could estimate the effect of an exogenous change in income on life satisfaction. Such an exogenous change could be a lottery win, which is exactly the approach used in this report valuing the benefits of cultural and sports events like the Olympics.
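As a rough sketch of the arithmetic, suppose (purely for illustration; both the model form and the coefficient below are assumptions, not estimates from the report) that life satisfaction follows $LS = \alpha + \beta \ln(\text{income})$. The compensating surplus for a gain $\Delta LS$ is then the income reduction that exactly offsets it:

```python
import math

def compensating_surplus(income, delta_ls, beta):
    """Money a person could give up after a life-satisfaction gain of
    delta_ls while remaining at their initial welfare level, under the
    illustrative model LS = alpha + beta * ln(income). Solving
    beta*ln(income - CS) + delta_ls = beta*ln(income) for CS gives:"""
    return income * (1 - math.exp(-delta_ls / beta))

# Invented figures: beta = 1.1 LS points per log-unit of income,
# a policy raising life satisfaction by 0.1 points, income of £25,000
print(compensating_surplus(25_000, 0.1, 1.1))
```

In practice $\beta$ would be estimated from an exogenous income shock such as a lottery win, for exactly the reasons given above.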

Health economists have been one of the pioneering groups in the development and valuation of measures of non-monetary benefits, the quality-adjusted life year (QALY) being a prime example. However, a common criticism of these measures is that they only capture health-related quality of life and are fairly insensitive to changes in other areas of well-being. As a result, a growing number of broader measures of well-being, such as WEMWBS, can be used alongside the generic life satisfaction measures discussed above. Broader measures may be able to capture some of the effects of health care policies that QALYs do not. For example, centralisation of healthcare services increases travel time and time away from home for many relatives and carers; reduced staff-to-patient ratios and consultation time can affect the process of care and staff-patient relationships; or other barriers to care, such as language difficulties, may cause distress and dissatisfaction.

There are clearly good arguments for the use of broad life satisfaction and well-being instruments and sound methods to value them. One of the major barriers to their adoption is a lack of good data. The other barrier is likely to be the political willingness to accept them as measures of national performance and policy impact.

Credits

# “Health is bad for you. That’s what many economists believe.” Richard Horton’s anti-economics strikes again.

Richard Horton, editor-in-chief of the venerable medical journal the Lancet, is no stranger to bad economics. In 2012 and 2013, he stoked the ire of economists worldwide with a series of ill-informed tweets, including some memorable gems.

A number of measured responses were offered, but it’s unclear if they had any influence on Horton’s thinking.

Well, in this week’s edition of the Lancet, Horton once again wades erroneously into economics. Horton discusses William Baumol’s theory of the cost disease. Briefly, this theory offers an explanation of why healthcare continues to grow as a proportion of GDP. Growth in GDP occurs in part due to an increase in productivity, but productivity in some sectors, such as manufacturing, grows more rapidly than in others, such as healthcare and education, which are typically labour intensive. Wages increase in the ‘productive sectors’ as a result of increased output. Wages in the ‘stagnant sectors’ also increase to stay in line with the other sectors, but since their productivity does not grow as fast, either prices must rise or profits must fall – and it is usually the former. Therefore, healthcare continues to grow as a proportion of GDP as a consequence of economic growth. This is a positive, as opposed to normative, theory, and we’ve previously discussed an empirical study examining it. So what does Horton have to say about the cost disease?

> Health is bad for you. That’s what many economists believe. A man called William Baumol may be largely to blame. In the 1960s, he invented the notion of a “cost disease” in modern societies. It was a powerful metaphor, one that has shaped the prejudices of many a Minister of Finance ever since. His central idea sounds convincing. Some industries are good at increasing their productivity. As a result, they earn more money to invest in the wages of their employees. These sectors of the economy deserve our praise. There are other sectors where increasing productivity is harder. […] In areas that depend on human beings interacting with one another, as medicine does, productivity gains are hard to achieve. But the salaries of those working in these productivity-poor sectors rise anyway. […] The result of the Baumol effect is a disaster for society. The costs of a concert, ballet, or health service increase even though productivity stays stubbornly the same. What else could this be but a malignant “cost disease” on our collective welfare. [Link]

The trouble with this explanation of the cost disease is that, while it gets some of the basics of the argument (not metaphor) right, it attaches Horton’s normative beliefs (and anti-economics prejudices) to it. No economist has ever declared that “health is bad for you” or that this argument leads to the conclusion of a “disaster for society”. The main conclusion is that, while there is economic growth, stagnant-sector services like health care will take up a greater and greater proportion of national income – but national income itself will continue to grow, so society can still afford them. Baumol makes some other claims that Horton may wish to engage with, as they have important consequences for access to health care:

1. the cost disease will disproportionately affect the poor as healthcare services become more unaffordable,
2. misinterpretation of the cost disease will lead to suboptimal policy (as we are seeing in the UK with significant underfunding),
3. the private sector is liable to the same problems.
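The mechanism behind the cost disease can be sketched in a toy two-sector simulation (all parameters invented): one sector’s productivity grows at 2% a year, the ‘stagnant’ sector’s does not, wages are equalised across sectors, and households keep consuming the two real outputs in a fixed ratio.

```python
years = 50
g = 0.02   # annual productivity growth, productive sector (assumed)
L = 1.0    # total labour force (normalised)
k = 1.0    # fixed ratio of real stagnant output to real productive output

shares, real_q = [], []
for t in range(years):
    a_p = (1 + g) ** t              # productive-sector output per worker
    # The fixed real ratio q_s = k * q_p, with q_p = a_p * L_p and
    # q_s = L_s, implies L_s = k * a_p * L_p and L_p + L_s = L:
    L_p = L / (1 + k * a_p)
    L_s = L - L_p
    # With price = wage / productivity, each sector's nominal spending is
    # just its wage bill, so the spending share equals the labour share
    shares.append(L_s / L)
    real_q.append(a_p * L_p)        # real output of the productive good

print(f"stagnant share of spending: {shares[0]:.1%} -> {shares[-1]:.1%}")
print(f"real productive output:     {real_q[0]:.2f} -> {real_q[-1]:.2f}")
```

The stagnant sector’s share of total spending rises steadily, yet real output of both goods grows throughout – the rising share is a symptom of growth, not a “disaster for society”.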

Matthew Bishop argues that economists should engage with those who know nothing of economics as a “worthy interlocutor in a way that values his opinion”. But as another blog argues, there is a difference between “the man who genuinely wants to learn more, and the one who is loudly spouting nonsense for political reasons.” One hopes that Horton is the former, but his previous output might suggest otherwise.
