# Chris Sampson’s journal round-up for 24th April 2017

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The association between socioeconomic status and adult fast-food consumption in the U.S. Economics & Human Biology Published 19th April 2017

It’s an old stereotype, that people of lower socioeconomic status eat a lot of fast food, and that this contributes to poorer nutritional intake and therefore poorer health. As somebody with a deep affection for Greggs pasties and Pot Noodles, I’ve never really bought into the idea. Mainly because a lot of fast food isn’t particularly cheap. And anyway, what about all those cheesy paninis that the middle classes are chowing down on in Starbucks? Plus, wouldn’t the more well-off folk have a higher opportunity cost of time that would make fast food more attractive? Happily for me, this paper provides some evidence to support these notions. The study uses 3 recent waves of data from the National Longitudinal Survey of Youth, with 8136 participants born between 1957 and 1964. The authors test for an income gradient in adult fast food consumption, as well as any relationship to wealth. I think that makes it extra interesting because wealth is likely to be more indicative of social class (which is probably what people really think about when it comes to the stereotype). The investigation of wealth also sets it apart from previous studies, which report mixed findings for the income gradient. The number of times people consumed fast food in the preceding 7 days is modelled as a function of price, time requirement, preferences and monetary resources (income and wealth), alongside a number of health behaviour indicators and demographic variables. Logistic models distinguish fast food eaters from non-eaters, while OLS and negative binomial models estimate how often fast food is eaten. 79% of respondents ate fast food at least once, and 23% were frequent fast food eaters. In short, there isn’t much variation by income and wealth. What there is suggests an inverted U-shaped pattern, which is more pronounced for income than for wealth.
The regression results show that there isn’t much of a relationship between wealth and the number of times a respondent ate fast food. Income is positively related to the number of fast food meals eaten. But other variables were far more important. Living in a central city and being employed were associated with greater fast food consumption, while a tendency to check ingredients was associated with a lower probability of eating fast food. The study has some important policy implications, particularly as our preconceptions may mean that interventions are targeting the wrong groups of people.

Views of the UK general public on important aspects of health not captured by EQ-5D. The Patient [PubMed] Published 13th April 2017

The notion that the EQ-5D might not reflect important aspects of health-related quality of life is a familiar one for those of us working on trial-based analyses. Some of the claims we hear might just be special pleading, but it’s hard to deny at least some truth. What really matters – if we’re trying to elicit societal values – is what the public thinks. This study tries to find out. Face-to-face interviews were conducted in which people completed time trade-off and discrete choice experiment tasks for EQ-5D-5L states. These were followed by a set of questions about the value of alternative upper anchors (e.g. ‘full health’, ‘11111’) and whether respondents believed that relevant health or quality of life domains were missing from the EQ-5D questionnaire. This paper focuses on the aspects of health that people identified as being missing, using a content analysis framework. There were 436 respondents, about half of whom reported being in a 11111 EQ-5D state. 41% of participants considered the EQ-5D questionnaire to be missing some important aspect of health. The authors identified 22 (!) different themes and attached people’s responses to these themes. Sensory deprivation and mental health were the two biggies, with many more responses than other themes. 50 people referred to vision, hearing or other sensory loss. 29 referred to mental health generally while 28 referred to specific mental health problems. This study constitutes a guide for future research and for the development of the EQ-5D and other classification systems. Obviously, the objective of the EQ-5D is not to reflect all domains. And it may be that the public’s suggestions – verbatim, at least – aren’t sensible. 10 people stated ‘cancer’, for example. But the importance of mental health and sensory deprivation in describing the evaluative space does warrant further investigation.

Re-thinking ‘The different perspectives that can be used when eliciting preferences in health’. Health Economics [PubMed] Published 21st March 2017

Pedantry is a virtue when it comes to valuing health states, which is why you’ll often find me banging on about the need for clarity. And why I like this paper. The authors look at a 2003 article by Dolan and co that outlined the different perspectives that health preference researchers ought to be using (though notably aren’t) when presenting elicitation questions to respondents. Dolan and co defined 6 perspectives along two dimensions: preferences (personal, social and socially-inclusive personal) and context (ex ante and ex post). This paper presents the argument that Dolan and co’s framework is incomplete. The authors throw new questions into the mix regarding who the user of treatment is, who the payer is and who is assessing the value, as well as introducing consideration of the timing of illness and the nature of risk. This gives rise to a total of 23 different perspectives along the dimensions of preferences (personal, social, socially-inclusive personal, non-use and proxy) and context (4 ex ante and 1 ex post). This new classification makes important distinctions between different perspectives, and health preference researchers really ought to heed its advice. However, I still think it’s limited. As I described in a recent blog post and discussed at a recent HESG meeting, I think the way we talk about ex ante and ex post in this context is very confused. In fact, this paper demonstrates the problem nicely. The authors first discuss the ex post context, the focus being on the value of ‘treatment’ (an event). Then the paper moves on to the ex ante context, and the discussion relates to ‘illness’ (a state). The problem is that health state valuation exercises aren’t (explicitly) about valuing treatments – or illnesses – but about valuing health states in relation to other health states. ‘Ex ante’ means making judgements about something before an event, and ‘ex post’ means to do so after it. 
But we’re trying to conduct health state valuation, not health event valuation. May the pedantry continue.

# The irrelevance of inference: (almost) 20 years on is it still irrelevant?

The Irrelevance of Inference was a seminal paper published by Karl Claxton in 1999. In it, he outlines a stochastic decision-making approach to the evaluation of health technologies. A key point he makes is that we need only examine the posterior mean incremental net benefit of one technology compared to another to make a decision. Other aspects of the distribution of incremental net benefits are irrelevant – hence the title.

I hated this idea. From a Bayesian perspective, estimation and inference is a decision problem. Surely uncertainty matters! But, within the extra-welfarist framework in which we generally conduct cost-effectiveness analysis, it is irrefutable. To see why, let’s consider a basic decision-making framework.

There are three aspects to a decision problem. Firstly, there is a state of the world, $\theta \in \Theta$ with density $\pi(\theta)$. In this instance it is the net benefits in the population, but in other contexts it could be the state of the economy or the effectiveness of a medical intervention, for example. Secondly, there are the possible actions, denoted by $a \in \mathcal{A}$. There might be a discrete set of actions or a continuum of possibilities. Finally, there is the loss function $L(a,\theta)$. The loss function describes the losses or costs associated with making decision $a$ given that $\theta$ is the state of nature. The action that should be taken is the one which minimises expected losses $\rho(\theta,a)=E_\theta(L(a,\theta))$. Minimising losses can be seen as analogous to maximising utility. We also observe data $x=[x_1,...,x_N]'$ that provide information on the parameter $\theta$. Our state of knowledge regarding this parameter is then captured by the posterior distribution $\pi(\theta|x)$. Our expected losses should be calculated with respect to this distribution.
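
As a minimal sketch of this framework – with a made-up loss table and posterior, purely for illustration – we can enumerate the actions, compute each one’s expected loss under the posterior, and take the minimiser:

```python
# Toy decision problem: two states of the world, two actions, a loss
# table, and a posterior over states. All numbers are illustrative.
posterior = {"theta_good": 0.7, "theta_bad": 0.3}

loss = {
    ("treat", "theta_good"): 0.0,  ("treat", "theta_bad"): 10.0,
    ("wait",  "theta_good"): 4.0,  ("wait",  "theta_bad"): 0.0,
}

def expected_loss(a):
    # rho(a) = E_theta[L(a, theta)], taken under the posterior
    return sum(p * loss[(a, s)] for s, p in posterior.items())

# The Bayes decision minimises expected loss
best = min(["treat", "wait"], key=expected_loss)
```

Here treating risks a large loss in the bad state (expected loss 3.0), while waiting sacrifices a little in the good state (expected loss 2.8), so waiting is the Bayes decision under these particular numbers.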

Given the data and posterior distribution of incremental net benefits, we need to make a choice about a value (a Bayes estimator), that minimises expected losses. The opportunity loss from making the wrong decision is “the difference in net benefit between the best choice and the choice actually made.” So the decision comes down to deciding whether the incremental net benefits are positive or negative (and hence whether to invest), $\mathcal{A}=\{a^+,a^-\}$. The losses are linear if we make the wrong decision:

$L(a^+,\theta) = 0$ if $\theta >0$ and $L(a^+,\theta) = -\theta$ if $\theta <0$

$L(a^-,\theta) = \theta$ if $\theta >0$ and $L(a^-,\theta) = 0$ if $\theta <0$

So we should decide that the incremental net benefits are positive if

$E_\theta(L(a^-,\theta)) - E_\theta(L(a^+,\theta)) > 0$

which is equivalent to

$\int_0^\infty \theta \, dF^{\pi(\theta|x)}(\theta) - \int_{-\infty}^0 (-\theta) \, dF^{\pi(\theta|x)}(\theta) = \int_{-\infty}^\infty \theta \, dF^{\pi(\theta|x)}(\theta) > 0$

which is obviously equivalent to $E(\theta|x)>0$ – the posterior mean!
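
A quick Monte Carlo check makes the point concrete (the posterior here is an arbitrary illustrative normal, not from any real appraisal): under these linear opportunity losses, comparing expected losses is exactly the same as checking the sign of the posterior mean.

```python
import numpy as np

rng = np.random.default_rng(0)
# Pretend these are posterior draws of incremental net benefit
theta = rng.normal(loc=300.0, scale=1500.0, size=200_000)

# Linear opportunity losses from making the wrong decision
loss_invest = np.where(theta < 0, -theta, 0.0)  # L(a+, theta)
loss_reject = np.where(theta > 0,  theta, 0.0)  # L(a-, theta)

# Decide a+ iff E[L(a-)] - E[L(a+)] > 0 ...
decide_invest = loss_reject.mean() - loss_invest.mean() > 0
# ... and note that, draw by draw, loss_reject - loss_invest = theta,
# so this difference IS the posterior mean
same_as_mean = (theta.mean() > 0) == decide_invest
```

Nothing about the spread of `theta` enters the decision – only its mean – which is Claxton’s point.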

If our aim is simply the estimation of net benefits (so $\mathcal{A} \subseteq \mathbb{R}$), different loss functions lead to different estimators. If we have a squared loss function $L(a, \theta)=|\theta-a|^2$ then again we should choose the posterior mean. However, other choices of loss function lead to other estimators. The linear loss function, $L(a, \theta)=|\theta-a|$ leads to the posterior median. And a ‘0-1’ loss function: $L(a, \theta)=0$ if $a=\theta$ and $L(a, \theta)=1$ if $a \neq \theta$, gives the posterior mode, which is also the maximum likelihood estimator (MLE) if we have a uniform prior. This latter point does suggest that MLEs will not give the ‘correct’ answer if the net benefit distribution is asymmetric. The loss function is therefore important. But for the purposes of the decision between technologies I see no good reason to reject our initial loss function.
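
The link between loss functions and estimators can be demonstrated numerically. In this sketch (an arbitrary right-skewed lognormal standing in for an asymmetric net benefit posterior), minimising expected squared loss over a grid recovers the posterior mean and minimising expected absolute loss recovers the median – and for a skewed distribution the two clearly differ:

```python
import numpy as np

rng = np.random.default_rng(42)
# A right-skewed 'posterior' for net benefit, so the mean and median differ
theta = rng.lognormal(mean=0.0, sigma=1.0, size=20_000)

grid = np.linspace(0.0, 10.0, 2001)

def bayes_estimator(loss_fn):
    # Pick the value a minimising the expected loss E[L(a, theta)]
    risks = [loss_fn(a, theta).mean() for a in grid]
    return grid[int(np.argmin(risks))]

a_sq  = bayes_estimator(lambda a, t: (t - a) ** 2)   # -> posterior mean
a_abs = bayes_estimator(lambda a, t: np.abs(t - a))  # -> posterior median
```

With this skewed distribution the squared-loss estimator lands near the mean (about 1.65) and the absolute-loss estimator near the median (about 1), illustrating why an MLE-style point estimate need not give the decision-relevant answer when net benefits are asymmetric.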

Claxton also noted that equity considerations could be incorporated through ‘adjustments to the measure of outcome’. This could be some kind of weighting scheme. However, this is where I might begin to depart from the claim of the irrelevance of inference. I prefer a social decision maker approach to evaluation in the vein of cost-benefit analysis as discussed by the brilliant Alan Williams. This approach allows for non-market outcomes that extra-welfarism might include but classical welfarism would exclude; their valuations could be arrived at by a political, democratic process or by other means. It also permits inequality aversion and other features that I find are perhaps a more accurate reflection of a political decision-making approach. However, one must be aware of all the flaws and failures of this approach, which Williams so neatly describes.

In a social decision maker framework, the decision that should be made is the one that maximises a social welfare function. A utility function expresses social preferences over the distribution of utility in the population; the social welfare function aggregates these utilities and is usually assumed to be linear (utilitarian). If the utility function is inequality averse then the variance obviously does matter. But, in making this claim, I am moving away from the arguments of Claxton’s paper and towards a discussion of the relative merits of extra-welfarism and other approaches.
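
The variance point can be shown with a toy example (all distributions and the log utility function are hypothetical choices of mine, not from Claxton or Williams): two populations with the same mean health outcome, one equal and one dispersed. A linear (utilitarian) aggregation cannot tell them apart, while a concave, inequality-averse utility function strictly prefers the equal distribution.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Two hypothetical population health distributions with the same mean (10)
equal_dist   = np.full(n, 10.0)                 # everyone at 10
unequal_dist = rng.choice([5.0, 15.0], size=n)  # half at 5, half at 15

def swf(outcomes, utility):
    # Linear (utilitarian) aggregation of individual utilities;
    # inequality aversion enters only through the curvature of `utility`
    return utility(outcomes).mean()

linear  = lambda h: h  # inequality-neutral
concave = np.log       # inequality-averse

# Linear utility: the two populations score (almost) identically.
# Concave utility: the equal distribution is strictly preferred.
```

So once the social welfare function embodies inequality aversion, the distribution of net benefits – not just its mean – becomes decision-relevant, which is exactly where the irrelevance-of-inference claim starts to fray.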

Perhaps the statement that inference was irrelevant was made just to capture our attention. After all, the process of updating our knowledge of the net benefits of alternatives from data is inference. But Claxton’s statement refers more to the process of hypothesis testing and p-values (or their Bayesian equivalents), the use of which has no place in decision making. On this point I wholeheartedly agree.