# Sam Watson’s journal round-up for 9th July 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Evaluating the 2014 sugar-sweetened beverage tax in Chile: an observational study in urban areas. PLoS Medicine [PubMed] Published 3rd July 2018

Sugar taxes are one of the public health policy options currently in vogue. Countries including Mexico, the UK, South Africa, and Sri Lanka all have sugar taxes. The aim of such levies is to reduce demand for the most sugary drinks or, if the tax is absorbed on the supply side (which is rare), to encourage producers to reduce the sugar content of their drinks. One may also view it as a form of Pigouvian taxation to internalise the public health costs associated with obesity. Chile has long had an ad valorem tax on soft drinks fixed at 13%, but in 2014 decided to pursue a sugar tax approach. Drinks with more than 6.25g of sugar per 100ml saw their tax rate rise to 18%, while the tax on those below this threshold dropped to 10%. To understand what effect this change had, we would want to know three key things along the causal pathway from tax policy to sugar consumption: did people know about the tax change, did prices change, and did consumption behaviour change? On the last point, we can consider both the overall volume of soft drinks purchased and whether people substituted low sugar for high sugar beverages. Using the Kantar Worldpanel, a household panel survey of purchasing behaviour, this paper examines these questions.
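The two-rate schedule is simple enough to write down. Here is a minimal sketch of the tax rules as described above; the threshold and rates are from the paper, but the function itself is my own hypothetical encoding:

```python
def chile_soft_drink_tax_rate(sugar_g_per_100ml: float, post_2014: bool = True) -> float:
    """Ad valorem soft drink tax rate under the pre- and post-2014 Chilean schedules."""
    if not post_2014:
        return 0.13  # flat rate before the reform
    # Post-2014: high-sugar drinks taxed at 18%, low-sugar drinks at 10%
    return 0.18 if sugar_g_per_100ml > 6.25 else 0.10

# A drink with 10g sugar/100ml sees its rate rise; one with 5g sees it fall
print(chile_soft_drink_tax_rate(10.0), chile_soft_drink_tax_rate(5.0))
```

Note the change is two-sided: the reform simultaneously raised the price of high-sugar drinks and cheapened low-sugar ones, which is what makes substitution between the two categories an outcome of interest.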

Everyone in Chile was affected by the tax, so there is no control group; we must rely on time series variation to identify the effect of the tax. Sometimes, looking at plots of the data reveals a clear step-change when an intervention is introduced (e.g. the plot in this post); not so in this paper. We therefore rely heavily on the results of the model for our inferences, and I have a couple of small gripes with it. First, the model includes household fixed effects, but no consideration is given to dynamic effects: some households may be more or less likely to buy drinks in general, but their decisions are also likely to be affected by how much they’ve recently bought, and the errors may be correlated over time. Ignoring dynamic effects can lead to large biases. Second, the authors choose among different functional form specifications of time using the Akaike Information Criterion (AIC). While the AIC and the Bayesian Information Criterion (BIC) are often treated as interchangeable, they answer different questions: AIC targets predictive performance on future data, while BIC, which approximates the marginal likelihood, targets identifying the model closest to the data-generating process. For selecting the functional form of a trend, I would think BIC the more appropriate criterion. Additional results show the estimates are very sensitive to this choice: across functional forms they vary by an order of magnitude and even change sign. The authors estimate a fairly substantial decrease of around 22% in the volume of high sugar drinks purchased, but find that the price paid changed very little (~1.5%) and that there was little change in purchases of other drinks. While the analysis is generally careful and well thought out, I am not wholly convinced by the authors’ conclusion that “Our main estimates suggest a significant, sizeable reduction in the volume of high-tax soft drinks purchased.”
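To see concretely how the two criteria differ, here is a small self-contained illustration (simulated data, nothing to do with the Kantar panel): both criteria are computed for polynomial-in-time specifications of increasing degree, and BIC’s heavier complexity penalty (k·ln n versus 2k) makes it favour more parsimonious functional forms:

```python
import numpy as np

def gaussian_loglik(resid):
    """Log-likelihood of residuals under a Gaussian model with MLE variance."""
    n = resid.size
    sigma2 = np.mean(resid ** 2)
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

def aic_bic(y, X):
    """AIC and BIC for an ordinary least squares fit of y on the columns of X."""
    k = X.shape[1] + 1  # regression coefficients plus the error variance
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ll = gaussian_loglik(y - X @ beta)
    n = y.size
    return 2 * k - 2 * ll, k * np.log(n) - 2 * ll

rng = np.random.default_rng(0)
t = np.arange(100.0)
y = 0.02 * t + rng.normal(size=t.size)  # a genuinely linear trend plus noise

# Candidate functional forms of time: polynomials of increasing degree
for degree in (1, 2, 3):
    X = np.vander(t / t.max(), degree + 1, increasing=True)
    aic, bic = aic_bic(y, X)
    print(f"degree {degree}: AIC {aic:.1f}, BIC {bic:.1f}")
```

Since BIC − AIC = k(ln n − 2) grows with model size whenever n > e², the two criteria can rank specifications differently, which is exactly why the choice between them matters when the substantive estimates are sensitive to the functional form.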

A Bayesian framework for health economic evaluation in studies with missing data. Health Economics [PubMed] Published 3rd July 2018

Missing data is a ubiquitous problem. I’ve never used a data set where no observations were missing and I doubt I’m alone. Despite its pervasiveness, it’s often afforded only an acknowledgement in the discussion or, in more complete analyses, handled with something like multiple imputation. Indeed, the majority of trials in the top medical journals don’t handle it correctly, if at all. The majority of the methods used for missing data in practice assume the data are ‘missing at random’ (MAR). One interpretation is that, conditional on the observable variables, the probability of data being missing is independent of unobserved factors influencing the outcome. Another interpretation is that the distribution of the potentially missing data does not depend on whether they are actually missing. This interpretation comes from factorising the joint distribution of the outcome $Y$ and an indicator of whether the datum is observed $R$, along with some covariates $X$, into a conditional and marginal model: $f(Y,R|X) = f(Y|R,X)f(R|X)$, a so-called pattern mixture model. This contrasts with the ‘selection model’ approach: $f(Y,R|X) = f(R|Y,X)f(Y|X)$.
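A small simulation (mine, not the paper’s) makes the first interpretation concrete: when missingness depends only on an observed covariate, the raw complete-case mean is biased, but reweighting by the observation probabilities (or equivalently, conditioning on the covariate) recovers the truth:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Outcome Y depends on an always-observed covariate X
x = rng.binomial(1, 0.5, size=n)
y = 2.0 * x + rng.normal(size=n)

# MAR: the probability of observing Y depends on X only, not on Y itself
p_obs = np.where(x == 1, 0.9, 0.3)
r = rng.binomial(1, p_obs)

true_mean = y.mean()
cc_mean = y[r == 1].mean()  # complete-case mean: biased towards the x = 1 group
# Inverse probability weighting by the observation model removes the bias
ipw_mean = np.sum(y * r / p_obs) / np.sum(r / p_obs)

print(round(true_mean, 2), round(cc_mean, 2), round(ipw_mean, 2))
```

Under ‘missing not at random’, by contrast, no function of the observed data alone can remove the bias, which is where the pattern mixture and selection model factorisations, and external information such as expert opinion, come in.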

This paper considers a Bayesian approach using the pattern mixture model for missing data in health economic evaluation. Specifically, the authors specify a multivariate normal model for the data with an additional term in the mean if the observation is missing, i.e. a model of $f(Y|R,X)$. A model is not specified for $f(R|X)$. If it were, then one would typically allow for correlation between the errors in this model and the main outcomes model; alternatively, one could view the additional term in the outcomes model as some function of the error from the observation model, somewhat akin to a control function. Instead, this article uses expert elicitation methods to generate a prior distribution for the unobserved terms in the outcomes model. While this is certainly a legitimate way forward in my eyes, I do wonder how specification of a full observation model would affect the results. The approach of this article is useful and the authors show that it works, and I don’t want to detract from that. But, given the lack of literature on missing data in this area, I am curious how it would compare with other approaches, including selection models and shared parameter models, all of which are feasible. Perhaps an idea for a follow-up study. As a final point, the models were run in WinBUGS, but regular readers will know I think Stan is the future for estimating Bayesian models, especially in light of the problems with MCMC we’ve discussed previously. So equivalent Stan code would have been a bonus.
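The elicitation idea can be sketched with a toy pattern mixture calculation (all numbers invented; the paper’s actual model is a multivariate normal over costs and effects): an expert-elicited prior on the offset δ between missing and observed outcomes propagates directly into uncertainty about the population mean.

```python
import numpy as np

rng = np.random.default_rng(2)

# Observed outcomes (e.g. costs) for the complete cases; 25% of cases missing
y_obs = rng.normal(1000.0, 200.0, size=300)
p_missing = 0.25

# Pattern mixture decomposition of the population mean:
#   E[Y] = (1 - p) * E[Y | observed] + p * E[Y | missing],
# with E[Y | missing] = E[Y | observed] + delta.
# Expert-elicited prior on delta (hypothetical: missing cases cost a bit more)
delta_draws = rng.normal(150.0, 75.0, size=10_000)

obs_mean = y_obs.mean()
pop_mean_draws = (1 - p_missing) * obs_mean + p_missing * (obs_mean + delta_draws)

lo, hi = np.percentile(pop_mean_draws, [2.5, 97.5])
print(round(float(pop_mean_draws.mean()), 1), (round(float(lo), 1), round(float(hi), 1)))
```

The point of the exercise is that δ is not identifiable from the data, so the width of the resulting interval is driven entirely by the elicited prior; a full Bayesian treatment would also propagate sampling uncertainty in the observed-case mean.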

This is an economics blog. But focusing solely on economics papers in these round-ups would mean missing out on some papers from related fields that may provide insight into our own work. Thus I present to you a politics and sociology paper. It is not my field and I can’t give a reliable appraisal of the methods, but the results are of interest. In the global fight against non-communicable diseases, there is a range of policy tools available to governments, including the sugar tax of the paper at the top; the WHO recommends a large number of such measures. However, there is ongoing debate about whether trade rules and agreements are used to undermine this public health legislation. One agreement, the Technical Barriers to Trade (TBT) Agreement that all World Trade Organization (WTO) members sign, states that members may not impose ‘unnecessary trade costs’ or barriers to trade, especially if the intended aim of the measure can be achieved without doing so. For example, Philip Morris cited a bilateral trade agreement when it sued the Australian government for introducing plain packaging, claiming the policy violated the terms of trade. Philip Morris eventually lost, but not before substantial costs had been incurred. In another example, the Thai government were deterred from introducing a traffic light warning system for food after threats of a trade dispute from the US, which cited WTO rules. Until now, however, there has been no clear evidence on the extent to which trade disputes have undermined public health measures.

This article presents results from a new database of all TBT challenges at the WTO. Between 1995 and 2016, 93 challenges were raised concerning food, beverage, and tobacco products, with the number per year growing over time. The most frequent challenges concerned product labelling, followed by restricted ingredients. The paper presents four case studies, including Indonesia delaying the labelling of fat, sugar, and salt content after a challenge by several members including the EU, and many members (again including the EU, and the US) objecting to the size and colour of a red STOP sign that Chile wanted to put on products high in sugar, fat, and salt.

We have previously discussed the politics and political economy around public health policy relating to e-cigarettes, among other things. Understanding the political economy of public health and phenomena like government failure can be as important as understanding markets and market failure in designing effective interventions.

Credits

# Chris Sampson’s journal round-up for 20th November 2017


Effects of health and social care spending constraints on mortality in England: a time trend analysis. BMJ Open [PubMed] Published 15th November 2017

I’d hazard a guess that I’m not the only one here who gets angry about the politics of austerity. Having seen this study’s title, it’s clear that the research could provide fuel for that anger. It doesn’t disappoint. Recent years have seen very low year-on-year increases in public expenditure on health in England. Even worse, between 2010 and 2014, public expenditure on social care actually fell in real terms. This is despite growing need for health and social care. In this study, the authors look at health and social care spending and try to estimate the impact that reduced expenditure has had on mortality in England. The analysis uses spending and mortality data from 2001 onwards and also incorporates mortality projections for 2015-2020. Time trend analyses are conducted using Poisson regression models. From 2001-2010, deaths decreased by 0.77% per year (on average). The mortality rate was falling. Now it seems to be increasing; from 2011-2014, the average number of deaths per year increased by 0.87%. This corresponds to 18,324 additional deaths in 2014, for example. But everybody dies. Extra deaths are really sooner deaths. So the question, really, is how much sooner? The authors look at potential years of life lost and find this figure to be 75,496 life-years greater than expected in 2014, given pre-2010 trends. This shouldn’t come as much of a surprise. Spending less generally achieves less. What makes this study really interesting is that it can tell us who is losing these potential years of life as a result of spending cuts. The authors find that it’s the over-60s. Care home deaths were the largest contributor to increased mortality. A £10 cut in social care spending per capita resulted in 5 additional care home deaths per 100,000 people. When the authors looked at deaths by local area, no association was found with the level of deprivation. 
If health and social care expenditure are combined in a single model, we see that it’s social care spending that is driving the number of excess deaths. The impact of health spending on hospital deaths was less robust. The number of nurses acted as a mediator for the relationship between spending and mortality. The authors estimate that current spending projections will result in 150,000 additional deaths compared with pre-2010 trends. There are plenty of limitations to this study. It’s pretty much impossible (though the authors do try) to separate the effects of austerity from the effect of a weak economy. Still, I’m satisfied with the conclusion that austerity kills older people (no jokes about turkeys and Christmas, please). For me, the findings also highlight the need for more research in the context of social care, and how we (as researchers) might effectively direct policy to prevent ‘excess’ deaths.
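For readers unfamiliar with how such time trend analyses work, here is a minimal Poisson regression sketch on simulated death counts (the rates and totals are invented, not the paper’s data): the exponentiated slope coefficients translate directly into average annual percentage changes, with a separate slope allowed after 2010.

```python
import numpy as np

def poisson_glm(X, y, steps=25):
    """Poisson GLM with log link, fitted by Newton's method (IRLS)."""
    # Start from a log-linear least squares fit, then take Newton steps
    beta, *_ = np.linalg.lstsq(X, np.log(y + 0.5), rcond=None)
    for _ in range(steps):
        mu = np.exp(X @ beta)
        beta = beta + np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (y - mu))
    return beta

rng = np.random.default_rng(3)
years = np.arange(2001, 2015)
t = (years - 2001).astype(float)
post = np.where(years > 2010, (years - 2010).astype(float), 0.0)

# Simulated annual deaths: a slow fall, reversing after 2010 (illustrative only)
true_log_mu = np.log(480_000) - 0.008 * t + 0.016 * post
y = rng.poisson(np.exp(true_log_mu)).astype(float)

X = np.column_stack([np.ones_like(t), t, post])
beta = poisson_glm(X, y)
# Annual % change pre-2010, and the additional % change per year after 2010
print([round(100 * (np.exp(b) - 1), 2) for b in beta[1:]])
```

A figure like the study’s 0.77% annual fall is recovered as 100·(exp(β) − 1) for the trend coefficient; the ‘excess deaths’ are then the gap between the observed counts and the pre-2010 trend projected forward.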

Should cost effectiveness analyses for NICE always consider future unrelated medical costs? BMJ [PubMed] Published 10th November 2017

The question of whether or not ‘unrelated’ future medical costs should be included in economic evaluation is becoming a hot topic. So much so that the BMJ has published this Head To Head, which introduces some of the arguments for and against. NICE currently recommends excluding unrelated future medical costs. An example given in this article is the expected cost of dementia care after someone’s life has been saved by heart transplantation. The argument in favour of including unrelated costs is quite obvious – these costs can’t be ignored if we seek to maximise social welfare. Their inclusion is described as “not difficult” by the authors defending this move. By ignoring unrelated future costs (but accounting for the benefit of longer life), the relative cost-effectiveness of life-extending treatments, compared with life-improving treatments, is artificially inflated. The argument against including unrelated medical costs is presented as one of fairness. The author suggests that their inclusion could preclude access to health care for certain groups of people who are likely to have high needs in the future. So perhaps NICE should ignore unrelated medical costs in certain circumstances. I sympathise with this view, but I feel it is less a fairness issue and more a demonstration of the current limits of health-related quality of life measures, which don’t reflect adaptation and coping. However, I tend to disagree with both of the arguments presented here. I really don’t think NICE should include or exclude unrelated future medical costs according to the context, because that could create some very perverse incentives for certain stakeholders. But nor do I agree that it is “not difficult” to include all unrelated future costs. ‘All’ is an important qualifier here, because the capacity for analysts to pick and choose unrelated future costs creates the potential to pick and choose results.
When it comes to unrelated future medical costs, NICE’s position needs to be all-or-nothing, and right now the ‘all’ bit is a high bar to clear. NICE should include unrelated future medical costs – it’s difficult to formulate a sound argument against that – but they should only do so once more groundwork has been done. In particular, we need to develop more valid methods for valuing quality of life against life-years in health technology assessment across different patient groups. And we need more reliable methods for estimating future medical costs in all settings.
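A back-of-the-envelope example shows why the inclusion decision matters (all figures entirely hypothetical): a life-extending treatment looks twice as cost-effective when the survivors’ unrelated future medical costs are left out.

```python
# All figures hypothetical, for illustration only
treatment_cost = 20_000.0          # incremental cost of the life-extending treatment
qalys_gained = 2.0                 # incremental QALYs from the extra survival
extra_life_years = 4.0             # additional years lived because of the treatment
unrelated_cost_per_year = 5_000.0  # e.g. future dementia or routine care costs

icer_excluding = treatment_cost / qalys_gained
icer_including = (treatment_cost + extra_life_years * unrelated_cost_per_year) / qalys_gained
print(icer_excluding, icer_including)  # 10000.0 vs 20000.0 per QALY gained
```

A purely life-improving treatment with the same costs and QALYs would have the same ICER under either rule, which is the sense in which excluding unrelated costs tilts the playing field towards life extension.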

Oncology modeling for fun and profit! Key steps for busy analysts in health technology assessment. PharmacoEconomics [PubMed] Published 6th November 2017

Quite a title(!). The subject of this essay is ‘partitioned survival modelling’. Honestly, I never really knew what that was until I read this article. It seems the reason for my ignorance could be that I haven’t worked on the evaluation of cancer treatments, for which it’s a popular methodology. Apparently, a recent study found that almost 75% of NICE cancer drug appraisals were informed by this sort of analysis. Partitioned survival modelling is a simple means of extrapolating outcomes in a context where people can survive (or not) with or without progression, often on the basis of survival analyses of standard trial endpoints. This article seeks to provide some guidance on the development and use of partitioned survival models. Or, rather, it provides a toolkit for calling out those who might seek to use the method as a means of providing favourable results for a new therapy when data and analytical resources are lacking. The ‘key steps’ can be summarised as 1) avoiding, ignoring, or misrepresenting current standards of economic evaluation, 2) using handpicked parametric approaches for extrapolation in order to maximise survival benefits, 3) creatively estimating relative treatment effects using indirect comparisons without adjustment, 4) making optimistic assumptions about post-progression outcomes, and 5) denying the possibility of any structural uncertainty. The authors illustrate just how much an analyst can influence the results of an evaluation (if they want to “keep ICERs in the sweet spot!”). Generally, these tactics move the model far from being representative of reality. However, the prevailing secrecy around most models means that it isn’t always easy to detect these shortcomings. Sometimes it is though, and the authors make explicit reference to technology appraisals that they suggest demonstrate these crimes. Brilliant!
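For the uninitiated (like me, apparently), the mechanics are easy to sketch: given overall survival (OS) and progression-free survival (PFS) curves, state occupancy at each time point is ‘partitioned’ directly between the curves, with no explicit transition model. The curves and utility values below are invented for illustration.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1001)   # years, on a fine grid
pfs = np.exp(-0.40 * t)            # progression-free survival (hypothetical)
os_ = np.exp(-0.15 * t)            # overall survival (hypothetical)

# Partitioned survival: state occupancy read straight off the two curves
progression_free = pfs
progressed = os_ - pfs             # requires PFS <= OS at every time point
dead = 1.0 - os_

# Quality-adjust and integrate (trapezoid rule) to get expected QALYs
u_pf, u_prog = 0.80, 0.60          # illustrative utility weights
qol = u_pf * progression_free + u_prog * progressed
qalys = float(np.sum(0.5 * (qol[1:] + qol[:-1]) * np.diff(t)))
print(round(qalys, 2))
```

The sketch also makes the authors’ ‘key steps’ concrete: swap the parametric form of the extrapolated tails, or the assumed post-progression utility, and the QALY total moves with no change to the observed data.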


# Sam Watson’s journal round-up for 2nd October 2017


The path to longer and healthier lives for all Africans by 2030: the Lancet Commission on the future of health in sub-Saharan Africa. The Lancet [PubMed] Published 13th September 2017