Sam Watson’s journal round-up for 9th July 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Evaluating the 2014 sugar-sweetened beverage tax in Chile: an observational study in urban areas. PLoS Medicine [PubMed] Published 3rd July 2018

Sugar taxes are one of the public health policy options currently in vogue. Countries including Mexico, the UK, South Africa, and Sri Lanka all have sugar taxes. The aim of such levies is to reduce demand for the most sugary drinks or, if the tax is absorbed on the supply side (which is rare), to encourage producers to reduce the sugar content of their drinks. One may also view them as a form of Pigouvian taxation to internalise the public health costs associated with obesity. Chile has long had an ad valorem tax on soft drinks fixed at 13%, but in 2014 decided to pursue a sugar tax approach. Drinks with more than 6.25 g of sugar per 100 ml saw their tax rate rise to 18%, while the tax on those below this threshold dropped to 10%. To understand what effect this change had, we would want to know three key things along the causal pathway from tax policy to sugar consumption: did people know about the tax change, did prices change, and did consumption behaviour change? On this last point, we can consider both the overall volume of soft drinks purchased and whether people substituted low-sugar for high-sugar beverages. Using the Kantar Worldpanel, a household panel survey of purchasing behaviour, this paper examines these questions.
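To see the two-tier structure concretely, here is a minimal sketch (my own illustration, not from the paper) of how the 2014 reform maps a drink’s sugar content to its ad valorem tax rate:

```python
def chile_tax_rate(sugar_g_per_100ml: float) -> float:
    """Ad valorem tax rate under Chile's 2014 reform.

    Drinks above 6.25 g of sugar per 100 ml went from 13% to 18%;
    drinks at or below the threshold went from 13% to 10%.
    """
    return 0.18 if sugar_g_per_100ml > 6.25 else 0.10

# A regular cola (~10.6 g/100 ml, illustrative figure) is taxed at 18%,
# while its zero-sugar counterpart drops to 10%.
print(chile_tax_rate(10.6))  # 0.18
print(chile_tax_rate(0.0))   # 0.10
```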

Everyone in Chile was affected by the tax, so there is no control group and we must rely on time series variation to identify the effect. Sometimes, looking at plots of the data reveals a clear step-change when an intervention is introduced (e.g. the plot in this post); not so in this paper. We therefore rely heavily on the results of the model for our inferences, and I have a couple of small gripes with it. First, the model captures household fixed effects, but no consideration is given to dynamic effects. Some households may be more or less likely to buy drinks, but their decisions are also likely to be affected by how much they’ve recently bought, and the errors may be correlated over time. Ignoring dynamic effects can lead to large biases. Second, the authors choose among different functional form specifications of time using the Akaike Information Criterion (AIC). While AIC and the Bayesian Information Criterion (BIC) are often thought to be interchangeable, they are not: AIC estimates out-of-sample predictive performance, while BIC approximates fit to the observed data (via the marginal likelihood). I would therefore think BIC more appropriate here. Additional results show that the estimates are very sensitive to the choice of functional form, varying by an order of magnitude and even changing sign. The authors estimate a fairly substantial decrease of around 22% in the volume of high-sugar drinks purchased, but find evidence that the price paid changed very little (~1.5%) and that there was little change in other drinks. While the analysis is generally careful and well thought out, I am not wholly convinced by the authors’ conclusion that “Our main estimates suggest a significant, sizeable reduction in the volume of high-tax soft drinks purchased.”
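For readers unfamiliar with the model-selection step at issue, the sketch below shows what choosing a functional form for time by AIC or BIC looks like in practice. The data, variable names, and specifications are all invented for illustration; this is not the authors’ model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
# Fake panel: 20 households observed over 25 periods, volume trending down.
df = pd.DataFrame({"t": np.tile(np.arange(25), 20)})
df["volume"] = 10 - 0.05 * df["t"] + rng.normal(0, 1, len(df))

specs = {
    "linear": "volume ~ t",
    "quadratic": "volume ~ t + I(t**2)",
    "cubic": "volume ~ t + I(t**2) + I(t**3)",
}
for name, formula in specs.items():
    fit = smf.ols(formula, data=df).fit()
    # AIC targets out-of-sample prediction; BIC penalises extra
    # parameters more heavily, approximating fit to the observed data.
    print(f"{name:10s}  AIC={fit.aic:8.1f}  BIC={fit.bic:8.1f}")
```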

A Bayesian framework for health economic evaluation in studies with missing data. Health Economics [PubMed] Published 3rd July 2018

Missing data is a ubiquitous problem. I’ve never used a data set in which no observations were missing, and I doubt I’m alone. Despite its pervasiveness, missingness often only gets an acknowledgement in the discussion; in more complete analyses, something like multiple imputation may be used. Indeed, the majority of trials in the top medical journals don’t handle it correctly, if at all. Most of the methods used for missing data in practice assume the data are ‘missing at random’ (MAR). One interpretation is that, conditional on the observable variables, the probability of data being missing is independent of unobserved factors influencing the outcome. Another interpretation is that the distribution of the potentially missing data does not depend on whether they are actually missing. This interpretation comes from factorising the joint distribution of the outcome Y and an indicator of whether the datum is observed R, along with some covariates X, into a conditional and a marginal model: f(Y,R|X) = f(Y|R,X)f(R|X), a so-called pattern mixture model. This contrasts with the ‘selection model’ approach: f(Y,R|X) = f(R|Y,X)f(Y|X).
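A small simulation makes the distinction tangible. Everything below is my own illustration with made-up parameters: missingness depends on the outcome itself (so the data are not MAR), the complete-case mean is biased, and the pattern mixture form recovers the overall mean once a shift parameter for the missing pattern is supplied from outside the data:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(50, 10, 100_000)           # true outcome for everyone
p_miss = 1 / (1 + np.exp(-(y - 50) / 5))  # higher y -> more likely missing (MNAR)
observed = rng.uniform(size=y.size) > p_miss

print(y.mean())            # ~50, the estimand
print(y[observed].mean())  # biased low: high values are missing more often

# Pattern mixture: E[Y] = P(R=1)E[Y|R=1] + P(R=0)(E[Y|R=1] + delta),
# where delta is not identified by the data and must come from outside
# (e.g. expert opinion). Here it is simply assumed.
delta = 8.0
p_obs = observed.mean()
print(p_obs * y[observed].mean() + (1 - p_obs) * (y[observed].mean() + delta))
```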

This paper considers a Bayesian approach to health economic evaluation using the pattern mixture model for missing data. Specifically, the authors specify a multivariate normal model for the data with an additional term in the mean if an observation is missing, i.e. the model of f(Y|R,X). A model is not specified for f(R|X); if it were, one would typically allow for correlation between the errors in that model and the main outcomes model. But one could view the additional term in the outcomes model as some function of the error from the observation model, somewhat akin to a control function. Instead, this article uses expert elicitation methods to generate a prior distribution for the unobserved terms in the outcomes model. While this is certainly a legitimate way forward in my eyes, I do wonder how specification of a full observation model would affect the results. The approach of this article is useful and the authors show that it works, and I don’t want to detract from that; but, given the lack of literature on missing data in this area, I am curious to see comparisons with other approaches, including selection models. One could even add shared parameter models as an alternative, all of which are feasible. Perhaps an idea for a follow-up study. As a final point, the models are run in WinBUGS, but regular readers will know I think Stan is the future for estimating Bayesian models, especially in light of the problems with MCMC we’ve discussed previously. So equivalent Stan code would have been a bonus.
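To give a flavour of the mechanics, here is a back-of-the-envelope version of the idea, a sketch under my own assumptions rather than the authors’ WinBUGS model: an expert-elicited prior on the mean shift for missing costs is propagated through to the overall mean cost by simple Monte Carlo (ignoring, for brevity, sampling uncertainty in the observed mean):

```python
import numpy as np

rng = np.random.default_rng(1)
cost_obs = rng.gamma(shape=2.0, scale=500.0, size=300)  # observed costs (fake data)
p_missing = 0.25                               # share of patients with missing costs

# Elicited prior: suppose experts believe patients with missing data cost,
# on average, around 400 more, give or take 100 (a normal prior on delta).
delta_draws = rng.normal(400.0, 100.0, size=5_000)

# Draws for the overall mean cost under the pattern mixture:
# E[C] = (1 - p)E[C|obs] + p(E[C|obs] + delta)
mean_obs = cost_obs.mean()
overall_mean = (1 - p_missing) * mean_obs + p_missing * (mean_obs + delta_draws)

print(np.quantile(overall_mean, [0.025, 0.5, 0.975]))  # uncertainty from the prior
```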

Trade challenges at the World Trade Organization to national noncommunicable disease prevention policies: a thematic document analysis of trade and health policy space. PLoS Medicine [PubMed] Published 26th June 2018

This is an economics blog. But focusing solely on economics papers in these round-ups would mean missing out on some papers from related fields that may provide insight into our own work. Thus I present to you a politics and sociology paper. It is not my field and I can’t give a reliable appraisal of the methods, but the results are of interest. In the global fight against non-communicable diseases, there is a range of policy tools available to governments, including the sugar tax of the paper at the top of this round-up. The WHO recommends a large number of them. However, there is ongoing debate about whether trade rules and agreements are used to undermine this kind of public health legislation. One agreement, the Technical Barriers to Trade (TBT) Agreement, which all World Trade Organization (WTO) members sign, states that members may not impose ‘unnecessary trade costs’ or barriers to trade, especially if the intended aim of the measure can be achieved without doing so. For example, Philip Morris cited a bilateral trade agreement when it sued the Australian government for introducing plain packaging, claiming it violated the terms of trade. Philip Morris eventually lost, but not before substantial costs had been incurred. In another example, the Thai government was deterred from introducing a traffic light warning system for food after threats of a trade dispute from the US, which cited WTO rules. However, there has been no clear evidence on the extent to which trade disputes have undermined public health measures.

This article presents results from a new database of all TBT challenges raised at the WTO. Between 1995 and 2016, 93 challenges were raised concerning food, beverage, and tobacco products, with the number per year growing over time. The most frequent challenges concerned product labelling, followed by restricted ingredients. The paper presents four case studies, including Indonesia delaying food labelling of fat, sugar, and salt content after a challenge by several members including the EU, and many members, including the EU again and the US, objecting to the size and colour of a red STOP sign that Chile wanted to put on products high in sugar, fat, and salt.

We have previously discussed the politics and political economy around public health policy relating to e-cigarettes, among other things. Understanding the political economy of public health and phenomena like government failure can be as important as understanding markets and market failure in designing effective interventions.


Rita Faria’s journal round-up for 18th June 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Objectives, budgets, thresholds, and opportunity costs—a health economics approach: an ISPOR Special Task Force report. Value in Health [PubMed] Published 21st February 2018

The economic evaluation world has been discussing cost-effectiveness thresholds for a while. This paper has been out for a few months, but it had slipped under my radar. It explains the relationship between the cost-effectiveness threshold, the budget, opportunity costs, and willingness to pay for health. My take-home message is that we should use cost-effectiveness analysis to inform decisions in both publicly and privately funded health care systems. Each system has a budget and a way of raising funds for that budget. The cost-effectiveness threshold should be specific to each health care system, in order to reflect its particular opportunity cost. The budget can change for many reasons, and the threshold should be adjusted accordingly so that it continues to reflect the opportunity cost. For example, taxpayers can increase their willingness to pay for health through increased taxes for the health care system; we are starting to see this in the UK with the calls to raise taxes to increase the NHS budget. It is worth noting that the NICE threshold may not warrant adjustment upwards, since research suggests that it does not currently reflect the opportunity cost. This is a welcome paper on the topic and a must-read, particularly if you’re arguing for the use of cost-effectiveness analysis in settings that have traditionally been reluctant to embrace it, such as the US.
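The underlying decision rule can be written in a couple of lines. This is the standard net health benefit formulation rather than anything specific to the task force report, with illustrative numbers:

```python
def net_health_benefit(delta_qalys: float, delta_cost: float, k: float) -> float:
    """QALYs gained minus QALYs displaced elsewhere (delta_cost / k),
    where k is the threshold reflecting the system's opportunity cost."""
    return delta_qalys - delta_cost / k

# A technology adding 0.5 QALYs for 8,000 GBP clears a 20,000 GBP/QALY
# threshold (NHB = 0.1 > 0); if a larger budget lowered the opportunity
# cost and raised k to 30,000, the same technology would look even better.
print(net_health_benefit(0.5, 8_000, 20_000))  # 0.1
print(net_health_benefit(0.5, 8_000, 30_000))  # ~0.23
```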

Basic versus supplementary health insurance: access to care and the role of cost effectiveness. Journal of Health Economics [RePEc] Published 31st May 2018

Using cost-effectiveness analysis to inform coverage decisions, not only for publicly funded but also for privately funded health care, is also a feature of this study by Jan Boone. I’ll admit that the equations are well beyond my level of microeconomics, but the text is good at explaining the insights and the intuition. Boone grapples with the question of how public and private health care systems should choose which technologies to cover, and concludes that the most cost-effective technologies should be prioritised for funding. That the theory matches the practice is reassuring to an economic evaluator like myself! One of the findings is that cost-effective technologies which are very cheap should not be covered, the rationale being that everyone can afford them out of pocket. The issue for me is that people may nonetheless decide not to purchase a highly cost-effective technology which is very cheap; as we know from behavioural economics, people are not rational all the time! Boone also concludes that the inclusion of technologies in the universal basic package should consider the prevalence of the relevant conditions among people at high risk and with low income. The way I interpreted this is that it is more cost-effective to include in the universal basic package technologies for high-risk, low-income people who would not otherwise be able to afford them, than technologies for high-income people who can afford supplementary insurance. I can’t cover here all the findings and the nuances of the theoretical model. Suffice to say that it is an interesting read, even if, like me, you avoid the equations.

Surveying the cost effectiveness of the 20 procedures with the largest public health services waiting lists in Ireland: implications for Ireland’s cost-effectiveness threshold. Value in Health Published 11th June 2018

As we are on the topic of cost-effectiveness thresholds, this is a study on the threshold in Ireland. It sets out to determine whether the current cost-effectiveness threshold is too high, given the ICERs of the 20 procedures with the largest waiting lists. The idea is that, if the current threshold were correct, the procedures with the largest and longest waiting lists would have ICERs above it; if those procedures instead have low ICERs, the threshold may be set too high. Figure 1 is excellent in conveying the discordance between ICERs and waiting lists. For example, the ICER for extracapsular extraction of the crystalline lens is €10,139/QALY and its waiting list has 10,056 people, while the ICER for surgical tooth removal is €195,155/QALY and its waiting list is smaller, at 833. This suggests that, as in many other countries, there are inefficiencies in the way the Irish health care system prioritises technologies for funding. The limitation of the study lies in the ICERs. Ideally, the relevant ICER would compare each procedure with standard care in Ireland whilst on the waiting list (the “no procedure” option), but it is nigh impossible to find ICERs that meet this condition for all procedures. The alternative is to assume that the differences in costs and QALYs are generalisable from the source study to Ireland. It was great to see another study on empirical cost-effectiveness thresholds. I look forward to knowing what the cost-effectiveness threshold should be to accurately reflect opportunity costs.
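The paper’s comparison can be reproduced in miniature with the two ICERs quoted above (the remaining 18 procedures are omitted, and the threshold figure is my own assumption for illustration, not taken from the paper):

```python
procedures = [
    # (procedure, ICER in EUR/QALY, waiting list size) -- figures quoted above
    ("extracapsular extraction of crystalline lens", 10_139, 10_056),
    ("surgical tooth removal", 195_155, 833),
]
threshold = 45_000  # EUR/QALY, an assumed value for illustration

# If the threshold were right, long waiting lists should pair with
# high ICERs; here the pattern is the reverse.
for name, icer, waiting in sorted(procedures, key=lambda p: p[1]):
    verdict = "below" if icer < threshold else "above"
    print(f"{name}: {icer:,} EUR/QALY ({verdict} threshold), {waiting:,} waiting")
```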


Chris Sampson’s journal round-up for 11th June 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

End-of-life healthcare expenditure: testing economic explanations using a discrete choice experiment. Journal of Health Economics Published 7th June 2018

People incur a lot of health care costs at the end of life, despite the fact that – by definition – they aren’t going to get much value from it (so long as we’re using QALYs, anyway). In a 2007 paper, Gary Becker and colleagues put forward a theory for the high value of life, and high expenditure on health care, at the end of life. This article sets out to test a set of hypotheses derived from that theory, namely: i) higher willingness to pay (WTP) for health care with proximity to death, ii) higher WTP with greater chance of survival, iii) societal WTP exceeding individual WTP due to altruism, and iv) societal WTP possibly exceeding individual WTP due to an aversion to restricting access to new end-of-life care. A further set of hypotheses relating to the ‘pain of risk-bearing’ is also tested. The authors conducted an online discrete choice experiment (DCE) with 1,529 Swiss residents, which asked respondents to suppose that they had terminal cancer and was designed to elicit WTP for a life-prolonging novel cancer drug. Attributes in the DCE included survival, quality of life, and ‘hope’ (the chance of being cured). Both individual WTP – based on out-of-pocket costs – and societal WTP – based on social health insurance – were estimated. The overall finding is that the hypotheses are, at least in part, borne out. But the fact is that different people have different preferences – the authors note that “preferences with regard to end-of-life treatment are very heterogeneous”. The findings provide evidence to explain the prevailing high level of expenditure on end-of-life (cancer) care. But the questions remain of what we can or should do about it, if anything.
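For readers unfamiliar with how WTP estimates fall out of a DCE, the sketch below shows the generic conditional logit logic with made-up coefficients (these are not the paper’s estimates): with utility linear in the attributes and cost, WTP for an attribute is the negative ratio of its coefficient to the cost coefficient:

```python
# Made-up conditional logit coefficients from a hypothetical DCE:
b_survival = 0.30  # utility per extra month of survival
b_hope = 0.80      # utility per 10-point rise in the chance of cure
b_cost = -0.05     # disutility per 1,000 CHF out of pocket

# WTP = -(attribute coefficient) / (cost coefficient), rescaled to CHF
wtp_survival = -b_survival / b_cost * 1_000  # CHF per month of survival
wtp_hope = -b_hope / b_cost * 1_000          # CHF per 10-point rise in cure chance
print(wtp_survival, wtp_hope)                # 6000.0 16000.0
```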

Valuation of preference-based measures: can existing preference data be used to generate better estimates? Health and Quality of Life Outcomes [PubMed] Published 5th June 2018

The EuroQol website lists EQ-5D-3L valuation studies for 27 countries. As the EQ-5D-5L comes into use, we’re going to see a lot of new valuation studies in the pipeline. But what if we could use data from one country’s valuation study to inform another’s? The idea is that a valuation study in one country may be able to ‘borrow strength’ from another country’s valuation data. The author of this article has developed a Bayesian non-parametric model to achieve this and has previously applied it to UK and US EQ-5D valuations. But what about situations in which few data are available in the country of interest, and where the country’s cultural characteristics are substantially different? This study reports on an analysis to generate an SF-6D value set for Hong Kong, firstly using the Hong Kong values only, and secondly using the UK value set as a prior. As expected, the model using the UK data provided better predictions, and some of the differences in the valuation of health states are quite substantial (i.e. more than 0.1). Clearly, this could be a useful methodology, especially for small countries, but more research is needed into the implications of adopting the approach more widely.
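The ‘borrowing strength’ idea can be shown in its simplest conjugate form. This is a toy stand-in for the paper’s Bayesian non-parametric model, with made-up numbers: treat the UK value for a given health state as a prior and update it with a handful of local valuations, shrinking more when the local sample is small or noisy:

```python
import numpy as np

uk_value, uk_sd = 0.45, 0.10  # prior from the UK value set (made-up numbers)
local_values = np.array([0.30, 0.38, 0.25, 0.41])  # sparse local valuations (made up)

# Normal-normal conjugate update: precision-weighted average of prior and data.
prior_prec = 1 / uk_sd**2
data_prec = len(local_values) / local_values.var(ddof=1)
posterior_mean = (uk_value * prior_prec + local_values.mean() * data_prec) / (
    prior_prec + data_prec
)
print(posterior_mean)  # pulled from the local mean towards the UK prior
```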

Can a smoking ban save your heart? Health Economics [PubMed] Published 4th June 2018

Here we have another Swiss study, this time relating to the country’s public-place smoking bans. Exposure to tobacco smoke can have an acute and rapid impact on health, to the extent that we would expect an immediate reduction in the risk of acute myocardial infarction (AMI) if a smoking ban reduces the number of people exposed. Studies have already looked at this effect and found it to be large, but mostly with simple pre-/post- designs that don’t consider important confounding factors or prevailing trends. This study tests the hypothesis in a quasi-experimental setting, taking advantage of the fact that the 26 Swiss cantons implemented smoking bans at different times between 2007 and 2010. The authors analyse individual-level data from Swiss hospitals, estimating the impact of the smoking ban on AMI incidence while controlling for area and time fixed effects, area-specific time trends, and unemployment. The findings show a large and robust effect of the smoking ban(s) for men, with a reduction in AMI incidence of about 11%; for women, the effect is weaker, with an average reduction of around 2%. The evidence also shows that men in low-education regions experienced the greatest benefit. What makes this an especially nice paper is that the authors bring in other data sources to help explain their findings. Panel survey data are used to demonstrate that non-smokers are likely to be the group benefitting most from smoking bans and that people working in public places and people with less education are most exposed to environmental tobacco smoke. These findings might not be generalisable to other settings: other countries implemented more gradual policy changes, and Switzerland had a particularly high baseline smoking rate. But the findings suggest that smoking bans are associated with population health benefits (and the associated cost savings) and could also help tackle health inequalities.
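The identification strategy is a staggered two-way fixed effects design. The sketch below simulates it with invented data; it is not the authors’ exact specification, which additionally includes canton-specific time trends and unemployment:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
cantons, quarters = 26, 20
df = pd.DataFrame(
    [(c, t) for c in range(cantons) for t in range(quarters)],
    columns=["canton", "quarter"],
)
adoption = rng.integers(8, 16, size=cantons)  # staggered ban dates by canton
df["ban"] = (df["quarter"].to_numpy() >= adoption[df["canton"].to_numpy()]).astype(int)
df["ami_rate"] = 100 - 10 * df["ban"] + rng.normal(0, 5, len(df))

# Canton and quarter fixed effects; the ban coefficient is identified by
# cantons switching on at different times.
fit = smf.ols("ami_rate ~ ban + C(canton) + C(quarter)", data=df).fit()
print(fit.params["ban"])  # recovers the simulated ban effect, ~ -10
```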
