Sam Watson’s journal round-up for 9th July 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Evaluating the 2014 sugar-sweetened beverage tax in Chile: an observational study in urban areas. PLoS Medicine [PubMed] Published 3rd July 2018

Sugar taxes are one of the public health policy options currently in vogue. Countries including Mexico, the UK, South Africa, and Sri Lanka all have sugar taxes. The aim of such levies is to reduce demand for the most sugary drinks, or, if the tax is absorbed on the supply side (which is rare), to encourage producers to reduce the sugar content of their drinks. One may also view it as a form of Pigouvian taxation to internalise the public health costs associated with obesity. Chile has long had an ad valorem tax on soft drinks fixed at 13%, but in 2014 decided to pursue a sugar tax approach. Drinks with more than 6.25g of sugar per 100ml saw their tax rate rise to 18%, while the tax on those below this threshold dropped to 10%. To understand what effect this change had, we would want to know three key things along the causal pathway from tax policy to sugar consumption: did people know about the tax change, did prices change, and did consumption behaviour change? On this latter point, we can consider both the overall volume of soft drinks purchased and whether people substituted low sugar for high sugar beverages. Using the Kantar Worldpanel, a household panel survey of purchasing behaviour, this paper examines these questions.
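To get a feel for the magnitudes involved, here’s a quick back-of-the-envelope calculation in Python (the producer price is hypothetical; only the tax rates come from the paper). Under full pass-through, the rate change implies roughly a 4% price rise for high-sugar drinks:

```python
# Back-of-the-envelope pass-through under the Chilean tax change.
# Tax rates are from the paper; the pre-tax price is hypothetical.
pre_tax_price = 1000  # hypothetical producer price, Chilean pesos

old_rate, high_rate, low_rate = 0.13, 0.18, 0.10

old_price = pre_tax_price * (1 + old_rate)   # price under the old 13% tax
new_high = pre_tax_price * (1 + high_rate)   # high-sugar drinks after 2014
new_low = pre_tax_price * (1 + low_rate)     # low-sugar drinks after 2014

print(f"high-sugar: {100 * (new_high / old_price - 1):+.1f}%")  # +4.4%
print(f"low-sugar:  {100 * (new_low / old_price - 1):+.1f}%")   # -2.7%
```

The ~1.5% observed price change discussed below falls well short of this ceiling, which already hints that much of the tax change was not passed on to consumers.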

Everyone in Chile was affected by the tax, so there is no control group. We must rely on time series variation to identify the effect of the tax. Sometimes, looking at plots of the data reveals a clear step-change when an intervention is introduced (e.g. the plot in this post); not so in this paper. We therefore rely heavily on the results of the model for our inferences, and I have a couple of small gripes with it. First, the model captures household fixed effects, but no consideration is given to dynamic effects. Some households may be more or less likely to buy drinks, but their decisions are also likely to be affected by how much they’ve recently bought. Similarly, the errors may be correlated over time. Ignoring dynamic effects can lead to large biases. Second, the authors choose among different functional form specifications of time using the Akaike Information Criterion (AIC). While the AIC and the Bayesian Information Criterion (BIC) are often thought to be interchangeable, they are not: AIC estimates predictive performance on future data, while BIC estimates goodness of fit to the data at hand. Thus, I would think BIC would be more appropriate here. Additional results show the estimates are very sensitive to the choice of functional form, varying by an order of magnitude and even in sign. The authors estimate a fairly substantial decrease of around 22% in the volume of high sugar drinks purchased, but find that the price paid changed very little (~1.5%) and that there was little change for other drinks. While the analysis is generally careful and well thought out, I am not wholly convinced by the authors’ conclusion that “Our main estimates suggest a significant, sizeable reduction in the volume of high-tax soft drinks purchased.”
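To see why the choice of criterion matters, here is a minimal sketch with simulated data (statsmodels; nothing here comes from the paper). Both criteria penalise model complexity, but BIC’s penalty grows with the sample size, so in a large household panel the two can disagree about the preferred time trend:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
t = np.tile(np.arange(24.0), 50)               # 24 months x 50 households
y = 5.0 - 0.02 * t + rng.normal(0, 1, t.size)  # simulated purchase volumes

# Candidate functional forms for the time trend
specs = {
    "linear": np.column_stack([np.ones_like(t), t]),
    "quadratic": np.column_stack([np.ones_like(t), t, t**2]),
    "cubic": np.column_stack([np.ones_like(t), t, t**2, t**3]),
}

for name, X in specs.items():
    res = sm.OLS(y, X).fit()
    # AIC penalises 2 per parameter; BIC penalises log(n) per parameter,
    # so BIC leans harder towards parsimonious specifications.
    print(f"{name:>9}: AIC={res.aic:9.1f}  BIC={res.bic:9.1f}")
```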

A Bayesian framework for health economic evaluation in studies with missing data. Health Economics [PubMed] Published 3rd July 2018

Missing data is a ubiquitous problem. I’ve never used a data set where no observations were missing, and I doubt I’m alone. Despite its pervasiveness, it’s often only afforded an acknowledgement in the discussion, or perhaps, in more complete analyses, multiple imputation will be used. Indeed, the majority of trials in the top medical journals don’t handle it correctly, if at all. Most of the methods used for missing data in practice assume the data are ‘missing at random’ (MAR). One interpretation is that, conditional on the observable variables, the probability of data being missing is independent of unobserved factors influencing the outcome. Another interpretation is that the distribution of the potentially missing data does not depend on whether they are actually missing. This interpretation comes from factorising the joint distribution of the outcome Y and an indicator R of whether the datum is observed, along with some covariates X, into a conditional and a marginal model: f(Y,R|X) = f(Y|R,X)f(R|X), a so-called pattern mixture model. This contrasts with the ‘selection model’ approach: f(Y,R|X) = f(R|Y,X)f(Y|X).

This paper considers a Bayesian approach to health economic evaluation using the pattern mixture model for missing data. Specifically, the authors specify a multivariate normal model for the data with an additional term in the mean when the datum is missing, i.e. a model of f(Y|R,X). A model is not specified for f(R|X). If it were, then you would typically allow for correlation between the errors in this model and those in the main outcomes model. One could, though, view the additional term in the outcomes model as some function of the error from the observation model, somewhat akin to a control function. Instead, this article uses expert elicitation methods to generate a prior distribution for the unobserved terms in the outcomes model. While this is certainly a legitimate way forward in my eyes, I do wonder how specification of a full observation model would affect the results. The approach of this article is useful, and the authors show that it works; I don’t want to detract from that. But, given the lack of literature on missing data in this area, I am curious how it would compare with other approaches, including selection models. You could even add shared parameter models as an alternative; all of these are feasible. Perhaps an idea for a follow-up study. As a final point, the models are run in WinBUGS, but regular readers will know I think Stan is the future for estimating Bayesian models, especially in light of the problems with MCMC we’ve discussed previously. So equivalent Stan code would have been a bonus.
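In lieu of the Stan code I’d have liked, here is a deliberately simplified sketch of a pattern mixture model of this general flavour in Python with PyMC (my construction, not the paper’s: the data are simulated, the model is univariate rather than multivariate, and the prior values for the mean shift are invented). The point it illustrates is that the shift delta for the missing pattern is identified only through its informative, elicited prior; the data are silent about it:

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
y_obs = rng.normal(10.0, 2.0, size=80)  # simulated observed outcomes (e.g. costs)
n_mis = 20                              # individuals with missing outcomes
p_mis = n_mis / (n_mis + y_obs.size)    # observed missingness rate

with pm.Model() as pattern_mixture:
    mu = pm.Normal("mu", mu=0.0, sigma=100.0)  # mean in the observed pattern
    sigma = pm.HalfNormal("sigma", sigma=10.0)
    # delta shifts the mean in the missing pattern: E[Y|R=0] = mu + delta.
    # No data inform it, so this (hypothetical) expert-elicited prior
    # does all the work -- the key idea of the approach.
    delta = pm.Normal("delta", mu=1.0, sigma=0.5)
    pm.Normal("y", mu=mu, sigma=sigma, observed=y_obs)
    # The marginal mean mixes the two patterns by the missingness rate
    pm.Deterministic("pop_mean", mu + p_mis * delta)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)

print(float(idata.posterior["pop_mean"].mean()))
```

Swapping the elicited prior for a joint model of f(R|Y,X) would turn this into the selection model comparison I mention above.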

Trade challenges at the World Trade Organization to national noncommunicable disease prevention policies: a thematic document analysis of trade and health policy space. PLoS Medicine [PubMed] Published 26th June 2018

This is an economics blog. But focusing solely on economics papers in these round-ups would mean missing out on some papers from related fields that may provide insight into our own work. Thus I present to you a politics and sociology paper. It is not my field and I can’t give a reliable appraisal of the methods, but the results are of interest. In the global fight against non-communicable diseases, there is a range of policy tools available to governments, including the sugar tax of the paper at the top. The WHO recommends a large number of them. However, there is ongoing debate about whether trade rules and agreements are used to undermine this public health legislation. One agreement, the Technical Barriers to Trade (TBT) Agreement, which all World Trade Organization (WTO) members sign, states that members may not impose ‘unnecessary trade costs’ or barriers to trade, especially if the intended aim of the measure can be achieved without doing so. For example, Philip Morris cited a bilateral trade agreement when it sued the Australian government for introducing plain packaging, claiming the policy violated the terms of trade. Philip Morris eventually lost, but not before substantial costs had been incurred. In another example, the Thai government was deterred from introducing a traffic light warning system for food after threats of a trade dispute from the US, which cited WTO rules. However, there has been no clear evidence on the extent to which trade disputes have undermined public health measures.

This article presents results from a new database of all TBT challenges raised at the WTO. Between 1995 and 2016, 93 challenges were raised concerning food, beverage, and tobacco products, with the number per year growing over time. The most frequent challenges concerned product labelling, followed by restricted ingredients. The paper presents four case studies, including Indonesia delaying food labelling of fat, sugar, and salt after a challenge by several members including the EU, and many members, including the EU (again) and the US, objecting to the size and colour of a red STOP sign that Chile wanted to put on products high in sugar, fat, and salt.

We have previously discussed the politics and political economy around public health policy relating to e-cigarettes, among other things. Understanding the political economy of public health and phenomena like government failure can be as important as understanding markets and market failure in designing effective interventions.


Chris Sampson’s journal round-up for 20th November 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Effects of health and social care spending constraints on mortality in England: a time trend analysis. BMJ Open [PubMed] Published 15th November 2017

I’d hazard a guess that I’m not the only one here who gets angry about the politics of austerity. From this study’s title, it’s clear that the research could provide fuel for that anger. It doesn’t disappoint. Recent years have seen very low year-on-year increases in public expenditure on health in England. Even worse, between 2010 and 2014, public expenditure on social care actually fell in real terms, despite growing need for health and social care. In this study, the authors look at health and social care spending and try to estimate the impact that reduced expenditure has had on mortality in England. The analysis uses spending and mortality data from 2001 onwards and also incorporates mortality projections for 2015-2020. Time trend analyses are conducted using Poisson regression models. From 2001-2010, deaths decreased by 0.77% per year on average: the mortality rate was falling. Now it seems to be increasing; from 2011-2014, the average number of deaths per year increased by 0.87%. This corresponds to 18,324 additional deaths in 2014, for example. But everybody dies. Extra deaths are really sooner deaths. So the question, really, is how much sooner? The authors look at potential years of life lost and find this figure to be 75,496 life-years greater than expected in 2014, given pre-2010 trends. This shouldn’t come as much of a surprise. Spending less generally achieves less. What makes this study really interesting is that it can tell us who is losing these potential years of life as a result of spending cuts. The authors find that it’s the over-60s. Care home deaths were the largest contributor to increased mortality: a £10 cut in social care spending per capita resulted in 5 additional care home deaths per 100,000 people. When the authors looked at deaths by local area, no association was found with the level of deprivation. If health and social care expenditure are combined in a single model, we see that it’s social care spending that is driving the number of excess deaths; the impact of health spending on hospital deaths was less robust. The number of nurses acted as a mediator of the relationship between spending and mortality. The authors estimate that current spending projections will result in 150,000 additional deaths compared with pre-2010 trends. There are plenty of limitations to this study. It’s pretty much impossible (though the authors do try) to separate the effects of austerity from the effects of a weak economy. Still, I’m satisfied with the conclusion that austerity kills older people (no jokes about turkeys and Christmas, please). For me, the findings also highlight the need for more research in the context of social care, and on how we (as researchers) might effectively direct policy to prevent ‘excess’ deaths.
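As a sketch of what such a time trend analysis looks like in practice (entirely simulated data; this is not the authors’ code or dataset), a Poisson regression of annual death counts on calendar year, with a log-population offset, recovers the average annual percentage change in mortality:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
years = np.arange(2001, 2011)
population = np.full(years.size, 53_000_000)        # hypothetical, held constant
true_trend = -0.0077                                # -0.77% per year, as pre-2010
rate = 0.009 * np.exp(true_trend * (years - 2001))  # deaths per person per year
deaths = rng.poisson(rate * population)

# Poisson time trend model with a log-population offset
X = sm.add_constant(years - 2001)
res = sm.GLM(deaths, X, family=sm.families.Poisson(),
             offset=np.log(population)).fit()

# exp(coefficient on year) - 1 is the average annual change in mortality
print(f"estimated annual change: {100 * (np.exp(res.params[1]) - 1):.2f}%")
```

Extrapolating the pre-2010 fit forwards and comparing it against observed (or projected) deaths is what yields ‘excess deaths’ figures like those above.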

Should cost effectiveness analyses for NICE always consider future unrelated medical costs? BMJ [PubMed] Published 10th November 2017

The question of whether or not ‘unrelated’ future medical costs should be included in economic evaluation is becoming a hot topic. So much so that the BMJ has published this Head To Head, which introduces some of the arguments for and against. NICE currently recommends excluding unrelated future medical costs. An example given in this article is the expected cost of dementia care for a patient whose life has been saved by heart transplantation. The argument in favour of including unrelated costs is quite obvious – these costs can’t be ignored if we seek to maximise social welfare. Their inclusion is described as “not difficult” by the authors defending this move. By ignoring unrelated future costs (but accounting for the benefit of longer life), the relative cost-effectiveness of life-extending treatments, compared with life-improving treatments, is artificially inflated. The argument against including unrelated medical costs is presented as one of fairness. The author suggests that their inclusion could preclude access to health care for certain groups of people who are likely to have high needs in the future, so perhaps NICE should ignore unrelated medical costs in certain circumstances. I sympathise with this view, but I feel it is less a fairness issue and more a demonstration of the current limits of health-related quality of life measurement, which doesn’t reflect adaptation and coping. However, I tend to disagree with both of the arguments presented here. I really don’t think NICE should include or exclude unrelated future medical costs according to the context, because that could create some very perverse incentives for certain stakeholders. But then, I do not agree that it is “not difficult” to include all unrelated future costs. ‘All’ is an important qualifier here, because the capacity for analysts to pick and choose unrelated future costs creates the potential to pick and choose results. When it comes to unrelated future medical costs, NICE’s position needs to be all-or-nothing, and right now the ‘all’ bit is a high bar to clear. NICE should include unrelated future medical costs – it’s difficult to formulate a sound argument against that – but they should only do so once more groundwork has been done. In particular, we need to develop more valid methods for valuing quality of life against life-years in health technology assessment across different patient groups. And we need more reliable methods for estimating future medical costs in all settings.
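A toy calculation (the numbers are mine and purely illustrative) makes the inflation argument concrete: excluding unrelated future costs lowers the ICER of a life-extending treatment while leaving a purely life-improving one untouched:

```python
# Hypothetical life-extending treatment: +2 QALYs (via +2 life-years)
# at a direct treatment cost of 20,000. Each added life-year also incurs
# 5,000 in 'unrelated' future medical costs (e.g. dementia care).
direct_cost, qalys_gained, added_years = 20_000, 2.0, 2.0
unrelated_per_year = 5_000

icer_excluding = direct_cost / qalys_gained
icer_including = (direct_cost + unrelated_per_year * added_years) / qalys_gained

print(f"ICER excluding unrelated costs: {icer_excluding:,.0f} per QALY")  # 10,000
print(f"ICER including unrelated costs: {icer_including:,.0f} per QALY")  # 15,000
# A life-improving treatment adding the same 2 QALYs without extending
# life has the same ICER under either convention, so exclusion flatters
# life extension in any comparison between the two.
```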

Oncology modeling for fun and profit! Key steps for busy analysts in health technology assessment. PharmacoEconomics [PubMed] Published 6th November 2017

Quite a title(!). The subject of this essay is ‘partitioned survival modelling’. Honestly, I never really knew what that was until I read this article. It seems the reason for my ignorance could be that I haven’t worked on the evaluation of cancer treatments, for which it’s a popular methodology. Apparently, a recent study found that almost 75% of NICE cancer drug appraisals were informed by this sort of analysis. Partitioned survival modelling is a simple means by which to extrapolate outcomes in a context where people can survive (or not) with or without progression, often on the basis of survival analyses and standard trial endpoints. This article seeks to provide some guidance on the development and use of partitioned survival models. Or, rather, it provides a toolkit for calling out those who might seek to use the method as a means of providing favourable results for a new therapy when data and analytical resources are lacking. The ‘key steps’ can be summarised as 1) avoiding/ignoring/misrepresenting current standards of economic evaluation, 2) using handpicked parametric approaches for extrapolation in order to maximise survival benefits, 3) creatively estimating relative treatment effects using indirect comparisons without adjustment, 4) making optimistic assumptions about post-progression outcomes, and 5) denying the possibility of any structural uncertainty. The authors illustrate just how much an analyst can influence the results of an evaluation (if they want to “keep ICERs in the sweet spot!”). Generally, these tactics move the model far from being representative of reality. However, the prevailing secrecy around most models means that it isn’t always easy to detect these shortcomings. Sometimes it is, though, and the authors make explicit reference to technology appraisals that they suggest demonstrate these crimes. Brilliant!
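For fellow novices, a minimal sketch of the mechanics in Python (the exponential curves are placeholders of my choosing; in a real appraisal, parametric fits to trial overall survival (OS) and progression-free survival (PFS) endpoints would be extrapolated instead, which is exactly where the mischief above happens):

```python
import numpy as np

t = np.linspace(0, 10, 101)  # years

# Placeholder survival curves; real models fit these to trial endpoints
s_os = np.exp(-0.15 * t)     # overall survival
s_pfs = np.exp(-0.40 * t)    # progression-free survival

# The 'partition': state occupancy at each time point, with no transition
# probabilities required (unlike a Markov model)
progression_free = s_pfs
progressed = s_os - s_pfs    # alive but post-progression
dead = 1.0 - s_os

# Mean time in each state (area under the curve) feeds costs and QALYs
dt = t[1] - t[0]
print(f"mean progression-free years: {progression_free.sum() * dt:.2f}")
print(f"mean post-progression years: {progressed.sum() * dt:.2f}")
```

Because the post-progression state is simply the gap between two independently extrapolated curves, an optimistic tail for OS flows straight into extra post-progression survival.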


Sam Watson’s journal round-up for 2nd October 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The path to longer and healthier lives for all Africans by 2030: the Lancet Commission on the future of health in sub-Saharan Africa. The Lancet [PubMed] Published 13th September 2017

The African continent has the highest rates of economic growth, the fastest growing populations, and the fastest rates of urbanisation, but also the highest burden of disease. The challenges for public health and health care provision are great. It is no surprise, then, that this Lancet Commission on the future of health in Sub-Saharan Africa runs to 57 pages, yet it still has some notable absences. It would be impossible to fully discuss the topics in this tome in the space of a few hundred words; these will appear in future blog posts. For now, I want to briefly discuss the report’s lack of consideration of political economy. For example, the report notes the damaging effects of IMF and World Bank structural adjustment programs in the 70s and 80s, which led to a dismantling of much of the public sector in indebted African nations in order for them to qualify for further loans. However, these issues have not gone away. Despite strongly emphasising that countries in Africa must increase their health spending, the report does not mention that many countries spend much more on servicing debt than on public health and health care. Kenya, for example, will soon no longer qualify for aid as it becomes a middle-income country, and yet it spends almost twice as much servicing its debt (around $6 billion) as it does on health care (around $3 billion). Debt reform and relief may be a major step towards increasing health expenditure. The inequalities in access to basic health services reflect the disparities in income and wealth both between and within countries. The growth of slums across the continent is stark evidence of this. Residents of these communities, despite often facing the worst exposure to major disease risk factors, are often not recognised by the authorities and cannot access health services. Even where health services are available, there are still difficulties with access. A lack of regulation and oversight can lead to the growth of a rentier class within slums, as those with access to small amounts of capital, land, or property act as petty landlords. So while some in slum areas can afford the fees for basic health services, the poorest still face a barrier even when services are available. These people are also those who have little access to decent water, sanitation, or education, and who have the highest risk of disease. Finally, the lack of incentives for trained doctors and medical staff to work in poor or rural areas is also identified as a key problem. Many doctors either leave for wealthier countries or work in urban areas. Doctors are often a powerful interest group and can influence macro health policy, distorting it to favour richer urban areas. Political solutions are required, as well as the public health interventions more widely discussed. The Commission’s report is extensive and worth the time to read for anyone with an interest in the subject matter. What also becomes clear upon reading it is the lack of solid evidence on health systems and what does and does not work. From an economic perspective, much of the evidence pertaining to health system functioning and efficiency still consists of results from country-level panel data regressions, which tell us very little about what is actually happening. As a result, we can identify areas in need of reform while having very little idea of how to reform them.

The relationship of health insurance and mortality: is lack of insurance deadly? Annals of Internal Medicine [PubMed] Published 19th September 2017

One sure-fire way of increasing your chances of publishing in a top-ranked journal is to do something on a hot political topic. In the UK this has been seven-day services, as well as other issues relating to deficiencies of supply. In the US, health insurance is right up there, with the Republicans trying to repeal the Affordable Care Act, a.k.a. Obamacare. This paper systematically reviews the literature on the relationship between health insurance coverage and the risk of mortality, the theory being that health insurance permits access to medical services, and therefore to treatment and prevention measures that reduce the risk of death. Many readers will be familiar with the Oregon Health Insurance Experiment, in which the US state of Oregon allocated access to its Medicaid expansion by lottery, thereby creating an RCT. This experiment, which takes a top spot in the review, estimated that those who had ‘won’ the lottery had a mortality rate 0.032 percentage points lower than the ‘losers’, whose mortality rate was 0.8%: a relative reduction of around 4%. Similar results were found for the quasi-experimental studies included, and slightly larger effects were found in cohort follow-up studies. These effects are small. But then so is the baseline. Most of these studies only examined non-elderly, non-disabled people, who would otherwise not qualify for any other public health insurance. For people under 45 in the US, the leading cause of death is unintentional injury; it’s only above this age that cancer becomes the leading cause of death. If you suffer major trauma in the US, you will (for the most part) be treated in an ER, insured or uninsured, even if you end up with a large bill afterwards. So it’s no surprise that the effects of insurance coverage on mortality are very small for these people. Mortality is probably an inappropriate endpoint for this study. Indeed, the Oregon experiment found that the biggest differences were in reduced out-of-pocket expenses and medical debt, and in improved self-reported health. The review’s conclusion that “The odds of dying among the insured relative to the uninsured is 0.71 to 0.97” seems unwarranted. If the authors want to make a political point about the need for insurance, they’re looking in the wrong place.

Smoking, expectations, and health: a dynamic stochastic model of lifetime smoking behavior. Journal of Political Economy [RePEc] Published 24th August 2017

I’ve long been sceptical of mathematical models of complex health behaviours, the most egregious of which is often the ‘rational addiction’ literature. Originating with the late Gary Becker, the rational addiction model, in essence, assumes that addiction is a rational choice made by utility maximising individuals whose preferences alter with use of a particular drug. The biggest problem I find with this approach is that it is completely out of touch with the reality of addiction and drug dependence, and makes absurd assumptions about the preferences of addicts. Nevertheless, it has spawned a sizeable literature. And one may argue that the model is useful if it makes accurate predictions, regardless of the assumptions underlying it. On this front, I have yet to be convinced. This paper builds a rational addiction-type model for smoking to examine whether learning of one’s health risks reduces smoking. As an illustration of why I dislike this method of understanding addictive behaviours, the authors note that “…the model cannot explain why individuals start smoking. […] The estimated preference parameters in the absence of a chronic illness suggest that, for a never smoker under the age of 25, there is no incentive to begin smoking because the marginal utility of smoking is negative.” But for many, social and cultural factors straightforwardly explain why young people start smoking. The weakness of the deductive approach to social science seems to rear its head here, but, like I said, the aim may be the development of good predictive models. And the model does appear to predict smoking behaviour well. However, it is all in-sample prediction, and with that number of parameters it is not surprising that it predicts well. This discussion is not meant to be completely excoriating. What is interesting is the discussion of, and attempt to deal with, the endogeneity of smoking: people in poor health may be more likely to smoke, so the effects of smoking on longevity may be overestimated. As a final point of contention, though, I’m still trying to work out what the “addictive stock of smoking capital” is.
