Simon McNamara’s journal round-up for 21st January 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Assessing capability in economic evaluation: a life course approach? The European Journal of Health Economics [PubMed] Published 8th January 2019

If you have spent any time on social media in the last week, there is a good chance that you will have seen the hashtag #10yearchallenge. This hashtag is typically accompanied by two photos of the poster: one recent, and one from 10 years ago. Whilst a minority of these posts suggest that the elixir of permanent youth has been discovered and is being hidden away by a select group of people, the majority show clear signs of ageing. As time passes, we change. Our skin becomes wrinkled, our hair may become grey, and we may become heavier. What these pictures don’t show is how we change internally – and I don’t mean biologically. As we become older and experience life, the things we think are important change. Our souls become wrinkled, and our minds become heavier.

The first paper in this week’s round-up is founded on this premise, albeit grounded in the measurement of capability well-being across the life course, rather than a hashtag. The capabilities approach rests on the normative judgement that the desirability of policy outcomes should be evaluated by what Sen called the ‘capabilities’ they provide – “the functionings, or the capabilities to function” they give people, where functionings for a person are defined as “the various things that he or she manages to do or be in leading a good life” (Sen, 1993). The author (Joanna Coast) appeals to her, and others’, work on the family of ICECAP (capability) measures to argue that the capabilities we value change across the stage of life we are experiencing. For example, she notes that the development work for the ICECAP-A (adults) resulted in the choice of an ‘achievement’ attribute in that instrument, whilst for the ICECAP-O (older people) an alternative ‘role’ attribute was used – with the achievement attribute primarily linked to having the ability to make progress in life, and the role attribute linked to having the ability to do things that make you feel valued. Similarly, she notes that the attributes that emerged from development work on the ICECAP-SCM (supportive care – a term for the end of life) differ from those of the ICECAP-A, with dignity coming to the forefront as a valued attribute towards the end of life. The author goes on to suggest that it would be normatively desirable to capture how the capabilities we value change over the life course, proposes that this could be done with a range of different measures, and highlights a number of problems associated with doing so (e.g. when does a life stage start and finish?).

You should read this paper. It is only four pages long and definitely worth your time. If you have spent enough time on social media to know what the #10yearchallenge is, then you definitely have time to read it. I think this is a really interesting topic and a great paper. It has certainly got me thinking more about capabilities, and I will be keeping an eye out for future papers on this topic.

Future directions in valuing benefits for estimating QALYs: is time up for the EQ-5D? Value in Health Published 17th January 2019

If EQ-5D were a person, I think I would be giving it a good hug right now. Every time my turn to write this round-up comes around, there seems to be a new article criticising it, pointing out potential flaws in the way it has been valued, or proposing a new alternative. If it could speak, I imagine it would tell us it is doing its best – perhaps with a small tear in its eye. It has done what it can to evolve, it has tried to change, but as we approach its 30th birthday, and exciting new instruments are under development, the authors of the second paper in this week’s round-up ask: “Is time up for the EQ-5D?”

If you are interested in the valuation of outcomes, you should probably read this paper. It is a really neat summary of recent developments in the assessment and valuation of the benefits of healthcare, and gives a good indication of where the field may be headed. Before jumping into reading the paper, it is worth dwelling on its title. Note that the authors have used the term “valuing benefits for estimating QALYs” and not “valuing health states for estimating QALYs”. This is telling, and reflects the growing interest in measuring, and valuing, the benefits of healthcare based upon a broader conception of well-being, rather than simply health as represented by the EQ-5D. It is this issue that rests at the heart of the paper, and is probably the biggest threat to the long-term domination of EQ-5D. If it wasn’t designed to capture the things we are now interested in, then why not modify it further, or go back to the drawing board and start again?

I am not going to attempt to cover all the points made in the paper, as I can’t do it justice in this blog; but, in summary, the authors review a number of ways this could be done, outline recent developments in the way the subsequent instrument could be valued, and detail the potential advantages, disadvantages, and challenges of moving to a new instrument. Ultimately, the authors conclude that the future of the valuation of outcomes, be that with EQ-5D or something else, depends upon a number of judgements, including whether non-health factors are considered to be relevant when valuing the benefits of healthcare. If they are, then EQ-5D isn’t fit for purpose, and we need a new instrument. Whilst the paper doesn’t provide a definitive answer to the question “Is time up for the EQ-5D?”, the fact that NICE, the EuroQol Group, two of the authors of this paper, and a whole host of others are currently collaborating on a new measure, which captures both health and non-health outcomes, indicates that EQ-5D may well be nearing the end of its dominance. I look forward to seeing how this work progresses over the next few years.

The association between economic uncertainty and suicide in the short-run. Social Science & Medicine [PubMed] [RePEc] Published 24th November 2018

As I write this, the United Kingdom is 10 weeks away from the date we are due to leave the European Union, and we are still uncertain about how, and potentially even whether, we will finally leave. The uncertainty created by Brexit covers both economic and social spheres, and impacts many of those in the United Kingdom, and many beyond who have ties to us. I am afraid the next paper isn’t a cheery one, but given this situation, it is a timely one.

In the final paper in this round-up, the authors explore the link between economic uncertainty and short-term suicide rates. They do this by linking the UK EPU index of economic uncertainty – an index derived from articles published in 650 UK newspapers – to daily suicide rates in England and Wales between 2001 and 2015. The authors find evidence of an increase in suicide rates on days on which the EPU index was higher, and also of a lagged effect on the day after a spike in the index. Aggregated over a year, these effects imply that a one standard deviation increase in the EPU index would be expected to lead to around 11 additional deaths. In comparison to the number of deaths per year from cardiovascular disease and cancer, this effect is relatively modest, but it is nevertheless concerning given the nature of the way in which these people are dying.
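For anyone curious about what this kind of analysis involves mechanically, here is a minimal sketch in Python of a daily time-series regression of the sort described above, using simulated data. The Poisson specification, the variable names, and the back-of-envelope annual calculation are my own assumptions for illustration, not the authors’ model.

```python
# A minimal sketch (not the authors' model): daily death counts regressed on a
# standardised uncertainty index and its one-day lag, using simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_days = 5 * 365
epu = rng.gamma(shape=2.0, scale=50.0, size=n_days)            # toy uncertainty index
epu_z = (epu - epu.mean()) / epu.std()                         # standardised index
dow = np.arange(n_days) % 7                                    # day-of-week control
log_mu = np.log(12) + 0.02 * epu_z + 0.01 * np.roll(epu_z, 1)  # small same-day and lagged effects
deaths = rng.poisson(np.exp(log_mu))

df = pd.DataFrame({"deaths": deaths, "epu_z": epu_z,
                   "epu_z_lag1": np.roll(epu_z, 1), "dow": dow}).iloc[1:]

model = smf.glm("deaths ~ epu_z + epu_z_lag1 + C(dow)",
                data=df, family=sm.families.Poisson()).fit()

# Back-of-envelope annual excess deaths from a sustained one SD increase in the index
baseline = df["deaths"].mean()
extra_per_year = baseline * (np.exp(model.params["epu_z"] + model.params["epu_z_lag1"]) - 1) * 365
print(model.params[["epu_z", "epu_z_lag1"]])
print(f"Implied additional deaths per year from +1 SD: {extra_per_year:.1f}")
```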

I am not going to pretend I enjoyed reading this paper. Technically it is good, and it is an interesting paper, but the topic was just a bit too dark and too relevant to our current situation. Whilst reading I couldn’t help but wonder whether I am going to be reading a similar paper linking Brexit uncertainty to suicide at some point in the future. Fingers crossed this isn’t the case.


Chris Sampson’s journal round-up for 5th November 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Stratified treatment recommendation or one-size-fits-all? A health economic insight based on graphical exploration. The European Journal of Health Economics [PubMed] Published 29th October 2018

Health care is increasingly personalised. This creates the need to evaluate interventions for smaller and smaller subgroups as patient heterogeneity is taken into account. And this usually means we lack the statistical power to have confidence in our findings. The purpose of this paper is to consider the usefulness of a tool that hasn’t previously been employed in economic evaluation – the subpopulation treatment effect pattern plot (STEPP). STEPP works by assessing the interaction between treatments and covariates in different subgroups, which can then be presented graphically. Imagine your X-axis with the values defining the subgroups and your Y-axis showing the treatment outcome. This information can then be used to determine which subgroups exhibit positive outcomes.
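To make the idea more concrete, here is a rough sketch in Python of a STEPP-style plot as I understand it: estimate the treatment effect within overlapping windows of a baseline covariate and plot it against that covariate. The simulated data, window sizes, and the use of net monetary benefit as the outcome are my own illustrative assumptions, not anything taken from the paper.

```python
# A STEPP-style sketch on simulated data: incremental net monetary benefit
# estimated within overlapping windows of baseline risk.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n = 600
baseline_risk = rng.uniform(0.02, 0.40, size=n)   # covariate defining the subgroups
treated = rng.integers(0, 2, size=n)              # 1 = intervention, 0 = comparator
# Toy data-generating process: the benefit of treatment declines with baseline risk
net_benefit = 2000 * treated * (0.16 - baseline_risk) + rng.normal(0, 500, size=n)

window, step = 150, 30                            # overlapping subgroups by rank of risk
order = np.argsort(baseline_risk)
mids, effects = [], []
for start in range(0, n - window + 1, step):
    idx = order[start:start + window]
    t, c = idx[treated[idx] == 1], idx[treated[idx] == 0]
    mids.append(baseline_risk[idx].mean())
    effects.append(net_benefit[t].mean() - net_benefit[c].mean())

plt.plot(mids, effects, marker="o")
plt.axhline(0, linestyle="--")
plt.xlabel("Baseline risk (subgroup midpoint)")
plt.ylabel("Incremental net monetary benefit")
plt.title("STEPP-style plot (simulated data)")
plt.show()
```

The overlapping windows are what distinguish a STEPP from a simple subgroup split: each point borrows information from its neighbours, which smooths the estimated pattern across the covariate.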

This study uses data from a trial-based economic evaluation in heart failure, where patients’ 18-month all-cause mortality risk was estimated at baseline before allocation to one of three treatment strategies. For the STEPP procedure, the authors use baseline risk to define subgroups and adopt net monetary benefit at the patient level as the outcome. The study makes two comparisons (between three alternative strategies) and therefore presents two STEPP figures. The STEPP figures are used to identify subgroups, which the authors apply in a stratified cost-effectiveness analysis, estimating net benefit in each defined risk subgroup.

Interpretation of the STEPPs is a bit loose, with no hard decision rules. The authors suggest that one of the STEPPs shows no clear relationship between net benefit and baseline risk in terms of the cost-effectiveness of the intervention (care as usual vs basic support). The other STEPP shows that, on average, people with baseline risk below 0.16 have a positive net benefit from the intervention (intensive support vs basic support), while those with higher risk do not. The authors evaluate this stratification strategy against an alternative stratification strategy (based on the patient’s New York Heart Association class) and find that the STEPP-based approach is expected to be more cost-effective. So the key message seems to be that STEPP can be used as a basis for defining subgroups as cost-effectively as possible.

I’m unsure about the extent to which this is a method that deserves to have its own name, insofar as it is used in this study. I’ve seen plenty of studies present a graph with net benefit on the Y-axis and some patient characteristic on the X-axis. But my main concern is about defining subgroups on the basis of net monetary benefit rather than some patient characteristic. Is it OK to deny treatment to subgroup A because treatment costs are higher than in subgroup B, even if treatment is cost-effective for the entire population of A+B? Maybe, but I think that creates more challenges than stratification on the basis of treatment outcome.

Using post-market utilisation analysis to support medicines pricing policy: an Australian case study of aflibercept and ranibizumab use. Applied Health Economics and Health Policy [PubMed] Published 25th October 2018

The use of ranibizumab and aflibercept has been a hot topic in the UK, where NHS providers feel that they’ve been bureaucratically strong-armed into using an incredibly expensive drug to treat certain eye conditions when a cheaper and just-as-effective alternative is available. Seeing how other countries have managed prices in this context could, therefore, be valuable to the NHS and other health services internationally. This study uses data from Australia, where decisions about subsidising medicines are informed by research into how drugs are used after they come to market. Both ranibizumab (in 2007) and aflibercept (in 2012) were supported for the treatment of age-related macular degeneration. These decisions were based on clinical trials and modelling studies, which also showed that the benefit of ~6 aflibercept prescriptions equated to the benefit of ~12 ranibizumab prescriptions, justifying a higher price-per-injection for aflibercept.
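As a toy illustration of that pricing logic (with an entirely hypothetical ranibizumab price), paying the same amount per patient per year implies roughly double the price per aflibercept injection:

```python
# Illustrative price-parity arithmetic; the ranibizumab price is hypothetical.
ranibizumab_price = 800.0        # hypothetical price per injection
ranibizumab_doses = 12           # trial-based doses per year
aflibercept_doses = 6

cost_per_year = ranibizumab_price * ranibizumab_doses
parity_price_aflibercept = cost_per_year / aflibercept_doses
print(f"Price-parity aflibercept price per injection: {parity_price_aflibercept:.0f}")  # roughly 2x
```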

In the UK and US, aflibercept attracts a higher price. The authors assume that this is because of the aforementioned trial data relating to the number of doses. However, in Australia, the same price is paid for aflibercept and ranibizumab. This is because a post-market analysis showed that, in practice, ranibizumab and aflibercept had a similar dose frequency. The purpose of this study is to see whether this is because different groups of patients are being prescribed the two drugs. If they are, then we might anticipate heterogeneous treatment outcomes and thus a justification for differential pricing. Data were drawn from an administrative claims database of 208,000 Australian veterans for 2007-2017. The monthly number of aflibercept and ranibizumab prescriptions was estimated for each person, showing that total prescriptions increased steadily over the period, with aflibercept taking around half the market within a year of its approval. Ranibizumab initiators were slightly older in the post-aflibercept era but, aside from that, there were no real differences identified. When it comes to the prescription of ranibizumab or aflibercept, gender, being in residential care, remoteness of location, and co-morbidities don’t seem to be important. Dispensing rates were similar, at around 3 prescriptions during the first 90 days and around 9 prescriptions during the following 12 months.
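For readers who work with dispensing data, here is a hedged sketch in Python (pandas) of the sort of utilisation summary described above. The claims table, column names, and dates are invented for illustration; they are not the study’s data.

```python
# Toy post-market utilisation summary: prescriptions per person in the first 90
# days after initiation and in the following 12 months, by drug.
import pandas as pd

claims = pd.DataFrame({
    "person_id": [1, 1, 1, 1, 2, 2, 2],
    "drug":      ["ranibizumab"] * 4 + ["aflibercept"] * 3,
    "dispense_date": pd.to_datetime(
        ["2013-01-05", "2013-02-03", "2013-03-10", "2013-08-01",
         "2013-01-10", "2013-03-01", "2013-09-15"]),
})

first_fill = claims.groupby("person_id")["dispense_date"].transform("min")
days_since_start = (claims["dispense_date"] - first_fill).dt.days

early = claims[days_since_start <= 90].groupby(["drug", "person_id"]).size()
later = claims[(days_since_start > 90) & (days_since_start <= 455)].groupby(["drug", "person_id"]).size()

print("Mean scripts in first 90 days:\n", early.groupby(level="drug").mean())
print("Mean scripts in the following 12 months:\n", later.groupby(level="drug").mean())
```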

The findings seem to support Australia’s decision to treat ranibizumab and aflibercept as substitutes at the same price. More generally, they support the idea that post-market utilisation assessments can (and perhaps should) be used as part of the health technology assessment and reimbursement process.

Do political factors influence public health expenditures? Evidence pre- and post-great recession. The European Journal of Health Economics [PubMed] Published 24th October 2018

There is mixed evidence about the importance of partisanship in public spending, and very little relating specifically to health care. I’d be worried if political factors didn’t influence public spending on health, given that it is an inherently political issue. How the situation might differ before and after a recession is an interesting question.

The authors combined OECD data for 34 countries from 1970-2016 with the Database of Political Institutions. This allowed for the creation of variables relating to the ideology of the government and the proximity of elections. Stationary panel data models were identified as the most appropriate method for analysis of these data. A variety of political factors were included in the models, for which the authors present marginal effects. The more left-wing a government, the higher is public spending on health care, but this is only statistically significant in the period before the crisis of 2007. Before the crisis, coalition governments tended to spend more, while governments with more years in office tended to spend less. These effects also seem to disappear after 2007. Throughout the whole period, governing parties with a stronger majority tended to spend less on health care. Several of the non-political factors included in the models show the results that we would expect. GDP per capita is positively associated with health care expenditures, for example. The findings relating to the importance of political factors appear to be robust to the inclusion of other (non-political) variables and there are similar findings when the authors look at public health expenditure as a percentage of total health expenditure. In contradiction with some previous studies, proximity to elections does not appear to be important.
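For the methodologically curious, here is a minimal sketch in Python, on simulated data, of a panel regression in the spirit of the analysis described above: expenditure regressed on political variables with country and year fixed effects and country-clustered standard errors. The variable names (left_scale, coalition, years_in_office, majority_share, election_year, gdp_pc) and the simple two-way fixed-effects specification are my own assumptions, not the authors’ model.

```python
# Simulated-data sketch of a country-year panel regression of public health
# expenditure on political variables, with two-way fixed effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = [(c, y) for c in range(34) for y in range(1970, 2017)]
df = pd.DataFrame(rows, columns=["country", "year"])
df["left_scale"] = rng.uniform(0, 1, len(df))       # 1 = most left-wing government
df["coalition"] = rng.integers(0, 2, len(df))
df["years_in_office"] = rng.integers(1, 10, len(df))
df["majority_share"] = rng.uniform(0.4, 0.7, len(df))
df["election_year"] = rng.integers(0, 2, len(df))
df["gdp_pc"] = rng.normal(30_000, 8_000, len(df))
df["pub_health_exp"] = (0.05 * df["gdp_pc"] + 300 * df["left_scale"]
                        + rng.normal(0, 200, len(df)))

model = smf.ols(
    "pub_health_exp ~ left_scale + coalition + years_in_office"
    " + majority_share + election_year + gdp_pc + C(country) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["country"]})

print(model.params[["left_scale", "coalition", "years_in_office",
                    "majority_share", "election_year", "gdp_pc"]])
```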

The most interesting finding here is that the effect of partisanship seems to have mostly disappeared – or, at least, reduced – since the crisis of 2007. Why did left-wing parties and right-wing parties converge? The authors suggest that it’s because adverse economic circumstances restrict the extent to which governments can make decisions on the basis of ideology. Though I dare say readers of this blog could come up with plenty of other (perhaps non-economic) explanations.


Sam Watson’s journal round-up for 9th July 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Evaluating the 2014 sugar-sweetened beverage tax in Chile: an observational study in urban areas. PLoS Medicine [PubMed] Published 3rd July 2018

Sugar taxes are one of the public health policy options currently in vogue. Countries including Mexico, the UK, South Africa, and Sri Lanka all have sugar taxes. The aim of such levies is to reduce demand for the most sugary drinks, or, if the tax is absorbed on the supply side, which is rare, to encourage producers to reduce the sugar content of their drinks. One may also view it as a form of Pigouvian taxation to internalise the public health costs associated with obesity. Chile has long had an ad valorem tax on soft drinks fixed at 13%, but in 2014 decided to pursue a sugar tax approach. Drinks with more than 6.25g of sugar per 100ml saw their tax rate rise to 18%, and the tax on those below this threshold dropped to 10%. To understand what effect this change had, we would want to know three key things along the causal pathway from tax policy to sugar consumption: whether people knew about the tax change, whether prices changed, and whether consumption behaviour changed. On this latter point, we can consider both the overall volume of soft drinks purchased and whether people substituted low-sugar for high-sugar beverages. Using the Kantar Worldpanel, a household panel survey of purchasing behaviour, this paper examines these questions.
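The tiered rate reduces to a simple threshold rule on sugar content; a tiny sketch in Python (my own, purely for illustration):

```python
def chile_ssb_tax_rate(sugar_g_per_100ml: float) -> float:
    """Ad valorem tax rate applied to a soft drink after the 2014 Chilean reform."""
    return 0.18 if sugar_g_per_100ml > 6.25 else 0.10

print(chile_ssb_tax_rate(10.6))  # a typical full-sugar cola -> 0.18
print(chile_ssb_tax_rate(0.0))   # a diet drink -> 0.10
```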

Everyone in Chile was affected by the tax, so there is no control group. We must rely on time series variation to identify the effect of the tax. Sometimes, looking at plots of the data reveals a clear step-change when an intervention is introduced (e.g. the plot in this post); not so in this paper. We therefore rely heavily on the results of the model for our inferences, and I have a couple of small gripes with it. First, the model captures household fixed effects, but no consideration is given to dynamic effects. Some households may be more or less likely to buy drinks, but their decisions are also likely to be affected by how much they’ve recently bought. Similarly, the errors may be correlated over time. Ignoring dynamic effects can lead to large biases. Second, the authors choose among different functional form specifications of time using the Akaike Information Criterion (AIC). While AIC and the Bayesian Information Criterion (BIC) are often thought to be interchangeable, they are not: AIC targets predictive performance on future data, while BIC approximates the support the data in hand provide for each model. Thus, I would think BIC would be more appropriate here. Additional results show that the estimates are very sensitive to the choice of functional form, varying by an order of magnitude and even in sign. The authors estimate a fairly substantial decrease of around 22% in the volume of high-sugar drinks purchased, but find evidence that the price paid changed very little (~1.5%) and there was little change in other drinks. While the analysis is generally careful and well thought out, I am not wholly convinced by the authors’ conclusion that “Our main estimates suggest a significant, sizeable reduction in the volume of high-tax soft drinks purchased.”
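To make the AIC/BIC distinction concrete, here is how the two criteria are computed from a fitted model’s log-likelihood, using an invented example of two candidate time specifications (this is not the authors’ specification search):

```python
# AIC = 2k - 2 ln L, BIC = k ln(n) - 2 ln L, computed for two toy time trends.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 48 * 30
df = pd.DataFrame({"t": np.arange(n)})
df["quarter"] = (df["t"] // 90) % 4
df["volume"] = 100 - 0.01 * df["t"] + rng.normal(0, 5, n)

candidates = {"linear trend": "volume ~ t",
              "quarter dummies": "volume ~ C(quarter)"}
for name, formula in candidates.items():
    fit = smf.ols(formula, data=df).fit()
    k = fit.df_model + 1                     # regressors plus intercept
    aic = 2 * k - 2 * fit.llf                # AIC = 2k - 2 ln L
    bic = k * np.log(len(df)) - 2 * fit.llf  # BIC = k ln(n) - 2 ln L
    print(f"{name}: AIC={aic:.1f}, BIC={bic:.1f}")
```

Because ln(n) exceeds 2 for any sample larger than about seven observations, BIC penalises additional parameters more heavily than AIC, so the two criteria can rank candidate specifications differently.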

A Bayesian framework for health economic evaluation in studies with missing data. Health Economics [PubMed] Published 3rd July 2018

Missing data is a ubiquitous problem. I’ve never used a data set where no observations were missing and I doubt I’m alone. Despite its pervasiveness, it is often only afforded an acknowledgement in the discussion or, in more thorough analyses, handled with something like multiple imputation. Indeed, the majority of trials in the top medical journals don’t handle it correctly, if at all. The majority of the methods used for missing data in practice assume the data are ‘missing at random’ (MAR). One interpretation is that this means that, conditional on the observable variables, the probability of data being missing is independent of unobserved factors influencing the outcome. Another interpretation is that the distribution of the potentially missing data does not depend on whether they are actually missing. This interpretation comes from factorising the joint distribution of the outcome Y and an indicator of whether the datum is observed R, along with some covariates X, into a conditional and a marginal model: f(Y,R|X) = f(Y|R,X)f(R|X), a so-called pattern mixture model. This contrasts with the ‘selection model’ approach: f(Y,R|X) = f(R|Y,X)f(Y|X).

This paper considers a Bayesian approach using the pattern mixture model for missing data in health economic evaluation. Specifically, the authors specify a multivariate normal model for the data with an additional term in the mean if it is missing, i.e. the model of f(Y|R,X). A model is not specified for f(R|X). If it were, then you would typically allow for correlation between the errors in this model and the main outcomes model. But one could view the additional term in the outcomes model as some function of the error from the observation model, somewhat akin to a control function. Instead, this article uses expert elicitation methods to generate a prior distribution for the unobserved terms in the outcomes model. While this is certainly a legitimate way forward in my eyes, I do wonder how specification of a full observation model would affect the results. The approach of this article is useful and the authors show that it works, and I don’t want to detract from that; but, given the lack of literature on missing data in this area, I am curious to see how it compares with other approaches, including selection models. You could even add shared parameter models as an alternative; all of these are feasible. Perhaps an idea for a follow-up study. As a final point, the models are run in WinBUGS, but regular readers will know I think Stan is the future for estimating Bayesian models, especially in light of the problems with MCMC we’ve discussed previously. So equivalent Stan code would have been a bonus.
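For what it’s worth, here is a toy Monte Carlo sketch in Python of the pattern-mixture idea: the mean outcome among non-responders is the observed-case mean plus an offset, and because that offset is not identified by the data, it is given an expert-elicited prior. This is purely illustrative and is not the authors’ multivariate model or their WinBUGS code.

```python
# Toy pattern-mixture sensitivity analysis with an elicited prior on the
# mean shift (delta) for participants with missing outcomes.
import numpy as np

rng = np.random.default_rng(4)
observed_qalys = rng.normal(0.70, 0.20, size=300)   # toy complete cases
prop_missing = 0.25                                 # share of participants with missing outcome

# Elicited prior: experts believe non-responders do somewhat worse on average
delta_prior_mean, delta_prior_sd = -0.10, 0.05

n_draws = 10_000
obs_mean_draws = rng.normal(observed_qalys.mean(),
                            observed_qalys.std(ddof=1) / np.sqrt(len(observed_qalys)),
                            size=n_draws)
delta_draws = rng.normal(delta_prior_mean, delta_prior_sd, size=n_draws)

# Overall mean is a mixture of the observed group and the (shifted) missing group
overall_mean = (1 - prop_missing) * obs_mean_draws + prop_missing * (obs_mean_draws + delta_draws)
print(f"Mean QALYs: {overall_mean.mean():.3f} "
      f"(95% interval {np.percentile(overall_mean, 2.5):.3f} to {np.percentile(overall_mean, 97.5):.3f})")
```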

Trade challenges at the World Trade Organization to national noncommunicable disease prevention policies: a thematic document analysis of trade and health policy space. PLoS Medicine [PubMed] Published 26th June 2018

This is an economics blog. But focusing solely on economics papers in these round-ups would mean missing out on some papers from related fields that may provide insight into our own work. Thus I present to you a politics and sociology paper. It is not my field and I can’t give a reliable appraisal of the methods, but the results are of interest. In the global fight against non-communicable diseases, there is a range of policy tools available to governments, including the sugar tax of the paper at the top. The WHO recommends a large number of them. However, there is ongoing debate about whether trade rules and agreements are used to undermine this public health legislation. One agreement, the Technical Barriers to Trade (TBT) Agreement that World Trade Organization (WTO) members all sign, states that members may not impose ‘unnecessary trade costs’ or barriers to trade, especially if the intended aim of the measure can be achieved without doing so. For example, Philip Morris cited a bilateral trade agreement when it sued the Australian government for introducing plain packaging, claiming that it violated the terms of trade. Philip Morris eventually lost, but not before substantial costs had been incurred. In another example, the Thai government were deterred from introducing a traffic light warning system for food after threats of a trade dispute from the US, which cited WTO rules. However, there has been no clear evidence on the extent to which trade disputes have undermined public health measures.

This article presents results from a new database of all TBT WTO challenges. Between 1995 and 2016, 93 challenges were raised concerning food, beverage, and tobacco products, with the number per year growing over time. The most frequent challenges concerned product labelling, followed by restricted ingredients. The paper presents four case studies, including Indonesia delaying food labelling of fat, sugar, and salt content after a challenge by several members including the EU, and many members, again including the EU as well as the US, objecting to the size and colour of a red STOP sign that Chile wanted to put on products containing high levels of sugar, fat, and salt.

We have previously discussed the politics and political economy around public health policy relating to e-cigarettes, among other things. Understanding the political economy of public health and phenomena like government failure can be as important as understanding markets and market failure in designing effective interventions.
