Rita Faria’s journal round-up for 18th June 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Objectives, budgets, thresholds, and opportunity costs—a health economics approach: an ISPOR Special Task Force report. Value in Health [PubMed] Published 21st February 2018

The economic evaluation world has been discussing cost-effectiveness thresholds for a while. This paper has been out for a few months, but it slipped under my radar. It explains the relationship between the cost-effectiveness threshold, the budget, opportunity costs, and willingness to pay for health. My take-home messages are that we should use cost-effectiveness analysis to inform decisions in both publicly and privately funded health care systems. Each system has a budget and a way of raising funds for that budget. The cost-effectiveness threshold should be specific to each health care system, in order to reflect its specific opportunity cost. The budget can change for many reasons, and the threshold should be adjusted so that it continues to reflect the opportunity cost. For example, taxpayers can increase their willingness to pay for health through increased taxes for the health care system; we are starting to see this in the UK with the calls to raise taxes to increase the NHS budget. It is worth noting that the NICE threshold may not warrant adjustment upwards, since research suggests that it does not reflect the opportunity cost. This is a welcome paper on the topic and a must-read, particularly if you're arguing for the use of cost-effectiveness analysis in settings that have traditionally been reluctant to embrace it, such as the US.

Basic versus supplementary health insurance: access to care and the role of cost effectiveness. Journal of Health Economics [RePEc] Published 31st May 2018

Using cost-effectiveness analysis to inform coverage decisions not only in publicly funded but also in privately funded health care is also a feature of this study by Jan Boone. I'll admit that the equations are well beyond my level of microeconomics, but the text is good at explaining the insights and the intuition. Boone grapples with the question of how public and private health care systems should choose which technologies to cover, and concludes that the most cost-effective technologies should be prioritised for funding. That the theory matches the practice is reassuring to an economic evaluator like me! One of the findings is that cost-effective technologies which are very cheap should not be covered, the rationale being that everyone can afford them. The issue for me is that people may decide not to purchase a highly cost-effective technology which is very cheap; as we know from behavioural economics, people are not rational all the time! Boone also concludes that the inclusion of technologies in the universal basic package should consider the prevalence of the conditions among people at high risk and with low income. The way I interpreted this is that it is more cost-effective to include in the universal basic package technologies for high-risk, low-income people who would not otherwise be able to afford them, than technologies for high-income people who can afford supplementary insurance. I can't cover here all the findings and nuances of the theoretical model. Suffice to say that it is an interesting read, even if, like me, you avoid the equations.

Surveying the cost effectiveness of the 20 procedures with the largest public health services waiting lists in Ireland: implications for Ireland’s cost-effectiveness threshold. Value in Health Published 11th June 2018

As we are on the topic of cost-effectiveness thresholds, this is a study on the threshold in Ireland. It sets out to find out whether the current cost-effectiveness threshold is too high, given the ICERs of the 20 procedures with the largest waiting lists. The idea is that, if the current cost-effectiveness threshold is correct, the procedures with large and long waiting lists would have ICERs above it; if the procedures have low ICERs, the threshold may be set too high. I thought Figure 1 was excellent in conveying the discordance between ICERs and waiting lists. For example, the ICER for extracapsular extraction of crystalline lens is €10,139/QALY and the waiting list has 10,056 people, while the ICER for surgical tooth removal is €195,155/QALY and the waiting list is smaller, at 833. This study suggests that, as in many other countries, there are inefficiencies in the way that the Irish health care system prioritises technologies for funding. The limitation of the study is in the ICERs. Ideally, the relevant ICER would compare the procedure with standard care in Ireland whilst on the waiting list (the "no procedure" option), but it is nigh on impossible to find ICERs that meet this condition for all procedures. The alternative is to assume that the difference in costs and QALYs is generalisable from the source study to Ireland. It was great to see another study on empirical cost-effectiveness thresholds. I look forward to knowing what the cost-effectiveness threshold should be to accurately reflect opportunity costs.
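The study's basic comparison is easy to sketch. The two ICERs and waiting-list figures below are the ones quoted above; the €45,000/QALY threshold is my assumption for illustration (a commonly cited Irish figure), not a value taken from the paper:

```python
# Illustrative check of ICERs against a threshold. The two ICERs and
# waiting-list sizes are quoted in the study; the EUR 45,000/QALY
# threshold is an assumption for illustration only.
procedures = [
    # (name, ICER in EUR/QALY, waiting-list size)
    ("extracapsular extraction of crystalline lens", 10_139, 10_056),
    ("surgical tooth removal", 195_155, 833),
]

THRESHOLD = 45_000  # EUR/QALY, assumed

for name, icer, waiting in procedures:
    verdict = "cost-effective" if icer <= THRESHOLD else "not cost-effective"
    print(f"{name}: EUR {icer:,}/QALY, {waiting:,} waiting -> {verdict}")
```

The discordance is then plain: the procedure with the far larger waiting list is the one that clears the threshold comfortably.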


Chris Sampson’s journal round-up for 7th May 2018


Building an international health economics teaching network. Health Economics [PubMed] Published 2nd May 2018

The teaching on my health economics MSc (at Sheffield) was very effective. Experts from our subdiscipline equipped me with the skills that I went on to use on a daily basis in my first job, and to this day. But not everyone gets the same opportunity. And there were only 8 people on my course. Part of the background to the new movement described in this editorial is the observation that demand for health economists outstrips supply. Great for us jobbing health economists, but suboptimal for society. The shortfall has given rise to people teaching health economics (or rather, economic evaluation methods) without any real training in economics. The main purpose of this editorial is to call on health economists (that’s me and you) to pull our weight and contribute to a collective effort to share, improve, and ultimately deliver high-quality teaching resources. The Health Economics education website, which is now being adopted by iHEA, should be the starting point. And there’s now a Teaching Health Economics Special Interest Group. So chip in! This paper got me thinking about how the blog could play its part in contributing to the infrastructure of health economics teaching, so expect to see some developments on that front.

Including future consumption and production in economic evaluation of interventions that save life-years: commentary. PharmacoEconomics – Open [PubMed] Published 30th April 2018

When people live longer, they spend their extra life-years consuming and producing. How much consuming and producing they do affects social welfare. The authors of this commentary are very clear about the point they wish to make, so I'll just quote them: "All else equal, a given number of quality-adjusted life-years (QALYs) from life prolongation will normally be more costly from a societal perspective than the same number of QALYs from programmes that improve quality of life". This is because (in high-income countries) most people whose life can be extended are elderly, so they're not very productive. They're likely to create a net cost for society (given how we measure value). Asserting that the cost is 'worth it' at any level, or simply ignoring the matter, isn't really good enough, because providing life extension will be at the expense of some life-improving treatments which may – were these costs taken into account – improve social welfare. The authors' estimates suggest that the societal cost of life-extension is far greater than current methods admit. Consumption costs and production gains should be estimated; the question is not whether we should measure them – clearly, we should – but what weight they ought to be given in decision-making.
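The authors' point can be illustrated with some crude arithmetic. All the numbers below are my own assumptions, purely for illustration, not estimates from the commentary:

```python
# Crude illustration (all figures assumed): a life-extending QALY carries
# the patient's net consumption during the extra life-year, which a
# quality-improving QALY does not.
health_care_cost_per_qaly = 25_000   # assumed, the same for both programmes
annual_consumption = 18_000          # assumed, elderly patient
annual_production = 2_000            # assumed, elderly patient

# Quality-improving programme: no extra life-years, so no extra consumption.
quality_qaly_cost = health_care_cost_per_qaly

# Life-extending programme: one extra life-year adds consumption net of production.
life_ext_qaly_cost = health_care_cost_per_qaly + (annual_consumption - annual_production)

print(f"quality-of-life QALY: {quality_qaly_cost:,}")   # 25,000
print(f"life-extension QALY:  {life_ext_qaly_cost:,}")  # 41,000
```

Under these assumed figures the societal cost per life-extension QALY is 64% higher, which is the "all else equal" asymmetry the quoted sentence describes.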

Methods for the economic evaluation of changes to the organisation and delivery of health services: principal challenges and recommendations. Health Economics, Policy and Law [PubMed] Published 20th April 2018

The late, great Alan Maynard liked to speak about redisorganisations in the NHS: large-scale changes to the way services are organised and delivered, usually without a supporting evidence base. This problem extends to smaller-scale service delivery interventions. There's no requirement for policy-makers to demonstrate that changes will be cost-effective. This paper explains why applying methods of health technology assessment to service interventions can be tricky. The causal chain of effects may be less clear when interventions are applied at the organisational level rather than the individual level, and the results will be heavily dependent on the present context. The author outlines five challenges in conducting economic evaluations for service interventions: i) conducting ex-ante evaluations, ii) evaluating impact in terms of QALYs, iii) assessing costs and opportunity costs, iv) accounting for spillover effects, and v) generalisability. Those identified as most limiting right now are the challenges associated with estimating costs and QALYs. Cost data aren't likely to be readily available at the individual level and may not be easily identifiable and divisible. So top-down programme-level costs may be all we have to work with, and they may lack precision. QALYs may be 'attached' to service interventions by applying a tariff to individual patients or by supplementing the analysis with simulation modelling. But more methodological development is still needed. And until we figure it out, health spending is likely to suffer from allocative inefficiencies.

Vog: using volcanic eruptions to estimate the health costs of particulates. The Economic Journal [RePEc] Published 12th April 2018

As sources of random shocks to a system go, a volcanic eruption is pretty good. A major policy concern around the world – particularly in big cities – is the impact of pollution. But the short-term impact of particulate pollution is difficult to identify because there is high correlation amongst pollutants. In this study, the authors use the eruption activity of Kīlauea on the island of Hawaiʻi as a source of variation in particulate pollution. Vog – volcanic smog – includes sulphur dioxide and is similar to particulate pollution in cities, but the fact that Hawaiʻi does not have the same levels of industrial pollutants means that the authors can more cleanly identify the impact on health outcomes. In 2008 there was a big increase in Kīlauea’s emissions when a new vent opened, and the level of emissions fluctuates daily, so there’s plenty of variation to play with. The authors have two main sources of data: emergency admissions (and their associated charges) and air quality data. A parsimonious OLS model is used to estimate the impact of air quality on the total number of admissions for a given day in a given region, with fixed effects for region and date. An instrumental variable approach is also used, which looks at air quality on a neighbouring island and uses wind direction to specify the instrumental variable. The authors find that pulmonary-related emergency admissions increased with pollution levels. Looking at the instrumental variable analysis, a one standard deviation increase in particulate pollution results in 23-36% more pulmonary-related emergency visits (depending on which measure of particulate pollution is being used). Importantly, there’s no impact on fractures, which we wouldn’t expect to be influenced by the particulate pollution. The impact is greatest for babies and young children. And it’s worth bearing in mind that avoidance behaviours – e.g. people staying indoors on ‘voggy’ days – are likely to reduce the impact of the pollution. 
Despite the apparent lack of similarity between Hawaiʻi and – for example – London, this study provides strong evidence that policy-makers should consider the potential savings to the health service when tackling particulate pollution.
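The identification logic can be sketched in a few lines of two-stage least squares. Everything below is simulated and all names and coefficients are my assumptions, not the paper's data or model (which also includes region and date fixed effects, and uses wind direction in constructing the instrument, both omitted here):

```python
import numpy as np

# Minimal 2SLS sketch of the IV idea: instrument local particulate
# pollution with pollution measured on a neighbouring island, which is
# unaffected by the local confounder. Simulated data, for illustration only.
rng = np.random.default_rng(0)
n = 1_000

neighbour_pollution = rng.gamma(2.0, 1.0, n)   # the instrument
confounder = rng.normal(size=n)                # unobserved local factor
pollution = 0.8 * neighbour_pollution + 0.5 * confounder + rng.normal(size=n)
admissions = 2.0 + 1.5 * pollution + 1.0 * confounder + rng.normal(size=n)

def ols(X, y):
    """Least-squares coefficients of y on X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.column_stack([np.ones(n), pollution])
Z = np.column_stack([np.ones(n), neighbour_pollution])

# Stage 1: predict local pollution from the instrument.
pollution_hat = Z @ ols(Z, pollution)
# Stage 2: regress admissions on the predicted pollution.
beta_iv = ols(np.column_stack([np.ones(n), pollution_hat]), admissions)

beta_ols = ols(X, admissions)
print(f"OLS estimate:  {beta_ols[1]:.2f} (biased upward by the confounder)")
print(f"2SLS estimate: {beta_iv[1]:.2f} (true effect in this simulation: 1.5)")
```

The point of the sketch is the bias pattern: OLS picks up the unobserved confounder, while instrumenting with the neighbouring island's pollution recovers something close to the true effect. (Proper 2SLS standard errors need a correction not shown here.)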


Sam Watson’s journal round-up for 16th April 2018


The impact of NHS expenditure on health outcomes in England: alternative approaches to identification in all‐cause and disease specific models of mortality. Health Economics [PubMed] Published 2nd April 2018

Studies looking at the relationship between health care expenditure and patient outcomes have exploded in popularity. A recent systematic review identified 65 studies on the topic by 2014 – and recent experience from these journal round-ups suggests this number has increased significantly since then. The relationship between national spending and health outcomes is important to inform policy and health care budgets, not least through the specification of a cost-effectiveness threshold. Karl Claxton and colleagues released a big study in 2015 looking at all the programmes of care in the NHS, purporting to estimate exactly this. I wrote at the time that: (i) these estimates are only truly an opportunity cost if the health service is allocatively efficient, which it isn't; and (ii) their statistical identification method, in which they used a range of socio-economic variables as instruments for expenditure, was flawed, as the instruments were neither strong determinants of expenditure nor (conditionally) independent of population health. I also noted that their tests would be unlikely to detect this problem. In response to the first, Tony O'Hagan commented to say that they did not assume NHS efficiency, nor even that the NHS is trying to maximise health. This may well have been the case, but I would still, perhaps pedantically, argue that this is therefore not an opportunity cost. On the question of instrumental variables, an alternative method was proposed by Martyn Andrews and co-authors, using information that feeds into the budget allocation formula as instruments for expenditure. In this new article, Claxton, Lomas, and Martin adopt Andrews's approach and apply it across four key programmes of care in the NHS to try to derive cost-per-QALY thresholds.
First off, many of my original criticisms would also apply to this paper, to which I'd add one more (statistical significance being used inappropriately complaint alert!!!): the authors use what seems to be some form of stepwise regression, including and excluding regressors on the basis of statistical significance – this is a big no-no that just introduces large biases (see this article for a list of reasons why). Beyond that, the instruments issue – I think – is still a problem, as it's hard to justify, for example, an input price index (which translates to larger budgets) as an instrument here. It is certainly correlated with higher expenditure – inputs are more expensive in higher-price areas, after all – but for that same reason it won't be correlated with greater quantities of inputs. Thus, it's the 'wrong kind' of correlation for this study. Needless to say, perhaps I am letting the perfect be the enemy of the good. Is this evidence strong enough to warrant a change in a cost-effectiveness threshold? My inclination would be that it is not, but that is not to deny its relevance to the debate.
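On the stepwise point, a toy simulation (entirely made up, nothing to do with the paper's data) shows the bias: when regressors are kept only if they pass a significance filter, the surviving coefficient estimates are systematically too large in magnitude, even when every true coefficient is zero:

```python
import numpy as np

# Toy demonstration of selection-on-significance bias: with many pure-noise
# regressors, the ones that survive a |t| > 1.96 filter have coefficients
# biased away from their true value of zero.
rng = np.random.default_rng(1)
n, p, n_sims = 100, 20, 500
kept_coefs = []

for _ in range(n_sims):
    X = rng.normal(size=(n, p))
    y = rng.normal(size=n)  # true coefficients are all zero
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    # Crude per-coefficient t-statistics.
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    t = beta / se
    kept_coefs.extend(abs(beta[abs(t) > 1.96]))  # the 'significant' survivors

# True value is 0; the survivors' coefficients are systematically inflated.
print(f"mean |coef| among 'significant' regressors: {np.mean(kept_coefs):.2f}")
```

Using significance to decide what stays in the model is exactly this filter applied repeatedly, which is why the resulting estimates can't be taken at face value.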

Risk thresholds for alcohol consumption: combined analysis of individual-participant data for 599 912 current drinkers in 83 prospective studies. The Lancet Published 14th April 2018

“Moderate drinkers live longer” is the adage of the casual drinker, as if to justify a hedonistic pursuit as purely pragmatic. But where does this idea come from? Studies that have compared the risk of cardiovascular disease to the level of alcohol consumption have shown that disease risk is lower in those that drink moderately than in those that don’t drink at all. But correlation does not imply causation – non-drinkers might differ from those that drink. They may be abstinent after experiencing health issues related to alcohol, or have otherwise been advised not to drink to protect their health. If we truly believed moderate alcohol consumption was better for your health than no alcohol consumption, we’d advise people who don’t drink to start drinking. Moreover, if this relationship were true then there would be an ‘optimal’ level of consumption where any protective effect was maximised before being outweighed by the adverse effects. This new study pools data from three large consortia, each containing data from multiple studies or centres, on individual alcohol consumption, cardiovascular disease (CVD), and all-cause mortality to look at these outcomes among drinkers, excluding non-drinkers for the aforementioned reasons. Reading the methods section, it’s not wholly clear what was done, at least if replicability were the standard. I believe that, for each database, a hazard ratio or odds ratio for the risk of CVD or mortality was estimated for eight groups of alcohol consumption; these ratios were then pooled in a random-effects meta-analysis. However, it’s not clear to me why you would need to do this in two steps when you could just estimate a hierarchical model that achieves the same thing while also propagating the uncertainty through all the levels.
Anyway, a polynomial was then fitted through the pooled ratios – again, why not just do this in the main stage and estimate some kind of hierarchical semi-parametric model, instead of a three-stage model, to get the curve of interest? I don’t know. The key finding is that risk generally increases above around 100g/week of alcohol (around 5-6 UK glasses of wine per week), below which it is fairly flat (although whether it differs from non-drinkers we don’t know). However, the picture the article paints is complicated: the risks of stroke and heart failure go up with increased alcohol consumption, but the risk of myocardial infarction goes down. This would suggest some kind of competing risks story: alcohol increases your overall risk of CVD, and your proportional risk of non-myocardial-infarction CVD given CVD.
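The two-step approach, as I understand it, can be sketched as follows. The per-study log hazard ratios and standard errors below are invented for illustration, and I've used the standard DerSimonian-Laird random-effects estimator as a stand-in for whatever pooling method the consortia actually used:

```python
import numpy as np

# Step 2 of the two-step approach: pool per-study log hazard ratios with a
# DerSimonian-Laird random-effects meta-analysis. The log-HRs and standard
# errors below are hypothetical, for illustration only.
log_hr = np.array([0.05, 0.40, -0.05, 0.35, 0.20])  # invented study estimates
se = np.array([0.08, 0.12, 0.10, 0.15, 0.09])

# Fixed-effect (inverse-variance) pooling as a starting point.
w_fixed = 1 / se**2
pooled_fixed = np.sum(w_fixed * log_hr) / np.sum(w_fixed)

# DerSimonian-Laird estimate of the between-study variance tau^2.
q = np.sum(w_fixed * (log_hr - pooled_fixed) ** 2)
k = len(log_hr)
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (k - 1)) / c)

# Random-effects weights fold tau^2 into each study's variance.
w_re = 1 / (se**2 + tau2)
pooled_re = np.sum(w_re * log_hr) / np.sum(w_re)
print(f"pooled HR (random effects): {np.exp(pooled_re):.2f}")
```

A hierarchical model would estimate the study-level ratios and the between-study variance jointly, rather than treating the first-stage estimates as fixed inputs – which is the uncertainty-propagation point made above.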

Family ruptures, stress, and the mental health of the next generation [comment] [reply]. American Economic Review [RePEc] Published April 2018

I’m not sure I will write out the full blurb again about studies of in utero exposure to difficult or stressful conditions and later-life outcomes. There are a lot of them and they continue to make the top journals. Admittedly, I continue to cover them in these round-ups – so much so that we could write a literature review on the topic on the basis of the content of this blog. Needless to say, exposure in the womb to stressors likely increases the risk of low birth weight, neonatal and childhood disease, poor educational outcomes, and worse labour market outcomes. So what does this new study (and the comments) contribute? Firstly, it uses a new type of stressor – maternal stress caused by a death in the family, which apparently has a dose-response, as stronger ties to the deceased are more stressful. Secondly, it looks at mental health outcomes of the child, which are less common in these sorts of studies. The identification strategy compares the effect of the death on infants who are in the womb to its effect on infants who experience it shortly after birth. Herein lies the interesting discussion raised in the comment and reply papers linked above: in this paper the sample contains all births up to one year after birth, and to be in the ‘treatment’ group the death had to have occurred between conception and the expected date of birth, so babies born preterm were less likely to end up in the control group than those born after the expected date. This spurious correlation could potentially lead to bias. In the authors’ reply, they re-estimate their models, redefining the control group on the basis of the expected date of birth rather than the actual one. They find that their estimates for the effect of the stressor on physical outcomes, like low birth weight, are much smaller in magnitude, and I’m not sure they’re clinically significant.
For mental health outcomes, the estimates are again qualitatively small in magnitude and remain similar to those in the original paper, but this choice phrase pops up (statistical significance being used inappropriately complaint alert!!!): “We cannot reject the null hypothesis that the mental health coefficients presented in panel C of Table 3 are statistically the same as the corresponding coefficients in our original paper.” Statistically the same! I can see they’re different! Anyway, given all the other evidence on the topic I don’t need to explain the results in detail – the methods discussion is far more interesting.