Rita Faria’s journal round-up for 18th June 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Objectives, budgets, thresholds, and opportunity costs—a health economics approach: an ISPOR Special Task Force report. Value in Health [PubMed] Published 21st February 2018

The economic evaluation world has been discussing cost-effectiveness thresholds for a while. This paper has been out for a few months, but it slipped under my radar. It explains the relationship between the cost-effectiveness threshold, the budget, opportunity costs and willingness to pay for health. My take-home messages are that we should use cost-effectiveness analysis to inform decisions in both publicly and privately funded health care systems. Each system has a budget and a way of raising funds for that budget. The cost-effectiveness threshold should be specific to each health care system, in order to reflect its specific opportunity cost. The budget can change for many reasons. The cost-effectiveness threshold should be adjusted to reflect these changes and hence continue to reflect the opportunity cost. For example, taxpayers can increase their willingness to pay for health through increased taxes for the health care system. We are starting to see this in the UK with the calls to raise taxes to increase the NHS budget. It is worth noting that the NICE threshold may not warrant adjustment upwards, since research suggests that it does not reflect the opportunity cost. This is a welcome paper on the topic and a must-read, particularly if you’re arguing for the use of cost-effectiveness analysis in settings that have traditionally been reluctant to embrace it, such as the US.

Basic versus supplementary health insurance: access to care and the role of cost effectiveness. Journal of Health Economics [RePEc] Published 31st May 2018

Using cost-effectiveness analysis to inform coverage decisions not only for publicly funded but also for privately funded health care is also a feature of this study by Jan Boone. I’ll admit that the equations are well beyond my level of microeconomics, but the text is good at explaining the insights and the intuition. Boone grapples with the question of how public and private health care systems should choose which technologies to cover. He concludes that, when choosing which technologies to cover, the most cost-effective technologies should be prioritised for funding. That the theory matches the practice is reassuring to an economic evaluator like myself! One of the findings is that cost-effective technologies which are very cheap should not be covered, the rationale being that everyone can afford them. The issue for me is that people may decide not to purchase a highly cost-effective technology which is very cheap. As we know from behavioural economics, people are not rational all the time! Boone also concludes that the inclusion of technologies in the universal basic package should consider the prevalence of the conditions among people at high risk and with low income. The way I interpreted this is that it is more cost-effective to include in the universal basic package technologies for high-risk low-income people, who would not otherwise be able to afford them, than technologies for high-income people, who can afford supplementary insurance. I can’t cover here all the findings and the nuances of the theoretical model. Suffice to say that it is an interesting read, even if, like me, you avoid the equations.

Surveying the cost effectiveness of the 20 procedures with the largest public health services waiting lists in Ireland: implications for Ireland’s cost-effectiveness threshold. Value in Health Published 11th June 2018

As we are on the topic of cost-effectiveness thresholds, this is a study on the threshold in Ireland. It sets out to determine whether the current cost-effectiveness threshold is too high, given the ICERs of the 20 procedures with the largest waiting lists. The idea is that, if the current cost-effectiveness threshold is correct, procedures with large and long waiting lists should have ICERs above it. If those procedures have low ICERs, the threshold may be set too high. I thought Figure 1 was excellent at conveying the discordance between ICERs and waiting lists. For example, the ICER for extracapsular extraction of crystalline lens is €10,139/QALY and its waiting list has 10,056 people, whilst the ICER for surgical tooth removal is €195,155/QALY and its waiting list is smaller, at 833. This study suggests that, as in many other countries, there are inefficiencies in the way the Irish health care system prioritises technologies for funding. The limitation of the study is in the ICERs. Ideally, the relevant ICER would compare the procedure with standard care in Ireland whilst on the waiting list (the “no procedure” option). But it is nigh impossible to find ICERs that meet this condition for all procedures. The alternative is to assume that the differences in costs and QALYs are generalisable from the source study to Ireland. It was great to see another study on empirical cost-effectiveness thresholds. I look forward to finding out what the cost-effectiveness threshold should be to accurately reflect opportunity costs.
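The logic of the study's comparison can be sketched in a few lines of code: each procedure's published ICER is set against a candidate threshold, alongside its waiting list. The two ICERs and waiting-list sizes below are the Figure 1 examples quoted above; the threshold value is purely hypothetical, chosen for illustration and not taken from the paper.

```python
# Illustrative sketch: flag procedures whose published ICER exceeds a
# candidate cost-effectiveness threshold. ICERs and waiting lists are
# the two Figure 1 examples quoted in the text; the threshold value is
# hypothetical.

procedures = [
    {"name": "extracapsular extraction of crystalline lens",
     "icer_eur_per_qaly": 10_139, "waiting_list": 10_056},
    {"name": "surgical tooth removal",
     "icer_eur_per_qaly": 195_155, "waiting_list": 833},
]

def above_threshold(procedure, threshold_eur_per_qaly):
    """True if the procedure's ICER exceeds the candidate threshold."""
    return procedure["icer_eur_per_qaly"] > threshold_eur_per_qaly

THRESHOLD = 45_000  # hypothetical candidate threshold, EUR/QALY

for p in procedures:
    flag = "above" if above_threshold(p, THRESHOLD) else "below"
    print(f"{p['name']}: €{p['icer_eur_per_qaly']:,}/QALY "
          f"({flag} threshold), waiting list {p['waiting_list']:,}")
```

The discordance the paper highlights is visible immediately: the low-ICER procedure has the long waiting list, while the high-ICER procedure's list is comparatively short.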


Sam Watson’s journal round-up for 16th April 2018


The impact of NHS expenditure on health outcomes in England: alternative approaches to identification in all‐cause and disease specific models of mortality. Health Economics [PubMed] Published 2nd April 2018

Studies looking at the relationship between health care expenditure and patient outcomes have exploded in popularity. A recent systematic review identified 65 studies by 2014 on the topic – and recent experience from these journal round-ups suggests this number has increased significantly since then. The relationship between national spending and health outcomes is important to inform policy and health care budgets, not least through the specification of a cost-effectiveness threshold. Karl Claxton and colleagues released a big study looking at all the programmes of care in the NHS in 2015 purporting to estimate exactly this. I wrote at the time that: (i) these estimates are only truly an opportunity cost if the health service is allocatively efficient, which it isn’t; and (ii) their statistical identification method, in which they used a range of socio-economic variables as instruments for expenditure, was flawed as the instruments were neither strong determinants of expenditure nor (conditionally) independent of population health. I also noted that their tests would be unlikely to be any good at detecting this problem. In response to the first, Tony O’Hagan commented to say that they did not assume NHS efficiency, nor even that the NHS is trying to maximise health. This may well have been the case, but I would still, perhaps pedantically, argue that this is therefore not an opportunity cost. On the question of instrumental variables, an alternative method was proposed by Martyn Andrews and co-authors, using information that feeds into the budget allocation formula as instruments for expenditure. In this new article, Claxton, Lomas, and Martin adopt Andrews’s approach and apply it across four key programmes of care in the NHS to try to derive cost-per-QALY thresholds.
First off, many of my original criticisms I would also apply to this paper, to which I’d also add one: (Statistical significance being used inappropriately complaint alert!!!) The authors use what seems to be some form of stepwise regression, including and excluding regressors on the basis of statistical significance – this is a big no-no and just introduces large biases (see this article for a list of reasons why). Beyond that, the instruments issue – I think – is still a problem, as it’s hard to justify, for example, an input price index (which translates to larger budgets) as an instrument here. It is certainly correlated with higher expenditure – inputs are more expensive in higher price areas after all – but this instrument won’t be correlated with greater inputs for this same reason. Thus, it’s the ‘wrong kind’ of correlation for this study. That said, perhaps I am letting the perfect be the enemy of the good. Is this evidence strong enough to warrant a change in a cost-effectiveness threshold? My inclination would be that it is not, but that is not to deny its relevance to the debate.
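For readers less familiar with the instrumental-variables logic being argued about here, a minimal two-stage least squares (2SLS) sketch on simulated data may help. This is emphatically not the paper's model: the variable names, the data-generating process, and all the coefficients are invented. It simply shows why, when expenditure is confounded with unobserved need, naive regression gets the effect wrong while a valid instrument recovers it.

```python
# Minimal 2SLS sketch on simulated data (NOT the paper's model or data).
# Unobserved "need" drives both expenditure and mortality, biasing OLS;
# the instrument shifts expenditure only, so 2SLS recovers the effect.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

need = rng.normal(size=n)          # unobserved confounder
instrument = rng.normal(size=n)    # e.g. a funding-formula input (invented)
expenditure = 1.0 * instrument + 1.0 * need + rng.normal(size=n)
mortality = -0.5 * expenditure + 2.0 * need + rng.normal(size=n)

def ols_slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Naive OLS is badly biased by the confounder (here it even flips sign).
naive = ols_slope(expenditure, mortality)

# Stage 1: predict expenditure from the instrument.
fitted = ols_slope(instrument, expenditure) * instrument
# Stage 2: regress mortality on the fitted (exogenous) expenditure.
iv = ols_slope(fitted, mortality)

print(f"naive OLS: {naive:+.2f}, 2SLS: {iv:+.2f} (true effect -0.5)")
```

The whole debate above is about whether the chosen instruments satisfy the assumptions this sketch takes for granted: that they genuinely move expenditure and affect outcomes through expenditure alone.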

Risk thresholds for alcohol consumption: combined analysis of individual-participant data for 599 912 current drinkers in 83 prospective studies. The Lancet Published 14th April 2018

“Moderate drinkers live longer” is the adage of the casual drinker, as if to justify a hedonistic pursuit as purely pragmatic. But where does this idea come from? Studies that have compared risk of cardiovascular disease to level of alcohol consumption have shown that disease risk is lower in those that drink moderately compared to those that don’t drink. But correlation does not imply causation – non-drinkers might differ from those that drink. They may be abstinent after experiencing health issues related to alcohol, or be otherwise advised not to drink to protect their health. If we truly believed moderate alcohol consumption was better for your health than no alcohol consumption, we’d advise people who don’t drink to drink. Moreover, if this relationship were true then there would be an ‘optimal’ level of consumption where any protective effect were maximised before being outweighed by the adverse effects. This new study pools data from three large consortia, each containing data from multiple studies or centres, on individual alcohol consumption, cardiovascular disease (CVD), and all-cause mortality to look at these outcomes among drinkers, excluding non-drinkers for the aforementioned reasons. Reading the methods section, it’s not wholly clear what was done, at least not to a replicable standard. I believe that, for each database, a hazard ratio or odds ratio for the risk of CVD or mortality was estimated for eight groups of alcohol consumption; these ratios were then pooled in a random-effects meta-analysis. However, it’s not clear to me why you would need to do this in two steps when you could just estimate a hierarchical model that achieves the same thing while also propagating any uncertainty through all the levels.
Anyway, a polynomial was then fitted through the pooled ratios – again, why not just do this in the main stage and estimate some kind of hierarchical semi-parametric model, instead of a three-stage model, to get the curve of interest? I don’t know. The key finding is that risk generally increases above around 100g/week of alcohol (around 5-6 UK glasses of wine per week), below which it is fairly flat (although whether it differs from non-drinkers we don’t know). However, the picture the article paints is complicated: the risks of stroke and heart failure go up with increased alcohol consumption, but the risk of myocardial infarction goes down. This would suggest some kind of competing risk: the mechanism by which alcohol works increases your overall risk of CVD and your proportional risk of non-myocardial-infarction CVD given CVD.
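The second step of the two-step approach criticised above – pooling per-study ratios in a random-effects meta-analysis – can be sketched with the standard DerSimonian-Laird estimator. The input log hazard ratios and standard errors below are invented for illustration; nothing here is taken from the study's data.

```python
# DerSimonian-Laird random-effects pooling sketch. Inputs are invented
# (log hazard ratio, standard error) pairs, one per hypothetical study.
import math

studies = [(0.10, 0.05), (0.18, 0.08), (0.05, 0.06), (0.22, 0.10)]

def dersimonian_laird(estimates):
    """Pooled estimate, its SE, and between-study variance tau^2."""
    y = [b for b, _ in estimates]
    w = [1 / se**2 for _, se in estimates]   # fixed-effect (inverse-variance) weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sw
    # Cochran's Q and the DL moment estimator of tau^2
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
    df = len(estimates) - 1
    c = sw - sum(wi**2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights add tau^2 to each study's variance
    w_re = [1 / (se**2 + tau2) for _, se in estimates]
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    return pooled, se_pooled, tau2

pooled, se, tau2 = dersimonian_laird(studies)
print(f"pooled log-HR {pooled:.3f} (SE {se:.3f}), tau^2 {tau2:.4f}")
```

The round-up's complaint is that uncertainty in each study-level ratio is treated as fixed once it enters this step; a single hierarchical model would let that uncertainty propagate through to the final curve.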

Family ruptures, stress, and the mental health of the next generation [comment] [reply]. American Economic Review [RePEc] Published April 2018

I’m not sure I will write out the full blurb again about studies of in utero exposure to difficult or stressful conditions and later life outcomes. There are a lot of them and they continue to make the top journals. Admittedly, I continue to cover them in these round-ups – so much so that we could write a literature review on the topic on the basis of the content of this blog. Needless to say, exposure in the womb to stressors likely increases the risk of low birth weight, neonatal and childhood disease, poor educational outcomes, and worse labour market outcomes. So what does this new study (and the comments) contribute? Firstly, it uses a new type of stressor – maternal stress caused by a death in the family, which apparently has a dose-response relationship, as stronger ties to the deceased are more stressful – and secondly, it looks at mental health outcomes of the child, which are less common in these sorts of studies. The identification strategy compares the effect of the death on infants who are in the womb to those infants who experience it shortly after birth. Herein lies the interesting discussion raised in the above linked comment and reply papers: in this paper the sample contains all births up to one year post birth, and to be in the ‘treatment’ group the death had to have occurred between conception and the expected date of birth, so babies born preterm were less likely to end up in the control group than those born after the expected date. This spurious correlation could potentially lead to bias. In the authors’ reply, they re-estimate their models by redefining the control group on the basis of expected date of birth rather than actual. They find that their estimates for the effect of their stressor on physical outcomes, like low birth weight, are much smaller in magnitude, and I’m not sure they’re clinically significant.
For mental health outcomes, the estimates are again qualitatively small in magnitude and remain similar to the original paper, but this choice phrase pops up (Statistical significance being used inappropriately complaint alert!!!): “We cannot reject the null hypothesis that the mental health coefficients presented in panel C of Table 3 are statistically the same as the corresponding coefficients in our original paper.” Statistically the same! I can see they’re different! Anyway, given all the other evidence on the topic, I don’t need to explain the results in detail – the methods discussion is far more interesting.


Alastair Canaway’s journal round-up for 29th January 2018


Is “end of life” a special case? Connecting Q with survey methods to measure societal support for views on the value of life-extending treatments. Health Economics [PubMed] Published 19th January 2018

Should end-of-life care be treated differently? A question often asked and previously discussed on this blog: findings to date are equivocal. This question is important given NICE’s End-of-Life Guidance for increased QALY thresholds for life-extending interventions, and additionally the Cancer Drugs Fund (CDF). This week’s round-up sees Helen Mason and colleagues attempt to inform the debate around societal support for views of end-of-life care, by trying to determine the degree of support for different views on the value of life-extending treatment. It’s always a treat to see papers grounded in qualitative research in the big health economics journals, and this month saw the use of a particularly novel mixed methods approach adding a quantitative element to their previous qualitative findings. They combined the novel (but increasingly recognisable thanks to the Glasgow team) Q methodology with survey techniques to examine the relative strength of views on end-of-life care that they had formulated in a previous Q methodology study. Their previous research had found that there are three prevalent viewpoints on the value of life-extending treatment: 1. ‘a population perspective: value for money, no special cases’, 2. ‘life is precious: valuing life-extension and patient choice’, 3. ‘valuing wider benefits and opportunity cost: the quality of life and death’. This paper used a large Q-based survey design (n=4902) to identify societal support for the three different viewpoints. Viewpoints 1 and 2 were found to be dominant, whilst there was little support for viewpoint 3. The two supported viewpoints are not complementary: they represent the ethical divide between the utilitarian with a fixed budget (view 1), and the perspective based on entitlement to healthcare (view 2: which implies an expanding healthcare budget in practice). I suspect most health economists will fall into camp number one.
In terms of informing decision making, this is very helpful, yet unhelpful: there is no clear answer. It is, however, useful for decision makers in providing evidence to balance the oft-repeated ‘end of life is special’ argument based solely on conjecture, and not evidence (disclosure: I have almost certainly made this argument before). Neither of the dominant viewpoints supports NICE’s End of Life Guidance or the CDF. Viewpoint 1 suggests end of life interventions should be treated the same as others, whilst viewpoint 2 suggests that treatments should be provided if the patient chooses them; it does not make end of life a special case, as this viewpoint believes all treatments should be available if people wish to have them (and we should expand budgets accordingly). Should end of life care be treated differently? Well, it depends on who you ask.

A systematic review and meta-analysis of childhood health utilities. Medical Decision Making [PubMed] Published 7th October 2017

If you’re working on an economic evaluation of an intervention targeting children then you are going to be thankful for this paper. The purpose of the paper was to create a compendium of utility values for childhood conditions. A systematic review was conducted which identified a whopping 26,634 papers after deduplication – sincere sympathy to those who had to do the abstract screening. Following abstract screening, data were extracted for the remaining 272 papers. In total, 3,414 utility values were included when all subgroups were considered – this covered all ICD-10 chapters relevant to child health. When considering only the ‘main study’ samples, 1,191 utility values were recorded, and these are helpfully separated by health condition and methodological characteristics. In short, the authors have successfully built a vast catalogue of child utility values (and distributions) for use in future economic evaluations. They didn’t, however, stop there: they then built on the systematic review results by conducting a meta-analysis to i) estimate health utility decrements for each condition category compared to general population health, and ii) examine how methodological factors impact child utility values. Interestingly for those conducting research in children, they found that parental proxy values were associated with an overestimation of values. There is a lot to unpack in this paper, and a lot of appendices and supplementary materials are included (including the Excel database for all 3,414 subsamples of health utilities). I’m sure this will be a valuable resource in future for health economic researchers working in the childhood context. As far as MSc dissertation projects go, this is a very impressive contribution.

Estimating a cost-effectiveness threshold for the Spanish NHS. Health Economics [PubMed] [RePEc] Published 28th December 2017

In the UK, the cost-per-QALY threshold is long-established, although whether it is the ‘correct’ value is fiercely debated. Likewise in Spain, there is a commonly cited threshold value of €30,000 per QALY, with a dearth of empirical justification. This paper sought to identify a cost-per-QALY threshold for the Spanish National Health Service (SNHS) by estimating the marginal cost per QALY at which the SNHS currently operates on average. This was achieved by exploiting data on 17 regional health services between 2008 and 2012, when the health budget experienced considerable cuts due to the global economic crisis. The paper uses econometric models based on the provocative work by Claxton et al in the UK (see the full paper if you’re interested in the model specification) to achieve this. Variations between Spanish regions over time allowed the authors to estimate the impact of health spending on outcomes (measured as quality-adjusted life expectancy); this was then translated into a cost-per-QALY value for the SNHS. The headline figures derived from the analysis give a threshold between €22,000 and €25,000 per QALY. This is substantially below the commonly cited threshold of €30,000 per QALY. There are, however (as is to be expected), various limitations acknowledged by the authors, which means we should not take this threshold as set in stone. However, unlike the status quo, there is empirical evidence backing this threshold, and it should stimulate further research and discussion about whether such a change should be implemented.
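The quantity being estimated here is, at heart, simple arithmetic: extra spending divided by the extra health it produces at the margin. The paper gets there through a much richer econometric model, but a back-of-envelope sketch shows the idea; the spending and QALY figures below are invented for illustration.

```python
# Back-of-envelope sketch of a supply-side threshold: the marginal cost
# per QALY at which a health system operates. Numbers are hypothetical,
# not taken from the paper's estimates.

def marginal_cost_per_qaly(delta_spend_eur, delta_qalys):
    """Cost per QALY implied by a marginal change in spending and health."""
    return delta_spend_eur / delta_qalys

# Hypothetical: a €10m budget change associated with 400 QALYs gained.
threshold = marginal_cost_per_qaly(10_000_000, 400)
print(f"implied threshold: €{threshold:,.0f} per QALY")  # €25,000 per QALY
```

If a technology's ICER exceeds this figure, funding it displaces more health elsewhere in the system than it generates, which is why an empirically grounded threshold matters more than a conventional one.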