Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.
Health or happiness? A note on trading off health and happiness in rationing decisions. Value in Health Published 23rd August 2016
Health problems can impact both health and happiness. It seems obvious that individuals would attribute value to the happiness provided by a health technology over and above any health improvement. But what about ‘public’ views? Would people be willing to allocate resources to health care for other people on the basis of the happiness it provides? This study reports on a web-based survey in which 1015 people were asked to make resource allocation choices about groups of patients standing to gain varying degrees of health and/or happiness. Three scenarios were presented – one varying only happiness levels, one varying only health and another varying both. Unfortunately the third scenario was not analysed due to “the many inconsistent choices”. About half of respondents were not willing to make any trade-offs between happiness and health. Those who did make choices attached more weight to health on average. But there were some effects associated with the starting levels of health and happiness – people were less willing to discriminate between groups when starting health (or happiness) was lower, and more weight was given to health. There is a selection of potential biases associated with the responses to the questions, which the authors duly discuss.
Determinants of change in the cost-effectiveness threshold. Medical Decision Making [PubMed] Published 23rd August 2016
Set aside for the moment any theoretical concerns you might have with the ‘threshold’ approach to decision making in health care resource allocation. If we are going to use a willingness to pay threshold, how might it alter over time and in response to particular stimuli? This paper tackles that question using comparative statics and the idea of the ‘cost-effectiveness bookshelf’. If you haven’t come across it before, simply imagine a bookshelf with a book for each technology. The height of the books is determined by the ICER and their width by the budget impact; they’re lined up from shortest to tallest. This paper focuses on the introduction of technologies with ‘marginal’ budget impact, requiring the displacement of one existing technology. But a key idea to remember is that for technologies with large ‘non-marginal’ budget impacts – that is, requiring displacement of more than one existing technology – the threshold will be a weighted average of those technologies that are displaced. The authors describe the impact of changes in 4 different determinants of the threshold: i) the health budget, ii) demand for existing technologies, iii) technical efficiency of existing technologies and iv) funding for new technologies. Some changes (e.g. an increase in the health budget) have unambiguous impacts on the threshold (e.g. to increase it). Others have ambiguous effects – for example a decrease in the cost of a marginal technology might decrease the threshold through reduction of the ICER, or increase the threshold by reducing the budget impact so much that an additional technology could be funded. There’s a nice discussion towards the end about relaxing the assumptions. What if the budget isn’t fixed? What if we aren’t sure we’ve got the books in the right order? The bookshelf analogy is a starting point for these kinds of discussions. 
The article is an easy read and a good reference point for the threshold debate, even if its practical usefulness may be limited when lining up the NHS’s books seems like a pipe dream.
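To make the bookshelf analogy concrete, here is a minimal Python sketch of the displacement logic described above. The technologies, ICERs and budget impacts are invented for illustration – they don’t come from the paper – but the mechanics match the idea: a marginal new technology displaces only the least cost-effective book, while a non-marginal one displaces several, making the threshold a budget-weighted average of the displaced ICERs.

```python
# A toy 'cost-effectiveness bookshelf' (illustrative numbers, not from the paper).
# Each funded technology is a 'book': height = ICER (pounds/QALY), width = budget impact (pounds).

def displacement_threshold(shelf, new_budget_impact):
    """Free up `new_budget_impact` by displacing the least cost-effective
    books (tallest first), allowing partial displacement of the last book.
    Returns the budget-weighted average ICER of what was displaced."""
    displaced = []
    freed = 0.0
    for icer, budget in sorted(shelf, key=lambda book: -book[0]):  # tallest first
        if freed >= new_budget_impact:
            break
        take = min(budget, new_budget_impact - freed)  # partial displacement allowed
        displaced.append((icer, take))
        freed += take
    total = sum(width for _, width in displaced)
    return sum(icer * width for icer, width in displaced) / total

shelf = [(5_000, 2e6), (13_000, 4e6), (20_000, 3e6), (30_000, 1e6)]

# Marginal budget impact: only the tallest book (ICER 30,000) is displaced,
# so the threshold is simply that book's ICER.
print(displacement_threshold(shelf, 1e6))   # 30000.0

# Non-marginal: 3m displaces all of the 30,000 book and part of the 20,000 one,
# so the threshold is the weighted average (30000*1e6 + 20000*2e6) / 3e6.
print(displacement_threshold(shelf, 3e6))
```

This also makes it easy to see where the ambiguity comes from: shrink one book’s width enough and a whole extra book fits on the shelf, changing which ICER sits at the margin.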
Update to the report of nationally representative values for the noninstitutionalized US adult population for five health-related quality-of-life scores. Value in Health Published 21st August 2016
This paper does what it says on the tin, but it is a useful reference and worth knowing about. The last lot were published in 2006, so this paper is an update to that one using data from 2011. The measures reported are: i) self-rated health, ii) SF-12 mental subscale, iii) SF-12 physical subscale, iv) SF-6D and v) Quality of Well-Being Scale. Data come from the Medical Expenditure Panel Survey and the National Health Interview Survey, with 23,906 subjects in the former and 32,242 in the latter. Results are presented by age group (in decades) and by sex. So, for example, we can see that 20-29 year old women reported an average SF-6D index score of 0.809, while for 80-89 year olds the mean was 0.698. For almost all age groups and all measures, men reported higher scores than women. Interestingly, mean SF-6D scores were on average lower than in the 2001 data reported in the previous study.
Use of cost-effectiveness analysis to compare the efficiency of study identification methods in systematic reviews. Systematic Reviews [PubMed] Published 17th August 2016
Health economists have (or at least should have) a bit of a comparative advantage when it comes to economic evaluation. I’ve often thought that we should be leading the way in methods of economic evaluation in economics beyond the subject matter of health, and maybe into other fields. So I was pleased to see this paper using cost-effectiveness analysis for a new purpose. Often, systematic reviews can be mammoth tasks and potentially end up being of little value. Certainly at the margin there are things often done as part of a review (let’s say, including EconLit in a principally clinical review) that in the end prove to be pretty pointless. This study evaluates the cost-effectiveness of 4 alternative approaches to screening titles and abstracts as part of a systematic review. The 4 alternatives are i) ‘double screening’, which is the classic approach used by Cochrane et al whereby two researchers independently review abstracts and then meet to consider disagreements, ii) ‘safety first’, which is a variation on double screening whereby citations are only excludable if both reviewers identify them as such, iii) ‘single screening’ with just one reviewer and iv) ‘single screening with text mining’ in which a machine learning process ranks studies by the likelihood of their inclusion. The outcome measure was the number of citations saved from inappropriate exclusion. It’s a big review, starting with 12,477 citations. There wasn’t much in it outcomes-wise, with at most 169 eligible studies and at least 161. But the incremental cost of double screening, compared with single screening plus text mining, was £37,279. This meant an ICER of £4660 per extra study, which seems like a lot. There are some limitations to the study, and the results clearly aren’t generalisable to all reviews. But it’s easy to see how studies-within-studies like this can help guide future research.
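The headline figure drops out of the standard incremental arithmetic, using only the numbers quoted above (the one assumption here is that the 8-study difference separates double screening from single screening with text mining, which is what the quoted ICER implies):

```python
# Reproducing the headline ICER from the figures quoted in the round-up above.
incremental_cost = 37_279    # double screening vs single screening + text mining (pounds)
studies_double = 169         # eligible studies correctly included (upper figure)
studies_single_tm = 161      # eligible studies correctly included (lower figure)

icer = incremental_cost / (studies_double - studies_single_tm)
print(round(icer))  # 4660 pounds per extra eligible study retained
```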
Photo credit: Antony Theobald (CC BY-NC-ND 2.0)