Chris Sampson’s journal round-up for 20th November 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Effects of health and social care spending constraints on mortality in England: a time trend analysis. BMJ Open [PubMed] Published 15th November 2017

I’d hazard a guess that I’m not the only one here who gets angry about the politics of austerity. Having seen this study’s title, it’s clear that the research could provide fuel for that anger. It doesn’t disappoint. Recent years have seen very low year-on-year increases in public expenditure on health in England. Even worse, between 2010 and 2014, public expenditure on social care actually fell in real terms. This is despite growing need for health and social care. In this study, the authors look at health and social care spending and try to estimate the impact that reduced expenditure has had on mortality in England. The analysis uses spending and mortality data from 2001 onwards and also incorporates mortality projections for 2015-2020. Time trend analyses are conducted using Poisson regression models. From 2001-2010, deaths decreased by 0.77% per year (on average). The mortality rate was falling. Now it seems to be increasing; from 2011-2014, the average number of deaths per year increased by 0.87%. This corresponds to 18,324 additional deaths in 2014, for example. But everybody dies. Extra deaths are really sooner deaths. So the question, really, is how much sooner? The authors look at potential years of life lost and find this figure to be 75,496 life-years greater than expected in 2014, given pre-2010 trends. This shouldn’t come as much of a surprise. Spending less generally achieves less. What makes this study really interesting is that it can tell us who is losing these potential years of life as a result of spending cuts. The authors find that it’s the over-60s. Care home deaths were the largest contributor to increased mortality. A £10 cut in social care spending per capita resulted in 5 additional care home deaths per 100,000 people. When the authors looked at deaths by local area, no association was found with the level of deprivation. 
If health and social care expenditure are combined in a single model, we see that it’s social care spending that is driving the number of excess deaths. The impact of health spending on hospital deaths was less robust. The number of nurses acted as a mediator for the relationship between spending and mortality. The authors estimate that current spending projections will result in 150,000 additional deaths compared with pre-2010 trends. There are plenty of limitations to this study. It’s pretty much impossible (though the authors do try) to separate the effects of austerity from the effect of a weak economy. Still, I’m satisfied with the conclusion that austerity kills older people (no jokes about turkeys and Christmas, please). For me, the findings also highlight the need for more research in the context of social care, and how we (as researchers) might effectively direct policy to prevent ‘excess’ deaths.
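The time-trend method at the heart of the study can be sketched in miniature: a log-linear fit to annual death counts recovers the average annual percent change, which is the quantity the paper's Poisson regression models estimate. The counts below are entirely hypothetical, chosen only to mimic a fall of roughly 0.8% a year followed by a rise of roughly 0.9%; nothing here reproduces the paper's data or model specification.

```python
import numpy as np

def annual_trend(years, deaths):
    """Average annual % change in deaths from a log-linear fit
    (a simple stand-in for a Poisson time-trend regression)."""
    slope, _ = np.polyfit(years, np.log(deaths), 1)
    return (np.exp(slope) - 1) * 100

# Hypothetical counts: ~0.8%/yr decline in 2001-2010, ~0.9%/yr rise in 2011-2014
pre = annual_trend(np.arange(2001, 2011), 470_000 * 0.992 ** np.arange(10))
post = annual_trend(np.arange(2011, 2015), 460_000 * 1.009 ** np.arange(4))
print(f"2001-2010 trend: {pre:.2f}%/yr; 2011-2014 trend: {post:.2f}%/yr")
```

A full analysis would model counts directly with a Poisson likelihood and a population offset, but the fitted trend is interpreted the same way.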

Should cost-effectiveness analyses for NICE always consider future unrelated medical costs? BMJ [PubMed] Published 10th November 2017

The question of whether or not ‘unrelated’ future medical costs should be included in economic evaluation is becoming a hot topic. So much so that the BMJ has published this Head To Head, which introduces some of the arguments for and against. NICE currently recommends excluding unrelated future medical costs. An example given in this article is the case of the expected costs of dementia care having saved someone’s life by heart transplantation. The argument in favour of including unrelated costs is quite obvious – these costs can’t be ignored if we seek to maximise social welfare. Their inclusion is described as “not difficult” by the authors defending this move. By ignoring unrelated future costs (but accounting for the benefit of longer life), the relative cost-effectiveness of life-extending treatments, compared with life-improving treatments, is artificially inflated. The argument against including unrelated medical costs is presented as one of fairness. The author suggests that their inclusion could preclude access to health care for certain groups of people that are likely to have high needs in the future. So perhaps NICE should ignore unrelated medical costs in certain circumstances. I sympathise with this view, but I feel it is less a fairness issue and more a demonstration of the current limits of health-related quality of life measurement, which don’t reflect adaptation and coping. However, I tend to disagree with both of the arguments presented here. I really don’t think NICE should include or exclude unrelated future medical costs according to the context because that could create some very perverse incentives for certain stakeholders. But then, I do not agree that it is “not difficult” to include all unrelated future costs. ‘All’ is an important qualifier here because the capacity for analysts to pick and choose unrelated future costs creates the potential to pick and choose results. 
When it comes to unrelated future medical costs, NICE’s position needs to be all-or-nothing, and right now the ‘all’ bit is a high bar to clear. NICE should include unrelated future medical costs – it’s difficult to formulate a sound argument against that – but they should only do so once more groundwork has been done. In particular, we need to develop more valid methods for valuing quality of life against life-years in health technology assessment across different patient groups. And we need more reliable methods for estimating future medical costs in all settings.

Oncology modeling for fun and profit! Key steps for busy analysts in health technology assessment. PharmacoEconomics [PubMed] Published 6th November 2017

Quite a title(!). The subject of this essay is ‘partitioned survival modelling’. Honestly, I never really knew what that was until I read this article. It seems the reason for my ignorance could be that I haven’t worked on the evaluation of cancer treatments, for which it’s a popular methodology. Apparently, a recent study found that almost 75% of NICE cancer drug appraisals were informed by this sort of analysis. Partitioned survival modelling is a simple means by which to extrapolate outcomes in a context where people can survive (or not) with or without progression. Often this can be done on the basis of survival analyses and standard trial endpoints. This article seeks to provide some guidance on the development and use of partitioned survival models. Or, rather, it provides a toolkit for calling out those who might seek to use the method as a means of providing favourable results for a new therapy when data and analytical resources are lacking. The ‘key steps’ can be summarised as 1) avoiding/ignoring/misrepresenting current standards of economic evaluation, 2) using handpicked parametric approaches for extrapolation in order to maximise survival benefits, 3) creatively estimating relative treatment effects using indirect comparisons without adjustment, 4) making optimistic assumptions about post-progression outcomes, and 5) denying the possibility of any structural uncertainty. The authors illustrate just how much an analyst can influence the results of an evaluation (if they want to “keep ICERs in the sweet spot!”). Generally, these tactics move the model far from being representative of reality. However, the prevailing secrecy around most models means that it isn’t always easy to detect these shortcomings. Sometimes it is though, and the authors make explicit reference to technology appraisals that they suggest demonstrate these crimes. Brilliant!
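For readers who, like me, hadn’t met the method: a partitioned survival model doesn’t track transitions between states at all; it simply reads state occupancy off two survival curves. A minimal sketch with hypothetical exponential curves, utilities and discount rate (none of these numbers come from the article):

```python
import numpy as np

t = np.linspace(0, 10, 1001)   # years
pfs = np.exp(-0.5 * t)         # S_PFS(t): progression-free survival (hypothetical)
os_ = np.exp(-0.2 * t)         # S_OS(t): overall survival; must satisfy OS >= PFS

# The 'partition': occupancy comes straight from the two curves
prog_free  = pfs               # alive, not progressed
progressed = os_ - pfs         # alive, progressed
dead       = 1 - os_

# Discounted QALYs with hypothetical utilities (0.8 pre-, 0.6 post-progression)
disc = 1.035 ** -t
integrand = (0.8 * prog_free + 0.6 * progressed) * disc
dt = t[1] - t[0]
qalys = float((integrand[:-1] + integrand[1:]).sum() * dt / 2)  # trapezoid rule
```

The 'key steps' the authors warn about mostly operate on the two curves: swap in a generous parametric tail for `os_` and the progressed-state occupancy (and the QALY total) inflates accordingly, with no structural constraint to stop it.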


Chris Sampson’s journal round-up for 17th October 2016


Estimating health-state utility for economic models in clinical studies: an ISPOR Good Research Practices Task Force report. Value in Health [PubMed] Published 3rd October 2016

When it comes to model-based cost-per-QALY analyses, researchers normally just use utility values from a single clinical study. So we’d best be sure that these studies are collecting the right data. This ISPOR Task Force report presents guidelines for the collection and reporting of utility values in the context of clinical studies, with a view to making them as useful as possible to the modelling process. The recommendations are quite general and would apply to most aspects of clinical studies: do some early planning; make sure the values are relevant to the population being modelled; bear HTA agencies’ expectations in mind. It bothers me, though, that the basis for the recommendations is not very concrete (the word “may” appears more than 100 times). The audience for this report isn’t so much people building models, or people conducting clinical trials. Rather, it’s people who are conducting some modelling within a clinical study (or vice versa). I’m in that position, so why don’t the guidelines strike me as useful? They expect a lot of time to be dedicated to the development of the model structure and aims before the clinical study gets underway. So modelling work would be conducted alongside the full duration of a clinical study. In my experience, that isn’t how things usually work. And even if that does happen, practical limitations to data collection will thwart the satisfaction of the vast majority of the recommendations. In short, I think the Task Force’s position puts the cart on top of the horse. Models require data and, yes, models can be used to inform data collection. But seldom can proposed modelling work be the principal basis for determining data collection in a clinical study. I think that may be a good thing and that a more incremental approach (review – model – collect data – repeat) is more fruitful. Having said all that, and having read the paper, I do think it’s useful.
It isn’t useful as a set of recommendations that we might expect from an ISPOR Task Force, but rather as a list of things to think about if you’re somebody involved in the collection of health state utility data. If you’re one of those people then it’s well worth a read.

Reliability, validity, and feasibility of direct elicitation of children’s preferences for health states: a systematic review. Medical Decision Making [PubMed] Published 30th September 2016

Set aside for the moment the question of whose preferences we ought to use in valuing health improvements. There are undoubtedly situations in which it would be interesting and useful to know patients’ preferences. What if those patients are children? This study presents the findings from a systematic review of attempts at direct elicitation of preferences from children, focusing on psychometric properties and with the hope of identifying the best approach. To be included in the review, studies needed to report validity, reliability and/or feasibility. 26 studies were included, with most of them using time trade-off (n=14) or standard gamble (n=11). 7 studies reported validity and the findings suggested good construct validity with condition-specific but not generic measures. 4 studies reported reliability and TTO came off better than visual analogue scales. 9 studies reported on feasibility in terms of completion rates and generally found it to be high. The authors also extracted information about the use of preference elicitation in different age groups and found that studies making such comparisons suggested that it may not be appropriate for younger children. Generally speaking, it seems that standard gamble and time trade-off are acceptably valid, reliable and feasible. It’s important to note that there was a lot of potential for bias in the included studies, and that a number of them seemed somewhat lacking in their reporting. And there’s a definite risk of publication and reporting bias lurking here. I think a key issue that the study can’t really enlighten us on is the question of age. There might not be all that much difference between a 17-year-old and a 27-year-old, but there’s going to be a big difference between a 17-year-old and a 7-year-old. Future research needs to investigate the notion of an age threshold for valid preference elicitation.
I’d like to see a more thorough quantitative analysis of findings from direct preference elicitation studies in children. But what we really need is a big new study in which children (both patients and general public) are asked to complete various direct preference elicitation tasks at multiple time points. Because right now, there just isn’t enough evidence.

Economic evaluation of integrated new technologies for health and social care: suggestions for policy makers, users and evaluators. Social Science & Medicine [PubMed] Published 24th September 2016

There are many debates that take place at the nexus of health care and social care, whether they be about funding, costs or outcome measurement. This study focusses on a specific example of health and social care integration – assisted living technologies (ALTs) – and tries to come up with a new and more appropriate method of economic evaluation. In this context, outcomes might matter ‘beyond health’. I should like this paper. It tries to propose an approach that might satisfy the suggestions I made in a recent essay. Why, then, am I not convinced? The authors outline their proposal as consisting of 3 steps: i) identify attributes relevant to the intervention, ii) value these in monetary terms and iii) value the health benefit. In essence the plan is to estimate QALYs for the health bit and then a monetary valuation for the other bits, with the ‘other bits’ specified in advance of the evaluation. That’s very easily said and not at all easily done. And the paper makes no argument that this is actually what we ought to be doing. Capabilities work their way in as attributes, but little consideration is given to the normative differences between this and other approaches (what I have termed ‘consequents’). The focus on ALTs is odd. The authors fill a lot of space arguing (unconvincingly) that it is a special case, before stating that their approach should be generalisable. The main problem can be summarised by a sentence that appears in the introduction: “the approach is highly flexible because the use of a consistent numeraire (either monetary or health) means that programmes can be compared even if the underlying attributes differ”. Maybe they can, but they shouldn’t. Or at least that’s what a lot of people think, which is precisely why we use QALYs. An ‘anything goes’ approach means that any intervention could easily be demonstrated to be more cost-effective than another if we just pick the right attributes.
I’m glad to see researchers trying to tackle these problems, and this could be the start of something important, but I was disappointed that this paper couldn’t offer anything concrete.


Social impact bonds: is an ounce of (bond) prevention worth more than a pound of (budgetary) cure?

It is one of the curious ironies of history that ideas which tend to destroy also help to rebuild. Innovative financial instruments played a key role in the 2007-2008 financial crisis that not only dented economic growth worldwide, but also hit government revenue streams making fewer resources available for health care spending. Roughly five years after the crisis, social impact bonds (SIBs) – a new financial instrument – hold promise to fund a raft of innovative social service delivery models via private capital. Though SIBs are still in the early development phase, they could play a niche role in relieving burdened state health care budgets and financing innovative preventive health schemes in both the US and UK.

SIBs share some common characteristics with (vanilla) bonds; however, there are also notable differences. When an investor purchases a regular bond, he/she pays a principal amount (e.g. a face value of $10,000) with the expectation of receiving periodic interest payments until the bond matures, at which point the principal amount is returned to the investor. SIBs still require an initial principal investment from investors, usually with more than a modicum of altruism for the cause involved. Not-for-profits, and sometimes commercial entities, are the main current investors in SIBs.

The main differences lie in how the money is used and how payments to investors are made. An intermediary, which charges fees, serves as the organizer of the SIB, selecting the investors and service providers and overseeing the process. Once investors purchase a SIB, a government agency contracts out with social service delivery organisation(s) for a selected cohort of individuals. Investors are not offered regular interest payments; rather, they are offered ‘performance-based’ payments tied to agreed-upon benchmarks in service delivery.

For example, Social Finance UK issued the first social impact bond in September 2010 in the United Kingdom. In the case of the 2010 Peterborough SIB offering, incentive payments were tied to ex-prisoner recidivism levels. That is, if the selected cohort of released ex-prisoners ‘covered’ under the bond’s services had a lower rate of recidivism than an agreed-upon counterfactual cohort (usually the national average), investors would be rewarded with a payment from the government. If the cohort demonstrated a higher rate of recidivism, investors would forfeit both the initial principal investment and performance payments. In this scheme, the financing mechanism acts more like equity, with investors receiving a dividend for superior performance (without the capital gain) rather than guaranteed interest payments (see diagram below from Social Finance for the SIB flow of funds between investor, government, and social service deliverer).
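The payoff asymmetry described above can be made concrete with a toy calculation. All figures are invented for illustration and are not the Peterborough terms: a conventional bond returns its coupons and principal regardless of outcomes, whereas the SIB investor is paid only if the cohort beats the benchmark.

```python
def vanilla_bond_payoff(principal, coupon_rate, years):
    """Total cash returned by a plain bond held to maturity (annual coupons)."""
    return principal + principal * coupon_rate * years

def sib_payoff(principal, cohort_rate, benchmark_rate, outcome_premium):
    """Simplified SIB: principal plus a performance payment if the cohort's
    recidivism rate beats the benchmark; total loss otherwise."""
    if cohort_rate < benchmark_rate:
        return principal * (1 + outcome_premium)
    return 0.0

bond = vanilla_bond_payoff(10_000, 0.03, 5)    # coupons paid whatever happens
win  = sib_payoff(10_000, 0.35, 0.40, 0.075)   # benchmark beaten: paid out
lose = sib_payoff(10_000, 0.45, 0.40, 0.075)   # benchmark missed: wiped out
```

Real SIB contracts are more graduated (thresholds, capped payments, staged measurement), but the all-or-nothing version captures why the instrument behaves more like equity than debt.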

(c) Social Finance 2011

Interest in SIBs in health care service delivery is gaining momentum. After the successful launch of the first SIB in 2010, coupled with a greater emphasis on ‘responsible finance’, the idea quickly expanded to other fields including education, adoption and work retraining schemes. The business case for health care SIBs is arguably at least as strong as, if not stronger than, in other areas. There are two reasons for this.

First, governments face difficult funding choices in the age of austerity. Regardless of the expenditure area, general budgetary funds are usually allocated to existing programs with minimal risk; innovative programs with high start-up costs and unknown outcomes are not seen to deliver value-for-money.

Second, a majority of health care budgets in advanced countries are dedicated to treating patients with chronic conditions, primarily in hospital or long-term care settings. Spending on preventive services has traditionally been much lower, although this is gradually changing. This is particularly true for innovative schemes to prevent chronic disease onset. Policy makers need more tools to address the crowding out of preventive spending in health care budgets as the average population age and number of comorbidities per patient grow. SIBs might be one tool to diversify the risk associated with these schemes, while also allowing governments to pay only for programs that actually improve outcomes.

Although interest exists, adoption of SIBs for health care services has been slow. Though the UK served as the initial testing ground for SIBs, their use in health care has been minimal. Some of the inertia is due to the NHS: the large bureaucracy has established payment and program trial systems that are not compatible with SIBs. This attitude may be changing, however, particularly due to the fiscal pressures of austerity. In reaction to a May 2013 NHS/Monitor discussion paper on changing the NHS’s payment system, several organisations submitted responses that proposed SIBs as a necessary strategy. The Health Foundation’s submission cited a trial in the Milton Keynes NHS Trust associated with psychological assessment of diabetes patients with ‘SIB-like’ properties.

In the United States, state and local health care stakeholders have been at the forefront of developing SIBs. The city of Fresno in California is the country’s first site for a health care SIB: a two-year demonstration bond has been approved to assess the use of evidence-based practices in the treatment of 200 low-income paediatric asthma patients. The $660,000 SIB, funded by Collective Health and the California Endowment, will evaluate whether intensive patient education and home visits will be effective in preventing emergency department visits and inpatient hospitalisations. If the selected cohort achieves a lower utilisation rate than another selected cohort in California’s Medi-Cal population, investors will receive their payback and the initial trial will be expanded to cover 2000 children in the state.

SIBs, despite their innovative nature, are also a target of criticism. First, critics point out that the SIB delivery structure is economically inefficient. The SIB’s intermediary charges fees that would not exist in a direct relationship between the government and contractor; these fees mean that a project can be expensive to scale up and can potentially waste government funds. Second, the singular focus on pre-determined quantitative measures may be wrong-headed. A typical evaluation of social service schemes is more flexible, including both qualitative and quantitative assessments of success. The evaluation also takes note of when service delivery or outcomes did not follow prescribed guidelines, or allows for changes in how the demonstration proceeds based on feedback. This iterative process may not be possible in SIBs.

Overall, SIBs are still in their nascency and face many challenges. The idea, however, is not simply part of a larger social investing fad. If SIBs are able to allocate investments in areas where governments are unable or unwilling to invest, they may serve their purpose, even if only by showing which delivery schemes fail. With tighter health care budgets and the pressing need for innovative solutions in health, SIBs should be seen as a useful new financing tool.