Chris Sampson’s journal round-up for 4th December 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Funding breakthrough therapies: a systematic review and recommendation. Health Policy Published 2nd December 2017

One of the (numerous) financial pressures on health care funders in the West is the introduction of innovative (and generally very expensive) new therapies. Some of these can be considered curative, which isn’t necessarily the best way for manufacturers to create a steady income. New funding arrangements have been proposed to facilitate patient access while maintaining financial sustainability. This article focuses on a specific group of innovative therapies known as ‘Advanced Therapy Medicinal Products’ (ATMPs), which includes gene therapies. The authors conducted a systematic review of papers proposing funding models and considered their appropriateness for ATMPs. There were 48 papers included in the review that proposed payment mechanisms for high-cost therapies. Three top-level groups were identified: i) financial agreements, ii) performance-based agreements, and iii) healthcoin (a tradable currency representing the value of outcomes). The different mechanisms are compared in terms of their feasibility, acceptability, burden, ‘financial attractiveness’ and their appeal to payers and manufacturers. Annuity payments are identified as relatively attractive compared to other options, but each mechanism is summarily shown to be imperfect in the ATMP context. So, instead, the authors propose an ATMP-specific fund. For UK readers, this will likely smell a bit too much like the disastrous Cancer Drugs Fund. It isn’t clear why such a programme would be superior to annuity payments or more inventive mechanisms, or even whether it would be theoretically sound. Thus, the proposal is not convincing.
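
For the uninitiated, the annuity idea is easy to illustrate. Here's a minimal sketch (the price, horizon, and discount rate are entirely hypothetical, not taken from the paper) that converts a one-off therapy price into equal annual instalments with the same present value:

```python
# Minimal sketch of an annuity payment for a high-cost one-off therapy.
# All figures are hypothetical; the paper does not specify any of them.

def annual_instalment(upfront_cost: float, years: int, discount_rate: float) -> float:
    """Equal annual payment with the same present value as paying upfront."""
    if discount_rate == 0:
        return upfront_cost / years
    annuity_factor = (1 - (1 + discount_rate) ** -years) / discount_rate
    return upfront_cost / annuity_factor

price = 1_000_000   # hypothetical one-off price of an ATMP (GBP)
payment = annual_instalment(price, years=10, discount_rate=0.035)
print(f"Annual instalment over 10 years: £{payment:,.0f}")
```

A performance-based variant would simply make each instalment contingent on the patient still being in response, which is where the feasibility and burden questions start to bite.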

Supply-side effects from public insurance expansions: evidence from physician labor markets. Health Economics [PubMed] Published 1st December 2017

Crazy though American health care may be, its inconsistency in coverage can make for good research fodder. The Children’s Health Insurance Program (CHIP) was set up in 1997 and then, when the initial money ran out 10 years later, the program was (eventually) expanded. In this study, the authors use the changes in CHIP to examine the impact of expanded public coverage on provider behaviour, namely: subspecialty training (which could become more attractive with a well-insured customer base), practice setting, and prevailing wage offers. The data for the study relate to the physician labour market in New York state for 2002-2013, as collected in the Graduate Medical Education survey. A simple difference-in-differences analysis is conducted with reference to the 2009 CHIP expansion, controlling for physician demographics. Paediatricians are the treatment group and adult physician generalists (mostly internal medicine) are the control group. 2009 seems to be associated with a step-change in the proportion of paediatricians choosing to subspecialise – an increased probability of about 8 percentage points. There is also an upward shift in the proportion of paediatricians entering private practice, with some (weak) evidence of an increased preference for rural areas. These changes don’t seem to be driven by relative wage increases, for which there was no major change in trend. So it seems that the expanded coverage did have important supply-side effects. But the waters are muddy here. In particular, we have the Great Recession and Obamacare as possible alternative explanations, though it’s difficult to come up with good reasons why these might better explain the observed changes.
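
The difference-in-differences logic is simple enough to sketch with made-up numbers (chosen here to reproduce an effect of roughly 8 percentage points; these are not the study's data):

```python
# Difference-in-differences sketch with hypothetical subspecialisation rates.
# Rows: treatment group (paediatricians) and control group (adult generalists);
# columns: mean probability of subspecialising before/after the 2009 CHIP expansion.
rates = {
    "paediatricians":    {"pre": 0.40, "post": 0.48},  # hypothetical
    "adult_generalists": {"pre": 0.35, "post": 0.35},  # hypothetical
}

treated_change = rates["paediatricians"]["post"] - rates["paediatricians"]["pre"]
control_change = rates["adult_generalists"]["post"] - rates["adult_generalists"]["pre"]
did_estimate = treated_change - control_change

print(f"DiD estimate: {did_estimate:.2f}")  # ~0.08, i.e. about 8 percentage points
```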

Reflections on the NICE decision to reject patient production losses. International Journal of Technology Assessment in Health Care [PubMed] Published 20th November 2017

When people conduct economic evaluations ‘from a societal perspective’, this often just means a health service perspective with productivity losses added. NICE explicitly excludes production losses from health technology appraisals. This paper reviews the issues at play, focussing on the normative question of why they should (or should not) be included. Findings from a literature review are summarised with reference to the ethical, theoretical and policy questions. Unethical discrimination potentially occurs if people are denied health care on the basis of non-health-related characteristics, such as the ability to work. All else equal, should health care for men be prioritised over health care for women because men have higher wages? Are the unemployed less of a priority because they’re unemployed? The only basis on which to defend the efficiency of an approach that includes productivity losses seems to be a neoclassical welfarist one, which is hardly tenable in the context of health care. If we adopt the extra-welfarist understanding of opportunity cost as foregone health, then there is really no place for production losses. The authors also argue that including production losses may be at odds with policy objectives, at least in the context of the NHS in the UK. Health systems based on privately-funded care or social insurance may have different priorities. The article concludes that taking account of production losses is at odds with the goal of health maximisation and therefore the purpose of the NHS in the UK. Personally, I think priority setting in health care should take a narrow health perspective. So I agree with the authors that production losses shouldn’t be included. I’m not sure this article will convince those who disagree, but it’s good to have a reference to vindicate NICE’s position.
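
A toy calculation (with entirely hypothetical numbers) shows the distributional worry in action: the same treatment, with the same health gain, looks more cost-effective in a high-earning group simply because averted production losses offset more of its cost.

```python
# Toy example: how adding production losses to a health service perspective
# changes the ICER for two otherwise identical patient groups.
# All numbers are hypothetical.

incremental_cost = 20_000   # health service cost of treatment vs comparator (GBP)
incremental_qalys = 1.0     # same health gain in both groups

# Productivity gains from returning to work (zero for the non-employed group).
productivity_gain = {"employed, high wage": 12_000, "not in paid work": 0}

for group, gain in productivity_gain.items():
    societal_icer = (incremental_cost - gain) / incremental_qalys
    print(f"{group}: £{societal_icer:,.0f} per QALY "
          f"(vs £{incremental_cost / incremental_qalys:,.0f} excluding production losses)")
```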

Chris Sampson’s journal round-up for 20th November 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Effects of health and social care spending constraints on mortality in England: a time trend analysis. BMJ Open [PubMed] Published 15th November 2017

I’d hazard a guess that I’m not the only one here who gets angry about the politics of austerity. Having seen this study’s title, it’s clear that the research could provide fuel for that anger. It doesn’t disappoint. Recent years have seen very low year-on-year increases in public expenditure on health in England. Even worse, between 2010 and 2014, public expenditure on social care actually fell in real terms. This is despite growing need for health and social care. In this study, the authors look at health and social care spending and try to estimate the impact that reduced expenditure has had on mortality in England. The analysis uses spending and mortality data from 2001 onwards and also incorporates mortality projections for 2015-2020. Time trend analyses are conducted using Poisson regression models. From 2001-2010, deaths decreased by 0.77% per year (on average). The mortality rate was falling. Now it seems to be increasing; from 2011-2014, the average number of deaths per year increased by 0.87%. This corresponds to 18,324 additional deaths in 2014, for example. But everybody dies. Extra deaths are really sooner deaths. So the question, really, is how much sooner? The authors look at potential years of life lost and find this figure to be 75,496 life-years greater than expected in 2014, given pre-2010 trends. This shouldn’t come as much of a surprise. Spending less generally achieves less. What makes this study really interesting is that it can tell us who is losing these potential years of life as a result of spending cuts. The authors find that it’s the over-60s. Care home deaths were the largest contributor to increased mortality. A £10 cut in social care spending per capita resulted in 5 additional care home deaths per 100,000 people. When the authors looked at deaths by local area, no association was found with the level of deprivation. If health and social care expenditure are combined in a single model, we see that it’s social care spending that is driving the number of excess deaths. The impact of health spending on hospital deaths was less robust. The number of nurses acted as a mediator for the relationship between spending and mortality. The authors estimate that current spending projections will result in 150,000 additional deaths compared with pre-2010 trends. There are plenty of limitations to this study. It’s pretty much impossible (though the authors do try) to separate the effects of austerity from the effect of a weak economy. Still, I’m satisfied with the conclusion that austerity kills older people (no jokes about turkeys and Christmas, please). For me, the findings also highlight the need for more research in the context of social care, and how we (as researchers) might effectively direct policy to prevent ‘excess’ deaths.
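
For the curious, here's a minimal sketch of the kind of Poisson time-trend model used, fitted to simulated data (the trend parameters are chosen to mimic the reported pre- and post-2010 changes; nothing here is the study's data):

```python
# Sketch of a Poisson time-trend model of deaths with a post-2010 change in slope.
# The data are simulated; the study uses ONS mortality and spending data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
years = np.arange(2001, 2015)
population = np.full(len(years), 53_000_000)   # hypothetical, held constant
post = (years >= 2011).astype(float)           # indicator for the austerity period

# Simulate a mortality rate falling ~0.77%/year pre-2011 and rising ~0.87%/year after.
log_rate = (np.log(0.009) - 0.0077 * (years - 2001)
            + (0.0077 + 0.0087) * post * (years - 2010))
deaths = rng.poisson(np.exp(log_rate) * population)

X = sm.add_constant(np.column_stack([years - 2001, post * (years - 2010)]))
model = sm.GLM(deaths, X, family=sm.families.Poisson(), offset=np.log(population))
fit = model.fit()
print(fit.params)  # intercept, pre-2011 trend, and post-2010 change in trend (log scale)
```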

Should cost effectiveness analyses for NICE always consider future unrelated medical costs? BMJ [PubMed] Published 10th November 2017

The question of whether or not ‘unrelated’ future medical costs should be included in economic evaluation is becoming a hot topic. So much so that the BMJ has published this Head To Head, which introduces some of the arguments for and against. NICE currently recommends excluding unrelated future medical costs. An example given in this article is the case of the expected costs of dementia care having saved someone’s life by heart transplantation. The argument in favour of including unrelated costs is quite obvious – these costs can’t be ignored if we seek to maximise social welfare. Their inclusion is described as “not difficult” by the authors defending this move. By ignoring unrelated future costs (but accounting for the benefit of longer life), the relative cost-effectiveness of life-extending treatments, compared with life-improving treatments, is artificially inflated. The argument against including unrelated medical costs is presented as one of fairness. The author suggests that their inclusion could preclude access to health care for certain groups of people that are likely to have high needs in the future. So perhaps NICE should ignore unrelated medical costs in certain circumstances. I sympathise with this view, but I feel it is less a fairness issue and more a demonstration of the current limits of health-related quality of life measurement, which don’t reflect adaptation and coping. However, I tend to disagree with both of the arguments presented here. I really don’t think NICE should include or exclude unrelated future medical costs according to the context because that could create some very perverse incentives for certain stakeholders. But then, I do not agree that it is “not difficult” to include all unrelated future costs. ‘All’ is an important qualifier here because the capacity for analysts to pick and choose unrelated future costs creates the potential to pick and choose results. When it comes to unrelated future medical costs, NICE’s position needs to be all-or-nothing, and right now the ‘all’ bit is a high bar to clear. NICE should include unrelated future medical costs – it’s difficult to formulate a sound argument against that – but they should only do so once more groundwork has been done. In particular, we need to develop more valid methods for valuing quality of life against life-years in health technology assessment across different patient groups. And we need more reliable methods for estimating future medical costs in all settings.
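
The distortion is easy to demonstrate with stylised numbers (mine, not the authors'): if a life-extending treatment triggers unrelated care costs in the added years while a life-improving treatment does not, excluding those costs flatters the former.

```python
# Stylised comparison (hypothetical numbers): a life-extending treatment vs a
# quality-improving treatment, with and without unrelated future medical costs.

treatment_cost = 30_000
incremental_qalys = 2.0                 # same QALY gain for both treatments

added_life_years = 4                    # only the life-extending treatment adds years
unrelated_annual_cost = 3_000           # e.g. care for unrelated conditions in added years

icer_excluding = treatment_cost / incremental_qalys
icer_including = (treatment_cost + added_life_years * unrelated_annual_cost) / incremental_qalys

print(f"Life-improving treatment: £{icer_excluding:,.0f}/QALY either way")
print(f"Life-extending treatment: £{icer_excluding:,.0f}/QALY excluding unrelated costs, "
      f"£{icer_including:,.0f}/QALY including them")
```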

Oncology modeling for fun and profit! Key steps for busy analysts in health technology assessment. PharmacoEconomics [PubMed] Published 6th November 2017

Quite a title(!). The subject of this essay is ‘partitioned survival modelling’. Honestly, I never really knew what that was until I read this article. It seems the reason for my ignorance could be that I haven’t worked on the evaluation of cancer treatments, for which it’s a popular methodology. Apparently, a recent study found that almost 75% of NICE cancer drug appraisals were informed by this sort of analysis. Partitioned survival modelling is a simple means by which to extrapolate outcomes in a context where people can survive (or not) with or without progression. Often this can be done on the basis of survival analyses and standard trial endpoints. This article seeks to provide some guidance on the development and use of partitioned survival models. Or, rather, it provides a toolkit for calling out those who might seek to use the method as a means of providing favourable results for a new therapy when data and analytical resources are lacking. The ‘key steps’ can be summarised as 1) avoiding/ignoring/misrepresenting current standards of economic evaluation, 2) using handpicked parametric approaches for extrapolation in order to maximise survival benefits, 3) creatively estimating relative treatment effects using indirect comparisons without adjustment, 4) making optimistic assumptions about post-progression outcomes, and 5) denying the possibility of any structural uncertainty. The authors illustrate just how much an analyst can influence the results of an evaluation (if they want to “keep ICERs in the sweet spot!”). Generally, these tactics move the model far from being representative of reality. However, the prevailing secrecy around most models means that it isn’t always easy to detect these shortcomings. Sometimes it is though, and the authors make explicit reference to technology appraisals that they suggest demonstrate these crimes. Brilliant!
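
For anyone else who didn't know what partitioned survival modelling was, the mechanics are straightforward: state occupancy is read directly off the overall survival (OS) and progression-free survival (PFS) curves rather than derived from transition probabilities. A minimal sketch, with exponential curves and made-up parameters (not any appraisal's inputs):

```python
# Minimal partitioned survival model: three states (progression-free, progressed, dead),
# with occupancy read straight off the OS and PFS curves. All inputs are hypothetical.
import numpy as np

t = np.arange(0, 20, 1 / 12)            # monthly cycles over 20 years

# Exponential survival curves (the parametric choice is exactly where the
# handpicked-extrapolation mischief described above can creep in).
pfs = np.exp(-0.40 * t)                 # progression-free survival
os_ = np.exp(-0.20 * t)                 # overall survival
pfs = np.minimum(pfs, os_)              # PFS can never exceed OS

prog_free = pfs
progressed = os_ - pfs
dead = 1 - os_

# Accumulate (undiscounted) QALYs with hypothetical utilities for each alive state.
cycle = 1 / 12
qalys = np.sum((0.80 * prog_free + 0.60 * progressed) * cycle)
print(f"Expected QALYs: {qalys:.2f}")
```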

Chris Sampson’s journal round-up for 23rd October 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

What is the evidence from past National Institute of Health and Care Excellence single-technology appraisals regarding company submissions with base-case incremental cost-effectiveness ratios of less than £10,000/QALY? Value in Health Published 18th October 2017

NICE have been looking into diversifying their HTA processes of late. One of the newly proposed rules is that technologies with a base-case ICER estimate of less than £10,000 per QALY should be eligible for a fast-track appraisal, so that patients can benefit as early as possible from a therapy that does not pose a great risk of wasting NHS resources. But what have NICE been doing up to this point for such technologies? For this study, the researchers analysed content from all NICE single technology appraisals (STAs) between 2009 and 2016, of which there were 171 with final reports available that reported a base-case ICER. 15% (26) of the STAs reported all base-case ICERs to be below £10,000, and of these 73% (19) received a positive recommendation at the first appraisal committee meeting. A key finding is that 7 of the 26 received a ‘Minded No’ judgment in the first instance due in part to inadequate evidence and – though all got a positive decision in the end – some recommendations were restricted to subgroups. The authors also had a look at STAs with base-case ICERs up to £15,000, of which there were 5 more. All of these received a positive recommendation at the first appraisal committee meeting. Another group of (28) STAs reported multiple ICERs that included estimates both below and above £10,000. These tell a different story. Only 13 received an unrestricted positive recommendation at the first appraisal committee. Positive recommendations eventually followed for all 28, but 7 were on the basis of patient access schemes. There are a few things to consider in light of these findings. It may not be possible for NICE to adequately fast-track some sub-£10k submissions because the ICERs are not estimated on the basis of appropriate comparisons, or because the evidence is otherwise inadequate. But there may be good grounds for extending the fast-track threshold to £15,000. The study also highlights some indicators of complexity (such as the availability of patient access scheme discounts) that might be used as a basis for excluding submissions from the fast-track process.

EQ-5D-5L versus EQ-5D-3L: the impact on cost-effectiveness in the United Kingdom. Value in Health Published 18th October 2017

Despite some protest from NICE, most UK health economists working on trial-based economic evaluations are probably getting on with using the new EQ-5D-5L (and associated value set) over its 3L predecessor. This shift could bring important changes to the distribution of cost-effectiveness results for evaluated technologies. In this study, the researchers sought to identify what these changes might be, by examining a couple of datasets which included both 3L and 5L response data. One dataset was produced by the EuroQol group, with 3,551 individuals from across Europe with a range of health states, and the other was a North American dataset collected from 5,205 patients with rheumatoid disease, which switched from 3L to 5L with a wave of overlap. The analysis employs a previously developed method with a series of ordinal regressions, in which 3L-5L pairs are predicted using a copula approach. The first thing to note is that there was variation in the distribution of responses between the different dimensions and between the two datasets, and so a variety of model specifications are needed. To investigate the implications of using the 5L instead of the 3L, the authors considered 9 cost-effectiveness analysis case studies. The 9 studies reported 13 comparisons. In almost all cases where 3L was replaced with the 5L, the intervention resulted in a smaller QALY gain and higher ICER. The only study in which use of the 5L increased the incremental QALYs was one in which life extension was the key driver of QALY gains. Generally speaking, use of the 5L increases index values and reduces the range, so quality of life improvements are ‘more difficult’ to achieve, while life extension is relatively more valuable than on the 3L. Several technologies move from being clearly cost-effective within NICE’s £20,000-£30,000 threshold to being borderline cases. Different technologies for different diseases will be impacted differently by the move from the 3L to the 5L. So while we should probably still start using the 5L and its value set (because it’s methodologically superior), we mustn’t forget how different our findings might be in comparison to our old ways.
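
The mechanics behind that pattern can be illustrated with hypothetical utility values (these are not the crosswalk estimates): because the 5L value set assigns higher values to poor states and compresses the range, the same quality-of-life improvement buys fewer QALYs and the ICER rises.

```python
# Hypothetical illustration of why moving from the 3L to the 5L value set tends to
# shrink QALY gains from quality-of-life improvements. Utilities are made up.

duration = 5.0                              # years spent in each state
utility = {
    "3L": {"poor": 0.30, "good": 0.75},
    "5L": {"poor": 0.50, "good": 0.80},     # higher values, narrower range
}
incremental_cost = 10_000

for instrument, u in utility.items():
    qaly_gain = (u["good"] - u["poor"]) * duration
    print(f"{instrument}: QALY gain {qaly_gain:.2f}, "
          f"ICER £{incremental_cost / qaly_gain:,.0f}/QALY")
```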

Experience-based utility and own health state valuation for a health state classification system: why and how to do it. The European Journal of Health Economics [PubMed] Published 11th October 2017

There’s debate around whose values we ought to be using to estimate QALYs when making resource allocation decisions. Generally we use societal values, but some researchers think we should be using values from people actually in those health states. I’ve written before about some of the problems with this debate. In this study, the authors try to bring some clarity to the discussion. Four types of values are considered, defined by two distinctions: hypothetical vs own current state and general public vs patient values. The notion of experienced utility is introduced and the authors explain why this cannot be captured by (for example) a TTO exercise, because such exercises require hypothetical future scenarios of health improvement. Thus, the preferred terminology becomes ‘own health state valuation’. The authors summarise some of the research that has sought to compare the 4 types of values specified, highlighting that own health state valuations tend to give higher values for dysfunctional health states than do general population hypothetical valuations. The main point is that valuations can differ systematically according to whose values are being elicited. The authors describe some reasons why these values may differ. These could include i) poor descriptions of hypothetical states, ii) changing internal standards (e.g. response shift), and iii) adaptation. Next, the authors consider how to go about collecting own health state values. Two key challenges are specified: i) respondents may be unwilling to take part where questions are complex or intrusive, and ii) there may be ethical concerns, particularly where people have terminal conditions. It is therefore difficult to sample for all possible health states. Selection bias may also rear its head. The tendency for more mild health states to be observed creates problems for the econometricians trying to model value sets. The authors propose some ways forward for identifying own health state value sets. One way would be to purposively sample EQ-5D health states, recruiting people who are representative of those in each state. However, some states are rarely observed, so we’d be looking at screening millions of people to identify the necessary participants from a general survey. So the authors suggest targeting people via other methods, though this may still prove very difficult. A more effective (and favourable) approach – the authors suggest – could be to try to obtain better informed general population values. This could involve improving descriptive systems and encouraging deliberation. Evidence suggests that this can reduce the discrepancy between hypothetical and own state valuations. In particular, the authors recommend the use of citizens’ juries and multi-criteria decision analysis. This isn’t something we see being done in the literature, and so it may be a fruitful avenue for future research.
