Chris Sampson’s journal round-up for 5th February 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Cost-effectiveness analysis of germ-line BRCA testing in women with breast cancer and cascade testing in family members of mutation carriers. Genetics in Medicine [PubMed] Published 4th January 2018

The idea of testing women for BRCA mutations – faulty genes that can increase the probability and severity of breast and ovarian cancers – periodically makes it into the headlines. That’s not just because of Angelina Jolie. It’s also because it’s a challenging and active area of research with many uncertainties. This new cost-effectiveness analysis evaluates a programme that incorporates cascade testing: testing relatives of mutation carriers. The idea is that this could increase the effectiveness of the programme with a reduced cost-per-identification, as relatives of mutation carriers are more likely to also carry a mutation. The researchers use a cohort-based Markov-style decision analytic model. A programme with three test cohorts – i) women with unilateral breast cancer and a risk prediction score >10%, ii) first-degree relatives, and iii) second-degree relatives – was compared against no testing. A positive result in the original high-risk individual leads to testing in the first- and second-degree relatives, with the number of subsequent tests occurring in the model determined by assumptions about family size. Women who test positive can receive risk-reducing mastectomy and/or bilateral salpingo-oophorectomy (removal of the ovaries). The results are favourable to the BRCA testing programme, at $19,000 (Australian) per QALY for testing affected women only and $15,000 when the cascade testing of family members was included, with high probabilities of cost-effectiveness at $50,000 per QALY. I’m a little confused by the model. It includes the states ‘BRCA positive’ and ‘Breast cancer’, which clearly are not mutually exclusive. And it isn’t clear how women entering the model with breast cancer go on to enjoy QALY benefits compared to the no-test group. I’m definitely not comfortable with the assumption that there is no disutility associated with risk-reducing surgery.
I also can’t see where the cost of identifying the high-risk women in the first place was accounted for. But this is a model, after all. The findings appear to be robust to a variety of sensitivity analyses. Part of the value of testing lies in the information it provides about people beyond the individual patient. Clearly, if we want to evaluate the true value of testing then this needs to be taken into account.
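To make the mechanics concrete, the basic logic of a cohort Markov model like this can be sketched in a few lines. Everything below is illustrative only: the states, transition probabilities, costs, and utilities are hypothetical placeholders of my own, not the paper’s calibrated inputs.

```python
# Toy cohort Markov model: a cohort vector is pushed through a transition
# matrix each cycle, accumulating discounted costs and QALYs per person.
def run_cohort_model(trans, costs, utilities, cycles, discount=0.05):
    n = len(trans)
    cohort = [1.0] + [0.0] * (n - 1)  # whole cohort starts in state 0
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + discount) ** t
        total_cost += d * sum(p * c for p, c in zip(cohort, costs))
        total_qaly += d * sum(p * u for p, u in zip(cohort, utilities))
        cohort = [sum(cohort[i] * trans[i][j] for i in range(n))
                  for j in range(n)]
    return total_cost, total_qaly

# Hypothetical 3-state example (annual cycles): well, cancer, dead.
# Testing slightly reduces cancer incidence and mortality, at a cost.
trans_test = [[0.90, 0.08, 0.02], [0.00, 0.85, 0.15], [0.00, 0.00, 1.00]]
trans_none = [[0.88, 0.09, 0.03], [0.00, 0.85, 0.15], [0.00, 0.00, 1.00]]
cost_t, qaly_t = run_cohort_model(trans_test, [200, 8000, 0], [0.95, 0.70, 0.0], 40)
cost_n, qaly_n = run_cohort_model(trans_none, [0, 8000, 0], [0.95, 0.70, 0.0], 40)
icer = (cost_t - cost_n) / (qaly_t - qaly_n)  # incremental cost per QALY
```

Note how the structure makes the mutual-exclusivity problem visible: each person must be in exactly one state per cycle, which is why overlapping states like ‘BRCA positive’ and ‘Breast cancer’ are awkward.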

Economic evaluation of direct-acting antivirals for hepatitis C in Norway. PharmacoEconomics Published 2nd February 2018

Direct-acting antivirals (DAAs) are those new drugs that gave NICE a headache a few years back because they were – despite being very effective and high-value – unaffordable. DAAs are essentially curative, which means that they can reduce resource use over a long time horizon. This makes cost-effectiveness analysis in this context challenging. In this new study, the authors conduct an economic evaluation of DAAs compared with the previous class of treatment, in the Norwegian context. Importantly, the researchers sought to take into account the rebates that have been agreed in Norway, which mean that the prices are effectively reduced by up to 50%. There are now lots of different DAAs available. Furthermore, hepatitis C infection corresponds to several different genotypes. This means that there is a need to identify which treatments are most (cost-)effective for which groups of patients; this isn’t simply a matter of A vs B. The authors use a previously developed model that incorporates projections of the disease up to 2030, though they extrapolate to a 100-year time horizon. The paper presents cost-effectiveness acceptability frontiers for each of genotypes 1, 2, and 3, clearly demonstrating which medicines are the most likely to be cost-effective at given willingness-to-pay thresholds. For all three genotypes, at least one of the DAA options is most likely to be cost-effective above a threshold of €70,000 per QALY (which is apparently recommended in Norway). The model predicts that if everyone received the most cost-effective strategy then Norway would expect to see around 180 hepatitis C patients in 2030 instead of the 300-400 seen in the last six years. The study also presents the price rebates that would be necessary to make currently sub-optimal medicines cost-effective. The model isn’t that generalisable. It’s very much Norway-specific as it reflects the country’s treatment guidelines.
It also only looks at people who inject drugs – a sub-population whose importance can vary a lot from one country to the next. I expect this will be a valuable piece of work for Norway, but it strikes me as odd that neither “affordability” nor “budget impact” is even mentioned in the paper.
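For readers unfamiliar with cost-effectiveness acceptability frontiers, they can be computed directly from probabilistic sensitivity analysis output. The sketch below assumes nothing from the paper: the two regimen names, cost and QALY distributions are invented, and the frontier logic – at each threshold, take the strategy with the highest *expected* net benefit and report the probability that it is in fact optimal – is the standard construction.

```python
import random

def ceaf(psa, lambdas):
    """For each willingness-to-pay value, find the strategy with the highest
    expected net benefit and the probability that it is optimal."""
    names = list(psa)
    n_draws = len(next(iter(psa.values())))
    frontier = []
    for lam in lambdas:
        nb = {s: [lam * q - c for c, q in psa[s]] for s in names}
        best = max(names, key=lambda s: sum(nb[s]) / n_draws)
        wins = sum(1 for i in range(n_draws)
                   if all(nb[best][i] >= nb[s][i] for s in names))
        frontier.append((lam, best, wins / n_draws))
    return frontier

# Invented PSA draws (cost in €, QALYs) for two hypothetical DAA regimens
random.seed(1)
psa = {"DAA_A": [(random.gauss(40_000, 5_000), random.gauss(14.0, 0.5))
                 for _ in range(1_000)],
       "DAA_B": [(random.gauss(55_000, 5_000), random.gauss(14.3, 0.5))
                 for _ in range(1_000)]}
frontier = ceaf(psa, [20_000, 70_000, 120_000])
```

The frontier differs from the more familiar acceptability *curves* in exactly this way: it only ever reports the probability for the strategy you would actually choose on expected net benefit.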

Cost-effectiveness of prostate cancer screening: a systematic review of decision-analytical models. BMC Cancer [PubMed] Published 18th January 2018

You may have seen prostate cancer in the headlines last week. Despite the number of people in the UK dying each year from prostate cancer now being greater than the number of people dying from breast cancer, prostate cancer screening remains controversial. This is because over-detection and over-treatment are common and harmful. Plenty of cost-effectiveness studies have been conducted in the context of detecting and treating prostate cancer. But there are various ways of modelling the problem and various specifications of screening programme that can be evaluated. So here we have a systematic review of cost-effectiveness models evaluating prostate-specific antigen (PSA) blood tests as a basis for screening. From a haul of 1010 studies, 10 made it into the review. The studies modelled lots of different scenarios, with alternative screening strategies, PSA thresholds, and treatment pathways. The results are not consistent. Many of the scenarios evaluated in the studies were more costly and less effective than current practice (which tended to be the lack of any formal screening programme). None of the UK-based cost-per-QALY estimates favoured screening. The authors summarise the methodological choices made in each study and consider the extent to which this relates to the pathways being modelled. They also specify the health state utility values used in the models. This will be a very useful reference point for anyone trying their hand at a prostate cancer screening model. Of the ten studies included in the review, four of them found at least one screening programme to be potentially cost-effective. ‘Adaptive screening’ – whereby individuals’ recall to screening was based on their risk – was considered in two studies using patient-level simulations. The authors suggest that cohort-level modelling could be sufficient where screening is not determined by individual risk level. 
There are also warnings against inappropriate definition of the comparator, which is likely to be opportunistic screening rather than a complete absence of screening. Generally speaking, a lack of good data seems to be part of the explanation for the inconsistency in the findings. It could be some time before we have a clearer understanding of how to implement a cost-effective screening programme for prostate cancer.
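To see why risk-based recall pushes modellers towards patient-level simulation, here is a toy sketch in which each simulated man’s screening interval depends on his own risk score – something a homogeneous cohort model cannot represent directly. Every number here (risk distribution, onset hazard, intervals) is invented for illustration.

```python
import random

def simulate_screening(n, recall, years=20, seed=7):
    """Toy patient-level simulation of a screening programme.
    recall(risk) returns the interval in years until the next screen."""
    random.seed(seed)
    detected = screens = 0
    for _ in range(n):
        risk = random.random() * 0.1                     # hypothetical risk score
        onset = random.expovariate(0.005 + risk * 0.05)  # illustrative onset time
        t = 0.0
        while t < years:
            screens += 1
            if t >= onset:          # a screen after onset detects the cancer
                detected += 1
                break
            t += recall(risk)
    return detected, screens

fixed = simulate_screening(10_000, lambda r: 4.0)  # everyone screened every 4 years
adaptive = simulate_screening(10_000, lambda r: 2.0 if r > 0.05 else 6.0)
```

With the same seed, both runs face identical simulated patients, so the comparison isolates the effect of the recall rule – the kind of paired design the reviewed patient-level models rely on.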

Credits

 

Chris Sampson’s journal round-up for 20th November 2017


Effects of health and social care spending constraints on mortality in England: a time trend analysis. BMJ Open [PubMed] Published 15th November 2017

I’d hazard a guess that I’m not the only one here who gets angry about the politics of austerity. Having seen this study’s title, it’s clear that the research could provide fuel for that anger. It doesn’t disappoint. Recent years have seen very low year-on-year increases in public expenditure on health in England. Even worse, between 2010 and 2014, public expenditure on social care actually fell in real terms. This is despite growing need for health and social care. In this study, the authors look at health and social care spending and try to estimate the impact that reduced expenditure has had on mortality in England. The analysis uses spending and mortality data from 2001 onwards and also incorporates mortality projections for 2015-2020. Time trend analyses are conducted using Poisson regression models. From 2001-2010, deaths decreased by 0.77% per year (on average). The mortality rate was falling. Now it seems to be increasing; from 2011-2014, the average number of deaths per year increased by 0.87%. This corresponds to 18,324 additional deaths in 2014, for example. But everybody dies. Extra deaths are really sooner deaths. So the question, really, is how much sooner? The authors look at potential years of life lost and find this figure to be 75,496 life-years greater than expected in 2014, given pre-2010 trends. This shouldn’t come as much of a surprise. Spending less generally achieves less. What makes this study really interesting is that it can tell us who is losing these potential years of life as a result of spending cuts. The authors find that it’s the over-60s. Care home deaths were the largest contributor to increased mortality. A £10 cut in social care spending per capita resulted in 5 additional care home deaths per 100,000 people. When the authors looked at deaths by local area, no association was found with the level of deprivation. 
If health and social care expenditure are combined in a single model, we see that it’s social care spending that is driving the number of excess deaths. The impact of health spending on hospital deaths was less robust. The number of nurses acted as a mediator for the relationship between spending and mortality. The authors estimate that current spending projections will result in 150,000 additional deaths compared with pre-2010 trends. There are plenty of limitations to this study. It’s pretty much impossible (though the authors do try) to separate the effects of austerity from the effect of a weak economy. Still, I’m satisfied with the conclusion that austerity kills older people (no jokes about turkeys and Christmas, please). For me, the findings also highlight the need for more research in the context of social care, and how we (as researchers) might effectively direct policy to prevent ‘excess’ deaths.
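The headline arithmetic – a trend reversal from roughly −0.77% to +0.87% a year compounding into tens of thousands of extra deaths – is easy to sketch. The baseline below is a hypothetical round number of my own (England’s annual total is of this order); the paper’s actual counts differ.

```python
def excess_deaths(baseline, years, trend_old=-0.0077, trend_new=0.0087):
    """Deaths under the observed post-2010 trend minus deaths expected
    had the pre-2010 downward trend continued, year by year."""
    expected = [baseline * (1 + trend_old) ** t for t in range(1, years + 1)]
    observed = [baseline * (1 + trend_new) ** t for t in range(1, years + 1)]
    return [o - e for o, e in zip(observed, expected)]

# Hypothetical baseline of 460,000 annual deaths in 2010
gaps = excess_deaths(460_000, 4)
```

Even with modest annual percentages, the gap between the two trajectories widens every year, which is why the projected cumulative figure grows so quickly.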

Should cost effectiveness analyses for NICE always consider future unrelated medical costs? BMJ [PubMed] Published 10th November 2017

The question of whether or not ‘unrelated’ future medical costs should be included in economic evaluation is becoming a hot topic. So much so that the BMJ has published this Head To Head, which introduces some of the arguments for and against. NICE currently recommends excluding unrelated future medical costs. An example given in this article is the case of the expected costs of dementia care having saved someone’s life by heart transplantation. The argument in favour of including unrelated costs is quite obvious – these costs can’t be ignored if we seek to maximise social welfare. Their inclusion is described as “not difficult” by the authors defending this move. By ignoring unrelated future costs (but accounting for the benefit of longer life), the relative cost-effectiveness of life-extending treatments, compared with life-improving treatments, is artificially inflated. The argument against including unrelated medical costs is presented as one of fairness. The author suggests that their inclusion could preclude access to health care for certain groups of people that are likely to have high needs in the future. So perhaps NICE should ignore unrelated medical costs in certain circumstances. I sympathise with this view, but I feel it is less a fairness issue and more a demonstration of the current limits of health-related quality of life measurement, which doesn’t reflect adaptation and coping. However, I tend to disagree with both of the arguments presented here. I really don’t think NICE should include or exclude unrelated future medical costs according to the context because that could create some very perverse incentives for certain stakeholders. But then, I do not agree that it is “not difficult” to include all unrelated future costs. ‘All’ is an important qualifier here because the capacity for analysts to pick and choose unrelated future costs creates the potential to pick and choose results.
When it comes to unrelated future medical costs, NICE’s position needs to be all-or-nothing, and right now the ‘all’ bit is a high bar to clear. NICE should include unrelated future medical costs – it’s difficult to formulate a sound argument against that – but they should only do so once more groundwork has been done. In particular, we need to develop more valid methods for valuing quality of life against life-years in health technology assessment across different patient groups. And we need more reliable methods for estimating future medical costs in all settings.
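The ‘artificial inflation’ point is easiest to see with a toy ICER calculation. All of the numbers below are invented for illustration, not taken from the article.

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return delta_cost / delta_qaly

# Hypothetical life-extending treatment: 2 added life-years at utility 0.8
extra_years, utility = 2.0, 0.8
treatment_cost = 20_000       # incremental cost of the treatment itself
unrelated_annual = 6_000      # unrelated medical costs in each added year

excluding = icer(treatment_cost, extra_years * utility)
including = icer(treatment_cost + unrelated_annual * extra_years,
                 extra_years * utility)
```

The QALY gain is counted either way, but the costs incurred during those added years only appear in the second calculation – so excluding them always flatters life-extending treatments relative to life-improving ones.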

Oncology modeling for fun and profit! Key steps for busy analysts in health technology assessment. PharmacoEconomics [PubMed] Published 6th November 2017

Quite a title(!). The subject of this essay is ‘partitioned survival modelling’. Honestly, I never really knew what that was until I read this article. It seems the reason for my ignorance could be that I haven’t worked on the evaluation of cancer treatments, for which it’s a popular methodology. Apparently, a recent study found that almost 75% of NICE cancer drug appraisals were informed by this sort of analysis. Partitioned survival modelling is a simple means by which to extrapolate outcomes in a context where people can survive (or not) with or without progression. Often this can be done on the basis of survival analyses and standard trial endpoints. This article seeks to provide some guidance on the development and use of partitioned survival models. Or, rather, it provides a toolkit for calling out those who might seek to use the method as a means of providing favourable results for a new therapy when data and analytical resources are lacking. The ‘key steps’ can be summarised as 1) avoiding/ignoring/misrepresenting current standards of economic evaluation, 2) using handpicked parametric approaches for extrapolation in order to maximise survival benefits, 3) creatively estimating relative treatment effects using indirect comparisons without adjustment, 4) making optimistic assumptions about post-progression outcomes, and 5) denying the possibility of any structural uncertainty. The authors illustrate just how much an analyst can influence the results of an evaluation (if they want to “keep ICERs in the sweet spot!”). Generally, these tactics move the model far from being representative of reality. However, the prevailing secrecy around most models means that it isn’t always easy to detect these shortcomings. Sometimes it is though, and the authors make explicit reference to technology appraisals that they suggest demonstrate these crimes. Brilliant!
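The mechanics are simple enough to sketch: fit (or handpick) parametric PFS and OS curves, then read state occupancy straight off them. The exponential curves, rates, and utilities below are all illustrative assumptions of mine; the second call shows how a more optimistic OS extrapolation inflates post-progression time and hence QALYs, which is exactly the kind of lever the authors warn about.

```python
import math

def pss_qalys(rate_pfs, rate_os, horizon, dt=0.1, u_pf=0.80, u_pd=0.60):
    """Partitioned survival sketch with exponential curves.
    Occupancy at time t: pre-progression = PFS(t);
    post-progression = OS(t) - PFS(t); dead = 1 - OS(t).
    QALYs are the utility-weighted areas under the curves."""
    qalys, t = 0.0, 0.0
    while t < horizon:
        pfs = math.exp(-rate_pfs * t)
        os_ = math.exp(-rate_os * t)
        assert os_ >= pfs  # OS must lie above PFS for valid occupancy
        qalys += (u_pf * pfs + u_pd * (os_ - pfs)) * dt
        t += dt
    return qalys

base = pss_qalys(rate_pfs=0.50, rate_os=0.30, horizon=20)
optimistic = pss_qalys(rate_pfs=0.50, rate_os=0.20, horizon=20)
```

Note that the curves are modelled independently – there is no structural link between progression and death – which is why post-progression outcomes can be made almost arbitrarily rosy without any internal inconsistency showing up.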


Alastair Canaway’s journal round-up for 31st October 2016


Ethical hurdles in the prioritization of oncology care. Applied Health Economics and Health Policy [PubMed] Published 21st October 2016

Recently, among health economists, there has been significant scrutiny of, and disquiet about, the Cancer Drugs Fund, with Professor Karl Claxton describing it as “an appalling, unfair use of NHS resources”. With the latest reorganisation of the Cancer Drugs Fund in mind, this article examining the ethical issues surrounding prioritisation of cancer care was of particular interest. As all health economists will tell you, there is an opportunity cost with any allocation of scarce resources. Likewise, with prioritisation of specific disease groups, there may be equity issues with specific patients’ lives essentially being valued more highly than those of people suffering other conditions. This article conducts a systematic review of the oncology literature to examine the ethical issues surrounding inequity in healthcare. The review found that public and political attention often focuses on ‘availability’ of pharmacological treatment in addition to factors that lead to good outcomes. The public and political focus on availability can have perverse consequences, as highlighted by the Cancer Drugs Fund: resources are diverted towards availability and away from other more cost-effective areas, and in turn this may have had a detrimental effect on care for non-cancer patients. Additionally, by approving high-cost, less cost-effective agents, strain will be placed upon health budgets, causing problems for existing cost-effectiveness thresholds. If prioritisation of cancer drugs is to be pursued then the authors suggest that the question of how to fund new therapies equitably will need to be addressed.
Although the above issues will not be new to most, the paper is still worth reading as it: i) gives an overview of the different prioritisation frameworks used across Europe, ii) provides several suggestions for how, if prioritisation is to be pursued, it can be done in a fairer manner rather than simply overriding typical HTA decision processes, iii) considers the potential legal consequences of prioritisation, and iv) considers the impact of prioritisation on the sustainability of healthcare funding.

Doctor-patient differences in risk and time preferences: a field experiment. Journal of Health Economics Published 19th October 2016

The patient-doctor agency interaction, and the associated issues arising from asymmetric information, is something that was discussed often during my health economics MSc, but rarely during my day-to-day work. Despite being very familiar with supplier-induced demand, differences in risk and time preferences within the patient-doctor dyad weren’t something I’d considered in recent times. Upon reading, it is immediately clear that if risk and time preferences do differ, then what is seen as the optimal treatment for the patient may be very different to that of the doctor. This may lead to poorer adherence to treatments and worse outcomes. This paper sought to investigate whether patients and their doctors had similar time and risk preferences using a framed field experiment with 300 patients and 67 doctors in Athens, Greece, in a natural clinical setting. The authors claim to be the first to attempt this, and have three main findings: i) there were significant time preference differences between the patients and doctors – doctors discounted future health gains and financial outcomes less heavily than patients; ii) there were no significant differences in risk preferences for health, with both doctors and patients being mildly risk averse; iii) there were, however, risk preference differences for financial impacts, with doctors being more risk averse than patients. The implication of this paper is that there is potential for improvements in doctor-patient communication about treatments, and, as agents for patients, doctors should attempt to gauge their patients’ preferences and attitudes before recommending treatment. For those who heavily discount the future, it may be preferable to provide care that increases the short-term benefits.
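A toy discounting example makes the implication concrete. Everything here is invented: two hypothetical benefit streams (quick symptomatic relief versus a larger but delayed gain) and two illustrative discount rates, with the doctor discounting less heavily, as the study found.

```python
def present_value(benefits, rate):
    """Discounted value of a stream of yearly benefits (year 0 = now)."""
    return sum(b / (1 + rate) ** t for t, b in enumerate(benefits))

# Hypothetical options: quick relief vs a larger, delayed health gain
quick   = [1.0, 0.3, 0.3, 0.3, 0.3]
delayed = [0.0, 0.0, 0.0, 1.2, 1.2]
doctor_rate, patient_rate = 0.03, 0.15  # doctor discounts less heavily

doctor_prefers_delayed = (present_value(delayed, doctor_rate)
                          > present_value(quick, doctor_rate))
patient_prefers_quick = (present_value(quick, patient_rate)
                         > present_value(delayed, patient_rate))
```

With these (made-up) numbers the two parties rank the same treatments differently – which is precisely the mechanism through which preference mismatch could undermine adherence.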

Hospital productivity growth in the English NHS 2008/09 to 2013/14 [PDF]. Centre for Health Economics Research Paper [RePEc] Published 21st October 2016

Although this is technically a ‘journal round-up’, this week I’ve chosen to include the latest CHE report as I think it is something which may be of wider interest to the AHEBlog community. Given limited resources, there is a persistent call for both productivity and efficiency gains within the NHS. The CHE report examines the extent to which NHS hospitals have improved productivity: have they made better use of their resources by increasing the number of patients they treat and the services they deliver for the same or fewer inputs? To assess productivity, the report uses an established method: Total Factor Productivity (TFP), the ratio of all outputs to all inputs. Growth in TFP is seen as being key to improving patient care with limited resources. The primary report finding was that TFP growth at the trust level exhibits ‘extraordinary volatility’. For example, one year there may be TFP growth, followed by negative growth the next year, and then positive growth. The authors assert that much of the TFP growth measured is in fact implausible, and that the changes are driven largely by nominal effects alongside some real changes. These nominal effects may be data entry errors or changes in accounting practices and data recording processes, which result in changes to the timing of the recording of outputs and inputs. This is an important finding for research assessing productivity growth within the NHS. The TFP approach is an established methodology, yet as this research demonstrates, such methods do not provide credible measures of productivity at the hospital level. If hospital-level productivity growth is to be measured credibly, then a new methodology will be required.
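The TFP calculation itself is straightforward, which is part of the problem: any nominal wobble in the indices flows straight through to measured growth. The sketch below uses invented trust-level indices, with a dip in year 3’s output standing in for the kind of recording artefact the report describes.

```python
def tfp_growth(outputs, inputs):
    """Year-on-year growth in the ratio of an aggregate output index
    to an aggregate input index (Total Factor Productivity)."""
    tfp = [o / i for o, i in zip(outputs, inputs)]
    return [tfp[t] / tfp[t - 1] - 1 for t in range(1, len(tfp))]

# Illustrative indices; the dip to 96 might be a data-entry artefact
outputs = [100, 104, 96, 108]
inputs = [100, 102, 101, 103]
growth = tfp_growth(outputs, inputs)
```

A single mistimed recording produces one year of apparently negative growth followed by a sharp rebound – exactly the ‘extraordinary volatility’ pattern, with no real change in productivity at all.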
