Chris Sampson’s journal round-up for 5th February 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Cost-effectiveness analysis of germ-line BRCA testing in women with breast cancer and cascade testing in family members of mutation carriers. Genetics in Medicine [PubMed] Published 4th January 2018

The idea of testing women for BRCA mutations – faulty genes that can increase the probability and severity of breast and ovarian cancers – periodically makes it into the headlines. That’s not just because of Angelina Jolie. It’s also because it’s a challenging and active area of research with many uncertainties. This new cost-effectiveness analysis evaluates a programme that incorporates cascade testing: testing the relatives of mutation carriers. The idea is that this could increase the effectiveness of the programme at a reduced cost-per-identification, as relatives of mutation carriers are more likely to also carry a mutation. The researchers use a cohort-based Markov-style decision analytic model. A programme with three test cohorts – i) women with unilateral breast cancer and a risk prediction score >10%, ii) first-degree relatives, and iii) second-degree relatives – was compared against no testing. A positive result in the original high-risk individual leads to testing in the first- and second-degree relatives, with the number of subsequent tests occurring in the model determined by assumptions about family size. Women who test positive can receive risk-reducing mastectomy and/or bilateral salpingo-oophorectomy (removal of the ovaries). The results are favourable to the BRCA testing programme, at $19,000 (Australian) per QALY for testing affected women only and $15,000 per QALY when the cascade testing of family members is included, with high probabilities of cost-effectiveness at $50,000 per QALY. I’m a little confused by the model. It includes the states ‘BRCA positive’ and ‘Breast cancer’, which clearly are not mutually exclusive, and it isn’t clear how women entering the model with breast cancer go on to enjoy QALY benefits compared to the no-test group. I’m definitely not comfortable with the assumption that there is no disutility associated with risk-reducing surgery. I also can’t see where the cost of identifying the high-risk women in the first place was accounted for. But this is a model, after all. The findings appear to be robust to a variety of sensitivity analyses. Part of the value of testing lies in the information it provides about people beyond the individual patient. Clearly, if we want to evaluate the true value of testing, this needs to be taken into account.
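To make the mechanics concrete, here is a minimal sketch of how a cohort-based Markov model turns transition probabilities, costs, and utilities into a cost-per-QALY figure. To be clear, the states, transition probabilities, costs, and programme cost below are invented placeholders, not the structure or values from the paper.

```python
import numpy as np

# Minimal cohort Markov sketch of a test vs no-test comparison. All
# states, probabilities, costs, and utilities are illustrative only.
states = ["well", "breast cancer", "dead"]

# Annual transition matrices (each row sums to 1).
P_no_test = np.array([[0.97, 0.02, 0.01],
                      [0.00, 0.90, 0.10],
                      [0.00, 0.00, 1.00]])
# Assume risk-reducing surgery after a positive test lowers incidence.
P_test = np.array([[0.985, 0.005, 0.01],
                   [0.000, 0.900, 0.10],
                   [0.000, 0.000, 1.00]])

costs = np.array([0.0, 20_000.0, 0.0])   # annual cost by state (AUD)
utils = np.array([1.00, 0.70, 0.00])     # utility weight by state
programme_cost = 25_000.0                # one-off testing + surgery (placeholder)

def run(P, upfront=0.0, cycles=40, disc=0.05):
    cohort = np.array([1.0, 0.0, 0.0])   # everyone starts in 'well'
    total_cost, total_qalys = upfront, 0.0
    for t in range(cycles):
        cohort = cohort @ P              # advance the cohort one cycle
        df = 1 / (1 + disc) ** (t + 1)   # discount costs and QALYs
        total_cost += df * (cohort @ costs)
        total_qalys += df * (cohort @ utils)
    return total_cost, total_qalys

c1, q1 = run(P_test, upfront=programme_cost)
c0, q0 = run(P_no_test)
print(f"ICER: {(c1 - c0) / (q1 - q0):,.0f} AUD per QALY")
```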

Economic evaluation of direct-acting antivirals for hepatitis C in Norway. PharmacoEconomics Published 2nd February 2018

Direct-acting antivirals (DAAs) are those new drugs that gave NICE a headache a few years back because they were – despite being very effective and high-value – unaffordable. DAAs are essentially curative, which means that they can reduce resource use over a long time horizon. This makes cost-effectiveness analysis in this context challenging. In this new study, the authors conduct an economic evaluation of DAAs compared with the previous class of treatment, in the Norwegian context. Importantly, the researchers sought to take into account the rebates that have been agreed in Norway, which mean that prices are effectively reduced by up to 50%. There are now lots of different DAAs available. Furthermore, hepatitis C comes in several different genotypes. This means that there is a need to identify which treatments are most (cost-)effective for which groups of patients; this isn’t simply a matter of A vs B. The authors use a previously developed model that incorporates projections of the disease up to 2030, though they extrapolate to a 100-year time horizon. The paper presents cost-effectiveness acceptability frontiers for each of genotypes 1, 2, and 3, clearly demonstrating which medicines are the most likely to be cost-effective at given willingness-to-pay thresholds. For all three genotypes, at least one of the DAA options is most likely to be cost-effective above a threshold of €70,000 per QALY (which is apparently recommended in Norway). The model predicts that if everyone received the most cost-effective strategy then Norway would expect to see around 180 hepatitis C patients in 2030, instead of the 300–400 seen in the last six years. The study also presents the price rebates that would be necessary to make currently sub-optimal medicines cost-effective. The model isn’t that generalisable. It’s very much Norway-specific, as it reflects the country’s treatment guidelines. It also looks only at people who inject drugs – a sub-population whose importance can vary a lot from one country to the next. I expect this will be a valuable piece of work for Norway, but it strikes me as odd that ‘affordability’ and ‘budget impact’ aren’t even mentioned in the paper.
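As a refresher on the mechanics: a cost-effectiveness acceptability frontier reports, at each willingness-to-pay threshold, the strategy with the highest expected net monetary benefit, along with its probability of being cost-effective. The sketch below illustrates the calculation with made-up strategies and simulated costs and QALYs, not the study's model or data.

```python
import numpy as np

# Sketch of how a cost-effectiveness acceptability frontier (CEAF) is
# built from probabilistic sensitivity analysis output. The strategies
# and their simulated costs/QALYs are invented placeholders.
rng = np.random.default_rng(0)
n = 5_000
strategies = ["old regimen", "DAA A", "DAA B"]
costs = np.column_stack([rng.normal(20_000, 2_000, n),
                         rng.normal(45_000, 5_000, n),
                         rng.normal(55_000, 6_000, n)])
qalys = np.column_stack([rng.normal(8.0, 0.5, n),
                         rng.normal(9.0, 0.5, n),
                         rng.normal(9.2, 0.5, n)])

for wtp in [30_000, 70_000, 110_000]:    # willingness-to-pay (EUR/QALY)
    nmb = wtp * qalys - costs            # net monetary benefit per draw
    best = np.argmax(nmb.mean(axis=0))   # frontier = highest expected NMB
    p_ce = np.mean(np.argmax(nmb, axis=1) == best)
    print(f"WTP {wtp:>7}: {strategies[best]} (P(cost-effective) = {p_ce:.2f})")
```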

Cost-effectiveness of prostate cancer screening: a systematic review of decision-analytical models. BMC Cancer [PubMed] Published 18th January 2018

You may have seen prostate cancer in the headlines last week. Although prostate cancer now kills more people in the UK each year than breast cancer does, prostate cancer screening remains controversial. This is because over-detection and over-treatment are common and harmful. Plenty of cost-effectiveness studies have been conducted in the context of detecting and treating prostate cancer. But there are various ways of modelling the problem and various specifications of screening programme that can be evaluated. So here we have a systematic review of cost-effectiveness models evaluating prostate-specific antigen (PSA) blood tests as a basis for screening. From a haul of 1010 studies, 10 made it into the review. The studies modelled lots of different scenarios, with alternative screening strategies, PSA thresholds, and treatment pathways. The results are not consistent. Many of the scenarios evaluated in the studies were more costly and less effective than current practice (which tended to be the absence of any formal screening programme). None of the UK-based cost-per-QALY estimates favoured screening. The authors summarise the methodological choices made in each study and consider the extent to which these relate to the pathways being modelled. They also specify the health state utility values used in the models. This will be a very useful reference point for anyone trying their hand at a prostate cancer screening model. Of the ten studies included in the review, four found at least one screening programme to be potentially cost-effective. ‘Adaptive screening’ – whereby individuals’ recall to screening was based on their risk – was considered in two studies using patient-level simulations. The authors suggest that cohort-level modelling could be sufficient where screening is not determined by individual risk level. There are also warnings against inappropriate definition of the comparator, which is likely to be opportunistic screening rather than a complete absence of screening. Generally speaking, a lack of good data seems to be part of the explanation for the inconsistency in the findings. It could be some time before we have a clearer understanding of how to implement a cost-effective screening programme for prostate cancer.
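To see why adaptive screening nudges modellers towards patient-level simulation, here's a toy sketch in which each man's recall interval depends on his own risk score, the kind of individual heterogeneity a cohort model can't easily capture. The risk distribution, thresholds, and intervals are all invented for illustration; none of this comes from the reviewed models.

```python
import numpy as np

# Toy patient-level simulation of 'adaptive' PSA screening, where the
# recall interval depends on each man's current risk score. All numbers
# are invented placeholders.
rng = np.random.default_rng(1)
n = 10_000
risk = rng.beta(2, 8, n)                # baseline risk score per man

def recall_interval(r):
    # Higher-risk men are recalled sooner; low-risk men wait longer.
    return 2 if r > 0.3 else 4 if r > 0.1 else 8

intervals = np.array([recall_interval(r) for r in risk])
screens_over_20y = 20 // intervals      # screens per man over 20 years
print("Mean screens per man (adaptive):", screens_over_20y.mean())
print("Mean screens per man (fixed 4-yearly):", 20 // 4)
```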



Meeting round-up: Health Economists’ Study Group (HESG) Winter 2018

Last week’s biannual intellectual knees-up for UK health economists took place at City, University of London. We’ve written before about HESG, but if you need a reminder of the format you can read Lucy Abel’s blog post on the subject. This was the first HESG I’ve been to in a while that took place in an actual university building.

The conference kicked off for me with my colleague Grace Hampson‘s first ever HESG discussion. It was an excellent discussion of Toby Watt‘s paper on the impact of price promotions for cola, in terms of quantities purchased (they increase) and – by extension – sugar consumption. It was a nice paper with a clear theoretical framework and empirical strategy, which generated a busy discussion. Nutrition is a subject that I haven’t seen represented much at past HESG meetings, but there were several nutrition papers on the schedule this time around, including others by Jonathan James and Ben Gershlick. I expect it’s something we’ll see becoming more prevalent as policymaking in this area becomes more insistent.

The second and third sessions I attended were on the relationship between health and social care, which is a pressing matter in the UK, particularly with regard to achieving integrated care. Ben Zaranko‘s paper considered substitution effects arising from changes in the relative budgets of health and social care. Jonathan Stokes and colleagues attempted to identify whether the Better Care Fund has achieved its goal of reducing secondary care use. That paper got a blazing discussion from Andrew Street, which triggered an insightful debate in the room.

A recurring theme in many sessions was the challenge of communicating with local decision-makers, and the apparent difficulty in working without a reference case to fall back on (such as that of NICE). This is something that I have heard regularly discussed at least since the Winter 2016 meeting in Manchester. At City, this was most clearly discussed in Emma Frew‘s paper describing the researchers’ experiences working with local government. Qualitative research has clearly broken through at HESG, including Emma’s paper and a study by Hareth Al-Janabi on the subject of treatment spillovers on family carers.

I also saw a few papers that related primarily to matters of research conduct and publishing. Charitini Stavropoulou‘s paper explored whether highly-cited researchers are more likely to receive public funding, while the paper I chaired by Anum Shaikh explored the potential for recycling cost-effectiveness models. The latter was a joy for me, with much discussion of model registries!

There were plenty of papers that satisfied my own particular research interests. Right up my research street was Mauro Laudicella‘s paper, which used real-world data to assess the cost savings associated with redirecting cancer diagnoses to GP referral rather than emergency presentation. I wasn’t quite as optimistic about the potential savings, with the standard worries about lead time bias and selection effects. But it was a great paper nonetheless. Also using real-world evidence was Ewan Gray‘s study, which supported the provision of adjuvant chemotherapy for early stage breast cancer but delivered some perplexing findings about patient-GP decision-making. Ewan’s paper explored technical methodological challenges, though the prize for the most intellectually challenging paper undoubtedly goes to Manuel Gomes, who continued his crusade to make health economists better at dealing with missing data – this time for the case of quality of life data. Milad Karimi‘s paper asked whether preferences over health states are informed. This is the kind of work I enjoy thinking about – whether measures like the EQ-5D capture what really matters and how we might do better.

As usual, many delegates worked hard and played hard. I took a beating from the schedule at this HESG, with my discussion taking place during the first session after the conference dinner (where we walked in the footsteps of the Spice Girls) and my chairing responsibilities falling on the last session of the last day. But in both cases, the audience was impressive.

I’ll leave the final thought for the blog post with Peter Smith’s plenary, which considered the role of health economists in a post-truth world. Happily for me, Peter’s ideas chimed with my own view that we ought to be taking our message to the man on the Clapham omnibus and supporting public debate. Perhaps our focus on (national) policymakers is too strong. Even where it wasn’t explicit, this theme could be seen throughout the meeting, whether it be around broader engagement with stakeholders, recognising local decision-making processes, or harnessing the value of storytelling through qualitative research. HESG members are STRETCHing the truth.


Alastair Canaway’s journal round-up for 18th September 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Selection of key health domains from PROMIS® for a generic preference-based scoring system. Quality of Life Research [PubMed] Published 19th August 2017

The US Panel on Cost-Effectiveness recommends the use of QALYs. Unlike in the UK, however, it doesn’t instruct as to which measure should be used. This leaves the door ajar for both new and established measures. This paper sets about developing a new preference-based measure from the Patient-Reported Outcomes Measurement Information System (PROMIS). PROMIS is a suite of person-centred measures of physical, mental, and social health, funded by the US National Institutes of Health. Across all the PROMIS measures there exist over 70 domains relevant to adult health. For all its promise, the PROMIS system does not produce a summary score, either for the calculation of QALYs or for general descriptive purposes such as measuring HRQL over time. This study aimed to reduce the 70+ domains down to a number suitable for valuation. To do this, Delphi methods were used. The Delphi approach seems to be increasing in popularity in the health economics world. For those unfamiliar, it involves obtaining the opinions of experts independently and iteratively, conducting rounds of questioning to reach a consensus (over two or more rounds). In this case, nine health outcomes experts were recruited and presented with ‘all 37 domains’ (no mention is made of how the authors got from 70 to 37!), then asked to remove any domains that were not appropriate for inclusion in a general health utility measure or were redundant given another PROMIS domain. If more than seven experts agreed, the domain was removed. Responses were combined and re-presented until consensus was reached, leaving 10 domains. The researchers then used a community sample of 50 participants to test for independence of domains using a pairwise independence evaluation test. Participants were given the option of removing a domain they felt was not important to overall HRQL, and asked to rate the importance of the remaining domains using a VAS. These findings were used by the research team to whittle the list down to the final seven domains: cognitive function (abilities); depression; fatigue; pain interference; physical function; ability to participate in social roles and activities; and sleep disturbance. Many of these are common to existing measures, but I did rather like the inclusion of cognitive function and fatigue – domains that are missing from many existing measures and that, to me, appear important. The next step is valuation. Upon valuation, this will be a promising candidate for use in economic evaluation – particularly in the US, where the PROMIS measurement suite is already established.
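For those unfamiliar with how such a consensus rule plays out, here is a toy simulation of the removal step described above: nine experts vote independently, and a domain is dropped when more than seven agree. The domain names and vote probabilities are invented, and I don't model the feedback of combined responses between rounds that a real Delphi exercise involves.

```python
import random

# Toy sketch of the Delphi reduction rule: nine experts independently
# flag domains for removal; a domain is dropped when more than seven
# agree. Domain names and vote probabilities are invented.
random.seed(0)
N_EXPERTS, DROP_THRESHOLD = 9, 7

domains = {"fatigue": 0.1, "sleep disturbance": 0.2,
           "itch": 0.9, "handwriting": 0.95}   # P(an expert votes 'remove')

def delphi_round(domains):
    kept = {}
    for name, p_remove in domains.items():
        votes = sum(random.random() < p_remove for _ in range(N_EXPERTS))
        if votes <= DROP_THRESHOLD:            # fewer than 8 of 9 -> keep
            kept[name] = p_remove
    return kept

round_num = 1
while True:
    kept = delphi_round(domains)
    if kept == domains:                        # no change -> consensus reached
        break
    domains, round_num = kept, round_num + 1
print(f"Consensus after {round_num} round(s):", sorted(domains))
```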

Predictive validation and the re-analysis of cost-effectiveness: do we dare to tread? PharmacoEconomics [PubMed] Published 22nd August 2017

PharmacoEconomics treated us to a provocative editorial regarding predictive validation and the re-analysis of cost-effectiveness models – a call to arms of sorts. For those (like me) who are not modelling experts, predictive validation (aka 4th-order validation) refers to the comparison of model outputs with data that are collected after the initial analysis of the model. So essentially you’re comparing what you modelled would happen with what actually happened. The literature suggests that predictive validation is widely ignored. Its importance is highlighted with a case study in which predictive validity was examined three years after the end of a trial: upon reanalysis, the model performed poorly. It was then revised, which led to a much better fit to the prospective data. Predictive validation can, therefore, be used to identify sources of inaccuracy in models. If predictive validity were examined more routinely, improvements in model quality more generally would be possible. Furthermore, it might be possible to identify specific contexts where poor predictive validity is prevalent and which thus require further research. The authors highlight the field of advanced cancers as a particularly relevant context, where uncertainty around survival curves is prevalent. By actively scheduling further data collection and updating the survival curves, we can reduce the uncertainty surrounding the value of high-cost drugs. Predictive validation can also inform other aspects of the modelling process, such as the best choice of time point from which to extrapolate, or credible rates of change in predicted hazards. The authors suggest using expected value of information analysis to identify the technologies with the largest costs of uncertainty, in order to prioritise where predictive validity should be assessed. NICE and other reimbursement bodies require continued data collection for ‘some’ new technologies, so the processes are already in place for future studies to be designed and implemented in a way that captures such data and allows later re-analysis. Assessing predictive validity seems eminently sensible, but there are barriers. Money is the obvious issue: extended prospective data collection and re-analysis of models require resources. It does, however, have the potential to save money and improve health in the long run. The authors note how, in a recent study, they demonstrated that a drug for osteoporosis that had been recommended by Australia’s Pharmaceutical Benefits Advisory Committee was not actually cost-effective when further data were examined. There is clearly value to be had from predictive validation and re-analysis – it’s hard to disagree with the authors, and we should probably be campaigning for longer-term follow-up, re-analysis, and wider acknowledgement of the desirability of predictive validity.
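For the uninitiated, the basic check is simple to sketch: take the survival curve the model projected, compare it with survival observed in data collected later, and re-estimate if the fit is poor. The numbers below are illustrative placeholders, not those from the editorial's case study.

```python
import numpy as np

# Minimal sketch of a predictive validation check: compare the survival
# curve a model projected at submission with survival observed in data
# collected afterwards. All numbers are illustrative placeholders.
years = np.arange(1, 6)
predicted = np.exp(-0.10 * years)        # original model: exponential, rate 0.10
observed = np.array([0.88, 0.76, 0.66, 0.57, 0.49])  # later registry data

mae = np.abs(predicted - observed).mean()
print(f"Mean absolute error of original model: {mae:.3f}")

# Re-estimate the hazard from the new data and compare again, mimicking
# the model revision step described in the editorial.
rate = -np.polyfit(years, np.log(observed), 1)[0]    # refit exponential rate
refit = np.exp(-rate * years)
print(f"Refitted rate: {rate:.3f}, "
      f"new MAE: {np.abs(refit - observed).mean():.3f}")
```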

How should cost-of-illness studies be interpreted? The Lancet Psychiatry [PubMed] Published 7th September 2017

It’s a good question – cost-of-illness studies are commonplace, but are they useful from a health economics perspective? A comment piece in The Lancet Psychiatry examines this issue using the case study of self-harm and suicide. It focuses on a recent publication by Tsiachristas et al, which examines the hospital resource use and care costs for all presentations of self-harm in a UK hospital. Each episode of self-harm cost £809, and when extrapolated to the UK as a whole, the cost was £162 million. Over 30% of these costs came from psychological assessments, which, despite being recommended by NICE, only 75% of self-harming patients received. If all self-harming patients received assessments as recommended by NICE, another £51 million would be added to the bill. The author raises the question of how much use this information is to health economists. Nearly all cost-of-illness studies end up concluding that i) the illness costs a lot, and ii) money could be saved by reducing or ameliorating the underlying factors that cause it. Is this helpful? Well, not particularly: by focusing only on one illness, there is no consideration of the opportunity cost. If you spend money preventing one condition then that money will be displacing resources elsewhere; likewise, resources spent reducing one illness will likely be balanced by increased spending on another. The author highlights this with a thought experiment: “imagine a world where a cost of illness study has been done for every possible disease and that the total cost of illness was aggregated. The counterfactual from such an exercise is a world where nobody gets sick and everybody dies suddenly at some pre-determined age”. Another issue is that, more often than not, cost-of-illness studies conclude that more, not less, should be spent on a problem; in the self-harm example, an extra £51 million on psychological assessments. Similarly, the study highlights the extra cost of psychological assessments rather than the glaring issue that 25% of those who attend hospital for self-harm are not getting the recommended psychological assessments. This links into the final point: that cost-of-illness studies neglect the benefits being achieved. With the negatives out of the way, there are at least a couple of positives I can think of off the top of my head: i) identification of key cost drivers, and ii) information for use in economic models. The take-home message is that although there is some use to cost-of-illness studies, from a health economics perspective we (as a field) would probably be better off steering clear.
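For what it's worth, the arithmetic behind such headline figures is easy to reproduce. The sketch below uses only the numbers quoted above; the implied episode count is my own inference from those figures, not a number reported in the paper.

```python
# Back-of-envelope using only the figures quoted above. The implied
# episode count is an inference, not a number reported in the paper.
cost_per_episode = 809                 # GBP per self-harm presentation
uk_total = 162_000_000                 # GBP, extrapolated UK-wide cost

episodes = uk_total / cost_per_episode
print(f"Implied presentations per year: {episodes:,.0f}")  # ~200,000

extra_assessments = 51_000_000         # cost of assessing the remaining 25%
print(f"Total with full NICE-recommended coverage: "
      f"{(uk_total + extra_assessments) / 1e6:,.0f} million GBP")
```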