Rita Faria’s journal round-up for 13th August 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Analysis of clinical benefit, harms, and cost-effectiveness of screening women for abdominal aortic aneurysm. The Lancet [PubMed] Published 26th July 2018

This study is an excellent example of the power and flexibility of decision models to help inform decisions on screening policies.

In many countries, screening for abdominal aortic aneurysm is offered to older men but not to women. This is because screening was found to be beneficial and cost-effective, based on evidence from RCTs in older men. In contrast, there is no direct evidence for women. To inform this question, the study team developed a decision model to simulate the benefits and costs of screening women.

This study has many fascinating features. Not only does it simulate the outcomes of expanding the current UK screening policy for men to include women, but also of other policies with different age parameters, diagnostic thresholds and treatment thresholds.

Curiously, the most cost-effective policy for women is not the current UK policy for men. This shows the importance of including the full range of options in the evaluation, rather than just what is done now. Unfortunately, the paper is sparse on detail about how the various policies were devised and whether other, more cost-effective policies may have been left out.

The key cost-effectiveness driver is the probability of having the disease and its presentation (i.e. the distribution of the aortic diameter), which is quite common in cost-effectiveness analyses of diagnostic tests. Neither of these parameters requires an RCT to be estimated. This means that, in principle, we could reduce the uncertainty about which policy to fund by conducting a study on the prevalence of the disease, rather than an RCT on whether a specific policy works.

An exciting aspect is that treatment itself could be better targeted, in particular, that lowering the threshold for treatment could reduce non-intervention rates and operative mortality. The implication is that there may be scope to improve the cost-effectiveness of management, which in turn will leave greater scope for investment in screening. Could this be the next question to be tackled by this remarkable model?

Establishing the value of diagnostic and prognostic tests in health technology assessment. Medical Decision Making [PubMed] Published 13th March 2018

Keeping on the topic of the cost-effectiveness of screening and diagnostic tests, this is a paper on how to evaluate tests in a manner consistent with health technology assessment principles. This paper has been around for a few months, but it’s only now that I’ve had the chance to give it the careful read that such a well thought out paper deserves.

Marta Soares and colleagues lay out an approach to determine the most cost-effective way to use diagnostic and prognostic tests. They start by explaining that the value of the test is mostly in informing better management decisions. This means that the cost-effectiveness of testing necessarily depends on the cost-effectiveness of management.

The paper also spells out that the cost-effectiveness of testing depends on the prevalence of the disease, as we saw in the paper above on screening for abdominal aortic aneurysm. Clearly, the cost-effectiveness of testing depends on the accuracy of the test.
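To see why prevalence matters so much, a toy Bayes' theorem calculation (all numbers invented for illustration; nothing here is from either paper) shows how prevalence and test accuracy combine to determine what a positive result actually means:

```python
# Illustrative only: hypothetical accuracy and prevalence figures.
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# The same test, applied in populations with different prevalence:
for prev in (0.01, 0.10):
    ppv = positive_predictive_value(prev, sensitivity=0.9, specificity=0.9)
    print(f"prevalence {prev:.0%}: PPV = {ppv:.1%}")
```

With 90% sensitivity and specificity, a positive result means a roughly 8% chance of disease at 1% prevalence, but a 50% chance at 10% prevalence; the downstream value of testing, and hence its cost-effectiveness, shifts accordingly.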

Importantly, the paper highlights that the evaluation should compare all possible ways of using the test. A decision problem with 1 test and 1 treatment yields 6 strategies, of which 3 are relevant: no test and treat all; no test and treat none; test and treat if positive. If the reference test is added, another 3 strategies need to be considered. This shows how complex a cost-effectiveness analysis of a test can quickly become! In my paper with Marta and others, for example, we ended up with 383 testing strategies.
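The counting argument can be sketched in a few lines of code (my own illustration, not the paper's notation): a strategy specifies whether to test and what to do under each result, and the no-test strategies are those where the action cannot depend on the result.

```python
# Illustrative sketch of the strategy space for 1 binary test and 1 treatment.
from itertools import product

# A strategy is: (test or not, action if positive, action if negative).
strategies = set()
for treat_if_pos, treat_if_neg in product([True, False], repeat=2):
    strategies.add(("test", treat_if_pos, treat_if_neg))
for treat_all in (True, False):  # no test: same action regardless of result
    strategies.add(("no test", treat_all, treat_all))

print(len(strategies))  # 6 strategies in total

# Only 3 are relevant: testing but ignoring the result just duplicates
# 'treat all' or 'treat none' (plus the test cost), and treating only
# negatives is ruled out for an informative test.
relevant = {("no test", True, True), ("no test", False, False), ("test", True, False)}
print(len(relevant))  # 3 relevant strategies
```

Adding a second test multiplies the result-dependent combinations, which is how a real problem balloons to hundreds of strategies.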

The discussion is excellent, particularly about the limitations of end-to-end studies (which compare testing strategies in terms of their end outcomes e.g. health). End-to-end studies can only compare a limited subset of testing strategies and may not allow for the modelling of the outcomes of strategies beyond those compared in the study. Furthermore, end-to-end studies are likely to be inefficient given the large sample sizes and long follow-up required to detect differences in outcomes. I wholeheartedly agree that primary studies should focus on the prevalence of the disease and the accuracy of the test, leaving the evaluation of the best way to use the test to decision modelling.

Reasonable patient care under uncertainty. Health Economics [PubMed] Published 22nd August 2018

And for my third paper for the week, something completely different. But so worth reading! Charles Manski provides an overview of his work on how to use the available evidence to make decisions under uncertainty. It is accompanied by comments from Karl Claxton, Emma McIntosh, and Anirban Basu, together with Manski’s response. The set is a superb read and great food for thought.

Manski starts with the premise that we make decisions about which course of action to take without having full information about what is best; i.e. under uncertainty. This is uncontroversial and well accepted, ever since Arrow’s seminal paper.

More contentious is Manski’s view that clinicians’ decisions for individual patients may be better than guideline recommendations aimed at the ‘average’ patient, because clinicians can take into account more information about the specific individual patient. I would contend that it is unrealistic to expect clinicians to keep pace with new knowledge in medicine, given how fast and in what volume it is generated. Furthermore, clinicians, like all other people, are unlikely to be fully rational in their decision-making process.

Most fascinating was Section 6, on decision theory under uncertainty. Manski focussed on the minimax-regret criterion. I had not heard of this approach before, so Manski’s explanation was quite the eye-opener.
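For readers who, like me, are new to it, here is a minimal worked example of minimax regret with invented numbers: for each state of the world, compute each option's shortfall against the best achievable outcome in that state, then pick the option whose worst-case shortfall is smallest.

```python
# A minimal minimax-regret sketch; all payoffs are made up.
# Rows: candidate treatments; columns: possible states of the world.
# Entries: health outcome (e.g. QALYs) of each treatment in each state.
outcomes = {
    "treatment A": [10, 2],
    "treatment B": [7, 6],
    "treatment C": [4, 5],
}
n_states = 2

# Regret = shortfall versus the best achievable outcome in that state.
best = [max(o[s] for o in outcomes.values()) for s in range(n_states)]
regret = {t: [best[s] - o[s] for s in range(n_states)]
          for t, o in outcomes.items()}

# Minimax regret: choose the treatment with the smallest worst-case regret.
choice = min(outcomes, key=lambda t: max(regret[t]))
print(choice)  # treatment B: worst regret 3, versus 4 for A and 6 for C
```

Note how the criterion can favour a 'hedging' option (B) that is best in neither state, which is exactly the flavour of Manski's diversification argument.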

Manski concludes by recommending that central health care planners take a portfolio approach to their guidelines (adaptive diversification), coupled with the minimax-regret criterion to update the guidelines as more information emerges (adaptive minimax-regret). Whether the minimax-regret criterion is the best is a question that I will leave to better brains than mine. A more immediate question is how feasible it is to implement this adaptive diversification, particularly in instituting a process in which data are systematically collected and analysed to update the guidelines. In his response, Manski suggests that specialists in decision analysis should become members of the multidisciplinary clinical team and that decision analysis should be taught in medicine courses. This resonates with my own view that we need to do better at helping people use information to make better decisions.


Chris Sampson’s journal round-up for 5th February 2018


Cost-effectiveness analysis of germ-line BRCA testing in women with breast cancer and cascade testing in family members of mutation carriers. Genetics in Medicine [PubMed] Published 4th January 2018

The idea of testing women for BRCA mutations – faulty genes that can increase the probability and severity of breast and ovarian cancers – periodically makes it into the headlines. That’s not just because of Angelina Jolie. It’s also because it’s a challenging and active area of research with many uncertainties. This new cost-effectiveness analysis evaluates a programme that incorporates cascade testing; testing relatives of mutation carriers. The idea is that this could increase the effectiveness of the programme with a reduced cost-per-identification, as relatives of mutation carriers are more likely to also carry a mutation. The researchers use a cohort-based Markov-style decision analytic model. A programme with three test cohorts – i) women with unilateral breast cancer and a risk prediction score >10%, ii) first-degree relatives, and iii) second-degree relatives – was compared against no testing. A positive result in the original high-risk individual leads to testing in the first- and second-degree relatives, with the number of subsequent tests occurring in the model determined by assumptions about family size. Women who test positive can receive risk-reducing mastectomy and/or bilateral salpingo-oophorectomy (removal of the ovaries). The results are favourable to the BRCA testing programme, at $19,000 (Australian) per QALY for testing affected women only and $15,000 when the cascade testing of family members was included, with high probabilities of cost-effectiveness at $50,000 per QALY. I’m a little confused by the model. The model includes the states ‘BRCA positive’ and ‘Breast cancer’, which clearly are not mutually exclusive. And it isn’t clear how women entering the model with breast cancer go on to enjoy QALY benefits compared to the no-test group. I’m definitely not comfortable with the assumption that there is no disutility associated with risk-reducing surgery.
I also can’t see where the cost of identifying the high-risk women in the first place was accounted for. But this is a model, after all. The findings appear to be robust to a variety of sensitivity analyses. Part of the value of testing lies in the information it provides about people beyond the individual patient. Clearly, if we want to evaluate the true value of testing then this needs to be taken into account.
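For readers unfamiliar with cohort Markov models of the kind used here, a generic three-state sketch (states, transition probabilities and utilities all hypothetical, not taken from the paper) also illustrates why model states should be mutually exclusive and exhaustive: each cycle, the whole cohort must be distributed across the states, with probabilities summing to one.

```python
# A generic three-state Markov cohort sketch; all numbers are hypothetical.
states = ["well", "cancer", "dead"]
transition = [  # rows: from-state; columns: to-state; each row sums to 1
    [0.95, 0.04, 0.01],
    [0.00, 0.85, 0.15],
    [0.00, 0.00, 1.00],
]
utility = {"well": 1.0, "cancer": 0.7, "dead": 0.0}

cohort = [1.0, 0.0, 0.0]  # everyone starts in 'well'
qalys = 0.0
for cycle in range(40):  # one-year cycles, undiscounted for simplicity
    qalys += sum(p * utility[s] for p, s in zip(cohort, states))
    cohort = [sum(cohort[i] * transition[i][j] for i in range(3))
              for j in range(3)]

print(f"undiscounted QALYs per person: {qalys:.2f}")
```

Overlapping states like 'BRCA positive' and 'Breast cancer' break this bookkeeping unless they are recast as joint states (e.g. 'BRCA positive with breast cancer'), which is why the mutual-exclusivity point matters.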

Economic evaluation of direct-acting antivirals for hepatitis C in Norway. PharmacoEconomics Published 2nd February 2018

Direct-acting antivirals (DAAs) are those new drugs that gave NICE a headache a few years back because they were – despite being very effective and high-value – unaffordable. DAAs are essentially curative, which means that they can reduce resource use over a long time horizon. This makes cost-effectiveness analysis in this context challenging. In this new study, the authors conduct an economic evaluation of DAAs compared with the previous class of treatment, in the Norwegian context. Importantly, the researchers sought to take into account the rebates that have been agreed in Norway, which mean that the prices are effectively reduced by up to 50%. There are now lots of different DAAs available. Furthermore, hepatitis C infection corresponds to several different genotypes. This means that there is a need to identify which treatments are most (cost-)effective for which groups of patients; this isn’t simply a matter of A vs B. The authors use a previously developed model that incorporates projections of the disease up to 2030, though the authors extrapolate to a 100-year time horizon. The paper presents cost-effectiveness acceptability frontiers for each of genotypes 1, 2, and 3, clearly demonstrating which medicines are the most likely to be cost-effective at given willingness-to-pay thresholds. For all three genotypes, at least one of the DAA options is most likely to be cost-effective above a threshold of €70,000 per QALY (which is apparently recommended in Norway). The model predicts that if everyone received the most cost-effective strategy then Norway would expect to see around 180 hepatitis C patients in 2030 instead of the 300-400 seen in the last six years. The study also presents the price rebates that would be necessary to make currently sub-optimal medicines cost-effective. The model isn’t that generalisable. It’s very much Norway-specific as it reflects the country’s treatment guidelines. 
It also only looks at people who inject drugs – a sub-population whose importance can vary a lot from one country to the next. I expect this will be a valuable piece of work for Norway, but it strikes me as odd that “affordability” or “budget impact” aren’t even mentioned in the paper.
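For anyone unfamiliar with the acceptability curves and frontiers the paper presents, the mechanics can be sketched from probabilistic sensitivity analysis (PSA) output. This is a minimal illustration with entirely invented numbers, not the paper's model:

```python
# Hedged sketch: building a cost-effectiveness acceptability curve from
# PSA draws. All distributions and values are invented.
import random

random.seed(1)
# Simulated PSA draws of (incremental cost, incremental QALYs) for a
# hypothetical DAA versus the older regimen.
draws = [(random.gauss(30000, 8000), random.gauss(0.6, 0.2))
         for _ in range(1000)]

def prob_cost_effective(draws, threshold):
    """Share of draws with positive incremental net monetary benefit."""
    return sum(1 for dc, dq in draws if threshold * dq - dc > 0) / len(draws)

for wtp in (30000, 70000, 110000):
    print(f"WTP {wtp}: P(cost-effective) = {prob_cost_effective(draws, wtp):.2f}")
```

Plotting that probability against the willingness-to-pay threshold gives the acceptability curve; doing it for several competing strategies and keeping, at each threshold, the strategy with the highest expected net benefit gives the frontier.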

Cost-effectiveness of prostate cancer screening: a systematic review of decision-analytical models. BMC Cancer [PubMed] Published 18th January 2018

You may have seen prostate cancer in the headlines last week. Despite the number of people in the UK dying each year from prostate cancer now being greater than the number of people dying from breast cancer, prostate cancer screening remains controversial. This is because over-detection and over-treatment are common and harmful. Plenty of cost-effectiveness studies have been conducted in the context of detecting and treating prostate cancer. But there are various ways of modelling the problem and various specifications of screening programme that can be evaluated. So here we have a systematic review of cost-effectiveness models evaluating prostate-specific antigen (PSA) blood tests as a basis for screening. From a haul of 1010 studies, 10 made it into the review. The studies modelled lots of different scenarios, with alternative screening strategies, PSA thresholds, and treatment pathways. The results are not consistent. Many of the scenarios evaluated in the studies were more costly and less effective than current practice (which tended to be the lack of any formal screening programme). None of the UK-based cost-per-QALY estimates favoured screening. The authors summarise the methodological choices made in each study and consider the extent to which this relates to the pathways being modelled. They also specify the health state utility values used in the models. This will be a very useful reference point for anyone trying their hand at a prostate cancer screening model. Of the ten studies included in the review, four of them found at least one screening programme to be potentially cost-effective. ‘Adaptive screening’ – whereby individuals’ recall to screening was based on their risk – was considered in two studies using patient-level simulations. The authors suggest that cohort-level modelling could be sufficient where screening is not determined by individual risk level. 
There are also warnings against inappropriate definition of the comparator, which is likely to be opportunistic screening rather than a complete absence of screening. Generally speaking, a lack of good data seems to be part of the explanation for the inconsistency in the findings. It could be some time before we have a clearer understanding of how to implement a cost-effective screening programme for prostate cancer.


Thesis Thursday: Caroline Vass

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Caroline Vass who has a PhD from the University of Manchester. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Using discrete choice experiments to value benefits and risks in primary care
Supervisors
Katherine Payne, Stephen Campbell, Daniel Rigby
Repository link
https://www.escholar.manchester.ac.uk/uk-ac-man-scw:295629

Are there particular challenges associated with asking people to trade-off risks in a discrete choice experiment?

The challenge of communicating risk in general, not just in DCEs, was one of the things which drew me to the PhD. I’d heard a TED talk discussing a study which asked people’s understanding of weather forecasts. Although most people think they understand a simple statement like “there’s a 30% chance of rain tomorrow”, few people correctly interpreted that as meaning it will rain 30% of the days like tomorrow. Most interpret it to mean there will be rain 30% of the time or in 30% of the area.

My first ever publication was reviewing the risk communication literature, which confirmed our suspicions; even highly educated samples don’t always interpret information as we expect. Therefore, testing if the communication of risk mattered when making trade-offs in a DCE seemed a pretty important topic and formed the overarching research question of my PhD.

Most of your study used data relating to breast cancer screening. What made this a good context in which to explore your research questions?

In the UK, all women are invited to participate in breast screening (either via a GP referral or at 47-50 years old). This makes every woman a potential consumer and a potential ‘patient’. I conducted a lot of qualitative research to ensure the survey text was easily interpretable, and having a disease which many people had heard of made this easier and allowed us to focus on the risk communication formats. My supervisor Prof. Katherine Payne had also been working on a large evaluation of stratified screening which made contacting experts, patients and charities easier.

There are also national screening participation figures so we were able to test if the DCE had any real-world predictive value. Luckily, our estimates weren’t too far off the published uptake rates for the UK!

How did you come to use eye-tracking as a research method, and were there any difficulties in employing a method not widely used in our field?

I have to credit my supervisor Prof. Dan Rigby with planting the seed and introducing me to the method. I did a bit of reading into what psychologists thought you could measure using eye-movements and thought it was worth further investigation. I literally found people publishing with the technology at our institution and knocked on doors until someone would let me use it! If the University of Manchester didn’t already have the equipment, it would have been much more challenging to collect these data.

I then discovered the joys of lab-based work which I think many health economists, fortunately, don’t encounter in their PhDs. The shared bench, people messing with your experiment set-up, restricted lab time which needs to be booked weeks in advance etc. I’m sure it will all be worth it… when the paper is finally published.

What are the key messages from your research in terms of how we ought to be designing DCEs in this context?

I had a bit of a null result on the risk communication formats: I found they didn’t affect preferences. Looking back, I think that might have been due to the types of numbers I was presenting (5%, 10% and 20% are easier to understand), and maybe people have a lot of knowledge about the risks of breast screening. It certainly warrants further research to see if my finding holds in other settings. There is a lot of support for visual risk communication formats like icon arrays in other literatures, and their addition didn’t seem to do any harm.

Some of the most interesting results came from the think-aloud interviews I conducted with female members of the public. Although I originally wanted to focus on their interpretation of the risk attributes, people started verbalising all sorts of interesting behaviour and strategies. Some of it aligned with economic concepts I hadn’t thought of, such as feelings of regret associated with opting out and discounting both the costs and health benefits of later screens in the programme. But there were also some glaring violations, like ignoring certain attributes, associating cost with quality, using other people’s budget constraints to make choices, and trying to game the survey with protest responses. So perhaps people designing DCEs for benefit-risk trade-offs specifically, or in healthcare more generally, should be aware that respondents can and do adopt simplifying heuristics. Is this evidence of the benefits of qualitative research in this context? I make that argument here.

Your thesis describes a wealth of research methods and findings, but is there anything that you wish you could have done that you weren’t able to do?

Achieved a larger sample size for my eye-tracking study!