Chris Sampson’s journal round-up for 5th February 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Cost-effectiveness analysis of germ-line BRCA testing in women with breast cancer and cascade testing in family members of mutation carriers. Genetics in Medicine [PubMed] Published 4th January 2018

The idea of testing women for BRCA mutations – faulty genes that can increase the probability and severity of breast and ovarian cancers – periodically makes it into the headlines. That’s not just because of Angelina Jolie. It’s also because it’s a challenging and active area of research with many uncertainties. This new cost-effectiveness analysis evaluates a programme that incorporates cascade testing: testing the relatives of mutation carriers. The idea is that this could increase the effectiveness of the programme at a reduced cost per identification, as relatives of mutation carriers are more likely to also carry a mutation. The researchers use a cohort-based Markov-style decision analytic model. A programme with three test cohorts – i) women with unilateral breast cancer and a risk prediction score >10%, ii) first-degree relatives, and iii) second-degree relatives – was compared against no testing. A positive result in the original high-risk individual leads to testing in the first- and second-degree relatives, with the number of subsequent tests occurring in the model determined by assumptions about family size. Women who test positive can receive risk-reducing mastectomy and/or bilateral salpingo-oophorectomy (removal of the ovaries). The results are favourable to the BRCA testing programme, at $19,000 (Australian) per QALY for testing affected women only and $15,000 per QALY when the cascade testing of family members is included, with high probabilities of cost-effectiveness at a willingness to pay of $50,000 per QALY. I’m a little confused by the model. It includes the states ‘BRCA positive’ and ‘Breast cancer’, which clearly are not mutually exclusive. It also isn’t clear how women entering the model with breast cancer go on to enjoy QALY benefits compared to the no-test group. I’m definitely not comfortable with the assumption that there is no disutility associated with risk-reducing surgery. I also can’t see where the cost of identifying the high-risk women in the first place was accounted for. But this is a model, after all. The findings appear to be robust to a variety of sensitivity analyses. Part of the value of testing lies in the information it provides about people beyond the individual patient. Clearly, if we want to evaluate the true value of testing then this needs to be taken into account.
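
For readers who want to see the cascade-testing logic in numbers, here is a minimal sketch of the cost-per-identification and ICER arithmetic. It is not the authors’ model: the test cost, carrier probabilities, and programme totals below are invented purely to illustrate why testing the relatives of carriers lowers the cost per mutation identified.

```python
# Toy cost-per-identification and ICER arithmetic for a cascade-testing
# programme. All inputs are illustrative assumptions, not values from the paper.

def cost_per_carrier(test_cost, carrier_prob, n_tested):
    """Expected testing cost per mutation carrier identified."""
    expected_carriers = carrier_prob * n_tested
    return (test_cost * n_tested) / expected_carriers

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio of 'new' vs 'old'."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

TEST_COST = 1200.0          # assumed AU$ per BRCA test
P_CARRIER_INDEX = 0.12      # assumed carrier probability among high-risk affected women
P_CARRIER_RELATIVE = 0.50   # first-degree relatives of a confirmed carrier

print(cost_per_carrier(TEST_COST, P_CARRIER_INDEX, n_tested=1000))     # index cohort
print(cost_per_carrier(TEST_COST, P_CARRIER_RELATIVE, n_tested=1000))  # cascade cohort

# Programme-level ICER versus no testing, using made-up cohort totals.
print(icer(cost_new=3.8e6, qaly_new=10_200, cost_old=2.0e6, qaly_old=10_100))
```

In the paper itself these quantities emerge from the Markov-style cohort model rather than a two-line calculation, but the direction of the effect is the same.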

Economic evaluation of direct-acting antivirals for hepatitis C in Norway. PharmacoEconomics Published 2nd February 2018

Direct-acting antivirals (DAAs) are those new drugs that gave NICE a headache a few years back because they were – despite being very effective and high-value – unaffordable. DAAs are essentially curative, which means that they can reduce resource use over a long time horizon. This makes cost-effectiveness analysis in this context challenging. In this new study, the authors conduct an economic evaluation of DAAs compared with the previous class of treatment, in the Norwegian context. Importantly, the researchers sought to take into account the rebates that have been agreed in Norway, which mean that the prices are effectively reduced by up to 50%. There are now lots of different DAAs available. Furthermore, hepatitis C infection corresponds to several different genotypes. This means that there is a need to identify which treatments are most (cost-)effective for which groups of patients; this isn’t simply a matter of A vs B. The authors use a previously developed model that incorporates projections of the disease up to 2030, though they extrapolate to a 100-year time horizon. The paper presents cost-effectiveness acceptability frontiers for each of genotypes 1, 2, and 3, clearly demonstrating which medicines are the most likely to be cost-effective at given willingness-to-pay thresholds. For all three genotypes, at least one of the DAA options is most likely to be cost-effective above a threshold of €70,000 per QALY (which is apparently recommended in Norway). The model predicts that if everyone received the most cost-effective strategy then Norway would expect to see around 180 hepatitis C patients in 2030 instead of the 300-400 seen in the last six years. The study also presents the price rebates that would be necessary to make currently sub-optimal medicines cost-effective. The model isn’t that generalisable. It’s very much Norway-specific, as it reflects the country’s treatment guidelines. It also only looks at people who inject drugs – a sub-population whose importance can vary a lot from one country to the next. I expect this will be a valuable piece of work for Norway, but it strikes me as odd that “affordability” or “budget impact” aren’t even mentioned in the paper.
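
As a rough illustration of how a cost-effectiveness acceptability frontier is built from probabilistic sensitivity analysis output, here is a minimal sketch. The strategy names, costs, and QALYs are simulated placeholders rather than anything from the Norwegian model; the point is only the net-benefit logic applied at each willingness-to-pay threshold.

```python
# Sketch of a cost-effectiveness acceptability frontier (CEAF): at each
# willingness-to-pay value, pick the strategy with the highest expected net
# monetary benefit and report its probability of being optimal.
# All inputs are simulated placeholders, not results from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_sims = 5000
strategies = ["old regimen", "DAA option A", "DAA option B"]

# Placeholder PSA output: per-simulation costs (EUR) and QALYs per strategy.
costs = np.column_stack([
    rng.normal(40_000, 5_000, n_sims),
    rng.normal(70_000, 8_000, n_sims),
    rng.normal(65_000, 8_000, n_sims),
])
qalys = np.column_stack([
    rng.normal(10.0, 0.5, n_sims),
    rng.normal(10.8, 0.5, n_sims),
    rng.normal(10.6, 0.5, n_sims),
])

for wtp in (30_000, 70_000, 110_000):      # EUR-per-QALY thresholds
    nmb = wtp * qalys - costs              # net monetary benefit, per simulation
    frontier = int(np.argmax(nmb.mean(axis=0)))                   # highest expected NMB
    p_optimal = float(np.mean(np.argmax(nmb, axis=1) == frontier))
    print(f"WTP {wtp}: frontier = {strategies[frontier]}, P(optimal) = {p_optimal:.2f}")
```

The frontier differs from the more familiar acceptability curves in that, at each threshold, it reports only the strategy with the highest expected net benefit, which is why it suits a many-strategies comparison like this one.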

Cost-effectiveness of prostate cancer screening: a systematic review of decision-analytical models. BMC Cancer [PubMed] Published 18th January 2018

You may have seen prostate cancer in the headlines last week. Despite the number of people in the UK dying each year from prostate cancer now being greater than the number of people dying from breast cancer, prostate cancer screening remains controversial. This is because over-detection and over-treatment are common and harmful. Plenty of cost-effectiveness studies have been conducted in the context of detecting and treating prostate cancer. But there are various ways of modelling the problem and various specifications of screening programme that can be evaluated. So here we have a systematic review of cost-effectiveness models evaluating prostate-specific antigen (PSA) blood tests as a basis for screening. From a haul of 1010 studies, 10 made it into the review. The studies modelled lots of different scenarios, with alternative screening strategies, PSA thresholds, and treatment pathways. The results are not consistent. Many of the scenarios evaluated in the studies were more costly and less effective than current practice (which tended to be the lack of any formal screening programme). None of the UK-based cost-per-QALY estimates favoured screening. The authors summarise the methodological choices made in each study and consider the extent to which this relates to the pathways being modelled. They also specify the health state utility values used in the models. This will be a very useful reference point for anyone trying their hand at a prostate cancer screening model. Of the ten studies included in the review, four of them found at least one screening programme to be potentially cost-effective. ‘Adaptive screening’ – whereby individuals’ recall to screening was based on their risk – was considered in two studies using patient-level simulations. The authors suggest that cohort-level modelling could be sufficient where screening is not determined by individual risk level. There are also warnings against inappropriate definition of the comparator, which is likely to be opportunistic screening rather than a complete absence of screening. Generally speaking, a lack of good data seems to be part of the explanation for the inconsistency in the findings. It could be some time before we have a clearer understanding of how to implement a cost-effective screening programme for prostate cancer.
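
For anyone trying their hand at such a model, the basic incremental analysis behind statements like “more costly and less effective than current practice” can be sketched in a few lines. The strategies and numbers below are invented for illustration and are not taken from any of the ten reviewed studies.

```python
# Simple incremental analysis for competing screening strategies: flag strongly
# dominated options and compute ICERs along the efficiency frontier.
# All values are invented purely to illustrate the arithmetic.

strategies = {               # name: (cost per person, QALYs per person)
    "no formal screening": (1_000.0, 20.00),
    "PSA every 4 years":   (1_400.0, 20.01),
    "PSA every 2 years":   (1_900.0, 20.00),   # costs more, no extra QALYs
    "adaptive screening":  (2_400.0, 20.03),
}

# Strong dominance: some other strategy is at least as effective and no more costly.
dominated = {
    name for name, (c, q) in strategies.items()
    if any(c2 <= c and q2 >= q and (c2, q2) != (c, q)
           for c2, q2 in strategies.values())
}
print("dominated:", dominated)

# ICERs along the frontier, ordered by effectiveness.
frontier = sorted(
    ((c, q, name) for name, (c, q) in strategies.items() if name not in dominated),
    key=lambda x: x[1],
)
for (c0, q0, n0), (c1, q1, n1) in zip(frontier, frontier[1:]):
    print(f"{n1} vs {n0}: ICER = {(c1 - c0) / (q1 - q0):,.0f} per QALY")
```

A complete analysis would also rule out extendedly dominated options; this sketch only handles strong dominance.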


Chris Sampson’s journal round-up for 31st July 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

An exploratory study on using principal-component analysis and confirmatory factor analysis to identify bolt-on dimensions: the EQ-5D case study. Value in Health Published 14th July 2017

I’m not convinced by the idea of using bolt-on dimensions for multi-attribute utility instruments. A state description with a bolt-on refers to a different evaluative space, and therefore is not comparable with the progenitor, thus undermining its purpose. Maybe this study will persuade me otherwise. The authors analyse data from the Multi Instrument Comparison database, including responses to EQ-5D-5L, SF-6D, HUI3, AQoL-8D and 15D questionnaires, as well as the ICECAP and three measures of subjective well-being. Content analysis was used to allocate items from the measures to underlying constructs of health-related quality of life. The sample of 8022 was randomly split, with one half used for principal-component analysis and confirmatory factor analysis, and the other used for validation. This approach looks at the underlying constructs associated with health-related quality of life and the extent to which individual items from the questionnaires influence them. Candidate items for bolt-ons are those items from questionnaires other than the EQ-5D that are important and not otherwise captured by the EQ-5D questions. The principal-component analysis supported a 9-component model: physical functioning, psychological symptoms, satisfaction, pain, relationships, speech/cognition, hearing, energy/sleep and vision. The EQ-5D only covered physical functioning, psychological symptoms and pain. Therefore, items from measures that explain the other 6 components represent bolt-on candidates for the EQ-5D. This study succeeds in its aim. It demonstrates what appears to be a meaningful quantitative approach to identifying items not fully captured by the EQ-5D, which might be added as bolt-ons. But it doesn’t answer the question of which (if any) of these bolt-ons ought to be added, or in what circumstances. That would at least require pre-definition of the evaluative space, which might not correspond to the authors’ chosen model of health-related quality of life. If it does, then these findings would be more persuasive as a reason to do away with the EQ-5D altogether.
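
A stripped-down version of the split-sample principal-component step might look like the sketch below. The item labels, simulated responses, number of components, and loading cut-off are all assumptions made for illustration; the actual study used the Multi Instrument Comparison data and added confirmatory factor analysis and a validation half.

```python
# Sketch: fit PCA on one half of pooled item responses, then ask which
# components have no high-loading EQ-5D item: those are candidate bolt-on
# constructs. Data and labels are simulated stand-ins, not the real dataset.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
item_labels = ["EQ_mobility", "EQ_selfcare", "EQ_activities", "EQ_pain",
               "EQ_anxiety", "SF6D_vitality", "HUI3_vision", "HUI3_hearing",
               "AQoL_relationships", "15D_sleep"]
X = rng.integers(1, 6, size=(8022, len(item_labels))).astype(float)  # fake 1-5 responses

half = X.shape[0] // 2
pca = PCA(n_components=6).fit(X[:half])   # estimation half; other half held out

LOADING_CUTOFF = 0.4                      # arbitrary rule for an 'important' loading
for k, component in enumerate(pca.components_):
    high = [item_labels[j] for j in np.nonzero(np.abs(component) >= LOADING_CUTOFF)[0]]
    covered_by_eq5d = any(lbl.startswith("EQ_") for lbl in high)
    verdict = "covered by EQ-5D" if covered_by_eq5d else "bolt-on candidate"
    print(f"component {k}: {high} -> {verdict}")
```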

Endogenous information, adverse selection, and prevention: implications for genetic testing policy. Journal of Health Economics Published 13th July 2017

If you can afford it, there are all sorts of genetic tests available nowadays. Some of them could provide valuable information about the risk of particular health problems in the future. Therefore, they can be used to guide individuals’ decisions about preventive care. But if the individual’s health care is financed through insurance, that same information could prove costly. It could reinforce that classic asymmetry of information and adverse selection problem. So we need policy that deals with this. This study considers the incentives and insurance market outcomes associated with four policy options: i) mandatory disclosure of test results, ii) voluntary disclosure, iii) insurers knowing that the test was taken but not the results, and iv) a complete ban on the use of test information by insurers. The authors describe a utility model that incorporates the use of prevention technologies and the available insurance contracts, amongst people who are informed or uninformed (according to whether they have taken a test) and high or low risk (according to test results). This is used to estimate the value of taking a genetic test, which differs under the four policy options. Under voluntary disclosure, the information from a genetic test always has non-negative value to the individual, who can choose to only tell their insurer if it’s favourable. The analysis shows that, in terms of social welfare, mandatory disclosure is expected to be optimal, while an information ban is dominated by all other options. These findings are in line with previous studies, which, according to the authors, were less generalisable. In the introduction, the authors state that “ethical issues are beyond the scope of this paper”. That’s kind of a problem. I doubt anybody who supports an information ban does so on the basis that they think it will maximise social welfare in the fashion described in this paper. More likely, they’re worried about the inequities in health that mandatory disclosure could reinforce, about which this study tells us nothing. Still, an information ban seems to be a popular policy, and studies like this indicate that such decisions should be reconsidered in light of their expected impact on social welfare.
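
The intuition behind the value of taking a test under the different disclosure rules can be captured in a stylised expected-utility calculation. The utility function, premiums, and risk probability below are invented and far simpler than the authors’ model; the sketch only illustrates why voluntary disclosure gives the test non-negative value to the individual while mandatory disclosure need not.

```python
# Stylised 'value of taking a genetic test' under two disclosure rules.
# Every number and the utility function are illustrative assumptions.
import math

P_HIGH_RISK = 0.2        # prior probability of a high-risk test result
WEALTH = 100.0
PREMIUM_POOLED = 10.0    # premium when the insurer cannot tell risk types apart
PREMIUM_LOW = 6.0        # premium offered to disclosed low-risk individuals
PREMIUM_HIGH = 25.0      # premium charged to disclosed high-risk individuals

def u(wealth):
    """Simple risk-averse utility (an assumption for the sketch)."""
    return math.log(wealth)

# No test: the individual pays the pooled premium whatever their risk.
eu_no_test = u(WEALTH - PREMIUM_POOLED)

# Voluntary disclosure: reveal a favourable (low-risk) result, withhold a bad one.
eu_voluntary = ((1 - P_HIGH_RISK) * u(WEALTH - PREMIUM_LOW)
                + P_HIGH_RISK * u(WEALTH - PREMIUM_POOLED))

# Mandatory disclosure: the insurer sees whatever the test says.
eu_mandatory = ((1 - P_HIGH_RISK) * u(WEALTH - PREMIUM_LOW)
                + P_HIGH_RISK * u(WEALTH - PREMIUM_HIGH))

print("value of testing, voluntary disclosure:", eu_voluntary - eu_no_test)  # >= 0
print("value of testing, mandatory disclosure:", eu_mandatory - eu_no_test)  # may be < 0
```

In equilibrium the premiums themselves would respond to these strategies, which is exactly the adverse selection problem the paper works through; the sketch holds them fixed for clarity.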

Returns to scientific publications for pharmaceutical products in the United States. Health Economics [PubMed] Published 10th July 2017

Publication bias is a big problem. Part of the cause is that pharmaceutical companies have no incentive to publish negative findings for their own products, though positive findings may be valuable in terms of sales. As usual, it isn’t quite that simple when you really think about it. This study looks at the effect of publications on revenue for 20 branded drugs in 3 markets – statins, rheumatoid arthritis and asthma – using an ‘event-study’ approach. The authors analyse a panel of quarterly US sales data from 2003-2013 alongside publications identified through literature searches and several drug- and market-specific covariates. Effects are estimated using first-difference and difference-in-first-difference models. The authors hypothesise that publications should have an important impact on sales in markets with high generic competition, and less in those without or with high branded competition. Essentially, this is what they find. For statins and asthma drugs, where there was some competition, clinical studies in high-impact journals increased sales to the tune of $8 million per publication. For statins, volume was not significantly affected, with mediation through price. In rheumatoid arthritis, where competition is limited, the effect on sales was mediated by the effect on volume. Studies published in lower-impact journals seemed to have a negative influence. Cost-effectiveness studies were only important in the market with high generic competition, increasing statin sales by $2.2 million on average. I’d imagine that these impacts are something of which firms already have a reasonable grasp. But this study provides value to public policy decision makers. It highlights those situations in which we might expect manufacturers to publish evidence and those in which it might be worthwhile increasing public investment to pick up the slack. It could also help identify where publication bias might be a bigger problem due to the incentives faced by pharmaceutical companies.
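
To make the estimation strategy concrete, here is a bare-bones sketch of a first-difference regression on a simulated drug-quarter panel. The panel, the 5% chance of a publication in any quarter, and the $2 million effect size are placeholder assumptions, not the paper’s data or its full specification (which also includes difference-in-first-difference models and drug- and market-specific covariates).

```python
# First-difference sketch: regress the change in quarterly sales on an
# indicator for a new publication, with quarter dummies and clustering by drug.
# The simulated panel and effect size are placeholders, not the paper's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_drugs, n_quarters = 20, 44                  # 20 branded drugs, quarterly 2003-2013

rows = []
for d in range(n_drugs):
    sales = 50 + rng.normal(0, 5)             # starting sales level ($m), assumed
    for t in range(n_quarters):
        pub = int(rng.random() < 0.05)        # a publication lands this quarter
        sales += 2.0 * pub + rng.normal(0, 1) # assumed $2m permanent bump per publication
        rows.append({"drug": d, "quarter": t, "pub": pub, "sales": sales})
panel = pd.DataFrame(rows)

# First differences within drug, then OLS of the change in sales on the indicator.
panel["d_sales"] = panel.groupby("drug")["sales"].diff()
df = panel.dropna()                           # drops each drug's first quarter
fd = smf.ols("d_sales ~ pub + C(quarter)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["drug"]})
print(fd.params["pub"], fd.bse["pub"])        # should recover roughly the assumed $2m
```

Taking first differences removes time-invariant drug-level effects, which is the usual motivation for this kind of specification.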
