Chris Sampson’s journal round-up for 5th February 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Cost-effectiveness analysis of germ-line BRCA testing in women with breast cancer and cascade testing in family members of mutation carriers. Genetics in Medicine [PubMed] Published 4th January 2018

The idea of testing women for BRCA mutations – faulty genes that can increase the probability and severity of breast and ovarian cancers – periodically makes it into the headlines. That’s not just because of Angelina Jolie. It’s also because it’s a challenging and active area of research with many uncertainties. This new cost-effectiveness analysis evaluates a programme that incorporates cascade testing: testing the relatives of mutation carriers. The idea is that this could increase the effectiveness of the programme with a reduced cost per identification, as relatives of mutation carriers are more likely to also carry a mutation. The researchers use a cohort-based Markov-style decision analytic model. A programme with three test cohorts – i) women with unilateral breast cancer and a risk prediction score >10%, ii) first-degree relatives, and iii) second-degree relatives – was compared against no testing. A positive result in the original high-risk individual leads to testing in the first- and second-degree relatives, with the number of subsequent tests occurring in the model determined by assumptions about family size. Women who test positive can receive risk-reducing mastectomy and/or bilateral salpingo-oophorectomy (removal of the ovaries). The results are favourable to the BRCA testing programme, at $19,000 (Australian) per QALY for testing affected women only and $15,000 when the cascade testing of family members was included, with high probabilities of cost-effectiveness at a threshold of $50,000 per QALY. I’m a little confused by the model. It includes the states ‘BRCA positive’ and ‘Breast cancer’, which clearly are not mutually exclusive. And it isn’t clear how women entering the model with breast cancer go on to enjoy QALY benefits compared to the no-test group. I’m definitely not comfortable with the assumption that there is no disutility associated with risk-reducing surgery. I also can’t see where the cost of identifying the high-risk women in the first place was accounted for. But this is a model, after all. The findings appear to be robust to a variety of sensitivity analyses. Part of the value of testing lies in the information it provides about people beyond the individual patient. Clearly, if we want to evaluate the true value of testing then this needs to be taken into account.
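For readers who haven’t built one, here’s a minimal sketch of the kind of cohort-based Markov model the authors describe, in Python. The states, transition probabilities, costs and utilities are invented placeholders rather than the paper’s values; the point is just the mechanics of running a cohort through a transition matrix and accumulating discounted costs and QALYs.

```python
# Minimal sketch of a cohort-based Markov model of the kind described above.
# States, transition probabilities, costs and utilities are hypothetical
# placeholders, not values from the paper.
import numpy as np

states = ["Well", "Breast cancer", "Ovarian cancer", "Dead"]
n_cycles = 40          # annual cycles
discount = 0.05        # assumed annual discount rate

# Hypothetical annual transition probability matrix (rows sum to 1).
P = np.array([
    [0.96, 0.02, 0.01, 0.01],   # Well
    [0.00, 0.90, 0.00, 0.10],   # Breast cancer
    [0.00, 0.00, 0.85, 0.15],   # Ovarian cancer
    [0.00, 0.00, 0.00, 1.00],   # Dead (absorbing)
])
utilities = np.array([0.90, 0.70, 0.65, 0.00])    # QALY weights per state
costs = np.array([200.0, 15000.0, 20000.0, 0.0])  # annual costs per state (AUD, made up)

cohort = np.array([1.0, 0.0, 0.0, 0.0])  # everyone starts 'Well'
total_qalys, total_cost = 0.0, 0.0
for t in range(n_cycles):
    df = 1.0 / (1.0 + discount) ** t
    total_qalys += df * cohort @ utilities
    total_cost += df * cohort @ costs
    cohort = cohort @ P

print(f"Discounted QALYs per person: {total_qalys:.2f}")
print(f"Discounted cost per person:  {total_cost:,.0f}")
```

A testing programme would be compared against no testing by running two such cohorts with different starting distributions or transition probabilities and taking the difference in discounted costs and QALYs.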

Economic evaluation of direct-acting antivirals for hepatitis C in Norway. PharmacoEconomics Published 2nd February 2018

Direct-acting antivirals (DAAs) are those new drugs that gave NICE a headache a few years back because they were – despite being very effective and high-value – unaffordable. DAAs are essentially curative, which means that they can reduce resource use over a long time horizon. This makes cost-effectiveness analysis in this context challenging. In this new study, the authors conduct an economic evaluation of DAAs compared with the previous class of treatment, in the Norwegian context. Importantly, the researchers sought to take into account the rebates that have been agreed in Norway, which mean that the prices are effectively reduced by up to 50%. There are now lots of different DAAs available. Furthermore, hepatitis C infection corresponds to several different genotypes. This means that there is a need to identify which treatments are most (cost-)effective for which groups of patients; this isn’t simply a matter of A vs B. The authors use a previously developed model that incorporates projections of the disease up to 2030, though they extrapolate to a 100-year time horizon. The paper presents cost-effectiveness acceptability frontiers for each of genotypes 1, 2, and 3, clearly demonstrating which medicines are the most likely to be cost-effective at given willingness-to-pay thresholds. For all three genotypes, at least one of the DAA options is most likely to be cost-effective above a threshold of €70,000 per QALY (which is apparently recommended in Norway). The model predicts that if everyone received the most cost-effective strategy then Norway would expect to see around 180 hepatitis C patients in 2030, instead of the 300-400 seen in the last six years. The study also presents the price rebates that would be necessary to make currently sub-optimal medicines cost-effective. The model isn’t that generalisable. It’s very much Norway-specific as it reflects the country’s treatment guidelines. It also only looks at people who inject drugs – a sub-population whose importance can vary a lot from one country to the next. I expect this will be a valuable piece of work for Norway, but it strikes me as odd that “affordability” or “budget impact” aren’t even mentioned in the paper.
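For anyone unfamiliar with cost-effectiveness acceptability frontiers, here’s a rough sketch of how one can be read off probabilistic sensitivity analysis output. The strategies, costs and QALYs below are simulated placeholders, not the study’s results: at each willingness-to-pay threshold, the frontier reports the probability of cost-effectiveness for the strategy with the highest expected net monetary benefit.

```python
# Sketch of deriving a cost-effectiveness acceptability frontier (CEAF) from
# probabilistic sensitivity analysis (PSA) output. The PSA samples below are
# simulated placeholders, not results from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_sims, strategies = 5000, ["Old regimen", "DAA regimen A", "DAA regimen B"]

# Hypothetical PSA samples of cost and QALYs per strategy.
costs = rng.normal([20000, 45000, 50000], [3000, 5000, 5000], (n_sims, 3))
qalys = rng.normal([10.0, 11.5, 11.7], [0.5, 0.5, 0.5], (n_sims, 3))

for wtp in [30000, 70000, 100000]:            # willingness-to-pay thresholds (EUR/QALY)
    nmb = wtp * qalys - costs                 # net monetary benefit per simulation
    p_ce = np.bincount(nmb.argmax(axis=1), minlength=3) / n_sims
    best = nmb.mean(axis=0).argmax()          # strategy on the frontier at this threshold
    print(f"WTP {wtp:>7}: frontier = {strategies[best]:<13} "
          f"P(cost-effective) = {p_ce[best]:.2f}")
```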

Cost-effectiveness of prostate cancer screening: a systematic review of decision-analytical models. BMC Cancer [PubMed] Published 18th January 2018

You may have seen prostate cancer in the headlines last week. Although more people in the UK now die each year from prostate cancer than from breast cancer, prostate cancer screening remains controversial. This is because over-detection and over-treatment are common and harmful. Plenty of cost-effectiveness studies have been conducted in the context of detecting and treating prostate cancer. But there are various ways of modelling the problem and various specifications of screening programme that can be evaluated. So here we have a systematic review of cost-effectiveness models evaluating prostate-specific antigen (PSA) blood tests as a basis for screening. From a haul of 1010 studies, 10 made it into the review. The studies modelled lots of different scenarios, with alternative screening strategies, PSA thresholds, and treatment pathways. The results are not consistent. Many of the scenarios evaluated in the studies were more costly and less effective than current practice (which tended to be the lack of any formal screening programme). None of the UK-based cost-per-QALY estimates favoured screening. The authors summarise the methodological choices made in each study and consider the extent to which this relates to the pathways being modelled. They also specify the health state utility values used in the models. This will be a very useful reference point for anyone trying their hand at a prostate cancer screening model. Of the ten studies included in the review, four found at least one screening programme to be potentially cost-effective. ‘Adaptive screening’ – whereby individuals’ recall to screening was based on their risk – was considered in two studies using patient-level simulations. The authors suggest that cohort-level modelling could be sufficient where screening is not determined by individual risk level. There are also warnings against inappropriate definition of the comparator, which is likely to be opportunistic screening rather than a complete absence of screening. Generally speaking, a lack of good data seems to be part of the explanation for the inconsistency in the findings. It could be some time before we have a clearer understanding of how to implement a cost-effective screening programme for prostate cancer.

Credits


Chris Sampson’s journal round-up for 31st July 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

An exploratory study on using principal-component analysis and confirmatory factor analysis to identify bolt-on dimensions: the EQ-5D case study. Value in Health Published 14th July 2017

I’m not convinced by the idea of using bolt-on dimensions for multi-attribute utility instruments. A state description with a bolt-on refers to a different evaluative space, and therefore is not comparable with the progenitor, thus undermining its purpose. Maybe this study will persuade me otherwise. The authors analyse data from the Multi Instrument Comparison database, including responses to EQ-5D-5L, SF-6D, HUI3, AQoL 8D and 15D questionnaires, as well as the ICECAP and 3 measures of subjective well-being. Content analysis was used to allocate items from the measures to underlying constructs of health-related quality of life. The sample of 8022 was randomly split, with one half used for principal-component analysis and confirmatory factor analysis, and the other used for validation. This approach looks at the underlying constructs associated with health-related quality of life and the extent to which individual items from the questionnaires influence them. Candidate items for bolt-ons are those items from questionnaires other than the EQ-5D that are important and not otherwise captured by the EQ-5D questions. The principal-component analysis supported a 9-component model: physical functioning, psychological symptoms, satisfaction, pain, relationships, speech/cognition, hearing, energy/sleep and vision. The EQ-5D only covered physical functioning, psychological symptoms and pain. Therefore, items from measures that explain the other 6 components represent bolt-on candidates for the EQ-5D. This study succeeds in its aim. It demonstrates what appears to be a meaningful quantitative approach to identifying items not fully captured by the EQ-5D, which might be added as bolt-ons. But it doesn’t answer the question of which (if any) of these bolt-ons ought to be added, or in what circumstances. That would at least require pre-definition of the evaluative space, which might not correspond to the authors’ chosen model of health-related quality of life. If it does, then these findings would be more persuasive as a reason to do away with the EQ-5D altogether.
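As an illustration of the split-sample approach, here’s a hedged sketch in Python using simulated item responses rather than the Multi Instrument Comparison data; the number of items, the loadings and the components are entirely artificial, and it covers only the principal-component step, not the confirmatory factor analysis.

```python
# Illustrative sketch of the split-sample principal-component approach
# described above, using simulated item responses rather than the Multi
# Instrument Comparison data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_respondents, n_items = 8022, 30
responses = rng.integers(1, 6, size=(n_respondents, n_items))  # 1-5 Likert-style items

# Split the sample: one half for exploratory PCA, the other held out for validation.
explore, validate = train_test_split(responses, test_size=0.5, random_state=1)

X = StandardScaler().fit_transform(explore)
pca = PCA(n_components=9).fit(X)              # nine components, as in the paper

# Items loading heavily on components not covered by the EQ-5D would be
# candidate bolt-ons; here we just inspect the strongest loadings per component.
for k, component in enumerate(pca.components_):
    top_items = np.argsort(np.abs(component))[::-1][:3]
    print(f"Component {k + 1}: strongest items {top_items.tolist()}")
```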

Endogenous information, adverse selection, and prevention: implications for genetic testing policy. Journal of Health Economics Published 13th July 2017

If you can afford it, there are all sorts of genetic tests available nowadays. Some of them could provide valuable information about the risk of particular health problems in the future. Therefore, they can be used to guide individuals’ decisions about preventive care. But if the individual’s health care is financed through insurance, that same information could prove costly. It could reinforce that classic asymmetry of information and adverse selection problem. So we need policy that deals with this. This study considers the incentives and insurance market outcomes associated with four policy options: i) mandatory disclosure of test results, ii) voluntary disclosure, iii) insurers knowing that the test was taken, but not the results, and iv) a complete ban on the use of test information by insurers. The authors describe a utility model that incorporates the use of prevention technologies, and available insurance contracts, amongst people who are informed or uninformed (according to whether they have taken a test) and high or low risk (according to test results). This is used to estimate the value of taking a genetic test, which differs under the four policy options. Under voluntary disclosure, the information from a genetic test always has non-negative value to the individual, who can choose to tell their insurer only if it’s favourable. The analysis shows that, in terms of social welfare, mandatory disclosure is expected to be optimal, while an information ban is dominated by all other options. These findings are in line with previous studies, which, according to the authors, were less generalisable. In the introduction, the authors state that “ethical issues are beyond the scope of this paper”. That’s kind of a problem. I doubt anybody who supports an information ban does so on the basis that they think it will maximise social welfare in the fashion described in this paper. More likely, they’re worried about the inequities in health that mandatory disclosure could reinforce, about which this study tells us nothing. Still, an information ban seems to be a popular policy, and studies like this indicate that such decisions should be reconsidered in light of their expected impact on social welfare.
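To see why voluntary disclosure can never leave the tested individual worse off in this private sense, here’s a toy calculation of my own, not the authors’ model: the premiums and probabilities are made up, prevention is ignored, and so is the insurer’s response to adverse selection.

```python
# Toy numerical illustration (not the authors' model) of why a test result has
# non-negative private value under voluntary disclosure: the individual
# discloses only when the result lowers their premium.
p_high_risk = 0.2
premium_pooled = 1000.0      # premium charged when no result is disclosed
premium_low = 600.0          # premium offered on disclosing a favourable (low-risk) result
premium_high = 2600.0        # premium that would follow a disclosed high-risk result

# Voluntary disclosure: keep quiet unless disclosure is beneficial.
expected_premium_tested = (
    p_high_risk * min(premium_pooled, premium_high)
    + (1 - p_high_risk) * min(premium_pooled, premium_low)
)
value_of_test = premium_pooled - expected_premium_tested
print(f"Expected premium saving from testing: {value_of_test:.0f}")  # >= 0 by construction
```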

Returns to scientific publications for pharmaceutical products in the United States. Health Economics [PubMed] Published 10th July 2017

Publication bias is a big problem. Part of the cause is that pharmaceutical companies have no incentive to publish negative findings for their own products, though positive findings may be valuable in terms of sales. As usual, it isn’t quite that simple when you really think about it. This study looks at the effect of publications on revenue for 20 branded drugs in 3 markets – statins, rheumatoid arthritis and asthma – using an ‘event-study’ approach. The authors analyse a panel of quarterly US sales data from 2003 to 2013 alongside publications identified through literature searches and several drug- and market-specific covariates. Effects are estimated using first-difference and difference-in-first-difference models. The authors hypothesise that publications should have an important impact on sales in markets with high generic competition, and less in those without or with high branded competition. Essentially, this is what they find. For statins and asthma drugs, where there was some competition, clinical studies in high-impact journals increased sales to the tune of $8 million per publication. For statins, volume was not significantly affected, with the effect mediated through price. In rheumatoid arthritis, where competition is limited, the effect on sales was mediated by the effect on volume. Studies published in lower-impact journals seemed to have a negative influence. Cost-effectiveness studies were only important in the market with high generic competition, increasing statin sales by $2.2 million on average. I’d imagine that these impacts are something of which firms already have a reasonable grasp. But this study provides value to public policy decision makers. It highlights those situations in which we might expect manufacturers to publish evidence and those in which it might be worthwhile increasing public investment to pick up the slack. It could also help identify where publication bias might be a bigger problem due to the incentives faced by pharmaceutical companies.
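For those curious about the mechanics, here’s a hedged sketch of a first-difference specification in the spirit of the event-study approach, run on fabricated panel data; the variable names, magnitudes and clustering choice are mine, not the authors’.

```python
# Hedged sketch of a first-difference specification in the spirit of the
# event-study approach described above, using fabricated quarterly panel data
# purely for illustration (variable names are invented, not the authors').
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
drugs, quarters = 20, 44                     # 20 drugs, quarterly 2003-2013
panel = pd.DataFrame({
    "drug": np.repeat(np.arange(drugs), quarters),
    "quarter": np.tile(np.arange(quarters), drugs),
})
panel["publications"] = rng.poisson(0.3, len(panel))          # high-impact publications
panel["generic_competition"] = rng.integers(0, 2, len(panel))  # 1 if generics present
panel["sales"] = (
    5 + 8 * panel["publications"] * panel["generic_competition"]
    + rng.normal(0, 2, len(panel))
)                                                              # sales, $ million

# First differences within each drug remove time-invariant drug effects.
panel = panel.sort_values(["drug", "quarter"])
fd = panel.groupby("drug")[["sales", "publications"]].diff().dropna()
fd["generic_competition"] = panel.loc[fd.index, "generic_competition"]

# Sales changes regressed on publication changes, allowing the effect to differ
# with generic competition; standard errors clustered by drug.
model = smf.ols(
    "sales ~ publications + publications:generic_competition", data=fd
).fit(cov_type="cluster", cov_kwds={"groups": panel.loc[fd.index, "drug"]})
print(model.summary().tables[1])
```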

Credits

Chris Sampson’s journal round-up for 22nd August 2016

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Simulation as an ethical imperative and epistemic responsibility for the implementation of medical guidelines in health care. Medicine, Health Care and Philosophy [PubMed] Published 6th August 2016

Some people describe RCTs as a ‘gold standard’ for evidence. But if more than one RCT exists, or we have useful data from outside the RCT, that probably isn’t true. Decision modelling has value over and above RCT data, as well as in lieu of it. One crucial thing that cannot – or at least usually cannot – be captured in an RCT is how well the evidence might be implemented. Medical guidelines will be developed, but there will be a process of adjustments and no doubt errors, all of which might impact on patients’ quality of life. Here we stray into the realms of implementation science. This paper argues that health care providers have a responsibility to acquire knowledge about implementation and the learning curve of medical guidelines. To this end, there is an epistemic and ethical imperative to simulate the possible impacts on patients’ health of the implementation learning curve. The authors provide some examples of guideline implementation that might have benefited from simulation. However, it’s very easy in hindsight to identify what went wrong, and none of the examples set out realistic scenarios for simulation analyses that could have been carried out in advance. It isn’t clear to me how or why we should differentiate – in ethical or epistemic terms – implementation from effectiveness evaluation. It is clear, however, that health economists could engage more with implementation science, and that there is an ethical imperative to do so.

Estimating marginal healthcare costs using genetic variants as instrumental variables: Mendelian randomization in economic evaluation. PharmacoEconomics [PubMed] Published 2nd August 2016

To assert that obesity is associated with greater use of health care resources is uncontroversial. However, to assert that all of the additional cost associated with obesity is because of obesity is a step too far. There are many other determinants of health care costs (and outcomes) that might be independently associated with obesity. One way of dealing with this problem of identifying causality is to use instrumental variables (IVs) in econometric analysis, but appropriate IVs can be tricky to identify. Enter Mendelian randomisation: a method that uses genetic variants as IVs. This paper describes the basis for Mendelian randomisation and outlines the suitability of genetic traits as IVs. En route, the authors provide a nice accessible summary of the IV approach more generally. The focus throughout the paper is upon estimating costs, with obesity used as an example. The article outlines a lot of the potential challenges and pitfalls associated with the approach, such as the use of weak instruments and non-linear exposure-outcome relationships. On the whole, the approach is intuitive and fits easily within existing methodologies. Its main value may lie in the estimation of more accurate parameters for model-based economic evaluation. Of course, we need data – ideally, longitudinal medical records linked to genotypic information for a large number of people. That may seem like wishful thinking, but the UK Biobank project (and others) can fit the bill.
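Here’s a minimal two-stage least squares sketch of the idea, using simulated data in which a genetic score shifts BMI but (by assumption) affects costs only through BMI; the numbers are illustrative, not estimates from the paper.

```python
# Minimal two-stage least squares sketch of Mendelian randomisation for
# estimating the marginal healthcare cost of obesity, using simulated data.
# The genetic score, BMI and cost values are illustrative, not real estimates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 10000
confounder = rng.normal(size=n)               # unobserved factor affecting BMI and costs
genetic_score = rng.binomial(2, 0.3, size=n)  # count of BMI-raising alleles (hypothetical)

bmi = 25 + 1.0 * genetic_score + 2.0 * confounder + rng.normal(size=n)
cost = 500 + 100 * bmi + 800 * confounder + rng.normal(0, 300, size=n)

# Naive OLS of cost on BMI is biased by the confounder.
ols = sm.OLS(cost, sm.add_constant(bmi)).fit()

# Stage 1: predict BMI from the genetic instrument.
stage1 = sm.OLS(bmi, sm.add_constant(genetic_score)).fit()
# Stage 2: regress cost on predicted BMI (standard errors here are not
# corrected for the two-stage procedure; a proper IV routine would do that).
stage2 = sm.OLS(cost, sm.add_constant(stage1.fittedvalues)).fit()

print(f"OLS estimate of cost per BMI unit: {ols.params[1]:.0f}")     # biased upwards
print(f"IV (2SLS) estimate:                {stage2.params[1]:.0f}")  # close to the true 100
```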

Patient and general public preferences for health states: A call to reconsider current guidelines. Social Science & Medicine [PubMed] Published 31st July 2016

One major ongoing debate in health economics is the question of whether public or patient preferences should be used to value health states and thus to estimate QALYs. Here in the UK, NICE recommends public preferences, and I’d hazard a guess that most people agree. But why? After providing some useful theoretical background, this article reviews the arguments made in favour of the use of public preferences. It focuses on three that have been identified in Dutch guidelines. First, that cost-effectiveness analysis should adopt a societal perspective. The Gold Panel invoked a Rawlsian veil of ignorance argument to support the use of decision (ex ante) utility rather than experienced (ex post) utility. The authors highlight that this is limited, as the public are not behind a veil of ignorance. Second, that the use of patient preferences might (wrongfully) ignore adaptation. This is not a complete argument, as there may be elements of adaptation that decision makers wish not to take into account, and public preferences may still underestimate the benefits of treatment due to adaptation. Third, the insurance principle highlights that the obligation to be insured is made ex ante and that therefore the benefits of insurance (i.e. health care) should also be valued as such. The authors set out a useful taxonomy of the arguments, their reasoning and the counterarguments. The key message is that current arguments in favour of public preferences are incomplete. As a way forward, the authors suggest that both patient and public preferences should be used alongside each other and propose that HTA guidelines require this. The paper got my cogs whirring, so expect a follow-up blog post tomorrow.

What, who and when? Incorporating a discrete choice experiment into an economic evaluation. Health Economics Review [PubMed] Published 29th July 2016

This study claims to be the first to carry out a discrete choice experiment on clinical trial participants, and to compare willingness-to-pay results with standard QALY-based net benefit estimates – thus comparing a cost-benefit analysis (CBA) with a cost-utility analysis (CUA). The trial in question evaluates extending the role of community pharmacists in the management of coronary heart disease. The study focusses on the questions of what, who and when: what factors should be evaluated (i.e. beyond QALYs)? Whose preferences should be elicited (i.e. those of patients with experience of the service or of all participants)? And when should preferences be elicited (i.e. during or after the intervention)? Comparisons are made along these lines. The DCE asked participants to choose between their current situation and two alternative scenarios involving either the new service or the control. The trial found no significant difference in EQ-5D scores, SF-6D scores or costs between the groups, but it did identify a higher level of satisfaction with the intervention. The intervention group (through the DCE) reported a greater willingness to pay for the intervention than the control group, and this appeared to increase with prolonged use of the service. I’m not sure what the take-home message is from this study. The paper doesn’t answer the questions in the title – at least, not in any general sense. Nevertheless, it’s an interesting discussion about how we might carry out cost-benefit analysis using DCEs.
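For illustration, here’s a rough sketch of how willingness to pay can be backed out of a choice model with a cost attribute. The data, attribute names and values are invented, and a real DCE analysis would typically fit a conditional logit over full choice sets rather than this simplified binary setup.

```python
# Rough sketch of backing willingness to pay out of a discrete choice
# experiment: fit a choice model with a cost attribute, then take the ratio of
# an attribute coefficient to the (negative of the) cost coefficient. Data and
# attribute names here are invented for illustration, not from the trial.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_choices = 3000

# Differences in attributes between the new pharmacy service and the control.
d_followup = rng.integers(0, 2, n_choices)        # extra pharmacist follow-up (yes/no)
d_cost = rng.uniform(0, 50, n_choices)            # extra cost to the respondent (£)

# Simulated choices from a logit with true WTP = 1.5 / 0.05 = £30 for follow-up.
utility_diff = 1.5 * d_followup - 0.05 * d_cost
choose_new = (rng.random(n_choices) < 1 / (1 + np.exp(-utility_diff))).astype(int)

X = sm.add_constant(np.column_stack([d_followup, d_cost]))
logit = sm.Logit(choose_new, X).fit(disp=0)

wtp_followup = -logit.params[1] / logit.params[2]  # £ per unit of the follow-up attribute
print(f"Estimated WTP for pharmacist follow-up: £{wtp_followup:.0f}")
```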

Photo credit: Antony Theobald (CC BY-NC-ND 2.0)