Rita Faria’s journal round-up for 30th December 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Value in hepatitis C virus treatment: a patient-centered cost-effectiveness analysis. PharmacoEconomics [PubMed] Published 2nd December 2019

There have been many economic evaluations of treatments for hepatitis C. The usual outcomes are costs and a measure of quality-adjusted survival, such as QALYs. But health-related quality of life and life expectancy may not be the only outcomes that matter to patients. This fascinating paper by Joe Mattingly II and colleagues fills this gap by collaborating with patients in the development of an economic evaluation of hepatitis C treatments.

Patient engagement was guided by a stakeholder advisory board including health care professionals, four patients and a representative of a national patient advocacy organisation. This board reviewed the model design, model inputs and presentation of results. To ensure that the economic evaluation included what is important to patients, the team conducted a Delphi process with patients who had received treatment or were considering treatment. This is reported in a separate paper.

The feedback from patients led to the inclusion of two outcomes beyond QALYs and costs: infected life-years, which relate to the patient’s fear of infecting others, and workdays missed, which relate to financial issues and impact on work and career.

I was impressed with the effort put into engaging with patients and stakeholders: there were 11 meetings with the stakeholder advisory board alone. Engaging with stakeholders clearly takes time and energy to do right! The challenge with the patient-centric outcome measures is in using them to make decisions. From an individual's or an employer's perspective, it may be useful to express results as the cost per missed workday avoided, for example, if this can then be compared with a maximum acceptable cost. As the authors suggest, an interesting next step would be to seek feedback from managed care organisations. Whether such measures would be useful to inform decisions in publicly funded healthcare services is less clear.

Patient engagement is all the rage at present, but there’s not much guidance on how to do it in practice. This paper is a great example of how to go about it.

TECH-VER: a verification checklist to reduce errors in models and improve their credibility. PharmacoEconomics [PubMed] [RePEc] Published 8th November 2019

Looking for help in checking your decision model? Fear not, there's a new tool on the block! The TECH-VER checklist sets out a series of steps to assess the internal validity of your model.

I have to admit that I’m getting a bit weary of checklists, but this one is truly useful. It’s divided into five areas: model inputs, event/state calculations, results, uncertainty analysis, and overall validation and other supplementary checks. Each area includes an assessment of the completeness of the calculations in the electronic model, their consistency with the technical report, and then steps to check their correctness.

Correctness is assessed with a series of black-box, white-box, and replication-based tests. Black-box tests involve changing parameters in the model and checking whether the results change as expected. For example, if the HRQoL weights are set to 1 and the decrements to 0, the QALYs should equal the life years. White-box tests involve checking the calculations one by one. Replication-based tests involve redoing calculations independently.
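To make this concrete, here's a minimal sketch of that black-box check on a toy three-state Markov model (the states, transition probabilities, and cycle count are invented for illustration; TECH-VER itself is a checklist, not code):

```python
import numpy as np

def run_model(utilities, n_cycles=40):
    # Toy cohort Markov model with states: healthy, sick, dead.
    trans = np.array([[0.90, 0.07, 0.03],   # healthy -> healthy / sick / dead
                      [0.00, 0.85, 0.15],   # sick    -> healthy / sick / dead
                      [0.00, 0.00, 1.00]])  # dead is absorbing
    state = np.array([1.0, 0.0, 0.0])       # everyone starts healthy
    qalys = life_years = 0.0
    for _ in range(n_cycles):
        state = state @ trans
        life_years += state[:2].sum()            # time alive this cycle
        qalys += (state[:2] * utilities).sum()   # utility-weighted time alive
    return qalys, life_years

# Black-box test: with all utility weights at 1 (and no decrements),
# total QALYs must equal total life years.
qalys, lys = run_model(utilities=np.array([1.0, 1.0]))
assert np.isclose(qalys, lys), "black-box check failed: QALYs != life years"
```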

The authors' handy tip is to apply the checks in ascending order of effort and time: start with the black-box tests, then conduct white-box tests only for priority calculations or if there are unexpected results. I recommend this paper to all cost-effectiveness modellers. TECH-VER will definitely feature in my toolbox!

Proposals on Kaplan-Meier plots in medical research and a survey of stakeholder views: KMunicate. BMJ Open [PubMed] Published 30th September 2019

What’s your view of the Kaplan-Meier plot? I find it quite difficult to explain to non-specialist audiences, particularly the uncertainty in the differences in survival time between treatment groups. It seems that I’m not the only one!

Tim Morris and colleagues agree that Kaplan-Meier plots can be difficult to interpret. To address this, they proposed improvements to better show the status of patients over time and the uncertainty around the survival estimates. They then assessed the proposed improvements in a survey of researchers. In line with my own views, the majority of respondents preferred a table showing the numbers of patients who had the event and who were censored, to convey the status of patients over time, and confidence intervals to show the uncertainty.

The Kaplan-Meier plot with confidence intervals and the table would definitely help me to interpret and explain Kaplan-Meier plots. Also, the proposed improvements seem to be straightforward to implement. One way to make it easy for researchers to implement these plots in practice would be to publish the code to replicate the preferred plots.
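In the meantime, here's a minimal sketch of that layout using the Python lifelines library on simulated data (the data, group labels, and styling are my own invention, not the authors'):

```python
import numpy as np
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines.plotting import add_at_risk_counts

# Simulated survival times (months), with roughly 30% random censoring.
rng = np.random.default_rng(1)
t_a, t_b = rng.exponential(24, 100), rng.exponential(30, 100)
obs_a, obs_b = rng.random(100) > 0.3, rng.random(100) > 0.3

fig, ax = plt.subplots()
kmf_a = KaplanMeierFitter().fit(t_a, obs_a, label="Control")
kmf_b = KaplanMeierFitter().fit(t_b, obs_b, label="Treatment")
kmf_a.plot_survival_function(ax=ax, ci_show=True)  # shaded 95% CIs
kmf_b.plot_survival_function(ax=ax, ci_show=True)

# Status table under the plot; recent lifelines versions can show censored
# and event counts as well as numbers at risk.
add_at_risk_counts(kmf_a, kmf_b, ax=ax,
                   rows_to_show=["At risk", "Censored", "Events"])
plt.tight_layout()
plt.show()
```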

There is a broader question, outside the scope of this project, about how to convey survival times and their uncertainty to untrained audiences, from health care professionals and managers to patients. Would audience-specific tools be the answer? Or should we try to up-skill the audience to understand a Kaplan-Meier plot?

Better communication is surely key if we want to engage stakeholders with research and if our research is to have an impact on policy. I, for one, would be grateful for more guidance on how to communicate research. This study is an excellent first step in making a specialist tool – the Kaplan-Meier plot – easier to understand.

Cost-effectiveness of strategies preventing late-onset infection in preterm infants. Archives of Disease in Childhood [PubMed] Published 13th December 2019

And lastly, a plug for my own paper! This article reports the cost-effectiveness analysis conducted for a ‘negative’ trial. The PREVAIL trial found that the experimental intervention – antimicrobial-impregnated peripherally inserted central catheters (AM-PICCs) – had no effect compared with the standard PICCs used in the NHS. AM-PICCs are also more costly than standard PICCs, so clearly they are not cost-effective. So, you may ask, why conduct a cost-effectiveness analysis and develop a new model?

Developing a model to evaluate the cost-effectiveness of AM-PICCs was one of the project’s objectives. We started the economic work pretty early on. By the time that the trial reported, the model was already built, tested with data from the literature, and all ready to receive the trial data. Wasted effort? Not at all!

Thanks to this cost-effectiveness analysis, we concluded that avoiding neurodevelopmental impairment in children born preterm is highly beneficial, and hence warrants a large investment by the NHS. If we believe the observational evidence that infection causes neurodevelopmental impairment, interventions that reduce the risk of infection can be cost-effective.

The linkage to Hospital Episode Statistics, the National Neonatal Research Database and the Paediatric Intensive Care Audit Network allowed us to get a good picture of the hospital care and costs of the babies in the PREVAIL trial. This informed some of the cost inputs in the cost-effectiveness model.

If you’re planning a cost-effectiveness analysis of strategies to prevent infections and/or neurodevelopmental impairment in preterm babies, do feel free to get in touch!

Rita Faria’s journal round-up for 2nd September 2019

RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ [PubMed] Published 28th August 2019

RCTs are the gold standard study design for estimating treatment effects, but they are often far from perfect. The question is the extent to which their flaws make a difference to the results. Well, RoB 2 is your new best friend to help answer this question.

Developed by a star-studded team, RoB 2 is the update to the Cochrane Collaboration's original risk of bias tool. Bias is assessed by outcome, rather than for the RCT as a whole. For me, this makes sense. For example, the primary outcome may be well reported, yet a secondary outcome, which may be the outcome of interest for a cost-effectiveness model, much less so.

Bias is considered in terms of five domains, with the overall risk of bias usually corresponding to the worst risk of bias in any of the domains. This overall risk of bias is then reflected in the evidence synthesis, with, for example, a stratified meta-analysis.
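The 'worst domain wins' rule is simple enough to sketch in a few lines (the judgement labels are RoB 2's, but I'm glossing over the signalling questions and the cases where several 'some concerns' judgements are elevated to high):

```python
LEVELS = {"low": 0, "some concerns": 1, "high": 2}

def overall_risk_of_bias(domains):
    # Overall judgement = worst judgement across the five domains.
    return max(domains.values(), key=LEVELS.__getitem__)

print(overall_risk_of_bias({
    "randomisation process": "low",
    "deviations from intended interventions": "some concerns",
    "missing outcome data": "low",
    "measurement of the outcome": "low",
    "selection of the reported result": "low",
}))  # -> "some concerns"
```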

The paper is a great read! Jonathan Sterne and colleagues explain the reasons for the update and the process that was followed. Clearly, quite a lot of thought was given to the types of bias and to developing questions to help reviewers assess them. The only downside is that the tool may take more time to apply, given that it needs to be applied by outcome. Still, I think that's a price worth paying for more reliable results. Looking forward to seeing it in use!

Characteristics and methods of incorporating randomised and nonrandomised evidence in network meta-analyses: a scoping review. Journal of Clinical Epidemiology [PubMed] Published 3rd May 2019

In keeping with the evidence synthesis theme, this paper by Kathryn Zhang and colleagues reviews how the applied literature has combined randomised and non-randomised evidence. The headline findings are that combining these two types of study design is rare and, when it does happen, naïve pooling is the most common method.
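To illustrate what naïve pooling means, here's a sketch with invented log hazard ratios: the non-randomised estimate is simply thrown into the same inverse-variance pool as the RCTs, whereas a design-adjusted alternative (one of several) would inflate its variance to reflect the risk of bias:

```python
import numpy as np

def pool(estimates, std_errs):
    # Fixed-effect inverse-variance pooling: returns (estimate, standard error).
    w = 1.0 / np.asarray(std_errs) ** 2
    return (w * np.asarray(estimates)).sum() / w.sum(), np.sqrt(1.0 / w.sum())

rct_est, rct_se = [-0.20, -0.15], [0.10, 0.12]  # two RCTs (log hazard ratios)
nrs_est, nrs_se = [-0.40], [0.08]               # one precise non-randomised study

# Naive pooling: the non-randomised study is treated as if it were an RCT.
print(pool(rct_est + nrs_est, rct_se + nrs_se))

# Down-weighting: inflate the non-randomised variance by 1/alpha, 0 < alpha <= 1.
alpha = 0.3
print(pool(rct_est + nrs_est, rct_se + [se / np.sqrt(alpha) for se in nrs_se]))
```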

I imagine that the limited use of non-randomised evidence is due to its risk of bias. After all, it is difficult to ensure that the measure of association from a non-randomised study is an estimate of a causal effect. Hence, it is worrying that the majority of network meta-analyses that did combine non-randomised studies did so with naïve pooling.

This scoping review may kick start some discussions in the evidence synthesis world. When should we combine randomised and non-randomised evidence? How best to do so? And how to make sure that the right methods are used in practice? As a cost-effectiveness modeller, with limited knowledge of evidence synthesis, I’ve grappled with these questions myself. Do get in touch if you have any thoughts.

A cost-effectiveness analysis of shortened direct-acting antiviral treatment in genotype 1 noncirrhotic treatment-naive patients with chronic hepatitis C virus. Value in Health [PubMed] Published 17th May 2019

Rarely do we see a cost-effectiveness paper where the proposed intervention is less costly and less effective, that is, in the controversial southwest quadrant of the cost-effectiveness plane. This paper by Christopher Fawsitt and colleagues is a welcome exception!

Christopher and colleagues looked at the cost-effectiveness of shorter treatment durations for chronic hepatitis C. Compared with the standard duration, the shorter treatment is less effective and hence yields fewer QALYs. But it is much cheaper to treat patients over a shorter duration, and to re-treat those who are not cured, than to treat everyone with the standard duration. Hence, in the base case and in most scenarios, the shorter treatment is cost-effective.
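The decision rule in the southwest quadrant can feel counter-intuitive, so here is the arithmetic with made-up numbers (not the paper's results). Savings free up resources that generate health elsewhere, so the shorter treatment is cost-effective if its incremental net monetary benefit is positive:

```python
threshold = 20_000     # cost-effectiveness threshold, £ per QALY
delta_cost = -10_000   # the shorter treatment saves £10,000 per patient...
delta_qalys = -0.10    # ...at the price of 0.10 QALYs per patient

# Incremental net monetary benefit of the shorter treatment vs the standard:
inmb = threshold * delta_qalys - delta_cost  # 20,000*(-0.10) + 10,000 = £8,000
print(f"Incremental NMB: £{inmb:,.0f}")      # positive, so cost-effective
```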

I'm sure that labelling a less effective and less costly option as cost-effective will have been controversial in some quarters. Some may argue that it is unethical to offer a treatment worse than the standard, even if it saves a lot of money. In my view, it is no different from funding better and more costly treatments, given that the additional costs are borne by other patients, who will necessarily have access to fewer resources.

The paper is beautifully written and is another example of an outstanding cost-effectiveness analysis with important implications for policy and practice. The extensive sensitivity analyses should provide reassurance to the sceptics. And the discussion argues cleverly for the value of a shorter duration in resource-constrained settings and for hard-to-reach populations. A must read!

Rita Faria’s journal round-up for 26th August 2019

Vaccine hesitancy and (fake) news: quasi‐experimental evidence from Italy. Health Economics [PubMed] [RePEc] Published 20th August 2019

Has fake news led to fewer children being vaccinated? At least in Italy, the answer seems to be yes.

It's shocking to read that the WHO has included the reluctance or refusal to vaccinate as one of the 10 threats to global health today. Many of us are asking: why has this happened, and what can we do to address it? Vincenzo Carrieri, Leonardo Madio and Francesco Principe help answer the first question. They looked at how fake news affects the take-up of vaccines, using access to broadband as a proxy for exposure to fake news, within a difference-in-differences framework. They found that a 10% increase in broadband coverage is associated with a 1.2-1.6% reduction in vaccination rates.

The difference-in-differences method hinges on a 2012 court ruling that accepted that the MMR vaccine causes autism. Following the ruling, fake news about vaccines spread across the internet. In parallel, broadband coverage increased over time thanks to a government programme, but at a pace that varied by region, depending on the existing infrastructure and geographical conditions. Broadband coverage, by itself, cannot lower vaccination rates. So it makes sense to assume that broadband coverage leads to greater exposure to fake news about vaccines, which in turn leads to lower vaccination rates.
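As a hedged sketch of the kind of two-way fixed-effects regression this design implies (the variable names, data, and specification are my guesses, not the authors' code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a region-year panel: broadband coverage rises over time at
# region-specific levels, and vaccination dips after 2012 where coverage
# is high (all numbers purely illustrative).
rng = np.random.default_rng(0)
rows = []
for region in range(20):
    base = rng.uniform(0.2, 0.6)
    for year in range(2008, 2017):
        broadband = min(base + 0.05 * (year - 2008), 1.0)
        post2012 = int(year >= 2012)
        vacc = 0.95 - 0.015 * broadband * post2012 + rng.normal(0, 0.01)
        rows.append(dict(region=region, year=year, broadband=broadband,
                         post2012=post2012, vacc_rate=vacc))
df = pd.DataFrame(rows)

# The broadband x post-ruling interaction is the DiD term; region and year
# fixed effects absorb time-invariant regional differences and common shocks.
fit = smf.ols("vacc_rate ~ broadband:post2012 + C(region) + C(year)",
              data=df).fit(cov_type="cluster", cov_kwds={"groups": df["region"]})
print(fit.params["broadband:post2012"])
```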

On the other hand, it may be that greater broadband coverage and lower vaccination rates are both caused by something else. The authors wrote a good introduction to justify the model assumptions and show a few robustness checks. Had they had more space, I would have liked to read a bit more about the uncertainties around the model assumptions. This is a fantastic paper and good food for thought on the consequences of fake news. Great read!

The cost-effectiveness of one-time birth cohort screening for hepatitis C as part of the National Health Service Health Check programme in England. Value in Health Published 19th August 2019

Jack Williams and colleagues looked at the cost-effectiveness of one-time birth cohort screening for hepatitis C. As hepatitis C is usually asymptomatic before reaching its more advanced stages, people may not be aware that they are infected. Therefore, they may not get tested and treated, even though treatment is effective and cost-effective.

At the level of the individual eligible for testing, the ICERs ranged from £8,000 to £31,000 per QALY, with lower ICERs for younger birth cohorts. The ICERs also depended on the transition probabilities for the progression of the disease, with lower ICERs if progression is faster. Extensive sensitivity and value of information analyses indicate that the key cost-effectiveness drivers are the transition probabilities, the probabilities of referral and of treatment post-referral, and the quality of life benefits of being cured.

This is a great example of a good quality applied cost-effectiveness analysis. The model is well justified, the results are thoroughly tested, and the discussion is meticulous. Well done!

NICE, in confidence: an assessment of redaction to obscure confidential information in Single Technology Appraisals by the National Institute for Health and Care Excellence. PharmacoEconomics [PubMed] Published 27th June 2019

NICE walks a fine line between making decisions transparent and protecting confidential information. Confidential information includes commercially sensitive information (e.g. discounts to the price paid by the NHS) and academic-in-confidence information, such as unpublished results of clinical trials. The problem is that the redacted information may preclude readers from understanding NICE decisions.

Ash Bullement and colleagues reviewed NICE appraisals of technologies with an approved price discount. Their goal was to understand the extent of redactions and their consequences for the transparency of NICE decisions. Of the 171 NICE appraisals, 118 had an approved commercial arrangement and 110 had a simple price discount. The type of redacted information varied: some appraisals did not present the ICER, others presented ICERs but not their components, and others did not even present the model's estimates of life expectancy. Remarkably, the confidential discount could be back-calculated in seven NICE appraisals!

The authors also looked at the academic-in-confidence redactions. They found that 68 out of 86 appraisals published before 2018 still had academic-in-confidence information redacted. This made me wonder whether NICE has a process to review these redactions and disclose the information once it is in the public domain.
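To see how a back-calculation can happen, here's a stylised example with made-up numbers (not from any appraisal): if the ICER, the incremental QALYs, and the non-drug incremental costs are all reported, the net drug cost, and hence the discount, falls out.

```python
icer = 25_000                  # reported ICER with the discount applied, £/QALY
delta_qalys = 1.2              # reported incremental QALYs
other_delta_costs = 4_000      # reported incremental costs excluding the drug
list_price_drug_cost = 30_000  # drug acquisition cost at list price, public

net_drug_cost = icer * delta_qalys - other_delta_costs  # = £26,000
discount = 1 - net_drug_cost / list_price_drug_cost     # ~13%
print(f"Implied confidential discount: {discount:.0%}")
```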

As Ash and colleagues rightly conclude, this review shows that there does not seem to be a consistent process for redaction and disclosure. This is a compelling paper on the practicalities of the NICE process, and with useful reflections for HTA agencies around the world. The message for NICE is that it may be time to review the process to handle sensitive information.
