Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.
Value in hepatitis C virus treatment: a patient-centered cost-effectiveness analysis. PharmacoEconomics [PubMed] Published 2nd December 2019
There have been many economic evaluations of treatments for viral hepatitis C. The usual outcomes are costs and a measure of quality-adjusted survival, such as QALYs. But health-related quality of life and life expectancy may not be the only outcomes that matter to patients. This fascinating paper by Joe Mattingly II and colleagues fills this gap by collaborating with patients to develop an economic evaluation of treatments for viral hepatitis C.
Patient engagement was guided by a stakeholder advisory board including health care professionals, four patients and a representative of a national patient advocacy organisation. This board reviewed the model design, model inputs and presentation of results. To ensure that the economic evaluation included what is important to patients, the team conducted a Delphi process with patients who had received treatment or were considering treatment. This is reported in a separate paper.
The feedback from patients led to the inclusion of two outcomes beyond QALYs and costs: infected life-years, which relate to the patient’s fear of infecting others, and workdays missed, which relate to financial issues and impact on work and career.
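To make these outcomes concrete, here is a minimal sketch of how infected life-years and workdays missed could be accumulated alongside QALYs in a cohort model. All states, transition probabilities, and values below are hypothetical placeholders for illustration, not the inputs from the published model:

```python
# Hypothetical 3-state cohort model (infected, cured, dead) that tracks
# QALYs, infected life-years, and workdays missed in the same loop.
# Every number here is illustrative, not from the paper.

states = ["infected", "cured", "dead"]
P = {  # annual transition probabilities (made up)
    "infected": {"infected": 0.93, "cured": 0.05, "dead": 0.02},
    "cured":    {"infected": 0.00, "cured": 0.99, "dead": 0.01},
    "dead":     {"infected": 0.00, "cured": 0.00, "dead": 1.00},
}
utility = {"infected": 0.75, "cured": 0.85, "dead": 0.0}
workdays_per_year = {"infected": 10.0, "cured": 1.0, "dead": 0.0}

cohort = {"infected": 1.0, "cured": 0.0, "dead": 0.0}
qalys = infected_life_years = workdays_missed = 0.0
discount = 0.035

for year in range(40):
    d = 1.0 / (1.0 + discount) ** year  # discount factor for this cycle
    qalys += d * sum(cohort[s] * utility[s] for s in states)
    infected_life_years += d * cohort["infected"]
    workdays_missed += d * sum(cohort[s] * workdays_per_year[s] for s in states)
    # advance the cohort one cycle
    cohort = {s2: sum(cohort[s1] * P[s1][s2] for s1 in states) for s2 in states}

print(f"QALYs: {qalys:.2f}, infected life-years: {infected_life_years:.2f}, "
      f"workdays missed: {workdays_missed:.1f}")
```

The point of the sketch is that the extra outcomes cost almost nothing to compute once the state occupancy is being tracked; the hard part, as the paper shows, is deciding with patients which outcomes to track in the first place.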
I was impressed with the effort put into engaging with patients and stakeholders. For example, there were 11 meetings with the stakeholder advisory board. This shows that engaging with stakeholders takes time and energy to do right! The challenge with the patient-centric outcome measures is in using them to make decisions. From an individual's or an employer's perspective, it may be useful to have results in terms of cost per missed workday avoided, for example, if these can then be compared to a maximum acceptable cost. As suggested by the authors, an interesting next step would be to seek feedback from managed care organisations. Whether such measures would be useful to inform decisions in publicly funded healthcare services is less clear.
Patient engagement is all the rage at present, but there’s not much guidance on how to do it in practice. This paper is a great example of how to go about it.
TECH-VER: a verification checklist to reduce errors in models and improve their credibility. PharmacoEconomics [PubMed] [RePEc] Published 8th November 2019
Looking for help in checking your decision model? Fear not, there’s a new tool on the block! The TECH-VER checklist lists a set of steps to assess the internal validity of your model.
I have to admit that I’m getting a bit weary of checklists, but this one is truly useful. It’s divided into five areas: model inputs, event/state calculations, results, uncertainty analysis, and overall validation and other supplementary checks. Each area includes an assessment of the completeness of the calculations in the electronic model, their consistency with the technical report, and then steps to check their correctness.
Correctness is assessed with a series of black-box, white-box, and replication-based tests. Black-box tests involve changing parameters in the model and checking whether the results change as expected. For example, if the HRQOL weights are set to 1 and decrements to 0, the QALYs should equal the life years. White-box testing involves checking the calculations one by one. Replication-based tests involve redoing calculations independently.
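The QALY sanity check described above can be sketched as an automated black-box test on a toy model. The model function here is an illustrative stand-in, not anything from TECH-VER itself:

```python
# Black-box test sketch: with all utility weights set to 1 and no
# decrements, discounted QALYs must equal discounted life years.
# The toy model below is illustrative only.

def run_model(survival, utilities, decrement=0.0, discount=0.035):
    """Return (life_years, qalys) for per-cycle survival probabilities."""
    lys = qalys = 0.0
    for t, alive in enumerate(survival):
        d = 1.0 / (1.0 + discount) ** t
        lys += d * alive
        qalys += d * alive * (utilities[t] - decrement)
    return lys, qalys

survival = [1.0, 0.95, 0.90, 0.84, 0.77]

# Black-box check: utilities = 1, decrement = 0  =>  QALYs == life years
lys, qalys = run_model(survival, utilities=[1.0] * 5, decrement=0.0)
assert abs(lys - qalys) < 1e-9, "black-box check failed: QALYs != LYs"

# Normal run: utilities below 1 should give QALYs strictly below LYs
lys, qalys = run_model(survival, utilities=[0.8] * 5)
assert qalys < lys
```

The appeal of black-box tests like this is that they can be scripted once and re-run after every model change, which is exactly why the authors suggest doing them before the more labour-intensive white-box checks.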
The authors’ handy tip is to apply the checks in ascending order of effort and time: starting first with black-box tests, then conducting white-box tests only for priority calculations or if there are unexpected results. I recommend this paper to all cost-effectiveness modellers. TECH-VER will definitely feature in my toolbox!
Proposals on Kaplan-Meier plots in medical research and a survey of stakeholder views: KMunicate. BMJ Open [PubMed] Published 30th September 2019
What’s your view of the Kaplan-Meier plot? I find it quite difficult to explain to non-specialist audiences, particularly the uncertainty in the differences in survival time between treatment groups. It seems that I’m not the only one!
Tim Morris and colleagues agree that Kaplan-Meier plots can be difficult to interpret. To address this, they proposed improvements to better show the status of patients over time and the uncertainty around the estimates. They then assessed the proposed improvements with a survey of researchers. Consistent with my own views, the majority of respondents preferred a table showing the number of patients who had experienced the event and who were censored, to convey the status of patients over time, and confidence intervals to show the uncertainty.
The Kaplan-Meier plot with confidence intervals and the table would definitely help me to interpret and explain Kaplan-Meier plots. Also, the proposed improvements seem to be straightforward to implement. One way to make it easy for researchers to implement these plots in practice would be to publish the code to replicate the preferred plots.
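As a sketch of what such replication code has to compute, here is a minimal, stdlib-only calculation of the two preferred ingredients: Kaplan-Meier survival estimates with Greenwood 95% confidence intervals, and the at-risk/events/censored counts for the accompanying table. The data are made up; in practice one would use a survival package (e.g. R's survival or Python's lifelines) and add the plotting layer on top:

```python
# Kaplan-Meier estimates with Greenwood 95% CIs plus risk-table counts.
# Data are invented (time, event) pairs: event=1 failure, event=0 censored.
import math

data = sorted([(3, 1), (5, 0), (7, 1), (11, 1), (11, 0), (14, 1), (20, 0)])

surv, var_sum = 1.0, 0.0
at_risk = len(data)
rows = []  # (time, n_at_risk, events, censored, S(t), ci_lo, ci_hi)
i = 0
while i < len(data):
    t = data[i][0]
    events = sum(1 for tt, e in data if tt == t and e == 1)
    censored = sum(1 for tt, e in data if tt == t and e == 0)
    if events:
        surv *= 1.0 - events / at_risk
        var_sum += events / (at_risk * (at_risk - events))
        se = surv * math.sqrt(var_sum)  # Greenwood's formula
        lo, hi = max(0.0, surv - 1.96 * se), min(1.0, surv + 1.96 * se)
        rows.append((t, at_risk, events, censored, surv, lo, hi))
    at_risk -= events + censored
    i += events + censored

for t, n, e, c, s, lo, hi in rows:
    print(f"t={t:>3}  at risk={n}  events={e}  censored={c}  "
          f"S(t)={s:.3f}  95% CI [{lo:.3f}, {hi:.3f}]")
```

Everything the preferred plot adds over a bare Kaplan-Meier curve is already in these rows, which is why publishing a short script alongside the paper's recommendations would make adoption almost effortless.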
There is a broader question, outside the scope of this project, about how to convey survival times and their uncertainty to untrained audiences, from health care professionals and managers to patients. Would audience-specific tools be the answer? Or should we try to up-skill the audience to understand a Kaplan-Meier plot?
Better communication is surely key if we want to engage stakeholders with research and if our research is to have an impact on policy. I, for one, would be grateful for more guidance on how to communicate research. This study is an excellent first step in making a specialist tool – the Kaplan-Meier plot – easier to understand.
Cost-effectiveness of strategies preventing late-onset infection in preterm infants. Archives of Disease in Childhood [PubMed] Published 13th December 2019
And lastly, a plug for my own paper! This article reports the cost-effectiveness analysis conducted for a ‘negative’ trial. The PREVAIL trial found that the experimental intervention – anti-microbial impregnated peripherally inserted central catheters (AM-PICCs) – had no effect compared to the standard PICCs, which are used in the NHS. AM-PICCs are more costly than standard PICCs. Clearly, AM-PICCs are not cost-effective. So, you may ask, why conduct a cost-effectiveness analysis and develop a new model?
Developing a model to evaluate the cost-effectiveness of AM-PICCs was one of the project’s objectives. We started the economic work pretty early on. By the time that the trial reported, the model was already built, tested with data from the literature, and all ready to receive the trial data. Wasted effort? Not at all!
Thanks to this cost-effectiveness analysis, we concluded that avoiding neurodevelopmental impairment in children born preterm is highly beneficial, and hence warrants a large investment by the NHS. If we believe the observational evidence that infection causes neurodevelopmental impairment, interventions that reduce the risk of infection can be cost-effective.
The linkage to Hospital Episode Statistics, National Neonatal Research Database and Paediatric Intensive Care Audit Network allowed us to get a good picture of the hospital care and costs of the babies in the PREVAIL trial. This informed some of the cost inputs in the cost-effectiveness model.
If you’re planning a cost-effectiveness analysis of strategies to prevent infections and/or neurodevelopmental impairment in preterm babies, do feel free to get in touch!