Rita Faria’s journal round-up for 30th December 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Value in hepatitis C virus treatment: a patient-centered cost-effectiveness analysis. PharmacoEconomics [PubMed] Published 2nd December 2019

There have been many economic evaluations of treatments for hepatitis C. The usual outcomes are costs and a measure of quality-adjusted survival, such as QALYs. But health-related quality of life and life expectancy may not be the only outcomes that matter to patients. This fascinating paper by Joe Mattingly II and colleagues fills this gap by collaborating with patients in the development of an economic evaluation of treatments for hepatitis C.

Patient engagement was guided by a stakeholder advisory board including health care professionals, four patients and a representative of a national patient advocacy organisation. This board reviewed the model design, model inputs and presentation of results. To ensure that the economic evaluation included what is important to patients, the team conducted a Delphi process with patients who had received treatment or were considering treatment. This is reported in a separate paper.

The feedback from patients led to the inclusion of two outcomes beyond QALYs and costs: infected life-years, which relate to the patient’s fear of infecting others, and workdays missed, which relate to financial issues and impact on work and career.

I was impressed with the effort put into engaging with patients and stakeholders. For example, there were 11 meetings with the stakeholder advisory board. This shows that engaging with stakeholders takes time and energy to do right! The challenge with the patient-centric outcome measures is in using them to make decisions. From an individual’s or an employer’s perspective, it may be useful to have results in terms of the cost per missed workday avoided, for example, if these can then be compared to a maximum acceptable cost. As suggested by the authors, an interesting next step would be to seek feedback from managed care organisations. Whether such measures would be useful to inform decisions in publicly funded healthcare services is less clear.

Patient engagement is all the rage at present, but there’s not much guidance on how to do it in practice. This paper is a great example of how to go about it.

TECH-VER: a verification checklist to reduce errors in models and improve their credibility. PharmacoEconomics [PubMed] [RePEc] Published 8th November 2019

Looking for help in checking your decision model? Fear not, there’s a new tool on the block! The TECH-VER checklist lists a set of steps to assess the internal validity of your model.

I have to admit that I’m getting a bit weary of checklists, but this one is truly useful. It’s divided into five areas: model inputs, event/state calculations, results, uncertainty analysis, and overall validation and other supplementary checks. Each area includes an assessment of the completeness of the calculations in the electronic model, their consistency with the technical report, and then steps to check their correctness.

Correctness is assessed with a series of black-box, white-box, and replication-based tests. Black-box tests involve changing parameters in the model and checking whether the results change as expected. For example, if all HRQoL weights are set to 1 and all decrements to 0, the QALYs should equal the life years. White-box testing involves checking the calculations one by one. Replication-based tests involve redoing calculations independently.
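To make the black-box idea concrete, here is a minimal sketch in Python of that specific check. The model structure and numbers are entirely hypothetical (the paper does not provide code): a toy three-state cohort model in which setting every utility weight to 1 and every decrement to 0 should make discounted QALYs equal discounted life years.

```python
import numpy as np

def run_cohort_model(utility_weights, utility_decrement, n_cycles=40, discount=0.035):
    """Toy cohort model (well, sick, dead) returning (life_years, qalys)."""
    transition = np.array([[0.90, 0.07, 0.03],   # from 'well'
                           [0.00, 0.85, 0.15],   # from 'sick'
                           [0.00, 0.00, 1.00]])  # from 'dead' (absorbing)
    state = np.array([1.0, 0.0, 0.0])            # whole cohort starts 'well'
    life_years, qalys = 0.0, 0.0
    for cycle in range(n_cycles):
        state = state @ transition
        alive = state[0] + state[1]
        utility = (state[0] * utility_weights[0]
                   + state[1] * (utility_weights[1] - utility_decrement))
        df = 1 / (1 + discount) ** (cycle + 1)   # discount factor for this cycle
        life_years += alive * df
        qalys += utility * df
    return life_years, qalys

# Black-box test: with weights = 1 and decrement = 0, QALYs must equal life years.
ly, qaly = run_cohort_model(utility_weights=(1.0, 1.0), utility_decrement=0.0)
assert np.isclose(ly, qaly), f"Black-box check failed: LY={ly:.3f}, QALY={qaly:.3f}"
print(f"Life years = {ly:.3f}, QALYs = {qaly:.3f} (check passed)")
```

The same pattern extends to the other black-box tests in the checklist, such as setting all costs to zero and checking that total costs are zero.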

The authors’ handy tip is to apply the checks in ascending order of effort and time: starting first with black-box tests, then conducting white-box tests only for priority calculations or if there are unexpected results. I recommend this paper to all cost-effectiveness modellers. TECH-VER will definitely feature in my toolbox!

Proposals on Kaplan-Meier plots in medical research and a survey of stakeholder views: KMunicate. BMJ Open [PubMed] Published 30th September 2019

What’s your view of the Kaplan-Meier plot? I find it quite difficult to explain to non-specialist audiences, particularly the uncertainty in the differences in survival time between treatment groups. It seems that I’m not the only one!

Tim Morris and colleagues agree that Kaplan-Meier plots can be difficult to interpret. To address this, they proposed improvements to better show the status of patients over time and the uncertainty around the estimates. They then assessed the proposed improvements with a survey of researchers. In line with my own views, the majority of respondents preferred a table showing the numbers of patients who had experienced the event and who had been censored, to convey the status of patients over time, and confidence intervals to convey the uncertainty.

The Kaplan-Meier plot with confidence intervals and the table would definitely help me to interpret and explain Kaplan-Meier plots. Also, the proposed improvements seem to be straightforward to implement. One way to make it easy for researchers to implement these plots in practice would be to publish the code to replicate the preferred plots.
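As a rough illustration of what published code could look like (this is my own sketch with simulated data, not the authors’ code), the Python package lifelines can draw Kaplan-Meier curves with confidence intervals and place a summary table under the plot:

```python
import numpy as np
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines.plotting import add_at_risk_counts

rng = np.random.default_rng(seed=1)

# Simulated survival times (months) and event indicators for two arms.
t_control = rng.exponential(scale=20, size=150)
t_treated = rng.exponential(scale=28, size=150)
e_control = rng.random(150) < 0.7   # True = event observed, False = censored
e_treated = rng.random(150) < 0.7

kmf_control = KaplanMeierFitter().fit(t_control, e_control, label="Control")
kmf_treated = KaplanMeierFitter().fit(t_treated, e_treated, label="Treatment")

ax = kmf_control.plot_survival_function(ci_show=True)    # shaded confidence intervals
kmf_treated.plot_survival_function(ax=ax, ci_show=True)
ax.set_xlabel("Time (months)")
ax.set_ylabel("Survival probability")

# Table under the plot; recent lifelines versions can also list censored and event
# counts, which is closer to the KMunicate proposal than numbers at risk alone.
add_at_risk_counts(kmf_control, kmf_treated, ax=ax)
plt.tight_layout()
plt.show()
```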

There is a broader question, outside the scope of this project, about how to convey survival times and their uncertainty to untrained audiences, from health care professionals and managers to patients. Would audience-specific tools be the answer? Or should we try to up-skill the audience to understand a Kaplan-Meier plot?

Better communication is surely key if we want to engage stakeholders with research and if our research is to have an impact on policy. I, for one, would be grateful for more guidance on how to communicate research. This study is an excellent first step in making a specialist tool – the Kaplan-Meier plot – easier to understand.

Cost-effectiveness of strategies preventing late-onset infection in preterm infants. Archives of Disease in Childhood [PubMed] Published 13th December 2019

And lastly, a plug for my own paper! This article reports the cost-effectiveness analysis conducted for a ‘negative’ trial. The PREVAIL trial found that the experimental intervention, antimicrobial-impregnated peripherally inserted central catheters (AM-PICCs), had no effect compared to the standard PICCs used in the NHS. AM-PICCs are more costly than standard PICCs. Clearly, AM-PICCs are not cost-effective. So, you may ask, why conduct a cost-effectiveness analysis and develop a new model?

Developing a model to evaluate the cost-effectiveness of AM-PICCs was one of the project’s objectives. We started the economic work pretty early on. By the time that the trial reported, the model was already built, tested with data from the literature, and all ready to receive the trial data. Wasted effort? Not at all!

Thanks to this cost-effectiveness analysis, we concluded that avoiding neurodevelopmental impairment in children born preterm is very beneficial, and hence warrants a large investment by the NHS. If we believe the observational evidence that infection causes neurodevelopmental impairment, interventions that reduce the risk of infection can be cost-effective.

The linkage to Hospital Episode Statistics, National Neonatal Research Database and Paediatric Intensive Care Audit Network allowed us to get a good picture of the hospital care and costs of the babies in the PREVAIL trial. This informed some of the cost inputs in the cost-effectiveness model.

If you’re planning a cost-effectiveness analysis of strategies to prevent infections and/or neurodevelopmental impairment in preterm babies, do feel free to get in touch!

Rachel Houten’s journal round-up for 11th November 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A comparison of national guidelines for network meta-analysis. Value in Health [PubMed] Published October 2019

The evolving treatment landscape results in a greater dependence on indirect treatment comparisons to generate estimates of clinical effectiveness when the proposed new intervention has not been compared with current practice in a head-to-head trial. This paper is a review of reimbursement bodies’ guidelines for conducting network meta-analyses. Reassuringly, the authors find that it is possible to meet the needs of multiple agencies with one analysis.
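For readers less familiar with indirect comparisons, the simplest case is the Bucher adjusted indirect comparison: if one trial compares A with a common comparator B and another compares C with B, the indirect effect of A versus C is the difference of the two relative effects, with the variances adding. A toy calculation with entirely made-up numbers (not from the paper):

```python
import math

# Hypothetical log hazard ratios and standard errors from two trials sharing comparator B.
log_hr_ab, se_ab = -0.30, 0.12   # A vs B
log_hr_cb, se_cb = -0.10, 0.15   # C vs B

# Bucher adjusted indirect comparison: A vs C via the common comparator B.
log_hr_ac = log_hr_ab - log_hr_cb
se_ac = math.sqrt(se_ab**2 + se_cb**2)
ci_low = math.exp(log_hr_ac - 1.96 * se_ac)
ci_high = math.exp(log_hr_ac + 1.96 * se_ac)

print(f"Indirect HR (A vs C): {math.exp(log_hr_ac):.2f} "
      f"(95% CI {ci_low:.2f} to {ci_high:.2f})")
```

A network meta-analysis generalises this idea to a whole network of trials, which is why the assumptions the guidelines focus on, such as heterogeneity and consistency, matter so much.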

The authors assign the criteria to three categories: “assessment and analysis to test assumptions required for a network meta-analysis, presentation and reporting of results, and justification of modelling choices”. Heterogeneity of the included studies is highlighted as one of the key elements to include if the criteria need to be prioritised. I think this is a simple way of thinking about what needs to be presented, but the ‘justification’ category, in my experience, is often given less weight than the other two.

This paper is a useful resource for companies submitting to multiple HTA agencies, with the requirements of each national body displayed in tables that are easy to navigate. It meets a practical need but doesn’t really go far enough for me. The authors do signpost to the PRISMA criteria, but I think it would have been good to reflect on the purpose of the submission guidelines: to encourage a logical and coherent summary of the approaches taken, so that the evidence can be evaluated by decision-makers.

Variation in responsiveness to warranted behaviour change among NHS clinicians: novel implementation of change detection methods in longitudinal prescribing data. BMJ [PubMed] Published 2nd October 2019

I really like this paper. Such a lot of work, from all sectors, is devoted to the production of relevant and timely evidence to inform practice, but if the guidance does not become embedded into the real world then its usefulness is limited.

The authors have managed to utilise a HUGE amount of data to identify the real-world reaction to two pieces of guidance recommending a change in practice in England. They used “trend indicator saturation”, which I’m not ashamed to admit I knew nothing about beforehand, but it is explained nicely. Their thoughtful use of the information available to them results in three indicators of response (in this case, the deprescribing of two drugs): when the change occurs, how quickly it occurs, and how much change occurs.
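As a much-simplified illustration of the underlying idea of estimating when a change happens and how large it is (this is my own sketch with simulated data, not the trend indicator saturation method the authors use), one can scan a prescribing series for the single step change that best explains it:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Simulated monthly prescribing rate, with a drop after guidance is issued at month 30.
months = np.arange(60)
series = np.where(months < 30, 100.0, 70.0) + rng.normal(0, 5, size=60)

# Fit a single step change at each candidate month and keep the best-fitting one.
best = None
for b in range(5, 55):                       # avoid breaks at the very edges
    before, after = series[:b], series[b:]
    sse = ((before - before.mean()) ** 2).sum() + ((after - after.mean()) ** 2).sum()
    if best is None or sse < best[0]:
        best = (sse, b, before.mean() - after.mean())

_, break_month, drop = best
print(f"Estimated change point: month {break_month}; "
      f"estimated reduction: {drop:.1f} prescriptions per month")
```

Indicator saturation methods are far more flexible than this (they allow multiple breaks, trends, and outliers), but the output has the same flavour: the timing, speed, and magnitude of the response.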

The authors discover variation in response to the recommendations, but suggest that their methods could be used to generate feedback to clinicians and so drive further response. As some primary care practices took a while to embed the guidance change into their prescribing, the paper raises interesting questions as to where the barriers to the adoption of guidance lie.

What is next for patient preferences in health technology assessment? A systematic review of the challenges. Value in Health Published November 2019

It may be that patient preferences have a role to play in the uptake of guideline recommendations, as proposed by the authors of my final paper this week. This systematic review of the literature on embedding patient preferences into HTA decision-making groups the academic discussion into five broad areas: conceptual, normative, procedural, methodological, and practical. The authors state that their purpose was not to formulate their own views, merely to present the available literature, but they do a good job of indicating where to find more opinionated literature on this topic.

Methodological issues were the biggest group, covering aspects such as sample selection, the internal and external validity of the preferences generated, and the generalisability of preferences collected from a sample to the entire population. More generally, the range of topics covered in the literature is vast and varied.

It’s a great summary of the challenges faced, and a ranking based on how frequently each topic is mentioned in the literature drives the authors’ proposed next steps. They recommend further research into the incorporation of preferences within or beyond the QALY, and into the use of multiple-criteria decision analysis as a method of integrating patient preferences into decision-making. I support the need for a “scientifically valid” manner of integrating patient preferences into HTA decision-making, but wonder if we can first learn what has worked well, and what hasn’t, from the attempts of HTA agencies thus far.

Thesis Thursday: David Mott

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr David Mott who has a PhD from Newcastle University. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
How do preferences for public health interventions differ? A case study using a weight loss maintenance intervention
Supervisors
Luke Vale, Laura Ternent
Repository link
http://hdl.handle.net/10443/4197

Why is it important to understand variation in people’s preferences?

It’s not all that surprising that people’s preferences for health care interventions vary, but we don’t have a great understanding of what might drive these differences. Increasingly, preference information is being used to support regulatory decisions and, to a lesser but increasing extent, health technology assessments. It could be the case that certain subgroups of individuals would not accept the risks associated with a particular health care intervention, whereas others would. Therefore, identifying differences in preferences is important. However, it’s also useful to try to understand why this heterogeneity might occur in the first place.

The debate on whose preferences to elicit for health state valuation has traditionally focused on those with experience (e.g. patients) and those without (e.g. the general population). This dichotomy is problematic, though: it has been shown that health state utilities systematically differ between these two groups, presumably due to the difference in relative experience. My project aimed to explore whether experience also affects people’s preferences for health care interventions.

How did you identify different groups of people, whose preferences might differ?

The initial plan for the project was to elicit preferences for a health care intervention from general population and patient samples. However, after reviewing the literature, it seemed highly unlikely that anyone would advocate for preferences for treatments to be elicited from general population samples. It has long been suggested that discrete choice experiments (DCEs) could be used to incorporate patient preferences into decision-making, and it turned out that patients were the focus of the majority of the DCE studies that I reviewed. Given this, I took a more granular approach in my empirical work.

We recruited a very experienced group of ‘service users’ from a randomised controlled trial (RCT). In this case, it was a novel weight loss maintenance intervention aimed at helping obese adults who had lost at least 5% of their overall weight to maintain their weight loss. We also recruited an additional three groups from an online panel. The first group were ‘potential service users’: those who met the trial criteria but could not have experienced the intervention. The second group were ‘potential beneficiaries’: those who were obese or overweight but did not meet the trial criteria. The final group were ‘non-users’: those with a normal BMI.

What can your study tell us about preferences in the context of a weight loss maintenance intervention?

The empirical part of my study involved a DCE and an open-ended contingent valuation (CV) task. The DCE was focused on the delivery of the trial intervention, which was a technology-assisted behavioural intervention. It had a number of different components but, briefly, it involved participants weighing themselves regularly on a set of ‘smart scales’, which enabled the trial team to access and monitor the data. Participants received text messages from the trial team with feedback, reminders to weigh themselves (if necessary), and links to online tools and content to support the maintenance of their weight loss.

The DCE results suggested that preferences for the various components of the intervention varied significantly between individuals and between the different groups, and that not all of the components were important. In contrast, the efficacy and cost attributes were important across the board. The CV results suggested that a very significant proportion of individuals would be willing to pay for an effective intervention (i.e. one that avoided weight regain), with very few respondents expressing a willingness to pay for an intervention that led to more than 10–20% weight regain.

Do alternative methods for preference elicitation provide a consistent picture of variation in preferences?

Existing evidence suggests that willingness to pay (WTP) estimates from CV tasks might differ from those derived from DCE data, but there aren’t many empirical studies on this in health. Comparisons were planned in my study, but the approach taken in the end was suboptimal and ultimately inconclusive. The original plan was to obtain WTP estimates for an entire WLM intervention using the DCE and to compare these with the estimates from the CV task. Due to data limitations, it wasn’t possible to make this comparison. However, the CV task was a bit unusual because we asked for respondents’ WTP at various different efficacy levels. So, instead, the comparison made was between average WTP values for a percentage point of weight regain. The differences were not statistically significant.
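For readers unfamiliar with how WTP is derived from a DCE, the standard approach is the marginal rate of substitution between an attribute coefficient and the cost coefficient from the choice model (for example, a conditional logit). A minimal sketch with purely illustrative coefficients, not the estimates from the thesis:

```python
# Hypothetical conditional logit coefficients from a DCE.
beta_efficacy = 0.045    # utility per percentage point of weight regain avoided
beta_cost = -0.012       # utility per £1 of cost

# Marginal WTP is the (negative) ratio of the attribute and cost coefficients.
wtp_per_point = -beta_efficacy / beta_cost
print(f"Implied WTP: £{wtp_per_point:.2f} per percentage point of weight regain avoided")
```

A CV task elicits a comparable figure directly, which is what allows the two methods to be compared.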

Are some people’s preferences ‘better defined’ than others’?

We hypothesised that those with experience of the trial intervention would have ‘better defined’ preferences. To explore this, we compared the data quality across the different user groups. From a quick glance at the DCE results, it is pretty clear that the data were much better for the most experienced group; the coefficients were larger, and a much higher proportion was statistically significant. However, more interestingly, we found that the most experienced group were 23% more likely to have passed all of the rationality tests that were embedded in the DCE. Therefore, if you accept that better quality data are an indicator of ‘better defined’ preferences, then the data do seem reasonably supportive of the hypothesis. That being said, there were no significant differences between the other three groups, which raises the question: was it the difference in experience, or some other difference between RCT participants and online panel respondents?

What does your research imply for the use of preferences in resource allocation decisions?

While there are still many unanswered questions, and there is always a need for further research, the results from my PhD project suggest that preferences for health care interventions can differ significantly between respondents with differing levels of experience. Had my project been applied to a more clinical intervention that is harder for an average person to imagine experiencing, I would expect the differences to have been much larger. I’d love to see more research in this area in future, especially in the context of benefit-risk trade-offs.

The key message is that the level of experience of the participants matters. It is quite reasonable to believe that a preference study focusing on a particular subgroup of patients will not be generalisable to the broader patient population. As preference data, typically elicited from patients, are increasingly used in decision-making (which is great), it is becoming ever more important for researchers to make sure that their respondent samples are appropriate to support the decisions being made.