Rachel Houten’s journal round-up for 11th November 2019

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A comparison of national guidelines for network meta-analysis. Value in Health [PubMed] Published October 2019

The evolving treatment landscape creates a growing dependence on indirect treatment comparisons to generate estimates of clinical effectiveness where current practice has not been compared with the proposed new intervention in a head-to-head trial. This paper is a review of reimbursement bodies’ guidelines for conducting network meta-analyses. Reassuringly, the authors find that it is possible to meet the needs of multiple agencies with one analysis.

The authors assign the criteria to three categories: “assessment and analysis to test assumptions required for a network meta-analysis, presentation and reporting of results, and justification of modelling choices”, with heterogeneity of the included studies highlighted as one of the key elements to retain if the criteria must be prioritised. I think this is a simple way of thinking about what needs to be presented, but the ‘justification’ category, in my experience, is often given less weight than the other two.

This paper is a useful resource for companies submitting to multiple HTA agencies, with the requirements of each national body displayed in tables that are easy to navigate. It meets a practical need but doesn’t really go far enough for me. The authors do signpost to the PRISMA criteria, but I think it would have been really good to think about the purpose of the submission guidelines: to encourage a logical and coherent summary of the approaches taken, so that the evidence can be evaluated by decision-makers.

Variation in responsiveness to warranted behaviour change among NHS clinicians: novel implementation of change detection methods in longitudinal prescribing data. BMJ [PubMed] Published 2nd October 2019

I really like this paper. Such a lot of work, from all sectors, is devoted to the production of relevant and timely evidence to inform practice, but if the guidance does not become embedded into the real world then its usefulness is limited.

The authors have managed to utilise a HUGE amount of data to identify the real-world response to two pieces of guidance recommending a change in practice in England. The authors used “trend indicator saturation”, which I’m not ashamed to admit I knew nothing about beforehand, but it is explained nicely. Their thoughtful use of the information available to them results in three indicators of response (in this case, the deprescribing of two drugs): when the change occurs, how quickly it occurs, and how much change occurs.
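For those who, like me, were new to the method, here is a minimal sketch of indicator saturation in R, using the gets package’s isat() function with trend indicators switched on. This is purely illustrative: the data are simulated, and I am not claiming it reproduces the authors’ pipeline.

```r
# A minimal, illustrative sketch of trend indicator saturation (TIS),
# assuming the gets package; the simulated series below stands in for
# a practice's monthly prescribing rate, not the authors' actual data.
library(gets)

set.seed(42)
n <- 72
# Flat prescribing for 36 months, then a gradual decline, as might follow
# a slow response to deprescribing guidance
y <- c(rep(10, 36), 10 - 0.15 * (1:36)) + rnorm(n, sd = 0.3)

# Saturate the model with trend indicators and keep those that survive
# general-to-specific selection; the retained indicators locate when the
# change starts, and their coefficients describe how fast and how much
# prescribing falls
fit <- isat(y, iis = FALSE, sis = FALSE, tis = TRUE, t.pval = 0.001)
print(fit)
```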

The authors discover variation in response to the recommendations but suggest that their methods could be applied to generate feedback to clinicians and thereby drive further change. As some primary care practices took a while to embed the change in guidance into their prescribing, the paper raises interesting questions about where the barriers to the adoption of guidance lie.

What is next for patient preferences in health technology assessment? A systematic review of the challenges. Value in Health Published November 2019

It may be that patient preferences have a role to play in the uptake of guideline recommendations, as proposed by the authors of my final paper this week. This systematic review, of the literature on embedding patient preferences in HTA decision-making, groups the discussion in the academic literature into five broad areas: conceptual, normative, procedural, methodological, and practical. The authors state that their purpose was not to formulate their own views, merely to present the available literature, but they do a good job of indicating where to find more opinionated literature on this topic.

Methodological issues were the biggest group, covering aspects such as sample selection, the internal and external validity of the preferences generated, and the generalisability of preferences collected from a sample to the entire population. In general, though, the range of topics covered in the literature is vast and varied.

It’s a great summary of the challenges that are faced, and a ranking based on how frequently each topic is mentioned in the literature drives the authors’ proposed next steps. They recommend further research into the incorporation of preferences within or beyond the QALY, and into the use of multiple-criteria decision analysis as a method of integrating patient preferences into decision-making. I support the need for a “scientifically valid manner” of integrating patient preferences into HTA decision-making, but wonder if we can first learn what has worked well, and what hasn’t, from the attempts of HTA agencies thus far.


Rita Faria’s journal round-up for 2nd September 2019

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ [PubMed] Published 28th August 2019

RCTs are the gold standard primary study to estimate the effect of treatments but are often far from perfect. The question is the extent to which their flaws make a difference to the results. Well, RoB 2 is your new best friend to help answer this question.

Developed by a star-studded team, RoB 2 is the update to the Cochrane Collaboration’s original risk of bias tool. Bias is assessed by outcome, rather than for the whole RCT. For me, this makes sense. For example, the primary outcome may be well reported, yet the secondary outcome, which may be the outcome of interest for a cost-effectiveness model, much less so.

Bias is considered in terms of five domains, with the overall risk of bias usually corresponding to the worst risk of bias in any of the domains. This overall risk of bias is then reflected in the evidence synthesis, with, for example, a stratified meta-analysis.
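In other words, the aggregation rule is essentially “worst domain wins”. A toy sketch in R makes this concrete (my own illustration, with made-up judgements; this is not code from the paper or from the tool itself):

```r
# Toy illustration of the usual RoB 2 aggregation rule: the overall
# judgement for an outcome is the worst judgement across the domains.
# The domain judgements below are made up for illustration.
judgements <- c("low", "some concerns", "low", "high", "low")
severity   <- c("low", "some concerns", "high")  # ordered worst-last

overall <- severity[max(match(judgements, severity))]
print(overall)  # "high"
```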

The paper is a great read! Jonathan Sterne and colleagues explain the reasons for the update and the process that was followed. Clearly, quite a lot of thought was given to the types of bias and to developing questions to help reviewers assess them. The only downside is that the tool may require more time to apply, given that it needs to be done by outcome. Still, I think that’s a price worth paying for more reliable results. Looking forward to seeing it in use!

Characteristics and methods of incorporating randomised and nonrandomised evidence in network meta-analyses: a scoping review. Journal of Clinical Epidemiology [PubMed] Published 3rd May 2019

In keeping with the evidence synthesis theme, this paper by Kathryn Zhang and colleagues reviews how the applied literature has been combining randomised and non-randomised evidence. The headline findings are that combining these two types of study designs is rare and, when it does happen, naïve pooling is the most common method.

I imagine that the limited use of non-randomised evidence is due to its risk of bias. After all, it is difficult to ensure that the measure of association from a non-randomised study is an estimate of a causal effect. Hence, it is worrying that the majority of network meta-analyses that did combine non-randomised studies did so with naïve pooling.

This scoping review may kick-start some discussions in the evidence synthesis world. When should we combine randomised and non-randomised evidence? How best to do so? And how do we make sure that the right methods are used in practice? As a cost-effectiveness modeller, with limited knowledge of evidence synthesis, I’ve grappled with these questions myself. Do get in touch if you have any thoughts.

A cost-effectiveness analysis of shortened direct-acting antiviral treatment in genotype 1 noncirrhotic treatment-naive patients with chronic hepatitis C virus. Value in Health [PubMed] Published 17th May 2019

Rarely do we see a cost-effectiveness paper where the proposed intervention is less costly and less effective, that is, in the controversial southwest quadrant. This excellent paper by Christopher Fawsitt and colleagues is a welcome exception!

Christopher and colleagues looked at the cost-effectiveness of shorter treatment durations for chronic hepatitis C. Compared with the standard duration, the shorter treatment is not as effective, hence results in fewer QALYs. But it is much cheaper to treat patients over a shorter duration and re-treat those who were not cured than to treat everyone with the standard duration. Hence, for the base case and for most scenarios, the shorter treatment is cost-effective.
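For readers less familiar with the southwest quadrant, a quick worked example may help; the numbers below are purely illustrative and are not taken from the paper. With incremental effects and costs both negative, the usual decision rule flips: the less effective, less costly option is cost-effective when the savings per QALY forgone exceed the threshold $\lambda$, or equivalently when its incremental net monetary benefit is positive. With $\Delta E = -0.1$ QALYs, $\Delta C = -£5{,}000$ and $\lambda = £20{,}000$ per QALY:

$$\text{NMB} = \lambda \, \Delta E - \Delta C = £20{,}000 \times (-0.1) - (-£5{,}000) = £3{,}000 > 0.$$

Equivalently, the ICER is $\Delta C / \Delta E = £50{,}000$ saved per QALY forgone, which exceeds $\lambda$, so the shorter treatment would be preferred.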

I’m sure that labelling a less effective and less costly option as cost-effective may have been controversial in some quarters. Some may argue that it is unethical to offer a worse treatment than the standard even if it saves a lot of money. In my view, it is no different from funding better and more costly treatments, given that the costs will be borne by other patients, who will necessarily have access to fewer resources.

The paper is beautifully written and is another example of an outstanding cost-effectiveness analysis with important implications for policy and practice. The extensive sensitivity analysis should provide reassurance to the sceptics. And the discussion is clever in arguing for the value of a shorter duration in resource-constrained settings and for hard-to-reach populations. A must-read!


Rita Faria’s journal round-up for 29th July 2019

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

All-male panels and gender diversity of issue panels and plenary sessions at ISPOR Europe. PharmacoEconomics – Open [PubMed] Published 22nd July 2019

All male panels and other diversity considerations for ISPOR. PharmacoEconomics – Open [PubMed] Published 22nd July 2019

What does the gender balance look like at ISPOR Europe conferences? This fascinating paper by Jacoline Bouvy and Michelle Mujoomdar kick-started a debate among the #HealthEconomics Twitterati by showing that the gender distribution is far from balanced.

Jacoline and Michelle found that, between 2016 and 2018, 30% of the 346 speakers at issue panels and plenary sessions were women. Of the 85 panels and sessions, 29% were manels and 64% were mainly composed of men, whereas 2% were all-women panels (‘famels’?).

The ISPOR president, Nancy Devlin, had a positive and constructive response. For example, I was very pleased to learn that ISPOR is taking the issue seriously and no longer has all-male plenary sessions. Issue panels, however, are proposed by members. The numbers show that the gender imbalance in the panels that do get accepted reflects the imbalance in the panels that are proposed.

These two papers raise quite a lot of questions. Why are fewer women participating in abstracts for issue panels? Does the gender distribution in abstracts reflect the distribution in membership, conference attendance, and submission of other types of abstracts? And how does it compare with other conferences in health economics and in other disciplines? Could we learn from other disciplines for effective action? If there is a gender imbalance in conference attendance, providing childcare may help (see here for a discussion). If women tend to submit more abstracts for posters rather than for organised sessions, more networking opportunities both online and at conferences could be an effective action.

I haven’t studied this phenomenon, so I really don’t know. I’d like to suggest that ISPOR starts collecting data systematically and implements initiatives in a way that is amenable to evaluation. After all, doing an evaluation is the health economist way!

Seamless interactive language interfacing between R and Stata. The Stata Journal [RePEc] Published 14th March 2019

Are you a Stata user who, every so often, would like to use a function only available in R? This brilliant package is for you!

E.F. Haghish created the rcall package to call R from within Stata, either interactively or for a specific function. With the console mode, we call R to perform an action. The interactive mode allows us to call R from a Stata do-file. The vanilla mode invokes a fresh R session. The sync mode automatically synchronises objects between R and Stata. Additionally, rcall can transfer various types of data, such as locals, globals, and datasets, between Stata and R. Lastly, you can write ado-commands that embed R functions in Stata programs.
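A minimal sketch of how this might look in practice, based on the modes described above; treat the exact syntax as an assumption and check the package’s help file, as it may differ between versions:

```stata
* Illustrative sketch only; syntax assumptions are based on the rcall
* documentation, so check help rcall for your installed version.

* Interactive mode (the default): run an R expression from a do-file
rcall: print(rnorm(3))

* Vanilla mode: start a fresh R session with no memory of earlier calls
rcall vanilla: sessionInfo()

* Objects defined in R are returned to Stata; per the package docs, the
* scalar a should become accessible as r(a) after the call
rcall: a <- 2 + 2
display r(a)

* The R-side helper st.data(), provided by rcall, pulls the current
* Stata dataset into R as a data.frame
sysuse auto, clear
rcall: df <- st.data(); print(mean(df[["price"]]))
```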

This package opens up loads of possibilities. Obviously, it does require that Stata users also know R. But it does make it easy to use R from the comfort of Stata. Looking forward to trying it out more!

Development of the summary of findings table for network meta-analysis. Journal of Clinical Epidemiology [PubMed] Published 2nd May 2019

Whilst the previous paper expands your analytical toolbox, this paper helps you present your results, in this case for a network meta-analysis. Juan José Yepes-Nuñez and colleagues propose a new summary of findings table for network meta-analyses. This new table reports all the relevant findings in a way that works for readers.

This study is remarkable because the authors actually tested the new table with 32 users over four rounds of testing and revision. The limitation is that the users were mostly methodologists, although I imagine that recruiting other users, such as clinicians, may have been difficult. The new format comprises three sections. The upper section details the PICO (Population; Intervention; Comparison; Outcome) and shows the diagram of the evidence network. The middle section summarises the results in terms of the comparisons, number of studies, participants, relative effect, absolute outcomes and absolute difference, certainty of evidence, rankings, and interpretation of the findings. The lower section defines the terminology and provides some details on the calculations.

It was interesting to read that users felt confused and overwhelmed if the results for all comparisons were shown. Therefore, the table shows the results for one main comparator vs other interventions. The issue is that, as the authors discuss, one comparator needs to be chosen as the main comparator, which is not ideal. Nonetheless, I agree that this is a compromise worth making to achieve a table that works!

I really enjoyed reading about the process to get to this table. I’m wondering if it would be useful to conduct a similar exercise to standardise the presentation of cost-effectiveness results. It would be great to know your thoughts!
