Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.
Wow, has the world ever changed since my last round-up! Both for me personally and for pretty much everyone else on the planet. It’s never been a more interesting time to be a health services researcher and modeller. I’m having a lot of conversations with casual acquaintances about topics I formerly only ever discussed with my fellow egg-heads. Here are a few (non-COVID-related) papers that caught my eye.
Social, ethical, and other value judgments in health economics modelling. Social Science & Medicine [PubMed] Published 2nd April 2020
This paper is interesting to me for a few reasons. First, throughout my science education I didn’t do much formal training in the philosophy of science (and I suspect I’m not alone), so this was a useful look at an aspect of model building I haven’t spent a lot of time on. Second, I am closely acquainted with two of the paper’s authors and respect the work of all three a great deal. And third… I’m actually in this paper as one of the participants!
The general thrust of this paper and the field of work the authors are exploring has to do with implicit and explicit value judgments when health economists build models. While it is necessary practice to list the assumptions that go into building a model, the authors note that those tend to be mechanistic and biomedical. There are many other assumptions that simply go unexamined – social, ethical, and other normative judgments that modellers must make but don’t usually discuss or disclose.
The research took the form of a series of interviews in which modellers of various levels of expertise were asked questions to explore the kinds of judgments they make when doing their work. The authors discuss these judgments in terms of a “value free ideal” of scientific objectivity. That is, that scientists should be doing work that is free of personal biases, judgments, and values. Of course, once you put that ideal down on paper you can immediately recognize it as an impossible (and perhaps unwise?) standard. The interviews explored the tension between that ideal and practical reality.
The authors describe four distinct themes that emerged from the interviews – value-laden background assumptions; boundary challenge; arguments from inductive risk; and the cascade effect that value judgments may have on downstream study results. Participants spoke about the ways in which they are aware of the existence of these themes and a few ways to mitigate their impact, but they generally acknowledged that they are issues that are ‘thought about’, not discussed or explored in publications.
The bit of the philosophy of science that does stick with me is the importance of understanding the limitations of one’s work and discussing them openly. This paper examines the ways in which health economists may be unaware of important limitations that lie outside what we think of as ‘scientific’. I happen to know that this paper is the first step in an ambitious project in the modelling world, so keep your eyes peeled.
Association between the use of surrogate measures in pivotal trials and health technology assessment decisions: a retrospective analysis of NICE and CADTH reviews of cancer drugs. Value in Health [PubMed] Published March 2020
If you’re reading this on the day it comes out, I’ll have just started a new job at the Canadian Agency for Drugs and Technologies in Health (CADTH), an agency that conducts evidence appraisals designed to help with decision-making. One of the sticking points in these kinds of evidence appraisals, particularly in oncology, is the use of surrogate measures to estimate incremental quality-adjusted life years (QALYs). Phase III clinical trials are often powered to detect differences in progression-free survival (PFS) but frequently conclude before a difference in overall survival (OS) emerges between trial arms. Models are typically used to extrapolate survival benefit from PFS.
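For the curious, here is roughly what that extrapolation step can look like. This is a minimal sketch using the flexsurv package on made-up patient-level PFS data; the data, the choice of a Weibull distribution, and the time horizon are all invented for illustration rather than taken from the paper.

```r
# Sketch only: fit a parametric curve to hypothetical PFS data and extrapolate
# beyond trial follow-up. The data and distribution choice are invented.
library(survival)
library(flexsurv)

set.seed(42)
pfs_data <- data.frame(
  time  = rweibull(200, shape = 1.2, scale = 18),  # hypothetical PFS times (months)
  event = rbinom(200, 1, 0.8)                      # 1 = progression or death observed
)

# Fit a Weibull model to the (censored) trial data
fit <- flexsurvreg(Surv(time, event) ~ 1, data = pfs_data, dist = "weibull")

# Extrapolate survival probabilities well beyond the trial horizon (10 years)
extrap <- summary(fit, t = seq(0, 120, by = 1), type = "survival")[[1]]
head(extrap)
```

Which distribution to fit, and how far to trust the extrapolated tail, are exactly the kind of judgment calls the first paper in this round-up is worried about.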
The authors looked through oncology drug recommendations issued by CADTH and by the National Institute for Health and Care Excellence (NICE) in the United Kingdom between 2012 and the end of 2016. The recommendation documents were examined to determine the type of endpoint used in the evaluation – OS, PFS, or disease response (DR). Linear probability modelling was then used to compare the likelihood that a drug would receive a positive recommendation with or without evidence of OS improvement, while accounting for contingent factors such as whether the drug served an unmet need, whether it had been designated an ‘orphan’ drug, and whether cost-effectiveness evidence was present.
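A linear probability model is just an ordinary least squares regression with a binary outcome, so the core of the analysis can be sketched in a few lines of R. To be clear, this is an illustrative sketch on invented appraisal-level data and variable names, not the authors’ code or dataset.

```r
# Sketch only: linear probability model on invented appraisal-level data
set.seed(1)
appraisals <- data.frame(
  positive_rec  = rbinom(120, 1, 0.6),  # 1 = positive recommendation
  os_benefit    = rbinom(120, 1, 0.5),  # 1 = documented OS improvement
  unmet_need    = rbinom(120, 1, 0.3),
  orphan_status = rbinom(120, 1, 0.2),
  cea_submitted = rbinom(120, 1, 0.7)
)

# OLS with a binary outcome = linear probability model
lpm <- lm(positive_rec ~ os_benefit + unmet_need + orphan_status + cea_submitted,
          data = appraisals)
summary(lpm)

# The coefficient on os_benefit is the estimated change in the probability of a
# positive recommendation associated with documented OS improvement, holding the
# contingent factors constant. In practice you would pair this with
# heteroskedasticity-robust standard errors.
```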
Generally, the authors found that both NICE and CADTH were equally likely to recommend drugs whether or not they had documented evidence of improved OS (when considered in conjunction with the contingent factors). This suggests that these appraisal bodies are willing to use PFS as a surrogate effectiveness measure when issuing recommendations. The authors also found that while cost-effectiveness was a statistically significant factor in whether or not a drug received a recommendation from NICE, CADTH recommendations did not seem to hinge on value for money. A large part of this discrepancy is the fact that CADTH issues recommendations that are conditional on improving cost-effectiveness, whereas NICE does not. It’s also important to note that NICE plays a much more regulatory role than CADTH, which is largely advisory and does not set policy.
As oncology drugs move toward genomic (and other ‘omic) approaches and the pace of discovery accelerates, the use of surrogate markers will become more common. Whether you agree with it or not, agencies like CADTH and NICE seem to accept these surrogates even in the absence of documented OS benefit. This poses an obvious challenge to scientists working in those agencies (which I guess now includes me) to have a firm grasp of the limitations of those approaches.
Conducting value for money analyses for non‑randomised interventional studies including service evaluations: an educational review with recommendations. PharmacoEconomics [PubMed] [RePEc] Published 15th April 2020
As we saw from the previous paper, evidence considered by decision makers is typically focussed on the results of clinical trials. There are other kinds of evidence, however, where randomization is not possible. These include the kinds of evidence generated in evaluations of services in the field. Economic evaluation guidelines must be reinterpreted and adapted to reflect these different circumstances in order to allow decision makers to consider tradeoffs in value for money.
The authors of this article provide a comprehensive overview of key issues in economic evaluation, and discuss how established methods may need to be adapted to reflect the types of evidence that are typically available for these kinds of service evaluations. The article provides a good introductory view of economic evaluation for people who may be unfamiliar with the practice, and who need to understand the limitations of following the established guidelines.
It was also a nice bonus to see this blog acknowledged as a source for some of the articles the authors used when writing their paper. Apparently, this is the first time we’ve seen our little corner of the internet show up in an acknowledgements section! It’s always great to see this kind of work making a positive impact.
R and Shiny for cost-effectiveness analyses: why and when? A hypothetical case study. PharmacoEconomics [PubMed] Published 31st March 2020
I have a confession to make: I’ve never built a model in Excel. I built my first model using a software suite called iThink. My second was in R, and I never looked back. Excel is, I have since learned, the standard for industry while academia is starting to move toward R. This invites the obvious question of which platform is better. This paper conducts a head-to-head comparison of the two approaches.
To do this, the authors constructed two identical models of Chimeric Antigen Receptor T-Cell (CAR T) therapy in blood malignancies. The structure of each model follows a familiar format in oncology: a three-state Markov model with ‘Progression Free’, ‘Post-Progression’, and ‘Dead’ states. What is not typical about this model is the fact that some patients will be considered ‘cured’ due to the nature of the therapy, so the model included a ‘mixture-cure’ function that sets the mortality rate for ‘cured’ patients equal to that of the general population. Survival data were taken from digitized Kaplan-Meier curves in the published literature. The authors also conducted a propensity score matching analysis, fit parametric survival models, and conducted probabilistic analysis. The model had a 28-day cycle length and an 80-year time horizon.
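The ‘mixture-cure’ idea is easier to see in code than in prose: trace the cured and uncured sub-groups separately, give the cured group general-population mortality only, and weight the two traces by the cure fraction. Below is a simplified base R sketch, not the authors’ model; the transition probabilities, cure fraction, background mortality, and cycle count are all invented for illustration.

```r
# Sketch only: three-state (Progression Free, Post-Progression, Dead) Markov
# trace with a cure fraction. All numbers below are invented placeholders.
n_cycles   <- 1040     # 28-day cycles, roughly an 80-year horizon
cure_frac  <- 0.40     # assumed proportion cured at model entry
p_gen_mort <- 0.0006   # assumed per-cycle general-population mortality

# Per-cycle transition matrix for uncured patients (rows = from, cols = to)
tm_uncured <- matrix(c(0.94, 0.04, 0.02,
                       0.00, 0.92, 0.08,
                       0.00, 0.00, 1.00),
                     nrow = 3, byrow = TRUE)

# Cured patients face only background mortality
tm_cured <- matrix(c(1 - p_gen_mort, 0,              p_gen_mort,
                     0,              1 - p_gen_mort, p_gen_mort,
                     0,              0,              1),
                   nrow = 3, byrow = TRUE)

run_trace <- function(tm, n) {
  out <- matrix(0, nrow = n + 1, ncol = 3)
  out[1, ] <- c(1, 0, 0)                       # everyone starts progression free
  for (i in seq_len(n)) out[i + 1, ] <- out[i, ] %*% tm
  out
}

# Mixture: weight the cured and uncured traces by the cure fraction
cohort <- cure_frac * run_trace(tm_cured, n_cycles) +
          (1 - cure_frac) * run_trace(tm_uncured, n_cycles)

# Undiscounted life years (ignoring half-cycle correction), 28-day cycles
life_years <- sum(cohort[, 1:2]) * 28 / 365.25
life_years
```

In the published comparison, of course, the survival inputs come from the digitized Kaplan-Meier data, the propensity-matched comparator, and the fitted parametric curves, which is precisely the kind of statistical machinery that is easier to keep inside the model code in R than to bolt onto a spreadsheet.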
Generally, while both models produced the same outcomes, the R model was more powerful, ran faster, and was able to conduct some of the more sophisticated statistical procedures (matching, model fitting) within the code itself. By contrast, the Excel model had to pass values in from external sources, had a slower run time (13.2 minutes vs. 1.42 minutes in R), and could not automate many quality control functions. R also offers other useful capabilities, like managing multiple versions of a model using Git, that are cumbersome to replicate in Excel.
The authors conclude with four conditions under which R is preferable to Excel:
- The intended audience accepts models in R (which is far from universal);
- The presence of complex statistical methods;
- The likelihood that the underlying data and/or statistical methods may change in the future;
- A long run time.
The aspect of this comparison that goes largely unmentioned hearkens back to something I mentioned in my previous round-up: R is scary. A faster runtime isn’t particularly attractive if the only way to achieve it is to spend months learning how to program. As much as R is ‘better’ than Excel, its benefits are moot if modellers don’t know how to use it.
Credits