Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.
Does the use of health technology assessment have an impact on the utilisation of health care resources? Evidence from two European countries. European Journal of Health Economics [PubMed] Published 5th February 2020
The ostensible purpose of health technology assessment (HTA) is to provide health care decision-makers with the information they need when considering whether to change existing policies. One of the questions I’ve heard muttered sotto voce (and that I will admit to having asked myself in more cynical moments) is whether HTAs actually make a difference. We are generating lots of evidence, but does it have any real impact on decision-making? Do the complex analyses health economists undertake actually shape policy?
This paper used data from Catalonia and England to assess trends in the usage of new cancer drugs before and after the publication of positive HTA recommendations between 2011 and the end of 2016 – recommendations issued by the National Institute for Health and Care Excellence (NICE) in England and by a collection of regional approval bodies in Catalonia and Spain. Utilization (volume of drugs dispensed) and expenditure were extracted from retrospective records. The authors built a Poisson regression model that allowed them to observe temporal trends in usage before and following a positive recommendation.
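Conceptually, this is a segmented (interrupted time-series) Poisson regression: monthly dispensing counts are regressed on a time trend plus a post-recommendation indicator, so the indicator’s coefficient captures any step change net of the underlying trend. A minimal sketch of that idea – not the authors’ actual specification, and with simulated counts and invented parameters purely for illustration – might look like this:

```python
import numpy as np

def fit_poisson(X, y, n_iter=25):
    """Poisson regression (log link) via iteratively reweighted least squares."""
    # Warm start from an OLS fit on the log scale to keep IRLS stable.
    beta = np.linalg.lstsq(X, np.log(y + 0.5), rcond=None)[0]
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu   # working response
        W = mu                         # working weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(0)

# Simulated monthly dispensing counts: 24 months before and 24 months after
# a positive recommendation, with an upward trend plus a step increase.
months = np.arange(48.0)
post = (months >= 24).astype(float)
counts = rng.poisson(np.exp(3.0 + 0.01 * months + 0.4 * post))

X = np.column_stack([np.ones(48), months, post])
beta = fit_poisson(X, counts)
print(np.round(beta, 3))  # [intercept, monthly trend, post-recommendation step]
```

Because the model uses a log link, exponentiating the step coefficient gives the rate ratio in utilization attributable to the recommendation, over and above the pre-existing trend.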
The authors noted that a lack of pre-recommendation utilization data made it difficult to compute a model of negative recommendations (which is the more cynical version of the question!), so it is important to recognize that as a limitation of the approach. They also note, however, that it is typically the case in the UK and Catalonia that approvals for new drugs are conditional on a positive recommendation. Spain has a different system in which medicines may still be available even if they are not recommended.
The results of the model are a bit more complex than is easy to fit into a blog post, but the bottom line is that a positive recommendation does produce an increase in utilization. What stuck out to me about the descriptive findings was a consistent trend toward increased usage before the recommendation was published. But the Poisson model found a significant effect of the recommendation even after controlling for that temporal trend. The authors helpfully noted that the criteria going into a recommendation are different between England and Spain (cost per QALY in England, clinical effectiveness alone sometimes in Spain), which makes inter-country comparisons challenging.
In Canada, newly-developed cancer drugs undergo HTA through the pan-Canadian Oncology Drug Review (pCODR), a program run under the auspices of the Canadian Agency for Drugs and Technologies in Health (CADTH). Unlike NICE in the UK, the results of CADTH’s pCODR recommendations are not binding; they are intended instead to provide provincial decision-makers with expert evidence they can use when deciding whether or not to add drugs to their formulary.
This paper, written by researchers at the Canadian Centre for Applied Research in Cancer Control (ARCC), reviewed the publicly-available reports governing 43 pCODR recommendations between 2015 and 2018. The paper summarizes the findings of the cost-effectiveness analyses generated in each report, including incremental costs and incremental QALYs (incremental cost per QALY being the reference case used by CADTH). The authors also appraised the methods chosen within each submission, both in terms of decision model structure and data inputs.
Interestingly, and perhaps disconcertingly, the paper reports a notable discrepancy between the ICERs reported by the submitting manufacturer and those calculated by CADTH’s Economics Guidance Panel. This appeared to be largely driven by the kind of health-related quality of life (HRQoL) data used to generate the QALYs in each submission. The authors note that the majority (56%) of the submissions provided to pCODR didn’t collect HRQoL data alongside clinical trials, preferring instead to use values published in the literature. In the face of high levels of uncertainty and relatively small incremental benefits (the median change in QALYs was 0.86), it seems crucial to have reliable information about HRQoL for making these kinds of decisions.
Regulatory and advisory agencies like CADTH have a rather weighty responsibility, not only to help decision makers identify which new drugs and technologies the health care system should adopt, but also which ones they should reject. When manufacturers’ submissions rely on inappropriate data with high levels of uncertainty, this task becomes much more difficult. The authors suggest that manufacturers should be collecting their own HRQoL data in clinical trials they fund. After all, if we want HTAs to have an effect on policy-making, we should also make sure they’re having a positive effect.
The cost-effectiveness of limiting federal housing vouchers to use in low-poverty neighborhoods in the United States. Public Health [PubMed] Published January 2020
My undergraduate education was heavily steeped in discussions of the social determinants of health. Another cynical opinion I’ve heard (again sometimes from myself) is that health economics is disproportionately concerned with the adoption of new drugs that have a marginal effect on health, often at the expense of investment in the other non-health-care determinants. This is a particularly persuasive bit of cynicism when you consider cancer drugs like those in our previous two examples, where the incremental benefits are typically modest and the costs typically high. That’s why I was especially excited to see this paper published by my friend Dr. Zafar Zafari, applying health economic analysis frameworks to something atypical: housing policy.
The authors evaluated a trial running alongside a program providing housing vouchers to 4600 low-income households. The experimental condition in this case was that the vouchers could only be used in well-off neighbourhoods (i.e., those with a low level of poverty). The authors considered the evidence showing a link between neighbourhood wealth and lower rates of obesity-related health conditions like diabetes, and used that evidence to construct a Markov decision model to estimate incremental cost per QALY over the length of the study (10-15 years). Cohort characteristics, relative clinical effectiveness, and costs of the voucher program were estimated from trial results, with other costs and probabilities derived from the literature.
Compared to the control group (public housing), use of the housing vouchers provided an additional 0.23 QALYs per person, at a lower cost (about $750 less per person). Importantly, these findings were highly robust to parameter uncertainty, with 99% of ICERs falling below a willingness-to-pay threshold of $20,000/QALY (>90% below a WTP threshold of $0/QALY). The model was highly sensitive to the discount rate, which makes sense: for a chronic condition like diabetes and a distal exposure like housing, we would expect the incremental health gains to accrue years after the initial intervention.
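The discount-rate sensitivity is worth dwelling on: discounting shrinks late-accruing QALY gains much more than costs or savings that are spread evenly over the horizon. A toy sketch makes the mechanics concrete – the streams below are invented for illustration and are not the paper’s actual inputs:

```python
import numpy as np

def discounted_total(annual_values, rate):
    """Present value of a stream of annual values at a given discount rate."""
    t = np.arange(len(annual_values))
    return float(np.sum(annual_values / (1 + rate) ** t))

years = 15
# Hypothetical per-person streams: QALY gains only begin after year 5
# (distal housing effects), while cost savings accrue evenly throughout.
qaly_gain = np.concatenate([np.zeros(5), np.full(10, 0.03)])
cost_saving = np.full(years, -50.0)  # negative = voucher saves money

for rate in (0.0, 0.03, 0.05):
    dq = discounted_total(qaly_gain, rate)
    dc = discounted_total(cost_saving, rate)
    label = "dominant" if dc < 0 and dq > 0 else f"ICER={dc / dq:.0f}"
    print(f"rate={rate:.0%}: dQALY={dq:.3f}, dcost=${dc:.0f}, {label}")
```

Raising the discount rate erodes the (late) QALY gains proportionally faster than the (even) cost savings, which is exactly the pattern that makes a model like this one sensitive to the rate chosen.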
There are a lot of things to like about this paper, but the one that stands out to me is the way they’ve framed the question:
We seek to inform the policy debate over the wisdom of spending health dollars on non-health sectors of the economy by defining the trade-off, or ‘opportunity cost’ of such a decision.
The idea that “health funds” should be focussed on “health care” robs us of the opportunity to consider the health impact of interventions in other policy areas. By bringing something like housing explicitly into the realm of cost-per-QALY analysis, the authors invite us all to consider the kinds of trade-offs we make when we relegate our consideration of health only to the kinds of things that happen inside hospitals.
A multidimensional array representation of state-transition model dynamics. Medical Decision Making [PubMed] Published 28th January 2020
I’ve been building models in R for a few years now, and developed a method of my own more or less out of necessity. So I’ve always been impressed with and drawn to the work of the group Decision Analysis in R for Technologies in Health (the amazingly-named DARTH). I’ve had the opportunity to meet a couple of their scientists and have followed their work for a while, and so I was really pleased to see the publication of this paper, hot on the heels of another paper discussing a formalized approach to model construction in R, and timed to coincide with the publication of a step-by-step guidebook on how to build models according to the DARTH recipe.
The DARTH approach (and, as a happy coincidence, mine too) involves tapping into R’s powerful ability to organize data into multidimensional arrays. The paper talks in depth about how R arrays can be used to represent health states, and how to set up and program models of essentially any level of complexity using a set of basic R commands. As a bonus they include publicly-accessible sample code that you can follow along with as you read (which is the best way to learn something like this).
The authors argue that the method they propose is ideal for capturing and reflecting “transition rewards” – that is, effects on the cohort that occur during transitions between health states – in addition to “state rewards” (effects that happen as a consequence of being within a state). The key to this dynamics-array approach is the use of a three-dimensional array to store the transitions, with the third dimension representing the passage of time. After walking the reader through the theory, the authors present a sample three-state model and show that the new method is fast, efficient, and accurate.
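The idea is easier to see in code than in the abstract. The paper itself works in R, but the structure – an array with one slice per cycle, where cell (i, j) of slice t holds the share of the cohort moving from state i to state j in cycle t – carries over directly to any array-capable language. Here is a bare-bones NumPy analogue, with states, probabilities, and rewards invented purely for illustration:

```python
import numpy as np

# Three-state model: Healthy (H), Sick (S), Dead (D).
# P[i, j] = per-cycle probability of moving from state i to state j.
P = np.array([[0.85, 0.10, 0.05],
              [0.00, 0.80, 0.20],
              [0.00, 0.00, 1.00]])

n_states, n_cycles = 3, 10
m = np.zeros((n_cycles + 1, n_states))
m[0] = [1.0, 0.0, 0.0]                # everyone starts Healthy

# Dynamics array: A[t, i, j] = share of the cohort moving i -> j in cycle t.
A = np.zeros((n_cycles, n_states, n_states))
for t in range(n_cycles):
    A[t] = m[t][:, None] * P          # distribute occupancy across transitions
    m[t + 1] = A[t].sum(axis=0)       # column sums give next cycle's occupancy

# State rewards (utility per cycle spent in each state) apply to occupancy...
state_utility = np.array([1.0, 0.6, 0.0])
qalys = m[1:] @ state_utility

# ...while transition rewards (e.g. a one-off cost incurred on each H -> S
# move) apply directly to the matching cells of the dynamics array.
transition_cost = np.zeros((n_states, n_states))
transition_cost[0, 1] = 5000.0
total_transition_cost = (A * transition_cost).sum()
```

Because each slice records exactly how much of the cohort made each transition in that cycle, a transition reward falls out of a single elementwise multiplication, rather than having to be bolted awkwardly onto a conventional cohort trace.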
I hope that I have been sufficiently clear that I am a big fan of DARTH and admire their work a great deal, because there is one big criticism I have to level at them: this paper (and the others I have cited) is not terribly easy to follow. It sort of presumes that you already understand a lot of the topics that are discussed, which I personally do not. And if I, someone who has built many array-based models in R, am having a tough time understanding the explanation of their approach, then woe betide anyone else who is reading this paper without a firm grasp of R, decision modelling theory, matrix algebra, and a handful of the other topics required to benefit from this (truly excellent) work.
DARTH is laying down a well-thought-out path to revolutionizing the standard approach to model building, but they can only do that if people start adopting their approach. If I were a grad student hoping to build my first model, this paper would likely intimidate me enough to maybe go back to the default of building it in Excel. As a postdoc with my own way of doing things there is a big opportunity cost of switching, and part of that cost is feeling too dumb to follow the instructions. I know that DARTH has tutorials and courses and workshops to help people get up to speed, but I hope that they also have a plan to translate some of this knowledge into a form that is more accessible for casual coders, non-economists, and other people who need this info but who (like me) might find this format opaque.