Ian Cromwell’s journal round-up for 17th February 2020

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Does the use of health technology assessment have an impact on the utilisation of health care resources? Evidence from two European countries. European Journal of Health Economics [PubMed] Published 5th February 2020

The ostensible purpose of health technology assessment (HTA) is to provide health care decision-makers with the information they need when considering whether to change existing policies. One of the questions I’ve heard muttered sotto voce (and that I will admit to having asked myself in more cynical moments) is whether or not HTAs actually make a difference. We are generating lots of evidence, but does it have any real impact on decision making? Do the complex analyses health economists undertake make any impact on policy?

This paper used data from Catalonia and England to assess trends in the utilisation of new cancer drugs before and after the publication of positive HTA recommendations – issued by the National Institute for Health and Care Excellence (NICE) in England and by a collection of regional approval bodies in Catalonia and Spain – between 2011 and the end of 2016. Utilization (volume of drugs dispensed) and expenditure were extracted from retrospective records. The authors built a Poisson regression model that allowed them to observe temporal trends in usage before and following a positive recommendation.
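The authors' exact specification isn't reproduced here, but a minimal R sketch of this kind of pre/post Poisson model – with simulated monthly dispensing counts standing in for the real data – might look like this:

```r
# Sketch of a pre/post Poisson utilization model (simulated data, not the paper's).
set.seed(42)
month <- 1:48
post  <- as.integer(month > 24)   # 1 in months after the positive recommendation
units <- rpois(48, lambda = exp(2 + 0.02 * month + 0.5 * post))
fit <- glm(units ~ month + post, family = poisson(link = "log"))
summary(fit)
exp(coef(fit)["post"])  # multiplicative jump in utilization, net of the time trend
```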

The authors noted that a lack of pre-recommendation utilization data made it difficult to compute a model of negative recommendations (which is the more cynical version of the question!), so it is important to recognize that as a limitation of the approach. They also note, however, that it is typically the case in the UK and Catalonia that approvals for new drugs are conditional on a positive recommendation. Spain has a different system in which medicines may still be available even if they are not recommended.

The results of the model are a bit more complex than is easy to fit into a blog post, but the bottom line is that a positive recommendation does produce an increase in utilization. What stuck out to me about the descriptive findings was the consistent trend toward increased usage before the recommendation was published. But the Poisson model found a significant effect of the recommendation even after controlling for that temporal trend. The authors helpfully noted that the criteria going into a recommendation differ between England and Spain (cost per QALY in England, sometimes clinical effectiveness alone in Spain), which makes inter-country comparisons challenging.

Health‐related quality of life in oncology drug reimbursement submissions in Canada: a review of submissions to the pan‐Canadian Oncology Drug Review. Cancer [PubMed] Published 1st January 2020

In Canada, newly-developed cancer drugs undergo HTA through the pan-Canadian Oncology Drug Review (pCODR), a program run under the auspices of the Canadian Agency for Drugs and Technologies in Health (CADTH). Unlike NICE in the UK, the results of CADTH’s pCODR recommendations are not binding; they are intended instead to provide provincial decision-makers with expert evidence they can use when deciding whether or not to add drugs to their formulary.

This paper, written by researchers at the Canadian Centre for Applied Research in Cancer Control (ARCC), reviewed the publicly-available reports underlying 43 pCODR recommendations issued between 2015 and 2018. The paper summarizes the findings of the cost-effectiveness analyses generated in each report, including incremental costs and incremental QALYs (incremental cost per QALY being the reference case used by CADTH). The authors also appraised the methods chosen within each submission, both in terms of decision model structure and data inputs.

Interestingly, and perhaps disconcertingly, the paper reports a notable discrepancy between the ICERs reported by the submitting manufacturer and those calculated by CADTH’s Economics Guidance Panel. This appeared to be largely driven by the kind of health-related quality of life (HRQoL) data used to generate the QALYs in each submission. The authors note that the majority (56%) of the submissions provided to pCODR didn’t collect HRQoL data alongside clinical trials, preferring instead to use values published in the literature. In the face of high levels of uncertainty and relatively small incremental benefits (the median change in QALYs was 0.86), it seems crucial to have reliable information about HRQoL for making these kinds of decisions.
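To see why the source of HRQoL data matters so much, here is a toy calculation (all numbers invented, not from the paper) of how the ICER moves with the assumed utility weight:

```r
# Illustrative only: the ICER under different assumed utility weights.
inc_cost <- 120000                  # incremental cost of the new drug
inc_ly   <- 1.2                     # incremental life-years gained
for (u in c(0.55, 0.65, 0.75)) {    # candidate HRQoL weights
  icer <- inc_cost / (inc_ly * u)   # incremental cost per QALY
  cat(sprintf("utility %.2f -> ICER %s per QALY\n",
              u, format(round(icer), big.mark = ",")))
}
```

With small incremental benefits, even modest differences in the utility input swing the ICER by tens of thousands of dollars per QALY.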

Regulatory and advisory agencies like CADTH have a rather weighty responsibility, not only to help decision makers identify which new drugs and technologies the health care system should adopt, but also which ones they should reject. When manufacturers’ submissions rely on inappropriate data with high levels of uncertainty, this task becomes much more difficult. The authors suggest that manufacturers should be collecting their own HRQoL data in clinical trials they fund. After all, if we want HTAs to have an effect on policy-making, we should also make sure they’re having a positive effect.

The cost-effectiveness of limiting federal housing vouchers to use in low-poverty neighborhoods in the United States. Public Health [PubMed] Published January 2020

My undergraduate education was heavily steeped in discussions of the social determinants of health. Another cynical opinion I've heard (again, sometimes from myself) is that health economics is disproportionately concerned with the adoption of new drugs that have a marginal effect on health, often at the expense of investment in the other, non-health-care determinants. This is a particularly persuasive bit of cynicism when you consider cancer drugs like those in our previous two examples, where the incremental benefits are typically modest and the costs typically high. That's why I was especially excited to see this paper published by my friend Dr. Zafar Zafari, applying health economic analysis frameworks to something atypical: housing policy.

The authors evaluated a trial running alongside a program providing housing vouchers to 4600 low-income households. The experimental condition in this case was that the vouchers could only be used in well-off neighbourhoods (i.e., those with a low level of poverty). The authors considered the evidence of a link between neighbourhood wealth and lower rates of obesity-related health conditions like diabetes, and used that evidence to construct a Markov decision model measuring incremental cost per QALY over the length of the study (10-15 years). Cohort characteristics, relative clinical effectiveness, and costs of the voucher program were estimated from trial results, with other costs and probabilities derived from the literature.
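The published model is far richer than this, but the basic machinery of a discounted Markov cohort model can be sketched in a few lines of R; every number below is invented for illustration:

```r
# Toy three-state Markov cohort model: Healthy, Diabetes, Dead.
# Every transition probability and utility below is invented.
states <- c("Healthy", "Diabetes", "Dead")
P <- matrix(c(0.92, 0.05, 0.03,   # annual transition probabilities; rows sum to 1
              0.00, 0.90, 0.10,
              0.00, 0.00, 1.00),
            nrow = 3, byrow = TRUE, dimnames = list(states, states))
u      <- c(0.85, 0.70, 0.00)     # state utilities
cohort <- c(1, 0, 0)              # everyone starts healthy
disc   <- 0.03                    # annual discount rate
qalys  <- 0
for (t in 1:15) {                 # 15-year horizon, as in the study
  cohort <- as.vector(cohort %*% P)
  qalys  <- qalys + sum(cohort * u) / (1 + disc)^t
}
qalys  # discounted QALYs per person
```

Because the health gains accrue in later cycles, the discounted QALY total moves a lot with `disc`.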

Compared to the control group (public housing), use of the housing vouchers provided an additional 0.23 QALYs per person, at a lower cost (about $750 less per person). Importantly, these findings were highly robust to parameter uncertainty, with 99% of ICERs falling below a willingness-to-pay threshold of $20,000/QALY (and more than 90% below a WTP threshold of $0/QALY). The model was highly sensitive to the discount rate, which makes sense: for a chronic condition like diabetes and a distal exposure like housing, we would expect the incremental health gains to occur years after the initial intervention.

There are a lot of things to like about this paper, but the one that stands out to me is the way they’ve framed the question:

We seek to inform the policy debate over the wisdom of spending health dollars on non-health sectors of the economy by defining the trade-off, or ‘opportunity cost’ of such a decision.

The idea that “health funds” should be focussed on “health care” robs us of the opportunity to consider the health impact of interventions in other policy areas. By bringing something like housing explicitly into the realm of cost-per-QALY analysis, the authors invite us all to consider the kinds of trade-offs we make when we relegate our consideration of health only to the kinds of things that happen inside hospitals.

A multidimensional array representation of state-transition model dynamics. Medical Decision Making [PubMed] Published 28th January 2020

I’ve been building models in R for a few years now, and developed a method of my own more or less out of necessity. So I’ve always been impressed with and drawn to the work of the group Decision Analysis in R for Technologies in Health (the amazingly-named DARTH). I’ve had the opportunity to meet a couple of their scientists and have followed their work for a while, and so I was really pleased to see the publication of this paper, hot on the heels of another paper discussing a formalized approach to model construction in R, and timed to coincide with the publication of a step-by-step guidebook on how to build models according to the DARTH recipe.

The DARTH approach (and, as a happy coincidence, mine too) involves tapping into R's powerful ability to organize data into multidimensional arrays. The paper talks in depth about how R arrays can be used to represent health states, and how to set up and program models of essentially any level of complexity using a set of basic R commands. As a bonus, they include publicly-accessible sample code that you can follow along with as you read (which is the best way to learn something like this).

The authors argue that the method they propose is ideal for capturing and reflecting "transition rewards" – that is, effects on the cohort that occur during transitions between health states – in addition to "state rewards" (effects that happen as a consequence of being within a state). The key to this Dynamics Array approach is the use of a three-dimensional array to store the transitions, with the third dimension representing the passage of time. After walking the reader through the theory, the authors present a sample three-state model and show that the new method is fast, efficient, and accurate.
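As I understand it (this is my simplified sketch, not DARTH's published code), the core idea is a states × states × time array that holds one transition matrix per cycle:

```r
# Simplified dynamics-array sketch (mine, not DARTH's published code).
n_s <- 3; n_t <- 10
states <- c("Well", "Sick", "Dead")
a_P <- array(0, dim = c(n_s, n_s, n_t),
             dimnames = list(states, states, 1:n_t))
for (t in 1:n_t) {                         # one transition matrix per cycle
  p_die <- 0.02 + 0.005 * t                # mortality rises with time
  a_P[, , t] <- matrix(c(0.90 - p_die, 0.10, p_die,
                         0.00, 0.95 - p_die, 0.05 + p_die,
                         0.00, 0.00, 1.00), nrow = n_s, byrow = TRUE)
}
m_M <- matrix(0, n_t + 1, n_s,             # cohort trace: cycles x states
              dimnames = list(0:n_t, states))
m_M[1, ] <- c(1, 0, 0)
for (t in 1:n_t) m_M[t + 1, ] <- m_M[t, ] %*% a_P[, , t]
round(m_M, 3)
```

Transition rewards can then be attached by element-wise multiplying each `a_P[, , t]` slice with a matrix of one-off costs or disutilities for each move.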

I hope that I have been sufficiently clear that I am a big fan of DARTH and admire their work a great deal, because there is one big criticism I have to level at them: this paper (and the others I have cited) is not terribly easy to follow. It presumes that you already understand a lot of the topics being discussed, which I personally do not. And if I, someone who has built many array-based models in R, am having a tough time understanding the explanation of their approach, then woe betide anyone reading this paper without a firm grasp of R, decision modelling theory, matrix algebra, and the handful of other topics required to benefit from this (truly excellent) work.

DARTH is laying down a well-thought-out path to revolutionizing the standard approach to model building, but they can only do that if people start adopting their approach. If I were a grad student hoping to build my first model, this paper would likely intimidate me enough to maybe go back to the default of building it in Excel. As a postdoc with my own way of doing things there is a big opportunity cost of switching, and part of that cost is feeling too dumb to follow the instructions. I know that DARTH has tutorials and courses and workshops to help people get up to speed, but I hope that they also have a plan to translate some of this knowledge into a form that is more accessible for casual coders, non-economists, and other people who need this info but who (like me) might find this format opaque.


Jason Shafrin’s journal round-up for 3rd February 2020

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Capacity constraints and time allocation in public health clinics. Health Economics [PubMed] Published 14th January 2020

Capacity constraints are a key issue in many health care markets. They can be due to short-run fluctuations in available labor or to long-term resource scarcity where available supply does not meet demand at prevailing prices. One key issue to understand is how providers respond when capacity constraints appear. Health care providers have a number of potential options for addressing them. The first would be to decrease the number of patients seen. A second option would be to decrease quality. A third option would be to access additional funding to increase capacity. A fourth option would be to demand that workers work longer hours at no additional pay, which would be an implicit hourly wage reduction (and may not be legal).

To find out which of these options providers choose, Harris, Liu and McCarthy examined data from a clinic in Tennessee. One issue with studying capacity constraints is that workforce allocation may be endogenous. Harris and co-authors look at a case where two nurses were removed from the clinic on selected mornings to administer flu shots at local schools. They argue that the assignment of nurses to schools was made by the local health department and was plausibly exogenous to any clinic decisions on volume or quality of care. Further, these nurses were not replaced by staff from other clinics.

The authors use data covering 16 months (i.e. two flu seasons) of visits. They first conducted a nurse-level analysis to evaluate whether nurses were less productive on flu days, using 'FluNurse' and 'FluDay' indicator variables. The former controls for whether the nurses selected to administer the flu shots at the schools were systematically more or less productive than the other nurses; the latter measures the impact of the actual days when nurses were assigned to the school. The authors also conducted a visit-level analysis to see how time spent in the clinic varied on days when nurse capacity was reduced compared to days when it was not.
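Variable names and data below are hypothetical, but the nurse-level specification is in the spirit of this sketch:

```r
# Hypothetical sketch of the nurse-level productivity regression.
set.seed(7)
n  <- 500
df <- data.frame(
  FluNurse = rbinom(n, 1, 0.3),                      # ever assigned to schools
  dow      = factor(sample(1:5, n, replace = TRUE))  # day-of-week controls
)
df$FluDay <- ifelse(df$FluNurse == 1, rbinom(n, 1, 0.2), 0)  # at a school today
df$visits <- rpois(n, exp(2.5 + 0.05 * df$FluNurse - 0.2 * df$FluDay))
fit <- glm(visits ~ FluNurse + FluDay + dow, family = poisson, data = df)
summary(fit)  # FluNurse: selection into assignment; FluDay: effect of the day itself
```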

The authors find that the extensive margin is most important. On days when nurses visited schools, capacity is reduced by about 17%. On these days, providers do see fewer patients overall and prioritize scheduled visits over walk-ins. On the intensive margin, providers do decrease time spent with patients (by about 7%), but this time reduction is largely achieved by reducing some of the administrative aspects of the visit (i.e. expedited check-out times). The authors conclude that “providers value spending sufficient time with patients over seeing as many patients as possible.”

It is unclear whether these results would translate to other settings or countries, or to cases where capacity constraints are more long-term. The study does not discuss in detail how the clinic is reimbursed. Fee-for-service reimbursement may incentivize prioritizing volume over quality and time spent with patients, whereas capitated or salaried reimbursement may favor quality. Since many of the services examined in the study were provided by nurses, and nurses are more likely to be paid a salary than compensated based on clinic volume, it is also unclear whether these results would translate to capacity constraints involving physicians.

Outcome measures for oncology alternative payment models: practical considerations and recommendations. American Journal of Managed Care [PubMed] Published 11th December 2019

Value-based payment sounds great in theory. Step 1: measure health outcomes and cost. Step 2: risk adjust to control for variability in patient health status. Step 3: pay more to providers who have better outcomes and lower cost; pay less to providers who have worse outcomes and higher cost. Simple, right? The answer is 'yes' according to many payers. A number of alternative payment models are being adopted by payers in the U.S. In oncology, the Centers for Medicare and Medicaid Services (CMS) has implemented the Oncology Care Model (OCM) to reimburse providers based on quality and cost.

This approach, however, is only valid if payers and policymakers are able to adequately measure quality. How is quality currently captured for oncology patients? A paper by Hlávka, Lin, and Neumann provides an overview of existing quality measures. Specifically they review quality metrics from the following entities: (i) OCM, (ii) the Quality Oncology Practice Initiative by the American Society of Clinical Oncologists (ASCO), (iii) the Prospective Payment System–Exempt Cancer Hospital Quality Reporting Program by CMS, (iv) the Core Quality Measures Collaborative Core Sets by CMS and America’s Health Insurance Plans (AHIP), (v) the Oncology Medical Home program by the Community Oncology Alliance, (vi) the Osteoporosis Quality Improvement Registry by the National Osteoporosis Foundation and National Bone Health Alliance, and (vii) the Oncology Qualified Clinical Data Registry by the Oncology Nursing Society. So, how well do these entities measure health outcomes?

Well, in general, most are measuring quality of care processes rather than health outcomes. Of the 142 quality measures examined, only 28 (19.7%) were outcome measures; the rest were process measures. The outcome measures of interest fell into five categories: (1) hospital admissions or emergency department visits, (2) hospice care, (3) mortality, (4) patient reported outcomes (PROs), and (5) adverse events (AEs). The paper describes in more detail how/why these metrics are used (e.g. many hospital admissions are related to chemotherapy AEs, hospice care is underutilized).

While this paper is not a methodological advance, knowing what quality measures are available is extremely important. Further, the paper cites a number of limitations of these quality metrics. First, you need reliable data to construct these measures, even when patients move across health care systems; additionally, administrative data (e.g. claims data) often appear with a lag. Second, risk adjustment is imperfect, and applying value-based payment may incentivize providers to select low-risk patients and avoid the sickest cancer patients. Third, if we care about patients' opinions—which we should!—PROs are important; collecting PROs, however, is more expensive than using administrative data. Fourth, quality measurement in general takes time and effort. My own commentary in the Journal of Clinical Pathways argues that these limitations need to be taken seriously: cost and accuracy issues may undermine the value of value-based payment in oncology if quality is measured poorly and is costly to collect.

Modeling the impact of patient treatment preference on health outcomes in relapsing-remitting multiple sclerosis. Journal of Medical Economics [PubMed] Published January 2020

We hear a lot about patient-centered care. Intuitively, it makes a lot of sense. Patients are the end users, the customers. So we should do our best to give them the treatments they want. At the same time, patients have imperfect information and rely on physicians to help guide their decisions. However, patients may value things that—as a society—we may not be willing to pay for. For instance, empirical research shows that patients place a high value on hospital amenities. Thus, one key question to answer is whether following patient preferences is likely to result in better health outcomes.

A paper by van Eijndhoven et al. (disclosure: I am an author on this paper) aims to answer this question for patients with relapsing-remitting multiple sclerosis (RRMS). The first step of the paper is to measure patient preferences across disease-modifying drugs (DMDs), parameterized from a discrete choice experiment of patients with multiple sclerosis. Second, a Markov model was used to estimate the impact of each individual DMD on health outcomes (i.e., number of relapses, disability progression over time, QALYs). QALYs were estimated using utilities for health states defined by the Expanded Disability Status Scale (EDSS), caregiver disutility in each state, disutility per relapse, and disutility due to adverse events, all taken from the literature. Third, the authors compared two states of the world: one based on patient-driven preferences (as estimated in Step 1) and another based on current prescribing practices in the United Kingdom.
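Step 1 essentially converts DCE utilities into predicted market shares; under a standard multinomial logit, that conversion looks roughly like this (utilities invented, not taken from the paper):

```r
# Illustrative: converting DCE utilities into preference-based shares.
v <- c(DMD_A = 1.2, DMD_B = 0.8, DMD_C = 0.3)  # mean utilities from the DCE
shares <- exp(v) / sum(exp(v))                  # multinomial logit choice shares
round(shares, 3)
# These preference-driven shares re-weight the per-DMD Markov model outputs
# in place of the observed UK market shares.
```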

Using this approach, patient-centered prescribing practices led to 6.8% fewer relapses and a 4.6% increase in QALYs. Applied at the UK population level, this would amount to almost 37,000 avoided relapses and over 44,000 discounted QALYs gained over a 50-year period. Disease progression was also slowed: for the typical patient, annual EDSS progression was 0.16 points lower under patient-centered prescribing than under current market shares.

There are a few reasons for this finding. First, under current treatment patterns, many patients are not treated: about 21% of patients with RRMS in the UK do not receive DMDs, and access to neurologist care is often difficult. Second, patients have a strong preference for newer, more effective treatments. Previous research indicates that prescribers in England generally viewed NICE guidelines as mandatory criteria they were obligated to follow, whereas prescribing behavior among neurologists in Scotland and Wales was more varied.

This study did have a number of limitations. First, it did not examine costs. Patients may prefer more effective treatments, but this may have cost implications for the UK health system. Second, the impact of treatments was based on clinical trial data; DMDs may be more or less effective in the real world, particularly if adherence is suboptimal. Third, once people reach high levels of disability (EDSS ≥7), the study assumed that they were treated with best supportive care, so the estimates may be conservative if treatment is in fact more aggressive. Fourth, the treatment options were based on currently approved DMDs. In the real world, however, new treatments may become available, and actual health trajectories are likely to deviate from our model.

While this study does not answer what the optimal treatment mix for a country is, we do see evidence that patient-preferred treatments are strongly related to the health benefits received. Thus—in the case of multiple sclerosis—physicians should not fear that shared decision-making with patients will result in worse outcomes. On the contrary, better health outcomes could be expected from shared decision-making.


Jason Shafrin’s journal round-up for 15th July 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Understanding price growth in the market for targeted oncology therapies. American Journal of Managed Care [PubMed] Published 14th June 2019

In the media, you hear that drug prices—particularly for oncology—are on the rise. High prices make it difficult for payers to afford effective treatments, and in countries where patients bear significant costs, patients may even go without treatment. Are pharmaceutical firms making money hand over fist from these rising prices?

Recent research by Sussell et al. argues that, despite rising prices, pharmaceutical manufacturers are actually making less money on every new cancer drug they produce. The reason? Precision medicine.

The authors use data from both the IQVIA National Sales Perspective (NSP) data set and the Medicare Current Beneficiary Survey (MCBS) to examine changes in the price, quantity, and total revenue over time. Price is measured as episode price (price over a fixed line of therapy) rather than the price per unit of drug. The time period for the core analysis covers 1997-2015.

The authors find that drug prices roughly tripled between 1997 and 2015. Despite this price increase, pharmaceutical manufacturers are actually making less money: the number of eligible (i.e., indicated) patients per new oncology drug launch fell by 85% to 90% over this period, and, on net, median pharmaceutical manufacturer revenues fell by about half.
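A quick back-of-the-envelope check shows how those numbers hang together:

```r
# Back-of-the-envelope: revenue ratio = price ratio x eligible-patient ratio.
price_ratio   <- 3.0          # prices roughly tripled
patient_ratio <- 1 - 0.85     # eligible patients fell by ~85%
price_ratio * patient_ratio   # ~0.45: revenues roughly halved
```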

Oncology may be a case where high-cost drugs are a good thing: rather than identifying treatments indicated for a large number of people but less effective on average per patient, manufacturers develop highly effective drugs targeted at small groups of people. Patients don't get unnecessary treatments, and overall costs to payers fall. Of course, manufacturers still need to justify that these treatments represent high value, but some of my research has shown that the quality-adjusted cost of care in oncology has remained flat or even fallen for some tumors despite rising drug prices.

Do cancer treatments have option value? Real‐world evidence from metastatic melanoma. Health Economics [PubMed] [RePEc] Published 24th June 2019

Cost effectiveness models done from a societal perspective aim to capture all benefits and costs of a given treatment relative to a comparator. Are standard CEA approaches really capturing all costs and benefits? A 2018 ISPOR Task Force examines some novel components of value that are not typically captured, such as real option value. The Task Force describes real option value as value that is “…generated when a health technology that extends life creates opportunities for the patient to benefit from other future advances in medicine.” Previous studies (here and here) have shown that patients who received treatments for chronic myeloid leukemia and non-small cell lung cancer lived longer than expected since they were able to live long enough to reach the next scientific advance.

A question remains, however, of whether individuals' behavior actually takes this option value into account. A paper by Li et al. 2019 aims to answer this question by examining whether patients were more likely to get surgical resection after the advent of a novel immuno-oncology treatment (ipilimumab). Using claims data (MarketScan), the authors use an interrupted time series design to examine whether Phase II and Phase III clinical trial read-outs affected the likelihood of surgical resection. The model is a multinomial logit regression. Their preferred specification finds that

“Phase II result was associated with a nearly twofold immediate increase (SD: 0.61; p = .033) in the probability of undergoing surgical resection of metastasis relative to no treatment and a 2.5‐fold immediate increase (SD: 1.14; p = .049) in the probability of undergoing both surgical resection of metastasis and systemic therapy relative to no treatment.”
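To give a flavour of the design (simulated data and a bare-bones specification, not the authors' model), an interrupted time series multinomial logit can be set up in R like this:

```r
# Simulated sketch of an interrupted-time-series multinomial logit.
library(nnet)
set.seed(1)
n  <- 2000
t  <- runif(n, 0, 48)                    # month of diagnosis
post <- as.integer(t > 24)               # after the Phase II read-out
u_res  <- -1.0 + 0.6 * post + rnorm(n)   # latent utility: resection only
u_both <- -1.5 + 0.9 * post + rnorm(n)   # latent utility: resection + systemic
choice <- factor(apply(cbind(0, u_res, u_both), 1, which.max),
                 labels = c("none", "resection", "both"))
fit <- multinom(choice ~ t + post, trace = FALSE)
exp(coef(fit))  # relative-risk ratios vs. the 'none' baseline
```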

The finding is striking, but could benefit from further testing. For instance, the impact of the Phase III results is (incrementally) small relative to the Phase II results. This may be reasonable if one believes that Phase II is a sufficiently reliable indicator of drug benefit, but many people focus on Phase III results. One test would be to see whether physicians in academic medical centers are more likely to respond to this news: if one believes that physicians at academic medical centers are more up to speed on the literature, one would expect to see a larger option value for patients treated at academic rather than community medical centers. Further, the study would benefit from some falsification tests. If the authors could use data from other tumors, one would expect that the ipilimumab Phase II results would have no material impact on surgical resection for other tumor types.

Overall, however, the study is worthwhile as it looks at treatment benefits not just in a static sense, but in a dynamically evolving innovation landscape.

Aggregate distributional cost-effectiveness analysis of health technologies. Value in Health [PubMed] Published 1st May 2019

In general, health economists would like health insurers to cover treatments that are welfare improving in the Pareto sense. This means that if a treatment provides more expected benefits than costs and no one is worse off (in expectation), then it should certainly be covered. It could be the case, however, that people care who gains these benefits. For instance, consider a new technology that helped people with serious diseases move around more easily inside a mansion, and assume this technology had more benefits than costs. Some (many) people may not like covering a treatment that only benefits people who are very well-off. This issue is especially relevant in single payer systems—like the United Kingdom's National Health Service (NHS)—which are funded by taxpayers.

One option is to consider both the average net health benefits (i.e., benefits less cost) to a population as well as its effect on inequality. If a society doesn’t care at all about inequality, then this is reduced to just measuring net health benefit overall; if a society has a strong preference for equality, treatments that provide benefits to only the better-off will be considered less valuable.

A paper by Love-Koh et al. 2019 provides a nice quantitative way to estimate these tradeoffs. The approach uses both the Atkinson inequality index and the Kolm index to measure inequality. The authors then use these indices to calculate the equally distributed equivalent (EDE), which is the level of population health (in QALYs) in a completely equal distribution that yields the same amount of social welfare as the distribution under investigation.
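The Atkinson version of the EDE is simple to compute; here is a minimal sketch using an invented distribution of quality-adjusted life expectancy across subgroups:

```r
# Equally distributed equivalent (EDE) health under the Atkinson index.
ede_atkinson <- function(q, eps) {
  if (eps == 1) exp(mean(log(q)))
  else mean(q^(1 - eps))^(1 / (1 - eps))
}
q <- c(62, 65, 68, 71, 74)    # invented QALE across five population subgroups
mean(q)                       # average health, ignoring inequality
ede_atkinson(q, eps = 0)      # no inequality aversion: equals the mean
ede_atkinson(q, eps = 10)     # strong aversion: pulled toward the worst-off
```

The gap between the mean and the EDE is the "cost" of inequality; a treatment's value then depends on how it shifts the whole distribution, not just the average.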

Using this approach, the authors find the following:

“Twenty-seven interventions were evaluated. Fourteen interventions were estimated to increase population health and reduce health inequality, 8 to reduce population health and increase health inequality, and 5 to increase health and increase health inequality. Among the latter 5, social welfare analysis, using inequality aversion parameters reflecting high concern for inequality, indicated that the health gain outweighs the negative health inequality impact.”

Despite the attractive features of this approach analytically, there are issues related to how it would be implemented. In this case, inequality is based solely on quality-adjusted life expectancy. However, one could take a more holistic approach and look at socioeconomic status including other factors (e.g., income, employment, etc.). In theory, one could perform the same exercise measuring individuals' overall utility including these other aspects, but few (rightly) would want the government to assess individuals' overall happiness to make treatment decisions. Second, the authors stratify expected life expectancy by patients' sex, primary diagnosis, and postcode. Thus, you could have a system that prioritizes treatments for men—since men's life expectancy is generally less than women's. Third, this model assumes disease is exogenous. In many cases this is true, but in some cases individual behavior could increase the likelihood of having a disease. For instance, would citizens want to discount treatments for diseases that are preventable (e.g., lung cancer due to smoking, diabetes due to poor eating habits/exercise), even if treatments for those diseases reduced inequality? In practice, few diseases are fully exogenous or fully the fault of the individual, so this is a slippery slope.

What the Love-Koh paper contributes is an easy-to-implement method for quantifying how inequality preferences should affect the value of different treatments. What it does not answer is whether this approach should be implemented.
