Meeting round-up: R for Cost-Effectiveness Analysis Workshop 2019

I have switched to using R for my cost-effectiveness models, but I know that I am not using it to its full potential. As a fledgling R user, I was keen to hear about other people’s experiences. I’m in the process of updating one of my models and know that I could be coding things better. But with so many packages and ways to code, the options seem infinite and I struggle to know where to begin. In an attempt to remedy this, I attended the Workshop on R for trial and model-based cost-effectiveness analysis hosted at UCL. I was not disappointed.

The day showcased speakers with varying levels of coding expertise doing a wide range of cool things in R. We started with examples of implementing decision trees using the CEdecisiontree package and cohort Markov models. We also got to hear about a population model built with the heemod package, and the purrr package was suggested for probabilistic sensitivity analyses. These talks highlighted how, compared with Excel, R can be reusable, faster, transparent, iterative, and open source.
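The purrr suggestion for probabilistic sensitivity analysis can be sketched with a toy model; everything below (the distributions, the model, the willingness-to-pay threshold) is invented for illustration, not taken from the talks.

```r
library(purrr)

set.seed(42)
n_draws <- 1000

# Draw parameter sets from (illustrative) distributions
draws <- data.frame(
  p_response = rbeta(n_draws, 20, 80),                   # response probability
  cost_treat = rgamma(n_draws, shape = 100, rate = 0.1)  # treatment cost
)

# A toy model returning incremental net benefit at a given threshold
run_model <- function(p_response, cost_treat, wtp = 20000) {
  qaly_gain <- 0.5 * p_response
  wtp * qaly_gain - cost_treat
}

# pmap_dbl() runs the model once per row of parameter draws
inb <- pmap_dbl(draws, run_model)
mean(inb > 0)  # probability the intervention is cost-effective
```

In a real analysis, `run_model` would be replaced by the full decision model, but the looping structure stays the same.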

The open-source nature of R, however, has its drawbacks. One of the more interesting conversations woven throughout the day was around the challenges. Can we trust open-source software? When will NICE begin accepting models coded in R? How important is it that we have models in something like Excel that people can intuitively understand? I’ve not experienced problems choosing to use R for my work; for me, the challenge has always been getting the support and information I need to get things done efficiently. The steep learning curve seems to be a major hurdle for many people. I had hoped to attend the short course introduction held the day before the workshop, but I was not fast enough to secure my spot as the course sold out within 36 hours. Never fear: the short course will be held again next year in Bristol.

To get around some of the aforementioned barriers to using R, James O’Mahony presented work on an open-source simplified screening model that his team is developing for teaching. An Excel interface with VBA code writes the parameter values to a file that is then imported into R, where the model itself is a single short file of code. Beautiful graphs show the impact of important parameters on the efficiency frontier. He said that they would love people to look at the code and make suggestions, as they want to keep the model simple and additional features add complexity nonlinearly.

We then moved on to more specific topics, such as setting up a community for R users in the NHS, packages for survival curves, and how to build packages in R. I found Gianluca Baio’s presentation on what a package is and why we should be using them really helpful. I realised that I hadn’t really thought about what a package was before (a bundle of code, data, documentation, and tests that is easy to share with others) or that it was something I could, or (as he argued) should, be building myself as a time-saving tool even if I’m not sharing with others. It’s no longer difficult to build a package when you use packages like devtools and roxygen2 and tools like RStudio and GitHub. He pointed out that packages can be stored on GitHub if you’re not keen to share with the wider world via CRAN.
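As a sketch of that workflow (the package name, file layout, and model function below are invented for illustration, and the calls that write files to disk are left as comments), most of the work comes down to roxygen2 comments plus a few devtools calls:

```r
# usethis::create_package("mycemodel")  # scaffold DESCRIPTION, NAMESPACE, R/

# In R/markov.R, roxygen2 comments become the help page and NAMESPACE entry:

#' Run a simple cohort Markov model
#'
#' @param trans_mat A transition probability matrix.
#' @param n_cycles Number of cycles to run.
#' @return A matrix tracing the cohort through the health states.
#' @export
run_markov <- function(trans_mat, n_cycles) {
  trace <- matrix(0, nrow = n_cycles + 1, ncol = ncol(trans_mat))
  trace[1, 1] <- 1  # whole cohort starts in the first state
  for (i in seq_len(n_cycles)) {
    trace[i + 1, ] <- trace[i, , drop = FALSE] %*% trans_mat
  }
  trace
}

# devtools::document()  # generate man/ pages and NAMESPACE from the tags
# devtools::install()   # install locally, or push to GitHub and share via:
# devtools::install_github("yourname/mycemodel")
```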

Another talk that I found particularly helpful was on R methods to prepare routine healthcare data for disease modelling. Claire Simons from the University of Oxford outlined her experiences of using R and ended her talk with a plethora of useful tips. These included using the data.table package for big data sets as it saves time when merging, using meaningful file names to avoid confusion later, and investing in doing things properly from the start as this will save time later. She also suggested using code profiling to identify which code takes the most time. Finally, she reminded us that we should be constantly learning about R: read books on R and on writing algorithms, and talk to other people who are using R (programmers and others, not just health economists).
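Two of those tips can be sketched together; the tables and column names below are made up. Keyed data.table joins are much faster than base `merge()` on large data sets, and Rprof shows where the time actually goes before you start optimising.

```r
library(data.table)

admissions <- data.table(patient_id = 1:5, cost = c(100, 250, 80, 300, 150))
diagnoses  <- data.table(patient_id = c(2, 4, 5), icd10 = c("I10", "E11", "J45"))

setkey(admissions, patient_id)  # keys enable fast binary-search joins
setkey(diagnoses, patient_id)

merged <- diagnoses[admissions]  # all admissions, with diagnoses where matched

# Code profiling: measure before optimising
prof_file <- tempfile()
Rprof(prof_file)
for (i in 1:1000) merged <- diagnoses[admissions]
Rprof(NULL)
# summaryRprof(prof_file)$by.self lists where the time was spent
```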

For those who agree that the future is R, check out resources from the Decision Analysis in R for Technologies in Health (DARTH) workgroup, attend the hackathon at Imperial on 6th-7th November hosted by Nathan Green, or join the ISPOR Open Source Models Special Interest Group.

Overall, the workshop structure allowed for a lot of great discussions in a relaxed atmosphere and I look forward to attending the next one.

Thesis Thursday: Feng-An Yang

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Feng-An Yang who has a PhD from Ohio State University. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Three essays on access to health care in rural areas
Daeho Kim, Joyce Chen
Repository link

What are the policy challenges for rural hospitals in the US?

Rural hospitals have been financially vulnerable, especially since the implementation of the Medicare Prospective Payment System (PPS) in 1983, under which hospitals receive a predetermined, fixed reimbursement for their inpatient services. Under the PPS, rural hospitals suffer financial losses because their costs tend to exceed the reimbursement rate, owing to their smaller size and lower patient volume relative to their urban counterparts (Medicare Payment Advisory Commission, 2001 [PDF]). As a result, a noticeable number of rural hospitals have closed since the implementation of the PPS (Congressional Budget Office, 1991 [PDF]).

This closure trend has slowed thanks to public payment policies such as the Critical Access Hospital (CAH) program, but rural hospitals are continuing to close their doors: a total of 107 rural hospitals have closed since 2010, according to the North Carolina Rural Health Research Program. This has raised public concern about rural residents’ access to health services and their health status, and keeping rural hospitals open has become an important policy priority.

Which data sources and models did you use to identify key events?

My dissertation investigated the impact of the CAH program and of hospital closures by compiling data from various sources. The primary data come from the Medicare cost reports, which contain detailed financial statements for nearly every U.S. hospital. Historical data on health care utilization at the county level are obtained from the Area Health Resource File. County-level mortality rates are calculated from the national mortality files. Lastly, the lists of CAHs and closed hospitals are obtained from the Flex Monitoring Team and the American Hospital Association Annual Survey, respectively. These lists contain the hospital identifier and year of the event, which are key to my empirical strategy.

To identify the impact of key events (i.e., CAH conversion and hospital closure), I use an event-study approach exploiting variation in the timing of events. This approach estimates changes in the outcome at each point in time relative to the event. A primary advantage of this approach is that it allows a visual examination of how the outcome evolves before and after the event.
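A minimal simulated version of such an event-study regression might look like the following; all variable names and effect sizes are invented, not taken from the dissertation.

```r
set.seed(1)
n_hosp <- 50
panel <- expand.grid(hospital = 1:n_hosp, year = 2000:2010)
event_year <- sample(2003:2007, n_hosp, replace = TRUE)
panel$event_time <- panel$year - event_year[panel$hospital]

# True effect: the outcome jumps by 0.5 after the event
panel$outcome <- 1 + 0.5 * (panel$event_time >= 0) + rnorm(nrow(panel), sd = 0.2)

# Dummies for each year relative to the event, with t = -1 as the omitted
# reference period, plus calendar-year fixed effects
panel$rel_year <- relevel(factor(panel$event_time), ref = "-1")
fit <- lm(outcome ~ rel_year + factor(year), data = panel)

# Plotting the rel_year coefficients against event time shows flat
# pre-trends and the post-event jump: the visual check described above.
```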

How can policies relating to rural hospitals benefit patients?

This question is not trivial because public payment policies are not directly linked to patients. The primary objective of these policies is to strengthen rural hospitals’ financial viability by providing them with enhanced reimbursement. The expectation is that, under these policies, rural hospitals will improve their financial condition and stay open, thereby maintaining rural residents’ access to health services. Broadly speaking, public payment policies can be shown to increase accessibility by comparing patient access to health services between counties with at least one hospital receiving financial support and counties without any such hospitals.

I look at patient benefits from three aspects: accessibility, health care utilization, and mortality. My research shows that the CAH program has substantially improved CAHs’ financial conditions and as a result, some CAHs that otherwise would have been closed have stayed open. This in turn leads to an increase in rural residents’ access to and use of health services. We then provide suggestive evidence that the increased access to and use of health care services have improved patient health in rural areas.

Did you find any evidence that policies could have negative or unexpected consequences?

Certainly. The second chapter of my dissertation focused on skilled nursing care, which can be provided either in swing beds (inpatient beds that can be used interchangeably for inpatient care or skilled nursing care) or in hospital-based skilled nursing facilities (SNFs). Since the services provided in swing beds and SNFs are equivalent, differential payments, if present, may encourage hospitals to use one over the other.

While the CAH program provides enhanced reimbursement to rural hospitals, it also changes the swing bed reimbursement method such that swing bed payments are more favorable than SNF payments. As a result, CAHs may have a financial incentive to increase the use of swing beds over SNFs. Focusing on CAHs with an SNF, my research shows a remarkable increase in swing bed utilization that is fully offset by a decrease in SNF utilization. These results suggest that CAHs substitute swing beds for SNFs in response to the change in the swing bed reimbursement method.

Based on your research, what would be your key recommendations for policymakers?

Based on my research findings, I would make two recommendations for policymakers.

First, my research speaks to the ongoing debate over eliminating the CAH designation for certain hospitals. Loss of CAH designation could have serious financial consequences and, subsequently, potentially adverse impacts on patient access to and use of health care. I would therefore recommend that policymakers maintain the CAH designation.

Second, while the CAH program has improved rural hospitals’ financial conditions, it has also created a financial incentive for hospitals to use the service with a higher reimbursement rate. Thus, my recommendation to policymakers would be to consider potentially substitutable health care services when designing reimbursement rates.

Jason Shafrin’s journal round-up for 15th July 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Understanding price growth in the market for targeted oncology therapies. American Journal of Managed Care [PubMed] Published 14th June 2019

In the media, you hear that drug prices, particularly for oncology, are on the rise. High prices make it difficult for payers to afford effective treatments, and in countries where patients bear significant costs, patients may even go without treatment. Are pharmaceutical firms making money hand over fist with these rising prices?

Recent research by Sussell et al. argues that, despite rising prices, pharmaceutical manufacturers are actually making less money on every new cancer drug they produce. The reason? Precision medicine.

The authors use data from both the IQVIA National Sales Perspective (NSP) data set and the Medicare Current Beneficiary Survey (MCBS) to examine changes in the price, quantity, and total revenue over time. Price is measured as episode price (price over a fixed line of therapy) rather than the price per unit of drug. The time period for the core analysis covers 1997-2015.

The authors find that drug prices roughly tripled between 1997 and 2015. Despite this price increase, pharmaceutical manufacturers are actually making less money. The number of eligible (i.e., indicated) patients per new oncology drug launch fell by 85% to 90% over this period. On net, median pharmaceutical manufacturer revenues fell by about half.

Oncology may be a case where high-cost drugs are a good thing: rather than producing treatments indicated for a large number of people that are less effective for the average patient, manufacturers develop highly effective drugs targeted at small groups. Patients don’t get unnecessary treatments, and overall costs to payers fall. Of course, manufacturers still need to justify that these treatments represent high value, but some of my research has shown that the quality-adjusted cost of care in oncology has remained flat or even fallen for some tumors despite rising drug prices.

Do cancer treatments have option value? Real‐world evidence from metastatic melanoma. Health Economics [PubMed] [RePEc] Published 24th June 2019

Cost-effectiveness models done from a societal perspective aim to capture all benefits and costs of a given treatment relative to a comparator. But are standard CEA approaches really capturing all costs and benefits? A 2018 ISPOR Task Force examines some novel components of value that are not typically captured, such as real option value. The Task Force describes real option value as value that is “…generated when a health technology that extends life creates opportunities for the patient to benefit from other future advances in medicine.” Previous studies (here and here) have shown that patients who received treatments for chronic myeloid leukemia and non-small cell lung cancer lived longer than expected because they survived long enough to benefit from the next scientific advance.

A question remains, however: do individuals’ behaviors actually take this option value into account? A paper by Li et al. 2019 aims to answer this question by examining whether patients were more likely to undergo surgical resection after the advent of a novel immuno-oncology treatment (ipilimumab). Using claims data (MarketScan), the authors use an interrupted time series design to examine whether Phase II and Phase III clinical trial read-outs affected the likelihood of surgical resection. The model is a multinomial logit regression. Their preferred specification finds that

“Phase II result was associated with a nearly twofold immediate increase (SD: 0.61; p = .033) in the probability of undergoing surgical resection of metastasis relative to no treatment and a 2.5‐fold immediate increase (SD: 1.14; p = .049) in the probability of undergoing both surgical resection of metastasis and systemic therapy relative to no treatment.”

The finding is striking, but it could benefit from further testing. For instance, the impact of the Phase III results is (incrementally) small relative to the Phase II results. This may be reasonable if one believes that Phase II is a sufficiently reliable indicator of drug benefit, but many people focus on Phase III results. One test would be to see whether physicians in academic medical centers are more likely to respond to this news: if one believes that physicians at academic medical centers are more up to speed on the literature, one would expect to see a larger option value for patients treated at academic rather than community medical centers. Further, the study would benefit from some falsification tests. If the authors could use data from other tumors, one would expect that the ipilimumab Phase II results would have no material impact on surgical resection for other tumor types.
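For readers unfamiliar with the setup, a toy version of the multinomial logit design on simulated data might look like the following; everything here is invented, and the paper’s actual specification is considerably richer.

```r
library(nnet)

set.seed(7)
n <- 2000
post_readout <- rbinom(n, 1, 0.5)  # 1 = after the trial read-out

# Resection becomes more likely after the read-out
p_resect <- ifelse(post_readout == 1, 0.4, 0.2)
u <- runif(n)
choice <- ifelse(u < p_resect, "resection",
                 ifelse(u < p_resect + 0.1, "both", "none"))
d <- data.frame(
  choice = factor(choice, levels = c("none", "resection", "both")),
  post_readout = post_readout
)

# Multinomial logit of treatment choice on the read-out indicator
fit <- multinom(choice ~ post_readout, data = d, trace = FALSE)
exp(coef(fit))  # relative risk ratios vs. the "none" baseline
```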

Overall, however, the study is worthwhile as it looks at treatment benefits not just in a static sense, but in a dynamically evolving innovation landscape.

Aggregate distributional cost-effectiveness analysis of health technologies. Value in Health [PubMed] Published 1st May 2019

In general, health economists would like health insurers to cover treatments that are welfare improving in the Pareto sense. This means that if a treatment provides more expected benefits than costs and no one is worse off (in expectation), then the treatment should certainly be covered. It could be the case, however, that people care who receives these benefits. For instance, consider a new technology that helps people with serious diseases move around more easily inside a mansion, and assume this technology has more benefits than costs. Some (many) people may not like covering a treatment that only benefits people who are very well-off. This issue is especially relevant in single-payer systems—like the United Kingdom’s National Health Service (NHS)—which are funded by taxpayers.

One option is to consider both the average net health benefits (i.e., benefits less cost) to a population as well as its effect on inequality. If a society doesn’t care at all about inequality, then this is reduced to just measuring net health benefit overall; if a society has a strong preference for equality, treatments that provide benefits to only the better-off will be considered less valuable.

A paper by Love-Koh et al. 2019 provides a nice quantitative way to estimate these tradeoffs. The approach uses both the Atkinson inequality index and the Kolm index to measure inequality. The authors then use these indices to calculate the equally distributed equivalent (EDE), which is the level of population health (in QALYs) in a completely equal distribution that yields the same amount of social welfare as the distribution under investigation.
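The Atkinson EDE has a simple closed form: raise each person’s health to the power 1 − ε, average, and raise the result to 1/(1 − ε), where ε is the inequality aversion parameter. The sketch below computes it for a made-up distribution of quality-adjusted life expectancy.

```r
# Atkinson equally distributed equivalent (EDE) of a health distribution h.
# epsilon = 0 means no inequality aversion (EDE equals the mean); larger
# epsilon penalises unequal distributions more heavily.
atkinson_ede <- function(h, epsilon) {
  if (epsilon == 1) {
    exp(mean(log(h)))  # the limit as epsilon -> 1 is the geometric mean
  } else {
    mean(h^(1 - epsilon))^(1 / (1 - epsilon))
  }
}

# Made-up quality-adjusted life expectancy by social group
qale <- c(62, 68, 71, 74, 77)

atkinson_ede(qale, epsilon = 0)   # no aversion: just the mean
atkinson_ede(qale, epsilon = 10)  # strong aversion: pulled toward the minimum
```

A treatment’s distributional impact can then be valued as the change in the EDE it produces, rather than the change in mean QALYs.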

Using this approach, the authors find the following:

“Twenty-seven interventions were evaluated. Fourteen interventions were estimated to increase population health and reduce health inequality, 8 to reduce population health and increase health inequality, and 5 to increase health and increase health inequality. Among the latter 5, social welfare analysis, using inequality aversion parameters reflecting high concern for inequality, indicated that the health gain outweighs the negative health inequality impact.”

Despite the attractive analytical features of this approach, there are issues with how it would be implemented. First, inequality is based solely on quality-adjusted life expectancy; others might take a more holistic approach and look at socioeconomic status including other factors (e.g., income, employment). In theory, one could perform the same exercise measuring individuals’ overall utility including these other aspects, but few (rightly) would want the government to assess individuals’ overall happiness in order to make treatment decisions. Second, the authors stratify quality-adjusted life expectancy by patients’ sex, primary diagnosis, and postcode. Thus, you could have a system that prioritizes treatments for men, since men’s life expectancy is generally shorter than women’s. Third, the model assumes disease is exogenous. In many cases this is true, but in some cases individual behavior increases the likelihood of disease. For instance, would citizens want to discount treatments for preventable diseases (e.g., lung cancer due to smoking, diabetes due to poor diet or lack of exercise), even if treating these diseases reduced inequality? Few diseases are fully exogenous or fully the fault of the individual, so this is a slippery slope.

What the Love-Koh paper contributes is an easy to implement method for quantifying how inequality preferences should affect the value of different treatments. What the paper does not answer is whether this approach should be implemented.