Thesis Thursday: Feng-An Yang

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Feng-An Yang, who has a PhD from Ohio State University. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Three essays on access to health care in rural areas
Supervisors
Daeho Kim, Joyce Chen
Repository link
http://rave.ohiolink.edu/etdc/view?acc_num=osu152353045188255

What are the policy challenges for rural hospitals in the US?

Rural hospitals have been financially vulnerable, especially since the implementation of the Medicare Prospective Payment System (PPS) in 1983, under which hospitals receive a predetermined, fixed reimbursement for their inpatient services. Under the PPS, rural hospitals tend to suffer financial losses because their costs often exceed the reimbursement rate, owing to their smaller size and lower patient volumes relative to their urban counterparts (Medicare Payment Advisory Commission, 2001 [PDF]). As a result, a substantial number of rural hospitals have closed since the implementation of the PPS (Congressional Budget Office, 1991 [PDF]).

This closure trend has slowed thanks to public payment policies such as the Critical Access Hospital (CAH) program, but rural hospitals are continuing to close their doors: a total of 107 rural hospitals have closed from 2010 to the present, according to the North Carolina Rural Health Research Program. This has raised public concern about rural residents’ access to health services and their health status, and keeping rural hospitals open has become an important policy priority.

Which data sources and models did you use to identify key events?

My dissertation investigated the impact of the CAH program and of hospital closures by compiling data from various sources. The primary data come from the Medicare cost reports, which contain detailed financial statements for nearly every U.S. hospital. Historical data on health care utilization at the county level are obtained from the Area Health Resource File, and county-level mortality rates are calculated from the national mortality files. Lastly, the lists of CAHs and of closed hospitals are obtained from the Flex Monitoring Team and the American Hospital Association Annual Survey, respectively. These lists contain the hospital identifier and the year of the event, which are key to my empirical strategy.

To identify the impact of key events (i.e., CAH conversion and hospital closure), I use an event-study approach that exploits variation in the timing of these events. This approach estimates changes in the outcome at each point in time relative to the ‘event time’. A primary advantage of this approach is that it allows a visual examination of how the outcome evolves before and after the event.
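For illustration, a stylised event-study specification along these lines might be written as follows (the notation here is a sketch of the general approach, not necessarily the exact model in the thesis):

```latex
y_{ct} = \alpha_c + \gamma_t + \sum_{k \neq -1} \beta_k \,\mathbf{1}[t - E_c = k] + \varepsilon_{ct}
```

where y_{ct} is the outcome for unit c (a hospital or county) in year t, E_c is the year of the event for that unit, \alpha_c and \gamma_t are unit and year fixed effects, and the coefficients \beta_k trace out the outcome in each period relative to the event, with the period just before it (k = -1) as the omitted reference. Plotting the \beta_k against k gives exactly the visual before-and-after examination described above.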

How can policies relating to rural hospitals benefit patients?

This question is not trivial, because public payment policies are not directly linked to patients. The primary objective of these policies is to strengthen rural hospitals’ financial viability by providing them with enhanced reimbursement. The expectation is that, under these policies, rural hospitals will improve their financial condition and stay open, thereby maintaining access to health services for rural residents. Broadly speaking, we can gauge whether public payment policies increase accessibility by comparing patient access to health services between counties with at least one hospital receiving financial support and counties without any.

I look at patient benefits from three angles: accessibility, health care utilization, and mortality. My research shows that the CAH program has substantially improved CAHs’ financial condition and, as a result, some CAHs that would otherwise have closed have stayed open. This in turn leads to an increase in rural residents’ access to and use of health services. We then provide suggestive evidence that this increased access to and use of health care services has improved patient health in rural areas.

Did you find any evidence that policies could have negative or unexpected consequences?

Certainly. The second chapter of my dissertation focused on skilled nursing care, which can be provided either in swing beds (inpatient beds that can be used interchangeably for inpatient care or skilled nursing care) or in hospital-based skilled nursing facilities (SNFs). Since the services provided in swing beds and SNFs are equivalent, differential payments, if present, may encourage hospitals to use one over the other.

While the CAH program provides enhanced reimbursement to rural hospitals, it also changes the swing bed reimbursement method such that swing bed payments are more favorable than SNF payments. As a result, CAHs may have a financial incentive to increase the use of swing beds over SNFs. Focusing on CAHs with an SNF, my research shows a remarkable increase in swing bed utilization that is fully offset by a decrease in SNF utilization. These results suggest that CAHs substitute swing beds for SNFs in response to the change in the swing bed reimbursement method.

Based on your research, what would be your key recommendations for policymakers?

Based on my research findings, I would make two recommendations for policymakers.

First, my research speaks to the ongoing debate over eliminating the CAH designation for certain hospitals. Loss of the designation could have serious financial consequences and, in turn, potentially adverse impacts on patient access to and use of health care. I would therefore recommend that policymakers maintain the CAH designation.

Second, while the CAH program has improved rural hospitals’ financial conditions, it has also created a financial incentive for hospitals to use the service with the higher reimbursement rate. My recommendation to policymakers would therefore be to consider potentially substitutable health care services when designing reimbursement rates.

Rita Faria’s journal round-up for 4th March 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Cheap and dirty: the effect of contracting out cleaning on efficiency and effectiveness. Public Administration Review Published 25th February 2019

Before I was a health economist, I was a pharmacist and worked for a well-known high street chain for some years. My impression was that the stores with in-house cleaners were cleaner, but I didn’t know whether this was a true difference, my leftie bias, or my small sample size of 2! This new study by Shimaa Elkomy, Graham Cookson and Simon Jones confirms my suspicions, albeit in the context of NHS hospitals, so I couldn’t resist selecting it for my round-up.

They looked at how contracted-out cleaning services fare in terms of perceived cleanliness, costs and MRSA rates in NHS hospitals. MRSA is a type of hospital-associated infection whose rate is affected by how clean a hospital is.

They found that contracted-out services are cheaper than in-house cleaning, but that perceived cleanliness is worse. Importantly, contracted-out services increase the MRSA rate. In other words, contracting out cleaning services could harm patients’ health.

This is a fascinating paper that is well worth a read. One wonders whether the cost of managing MRSA is more than offset by the savings from contracting out. Going a step further, are in-house services cost-effective given their impact on patients’ health and the costs of managing infections?

What’s been the bang for the buck? Cost-effectiveness of health care spending across selected conditions in the US. Health Affairs [PubMed] Published 1st January 2019

Staying on the topic of value for money, this study by David Wamble and colleagues looks at the extent to which increased health care spending in the US has translated into better health outcomes over time.

It’s clearly reassuring that, for six of the seven conditions they looked at, health outcomes were better in 2015 than in 1996. After all, that’s the goal of investing in medical R&D, although it remains unclear how much of this difference can be attributed to health care rather than to other things that happened over the same period and could also have improved health outcomes.

I wasn’t sure about the inflation adjustment for the costs, so I’d be grateful for your thoughts via comments or Twitter. In my view, we would underestimate real cost growth if we used a medical price inflation index. This is because such indices reflect the increase in prices specific to health care, such as new drugs being priced high at launch. As I understand it, the main results use the US Consumer Price Index, which reflects the average increase in prices over time rather than the increase specific to health care.

However, patients may not have seen their incomes rise with inflation. This means that the cost of health care may represent a disproportionately greater share of people’s income, and that the inflation adjustment may downplay the impact of health care costs on people’s pockets.
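To make the deflator point concrete, here is a toy calculation with entirely made-up numbers (nothing below comes from the paper), showing how the choice of price index changes measured real spending growth.

```python
# Entirely hypothetical figures, for illustration only.
nominal_1996 = 100.0   # spending per patient in 1996, in 1996 dollars
nominal_2015 = 300.0   # spending per patient in 2015, in 2015 dollars

cpi_factor = 1.5          # hypothetical economy-wide price growth, 1996-2015
medical_cpi_factor = 2.5  # hypothetical medical price growth, 1996-2015

real_2015_cpi = nominal_2015 / cpi_factor              # 200 in 1996 dollars
real_2015_medical = nominal_2015 / medical_cpi_factor  # 120 in 1996 dollars

# Deflating by a medical price index folds the excess growth of health care
# prices into the deflator, so real spending growth looks much smaller than
# it does when the economy-wide CPI is used.
print(real_2015_cpi / nominal_1996)      # 2.0x real growth under the CPI
print(real_2015_medical / nominal_1996)  # 1.2x real growth under the medical index
```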

This study caught my eye and is quite thought-provoking. It’s a good addition to the literature on the cost-effectiveness of US health care. But I’d wager that the question remains: to what extent is today’s medical care better value for money than in the past?

The dos and don’ts of influencing policy: a systematic review of advice to academics. Palgrave Communications Published 19th February 2019

We would all like to see our research findings influence policy, but how do we do this in practice? Well, look no further: Kathryn Oliver and Paul Cairney have reviewed the literature, summarised it in eight key tips and thought through their implications.

To sum up: it’s not easy to influence policy; advice about how to do so is rarely based on empirical evidence; and there are a few risks to trying to become a mover and shaker in policy circles.

They discuss three dilemmas in policy engagement. Should academics try to influence policy? How should academics influence policy? What is the purpose of academics’ engagement in policy making?

I particularly enjoyed reading about the approaches to influencing policy. Tools such as evidence synthesis and social media should make evidence more accessible, but their effectiveness is unclear. Another approach is to craft stories that make a compelling case for policy change, which seems to me to be very close to marketing. The third approach is co-production, which they note can give rise to accusations of bias and poses practical challenges around intellectual property and keeping one’s independence.

I found this paper quite refreshing. It not only boiled down the advice circulating online about how to influence policy into its key messages but also thought through the practical challenges in its application. The impact agenda seems to be here to stay, at least in the UK. This paper is an excellent source of advice on the risks and benefits of trying to navigate the policy world.


Sam Watson’s journal round-up for 11th February 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Contest models highlight inherent inefficiencies of scientific funding competitions. PLoS Biology [PubMed] Published 2nd January 2019

If you work in research, you will no doubt have thought to yourself at some point that you spend more time applying to do research than actually doing it. You can spend weeks working on (what you believe to be) a strong proposal only for it to fail against other strong bids. That time could have been spent collecting and analysing data. Indeed, the opportunity cost of writing extensive proposals can be very high. The question arises as to whether there is another method of allocating research funding that reduces this waste and inefficiency. This paper compares the proposal competition to a partial lottery. In the lottery system, proposals are short, and among those that meet some qualifying standard, those that are funded are selected at random. This system has the benefit of not taking up too much time, but has the cost of reducing the average scientific value of the winning proposals. The authors compare the two approaches using an economic model of contests, which takes into account factors like proposal strength, public benefits, benefits to the scientist such as reputation and prestige, and scientific value. Ultimately they conclude that, when the number of awards is smaller than the number of proposals worthy of funding, the proposal competition is inescapably inefficient. It means that researchers have to invest heavily to get a good project funded, and even if it is good enough it may still not get funded. The stiffer the competition, the harder researchers have to work to win an award. And what little evidence there is suggests that the format of the application makes little difference to the amount of time researchers spend writing it. The lottery mechanism only requires the researcher to propose something that is good enough to enter the lottery. Far less time would therefore be devoted to writing it and more time spent on actual science. I’m all for it!
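To see the intuition, here is a toy simulation in Python comparing the two mechanisms. Everything in it (the number of proposals, the quality distribution, the writing costs, the qualifying bar) is hypothetical and far simpler than the contest model in the paper; it only illustrates the trade-off between selection and writing cost.

```python
import numpy as np

rng = np.random.default_rng(0)

n_proposals = 200   # hypothetical number of applicants
n_awards = 20       # hypothetical number of grants available

quality = rng.uniform(0.0, 1.0, n_proposals)       # scientific value of each proposal
effort_full = rng.uniform(0.5, 1.0, n_proposals)   # person-months to write a long application
effort_short = 0.1 * effort_full                   # person-months to write a short one

# Proposal competition: the top-scoring applications win, but every applicant
# pays the full cost of writing a long proposal.
winners_comp = np.argsort(quality)[-n_awards:]
value_comp = quality[winners_comp].sum()
cost_comp = effort_full.sum()

# Partial lottery: anyone clearing a qualifying bar enters a draw, and the
# awards go to a random subset of qualifiers; everyone writes a short proposal.
threshold = 0.5
qualifiers = np.flatnonzero(quality >= threshold)
winners_lottery = rng.choice(qualifiers, size=n_awards, replace=False)
value_lottery = quality[winners_lottery].sum()
cost_lottery = effort_short.sum()

print(f"Competition: funded value {value_comp:.1f}, total writing cost {cost_comp:.1f}")
print(f"Lottery:     funded value {value_lottery:.1f}, total writing cost {cost_lottery:.1f}")
```

In this toy version the competition funds somewhat stronger proposals, but the research community spends roughly ten times as long writing applications; the contest model in the paper works through that trade-off far more carefully.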

Preventability of early versus late hospital readmissions in a national cohort of general medicine patients. Annals of Internal Medicine [PubMed] Published 5th June 2018

Hospital quality is hard to judge. We’ve discussed on this blog before the pitfalls of using measures such as adjusted mortality differences for this purpose. Just because a hospital has higher than expected mortality does not mean those deaths could have been prevented with higher quality care. More thorough methods assess errors and preventable harm in care. Case note review studies have suggested that as few as 5% of deaths might be preventable in England and Wales. Another paper we have covered previously suggests that, as a consequence, the predictive value of standardised mortality ratios for preventable deaths may be less than 10%.

Another commonly used metric is the readmission rate. Poor care can mean patients have to return to hospital. But again, the question remains as to how preventable these readmissions are. Indeed, there may also be substantial differences between patients who are readmitted shortly after discharge and those readmitted after a longer interval. This article explores the preventability of early and late readmissions in ten hospitals in the US. It uses case note review, with a number of reviewers evaluating preventability. The headline figures are that 36% of early readmissions were considered preventable, compared with 23% of late readmissions. Moreover, early readmissions were judged most likely to have been preventable at the hospital itself, whereas for late readmissions an outpatient clinic or the home would have had more impact. All in all, another paper providing evidence that crude, or even adjusted, rates are not good indicators of hospital quality.

Visualisation in Bayesian workflow. Journal of the Royal Statistical Society: Series A (Statistics in Society) [RePEc] Published 15th January 2019

This article stems from a broader programme of work by these authors on good “Bayesian workflow”. That is to say, if we’re taking a Bayesian approach to analysing data, what steps ought we to take to ensure our analyses are as robust and reliable as possible? I’ve been following this work for a while, as this type of pragmatic advice is invaluable. I’ve often read empirical papers where the authors have chosen, say, a logistic regression model with covariates x, y, and z and reported the outcomes, but at no point justified why this particular model might be any good for these data or for the research objective. The key steps of the workflow include, first, exploratory data analysis to help set up a model and, second, performing model checks before estimating model parameters. This latter step is important: one can generate data from the model and its prior distributions, and if the data the model generates look nothing like what we would expect the real data to look like, then clearly the model is not very good. Following this, we should check whether our inference algorithm is doing its job; for example, are the MCMC chains converging? We can also conduct posterior predictive model checks. These have been criticised in the literature for using the same data to both estimate and check the model, which could lead to the model generalising poorly to new data. Indeed, in a recent paper of my own, posterior predictive checks showed a poor fit of one model to my data and suggested that a more complex alternative fitted better, but other model fit statistics, which penalise the number of parameters, pointed the other way, so the simpler model was preferred on the grounds that the more complex model was overfitting the data. I would therefore argue that posterior predictive model checks are a sensible test to perform, but they must be interpreted carefully as one step among many. Finally, we can compare models using tools like cross-validation.
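As a concrete illustration of the prior (rather than posterior) predictive step, here is a minimal sketch in Python. The model, priors and numbers are entirely hypothetical, chosen only to show the mechanics, and are not the model used in the paper: draw parameters from the priors, simulate data from the model, and ask whether the simulated outcomes look remotely like plausible measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setting: a log-linear model relating ground-level PM2.5 to a
# satellite-based estimate (made up for illustration; not the paper's model).
n_sites = 100
log_satellite = rng.normal(3.0, 0.5, n_sites)  # hypothetical log satellite estimates

n_draws = 1000
sims = np.empty((n_draws, n_sites))
for d in range(n_draws):
    alpha = rng.normal(0.0, 10.0)        # deliberately vague priors
    beta = rng.normal(0.0, 10.0)
    sigma = abs(rng.normal(0.0, 5.0))
    # Simulate log measurements from the model, then transform back.
    sims[d] = np.exp(rng.normal(alpha + beta * log_satellite, sigma))

# If a large share of the simulated measurements are physically absurd
# (say, far above any PM2.5 concentration ever recorded), the priors are
# too diffuse and the model needs rethinking before it sees the real data.
print("share of simulated values above 1000:", np.mean(sims > 1000))
```

Visual versions of this check, comparing simulated and observed data, are exactly the kind of plots the article advocates.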

This article discusses the use of visualisation to aid in this workflow. They use the running example of building a model to estimate exposure to small particulate matter from air pollution across the world. Plots are produced for each of the steps and show just how bad some models can be and how we can refine our model step by step to arrive at a convincing analysis. I agree wholeheartedly with the authors when they write, “Visualization is probably the most important tool in an applied statistician’s toolbox and is an important complement to quantitative statistical procedures.”
