Sam Watson’s journal round-up for 11th February 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Contest models highlight inherent inefficiencies of scientific funding competitions. PLoS Biology [PubMed] Published 2nd January 2019

If you work in research you will no doubt have thought to yourself at some point that you spend more time applying to do research than actually doing it. You can spend weeks working on (what you believe to be) a strong proposal only for it to fail against other strong bids. That time could have been spent collecting and analysing data. Indeed, the opportunity cost of writing extensive proposals can be very high. The question arises as to whether there is another method of allocating research funding that reduces this waste and inefficiency. This paper compares the proposal competition to a partial lottery. In the lottery system, proposals are short, and the funded proposals are selected at random from among those that meet some qualifying standard. This system has the benefit of not taking up too much time, but has the cost of reducing the average scientific value of the winning proposals. The authors compare the two approaches using an economic model of contests, which takes into account factors like proposal strength, public benefits, benefits to the scientist such as reputation and prestige, and scientific value. Ultimately they conclude that, when the number of awards is smaller than the number of proposals worthy of funding, the proposal competition is inescapably inefficient. Researchers have to invest heavily to get a good project funded, and even a good project may still not get funded. The stiffer the competition, the more researchers have to work to win the award. And what little evidence there is suggests that the format of the application makes little difference to the amount of time researchers spend writing it. The lottery mechanism only requires the researcher to propose something that is good enough to get into the lottery. Far less time would therefore be devoted to proposal writing and more time spent on actual science. I’m all for it!
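Out of curiosity, the trade-off is easy to caricature in a few lines of code. Below is a minimal Monte Carlo sketch in Python – not the authors’ actual contest model – where the effort levels, quality distribution, and qualifying bar are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_proposals, n_awards, n_sims = 200, 20, 1000

comp_quality, lott_quality = [], []
for _ in range(n_sims):
    quality = rng.normal(size=n_proposals)       # latent scientific value
    # Proposal competition: fund the top-ranked proposals; everyone invests
    # heavily in writing (say, 4 weeks per proposal).
    top = np.argsort(quality)[-n_awards:]
    comp_quality.append(quality[top].mean())
    # Partial lottery: short proposals (say, 1 week each); winners are drawn
    # at random from those clearing an assumed quality bar.
    qualifiers = np.flatnonzero(quality > 0.0)
    draw = rng.choice(qualifiers, size=n_awards, replace=False)
    lott_quality.append(quality[draw].mean())

print(f"competition: mean funded quality {np.mean(comp_quality):.2f}, "
      f"~{4 * n_proposals} researcher-weeks of writing per round")
print(f"lottery:     mean funded quality {np.mean(lott_quality):.2f}, "
      f"~{1 * n_proposals} researcher-weeks of writing per round")
```

On these made-up numbers the competition funds higher-value proposals on average, but at four times the cost in researcher time – which is exactly the trade-off the paper formalises.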

Preventability of early versus late hospital readmissions in a national cohort of general medicine patients. Annals of Internal Medicine [PubMed] Published 5th June 2018

Hospital quality is hard to judge. We’ve discussed on this blog before the pitfalls of using measures such as adjusted mortality differences for this purpose. Just because a hospital has higher than expected mortality does not mean those deaths could have been prevented with higher quality care. More thorough methods assess errors and preventable harm in care. Case note review studies have suggested that as few as 5% of deaths might be preventable in England and Wales. Another paper we have covered previously suggests that, as a consequence, the predictive value of standardised mortality ratios for preventable deaths may be less than 10%.
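The arithmetic behind that low predictive value is worth spelling out. With only 5% of deaths preventable, even a reasonably discriminating ‘high SMR’ signal will mostly flag deaths that were not preventable. A back-of-the-envelope calculation – the sensitivity and specificity below are assumed purely for illustration – makes the point:

```python
prevalence = 0.05    # share of deaths judged preventable (case note reviews)
sensitivity = 0.30   # assumed: P(high SMR flag | preventable death)
specificity = 0.85   # assumed: P(no flag | non-preventable death)

ppv = (sensitivity * prevalence) / (
    sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
)
print(f"PPV = {ppv:.1%}")  # ~9.5%: most flagged deaths were not preventable
```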

Another commonly used metric is the readmission rate. Poor care can mean patients have to return to the hospital. But again, the question remains as to how preventable these readmissions are. Indeed, there may also be substantial differences between patients who are readmitted shortly after discharge and those who return much later. This article explores the preventability of early and late readmissions in ten hospitals in the US, using case note review by multiple reviewers to judge preventability. The headline figures are that 36% of early readmissions were considered preventable, compared with 23% of late readmissions. Moreover, early readmissions were judged most likely to have been preventable at the hospital, whereas for late readmissions an outpatient clinic or the home would have had more impact. All in all, another paper providing evidence that crude, or even adjusted, readmission rates are not good indicators of hospital quality.

Visualisation in Bayesian workflow. Journal of the Royal Statistical Society: Series A (Statistics in Society) [RePEc] Published 15th January 2019

This article stems from a broader programme of work by these authors on good “Bayesian workflow”. That is to say, if we’re taking a Bayesian approach to analysing data, what steps ought we to take to ensure our analyses are as robust and reliable as possible? I’ve been following this work for a while, as this type of pragmatic advice is invaluable. I’ve often read empirical papers where the authors have chosen, say, a logistic regression model with covariates x, y, and z and reported the outcomes, but at no point justified why this particular model might be any good at all for these data or the research objective. The key steps of the workflow include, first, exploratory data analysis to help set up a model, and second, performing model checks before estimating model parameters. This latter step is important: one can generate data from a model and set of prior distributions, and if the data that this model generates look nothing like what we would expect the real data to look like, then clearly the model is not very good. Following this, we should check whether our inference algorithm is doing its job: for example, are the MCMC chains converging? We can also conduct posterior predictive model checks. These have been criticised in the literature for using the same data to both estimate and check the model, which could lead to the model generalising poorly to new data. Indeed, in a recent paper of my own, posterior predictive checks showed poor fit of a model to my data and that a more complex alternative fitted better. But other model fit statistics, which penalise the number of parameters, led to the opposite conclusion, and so the simpler model was preferred on the grounds that the more complex model was overfitting the data. I would therefore argue that posterior predictive model checks are a sensible test to perform, but they must be interpreted carefully as one step among many. Finally, we can compare models using tools like cross-validation.
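To make the prior predictive step concrete, here is a minimal sketch for a logistic regression in plain Python; the priors, covariate, and sample size are all invented, so treat it as a template rather than a recipe:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n_draws, n_obs = 500, 100
x = rng.normal(size=n_obs)                     # one standardised covariate

event_rates = []
for _ in range(n_draws):
    alpha = rng.normal(0, 1)                   # assumed prior on the intercept
    beta = rng.normal(0, 1)                    # assumed prior on the slope
    p = 1 / (1 + np.exp(-(alpha + beta * x)))  # inverse logit
    event_rates.append(rng.binomial(1, p).mean())

# If these simulated event rates look nothing like what we would expect of
# the real data, the priors or the model structure need rethinking.
plt.hist(event_rates, bins=30)
plt.xlabel("event rate in prior predictive draws")
plt.show()
```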

This article discusses the use of visualisation to aid this workflow. The authors use the running example of building a model to estimate exposure to small particulate matter from air pollution across the world. Plots are produced for each of the steps, showing just how bad some models can be and how a model can be refined step by step to arrive at a convincing analysis. I agree wholeheartedly with the authors when they write, “Visualization is probably the most important tool in an applied statistician’s toolbox and is an important complement to quantitative statistical procedures.”


Chris Sampson’s journal round-up for 19th March 2018


Using HTA and guideline development as a tool for research priority setting the NICE way: reducing research waste by identifying the right research to fund. BMJ Open [PubMed] Published 8th March 2018

As well as the cost-effectiveness of health care, economists are increasingly concerned with the cost-effectiveness of health research. This makes sense, given that both are usually publicly funded and so spending on one (in principle) limits spending on the other. NICE exists in part to prevent waste in the provision of health care – seeking to maximise benefit. In this paper, the authors (all current or former employees of NICE) consider the extent to which NICE processes can also be used to prevent waste in health research. The study focuses on the processes underlying NICE guideline development and HTA, and the work of NICE’s Science Policy and Research (SP&R) programme. Through systematic review and (sometimes) economic modelling, NICE guidelines identify research needs, and NICE works with the National Institute for Health Research to get its recommended research commissioned, with some research fast-tracked as ‘NICE Key Priorities’. Sometimes it’s also necessary to prioritise methodological research, and NICE has conducted reviews to address this, with the Internal Research Advisory Group established to ensure that methodological research is commissioned. The paper also highlights the roles of other groups such as the Decision Support Unit, the Technical Support Unit, and the External Assessment Centres. This paper is useful for two reasons. First, it gives a clear and concise explanation of NICE’s processes with respect to research prioritisation and maps out the working groups involved, giving researchers an understanding of how their work fits into this process. Second, the paper highlights NICE’s current research priorities and provides insight into how these develop, which could be helpful to researchers looking to develop new ideas and proposals that align with NICE’s priorities.

The impact of the minimum wage on health. International Journal of Health Economics and Management [PubMed] Published 7th March 2018

The minimum wage is one of those policies that is so far-reaching, and with such ambiguous implications for different people, that research into its impact can deliver dramatically different conclusions. This study uses American data and takes advantage of the fact that different states set different minimum wage levels. The authors try to look at a broad range of mechanisms by which the minimum wage can affect health, with a major focus on risky health behaviours. The study uses data from the Behavioral Risk Factor Surveillance System, which includes around 300,000 respondents per year across all states. Relevant variables from these data characterise smoking, drinking, and fruit and vegetable consumption, as well as obesity. There are also indicators of health care access and self-reported health. The authors restrict their sample to 21-64-year-olds with no more than a high school degree. Difference-in-differences models are estimated by OLS, exploiting individual states’ minimum wage changes. As is often the case for minimum wage studies, the authors find several non-significant effects: smoking and drinking don’t seem to be affected. Similarly, there isn’t much of an impact on health care access. There seems to be a small positive impact of the minimum wage on the likelihood of being obese, but no impact on BMI. I’m not sure how to interpret that, but there is also evidence that a minimum wage increase leads to a reduction in fruit and vegetable consumption, which adds credence to the obesity finding. The results also demonstrate that a minimum wage increase can reduce the number of days that people report being in poor health. But generally – on aggregate – there isn’t much going on at all. So the authors look at subgroups. Smoking is found to increase (and BMI to decrease) with the minimum wage for younger non-married white males. Obesity is more likely to be increased by minimum wage hikes for people who are white or married, and especially for those in older age groups. Women seem to benefit from fewer days with mental health problems. The main concerns identified in this paper are that minimum wage increases could increase smoking in young men and could reduce fruit and veg consumption. But I don’t think we should overstate it. There’s a lot going on in the data, and though the authors do a good job of trying to identify the effects, other explanations can’t be excluded. Minimum wage increases probably don’t have a major direct impact on health behaviours – positive or negative – but policymakers should take note of the potential value of providing public health interventions to those groups most likely to be affected by the minimum wage.
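For readers unfamiliar with the set-up, the specification is essentially a two-way fixed effects regression. The sketch below is a stylised version with synthetic data – not the authors’ exact model – and every variable name and parameter value in it is an assumption:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for a BRFSS-style extract: all names and the
# data-generating process here are invented for illustration.
rng = np.random.default_rng(2)
n = 5000
df = pd.DataFrame({
    "state": rng.integers(0, 50, n),
    "year": rng.integers(2005, 2015, n),
    "age": rng.integers(21, 65, n),
    "female": rng.integers(0, 2, n),
})
# Even-numbered states raise their minimum wage by $1 from 2010.
df["min_wage"] = 5.15 + 1.0 * ((df["state"] % 2 == 0) & (df["year"] >= 2010))
df["smoker"] = (rng.random(n) < 0.25).astype(int)

# Two-way fixed effects difference-in-differences: state and year dummies,
# identification from within-state minimum wage changes, standard errors
# clustered by state.
model = smf.ols(
    "smoker ~ min_wage + age + female + C(state) + C(year)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})
print(model.params["min_wage"])
```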

Aligning policy objectives and payment design in palliative care. BMC Palliative Care [PubMed] Published 7th March 2018

Health care at the end of life – including palliative care – presents challenges in evaluation. The focus is on improving patients’ quality of life, but it’s also about satisfying preferences for processes of care, the experiences of carers, and providing a ‘good death’. And partly because these things can be difficult to measure, it can be difficult to design payment mechanisms that achieve desirable outcomes. Perhaps that’s why there is no current standard approach to funding palliative care, with a lot of variation between countries, despite the common aspiration for universality. This paper tackles the question of payment design with a discussion of the literature. Traditionally, palliative care has been funded by block payments, per diems, or fee-for-service. The author starts with the acknowledgement that there are two challenges to ensuring value for money in palliative care: moral hazard and adverse selection. Providers may over-supply because of fee-for-service funding arrangements, or they may ‘cream-skim’ patients. Adverse selection may arise in an insurance-based system, with demand from high-risk people causing the market to fail. These problems could potentially be solved by capitation-based payments and risk adjustment. The market could also be warped by blunt eligibility restrictions and funding caps. Another difficulty is the challenge of achieving allocative efficiency between home-based and hospital-based services, made plain by the fact that, in many countries, a majority of people die in hospital despite a preference for dying at home. The author describes developments (particularly in Australia) in activity-based funding for palliative care. An interesting proposal – though not discussed in enough detail – is that payments could be made for each death (per mortems?). Capitation-based payment models are considered, and the extent to which pay-for-performance could be incorporated is also discussed – the latter being potentially important in achieving those process outcomes that matter so much in palliative care. Yet another challenge is the question of when palliative care should come into play: in some cases, sooner is better, because the provision of palliative care can give rise to less costly and more preferred treatment pathways. Thus, palliative care funding models will have implications for the funding of acute care. Throughout, the paper includes examples from different countries, along with a wealth of references to dig into. Helpfully, the author explicitly states in a table the models that different settings ought to adopt, given their prevailing model. As our population ages and the purse strings tighten, this is a discussion we can expect to be having more and more.
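As a toy illustration of how the funding models discussed differ, the following snippet prices the same hypothetical caseload three ways; every number in it is invented:

```python
# Toy caseload for one palliative care provider; all figures are made up.
contacts, bed_days = 1200, 400
risk_scores = [1.0] * 100 + [2.5] * 50          # assumed casemix, 150 patients

fee_for_service = 80 * contacts                 # £80 per contact (assumed)
per_diem = 250 * bed_days                       # £250 per bed-day (assumed)
capitation = sum(600 * r for r in risk_scores)  # £600 base rate, risk-adjusted

print(fee_for_service, per_diem, capitation)    # 96000 100000 135000
```

The point is simply that each rule rewards a different margin: contacts, days in a bed, or enrolled (and appropriately risk-weighted) patients.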


Thesis Thursday: Francesco Longo

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Francesco Longo who has a PhD from the University of York. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title: Essays on hospital performance in England
Supervisor: Luigi Siciliani
Repository link: http://etheses.whiterose.ac.uk/18975/

What do you mean by ‘hospital performance’, and how is it measured?

The concept of performance in the healthcare sector covers a number of dimensions, including responsiveness, affordability, accessibility, quality, and efficiency. A PhD does not normally provide enough time to investigate all these aspects and, hence, my thesis mostly focuses on quality and efficiency in the hospital sector. The concept of quality or efficiency of a hospital is also surprisingly broad and, as a consequence, perfect quality and efficiency measures do not exist. For example, mortality and readmissions are good clinical quality measures, but the majority of hospital patients do not die and are not readmitted. How well does the hospital treat these patients? Similarly for efficiency: knowing that a hospital is more efficient because it now has lower costs is essential, but how is that hospital actually reducing costs? My thesis also tries to answer these questions by analysing various quality and efficiency indicators. For example, Chapter 3 uses quality measures such as overall and condition-specific mortality, overall readmissions, and patient-reported outcomes for hip replacement. It also uses efficiency indicators such as bed occupancy, cancelled elective operations, and cost indices. Chapter 4 analyses additional efficiency indicators, such as admissions per bed, the proportion of day cases, and the proportion of untouched meals.

You dedicated a lot of effort to comparing specialist and general hospitals. Why is this important?

The first part of my thesis focuses on specialisation, i.e. an organisational form which is supposed to generate greater efficiency, quality, and responsiveness, but not necessarily lower costs. Some evidence from the US suggests that orthopaedic and surgical hospitals had 20 percent higher inpatient costs because of, for example, higher staffing levels and better quality of care. In the English NHS, specialist hospitals play an important role because they deliver high proportions of specialised services, commonly low-volume but high-cost treatments for patients with complex and rare conditions. Specialist hospitals therefore allow the achievement of a critical mass of clinical expertise to ensure patients receive specialised treatments that produce better health outcomes. More precisely, my thesis focuses on specialist orthopaedic hospitals which, for instance, provide 90% of surgery for bone and soft tissue sarcomas and 50% of scoliosis treatments. It is therefore important to investigate the financial viability of specialist orthopaedic hospitals, relative to general hospitals that undertake similar activities, under the current payment system. The thesis implements weighted least squares regressions to compare profit margins between specialist and general hospitals. Specialist orthopaedic hospitals are found to have lower profit margins, which are explained by patient characteristics such as age and severity. This means that, under the current payment system, providers that generally attract more complex patients, such as specialist orthopaedic hospitals, may be financially disadvantaged.
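For the methodologically curious, a weighted least squares comparison of this kind might look roughly like the sketch below; the data, weights, and controls are invented stand-ins for those used in the thesis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented provider-level data: the thesis uses richer controls and weights,
# so treat this purely as a sketch of the estimator.
rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "specialist": rng.integers(0, 2, n),      # 1 = specialist orthopaedic
    "mean_age": rng.normal(65, 5, n),
    "severity": rng.normal(0, 1, n),
    "activity": rng.integers(500, 5000, n),   # provider volume, used as weight
})
df["profit_margin"] = (
    0.02 - 0.01 * df["specialist"] - 0.005 * df["severity"]
    + rng.normal(0, 0.01, n)
)

# Weight providers by activity so larger hospitals carry more weight; the
# 'specialist' coefficient compares margins conditional on casemix.
wls = smf.wls(
    "profit_margin ~ specialist + mean_age + severity",
    data=df, weights=df["activity"],
).fit()
print(wls.params["specialist"])
```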

In what way is your analysis of competition in the NHS distinct from that of previous studies?

The second part of my thesis investigates the effect of competition on quality and efficiency from two different perspectives. First, it explores whether, under competitive pressures, neighbouring hospitals strategically interact in quality and efficiency, i.e. whether a hospital’s quality and efficiency respond to neighbouring hospitals’ quality and efficiency. Previous studies on English hospitals analyse strategic interactions only in quality, and they employ cross-sectional spatial econometric models. Instead, my thesis uses panel spatial econometric models and a cross-sectional IV model in order to make causal statements about the existence of strategic interactions among rival hospitals. Second, the thesis examines the direct effect of hospital competition on efficiency. The previous empirical literature has studied this topic by focusing on two measures of efficiency, unit costs and length of stay, measured at the aggregate level or for a specific procedure (hip and knee replacement). My thesis provides a richer analysis by examining a wider range of efficiency dimensions. It combines a difference-in-differences strategy, commonly used in the literature, with Seemingly Unrelated Regression models to estimate the effect of competition on efficiency and enhance the precision of the estimates. Moreover, the thesis tests whether the effect of competition varies for more or less efficient hospitals using an unconditional quantile regression approach.
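One common way of implementing unconditional quantile regression is recentered influence function (RIF) regression. The sketch below shows the basic mechanics on synthetic data; it is not the thesis’s actual specification, and both variables are invented stand-ins:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import gaussian_kde

# Synthetic data: 'competition' and 'efficiency' are invented stand-ins.
rng = np.random.default_rng(4)
n = 1000
competition = rng.normal(size=n)
efficiency = 0.3 * competition + rng.normal(size=n)

tau = 0.25                                  # focus on less efficient hospitals
q = np.quantile(efficiency, tau)
f_q = gaussian_kde(efficiency)(q)[0]        # density at the tau-th quantile

# RIF for the tau-th quantile: q + (tau - 1{y <= q}) / f(q); regressing it on
# covariates by OLS estimates effects on the unconditional quantile.
rif = q + (tau - (efficiency <= q)) / f_q
print(sm.OLS(rif, sm.add_constant(competition)).fit().params)
```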

Where should researchers turn next to help policymakers understand hospital performance?

Hospitals are complex organisations and the idea of performance in this context is multifaceted. Even when we focus on a single performance dimension such as quality or efficiency, it is difficult to identify a measure that could work as a comprehensive proxy. It is therefore important to decompose the analysis as much as possible, by exploring indicators that capture complementary aspects of the performance dimension of interest. This practice is likely to generate findings that are readily interpretable by policymakers. For instance, some results from my thesis suggest that hospital competition improves efficiency by reducing admissions per bed. This effect is driven by a reduction in the number of beds rather than an increase in the number of admissions. In addition, competition improves efficiency by pushing hospitals to increase the proportion of day cases. These findings may help to explain why other studies in the literature find that competition decreases length of stay: hospitals may replace elective patients, who occupy hospital beds for one or more nights, with day cases, who are likely to be discharged on the same day as admission.