Sam Watson’s journal round-up for 11th February 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Contest models highlight inherent inefficiencies of scientific funding competitions. PLoS Biology [PubMed] Published 2nd January 2019

If you work in research, you have no doubt thought to yourself at some point that you spend more time applying to do research than actually doing it. You can spend weeks working on (what you believe to be) a strong proposal only for it to fail against other strong bids. That time could have been spent collecting and analysing data; indeed, the opportunity cost of writing extensive proposals can be very high. The question arises as to whether there is another method of allocating research funding that reduces this waste and inefficiency. This paper compares the proposal competition to a partial lottery. In the lottery system, proposals are short, and funding is allocated at random among those that meet some qualifying standard. This system has the benefit of not taking up too much time, but at the cost of reducing the average scientific value of the winning proposals. The authors compare the two approaches using an economic model of contests, which takes into account factors like proposal strength, public benefits, benefits to the scientist such as reputation and prestige, and scientific value. Ultimately they conclude that, when the number of awards is smaller than the number of proposals worthy of funding, the proposal competition is inescapably inefficient. Researchers have to invest heavily to get a good project funded, and even a good project may still miss out. The stiffer the competition, the more researchers have to work to win an award. And what little evidence there is suggests that the format of the application makes little difference to the amount of time researchers spend writing it. The lottery mechanism only requires the researcher to propose something that is good enough to enter the lottery, so far less time would be devoted to writing and more to actual science. I’m all for it!
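To make the trade-off concrete, here is a toy simulation; it is my own sketch, not the authors’ contest model, and all the numbers (200 proposals, 20 awards, uniformly distributed project values, four weeks to write a full proposal versus one week for a short qualifying one) are assumptions for illustration:

```python
import random

# Toy comparison of a proposal competition vs. a partial lottery.
# All parameters below are illustrative assumptions, not the paper's.
random.seed(1)
n_proposals, n_awards = 200, 20
values = [random.random() for _ in range(n_proposals)]  # scientific value of each project

# Proposal competition: everyone writes a long proposal; the top-ranked win.
competition_winners = sorted(values, reverse=True)[:n_awards]
competition_weeks = 4 * n_proposals  # assumed 4 weeks per full proposal

# Partial lottery: everyone writes a short proposal; winners are drawn at
# random from those above a qualifying threshold (here, the median value).
threshold = sorted(values)[n_proposals // 2]
qualifiers = [v for v in values if v >= threshold]
lottery_winners = random.sample(qualifiers, n_awards)
lottery_weeks = 1 * n_proposals  # assumed 1 week per short proposal

print(sum(competition_winners) / n_awards, competition_weeks)  # higher mean value, more weeks
print(sum(lottery_winners) / n_awards, lottery_weeks)          # lower mean value, far fewer weeks
```

Under these made-up assumptions, the lottery funds somewhat less valuable projects on average but cuts the community’s total writing time by three quarters, which is essentially the comparison the paper formalises.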

Preventability of early versus late hospital readmissions in a national cohort of general medicine patients. Annals of Internal Medicine [PubMed] Published 5th June 2018

Hospital quality is hard to judge. We’ve discussed on this blog before the pitfalls of using measures such as adjusted mortality differences for this purpose. Just because a hospital has higher than expected mortality does not mean those deaths could have been prevented with higher quality care. More thorough methods assess errors and preventable harm in care. Case note review studies have suggested that as few as 5% of deaths might be preventable in England and Wales. Another paper we have covered previously suggests that, as a consequence, the predictive value of standardised mortality ratios for preventable deaths may be less than 10%.
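A back-of-the-envelope Bayes calculation shows why such a low prevalence of preventable deaths caps the predictive value. The sensitivity and specificity below are assumed purely for illustration and are not taken from the papers discussed:

```python
# Positive predictive value of a 'high SMR' signal for preventable deaths.
# All three inputs are illustrative assumptions.
prevalence = 0.05   # proportion of deaths that are preventable (case note review figure)
sensitivity = 0.8   # assumed: chance a preventable death occurs under a high-SMR signal
specificity = 0.7   # assumed: chance a non-preventable death occurs under a normal signal

ppv = (sensitivity * prevalence) / (
    sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
)
print(f"PPV = {ppv:.2f}")  # ~0.12: most flagged deaths are still not preventable
```

With only around 5% of deaths preventable, even a reasonably discriminating signal mostly flags deaths that were not preventable.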

Another commonly used metric is the readmission rate. Poor care can mean patients have to return to the hospital. But again, the question remains as to how preventable these readmissions are. There may also be substantial differences between patients who are readmitted shortly after discharge and those readmitted after a longer interval. This article explores the preventability of early and late readmissions in ten hospitals in the US, using case note review by multiple reviewers to evaluate preventability. The headline figures are that 36% of early readmissions were considered preventable, compared with 23% of late readmissions. Moreover, early readmissions were judged most likely to have been preventable at the hospital, whereas for late readmissions an outpatient clinic or the home would have had more impact. All in all, another paper providing evidence that crude, or even adjusted, rates are not good indicators of hospital quality.

Visualisation in Bayesian workflow. Journal of the Royal Statistical Society: Series A (Statistics in Society) [RePEc] Published 15th January 2019

This article stems from a broader programme of work by these authors on good “Bayesian workflow”. That is to say, if we’re taking a Bayesian approach to analysing data, what steps ought we to take to ensure our analyses are as robust and reliable as possible? I’ve been following this work for a while, as this type of pragmatic advice is invaluable. I’ve often read empirical papers where the authors have chosen, say, a logistic regression model with covariates x, y, and z and reported the outcomes, but at no point justified why this particular model might be any good at all for these data or the research objective. The key steps of the workflow include, first, exploratory data analysis to help set up a model, and second, performing model checks before estimating model parameters. This latter step is important: one can generate data from the model and its prior distributions, and if the generated data look nothing like what we would expect the real data to look like, then clearly the model is not very good. Following this, we should check whether our inference algorithm is doing its job: for example, are the MCMC chains converging? We can also conduct posterior predictive model checks. These have been criticised in the literature for using the same data to both estimate and check the model, which could lead to the model generalising poorly to new data. Indeed, in a recent paper of my own, posterior predictive checks showed poor fit of a model to my data and that a more complex alternative fitted better. But other model fit statistics, which penalise the number of parameters, led to the opposite conclusion, and the simpler model was preferred on the grounds that the more complex model was overfitting the data. I would therefore argue that posterior predictive model checks are a sensible test to perform, but one that must be interpreted carefully as one step among many. Finally, we can compare models using tools like cross-validation.
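As a concrete illustration of that pre-estimation check, here is a minimal sketch of a prior predictive check. The Poisson count model and its deliberately vague prior are assumptions for illustration, not a model from the paper:

```python
import numpy as np

# Minimal prior predictive check: simulate datasets from the prior alone,
# before touching the observed data. Model and prior are toy assumptions.
rng = np.random.default_rng(0)
n_sims, n_obs = 1000, 50

# Draw a Poisson rate from a vague log-normal prior, then simulate a dataset.
rates = rng.lognormal(mean=0.0, sigma=2.0, size=n_sims)
simulated = rng.poisson(lam=rates[:, None], size=(n_sims, n_obs))

# If most simulated datasets have means far outside the range we consider
# plausible for the real data, the prior (or the model) needs rethinking.
print(np.percentile(simulated.mean(axis=1), [2.5, 50, 97.5]))
```

Here the vague prior produces simulated datasets whose means span several orders of magnitude, which is exactly the kind of red flag this step is designed to surface.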

This article discusses the use of visualisation to aid this workflow. The authors use the running example of building a model to estimate exposure to small particulate matter from air pollution across the world. Plots are produced for each of the steps, showing just how bad some models can be and how a model can be refined step by step to arrive at a convincing analysis. I agree wholeheartedly with the authors when they write, “Visualization is probably the most important tool in an applied statistician’s toolbox and is an important complement to quantitative statistical procedures.”


How important is healthcare for population health?

How important is a population’s access to healthcare as a determinant of population health? I have heard the claim that “as little as 10% of a population’s health is linked to access to healthcare”, or some variant of it, in many places. Some examples include the Health Foundation, the AHRQ, the King’s Fund, the WHO, and determinantsofhealth.org. This claim is appealing: it feels counter-intuitive and it brings to the fore questions of public health and health-related behaviour. But it’s not clear what it means.

I can think of two possible interpretations. One, 10% of the variation in population health outcomes is explained by variation in healthcare access. Or two, access to healthcare leads to a 10% change in population health outcomes compared to no access to healthcare. Both of these claims would be very hard to evaluate empirically. Within many countries, particularly the highest income countries, there is little variation in access to healthcare relative to possible levels of access across the world. Inter-country comparisons would provide a greater range of variation to compare to population outcomes. But even the most sophisticated statistical analysis will struggle to separate out the effects of other economic determinants of health.

It would also be difficult to make sense of any study that purported to estimate the effect of adding or removing healthcare beyond any within-country variation. The labour and capital resource needs of the most sophisticated hospitals are too great for the poorest settings, and it is unlikely that the wealthiest democratic countries would ever end up with the level of healthcare the world’s poorest currently face.

But what is the evidence for the claim of 10%? There are a handful of key citations, all of which were summarised in a widely cited Health Affairs article in 2014. For either of the two interpretations above, we would need estimates of the probability of health conditional on different levels of healthcare, Pr(health|healthcare). Each of the references for the 10% figure in fact provides evidence for the proportion of deaths associated with ‘inadequate’ healthcare, or to put it another way, the probability of having received ‘inadequate’ care given death, Pr(healthcare|health). This is known as transposing the conditional: we have got our conditional probability the wrong way round. Even if we accept mortality rates as an acceptable proxy for population health, the two probabilities are not equal to one another.
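A small numerical example, with entirely hypothetical numbers, shows how different the two conditional probabilities can be:

```python
# Hypothetical population illustrating that
# Pr(inadequate care | death) != Pr(death | inadequate care).
pop = 1_000_000
inadequate = 10_000        # assumed: people receiving 'inadequate' care
deaths_inadequate = 500    # assumed: deaths among them
deaths_adequate = 4_500    # assumed: deaths among everyone else

deaths = deaths_inadequate + deaths_adequate

p_inadequate_given_death = deaths_inadequate / deaths          # 0.10
p_death_given_inadequate = deaths_inadequate / inadequate      # 0.05
p_death_given_adequate = deaths_adequate / (pop - inadequate)  # ~0.0045

print(p_inadequate_given_death, p_death_given_inadequate, p_death_given_adequate)
```

In this made-up population, 10% of deaths occurred after ‘inadequate’ care, yet the probability of death given inadequate care is 5%, an order of magnitude higher than the 0.45% under adequate care; neither figure licenses the claim that 10% of a population’s health is linked to healthcare.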

Interpretation of this evidence is also complex. Smoking tobacco, for example, would be considered a behavioural determinant of health, and deaths caused by it would be attributed to a behavioural cause rather than to healthcare. But survival rates for lung cancers have improved dramatically over the last few decades due to improvements in healthcare. While it would be foolish to attribute a death in the past to a lack of access to treatments that had not yet been invented, contemporary lung cancer deaths in low income settings may well have been preventable with access to better healthcare. Using cause-of-death statistics to estimate the contributions of different factors to population health thus typically picks up only those deaths resulting from medical error or negligence. They are a wholly unreliable guide to the role of healthcare in determining population health.

A study published recently in The Lancet, timed to coincide with a commission on healthcare quality, adopted a different approach. It aimed to estimate the annual number of deaths worldwide due to a lack of access to high-quality care. To do this, the authors compared the mortality rates of conditions amenable to healthcare intervention around the world with those in the wealthiest nations; any differences were attributed to either non-utilisation of, or lack of access to, high-quality care. They estimated 15.6 million ‘excess deaths’. However, to attribute these deaths to inadequate healthcare access, one would need to conceive of a counterfactual world in which everyone was treated in the best healthcare systems. This is surely implausible in the extreme. A comparable question might be to ask how many people around the world are dying because their incomes are not as high as those of the top 10% of Americans.
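A stylised sketch of this kind of calculation, with hypothetical mortality rates and populations, makes the counterfactual explicit: every death above the benchmark rate of the best-performing systems gets labelled ‘excess’:

```python
# Stylised 'excess deaths' calculation of the kind the Lancet study
# describes; all rates, populations, and country names are hypothetical.
countries = {
    # name: (deaths from amenable conditions per 100,000, population)
    "A": (120.0, 50_000_000),
    "B": (300.0, 20_000_000),
    "C": (45.0, 10_000_000),
}
benchmark_rate = 45.0  # assumed rate per 100,000 in the best-performing systems

excess = {
    name: max(rate - benchmark_rate, 0.0) / 100_000 * pop
    for name, (rate, pop) in countries.items()
}
print(excess)  # deaths attributed to lack of access to high-quality care
```

The whole estimate hinges on the benchmark: it assumes every country could, in the relevant counterfactual, achieve the amenable mortality rates of the wealthiest health systems.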

On the normative question, there is little disagreement with the goal of achieving universal health coverage and improving population health. But these dramatic, eye-catching, counter-intuitive figures do little to support those ends: they can distort policy priorities and create unattainable goals and expectations. Health systems are not built overnight; an incremental approach is needed to ensure sustainability and affordability. It is in building the evidence to support such an approach that great strides are being made, both methodologically and empirically, but this work is not nearly as exciting as claiming that healthcare isn’t very important or that millions of people are dying every year due to poor healthcare access. Healthcare systems are an integral and important part of overall population health; assigning a number to that importance is not.


OHE Lunchtime Seminar: What Can NHS Trusts Do to Reduce Cancer Waiting Times?

OHE Lunchtime Seminar with Sarah Karlsberg, Steven Paling, and Júlia González Esquerré on ‘What can NHS trusts do to reduce cancer waiting times?’, to be held on 14th November 2018 from 12 p.m. to 2 p.m.

Rapid diagnosis and access to treatment for cancer are vital for both clinical outcomes and patient experience of care. The NHS Constitution contains several waiting times targets, including that 85% of patients diagnosed with cancer should receive treatment within 62 days of referral. However, waiting times are increasing in England: the 62-day target has not been met since late 2013 and, in July 2018, the NHS recorded its worst performance since records began in October 2009.

This seminar will present evidence on the practical steps NHS trusts can take to reduce cancer waiting times. The work uses patient-level data (Hospital Episode Statistics) from 2016/17 and an econometric model to quantify the potential effects of several recommendations on the average length of patients’ cancer pathways. The project won the 2018 John Hoy Memorial Award for the best piece of economic analysis produced by government economists.

Sarah Karlsberg, Steven Paling, and Júlia González Esquerré work in the NHS Improvement Economics Team, which provides economics expertise to NHS Improvement (previously Monitor and the Trust Development Authority) and the provider sector. Their work covers all aspects of provider policy, including operational and financial performance, quality of care, leadership and strategic change. Sarah is also a Visiting Fellow at OHE.

Download the full seminar invite here.

The seminar will be held in the Sir Alexander Fleming Room, Southside, 7th Floor, 105 Victoria Street, London SW1E 6QT. A buffet lunch will be available from 12 p.m. The seminar will start promptly at 12:30 p.m. and finish promptly at 2 p.m.

If you would like to attend this seminar, please reply to ohegeneral@ohe.org.