Hawking is right, Jeremy Hunt does egregiously cherry pick the evidence

I’m beginning to think Jeremy Hunt doesn’t actually care what the evidence says on the weekend effect. Last week, renowned physicist Stephen Hawking criticized Hunt for ‘cherry picking’ evidence with regard to the ‘weekend effect’: the observation that patients admitted at the weekend are more likely to die than their counterparts admitted on a weekday. Hunt responded by doubling down on his claims.

Some people have questioned Hawking’s credentials to speak on the topic beyond being a user of the NHS. But it has taken a respected public figure speaking out to elicit a response from the Secretary of State for Health, and that should be welcomed. It remains the case, though, that a multitude of experts continue to be ignored. Even the oft-quoted Freemantle paper is partially ignored where it notes of the ‘excess’ weekend deaths, “to assume that [these deaths] are avoidable would be rash and misleading.”

We produced a simple tool to demonstrate how weekend effect studies might estimate an increased risk of mortality associated with weekend admissions even in the case of no difference in care quality. However, the causal model underlying these arguments is not always obvious. So here it is:

[Figure: A simple model of the effect of the weekend on patient health outcomes. The dashed line represents unobserved effects.]

So what do we know about the weekend effect?

  1. The weekend effect exists. A multitude of studies have observed that patients admitted at the weekend are more likely to die than those admitted on a weekday. This amounts to having shown that E(Y|W,S) \neq E(Y|W',S) (in the notation of the model above: Y is the patient outcome, W and W' denote weekend and weekday admission, S admission to hospital, Q care quality, X staffing, and U unobserved health status). As our causal model demonstrates, being admitted is correlated with health and, importantly, with the day of the week. So this is not the same as saying that the risk of adverse clinical outcomes differs by day of the week once the propensity for admission is taken into account; we can’t say E(Y|W) \neq E(Y|W'). Nor does this evidence imply that care quality differs at the weekend, E(Q|W) \neq E(Q|W'). In fact, the evidence only implies differences in care quality if the propensity to be admitted is independent of (unobserved) health status, i.e. Pr(S|U,X) = Pr(S|X) (or if health outcomes are uncorrelated with health status, which is definitely not the case!). A sketch of this selection argument in symbols follows this list.
  2. Admissions are different at the weekend. Fewer patients are admitted at the weekend and those that are admitted are on average more severely unwell. Evidence suggests that the better patient severity is controlled for, the smaller the estimated weekend effect. Weekend effect estimates also diminish in models that account for the selection mechanism.
  3. There is some evidence that care quality may be worse at the weekend (at least in the United States), i.e. E(Q|W) \neq E(Q|W'), although this has not been established in the UK (we’re currently investigating it!).
  4. Staffing levels, particularly specialist to patient ratios, are different at the weekend, E(X|W) \neq E(X|W').
  5. There is little evidence to suggest how staffing levels and care quality are related. While the relationship seems evident prima facie, its extent is not well understood; for example, we might expect diminishing returns to increased staffing levels.
  6. There is a reasonable amount of evidence on the impact of care quality (preventable errors and adverse events) on patient health outcomes.
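To spell out the selection argument from point 1 in symbols (a sketch under two simplifying assumptions not stated in the original model: that the underlying health of those attending hospital is the same on every day, and that, when care quality is identical, outcomes depend only on underlying health U), the quantity compared in weekend effect studies, conditioning on admission, is

    E(Y|W, S=1) = \int E(Y|U=u) \, f(u|W, S=1) \, du,
    where   f(u|W, S=1) = \frac{Pr(S=1|U=u, W) \, f(u)}{Pr(S=1|W)}.

If the admission rule depends on the day of the week, Pr(S=1|U,W) \neq Pr(S=1|U,W'), then the case-mix of admitted patients f(u|W,S=1) differs from f(u|W',S=1), and so E(Y|W,S=1) \neq E(Y|W',S=1) in general, even though E(Y|U) and care quality Q are exactly the same at the weekend as on a weekday.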

But what are we actually interested in from a policy perspective? Do we actually care about the weekend per se? I would say no; we care that there is potentially a lapse in care quality. So it’s a two part question: (i) how does care quality (and hence avoidable patient harm) differ at the weekend, E(Q|W) - E(Q|W') = ?; and (ii) what effect does this have on patient outcomes, E(Y|Q) = ? The first question tells us to what extent policy may effect change, and the second gives us a way of valuing that change; and yet the vast majority of studies in the area address neither. Despite there being a number of publicly funded research projects looking at these questions right now, it’s the studies that are not useful for policy that keep being quoted by those with the power to make change.

Hawking is right, Jeremy Hunt has egregiously cherry picked and misrepresented the evidence, as has been pointed out again and again and again and again and … One begins to wonder if there isn’t some motive other than ensuring long run efficiency and equity in the health service.


Variations in NHS admissions at a glance

Variations in admissions to NHS hospitals are the source of a great deal of consternation. Over the long run, admissions and the volume of activity required of the NHS have increased without equivalent increases in funding or productivity. Over the course of the year, there are repeated claims of crises as hospitals are ill-equipped for the increase in demand in the winter. And different patterns of admissions at weekends relative to weekdays may be the foundation of the ‘weekend effect’, as we recently demonstrated. Yet all these different sources of variation produce a single time series of daily admission counts, and each source of variation matters for different planning and research aims. So let’s decompose the daily number of admissions into its various components.

Data

Daily numbers of emergency admissions to NHS hospitals between April 2007 and March 2015, from Hospital Episode Statistics (HES).

Methods

A similar analysis was first conducted on variations in the number of births by day of the year. A full description of the model can be found in Chapter 21 of the textbook Bayesian Data Analysis (indeed, the model is shown on the front cover!). The model is a sum of Gaussian processes, each one modelling a different aspect of the data, such as the long-run trend or weekly periodic variation. We have previously used Gaussian processes in a geostatistical model on this blog. Gaussian processes are a flexible class of models in which the function values at any finite set of points have a joint Gaussian distribution. Different covariance functions can be specified for different components, such as the aforementioned weekly periodicity or long-run trend. The model was run using the software GPstuff in Octave (basically an open-source version of Matlab), and we modified code from the GPstuff website.
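The original analysis was run with GPstuff in Octave. Purely to illustrate the idea of an additive covariance structure, here is a rough sketch in Python with scikit-learn (this is not the code we actually used; the kernels, hyperparameters, and fake data are placeholders):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

    # t: day index; y: daily emergency admission counts (a fake two-year series
    # so that the snippet is self-contained and runs end to end).
    rng = np.random.default_rng(0)
    t = np.arange(2 * 365, dtype=float)[:, None]
    y = (0.02 * t.ravel()                               # slow upward trend
         + 5 * np.sin(2 * np.pi * t.ravel() / 7)        # weekly cycle
         + 10 * np.sin(2 * np.pi * t.ravel() / 365.25)  # seasonal cycle
         + rng.normal(0, 2, len(t)))                    # day-to-day noise

    # One covariance function per component; summing them lets the fitted model
    # separate the long-run trend, the day-of-week pattern and the seasonal pattern.
    kernel = (RBF(length_scale=365.0)                                  # long-run trend
              + ExpSineSquared(length_scale=1.0, periodicity=7.0,
                               periodicity_bounds="fixed")             # weekly periodicity
              + ExpSineSquared(length_scale=1.0, periodicity=365.25,
                               periodicity_bounds="fixed")             # yearly periodicity
              + WhiteKernel(noise_level=4.0))                          # observation noise

    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(t, y)
    print(gp.kernel_)  # fitted hyperparameters for each component

In the full model described in Bayesian Data Analysis the periodic components are also allowed to vary slowly over time and day-of-year effects get their own term; the sketch above only shows the basic trick of adding covariance functions so that each source of variation can be read off from the fit.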

Results

[Figure: four-panel decomposition of daily emergency admissions showing the long-run trend, day-of-week effects, seasonal effects, and day-of-year effects.]

The four panels of the figure reveal to us things we may claim to already know. Emergency admissions have been increasing over time and were about 15% higher in 2015 than in 2007 (top panel). The second panel shows us the day of the week effects: there are about 20% fewer admissions on a Saturday or Sunday than on a weekday. The third panel shows a decrease in summer and increase in winter as we often see reported, although perhaps not quite as large as we might have expected. And finally the bottom panel shows the effects of different days of the year. We should note that the large dip at the end of March/beginning of April is an artifact of coding at the end of the financial year in HES and not an actual drop in admissions. But, we do see expected drops for public holidays such as Christmas and the August bank holiday.

While none of this is unexpected, it does show that there’s a lot going on underneath the aggregate data. Perhaps the most alarming aspect is the long-run increase in emergency admissions when compared with the (lack of) change in funding or productivity. It suggests that hospitals will often be running at capacity, so additional variation, such as the winter surge, may push demand beyond the capacity available. We might also speculate on other possible ‘weekend effects’, such as admission on a bank holiday.

As a final thought, Gaussian processes are an excellent way of modelling data with an unknown structure without imposing assumptions, such as linearity, that might be too strong. Hence their use in geostatistics; they are also widely used in machine learning and artificial intelligence. We often encounter data with unknown and potentially complicated structures in health care and public health research, so hopefully this will serve as a good advert for some new methods. See this book, or the one referenced in the methods section, for an in-depth look.


Weekend effect explainer: why we are not the ‘climate change deniers of healthcare’

The statistics underlying the arguments around the weekend effect are complicated. Despite over a hundred empirical studies on the topic, and an observed increase in the risk of mortality for weekend admissions in multiple countries, there is still no real consensus on what is going on. We have previously covered the arguments on this blog and suggested that the best explanation for the weekend effect is that healthier patients are less likely to be admitted to hospital at the weekend. Nevertheless, a little knowledge can be a dangerous thing (the motto of the Dunning-Kruger effect), and some people can be very confident about the interpretation of the statistics despite their complicated nature. For example, one consultant nephrologist wrote in a comment on a recent article that those who attribute the weekend effect to differences in admission are becoming ‘the climate change deniers of healthcare’ as they are not taking into account all the risk-adjusted analyses!

It may certainly be the case that there is a reduction in healthcare quality at the weekend. But it is also important for policy makers to understand that it is still possible to observe a weekend effect even with quite comprehensive mortality risk adjustment. The image below links to an app that simulates multiple weekend effect studies from a model in which there is no weekend effect but potentially different chances of admission at the weekend and on weekdays. We assume that those who turn up to A&E but are not admitted, and are sent home, are the healthiest patients. In the app, you can change the parameters: the proportion of attendances admitted at weekends and on weekdays, the mortality rate among admitted patients, and, crucially, the amount of variation in patient mortality explained by our risk adjustment (a sort of “R-squared”). It will display crude and adjusted odds ratios as well as a distribution of possible results from similar studies. (Be patient though, simulating lots of large studies seems to take a while on the server!)

[Image: link to the weekend effect simulation app.]

As is evident, even with a very high proportion of variance explained, we can still get an odds ratio not equal to one, and hence an observed weekend effect, if the proportion of attendances who are admitted differs between weekend and weekday. And, with the very large sample sizes often used for these studies, these estimates will likely appear “statistically significant”. Recent evidence from the UK has suggested that 27% of A&E attendances are admitted at the weekend compared to 30% on a weekday. Even when we can explain 90% of the variation in mortality, we can still get a ‘weekend effect’ with these small differences in propensity for admission. And, if there is any element of publication bias, or the ‘garden of forking paths’ [PDF], we will see lots of statistically significant weekend effect studies published.
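For readers who prefer code to sliders, a stripped-down version of this kind of simulation (our own illustrative sketch in Python, not the app’s actual code; the parameter values are either invented or taken from the figures quoted above) looks something like this. Mortality depends only on an unobserved severity, admission is slightly more selective at the weekend (27% of attendances admitted versus 30% on a weekday), and the ‘risk adjustment’ is a noisy severity score that captures roughly 90% of the variation in true severity:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 1_000_000                                    # A&E attendances per arm (illustrative)
    r2 = 0.90                                        # variance in severity captured by the risk score
    admit_frac = {"weekday": 0.30, "weekend": 0.27}  # admission proportions quoted above

    arms = []
    for day, frac in admit_frac.items():
        severity = rng.normal(size=n)                                          # true (unobserved) severity
        score = np.sqrt(r2) * severity + np.sqrt(1 - r2) * rng.normal(size=n)  # imperfect risk score
        died = rng.random(n) < 1 / (1 + np.exp(-(severity - 4)))               # same mortality model every day
        admitted = severity >= np.quantile(severity, 1 - frac)                 # the sickest attendances are admitted
        arms.append((np.full(admitted.sum(), day == "weekend", dtype=float),
                     score[admitted], died[admitted].astype(float)))

    weekend = np.concatenate([a[0] for a in arms])
    score = np.concatenate([a[1] for a in arms])
    died = np.concatenate([a[2] for a in arms])

    # Crude odds ratio from the 2x2 table of admitted patients
    d1, n1 = died[weekend == 1].sum(), (weekend == 1).sum()
    d0, n0 = died[weekend == 0].sum(), (weekend == 0).sum()
    crude_or = (d1 / (n1 - d1)) / (d0 / (n0 - d0))

    # 'Risk-adjusted' odds ratio: logistic regression of death on weekend + risk score
    fit = sm.Logit(died, sm.add_constant(np.column_stack([weekend, score]))).fit(disp=0)
    print(f"Crude weekend odds ratio:    {crude_or:.3f}")
    print(f"Adjusted weekend odds ratio: {np.exp(fit.params[1]):.3f}")

Even though, by construction, the day of admission has no effect on any patient’s risk of death, the crude odds ratio should come out noticeably above one, and the adjusted odds ratio, while attenuated, will typically remain above one as well, which is exactly the pattern the app is designed to demonstrate.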

When statistics are misunderstood, it is tempting to blame the audience, but often the idea has simply not been explained well enough. I can’t judge whether little web apps will actually help explain concepts like this, but hopefully it’s a step in the right direction.
