The ‘Weekend Effect’ (and what it means for research and policy)

Today, two new studies are published examining different aspects of the observed increase in the risk of mortality associated with weekend admission, the so-called ‘weekend effect’. In the first [disclaimer: I am an author of this], the results of a national consultant survey are analysed. Variations in Sunday to Wednesday consultant-to-patient ratios are compared with the estimated ‘weekend effect’ for each trust in England: no correlation is found between the two. In the second, compliance with clinical guidelines for stroke care is examined over the course of the whole week. A specialist database allows adjustment for the severity of each admission. No ‘weekend effect’ is found for the 2013/14 sample of patients used; however, patients admitted at night are found to have a higher risk of mortality. A third, recently published study sheds further light on the processes occurring at the weekend in the NHS: fewer people who attend accident and emergency departments are admitted at the weekend, and fewer patients are referred directly from the community.

These papers provide an important insight into the weekend effect, which has been used to justify a move to seven-day services in the NHS. I won’t delve deeper into the implications of these studies for this particular policy; Nick Black does an exemplary job of that in an editorial for The Lancet. I want instead to consider whether the seven-day NHS issue suggests a change in the way economic and service delivery research is presented.

It is becoming more common for cost-effectiveness results to be published alongside the results of randomised controlled trials. This facilitates interpretation and use of the trial’s findings. However, these results, be they ICERs, relative risks, or anything else, are not a decision and tell us nothing about what we should do. A decision-making framework is required for that. Within the realm of HTA there is a well-defined process for interpreting the results of evaluations and making decisions about implementation; in England and Wales this function is embodied in NICE. Generally, decisions are made with respect to a predefined cost-effectiveness threshold: if a technology is expected to be more cost-effective than the threshold, then the decision is to invest in it. However, no such system is generally used for service delivery interventions. For example, the finding that a seven-day NHS policy, evaluated at face value, would not be cost-effective by any standard criteria has sadly had little impact on the debate. Indeed, decisions made at the health system level are inherently political.
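The threshold rule described above can be sketched in a few lines. This is a minimal, illustrative example with hypothetical numbers, not any trust’s actual figures; the equivalent net monetary benefit form (threshold × QALY gain − cost > 0) is used to avoid dividing by a small QALY gain.

```python
# Illustrative sketch of a NICE-style threshold decision rule.
# All figures are hypothetical; the threshold below is at the lower
# end of the range NICE conventionally uses (roughly £20,000-£30,000/QALY).

THRESHOLD = 20_000  # GBP per QALY gained

def decide(delta_cost: float, delta_qalys: float,
           threshold: float = THRESHOLD) -> bool:
    """Return True if the intervention should be funded under the rule."""
    # Net monetary benefit: positive NMB is equivalent to ICER < threshold.
    nmb = threshold * delta_qalys - delta_cost
    return nmb > 0

# A hypothetical service change costing £5m that gains 200 QALYs:
print(decide(5_000_000, 200))  # ICER = £25,000/QALY, above the threshold
print(decide(3_000_000, 200))  # ICER = £15,000/QALY, below the threshold
```

The same intervention can flip from rejected to funded purely by moving the threshold, which is one reason the choice of threshold is itself a policy decision.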

A basic model of political economy puts decision makers on a scale somewhere between two extremes: at one end, they act towards predefined long-run equity and efficiency goals; at the other, they are purely self-serving. At the HTA level there is little political capital to be gained from making cost-ineffective decisions, particularly because the decision-making criteria are well known; exceptions are made only when there is significant interest group pressure. The framework for evaluating health system interventions is typically more opaque; more interest groups are often involved; and the evidence is often complex or lacking. The statistics can be complex and easily misrepresented – what Tim Harford has recently described as statistical bullshit.

What is lacking, then, is both a decision-making framework and a way of interpreting results within it. A simple decision rule based on net benefits, weighted by a societal willingness to pay as in HTA, may suffice, although it may also be desirable to consider risk and uncertainty. The interpretation of results often occurs in a qualitative or discursive way in the Discussion section of research papers, but this typically focuses on advising caution in interpretation (often due to the idiosyncrasies of frequentist statistics and their meaning). A more formal Interpretation section, which examines the decisions implied by the results where possible, or which uses simple alternative calculations and expert opinion as posterior model checks, would be desirable. At the very least, the seven-day NHS debate suggests that research communication and implementation is currently sub-optimal.
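One way to fold risk and uncertainty into a net-benefit rule, as suggested above, is to report the probability that an intervention is cost-effective at a given threshold, i.e. one point on a cost-effectiveness acceptability curve. The sketch below assumes we already have simulated (or bootstrapped) draws of incremental costs and effects; the distributions and their parameters are entirely hypothetical.

```python
# Minimal sketch: probability of cost-effectiveness at one threshold,
# from simulated draws of incremental cost and incremental QALYs.
# All numbers are hypothetical, for illustration only.
import random

random.seed(1)
THRESHOLD = 20_000  # GBP per QALY

# Hypothetical posterior draws: mean cost £3m (sd £0.5m),
# mean gain 180 QALYs (sd 40).
draws = [(random.gauss(3_000_000, 500_000), random.gauss(180, 40))
         for _ in range(10_000)]

# Share of draws with positive net monetary benefit at the threshold.
p_ce = sum(THRESHOLD * de - dc > 0 for dc, de in draws) / len(draws)
print(f"P(cost-effective at £{THRESHOLD:,}/QALY) = {p_ce:.2f}")
```

A single figure like this is crude, but it is the kind of explicit, decision-oriented summary an Interpretation section could carry, rather than leaving readers to infer a decision from p-values.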


  • Health economics, statistics, and health services research at the University of Warwick. Also like rock climbing and making noise on the guitar.

