
The ‘Weekend Effect’ (and what it means for research and policy)

Today, two new studies are published examining different aspects of the observed increase in the risk of mortality associated with weekend admission, the so-called ‘weekend effect’. In the first [disclaimer: I am an author of this], the results of a national consultant survey are analysed. Variations in Sunday-to-Wednesday consultant-to-patient ratios are compared with the estimated ‘weekend effect’ for each trust in England: no correlation is found between the two. In the second, compliance with clinical guidelines for stroke care is examined over the course of the whole week. A specialist database allows adjustment for the severity of patients’ conditions on admission. No ‘weekend effect’ is found in the 2013/14 sample of patients used; however, patients admitted at night are found to have a higher risk of mortality. A third, recently published study sheds further light on the processes occurring at the weekend in the NHS: fewer people who attend accident and emergency departments are admitted at the weekend, and fewer patients are referred directly from the community.

These papers provide an important insight into the weekend effect, which has been used to justify a move to seven-day services in the NHS. I won’t delve deeper into the implications of these studies for this particular policy; Nick Black does an exemplary job of that in an editorial for The Lancet. I want instead to consider whether the seven-day NHS issue suggests a change in the way economic and service delivery research is presented.

It is becoming more common for cost-effectiveness results to be published alongside the results of randomised controlled trials. This facilitates interpretation and use of the trial’s findings. However, these results, whether ICERs, relative risks, or anything else, are not a decision and tell us nothing about what we should do. A decision-making framework is required for that. Within the realm of health technology assessment (HTA) there is a well-defined process for interpreting the results of evaluations and making decisions about implementation; in England and Wales this function is embodied in NICE. Generally, decisions are made with respect to a predefined cost-effectiveness threshold: if a technology is expected to be more cost-effective than the threshold, then the decision is to invest in it. However, no such system is generally used for service delivery interventions. For example, the finding that a seven-day NHS policy, evaluated at face value, would not be cost-effective by any standard criteria has sadly had little impact on the debate. Indeed, decisions made at the health system level are inherently political.
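The threshold rule described above can be sketched in a few lines. The numbers below are entirely hypothetical; the £20,000 per QALY default reflects the lower end of the range NICE conventionally cites, and the function is an illustration of the logic, not any decision body’s actual procedure:

```python
def cost_effective(delta_cost, delta_qaly, threshold=20_000):
    """Return True if an intervention passes a simple ICER threshold test.

    delta_cost and delta_qaly are incremental cost (GBP) and incremental
    QALYs versus the comparator. An intervention that is cheaper and more
    effective (dominant) always passes; one that is dearer and less
    effective (dominated) never does.
    """
    if delta_qaly > 0 and delta_cost <= 0:
        return True   # dominant
    if delta_qaly <= 0 and delta_cost >= 0:
        return False  # dominated
    icer = delta_cost / delta_qaly
    # North-east quadrant: pay if the cost per QALY gained is below the
    # threshold. South-west quadrant: accept the QALY loss only if the
    # saving per QALY forgone exceeds the threshold.
    return icer < threshold if delta_qaly > 0 else icer > threshold

# A hypothetical intervention costing £5,000 more per patient and
# yielding 0.4 extra QALYs has an ICER of £12,500/QALY:
cost_effective(5_000, 0.4)  # -> True
```

The point of the article stands out even in this toy form: the ICER itself is just a number, and it is the comparison with an agreed threshold that turns it into a decision.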

A basic model of political economy places decision makers on a scale between two extremes: at one end they act towards predefined long-run equity and efficiency goals, and at the other they are purely self-serving. At the HTA level there is little political capital to be gained from making cost-ineffective decisions, particularly because the decision-making criteria are well known; exceptions are made only when there is significant interest group pressure. The framework for evaluating health system interventions is typically more opaque; more interest groups are often involved; and the evidence is often complex or lacking. The statistics can be complex and easily misrepresented – what Tim Harford has recently described as statistical bullshit.

What is lacking, then, is both a decision-making framework and a way of interpreting results within that framework. A simple decision rule based on net benefits weighted by a societal willingness to pay, as used in HTA, may suffice, although it may also be desirable to take account of risk and uncertainty. The interpretation of results typically occurs in a qualitative or discursive way in the Discussion section of research papers, but this often focuses on advising caution in interpretation (often due to the idiosyncrasies of frequentist statistics and their meaning). A more formal Interpretation section, one that examines the decisions implied by the results where possible, or that uses simple alternative calculations and expert opinion as posterior model checks, would be desirable. At the very least, the seven-day NHS debate suggests that research communication and implementation are currently sub-optimal.
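A net-benefit decision rule of the kind suggested above, extended to reflect uncertainty, might be sketched as follows. The distributions, means, and standard deviations are illustrative assumptions, not estimates from any study; the general approach (simulate incremental costs and effects, report the probability of a positive net monetary benefit at a given willingness to pay) is the standard probabilistic sensitivity analysis idea:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def prob_cost_effective(n=10_000, wtp=20_000):
    """Crude probabilistic sketch of a net-benefit decision rule.

    Samples incremental cost and incremental QALYs from assumed normal
    distributions and reports the share of draws with a positive net
    monetary benefit, NMB = wtp * dQALY - dCost.
    """
    wins = 0
    for _ in range(n):
        d_cost = random.gauss(5_000, 2_000)  # assumed: mean £5,000, sd £2,000
        d_qaly = random.gauss(0.3, 0.15)     # assumed: mean 0.3, sd 0.15
        if wtp * d_qaly - d_cost > 0:
            wins += 1
    return wins / n

# With these assumed distributions the intervention is cost-effective
# in roughly 60% of draws at £20,000 per QALY:
prob_cost_effective()
```

A report framed this way gives a decision maker something directly usable – “at a willingness to pay of £X, the probability the policy is cost-effective is p” – rather than a bare point estimate and a caution about significance.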

By

Health economics, statistics, and health services research at the University of Warwick. Also like rock climbing and making noise on the guitar.
