Weekend effect explainer: why we are not the ‘climate change deniers of healthcare’

The statistics underlying the arguments around the weekend effect are complicated. Despite over a hundred empirical studies on the topic, and an observed increase in the risk of mortality for weekend admissions in multiple countries, there is still no real consensus on what is going on. We have previously covered the arguments on this blog and suggested that the best explanation for the weekend effect is that healthier patients are less likely to be admitted to hospital at the weekend. Nevertheless, a little knowledge can be a dangerous thing (a neat summary of the Dunning-Kruger effect), and some people can be very confident in their interpretation of the statistics despite their complicated nature. For example, one consultant nephrologist wrote in a comment on a recent article that those who attribute the weekend effect to differences in admission practices are becoming ‘the climate change deniers of healthcare’, since they are not taking into account all the risk-adjusted analyses!

It may certainly be the case that there is a reduction in healthcare quality at the weekend. But it is also important for policy makers to understand that it is possible to observe a weekend effect even with quite comprehensive mortality risk adjustment. The image below links to an app that simulates multiple weekend effect studies from a model in which there is no weekend effect but potentially different chances of admission at the weekend and on weekdays. We assume that those who turn up to A&E but are sent home rather than admitted are the healthiest patients. In the app, you can change the parameters: the proportion of attendances that are admitted on weekends and on weekdays, the mortality rate among admitted patients, and, crucially, the amount of variation in patient mortality (a sort of “R-squared”) explained by our risk adjustment. It will display crude and adjusted odds ratios, as well as a distribution of possible results from similar studies. (Be patient though: simulating lots of large studies seems to take a while on the server!)


As is evident, even with a very high proportion of variance explained, we can still get an odds ratio not equal to one (an observed weekend effect) if the proportion of attendances that are admitted differs between weekends and weekdays. And, with the very large sample sizes often used for these studies, these results will likely appear “statistically significant”. Recent evidence from the UK suggests that 27% of A&E attendances are admitted at the weekend, compared with 30% on a weekday. Even when we can explain 90% of the variation in mortality, we can still get a ‘weekend effect’ with these small differences in the propensity for admission. And, if there is any element of publication bias, or of the ‘garden of forking paths’ [PDF], we will see lots of statistically significant weekend effect studies published.
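For readers who want to see the mechanism without waiting on the server, a minimal sketch of one simulated study is below. This is not the app’s exact model: my own illustrative assumptions are a standard-normal latent severity, a logistic mortality model that ignores the day of the week entirely, admission of only the sickest fraction of attendances (27% at weekends, 30% on weekdays), a noisy severity score standing in for the app’s “variation explained” slider, and a Mantel-Haenszel odds ratio across score deciles playing the role of risk adjustment.

```python
import numpy as np

rng = np.random.default_rng(2017)

def simulate_study(n_attend=1_000_000, p_weekend=0.27, p_weekday=0.30,
                   r_squared=0.90):
    """One simulated study in which there is NO true weekend effect:
    mortality depends only on latent severity, never on the day."""
    weekend = rng.random(n_attend) < 2 / 7            # roughly 2 days in 7
    severity = rng.normal(size=n_attend)              # latent sickness
    p_death = 1 / (1 + np.exp(-(-3.0 + 1.5 * severity)))
    died = rng.random(n_attend) < p_death
    # Selection: only the sickest fraction of attendances is admitted,
    # and that fraction differs slightly between weekends and weekdays.
    cut_we = np.quantile(severity[weekend], 1 - p_weekend)
    cut_wd = np.quantile(severity[~weekend], 1 - p_weekday)
    admitted = np.where(weekend, severity > cut_we, severity > cut_wd)
    # Imperfect risk adjustment: a noisy severity score whose squared
    # correlation with true severity is r_squared.
    noise_sd = np.sqrt(1 / r_squared - 1)
    score = severity[admitted] + rng.normal(scale=noise_sd,
                                            size=int(admitted.sum()))
    return weekend[admitted], died[admitted], score

def crude_or(w, d):
    """Unadjusted weekend-vs-weekday odds ratio among admitted patients."""
    a, b = (w & d).sum(), (w & ~d).sum()
    c, e = (~w & d).sum(), (~w & ~d).sum()
    return (a * e) / (b * c)

def adjusted_or(w, d, score, n_strata=10):
    """Risk-adjusted OR: Mantel-Haenszel across deciles of the score."""
    edges = np.quantile(score, np.linspace(0, 1, n_strata + 1))
    stratum = np.clip(np.searchsorted(edges, score) - 1, 0, n_strata - 1)
    num = den = 0.0
    for s in range(n_strata):
        m = stratum == s
        a, b = (w[m] & d[m]).sum(), (w[m] & ~d[m]).sum()
        c, e = (~w[m] & d[m]).sum(), (~w[m] & ~d[m]).sum()
        num += a * e / m.sum()
        den += b * c / m.sum()
    return num / den

w, d, score = simulate_study()
crude = crude_or(w, d)
adjusted = adjusted_or(w, d, score)
print(f"crude OR: {crude:.2f}, risk-adjusted OR: {adjusted:.2f}")
```

In runs of this sketch, the crude odds ratio sits clearly above one despite mortality being entirely day-blind, and even with the score explaining 90% of severity variance the adjusted odds ratio typically remains above one: an artefactual ‘weekend effect’ produced by selection into admission alone.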

When statistics are misunderstood, it is tempting to blame the audience, but often the idea simply has not been explained well enough. I can’t judge whether little web apps will actually help to explain concepts like this, but hopefully it’s a step in the right direction.


Chris Sampson’s journal round-up for 16th January 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Competition and quality indicators in the health care sector: empirical evidence from the Dutch hospital sector. The European Journal of Health Economics [PubMed] Published 3rd January 2017

In case you weren’t already convinced, this paper presents more evidence to support the notion that (non-price) competition between health care providers is good for quality. The Dutch system is based on compulsory insurance and information on quality of hospital care is made public. One feature of the Dutch health system is that – for many elective hospital services – prices are set following a negotiation between insurers and hospitals. This makes the setting of the study a bit different to some of the European evidence considered to date, because there is scope for competition on price. The study looks at claims data for 3 diagnosis groups – cataract, adenoid/tonsils and bladder tumor – between 2008 and 2011. The authors’ approach to measuring competition is a bit more sophisticated than some other studies’ and is based on actual market share. A variety of quality indicators are used for the 3 diagnosis groups relating mainly to the process of care (rather than health outcomes). Fixed and random effects linear regression models are used to estimate the impact of market share upon quality. Casemix was only controlled for in relation to the proportion of people over 65 and the proportion of women. Where a relationship was found, it tended to be in favour of lower market share (i.e. greater competition) being associated with higher quality. For cataract and for bladder tumor there was a ‘significant’ effect. So in this setting at least, competition seems to be good news for quality. But the effect sizes are neither huge nor certain. A look at each of the quality indicators separately showed plenty of ‘non-significant’ relationships in both directions. While a novelty of this study is the liberalised pricing context, the authors find that there is no relationship between price and quality scores. So even if we believe the competition-favouring results, we needn’t abandon the ‘non-price competition only’ mantra.

Cost-effectiveness thresholds in global health: taking a multisectoral perspective. Value in Health Published 3rd January 2017

We all know health care is not the only – and probably not even the most important – determinant of health. We call ourselves health economists, but most of us are simply health care economists. Rarely do we look beyond the domain of health care. If our goal as researchers is to help improve population health, then we should probably be allocating more of our mental resource beyond health care. The same goes for public spending. Publicly provided education might improve health in a way that the health service would be willing to fund. Likewise, health care might improve educational attainment. This study considers resource allocation decisions using the familiar ‘bookshelf approach’, but goes beyond the unisectoral perspective. The authors discuss a two-sector world of health and education, and demonstrate the ways in which there may be overlaps in costs and outcomes. In short, there are likely to be situations in which the optimal multisectoral decision would be for individual sectors to increase their threshold in order to incorporate the spillover benefits of an intervention in another sector. The authors acknowledge that – in a perfect world – a social-welfare-maximising government would have sufficient information to allocate resources earmarked for specific purposes (e.g. health improvement) across sectors. But this doesn’t happen. Instead the authors propose the use of a cofinancing mechanism, whereby funds would be transferred between sectors as needed. The paper provides an interesting and thought-provoking discussion, and the idea of transferring funds between sectors seems sensible. Personally I think the problem is slightly misspecified. I don’t believe other sectors face thresholds in the same way, because (generally speaking) they do not employ cost-effectiveness analysis. And I’m not sure they should. I’m convinced that for health we need to deviate from welfarism, but I’m not convinced of it for other sectors. So from my perspective it is simply a matter of health vs everything else, and we can incorporate the ‘everything else’ into a cost-effectiveness analysis (with a societal perspective) in monetary terms. Funds can be reallocated as necessary with each budget statement (of which there seem to be a lot nowadays).

Is the Rational Addiction model inherently impossible to estimate? Journal of Health Economics [RePEc] Published 28th December 2016

Saddle point dynamics. Something I’ve never managed to get my head around, but here goes… This paper starts from the problem that empirical tests of the Rational Addiction model serve up wildly variable and often ridiculous (implied) discount rates. That may be part of the reason why economists tend to support the RA model but at the same time believe that it has not been empirically proven. The paper sets out the basis for saddle point dynamics in the context of the RA model, and outlines the nature of the stable and unstable root within the function that determines a person’s consumption over time. The authors employ Monte Carlo estimation of RA-type equations, simulating panel data observations. These simulations demonstrate that the presence of the unstable root may make it very difficult to estimate the coefficients. So even if the RA model can truly represent behaviour, empirical estimation may contradict it. This raises the question of whether the RA model is essentially untestable. A key feature of the argument relates to use of the model where a person’s time horizon is not considered to be infinite. Some non-health economists like to assume it is, which, as the authors wryly note, is not particularly ‘rational’.


Meeting round-up: Health Economists’ Study Group (HESG) Winter 2017

The perfect tonic to the January blues, this year’s winter HESG took us to Birmingham. Continuing the trend of recent years, 100+ health economists gathered in a major chain hotel to discuss 50-odd papers currently in progress in our little corner of academia. The first thing I’ll say is that it was a great conference. It was flawlessly organised, and the team helped create that unmistakable HESG buzz.

As we’ve come to expect from HESG, there was an impressive breadth of subject matter and methodologies on offer across the 4 or 5 parallel sessions throughout each day. From mental health to dentistry, from financial incentive schemes to integrated care, and from small-scale preference elicitation studies to regression analyses of millions of data points – that was just the first day.

I did the usual hat-trick duties of having a paper, giving a discussion and doing a bit of chairing; nothing compared to our own Sam Watson’s herculean effort to tackle ‘the quad’ with two papers accepted. Despite my concern that it might just be a bit too boring, my paper – Systematic review and meta-analysis of health state utility values for diabetic retinopathy: implications for model-based economic evaluation – was well received on the first day. We discussed the reason and basis for a meta-analysis of utility values, and whether it makes more sense to target specific values or adopt a blanket approach. I’m very grateful to my discussant, Anthony Hatswell, and to the rest of the room for their feedback. The other highlight of the first day’s sessions for me was a paper by Uma Thomas that was discussed by Hareth Al-Janabi. The paper tried to tackle the very difficult problem of identifying ‘sophistication’ in the context of present bias and commitment contracts. Some people will be able to anticipate their own time-inconsistent preferences and should therefore demand commitment contracts. But as the discussion testified, identifying sophistication (or even understanding it) is no mean feat.

Day one ended with a very engaging plenary in which 4 speakers – Judith Smith, Matt Sutton, Andrew Street and Paula Lorgelly – discussed their short to medium-term priorities for the NHS. Generally, things looked bleak. Judith discussed the need to ‘get through the winter’, while Matt highlighted the apparent lack of attention given to evidence in the policy-making process. Andy warned us against getting sick in 2017 as the government demands impossible efficiency savings. Paula mentioned the ‘p’ word, attracting (jovial) hisses and boos. But she’s right – we really could do a better job of optimising NHS links with the private sector. The substance of the plenary as a whole was a call to arms. Health economists need to improve their communication to decision makers at all levels of the health service and of government. Numerous suggestions came from the floor and something seemed to be sparked in the room. I suspect we’ll hear more about this in the future.

My discussion on day 2 was of a paper by John Brazier and co, which fortuitously related to a paper that I previously discussed here on the blog. I was badly behaved, going well over time, but there were a lot of issues to grapple with around whether or not we should use ‘patient preferences’ in economic evaluation. The room was packed and provided a lively discussion. It’s a question that we’ll no doubt return to on this blog. I chaired a session in which Yan Feng discussed Liz Camacho’s paper on the suitability of the EQ-5D for people at risk of developing psychosis. The take-home message of the discussion was that we need to stop considering ‘mental illness’ as a single diagnosis, and that while the EQ-5D might be valid in some groups it might not be in others.

A well-attended members’ meeting touched on some of the issues raised in the plenary, around the idea that HESG and its members might do more to influence decision makers and inform interested parties. What’s more, we learnt of some exciting news about HESG’s future that might facilitate action on this. There was the inevitable discussion of HESG’s controversial trip away, with the conclusion being that we probably won’t do it again for a few years (at least). This presents the exciting prospect that next year’s meeting – to be hosted by City University – might just end up in Cleethorpes.

The high quality of discussion was maintained into the last day. For me there was Penny Mullen’s discussion of Jytte Nielsen’s paper describing a novel method by which to elicit people’s preferences for end of life treatment, without taking into account distributional concerns. And everything was wrapped up with champion HESG organiser Phil Kinghorn’s discussion of Padraig Dixon’s paper about the challenges of including carer spillover effects in economic evaluation. Phil gets the prize for inducing the most laughs during a presentation.

Yet another brilliant HESG that left me physically drained and mentally invigorated.