Alastair Canaway’s journal round-up for 27th November 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Elevated mortality among weekend hospital admissions is not associated with adoption of seven day clinical standards. Emergency Medicine Journal [PubMed] Published 8th November 2017

Our esteemed colleagues in Manchester brought more evidence to the seven-day NHS debate (debacle?). Patients who are admitted to hospital in an emergency at weekends have higher mortality rates than those admitted during the week. Despite what our Secretary of State would have you believe, there is an increasing body of evidence suggesting that once case-mix is adequately adjusted for, the ‘weekend effect’ becomes negligible. This paper takes a slightly different angle on the same phenomenon. It harnesses the introduction of four priority clinical standards in England, which aim to reduce the number of deaths associated with the weekend effect. These are: time to first consultant review; access to diagnostics; access to consultant-directed interventions; and ongoing consultant review. The study uses publicly available data on the performance of NHS Trusts against these four priority clinical standards. For the latest financial year (2015/16), Trusts’ weekend effect odds ratios were compared to their achievement against the four clinical standards. Data were available for 123 Trusts. The authors found that adoption of the four clinical standards was not associated with the extent to which mortality was elevated for patients admitted at the weekend, and there was no association between a Trust’s performance against any individual standard and the magnitude of its weekend effect. The authors offer three possible explanations: first, the data quality could be poor; second, the standards themselves may be inadequate for reducing mortality; finally, elevated weekend mortality may be the wrong metric by which to judge the benefits of a seven-day service.
They note that their previous research demonstrated that the weekend effect is driven by admission volumes at the weekend rather than by the number of deaths, so it will not be affected by changes in care provision; this is consistent with the findings of this study. The spectre of opportunity cost looms over the implementation of these standards: although no direct harm may arise from their introduction, resources will be diverted away from potentially more beneficial alternatives, which is a serious concern. The seven-day debate continues.

The effect of level overlap and color coding on attribute non-attendance in discrete choice experiments. Value in Health Published 16th November 2017

I find discrete choice experiments (DCEs) difficult to complete. That may be because I’m not the sharpest knife in the drawer, or it could be due to the nature of DCEs, or a bit of both. For this reason, I prefer best-worst scaling (BWS). BWS aside, DCEs are a common tool in health economics research for assessing and understanding preferences. Given their difficulty, respondents often resort to heuristics, simplifying choice tasks by taking shortcuts, e.g. ignoring one or more attributes (attribute non-attendance) or always selecting the option with the highest level of a certain attribute. This biases the resulting preference estimates. Furthermore, difficulty with comprehension leads to high attrition rates. This RCT sought to examine whether participant dropout and attribute non-attendance could be reduced through two methods: level overlap and colour coding. Level overlap refers to a DCE design whereby, in each choice task, a certain number of attributes are presented at the same level, with different attributes overlapped in different choice tasks. The idea is to prevent dominant-attribute strategies, whereby participants always choose the option with the highest level of one specific attribute, by forcing them to evaluate all attributes. The second method involves colour coding and other visual cues to reduce task complexity, e.g. colour coding levels to make it easy to see which levels are equal. There were five trial arms. The control arm featured no colour coding and no level overlap. The other four arms featured colour coding (two different types were tested), level overlap, or a combination of the two. A nationally representative (Dutch) sample in relation to age, gender, education and geographic region was recruited online. In total, 3394 respondents were recruited and each arm contained over 500 respondents.
Familiarisation and warm-up questions were followed by 21 pairwise choice tasks in randomised order. In the control arm (no overlap, no colour coding), 13.9% of respondents dropped out, and they attended to an average of only 2.1 of the five attributes. Colour coding reduced dropout to 9.6%, with 2.8 attributes attended to. Combining level overlap with intensity colour coding reduced dropout further, to 7.2%, whilst increasing attribute attendance to four out of five. Thus, the combination of level overlap and colour coding nearly halved dropout and doubled attribute attendance within the DCE task. An additional, and perhaps the most important, benefit of the improvement in attribute attendance is that it reduces the need to model potential attribute non-attendance post hoc. Given the difficulty of completing DCEs, it seems colour coding in combination with level overlap should be employed in future DCE designs.
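To make the level-overlap idea concrete, here is a minimal sketch of how a pairwise choice task with overlapped attributes might be generated. This is not the authors’ actual design – real DCEs use statistically efficient experimental designs rather than random draws – and the attribute names and levels are hypothetical.

```python
import random

ATTRIBUTES = ["waiting time", "travel time", "continuity", "cost", "quality"]  # hypothetical
LEVELS = [0, 1, 2]  # three levels per attribute, purely for illustration

def choice_task(task_index, n_overlap=2, rng=random):
    """Build one pairwise choice task in which `n_overlap` attributes are
    overlapped, i.e. shown at the same level in both alternatives.
    Rotating the overlapped subset across tasks stops respondents from
    always trading off on one favourite attribute."""
    n = len(ATTRIBUTES)
    overlapped = {ATTRIBUTES[(task_index + i) % n] for i in range(n_overlap)}
    option_a, option_b = {}, {}
    for attr in ATTRIBUTES:
        level_a = rng.choice(LEVELS)
        if attr in overlapped:
            level_b = level_a  # equal levels: these cells can then be colour coded
        else:
            level_b = rng.choice([l for l in LEVELS if l != level_a])  # force a trade-off
        option_a[attr], option_b[attr] = level_a, level_b
    return option_a, option_b

a, b = choice_task(0)
print(sum(a[k] == b[k] for k in ATTRIBUTES))  # exactly 2 attributes overlap
```

In the trial, colour coding then highlighted the overlapped (equal) cells, so respondents could concentrate on the attributes that actually differ between the two options.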

Evidence on the longitudinal construct validity of major generic and utility measures of health-related quality of life in teens with depression. Quality of Life Research [PubMed] Published 17th November 2017

There appears to be increasing recognition of the prevalence and seriousness of youth mental health problems. Nearly 20% of young people will suffer depression during their adolescent years. To facilitate cost-utility analysis, it is necessary to have a preference-based measure of health-related quality of life (HRQL). However, few measures are designed for use in adolescents. This study sought to examine various existing HRQL measures in relation to their responsiveness for the evaluation of interventions targeting depression in young people. It builds on previous work by Brazier et al. that found the EQ-5D and SF-6D performed adequately for depression in adults. In total, 392 adolescents aged between 13 and 17 years joined the study; 376 of these completed follow-up assessments. Assessments were taken at baseline and 12 weeks, the justification for 12 weeks being that it represented the modal time to clinical change. The following utility instruments were included: the HUI suite, the EQ-5D-3L, the Quality of Well-Being Scale (QWB), and the SF-6D (derived from the SF-36). Other non-preference-based HRQL measures were also included: disease-specific ratings and scales, and the PedsQL 4.0. All (yes, you read that correctly) measures were found to be responsive to change in depression symptomatology over the 12-week follow-up period, and each of the multi-attribute utility instruments was able to detect clinically meaningful change. In terms of comparing the utility instruments, the HUI-3, the QWB and the SF-6D were the most responsive, whilst the EQ-5D-3L was the least responsive. In summary, any of the utility instruments could be used. One disappointment for me was that the CHU-9D was not included in this study – it is one of the few instruments developed by and for children and would have been a very worthy addition. Regardless, this is an informative study for those of us working within the youth mental health sphere.

Credits

Sam Watson’s journal round-up for 13th November 2017


Scaling for economists: lessons from the non-adherence problem in the medical literature. Journal of Economic Perspectives [RePEc] Published November 2017

It has often been said that development economics has been at the vanguard of the use of randomised trials within economics. Other areas of economics have slowly caught up; the internal validity, and causal interpretation, offered by experimental randomised studies can provide reliable estimates of the effects of particular interventions. Health economics, though, has perhaps an even longer history with randomised controlled trials (RCTs), and economic evaluation is now often expected alongside clinical trials. RCTs of physician incentives and payments, investment programmes in child health, and treatment provision in schools all feature as other examples. However, even experimental studies can suffer from the same biases in the data analysis process as observational studies. The multiple decisions made in the data analysis and publication stages of research can lead to over-inflated estimates. Beyond that, the experimental conditions of the trial may not pertain in the real world – the study may lack external validity. The medical literature has long recognised this issue: as many as 50% of patients don’t take the medicines prescribed to them by a doctor. As a result, there has been considerable effort to develop an understanding of, and interventions to remedy, the lack of transferability between RCTs and real-world outcomes. This article summarises that literature and develops lessons for economists, who are only just starting to deal with what the authors term ‘the scaling problem’. For example, there are many reasons people don’t respond to incentives as expected: there are psychological costs to switching; people are hyperbolic discounters and often prefer small short-term gains despite larger long-term costs; and people can fail to understand the implications of sets of complex options. We have also previously discussed the importance of social preferences in decision making.
The key point is that, as policy becomes more and more informed by randomised studies, we need to be careful about over-optimistic effect sizes and start to understand adherence to different policies in the real world. Only then are recommendations reliable.

Estimating the opportunity costs of bed-days. Health Economics [PubMed] Published 6th November 2017

The health economic evaluation of health service delivery interventions is becoming an important issue in health economics. We’ve discussed on many occasions questions surrounding the implementation of seven-day health services in England and Wales, for example. Other service delivery interventions might include changes to staffing levels more generally, medical IT technology, or an incentive to improve hand washing. Key to the evaluation of these interventions is that they are generally targeted at improving quality of care – that is, at reducing preventable harm. The vast majority of patients who experience some sort of preventable harm do not die, but they are likely to experience longer lengths of stay in hospital – consider a person suffering bed sores or a fall while in hospital. We therefore need to be able to value those extra bed-days in order to say what the value of improving hospital quality is. Typically we use reference costs or average accounting costs for the opportunity cost of a bed-day, mainly for pragmatic reasons, but also on the assumption that this is equivalent to the value of the second-best alternative foregone. This requires the assumption that health care markets operate properly, which they almost certainly do not. This paper explores the different ways economists have thought about opportunity costs and applies them to the question of the opportunity cost of a hospital bed-day. This includes definitions such as “Net health benefit forgone for the second-best patient-equivalents”, “Net monetary benefit forgone for the second-best treatment-equivalents”, and “Expenditure incurred + highest net revenue forgone.” The key takeaway is that there is wide variation in the estimated opportunity costs across the different methods and that, given the assumptions underpinning the most widely used methodologies are unlikely to hold, we may be routinely under- or over-valuing the effects of different interventions.
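As a rough illustration of why the choice of definition matters, here is a sketch with entirely hypothetical numbers (the threshold, QALY and cost figures below are assumptions for illustration, not values from the paper) showing how two of the quoted definitions can give very different answers for the same bed-day:

```python
THRESHOLD = 20_000  # assumed willingness to pay per QALY (£)

def nmb_forgone(qalys_second_best, cost_second_best, k=THRESHOLD):
    """'Net monetary benefit forgone for the second-best treatment-equivalents' (£):
    the health gain of the displaced use, monetised at the threshold, net of its cost."""
    return k * qalys_second_best - cost_second_best

def accounting_oc(expenditure, highest_net_revenue_forgone):
    """'Expenditure incurred + highest net revenue forgone' (£):
    an accounting-style valuation of the same bed-day."""
    return expenditure + highest_net_revenue_forgone

# Hypothetical figures for one bed-day
print(nmb_forgone(qalys_second_best=0.01, cost_second_best=150))        # 50.0
print(accounting_oc(expenditure=250, highest_net_revenue_forgone=120))  # 370
```

With these invented inputs, the two definitions value the same bed-day at £50 and £370 – an order-of-magnitude gap of exactly the kind the paper’s takeaway warns about.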

Universal investment in infants and long-run health: evidence from Denmark’s 1937 Home Visiting Program. American Economic Journal: Applied Economics [RePEc] Published October 2017

We have covered a raft of studies that look at the effects of in-utero health on later life outcomes – the so-called fetal origins hypothesis. A smaller, though by no means small, literature has considered the impact of improving infant and childhood health on adult outcomes. While many of these studies consider programmes that occurred decades ago in the US or Europe, their findings are still relevant today, as many countries are grappling with high infant and childhood mortality. In many low-income countries, programmes with community health workers – lay community members given some basic public health training – involving home visits, education, and referral services are being widely adopted. This article looks at the later life impacts of an infant health programme, the Home Visiting Program, implemented in Denmark in the 1930s and 40s. The aim of the programme was to provide home visits to every newborn in each district, providing education on feeding and hygiene practices and monitoring infant progress. The programme was implemented in a trial-based fashion, with different districts adopting it at different times and some remaining as control districts, although selection into treatment and control was not random. Data were obtained on the health outcomes over 1980-2012 of people born 1935-49. In short, the analyses suggest that the programme improved adult longevity and health outcomes, although the effects are small. For example, the authors estimate that the programme reduced hospitalisations between the ages of 45 and 64 by half a day, and that 2 to 6 more people per 1,000 survived past 60 years of age. However, these effect sizes may be large enough to justify what is likely a reasonably low-cost programme when scaled across the population.


Hawking is right, Jeremy Hunt does egregiously cherry pick the evidence

I’m beginning to think Jeremy Hunt doesn’t actually care what the evidence says on the weekend effect. Last week, renowned physicist Stephen Hawking criticised Hunt for ‘cherry picking’ evidence with regard to the ‘weekend effect’: the observation that patients admitted at the weekend are more likely to die than their counterparts admitted on a weekday. Hunt responded by doubling down on his claims.

Some people have questioned Hawking’s credentials to speak on the topic beyond being a user of the NHS. But it has taken a respected public figure to speak out to elicit a response from the Secretary of State for Health, and that should be welcomed. It remains the case though that a multitude of experts do continue to be ignored. Even the oft-quoted Freemantle paper is partially ignored where it notes of the ‘excess’ weekend deaths, “to assume that [these deaths] are avoidable would be rash and misleading.”

We produced a simple tool to demonstrate how weekend effect studies might estimate an increased risk of mortality associated with weekend admissions even in the case of no difference in care quality. However, the causal model underlying these arguments is not always obvious. So here it is:

[Figure: a simple model of the effect of the weekend on patient health outcomes. The dashed line represents unobserved effects.]

So what do we know about the weekend effect?

  1. The weekend effect exists. A multitude of studies have observed that patients admitted at the weekend are more likely to die than those admitted on a weekday. This amounts to having shown that E(Y|W,S) \neq E(Y|W',S). As our causal model demonstrates, being admitted is correlated with health and, importantly, with the day of the week. So this is not the same as saying that the risk of adverse clinical outcomes differs by day of the week once you take into account the propensity for admission; we cannot say E(Y|W) \neq E(Y|W'). Nor does this evidence imply that care quality differs at the weekend, E(Q|W) \neq E(Q|W'). In fact, the evidence only implies differences in care quality if the propensity to be admitted is independent of (unobserved) health status, i.e. Pr(S|U,X) = Pr(S|X) (or if health outcomes are uncorrelated with health status, which is definitely not the case!).
  2. Admissions are different at the weekend. Fewer patients are admitted at the weekend and those that are admitted are on average more severely unwell. Evidence suggests that the better patient severity is controlled for, the smaller the estimated weekend effect. Weekend effect estimates also diminish in models that account for the selection mechanism.
  3. There is some evidence that care quality may be worse at the weekend (at least in the United States), i.e. E(Q|W) \neq E(Q|W'), although this has not been established in the UK (we’re currently investigating it!).
  4. Staffing levels, particularly specialist to patient ratios, are different at the weekend, E(X|W) \neq E(X|W').
  5. There is little evidence to suggest how staffing levels and care quality are related. While the relationship seems evident prima facie, its extent is not well understood; for example, we might expect diminishing returns to increased staffing levels.
  6. There is a reasonable amount of evidence on the impact of care quality (preventable errors and adverse events) on patient health outcomes.
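
Points 1 and 2 can be illustrated with a toy simulation. This is a sketch of the selection mechanism with entirely hypothetical numbers, not our published tool: mortality depends only on a latent severity U, care quality is identical on every day, but the admission propensity Pr(S|U) is stricter at the weekend. A crude comparison of in-hospital mortality then shows a ‘weekend effect’ anyway:

```python
import random

def simulate(n=200_000, seed=1):
    """Mortality depends only on latent severity U; care is identical every
    day. Weekend admission is more selective, so admitted weekend patients
    are sicker on average, and crude in-hospital mortality is higher."""
    rng = random.Random(seed)
    tally = {"weekday": [0, 0], "weekend": [0, 0]}  # [deaths, admissions]
    for _ in range(n):
        day = "weekend" if rng.random() < 2 / 7 else "weekday"
        severity = rng.random()                        # latent health status U
        threshold = 0.6 if day == "weekend" else 0.4   # admission propensity Pr(S|U)
        if severity > threshold:                       # patient is admitted
            died = rng.random() < 0.1 * severity       # outcome depends on U only
            tally[day][0] += died
            tally[day][1] += 1
    return {d: deaths / admitted for d, (deaths, admitted) in tally.items()}

rates = simulate()
print(rates)  # weekend mortality exceeds weekday mortality despite identical care
```

This is exactly the E(Y|W,S) \neq E(Y|W',S) comparison in point 1: conditioning on admission confounds the day of the week with severity, so elevated crude weekend mortality tells us nothing by itself about care quality.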

But what are we actually interested in from a policy perspective? Do we actually care about the weekend per se? I would say no; we care that there is potentially a lapse in care quality. So it’s a two-part question: (i) how does care quality (and hence avoidable patient harm) differ at the weekend, E(Q|W) - E(Q|W') = ?; and (ii) what effect does this have on patient outcomes, E(Y|Q) = ? The first question tells us to what extent policy may effect change, and the second gives us a way of valuing that change; yet the vast majority of studies in the area address neither. Despite there being a number of publicly funded research projects looking at these questions right now, it is the studies that are not useful for policy that keep being quoted by those with the power to make change.

Hawking is right, Jeremy Hunt has egregiously cherry picked and misrepresented the evidence, as has been pointed out again and again and again and again and … One begins to wonder if there isn’t some motive other than ensuring long-run efficiency and equity in the health service.
