# Sam Watson’s journal round-up for 2nd October 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The path to longer and healthier lives for all Africans by 2030: the Lancet Commission on the future of health in sub-Saharan Africa. The Lancet [PubMed] Published 13th September 2017

The relationship of health insurance and mortality: is lack of insurance deadly? Annals of Internal Medicine [PubMed] Published 19th September 2017

Smoking, expectations, and health: a dynamic stochastic model of lifetime smoking behavior. Journal of Political Economy [RePEc] Published 24th August 2017

Credits

# Hawking is right, Jeremy Hunt does egregiously cherry pick the evidence

I’m beginning to think Jeremy Hunt doesn’t actually care what the evidence says on the weekend effect. Last week, renowned physicist Stephen Hawking criticised Hunt for ‘cherry picking’ evidence with regard to the ‘weekend effect’: that patients admitted at the weekend are observed to be more likely to die than their counterparts admitted on a weekday. Hunt responded by doubling down on his claims.

Some people have questioned Hawking’s credentials to speak on the topic beyond being a user of the NHS. But it has taken a respected public figure to speak out to elicit a response from the Secretary of State for Health, and that should be welcomed. It remains the case though that a multitude of experts do continue to be ignored. Even the oft-quoted Freemantle paper is partially ignored where it notes of the ‘excess’ weekend deaths, “to assume that [these deaths] are avoidable would be rash and misleading.”

We produced a simple tool to demonstrate how weekend effect studies might estimate an increased risk of mortality associated with weekend admissions even in the case of no difference in care quality. However, the causal model underlying these arguments is not always obvious. So here it is:

*A simple model of the effect of the weekend on patient health outcomes. The dashed line represents unobserved effects.*

So what do we know about the weekend effect?

1. The weekend effect exists. A multitude of studies have observed that patients admitted at the weekend are more likely to die than those admitted on a weekday. This amounts to having shown that $E(Y|W,S) \neq E(Y|W',S)$. As our causal model demonstrates, being admitted is correlated with health and, importantly, with the day of the week. So this is not the same as saying that the risk of adverse clinical outcomes differs by day of the week once the propensity for admission is taken into account; we cannot say that $E(Y|W) \neq E(Y|W')$. Nor does this evidence imply that care quality differs at the weekend, $E(Q|W) \neq E(Q|W')$. In fact, the evidence only implies differences in care quality if the propensity to be admitted is independent of (unobserved) health status, i.e. $Pr(S|U,X) = Pr(S|X)$ (or if health outcomes are uncorrelated with health status, which is definitely not the case!).
2. Admissions are different at the weekend. Fewer patients are admitted at the weekend and those that are admitted are on average more severely unwell. Evidence suggests that the better patient severity is controlled for, the smaller the estimated weekend effect. Weekend effect estimates also diminish in models that account for the selection mechanism.
3. There is some evidence that care quality may be worse at the weekend (at least in the United States), i.e. $E(Q|W) \neq E(Q|W')$, although this has not been established in the UK (we’re currently investigating it!).
4. Staffing levels, particularly specialist to patient ratios, are different at the weekend, $E(X|W) \neq E(X|W')$.
5. There is little evidence to suggest how staffing levels and care quality are related. While the relationship seems evident prima facie, its extent is not well understood; for example, we might expect diminishing returns to increased staffing levels.
6. There is a reasonable amount of evidence on the impact of care quality (preventable errors and adverse events) on patient health outcomes.
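The selection mechanism in point 2 is enough on its own to generate an apparent weekend effect. Here is a minimal simulation sketch of the causal model above (all parameters are hypothetical, chosen purely for illustration): care quality is identical on every day, mortality depends only on unobserved severity $U$, but a higher admission threshold at the weekend makes weekend admissions sicker on average.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000  # hypothetical population

# Unobserved severity U and an indicator for weekend (2 days in 7)
u = rng.normal(size=n)
weekend = rng.random(n) < 2 / 7

# Selection into admission: the threshold is higher at the weekend,
# so the patients who do get admitted are sicker on average
threshold = np.where(weekend, 1.0, 0.5)
admitted = u > threshold

# Mortality depends ONLY on severity -- care quality is identical
# on every day of the week by construction
died = rng.random(n) < 1 / (1 + np.exp(-(u - 2)))

weekend_rate = died[admitted & weekend].mean()
weekday_rate = died[admitted & ~weekend].mean()
print(weekend_rate, weekday_rate)  # weekend rate is higher
```

By construction $E(Q|W) = E(Q|W')$ here, yet comparing mortality among those admitted, $E(Y|W,S)$ against $E(Y|W',S)$, reproduces the familiar gap.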

But what are we actually interested in from a policy perspective? Do we actually care about the weekend per se? I would say no: we care that there is potentially a lapse in care quality. So it’s a two-part question: (i) how does care quality (and hence avoidable patient harm) differ at the weekend, $E(Q|W) - E(Q|W') = ?$; and (ii) what effect does this have on patient outcomes, $E(Y|Q) = ?$. The first question tells us to what extent policy may effect change, and the second gives us a way of valuing that change. Yet the vast majority of studies in the area address neither. Despite there being a number of publicly funded research projects looking at these questions right now, it’s the studies that are not useful for policy that keep being quoted by those with the power to make change.

Hawking is right, Jeremy Hunt has egregiously cherry picked and misrepresented the evidence, as has been pointed out again and again and again and again and … One begins to wonder if there isn’t some motive other than ensuring long run efficiency and equity in the health service.


# Chris Sampson’s journal round-up for 22nd May 2017


The effect of health care expenditure on patient outcomes: evidence from English neonatal care. Health Economics [PubMed] Published 12th May 2017

Recently, people have started trying to identify the opportunity cost in the NHS by assessing the health gains associated with current spending. Studies have thrown up a wide range of values in different clinical areas, including in neonatal care. This study uses individual-level data for infants treated in 32 neonatal intensive care units from 2009 to 2013, along with the NHS Reference Cost for an intensive care cot day. A model is constructed to assess the impact of changes in expenditure, controlling for a variety of variables available in the National Neonatal Research Database. Two outcomes are considered: the in-hospital mortality rate and morbidity-free survival. The main finding is that a £100 increase in the cost per cot day is associated with a reduction in the mortality rate of 0.36 percentage points. This translates into a marginal cost per infant life saved of around £420,000. Assuming an average life expectancy of 81 years, this equates to a present value cost per life year gained of £15,200. Reductions in the mortality rate are associated with similar increases in morbidity. The estimated cost contradicts a much higher estimate presented in the Claxton et al. modern classic on searching for the threshold.
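The headline arithmetic can be checked on the back of an envelope. A minimal sketch, taking the marginal cost per life saved and life expectancy quoted above, and assuming (my assumption, not stated in the paper summary) the conventional 3.5% annual discount rate for converting life years into present-value terms:

```python
cost_per_life = 420_000   # marginal cost per infant life saved (GBP), from the paper
life_years = 81           # assumed average life expectancy at birth
r = 0.035                 # annual discount rate (my assumption; not stated above)

# Present value of an annuity of 1 life year per year for 81 years
annuity_factor = (1 - (1 + r) ** -life_years) / r
cost_per_life_year = cost_per_life / annuity_factor
print(round(cost_per_life_year))  # roughly 15,700 -- close to the reported value
```

The small gap from the reported £15,200 could reflect rounding of the £420,000 figure or a slightly different discounting convention.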

A comparison of four software programs for implementing decision analytic cost-effectiveness models. PharmacoEconomics [PubMed] Published 9th May 2017

Markov models: TreeAge vs Excel vs R vs MATLAB. This paper compares the alternative programs in terms of transparency and validation, the associated learning curve, capability, processing speed and cost. A benchmarking assessment is conducted using a previously published model (originally developed in TreeAge). Excel is rightly identified as the ‘ubiquitous workhorse’ of cost-effectiveness modelling. It’s transparent in theory, but in practice can include cell relations that are difficult to disentangle. TreeAge, on the other hand, includes valuable features to aid model transparency and validation, though the workings of the software itself are not always clear. Being based on programming languages, MATLAB and R may be entirely transparent but challenging to validate. The authors assert that TreeAge is the easiest to learn due to its graphical nature and the availability of training options. Save for complex VBA, Excel is also simple to learn. R and MATLAB are equivalently more difficult to learn, but clearly worth the time saving for anybody expecting to work on multiple complex modelling studies. R and MATLAB both come top in terms of capability, with Excel falling behind due to having fewer statistical facilities. TreeAge has clearly defined capabilities limited to the features that the company chooses to support. MATLAB and R were both able to complete 10,000 simulations in a matter of seconds, while Excel took 15 minutes and TreeAge took over 4 hours. For a value of information analysis requiring 1000 runs, this could translate into 6 months for TreeAge! MATLAB has some advantage over R in processing time that might make its cost (\$500 for academics) worthwhile to some. Excel and TreeAge are both identified as particularly useful as educational tools for people getting to grips with the concepts of decision modelling. Though the take-home message for me is that I really need to learn R.
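To make the processing-speed comparison concrete, here is a minimal probabilistic Markov cohort model in Python/NumPy, standing in for the R and MATLAB implementations discussed above (the three-state structure, beta priors, and utilities are all hypothetical, not from the paper's benchmark model). The point is that a vectorised array language advances all PSA iterations simultaneously, which helps explain why thousands of runs complete in seconds:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_cycles = 10_000, 40

# Hypothetical three-state model: Well -> Sick -> Dead.
# One transition-probability draw per PSA iteration.
p_ws = rng.beta(20, 80, n_sims)   # P(Well -> Sick)
p_sd = rng.beta(10, 90, n_sims)   # P(Sick -> Dead)

state = np.zeros((n_sims, 3))
state[:, 0] = 1.0                      # whole cohort starts Well
utils = np.array([1.0, 0.6, 0.0])      # illustrative utility per state
qalys = np.zeros(n_sims)

for _ in range(n_cycles):              # all 10,000 runs advance at once
    well, sick, dead = state.T
    state = np.column_stack([
        well * (1 - p_ws),
        well * p_ws + sick * (1 - p_sd),
        dead + sick * p_sd,
    ])
    qalys += state @ utils             # accumulate QALYs per run

print(qalys.mean())  # mean QALYs across PSA runs
```

Evaluating iterations one at a time rather than as a single array operation, as a spreadsheet or GUI tool effectively must, is a large part of how the same job stretches into minutes or hours.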

Economic evaluation of factorial randomised controlled trials: challenges, methods and recommendations. Statistics in Medicine [PubMed] Published 3rd May 2017

Factorial trials randomise participants to at least 2 alternative levels (for example, different doses) of at least 2 alternative treatments (possibly in combination). Very little has been written about how economic evaluations ought to be conducted alongside such trials. This study starts by outlining some key challenges for economic evaluation in this context. First, there may be interactions between combined therapies, which might exist for costs and QALYs even if not for the primary clinical endpoint. Second, transformation of the data may not be straightforward; for example, it may not be possible to disaggregate a net benefit estimation into its components using alternative transformations. Third, regression analysis of factorial trials may be tricky for the purpose of constructing CEACs and conducting value of information analysis. Finally, defining the study question may not be simple. The authors simulate a 2×2 factorial trial (0 vs A vs B vs A+B) to demonstrate these challenges. The first analysis compares A and B against placebo separately in what’s known as an ‘at-the-margins’ approach. Both A and B are shown to be cost-effective, with the implication that A+B should be provided. The next analysis uses regression, with interaction terms, demonstrating that these are unlikely to be statistically significant for costs or net benefit. ‘Inside-the-table’ analysis is used to evaluate the 4 alternative treatments separately, with an associated loss in statistical power. The findings of this analysis contradict the findings of the at-the-margins analysis. A variety of regression-based analyses is presented, with the discussion focussed on the variability in the estimated standard errors and the implications of this for value of information analysis. The authors then go on to present their conception of the ‘opportunity cost of ignoring interactions’ as a new basis for value of information analysis.
A set of 14 recommendations is provided for people conducting economic evaluations alongside factorial trials, which could be used as a bolt-on to CHEERS and CONSORT guidelines.
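The at-the-margins versus interaction-term issue can be illustrated in a few lines. A sketch of a 2×2 factorial net-benefit regression (the trial size, effect sizes, and £20,000 willingness-to-pay threshold are all my illustrative assumptions, not the paper's simulation):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200  # patients per arm (hypothetical)

# 2x2 factorial arms: control, A, B, A+B; true effects are additive
# (no interaction), with illustrative QALY and cost increments
a = np.repeat([0, 1, 0, 1], n)
b = np.repeat([0, 0, 1, 1], n)
qalys = 0.7 + 0.05 * a + 0.04 * b + rng.normal(0, 0.1, 4 * n)
costs = 1000 + 300 * a + 250 * b + rng.normal(0, 400, 4 * n)

wtp = 20_000  # willingness to pay per QALY (assumption)
nb = wtp * qalys - costs  # individual net monetary benefit

# Net-benefit regression with an interaction term, via OLS
X = np.column_stack([np.ones(4 * n), a, b, a * b])
coef, *_ = np.linalg.lstsq(X, nb, rcond=None)
print(coef)  # [intercept, A effect, B effect, A-B interaction]
```

With additive true effects the interaction coefficient is pure noise, but its standard error still feeds into any value of information calculation built on this regression, which is the variability the authors highlight.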
