Data sharing and the cost of error

The world’s highest impact factor medical journal, the New England Journal of Medicine (NEJM), seems to have been doing some soul searching. After publishing an editorial early in 2016 insinuating that researchers requesting trial data for re-analysis were “research parasites”, it has released a series of articles on the topic of data sharing. Four articles were published in August: two in favour and two less so. This month another three articles appear on the same topic, and the journal is sponsoring a challenge to re-analyse data from a previous trial. We reported earlier in the year on a series of concerns at the NEJM, and these new steps are all welcome moves to address them. However, while the articles consider questions of fairness in sharing data from large, long, and difficult trials, little has been said about the potential costs to society of unremedied errors in data analysis. The costs of not sharing data can be large, as the long-running saga over the controversial PACE trial illustrates.

The PACE trial was a randomised controlled trial assessing the benefits of a number of treatments for chronic fatigue syndrome, including graded exercise therapy (GET) and cognitive behavioural therapy (CBT). After publication of the trial results in 2011, a number of concerns were raised about the conduct of the trial, its analysis, and its reporting, including a change in the definitions of ‘improvement’ and ‘recovery’ mid-way through the trial. Other researchers sought access to the trial data for re-analysis, but their requests were refused with what a judge later described as ‘wild speculations’. The data were finally released and recently re-analysed. The new analysis revealed what many had suspected: the interventions in the trial had little benefit. Nevertheless, the recommended treatments for chronic fatigue syndrome had already changed as a result of the trial. (STAT has the whole story here.)

A cost-effectiveness analysis was published alongside the PACE trial. The results showed that CBT was cost-effective compared to standard care, as was GET. Quality of life was measured in the trial using the EQ-5D, and costs were also recorded, making the calculation of incremental cost-effectiveness ratios (ICERs) straightforward; a minimal sketch of the calculation follows the table. Costs were higher for all the intervention groups. The table reporting QALY outcomes is reproduced below:

[Table 5 from the PACE cost-effectiveness analysis (journal.pone.0040808.t005), reporting QALYs by treatment group]
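For readers unfamiliar with the mechanics, here is a minimal sketch of an ICER calculation. The costs and QALYs are invented placeholders, not the trial’s estimates; only the 0.05 QALY difference echoes the reported CBT versus SMC contrast.

```python
# Minimal sketch of an incremental cost-effectiveness ratio (ICER).
# All figures are illustrative placeholders, not the PACE trial's estimates.

def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost per QALY gained: delta-cost / delta-QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical per-patient means for an intervention vs. standard care
cost_cbt, cost_smc = 2_000.0, 1_000.0   # assumed mean costs (GBP)
qaly_cbt, qaly_smc = 0.60, 0.55         # assumed mean QALYs over follow-up

print(f"ICER: £{icer(cost_cbt, cost_smc, qaly_cbt, qaly_smc):,.0f} per QALY")
# -> £20,000 per QALY, right at the conventional NHS threshold
```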

At face value the analysis seems reasonable. But in light of the problems with the trial, including that neither the objective measures of patient health, such as walking and step tests, nor labour market outcomes showed much sign of improvement or recovery, these data seem less convincing. In particular, the statistically significant difference in QALYs – “After controlling for baseline utility, the difference between CBT and SMC was 0.05 (95% CI 0.01 to 0.09)” – may well just be a type I error. A re-analysis of these data is warranted (although gaining access may still prove difficult).
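To see how fragile that result is, the implied standard error and p-value can be backed out from the reported confidence interval, assuming approximate normality:

```python
import math

# Reported CBT vs SMC QALY difference: 0.05 (95% CI 0.01 to 0.09)
diff, lo, hi = 0.05, 0.01, 0.09
se = (hi - lo) / (2 * 1.96)       # a normal 95% CI spans 2 * 1.96 * SE
z = diff / se                     # ~2.45
p = math.erfc(z / math.sqrt(2))   # two-sided normal p-value, ~0.014

print(f"SE ≈ {se:.3f}, z ≈ {z:.2f}, p ≈ {p:.3f}")
```

A p-value of around 0.014 offers limited reassurance in a trial with many outcomes and mid-trial changes to outcome definitions.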

If there actually was no real benefit from the new treatments, then the money spent on them displaces care that would have generated health elsewhere in the system. If we assume the NHS generates health at £20,000/QALY at the margin (contentious, I know!), then the health service loses 0.05 QALYs for each patient with chronic fatigue syndrome put on the new treatment. The prevalence of chronic fatigue syndrome may be as high as 0.2% among adults in England, which represents approximately 76,000 people. If all of these patients were switched to new, ineffective treatments, the opportunity cost could be as much as 3,800 QALYs.
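The back-of-the-envelope arithmetic, with each input taken from the text (the threshold and prevalence figures are the assumptions flagged above):

```python
# Opportunity cost of an ineffective treatment, using the figures in the text
threshold = 20_000            # assumed NHS marginal productivity (GBP/QALY)
qalys_per_patient = 0.05      # the (possibly spurious) QALY gain per patient
patients = 76_000             # ~0.2% prevalence among adults in England

total_qalys = qalys_per_patient * patients
print(f"QALYs forgone: {total_qalys:,.0f}")                            # 3,800
print(f"Monetised at the threshold: £{total_qalys * threshold:,.0f}")  # £76,000,000
```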

The key point is that analytical errors have costs if the analyses go on to change recommended treatments. And when aggregated over a national health service these costs can become quite substantial. Researchers may worry about publication prestige or fairness in using other people’s hard-won data, but the bigger issue is the wider cost of letting an error go unchallenged.

Sam Watson’s journal round-up for 18th April 2016

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Life and growth. Journal of Political Economy. Published 8th March 2016

The economic evaluation of new interventions or policies often requires the discounting of benefits occurring in the future; NICE recommends a discount rate of 3.5%. One of the reasons for discounting is diminishing marginal utility of consumption: future generations will have greater consumption, and marginal utility is decreasing in consumption. A positive social discount rate for health effects is therefore implied if the consumption value of health (the amount of consumption equivalent to “one unit” of health) is decreasing. However, as previous authors have noted, the consumption value of health is likely to be increasing. In this paper Charles Jones fully develops this idea. He shows that, under standard assumptions about people’s preferences, the value of life grows faster than consumption; in some cases this drives the optimal rate of consumption growth to zero. As a result, interventions such as medical technologies that prolong life are valued much more highly than consumption. Perhaps a negative discount rate is required after all? More on this in a future blog…
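A stylised way to see the implication for discounting, in the spirit of this literature rather than Jones’s exact model: if the consumption value of a unit of health grows at rate $g_v$ while consumption benefits are discounted at rate $r_c$, the implied discount rate for health is the difference between the two.

```latex
% A unit of health at time t has consumption value v_0 e^{g_v t}; discounting
% consumption at rate r_c gives its present value, and hence the implied
% discount rate for health, r_h:
\[
  \mathrm{PV}(t) = v_0\, e^{g_v t} e^{-r_c t} = v_0\, e^{-(r_c - g_v)\,t},
  \qquad\text{so}\qquad r_h = r_c - g_v .
\]
% With r_c = 3.5\% (the NICE rate) and g_v > 3.5\%, r_h is negative:
% later life-years are worth more, not less.
```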

The returns to medical school: evidence from admission lotteries. AEJ: Applied Economics [RePEc] Published April 2016

The remuneration of doctors is highly topical in the UK, where junior doctors are engaged in a series of strikes in response to contract changes imposed by the government. This paper offers some insight into the labour market returns to medical school entry in the Netherlands, where entry into medical school is decided by lottery. Using this randomisation mechanism, the authors compare the earnings of those who got in with those who did not. Some people get into medical school but do not finish, and some who miss out reapply the next year; the lottery result is therefore used as an instrument for entry into the medical profession. They find that doctors earn 20% more than they would have in their next-best profession, a premium that rises to 50% after twenty years. The authors argue that this could be interpreted as meaning that state subsidies for medical education are too high. Whether this has any bearing on the debates in the UK I’ll leave to the reader.
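As a toy illustration of the identification strategy, here is a Wald (instrumental variable) estimate on simulated data; the sample size, compliance rates, and the 20% ‘true’ return are all invented, not the paper’s Dutch registry data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.binomial(1, 0.5, n)       # lottery win (the instrument)

# Imperfect compliance: some winners never enrol, some losers get in later
d = np.where(z == 1, rng.binomial(1, 0.9, n), rng.binomial(1, 0.2, n))
log_wage = 10.0 + 0.20 * d + rng.normal(0, 0.5, n)   # true return: 20%

# Wald estimator: reduced-form (intention-to-treat) effect / first stage
itt = log_wage[z == 1].mean() - log_wage[z == 0].mean()
first_stage = d[z == 1].mean() - d[z == 0].mean()
print(f"IV estimate of the return: {itt / first_stage:.3f}")  # ~0.20
```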

How to translate clinical trial results into gain in healthy life expectancy for individual patients. BMJ [PubMed] Published 30th March 2016

I covered another paper in the BMJ’s Research Methods and Reporting series a couple of weeks ago. I have a high regard for these papers, which translate important methodological ideas for a more general audience, improving both their adoption and their understanding in the medical literature. This time the paper is on translating results from clinical trials, usually reported as a change in the risk of an adverse outcome, into changes in healthy life expectancy. The paper begins by saying that cost-effectiveness studies use healthy life expectancy outcomes at the group level, by which I think they mean these studies look at averages. This paper discusses how the same translation can be done at the individual patient level, though ultimately they are just using subgroup averages (say, 48-year-old women). Nevertheless, these distinctions can help with the communication of potentially complex statistical ideas, and this series sets the BMJ apart from the other big four medical journals, which rarely publish similar papers.
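As a crude sketch of the kind of translation the paper describes, far simpler than its actual method and with invented inputs throughout: apply a treatment hazard ratio to an assumed baseline survival curve and compare the areas under the two curves.

```python
import numpy as np

# Invented inputs: a flat 2% annual mortality risk and a treatment with
# hazard ratio 0.8, evaluated over a 40-year horizon.
years = np.arange(1, 41)
annual_risk = 0.02                # assumed baseline annual mortality risk
hr = 0.8                          # assumed treatment hazard ratio

surv_control = (1 - annual_risk) ** years
surv_treated = (1 - annual_risk * hr) ** years

# Life expectancy is (approximately) the area under the survival curve
le_gain = surv_treated.sum() - surv_control.sum()
print(f"Life expectancy gain ≈ {le_gain:.2f} years")   # ~2 years
```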