Data sharing and the cost of error

The world’s highest impact factor medical journal, the New England Journal of Medicine (NEJM), seems to have been doing some soul searching. After publishing an editorial early in 2016 insinuating that researchers requesting trial data for re-analysis were “research parasites”, the journal has released a series of articles on the topic of data sharing. Four articles were published in August: two in favour and two less so. This month another three articles have been published on the same topic. And the journal is sponsoring a challenge to re-analyse data from a previous trial. We reported earlier in the year on a series of concerns at the NEJM, and these new steps are all welcome moves to address those challenges. However, while the articles consider questions of fairness about sharing data from large, long, and difficult trials, little has been said about the potential costs to society of unremedied errors in data analysis. The costs of not sharing data can be large, as the long-running saga over the controversial PACE trial illustrates.

The PACE trial was a randomised, controlled trial assessing the benefits of a number of treatments for chronic fatigue syndrome, including graded exercise therapy and cognitive behavioural therapy. However, after publication of the trial results in 2011, a number of concerns were raised about the conduct of the trial, its analysis, and its reporting. These concerns included a change in the definitions of ‘improvement’ and ‘recovery’ mid-way through the trial. Other researchers sought access to the trial data for re-analysis, but such requests were rebuffed with what a judge later described as ‘wild speculations’. The data were finally released and recently re-analysed. The new analysis revealed what many suspected: that the interventions in the trial had little benefit. Nevertheless, the recommended treatments for chronic fatigue syndrome had changed as a result of the trial. (STAT has the whole story here.)

A cost-effectiveness analysis was published alongside the PACE trial. The results showed that cognitive behavioural therapy (CBT) was cost-effective compared to standard care, as was graded exercise therapy (GET). Quality of life was measured in the trial using the EQ-5D, and costs were also recorded, making calculation of incremental cost-effectiveness ratios straightforward (a sketch of the calculation follows the table below). Costs were higher for all the intervention groups. The table reporting QALY outcomes is reproduced below:

[Table 5 from the PACE cost-effectiveness analysis (PLOS ONE, journal.pone.0040808): QALYs by treatment group]
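
To make the mechanics concrete, here is a minimal sketch of the ICER calculation in Python. All of the numbers are purely illustrative; they are not the trial’s actual costs or QALYs.

```python
# Minimal ICER sketch. The figures below are hypothetical examples,
# not the PACE trial's actual costs or QALYs.

def icer(cost_new, cost_standard, qalys_new, qalys_standard):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_standard) / (qalys_new - qalys_standard)

# Hypothetical: the new therapy costs £1,000 more per patient and
# yields 0.05 additional QALYs.
example = icer(cost_new=2_000, cost_standard=1_000,
               qalys_new=0.60, qalys_standard=0.55)
print(f"ICER: £{example:,.0f} per QALY")  # ICER: £20,000 per QALY
```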

At face value the analysis seems reasonable. But the trial’s problems make these data less convincing: none of the objective measures of patient health, such as walking tests and step tests, nor the labour market outcomes, showed much sign of improvement or recovery. In particular, the statistically significant difference in QALYs – “After controlling for baseline utility, the difference between CBT and SMC was 0.05 (95% CI 0.01 to 0.09)” – may well just be a type I error. A re-analysis of these data is warranted (although gaining access may still prove difficult).
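
As a rough indication of how fragile that finding might be, we can back out the implied test statistic from the reported interval, assuming a normal approximation (my assumption; the paper’s exact model is not reproduced here). The result clears the conventional 5% threshold, but a single false positive among a trial’s many comparisons would not be surprising.

```python
# Back-calculate the implied z statistic and p-value from the reported
# QALY difference and its 95% CI, assuming a normal approximation.
from statistics import NormalDist

diff = 0.05                 # reported QALY difference, CBT vs. SMC
lower, upper = 0.01, 0.09   # reported 95% confidence interval

se = (upper - lower) / (2 * 1.96)    # implied standard error
z = diff / se                        # implied z statistic
p = 2 * (1 - NormalDist().cdf(z))    # implied two-sided p-value

print(f"SE = {se:.3f}, z = {z:.2f}, p = {p:.3f}")  # SE = 0.020, z = 2.45, p = 0.014
```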

If there was actually no real benefit from the new treatments, then health benefits have been forgone elsewhere in the healthcare system. If we assume the NHS achieves £20,000/QALY (contentious, I know!), then the health service loses 0.05 QALYs for each patient with chronic fatigue syndrome put on the new treatment. The prevalence of chronic fatigue syndrome may be as high as 0.2% among adults in England, which represents approximately 76,000 people. If all of these patients were switched to the new, ineffective treatments, the opportunity cost could be as much as 3,800 QALYs.
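
The back-of-the-envelope arithmetic is easy to reproduce (a sketch using the figures quoted above; valuing the forgone QALYs in money at the assumed threshold is my own extension):

```python
# Back-of-the-envelope opportunity cost, using the figures quoted above.
qaly_loss_per_patient = 0.05   # the (possibly spurious) QALY difference
patients = 76_000              # ~0.2% of adults in England
threshold = 20_000             # assumed NHS productivity, £ per QALY

qalys_forgone = qaly_loss_per_patient * patients
print(f"QALYs potentially forgone: {qalys_forgone:,.0f}")             # 3,800
print(f"Valued at the threshold: £{qalys_forgone * threshold:,.0f}")  # £76,000,000
```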

The key point is that analytical errors have costs if the analyses go on to change recommended treatments. And when averaged over a national health service, these costs can become substantial. Researchers may worry about publication prestige or fairness in using other people’s hard-won data, but the bigger issue is the wider cost to society of letting an error go unchallenged.

Health economics journals and negative findings

Recently, a number of health economics journals (henceforth HEJs) co-signed a statement about the publication of negative findings:

The Editors of the health economics journals named below believe that well-designed, well-executed empirical studies that address interesting and important problems in health economics, utilize appropriate data in a sound and creative manner, and deploy innovative conceptual and methodological approaches compatible with each journal’s distinctive emphasis and scope have potential scientific and publication merit regardless of whether such studies’ empirical findings do or do not reject null hypotheses that may be specified.

There was an outpouring of support for this statement, on Twitter at least. Big deal. Welcome to the 21st century, health economics. Thanks for agreeing not to actively undermine scientific discourse. Don’t get me wrong, it is of course a good thing that this has been published. Inter-journal agreements are rare and valuable things. But is there really anything to celebrate?

Firstly, the statement has no real substance. The HEJs apparently wish to encourage the submission of negative findings, which is nice, but no real commitments are made. The final sentence reads, “As always, the ultimate responsibility for acceptance or rejection of a submission rests with each journal’s Editors.” So it’s business as usual.

Secondly, one has to wonder whether this is an admission that at least some of the HEJs have until now been refusing to publish negative findings. If they have, then this statement is somewhat shameful; if they haven’t, then it is just hot air.

Thirdly, is publication bias really a problem in the health economics literature? Generally I think health economists – or those publishing in health economics journals – are less committed to any intervention that they might be evaluating, and less rests on a ‘positive’ result. When it comes to econometric studies or issues specific to our sub-discipline I see plenty of contradictory and non-significant findings being published in the HEJs.

Finally, and most importantly for me, this highlights what I think is a great shame for the health economics literature. We exist mainly at the nexus between medical research and economics research. Medical journals have been at the forefront of publishing practice in a number of respects: gold open access, transparency, and systematic reporting of methods. Meanwhile, the field of economics is a leading light in green open access, with working papers published at RePEc, and journals like the American Economic Review are committed to making data available for replication studies. Yet health economics has fallen somewhere between the two and is weak on most of these counts. It isn’t good enough.

There are exceptions, of course. There are a growing number of working paper series. The likes of CHE and OHE have long been bastions in this regard. And there are some journals – including one of the signatories, Health Economics Review – that are ahead of their associates in some respects.

But in general, the HEJs are still on the wrong side of history. So rather than weakly addressing an issue that has been known about for at least 35 years, the HEJs should be taking bolder steps and pushing for progress in our mouldy old system of academic publishing. Here are a few things that I would have celebrated:

  • A commitment to an open-access-first policy. This could take various forms. For example, the BMJ makes all research articles open access. A policy that I have often thought useful would be for HTML versions of articles to be open access, possibly after an embargo period, with PDFs remaining behind a paywall. Journals could easily monetise this – most already deliver adverts. The journals should commit to providing reasonably priced open access options for authors. In fairness, most already do this, but firm commitments are valuable. Furthermore, the journals should commit to providing generous fee waivers for academics without the means to pay.
  • A commitment to transparency. For me, this is the most pressing issue that needs addressing in academic publishing. It’s a big one to address, but it can be tackled in stages. I’ve written before that decision models should be published. This is a no-brainer, and I remain dumbfounded by the fact that funders don’t insist on it. If you have written a paper based on a decision model, I literally have no idea whether or not you are making the results up unless I have access to the model. The fact that reviewers tend not to be able to access the models is outrageous. The HEJs should also make the sorts of commitments to transparent reporting of methodology that medical journals make. For example, most medical journals (at least in principle) do not publish trials that were not prospectively registered. The HEJs should encourage and facilitate the publication of protocols for empirical studies. And, like some of the economics journals, they should insist on raw data being made available. This would be progress.
  • Improving peer review. The system of closed pre-publication peer review is broken. It doesn’t work. It can function as part of a wider process of peer review, but as the sole means of review it stinks. There are a number of things the HEJs should do to address this. I am very much in favour of open peer review, which makes journals accountable and can expose any weaknesses in their review processes. The HEJs should also facilitate post-publication peer review on their own pages. Only one of the signatories currently provides this.

If you are particularly enamoured of the HEJs’ statement then please share your thoughts in the comments below. My intention here is not to chastise the HEJs themselves, but rather the system in which they operate. I just wish that the HEJs would be more willing to take risks in the name of science, and I hope that this is simply a first baby step towards grander and more concrete commitments across the journals. Until then, I will save my praise.