
Data sharing and the cost of error

The world’s highest-impact-factor medical journal, the New England Journal of Medicine (NEJM), seems to have been doing some soul-searching. After publishing an editorial early in 2016 insinuating that researchers requesting trial data for re-analysis were “research parasites”, it has released a series of articles on the topic of data sharing. Four articles were published in August: two in favour and two less so. This month another three articles appear on the same topic. And the journal is sponsoring a challenge to re-analyse data from a previous trial. We reported earlier in the year on a series of concerns at the NEJM, and these new steps to address them are all welcome. However, while the articles consider questions of fairness in sharing data from large, long, and difficult trials, little has been said about the potential costs to society of unremedied errors in data analysis. The costs of not sharing data can be large, as the long-running saga over the controversial PACE trial illustrates.

The PACE trial was a randomised controlled trial assessing the benefits of a number of treatments for chronic fatigue syndrome, including graded exercise therapy and cognitive behavioural therapy. However, after publication of the trial results in 2011, a number of concerns were raised about the conduct of the trial, its analysis, and its reporting. These included a change in the definitions of ‘improvement’ and ‘recovery’ mid-way through the trial. Other researchers sought access to the trial data for re-analysis, but such requests were rebuffed with what a judge later described as ‘wild speculations’. The data were finally released and recently re-analysed. The new analysis revealed what many suspected – that the interventions in the trial had little benefit. Nevertheless, the recommended treatments for chronic fatigue syndrome had already changed as a result of the trial. (STAT has the whole story here.)

A cost-effectiveness analysis was published alongside the PACE trial. The results showed that cognitive behavioural therapy (CBT) was cost-effective compared with standard care, as was graded exercise therapy (GET). Quality of life was measured in the trial using the EQ-5D, and costs were also recorded, making calculation of incremental cost-effectiveness ratios straightforward. Costs were higher in all the intervention groups. The table reporting QALY outcomes is reproduced below:


At face value the analysis seems reasonable. But in light of the problems with the trial – including that none of the objective measures of patient health, such as walking tests and step tests, nor labour market outcomes, showed much sign of improvement or recovery – these data seem less convincing. In particular, the statistically significant difference in QALYs – “After controlling for baseline utility, the difference between CBT and SMC was 0.05 (95% CI 0.01 to 0.09)” – may well just be a type I error. A re-analysis of these data is warranted (although gaining access may still be hard).
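As a rough illustration of how borderline that result is, one can back out the approximate p-value implied by the reported confidence interval under a normal approximation. This is a sketch for the reader, not part of the original analysis:

```python
from math import sqrt, erf

# Reported CBT-vs-SMC QALY difference: 0.05 (95% CI 0.01 to 0.09).
diff, lo, hi = 0.05, 0.01, 0.09

se = (hi - lo) / (2 * 1.96)   # standard error recovered from the CI width
z = diff / se                  # z-statistic under a normal approximation
# Two-sided p-value from the standard normal CDF, Phi(z) = 0.5*(1 + erf(z/sqrt(2)))
p = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

print(round(z, 2), round(p, 3))  # z ≈ 2.45, p ≈ 0.014
```

A p-value of roughly 0.014 clears the conventional 0.05 bar, but not by a margin that offers much reassurance once the trial’s other problems are taken into account.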

If there actually was no real benefit from the new treatments, then benefits have been lost elsewhere in the healthcare system. If we assume the NHS achieves £20,000/QALY (contentious, I know!), then the health service effectively loses 0.05 QALYs for each patient with chronic fatigue syndrome put on the new treatment. The prevalence of chronic fatigue syndrome may be as high as 0.2% among adults in England, which represents approximately 76,000 people. If all of these were switched to new, ineffective treatments, the opportunity cost could be as much as 3,800 QALYs.
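The back-of-envelope arithmetic above can be made explicit. All three inputs are the figures quoted in the post (the possibly spurious 0.05 QALY gain, the ~76,000 prevalent cases, and the £20,000/QALY threshold); the monetary conversion at the end is simply that threshold applied to the QALY total, for illustration:

```python
# Back-of-envelope opportunity cost, using the figures in the post.
qaly_loss_per_patient = 0.05   # the possibly spurious QALY gain per patient
patients = 76_000              # ~0.2% adult prevalence in England
threshold = 20_000             # £ per QALY the NHS is assumed to achieve

total_qalys = qaly_loss_per_patient * patients
monetary_cost = total_qalys * threshold

print(total_qalys, monetary_cost)  # 3800.0 QALYs, i.e. £76m at the threshold
```

Even with generous uncertainty around every input, the scale is striking: a single unremedied analytical error, propagated into national treatment recommendations, is measured in thousands of QALYs.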

The key point is that analytical errors have costs if the analyses go on to change recommended treatments. And when aggregated over a national health service, these costs can become substantial. Researchers may worry about publication prestige or fairness in using other people’s hard-won data, but the bigger issue is the wider cost of letting an error go unchallenged.



  • Health economics, statistics, and health services research at the University of Warwick. Also like rock climbing and making noise on the guitar.



21 thoughts on “Data sharing and the cost of error”

  1. Well-argued point, Sam. As an aside, I managed to get access to the CFS data (EQ-5D and Chalder Fatigue Scale mostly) for my Master’s thesis, so they do give it out if you have a specific research question. What you didn’t mention there is the politics around CFS – it’s a crazy, crazy area to do research in.

    1. Yes, the PACE authors had previously shared the data based on a “supporters only” model. You could receive the data only if you were sympathetic to their psychosocial view – that is, that CFS is largely the product of dysfunctional beliefs and behaviours. All others were refused.

      I have recently been working with some CFS patients on a research article. I have found no craziness, but instead intelligent, reasonable people willing to work from their beds to improve the future of their fellow sufferers. The lack of craziness is surprising, given that many have spent decades bedbound or housebound, reading over and over again that their illness is due to their maladaptive beliefs and behaviours, and that it can be “cured” by simple exercise or talk therapy. And that if they weren’t so dysfunctional, they would be better by now. In their position, I think I’d be a little crazy by now.

      As for the “crazy, crazy” area of CFS, I hope other groups whose severe illnesses have been inappropriately “psychologised” will one day join in on the “craziness” too, and refuse to accept these weak, unsubstantiated accounts of their disorder.

  2. Thanks for the post.

    “But, in light of the problems with the trial, including that none of the objective measures of patient health, such as walking tests and step tests, nor labour market outcomes, showed much sign of improvement or recovery, these data seem less convincing.”

    I think that this is the key point. If you ignore potential for bias in the self-report outcomes, it is only really the spinning of ‘recovery’ claims that is a major problem.

    That those in the CBT and GET groups were told as a part of their treatment that these interventions had been found to be effective, and encouraged to believe their ill health was a matter they had more control over than they’d realised, means that it is important more objective outcomes are used. Encouraging someone to fill in a questionnaire more positively is not the same as successfully treating their health condition.

    1. I agree that their subjective outcome may be a ‘Hawthorne effect’. I certainly think that if one properly modelled the objective and subjective outcomes as being correlated, then you would end up with a null/cost-ineffective result for the interventions.

  3. Sam, you said “A re-analysis of these data is warranted (although gaining access may yet still be hard).” I believe this is where you can download the data:
    Dataset file: Readme file:
    It was reference #10 in the PACE data reanalysis:

  4. The lack of any clinically meaningful benefit on objective measures, let alone any correlation with subjective measures, just about sinks the good ship PACE on its own. Patients were left scoring down alongside some of the sickest disease groups on all the objective measures, and didn’t gain that much benefit on subjective ones.

    I can’t even see that PACE demonstrated a placebo effect, given no objective gains.

    How all that translates into the 22% “recovery” rates back to “normal” as claimed by the PACE team simply defies reason.

    PACE did little more than confirm the existence of known confounders in clinical trials of psychotherapy.

    I guess at least they reproduced something.

    1. Nevertheless I think it remains an interesting question whether the observed subjective gains were random variation or some type of placebo effect. Not sure how one might differentiate them exactly but if the data are obtainable I’ll try and give it a go.

      1. I always thought that the placebo effect was defined as the power of psychological processes to produce objectively measurable changes in physiology, including in the global functional capacity of the subject.

        If so, then how do we distinguish between a genuine placebo effect and subjective confounders, if we only use subjective outcome measures?

        While there is a generic place in the clinic for palliative psychological approaches to help reduce secondary distress and improve self-management, that is not what is being claimed by PACE for the role of psychological factors and psychotherapy in this condition.

        It is beyond dispute that the authors and their like-minded colleagues have long advocated psychological processes as both the source of the primary pathology, and hence the primary therapeutic target for improving outcomes in CFS, potentially to the level of complete recovery. A claim they have yet to clearly repudiate.

        That is where the debate gets heated, and the politics crazy, because after 30 years of trying to demonstrate their model to be correct there is still no firm evidence that the authors’ causal assumptions and therapeutic claims are justified, and indeed substantial and increasing evidence against them, not least of all from PACE itself.

        Yet their model remains the dominant one in research, clinical practice, professional education, and policy advice (to both government, and the private insurance industry).

        Obviously such a situation will have very serious direct and indirect adverse consequences for patients themselves, and for the broader efficiency of health expenditure, including via cumulative opportunity costs from failing to adequately investigate alternative possible causal explanations and modes of treatment.

        1. “…still no firm evidence that the authors’ causal assumptions and therapeutic claims are justified, and indeed substantial and increasing evidence against them,…”

          …still no firm evidence that the authors’ causal assumptions and therapeutic claims are justified, and indeed there is substantial and increasing evidence against them,…

  5. In addition to the above study conduct issues, it’s worth looking at PACE’s patient selection methods.

    PACE selected patients with the Oxford definition, which requires only chronic fatigue and includes mental illness. A 2015 report by the U.S. National Institutes of Health concluded that Oxford could “impair progress and cause harm” and called for Oxford to be retired.

    The U.S. Agency for Healthcare Research and Quality (AHRQ) issued an addendum to its evidence review in July and noted that the use of Oxford results in “a high risk of including patients who may have an alternate fatiguing illness or whose illness resolves spontaneously with time.” Once they excluded Oxford studies from their analysis, they found no evidence of effectiveness for GET and barely any for CBT. AHRQ stated that no trials of CBT and GET have used definitions that selected patients with the hallmark criteria of ME.

    PACE claimed they also characterized patients with other CFS and ME definitions. But they applied those definitions after first selecting by Oxford and they also used modified versions of the CFS and ME definitions – one of the PACE publications acknowledged that such modification could result in inaccurate characterization of patients.

    So even if PACE’s analysis methods had been perfect, which they are not, what patients did they actually study?

  6. Statistical flaws alone barely begin to count the extremely large costs (negative cost-benefit outcomes) of the PACE trial. It, and prior lobbying by the psychiatrists behind it, resulted in the total and absolute commitment of the NHS to CBT and GET. Doctors who had been properly treating ME patients (to whom the CFS dicta were applied) based on clinical knowledge developed 1955–1988 were fired and clinics closed, depriving patients of care. The imposition of CBT and GET on patients, despite the harms of GET, resulted in the sectioning of adults and the forced taking of children “into care” when such ministrations were resisted. In addition, studies by ME charities demonstrated that more than half (possibly as many as 74%) of patients treated with GET were left permanently far more disabled: forcing increased exercise on patients whose key symptom is collapse after exercise and failure to recover (due to metabolic dysfunction) was sheer folly. A characteristic outcome would see a patient go from part-time work to being housebound or bedridden.
    The data show far more than 76,000 patients with “cfs” in the UK. I commonly read citations of at least 150,000 and as many as 250,000. Of course the devil’s in the details and the epidemiology, and the success of the PACE PIs in bedazzling the nation with a new twist on the emperor who was actually altogether as naked as the day that he was born leaves one with a certain questioning attitude towards figures.


