Data sharing and the cost of error

The world’s highest impact factor medical journal, the New England Journal of Medicine (NEJM), seems to have been doing some soul searching. After publishing an editorial early in 2016 insinuating that researchers requesting trial data for re-analysis were “research parasites”, it has released a series of articles on data sharing. Four articles were published in August: two in favour and two less so. This month another three articles appear on the same topic, and the journal is sponsoring a challenge to re-analyse data from a previous trial. We reported earlier in the year on a series of concerns at the NEJM, and these new steps are welcome moves to address them. However, while the articles consider questions of fairness in sharing data from large, long, and difficult trials, little has been said about the potential costs to society of unremedied errors in data analysis. The costs of not sharing data can be large, as the long-running saga over the controversial PACE trial illustrates.

The PACE trial was a randomised, controlled trial assessing the benefits of a number of treatments for chronic fatigue syndrome, including graded exercise therapy and cognitive behavioural therapy. However, after publication of the trial results in 2011, a number of concerns were raised about the conduct of the trial, its analysis, and its reporting, including a change in the definitions of ‘improvement’ and ‘recovery’ mid-way through the trial. Other researchers sought access to the trial data for re-analysis, but their requests were rebuffed with justifications a judge later described as ‘wild speculations’. The data were finally released and recently re-analysed. The new analysis revealed what many had suspected: the interventions in the trial had little benefit. Nevertheless, the recommended treatments for chronic fatigue syndrome had already changed as a result of the trial. (STAT has the whole story here.)

A cost-effectiveness analysis was published alongside the PACE trial. The results suggested that cognitive behavioural therapy (CBT) was cost-effective compared with standard care, as was graded exercise therapy (GET). Quality of life was measured in the trial using the EQ-5D, and costs were also recorded, making calculation of incremental cost-effectiveness ratios (ICERs) straightforward. Costs were higher in all the intervention groups. The table reporting QALY outcomes is reproduced below:

[Table: QALY outcomes by treatment group, reproduced from the PACE cost-effectiveness analysis (journal-pone-0040808-t005)]
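As a reminder of how such a ratio is formed (standard notation, not figures reproduced from the paper), the ICER for, say, CBT against standard medical care (SMC) is the difference in mean costs divided by the difference in mean QALYs:

\[ \text{ICER} = \frac{\bar{C}_{CBT} - \bar{C}_{SMC}}{\bar{Q}_{CBT} - \bar{Q}_{SMC}} \]

with the result then compared against a threshold such as £20,000–£30,000 per QALY.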

At face value the analysis seems reasonable. But, in light of the problems with the trial, including that none of the objective measures of patient health, such as walking tests and step tests, nor labour market outcomes, showed much sign of improvement or recovery, these data seem less convincing. In particular, the statistically significant difference in QALYs – “After controlling for baseline utility, the difference between CBT and SMC was 0.05 (95% CI 0.01 to 0.09)” – may well just be a type I error. A re-analysis of these data is warranted (although gaining access may still prove difficult).

If the new treatments actually conferred no real benefit, then health has been lost elsewhere in the healthcare system: the money spent on them could have generated health through other services. If we assume the NHS produces health at £20,000 per QALY (contentious, I know!), then spending that would have been justified by a 0.05 QALY gain instead displaces roughly 0.05 QALYs for each patient with chronic fatigue syndrome put on the new treatment. The prevalence of chronic fatigue syndrome may be as high as 0.2% among adults in England, representing approximately 76,000 people. If all of these patients were switched to the new, ineffective treatments, the opportunity cost could be as much as 3,800 QALYs.
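As a rough sketch of that back-of-the-envelope arithmetic, using only the figures assumed above:

```python
# Back-of-the-envelope opportunity cost, using the figures assumed in the text.
qalys_forgone_per_patient = 0.05   # claimed QALY gain that may not be real
patients = 76_000                  # ~0.2% of adults in England
threshold = 20_000                 # £ per QALY assumed for the NHS (contested)

total_qalys_forgone = qalys_forgone_per_patient * patients
equivalent_spend = total_qalys_forgone * threshold

print(f"QALYs forgone: {total_qalys_forgone:,.0f}")       # ~3,800 QALYs
print(f"Equivalent NHS spend: £{equivalent_spend:,.0f}")   # ~£76 million
```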

The key point is that analytical errors have costs if the analyses go on to change recommended treatments. And when aggregated across a national health service these costs can become substantial. Researchers may worry about publication prestige or fairness in using other people’s hard-won data, but the bigger issue is the wider cost to society of letting an error go unchallenged.


PrEP: A story in desperate need of health economics communication

The poor state of public economics communication has been decried in many fora. The consensus of economists on issues such as the impact of austerity, leaving the European Union, and other major policy choices is, in general, poorly communicated to the public. With a few exceptions, such as Martin Wolf in the Financial Times and Paul Krugman in the New York Times, most major economics issues are covered by political journalists and frequently lack appropriate scrutiny. Health economics is no exception, and this week’s ruling on PrEP (pre-exposure prophylaxis), a combination of anti-retroviral drugs that can reduce the risk of contracting HIV by over 90%, reveals the poor state of public understanding of economic evaluation and cost-effectiveness analysis.

Perhaps the most shocking reportage on this topic came, unsurprisingly, from the Daily Mail, which claimed that funding the “promiscuity pill” would deny cancer patients treatment and amputees new limbs. Even the comment sections of more temperate outlets, such as the Guardian, reveal the same concerns: that providing PrEP will both encourage risky sexual behaviour and prevent other treatments from being provided. Indeed, NHS England itself stated that it would be prevented from treating children with cystic fibrosis, despite the lack of any formal cost-effectiveness analysis. A basic understanding of health economics is lacking. The communication of a few straightforward facts might improve public understanding:

  • A new treatment is generally not recommended for use in the NHS if the resources it would displace could provide greater benefits elsewhere.
  • New interventions are often more expensive than standard therapy; they are considered cost-effective if the extra benefit is greater than could be achieved with the same resources elsewhere. In some cases a treatment may even have a negative net cost, if it is cheaper than the comparator or prevents longer-term costs arising, freeing up resources to be used elsewhere (a sketch of this decision rule follows the list).
  • When assessing whether or not a new intervention is cost-effective it is compared to relevant alternative treatments. In the case of PrEP this would be providing treatment for HIV after it has been contracted, for example. A thoroughgoing cost-effectiveness analysis should take into account possible changes in behaviour induced by the availability of the treatment: vaccination programmes are a good example of this.
  • Criteria other than cost-effectiveness are used to decide on treatment recommendations such as access to other treatments or the demographics of the groups affected by the disease in question.
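To make the second point concrete, here is a minimal sketch of the decision rule (illustrative, hypothetical numbers only, not an analysis of PrEP): an intervention is deemed cost-effective when its incremental net monetary benefit at the threshold is positive, i.e. the health it buys exceeds the health displaced by the extra spending.

```python
# Illustrative sketch of the cost-effectiveness decision rule (hypothetical numbers).
THRESHOLD = 20_000  # £ per QALY; a commonly cited (and contested) NHS benchmark

def incremental_net_monetary_benefit(extra_cost, extra_qalys, threshold=THRESHOLD):
    """Value of the QALYs gained at the threshold, minus the extra cost."""
    return threshold * extra_qalys - extra_cost

# Hypothetical: a new treatment costs £500 more per patient and adds 0.04 QALYs.
inmb = incremental_net_monetary_benefit(extra_cost=500, extra_qalys=0.04)
print("cost-effective" if inmb > 0 else "not cost-effective", f"(INMB = £{inmb:.0f})")
```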

Understanding these concepts should suggest that many of the concerns around PrEP are moot until its cost-effectiveness has been established. But PrEP clearly ignites some heated moral debates. Some might still argue that, regardless of its cost-effectiveness, and even if it is cost saving, it shouldn’t be provided by a public healthcare system. Many who object to the ruling voice the familiar argument against funding treatments for conditions that result from personal choices about behaviour: the luck egalitarian argument that we have addressed before (here and here, for example). While these ethical and political considerations may be valid grounds for debate, the issue is not exclusive to PrEP, and could equally cover rugby injuries, cancer as a result of smoking, or even provision of the HPV vaccine. A final point could therefore be added:

  • Normative economics, a question of what should be, and positive economics, a question of what is, shouldn’t be conflated.

Misconceptions of economic evaluation as bean counting or as being valueless abound. Only through effective communication can this be remedied.

Image credit: Jeffrey Beal (CC BY-SA 3.0)

The NHS as policy laboratory

In an ideal world, new policies and interventions could be tested in a randomised fashion before implementation. But, all too often, policies within the health service are decided upon in the absence of decent evidence, serving political rather than public health or economic ends. Consider the recent case of the 7-day NHS, which the evidence is beginning to suggest will not produce the benefits expected of it. Researchers cannot expect political decisions to be delayed so that they can conduct the ideal study. Sometimes the researcher has to evaluate a policy or organisational change that will go ahead regardless, or one that cannot be reversed once it is in place. Nevertheless, this can still present a good opportunity for an evaluation that satisfies researchers and policy makers alike: the stepped wedge cluster randomised trial.

The stepped wedge trial design is a variant of the cluster RCT design. The figure below illustrates the different set-ups. What is unique to the stepped wedge design is that by the end of the study all of the study sites will have received the intervention: it is the order in which they receive it that is randomised. Hemming et al. (2015) provide a good overview with examples of the stepped wedge trial, while Hemming, Girling, and Lilford (2015) give the statistical rationale and background. And, recently, Girling and Hemming (2016) have investigated hybrid designs to optimise statistical efficiency.


Figure. Schematic illustration of the conventional parallel cluster study (with variations) and the stepped wedge study. Hemming et al. (CC BY 4.0)
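To make the randomisation element concrete, here is a minimal sketch (my own illustration, not drawn from any of the cited papers) of how a stepped wedge allocation schedule might be generated: every cluster eventually crosses over to the intervention, and only the order of crossover is randomised.

```python
import random

# Minimal illustrative sketch: build a stepped wedge schedule in which every
# cluster receives the intervention by the end of the study and only the order
# of crossover is randomised.

def stepped_wedge_schedule(n_clusters, n_periods, seed=1):
    clusters = list(range(n_clusters))
    random.Random(seed).shuffle(clusters)  # randomise the order of crossover
    steps = n_periods - 1                  # crossovers occur in periods 1..n_periods-1
    schedule = {}
    for rank, cluster in enumerate(clusters):
        step = 1 + (rank * steps) // n_clusters  # period at which this cluster switches
        schedule[cluster] = [0] * step + [1] * (n_periods - step)  # 0 = control, 1 = intervention
    return schedule

# Example: six clusters observed over four periods
for cluster, exposure in sorted(stepped_wedge_schedule(6, 4).items()):
    print(cluster, exposure)
```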

The stepped wedge design presents an attractive proposition and compromise for researchers and policy makers alike. But the feasibility of implementing it depends on the stage at which researchers become involved in designing the roll-out of the intervention. Often researchers are involved only after the fact, opportunistically examining an ongoing change in the health system. However, there are a growing number of examples of stepped wedge studies being implemented in the NHS (e.g. here). Researcher involvement with policy and organisational changes in the health system should become opt-out rather than opt-in. Data are readily available and the intervention will already be planned, making such research relatively cheap. The NHS can become a powerful policy laboratory.

Photo credit: As6022014 (CC BY-SA 3.0)