Chris Sampson’s journal round-up for 1st April 2019

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Toward a centralized, systematic approach to the identification, appraisal, and use of health state utility values for reimbursement decision making: introducing the Health Utility Book (HUB). Medical Decision Making [PubMed] Published 22nd March 2019

Every data point reported in research should be readily available to us all in a structured knowledge base. Most of us waste most of our time retreading old ground, meaning that we don’t have the time to do the best research possible. One instance of this is in the identification of health state utility values to plug into decision models. Everyone who builds a model in a particular context goes searching for utility values – there is no central source. The authors of this paper are hoping to put an end to that.

The paper starts with an introduction to the importance of health state utility values in cost-effectiveness analysis, which most of us don’t need to read. Of course, the choice of utility values in a model is very important and can dramatically alter estimates of cost-effectiveness. The authors also discuss issues around the identification of utility values and the assessment of their quality and applicability. Then we get into the objectives of the ‘Health Utility Book’, which is designed to tackle these issues.

The Health Utility Book will consist of a registry (I like registries), backed by a systematic approach to the identification and inclusion (registration?) of utility values. The authors plan to develop a quality assessment tool for studies that report utility values, using a Delphi panel method to identify appropriate indicators of quality to be included. The quality assessment tool will be complemented by a tool to assess applicability, which will be developed through interviews with stakeholders involved in the reimbursement process.
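
The paper doesn’t commit to a data model, but to make the registry idea concrete, here’s a minimal sketch of what a registered utility value might look like, with fields for the planned quality and applicability assessments. Every name and value below is my own invention, not the authors’:

```python
from dataclasses import dataclass
from typing import Optional

# A hypothetical registry entry for the Health Utility Book.
# All field names and example values are illustrative; the paper
# does not specify a data model.
@dataclass
class UtilityValueRecord:
    condition: str                 # e.g. "metastatic breast cancer"
    health_state: str              # description of the health state valued
    utility_mean: float            # mean utility (1 = full health, 0 = dead)
    utility_se: float              # standard error of the mean
    instrument: str                # e.g. "EQ-5D-3L", "SF-6D", "time trade-off"
    country: str                   # country in which values were elicited
    source_doi: str                # DOI of the source study
    quality_score: Optional[float] = None  # from the planned quality tool
    applicability_notes: str = ""          # from the planned applicability tool

record = UtilityValueRecord(
    condition="metastatic breast cancer",
    health_state="stable disease on second-line treatment",
    utility_mean=0.69,             # made-up number, for illustration only
    utility_se=0.02,
    instrument="EQ-5D-3L",
    country="UK",
    source_doi="10.xxxx/placeholder",
)
```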

Initially, the Health Utility Book will compile utility values only for cancer, and some of the project’s funding is cancer-specific. To survive, the project will need more money from more sources. To be sustainable, it will need to attract funding indefinitely, or perhaps morph into a crowd-sourced platform. Either way, the Health Utility Book has my support.

A review of attitudes towards the reuse of health data among people in the European Union: the primacy of purpose and the common good. Health Policy Published 21st March 2019

We all agree that data protection is important. We all love the GDPR. Organisations such as the European Council and the OECD are committed to facilitating the availability of health data as a means of improving population health. And yet, there often seem to be barriers to accessing health data, and we occasionally hear stories of patients opposing data sharing (e.g. care.data). Maybe people don’t want researchers to be using their data, and we just need to respect that. Or, more likely, we need to figure out what it is that people are opposed to, and design systems that recognise this.

This study reviews research on attitudes towards the sharing of health data for purposes other than treatment, among people living in the EU, employing a ‘configurative literature synthesis’ (a new one for me). From 5,691 abstracts, 29 studies were included. Most related to the use of health data in research in general, while some focused on registries. A few studies looked at other uses, such as for planning and policy purposes. And most were from the UK.

An overarching theme was low awareness among the population about the reuse of health data, though in some studies a desire to be better informed was observed. In general, views towards the use of health data were positive, but this was conditional on the data being used to serve the common good: achieving a better understanding of diseases, improving treatments, or delivering more efficient health care.

Participants were less happy with health data reuse where it was seen to conflict with the interests of the patients providing the data. Commercialisation was a big concern, including the sale of data and private companies profiting from the data. Employers and insurance companies were also considered a threat to patients’ interests, and there were conflicting views about whether it is positive for pharmaceutical companies to have access to health data. A minority of people were against sharing data altogether. Certain types of data were seen as particularly sensitive, including those relating to mental health or sexual health, and people generally expressed concern about data security and the potential for leaks.

The studies also looked at the basis for consent that people would prefer. A majority accepted that their data could be used without consent so long as the data were anonymised, but there was no clear preference among the various consent models.

It’s important to remember that – on the whole – patients want their data to be used to further the common good. But support can go awry if the data are used to generate profits for private firms or used in a way that might be perceived to negatively affect patients.

Health-related quality of life in injury patients: the added value of extending the EQ-5D-3L with a cognitive dimension. Quality of Life Research [PubMed] Published 18th March 2019

I’m currently working on a project to develop a cognition ‘bolt-on’ for the EQ-5D. Previous research has demonstrated that a cognition bolt-on could provide additional information to distinguish meaningful differences between health states, and that cognition might be a more important candidate than other bolt-ons. Injury – especially traumatic brain injury – can be associated with cognitive impairments. This study explores the value of a cognition bolt-on in this context.

The authors sought to find out whether cognition is sufficiently independent of the other dimensions, whether the impact of cognitive problems is reflected in the EuroQol visual analogue scale (EQ VAS), and how a cognition bolt-on affects the overall explanatory power of the EQ-5D-3L. The data used are from the Dutch Injury Surveillance System, which surveys people who have attended an emergency department with an injury, and which includes the EQ-5D-3L. The survey adds a cognition bolt-on relating to memory and concentration.

Data were available for 16,624 people at baseline, with 5,346 complete responses at 2.5-month follow-up. The cognition dimension was the least affected, with around 20% of respondents reporting any problems (though it’s worth noting that the majority of the cohort had injuries to parts of the body other than the head). The frequency of different responses suggests that cognition is dominant over the other dimensions, in the sense that severe cognitive problems tend to be observed alongside problems in other dimensions, but not vice versa. The mean EQ VAS for people reporting severe cognitive impairment was 41, compared with 75 for those reporting no problems. Regression analysis showed that moderate and severe cognitive impairment explained 8.7% and 6.2% of the variance in the EQ VAS, respectively. Multivariate analysis on the whole sample suggested that the cognition dimension added roughly the same explanatory power as any other dimension. Interestingly (or, perhaps, worryingly), when the authors looked at the subset of people with traumatic brain injury, the explanatory power of the cognition dimension was slightly lower than in the overall sample.
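
For the curious, the kind of analysis described here is easy to sketch. Below is a minimal example of estimating the explanatory power of a cognition bolt-on for the EQ VAS, using synthetic data in place of the Dutch Injury Surveillance System. All variable names and numbers are invented for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the survey data: one row per respondent, each
# dimension coded 1/2/3 (no / some / severe problems). All invented.
rng = np.random.default_rng(0)
n = 1_000
dims = ["mobility", "self_care", "usual_activities", "pain", "anxiety", "cognition"]
df = pd.DataFrame({dim: rng.integers(1, 4, n) for dim in dims})

# EQ VAS loosely decreasing in reported problems, plus noise.
df["eq_vas"] = (100 - 8 * (df[dims].sum(axis=1) - 6)
                - 5 * (df["cognition"] - 1)
                + rng.normal(0, 10, n)).clip(0, 100)

# Variance in EQ VAS explained by the cognition bolt-on alone...
print(smf.ols("eq_vas ~ C(cognition)", data=df).fit().rsquared)

# ...and its increment over the five core EQ-5D-3L dimensions.
core = "C(mobility) + C(self_care) + C(usual_activities) + C(pain) + C(anxiety)"
r2_core = smf.ols(f"eq_vas ~ {core}", data=df).fit().rsquared
r2_full = smf.ols(f"eq_vas ~ {core} + C(cognition)", data=df).fit().rsquared
print(r2_full - r2_core)
```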

There’s enough in this paper to justify further research into the advantages and disadvantages of using a cognition bolt-on. But I would say that. Whether or not the bolt-on descriptors used in this study are meaningful to patients remains an open question.

Developing the role of electronic health records in economic evaluation. The European Journal of Health Economics [PubMed] Published 14th March 2019

One way that we can use patients’ routinely collected data is to support the conduct of economic evaluations. In this commentary, the authors set out some of the ways to make the most of these data and discuss some of the methodological challenges.

Large datasets have the advantage of being large. When this is combined with the collection of sociodemographic data, estimates for sub-groups can be produced. The data can also facilitate the capture of outcomes not otherwise available. For example, the impact of bariatric surgery on depression outcomes could be identified beyond the timeframe of a trial. The datasets also have the advantage of being representative, where trials are not. This could mean more accurate estimates of costs and outcomes.

But there are things to bear in mind when using the data, such as the fact that coding might not always be very accurate, and coding practices could vary between observations. Missing data are likely to be missing for a reason (i.e. not at random), which creates challenges for the analyst. I had hoped that this paper would discuss novel uses of routinely collected data systems, such as the embedding of economic evaluations within them, rather than simply their use to estimate parameters for a model. But if you’re just getting started with using routine data, I suppose you could do worse than start with this paper.
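
As a trivial illustration of both the promise (subgroup estimates) and the pitfall (informative missingness), consider a toy routine dataset. Everything here, from the variable names to the missingness rate, is invented:

```python
import numpy as np
import pandas as pd

# Toy stand-in for a routinely collected dataset; every name and number
# here is invented. One row per patient; costs are missing for some
# patients, plausibly not at random.
rng = np.random.default_rng(1)
n = 10_000
ehr = pd.DataFrame({
    "age_band": rng.choice(["18-44", "45-64", "65+"], n),
    "deprivation_quintile": rng.integers(1, 6, n),
    "annual_cost": rng.gamma(shape=2.0, scale=1500.0, size=n),
})
ehr.loc[rng.random(n) < 0.15, "annual_cost"] = np.nan  # simulated missingness

# Large n supports subgroup-level parameter estimates for a decision model...
print(ehr.groupby(["age_band", "deprivation_quintile"])["annual_cost"].mean())

# ...but complete-case means are only unbiased if data are missing at random,
# an assumption that usually needs checking (or imputation) with routine data.
print(f"Complete cases: {ehr['annual_cost'].notna().mean():.0%}")
```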


Data sharing and the cost of error

The world’s highest-impact-factor medical journal, the New England Journal of Medicine (NEJM), seems to have been doing some soul searching. After publishing an editorial early in 2016 insinuating that researchers requesting data from trials for re-analysis were “research parasites”, it has released a series of articles on the topic of data sharing. Four articles were published in August: two in favour and two less so. This month another three articles have been published on the same topic, and the journal is sponsoring a challenge to re-analyse data from a previous trial. We reported earlier in the year on a series of concerns at the NEJM, and these new steps to address those concerns are welcome. However, while the articles consider questions of fairness in sharing data from large, long, and difficult trials, little has been said about the potential costs to society of unremedied errors in data analysis. The costs of not sharing data can be large, as the long-running saga of the controversial PACE trial illustrates.

The PACE trial was a randomised controlled trial assessing a number of treatments for chronic fatigue syndrome, including graded exercise therapy and cognitive behavioural therapy. However, after the trial results were published in 2011, a number of concerns were raised about the trial’s conduct, analysis, and reporting, including a change in the definitions of ‘improvement’ and ‘recovery’ mid-way through the trial. Other researchers sought access to the trial data for re-analysis, but their requests were rebuffed with what a judge later described as ‘wild speculations’. The data were finally released and recently re-analysed. The new analysis revealed what many had suspected: that the interventions in the trial had little benefit. Nevertheless, the recommended treatments for chronic fatigue syndrome had already changed as a result of the trial. (STAT has the whole story here.)

A cost-effectiveness analysis was published alongside the PACE trial. The results showed that cognitive behavioural therapy (CBT) was cost-effective compared to standard medical care (SMC), as was graded exercise therapy (GET). Quality of life was measured in the trial using the EQ-5D, and costs were also recorded, making the calculation of incremental cost-effectiveness ratios straightforward. Costs were higher for all the intervention groups. The table reporting QALY outcomes is reproduced below:

[Table 5 from the PACE cost-effectiveness analysis (PLoS ONE, journal.pone.0040808): QALY outcomes by treatment group]
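
For readers new to this, the ICER calculation implied by such a table is simple. Here is a minimal sketch with placeholder numbers, not the trial’s actual costs and QALYs:

```python
# Illustrative ICER calculation; the numbers are placeholders, not the
# values reported in the PACE cost-effectiveness analysis.
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# e.g. CBT vs standard medical care (SMC), using the adjusted QALY
# difference of 0.05 quoted below and a made-up cost difference of £1,000:
print(icer(cost_new=2_000, cost_old=1_000, qaly_new=0.55, qaly_old=0.50))
# 20000.0, i.e. £20,000 per QALY gained
```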

At face value, the analysis seems reasonable. But in light of the problems with the trial, including that neither the objective measures of patient health, such as walking tests and step tests, nor labour market outcomes showed much sign of improvement or recovery, these data seem less convincing. In particular, the statistically significant difference in QALYs – “After controlling for baseline utility, the difference between CBT and SMC was 0.05 (95% CI 0.01 to 0.09)” – may well just be a type I error. A re-analysis of these data is warranted (although gaining access may still prove difficult).

If there was actually no real benefit from the new treatments, then benefits have been lost from elsewhere in the healthcare system. If we assume the NHS achieves £20,000 per QALY at the margin (contentious, I know!), then the money spent buying an illusory 0.05 QALY gain displaces a real 0.05 QALYs of health elsewhere: the health service loses 0.05 QALYs for each patient with chronic fatigue syndrome put on the new treatment. The prevalence of chronic fatigue syndrome may be as high as 0.2% among adults in England, which represents approximately 76,000 people. If all of these were switched to the new, ineffective treatments, the opportunity cost could be as much as 3,800 QALYs.
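
The back-of-the-envelope arithmetic, spelled out (the threshold and prevalence figures are the assumptions stated above):

```python
# Back-of-the-envelope opportunity cost, using the assumptions above.
threshold = 20_000            # assumed NHS cost-effectiveness threshold (£/QALY)
qalys_per_patient = 0.05      # the possibly spurious QALY gain per patient
patients = 76_000             # ~0.2% prevalence among adults in England

qalys_lost = qalys_per_patient * patients
print(qalys_lost)                          # 3800.0 QALYs
print(f"£{qalys_lost * threshold:,.0f}")   # £76,000,000 of displaced health spending
```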

The key point is that analytical errors have costs if the analyses go on to change recommended treatments. Aggregated across a national health service, those costs can become substantial. Researchers may worry about publication prestige or fairness in using other people’s hard-won data, but the bigger issue is the wider cost of letting an error go unchallenged.
