Chris Sampson’s journal round-up for 27th January 2020

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A general framework for classifying costing methods for economic evaluation of health care. The European Journal of Health Economics [PubMed] Published 20th January 2020

When it comes to health state valuation and quality of life, I’m always very concerned about the use of precise terminology, and it bugs me when people get things wrong. But when it comes to costing methods, I’m pretty shoddy. So I’m pleased to see this very useful paper, which should help us all to gain some clarity in our reporting of costing studies.

The authors start out by clearly distinguishing between micro-costing and gross-costing in the identification of costs and between top-down and bottom-up valuation of these costs. I’m ashamed to say that I had never properly grasped the four distinct approaches that can be adopted based on these classifications, but the authors make it quite clear. Micro-costing means detailed identification of cost components, while gross-costing considers resource use in aggregate. Top-down methods use expenditure data collected at the organisational level, while bottom-up approaches use patient-level data.
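
To make the authors’ classification a little more concrete, here is a minimal sketch (in Python, with entirely made-up numbers and hypothetical cost items, not drawn from the paper) contrasting a bottom-up micro-costing calculation, built from patient-level resource use, with a top-down gross-costing calculation, built from aggregate expenditure:

```python
# Illustrative sketch only: made-up figures and hypothetical cost items.

# Bottom-up micro-costing: detailed resource use identified for each patient,
# each item valued with its own unit cost.
patients = [
    {"theatre_minutes": 90, "ward_days": 3, "physio_sessions": 2},
    {"theatre_minutes": 120, "ward_days": 5, "physio_sessions": 4},
]
unit_costs = {"theatre_minutes": 12.0, "ward_days": 350.0, "physio_sessions": 45.0}

patient_costs = [
    sum(quantity * unit_costs[item] for item, quantity in p.items())
    for p in patients
]
mean_cost_bottom_up = sum(patient_costs) / len(patient_costs)

# Top-down gross-costing: aggregate expenditure reported at the organisational
# level, divided across total activity.
total_department_spend = 1_500_000.0  # annual expenditure from the accounts
procedures_per_year = 800
mean_cost_top_down = total_department_spend / procedures_per_year

print(f"Bottom-up micro-costing mean cost: £{mean_cost_bottom_up:,.2f}")
print(f"Top-down gross-costing mean cost:  £{mean_cost_top_down:,.2f}")
```

The two estimates need not agree, which is one reason why being explicit about which approach a study has taken matters.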

A key problem is that our language – as health economists – is in several respects contradictory to the language used by management accountants. It’s the accountants who are usually preparing the cost information that we might use in analyses, and these data are not normally prepared for the types of analysis that we wish to conduct, so there is a lot that can go awry. Perhaps most important is that financial accounting is not concerned with opportunity costs. The authors provide a kind of glossary of terms that can support translation between the two contexts, as well as a set of examples of the ways in which the two contexts differ. They also point out the importance of different accounting practices in different countries and the ways in which these might necessitate adjustment in costing methods for economic evaluation.

The study includes a narrative review of costing studies in order to demonstrate the sorts of errors in terminology that can arise and the lack of clarity that results. The studies included in the review provide examples of the different approaches to costing, though no study is identified as ‘bottom-up gross-costing’. One of the most useful contributions of the paper is to provide two methodological checklists, one for top-down and one for bottom-up costing studies. If you’re performing, reviewing, or in any way making use of costing studies, this will be a handy reference.

Health state values of deaf British Sign Language (BSL) users in the UK: an application of the BSL version of the EQ-5D-5L. Applied Health Economics and Health Policy [PubMed] Published 16th January 2020

The BSL translation of the EQ-5D is like no other. It is to be used – almost exclusively – by people who have a specific functional health impairment. For me, this raises questions about whether or not we can actually consider it simply a translation of the EQ-5D and compare values with other translations in the way we would any other language. This study uses data collected during the initial development and validation of the EQ-5D-5L BSL translation. The authors compared health state utility values from Deaf people (BSL users) with a general population sample from the Health Survey for England.

As we might expect, the Deaf sample reported a lower mean utility score (0.78) than the general population (0.84). Several other health measures were used in the BSL study. A staggering 43% of the Deaf participants had depression, and much of the analysis in the paper is directed towards comparing the groups with and without psychological distress. The authors conduct some simple regression analyses to explore what might be the determinants of health state utility values in the Deaf population, with long-standing physical illness having the biggest impact.
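
For a sense of what that might look like in practice, here is a hypothetical sketch (simulated data, with made-up variable names and coefficients, not the study’s dataset or specification) of a simple utility regression of the kind described:

```python
# Hypothetical illustration: simulated data, not the BSL study's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 100  # roughly the size of the Deaf BSL sample discussed above

df = pd.DataFrame({
    "longstanding_illness": rng.integers(0, 2, n),  # 1 = long-standing physical illness
    "psych_distress": rng.integers(0, 2, n),        # 1 = probable psychological distress
    "age": rng.integers(18, 80, n),
})
# Simulate EQ-5D-style index values using made-up coefficients.
df["utility"] = (
    0.9
    - 0.15 * df["longstanding_illness"]
    - 0.08 * df["psych_distress"]
    - 0.001 * df["age"]
    + rng.normal(0, 0.05, n)
).clip(upper=1.0)

model = smf.ols("utility ~ longstanding_illness + psych_distress + age", data=df).fit()
print(model.summary())
```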

I had hoped that the study might be able to tell us a bit more about the usefulness of the BSL version of the EQ-5D-5L, because the EQ-5D has previously been shown to be insensitive to hearing problems. The small sample (<100) can’t tell us a great deal on its own, so it’s a shame that there isn’t some attempt at matching with individuals from the Health Survey for England for the sake of comparison. Using the crosswalk from the EQ-5D-3L to obtain 5L values is also a problem, as it limits the responsiveness of index values. Nevertheless, it’s good to see data relating to this under-represented population.

A welfare-theoretic model consistent with the practice of cost-effectiveness analysis and its implications. Journal of Health Economics [PubMed] Published 11th January 2020

There are plenty of good reasons to deviate from a traditional welfarist approach to cost-benefit analysis in the context of health care, as health economists have debated for decades. But it is nevertheless important to understand the ways in which cost-effectiveness analysis, as we conduct it, deviates from welfarism, and to aim for some kind of consistency in our handling of different issues. This paper attempts to draw together disparate subjects of discussion on the theoretical basis for aspects of cost-effectiveness analysis. The author focuses on issues relating to the inclusion of future (unrelated) costs, to discounting, and to consistency with welfarism, in the conduct of cost-per-QALY analyses. The implications are given consideration with respect to adopting a societal perspective, recognising multiple budget holders, and accounting for distributional impacts.

All of this is based on the description of an intertemporal utility model and a model of medical care investment. The model hinges especially on how we understand consumption to be affected by our ambition to maximise QALYs. For instance, the author argues that, once we consider time preferences in an overall utility function, we don’t need to worry about differential discounting in health and consumption. The various implications of the model are compared to the recommendations of the Second Panel on Cost-Effectiveness in Health and Medicine. In general, the model supports the recommendations of the Panel, where others have been critical. As such, it sets out some of the theoretical basis for those recommendations. It also implies other recommendations, not considered by the Panel. For example, the optimal cost-effectiveness threshold is likely to be higher than GDP per capita.
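
As a purely illustrative sketch of the kind of model at work (a generic textbook formulation in my own notation, not the author’s exact specification), lifetime utility might be written as

$$U_0 = \sum_{t=0}^{T} \beta^{t} \, q_t \, u(c_t),$$

where $c_t$ is consumption in period $t$, $q_t$ is the health-related quality of life (QALY) weight, and $\beta$ is a single discount factor capturing time preference. Because the same $\beta$ multiplies the whole period utility $q_t u(c_t)$, health and consumption are discounted consistently by construction, which is the intuition behind the claim that separate (differential) discount rates become unnecessary once QALYs and consumption sit within one overall utility function.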

It’s difficult to judge the validity of the framework from a first read. The paper is dense with theoretical exposition. My first instinct is ‘so what?’. One of the great things about the practice of cost-effectiveness analysis in health care is that it isn’t constrained by restrictive theoretical frameworks, and so the very idea of a kind of unified theoretical framework is a bit worrying to me. But my second thought is that this is a valuable paper, as it attempts to gather up several loose threads. Whether or not these can be gathered up within a welfarist framework is debatable, but the exercise is revealing. I suspect this paper will help to trigger further inquiry, which can only be a good thing.

Registered reports: time to radically rethink peer review in health economics. PharmacoEconomics – Open [PubMed] Published 23rd January 2020

As a discipline, health economics isn’t great when it comes to publication practices. We excel in neither the open access culture of medical sciences nor the discussion paper culture of economics proper. In this article, the authors express concern about publication bias, and the fact that health economics journals – and health economists in general – aren’t doing much to combat it. In fairness to the discipline, there isn’t really any evidence that publication bias abounds. But that isn’t really the point. We should be able to prove and ensure that it doesn’t if we want our research to be seen as credible.

One (partial) solution to publication bias is the adoption – by journals – of registered reports. Under such a system, researchers would submit study protocols to journals for peer review. If the journal were satisfied with the methods, it could guarantee to publish the study once the results are in, regardless of how sexy the results may or may not be. The authors of this paper identify the prevalence of studies in major health economics journals that could benefit from registered reports. These would be prospectively designed experimental or quasi-experimental studies. It seems that there are plenty.

I’ve used this blog in the past to propose more transparent research practices and to complain about publication practices in health economics generally, while others have complained about the use of p-values in our discipline. The adoption of registered reports is one tactic that could bring improvements and I hope it will be given proper consideration by those in a position to enact change.

E-cigarettes and the role of science in public health policy

E-cigarettes have become, without doubt, one of the public health issues du jour. Many countries and states have been quick to prohibit them, while others continue to debate the issue. The debate ostensibly revolves around the relative harms of e-cigarettes: Are they dangerous? Will they reduce the harms caused by smoking tobacco? Will children take them up? These are questions that would typically be informed by the available evidence. However, there is a growing schism within the scientific community about what the evidence actually says. On the one hand, there is the view that the evidence, taken altogether, overwhelmingly suggests that e-cigarettes are significantly less harmful than cigarettes and would reduce the harms caused by nicotine use. On the other hand, there is a vocal group that doubts the veracity of the available evidence and is critical of e-cigarette availability in general. Indeed, this latter view has been adopted by the biggest journals in medicine: The Lancet, the BMJ, the New England Journal of Medicine, and JAMA, each of which has published either research or editorials along these lines.

The evidence around e-cigarettes was recently summarised and reviewed by Public Health England. The conclusion of the review was that e-cigarettes are 95% less harmful than smoking tobacco. So why might these journals take a position that is arguably contrary to the evidence? From a sociological perspective, epistemological conflicts in science are also political conflicts. Actions within the scientific field are directed at acquiring scientific authority, and that authority requires social recognition. However, e-cigarette policy is also a political issue, and as such actions in this area are also directed at gaining political capital. If the e-cigarette issue can be delimited to a purely scientific problem, then scientific capital can be translated into political capital. One way of achieving this is to try to establish oneself as the authoritative scientific voice on such matters and to cast doubt on the claims made by others.

We can also view the issue in a broader context. The traditional journal format is under threat from other models of scientific publishing, including blogs, open access publishers, pre-print archives, and post-publication peer review. Much of the debate around e-cigarettes has come from these new sources. Dominant producers in the scientific field must necessarily be conservative, since it is the established structure of the scientific field that grants these producers their dominant status. But this competition in the scientific field may have wider, pernicious consequences.

Typically, we try to formulate policies that maximise social welfare. But, as Acemoglu and Robinson point out, the policy that maximises social welfare now may not maximise welfare in the long run. Different policies today affect the political equilibrium tomorrow, and thus the policies that are available to policy makers tomorrow. Prohibiting e-cigarettes today may be socially optimal if there is no reliable evidence on their harms or benefits and there are suspicions that they could cause public harm. But it is very difficult politically to reverse prohibition policies, even if evidence were later to emerge that e-cigarettes are an effective harm reduction product. Thus, even if the journals doubt the evidence around e-cigarettes, the best policy position would arguably be to remain agnostic and await further evidence. But this would not be a position that would grant them socially recognised scientific capital.

Perhaps this e-cigarette debate is reflective of a broader shift in the way in which scientific evidence and those with scientific capital are engaged in public health policy decisions. Different forms of evidence beyond RCTs are being more widely accepted in biomedical research and methods of evidence synthesis are being developed. New forums are also becoming available for their dissemination. This, I would say, can only be a positive thing.

Health economics journals and negative findings

Recently, a number of health economics journals (henceforth HEJs) co-signed a statement about the publication of negative findings:

The Editors of the health economics journals named below believe that well-designed, well-executed empirical studies that address interesting and important problems in health economics, utilize appropriate data in a sound and creative manner, and deploy innovative conceptual and methodological approaches compatible with each journal’s distinctive emphasis and scope have potential scientific and publication merit regardless of whether such studies’ empirical findings do or do not reject null hypotheses that may be specified.

There was an outpouring of support for this statement – on Twitter, at least. Big deal. Welcome to the 21st century, health economics. Thanks for agreeing not to actively undermine scientific discourse. Don’t get me wrong, it is of course a good thing that this has been published. Inter-journal agreements are rare and valuable things. But is there really anything to celebrate?

Firstly, the statement has no real substance. The HEJs apparently wish to encourage the submission of negative findings, which is nice, but no real commitments are made. The final sentence reads, “As always, the ultimate responsibility for acceptance or rejection of a submission rests with each journal’s Editors.” So it’s business as usual.

Secondly, one has to wonder whether this is an admission that at least some of the HEJs have until now been refusing to publish negative findings. If they have, then this statement is somewhat shameful; if they haven’t, then it is just hot air.

Thirdly, is publication bias really a problem in the health economics literature? Generally, I think health economists – or those publishing in health economics journals – are less committed to any intervention that they might be evaluating, and less rests on a ‘positive’ result. When it comes to econometric studies or issues specific to our sub-discipline, I see plenty of contradictory and non-significant findings being published in the HEJs.

Finally, and most importantly for me, this highlights what I think is a great shame for the health economics literature. We exist mainly at the nexus between medical research and economics research. Medical journals have been at the forefront of publishing in a number of respects: gold open access; transparency; systematic reporting of methods. Meanwhile, the field of economics is a leading light in green open access with the publication of working papers at RePEc, and journals like the American Economic Review are committed to making data available for replication studies. Yet health economics has fallen somewhere between the two and is weak with respect to most of these. It isn’t good enough.

There are exceptions, of course. There are a growing number of working papers series. The likes of CHE and OHE have long been bastions in this regard. And there are some journals – including one of the signatories, Health Economics Review – that are ahead of their associates in some respects.

But in general, the HEJs are still on the wrong side of history. So rather than addressing (and in such a weak way) an issue that has been known about for at least 35 years, the HEJs should be taking bolder steps and pushing for progress in our mouldy old system of academic publishing. Here are a few things that I would have celebrated:

  • A commitment to an open-access-first policy. This could take various forms. For example, the BMJ makes all research articles open access. A policy that I have often thought useful would be for HTML versions of articles to be open access, possibly after a period of embargo, and for PDFs to remain behind a paywall. Journals could easily monetise this – most already deliver adverts. The journals should commit to providing reasonably priced open access options for authors. In fairness, most already do this, but firm commitments are valuable. Furthermore, the journals should commit to providing generous fee waivers for academics without the means to pay.
  • A commitment to transparency. For me, this is the most pressing issue that needs addressing in academic publishing. It’s a big one to address, but it can be tackled in stages. I’ve written before that decision models should be published. This is a no-brainer, and I remain dumbfounded by the fact that funders don’t insist on this. If you have written a paper based on a decision model, I literally have no idea whether or not you are making the results up unless I have access to the model. The fact that reviewers tend not to be able to access the models is outrageous. The HEJs should also make the sorts of commitments to transparent reporting of methodologies that medical journals make. For example, most medical journals (at least in principle) do not publish trials that are not prospectively registered. The HEJs should encourage and facilitate the publication of protocols for empirical studies. And like some of the economics journals, they should insist on raw data being made available. This would be progress.
  • Improving peer review. The system of closed pre-publication peer review is broken. It doesn’t work. It can function as part of a wider process of peer review, but as the sole means of review it stinks. There are a number of things the HEJs should do to address this. I am very much in favour of open peer review, which makes journals accountable and can expose any weaknesses in their review processes. The HEJs should also facilitate post-publication peer review native to their own journal’s pages. Only one of the signatories currently provides this.

If you are particularly enamoured of the HEJs’ statement then please share your thoughts in the comments below. My intention here is not to chastise the HEJs themselves, but rather the system in which they operate. I just wish that the HEJs would be more willing to take risks in the name of science, and I hope that this is simply a first baby step towards grander and more concrete commitments across the journals. Until then, I will save my praise.