Chris Sampson’s journal round-up for 31st July 2017

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

An exploratory study on using principal-component analysis and confirmatory factor analysis to identify bolt-on dimensions: the EQ-5D case study. Value in Health Published 14th July 2017

I’m not convinced by the idea of using bolt-on dimensions for multi-attribute utility instruments. A state description with a bolt-on refers to a different evaluative space, and is therefore not comparable with its progenitor, which undermines the purpose of the exercise. Maybe this study will persuade me otherwise. The authors analyse data from the Multi Instrument Comparison database, including responses to the EQ-5D-5L, SF-6D, HUI3, AQoL-8D and 15D questionnaires, as well as the ICECAP and three measures of subjective well-being. Content analysis was used to allocate items from the measures to underlying constructs of health-related quality of life. The sample of 8,022 was randomly split, with one half used for principal-component analysis and confirmatory factor analysis, and the other used for validation. This approach looks at the underlying constructs associated with health-related quality of life and the extent to which individual items from the questionnaires influence them. Candidate items for bolt-ons are those items from questionnaires other than the EQ-5D that are important and not otherwise captured by the EQ-5D questions. The principal-component analysis supported a 9-component model: physical functioning, psychological symptoms, satisfaction, pain, relationships, speech/cognition, hearing, energy/sleep and vision. The EQ-5D only covered physical functioning, psychological symptoms and pain. Therefore, items from measures that explain the other 6 components represent bolt-on candidates for the EQ-5D. This study succeeds in its aim. It demonstrates what appears to be a meaningful quantitative approach to identifying items not fully captured by the EQ-5D, which might be added as bolt-ons. But it doesn’t answer the question of which (if any) of these bolt-ons ought to be added, or in what circumstances. That would at least require pre-definition of the evaluative space, which might not correspond to the authors’ chosen model of health-related quality of life. If it does, then these findings would be more persuasive as a reason to do away with the EQ-5D altogether.
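The split-sample workflow described above can be sketched numerically. This is my own toy simulation, not the authors’ data or code: the item responses, the number of latent constructs, and the Kaiser retention rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated item responses (rows = respondents, columns = questionnaire items);
# three underlying constructs stand in for the study's latent HRQoL components
n, k = 8022, 12
latent = rng.normal(size=(n, 3))
loadings = rng.normal(size=(3, k))
items = latent @ loadings + rng.normal(scale=0.5, size=(n, k))

# Randomly split the sample into estimation and validation halves
idx = rng.permutation(n)
est, val = items[idx[: n // 2]], items[idx[n // 2:]]

def pca(x):
    """Principal components via eigen-decomposition of the item correlation matrix."""
    eigvals, eigvecs = np.linalg.eigh(np.corrcoef(x, rowvar=False))
    order = np.argsort(eigvals)[::-1]  # largest variance first
    return eigvals[order], eigvecs[:, order]

eigvals, components = pca(est)
n_retained = int(np.sum(eigvals > 1))  # Kaiser criterion, purely for illustration

# Validation half: the retained components should explain a similar share of variance
val_eigvals, _ = pca(val)
share_est = float(eigvals[:n_retained].sum() / k)
share_val = float(val_eigvals[:n_retained].sum() / k)
print(n_retained, round(share_est, 2), round(share_val, 2))
```

In this framing, an item that loads heavily on a retained component not represented among the core instrument’s items would be a bolt-on candidate.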

Endogenous information, adverse selection, and prevention: implications for genetic testing policy. Journal of Health Economics Published 13th July 2017

If you can afford it, there are all sorts of genetic tests available nowadays. Some of them could provide valuable information about the risk of particular health problems in the future. Therefore, they can be used to guide individuals’ decisions about preventive care. But if the individual’s health care is financed through insurance, that same information could prove costly, reinforcing the classic problems of asymmetric information and adverse selection. So we need policy that deals with this. This study considers the incentives and insurance market outcomes associated with four policy options: i) mandatory disclosure of test results, ii) voluntary disclosure, iii) insurers knowing that a test was taken, but not the results, and iv) a complete ban on the use of test information by insurers. The authors describe a utility model that incorporates the use of prevention technologies, and the available insurance contracts, amongst people who are informed or uninformed (according to whether they have taken a test) and high or low risk (according to test results). This is used to estimate the value of taking a genetic test, which differs under the four policy options. Under voluntary disclosure, the information from a genetic test always has non-negative value to the individual, who can choose to tell their insurer only if the result is favourable. The analysis shows that, in terms of social welfare, mandatory disclosure is expected to be optimal, while an information ban is dominated by all other options. These findings are in line with previous studies, which the authors argue were less generalisable. In the introduction, the authors state that “ethical issues are beyond the scope of this paper”. That’s kind of a problem. I doubt anybody who supports an information ban does so on the basis that they think it will maximise social welfare in the fashion described in this paper. More likely, they’re worried about the inequities in health that mandatory disclosure could reinforce, about which this study tells us nothing. Still, an information ban seems to be a popular policy, and studies like this indicate that such decisions should be reconsidered in light of their expected impact on social welfare.
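The non-negative value of test information under voluntary disclosure can be shown with a toy expected-utility calculation. This is my own illustration, not the paper’s model: the probabilities, loss, and log utility are assumed, and it deliberately ignores the adverse-selection feedback on the pooled premium and the prevention decision.

```python
import math

def u(w):
    """Log utility over wealth."""
    return math.log(w)

wealth, loss = 100.0, 50.0
p_high, p_low = 0.4, 0.1  # illness probability by risk type (assumed)
share_high = 0.5          # prior probability of being high risk (assumed)

def premium(p):
    """Actuarially fair premium for full insurance at risk p."""
    return p * loss

def insured_u(prem):
    """Utility from buying full insurance at a given premium."""
    return u(wealth - prem)

pooled_p = share_high * p_high + (1 - share_high) * p_low

# Untested individuals pay the pooled premium
eu_untested = insured_u(premium(pooled_p))

# Tested, voluntary disclosure: disclose only when it lowers the premium
eu_low = max(insured_u(premium(p_low)), insured_u(premium(pooled_p)))
eu_high = max(insured_u(premium(p_high)), insured_u(premium(pooled_p)))
eu_tested = share_high * eu_high + (1 - share_high) * eu_low

value_of_test = eu_tested - eu_untested  # non-negative by construction
print(round(value_of_test, 4))
```

Because the individual can always withhold an unfavourable result, testing can never make them worse off in this setup, which is exactly why insurers face an adverse selection problem under this policy.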

Returns to scientific publications for pharmaceutical products in the United States. Health Economics [PubMed] Published 10th July 2017

Publication bias is a big problem. Part of the cause is that pharmaceutical companies have no incentive to publish negative findings for their own products, though positive findings may be valuable in terms of sales. As usual, it isn’t quite that simple when you really think about it. This study looks at the effect of publications on revenue for 20 branded drugs in 3 markets – statins, rheumatoid arthritis and asthma – using an ‘event-study’ approach. The authors analyse a panel of quarterly US sales data from 2003-2013 alongside publications identified through literature searches and several drug- and market-specific covariates. Effects are estimated using first-difference and difference-in-first-difference models. The authors hypothesise that publications should have an important impact on sales in markets with high generic competition, and less of an impact in those without, or with high branded competition. Essentially, this is what they find. For statins and asthma drugs, where there was some competition, clinical studies in high-impact journals increased sales to the tune of $8 million per publication. For statins, volume was not significantly affected, with the effect mediated through price. In rheumatoid arthritis, where competition is limited, the effect on sales was mediated by the effect on volume. Studies published in lower-impact journals seemed to have a negative influence. Cost-effectiveness studies were only important in the market with high generic competition, increasing statin sales by $2.2 million on average. I’d imagine that these impacts are something of which firms already have a reasonable grasp. But this study provides value to public policy decision makers. It highlights those situations in which we might expect manufacturers to publish evidence, and those in which it might be worthwhile increasing public investment to pick up the slack. It could also help identify where publication bias might be a bigger problem due to the incentives faced by pharmaceutical companies.
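The first-difference logic can be illustrated with a toy panel. This is my own simulated data, not the paper’s: the effect size, event rate, and noise are assumed, and the point is only that differencing quarterly sales removes time-invariant drug effects, so a regression of sales changes on a publication-event indicator recovers the per-publication effect.

```python
import numpy as np

rng = np.random.default_rng(1)
drugs, quarters = 20, 44                     # 20 drugs, quarterly data 2003-2013
fixed = rng.normal(50, 10, size=(drugs, 1))  # time-invariant drug fixed effects
pub = rng.random((drugs, quarters)) < 0.05   # publication events (assumed rate)
true_effect = 8.0                            # $m per publication (assumed)

# Sales = fixed effect + cumulative publication effects + noise
sales = fixed + true_effect * np.cumsum(pub, axis=1) \
    + rng.normal(0, 1, size=(drugs, quarters))

# First differences remove the fixed effect entirely
d_sales = np.diff(sales, axis=1).ravel()
d_pub = pub[:, 1:].ravel().astype(float)

# OLS of differenced sales on the event indicator (with intercept)
X = np.column_stack([np.ones_like(d_pub), d_pub])
beta, *_ = np.linalg.lstsq(X, d_sales, rcond=None)
print(round(float(beta[1]), 1))  # estimated effect per publication
```

The difference-in-first-difference variant would additionally compare these changes across markets with different levels of competition, which is where the paper’s main hypothesis bites.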

Credits

How to cite The Academic Health Economists’ Blog

Occasionally we get emails from people who would like to cite our blog posts. Usually, these requests are framed as ‘is this going to be published in a journal?’. It’s no surprise that people are more comfortable citing the traditional academic literature. But researchers are increasingly citing blog posts. Indeed, some of our blog posts have been cited in published academic literature.

There are plenty of guides out there for citing blog posts. You may like to refer to them for specific formatting styles. Cite This For Me is a useful tool for generating references in a variety of styles. Here I’d like to provide a few specific recommendations for citing posts from this blog.

1. Cite the author

Our blog posts are written by lots of different authors, not by ‘the blog’. The author’s name – assuming they have not claimed anonymity – will appear at the top of the blog post. Let’s take a recent example. To start with, your citation should look something like:

Watson, S. (2017). Variations in NHS admissions at a glance. The Academic Health Economists’ Blog. Available at: https://aheblog.com/2017/01/25/variations-in-nhs-admissions-at-a-glance/ [Accessed 8 Mar. 2017].

2. Use our ISSN

As of this week, the blog now has its own International Standard Serial Number (ISSN). This number uniquely identifies and distinguishes the blog. Our ISSN is 2514-3441. You can find it at the bottom of the sidebar and on our About page. So your citation could become:

Watson, S. (2017). Variations in NHS admissions at a glance. The Academic Health Economists’ Blog (ISSN 2514-3441). Available at: https://aheblog.com/2017/01/25/variations-in-nhs-admissions-at-a-glance/ [Accessed 8 Mar. 2017].

3. Use WebCite

Unlike journal articles, websites can change. One of our authors could (in principle) completely change the content of their blog post after publishing it. More importantly, it is possible that our URLs may change in the future. If this were to happen, the link in the reference above would become redundant and the citation would not be useful to readers. What needs to be cited, therefore, is the blog post at the time at which you accessed it. Enter WebCite. WebCite is a service that archives a webpage and provides a permanent link for citation. This can be achieved by completing an archiving form. Our citation becomes:

Watson, S. (2017). Variations in NHS admissions at a glance. The Academic Health Economists’ Blog (ISSN 2514-3441). Available at: https://aheblog.com/2017/01/25/variations-in-nhs-admissions-at-a-glance/ [Accessed 8 Mar. 2017]. (Archived by WebCite® at http://www.webcitation.org/6ooALaGyF)

4. Check the comments

Finally, authors may choose to subsequently publish their blog post elsewhere in another format, or to upload it to a service such as figshare in order to obtain a DOI. Check the comments below a blog post to see if this is the case, as there may be an alternative source that you might prefer to cite.

But as ever, if you’re struggling, get in touch.

E-cigarettes and the role of science in public health policy

E-cigarettes have become, without doubt, one of the public health issues du jour. Many countries and states have been quick to prohibit them, while others continue to debate the issue. The debate ostensibly revolves around the relative harms of e-cigarettes: Are they dangerous? Will they reduce the harms caused by smoking tobacco? Will children take them up? These are questions that would typically be informed by the available evidence. However, there is a growing schism within the scientific community about what the evidence actually says. On the one hand, there is the view that the evidence, taken altogether, overwhelmingly suggests that e-cigarettes are significantly less harmful than cigarettes and would reduce the harms caused by nicotine use. On the other hand, there is a vocal group that doubts the veracity of the available evidence and is critical of e-cigarette availability in general. Indeed, this latter view has been adopted by the biggest journals in medicine: The Lancet, the BMJ, the New England Journal of Medicine, and JAMA, each of which has published either research or editorials along these lines.

The evidence around e-cigarettes was recently summarised and reviewed by Public Health England. The conclusion of the review was that e-cigarettes are 95% less harmful than smoking tobacco. So why might these journals take a position that is arguably contrary to the evidence? From a sociological perspective, epistemological conflicts in science are also political conflicts. Actions within the scientific field are directed at acquiring scientific authority, and that authority requires social recognition. However, e-cigarette policy is also a political issue, and as such, actions in this area are also directed at gaining political capital. If the e-cigarette issue can be delimited to a purely scientific problem, then scientific capital can be translated into political capital. One way of achieving this is to try to establish oneself as the authoritative scientific voice on such matters and to doubt the claims made by others.

We can also view the issue in a broader context. The traditional journal format is under threat from other models of scientific publishing, including blogs, open access publishers, pre-print archives, and post-publication peer review. Much of the debate around e-cigarettes has come from these new sources. Dominant producers in the scientific field must necessarily be conservative, since it is the established structure of the scientific field that grants them their dominant status. But this competition in the scientific field may have wider, pernicious consequences.

Typically, we try to formulate policies that maximise social welfare. But, as Acemoglu and Robinson point out, the policy that maximises social welfare now may not maximise welfare in the long run. Different policies today affect the political equilibrium tomorrow, and thus the policies that are available to policy makers tomorrow. Prohibiting e-cigarettes today may be socially optimal if there is no reliable evidence on their harms or benefits and there are suspicions that they could cause public harm. But it is very difficult politically to reverse prohibition policies, even if evidence were to later emerge that e-cigarettes are an effective harm reduction product. Thus, even if the journals doubt the evidence around e-cigarettes, the best policy position would arguably be to remain agnostic and await further evidence. But this would not be a position that would grant them socially recognised scientific capital.

Perhaps this e-cigarette debate is reflective of a broader shift in the way in which scientific evidence and those with scientific capital are engaged in public health policy decisions. Different forms of evidence beyond RCTs are being more widely accepted in biomedical research and methods of evidence synthesis are being developed. New forums are also becoming available for their dissemination. This, I would say, can only be a positive thing.