Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.
An exploratory study on using principal-component analysis and confirmatory factor analysis to identify bolt-on dimensions: the EQ-5D case study. Value in Health Published 14th July 2017
I’m not convinced by the idea of using bolt-on dimensions for multi-attribute utility instruments. A state description with a bolt-on refers to a different evaluative space, and is therefore not comparable with its progenitor, which undermines the purpose. Maybe this study will persuade me otherwise. The authors analyse data from the Multi Instrument Comparison database, including responses to the EQ-5D-5L, SF-6D, HUI3, AQoL-8D and 15D questionnaires, as well as the ICECAP and 3 measures of subjective well-being. Content analysis was used to allocate items from the measures to underlying constructs of health-related quality of life. The sample of 8022 was randomly split, with one half used for principal-component analysis and confirmatory factor analysis, and the other used for validation. This approach identifies the underlying constructs associated with health-related quality of life and the extent to which individual items from the questionnaires influence them. Candidate items for bolt-ons are those items from questionnaires other than the EQ-5D that are important and not otherwise captured by the EQ-5D questions. The principal-component analysis supported a 9-component model: physical functioning, psychological symptoms, satisfaction, pain, relationships, speech/cognition, hearing, energy/sleep and vision. The EQ-5D covered only physical functioning, psychological symptoms and pain. Therefore, items from measures that explain the other 6 components represent bolt-on candidates for the EQ-5D. This study succeeds in its aim. It demonstrates what appears to be a meaningful quantitative approach to identifying items not fully captured by the EQ-5D, which might be added as bolt-ons. But it doesn’t answer the question of which (if any) of these bolt-ons ought to be added, or in what circumstances. That would at least require pre-definition of the evaluative space, which might not correspond to the authors’ chosen model of health-related quality of life. If it does, then these findings would be more persuasive as a reason to do away with the EQ-5D altogether.
Endogenous information, adverse selection, and prevention: implications for genetic testing policy. Journal of Health Economics Published 13th July 2017
If you can afford it, there are all sorts of genetic tests available nowadays. Some of them could provide valuable information about the risk of particular health problems in the future, and so can be used to guide individuals’ decisions about preventive care. But if the individual’s health care is financed through insurance, that same information could prove costly: it could exacerbate the classic problem of asymmetric information and adverse selection. So we need policy that deals with this. This study considers the incentives and insurance market outcomes associated with four policy options: i) mandatory disclosure of test results, ii) voluntary disclosure, iii) insurers knowing that a test was taken, but not the results, and iv) a complete ban on the use of test information by insurers. The authors describe a utility model that incorporates the use of prevention technologies, and available insurance contracts, amongst people who are informed or uninformed (according to whether they have taken a test) and high or low risk (according to test results). This is used to estimate the value of taking a genetic test, which differs under the four policy options. Under voluntary disclosure, the information from a genetic test always has non-negative value to the individual, who can choose to tell their insurer only if it’s favourable. The analysis shows that, in terms of social welfare, mandatory disclosure is expected to be optimal, while an information ban is dominated by all other options. These findings are in line with previous studies, which the authors suggest were less generalisable. In the introduction, the authors state that “ethical issues are beyond the scope of this paper”. That’s kind of a problem. I doubt anybody who supports an information ban does so on the basis that they think it will maximise social welfare in the fashion described in this paper. More likely, they’re worried about the inequities in health that mandatory disclosure could reinforce, about which this study tells us nothing. Still, an information ban seems to be a popular policy, and studies like this suggest that such decisions should be reconsidered in light of their expected impact on social welfare.
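The non-negative-value result under voluntary disclosure has a simple intuition, which the stylised sketch below illustrates. All numbers are invented, and it is a deliberate simplification of the paper’s model (in particular, it ignores any equilibrium repricing by insurers when non-disclosure itself becomes informative).

```python
# Stylised two-type sketch (hypothetical numbers; a simplification of the
# paper's model that ignores equilibrium repricing by insurers): under
# voluntary disclosure, a low-risk result can be revealed to obtain a
# cheaper premium while a high-risk result can be withheld, so the test's
# value to the individual is never negative.
loss = 100.0                 # insured loss
p_high, p_low = 0.6, 0.1     # illness probability by risk type (invented)
share_high = 0.3             # population share of high-risk types (invented)

# Premium before testing reflects the population average risk.
pooled_premium = (share_high * p_high + (1 - share_high) * p_low) * loss

# After testing: high-risk types withhold the result and keep paying the
# pooled premium; low-risk types disclose and pay an actuarially fair one.
low_premium = p_low * loss
value_of_test = (1 - share_high) * (pooled_premium - low_premium)
print(value_of_test)  # expected premium saving, non-negative by construction
```

The individual only ever uses the result when it helps, so the expected gain cannot be negative; it is precisely this one-sided use of information that drives the adverse selection the paper worries about.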
Returns to scientific publications for pharmaceutical products in the United States. Health Economics [PubMed] Published 10th July 2017
Publication bias is a big problem. Part of the cause is that pharmaceutical companies have no incentive to publish negative findings for their own products, though positive findings may be valuable in terms of sales. As usual, it isn’t quite that simple when you really think about it. This study looks at the effect of publications on revenue for 20 branded drugs in 3 markets – statins, rheumatoid arthritis and asthma – using an ‘event-study’ approach. The authors analyse a panel of quarterly US sales data from 2003-2013 alongside publications identified through literature searches and several drug- and market-specific covariates. Effects are estimated using first-difference and difference-in-first-difference models. The authors hypothesise that publications should have an important impact on sales in markets with high generic competition, and less impact in markets without competition or with strong branded competition. Essentially, this is what they find. For statins and asthma drugs, where there was some competition, clinical studies in high-impact journals increased sales to the tune of $8 million per publication. For statins, sales volume was not significantly affected; the effect operated through price. In rheumatoid arthritis, where competition is limited, the effect on sales was mediated by the effect on volume. Studies published in lower-impact journals seemed to have a negative influence. Cost-effectiveness studies were only important in the market with high generic competition, increasing statin sales by $2.2 million on average. I’d imagine that these impacts are something of which firms already have a reasonable grasp. But this study provides value to public policy decision makers. It highlights those situations in which we might expect manufacturers to publish evidence and those in which it might be worthwhile increasing public investment to pick up the slack. It could also help identify where publication bias might be a bigger problem due to the incentives faced by pharmaceutical companies.
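To see what the first-difference approach is doing, here is a stylised numpy sketch with invented numbers: simulated quarterly sales respond to cumulative publications, and differencing strips out drug fixed effects and the common trend. The paper’s actual specification also includes covariates and a difference-in-first-difference comparison across markets, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stylised first-difference event-study sketch (all numbers invented).
n_drugs, n_quarters = 20, 44              # 20 drugs, quarterly 2003-2013
beta_true = 8.0                           # $m sales per publication (invented)
drug_fe = rng.normal(0.0, 5.0, (n_drugs, 1))   # drug fixed effects
trend = np.linspace(0.0, 10.0, n_quarters)     # common market trend
pubs = rng.poisson(0.2, (n_drugs, n_quarters)).cumsum(axis=1)
sales = (50 + drug_fe + trend + beta_true * pubs
         + rng.normal(0.0, 1.0, (n_drugs, n_quarters)))

# First-differencing removes the drug fixed effects; regress quarterly sales
# changes on changes in the cumulative publication count.
d_sales = np.diff(sales, axis=1).ravel()
d_pubs = np.diff(pubs, axis=1).ravel().astype(float)
X = np.column_stack([np.ones_like(d_pubs), d_pubs])
beta_hat = np.linalg.lstsq(X, d_sales, rcond=None)[0][1]
print(round(beta_hat, 1))  # recovers roughly the true effect of 8.0
```

The intercept absorbs the common trend (its first difference is constant), so the coefficient on publication changes recovers the per-publication sales effect; the real analysis then compares such estimates across markets with different competitive structures.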