Our authors provide regular round-ups of the latest peer-reviewed journals. We cover all issues of major health economics journals as well as other notable releases. Visit our journal round-up log to see past editions organised by publication title. If you’d like to write a journal round-up, get in touch.
There are four original research articles, a couple of reviews, and a handful of letters and editorials in this issue of PharmacoEconomics. A few of the studies concern methods for economic evaluation and are therefore of interest to me. The leading editorial outlines a framework for considering disease-specific elements of value using an impact inventory table. Essentially, the proposal is to speak to patients and clinicians to figure out what matters most in any given context and to use this for value assessment. It seems reasonable, but the editorial lacks concrete examples. I may lack imagination, but I struggle to see the problem that this paper is trying to solve.
In recent months, we’ve seen a glut of papers outlining the challenge posed by gene therapies to cost-effectiveness analysis. This issue includes another review article, in this case focussing on a set of evaluations of voretigene neparvovec. A key finding is that the reported cost-effectiveness ratios varied greatly across jurisdictions, largely because of methodological choices. The point is that standard economic evaluation methods remain valid but, because of much greater uncertainty – particularly in extrapolation – methodological choices matter a great deal.
The issue also includes a model-based economic evaluation and budget impact analysis of ibalizumab for HIV in the US. The letter-based discussion in this issue also relates to an applied cost-effectiveness analysis. A commentary highlights three recently published evaluations of the use of cannabidiol for Lennox–Gastaut syndrome and Dravet syndrome in different settings. The authors are from the company marketing the drug, and their main point is that it’s a complicated decision problem. Inevitably, their critique focuses on the one analysis that showed the drug was probably not cost-effective. The authors of that study responded in a letter, and the authors of one of the other studies also weighed in with a more general comment on the need for transparency.
Speaking of transparency, replication is still a fringe activity for modellers, so I was pleased to read a study attempting to replicate decision models in obesity. The researchers chose four models to replicate and were – for the most part – quite successful in their attempts. Of course, there were inconsistencies; none of the case studies resulted in perfectly replicated results. At the extreme, one case study resulted in a 10-fold difference in the cost-effectiveness ratio, which might result in a different decision in some circumstances. The study’s main contribution is in the researchers’ qualitative assessment of the process, which leads them to recommend some changes to the CHEERS statement. They call for some additional reporting that would have saved the replicators having to make certain assumptions. This is timely, as CHEERS II is almost ready to launch.
The issue also includes a review of estimates of the global economic burden of ADHD, with estimates ranging from $244 to $18,751. On its own, that information is worse than useless, but the paper provides some valuable pointers for future research priorities. Elsewhere in the issue, we have one of the first value sets for the EQ-5D-Y, for Slovenia, and a study describing some cardiovascular risk models for people with diabetes.