Our authors provide regular round-ups of the latest peer-reviewed journals. We cover all issues of major health economics journals as well as other notable releases. Visit our journal round-up log to see past editions organised by publication title. If you’d like to write a journal round-up, get in touch.
This issue of PharmacoEconomics – Open is dominated by applied economic evaluations. There are applied studies on oral glucose-lowering drugs for diabetes, device implantation for glaucoma, a prompt-based intervention for the management of head and neck cancer, and movement therapy in neurorehabilitation. There’s also a systematic review of economic evaluations of community distribution of naloxone. And, though not a cost-effectiveness model per se, another modelling study demonstrates the potentially huge benefit of rapid initiation of antiretroviral therapy (ART) for HIV in Spain.
The economic evaluation that caught my attention examines digital interventions for anxiety. I reviewed this paper for the journal what seems like a lifetime ago. In that review, I encouraged the authors to make their model open-source. And they have! Hurrah! You can download all of the R scripts and play around with the model yourself. The research reported here builds on a meta-analysis of digital interventions for generalised anxiety disorder, extrapolating costs and QALYs over a lifetime horizon. The authors’ main conclusion is that digital interventions are better than nothing but are likely to be less cost-effective than drugs or group therapy. The problem is that the authors have lumped together a range of digital interventions, classified only according to whether they were supported by a clinician. To me, this is the equivalent of pooling a variety of medicines according to their mode of administration. Nobody is considering the adoption of all digital interventions en masse, so the analysis doesn’t represent a real decision problem. Thus, the implications of the study relate primarily to the need for (and value of) further research, particularly clinical effectiveness studies for digital interventions.
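For readers less familiar with lifetime extrapolation, the core arithmetic is just summing discounted streams of incremental costs and QALYs and taking their ratio. Here’s a minimal sketch in Python; all of the numbers (horizon, discount rate, annual increments) are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of lifetime extrapolation with annual discounting,
# loosely in the spirit of the model described. All numbers are invented.

def discounted_total(annual_values, rate=0.035):
    """Sum a stream of annual values, discounting at the given rate."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(annual_values))

# Invented example: 40 years of incremental QALYs and costs for a
# digital intervention versus no treatment.
horizon = 40
inc_qalys = [0.02] * horizon   # small, constant annual QALY gain
inc_costs = [150.0] * horizon  # annual platform/support cost

# Incremental cost-effectiveness ratio: cost per QALY gained.
icer = discounted_total(inc_costs) / discounted_total(inc_qalys)
print(round(icer))  # → 7500
```

Note that with constant annual increments the discounting cancels out of the ratio; in a real model, costs and QALYs vary over time (e.g. with relapse and mortality), which is where the extrapolation does its work.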
Other applied analyses in this issue focused on costs. I’ve been doing some work in neonatal nutrition, so I was interested in a study of mother’s own milk (MOM) for very low birth weight infants. The data come from a 4-year prospective cohort study within a hospital in the US, with 430 very low birth weight infants taking part. After calculating the unit costs, the authors used a generalised linear model to estimate the impact of MOM intake as a proportion of total feeds, controlling for a variety of relevant factors. There are some big costs flying around here, with the average stay in the neonatal intensive care unit costing $190,000. It’s evident that the authors aren’t sure how to deal with their findings. They want to claim that a greater proportion of MOM is associated with reduced costs, which is what they found, but the result is not statistically significant. That’s probably because the potential savings are small relative to the massive and highly variable total costs in this cohort. But more MOM – relative to donor human milk or formula – was associated with a reduction in costly complications. The authors do an excellent job of costing MOM, and their approach can inform future studies, but we can’t be confident about the causality of effects observed in this cohort.
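To give a flavour of what a regression like this does, here’s a toy version using only the standard library. The study used a generalised linear model with multiple covariates; as a hedged stand-in, this sketch fits a single-predictor least-squares line to log costs, with entirely invented data. It’s for intuition only, not a reproduction of the authors’ analysis.

```python
import math

# Toy, invented illustration of regressing log NICU cost on the
# proportion of mother's own milk (MOM) in total feeds.

def simple_ols(x, y):
    """Return (intercept, slope) of a one-predictor least-squares fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return my - slope * mx, slope

# Invented data: MOM proportion of feeds and total cost (US$) per infant.
mom_prop = [0.1, 0.3, 0.5, 0.7, 0.9]
costs = [230_000, 210_000, 195_000, 185_000, 170_000]
log_costs = [math.log(c) for c in costs]

intercept, slope = simple_ols(mom_prop, log_costs)
# A negative slope would suggest a higher MOM share is associated with
# lower costs (an association only; no causal claim).
print(slope < 0)  # → True
```

In practice, the wide variance in total costs that the authors faced means an estimate like this can easily be negative yet statistically insignificant, which is exactly the bind described above.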
This issue also includes costing studies exploring Parkinson’s disease in Luxembourg and bladder cancer in Brazil. The applied studies are rounded out by a couple of studies focusing on outcomes: one estimating health state utility values for CAR-T adverse events, and another presenting an EQ-5D-5L valuation study for Polish migrants in Ireland.
When you’re conducting research at the business end of the implementation pathway, the challenge of understanding how health technology assessment (HTA) works in different settings rears its head regularly. There’s a complex assortment of processes, often recorded in obscure and hard-to-find documents. So review studies in this context are helpful. However, there’s no shortage of reviews on this topic, so I wondered whether another review of HTA systems, which appears in this issue, could bring anything new to the table. I’m pleased to say it does. The authors set out a conceptual framework, building on earlier attempts, that characterises how HTA systems differ along nine dimensions, from governance, to remit, to whether decisions are binding. The framework gains its validity through consultation with 18 expert stakeholders. The authors reviewed grey literature from 62 institutions across 32 settings, focusing on Europe, and developed a taxonomy to organise the institutions according to the conceptual framework. The tables and figures in the paper make for a valuable reference. The usual conclusion from reviews like this is to assert that agencies exhibit many differences, and the authors make that claim. But I like that the authors also emphasise the similarities, such as the tendency towards independence, the fact that most agencies do not issue binding decisions, and the predominance of pharmaceuticals over other technologies within the remit of HTA.