Our authors provide regular round-ups of the latest peer-reviewed journals. We cover all issues of major health economics journals as well as other notable releases. Visit our journal round-up log to see past editions organised by publication title. If you’d like to write a journal round-up, get in touch.
Hello. It’s been a while. I don’t know what happened, really. Let’s blame the pandemic. Here’s my latest journal round-up, which will mark a return to form. The newest issue of AHEHP (one of my favourite journals, you may recall) includes a few papers that piqued my interest. In particular, a couple of papers on cost-effectiveness thresholds, which have become a popular topic for discussion on this blog.
As we know, recent years have seen numerous attempts to identify the opportunity cost of health expenditures at the national level. These estimates can, in turn, be used to infer the cost of QALY gains and a cost-effectiveness threshold. Here in the UK, when we conduct cost-effectiveness analyses, we compare ICERs to our ‘willingness to pay’ value of £30,000 per QALY, as specified by NICE. Some might argue that we should instead use an ‘opportunity cost’ estimate of £12,936. A new review study in this issue looks at whether analysts are using published ‘opportunity cost’ estimates for Spain, Australia, the Netherlands, and South Africa. From 1,171 published cost-effectiveness analyses (and protocols), the authors found that 28% of Spanish studies and 11% of Australian studies cited published ‘opportunity cost’ estimates, with none in the other countries. So, most aren’t using these new estimates. But the interesting part is the authors’ regression analysis of what explains the use of the estimates. Surprise, surprise, if an ICER is below the ‘opportunity cost’ threshold, a study is more likely to report it. Check yourselves!
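The decision rule at stake can be sketched in a few lines. This is a hypothetical illustration: the intervention's incremental cost and QALY gain are invented, and only the two threshold values (£30,000 and £12,936 per QALY) come from the discussion above.

```python
def icer(delta_cost: float, delta_qalys: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return delta_cost / delta_qalys

# Hypothetical intervention: £10,000 extra cost for 0.5 extra QALYs
example = icer(10_000, 0.5)   # £20,000 per QALY

nice_wtp = 30_000          # NICE 'willingness to pay' threshold
opportunity_cost = 12_936  # published 'opportunity cost' estimate

print(example <= nice_wtp)          # True: cost-effective at £30,000
print(example <= opportunity_cost)  # False: not cost-effective at £12,936
```

The same study can flip from 'cost-effective' to 'not cost-effective' depending on which threshold the analyst adopts, which is exactly why selective citation of the lower estimate matters.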
Presenting us with the provocative question of whether Hungary should ‘pay more for a QALY gain’ than other countries, one study reports on an attempt to identify a new cost-effectiveness threshold for the country. This isn’t a quantitative analysis of the sort discussed in the paper above. Instead, the authors reviewed guidelines published by agencies in Europe, extracted information, and discussed it with various stakeholders. Hungary’s threshold is 3 times GDP per capita, or about €40,000. That makes it one of the highest in Europe, once you account for purchasing power. Some western European countries have much lower thresholds, which aren’t linked to GDP per capita and therefore do not rise year-on-year. This is troubling. The authors identify two critical problems: 1) the threshold is too high relative to other countries, and 2) it does not reflect other priorities. The team set out to find a more acceptable basis for a Hungarian threshold. In truth, it seems like they struggled. They focus on discussing differential thresholds according to rarity or severity of disease but, ultimately, they still end up recommending a baseline threshold relative to GDP per capita (1.5x instead of 3x). It’s unfortunate that the team didn’t consider a more radical perspective, that perhaps a threshold – especially one based on GDP – might be the wrong policy tool to begin with.
One country that gets short shrift in the Hungarian threshold study is Germany, which seems to cope without a threshold. Germany has always seemed like something of an enigma to me when it comes to HTA. The opening paper of this issue is a Commentary piece considering Germany’s progress toward value-based pricing, marking the tenth anniversary of ‘AMNOG’, the ‘Act to Reorganize the Pharmaceuticals’ Market in the Statutory Health Insurance System’. It’s a helpful account of German policy, but it still left me with more questions than answers.
There are some useful systematic reviews in this issue. One study reviews economic evaluations in osteoarthritis, while another reviews economic evaluations in the Middle East and North Africa. Another considers the much-maligned notion (in the UK, at least) of herd immunity. The authors reviewed cost-effectiveness analyses of immunisation programmes in low- and middle-income countries to determine whether, and how, they accounted for herd immunity. Of 243 studies, 44 included it and, in many cases, doing so made a substantive difference to the implied optimal strategy.
An intriguing study sets out a framework for forecasting the value to society of a new class of prescription drugs. The researchers build a forecasting model that considers prescribing behaviour, elicited through qualitative interviews with physicians. The authors apply their methodology to the case of direct-acting antivirals for hepatitis C in India. Beyond the message that ‘mixed methods are useful’, it’s hard to see what is generalisable beyond this case study.
We also have some applied economic evaluation studies in this issue. There’s a report from a NICE assessment of an implantable device for refractory overactive bladder, which is interesting in its reliance on implicit comparisons based on the characteristics of the technology rather than direct evidence from comparisons with other technologies. There’s also a study demonstrating the cost-effectiveness of a talking intervention to prevent dental caries in children.
Other research includes a study generating ‘experience-based values’ for the EQ-5D-Y, and a contingent valuation study of the COVID-19 vaccine in China that warns about the methodology’s validity due to scope issues. A difference-in-differences analysis of the impact of the Affordable Care Act on risky behaviours finds some indication that smoking and excessive drinking fell and that Americans didn’t take the opportunity to go wild and make themselves sick.
And, finally, a lovely demonstration of good practice. Howard Thom wrote a Research Letter, using a simulation study to show the significant differences that may arise between deterministic and probabilistic analyses in model-based cost-effectiveness analyses. Thom argues that we must favour the latter. All of the data and code were made freely available, and another researcher – Javier Sanchez Alvarez – spotted an error, which was duly corrected. Kudos to Howard and Javier. It should worry us that we so rarely see this.
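Thom's own simulation and code are in the letter itself; as a hedged illustration of why the two approaches can diverge, here is a toy model with entirely hypothetical parameter values. Its output is nonlinear in an uncertain input (expected cycles alive is 1/p), so running the model once at the mean inputs (deterministic) differs from averaging the model over parameter draws (probabilistic), by Jensen's inequality.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical values, chosen only for illustration
COST_PER_CYCLE = 1_000.0   # annual cost while alive
QALY_PER_CYCLE = 0.8       # utility per year alive
WTP = 30_000.0             # threshold (£ per QALY) used for net benefit

def net_monetary_benefit(p_death):
    """Stylised Markov-type model: expected cycles alive = 1 / p_death,
    so the output is nonlinear in the uncertain parameter."""
    expected_cycles = 1.0 / p_death
    qalys = QALY_PER_CYCLE * expected_cycles
    cost = COST_PER_CYCLE * expected_cycles
    return qalys * WTP - cost

# Uncertain annual death probability with mean 0.1
samples = rng.beta(3, 27, size=100_000)

deterministic = net_monetary_benefit(samples.mean())   # one run at mean inputs
probabilistic = net_monetary_benefit(samples).mean()   # mean over PSA draws

print(f"deterministic NMB: £{deterministic:,.0f}")
print(f"probabilistic NMB: £{probabilistic:,.0f}")
# Because 1/p is convex, E[1/p] > 1/E[p], so the probabilistic
# estimate exceeds the deterministic one here.
```

None of this is Thom's model; it is only meant to show the mechanism: whenever model outputs are nonlinear in uncertain parameters, the deterministic result is not the expected result, which is the case for probabilistic analysis.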