Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.
A methodological review of US budget-impact models for new drugs. PharmacoEconomics [PubMed] Published 22nd June 2016
Budget-impact analysis is a necessary step in the decision-making process. In the UK, NICE make recommendations on the basis of cost-effectiveness (mainly) and facilitate regional budget-impact estimates using a costing template. Guidelines are available from a whole host of HTA agencies and other organisations. This study reviews the methods used in US-based studies of new drugs. The authors identified 7 key elements to consider in the design of budget-impact models: i) model structure, ii) population size and characteristics, iii) time horizon, iv) treatment mix, v) treatment costs, vi) disease-related costs and vii) uncertainty analysis. Papers identified in a literature review were divided into those for drugs for acute conditions (n=8), those for drugs for chronic conditions (n=27) and studies that combined budget-impact and cost-effectiveness analyses for any kind of drug (n=10). Each paper is summarised in terms of the 7 key elements. The methods adopted by the reviewed studies were often inconsistent with recommendations. For example, many studies omitted adverse event costs, and a 1-year time horizon was often adopted where it may not be sufficient. Combined budget-impact and cost-effectiveness models are not recommended, on the basis that this adds unnecessary complexity. Generally, the authors support the use of costing models with simple structures and advise a cost-calculator approach wherever possible. A neat table is provided which sets out recommendations and common flaws in relation to the key elements.
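To make the cost-calculator idea concrete, here is a minimal sketch of one. Everything in it – the function name, the population size, the uptake path and all of the cost figures – is hypothetical and not taken from any reviewed study; it just shows how the review’s key elements (population, treatment mix, treatment costs, disease-related costs, time horizon) fit together arithmetically.

```python
# A minimal cost-calculator sketch of a budget-impact model. All names and
# figures are hypothetical, not from any study in the review.

def budget_impact(population, uptake_new, cost_current, cost_new,
                  disease_cost_current, disease_cost_new):
    """Yearly budget impact of a new drug displacing the current one.

    `uptake_new` is the share of treated patients on the new drug in each
    year of the horizon; costs are annual per-patient figures.
    """
    impacts = []
    for share in uptake_new:
        n_new = population * share
        n_current = population - n_new
        spend_with = (n_new * (cost_new + disease_cost_new)
                      + n_current * (cost_current + disease_cost_current))
        spend_without = population * (cost_current + disease_cost_current)
        impacts.append(spend_with - spend_without)
    return impacts

# Hypothetical scenario: 10,000 eligible patients, uptake of the new drug
# rising from 10% to 30% over a 3-year horizon.
print(budget_impact(10_000, [0.1, 0.2, 0.3],
                    cost_current=2_000, cost_new=5_000,
                    disease_cost_current=8_000, disease_cost_new=7_000))
```

Under these made-up inputs the extra spend grows with uptake (roughly 2m, 4m, 6m per year), which is the kind of transparent arithmetic the authors seem to have in mind when they advise against unnecessary structural complexity.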
Why do health economists promote technology adoption rather than the search for efficiency? A proposal for a change in our approach to economic evaluation in health care. Medical Decision Making [PubMed] Published 17th June 2016
It seems like the wrong question. Health economists don’t really decide what to research, research funding bodies do. It is difficult for a researcher to find the time to research something without any funding. So surely the blame lies with the NIHR et al.? The paper starts by explaining why low-value care exists, before outlining two ways in which we health economists might appropriately realign economic evaluation towards the search for efficiency. First, ‘technology management’. This is the idea that evidence should be evaluated throughout a technology’s life-cycle. The authors discuss examples from diabetic retinopathy screening and gastrointestinal endoscopy. I think they are flawed examples, as they don’t relate to disinvestment per se, but I’ll set that aside for now. The second idea is ‘pathway management’. This is akin to whole disease modelling. The authors present an illustrative example of the ways in which this might be used to ‘search for efficiency’. The authors then go on to discuss the promise and challenges associated with their suggestions and outline some things that we ought to be thinking about. Maybe research groups need reorganising along clinical lines. Certainly, we need to figure out how to deal with intellectual property associated with whole disease models. But it still seems like the wrong question to me, and health economists don’t have that much sway. Broadly speaking, so long as we’re paid to evaluate technology adoption we will be evaluating technology adoption.
Using survival analysis to improve estimates of life year gains in policy evaluations. Medical Decision Making [PubMed] Published 16th June 2016
Evaluation of policies in terms of their cost-effectiveness is increasingly possible. Often, analyses of this kind extrapolate survival in both the intervention and control groups based on life expectancy estimates from the general population. It’s unlikely that people affected by a policy under evaluation will be completely representative of the wider population. Policies are often also evaluated on the basis of near-term mortality, despite the possibility of longer-term impacts. This study explores the potential for using parametric survival models to extrapolate outcomes for policy evaluations, as is often done for clinical trials. As an example, the authors used their previously published evaluation of the Advancing Quality pay-for-performance programme. Three methods are compared: i) application of published life expectancy tariffs, ii) incorporation of short-term observed survival and iii) extrapolation using survival models. The third approach used two separate models: one for short-term post-hospitalisation survival and another for long-term survival that excluded the first 30 days after admission. For the evaluation of the AQ programme, the three methods found increases in life expectancy of i) 0.154, ii) 0.221 and iii) 0.380 years. This demonstrates the importance both of incorporating observed mortality rates using survival analysis and of using all available data.
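The intuition behind the third method can be sketched with a deliberately simple parametric model: fit an exponential survival curve to short-term follow-up and extrapolate beyond the observation window. The follow-up data below are invented, and the published analysis used more sophisticated models; the sketch only shows why restricting attention to the observed window can understate life-year gains.

```python
import math

# Illustrative parametric extrapolation with an exponential model.
# All follow-up data below are made up for the sketch.

def exp_rate(times, events):
    """MLE hazard for an exponential model: events / total time at risk."""
    return sum(events) / sum(times)

def restricted_mean(rate, horizon):
    """Mean survival up to `horizon` years under S(t) = exp(-rate * t)."""
    return (1 - math.exp(-rate * horizon)) / rate

# Hypothetical 2-year follow-up: (years observed, died during follow-up?)
control = ([0.5, 1.2, 2.0, 2.0, 0.8], [1, 1, 0, 0, 1])
treated = ([1.5, 2.0, 2.0, 2.0, 1.1], [1, 0, 0, 0, 1])

r_control = exp_rate(*control)
r_treated = exp_rate(*treated)

# Gain within the 2-year window vs the fully extrapolated gain
# (the exponential mean survival is 1 / rate).
gain_short = restricted_mean(r_treated, 2) - restricted_mean(r_control, 2)
gain_full = 1 / r_treated - 1 / r_control
print(round(gain_short, 2), round(gain_full, 2))
```

In this toy example the extrapolated life-year gain is several times the within-window gain, echoing the paper’s finding that the survival-model approach (0.380 years) exceeded the estimate based on short-term observed survival alone (0.221 years).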