Rita Faria’s journal round-up for 28th January 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Appraising the value of evidence generation activities: an HIV modelling study. BMJ Global Health [PubMed] Published 7th December 2018

How much should we spend on implementing our health care strategy versus getting more information to devise a better strategy? Should we devolve budgets to regions or administer the budget centrally? These are difficult questions, and this new paper by Beth Woods et al. takes a brilliant stab at answering them.

The paper looks at the HIV prevention and treatment policies in Zambia. It starts by finding the most cost-effective strategy and the corresponding budget in each region, given what is currently known about the prevalence of the infection, the effectiveness of interventions, etc. The idea is that the regions receive a cost-effective budget to implement a cost-effective strategy. The issue is that the cost-effective strategy and budget are devised according to what we currently know. In practice, regions might face a situation on the ground which is different from what was expected. Regions might not have enough budget to implement the strategy or might have some left over.

What if we spend some of the budget to get more information to make a better decision? This paper considers the value of perfect information given the costs of research. Depending on the size of the budget and the cost of research, it may be worthwhile to divert some funds to get more information. But what if we had more flexibility in the budgetary policy? This paper tests two more budgetary options: a national hard budget but with the flexibility to transfer funds from under- to overspending regions, and a regional hard budget with a contingency fund.
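For readers who like to see the mechanics, here is a minimal sketch of how the expected value of perfect information (EVPI) can be computed from simulated net benefits for competing strategies. The strategy names and numbers are entirely hypothetical, and this is not the authors' model, just the standard calculation that sits behind this kind of analysis.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical simulated net monetary benefit (NMB) for three strategies
# under parameter uncertainty - purely illustrative numbers.
n_sims = 10_000
nmb = np.column_stack([
    rng.normal(1.00e6, 0.30e6, n_sims),  # strategy A
    rng.normal(1.05e6, 0.40e6, n_sims),  # strategy B
    rng.normal(0.95e6, 0.20e6, n_sims),  # strategy C
])

# Decide now, with current information: pick the strategy with the
# highest *expected* net benefit.
enb_current = nmb.mean(axis=0).max()

# With perfect information we would know the true parameter values and
# pick the best strategy in each simulated draw.
enb_perfect = nmb.max(axis=1).mean()

evpi = enb_perfect - enb_current
print(f"EVPI per decision: {evpi:,.0f}")
# Research is only potentially worthwhile if the (population-scaled)
# EVPI exceeds the cost of the proposed evidence generation activity.
```

The paper goes well beyond this, of course, by weighing the costs of research against the budgetary rules, but the comparison of 'decide now' versus 'learn first' has this calculation at its core.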

The results are remarkable. The best budgetary policy is to have a national budget with the flexibility to reallocate funds across regions. This is a fascinating paper, with implications not only for prioritisation and budget setting in LMICs but also for high-income countries. For example, the 2012 Health and Social Care Act broke down primary care trusts (PCTs) into smaller clinical commissioning groups (CCGs) and gave them hard budgets. Some CCGs went into deficit, and there are reports that some interventions have been cut back as a result. There are probably many reasons for these deficits, but this paper suggests that hard regional budgets have negative consequences of their own.

Health economics methods for public health resource allocation: a qualitative interview study of decision makers from an English local authority. Health Economics, Policy and Law [PubMed] Published 11th January 2019

Our first paper looked at how to use cost-effectiveness to allocate resources between regions and across health care services and research. Emma Frew and Katie Breheny look at how decisions are actually made in practice, but this time in a local authority in England. Another change brought in by the 2012 Health and Social Care Act was to move public health responsibilities from the NHS to local authorities. Local authorities are now given a ring-fenced budget to implement cost-effective interventions that best match their needs. How do they make decisions? Thanks to this paper, we're about to find out.

This paper is an enjoyable read and quite an eye-opener. It was startling that health economics evidence was not much used in practice. But the barriers that were cited are not insurmountable. And the suggestions by the interviewees were really useful. There were suggestions about how economic evaluations should consider the local context, to get a fair picture of the impact of the intervention on services and on the population, and should move beyond the trial into the real world. Equity was mentioned too, as well as broadening the outcomes beyond health. Fortunately, the health economics community is working on many of these issues.

Lastly, there was a clear message to make economic evidence accessible to lay audiences. This is a topic really close to my heart, and something I’d like to help improve. We have to make our work easy to understand and use. Otherwise, it may stay locked away in papers rather than do what we intended it for. Which is, at least in my view, to help inform decisions and to improve people’s lives.

I found this paper reassuring in that there is clearly a need for economic evidence and a desire to use it. Yes, there are some teething issues, but we’re working in the right direction. In sum, the future for health economics is bright!

Survival extrapolation in cancer immunotherapy: a validation-based case study. Value in Health Published 13th December 2018

Often, the cost-effectiveness of cancer drugs hangs on the method used to extrapolate overall survival. This is because many cancer drugs receive their marketing authorisation before most patients in the trial have died. Extrapolation is tested extensively in sensitivity analyses, and it is the subject of many discussions in NICE appraisal committees. Ultimately, at the point of making the decision, the correct method to extrapolate is a known unknown. Only in hindsight can we know for sure what the best choice was.

Ash Bullement and colleagues take advantage of hindsight to identify the best extrapolation method for a clinical trial of an immunotherapy drug. Survival after treatment with immunotherapy drugs is more difficult to predict because some patients can survive for a very long time, while others have much poorer outcomes. The authors fitted survival models to the 3-year data cut, which was available at the time of the NICE technology appraisal. They then compared their predictions to the observed survival in the 5-year data cut and to long-term survival trends from registry data. They found that a piecewise model and a mixture-cure model gave the best predictions at 5 years.
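To illustrate the general approach (this is a toy sketch with simulated data, not the authors' analysis), the snippet below fits a Weibull model to a censored 3-year data cut by maximum likelihood and compares the extrapolated 5-year survival with what is later observed.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulated trial data: true event times, administratively censored at 3 years.
true_t = rng.weibull(1.2, 300) * 2.5
cutoff = 3.0
time = np.minimum(true_t, cutoff)
event = (true_t <= cutoff).astype(float)

def neg_log_lik(params, t, d):
    """Weibull negative log-likelihood with right censoring."""
    shape, scale = np.exp(params)  # optimise on the log scale to keep parameters positive
    log_h = np.log(shape / scale) + (shape - 1) * np.log(t / scale)  # log hazard
    log_S = -(t / scale) ** shape                                    # log survival
    return -np.sum(d * log_h + log_S)

fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(time, event))
shape, scale = np.exp(fit.x)

# Extrapolate beyond the 3-year cut and compare with the later data cut
# (here simply the simulated 'truth' at 5 years).
S5_pred = np.exp(-(5.0 / scale) ** shape)
S5_obs = (true_t > 5.0).mean()
print(f"Predicted 5-year survival: {S5_pred:.3f}, observed: {S5_obs:.3f}")
```

In practice, the authors compare a whole family of models (including piecewise and mixture-cure specifications) against the later data cut and registry trends, but the validation logic is the same: fit to the early cut, predict forward, and check.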

This is a relevant paper for those of us who work in the technology appraisal world. I have to admit that I can be sceptical of piecewise and mixture-cure models, but they definitely have a role in our toolbox for survival extrapolation. Ideally, we’d have a study like this for all the technology appraisals hanging on the survival extrapolation so that we can take learnings across cancers and classes of drugs. With time, we would get to know more about what works best for which condition or drug. Ultimately, we may be able to get to a stage where we can look at the extrapolation with less inherent uncertainty.


Sam Watson’s journal round-up for 8th October 2018


A cost‐effectiveness threshold based on the marginal returns of cardiovascular hospital spending. Health Economics [PubMed] Published 1st October 2018

There are two types of cost-effectiveness threshold of interest to researchers. First, there's the societal willingness-to-pay for a given gain in health or quality of life. This is what many regulatory bodies, such as NICE, use. Second, there is the actual return on medical spending achieved by the health service. Reimbursing technologies that offer a lesser return for every pound or dollar spent would reduce the overall efficiency of the health service. Some refer to this as the opportunity cost, although in a technical sense I would disagree that it is the opportunity cost per se. Nevertheless, this latter definition has seen a growth in empirical work; with some data on health spending and outcomes, we can start to estimate this threshold.

This article looks at the relationship between spending on cardiovascular disease (CVD) and survival among elderly age-gender groups in the Netherlands. Estimating the causal effect of spending is tricky with these data: spending may go up because survival is worsening, external factors like smoking may have a confounding role, and using five-year age bands (as the authors do) over time can lead to bias as the average age within these bands increases as demographics shift. The authors do a pretty good job in specifying a Bayesian hierarchical model with enough flexibility to accommodate these potential issues. For example, linear time trends are allowed to vary by age-gender group, and dynamic effects of spending are included. However, there's no examination of whether the model is actually a good fit to the data, something which I'm growing to believe is an area where we, in health and health services research, need to improve.
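To fix ideas, a stylised version of this kind of hierarchical specification, with made-up notation rather than the authors' exact model, might write the log mortality rate for age-gender group $g$ in year $t$ as

\[
\log \mu_{g,t} = \alpha_g + \beta_g t + \gamma \log s_{g,t} + \delta \log s_{g,t-1} + \varepsilon_{g,t},
\]

where $s_{g,t}$ is per-head spending, $\alpha_g$ and $\beta_g$ are group-specific intercepts and linear time trends drawn from common hierarchical priors, $\gamma$ is the contemporaneous spending elasticity, and $\delta$ captures a dynamic (lagged) effect of spending.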

Most interestingly (for me at least), the authors look at a range of priors based on previous studies and a meta-analysis of similar studies. The estimated elasticity using information from prior studies is more 'optimistic' about the effect of health spending than that estimated using a 'vague' prior. This could be because CVD or the Netherlands differs in some particular way from the settings of previous studies. I might argue that the modelling here is better than some previous efforts as well, which could explain the difference. Extrapolating with life tables, the authors estimate a base-case cost per QALY of €40,000.
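The final step, from an elasticity to a threshold, is simple arithmetic. Here is a sketch with made-up numbers (not the paper's estimates) just to show the mechanics:

```python
# Purely illustrative inputs - not the paper's estimates.
spend_per_head = 500.0         # annual CVD spending per person (EUR)
deaths_per_1000 = 10.0         # baseline annual mortality in the group
elasticity = -0.1              # % change in mortality per 1% change in spending
qalys_per_death_averted = 5.0  # discounted QALYs gained per death averted (from life tables)

extra_spend = 0.01 * spend_per_head * 1000             # 1% more spending for 1,000 people
deaths_averted = -elasticity * 0.01 * deaths_per_1000  # deaths averted among those 1,000
qalys_gained = deaths_averted * qalys_per_death_averted

threshold = extra_spend / qalys_gained
print(f"Implied cost per QALY: {threshold:,.0f} EUR")  # 100,000 EUR with these inputs
```

A more 'optimistic' (larger in magnitude) elasticity mechanically lowers the implied threshold, which is why the choice of prior matters so much here.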

Early illicit drug use and the age of onset of homelessness. Journal of the Royal Statistical Society: Series A Published 11th September 2018

How the consumption of different things, like food, drugs, or alcohol, affects life and health outcomes is a difficult question to answer empirically. Consider a recent widely criticised study on alcohol published in The Lancet. Among a number of issues, despite including a huge amount of data, the paper was unable to address the problem that different kinds of people drink different amounts. The kind of person who is teetotal may be so for a number of reasons, including alcoholism, interaction with medication, or other health issues. Similarly, studies on the effect of cannabis consumption have shown, among other things, an association with lower IQ and poorer mental health. But are those who consume cannabis already those with lower IQs or at higher risk of psychoses? This article considers the relationship between cannabis use and homelessness. While homelessness may lead to an increase in drug use, drug use may also be a cause of homelessness.

The paper is a neat application of bivariate hazard models. We recently looked at shared parameter models on the blog, which factorise the joint distribution of two variables into their marginal distributions by assuming their relationship is due to some unobserved variable. The bivariate hazard models work in a similar way here: the joint density is specified as the product of the marginal densities, conditional on individual-level unobserved heterogeneity. This specification allows (i) people to have different unobserved risks for both homelessness and cannabis use and (ii) cannabis use to have a causal effect on homelessness and vice versa.
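In stylised notation (an illustration of the general shared-frailty idea rather than the paper's exact likelihood), the joint density of the onset times for homelessness ($t_h$) and cannabis use ($t_c$) might be written as

\[
f(t_h, t_c) = \int f_h(t_h \mid u)\, f_c(t_c \mid u)\, \mathrm{d}G(u),
\]

where $u$ is the individual unobserved heterogeneity (frailty) with distribution $G$, and each hazard additionally depends on whether the other event has already occurred. The frailty soaks up correlated unobserved risks, while the cross-dependence of the hazards is what allows a causal effect in each direction.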

Despite the careful set-up, though, I'm not wholly convinced of the face validity of the results. The authors claim that daily cannabis use among men has a large effect on becoming homeless – as large an effect as having separated parents – which seems implausible to me. Cannabis use can cause psychological dependency, but I can't see people choosing it over having a home as they might with something like heroin. The authors also claim that homelessness doesn't really have an effect on cannabis use among men, because the estimated effect is "relatively small" (it is of the same order of magnitude as the reverse causal effect) and only "marginally significant". Interpreting these results in the context of cannabis use would then be difficult, though. The paper provides much additional material of interest. However, the conclusion that regular cannabis use, all else being equal, has a "strong effect" on male homelessness seems both difficult to conceptualise and not in keeping with the messiness of the data and the complexity of the empirical question.

How could health care be anything other than high quality? The Lancet: Global Health [PubMed] Published 5th September 2018

Tedros Adhanom Ghebreyesus, or Dr Tedros as he's better known, is the head of the WHO. This editorial was penned in response to the recent Lancet Commission on Health Care Quality and related studies (see this round-up). However, I was critical of these studies for a number of reasons, in particular the conflation of 'quality' as we normally understand it with everything else that may impact on how a health system performs. This includes resourcing, which is obviously low in poor countries, availability of labour and medical supplies, and demand-side choices about health care access. The empirical evidence was fairly weak; even in countries like the UK, in which we're swimming in data, we struggle to quantify quality. Data are also often averaged at the national level, masking huge underlying variation within countries. This editorial is, therefore, a bit of an empty platitude: of course we should strive to improve 'quality' – its goodness is definitional. But without a solid understanding of how to do this, or even of what we mean when we say 'quality' in this context, we're not really saying anything at all. Proposing that we need a 'revolution' without any real concrete proposals is fairly meaningless and ignores the massive strides that have been made in recent years. Delivering high-quality, timely, effective, equitable, and integrated health care in the poorest settings means more resources. Tinkering with what little services already exist for those most in need is not going to produce a revolutionary change. But this strays into political territory, which UN organisations often flounder in.

Editorial: Statistical flaws in the teaching excellence and student outcomes framework in UK higher education. Journal of the Royal Statistical Society: Series A Published 21st September 2018

As a final note for our academic audience, we give you a statement on the Teaching Excellence Framework (TEF). For our non-UK audience, the TEF is a new system being introduced by the government, which seeks to introduce more of a ‘market’ in higher education by trying to quantify teaching quality and then allowing the best-performing universities to charge more. No-one would disagree with the sentiment that improving higher education standards is better for students and teachers alike, but the TEF is fundamentally statistically flawed, as discussed in this editorial in the JRSS.

Some key points of contention are: (i) TEF doesn't actually assess any teaching, such as through observation; (ii) there is no consideration of uncertainty about scores and rankings; (iii) "The benchmarking process appears to be a kind of poor person's propensity analysis" – copied verbatim as I couldn't have phrased it any better; (iv) there has been no consideration of gaming the metrics; and (v) the proposed models do not reflect the actual aims of TEF and are likely to be biased. Economists will also likely have strong views on how the TEF incentives will affect institutional behaviour. But, as Michael Gove, the former justice and education secretary, said, Britons have had enough of experts.


Chris Sampson’s journal round-up for 27th June 2016


A methodological review of US budget-impact models for new drugs. PharmacoEconomics [PubMed] Published 22nd June 2016

Budget-impact analysis is a necessary step in the decision-making process. In the UK, NICE make recommendations on the basis of cost-effectiveness (mainly) and facilitate regional budget-impact estimates using a costing template. Guidelines are available from a whole host of HTA agencies and other organisations. This study reviews the methods used in US-based studies of new drugs. The authors identified 7 key elements to consider in the design of budget-impact models: i) model structure, ii) population size and characteristics, iii) time horizon, iv) treatment mix, v) treatment costs, vi) disease-related costs and vii) uncertainty analysis. Papers identified in a literature review were divided into those for drugs for acute conditions (n=8) and chronic conditions (n=27) and studies that combined budget-impact and cost-effectiveness analyses for any kind of drug (n=10). Each paper is summarised in terms of the 7 key elements. The methods adopted by the reviewed studies were not consistent with recommendations. For example, many studies omitted adverse event costs and a 1-year time horizon was often adopted where it may not be sufficient. Combined budget-impact and cost-effectiveness models are not recommended, on the basis that this adds unnecessary complexity. Generally, the authors support the use of costing models with simple structures and advise the use of a cost-calculator approach wherever possible. A neat table is provided which sets out recommendations and common flaws in relation to the key elements.
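For a flavour of the cost-calculator approach the authors favour, here is a minimal sketch with entirely hypothetical inputs (not taken from any of the reviewed studies):

```python
# Hypothetical inputs for a simple budget-impact cost calculator.
eligible_population = 50_000          # patients eligible for treatment each year
annual_cost_new = 12_000.0            # per-patient annual cost with the new drug
annual_cost_current = 8_000.0         # per-patient annual cost with the current mix
uptake_by_year = [0.10, 0.25, 0.40]   # assumed market share of the new drug, years 1-3

for year, uptake in enumerate(uptake_by_year, start=1):
    treated = eligible_population * uptake
    # Budget impact = spend under the new treatment mix minus spend under the old mix.
    budget_impact = treated * (annual_cost_new - annual_cost_current)
    print(f"Year {year}: budget impact = {budget_impact:,.0f}")
```

A real model would also need disease-related cost offsets, adverse event costs, and uncertainty analysis – exactly the elements the review finds are most often missing.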

Why do health economists promote technology adoption rather than the search for efficiency? A proposal for a change in our approach to economic evaluation in health care. Medical Decision Making [PubMed] Published 17th June 2016

It seems like the wrong question. Health economists don't really decide what to research; research funding bodies do. It is difficult for a researcher to find the time to research something without any funding. So surely the blame lies with the NIHR et al.? The paper starts by explaining why low-value care exists, before outlining two ways in which we health economists might appropriately realign economic evaluation towards the search for efficiency. First, 'technology management'. This is the idea that evidence should be evaluated throughout a technology's life-cycle. The authors discuss examples from diabetic retinopathy screening and gastrointestinal endoscopy. I think these are flawed examples, as they don't relate to disinvestment per se, but I'll set that aside for now. The second idea is 'pathway management'. This is akin to whole disease modelling. The authors present an illustrative example of the ways in which this might be used to 'search for efficiency'. The authors then go on to discuss the promise and challenges associated with their suggestions and outline some things that we ought to be thinking about. Maybe research groups need reorganising along clinical lines. Certainly, we need to figure out how to deal with intellectual property associated with whole disease models. But it still seems like the wrong question to me, and that health economists don't have that much sway. Broadly speaking, so long as we're paid to evaluate technology adoption, we will be evaluating technology adoption.

Using survival analysis to improve estimates of life year gains in policy evaluations. Medical Decision Making [PubMed] Published 16th June 2016

Evaluation of policies in terms of their cost-effectiveness is increasingly possible. Often, analyses of this kind extrapolate survival in both the intervention and the control group based on life expectancy estimates from the general population. It's unlikely that people affected by a policy under evaluation will be completely representative of the wider population. Policies are often also evaluated on the basis of near-term mortality, despite the possibility that they have longer-term impacts. This study explores the potential for using parametric survival models to extrapolate outcomes for policy evaluations, as is often done for clinical trials. As an example, the authors used their previously published evaluation of the Advancing Quality (AQ) pay-for-performance programme. Three methods are compared: i) application of published life expectancy tariffs, ii) incorporation of short-term observed survival and iii) extrapolation using survival models. The third approach used two separate models: one for short-term post-hospitalisation survival and another for long-term survival that excluded the first 30 days after admission. For the evaluation of the AQ programme, the three methods found increases in life expectancy of i) 0.154, ii) 0.221 and iii) 0.380 years, respectively. This demonstrates the importance both of incorporating observed mortality rates using survival analysis and of using all available data.
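As a toy illustration of why the method matters (simulated inputs, not the authors' AQ results), compare life years computed by attaching a fixed life-table life expectancy to short-term survivors with life years from integrating an extrapolated survival curve:

```python
import numpy as np

# Hypothetical 30-day survival probabilities in the two groups.
surv_30d = {"control": 0.90, "intervention": 0.92}
life_expectancy = 8.0   # life-table life expectancy (years) applied to survivors
horizon = 40.0          # extrapolation horizon (years)

# Method (i): apply a published life expectancy tariff to 30-day survivors.
ly_tariff = {g: p * life_expectancy for g, p in surv_30d.items()}

# Method (iii): integrate an extrapolated survival curve for survivors; with an
# exponential model the restricted mean survival has a closed form, (1 - exp(-h*T)) / h.
hazard = {"control": 0.125, "intervention": 0.118}  # assumed long-term hazards
ly_extrap = {g: surv_30d[g] * (1 - np.exp(-hazard[g] * horizon)) / hazard[g]
             for g in surv_30d}

print("Gain (tariff):        ", round(ly_tariff["intervention"] - ly_tariff["control"], 3))
print("Gain (extrapolation): ", round(ly_extrap["intervention"] - ly_extrap["control"], 3))
```

Because the tariff approach forces both groups onto the same life expectancy, any mortality advantage beyond the short-term window is thrown away, which is one reason the methods can give such different estimates of the gain.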