Rita Faria’s journal round-up for 4th November 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The marginal benefits of healthcare spending in the Netherlands: estimating cost-effectiveness thresholds using a translog production function. Health Economics [PubMed] Published 30th August 2019

The marginal productivity of the healthcare sector or, as it is commonly known, the supply-side cost-effectiveness threshold, is a hot topic right now. A few years ago, we could only guess at the magnitude of health that was displaced by reimbursing expensive and not-that-beneficial drugs. Since the seminal work by Karl Claxton and colleagues, we have started to have a pretty good idea of what we’re giving up.

This paper by Niek Stadhouders and colleagues adds to this literature by estimating the marginal productivity of hospital care in the Netherlands. Spoiler alert: they estimated that hospital care generates 1 QALY for around €74,000 at the margin, with a 95% confidence interval ranging from €53,000 to €94,000. Remarkably, this is close to the Dutch upper reference value for the cost-effectiveness threshold of €80,000!

The estimation approach is quite elaborate, because it required constructing QALYs and costs and accounting for the effect of mortality on costs; the diagram in Figure 1 explains it well. Their approach differs from the Claxton et al. method in that they corrected for the costs due to changes in mortality directly, rather than via an instrumental variable analysis. To estimate the marginal effect of spending on health, they use a translog production function. Confidence intervals are generated by Monte Carlo simulation, and various robustness checks are presented.
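
For intuition, here’s a minimal sketch of the translog logic in Python. The data, the one-input specification and all the numbers are invented for illustration; the paper’s specification and estimation are far richer.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic hospital-level data (illustrative only; the paper constructs
# QALYs and costs from real Dutch hospital data).
rng = np.random.default_rng(0)
n = 200
log_spend = rng.normal(18, 0.5, n)  # log spending per hospital
log_qaly = (2 + 0.4 * log_spend - 0.008 * log_spend**2
            + rng.normal(0, 0.05, n))

# Simplified one-input translog: ln(QALY) = b0 + b1*ln(S) + b2*ln(S)^2.
X = sm.add_constant(np.column_stack([log_spend, log_spend**2]))
b0, b1, b2 = sm.OLS(log_qaly, X).fit().params

# Marginal product dQ/dS = (Q/S) * (b1 + 2*b2*ln(S)), evaluated at mean
# spending; its reciprocal is the marginal cost per QALY, the 'threshold'.
S = np.exp(log_spend.mean())
Q = np.exp(b0 + b1 * np.log(S) + b2 * np.log(S) ** 2)
dQ_dS = (Q / S) * (b1 + 2 * b2 * np.log(S))
print(f"Marginal cost per QALY: {1 / dQ_dS:,.0f}")
```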

This is a fantastic paper, which is sure to have important policy implications. Analysts conducting cost-effectiveness analysis in the Netherlands, do take note.

Mixed-effects models for health care longitudinal data with an informative visiting process: a Monte Carlo simulation study. Statistica Neerlandica Published 5th September 2019

Electronic health records are the current big thing in health economics research, but they’re not without challenges. One issue is that the data reflect clinical management rather than a trial protocol, which means that doctors may test more severely ill patients more often. For example, people with higher cholesterol may get more frequent cholesterol tests. The challenge is that traditional methods for longitudinal data assume independence between observation times and disease severity.

Alessandro Gasparini and colleagues set out to solve this problem. They propose using inverse intensity of visit weighting within a mixed-effects model framework. Importantly, they provide a Stata package that implements the method. It’s part of the wide-ranging and super-useful merlin package.

The directed acyclic graph makes it easy to see how the method works. Essentially, after controlling for confounders, the longitudinal outcome and the observation process are associated through shared random effects. By assuming a distribution for the shared random effects, the model blocks the path between the outcome and the observation process. The authors make it sound easy!
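
To make the reweighting idea concrete, here’s a rough Python analogue on simulated data. To be clear, this is not the authors’ method: they fit the outcome and visit processes jointly via shared random effects in merlin, whereas this sketch uses a simple visit-count model and a weighted GEE as a stand-in, with invented variable names throughout.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated long-format EHR data: sicker patients are observed more often
# and have higher outcome values (all names and numbers invented).
rng = np.random.default_rng(1)
rows = []
for pid in range(200):
    severity = rng.normal()
    n_visits = rng.poisson(np.exp(1.0 + 0.5 * severity)) + 1
    for t in np.sort(rng.uniform(0, 2, n_visits)):
        y = 5 + 0.3 * t + 0.8 * severity + rng.normal(0, 0.5)
        rows.append((pid, t, severity, n_visits, y))
df = pd.DataFrame(rows, columns=["id", "time", "severity", "n_visits", "y"])

# Step 1: model the visiting process. A Poisson count model is a crude
# stand-in for the recurrent-event intensity model fitted jointly in merlin.
per_patient = df.groupby("id").first().reset_index()
visit_model = sm.GLM(per_patient["n_visits"],
                     sm.add_constant(per_patient[["severity"]]),
                     family=sm.families.Poisson()).fit()

# Step 2: weight each observation by the inverse of its predicted intensity,
# so frequently observed (sicker) patients do not dominate the fit.
df["iiw"] = 1.0 / visit_model.predict(sm.add_constant(df[["severity"]]))

# Step 3: weighted longitudinal outcome model (a GEE here, rather than the
# authors' shared-random-effects mixed model).
gee = sm.GEE(df["y"], sm.add_constant(df[["time", "severity"]]),
             groups=df["id"], weights=df["iiw"]).fit()
print(gee.params)
```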

The paper goes through the method, compares it with other methods from the literature in a simulation study, and applies it to a real case study. It’s a brilliant paper that deserves a close look from everyone using electronic health records.

Alternative approaches for confounding adjustment in observational studies using weighting based on the propensity score: a primer for practitioners. BMJ [PubMed] Published 23rd October 2019

Would you like to use a propensity score method but don’t know where to start? Look no further! This paper by Rishi Desai and Jessica Franklin provides a practical guide to propensity score methods.

They start by explaining what a propensity score is and how it can be used, from matching to reweighting and regression adjustment. I particularly enjoyed reading about the importance of conceptualising the target of inference, that is, which treatment effect we are trying to estimate. In the medical literature, it is rare to see a paper that is clear on whether it is the average treatment effect (ATE) or the average treatment effect in the treated (ATT) that is being estimated.

I found the algorithm for method selection really useful. Here, Rishi and Jessica describe the steps in the choice of the propensity score method and recommend their preferred method for each situation. The paper also includes the application of each method to the example of dabigatran versus warfarin for atrial fibrillation. Thanks to the graphs, we can visualise how the distribution of the propensity score changes for each method and depending on the target of inference.
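
As a concrete illustration of how the target of inference changes the analysis, here is a minimal Python sketch of propensity score weighting. Everything in it (the data-generating process, the numbers) is invented rather than taken from the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic observational data (invented; think dabigatran vs warfarin).
rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=(n, 3))                          # measured confounders
p_treat = 1 / (1 + np.exp(-(x @ [0.8, -0.5, 0.3])))
treat = rng.binomial(1, p_treat)
y = 1.0 * treat + x @ [0.5, 0.2, -0.4] + rng.normal(size=n)  # true effect: 1

# Estimate the propensity score from the confounders.
e = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]

# The target of inference determines the weights.
w_ate = np.where(treat == 1, 1 / e, 1 / (1 - e))  # effect in whole population
w_att = np.where(treat == 1, 1.0, e / (1 - e))    # effect in the treated

for label, w in [("ATE", w_ate), ("ATT", w_att)]:
    est = (np.average(y[treat == 1], weights=w[treat == 1])
           - np.average(y[treat == 0], weights=w[treat == 0]))
    print(f"{label}: {est:.2f}")  # both should be near 1.0 here
```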

This is an excellent paper for those starting their propensity score analyses, or for those who would like a refresher. It’s a keeper!

Chris Sampson’s journal round-up for 20th November 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Effects of health and social care spending constraints on mortality in England: a time trend analysis. BMJ Open [PubMed] Published 15th November 2017

I’d hazard a guess that I’m not the only one here who gets angry about the politics of austerity. Having seen this study’s title, it’s clear that the research could provide fuel for that anger. It doesn’t disappoint. Recent years have seen very low year-on-year increases in public expenditure on health in England. Even worse, between 2010 and 2014, public expenditure on social care actually fell in real terms. This is despite growing need for health and social care. In this study, the authors look at health and social care spending and try to estimate the impact that reduced expenditure has had on mortality in England. The analysis uses spending and mortality data from 2001 onwards and also incorporates mortality projections for 2015-2020. Time trend analyses are conducted using Poisson regression models. From 2001 to 2010, deaths decreased by 0.77% per year (on average). The mortality rate was falling. Now it seems to be increasing; from 2011 to 2014, the average number of deaths per year increased by 0.87%. This corresponds to 18,324 additional deaths in 2014, for example. But everybody dies. Extra deaths are really sooner deaths. So the question, really, is how much sooner? The authors look at potential years of life lost and find this figure to be 75,496 life-years greater than expected in 2014, given pre-2010 trends. This shouldn’t come as much of a surprise. Spending less generally achieves less. What makes this study really interesting is that it can tell us who is losing these potential years of life as a result of spending cuts. The authors find that it’s the over-60s. Care home deaths were the largest contributor to increased mortality. A £10 cut in social care spending per capita resulted in 5 additional care home deaths per 100,000 people. When the authors looked at deaths by local area, no association was found with the level of deprivation. If health and social care expenditure are combined in a single model, we see that it’s social care spending that is driving the number of excess deaths. The impact of health spending on hospital deaths was less robust. The number of nurses acted as a mediator for the relationship between spending and mortality. The authors estimate that current spending projections will result in 150,000 additional deaths compared with pre-2010 trends. There are plenty of limitations to this study. It’s pretty much impossible (though the authors do try) to separate the effects of austerity from the effects of a weak economy. Still, I’m satisfied with the conclusion that austerity kills older people (no jokes about turkeys and Christmas, please). For me, the findings also highlight the need for more research in the context of social care, and how we (as researchers) might effectively direct policy to prevent ‘excess’ deaths.
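
To give a flavour of the kind of time trend analysis involved, here’s a toy Python version with made-up death counts that mimic the reported pattern; the paper itself uses ONS mortality data, population offsets and much more besides:

```python
import numpy as np
import statsmodels.api as sm

# Made-up annual death counts mimicking the reported pattern: a ~0.77%
# yearly decline to 2010, then a ~0.87% yearly rise.
years = np.arange(2001, 2015)
deaths = np.where(years <= 2010,
                  500_000 * 0.9923 ** (years - 2001),
                  500_000 * 0.9923 ** 9 * 1.0087 ** (years - 2010)).round()

# Poisson regression with a trend break at 2010: the coefficient on the
# second term is the post-2010 change in the log-linear yearly trend.
t = years - 2001
post_trend = np.clip(years - 2010, 0, None)
X = sm.add_constant(np.column_stack([t, post_trend]))
fit = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()
print(np.exp(fit.params[1:]))  # yearly rate ratios: pre-2010, and the change

# 'Excess' deaths in 2014 = observed minus the pre-2010 counterfactual trend.
counterfactual = np.exp(fit.params[0] + fit.params[1] * t[-1])
print(f"excess deaths in 2014: {deaths[-1] - counterfactual:,.0f}")
```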

Should cost effectiveness analyses for NICE always consider future unrelated medical costs? BMJ [PubMed] Published 10th November 2017

The question of whether or not ‘unrelated’ future medical costs should be included in economic evaluation is becoming a hot topic. So much so that the BMJ has published this Head To Head, which introduces some of the arguments for and against. NICE currently recommends excluding unrelated future medical costs. An example given in this article is the expected cost of dementia care after someone’s life has been saved by heart transplantation. The argument in favour of including unrelated costs is quite obvious – these costs can’t be ignored if we seek to maximise social welfare. Their inclusion is described as “not difficult” by the authors defending this move. By ignoring unrelated future costs (but accounting for the benefit of longer life), the relative cost-effectiveness of life-extending treatments, compared with life-improving treatments, is artificially inflated. The argument against including unrelated medical costs is presented as one of fairness. The author suggests that their inclusion could preclude access to health care for certain groups of people that are likely to have high needs in the future. So perhaps NICE should ignore unrelated medical costs in certain circumstances. I sympathise with this view, but I feel it is less a fairness issue and more a demonstration of the current limits of health-related quality of life measurement, which doesn’t reflect adaptation and coping. However, I tend to disagree with both of the arguments presented here. I really don’t think NICE should include or exclude unrelated future medical costs according to the context, because that could create some very perverse incentives for certain stakeholders. But then, I do not agree that it is “not difficult” to include all unrelated future costs. ‘All’ is an important qualifier here, because the capacity for analysts to pick and choose unrelated future costs creates the potential to pick and choose results. When it comes to unrelated future medical costs, NICE’s position needs to be all-or-nothing, and right now the ‘all’ bit is a high bar to clear. NICE should include unrelated future medical costs – it’s difficult to formulate a sound argument against that – but they should only do so once more groundwork has been done. In particular, we need to develop more valid methods for valuing quality of life against life-years in health technology assessment across different patient groups. And we need more reliable methods for estimating future medical costs in all settings.
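
A made-up numerical example (mine, not the article’s) shows the distortion at stake:

```python
# Invented numbers: a life-extending treatment adds 2 life-years at 0.7
# utility, and each extra year of life incurs £8,000 of 'unrelated' medical
# costs (e.g. dementia care after a life-saving heart transplant).
treatment_cost = 20_000
qalys_gained = 2 * 0.7
unrelated_future_costs = 2 * 8_000

icer_excluding = treatment_cost / qalys_gained
icer_including = (treatment_cost + unrelated_future_costs) / qalys_gained
print(f"excluding unrelated costs: £{icer_excluding:,.0f} per QALY")  # £14,286
print(f"including unrelated costs: £{icer_including:,.0f} per QALY")  # £25,714

# A purely life-improving treatment adds no life-years, hence no unrelated
# future costs: its ICER is unchanged, so exclusion flatters life extension.
```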

Oncology modeling for fun and profit! Key steps for busy analysts in health technology assessment. PharmacoEconomics [PubMed] Published 6th November 2017

Quite a title(!). The subject of this essay is ‘partitioned survival modelling’. Honestly, I never really knew what that was until I read this article. It seems the reason for my ignorance could be that I haven’t worked on the evaluation of cancer treatments, for which it’s a popular methodology. Apparently, a recent study found that almost 75% of NICE cancer drug appraisals were informed by this sort of analysis. Partitioned survival modelling is a simple means by which to extrapolate outcomes in a context where people can survive (or not) with or without progression. Often this can be done on the basis of survival analyses and standard trial endpoints. This article seeks to provide some guidance on the development and use of partitioned survival models. Or, rather, it provides a toolkit for calling out those who might seek to use the method as a means of providing favourable results for a new therapy when data and analytical resources are lacking. The ‘key steps’ can be summarised as 1) avoiding/ignoring/misrepresenting current standards of economic evaluation, 2) using handpicked parametric approaches for extrapolation in order to maximise survival benefits, 3) creatively estimating relative treatment effects using indirect comparisons without adjustment, 4) making optimistic assumptions about post-progression outcomes, and 5) denying the possibility of any structural uncertainty. The authors illustrate just how much an analyst can influence the results of an evaluation (if they want to “keep ICERs in the sweet spot!”). Generally, these tactics move the model far from being representative of reality. However, the prevailing secrecy around most models means that it isn’t always easy to detect these shortcomings. Sometimes it is though, and the authors make explicit reference to technology appraisals that they suggest demonstrate these crimes. Brilliant!
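
For anyone as unfamiliar with the method as I was, here’s a minimal Python sketch of the partitioned survival logic, with made-up exponential curves and assumed state utilities:

```python
import numpy as np

def auc(y, x):
    """Trapezoid-rule area under a curve."""
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

# Made-up exponential OS and PFS curves; real appraisals fit parametric
# models to trial endpoints and extrapolate (step 2 above is where the
# handpicking happens).
t = np.linspace(0, 10, 1001)              # years
pfs = np.exp(-0.40 * t)                   # P(alive and progression-free)
os_ = np.exp(-0.25 * t)                   # P(alive)

# The 'partition': state membership is read directly off the two curves.
progression_free = pfs
post_progression = np.clip(os_ - pfs, 0, None)
dead = 1 - os_                            # completes the three states

# Mean time in each state is the area under its membership curve; QALYs
# apply assumed utilities (optimistic post-progression values, step 4,
# quietly inflate the benefit).
ly_pf, ly_pp = auc(progression_free, t), auc(post_progression, t)
qalys = 0.80 * ly_pf + 0.60 * ly_pp
print(f"PF: {ly_pf:.2f} LY, PP: {ly_pp:.2f} LY, total: {qalys:.2f} QALYs")
```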

Chris Sampson’s journal round-up for 22nd August 2016

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Simulation as an ethical imperative and epistemic responsibility for the implementation of medical guidelines in health care. Medicine, Health Care and Philosophy [PubMed] Published 6th August 2016

Some people describe RCTs as a ‘gold standard’ for evidence. But if more than one RCT exists, or we have useful data from outside the RCT, that probably isn’t true. Decision modelling has value over and above RCT data, as well as in lieu of it. One crucial thing that cannot – or at least not usually – be captured in an RCT is how well the evidence might be implemented. Medical guidelines will be developed, but there will be a process of adjustments and no doubt errors, all of which might impact on patients’ quality of life. Here we stray into the realms of implementation science. This paper argues that health care providers have a responsibility to acquire knowledge about implementation and the learning curve of medical guidelines. To this end, there is an epistemic and ethical imperative to simulate the possible impacts on patients’ health of the implementation learning curve. The authors provide some examples of guideline implementation that might have benefited from simulation. However, it’s very easy in hindsight to identify what went wrong, and none of the examples set out realistic scenarios for simulation analyses that could have been carried out in advance. It isn’t clear to me how or why we should differentiate – in ethical or epistemic terms – implementation from effectiveness evaluation. It is clear, however, that health economists could engage more with implementation science, and that there is an ethical imperative to do so.
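
In that spirit, here’s a toy Monte Carlo sketch of what simulating an implementation learning curve could look like; every number in it is invented for illustration:

```python
import numpy as np

# Toy Monte Carlo of a guideline's implementation learning curve: adherence
# ramps up over 24 months while early implementation errors occasionally
# harm patients (all parameters invented).
rng = np.random.default_rng(3)
months = np.arange(60)
uptake = np.minimum(1.0, 0.6 + 0.4 * months / 24)   # adherence ramp
p_error = np.maximum(0.10 - months / 240, 0.0)      # early-error probability

totals, first_year = [], []
for _ in range(10_000):
    gain = 0.02 * uptake                            # monthly QALY gain/patient
    loss = rng.binomial(1, p_error) * 0.05          # QALY loss from errors
    net = gain - loss
    totals.append(net.sum())
    first_year.append(net[:12].sum())

print(f"mean 5-year net gain: {np.mean(totals):.3f} QALYs per patient")
print(f"P(net harm in year 1): {np.mean(np.array(first_year) < 0):.2f}")
```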

Estimating marginal healthcare costs using genetic variants as instrumental variables: Mendelian randomization in economic evaluation. PharmacoEconomics [PubMed] Published 2nd August 2016

To assert that obesity is associated with greater use of health care resources is uncontroversial. However, to assert that all of the additional cost associated with obesity is because of obesity is a step too far. There are many other determinants of health care costs (and outcomes) that might be independently associated with obesity. One way of dealing with this problem of identifying causality is to use instrumental variables in econometric analysis, but appropriate IVs can be tricky to identify. Enter Mendelian randomisation. This is a method that can be used to adopt genetic variants as IVs. This paper describes the basis for Mendelian randomisation and outlines the suitability of genetic traits as IVs. En route, the authors provide a nice accessible summary of the IV approach more generally. The focus throughout the paper is upon estimating costs, with obesity used as an example. The article outlines a lot of the potential challenges and pitfalls associated with the approach, such as the use of weak instruments and non-linear exposure-outcome relationships. On the whole, the approach is intuitive and fits easily within existing methodologies. Its main value may lie in the estimation of more accurate parameters for model-based economic evaluation. Of course, we need data. Ideally, longitudinal medical records linked to genotypic information for a large number of people. That may seem like wishful thinking, but the UK Biobank project (and others) can fit the bill.
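
Here’s a hand-rolled two-stage least squares sketch in Python, on synthetic data with an invented genetic score for BMI, showing why the instrument removes the confounding bias that inflates naive OLS:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data: 'score' stands in for a genetic variant associated with
# BMI; u is an unobserved confounder of BMI and healthcare costs.
rng = np.random.default_rng(4)
n = 10_000
score = rng.binomial(2, 0.3, n)                 # allele count (0, 1, 2)
u = rng.normal(size=n)
bmi = 25 + 0.8 * score + 1.5 * u + rng.normal(size=n)
costs = 500 + 100 * bmi + 400 * u + rng.normal(0, 200, n)  # true effect: 100

# Naive OLS of costs on BMI is biased upward by the confounder.
ols = sm.OLS(costs, sm.add_constant(bmi)).fit()

# Two-stage least squares with the genetic score as the instrument.
stage1 = sm.OLS(bmi, sm.add_constant(score)).fit()
stage2 = sm.OLS(costs, sm.add_constant(stage1.fittedvalues)).fit()
# (These second-stage standard errors are wrong; use a dedicated IV routine
# for inference in real work.)

print(f"OLS: {ols.params[1]:.0f}, 2SLS: {stage2.params[1]:.0f} per BMI unit")
```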

Patient and general public preferences for health states: A call to reconsider current guidelines. Social Science & Medicine [PubMed] Published 31st July 2016

One major ongoing debate in health economics is the question of whether public or patient preferences should be used to value health states and thus to estimate QALYs. Here in the UK, NICE recommends public preferences, and I’d hazard a guess that most people agree. But why? After providing some useful theoretical background, this article reviews the arguments made in favour of the use of public preferences. It focuses on three that have been identified in Dutch guidelines. First, that cost-effectiveness analysis should adopt a societal perspective. The Gold Panel invoked a Rawlsian veil of ignorance argument to support the use of decision (ex ante) utility rather than experienced (ex post) utility. The authors highlight that this is limited, as the public are not behind a veil of ignorance. Second, that the use of patient preferences might (wrongfully) ignore adaptation. This is not a complete argument, as there may be elements of adaptation that decision makers wish not to take into account, and public preferences may still underestimate the benefits of treatment due to adaptation. Third, the insurance principle highlights that the obligation to be insured is made ex ante, and therefore the benefits of insurance (i.e. health care) should also be valued as such. The authors set out a useful taxonomy of the arguments, their reasoning and the counterarguments. The key message is that current arguments in favour of public preferences are incomplete. As a way forward, the authors suggest that both patient and public preferences should be used alongside each other and propose that HTA guidelines require this. The paper got my cogs whirring, so expect a follow-up blog post tomorrow.

What, who and when? Incorporating a discrete choice experiment into an economic evaluation. Health Economics Review [PubMed] Published 29th July 2016

This study claims to be the first to carry out a discrete choice experiment on clinical trial participants, and to compare willingness-to-pay results with standard QALY-based net benefit estimates, thus comparing a CBA with a CUA. The trial in question evaluates extending the role of community pharmacists in the management of coronary heart disease. The study focusses on the questions of what, who and when: what factors should be evaluated (i.e. beyond QALYs)? whose preferences should count (i.e. patients with experience of the service or all participants)? and when should preferences be evaluated (i.e. during or after the intervention)? Comparisons are made along these lines. The DCE asked participants to choose between their current situation and two alternative scenarios involving either the new service or the control. The trial found no significant difference in EQ-5D scores, SF-6D scores or costs between the groups, but it did identify a higher level of satisfaction with the intervention. The intervention group (through the DCE) reported a greater willingness to pay for the intervention than the control group, and this appeared to increase with prolonged use of the service. I’m not sure what the take-home message is from this study. The paper doesn’t answer the questions in the title – at least, not in any general sense. Nevertheless, it’s an interesting discussion about how we might carry out cost-benefit analysis using DCEs.
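
As a flavour of how willingness to pay falls out of a DCE, here’s a stripped-down Python sketch on synthetic choices. It uses a single service attribute and a binary logit; real DCEs, including this one, involve several attributes and a conditional logit. All numbers are invented:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic binary choices between scenario A and scenario B, differing in
# one service attribute and in cost.
rng = np.random.default_rng(5)
n = 3000
d_service = rng.integers(0, 2, n).astype(float)  # B includes the new service?
d_cost = rng.uniform(-20, 20, n)                 # cost of B minus A, in £
utility_diff = 0.9 * d_service - 0.06 * d_cost
choose_b = rng.binomial(1, 1 / (1 + np.exp(-utility_diff)))

X = sm.add_constant(np.column_stack([d_service, d_cost]))
fit = sm.Logit(choose_b, X).fit(disp=0)
b_service, b_cost = fit.params[1], fit.params[2]

# Marginal WTP = -(attribute coefficient / cost coefficient).
print(f"WTP for the service: £{-b_service / b_cost:.2f}")  # near 0.9/0.06=£15
```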

Photo credit: Antony Theobald (CC BY-NC-ND 2.0)