Rita Faria’s journal round-up for 4th November 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The marginal benefits of healthcare spending in the Netherlands: estimating cost-effectiveness thresholds using a translog production function. Health Economics [PubMed] Published 30th August 2019

The marginal productivity of the healthcare sector, more commonly known as the supply-side cost-effectiveness threshold, is a hot topic right now. A few years ago, we could only guess at the magnitude of health displaced by reimbursing expensive and not-that-beneficial drugs. Since the seminal work by Karl Claxton and colleagues, we have started to have a pretty good idea of what we’re giving up.

This paper by Niek Stadhouders and colleagues adds to this literature by estimating the marginal productivity of hospital care in the Netherlands. Spoiler alert: they estimated that hospital care generates 1 QALY for around €74,000 at the margin, with a 95% confidence interval of €53,000 to €94,000. Remarkably, this is close to the Dutch upper reference value for the cost-effectiveness threshold of €80,000!

The approach to estimation is quite elaborate, because it requires constructing QALYs and costs and accounting for the effect of mortality on costs. The diagram in Figure 1 explains it excellently. Their approach differs from the Claxton et al. method in that they corrected for the costs due to changes in mortality directly, rather than via an instrumental variable analysis. To estimate the marginal effect of spending on health, they use a translog function. The confidence intervals are generated with Monte Carlo simulation, and various robustness checks are presented.
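
For readers unfamiliar with the functional form, a generic single-input translog relationship between hospital spending S and health H (a deliberate simplification, not necessarily the exact specification in the paper) looks like this:

$$\ln H = \alpha + \beta_1 \ln S + \tfrac{1}{2}\beta_2 (\ln S)^2, \qquad \frac{\partial H}{\partial S} = \frac{H}{S}\left(\beta_1 + \beta_2 \ln S\right)$$

The supply-side threshold is then just the reciprocal of this marginal product: the extra spending needed to generate one more QALY at the margin.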

This is a fantastic paper, which is sure to have important policy implications. Analysts conducting cost-effectiveness analysis in the Netherlands, do take note.

Mixed-effects models for health care longitudinal data with an informative visiting process: a Monte Carlo simulation study. Statistica Neerlandica Published 5th September 2019

Electronic health records are the current big thing in health economics research, but they’re not without challenges. One issue is that the data reflect clinical management rather than a trial protocol. This means that doctors may test more severely ill patients more often; for example, people with higher cholesterol may get more frequent cholesterol tests. The challenge is that traditional methods for longitudinal data assume independence between the observation times and disease severity.

Alessandro Gasparini and colleagues set out to solve this problem. They propose using inverse intensity of visit weighting within a mixed-effects model framework. Importantly, they provide a Stata package that implements the method. It’s part of the wide-ranging and super-useful merlin package.

The directed acyclic graph makes it easy to see how the method works. Essentially, after controlling for confounders, the longitudinal outcome and the observation process are associated through shared random effects. By assuming a distribution for the shared random effects, the model blocks the path between the outcome and the observation process. They make it sound easy!
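
In stylised form (my notation, a sketch of the general idea rather than the authors’ exact model), the outcome and the visit process are linked through random effects that they share:

$$y_{ij} = x_{ij}'\beta + z_{ij}' b_i + \varepsilon_{ij}, \qquad \lambda_i(t) = \lambda_0(t)\exp\left(w_i'\gamma + \alpha' b_i\right), \qquad b_i \sim N(0, \Sigma)$$

Here $y_{ij}$ is the outcome for patient $i$ at their $j$-th visit, $\lambda_i(t)$ is the intensity of the visiting process, and the shared random effects $b_i$ carry the dependence between the two; integrating over their assumed distribution is what blocks the path between the outcome and the observation process.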

The paper goes through the method, compares it with other methods from the literature in a simulation study, and applies it to a real case study. It’s a brilliant paper that deserves a close look from all of those using electronic health records.

Alternative approaches for confounding adjustment in observational studies using weighting based on the propensity score: a primer for practitioners. BMJ [PubMed] Published 23rd October 2019

Would you like to use a propensity score method but don’t know where to start? Look no further! This paper by Rishi Desai and Jessica Franklin provides a practical guide to propensity score methods.

They start by explaining what a propensity score is and how it can be used, from matching to reweighting and regression adjustment. I particularly enjoyed reading about the importance of conceptualising the target of inference, that is, which treatment effect we are trying to estimate. In the medical literature, it is rare to see a paper that is clear on whether it is estimating the average treatment effect or the average treatment effect in the treated population.
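
To make the distinction concrete, here is a minimal sketch of how an estimated propensity score translates into different weights depending on the target of inference. The data and variable names are entirely hypothetical, and the code uses scikit-learn rather than anything from the paper.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical cohort: treatment assignment depends on age (illustrative only).
rng = np.random.default_rng(42)
n = 1000
age = rng.normal(70, 8, n)
chads_vasc = rng.integers(0, 7, n)
treated = rng.binomial(1, 1 / (1 + np.exp(-(-3.5 + 0.05 * age))))
df = pd.DataFrame({"age": age, "chads_vasc": chads_vasc, "treated": treated})

# Estimate the propensity score: P(treated = 1 | confounders).
ps_model = LogisticRegression(max_iter=1000).fit(df[["age", "chads_vasc"]], df["treated"])
df["ps"] = ps_model.predict_proba(df[["age", "chads_vasc"]])[:, 1]

# Inverse probability of treatment weights target the average treatment effect (ATE).
df["w_ate"] = np.where(df["treated"] == 1, 1 / df["ps"], 1 / (1 - df["ps"]))

# Odds-of-treatment weights for the untreated target the effect in the treated (ATT);
# treated patients keep a weight of 1.
df["w_att"] = np.where(df["treated"] == 1, 1.0, df["ps"] / (1 - df["ps"]))
```

A weighted comparison of outcomes (or a weighted outcome regression) then recovers the corresponding estimand, provided the usual no-unmeasured-confounding assumption holds.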

I found the algorithm for method selection really useful. Here, Rishi and Jessica describe the steps in choosing a propensity score method and recommend their preferred method for each situation. The paper also includes the application of each method to the example of dabigatran versus warfarin for atrial fibrillation. Thanks to the graphs, we can visualise how the distribution of the propensity score changes with each method and with the target of inference.

This is an excellent paper for those starting their propensity score analyses, or for those who would like a refresher. It’s a keeper!


Alastair Canaway’s journal round-up for 20th February 2017

The estimation and inclusion of presenteeism costs in applied economic evaluation: a systematic review. Value in Health Published 30th January 2017

Presenteeism is one of those issues that you hear about from time to time, but rarely see addressed within economic evaluations. For those who haven’t come across it before, presenteeism refers to being at work but not working at full capacity, for example, due to your health limiting your ability to work. The literature suggests that presenteeism can have large associated costs which could significantly impact economic evaluations, and so it should be considered; in practice, these impacts are rarely captured. This paper sought to identify studies where presenteeism costs were included, to examine how valuation was approached, and to gauge the impact of including presenteeism on costs. The review included cost of illness studies as well as economic evaluations. Just 28 papers had attempted to capture the costs of presenteeism, and these spanned a wide variety of disease areas. A range of methods was used; across all studies, presenteeism costs accounted for 52% (range 19%-85%) of the total costs relating to the intervention and disease. This is a vast proportion and significantly outweighed absenteeism costs. Presenteeism is clearly a significant issue, yet it is widely ignored within economic evaluation. This may be due in part to the health and social care perspective advised within the NICE reference case, compounded by the lack of guidance on how to measure and value productivity costs. Should an economic evaluation pursue a societal perspective, the findings suggest that capturing and valuing presenteeism costs should be a priority.
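
As a purely illustrative example of a human capital style valuation (my numbers, not drawn from any of the reviewed studies):

$$\text{presenteeism cost} \approx \text{days worked while unwell} \times \text{daily wage} \times \text{productivity loss fraction}$$

So an employee on €200 a day who works 100 days at 20% reduced capacity generates €200 × 100 × 0.20 = €4,000 of presenteeism cost over that period, none of which would show up in absence records.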

Priority to end of life treatments? Views of the public in the Netherlands. Value in Health Published 5th January 2017

Everybody dies, and thus end of life care is probably something that we should all have at least a passing interest in. The end of life context is an incredibly tricky research area, with methodological pitfalls at every turn. End of life care is often seen as ‘different’ from other care, and this is reflected in NICE having supplementary guidance for the appraisal of end of life interventions. Similarly, in the Netherlands, treatments that do not meet typical cost per QALY thresholds may be provided should public support be sufficient. There is, however, a dearth of such evidence, and this paper sought to elucidate the issue using the novel Q methodology. Three primary viewpoints emerged: 1) access to healthcare as a human right – all have equal rights regardless of setting, that is, nobody is more important; this group appeared to reject the notion of scarce resources when it comes to health: ‘you can’t put a price on life’. 2) The second group focussed on providing the ‘right’ care for those with terminal illness, emphasising that quality of life should be respected and that unnecessary care at the end of life should be avoided. This group did not place great importance on cost-effectiveness, but did acknowledge that costly treatments at the end of life might not be the best use of money. 3) Finally, the third group felt there should be a focus on care which is effective and efficient, that is, that those treatments which generate the most health should be prioritised. There was a consensus across all three groups that the ultimate goal of the health system is to generate the greatest overall health benefit for the population. This rejects the notion that priority should be given to those at the end of life, and the study concludes that across the three groups there was minimal support for the terminally ill being treated with priority.

Methodological issues surrounding the use of baseline health-related quality of life data to inform trial-based economic evaluations of interventions within emergency and critical care settings: a systematic literature review. PharmacoEconomics [PubMed] Published 6th January 2017

Catchy title. Conducting research within emergency and critical care settings presents a number of unique challenges. For the health economist seeking to conduct a trial-based economic evaluation, one such issue relates to the calculation of QALYs. To calculate QALYs within a trial, baseline and follow-up data are required. For obvious reasons – severe and acute injuries/illness, unplanned admission – collecting baseline data on those entering emergency and critical care is problematic. Even when patients are conscious, there are ethical issues surrounding collecting baseline data in this setting; the example used is somebody who is conscious after a cardiac arrest: is it appropriate to ask them to complete HRQL questionnaires? Probably not. Various methods have been used to circumvent this issue; this paper sought to systematically review the methods that have been used and to provide guidance for future studies. Just 19 studies made it through screening, highlighting the difficulty of research in this context. Only one study prospectively collected baseline HRQL data, and this was restricted to patients in a non-life-threatening state. Four different strategies were adopted in the remaining papers. Eight studies adopted a fixed health utility for all participants at baseline; four used only the available data, that is, from the first time point at which HRQL was measured; one asked patients to retrospectively recall their baseline state; and one other used Delphi methods to derive EQ-5D states from experts. The paper examines the implications and limitations of adopting each of these strategies. The key finding seems to relate to whether or not the trial arms are balanced with respect to HRQL at baseline. This obviously isn’t observed; the authors suggest that trial covariates should instead be used to explore this, with adjustments made where applicable. If, and that’s a big if, trial arms are balanced, then all four of the suggested methods should give similar answers. It seems the key here is the randomisation; however, even the best randomisation techniques do not always lead to balanced arms, and there is no guarantee of baseline balance. The authors conclude that trials should aim to make an initial assessment of HRQL at the earliest opportunity and that further research is required to thoroughly examine how the different approaches will impact cost-effectiveness results.
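
For context, the reason the baseline measurement matters so much is the standard area-under-the-curve QALY calculation used in trials:

$$\text{QALY}_i = \sum_{t=0}^{T-1} \frac{u_{i,t} + u_{i,t+1}}{2}\,(s_{t+1} - s_t)$$

where $u_{i,0}$ is the baseline utility and $s_t$ are the assessment times in years. Whatever value is assumed or imputed at baseline feeds directly into each patient’s QALY total, so any between-arm imbalance in baseline HRQL translates straight into a biased incremental QALY unless it is adjusted for.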


Chris Sampson’s journal round-up for 16th January 2017

Competition and quality indicators in the health care sector: empirical evidence from the Dutch hospital sector. The European Journal of Health Economics [PubMed] Published 3rd January 2017

In case you weren’t already convinced, this paper presents more evidence to support the notion that (non-price) competition between health care providers is good for quality. The Dutch system is based on compulsory insurance and information on quality of hospital care is made public. One feature of the Dutch health system is that – for many elective hospital services – prices are set following a negotiation between insurers and hospitals. This makes the setting of the study a bit different to some of the European evidence considered to date, because there is scope for competition on price. The study looks at claims data for 3 diagnosis groups – cataract, adenoid/tonsils and bladder tumor – between 2008 and 2011. The authors’ approach to measuring competition is a bit more sophisticated than some other studies’ and is based on actual market share. A variety of quality indicators are used for the 3 diagnosis groups relating mainly to the process of care (rather than health outcomes). Fixed and random effects linear regression models are used to estimate the impact of market share upon quality. Casemix was only controlled for in relation to the proportion of people over 65 and the proportion of women. Where a relationship was found, it tended to be in favour of lower market share (i.e. greater competition) being associated with higher quality. For cataract and for bladder tumor there was a ‘significant’ effect. So in this setting at least, competition seems to be good news for quality. But the effect sizes are neither huge nor certain. A look at each of the quality indicators separately showed plenty of ‘non-significant’ relationships in both directions. While a novelty of this study is the liberalised pricing context, the authors find that there is no relationship between price and quality scores. So even if we believe the competition-favouring results, we needn’t abandon the ‘non-price competition only’ mantra.
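
A stylised version of the kind of specification described (my notation, not the authors’ exact model) would be:

$$q_{ht} = \alpha_h + \beta\,\text{marketshare}_{ht} + \gamma_1\,\text{over65}_{ht} + \gamma_2\,\text{female}_{ht} + \delta_t + \varepsilon_{ht}$$

where $q_{ht}$ is a quality indicator for hospital $h$ in year $t$, $\alpha_h$ is a hospital fixed or random effect, and a negative $\hat{\beta}$ corresponds to the finding that lower market share (more competition) goes with higher quality.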

Cost-effectiveness thresholds in global health: taking a multisectoral perspective. Value in Health Published 3rd January 2017

We all know health care is not the only – and probably not even the most important – determinant of health. We call ourselves health economists, but most of us are simply health care economists. Rarely do we look beyond the domain of health care. If our goal as researchers is to help improve population health, then we should probably be allocating more of our mental resource beyond health care. The same goes for public spending. Publicly provided education might improve health in a way that the health service would be willing to fund. Likewise, health care might improve educational attainment. This study considers resource allocation decisions using the familiar ‘bookshelf approach’, but goes beyond the unisectoral perspective. The authors discuss a two-sector world of health and education, and demonstrate the ways in which there may be overlaps in costs and outcomes. In short, there are likely to be situations in which the optimal multisectoral decision would be for individual sectors to increase their threshold in order to incorporate the spillover benefits of an intervention in another sector. The authors acknowledge that – in a perfect world – a social-welfare-maximising government would have sufficient information to allocate resources earmarked for specific purposes (e.g. health improvement) across sectors. But this doesn’t happen. Instead the authors propose the use of a cofinancing mechanism, whereby funds would be transferred between sectors as needed. The paper provides an interesting and thought-provoking discussion, and the idea of transferring funds between sectors seems sensible. Personally I think the problem is slightly misspecified. I don’t believe other sectors face thresholds in the same way, because (generally speaking) they do not employ cost-effectiveness analysis. And I’m not sure they should. I’m convinced that for health we need to deviate from welfarism, but I’m not convinced of it for other sectors. So from my perspective it is simply a matter of health vs everything else, and we can incorporate the ‘everything else’ into a cost-effectiveness analysis (with a societal perspective) in monetary terms. Funds can be reallocated as necessary with each budget statement (of which there seem to be a lot nowadays).
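
To illustrate the cofinancing logic with deliberately made-up numbers: suppose an education programme costs €1 million from the education budget and, as a side effect, also generates 20 QALYs. If the health sector’s supply-side threshold is €50,000 per QALY, that spillover is worth up to 20 × €50,000 = €1 million of health spending at the margin, so a transfer of anything less than €1 million from the health budget towards the programme would leave the health sector with more health than spending that money at its own margin.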

Is the Rational Addiction model inherently impossible to estimate? Journal of Health Economics [RePEc] Published 28th December 2016

Saddle point dynamics. Something I’ve never managed to get my head around, but here goes… This paper starts from the problem that empirical tests of the Rational Addiction model serve up wildly variable and often ridiculous (implied) discount rates. That may be part of the reason why economists tend to support the RA model but at the same time believe that it has not been empirically proven. The paper sets out the basis for saddle point dynamics in the context of the RA model, and outlines the nature of the stable and unstable root within the function that determines a person’s consumption over time. The authors employ Monte Carlo estimation of RA-type equations, simulating panel data observations. These simulations demonstrate that the presence of the unstable root may make it very difficult to estimate the coefficients. So even if the RA model can truly represent behaviour, empirical estimation may contradict it. This raises the question of whether the RA model is essentially untestable. A key feature of the argument relates to use of the model where a person’s time horizon is not considered to be infinite. Some non-health economists like to assume it is, which, as the authors wryly note, is not particularly ‘rational’.
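
For readers who have not met it, the canonical empirical rational addiction equation (in the Becker, Grossman and Murphy tradition) that these Monte Carlo exercises mimic looks roughly like:

$$C_t = \theta_0 + \theta_1 C_{t-1} + \theta_2 C_{t+1} + \theta_3 P_t + \varepsilon_t$$

with the implied discount factor recovered as $\hat{\beta} = \theta_2 / \theta_1$ (and the discount rate as $1/\hat{\beta} - 1$); it is this ratio that tends to produce the wildly variable implied discount rates mentioned above.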
