Shilpi Swami’s journal round-up for 9th December 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Performance of UK National Health Service compared with other high-income countries: observational study. BMJ [PubMed] Published 27th November 2019

Efficiencies and inefficiencies of the NHS in the UK have been debated in recent years. This new study compares the performance of the NHS with that of other high-income countries, based on observational data, and has already attracted a lot of attention (almost 3,000 tweets and 6 news appearances since publication)!

The authors presented a descriptive analysis of the UK (England, Scotland, Northern Ireland, and Wales) compared with nine other countries (US, Canada, Germany, Australia, Sweden, France, Denmark, the Netherlands, and Switzerland), based on recent aggregated data from a range of sources (such as the OECD, the World Bank, the Institute for Health Metrics and Evaluation, and Eurostat). Good things first: on access to care, a lower proportion of people reported unmet needs owing to costs. Waiting times were comparable with the other countries, except for specialist care, and the UK performed slightly better on patient safety. The main challenge, however, is that NHS healthcare spending is lower and has been growing more slowly. This means fewer doctors and nurses, and doctors spending less time with patients. The authors vividly suggest that

“Policy makers should consider how recent changes to nursing bursaries, the weakened pound, and uncertainty about the status of immigrant workers in the light of the Brexit referendum result have influenced these numbers and how to respond to these challenges in the future.”

Understandably, comparing healthcare systems across the world is difficult. The inclusion of the US, and the exclusion of countries like Spain and Japan, may need more justification or could be a subject for future research.

To be fair, the article is a must-read. It is an eye-opener for those who think the NHS faces only a (too much) demand-side problem, and it confirms the perspective of those who think it faces a (not enough) supply-side problem. Kudos to the hardworking doctors and nurses who are currently delivering efficiently in a stretched system! For sustainability, the NHS needs to consider increasing its spending on labour supply and long-term care.

A systematic review of methods to predict weight trajectories in health economic models of behavioral weight management programs: the potential role of psychosocial factors. Medical Decision Making [PubMed] Published 2nd December 2019

In economic modelling, assumptions are often made about the long-term impact of interventions, and it’s important that these assumptions are based on sound evidence and/or tested in sensitivity analysis, as these could affect the cost-effectiveness results.

The authors explored the assumptions made about weight trajectories to inform economic modelling of behavioural weight management programmes. They also checked the evidence sources for these assumptions, and whether the assumptions were based on any psychosocial variables (such as self-regulation, motivation, self-efficacy, and habit), as these are known to be associated with weight-loss trajectories.

The authors conducted a systematic literature review of economic models of weight management interventions aimed at reducing weight. Across the 38 included studies, they found six types of assumption about weight trajectories beyond the trial duration (weight loss maintained, weight loss regained immediately, linear weight regain, subgroup-specific trajectories, exponential decay of effect, and maintenance followed by regain), with only 15 of the studies reporting sources for these assumptions. The authors also elaborated on the assumptions and represented them graphically. Psychosocial variables were, in fact, measured in the evidence sources of some of the included studies. However, the authors found that none of the studies based their weight trajectory assumptions on them! Though the article also reports how the assumptions were tested in sensitivity analyses and their impact on results (where this was reported within the studies), it would have been interesting to see more insight into this. The authors argue that there is a need to investigate how psychosocial variables measured in trials can be used within health economic models to estimate weight trajectories and, thus, to improve the validity of cost-effectiveness estimates.
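
To make those extrapolation shapes concrete, here is a minimal Python sketch of three of the trajectory assumptions listed above (weight loss maintained, linear regain, and exponential decay of effect). The time horizon, regain period, and decay rate are illustrative assumptions of mine, not values reported in the review.

```python
import numpy as np

def weight_trajectory(effect_at_trial_end, years, assumption,
                      regain_years=5.0, decay_rate=0.3):
    """Illustrative post-trial weight-loss trajectories (kg below baseline).

    `regain_years` and `decay_rate` are hypothetical parameters chosen for
    the sketch, not values taken from the review.
    """
    t = np.asarray(years, dtype=float)
    if assumption == "maintained":
        # Weight loss maintained indefinitely beyond the trial.
        return np.full_like(t, effect_at_trial_end)
    if assumption == "linear_regain":
        # Effect erodes linearly to zero over `regain_years`.
        return effect_at_trial_end * np.clip(1 - t / regain_years, 0, 1)
    if assumption == "exponential_decay":
        # Effect decays exponentially at `decay_rate` per year.
        return effect_at_trial_end * np.exp(-decay_rate * t)
    raise ValueError(f"unknown assumption: {assumption}")

years = np.arange(0, 11)  # years after the end of the trial
for a in ("maintained", "linear_regain", "exponential_decay"):
    print(a, np.round(weight_trajectory(3.0, years, a), 2))
```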

Given that only around half of the included studies reported sources for their assumptions about the long-term effects of the interventions and tested these assumptions in sensitivity analysis, the review raises the bigger, long-debated question of the quality of economic evaluations! To conclude, the review is comprehensive and insightful. It is an interesting read and will be especially useful for those interested in modelling the long-term impacts of behavioural support programmes.

The societal monetary value of a QALY associated with EQ‐5D‐3L health gains. The European Journal of Health Economics [PubMed] Published 28th November 2019

Estimating the societal monetary value of a QALY (MVQALY) is mostly done to inform a range of thresholds for guiding cost-effectiveness decisions.

This study explores the degree of variation in the societal MVQALY based on a large sample of the population in Spain. It uses a discrete choice experiment and a time trade-off exercise to derive a value set for utilities, followed by a willingness to pay (WTP) questionnaire. The study reveals that the societal values for a QALY, corresponding to different EQ-5D-3L health gains, vary approximately between €10,000 and €30,000. Counterintuitively, the MVQALY associated with larger improvements in quality of life was found to be lower than that associated with moderate QoL gains, meaning that WTP is less than proportional to the size of the QoL improvement. The authors explored whether budgetary restrictions could explain this by analysing the responses of individuals with higher incomes, and found that this may partly, but not fully, account for it. Since, at face value, this implies that there should be a lower cost-per-QALY threshold for interventions with the largest health gains than for those with moderate gains, it raises a lot of questions and forces you to interpret the findings with caution. The authors suggest that the diminishing MVQALY is, at least partly, produced by a lack of sensitivity of WTP responses.
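
As a purely numerical illustration of what "less than proportional" means here: dividing a stated WTP by the size of the utility gain gives the implied MVQALY, which falls as the gain grows. The figures below are invented for the sketch and are not taken from the paper.

```python
# Hypothetical illustration of a diminishing societal MVQALY. The numbers are
# invented; only the pattern (WTP rising less than proportionally with the
# size of the health gain) mirrors the paper's finding.
stated_wtp = {0.1: 2_500, 0.3: 6_000, 0.5: 8_000}  # EQ-5D-3L utility gain -> WTP (EUR)

for gain, wtp in stated_wtp.items():
    mvqaly = wtp / gain  # implied monetary value of one full QALY
    print(f"utility gain {gain:.1f}: WTP {wtp:>6,} EUR -> MVQALY {mvqaly:>8,.0f} EUR")
```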

Though I think the article does not provide a clear take-home message, it makes readers re-think the very norms underlying the estimation of monetary values of QALYs. The study raises more questions than it answers, but it could be useful for exploring areas of utility research further.

Credits

Rita Faria’s journal round-up for 4th November 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The marginal benefits of healthcare spending in the Netherlands: estimating cost-effectiveness thresholds using a translog production function. Health Economics [PubMed] Published 30th August 2019

The marginal productivity of the healthcare sector or, as commonly known, the supply-side cost-effectiveness threshold, is a hot topic right now. A few years ago, we could only guess at the magnitude of health that was displaced by reimbursing expensive and not-that-beneficial drugs. Since the seminal work by Karl Claxton and colleagues, we have started to have a pretty good idea of what we’re giving up.

This paper by Niek Stadhouders and colleagues adds to this literature by estimating the marginal productivity of hospital care in the Netherlands. Spoiler alert: they estimated that hospital care generates 1 QALY for around €74,000 at the margin, with 95% confidence intervals ranging from €53,000 to €94,000. Remarkably, it’s close to the Dutch upper reference value for the cost-effectiveness threshold at €80,000!

The estimation approach is quite elaborate, because it requires constructing QALYs and costs and accounting for the effect of mortality on costs. The diagram in Figure 1 is excellent in explaining it. Their approach differs from the Claxton et al. method in that they correct for the costs due to changes in mortality directly, rather than via an instrumental variable analysis. To estimate the marginal effect of spending on health, they use a translog production function. The confidence intervals are generated with Monte Carlo simulation and various robustness checks are presented.
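
For readers who have not met a translog specification before, here is a hedged sketch of the general idea on simulated data (one health output, hospital spending plus one other input): the log of output is regressed on the logs of the inputs, their squares, and their interaction, and the marginal product of spending follows from the fitted elasticity. It is not a reconstruction of the authors' model, which also has to construct the QALY and cost outcomes and handle mortality-related costs.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500  # hypothetical hospital-region observations

# Simulated data: log spending per capita, one other log input, log QALY output.
log_spend = rng.normal(7.5, 0.2, size=n)
log_staff = rng.normal(2.0, 0.2, size=n)
log_qaly = 0.5 + 0.30 * log_spend + 0.20 * log_staff + rng.normal(0, 0.05, size=n)

# Translog design: log inputs, their squares, and their interaction.
X = np.column_stack([log_spend, log_staff,
                     0.5 * log_spend**2, 0.5 * log_staff**2,
                     log_spend * log_staff])
fit = sm.OLS(log_qaly, sm.add_constant(X)).fit()
b = fit.params  # [const, ls, lk, 0.5*ls^2, 0.5*lk^2, ls*lk]

# Output elasticity of spending, evaluated at the sample means; the marginal
# product is dQALY/dSpend = elasticity * QALY / Spend, and its reciprocal is
# the marginal cost per QALY, the quantity the paper reports.
elasticity = b[1] + b[3] * log_spend.mean() + b[5] * log_staff.mean()
marginal_product = elasticity * np.exp(log_qaly).mean() / np.exp(log_spend).mean()
print("marginal cost per QALY (simulated data):", round(1 / marginal_product))
```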

This is a fantastic paper, which is sure to have important policy implications. Analysts conducting cost-effectiveness analyses in the Netherlands, do take note.

Mixed-effects models for health care longitudinal data with an informative visiting process: a Monte Carlo simulation study. Statistica Neerlandica Published 5th September 2019

Electronic health records are the current big thing in health economics research, but they’re not without challenges. One issue is that the data reflects the clinical management, rather than a trial protocol. This means that doctors may test more severe patients more often. For example, people with higher cholesterol may get more frequent cholesterol tests. The challenge is that traditional methods for longitudinal data assume independence between observation times and disease severity.

Alessandro Gasparini and colleagues set out to solve this problem. They propose using inverse intensity of visit weighting within a mixed-effects model framework. Importantly, they provide a Stata package that implements the method. It's part of the wide-ranging and super-useful merlin package.

It was great to see how the method works, laid out with a directed acyclic graph. Essentially, after controlling for confounders, the longitudinal outcome and the observation process are associated through shared random effects. By assuming a distribution for the shared random effects, the model blocks the path between the outcome and the observation process. They make it sound easy!
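
To give a flavour of the weighting idea in code, below is a deliberately simplified Python sketch, not the authors' shared random-effects model or their merlin implementation: the visit process is modelled with a Poisson regression, each record is weighted by the inverse of its patient's predicted visit intensity, and the outcome model is then fitted with those weights (here a weighted regression with clustered standard errors stands in for the mixed-effects model). All variable names and parameter values are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical longitudinal data: one row per observed visit.
rng = np.random.default_rng(1)
rows = []
for pid in range(200):
    severity = rng.normal()
    # Sicker patients are seen more often: the informative visiting process.
    n_visits = 1 + rng.poisson(2 + 2 * (severity > 0))
    for t in range(n_visits):
        rows.append({"pid": pid, "time": t, "severity": severity,
                     "y": 5 + 1.5 * severity + rng.normal(scale=0.5)})
df = pd.DataFrame(rows)

# Step 1: model the visit intensity (number of visits) from covariates.
per_patient = (df.groupby("pid")
                 .agg(visits=("time", "size"), severity=("severity", "first"))
                 .reset_index())
visit_model = smf.glm("visits ~ severity", data=per_patient,
                      family=sm.families.Poisson()).fit()
per_patient["iiw"] = 1.0 / visit_model.predict(per_patient)  # inverse intensity weights

# Step 2: fit the outcome model, weighting each visit record by the inverse
# of its patient's predicted visit intensity, with patient-clustered errors.
df = df.merge(per_patient[["pid", "iiw"]], on="pid")
outcome_model = smf.wls("y ~ severity + time", data=df, weights=df["iiw"]).fit(
    cov_type="cluster", cov_kwds={"groups": df["pid"]})
print(outcome_model.params)
```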

The paper goes through the method, compares it with other methods in the literature in a simulation study, and applies it to a real case study. It's a brilliant paper that deserves a close look from anyone using electronic health records.

Alternative approaches for confounding adjustment in observational studies using weighting based on the propensity score: a primer for practitioners. BMJ [PubMed] Published 23rd October 2019

Would you like to use a propensity score method but don’t know where to start? Look no further! This paper by Rishi Desai and Jessica Franklin provides a practical guide to propensity score methods.

They start by explaining what a propensity score is and how it can be used, from matching to reweighting and regression adjustment. I particularly enjoyed reading about the importance of conceptualising the target of inference, that is, which treatment effect we are trying to estimate. In the medical literature, it is rare to see a paper that is clear on whether it reports the average treatment effect or the average treatment effect in the treated population.
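
To make the distinction concrete, here is a brief sketch on simulated data (not the dabigatran versus warfarin example from the paper): a propensity score estimated by logistic regression, followed by the two standard weighting schemes, inverse probability of treatment weights for the average treatment effect and odds-type weights for the effect in the treated.

```python
import numpy as np
import statsmodels.api as sm

# Simulated observational data: one confounder, a confounded binary treatment,
# and a continuous outcome with a true treatment effect of 1.0.
rng = np.random.default_rng(2)
n = 2_000
x = rng.normal(size=n)                              # confounder
treated = rng.binomial(1, 1 / (1 + np.exp(-x)))     # treatment depends on x
y = 1.0 * treated + 0.8 * x + rng.normal(size=n)

# Propensity score: probability of treatment given the confounder.
ps = sm.Logit(treated, sm.add_constant(x)).fit(disp=0).predict()

# The weights depend on the target of inference:
# ATE: reweight everyone to the full population (1/ps treated, 1/(1-ps) controls).
w_ate = np.where(treated == 1, 1 / ps, 1 / (1 - ps))
# ATT: treated keep weight 1; controls get ps/(1-ps) so they resemble the treated.
w_att = np.where(treated == 1, 1.0, ps / (1 - ps))

def weighted_difference(weights):
    """Weighted difference in mean outcome, treated minus control."""
    t, c = treated == 1, treated == 0
    return np.average(y[t], weights=weights[t]) - np.average(y[c], weights=weights[c])

print("ATE estimate:", round(weighted_difference(w_ate), 2))  # close to 1.0
print("ATT estimate:", round(weighted_difference(w_att), 2))  # close to 1.0
```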

I found the algorithm for method selection really useful. Here, Rishi and Jessica describe the steps in the choice of the propensity score method and recommend their preferred method for each situation. The paper also includes the application of each method to the example of dabigatran versus warfarin for atrial fibrillation. Thanks to the graphs, we can visualise how the distribution of the propensity score changes for each method and depending on the target of inference.

This is an excellent paper for those starting their propensity score analyses, or for those who would like a refresher. It's a keeper!

Credits

Rita Faria’s journal round-up for 2nd September 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ [PubMed] Published 28th August 2019

RCTs are the gold standard primary study for estimating the effect of treatments, but they are often far from perfect. The question is the extent to which their flaws make a difference to the results. Well, RoB 2 is your new best friend to help answer this question.

Developed by a star-studded team, RoB 2 is the update to the Cochrane Collaboration's original risk of bias tool. Bias is assessed by outcome, rather than for the whole RCT, which makes sense to me. For example, the primary outcome may be well reported, yet the secondary outcome, which may be the outcome of interest for a cost-effectiveness model, much less so.

Bias is considered across five domains, with the overall risk of bias usually corresponding to the worst risk of bias in any single domain. This overall risk of bias is then reflected in the evidence synthesis, for example with a stratified meta-analysis.
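
As a trivial illustration of that "worst domain wins" logic, here is a toy sketch; the domain names are paraphrased from RoB 2, and the signalling questions that lead to each domain judgement are not reproduced.

```python
# Toy sketch of the "overall = worst domain" rule described above.
LEVELS = ["low", "some concerns", "high"]

def overall_risk_of_bias(domain_judgements):
    """Overall judgement = the worst of the five domain-level judgements."""
    return max(domain_judgements.values(), key=LEVELS.index)

print(overall_risk_of_bias({
    "randomisation process": "low",
    "deviations from intended interventions": "some concerns",
    "missing outcome data": "low",
    "measurement of the outcome": "low",
    "selection of the reported result": "some concerns",
}))  # -> "some concerns"
```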

The paper is a great read! Jonathan Sterne and colleagues explain the reasons for the update and the process that was followed. Clearly, a lot of thought went into the types of bias and into developing questions to help reviewers assess them. The only downside is that the tool may take more time to apply, given that it needs to be applied by outcome. Still, I think that's a price worth paying for more reliable results. Looking forward to seeing it in use!

Characteristics and methods of incorporating randomised and nonrandomised evidence in network meta-analyses: a scoping review. Journal of Clinical Epidemiology [PubMed] Published 3rd May 2019

In keeping with the evidence synthesis theme, this paper by Kathryn Zhang and colleagues reviews how the applied literature has been combining randomised and non-randomised evidence. The headline findings are that combining these two types of study designs is rare and, when it does happen, naïve pooling is the most common method.

I imagine that the limited use of non-randomised evidence is due to its risk of bias. After all, it is difficult to ensure that the measure of association from a non-randomised study is an estimate of a causal effect. Hence, it is worrying that the majority of network meta-analyses that did combine non-randomised studies did so with naïve pooling.
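
For readers wondering what naïve pooling amounts to in practice, the sketch below shows a fixed-effect inverse-variance calculation in which randomised and non-randomised estimates are simply thrown into the same pool, ignoring study design. The numbers are invented, and a real network meta-analysis would of course involve more than one comparison.

```python
import numpy as np

# Invented log hazard ratios and standard errors for a single comparison:
# two RCTs followed by two observational studies. Naive pooling treats the
# non-randomised estimates exactly like the randomised ones.
estimates = np.array([-0.20, -0.15, -0.40, -0.35])
std_errors = np.array([0.10, 0.12, 0.08, 0.09])

weights = 1 / std_errors**2                   # inverse-variance weights
pooled = np.sum(weights * estimates) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))
print(f"naively pooled log hazard ratio: {pooled:.3f} (SE {pooled_se:.3f})")
```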

This scoping review may kick start some discussions in the evidence synthesis world. When should we combine randomised and non-randomised evidence? How best to do so? And how to make sure that the right methods are used in practice? As a cost-effectiveness modeller, with limited knowledge of evidence synthesis, I’ve grappled with these questions myself. Do get in touch if you have any thoughts.

A cost-effectiveness analysis of shortened direct-acting antiviral treatment in genotype 1 noncirrhotic treatment-naive patients with chronic hepatitis C virus. Value in Health [PubMed] Published 17th May 2019

Rarely do we see a cost-effectiveness paper where the proposed intervention is less costly and less effective, that is, in the controversial southwest quadrant. This exceptional paper by Christopher Fawsitt and colleagues is a welcome exception!

Christopher and colleagues looked at the cost-effectiveness of shorter treatment durations for chronic hepatitis C. Compared with the standard duration, the shorter treatment is not as effective and hence results in fewer QALYs. But it is much cheaper to treat patients over a shorter duration and re-treat those who are not cured than to treat everyone with the standard duration. Hence, in the base case and in most scenarios, the shorter treatment is cost-effective.
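
One way to see why a less effective, less costly option can still be cost-effective is through incremental net monetary benefit: in the southwest quadrant the question becomes whether the savings per QALY forgone exceed the threshold. The sketch below uses invented numbers, not figures from the paper.

```python
# Hypothetical southwest-quadrant comparison; all numbers are invented and
# are not taken from the paper.
threshold = 20_000  # willingness-to-pay threshold, GBP per QALY

# (cost, QALYs) per patient, with re-treatment of non-responders already
# folded into the shorter-duration strategy.
standard = {"cost": 30_000, "qalys": 12.00}
shorter = {"cost": 18_000, "qalys": 11.90}

d_cost = shorter["cost"] - standard["cost"]     # negative: cheaper
d_qalys = shorter["qalys"] - standard["qalys"]  # negative: fewer QALYs

# Savings per QALY forgone: if this exceeds the threshold, the shorter
# regimen is cost-effective despite being (slightly) less effective.
print("savings per QALY forgone:", round(-d_cost / -d_qalys))

# The same decision via incremental net monetary benefit (positive = adopt).
inmb = threshold * d_qalys - d_cost
print("incremental net monetary benefit:", round(inmb))
```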

I’m sure that labelling a less effective and less costly option as cost-effective may have been controversial in some quarters. Some may argue that it is unethical to offer a worse treatment than the standard, even if it saves a lot of money. In my view, it is no different from funding better and more costly treatments, given that the additional costs are borne by other patients, who will necessarily have access to fewer resources.

The paper is beautifully written and is another example of an outstanding cost-effectiveness analysis with important implications for policy and practice. The extensive sensitivity analysis should provide reassurance to the sceptics. And the discussion argues cleverly for the value of a shorter duration in resource-constrained settings and for hard-to-reach populations. A must-read!

Credits