Rita Faria’s journal round-up for 13th August 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Analysis of clinical benefit, harms, and cost-effectiveness of screening women for abdominal aortic aneurysm. The Lancet [PubMed] Published 26th July 2018

This study is an excellent example of the power and flexibility of decision models to help inform decisions on screening policies.

In many countries, screening for abdominal aortic aneurysm is offered to older men but not to women. This is because screening was found to be beneficial and cost-effective, based on evidence from RCTs in older men. In contrast, there is no direct evidence for women. To fill this gap, the study team developed a decision model to simulate the benefits and costs of screening women.

This study has many fascinating features. Not only does it simulate the outcomes of expanding the current UK screening policy for men to include women, but also of other policies with different age parameters, diagnostic thresholds and treatment thresholds.

Curiously, the most cost-effective policy for women is not the current UK policy for men. This shows the importance of including the full range of options in the evaluation, rather than just what is done now. Unfortunately, the paper is sparse on detail about how the various policies were devised and whether other, more cost-effective policies may have been left out.

The key cost-effectiveness driver is the probability of having the disease and its presentation (i.e. the distribution of the aortic diameter), as is often the case in cost-effectiveness analyses of diagnostic tests. Neither of these parameters requires an RCT to be estimated. This means that, in principle, we could reduce the uncertainty about which policy to fund by conducting a study on the prevalence of the disease, rather than an RCT on whether a specific policy works.
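
To see why prevalence does so much work, here is a minimal sketch of a net-benefit calculation in Python. It is not the authors’ model: the parameter names and all numbers are hypothetical, chosen only to show how the expected value of screening a person switches sign as prevalence rises.

```python
# Minimal sketch (not the authors' model): how the expected net benefit of
# screening one person varies with disease prevalence. All numbers are
# hypothetical placeholders, chosen only to illustrate the mechanism.

def incremental_net_benefit(prevalence,
                            qaly_gain_if_diseased=0.2,     # QALYs gained per woman with AAA detected (assumed)
                            cost_screen=35.0,              # cost of inviting and scanning one woman (assumed)
                            cost_treat_if_detected=2500.0, # extra treatment cost per case found (assumed)
                            threshold=20000.0):            # willingness to pay per QALY, in pounds (assumed)
    """Expected incremental net benefit of screening vs. no screening, per person."""
    benefit = prevalence * qaly_gain_if_diseased * threshold
    cost = cost_screen + prevalence * cost_treat_if_detected
    return benefit - cost

for p in [0.002, 0.01, 0.03, 0.06]:
    print(f"prevalence {p:.3f}: INB per person = {incremental_net_benefit(p):+.0f}")
```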

An exciting aspect is that treatment itself could be better targeted, in particular, that lowering the threshold for treatment could reduce non-intervention rates and operative mortality. The implication is that there may be scope to improve the cost-effectiveness of management, which in turn will leave greater scope for investment in screening. Could this be the next question to be tackled by this remarkable model?

Establishing the value of diagnostic and prognostic tests in health technology assessment. Medical Decision Making [PubMed] Published 13th March 2018

Staying on the topic of the cost-effectiveness of screening and diagnostic tests, this is a paper on how to evaluate tests in a manner consistent with health technology assessment principles. This paper has been around for a few months, but it’s only now that I’ve had the chance to give it the careful read that such a well-thought-out paper deserves.

Marta Soares and colleagues lay out an approach to determine the most cost-effective way to use diagnostic and prognostic tests. They start by explaining that the value of the test is mostly in informing better management decisions. This means that the cost-effectiveness of testing necessarily depends on the cost-effectiveness of management.

The paper also spells out that the cost-effectiveness of testing depends on the prevalence of the disease, as we saw in the paper above on screening for abdominal aortic aneurysm. Clearly, the cost-effectiveness of testing depends on the accuracy of the test.

Importantly, the paper highlights that the evaluation should compare all possible ways of using the test. A decision problem with 1 test and 1 treatment yields 6 strategies, of which 3 are relevant: no test and treat all; no test and treat none; test and treat if positive. If the reference test is added, another 3 strategies need to be considered. This shows how complex a cost-effectiveness analysis of a test can quickly become! In my paper with Marta and others, for example, we ended up with 383 testing strategies.
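
To see where the six strategies come from, here is a rough sketch that simply enumerates the ways a single test result can be mapped onto a treat/no-treat decision; the labels are mine, not the paper’s.

```python
# Enumerate test-and-treat strategies for one test and one treatment.
from itertools import product

strategies = []
# Without the test, the only choice is a blanket decision.
for rule in ["treat all", "treat none"]:
    strategies.append(("no test", rule))
# With the test, a strategy maps each possible result to a decision.
for if_pos, if_neg in product(["treat", "don't treat"], repeat=2):
    strategies.append(("test", f"if +: {if_pos}; if -: {if_neg}"))

for s in strategies:
    print(s)
# Six strategies in total. Only three are sensible: 'no test, treat all',
# 'no test, treat none', and 'test, treat if positive', because the others
# pay for the test without letting its result sensibly change management.
```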

The discussion is excellent, particularly about the limitations of end-to-end studies (which compare testing strategies in terms of their end outcomes, e.g. health). End-to-end studies can only compare a limited subset of testing strategies and may not allow for modelling the outcomes of strategies beyond those compared in the study. Furthermore, end-to-end studies are likely to be inefficient given the large sample sizes and long follow-up required to detect differences in outcomes. I wholeheartedly agree that primary studies should focus on the prevalence of the disease and the accuracy of the test, leaving the evaluation of the best way to use the test to decision modelling.

Reasonable patient care under uncertainty. Health Economics [PubMed] Published 22nd August 2018

And for my third paper of the week, something completely different. But so worth reading! Charles Manski provides an overview of his work on how to use the available evidence to make decisions under uncertainty. It is accompanied by comments from Karl Claxton, Emma McIntosh, and Anirban Basu, together with Manski’s response. The set is a superb read and great food for thought.

Manski starts with the premise that we make decisions about which course of action to take without having full information about what is best; i.e. under uncertainty. This is uncontroversial, and has been well accepted ever since Arrow’s seminal paper.

More contentious is Manski’s view that clinicians’ decisions for individual patients may be better than guideline recommendations aimed at the ‘average’ patient, because clinicians can take into account more information about the specific individual. I would contend that it is unrealistic to expect clinicians to keep pace with new knowledge in medicine, given how much of it is generated and how quickly. Furthermore, clinicians, like all other people, are unlikely to be fully rational in their decision-making.

Most fascinating was Section 6 on decision theory under uncertainty. Manski focussed on the minimax-regret criterion. I had not heard about these approaches before, so Manski’s explanations were quite the eye-opener.
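
For readers who, like me, are new to it, here is a toy illustration of the minimax-regret criterion in Python; the outcomes are numbers I have made up, not an example from the paper.

```python
# Toy minimax-regret example: pick the action whose worst-case regret across
# unknown states of the world is smallest. Outcomes are hypothetical QALYs.
outcomes = {
    "treatment A": {"state 1": 0.70, "state 2": 0.40},
    "treatment B": {"state 1": 0.55, "state 2": 0.60},
}
states = ["state 1", "state 2"]

# Best achievable outcome in each state, regardless of which action delivers it.
best_in_state = {s: max(outcomes[a][s] for a in outcomes) for s in states}

# Regret of an action in a state = shortfall from that best achievable outcome.
max_regret = {a: max(best_in_state[s] - outcomes[a][s] for s in states)
              for a in outcomes}

choice = min(max_regret, key=max_regret.get)
print(max_regret)  # treatment A: ~0.20, treatment B: ~0.15
print(choice)      # treatment B minimises the maximum regret
```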

Manski concludes by recommending that central health care planners take a portfolio approach to their guidelines (adaptive diversification), coupled with the minimax-regret criterion to update the guidelines as more information emerges (adaptive minimax-regret). Whether the minimax-regret criterion is the best one is a question that I will leave to better brains than mine. A more immediate question is how feasible it is to implement this adaptive diversification, particularly in instituting a process in which data are systematically collected and analysed to update the guidelines. In his response, Manski suggests that specialists in decision analysis should become members of the multidisciplinary clinical team and that decision analysis should be taught in medical courses. This resonates with my own view that we need to do better at helping people use information to make better decisions.


James Lomas’s journal round-up for 21st May 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Decision making for healthcare resource allocation: joint v. separate decisions on interacting interventions. Medical Decision Making [PubMed] Published 23rd April 2018

While it may be uncontroversial that including all of the relevant comparators in an economic evaluation is crucial, a careful examination of this statement raises some interesting questions. Which comparators are relevant? For those that are relevant, how crucial is it that they are not excluded? The answer to the first of these questions may seem obvious, that all feasible mutually exclusive interventions should be compared, but this is in fact deceptive. Dakin and Gray highlight inconsistency between guidelines as to what constitutes interventions that are ‘mutually exclusive’ and so try to re-frame the distinction according to whether interventions are ‘incompatible’ – when it is physically impossible to implement both interventions simultaneously – and, if not, whether interventions are ‘interacting’ – where the costs and effects of the simultaneous implementation of A and B do not equal the sum of their parts. What I really like about this paper is that it has a very pragmatic focus. Inspired by policy arrangements, for example single technology appraisals, and the difficulty in capturing all interactions, Dakin and Gray provide a reader-friendly flow diagram to illustrate cases where excluding interacting interventions from a joint evaluation is likely to have a big impact, and furthermore propose a sequencing approach that avoids the major problems in evaluating separately what should be considered jointly. Essentially, when we have interacting interventions at different points of the disease pathway, evaluating separately may not be problematic if we start at the end of the pathway and move backwards, similar to the backward induction used in sequential problems in game theory. There are additional related questions that I’d like to see these authors turn to next, such as how to include interaction effects between interventions and, in particular, how to evaluate system-wide policies that may interact with a very large number of interventions. This paper makes a great contribution to answering all of these questions by establishing a framework that clearly distinguishes concepts that had previously been subject to muddied thinking.
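
My reading of the sequencing idea, as a stylised sketch: evaluate the decision at the end of the pathway first, then fold the value of the optimal downstream choice into the evaluation of the upstream decision. The structure and all numbers below are hypothetical and are not taken from Dakin and Gray.

```python
# Stylised backward-induction sketch for two interacting decision points on a
# disease pathway. All parameters are hypothetical placeholders.
THRESHOLD = 20000.0  # pounds per QALY (assumed)

def nb(qalys, cost):
    """Net monetary benefit."""
    return qalys * THRESHOLD - cost

# Step 1 (downstream decision, taken only if the disease progresses):
progressed = {"usual care": (6.0, 3000.0),          # (QALYs, cost) if progressed
              "new rescue therapy": (6.4, 9500.0)}
nb_progressed = {k: nb(*v) for k, v in progressed.items()}
best_late = max(nb_progressed, key=nb_progressed.get)

# Step 2 (upstream decision): value progressed patients at the net benefit of
# the *optimal* downstream choice, which is where the interaction is captured.
QALYS_IF_WELL, COST_IF_WELL = 9.0, 500.0
def pathway_nb(p_progress, prevention_cost):
    return (p_progress * nb_progressed[best_late]
            + (1 - p_progress) * nb(QALYS_IF_WELL, COST_IF_WELL)
            - prevention_cost)

upstream = {"no prevention": pathway_nb(0.40, 0.0),
            "preventive programme": pathway_nb(0.25, 2000.0)}
best_early = max(upstream, key=upstream.get)
print(f"Downstream: {best_late}; upstream: {best_early}")
```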

When cost-effective interventions are unaffordable: integrating cost-effectiveness and budget impact in priority setting for global health programs. PLoS Medicine [PubMed] Published 2nd October 2017

In my opinion, there are many things that health economists shouldn’t try to include when they conduct cost-effectiveness analysis. Affordability is not one of these. This paper is great, because Bilinski et al. shine a light on the worldwide phenomenon of interventions being found to be ‘cost-effective’ but not affordable. A particular quote – that it would be financially impossible to implement all interventions that are found to be ‘very cost-effective’ in many low- and middle-income countries – is quite shocking. Bilinski et al. compare and contrast cost-effectiveness analysis and budget impact analysis, and argue that there are four key reasons why something could be ‘cost-effective’ but not affordable: 1) judging cost-effectiveness with reference to an inappropriate cost-effectiveness ‘threshold’, 2) adoption of a societal perspective that includes costs not falling upon the payer’s budget, 3) failing to explicitly consider the distribution of costs over time, and 4) the use of an inappropriate discount rate that may not accurately reflect the borrowing and investment opportunities facing the payer. They then argue that, because of this, cost-effectiveness analysis should be presented alongside budget impact analysis, so that the decision-maker can base a decision on both. I don’t disagree with this as a pragmatic interim solution, but – by highlighting these four reasons for divergence with such important economic consequences – I think this paper will have further-reaching implications. To my mind, Bilinski et al. essentially issue a call to arms for researchers to develop frameworks and estimates that improve the conduct of cost-effectiveness analysis, so that paradoxical results are no longer produced, decisions are more usefully informed, and the opportunity costs of large budget impacts are properly evaluated – especially in the context of low- and middle-income countries, where the foregone health from poor decisions can be so significant.
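
A back-of-the-envelope sketch of the paradox, with numbers I have invented rather than taken from Bilinski et al.: an intervention can clear a GDP-based cost-effectiveness threshold while its budget impact exceeds anything the payer could actually fund.

```python
# Cost-effective by the threshold rule, yet unaffordable: illustrative numbers only.
population_eligible = 2_000_000
incremental_cost_per_person = 150.0        # dollars per person per year (assumed)
incremental_qalys_per_person = 0.02        # QALYs per person per year (assumed)
gdp_per_capita = 3_000.0                   # dollars (assumed)
threshold = 3 * gdp_per_capita             # common '3x GDP per capita' rule of thumb

icer = incremental_cost_per_person / incremental_qalys_per_person
budget_impact = population_eligible * incremental_cost_per_person
annual_programme_budget = 100_000_000.0    # dollars actually available to the payer (assumed)

print(f"ICER: {icer:,.0f} per QALY vs threshold {threshold:,.0f} per QALY")   # 7,500 < 9,000
print(f"Budget impact: {budget_impact:,.0f} vs budget {annual_programme_budget:,.0f}")
# The intervention is 'cost-effective' but would absorb three times the
# payer's entire annual programme budget.
```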

Patient cost-sharing, socioeconomic status, and children’s health care utilization. Journal of Health Economics [PubMed] Published 16th April 2018

This paper evaluates a policy using a combination of regression discontinuity design and difference-in-differences methods. Not only does it do that, but it tackles an important policy question using a detailed population-wide dataset (a set of linked datasets, more accurately). As if that weren’t enough, one of the policy reforms was actually implemented as a result of a vote in which two politicians ‘accidentally pressed the wrong button’, reducing concerns that the policy may have in some way not been exogenous. Needless to say, I found the method employed in this paper to be a pretty convincing identification strategy. The policy question at hand is whether demand for GP visits by children in the Swedish county of Scania (Skåne) is affected by cost-sharing. Cost-sharing for GP visits has applied to different age groups over different periods of time, providing the basis for regression discontinuities around the age threshold and for treated and control groups over time. Nilsson and Paul find results suggesting that when health care is free of charge, doctor visits by children increase by 5-10%. In this context, doctor visits were subject to telephone triage by a nurse, so in this sense it can be argued that all of these visits would be ‘needed’. Further, Nilsson and Paul find that the sensitivity to price is concentrated in low-income households and is greater among sickly children. The authors contextualise their results very well and, beyond that context, I can’t deny that it particularly resonated with me to read this as the NHS approaches its 70th birthday – a system in which cost-sharing has never been implemented for GP visits by children. This paper is clearly also highly relevant to the debate about user charges that has surfaced again and again in the UK.
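
For the econometrically curious, here is a schematic of what an estimating equation combining the age discontinuity with variation over time might look like. It is my own sketch run on synthetic data, not the authors’ specification, and every variable name (free_care, age_centred, municipality) is an assumption.

```python
# Schematic RD + difference-in-differences regression for child GP visits,
# run on synthetic data purely so the snippet executes end-to-end.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "age": rng.integers(0, 20, n),           # child's age in years
    "year": rng.integers(2010, 2016, n),
    "municipality": rng.integers(1, 34, n),
})
# free_care = 1 when the child's age group faced no charge in that year
# (a made-up rule standing in for the actual reforms in Skane).
df["free_care"] = ((df["age"] < 12) & (df["year"] >= 2013)).astype(int)
df["age_centred"] = df["age"] - 12            # the RD running variable
df["visits"] = rng.poisson(2 + 0.2 * df["free_care"])

model = smf.ols(
    "visits ~ free_care + age_centred + free_care:age_centred + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["municipality"]})
print(model.params["free_care"])   # the cost-sharing effect of interest
```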


Chris Sampson’s journal round-up for 7th May 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Building an international health economics teaching network. Health Economics [PubMed] Published 2nd May 2018

The teaching on my health economics MSc (at Sheffield) was very effective. Experts from our subdiscipline equipped me with the skills that I went on to use on a daily basis in my first job, and to this day. But not everyone gets the same opportunity. And there were only 8 people on my course. Part of the background to the new movement described in this editorial is the observation that demand for health economists outstrips supply. Great for us jobbing health economists, but suboptimal for society. The shortfall has given rise to people teaching health economics (or rather, economic evaluation methods) without any real training in economics. The main purpose of this editorial is to call on health economists (that’s me and you) to pull our weight and contribute to a collective effort to share, improve, and ultimately deliver high-quality teaching resources. The Health Economics education website, which is now being adopted by iHEA, should be the starting point. And there’s now a Teaching Health Economics Special Interest Group. So chip in! This paper got me thinking about how the blog could play its part in contributing to the infrastructure of health economics teaching, so expect to see some developments on that front.

Including future consumption and production in economic evaluation of interventions that save life-years: commentary. PharmacoEconomics – Open [PubMed] Published 30th April 2018

When people live longer, they spend their extra life-years consuming and producing. How much consuming and producing they do affects social welfare. The authors of this commentary are very clear about the point they wish to make, so I’ll just quote them: “All else equal, a given number of quality-adjusted life-years (QALYs) from life prolongation will normally be more costly from a societal perspective than the same number of QALYs from programmes that improve quality of life”. This is because (in high-income countries) most people whose life can be extended are elderly, so they’re not very productive. They’re likely to create a net cost for society (given how we measure value). Asserting that the cost is ‘worth it’ at any level, or simply ignoring the matter, isn’t really good enough, because providing life extension will be at the expense of some life-improving treatments which may – were these costs taken into account – improve social welfare. The authors’ estimates suggest that the societal cost of life extension is far greater than current methods admit. Consumption costs and production gains should therefore be estimated. The question is not whether we should measure them – clearly, we should – but what weight they ought to be given in decision-making.
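
To make the asymmetry concrete, here is a tiny worked example with invented numbers (the commentary reports its own, different estimates).

```python
# Hypothetical illustration of the societal-cost asymmetry between QALYs gained
# by extending life and QALYs gained by improving quality of life.
annual_consumption = 14_000.0   # pounds consumed in an added life-year (assumed)
annual_production = 1_000.0     # pounds produced in that year (assumed; elderly patient)

# One QALY from living one extra year in full health carries this net resource use:
net_societal_cost_life_extension = annual_consumption - annual_production   # 13,000

# One QALY from improving quality of life, with survival unchanged, carries none:
net_societal_cost_quality_gain = 0.0

print(net_societal_cost_life_extension - net_societal_cost_quality_gain)
# The same QALY 'costs' society 13,000 more when it comes from life extension,
# which is the asymmetry the authors argue should be reflected in decisions.
```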

Methods for the economic evaluation of changes to the organisation and delivery of health services: principal challenges and recommendations. Health Economics, Policy and Law [PubMed] Published 20th April 2018

The late, great Alan Maynard liked to speak about redisorganisations in the NHS: large-scale changes to the way services are organised and delivered, usually without a supporting evidence base. This problem extends to smaller-scale service delivery interventions. There’s no requirement for policy-makers to demonstrate that changes will be cost-effective. This paper explains why applying the methods of health technology assessment to service interventions can be tricky. The causal chain of effects may be less clear when interventions are applied at the organisational level rather than the individual level, and the results will be heavily dependent on the context in which they are implemented. The author outlines five challenges in conducting economic evaluations of service interventions: i) conducting ex-ante evaluations, ii) evaluating impact in terms of QALYs, iii) assessing costs and opportunity costs, iv) accounting for spillover effects, and v) generalisability. Those identified as most limiting right now are the challenges associated with estimating costs and QALYs. Cost data aren’t likely to be readily available at the individual level and may not be easily identifiable and divisible, so top-down programme-level costs may be all we have to work with, and they may lack precision. QALYs may be ‘attached’ to service interventions by applying a tariff to individual patients or by supplementing the analysis with simulation modelling. But more methodological development is still needed. And until we figure it out, health spending is likely to suffer from allocative inefficiencies.

Vog: using volcanic eruptions to estimate the health costs of particulates. The Economic Journal [RePEc] Published 12th April 2018

As sources of random shocks to a system go, a volcanic eruption is pretty good. A major policy concern around the world – particularly in big cities – is the impact of pollution. But the short-term impact of particulate pollution is difficult to identify because there is high correlation amongst pollutants. In this study, the authors use the eruption activity of Kīlauea on the island of Hawaiʻi as a source of variation in particulate pollution. Vog – volcanic smog – includes sulphur dioxide and is similar to particulate pollution in cities, but the fact that Hawaiʻi does not have the same levels of industrial pollutants means that the authors can more cleanly identify the impact on health outcomes. In 2008 there was a big increase in Kīlauea’s emissions when a new vent opened, and the level of emissions fluctuates daily, so there’s plenty of variation to play with. The authors have two main sources of data: emergency admissions (and their associated charges) and air quality data. A parsimonious OLS model is used to estimate the impact of air quality on the total number of admissions for a given day in a given region, with fixed effects for region and date. An instrumental variable approach is also used, which looks at air quality on a neighbouring island and uses wind direction to specify the instrumental variable. The authors find that pulmonary-related emergency admissions increased with pollution levels. Looking at the instrumental variable analysis, a one standard deviation increase in particulate pollution results in 23-36% more pulmonary-related emergency visits (depending on which measure of particulate pollution is being used). Importantly, there’s no impact on fractures, which we wouldn’t expect to be influenced by the particulate pollution. The impact is greatest for babies and young children. And it’s worth bearing in mind that avoidance behaviours – e.g. people staying indoors on ‘voggy’ days – are likely to reduce the impact of the pollution. Despite the apparent lack of similarity between Hawaiʻi and – for example – London, this study provides strong evidence that policy-makers should consider the potential savings to the health service when tackling particulate pollution.
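
As I understand it, the identification strategy can be sketched as an explicit two-stage regression. The snippet below runs on synthetic data so that it is self-contained; the variable names and the data-generating process are mine, not the authors’, and date fixed effects are omitted for brevity.

```python
# Two-stage least squares sketch of the vog identification idea, on synthetic
# data so it runs end-to-end (the real analysis uses daily region-level data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "region": rng.integers(0, 4, n),
    "upwind_so2": rng.gamma(2.0, 5.0, n),            # emissions measured upwind
    "wind_towards_region": rng.integers(0, 2, n),    # 1 if the wind blows towards the region
})
df["local_pm"] = 5 + 0.8 * df["upwind_so2"] * df["wind_towards_region"] + rng.normal(0, 2, n)
df["pulmonary_ed_visits"] = rng.poisson(np.exp(1 + 0.02 * df["local_pm"]))

# First stage: predict local particulates from the upwind measure x wind direction.
first = smf.ols("local_pm ~ upwind_so2:wind_towards_region + C(region)", data=df).fit()
df["pm_hat"] = first.fittedvalues

# Second stage: ED visits on instrumented pollution (standard errors here ignore
# the generated regressor; a proper 2SLS routine would correct for that).
second = smf.ols("pulmonary_ed_visits ~ pm_hat + C(region)", data=df).fit()
print(second.params["pm_hat"])
```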
