Chris Sampson’s journal round-up for 12th August 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Developing open-source models for the US health system: practical experiences and challenges to date with the Open-Source Value Project. PharmacoEconomics [PubMed] Published 7th August 2019

PharmacoEconomics will soon publish a themed issue on transparency in decision modelling (to which I’ve contributed), and this paper – I assume – is one that will feature. At least one output from the Open-Source Value Project has featured in these round-ups before. The purpose of this paper is to describe the experiences of the initiative in developing and releasing two open-source models, one in rheumatoid arthritis and one in lung cancer.

The authors outline the background to the project and its goal to develop credible models that are more attuned to stakeholders’ needs. By sharing the R and C++ source code, developing interactive web applications, and providing extensive documentation, the models are intended to be wholly transparent and flexible. The model development process also involves feedback from experts and the public, followed by revision and re-release. It’s a huge undertaking. The paper sets out the key challenges associated with this process, such as enabling stakeholders with different backgrounds to understand technical models and each other. The authors explain how they have addressed such difficulties along the way. The resource implications of this process are also challenging, because the time and expertise required are much greater than for run-of-the-mill decision models. The advantages of the tools used by the project, such as R and GitHub, are explained, and the paper provides some ammunition for the open-source movement. One of the best parts of the paper is the authors’ challenge to those who question open-source modelling on the basis of intellectual property concerns. For example, they state that, “Claiming intellectual property on the implementation of a relatively common modeling approach in Excel or other programming software, such as a partitioned survival model in oncology, seems a bit pointless.” Agreed.

The response to date from the community has been broadly positive, though there has been a lack of engagement from US decision-makers. Despite this, the initiative has managed to secure adequate funding. This paper is a valuable read for anyone involved in open-source modelling or in establishing a collaborative platform for the creation and dissemination of research tools.

Incorporating affordability concerns within cost-effectiveness analysis for health technology assessment. Value in Health Published 30th July 2019

The issue of affordability is proving to be a hard nut to crack for health economists. That’s probably because we’ve spent a very long time conducting incremental cost-effectiveness analyses that pay little or no attention to the budget constraint. This paper sets out to define a framework that finally brings affordability into the fold.

The author sets up an example with a decision-maker that seeks to maximise population health with a fixed budget – read, HTA agency – and the motivating example is new medicines for hepatitis C. The core of the proposal is an alternative decision rule. Rather than simply comparing the incremental cost-effectiveness ratio (ICER) to a fixed threshold, it incorporates a threshold that is a function of the budget impact. At its most basic, a bigger budget impact (all else equal) means a greater opportunity cost and thus a lower threshold. The author suggests doing away with the ICER (which is almost impossible to work with) and instead using net health benefits. In this framework, whether or not net health benefit is greater than zero depends on the size of the budget impact at any given ICER. If we accept the core principle that budget impact should be incorporated into the decision rule, two other issues arise – time and uncertainty – which are also addressed in the paper. The framework moves us beyond the current focus on net present value, which ignores the distribution of costs over time beyond simply discounting future expenditure. Instead, the opportunity cost ‘threshold’ depends on the budget impact in each time period. The description of the framework also addresses uncertainty in budget impact, which requires the estimation of opportunity costs in each iteration of a probabilistic analysis.
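To make the decision rule concrete, here is a minimal sketch in Python. This is my own illustration, not the paper’s algebra: the numbers and the linear form of the threshold function are assumptions, chosen only to show how the same ICER can pass at a small budget impact and fail at a large one.

```python
# A minimal sketch of a budget-impact-dependent decision rule (my illustration,
# not the paper's exact algebra). All numbers and the linear threshold form
# are assumed purely for the example.

def net_health_benefit(delta_qalys, delta_cost, budget_impact,
                       k_base=20_000, k_slope=5e-9):
    """Net health benefit with a threshold that falls as budget impact grows."""
    # A larger budget impact displaces more existing care, so the opportunity
    # cost per pound rises and the effective threshold k falls.
    k = k_base * (1 - k_slope * budget_impact)
    return delta_qalys - delta_cost / k

# The same ICER (£15,000 per QALY) at two scales of budget impact:
print(net_health_benefit(1.0, 15_000, budget_impact=5e6))  # small impact: NHB > 0
print(net_health_benefit(1.0, 15_000, budget_impact=1e8))  # large impact: NHB < 0
```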

The paper is thorough in setting out the calculations needed to implement this framework. If you’re conducting an economic evaluation of a technology that could have a non-marginal (big) budget impact, you should add these calculations to your analysis plan. Once researchers start producing these estimates, we’ll be able to understand how much these differences could matter for resource allocation decisions and determine whether the likes of NICE ought to incorporate the framework into their methods guide.

Did UberX reduce ambulance volume? Health Economics [PubMed] [RePEc] Published 24th June 2019

In London, you can probably – at most times of day – get an Uber quicker than you can get an ambulance. That isn’t necessarily a bad thing, as ambulances aren’t there to provide convenience. But it does raise an interesting question. Could the availability of super-fast, low-cost, low-effort taxi hailing reduce pressure on ambulance services? If so, we might anticipate the effect to be greatest where people have to actually pay for ambulances.

This study combines data on Uber market entry in the US, by state and city, with ambulance rates. Between Q1 2012 and Q4 2015, the proportion of the US population with access to Uber rose from 0% to almost 25%. The authors are also able to distinguish ‘lights and sirens’ ambulance rides from ‘no lights and sirens’ rides. A difference-in-differences model estimates the ambulance rate for a given city by quarter-year. The analysis suggests that there was a significant decline in ambulance rates in the years following Uber’s entry to the market, implying an average of 1.2 fewer ambulance trips per 1,000 population per quarter.
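For readers unfamiliar with the set-up, a two-way fixed effects difference-in-differences regression of this kind can be sketched as below. This is a toy version with simulated data; the variable names, effect sizes and clustering choice are my assumptions, not the authors’ specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy city-by-quarter panel with staggered Uber entry (simulated data;
# variable names and effect sizes are hypothetical, not the authors').
rng = np.random.default_rng(1)
n_cities, n_quarters = 40, 16
df = pd.DataFrame([(c, t) for c in range(n_cities) for t in range(n_quarters)],
                  columns=["city", "quarter"])
entry = rng.integers(4, 20, n_cities)  # entry after quarter 15 = never treated
df["uber"] = (df["quarter"].to_numpy() >= entry[df["city"].to_numpy()]).astype(int)
df["amb_rate"] = (10 + rng.standard_normal(n_cities)[df["city"].to_numpy()]  # city effects
                  - 0.05 * df["quarter"]                                     # common trend
                  - 1.2 * df["uber"]                                         # 'true' effect
                  + rng.standard_normal(len(df)))                            # noise

# Two-way fixed effects DiD: city and quarter dummies, SEs clustered by city.
m = smf.ols("amb_rate ~ uber + C(city) + C(quarter)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["city"]})
print(m.params["uber"])  # should recover roughly -1.2
```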

There are some questionable results in here, including the fact that a larger effect was found for the ‘lights and sirens’ ambulance rate, so it’s not entirely clear what’s going on. The authors describe a variety of robustness checks for our consideration. Unfortunately, the discussion of the results is lacking in detail and insight, so readers need to figure it out for themselves. I’d be very interested to see a similar analysis in the UK. I suspect that I would be inclined to opt for an Uber over an ambulance in many cases. And I wouldn’t have the usual concern about Uber exploiting its drivers, as I dare say ambulance drivers aren’t treated much better.

Rita Faria’s journal round-up for 13th May 2019

Communicating uncertainty about facts, numbers and science. Royal Society Open Science Published 8th May 2019

This remarkable paper by Anne Marthe van der Bles and colleagues, including the illustrious David Spiegelhalter, covers two of my favourite topics: communication and uncertainty. They focused on epistemic uncertainty. That is, the uncertainty about facts, numbers and science due to limited knowledge (rather than due to the randomness of the world). This is what we could know more about if we spent more resources on finding it out.

The authors propose a framework for communicating uncertainty and apply it to two case studies, one in climate change and the other in economic statistics. They also review the literature on the effect of communicating uncertainty. The paper is so wide-ranging and exhaustive that, if I have one criticism, it is that its 42 pages are not conducive to a leisurely read.

I found the distinction between direct and indirect uncertainty fascinating and incredibly relevant to health economics. Direct uncertainty is about the precision of the evidence, whilst indirect uncertainty is about its quality. An example of the latter is evidence based on a naïve comparison of patients in a Phase 2 trial with historical controls in another country (yup, this happens!).

So, how should we communicate the uncertainty in our findings? I’m afraid that this paper is not a practical guide but rather a brilliant ground-clearing exercise on how to start thinking about this. Nevertheless, Box 5 (p35) does give some good advice! I do hope this paper kick-starts research on how to explain uncertainty beyond an academic audience. Looking forward to more!

Was Brexit triggered by the old and unhappy? Or by financial feelings? Journal of Economic Behavior & Organization [RePEc] Published 18th April 2019

Not strictly health economics – although arguably Brexit affects our health – is this impressive study about the factors that contributed to the Leave win in the Brexit referendum. Federica Liberini and colleagues used data from the Understanding Society survey to look at the predictors of people’s views about whether or not the UK should leave the EU. The main results come from regressing whether or not a person was pro-Brexit on life satisfaction, their feelings about their financial situation, and other characteristics.

Their conclusions are staggering. They found that people’s views were generally unrelated to their age, their life satisfaction or their income. Instead, a person’s feelings about their financial situation were the strongest predictor. For economists, it may be a bit cringe-worthy to see OLS used for a binary dependent variable. But to be fair, the authors mention that the results are similar with non-linear models and they report extensive supplementary analyses. Remarkably, they’re making the individual-level data available from the 18th of June.
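For the curious, OLS on a binary outcome is the linear probability model, and the authors’ robustness claim is easy to check in principle: LPM coefficients are usually close to the average marginal effects from a logit. A toy sketch with simulated data and made-up variable names (not the authors’ data or specification):

```python
import numpy as np
import statsmodels.api as sm

# Toy check that a linear probability model (OLS on a binary outcome) and a
# logit give similar marginal effects. Data and variable names are made up.
rng = np.random.default_rng(0)
n = 5_000
finances = rng.integers(1, 6, n)      # feelings about financial situation
satisfaction = rng.integers(1, 8, n)  # life satisfaction
latent = 1.0 - 0.4 * finances + 0.05 * satisfaction + rng.logistic(size=n)
pro_leave = (latent > 0).astype(float)

X = sm.add_constant(np.column_stack([finances, satisfaction]))
lpm = sm.OLS(pro_leave, X).fit(cov_type="HC1")  # robust SEs: LPM is heteroskedastic
logit = sm.Logit(pro_leave, X).fit(disp=0)

# LPM slope vs logit average marginal effect for 'finances' - typically close
print(lpm.params[1], logit.get_margeff().margeff[0])
```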

As the authors discuss, it is not clear if we’re looking at predictive estimates of characteristics related to pro-Brexit feeling or at causal estimates of factors that led to the pro-Brexit feeling. That is, if we could improve someone’s perceived financial situation, would we reduce their probability of feeling pro-Brexit? In any case, the message is clear. Feelings matter!

How does treating chronic hepatitis C affect individuals in need of organ transplants in the United Kingdom? Value in Health Published 8th March 2019

Anupam Bapu Jena and colleagues looked at the spillover benefits of curing hepatitis C, given its consequences for the supply of, and demand for, livers and other organs for transplant in the UK. They compare three policies: the status quo, in which there is no screening for hepatitis C and organ donation by people with hepatitis C is rare; a universal screen-and-treat policy in which cured people opt in to organ donation; and the same policy with opt-out organ donation.

To do this, they adapted a previously developed queuing model. For the status quo, the model inputs were estimated by calibrating the model outputs to reported NHS performance. They then changed the model inputs to reflect the anticipated impact of the new policies. Importantly, they assumed that all patients with hepatitis C would be cured and no longer require a transplanted organ, and that cured patients would donate organs at rates similar to the general population. They predict that curing hepatitis C would directly reduce the waiting list for organ transplants by reducing the number of patients needing them. There would also be an indirect benefit via increasing the availability of organs to other patients. These consequences aren’t typically included in cost-effectiveness analyses of treatments for hepatitis C, which means that estimates of those treatments’ comparative benefits and costs may be inaccurate.
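To give a flavour of the mechanics (this is my toy illustration, not the authors’ calibrated model), a waiting list can be simulated as a simple discrete-time queue in which curing hepatitis C lowers the arrival rate of patients and raises the arrival rate of organs:

```python
import numpy as np

# Toy discrete-time queue for a transplant waiting list (my illustration, not
# the authors' calibrated model). All rates are hypothetical, per month.
def simulate_waiting_list(patient_rate, organ_rate, months=120, start=400, seed=0):
    rng = np.random.default_rng(seed)
    queue = start
    for _ in range(months):
        queue += rng.poisson(patient_rate)            # new patients listed
        queue -= min(queue, rng.poisson(organ_rate))  # transplants performed
    return queue

# Status quo vs screen-and-treat: curing hepatitis C removes some patients
# from the list (lower patient_rate) and adds cured donors (higher organ_rate).
print(simulate_waiting_list(patient_rate=30, organ_rate=28))  # list grows
print(simulate_waiting_list(patient_rate=26, organ_rate=30))  # list shrinks
```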

Keeping with the theme of uncertainty, it was disappointing that the paper does not include confidence bounds on its results, nor does it present sensitivity analyses to its assumptions, which, in my view, were quite favourable towards a universal screen-and-treat policy. This is an interesting application of a queuing model, which is something I don’t often see in cost-effectiveness analysis. It is also timely and relevant, given the recent drive by the NHS to eliminate hepatitis C. In a few years’ time, we’ll hopefully know to what extent the predicted spillover benefits were realised.

Rita Faria’s journal round-up for 13th August 2018

Analysis of clinical benefit, harms, and cost-effectiveness of screening women for abdominal aortic aneurysm. The Lancet [PubMed] Published 26th July 2018

This study is an excellent example of the power and flexibility of decision models to help inform decisions on screening policies.

In many countries, screening for abdominal aortic aneurysm is offered to older men but not to women. This is because screening was found to be beneficial and cost-effective, based on evidence from RCTs in older men. In contrast, there is no direct evidence for women. To inform this question, the study team developed a decision model to simulate the benefits and costs of screening women.

This study has many fascinating features. Not only does it simulate the outcomes of expanding the current UK screening policy for men to include women, but also those of other policies with different age parameters, diagnostic thresholds and treatment thresholds.

Curiously, the most cost-effective policy for women is not the current UK policy for men. This shows the importance of including the full range of options in the evaluation, rather than just what is done now. Unfortunately, the paper is sparse on detail about how the various policies were devised and whether other, more cost-effective policies may have been left out.

The key cost-effectiveness drivers are the probability of having the disease and its presentation (i.e. the distribution of the aortic diameter), which is often the case in cost-effectiveness analyses of diagnostic tests. Neither of these parameters requires an RCT to be estimated. This means that, in principle, we could reduce the uncertainty about which policy to fund by conducting a study on the prevalence of the disease, rather than an RCT on whether a specific policy works.

An exciting aspect is that treatment itself could be better targeted; in particular, lowering the threshold for treatment could reduce non-intervention rates and operative mortality. The implication is that there may be scope to improve the cost-effectiveness of management, which in turn will leave greater scope for investment in screening. Could this be the next question to be tackled by this remarkable model?

Establishing the value of diagnostic and prognostic tests in health technology assessment. Medical Decision Making [PubMed] Published 13th March 2018

Keeping on the topic of the cost-effectiveness of screening and diagnostic tests, this is a paper on how to evaluate tests in a manner consistent with health technology assessment principles. The paper has been around for a few months, but it’s only now that I’ve had the chance to give it the careful read that such a well-thought-out paper deserves.

Marta Soares and colleagues lay out an approach to determine the most cost-effective way to use diagnostic and prognostic tests. They start by explaining that the value of the test is mostly in informing better management decisions. This means that the cost-effectiveness of testing necessarily depends on the cost-effectiveness of management.

The paper also spells out that the cost-effectiveness of testing depends on the prevalence of the disease, as we saw in the paper above on screening for abdominal aortic aneurysm. Clearly, it also depends on the accuracy of the test.

Importantly, the paper highlights that the evaluation should compare all possible ways of using the test. A decision problem with one test and one treatment yields six strategies, of which three are relevant: no test and treat all; no test and treat none; test and treat if positive. If the reference test is added, another three strategies need to be considered. This shows how complex a cost-effectiveness analysis of a test can quickly become! In my paper with Marta and others, for example, we ended up with 383 testing strategies.
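As a flavour of how such comparisons work, here is a minimal sketch (my own numbers, not the paper’s) of the expected net benefit of the three relevant strategies as a function of prevalence and test accuracy, which also shows why prevalence and accuracy drive the result:

```python
# Minimal sketch of comparing the three relevant strategies for one test and
# one treatment. All net benefit values (in QALYs) are made up for illustration.
def expected_net_benefit(strategy, prev, sens, spec,
                         nb_treat_sick=0.8, nb_treat_well=-0.1,
                         nb_untreat_sick=0.0, nb_untreat_well=0.2,
                         test_harm=0.01):
    if strategy == "treat all":
        return prev * nb_treat_sick + (1 - prev) * nb_treat_well
    if strategy == "treat none":
        return prev * nb_untreat_sick + (1 - prev) * nb_untreat_well
    # test, treat if positive: outcomes depend on prevalence and test accuracy
    return (prev * (sens * nb_treat_sick + (1 - sens) * nb_untreat_sick)
            + (1 - prev) * ((1 - spec) * nb_treat_well + spec * nb_untreat_well)
            - test_harm)

for s in ("treat all", "treat none", "test, treat if positive"):
    print(s, round(expected_net_benefit(s, prev=0.2, sens=0.9, spec=0.85), 3))
```

With these made-up numbers, testing wins; nudge the prevalence up and ‘treat all’ overtakes it, which is exactly the dependence the paper describes.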

The discussion is excellent, particularly about the limitations of end-to-end studies (which compare testing strategies in terms of their end outcomes, e.g. health). End-to-end studies can only compare a limited subset of testing strategies and may not allow for the modelling of the outcomes of strategies beyond those compared in the study. Furthermore, end-to-end studies are likely to be inefficient, given the large sample sizes and long follow-up required to detect differences in outcomes. I wholeheartedly agree that primary studies should focus on the prevalence of the disease and the accuracy of the test, leaving the evaluation of the best way to use the test to decision modelling.

Reasonable patient care under uncertainty. Health Economics [PubMed] Published 22nd August 2018

And for my third paper for the week, something completely different. But so worth reading! Charles Manski provides an overview of his work on how to use the available evidence to make decisions under uncertainty. It is accompanied by comments from Karl Claxton, Emma McIntosh, and Anirban Basu, together with Manski’s response. The set is a superb read and great food for thought.

Manski starts with the premise that we make decisions about which course of action to take without having full information about what is best; i.e. under uncertainty. This is uncontroversial and well accepted, ever since Arrow’s seminal paper.

Less widely accepted is Manski’s view that clinicians’ decisions for individual patients may be better than guideline recommendations for the ‘average’ patient, because clinicians can take into account more information about the specific individual patient. I would contend that it is unrealistic to expect clinicians to keep pace with new knowledge in medicine, given how fast, and in what volume, it is generated. Furthermore, clinicians, like all other people, are unlikely to be fully rational in their decision-making.

Most fascinating was Section 6 on decision theory under uncertainty. Manski focussed on the minimax-regret criterion. I had not heard about these approaches before, so Manski’s explanations were quite the eye-opener.
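For readers who, like me, hadn’t met the criterion before, here is a textbook-style worked example (my own numbers, nothing to do with Manski’s applications): compute each action’s regret in every state of the world, then pick the action whose worst-case regret is smallest.

```python
import numpy as np

# Textbook minimax-regret example (my own numbers, not Manski's). Rows are
# candidate guidelines, columns are unknown states of the world, and entries
# are health outcomes (higher is better).
payoffs = np.array([[10, 2],   # guideline A: great in state 1, poor in state 2
                    [ 6, 6],   # guideline B: middling in both states
                    [ 3, 9]])  # guideline C: poor in state 1, great in state 2

regret = payoffs.max(axis=0) - payoffs  # shortfall vs the best action per state
worst = regret.max(axis=1)              # each guideline's worst-case regret
print(regret)                           # [[0 7] [4 3] [7 0]]
print("choose guideline", "ABC"[worst.argmin()])  # B: smallest worst-case regret
```

Note how the criterion favours the middling option B over the two specialists, which hints at why Manski links it to diversification.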

Manski concludes by recommending that central health care planners take a portfolio approach to their guidelines (adaptive diversification), coupled with the minimax-regret criterion to update the guidelines as more information emerges (adaptive minimax-regret). Whether the minimax-regret criterion is the best is a question that I will leave to better brains than mine. A more immediate question is how feasible it is to implement this adaptive diversification, particularly in instituting a process in which data are systematically collected and analysed to update the guidelines. In his response, Manski suggests that specialists in decision analysis should become members of the multidisciplinary clinical team and that decision analysis should be taught in medicine courses. This resonates with my own view that we need to do better at helping people use information to make better decisions.
