Rita Faria’s journal round-up for 13th May 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Communicating uncertainty about facts, numbers and science. Royal Society Open Science Published 8th May 2019

This remarkable paper by Anne Marthe van der Bles and colleagues, including the illustrious David Spiegelhalter, covers two of my favourite topics: communication and uncertainty. They focused on epistemic uncertainty. That is, the uncertainty about facts, numbers and science due to limited knowledge (rather than due to the randomness of the world). This is what we could know more about, if we spent more resources on finding it out.

The authors propose a framework for communicating uncertainty and apply it to two case studies, one in climate change and the other in economic statistics. They also review the literature on the effect of communicating uncertainty. It is wide-ranging and exhaustive; if I have any criticism, it is that its 42 pages are not conducive to a leisurely read.

I found the distinction between direct and indirect uncertainty fascinating and incredibly relevant to health economics. Direct uncertainty is about the precision of the evidence, whilst indirect uncertainty is about its quality. Indirect uncertainty arises, for example, when the evidence is based on a naïve comparison of patients in a Phase 2 trial with historical controls in another country (yup, this happens!).

So, how should we communicate the uncertainty in our findings? I’m afraid that this paper is not a practical guide but rather a brilliant ground-clearing exercise on how to start thinking about this. Nevertheless, Box 5 (p35) does give some good advice! I do hope this paper kick-starts research on how to explain uncertainty beyond an academic audience. Looking forward to more!

Was Brexit triggered by the old and unhappy? Or by financial feelings? Journal of Economic Behavior & Organization [RePEc] Published 18th April 2019

Not strictly health economics – although arguably Brexit affects our health – is this impressive study about the factors that contributed to the Leave win in the Brexit referendum. Federica Liberini and colleagues used data from the Understanding Society survey to look at the predictors of people’s views about whether or not the UK should leave the EU. The main results come from regressing an indicator of whether or not a person was pro-Brexit on life satisfaction, their feelings about their financial situation, and other characteristics.

Their conclusions are staggering. They found that people’s views were generally unrelated to their age, their life satisfaction or their income. Instead, it was a person’s feelings about their financial situation that were the strongest predictor. For economists, it may be a bit cringe-worthy to see OLS used for a categorical dependent variable. But to be fair, the authors mention that the results are similar with non-linear models and they report extensive supplementary analyses. Remarkably, they’re making the individual-level data available on the 18th of June here.
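
For readers who want to see the OLS point in practice, here is a minimal sketch (with simulated data and hypothetical variable names, not the authors’ actual specification) comparing a linear probability model with a logit on the same binary outcome:

```python
# Minimal sketch: linear probability model (OLS) vs logit on a binary outcome.
# Simulated data and hypothetical variable names; not the paper's actual model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5000
life_satisfaction = rng.normal(5, 1.5, n)   # e.g. a 1-7 satisfaction scale
financial_feelings = rng.normal(0, 1, n)    # perceived financial situation
age = rng.integers(18, 90, n).astype(float)

# Made-up data-generating process, purely for illustration
latent = -0.8 * financial_feelings - 0.1 * life_satisfaction + rng.logistic(size=n)
pro_brexit = (latent > 0).astype(int)

X = sm.add_constant(np.column_stack([life_satisfaction, financial_feelings, age]))

lpm = sm.OLS(pro_brexit, X).fit(cov_type="HC1")   # OLS with robust standard errors
logit = sm.Logit(pro_brexit, X).fit(disp=0)

print(lpm.params)                            # LPM coefficients are marginal effects
print(logit.get_margeff().summary_frame())   # average marginal effects from the logit
```

Away from the extremes of the probability scale, the OLS coefficients and the logit average marginal effects tend to line up closely, which is consistent with the authors’ note that their results are similar with non-linear models.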

As the authors discuss, it is not clear if we’re looking at predictive estimates of characteristics related to pro-Brexit feeling or at causal estimates of factors that led to the pro-Brexit feeling. That is, if we could improve someone’s perceived financial situation, would we reduce their probability of feeling pro-Brexit? In any case, the message is clear. Feelings matter!

How does treating chronic hepatitis C affect individuals in need of organ transplants in the United Kingdom? Value in Health Published 8th March 2019

Anupam Bapu Jena and colleagues looked at the spillover benefits of curing hepatitis C, given its consequences for the supply and demand of livers and other organs for transplant in the UK. They compare three policies: the status quo, in which there is no screening for hepatitis C and organ donation by people with hepatitis C is rare; a universal screen-and-treat policy in which cured people opt in to organ donation; and the same policy but with opt-out organ donation.

To do this, they adapted a previously developed queuing model. For the status quo, the model inputs were estimated by calibrating the model outputs to reported NHS performance. They then changed the model inputs to reflect the anticipated impact of the new policies. Importantly, they assumed that all patients with hepatitis C would be cured and no longer require a transplanted organ, and that cured patients would donate organs at similar rates to the general population. They predict that curing hepatitis C would directly reduce the waiting list for organ transplants by reducing the number of patients needing them. Also, there would be an indirect benefit by increasing the availability of organs to other patients. These consequences aren’t typically included in cost-effectiveness analyses of treatments for hepatitis C, which means that their comparative benefits and costs may not be accurate.
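
To get some intuition for why a queuing model suits this problem, here is a toy M/M/1 waiting-list sketch with invented numbers (the paper’s calibrated model is more sophisticated than this): curing hepatitis C lowers the rate at which patients join the liver waiting list, while donation by cured patients raises the transplant rate, and both changes shrink the expected queue.

```python
# Illustrative M/M/1 waiting-list queue with invented numbers;
# not the calibrated model used in the paper.
def mm1_stats(arrival_rate, service_rate):
    """Steady-state mean number in the system and mean time in the system."""
    assert arrival_rate < service_rate, "the queue must be stable"
    rho = arrival_rate / service_rate        # utilisation
    mean_number = rho / (1 - rho)            # expected patients waiting or in service
    mean_time = mean_number / arrival_rate   # Little's law: W = L / lambda
    return mean_number, mean_time

# Status quo: say 950 patients/year join the list and 1,000 transplants/year are feasible
print(mm1_stats(arrival_rate=950, service_rate=1000))

# Screen-and-treat (hypothetical): fewer hepatitis C patients need livers,
# and donation by cured patients adds organs
print(mm1_stats(arrival_rate=900, service_rate=1030))
```

Even small shifts in the arrival and service rates produce large changes in the steady-state queue when the system runs close to capacity, which helps explain why the predicted spillover benefits are substantial.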

Keeping in the theme of uncertainty, it was disappointing that the paper does not include confidence bounds on its results, nor does it present a sensitivity analysis to its assumptions, which, in my view, were quite favourable towards a universal screen-and-treat policy. This is an interesting application of a queuing model, which is something I don’t often see in cost-effectiveness analysis. It is also timely and relevant, given the recent drive by the NHS to eliminate hepatitis C. In a few years’ time, we’ll hopefully know to what extent the predicted spillover benefits were realised.

Rita Faria’s journal round-up for 13th August 2018

Analysis of clinical benefit, harms, and cost-effectiveness of screening women for abdominal aortic aneurysm. The Lancet [PubMed] Published 26th July 2018

This study is an excellent example of the power and flexibility of decision models to help inform decisions on screening policies.

In many countries, screening for abdominal aortic aneurysm is offered to older men but not to women. This is because screening was found to be beneficial and cost-effective, based on evidence from RCTs in older men. In contrast, there is no direct evidence for women. To inform this question, the study team developed a decision model to simulate the benefits and costs of screening women.

This study has many fascinating features. Not only does it simulate the outcomes of expanding the current UK screening policy for men to include women, but also of other policies with different age parameters, diagnostic thresholds and treatment thresholds.

Curiously, the most cost-effective policy for women is not the current UK policy for men. This shows the importance of including the full range of options in the evaluation, rather than just what is done now. Unfortunately, the paper is sparse on detail about how the various policies were devised and whether other, more cost-effective policies may have been left out.

The key cost-effectiveness driver is the probability of having the disease and its presentation (i.e. the distribution of the aortic diameter), as is often the case in cost-effectiveness analyses of diagnostic tests. Neither of these parameters requires an RCT to be estimated. This means that, in principle, we could reduce the uncertainty on which policy to fund by conducting a study on the prevalence of the disease, rather than an RCT on whether a specific policy works.
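
A back-of-the-envelope calculation (my own invented numbers, not the study’s) shows why prevalence is such a strong driver: the expected screening cost per case detected is inversely proportional to prevalence.

```python
# Toy calculation with invented inputs, not figures from the paper:
# the screening cost per case detected is inversely proportional to prevalence.
def cost_per_case_detected(prevalence, sensitivity=0.95, cost_per_scan=35.0):
    """Expected screening spend per true case found (hypothetical inputs)."""
    return cost_per_scan / (prevalence * sensitivity)

for prevalence in (0.005, 0.01, 0.02):   # a hypothetical range of prevalences
    print(f"prevalence {prevalence:.1%}: "
          f"{cost_per_case_detected(prevalence):,.0f} per case detected")
```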

An exciting aspect is that treatment itself could be better targeted, in particular, that lowering the threshold for treatment could reduce non-intervention rates and operative mortality. The implication is that there may be scope to improve the cost-effectiveness of management, which in turn will leave greater scope for investment in screening. Could this be the next question to be tackled by this remarkable model?

Establishing the value of diagnostic and prognostic tests in health technology assessment. Medical Decision Making [PubMed] Published 13th March 2018

Keeping on the topic of the cost-effectiveness of screening and diagnostic tests, this is a paper on how to evaluate tests in a manner consistent with health technology assessment principles. This paper has been around for a few months, but it’s only now that I’ve had the chance to give it the careful read that such a well thought out paper deserves.

Marta Soares and colleagues lay out an approach to determine the most cost-effective way to use diagnostic and prognostic tests. They start by explaining that the value of the test is mostly in informing better management decisions. This means that the cost-effectiveness of testing necessarily depends on the cost-effectiveness of management.

The paper also spells out that the cost-effectiveness of testing depends on the prevalence of the disease, as we saw in the paper above on screening for abdominal aortic aneurysm. Clearly, the cost-effectiveness of testing depends on the accuracy of the test.

Importantly, the paper highlights that the evaluation should compare all possible ways of using the test. A decision problem with 1 test and 1 treatment yields 6 strategies, of which 3 are relevant: no test and treat all; no test and treat none; test and treat if positive. If the reference test is added, another 3 strategies need to be considered. This shows how complex a cost-effectiveness analysis of a test can quickly become! In my paper with Marta and others, for example, we ended up with 383 testing strategies.
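
The strategy count is easy to verify mechanically. The sketch below enumerates the six combinations for one test and one treatment and picks out the three worth evaluating (my reading of the paper’s count; the remaining three either ignore the test result or act against it):

```python
# Enumerate the strategies for one test and one treatment.
# The three flagged as relevant match the ones listed above; the other three
# either ignore the test result or act against it.
treatment_rules = {
    "no test": ["treat all", "treat none"],
    "test": ["treat all", "treat none", "treat if positive", "treat if negative"],
}

strategies = [(testing, rule)
              for testing, rules in treatment_rules.items()
              for rule in rules]
relevant = [("no test", "treat all"),
            ("no test", "treat none"),
            ("test", "treat if positive")]

print(len(strategies), "strategies in total")        # 6
print(len(relevant), "worth evaluating:", relevant)  # 3
```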

The discussion is excellent, particularly about the limitations of end-to-end studies (which compare testing strategies in terms of their end outcomes e.g. health). End-to-end studies can only compare a limited subset of testing strategies and may not allow for the modelling of the outcomes of strategies beyond those compared in the study. Furthermore, end-to-end studies are likely to be inefficient given the large sample sizes and long follow-up required to detect differences in outcomes. I wholeheartedly agree that primary studies should focus on the prevalence of the disease and the accuracy of the test, leaving the evaluation of the best way to use the test to decision modelling.

Reasonable patient care under uncertainty. Health Economics [PubMed] Published 22nd August 2018

And for my third paper for the week, something completely different. But so worth reading! Charles Manski provides an overview of his work on how to use the available evidence to make decisions under uncertainty. It is accompanied by comments from Karl Claxton, Emma McIntosh, and Anirban Basu, together with Manski’s response. The set is a superb read and great food for thought.

Manski starts with the premise that we make decisions about which course of action to take without having full information about what is best; i.e. under uncertainty. This is uncontroversial and well accepted, ever since Arrow’s seminal paper.

More contentious is Manski’s view that clinicians’ decisions for individual patients may be better than the recommendations of guidelines for the ‘average’ patient, because clinicians can take into account more information about the specific individual patient. I would contend that it is unrealistic to expect clinicians to keep pace with new knowledge in medicine, given how fast and how much of it is generated. Furthermore, clinicians, like all other people, are unlikely to be fully rational in their decision-making process.

Most fascinating was Section 6 on decision theory under uncertainty. Manski focussed on the minimax-regret criterion. I had not heard about these approaches before, so Manski’s explanations were quite the eye-opener.
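
For anyone else meeting it for the first time, here is a tiny worked example of the minimax-regret criterion with made-up payoffs (nothing to do with Manski’s own applications): compute each action’s regret in every possible state of the world, then choose the action whose worst-case regret is smallest.

```python
# Tiny minimax-regret example with made-up payoffs (say, QALYs) for two treatments
# under two unknown states of the world; not taken from Manski's paper.
import numpy as np

# rows: actions (treatment A, treatment B); columns: states of the world
payoffs = np.array([[10.0, 0.0],
                    [ 6.0, 5.0]])

best_in_state = payoffs.max(axis=0)   # best achievable payoff in each state
regret = best_in_state - payoffs      # payoff forgone by each action in each state
worst_regret = regret.max(axis=1)     # each action's worst-case regret
choice = int(worst_regret.argmin())

print(regret)                         # [[0. 5.] [4. 0.]]
print(worst_regret)                   # [5. 4.]
print("minimax-regret choice: treatment", "AB"[choice])
```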

Manski concludes by recommending that central health care planners take a portfolio approach to their guidelines (adaptive diversification), coupled with the minimax-regret criterion to update the guidelines as more information emerges (adaptive minimax-regret). Whether the minimax-regret criterion is the best is a question that I will leave to better brains than mine. A more immediate question is how feasible it is to implement this adaptive diversification, particularly in instituting a process in which data are systematically collected and analysed to update the guidelines. In his response, Manski suggests that specialists in decision analysis should become members of the multidisciplinary clinical team and that decision analysis should be taught in medicine courses. This resonates with my own view that we need to do better in helping people use information to make better decisions.

Chris Sampson’s journal round-up for 14th November 2016

Weighing clinical evidence using patient preferences: an application of probabilistic multi-criteria decision analysis. PharmacoEconomics [PubMed] Published 10th November 2016

There are at least two ways in which preferences determine the allocation of health care resources (in a country with an HTA agency, at least). One of them we think about a lot: the (societal) valuation of health states as defined by a multi-attribute measure (like the EQ-5D). The other relates to patient preferences that determine whether or not a specific individual (and their physician) will choose to use a particular technology, given its expected clinical outcomes for that individual. A drug may very well make sense at the aggregate level but be a very bad choice for a particular individual when compared with alternatives. It’s right that this process should be deliberative and not solely driven by an algorithm, but it’s also important to maintain transparent and consistent decision making. Multi-criteria decision analysis (MCDA) has been proposed as a means of achieving this, and it can be used to take into account the uncertainty associated with clinical outcomes. In this study the authors present an approach that also incorporates random preference variation along with parameter uncertainty in both preferences and clinical evidence. The model defines a value function and estimates the impact of uncertainty using probabilistic Monte Carlo simulation, which in turn estimates the mean value of each possible treatment in the population. Treatments can therefore be ranked according to patients’ preferences, along with an estimate of the uncertainty associated with this ranking. To demonstrate the utility of the model it is applied to an example for the relative value of HAARTs for HIV, with parameters derived from clinical evaluations and stated preferences studies. It’s nice to see that the authors also provide their R script. One headline finding seems to be that this approach is likely to demonstrate just how much uncertainty is involved that might not previously have been given much attention. It could therefore help steer us towards more valuable research in the future. And it could be used to demonstrate that optimal decisions might change when all sources of uncertainty are considered. Clearly a potential application of this method is in the realm of personalised medicine, which is slowly but inevitably reaching beyond the confines of pharmacogenomics.
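
The gist of the approach, as I read it, can be sketched in a few lines of simulation (invented inputs, not the authors’ R script): draw preference weights and clinical performance from their respective distributions, score each treatment with a linear value function on every draw, and summarise both the mean value and how often each treatment ranks first.

```python
# Minimal probabilistic MCDA sketch with invented inputs; not the authors' R script.
import numpy as np

rng = np.random.default_rng(1)
n_draws = 10_000

# Two criteria (say, efficacy and side effects) and three hypothetical treatments,
# each scored 0-1 on every criterion. Clinical-evidence uncertainty around the means:
perf_mean = np.array([[0.70, 0.50],    # treatment 1
                      [0.60, 0.80],    # treatment 2
                      [0.55, 0.65]])   # treatment 3
perf = rng.normal(perf_mean, 0.05, size=(n_draws, 3, 2)).clip(0, 1)

# Preference uncertainty: weights drawn from a Dirichlet so each draw sums to one
weights = rng.dirichlet([4, 2], size=n_draws)          # shape (n_draws, 2)

values = (perf * weights[:, None, :]).sum(axis=2)      # linear additive value function
prob_first = (values.argmax(axis=1)[:, None] == np.arange(3)).mean(axis=0)

print("mean value per treatment:", values.mean(axis=0).round(3))
print("probability of ranking first:", prob_first.round(3))
```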

Communal sharing and the provision of low-volume high-cost health services: results of a survey. PharmacoEconomics – Open Published 4th November 2016

One of the distributional concerns we might have about the QALY-maximisation approach is its implications for people with rare diseases. Drugs for rare diseases are often expensive (because the marginal cost is likely to be higher) and therefore less cost-effective. There is mixed evidence about whether or not people exhibit a preference for redistributive allocation of QALY-creating resources according to rarity. Of course, the result you get from such studies is dependent on the question you ask. In order to ask the right question it’s important to understand the mechanisms by which people might prefer allocation of additional resources to services for rare diseases. One suggestion in the literature is the preservation of hope. This study presents another, based on the number of people sharing the cost. So imagine a population of 1000 people, and all those people share the cost of health care. For a rare disease, more people will share the cost of the treatment per person treated. So if 10 people have the disease, that’s 100 payers per recipient. If 100 people have the disease then it’s just 10 payers per recipient. The idea is that people prefer a situation in which more people share the cost, and on that basis prefer to allocate resources to rare diseases. A web-based survey was conducted in Australia in which 702 people were asked to divide a budget between a small patient group with a high-cost illness and a large patient group with a low-cost illness. There was also a set of questions in which respondents indicated the importance of 6 possible influences on their decisions. The findings show that people did choose to allocate more funds to the rarer disease, despite the reduced overall health gain. This suggests that people do have a preference for wider cost sharing, which could explain extra weight being given to rare diseases. I think it’s a good idea that deserves more research, but for me there are a few problems with the study. Much of the effect could be explained by people’s non-linear valuations of risk, as the scenario highlighted that the respondents themselves would be at risk of the disease. We also can’t clearly differentiate between an effect due to the rarity of the disease (and associated cost sharing) and an effect due to the severity of the disease.
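
The cost-sharing arithmetic is easy to make concrete; the snippet below uses the numbers from the example above plus an invented per-patient treatment cost.

```python
# Payers per recipient and cost per payer in a 1,000-person risk pool.
# The per-patient treatment cost is invented purely for illustration.
population = 1000
cost_per_patient = 50_000  # hypothetical

for n_sick in (10, 100):
    payers_per_recipient = population / n_sick
    cost_per_payer = n_sick * cost_per_patient / population
    print(f"{n_sick} patients: {payers_per_recipient:.0f} payers per recipient, "
          f"{cost_per_payer:,.0f} per payer")
```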

The challenge of conditional reimbursement: stopping reimbursement can be more difficult than not starting in the first place! Value in Health Published 3rd November 2016

If anything’s going to make me read a paper, it’s an exclamation mark! Conditional reimbursement of technologies that are probably effective but probably not cost-effective can be conducted in a rational way in order to generate research findings and benefit social welfare in the long run. But that can only hold true if those technologies subsequently found (through more research) to be ineffective or too costly are then made unavailable. Otherwise conditional reimbursement agreements will do more harm than good. This study uses discrete choice experiments to compare public (n=1169) and potential policymaker (n=90) values associated with the removal of an available treatment compared with non-reimbursement of a new treatment. The results showed (in addition to some other common findings) that both the public and policymakers preferred reimbursement of an existing treatment over the reimbursement of a new treatment, and were willing to accept an ICER more than €7,000 higher for an existing treatment. Though the DCE found it to be a significant determinant, 60% of policymakers reported that they thought that reimbursement status was unimportant, so there may be some cognitive dissonance going on there. The most obvious (and probably most likely) explanation for the observed preference for currently reimbursed treatments is loss aversion. But it could also be that people recognise real costs associated with ending reimbursement that are not reflected in either the QALY estimates or the costs to the health system. Whatever the explanation, HTA agencies need to bear this in mind when using conditional reimbursement agreements.

Head-to-head comparison of health-state values derived by a probabilistic choice model and scores on a visual analogue scale. The European Journal of Health Economics [PubMed] Published 2nd November 2016

I’ve always had a fondness for a good old VAS as a direct measure of health state (dare we say utility) values, despite the limitations of the approach. This study compares discrete choices for EQ-5D-5L states with VAS valuations – thus comparing indirect and direct health state valuations – in Canada, the USA, England and The Netherlands (n=1775). Each respondent had to make a forced choice between two EQ-5D-5L health states and then assess both states on a single VAS. Ten different pairs were completed by each respondent. The two different approaches correlated strongly within and across countries, as we might expect. And pairs of EQ-5D-5L states that were valued relatively low or high in the discrete choice model were also valued accordingly in the VAS. But the relationship between the two approaches was non-linear in that values differed more at the ends of the scale, with poor health states valued more differently in the choice model and good health states valued more differently on the VAS. This probably just reflects some of the biases observed in the use of VAS that are already well-documented, particularly context bias and end-state aversion. This study clearly suggests (though does not by itself prove) that discrete choice models are a better choice for health state valuation… but the VAS ain’t dead yet.
