Rita Faria’s journal round-up for 13th May 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Communicating uncertainty about facts, numbers and science. Royal Society Open Science Published 8th May 2019

This remarkable paper by Anne Marthe van der Bles and colleagues, including the illustrious David Spiegelhalter, covers two of my favourite topics: communication and uncertainty. They focused on epistemic uncertainty: that is, uncertainty about facts, numbers and science due to limited knowledge (rather than due to the randomness of the world). This is what we could know more about if we spent more resources finding it out.

The authors propose a framework for communicating uncertainty and apply it to two case studies, one in climate change and the other in economic statistics. They also review the literature on the effects of communicating uncertainty. The review is so wide-ranging and exhaustive that, if I have any criticism, it is that its 42 pages are not conducive to a leisurely read.

I found the distinction between direct and indirect uncertainty fascinating and incredibly relevant to health economics. Direct uncertainty is about the precision of the evidence, whilst indirect uncertainty is about its quality. An example of indirect uncertainty is evidence based on a naïve comparison of patients in a Phase 2 trial with historical controls in another country (yup, this happens!).

So, how should we communicate the uncertainty in our findings? I’m afraid that this paper is not a practical guide but rather a brilliant ground-clearing exercise on how to start thinking about this. Nevertheless, Box 5 (p35) does give some good advice! I do hope this paper kick-starts research on how to explain uncertainty beyond an academic audience. Looking forward to more!

Was Brexit triggered by the old and unhappy? Or by financial feelings? Journal of Economic Behavior & Organization [RePEc] Published 18th April 2019

Not strictly health economics – although arguably Brexit affects our health – is this impressive study of the factors that contributed to the Leave win in the Brexit referendum. Federica Liberini and colleagues used data from the Understanding Society survey to look at the predictors of people’s views about whether or not the UK should leave the EU. The main results come from regressing an indicator of whether a person was pro-Brexit on life satisfaction, their feelings about their financial situation, and other characteristics.

Their conclusions are staggering. They found that people’s views were generally unrelated to their age, their life satisfaction or their income. Instead, a person’s feelings about their financial situation were the strongest predictor. For economists, it may be a bit cringe-worthy to see OLS used for a binary dependent variable, but, to be fair, the authors note that the results are similar with non-linear models, and they report extensive supplementary analyses. Remarkably, they’re making the individual-level data available on the 18th of June here.

As the authors discuss, it is not clear if we’re looking at predictive estimates of characteristics related to pro-Brexit feeling or at causal estimates of factors that led to the pro-Brexit feeling. That is, if we could improve someone’s perceived financial situation, would we reduce their probability of feeling pro-Brexit? In any case, the message is clear. Feelings matter!

How does treating chronic hepatitis C affect individuals in need of organ transplants in the United Kingdom? Value in Health Published 8th March 2019

Anupam Bapu Jena and colleagues looked at the spillover benefits of curing hepatitis C, given its consequences for the supply of and demand for livers and other organs for transplant in the UK. They compare three policies: the status quo, in which there is no screening for hepatitis C and organ donation by people with hepatitis C is rare; a universal screen-and-treat policy in which cured people opt in to organ donation; and the same policy with opt-out organ donation.

To do this, they adapted a previously developed queuing model. For the status quo, the model inputs were estimated by calibrating the model outputs to reported NHS performance. They then changed the model inputs to reflect the anticipated impact of the new policies. Importantly, they assumed that all patients with hepatitis C would be cured and would no longer require a transplanted organ, and that cured patients would donate organs at rates similar to the general population. They predict that curing hepatitis C would directly reduce the waiting list for organ transplants by reducing the number of patients needing them, and that there would be an indirect benefit via increased availability of organs for other patients. These consequences aren’t typically included in cost-effectiveness analyses of treatments for hepatitis C, which means that estimates of their comparative benefits and costs may not be accurate.
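To give a flavour of the mechanism (a toy sketch with made-up rates, not the authors’ calibrated queuing model): curing hepatitis C acts on both sides of the queue, removing patients from the demand side and, under opt-in or opt-out donation, adding cured donors to the supply side.

```python
# Toy steady-state waiting-list sketch; all rates are hypothetical and not
# taken from the paper. Illustrates the two spillover channels: fewer
# hepatitis-C-driven listings (demand) and more donors (supply).

def simulate_waiting_list(arrivals_per_year, organs_per_year, years=10, start_queue=400):
    """Track the queue as cumulative demand minus cumulative supply."""
    queue = start_queue
    for _ in range(years):
        queue = max(0, queue + arrivals_per_year - organs_per_year)
    return queue

status_quo = simulate_waiting_list(arrivals_per_year=100, organs_per_year=90)
# Screen-and-treat: fewer listings, more donors (illustrative shifts only)
screen_and_treat = simulate_waiting_list(arrivals_per_year=85, organs_per_year=95)

print(status_quo, screen_and_treat)  # the policy shrinks the queue
```

The paper’s actual model is richer (calibrated to NHS performance, with organ-specific queues), but the direction of the spillover follows from this simple balance of arrivals and organ supply.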

Keeping with the theme of uncertainty, it was disappointing that the paper includes neither confidence bounds on its results nor sensitivity analyses of its assumptions, which, in my view, were quite favourable towards a universal screen-and-treat policy. This is an interesting application of a queuing model, which is something I don’t often see in cost-effectiveness analysis. It is also timely and relevant, given the recent drive by the NHS to eliminate hepatitis C. In a few years’ time, we’ll hopefully know to what extent the predicted spillover benefits were realised.

Credits

James Altunkaya’s journal round-up for 3rd September 2018

Sensitivity analysis for not-at-random missing data in trial-based cost-effectiveness analysis: a tutorial. PharmacoEconomics [PubMed] [RePEc] Published 20th April 2018

Last month, we highlighted a Bayesian framework for imputing missing data in economic evaluation. The paper dealt with the issue of departure from the ‘Missing at Random’ (MAR) assumption by using a Bayesian approach to specify a plausible missingness model from the results of expert elicitation. This was used to estimate a prior distribution for the unobserved terms in the outcomes model.

For those less comfortable with Bayesian estimation, this month we highlight a tutorial paper from the same authors, outlining an approach to recognising the impact of plausible departures from ‘Missingness at Random’ assumptions on cost-effectiveness results. Given poor adherence to current recommendations for best practice in handling and reporting missing data, an incremental approach to improving missing data methods in health research may be more realistic. The authors supply accompanying Stata code.

The paper investigates the importance of assuming a degree of ‘informative’ missingness (i.e. ‘Missingness not at Random’) in sensitivity analyses. In a case study, the authors present a range of scenarios which assume a decrement of 5-10% in the quality of life of patients with missing health outcomes, compared to multiple imputation estimates based on observed characteristics under standard ‘Missing at Random’ assumptions. This represents an assumption that, controlling for all observed characteristics used in multiple imputation, those with complete quality of life profiles may have higher quality of life than those with incomplete surveys.

Quality-of-life decrements were implemented in the control and treatment arms separately, and then jointly, in six scenarios. This aimed to demonstrate the sensitivity of cost-effectiveness judgements to the possibility of a different missingness mechanism in each arm. The authors similarly investigate sensitivity to higher health costs in those with missing data than would be predicted from observed characteristics in imputation under ‘Missingness at Random’. Finally, sensitivity to a simultaneous departure from ‘Missingness at Random’ in both health outcomes and health costs is investigated.
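As a rough sketch of the idea (hypothetical data, with simple mean imputation standing in for full multiple imputation): impute missing utilities under MAR, then apply an assumed decrement to the imputed values in each arm and see how the incremental utility moves.

```python
# Minimal sketch of MNAR sensitivity analysis; data are simulated and mean
# imputation is a stand-in for the tutorial's multiple imputation. Missing
# utilities are assumed 0-10% lower than MAR imputation would suggest.
import numpy as np

rng = np.random.default_rng(1)

def mean_utility_mnar(observed, n_missing, decrement):
    """MAR imputation = arm mean; the MNAR scenario shifts imputed values down."""
    imputed = np.full(n_missing, observed.mean()) * (1 - decrement)
    return np.concatenate([observed, imputed]).mean()

control = rng.normal(0.70, 0.10, 80)    # observed utilities, control arm
treatment = rng.normal(0.75, 0.10, 85)  # observed utilities, treatment arm

# Decrements applied to each arm separately, then jointly
for dec_c, dec_t in [(0.0, 0.0), (0.05, 0.0), (0.0, 0.05), (0.1, 0.1)]:
    mu_c = mean_utility_mnar(control, n_missing=20, decrement=dec_c)
    mu_t = mean_utility_mnar(treatment, n_missing=15, decrement=dec_t)
    print(f"decrements ({dec_c}, {dec_t}): incremental utility {mu_t - mu_c:.4f}")
```

Running scenarios of this sort over a grid of arm-specific decrements shows how far the missingness assumptions can depart from MAR before the cost-effectiveness conclusion flips.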

The proposed sensitivity analyses provide a useful heuristic to assess what degree of difference between missing and non-missing subjects on unobserved characteristics would be necessary to change cost-effectiveness decisions. The authors admit this framework could appear relatively crude to those comfortable with more advanced missing data approaches such as those outlined in last month’s round-up. However, this approach should appeal to those interested in presenting the magnitude of uncertainty introduced by missing data assumptions, in a way that is easily interpretable to decision makers.

The impact of waiting for intervention on costs and effectiveness: the case of transcatheter aortic valve replacement. The European Journal of Health Economics [PubMed] [RePEc] Published September 2018

This paper appears in print this month and sparked interest as one of comparatively few studies on the cost-effectiveness of waiting lists. Given the interest in using constrained optimisation methods in health outcomes research, highlighted in this month’s editorial in Value in Health, there is rightly interest in extending the traditional sphere of economic evaluation from drugs and devices to understanding the trade-offs of investing in a wider range of policy interventions, using a common metric of costs and QALYs. Rachel Meacock’s paper earlier this year did a great job of outlining some of the challenges involved in broadening the scope of economic evaluation to more general decisions in health service delivery.

The authors set out to understand the cost-effectiveness of delaying a cardiac treatment (TAVR) using a waiting list of up to 12 months, compared to a policy of immediate treatment. The effectiveness of treatment at 3, 6, 9, and 12 months after initial diagnosis, health decrements during waiting, and the corresponding health costs during the wait and post-treatment were derived from a small observational study. As the treatment is studied in an elderly population, a non-negligible proportion of patients die whilst waiting for surgery. This translates into lower modelled costs, but also fewer quality-adjusted life years, in modelled cohorts with any delay relative to a policy of immediate treatment. The authors conclude that eliminating all waiting time for TAVR would produce population health at a rate of ~€12,500 per QALY gained.

However, based on the modelling presented, the authors lack the ability to make cost-effectiveness judgements of this sort. Waiting lists exist for a reason, chiefly a lack of clinical capacity to treat patients immediately. In taking a decision to treat patients immediately in one disease area, we therefore need some judgement as to whether the health displaced among now-untreated patients in another disease area is of greater, lesser, or equal magnitude to that gained by treating TAVR patients immediately. Alternatively, the modelling should include the cost of acquiring additional clinical capacity (such as theatre space) to treat TAVR patients immediately, so as not to displace other treatments. In that case, the ICER is likely to be much higher, due to the large cost of the new resources needed to reduce waiting times to zero.

Given the data available, a simple improvement to the paper would be to treat current waiting times (already gathered in the observational study) as the ‘standard of care’ arm. The estimated change in quality of life and healthcare resource cost from reducing waiting times to zero, relative to current practice, could then be calculated. This could in turn be used to calculate the maximum acceptable cost of acquiring the additional treatment resources needed to treat patients with no waiting time, given current national willingness-to-pay thresholds.
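That last calculation is simple enough to sketch (all numbers hypothetical, not taken from the paper): at a given willingness-to-pay threshold, the incremental QALYs and costs of eliminating waiting imply a ceiling on what the extra capacity could cost.

```python
# Back-of-the-envelope version of the suggested analysis; all figures are
# hypothetical. The ICER of eliminating waiting, including capacity costs, is
# (delta_costs + capacity_cost) / delta_qalys; setting this equal to the
# willingness-to-pay threshold and solving gives the maximum capacity outlay.

def max_capacity_cost(delta_qalys, delta_costs, wtp_threshold):
    """Largest capacity outlay keeping the ICER at or below the threshold."""
    return wtp_threshold * delta_qalys - delta_costs

# e.g. 40 QALYs gained, €500,000 extra treatment cost, €30,000/QALY threshold
budget = max_capacity_cost(delta_qalys=40, delta_costs=500_000, wtp_threshold=30_000)
print(budget)  # 700000
```

Anything above that figure for extra theatre space and staff would make eliminating the wait a worse use of resources than the threshold implies.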

Admittedly, there remain problems in using the authors’ chosen observational dataset to calculate quality of life and cost outcomes for patients treated at different time points. In this ‘real-world’ observational study, waiting times were assigned based on clinical assessment of patients’ treatment need. We would therefore expect the quality of life lost during waiting to be lower for patients actually treated at 12 months than the loss that patients judged to need immediate treatment would have suffered had they been made to wait. A previous study in cardiac care took on the more manageable task of investigating the cost-effectiveness of different prioritisation strategies for the waiting list, examining the sensitivity of conclusions to varying a fixed maximum wait time for the last patient treated.

This study therefore demonstrates some of the difficulties in attempting to make cost-effectiveness judgements about waiting time policy. Given that the cost-effectiveness of reducing waiting times in different disease areas is expected to vary, based on relative importance of waiting for treatment on short and long-term health outcomes and costs, this remains an interesting area for economic evaluation to explore. In the context of the current focus on constrained optimisation techniques across different areas in healthcare (see ISPOR task force), it is likely that extending economic evaluation to evaluate a broader range of decision problems on a common scale will become increasingly important in future.

Understanding and identifying key issues with the involvement of clinicians in the development of decision-analytic model structures: a qualitative study. PharmacoEconomics [PubMed] Published 17th August 2018

This paper gathers evidence from interviews with clinicians and modellers, with the aim to improve the nature of the working relationship between the two fields during model development.

Researchers gathered opinion from a variety of settings, including industry. The main report focusses on evidence from two case studies – one tracking the working relationship between modellers and a single clinical advisor at a UK university, with the second gathering evidence from a UK policy institute – where modellers worked with up to 11 clinical experts per meeting.

Some of the authors’ conclusions are not particularly surprising. Modellers reported difficulty in recruiting clinicians to advise on model structures, and further difficulty in then engaging recruited clinicians to provide relevant advice for the model building process. Specific comments suggested difficulty for some clinical advisors in identifying representative patient experiences, instead diverting modellers’ attention towards rare outlier events.

Study responses suggested that only one or two clinicians are typically consulted during model development. The authors recommend involving a larger group of clinicians at this stage of the modelling process, with a more varied range of clinical experience (junior as well as senior clinicians, and some geographical variation), to help ensure that the clinical pathways modelled are generalisable. The experience of the single clinical collaborator in the university case study, compared to the 11 clinicians at the policy institute, perhaps also illustrates a general problem of inadequate compensation for clinical time within the university system. The authors also advocate making relevant training in decision modelling available to clinicians, to make more efficient use of participants’ time during model building. The clinicians sampled were supportive of this view, citing the need for further guidance from modellers on the nature of their expected contribution.

This study ties into the general literature regarding structural uncertainty in decision analytic models. In advocating the early contribution of a larger, more diverse group of clinicians in model development, the authors advocate a degree of alignment between clinical involvement during model structuring, and guidelines for eliciting parameter estimates from clinical experts. Similar problems, however, remain for both fields, in recruiting clinical experts from sufficiently diverse backgrounds to provide a valid sample.
