Rita Faria’s journal round-up for 13th August 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Analysis of clinical benefit, harms, and cost-effectiveness of screening women for abdominal aortic aneurysm. The Lancet [PubMed] Published 26th July 2018

This study is an excellent example of the power and flexibility of decision models to help inform decisions on screening policies.

In many countries, screening for abdominal aortic aneurysm is offered to older men but not to women. This is because screening was found to be beneficial and cost-effective, based on evidence from RCTs in older men. In contrast, there is no direct evidence for women. To inform this question, the study team developed a decision model to simulate the benefits and costs of screening women.

This study has many fascinating features. Not only does it simulate the outcomes of expanding the current UK screening policy for men to include women, but also of other policies with different age parameters, diagnostic thresholds and treatment thresholds.

Curiously, the most cost-effective policy for women is not the current UK policy for men. This shows the importance of including the full range of options in the evaluation, rather than just what is done now. Unfortunately, the paper is sparse on detail about how the various policies were devised and whether other, more cost-effective policies may have been left out.

The key cost-effectiveness driver is the probability of having the disease and its presentation (i.e. the distribution of the aortic diameter), as is often the case in cost-effectiveness analyses of diagnostic tests. Neither of these parameters requires an RCT to be estimated. This means that, in principle, we could reduce the uncertainty about which policy to fund by conducting a study on the prevalence of the disease, rather than an RCT on whether a specific policy works.
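
To see why prevalence matters so much, here is a minimal sketch in Python. All of the numbers (QALY gain per case detected, costs, willingness to pay) are hypothetical placeholders of my own, not values from the paper; the point is only that the expected net benefit of screening scales with prevalence, so a prevalence study can resolve much of the decision uncertainty.

```python
# Minimal sketch with hypothetical numbers (not the paper's model): the
# expected incremental net monetary benefit (INB) of screening, per person
# invited, rises linearly with disease prevalence.

def inb_of_screening(prevalence,
                     qalys_per_case_detected=0.1,  # assumed QALY gain per case found
                     extra_cost_per_case=1000.0,   # assumed extra treatment cost per case
                     cost_per_invitee=30.0,        # assumed cost of inviting and scanning
                     wtp_per_qaly=20000.0):        # willingness to pay per QALY
    value_per_case = qalys_per_case_detected * wtp_per_qaly - extra_cost_per_case
    return prevalence * value_per_case - cost_per_invitee

for p in (0.005, 0.01, 0.03, 0.05):
    print(f"prevalence {p:.1%}: INB = £{inb_of_screening(p):+.2f} per invitee")
# Screening breaks even at 3% prevalence under these made-up numbers, which is
# why a (cheap) prevalence study can settle the funding decision.
```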

An exciting aspect is that treatment itself could be better targeted, in particular, that lowering the threshold for treatment could reduce non-intervention rates and operative mortality. The implication is that there may be scope to improve the cost-effectiveness of management, which in turn will leave greater scope for investment in screening. Could this be the next question to be tackled by this remarkable model?

Establishing the value of diagnostic and prognostic tests in health technology assessment. Medical Decision Making [PubMed] Published 13th March 2018

Keeping on the topic of the cost-effectiveness of screening and diagnostic tests, this is a paper on how to evaluate tests in a manner consistent with health technology assessment principles. This paper has been around for a few months, but it’s only now that I’ve had the chance to give it the careful read that such a well-thought-out paper deserves.

Marta Soares and colleagues lay out an approach to determine the most cost-effective way to use diagnostic and prognostic tests. They start by explaining that the value of the test is mostly in informing better management decisions. This means that the cost-effectiveness of testing necessarily depends on the cost-effectiveness of management.

The paper also spells out that the cost-effectiveness of testing depends on the prevalence of the disease, as we saw in the paper above on screening for abdominal aortic aneurysm. Clearly, the cost-effectiveness of testing depends on the accuracy of the test.

Importantly, the paper highlights that the evaluation should compare all possible ways of using the test. A decision problem with 1 test and 1 treatment yields 6 strategies, of which 3 are relevant: no test and treat all; no test and treat none; test and treat if positive. If the reference test is added, another 3 strategies need to be considered. This shows how complex a cost-effectiveness analysis of a test can quickly become! In my paper with Marta and others, for example, we ended up with 383 testing strategies.
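
The arithmetic behind the 6-strategies claim is easy to reproduce. The sketch below (my own illustration, not code from the paper) enumerates every combination of testing decision and treatment rule for one test and one treatment:

```python
# Enumerate the strategies for a decision problem with 1 test and 1 treatment.
from itertools import product

strategies = [("no test", "treat all"), ("no test", "treat none")]
for on_pos, on_neg in product(("treat", "don't treat"), repeat=2):
    strategies.append(("test", f"if +: {on_pos}, if -: {on_neg}"))

for s in strategies:
    print(s)  # 6 strategies in total

# Of the 4 testing strategies, 'treat regardless of result' and 'treat no one'
# are dominated by their no-test counterparts (same outcomes, plus the test
# cost), and 'treat only if negative' is irrelevant for an informative test.
# That leaves the 3 relevant strategies named in the paper: treat all, treat
# none, and test-and-treat-if-positive.
```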

The discussion is excellent, particularly about the limitations of end-to-end studies (which compare testing strategies in terms of their end outcomes, e.g. health). End-to-end studies can only compare a limited subset of testing strategies and may not allow for modelling the outcomes of strategies beyond those compared in the study. Furthermore, end-to-end studies are likely to be inefficient, given the large sample sizes and long follow-up required to detect differences in outcomes. I wholeheartedly agree that primary studies should focus on the prevalence of the disease and the accuracy of the test, leaving the evaluation of the best way to use the test to decision modelling.

Reasonable patient care under uncertainty. Health Economics [PubMed] Published 22nd August 2018

And for my third paper of the week, something completely different. But so worth reading! Charles Manski provides an overview of his work on how to use the available evidence to make decisions under uncertainty. It is accompanied by comments from Karl Claxton, Emma McIntosh, and Anirban Basu, together with Manski’s response. The set is a superb read and great food for thought.

Manski starts with the premise that we make decisions about which course of action to take without having full information about what is best; i.e. under uncertainty. This is uncontroversial and has been well accepted ever since Arrow’s seminal paper.

More contentious is Manski’s view that clinicians’ decisions for individual patients may be better than the recommendations of guidelines for the ‘average’ patient, because clinicians can take into account more information about the specific individual patient. I would contend that it is unrealistic to expect clinicians to keep pace with new knowledge in medicine, given how fast and in what volume it is generated. Furthermore, clinicians, like all other people, are unlikely to be fully rational in their decision-making.

Most fascinating was Section 6 on decision theory under uncertainty. Manski focussed on the minimax-regret criterion. I had not heard of this approach before, so Manski’s explanation was quite the eye-opener.
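
For readers who, like me, are new to it, here is a toy illustration of the criterion (my own numbers, not Manski’s): compute each option’s shortfall against the best option in every possible state of the world, then pick the option whose worst-case shortfall is smallest.

```python
# Toy minimax-regret example with made-up outcomes (e.g. QALYs) for three
# treatments under two possible states of the world.
outcomes = {
    "treatment A": [10, 4],
    "treatment B": [7, 7],
    "treatment C": [3, 9],
}
n_states = 2

# Best achievable outcome in each state, across all treatments.
best_in_state = [max(o[s] for o in outcomes.values()) for s in range(n_states)]

# Regret of a treatment in a state = shortfall versus that state's best.
max_regret = {
    t: max(best_in_state[s] - o[s] for s in range(n_states))
    for t, o in outcomes.items()
}

print(max_regret)                           # {'treatment A': 5, 'treatment B': 3, 'treatment C': 7}
print(min(max_regret, key=max_regret.get))  # 'treatment B' minimises the worst-case regret
```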

Manski concludes by recommending that central health care planners take a portfolio approach to their guidelines (adaptive diversification), coupled with the minimax-regret criterion to update the guidelines as more information emerges (adaptive minimax-regret). Whether the minimax-regret criterion is the best is a question that I will leave to better brains than mine. A more immediate question is how feasible it is to implement this adaptive diversification, particularly in instituting a process in which data are systematically collected and analysed to update the guidelines. In his response, Manski suggests that specialists in decision analysis should become members of the multidisciplinary clinical team and that decision analysis should be taught in medical courses. This resonates with my own view that we need to do better in helping people use information to make better decisions.


Meeting round-up: Health Economists’ Study Group (HESG) Summer 2018

HESG Summer 2018 was hosted by the University of Bristol at the Mercure Bristol Holland House on 20th-22nd June. The organisers did a superb job… the hotel was super comfortable, the food & drink were excellent, and the discussions were enlightening. So the Bristol team can feel satisfied with a job very well done, and one that has certainly set the bar high for the next HESG at York.

Day 1

I started by attending the engaging discussion by Mark Pennington on Tristan Snowsill’s paper on how to use moment-generating functions in cost-effectiveness modelling. Tristan has suggested a new method to model time-dependent disease progression, rather than using multiple tunnel states or discrete event simulation. I think this could really be a game changer in decision modelling. But for me, the clear challenge will be in explaining the method in a simple way, so that modellers will feel comfortable trying it out.
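
If I have understood the idea correctly, a key ingredient is a standard identity: the expected discounted value of a payoff occurring at a random time T is the moment-generating function of T evaluated at minus the discount rate. A sketch (my gloss on the approach, not the paper’s notation):

```latex
% Expected discounted payoff at random event time T, continuous discount rate r:
\mathbb{E}\!\left[e^{-rT}\right] = M_T(-r)
% e.g. for an exponential time-to-event with rate \lambda,
% M_T(s) = \lambda / (\lambda - s), so \mathbb{E}[e^{-rT}] = \lambda / (\lambda + r):
% a closed form, with no tunnel states needed to carry time-in-state along.
```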

It was soon time to take the reins myself and chair the next session. The paper, by Joanna Thorn and colleagues, explored which items should be included in health economic analysis plans (HEAPs), with the discussion being led by David Turner. There was a very lively back-and-forth on the role of HEAPs and their relationship with the study protocol and statistical analysis plan. In my view, this highlighted how HEAPs can be a useful tool to set out the economic analysis, help plan resources and manage expectations from the wider project team.

My third session was the eye-opening discussion of Ian Ross’s paper on the time costs of open defecation in India, led by Julius Ohrnberger. It was truly astonishing to learn how prevalent the practice of open defecation is, and the time costs involved in finding a suitable location, an impact that would never have crossed my mind without this fascinating paper.

My last session of the day was Aideen Ahern’s discussion of the thought-provoking paper by Tessa Peasgood and colleagues on identifying the dimensions that should be included in an instrument to measure health, social care and carer-related quality of life. An extended QALY-weight covering both health and care-related quality of life is almost the holy grail in preference measures. It would allow us to account for the impact of interventions across these two closely related areas of quality of life. The challenge is in generating an instrument that is both generic and sensitive. The extended-QALY weight is still under development, with the next step being to select the final set of dimensions for valuation.

The evening plenary session was on the hot-button topic of “Opportunities and challenges of Brexit for health economists” and included presentations by Paula Lorgelly, Andrew Street and Ellen Rule. We found ourselves jointly commiserating about the numerous challenges posed by the increased demand for health care and the decreased supply of health care professionals. But it wasn’t all doom and gloom, fortunately, as Andrew Street suggested that future economic research may use Brexit as an exogenous shock. Clearly this is not enough for comfort, but it left the room in a positive mood to face dinner!

Day 2

It was time for one of my own papers on day 2, as we started with Nicky Welton discussing the paper by Alessandro Grosso, myself and other colleagues on the structural uncertainties in cost-effectiveness modelling. We were delighted to receive excellent comments that will help to improve our paper. The session also prompted us to think about whether we should separate the model from the structural uncertainty analysis and create 2 distinct papers. This would allow us to explore and extend the latter even further. So, watch this space!

I attended Matthew Quaife’s discussion next, on the study by Katharina Diernberger and colleagues of expert elicitation to parameterise a cost-effectiveness model. Their expert elicitation had a whopping 47 responses, which allowed the team to explore different ways to aggregate the answers and demonstrate their impact on the results. This paper prompted a quick-fire discussion about how far to push decision modelling when data are scarce. Expert elicitation is often seen as the answer to scarce data, but it is no silver bullet! Thanks to this paper, it is clear that the differing views among experts make a difference to the findings.
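
To make the aggregation point concrete, here is a sketch of one common choice, a linear opinion pool with equal weights, against naive averaging of the experts’ point estimates. The Beta distributions are placeholders I have invented, not the study’s elicited beliefs:

```python
# Sketch: two ways to aggregate experts' beliefs about a probability.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical elicited beliefs: each expert's uncertainty as a Beta(a, b).
experts = [(2, 8), (5, 5), (1, 19)]

# 1) Linear opinion pool: a mixture of the experts' distributions, which
#    keeps the disagreement between experts as extra spread.
pooled_draws = np.concatenate([rng.beta(a, b, size=10_000) for a, b in experts])

# 2) Averaging point estimates: collapses disagreement to a single number.
mean_of_means = np.mean([a / (a + b) for a, b in experts])

print(f"linear pool:   mean={pooled_draws.mean():.3f}, sd={pooled_draws.std():.3f}")
print(f"mean of means: {mean_of_means:.3f} (between-expert spread discarded)")
```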

I continued along the modelling theme with the next session I’d chosen: Tracey Sach’s discussion of Ieva Skarda and colleagues’ excellent paper simulating the long-term consequences of interventions in childhood. The paper prompted a lot of interest regarding the use of the results to inform the extrapolation of trials of short duration. The authors are looking at developing a tool to facilitate the use of the model by external researchers, which I’m sure will have a high take-up.

After lunch, I attended Tristan Snowsill’s discussion of Felix Achana and colleagues’ paper on regression models for the analysis of clinical trial data. Felix and colleagues propose multivariate generalised linear mixed effects models to account for centre-specific heterogeneity and to estimate effects on costs and outcomes simultaneously. Although the analysis is quite complex, the method has strong potential to be very useful in multinational trials. I was excited to hear that the authors are developing functions in Stata and R, which will make it much easier for analysts to use the method.
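
As a rough sketch of the kind of model this implies, consider a stylised Gaussian special case (my simplification, not necessarily the authors’ exact specification): for patient i in centre j, costs and effects get correlated centre-level random effects and correlated residuals, and the incremental cost and effect are estimated jointly.

```latex
\begin{aligned}
c_{ij} &= \alpha_c + \delta_c \, t_{ij} + u^{c}_{j} + \varepsilon^{c}_{ij}
  && \text{(cost equation)} \\
e_{ij} &= \alpha_e + \delta_e \, t_{ij} + u^{e}_{j} + \varepsilon^{e}_{ij}
  && \text{(effect equation)} \\
(u^{c}_{j}, u^{e}_{j})' &\sim N(0, \Sigma_u)
  && \text{(correlated centre effects)} \\
(\varepsilon^{c}_{ij}, \varepsilon^{e}_{ij})' &\sim N(0, \Sigma_\varepsilon)
  && \text{(correlated residuals)}
\end{aligned}
```

Here t_ij is the treatment indicator, so δ_c and δ_e are the incremental cost and effect; the generalised-linear-model machinery would replace the normal distributions and identity links with more realistic choices for skewed cost data.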

Keeping to the cost-effectiveness theme, I then attended Ed Wilson’s discussion of the paper by Laura Flight and colleagues on the risk of bias in adaptive RCTs. The paper discusses how an adaptive trial may be stopped early depending on an interim analysis. The caveat is that conducting multiple interim analyses requires adjusting for bias when the trial informs an economic analysis. This is an opportune paper, as the use of adaptive trial designs is on the rise, and definitely one I’ll make a note to refer to in the future.
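
The bias is easy to demonstrate by simulation. In this sketch (illustrative numbers of my own, not the authors’ analysis), trials that stop early at a favourable interim look keep an inflated estimate, so the average estimated effect sits above the truth:

```python
# Sketch: over-estimation of the treatment effect when trials can stop early
# at a favourable interim analysis (all numbers illustrative).
import numpy as np

rng = np.random.default_rng(0)
true_effect, sd, n_per_look, n_sims = 0.2, 1.0, 100, 20_000

estimates = []
for _ in range(n_sims):
    first = rng.normal(true_effect, sd, n_per_look)
    z = first.mean() / (sd / np.sqrt(n_per_look))
    if z > 2.5:
        estimates.append(first.mean())  # stopped early: inflated estimate kept
    else:
        second = rng.normal(true_effect, sd, n_per_look)
        estimates.append(np.concatenate([first, second]).mean())

print(np.mean(estimates))  # systematically above the true effect of 0.2
```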

For my final session of the day, I discussed Emma McManus’s paper on establishing a definition of model replication. Replication has attracted increasing interest from the scientific community, but its take-up has been slow in health economics, the exception being cost-effectiveness modelling of diabetes. Well done to Emma and the team for bringing the topic to the forum! The ensuing discussion interestingly revealed that we can often have quite different concepts of what replication is and of its role in model validation. The authors are working on replicating published models, so I’m looking forward to hearing more about their experience in future meetings.

Day 3

The last day got off to a strong start when Andrew Street opened with a discussion of Joshua Kraindler and Ben Gershlick’s study on the impact of capital investment on hospital productivity. The session was both thought-provoking and extremely engaging, with Andrew encouraging our involvement by asking us all to think about the shape of a production function, in order to better interpret the results. This timely discussion centred on the challenges in measuring capital investment in the NHS, given the paucity of data.

My final session was Francesco Ramponi’s paper on cross-sectoral economic evaluations, discussed by Mandy Maredza. The session was quite a record-breaker for HESG Bristol, enjoying probably the largest audience of the conference. It shone a spotlight on the interest in expanding economic evaluations beyond decisions in health care, and on the role of economic evaluations when costs and outcomes fall on different budgets and decision makers.

This HESG, as always, was a testament to the breadth of topics covered by health economists, and to their hard work in pushing this important science onward. I’m now very much looking forward to seeing so many interesting papers published, many of which I will certainly use and reflect upon in my own research. Of course, I’m also very much looking forward to the next batch of new research at the HESG in York. The date is firmly in my diary!


Rita Faria’s journal round-up for 18th June 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Objectives, budgets, thresholds, and opportunity costs—a health economics approach: an ISPOR Special Task Force report. Value in Health [PubMed] Published 21st February 2018

The economic evaluation world has been discussing cost-effectiveness thresholds for a while. This paper has been out for a few months, but it slipped under my radar. It explains the relationship between the cost-effectiveness threshold, the budget, opportunity costs and willingness to pay for health. My take-home messages are that we should use cost-effectiveness analysis to inform decisions in both publicly and privately funded health care systems. Each system has a budget and a way of raising funds for that budget. The cost-effectiveness threshold should be specific to each health care system, in order to reflect its specific opportunity cost. The budget can change for many reasons, and the cost-effectiveness threshold should be adjusted to reflect these changes and hence continue to reflect the opportunity cost. For example, taxpayers can increase their willingness to pay for health through increased taxes for the health care system. We are starting to see this in the UK with the calls to raise taxes to increase the NHS budget. It is worth noting that the NICE threshold may not warrant adjustment upwards, since research suggests that it does not reflect the opportunity cost. This is a welcome paper on the topic and a must-read, particularly if you’re arguing for the use of cost-effectiveness analysis in settings that have traditionally been reluctant to embrace it, such as the US.
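
The logic of a budget-dependent threshold can be captured in a few lines. In this sketch (entirely made-up programmes and costs, not figures from the report), funds go to the cheapest QALYs first, so the threshold, which is the cost per QALY of the last programme funded, moves with the budget:

```python
# Sketch: the threshold as the cost per QALY at the margin of a fixed budget
# (programme costs and QALY yields are hypothetical).
programmes = [(10, 1000), (10, 800), (10, 500), (10, 250), (10, 100)]  # (£m, QALYs)
programmes.sort(key=lambda p: p[0] * 1e6 / p[1])  # fund the cheapest QALYs first

def threshold(budget_m):
    spent, marginal_icer = 0, None
    for cost_m, qalys in programmes:
        if spent + cost_m > budget_m:
            break
        spent += cost_m
        marginal_icer = cost_m * 1e6 / qalys  # cost per QALY of last programme funded
    return marginal_icer

print(threshold(30))  # £20,000/QALY with a £30m budget
print(threshold(40))  # £40,000/QALY once the budget grows to £40m
```

Funding anything with an ICER above the marginal figure would displace cheaper QALYs elsewhere, which is the opportunity-cost argument in a nutshell.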

Basic versus supplementary health insurance: access to care and the role of cost effectiveness. Journal of Health Economics [RePEc] Published 31st May 2018

Using cost-effectiveness analysis to inform coverage decisions not only for publicly funded but also for privately funded health care is also a feature of this study by Jan Boone. I’ll admit that the equations are well beyond my level of microeconomics, but the text is good at explaining the insights and the intuition. Boone grapples with the question of how public and private health care systems should choose which technologies to cover, and concludes that the most cost-effective technologies should be prioritised for funding. That the theory matches the practice is reassuring to an economic evaluator like myself! One of the findings is that cost-effective technologies which are very cheap should not be covered, the rationale being that everyone can afford them. The issue for me is that people may decide not to purchase a highly cost-effective technology which is very cheap. As we know from behavioural economics, people are not rational all the time! Boone also concludes that the inclusion of technologies in the universal basic package should consider the prevalence of the conditions in people at high risk and with low income. The way I interpreted this is that it is more cost-effective to include in the universal basic package technologies for high-risk, low-income people who would not otherwise be able to afford them, than technologies for high-income people who can afford supplementary insurance. I can’t cover here all the findings and nuances of the theoretical model. Suffice to say that it is an interesting read, even if, like me, you avoid the equations.

Surveying the cost effectiveness of the 20 procedures with the largest public health services waiting lists in Ireland: implications for Ireland’s cost-effectiveness threshold. Value in Health Published 11th June 2018

As we are on the topic of cost-effectiveness thresholds, this is a study on the threshold in Ireland. It sets out to find out whether the current cost-effectiveness threshold is too high, given the ICERs of the 20 procedures with the largest waiting lists. The idea is that, if the current cost-effectiveness threshold is correct, the procedures with large and long waiting lists would have ICERs above the threshold; if instead those procedures have low ICERs, the threshold may be set too high. Figure 1 is excellent in conveying the discordance between ICERs and waiting lists. For example, the ICER for extracapsular extraction of the crystalline lens is €10,139/QALY and its waiting list has 10,056 people, while the ICER for surgical tooth removal is €195,155/QALY and its waiting list is smaller, at 833. This suggests that, as in many other countries, there are inefficiencies in the way the Irish health care system prioritises technologies for funding. The limitation of the study is in the ICERs. Ideally, the relevant ICER would compare each procedure with standard care in Ireland whilst on the waiting list (the “no procedure” option), but it is nigh impossible to find ICERs that meet this condition for all procedures. The alternative is to assume that the differences in costs and QALYs are generalisable from the source study to Ireland. It was great to see another study on empirical cost-effectiveness thresholds. I look forward to knowing what the cost-effectiveness threshold should be to accurately reflect opportunity costs.
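
The discordance in the two figures quoted above is stark enough to show in a few lines. In this sketch, the ICERs and waiting-list sizes are the ones quoted from the paper, but the threshold value is my illustrative assumption, not the paper’s:

```python
# The paper's Figure 1 in miniature: ICERs versus waiting lists for the two
# procedures quoted above, against an illustrative threshold.
procedures = {
    "extracapsular extraction of crystalline lens": (10_139, 10_056),
    "surgical tooth removal": (195_155, 833),
}  # name: (ICER in EUR/QALY, people on the waiting list)
threshold = 45_000  # EUR/QALY; an assumption for illustration only

for name, (icer, waiting) in sorted(procedures.items(), key=lambda kv: kv[1][0]):
    verdict = "below" if icer < threshold else "above"
    print(f"{name}: €{icer:,}/QALY ({verdict} threshold), waiting list {waiting:,}")
# If prioritisation tracked cost-effectiveness, the procedure offering the
# cheapest QALYs would not also have the longest waiting list.
```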
