Meeting round-up: Health Economists’ Study Group (HESG) Summer 2018

HESG Summer 2018 was hosted by the University of Bristol at the Mercure Bristol Holland House on 20th-22nd June. The organisers did a superb job… the hotel was super comfortable, the food & drink were excellent, and the discussions were enlightening. So the Bristol team can feel satisfied with a job very well done, and one that has certainly set the bar high for the next HESG at York.

Day 1

I started by attending the engaging discussion by Mark Pennington of Tristan Snowsill’s paper on how to use moment-generating functions in cost-effectiveness modelling. Tristan has suggested a new method to model time-dependent disease progression without resorting to multiple tunnel states or discrete event simulation. I think this could really be a game changer in decision modelling. But for me, the clear challenge will be in explaining the method simply enough that modellers feel comfortable trying it out.
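For the curious, the key identity is easy to sketch (a toy illustration of my own, not the paper’s code, with a made-up gamma sojourn time and discount rate): the expected discounted time spent in a health state depends on the sojourn-time distribution only through its moment-generating function evaluated at minus the discount rate.

```python
import numpy as np

# Toy illustration (my own, not the paper's code): the expected discounted
# time in a state, E[(1 - exp(-r*T)) / r], depends on the sojourn time T
# only through its moment-generating function M_T evaluated at -r, so no
# tunnel states or discrete event simulation are needed.

r = 0.035            # annual discount rate
k, theta = 2.0, 3.0  # hypothetical Gamma(shape, scale) sojourn time, mean 6 years

def gamma_mgf(t, k, theta):
    """MGF of a Gamma(k, theta) random variable, valid for t < 1/theta."""
    return (1.0 - theta * t) ** (-k)

# Closed form via the MGF
discounted_time_mgf = (1.0 - gamma_mgf(-r, k, theta)) / r

# Monte Carlo check of the same quantity
rng = np.random.default_rng(1)
T = rng.gamma(k, theta, size=200_000)
discounted_time_mc = np.mean((1.0 - np.exp(-r * T)) / r)

print(discounted_time_mgf, discounted_time_mc)  # the two agree closely
```

The closed form replaces an entire simulation, which is exactly what makes the approach attractive for probabilistic sensitivity analysis.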

It was soon time to take the reins myself and chair the next session. The paper, by Joanna Thorn and colleagues, explored which items should be included in health economic analysis plans (HEAPs), with the discussion being led by David Turner. There was a very lively back-and-forth on the role of HEAPs and their relationship with the study protocol and statistical analysis plan. In my view, this highlighted how HEAPs can be a useful tool to set out the economic analysis, help plan resources and manage expectations from the wider project team.

My third session was the eye-opening discussion of Ian Ross’s paper on time costs of open defecation in India, led by Julius Ohrnberger. It was truly astonishing to learn how prevalent the practice of open defecation is, and the time costs involved in finding a suitable location, the impact of which would never have crossed my mind without this fascinating paper.

My last session of the day took in the discussion by Aideen Ahern of the thought-provoking paper by Tessa Peasgood and colleagues on the process of identifying the dimensions to include in an instrument to measure health, social care and carer-related quality of life. An extended QALY weight for health and care-related quality of life is almost the holy grail of preference measures: it would allow us to account for the impact of interventions in these two closely related areas of quality of life. The challenge is in generating an instrument that is both generic and sensitive. The extended QALY weight is still under development, with the next step being to select the final set of dimensions for valuation.

The evening plenary session was on the hot-button topic of “Opportunities and challenges of Brexit for health economists” and included presentations by Paula Lorgelly, Andrew Street and Ellen Rule. We found ourselves jointly commiserating about the numerous challenges posed by the increased demand for health care and the decreased supply of health care professionals. Fortunately, it wasn’t all doom and gloom, as Andrew Street suggested that future economic research may use Brexit as an exogenous shock. Hardly full comfort, but it left the room in a positive mood to face dinner!

Day 2

It was time for one of my own papers on day 2, as we started with Nicky Welton discussing the paper by Alessandro Grosso, myself and other colleagues on structural uncertainties in cost-effectiveness modelling. We were delighted to receive excellent comments that will help to improve our paper. The session also prompted us to think about whether we should separate the model from the structural uncertainty analysis and create two distinct papers, which would allow us to explore and extend the latter even further. So, watch this space!

I attended Matthew Quaife’s discussion next, on the study by Katharina Diernberger and colleagues of expert elicitation to parameterise a cost-effectiveness model. Their expert elicitation had a whopping 47 responses, which allowed the team to explore different ways to aggregate the answers and demonstrate their impact on the results. This paper prompted a quick-fire discussion about how far to push decision modelling if data are scarce. Expert elicitation is often seen as the answer to scarce data but it is no silver bullet! Thanks to this paper, it is clear that the differing views among experts make a difference to the findings.
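To see why the choice of aggregation matters, here is a hypothetical sketch (made-up numbers, not the authors’ data or method) comparing a linear opinion pool, which keeps between-expert disagreement as extra uncertainty, with naively averaging the experts’ means:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical elicitation (made-up, not the authors' data): 47 experts each
# report a normal belief about a parameter, with mean m_i and sd 0.05.
expert_means = rng.uniform(0.2, 0.6, size=47)
within_sd = 0.05

# Linear opinion pool: an equal-weight mixture of the individual distributions.
picks = rng.integers(0, 47, size=100_000)
pool_draws = rng.normal(expert_means[picks], within_sd)

# Naive alternative: average the means and keep a single expert's sd,
# which discards the disagreement between experts.
naive_mean, naive_sd = expert_means.mean(), within_sd

print(pool_draws.std(), naive_sd)  # the pooled distribution is much wider
```

The pooled mean is essentially the same, but the pooled uncertainty is far larger, which is precisely why differing views among experts can shift cost-effectiveness results.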

I continued along the modelling theme with the next session I’d chosen: Tracey Sach’s discussion of Ieva Skarda and colleagues’ excellent paper simulating the long-term consequences of interventions in childhood. The paper prompted a lot of interest in using the results to inform the extrapolation of trials with short follow-up. The authors are looking at developing a tool to facilitate the use of the model by external researchers, which I’m sure will have a high take-up.

After lunch, I attended Tristan Snowsill’s discussion of Felix Achana and colleagues’ paper on regression models for the analysis of clinical trial data. Felix and colleagues propose multivariate generalised linear mixed effects models to account for centre-specific heterogeneity and to estimate the effects on costs and outcomes simultaneously. Although the analysis is quite complex, the method has strong potential to be very useful in multinational trials. I was excited to hear that the authors are developing functions in Stata and R, which will make it much easier for analysts to use the method.

Keeping to the cost-effectiveness topic, I then attended Ed Wilson’s discussion of the paper by Laura Flight and colleagues on the risk of bias in adaptive RCTs. The paper discusses how an adaptive trial may be stopped early depending on an interim analysis. The caveat is that conducting multiple interim analyses requires adjusting for bias when the results inform the economic analysis. This is an opportune paper, as the use of adaptive trial designs is rising, and definitely one I’ll make a note to refer to in the future.
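The bias in question is easy to reproduce for yourself (a simulation sketch with my own made-up numbers, not from the paper): among trials that stop early because an interim test statistic crossed a threshold, the naive effect estimate overshoots the truth.

```python
import numpy as np

# Simulation sketch (my own hypothetical numbers, not from the paper): trials
# that stop early because an interim z-statistic crossed a threshold report
# inflated effect estimates unless the analysis adjusts for the stopping rule.
rng = np.random.default_rng(42)
true_effect = 0.2                # hypothetical true effect, outcome sd = 1
n_interim, n_final = 100, 400    # patients at the interim look and at completion
z_stop = 2.0                     # stop early if the interim z exceeds this
n_trials = 50_000

interim_mean = rng.normal(true_effect, 1 / np.sqrt(n_interim), n_trials)
extra_mean = rng.normal(true_effect, 1 / np.sqrt(n_final - n_interim), n_trials)

stopped = interim_mean * np.sqrt(n_interim) > z_stop
final_mean = (interim_mean * n_interim + extra_mean * (n_final - n_interim)) / n_final
estimate = np.where(stopped, interim_mean, final_mean)

# Among early-stopped trials, the naive estimate exceeds the true effect.
print(estimate[stopped].mean(), true_effect)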

For my final session of the day, I discussed Emma McManus’s paper on establishing a definition of model replication. Replication has attracted increasing interest from the scientific community, but its take-up has been slow in health economics, the exception being cost-effectiveness modelling of diabetes. Well done to Emma and the team for bringing the topic to the forum! The ensuing discussion interestingly revealed that we can often have quite different concepts of what replication is and of its role in model validation. The authors are working on replicating published models, so I’m looking forward to hearing more about their experience in future meetings.

Day 3

The last day got off to a strong start when Andrew Street opened with a discussion of Joshua Kraindler and Ben Gershlick’s study on the impact of capital investment on hospital productivity. The session was both thought-provoking and extremely engaging, with Andrew encouraging our involvement by asking us all to think about the shape of a production function, in order to better interpret the results. This timely discussion was centred around the challenges in measuring capital investment in the NHS, given the paucity of data.

My final session was Francesco Ramponi’s paper on cross-sectoral economic evaluations, discussed by Mandy Maredza. This session was quite a record-breaker for HESG Bristol, enjoying probably the largest audience of the conference. Opportunely, it was able to shine a spotlight on the interest in expanding economic evaluations beyond decisions in health care, and the role of economic evaluations when costs and outcomes relate to different budgets and decision makers.

This HESG, as always, was a testament to the breadth of topics covered by health economists, and to their hard work in pushing this important science onward. I’m now very much looking forward to seeing so many interesting papers published, many of which I will certainly use and reflect upon in my own research. Of course, I’m also looking forward to the next batch of new research at the HESG in York. The date is firmly in my diary!

Rita Faria’s journal round-up for 18th June 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Objectives, budgets, thresholds, and opportunity costs—a health economics approach: an ISPOR Special Task Force report. Value in Health [PubMed] Published 21st February 2018

The economic evaluation world has been discussing cost-effectiveness thresholds for a while. This paper has been out for a few months, but it slipped under my radar. It explains the relationship between the cost-effectiveness threshold, the budget, opportunity costs and willingness to pay for health. My take-home messages are that we should use cost-effectiveness analysis to inform decisions in both publicly and privately funded health care systems. Each system has a budget and a way of raising funds for that budget. The cost-effectiveness threshold should be specific to each health care system, in order to reflect its particular opportunity cost. The budget can change for many reasons, and the cost-effectiveness threshold should be adjusted to reflect these changes, and hence the new opportunity cost. For example, taxpayers can increase their willingness to pay for health through increased taxes for the health care system; we are starting to see this in the UK with the calls to raise taxes to increase the NHS budget. It is worth noting that the NICE threshold may not warrant adjustment upwards, since research suggests that it does not reflect the opportunity cost. This is a welcome paper on the topic and a must-read, particularly if you’re arguing for the use of cost-effectiveness analysis in settings that have traditionally been reluctant to embrace it, such as the US.
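The link between the threshold and opportunity cost fits in a few lines of arithmetic (my own hypothetical figures, not from the report): if the threshold reflects the health displaced at the margin of a fixed budget, funding a technology with an ICER above it reduces population health overall.

```python
# Back-of-the-envelope illustration with hypothetical figures (not from the
# report): if the threshold reflects the health displaced at the margin of a
# fixed budget, a technology with an ICER above it loses health overall.
threshold = 15_000          # £/QALY produced by the marginal activity displaced
extra_cost = 1_000_000      # budget impact of funding the new technology
qalys_gained = 50           # health the new technology generates

icer = extra_cost / qalys_gained            # £20,000 per QALY
qalys_displaced = extra_cost / threshold    # QALYs lost from displaced spending
net_health_benefit = qalys_gained - qalys_displaced

print(icer, qalys_displaced, net_health_benefit)  # net health benefit is negative
```

This is also why the threshold should move when the budget or the productivity of marginal spending changes: it is an empirical quantity, not a statement of what health is worth.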

Basic versus supplementary health insurance: access to care and the role of cost effectiveness. Journal of Health Economics [RePEc] Published 31st May 2018

Using cost-effectiveness analysis to inform coverage decisions not only for publicly funded but also for privately funded health care is also a feature of this study by Jan Boone. I’ll admit that the equations are well beyond my level of microeconomics, but the text is good at explaining the insights and the intuition. Boone grapples with the question of how public and private health care systems should choose which technologies to cover, and concludes that the most cost-effective technologies should be prioritised for funding. That the theory matches the practice is reassuring to an economic evaluator like myself! One of the findings is that cost-effective technologies which are very cheap should not be covered, the rationale being that everyone can afford them. The issue for me is that people may decide not to purchase a highly cost-effective technology even if it is very cheap; as we know from behavioural economics, people are not rational all the time! Boone also concludes that the inclusion of technologies in the universal basic package should consider the prevalence of the conditions among people at high risk and with low income. The way I interpreted this is that it is more cost-effective to include in the universal basic package technologies for high-risk low-income people, who would not be able to afford them otherwise, than technologies for high-income people, who can afford supplementary insurance. I can’t cover here all the findings and nuances of the theoretical model. Suffice to say that it is an interesting read, even if, like me, you avoid the equations.

Surveying the cost effectiveness of the 20 procedures with the largest public health services waiting lists in Ireland: implications for Ireland’s cost-effectiveness threshold. Value in Health Published 11th June 2018

As we are on the topic of cost-effectiveness thresholds, this is a study on the threshold in Ireland. It sets out to find out whether the current cost-effectiveness threshold is too high, given the ICERs of the 20 procedures with the largest waiting lists. The idea is that, if the current cost-effectiveness threshold is correct, the procedures with large and long waiting lists would have ICERs above the threshold; if the procedures have low ICERs, the threshold may be set too high. I thought Figure 1 was excellent in conveying the discordance between ICERs and waiting lists. For example, the ICER for extracapsular extraction of crystalline lens is €10,139/QALY and its waiting list has 10,056 people, while the ICER for surgical tooth removal is €195,155/QALY and its waiting list is smaller, at 833. This study suggests that, as in many other countries, there are inefficiencies in the way the Irish health care system prioritises technologies for funding. The limitation of the study lies in the ICERs. Ideally, the relevant ICER would compare each procedure with standard care in Ireland whilst on the waiting list (the “no procedure” option), but it is nigh impossible to find ICERs that meet this condition for all procedures. The alternative is to assume that the differences in costs and QALYs are generalisable from the source study to Ireland. It was great to see another study on empirical cost-effectiveness thresholds. I look forward to knowing what the threshold should be to accurately reflect opportunity costs.
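Using just the two data points quoted above, the discordance is stark (a minimal sketch; the numbers are from the paper’s Figure 1 as reported above, the ranking logic is simply my illustration):

```python
# The two data points quoted above from the paper's Figure 1 (ICER in €/QALY,
# waiting-list size); the ranking logic is just my illustration.
procedures = {
    "extracapsular extraction of crystalline lens": (10_139, 10_056),
    "surgical tooth removal": (195_155, 833),
}

# If funding tracked cost-effectiveness, the cheapest QALYs should not be the
# ones stuck on the longest waiting list; here they are.
for name in sorted(procedures, key=lambda p: procedures[p][0]):
    icer, queue = procedures[name]
    print(f"{name}: €{icer:,}/QALY, waiting list of {queue:,}")
```

The lowest-ICER procedure sits on a waiting list more than ten times longer than the highest-ICER one, which is the paper’s point in miniature.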
