Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.
A new method to determine the optimal willingness to pay in cost-effectiveness analysis. Value in Health Published 17th May 2019
Efforts to identify a robust estimate of the willingness to pay for a QALY have foundered. Mostly, these efforts have relied on asking people about their willingness to pay. In the UK, we have moved away from using such estimates as a basis for setting cost-effectiveness thresholds in the context of resource allocation decisions. Instead, we have attempted to identify the opportunity cost of a QALY, which is perhaps even more difficult, but easier to justify in the context of a fixed budget. This paper seeks to inject new life into the willingness-to-pay approach by developing a method based on relative risk aversion.
The author outlines the relationship between relative risk aversion and the rate at which willingness-to-pay changes with income. Various candidate utility functions are described with respect to risk preferences, with a Weibull function being adopted for this framework. Estimates of relative risk aversion have been derived from numerous data sources, including labour supply, lottery experiments, and happiness surveys. These estimates from the literature are used to demonstrate the relationship between relative risk aversion and the ‘optimal’ willingness to pay (K), calibrated using the Weibull utility function. For an individual with ‘representative’ parameters plugged into their utility function, K is around twice the income level. K always increases with relative risk aversion.
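The mechanics can be illustrated with a textbook-style sketch – note that this uses a CRRA utility function and a simple risk premium, not the paper’s Weibull calibration or its actual formula for K, and the income and stake figures are made up. The point it demonstrates is the qualitative one above: willingness to pay rises with relative risk aversion.

```python
import math

def u(c, r):
    """CRRA utility of consumption c with relative risk aversion r."""
    return math.log(c) if r == 1 else c ** (1 - r) / (1 - r)

def u_inv(v, r):
    """Inverse of the CRRA utility function."""
    return math.exp(v) if r == 1 else ((1 - r) * v) ** (1 / (1 - r))

def risk_premium(income, stake, r):
    """Maximum willingness to pay to avoid a 50/50 gamble of +/- stake."""
    expected_u = 0.5 * u(income + stake, r) + 0.5 * u(income - stake, r)
    return income - u_inv(expected_u, r)

# Hypothetical income of 50,000 facing a 50/50 gamble of +/- 10,000:
for r in (0.5, 1, 2, 4):
    print(r, round(risk_premium(50_000, 10_000, r)))
```

For r = 2 the certainty equivalent is the harmonic mean of the two outcomes (48,000), giving a premium of exactly 2,000; the premium roughly doubles with each doubling of r in this example, mirroring the monotone relationship between relative risk aversion and K described in the paper.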
Various normative questions are raised, including whether a uniform K should be adopted for everybody within the population, and whether individuals should be able to spend on health care on top of public provision. This approach certainly appears to be more straightforward than other approaches to estimating willingness-to-pay in health care, and may be well-suited to decentralised (US-style) resource allocation decision-making. It’s difficult to see how this framework could gain traction in the UK, but it’s good to see alternative approaches being proposed and I hope to see this work developed further.
Striving for a societal perspective: a framework for economic evaluations when costs and effects fall on multiple sectors and decision makers. Applied Health Economics and Health Policy [PubMed] Published 16th May 2019
I’ve always been sceptical of a ‘societal perspective’ in economic evaluation, and I have written in favour of a limited health care perspective. This is mostly for practical reasons. Being sufficiently exhaustive to capture a truly ‘societal’ perspective is so difficult that the attempt is likely to produce estimates too inaccurate and imprecise to be useful – dangerous, even. But the fact is that there is no single decision-maker when it comes to public expenditure. Governments are made up of various departments, within which there are many levels and divisions. Not everybody will care about the health care perspective, so other objectives ought to be taken into account.
The purpose of this paper is to build on the idea of the ‘impact inventory’, described by the Second Panel on Cost-Effectiveness in Health and Medicine, which sought to address the challenge of multiple objectives. The extended framework described in this paper captures effects and opportunity costs associated with an intervention within various dimensions. These dimensions could (or should) align with decision-makers’ objectives. Trade-offs invariably require aggregation, and this aggregation could take place either within individuals or within dimensions – something not addressed by the Second Panel. The authors describe the implications of each approach to aggregation, providing visual representations of the impact inventory in each case. Aggregating within individuals requires a normative judgement about how each dimension is valued by the individual and then a judgement about how to aggregate for overall population net benefit. Aggregating across individuals within dimensions requires similar normative judgements. Where the chosen aggregation functions are linear and additive, both approaches will give the same results. But as soon as we start to consider equity concerns or more complex aggregation, we’ll see different decisions being indicated.
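The point about linear versus non-linear aggregation can be made concrete with a toy example – the numbers and the square-root transform below are entirely hypothetical (the square root is just a crude stand-in for a concave, equity-weighted aggregation function), not anything from the paper.

```python
import math

# Hypothetical net benefits: rows are individuals, columns are dimensions
# (say, health and consumption), in arbitrary units.
benefits = [[3.0, 1.0],
            [0.0, 4.0]]

def aggregate(matrix, f=lambda x: x):
    """Sum within each row, apply f to each row total, then sum the results."""
    return sum(f(sum(row)) for row in matrix)

# Transposing swaps the order: aggregate within dimensions first instead.
transpose = [list(col) for col in zip(*benefits)]

# Linear, additive aggregation: the order of aggregation is irrelevant.
print(aggregate(benefits), aggregate(transpose))  # both 8.0

# A concave transform breaks the equivalence: within-individual and
# within-dimension aggregation now point to different totals.
print(aggregate(benefits, math.sqrt), aggregate(transpose, math.sqrt))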
The authors adopt an example used by the Second Panel to demonstrate the decisions that would be made within a health-only perspective and then decisions that consider other dimensions. There could be a simple extension beyond health, such as including the impact on individuals’ consumption of other goods. Or it could be more complex, incorporating multiple dimensions, sectors, and decision-makers. For the more complex situation, the authors consider the inclusion of the criminal justice sector, introducing the number of crimes averted as an object of value.
It’s useful to think about the limitations of the Second Panel’s framing of the impact inventory and to make explicit the normative judgements involved. What this paper seems to be saying is that cross-sector decision-making is too complex to be adequately addressed by the Second Panel’s impact inventory. The framework described in this paper may be too abstract to be practically useful, and too vague to be foundational. But the complexities and challenges in multi-sector economic evaluation need to be spelt out – there is no simple solution.
Advanced data visualisation in health economics and outcomes research: opportunities and challenges. Applied Health Economics and Health Policy [PubMed] Published 4th May 2019
Computers can make your research findings look cool, which can help make people pay attention. But data visualisation can also be used as part of the research process and provide a means of more intuitively (and accurately) communicating research findings. The data sets used by health economists are getting bigger, which creates both more opportunity and more need for effective visualisation. The authors of this paper suggest that data visualisation techniques could be more widely adopted in our field, but that there are challenges and potential pitfalls to consider.
Decision modelling is an obvious context in which to use data visualisation, because models tend to involve large numbers of simulations. Dynamic visualisations can provide a means by which to better understand what is going on in these simulations, particularly with respect to uncertainty in estimates associated with alternative model structures or parameters. If paired with interactive models and customised dashboards, visualisation can make complex models accessible to non-expert users. Communicating patient outcomes data is also highlighted as a potential application, aiding the characterisation of differences between groups of individuals and alternative outcome measures.
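As a minimal sketch of the kind of simulation output such visualisations sit on top of – with entirely made-up distributions for incremental costs and effects, not taken from the paper – the points behind a cost-effectiveness acceptability curve can be computed like this; an interactive dashboard would simply let the user re-plot them as assumptions change.

```python
import random

random.seed(2019)

# Hypothetical probabilistic sensitivity analysis output: paired draws of
# incremental cost and incremental effect (QALYs) for a new treatment.
n = 5000
d_cost = [random.gauss(2000, 500) for _ in range(n)]
d_qaly = [random.gauss(0.3, 0.1) for _ in range(n)]

def prob_cost_effective(threshold):
    """Share of draws with positive net benefit at the given WTP threshold."""
    return sum(threshold * e - c > 0 for c, e in zip(d_cost, d_qaly)) / n

# The CEAC plots this probability against the threshold; these points
# would feed a static chart or a dynamic, interactive visualisation.
for wtp in (0, 5000, 10000, 20000, 30000):
    print(wtp, prob_cost_effective(wtp))
```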
Yet, there are barriers to wider use of visualisation. There is some scepticism about bias in underlying analyses, and end users don’t want to be bamboozled by snazzy graphics. The fact that journal articles are still the primary mode of communicating research findings is a problem, as you can’t have dynamic visualisations in a PDF. There’s also a learning curve for analysts wishing to develop complex visualisations. Hopefully, opportunities will be identified for two-way learning between the health economics world and data scientists more accustomed to data visualisation.
The authors provide several examples (static in the publication, but with links to live tools) to demonstrate the types of visualisations that can be created. Generally speaking, complex visualisations are proposed as complements to our traditional presentations of results, such as cost-effectiveness acceptability curves, rather than as alternatives. The key thing is to maintain credibility by ensuring that data visualisation is used to describe data in a more accurate and meaningful way, and to avoid exaggeration of research findings. It probably won’t be long until we see a set of good practice guidelines being developed for our field.