Chris Sampson’s journal round-up for 20th May 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A new method to determine the optimal willingness to pay in cost-effectiveness analysis. Value in Health Published 17th May 2019

Efforts to identify a robust estimate of the willingness to pay for a QALY have floundered. Mostly, these efforts have relied on asking people about their willingness to pay. In the UK, we have moved away from using such estimates as a basis for setting cost-effectiveness thresholds in the context of resource allocation decisions. Instead, we have attempted to identify the opportunity cost of a QALY, which is perhaps even more difficult, but easier to justify in the context of a fixed budget. This paper seeks to inject new life into the willingness-to-pay approach by developing a method based on relative risk aversion.

The author outlines the relationship between relative risk aversion and the rate at which willingness-to-pay changes with income. Various candidate utility functions are described with respect to risk preferences, with a Weibull function being adopted for this framework. Estimates of relative risk aversion have been derived from numerous data sources, including labour supply, lottery experiments, and happiness surveys. These estimates from the literature are used to demonstrate the relationship between relative risk aversion and the ‘optimal’ willingness to pay (K), calibrated using the Weibull utility function. For an individual with ‘representative’ parameters plugged into their utility function, K is around twice the income level. K always increases with relative risk aversion.
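The calibration hinges on the link between a utility function's curvature and relative risk aversion. As a minimal sketch of that underlying concept (my own illustration, using a simple CRRA utility rather than the paper's Weibull framework), the Arrow–Pratt coefficient can be computed directly from any candidate utility function:

```python
# Toy illustration (not the paper's Weibull calibration): the Arrow-Pratt
# coefficient of relative risk aversion, RRA(y) = -y * u''(y) / u'(y),
# computed numerically for a CRRA utility, where RRA is constant and
# equal to the parameter r by construction.

def rra(u, y):
    """Relative risk aversion at income y, via central finite differences."""
    h = 1e-3 * y
    u1 = (u(y + h) - u(y - h)) / (2 * h)            # approximates u'(y)
    u2 = (u(y + h) - 2 * u(y) + u(y - h)) / h ** 2  # approximates u''(y)
    return -y * u2 / u1

def crra(r):
    """CRRA utility u(y) = y^(1-r) / (1-r), for r != 1."""
    return lambda y: y ** (1 - r) / (1 - r)

for r in (0.5, 1.5, 2.0):
    print(f"r = {r}: estimated RRA at y = 50,000 is {rra(crra(r), 50_000.0):.3f}")
```

The same numerical check could be applied to a Weibull or any other candidate utility function, which is how the different functional forms discussed in the paper can be compared with respect to risk preferences.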

Various normative questions are raised, including whether a uniform K should be adopted for everybody within the population, and whether individuals should be able to spend on health care on top of public provision. This approach certainly appears to be more straightforward than other approaches to estimating willingness-to-pay in health care, and may be well-suited to decentralised (US-style) resource allocation decision-making. It’s difficult to see how this framework could gain traction in the UK, but it’s good to see alternative approaches being proposed and I hope to see this work developed further.

Striving for a societal perspective: a framework for economic evaluations when costs and effects fall on multiple sectors and decision makers. Applied Health Economics and Health Policy [PubMed] Published 16th May 2019

I’ve always been sceptical of a ‘societal perspective’ in economic evaluation, and I have written in favour of a limited health care perspective. This is mostly for practical reasons. Being sufficiently exhaustive to identify a truly ‘societal’ perspective is so difficult that, in attempting to do so, there is a very high chance that you will produce estimates that are so inaccurate and imprecise that they are more dangerous than useful. But the fact is that there is no single decision-maker when it comes to public expenditure. Governments are made up of various departments, within which there are many levels and divisions. Not everybody will care about the health care perspective, so other objectives ought to be taken into account.

The purpose of this paper is to build on the idea of the ‘impact inventory’, described by the Second Panel on Cost-Effectiveness in Health and Medicine, which sought to address the challenge of multiple objectives. The extended framework described in this paper captures effects and opportunity costs associated with an intervention within various dimensions. These dimensions could (or should) align with decision-makers’ objectives. Trade-offs invariably require aggregation, and this aggregation could take place either within individuals or within dimensions – something not addressed by the Second Panel. The authors describe the implications of each approach to aggregation, providing visual representations of the impact inventory in each case. Aggregating within individuals requires a normative judgement about how each dimension is valued by the individual and then a judgement about how to aggregate for overall population net benefit. Aggregating across individuals within dimensions requires similar normative judgements. Where the chosen aggregation functions are linear and additive, both approaches will give the same results. But as soon as we start to consider equity concerns or more complex aggregation, we’ll see different decisions being indicated.

The authors adopt an example used by the Second Panel to demonstrate the decisions that would be made within a health-only perspective and then decisions that consider other dimensions. There could be a simple extension beyond health, such as including the impact on individuals’ consumption of other goods. Or it could be more complex, incorporating multiple dimensions, sectors, and decision-makers. For the more complex situation, the authors consider the inclusion of the criminal justice sector, introducing the number of crimes averted as an object of value.

It’s useful to think about the limitations of the Second Panel’s framing of the impact inventory and to make explicit the normative judgements involved. What this paper seems to be saying is that cross-sector decision-making is too complex to be adequately addressed by the Second Panel’s impact inventory. The framework described in this paper may be too abstract to be practically useful, and too vague to be foundational. But the complexities and challenges in multi-sector economic evaluation need to be spelt out – there is no simple solution.

Advanced data visualisation in health economics and outcomes research: opportunities and challenges. Applied Health Economics and Health Policy [PubMed] Published 4th May 2019

Computers can make your research findings look cool, which can help make people pay attention. But data visualisation can also be used as part of the research process and provide a means of more intuitively (and accurately) communicating research findings. The data sets used by health economists are getting bigger, which provides more opportunity and need for effective visualisation. The authors of this paper suggest that data visualisation techniques could be more widely adopted in our field, but that there are challenges and potential pitfalls to consider.

Decision modelling is an obvious context in which to use data visualisation, because models tend to involve large numbers of simulations. Dynamic visualisations can provide a means by which to better understand what is going on in these simulations, particularly with respect to uncertainty in estimates associated with alternative model structures or parameters. If paired with interactive models and customised dashboards, visualisation can make complex models accessible to non-expert users. Communicating patient outcomes data is also highlighted as a potential application, aiding the characterisation of differences between groups of individuals and alternative outcome measures.

Yet, there are barriers to wider use of visualisation. There is some scepticism about bias in underlying analyses, and end users don’t want to be bamboozled by snazzy graphics. The fact that journal articles are still the primary mode of communicating research findings is a problem, as you can’t have dynamic visualisations in a PDF. There’s also a learning curve for analysts wishing to develop complex visualisations. Hopefully, opportunities will be identified for two-way learning between the health economics world and data scientists more accustomed to data visualisation.

The authors provide several examples (static in the publication, but with links to live tools) to demonstrate the types of visualisations that can be created. Generally speaking, complex visualisations are proposed as complements to our traditional presentations of results, such as cost-effectiveness acceptability curves, rather than as alternatives. The key thing is to maintain credibility by ensuring that data visualisation is used to describe data in a more accurate and meaningful way, and to avoid exaggeration of research findings. It probably won’t be long until we see a set of good practice guidelines being developed for our field.

Credits

Chris Sampson’s journal round-up for 31st December 2018


Perspectives of patients with cancer on the quality-adjusted life year as a measure of value in healthcare. Value in Health Published 29th December 2018

Patients should have the opportunity to understand how decisions are made about which treatments they are and are not allowed to use, given their coverage. This study reports on a survey of cancer patients and survivors, with the aim of identifying patients’ awareness, understanding, and opinions about the QALY as a measure of value.

Participants were recruited from a (presumably US-based) patient advocacy group, and 774 people – mostly well-educated, mostly white, mostly women – responded. The online survey asked about cancer status and included a couple of measures of health literacy. Fewer than 7% of participants had ever heard of the QALY, with awareness more likely among those with greater health literacy. The survey explained the QALY to the participants and then asked whether the concept of the QALY makes sense. Around half said it did, and 24% thought that it was a good way to measure value in health care. The researchers report a variety of ‘significant’ differences in tendencies to understand or support the use of QALYs, but I’m not convinced that they’re meaningful, because the differences aren’t big and the samples are relatively small.

At the end of the survey, respondents were asked to provide opinions on QALYs and value in health care. 165 people provided responses and these were coded and analysed qualitatively. The researchers identified three themes from this one free-text question: i) measuring value, ii) opinions on QALY, and iii) value in health care and decision making. I’m not sure that they’re meaningful themes that help us to understand patients’ views on QALYs. A significant proportion of respondents rejected the idea of using numbers to quantify value in health care. On the other hand, some suggested that the QALY could be a useful decision aid for patients. There was opposition to ‘external decision makers’ having any involvement in health care decision making. Unless you’re paying for all of your care out of pocket, that’s tough luck. But the most obvious finding from the qualitative analysis is that respondents didn’t understand what QALYs were for. That’s partly because health economists in general need to be better at communicating concepts like the QALY. But I think it’s also in large part because the authors failed to provide a clear explanation. They didn’t even use my lovely Wikipedia graphic. Many of the points made by respondents are entirely irrelevant to the appropriateness of QALYs as they’re used (or in the case of the US, aren’t yet used) in practice. For example, several discussed the use of QALYs in clinical decision making. Patients think that they should maintain autonomy, which is fair enough but has nothing to do with how QALYs are used to assess health technologies.

QALYs are built on the idea of trade-offs. They measure the trade-off between life extension and life improvement. They are used to guide trade-offs between different treatments for different people. But the researchers didn’t explain how or why QALYs are used to make trade-offs, so the elicited views aren’t well-informed.
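The arithmetic behind those trade-offs is simple. A minimal sketch of the standard QALY calculation (the numbers here are made up for illustration):

```python
# Toy QALY calculation: quality-adjusted life years are time spent in a
# health state, weighted by that state's utility (1 = full health, 0 = dead).

def qalys(profile):
    """profile: list of (years, utility) pairs; returns total QALYs."""
    return sum(years * utility for years, utility in profile)

# The trade-off between life extension and life improvement:
extension = qalys([(10, 0.6)])    # 10 extra years at utility 0.6
improvement = qalys([(8, 0.75)])  # 8 years at an improved utility of 0.75
print(extension, improvement)     # both come to about 6.0 QALYs - an equal trade
```

Two very different outcomes – more years in poorer health versus fewer years in better health – yield the same number of QALYs, which is precisely the kind of trade-off that respondents would need explained before their views on QALYs could be well-informed.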

Measuring multivariate risk preferences in the health domain. Journal of Health Economics Published 27th December 2018

Health preferences research is now a substantial field in itself. But there’s still a lot of work left to be done on understanding risk preferences with respect to health. Gradually, we’re coming round to the idea that people tend to be risk-averse. But risk preferences aren’t (necessarily) so simple. Recent research has proposed that ‘higher order’ preferences such as prudence and temperance play a role. A person exhibiting univariate prudence for longevity would be better able to cope with risk if they are going to live longer. Univariate temperance is characterised by a preference for prospects that disaggregate risk across different possible outcomes. Risk preferences can also be multivariate – across health and wealth, for example – determining the relationship between univariate risk preferences and other attributes. These include correlation aversion, cross-prudence, and cross-temperance. Many articles from the Arthur Attema camp demand a great deal of background knowledge. This paper isn’t an exception, but it does provide a very clear and intuitive description of the various kinds of uni- and multivariate risk preferences that the researchers are considering.

For this study, an experiment was conducted with 98 people, who were asked to make 69 choices, corresponding to 3 choices about each risk preference trait being tested, for both gains and losses. Participants were told that they had €240,000 in wealth and 40 years of life to play with. The number of times that an individual made choices in line with a particular trait was used as an indicator of their strength of preference.
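That scoring rule can be sketched in a few lines. The data layout and trait labels below are hypothetical, not the study's actual format: with three choices per trait and domain, each respondent gets a consistency count from 0 to 3.

```python
# Hypothetical sketch of the scoring described above: each respondent makes
# 3 choices per risk-preference trait (separately for gains and losses), and
# the count of trait-consistent choices (0-3) indicates strength of preference.

def trait_scores(choices):
    """Map {(trait, domain): [bool, bool, bool]} to {(trait, domain): count}."""
    return {key: sum(picks) for key, picks in choices.items()}

# Illustrative respondent: True = choice consistent with the trait.
respondent = {
    ("risk aversion", "gains"):  [True, True, True],
    ("prudence", "gains"):       [True, False, True],
    ("temperance", "gains"):     [False, True, False],
    ("risk aversion", "losses"): [True, False, False],
}

scores = trait_scores(respondent)
print(scores)
```

Counts like these, aggregated across the 98 participants, are what underpin the proportions reported in the results.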

For gains, risk aversion was common for both wealth and longevity, and prudence was a common trait. There was no clear tendency towards temperance. For losses, risk aversion and prudence tended to neutrality. For multivariate risk preferences, a majority of people were correlation averse for gains and correlation seeking for losses. For gains, 76% of choices were compatible with correlation aversion, suggesting that people prefer to disaggregate fixed wealth and health gains. For losses, the opposite was true in 68% of choices. There was evidence for cross-prudence in wealth gains but not longevity gains, suggesting that people prefer health risk if they have higher wealth. For losses, the researchers observed cross-prudence and cross-temperance neutrality. The authors go on to explore associations between different traits.

A key contribution is in understanding how risk preferences differ in the health domain as compared with the monetary domain (which is what most economists study). Conveniently, there are a lot of similarities between risk preferences in the two domains, suggesting that health economists can learn from the wider economics literature. Risk aversion and prudence seem to apply to longevity as well as monetary gains, with a shift to neutrality in losses. The potential implications of these findings are far-reaching, but this is just a small experimental study. More research needed (and anticipated).

Prospective payment systems and discretionary coding—evidence from English mental health providers. Health Economics [PubMed] Published 27th December 2018

If you’ve conducted an economic evaluation in the context of mental health care in England, you’ll have come across mental health care clusters. Patients undergoing mental health care are allocated to one of 20 clusters, classed as either ‘psychotic’, ‘non-psychotic’, or ‘organic’, which forms the basis of an episodic payment model. In 2013/14, these episodes were associated with an average cost of between £975 and £9,354 per day. Doctors determine the clusters and the clusters determine reimbursement. Perverse incentives abound. Or do they?

This study builds on the fact that patients are allocated by clinical teams with guidance from the algorithm-based Mental Health Clustering Tool (MHCT). Clinical teams might exhibit upcoding, whereby patients are allocated to clusters that attract a higher price than that recommended by the MHCT. Data were analysed for 148,471 patients from the Mental Health Services Data Set for 2011-2015. For each patient, their allocated cluster is known, along with a variety of socioeconomic indicators and the HoNOS and SARN instruments, which go into the MHCT algorithm. Mixed-effects logistic regression was used to look at whether individual patients were or were not allocated to the cluster recommended as ‘best fit’ by the MHCT, controlling for patient and provider characteristics. Further to this, multilevel multinomial logit models were used to categorise decisions that don’t match the MHCT as either under- or overcoding.

Average agreement across clusters between the MHCT and clinicians was 36%. In most cases, patients were allocated to a cluster either one step higher or one step lower in terms of the level of need, and there isn’t an obvious tendency to overcode. The authors are able to identify a few ways in which observable provider and patient characteristics influence the tendency to under- or over-cluster patients. For example, providers with higher activity are less likely to deviate from the MHCT best fit recommendation. However, the dominant finding – identified by using median odds ratios for the probability of a mismatch between two random providers – seems to be that unobserved heterogeneity determines variation in behaviour.

The study provides clues about the ways in which providers could manipulate coding to their advantage and identifies the need for further data collection for a proper assessment. But reimbursement wasn’t linked to clustering during the time period of the study, so it remains to be seen how clinicians actually respond to these potentially perverse incentives.


Thesis Thursday: Caroline Vass

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Caroline Vass who has a PhD from the University of Manchester. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Using discrete choice experiments to value benefits and risks in primary care
Supervisors
Katherine Payne, Stephen Campbell, Daniel Rigby
Repository link
https://www.escholar.manchester.ac.uk/uk-ac-man-scw:295629

Are there particular challenges associated with asking people to trade-off risks in a discrete choice experiment?

The challenge of communicating risk in general, not just in DCEs, was one of the things which drew me to the PhD. I’d heard a TED talk discussing a study which asked about people’s understanding of weather forecasts. Although most people think they understand a simple statement like “there’s a 30% chance of rain tomorrow”, few correctly interpreted it as meaning that it will rain on 30% of days like tomorrow. Most interpret it to mean that it will rain 30% of the time, or over 30% of the area.

My first ever publication was reviewing the risk communication literature, which confirmed our suspicions; even highly educated samples don’t always interpret information as we expect. Therefore, testing if the communication of risk mattered when making trade-offs in a DCE seemed a pretty important topic and formed the overarching research question of my PhD.

Most of your study used data relating to breast cancer screening. What made this a good context in which to explore your research questions?

In the UK, all women are invited to participate in breast screening (either from a GP referral or at 47-50 years old). This makes every woman a potential consumer and a potential ‘patient’. I conducted a lot of qualitative research to ensure the survey text was easily interpretable, and having a disease which many people had heard of made this easier and allowed us to focus on the risk communication formats. My supervisor Prof. Katherine Payne had also been working on a large evaluation of stratified screening, which made contacting experts, patients and charities easier.

There are also national screening participation figures so we were able to test if the DCE had any real-world predictive value. Luckily, our estimates weren’t too far off the published uptake rates for the UK!

How did you come to use eye-tracking as a research method, and were there any difficulties in employing a method not widely used in our field?

I have to credit my supervisor Prof. Dan Rigby with planting the seed and introducing me to the method. I did a bit of reading into what psychologists thought you could measure using eye-movements and thought it was worth further investigation. I literally found people publishing with the technology at our institution and knocked on doors until someone would let me use it! If the University of Manchester didn’t already have the equipment, it would have been much more challenging to collect these data.

I then discovered the joys of lab-based work which I think many health economists, fortunately, don’t encounter in their PhDs. The shared bench, people messing with your experiment set-up, restricted lab time which needs to be booked weeks in advance etc. I’m sure it will all be worth it… when the paper is finally published.

What are the key messages from your research in terms of how we ought to be designing DCEs in this context?

I had a bit of a null result on the risk communication formats: I found that they didn’t affect preferences. Looking back, I think that might have been due to the types of numbers I was presenting (5%, 10%, 20% are easier to understand), and maybe people have a lot of knowledge about the risks of breast screening. It certainly warrants further research to see if my finding holds in other settings. There is a lot of support for visual risk communication formats like icon arrays in other literatures, and their addition didn’t seem to do any harm.

Some of the most interesting results came from the think-aloud interviews I conducted with female members of the public. Although I originally wanted to focus on their interpretation of the risk attributes, people started verbalising all sorts of interesting behaviour and strategies. Some of it aligned with economic concepts I hadn’t thought of such as feelings of regret associated with opting-out and discounting both the costs and health benefits of later screens in the programme. But there were also some glaring violations, like ignoring certain attributes, associating cost with quality, using other people’s budget constraints to make choices, and trying to game the survey with protest responses. So perhaps people designing DCEs for benefit-risk trade-offs specifically or in healthcare more generally should be aware that respondents can and do adopt simplifying heuristics. Is this evidence of the benefits of qualitative research in this context? I make that argument here.

Your thesis describes a wealth of research methods and findings, but is there anything that you wish you could have done that you weren’t able to do?

Achieved a larger sample size for my eye-tracking study!