Chris Sampson’s journal round-up for 7th January 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Overview, update, and lessons learned from the international EQ-5D-5L valuation work: version 2 of the EQ-5D-5L valuation protocol. Value in Health Published 2nd January 2019

Insofar as there is any drama in health economics, the fallout from the EQ-5D-5L value set for England was pretty dramatic. If you ask me, the criticisms are entirely ill-conceived. Regardless of that, one of the main sticking points was that the version of the EQ-5D-5L valuation protocol that was used was flawed. England was one of the first countries to get a valuation, so it used version 1.0 of the EuroQol Valuation Technique (EQ-VT). We’re now up to version 2.1. This article outlines the issues that arose in using the first version and what EuroQol did to try to solve them, and describes the current challenges in valuation.

EQ-VT 1.0 includes the composite time trade-off (cTTO) task to elicit values for health states better and worse than dead. Early valuation studies showed some unusual patterns. Research into the causes showed that, in many cases, very little time was spent on the task. Some interviewers had a tendency to skip parts of the explanation for completing the worse-than-dead bit of the cTTO, resulting in no values worse than dead. EQ-VT 1.1 added three practice valuations, along with greater monitoring of interviewer performance and a quality control procedure. This dramatically reduced interviewer effects and the likelihood of inconsistent responses. Yet further improvements could be envisioned, and so EQ-VT 2.0 added a feedback module, which shows respondents the ranking of states implied by their valuations, with which they can then agree or disagree. Version 2.0 was tested against 1.1 and showed further reductions in inconsistencies thanks to the feedback module. Other modifications were not supported by the evaluation. EQ-VT 2.1 added a dynamic question to further improve the warm-up tasks.
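To make the task concrete, here is the arithmetic of the cTTO as I understand the EQ-VT protocol (10 years in the target state, with a 10-year lead time in full health added for the worse-than-dead branch), where $x$ is the number of years of full health at which the respondent is indifferent:

$$ u = \frac{x}{10} \;\; \text{(better than dead)}, \qquad u = \frac{x-10}{10} \;\; \text{(worse than dead, } 0 \le x \le 10 \text{, so } -1 \le u \le 0\text{)}. $$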

There are ongoing challenges with the cTTO, mostly to do with how to model the data. The authors provide a table setting out causes, consequences, and possible solutions for various issues that might arise in the modelling of cTTO data. And then there’s the discrete choice experiment (DCE), which is included in addition to the cTTO but which different valuation studies have used (or ignored) in different ways when modelling values. Research is ongoing that will probably lead to developments beyond EQ-VT 2.1. This might involve abandoning the cTTO altogether. Or, at least, there might be a reduction in cTTO tasks and a greater reliance on the DCE. But more research is needed before duration can be adequately incorporated into DCEs.

Helpfully, the paper includes a table with a list of countries and specification of the EQ-VT versions used. This demonstrates the vast amount of knowledge that has been accrued about EQ-5D-5L valuation and the lack of wisdom in continuing to support the (relatively under-interrogated) EQ-5D-3L MVH valuation.

Do time trade-off values fully capture attitudes that are relevant to health-related choices? The European Journal of Health Economics [PubMed] Published 31st December 2018

Different people have different preferences, so values for health states elicited using TTO should vary from person to person. This study is concerned with how personal circumstances and beliefs influence TTO values and whether TTO entirely captures the impact of these on preferences for health states.

The authors analysed data from an online survey with a UK-representative sample of 1,339 people. Participants were asked about their attitudes towards quality and quantity of life before completing some TTO tasks based on the EQ-5D-5L. Based on their responses, they were then shown two ‘lives’ that – given their TTO responses – they should have considered to be of equivalent value. The researchers fitted generalised estimating equations to model the TTO values and logit models for the subsequent choices between states. Age, marital status, education, and attitudes towards trading quality and quantity of life all influenced TTO values, in addition to the state being valued. In the models of the choices between the two lives, attitudes influenced decisions through the difference in the number of life years between the two lives. That is, an interaction term between the attitude and years variables showed that people who prefer quantity of life over quality of life were more likely to choose the state with a greater number of years.
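For readers who like to see how this kind of two-stage analysis is typically set up, here’s a minimal sketch in Python using statsmodels, on simulated data with hypothetical variable names (not the authors’ actual specification):

```python
# Minimal sketch of the two modelling steps described above, on simulated data.
# Variable names are hypothetical and not the authors' actual specification.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_resp, n_states = 200, 5

# --- Simulated TTO data: repeated state valuations per respondent ---
tto = pd.DataFrame({
    "respondent_id": np.repeat(np.arange(n_resp), n_states),
    "state_severity": np.tile(np.arange(1, n_states + 1), n_resp),
    "age": np.repeat(rng.integers(18, 80, n_resp), n_states),
    "qq_attitude": np.repeat(rng.normal(size=n_resp), n_states),  # quantity-vs-quality attitude
})
tto["tto_value"] = (1 - 0.15 * tto["state_severity"]
                    + 0.05 * tto["qq_attitude"]
                    + rng.normal(scale=0.1, size=len(tto)))

# GEE with an exchangeable working correlation, acknowledging that each
# respondent contributes several TTO values.
gee = smf.gee("tto_value ~ state_severity + age + qq_attitude",
              groups="respondent_id", data=tto,
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())

# --- Simulated follow-up choices between two supposedly equivalent 'lives' ---
choices = pd.DataFrame({
    "years_difference": rng.normal(size=n_resp),   # extra life years in the longer 'life'
    "qq_attitude": rng.normal(size=n_resp),
})
p = 1 / (1 + np.exp(-(0.5 * choices["years_difference"] * choices["qq_attitude"])))
choices["chose_longer_life"] = rng.binomial(1, p)

# Logit with an attitude-by-years interaction, mirroring the second stage described above
logit = smf.logit("chose_longer_life ~ years_difference * qq_attitude",
                  data=choices).fit()
print(logit.summary())
```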

The authors’ interpretation is that TTO reflects people’s attitudes towards quality and quantity of life, but only partially. My interpretation would be that the TTO exercise would have benefitted from the kind of refinement described above. The choice between the two lives is similar to the feedback module of the EQ-VT 2.0. People often do not understand the implications of their TTO valuations. The study could also be interpreted as supportive of ‘head-to-head’ choice methods (such as DCE) rather than choices involving full health and death. But the design of the TTO task used in this study was quite dissimilar to that used in other studies, which makes it difficult to draw general conclusions about TTO as a valuation method.

Exploring the item sets of the Recovering Quality of Life (ReQoL) measures using factor analysis. Quality of Life Research [PubMed] Published 21st December 2018

The ReQoL is a patient-reported outcome measure for use with people experiencing mental health difficulties. The ReQoL-10 and ReQoL-20 both ask questions relating to seven domains: six mental, one physical. There’s been a steady stream of ReQoL research published in recent years and the measures have been shown to have acceptable psychometric properties. This study concerns the factorial structure of the ReQoL item sets, testing internal construct validity and informing scoring procedures. There’s also a more general methodological contribution relating to the use of positive and negative factors in mental health outcome questionnaires.

At the outset of this study, the ReQoL was based on 61 items. These were reduced to 40 on the basis of qualitative and quantitative analyses reported in other papers. This paper reports on two studies – the first group (n=2,262) completed the 61 items and the second group (n=4,266) completed the 40 items. Confirmatory and exploratory factor analyses were conducted. Six-factor (according to the ReQoL domains), two-factor (negative/positive), and bi-factor (global/negative/positive) models were tested. In the second study, participants were presented either with a version that jumbled up the positively and negatively worded questions or with a version that showed a block of negatives followed by a block of positives. The idea here is that if a two-factor structure is simply a product of the presentation of questions, it should be more pronounced in the jumbled version.
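For anyone unfamiliar with the jargon, a generic bi-factor specification (not necessarily the authors’ exact parameterisation) has every item loading on a general factor plus at most one of two orthogonal group factors, here defined by positive and negative wording:

$$ x_{ij} = \lambda^{G}_{j}\, g_i + \lambda^{P}_{j}\, p_i + \lambda^{N}_{j}\, n_i + \varepsilon_{ij}, \qquad \operatorname{cov}(g_i, p_i) = \operatorname{cov}(g_i, n_i) = \operatorname{cov}(p_i, n_i) = 0, $$

with $\lambda^{N}_{j} = 0$ for positively worded items and $\lambda^{P}_{j} = 0$ for negatively worded ones.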

The results were much the same in the two study samples. The bi-factor model demonstrated acceptable fit, with much higher loadings on the general quality of life factor, which loaded on all items. The results indicated sufficient unidimensionality to go ahead with reducing the number of items, and the two ordering formats didn’t differ, suggesting that the negative and positive loadings weren’t just an artefact of the presentation. The findings show that the six dimensions of the ReQoL don’t stand as separate factors. The justification for maintaining items from each of the six dimensions, therefore, seems to be a qualitative one.

Some outcome measurement developers have argued that items should all be phrased in the same direction – as either positive or negative – to obtain high-quality data. But there’s good reason to think that features of mental health can’t reliably be translated from negative to positive, and this study supports the inclusion (and intermingling) of both within a measure.

Credits

Thesis Thursday: Ernest Law

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Ernest Law who has a PhD from the University of Illinois at Chicago. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Examining sources of variation in developing a societal health state value set
Supervisors
Simon Pickard, Todd Lee, Surrey Walton, Alan Schwartz, Feng Xie
Repository link
http://hdl.handle.net/10027/23037

How did you come to study EQ-5D valuation methods, and why are they important?

I came across health preferences research after beginning my studies at UIC with my thesis supervisor, Prof. Simon Pickard. Before this, I was a clinical pharmacist who spent a lot of time helping patients and their families navigate the trade-offs between the benefits and harms of pharmacotherapy. So, when I was introduced to a set of methods that seeks to quantify such trade-offs, I was quickly captivated and set on a path to understanding more. I continued on to expand my interests in valuation methods pertinent to health-system decision-making. Naturally, I collided with societal health state value sets – important tools developed from generic preference-based measures, such as the EQ-5D.

During my studies at UIC, our group received a grant (PI: Simon Pickard) from the EuroQol Research Foundation to develop the United States EQ-5D-5L value set. While developing the study protocol, we built in additional data elements (e.g., EQ-5D-3L valuation tasks, advance directive status) that would help answer important questions in explaining variation in value sets. By understanding these sources of variation, we could inform researchers and policymakers alike on the development and application of EQ-5D value sets.

What does your thesis add to the debate about EQ-5D-3L and -5L value sets?

As a self-report measure, the 5L’s advantages over the 3L appear reasonably clear from the literature: reduced ceiling effects, more unique self-reported health states, and improved discriminatory power. However, less was known about how differences in the descriptive systems affect direct valuations.

Previous comparisons focused on differences in index scores and QALYs generated from existing value sets. But these value sets differed in substantive ways: preferences from different respondents, in different time periods, from different geographic locations, using different study protocols. This makes it difficult to isolate the differences due to the descriptive system.

In our study, we asked respondents in the US EQ-5D valuation study to complete time trade-off tasks for both 3L and 5L health states. By doing so, we were able to hold many of the aforementioned factors constant, with only the valued health state differing. From a research perspective, we provide strong evidence on how even small changes in the descriptive system can have a profound impact on valuations. From a policy perspective, and for an HTA agency deciding specifically between the 3L and 5L, we’ve provided critical insight into the kind of value set one might expect to obtain using either descriptive system.

Why are health state valuations by people with advance directives particularly interesting?

The interminable debate over “whose preferences” should be captured when generating QALYs is well-known among health outcomes researchers and policy-makers. Two camps typically emerge: those who argue for capturing preferences from the general population and those who argue for patients to be the primary source. The supporting arguments for both sides have been well documented. One additional approach has recently emerged, which may reconcile some of the differences by using informed preferences. Guidance from influential groups in the US, such as the First and Second Panels on Cost-Effectiveness in Health and Medicine, has also maintained that “the best articulation of a society’s preferences… would be gathered from a representative sample of fully informed members”.

We posited that individuals with advance directives may represent a group that has reflected substantially on their current health state, as well as on the experience and consequences of a range of (future) health states. Individuals who complete an advance directive undergo a process that includes discussion and documentation of their preferences concerning goals of care, in the event that they are unable to express those preferences themselves. So we set out to examine the relationship between advance directives and stated preferences, and whether the completion of an advance directive was associated with differences in health state preferences (spoiler: it was).

Is there evidence that value sets should be updated over time?

We sought to address this gap in the literature by using respondent-level data from the US EQ-5D-3L study, which collected TTO values in 2002, and from our EQ-5D-5L study, which also collected 3L TTO values in 2017. However, there were inherent challenges in using data collected so many years apart: demographics shift, new methods and modes of administration are implemented, and so on.

So, we attempted to account for what we could by controlling for respondent characteristics and restricting health state values to those obtained using the same preference elicitation technique (i.e., conventional TTO). We found that values in 2017 were modestly higher, implying that the average adult in the US in 2017 was less willing to trade time for quality of life than in 2002 (by around 6 months over a 10-year time horizon). Our research suggests that time-specific differences in societal preferences exist and that the time period in which values were elicited may be an important factor to consider when selecting or applying a value set.

Based on your research, do you have any recommendations for future valuation studies?

I would encourage researchers conducting future valuation studies, particularly those developing societal value sets, to consider some of the following:

1) Consider building small but powerful methodological sub-aims into your study. Of course, you must balance resource constraints, data quality, and respondent burden against such add-ons, but a balance can be struck!

2) Pay attention to important developments in the population being sampled; for example, we incorporated advance directives because they are becoming an important topic in the US healthcare debate, in addition to contributing to the discussion surrounding informed preferences.

3) Take a close look at the most commonly utilized health state value sets representing your health system or target population. Is it possible that existing value sets are “outdated”? If so, a proposal to update that value set might fill a very important need. While you’re at it, consider an analysis comparing current and previous values. The evidence is scarce (and this is a difficult thing to study!), so it’s important to continue building evidence that can inform the broader scientific and HTA community about the role that time plays in changes to societal preferences.

Chris Sampson’s journal round-up for 19th November 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Valuation of health states considered to be worse than death—an analysis of composite time trade-off data from 5 EQ-5D-5L valuation studies. Value in Health Published 12th November 2018

I have a problem with the idea of health states being ‘worse than dead’, and I’ve banged on about it on this blog. Happily, this new article provides an opportunity for me to continue my campaign. Health state valuation methods estimate how strongly a person prefers healthier states over less healthy ones. Positive values are easy to understand; 1.0 is twice as good as 0.5. But what about the negative values? Is -1.0 twice as bad as -0.5? How much worse than being dead is that? The purpose of this study is to evaluate whether or not negative EQ-5D-5L values meaningfully discriminate between different health states.

The study uses data from EQ-5D-5L valuation studies conducted in Singapore, the Netherlands, China, Thailand, and Canada. Altogether, more than 5,000 people provided valuations of 10 states each. As a simple measure of severity, the authors summed the number of steps from full health across all domains, giving a value from 0 (11111) to 20 (55555). We’d expect this measure of severity to be strongly (negatively) associated with the mean utility values derived from the composite time trade-off (cTTO) exercise.
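The severity score is trivial to compute from the five-digit state label; a quick sketch (my own, not the authors’ code):

```python
# Level-sum severity score as described above: the number of steps from full
# health across the five EQ-5D-5L dimensions, from 0 (state 11111) to 20 (55555).
def severity(state: str) -> int:
    assert len(state) == 5 and all(c in "12345" for c in state)
    return sum(int(level) - 1 for level in state)

assert severity("11111") == 0
assert severity("55555") == 20
print(severity("21345"))  # prints 10
```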

Taking Singapore as an example, the mean of positive values (states better than dead) decreased from 0.89 to 0.21 with increasing severity, which is reassuring. The mean of negative values, on the other hand, ranged from -0.98 to -0.89. Negative values were clustered between -0.5 and -1.0. Results were similar across the other countries. In all except Thailand, observed negative values were indistinguishable from random noise. There was no decreasing trend in mean utility values as severity increased for states worse than dead. A linear mixed model with participant-specific intercepts and an ANOVA model confirmed the findings.
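As an illustration of the kind of random-intercept model referred to above, here’s a minimal sketch on simulated data (hypothetical variable names, my own toy numbers): if negative values really are unrelated to severity, the severity coefficient should come out close to zero.

```python
# Simulated worse-than-dead values clustered between -0.5 and -1.0, by construction
# unrelated to severity, fitted with a participant-specific random intercept.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "respondent_id": rng.integers(0, 100, n),
    "severity_score": rng.integers(5, 21, n),
})
df["tto_value"] = np.clip(rng.normal(-0.8, 0.15, n), -1.0, -0.05)

mixed = smf.mixedlm("tto_value ~ severity_score", data=df,
                    groups="respondent_id").fit()
print(mixed.summary())  # expect a severity_score coefficient near zero
```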

What this means is that we can’t say much about states worse than dead except that they are worse than dead. How much worse doesn’t relate to severity, which is worrying if we’re using these values in trade-offs against states better than dead. Mostly, the authors frame this lack of discriminative ability as a practical problem, rather than anything more fundamental. The discussion section provides some interesting speculation, but my favourite part of the paper is an analogy, which I’ll be quoting in future: “it might be worse to be lost at sea in deep waters than in a pond, but not in any way that truly matters”. Dead is dead is dead.

Determining value in health technology assessment: stay the course or tack away? PharmacoEconomics [PubMed] Published 9th November 2018

The cost-per-QALY approach to value in health care is no stranger to assault. The majority of criticisms are ill-founded special pleading, but, sometimes, reasonable tweaks and alternatives have been proposed. The aim of this paper was to bring together a supergroup of health economists to review and discuss these reasonable alternatives. Specifically, the questions they sought to address were: i) what should health technology assessment achieve, and ii) what should be the approach to value-based pricing?

The paper provides an unstructured overview of a selection of possible adjustments or alternatives to the cost-per-QALY method. We’re very briefly introduced to QALY weighting, efficiency frontiers, and multi-criteria decision analysis. The authors don’t tell us why we ought (or ought not) to adopt these alternatives. I was hoping that the paper would provide tentative answers to the normative questions posed, but it doesn’t do that. It doesn’t even outline the thought processes required to answer them.

The purpose of this paper seems to be to argue that alternative approaches aren’t sufficiently developed to replace the cost-per-QALY approach. But it’s hardly a strong defence. I’m a big fan of the cost-per-QALY as a necessary (if not sufficient) part of decision making in health care, and I agree with the authors that the alternatives are lacking in support. But the lack of conviction in this paper scares me. It’s tempting to make a comparison between the EU and the QALY.

How can we evaluate the cost-effectiveness of health system strengthening? A typology and illustrations. Social Science & Medicine [PubMed] Published 3rd November 2018

Health care is more than the sum of its parts. This is particularly evident in low- and middle-income countries that might lack strong health systems and therefore can’t benefit from a new intervention in the way that a strong system could. Thus, there is value in health system strengthening. But, as the authors of this paper point out, this value can be difficult to identify. The purpose of this study is to provide new methods for modelling the impact of health system strengthening, in order to support investment decisions in this context.

The authors introduce standard cost-effectiveness analysis and economies of scope as relevant pieces of the puzzle. In essence, this paper is trying to marry the two. An intervention is more likely to be cost-effective if it helps to provide economies of scope, either by making use of an underused platform or by providing a new platform that would improve the cost-effectiveness of other interventions. The authors provide a typology with three types of health system strengthening: i) investing in platform efficiency, ii) investing in platform capacity, and iii) investing in new platforms. Examples are provided for each. Simple mathematical approaches to evaluating these are described, using scaling factors and disaggregated cost and outcome constraints. Numerical demonstrations show how these approaches can reveal differences in cost-effectiveness that arise through changes in technical efficiency or in the opportunity cost linked to health system strengthening.
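As a rough illustration of the scaling-factor idea (my own toy numbers, not the authors’ worked examples), an intervention’s health gain can be attenuated on a weak delivery platform, and investment in the platform can pay off both directly and through economies of scope:

```python
# Toy numbers illustrating platform scaling; not the paper's examples.
def icer(delta_cost: float, delta_effect: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra unit of health."""
    return delta_cost / delta_effect

# A new intervention: $500,000 extra cost, 100 DALYs averted if the delivery
# platform works perfectly.
intervention_cost, full_effect = 500_000.0, 100.0

# On a weak platform, only 60% of the potential health gain is realised.
weak_platform_scaling = 0.6
print(icer(intervention_cost, full_effect * weak_platform_scaling))  # ~8,333 per DALY averted

# Spending an extra $200,000 on platform strengthening restores the full effect
# and, through economies of scope, also improves other services delivered on the
# same platform (credited here with a further 20 DALYs averted).
strengthening_cost, spillover_effect = 200_000.0, 20.0
print(icer(intervention_cost + strengthening_cost,
           full_effect + spillover_effect))  # ~5,833 per DALY averted
```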

This paper is written with international development investment decisions in mind, and in particular the challenge of investments that can mostly be characterised as health system strengthening. But it’s easy to see how many – perhaps all – health services are interdependent. If anything, the broader impact of new interventions on health systems should be considered as standard. The methods described in this paper provide a useful framework to tackle these issues, with food for thought for anybody engaged in cost-effectiveness analysis.

Credits