Chris Sampson’s journal round-up for 14th May 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A practical guide to conducting a systematic review and meta-analysis of health state utility values. PharmacoEconomics [PubMed] Published 10th May 2018

I love articles that outline the practical application of a particular method to solve a particular problem, especially when the article shares analysis code that can be copied and adapted. This paper does just that for the case of synthesising health state utility values. Decision modellers use utility values as parameters. Most of the time these are drawn from a single source, which almost certainly introduces some kind of bias to the resulting cost-effectiveness estimates. So it’s better to combine all of the relevant available information. But that’s easier said than done, as numerous researchers (myself included) have discovered. This paper outlines the various approaches and some of the merits and limitations of each. There are some standard stages, for which advice is provided, relating to the identification, selection, and extraction of data. Those are by no means simple tasks, but the really tricky bit comes when you try to pool the utility values that you’ve found. The authors outline three strategies: i) fixed-effect meta-analysis, ii) random-effects meta-analysis, and iii) mixed-effects meta-regression. Each is illustrated with a hypothetical example, with Stata and R commands provided. Broadly speaking, the authors favour mixed-effects meta-regression because of its ability to identify the extent of similarity between sources and to help explain heterogeneity. The authors insist that comparability between sources is a precondition for pooling. But the thing about health state utility values is that they are – almost by definition – never comparable. Different population? Not comparable. Different treatment pathway? No chance. Different utility measure? Ha! They may or may not appear to be similar statistically, but that’s totally irrelevant. What matters is whether the decision-maker ‘believes’ the values. If they believe them, then they should be included and pooled. If decision-makers have reason to believe one source more or less than another, then this should be accounted for in the weighting. If they don’t believe them at all, then they should be excluded. Comparability is framed as a statistical question, when in reality it is a conceptual one. For now, researchers will have to tackle that themselves. This paper doesn’t solve all of the problems around meta-analysis of health state utility values, but it does a good job of outlining methodological developments to date and provides recommendations in accordance with them.
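For a flavour of what the pooling step involves, here’s a minimal sketch of fixed-effect and random-effects pooling (the latter using the DerSimonian-Laird estimator) in Python. The utility values and standard errors are invented for illustration – the paper itself provides ready-made Stata and R commands.

```python
import numpy as np

# Hypothetical utility values for one health state, drawn from
# different studies, with their standard errors (illustrative only).
u  = np.array([0.71, 0.68, 0.75, 0.62])   # mean utilities
se = np.array([0.03, 0.05, 0.04, 0.06])   # standard errors
v  = se**2                                 # within-study variances

# Fixed-effect pooling: inverse-variance weights.
w_fe = 1 / v
u_fe = np.sum(w_fe * u) / np.sum(w_fe)

# Random-effects pooling (DerSimonian-Laird estimate of tau^2).
q = np.sum(w_fe * (u - u_fe)**2)                    # Cochran's Q
c = np.sum(w_fe) - np.sum(w_fe**2) / np.sum(w_fe)
tau2 = max(0.0, (q - (len(u) - 1)) / c)             # between-study variance
w_re = 1 / (v + tau2)
u_re = np.sum(w_re * u) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))

print(f"Fixed effect:   {u_fe:.3f}")
print(f"Random effects: {u_re:.3f} (SE {se_re:.3f}, tau^2 {tau2:.4f})")
```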

Unemployment, unemployment duration, and health: selection or causation? The European Journal of Health Economics [PubMed] Published 3rd May 2018

One of the major socioeconomic correlates of poor health is unemployment. It appears not to be very good for you. But there’s an obvious challenge here – does unemployment cause ill-health, or are unhealthy people just more likely to be unemployed? Both, probably, but that answer doesn’t make for clear policy solutions. This paper – following a large body of literature – attempts to explain what’s going on. Its novelty comes in the way the author considers timing and distinguishes between mental and physical health. The identification strategy rests on a simple idea: selection into unemployment by the unhealthy ought to show up as a time-constant health gap, whereas a causal effect of unemployment on health ought to grow the longer unemployment lasts. Using seven waves of data from the German Socio-Economic Panel, a sample of 17,000 people (chopped from 48,000) is analysed, of which around 3,000 experienced unemployment. The basis for measuring mental and physical health is summary scores from the SF-12. A fixed-effects model is constructed based on the dependence of health on the duration and timing of unemployment, rather than just the occurrence of unemployment per se. The author finds a cumulative effect of unemployment on physical ill-health over time, implying causation: the longer people spent unemployed, the more their health deteriorated. This was particularly pronounced for people unemployed in later life, with essentially no impact on physical health for younger people. It was accompanied by a strong long-term selection effect of less physically healthy people being more likely to become unemployed. For mental health, in contrast, the findings suggest a short-term selection effect of people who experience a decline in mental health being more likely to become unemployed. But then, following unemployment, mental health declines further, so the balance of selection and causation effects is less clear. And in contrast to physical health, people’s mental health is more badly affected by unemployment at younger ages. By no means does this study settle the balance between selection and causation. It can’t account for people’s anticipation of unemployment or future ill-health. But it does provide inspiration for better-targeted policies to limit the impact of unemployment on health.
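To illustrate the identification strategy – though not the author’s exact specification – here’s a toy fixed-effects (within) estimator in Python: demeaning by person sweeps out time-constant selection, so the coefficient on unemployment duration is identified only from within-person changes. All data and variable names are invented.

```python
import numpy as np
import pandas as pd

# Hypothetical panel: person id, SF-12 physical score, and cumulative
# years of unemployment to date (all values illustrative).
df = pd.DataFrame({
    "pid":       [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "health":    [52.0, 50.5, 48.0, 47.0, 46.5, 46.0, 55.0, 54.0, 53.5],
    "unemp_dur": [0.0, 1.0, 2.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
})

# Within (fixed-effects) transformation: demean by person, so that any
# time-constant selection effect drops out and only within-person
# variation in unemployment duration identifies the coefficient.
cols = ["health", "unemp_dur"]
demeaned = df[cols] - df.groupby("pid")[cols].transform("mean")

X = demeaned[["unemp_dur"]].to_numpy()
y = demeaned["health"].to_numpy()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"Effect of one extra year unemployed on health: {beta[0]:.2f}")
```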

Different domains – different time preferences? Social Science & Medicine [PubMed] Published 30th April 2018

Economists are often criticised by non-economists. Usually, the criticisms are unfounded, but one of the ways in which I think some (micro)economists can have tunnel vision is in thinking that preferences elicited with respect to money exhibit the same characteristics as preferences about things other than money. My instinct tells me that – for most people – that isn’t true. This study looks at one of those characteristics of preferences – namely, time preferences. Unfortunately for me, it suggests that my instincts aren’t correct. The authors outline a quasi-hyperbolic discounting model, incorporating both short-term present bias and long-term impatience, to explain gym members’ time preferences in the health and monetary domains. A survey was conducted with members of a chain of fitness centres in Denmark, of which 1,687 responded. Half were allocated to money-related questions and half to health-related questions. Respondents were asked to match an amount of future gains with an amount of immediate gains to provide a point of indifference. Health problems were formulated as back pain, corresponding to EQ-5D-3L level 2 for usual activities and level 2 for pain or discomfort. The findings were that estimates for discount rates and present bias in the two domains differ, but not by very much. On average, discount rates are slightly higher in the health domain – a finding driven by female respondents and people with more education. Present bias is the same – on average – in each domain, though retired people are more present-biased for health. The authors conclude by focussing on the similarity between health and monetary time preferences, suggesting that time preferences in the monetary domain can safely be applied in the health domain. But I’d still be wary of this. For starters, one would expect a group of gym members – who have all decided to join the gym – to be relatively homogeneous in their time preferences. Findings are similar on average, and there are only small differences in subgroups, but when it comes to health care (even public health) we’re never dealing with average people. Targeted interventions are increasingly needed, which means that differential discount rates in the health domain – of the kind identified in this study – should be brought into focus.
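For the uninitiated, here’s a minimal sketch of the beta-delta (quasi-hyperbolic) model in Python: immediate payoffs are undiscounted, while a payoff t periods ahead is worth beta * delta^t times its face value. The parameter values are invented, not the paper’s estimates.

```python
# Quasi-hyperbolic (beta-delta) discounting: beta captures short-term
# present bias, delta captures long-term impatience.

def present_value(x: float, t: int, beta: float, delta: float) -> float:
    """Quasi-hyperbolic present value of x received t periods from now."""
    return x if t == 0 else beta * delta**t * x

beta, delta = 0.9, 0.95   # illustrative present bias and discount factor

# Indifference point: the immediate gain that matches a future gain of
# 100 in 12 periods' time, mirroring the matching task in the survey.
future_gain = 100.0
immediate_equivalent = present_value(future_gain, 12, beta, delta)
print(f"Indifferent between {immediate_equivalent:.1f} now "
      f"and {future_gain:.0f} in 12 periods")
```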

Credits

Chris Sampson’s journal round-up for 2nd April 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Quality-adjusted life-years without constant proportionality. Value in Health Published 27th March 2018

The assumption of constant proportional trade-offs (CPTO) is at the heart of everything we do with QALYs. It holds that the value attached to a given health state does not depend on how long the state lasts, so that QALYs scale linearly with duration. This assumption has been repeatedly demonstrated to fail. This study looks for a non-constant alternative, which hasn’t been done before. The authors consider a quality-adjusted lifespan and four functional forms for the relationship between time and the value of life: constant, discount, logarithmic, and power. These relationships were tested in an online survey with more than 5,000 people, which involved the completion of 30-40 time trade-off pairs based on the EQ-5D-5L. Respondents traded off health states of varying severities and durations. Initially, a saturated model (making no assumptions about functional form) was estimated. This demonstrated that the marginal value of lifespan is decreasing. The authors provide a set of values attached to different health states at different durations. Then, the econometric model is adjusted to suit a power model, with the power estimated for duration expressed in days, weeks, months, or years. The power value for time is 0.415, but different expressions of time could introduce bias; time expressed in days (power=0.403) loses value faster than time expressed in years (power=0.654). There are also some anomalies in the data that don’t fit the power function. For example, a single day of moderate problems can be worse than death, whereas 7 days or more is not. Using ‘power QALYs’ could be the future. But the big remaining question is whether decision-makers ought to respond to people’s time preferences in this way.
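To see what the power model implies, here’s a toy calculation in Python using the power for time in years reported above (0.654); the utility value and durations are invented, and the paper’s exact model may differ.

```python
# 'Power QALYs': the value of t units of time in a health state is
# u * t**p rather than u * t, so lifespan has diminishing marginal value.

def power_qalys(utility: float, duration: float, p: float) -> float:
    """QALYs with a power transformation of duration (p = 1 is standard)."""
    return utility * duration**p

u = 0.8   # illustrative health state utility
print(f"Linear QALYs over 10 years:        {power_qalys(u, 10, 1.0):.2f}")
print(f"Power QALYs (p = 0.654, in years): {power_qalys(u, 10, 0.654):.2f}")

# With p < 1, the second decade adds less value than the first:
print(f"Years 0-10:  {power_qalys(u, 10, 0.654):.2f}")
print(f"Years 10-20: {power_qalys(u, 20, 0.654) - power_qalys(u, 10, 0.654):.2f}")
```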

A systematic review of studies comparing the measurement properties of the three-level and five-level versions of the EQ-5D. PharmacoEconomics [PubMed] Published 23rd March 2018

The debate about the EQ-5D-5L continues (on Twitter, at least). Conveniently, this paper addresses a concern held by some people – that we don’t understand the implications of using the 5L descriptive system. The authors systematically review papers comparing the measurement properties of the 3L and 5L, written in English or German. The review ended up including 24 studies. The measurement properties considered by the authors were: i) distributional properties, ii) informativity, iii) inconsistencies, iv) responsiveness, and v) test-retest reliability. The last property involves consideration of index values. Each study was also quality-assessed, with all being considered of good to excellent quality. The studies covered numerous countries and different respondent groups, with sample sizes from the tens to the thousands. For most measurement properties, the findings for the 3L and 5L were very similar. Floor effects were generally below 5% and tended to be slightly reduced for the 5L. In some cases, the 5L was associated with major reductions in the proportion of people responding as 11111 – a well-recognised ceiling effect associated with the 3L. Just over half of the studies reported on informativity using Shannon’s H’ and Shannon’s J’, with the 5L providing consistently better results. Only three studies looked at responsiveness, with two slightly favouring the 5L and one favouring the 3L. The latter could be explained by the use of the 3L-5L crosswalk, which is inherently less responsive because crosswalked responses are mapped back onto the coarser 3L value set. The overarching message is consistency. Business as usual. This is important because it means that the 3L and 5L descriptive systems provide comparable results (which is the basis for the argument I recently made that they are measuring the same thing). In some respects, this could be disappointing for 5L proponents because it suggests that the 5L descriptive system is not a lot better than the 3L. But it is a little better. There are still uncertainties about the differences between 3L and 5L assessments of health-related quality of life. More comparative studies, of the kind included in this review, should be conducted so that we can better understand the differences in results that are likely to arise now that we have moved (relatively assuredly) towards using the 5L instead of the 3L.
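For anyone unfamiliar with the informativity indices, here’s a minimal sketch of Shannon’s H’ and Shannon’s J’ (evenness) in Python. The response distributions are invented for illustration.

```python
import numpy as np

def shannon_h(p: np.ndarray) -> float:
    """Shannon's H' = -sum(p * log2(p)), ignoring empty categories."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Illustrative response shares over the levels of one dimension.
p_3l = np.array([0.60, 0.30, 0.10])                  # EQ-5D-3L: 3 levels
p_5l = np.array([0.45, 0.25, 0.15, 0.10, 0.05])      # EQ-5D-5L: 5 levels

for name, p in [("3L", p_3l), ("5L", p_5l)]:
    h = shannon_h(p)
    j = h / np.log2(len(p))   # J' = H' / H'_max, i.e. evenness
    print(f"{name}: H' = {h:.2f}, J' = {j:.2f}")
```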

Preference-based measures to obtain health state utility values for use in economic evaluations with child-based populations: a review and UK-based focus group assessment of patient and parent choices. Quality of Life Research [PubMed] Published 21st March 2018

Calculating QALYs for kids continues to be a challenge. One of the challenges is the choice of which preference-based measure to use. Part of the problem here is that the EuroQol group – on which we rely for measuring adult health preferences – has been a bit slow. There’s the EQ-5D-Y, which has been around for a while, but it wasn’t developed with any serious thought about what kids value, and there still isn’t a value set for the UK. So, if we use anything, we use a variety of measures. In this study, the authors review the use of generic preference-based measures. 45 papers are identified, which use 5 different measures: HUI2, HUI3, CHU-9D, EQ-5D-Y, and AQOL-6D. No prizes for guessing that the EQ-5D (adult version) was the most commonly used measure for child-based populations. Unfortunately, the review is a bit of a disappointment. And I’m not just saying that because at least one study on which I’ve worked isn’t cited. The search strategy is likely to miss many (perhaps most) trial-based economic evaluations with children, for which cost-utility analyses don’t usually get a lot of airtime. It’s hard to see how a review of this kind is useful if it isn’t comprehensive. But the goal of the paper isn’t just to summarise the use of measures to date. The focus is on understanding when researchers should use self- or proxy-response, and when a parent-child dyad might be most useful. The literature review can’t do much to guide that question, except to assert that the identified studies tended to use parent–proxy respondents. But the study also reports on some focus groups, which are potentially more useful. These were conducted as part of a wider study relating to the design of an RCT. In five focus groups, participants were presented with the EQ-5D-Y and the CHU-9D. It isn’t clear why these two measures were selected. The focus groups included parents and some children over the age of 11. Unfortunately, there’s no real (qualitative) analysis conducted, so the findings are limited. Parents expressed concern about a lack of sensitivity. Naturally, they thought that they knew best and should be the respondents. Of the young people reviewing the measures themselves, the EQ-5D-Y was perceived as more straightforward in referring to tangible experiences, whereas the CHU-9D’s severity levels were seen as more representative. Older adolescents tended to prefer the CHU-9D. The youths weren’t as sure of themselves as the adults and, though they expressed concern about their parents not understanding how they feel, they were generally neutral about who ought to respond. The older kids wanted to speak for themselves. The paper provides a good overview of the different measures, which could be useful for researchers planning data collection for child health utility measurement. But due to the limitations of the review and the lack of analysis of the focus groups, the paper isn’t able to provide any real guidance.

Credits

Chris Sampson’s journal round-up for 19th December 2016

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Discounting the recommendations of the Second Panel on Cost-Effectiveness in Health and Medicine. PharmacoEconomics [PubMed] Published 9th December 2016

I do enjoy a bit of academic controversy. In this paper, renowned troublemakers Paulden, O’Mahony and McCabe do what they do best. Their target is the approach to discounting recommended by the report from the Second Panel on Cost-Effectiveness, which I briefly covered in a recent round-up. This paper starts out by describing what – exactly – the Panel recommends. The real concerns lie with the approach recommended for analyses from the societal perspective. According to the authors, the problems start when the Panel conflates the marginal utility of income and that of consumption, and confusingly labels it with our old friend lambda. The confusion continues with the use of other imprecise terminology. And then there are some aspects of the Panel’s calculations that just seem to be plain old errors, resulting in illogical results – for example, that future consumption should be discounted more heavily if associated with higher marginal utility. Eh? The core criticism is that the Panel recommends the same discount rate for both costs and the consumption value of health, and that this contradicts recent developments. The Panel fails to clearly explain the basis for its recommendation, and the 3% rate for costs and health effects that it recommends is not justified. Helpfully, the authors outline an alternative (correct?) approach. The criticisms made in this paper are technical ones. That doesn’t mean they are any less important, but all we can see is that use of the Panel’s recommended decision rule results in some vague threat to utility-maximisation. Whether or not the conflation of consumption and utility value would actually result in bad decisions is not clear. Nevertheless, considering that the Second Panel will presumably enjoy the massive influence of the original Gold Panel, extreme scrutiny is needed. I hope Basu and Ganiats see fit to respond. I also wonder whether Paulden, O’Mahony and McCabe might have other chapters in their crosshairs.
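The paper’s technical argument is beyond a toy example, but the stakes of the discounting question are easy to illustrate. Here’s a quick Python calculation of how much the present value of a stream of health gains changes if health is discounted at the Panel’s 3% rather than a lower rate; all numbers are illustrative and this is not the authors’ alternative approach.

```python
# Present value of a constant stream of health gains under two rates.

def present_value(stream, rate):
    """Discounted present value of a stream indexed from year 0."""
    return sum(x / (1 + rate)**t for t, x in enumerate(stream))

qalys_per_year = [1.0] * 20   # 1 QALY per year for 20 years (illustrative)

print(f"PV of QALYs at 3.0%: {present_value(qalys_per_year, 0.03):.1f}")
print(f"PV of QALYs at 1.5%: {present_value(qalys_per_year, 0.015):.1f}")
```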

Is best–worst scaling suitable for health state valuation? A comparison with discrete choice experiments. Health Economics [PubMed] Published 4th December 2016

BWS is gaining favour as a means of valuing health states. In this paper, team DCE throw down the gauntlet to team BWS. The study uses data collected during the development of a ‘glaucoma utility index’, in which DCE and BWS exercises were completed. The first question is: do DCE and BWS give the same results? The answer is no. The models indicate relatively weak correlation. For most dimensions, the BWS gave values for different severity levels that were closer together than in the DCE. This means that large improvements in health might be associated with smaller utility gains using BWS values than using DCE values. BWS is also identified as being more prone to decision biases. The second question is: which technique is best ‘to develop health utility indices’ (as the authors put it)? We need to bear in mind that this may in part be moot. Proponents of BWS have often claimed that they are not even trying to measure utility, so to judge BWS on this basis may not be appropriate. Anyway, set aside for now the fact that your own definition of utility might be (and that the authors’ almost certainly is) at odds with the BWS approach. No surprise that the authors suggest that DCE is superior. The bases on which this judgement is made are stability, monotonicity, continuity and completeness. All of these relate to whether the respondents make the kinds of responses we might expect. BWS answers are found to be less stable, more likely to be non-continuous, and tend not to satisfy monotonicity. Personally, I don’t see these as objective indicators of the goodness of a technique or its ability to identify ‘true’ preferences. Also, I don’t know anything about how the glaucoma measure was developed, but if the health states it defines aren’t very informative then the results of this study won’t be either. Nevertheless, the findings do indicate to me that health state valuation using BWS might be subject to more caveats that need investigating before we start to make greater use of the technique. The much larger body of research behind DCE counts in its favour. Over to you, team BWS.
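As an illustration of one of those criteria, here’s a minimal sketch of a monotonicity check in Python: within each dimension, estimated values shouldn’t improve as severity worsens. The dimensions and values are invented, not taken from the glaucoma index.

```python
# Check monotonicity of estimated level values within each dimension.
# Values run from level 1 (best) to the worst level; they should be
# non-increasing as severity worsens.

coefs = {
    "vision":   [0.00, -0.10, -0.25],   # monotonic
    "activity": [0.00, -0.15, -0.12],   # level 3 valued above level 2: violation
}

for dim, values in coefs.items():
    ok = all(b <= a for a, b in zip(values, values[1:]))
    print(f"{dim}: {'monotonic' if ok else 'violates monotonicity'}")
```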

Preference weighting of health state values: what difference does it make, and why? Value in Health Published 23rd November 2016

When non-economists ask about the way we measure health outcomes, the crux of it all is that the EQ-5D et al. are preference-based. We think – or at least have accepted – that preferences must be really very serious and important. Equal weighting of dimensions? Nothing but meaningless nonsense! That may well be true in theory, but what if our approach to preference elicitation is actually providing us with much the same results as if we were using equal weighting? Much research energy (and some money) goes into the preference weighting project, but could it be a waste of time? I had hoped that this paper might answer that question, but while it’s a useful study I didn’t find it quite so enlightening. The authors look at the EQ-5D-5L and 15D and compare the usual preference-based index for each with one constructed using an equal weighting, rescaled to the 0-1 dead-full health scale. The rescaling takes into account the differences in scale length for the 15D (0 to 1, 1.000) and the EQ-5D-5L (-0.281 to 1, 1.281). Data are from the Multi-Instrument Comparison (MIC) study, which includes healthy people as well as subsamples with a range of chronic diseases. The authors look at the correlations between the preference-based and equal-weighted index values. They find very high correlation, especially for the 15D, and agreement for the EQ-5D-5L increases when adjusted for the scale length. Furthermore, the results are investigated for known-group validity alongside a depression-specific outcome measure, on which the EQ-5D performs a little better. But the study doesn’t really tell me what I want to know: would the use of equal weighting normally give us the same results, and in what cases might it not? The MIC study includes a whole range of generic and condition-specific measures, and I can’t see why the study didn’t look at all of them. It also could have used alternative preference weights to see how they differ. And it could have looked at all of the different disease-based subgroups in the sample to try to determine under what circumstances preference weighting might approach equal weighting. I hope to see more research on this issue, not to undermine preference weighting but to inform its improvement.
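To make the comparison concrete, here’s a rough sketch in Python of building an equal-weighted EQ-5D-5L index, rescaling it to the -0.281 to 1 preference-based scale, and correlating the two. The responses are simulated and the ‘preference-based’ index is just a noisy stand-in, so the high correlation is baked in – the point is only to show the mechanics of the rescaling.

```python
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(200, 5))   # 5 dimensions, levels 1-5

# Equal weighting: average severity mapped linearly onto 0 (worst) to 1 (best).
equal_index = 1 - (responses - 1).mean(axis=1) / 4

# Rescale from [0, 1] onto the preference-based scale length of 1.281,
# i.e. the -0.281 to 1 range quoted for the EQ-5D-5L.
equal_rescaled = equal_index * 1.281 - 0.281

# Stand-in for a preference-based index (noisy version of the same values).
pref_index = equal_rescaled + rng.normal(0, 0.05, size=200)

r = np.corrcoef(equal_rescaled, pref_index)[0, 1]
print(f"Correlation, equal-weighted vs preference-based: {r:.2f}")
```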

Credits