Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.
Discounting the recommendations of the Second Panel on Cost-Effectiveness in Health and Medicine. PharmacoEconomics [PubMed] Published 9th December 2016
I do enjoy a bit of academic controversy. In this paper, renowned troublemakers Paulden, O’Mahony and McCabe do what they do best. Their target is the approach to discounting recommended by the report from the new Panel on Cost-Effectiveness, which I briefly covered in a recent round-up. This paper starts out by describing what – exactly – the Panel recommends. The real concerns lie with the approach recommended for analyses from the societal perspective. According to the authors, the problems start when the Panel conflates the marginal utility of income with that of consumption, and confusingly labels it with our old friend the lambda. The confusion continues with the use of other imprecise terminology. And then there are some aspects of the Panel’s calculations that just seem to be plain old errors, producing illogical conclusions – for example, that future consumption should be discounted more heavily if associated with higher marginal utility. Eh? The core criticism is that the Panel recommends the same discount rate for both costs and the consumption value of health, and that this contradicts recent developments. The Panel fails to clearly explain the basis for its recommendation. Helpfully, the authors outline an alternative (correct?) approach. The 3% rate for costs and health effects that the Panel recommends is not justified. The criticisms made in this paper are technical ones. That doesn’t mean they are any less important, but all we can see is that use of the Panel’s recommended decision rule poses some vague threat to utility maximisation. Whether or not the conflation of consumption and utility value would actually result in bad decisions is not clear. Nevertheless, considering the massive influence of the original Gold Panel, which will presumably be enjoyed by the Second Panel too, extreme scrutiny is needed. I hope Basu and Ganiats see fit to respond. I also wonder whether Paulden, O’Mahony and McCabe might have other chapters in their crosshairs.
Is best–worst scaling suitable for health state valuation? A comparison with discrete choice experiments. Health Economics [PubMed] Published 4th December 2016
BWS is gaining favour as a means of valuing health states. In this paper, team DCE throw down the gauntlet for team BWS. The study uses data collected during the development of a ‘glaucoma utility index’, in which DCE and BWS exercises were completed. The first question is, do DCE and BWS give the same results? The answer is no. The models indicate relatively weak correlation. For most dimensions, the BWS gave values for different severity levels that were closer together than in the DCE. This means that large improvements in health might be associated with smaller utility gains using BWS values than using DCE values. BWS is also identified as being more prone to decision biases. The second question is, which technique is best ‘to develop health utility indices’ (as the authors put it)? We need to bear in mind that this may in part be moot. Proponents of BWS have often claimed that they are not even trying to measure utility, so to judge BWS on this basis may not be appropriate. Anyway, set aside for now the fact that your own definition of utility might be (and the authors’ almost certainly is) at odds with the BWS approach. No surprise that the authors suggest that DCE is superior. The bases on which this judgement is made are stability, monotonicity, continuity and completeness. All of these relate to whether the respondents make the kinds of responses we might expect. BWS answers are found to be less stable, more likely to be non-continuous, and tend not to satisfy monotonicity. Personally, I don’t see these as objective identifiers of goodness, or of the technique’s ability to identify ‘true’ preferences. Also, I don’t know anything about how the glaucoma measure was developed, but if the health states it defines aren’t very informative then the results of this study won’t be either.
Nevertheless, the findings do indicate to me that health state valuation using BWS might be subject to more caveats that need investigating before we start to make greater use of the technique. The much larger body of research behind DCE counts in its favour. Over to you, Terry (and team BWS).
Preference weighting of health state values: what difference does it make, and why? Value in Health Published 23rd November 2016
When non-economists ask about the way we measure health outcomes, the crux of it all is that the EQ-5D et al are preference-based. We think – or at least have accepted – that preferences must be really very serious and important. Equal weighting of dimensions? Nothing but meaningless nonsense! That may well be true in theory, but what if our approach to preference elicitation is actually providing us with much the same results as if we were using equal weighting? Much research energy (and some money) goes into the preference weighting project, but could it be a waste of time? I had hoped that this paper might answer that question, but while it’s a useful study I didn’t find it quite so enlightening. The authors look at the EQ-5D-5L and 15D and compare the usual preference-based index for each with one constructed using an equal weighting, rescaled to the 0–1 dead–full health scale. The rescaling takes into account the different scale lengths of the 15D (0 to 1; length 1.000) and the EQ-5D-5L (−0.281 to 1; length 1.281). Data are from the Multi-Instrument Comparison (MIC) study, which includes healthy people as well as subsamples with a range of chronic diseases. The authors look at the correlations between the preference-based and equal-weighted index values. They find very high correlation, especially for the 15D, and agreement on the EQ-5D increases when adjusted for the scale length. Furthermore, the results are investigated for known-group validity alongside a depression-specific outcome measure. The EQ-5D performs a little better. But the study doesn’t really tell me what I want to know: would the use of equal weighting normally give us the same results, and in what cases might it not? The MIC study includes a whole range of generic and condition-specific measures and I can’t see why the study didn’t look at all of them. It also could have used alternative preference weights to see how they differ.
And it could have looked at all of the different disease-based subgroups in the sample to try to determine under what circumstances preference weighting might approach equal weighting. I hope to see more research on this issue, not to undermine preference weighting but to inform its improvement.
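The rescaling step described above is simple to make concrete. Below is a minimal sketch, not the paper’s actual method: it assumes each dimension contributes equally and maps the raw equal-weighted score linearly onto the instrument’s dead–full-health scale, using the scale bounds quoted above; the function name and the level structure are illustrative.

```python
# Sketch: an equal-weighted index rescaled onto an instrument's
# dead-full-health scale. Illustrative only; not the paper's exact method.

def equal_weight_index(levels, n_levels, lower_bound):
    """Equal-weight each dimension, then rescale.

    levels: per-dimension levels, 1 (no problems) .. n_levels (worst)
    lower_bound: the instrument's worst-state value
                 (e.g. -0.281 for the EQ-5D-5L, 0 for the 15D)
    """
    n_dims = len(levels)
    # Raw equal-weighted score on 0..1 (1 = full health)
    raw = 1 - sum(l - 1 for l in levels) / (n_dims * (n_levels - 1))
    # Stretch onto the instrument's scale (length 1 - lower_bound)
    return lower_bound + raw * (1 - lower_bound)

# EQ-5D-5L-style instrument: 5 dimensions, 5 levels, scale -0.281 to 1
print(equal_weight_index([1, 1, 1, 1, 1], 5, -0.281))  # full health, ~1.0
print(equal_weight_index([5, 5, 5, 5, 5], 5, -0.281))  # worst state, ~-0.281
```

The point of the adjustment is visible here: without the `lower_bound` stretch, the two instruments’ equal-weighted indices would sit on scales of different lengths and their agreement would be understated.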
Credits
Thanks for the review Chris. For an academic, “renowned troublemaker” is definitely a compliment!
Just to add to your comments: while our theoretical concerns primarily apply to the Panel’s consideration of the societal perspective, our concern about the Panel’s consideration of empirical evidence applies to the healthcare sector perspective as well.
Under a healthcare sector perspective, the estimate of real bond yields cited by the Panel (2-4% per annum) is far in excess of the estimates of real bond yields provided by the US federal government in its official discounting guidance: https://www.whitehouse.gov/omb/circulars_a094/a94_appx-c
Interestingly, since our paper went to press, this official discounting guidance has been updated for 2017 and their recommended discount rates are lower than for 2016: negative 0.5% at 3 years, rising to 0.7% at 30 years (compared to 1.5% at 30 years in 2016). Which only reinforces our argument that the Panel’s recommended discount rate of 3% is too high under a healthcare sector perspective.
In answer to Sam Watson’s comment regarding negative discount rates: from a healthcare sector perspective, whether we should have a negative discount rate on costs depends upon whether the real rate of interest faced by the funder of the healthcare system is negative (estimating the components of the Ramsey equation is appropriate under a societal perspective but not under a healthcare sector perspective). In the US, the latest federal government data (cited above) suggest that real interest rates are negative for bonds with a maturity of up to 5 years, but not for bonds with a longer term to maturity. So, in principle, for an economic evaluation conducted today from a US healthcare sector perspective, a negative discount rate would be justified on costs incurred during the first 5 years of the evaluation, but not beyond that. Whether the discount rate on health effects should also be negative depends upon the growth rate of the shadow price of the healthcare sector budget. If the expected growth rate of the shadow price is zero, then health effects should also be discounted at a negative rate for the first 5 years. That said, the use of a time-varying discount rate is a relatively under-researched topic, and has raised concerns among some economists about “time-inconsistent decision making”, so it would be understandable if decision makers would prefer to adopt a constant rate for the time being. In my view this topic provides very fertile ground for future research.
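The mechanics of the time-varying schedule described above can be sketched in a few lines. This is a hypothetical illustration, not the official OMB schedule or the paper’s analysis: the rates, the switch at year 5, and the function names are all assumptions for demonstration.

```python
# Sketch: present value of a cost stream under a time-varying discount
# rate - negative for the first 5 years, positive thereafter.
# Rates are illustrative, not the official US government schedule.

def present_value(costs, rate_fn):
    """Discount a cost incurred at the end of each year t (t = 1, 2, ...)
    using the per-period rate rate_fn(t), compounded year by year."""
    pv, factor = 0.0, 1.0
    for t, cost in enumerate(costs, start=1):
        factor /= 1 + rate_fn(t)   # cumulative discount factor to year t
        pv += cost * factor
    return pv

def time_varying_rate(t):
    # Illustrative: -0.5% real rate for years 1-5, +1.5% afterwards
    return -0.005 if t <= 5 else 0.015

stream = [100.0] * 10  # 100 per year for 10 years
print(round(present_value(stream, time_varying_rate), 2))
print(round(present_value(stream, lambda t: 0.03), 2))  # constant 3%
```

Note that with a negative rate the cumulative factor exceeds 1, so near-term costs are weighted *above* their face value – which is exactly why the choice between a constant 3% and a rate schedule like this one is not a technical footnote.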
I’m happy to answer any further questions that people have about the paper.
“Renowned troublemakers” – should we take that as a compliment?
Absolutely! It’s admirable! I hope you don’t mind me saying/thinking it.
Not at all. I hope we are more than just troublemakers.
Great blog. Good summary of both papers. Has Terry been in touch?
I think the ‘vague’ difference is more important than people realize. A 1.5% premium on the discount rate is quite a large penalty for preventive vs. chronic treatment interventions.
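The size of that penalty is easy to check with a quick calculation. The sketch below (illustrative numbers, my own function name) compares discount factors at 1.5% and 3% over horizons typical of preventive interventions:

```python
# Sketch: the effect of a 1.5 percentage-point premium on the discount
# rate for benefits realised far in the future. Illustrative numbers.

def discount_factor(rate, years):
    """Value today of one unit of benefit accruing `years` from now."""
    return 1 / (1 + rate) ** years

for years in (10, 20, 30):
    low = discount_factor(0.015, years)   # 1.5% per annum
    high = discount_factor(0.03, years)   # 3.0% per annum
    print(f"{years}y: 1.5% -> {low:.3f}, 3.0% -> {high:.3f}, "
          f"ratio {high / low:.2f}")
```

At 30 years the 3% rate values a health gain at roughly 0.41 of its face value versus roughly 0.64 under 1.5% – about a third less – so for prevention, whose benefits arrive decades out, the premium is indeed a large penalty.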
Did you note that the reference we use for the US rates is very explicit that it is the US govt. recommended rate for use in cost-effectiveness analysis? It’s not just us. The US govt. wouldn’t approve of the 2nd Panel’s recommendation.
Did that first paper talk about any of the arguments we covered in our post (https://aheblog.com/2016/04/21/three-arguments-in-favour-of-a-negative-social-discount-rate/)?
Negative!
Our aim was to correct the theoretical and empirical errors in the 2nd Panel’s work on discounting; not provide an overview of the current state of the art in discounting. Chris