
Chris Sampson’s journal round-up for 13th January 2020

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A vision ‘bolt-on’ increases the responsiveness of EQ-5D: preliminary evidence from a study of cataract surgery. The European Journal of Health Economics [PubMed] Published 4th January 2020

The EQ-5D is insensitive to differences in how well people can see, despite this seeming to be an important aspect of health. In contexts where the impact of visual impairment may be important, we could potentially use a ‘bolt-on’ item that asks about a person’s vision. I’m working on the development of a vision bolt-on at the moment. But ours won’t be the first. A previously-developed bolt-on has undergone some testing and has been shown to be sensitive to differences between people with different levels of visual function. However, there is little or no evidence to support its responsiveness to changes in visual function, which might arise from treatment.

For this study, 63 individuals were recruited prior to receiving cataract surgery in Singapore. Participants completed the EQ-5D-3L and EQ-5D-5L, both with and without a vision bolt-on, which matched the wording of the other EQ-5D dimensions. Additionally, the SF-6D, HUI3, and VF-12 were completed along with a LogMAR assessment of visual acuity. The authors sought to compare the responsiveness of the EQ-5D with a vision bolt-on against that of the standard EQ-5D and the other measures, so all measures were completed before and after cataract surgery. Preference weights can be generated for the EQ-5D-3L with a vision bolt-on, but not for the EQ-5D-5L, so the authors looked at rescaled sum scores to compare across all measures. Responsiveness was assessed using indicators such as the standardised effect size and the standardised response mean.
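
For readers unfamiliar with these indicators, here is a minimal sketch of how they are typically computed (illustrative Python with made-up numbers; none of the data or variable names come from the paper):

    import numpy as np

    # Hypothetical pre- and post-surgery index scores for a handful of patients (illustrative only)
    before = np.array([0.62, 0.70, 0.55, 0.68, 0.60])
    after = np.array([0.65, 0.74, 0.57, 0.70, 0.66])
    change = after - before

    # Standardised effect size: mean change divided by the SD of the baseline scores
    ses = change.mean() / before.std(ddof=1)

    # Standardised response mean: mean change divided by the SD of the change scores
    srm = change.mean() / change.std(ddof=1)

    print(f"SES = {ses:.2f}, SRM = {srm:.2f}")

Both statistics scale the mean change, but they do so against different measures of spread, which is why they can rank instruments differently.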

Visual acuity changed dramatically from before to after surgery for almost everybody. The authors found that the vision bolt-on was considerably more responsive to this change than the EQ-5D without it. For instance, the mean change in the EQ-5D-3L index score was 0.018 without the vision bolt-on and 0.031 with it. The HUI3 came out with a mean change of 0.105 and showed the highest responsiveness across all analyses.

Does this mean that we should all be using a vision bolt-on, or perhaps the HUI3? Not exactly. Something I see a lot in papers of this sort – including in this one – is the framing of ‘superior responsiveness’ as an indication that the measure is doing a better job. That isn’t true if the measure is responding to things to which we don’t want it to respond. As the authors point out, the HUI3 has quite different foundations to the EQ-5D. We also don’t want a situation where analysts can pick and choose measures according to whichever is most responsive to the thing to which they want it to be most responsive. In EuroQol parlance, what goes into the descriptive system is very important.

The causal effect of social activities on cognition: evidence from 20 European countries. Social Science & Medicine Published 9th January 2020

Plenty of studies have shown that cognitive abilities are correlated with social engagement, but few have attempted to demonstrate causality in a large sample. The challenge, of course, is that people who engage in more social activities are likely to have greater cognitive abilities for other reasons, and people’s decision to engage in social activities might depend on their cognitive abilities. This study tackles the question of causality using a novel (to me, at least) methodology.

The analysis uses data from five waves of SHARE (the Survey of Health, Ageing and Retirement in Europe). Survey respondents are asked whether they engage in a variety of social activities, such as voluntary work, training, sports, or community-related organisations. From this, the authors generate an indicator for people participating in zero, one, or two or more of these activities. The survey also uses a set of tests to measure people’s cognitive abilities in terms of immediate recall capacity, delayed recall capacity, fluency, and numeracy. The authors look at each of these four outcomes, with 231,407 observations for the first three and 124,381 for numeracy (for which the questions were missing from some waves). Confirming previous findings, a strong positive correlation is found between engagement in social activities and each of the cognition indicators.

The empirical strategy, which I had never heard of, is partial identification. This is a non-parametric method that identifies bounds for the average treatment effect; it is ‘partial’ because it doesn’t identify a point estimate. Fewer assumptions mean wider and less informative bounds. The authors start with a model with no assumptions, for which the lower bound for the treatment effect goes below zero. They then incrementally add assumptions. These include i) monotone treatment response, assuming that social participation does not reduce cognitive abilities on average; ii) monotone treatment selection, assuming that people who choose to be socially active tend to have higher cognitive capacities; and iii) a monotone instrumental variable assumption that body mass index is negatively associated with cognitive abilities. The authors argue that their methodology is not likely to be undermined by unobservables in the way that previous studies might be.
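
To give a flavour of the approach, here is a minimal sketch of no-assumption (worst-case) bounds on an average treatment effect, and of how the monotone treatment response and monotone treatment selection assumptions tighten them. The simulated data, the rescaling of the outcome to [0, 1], and all variable names are my own illustration, not the authors’ code.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative data: d = socially active (0/1), y = cognition score scaled to [0, 1]
    n = 10_000
    d = rng.integers(0, 2, n)
    y = np.clip(0.5 + 0.1 * d + rng.normal(0, 0.15, n), 0, 1)

    p1 = d.mean()               # P(D = 1)
    p0 = 1 - p1                 # P(D = 0)
    ey1 = y[d == 1].mean()      # E[Y | D = 1]
    ey0 = y[d == 0].mean()      # E[Y | D = 0]
    y_min, y_max = 0.0, 1.0     # logical limits of the outcome

    # No-assumption bounds: each unobserved counterfactual mean is replaced
    # by the outcome's logical extremes.
    lower = (ey1 * p1 + y_min * p0) - (ey0 * p0 + y_max * p1)
    upper = (ey1 * p1 + y_max * p0) - (ey0 * p0 + y_min * p1)

    # Monotone treatment response (effect is non-negative) plus monotone
    # treatment selection (the socially active have weakly higher potential
    # outcomes) shrink the bounds to [0, difference in observed means].
    mtr_mts = (0.0, ey1 - ey0)

    print(f"No assumptions: [{lower:.3f}, {upper:.3f}]")
    print(f"MTR + MTS:      [{mtr_mts[0]:.3f}, {mtr_mts[1]:.3f}]")

The no-assumption bounds straddle zero by construction, which is why the identifying power comes almost entirely from the monotonicity assumptions.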

The various models show that engaging in social activities has a positive impact on all four of the cognitive indicators. The assumption of monotone treatment response had the highest identifying power. For all models that included this, the 95% confidence intervals around the estimates showed a statistically significant positive impact of social activities on cognition. What is perhaps most interesting about this approach is the huge amount of uncertainty in the estimates. Social activities might have a huge effect on cognition or they might have a tiny effect. A basic OLS-type model, assuming exogenous selection, provides very narrow confidence intervals, whereas the confidence intervals on the partial identification models are almost as wide as the bounds themselves.

One shortcoming of this study for me is that it doesn’t seek to identify the causal channels that have been proposed in previous literature (e.g. loneliness, physical activity, self-care). So it’s difficult to paint a clear picture of what’s going on. But then, maybe that’s the point.

Do research groups align on an intervention’s value? Concordance of cost-effectiveness findings between the Institute for Clinical and Economic Review and other health system stakeholders. Applied Health Economics and Health Policy [PubMed] Published 10th January 2020

Aside from having the most inconvenient name imaginable, ICER has been a welcome addition to the US health policy scene, appraising health technologies in order to provide guidance on coverage. ICER has become influential, with some pharmacy benefit managers using their assessments as a basis for denying coverage for low-value medicines. ICER identify technologies as falling into one of three categories – high, intermediate, or low long-term value – according to whether the ICER (grr) falls below, within, or above the threshold range of $50,000-$175,000 per QALY. ICER conduct their own evaluations, but so do plenty of other people. This study sought to find out whether other analyses in the literature agree with ICER’s categorisations.
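
The categorisation rule itself is simple enough to express in a few lines; a hypothetical sketch (not ICER’s own code) might look like this:

    def icer_value_category(icer_per_qaly: float,
                            low_threshold: float = 50_000,
                            high_threshold: float = 175_000) -> str:
        """Classify long-term value from a cost-per-QALY ratio using ICER's threshold range."""
        if icer_per_qaly < low_threshold:
            return "high value"
        if icer_per_qaly > high_threshold:
            return "low value"
        return "intermediate value"

    print(icer_value_category(120_000))  # -> intermediate value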

The authors consider 18 assessments by ICER, covering 76 interventions, between 2015 and 2017. For each of these, the authors searched the literature for other comparative studies. Specifically, they went looking for cost-effectiveness analyses that employed the same perspectives and outcomes. Unfortunately, they were only able to identify studies for six disease areas and 14 interventions (of the 76), across 25 studies. It isn’t clear whether this is because there is a lack of literature out there – which would be an interesting finding in itself – or because their search strategy or selection criteria weren’t up to scratch. Of the 14 interventions compared, 10 get a more favourable assessment in the published studies than in their corresponding ICER evaluations, with most being categorised as intermediate value instead of low value. The authors go on to conduct one case study, comparing an ICER evaluation in the context of migraine with a published study by some of the authors of this paper. There were methodological differences. In some respects, it seems as if ICER did a more thorough job, while in other respects the published study seemed to use more defensible assumptions.

I agree with the authors that these kinds of comparisons are important. Not least, we need to be sure that ICER’s approach to appraisal is valid. The findings of this study suggest that maybe ICER should be looking at multiple studies and combining all available data in a more meaningful way. But the authors excluded too many studies. Some imperfect comparisons would have been more useful than exclusion – 14 of 76 is kind of pitiful and probably not representative. And I’m not sure why the authors set out to identify studies that are ‘more favourable’, rather than just different. That perspective seems to reveal an assumption that ICER are unduly harsh in their assessments.

Credits

By Chris Sampson
Founder of the Academic Health Economists' Blog. Senior Principal Economist at the Office of Health Economics. ORCID: 0000-0001-9470-2369

