
Chris Sampson’s journal round-up for 9th October 2017

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Evaluating the relationship between visual acuity and utilities in patients with diabetic macular edema enrolled in intravitreal aflibercept studies. Investigative Ophthalmology & Visual Science [PubMed] Published October 2017

Part of my day job involves the evaluation of a new type of screening programme for diabetic eye disease, including the use of a decision analytic model. Cost-effectiveness models usually need health state utility values as parameters in order to estimate QALYs. There are some interesting challenges in evaluating health-related quality of life in the context of vision loss: does vision in the best eye or the worst eye affect quality of life more; do different eye diseases have different impacts independent of sight loss; do generic preference-based measures even work in this context? This study explores some of these questions. It combines baseline and follow-up EQ-5D and VFQ-UI (a condition-specific preference-based measure) responses from 1,320 patients from 4 different studies, along with visual acuity data. OLS and random effects panel models are used to predict utility values as a function of visual acuity and other individual characteristics. Best-seeing eye seems to be a more important determinant than worst-seeing eye, which supports previous studies. But worst-seeing eye is still important, with about a third of the impact of best-seeing eye. So economic evaluations shouldn’t ignore the bilateral nature of eye disease. Visual acuity – in both best- and worst-seeing eye – was more strongly associated with the condition-specific VFQ-UI than with the EQ-5D index, leading to better predictive power, which is not a big surprise. One way to look at this is that the EQ-5D underestimates the impact of visual acuity on utility. An alternative view could be that the VFQ-UI valuation process overestimates the impact of visual acuity on utility. This study is a nice demonstration of the fact that selecting health state utility values for a model-based economic evaluation is not straightforward. Attention needs to be given not only to the choice of measure (e.g. generic or condition-specific) but also to the way health states are defined, so that accurate utility values can be attached.
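
For the curious, here is a rough sketch (simulated data and made-up variable names, not the authors’ code) of the kind of models described above: pooled OLS alongside a random-intercept panel model, predicting a utility index from best- and worst-seeing-eye visual acuity.

```python
# Illustrative sketch only: variable names (utility, va_best, va_worst, patient_id)
# and the simulated data are assumptions, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_patients, n_visits = 200, 2  # e.g. baseline and follow-up observations

patient_id = np.repeat(np.arange(n_patients), n_visits)
va_best = rng.normal(70, 10, n_patients * n_visits)                    # letters read, best-seeing eye
va_worst = va_best - np.abs(rng.normal(10, 5, n_patients * n_visits))  # worst-seeing eye
patient_effect = np.repeat(rng.normal(0, 0.05, n_patients), n_visits)  # patient-level heterogeneity
utility = (0.5 + 0.004 * va_best + 0.0013 * va_worst                   # worst eye ~1/3 the weight
           + patient_effect + rng.normal(0, 0.05, n_patients * n_visits))

df = pd.DataFrame({"patient_id": patient_id, "va_best": va_best,
                   "va_worst": va_worst, "utility": utility})

# Pooled OLS, ignoring the repeated measures within patients
ols_fit = smf.ols("utility ~ va_best + va_worst", data=df).fit()

# Random-intercept model, allowing each patient their own baseline utility level
re_fit = smf.mixedlm("utility ~ va_best + va_worst", data=df,
                     groups=df["patient_id"]).fit()

print(ols_fit.params)
print(re_fit.params)
```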

Do capability and functioning differ? A study of U.K. survey responses. Health Economics [PubMed] Published 24th September 2017

I like the capability approach in theory, but not in practice. I’ve written before about some of my concerns. One of them is that we don’t know whether capability measures (such as the ICECAP) offer anything beyond semantic nuance. This study sought to address that. A ‘functioning and capability’ instrument was devised, which reworded the ICECAP-A by changing phrases like “I am able to be” to phrases like “I am”, so that each question could have a ‘functioning’ version as well as a ‘capability’ version. The functioning and capability versions of each domain were then presented in tandem. Questionnaires were sent to 1,627 individuals who had participated in another study about spillover effects in meningitis. Respondents (n=1,022) were family members of people experiencing after-effects of meningitis. The analysis focusses on the instances where capabilities and functionings diverge. Across the sample, 34% of respondents reported a difference between capability and functioning on at least one domain. Across all domain-level responses, 12% showed higher capability than functioning, while 2% showed higher functioning than capability. Some differences were observed between different groups of people. Older people tended to be less likely to report excess capabilities, while those with degree-level education reported greater capabilities. Informal care providers had lower functionings and capabilities but were more likely to report a difference between the two. Women were more likely to report excess capabilities in the ‘attachment’ domain. These differences lead the author to conclude that the wording of the ICECAP measure enables researchers to capture something beyond functioning, and that the choice of a capability measure could lead to different resource allocation decisions. I’m not convinced. The study makes an error that is common in this field; it presupposes that the changes in wording successfully distinguish between capabilities and functionings. This is implemented analytically by dropping from the primary analysis the cases where functionings exceeded capabilities, which are presumed to be illogical. If we don’t accept this presupposition (and we shouldn’t) then the meaning of the findings becomes questionable. The paper does outline most of the limitations of the study, but it doesn’t dedicate much space to alternative explanations. One is to do with the distinction between ‘can’ and ‘could’. If people answer ‘capability’ questions with reference to future possibilities, then the difference could simply be driven by optimism about future functionings. This future-reference problem is most obvious in the ‘achievement and progress’ domain, which, incidentally, was the domain in this study with the greatest probability of showing a discrepancy between capabilities and functionings. Another alternative explanation is that showing someone two slightly different questions coaxes them into making an artificial distinction that they wouldn’t otherwise make. In my previous writing on this, I suggested that two things needed to be identified. The first was to see whether people give different responses with the different wording. This study goes some way towards that, which is a good start. The second was to see whether people value states defined in these ways any differently. Until we have answers to both these questions I will remain sceptical about the implications of the ICECAP’s semantic nuance.
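
To make the unit of analysis concrete, here is a toy sketch (entirely made-up responses, not the study’s data) of how respondent-level and domain-level divergence between capability and functioning answers might be tabulated.

```python
# Hypothetical example: three respondents, ICECAP-A domains, responses on a 1-4 scale.
import pandas as pd

domains = ["stability", "attachment", "autonomy", "achievement", "enjoyment"]

# Each respondent answers both a capability and a functioning version of each domain.
capability = pd.DataFrame([[4, 3, 4, 3, 4],
                           [3, 3, 2, 2, 3],
                           [4, 4, 4, 4, 4]], columns=domains)
functioning = pd.DataFrame([[4, 3, 3, 2, 4],
                            [3, 3, 2, 2, 3],
                            [4, 4, 4, 4, 3]], columns=domains)

diff = capability - functioning
any_divergence = (diff != 0).any(axis=1).mean()   # share of respondents with any difference
cap_exceeds_func = (diff > 0).to_numpy().mean()   # share of domain-level responses
func_exceeds_cap = (diff < 0).to_numpy().mean()   # the 'illogical' direction

print(f"Respondents with any divergence: {any_divergence:.0%}")
print(f"Domain responses with capability > functioning: {cap_exceeds_func:.0%}")
print(f"Domain responses with functioning > capability: {func_exceeds_cap:.0%}")
```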

Estimating a constant WTP for a QALY—a mission impossible? The European Journal of Health Economics [PubMed] Published 21st September 2017

The idea of estimating willingness to pay (WTP) for a QALY has fallen out of fashion. It’s a nice idea in principle but, as the title of this paper suggests, it’s not easy to come up with a meaningful answer. One key problem has been that WTP for a QALY is not constant in the number of QALYs being gained – that is, people are willing to pay less (at the margin) for greater QALY gains. But maybe that’s OK. NICE and their counterparts tend not to use a fixed threshold but rather a range: £20,000-£30,000 per QALY, say. So maybe the variability in WTP for a QALY can be reflected in this range. This study explores some of the reasons – including uncertainty – for differences in elicited WTP values for a QALY. A contingent valuation exercise was conducted using a 2014 Internet panel survey of 1,400 Swedish citizens. The survey consisted of 21 questions about respondents’ own health, sociodemographics, prioritisation attitudes, WTP for health improvements, and a societal decision-making task. Respondents were randomly assigned to one of five scenarios with different magnitudes and probabilities of health gain, with yes/no responses for five different WTP ‘bids’. The estimated WTP for a QALY – using the UK EQ-5D-3L tariff – was €17,000. But across the different scenarios, the WTP ranged from €10,600 to over €1 million. Wide confidence intervals abound. The authors’ findings only partially support an assumption of weak scope sensitivity – that more QALYs are worth paying more for – and do not at all support a strong assumption of scope sensitivity that WTP is proportional to QALY gain. This is what is known as scope bias, and the insensitivity to scope also applied to variation in the uncertainty (probability) of the health gain. The authors also found that using different EQ-5D or VAS tariffs to estimate health state values resulted in varying WTP estimates. Consistent relationships between individuals’ characteristics and their WTP were not found, though income and education seemed to be associated with higher willingness to pay across the sample. It isn’t clear what the implications of these findings are, except for the reinforcement of any scepticism you might have about the sociomathematical validity (yes, I’m sticking with that) of the QALY.
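
As a back-of-the-envelope illustration (not the authors’ model; the bid levels, currency, and data below are made up), this is how mean WTP is often recovered from single-bounded dichotomous-choice contingent valuation responses: a probit of accept/reject on the bid amount, with mean WTP given by minus the intercept over the bid coefficient under a linear specification.

```python
# Illustrative sketch under stated assumptions; scenario, bids, and QALY gain are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1400

# Hypothetical bid levels randomly assigned to respondents
bids = rng.choice([5_000, 10_000, 20_000, 50_000, 100_000], size=n).astype(float)

# Simulated latent WTP for the scenario's health gain (lognormal, purely illustrative)
true_wtp = rng.lognormal(mean=np.log(17_000), sigma=0.8, size=n)
accept = (true_wtp >= bids).astype(int)

# Probit of the yes/no responses on the bid amount
X = sm.add_constant(pd.DataFrame({"bid": bids}))
fit = sm.Probit(accept, X).fit(disp=False)

# Under a linear utility-difference specification, mean (= median) WTP = -b0 / b_bid
mean_wtp = -fit.params["const"] / fit.params["bid"]

# Dividing by the scenario's QALY gain (valued with a tariff such as the UK EQ-5D-3L)
# would then give a WTP-per-QALY figure.
qaly_gain = 0.1  # hypothetical size of the health gain in QALYs
print(f"Mean WTP for the gain: {mean_wtp:,.0f}; per QALY: {mean_wtp / qaly_gain:,.0f}")
```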

Credits

By

  • Chris Sampson

    Founder of the Academic Health Economists' Blog. Senior Principal Economist at the Office of Health Economics. ORCID: 0000-0001-9470-2369
