Chris Sampson’s journal round-up for 9th October 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Evaluating the relationship between visual acuity and utilities in patients with diabetic macular edema enrolled in intravitreal aflibercept studies. Investigative Ophthalmology & Visual Science [PubMed] Published October 2017

Part of my day job involves the evaluation of a new type of screening programme for diabetic eye disease, including the use of a decision analytic model. Cost-effectiveness models usually need health state utility values as parameters in order to estimate QALYs. There are some interesting challenges in evaluating health-related quality of life in the context of vision loss: does vision in the best eye or the worst eye affect quality of life most? Do different eye diseases have different impacts, independent of sight loss? Do generic preference-based measures even work in this context? This study explores some of these questions. It combines baseline and follow-up EQ-5D and VFQ-UI (a condition-specific preference-based measure) responses from 1,320 patients across 4 different studies, along with visual acuity data. OLS and random effects panel models are used to predict utility values from visual acuity and other individual characteristics. Best-seeing eye seems to be a more important determinant than worst-seeing eye, which supports previous studies. But worst-seeing eye still matters, with about a third of the impact of best-seeing eye, so economic evaluations shouldn’t ignore the bilateral nature of eye disease. Visual acuity – in both the best- and worst-seeing eye – was more strongly associated with the condition-specific VFQ-UI than with the EQ-5D index, leading to better predictive power, which is not a big surprise. One way to look at this is that the EQ-5D underestimates the impact of visual acuity on utility. An alternative view is that the VFQ-UI valuation process overestimates it. This study is a nice demonstration that selecting health state utility values for a model-based economic evaluation is not straightforward. Attention needs to be given to the choice of measure (e.g. generic or condition-specific), but also to the way health states are defined so that accurate utility values can be attached.
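
As an aside, here is a minimal sketch of what such a random effects (mixed) panel model might look like. This is illustrative only – not the authors’ code – and the file and variable names are hypothetical:

```python
# Illustrative only: a random intercept panel model predicting utility from
# best- and worst-seeing eye visual acuity. All column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per patient per visit (baseline, follow-up)
df = pd.read_csv("dme_utilities.csv")

# A random intercept per patient accounts for repeated observations
model = smf.mixedlm(
    "utility ~ va_best_eye + va_worst_eye + age + sex",
    data=df,
    groups=df["patient_id"],
)
print(model.fit().summary())
```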

Do capability and functioning differ? A study of U.K. survey responses. Health Economics [PubMed] Published 24th September 2017

I like the capability approach in theory, but not in practice. I’ve written before about some of my concerns. One of them is that we don’t know whether capability measures (such as the ICECAP) offer anything beyond semantic nuance. This study sought to address that. A ‘functioning and capability’ instrument was devised, which reworded the ICECAP-A by changing phrases like “I am able to be” to phrases like “I am”, so that each question could have a ‘functioning’ version as well as a ‘capability’ version. Both the functioning and capability versions of the domains were then presented in tandem. Questionnaires were sent to 1,627 individuals who had participated in another study about spillover effects in meningitis. Respondents (n=1,022) were family members of people experiencing after-effects of meningitis. The analysis focusses on the instances where capabilities and functionings diverge. Across the sample, 34% of respondents reported a difference between capability and functioning on at least one domain. Of all domain-level responses, 12% showed higher capability than functioning, while 2% showed higher functioning than capability. Some differences were observed between different groups of people. Older people tended to be less likely to report excess capabilities, while those with degree-level education reported greater capabilities. Informal care providers had lower functionings and capabilities but were more likely to report a difference between the two. Women were more likely to report excess capabilities in the ‘attachment’ domain. These differences lead the author to conclude that the wording of the ICECAP measure enables researchers to capture something beyond functioning, and that the choice of a capability measure could lead to different resource allocation decisions. I’m not convinced. The study makes an error that is common in this field: it presupposes that the changes in wording successfully distinguish between capabilities and functionings. This is implemented analytically by dropping from the primary analysis the cases where functionings exceeded capabilities, which are presumed to be illogical. If we don’t accept this presupposition (and we shouldn’t), the meaning of the findings becomes questionable. The paper outlines most of the limitations of the study, but it doesn’t dedicate much space to alternative explanations. One is to do with the distinction between ‘can’ and ‘could’. If people answer ‘capability’ questions with reference to future possibilities, then the difference could simply be driven by optimism about future functionings. This future-reference problem is most obvious in the ‘achievement and progress’ domain which, incidentally, in this study was the domain with the greatest probability of showing a discrepancy between capabilities and functionings. Another alternative explanation is that showing someone two slightly different questions coaxes them into making an artificial distinction that they wouldn’t otherwise make. In my previous writing on this, I suggested that two things needed to be identified. The first was whether people give different responses with the different wording. This study goes some way towards that, which is a good start. The second was whether people value states defined in these ways any differently. Until we have answers to both these questions, I will remain sceptical about the implications of the ICECAP’s semantic nuance.
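
For readers curious about the mechanics, a sketch of the kind of domain-level divergence tabulation described above might look like this (the data layout – paired capability and functioning responses per domain – is my assumption, not the study’s file):

```python
# Sketch of a capability-vs-functioning tabulation. Column naming convention
# (cap_<domain>, fun_<domain>, coded 1-4) is hypothetical.
import pandas as pd

df = pd.read_csv("capability_functioning.csv")
domains = ["stability", "attachment", "autonomy", "achievement", "enjoyment"]

for d in domains:
    excess_capability = (df[f"cap_{d}"] > df[f"fun_{d}"]).mean()
    excess_functioning = (df[f"fun_{d}"] > df[f"cap_{d}"]).mean()  # the 'illogical' cases
    print(f"{d}: capability > functioning in {excess_capability:.1%}, "
          f"functioning > capability in {excess_functioning:.1%}")

# Share of respondents reporting a divergence on at least one domain
diverged = pd.concat([df[f"cap_{d}"] != df[f"fun_{d}"] for d in domains], axis=1)
print(f"Any divergence: {diverged.any(axis=1).mean():.1%}")
```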

Estimating a constant WTP for a QALY—a mission impossible? The European Journal of Health Economics [PubMed] Published 21st September 2017

The idea of estimating willingness to pay (WTP) for a QALY has fallen out of fashion. It’s a nice idea in principle but, as the title of this paper suggests, it’s not easy to come up with a meaningful answer. One key problem has been that WTP for a QALY is not constant in the number of QALYs being gained – that is, people are willing to pay less (at the margin) for greater QALY gains. But maybe that’s OK. NICE and their counterparts tend not to use a fixed threshold but rather a range: £20,000-£30,000 per QALY, say. So maybe the variability in WTP for a QALY can be reflected in this range. This study explores some of the reasons – including uncertainty – for differences in elicited WTP values for a QALY. A contingent valuation exercise was conducted using a 2014 Internet panel survey of 1,400 Swedish citizens. The survey consisted of 21 questions about respondents’ own health, sociodemographics, prioritisation attitudes, WTP for health improvements, and a societal decision-making task. Respondents were randomly assigned to one of five scenarios with different magnitudes and probabilities of health gain, with yes/no responses for five different WTP ‘bids’. The estimated WTP for a QALY – using the UK EQ-5D-3L tariff – was €17,000. But across the different scenarios, the WTP ranged from €10,600 to over a million euros. Wide confidence intervals abound. The authors’ findings only partially support an assumption of weak scope sensitivity – that more QALYs are worth paying more for – and do not at all support the strong assumption of scope sensitivity, that WTP is proportional to QALY gain. This is what is known as scope bias, and the insensitivity to scope also applied to the variability in uncertainty. The authors also found that using different EQ-5D or VAS tariffs to estimate health state values resulted in variable differences in WTP estimates. No consistent relationships between individuals’ characteristics and their WTP were found, though income and education seemed to be associated with higher willingness to pay across the sample. It isn’t clear what the implications of these findings are, except to reinforce any scepticism you might have about the sociomathematical validity (yes, I’m sticking with that) of the QALY.
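
For the uninitiated, here is a hedged sketch of one standard way to recover WTP from this kind of yes/no bid data – a logit on the bid level, in the spirit of dichotomous-choice contingent valuation. The data layout and the QALY gain figure are assumptions for illustration, not taken from the paper:

```python
# Sketch of dichotomous-choice contingent valuation analysis. Not the authors'
# code; file and variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical: one row per respondent-bid, with 'accept' (0/1) and 'bid' (EUR)
df = pd.read_csv("cv_bids.csv")

logit = smf.logit("accept ~ bid", data=df).fit()
alpha, beta = logit.params["Intercept"], logit.params["bid"]

# Under a linear utility model, mean (= median) WTP is -alpha/beta
mean_wtp = -alpha / beta
print(f"WTP for the scenario's health gain: EUR {mean_wtp:,.0f}")

# Dividing by the scenario's QALY gain (implied by the EQ-5D tariff) gives WTP per QALY
qaly_gain = 0.1  # hypothetical
print(f"WTP per QALY: EUR {mean_wtp / qaly_gain:,.0f}")
```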


Thesis Thursday: Lidia Engel

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Lidia Engel who graduated with a PhD from Simon Fraser University. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Going beyond health-related quality of life for outcome measurement in economic evaluation
Supervisors
David Whitehurst, Scott Lear, Stirling Bryan
Repository link
https://theses.lib.sfu.ca/thesis/etd10264

Your thesis explores the potential for expanding the ‘evaluative space’ in economic evaluation. Why is this important?

I think there are two answers to this question. Firstly, methods for the economic evaluation of health care interventions have existed for a number of years, but these evaluations have mainly been applied to narrowly defined ‘clinical’ interventions, such as drugs. Interventions nowadays are more complex, and their benefits cannot simply be measured in terms of health. You can think of areas such as public health, mental health, social care, and end-of-life care, where interventions may result in broader benefits, such as increased control over daily life, independence, or aspects related to the process of health care delivery. Therefore, I believe there is a need to re-think the way we measure and value outcomes when we conduct an economic evaluation. Secondly, ignoring broader outcomes of health care interventions that go beyond the narrow focus of health-related quality of life can potentially lead to a misallocation of scarce health care resources. Evidence has shown that the choice of outcome measure (such as a health outcome or a broader measure of wellbeing) can have a significant influence on the conclusions drawn from an economic evaluation.

You use both qualitative and quantitative approaches. Was this key to answering your research questions?

I mainly applied quantitative methods in my thesis research. However, Chapter 3 draws upon some qualitative methodology. To gain a better understanding of ‘benefits beyond health’, I came across a novel approach called Critical Interpretive Synthesis. It is similar to meta-ethnography (i.e. a synthesis of qualitative research), with the difference that the synthesis is not of qualitative literature but of methodologically diverse literature. It involves an iterative approach, where searching, sampling, and synthesis go hand in hand. It doesn’t only produce a summary of the existing literature but enables the development of new interpretations that go beyond those originally offered in the literature. I really liked this approach because it enabled me to synthesise the evidence more effectively than a conventional systematic review would have. Defining and applying codes and themes, as is traditionally done in qualitative research, allowed me to organize the general idea of non-health benefits into a coherent thematic framework, which in the end gave me a better understanding of the topic overall.

What data did you analyse and what quantitative methods did you use?

I conducted three empirical analyses in my thesis research, all of which made use of data from the ICECAP measures (ICECAP-O and ICECAP-A). In my first paper, I used data from the ‘Walk the Talk’ (WTT) project to investigate the complementarity of the ICECAP-O and the EQ-5D-5L in a public health context using regression analyses. My second paper used exploratory factor analysis to investigate the extent of overlap between the ICECAP-A and five preference-based health-related quality of life measures, using data from the Multi Instrument Comparison (MIC) project. I am currently finalizing submission of my third empirical analysis, which reports findings from a path analysis using cross-sectional data from a web-based survey. The path analysis explores three outcome measurement approaches (health-related quality of life, subjective wellbeing, and capability wellbeing) through direct and mediated pathways in individuals living with spinal cord injury. Each of the three studies addressed different components of the overall research question and, collectively, they demonstrated the added value of broader outcome measures in economic evaluation when compared with existing preference-based health-related quality of life measures.

Thinking about the different measures that you considered in your analyses, were any of your findings surprising or unexpected?

In my first paper, I found that the ICECAP-O is more sensitive to environmental features (i.e. social cohesion and street connectivity) than the EQ-5D-5L. In light of my second paper, this was not surprising, as the ICECAP-A (a measure for adults rather than older adults) and the EQ-5D-5L measure different constructs and have only limited overlap in their descriptive classification systems. While a similar observation was made when comparing the ICECAP-A with three other preference-based health-related quality of life measures (15D, HUI-3, and SF-6D), a substantial overlap was observed between the ICECAP-A and the AQoL-8D, which suggests that it is possible for broader benefits to be captured by preference-based health-related measures (although some may not consider the AQoL-8D to be exclusively ‘health-related’, despite the label). The findings from the path analysis confirmed the similarities between the ICECAP-A and the AQoL-8D. However, the findings do not imply that the two are interchangeable instruments, as a mediation effect was found that requires further research.

How would you like to see your research inform current practice in economic evaluation? Is the QALY still in good health?

I am aware of the limitations of the QALY, and although there are increasing concerns that the QALY framework does not capture all the benefits of health care interventions, it is important to understand that the evaluative space of the QALY is determined by the dimensions included in preference-based measures. From a theoretical point of view, the QALY can embrace any characteristics that are important for the allocation of health care resources. In practice, however, it seems that QALYs are currently defined by what is measured (e.g. the dimensions and response options of EQ-5D instruments) rather than by their conceptual origin. Therefore, although non-health benefits have been largely ignored when estimating QALYs, one should not dismiss the QALY framework but rather develop appropriate instruments that capture such broader benefits. I believe the findings of my thesis have particular relevance for national HTA bodies that set guidelines for the conduct of economic evaluation. While maintaining methodological consistency is important, the assessment of the real benefits of some health care interventions would be more accurate if we were less prescriptive about which outcome measure to use when conducting an economic evaluation. As my thesis has shown, some preference-based measures already adopt a broad evaluative space but are less frequently used.

Alastair Canaway’s journal round-up for 18th September 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Selection of key health domains from PROMIS® for a generic preference-based scoring system. Quality of Life Research [PubMed] Published 19th August 2017

The US Panel on Cost-Effectiveness recommends the use of QALYs. It doesn’t, however, instruct (unlike in the UK) as to which measure should be used. This leaves the door ajar for both new and established measures. This paper sets about developing a new preference-based measure from the Patient-Reported Outcomes Measurement Information System (PROMIS). PROMIS is a US National Institutes of Health funded suite of person-centred measures of physical, mental, and social health. Across all the PROMIS measures there exist over 70 domains relevant to adult health. For all its promise, the PROMIS system does not produce a summary score amenable to the calculation of QALYs, nor one suited to general descriptive purposes such as measuring HRQL over time. This study aimed to reduce the 70 domains down to a number suitable for valuation. To do this, Delphi methods were used. The Delphi approach seems to be growing in popularity in the health economics world. For those unfamiliar, it essentially involves obtaining the opinions of experts independently and conducting iterative rounds of questioning to reach a consensus (over two or more rounds). In this case, nine health outcomes experts were recruited and presented with ‘all 37 domains’ (no mention is made of how they got from 70 to 37!). They were asked to remove any domains that were not appropriate for inclusion in a general health utility measure or were redundant given another PROMIS domain. If more than seven experts agreed, the domain was removed. Responses were combined and presented back until consensus was reached, leaving 10 domains. A community sample of 50 participants was then used to test for independence of domains using a pairwise independence evaluation test. Participants were given the option of removing a domain they felt was not important to overall HRQL and asked to rate the importance of the remaining domains using a VAS. These findings were used by the research team to whittle the domains down to a final seven: cognitive function – abilities; depression; fatigue; pain interference; physical function; ability to participate in social roles and activities; and sleep disturbance. Many of these are common to existing measures, but I did rather like the inclusion of cognitive function and fatigue – domains that are missing from many measures and, to me, appear important. The next step is valuation. Upon valuation, this will be a promising candidate for use in economic evaluation – particularly in the US, where the PROMIS measurement suite is already established.
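
To make the elimination rule concrete, here is a minimal sketch of a single Delphi round under the paper’s ‘more than seven of nine experts’ criterion. The domain names and votes below are invented for illustration:

```python
# Illustrative Delphi round: a domain is dropped if more than `threshold`
# experts vote to remove it. Domains and votes here are hypothetical.
from collections import Counter

def delphi_round(domains, removal_votes, threshold=7):
    tally = Counter(removal_votes)
    kept = [d for d in domains if tally[d] <= threshold]
    removed = [d for d in domains if tally[d] > threshold]
    return kept, removed

domains = ["pain interference", "fatigue", "depression", "anxiety", "sleep disturbance"]
votes = ["anxiety"] * 8 + ["sleep disturbance"] * 3  # 8 of 9 experts nominate 'anxiety'
kept, removed = delphi_round(domains, votes)
print("kept:", kept)        # 'sleep disturbance' survives (only 3 votes)
print("removed:", removed)  # 'anxiety' dropped; rounds repeat until consensus
```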

Predictive validation and the re-analysis of cost-effectiveness: do we dare to tread? PharmacoEconomics [PubMed] Published 22nd August 2017

PharmacoEconomics treated us to a provocative editorial regarding predictive validation and re-analysis of cost-effectiveness models – a call to arms of sorts. For those (like me) who are not modelling experts, predictive validation (aka 4th order validation) refers to the comparison of model outputs with data collected after the initial analysis of the model. So, essentially, you’re comparing what you modelled would happen with what actually happened. The literature suggests that predictive validation is widely ignored. Its importance is highlighted with a case study in which predictive validity was examined three years after the end of a trial – upon re-analysis, the model fit the prospective data poorly. The model was then revised, leading to a much better fit. Predictive validation can, therefore, be used to identify sources of inaccuracy in models. If predictive validity were examined more routinely, improvements in model quality more generally would be possible. Furthermore, it might be possible to identify specific contexts where poor predictive validity is prevalent and further research is required. The authors highlight advanced cancer as a particularly relevant context, given the uncertainty around extrapolated survival curves. By actively scheduling further data collection and updating the survival curves, we can reduce the uncertainty surrounding the value of high-cost drugs. Predictive validation can also inform other aspects of the modelling process, such as the best choice of time point from which to extrapolate, or credible rates of change in predicted hazards. The authors suggest using expected value of information analysis to identify the technologies with the largest costs of uncertainty, and so prioritise where predictive validity should be assessed. NICE and other reimbursement bodies require continued data collection for ‘some’ new technologies, so the processes are already in place for future studies to be designed and implemented in a way that captures such data and allows later re-analysis. Assessing predictive validity seems eminently sensible, but there are barriers. Money is the obvious issue: extended prospective data collection and re-analysis of models require resources. It does, however, have the potential to save money and improve health in the long run. The authors note that in a recent study they demonstrated that a drug for osteoporosis recommended by Australia’s Pharmaceutical Benefits Advisory Committee was not actually cost-effective when further data were examined. There is clearly value in predictive validation and re-analysis – it’s hard to disagree with the authors, and we should probably be campaigning for longer-term follow-up, re-analysis, and greater acknowledgement of the desirability of predictive validity.
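
As a toy illustration of what such a check involves – comparing the survival a model extrapolated at the original analysis against data observed afterwards – consider the following sketch, where every number is invented:

```python
# Toy predictive validity check: predicted vs subsequently observed survival.
# All numbers are invented for illustration.
import numpy as np

years = np.array([1, 2, 3])
predicted = np.array([0.80, 0.62, 0.50])  # model extrapolation at original analysis
observed = np.array([0.78, 0.55, 0.38])   # follow-up data collected afterwards

errors = observed - predicted
print("error by year:", np.round(errors, 3))
print("mean absolute error:", round(np.abs(errors).mean(), 3))
# A growing error in the tail would flag the extrapolation for re-analysis
```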

How should cost-of-illness studies be interpreted? The Lancet Psychiatry [PubMed] Published 7th September 2017

It’s a good question – cost-of-illness studies are commonplace, but are they useful from a health economics perspective? A comment piece in The Lancet Psychiatry examines this issue using the case study of self-harm and suicide. It focuses on a recent publication by Tsiachristas et al., which examines the hospital resource use and care costs for all presentations of self-harm in a UK hospital. Each episode of self-harm cost £809, which when extrapolated to the UK amounts to £162 million. Over 30% of these costs related to psychological assessments, which, despite being recommended by NICE, only 75% of self-harming patients received. If all self-harming patients received assessments as NICE recommends, another £51 million would be added to the bill. The author raises the question of how useful this information is for health economists. Nearly all cost-of-illness studies end up concluding that i) the illness costs a lot, and ii) money could be saved by reducing or ameliorating the underlying factors that cause it. Is this helpful? Not particularly: by focusing on a single illness, there is no consideration of the opportunity cost. If you spend money preventing one condition, that money displaces resources elsewhere; likewise, resources spent reducing one illness will likely be balanced by increased spending on another. The author highlights this with a thought experiment: “imagine a world where a cost of illness study has been done for every possible disease and that the total cost of illness was aggregated. The counterfactual from such an exercise is a world where nobody gets sick and everybody dies suddenly at some pre-determined age”. Another issue is that, more often than not, cost-of-illness studies conclude that more, not less, should be spent on a problem – in the self-harm example, an extra £51 million on psychological assessments. Similarly, the approach highlights the extra cost of psychological assessments rather than the glaring issue that 25% of those attending hospital for self-harm are not getting the assessments they should. This links into the final point: cost-of-illness studies neglect the benefits being achieved. With the negatives out of the way, there are at least a couple of positives I can think of off the top of my head: i) identification of key cost drivers, and ii) information for use in economic models. The take-home message is that, although there is some use to cost-of-illness studies, from a health economics perspective we (as a field) would probably be better off steering clear.
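
The headline arithmetic is easy to reproduce from the figures quoted above (the episode count is derived from them, not reported here):

```python
# Reproducing the extrapolation arithmetic from the figures quoted above.
unit_cost = 809          # GBP per self-harm presentation (Tsiachristas et al.)
national_cost = 162e6    # GBP, the paper's UK-wide extrapolation

implied_episodes = national_cost / unit_cost
print(f"Implied presentations per year: {implied_episodes:,.0f}")  # ~200,000

assessment_share = 0.30  # 'over 30%' of costs were psychological assessments
print(f"Assessment costs: over GBP {assessment_share * national_cost / 1e6:.0f}m")
```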
