Chris Sampson’s journal round-up for 30th September 2019

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A need for change! A coding framework for improving transparency in decision modeling. PharmacoEconomics [PubMed] Published 24th September 2019

We’ve featured a few papers in recent round-ups that (I assume) will be included in an upcoming themed issue of PharmacoEconomics on transparency in modelling. It’s shaping up to be a good one. The value of transparency in decision modelling has been recognised, but simply making the stuff visible is not enough – it needs to make sense. The purpose of this paper is to help make that achievable.

The authors highlight that the writing of analyses, including coding, involves personal style and preferences. To aid transparency, we need a systematic framework of conventions that make the inner workings of a model understandable to any (expert) user. The paper describes a framework developed by the Decision Analysis in R for Technologies in Health (DARTH) group. The DARTH framework builds on a set of core model components, generalisable to all cost-effectiveness analyses and model structures. There are five components – i) model inputs, ii) model implementation, iii) model calibration, iv) model validation, and v) analysis – and the paper describes the role of each. Importantly, the analysis component can be divided into several parts relating to, for example, sensitivity analyses and value of information analyses.

Based on this framework, the authors provide recommendations for organising and naming files and on the types of functions and data structures required. The recommendations build on conventions established in other fields and in the use of R generally. The authors recommend implementing the model as a set of R functions, and relate general programming recommendations to the context of decision modelling. We’re also introduced to unit testing, which will be unfamiliar to most Excel modellers but which can be implemented relatively easily in R. The roles of various tools are introduced, including RStudio, R Markdown, Shiny, and GitHub.
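
To give a flavour of what ‘functions plus unit tests’ look like in practice, here’s a minimal sketch of my own – the function name and parameters are invented, not code from the DARTH materials – using the testthat package:

```r
# Illustrative only: a transition probability matrix built by a function,
# so that inputs are explicit and the logic is testable.
build_transition_matrix <- function(p_sick, p_recover, p_die) {
  matrix(c(1 - p_sick - p_die, p_sick,                p_die,
           p_recover,          1 - p_recover - p_die, p_die,
           0,                  0,                     1),
         nrow = 3, byrow = TRUE,
         dimnames = list(c("Healthy", "Sick", "Dead"),
                         c("Healthy", "Sick", "Dead")))
}

# A unit test in the style of the testthat package: every row of a
# transition probability matrix must sum to 1.
library(testthat)
test_that("transition matrix rows sum to 1", {
  m <- build_transition_matrix(p_sick = 0.1, p_recover = 0.2, p_die = 0.05)
  expect_equal(unname(rowSums(m)), rep(1, 3))
})
```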

The real value of this work lies in the linked R packages and other online material, which you can use to test out the framework and consider its application to whatever modelling problem you might have. The authors provide an example using a basic Sick-Sicker model, which you can have a play with using the DARTH packages. In combination with the online resources, this is a valuable paper that you should have to hand if you’re developing a model in R.
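
If you’d like a feel for the kind of model in question before opening up the DARTH packages, here’s a stripped-down four-state Sick-Sicker cohort trace; every parameter below is invented for illustration rather than taken from the paper’s example:

```r
# A toy four-state Sick-Sicker cohort model (Healthy, Sick, Sicker, Dead).
# All probabilities and utilities are invented; see the DARTH materials
# for the actual example model and parameters.
states <- c("H", "S1", "S2", "D")
p <- matrix(c(0.845, 0.15, 0,     0.005,   # from Healthy
              0.50,  0.39, 0.105, 0.005,   # from Sick
              0,     0,    0.95,  0.05,    # from Sicker
              0,     0,    0,     1),      # Dead is absorbing
            nrow = 4, byrow = TRUE, dimnames = list(states, states))

n_cycles <- 30
trace <- matrix(NA, nrow = n_cycles + 1, ncol = 4,
                dimnames = list(0:n_cycles, states))
trace[1, ] <- c(1, 0, 0, 0)            # everyone starts Healthy
for (t in 1:n_cycles) {
  trace[t + 1, ] <- trace[t, ] %*% p   # Markov update each cycle
}

# Expected QALYs over the time horizon, with invented state utilities
u <- c(H = 1, S1 = 0.75, S2 = 0.5, D = 0)
qalys <- sum(trace %*% u)
```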

Accounts from developers of generic health state utility instruments explain why they produce different QALYs: a qualitative study. Social Science & Medicine [PubMed] Published 19th September 2019

It’s well known that different preference-based measures of health will generate different health state utility values for the same person. Yet, they continue to be used almost interchangeably. For this study, the authors spoke to people involved in the development of six popular measures: QWB, 15D, HUI, EQ-5D, SF-6D, and AQoL. Their goal was to understand the bases for the development of the measures and to explain why the different measures should give different results.

At least one original developer for each instrument was recruited, along with people involved at later stages of development. Semi-structured interviews were conducted with 15 people, with questions on the background, aims, and criteria for the development of the measure, and on the descriptive system, preference weights, performance, and future development of the instrument.

Five broad topics were identified as being associated with differences in the measures: i) knowledge sources used for conceptualisation, ii) development purposes, iii) interpretations of what makes a ‘good’ instrument, iv) choice of valuation techniques, and v) the context for the development process. The online appendices provide some useful tables that summarise the differences between the measures. The authors distinguish between measures based on ‘objective’ definitions (QWB) and items that people found important (15D). Some prioritised sensitivity (AQoL, 15D), others prioritised validity (HUI, QWB), and several focused on pragmatism (SF-6D, HUI, 15D, EQ-5D). Some instruments had modest goals and opportunistic processes (EQ-5D, SF-6D, HUI), while others had grand goals and purposeful processes (QWB, 15D, AQoL). The use of some measures (EQ-5D, HUI) extended far beyond what the original developers had anticipated. In short, different measures were developed with quite different concepts and purposes in mind, so it’s no surprise that they give different results.

This paper provides some interesting accounts and views on the process of instrument development. It might prove most useful in understanding different measures’ blind spots, which can inform the selection of measures in research, as well as future development priorities.

The emerging social science literature on health technology assessment: a narrative review. Value in Health Published 16th September 2019

Health economics provides a good example of multidisciplinarity, with economists, statisticians, medics, epidemiologists, and plenty of others working together to inform health technology assessment. But I still don’t understand what sociologists are talking about half of the time. Yet, it seems that sociologists and political scientists are busy working on the big questions in HTA, as demonstrated by this paper’s 120 references. So, what are they up to?

This article reports on a narrative review, based on 41 empirical studies. Three broad research themes are identified: i) what drove the establishment and design of HTA bodies? ii) what has been the influence of HTA? and iii) what have been the social and political influences on HTA decisions? Some have argued that HTA was inevitable, while others contend that alternative arrangements were possible. Either way, no two systems are the same and the differences are not easy to explain. It’s important to understand HTA in the context of broader social tendencies and trends, recognising that HTA both influences and is influenced by them. The authors provide a substantial discussion of the role of stakeholders in HTA and the potential for some to attempt to game the system. Uncertainty abounds in HTA; it necessarily requires negotiation, and it limits the extent to which HTA can rely on objectivity and rationality.

Something lacking is a critical history of HTA as a discipline and the question of what HTA is actually good for. There’s also not a lot of work out there on culture and values, which contrasts with medical sociology. The authors suggest that sociologists and political scientists could be more closely involved in HTA research projects. I suspect that such a move would be more challenging for the economists than for the sociologists.

Credits

David Mott’s journal round-up for 16th September 2019

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Opening the ‘black box’: an overview of methods to investigate the decision-making process in choice-based surveys. The Patient [PubMed] Published 5th September 2019

Choice-based surveys using methods such as discrete choice experiments (DCEs) and best-worst scaling (BWS) exercises are increasingly used in health to understand people’s preferences. A lot of time and energy is spent on analysing the data that come out of these surveys, but there is growing interest in better understanding respondents’ decision-making processes. Whilst many will be aware of ‘think aloud’ interviews (often used for piloting), other methods may be less familiar, as they’re not applied frequently in health. That’s where this fascinating paper by Dan Rigby and colleagues comes in. It provides an overview of five different methods of what they call ‘pre-choice process analysis’ of decision-making, describing their application, the state of knowledge, and future research opportunities.

Eye-tracking has recently been used in health research. It’s intuitive and provides an insight into where participants’ focus is (or isn’t). The authors explain that one of its uses is to explore attribute non-attendance (ANA), which occurs when people ignore attributes, either because they’re irrelevant to them or simply because doing so makes the task easier. Surprisingly, it has been suggested that ‘visual ANA’ (not looking at the attribute) doesn’t always align with ‘stated ANA’ (participants stating that they ignored the attribute) – which raises some interesting questions!

However, the real highlight for me was the overview of the use of brain imaging techniques to explore the choices being made in DCEs. One study highlighted by the authors – a DCE about eggs, now at least #2 on my list of bizarre preference study topics after this oddly specific one on Iberian ham – predicted choices from an initial ‘passive viewing’ task using functional magnetic resonance imaging (fMRI). The researchers found that incorporating changes in blood flow (prompted by changes in attribute levels during ‘passive viewing’) into a random utility model accounted for a lot of the variation in willingness to pay for eggs – pretty amazing stuff.
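
To unpack what ‘incorporating blood flow into a random utility model’ can mean in practice, here’s a toy conditional logit on simulated data, in which each alternative’s utility includes a stand-in neural signal alongside price – purely illustrative, and nothing like the study’s actual specification:

```r
# Toy sketch: simulate 500 choice occasions with 3 alternatives each,
# where utility depends on price and a made-up 'neural signal' covariate.
set.seed(42)
n <- 500
price  <- matrix(runif(n * 3, 1, 5), n, 3)
signal <- matrix(rnorm(n * 3), n, 3)            # stand-in for an fMRI response
v_true <- -1.0 * price + 0.8 * signal           # 'true' systematic utility
gumbel <- -log(-log(matrix(runif(n * 3), n, 3)))
choice <- apply(v_true + gumbel, 1, which.max)  # chosen alternative per row

# Negative log-likelihood of a conditional logit with two taste parameters
nll <- function(beta) {
  v <- beta[1] * price + beta[2] * signal
  -sum(v[cbind(1:n, choice)] - log(rowSums(exp(v))))
}
fit <- optim(c(0, 0), nll)   # estimates should land near c(-1, 0.8)
fit$par
```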

Whilst I’ve highlighted the more unusual methods here, after reading this overview I have to admit that I’m an even bigger advocate for the ‘think aloud’ technique now. Although it may have some limitations, the amount of insight offered combined with its practicality is hard to beat. Though maybe I’m biased because I know that I won’t get my hands on any eye-tracking or brain imaging devices any time soon. In any case, I highly recommend that any researchers conducting preference studies give this paper a read as it’s really well written and will surely be of interest.

Disentangling public preferences for health gains at end-of-life: further evidence of no support of an end-of-life premium. Social Science & Medicine [PubMed] Published 21st June 2019

The end-of-life (EOL) policy introduced by NICE in 2009 [PDF] has proven controversial. The policy allows treatments that are not cost-effective within the usual range to be considered for approval, provided that certain criteria are met: specifically, that the treatment targets patients with a short life expectancy (≤24 months), offers a life extension of ≥3 months, and is for a ‘small patient population’. One of the biggest issues with this policy is that it is unclear whether the general population actually supports the idea of valuing health gains (specifically life extensions) at EOL more highly than other health gains.

Numerous academic studies, usually involving some form of stated preference exercise, have been conducted to test whether the public might support this EOL premium. A recent review by Koonal Shah and colleagues summarised the published studies (up to October 2017), highlighting that the evidence is extremely mixed. This recently published Danish study, by Lise Desireé Hansen and Trine Kjær, adds to that literature. The authors conducted an incredibly thorough stated preference exercise to test whether quality of life (QOL) gains and life extensions (LE) at EOL are valued differently from other similarly sized health gains. Not only that, but the study also explored the effects of perspective (social vs. individual), respondent age (18-35 vs. 65+), and initial severity (25% vs. 40% of full QOL) on the results.

Overall, they did not find evidence of support for an EOL premium for QOL gains or for LEs (regardless of perspective), but their results do suggest that QOL gains are preferred over LEs. In some scenarios, there was slightly more support for an EOL premium in the social perspective variant than in the individual perspective variant – which seems quite intuitive. Both age and initial severity had an impact on results, with respondents preferring to treat the young and those with worse QOL at baseline. One of the most interesting results for me came from the subgroup analyses, which suggested that women and those with a relation to a terminally ill patient had a significantly positive preference for EOL gains – but only in the social perspective scenarios.

This is a really well-designed study, which covers a lot of different concepts. This probably doesn’t end the debate on NICE’s use of the EOL criteria – not least because the study wasn’t conducted in England and Wales – but it contributes a lot. I’d consider it a must-read for anyone interested in this area.

How should we capture health state utility in dementia? Comparisons of DEMQOL-Proxy-U and of self- and proxy-completed EQ-5D-5L. Value in Health Published 26th August 2019

Capturing quality of life (QOL) in dementia and obtaining health state utilities is incredibly challenging – something I’ve come to really appreciate recently, having got involved in a EuroQol-funded ‘bolt-ons’ project. The EQ-5D is not always able to detect meaningful changes in cognitive function, so condition-specific preference-based measures (PBMs), such as the DEMQOL, may be preferred. That isn’t the only challenge: in many cases, patients are not in a position to complete the surveys themselves, so proxy reporting is often required, whether by a professional (formal) carer or by a friend or family member (an informal carer). Researchers who want to use a PBM in this population therefore have a lot to consider.

This paper compares the performance of the EQ-5D-5L and the DEMQOL-Proxy when completed by care home residents (EQ-5D-5L only), formal carers, and informal carers. The impressive dataset that the authors use covers 1,004 care home residents across up to three waves and includes a battery of different cognitive and QOL measures. The overall objective was to compare the performance of the EQ-5D-5L and DEMQOL-Proxy across the three respondent groups in terms of i) construct validity, ii) criterion validity, and iii) responsiveness.
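
Setting the paper’s own statistics aside, a responsiveness check of this general kind often boils down to asking whether within-person changes in utility track changes in the target construct. A hypothetical sketch, with invented variable names and simulated data rather than the authors’ analysis:

```r
# Hypothetical illustration of a responsiveness check: do within-resident
# changes in utility track changes in a cognitive score across two waves?
# All data simulated; variable names invented.
set.seed(1)
d <- data.frame(eq5d_w1 = runif(100, 0.2, 1.0),
                cog_w1  = rnorm(100, mean = 20, sd = 5))
d$eq5d_w2 <- pmin(d$eq5d_w1 + rnorm(100, -0.05, 0.10), 1)
d$cog_w2  <- d$cog_w1 + rnorm(100, -1, 2)

# A responsive instrument should show change scores that correlate
# with change in the cognitive measure.
cor(d$eq5d_w2 - d$eq5d_w1, d$cog_w2 - d$cog_w1, method = "spearman")
```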

The authors found that self-reported EQ-5D-5L scores were higher and less responsive to changes in the cognitive measures than proxy-reported scores, but better at capturing residents’ self-reported QOL (based on a non-preference-based measure). It is unclear whether this is a case of adaptation, as seen in many other patient groups, or whether the residents’ cognitive impairments prevent them from reliably assessing their current status. The proxy-reported EQ-5D-5L scores were generally more responsive to changes in the cognitive measures than the DEMQOL-Proxy (irrespective of the type of proxy), which the authors note is probably because the DEMQOL-Proxy focuses more on the emotional impact of dementia than on functional impairment.

Overall, this is a really interesting paper, which highlights the challenges well and illustrates that there is value in collecting these data from both patients and proxies. In terms of the PBM comparison, whilst the authors do not explicitly state it, it does seem that the EQ-5D-5L may have a slight upper hand due to its responsiveness, as well as for pragmatic reasons (the DEMQOL-Proxy has >30 questions). Perhaps a cognition ‘bolt-on’ to the EQ-5D-5L might help to improve the situation in future?

Credits