Chris Sampson’s journal round-up for 18th November 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A conceptual map of health-related quality of life dimensions: key lessons for a new instrument. Quality of Life Research [PubMed] Published 1st November 2019

EQ-5D, SF-6D, HUI3, AQoL, 15D – they’re all used to describe health states for the purpose of estimating health state utility values, to get the ‘Q’ in the QALY. But it’s widely recognised (and evidenced) that they measure different things. This study sought to better understand the challenge by doing two things: i) ‘mapping’ the domains of the different instruments and ii) advising on the domains to be included in a new measure.

The conceptual model described in this paper builds on two standard models of health – the ICF (International Classification of Functioning, Disability, and Health), which is endorsed by the WHO, and the Wilson and Cleary model. The new model is built around four distinctions, which can be used to define the dimensions included in health state utility instruments: cause vs effect, specific vs broad, physical vs psychological, and subjective vs objective. The idea is that each possible dimension of health can relate, with varying levels of precision, to one or the other of these alternatives.

The authors argue that, conveniently, cause/effect and specific/broad map to one another, as do physical/psychological and objective/subjective. The framework is presented visually, which makes it easy to interpret – I recommend you take a look. Each of the five instruments previously mentioned is mapped to the framework, with the HUI and 15D coming out as ‘symptom’ oriented, EQ-5D and SF-6D as ‘functioning’ oriented, and the AQoL as a hybrid of a health and well-being instrument. Based (it seems) on the Personal Wellbeing Index, the authors also include two social dimensions in the framework, which interact with the health domains. Based on the frequency with which dimensions are included in existing instruments, the authors recommend that a new measure should include three physical dimensions (mobility, self-care, pain), three mental health dimensions (depression, vitality, sleep), and two social domains (personal relationships, social isolation).

This framework makes no sense to me. The main problem is that none of the four distinctions hold water, let alone stand up to being mapped linearly to one another. Take pain as an example. It could be measured subjectively or objectively. It’s usually considered a physical matter, but psychological pain is no less meaningful. It may be a ‘causal’ symptom, but there is little doubt that it matters in and of itself as an ‘effect’. The authors themselves even offer up a series of examples of where the distinctions fall down.

It would be nice if this stuff could be drawn up on a two-dimensional plane, but it isn’t that simple. In addition to oversimplifying complex ideas, I don’t think the authors have fully recognised the level of complexity. For instance, the work seems to be inspired – at least in part – by a desire to describe health state utility instruments in relation to subjective well-being (SWB). But the distinction between health state utility instruments and SWB isn’t simply a matter of scope. Health state utility instruments (as we use them) are about valuing states in relation to preferences, whereas SWB is about experienced utility. That’s a far more important and meaningful distinction than the distinction between symptoms and functioning.

Careless costs related to inefficient technology used within NHS England. Clinical Medicine Journal [PubMed] Published 8th November 2019

This little paper – barely even a single page – was doing the rounds on Twitter. The author was inspired by some frustration in his day job, waiting for the IT to work. We can all relate to that. This brief analysis tallies what the author calls ‘careless costs’ – vaguely defined as time spent by an NHS employee on activity that does not relate to patient care. Supposing that all doctors in the English NHS wasted an average of 10 minutes per day on such activities, it would cost over £143 million (per year, I assume) based on current salaries. The implication is that a little bit of investment could result in massive savings.
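The shape of the calculation is simple enough to sketch. The figures below (workforce size, hourly rate, working days) are my own illustrative assumptions, not the paper’s inputs, but they show how quickly small daily losses aggregate to a headline number of this order:

```python
# Back-of-the-envelope 'careless costs' arithmetic.
# All inputs are illustrative assumptions, not the paper's figures.

def careless_cost(n_doctors, minutes_per_day, hourly_rate, working_days):
    """Annual aggregate cost of a small daily time loss across a workforce."""
    hours_lost_per_year = (minutes_per_day / 60) * working_days
    return n_doctors * hours_lost_per_year * hourly_rate

# Assumed: ~120,000 doctors, £32/hour average, 225 working days a year
annual_cost = careless_cost(120_000, 10, 32.0, 225)
print(f"£{annual_cost:,.0f} per year")  # prints "£144,000,000 per year"
```

Of course, as argued above, treating each of those 10 minutes as fully productive patient-contact time is exactly where this kind of estimate goes wrong.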

This really bugs me, for at least two reasons. First, it is normal for anybody in any profession to have a bit of downtime. Nobody operates at maximum productivity for every minute of every day. If the doctor didn’t have their downtime waiting for a PC to boot, it would be spent queuing in Costa, or having a nice relaxed wee. Probably both. Those 10 minutes that are displaced cannot be considered equivalent in value to 10 minutes of patient contact time. The second reason is that there is no intervention that can fix this problem at little or no cost. Investments cost money. And if perfect IT systems existed, we wouldn’t all find these ‘careless costs’ so familiar. No doubt, the NHS lags behind, but the potential savings of improvement may very well be closer to zero than to the estimates in this paper.

When it comes to clinical impacts, people insist on being able to identify causal improvements from clearly defined interventions or changes. But when it comes to costs, too many people are confident in throwing around huge numbers of speculative origin.

Socioeconomic disparities in unmet need for student mental health services in higher education. Applied Health Economics and Health Policy [PubMed] Published 5th November 2019

In many countries, the size of the student population is growing, and this population seems to have a high level of need for mental health services. There are a variety of challenges in this context that make it an interesting subject for health economists to study (which is why I do), including the fact that universities are often the main providers of services. If universities are going to provide the right services and reach the right people, a better understanding of who needs what is required. This study contributes to this challenge.

The study is set in the context of higher education in Ireland. If you have no idea how higher education is organised in Ireland, and have an interest in mental health, then the Institutional Context section of this paper is worth reading in its own right. The study reports on findings from a national survey of students. This analysis is a secondary analysis of data collected for the primary purpose of eliciting students’ preferences for counselling services, which has been described elsewhere. In this paper, the authors report on supplementary questions, including measures of psychological distress and use of mental health services. Responses from 5,031 individuals, broadly representative of the population, were analysed.

Around 23% of respondents were classified as having unmet need for mental health services on the basis that they reported both a) severe distress and b) not using services. Arguably, it’s a sketchy definition of unmet need, but it seems reasonable for the purpose of this analysis. The authors regress this binary indicator of unmet need on a selection of sociodemographic and individual characteristics. The model is also run for the binary indicator of need only (rather than unmet need).
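The need/unmet-need indicators are just a conjunction of two survey responses. A minimal sketch, in which the field names and the distress cut-off are hypothetical rather than taken from the study:

```python
# Sketch of the classification described above:
# need = severe distress; unmet need = severe distress AND no service use.
# Field names and the cut-off are hypothetical, not the study's.

SEVERE_DISTRESS_CUTOFF = 13  # assumed threshold on a distress score

def classify(respondent):
    need = respondent["distress_score"] >= SEVERE_DISTRESS_CUTOFF
    unmet_need = need and not respondent["uses_services"]
    return {"need": need, "unmet_need": unmet_need}

sample = [
    {"distress_score": 15, "uses_services": False},  # need, unmet
    {"distress_score": 15, "uses_services": True},   # need, met
    {"distress_score": 5,  "uses_services": False},  # no need
]
unmet_rate = sum(classify(r)["unmet_need"] for r in sample) / len(sample)
```

Both indicators can then serve as the dependent variable in the regressions, which is how the authors separate higher need from differential access.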

The main finding is that people from lower social classes are more likely to have unmet need, but that this is only because these people have a higher level of need. That is, people from less well-off backgrounds are more likely to have mental health problems but are no less likely to have their need met. So this is partly good news and partly bad news. It seems that there are no additional barriers to services in Ireland for students from a lower social class. But unmet need is still high and – with more inclusive university admissions – likely to grow. Based on the analyses, the authors recommend that universities could reach out to male students, who have greater unmet need.

Chris Sampson’s journal round-up for 30th September 2019

A need for change! A coding framework for improving transparency in decision modeling. PharmacoEconomics [PubMed] Published 24th September 2019

We’ve featured a few papers in recent round-ups that (I assume) will be included in an upcoming themed issue of PharmacoEconomics on transparency in modelling. It’s shaping up to be a good one. The value of transparency in decision modelling has been recognised, but simply making the stuff visible is not enough – it needs to make sense. The purpose of this paper is to help make that achievable.

The authors highlight that the writing of analyses, including coding, involves personal style and preferences. To aid transparency, we need a systematic framework of conventions that make the inner workings of a model understandable to any (expert) user. The paper describes a framework developed by the Decision Analysis in R for Technologies in Health (DARTH) group. The DARTH framework builds on a set of core model components, generalisable to all cost-effectiveness analyses and model structures. There are five components – i) model inputs, ii) model implementation, iii) model calibration, iv) model validation, and v) analysis – and the paper describes the role of each. Importantly, the analysis component can be divided into several parts relating to, for example, sensitivity analyses and value of information analyses.

Based on this framework, the authors provide recommendations for organising and naming files and on the types of functions and data structures required. The recommendations build on conventions established in other fields and in the use of R generally. The authors recommend the implementation of functions in R, and relate general recommendations to the context of decision modelling. We’re also introduced to unit testing, which will be unfamiliar to most Excel modellers but which can be relatively easily implemented in R. The roles of various tools are introduced, including R Studio, R Markdown, Shiny, and GitHub.
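To give a flavour of the unit-testing idea, here is a minimal sketch – in Python for brevity, though the DARTH framework itself works in R with R’s testing tools. The model function and its checks are my own illustration, not from the paper:

```python
# The unit-testing idea: small, automatic checks on model components,
# so that a broken parameter change fails loudly rather than silently.
# This 3-state transition matrix is illustrative, not from the paper.

def transition_matrix(p_sick, p_die):
    """Hypothetical 3-state model (healthy, sick, dead); rows are distributions."""
    return [
        [1 - p_sick - p_die, p_sick, p_die],  # from healthy
        [0.0, 1 - p_die, p_die],              # from sick
        [0.0, 0.0, 1.0],                      # from dead (absorbing)
    ]

def test_transition_matrix_is_valid():
    m = transition_matrix(p_sick=0.1, p_die=0.05)
    for row in m:
        assert abs(sum(row) - 1.0) < 1e-12   # each row sums to one
    assert all(0.0 <= p <= 1.0 for row in m for p in row)

test_transition_matrix_is_valid()  # an invalid model change would raise here
```

In Excel, checks like these live (at best) in ad hoc cells that nobody re-runs; in a scripted model they run on every change, which is a big part of the transparency argument.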

The real value of this work lies in the linked R packages and other online material, which you can use to test out the framework and consider its application to whatever modelling problem you might have. The authors provide an example using a basic Sick-Sicker model, which you can have a play with using the DARTH packages. In combination with the online resources, this is a valuable paper that you should have to hand if you’re developing a model in R.
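If you haven’t met the Sick-Sicker model before, it’s a four-state Markov cohort model (healthy, sick, sicker, dead). A minimal sketch of the cohort trace it produces – with invented transition probabilities, since the DARTH materials supply the real parameterisations:

```python
# Minimal cohort trace for a Sick-Sicker-style Markov model.
# States: H (healthy), S1 (sick), S2 (sicker), D (dead).
# Transition probabilities are invented for illustration only.

STATES = ["H", "S1", "S2", "D"]
P = {  # P[from][to]; each row sums to 1
    "H":  {"H": 0.845, "S1": 0.15, "S2": 0.0,  "D": 0.005},
    "S1": {"H": 0.5,   "S1": 0.39, "S2": 0.1,  "D": 0.01},
    "S2": {"H": 0.0,   "S1": 0.0,  "S2": 0.95, "D": 0.05},
    "D":  {"H": 0.0,   "S1": 0.0,  "S2": 0.0,  "D": 1.0},
}

def run_trace(n_cycles, start=None):
    """Propagate the cohort distribution forward one cycle at a time."""
    trace = [start or {"H": 1.0, "S1": 0.0, "S2": 0.0, "D": 0.0}]
    for _ in range(n_cycles):
        prev = trace[-1]
        trace.append({s: sum(prev[f] * P[f][s] for f in STATES) for s in STATES})
    return trace

trace = run_trace(10)
# Every row of the trace still sums to 1: the whole cohort is always somewhere.
```

Costs and QALYs then come from weighting each cycle’s state occupancy by state-specific values – which is where the framework’s separation of inputs, implementation, and analysis earns its keep.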

Accounts from developers of generic health state utility instruments explain why they produce different QALYs: a qualitative study. Social Science & Medicine [PubMed] Published 19th September 2019

It’s well known that different preference-based measures of health will generate different health state utility values for the same person. Yet, they continue to be used almost interchangeably. For this study, the authors spoke to people involved in the development of six popular measures: QWB, 15D, HUI, EQ-5D, SF-6D, and AQoL. Their goal was to understand the bases for the development of the measures and to explain why the different measures should give different results.

At least one original developer for each instrument was recruited, along with people involved at later stages of development. Semi-structured interviews were conducted with 15 people, with questions on the background, aims, and criteria for the development of the measure, and on the descriptive system, preference weights, performance, and future development of the instrument.

Five broad topics were identified as being associated with differences in the measures: i) knowledge sources used for conceptualisation, ii) development purposes, iii) interpretations of what makes a ‘good’ instrument, iv) choice of valuation techniques, and v) the context for the development process. The online appendices provide some useful tables that summarise the differences between the measures. The authors distinguish between measures based on ‘objective’ definitions (QWB) and items that people found important (15D). Some prioritised sensitivity (AQoL, 15D), others prioritised validity (HUI, QWB), and several focused on pragmatism (SF-6D, HUI, 15D, EQ-5D). Some instruments had modest goals and opportunistic processes (EQ-5D, SF-6D, HUI), while others had grand goals and purposeful processes (QWB, 15D, AQoL). The use of some measures (EQ-5D, HUI) extended far beyond what the original developers had anticipated. In short, different measures were developed with quite different concepts and purposes in mind, so it’s no surprise that they give different results.

This paper provides some interesting accounts and views on the process of instrument development. It might prove most useful in understanding different measures’ blind spots, which can inform the selection of measures in research, as well as future development priorities.

The emerging social science literature on health technology assessment: a narrative review. Value in Health Published 16th September 2019

Health economics provides a good example of multidisciplinarity, with economists, statisticians, medics, epidemiologists, and plenty of others working together to inform health technology assessment. But I still don’t understand what sociologists are talking about half of the time. Yet, it seems that sociologists and political scientists are busy working on the big questions in HTA, as demonstrated by this paper’s 120 references. So, what are they up to?

This article reports on a narrative review, based on 41 empirical studies. Three broad research themes are identified: i) what drove the establishment and design of HTA bodies? ii) what has been the influence of HTA? and iii) what have been the social and political influences on HTA decisions? Some have argued that HTA is inevitable, while others have argued that there are alternative arrangements. Either way, no two systems are the same and it is not easy to explain differences. It’s important to understand HTA in the context of other social tendencies and trends, and to recognise that HTA both influences and is influenced by these. The authors provide a substantial discussion on the role of stakeholders in HTA and the potential for some to attempt to game the system. Uncertainty abounds in HTA and this necessarily requires negotiation and acts as a limit on the extent to which HTA can rely on objectivity and rationality.

Something lacking is a critical history of HTA as a discipline and the question of what HTA is actually good for. There’s also not a lot of work out there on culture and values, which contrasts with medical sociology. The authors suggest that sociologists and political scientists could be more closely involved in HTA research projects. I suspect that such a move would be more challenging for the economists than for the sociologists.

36th EuroQol Plenary Meeting

The 36th EuroQol Plenary Meeting will be held on 18-21 September 2019 in Brussels, Belgium.

  • 10 April 2019: Deadline for submitting abstracts
  • 11 April – 21 April 2019: Review and selection of abstracts
  • 29 April 2019: Abstract acceptance notification
  • 12 June 2019: Deadline for submitting papers and posters
  • 13 June – 26 June 2019: Review of submitted papers and posters
  • 8 July 2019: Papers and posters published on EuroQol members’ website