Chris Sampson’s journal round-up for 18th November 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A conceptual map of health-related quality of life dimensions: key lessons for a new instrument. Quality of Life Research [PubMed] Published 1st November 2019

EQ-5D, SF-6D, HUI3, AQoL, 15D – they’re all used to describe health states for the purpose of estimating health state utility values, to get the ‘Q’ in the QALY. But it’s widely recognised (and evidenced) that they measure different things. This study sought to better understand the challenge by doing two things: i) ‘mapping’ the domains of the different instruments and ii) advising on the domains to be included in a new measure.
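For anyone newer to the field, the ‘Q’ adjustment is simple arithmetic once the utility values exist. A minimal sketch, with made-up utility weights and durations (not taken from any instrument’s tariff):

```python
# QALYs = sum over periods of (utility weight x years lived at that weight).
# The utility values and durations below are illustrative only.
def qalys(profile):
    """profile: list of (utility, years) pairs."""
    return sum(u * t for u, t in profile)

# e.g. 2 years at utility 0.8, then 3 years at utility 0.6
print(round(qalys([(0.8, 2), (0.6, 3)]), 2))  # prints 3.4
```

The instruments discussed here disagree about where those utility weights should come from, which is precisely the problem the paper tries to map.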

The conceptual model described in this paper builds on two standard models of health – the ICF (International Classification of Functioning, Disability, and Health), which is endorsed by the WHO, and the Wilson and Cleary model. The new model is built around four distinctions, which can be used to define the dimensions included in health state utility instruments: cause vs effect, specific vs broad, physical vs psychological, and subjective vs objective. The idea is that each possible dimension of health can relate, with varying levels of precision, to one or the other of these alternatives.

The authors argue that, conveniently, cause/effect and specific/broad map to one another, as do physical/psychological and objective/subjective. The framework is presented visually, which makes it easy to interpret – I recommend you take a look. Each of the five instruments previously mentioned is mapped to the framework, with the HUI and 15D coming out as ‘symptom’ oriented, EQ-5D and SF-6D as ‘functioning’ oriented, and the AQoL as a hybrid of a health and well-being instrument. Based (it seems) on the Personal Wellbeing Index, the authors also include two social dimensions in the framework, which interact with the health domains. Based on the frequency with which dimensions are included in existing instruments, the authors recommend that a new measure should include three physical dimensions (mobility, self-care, pain), three mental health dimensions (depression, vitality, sleep), and two social domains (personal relationships, social isolation).

This framework makes no sense to me. The main problem is that none of the four distinctions hold water, let alone stand up to being mapped linearly to one another. Take pain as an example. It could be measured subjectively or objectively. It’s usually considered a physical matter, but psychological pain is no less meaningful. It may be a ‘causal’ symptom, but there is little doubt that it matters in and of itself as an ‘effect’. The authors themselves even offer up a series of examples of where the distinctions fall down.

It would be nice if this stuff could be drawn up on a two-dimensional plane, but it isn’t that simple. In addition to oversimplifying complex ideas, I don’t think the authors have fully recognised the level of complexity. For instance, the work seems to be inspired – at least in part – by a desire to describe health state utility instruments in relation to subjective well-being (SWB). But the distinction between health state utility instruments and SWB isn’t simply a matter of scope. Health state utility instruments (as we use them) are about valuing states in relation to preferences, whereas SWB is about experienced utility. That’s a far more important and meaningful distinction than the distinction between symptoms and functioning.

Careless costs related to inefficient technology used within NHS England. Clinical Medicine Journal [PubMed] Published 8th November 2019

This little paper – barely even a single page – was doing the rounds on Twitter. The author was inspired by some frustration in his day job, waiting for the IT to work. We can all relate to that. This brief analysis tots up what the author calls ‘careless costs’, vaguely defined as time spent by NHS employees on activity that does not relate to patient care. Supposing that all doctors in the English NHS wasted an average of 10 minutes per day on such activities, it would cost over £143 million (per year, I assume) based on current salaries. The implication is that a little bit of investment could result in massive savings.
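The shape of the sum is easy to reconstruct. Every number below is my own assumption for illustration, not a figure from the paper:

```python
# Back-of-envelope 'careless costs' calculation in the spirit of the paper.
# All inputs are illustrative assumptions, not the paper's actual figures.
def annual_careless_cost(n_staff, minutes_wasted_per_day,
                         cost_per_minute, working_days=225):
    return n_staff * minutes_wasted_per_day * cost_per_minute * working_days

# e.g. 110,000 doctors, 10 minutes/day, £0.60 of salary per minute
cost = annual_careless_cost(110_000, 10, 0.60)
print(f"£{cost:,.0f}")  # roughly £149 million under these assumptions
```

Which is exactly the trouble: multiply a few plausible-sounding inputs together and you can generate an enormous headline number, regardless of whether those wasted minutes have any recoverable value.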

This really bugs me, for at least two reasons. First, it is normal for anybody in any profession to have a bit of downtime. Nobody operates at maximum productivity for every minute of every day. If the doctor didn’t have their downtime waiting for a PC to boot, it would be spent queuing in Costa, or having a nice relaxed wee. Probably both. Those 10 minutes that are displaced cannot be considered equivalent in value to 10 minutes of patient contact time. The second reason is that there is no intervention that can fix this problem at little or no cost. Investments cost money. And if perfect IT systems existed, we wouldn’t all find these ‘careless costs’ so familiar. No doubt, the NHS lags behind, but the potential savings of improvement may very well be closer to zero than to the estimates in this paper.

When it comes to clinical impacts, people insist on being able to identify causal improvements from clearly defined interventions or changes. But when it comes to costs, too many people are confident in throwing around huge numbers of speculative origin.

Socioeconomic disparities in unmet need for student mental health services in higher education. Applied Health Economics and Health Policy [PubMed] Published 5th November 2019

In many countries, the size of the student population is growing, and this population seems to have a high level of need for mental health services. There are a variety of challenges in this context that make it an interesting subject for health economists to study (which is why I do), including the fact that universities are often the main providers of services. If universities are going to provide the right services and reach the right people, a better understanding of who needs what is required. This study contributes to this challenge.

The study is set in the context of higher education in Ireland. If you have no idea how higher education is organised in Ireland, and have an interest in mental health, then the Institutional Context section of this paper is worth reading in its own right. The study reports on findings from a national survey of students. This analysis is a secondary analysis of data collected for the primary purpose of eliciting students’ preferences for counselling services, which has been described elsewhere. In this paper, the authors report on supplementary questions, including measures of psychological distress and use of mental health services. Responses from 5,031 individuals, broadly representative of the population, were analysed.

Around 23% of respondents were classified as having unmet need for mental health services on the basis that they reported both a) severe distress and b) no use of services. Arguably, it’s a sketchy definition of unmet need, but it seems reasonable for the purpose of this analysis. The authors regress this binary indicator of unmet need on a selection of sociodemographic and individual characteristics. The model is also run for the binary indicator of need only (rather than unmet need).
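The indicator itself is just a conjunction. A minimal sketch, with a hypothetical distress cut-off and field names of my own invention:

```python
# Sketch of the paper's binary 'unmet need' indicator:
# unmet need = severe distress AND no service use.
# The cut-off value and names here are hypothetical, not the study's.
SEVERE_DISTRESS_CUTOFF = 13  # assumed threshold on a distress scale

def has_unmet_need(distress_score, used_services):
    severe = distress_score >= SEVERE_DISTRESS_CUTOFF
    return severe and not used_services

print(has_unmet_need(15, False))  # True: severe distress, no service use
print(has_unmet_need(15, True))   # False: need is being met
```

Defined this way, anyone in distress who has any contact with services counts as ‘met’, which is part of why I call the definition sketchy.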

The main finding is that people from lower social classes are more likely to have unmet need, but that this is only because these people have a higher level of need. That is, people from less well-off backgrounds are more likely to have mental health problems but are no less likely to have their need met. So this is partly good news and partly bad news. It seems that there are no additional barriers to services in Ireland for students from a lower social class. But unmet need is still high and – with more inclusive university admissions – likely to grow. Based on the analyses, the authors suggest that universities could reach out to male students, who have greater unmet need.

Simon McNamara’s journal round-up for 24th June 2019

Manipulating the 5 dimensions of the EuroQoL instrument: the effects on self-reporting actual health and valuing hypothetical health states. Medical Decision Making [PubMed] Published 4th June 2019

EQ-5D is the Rocky Balboa of health economics. A left-hook here, a jab there, vicious undercuts straight to the chin – it takes the hits, it never stays down. Every man and his dog is ganging up on it, yet it still stands, proudly resolute in its undefeated record.

“When you are the champ,” it thinks to itself, “everyone wants a piece of you.” The door opens. Out of the darkness emerge four mysterious figures. “No… not…”, the instrument stumbles over its words. A bead of sweat rolls slowly down its glistening forehead. Its thumping heartbeat pierces the silence like a drum being thrashed by spear-wielding members of an ancient tribe. “It can’t be… No.” A clear, precise voice emerges from the darkness: “taken at face value”, it states, “our results suggest that economic evaluations that use EQ-5D-5L are systematically biased.” EQ-5D stares blankly, its pupils dilated. It responds, “I’ve been waiting for you”. The gloom clears. Tsuchiya et al (2019) stand there proudly: “bring it on… punk”.

The first paper in this week’s round-up is a surgical probing of a sample of potential issues with EQ-5D. Whilst the above paragraph contains a fair amount of poetic license (read: this is the product of an author who would rather be writing dystopian health-economics short stories than doing their actual work), this paper by Tsuchiya et al. does seem to land a number of strong blows squarely on the chin of EQ-5D. The authors employ a large discrete choice experiment (n=2,494 members of the UK general public) to explore the impact of three issues on the way people both report and value health. Specifically: (1) the order in which the five dimensions are presented; (2) the use of composite dimensions (dimensions that pool two things – e.g. pain or discomfort) rather than separate dimensions; (3) “bolting-off” domains (the reverse of a bolt-on: removing domains from the EQ-5D).

If you are interested in these issues, I suggest you read the paper in full. In brief, the authors find that splitting anxiety/depression into two dimensions had a significant effect on the way people reported their health; that splitting level 5 of the pain/discomfort and anxiety/depression dimensions (e.g. I have extreme pain or discomfort) into individual dimensions significantly impacted the way people valued health; and that “bolting off” dimensions impacted valuation of the remaining dimensions. Personally, I think the composite domain findings are most interesting here. The authors find that extreme pain/discomfort is perceived as being a more severe state than extreme discomfort alone, and similarly, that being extremely depressed/anxious is perceived as a more severe state than simply being extremely anxious. The authors suggest this means the EQ-5D-5L may be systematically biased, as an individual who reports extreme discomfort (or anxiety) will have their health state valued based upon the composite domains for each of these, and subsequently have the severity of their health state over-estimated.

I like this paper, and think it has a lot to contribute to the refinement of EQ-5D, and the development of new instruments. I suggest the champ uses Tsuchiya et al as a sparring partner, gets back to the gym and works on some new moves – I sense a training montage coming on.

Methods for public health economic evaluation: A Delphi survey of decision makers in English and Welsh local government. Health Economics [PubMed] Published 7th June 2019

Imagine the government in your local city is considering a major new public health initiative. Politicians plan to demolish a number of out-of-date social housing blocks in deprived communities and build 10,000 new high-quality homes in their place. This will cost a significant amount of money and, as a result, you have been asked to do an economic evaluation of this intervention. How would you go about doing this?

This is clearly a complicated task. You are unlikely to find a randomised controlled trial on which to base your evaluation, the costs and benefits of the programme are likely to fall on multiple sectors, and you will likely have to balance health gains with a wide range of other non-health outcomes (e.g. reductions in crime). If you somehow managed to model the impact of the intervention perfectly, you would then be faced with the challenge of how to value these benefits. Equally, you would have to consider whether or not to weight the benefits of this programme more highly than programmes in alternative parts of the city, because it benefits people in deprived communities – note that inequalities in health seem to be a much larger issue in public health than in ‘normal health’ (i.e. the bread and butter of health economic evaluation). This complexity, and concern for inequalities, makes public health economic evaluation a completely different beast to traditional economic evaluation. This has led some to question the value of QALY-based cost-utility analysis in public health, and to calls for methods that better meet the needs of the field.

The second paper in this week’s round-up contributes to the development of these methods, by providing information on what public health decision makers in England and Wales think about different economic evaluation methodologies. The authors fielded an online, two-round, Delphi-panel study featuring 26 to 36 statements (rounds 1 and 2, respectively). For each statement, participants were asked to rate their level of agreement on a five-point scale (e.g. 1 = strongly agree and 5 = strongly disagree). In the first round, participants (n=66) simply responded to the statements, and in the second, they (n=29) were presented with the median response from the prior round, and asked to consider their response in light of this feedback. The statements tested covered a wide range of issues, including: the role distributional concerns should play in public health economic evaluation (e.g. economic evaluation should formally weight outcomes by population subgroup); the type of outcomes considered (e.g. economic evidence should use a single outcome that captures length of life and quality of life); and, the budgets to be considered (e.g. economic evaluation should take account of multi-sectoral budgets available).
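The round-2 feedback step is simple to sketch. The statements and scores below are invented, but the mechanics are just a median per statement:

```python
from statistics import median

# Sketch of the Delphi round-2 feedback step: panellists see the median
# round-1 response for each statement. All data here are invented.
round1 = {
    "weight outcomes by subgroup": [1, 2, 2, 3, 1],
    "single combined outcome":     [4, 5, 3, 4, 4],
}
feedback = {stmt: median(scores) for stmt, scores in round1.items()}
print(feedback)  # medians shown back to panellists in round 2
```

Feeding the median back like this is what pushes a Delphi panel towards consensus across rounds, rather than simply averaging one-off opinions.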

Interestingly, the decision-makers rejected the idea of focusing solely on maximising outcomes (the current norm for health economic evaluations), and supported placing an equal focus on minimising inequality and maximising outcomes. Furthermore, they supported formal weighting of outcomes by population subgroup and the use of multiple outcomes to capture health, wellbeing and broader outcomes, but did not support the use of a single outcome that captures well-being gain. These findings suggest cost-consequence analysis may provide a better fit to the needs of these decision makers than simply attempting to apply the QALY model in public health – particularly if augmented by some form of multi-criteria decision analysis (MCDA) that can reflect distributional concerns and allow comparison across outcome types. I think this is a great paper and expect to be citing it for years to come.

I AM IMMORTAL. Economic Inquiry [RePEc] Published 16th November 2016

I love this paper. It isn’t a recent one, but it hasn’t been covered in the AHE blog before, and I think everyone should know about it, so – luckily for you – it has made it in to this week’s round-up.

In this groundbreaking work, Riccardo Trezzi fits a series of “state of the art”, complex, econometric models to his own electrocardiogram (ECG) signal – a measure of the electrical function of the heart. He then compares these models, identifies the one that best fits his data, and uses the model to predict his future ECG signal, and subsequently his life expectancy. This provides an astonishing result – “the n steps ahead forecast remains bounded and well above zero even after one googol period, implying that my life expectancy tends to infinite. I therefore conclude that I am immortal”.
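You can reproduce the joke in miniature without any econometrics at all: fit a naive periodic model to a fake heartbeat and extrapolate. The signal and ‘model’ below are entirely invented for illustration:

```python
import math

# Parody in miniature: a few seconds of a fake 'heartbeat' signal, a naive
# seasonal model, and an absurdly long forecast horizon.
signal = [1.0 + 0.5 * math.sin(2 * math.pi * t / 10) for t in range(100)]

period = 10  # 'estimated' cycle length (assumed known here)

def forecast(t):
    # Naive seasonal forecast: repeat the last observed cycle forever.
    return signal[len(signal) - period + (t % period)]

# Even a googol steps ahead, the forecast stays bounded above zero...
horizon = 10**100
print(forecast(horizon) > 0)  # True: hence, 'immortality'
```

A model that only ever repeats its input can never predict the one event that matters, which is the whole point of the parody.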

I think this is genius. If you haven’t already realised the point of the paper by the time you have reached this part of my write-up, I suggest you think very carefully about the face-validity of this result. If you still don’t get it after that, have a look at the note on the front page – specifically the bit that says “this paper is intended to be a joke”. If you still don’t get it – the author measured their heart activity for 10 seconds, and then applied lots of complex statistical methods, which (obviously) when extrapolated suggested his heart would keep beating forever, and subsequently that he would live forever.

Whilst the paper is a parody, it makes an important point. If we fit models to data, and attempt to predict the future without considering external evidence, we may well make a hash of that prediction – despite the apparent sophistication of our econometric methods. This is clearly an extreme example, but resonates with me, because this is what many people continue to do when modelling oncology data. This is certainly less prevalent than it was a few years ago, and I expect it will become a thing of the past, but for now, whenever I meet someone who does this, I will be sure to send them a copy of this paper. That being said, as far as I am aware the author is still alive, so maybe he will have the last laugh – perhaps even the last laugh of all of humankind if his model is to be believed.

Chris Sampson’s journal round-up for 17th June 2019

Mental health: a particular challenge confronting policy makers and economists. Applied Health Economics and Health Policy [PubMed] Published 7th June 2019

This paper has a bad title. You’d never guess that its focus is on the ‘inconsistency of preferences’ expressed by users of mental health services. The idea is that people experiencing certain mental health problems (e.g. depression, conduct disorders, ADHD) may express different preferences during acute episodes. Preference inconsistency, the author explains, can result in failures in prediction (because behaviour may contradict expectations) and failures in evaluation (because… well, this is a bit less clear). Because of preference inconsistency, a standard principal-agent model cannot apply to treatment decisions. Conventional microeconomic theory cannot apply. If this leaves you wondering “so what has this got to do with economists?” then you’re not alone. The author of this article believes that our role is to identify suitable agents who can interpret patients’ inconsistent preferences and make appropriate decisions on their behalf.

But, after introducing this challenge, the framing of the issue seems to change and the discussion becomes about finding an agent who can determine a patient’s “true preferences” from “conflicting statements”. That seems to me to be a bit different from the issue of ‘inconsistent preferences’, and the phrase “true preferences” should raise an eyebrow from any sceptical economist. From here, the author describes some utility models of perfect agency and imperfect agency – the latter taking account of the agent’s opportunity cost of effort. The models include error in judging whether the patient is exhibiting ‘true preferences’ and the strength of the patient’s expression of preference. Five dimensions of preference with respect to treatment are specified: when, what, who, how, and where. Eight candidate agents are specified: family member, lay helper, worker in social psychiatry, family physician, psychiatrist/psychologist, health insurer, government, and police/judge. The knowledge level of each agent in each domain is surmised and related to the precision of estimates for the utility models described. The author argues that certain agents are better at representing a patient’s ‘true preferences’ within certain domains, and that no candidate agent will serve an optimal role in every domain. For instance, family members are likely to be well-placed to make judgements with little error, but they will probably have a higher opportunity cost than care professionals.

The overall conclusion that different agents will be effective in different contexts seems logical, and I support the view of the author that economists should dedicate themselves to better understanding the incentives and behaviours of different agents. But I’m not convinced by the route to that conclusion.

Exploring the impact of adding a respiratory dimension to the EQ-5D-5L. Medical Decision Making [PubMed] Published 16th May 2019

I’m currently working on a project to develop and test EQ-5D bolt-ons for cognition and vision, so I was keen to see the methods reported in this study. The EQ-5D-5L has been shown to have only a weak correlation with clinically-relevant changes in the context of respiratory disease, so it might be worth developing a bolt-on (or multiple bolt-ons) that describe relevant functional changes not captured by the core dimensions of the EQ-5D. In this study, the authors looked at how the inclusion of respiratory dimensions influenced utility values.

Relevant disease-specific outcome measures were reviewed. The researchers also analysed EQ-5D-3L data and disease-specific outcome measure data from three clinical studies in asthma and COPD, to see how much variance in visual analogue scores was explained by disease-specific items. The selection of potential bolt-ons was also informed by principal-component analysis to try to identify which items form constructs distinct from the EQ-5D dimensions. The conclusion of this process was that two other dimensions represented separate constructs and could be good candidates for bolt-ons: ‘limitations in physical activities due to shortness of breath’ and ‘breathing problems’. Some think-aloud interviews were conducted to ensure that the bolt-ons made sense to patients and the general public.

A valuation study using time trade-off and discrete choice experiments was conducted in the Netherlands with a representative sample of 430 people from the general public. The sample was split in two, with each half completing the EQ-5D-5L with one or the other bolt-on. The Dutch EQ-5D-5L valuation study was used as a comparator data set. The inclusion of the bolt-ons seemed to extend the scale of utility values; the best-functioning states were associated with higher utility values when the bolt-ons were added and the worst-functioning states were associated with lower values. This was more pronounced for the ‘breathing problems’ bolt-on. The size of the coefficients on the two bolt-ons (i.e. the effect on utility values) was quite different. The ‘physical activities’ bolt-on had coefficients similar in size to self-care and usual activities. The coefficients on the ‘breathing problems’ bolt-on were a bit larger, comparable in size with those of the mobility dimension.
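To see why the relative size of those coefficients matters, consider a toy additive utility model with a bolt-on tacked onto the core dimensions. The decrements below are invented for illustration and bear no relation to the Dutch tariff:

```python
# Illustrative additive utility model with a 'breathing problems' bolt-on.
# All coefficients are invented, not estimated values from any tariff.
DECREMENTS = {
    "mobility": 0.07, "self_care": 0.05, "usual_activities": 0.05,
    "pain": 0.08, "anxiety": 0.06,
    "breathing": 0.08,  # bolt-on, assumed comparable in size to mobility
}

def utility(levels):
    """levels: dict of dimension -> level 1..5; level 1 = no problems."""
    return 1.0 - sum(DECREMENTS[d] * (lvl - 1) for d, lvl in levels.items())

core = dict(mobility=2, self_care=1, usual_activities=2, pain=3, anxiety=1)
print(round(utility(core), 2))                      # core state only
print(round(utility({**core, "breathing": 4}), 2))  # same state + bolt-on
```

A bolt-on with coefficients the size of mobility’s can shift utility values substantially, which is why the ‘breathing problems’ dimension extends the scale in the way the authors report.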

The authors raise an interesting question in light of their findings from the development process, in which the quantitative analysis supported a ‘symptoms’ dimension and patients indicated the importance of a dimension relating to ‘physical activities’. They ask whether it is more important for an item to be relevant or for it to be quantitatively important for valuation. Conceptually, it seems to me that the apparent added value of a ‘physical activity’ bolt-on is problematic for the EQ-5D. The ‘physical activity’ bolt-on specifies “climbing stairs, going for a walk, carrying things, gardening” as the types of activities it is referring to. Surely, these should be reflected in ‘mobility’ and ‘usual activities’. If they aren’t then I think the ‘usual activities’ descriptor, in particular, is not doing its job. What we might be seeing here, more than anything, is the flaws in the development process for the original EQ-5D descriptors. Namely, that they didn’t give adequate consideration to the people who would be filling them in. Nevertheless, it looks like a ‘breathing problems’ bolt-on could be a useful part of the EuroQol armoury.

Technology and college student mental health: challenges and opportunities. Frontiers in Psychiatry [PubMed] Published 15th April 2019

Universities in the UK and elsewhere are facing growing demand for counselling services from students. That’s probably part of the reason that our Student Mental Health Research Network was funded. Some researchers have attributed this rising demand to the use of personal computing technologies – smartphones, social media, and the like. No doubt, their use is correlated with mental health problems, certainly through time and probably between individuals. But causality is uncertain, and there are plenty of ways in which – as set out in this article – these technologies might be used in a positive way.

Most obviously, smartphones can be a platform for mental health programmes, delivered via apps. This is particularly important because there are perceived and actual barriers for students to accessing face-to-face support. This is an issue for all people with mental health problems. But the opportunity to address this issue using technology is far greater for students, who are hyper-connected. Part of the problem, the authors argue, is that there has not been a focus on implementation, and so the evidence that does exist is from studies with self-selecting samples. Yet the opportunity is great here, too, because students are often co-located with service providers and already engaged with course-related software.

Challenges remain with respect to ethics, privacy, accountability, and duty of care. In the UK, we have the benefit of being able to turn to GDPR for guidance, and universities are well-equipped to assess the suitability of off-the-shelf and bespoke services in terms of their ethical implications. The authors outline some possible ways in which universities can approach implementation and the challenges therein. Adopting these approaches will be crucial if universities are to address the current gap between the supply and demand for services.
