Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.
Manipulating the 5 dimensions of the EuroQol instrument: the effects on self-reporting actual health and valuing hypothetical health states. Medical Decision Making [PubMed] Published 4th June 2019
EQ-5D is the Rocky Balboa of health economics. A left hook here, a jab there, vicious uppercuts straight to the chin – it takes the hits, it never stays down. Every man and his dog is ganging up on it, yet it still stands, proudly resolute in its undefeated record.
“When you are the champ”, it thinks to itself, “everyone wants a piece of you”. The door opens. Out of the darkness emerge four mysterious figures. “No… not…”, the instrument stumbles over its words. A bead of sweat rolls slowly down its glistening forehead. Its thumping heartbeat pierces the silence like a drum being thrashed by spear-wielding members of an ancient tribe. “It can’t be… No.” A clear, precise voice emerges from the darkness: “taken at face value”, it states, “our results suggest that economic evaluations that use EQ-5D-5L are systematically biased.” EQ-5D stares blankly, its pupils dilated. It responds, “I’ve been waiting for you”. The gloom clears. Tsuchiya et al (2019) stand there proudly: “bring it on… punk”.
The first paper in this week’s round-up is a surgical probing of a sample of potential issues with EQ-5D. Whilst the above paragraph contains a fair amount of poetic licence (read: this is the product of an author who would rather be writing dystopian health-economics short stories than doing their actual work), this paper by Tsuchiya et al. does seem to land a number of strong blows squarely on the chin of EQ-5D. The authors employ a large discrete choice experiment (n=2,494 members of the UK general public) to explore the impact of three issues on the way people both report and value health. Specifically: (1) the order in which the five dimensions are presented; (2) the use of composite dimensions (dimensions that pool two things – e.g. pain or discomfort) rather than separate dimensions; (3) “bolting off” dimensions (the reverse of a bolt-on: removing dimensions from the EQ-5D).
If you are interested in these issues, I suggest you read the paper in full. In brief, the authors find that splitting anxiety/depression into two dimensions had a significant effect on the way people reported their health; that splitting level 5 of the pain/discomfort and anxiety/depression dimensions (e.g. “I have extreme pain or discomfort”) into individual dimensions significantly impacted the way people valued health; and that “bolting off” dimensions impacted valuation of the remaining dimensions. Personally, I think the composite-dimension findings are most interesting here. The authors find that extreme pain/discomfort is perceived as a more severe state than extreme discomfort alone, and similarly, that being extremely depressed/anxious is perceived as a more severe state than simply being extremely anxious. The authors suggest this means the EQ-5D-5L may be systematically biased, as an individual who reports extreme discomfort (or anxiety) will have their health state valued based upon the composite dimension for each of these, and subsequently have the severity of their health state over-estimated.
I like this paper, and think it has a lot to contribute to the refinement of EQ-5D and the development of new instruments. I suggest the champ uses Tsuchiya et al. as a sparring partner, gets back to the gym and works on some new moves – I sense a training montage coming on.
Methods for public health economic evaluation: A Delphi survey of decision makers in English and Welsh local government. Health Economics [PubMed] Published 7th June 2019
Imagine the government in your local city is considering a major new public health initiative. Politicians plan to demolish a number of out-of-date social housing blocks in deprived communities and build 10,000 new high-quality homes in their place. This will cost a significant amount of money and, as a result, you have been asked to do an economic evaluation of this intervention. How would you go about doing this?
This is clearly a complicated task. You are unlikely to find a randomised controlled trial on which to base your evaluation, the costs and benefits of the programme are likely to fall on multiple sectors, and you will likely have to balance health gains with a wide range of non-health outcomes (e.g. reductions in crime). If you somehow managed to model the impact of the intervention perfectly, you would then be faced with the challenge of how to value these benefits. Equally, you would have to consider whether or not to weight the benefits of this programme more highly than programmes in alternative parts of the city, because it benefits people in deprived communities – note that inequalities in health seem to be a much larger issue in public health than in ‘normal health’ (i.e. the bread and butter of health economic evaluation). This complexity, and concern for inequalities, makes public health economic evaluation a completely different beast to traditional economic evaluation. This has led some to question the value of QALY-based cost-utility analysis in public health, and to calls for methods that better meet the needs of the field.
The second paper in this week’s round-up contributes to the development of these methods by providing information on what public health decision makers in England and Wales think about different economic evaluation methodologies. The authors fielded an online, two-round, Delphi-panel study featuring 26 statements in round one and 36 in round two. For each statement, participants were asked to rate their level of agreement on a five-point scale (e.g. 1 = strongly agree and 5 = strongly disagree). In the first round, participants (n=66) simply responded to the statements; in the second, they (n=29) were presented with the median response from the prior round and asked to reconsider their response in light of this feedback. The statements covered a wide range of issues, including: the role distributional concerns should play in public health economic evaluation (e.g. economic evaluation should formally weight outcomes by population subgroup); the type of outcomes considered (e.g. economic evidence should use a single outcome that captures length of life and quality of life); and the budgets to be considered (e.g. economic evaluation should take account of the multi-sectoral budgets available).
Interestingly, the decision makers rejected the idea of focusing solely on maximising outcomes (the current norm for health economic evaluations), and supported placing an equal focus on minimising inequality and maximising outcomes. Furthermore, they supported formal weighting of outcomes by population subgroup and the use of multiple outcomes to capture health, wellbeing and broader outcomes, and did not support the use of a single outcome that captures wellbeing gain. These findings suggest cost-consequence analysis may provide a better fit to the needs of these decision makers than simply attempting to apply the QALY model in public health – particularly if augmented by some form of multi-criteria decision analysis (MCDA) that can reflect distributional concerns and allow comparison across outcome types. I think this is a great paper and expect to be citing it for years to come.
I love this paper. It isn’t a recent one, but it hasn’t been covered on the AHE blog before, and I think everyone should know about it, so – luckily for you – it has made it into this week’s round-up.
In this groundbreaking work, Riccardo Trezzi fits a series of “state of the art”, complex, econometric models to his own electrocardiogram (ECG) signal – a measure of the electrical function of the heart. He then compares these models, identifies the one that best fits his data, and uses the model to predict his future ECG signal, and subsequently his life expectancy. This provides an astonishing result – “the n steps ahead forecast remains bounded and well above zero even after one googol period, implying that my life expectancy tends to infinite. I therefore conclude that I am immortal”.
I think this is genius. If you haven’t already realised the point of the paper by the time you have reached this part of my write-up, I suggest you think very carefully about the face validity of this result. If you still don’t get it after that, have a look at the note on the front page – specifically the bit that says “this paper is intended to be a joke”. If you still don’t get it: the author measured his heart activity for 10 seconds, and then applied lots of complex statistical methods, which (obviously) when extrapolated suggested his heart would keep beating forever, and subsequently that he would live forever.
Whilst the paper is a parody, it makes an important point. If we fit models to data and attempt to predict the future without considering external evidence, we may well make a hash of that prediction – despite the apparent sophistication of our econometric methods. This is clearly an extreme example, but it resonates with me, because this is what many people continue to do when modelling oncology data. This is certainly less prevalent than it was a few years ago, and I expect it will become a thing of the past, but for now, whenever I meet someone who does this, I will be sure to send them a copy of this paper. That being said, as far as I am aware the author is still alive, so maybe he will have the last laugh – perhaps even the last laugh of all of humankind, if his model is to be believed.
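For the curious, the extrapolation trap being parodied is easy to reproduce. Below is a minimal sketch (not the author's actual models – the simulated “ECG” trace, the assumed 1.2 Hz heart rate, and the least-squares sinusoid fit are all my own illustrative choices): fit a purely deterministic cycle to ten seconds of data, forecast an absurdly distant horizon, and the prediction stays bounded and above zero forever – “immortality”, by the paper’s logic.

```python
import numpy as np

# Simulate 10 seconds of a stylised "ECG" trace: a 1.2 Hz beat plus noise.
# (Illustrative stand-in for real data; nothing here comes from the paper.)
rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.01)  # 10 s sampled at 100 Hz
signal = 1.0 + 0.5 * np.sin(2 * np.pi * 1.2 * t) \
    + 0.05 * rng.standard_normal(t.size)

# Fit y ~ a + b*sin(2*pi*f*t) + c*cos(2*pi*f*t) by ordinary least squares,
# with the frequency f assumed known. The fitted cycle never decays.
f = 1.2
X = np.column_stack([np.ones_like(t),
                     np.sin(2 * np.pi * f * t),
                     np.cos(2 * np.pi * f * t)])
coef, *_ = np.linalg.lstsq(X, signal, rcond=None)

# "Forecast" a distant horizon. However large t_future gets, the model's
# prediction oscillates within a fixed band that stays above zero.
t_future = np.array([1e6])  # roughly 11.5 days ahead; make it a googol if you like
X_future = np.column_stack([np.ones_like(t_future),
                            np.sin(2 * np.pi * f * t_future),
                            np.cos(2 * np.pi * f * t_future)])
forecast = X_future @ coef
print(forecast[0] > 0)  # the heart "beats forever"
```

The model has no mechanism for the signal ever stopping, so no forecast horizon can ever produce one – which is precisely why external evidence (here, basic biology) has to discipline extrapolation, however well the model fits in-sample.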