Simon McNamara’s journal round-up for 24th June 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Manipulating the 5 dimensions of the EuroQoL instrument: the effects on self-reporting actual health and valuing hypothetical health states. Medical Decision Making [PubMed] Published 4th June 2019

EQ-5D is the Rocky Balboa of health economics. A left-hook here, a jab there, vicious uppercuts straight to the chin – it takes the hits, it never stays down. Every man and his dog is ganging up on it, yet it still stands, proudly resolute in its undefeated record.

“When you are the champ”, it thinks to itself, “everyone wants a piece of you”. The door opens. Out of the darkness emerge four mysterious figures. “No… not…”, the instrument stumbles over its words. A bead of sweat rolls slowly down its glistening forehead. Its thumping heartbeat pierces the silence like a drum being thrashed by spear-wielding members of an ancient tribe. “It can’t be… No.” A clear, precise voice emerges from the darkness. “Taken at face value”, it states, “our results suggest that economic evaluations that use EQ-5D-5L are systematically biased.” EQ-5D stares blankly, its pupils dilated. It responds, “I’ve been waiting for you”. The gloom clears. Tsuchiya et al (2019) stand there proudly: “bring it on… punk”.

The first paper in this week’s round-up is a surgical probing of a sample of potential issues with EQ-5D. Whilst the above paragraph contains a fair amount of poetic license (read: this is the product of an author who would rather be writing dystopian health-economics short stories than doing their actual work), this paper by Tsuchiya et al. does seem to land a number of strong blows squarely on the chin of EQ-5D. The authors employ a large discrete choice experiment (n=2,494 members of the UK general public) in order to explore the impact of three issues on the way people both report and value health. Specifically: (1) the order in which the five dimensions are presented; (2) the use of composite dimensions (dimensions that pool two things – e.g. pain or discomfort) rather than separate dimensions; (3) “bolting off” domains (the reverse of a bolt-on: removing domains from the EQ-5D).

If you are interested in these issues, I suggest you read the paper in full. In brief, the authors find that splitting anxiety/depression into two dimensions had a significant effect on the way people reported their health; that splitting level 5 of the pain/discomfort and anxiety/depression dimensions (e.g. I have extreme pain or discomfort) into individual dimensions significantly impacted the way people valued health; and that “bolting off” dimensions impacted valuation of the remaining dimensions. Personally, I think the composite domain findings are most interesting here. The authors find that extreme pain/discomfort is perceived as being a more severe state than extreme discomfort alone, and similarly, that being extremely depressed/anxious is perceived as a more severe state than simply being extremely anxious. The authors suggest this means the EQ-5D-5L may be systematically biased, as an individual who reports extreme discomfort (or anxiety) will have their health state valued based upon the composite domain for each of these, and subsequently have the severity of their health state over-estimated.

I like this paper, and think it has a lot to contribute to the refinement of EQ-5D, and the development of new instruments. I suggest the champ uses Tsuchiya et al as a sparring partner, gets back to the gym and works on some new moves – I sense a training montage coming on.

Methods for public health economic evaluation: A Delphi survey of decision makers in English and Welsh local government. Health Economics [PubMed] Published 7th June 2019

Imagine the government in your local city is considering a major new public health initiative. Politicians plan to demolish a number of out-of-date social housing blocks in deprived communities, and build 10,000 new high-quality homes in their place. This will cost a significant amount of money and, as a result, you have been asked to do an economic evaluation of this intervention. How would you go about doing this?

This is clearly a complicated task. You are unlikely to find a randomised controlled trial on which to base your evaluation, the costs and benefits of the programme are likely to fall on multiple sectors, and you will likely have to balance health gains with a wide range of other non-health outcomes (e.g. reductions in crime). If you somehow managed to model the impact of the intervention perfectly, you would then be faced with the challenge of how to value these benefits. Equally, you would have to consider whether or not to weight the benefits of this programme more highly than programmes in alternative parts of the city, because it benefits people in deprived communities – note that inequalities in health seem to be a much larger issue in public health than in ‘normal health’ (e.g. the bread and butter of health economic evaluation). This complexity, and concern for inequalities, makes public health economic evaluation a completely different beast to traditional economic evaluation. This has led some to question the value of QALY-based cost-utility analysis in public health, and to calls for methods that better meet the needs of the field.

The second paper in this week’s round-up contributes to the development of these methods, by providing information on what public health decision makers in England and Wales think about different economic evaluation methodologies. The authors fielded an online, two-round, Delphi-panel study featuring 26 statements in the first round and 36 in the second. For each statement, participants were asked to rate their level of agreement on a five-point scale (e.g. 1 = strongly agree and 5 = strongly disagree). In the first round, participants (n=66) simply responded to the statements; in the second, they (n=29) were presented with the median response from the prior round and asked to reconsider their response in light of this feedback. The statements tested covered a wide range of issues, including: the role distributional concerns should play in public health economic evaluation (e.g. economic evaluation should formally weight outcomes by population subgroup); the type of outcomes considered (e.g. economic evidence should use a single outcome that captures length of life and quality of life); and the budgets to be considered (e.g. economic evaluation should take account of multi-sectoral budgets available).

Interestingly, the decision-makers rejected the idea of focusing solely on maximising outcomes (the current norm for health economic evaluations), and supported placing an equal focus on minimising inequality and maximising outcomes. Furthermore, they supported formal weighting of outcomes by population subgroup, the use of multiple outcomes to capture health, wellbeing and broader outcomes, and failed to support use of a single outcome that captures well-being gain. These findings suggest cost-consequence analysis may provide a better fit to the needs of these decision makers than simply attempting to apply the QALY model in public health – particularly if augmented by some form of multi-criteria decision analysis (MCDA) that can reflect distributional concerns and allow comparison across outcome types. I think this is a great paper and expect to be citing it for years to come.

I AM IMMORTAL. Economic Inquiry [RePEc] Published 16th November 2016

I love this paper. It isn’t a recent one, but it hasn’t been covered in the AHE blog before, and I think everyone should know about it, so – luckily for you – it has made it into this week’s round-up.

In this groundbreaking work, Riccardo Trezzi fits a series of “state of the art”, complex, econometric models to his own electrocardiogram (ECG) signal – a measure of the electrical function of the heart. He then compares these models, identifies the one that best fits his data, and uses that model to predict his future ECG signal, and subsequently his life expectancy. This provides an astonishing result – “the n steps ahead forecast remains bounded and well above zero even after one googol period, implying that my life expectancy tends to infinite. I therefore conclude that I am immortal”.

I think this is genius. If you haven’t already realised the point of the paper by the time you have reached this part of my write-up, I suggest you think very carefully about the face-validity of this result. If you still don’t get it after that, have a look at the note on the front page – specifically the bit that says “this paper is intended to be a joke”. If you still don’t get it – the author measured his heart activity for 10 seconds, and then applied lots of complex statistical methods which (obviously), when extrapolated, suggested his heart would keep beating forever, and subsequently that he would live forever.
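The trick is easy to reproduce in miniature. The sketch below is a deliberately silly illustration, not Trezzi’s actual model: fit an AR(1) by least squares to a short stretch of periodic “heartbeat” data, then forecast absurdly far beyond it – the forecast simply converges to the series mean and stays bounded above zero forever.

```python
import math

# A 'heartbeat'-like signal: ten full cycles of a positive periodic wave,
# standing in for a few seconds of ECG data.
signal = [70 + 10 * math.sin(2 * math.pi * t / 8) for t in range(81)]

# Fit AR(1) by ordinary least squares: y[t] = c + phi * y[t-1] + error
x, y = signal[:-1], signal[1:]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
phi = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
c = my - phi * mx

# Forecast far beyond the data: with |phi| < 1 the h-step-ahead forecast
# converges to the unconditional mean c / (1 - phi), bounded and well
# above zero no matter how far ahead we look.
forecast = signal[-1]
for _ in range(10_000):
    forecast = c + phi * forecast
print(round(forecast, 1))  # ≈ 70: the 'heart' beats forever
```

The model never sees any external evidence that hearts eventually stop, so nothing in the data can ever pull the forecast towards zero – which is exactly the paper’s punchline.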

Whilst the paper is a parody, it makes an important point. If we fit models to data, and attempt to predict the future without considering external evidence, we may well make a hash of that prediction – despite the apparent sophistication of our econometric methods. This is clearly an extreme example, but resonates with me, because this is what many people continue to do when modelling oncology data. This is certainly less prevalent than it was a few years ago, and I expect it will become a thing of the past, but for now, whenever I meet someone who does this, I will be sure to send them a copy of this paper. That being said, as far as I am aware the author is still alive, so maybe he will have the last laugh – perhaps even the last laugh of all of humankind if his model is to be believed.

Credits

Brendan Collins’s journal round-up for 18th March 2019


Evaluation of intervention impact on health inequality for resource allocation. Medical Decision Making [PubMed] Published 28th February 2019

How should decision-makers factor equity impacts into economic decisions? Can we trade off an intervention’s cost-effectiveness with its impact on unfair health inequalities? Is a QALY just a QALY or should we weight it more if it is gained by someone from a disadvantaged group? Can we assume that, because people of lower socioeconomic position lose more QALYs through ill health, most interventions should, by default, reduce inequalities?

I really like the health equity plane. This is where you show health impacts (usually including a summary measure of cost-effectiveness like net health benefit or net monetary benefit) and equity impacts (which might be a change in slope index of inequality [SII] or relative index of inequality) on the same plane. This enables decision-makers to identify potential trade-offs between interventions that produce a greater benefit, but have less impact on inequalities, and those that produce a smaller benefit, but increase equity. I think there has been a debate over whether the ‘win-win’ quadrant should be south-east (which would be consistent with the dominant quadrant of the cost-effectiveness plane) or north-east, which is what seems to have been adopted as the consensus and is used here.

This paper showcases a reproducible method to estimate the equity impact of interventions. It considers public health interventions recommended by NICE from 2006-2016, with equity impacts estimated based on whether they targeted specific diseases, risk factors or populations. The disease distributions were based on hospital episode statistics data by deprivation (IMD). The study used equity weights to convert QALYs gained by different social groups into net social welfare; in this case, the health of the most disadvantaged fifth of people was valued at around 6-7 times that of the least disadvantaged fifth. I think there might still be work to be done around reaching consensus for equity weights.
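As a rough sketch of how such equity weighting works – with made-up numbers, not the paper’s actual weights or data – QALY gains to each deprivation fifth are multiplied by a weight before being summed into a welfare total:

```python
# Illustrative QALY gains per deprivation fifth (hypothetical).
qaly_gains = {"IMD1 (most deprived)": 120, "IMD2": 100, "IMD3": 90,
              "IMD4": 80, "IMD5 (least deprived)": 70}

# Hypothetical equity weights, declining from most to least deprived;
# the 6.5:1 ratio echoes the 6-7x valuation mentioned above.
weights = {"IMD1 (most deprived)": 6.5, "IMD2": 3.5, "IMD3": 2.0,
           "IMD4": 1.5, "IMD5 (least deprived)": 1.0}

unweighted = sum(qaly_gains.values())
weighted = sum(weights[g] * q for g, q in qaly_gains.items())
print(unweighted, weighted)  # 460 unweighted QALYs vs 1500.0 welfare units
```

Note how the weighting means an intervention concentrated in the most deprived fifth can out-rank one that produces more total QALYs elsewhere.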

The total expected effect on inequalities is small – full implementation of all recommendations would reduce the quality-adjusted life expectancy gap between the healthiest and least healthy from 13.78 to 13.34 QALYs. But maybe this is to be expected; NICE does not typically look at vaccinations or screening and has not looked at large-scale public health programmes like the Healthy Child Programme as a whole. Reassuringly, where recommended interventions were likely to increase inequality, the trade-off between efficiency and equity fell within the social welfare function used. The increase in inequality might be acceptable because the interventions were cost-effective – producing 5.6 million QALYs while increasing the SII by 0.005. If these interventions are buying health at a good price, then you would hope this might release money for other interventions that would reduce inequalities.

I suspect that public health folks might not like equity trade-offs at all – trading off equity and cost-effectiveness might be the moral equivalent of trading off human rights – you can’t choose between them. But the reality is that these kinds of trade-offs do happen, and like a lot of economic methods, it is about revealing these implicit trade-offs so that they become explicit, and having ‘accountability for reasonableness‘.

Future unrelated medical costs need to be considered in cost effectiveness analysis. The European Journal of Health Economics [PubMed] [RePEc] Published February 2019

This editorial says that NICE should include unrelated future medical costs in its decision making. At the moment, if NICE looks at a cardiovascular disease (CVD) drug, it might look at future costs related to CVD, but it won’t include changes in the future costs of cancer or dementia, which may occur because individuals live longer. Yet unrelated QALY gains will usually be implicitly included, so there is an inconsistency. If you are a health economic modeller, you know that including unrelated costs properly is technically difficult. You might weight average population costs by disease prevalence, so you get a cost estimate for people with coronary heart disease, diabetes, and people without either disease. Or you might have a general healthcare running cost that you can apply to future years. But accounting for a full matrix of competing causes of morbidity and mortality is very tricky, if not impossible. To help with this, the authors produced the excellent PAID tool, which supports doing this for the Netherlands (can we have one for the UK please?).

To me, including unrelated future costs means that in some cases ICERs might be driven more by the ratio of future costs to QALYs gained, whereas currently ICERs are often driven by the ratio of intervention costs to QALYs gained. So it might be that a lot of treatments that are currently cost-effective no longer are, or that we need to judge all interventions against a higher willingness-to-pay threshold or value of a QALY. The authors suggest that, although including unrelated medical costs usually pushes up the ICER, it should ultimately result in better decisions that increase health.
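A toy calculation – entirely hypothetical numbers, not from the editorial – shows how unrelated future costs shift an ICER:

```python
# A hypothetical life-extending drug, assessed with and without
# unrelated future medical costs.
intervention_cost = 30_000      # incremental cost of the drug itself
qalys_gained = 1.5              # incremental QALYs
extra_life_years = 2.0          # added survival
annual_unrelated_cost = 4_000   # average unrelated healthcare cost per year lived

icer_excl = intervention_cost / qalys_gained
icer_incl = (intervention_cost
             + extra_life_years * annual_unrelated_cost) / qalys_gained
print(icer_excl, round(icer_incl))  # 20000.0 vs ~25333 per QALY
```

With these numbers the drug sits comfortably under a £20,000-£30,000 threshold when unrelated costs are excluded, but moves towards the margin once they are included – which is the mechanism behind the worry that currently cost-effective treatments might tip over the line.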

There are real ethical issues here. I worry that including future unrelated costs might be used for an integrated care agenda in the NHS, moving towards a capitation system where the total healthcare spend on any one individual is capped, which I don’t necessarily think should happen in a health insurance system. Future developments around big data mean we will be able to segment the population a lot better and estimate who will benefit from treatments. But I think if someone is unlucky enough to need a lot of healthcare spending, maybe they should have it. This is risk sharing and, without it, you may get the ‘double jeopardy‘ problem.

For health economic modellers and decision-makers, a compromise might be to present analyses with related and unrelated medical costs and to consider both for investment decisions.

Overview of cost-effectiveness analysis. JAMA [PubMed] Published 11th March 2019

This paper probably won’t offer anything new to academic health economists in terms of methods, but I think it might be a useful teaching resource. It gives an interesting example of a model of ovarian cancer screening in the US that was published in February 2018. There has been a large-scale trial of ovarian cancer screening in the UK (the UKCTOCS), which has been extended because the results have been promising but mortality reductions were not statistically significant. The model gives a central ICER estimate of $106,187/QALY (based on $100 per screen), which would probably not be considered cost-effective in the UK.

I would like to explore one statement that I found particularly interesting, around the willingness to pay threshold; “This willingness to pay is often represented by the largest ICER among all the interventions that were adopted before current resources were exhausted, because adoption of any new intervention would require removal of an existing intervention to free up resources.”

The Culyer bookshelf model is similar to this, although as well as the ICER you also need to consider the burden of disease or size of the investment. Displacing a $110,000/QALY intervention for 1000 people with a $109,000/QALY intervention for a million people will bust your budget.
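The arithmetic makes the point, assuming for simplicity that each treated person gains exactly one QALY (so total budget requirement = ICER × population size):

```python
# Hypothetical bookshelf arithmetic: a slightly 'better' ICER can still
# bust the budget if the patient population is much larger.
freed = 110_000 * 1_000         # displacing the $110,000/QALY intervention (1,000 people)
needed = 109_000 * 1_000_000    # funding the $109,000/QALY intervention (1,000,000 people)
print(freed, needed)  # $110 million freed vs $109 billion needed
```

Displacement frees roughly a thousandth of what the replacement would cost, so the ICER alone cannot tell you whether the swap is affordable.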

This idea works intuitively – if Liverpool FC are signing a new player then I might hope they are better than all of the other players, or at least better than the average player. But actually, as long as they are better than the worst player then the team will be improved (leaving aside issues around different positions, how they play together, etc.).

However, I think that saying that the reference ICER should be the largest current ICER might be a bit dangerous. Leaving aside inefficient legacy interventions (like unnecessary tonsillectomies, etc.), it is likely that the intervention being considered for investment and the current maximum-ICER intervention to be displaced may both be new, expensive immunotherapies. It might be last in, first out. But I can’t see this happening; people are loss averse, so decision-makers and patients might not accept what is seen as a fantastic new drug for pancreatic cancer being approved then quickly usurped by a fantastic new leukaemia drug.

There has been a lot of debate around what the threshold should be in the UK; in England, NICE currently uses £20,000-£30,000, up to a hypothetical maximum of £300,000/QALY in very specific circumstances. The UK Treasury values QALYs at £60,000. Work by Karl Claxton and colleagues suggests that marginal productivity (the ‘shadow price’) in the NHS is nearer to £5,000-£15,000 per QALY.

I don’t know what the answer to this is. I don’t think the willingness-to-pay threshold for a new treatment should be the maximum ICER of a current portfolio of interventions; maybe it should be the marginal health production cost in a health system, as might be inferred from the Claxton work. Of course, investment decisions are made on other factors, like impact on health inequalities, not just on the ICER.

Credits

My quality-adjusted life year

Why did I do it?

I have evaluated lots of services and been involved in trials where I have asked people to collect EQ-5D data. During this time several people have complained to me about having to collect EQ-5D data, so I thought I would have a ‘taste of my own medicine’. I measured my health-related quality of life (HRQoL) using EQ-5D-3L, EQ-5D-VAS, and EQ-5D-5L, every day for a year (N=1). I had the EQ-5D on a spreadsheet on my smartphone and prompted myself to complete it at 9 p.m. every night. I set a target of never being more than three days late in doing it, which I missed twice during the year. I also recorded health-related notes for some days; for instance, 21st January said “tired, dropped a keytar on toe (very 1980s injury)”.

By doing this I wanted to illuminate issues around anchoring, ceiling effects and ideas of health and wellness. With a big increase in wearable tech and smartphone health apps, this type of big data collection might become a lot more commonplace. I have not kept a diary since I was about 13, so it was an interesting way of keeping track of what was happening, with a focus on health. Starting the year I knew I had one big life event coming up: a new baby due in early March. I am generally quite healthy, a bit overweight, don’t get enough sleep. I have been called a hypochondriac by people before, typically complaining of headaches, colds and sore throats around six months of the year. I usually go running once or twice a week.

From the start I was very conscious that I felt I shouldn’t grumble too much – EQ-5D is mainly used to measure functional health in people with disease, not in well people (and ceiling effects are a known feature of the EQ-5D). I immediately felt a ‘freedom’ in the greater sensitivity of the EQ-5D-5L compared with the 3L: I could score myself as having slight problems on the 5L that were not bad enough to count as ‘some problems’ on the 3L.

There were days when I felt a bit achy or tired because I had been for a run, but unless I had an actual injury I did not score myself as having problems with pain or mobility; generally, if I feel achy from running I think of it as a good thing, a sign of having pushed myself hard – ‘no pain no gain’. I also started doing yoga this year, which made me feel great but also a bit achy sometimes. In general, I noticed that one of my main problems was fatigue, which is not explicitly covered in the EQ-5D but was sometimes reflected as being slightly impaired on usual activities. I also thought that usual activities could be impaired if you are working and travelling a lot, as you don’t get to do any of the things you enjoy, like hobbies or spending time with family – but this is more of a capability question, whereas the EQ-5D is more functional.

How did my HRQoL compare?

I matched up my levels on the individual domains to EQ-5D-3L and 5L index scores based on UK preference scores. The final 5L value set may still change; I used the most recent published scores. I also matched my levels to a personal 5L value set, which I derived using this survey; it uses discrete choice experiments and involves comparing pairs of EQ-5D-5L health states. I found doing this fascinating, and it made me think about how mutually exclusive the EQ-5D dimensions are, and whether some health states are actually implausible: for instance, is it possible to be in extreme pain but not have any impairment on usual activities?
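For readers unfamiliar with how a value set turns a profile into an index score, a minimal sketch is below. Most tariffs take roughly a “one minus decrements” form; the decrement numbers here are purely illustrative and are not any published UK value set:

```python
# Illustrative (NOT real) decrements per level for each dimension;
# level 1 = no problems, level 5 = extreme problems.
decrements = {
    "mobility":           [0.0, 0.04, 0.08, 0.20, 0.34],
    "self_care":          [0.0, 0.05, 0.09, 0.16, 0.26],
    "usual_activities":   [0.0, 0.04, 0.07, 0.16, 0.17],
    "pain_discomfort":    [0.0, 0.06, 0.08, 0.28, 0.30],
    "anxiety_depression": [0.0, 0.07, 0.11, 0.23, 0.29],
}

def index_score(profile):
    """profile: dict of dimension -> level (1-5). Returns the index value."""
    return round(1.0 - sum(decrements[d][lvl - 1] for d, lvl in profile.items()), 3)

print(index_score({"mobility": 1, "self_care": 1, "usual_activities": 2,
                   "pain_discomfort": 2, "anxiety_depression": 1}))  # 0.9
```

A personal value set is the same machinery with decrements estimated from one person’s choices rather than a general-population sample.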

Surprisingly, my average EQ-5D-3L index score (0.982) was higher than the population average for my age group (for England, age 35-44, it is 0.888 based on Szende et al 2014); I expected mine to be lower. In fact, my average index scores were higher than the average for 18-24 year olds (0.922). I thought that measuring EQ-5D more often, with more granularity, would lead to lower average scores, but it actually led to higher average scores.

My average score from the personal 5L value set was slightly higher than from the England population value set (0.983 vs 0.975). Digging into the data, the main differences were that I thought usual activities were slightly more important, and pain slightly less important, than the general population did. The 5L (England tariff) correlated more closely with the VAS than the 3L (r² = 0.746 vs. r² = 0.586), but the 5L (personal tariff) correlated most closely with the VAS (r² = 0.792). So, based on my N=1 sample, this suggests that the 5L is a better predictor of overall health than the 3L, and that the personal value set has validity in predicting VAS scores.
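The comparison itself is simple to reproduce: Pearson r² between daily index scores and VAS/100. The sketch below uses made-up data, not my actual diary:

```python
# Pearson r-squared between two equal-length series.
def r_squared(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov * cov / (vx * vy)

# Hypothetical week of index scores and VAS/100 values.
index_5l = [1.00, 0.95, 0.88, 1.00, 0.92, 0.72, 0.98]
vas      = [0.95, 0.90, 0.85, 0.95, 0.88, 0.60, 0.93]
print(round(r_squared(index_5l, vas), 3))
```

With a full year of daily data you would run the same calculation three times – 3L vs VAS, 5L (England) vs VAS, and 5L (personal) vs VAS – and compare the resulting r² values.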

Figure 1. My EQ-5D-3L index score [3L], EQ-5D-5L index score (England value set) [5L], EQ-5D-5L index score (personal value set) [5LP], and visual analogue scale (VAS) score divided by 100 [VAS/100].

Reflection

I definitely regretted doing the EQ-5D every day and was glad when the year was over! I would have preferred to have done it every week, but I think that would have missed a lot of subtleties in how I felt from day to day. On reflection, my approach was that at the end of each day I would try to recall whether I had been stressed, or whether anything hurt, and adjust the level on the relevant dimension. But I wonder: if I had been prompted at any moment during the day as to whether I was stressed, had mobility issues, or was in pain, would I have said I did? It makes me think about Kahneman and Riis’s ‘remembering brain’ and ‘experiencing brain’. Was my EQ-5D profile a slave to my ‘remembering brain’ rather than my ‘experiencing brain’?

One period when my score was low for a few days was when I had a really painful abscess on a tooth. At the time the pain felt unbearable, so I gave a high pain score; looking back, I wonder if it was really that bad, but I didn’t want to retrospectively change my score. Strangely, I had the flu twice this year, which gave me some health decrements – I don’t think that has ever happened to me before (and I don’t think it was just ‘man flu’!).

I knew that I was going to have a baby this year but I didn’t know that I would spend 18 days in hospital, despite not being ill myself. This has led me to think a lot more about ‘caregiver effects‘ – the impact of close relatives being ill; it is unnerving spending night after night in hospital, in this case because my wife was very ill after giving birth, and then when my baby son was two months old, he got very ill (both are doing a lot better now). Being in hospital with a sick relative is a strange feeling, stressful and boring at the same time. I spent a long time staring out of the window or scrolling through Twitter. When my baby son was really ill he would not sleep and did not want to be put down, so my arms were aching after holding him all night. I was lucky that I had understanding managers in work and I was not significantly financially disadvantaged by caring for sick relatives. And glad of the NHS and not getting a huge bill when family members are discharged from hospital.

Health, wellbeing & exercise

Doing this made me think more about the difference between health and wellbeing; there were days where I was really happy but it wasn’t reflected in my EQ-5D index score. I noticed that doing exercise always led to a higher VAS score – maybe subconsciously I was thinking exercise was increasing my ‘health stock’. I probably used the VAS score more like an overall wellbeing score than a pure health score, which is not correct – but I wonder if other people do this as well, and whether that is why ceiling effects are less pronounced with the VAS score.

Could trials measure EQ-5D every day?

One advantage of EQ-5D and QALYs over other health outcomes is that they should be measured over a schedule, with QALYs calculated as the area under the curve. Completing an EQ-5D every day has shown me that health does vary from day to day, but I still think it would be impractical for trial participants to complete an EQ-5D questionnaire daily. Perhaps EQ-5D data could be combined with a simple daily VAS score, possibly out of ten rather than 100 for simplicity.
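The area-under-the-curve calculation itself is straightforward. A sketch with made-up daily scores, using the trapezoidal rule:

```python
# QALYs as the area under a daily utility curve, converted to years.
def qalys(scores, days_between=1):
    """Trapezoidal area under consecutive utility scores, in years."""
    area_days = sum((a + b) / 2 * days_between
                    for a, b in zip(scores, scores[1:]))
    return area_days / 365.25

daily = [1.00, 0.98, 0.72, 0.72, 0.95, 1.00]  # six hypothetical consecutive days
print(round(qalys(daily), 5))  # 0.01196
```

With weekly or monthly measurement you would simply set `days_between` accordingly – the trade-off being that a single bad day (like an abscess) contributes almost nothing unless a measurement happens to land on it.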

Joint worst day: 6th and 7th October: EQ-5D-3L index 0.264, EQ-5D-5L index 0.724; personal EQ-5D-5L index 0.824; VAS score 60 – ‘abscess on tooth, couldn’t sleep, face swollen’.

Joint best day: 27th January, 7th September, 11th September, 18th November, 4th December, 30th December: EQ-5D-3L index 1.00; both EQ-5D-5L index scores 1.00; VAS score 95 – notes include ‘lovely day with family’, ‘went for a run’, ‘holiday’, ‘met up with friends’.