Brendan Collins’s journal round-up for 18th March 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Evaluation of intervention impact on health inequality for resource allocation. Medical Decision Making [PubMed] Published 28th February 2019

How should decision-makers factor equity impacts into economic decisions? Can we trade off an intervention’s cost-effectiveness with its impact on unfair health inequalities? Is a QALY just a QALY or should we weight it more if it is gained by someone from a disadvantaged group? Can we assume that, because people of lower socioeconomic position lose more QALYs through ill health, most interventions should, by default, reduce inequalities?

I really like the health equity plane. This is where you show health impacts (usually including a summary measure of cost-effectiveness like net health benefit or net monetary benefit) and equity impacts (which might be a change in slope index of inequality [SII] or relative index of inequality) on the same plane. This enables decision-makers to identify potential trade-offs between interventions that produce a greater benefit, but have less impact on inequalities, and those that produce a smaller benefit, but increase equity. I think there has been a debate over whether the ‘win-win’ quadrant should be south-east (which would be consistent with the dominant quadrant of the cost-effectiveness plane) or north-east, which is what seems to have been adopted as the consensus and is used here.
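As a rough sketch of the idea (mine, not the paper’s), interventions can be placed into quadrants once you have a summary measure of health impact and an equity impact on a common sign convention. Here I assume net health benefit in QALYs and an equity impact where a positive value means inequality is reduced (e.g. a sign-reversed change in the SII), with north-east as the ‘win-win’ quadrant:

```python
# Minimal illustration (not from the paper) of the health equity impact plane.
# Assumes net health benefit (NHB, in QALYs) and an equity impact where a
# positive value means inequality is reduced (e.g. a fall in the SII, sign-reversed).

def equity_plane_quadrant(net_health_benefit: float, equity_impact: float) -> str:
    """Classify an intervention into a quadrant of the health equity plane."""
    if net_health_benefit >= 0 and equity_impact >= 0:
        return "NE: win-win (more health, less inequality)"
    if net_health_benefit >= 0 and equity_impact < 0:
        return "SE: trade-off (more health, more inequality)"
    if net_health_benefit < 0 and equity_impact >= 0:
        return "NW: trade-off (less health, less inequality)"
    return "SW: lose-lose (less health, more inequality)"

# Hypothetical examples
print(equity_plane_quadrant(1200.0, 0.002))    # NE: win-win
print(equity_plane_quadrant(1200.0, -0.005))   # SE: cost-effective but widens the gap
```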

This paper showcases a reproducible method to estimate the equity impact of interventions. It considers public health interventions recommended by NICE from 2006 to 2016, with equity impacts estimated based on whether they targeted specific diseases, risk factors or populations. The disease distributions were based on hospital episode statistics data by deprivation (IMD). The study used equity weights to convert QALYs gained by different social groups into net social welfare, in this case valuing the health of the most disadvantaged fifth of people at around 6-7 times that of the least disadvantaged fifth. I think there might still be work to be done around reaching consensus on equity weights.
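To make the equity-weighting step concrete, here is a hedged sketch (my own illustration, not the authors’ code) of converting QALY gains by deprivation quintile into an equity-weighted total. The weights and QALY gains are invented, chosen only to give roughly the 6-7:1 ratio between the most and least disadvantaged fifths described above:

```python
# Illustrative only: equity-weighted aggregation of QALY gains by IMD quintile.
# The weights below are made up to match the rough 6-7:1 ratio described in the
# paper; the actual weights come from the authors' social welfare function.

qaly_gains = {  # hypothetical QALY gains by deprivation quintile (1 = most deprived)
    1: 5000.0,
    2: 4000.0,
    3: 3500.0,
    4: 3000.0,
    5: 2500.0,
}

equity_weights = {1: 6.5, 2: 3.5, 3: 2.0, 4: 1.3, 5: 1.0}  # assumed, most:least ≈ 6.5:1

unweighted_total = sum(qaly_gains.values())
weighted_total = sum(equity_weights[q] * qaly_gains[q] for q in qaly_gains)

print(f"Unweighted QALYs gained: {unweighted_total:,.0f}")
print(f"Equity-weighted total (QALY-equivalents): {weighted_total:,.0f}")
```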

The total expected effect on inequalities is small – full implementation of all recommendations would reduce the quality-adjusted life expectancy gap between the healthiest and least healthy from 13.78 to 13.34 QALYs. But maybe this is to be expected; NICE does not typically look at vaccinations or screening and has not looked at large-scale public health programmes like the Healthy Child Programme as a whole. Reassuringly, where recommended interventions were likely to increase inequality, the trade-off between efficiency and equity fell within the social welfare function the authors used. The increase in inequality might be acceptable because the interventions were cost-effective – producing 5.6 million QALYs while increasing the SII by 0.005. If these interventions are buying health at a good price, then you would hope this might release money for other interventions that would reduce inequalities.

I suspect that public health folks might not like equity trade-offs at all – trading off equity and cost-effectiveness might be the moral equivalent of trading off human rights – you can’t choose between them. But the reality is that these kinds of trade-offs do happen and, like a lot of economic methods, this approach is about making implicit trade-offs explicit, and having ‘accountability for reasonableness’.

Future unrelated medical costs need to be considered in cost effectiveness analysis. The European Journal of Health Economics [PubMed] [RePEc] Published February 2019

This editorial says that NICE should include unrelated future medical costs in its decision making. At the moment, if NICE looks at a cardiovascular disease (CVD) drug, it might look at future costs related to CVD, but it won’t include changes in future costs of cancer or dementia, which may occur because individuals live longer. But usually unrelated QALY gains will be implicitly included, so there is an inconsistency. If you are a health economic modeller, you know that including unrelated costs properly is technically difficult. You might weight average population costs by disease prevalence, so that you get a cost estimate for people with coronary heart disease, people with diabetes, and people with neither disease. Or you might have a general healthcare running cost that you can apply to future years. But accounting for a full matrix of competing causes of morbidity and mortality is very tricky, if not impossible. To help with this, this group of authors produced the excellent PAID tool, which does this for the Netherlands (can we have one for the UK please?).
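As a rough illustration of the prevalence-weighting approach mentioned above (my own toy numbers, not the PAID tool itself), the expected annual unrelated cost for survivors at a given age can be built up from disease prevalence and mean disease-specific costs:

```python
# A rough sketch of prevalence-weighting unrelated future medical costs.
# All numbers are invented for illustration; PAID does something much more
# sophisticated (age-, sex- and time-to-death-specific costs for the Netherlands).

unrelated_conditions = [
    # (condition, assumed prevalence at this age, assumed mean annual cost in GBP)
    ("cancer",   0.04, 8000.0),
    ("dementia", 0.02, 12000.0),
]
background_cost = 1500.0  # assumed annual cost for people with none of the above

prevalence_none = 1.0 - sum(p for _, p, _ in unrelated_conditions)
expected_unrelated_cost = (
    sum(p * c for _, p, c in unrelated_conditions) + prevalence_none * background_cost
)
print(f"Expected unrelated annual cost per survivor: £{expected_unrelated_cost:,.0f}")
```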

To me, including unrelated future costs means that in some cases ICERs might be driven more by the ratio of future costs to QALYs gained, whereas currently ICERs are often driven by the ratio of intervention costs to QALYs gained. So it might be that a lot of treatments that are currently cost-effective no longer are, or that we need to judge all interventions against a higher willingness-to-pay threshold or value of a QALY. The authors suggest that, although including unrelated medical costs usually pushes up the ICER, it should ultimately result in better decisions that increase health.
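A toy calculation (all numbers invented) shows how unrelated future costs can come to dominate the ICER once a treatment extends life:

```python
# Toy example (all numbers invented) of how adding unrelated future medical
# costs shifts an ICER.

intervention_cost = 10000.0      # incremental cost of the new treatment
related_future_costs = 2000.0    # extra disease-related costs from living longer
qalys_gained = 1.0

extra_life_years = 2.0
unrelated_cost_per_year = 4000.0  # expected unrelated medical cost per added year
unrelated_future_costs = extra_life_years * unrelated_cost_per_year

icer_excluding = (intervention_cost + related_future_costs) / qalys_gained
icer_including = (intervention_cost + related_future_costs + unrelated_future_costs) / qalys_gained

print(f"ICER excluding unrelated costs: £{icer_excluding:,.0f}/QALY")   # £12,000/QALY
print(f"ICER including unrelated costs: £{icer_including:,.0f}/QALY")   # £20,000/QALY
```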

There are real ethical issues here. I worry that including future unrelated costs might be used for an integrated care agenda in the NHS, moving towards a capitation system where the total healthcare spend on any one individual is capped, which I don’t necessarily think should happen in a health insurance system. Future developments around big data mean we will be able to segment the population a lot better and estimate who will benefit from treatments. But I think if someone is unlucky enough to need a lot of healthcare spending, maybe they should have it. This is risk sharing and, without it, you may get the ‘double jeopardy’ problem.

For health economic modellers and decision-makers, a compromise might be to present analyses with related and unrelated medical costs and to consider both for investment decisions.

Overview of cost-effectiveness analysis. JAMA [PubMed] Published 11th March 2019

This paper probably won’t offer anything new to academic health economists in terms of methods, but I think it might be a useful teaching resource. It gives an interesting example of a model of ovarian cancer screening in the US that was published in February 2018. There has been a large-scale trial of ovarian cancer screening in the UK (the UKCTOCS), which has been extended because the results have been promising but mortality reductions were not statistically significant. The model gives a central ICER estimate of $106,187/QALY (based on $100 per screen), which would probably not be considered cost-effective in the UK.

I would like to explore one statement that I found particularly interesting, around the willingness-to-pay threshold: “This willingness to pay is often represented by the largest ICER among all the interventions that were adopted before current resources were exhausted, because adoption of any new intervention would require removal of an existing intervention to free up resources.”

The Culyer bookshelf model is similar to this, although as well as the ICER you also need to consider the burden of disease or size of the investment. Displacing a $110,000/QALY intervention for 1000 people with a $109,000/QALY intervention for a million people will bust your budget.
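The rough sums behind that example (my own back-of-the-envelope, assuming for simplicity one QALY gained per person treated):

```python
# Back-of-the-envelope sums behind the 'bookshelf' point above (assumptions are
# mine): the ICER alone ignores the size of the budget impact.

def total_cost(icer_per_qaly: float, people: int, qalys_per_person: float = 1.0) -> float:
    """Total spend = ICER x total QALYs produced (assuming 1 QALY per person here)."""
    return icer_per_qaly * people * qalys_per_person

displaced = total_cost(110_000, 1_000)      # budget freed up: ~$0.11 billion
adopted = total_cost(109_000, 1_000_000)    # budget required: ~$109 billion

print(f"Budget released by displacement: ${displaced/1e9:.2f}bn")
print(f"Budget required by new intervention: ${adopted/1e9:.2f}bn")
# Marginally 'better' ICER, but a thousand-fold larger budget impact.
```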

This idea works intuitively – if Liverpool FC are signing a new player then I might hope they are better than all of the other players, or at least better than the average player. But actually, as long as they are better than the worst player then the team will be improved (leaving aside issues around different positions, how they play together, etc.).

However, I think that saying that the reference ICER should be the largest current ICER might be a bit dangerous. Leaving aside inefficient legacy interventions (like unnecessary tonsillectomies), the intervention being considered for investment and the current maximum-ICER intervention to be displaced may well both be new, expensive immunotherapies. It might be last in, first out. But I can’t see this happening; people are loss averse, so decision-makers and patients might not accept what is seen as a fantastic new drug for pancreatic cancer being approved and then quickly usurped by a fantastic new leukaemia drug.

There has been a lot of debate around what the threshold should be in the UK. In England, NICE currently use £20,000 – £30,000 per QALY, up to a hypothetical maximum of £300,000/QALY in very specific circumstances. The UK Treasury values a QALY at £60,000. Work by Karl Claxton and colleagues suggests that marginal productivity (the ‘shadow price’) in the NHS is nearer to £5,000 – £15,000 per QALY.

I don’t know what the answer to this is. I don’t think the willingness-to-pay threshold for a new treatment should be the maximum ICER of a current portfolio of interventions; maybe it should be the marginal health production cost in a health system, as might be inferred from the Claxton work. Of course, investment decisions are made on other factors, like impact on health inequalities, not just on the ICER.

Credits

My quality-adjusted life year

Why did I do it?

I have evaluated lots of services and been involved in trials where I have asked people to collect EQ-5D data. During this time several people have complained to me about having to collect EQ-5D data so I thought I would have a ‘taste of my own medicine’. I measured my health-related quality of life (HRQoL) using EQ-5D-3L, EQ-5D-VAS, and EQ-5D-5L, every day for a year (N=1). I had the EQ-5D on a spreadsheet on my smartphone and prompted myself to do it at 9 p.m. every night. I set a target of never being more than three days late in doing it, which I missed twice through the year. I also recorded health-related notes for some days, for instance, 21st January said “tired, dropped a keytar on toe (very 1980s injury)”.

By doing this I wanted to illuminate issues around anchoring, ceiling effects and ideas of health and wellness. With a big increase in wearable tech and smartphone health apps, this type of big data collection might become a lot more commonplace. I have not kept a diary since I was about 13, so it was an interesting way of keeping track of what was happening, with a focus on health. Starting the year, I knew I had one big life event coming up: a new baby due in early March. I am generally quite healthy, a bit overweight, and don’t get enough sleep. I have been called a hypochondriac before, typically complaining of headaches, colds and sore throats around six months of the year. I usually go running once or twice a week.

From the start I was very conscious that I shouldn’t grumble too much, given that EQ-5D is mainly used to measure functional health in people with disease, not in well people (and ceiling effects are a known feature of the EQ-5D). I immediately felt a ‘freedom’ in the greater sensitivity of the EQ-5D-5L compared with the 3L: I could score myself as having slight problems on the 5L that were not bad enough to count as ‘some problems’ on the 3L.

There were days when I felt a bit achy or tired because I had been for a run, but unless I had an actual injury I did not score myself as having problems with pain or mobility; generally, if I feel achy from running I think of it as a good thing – I have pushed myself hard, ‘no pain no gain’. I also started doing yoga this year, which made me feel great but also a bit achy sometimes. In general, I noticed that one of my main problems was fatigue, which is not explicitly covered in the EQ-5D but was sometimes reflected as being slightly impaired on usual activities. I also thought that usual activities could be impaired if you are working and travelling a lot, as you don’t get to do any of the things you enjoy, like hobbies or spending time with family, but this is more of a capability question, whereas the EQ-5D is more functional.

How did my HRQoL compare?

I matched up my levels on the individual domains to EQ-5D-3L and 5L index scores based on UK preference scores. The final 5L value set may still change; I used the most recent published scores. I also created a personal 5L value set using this survey, which uses discrete choice experiments and involves comparing pairs of EQ-5D-5L health states. I found doing this fascinating, and it made me think about how mutually exclusive the EQ-5D dimensions are, and whether some health states are actually implausible: for instance, is it possible to be in extreme pain but not have any impairment of usual activities?

Surprisingly, my average EQ-5D-3L index score (0.982) was higher than the population average for my age group (for England, age 35-44, it is 0.888, based on Szende et al. 2014); I expected my scores to be lower. In fact, my average index scores were higher than the average for 18-24 year olds (0.922). I thought that measuring EQ-5D more often, with more granularity, would lead to lower average scores, but it actually led to higher average scores.

My average score from the personal 5L value set was slightly higher than from the England population value set (0.983 vs 0.975). Digging into the data, the main differences were that I thought usual activities were slightly more important, and pain slightly less important, than the general population did. The 5L (England tariff) correlated more closely with the VAS than the 3L did (r² = 0.746 vs r² = 0.586), but the 5L (personal tariff) correlated most closely with the VAS (r² = 0.792). So, based on my N=1 sample, this suggests that the 5L is a better predictor of overall health than the 3L, and that the personal value set has validity in predicting VAS scores.
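For anyone wanting to run a similar N=1 comparison, here is a minimal sketch of how the r² values can be computed, with invented daily data standing in for my spreadsheet:

```python
# Minimal sketch of the r² comparison above, using invented daily data in place
# of the real spreadsheet. Each series is one value per day for a year.

import numpy as np

rng = np.random.default_rng(0)
days = 365
vas = rng.uniform(0.6, 1.0, days)                          # VAS/100, invented
index_3l = np.clip(vas + rng.normal(0, 0.08, days), 0, 1)  # invented 3L index
index_5l = np.clip(vas + rng.normal(0, 0.05, days), 0, 1)  # invented 5L index

def r_squared(x: np.ndarray, y: np.ndarray) -> float:
    """Squared Pearson correlation between two daily series."""
    return float(np.corrcoef(x, y)[0, 1] ** 2)

print(f"3L vs VAS: r² = {r_squared(index_3l, vas):.3f}")
print(f"5L vs VAS: r² = {r_squared(index_5l, vas):.3f}")
```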

Figure 1. My EQ-5D-3L index score [3L], EQ-5D-5L index score (England value set) [5L], EQ-5D-5L index score (personal value set) [5LP], and visual analogue scale (VAS) score divided by 100 [VAS/100].

Reflection

I definitely regretted doing the EQ-5D every day and was glad when the year was over! I would have preferred to have done it every week, but I think that would have missed a lot of subtleties in how I felt from day to day. On reflection, the way I approached it was that at the end of each day I would try to recall whether I had been stressed, or whether anything had hurt, and adjust the level on the relevant dimension. But I wonder, if I had been prompted at any moment during the day as to whether I was stressed, had mobility issues, or was in pain, would I have said I did? It makes me think about Kahneman and Riis’s ‘remembering brain’ and ‘experiencing brain’. Was my EQ-5D profile a slave to my ‘remembering brain’ rather than my ‘experiencing brain’?

One period when my score was low for a few days was when I had a really painful abscess on my tooth. At the time the pain felt unbearable, so I gave a high pain score; looking back I wonder whether it was really that bad, but I didn’t want to retrospectively change my score. Strangely, I had the flu twice this year, which gave me some health decrements – I don’t think that has ever happened to me before (and I don’t think it was just ‘man flu’!).

I knew that I was going to have a baby this year but I didn’t know that I would spend 18 days in hospital, despite not being ill myself. This has led me to think a lot more about ‘caregiver effects’ – the impact of close relatives being ill. It is unnerving spending night after night in hospital, in this case because my wife was very ill after giving birth, and then, when my baby son was two months old, he got very ill (both are doing a lot better now). Being in hospital with a sick relative is a strange feeling, stressful and boring at the same time. I spent a long time staring out of the window or scrolling through Twitter. When my baby son was really ill he would not sleep and did not want to be put down, so my arms were aching after holding him all night. I was lucky that I had understanding managers at work and was not significantly financially disadvantaged by caring for sick relatives. And I was glad of the NHS, and of not getting a huge bill when family members were discharged from hospital.

Health, wellbeing & exercise

Doing this made me think more about the difference between health and wellbeing; there might be days when I was really happy but it wasn’t reflected in my EQ-5D index score. I noticed that doing exercise always led to a higher VAS score – maybe subconsciously I was thinking that exercise was increasing my ‘health stock’. I probably used the VAS score more like an overall wellbeing score than a measure of health alone, which is not strictly correct – but I wonder if other people do this as well, and whether that is why ceiling effects are less pronounced with the VAS score.

Could trials measure EQ-5D every day?

One advantage of EQ-5D and QALYs over other health outcomes is that they can be measured repeatedly over a schedule and combined using the area under the curve. Completing an EQ-5D every day has shown me that health does vary from day to day, but I still think it might be impractical for trial participants to complete an EQ-5D questionnaire every day. Perhaps EQ-5D data could be combined with a simple daily VAS score, possibly out of ten rather than 100 for simplicity.
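The QALY calculation itself is simple area-under-the-curve arithmetic once you have a utility score per day; a minimal sketch with invented data:

```python
# Minimal sketch: QALYs as the area under a utility-over-time curve, here from
# daily EQ-5D index scores (invented data), using the trapezoidal rule.

import numpy as np

days = 365
utilities = np.full(days, 0.95)   # invented: mostly stable health...
utilities[279:286] = 0.70         # ...with a bad week (e.g. a tooth abscess)

time_in_years = np.arange(days) / 365.0
qalys = np.trapz(utilities, time_in_years)

print(f"QALYs accrued over the year: {qalys:.3f}")
```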

Joint worst day: 6th and 7th October: EQ-5D-3L index 0.264; EQ-5D-5L index 0.724; personal EQ-5D-5L index 0.824; VAS score 60 – ‘abscess on tooth, couldn’t sleep, face swollen’.

Joint best day: 27th January, 7th September, 11th September, 18th November, 4th December, 30th December: EQ-5D-3L index 1.00; both EQ-5D-5L index scores 1.00; VAS score 95 – notes include ‘lovely day with family’, ‘went for a run’, ‘holiday’, ‘met up with friends’.

Chris Sampson’s journal round-up for 31st December 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Perspectives of patients with cancer on the quality-adjusted life year as a measure of value in healthcare. Value in Health Published 29th December 2018

Patients should have the opportunity to understand how decisions are made about which treatments they are and are not allowed to use, given their coverage. This study reports on a survey of cancer patients and survivors, with the aim of identifying patients’ awareness, understanding, and opinions about the QALY as a measure of value.

Participants were recruited from a (presumably US-based) patient advocacy group, and 774 people – mostly well-educated, mostly white, mostly women – responded. The online survey asked about cancer status and included a couple of measures of health literacy. Fewer than 7% of participants had ever heard of the QALY – more likely for those with greater health literacy. The survey explained the QALY to the participants and then asked if the concept of the QALY makes sense. Around half said it did and 24% thought that it was a good way to measure value in health care. The researchers report a variety of ‘significant’ differences in tendencies to understand or support the use of QALYs, but I’m not convinced that they’re meaningful because the differences aren’t big and the samples are relatively small.

At the end of the survey, respondents were asked to provide opinions on QALYs and value in health care. 165 people provided responses and these were coded and analysed qualitatively. The researchers identified three themes from this one free-text question: i) measuring value, ii) opinions on QALY, and iii) value in health care and decision making. I’m not sure that they’re meaningful themes that help us to understand patients’ views on QALYs. A significant proportion of respondents rejected the idea of using numbers to quantify value in health care. On the other hand, some suggested that the QALY could be a useful decision aid for patients. There was opposition to ‘external decision makers’ having any involvement in health care decision making. Unless you’re paying for all of your care out of pocket, that’s tough luck. But the most obvious finding from the qualitative analysis is that respondents didn’t understand what QALYs were for. That’s partly because health economists in general need to be better at communicating concepts like the QALY. But I think it’s also in large part because the authors failed to provide a clear explanation. They didn’t even use my lovely Wikipedia graphic. Many of the points made by respondents are entirely irrelevant to the appropriateness of QALYs as they’re used (or in the case of the US, aren’t yet used) in practice. For example, several discussed the use of QALYs in clinical decision making. Patients think that they should maintain autonomy, which is fair enough but has nothing to do with how QALYs are used to assess health technologies.

QALYs are built on the idea of trade-offs. They measure the trade-off between life extension and life improvement. They are used to guide trade-offs between different treatments for different people. But the researchers didn’t explain how or why QALYs are used to make trade-offs, so the elicited views aren’t well-informed.

Measuring multivariate risk preferences in the health domain. Journal of Health Economics Published 27th December 2018

Health preferences research is now a substantial field in itself. But there’s still a lot of work left to be done on understanding risk preferences with respect to health. Gradually, we’re coming round to the idea that people tend to be risk-averse. But risk preferences aren’t (necessarily) so simple. Recent research has proposed that ‘higher order’ preferences such as prudence and temperance play a role. A person exhibiting univariate prudence for longevity would be better able to cope with risk if they are going to live longer. Univariate temperance is characterised by a preference for prospects that disaggregate risk across different possible outcomes. Risk preferences can also be multivariate – across health and wealth, for example – characterising how preferences over risk in one attribute depend on the level of, or risk in, the other. These include correlation aversion, cross-prudence, and cross-temperance. Many articles from the Arthur Attema camp demand a great deal of background knowledge. This paper isn’t an exception, but it does provide a very clear and intuitive description of the various kinds of uni- and multivariate risk preferences that the researchers are considering.
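To make correlation aversion concrete, here is a small illustration of my own (not the paper’s elicitation design): with a utility function whose cross-derivative in wealth and longevity is negative, a 50/50 lottery that splits a wealth gain and a longevity gain across the two outcomes is preferred to a 50/50 lottery that delivers both gains together or neither:

```python
# Illustration (mine, not from the paper): a decision-maker with utility
# u(w, t) = sqrt(alpha*w + beta*t) over wealth w and life-years t has a negative
# cross-derivative, so they prefer a lottery that 'disaggregates' the gains
# across outcomes over one that bundles them together: correlation aversion.

from math import sqrt

ALPHA, BETA = 1.0, 6_000.0       # assumed: one life-year 'worth' ~£6,000 in this toy utility

def utility(wealth: float, years: float) -> float:
    return sqrt(ALPHA * wealth + BETA * years)

w0, t0 = 240_000.0, 40.0         # endowments used in the experiment
dw, dt = 60_000.0, 10.0          # hypothetical gains

# Lottery A: 50/50 between (wealth gain only) and (longevity gain only)
eu_split = 0.5 * utility(w0 + dw, t0) + 0.5 * utility(w0, t0 + dt)
# Lottery B: 50/50 between (both gains) and (neither gain)
eu_together = 0.5 * utility(w0 + dw, t0 + dt) + 0.5 * utility(w0, t0)

print(f"EU(gains split across outcomes): {eu_split:.2f}")     # ~734.8
print(f"EU(gains bundled together):      {eu_together:.2f}")  # ~733.7, lower
```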

For this study, an experiment was conducted with 98 people, who were asked to make 69 choices, corresponding to 3 choices about each risk preference trait being tested, for both gains and losses. Participants were told that they had €240,000 in wealth and 40 years of life to play with. The number of times that an individual made choices in line with a particular trait was used as an indicator of their strength of preference.

For gains, risk aversion was common for both wealth and longevity, and prudence was a common trait. There was no clear tendency towards temperance. For losses, risk aversion and prudence tended to neutrality. For multivariate risk preferences, a majority of people were correlation averse for gains and correlation seeking for losses. For gains, 76% of choices were compatible with correlation aversion, suggesting that people prefer to disaggregate fixed wealth and health gains. For losses, the opposite was true in 68% of choices. There was evidence for cross-prudence in wealth gains but not longevity gains, suggesting that people prefer health risk if they have higher wealth. For losses, the researchers observed cross-prudence and cross-temperance neutrality. The authors go on to explore associations between different traits.

A key contribution is in understanding how risk preferences differ in the health domain as compared with the monetary domain (which is what most economists study). Conveniently, there are a lot of similarities between risk preferences in the two domains, suggesting that health economists can learn from the wider economics literature. Risk aversion and prudence seem to apply to longevity as well as monetary gains, with a shift to neutrality in losses. The potential implications of these findings are far-reaching, but this is just a small experimental study. More research needed (and anticipated).

Prospective payment systems and discretionary coding—evidence from English mental health providers. Health Economics [PubMed] Published 27th December 2018

If you’ve conducted an economic evaluation in the context of mental health care in England, you’ll have come across mental health care clusters. Patients undergoing mental health care are allocated to one of 20 clusters, classed as either ‘psychotic’, ‘non-psychotic’, or ‘organic’, which forms the basis of an episodic payment model. In 2013/14, these episodes were associated with an average cost of between £975 and £9,354 per day. Doctors determine the clusters and the clusters determine reimbursement. Perverse incentives abound. Or do they?

This study builds on the fact that patients are allocated by clinical teams with guidance from the algorithm-based Mental Health Clustering Tool (MHCT). Clinical teams might exhibit upcoding, whereby patients are allocated to clusters that attract a higher price than that recommended by the MHCT. Data were analysed for 148,471 patients from the Mental Health Services Data Set for 2011-2015. For each patient, their allocated cluster is known, along with a variety of socioeconomic indicators and the HoNOS and SARN instruments, which feed into the MHCT algorithm. Mixed-effects logistic regression was used to look at whether individual patients were or were not allocated to the cluster recommended as ‘best fit’ by the MHCT, controlling for patient and provider characteristics. Further to this, multilevel multinomial logit models were used to categorise decisions that don’t match the MHCT as either under- or overcoding.
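As a very simplified sketch of the first step (my own simplification, using plain logistic regression with provider dummies rather than the paper’s mixed-effects and multilevel multinomial models; the data extract and column names are hypothetical):

```python
# Highly simplified sketch of modelling cluster 'mismatch'. This is my own
# simplification: plain logistic regression with provider dummies, not the
# paper's mixed-effects / multilevel models. File and column names are assumed.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mhsds_extract.csv")  # hypothetical extract: one row per patient

# Outcome: 1 if the allocated cluster differs from the MHCT 'best fit' cluster
df["mismatch"] = (df["allocated_cluster"] != df["mhct_best_fit"]).astype(int)

model = smf.logit(
    "mismatch ~ age + C(sex) + imd_quintile + honos_total + C(provider_id)",
    data=df,
).fit()
print(model.summary())
```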

Average agreement across clusters between the MHCT and clinicians was 36%. In most cases, patients were allocated to a cluster either one step higher or one step lower in terms of the level of need, and there isn’t an obvious tendency to overcode. The authors are able to identify a few ways in which observable provider and patient characteristics influence the tendency to under- or over-cluster patients. For example, providers with higher activity are less likely to deviate from the MHCT best fit recommendation. However, the dominant finding – identified by using median odds ratios for the probability of a mismatch between two random providers – seems to be that unobserved heterogeneity determines variation in behaviour.
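For reference, the median odds ratio for a random-intercept logistic model is usually calculated from the between-provider variance on the logit scale; a minimal sketch with an invented variance:

```python
# Median odds ratio (MOR) as commonly defined for random-intercept logistic
# models: the median odds ratio obtained when comparing two randomly chosen
# providers. The variance value below is invented for illustration.

from math import exp, sqrt
from statistics import NormalDist

provider_variance = 0.5  # assumed between-provider variance on the logit scale
mor = exp(sqrt(2 * provider_variance) * NormalDist().inv_cdf(0.75))
print(f"Median odds ratio: {mor:.2f}")
```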

The study provides clues about the ways in which providers could manipulate coding to their advantage and identifies the need for further data collection for a proper assessment. But reimbursement wasn’t linked to clustering during the time period of the study, so it remains to be seen how clinicians actually respond to these potentially perverse incentives.

Credits