My quality-adjusted life year

Why did I do it?

I have evaluated lots of services and been involved in trials where I have asked people to complete the EQ-5D. Over the years several people have complained to me about having to do so, so I thought I would get a ‘taste of my own medicine’: I measured my health-related quality of life (HRQoL) using EQ-5D-3L, EQ-5D-VAS, and EQ-5D-5L, every day for a year (N=1). I kept the EQ-5D in a spreadsheet on my smartphone and prompted myself to complete it at 9 p.m. every night, with a target of never being more than three days late; I missed that target twice during the year. I also recorded health-related notes for some days, for instance, 21st January said “tired, dropped a keytar on toe (very 1980s injury)”.

By doing this I wanted to illuminate issues around anchoring, ceiling effects and ideas of health and wellness. With a big increase in wearable tech and smartphone health apps, this type of big data collection might become a lot more commonplace. I have not kept a diary since I was about 13, so it was an interesting way of keeping track of what was happening, with a focus on health. Starting the year I knew I had one big life event coming up: a new baby due in early March. I am generally quite healthy, a bit overweight, and don’t get enough sleep. People have called me a hypochondriac before; I typically complain of headaches, colds and sore throats for around six months of the year. I usually go running once or twice a week.

From the start I was very conscious of feeling that I shouldn’t grumble too much, and that the EQ-5D is mainly used to measure functional health in people with disease, not in well people (ceiling effects being a known feature of the EQ-5D). I immediately felt a ‘freedom’ in the greater sensitivity of the EQ-5D-5L compared with the 3L: I could score myself as having slight problems on the 5L that were not bad enough to count as ‘some problems’ on the 3L.

There were days when I felt a bit achy or tired because I had been for a run, but unless I had an actual injury I did not score myself as having problems with pain or mobility because of this; generally, if I feel achy from running I think of it as a good thing, a sign of having pushed myself hard: ‘no pain, no gain’. I also started doing yoga this year, which made me feel great but also a bit achy sometimes. In general, though, I noticed that one of my main problems was fatigue, which is not explicitly covered by the EQ-5D but was sometimes reflected as being slightly impaired on usual activities. I also thought that usual activities could be impaired if you are working and travelling a lot, as you don’t get to do any of the things you enjoy, like hobbies or spending time with family; but this is more of a capability question, whereas the EQ-5D is more functional.

How did my HRQoL compare?

I matched up my levels on the individual domains to EQ-5D-3L and 5L index scores based on UK preference scores. The final 5L value set may still change; I used the most recently published scores. I also matched my levels to a personal 5L value set, which I derived using this survey; it uses discrete choice experiments and involves comparing a set of pairs of EQ-5D-5L health states. I found doing this fascinating, and it made me think about how mutually exclusive the EQ-5D dimensions are, and whether some health states are actually implausible: for instance, is it possible to be in extreme pain but have no impairment on usual activities?
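As an illustration of this matching step, here is a minimal sketch of how it can be done in R, assuming the interface of the CRAN eq5d package (not necessarily what was used here); the example profiles are made up:

# Illustrative only: map EQ-5D responses to index scores using the
# CRAN 'eq5d' package; the profiles below are made-up examples.
library(eq5d)

# one day's 5L profile: mobility, self-care, usual activities, pain, anxiety
day5L <- c(MO = 1, SC = 1, UA = 2, PD = 2, AD = 1)
eq5d(scores = day5L, version = "5L", type = "VT", country = "England")

# the same day on the 3L, valued with the UK TTO tariff
day3L <- c(MO = 1, SC = 1, UA = 1, PD = 2, AD = 1)
eq5d(scores = day3L, version = "3L", type = "TTO", country = "UK")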

Surprisingly, my average EQ-5D-3L index score (0.982) was higher than the population average for my age group (for England, age 35-44, it is 0.888 based on Szende et al 2014); I expected it to be lower. In fact my average index scores were higher than the average for 18-24 year olds (0.922). I thought that measuring the EQ-5D more often and with more granularity would lead to lower average scores, but it actually led to higher ones.

My average score from the personal 5L value set was slightly higher than from the England population value set (0.983 vs 0.975). Digging into the data, the main differences were that I thought usual activities were slightly more important, and pain slightly less important, than the general population did. The 5L (England tariff) correlated more closely with the VAS than the 3L did (r² = 0.746 vs. r² = 0.586), but the 5L (personal tariff) correlated most closely with the VAS (r² = 0.792). So, based on my N=1 sample, this suggests that the 5L is a better predictor of overall health than the 3L, and that the personal value set has validity in predicting VAS scores.
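For anyone wanting to reproduce this kind of comparison, a minimal sketch, assuming a data frame of daily scores (the data frame and column names here are hypothetical):

# Hypothetical sketch: r-squared between each index series and the VAS.
# 'daily' is assumed to have columns index3L, index5L, index5LP and vas.
sapply(daily[, c("index3L", "index5L", "index5LP")],
       function(x) cor(x, daily$vas)^2)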

Figure 1. My EQ-5D-3L index score [3L], EQ-5D-5L index score (England value set) [5L], EQ-5D-5L index score (personal value set) [5LP], and visual analogue scale (VAS) score divided by 100 [VAS/100].

Reflection

I definitely regretted doing the EQ-5D every day and was glad when the year was over! I would have preferred to do it every week, but I think that would have missed a lot of the subtleties in how I felt from day to day. On reflection, the way I approached it was that at the end of each day I would try to recall whether I had been stressed, or whether anything had hurt, and adjust the level on the relevant dimension. But I wonder: if I had been prompted at any moment during the day as to whether I was stressed, had mobility issues, or was in pain, would I have said I did? It makes me think about Kahneman and Riis’s ‘remembering self’ and ‘experiencing self’. Was my EQ-5D profile a slave to my ‘remembering self’ rather than my ‘experiencing self’?

One period when my score was low for a few days was when I had a really painful tooth abscess. At the time the pain felt unbearable, so I gave a high pain score; looking back I wonder whether it was really that bad, but I didn’t want to retrospectively change my scores. Strangely, I had flu twice this year, which I don’t think has ever happened to me before, and both bouts gave me some health decrements (I don’t think it was just ‘man flu’!).

I knew that I was going to have a baby this year, but I didn’t know that I would spend 18 days in hospital, despite not being ill myself. This has led me to think a lot more about ‘caregiver effects‘ – the impact on people of having close relatives who are ill. It is unnerving spending night after night in hospital: in this case because my wife was very ill after giving birth, and then, when my baby son was two months old, because he got very ill (both are doing a lot better now). Being in hospital with a sick relative is a strange feeling, stressful and boring at the same time. I spent a long time staring out of the window or scrolling through Twitter. When my baby son was really ill he would not sleep and did not want to be put down, so my arms ached after holding him all night. I was lucky that I had understanding managers at work and was not significantly financially disadvantaged by caring for sick relatives. And I was glad of the NHS, and of not receiving a huge bill when family members were discharged from hospital.

Health, wellbeing & exercise

Doing this made me think more about the difference between health and wellbeing; there were days when I was really happy but it wasn’t reflected in my EQ-5D index score. I noticed that doing exercise always led to a higher VAS score – maybe subconsciously I was thinking that exercise was increasing my ‘health stock‘. I probably used the VAS more as an overall wellbeing score than as a pure health score, which is not correct – but I wonder whether other people do this as well, and whether that is why ceiling effects are less pronounced for the VAS.

Could trials measure EQ-5D every day?

One advantage of the EQ-5D and QALYs over other health outcomes is that they can be measured repeatedly over a schedule and combined as the area under the curve. Completing an EQ-5D every day has shown me that health does vary from day to day, but I still think it would be impractical for trial participants to complete an EQ-5D questionnaire daily. Perhaps EQ-5D data could be combined with a simple daily VAS score, possibly out of ten rather than 100 for simplicity.
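To make the area-under-the-curve idea concrete, here is a minimal sketch; the function name and the example numbers are illustrative, not from my data:

# Illustrative: QALYs as the area under the utility curve, using the
# trapezium rule on measurement times (in years) and index scores.
qaly_auc <- function(t, u) {
  sum(diff(t) * (head(u, -1) + tail(u, -1)) / 2)
}

# e.g. five quarterly measurements over one year
qaly_auc(t = c(0, 0.25, 0.5, 0.75, 1),
         u = c(0.8, 0.9, 0.85, 0.95, 1))  # = 0.9 QALYs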

Joint worst days: 6th and 7th October: EQ-5D-3L index 0.264; EQ-5D-5L index 0.724; personal EQ-5D-5L index 0.824; VAS score 60 – ‘abscess on tooth, couldn’t sleep, face swollen’.

Joint best days: 27th January, 7th September, 11th September, 18th November, 4th December, 30th December: EQ-5D-3L index 1.00; both EQ-5D-5L index scores 1.00; VAS score 95 – notes include ‘lovely day with family’, ‘went for a run’, ‘holiday’, ‘met up with friends’.

Visualising PROMs data

The patient reported outcome measures (PROMs) database is a large collection of before-and-after health-related quality of life (HRQoL) measures for a large number of patients undergoing four key procedures: hip replacement, knee replacement, varicose vein surgery and groin hernia surgery. The outcome measures are the EQ-5D index and visual analogue scale (and a disease-specific measure for three of the interventions). These data also identify the provider of the operation. Being publicly available, these data allow us to look at a range of different questions: what’s the average effect of the surgery on HRQoL? What are the differences between providers in gains to HRQoL or in patient casemix? Great!

The first thing we should always do with new data is to look at it. This might be in an exploratory way, to determine the questions to ask of the data, or in an analytical way, to get an idea of the relationships between variables. Plotting the data communicates more about what’s going on than any table of statistics alone. However, the plots on the NHS Digital website might be accused of being a little uninspired, as they collapse a lot of the variation into simple summary charts that conceal much of what’s going on.

So let’s consider other ways of visualising these data. For all of these plots, a walk-through of the code is at the end of this post.

Now, I’m not a regular user of PROMs data, so what I think is interesting may not reflect what the data are generally used for. For me, the interesting features are:

  • The joint distribution of pre- and post-op scores
  • The marginal distributions of pre- and post-op scores
  • The relationship between pre- and post-op scores over time

We will pool five years’ worth of PROMs data, which gives us over 200,000 observations. A scatter plot of this information is useless, as the density of the points will be very high. A useful alternative is hexagonal binning, which is like a two-dimensional histogram: hexagonal tiles, which usefully tessellate and are more interesting to look at than squares, are shaded or coloured according to the number of observations in each bin across the support of the joint distribution of pre- and post-op scores (which is [−0.5, 1] × [−0.5, 1]). We can add the marginal distributions to the axes and then add smoothed trend lines for each year. Since the data are constrained between −0.5 and 1, the mean may not be a very good summary statistic, so we’ll plot a smoothed median trend line for each year instead. Finally, we’ll add a line on the diagonal: patients above this line have improved, and patients below it have deteriorated.

Hip replacement results

There’s a lot going on in the graph, but I think it reveals a number of key points about the data that we wouldn’t have seen from the standard plots on the website:

  • There appear to be four clusters of patients:
    • Those who were in close to full health prior to the operation and were in ‘perfect’ health (score = 1) after;
    • Those who were in close to full health pre-op and who didn’t really improve post-op;
    • Those who were in poor health (score close to zero) and made a full recovery;
    • Those who were in poor health and who made a partial recovery.
  • The median change is an improvement in health.
  • The median change improves modestly from year to year for a given pre-op score.
  • There are ceiling effects for the EQ-5D.

None of this is news to those who study these data. But this way of presenting the data certainly tells more of a story than the current plots on the website.

R code

We’re going to consider hip replacement, but the code is easily modified for the other procedures. First, we’ll read in each year’s file, take the pre- and post-op scores and their difference, and pool everything into one data frame. The column names changed between the 2011/12 and 2012/13 releases, so a small helper function handles both formats.

# Helper: read one year's file and standardise it. Files from 2012/13
# onwards use 'Pre.Op.Q.EQ5D.Index'-style column names; earlier files
# use 'Q1_EQ5D_INDEX'-style names with the provider code in column 1.
read_proms <- function(file, year, old_format = FALSE) {
  df <- read.csv(file)
  if (old_format) {
    names(df)[1] <- "Provider.Code"
    df$pre  <- df$Q1_EQ5D_INDEX
    df$post <- df$Q2_EQ5D_INDEX
  } else {
    df <- df[!is.na(df$Pre.Op.Q.EQ5D.Index), ]
    df$pre  <- df$Pre.Op.Q.EQ5D.Index
    df$post <- df$Post.Op.Q.EQ5D.Index
  }
  df$diff <- df$post - df$pre
  df$year <- year
  df[, c("Provider.Code", "pre", "post", "diff", "year")]
}

# pool all five years into one data frame
df <- rbind(
  read_proms("C:/docs/proms/Record Level Hip Replacement 1415.csv", "2014/15"),
  read_proms("C:/docs/proms/Record Level Hip Replacement 1314.csv", "2013/14"),
  read_proms("C:/docs/proms/Record Level Hip Replacement 1213.csv", "2012/13"),
  read_proms("C:/docs/proms/Hip Replacement 1112.csv", "2011/12", old_format = TRUE),
  read_proms("C:/docs/proms/Record Level Hip Replacement 1011.csv", "2010/11", old_format = TRUE)
)

write.csv(df, "C:/docs/proms/eq5d.csv")

Now, for the plot. We will need the packages ggplot2, ggExtra, and extrafont. The last of these just changes the plot fonts; it’s not essential, but it is aesthetically pleasing.

require(ggplot2)
require(ggExtra)   # for the marginal histograms
require(extrafont) # optional: custom fonts
font_import()             # slow; only needs to be run once
loadfonts(device = "win")

p <- ggplot(data = df, aes(x = pre, y = post)) +
  # hexagonal bins: a two-dimensional histogram of the joint
  # distribution (requires the hexbin package)
  stat_bin_hex(bins = 15, color = "white", alpha = 0.8) +
  # diagonal line of no change: improvement above, deterioration below
  geom_abline(intercept = 0, slope = 1, color = "black") +
  # smoothed median (0.5 quantile) trend line for each year;
  # method "rqss" requires the quantreg package
  geom_quantile(aes(color = year), method = "rqss", lambda = 2,
                quantiles = 0.5, size = 1) +
  scale_fill_gradient2(name = "Count (000s)", low = "light grey",
                       midpoint = 15000, mid = "blue", high = "red",
                       breaks = c(5000, 10000, 15000, 20000),
                       labels = c(5, 10, 15, 20)) +
  theme_bw() +
  labs(x = "Pre-op EQ-5D index score", y = "Post-op EQ-5D index score") +
  scale_color_discrete(name = "Year") +
  theme(legend.position = "bottom",
        text = element_text(family = "Gill Sans MT"))

# add the marginal distributions to the axes
ggMarginal(p, type = "histogram")

Chris Sampson’s journal round-up for 14th November 2016

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Weighing clinical evidence using patient preferences: an application of probabilistic multi-criteria decision analysis. PharmacoEconomics [PubMed] Published 10th November 2016

There are at least two ways in which preferences determine the allocation of health care resources (in a country with an HTA agency, at least). One of them we think about a lot: the (societal) valuation of health states as defined by a multi-attribute measure (like the EQ-5D). The other relates to patient preferences that determine whether or not a specific individual (and their physician) will choose to use a particular technology, given its expected clinical outcomes for that individual. A drug may very well make sense at the aggregate level but be a very bad choice for a particular individual when compared with alternatives. It’s right that this process should be deliberative and not solely driven by an algorithm, but it’s also important to maintain transparent and consistent decision making. Multi-criteria decision analysis (MCDA) has been proposed as a means of achieving this, and it can be used to take into account the uncertainty associated with clinical outcomes. In this study the authors present an approach that also incorporates random preference variation along with parameter uncertainty in both preferences and clinical evidence. The model defines a value function and estimates the impact of uncertainty using Monte Carlo simulation, which in turn estimates the mean value of each possible treatment in the population. Treatments can therefore be ranked according to patients’ preferences, along with an estimate of the uncertainty associated with this ranking. To demonstrate the utility of the model it is applied to an example of the relative value of HAARTs for HIV, with parameters derived from clinical evaluations and stated preference studies. It’s nice to see that the authors also provide their R script. One headline finding seems to be that this approach is likely to demonstrate just how much uncertainty is involved that might not previously have been given much attention. It could therefore help steer us towards more valuable research in the future. And it could be used to demonstrate that optimal decisions might change when all sources of uncertainty are considered. Clearly a potential application of this method is in the realm of personalised medicine, which is slowly but inevitably reaching beyond the confines of pharmacogenomics.
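To give a flavour of the general approach (a generic sketch, not the authors’ model or their R script; every number here is made up):

# Generic probabilistic MCDA sketch: two treatments, two criteria.
# Preference weights and clinical outcomes are both drawn at random,
# a linear value function is computed per draw, and treatments are
# compared on mean value across draws.
set.seed(1)
n <- 10000
w1 <- rbeta(n, 2, 2)  # random preference weight on criterion 1
w2 <- 1 - w1          # weights sum to one
# hypothetical criterion scores (mean, sd) on a 0-1 scale
value_A <- w1 * rnorm(n, 0.70, 0.05) + w2 * rnorm(n, 0.60, 0.05)
value_B <- w1 * rnorm(n, 0.60, 0.05) + w2 * rnorm(n, 0.80, 0.05)
c(A = mean(value_A), B = mean(value_B))  # mean value of each treatment
mean(value_A > value_B)                  # share of draws ranking A first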

Communal sharing and the provision of low-volume high-cost health services: results of a survey. PharmacoEconomics – Open Published 4th November 2016

One of the distributional concerns we might have about the QALY-maximisation approach is its implications for people with rare diseases. Drugs for rare diseases are often expensive (because the marginal cost is likely to be higher) and therefore less cost-effective. There is mixed evidence about whether or not people exhibit a preference for redistributive allocation of QALY-creating resources according to rarity. Of course, the result you get from such studies depends on the question you ask. In order to ask the right question, it’s important to understand the mechanisms by which people might prefer allocation of additional resources to services for rare diseases. One suggestion in the literature is the preservation of hope. This study presents another, based on the number of people sharing the cost. So imagine a population of 1000 people, all of whom share the cost of health care. For a rare disease, more people will share the cost of the treatment per person treated. So if 10 people have the disease, that’s 100 payers per recipient. If 100 people have the disease then it’s just 10 payers per recipient. The idea is that people prefer a situation in which more people share the cost, and on that basis prefer to allocate resources to rare diseases. A web-based survey was conducted in Australia in which 702 people were asked to divide a budget between a small patient group with a high-cost illness and a large patient group with a low-cost illness. There was also a set of questions in which respondents indicated the importance of six possible influences on their decisions. The findings show that people did choose to allocate more funds to the rarer disease, despite the reduced overall health gain. This suggests that people do have a preference for wider cost sharing, which could explain extra weight being given to rare diseases. I think it’s a good idea that deserves more research, but for me there are a few problems with the study. Much of the effect could be explained by people’s non-linear valuations of risk, as the scenario highlighted that the respondents themselves would be at risk of the disease. We also can’t clearly differentiate between an effect due to the rarity of the disease (and associated cost sharing) and an effect due to the severity of the disease.
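The cost-sharing arithmetic above is simple enough to verify directly (a trivial sketch using the numbers in the text):

# payers per recipient in a population of 1000 sharing costs equally
population <- 1000
n_patients <- c(rare = 10, common = 100)
population / n_patients  # rare: 100 payers per recipient; common: 10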

The challenge of conditional reimbursement: stopping reimbursement can be more difficult than not starting in the first place! Value in Health Published 3rd November 2016

If anything’s going to make me read a paper, it’s an exclamation mark! Conditional reimbursement of technologies that are probably effective but probably not cost-effective can be conducted in a rational way in order to generate research findings and benefit social welfare in the long run. But that can only hold true if those technologies subsequently found (through more research) to be ineffective or too costly are then made unavailable. Otherwise conditional reimbursement agreements will do more harm than good. This study uses discrete choice experiments to compare public (n=1169) and potential policymaker (n=90) values associated with the removal of an available treatment compared with non-reimbursement of a new treatment. The results showed (in addition to some other common findings) that both the public and policymakers preferred reimbursement of an existing treatment over the reimbursement of a new treatment, and were willing to accept an ICER of more than €7,000 higher for an existing treatment. Though the DCE found it to be a significant determinant, 60% of policymakers reported that they thought that reimbursement status was unimportant, so there may be some cognitive dissonance going on there. The most obvious (and probably most likely) explanation for the observed preference for currently reimbursed treatments is loss aversion. But it could also be that people recognise real costs associated with ending reimbursement that are not reflected in either the QALY estimates or the costs to the health system. Whatever the explanation, HTA agencies need to bear this in mind when using conditional reimbursement agreements.

Head-to-head comparison of health-state values derived by a probabilistic choice model and scores on a visual analogue scale. The European Journal of Health Economics [PubMed] Published 2nd November 2016

I’ve always had a fondness for a good old VAS as a direct measure of health state (dare we say utility) values, despite the limitations of the approach. This study compares discrete choices for EQ-5D-5L states with VAS valuations – thus comparing indirect and direct health state valuations – in Canada, the USA, England and The Netherlands (n=1775). Each respondent had to make a forced choice between two EQ-5D-5L health states and then assess both states on a single VAS. Ten different pairs were completed by each respondent. The two different approaches correlated strongly within and across countries, as we might expect. And pairs of EQ-5D-5L states that were valued relatively low or high in the discrete choice model were also valued accordingly in the VAS. But the relationship between the two approaches was non-linear in that values differed more at the ends of the scale, with poor health states valued more differently in the choice model and good health states valued more differently on the VAS. This probably just reflects some of the biases observed in the use of VAS that are already well-documented, particularly context bias and end-state aversion. This study clearly suggests (though does not by itself prove) that discrete choice models are a better choice for health state valuation… but the VAS ain’t dead yet.
