Meeting round-up: 7th annual Vancouver Health Economics Methodology (VanHEM) meeting

The 7th annual Vancouver Health Economics Methodology (VanHEM) meeting took place on June 16 in Vancouver, Canada. This one-day conference brings together health economists from across the Pacific Northwest, including Vancouver, Washington State, and Calgary. This has always been more than a Vancouver meeting, which led Anirban Basu from Washington State to suggest changing the name of the meeting to the Cascadia Health Economics Workshop (CHEW) – a definite improvement.

This year’s event began a day early, with Richard Grieve from the London School of Hygiene and Tropical Medicine, Stephen O’Neill from NUI Galway, and Jasjeet Sekhon from the University of California, Berkeley delivering a workshop titled Methods for Addressing Confounding in Comparative Effectiveness and Cost-effectiveness Studies. The workshop was an excellent introduction to propensity score matching, genetic matching, difference-in-differences estimation, and the synthetic control method, with both theoretical and empirical examples of each. I was fortunate to be one of the 16 attendees (the workshop was oversubscribed), having been unable to attend when the course was offered at the Society for Medical Decision Making conference this past October. I particularly appreciated that R and Stata code was provided for working through real-world examples; being able to see the data and code and explore different analyses made for an incredibly rich learning experience.
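To give a flavour of the first of those methods, here is a minimal sketch of propensity score matching in R. This is my own illustration on simulated data using the MatchIt package, not the course materials (which may well have used different tools):

library(MatchIt)  # propensity score and other matching methods

set.seed(1)
n  <- 1000
x1 <- rnorm(n)
x2 <- rbinom(n, 1, 0.4)
# treatment assignment depends on the covariates (confounding)
treat <- rbinom(n, 1, plogis(-0.5 + 0.8 * x1 + 0.6 * x2))
# outcome with a true treatment effect of 0.5
y   <- 1 + 0.5 * treat + 0.7 * x1 + 0.3 * x2 + rnorm(n)
dat <- data.frame(y, treat, x1, x2)

# 1:1 nearest-neighbour matching on the estimated propensity score
m <- matchit(treat ~ x1 + x2, data = dat, method = "nearest")
summary(m)            # covariate balance before and after matching
md <- match.data(m)   # the matched sample
# simple matched-sample estimate of the treatment effect
with(md, mean(y[treat == 1]) - mean(y[treat == 0]))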

The following morning, Prof Grieve delivered the plenary address to the more than 80 attendees. The talk discussed the potential for causal inference and large-scale data to influence policy, and outlined how observational data can complement evidence from randomized controlled trials (the slides are available here [PDF]). Since the expertise of our health economics community centres on other methods, primarily economic evaluation and stated preference methods, Prof Grieve’s plenary catalyzed a lot of discussion, which continued throughout the day. After the plenary, eight papers were discussed over four parallel sessions, and ten posters were presented over lunch. These included an interesting paper by Nathaniel Hendrix from Washington State on a mapping algorithm between a generic and a condition-specific quality-of-life measure for epilepsy, and two papers using discrete choice methodology: one by Tracey-Lea Laba evaluating cost sharing for long-acting beta-agonists in Australia, and another by Dean Regier, Verity Watson, and Jonathon Sicsic exploring choice certainty and choice consistency in DCEs using Kahneman’s dual processing theory.

Having been to three HESG meetings, I noticed lots of similarities with the format of VanHEM. For instance, each paper is discussed for 20 minutes by another attendee, after which the author has 5 minutes for clarification. What is different is that, before the wider discussion, members of the audience break into small groups for 5 minutes. In my experience, this addition is very effective at increasing participation during the final 25 minutes of the session, which is an open discussion amongst all attendees. It also gave attendees the opportunity to swap tips on where to find the best deals on plaid shirts.

I was fortunate enough to have my paper accepted and discussed by Prof Larry Lynd from the UBC Faculty of Pharmaceutical Sciences. Prof Lynd provided a number of excellent suggestions, most notably a much simpler and more intuitive description of the marginal rate of substitution.
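For readers less familiar with choice models, one standard textbook formulation (not necessarily the framing Prof Lynd offered) is that, for a linear utility function, the marginal rate of substitution between two attributes is simply the ratio of their coefficients:

\[ U = \beta_1 x_1 + \beta_2 x_2 + \varepsilon, \qquad \mathrm{MRS}_{1,2} = \frac{\partial U / \partial x_1}{\partial U / \partial x_2} = \frac{\beta_1}{\beta_2} \]

so, for instance, dividing an attribute’s coefficient by the negative of the cost coefficient gives a willingness-to-pay estimate.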

VanHEM also afforded an opportunity for discussion and reflection within the local health economics community. Recently, the Canadian Institutes of Health Research launched the Strategy for Patient-Oriented Research (SPOR). In BC, this involves an $80 million investment to “foster evidence-informed health care by bringing innovative approaches to the point of care, so as to ensure greater quality, accountability, and access of care”. One innovative approach is the creation of a new health economics methods cluster in the province, co-led by David Whitehurst (Simon Fraser University) and Nick Bansback (University of British Columbia). The cluster receives SPOR funds to support the health economics community as a whole, as well as specific research projects that focus on novel methods. At VanHEM, one hour was dedicated to determining how the cluster could support a community whose health economists are spread across many sites throughout the region. Participants suggested a number of dedicated academic half-days throughout the year, giving members of the community an opportunity to see each other face-to-face and engage in activities that support professional development. The theme of great titles continued with the suggestion of a “HEck-a-thon”.

Overall, this year’s VanHEM meeting was a great success. The addition of a pre-meeting workshop provided an excellent opportunity for our community to gain practical experience in causal methods, and we continue to see increased numbers of participants from outside our local region. I’m looking forward to doing this again in 2018, and I would encourage anyone visiting our region to be in touch!


Visualising PROMs data

The patient reported outcome measures (PROMs) dataset is a large database of before-and-after health-related quality of life (HRQoL) measures for a large number of patients undergoing four key procedures: hip replacement, knee replacement, varicose vein surgery, and groin hernia surgery. The outcome measures are the EQ-5D index and visual analogue scale (plus a disease-specific measure for three of the interventions). The data also identify the provider of each operation. Being publicly available, these data allow us to look at a range of different questions: what’s the average effect of the surgery on HRQoL? How do providers differ in HRQoL gains or in patient casemix? Great!

The first thing we should always do with new data is look at them. This might be exploratory, to determine the questions to ask of the data, or analytical, to get an idea of the relationships between variables. Plotting the data communicates more about what’s going on than any table of statistics alone. However, the plots on the NHS Digital website might be accused of being a little uninspired, as they collapse a lot of the variation into simple charts that conceal much of what’s going on. For example:

So let’s consider other ways of visualising these data. A walk-through of the code for all of these plots is at the end of this post.

Now, I’m not a regular user of PROMs data, so what I think are the interesting features may not reflect what the data are generally used for. For me, the interesting features are:

  • The joint distribution of pre- and post-op scores
  • The marginal distributions of pre- and post-op scores
  • The relationship between pre- and post-op scores over time

We will pool five years’ worth of PROMs data, giving us over 200,000 observations. A scatter plot of this information would be useless, as the density of the points would be very high. A useful alternative is hexagonal binning, which is like a two-dimensional histogram. Hexagonal tiles, which usefully tessellate and are more interesting to look at than squares, can be shaded or coloured according to the number of observations in each bin across the support of the joint distribution of pre- and post-op scores (approximately [-0.5, 1] × [-0.5, 1]). We can add the marginal distributions to the axes and then add smoothed trend lines for each year. Since the data are constrained between -0.5 and 1, the mean may not be a very good summary statistic, so we’ll plot a smoothed median trend line for each year instead. Finally, we’ll add a line on the diagonal: patients above this line have improved, and patients below it have deteriorated.

Hip replacement results

There’s a lot going on in the graph, but I think it reveals a number of key points about the data that we wouldn’t have seen from the standard plots on the website:

  • There appear to be four clusters of patients:
    • Those who were in close to full health prior to the operation and were in ‘perfect’ health (score = 1) after;
    • Those who were in close to full health pre-op and who didn’t really improve post-op;
    • Those who were in poor health (score close to zero) and made a full recovery;
    • Those who were in poor health and who made a partial recovery.
  • The median change is an improvement in health.
  • The median change improves modestly from year to year for a given pre-op score.
  • There are ceiling effects for the EQ-5D.

None of this is news to those who study these data. But this way of presenting the data certainly tells more of a story than the current plots on the website.

R code

We’re going to consider hip replacement, but the code is easily modified for the other procedures. First, we will take the pre- and post-op scores and their difference for each year and pool them into one data frame.

# Helper: load one year of record-level hip replacement data and keep
# the provider code, pre-op EQ-5D index, post-op EQ-5D index, and their
# difference. Column names and missing-value handling differ between
# the earlier and later file formats, hence the arguments.
read_hip <- function(file, provider_col, pre_col, post_col, year,
                     drop_na = FALSE) {
  df <- read.csv(file)
  if (drop_na) df <- df[!is.na(df[[pre_col]]), ]
  out <- data.frame(Provider.Code = df[[provider_col]],
                    pre = df[[pre_col]],
                    post = df[[post_col]])
  out$diff <- out$post - out$pre
  out$year <- year
  out
}

# 2012/13 to 2014/15: Pre.Op.Q/Post.Op.Q naming convention; these files
# contain missing pre-op scores, which we drop
df1415 <- read_hip("C:/docs/proms/Record Level Hip Replacement 1415.csv",
                   "Provider.Code", "Pre.Op.Q.EQ5D.Index",
                   "Post.Op.Q.EQ5D.Index", "2014/15", drop_na = TRUE)
df1314 <- read_hip("C:/docs/proms/Record Level Hip Replacement 1314.csv",
                   "Provider.Code", "Pre.Op.Q.EQ5D.Index",
                   "Post.Op.Q.EQ5D.Index", "2013/14", drop_na = TRUE)
df1213 <- read_hip("C:/docs/proms/Record Level Hip Replacement 1213.csv",
                   "Provider.Code", "Pre.Op.Q.EQ5D.Index",
                   "Post.Op.Q.EQ5D.Index", "2012/13", drop_na = TRUE)

# 2010/11 and 2011/12: older Q1/Q2 naming convention; the provider code
# is the first column
df1112 <- read_hip("C:/docs/proms/Hip Replacement 1112.csv",
                   1, "Q1_EQ5D_INDEX", "Q2_EQ5D_INDEX", "2011/12")
df1011 <- read_hip("C:/docs/proms/Record Level Hip Replacement 1011.csv",
                   1, "Q1_EQ5D_INDEX", "Q2_EQ5D_INDEX", "2010/11")

# combine the five years and save the pooled data
df <- rbind(df1415, df1314, df1213, df1112, df1011)
write.csv(df, "C:/docs/proms/eq5d.csv")
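Before plotting, it’s worth running a couple of quick checks on the pooled data; a minimal sketch using the columns constructed above:

# quick sanity checks on the pooled data frame
nrow(df)                                        # total records pooled
table(df$year)                                  # records per financial year
tapply(df$diff, df$year, median, na.rm = TRUE)  # median HRQoL change by year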

Now, for the plot, we will need the packages ggplot2, ggExtra, and extrafont. The latter is just to change the plot fonts; it’s not essential, but it is aesthetically pleasing.

require(ggplot2)
require(ggExtra)   # for the marginal histograms
require(extrafont) # for the plot fonts
font_import()              # only needs to be run once; it is slow
loadfonts(device = "win")  # register fonts for Windows graphics devices

p <- ggplot(data = df, aes(x = pre, y = post)) +
  # two-dimensional histogram of the joint distribution (requires hexbin)
  stat_bin_hex(bins = 15, color = "white", alpha = 0.8) +
  # 45-degree line: points above have improved, points below deteriorated
  geom_abline(intercept = 0, slope = 1, color = "black") +
  # smoothed median (0.5 quantile) trend line per year (requires quantreg)
  geom_quantile(aes(color = year), method = "rqss", lambda = 2,
                quantiles = 0.5, size = 1) +
  scale_fill_gradient2(name = "Count (000s)", low = "lightgrey",
                       midpoint = 15000, mid = "blue", high = "red",
                       breaks = c(5000, 10000, 15000, 20000),
                       labels = c(5, 10, 15, 20)) +
  theme_bw() +
  labs(x = "Pre-op EQ-5D index score", y = "Post-op EQ-5D index score") +
  scale_color_discrete(name = "Year") +
  theme(legend.position = "bottom",
        text = element_text(family = "Gill Sans MT"))

# add the marginal distributions as histograms on each axis
ggMarginal(p, type = "histogram")
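If you want to write the figure to disk, note that ggMarginal() returns its own plot object rather than a standard ggplot. One approach that works (the file path here is illustrative) is to print it inside an explicit graphics device:

# save the finished figure (illustrative path)
png("C:/docs/proms/hip_eq5d.png", width = 800, height = 800)
print(ggMarginal(p, type = "histogram"))
dev.off()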

Sam Watson’s journal round-up for 6th March 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

It’s good to be first: order bias in reading and citing NBER working papers. The Review of Economics and Statistics [RePEc] Published 23rd February 2017

Each week, one of the authors at this blog chooses three or four recently published studies to summarise and briefly discuss. Making this choice from the many thousands of articles published every week can be difficult. I browse those journals that publish in my area and search recently published economics papers on PubMed and Econlit for titles that pique my interest. But this strategy is not without its own flaws, as this study aptly demonstrates. When choosing among many alternatives, people typically aren’t presented with an unordered set of options, but rather with a list. This arises in healthcare as well: in an effort to promote competition, at least in the UK, patients are presented with a list of possible providers and some basic information about those providers. We recently covered a paper that explored this expansion of choice ‘sets’ and investigated its effects on quality. We have previously criticised the use of such lists, as people often skim them, relying on simple heuristics to make choices. This article shows that, for the weekly email of new papers published by the National Bureau of Economic Research (NBER), being listed first leads to an increase of approximately 30% in downloads and citations, despite the essentially random ordering of the list. This is certainly not the first study to illustrate the biases in human decision making, but it shows that this journal round-up may not be a fair reflection of the literature, and that providing more information about healthcare providers may not have the impact on quality that might be hypothesised.

Economic conditions, illicit drug use, and substance use disorders in the United States. Journal of Health Economics [PubMed] Published March 2017

We have featured a large number of papers about the relationship between macroeconomic conditions and health and health-related behaviours on this blog. It is certainly one of the health economic issues du jour, and one we have discussed in detail. Generally speaking, at an aggregate level, such as countries or states, all-cause mortality appears to be pro-cyclical: it declines in economic downturns. An examination at the individual or household level, by contrast, suggests that unemployment and reduced income are generally bad for health. It is certainly possible to reconcile these two effects, as any discussion of Simpson’s paradox will reveal. This study takes the aggregate approach, looking at US state-level unemployment rates and their relationship with drug use. It’s relevant to the discussion around economic conditions and health; the US has seen soaring rates of opiate-related deaths recently, although whether this is linked to the prevailing economic conditions remains to be seen. Unfortunately, this paper predicates a lot of its discussion of whether there is an effect on whether there was statistical significance, a gripe we’ve contended with previously. And there are no corrections for multiple comparisons, despite the well over 100 hypothesis tests that are conducted. That aside, the authors conclude that the evidence suggests that use of ecstasy and heroin is pro-cyclical with respect to unemployment (i.e. it increases with greater unemployment), while use of LSD, crack cocaine, and cocaine is counter-cyclical. The results appear robust to the model specifications they compare, but I find it hard to reconcile some of the findings with prior information about how people actually consume drugs. Many drugs are substitutes and/or complements for one another: for example, many heroin users began using opiates through abuse of prescription drugs such as oxycodone but made the switch as heroin is generally much cheaper, and alcohol and marijuana have been shown to be substitutes for one another. All of this suggests a lack of independence between the different outcomes considered. People may also lose their job because of drug use. Taken together, I remain a little sceptical of the conclusions of the study, but it is nevertheless an interesting and timely piece of research.
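As an aside, multiplicity corrections are cheap to apply; a minimal illustration in R, with invented p-values rather than figures from the paper:

# illustration only: these p-values are made up
p_raw <- c(0.001, 0.012, 0.035, 0.049, 0.21, 0.62)
p.adjust(p_raw, method = "BH")          # Benjamini-Hochberg false discovery rate
p.adjust(p_raw, method = "bonferroni")  # the most conservative option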

Child-to-adult neurodevelopmental and mental health trajectories after early life deprivation: the young adult follow-up of the longitudinal English and Romanian Adoptees study. The Lancet [PubMed] Published 22nd February 2017

Does early life deprivation lead to later life mental health issues? This is a difficult question to answer with observational data: children from deprived backgrounds may be predisposed to mental health issues, perhaps through familial inheritance. To attempt to discern whether deprivation in early life is a cause of mental health issues, this paper uses data derived from a cohort of Romanian children who spent time in one of the terribly deprived institutions of Ceaușescu’s Romania and who were later adopted by British families. These institutions were characterised by poor hygiene, inadequate food, and a lack of social or educational stimulation. A cohort of British adoptees was used for comparison. For children who spent more than six months in one of the deprived institutions, there was a large increase in cognitive and social problems in later life compared with either British adoptees or those who spent less than six months in an institution. The evidence is convincing, with differences displayed across multiple dimensions of mental health and a clear causal mechanism by which deprivation acts. However, for this and many other studies that I write about on this blog, a disclaimer might be needed, as there is significant (pun intended) abuse and misuse of p-values. Ziliak and McCloskey’s damning diatribe on p-values, The Cult of Statistical Significance, presents examples of lists of p-values being given completely out of context, with no reference to the model or hypothesis test they are derived from, and with the implication that they represent whether an effect exists or not. This study does just that. I’ll leave you with this extract from the abstract:

Cognitive impairment in the group who spent more than 6 months in an institution remitted from markedly higher rates at ages 6 years (p=0·0001) and 11 years (p=0·0016) compared with UK controls, to normal rates at young adulthood (p=0·76). By contrast, self-rated emotional symptoms showed a late onset pattern with minimal differences versus UK controls at ages 11 years (p=0·0449) and 15 years (p=0·17), and then marked increases by young adulthood (p=0·0005), with similar effects seen for parent ratings. The high deprivation group also had a higher proportion of people with low educational achievement (p=0·0195), unemployment (p=0·0124), and mental health service use (p=0·0120, p=0·0032, and p=0·0003 for use when aged <11 years, 11–14 years, and 15–23 years, respectively) than the UK control group.
