Sam Watson’s journal round-up for 21st August 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Multidimensional performance assessment of public sector organisations using dominance criteria. Health Economics [RePEc] Published 18th August 2017

The empirical assessment of the performance or quality of public organisations such as health care providers is an interesting and oft-tackled problem. Despite the development of sophisticated methods in a large and growing literature, public bodies continue to use demonstrably inaccurate or misleading statistics, such as the standardised mortality ratio (SMR). Apart from the issue that these statistics may not be well correlated with underlying quality, organisations may improve on a given measure by sacrificing their performance on another outcome valued by different stakeholders. One example from a few years ago showed how hospital rankings based upon SMRs shifted significantly once readmission rates and their correlation with SMRs were taken into account. This paper takes this thinking a step further by considering multiple outcomes potentially valued by stakeholders and using dominance criteria to compare hospitals: a hospital dominates another if it performs at least as well across all outcomes and strictly better on at least one. Importantly, correlation between these measures is captured in a multilevel model. I am an advocate of this type of approach, that is, the use of multilevel models to combine information across multiple ‘dimensions’ of quality. Indeed, my only real criticism would be that it doesn’t go far enough! The multivariate normal model used in the paper assumes a linear relationship between outcomes in their conditional distributions. Similarly, an instrumental variable model is also used (with the now routine distance-to-health-facility instrument) that also assumes a linear relationship between outcomes and ‘unobserved heterogeneity’. The complex behaviour of health care providers may well mean these assumptions do not hold – for example, failing institutions may show poor performance across the board, while other facilities are able to trade off outcomes against one another, which would suggest a non-linear relationship. I’m also finding it hard to get my head around the IV model: in particular, what the covariance matrix for the whole model is, and whether correlations are permitted at multiple levels as well. Nevertheless, it’s an interesting take on the performance question, but my faith that decent methods like this will be used in practice continues to wane as organisations such as Dr Foster still dominate quality monitoring.
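The dominance criterion itself is simple to state. A quick Python sketch with invented outcome scores (higher is better on every measure; nothing here is from the paper):

```python
# Toy dominance comparison across multiple quality outcomes.
# Scores are invented for illustration; higher is better on every measure.

def dominates(a, b):
    """Hospital a dominates b: at least as good on all outcomes,
    strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# outcomes: (survival, low-readmission score, patient satisfaction)
hospitals = {
    "A": (0.95, 0.90, 0.80),
    "B": (0.93, 0.88, 0.75),
    "C": (0.96, 0.85, 0.82),
}

pairs = [(h1, h2) for h1 in hospitals for h2 in hospitals
         if h1 != h2 and dominates(hospitals[h1], hospitals[h2])]
print(pairs)  # [('A', 'B')]: A dominates B; C is incomparable with both
```

The interesting cases are exactly the incomparable pairs: a single ranking forces a trade-off between stakeholders that dominance comparisons leave explicit.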

A simultaneous equation approach to estimating HIV prevalence with nonignorable missing responses. Journal of the American Statistical Association [RePEc] Published August 2017

Non-response is a problem encountered more often than not in survey-based data collection. For many public health applications, though, surveys are the primary way of determining the prevalence and distribution of disease, knowledge of which is required for effective public health policy. Methods such as multiple imputation can be used in the face of missing data, but these require an assumption that the data are missing at random. For disease surveys this is unlikely to be true. For example, the stigma around HIV may make many people choose not to respond to an HIV survey, leading to a situation where data are missing not at random. This paper tackles the question of estimating HIV prevalence in the face of informative non-response. Most economists are familiar with the Heckman selection model, which is a way of correcting for sample selection bias. The Heckman model is typically estimated or viewed as a control function approach, in which the residuals from a selection model are used in a model for the outcome of interest to control for unobserved heterogeneity. An alternative way of representing this model is as a copula between the survey response (selection) variable and the outcome variable itself. This representation is more flexible and permits a variety of models for both selection and outcomes. This paper includes spatial effects (given the nature of disease transmission) not only in the selection and outcome models, but also in the model for the mixing parameter between the two marginal distributions, which allows the degree of informative non-response to differ by location and be correlated over space. The instrumental variable used is the identity of the interviewer, since different interviewers are expected to be more or less successful at collecting data, independently of the status of the individual being interviewed.

Clustered multistate models with observation level random effects, mover–stayer effects and dynamic covariates: modelling transition intensities and sojourn times in a study of psoriatic arthritis. Journal of the Royal Statistical Society: Series C [ArXiv] Published 25th July 2017

Modelling the progression of disease accurately is important for economic evaluation. A delicate balance between bias and variance should be sought: a model that is too simple will be wrong for most people, while a model that is too complex will be too uncertain. A huge range of models therefore exists, from ‘simple’ decision trees to ‘complex’ patient-level simulations. A popular choice is the multistate model, such as the Markov model, which provides a convenient framework for examining the evolution of stochastic processes and systems. A common feature of such models is the Markov property: the probability of moving to a given state is independent of what has happened previously. This can be relaxed by adding covariates to the transition intensities that capture event history or other salient features. This paper provides a neat example of extending this approach further in the case of arthritis. The development of arthritic damage in a hand joint can be described by a multistate model, but there are obviously multiple joints in one hand, and the outcomes in any one joint are not likely to be independent of one another. This paper describes a multilevel model of transition probabilities for multiple correlated processes, along with other extensions like dynamic covariates and different mover-stayer probabilities.
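The basic machinery is worth seeing once. In a continuous-time multistate model, transition intensities sit in a generator matrix Q and the transition probabilities over time t are the matrix exponential P(t) = exp(Qt). A Python sketch of a three-state progression process (the states and rates are invented, not the paper's estimates):

```python
# Simple three-state progression model: no damage -> damaged -> severe damage.
# Q[i, j] is the instantaneous transition rate from state i to j; rows sum to 0.
import numpy as np
from scipy.linalg import expm

Q = np.array([
    [-0.10,  0.10,  0.00],   # leave 'no damage' at rate 0.1 per year
    [ 0.00, -0.05,  0.05],   # progress to severe damage at rate 0.05 per year
    [ 0.00,  0.00,  0.00],   # severe damage is absorbing
])

P5 = expm(Q * 5)             # 5-year transition probability matrix
print(P5[0])                 # state occupancy after 5 years, starting healthy
```

Relaxing the Markov property, as the paper does, amounts to letting the entries of Q depend on covariates, including functions of the event history, and on random effects at the joint and patient levels.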


Visualising PROMs data

The patient reported outcome measures (PROMs) programme is a large database of before and after health-related quality of life (HRQoL) measures for a large number of patients undergoing one of four key procedures: hip replacement, knee replacement, varicose vein surgery and groin hernia surgery. The outcome measures are the EQ-5D index and visual analogue scale (and a disease-specific measure for three of the interventions). These data also identify the provider of the operation. Being publicly available, these data allow us to look at a range of different questions: what’s the average effect of the surgery on HRQoL? What are the differences between providers in gains to HRQoL or in patient casemix? Great!

The first thing we should always do with new data is look at them. This might be in an exploratory way, to determine the questions to ask of the data, or in an analytical way, to get an idea of the relationships between variables. Plotting the data communicates more about what’s going on than any table of statistics alone. However, the plots on the NHS Digital website might be accused of being a little uninspired, as they collapse a lot of the variation into simple summary charts that conceal much of what’s going on.

So let’s consider other ways of visualising this data. For all these plots a walk through of the code is at the end of this post.

Now, I’m not a regular user of PROMs data, so what I think are the interesting features of the data may not reflect what the data are generally used for. For this, I think the interesting features are:

  • The joint distribution of pre- and post-op scores
  • The marginal distributions of pre- and post-op scores
  • The relationship between pre- and post-op scores over time

We will pool all the data from six years’ worth of PROMs data. This gives us over 200,000 observations. A scatter plot with this information is useless as the density of the points will be very high. A useful alternative is hexagonal binning, which is like a two-dimensional histogram. Hexagonal tiles, which usefully tessellate and are more interesting to look at than squares, can be shaded or coloured with respect to the number of observations in each bin across the support of the joint distribution of pre- and post-op scores (which is [-0.5,1]x[-0.5,1]). We can add the marginal distributions to the axes and then add smoothed trend lines for each year. Since the data are constrained between -0.5 and 1, the mean may not be a very good summary statistic, so we’ll plot a smoothed median trend line for each year. Finally, we’ll add a line on the diagonal. Patients above this line have improved and patients below it deteriorated.

Hip replacement results


There’s a lot going on in the graph, but I think it reveals a number of key points about the data that we wouldn’t have seen from the standard plots on the website:

  • There appear to be four clusters of patients:
    • Those who were in close to full health prior to the operation and were in ‘perfect’ health (score = 1) after;
    • Those who were in close to full health pre-op and who didn’t really improve post-op;
    • Those who were in poor health (score close to zero) and made a full recovery;
    • Those who were in poor health and who made a partial recovery.
  • The median change is an improvement in health.
  • The median change improves modestly from year to year for a given pre-op score.
  • There are ceiling effects for the EQ-5D.

None of this is news to those who study these data. But this way of presenting the data certainly tells more of a story than the current plots on the website.

R code

We’re going to consider hip replacement, but the code is easily modified for the other outcomes. Firstly we will take the pre- and post-op score and their difference and pool them into one data frame.

# df 14/15
df <- read.csv("C:/docs/proms/Record Level Hip Replacement 1415.csv")

df$pre  <- df$Pre.Op.Q.EQ5D.Index  # pre-op score; column name assumed to mirror the post-op one
df$post <- df$Post.Op.Q.EQ5D.Index
df$diff <- df$post - df$pre
df$year <- "2014/15"               # year label, used to colour the trend lines later

df1415 <- df[, c('Provider.Code', 'pre', 'post', 'diff', 'year')]

# df 13/14
df <- read.csv("C:/docs/proms/Record Level Hip Replacement 1314.csv")

df$pre  <- df$Pre.Op.Q.EQ5D.Index
df$post <- df$Post.Op.Q.EQ5D.Index
df$diff <- df$post - df$pre
df$year <- "2013/14"

df1314 <- df[, c('Provider.Code', 'pre', 'post', 'diff', 'year')]

# df 12/13
df <- read.csv("C:/docs/proms/Record Level Hip Replacement 1213.csv")

df$pre  <- df$Pre.Op.Q.EQ5D.Index
df$post <- df$Post.Op.Q.EQ5D.Index
df$diff <- df$post - df$pre
df$year <- "2012/13"

df1213 <- df[, c('Provider.Code', 'pre', 'post', 'diff', 'year')]

# df 11/12 (older files use different column names)
df <- read.csv("C:/docs/proms/Hip Replacement 1112.csv")

df$pre  <- df$Q1_EQ5D_INDEX        # assumed pre-op counterpart of Q2_EQ5D_INDEX
df$post <- df$Q2_EQ5D_INDEX
df$diff <- df$post - df$pre
df$year <- "2011/12"

df1112 <- df[, c('Provider.Code', 'pre', 'post', 'diff', 'year')]

# df 10/11
df <- read.csv("C:/docs/proms/Record Level Hip Replacement 1011.csv")

df$pre  <- df$Q1_EQ5D_INDEX
df$post <- df$Q2_EQ5D_INDEX
df$diff <- df$post - df$pre
df$year <- "2010/11"

df1011 <- df[, c('Provider.Code', 'pre', 'post', 'diff', 'year')]

# pool the years into a single data frame
df <- rbind(df1415, df1314, df1213, df1112, df1011)

Now, for the plot. We will need the packages ggplot2, ggExtra, and extrafont. The latter package is just to change the plot fonts, not essential, but aesthetically pleasing.

library(ggplot2)
library(ggExtra)
library(extrafont)  # optional: custom fonts
library(quantreg)   # needed for geom_quantile(method = "rqss")

loadfonts(device = "win")

p <- ggplot(df, aes(x = pre, y = post)) +
  geom_hex() +  # hex-binned counts of the joint pre/post distribution
  geom_abline(intercept = 0, slope = 1) +  # diagonal: above = improved, below = deteriorated
  geom_quantile(aes(color = year), method = "rqss", lambda = 2, quantiles = 0.5, size = 1) +
  scale_fill_gradient2(name = "Count (000s)", low = "light grey", midpoint = 15000,
    mid = "blue", high = "red") +
  labs(x = "Pre-op EQ-5D index score", y = "Post-op EQ-5D index score") +
  theme(legend.position = "bottom", text = element_text(family = "Gill Sans MT"))

ggMarginal(p, type = "histogram")  # add the marginal distributions to the axes

Chris Sampson’s journal round-up for 16th January 2017


Competition and quality indicators in the health care sector: empirical evidence from the Dutch hospital sector. The European Journal of Health Economics [PubMed] Published 3rd January 2017

In case you weren’t already convinced, this paper presents more evidence to support the notion that (non-price) competition between health care providers is good for quality. The Dutch system is based on compulsory insurance and information on quality of hospital care is made public. One feature of the Dutch health system is that – for many elective hospital services – prices are set following a negotiation between insurers and hospitals. This makes the setting of the study a bit different to some of the European evidence considered to date, because there is scope for competition on price. The study looks at claims data for 3 diagnosis groups – cataract, adenoid/tonsils and bladder tumor – between 2008 and 2011. The authors’ approach to measuring competition is a bit more sophisticated than some other studies’ and is based on actual market share. A variety of quality indicators are used for the 3 diagnosis groups relating mainly to the process of care (rather than health outcomes). Fixed and random effects linear regression models are used to estimate the impact of market share upon quality. Casemix was only controlled for in relation to the proportion of people over 65 and the proportion of women. Where a relationship was found, it tended to be in favour of lower market share (i.e. greater competition) being associated with higher quality. For cataract and for bladder tumor there was a ‘significant’ effect. So in this setting at least, competition seems to be good news for quality. But the effect sizes are neither huge nor certain. A look at each of the quality indicators separately showed plenty of ‘non-significant’ relationships in both directions. While a novelty of this study is the liberalised pricing context, the authors find that there is no relationship between price and quality scores. So even if we believe the competition-favouring results, we needn’t abandon the ‘non-price competition only’ mantra.

Cost-effectiveness thresholds in global health: taking a multisectoral perspective. Value in Health Published 3rd January 2017

We all know health care is not the only – and probably not even the most important – determinant of health. We call ourselves health economists, but most of us are simply health care economists. Rarely do we look beyond the domain of health care. If our goal as researchers is to help improve population health, then we should probably be allocating more of our mental resource beyond health care. The same goes for public spending. Publicly provided education might improve health in a way that the health service would be willing to fund. Likewise, health care might improve educational attainment. This study considers resource allocation decisions using the familiar ‘bookshelf approach’, but goes beyond the unisectoral perspective. The authors discuss a two-sector world of health and education, and demonstrate the ways in which there may be overlaps in costs and outcomes. In short, there are likely to be situations in which the optimal multisectoral decision would be for individual sectors to increase their threshold in order to incorporate the spillover benefits of an intervention in another sector. The authors acknowledge that – in a perfect world – a social-welfare-maximising government would have sufficient information to allocate resources earmarked for specific purposes (e.g. health improvement) across sectors. But this doesn’t happen. Instead the authors propose the use of a cofinancing mechanism, whereby funds would be transferred between sectors as needed. The paper provides an interesting and thought-provoking discussion, and the idea of transferring funds between sectors seems sensible. Personally I think the problem is slightly misspecified. I don’t believe other sectors face thresholds in the same way, because (generally speaking) they do not employ cost-effectiveness analysis. And I’m not sure they should. I’m convinced that for health we need to deviate from welfarism, but I’m not convinced of it for other sectors. So from my perspective it is simply a matter of health vs everything else, and we can incorporate the ‘everything else’ into a cost-effectiveness analysis (with a societal perspective) in monetary terms. Funds can be reallocated as necessary with each budget statement (of which there seem to be a lot nowadays).
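The cofinancing arithmetic can be made concrete with a toy two-sector example. Every number here is invented, including the threshold; this is my sketch of the logic, not a calculation from the paper:

```python
# An education programme with a health spillover, where the health
# sector is willing to pay up to its threshold per QALY gained.
cost = 1_000_000        # programme cost, falling on the education budget
edu_value = 800_000     # education's monetised valuation of its own outcomes
qalys = 25              # health spillover from the programme
k_health = 20_000       # assumed health-sector threshold (per QALY)

# Unisectoral decision: education alone rejects the programme
print(edu_value >= cost)                         # False

# Cofinancing: health transfers up to its valuation of the spillover
health_contribution = qalys * k_health           # 500,000
print(edu_value + health_contribution >= cost)   # True
```

The programme is rejected by either sector alone but worthwhile jointly, which is exactly the case where a transfer mechanism (or, on my preferred framing, a societal-perspective CEA) changes the decision.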

Is the Rational Addiction model inherently impossible to estimate? Journal of Health Economics [RePEc] Published 28th December 2016

Saddle point dynamics. Something I’ve never managed to get my head around, but here goes… This paper starts from the problem that empirical tests of the Rational Addiction model serve up wildly variable and often ridiculous (implied) discount rates. That may be part of the reason why economists tend to support the RA model but at the same time believe that it has not been empirically proven. The paper sets out the basis for saddle point dynamics in the context of the RA model, and outlines the nature of the stable and unstable root within the function that determines a person’s consumption over time. The authors employ Monte Carlo estimation of RA-type equations, simulating panel data observations. These simulations demonstrate that the presence of the unstable root may make it very difficult to estimate the coefficients. So even if the RA model can truly represent behaviour, empirical estimation may contradict it. This raises the question of whether the RA model is essentially untestable. A key feature of the argument relates to use of the model where a person’s time horizon is not considered to be infinite. Some non-health economists like to assume it is, which, as the authors wryly note, is not particularly ‘rational’.
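For anyone wanting to see the saddle point, the textbook rational addiction estimating equation (following Becker, Grossman and Murphy) has the form C_t = θC_{t-1} + βθC_{t+1} + …, so its homogeneous part has characteristic polynomial βθx² − x + θ = 0. A quick Python check with illustrative parameter values of my choosing:

```python
# Roots of beta*theta*x^2 - x + theta = 0, the characteristic polynomial
# of the homogeneous part of the standard rational addiction equation.
# Parameter values are illustrative only.
import numpy as np

beta, theta = 0.95, 0.4
stable, unstable = sorted(abs(np.roots([beta * theta, -1.0, theta])))
print(stable, unstable)   # one root inside the unit circle, one outside

# The product of the roots is theta/(beta*theta) = 1/beta > 1, so at least
# one root must lie outside the unit circle: the unstable root behind the
# estimation problems the paper documents.
print(stable * unstable)  # = 1/beta
```

Since the product of the roots is 1/β regardless of θ, the unstable root is baked into the model whenever the discount factor is below one, which is why the authors' Monte Carlo results are so discouraging for estimation.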