Method of the month: Shared parameter models

Once a month we discuss a particular research method that may be of interest to people working in health economics. We’ll consider widely used key methodologies, as well as more novel approaches. Our reviews are not designed to be comprehensive but provide an introduction to the method, its underlying principles, some applied examples, and where to find out more. If you’d like to write a post for this series, get in touch. This month’s method is shared parameter models.

Principles

Missing data and data errors are an inevitability rather than a possibility. If data were missing as the result of a random computer error, there would be no problem: estimators based on the observed data would remain unbiased. But this is probably not why they're missing. People often drop out of surveys and trials because they choose to, because they move away, or, worse, because they die. The trouble is that the factors influencing these decisions and events are typically also those that affect the outcomes of interest in our studies, thus leading to bias. Unfortunately, missing data are often improperly dealt with. For example, a study of randomised controlled trials (RCTs) in the big four medical journals found that 95% had some missing data, and around 85% of those did not deal with it in a suitable way. An instructive article in the BMJ illustrated the potentially massive biases that dropout in RCTs can generate. Similar effects should be expected from dropout in panel studies and other analyses. Now, if the data are missing at random – i.e. the probability of missingness or dropout is independent of the unobserved values conditional on the observed data – then we could base our inferences on just the observed data. But this is often not the case, so what do we do in these circumstances?

Implementation

If we have the full data Y and a set of indicators R for whether each observation is observed or missing, plus some parameters \theta and \phi, then we can factorise their joint distribution, f(Y,R;\theta,\phi), in three ways:

Selection model

f_{R|Y}(R|Y;\phi)f_Y(Y;\theta)

Perhaps most familiar to econometricians, this factorisation involves the marginal distribution of the full data and the conditional distribution of missingness given the data. The Heckman selection model is an example of this factorisation. For example, one could specify a probit model for dropout and a normally distributed outcome, and then the full likelihood would involve the product of the two.
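To make the product likelihood concrete, here is a minimal sketch in R – our own illustration on simulated data, not code from the Heckman literature – of a probit-dropout, normal-outcome model fitted by maximum likelihood. The closed form used for the missing units follows from the standard identity E[\Phi(a + bZ)] = \Phi(a/\sqrt{1 + b^2}) for Z \sim N(0,1).

```r
# A sketch of the selection-model likelihood: probit dropout, normal outcome.
# All names (y, x, r, g0, g1, ...) are illustrative, not from any package.
neg_loglik <- function(par, y, x, r) {
  b0 <- par[1]; b1 <- par[2]; sigma <- exp(par[3])  # exp() keeps sigma positive
  g0 <- par[4]; g1 <- par[5]
  mu <- b0 + b1 * x
  # Observed units contribute f_Y(y|x) * Pr(R = 1 | y)
  ll_obs <- dnorm(y[r == 1], mu[r == 1], sigma, log = TRUE) +
    pnorm(g0 + g1 * y[r == 1], log.p = TRUE)
  # Missing units: Y is integrated out; with probit selection on a normal
  # outcome, Pr(R = 1 | x) = Phi((g0 + g1*mu) / sqrt(1 + g1^2*sigma^2))
  ll_mis <- log(1 - pnorm((g0 + g1 * mu[r == 0]) / sqrt(1 + g1^2 * sigma^2)))
  -(sum(ll_obs) + sum(ll_mis))
}

# Simulated example where dropout depends on the outcome itself (MNAR)
set.seed(1)
n <- 2000
x <- rnorm(n)
y <- 1 + 0.5 * x + rnorm(n)
r <- rbinom(n, 1, pnorm(0.5 + 1 * y))
fit <- optim(c(0, 0, 0, 0, 0), neg_loglik, y = y, x = x, r = r)
fit$par  # estimates of (b0, b1, log sigma, g0, g1)
```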

Pattern-mixture model

f_{Y|R}(Y|R;\theta_R)f_R(R;\phi)

This approach specifies a marginal distribution for the missingness or dropout mechanism, with the distribution of the data differing according to the type of missingness or dropout. The data are a mixture of different patterns, i.e. distributions. This type of model is implied when non-response is not considered missing data per se and we're interested in inferences within each sub-population. For example, when estimating quality of life at a given age, the quality of life of those who have died is not of interest, but their dying can bias the estimates.
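As a rough sketch of the idea in R (our own construction, with hypothetical names df, pattern, y, and x), one could fit the outcome model within each dropout pattern and mix the pattern-specific estimates using the empirical pattern frequencies:

```r
# A pattern-mixture sketch. Note that in practice the distribution of the
# unobserved part of each pattern is not identified without further
# restrictions; this only illustrates the factorisation.
pattern_mixture_mean <- function(df, newdata) {
  patterns <- split(df, df$pattern)
  weights <- sapply(patterns, nrow) / nrow(df)               # f_R(R; phi)
  fits <- lapply(patterns, function(d) lm(y ~ x, data = d))  # f_{Y|R}(Y|R; theta_R)
  preds <- sapply(fits, predict, newdata = newdata)
  drop(preds %*% weights)  # marginal estimate, averaged over patterns
}
```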

Shared parameter model

f_{Y|\alpha}(Y|\alpha;\theta)f_{R|\alpha}(R|\alpha;\phi)

Now, the final way we can model these data posits unobserved variables, \alpha, conditional on which Y and R are independent; the likelihood for the observed data is then obtained by integrating \alpha out of this product. These models are most appropriate when the dropout or missingness is attributable to some underlying process changing over time, such as disease progression or household attitudes, or to an unobserved variable, such as health status.

At the simplest level, one could consider two separate models linked by correlated random effects: for example, adding covariates x, a linear mixed model for the outcome and a probit model for missingness, for person i at time t:

Y_{it} = x_{it}'\theta + \alpha_{1,i} + u_{it}

Pr(R_{it} = 1) = \Phi(x_{it}'\phi + \alpha_{2,i})

(\alpha_{1,i},\alpha_{2,i}) \sim MVN(0,\Sigma) and u_{it} \sim N(0,\sigma^2)

so that the random effects are multivariate normally distributed.
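A short simulation in R (our own sketch, with hypothetical names) makes the mechanics visible: with a positive correlation in \Sigma, individuals with higher outcomes are more likely to be observed, so a complete-case analysis overstates the mean.

```r
# Simulating from the correlated random effects specification above
library(MASS)  # for mvrnorm

set.seed(42)
n <- 500; n_t <- 5
Sigma <- matrix(c(1, 0.6, 0.6, 1), 2, 2)   # positively correlated random effects
alpha <- mvrnorm(n, mu = c(0, 0), Sigma = Sigma)
df <- expand.grid(id = 1:n, t = 1:n_t)
df$x <- rnorm(nrow(df))
df$y <- 1 + 0.5 * df$x + alpha[df$id, 1] + rnorm(nrow(df))        # outcome model
df$r <- rbinom(nrow(df), 1, pnorm(0.5 * df$x + alpha[df$id, 2]))  # probit missingness
df$y_obs <- ifelse(df$r == 1, df$y, NA)  # what the analyst actually sees
c(truth = mean(df$y), complete_case = mean(df$y_obs, na.rm = TRUE))
```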

A more complex and flexible specification for longitudinal settings would permit the influence of the random effects to vary over time, and to differ between submodels and individuals, with G(.) an inverse link function for the missingness model:

Y_{i}(t) = x_{i}(t)'\theta + z_{1,i}(t)'\alpha_i + u_{i}(t)

Pr(R_{i}(t) = 1) = G(x_{i}(t)'\phi + z_{2,i}(t)'\alpha_i)

\alpha_i \sim h(.) and u_{i}(t) \sim N(0,\sigma^2)

As an example, if time were discrete in this model, then z_{1,i} could be a series of parameters, one for each time period, z_{1,i} = [\lambda_1,\lambda_2,...,\lambda_T] – what are often referred to as ‘factor loadings’ in the structural equation modelling literature. We will run up against identifiability problems with these more complex models. For example, if the random effect were normally distributed, i.e. \alpha_i \sim N(0,\sigma^2_\alpha), then we could multiply each factor loading by \rho, and \alpha_i \sim N(0,\sigma^2_\alpha / \rho^2) would give us an equivalent model. So we have to put restrictions on the parameters. We can set the variance of the random effect to one, i.e. \alpha_i \sim N(0,1). Alternatively, we can fix one of the factor loadings to one, without loss of generality, i.e. z_{1,i} = [1,\lambda_2,...,\lambda_T].
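A quick numerical check of this scaling argument, with illustrative values:

```r
# Multiplying the loadings by rho and dividing the random-effect variance
# by rho^2 implies exactly the same covariance for the latent contribution.
lambda <- c(0.5, 1.0, 1.5)   # hypothetical loadings for T = 3 periods
sigma_a <- 2; rho <- 3
cov_original <- outer(lambda, lambda) * sigma_a^2
cov_rescaled <- outer(rho * lambda, rho * lambda) * (sigma_a / rho)^2
all.equal(cov_original, cov_rescaled)  # TRUE: observationally equivalent
```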

The distributional assumptions about the random effects can have potentially large effects on the resulting inferences. It is therefore possible to model these non-parametrically as well – e.g. using a mixture distribution. Ultimately, these models are a useful way to deal with data that are missing not at random, such as informative dropout from panel studies.

Software

Estimation can be tricky with these models given the need to integrate out the random effects. For frequentist inference, expectation maximisation (EM) is one way of estimating these models, but as far as I’m aware the algorithm would have to be coded for the specific problem in Stata or R. An alternative is to use some kind of quadrature-based method. The Stata package stjm fits shared parameter models for longitudinal and survival data, with similar specifications to those above.

Otherwise, Bayesian tools, such as Hamiltonian Monte Carlo, may have more luck with the more complex models. For the simpler correlated random effects specification above, one can use the stan_mvmer command in the rstanarm package, as sketched below. For more complex models, one would need to code the model in something like Stan.
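As a sketch of what that might look like (panel_data and the variable names are hypothetical; we also rely on stan_mvmer accepting a list of data frames, one per submodel, and use its default logit link rather than the probit above):

```r
# The two submodels share correlated group-specific intercepts,
# giving the simple shared parameter structure described earlier.
library(rstanarm)

fit <- stan_mvmer(
  formula = list(
    y ~ x + (1 | id),   # outcome submodel: linear mixed model
    r ~ x + (1 | id)    # missingness submodel
  ),
  data = list(
    subset(panel_data, r == 1),  # rows with an observed outcome
    panel_data                   # all rows, for the missingness indicator
  ),
  family = list(gaussian, binomial),
  chains = 4, cores = 4
)
summary(fit)
```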

Applications

For a health economics specific discussion of these types of models, one can look to the chapter Latent Factor and Latent Class Models to Accommodate Heterogeneity, Using Structural Equation in the Encyclopedia of Health Economics, although shared parameter models only get a brief mention. However, given that that book is currently on sale for £1,000, it may be beyond the wallet of the average researcher! Some health-related applications may be more helpful. Vonesh et al. (2011) used shared parameter models to look at the effects of diet and blood pressure control on renal disease progression. Wu and others (2011) look at how to model the effects of a ‘concomitant intervention’, which is one applied when a patient’s health status deteriorates and so is confounded with health, using shared parameter models. And, Baghfalaki and colleagues (2017) examine heterogeneous random effect specification for shared parameter models and apply this to HIV data.

Sam Watson’s journal round-up for 13th November 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Scaling for economists: lessons from the non-adherence problem in the medical literature. Journal of Economic Perspectives [RePEc] Published November 2017

It has often been said that development economics has been at the vanguard of the use of randomised trials within economics. Other areas of economics have slowly caught up; the internal validity, and causal interpretation, offered by experimental randomised studies can provide reliable estimates of the effects of particular interventions. Health economics, though, has perhaps an even longer history with randomised controlled trials (RCTs), and economic evaluation is now often expected alongside clinical trials. Other examples include RCTs of physician incentives and payments, investment programmes in child health, and treatment provision in schools. However, even experimental studies can suffer from the same biases in the data analysis process as observational studies. The multiple decisions made in the data analysis and publication stages of research can lead to over-inflated estimates. Beyond that, the experimental conditions of the trial may not pertain in the real world – the study may lack external validity. The medical literature has long recognised this issue: as many as 50% of patients don’t take the medicines prescribed to them by a doctor. As a result, there has been considerable effort to develop an understanding of, and interventions to remedy, the lack of transferability between RCTs and real-world outcomes. This article summarises this literature and develops lessons for economists, who are only just starting to deal with what they term ‘the scaling problem’. For example, there are many reasons people don’t respond to incentives as expected: there are psychological costs to switching; people are hyperbolic discounters and often prefer small short-term gains despite larger long-term costs; and people can often fail to understand the implications of sets of complex options. We have also previously discussed the importance of social preferences in decision making. The key point is that, as policy is becoming more and more informed by randomised studies, we need to be careful about over-optimistic effect sizes and start to understand adherence to different policies in the real world. Only then are recommendations reliable.

Estimating the opportunity costs of bed-days. Health Economics [PubMed] Published 6th November 2017

The economic evaluation of health service delivery interventions is becoming an important issue in health economics. We’ve discussed on many occasions questions surrounding the implementation of seven-day health services in England and Wales, for example. Other service delivery interventions might include changes to staffing levels more generally, medical IT technology, or an incentive to improve hand washing. Key to the evaluation of these interventions is that they are generally targeted at improving quality of care – that is, at reducing preventable harm. The vast majority of patients who experience some sort of preventable harm do not die but are likely to experience longer lengths of stay in hospital – consider a person suffering from bed sores or a fall in hospital. We therefore need to value those extra bed-days in order to say what improving hospital quality is worth. Typically we use reference costs or average accounting costs for the opportunity cost of a bed-day, mainly for pragmatic reasons, but also on the assumption that this is equivalent to the value of the second-best alternative foregone. This requires the assumption that health care markets operate properly, which they almost certainly do not. This paper explores the different ways economists have thought about opportunity costs and applies them to the question of the opportunity cost of a hospital bed-day. This includes definitions such as “Net health benefit forgone for the second-best patient-equivalents”, “Net monetary benefit forgone for the second-best treatment-equivalents”, and “Expenditure incurred + highest net revenue forgone.” The key takeaway is that there is wide variation in the estimated opportunity costs across the different methods and that, given the assumptions underpinning the most widely used methodologies are unlikely to hold, we may be routinely under- or over-valuing the effects of different interventions.

Universal investment in infants and long-run health: evidence from Denmark’s 1937 Home Visiting Program. American Economic Journal: Applied Economics [RePEc] Published October 2017

We have covered a raft of studies that look at the effects of in-utero health on later life outcomes – the so-called fetal origins hypothesis. A smaller, though by no means small, literature has considered the impact of improving infant and childhood health on adult outcomes. While many of these studies consider programmes that occurred decades ago in the US or Europe, their findings remain relevant today as many countries grapple with high infant and childhood mortality. In many low-income countries, programmes with community health workers – lay community members given some basic public health training – that involve home visits, education, and referral services are being widely adopted. This article looks at the later life impacts of an infant health programme, the Home Visiting Program, implemented in Denmark in the 1930s and 40s. The aim of the programme was to provide home visits to every newborn in each district, offering education on feeding and hygiene practices and monitoring infant progress. The programme was implemented in a trial-based fashion, with different districts adopting the programme at different times and some districts remaining as controls, although selection into treatment and control was not random. Data on health outcomes over the period 1980-2012 were obtained for people born in 1935-49. In short, the analyses suggest that the programme improved adult longevity and health outcomes, although the effects are small. For example, the authors estimate that the programme reduced hospitalisations by half a day between the ages of 45 and 64, and that 2 to 6 more people per 1,000 survived past 60 years of age. However, these effect sizes may be large enough to justify what was probably a reasonably low-cost programme when scaled across the population.
