# Method of the month: Shared parameter models

Once a month we discuss a particular research method that may be of interest to people working in health economics. We’ll consider widely used key methodologies, as well as more novel approaches. Our reviews are not designed to be comprehensive but provide an introduction to the method, its underlying principles, some applied examples, and where to find out more. If you’d like to write a post for this series, get in touch. This month’s method is shared parameter models.

## Principles

Missing data and data errors are an inevitability rather than a possibility. If data were missing as a result of a random computer error, there would be no problem: estimators based on the observed data would remain unbiased. But this is rarely why data are missing. People drop out of surveys and trials because they choose to, because they move away, or, worse, because they die. The trouble is that the factors influencing these decisions and events are typically also those that affect the outcomes of interest in our studies, leading to bias. Unfortunately, missing data are often improperly dealt with. For example, a study of randomised controlled trials (RCTs) in the big four medical journals found that 95% had some missing data, and around 85% of those did not deal with it in a suitable way. An instructive article in the BMJ illustrated the potentially large biases that dropout in RCTs can generate; similar effects should be expected from dropout in panel studies and other analyses. Now, if the data are missing at random – i.e. the probability of missingness or dropout is independent of the data conditional on observed covariates – then we can base our inferences on the observed data alone. But this is often not the case, so what do we do in these circumstances?
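
As a minimal illustration of why this matters (the simulation and its parameter values are assumptions for exposition, not from any study), here is a sketch in Python where dropout depends on the outcome itself, so the observed-data mean is biased:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an outcome and a dropout process that depends on the outcome
# itself (missing not at random): higher values are more likely observed.
y = rng.normal(loc=0.0, scale=1.0, size=100_000)
p_observed = 1 / (1 + np.exp(-2 * y))          # illustrative logistic dropout model
observed = rng.random(100_000) < p_observed

print(y.mean())            # close to the true mean of 0
print(y[observed].mean())  # biased well away from 0
```

Conditioning on being observed shifts the estimated mean substantially, even though the full-data estimator is unbiased.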

## Implementation

If we have a full set of data $Y$ and a set of indicators for whether each observation is missing, $R$, plus some parameters $\theta$ and $\phi$, then we can factorise their joint distribution $f(Y,R;\theta,\phi)$ in three ways:

### Selection model

$f_{R|Y}(R|Y;\phi)f_Y(Y;\theta)$

Perhaps most familiar to econometricians, this factorisation involves the marginal distribution of the full data and the conditional distribution of missingness given the data. The Heckman selection model is an example of this factorisation. For example, one could specify a probit model for dropout and a normally distributed outcome, and then the full likelihood would involve the product of the two.
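
A minimal sketch of this factorisation, assuming illustrative parameter values and, for simplicity, fully observed data: the joint log-likelihood is just the sum of the marginal outcome term and the conditional missingness term.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Selection-model factorisation f(R|Y) f(Y): a normal outcome and a
# probit model for missingness given the outcome. All parameter values
# here are illustrative assumptions.
theta_mu, theta_sd = 1.0, 2.0   # marginal model for Y
phi0, phi1 = 0.5, -0.3          # probit missingness model

y = rng.normal(theta_mu, theta_sd, size=5)
r = rng.random(5) < stats.norm.cdf(phi0 + phi1 * y)

# The joint log-likelihood is the sum of the two factors
ll_y = stats.norm.logpdf(y, theta_mu, theta_sd).sum()
p = stats.norm.cdf(phi0 + phi1 * y)
ll_r_given_y = np.where(r, np.log(p), np.log(1 - p)).sum()
joint_ll = ll_y + ll_r_given_y
```

With genuinely missing outcomes, the missing $Y$ values would have to be integrated out of this likelihood, which is what makes estimation harder in practice.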

### Pattern-mixture model

$f_{Y|R}(Y|R;\theta_R)f_R(R;\phi)$

This approach specifies a marginal distribution for the missingness or dropout mechanism, with the distribution of the data differing according to the type of missingness or dropout. The data are a mixture of different patterns, i.e. distributions. This type of model is implied when non-response is not considered missing data per se, and we’re interested in inferences within each sub-population. For example, when estimating quality of life at a given age, the quality of life of those who have died is not of interest, but their dying can bias the estimates.
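
A toy numerical sketch of the pattern-mixture idea, with two hypothetical patterns (all values are assumptions for illustration): the marginal mean is the pattern-probability-weighted mixture of the conditional means.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pattern-mixture view: the marginal distribution of Y is a mixture of
# pattern-specific distributions. All parameter values are illustrative.
p_pattern = np.array([0.7, 0.3])    # P(R = r) for two dropout patterns
mu_pattern = np.array([2.0, -1.0])  # E[Y | R = r] within each pattern

# Marginal mean: mix the conditional means over the pattern probabilities
overall_mean = float(p_pattern @ mu_pattern)

# Check by simulation: draw a pattern, then draw Y from that pattern
n = 200_000
pattern = rng.choice(2, size=n, p=p_pattern)
y = rng.normal(mu_pattern[pattern], 1.0)
sample_mean = y.mean()
print(overall_mean)  # 0.7*2.0 + 0.3*(-1.0) = 1.1
```

If interest lies only in one sub-population (say, the survivors), inference would instead use that pattern's conditional distribution directly.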

### Shared parameter model

$f_{Y|\alpha}(Y|\alpha;\theta)f_{R|\alpha}(R|\alpha;\phi)$

Now, the final way we can model these data posits unobserved variables, $\alpha$, conditional on which $Y$ and $R$ are independent. These models are most appropriate when the dropout or missingness is attributable to some underlying process changing over time, such as disease progression or household attitudes, or an unobserved variable, such as health status.

At the simplest level, one could specify two sub-models linked by correlated random effects. For example, adding in covariates $x$, a linear mixed model for the outcome and a probit model for missingness for person $i$ at time $t$:

$Y_{it} = x_{it}'\theta + \alpha_{1,i} + u_{it}$

$\Pr(R_{it}=1) = \Phi(x_{it}'\phi + \alpha_{2,i})$

$(\alpha_{1,i},\alpha_{2,i}) \sim MVN(0,\Sigma)$ and $u_{it} \sim N(0,\sigma^2)$

so that the random effects are multivariate normally distributed.
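
The correlated random effects specification above can be simulated directly, which makes the shared structure explicit. A sketch in Python, with $\Sigma$, $\theta$, and $\phi$ chosen arbitrarily for illustration:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n_people, n_times = 1000, 5

# Shared structure: correlated random effects (alpha_1, alpha_2) ~ MVN(0, Sigma).
# Sigma, theta, and phi below are illustrative assumptions, not estimates.
Sigma = np.array([[1.0, 0.6],
                  [0.6, 1.0]])
alpha = rng.multivariate_normal(np.zeros(2), Sigma, size=n_people)

x = rng.normal(size=(n_people, n_times))
theta, phi = 0.5, -0.2

# Outcome model: Y_it = x_it * theta + alpha_1i + u_it
y = x * theta + alpha[:, [0]] + rng.normal(0.0, 1.0, size=(n_people, n_times))

# Missingness model: Pr(R_it = 1) = Phi(x_it * phi + alpha_2i)
r = rng.random((n_people, n_times)) < norm.cdf(x * phi + alpha[:, [1]])

# Given alpha, Y and R are independent; marginally they are linked through
# the correlated random effects, so missingness is informative about Y.
```

Fitting the model then requires recovering $\Sigma$ and the regression coefficients from data like $(y, r, x)$, integrating over the unobserved $\alpha$.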

A more complex and flexible specification for longitudinal settings would permit the random effects to vary over time, differently between models and individuals:

$Y_{i}(t) = x_{i}(t)'\theta + z_{1,i} (t)\alpha_i + u_{it}$

$\Pr(R_{i}(t)=1) = G(x_{i}(t)'\phi + z_{2,i} (t)\alpha_i)$

$\alpha_i \sim h(.)$ and $u_{it} \sim N(0,\sigma^2)$

As an example, if time were discrete in this model, then $z_{1,i}$ could be a series of parameters, one for each time period, $z_{1,i} = [\lambda_1,\lambda_2,...,\lambda_T]$ – what are often referred to as ‘factor loadings’ in the structural equation modelling literature. We will run up against identifiability problems with these more complex models. For example, if the random effect were normally distributed, i.e. $\alpha_i \sim N(0,\sigma^2_\alpha)$, then we could multiply each factor loading by $\rho$ and take $\alpha_i \sim N(0,\sigma^2_\alpha / \rho^2)$ to give an equivalent model. So we have to put restrictions on the parameters: we can set the variance of the random effect to one, i.e. $\alpha_i \sim N(0,1)$, or, without loss of generality, fix one of the factor loadings to one, i.e. $z_{1,i} = [1,\lambda_2,...,\lambda_T]$.
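
This scale non-identifiability is easy to verify numerically: the implied covariance of $z\alpha$ is identical under the two parameterisations. A minimal check in Python (the loading values are arbitrary):

```python
import numpy as np

# Scale non-identifiability: loadings z with Var(alpha) = s2 imply the same
# covariance for z * alpha as loadings rho * z with Var(alpha) = s2 / rho^2.
z = np.array([1.0, 0.8, 1.3, 0.5])   # illustrative factor loadings
s2, rho = 2.0, 3.0

cov_a = s2 * np.outer(z, z)                         # Cov(z * alpha)
cov_b = (s2 / rho**2) * np.outer(rho * z, rho * z)  # rescaled model

print(np.allclose(cov_a, cov_b))  # True: the two models are observationally equivalent
```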

Since the distributional assumptions about the random effects can have potentially large effects on the resulting inferences, one can instead model them non-parametrically – e.g. using a mixture distribution. Ultimately, these models are a useful way to deal with data that are missing not at random, such as informative dropout from panel studies.

### Software

Estimation can be tricky with these models given the need to integrate out the random effects. For frequentist inference, expectation maximisation (EM) is one way of estimating them, but as far as I’m aware the algorithm would have to be coded for the specific problem in Stata or R. An alternative is a quadrature-based method. The Stata package stjm fits shared parameter models for longitudinal and survival data, with similar specifications to those above.

Otherwise, Bayesian tools, such as Hamiltonian Monte Carlo, may have more luck with the more complex models. For the simpler correlated random effects specification above, one can use the stan_mvmer function in the rstanarm package. For more complex models, one would need to code the model in something like Stan.

## Applications

For a health economics specific discussion of these types of models, one can look to the chapter ‘Latent Factor and Latent Class Models to Accommodate Heterogeneity, Using Structural Equation’ in the Encyclopedia of Health Economics, although shared parameter models only get a brief mention there. However, given that the book is currently on sale for £1,000, it may be beyond the wallet of the average researcher! Some health-related applications may be more helpful. Vonesh et al. (2011) used shared parameter models to look at the effects of diet and blood pressure control on renal disease progression. Wu and others (2011) look at how to model the effects of a ‘concomitant intervention’ – one applied when a patient’s health status deteriorates, and so confounded with health – using shared parameter models. And Baghfalaki and colleagues (2017) examine heterogeneous random effect specifications for shared parameter models and apply them to HIV data.


## Author

• Health economics, statistics, and health services research at the University of Warwick. Also like rock climbing and making noise on the guitar.
