*Once a month we discuss a particular research method that may be of interest to people working in health economics. We’ll consider widely used key methodologies, as well as more novel approaches. Our reviews are not designed to be comprehensive but provide an introduction to the method, its underlying principles, some applied examples, and where to find out more. If you’d like to write a post for this series, get in touch. This month’s method is constrained randomisation.*

## Principle

Randomised experimental studies are one of the best ways of estimating the causal effects of an intervention. They have become increasingly widely used in economics; Banerjee and Duflo are often credited with popularising them among economists. When done well, randomly assigning a treatment ensures that both observable and unobservable factors are independent of treatment status, and hence likely to be balanced between treatment and control units.

Many of the interventions economists are interested in are implemented at a 'cluster' level, be it a school, hospital, village, or otherwise. The appropriate experimental design is then a cluster randomised controlled trial (cRCT), in which clusters are randomised to treatment or control and individuals within each cluster are observed either cross-sectionally or longitudinally. But, outside of large-budget trials, the number of participating clusters can be fairly small. When randomising a relatively small number of clusters, we could by chance end up with quite severe imbalances in key covariates between trial arms. This presents a problem if we suspect *a priori* that these covariates influence key outcomes.
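To see how easily this happens, here is a short simulation sketch in R: with eight clusters and a single standard-normal cluster-level covariate, simple randomisation produces a sizeable imbalance between arms a large share of the time.

```r
# With only 8 clusters, simple randomisation frequently leaves a
# large difference in a cluster-level covariate between arms
set.seed(42)
n <- 8
imbalance <- replicate(10000, {
  x <- rnorm(n)                       # a cluster-level covariate
  treat <- sample(rep(c(0, 1), n/2))  # randomise 4 clusters to each arm
  abs(mean(x[treat == 1]) - mean(x[treat == 0]))
})
# proportion of randomisations with a mean difference above 0.5 SDs:
# close to half in this setting
mean(imbalance > 0.5)
```

Constrained randomisation, described below, rules out these badly balanced allocations before one is drawn.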

One solution to the problem of potential imbalance is *covariate-based constrained randomisation*. The principle is to generate a large number of possible randomisation schemes, assess the covariate balance of each one using some balance metric, and then randomly choose one scheme from among the most balanced according to that metric. This preserves random treatment assignment while ensuring covariate balance. Stratified randomisation has a similar goal, but may not be possible when there are continuous covariates of interest or too few clusters to distribute among many strata.

## Implementation

Conducting covariate constrained randomisation is straightforward and involves the following steps:

- Specifying the important baseline covariates on which to balance the clusters. For each cluster $j = 1, \dots, J$ we have $p$ covariates $x_{j1}, \dots, x_{jp}$.
- Characterising each cluster in terms of these covariates, i.e. creating the vectors $x_j = (x_{j1}, \dots, x_{jp})'$.
- Enumerating all potential randomisation schemes, or simulating a large number of them. For each one, we will need to measure the balance of the $x_j$ between trial arms.
- Selecting a candidate set of randomisation schemes that are sufficiently balanced according to some pre-specified criterion, from which we can randomly choose our treatment allocation.

### Balance scores

A key ingredient in the above steps is the balance score. This score needs to be a univariate measure of potentially multivariate imbalance between two (or more) groups. A commonly used score is that proposed by Raab and Butcher:

$$B = \sum_{k=1}^{p} \omega_k (\bar{x}_{1k} - \bar{x}_{0k})^2$$

where $\bar{x}_{1k}$ and $\bar{x}_{0k}$ are the mean values of covariate $k$ in the treatment and control groups respectively, and $\omega_k$ is some weight, often the inverse standard deviation of the covariate. Conceptually, the score is a sum of standardised differences in means, so lower values indicate greater balance. But other scores would also work. Indeed, any statistic that measures the distance between the distributions of two variables could be used and summed over the covariates. This could include the *maximum distance:*

$$d_{\max} = \max_{k} \; \omega_k \, |\bar{x}_{1k} - \bar{x}_{0k}|$$

the *Manhattan distance:*

$$d_{M} = \sum_{k=1}^{p} \omega_k \, |\bar{x}_{1k} - \bar{x}_{0k}|$$
or even the *Symmetrised Bayesian Kullback-Leibler divergence* (I can't be bothered to type this one out). Grischott has developed a Shiny application to estimate all these distances in a constrained randomisation framework, detailed in this paper.
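As an illustration, the maximum and Manhattan distances can both be computed from the standardised mean differences. This is a sketch with function names of my own invention, taking a cluster-by-covariate matrix and a logical treatment indicator:

```r
# standardised mean differences between two arms for each covariate;
# covs is a cluster-by-covariate matrix, treat a logical vector
std_mean_diff <- function(covs, treat){
  d <- colMeans(covs[treat, , drop = FALSE]) -
       colMeans(covs[!treat, , drop = FALSE])
  d / apply(covs, 2, sd)
}

# maximum distance: largest standardised mean difference
max_distance <- function(covs, treat) max(abs(std_mean_diff(covs, treat)))

# Manhattan distance: sum of standardised mean differences
manhattan_distance <- function(covs, treat) sum(abs(std_mean_diff(covs, treat)))
```

Either function could be dropped in as the balance score in the workflow below; the Manhattan distance penalises overall imbalance, while the maximum distance only penalises the worst-balanced covariate.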

Things become more complex if there are more than two trial arms, since all of the above scores can only compare two groups. However, there already exist a number of univariate measures of multivariate balance in the form of MANOVA (multivariate analysis of variance) test statistics. For example, if we have $K$ trial arms with $n_k$ clusters in arm $k$, and let $\bar{x}_k$ denote the vector of covariate means in arm $k$ and $\bar{x}$ the overall mean, then the between group covariance matrix is:

$$S_B = \sum_{k=1}^{K} n_k (\bar{x}_k - \bar{x})(\bar{x}_k - \bar{x})'$$

and the within group covariance matrix is:

$$S_W = \sum_{k=1}^{K} \sum_{j=1}^{n_k} (x_{kj} - \bar{x}_k)(x_{kj} - \bar{x}_k)'$$

which we can use in a variety of statistics including Wilks' Lambda, for example:

$$\Lambda = \frac{|S_W|}{|S_B + S_W|}$$
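For concreteness, the between and within group matrices and Wilks' Lambda could be computed at the cluster level as follows. This is a sketch with a function name of my own; rows of `covs` are clusters and `arm` holds the arm labels:

```r
# Wilks' Lambda for K arms: |S_W| / |S_B + S_W|, computed from
# cluster-level covariates (rows of covs) and arm labels in `arm`
wilks_lambda <- function(covs, arm){
  overall.mean <- colMeans(covs)
  S_B <- matrix(0, ncol(covs), ncol(covs))
  S_W <- matrix(0, ncol(covs), ncol(covs))
  for (k in unique(arm)) {
    xk <- covs[arm == k, , drop = FALSE]
    mk <- colMeans(xk)
    # between-group contribution: n_k (xbar_k - xbar)(xbar_k - xbar)'
    S_B <- S_B + nrow(xk) * tcrossprod(mk - overall.mean)
    # within-group contribution: sum of (x - xbar_k)(x - xbar_k)'
    S_W <- S_W + crossprod(sweep(xk, 2, mk))
  }
  det(S_W) / det(S_B + S_W)
}
```

Note that small values of $\Lambda$ indicate large between-arm differences, so for constrained randomisation we would keep the schemes with $\Lambda$ closest to one.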
No trial has previously used covariate constrained randomisation with multiple groups, as far as I am aware, but this is the subject of an ongoing paper investigating these scores – so watch this space!

Once the scores have been calculated for all possible schemes, or a very large number of them, we select from among the most balanced. The most balanced schemes are defined according to some quantile of the balance score, say the best (lowest-scoring) 15%.

As a simple simulated example of how this might be coded in R, let’s consider a trial of 8 clusters with two standard-normally distributed covariates. We’ll use the Raab and Butcher score from above:

```r
# simulate the covariates
n <- 8
x1 <- rnorm(n)
x2 <- rnorm(n)
x <- matrix(c(x1, x2), ncol = 2)

# enumerate all possible schemes - you'll need the partitions package here
schemes <- partitions::setparts(c(n/2, n/2))

# write a function that will estimate the score for each scheme,
# which we can apply over our set of schemes
balance_score <- function(scheme, covs){
  treat.idx <- scheme == 2
  control.idx <- scheme == 1
  treat.means <- colMeans(covs[treat.idx, , drop = FALSE])
  control.means <- colMeans(covs[control.idx, , drop = FALSE])
  cov.sds <- apply(covs, 2, sd)
  # Raab-Butcher score: weighted sum of squared mean differences
  sum((treat.means - control.means)^2 / cov.sds)
}

# apply the function to every candidate scheme
scores <- apply(schemes, 2, balance_score, covs = x)

# find the top 15% of schemes (lowest scores)
scheme.set <- which(scores <= quantile(scores, 0.15))

# choose one at random
scheme.number <- sample(scheme.set, 1)
scheme.chosen <- schemes[, scheme.number]
```

## Analyses

A commonly used method of analysis for cluster trials is to estimate a mixed model, i.e. a hierarchical model with cluster-level random effects. Two key questions are whether to control for the covariates used in the randomisation, and which test to use for treatment effects. Fan Li has two great papers answering these questions for linear models and binomial models. One key conclusion is that appropriate type I error rates are only achieved in models that adjust for the covariates used in the randomisation. For non-linear models, type I error rates can be way off for many estimators, especially with small numbers of clusters, which is often the reason for doing constrained randomisation in the first place, so a careful choice is needed here. If in doubt, I would recommend adjusted permutation tests to ensure appropriate type I error rates. Of course, one could take a Bayesian approach to analysis, although I am not aware of any assessment of the performance of Bayesian models for these analyses (another case of "watch this space!").
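As a sketch of what this looks like in practice, assuming the `lme4` package is available: simulate a small cluster trial, fit the covariate-adjusted mixed model, and compute a permutation p-value by re-randomising treatment at the cluster level. In a real constrained design the permutations should be drawn from the constrained set of schemes rather than all possible allocations.

```r
library(lme4)

# simulate a small cluster trial: 8 clusters of 10 individuals,
# a treatment effect of 0.5, and one cluster-level covariate
set.seed(1)
J <- 8; m <- 10
cl <- rep(1:J, each = m)          # cluster membership
arm.cl <- rep(c(0, 1), each = J/2) # cluster-level treatment assignment
arm <- arm.cl[cl]
x <- rnorm(J)[cl]                  # cluster-level covariate
u <- rnorm(J)[cl]                  # cluster random effect
y <- 0.5 * arm + x + u + rnorm(J * m)

# mixed model adjusted for the covariate used in the randomisation
fit <- lmer(y ~ arm + x + (1 | cl))
obs <- fixef(fit)["arm"]

# permutation test: re-randomise treatment at the cluster level and
# compare the observed coefficient to its permutation distribution
perm <- replicate(100, {
  arm.perm <- sample(arm.cl)[cl]
  fixef(lmer(y ~ arm.perm + x + (1 | cl)))["arm.perm"]
})
p.value <- mean(abs(perm) >= abs(obs))
```

The data-generating values here are illustrative only; the point is the structure of the adjusted model and the cluster-level permutation.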

## Application

Many trials have used this procedure, and listing even a fraction of them would be a daunting task. But I would be remiss not to note a trial of my own that uses covariate constrained randomisation: it investigates the effect of providing an incentive to small and medium-sized enterprises to adhere to a workplace well-being programme. Good applied examples also appear in Fan Li's papers mentioned above. And a trial featured in a journal round-up in February used covariate constrained randomisation to balance a very small number of clusters in a trial of a medicines access programme in Kenya.
