# James Altunkaya’s journal round-up for 3rd September 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Sensitivity analysis for not-at-random missing data in trial-based cost-effectiveness analysis: a tutorial. PharmacoEconomics [PubMed] [RePEc] Published 20th April 2018

Last month, we highlighted a Bayesian framework for imputing missing data in economic evaluation. The paper dealt with departures from the ‘Missing at Random’ (MAR) assumption by using a Bayesian approach in which expert elicitation was used to specify a plausible missingness model and to generate a prior distribution for the unobserved terms in the outcomes model.

For those less comfortable with Bayesian estimation, this month we highlight a tutorial paper from the same authors, outlining an approach to assess the impact of plausible departures from ‘Missingness at Random’ assumptions on cost-effectiveness results. Given poor adherence to current best-practice recommendations for handling and reporting missing data, an incremental approach to improving missing data methods in health research may be more realistic. The authors supply accompanying Stata code.

The paper investigates the importance of assuming a degree of ‘informative’ missingness (i.e. ‘Missingness not at Random’) in sensitivity analyses. In a case study, the authors present a range of scenarios which assume a decrement of 5-10% in the quality of life of patients with missing health outcomes, compared to multiple imputation estimates based on observed characteristics under standard ‘Missing at Random’ assumptions. This represents an assumption that, controlling for all observed characteristics used in multiple imputation, those with complete quality of life profiles may have higher quality of life than those with incomplete surveys.

Quality of life decrements were implemented in the control and treatment arm separately, and then jointly, in six scenarios. This aimed to demonstrate the sensitivity of cost-effectiveness judgements to the possibility of a different missingness mechanism in each arm. The authors similarly investigate sensitivity to higher health costs in those with missing data than predicted based on observed characteristics in imputation under ‘Missingness at Random’. Finally, sensitivity to a simultaneous departure from ‘Missingness at Random’ in both health outcomes and health costs is investigated.
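
For readers who want to experiment with this kind of analysis, the sketch below illustrates the general delta-adjustment idea in R (the paper itself provides Stata code): impute under MAR, then apply an assumed decrement to the imputed values in one arm and see how the between-arm difference moves. The dataset, variable names and the 10% multiplicative decrement are illustrative assumptions, not taken from the paper.

```r
# Illustrative R sketch of an MNAR sensitivity analysis by delta adjustment.
# All data and names here are made up; the paper's accompanying code is in Stata.
library(mice)

set.seed(42)
n <- 200
trial <- data.frame(
  arm = rep(c(0, 1), each = n / 2),
  age = rnorm(n, 65, 8),
  qol = rnorm(n, 0.7, 0.15)
)
trial$qol[sample(n, 50)] <- NA                   # some missing outcomes, purely to show the mechanics

imp  <- mice(trial, m = 20, printFlag = FALSE)   # multiple imputation under MAR
miss <- is.na(trial$qol)

# Apply a 10% decrement to the *imputed* QoL values in the treatment arm only, then
# compare the between-arm difference with and without the adjustment (the paper's
# scenarios consider 5-10% decrements applied to each arm separately and jointly).
diffs <- sapply(seq_len(imp$m), function(i) {
  d <- complete(imp, i)
  d$qol_mnar <- d$qol
  d$qol_mnar[miss & d$arm == 1] <- 0.90 * d$qol_mnar[miss & d$arm == 1]
  c(mar  = mean(d$qol[d$arm == 1]) - mean(d$qol[d$arm == 0]),
    mnar = mean(d$qol_mnar[d$arm == 1]) - mean(d$qol_mnar[d$arm == 0]))
})
rowMeans(diffs)   # crude pooled between-arm differences under MAR vs the MNAR scenario
```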

The proposed sensitivity analyses provide a useful heuristic to assess what degree of difference between missing and non-missing subjects on unobserved characteristics would be necessary to change cost-effectiveness decisions. The authors admit this framework could appear relatively crude to those comfortable with more advanced missing data approaches such as those outlined in last month’s round-up. However, this approach should appeal to those interested in presenting the magnitude of uncertainty introduced by missing data assumptions, in a way that is easily interpretable to decision makers.

The impact of waiting for intervention on costs and effectiveness: the case of transcatheter aortic valve replacement. The European Journal of Health Economics [PubMed] [RePEc] Published September 2018

This paper appears in print this month and sparked interest as one of comparatively few studies on the cost-effectiveness of waiting lists. Given the growing use of constrained optimisation methods in health outcomes research, highlighted in this month’s editorial in Value in Health, there is rightly interest in extending the traditional sphere of economic evaluation from drugs and devices to understanding the trade-offs of investing in a wider range of policy interventions, using a common metric of costs and QALYs. Rachel Meacock’s paper earlier this year did a great job of outlining some of the challenges involved in broadening the scope of economic evaluation to more general decisions in health service delivery.

The authors set out to understand the cost-effectiveness of delaying a cardiac treatment (TAVR) using a waiting list of up to 12 months, compared to a policy of immediate treatment. The effectiveness of treatment at 3, 6, 9 & 12 months after initial diagnosis, health decrements during waiting, and corresponding health costs during the wait and post-treatment were derived from a small observational study. As treatment is studied in an elderly population, a non-ignorable proportion of patients die whilst waiting for surgery. This translates into lower modelled costs, but also fewer quality-adjusted life years, in modelled cohorts with any delay relative to a policy of immediate treatment. The authors conclude that eliminating all waiting time for TAVR would produce population health at a rate of ~€12,500 per QALY gained.

However, based on the modelling presented, the authors lack the ability to make cost-effectiveness judgements of this sort. Waiting lists exist for a reason, chiefly a lack of clinical capacity to treat patients immediately. In taking a decision to treat patients immediately in one disease area, we therefore need some judgement as to whether the health displaced in now untreated patients in another disease area is of greater, lesser, or equal magnitude to that gained by treating TAVR patients immediately. Alternatively, the modelling should include the cost of acquiring additional clinical capacity (such as theatre space) to treat TAVR patients immediately, so as not to displace other treatments. In such a case, the ICER is likely to be much higher, due to the large cost of the new resources needed to reduce waiting times to zero.

Given the data available, a simple improvement to the paper would be to reflect current waiting times (already gathered from the observational study) as the ‘standard of care’ arm. The estimated change in quality of life and healthcare resource cost from reducing waiting times to zero from the levels observed in current practice could then be calculated. This could in turn be used to calculate the maximum acceptable cost of acquiring the additional treatment resources needed to treat patients with no waiting time, given current national willingness-to-pay thresholds.
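
As a rough illustration of that calculation, with entirely hypothetical numbers rather than figures from the paper:

```r
# Hypothetical numbers only: the maximum acceptable cost of extra capacity per patient,
# given a willingness-to-pay threshold and the estimated gains from eliminating waits.
wtp    <- 30000   # assumed willingness-to-pay per QALY
d_qaly <- 0.15    # assumed QALYs gained per patient from immediate vs current waiting times
d_cost <- 1200    # assumed extra treatment cost per patient, excluding any new capacity

# Immediate treatment remains acceptable while the total incremental cost per patient is
# below wtp * d_qaly, so the budget left over for new capacity per patient is:
max_capacity_cost <- wtp * d_qaly - d_cost
max_capacity_cost   # 3300 per patient in this illustration
```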

Admittedly, there remain problems in using the authors’ chosen observational dataset to calculate quality of life and cost outcomes for patients treated at different time points. Waiting times in this ‘real world’ observational study were assigned according to clinical assessment of patients’ treatment need. The quality of life lost while waiting is therefore expected to be lower for patients actually treated at 12 months than it would have been had the group judged to need immediate treatment been made to wait. A previous study in cardiac care took on the more manageable task of investigating the cost-effectiveness of different prioritisation strategies for the waiting list, examining the sensitivity of conclusions to varying a fixed maximum wait-time for the last patient treated.

This study therefore demonstrates some of the difficulties in attempting to make cost-effectiveness judgements about waiting time policy. Given that the cost-effectiveness of reducing waiting times in different disease areas is expected to vary, based on the relative impact of waiting on short- and long-term health outcomes and costs, this remains an interesting area for economic evaluation to explore. In the context of the current focus on constrained optimisation techniques across different areas in healthcare (see ISPOR task force), extending economic evaluation to a broader range of decision problems on a common scale is likely to become increasingly important in future.

Understanding and identifying key issues with the involvement of clinicians in the development of decision-analytic model structures: a qualitative study. PharmacoEconomics [PubMed] Published 17th August 2018

This paper gathers evidence from interviews with clinicians and modellers, with the aim of improving the working relationship between the two fields during model development.

Researchers gathered opinion from a variety of settings, including industry. The main report focusses on evidence from two case studies: one tracking the working relationship between modellers and a single clinical advisor at a UK university, and the second gathering evidence from a UK policy institute, where modellers worked with up to 11 clinical experts per meeting.

Some of the authors’ conclusions are not particularly surprising. Modellers reported difficulty in recruiting clinicians to advise on model structures, and further difficulty in then engaging recruited clinicians to provide relevant advice for the model building process. Specific comments suggested difficulty for some clinical advisors in identifying representative patient experiences, instead diverting modellers’ attention towards rare outlier events.

Study responses suggested that currently only one or two clinicians are typically consulted during model development. The authors recommend involving a larger group of clinicians at this stage of the modelling process, with a more varied range of clinical experience (junior as well as senior clinicians, with some geographical variation). This is intended to help ensure the clinical pathways modelled are generalisable. The involvement of just one clinical collaborator in the university case study, compared with 11 clinicians at the policy institute, may also illustrate a general problem of inadequate compensation for clinical time within the university system. The authors also advocate making relevant training in decision modelling available to clinicians, to help make more efficient use of participants’ time during model building. Clinicians sampled were supportive of this view – citing the need for further guidance from modellers on the nature of their expected contribution.

This study ties into the general literature regarding structural uncertainty in decision analytic models. In calling for the early contribution of a larger, more diverse group of clinicians to model development, the authors advocate a degree of alignment between clinical involvement during model structuring and guidelines for eliciting parameter estimates from clinical experts. Similar problems, however, remain for both fields in recruiting clinical experts from sufficiently diverse backgrounds to provide a valid sample.

Credits

# Method of the month: Shared parameter models

Once a month we discuss a particular research method that may be of interest to people working in health economics. We’ll consider widely used key methodologies, as well as more novel approaches. Our reviews are not designed to be comprehensive but provide an introduction to the method, its underlying principles, some applied examples, and where to find out more. If you’d like to write a post for this series, get in touch. This month’s method is shared parameter models.

## Principles

Missing data and data errors are an inevitability rather than a possibility. If these data were missing as a result of a random computer error, then there would be no problem: no bias would result in estimators based on these data. But this is probably not why they’re missing. People drop out of surveys and trials often because they choose to, because they move away, or, worse, because they die. The trouble is that the factors influencing these decisions and events are typically also those that affect the outcomes of interest in our studies, thus leading to bias. Unfortunately, missing data is often improperly dealt with. For example, a study of randomised controlled trials (RCTs) in the big four medical journals found that 95% had some missing data, and around 85% of those did not deal with it in a suitable way. An instructive article in the BMJ illustrated the potentially massive biases that dropout in RCTs can generate. Similar effects should be expected from dropout in panel studies and other analyses. Now, if the data are missing at random – i.e. the probability of missing data or dropout is independent of the data, conditional on observed covariates – then we could base our inferences on just the observed data. But this is often not the case, so what do we do in these circumstances?
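
A toy simulation (a sketch in base R, with made-up parameters) shows why: if the chance of observing an outcome depends on the outcome itself, the complete-case estimate is biased.

```r
# Informative dropout in miniature: low outcomes are less likely to be observed,
# so the mean of the observed values overstates the true mean.
set.seed(1)
n <- 1e5
y <- rnorm(n)                       # full data, true mean 0
p_obs <- pnorm(-1 + 1.5 * y)        # probability of being observed increases with y
r <- rbinom(n, 1, p_obs)            # 1 = observed, 0 = missing
mean(y)                             # close to 0 (the truth)
mean(y[r == 1])                     # clearly above 0: complete-case bias
```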

## Implementation

If we have a full set of data $Y$ and a set of indicators for whether each observation is missing, $R$, plus some parameters $\theta$ and $\phi$, then we can factorise their joint distribution, $f(Y,R;\theta,\phi)$, in three ways:

### Selection model

$f_{R|Y}(R|Y;\phi)f_Y(Y;\theta)$

Perhaps most familiar to econometricians, this factorisation involves the marginal distribution of the full data and the conditional distribution of missingness given the data. The Heckman selection model is an example of this factorisation. For example, one could specify a probit model for dropout and a normally distributed outcome, and then the full likelihood would involve the product of the two.
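
A minimal sketch of that factorisation, assuming the sampleSelection R package is available: the data are simulated with correlated errors so that selection depends on unobservables, and the probit selection and normal outcome equations are estimated jointly by maximum likelihood.

```r
# Selection-model sketch: probit selection equation plus a normal outcome equation,
# with correlated errors, estimated jointly (Heckman-type model).
library(MASS)             # mvrnorm
library(sampleSelection)  # assumed available from CRAN

set.seed(2)
n <- 5000
x <- rnorm(n)
z <- rnorm(n)                                        # appears only in the selection equation
e <- mvrnorm(n, mu = c(0, 0),
             Sigma = matrix(c(1, 0.6, 0.6, 1), 2))   # correlated errors create the bias
s <- as.numeric(0.5 + z + e[, 1] > 0)                # 1 = outcome observed
y <- 1 + 2 * x + e[, 2]
y[s == 0] <- NA                                      # outcome missing when not selected

fit <- selection(selection = s ~ z, outcome = y ~ x,
                 data = data.frame(y, s, x, z))
summary(fit)   # recovers the slope on x and the error correlation rho
```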

### Pattern-mixture model

$f_{Y|R}(Y|R;\theta_R)f_R(R;\phi)$

This approach specifies a marginal distribution for the missingness or dropout mechanism and then the distribution of the data differs according to the type of missingness or dropout. The data are a mixture of different patterns, i.e. distributions. This type of model is implied when non-response is not considered missing data per se, and we’re interested in inferences within each sub-population. For example, when estimating quality of life at a given age, the quality of life of those that have died is not of interest, but their dying can bias the estimates.
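
In miniature, the pattern-mixture factorisation says the overall mean is a weighted average of pattern-specific means, and the mean for the unobserved pattern has to come from an assumption rather than from the data. A short sketch with made-up numbers:

```r
# Pattern-mixture arithmetic: overall mean = weighted average of pattern-specific means.
# The mean for the missing pattern is not identified, so it is set via a sensitivity parameter.
qol_obs <- c(0.82, 0.75, 0.88, 0.70, 0.79)   # observed respondents (illustrative values)
p_miss  <- 0.30                              # proportion with a missing outcome
delta   <- -0.10                             # assumed shortfall for non-respondents

mu_obs  <- mean(qol_obs)
mu_miss <- mu_obs + delta                    # an assumption, not an estimate
mu_all  <- (1 - p_miss) * mu_obs + p_miss * mu_miss
mu_all
```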

### Shared parameter model

$f_{Y}(Y|\alpha;\theta)f_R(R|\alpha;\phi)$

Now, the final way we can model these data posits unobserved variables, $\alpha$, conditional on which $Y$ and $R$ are independent. These models are most appropriate when the dropout or missingness is attributable to some underlying process changing over time, such as disease progression or household attitudes, or an unobserved variable, such as health status.

At the simplest level, one could consider two separate models with correlated random effects – for example, adding in covariates $x$ and having a linear mixed model for the outcome and a probit selection model for person $i$ at time $t$:

$Y_{it} = x_{it}'\theta + \alpha_{1,i} + u_{it}$

$P(R_{it} = 1) = \Phi(x_{it}'\phi + \alpha_{2,i})$

$(\alpha_{1,i},\alpha_{2,i}) \sim MVN(0,\Sigma)$ and $u_{it} \sim N(0,\sigma^2)$

so that the random effects are multivariate normally distributed.
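
To make the specification concrete, here is a small simulation from this model (a sketch with illustrative parameter values): a linear mixed model for the outcome, a probit model for whether it is observed, and correlated random intercepts linking the two.

```r
# Simulate from the simple shared parameter model: correlated random intercepts
# link the outcome model and the probability of observation.
library(MASS)

set.seed(3)
n_id <- 500; n_t <- 5
Sigma <- matrix(c(0.5, 0.3, 0.3, 0.5), 2)             # Cov(alpha_1, alpha_2)
alpha <- mvrnorm(n_id, mu = c(0, 0), Sigma = Sigma)

dat <- expand.grid(id = 1:n_id, t = 1:n_t)
dat$x <- rnorm(nrow(dat))
dat$y <- 1 + 0.5 * dat$x + alpha[dat$id, 1] + rnorm(nrow(dat), sd = 0.5)
p_obs <- pnorm(0.8 + 0.2 * dat$x + alpha[dat$id, 2])  # probit observation model
dat$r <- rbinom(nrow(dat), 1, p_obs)
# Dropout is informative: a low alpha_2 means more missingness and, via the positive
# correlation, a lower alpha_1 and hence a lower outcome.
dat$y_obs <- ifelse(dat$r == 1, dat$y, NA)
head(dat)
```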

A more complex and flexible specification for longitudinal settings would permit the influence of the random effects to vary over time, and to differ between the two models and across individuals:

$Y_{i}(t) = x_{i}(t)'\theta + z_{1,i}(t)\alpha_i + u_{i}(t)$

$P(R_{i}(t) = 1) = G(x_{i}(t)'\phi + z_{2,i}(t)\alpha_i)$

$\alpha_i \sim h(.)$ and $u_{i}(t) \sim N(0,\sigma^2)$

As an example, if time were discrete in this model then $z_{1,i}$ could be a series of parameters for each time period, $z_{1,i} = [\lambda_1,\lambda_2,...,\lambda_T]$, often referred to as ‘factor loadings’ in the structural equation modelling literature. We will run up against identifiability problems with these more complex models. For example, if the random effect were normally distributed, i.e. $\alpha_i \sim N(0,\sigma^2_\alpha)$, then we could multiply each factor loading by $\rho$ and replace the random effect distribution with $\alpha_i \sim N(0,\sigma^2_\alpha / \rho^2)$ to give an equivalent model. So we have to put restrictions on the parameters to fix the scale: we can set the variance of the random effect to one, i.e. $\alpha_i \sim N(0,1)$, or, without loss of generality, fix one of the factor loadings to one, i.e. $z_{1,i} = [1,\lambda_2,...,\lambda_T]$.

The distributional assumptions about the random effects can have potentially large effects on the resulting inferences; to guard against this, it is also possible to model them non-parametrically – e.g. using a mixture distribution. Ultimately, these models are a useful method to deal with data that are missing not at random, such as informative dropout from panel studies.

### Software

Estimation can be tricky with these models given the need to integrate out the random effects. For frequentist inference, expectation maximisation (EM) is one way of estimating these models, but as far as I’m aware the algorithm would have to be coded for the problem specifically in Stata or R. An alternative is to use some kind of quadrature-based method. The Stata package stjm fits shared parameter models for longitudinal and survival data, with similar specifications to those above.

Otherwise, Bayesian tools, such as Hamiltonian Monte Carlo, may have more luck dealing with the more complex models. For the simpler correlated random effects specification above, one can use the stan_mvmer command in the rstanarm package. For more complex models, one would need to code the model in something like Stan.
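
As a hedged sketch of what that might look like, using the data frame `dat` simulated under Implementation above: the exact argument forms (list-valued `formula`, `data` and `family`) should be checked against the rstanarm documentation, and note that the binomial submodel below uses a logit rather than probit link.

```r
# Sketch only: fit correlated random intercepts for the outcome and the observation
# indicator with rstanarm's stan_mvmer (check the argument details against the docs).
library(rstanarm)

fit <- stan_mvmer(
  formula = list(
    y_obs ~ x + (1 | id),    # outcome submodel, rows with observed y only
    r     ~ x + (1 | id)     # observation submodel, all rows (logit link)
  ),
  data   = list(subset(dat, r == 1), dat),
  family = list(gaussian(), binomial()),
  chains = 2, iter = 1000, seed = 4
)
summary(fit)   # includes the correlation between the two sets of random intercepts
```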

## Applications

For a health economics specific discussion of these types of models, one can look to the chapter Latent Factor and Latent Class Models to Accommodate Heterogeneity, Using Structural Equation in the Encyclopedia of Health Economics, although shared parameter models only get a brief mention. However, given that the book is currently on sale for £1,000, it may be beyond the wallet of the average researcher! Some health-related applications may be more helpful. Vonesh et al. (2011) used shared parameter models to look at the effects of diet and blood pressure control on renal disease progression. Wu and others (2011) look at how to model the effects of a ‘concomitant intervention’ – one applied when a patient’s health status deteriorates, and so confounded with health – using shared parameter models. And Baghfalaki and colleagues (2017) examine heterogeneous random effect specifications for shared parameter models and apply them to HIV data.

Credit

# Sam Watson’s journal round-up for 9th July 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Evaluating the 2014 sugar-sweetened beverage tax in Chile: an observational study in urban areas. PLoS Medicine [PubMed] Published 3rd July 2018

Sugar taxes are one of the public health policy options currently in vogue. Countries including Mexico, the UK, South Africa, and Sri Lanka all have sugar taxes. The aim of such levies is to reduce demand for the most sugary drinks or, if the tax is absorbed on the supply side (which is rare), to encourage producers to reduce the sugar content of their drinks. One may also view it as a form of Pigouvian taxation to internalise the public health costs associated with obesity. Chile has long had an ad valorem tax on soft drinks fixed at 13%, but in 2014 decided to pursue a sugar tax approach. Drinks with more than 6.25g of sugar per 100ml saw their tax rate rise to 18%, and the tax on those below this threshold dropped to 10%. To understand what effect this change had, we would want to know three key things along the causal pathway from tax policy to sugar consumption: did people know about the tax change, did prices change, and did consumption behaviour change? On this latter point, we can consider both the overall volume of soft drinks purchased and whether people substituted low-sugar for high-sugar beverages. Using the Kantar Worldpanel, a household panel survey of purchasing behaviour, this paper examines these questions.

Everyone in Chile was affected by the tax, so there is no control group; we must rely on time series variation to identify the effect of the tax. Sometimes, looking at plots of the data reveals a clear step-change when an intervention is introduced (e.g. the plot in this post); not so in this paper. We therefore rely heavily on the results of the model for our inferences, and I have a couple of small gripes with it. First, the model captures household fixed effects, but no consideration is given to dynamic effects. Some households may be more or less likely to buy drinks, but their decisions are also likely to be affected by how much they’ve recently bought. Similarly, the errors may be correlated over time. Ignoring dynamic effects can lead to large biases. Second, the authors choose among different functional form specifications of time using the Akaike Information Criterion (AIC). While AIC and the Bayesian Information Criterion (BIC) are often thought to be interchangeable, they are not: AIC targets predictive performance on future data, while BIC approximates the evidence for the model given the data. Thus, I would think BIC would be more appropriate here. Additional results show the estimates are very sensitive to the choice of functional form, varying by an order of magnitude and changing in sign. The authors estimate a fairly substantial decrease of around 22% in the volume of high-sugar drinks purchased, but find evidence that the price paid changed very little (~1.5%) and that there was little change in other drinks. While the analysis is generally careful and well thought out, I am not wholly convinced by the authors’ conclusion that “Our main estimates suggest a significant, sizeable reduction in the volume of high-tax soft drinks purchased.”
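
To illustrate the kind of dynamic specification being hinted at, here is a sketch on simulated household-month data, assuming the plm package (everything here, including the variable names, is invented). Note that a lagged dependent variable in a ‘within’ model brings its own (Nickell) bias in short panels, which is why GMM-type estimators such as Arellano-Bond are often preferred.

```r
# Sketch of a household fixed effects model with a lagged purchase term (all data simulated).
library(plm)

set.seed(5)
H <- 300; n_m <- 24
panel <- expand.grid(hh = 1:H, month = 1:n_m)
panel <- panel[order(panel$hh, panel$month), ]
hh_eff <- rnorm(H)
panel$post_tax <- as.numeric(panel$month > 12)                    # tax change after month 12
panel$vol <- 4 + hh_eff[panel$hh] - 0.5 * panel$post_tax + rnorm(nrow(panel))
panel$lag_vol <- ave(panel$vol, panel$hh,
                     FUN = function(z) c(NA, head(z, -1)))        # last month's purchases

fit <- plm(vol ~ post_tax + lag_vol,
           data = pdata.frame(panel, index = c("hh", "month")),
           model = "within")
summary(fit)
```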

A Bayesian framework for health economic evaluation in studies with missing data. Health Economics [PubMed] Published 3rd July 2018

Missing data is a ubiquitous problem. I’ve never used a data set where no observations were missing, and I doubt I’m alone. Despite its pervasiveness, missing data is often only afforded an acknowledgement in the discussion, or, in more complete analyses, handled with something like multiple imputation. Indeed, the majority of trials in the top medical journals don’t handle it correctly, if at all. The majority of methods used for missing data in practice assume the data are ‘missing at random’ (MAR). One interpretation is that, conditional on the observable variables, the probability of data being missing is independent of unobserved factors influencing the outcome. Another interpretation is that the distribution of the potentially missing data does not depend on whether they are actually missing. This interpretation comes from factorising the joint distribution of the outcome $Y$ and an indicator of whether the datum is observed, $R$, along with some covariates $X$, into a conditional and a marginal model: $f(Y,R|X) = f(Y|R,X)f(R|X)$, a so-called pattern mixture model. This contrasts with the ‘selection model’ approach: $f(Y,R|X) = f(R|Y,X)f(Y|X)$.

This paper considers a Bayesian approach using the pattern mixture model for missing data in health economic evaluation. In particular, the authors specify a multivariate normal model for the data with an additional term in the mean if it is missing, i.e. the model of $f(Y|R,X)$. A model is not specified for $f(R|X)$. If it were, then you would typically allow for correlation between the errors in this model and the main outcomes model. But one could view the additional term in the outcomes model as some function of the error from the observation model, somewhat akin to a control function. Instead, this article uses expert elicitation methods to generate a prior distribution for the unobserved terms in the outcomes model. While this is certainly a legitimate way forward in my eyes, I do wonder how specification of a full observation model would affect the results. The approach of this article is useful and the authors show that it works, and I don’t want to detract from that. But, given the lack of literature on missing data in this area, I am curious to see how it would compare with alternatives, including selection models and even shared parameter models, all of which are feasible. Perhaps an idea for a follow-up study. As a final point, the models run in WinBUGS, but regular readers will know I think Stan is the future for estimating Bayesian models, especially in light of the problems with MCMC we’ve discussed previously. So equivalent Stan code would have been a bonus.
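
For anyone curious what a Stan translation of the basic idea might look like, here is a minimal, illustrative sketch run from R with rstan. It is not the authors’ model: the outcome model is univariate, the data are simulated, and the informative prior on the MNAR shift uses invented values standing in for an elicited prior.

```r
# Minimal pattern-mixture sketch in Stan (via rstan): observed outcomes get a normal
# likelihood, and the mean for the missing pattern is mu + delta, with an informative
# prior on delta standing in for expert elicitation. All numbers are illustrative.
library(rstan)

stan_code <- "
data {
  int<lower=0> N_obs;
  int<lower=0> N_mis;
  vector[N_obs] y_obs;
}
parameters {
  real mu;
  real<lower=0> sigma;
  real delta;                        // MNAR shift for the missing pattern
}
model {
  y_obs ~ normal(mu, sigma);
  mu ~ normal(0.7, 1);
  sigma ~ normal(0, 1);              // half-normal via the lower bound
  delta ~ normal(-0.1, 0.05);        // stand-in for an elicited prior
}
generated quantities {
  real p_mis = N_mis * 1.0 / (N_obs + N_mis);
  real mean_overall = (1 - p_mis) * mu + p_mis * (mu + delta);
}
"

set.seed(6)
y <- rnorm(80, 0.7, 0.2)                       # pretend these are the observed outcomes
fit <- stan(model_code = stan_code, chains = 2, iter = 2000,
            data = list(N_obs = length(y), N_mis = 20, y_obs = y))
print(fit, pars = c("mu", "delta", "mean_overall"))
```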

This is an economics blog. But focusing solely on economics papers in these round-ups would mean missing out on papers from related fields that may provide insight into our own work. Thus I present to you a politics and sociology paper. It is not my field and I can’t give a reliable appraisal of the methods, but the results are of interest. In the global fight against non-communicable diseases, there is a range of policy tools available to governments, including the sugar tax discussed in the paper at the top. The WHO recommends a large number of them. However, there is ongoing debate about whether trade rules and agreements are used to undermine this kind of public health legislation. One agreement, the Technical Barriers to Trade (TBT) Agreement that World Trade Organization (WTO) members all sign, states that members may not impose ‘unnecessary trade costs’ or barriers to trade, especially if the intended aim of the measure can be achieved without doing so. For example, Philip Morris cited a bilateral trade agreement when it sued the Australian government for introducing plain packaging, claiming it violated the terms of trade. Philip Morris eventually lost, but not before substantial costs had been incurred. In another example, the Thai government was deterred from introducing a traffic light warning system for food after threats of a trade dispute from the US, which cited WTO rules. However, there has been no clear evidence on the extent to which trade disputes have undermined public health measures.

This article presents results from a new database of all TBT WTO challenges. Between 1995 and 2016, 93 challenges were raised concerning food, beverage, and tobacco products, with the number per year growing over time. The most frequent challenges concerned product labelling, followed by restricted ingredients. The paper presents four case studies, including Indonesia delaying food labelling of fat, sugar, and salt after a challenge by several members including the EU, and many members – again including the EU, as well as the US – objecting to the size and colour of a red STOP sign that Chile wanted to put on products containing high levels of sugar, fat, and salt.

We have previously discussed the politics and political economy around public health policy relating to e-cigarettes, among other things. Understanding the political economy of public health and phenomena like government failure can be as important as understanding markets and market failure in designing effective interventions.

Credits