Essen Economics of Mental Health Workshop

We are pleased to host the second Essen Economics of Mental Health Workshop on the 24th and 25th of June 2019.

The workshop is themed: “Mental Health over the Life-Course”.

This workshop aims to gather (junior) researchers with an interest in applying the tools of economics to problems surrounding mental health. Papers considering a life-course aspect of mental health are preferred. This includes, but is not limited to, mental health economics studies looking at informal care, loneliness, social exclusion, access to health care, insurance coverage, declines in physical health, age of onset, dementia, suicide, etc. Empirical analyses in this field are especially encouraged for submission.

Christopher J. Ruhm (University of Virginia) and Fabrizio Mazzonna (Università della Svizzera italiana) will deliver the keynotes for this workshop.

Please submit full papers or extended abstracts to events@cinch.uni-due.de by the 1st of March 2019. In contrast to other events, papers will be presented by discussants rather than by their authors, to stimulate discussion. Participation in the workshop implies a willingness to act as a discussant.

Brent Gibbons’s journal round-up for 9th April 2018

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The effect of Medicaid on management of depression: evidence from the Oregon Health Insurance Experiment. The Milbank Quarterly [PubMed] Published 5th March 2018

Tobacco regulation and cost-benefit analysis: how should we value foregone consumer surplus? American Journal of Health Economics [PubMed] [RePEc] Published 23rd January 2018

This second article addresses a very interesting theoretical question in cost-benefit analysis that has emerged in the context of tobacco regulation. The general question is how foregone consumer surplus, in the form of reduced smoking, should be valued. The history of this particular question in the context of recent FDA efforts to regulate smoking is quite fascinating; I highly recommend reading the article just for this background. In brief, the FDA issued proposed regulations to implement graphic warning labels on cigarettes in 2010 and more recently proposed that cigars and e-cigarettes should also be subject to FDA regulation. In both cases, an economic impact analysis was required, and debates ensued over whether, and how, foregone consumer surplus should be valued. Economists on both sides weighed in: some argued that the FDA should not consider foregone consumer surplus because smoking behavior is irrational; others argued that consumers are perfectly rational and informed, so the full consumer surplus should be valued; and still others argued that some consumer surplus should be counted, but that rationality is likely bounded and it is methodologically unclear how to perform a valuation in such a case.

The authors helpfully break down the debate into the following questions: 1) if we assume consumers are fully informed and rational, what is the right approach? 2) are consumers fully informed and rational? and 3) if consumers are not fully informed and rational, what is the right approach? The first question matters because the FDA was conducting the economic impact analysis by examining health gains and foregone consumer surplus separately. However, if consumers are perfectly rational and informed, their preferences already account for health impacts, meaning that only changes in consumer surplus should be counted. On the second question, the authors explore the literature on smoking behavior to understand “whether consumers are rational in the sense of reflecting stable preferences that fully take into account the available information on current and expected future consequences of current choices.” In general, the literature shows that consumers are pretty well aware of the risks, though they may underestimate the difficulty of quitting. Whether consumers are rational is a much harder question. The authors explore different rational addiction models, including quasi-rational addiction models that take into account more recent developments in behavioral economics, but conclude that the literature at this point provides no clear answer and that no empirical test exists to distinguish between rational and quasi-rational models.

Without answering whether consumers are fully informed and rational, the authors suggest that welfare analysis – even in the face of bounded rationality – can still use a valuation approach for consumer surplus similar to the one recommended for when consumers are fully informed and rational. A series of simple supply and demand curves are presented, with a biased demand curve (demand under bounded rationality), an unbiased demand curve (demand when fully informed and rational), and different regulations illustrated. The implication is that rather than trying to estimate health gains as a result of regulations, what is needed is to understand the amount of demand bias that results from bounded rationality. Foregone consumer surplus can then be appropriately measured. Of course, more research is needed to estimate whether, and how much, ‘demand bias’ or bounded rationality exists. The framework of the paper is extremely useful, and it pushes health economists to consider advances that have been made in environmental economics to account for bounded rationality in cost-benefit analysis.
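To make the notion of ‘demand bias’ concrete, here is a minimal numerical sketch in Python. All curves and numbers are invented for illustration: with linear inverse demand $p = a - bq$, consumer surplus at price $P$ is the triangle $\frac{1}{2}(a-P)q(P)$, and measuring it off the biased curve rather than the unbiased one overstates the welfare loss from reduced smoking.

```python
# Minimal illustration of 'demand bias' in consumer surplus calculations.
# All parameter values are hypothetical.

def consumer_surplus(a: float, b: float, price: float) -> float:
    """CS under linear inverse demand p = a - b*q at a given price:
    the triangle 0.5 * (a - price) * q(price), with q(price) = (a - price) / b."""
    q = max((a - price) / b, 0.0)  # quantity demanded at this price
    return 0.5 * (a - price) * q

# The 'biased' curve reflects bounded rationality (e.g. underestimating the
# difficulty of quitting) and lies above the fully informed 'unbiased' curve.
cs_biased = consumer_surplus(a=10.0, b=0.5, price=4.0)
cs_unbiased = consumer_surplus(a=8.0, b=0.5, price=4.0)

print(f"CS off biased demand:   {cs_biased:.1f}")    # 36.0
print(f"CS off unbiased demand: {cs_unbiased:.1f}")  # 16.0
print(f"Overstatement from demand bias: {cs_biased - cs_unbiased:.1f}")
```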

2SLS versus 2SRI: appropriate methods for rare outcomes and/or rare exposures. Health Economics [PubMed] Published 26th March 2018

I will touch on this third paper only briefly, but I wanted to include it as it addresses an important methodological topic. The paper explores several alternative instrumental variable estimation techniques for situations where the treatment (exposure) variable is binary, in contrast to the common 2SLS (two-stage least squares) estimation technique, which was developed for a linear setting with continuous endogenous treatments and outcome measures. A more flexible approach, referred to as 2SRI (two-stage residual inclusion), allows for non-linear estimation methods in the first stage (and second stage), including logit or probit estimation methods. As the title suggests, these alternative estimation methods may be particularly useful when treatment (exposure) and/or outcomes are rare (e.g. below 5%). Monte Carlo simulations are performed on what the authors term ‘the simplest case’, where the outcome, treatment, and instrument are all binary variables, and a range of results are considered as the treatment and/or outcome become rarer. Model bias and consistency are assessed in terms of the ability to produce average treatment effects (ATEs) and local average treatment effects (LATEs), comparing 2SLS, several forms of probit-probit 2SRI models, and a bivariate probit model. The results show that 2SLS produced biased estimates of the ATE, especially as treatment and outcomes became rarer. The 2SRI models had substantially higher bias than the bivariate probit in producing ATEs (though the bivariate probit requires the assumption of bivariate normality). For the LATE, 2SLS always produces consistent estimates, even if the linear probability model produces out-of-range predictions. Estimates from the 2SRI models and the bivariate probit model were biased in producing LATEs. An empirical example was also tested, with data on the impact of long-term care insurance on long-term care use. The conclusion is that 2SRI models do not dependably produce unbiased estimates of ATEs. Among the 2SRI models, though, there were varying levels of bias, and the 2SRI model with generalized residuals appeared to produce the least ATE bias. For rarer treatments and outcomes, the 2SRI model with Anscombe residuals generated the least ATE bias. The results were similar to those of another simulation study by Chapman and Brooks. The study enhances our understanding of how different instrumental variable estimation methods may function under conditions where treatment and outcome variables have non-linear distributions and where those same treatments and outcomes are rare. In general, the authors offer a cautionary note: there is no single perfect estimation method in these types of conditions, and researchers should be aware of the potential pitfalls of different estimation methods.
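For readers who want to see the mechanics, below is a minimal sketch of the two estimators in Python (using statsmodels and simulated data) for the paper’s ‘simplest case’ of a binary instrument, treatment, and outcome. The data-generating process and all parameter values are my own invention for illustration, and the second-stage probit coefficient is an index coefficient, so it is not directly comparable to the 2SLS estimate without converting it to a marginal effect.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 20_000

# Simulated data: binary instrument z, binary treatment d, binary outcome y,
# with an unobserved confounder u affecting both d and y (illustrative only).
z = rng.binomial(1, 0.5, n).astype(float)
u = rng.normal(size=n)
d = (0.5 * z + u + rng.normal(size=n) > 1.0).astype(float)
y = (0.8 * d + u + rng.normal(size=n) > 2.0).astype(float)

X1 = sm.add_constant(z)

# --- 2SLS by hand (point estimate only; use an IV package for correct SEs) ---
d_hat = sm.OLS(d, X1).fit().fittedvalues
beta_2sls = sm.OLS(y, sm.add_constant(d_hat)).fit().params[1]

# --- 2SRI: probit first stage, generalized residual included in stage two ---
first = sm.Probit(d, X1).fit(disp=0)
xb = first.fittedvalues  # linear index from the first stage
gen_resid = norm.pdf(xb) * (d - norm.cdf(xb)) / (norm.cdf(xb) * (1 - norm.cdf(xb)))
X2 = sm.add_constant(np.column_stack([d, gen_resid]))
second = sm.Probit(y, X2).fit(disp=0)

print("2SLS estimate:", beta_2sls)
print("2SRI second-stage coefficient on treatment:", second.params[1])
```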


Method of the month: Synthetic control

Once a month we discuss a particular research method that may be of interest to people working in health economics. We’ll consider widely used key methodologies, as well as more novel approaches. Our reviews are not designed to be comprehensive but provide an introduction to the method, its underlying principles, some applied examples, and where to find out more. If you’d like to write a post for this series, get in touch. This month’s method is synthetic control.

Principles

Health researchers are often interested in estimating the effect of a policy change at the aggregate level. This might include a change in admissions policies at a particular hospital, or a new public health policy applied to a state or city. A common approach to inference in these settings is the difference-in-differences (DiD) method. Pre- and post-intervention outcomes in a treated unit are compared with outcomes in the same periods for a control unit. The aim is to estimate a counterfactual outcome for the treated unit in the post-intervention period. To do this, DiD assumes that, in the absence of the intervention, the trend over time in the outcome would have been the same for both treated and control units (the ‘parallel trends’ assumption).
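As a concrete illustration of the DiD setup (a generic toy example, not taken from any paper discussed here), here is a minimal simulation in Python: the coefficient on the treatment-period interaction recovers the effect, provided parallel trends hold.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Toy panel: 20 treated and 20 control units, each observed pre and post.
# All numbers are made up; the true effect is 3.
units = pd.DataFrame({"unit": range(40), "treated": [1] * 20 + [0] * 20})
df = units.merge(pd.DataFrame({"post": [0, 1]}), how="cross")
df["y"] = (5.0 + 2.0 * df["treated"] + 1.0 * df["post"]
           + 3.0 * df["treated"] * df["post"]
           + rng.normal(0, 1, len(df)))

# The interaction coefficient is the DiD estimate; it is only unbiased if
# treated and control outcomes would have trended in parallel absent treatment.
m = smf.ols("y ~ treated + post + treated:post", data=df).fit()
print(m.params["treated:post"])  # close to 3
```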

It is often the case in practice that we have multiple possible control units and multiple time periods of data. To predict the post-intervention counterfactual outcomes, we can note that there are three sources of information: i) the outcomes in the treated unit prior to the intervention, ii) the behaviour of other time series predictive of that in the treated unit, including outcomes in similar but untreated units and exogenous predictors, and iii) prior knowledge of the effect of the intervention. The last of these only really comes into play in Bayesian set-ups of this method. With longitudinal data we could just throw all of this into a regression model and estimate the parameters. Generally, however, this doesn’t allow unobserved confounders to vary over time. The synthetic control method does.

Implementation

Abadie, Diamond, and Hainmueller motivate the synthetic control method using the following model:

$y_{it} = \delta_t + \theta_t Z_i + \lambda_t \mu_i + \epsilon_{it}$

where $y_{it}$ is the outcome for unit $i$ at time $t$, $\delta_t$ are common time effects, $Z_i$ are observed covariates with time-varying parameters $\theta_t$, $\lambda_t$ are unobserved common factors with $\mu_i$ as unobserved factor loadings, and $\epsilon_{it}$ is an error term. Abadie et al show in this paper that one can derive a set of weights for the outcomes of control units that can be used to estimate the post-intervention counterfactual outcomes in the treated unit. The weights are estimated as those that would minimise the distance between the outcome and covariates in the treated unit and the weighted outcomes and covariates in the control units. Kreif et al (2016) extended this idea to multiple treated units.
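A minimal sketch of the weight estimation in Python, matching only on pre-intervention outcomes (real applications also match on the covariates $Z_i$; all data here are simulated for illustration): the weights are constrained to be non-negative and to sum to one, and the pre-period discrepancy between the treated unit and the weighted controls is minimised.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Simulated pre-intervention outcomes: 12 periods, 10 control units, and a
# treated unit that is (by construction) a mix of controls 0 and 1.
T0, J = 12, 10
Y_controls = rng.normal(5.0, 1.0, size=(T0, J))
Y_treated = 0.6 * Y_controls[:, 0] + 0.4 * Y_controls[:, 1] + rng.normal(0, 0.1, T0)

# Minimise the pre-period mean squared distance between the treated unit and
# the weighted combination of controls, subject to the weight constraints.
def loss(w):
    return np.mean((Y_treated - Y_controls @ w) ** 2)

res = minimize(
    loss,
    x0=np.full(J, 1.0 / J),  # start from equal weights
    bounds=[(0.0, 1.0)] * J,  # non-negative weights
    constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},),  # sum to one
)
print(np.round(res.x, 2))  # should put most weight on the first two controls

# The post-intervention counterfactual is then Y_controls_post @ res.x.
```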

Inference is difficult in this framework, so ‘placebo’ methods are proposed to produce confidence intervals. The essence of these is to re-estimate the model using a non-intervention point in time as the intervention date, to determine the frequency with which differences of a given order of magnitude are observed.
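A sketch of that placebo-in-time logic, assuming a hypothetical helper `post_gap(y, Y, t0)` that fits synthetic-control weights on the periods before `t0` (e.g. via the optimisation above) and returns the average treated-minus-synthetic gap from `t0` onwards:

```python
import numpy as np

def placebo_in_time(post_gap, Y_treated, Y_controls, t_star, placebo_dates):
    """Share of placebo gaps at least as large as the real post-intervention gap.

    `post_gap` is a user-supplied (hypothetical) helper; `t_star` is the real
    intervention date and `placebo_dates` are earlier, non-intervention dates.
    """
    real = abs(post_gap(Y_treated, Y_controls, t_star))
    fakes = [
        # Truncate at t_star so the real post-intervention data never enter.
        abs(post_gap(Y_treated[:t_star], Y_controls[:t_star], t0))
        for t0 in placebo_dates
    ]
    return np.mean([f >= real for f in fakes])
```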

Brodersen et al take a different approach to motivating these models. They begin with a structural time-series model, which is a form of state-space model:

$y_t = Z'_t \alpha_t + \epsilon_t$

$\alpha_{t+1} = T_t \alpha_t + R_t \eta_t$

where in this case, $y_t$ is the outcome at time $t$, $\alpha_t$ is the state vector and $Z_t$ is an output vector with $\epsilon_t$ as an error term. The second equation is the state equation that governs the evolution of the state vector over time where $T_t$ is a transition matrix, $R_t$ is a diffusion matrix, and $\eta_t$ is the system error.

From this setup, Brodersen et al expand the model to allow for control time series (e.g. $Z_t = X'_t \beta$), local linear time trends, seasonal components, and dynamic effects of covariates. In this sense the model is perhaps more flexible than that of Abadie et al. Not all of the large number of covariates may be necessary, so they propose a ‘spike and slab’ prior, which combines a point mass at zero with a weakly informative distribution over the non-zero values. This lets the data select the coefficients, as it were.

Inference in this framework is simpler than above. The posterior predictive distribution can be ‘simply’ estimated for the counterfactual time series to give posterior probabilities of differences of various magnitudes.
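As a rough frequentist stand-in for this approach (the full Brodersen et al method is Bayesian, and is implemented in the CausalImpact package listed below), one can fit a local-level state-space model with a control series as a regressor on the pre-intervention data only, then forecast the counterfactual path. A sketch in Python using statsmodels, with simulated data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Simulated series: the outcome y tracks a control series x until the
# intervention at t0, after which y shifts up by 2 (all numbers made up).
T, t0 = 100, 70
x = np.cumsum(rng.normal(0, 1, T)) + 20.0
y = 0.8 * x + rng.normal(0, 0.5, T)
y[t0:] += 2.0

# Local-level state-space model with the control series as a regressor,
# fitted on pre-intervention data only.
model = sm.tsa.UnobservedComponents(y[:t0], level="llevel", exog=x[:t0])
fit = model.fit(disp=0)

# Forecast the counterfactual after t0 and compare with the observed series.
counterfactual = fit.get_forecast(steps=T - t0, exog=x[t0:, None]).predicted_mean
print("mean estimated effect:", np.mean(y[t0:] - counterfactual))  # roughly 2
```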

Software

Stata

• Synth: implements the method of Abadie et al.

R

• Synth: implements the method of Abadie et al.
• CausalImpact: implements the method of Brodersen et al.

Applications

Kreif et al (2016) estimate the effect of pay-for-performance schemes in hospitals in England and compare the synthetic control method to DiD. Pieters et al (2016) estimate the effects of democratic reform on under-five mortality. We previously covered this paper in a journal round-up and a subsequent post, for which we also used the Brodersen et al method described above. We recently featured a paper by Lépine et al (2017) in a discussion of user fees; the synthetic control method was used to estimate the impact that the removal of user fees had on the use of health care in various districts of Zambia.
