Chris Sampson’s journal round-up for 17th September 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Does competition from private surgical centres improve public hospitals’ performance? Evidence from the English National Health Service. Journal of Public Economics Published 11th September 2018

This study looks at proper (supply-side) privatisation in the NHS. The subject is the government-backed introduction of Independent Sector Treatment Centres (ISTCs), which, in the name of profit, provide routine elective surgical procedures to NHS patients. ISTCs were directed to areas with high waiting times and began rolling out from 2003.

The authors take pre-surgery length of stay as a proxy for efficiency and hypothesise that the entry of ISTCs would improve efficiency in nearby NHS hospitals. They also hypothesise that the ISTCs would cream-skim healthier patients, leaving NHS hospitals to foot the bill for a more challenging casemix. Difference-in-difference regressions are used to test these hypotheses, the treatment group being those NHS hospitals close to ISTCs and the control being those not likely to be affected. The authors use patient-level Hospital Episode Statistics from 2002-2008 for elective hip and knee replacements.
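As a rough illustration of this kind of set-up (not the authors' specification – the data, variable names, and the simple two-group structure below are all invented), the difference-in-difference estimate boils down to the coefficient on the interaction between exposure and the post-entry period:

```python
# Minimal difference-in-difference sketch: hospitals 'near' an ISTC form the
# treatment group, the post-2004 roll-out period is the 'post' indicator, and
# the estimated effect is the coefficient on their interaction.
# All names and the simulated data are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = [(h, y) for h in range(40) for y in range(2002, 2009)]
df = pd.DataFrame(rows, columns=["hospital_id", "year"])
df["near_istc"] = (df["hospital_id"] < 20).astype(int)   # exposed hospitals
df["post"] = (df["year"] >= 2004).astype(int)            # after ISTC entry
df["pre_los"] = (                                          # simulated pre-surgery LoS
    2.0 + 0.3 * df["near_istc"] - 0.4 * df["post"]
    - 0.3 * df["near_istc"] * df["post"]
    + rng.normal(0, 0.2, len(df))
)

# The interaction term is the difference-in-difference estimate
did = smf.ols("pre_los ~ near_istc * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["hospital_id"]}
)
print(did.params["near_istc:post"])   # ~ -0.3 in this simulation
```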

The key difficulty here is that the trend in length of stay changed dramatically at the time ISTCs began to be introduced, regardless of whether a hospital was affected by their introduction. This is because there was a whole suite of policy and structural changes being implemented around this period, many targeting hospital efficiency. So we’re looking at comparing new trends, not comparing changes in existing levels or trends.

The authors’ hypotheses prove right. Pre-surgery length of stay fell in exposed hospitals by around 16%. The ISTCs engaged in risk selection, meaning that NHS hospitals were left with sicker patients. What’s more, the savings for NHS hospitals (from shorter pre-surgery length of stay) were more than offset by an increase in post-surgery length of stay, which may have been due to the change in casemix.

I’m not sure how useful difference-in-difference is in this case. We don’t know what the trend would have been without the intervention, because the pre-intervention trend provides no clues about it. And, while the outcome is shown to be unrelated to selection into the intervention, we don’t know whether selection into the ISTC intervention was correlated with exposure to other policy changes. The authors do their best to quell these concerns about parallel trends and correlated policy shocks, and the results appear robust.

Broadly speaking, the study satisfies my prior view of for-profit providers as leeches on the NHS. Still, I’m left a bit unsure of the findings. The problem is, I don’t see the causal mechanism. Hospitals had the financial incentive to be efficient and achieve a budget surplus without competition from ISTCs. It’s hard (for me, at least) to see how reduced length of stay has anything to do with competition unless hospitals used it as a basis for getting more patients through the door, which, given that ISTCs were introduced in areas with high waiting times, the hospitals could have done anyway.

While the paper describes a smart and thorough analysis, the findings don’t tell us whether ISTCs are good or bad. Both the length of stay effect and the casemix effect are ambiguous with respect to patient outcomes. If only we had some PROMs to work with…

One method, many methodological choices: a structured review of discrete-choice experiments for health state valuation. PharmacoEconomics [PubMed] Published 8th September 2018

Discrete choice experiments (DCEs) are in vogue when it comes to health state valuation. But there is disagreement about how they should be conducted. Studies can differ in terms of the design of the choice task, the design of the experiment, and the analysis methods. The purpose of this study is to review what has been going on: how have studies differed, and what could that mean for our use of the value sets that are estimated?

A search of PubMed for valuation studies using DCEs – including generic and condition-specific measures – turned up 1132 citations, of which 63 were ultimately included in the review. Data were extracted and quality assessed.

The ways in which the studies differed, and the ways in which they were similar, hint at what’s needed from future research. The majority of recent studies were conducted online. This could be problematic if we think self-selecting online panels aren’t representative. Most studies used five or six attributes to describe options and many included duration as an attribute. The methodological tweaks necessary to anchor at 0=dead were a key source of variation. Those using duration varied in terms of the number of levels presented and the range of duration (from 2 months to 50 years). Other studies adopted alternative strategies. In DCE design, there is a necessary trade-off between statistical efficiency and the difficulty of the task for respondents. A variety of methods have been employed to try to ease this difficulty, but there remains a lack of consensus on the best approach. An agreed criterion for this trade-off could facilitate consistency. Some of the consistency that does appear in the literature is due to conformity with EuroQol’s EQ-VT protocol.
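To make the design choices concrete, here is a toy sketch of how choice tasks with a duration attribute might be assembled (the attribute names and levels are invented, and the random pairing shown is not a statistically efficient design):

```python
# Toy construction of DCE choice tasks: enumerate health profiles from a few
# attributes (including duration), then randomly pair them into binary choice
# sets. Real studies would use an efficient experimental design rather than
# random pairing; attributes and levels here are purely illustrative.
import itertools
import random

attributes = {
    "mobility": [1, 2, 3],
    "pain": [1, 2, 3],
    "anxiety": [1, 2, 3],
    "duration_years": [1, 4, 7, 10],   # duration included as an attribute
}

profiles = [dict(zip(attributes, levels))
            for levels in itertools.product(*attributes.values())]

random.seed(1)
choice_tasks = [random.sample(profiles, 2) for _ in range(10)]   # 10 pairwise tasks

for option_a, option_b in choice_tasks[:2]:
    print("A:", option_a, "  vs  B:", option_b)
```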

Unfortunately, for casual users of DCE valuations, all of this means that we can’t just assume that a DCE is a DCE is a DCE. Understanding the methodological choices involved is important in the application of resultant value sets.

Trusting the results of model-based economic analyses: is there a pragmatic validation solution? PharmacoEconomics [PubMed] Published 6th September 2018

Decision models are almost never validated. This means that – save for a superficial assessment of their outputs – they are taken on good faith. That should be a worry. This article builds on the experience of the authors to outline why validation doesn’t take place and to try to identify solutions. This experience includes a pilot study in France, NICE Evidence Review Groups, and the perspective of a consulting company modeller.

There are a variety of reasons why validation is not conducted, but resource constraints are a big part of it. Neither HTA agencies nor modellers themselves have the time to conduct validation and verification exercises. The core of the authors’ proposed solution is to end the routine development of bespoke models. Models – or, at least, parts of models – need to be taken off the shelf. Open source, or otherwise transparent, modelling standards are therefore a prerequisite for this. The key idea is to create ‘standard’ or ‘reference’ models, which can be extensively validated and tweaked. The most radical aspect of this proposal is that they should be ‘freely available’.

But rather than offering a path to open source modelling, the authors offer recommendations for how we should conduct ourselves until open source modelling is realised. These include the adoption of a modular and incremental approach to modelling, combined with more transparent reporting. I agree; we need a shift in mindset. Yet, the barriers to open source models are – I believe – the same barriers that would prevent these recommendations from being realised. Modellers don’t have the time or the inclination to provide full and transparent reporting. There is no incentive for modellers to do so. The intellectual property value of models means that public release of incremental developments is not seen as a sensible thing to do. Thus, the authors’ recommendations appear to me to be dependent on open source modelling, rather than an interim solution while we wait for it. Nevertheless, this is the kind of innovative thinking that we need.


Brent Gibbons’s journal round-up for 22nd January 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Is retirement good for men’s health? Evidence using a change in the retirement age in Israel. Journal of Health Economics [PubMed] Published January 2018

This article is a tour de force from one chapter of a recently completed dissertation from the Hebrew University of Jerusalem. The article focuses on the health implications of extending working years for older adults. As many countries are faced with critical decisions on how to adjust labor policies to solve rising pension costs (or in the case of the U.S., Social Security insolvency) in the face of aging populations, one obvious potential solution is to change the retirement age. Most OECD countries appear to have retirement ages in the mid-60s, with a number of countries on track to increase that threshold. Israel is one of these countries, having raised its retirement age for men from 65 to 67 in 2004. The author capitalizes on this exogenous change in retirement incentives – workers are incentivized to keep working in order to receive full pension benefits – to measure the causal effect of working in these later years, compared to retiring. As the relationship between employment and health is complicated by the endogenous nature of the decision to work, there is a growing literature that has attempted to deal with this endogeneity in different ways. Shai details the conflicting findings in this literature and describes various shortcomings of the methods used. He helpfully categorizes studies into those that compare health between retirees and non-retirees (which do not deal with the selection problem), those that use variation in retirement age across countries (retirement ages could be correlated with individual health across countries), those that exploit variation in specific sector retirement ages (a problem for generalizing to the population), and those that use age-specific retirement eligibility (health may deteriorate at specific ages regardless of eligibility for retirement). As this empirical question has produced conflicting evidence, the author suggests that his methodology is an improvement on prior papers. He uses a difference-in-difference model that estimates the impact on various health outcomes, before and after the law change, comparing those aged 65-66 years after 2004 with both older and younger cohorts unaffected by the law. The assumption is that any differences in measured health between the age 65-66 group and the comparison groups are a result of the extended work in later years. There are several different datasets used in the study and quite a number of analyses that attempt to assuage threats to a causal interpretation of results.

Overall, results are that delaying the retirement age has a negative effect on individual health. The size of the effect found is in the ballpark of 1 standard deviation; outcome measures included a severe morbidity index, a poor health index, and the number of physician visits. In addition, these impacts were stronger for individuals with lower levels of education, which the author relates to more physically demanding jobs. Outcomes not expected to be related to employment, such as the number of dentist visits, are not found to be statistically different. Furthermore, there are non-trivial estimated effects on health care expenditures that are positive for the delayed retirement group. The author suggests that all of these findings are important pieces of evidence in retirement age policy decisions. The implication is that health, at least for men, and especially for those with lower education, may be negatively impacted by delaying retirement and that, furthermore, savings as a result of such policies may be tempered by increased health care expenditures.

Evaluating community-based health improvement programs. Health Affairs [PubMed] Published January 2018

For article 2, I see that the lead author is a doctoral student in health policy at Harvard, working with colleagues at Vanderbilt. Without intention, this round-up is highlighting two very impressive studies from extremely promising young investigators. This study takes on the challenge of evaluating community-based health improvement programs, which I will call CBHIPs. CBHIPs take a population-based approach to public health for their communities and often focus on issues of prevention and health promotion. Investment in CBHIPs has increased in recent years, emphasizing collaboration between the community and public and private sectors. At the heart of CBHIPs are the ideas that communities should be empowered to self-assess and make needed changes from within (in collaboration with outside partners), and that CBHIPs allow more flexibility in creating programs that target a community’s unique needs. Evaluations of CBHIPs, however, suffer from limited resources and investment, and often use “easily-collectable data and pre-post designs without comparison or control communities.” Current overall evidence on the effectiveness of CBHIPs remains limited as a result. In this study, the authors attempt to evaluate a large set of CBHIPs across the United States using inverse propensity score weighting and a difference-in-difference analysis. County-level health outcomes – poor or fair health, smoking status, and obesity status – were taken from the BRFSS (Behavioral Risk Factor Surveillance System) SMART (Selected Metropolitan/Micropolitan Area Risk Trends) data. Information on counties implementing CBHIPs was compiled through a series of systematic web searches and through interviews with leaders in population health efforts in the public and private sector. With information on the exact years of implementation of CBHIPs in each county, a pre-post design was used that identified county treatment and control groups. With additional census data, untreated counties were weighted to achieve better balance on pre-implementation covariates. Importantly, treated counties were limited to those with CBHIPs that implemented programs related to smoking and obesity. Results showed little to no evidence that CBHIPs improved population health outcomes. For example, CBHIPs focusing on tobacco prevention were associated with a 0.2 percentage point reduction in the rate of smoking, which was not statistically significant. Several important limitations of the study were noted by the authors, such as limited information on the intensity of programs and resources available. It is recognized that it is difficult to improve population-level health outcomes and that perhaps the study period of five years post-implementation may not have been long enough. The researchers encourage future CBHIPs to utilize more rigorous evaluation methods, while acknowledging the uphill battle CBHIPs face to do this.
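As a very rough sketch of the weighting step described above (not the authors' code – the county data, covariates, and simple logistic model below are invented for illustration):

```python
# Inverse propensity score weighting sketch: estimate each county's propensity
# of implementing a CBHIP from pre-implementation covariates, then weight
# untreated counties by p/(1-p) so that their covariate profile resembles the
# treated counties. All variable names and data are illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
counties = pd.DataFrame({
    "median_income": rng.normal(50, 10, n),
    "pct_urban": rng.uniform(0, 1, n),
    "baseline_smoking": rng.normal(20, 4, n),
})
logit = -1 + 1.5 * counties["pct_urban"] - 0.05 * (counties["baseline_smoking"] - 20)
counties["cbhip"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = counties[["median_income", "pct_urban", "baseline_smoking"]]
p = LogisticRegression(max_iter=1000).fit(X, counties["cbhip"]).predict_proba(X)[:, 1]
counties["weight"] = np.where(counties["cbhip"] == 1, 1.0, p / (1 - p))

# Covariate balance check: treated mean vs weighted control mean
treated = counties[counties["cbhip"] == 1]
control = counties[counties["cbhip"] == 0]
print(treated["baseline_smoking"].mean(),
      np.average(control["baseline_smoking"], weights=control["weight"]))
```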

Through the looking glass: estimating effects of medical homes for people with severe mental illness. Health Services Research [PubMed] Published October 2017

The third article in this round-up comes from a publication from October of last year; however, it is from the latest issue of Health Services Research, so I deem it fair play. The article uses medical homes for individuals with severe mental illness as a setting in which to critically examine heterogeneous treatment effects. While specifically looking to answer whether there are heterogeneous treatment effects of medical homes on different portions of the population with a severe mental illness, the authors make a strong case for the need to examine heterogeneous treatment effects as a more general practice in observational studies research, as well as to be more precise in interpretations of results and statements of generalizability when presenting estimated effects. Adults with a severe mental illness were identified as good candidates for medical homes because of complex health care needs (including high physical health care needs) and because barriers to care have been found to exist for these individuals. Medicaid medical homes establish primary care physicians and their teams as the managers of the individual’s overall health care treatment. The authors are particularly concerned with the reasons individuals choose to participate in medical homes, whether because of expected improvements in quality of care, regional availability of medical homes, or symptomatology. Very clever differences in estimation methods allow the authors to estimate treatment effects associated with these different enrollment reasons. As an example, an instrumental variables analysis, using measures of regional availability as instruments, estimated local average treatment effects that were much smaller than the fixed effects estimates or the generalized estimating equation model’s effects. This implies that differences in county-level medical home availability are a smaller portion of the overall measured effects from other models. Overall results were that medical homes were positively associated with access to primary care, access to specialty mental health care, medication adherence, and measures of routine health care (e.g. screenings); there was also a slightly negative association with emergency room use. Since unmeasured stable attributes (e.g. patient preferences) do not seem to affect outcomes, results should be generalizable to the larger patient population. Finally, medical homes do not appear to be a good strategy for cost-savings but do promise to increase access to appropriate levels of health care treatment.
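A stripped-down sketch of that instrumental variable logic (nothing like the authors' full models – the data and names are simulated, and the 'by hand' two-stage procedure shown here gives correct point estimates but not correct standard errors):

```python
# Two-stage least squares by hand: county medical home availability serves as
# an instrument for enrolment, which is confounded with the outcome. The
# second-stage coefficient approximates the local average treatment effect.
# Data and variable names are simulated for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
availability = rng.binomial(1, 0.5, n)            # instrument: medical home nearby
confounder = rng.normal(0, 1, n)                  # unobserved; drives both
enrolled = rng.binomial(1, 0.2 + 0.4 * availability + 0.1 * (confounder > 0))
visits = 2 + 1.0 * enrolled + 0.8 * confounder + rng.normal(0, 1, n)

# First stage: enrolment on the instrument
first = sm.OLS(enrolled, sm.add_constant(availability)).fit()

# Second stage: outcome on predicted enrolment
second = sm.OLS(visits, sm.add_constant(first.fittedvalues)).fit()
print(second.params[1])    # close to the true effect of 1.0 despite confounding
```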


Method of the month: Synthetic control

Once a month we discuss a particular research method that may be of interest to people working in health economics. We’ll consider widely used key methodologies, as well as more novel approaches. Our reviews are not designed to be comprehensive but provide an introduction to the method, its underlying principles, some applied examples, and where to find out more. If you’d like to write a post for this series, get in touch. This month’s method is synthetic control.

Principles

Health researchers are often interested in estimating the effect of a policy change at the aggregate level. This might include a change in admissions policies at a particular hospital, or a new public health policy applied to a state or city. A common approach to inference in these settings is the difference-in-differences (DiD) method. Pre- and post-intervention outcomes in a treated unit are compared with outcomes in the same periods for a control unit. The aim is to estimate a counterfactual outcome for the treated unit in the post-intervention period. To do this, DiD assumes that the trend over time in the outcome is the same for both treated and control units.
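In the simplest two-period, two-group case, the implied estimator is just a double difference in means (bars denote averages over the relevant group and period):

\hat{\tau}_{DiD} = (\bar{y}_{treated,post} - \bar{y}_{treated,pre}) - (\bar{y}_{control,post} - \bar{y}_{control,pre})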

It is often the case in practice that we have multiple possible control units and multiple time periods of data. To predict the post-intervention counterfactual outcomes, we can note that there are three sources of information: i) the outcomes in the treated unit prior to the intervention, ii) the behaviour of other time series predictive of that in the treated unit, including outcomes in similar but untreated units and exogenous predictors, and iii) prior knowledge of the effect of the intervention. The latter of these only really comes into play in Bayesian set-ups of this method. With longitudinal data we could just throw all this into a regression model and estimate the parameters. However, generally, this doesn’t allow for unobserved confounders to vary over time. The synthetic control method does.

Implementation

Abadie, Diamond, and Hainmueller motivate the synthetic control method using the following model:

y_{it} = \delta_t + \theta_t Z_i + \lambda_t \mu_i + \epsilon_{it}

where y_{it} is the outcome for unit i at time t, \delta_t are common time effects, Z_i are observed covariates with time-varying parameters \theta_t, \lambda_t are unobserved common factors with \mu_i as unobserved factor loadings, and \epsilon_{it} is an error term. Abadie et al show in this paper that one can derive a set of weights for the outcomes of control units that can be used to estimate the post-intervention counterfactual outcomes in the treated unit. The weights are estimated as those that minimise the distance between the pre-intervention outcomes and covariates of the treated unit and the weighted average of the pre-intervention outcomes and covariates of the control units. Kreif et al (2016) extended this idea to multiple treated units.
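A heavily simplified sketch of the weighting idea (matching on pre-intervention outcomes only, ignoring the covariate weighting used by Abadie et al, and with invented data):

```python
# Simplified synthetic control: choose non-negative weights summing to one so
# that the weighted pre-intervention outcomes of the control units track the
# treated unit, then use the weighted controls as the post-intervention
# counterfactual. Data are simulated; the covariate-matching step is omitted.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T0, T1, J = 12, 6, 8                              # pre periods, post periods, controls
y_controls = rng.normal(0, 1, (T0 + T1, J)).cumsum(axis=0)
y_treated = 0.6 * y_controls[:, 0] + 0.4 * y_controls[:, 3]
y_treated[T0:] += 2.0                             # 'true' intervention effect

def pre_period_loss(w):
    return np.sum((y_treated[:T0] - y_controls[:T0] @ w) ** 2)

res = minimize(
    pre_period_loss, np.full(J, 1 / J), method="SLSQP",
    bounds=[(0, 1)] * J,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}],
)
weights = res.x

synthetic = y_controls @ weights                  # counterfactual time series
effect = (y_treated[T0:] - synthetic[T0:]).mean()
print(np.round(weights, 2), round(effect, 2))     # weights load on units 0 and 3; effect ~ 2
```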

Inference is difficult in this framework, so to produce confidence intervals, ‘placebo’ methods are proposed. The essence of these is to re-estimate the models using a non-intervention point in time as the intervention date, to determine how frequently differences of a given magnitude are observed when no intervention has actually occurred.
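A sketch of the in-time placebo idea, reusing the same simplified weight-fitting approach as above (again with invented data):

```python
# In-time placebo test: re-fit the synthetic control treating fake dates in
# the pre-intervention period as the intervention date, and compare the
# placebo 'gaps' with the gap estimated at the true date. The weight-fitting
# step is the simplified version from the previous sketch; data are simulated.
import numpy as np
from scipy.optimize import minimize

def synth_gap(y_treated, y_controls, t0):
    """Fit weights on periods before t0; return the mean post-t0 gap."""
    J = y_controls.shape[1]
    loss = lambda w: np.sum((y_treated[:t0] - y_controls[:t0] @ w) ** 2)
    res = minimize(loss, np.full(J, 1 / J), method="SLSQP",
                   bounds=[(0, 1)] * J,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])
    gap = y_treated - y_controls @ res.x
    return gap[t0:].mean()

rng = np.random.default_rng(1)
T0, T1, J = 12, 6, 8
y_controls = rng.normal(0, 1, (T0 + T1, J)).cumsum(axis=0)
y_treated = y_controls[:, :3].mean(axis=1).copy()
y_treated[T0:] += 2.0                                     # true intervention at T0

true_gap = synth_gap(y_treated, y_controls, T0)
placebo_gaps = [synth_gap(y_treated[:T0], y_controls[:T0], t) for t in range(6, T0 - 1)]

# Share of placebo dates producing a gap at least as large as the real one
print(true_gap, np.mean(np.abs(placebo_gaps) >= abs(true_gap)))
```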

Brodersen et al take a different approach to motivating these models. They begin with a structural time-series model, which is a form of state-space model:

y_t = Z'_t \alpha_t + \epsilon_t

\alpha_{t+1} = T_t \alpha_t + R_t \eta_t

where in this case, y_t is the outcome at time t, \alpha_t is the state vector and Z_t is an output vector with \epsilon_t as an error term. The second equation is the state equation that governs the evolution of the state vector over time where T_t is a transition matrix, R_t is a diffusion matrix, and \eta_t is the system error.

From this setup, Brodersen et al expand the model to allow for control time series (e.g. Z_t = X'_t \beta), local linear time trends, seasonal components, and dynamic effects of covariates. In this sense the model is perhaps more flexible than that of Abadie et al. Not all of the large number of covariates may be necessary, so they propose a ‘spike and slab’ prior, which combines a point mass at zero with a weakly informative distribution over the non-zero values. This lets the data select the coefficients, as it were.
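For a flavour of this approach, here is a rough frequentist stand-in (a local-level structural time-series model with control series as regressors, fitted with statsmodels, rather than the Bayesian spike-and-slab model of Brodersen et al; the data are simulated):

```python
# Structural time-series counterfactual sketch: fit a local-level model with
# control series as regressors on the pre-intervention data only, forecast the
# post-intervention period, and compare the forecast (the counterfactual) with
# the observed outcome. This is a frequentist simplification of the Bayesian
# approach of Brodersen et al; all data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T0, T1 = 80, 20
controls = rng.normal(0, 1, (T0 + T1, 2)).cumsum(axis=0)          # control series
y = 1.0 + 0.5 * controls[:, 0] - 0.3 * controls[:, 1] + rng.normal(0, 0.3, T0 + T1)
y[T0:] += 1.5                                                      # intervention effect

model = sm.tsa.UnobservedComponents(y[:T0], level="local level", exog=controls[:T0])
fit = model.fit(disp=False)

# Counterfactual forecast for the post-intervention window
counterfactual = fit.get_forecast(steps=T1, exog=controls[T0:]).predicted_mean
print((y[T0:] - counterfactual).mean())    # roughly recovers the 1.5 effect
```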

Inference in this framework is simpler than above. The posterior predictive distribution can be ‘simply’ estimated for the counterfactual time series to give posterior probabilities of differences of various magnitudes.

Software

Stata

  • Synth – implements the method of Abadie et al.

R

  • Synth – implements the method of Abadie et al.
  • CausalImpact – implements the method of Brodersen et al.

Applications

Kreif et al (2016) estimate the effect of pay for performance schemes in hospitals in England and compare the synthetic control method to DiD. Pieters et al (2016) estimate the effects of democratic reform on under-five mortality. We previously covered this paper in a journal round-up and a subsequent post, for which we also used the Brodersen et al method described above. We recently featured a paper by Lépine et al (2017) in a discussion of user fees. The synthetic control method was used to estimate the impact of removing user fees on the use of health care in various districts of Zambia.
