Jason Shafrin’s journal round-up for 7th October 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Combined impact of future trends on healthcare utilisation of older people: a Delphi study. Health Policy [PubMed] [RePEc] Published October 2019

Governments need to plan for the future. This is particularly important in countries where the government pays for the lion’s share of health care expenditures. Predicting the future, however, is not an easy task. One could use quantitative approaches and simply extrapolate recent trends. One could consult political experts to determine which policies are likely to be enacted. Another approach is to use a Delphi Panel to elicit expert opinions on future trends in health care utilization to help predict future health care needs. This is the approach taken by Ravensbergen and co-authors to predict trends in health care utilization among older adults in the Netherlands in 2040.

The Delphi Panel approach was applied in this study as follows. First, individuals received a questionnaire via email. Researchers presented the experts with trends from the Dutch Public Health Foresight Study (Volksgezondheid Toekomst Verkenning) to ground all experts in the same baseline information. The questions largely asked separately about trends for the old (65–80 years) and the oldest old (>80 years). After the responses to the first questionnaire were received, they were summarized and fed back anonymously to each panelist. Panelists were then able to revise their views in a second questionnaire, taking into account the feedback from the other panelists. Because the panelists did not meet in person, this approach should be considered a modified Delphi Panel.
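For readers less familiar with the mechanics of the feedback step, a minimal sketch in Python is below. The rating scale, the statement being rated, and the 70% consensus rule are invented for illustration; they are not taken from the paper.

```python
import statistics

# Hypothetical round-1 ratings (1-9 agreement scale) for a single statement,
# e.g. "eHealth will reduce GP visits among the oldest old". Illustrative only.
round1_ratings = [7, 8, 5, 9, 6, 7, 3, 8, 7, 6]

def summarise_round(ratings, consensus_share=0.7):
    """Summarise one Delphi round: panelists would see the group median and
    interquartile range (anonymised) before revising their answers in round 2.
    The 70% consensus threshold is an assumption for this sketch."""
    q1, _, q3 = statistics.quantiles(ratings, n=4)
    agree = sum(1 for r in ratings if r >= 7)   # ratings of 7-9 count as 'agree'
    return {
        "median": statistics.median(ratings),
        "iqr": (q1, q3),
        "consensus": agree / len(ratings) >= consensus_share,
    }

print(summarise_round(round1_ratings))
```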

The Delphi panel identified three broad trends: increased use of eHealth tools, less support, and changes in health status. While the panel thought eHealth was important, experts rarely reached consensus on how eHealth would affect healthcare utilization. The experts did reach consensus, however, in believing that the share of adults aged 50-64 will decline relative to the share of individuals aged ≥ 85 years, implying fewer caregivers will be available and more of the oldest old will be living independently (i.e. with less support). Because less informal care will be available, the panel believed that the demand for home care and general practitioner services will rise. The respondents also believed that in most cases changes in health status will increase utilization of general practitioner and specialist services. There was less agreement about trends in the need for long-term care or mental health services, however.

The Delphi Panel approach may be useful to help governments predict future demand for services. More rigorous approaches, such as betting markets, are likely not feasible since the payouts would take too long to generate much interest. Betting markets could be used to predict shorter-run trends in health care utilization. The risk with betting markets, however, is that some individuals could act strategically to drive up or down predictions to increase or decrease reimbursement for certain sectors.

In short, the Delphi Panel is likely a reasonable, low-cost approach for predicting trends in health care utilization. Future studies, however, should validate how good the predictions are from using this type of method.

The fold-in, fold-out design for DCE choice tasks: application to burden of disease. Medical Decision Making [PubMed] Published 29th May 2019

Discrete choice experiments (DCEs) are a useful way to determine what treatment attributes patients (or providers or caregivers) value. Respondents are presented with multiple treatment options and the options can be compared across a series of attributes. An attribute could be treatment efficacy, safety, dosing, cost, or a host of other attributes. One can use this approach to measure the marginal rate of substitution across attributes. If cost is one of the attributes, one can measure willingness to pay for specific attributes.
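As a reminder of where these quantities come from, under a standard linear-in-attributes utility specification (an assumption for illustration, not a claim about any particular study), the marginal rate of substitution and willingness to pay are simple ratios of coefficients:

$$U_{ij} = \beta_1 x_{1ij} + \dots + \beta_K x_{Kij} + \beta_c \,\mathrm{cost}_{ij} + \varepsilon_{ij}, \qquad \mathrm{MRS}_{k,l} = \frac{\beta_k}{\beta_l}, \qquad \mathrm{WTP}_k = -\frac{\beta_k}{\beta_c}.$$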

One of the key challenges of DCEs, however, is attribute selection. Most treatments differ across a range of attributes, yet most published DCEs present only four to seven attributes. Including more attributes makes comparisons too complicated for most respondents. Thus, researchers are left with a difficult choice: (i) a tractable but overly simplified survey, or (ii) a realistic but overly complex survey unlikely to be comprehended by respondents.

One solution proposed by Lucas Goossens and co-authors is to use a Fold-in Fold-out (FiFo) approach. In this approach, related attributes may be grouped into domains. For some questions, all attributes within the same domain have the same attribute level (i.e., fold in); in other questions, attributes may vary within the domain (i.e., fold out).

To be concrete, Goossens and co-authors examine treatments for chronic obstructive pulmonary disease (COPD). They use 15 attributes divided into three domains plus two stand-alone attributes:

a respiratory symptoms domain (four attributes: shortness of breath at rest, shortness of breath during physical activity, coughing, and sputum production); a limitations domain (four attributes: limitations in strenuous physical activities, limitations in moderate physical activities, limitations in daily activities, and limitations in social activities); a mental problems domain (five attributes: feeling depressed, fearing that breathing gets worse, worrying, listlessness, and tense feeling); a fatigue attribute; and an exacerbations attribute.

This creative approach simplifies the choice set for respondents while still allowing for a large number of attributes. The authors analysed the collected data using a Bayesian mixed logit regression model. The underlying utility function assumed domain-specific parameters, but also allowed within-domain attribute weights to vary in the questions where the domain was folded out.
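One way to write down such a utility function (my notation, which may differ from the authors’ exact parameterisation) is with a domain-level coefficient multiplying a weighted sum of the attributes in that domain, plus terms for the stand-alone attributes:

$$U = \sum_{d} \beta_d \Big( \sum_{a \in d} w_{da}\, x_{da} \Big) + \beta_{\text{fatigue}}\, x_{\text{fatigue}} + \beta_{\text{exac}}\, x_{\text{exac}} + \varepsilon.$$

When a domain is folded in, all $x_{da}$ within it take the same level, so only $\beta_d$ matters; the within-domain weights $w_{da}$ are identified only from the folded-out questions.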

One key challenge, however, is that the authors found that individuals placed more weight on attributes when their domains were folded out (i.e., attribute levels varied within domain) compared to when their domains were folded in (i.e., attribute levels were the same within the domain). Thus, I would say that if five, six or seven attributes can capture the lion’s share of differences in treatment attributes across treatments, use the standard approach; however, if more attributes are needed, the FiFo approach is an attractive option researchers should consider.

The health and cost burden of antibiotic resistant and susceptible Escherichia coli bacteraemia in the English hospital setting: a national retrospective cohort study. PLoS One [PubMed] Published 10th September 2019

Bacterial infections are bad. The good news is that we have antibiotics to treat them, so they are no longer a worry, right? While conventional wisdom may hold that we have plenty of antibiotics to treat these infections, antibiotic resistance has grown in recent years. If antibiotics are no longer effective, what is the cost to society?

One effort to quantify the economic burden of antibiotic resistance, by Nichola Naylor and co-authors, used national surveillance and administrative data from National Health Service (NHS) hospitals in England. They compared costs for patients with E. coli bacteraemia against those for observably similar patients without E. coli bacteraemia. Antibiotic-resistant cases were defined as E. coli bacteraemia with isolates classified as ‘resistant’ or ‘intermediate’ using laboratory-based definitions. The antibiotics to which resistance was considered included ciprofloxacin, third generation cephalosporins (ceftazidime and/or cefotaxime), gentamicin, piperacillin/tazobactam and carbapenems (imipenem and/or meropenem).

The authors use an Aalen-Johansen estimator to measure the cumulative incidence of in-hospital mortality and length of stay. Both analyses control for the patient’s age, sex, Elixhauser comorbidity index, and hospital trust type. It does not appear that the authors control for the reason for admission to the hospital, nor do they propensity-score match patients with resistant infections to those without. Thus, it is likely that significant unobserved heterogeneity across groups remains in the analysis.
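For those unfamiliar with the estimator, a toy sketch using the lifelines library and invented data is below. Note that the paper’s analysis also adjusts for the covariates listed above, which this bare-bones sketch does not.

```python
import pandas as pd
from lifelines import AalenJohansenFitter

# Hypothetical patient-level data: length of stay in days and a competing-risks
# event code (0 = censored, 1 = died in hospital, 2 = discharged alive).
df = pd.DataFrame({
    "los_days": [3, 10, 7, 21, 5, 14, 2, 30],
    "event":    [2, 1, 2, 1, 2, 2, 0, 1],
})

# The Aalen-Johansen estimator gives the cumulative incidence of in-hospital
# death while treating discharge as a competing event, rather than censoring it
# (as a naive Kaplan-Meier analysis would).
ajf = AalenJohansenFitter()
ajf.fit(df["los_days"], df["event"], event_of_interest=1)
print(ajf.cumulative_density_.tail())
```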

Despite these limitations, the authors do have some interesting findings. First, bacterial infections are associated with an increased risk of death. In-hospital mortality was 14.3% for individuals infected with E. coli compared to 1.3% for those not infected. Accounting for covariates, the subdistribution hazard ratio (SHR) for in-hospital mortality due to E. coli bacteraemia was 5.88. Second, E. coli bacteraemia was associated with 3.9 excess hospital days compared to patients without the infection. These extra hospital days cost £1,020 per case of E. coli bacteraemia, and the estimated annual cost of E. coli bacteraemia in England was £14.3m. If antibiotic resistance has increased in recent years, these estimates are likely to be conservative.

The issue of antibiotic resistance presents a conundrum for policymakers. If current antibiotics are effective, drug-makers will have no incentive to develop new antibiotics since the new treatments are unlikely to be prescribed. On the other hand, failing to have new antibiotics in reserve means that as antibiotic resistance grows, there will be few treatment alternatives. To address this issue, the United Kingdom is considering a ‘subscription style’ approach to pay for new antibiotics to incentivize the development of new treatments.

Nevertheless, the paper by Naylor and co-authors provides a useful data point on the cost of antibiotic resistance.

Credits

James Lomas’s journal round-up for 21st May 2018


Decision making for healthcare resource allocation: joint v. separate decisions on interacting interventions. Medical Decision Making [PubMed] Published 23rd April 2018

While it may be uncontroversial that including all of the relevant comparators in an economic evaluation is crucial, a careful examination of this statement raises some interesting questions. Which comparators are relevant? For those that are relevant, how crucial is it that they are not excluded? The answer to the first of these questions may seem obvious, that all feasible mutually exclusive interventions should be compared, but this is in fact deceptive. Dakin and Gray highlight inconsistency between guidelines as to what constitutes interventions that are ‘mutually exclusive’ and so try to re-frame the distinction according to whether interventions are ‘incompatible’ – when it is physically impossible to implement both interventions simultaneously – and, if not, whether interventions are ‘interacting’ – where the costs and effects of the simultaneous implementation of A and B do not equal the sum of these parts. What I really like about this paper is that it has a very pragmatic focus. Inspired by policy arrangements, for example single technology appraisals, and the difficulty in capturing all interactions, Dakin and Gray provide a reader-friendly flow diagram to illustrate cases where excluding interacting interventions from a joint evaluation is likely to have a big impact, and furthermore propose a sequencing approach that avoids the major problems in evaluating separately what should be considered jointly. Essentially when we have interacting interventions at different points of the disease pathway, evaluating separately may not be problematic if we start at the end of the pathway and move backwards, similar to the method of backward induction used in sequence problems in game theory. There are additional related questions that I’d like to see these authors turn to next, such as how to include interaction effects between interventions and, in particular, how to evaluate system-wide policies that may interact with a very large number of interventions. This paper makes a great contribution to answering all of these questions by establishing a framework that clearly distinguishes concepts that had previously been subject to muddied thinking.
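A stylised numerical example may help fix ideas on the ‘interacting’ case (the numbers are mine, not Dakin and Gray’s): when the joint effect of A and B is less than the sum of their separate effects, each intervention can look worthwhile in isolation while the combination is not.

```python
# Toy illustration of why interacting interventions should be evaluated jointly:
# the costs and QALYs of 'A and B' together are not the sum of A alone and B alone.
strategies = {                       # (incremental cost, incremental QALYs) vs doing nothing
    "neither": (0, 0.00),
    "A only":  (9_000, 0.50),
    "B only":  (11_000, 0.60),
    "A and B": (20_000, 0.85),       # interaction: joint effect < 0.50 + 0.60
}
threshold = 20_000                   # cost-effectiveness threshold, per QALY

def net_monetary_benefit(cost, qalys, k=threshold):
    return k * qalys - cost

for name, (cost, qalys) in strategies.items():
    print(f"{name:8s} NMB = {net_monetary_benefit(cost, qalys):>9,.1f}")

# Evaluated separately against 'neither', A and B each have positive net benefit;
# evaluated jointly, the combination has negative net benefit at this threshold.
```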

When cost-effective interventions are unaffordable: integrating cost-effectiveness and budget impact in priority setting for global health programs. PLoS Medicine [PubMed] Published 2nd October 2017

In my opinion, there are many things that health economists shouldn’t try to include when they conduct cost-effectiveness analysis. Affordability is not one of these. This paper is great, because Bilinski et al shine a light on the worldwide phenomenon of interventions being found to be ‘cost-effective’ but not affordable. A particular quote – that it would be financially impossible to implement all interventions that are found to be ‘very cost-effective’ in many low- and middle-income countries – is quite shocking. Bilinski et al compare and contrast cost-effectiveness analysis and budget impact analysis, and argue that there are four key reasons why something could be ‘cost-effective’ but not affordable: 1) judging cost-effectiveness with reference to an inappropriate cost-effectiveness ‘threshold’, 2) adoption of a societal perspective that includes costs not falling upon the payer’s budget, 3) failing to make explicit consideration of the distribution of costs over time and 4) the use of an inappropriate discount rate that may not accurately reflect the borrowing and investment opportunities facing the payer. They then argue that, because of this, cost-effectiveness analysis should be presented along with budget impact analysis so that the decision-maker can base a decision on both analyses. I don’t disagree with this as a pragmatic interim solution, but – by highlighting these four reasons for divergence of results with such important economic consequences – I think that there will be further reaching implications of this paper. To my mind, Bilinski et al essentially serves as a call to arms for researchers to try to come up with frameworks and estimates so that the conduct of cost-effectiveness analysis can be improved in order that paradoxical results are no longer produced, decisions are more usefully informed by cost-effectiveness analysis, and the opportunity costs of large budget impacts are properly evaluated – especially in the context of low- and middle-income countries where the foregone health from poor decisions can be so significant.
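A back-of-the-envelope example (with invented numbers) shows how easily the paradox arises when cost-effectiveness and affordability are judged separately:

```python
# Stylised example of 'cost-effective but unaffordable': the ICER clears the
# threshold, yet the annual budget impact dwarfs the payer's available headroom.
incremental_cost_per_patient = 400       # vs current care
incremental_qalys_per_patient = 0.05
eligible_patients = 2_000_000
annual_budget_headroom = 100_000_000

icer = incremental_cost_per_patient / incremental_qalys_per_patient
budget_impact = incremental_cost_per_patient * eligible_patients

print(f"ICER: {icer:,.0f} per QALY")                   # 8,000 per QALY
print(f"Annual budget impact: {budget_impact:,.0f}")   # 800,000,000
print(f"Affordable within headroom? {budget_impact <= annual_budget_headroom}")
```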

Patient cost-sharing, socioeconomic status, and children’s health care utilization. Journal of Health Economics [PubMed] Published 16th April 2018

This paper evaluates a policy using a combination of regression discontinuity design and difference-in-difference methods. Not only does it do that, but it tackles an important policy question using a detailed population-wide dataset (a set of linked datasets, more accurately). As if that weren’t enough, one of the policy reforms was actually implemented as a result of a vote where two politicians ‘accidentally pressed the wrong button’, reducing concerns that the policy may have in some way not been exogenous. Needless to say I found the method employed in this paper to be a pretty convincing identification strategy. The policy question at hand is about whether demand for GP visits for children in the Swedish county of Scania (Skåne) is affected by cost-sharing. Cost-sharing for GP visits has occurred for different age groups over different periods of time, providing the basis for regression discontinuities around the age threshold and treated and control groups over time. Nilsson and Paul find results suggesting that when health care is free of charge doctor visits by children increase by 5-10%. In this context, doctor visits happened subject to telephone triage by a nurse and so in this sense it can be argued that all of these visits would be ‘needed’. Further, Nilsson and Paul find that the sensitivity to price is concentrated in low-income households, and is greater among sickly children. The authors contextualise their results very well and, in addition to that context, I can’t deny that it also particularly resonated with me to read this approaching the 70th birthday of the NHS – a system where cost-sharing has never been implemented for GP visits by children. This paper is clearly also highly relevant to that debate that has surfaced again and again in the UK.
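For readers who want a feel for the kind of specification such a design implies, here is a rough sketch in Python with fabricated data. The variable names, functional form, and data-generating process are mine; Nilsson and Paul’s actual model differs in its details (bandwidths, age trends, clustering, and so on).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
# Fabricated data purely to make the sketch run.
df = pd.DataFrame({
    "age_c": rng.uniform(-5, 5, n),          # age centred on the cost-sharing threshold
    "post_reform": rng.integers(0, 2, n),    # 1 in the period when care below the threshold was free
})
df["above"] = (df["age_c"] > 0).astype(int)  # 1 if above the age threshold (cost-sharing applies)
# In this fake data-generating process, free care raises visit rates below the threshold.
df["visits"] = rng.poisson(1.0 + 0.1 * (1 - df["above"]) * df["post_reform"])

# Simplified difference-in-discontinuities: the above:post_reform interaction
# captures how the jump at the age threshold changes after the reform.
model = smf.ols(
    "visits ~ above * post_reform + age_c + age_c:above", data=df
).fit(cov_type="HC1")
print(model.params)
```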

Credits


Brent Gibbons’s journal round-up for 9th April 2018


The effect of Medicaid on management of depression: evidence from the Oregon Health Insurance Experiment. The Milbank Quarterly [PubMed] Published 5th March 2018

For the first journal article of this week’s AHE round-up, I selected a follow-up study on the Oregon health insurance experiment. The Oregon Health Insurance Experiment (OHIE) used a lottery system to expand Medicaid to low-income uninsured adults (and their associated households) who were previously ineligible for coverage. Those interested in being part of the study had to sign up. Individuals were then randomly selected through the lottery, after which individuals needed to take further action to complete enrollment in Medicaid, which included showing that enrollment criteria were satisfied (e.g. income below 100% of poverty line). These details are important because many who were selected for the lottery did not complete enrollment in Medicaid, though being selected through the lottery was associated with a 25 percentage point increase in the probability of having insurance (which the authors confirm was overwhelmingly due to Medicaid and not other insurance). More details on the study and data are publicly available. The OHIE is a seminal study in that it allows researchers to study the effects of having insurance in an experimental design – albeit in the U.S. health care system’s context. The other study that comes to mind is of course the famous RAND health insurance experiment that allowed researchers to study the effects of different levels of health insurance coverage. For the OHIE, the authors importantly point out that it is not necessarily obvious what the impact of having insurance is. While we would expect increases in health care utilization, it is possible that increases in primary care utilization could result in offsetting reductions in other settings (e.g. hospital or emergency department use). Also, while we would expect increases in health as a result of increases in health care use, it is possible that by reducing adverse financial consequences (e.g. of unhealthy behavior), health insurance could discourage investments in health. Medicaid has also been criticized by some as not very good insurance – though there are strong arguments to the contrary. First-year outcomes were detailed in another paper. These included increased health care utilization (across all settings), decreased out-of-pocket medical expenditures, decreased medical debt, improvements in self-reported physical and mental health, and decreased probability of screening positive for depression. In the follow-up paper on management of depression, the authors further explore the causal effect and causal pathway of having Medicaid on depression diagnosis, treatment, and symptoms. Outcomes of interest are the effect of having Medicaid on the prevalence of undiagnosed and untreated depression, the use of depression treatments including medication, and on self-reported depressive symptoms. Where possible, outcomes are examined for those with a prior depression diagnosis and those without. In order to examine the effect of Medicaid insurance (vs. being uninsured), the authors needed to control for the selection bias introduced from uncompleted enrollment into Medicaid. Instrumental variable 2SLS was used with lottery selection as the sole instrument. Local average treatment effects were reported with clustered standard errors on the household. The effect of Medicaid on the management of depression was overwhelmingly positive. 
For those with no prior depression diagnosis, it increased the chance of receiving a diagnosis and decreased the prevalence of undiagnosed depression (those who scored high on the study’s survey depression instrument but had no official diagnosis). As for treatment, Medicaid reduced the share of the population with untreated depression, virtually eliminating untreated depression among those with pre-lottery depression. There was a large reduction in unmet need for mental health treatment and an increased share who received specific mental health treatments (i.e. prescription drugs and talk therapy). For self-reported symptoms, Medicaid reduced the overall rate of screening positive for depression symptoms in the post-lottery period. All effects were relatively strong in magnitude, giving an overall convincing picture that Medicaid increased access to treatment, which improved depression symptoms. The biggest limitation of this study is its generalizability. Many of the results were focused on the city of Portland, which may not represent more rural parts of the state. More importantly, this was limited to the state of Oregon for low-income adults who not only expressed interest in signing up, but who were able to follow through to complete enrollment. Other limitations were that the study only looked at the first two years of outcomes and that there was limited information on the types of treatments received.
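To make the identification strategy concrete, here is a stripped-down sketch of a lottery-instrumented 2SLS with household-clustered standard errors, using fabricated data whose structure loosely mimics the design (the variable names, effect sizes, and the simulated confounder are all mine, not the study’s):

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

rng = np.random.default_rng(1)
n = 10_000
# Lottery selection is random, raises the chance of Medicaid enrolment by roughly
# 25 percentage points, and Medicaid in turn lowers the probability of untreated
# depression; 'frail' is an unobserved confounder motivating the IV approach.
lottery = rng.integers(0, 2, n)
frail = (rng.normal(size=n) > 0).astype(int)
medicaid = (rng.uniform(size=n) < 0.15 + 0.25 * lottery + 0.10 * frail).astype(int)
untreated_depression = (rng.uniform(size=n) < 0.20 + 0.15 * frail - 0.10 * medicaid).astype(int)
df = pd.DataFrame({
    "lottery": lottery,
    "medicaid": medicaid,
    "untreated_depression": untreated_depression,
    "household": rng.integers(0, 4_000, n),
})

# Outcome on instrumented Medicaid, standard errors clustered on household.
# The coefficient is a local average treatment effect (LATE) for lottery compliers.
res = IV2SLS.from_formula(
    "untreated_depression ~ 1 + [medicaid ~ lottery]", data=df
).fit(cov_type="clustered", clusters=df["household"])
print(res.summary)
```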

Tobacco regulation and cost-benefit analysis: how should we value foregone consumer surplus? American Journal of Health Economics [PubMed] [RePEc] Published 23rd January 2018

This second article addresses a very interesting theoretical question in cost-benefit analysis, that has emerged in the context of tobacco regulation. The general question is how should foregone consumer surplus, in the form of reduced smoking, be valued? The history of this particular question in the context of recent FDA efforts to regulate smoking is quite fascinating. I highly recommend reading the article just for this background. In brief, the FDA issued proposed regulations to implement graphic warning labels on cigarettes in 2010 and more recently proposed that cigars and e-cigarettes should also be subject to FDA regulation. In both cases, an economic impact analysis was required and debates ensued on if, and how, foregone consumer surplus should be valued. Economists on both sides weighed-in, some arguing that the FDA should not consider foregone consumer surplus because smoking behavior is irrational, others arguing consumers are perfectly rational and informed and the full consumer surplus should be valued, and still others arguing that some consumer surplus should be counted but there is likely bounded rationality and that it is methodologically unclear how to perform a valuation in such a case. The authors helpfully break down the debate into the following questions: 1) if we assume consumers are fully informed and rational, what is the right approach? 2) are consumers fully informed and rational? and 3) if consumers are not fully informed and rational, what is the right approach? The reason the first question is important is that the FDA was conducting the economic impact analysis by examining health gains and foregone consumer surplus separately. However, if consumers are perfectly rational and informed, their preferences already account for health impacts, meaning that only changes in consumer surplus should be counted. On the second question, the authors explore the literature on smoking behavior to understand “whether consumers are rational in the sense of reflecting stable preferences that fully take into account the available information on current and expected future consequences of current choices.” In general, the literature shows that consumers are pretty well aware of the risks, though they may underestimate the difficulty of quitting. On whether consumers are rational is a much harder question. The authors explore different rational addiction models, including quasi-rational addiction models that take into account more recent developments in behavioral economics, but declare that the literature at this point provides no clear answer and that no empirical test exists to distinguish between rational and quasi-rational models. Without answering whether consumers are fully informed and rational, the authors suggest that welfare analysis – even in the face of bounded rationality – can still use a similar valuation approach to consumer surplus as was recommended for when consumers are fully informed and rational. A series of simple supply and demand curves are presented where there is a biased demand curve (demand under bounded rationality) and an unbiased demand curve (demand where fully informed and rational) and different regulations are illustrated. The implication is that rather than trying to estimate health gains as a result of regulations, what is needed is to understand the amount of demand bias as result of bounded rationality. Foregone consumer surplus can then be appropriately measured. 
Of course, more research is needed to estimate if, and how much, ‘demand bias’ or bounded rationality exists. The framework of the paper is extremely useful and it pushes health economists to consider advances that have been made in environmental economics to account for bounded rationality in cost-benefit analysis.
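A stylised calculation (with invented demand parameters) illustrates why the size of the ‘demand bias’ is the key empirical quantity in this framework:

```python
# Stylised linear demand curves illustrating the paper's framework: the
# welfare-relevant foregone consumer surplus from a regulation that reduces
# smoking from q0 to q1 is the area between the *unbiased* demand curve and the
# price line, not the area under the observed (biased) demand curve.

def foregone_surplus(intercept, slope, price, q_low, q_high, steps=10_000):
    """Area between the inverse demand curve and the price line over [q_low, q_high]."""
    dq = (q_high - q_low) / steps
    total = 0.0
    for i in range(steps):
        q = q_low + (i + 0.5) * dq
        wtp = intercept - slope * q          # willingness to pay for the q-th unit
        total += max(wtp - price, 0.0) * dq
    return total

price, q0, q1 = 5.0, 10.0, 7.0               # price, consumption before/after regulation
biased_loss = foregone_surplus(intercept=15.0, slope=1.0, price=price, q_low=q1, q_high=q0)
unbiased_loss = foregone_surplus(intercept=13.0, slope=1.0, price=price, q_low=q1, q_high=q0)

print(f"Foregone CS using observed (biased) demand: {biased_loss:.2f}")
print(f"Foregone CS using unbiased demand:          {unbiased_loss:.2f}")
```

The gap between the two numbers is the portion of the conventionally measured welfare loss that this framework attributes to bounded rationality.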

2SLS versus 2SRI: appropriate methods for rare outcomes and/or rare exposures. Health Economics [PubMed] Published 26th March 2018

This third paper I will touch on only briefly, but I wanted to include it as it addresses an important methodological topic. The paper explores several alternative instrumental variable estimation techniques for situations when the treatment (exposure) variable is binary, compared to the common 2SLS (two-stage least squares) estimation technique which was developed for a linear setting with continuous endogenous treatments and outcome measures. A more flexible approach, referred to as 2SRI (two-stage residual inclusion) allows for non-linear estimation methods in the first stage (and second stage), including logit or probit estimation methods. As the title suggests, these alternative estimation methods may be particularly useful when treatment (exposure) and/or outcomes are rare (e.g below 5%). Monte Carlo simulations are performed on what the authors term ‘the simplest case’ where the outcome, treatment, and instrument are binary variables and a range of results are considered as the treatment and/or outcome become rarer. Model bias and consistency are assessed in the ability to produce average treatment effects (ATEs) and local average treatment effects (LATEs), comparing the 2SLS, several forms of probit-probit 2SRI models, and a bivariate probit model. Results are that the 2SLS produced biased estimates of the ATE, especially as treatment and outcomes become rarer. The 2SRI models had substantially higher bias than the bivariate probit in producing ATEs (though the bivariate probit requires the assumption of bivariate normality). For LATE, 2SLS always produces consistent estimates, even if the linear probability model produces out of range predictions. Estimates for 2SRI models and the bivariate probit model were biased in producing LATEs. An empirical example was also tested with data on the impact of long-term care insurance on long-term care use. Conclusions are that 2SRI models do not dependably produce unbiased estimates of ATEs. Among the 2SRI models though, there were varying levels of bias and the 2SRI model with generalized residuals appeared to produce the least ATE bias. For more rare treatments and outcomes, the 2SRI model with Anscombe residuals generated the least ATE bias. Results were similar to another simulation study by Chapman and Brooks. The study enhances our understanding of how different instrumental variable estimation methods may function under conditions where treatment and outcome variables have nonlinear distributions and where those same treatments and outcomes are rare. In general, the authors give a cautionary note to say that there is not one perfect estimation method in these types of conditions and that researchers should be aware of the potential pitfalls of different estimation methods.
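To give a flavour of what such a simulation involves, here is a heavily simplified single Monte Carlo draw comparing 2SLS with a raw-residual probit-probit 2SRI (the paper’s simulations are far more extensive and also consider generalized and Anscombe residuals and a bivariate probit; all parameter values here are mine):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from linearmodels.iv import IV2SLS

# Binary instrument, treatment and outcome, with an unobserved confounder.
rng = np.random.default_rng(42)
n = 50_000
u = rng.normal(size=n)                                            # unobserved confounder
z = rng.integers(0, 2, n)                                         # binary instrument
d = (0.5 * z + 0.5 * u + rng.normal(size=n) > 1.0).astype(int)    # binary treatment
y = (0.4 * d + 0.5 * u + rng.normal(size=n) > 1.2).astype(int)    # binary outcome
df = pd.DataFrame({"y": y, "d": d, "z": z})

# 2SLS: linear in both stages.
tsls = IV2SLS.from_formula("y ~ 1 + [d ~ z]", data=df).fit()

# 2SRI: probit first stage, include the first-stage residual in a probit second
# stage, then average predicted outcome differences to obtain an ATE.
X1 = sm.add_constant(df["z"])
first = sm.Probit(df["d"], X1).fit(disp=0)
X2 = sm.add_constant(pd.DataFrame({"d": df["d"], "resid": df["d"] - first.predict(X1)}))
second = sm.Probit(df["y"], X2).fit(disp=0)

X_treated, X_control = X2.copy(), X2.copy()
X_treated["d"], X_control["d"] = 1, 0
ate_2sri = (second.predict(X_treated) - second.predict(X_control)).mean()

print("2SLS coefficient on treatment:", round(float(tsls.params["d"]), 4))
print("2SRI average treatment effect:", round(float(ate_2sri), 4))
```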

Credits