Sam Watson’s journal round-up for 10th September 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Probabilistic sensitivity analysis in cost-effectiveness models: determining model convergence in cohort models. PharmacoEconomics [PubMed] Published 27th July 2018

Probabilistic sensitivity analysis (PSA) is rightfully a required component of economic evaluations. Deterministic sensitivity analyses are generally biased; the output of a model evaluated at a particular choice of values from a complex joint distribution is unlikely to be a good reflection of the true model mean. PSA involves repeatedly sampling parameters from their respective distributions and analysing the resulting model outputs. But how many times should you do this? More often than not, an arbitrary number is selected that seems “big enough”, say 1,000 or 10,000. But these simulations themselves exhibit variance: so-called Monte Carlo error. This paper discusses making the choice of the number of simulations more formal by assessing the “convergence” of simulation output.

In the same way as sample sizes are chosen for trials, the number of simulations should provide an adequate level of precision; anything more wastes resources without improving inferences. For example, if the statistic of interest is the net monetary benefit, then we would want the confidence interval (CI) to exclude zero, as this should be a sufficient level of certainty for an investment decision. The paper therefore proposes conducting a number of simulations, examining whether the CI is ‘narrow enough’, and conducting further simulations if it is not. However, I see a problem with this proposal: the variance of a statistic from a sequence of simulations itself has variance. The stopping points at which we might check the CI are themselves arbitrary: additional simulations can increase the width of the CI as well as reduce it. Consider the following set of simulations from a simple ratio of random variables, ICER = gamma(1,0.01)/normal(0.01,0.01). [Figure: the width of the 95% CI plotted against the number of simulations, which both narrows and widens as simulations accumulate.] The proposed ‘stopping rule’ therefore doesn’t necessarily indicate ‘convergence’, since a few more simulations could lead to a wider, as well as a narrower, CI. The heuristic approach is undoubtedly an improvement on the way things are usually done, but I think there is scope here for a more rigorous method of assessing convergence in PSA.
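To make that point concrete, here is a minimal sketch (mine, not the paper’s) that draws from the ratio above and tracks the width of a normal-approximation 95% CI for the mean as simulations accumulate. The gamma is assumed to be parameterised by shape and scale, and checkpoints every 100 simulations are an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims = 10_000

# Draws from the ratio above: gamma(shape=1, scale=0.01) / normal(mean=0.01, sd=0.01)
icer = rng.gamma(1.0, 0.01, n_sims) / rng.normal(0.01, 0.01, n_sims)

# Width of the 95% CI for the mean ICER at successive checkpoints
widths = []
for n in range(100, n_sims + 1, 100):
    se = icer[:n].std(ddof=1) / np.sqrt(n)   # Monte Carlo standard error of the mean
    widths.append(2 * 1.96 * se)

# The width does not fall monotonically: extra simulations can widen the CI
increases = sum(w2 > w1 for w1, w2 in zip(widths, widths[1:]))
print(f"CI width increased at {increases} of {len(widths) - 1} checkpoints")
```

Whenever a checkpoint happens to fall just before one of those jumps, the ‘narrow enough’ criterion would be met, only to be violated again a few simulations later.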

Mortality due to low-quality health systems in the universal health coverage era: a systematic analysis of amenable deaths in 137 countries. The Lancet [PubMed] Published 5th September 2018

Richard Horton, the oracular editor-in-chief of the Lancet, tweeted last week about the role of journals in advocacy.

There is certainly an argument that academic journals are good forums to make advocacy arguments. Who better to interpret the analyses presented in these journals than the authors and audiences themselves? But, without a strict editorial bulkhead between analysis and opinion, we run the risk that the articles and their content are influenced or dictated by the political whims of editors rather than scientific merit. Unfortunately, I think this article is evidence of that.

No-one debates that improving health care quality will improve patient outcomes and experience. It is in the very definition of ‘quality’. This paper aims to estimate the number of deaths each year due to ‘poor quality’ in low- and middle-income countries (LMICs). The trouble with this is two-fold: first, given the number of unknown quantities required to get a handle on this figure (the definition of quality notwithstanding), the uncertainty around it should be incredibly high (see below); and second, attributing these deaths in a causal way to a nebulous definition of ‘quality’ is tenuous at best. The approach of the article is, in essence, to assume that the differences in fatality rates for treatable conditions between LMICs and the best-performing health systems on Earth, among people who attend health services, are entirely caused by ‘poor quality’. This definition of quality would therefore seem to encompass low resourcing, a poor supply of human resources, a lack of access to medicines, and everything else that differs between health systems. To arrive at the headline figure, the authors then have to contend with multiple sources of uncertainty, including:

  • Using a range of proxies for health care utilisation;
  • Using global burden of disease epidemiology estimates, which have associated uncertainty;
  • A number of data slicing decisions, such as truncating case fatality rates;
  • Estimating utilisation rates based on a predictive model;
  • Estimating the case-fatality rate for non-users of health services based on other estimated statistics.

Despite this, the authors claim to estimate a 95% uncertainty interval with a width of only 300,000 people around a mean estimate of 5.0 million deaths due to ‘poor quality’. This seems highly implausible, and yet it is claimed to be a causal effect of an undefined ‘poor quality’. The timing of this article coincides with the Lancet Commission on care quality in LMICs and, one suspects, had it not been for the advocacy angle on care quality, it would not have been published in this journal.
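To illustrate why a chain of uncertain inputs ought to produce a wide interval, here is a toy Monte Carlo sketch. The three inputs and their distributions are entirely made up (chosen only so the mean lands near the paper’s headline figure) and bear no relation to the authors’ actual model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Three hypothetical inputs, each with roughly 20% relative uncertainty
cases = rng.normal(400e6, 80e6, n)        # annual treatable cases (made up)
utilisation = rng.normal(0.60, 0.12, n)   # share of cases reaching a facility (made up)
excess_cfr = rng.normal(0.02, 0.004, n)   # excess case fatality vs. best systems (made up)

excess_deaths = cases * utilisation * excess_cfr

lo, hi = np.percentile(excess_deaths, [2.5, 97.5])
print(f"mean {excess_deaths.mean()/1e6:.1f}m, 95% interval {lo/1e6:.1f}m to {hi/1e6:.1f}m")
# With only three inputs at ~20% uncertainty each, the interval spans several
# million deaths, far wider than a 300,000-person band.
```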

Embedding as a pitfall for survey‐based welfare indicators: evidence from an experiment. Journal of the Royal Statistical Society: Series A Published 4th September 2018

Health economists will be well aware of the various measures used to evaluate welfare and well-being. Surveys are typically used that comprise questions relating to a number of different dimensions. These could include emotional and social well-being or physical functioning. Similar types of surveys are also used to elicit population preferences over states of the world or policy options; for example, Kahneman and Knetsch conducted a survey of willingness to pay (WTP) for different environmental policies. These surveys can exhibit what is called an ‘embedding effect’, which Kahneman and Knetsch described as when the value of a good varies “depending on whether the good is assessed on its own or embedded as part of a more inclusive package.” That is to say, the way people value single-dimensional attributes or qualities can be distorted when these are embedded as part of a multi-dimensional choice. This article reports the results of an experiment in which students were asked to weight the relative importance of different dimensions of the Better Life Index, including jobs, housing, and income. The randomised treatment was whether they rated ‘jobs’ as a single category or were presented with its individual dimensions, such as the unemployment rate and job security. The experiment shows strong evidence of embedding – the overall weighting differed substantially by treatment. This, the authors conclude, means that the Better Life Index fails to accurately capture preferences and is subject to manipulation should a researcher be so inclined – if you want evidence that your policy is the most important, just change the way the dimensions are presented.


Sam Watson’s journal round-up for 15th January 2018


Cost-effectiveness of publicly funded treatment of opioid use disorder in California. Annals of Internal Medicine [PubMed] Published 2nd January 2018

Deaths from opiate overdose have soared in the United States in recent years. In 2016, 64,000 people died this way, up from 16,000 in 2010 and 4,000 in 1999. The causes of public health crises like this are multifaceted, but we can identify two key issues that have contributed more than any other. Firstly, medical practitioners have been prescribing opiates irresponsibly for years. In each of the last ten years, well over 200,000,000 opiate prescriptions were issued in the US – enough for seven in every ten people. Once opiates are prescribed, their use is often not well managed. Prescriptions can be stopped abruptly, for example, leaving people with unexpected withdrawal syndromes and rebound pain. It is estimated that 75% of heroin users in the US began by using legal, prescription opiates. Secondly, drug suppliers have started cutting heroin with its far stronger but cheaper cousin, fentanyl. Given fentanyl’s strength, only a tiny amount is required to achieve the same effects as heroin, but a lack of pharmaceutical knowledge and equipment means it is often not measured or mixed appropriately into what is sold as ‘heroin’. There are two clear routes to alleviating the epidemic of opiate overdose: prevention, by ensuring responsible medical use of opiates, and ‘cure’, either by ensuring the quality and strength of heroin or by providing a means to stop opiate use. The first of these ‘cures’ – regulating the heroin supply – is politically infeasible, so it falls to the second to help those already habitually using opiates. However, the availability of opiate treatment programs, such as opiate agonist treatment (OAT), is lacklustre in the US. OAT provides opioid agonists, such as methadone or buprenorphine, to prevent withdrawal syndromes in users, who can then slowly be weaned off them. This article looks at the cost-effectiveness of providing OAT to all persons seeking treatment for opiate use in California, for an unlimited period, versus standard care, which only provides OAT to those who have failed supervised withdrawal twice, and then only for 21 days. The paper adopts a previously developed semi-Markov cohort model that includes states for treatment, relapse, incarceration, and abstinence. Transition probabilities for the new OAT treatment were determined from treatment data for current OAT patients (as far as I understand it). This does raise a question, though, about the generalisability of this population to the whole population of opiate users – given the need to have already been through two supervised withdrawals, current patients may have a greater motivation to quit, for example. In any case, the article estimates that the OAT program would be cost-saving, through reductions in crime and incarceration, and would improve population health, by reducing the risk of death. Taken at face value, these results seem highly plausible. But, as we’ve discussed before, drug policy rarely seems to be evidence-based.
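For readers unfamiliar with this kind of model, here is a stripped-down sketch of a cohort trace over the states described. It is a plain Markov simplification rather than the paper’s semi-Markov model, and the transition probabilities are entirely hypothetical.

```python
import numpy as np

# States taken from the paper's description; transition probabilities are invented
states = ["treatment", "relapse", "incarceration", "abstinence", "dead"]

# Hypothetical annual transition matrix, rows sum to 1 (illustrative only)
P = np.array([
    [0.60, 0.25, 0.05, 0.08, 0.02],   # from treatment
    [0.30, 0.50, 0.10, 0.02, 0.08],   # from relapse
    [0.20, 0.30, 0.45, 0.02, 0.03],   # from incarceration
    [0.05, 0.10, 0.01, 0.83, 0.01],   # from abstinence
    [0.00, 0.00, 0.00, 0.00, 1.00],   # dead is absorbing
])

cohort = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # everyone starts in treatment
for year in range(10):
    cohort = cohort @ P                          # one cycle of the cohort trace

print(dict(zip(states, np.round(cohort, 3))))
```

Costs and QALYs attached to each state and cycle, summed over the trace, would then give the cost-effectiveness comparison; the semi-Markov element in the paper allows transition probabilities to depend on how long the cohort has spent in a state, which this sketch ignores.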

The impact of aid on health outcomes in Uganda. Health Economics [PubMed] Published 22nd December 2017

Examining the response of population health outcomes to changes in health care expenditure has been the subject of a large and growing number of studies. One reason is to estimate a supply-side cost-effectiveness threshold: the health returns the health service achieves in response to budget expansions or contractions. Similarly, we might want to know the returns to particular types of health care expenditure. For example, there remains a debate about the effectiveness of aid spending in low- and middle-income country (LMIC) settings. Aid spending may fail to be effective for reasons such as resource leakage, failure to target the right population, poor design and implementation, and crowding out of other public sector investment. Looking at these questions at an aggregate level can be tricky; the link between expenditure, or expenditure decisions, and health outcomes is long, and causality flows in multiple directions. Effects are therefore likely to be small and noisy and to require strong theoretical foundations to interpret. This article takes a different, and innovative, approach to the question. In essence, the analysis boils down to a longitudinal comparison of those who live near large, aid-funded health projects with those who don’t. The expectation is that the benefit of any aid spending will be felt most acutely by those who live nearest to the health care facilities that come about as a result of it. Indeed, this is what the results show – proximity to an aid project reduced disease prevalence and work days lost to ill health, with greater effects observed closer to the project. However, one way of judging the ‘usefulness’ of this evidence is to ask how it can be used to improve policymaking: for example, in understanding the returns to investment, or over what area these projects have an impact. The latter is covered in the paper to some extent, but the former is hard to infer. A useful next step may be to try to quantify what kind of benefit aid dollars produce and the heterogeneity thereof.
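As a very rough illustration of the distance-gradient idea, a regression of an outcome on distance bands to the nearest aid-funded facility might look like the following. This is a cross-sectional toy with invented variable names and numbers, not the paper’s longitudinal specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2_000

# Hypothetical households at varying distance from an aid-funded facility,
# with the benefit fading as distance grows (all numbers are invented)
dist_km = rng.uniform(0, 30, n)
ill_days = 8 + 0.15 * dist_km + rng.normal(0, 3, n)   # work days lost rise with distance
band = pd.cut(dist_km, [0, 5, 10, 20, 30], labels=["0-5", "5-10", "10-20", "20-30"])

df = pd.DataFrame({"ill_days": ill_days, "band": band})
fit = smf.ols("ill_days ~ C(band, Treatment(reference='20-30'))", data=df).fit()
print(fit.params.filter(like="band"))   # nearer bands show fewer days lost than the farthest
```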

The impact of social expenditure on health inequalities in Europe. Social Science & Medicine Published 11th January 2018

Let us consider for a moment how we might explore empirically whether social expenditure (e.g. unemployment support, child support, housing support, etc.) affects health inequalities. First, we establish a measure of health inequality. We need a proxy measure of health – this study uses self-rated health and self-rated difficulty in daily living – and we then compare these outcomes along some relevant measure of socioeconomic status (SES) – in this study, level of education and a compound measure of occupation, income, and education (the ISEI). So far, so good. Data on levels of social expenditure are available in Europe and are used here, but oddly these data are converted to a percentage of GDP. The trouble with doing this is that the variable can change either because social expenditure changes or because GDP changes. During the financial crisis, for example, social expenditure shot up as a proportion of GDP, which likely had very different effects on health and inequality than when social expenditure increased as a proportion of GDP because of a policy change, as under the Labour government. This variable also likely bears little relationship to the level of support received per eligible person. Anyway, at the crudest level, we can then consider how the relationship between SES and health is affected by social spending. A more nuanced approach might consider who the recipients of social expenditure are and where they stand on our measure of SES, but I digress. In the article, the baseline category for education is those with only primary education or less, which seems an odd category to compare against: given compulsory schooling ages, I would imagine this is a very small proportion of people in Europe, unless, of course, they are children. But including children in the sample would be an odd choice here, since they don’t personally receive social assistance and are difficult to compare to adults. However, there are no descriptive statistics in the paper, so we don’t know, and no comparisons are made between other groups. Indeed, the estimates of the intercepts in the models are very noisy and variable for no obvious reason, other than perhaps that the reference group is very small. Despite the problems outlined so far, though, there is a potentially more serious one. The article uses a logistic regression model, which is perfectly justifiable given the binary or ordinal nature of the outcomes. However, the authors justify the conclusion that “Results show that health inequalities measured by education are lower in countries where social expenditure is higher” by demonstrating that the odds ratio for reporting a poor health outcome in the groups with more than primary education, compared with primary education or less, is smaller in magnitude when social expenditure as a proportion of GDP is higher. But the conclusion does not follow from the premise. It is entirely possible for these odds ratios to change without any change in the variance of the underlying distribution of health, the relative ordering of people, or the absolute difference in health between categories, simply by shifting the whole distribution up or down. For example, if the proportions of people in two groups reporting a negative outcome are 0.3 and 0.4, and these change to 0.2 and 0.3 respectively, then the odds ratio comparing the two groups changes from 0.64 to 0.58; the difference between the groups remains 0.1. No calculations of absolute effects are made in the paper, though. GDP is also shown to have a positive effect on health outcomes. All that might have been shown, then, is that the relative difference in health outcomes between those with primary education or less and everyone else changes as GDP changes because everyone is getting healthier. The question the article addresses is interesting; it’s a shame about the execution.
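The odds-ratio arithmetic in that example is easy to verify; the proportions below are the ones used above, not figures from the paper.

```python
def odds_ratio(p1, p2):
    """Odds ratio for a binary outcome, comparing group 1 with group 2."""
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

print(round(odds_ratio(0.3, 0.4), 2))  # 0.64
print(round(odds_ratio(0.2, 0.3), 2))  # 0.58
# The odds ratio shrinks even though the absolute gap between the groups is 0.1 both times.
```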


Sam Watson’s journal round-up for 11th December 2017


Can incentives improve survey data quality in developing countries?: results from a field experiment in India. Journal of the Royal Statistical Society: Series A Published 17th November 2017

I must admit a keen interest in the topic of this paper. As part of a large project looking at the availability of health services in slums and informal settlements around the world, we are designing a household survey. Much like the Demographic and Health Surveys, which are perhaps the gold standard of household surveys in low-income countries, interviewers will go door to door to sampled households to complete surveys. One of the problems with household surveys is that they take a long time to complete, and so non-response can be an issue. A potential solution is to offer respondents incentives, cash or otherwise, either before the survey or conditional on completing it. But any change in survey response as a result of an incentive might create suspicion about data quality. Work in high-income countries suggests incentives to participate have little or no effect on data quality, but there is little evidence about these effects in low-income countries. We might suspect the consequences of survey incentives to differ in poorer settings. For a start, many surveys are conducted on behalf of a government or an NGO, and respondents may misrepresent themselves if they believe further investment in their area might be forthcoming should they appear sufficiently badly-off. There may also be larger differences between the interviewer and interviewee in terms of education or cultural background. And finally, incentives can affect the balance between a respondent’s so-called intrinsic and extrinsic motivations for doing something. This study presents the results of a randomised trial in which the ‘treatment’ was a small conditional payment for completing a survey and the ‘control’ was no incentive. In both arms the response rate was very high (>96%), but it was higher in the treatment arm. More importantly, the authors compare responses to a broad range of socioeconomic and demographic questions between the study arms. Aside from the frequent criticism that statistical significance is interpreted here as implying the existence of a difference, there are some interesting results. The key observed difference is that respondents in the incentive arm consistently reported lower wealth across a number of categories. This may result from any of the aforementioned effects of incentives, but it may be evidence that incentives can affect data quality and should be used with caution.

Association of US state implementation of newborn screening policies for critical congenital heart disease with early infant cardiac deaths. JAMA [PubMed] Published 5th December 2017

Writing these journal round-ups obviously requires reading the papers that you choose. This can be quite an undertaking for papers published in economics journals, which are often very long, but they provide substantial detail allowing for a thorough appraisal. The opposite is true for articles in medical journals. They are pleasingly concise, but often at the expense of detail or additional analyses. This paper falls into the latter camp. Using detailed panel data on infant deaths by cause, by year, and by state in the US, it estimates the effect of mandated screening policies for infant congenital heart defects on deaths from this condition. Given these data and more space, one might expect to see more flexible models than the difference-in-differences type analysis presented here, such as allowing for state-level correlated time trends. The results seem clear and robust – the policies were associated with a reduction in deaths from congenital heart conditions of around a third. Given this, one might ask: if it’s so effective, why weren’t doctors doing it anyway? Additional analyses reveal little to no association of the policies with deaths from other conditions, which may suggest that doctors didn’t have to reallocate their time from other beneficial functions. Perhaps then the screening bore other costs. In the discussion, the authors mention that a previous economic evaluation showed universal screening to be relatively costly (approximately $40,000 per life year saved), but that this may be an overestimate in light of these new results. Certainly, an updated economic evaluation is warranted. However, the models used in the paper may lead one to be cautious about causal interpretations, and hence about using the estimates in an evaluation. Given some more space the authors may have added additional analyses, but then I might not have read it…
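For what it’s worth, the extra flexibility mentioned here is cheap to write down. Below is a sketch of a difference-in-differences specification with state-specific linear trends added, fitted to a synthetic state-by-year panel; all variable names and numbers are invented, and this is not the paper’s data or model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic state-by-year panel: deaths with screening mandates adopted in
# different years (everything here is made up for illustration)
n_states, n_years = 50, 12
df = pd.DataFrame({
    "state": np.repeat(np.arange(n_states), n_years),
    "year": np.tile(np.arange(2004, 2004 + n_years), n_states),
})
adopt = rng.integers(2008, 2014, n_states)                    # hypothetical mandate years
df["mandate"] = (df["year"] >= adopt[df["state"]]).astype(int)
df["deaths"] = rng.poisson(9 * np.where(df["mandate"] == 1, 2 / 3, 1.0))  # ~1/3 reduction

# Difference-in-differences with state and year fixed effects,
# plus state-specific linear time trends (the C(state):year term)
fit = smf.ols("deaths ~ mandate + C(state) + C(year) + C(state):year", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["state"]}
)
print(fit.params["mandate"])   # should recover roughly the built-in reduction
```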

Subsidies and structure: the lasting impact of the Hill-Burton program on the hospital industry. Review of Economics and Statistics [RePEc] Published 29th November 2017

The Hill-Burton program was enacted as part of the Hospital Survey and Construction Act of 1946 in the United States. As a reaction to the perceived lack of health care services for workers during World War 2, the program provided subsidies covering up to a third of the cost of building nonprofit and local hospitals, with poorer areas prioritised. This article examines the consequences of the subsidy program for the structure of the hospital market and health care utilisation. The main result is that the program increased hospital beds per capita and that this increase was lasting. More specific analyses are also presented. Firstly, the increase in beds took a number of years to materialise and showed a dose-response relationship: higher-funded counties had bigger increases. Secondly, the funding reduced private hospital bed capacity, but the net effect on overall hospital beds was positive, so the program changed the composition of the hospital sector. This would be expected, though, given that the subsidy substantially altered the relative costs of different types of hospital bed. And thirdly, hospital utilisation increased in line with the increases in capacity, indicating a previously unmet need for health care. Again, this was expected given the motivation for the program in the first place. It isn’t often that results turn out as neatly as this – the effects are exactly as one would expect and are large in magnitude. If only all research projects turned out this way.
