Sam Watson’s journal round-up for 25th February 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Democracy does cause growth. Journal of Political Economy [RePEc] Published January 2019

Citizens of a country with a democratic system of government are able to effect change in their governors and influence policy. This threat of voting the poorly performing out of power provides an incentive for the government to legislate in a way that benefits the population. However, democracy is certainly no guarantee of good governance, economic growth, or population health, as many events of the last ten years testify. Similarly, non-democracies can also enact policy that benefits the people. A benevolent dictator does not face the same need to satisfy voters and can enact politically challenging but beneficial policies; people often point to China as a key example. So the question remains as to whether democracy per se has any tangible economic or health benefits.

In a past discussion of an article on democratic reform and child health, I concluded that “Democratic reform is neither a sufficient nor necessary condition for improvements in child mortality.” Nevertheless, democracy may still be beneficial, on average, given the in-built safeguards against poor leaders. This paper, which has been doing the rounds for years as a working paper, is another examination of the impact of becoming democratic. Principally the article is focused on economic growth, but health and education outcomes feature (very) briefly. My concern with both the article mentioned at the beginning of this paragraph and this newly published one is that they do not consider in great detail why democratisation occurred. As much political science work points out, democratic reform can be demanded in poor economic conditions arising from poor governance; for these endogenous changes, it is the economic conditions that cause democracy. In other countries, democracy may come about in a more exogenous manner. Lumping them all in together may be misleading.

While the authors of this paper provide page after page of different regression specifications, including auto-regressive models and instrumental variables models, I remain unconvinced. For example, the instrument relies on ‘waves’ of transitions: a country is more likely to shift politically if its regional neighbours do, as in the Arab Spring. But neither economic nor political conditions in a given country are independent of its neighbours, so the exclusion restriction is doubtful. In somewhat of a rebuttal, Ruiz Pozuelo and colleagues conducted a survey to try to identify and separate out those countries which transitioned to democracy endogenously and exogenously (with respect to economic conditions). Their work suggests that the countries that transitioned exogenously did not experience growth benefits. Taken together, this shows the importance of theory guiding empirical work, and not the other way round.
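To see why the exclusion restriction matters here, consider a minimal simulation (entirely hypothetical data and coefficients, not the paper's specification) in which a common regional shock drives both neighbours' transitions and local growth; two-stage least squares with the 'wave' instrument then overstates the effect of democracy:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500  # simulated country-year observations

# A common regional shock drives both neighbours' transitions and local
# growth -- exactly the dependence that undermines a 'wave' instrument.
regional_shock = rng.normal(size=n)
neighbour_democracy = (regional_shock + rng.normal(size=n) > 0).astype(float)
democracy = (0.8 * neighbour_democracy + rng.normal(size=n) > 0.5).astype(float)
growth = 1.0 * democracy + 0.5 * regional_shock + rng.normal(size=n)

def ols(y, x):
    """OLS of y on [1, x]; returns (intercept, slope)."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: predict democracy from the regional 'wave' instrument.
a, b = ols(democracy, neighbour_democracy)
d_hat = a + b * neighbour_democracy

# Stage 2: regress growth on predicted democracy. The true effect is 1.0,
# but the estimate is biased upwards because the shock also affects growth.
print("2SLS estimate: %.2f (true effect: 1.0)" % ols(growth, d_hat)[1])
```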

Effect of Novartis Access on availability and price of non-communicable disease medicines in Kenya: a cluster-randomised controlled trial. Lancet: Global Health Published February 2019

Access to medicines is one of the key barriers to achieving universal health care. The cost-effectiveness threshold in many low income countries rules out many potentially beneficial medicines. This is driven in part by the high prices charged by pharmaceutical companies, which often do not discriminate between purchasers with high and low abilities to pay. Novartis launched a scheme – Novartis Access – to provide medicines to low and middle income countries at a price of US$1 per treatment per month. This article presents a cluster randomised trial of the scheme in eight counties of Kenya.

The trial assigned the programme to four treatment counties and used four counties as controls. Outcomes were analysed at the level of pharmacies and of individuals with non-communicable diseases selected at random within each county. Given the small number of clusters, a covariate-constrained randomisation procedure was used (sketched below), which restricts the randomisation to allocations with a decent balance of covariates between arms. However, the analysis does not control for the covariates used in the constrained randomisation, which can lead to lower power and incorrect type one error rates. This problem is compounded by the use of statistical significance to decide what was and was not affected by the Novartis Access programme. While practically all the drugs investigated show improved availability, only the two with p<0.05 are reported to have improved; given the very small sample of clusters, this is a tricky distinction to make! Significance aside, the programme appears to have had some success in improving access to diabetes and asthma medication, but not quite as much as hoped. Introductory microeconomics would, in any case, predict that such savings are not all passed on to the consumer.
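For readers unfamiliar with the design, here is a minimal sketch of covariate-constrained randomisation for eight clusters, using made-up covariates and a simple squared-difference balance score (the trial's actual covariates and constraint will differ):

```python
import itertools
import numpy as np

rng = np.random.default_rng(7)

# Made-up baseline covariates for 8 counties (e.g. population size and
# baseline medicine availability) -- purely illustrative numbers.
covariates = rng.normal(size=(8, 2))

def imbalance(treated):
    """Sum of squared differences in covariate means between arms."""
    control = [i for i in range(8) if i not in treated]
    diff = covariates[list(treated)].mean(axis=0) - covariates[control].mean(axis=0)
    return float(np.sum(diff ** 2))

# Score every possible 4-vs-4 allocation (70 in total), then constrain the
# randomisation to the best-balanced 10% and draw one at random.
allocations = sorted(itertools.combinations(range(8), 4), key=imbalance)
candidates = allocations[: max(1, len(allocations) // 10)]
treated = candidates[rng.integers(len(candidates))]
print("Treatment counties:", treated)
```

The point about the analysis follows directly: because only well-balanced allocations can ever be drawn, the covariates used in the constraint should also appear in the analysis model, otherwise the stated type one error rates do not correspond to the actual randomisation distribution.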


Sam Watson’s journal round-up for 11th February 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Contest models highlight inherent inefficiencies of scientific funding competitions. PLoS Biology [PubMed] Published 2nd January 2019

If you work in research, you will no doubt have thought to yourself at some point that you spend more time applying to do research than actually doing it. You can spend weeks working on (what you believe to be) a strong proposal only for it to fail against other strong bids. That time could have been spent collecting and analysing data; indeed, the opportunity cost of writing extensive proposals can be very high. The question arises as to whether there is another method of allocating research funding that reduces this waste and inefficiency. This paper compares the proposal competition to a partial lottery. In this lottery system, proposals are short, and among those that meet some qualifying standard, those that are funded are selected at random. This system has the benefit of not taking up too much time, at the cost of reducing the average scientific value of the winning proposals. The authors compare the two approaches using an economic model of contests, which takes into account factors like proposal strength, public benefits, benefits to the scientist such as reputation and prestige, and scientific value. Ultimately they conclude that, when the number of awards is smaller than the number of proposals worthy of funding, the proposal competition is inescapably inefficient: researchers have to invest heavily to get a good project funded, and even a good project may still not get funded. The stiffer the competition, the harder researchers have to work to win the award. And what little evidence there is suggests that the format of the application makes little difference to the time researchers spend writing it. The lottery mechanism only requires the researcher to propose something good enough to get into the lottery, so far less time would be devoted to writing and more to actual science. I’m all for it!
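The paper's contest model is far richer than this, but a toy simulation (my own stylised numbers, not the authors') captures the basic trade-off: the lottery sacrifices a little average quality among funded projects in exchange for a large saving in proposal-writing effort:

```python
import numpy as np

rng = np.random.default_rng(42)
n, k = 100, 10                     # proposals submitted, awards available
quality = rng.uniform(size=n)      # underlying scientific value

# Proposal competition: everyone sinks heavy writing effort (stylised as a
# constant), which slightly improves their proposal's assessed score.
effort_competition = 0.5 * np.ones(n)          # e.g. weeks of writing
score = quality + 0.2 * effort_competition + rng.normal(0, 0.05, n)
funded_competition = np.argsort(score)[-k:]

# Partial lottery: a short proposal just has to clear a qualifying bar;
# winners are then drawn at random from the qualifiers.
effort_lottery = 0.05 * np.ones(n)
qualifiers = np.where(quality > 0.4)[0]
funded_lottery = rng.choice(qualifiers, size=k, replace=False)

print("competition: total effort %5.1f, mean funded quality %.2f"
      % (effort_competition.sum(), quality[funded_competition].mean()))
print("lottery:     total effort %5.1f, mean funded quality %.2f"
      % (effort_lottery.sum(), quality[funded_lottery].mean()))
```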

Preventability of early versus late hospital readmissions in a national cohort of general medicine patients. Annals of Internal Medicine [PubMed] Published 5th June 2018

Hospital quality is hard to judge. We’ve discussed on this blog before the pitfalls of using measures such as adjusted mortality differences for this purpose. Just because a hospital has higher than expected mortality does not mean those deaths could have been prevented with higher quality care. More thorough methods assess errors and preventable harm in care: case note review studies have suggested that as little as 5% of deaths might be preventable in England and Wales. Another paper we have covered previously suggested that, as a consequence, the predictive value of standardised mortality ratios for preventable deaths may be less than 10%.

Another commonly used metric is the readmission rate: poor care can mean patients have to return to the hospital. But again, the question remains as to how preventable these readmissions are. Indeed, there may be substantial differences between patients who are readmitted shortly after discharge and those readmitted after a longer period. This article explores the preventability of early and late readmissions in ten hospitals in the US, using case note review by multiple reviewers to evaluate preventability. The headline figures are that 36% of early readmissions were considered preventable compared with 23% of late readmissions. Moreover, early readmissions were judged most likely to have been preventable at the hospital itself, whereas for late readmissions an outpatient clinic or the home would have had more impact. All in all, another paper providing evidence that crude, or even adjusted, rates are not good indicators of hospital quality.

Visualisation in Bayesian workflow. Journal of the Royal Statistical Society: Series A (Statistics in Society) [RePEc] Published 15th January 2019

This article stems from a broader programme of work by these authors on good “Bayesian workflow”. That is to say, if we’re taking a Bayesian approach to analysing data, what steps ought we to take to ensure our analyses are as robust and reliable as possible? I’ve been following this work for a while, as this type of pragmatic advice is invaluable. I’ve often read empirical papers where the authors have chosen, say, a logistic regression model with covariates x, y, and z and reported the outcomes, but at no point justified why this particular model might be any good for these data or the research objective. The key steps of the workflow include, first, exploratory data analysis to help set up a model, and second, performing model checks before estimating model parameters. This latter step is important: one can generate data from the model and its prior distributions, and if these simulated data look nothing like what we would expect the real data to look like, then clearly the model is not very good. Following this, we should check whether our inference algorithm is doing its job: for example, are the MCMC chains converging? We can also conduct posterior predictive model checks. These have been criticised in the literature for using the same data to both estimate and check the model, which could lead to the model generalising poorly to new data. Indeed, in a recent paper of my own, posterior predictive checks showed poor fit of a model to my data and suggested that a more complex alternative fitted better; but other model fit statistics, which penalise the number of parameters, led to the opposite conclusion, and the simpler model was preferred on the grounds that the more complex one was overfitting. I would argue, then, that posterior predictive checks are a sensible test to perform but must be interpreted carefully, as one step among many. Finally, we can compare models using tools like cross-validation.
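As an illustration of the pre-estimation checking step, here is a minimal prior predictive check for a hypothetical Poisson model of daily counts (my own toy example, not the paper's); an over-vague prior implies absurd datasets before any fitting has taken place:

```python
import numpy as np

rng = np.random.default_rng(0)

def prior_predictive(log_rate_sd, n_days=30, n_draws=1000):
    """Simulate datasets implied by the model and its prior alone."""
    log_rate = rng.normal(0.0, log_rate_sd, size=n_draws)   # prior on log-rate
    lam = np.exp(log_rate)[:, None] * np.ones(n_days)       # daily Poisson rates
    return rng.poisson(lam)

# Suppose we know real daily counts rarely exceed ~50. Compare an over-vague
# prior with a weakly informative one.
for sd in (5.0, 1.5):
    sims = prior_predictive(sd)
    print("prior sd = %.1f -> largest simulated count: %d" % (sd, sims.max()))
```

The vague prior routinely generates daily counts in the millions, which we know to be impossible, so it should be tightened before the model ever sees the data.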

This article discusses the use of visualisation to aid in this workflow. The authors use the running example of building a model to estimate exposure to small particulate matter from air pollution across the world. Plots are produced for each of the steps, showing just how bad some models can be and how a model can be refined step by step to arrive at a convincing analysis. I agree wholeheartedly with the authors when they write, “Visualization is probably the most important tool in an applied statistician’s toolbox and is an important complement to quantitative statistical procedures.”


How important is healthcare for population health?

How important is a population’s access to healthcare as a determinant of population health? I have heard the claim that “as little as 10% of a population’s health is linked to access to healthcare”, or some variant of it, in many places. Some examples include the Health Foundation, the AHRQ, the King’s Fund, the WHO, and determinantsofhealth.org. This claim is appealing: it feels counter-intuitive and it brings to the fore questions of public health and health-related behaviour. But it’s not clear what it means.

I can think of two possible interpretations. One, 10% of the variation in population health outcomes is explained by variation in healthcare access. Or two, access to healthcare leads to a 10% change in population health outcomes compared to no access to healthcare. Both of these claims would be very hard to evaluate empirically. Within many countries, particularly the highest income countries, there is little variation in access to healthcare relative to possible levels of access across the world. Inter-country comparisons would provide a greater range of variation to compare to population outcomes. But even the most sophisticated statistical analysis will struggle to separate out the effects of other economic determinants of health.
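To make the ambiguity concrete, the two readings could be written as follows (my own stylised notation, not taken from any of the sources above):

```latex
% Reading 1: share of variation in population health H explained by access A
1 - \frac{\mathbb{E}\left[\operatorname{Var}(H \mid A)\right]}{\operatorname{Var}(H)} = 0.1

% Reading 2: relative effect of access versus no access
\frac{\mathbb{E}[H \mid A = 1] - \mathbb{E}[H \mid A = 0]}{\mathbb{E}[H \mid A = 0]} = 0.1
```

These are different quantities: the first could be small even if healthcare has a large effect, simply because access varies little within the populations studied.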

It would also be difficult to make sense of any study that purported to estimate the effect of adding or removing healthcare beyond any within-country variation. The labour and capital resource needs of the most sophisticated hospitals are too great for the poorest settings, and it is unlikely that the wealthiest democratic countries could end up with the level of healthcare the world’s poorest face.

But what is the evidence for the claim of 10%? There are a handful of key citations, all of which were summarised in a widely cited Health Affairs article in 2014. For either of the two interpretations above, we would need estimates of the probability of health conditional on different levels of healthcare, Pr(health|healthcare). Each of the references for the 10% figure in fact provides evidence for the proportion of deaths associated with ‘inadequate’ healthcare, or to put it another way, the probability of having received ‘inadequate’ care given death, Pr(healthcare|health). This is known as transposing the conditional: we have our conditional probability the wrong way round. Even if we accept mortality rates as a reasonable proxy for population health, the two probabilities are not equal to one another.
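Bayes’ theorem makes the gap between the two quantities explicit:

```latex
\Pr(\text{death} \mid \text{inadequate care})
  = \Pr(\text{inadequate care} \mid \text{death})
    \times \frac{\Pr(\text{death})}{\Pr(\text{inadequate care})}
```

With purely illustrative numbers: if 10% of deaths involved inadequate care, the annual death rate were 1%, and 20% of patients received inadequate care, then the implied risk of death given inadequate care would be 0.1 × 0.01 / 0.2 = 0.5%. The 10% figure alone pins down neither quantity of interest.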

Interpretation of this evidence is also complex. Smoking tobacco, for example, would be considered a behavioural determinant of health, and deaths caused by it would be attributed to a behavioural cause rather than to healthcare. But survival rates for lung cancers have improved dramatically over the last few decades due to improvements in healthcare. While it would be foolish to attribute a death in the past to a lack of access to treatments that had not yet been invented, contemporary lung cancer deaths in low income settings may well have been preventable with access to better healthcare. Thus, using cause-of-death statistics to estimate the contributions of different factors to population health typically picks up only those deaths resulting from medical error or negligence; such statistics are a wholly unreliable guide to the role of healthcare in determining population health.

A study published recently in The Lancet, timed to coincide with a commission on healthcare quality, adopted a different approach. It aimed to estimate the annual number of deaths worldwide due to a lack of access to high-quality care, by comparing the mortality rates of conditions amenable to healthcare intervention around the world with those in the wealthiest nations; any differences were attributed either to non-utilisation of, or to lack of access to, high-quality care. The study estimated 15.6 million ‘excess deaths’. However, to attribute these deaths to inadequate healthcare access, one would need to conceive of a counterfactual world in which everyone was treated in the best healthcare systems, which is surely implausible in the extreme. A comparable question might be to ask how many people around the world are dying because their incomes are not as high as those of the top 10% of Americans.
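In stylised form (my own notation, which simplifies the study’s actual risk-standardisation), the calculation amounts to:

```latex
\text{excess deaths}
  = \sum_{c} \sum_{d} \left( m_{cd} - m^{\text{ref}}_{d} \right) \times \text{pop}_{c}
```

where m_{cd} is the mortality rate from amenable condition d in country c, m^ref_d is the corresponding rate in the reference (wealthiest) health systems, and pop_c is the population of country c. The implausible counterfactual is baked into the choice of m^ref.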

On the normative question, there is little disagreement with the goal of achieving universal health coverage and improving population health. But these dramatic, eye-catching, or counter-intuitive figures do little to support achieving those ends: they can distort policy priorities and create unattainable goals and expectations. Health systems are not built overnight; an incremental approach is needed to ensure sustainability and affordability. Great strides are being made, both methodologically and empirically, in building the evidence to support such an approach, but it is not nearly as exciting as claiming healthcare isn’t very important or that millions of people are dying every year due to poor healthcare access. Healthcare systems are an integral and important part of overall population health; assigning a single number to that importance is not.
