Alastair Canaway’s journal round-up for 20th March 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The use of quality-adjusted life years in cost-effectiveness analyses in palliative care: mapping the debate through an integrative review. Palliative Medicine [PubMed] Published 13th February 2017

February saw a health economics special within the journal Palliative Medicine – the editorials are very much worth a read to get a quick idea of how health economics has (and hasn’t) developed within the end of life care context. One of the most commonly encountered debates when discussing end of life care within health economics circles relates to the use of QALYs, and whether they’re appropriate. This paper aimed to map out the pros and cons of using the QALY framework to inform health economic decisions in the palliative care context. Being a review, it offers no ground-breaking findings, more a refresher on the issues with the QALY at the end of life: i) restrictions in life years gained, ii) conceptualisation of quality of life and its measurement, and iii) valuation and additivity of time. The review acknowledges the criticisms of the QALY but concludes that it is still of use for informing decision making. A key finding, and one which should be common sense, is that the EQ-5D should not be relied on as the sole measure within this context: the dimensions important to those at the end of life are not adequately captured by the EQ-5D, and other measures should be considered. A limitation for me was that the review did not include Round’s (2016) book Care at the End of Life: An Economic Perspective (disclaimer: I’m a co-author on a chapter), which has significant overlap with, and builds on, a number of the issues relevant to the paper. That aside, this is a useful paper for those new to the pitfalls of economic evaluation at the end of life and provides an excellent summary of many of the key issues.
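
Since the debate turns on how the QALY framework behaves when survival gains are short and utilities are low, a toy calculation may help. This is a minimal sketch with entirely hypothetical utilities and durations (none of the numbers are from the paper):

```python
# A QALY weights each period of survival by its health-state utility,
# so at the end of life both small life-year gains and low utilities
# compress the measurable benefit.

def qalys(profile):
    """profile: iterable of (years, utility) pairs -> total QALYs."""
    return sum(years * utility for years, utility in profile)

# Usual care: 6 months of survival at utility 0.3.
usual = qalys([(0.5, 0.3)])
# A hypothetical palliative intervention raising utility to 0.45
# and adding two months of survival:
intervention = qalys([(0.5, 0.45), (2 / 12, 0.45)])
gain = intervention - usual
```

If the EQ-5D misses dimensions that matter at the end of life, the utility inputs above will be understated, and with them the computed gain, which is exactly why the choice of measure is contested in this context.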

The causal effect of retirement on mortality: evidence from targeted incentives to retire early. Health Economics [PubMed] [RePEc] Published 23rd February 2017

It’s been said that those who retire earlier die earlier, and a quick Google search suggests there are many statistics supporting this. However, I’m unsure how robust the causality is in such studies. For example, the sick may choose to leave the workforce early. Previous academic literature had been inconclusive regarding the effects, and in which direction they occurred. This paper sought to elucidate this by taking advantage of pension reforms within the Netherlands which meant certain cohorts of Dutch civil servants could qualify for early retirement at a younger age. This change led to a steep increase in retirement and provided an opportunity to examine causal impacts by instrumenting retirement with the early retirement window. Administrative data from the entire population were used to examine the probability of dying resulting from earlier retirement. Contrary to preconceptions, the probability of men dying within five years dropped by 2.6% among those who took early retirement: a large and significant impact. The biggest impact was found within the first year of retirement. An explanation for this is that the reduction in stress and the lifestyle change upon retiring may postpone death for civil servants who were in poor health. The paper is an excellent example of harnessing a natural experiment for research purposes. It provides a valuable contribution to the evidence base whilst also being reassuring for those of us who plan to retire in the next few years (lottery win pending).
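
The instrumental-variable logic of the design can be sketched with simulated data. This is our own illustration, not the paper’s estimator or numbers: eligibility for the early-retirement window plays the instrument, and unobserved health confounds the naive comparison of retirees with non-retirees.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process (all coefficients are ours):
# eligibility is as-good-as-random across cohorts, while unobserved
# health drives both retirement and five-year mortality.
eligible = rng.integers(0, 2, n).astype(float)   # instrument Z
health = rng.normal(0.0, 1.0, n)                 # unobserved confounder
retire = (0.4 * eligible - 0.8 * health + rng.normal(0, 1, n) > 0).astype(float)
death5 = (-0.5 * retire - 1.0 * health + rng.normal(0, 1, n) > 0.8).astype(float)

# Naive comparison mixes the causal effect with selection (the sick retire more):
naive = death5[retire == 1].mean() - death5[retire == 0].mean()

# Wald/IV estimate: reduced form divided by first stage.
first_stage = retire[eligible == 1].mean() - retire[eligible == 0].mean()
reduced_form = death5[eligible == 1].mean() - death5[eligible == 0].mean()
iv = reduced_form / first_stage
```

In this simulation the naive estimate is dragged upwards by the poor health of early retirees, while the IV estimate recovers the (negative) causal effect among those induced to retire by the window, mirroring the paper’s headline result.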

Mapping to estimate health-state utility from non–preference-based outcome measures: an ISPOR Good Practices for Outcomes Research Task Force report. Value in Health [PubMed] Published 16th February 2017

Finally, I just wanted to signpost this new good practice guide. If you ever attend HESG, ISPOR, or IHEA, you’ll nearly always encounter a paper on mapping (cross-walking). Given the ethical issues surrounding research waste and the increasing pressure to publish, mapping provides an excellent opportunity to maximise the value of your data. Of course, mapping also serves a purpose for the health economics community: it facilitates the estimation of QALYs in studies where no preference-based measure exists. There are many iffy mapping functions out there, so it’s good to see ISPOR take action by producing a report on best practice for mapping. As with most ISPOR guidelines, the paper covers all the main areas you’d expect and guides you through the key considerations in undertaking a mapping exercise, including: pre-modelling considerations, data requirements, selection of statistical models, selection of covariates, reporting of results, and validation. There is also a short section for those who are keen to use a mapping function to generate QALYs but are unsure which to pick. As with any set of guidelines, it’s not exactly a thriller; it is, however, extremely useful for anyone seeking to conduct mapping.
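
As a flavour of what a basic mapping exercise involves, here is a deliberately simple sketch with simulated data. The measure, the numbers, and the linear OLS specification are all our assumptions; the report discusses richer models (e.g. limited-dependent-variable and mixture models) that better respect the bounded, skewed distribution of utilities.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical estimation dataset in which both instruments were
# administered: a condition-specific score (0-100, higher is better)
# and EQ-5D utility (bounded above at 1, full health).
score = rng.uniform(0, 100, n)
utility = np.clip(0.2 + 0.007 * score + rng.normal(0, 0.08, n), -0.594, 1.0)

# Fit a simple linear mapping function by ordinary least squares.
X = np.column_stack([np.ones(n), score])
beta, *_ = np.linalg.lstsq(X, utility, rcond=None)

# Apply the fitted function to a study that collected only the score,
# generating the utilities needed for QALY estimation.
new_scores = np.array([20.0, 50.0, 80.0])
pred = np.column_stack([np.ones(3), new_scores]) @ beta
```

The report’s checklist (data requirements, model selection, covariates, reporting, validation) is essentially about doing each of these steps defensibly rather than, as here, in the quickest possible way.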

Credits


Sam Watson’s journal round-up for 6th March 2017


It’s good to be first: order bias in reading and citing NBER working papers. The Review of Economics and Statistics [RePEc] Published 23rd February 2017

Each week one of the authors at this blog chooses three or four recently published studies to summarise and briefly discuss. Making this choice from the many thousands of articles published every week can be difficult. I browse those journals that publish in my area and search recently published economics papers on PubMed and Econlit for titles that pique my interest. But this strategy is not without its own flaws, as this study aptly demonstrates. When making a choice among many alternatives, people aren’t typically presented with an unordered set of choices but with a list. This arises in healthcare as well. In an effort to promote competition, at least in the UK, patients are presented with a list of possible providers and some basic information about those providers. We recently covered a paper that explored this expansion of choice ‘sets’ and investigated its effects on quality. We have previously criticised the use of such lists. People often skim these lists relying on simple heuristics to make choices. This article shows that for the weekly email of new papers published by the National Bureau of Economic Research (NBER), being listed first leads to an increase of approximately 30% in downloads and citations, despite the essentially random ordering of the list. This is certainly not the first study to illustrate the biases in human decision making, but it shows that this journal round-up may not be a fair reflection of the literature, and that providing more information about healthcare providers may not have the impact on quality that might be hypothesised.

Economic conditions, illicit drug use, and substance use disorders in the United States. Journal of Health Economics [PubMed] Published March 2017

We have featured a large number of papers about the relationship between macroeconomic conditions and health and health-related behaviours on this blog. It is certainly one of the health economic issues du jour and one we have discussed in detail. Generally speaking, when looking at an aggregate level, such as countries or states, all-cause mortality appears to be pro-cyclical: it declines in economic downturns. At the individual or household level, by contrast, unemployment and reduced income appear generally bad for health. It is certainly possible to reconcile these two effects, as any discussion of Simpson’s paradox will reveal. This study takes the aggregate approach, looking at US state-level unemployment rates and their relationship with drug use. It’s relevant to the discussion around economic conditions and health; the US has seen soaring rates of opiate-related deaths recently, although whether this is linked to the prevailing economic conditions remains to be seen. Unfortunately, this paper predicates much of its discussion of whether there is an effect on whether there was statistical significance, a gripe we’ve contended with previously. And there are no corrections for multiple comparisons, despite the well over 100 hypothesis tests that are conducted. That aside, the authors conclude that the evidence suggests that use of ecstasy and heroin is procyclical with respect to unemployment (i.e. use increases with greater unemployment) and that LSD, crack cocaine, and cocaine use is counter-cyclical. The results appear robust to the model specifications they compare, but I find it hard to reconcile some of the findings with prior information about how people actually consume drugs. Many drugs are substitutes and/or complements for one another. For example, many heroin users began using opiates through abuse of prescription drugs such as oxycodone but made the switch because heroin is generally much cheaper. Alcohol and marijuana have been shown to be substitutes for one another. All of this suggests a lack of independence between the different outcomes considered. People may also lose their job because of drug use. Taken together, I remain a little sceptical of the conclusions from the study, but it is nevertheless an interesting and timely piece of research.

Child-to-adult neurodevelopmental and mental health trajectories after early life deprivation: the young adult follow-up of the longitudinal English and Romanian Adoptees study. The Lancet [PubMed] Published 22nd February 2017

Does early life deprivation lead to later life mental health issues? A question that is difficult to answer with observational data. Children from deprived backgrounds may be predisposed to mental health issues, perhaps through familial inheritance. To attempt to discern whether deprivation in early life is a cause of mental health issues, this paper uses data derived from a cohort of Romanian children who spent time in one of the terribly deprived institutions of Ceaușescu’s Romania and who were later adopted by British families. These institutions were characterised by poor hygiene, inadequate food, and a lack of social or educational stimulation. A cohort of British adoptees was used for comparison. For children who spent more than six months in one of the deprived institutions, there was a large increase in cognitive and social problems in later life compared with either British adoptees or those who spent less than six months in an institution. The evidence is convincing, with differences being displayed across multiple dimensions of mental health, and a clear causal mechanism by which deprivation acts. However, for this and many other studies that I write about on this blog, a disclaimer might be needed when there is significant (pun intended) abuse and misuse of p-values. Ziliak and McCloskey’s damning diatribe on p-values, The Cult of Statistical Significance, presents examples of lists of p-values being given completely out of context, with no reference to the model or hypothesis test they are derived from, and with the implication that they represent whether an effect exists or not. This study does just that. I’ll leave you with this extract from the abstract:

Cognitive impairment in the group who spent more than 6 months in an institution remitted from markedly higher rates at ages 6 years (p=0·0001) and 11 years (p=0·0016) compared with UK controls, to normal rates at young adulthood (p=0·76). By contrast, self-rated emotional symptoms showed a late onset pattern with minimal differences versus UK controls at ages 11 years (p=0·0449) and 15 years (p=0·17), and then marked increases by young adulthood (p=0·0005), with similar effects seen for parent ratings. The high deprivation group also had a higher proportion of people with low educational achievement (p=0·0195), unemployment (p=0·0124), and mental health service use (p=0·0120, p=0·0032, and p=0·0003 for use when aged <11 years, 11–14 years, and 15–23 years, respectively) than the UK control group.


Sam Watson’s journal round-up for 23rd January 2017


Short-term and long-term effects of GDP on traffic deaths in 18 OECD countries, 1960–2011. Journal of Epidemiology and Community Health [PubMed] Published February 2017

Understanding the relationships between different aspects of the economy or society in the aggregate can reveal knowledge about the world. However, such analyses are more complicated than analyses of individuals who either did or did not receive an intervention, as the objects of aggregate analyses don’t ‘exist’ per se but are rather descriptions of the average behaviour of the system. To make sense of these analyses an understanding of the system is therefore required. On these grounds I am a little unsure of the results of this paper, which estimates the effect of GDP on road traffic fatalities in OECD countries over time. It is noted that previous studies have shown that in the short-run, road traffic deaths are procyclical, but in the long-run they have declined, likely as a result of improved road and car safety. Indeed, this is what they find with their data and models. But, what does this result mean in the long-run? Have they picked up anything more than a correlation with time? Time is not included in the otherwise carefully specified models, so is the conclusion to policy makers, ‘just keep doing what you’re doing, whatever that is…’? Models of aggregate phenomena can be among the most interesting, but also among the least convincing (my own included!). That being said, this is better than most.

Sources of geographic variation in health care: Evidence from patient migration. Quarterly Journal of Economics [RePEc] Published November 2016

There are large geographic differences in health care utilisation, both between countries and within countries. In the US, for example, the average Medicare enrollee in Miami, Florida spent around $14,400 in 2010, compared with around $7,800 in Minneapolis, Minnesota, even after adjusting for demographic differences. However, higher health care spending is generally not associated with better health outcomes. There is therefore an incentive for policy makers to legislate to reduce this disparity, but what will be effective depends on the causes of the variation. On one side, doctors may be dispensing treatments differently; for example, we previously featured a paper looking at the variation in overuse of medical testing by doctors. On the other side, patients may be sicker or have differing preferences over the intensity of their treatment. To try to distinguish between these two possible sources of variation, this paper uses geographical migration to look at utilisation among people who move from one area to another. They find that (a very specific) 47% of the difference in use of health care is attributable to patient characteristics. However, I (as ever) remain sceptical: a previous post brought up the challenge of ‘transformative treatments’, which may apply here, as this paper has to rely on the assumption that patient preferences remain the same when they move. If moving from one city to another changes your preferences over healthcare, then their identification strategy no longer works well.
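
The mover logic can be illustrated with a toy two-city simulation. This is our own caricature, not the paper’s estimator (their decomposition comes from an event-study around moves in panel data), and every number below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000

# Log utilisation = person effect + place effect. The "place" effects
# and the sicker-patient composition of the high-use city are made up.
place = {"Miami": 9.6, "Minneapolis": 9.0}
origin = rng.choice(["Miami", "Minneapolis"], n)
person = rng.normal(0, 0.3, n) + np.where(origin == "Miami", 0.2, 0.0)

use_before = person + np.array([place[o] for o in origin])
# Each person moves to the other city; their person effect travels with them.
dest = np.where(origin == "Miami", "Minneapolis", "Miami")
use_after = person + np.array([place[d] for d in dest])

# Cross-sectional gap between cities vs. how much a mover's use changes:
gap = use_before[origin == "Miami"].mean() - use_before[origin == "Minneapolis"].mean()
change = (use_after - use_before)[origin == "Minneapolis"].mean()

place_share = change / gap        # share of the gap that follows the place
patient_share = 1 - place_share   # share that follows the patient
```

Because the person effect travels with the mover, only the place component of utilisation changes on moving; the fraction of the cross-city gap that a mover’s utilisation jumps identifies the place share, and the remainder is attributed to patients. The identification, as noted above, leans on person effects not changing with the move.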

Seeing beyond 2020: an economic evaluation of contemporary and emerging strategies for elimination of Trypanosoma brucei gambiense. Lancet Global Health Published November 2016

African sleeping sickness, or human African trypanosomiasis, is targeted for elimination in the next decade. However, the strategy to do so has not been determined, nor whether any such strategy would be a cost-effective use of resources. This paper models the different strategies to estimate incremental cost-effectiveness ratios (ICERs). Infectious disease presents an interesting challenge for health economic evaluation, as the disease transmission dynamics need to be captured over time, which the authors achieve here with a ‘standard’ epidemiological model using ordinary differential equations. They find that reaching the elimination targets would require an approach incorporating case detection, treatment, and vector control.
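
To give a flavour of the transmission-dynamics component, here is a deliberately minimal SIR-type sketch with made-up parameters, integrated by simple Euler steps. The paper’s model of gambiense HAT is far richer (host and tsetse-vector compartments, staged disease), so treat this purely as an illustration of the ODE approach:

```python
# Minimal SIR dynamics: ds/dt = -beta*s*i, di/dt = beta*s*i - gamma*i.
def run_sir(beta, gamma, days=365, dt=0.1, s0=0.99, i0=0.01):
    """Returns (final infectious fraction, peak infectious fraction)."""
    s, i, r = s0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt    # new infections this step
        new_rec = gamma * i * dt       # removals (recovery/treatment)
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return i, peak

# A higher removal rate gamma stands in for adding active case detection
# and treatment; vector control would instead act by reducing beta.
i_base, peak_base = run_sir(beta=0.3, gamma=0.1)
i_ctrl, peak_ctrl = run_sir(beta=0.3, gamma=0.2)
```

Costing each strategy and attaching health outcomes to the resulting epidemic trajectories is what turns a model like this into the ICERs the paper reports.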

A conceptual introduction to Hamiltonian Monte Carlo. arXiv Published 10th January 2017

It is certainly possible to drive a car without understanding how the engine works. But if we want to get more out of the car or modify its components then we will have to start learning some mechanics. The same is true of statistical software. We can knock out a simple logistic regression without ever really knowing the theory or what the computer is doing. But this ‘black box’ approach to statistics has clear problems. How do we know the numbers on the screen mean what we think they mean? If it doesn’t work, or if it is running slowly, how do we diagnose the problem? Programs for Bayesian inference can sometimes seem even more opaque than others: one might well ask what those chains are actually exploring, and whether it is even the distribution of interest. Well, over the last few years a new piece of kit, Stan, has become a brilliant and popular tool for Bayesian inference. It achieves fast convergence with less autocorrelation within chains, and so it achieves a high effective sample size for relatively few iterations. This is due to its implementation of Hamiltonian Monte Carlo. But it is founded in the mathematics of differential geometry, which has restricted the understanding of how it works to a limited few. This paper provides an excellent account of Hamiltonian Monte Carlo: how it works, and when it fails, all replete with figures. While it’s not necessary to become a theoretical or computational statistician, it is important, I think, to have a grasp of what the engine is doing if we’re going to play around with it.
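
To make the engine a little less opaque, here is a bare-bones HMC sampler for a one-dimensional standard normal target. This is illustrative only: Stan layers adaptive step sizes, mass-matrix estimation, and the no-U-turn sampler on top of this core idea.

```python
import numpy as np

rng = np.random.default_rng(3)

def grad_neg_logp(q):
    """Gradient of -log N(0, 1) density (up to a constant): just q."""
    return q

def hmc_step(q, eps=0.2, L=20):
    p = rng.normal()                 # resample auxiliary momentum
    q_new, p_new = q, p
    # Leapfrog integration of the Hamiltonian dynamics:
    p_new -= 0.5 * eps * grad_neg_logp(q_new)
    for _ in range(L - 1):
        q_new += eps * p_new
        p_new -= eps * grad_neg_logp(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_neg_logp(q_new)
    # Metropolis correction for numerical error in the energy:
    h_old = 0.5 * q ** 2 + 0.5 * p ** 2
    h_new = 0.5 * q_new ** 2 + 0.5 * p_new ** 2
    return q_new if rng.random() < np.exp(h_old - h_new) else q

q, samples = 0.0, []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
samples = np.array(samples)
```

The long leapfrog trajectories are what let the sampler make distant, weakly autocorrelated moves; when the integrator’s energy error blows up (e.g. step size too large for the target’s curvature), proposals get rejected, which is one of the failure modes the paper dissects.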
