Chris Sampson’s journal round-up for 6th February 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A review of NICE methods and processes across health technology assessment programmes: why the differences and what is the impact? Applied Health Economics and Health Policy [PubMed] Published 27th January 2017

Depending on the type of technology under consideration, NICE adopts a variety of different approaches in coming up with its recommendations. Different approaches might result in different decisions, which could undermine allocative efficiency. This study explores this possibility. Data were extracted from the manuals and websites for 5 programmes, under the themes of ‘remit and scope’, ‘process of assessment’, ‘methods of evaluation’ and ‘appraisal of evidence’. Semi-structured interviews were conducted with 5 people with expertise in each of the 5 programmes. Results are presented in a series of tables – one for each theme – outlining the essential characteristics of the 5 programmes. In their discussion, the authors then go on to consider how the identified differences might impact on efficiency, from either a ‘utilitarian’ health-maximisation perspective or NICE’s egalitarian aim of ensuring adequate levels of health care. Not all programmes deliver recommendations with mandatory funding status, and only those that do have a formal appeals process. Allowing for local rulings on funding could be good or bad news for efficiency, depending on the capacity of local decision makers to conduct economic evaluations (so that means probably bad news). At the same time, regional variation could undermine NICE’s fairness agenda. The evidence considered by the programmes varies, from a narrow focus on clinical and cost-effectiveness to the incorporation of budget impact and wider ethical and social values. Only some of the programmes have reference cases, and those that do are the ones that use cost-per-QALY analysis, which probably isn’t a coincidence. The fact that some programmes use outcomes other than QALYs obviously has the potential to undermine health-maximisation. Most differences are born of practicality; there’s no point in insisting on a CUA if there is no evidence at all to support one – the appraisal would simply not happen. The very existence of alternative programmes indicates that NICE is not simply concerned with health-maximisation. Additional weight is given to rare conditions, for example. And NICE wants to encourage research and innovation. So it’s no surprise that we need to take into account NICE’s egalitarian view to understand the type of efficiency for which it strives.

Economic evaluations alongside efficient study designs using large observational datasets: the PLEASANT trial case study. PharmacoEconomics [PubMed] Published 21st January 2017

One of the worst things about working on trial-based economic evaluations is going to lots of effort to collect lots of data, then finding that at the end of the day you don’t have much to show for it. Nowadays, the health service routinely collects large volumes of data for other purposes. There have been proposals to use these data – instead of prospectively collecting data – to conduct clinical trials. This study explores the potential for doing an economic evaluation alongside such a trial. The study uses CPRD data, including diagnostic, clinical and resource use information, for 8,608 trial participants. The intervention was a letter sent out in the hope of reducing unscheduled medical contacts due to asthma exacerbation in children starting a new school year. QALYs couldn’t be estimated using the CPRD data, so values were derived from the literature and estimated on the basis of exacerbations indicated by changes in prescriptions or hospitalisations. Note the potentially artificial correlation between costs and outcomes that this creates, which somewhat undermines the benefit of some good old bootstrapping. The results suggest the intervention is cost-saving with little impact on QALYs. Lots of sensitivity analyses are conducted, which are interesting in themselves and say something about the concerns around some of the structural assumptions. The authors outline the pros and cons of the approach. It’s an important discussion, as it seems that studies like this are going to become increasingly common. Regarding data collection, there’s little doubt that this approach is more efficient, and it should be particularly valuable in the evaluation of public health and service delivery type interventions. The problem is that the study is not able to use individual-level cost and outcome data from the same people, which is what sets a trial-based economic evaluation apart from a model-based study. So for me, this isn’t really a trial-based economic evaluation. Indeed, the analysis incorporates a Markov-type model of exacerbations. It’s a different kind of beast, which incorporates aspects of modelling and aspects of trial-based analysis, along with some unique challenges of its own. There’s a lot more methodological work that needs to be done in this area, but this study demonstrates that it could be fruitful.
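For readers less familiar with the bootstrapping referred to above, here is a minimal sketch – on made-up patient-level numbers, not the PLEASANT dataset – of the usual approach in a trial-based evaluation: (cost, QALY) pairs are resampled with replacement within each arm, which preserves the within-patient correlation between costs and outcomes. It is precisely that correlation which becomes partly artificial when QALYs are derived from the same resource use records as the costs.

```python
import numpy as np

# Hypothetical patient-level data for two arms; all numbers are made up.
rng = np.random.default_rng(42)
costs_c = rng.gamma(shape=2.0, scale=150.0, size=400)
qalys_c = rng.normal(0.80, 0.10, size=400)
costs_i = rng.gamma(shape=2.0, scale=140.0, size=400)   # a letter is cheap to send
qalys_i = rng.normal(0.80, 0.10, size=400)

n_boot = 2000
inc_cost = np.empty(n_boot)
inc_qaly = np.empty(n_boot)
for b in range(n_boot):
    ic = rng.integers(0, len(costs_c), len(costs_c))    # resample control patients
    ii = rng.integers(0, len(costs_i), len(costs_i))    # resample intervention patients
    # (cost, QALY) pairs are resampled together, preserving within-patient correlation
    inc_cost[b] = costs_i[ii].mean() - costs_c[ic].mean()
    inc_qaly[b] = qalys_i[ii].mean() - qalys_c[ic].mean()

# probability the intervention is cost-effective at a £20,000/QALY threshold
threshold = 20_000
p_ce = np.mean(threshold * inc_qaly - inc_cost > 0)
print(f"mean incremental cost: {inc_cost.mean():.0f}, "
      f"mean incremental QALYs: {inc_qaly.mean():.4f}, "
      f"P(cost-effective at £20k/QALY): {p_ce:.2f}")
```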

“Too much medicine”: insights and explanations from economic theory and research. Social Science & Medicine [PubMed] Published 18th January 2017

Overconsumption of health care represents an inefficient use of resources, and so we wouldn’t recommend it. But is that all we – as economists – have to say on the matter? This study sought to dig a little deeper. A literature search was conducted to establish a working definition of overconsumption. Related notions such as overdiagnosis, overtreatment, overuse, low-value care, overmedicalisation and even ‘pharmaceuticalisation’ all crop up. The authors introduce ‘need’ as a basis for understanding overconsumption, which represents health care that should never be considered “needed”. A useful distinction is identified between misconsumption – where an individual’s own consumption is detrimental to their own well-being – and overconsumption, which can be understood as having a negative effect on social welfare. Note that in a collectively funded system the two concepts aren’t entirely distinguishable. Misconsumption becomes the focus of the paper, as avoiding harm to patients has been the subject of the “too much medicine” movement. I think this is a shame, and not really consistent with an economist’s usual perspective. The authors go on to discuss issues such as moral hazard, supplier-induced demand, provider payment mechanisms, ‘indication creep’, regret theory, and physicians’ positional consumption, and whether such phenomena might lead to individual welfare losses and thus be considered causes of misconsumption. The authors provide a neat diagram showing the various causes of misconsumption on a plane. One dimension represents the extent to which the cause is imperfect knowledge or imperfect agency, and the other the degree to which the cause operates at the individual or market level. There’s a big gap in the top right, where market-level causes meet imperfect knowledge. This area could have included patent systems, research fraud and dodgy Pharma practices. Or maybe just a portrait of Ben Goldacre for shorthand. There are some warnings about the (limited) extent to which market reforms might address misconsumption, and the proposed remedy for overconsumption is not really an economic one. Rather, a change in culture is prescribed. More research looking at existing treatments rather than technology adoption, and investigating subgroup effects, is also recommended. The authors further suggest collaboration between health economists and ecological economists.

Credits


Variations in NHS admissions at a glance

Variations in admissions to NHS hospitals are the source of a great deal of consternation. Over the long run, admissions and the volume of activity required of the NHS have increased, without equivalent increases in funding or productivity. Over the course of the year, there are repeated claims of crises as hospitals are ill-equipped for the increase in demand in the winter. And different patterns of admissions at weekends relative to weekdays may be the foundation of the ‘weekend effect’, as we recently demonstrated. Yet all these different sources of variation produce a single time series of daily admissions, and each of them matters for different planning and research aims. So let’s decompose the daily number of admissions into its various components.

Data

Daily numbers of emergency admissions to NHS hospitals between April 2007 and March 2015, taken from Hospital Episode Statistics (HES).

Methods

A similar analysis was first conducted on variations in the number of births by day of the year. A full description of the model can be found in Chapter 21 of the textbook Bayesian Data Analysis (indeed, the model is shown on the front cover!). The model is a sum of Gaussian processes, each one modelling a different aspect of the data, such as the long-run trend or weekly periodic variation. We have previously used Gaussian processes in a geostatistical model on this blog. Gaussian processes are a flexible class of models for which any finite-dimensional marginal distribution is Gaussian. Different covariance functions can be specified to capture different features of the data, such as the aforementioned periodic variation or long-run trend. The model was run using the software GPstuff in Octave (basically an open-source version of Matlab), with code modified from the GPstuff website.
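The analysis itself was run in GPstuff, but the additive structure can be illustrated with a rough Python sketch using scikit-learn: the covariance function is a sum of components, one for the slow long-run trend, one for the weekly pattern, one for yearly seasonality, plus observation noise. Simulated daily counts stand in for the HES series here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

# Simulated daily data with a trend, a weekend dip and a winter peak
rng = np.random.default_rng(0)
days = np.arange(0, 3 * 365)                          # three years of daily data
trend = 0.0004 * days
weekly = -0.15 * np.isin(days % 7, [5, 6])            # fewer weekend admissions
seasonal = 0.08 * np.cos(2 * np.pi * days / 365.25)   # winter peak, summer dip
y = trend + weekly + seasonal + rng.normal(0, 0.03, days.size)

# Additive kernel: one component per source of variation
kernel = (1.0 * RBF(length_scale=365.0)                                             # long-run trend
          + 1.0 * ExpSineSquared(length_scale=1.0, periodicity=7.0,
                                 periodicity_bounds="fixed")                        # weekly pattern
          + 1.0 * ExpSineSquared(length_scale=30.0, periodicity=365.25,
                                 periodicity_bounds="fixed")                        # yearly pattern
          + WhiteKernel(noise_level=0.1))                                           # observation noise

gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
idx = np.arange(0, days.size, 3)          # thin the series to keep the O(n^3) fit quick
gp.fit(days[idx].reshape(-1, 1), y[idx])
print(gp.kernel_)                          # optimised hyperparameters for each component
```

This is only a sketch of the additive idea; the GPstuff implementation used for the post also allows the fitted components to be extracted and plotted separately, which is what produces the panels below.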

Results

[Figure: decomposition of daily emergency admissions into four panels – long-run trend, day-of-week effects, seasonal variation and day-of-year effects]

The four panels of the figure reveal to us things we may claim to already know. Emergency admissions have been increasing over time and were about 15% higher in 2015 than in 2007 (top panel). The second panel shows us the day of the week effects: there are about 20% fewer admissions on a Saturday or Sunday than on a weekday. The third panel shows a decrease in summer and increase in winter as we often see reported, although perhaps not quite as large as we might have expected. And finally the bottom panel shows the effects of different days of the year. We should note that the large dip at the end of March/beginning of April is an artifact of coding at the end of the financial year in HES and not an actual drop in admissions. But, we do see expected drops for public holidays such as Christmas and the August bank holiday.

While none of this is unexpected, it does show that there’s a lot going on underneath the aggregate data. Perhaps the most alarming aspect is the long-run increase in emergency admissions when we compare it to the (lack of) change in funding or productivity. It suggests that hospitals will often be running at capacity, so additional variation, such as the winter surge, may push demand beyond available capacity. We might also speculate on other possible ‘weekend effects’, such as admission on a bank holiday.

As a final thought, the Gaussian process approach used here is an excellent way of modelling data with an unknown structure without imposing assumptions, such as linearity, that might be too strong – hence its use in geostatistics. Gaussian processes are also widely used in machine learning and artificial intelligence. We often encounter data with unknown and potentially complicated structures in health care and public health research, so hopefully this will serve as a good advert for some new methods. See this book, or the one referenced in the methods section, for an in-depth look.

Credits

Sam Watson’s journal round-up for 23rd January 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Short-term and long-term effects of GDP on traffic deaths in 18 OECD countries, 1960–2011. Journal of Epidemiology and Community Health [PubMed] Published February 2017

Understanding relationships between different aspects of the economy or society in the aggregate can reveal knowledge about the world. However, such analyses are more complicated than analyses of individuals who either did or did not receive an intervention, as the objects of aggregate analyses don’t ‘exist’ per se but are rather descriptions of the average behaviour of a system. To make sense of these analyses, an understanding of that system is therefore required. On these grounds I am a little unsure of the results of this paper, which estimates the effect of GDP on road traffic fatalities in OECD countries over time. It is noted that previous studies have shown that in the short run road traffic deaths are procyclical, but that in the long run they have declined, likely as a result of improved road and car safety. Indeed, this is what the authors find with their data and models. But what does this result mean in the long run? Have they picked up anything more than a correlation with time? Time is not included in the otherwise carefully specified models, so is the conclusion to policy makers ‘just keep doing what you’re doing, whatever that is…’? Models of aggregate phenomena can be among the most interesting, but also among the least convincing (my own included!). That being said, this is better than most.
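To illustrate the concern (with simulated data – this is not a re-analysis of the paper), here is a sketch of how omitting time from a country panel can load an unrelated secular decline in deaths onto the GDP coefficient.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Stylised data-generating process: GDP trends upwards, deaths trend downwards
# for unrelated reasons (safer cars and roads), and only GDP *growth* has a
# genuine, procyclical short-run effect on deaths.
rng = np.random.default_rng(0)
years = np.arange(1960, 2012)
rows = []
for c in range(18):
    gdp = 1.0 + 0.02 * (years - 1960) + rng.normal(0, 0.02, years.size).cumsum()
    safety = -0.03 * (years - 1960)                      # secular safety improvements
    deaths = (2.0 + 0.5 * np.diff(gdp, prepend=gdp[0])   # procyclical short-run effect
              + safety + rng.normal(0, 0.05, years.size))
    rows.append(pd.DataFrame({"country": c, "year": years,
                              "log_gdp": gdp, "log_deaths": deaths}))
df = pd.concat(rows, ignore_index=True)

no_trend = smf.ols("log_deaths ~ log_gdp + C(country)", data=df).fit()
with_trend = smf.ols("log_deaths ~ log_gdp + year + C(country)", data=df).fit()
print("GDP coefficient without time trend:", round(no_trend.params["log_gdp"], 3))
print("GDP coefficient with time trend:   ", round(with_trend.params["log_gdp"], 3))
```

Without the trend, the GDP coefficient soaks up the secular decline and looks like a large long-run ‘effect’; with it, the coefficient collapses towards zero. That is the essence of the ‘correlation with time’ worry.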

Sources of geographic variation in health care: Evidence from patient migration. Quarterly Journal of Economics [RePEc] Published November 2016

There are large geographic differences in health care utilisation both between countries and within countries. In the US, for example, the average Medicare enrollee spent around $14,400 in 2010 in Miami, Florida compared with around $7,800 in Minneapolis, Minnesota, even after adjusting for demographic differences. However, higher health care spending is generally not associated with better health outcomes. There is therefore an incentive for policy makers to legislate to reduce this disparity, but what will be effective depends on the causes of the variation. On one side, doctors may be dispensing treatments differently; for example, we previously featured a paper looking at the variation in overuse of medical testing by doctors. On the other side, patients may be sicker or have differing preferences on the intensity of their treatment. To try and distinguish between these two possible sources of variation, this paper uses geographical migration to look at utilisation among people who move from one area to another. They find that (a very specific) 47% of the difference in use of health care is attributable to patient characteristics. However, I (as ever) remain skeptical: a previous post brought up the challenge of ‘transformative treatments’, which may apply here as this paper has to rely on the assumption that patient preferences remain the same when they move. If moving from one city to another changes your preferences over healthcare, then their identification strategy no longer works well.
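The movers logic can be illustrated with a stylised simulation (my own toy version, not the paper’s specification): movers carry their own characteristics with them, so the extent to which their utilisation shifts towards the destination’s average reveals how much of the geographic gap is due to place rather than patients.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_per_region = 50, 200

place_effect = rng.normal(0, 0.3, n_regions)                   # regional practice styles
region = np.repeat(np.arange(n_regions), n_per_region)
# patients partially sort into regions (sicker patients in high-use regions)
patient_effect = rng.normal(0, 0.3, region.size) + 0.5 * place_effect[region]

# log utilisation before any move
y0 = patient_effect + place_effect[region] + rng.normal(0, 0.2, region.size)

# a random 10% of patients move to a new, randomly chosen region
movers = rng.random(region.size) < 0.10
dest = np.where(movers, rng.integers(0, n_regions, region.size), region)
y1 = patient_effect + place_effect[dest] + rng.normal(0, 0.2, region.size)

# regional averages of utilisation in the origin period
region_mean = np.array([y0[region == r].mean() for r in range(n_regions)])

# movers design: regress the change in a mover's utilisation on the
# destination-origin gap in average utilisation; the slope estimates the
# share of the geographic gap attributable to place rather than patients
dy = (y1 - y0)[movers]
gap = (region_mean[dest] - region_mean[region])[movers]
slope = np.polyfit(gap, dy, 1)[0]
print(f"share attributable to place:    {slope:.2f}")
print(f"share attributable to patients: {1 - slope:.2f}")
```

If preferences change on moving (the ‘transformative treatments’ worry), a mover’s patient effect is no longer fixed, and the slope no longer cleanly separates place from patients.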

Seeing beyond 2020: an economic evaluation of contemporary and emerging strategies for elimination of Trypanosoma brucei gambiense. Lancet Global Health Published November 2016

African sleeping sickness, or Human African trypanosomiasis, is targeted for elimination in the next decade. However, the strategy to do so has not been determined, nor whether any such strategy would be a cost-effective use of resources. This paper aims to model the different candidate strategies to estimate incremental cost-effectiveness ratios (ICERs). Infectious disease presents an interesting challenge for health economic evaluation, as the disease transmission dynamics need to be captured over time, which the authors achieve here with a ‘standard’ epidemiological model using ordinary differential equations. They find that, to reach elimination targets, an approach incorporating case detection, treatment, and vector control would be required.
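As a flavour of how transmission dynamics can be coupled to an economic evaluation – this is a generic SIR-style sketch with hypothetical parameters, not the authors’ model – costs and health losses can be accumulated as extra state variables alongside the epidemiological compartments, so that a strategy’s effect on the epidemic feeds directly into its costs and outcomes.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters for a minimal SIR-style sketch (per capita, per day)
beta, gamma = 0.3, 0.1           # transmission and recovery rates
cost_per_case_day = 2.0          # hypothetical treatment cost per infected person-day
daly_weight = 0.4                # hypothetical disability weight

def sir_with_costs(t, y, beta, gamma):
    s, i, r, cost, dalys = y
    new_inf = beta * s * i
    return [-new_inf,
            new_inf - gamma * i,
            gamma * i,
            cost_per_case_day * i,       # running per-capita treatment cost
            daly_weight * i / 365.0]     # running per-capita DALYs (person-years with disability)

y0 = [0.99, 0.01, 0.0, 0.0, 0.0]
sol = solve_ivp(sir_with_costs, (0, 365), y0, args=(beta, gamma))

s, i, r, cost, dalys = sol.y[:, -1]
print(f"final attack rate: {r:.2f}, cost per capita: {cost:.1f}, DALYs per capita: {dalys:.3f}")
```

Running the same model under two strategies (say, with and without vector control changing beta) and comparing the accumulated costs and DALYs is, in stripped-down form, how an ICER emerges from a dynamic transmission model.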

A conceptual introduction to Hamiltonian Monte Carlo. arXiv Published 10th January 2017

It is certainly possible to drive a car without understanding how the engine works. But if we want to get more out of the car or modify its components, then we will have to start learning some mechanics. The same is true of statistical software. We can knock out a simple logistic regression without ever really knowing the theory or what the computer is doing. But this ‘black box’ approach to statistics has clear problems. How do we know the numbers on the screen mean what we think they mean? If it doesn’t work, or if it is running slowly, how do we diagnose the problem? Programs for Bayesian inference can sometimes seem even more opaque than others: one might well ask what those chains are actually exploring, and whether it is even the distribution of interest. Well, over the last few years a new piece of kit, Stan, has become a brilliant and popular tool for Bayesian inference. It achieves fast convergence with less autocorrelation within chains, and so delivers a high effective sample size for relatively few iterations. This is due to its implementation of Hamiltonian Monte Carlo, but because that method is founded in the mathematics of differential geometry, understanding of how it works has been restricted to a limited few. This paper provides an excellent account of Hamiltonian Monte Carlo, how it works, and when it fails, all replete with figures. While it’s not necessary to become a theoretical or computational statistician, it is important, I think, to have a grasp of what the engine is doing if we’re going to play around with it.
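To give a flavour of what the paper explains, here is a toy Hamiltonian Monte Carlo sampler for a standard normal target – a bare-bones sketch of the idea, not Stan’s implementation. The negative log-density plays the role of potential energy, an auxiliary momentum supplies kinetic energy, and leapfrog steps simulate the Hamiltonian dynamics to propose distant states that are still accepted with high probability.

```python
import numpy as np

def neg_log_density(q):        # U(q) for a standard normal target
    return 0.5 * q ** 2

def grad_neg_log_density(q):   # dU/dq
    return q

def hmc_sample(n_samples, step_size=0.1, n_leapfrog=20, seed=1):
    rng = np.random.default_rng(seed)
    q = 0.0
    samples = []
    for _ in range(n_samples):
        p = rng.normal()                       # resample the momentum each iteration
        q_new, p_new = q, p
        # leapfrog integration of the Hamiltonian dynamics
        p_new -= 0.5 * step_size * grad_neg_log_density(q_new)
        for _ in range(n_leapfrog - 1):
            q_new += step_size * p_new
            p_new -= step_size * grad_neg_log_density(q_new)
        q_new += step_size * p_new
        p_new -= 0.5 * step_size * grad_neg_log_density(q_new)
        # accept or reject based on the change in total energy
        current_h = neg_log_density(q) + 0.5 * p ** 2
        proposed_h = neg_log_density(q_new) + 0.5 * p_new ** 2
        if rng.random() < np.exp(current_h - proposed_h):
            q = q_new
        samples.append(q)
    return np.array(samples)

draws = hmc_sample(5000)
print(draws.mean(), draws.std())   # should be close to 0 and 1
```

On real problems the step size and trajectory length need careful tuning, and the dynamics can break down in awkward regions of the posterior, which is exactly the ‘when it fails’ part of the paper.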

Credits