Sam Watson’s journal round-up for 11th February 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Contest models highlight inherent inefficiencies of scientific funding competitions. PLoS Biology [PubMed] Published 2nd January 2019

If you work in research you will no doubt have thought to yourself at some point that you spend more time applying to do research than actually doing it. You can spend weeks working on (what you believe to be) a strong proposal only for it to fail against other strong bids. That time could have been spent collecting and analysing data. Indeed, the opportunity cost of writing extensive proposals can be very high. The question arises as to whether there is another method of allocating research funding that reduces this waste and inefficiency. This paper compares the proposal competition to a partial lottery. In this lottery system, proposals are short, and among those that meet some qualifying standard the winners are selected at random. This system has the benefit of not taking up too much time but has the cost of reducing the average scientific value of the winning proposals. The authors compare the two approaches using an economic model of contests, which takes into account factors like proposal strength, public benefits, benefits to the scientist such as reputation and prestige, and scientific value. Ultimately they conclude that, when the number of awards is smaller than the number of proposals worthy of funding, the proposal competition is inescapably inefficient. Researchers have to invest heavily to get a good project funded, and even a good project may still not get funded. The stiffer the competition, the harder researchers have to work to win the award. And what little evidence there is suggests that the format of the application makes little difference to the amount of time researchers spend writing it. The lottery mechanism only requires the researcher to propose something that is good enough to get into the lottery. Far less time would therefore be devoted to writing proposals and more time spent on actual science. I’m all for it!
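To see the trade-off concretely, here is a stylised toy simulation (not the authors’ contest model; all quantities are hypothetical) comparing total effort expended and average value of funded proposals under the two mechanisms:

```python
import numpy as np

rng = np.random.default_rng(0)

n_props, n_awards = 200, 20
value = rng.uniform(0, 1, n_props)       # underlying scientific value of each proposal
effort_full = 0.5 + 0.5 * value          # stylised: stronger full bids cost more effort
effort_short = np.full(n_props, 0.1)     # short qualifying proposal costs little

# Proposal competition: fund the top-ranked proposals
comp_winners = np.argsort(value)[-n_awards:]

# Partial lottery: random draw among proposals above a quality bar
qualified = np.where(value > 0.5)[0]
lot_winners = rng.choice(qualified, size=n_awards, replace=False)

print("competition: total effort", effort_full.sum(),
      "| mean winner value", value[comp_winners].mean())
print("lottery:     total effort", effort_short.sum(),
      "| mean winner value", value[lot_winners].mean())
```

The competition funds the very best proposals, but only after every applicant sinks effort into a full bid; the partial lottery sacrifices some average winner quality in exchange for a large saving in total effort, which is exactly the tension the paper’s model formalises.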

Preventability of early versus late hospital readmissions in a national cohort of general medicine patients. Annals of Internal Medicine [PubMed] Published 5th June 2018

Hospital quality is hard to judge. We’ve discussed on this blog before the pitfalls of using measures such as adjusted mortality differences for this purpose. Just because a hospital has higher than expected mortality does not mean those deaths could have been prevented with higher quality care. More thorough methods assess errors and preventable harm in care. Case note review studies have suggested that as little as 5% of deaths might be preventable in England and Wales. Another paper we have covered previously suggests that the predictive value of standardised mortality ratios for preventable deaths may be less than 10%.

Another commonly used metric is readmission rates. Poor care can mean patients have to return to the hospital. But again, the question remains as to how preventable these readmissions are. Indeed, there may also be substantial differences between those patients who are readmitted shortly after discharge and those for whom readmission takes longer. This article explores the preventability of early and late readmissions in ten hospitals in the US, using case note review by a number of reviewers to evaluate preventability. The headline figures are that 36% of early readmissions were considered preventable, compared to 23% of late readmissions. Moreover, early readmissions were judged most likely to have been preventable at the hospital, whereas for late readmissions an outpatient clinic or care at home would have had more impact. All in all, this is another paper providing evidence that crude, or even adjusted, readmission rates are not good indicators of hospital quality.

Visualisation in Bayesian workflow. Journal of the Royal Statistical Society: Series A (Statistics in Society) [RePEc] Published 15th January 2019

This article stems from a broader programme of work by these authors on good “Bayesian workflow”. That is to say, if we’re taking a Bayesian approach to analysing data, what steps ought we to take to ensure our analyses are as robust and reliable as possible? I’ve been following this work for a while, as this type of pragmatic advice is invaluable. I’ve often read empirical papers where the authors have chosen, say, a logistic regression model with covariates x, y, and z and reported the outcomes, but at no point justified why this particular model might be any good at all for these data or the research objective. The key steps of the workflow include, first, exploratory data analysis to help set up a model, and second, performing model checks before estimating model parameters. This latter step is important: one can generate data from a model and a set of prior distributions, and if the data that this model generates look nothing like what we would expect the real data to look like, then clearly the model is not very good. Following this, we should check whether our inference algorithm is doing its job; for example, are the MCMC chains converging? We can also conduct posterior predictive model checks. These have been criticised in the literature for using the same data to both estimate and check the model, which could lead to the model generalising poorly to new data. Indeed, in a recent paper of my own, posterior predictive checks showed poor fit of a model to my data and suggested that a more complex alternative fitted better. But other model fit statistics, which penalise the number of parameters, led to the opposite conclusion, and the simpler model was preferred on the grounds that the more complex model was overfitting the data. I would therefore argue that posterior predictive model checks are a sensible test to perform, but one that must be interpreted carefully as one step among many. Finally, we can compare models using tools like cross-validation.
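The “generate data from the model and priors” step can be sketched in a few lines. This is a toy prior predictive check for a hypothetical logistic regression (the priors, covariate, and sample size are all illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical logistic regression: outcome ~ intercept + slope * x
x = np.linspace(-2, 2, 100)

def prior_predictive(n_draws=1000, prior_sd=1.0):
    """Simulate the outcome prevalences implied by the priors alone."""
    alpha = rng.normal(0, prior_sd, n_draws)   # intercept prior
    beta = rng.normal(0, prior_sd, n_draws)    # slope prior
    p = 1 / (1 + np.exp(-(alpha[:, None] + beta[:, None] * x)))
    y = rng.binomial(1, p)                     # one simulated dataset per draw
    return y.mean(axis=1)                      # prevalence in each dataset

# Compare the prevalences implied by a mild prior and a very diffuse one
for sd in (1.0, 10.0):
    prev = prior_predictive(prior_sd=sd)
    print("prior sd =", sd, "| 5/50/95% prevalence:",
          np.quantile(prev, [0.05, 0.5, 0.95]))
```

A very diffuse prior on the log-odds scale implies datasets whose prevalence piles up near 0 or 1 — exactly the kind of implausible behaviour these checks are designed to catch before any real data are touched.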

This article discusses the use of visualisation to aid this workflow. The authors use the running example of building a model to estimate exposure to small particulate matter from air pollution across the world. Plots are produced for each of the steps and show just how bad some models can be and how we can refine our model step by step to arrive at a convincing analysis. I agree wholeheartedly with the authors when they write, “Visualization is probably the most important tool in an applied statistician’s toolbox and is an important complement to quantitative statistical procedures.”


Meeting round-up: Health Economists’ Study Group (HESG) Winter 2019

2019 started with aplomb with the HESG Winter meeting, superbly organised by the Centre for Health Economics, University of York.

Andrew Jones kicked off proceedings with his brilliant course on data visualisation in health econometrics. The eager audience learnt about Edward Tufte’s and others’ ideas on how to create charts that make information much easier to understand. The course was tremendously well received by the HESG audience, and I know that I’ll find it incredibly useful too, as there were lots of ideas that apply to my work. I’m definitely going to be looking further into Andrew’s chapter on data visualisation to learn more.

The conference proper started in the afternoon. I had the pleasure of chairing the fascinating paper by Manuela Deidda et al on an economic evaluation of the Healthy Start Voucher using observational data, which was discussed by Anne Ludbrook. We had an engaging discussion that delved not only into the technical aspects of the paper, such as the intricacies of implementing propensity score matching and regression discontinuity, but also into the policy implications of the results.

I continued with the observational data theme by enjoying the discussion led by Panos Kasteridis on the Andrew McCarthy et al paper. I then quickly followed this by popping over to catch Attakrit Leckcivilize’s excellent discussion of the Padraig Dixon et al paper on the effect of obesity on hospital costs. This impressive paper uses Mendelian randomisation, a fascinating approach that applies instrumental variable analysis with individuals’ genetic variants as the instruments.

The meeting continued in the stunning setting of the Yorkshire Museum for the plenary session, which also proved a fitting location to pay tribute to the inspirational Alan Maynard, who sadly passed away in 2018. Unfortunately, I was unable to hear the tributes to Alan Maynard in person, but fellow attendees painted a moving portrait of the event on Twitter, which kept me in touch.

The plenary was chaired by Karen Bloor and included presentations by Kalipso Chalkidou, Brian Ferguson, Becky Henderson and Danny Palnoch. Jane Hall, Steve Birch and Maria Goddard gave personal tributes.

The health economics community was united in gratitude to Professor Alan Maynard, who did so much to advance and disseminate the discipline. It made for a wonderful way to finish day 1!

Day 2 started bright and was full of stimulating sessions to choose from.

I chose to home in on the cost-effectiveness topic in particular. I started with the David Glynn et al paper on using “back of the envelope” calculations to inform funding and research decisions, discussed by Ed Wilson. This paper is an excellent step towards making value of information analysis easy to use.

I then attended Matthew Quaife’s discussion of Matthew Taylor’s paper on the consequences for decision uncertainty of assuming independence between parameters. This is a relevant paper for the cost-effectiveness world, in particular for those tasked with building and appraising cost-effectiveness models.

Next up it was my turn in the hot seat, as I presented the Jose Robles-Zurita et al paper on the economic evaluation of diagnostic tests. This thought-provoking paper presents a method to account for the effect of accuracy on the uptake of the test, in the context of maximising health.

As always, we were spoilt for choice in the afternoon. The paper “Drop dead: is anchoring at ‘dead’ a theoretical requirement in health state valuation” by Chris Sampson et al competed very strongly with “Is it really ‘Grim up North’? The causes and consequences of inequalities on health and wider outcomes” by Anna Wilding et al for the most provocative title. “Predicting the unpredictable? Using discrete choice experiments in economic evaluation to characterise uncertainty and account for heterogeneity”, from Matthew Quaife et al, also gave them a run for their money!

Dinner was in the splendid Merchant Adventurers’ Hall. Built in 1357, it is one of the finest Medieval buildings in the UK. Another stunning setting that provided a beautiful backdrop for a wonderful evening!

Andrew Jones presented the ‘Health Economics’ PhD Poster Prize, sponsored by the journal Health Economics (Wiley). Rose Atkins took the top honours, winning the Wiley prize for best poster, and Ashleigh Kernohan’s poster was highly commended for its brilliant use of technology. Congratulations to both!

Unfortunately, the vagaries of public transport meant I had to go home straight after dinner, but I heard from many trustworthy sources, on the following day, that the party continued well into the early hours. Clearly, health economics is a most energising topic!

For me, day 3 was all about cost-effectiveness decision rules. I started with the paper by Mark Sculpher et al, discussed by Chris Sampson. This remarkable paper sums up the evidence on the marginal productivity of the NHS, discusses how to use it to inform decisions, and proposes an agenda for research. There were many questions and comments from the floor, showing how important and challenging this topic is. As with so many HESG papers, this is clearly one to look out for when it appears in print!

The next paper offered a very different way to solve the problem of resource allocation in health care: Philip Clarke and Paul Frijters propose an interesting system of auctions to set prices. James Lomas discussed the paper well, kick-starting an animated discussion with the audience about practicalities and implications for investment decisions by drug companies. Great food for thought!

Last, but definitely not least, I took in the paper by Bernarda Zamora et al on the relationship between health outcomes and expenditure across geographical areas in England. David Glynn did a great job discussing the paper, and especially in explaining data envelopment analysis. As ever, the audience was highly engaged and put forward many questions and comments. Clearly, the productivity of the NHS is a central question for health economics and will keep us busy for some time to come.

As always, this was a fantastic HESG meeting that was superbly organised, providing an environment where authors, discussants and participants alike were able to excel.

I really felt a sense of collegiality, warmth and energy permeate the event. We are part of such an amazing scientific community. Next stop: the HESG Summer meeting, hosted by the University of East Anglia. I’m already looking forward to it!

Thesis Thursday: Miqdad Asaria

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Miqdad Asaria who graduated with a PhD from the University of York. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
The economics of health inequality in the English National Health Service
Supervisors
Richard Cookson, Tim Doran
Repository link
http://etheses.whiterose.ac.uk/16189

What types of inequality are relevant in the context of the NHS?

For me, the inequalities that really matter are inequalities in health outcomes; in the English context it is particularly the socioeconomic patterning of these inequalities that is of concern. The focus of health policy in England over the last 200 years has been on improving the average health of the population, as well as on providing financial risk protection against catastrophic health expenditure. Whilst great strides have been made in improving average population health through various pioneering interventions, including the establishment of the NHS, health inequality has in fact consistently widened over this period. Recent research suggests that, in terms of quality-adjusted life expectancy, the gap between people living in the most deprived fifth of neighbourhoods in the country and those living in the most affluent fifth is now approximately 11 quality-adjusted life years.

However, these socioeconomic inequalities in health typically accumulate across the life course, and there is a limited amount that health care on its own can do to prevent these gaps from widening, or indeed to close them once they emerge. This is why health systems, including the NHS, typically focus on measuring and tackling the inequalities that they can influence, even though eliminating such inequalities can have at best only a modest impact on reducing health inequality overall. These comprise inequalities in access to and quality of health care, as well as inequality in those health outcomes specifically amenable to health care.

What were the key methods and data that you used to identify levels of health inequality?

I am currently working on a project with the Ministry of Health and Family Welfare in India and it is really making me appreciate the amazingly detailed and comprehensive administrative datasets available to researchers in England. For the work underpinning my thesis I linked 10 years of data looking at every hospital admission and outpatient visit in the country with the quality and outcomes achieved for patients registered at each primary care practice, the number of doctors working at each primary care practice, general population census data, cause-specific mortality data, hospital cost data and deprivation data all at neighbourhood level. I spent a lot of time assembling, cleaning and linking these data sets and then used this data platform to build a range of health inequality indicators – some of which can be seen in an interactive tool I built to present the data to clinical commissioning groups.

As well as measuring inequality retrospectively in order to provide evidence to evaluate past NHS policies, and building tools to enable the NHS to monitor inequality going forward, another key focus of my thesis was to develop methods to model and incorporate health inequality impacts into cost-effectiveness analysis. These methods allow analysts to evaluate proposed health interventions in terms of their impact on the distribution of health rather than just their impact on the mythical average citizen. The distributional cost-effectiveness analysis framework I developed is based on the idea of using social welfare functions to evaluate the estimated health distributions arising from the rollout of different health care interventions and to compute the equity-efficiency trade-offs that would need to be made in order to prefer one intervention over another. A key parameter required to make these equity-efficiency trade-offs is the level of health inequality aversion. This parameter was quite tricky to estimate, as the methods used to elicit it from the general public are prone to various framing effects. The preliminary estimates that I used in my analysis suggested that, at the margin, the general public thought people living in the most deprived fifth of neighbourhoods in the country deserve approximately 7 times the priority in terms of health care spending as those who live in the most affluent fifth of neighbourhoods.
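The social-welfare-function idea can be made concrete with a minimal sketch. This assumes an Atkinson-type social welfare function, one common choice for expressing inequality aversion; the health levels, the two hypothetical interventions, and the aversion values are entirely illustrative, not figures from the thesis:

```python
import numpy as np

def atkinson_ede(health, epsilon):
    """Equally distributed equivalent (EDE) health under an Atkinson
    social welfare function with inequality aversion epsilon."""
    health = np.asarray(health, dtype=float)
    if epsilon == 1.0:
        return np.exp(np.mean(np.log(health)))  # limiting (log) case
    return np.mean(health ** (1 - epsilon)) ** (1 / (1 - epsilon))

# Hypothetical quality-adjusted life expectancy by deprivation quintile
# (most deprived first)
baseline = np.array([63.0, 66.0, 69.0, 71.0, 74.0])

# Two interventions with the same total health gain:
# A spreads the gain evenly; B targets the most deprived
interv_a = baseline + 1.0
interv_b = baseline + np.array([2.0, 1.5, 1.0, 0.5, 0.0])

for eps in (0.0, 1.0, 10.0):
    print("aversion", eps,
          "| EDE A:", round(atkinson_ede(interv_a, eps), 3),
          "| EDE B:", round(atkinson_ede(interv_b, eps), 3))
```

With no inequality aversion (epsilon = 0) the EDE is just mean health and the two interventions tie; as aversion rises, the pro-poor intervention achieves the higher equally distributed equivalent health, illustrating how the elicited aversion parameter drives the equity-efficiency trade-off.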

Does your PhD work enable us to attach a ‘cost’ to inequality, and ‘value’ to policies that reduce it?

As budding economists, we are ever cautious to distinguish association from causation. My thesis starts by estimating the cost to the NHS associated with inequality: that is, the additional cost spent on treating the excess morbidity in those living in relatively deprived neighbourhoods. I estimated the difference between the actual NHS hospital budget and what the cost would have been if everybody in the country had the morbidity profile of those who live in the most affluent fifth of neighbourhoods. For inpatient hospital costs this difference came to £4.8 billion per year; widening this to all NHS costs, it came to £12.5 billion per year, approximately a fifth of the total NHS budget. I looked cross-sectionally and also modelled lifetime estimated health care use, and found that even over their entire lifetimes people living in more deprived neighbourhoods consumed more health care, despite their substantially shorter life expectancies.

This cost is of course very different to the value of policies to reduce inequality. The difference arises for two main reasons. First, my estimates were associations rather than causal effects, so we are unable to conclude that reducing socioeconomic inequality would actually result in everybody in the country gaining the morbidity profile of those living in the most affluent fifth of neighbourhoods. Second, and perhaps more significantly, my estimates do not value any of the health benefits that would result from reducing health inequality; they just count the costs that could be saved by the NHS due to the excess morbidity avoided. The value of these forgone health benefits, in terms of quality-adjusted life years gained, would have to be converted into monetary terms using an estimate of willingness to pay for health and added to these cost savings (which themselves would need to be converted to consumption values) to get a total value of reducing inequality from a health perspective. There would also, of course, be a range of non-health impacts of reducing inequality that would need to be accounted for if this exercise were to be conducted comprehensively.

In simple terms, if the causal link between socioeconomic inequality and health could be determined then the value to the health sector of policies that could substantially reduce this inequality would likely be far greater than the costs quoted here.

How did you find the PhD-by-publication route? Would you recommend it?

I came to academia relatively late, having previously worked in both government and the private sector for a number of years. The PhD by publication route suited me well, as it allowed me to get stuck into a number of projects, work with a wide range of academics and build an academic career whilst simultaneously curating a set of papers to submit as a thesis. However, it is certainly not the fastest way to achieve PhD status; my thesis took 6 years to compile. The publication route is also still relatively uncommon in England, and I found both my supervisors and examiners somewhat perplexed about how to approach it. Additionally, my wife, who did her PhD by the traditional route, assures me that it is not a ‘proper’ PhD!

For those fresh out of an MSc programme, the traditional route probably works well, giving you the opportunity to develop research skills and focus on one area in depth with lots of guidance from a dedicated supervisor. However, for people like me, who probably would never have got around to doing a traditional PhD, it is nice that there is an alternative way to acquire the ‘Dr’ title, which I am finding confers many unanticipated benefits.

What advice would you give to a researcher looking to study health inequality?

The most important thing that I have learnt from my research is that health inequality, particularly in England, has very little to do with health care and everything to do with socioeconomic inequality. I would encourage researchers interested in this area to look at broader interventions tackling the social determinants of health. There is lots of exciting work going on at the moment around basic income and social housing as well as around the intersection between the environment and health which I would love to get stuck into given the chance.