Brendan Collins’s journal round-up for 3rd December 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A framework for conducting economic evaluations alongside natural experiments. Social Science & Medicine Published 27th November 2018

I feel like Social Science & Medicine has been publishing some excellent health economics papers lately, and this is another example. Natural experiment methods, like instrumental variables, difference-in-differences, and propensity score matching, are increasingly used to evaluate public health policy interventions. This paper provides a review and a framework for how to incorporate economic evaluation alongside such studies. And even better, it has a checklist! It goes into some detail in describing each item in the checklist, which I think will be really useful. A couple of the items seemed a bit peculiar to me, like talking about “Potential behavioural responses (e.g. ‘nudge effects’)” – I would prefer a more general term like causal mechanism. And it has multi-criteria decision analysis (MCDA) as a potential method. I love MCDA, but I think that using MCDA would surely require a whole new set of items on the checklist, for instance, to record how MCDA weights have been decided. (For me, saying that CEA is insufficient so we should use MCDA instead is like saying I find it hard to put IKEA furniture together so I will make my own furniture from scratch.) My hope with checklists is that they actually improve practice, rather than just being used in a post hoc way to include a few caveats and excuses in papers.
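For readers less familiar with these designs, the core arithmetic of a difference-in-differences estimate can be sketched as follows; the function and numbers are invented for illustration and are not taken from the paper.

```python
# Difference-in-differences (DiD) in its simplest two-group, two-period form:
# the policy effect is the before/after change in the treated group minus the
# before/after change in the control group. All numbers below are hypothetical.

def did_estimate(treated_before, treated_after, control_before, control_after):
    """Return the difference-in-differences estimate of a policy effect."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical mean outcomes (e.g. smoking prevalence, %) around a policy change:
effect = did_estimate(treated_before=30.0, treated_after=24.0,
                      control_before=31.0, control_after=29.0)
print(effect)  # -4.0: a 4-point fall attributed to the policy
```

In practice this comparison is run as a regression with controls, but the identifying logic is exactly this subtraction.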

Autonomy, accountability, and ambiguity in arm’s-length meta-governance: the case of NHS England. Public Management Review Published 18th November 2018

It has been said that NICE in England serves a purpose of insulating politicians from the fallout of difficult investment decisions, for example recommending that people with mild Alzheimer’s disease do not get certain drugs. When the coalition government gained power in the UK in 2010, there was initially talk that NICE’s role in approving drugs might be reduced. But the government may have realised that NICE serves a useful role as a focus of public and media anger when new drugs are rejected on cost-effectiveness grounds. And so it may be with NHS England (NHSE), which, according to this paper, as an arm’s-length body (ALB), has powers that exceed what was initially planned.

This paper uses meta-governance theory, examining different types of control mechanisms and the relationship between the ALB and the sponsor (Department for Health and Social Care), and how they impact on autonomy and accountability. It suggests that NHSE is operating at a macro, policy-making level, rather than an operational, implementation level. Policy changes from NHSE are presented by ministers as coming ‘from’ the NHS but, in reality, the NHS is much bigger than NHSE. NHSE was created to take political interference out of decision-making and let civil servants get on with things. But before reading this paper, it had not occurred to me how much power NHSE had accrued, and how this may create difficulties in terms of accountability for reasonableness. For instance, NHSE has a very complicated structure and does not publish all of its meeting minutes, so it is difficult to understand how investment decisions are made. It may be that the changes that have happened in the NHS since 2012 were intended to involve healthcare professionals more in local investment decisions. But actually, a lot of power in terms of shaping the balance of hierarchies, markets and networks has ended up in NHSE, sitting in a hinterland between politicians in Whitehall and local NHS organisations. With a new NHS Plan reportedly delayed because of Brexit chaos, it will be interesting to see what this plan says about accountability.

How health policy shapes healthcare sector productivity? Evidence from Italy and UK. Health Policy [PubMed] Published 2nd November 2018

This paper starts with an interesting premise: the English and Italian state healthcare systems (the NHS and the SSN) are quite similar (which I didn’t know before). But the two systems had different priorities over the period from 2004 to 2011: England focused on increasing activity, reducing waiting times and improving quality, while Italy focused on reducing hospital beds as well as reducing variation and unnecessary treatments. This paper finds that productivity increased more quickly in the NHS than in the SSN over this period. The paper is ambitious in its scope and in the data the authors have used. The model uses input-specific price deflators, so it accounts for the fact that healthcare inputs increase in price faster than those in other industries, but treats this as exogenous to the production function. This price inflation may be because around 75% of costs are staff costs, and wage inflation in other industries produces wage inflation in the NHS. It may be interesting in future to analyse to what extent the rate of inflation for healthcare is inevitable and whether it is linked in some way to the inputs and outputs. We often hear that productivity in the NHS has not increased as much as in other industries, so it is perhaps reassuring to read a paper that says the NHS has performed better than a similar health system elsewhere.

Credits

Thesis Thursday: Cheryl Jones

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Cheryl Jones who has a PhD from the University of Manchester. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
The economics of presenteeism in the context of rheumatoid arthritis, ankylosing spondylitis and psoriatic arthritis
Supervisors
Katherine Payne, Suzanne Verstappen, Brenda Gannon
Repository link
https://www.research.manchester.ac.uk/portal/en/theses/the-economics-of-presenteeism-in-the-context-of-rheumatoid-arthritis-ankylosing-spondylitis-and-psoriatic-arthritis%288215e79a-925e-4664-9a3c-3fd42d643528%29.html

What attracted you to studying health-related presenteeism?

I was attracted to studying presenteeism because it gave me a chance to address both normative and positive issues. Presenteeism, a concept related to productivity, is a controversial topic in the economic evaluation of healthcare technologies and is currently excluded from health economic evaluations, following the recommendation made in the NICE reference case. The reasons why productivity is excluded from economic evaluations are important and valid; however, there are some circumstances where excluding productivity is difficult to defend. Presenteeism offered an opportunity for me to explore and question the social value judgements that underpin economic evaluation methods with respect to productivity. In terms of positive issues related to presenteeism, research into the development of methods that can be used to measure and value presenteeism was (and still is) limited. This provided an opportunity to think creatively about the types of methods we could use, both quantitative and qualitative, to address and further methods for quantifying presenteeism.

Are existing tools adequate for measuring and valuing presenteeism in inflammatory arthritic conditions?

That is the question! Research into methods that can be used to quantify presenteeism is still in its infancy. Presenteeism is difficult to measure accurately because there is a lack of objective measures that can be used, such as the number of cars assembled per day. As a consequence, many methods rely on self-report surveys, which tend to suffer from biases, such as reporting or recall bias. Methods that have been used to value presenteeism have largely focused on valuing presenteeism as a cost using the human capital approach (HCA: the volume of presenteeism multiplied by a monetary factor). The monetary factor typically used to convert the volume of presenteeism into a cost is the wage. Valuing productivity using wages risks taking account of discriminatory factors that are associated with wages, such as age. There are also economic arguments that question whether the wage truly reflects the value of productivity. My PhD focused on developing a method that values presenteeism as a non-monetary benefit, thereby avoiding the need to value it as a cost using wages. Overall, methods to measure and value presenteeism still have some way to go before a ‘gold standard’ can be established; however, there are many experts from many disciplines who are working to improve them.
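As a toy illustration of the human capital approach described here, the cost calculation is simply the volume of lost productive time multiplied by the wage; all figures below are hypothetical.

```python
# Human capital approach (HCA) sketch: presenteeism valued as a cost by
# multiplying the volume of productive time lost while at work by the wage.
# The hours, impairment fraction and wage are invented for illustration.

def presenteeism_cost_hca(hours_worked, impairment_fraction, hourly_wage):
    """Cost of presenteeism = productive hours lost while at work x wage."""
    hours_lost = hours_worked * impairment_fraction
    return hours_lost * hourly_wage

# An employee works 35 hours/week with 20% self-reported impairment at £15/hour:
cost = presenteeism_cost_hca(hours_worked=35, impairment_fraction=0.20,
                             hourly_wage=15.0)
print(cost)  # 105.0 (pounds per week)
```

The criticism in the interview is aimed at that last multiplier: the wage embeds factors (such as age-related pay differences) that arguably should not determine the value of lost productivity.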

Why was it important to conduct qualitative interviews as part of your research?

The quantitative component of my PhD was to develop an algorithm, using mapping methods, that links presenteeism with health status and capability measures. A study by Connolly et al. recommends conducting qualitative interviews to provide some evidence of face/content validity, to establish whether a quantitative link between two measures (or concepts) is feasible and potentially valid. The qualitative study I conducted was designed to understand the extent to which the EQ-5D-5L, SF6D and ICECAP-A were able to capture those aspects of rheumatic conditions that negatively impact presenteeism. The results suggested that all three measures were able to capture those important aspects of rheumatic conditions that affect presenteeism; however, they indicated that the SF6D would most likely be the most appropriate measure. The results from the quantitative mapping study identified the SF6D as the most suitable outcome measure for predicting presenteeism in working populations with rheumatic conditions. The advantage of the qualitative results was that they provided some evidence that explained why the SF6D was the more suitable measure, rather than relying on speculation.

Is it feasible to predict presenteeism using outcome measures within economic evaluation?

I developed an algorithm that links presenteeism, measured using the Work Productivity and Activity Impairment (WPAI) questionnaire, with health and capability. Health status was measured using the EQ-5D-5L and SF6D, and capability was measured using the ICECAP-A. The SF6D was identified as the most suitable measure for predicting presenteeism in a population of employees with rheumatoid arthritis or ankylosing spondylitis. The results indicate that it is potentially feasible to predict presenteeism using generic outcome measures; however, they have yet to be externally validated. The qualitative interviews provided evidence as to why the SF6D was the better predictor of presenteeism, and the results gave rise to questions about the suitability of outcome measures for a given population.
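To give a flavour of what a mapping algorithm does, here is a generic simple-regression sketch with invented data: it is not the algorithm from the thesis, just the basic idea of predicting one measure's score from another's.

```python
# Illustrative "mapping" sketch: predict a presenteeism score from a generic
# outcome measure (e.g. an SF6D utility) with simple linear regression.
# The data pairs below are invented and perfectly linear for clarity.

def fit_simple_ols(x, y):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return my - slope * mx, slope

# Hypothetical pairs of (SF6D utility, WPAI presenteeism % impairment):
sf6d = [0.50, 0.60, 0.70, 0.80, 0.90]
wpai = [60.0, 50.0, 40.0, 30.0, 20.0]
intercept, slope = fit_simple_ols(sf6d, wpai)

# Predicted impairment for a patient with utility 0.65:
predicted = intercept + slope * 0.65
print(round(predicted, 1))  # 45.0
```

Real mapping studies compare several model specifications and measures (as the thesis did across the EQ-5D-5L, SF6D and ICECAP-A) and validate predictions out of sample.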

What would be your key recommendation to a researcher hoping to capture the impact of an intervention on presenteeism?

Due to the lack of a ‘gold standard’ method for capturing the impact of presenteeism, I would recommend that researchers report and justify the following:

  1. Provide a rationale that explains why presenteeism is an important factor that needs to be considered in the analysis.
  2. Explain how and why presenteeism will be captured and included in the analysis; as a cost, monetary benefit, or non-monetary benefit.
  3. Justify the methods used to measure and value presenteeism. It is important that the research clearly reports why specific tools, such as presenteeism surveys, have been selected for use.

Because there is no ‘gold standard’ method for measuring and valuing presenteeism, and no guidelines exist to inform the reporting of methods used to quantify it, it is important that researchers report and justify the methods selected for their analysis.

Sam Watson’s journal round-up for 25th June 2018


The efficiency of slacking off: evidence from the emergency department. Econometrica [RePEc] Published May 2018

Scheduling workers is a complex task, especially in large organisations such as hospitals. Not only should one consider when different shifts start throughout the day, but also how work is divided up over the course of each shift. Physicians, like anyone else, value their leisure time and want to go home at the end of a shift. Given how they value this leisure time, as the end of a shift approaches physicians may behave differently. This paper explores how doctors in an emergency department behave at ‘end of shift’, in particular looking at whether doctors ‘slack off’ by accepting fewer patients or tasks and also whether they rush to finish those tasks they have. Both cases can introduce inefficiency by either under-using their labour time or using resources too intensively to complete something. Immediately, from the plots of the raw data, it is possible to see a drop in patients ‘accepted’ both close to end of shift and close to the next shift beginning (if there is shift overlap). Most interestingly, after controlling for patient characteristics, time of day, and day of week, there is a decrease in the length of stay of patients accepted closer to the end of shift, which is ‘dose-dependent’ on time to end of shift. There are also marked increases in patient costs, orders, and inpatient admissions in the final hour of the shift. Assuming that only the number of patients assigned and not the type of patient changes over the course of a shift (a somewhat strong assumption despite the additional tests), then this would suggest that doctors are rushing care and potentially providing sub-optimal or inefficient care closer to the end of their shift. The paper goes on to explore optimal scheduling on the basis of the results, among other things, but ultimately shows an interesting, if not unexpected, pattern of physician behaviour. The results relate mainly to efficiency, but it’d be interesting to see how they relate to quality in the form of preventable errors.
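A crude version of the descriptive part of this analysis might look like the following; the records are invented, and none of the paper's controls for patient characteristics, time of day or day of week are applied.

```python
# Group patients by hours remaining in the accepting physician's shift and
# compare mean length of stay across groups. All records are hypothetical:
# (hours_left_in_shift, length_of_stay_in_hours).

from collections import defaultdict

def mean_los_by_hours_left(records):
    """Return {hours_left: mean length of stay} for a list of records."""
    groups = defaultdict(list)
    for hours_left, los in records:
        groups[hours_left].append(los)
    return {h: sum(v) / len(v) for h, v in sorted(groups.items())}

records = [(4, 6.0), (4, 5.5), (3, 5.2), (3, 5.0),
           (2, 4.4), (2, 4.0), (1, 3.1), (1, 2.9)]
print(mean_los_by_hours_left(records))
# Monotonically shorter stays nearer the end of shift would mirror the paper's
# 'dose-dependent' pattern, consistent with rushed care.
```

The paper's contribution is showing this pattern survives the controls and pairing it with the drop in patients accepted near end of shift.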

Semiparametric estimation of longitudinal medical cost trajectory. Journal of the American Statistical Association Published 19th June 2018

Modern computational and statistical methods have opened up to estimation a range of statistical models that were hitherto intractable. This includes complex latent variable structures, non-linear models, and non- and semi-parametric models. Recently we covered the use of splines for semi-parametric modelling in our Method of the Month series. Not that complexity is everything, of course, but given this rich toolbox to more faithfully replicate the data generating process, one does wonder why the humble linear model estimated with OLS remains so common. Nevertheless, I digress. This paper addresses the problem of estimating the medical cost trajectory for a given disease from diagnosis to death. There are two key issues: (i) the trajectory is likely to be non-linear, with costs probably increasing near death and possibly also higher immediately after diagnosis (a U-shape), and (ii) we don’t observe the costs of those who die, i.e. there is right-censoring. Such a set-up is also applicable in other cases, for example looking at health outcomes in panel data with informative dropout. The authors model medical costs for each month post-diagnosis and time of censoring (death) by factorising their joint distribution into a marginal model for censoring and a conditional model for medical costs given the censoring time. The likelihood then has contributions from the observed medical costs and their times, and the times of the censored outcomes. We then just need to specify the individual models. For medical costs, they use a multivariate normal with a mean function consisting of a bivariate spline of time and time of censoring. The time of censoring is modelled non-parametrically. This setup of the missing data problem is sometimes referred to as a pattern mixture model, in that the outcome is modelled as a mixture density over different populations dying at different times.
The authors note another possibility for informative missing data, which was considered not to be estimable for complex non-linear structures, was the shared parameter model (to soon appear in another Method of the Month) that assumes outcomes and dropout are independent conditional on an underlying latent variable. This approach can be more flexible, especially in cases with varying treatment effects. One wonders if the mixed model representation of penalised splines wouldn’t fit nicely in a shared parameter framework and provide at least as good inferences. An idea for a future paper perhaps… Nevertheless, the authors illustrate their method by replicating the well-documented U-shaped costs from the time of diagnosis in patients with stage IV breast cancer.
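The U-shaped trajectory the authors recover can be mimicked with a toy mean-cost function; the functional form and parameters below are invented for illustration and bear no relation to the fitted splines in the paper.

```python
from math import exp

# Toy mean monthly cost: one component decaying from diagnosis plus one
# growing as death approaches, producing the characteristic U-shape.
# All parameter values are hypothetical.

def monthly_cost(month, months_to_death, base=1000.0,
                 diagnosis_spike=4000.0, terminal_spike=6000.0):
    """Hypothetical mean cost for a given month since diagnosis."""
    return (base
            + diagnosis_spike * exp(-month / 3.0)            # early post-diagnosis costs
            + terminal_spike * exp(-months_to_death / 2.0))  # end-of-life costs

# For a patient surviving 24 months, costs are high early, low in the middle,
# and high again near death:
trajectory = [monthly_cost(m, 24 - m) for m in (1, 12, 23)]
print([round(c) for c in trajectory])
```

The estimation problem in the paper is the reverse of this sketch: recovering such a surface from data where, for patients still alive, the time of death that indexes the curve is censored.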

Do environmental factors drive obesity? Evidence from international graduate students. Health Economics [PubMed] Published 21st June 2018

‘The environment’ can encompass any number of things including social interactions and networks, politics, green space, and pollution. Sometimes referred to as ‘neighbourhood effects’, the impact of the shared environment above and beyond the effect of individual risk factors is of great interest to researchers and policymakers alike. But there are a number of substantive issues that hinder estimation of neighbourhood effects. For example, social stratification into neighbourhoods likely means people living together are similar so it is difficult to compare like with like across neighbourhoods; trying to model neighbourhood choice will also, therefore, remove most of the variation in the data. Similarly, this lack of common support, i.e. overlap, between people from different neighbourhoods means estimated effects are not generalisable across the population. One way of getting around these problems is simply to randomise people to neighbourhoods. As odd as that sounds, that is what occurred in the Moving to Opportunity experiments and others. This paper takes a similar approach in trying to look at neighbourhood effects on the risk of obesity by looking at the effects of international students moving to different locales with different local obesity rates. The key identifying assumption is that the choice to move to different places is conditionally independent of the local obesity rate. This doesn’t seem a strong assumption – I’ve never heard a prospective student ask about the weight of our student body. Some analysis supports this claim. The raw data and some further modelling show a pretty strong and robust relationship between local obesity rates and weight gain of the international students. Given the complexity of the causes and correlates of obesity (see the crazy diagram in this post) it is hard to discern why certain environments contribute to obesity. 
The paper presents some weak evidence of differences in unhealthy behaviours between high and low obesity places – but this doesn’t quite get at the environmental link, such as whether these behaviours are shared through social networks or perhaps the structure and layout of the urban area, for example. Nevertheless, here is some strong evidence that living in an area where there are obese people means you’re more likely to become obese yourself.
