Rita Faria’s journal round-up for 10th December 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Calculating the expected value of sample information using efficient nested Monte Carlo: a tutorial. Value in Health [PubMed] Published 17th July 2018

The expected value of sample information (EVSI) represents the added benefit from collecting new information on specific parameters in future studies. It can be compared with the cost of conducting those future studies to calculate the expected net benefit of sampling. The objective is to help inform which study design is best, given the information it can gather and its costs. The theory and methods to calculate EVSI have been around for some time, but we rarely see EVSI in applied economic evaluations.

In this paper, Anna Heath and Gianluca Baio present a tutorial about how to implement a method they had previously published on, which is more computationally efficient than the standard nested Monte Carlo simulations.
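For context, here is a minimal sketch of the standard nested Monte Carlo approach that the authors improve upon, applied to a toy adopt-or-not decision. The willingness-to-pay, cost, prior, and the conjugate normal-normal update are all assumptions made to keep the example self-contained; none of them comes from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy decision problem: adopt a treatment whose incremental QALY gain
# theta is uncertain. All numbers below are illustrative.
WTP = 20_000            # willingness to pay per QALY
COST_TX = 5_000         # incremental cost of the treatment
MU0, SD0 = 0.3, 0.2     # prior on theta (incremental QALYs)
OBS_SD = 0.5            # per-patient sampling sd in the proposed study

def inb(theta):
    """Incremental net benefit of adopting the treatment."""
    return WTP * theta - COST_TX

def evsi_nested_mc(n_study, n_outer=2000, n_inner=2000):
    """Standard (computationally expensive) nested Monte Carlo EVSI."""
    # Expected net benefit of the best decision under current information.
    enb_current = max(0.0, inb(rng.normal(MU0, SD0, n_inner)).mean())

    preposterior = []
    for _ in range(n_outer):
        theta_true = rng.normal(MU0, SD0)
        # Simulated study result: the sample mean of n_study observations.
        xbar = rng.normal(theta_true, OBS_SD / np.sqrt(n_study))
        # Conjugate normal-normal posterior update given xbar.
        prec0, prec_d = 1 / SD0**2, n_study / OBS_SD**2
        post_mu = (prec0 * MU0 + prec_d * xbar) / (prec0 + prec_d)
        post_sd = (prec0 + prec_d) ** -0.5
        # Inner loop: value of the best decision given the simulated data.
        theta_post = rng.normal(post_mu, post_sd, n_inner)
        preposterior.append(max(0.0, inb(theta_post).mean()))
    return np.mean(preposterior) - enb_current

print(round(evsi_nested_mc(n_study=100), 2))
```

The nested loop (n_outer × n_inner model evaluations, each of which could be an expensive cost-effectiveness model in a real application) is exactly the computational burden that motivates more efficient EVSI estimators like the one in this tutorial.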

The authors start by explaining the method in theory, then illustrate it with a simple worked example. I’ll admit that I got a bit lost with the theory, but I found that the example made it much clearer. They demonstrate the method’s performance using a previously published cost-effectiveness model. Additionally, they have very helpfully published a suite of functions to apply this method in practice.

I really enjoyed reading this paper, as it takes the reader step-by-step through the method. However, I wasn’t sure about when this method is applicable, given that the authors note that it requires a large number of probabilistic simulations to perform well, and it is only appropriate when EVPPI is high. The issue is, how large is large and how high is high? Hopefully, these and other practical questions are on the list for this brilliant research team.

As an applied researcher, I find tutorial papers like this one incredibly useful for learning new methods and implementing them in practice. Thanks to work like this, we’re getting close to making value of information analysis a standard element of cost-effectiveness studies.

Future costs in cost-effectiveness analyses: past, present, future. PharmacoEconomics [PubMed] Published 26th November 2018

Linda de Vries, Pieter van Baal and Werner Brouwer help illuminate the debate on future costs with this fascinating paper. Future costs are the costs of resources used by patients during the years of life added by the technology under evaluation. Future costs can be classified as related or unrelated, depending on whether the resources are used for the target disease. They can also be classified as medical or non-medical, depending on whether the costs fall on the healthcare budget.

The authors very skilfully summarise the theoretical literature on the inclusion of future costs. They conclude that future related and unrelated medical costs should be included and present compelling arguments to do so.

They also discuss empirical research, such as studies that estimate future unrelated costs. The references are a useful starting point for other researchers. For example, I noted that there is a tool to include future unrelated medical costs in the Netherlands and some studies on their estimation in the UK (see, for example, here).

There is a thought-provoking section on ethical concerns. If unrelated costs are included, technologies that increase the life expectancy of people who need a lot of resources will look less cost-effective. The authors suggest that these issues should not be concealed in the analysis, but instead dealt with in the decision-making process.

This is an enjoyable paper that provides an overview of the literature on future costs. I highly recommend it to get up to speed with the arguments and the practical implications. There is clearly a case for including future costs; the question now is whether cost-effectiveness practice follows suit.

Cost-utility analysis using EQ-5D-5L data: does how the utilities are derived matter? Value in Health Published 4th July 2018

We’ve recently become spoilt for choice when it comes to the EQ-5D. To obtain utility values, just in the UK, there are a few options: the 3L tariff, the 5L tariff, and crosswalk tariffs by Ben van Hout and colleagues and Mónica Hernandez and colleagues [PDF]. Which one to choose? And does it make any difference?

Fan Yang and colleagues have done a good job in getting us closer to the answer. They estimated utilities obtained from EQ-5D-5L data using the 5L value set and crosswalk tariffs to EQ-5D-3L and tested the values in cost-effectiveness models of hemodialysis compared to peritoneal dialysis.

Reassuringly, hemodialysis always had greater utilities than peritoneal dialysis. However, the magnitude of the difference varied with the approach. Therefore, the choice between the EQ-5D-5L value set and the crosswalk tariff to the EQ-5D-3L can influence the cost-effectiveness results. These results are in line with earlier work by Mónica Hernandez and colleagues, who compared the EQ-5D-3L with the EQ-5D-5L.

The message is clear: both the type of EQ-5D questionnaire and the EQ-5D tariff make a difference to the cost-effectiveness results. This can have huge policy implications, as decisions by HTA agencies, such as NICE, depend on these results.
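A stylised sketch shows how the tariff choice alone can move an ICER across a decision threshold. None of these numbers comes from the paper: the utilities, time horizon, and incremental cost are all invented for illustration.

```python
# Hypothetical utilities for the same two health states under two tariffs;
# every number here is invented, not taken from the paper.
utilities = {
    "5L value set": {"hemodialysis": 0.62, "peritoneal dialysis": 0.55},
    "crosswalk":    {"hemodialysis": 0.58, "peritoneal dialysis": 0.48},
}

YEARS = 5            # assumed time horizon
EXTRA_COST = 20_000  # assumed incremental cost of hemodialysis

for tariff, u in utilities.items():
    qaly_gain = (u["hemodialysis"] - u["peritoneal dialysis"]) * YEARS
    print(f"{tariff}: £{EXTRA_COST / qaly_gain:,.0f} per QALY")
```

Here the same intervention comes out at roughly £57,000 per QALY under one tariff and £40,000 under the other, so any cost-effectiveness threshold sitting between the two flips the adoption decision purely because of the tariff.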

Which EQ-5D tariff to use in a new primary research study remains an open question. In the meantime, NICE recommends the use of the EQ-5D-3L or, if the EQ-5D-5L was collected, Ben van Hout and colleagues’ mapping function to the EQ-5D-3L. Hopefully, a definite answer won’t be long in coming.

Brendan Collins’s journal round-up for 3rd December 2018

A framework for conducting economic evaluations alongside natural experiments. Social Science & Medicine Published 27th November 2018

I feel like Social Science &amp; Medicine has been publishing some excellent health economics papers lately, and this is another example. Natural experiment methods, like instrumental variables, difference-in-differences, and propensity matching, are increasingly used to evaluate public health policy interventions. This paper provides a review and a framework for how to incorporate economic evaluation alongside these methods. And even better, it has a checklist! It goes into some detail in describing each item in the checklist, which I think will be really useful. A couple of the items seemed a bit peculiar to me, like talking about “potential behavioural responses (e.g. ‘nudge effects’)” – I would prefer a more general term like causal mechanism. And it has multi-criteria decision analysis (MCDA) as a potential method. I love MCDA, but using it would surely require a whole new set of items on the checklist, for instance, to record how the MCDA weights were decided. (For me, saying that CEA is insufficient so we should use MCDA instead is like saying I find it hard to put IKEA furniture together so I will make my own furniture from scratch.) My hope with checklists is that they actually improve practice, rather than just being used in a post hoc way to include a few caveats and excuses in papers.
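To give a flavour of the simplest estimator in the natural experiment toolkit, here is a minimal simulated difference-in-differences sketch. Every number is invented: the treated and control areas differ at baseline and share a common trend, and the double difference recovers the policy effect.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated panel: treated and control areas observed before and after a
# policy. The outcome could be, say, per-person healthcare cost; every
# number here is invented for illustration.
n = 500
true_effect = -120.0                    # the policy cuts costs by 120
base_treated, base_control = 900.0, 800.0
trend = 50.0                            # secular trend common to both groups

pre_t = base_treated + rng.normal(0, 60, n)
post_t = base_treated + trend + true_effect + rng.normal(0, 60, n)
pre_c = base_control + rng.normal(0, 60, n)
post_c = base_control + trend + rng.normal(0, 60, n)

# Difference-in-differences: the change in the treated group minus the
# change in the control group nets out both the baseline gap and the
# common trend, leaving the policy effect.
did = (post_t.mean() - pre_t.mean()) - (post_c.mean() - pre_c.mean())
print(round(did, 1))  # close to the true effect of -120
```

The identifying assumption, of course, is that the two groups really would have followed parallel trends absent the policy, which is exactly the kind of design question the paper's checklist is meant to surface.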

Autonomy, accountability, and ambiguity in arm’s-length meta-governance: the case of NHS England. Public Management Review Published 18th November 2018

It has been said that NICE in England serves the purpose of insulating politicians from the fallout of difficult investment decisions, for example, recommending that people with mild Alzheimer’s disease do not get certain drugs. When the coalition government gained power in the UK in 2010, there was initially talk that NICE’s role in approving drugs might be reduced. But the government may have realised that NICE serves a useful role as a focus of public and media anger when new drugs are rejected on cost-effectiveness grounds. And so it may be with NHS England (NHSE), which, according to this paper, as an arm’s-length body (ALB), has powers that exceed what was initially planned.

This paper uses meta-governance theory, examining different types of control mechanisms, the relationship between the ALB and its sponsor (the Department of Health and Social Care), and how these impact on autonomy and accountability. It suggests that NHSE is operating at a macro, policy-making level, rather than at an operational, implementation level. Policy changes from NHSE are presented by ministers as coming ‘from’ the NHS but, in reality, the NHS is much bigger than NHSE. NHSE was created to take political interference out of decision-making and let civil servants get on with things. But before reading this paper, it had not occurred to me how much power NHSE had accrued, and how this may create difficulties in terms of accountability for reasonableness. For instance, NHSE has a very complicated structure and does not publish all of its meeting minutes, so it is difficult to understand how investment decisions are made. It may be that the changes that have happened in the NHS since 2012 were intended to involve healthcare professionals more in local investment decisions. But actually, a lot of power in terms of shaping the balance of hierarchies, markets, and networks has ended up in NHSE, sitting in a hinterland between politicians in Whitehall and local NHS organisations. With a new NHS Plan reportedly delayed because of Brexit chaos, it will be interesting to see what it says about accountability.

How health policy shapes healthcare sector productivity? Evidence from Italy and UK. Health Policy [PubMed] Published 2nd November 2018

This paper starts with an interesting premise: the English and Italian state healthcare systems (the NHS and the SSN) are quite similar (which I didn’t know before). But the two systems had different priorities over the period 2004–2011: England focused on increasing activity, reducing waiting times, and improving quality, while Italy focused on reducing hospital beds as well as reducing variation and unnecessary treatments. The paper finds that productivity increased more quickly in the NHS than in the SSN over this period. It is ambitious in its scope and in the data the authors have used. The model uses input-specific price deflators, so it incorporates the fact that healthcare inputs increase in price faster than those of other industries, but treats this as exogenous to the production function. This price inflation may arise because around 75% of costs are staff costs, and wage inflation in other industries produces wage inflation in the NHS. It may be interesting in future to analyse to what extent the rate of inflation for healthcare is inevitable, and whether it is linked in some way to the inputs and outputs. We often hear that productivity in the NHS has not increased as much as in other industries, so it is perhaps reassuring to read a paper that says the NHS has performed better than a similar health system elsewhere.

Sam Watson’s journal round-up for 26th November 2018

Alcohol and self-control: a field experiment in India. American Economic Review Forthcoming

Addiction is complex. For many people it is characterised by a need or compulsion to take something, often to prevent withdrawal, and often in conflict with a desire not to take it. This conflicts with Gary Becker’s much-maligned rational theory of addiction, which views addiction as a choice to maximise utility in the long term. Under Becker’s model, one could use market-based mechanisms to end repeated, long-term drug or alcohol use: by making the cost of continuing to use higher, people would choose to stop. This has led to the development of interventions like conditional payment or cost mechanisms: a user would receive a payment on condition of sobriety. Previous studies, however, have found little evidence that people would be willing to pay for such sobriety contracts. This article reports a randomised trial among rickshaw drivers in Chennai, India, a group with a high prevalence of heavy alcohol use and dependency. The three trial arms consisted of a control arm, who received an unconditional daily payment; a treatment arm, who received a small payment plus extra if they passed a breathalyser test; and a third arm, who could choose between the two payment mechanisms. Two findings are of much interest. First, the incentive payments significantly increased daytime sobriety. Second, over half the participants preferred the conditional sobriety payments over the unconditional payments even when the former were weakly dominated, and a third still preferred them when the unconditional payments were higher than the maximum possible conditional payment. This conflicts with a market-based conception of addiction and its treatment. Indeed, the nature of addiction means it can override all intrinsic motivation to stop, or, frankly, to do anything else. So it makes sense that individuals are willing to pay for extrinsic motivation, which in this case did make a difference.

Heterogeneity in long term health outcomes of migrants within Italy. Journal of Health Economics [PubMed] [RePEc] Published 2nd November 2018

We’ve discussed neighbourhood effects a number of times on this blog (here and here, for example). In the absence of randomised allocation to different neighbourhoods or areas, it is very difficult to discern why people living in, or who have moved to, an area might be better or worse off than elsewhere. This article is another neighbourhood effects analysis, this time framed through the lens of migration. It looks at those who migrated within Italy in the 1970s, during a period of large northward population movements. The authors, in essence, identify the average health and mental health of people who moved to different regions, conditional on the duration spent in their region of origin and a range of other factors. The analysis is conceptually similar to that of two papers we discussed at length on internal migration in the US and labour market outcomes, in that it accounts for the duration of ‘exposure’ to poorer areas and differences between destinations. In the case of the labour market outcomes papers, the analysis couldn’t really differentiate between a causal effect of a neighbourhood increasing human capital, differences in labour market conditions, and unobserved heterogeneity between migrating people and families. This article on Italian migration looks instead at health outcomes, such as the SF-12, which narrows the explanations, since one cannot ‘earn’ more health by moving elsewhere. Nevertheless, the labour market can still strongly affect health.

The authors carefully discuss the difficulties in identifying causal effects here. A number of model extensions are estimated to try to deal with some of the issues discussed, including a type of propensity score weighting approach, although I would emphasise that this categorically does not deal with unobserved heterogeneity. A finite mixture model is also estimated. It is generally a well-thought-through analysis. However, there is a reliance on statistical significance here. I know I bang on about statistical significance a lot, but it is widely used inappropriately. A rule of thumb I’ve adopted for reviewing papers for journals is that if the conclusions would change when you changed the statistical significance threshold, then there’s probably an issue. This article would fail that test. The authors use a threshold of p&lt;0.10, which seems inappropriate for an analysis with a sample size in the tens of thousands, and they build a concluding narrative around what is and isn’t statistically significant. This is not to detract from the analysis, merely its interpretation. In future, this could be helped by banning asterisks in tables, as the AER has done, or, better yet, by developing submission guidelines around their use.
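The threshold point is easy to demonstrate numerically: with a sample size in the tens of thousands, a practically trivial difference can clear p &lt; 0.10 while failing p &lt; 0.05, so the narrative hinges entirely on which threshold you pick. The effect size and sample sizes below are invented for illustration.

```python
import math

def p_two_sided(diff, sd, n):
    """Two-sided p-value for a difference in means between two groups of
    size n each, using the normal approximation."""
    se = sd * math.sqrt(2 / n)
    z = diff / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# A practically trivial difference of 0.015 standard deviations...
diff, sd = 0.015, 1.0
for n in (1_000, 10_000, 30_000):
    print(f"n = {n:>6}: p = {p_two_sided(diff, sd, n):.3f}")
```

At n = 30,000 the p-value lands around 0.066: "significant" under a p &lt; 0.10 threshold but not under the conventional 0.05, even though the underlying difference is of no practical importance.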
