Brendan Collins’s journal round-up for 3rd December 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A framework for conducting economic evaluations alongside natural experiments. Social Science & Medicine Published 27th November 2018

I feel like Social Science & Medicine has been publishing some excellent health economics papers lately, and this is another example. Natural experiment methods, like instrumental variables, difference-in-differences, and propensity score matching, are increasingly used to evaluate public health policy interventions. This paper provides a review and a framework for incorporating economic evaluation alongside these methods. And even better, it has a checklist! It goes into some detail in describing each item in the checklist, which I think will be really useful. A couple of the items seemed a bit peculiar to me, like talking about “Potential behavioural responses (e.g. ‘nudge effects’)” – I would prefer a more general term like causal mechanism. And it has multi-criteria decision analysis (MCDA) as a potential method. I love MCDA, but I think that using it would surely require a whole new set of items on the checklist, for instance, to record how the MCDA weights have been decided. (For me, saying that CEA is insufficient so we should use MCDA instead is like saying I find it hard to put IKEA furniture together so I will make my own furniture from scratch.) My hope with checklists is that they actually improve practice, rather than just being used in a post hoc way to include a few caveats and excuses in papers.
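To illustrate why recording how MCDA weights were decided matters so much, here is a minimal weighted-sum sketch. The interventions, criteria, scores, and weight sets are all hypothetical, invented purely for illustration:

```python
# Minimal MCDA weighted-sum sketch; all scores and weights are hypothetical.
# Two interventions scored (0-1) on two criteria.
scores = {
    "intervention_A": {"cost_effectiveness": 0.9, "equity": 0.3},
    "intervention_B": {"cost_effectiveness": 0.5, "equity": 0.8},
}

def weighted_score(criteria_scores, weights):
    """Aggregate criterion scores with a simple weighted sum."""
    return sum(weights[c] * s for c, s in criteria_scores.items())

# Two plausible weight sets produce opposite rankings, which is why an
# MCDA checklist would need to document how weights were elicited.
weights_1 = {"cost_effectiveness": 0.8, "equity": 0.2}  # efficiency-focused
weights_2 = {"cost_effectiveness": 0.3, "equity": 0.7}  # equity-focused

for w in (weights_1, weights_2):
    ranking = sorted(scores, key=lambda i: weighted_score(scores[i], w), reverse=True)
    print("preferred:", ranking[0])
```

The same evidence supports either intervention depending on the weights, so a checklist item asking only "was MCDA used?" would miss the decision that actually drives the result.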

Autonomy, accountability, and ambiguity in arm’s-length meta-governance: the case of NHS England. Public Management Review Published 18th November 2018

It has been said that NICE in England serves the purpose of insulating politicians from the fallout of difficult investment decisions, for example recommending that people with mild Alzheimer’s disease do not get certain drugs. When the coalition government gained power in the UK in 2010, there was initially talk that NICE’s role in approving drugs might be reduced. But the government may have realised that NICE serves a useful role as a focus of public and media anger when new drugs are rejected on cost-effectiveness grounds. And so it may be with NHS England (NHSE), which, according to this paper, as an arm’s-length body (ALB), has powers that exceed what was initially planned.

This paper uses meta-governance theory to examine different types of control mechanisms, the relationship between the ALB and its sponsor (the Department of Health and Social Care), and how these affect autonomy and accountability. It suggests that NHSE is operating at a macro, policy-making level, rather than an operational, implementation level. Policy changes from NHSE are presented by ministers as coming ‘from’ the NHS but, in reality, the NHS is much bigger than NHSE. NHSE was created to take political interference out of decision-making and let civil servants get on with things. But before reading this paper, it had not occurred to me how much power NHSE had accrued, and how this may create difficulties in terms of accountability for reasonableness. For instance, NHSE has a very complicated structure and does not publish all of its meeting minutes, so it is difficult to understand how investment decisions are made. It may be that the changes that have happened in the NHS since 2012 were intended to involve healthcare professionals more in local investment decisions. But, in practice, a lot of power in terms of shaping the balance of hierarchies, markets, and networks has ended up in NHSE, sitting in a hinterland between politicians in Whitehall and local NHS organisations. With a new NHS Plan reportedly delayed because of Brexit chaos, it will be interesting to see what the plan says about accountability.

How health policy shapes healthcare sector productivity? Evidence from Italy and UK. Health Policy [PubMed] Published 2nd November 2018

This paper starts with an interesting premise: the English and Italian state healthcare systems (the NHS and the SSN) are quite similar (which I didn’t know before). But the two systems had different priorities over the period from 2004 to 2011. England focused on increasing activity, reducing waiting times, and improving quality, while Italy focused on reducing hospital beds as well as reducing variation and unnecessary treatments. The paper finds that productivity increased more quickly in the NHS than in the SSN over this period. It is ambitious in its scope and in the data the authors have used. The model uses input-specific price deflators, so it captures the fact that healthcare input prices rise faster than those in other industries, but treats this as exogenous to the production function. This price inflation may be because around 75% of costs are staff costs, and wage inflation in other industries produces wage inflation in the NHS. It may be interesting in future to analyse to what extent the rate of inflation for healthcare is inevitable and whether it is linked in some way to the inputs and outputs. We often hear that productivity in the NHS has not increased as much as in other industries, so it is perhaps reassuring to read a paper that says the NHS has performed better than a similar health system elsewhere.
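The role of an input-specific price deflator can be seen in a few lines of arithmetic. The growth rates below are entirely hypothetical, chosen only to show the mechanics, not taken from the paper:

```python
# Sketch of productivity growth with an input-specific price deflator.
# All growth rates are hypothetical, for illustration only.
output_index_growth = 1.04        # output volume up 4% over the period
nominal_input_growth = 1.06       # nominal input spend up 6%
healthcare_input_deflator = 1.03  # input prices (mostly wages) up 3%

# Real (volume) input growth: strip out input-specific price inflation,
# rather than deflating by an economy-wide price index.
real_input_growth = nominal_input_growth / healthcare_input_deflator

# Productivity growth = output volume growth / input volume growth.
productivity_growth = output_index_growth / real_input_growth
print(f"productivity growth: {(productivity_growth - 1) * 100:.2f}%")
```

With the healthcare-specific deflator, measured productivity rises; deflating the same nominal spend by a lower economy-wide inflation rate would make the system look less productive, which is why the choice of deflator matters.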


OHE Lunchtime Seminar: What Can NHS Trusts Do to Reduce Cancer Waiting Times?

OHE Lunchtime Seminar with Sarah Karlsberg, Steven Paling, and Júlia González Esquerré on ‘What can NHS trusts do to reduce cancer waiting times?’ To be held on 14th November 2018 from 12 p.m. to 2 p.m.

Rapid diagnosis and access to treatment for cancer are vital for both clinical outcomes and patient experience of care. The NHS Constitution contains several waiting times targets, including that 85% of patients diagnosed with cancer should receive treatment within 62 days of referral. However, waiting times are increasing in England: the 62-day target has not been met since late 2013 and, in July 2018, the NHS recorded its worst performance since records began in October 2009.

This seminar will present evidence on where NHS trusts can take practical steps to reduce cancer waiting times. The work uses patient-level data (Hospital Episode Statistics) from 2016/17 and an econometric model to quantify the potential effects of several recommendations on the average length of patients’ cancer pathways. The project won the 2018 John Hoy Memorial Award for the best piece of economic analysis produced by government economists.

Sarah Karlsberg, Steven Paling, and Júlia González Esquerré work in the NHS Improvement Economics Team, which provides economics expertise to NHS Improvement (previously Monitor and the Trust Development Authority) and the provider sector. Their work covers all aspects of provider policy, including operational and financial performance, quality of care, leadership and strategic change. Sarah is also a Visiting Fellow at OHE.

Download the full seminar invite here.

The seminar will be held in the Sir Alexander Fleming Room, Southside, 7th Floor, 105 Victoria Street, London SW1E 6QT. A buffet lunch will be available from 12 p.m. The seminar will start promptly at 12:30 p.m. and finish promptly at 2 p.m.

If you would like to attend this seminar, please reply to ohegeneral@ohe.org.

Chris Sampson’s journal round-up for 17th September 2018


Does competition from private surgical centres improve public hospitals’ performance? Evidence from the English National Health Service. Journal of Public Economics Published 11th September 2018

This study looks at proper (supply-side) privatisation in the NHS. The subject is the government-backed introduction of Independent Sector Treatment Centres (ISTCs), which, in the name of profit, provide routine elective surgical procedures to NHS patients. ISTCs were directed to areas with high waiting times and began rolling out from 2003.

The authors take pre-surgery length of stay as a proxy for efficiency and hypothesise that the entry of ISTCs would improve efficiency in nearby NHS hospitals. They also hypothesise that the ISTCs would cream-skim healthier patients, leaving NHS hospitals to foot the bill for a more challenging casemix. Difference-in-differences regressions are used to test these hypotheses, the treatment group being those NHS hospitals close to ISTCs and the control being those not likely to be affected. The authors use patient-level Hospital Episode Statistics from 2002 to 2008 for elective hip and knee replacements.

The key difficulty here is that the trend in length of stay changed dramatically at the time ISTCs began to be introduced, regardless of whether a hospital was affected by their introduction. This is because there was a whole suite of policy and structural changes being implemented around this period, many targeting hospital efficiency. So we’re looking at comparing new trends, not comparing changes in existing levels or trends.

The authors’ hypotheses prove right. Pre-surgery length of stay fell in exposed hospitals by around 16%. The ISTCs engaged in risk selection, meaning that NHS hospitals were left with sicker patients. What’s more, the savings for NHS hospitals (from shorter pre-surgery length of stay) were more than offset by an increase in post-surgery length of stay, which may have been due to the change in casemix.
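The difference-in-differences arithmetic behind an estimate like this can be sketched with entirely hypothetical group means (the paper itself estimates patient-level regressions with controls, not a simple two-by-two comparison):

```python
# Difference-in-differences sketch with hypothetical mean pre-surgery
# lengths of stay (in days) before and after ISTC entry.
# "Exposed" = NHS hospitals near an entering ISTC; "control" = unaffected.
exposed_pre, exposed_post = 1.00, 0.70
control_pre, control_post = 1.00, 0.86

# DiD estimate: change in the exposed group minus change in controls.
# The control-group change absorbs the sector-wide downward trend.
did = (exposed_post - exposed_pre) - (control_post - control_pre)
print(f"DiD effect on pre-surgery length of stay: {did:.2f} days")
print(f"relative to exposed pre-period mean: {did / exposed_pre:.0%}")
```

The subtraction of the control-group change is what makes the parallel-trends assumption so critical here: if exposed and control hospitals were on different trends anyway, the residual difference is not a treatment effect.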

I’m not sure how useful difference-in-difference is in this case. We don’t know what the trend would have been without the intervention because the pre-intervention trend provides no clues about it and, while the outcome is shown to be unrelated to selection into the intervention, we don’t know whether selection into the ISTC intervention was correlated with exposure to other policy changes. The authors do their best to quell these concerns about parallel trends and correlated policy shocks, and the results appear robust.

Broadly speaking, the study satisfies my prior view of for-profit providers as leeches on the NHS. Still, I’m left a bit unsure of the findings. The problem is, I don’t see the causal mechanism. Hospitals had the financial incentive to be efficient and achieve a budget surplus without competition from ISTCs. It’s hard (for me, at least) to see how reduced length of stay has anything to do with competition unless hospitals used it as a basis for getting more patients through the door, which, given that ISTCs were introduced in areas with high waiting times, the hospitals could have done anyway.

While the paper describes a smart and thorough analysis, the findings don’t tell us whether ISTCs are good or bad. Both the length of stay effect and the casemix effect are ambiguous with respect to patient outcomes. If only we had some PROMs to work with…

One method, many methodological choices: a structured review of discrete-choice experiments for health state valuation. PharmacoEconomics [PubMed] Published 8th September 2018

Discrete choice experiments (DCEs) are in vogue when it comes to health state valuation. But there is disagreement about how they should be conducted. Studies can differ in terms of the design of the choice task, the design of the experiment, and the analysis methods. The purpose of this study is to review what has been going on: how have studies differed, and what could that mean for our use of the value sets that are estimated?

A search of PubMed for valuation studies using DCEs – including generic and condition-specific measures – turned up 1132 citations, of which 63 were ultimately included in the review. Data were extracted and quality assessed.

The ways in which the studies differed, and the ways in which they were similar, hint at what’s needed from future research. The majority of recent studies were conducted online. This could be problematic if we think self-selecting online panels aren’t representative. Most studies used five or six attributes to describe options and many included duration as an attribute. The methodological tweaks necessary to anchor at 0=dead were a key source of variation. Those using duration varied in terms of the number of levels presented and the range of duration (from 2 months to 50 years). Other studies adopted alternative strategies. In DCE design, there is a necessary trade-off between statistical efficiency and the difficulty of the task for respondents. A variety of methods have been employed to try and ease this difficulty, but there remains a lack of consensus on the best approach. An agreed criterion for this trade-off could facilitate consistency. Some of the consistency that does appear in the literature is due to conformity with EuroQol’s EQ-VT protocol.
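At the analysis stage, most of these studies fit some variant of a conditional logit. As a minimal sketch, with entirely hypothetical attributes, levels, and coefficients (not from any published value set):

```python
import math

# Conditional logit sketch for a single DCE choice set.
# Attribute coefficients are hypothetical marginal (dis)utilities.
coefs = {"mobility": -0.4, "pain": -0.7, "duration": 0.05}

def utility(profile):
    """Linear utility of a health-state profile (attribute -> level)."""
    return sum(coefs[a] * level for a, level in profile.items())

# Two alternatives in one choice task; levels are illustrative.
option_a = {"mobility": 2, "pain": 1, "duration": 10}
option_b = {"mobility": 1, "pain": 3, "duration": 10}

# Logit choice probability: softmax over the two utilities.
ua, ub = utility(option_a), utility(option_b)
p_choose_a = math.exp(ua) / (math.exp(ua) + math.exp(ub))
print(f"P(choose A) = {p_choose_a:.2f}")
```

Every design choice reviewed above feeds into this step: the attributes and levels determine which coefficients are identifiable, and including duration is what allows the estimated utilities to be anchored on the 0=dead scale.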

Unfortunately, for casual users of DCE valuations, all of this means that we can’t just assume that a DCE is a DCE is a DCE. Understanding the methodological choices involved is important in the application of resultant value sets.

Trusting the results of model-based economic analyses: is there a pragmatic validation solution? PharmacoEconomics [PubMed] Published 6th September 2018

Decision models are almost never validated. This means that – save for a superficial assessment of their outputs – they are taken on good faith. That should be a worry. This article builds on the experience of the authors to outline why validation doesn’t take place and to try to identify solutions. This experience includes a pilot study in France, NICE Evidence Review Groups, and the perspective of a consulting company modeller.

There are a variety of reasons why validation is not conducted, but resource constraints are a big part of it. Neither HTA agencies nor modellers themselves have the time to conduct validation and verification exercises. The core of the authors’ proposed solution is to end the routine development of bespoke models. Models – or, at least, parts of models – need to be taken off the shelf, and open source or otherwise transparent modelling standards are a prerequisite for that. The key idea is to create ‘standard’ or ‘reference’ models, which can be extensively validated and then tweaked. The most radical aspect of this proposal is that they should be ‘freely available’.
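A reusable ‘reference’ component might be as small as a transparent Markov cohort engine like the sketch below. The states and transition probabilities are entirely hypothetical; the point is that the engine and its validation checks could be written, verified, and shared once:

```python
# A minimal, reusable Markov cohort component of the kind that could
# live in an open source 'reference model' library. States and
# transition probabilities below are hypothetical.

def run_cohort(transition_matrix, start, cycles):
    """Propagate a cohort distribution through a number of model cycles."""
    state = list(start)
    for _ in range(cycles):
        state = [
            sum(state[i] * transition_matrix[i][j] for i in range(len(state)))
            for j in range(len(state))
        ]
    return state

# States: well, sick, dead. Each row sums to 1 -- a check that a shared
# component can enforce once, instead of every bespoke model re-testing it.
P = [
    [0.90, 0.07, 0.03],
    [0.00, 0.80, 0.20],
    [0.00, 0.00, 1.00],
]
trace = run_cohort(P, [1.0, 0.0, 0.0], cycles=10)
print("cohort after 10 cycles (well, sick, dead):", trace)
```

An analyst would then supply disease-specific transitions, costs, and utilities as inputs, which is the modular, incremental approach the authors recommend.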

But rather than offering a path to open source modelling, the authors offer recommendations for how we should conduct ourselves until open source modelling is realised. These include the adoption of a modular and incremental approach to modelling, combined with more transparent reporting. I agree; we need a shift in mindset. Yet, the barriers to open source models are – I believe – the same barriers that would prevent these recommendations from being realised. Modellers don’t have the time or the inclination to provide full and transparent reporting. There is no incentive for modellers to do so. The intellectual property value of models means that public release of incremental developments is not seen as a sensible thing to do. Thus, the authors’ recommendations appear to me to be dependent on open source modelling, rather than an interim solution while we wait for it. Nevertheless, this is the kind of innovative thinking that we need.
