Chris Sampson’s journal round-up for 17th September 2018

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Does competition from private surgical centres improve public hospitals’ performance? Evidence from the English National Health Service. Journal of Public Economics Published 11th September 2018

This study looks at proper (supply-side) privatisation in the NHS. The subject is the government-backed introduction of Independent Sector Treatment Centres (ISTCs), which, in the name of profit, provide routine elective surgical procedures to NHS patients. ISTCs were directed to areas with high waiting times and began rolling out from 2003.

The authors take pre-surgery length of stay as a proxy for efficiency and hypothesise that the entry of ISTCs would improve efficiency in nearby NHS hospitals. They also hypothesise that the ISTCs would cream-skim healthier patients, leaving NHS hospitals to foot the bill for a more challenging casemix. Difference-in-differences regressions are used to test these hypotheses, the treatment group being those NHS hospitals close to ISTCs and the control group being those unlikely to be affected. The authors use patient-level Hospital Episode Statistics from 2002 to 2008 for elective hip and knee replacements.
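
For readers unfamiliar with the design, here is a minimal difference-in-differences sketch using simulated patient-level data; the variable names (log_pre_los, exposed, post) are hypothetical stand-ins, not the authors’ code or the actual HES extract.

```python
# A minimal difference-in-differences sketch on simulated data; column names
# are hypothetical stand-ins for the HES extract used in the paper.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "hospital_id": rng.integers(0, 50, n),
    "year": rng.integers(2002, 2009, n),
    "age": rng.integers(50, 90, n),
})
df["exposed"] = (df["hospital_id"] < 20).astype(int)   # hospitals near an ISTC
df["post"] = (df["year"] >= 2004).astype(int)          # after ISTC entry
df["log_pre_los"] = (0.5 - 0.16 * df["exposed"] * df["post"]
                     + 0.01 * df["age"] + rng.normal(0, 0.3, n))

# Hospital and year fixed effects absorb the main effects; the coefficient on
# exposed:post is the difference-in-differences estimate.
model = smf.ols("log_pre_los ~ exposed:post + age + C(year) + C(hospital_id)",
                data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["hospital_id"]})
print(result.params["exposed:post"])
```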

The key difficulty here is that the trend in length of stay changed dramatically at the time ISTCs began to be introduced, regardless of whether a hospital was affected by their introduction. This is because there was a whole suite of policy and structural changes being implemented around this period, many targeting hospital efficiency. So we’re looking at comparing new trends, not comparing changes in existing levels or trends.

The authors’ hypotheses prove right. Pre-surgery length of stay fell in exposed hospitals by around 16%. The ISTCs engaged in risk selection, meaning that NHS hospitals were left with sicker patients. What’s more, the savings for NHS hospitals (from shorter pre-surgery length of stay) were more than offset by an increase in post-surgery length of stay, which may have been due to the change in casemix.

I’m not sure how useful difference-in-differences is in this case. We don’t know what the trend would have been without the intervention because the pre-intervention trend provides no clues about it and, while the outcome is shown to be unrelated to selection into the intervention, we don’t know whether selection into the ISTC intervention was correlated with exposure to other policy changes. The authors do their best to quell these concerns about parallel trends and correlated policy shocks, and the results appear robust.

Broadly speaking, the study satisfies my prior view of for-profit providers as leeches on the NHS. Still, I’m left a bit unsure of the findings. The problem is, I don’t see the causal mechanism. Hospitals had the financial incentive to be efficient and achieve a budget surplus without competition from ISTCs. It’s hard (for me, at least) to see how reduced length of stay has anything to do with competition unless hospitals used it as a basis for getting more patients through the door, which, given that ISTCs were introduced in areas with high waiting times, the hospitals could have done anyway.

While the paper describes a smart and thorough analysis, the findings don’t tell us whether ISTCs are good or bad. Both the length of stay effect and the casemix effect are ambiguous with respect to patient outcomes. If only we had some PROMs to work with…

One method, many methodological choices: a structured review of discrete-choice experiments for health state valuation. PharmacoEconomics [PubMed] Published 8th September 2018

Discrete choice experiments (DCEs) are in vogue when it comes to health state valuation. But there is disagreement about how they should be conducted. Studies can differ in terms of the design of the choice task, the design of the experiment, and the analysis methods. The purpose of this study is to review what has been going on: how have studies differed, and what could that mean for our use of the value sets that are estimated?

A search of PubMed for valuation studies using DCEs – including generic and condition-specific measures – turned up 1132 citations, of which 63 were ultimately included in the review. Data were extracted and quality assessed.

The ways in which the studies differed, and the ways in which they were similar, hint at what’s needed from future research. The majority of recent studies were conducted online, which could be problematic if we think self-selecting online panels aren’t representative. Most studies used five or six attributes to describe options, and many included duration as an attribute. The methodological tweaks necessary to anchor at 0=dead were a key source of variation. Those using duration varied in the number of levels presented and the range of durations (from 2 months to 50 years), while other studies adopted alternative strategies. In DCE design, there is a necessary trade-off between statistical efficiency and the difficulty of the task for respondents. A variety of methods have been employed to try to ease this difficulty, but there remains a lack of consensus on the best approach; an agreed criterion for this trade-off could facilitate consistency. Some of the consistency that does appear in the literature is due to conformity with EuroQol’s EQ-VT protocol.
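
As an aside, in DCEs that include duration, one common anchoring approach assumes a utility of the form U = beta_time·t + Σ beta_k·x_k·t, so that dead (t = 0) has a utility of zero and anchoring reduces to a rescaling of coefficients. A minimal sketch with purely hypothetical coefficient values:

```python
# Hypothetical coefficients from a DCE-with-duration model of the form
# U = beta_time * t + sum(beta_k * x_k * t), under which dead (t = 0) is zero.
beta_time = 0.20                    # utility per life-year in full health
beta_interactions = {               # attribute-level decrements, interacted with t
    "mobility_level3": -0.06,
    "pain_level3": -0.09,
}

# Dividing by beta_time rescales values onto the dead = 0, full health = 1 scale.
decrements = {k: b / beta_time for k, b in beta_interactions.items()}
value_of_state = 1 + sum(decrements.values())   # a state with both level-3 problems
print(decrements, value_of_state)               # 0.25 in this illustration
```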

Unfortunately, for casual users of DCE valuations, all of this means that we can’t just assume that a DCE is a DCE is a DCE. Understanding the methodological choices involved is important in the application of resultant value sets.

Trusting the results of model-based economic analyses: is there a pragmatic validation solution? PharmacoEconomics [PubMed] Published 6th September 2018

Decision models are almost never validated. This means that – save for a superficial assessment of their outputs – they are taken on good faith. That should be a worry. This article builds on the experience of the authors to outline why validation doesn’t take place and to try to identify solutions. This experience includes a pilot study in France, NICE Evidence Review Groups, and the perspective of a consulting company modeller.

There are a variety of reasons why validation is not conducted, but resource constraints are a big part of it. Neither HTA agencies, nor modellers themselves, have the time to conduct validation and verification exercises. The core of the authors’ proposed solution is to end the routine development of bespoke models. Models – or, at least, parts of models – need to be taken off the shelf. Thus, open source or otherwise transparent modelling standards are a prerequisite for this. The key idea is to create ‘standard’ or ‘reference’ models, which can be extensively validated and tweaked. The most radical aspect of this proposal is that they should be ‘freely available’.

But rather than offering a path to open source modelling, the authors offer recommendations for how we should conduct ourselves until open source modelling is realised. These include the adoption of a modular and incremental approach to modelling, combined with more transparent reporting. I agree; we need a shift in mindset. Yet, the barriers to open source models are – I believe – the same barriers that would prevent these recommendations from being realised. Modellers don’t have the time or the inclination to provide full and transparent reporting. There is no incentive for modellers to do so. The intellectual property value of models means that public release of incremental developments is not seen as a sensible thing to do. Thus, the authors’ recommendations appear to me to be dependent on open source modelling, rather than an interim solution while we wait for it. Nevertheless, this is the kind of innovative thinking that we need.

Thesis Thursday: Frank Sandmann

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Frank Sandmann who has a PhD from the London School of Hygiene & Tropical Medicine. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
The true cost of epidemic and outbreak diseases in hospitals
Supervisors
Mark Jit, Sarah Deeny, Julie Robotham, John Edmunds
Repository link
http://researchonline.lshtm.ac.uk/4648208/

Do you refer to the ‘true’ cost because some costs are hidden in this context?

That’s a good observation. Economists use the term “true cost” as a synonym for “opportunity cost”, which can be defined as the net value of the forgone second-best use of a resource. The true value of a hospital bed is therefore determined by its second-best use, which may indeed be less easily observed and less obvious, or somewhat hidden.

In the context of infectious disease outbreaks in hospital, the most visible costs are the direct expenditures on treatments of infected cases and any measures of containment. However, they do not capture the full extent of the “alternative” costs and therefore cannot equal opportunity costs. Slightly less visible are the potential knock-on effects for visitors to the hospital who, unbeknown to them, may get infected and contribute to sustained transmission in the community. Least seen are the externalities borne by patients who have not been admitted so far but who are awaiting admission, and for whom there is no space in hospital yet due to the ongoing outbreak.

In my thesis, I provided a general overview of the historical development of the concept of opportunity costs of resources before I looked in detail at bed-days and the application for hospitals.

How should the opportunity cost of hospital stays be determined?

That depends on for whom you want to determine these costs.

For individual patients, it depends on the very subjective decision of how else they would spend their time instead, and how urgent it is to receive hospital care.

From the perspective of hospital administrators, it is straightforward to calculate the opportunity costs based on the revenues and expenditures of the inpatients, their lengths of stay, and the existing demand for care from the community. This is quite important because whether there are opportunity costs from forgone admissions will depend on whether there are other patients actually waiting to be admitted, which is somewhat reflected in occupancy rates and of course waiting lists.

Any other decision maker who is acting as an agent on behalf of a collective group or the public should look into the forgone health impact of patients who cannot be admitted when the beds are unavailable to them. In my thesis, I proposed a method for quantifying the opportunity costs of bed-days with the net benefit of the second-best patients forgone, which I illustrated with the example of norovirus-associated gastroenteritis.
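
To make the idea concrete, here is a purely illustrative sketch of valuing a blocked bed-day by the net benefit of the second-best admission forgone. The numbers and the £20,000-per-QALY threshold are assumptions for illustration, not figures from the thesis.

```python
# Illustrative numbers only; none of these values are taken from the thesis.
threshold = 20_000               # assumed willingness to pay per QALY (GBP)
qaly_gain_forgone = 0.05         # health gain of the next-best patient not admitted
treatment_cost = 600             # cost of treating that patient (GBP)
bed_days_used = 4                # bed-days that admission would have occupied

net_benefit_forgone = qaly_gain_forgone * threshold - treatment_cost
opportunity_cost_per_bed_day = net_benefit_forgone / bed_days_used
print(opportunity_cost_per_bed_day)   # GBP per bed-day lost to the outbreak
```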

How important are differences in methods for costing in the context of gastroenteritis and norovirus?

The results can differ quite substantially when using different costing methods. Norovirus is an ideal illness to illustrate this issue given that otherwise healthy people with gastrointestinal symptoms and no further comorbidities or complications shouldn’t be admitted to hospital in order to minimise the risk of an outbreak. Patients with norovirus are therefore often not the patient group that is benefitting the most from a hospital stay.

In one of the studies of my PhD, I was able to show that the annual burden of norovirus in public hospitals in England amounts to a mean £110 million using conventional costing methods, while the opportunity costs were two to three times higher, at up to £300 million.

This means that an intervention may be disadvantaged when conventional costing methods are used and the opportunity costs are ignored. When evaluating such an intervention against established decision rules of cost-effectiveness, this may lead to an incorrect decision.

What were some of the key challenges that you encountered in estimating the cost of norovirus to hospitals, and how did you overcome them?

There were at least four key challenges:

First was the number of admissions. Many inpatients with norovirus won’t get recorded as such if they haven’t been laboratory-confirmed. That is why I regressed national inpatient episodes of gastroenteritis against laboratory surveillance reports for ten different gastrointestinal pathogens to estimate the norovirus-attributable proportion.
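
A hedged sketch of this kind of attribution regression is below: weekly gastroenteritis admissions are regressed on laboratory reports for several pathogens, and the fitted contribution of the norovirus term gives the norovirus-attributable admissions. The data, column names, and the use of only three pathogens are illustrative simplifications.

```python
# Simulated weekly data; the real analysis used ten pathogens and national data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
weeks = 300
labs = pd.DataFrame(
    rng.poisson(lam=[40, 25, 15], size=(weeks, 3)),
    columns=["norovirus_reports", "rotavirus_reports", "campylobacter_reports"],
)
admissions = (2.0 * labs["norovirus_reports"]
              + 1.2 * labs["rotavirus_reports"]
              + 0.5 * labs["campylobacter_reports"]
              + rng.normal(0, 10, weeks))

fit = sm.OLS(admissions, sm.add_constant(labs)).fit()
attributable = fit.params["norovirus_reports"] * labs["norovirus_reports"]
print(attributable.sum())   # estimated norovirus-attributable admissions
```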

Second was the number of bed-days used by inpatients that were infected with norovirus during their hospital stay. Using their total length of stay, or some form of propensity matching, suffers from time-dependent biases and overestimates the number of bed-days. Instead, I used a multi-state model and patient-level data from a local hospital.
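
The sketch below is not the multi-state model itself, but it illustrates the bias being avoided: counting infected patients’ whole stay overstates the bed-days attributable to infection relative to counting only the days from onset onwards. The numbers are made up.

```python
# Hypothetical stays for four patients who acquired norovirus in hospital.
import pandas as pd

stays = pd.DataFrame({
    "total_los": [12, 9, 15, 7],     # total length of stay (days)
    "day_of_onset": [8, 5, 10, 6],   # hospital day on which norovirus was acquired
})

naive_bed_days = stays["total_los"].sum()                                 # 43
post_onset_bed_days = (stays["total_los"] - stays["day_of_onset"]).sum()  # 14
print(naive_bed_days, post_onset_bed_days)
```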

Third was the bed-days that were left unoccupied for infection control. One of the datasets tracked them mandatorily for acute hospitals during winters, while another surveillance system was voluntary, but recorded outbreaks throughout the year. For a more accurate estimate, I compared both datasets with each other to explore their potential overlap.

Fourth was the forgone health of alternative admissions who had otherwise occupied the beds. I had to make assumptions about the disease progression with and without hospital treatment, for which I used health-state utilities that accounted for age, sex, and the primary medical condition.

If you could have wished for one additional set of data that wasn’t available, what would it have been?

I have been very fortunate to work with a number of colleagues at Public Health England and University College London who provided me with much of the epidemiological data that I needed. My research could have benefitted though from a dataset that tracked the time of infection for a larger patient population and for longer observation periods, and a dataset that included more robust estimates for the health gain from hospital care.

If I could make a wish about the existing datasets on norovirus that I have used, I would wish for a higher rate of reporting given that it became clear from our comparison of datasets that there is a highly-correlated trend, but the number of outbreaks reported and the details of reporting leave room for improvement. Another wish of mine for daily reporting of bed-days during winter became reality only recently; during my PhD, I had to impute missing values that were non-randomly missing at weekends and over the Christmas period. This was changed in winter 2016, and I have recently shown that the mean of our lowest-to-highest imputation scenarios is surprisingly close to the daily number of bed-days recorded since then.

Parts of your thesis are made up of journal articles that you published before submission. Was this always your intention and how did you find the experience?

I always wanted to publish parts of my thesis in separate journal articles as I believe this to be a great chance to reach different audiences. That is because my theoretical research on opportunity costs may be of broader interest than just to those who work on norovirus or bed-days given that my findings are generalisable to other diseases as well as other resources. At the same time, others may be more interested in my results for norovirus, and still others in my application of the various statistical, economic, and mathematical modelling techniques.

After all, I honestly suspect that some people may place a higher value on their next-best alternative use of time than reading my thesis from cover to cover.

Writing up my thoughts early on also helped me refine them, and the peer-review process was a great opportunity to get some additional feedback. It did require good time management skills though to keep coming back to previous studies to address the peer-reviewers’ comments while I was already busy working on the next studies.

All in all, I’d recommend that others consider it and, looking back, I’d do it the same way again.

Thesis Thursday: Thomas Hoe

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Thomas Hoe who has a PhD from University College London. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Essays on the economics of health care provision
Supervisors
Richard Blundell, Orazio Attanasio
Repository link
http://discovery.ucl.ac.uk/10048627/

What data do you use in your analyses and what are your main analytical methods?

I use data from the English National Health Service (NHS). One of the great features of the NHS is the centralised data it collects, with the Hospital Episode Statistics (HES) containing information on every public hospital visit in England.

In my thesis, I primarily use two empirical approaches. In my work on trauma and orthopaedic departments, I exploit the fact that the number of emergency trauma admissions to hospital each day is random. This randomness allows me to conduct a quasi-experiment to assess how hospitals perform when they are more or less busy.
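
In regression terms, a minimal sketch of this design might look like the following, with simulated hospital-day data and hypothetical variable names rather than anything from the thesis.

```python
# Simulated hospital-day panel; variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
days = pd.DataFrame({
    "hospital_id": rng.integers(0, 30, n),
    "month": rng.integers(1, 13, n),
    "trauma_admissions": rng.poisson(12, n),   # plausibly random day-to-day
})
days["elective_cancellations"] = (0.3 * days["trauma_admissions"]
                                  + rng.normal(0, 2, n))

# Daily randomness in trauma admissions identifies the effect of being busier,
# conditional on hospital and seasonal controls.
fit = smf.ols("elective_cancellations ~ trauma_admissions + C(hospital_id) + C(month)",
              data=days).fit(cov_type="cluster",
                             cov_kwds={"groups": days["hospital_id"]})
print(fit.params["trauma_admissions"])
```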

The second approach I use, in my work on emergency departments with Jonathan Gruber and George Stoye, is based on bunching techniques that originated in the tax literature (Chetty et al., 2013; Kleven and Waseem, 2013; Saez, 2010). These techniques use interpolation to infer how discontinuities in incentive schemes affect outcomes. We apply and extend these techniques to evaluate the impact of the ‘4-hour target’ in English emergency departments.
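
For intuition, here is a stripped-down bunching sketch on simulated waiting times: fit a smooth counterfactual to the distribution excluding a window around the 4-hour threshold, then measure the excess mass just below it. It is only a caricature of the approach, not the authors’ implementation.

```python
# Simulated ED waiting times (minutes) with artificial bunching just under 240.
import numpy as np

rng = np.random.default_rng(3)
waits = rng.exponential(scale=150, size=50_000)
bunchers = rng.random(waits.size) < 0.10
waits[bunchers] = rng.uniform(220, 240, bunchers.sum())

bins = np.arange(0, 481, 10)
counts, edges = np.histogram(waits, bins=bins)
centres = (edges[:-1] + edges[1:]) / 2

exclude = (centres >= 200) & (centres <= 260)          # window around the target
coef = np.polyfit(centres[~exclude], counts[~exclude], deg=5)
counterfactual = np.polyval(coef, centres)

excess_mass = (counts[exclude] - counterfactual[exclude]).sum()
print(excess_mass)   # patients shifted to just before the 4-hour threshold
```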

How did you characterise and measure quality in your research?

Measuring the quality of health care outcomes is always a challenge in empirical research. Since my research primarily relies on administrative data from HES, I use the patient outcomes that can be directly constructed from this data: in-hospital mortality, and unplanned readmission.
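
As an illustration of how such an outcome can be built from episode-level records, here is a hedged sketch of a readmission flag; the 30-day window and the column names are assumptions for illustration, not necessarily the definitions used in the thesis.

```python
# Hypothetical episode-level records; flag an emergency readmission within 30 days.
import pandas as pd

episodes = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3],
    "admission_date": pd.to_datetime(
        ["2016-01-05", "2016-01-20", "2016-02-01", "2016-03-10", "2016-06-01"]),
    "emergency_admission": [False, True, False, False, True],
})

episodes = episodes.sort_values(["patient_id", "admission_date"])
next_adm = episodes.groupby("patient_id")["admission_date"].shift(-1)
next_emergency = episodes.groupby("patient_id")["emergency_admission"].shift(-1)
episodes["readmit_30d"] = (
    ((next_adm - episodes["admission_date"]).dt.days <= 30)
    & next_emergency.fillna(False).astype(bool)
)
print(episodes)
```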

Mortality is, of course, an outcome that is widely used, and offers an unambiguous interpretation. Readmission, on the other hand, is an outcome that has gained more acceptance as a measure of quality in recent years, particularly following the implementation of readmission penalties in the UK and the US.

What is ‘crowding’, and how can it affect the quality of care?

I use the term crowding to refer, in a fairly general sense, to how busy a hospital is. This could mean that the hospital is physically very crowded, with lots of patients in close proximity to one another, or that the number of patients outstrips the available resources.

In practice, I evaluate how crowding affects quality of care by comparing hospital performance and patient outcomes on days when hospitals deal with different levels of admissions (due to random spikes in the number of trauma admissions). I find that hospitals respond not only by cancelling some planned admissions, such as elective hip and knee replacements, but also by discharging existing patients sooner. For these discharged patients, the shorter-than-otherwise stay in hospital is associated with poorer health outcomes, most notably an increase in subsequent hospital visits (unplanned readmissions).

How might incentives faced by hospitals lead to negative consequences?

One of the strongest incentives faced by public hospitals in England is to meet the government-set waiting time target for elective care. This target has been very successful at reducing wait times. In doing so, however, it may have contributed to hospitals shortening patient stays and increasing patient admissions.

My research shows that shorter hospital stays, in turn, can lead to increases in unplanned readmissions. Setting strong wait time targets, then, in effect trades off shorter waits (from which patients benefit) against crowding effects (which may harm patients).

Your research highlights the importance of time in the hospital production process. How does this play out?

I look at this from three dimensions, each a separate part of a patient’s journey through hospital.

The first two relate to waiting for treatment. For elective patients, this means waiting for an appointment, and previous work has shown that patients attach significant value to reductions in these wait times. I show that trauma and orthopaedic patients would be better off with further wait time reductions, even if that leads to more crowding.

Emergency patients, in contrast, wait for treatment while physically in a hospital emergency department. I show that these waiting times can be very harmful and that by shortening these wait times we can actually save lives.

The third dimension relates to how long a patient spends in hospital recovering from surgery. I show that, at least on the margin of care for trauma and orthopaedic patients, an additional day in hospital has tangible benefits in terms of reducing the likelihood of experiencing an unplanned readmission.

How could your findings be practically employed in the NHS to improve productivity?

I would highlight two areas of my research that speak directly to the policy debate about NHS productivity.

First, while the wait time targets for elective care may have led to some crowding problems and subsequently more readmissions, the net benefit of these targets to trauma and orthopaedic patients is positive. Second, the wait time target for emergency departments also appears to have benefited patients: it saved lives at a reasonably cost-effective rate.

From the perspective of patients, therefore, I would argue these policies have been relatively successful and should be maintained.