Now that the Health and Social Care Bill is the Health and Social Care Act, more focus will be placed on sources of information about provider performance. For patients to make informed choices about where they receive their care, they need good-quality information about outcomes at different institutions. The typical patient will look at which of their local providers performs best and interpret the ranking causally: if they go to that hospital, they will have a better outcome. A recent paper by Varkevisser et al. shows that patients do indeed choose the hospitals with the highest ratings. But are our methods of hospital ranking reliable for this purpose?
Ideally, in assessing provider performance, we want to identify the treatment effect of a specific hospital relative to the others. However, we lack the counterfactual that would tell us a patient's outcomes at different hospitals; moreover, patients are not generally randomised to hospitals.
Typically, case-mix adjustment or propensity score matching is used to compare hospitals. However, even with appropriate case-mix adjustment, hospitals can only be compared on the outcomes of patients whose case-mix profiles overlap. We can empirically estimate the counterfactual for a hypothetical patient with the same characteristics at two hospitals, but we cannot reliably estimate the outcome of, say, a very severe patient at a hospital that only treats mild patients. Any assumption that lets us extrapolate a treatment effect for a patient without an empirical counterfactual is clearly very strong. To put it another way: we can't estimate treatment effects outside the region of common support.
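To make the common-support idea concrete, here is a minimal sketch using invented severity scores (all numbers hypothetical, not from any real dataset): patients whose severity has no counterpart at the other hospital simply cannot be compared.

```python
import random

random.seed(0)

# Hypothetical data: hospital A treats mostly mild patients,
# hospital B treats a more severe population.
severity_A = [random.gauss(0.0, 1.0) for _ in range(5000)]
severity_B = [random.gauss(1.5, 1.0) for _ in range(5000)]

# Region of common support: severities observed at BOTH hospitals.
lo = max(min(severity_A), min(severity_B))
hi = min(max(severity_A), max(severity_B))

in_support_A = [s for s in severity_A if lo <= s <= hi]
in_support_B = [s for s in severity_B if lo <= s <= hi]

# Patients outside [lo, hi] have no empirical counterpart at the
# other hospital, so their outcomes cannot inform the comparison.
print(f"common support: [{lo:.2f}, {hi:.2f}]")
print(f"A: {len(in_support_A)}/{len(severity_A)} comparable, "
      f"B: {len(in_support_B)}/{len(severity_B)} comparable")
```

With more than one case-mix variable the same trimming is usually done on an estimated propensity score rather than on severity directly, but the logic is identical.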
A frequently used measure of hospital performance is the standardised mortality ratio (SMR): the ratio of observed deaths to expected deaths, where expected deaths are calculated from a risk-adjusted logistic regression for risk of mortality. The SMR is therefore interpreted as a provider's performance, for its specific case mix, relative to an average provider with the same case mix. It is a positive (descriptive) judgement about hospital performance conditional on its case mix; it is not a prediction of how any given patient will fare at that hospital.
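The calculation can be sketched as follows. The coefficients and patients below are invented for illustration; a real risk model would be fitted on national data with many more covariates.

```python
import math

def predicted_risk(patient, coefs, intercept):
    """Risk-adjusted mortality probability from a (hypothetical)
    logistic regression: p = 1 / (1 + exp(-(a + b.x)))."""
    z = intercept + sum(coefs[k] * v for k, v in patient.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical risk-model coefficients (illustrative only).
coefs = {"age": 0.04, "severity": 0.8}
intercept = -6.0

# One hospital's case mix and its observed death count.
patients = [{"age": 70, "severity": 2.0},
            {"age": 55, "severity": 1.0},
            {"age": 82, "severity": 3.0}]
observed_deaths = 1

# Expected deaths = sum of model-predicted risks over the case mix.
expected_deaths = sum(predicted_risk(p, coefs, intercept) for p in patients)
smr = observed_deaths / expected_deaths
# SMR > 1: more deaths than expected for this case mix;
# SMR < 1: fewer deaths than expected.
print(f"expected deaths: {expected_deaths:.2f}, SMR: {smr:.2f}")
```

Note that the expected count is conditional on the hospital's own case mix, which is precisely why two hospitals' SMRs do not answer Mrs Smith's question below.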
If Mrs Smith requires a hip replacement, she cannot infer that her outcomes are likely to be better at the hospital with the lower SMR, particularly if she has not compared her own characteristics to those of the patients treated in that hospital. Furthermore, if she goes to a different hospital from the one she would have attended without the choice, and she is a different 'sort' of patient from those typically seen there, her outcomes may actually be worse. The experience of the staff, their accumulated human capital, is specialised to the tasks required by their most frequently seen patients.
Another issue is that outcomes are not independent of one another. A current working paper by Mauro Laudicella and colleagues points out that patients who are readmitted are necessarily patients who survived their index admission. There is therefore a selection mechanism at work; the authors formulate it as an omitted variable problem:
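The equation itself has not survived here; as a sketch, assuming a standard latent-variable selection setup consistent with the symbols defined below, the formulation might be written:

```latex
% Latent readmission propensity (sketch of a standard selection
% formulation; notation follows the symbols defined in the text):
R_i^* = x_{1i}'\beta_1 + \varepsilon_{1i},
\qquad R_i = \mathbf{1}(R_i^* > 0)
% Readmission is only observed for survivors, i.e. when S_i^* > 0,
% so conditional on survival the error is no longer mean zero:
\mathbb{E}[\varepsilon_{1i} \mid S_i^* > 0] \neq 0
```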
Here $R_i^*$ is the latent propensity for readmission and $S_i^*$ that for survival, $\varepsilon_{1i}$ is the error term, and $x_{1i}$ are the covariates for individual $i$. The authors find that hospitals that perform well on mortality perform less well on readmissions; moreover, they find that an overall trend of increasing readmissions is accounted for by increasing survival.
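A small simulation makes the selection mechanism visible. All parameters are invented: two hospitals share an identical readmission process, but the hospital that keeps more of its sickest patients alive records the higher readmission rate among survivors.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def readmission_rate_among_survivors(survival_intercept, n=100_000):
    """Simulate one hospital. Survival probability is
    sigmoid(survival_intercept - severity); the readmission process
    sigmoid(-2 + severity) is IDENTICAL across hospitals, so any
    difference in observed rates is pure selection."""
    readmitted = survivors = 0
    for _ in range(n):
        severity = random.gauss(0.0, 1.0)
        if random.random() < sigmoid(survival_intercept - severity):
            survivors += 1
            if random.random() < sigmoid(-2.0 + severity):
                readmitted += 1
    return readmitted / survivors

rate_good = readmission_rate_among_survivors(2.0)  # better at survival
rate_poor = readmission_rate_among_survivors(1.0)  # worse at survival

# The better hospital keeps sicker patients alive, and those marginal
# survivors are the most likely to be readmitted.
print(f"readmission among survivors: good={rate_good:.3f}, poor={rate_poor:.3f}")
```

Ranking these two hospitals on raw readmissions would penalise the one that saves more lives, which is the pattern Laudicella and colleagues report.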
In creating league tables of a specific outcome, providers can game the system to appear high-achieving. They can send patients who are very likely to die home or to another hospital. A hospital could artificially reduce its length-of-stay figures by discharging patients earlier; this may increase the risk of readmission, but the hospital would appear to be doing well on length of stay. As previously mentioned, the study by Varkevisser and colleagues shows that a 1% reduction in readmission rate (relative to an 8.5% mean) results in a 12% increase in patient demand. However, since the signal of quality is not necessarily reliable, this could result in sub-optimal outcomes for patients, not to mention that the increase in demand will lower staff-to-patient ratios, which have been shown to be related to mortality (see this example).
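A back-of-envelope illustration of the figures quoted above, reading the 1% as a one percentage-point reduction and assuming a linear demand response (both are my simplifying assumptions, not the paper's specification):

```python
# Figures quoted from Varkevisser et al.: a 1pp fall in readmissions
# (against an 8.5% mean rate) is associated with 12% higher demand.
mean_readmission_rate = 0.085
demand_response_per_point = 0.12  # +12% demand per 1pp reduction

def relative_demand_change(reduction_pp):
    """Relative change in demand for a readmission-rate reduction
    given in percentage points, extrapolated linearly (illustrative
    assumption only)."""
    return demand_response_per_point * reduction_pp

# A hospital cutting readmissions from 8.5% to 7.0% (a 1.5pp fall):
print(f"{relative_demand_change(1.5):.0%} more demand")
```

Even a modest improvement in the reported rate thus translates into a sizeable demand shift, which is what makes gaming the reported number so attractive.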
Groups that produce hospital performance statistics, such as Dr Foster, now bear a greater responsibility than ever. Whether or not more choice is a good thing (an argument I daren't wade into), I don't believe the tools generally used at the moment are adequate to provide reliable signals of hospital quality that would let patients choose the hospital that is right for them.