Today sees the publication of surgeons’ death rates on the MyNHS website (see Guardian and BBC stories). The website presents full lists of surgeons by specialty alongside blue circles with a large ‘OK’ inside, grey circles with question marks, or green circles with ticks, indicating, respectively, whether a surgeon’s risk-adjusted mortality (or other significant morbidity) falls within expected limits or is a negative or positive outlier. The important question is whether these measures actually reflect surgeon quality.
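The risk-adjusted figures behind these ratings are typically standardised mortality ratios: observed deaths divided by the number a case-mix model would expect. As a minimal sketch (not MyNHS’s actual methodology; the surgeon’s figures here are invented for illustration):

```python
def smr(observed_deaths, expected_risks):
    """Standardised mortality ratio: observed deaths divided by the
    sum of model-predicted death risks for the surgeon's patients."""
    expected = sum(expected_risks)
    return observed_deaths / expected

# Hypothetical surgeon: one death among ten patients whose case-mix
# model predicted these individual mortality risks (summing to 0.5
# expected deaths).
risks = [0.02, 0.05, 0.10, 0.01, 0.08, 0.03, 0.04, 0.02, 0.06, 0.09]
print(smr(1, risks))  # SMR above 1 means more deaths than expected
```

An SMR materially above 1, relative to its sampling variation, is what triggers a ‘negative outlier’ flag; the question pursued below is what such a flag actually tells us about preventable harm.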
This issue returns to the perennial question of measuring healthcare quality. A high-quality surgeon is one who makes fewer errors and so causes fewer preventable adverse events. Deaths, or other adverse health outcomes, that would have occurred regardless of the responsible consultant cannot be attributed to variations in surgeon quality. The question we should therefore ask is whether risk-adjusted mortality is a good proxy for preventable mortality. Girling et al (2012) ask exactly this question in relation to case-mix adjusted hospital mortality and preventable mortality and conclude, ‘If 6% of hospital deaths are preventable (as suggested by the literature), the predictive value of the SMR can be no greater than 9%. This value could rise to 30%, if 15% of deaths are preventable.’ A similar argument applies to individual physicians.
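The arithmetic behind a bound of this kind follows from Bayes’ theorem: when only a small fraction of deaths is preventable, even a reasonably sensitive SMR ‘alarm’ mostly flags non-preventable variation. A rough illustration, with sensitivity and false-positive rate chosen purely for exposition (these are not Girling et al’s parameters):

```python
def ppv(prevalence, sensitivity, false_positive_rate):
    """P(truly preventable | flagged), by Bayes' theorem."""
    flag_rate = (sensitivity * prevalence
                 + false_positive_rate * (1 - prevalence))
    return sensitivity * prevalence / flag_rate

# Assumed numbers for illustration only: 6% of deaths preventable,
# the SMR alarm catches 60% of them, but also fires at rate 0.4 for
# the 94% non-preventable majority.
print(round(ppv(0.06, 0.6, 0.4), 2))  # → 0.09
```

Under these assumed rates the predictive value comes out at roughly 9%, the same order of magnitude as the Girling et al figure: the low prevalence of preventable deaths, not any flaw in risk adjustment per se, is what caps the signal.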
It is also important to ask what the consequences of publishing such data would be for patient and surgeon behaviour. In the latter case, surgeons may become more risk averse, avoiding cases in which there is a greater chance of non-preventable mortality, since these cases would reflect badly on them. Indeed, speaking on this morning’s (Wednesday 19th November) Today programme, Ian Martin, from the Federation of Surgical Speciality Associations, suggested that there was anecdotal evidence indicating that this was the case. This is certainly not in the interests of the patient population. The publication of these data may also alter the way in which patients and surgeons are matched to one another, since patients will likely decide not to visit a surgeon with a high risk-adjusted mortality rate. Yet this change in a surgeon’s case-mix, driven by patient choice, will mean that past adjusted mortality rates have poor predictive value for future adjusted mortality rates, and even less predictive value for preventable mortality.
These figures are published in the name of patient choice. Yet they may actually contain little useful information to support such a choice.