Rugby, Rugby, Rugby

This morning saw headlines reporting the publication of an open letter to MPs calling for a ban on contact rugby in schools. The letter argues that the high-impact collisions in rugby lead to a number of serious injuries, including concussion and fractures. For a child who plays a full season of rugby there is a 28% risk of experiencing an injury and an 11% risk of concussion.

These high numbers may seem to warrant a ban on full contact rugby, as the letter argues. Indeed, the expected harms that befall a rugby-playing teenager may be much higher than those associated with other prohibited activities, such as the use of cannabis. Rugby may therefore be over a ‘threshold’ required to justify a prohibition on contact rugby in schools. Such a threshold might exist where the marginal costs of enforcing a prohibition are less than the marginal benefits. The costs may be quite low, as schools would presumably comply with the ban, while the benefits are high in terms of harms avoided. However, this is obviously not how prohibitions are determined; the marginal costs of a prohibition on recreational drug use, for example, are arguably very high relative to the potential benefits, since the ban is particularly ineffective as a harm reduction strategy.

The choice, then, is political. The median voter enjoys rugby and sees it as a part of their cultural heritage. Opponents of contact rugby may be more likely to lie at one end of the political spectrum and have little effect on sports policy in a representative democracy. For many people the perceived loss of a cultural institution in schools, the loss of liberty for their child, or the overreach of the state is a great cost. (Just view some of the comments on today’s news reports.) Even in the face of strong evidence of harms, many are likely to be recalcitrant in their views, and even be annoyed that evidence was presented at all. Prof. David Nutt discovered this the hard way when he was fired from the Advisory Council on the Misuse of Drugs for comparing the risks associated with ecstasy to those of horse riding.

Nutt’s strategy is the correct one, though. To make informed choices about the policies chosen from amongst the set of possible policies in any given context, one must be able to compare them. People are well aware that full contact rugby carries risks, but most people have never had personal experience of any life-changing harm from the sport. Saying that there is a risk of 1 adverse event every ~4 exposures is potentially meaningless. But add the information that horse riding carries an equivalent risk of 1 in ~350 exposures, and ecstasy 1 in ~10,000 exposures, and individuals are better prepared to understand the risks.
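The point about comparability can be made concrete with a few lines of code. The sketch below simply puts the approximate figures quoted above on a common “1 in N” scale; the numbers are those cited in the post, not independent estimates.

```python
# Express the risks quoted above on a common "1 in N exposures" scale.
# Figures are the approximate ones cited in the post.
risks = {
    "rugby (adverse event)": 1 / 4,   # ~1 in 4 exposures
    "horse riding": 1 / 350,          # ~1 in 350 exposures
    "ecstasy": 1 / 10_000,            # ~1 in 10,000 exposures
}

# Sort from highest to lowest risk and print each on the same scale.
for activity, p in sorted(risks.items(), key=lambda kv: -kv[1]):
    print(f"{activity}: 1 in {round(1 / p):,} exposures")
```

Presented side by side like this, the relative ordering of the activities is immediate in a way that an isolated percentage is not.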

When considering investment in healthcare technologies, a benefit of 10 QALYs, or a cost-effectiveness of £10,000/QALY, is meaningless in isolation. The choice to invest takes place against the background of myriad other choices. Trying to implement harm reduction strategies in sport or elsewhere is the same. When placed alongside other potentially harmful activities, rugby appears to be one of the highest-risk, and the costs of prohibiting full contact would be small; the burden of proof would therefore seem to fall on those who oppose any prohibition. However, I suspect that the letter as currently written will have little impact on many people’s views, and therefore little impact on public consensus or policy.
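The same logic of comparison can be sketched for the cost-effectiveness example. Only the £100,000-for-10-QALYs figure follows from the post; the willingness-to-pay threshold and the comparator intervention are illustrative assumptions added for the sketch.

```python
# Illustrative only: a cost-effectiveness figure gains meaning when set
# against a willingness-to-pay threshold and other candidate investments.
# The threshold and the comparator are assumptions, not from the post.
threshold = 20_000  # £ per QALY, an illustrative willingness-to-pay

interventions = {
    "new technology": {"cost": 100_000, "qalys": 10},  # £10,000/QALY, as above
    "comparator A": {"cost": 150_000, "qalys": 5},     # £30,000/QALY (hypothetical)
}

for name, x in interventions.items():
    cost_per_qaly = x["cost"] / x["qalys"]
    verdict = "cost-effective" if cost_per_qaly <= threshold else "not cost-effective"
    print(f"{name}: £{cost_per_qaly:,.0f}/QALY -> {verdict} at £{threshold:,}/QALY")
```

Without the threshold and the comparator, the £10,000/QALY figure on its own supports no decision at all, which is the point being made above.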

E-cigarettes and the role of science in public health policy

E-cigarettes have become, without doubt, one of the public health issues du jour. Many countries and states have been quick to prohibit them, while others continue to debate the issue. The debate ostensibly revolves around the relative harms of e-cigarettes: Are they dangerous? Will they reduce the harms caused by smoking tobacco? Will children take them up? These are questions which would typically be informed by the available evidence. However, there is a growing schism within the scientific community about what indeed the evidence does say. On the one hand, there is the view that the evidence, taken altogether, overwhelmingly suggests that e-cigarettes are significantly less harmful than cigarettes and would reduce the harms caused by nicotine use. On the other hand, there is a vocal group that doubts the veracity of the available evidence and is critical of e-cigarette availability in general. Indeed, this latter view has been adopted by the biggest journals in medicine, The Lancet, the BMJ, the New England Journal of Medicine, and JAMA, each of which has published either research or editorials along this line.

The evidence around e-cigarettes was recently summarised and reviewed by Public Health England. The conclusion of the review was that e-cigarettes are 95% less harmful than smoking tobacco. So why might these journals take a position that is arguably contrary to the evidence? From a sociological perspective, epistemological conflicts in science are also political conflicts. Actions within the scientific field are directed at acquiring scientific authority, and that authority requires social recognition. However, e-cigarette policy is also a political issue, and as such actions in this area are also directed at gaining political capital. If the e-cigarette issue can be delimited to a purely scientific problem, then scientific capital can be translated into political capital. One way of achieving this is to try to establish oneself as the authoritative scientific voice on such matters and to doubt the claims made by others.

We can also view the issue in a broader context. The traditional journal format is under threat from other models of scientific publishing, including blogs, open access publishers, pre-print archives, and post-publication peer review. Much of the debate around e-cigarettes has come from these new sources. Dominant producers in the scientific field must necessarily be conservative, since it is the established structure of the scientific field that grants these producers their dominant status. But this competition in the scientific field may have wider, pernicious consequences.

Typically, we try to formulate policies that maximise social welfare. But, as Acemoglu and Robinson point out, the policy that maximises social welfare now may not maximise welfare in the long run. Different policies today affect the political equilibrium tomorrow, and thus the policies that are available to policy makers tomorrow. Prohibiting e-cigarettes today may be socially optimal if there were no reliable evidence on their harms or benefits and there were suspicions that they could cause public harm. But it is very difficult politically to reverse prohibition policies, even if evidence were to later emerge that e-cigarettes were an effective harm reduction product. Thus, even if the journals doubt the evidence around e-cigarettes, the best policy position would arguably be to remain agnostic and await further evidence. But this would not be a position that would grant them socially recognised scientific capital.

Perhaps this e-cigarette debate is reflective of a broader shift in the way in which scientific evidence and those with scientific capital are engaged in public health policy decisions. Different forms of evidence beyond RCTs are being more widely accepted in biomedical research and methods of evidence synthesis are being developed. New forums are also becoming available for their dissemination. This, I would say, can only be a positive thing.

What’s wrong with a simple model?

Healthcare institutions are large, complex systems, and evaluating the effects of policies or structural interventions within these systems is challenging. In many cases it is not possible to directly measure the effect of the intervention at the patient level. The impact on any one patient is too small, necessitating a prohibitively large sample size in any one study, and yet when applied across all patients the effects of the intervention may be both clinically and economically significant (Lilford et al, 2010). Consider the frequently discussed “Seven day NHS”: at the margin, a patient’s risk of mortality may change by only four tenths of a percentage point at best (if the intervention works as discussed in parliament). We may therefore rely on measuring changes to more “upstream” outcomes, which may act as proxies for the clinically important outcomes such as mortality. But then how do we make sense of the various pieces of evidence produced, and evaluate the intervention in terms that can be compared to other interventions? A model!

In the not so distant past, I presented a simple model of how an electronic prescribing system may impact on patient clinical and economic outcomes. The commentary on this model was that one should not trust a model so simple. Simple models do not capture this or that aspect of the world and cannot account for this or that observation. But, I would argue, this critique does not hold water.
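A model of the kind described above can be sketched in a few lines. This is not the model I presented, only a minimal illustration of the same structure: an e-prescribing system reduces the medication error rate, and errors carry some probability of patient harm. Every parameter value below is a hypothetical assumption.

```python
# A deliberately simple model: prescribing errors occur at some rate,
# a fraction of errors cause harm, and each harm carries a QALY loss
# and a cost. All parameter values are hypothetical.

def expected_outcomes(error_rate, p_harm_given_error=0.01,
                      qaly_loss_per_harm=0.5, cost_per_harm=2_000,
                      n_prescriptions=1_000_000):
    harms = n_prescriptions * error_rate * p_harm_given_error
    return {"harms": harms,
            "qalys_lost": harms * qaly_loss_per_harm,
            "cost": harms * cost_per_harm}

baseline = expected_outcomes(error_rate=0.05)     # 5% error rate, assumed
with_system = expected_outcomes(error_rate=0.03)  # reduced to 3%, assumed

qalys_gained = baseline["qalys_lost"] - with_system["qalys_lost"]
cost_averted = baseline["cost"] - with_system["cost"]
print(f"QALYs gained: {qalys_gained:.0f}, cost averted: £{cost_averted:,.0f}")
```

Crude as it is, a sketch like this makes the causal chain and its assumptions explicit, which is precisely what the critics of simple models tend to overlook.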

Kieran Healy distinguishes, in the excellent essay Fuck Nuance, which explores the value of nuance in sociological theory, three “nuance traps” that the person who views nuance as an important virtue may fall into. Firstly, the fine-grain: nuance as detailed, merely empirical description of the world. Secondly, the conceptual framework: the “extensive expansion of some theoretical system in a way that effectively closes it off from rebuttal or disconfirmation by anything in the world.” And, thirdly, the connoisseur: the valuing of nuance to demonstrate one’s ability “to grasp and express the richness, texture, and flow of social reality itself.” Any of these manifestations of the nuance trap could be applied to any model presented to an audience, but I think the criticism that most often arises in the context of healthcare systems research is the nuance of the fine-grain.

The aim of models in our context is to predict and explain phenomena in the healthcare system. Phenomena are distinct from the data from which they are inferred (Bogen & Woodward, 1988). Phenomena are generally stable and are the result of the confluence of a manageable number of causal factors, whereas data are noisy measures of the phenomena that are generated by a very large number of factors, including measurement error and bias. To quote Bogen and Woodward:

In undertaking to explain phenomena rather than data, a scientist can avoid having to tell an enormous number of independent, highly local, and idiosyncratic causal stories involving the (often inaccessible and intractable) details of specific experimental and observational contexts. He can focus instead on what is constant and stable across different contexts. This opens up the possibility of explaining a wide range of cases in terms of a few factors or general principles. It also facilitates derivability and the systematic exhibition of dependency-relations.

The models can and should reflect important aspects of the system, such as why there are nonlinearities in the system, but ultimately we are trying to explain why an intervention works and predict its effects. The function of the data is to help with this task. Simple models are not bad merely by virtue of being simple.