What’s wrong with a simple model?

Healthcare institutions are large, complex systems, and evaluating the effects of policies or structural interventions within them is challenging. In many cases it is not possible to measure the effect of an intervention directly at the patient level. The impact on any one patient is too small, necessitating a prohibitively large sample size in any one study, and yet, applied across all patients, the effects of the intervention may be both clinically and economically significant (Lilford et al, 2010). Consider the frequently discussed “Seven day NHS”: at the margin, a patient’s risk of mortality may change by only four tenths of a percentage point at best (if the intervention works as discussed in parliament). We may therefore rely on measuring changes to more “upstream” outcomes that act as proxies for the clinically important outcomes such as mortality. But then how do we make sense of the various pieces of evidence produced, and evaluate the intervention in terms that can be compared to other interventions? A model!
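To see why an effect of this size is so hard to detect directly, a back-of-the-envelope power calculation helps. The sketch below uses the standard normal-approximation formula for comparing two proportions; the baseline mortality rate of 1.6% and the design parameters (5% two-sided significance, 80% power) are illustrative assumptions, not figures from the studies cited here.

```python
from math import ceil

def n_per_arm(p1, p2, z_alpha=1.959964, z_beta=0.841621):
    """Approximate sample size per arm to detect a difference between
    two proportions p1 and p2, using the normal approximation.
    Defaults correspond to a two-sided 5% test with 80% power."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Assumed baseline mortality of 1.6% vs 1.2% under the intervention
# (a 0.4 percentage-point reduction, as in the "Seven day NHS" debate).
n = n_per_arm(0.016, 0.012)
print(n)  # roughly 13,500 patients per arm, ~27,000 in total
```

Even under these generous assumptions, a single trial would need tens of thousands of patients, which is why evaluations of system-level interventions so often turn to upstream proxy outcomes instead.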

In the not so distant past, I presented a simple model of how an electronic prescribing system might affect patients’ clinical and economic outcomes. The commentary on this model was that one should not trust a model so simple: simple models do not capture this or that aspect of the world, and cannot account for this or that observation. But, I would argue, this critique does not hold water.

Kieran Healy, in his excellent essay Fuck Nuance, which examines the value of nuance in sociological theory, distinguishes three “nuance traps” into which a person who views nuance as an important virtue may fall. First, the nuance of the fine-grain: the detailed, merely empirical description of the world. Second, the nuance of the conceptual framework: the “extensive expansion of some theoretical system in a way that effectively closes it off from rebuttal or disconfirmation by anything in the world.” And third, the nuance of the connoisseur: the valuing of nuance to demonstrate one’s ability “to grasp and express the richness, texture, and flow of social reality itself.” Any of these traps could apply to any model presented to an audience, but the criticism that most often arises in the context of healthcare systems research is the nuance of the fine-grain.

The aim of models in our context is to predict and explain phenomena in the healthcare system. Phenomena are distinct from the data from which they are inferred (Bogen & Woodward, 1988). Phenomena are generally stable and are the result of the confluence of a manageable number of causal factors, whereas data are noisy measures of the phenomena that are generated by a very large number of factors, including measurement error and bias. To quote Bogen and Woodward:

In undertaking to explain phenomena rather than data, a scientist can avoid having to tell an enormous number of independent, highly local, and idiosyncratic causal stories involving the (often inaccessible and intractable) details of specific experimental and observational contexts. He can focus instead on what is constant and stable across different contexts. This opens up the possibility of explaining a wide range of cases in terms of a few factors or general principles. It also facilitates derivability and the systematic exhibition of dependency-relations.

Models can and should reflect important features of the system, such as the sources of its nonlinearities, but ultimately we are trying to explain why an intervention works and to predict its effects. The function of the data is to help with this task. Simple models are not bad merely by virtue of being simple.


  • Health economics, statistics, and health services research at the University of Warwick. Also like rock climbing and making noise on the guitar.

