In a previous post I asked whether the study by Claxton et al can or should inform the cost-effectiveness threshold used by NICE. The authors argued that, “it is the expected health effects … of the average displacement within the current NHS … that is relevant to the estimate of the threshold.” Accepting this premise, the authors aimed to estimate the average net effect (in terms of QALYs) that has historically resulted from contraction and expansion of the healthcare budget. I want to explore here briefly whether their empirical estimates can indeed be interpreted as such.
When the healthcare budget contracts we may remove services or technologies with low cost-effectiveness, and when it expands we may implement services or technologies with high cost-effectiveness. The argument is that we should not reimburse a new technology whose cost-effectiveness is lower than what the healthcare service already achieves in practice when the budget changes, which is the net effect of contraction and expansion. To estimate this effect, Claxton and colleagues used local healthcare authority level data on expenditure across different programmes of healthcare (e.g. respiratory healthcare or oncology) and examined how changes in this expenditure affected health outcomes, such as mortality. Since healthcare expenditure is likely to be correlated with unobservable determinants of mortality, the authors adopted an instrumental variables approach.
At this point it is important to note that total healthcare expenditure may vary for two reasons. On the supply side, there may be changes in unit costs or shifts in the overall budget constraint. On the demand side, population health may change, affecting the need for healthcare, the identity of the patients treated, and the types of services demanded, as well as the use of healthcare by current patients. I would argue that it is the supply side changes we are interested in here. Demand side changes may shift which programmes are utilised, and the resulting productivity of those programmes, since the characteristics of the patients will change.
The estimates from an instrumental variables estimator can be interpreted as the local average treatment effect (LATE): the average effect of a change in the variable of interest (here, healthcare expenditure) resulting from a change in the instrumental variable (IV). The IVs used by Claxton et al are socio-economic variables, such as the index of multiple deprivation and the proportion of the population providing unpaid care. These variables arguably operate on both the demand and supply sides, since they affect population healthcare needs, leading to different populations being treated, and may also affect the healthcare utilisation of current patients.
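For readers less familiar with the method, here is a minimal two-stage least squares (2SLS) sketch with simulated data. All variable names and numbers are invented for illustration and are not from the study; `spend` stands in for programme expenditure, `mort` for mortality, and `z` for a socio-economic instrument.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated data: z is the instrument (e.g. a deprivation index),
# u is an unobserved confounder of spending and mortality.
z = rng.normal(size=n)
u = rng.normal(size=n)
spend = 0.8 * z + 0.5 * u + rng.normal(size=n)  # endogenous regressor
beta_true = -0.3                                # true effect of spending
mort = beta_true * spend + 0.5 * u + rng.normal(size=n)

def two_sls(y, x, z):
    """2SLS for one endogenous regressor and one instrument (with constant)."""
    Z = np.column_stack([np.ones_like(z), z])
    # First stage: predict the endogenous regressor from the instrument.
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    # Second stage: regress the outcome on the predicted regressor.
    X_hat = np.column_stack([np.ones_like(x_hat), x_hat])
    return np.linalg.lstsq(X_hat, y, rcond=None)[0][1]

# OLS is biased here because spend is correlated with u; 2SLS is not.
ols_beta = np.linalg.lstsq(np.column_stack([np.ones(n), spend]),
                           mort, rcond=None)[0][1]
iv_beta = two_sls(mort, spend, z)
print(f"OLS: {ols_beta:.3f}, 2SLS: {iv_beta:.3f} (true effect: {beta_true})")
```

The point of the LATE interpretation is that `iv_beta` recovers the effect only for the variation in expenditure that is driven by the instrument, which is why the nature of the instruments matters for what is being identified.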
The empirical estimates of Claxton et al may therefore capture both the effect of contractions and expansions of the budget (the effect of interest) and the effect of changes in the programmes of care provided, the treatments within them, and their productivity, resulting from changes to population health needs and the identity of patients.*
Overall, I think that even if we accept the authors’ arguments about why they are trying to identify this effect, their empirical strategy may not in fact identify it.
*It may be argued that a test of over-identifying restrictions (OID), which tests whether the instruments are correlated with the errors, would detect whether these instruments were related to health needs. However, note that we are looking within a programme of care, for example the effect of cancer expenditure on cancer mortality, so we are conditioning on having cancer (and being diagnosed with it) in this analysis. Socio-economic variables may be determinants of getting cancer, or of which type of cancer a patient gets, but may not be determinants of (i.e. may be independent of) the health outcomes from cancer once we have conditioned on having cancer and on other factors determining health outcomes. They may therefore pass the OID test.