Analysing Patient-Level Data using HES Workshop

This intensive workshop introduces participants to HES (Hospital Episode Statistics) data and how to handle and manipulate these very large patient-level data sets using computer software. Understanding and interpreting the data is a key first step for using these data in economic evaluation or evaluating health care policy and practice. Participants will engage in lectures and problem-solving exercises, analysing the information in highly interactive sessions. Data manipulation and statistical analysis will be taught and demonstrated using Stata.

This workshop is offered to people in the academic, public and commercial sectors. It is useful for analysts who wish to harness the power of HES non-randomised, episode-level patient data to shed further light on such things as patient costs and pathways, re-admissions, outcomes and provider performance. The workshop is suitable for individuals working in NHS hospitals, commissioning organisations, NHS England, Monitor, the Department of Health and Social Care, pharmaceutical companies or consultancies, and for health care researchers and PhD students. Overseas participants may find the tuition helpful for their own country, but note that the course is heavily oriented towards understanding HES data for England.

The workshop fee is 900 GBP for the public sector and 1,400 GBP for the commercial sector. This includes all tuition, course materials, lunches, the welcome and drinks reception, the workshop dinner and refreshments, but does not include accommodation.

Online registration is now open; further information and registration is at: https://www.york.ac.uk/che/courses/patient-data/

Subsidised places are available for full-time PhD students. If this is applicable to you, please email the workshop administrators and request an Application Form.

Contact: Gillian or Louise, Workshop Administrators, at: che-apd@york.ac.uk;  tel: +44 (0)1904 321436

“Economists are the gods of global health.” Richard Horton at it again!

Richard Horton dislikes the economics discipline. That should not come as a shock to anyone. But worse still, this animus appears to arise from a misunderstanding of what economists actually do. Not so long ago, we discussed the fundamental errors in a piece Horton had published in The Lancet. Well, a new tweet from Horton leads us to yet another piece denigrating the economics profession:

The essence of Horton’s latest tirade is that (1) “economists silence the smaller voices of medicine”, (2) economists are responsible for austerity, (3) austerity has had a harmful effect both socially and economically, so that (4) “The task of health professionals is to resist and to oppose the egregious economics of our times.” The implication of these four points is that the influence of economists should be (at least partially) extricated from medicine and medical research. I would agree with point (3), as would a broad consensus of academic economists (read this post and others from Simon Wren-Lewis for a good summary). But the other points don’t really stand up to close scrutiny.

One of the goals of academic economics is to provide evidence to support an optimal allocation of resources. From a macro perspective this may be the allocative efficiency of spending across sectors such as education and health care. Or, in a context relevant to much health economic analysis, how to allocate a health care budget fixed by the government through a political decision-making process. What is considered ‘optimal’ in each of these circumstances is a normative decision and is, again, a political choice in practice. Perhaps this overlap between politics and economics has confused Horton, who mistakes one for the other with claims like

It is economists we must thank for the modern epidemic of austerity that has engulfed our world. Austerity is the calling card of neoliberalism.

But Horton also claims economics has displaced the “modest discipline of biology” in medicine. So, a reductio ad absurdum argument would have economists doing all the medical research and then implementing all medical and health care policy. Why do we need anyone else?

One can certainly claim this blog post is an apologia for economics. Of course I would defend it. But it is true that there are good examples of poor economics and academic overreach. The work of the late Gary Becker was often criticised along these lines, his rational theory of addiction in particular. However, criticisms of the work of economists frequently come from economists themselves; I hope this blog serves as a case in point. More and more, health economists work as members of interdisciplinary teams, where a plurality of approaches, qualitative and quantitative, can aid in making sound inferences and supporting effective policy.

Horton’s views cannot, unfortunately, be dismissed as the ravings of the uninformed. He occupies an important position in medical research, serving as editor-in-chief of one of the top medical journals, and his voice is influential. Publishing false claims about economics serves no useful purpose and undermines the very positions he advocates, many of which economists actually agree with.


The curse of endogeneity in the clinical literature

Endogeneity is everywhere. There is always a reason to suspect some endogeneity in a model; sometimes it cannot be eliminated entirely and must simply be reduced to acceptable levels. Health economists produce research that is often relevant to both economics and clinical journals, but the requirements of these two types of journal can differ substantially. One difference is that the clinical literature and biostatisticians generally pay little attention to endogeneity, in stark contrast to economics journals and econometricians. When it comes to making policy decisions, ignoring the effects of endogeneity can have disastrous consequences. Here’s an example of why.

Patient and procedure volume has been shown to be inversely correlated with adverse clinical outcomes such as mortality. This suggests that big hospitals are good. But what about causality? Is there any? And, if so, in which direction does it run?

The hypothesis that volume causes better outcomes is called ‘practice makes perfect’ (PMP). This could be due either to ‘learning by doing’ or to ‘scale economies’. If PMP were the case, we could identify whether a learning-by-doing mechanism was responsible either by looking at the effects of lagged volume, or by seeing whether a clinician who had been at a high-volume hospital ‘took’ their skills with them. The competing hypothesis is ‘selective referral’ (SR): hospitals with superior outcomes attract more patients, which consequently boosts their volume.

In the case of a possibly simultaneous mechanism like this, we resort to instrumental variables. A common instrument for volume exploits the exogenous preference of individuals for their nearest hospital. The instrument could then be, at the patient level, the nearest hospital (or the distance to it), or at the hospital level, the size of the catchment area.
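As a rough sketch of how such a distance instrument works in practice, the following simulation builds in a selective-referral confounder (sicker patients are referred to high-volume hospitals) and compares naive OLS with two-stage least squares. It is written in Python rather than the Stata used on the workshop above, and every variable name and parameter value is an illustrative assumption, not an estimate from real HES data:

```python
# Illustrative simulation of the volume-outcome IV logic described above.
# All names and parameter values are hypothetical assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

u = rng.normal(size=n)                # unobserved severity (the confounder)
dist = rng.exponential(10.0, size=n)  # distance to a high-volume hospital
# Selective referral: sicker patients (high u) end up at high-volume
# hospitals, so volume is endogenous in the outcome equation.
volume = 5.0 - 0.2 * dist + 1.0 * u + rng.normal(size=n)
# Assumed true 'practice makes perfect' effect of volume on mortality: -0.5
mortality = 2.0 - 0.5 * volume + 2.0 * u + rng.normal(size=n)

def ols(y, x):
    """Least-squares fit of y on [1, x]; returns (intercept, slope)."""
    X = np.column_stack([np.ones(len(y)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS: biased towards zero because volume is correlated with severity u
b_ols = ols(mortality, volume)[1]

# 2SLS: first stage predicts volume from distance (the instrument),
# second stage regresses mortality on the first-stage fitted values
a_fs, b_fs = ols(volume, dist)
vol_hat = a_fs + b_fs * dist
b_iv = ols(mortality, vol_hat)[1]

# The IV estimate should sit close to the assumed truth of -0.5,
# while the OLS estimate is pulled towards zero by selective referral.
print(f"OLS: {b_ols:.2f}  IV: {b_iv:.2f}  (assumed truth: -0.50)")
```

In a real application the second-stage standard errors would also need correcting (a packaged 2SLS routine, such as Stata's ivregress, handles this), but the bias pattern is the point here: selective referral masks the practice-makes-perfect effect unless volume is instrumented.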

In the clinical literature this issue of the direction of causality has often been ignored. The association between volume and positive clinical outcomes in some areas of medicine has led to calls for the centralisation of healthcare services. This implicitly assumes that the PMP hypothesis is true, or at least plays a stronger role than SR. But what if the volume-outcome effect is driven more by SR than PMP? Then the sickest patients will all be sent to the new large hospitals, which will have no effect on outcomes and may even harm them by increasing the burden on staff, using resources inefficiently and making patients travel further, among other things.

For many areas of medicine a causal link has been demonstrated between volume and outcome, and in some cases both PMP and SR have been shown to play a role. But this is just one demonstration of the problems of ignoring endogeneity. For many healthcare interventions, causal inference comes from randomised experiments (something health economics could do with more of), but often such an experiment is unethical or impractical. Where we do rely on observational research, economists and econometricians should be trying to communicate these issues.