OHE Lunchtime Seminar: What Can NHS Trusts Do to Reduce Cancer Waiting Times?

OHE Lunchtime Seminar with Sarah Karlsberg, Steven Paling, and Júlia González Esquerré on ‘What can NHS trusts do to reduce cancer waiting times?’ To be held on 14th November 2018 from 12 p.m. to 2 p.m.

Rapid diagnosis and access to treatment for cancer are vital for both clinical outcomes and patient experience of care. The NHS Constitution contains several waiting times targets, including that 85% of patients should start cancer treatment within 62 days of an urgent GP referral. However, waiting times are increasing in England: the 62-day target has not been met since late 2013 and, in July 2018, the NHS recorded its worst performance since records began in October 2009.

This seminar will present evidence on where NHS trusts can take practical steps to reduce cancer waiting times. The work uses patient-level data (Hospital Episode Statistics) from 2016/17 and an econometric model to quantify the potential effects of several recommendations on the average length of patients’ cancer pathways. The project won the 2018 John Hoy Memorial Award for the best piece of economic analysis produced by government economists.

Sarah Karlsberg, Steven Paling, and Júlia González Esquerré work in the NHS Improvement Economics Team, which provides economics expertise to NHS Improvement (previously Monitor and the Trust Development Authority) and the provider sector. Their work covers all aspects of provider policy, including operational and financial performance, quality of care, leadership and strategic change. Sarah is also a Visiting Fellow at OHE.

Download the full seminar invite here.

The seminar will be held in the Sir Alexander Fleming Room, Southside, 7th Floor, 105 Victoria Street, London SW1E 6QT. A buffet lunch will be available from 12 p.m. The seminar will start promptly at 12:30 p.m. and finish promptly at 2 p.m.

If you would like to attend this seminar, please reply to ohegeneral@ohe.org.

Chris Sampson’s journal round-up for 17th September 2018

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Does competition from private surgical centres improve public hospitals’ performance? Evidence from the English National Health Service. Journal of Public Economics Published 11th September 2018

This study looks at proper (supply-side) privatisation in the NHS. The subject is the government-backed introduction of Independent Sector Treatment Centres (ISTCs), which, in the name of profit, provide routine elective surgical procedures to NHS patients. ISTCs were directed to areas with high waiting times and began rolling out from 2003.

The authors take pre-surgery length of stay as a proxy for efficiency and hypothesise that the entry of ISTCs would improve efficiency in nearby NHS hospitals. They also hypothesise that the ISTCs would cream-skim healthier patients, leaving NHS hospitals to foot the bill for a more challenging casemix. Difference-in-differences regressions are used to test these hypotheses, with the treatment group being those NHS hospitals close to ISTCs and the control group being those unlikely to be affected. The authors use patient-level Hospital Episode Statistics from 2002–2008 for elective hip and knee replacements.
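
For readers less familiar with the method, the general shape of such a regression (a sketch of the standard two-way fixed effects form, not necessarily the authors’ exact specification) is:

$$ y_{iht} = \alpha_h + \gamma_t + \beta\,(\mathrm{Exposed}_h \times \mathrm{Post}_t) + X_{iht}'\delta + \varepsilon_{iht} $$

where the outcome is patient-level pre-surgery length of stay, the alpha and gamma terms are hospital and period fixed effects, X collects patient-level controls, and beta captures the differential post-entry change in hospitals exposed to ISTC competition.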

The key difficulty here is that the trend in length of stay changed dramatically at the time ISTCs began to be introduced, regardless of whether a hospital was affected by their introduction. This is because there was a whole suite of policy and structural changes being implemented around this period, many targeting hospital efficiency. So we’re looking at comparing new trends, not comparing changes in existing levels or trends.

The authors’ hypotheses prove right. Pre-surgery length of stay fell in exposed hospitals by around 16%. The ISTCs engaged in risk selection, meaning that NHS hospitals were left with sicker patients. What’s more, the savings for NHS hospitals (from shorter pre-surgery length of stay) were more than offset by an increase in post-surgery length of stay, which may have been due to the change in casemix.

I’m not sure how useful difference-in-differences is in this case. We don’t know what the trend would have been without the intervention, because the pre-intervention trend provides no clues about it. And while the outcome is shown to be unrelated to selection into the intervention, we don’t know whether selection into the ISTC intervention was correlated with exposure to other policy changes. The authors do their best to quell these concerns about parallel trends and correlated policy shocks, and the results appear robust.

Broadly speaking, the study satisfies my prior view of for-profit providers as leeches on the NHS. Still, I’m left a bit unsure of the findings. The problem is, I don’t see the causal mechanism. Hospitals had the financial incentive to be efficient and achieve a budget surplus without competition from ISTCs. It’s hard (for me, at least) to see how reduced length of stay has anything to do with competition unless hospitals used it as a basis for getting more patients through the door, which, given that ISTCs were introduced in areas with high waiting times, the hospitals could have done anyway.

While the paper describes a smart and thorough analysis, the findings don’t tell us whether ISTCs are good or bad. Both the length of stay effect and the casemix effect are ambiguous with respect to patient outcomes. If only we had some PROMs to work with…

One method, many methodological choices: a structured review of discrete-choice experiments for health state valuation. PharmacoEconomics [PubMed] Published 8th September 2018

Discrete choice experiments (DCEs) are in vogue when it comes to health state valuation. But there is disagreement about how they should be conducted. Studies can differ in terms of the design of the choice task, the design of the experiment, and the analysis methods. The purpose of this study is to review what has been going on: how have studies differed, and what could that mean for our use of the value sets that are estimated?

A search of PubMed for valuation studies using DCEs – including generic and condition-specific measures – turned up 1132 citations, of which 63 were ultimately included in the review. Data were extracted and quality assessed.

The ways in which the studies differed, and the ways in which they were similar, hint at what’s needed from future research. The majority of recent studies were conducted online, which could be problematic if we think self-selecting online panels aren’t representative. Most studies used five or six attributes to describe options, and many included duration as an attribute. The methodological tweaks necessary to anchor at 0=dead were a key source of variation. Those using duration varied in terms of the number of levels presented and the range of duration (from 2 months to 50 years). Other studies adopted alternative anchoring strategies.

In DCE design, there is a necessary trade-off between statistical efficiency and the difficulty of the task for respondents. A variety of methods have been employed to try to ease this difficulty, but there remains a lack of consensus on the best approach. An agreed criterion for this trade-off could facilitate consistency. Some of the consistency that does appear in the literature is due to conformity with EuroQol’s EQ-VT protocol.

Unfortunately, for casual users of DCE valuations, all of this means that we can’t just assume that a DCE is a DCE is a DCE. Understanding the methodological choices involved is important in the application of resultant value sets.

Trusting the results of model-based economic analyses: is there a pragmatic validation solution? PharmacoEconomics [PubMed] Published 6th September 2018

Decision models are almost never validated. This means that – save for a superficial assessment of their outputs – they are taken on good faith. That should be a worry. This article builds on the experience of the authors to outline why validation doesn’t take place and to try to identify solutions. This experience includes a pilot study in France, NICE Evidence Review Groups, and the perspective of a consulting company modeller.

There are a variety of reasons why validation is not conducted, but resource constraints are a big part of it. Neither HTA agencies, nor modellers themselves, have the time to conduct validation and verification exercises. The core of the authors’ proposed solution is to end the routine development of bespoke models. Models – or, at least, parts of models – need to be taken off the shelf. Thus, open source or otherwise transparent modelling standards are a prerequisite for this. The key idea is to create ‘standard’ or ‘reference’ models, which can be extensively validated and tweaked. The most radical aspect of this proposal is that they should be ‘freely available’.

But rather than offering a path to open source modelling, the authors offer recommendations for how we should conduct ourselves until open source modelling is realised. These include the adoption of a modular and incremental approach to modelling, combined with more transparent reporting. I agree; we need a shift in mindset. Yet the barriers to open source models are – I believe – the same barriers that would prevent these recommendations from being realised. Modellers have neither the time nor the incentive to provide full and transparent reporting. The intellectual property value of models means that public release of incremental developments is not seen as a sensible thing to do. Thus, the authors’ recommendations appear to me to be dependent on open source modelling, rather than an interim solution while we wait for it. Nevertheless, this is the kind of innovative thinking that we need.


Method of the month: Distributional cost effectiveness analysis

Once a month we discuss a particular research method that may be of interest to people working in health economics. We’ll consider widely used key methodologies, as well as more novel approaches. Our reviews are not designed to be comprehensive but provide an introduction to the method, its underlying principles, some applied examples, and where to find out more. If you’d like to write a post for this series, get in touch. This month’s method is distributional cost effectiveness analysis.

Principles

Variation in population health outcomes, particularly when socially patterned by characteristics such as income and race, is often of concern to policymakers. For example, the fact that people born in the poorest tenth of neighbourhoods in England can expect to live 19 fewer years of healthy life than those born in the richest tenth, or the fact that black Americans born today can expect to die 4 years earlier than white Americans, is widely considered unfair and in need of policy attention. As policymakers look to implement health programmes to tackle such disparities, they need tools to evaluate the likely impacts of the alternative programmes available to them, both on reducing these undesirable health inequalities and on improving population health.

Traditional tools for prospectively evaluating health programmes – that is, estimating the likely impacts of health programmes prior to their implementation – are typically based on cost-effectiveness analysis (CEA). CEA selects those programmes that most improve the health of the average recipient, taking into consideration the health opportunity costs involved in implementing the programme. When using CEA to select health programmes there is, therefore, a risk that the programmes selected will not reduce the health disparities of concern to policymakers, as these disparities are not part of the evaluation process used when comparing programmes. Indeed, in some cases, the programmes chosen using CEA may even unintentionally exacerbate these health inequalities.

There has been recent methodological work to build upon standard CEA methods by explicitly incorporating concerns for reducing health disparities. This equity-augmented form of CEA is called distributional cost effectiveness analysis (DCEA). DCEA estimates the impacts of health interventions on different groups within the population and evaluates the health distributions resulting from these interventions in terms of both health inequality and population health. Where necessary, DCEA can then be used to guide the trade-off between these different dimensions to pick the most “socially beneficial” programme to implement.

Implementation

The six core steps in implementing a DCEA are outlined below – full details of how DCEA is conducted in practice and applied to evaluate alternative options in a real case study (the NHS Bowel Cancer Screening Programme in England) can be found in a published tutorial.

1. Identify policy-relevant subgroups in the population

The first step in the analysis is to decide which characteristics of the population are of policy concern when thinking about health inequalities. For example, in England there is a lot of concern about the fact that people born in poor neighbourhoods can expect to die earlier than those born in rich neighbourhoods, but little concern about the fact that men have shorter life expectancies than women.

2. Construct the baseline distribution of health

The next step is to construct a baseline distribution of health for the population. This baseline distribution describes the health of the population, typically measured as quality-adjusted life expectancy at birth, showing the level of health and health inequality prior to implementing the proposed interventions. This distribution can be standardised (using methods of either direct or indirect standardisation) to remove any variation in health that is not associated with the characteristics of interest. For example, in England, we might standardise the health distribution to remove variation associated with gender but retain variation associated with neighbourhood deprivation. This gives us a description of the population health distribution with a particular focus on the health disparities we are trying to reduce. An example of how to construct such a ‘social distribution of health’ for England is given in another published article.
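
As a rough illustration of direct standardisation, the sketch below (in Python, using made-up numbers rather than real English data) applies common gender weights to every deprivation quintile, so that only deprivation-related variation remains:

```python
# A minimal sketch of direct standardisation; all numbers are hypothetical.
import numpy as np

# Assumed quality-adjusted life expectancy (QALE) at birth by deprivation
# quintile (rows, most to least deprived) and gender (columns: female, male).
qale = np.array([
    [62.0, 58.0],
    [66.0, 62.0],
    [69.0, 65.0],
    [71.0, 68.0],
    [74.0, 71.0],
])

# Direct standardisation: apply the same population-wide gender weights to
# every quintile, so gender no longer contributes to the measured variation.
gender_weights = np.array([0.51, 0.49])  # assumed population shares

baseline = qale @ gender_weights  # gender-standardised QALE per quintile
print(baseline)  # the 'social distribution of health' across quintiles
```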

3. Estimate post-intervention distributions of health

We next estimate the health impacts of the interventions we are comparing. In producing these estimates we need to take into account differences, across each of the equity-relevant subgroups identified, in the:

  • prevalence and incidence of the diseases impacted by the intervention,
  • rates of uptake and adherence to the intervention,
  • efficacy of the intervention,
  • mortality and morbidity, and
  • health opportunity costs.

Standardising these health impacts and combining them with the baseline distribution of health derived above gives us estimated post-intervention distributions of health for each intervention, as sketched below.
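
Continuing the toy example from step 2 (every input here is an assumption for illustration, not data from the tutorial), a post-intervention distribution might be built up like this:

```python
# A minimal sketch of step 3, reusing `baseline` from the previous sketch.
import numpy as np

baseline = np.array([60.04, 64.04, 67.04, 69.53, 72.53])  # gender-standardised QALE

# Assumed per-quintile inputs for a hypothetical screening programme:
incidence = np.array([0.012, 0.010, 0.009, 0.008, 0.007])  # disease incidence
uptake    = np.array([0.45, 0.55, 0.62, 0.68, 0.72])       # programme uptake
qaly_gain = 8.0  # assumed QALYs gained per case detected and treated

gain = incidence * uptake * qaly_gain  # expected health gain per person

# Assumed health opportunity cost: QALYs displaced by programme spending,
# here assumed to fall more heavily on more deprived groups.
opp_cost = 0.02 * np.array([1.50, 1.25, 1.00, 0.75, 0.50])

post = baseline + gain - opp_cost  # estimated post-intervention distribution
print(post - baseline)             # net health change by quintile
```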

4. Compare post-intervention distributions using the health equity impact plane

Once post-intervention distributions of health have been estimated for each intervention, we can compare them both in terms of their level of average health and in terms of their level of health inequality. Whilst calculating average levels of health in the distributions is straightforward, calculating levels of inequality requires some value judgements to be made. There is a wide range of alternative inequality measures that could be employed, each of which captures different aspects of inequality. For example, a relative inequality measure would conclude that a health distribution in which half the population lives for 40 years and the other half lives for 50 years is just as unequal as one in which half the population lives for 80 years and the other half lives for 100 years. An absolute inequality measure would instead conclude that the equivalence is with a distribution in which half the population lives for 80 years and the other half lives for 90 years.
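
To make the distinction concrete, the equivalences in that example work out as follows:

$$ \frac{50}{40} = \frac{100}{80} = 1.25 \quad \text{(equally unequal in relative terms)} $$

$$ 50 - 40 = 90 - 80 = 10 \quad \text{(equally unequal in absolute terms)} $$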

Two commonly used inequality measures are the Atkinson relative inequality measure and the Kolm absolute inequality measure. These both have the additional feature that they can be calibrated using an inequality aversion parameter to vary the level of priority given to those worst off in the distribution. We will see these inequality aversion parameters in action in the next step of the DCEA process.

Having selected a suitable inequality measure, we can plot our post-intervention distributions on a health equity impact plane. Suppose we are comparing two interventions, A and B: we can plot intervention A at the origin of the plane and plot intervention B relative to A.

[Figure: the health equity impact plane, with intervention A at the origin and intervention B plotted relative to it.]

If intervention B falls in the north-east quadrant of the health equity impact plane we know it both improves health overall and reduces health inequality relative to intervention A and so intervention B should be selected. If, however, intervention B falls in the south-west quadrant of the health equity impact plane we know it both reduces health and increases health inequality relative to intervention A and so intervention A should be selected. If intervention B falls either in the north-west or south-east quadrants of the health equity impact plane there is no obvious answer as to which intervention should be preferred as there is a trade-off to be made between health equity and total health.
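
As a toy illustration of this decision logic (the function name and sign conventions are my own, not from the tutorial; a reduction in inequality is coded as a negative change):

```python
# A toy classifier for the health equity impact plane (hypothetical helper).
def equity_impact_quadrant(d_health: float, d_inequality: float) -> str:
    """d_health: B's average health minus A's; d_inequality: B's inequality minus A's."""
    if d_health > 0 and d_inequality < 0:
        return "north-east: B wins on both counts -> choose B"
    if d_health < 0 and d_inequality > 0:
        return "south-west: B loses on both counts -> choose A"
    return "north-west or south-east: trade-off -> apply a social welfare function (step 5)"

print(equity_impact_quadrant(0.8, 0.1))   # more health, more inequality: trade-off
print(equity_impact_quadrant(0.8, -0.1))  # more health, less inequality: choose B
```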

5. Evaluate trade-offs between inequality and efficiency using social welfare functions

We use social welfare functions to trade off inequality reduction against average health improvement. These social welfare functions are constructed by combining our chosen measure of inequality with the average health in the distribution. This combination of inequality and average health is used to calculate what is known as an equally distributed equivalent (EDE) level of health. The EDE summarises the health distribution being analysed as a single number: the amount of health that each person in a hypothetical perfectly equal health distribution would need to have for us to be indifferent between that perfectly equal distribution and the actual health distribution analysed. Where our social welfare function is built around an inequality measure with an inequality aversion parameter, the EDE level of health will also be a function of that parameter. Where inequality aversion is set to zero, there is no concern for inequality and the EDE simply reflects the average health in the distribution, replicating the results we would see under standard utilitarian CEA. As the inequality aversion level approaches infinity, our focus falls increasingly on those worst off in the health distribution until, at the limit, we reflect the Rawlsian idea of focusing entirely on improving the lot of the worst-off in society.


Social welfare functions derived from the Atkinson relative inequality measure and the Kolm absolute inequality measure are given below; the inequality aversion parameters are ε (Atkinson) and α (Kolm). Research carried out with members of the public in England suggests that suitable values for the Atkinson and Kolm inequality aversion parameters are 10.95 and 0.15 respectively.

Atkinson relative social welfare function:

$$ W_{\mathrm{Atkinson}} = \left( \frac{1}{N} \sum_{i=1}^{N} h_i^{1-\varepsilon} \right)^{\frac{1}{1-\varepsilon}} $$

Kolm absolute social welfare function:

$$ W_{\mathrm{Kolm}} = -\frac{1}{\alpha} \ln\left( \frac{1}{N} \sum_{i=1}^{N} e^{-\alpha h_i} \right) $$

where h_i is the health of individual i, N is the population size, and each W is the EDE level of health under the corresponding social welfare function.

When comparing interventions where one intervention does not simply dominate the others on the health equity impact plane we need to use our social welfare functions to calculate EDE levels of health associated with each of the interventions and then select the intervention that produces the highest EDE level of health.
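
A minimal sketch of this calculation in Python (the distributions and function names are hypothetical; the formulas follow the Atkinson and Kolm EDEs above, with the elicited parameter values as defaults):

```python
# Equally distributed equivalent (EDE) health under the Atkinson and Kolm
# social welfare functions; a sketch, not the tutorial's code.
import numpy as np

def atkinson_ede(h, epsilon=10.95):
    """Atkinson EDE; epsilon is the relative inequality aversion parameter."""
    h = np.asarray(h, dtype=float)
    if epsilon == 1.0:
        return float(np.exp(np.mean(np.log(h))))  # geometric mean (limiting case)
    return float(np.mean(h ** (1.0 - epsilon)) ** (1.0 / (1.0 - epsilon)))

def kolm_ede(h, alpha=0.15):
    """Kolm EDE; alpha is the absolute inequality aversion parameter."""
    h = np.asarray(h, dtype=float)
    if alpha == 0.0:
        return float(h.mean())  # no inequality aversion: EDE equals the mean
    return float(-np.log(np.mean(np.exp(-alpha * h))) / alpha)

# Hypothetical post-intervention distributions (QALE by deprivation quintile):
dist_a = [64.0, 66.0, 68.0, 70.0, 72.0]  # less unequal, lower mean (68.0)
dist_b = [62.0, 66.0, 69.0, 72.0, 75.0]  # more unequal, higher mean (68.8)

for name, ede in [("Atkinson", atkinson_ede), ("Kolm", kolm_ede)]:
    a, b = ede(dist_a), ede(dist_b)
    print(f"{name}: EDE(A)={a:.2f}, EDE(B)={b:.2f} -> choose {'A' if a > b else 'B'}")
```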

In the example depicted in the figure above, we can see that pursuing intervention A results in a health distribution which appears less unequal but has a lower average level of health than the health distribution resulting from intervention B. The choice of intervention in this case will be determined by the form of social welfare function selected and the level of inequality aversion this social welfare function is parameterised to embody.

6. Conduct sensitivity analysis on forms of social welfare function and extent of inequality aversion

Given that the conclusions drawn from a DCEA may depend on the social value judgements made around the inequality measure used and the level of inequality aversion embodied in it, we should present results for a range of alternative social welfare functions, parameterised at a range of inequality aversion levels. This allows decision makers to see clearly how robust the conclusions are to alternative social value judgements.
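
Continuing the sketch from step 5 (again with purely illustrative values), such a sensitivity analysis might simply re-run the comparison across a grid of aversion parameters:

```python
# A minimal sketch of step 6, reusing atkinson_ede, kolm_ede, dist_a and
# dist_b from the previous sketch: check whether the ranking of A and B
# is robust to the choice and strength of inequality aversion.
for eps in [0.0, 1.0, 5.0, 10.95, 20.0]:
    a, b = atkinson_ede(dist_a, eps), atkinson_ede(dist_b, eps)
    print(f"Atkinson eps={eps:>5}: EDE(A)={a:.2f}, EDE(B)={b:.2f} -> {'A' if a > b else 'B'}")

for alpha in [0.0, 0.05, 0.15, 0.5]:
    a, b = kolm_ede(dist_a, alpha), kolm_ede(dist_b, alpha)
    print(f"Kolm alpha={alpha:>4}: EDE(A)={a:.2f}, EDE(B)={b:.2f} -> {'A' if a > b else 'B'}")
```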

Applications

DCEA is of particular use when evaluating large-scale public health programmes that have an explicit goal of tackling health inequality. It has been applied to the NHS bowel cancer screening programme in England and to the rotavirus vaccination programme in Ethiopia.

Some key limitations of DCEA are that: (1) it currently only analyses programmes in terms of their health impacts whilst large public health programmes often have important impacts across a range of sectors beyond health; and (2) it requires a range of data beyond that required by standard CEA which may not be readily available in all contexts.

For low and middle-income settings an alternative augmented CEA methodology called extended cost effectiveness analysis (ECEA) has been developed to combine estimates of health impacts with estimates of impacts on financial risk protection. More information on ECEA can be found here.

There are ongoing efforts to generalise the DCEA methods to be applied to interventions having impacts across multiple sectors. Follow the latest developments on DCEA at the dedicated website based at the Centre for Health Economics, University of York.
