Chris Sampson’s journal round-up for 29th April 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Here comes the SUN: self‐assessed unmet need, worsening health outcomes, and health care inequity. Health Economics [PubMed] Published 24th April 2019

How should we measure inequity in health care? Often, it is measured on the basis of health care use, and the extent to which people with different socioeconomic circumstances – conditional on their level of need – access services. One problem with this approach is that differences might not only reflect barriers to access but also heterogeneity in preferences. If people of lower socioeconomic status prefer to access services less (conditional on need), then this is arguably an artificial signal of inequities in the system. Instead, we could just ask people. But can self-assessed unmet need provide a valid and meaningful measure of inequity?

In this study, the researchers looked at whether self-reported unmet need can predict deterioration in health. The idea here is that we would expect there to be negative health consequences if people genuinely need health care but cannot access it. The Canadian National Population Health Survey asks whether, during the preceding 12 months, the individual needed health care but did not receive it, with around 10% reporting unmet need. General health outcomes are captured by self-assessed health and by the HUI3, and there are also variables for specific chronic conditions. A few model specifications, controlling for a variety of health-related and demographic variables, are implemented. For the continuous outcomes, the authors use a fixed effects model with lagged health, and for the categorical outcomes they use a random effects probit.

The findings are consistent across models and outcomes. People who report self-assessed unmet need are more likely to have poorer health outcomes in subsequent periods, in terms of both general health and the number of self-reported chronic conditions. This suggests that self-assessed unmet need is probably a meaningful indicator of barriers to access in health care. I’m not aware of any UK-based surveys that include self-assessed unmet need, but this study provides some reason to think that they should.

Cost effectiveness of treatments for diabetic retinopathy: a systematic literature review. PharmacoEconomics [PubMed] Published 22nd April 2019

I’ve spent a good chunk of the last 8 years doing research in the context of diabetic eye disease. Over that time, treatment has changed, and there have been some interesting controversies relating to the costs of new treatments. So this review is timely.

There are four groups of treatments that the authors consider – laser, anti-VEGF eye injections, corticosteroids, and surgery. The usual databases were searched, turning up 1915 abstracts, and 17 articles were included in the review. That’s not a lot of studies, which is why I’d like to call the authors out for excluding one HTA report, which I assume was Royle et al 2015 and which probably should have been included. The results are summarised according to whether the evaluations were of treatments for diabetic macular oedema (DMO) or proliferative diabetic retinopathy (PDR), which are the two main forms of sight-threatening diabetic eye disease. The majority of studies focussed on DMO. As ever, in reviews of this sort, the studies and their findings are difficult to compare. Different methods were employed, for different purposes. The reason that there are so few economic evaluations in the context of PDR is probably that treatments have been so decisively shown to be effective. Yet there is evidence to suggest that, for PDR, the additional benefits of injections do not justify the much higher cost compared with laser. However, this depends on the choice of drug that is being injected, because prices vary dramatically. For DMO, injections are cost-effective whether combined with laser or not. The evidence on corticosteroids is mixed and limited, but there is promise in recently developed fluocinolone implants.

Laser might still be king in PDR, and early surgical intervention is also still cost-effective where indicated. For DMO, the strongest evidence is in favour of using an injection (bevacizumab) that can only be used off-label. You can blame Novartis for that, or you can blame UK regulators. Either way, there’s good reason to be angry about it. The authors of this paper clearly have a good understanding of the available treatments, which is not always the case for reviews of economic evaluations. The main value of this study is as a reference point for people developing research in this area, to identify the remaining gaps in the evidence and appropriately align (or not) with prevailing methods.

Exploring the impacts of the 2012 Health and Social Care Act reforms to commissioning on clinical activity in the English NHS: a mixed methods study of cervical screening. BMJ Open [PubMed] Published 14th April 2019

Not everybody loves the Health and Social Care Act of 2012. But both praise and criticism of far-reaching policies like this are usually confined to political arguments. It’s nice to see – and not too long after the fact – some evidence of its impact. In this paper, we learn about the impact of the Act on cervical screening activity.

The researchers used both qualitative and quantitative methods in their study in an attempt to identify whether the introduction of the Act influenced rates of screening coverage. With the arrival of the Act, responsibility for commissioning screening services shifted from primary care trusts to regional NHS England teams, while sexual health services were picked up by local authorities. The researchers conducted 143 (!) interviews with commissioners, clinicians, managers, and administrators from various organisations. Of these, 93 related to the commissioning of sexual health services, with questions regarding the commissioning system before and after the introduction of the Act. How did participants characterise the impact of the Act? Confusion, complexity, variability, uncertainty, and the idea that these characteristics could result in a drop in screening rates.

The quantitative research plan, and in particular the focus on cervical screening, arose from the qualitative findings, and the quantitative analysis sought to validate them. But everyone had the Act dropped on them at the same time (those wily politicians know how to evade blame), so the challenge for the researchers was to identify some source of variation that could represent exposure to the effects of the Act. Informed by the interviewees, the authors differentiated between areas based on the number of local authorities that the clinical commissioning group (CCG) had to work with. Boundaries don’t align, so while some CCGs only have to engage with one local authority, some have to do so with as many as three, increasing the complexity created by the Act. As a kind of control, the researchers looked at the rate of unassisted births, which we wouldn’t expect to have been affected by the introduction of the Act. From this, they estimated a triple difference: the change in cervical screening rates before and after the introduction of the Act, between CCGs working with one local authority and those working with more than one, net of the corresponding difference in unassisted birth rates. Screening rates (and unassisted delivery rates) were both declining before the introduction of the Act. Without any adjustment, screening rates before and after the introduction of the Act decreased by 0.39% more for GP practices in those CCGs that had to work with multiple local authorities. Conversely, unassisted delivery rates actually increased by a similar amount. The adjusted impact of the Act on screening rates was a drop of around 0.62%.
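As a rough sketch of the estimand (my notation, not the authors’ exact regression specification), with S̄ denoting mean screening rates, Ū mean unassisted delivery rates, Δ the post-Act minus pre-Act change, and “multi”/“single” indexing CCGs by the number of local authorities they work with:

\[
\mathrm{DDD} = \left( \Delta\bar{S}^{\,\text{multi}} - \Delta\bar{S}^{\,\text{single}} \right) - \left( \Delta\bar{U}^{\,\text{multi}} - \Delta\bar{U}^{\,\text{single}} \right)
\]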

Clearly, there are big disclaimers attached to findings from a study of this sort, though the main finding seems to be robust to a variety of specifications. Any number of other things could explain the change in screening rates over the period, which the researchers couldn’t capture. But the quantitative findings are backed up by the qualitative reports, making this a far more convincing piece of work. There’s little doubt that NHS redisorganisations of this kind create challenges in the short term, and we can now see the impact that this has on the provision of care.

Public involvement in health outcomes research: lessons learnt from the development of the recovering quality of life (ReQoL) measures. Health and Quality of Life Outcomes [PubMed] Published 11th April 2019

We’ve featured a few papers from the ReQoL project on this blog. The researchers developed several outcome measures to be used in the context of mental health. A couple of weeks ago, we also featured a paper turning a sceptical eye to the idea of co-production, whereby service users or members of the public are not simply research participants but research partners. This paper describes the experience of co-production in the context of the ReQoL study. The authors are decidedly positive about it.

The logic behind the involvement of service users in the development of patient-reported outcome measures is obvious; measures need to be meaningful and understandable to patients, and enabling service users to inform research decisions could facilitate that. But there is little guidance on co-production in the context of developing patient-reported outcomes. Key decisions in the development of ReQoL were made by a ‘scientific group’, which included academics, clinicians, and seven expert service users. An overlapping ‘expert service user group’ also supported the study. In these roles, service users contributed to all stages of the research, confirming themes and items, supporting recruitment, collecting and analysing data, agreeing the final items for the measures, and engaging in dissemination activities. It seems that the involvement was in large part attendance at meetings, discussing data and findings to achieve an interpretation that includes the perspectives of service users. This resulted in decisions – about which items to take forward – that probably would not have been made if the academics and clinicians were left to their own devices. Service users were also involved in the development of research materials, such as the interview topic guide. In some examples, however, it seems like the line between research partner and research participant was blurred. If an expert service user group is voting on candidate items and editing them according to their experience, this is surely a data collection process, and the service users become research subjects.

The authors describe the benefits as they saw them, in terms of the expert service users’ positive influence on the research. The costs and challenges are also outlined, including the need to manage disagreements and make additional preparations for meetings. We’re even provided with the resource implications in terms of the additional days of work. The comprehensive description of the researchers’ experiences in this context and the recommendations that they provide make this paper an important companion for anybody designing a research study to develop a new patient-reported outcome measure.


Brendan Collins’s journal round-up for 14th January 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Income distribution and health: can polarization explain health outcomes better than inequality? The European Journal of Health Economics [PubMed] Published 4th December 2018

One of my main interests is health inequalities. I thought polarisation was intuitive: I had seen it in the context of the UK and US employment markets, with an increase in poorly-paid ‘McJobs’, an increase in well-paid ‘MacJobs’, and fewer jobs in the middle. But I hadn’t seen polarisation measured in a statistical way.

Traditional measures of population inequality, like the Gini coefficient or the Atkinson index, capture the share of income held by different groups or the ratio of richest to poorest. Polarisation goes a step further and asks whether there are discrete clusters or groups who have similar incomes. The theory goes that having discrete groups increases social alienation, conflict, and socioeconomic comparison, and thereby increases health inequalities. Now, I get how you can test statistically for discrete income clusters, and there is an evidence base for the relationship between polarisation and social tension. But groups will cluster based on other factors besides income. I feel like it may be taking a leap to assume that a statistical finding (income polarisation) will always represent a sociological construct (alienation), but I confess I don’t know the literature behind this.

China is a country with an increasing degree of polarisation as measured by the Duclos, Esteban and Ray (DER) polarisation indices, and this study suggests that polarisation is related to health status. The study looked at trends in BMI and systolic blood pressure from 1991 to 2011 and found both to increase with increased polarisation. I imagine a lot of other social change went on in this time period in China. I think BMI might not be a good candidate for measuring the effect of polarisation, as being poor is associated with malnourishment and low weight as well as obesity. The authors found that social capital (based on increasing family size, community size, and living in the same community for a long time) had a protective effect against the effects of polarisation on health. Whether this study provides more evidence for the socioeconomic comparison or status anxiety theories of health inequalities, I am not sure; it could equally provide evidence for the neo-materialist theories (i.e. simply not having enough resources for a healthy life) – the relative importance will likely differ by country anyway.
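For readers who, like me, are new to this: the DER index for an income density f takes (roughly) the following form, where α is a ‘polarisation sensitivity’ parameter typically restricted to values between 0.25 and 1; at α = 0 the index reduces to a multiple of the Gini coefficient:

\[
P_{\alpha}(f) = \iint f(x)^{1+\alpha} \, f(y) \, \lvert y - x \rvert \, dy \, dx
\]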

Maybe we don’t need to add more measures of inequality to the mix but I am intrigued. I am just starting my journey with polarisation but I think it has promise.

Two-year evaluation of mandatory bundled payments for joint replacement. The New England Journal of Medicine [PubMed] Published 2nd January 2019

Joint replacements are a big cost to western healthcare systems and are often delayed or rationed (partly because replacement joints may only have a 10-20 year lifespan on average). In the UK, for instance, joint replacements have been rationed based on factors like BMI or pain levels (in my opinion, often in an arbitrary way to save money).

This paper found that a bundled payment and penalty model (Comprehensive Care for Joint Replacement; CJR) for optimal care around hip and knee replacements reduced Medicare spending per episode compared to areas that did not pilot the programme. The overall difference was small in absolute terms at $812 against a total cost of around $24,000 per episode. The programme involves hospitals meeting a set of performance measures, and if they can do so at a lower cost, any savings are shared between the hospital and the payer. Cost savings were mainly driven by a reduction in patients being discharged to post-acute care facilities. Rates of complex patients were similar between pilot and control areas – this is important because a lower rate of complex cases in the CJR trial areas might indicate hospitals ‘cherry picking’ easier-to-treat, less expensive cases. Also, rates of complications were not significantly different between the CJR pilot areas and controls.
This paper suggests that having this kind of bundled payment programme can save money while maintaining quality.

Association of the Hospital Readmissions Reduction Program with mortality among Medicare beneficiaries hospitalized for heart failure, acute myocardial infarction, and pneumonia. JAMA [PubMed] Published 25th December 2018

Nobody likes being in hospital. But sometimes hospitals are the best places for people. This paper looks at possible unintended consequences of a US programme, the Hospital Readmissions Reduction Program (HRRP), under which the Centers for Medicare & Medicaid Services (CMS) impose financial penalties (almost $2 billion since 2012) on hospitals with elevated 30-day readmission rates for patients with heart failure, acute myocardial infarction, and pneumonia. This study compared four time periods (no control group) and found that, after the programme was implemented, death rates for people who had been admitted with pneumonia and heart failure increased, with these increased deaths occurring more in people who had not been readmitted to hospital. The analysis controlled for differences in demographics, comorbidities, and calendar month using propensity scores and inverse probability weighting.

The authors are clear that their results do not establish cause and effect but are concerning nonetheless and worthy of more analysis. Incidentally, there is another paper this week in Health Affairs which suggests that the benefits of the programme in reducing readmissions were overstated.

There has been a similar financial incentive in the English NHS, where hospitals are subject to the 30-day readmission rule, meaning they are not paid for people who are readmitted as an emergency within 30 days of being discharged. This is shortly to be abolished for 2019/20. I wonder if there has been similar research on whether this also led to unintended consequences in the NHS. Maybe there is a general lesson here about thinking a bit more deeply about the potential outcomes of incentives in healthcare markets?

These last two papers give us two examples of financial incentive programmes from Medicare. The CJR, which seems to have worked, has been scaled back from a mandatory to a voluntary programme, while the HRRP, which may not have worked, has been extended.


Method of the month: Distributional cost effectiveness analysis

Once a month we discuss a particular research method that may be of interest to people working in health economics. We’ll consider widely used key methodologies, as well as more novel approaches. Our reviews are not designed to be comprehensive but provide an introduction to the method, its underlying principles, some applied examples, and where to find out more. If you’d like to write a post for this series, get in touch. This month’s method is distributional cost effectiveness analysis.

Principles

Variation in population health outcomes, particularly when socially patterned by characteristics such as income and race, is often of concern to policymakers. For example, people born in the poorest tenth of neighbourhoods in England can expect to live 19 fewer years of healthy life than those living in the richest tenth of neighbourhoods in the country, and black Americans born today can expect to die 4 years earlier than white Americans; such disparities are often considered unfair and in need of policy attention. As policymakers look to implement health programmes to tackle such unfair health disparities, they need tools that enable them to evaluate the likely impacts of the alternative programmes available to them, both on reducing these undesirable health inequalities and on improving population health.

Traditional tools for prospectively evaluating health programmes – that is to say, estimating the likely impacts of health programmes prior to their implementation – are typically based on cost-effectiveness analysis (CEA). CEA selects the programmes that most improve the health of the average recipient, taking into consideration the health opportunity costs involved in implementing the programme. When using CEA to select health programmes there is, therefore, a risk that the selected programmes will not reduce the health disparities of concern to policymakers, since these disparities are not part of the evaluation process used when comparing programmes. Indeed, in some cases, the programmes chosen using CEA may even unintentionally exacerbate these health inequalities.
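In its standard net health benefit formulation (a textbook expression, not specific to the work discussed here), CEA ranks programmes by something like the following, where Δh is the expected health gain, Δc the additional cost, and k the cost-effectiveness threshold representing the health opportunity cost of spending; note that nothing in the expression depends on who receives the health:

\[
\mathrm{NHB} = \Delta h - \frac{\Delta c}{k}
\]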

There has been recent methodological work to build upon standard CEA methods by explicitly incorporating concerns for reducing health disparities. This equity-augmented form of CEA is called distributional cost effectiveness analysis (DCEA). DCEA estimates the impacts of health interventions on different groups within the population and evaluates the resulting health distributions in terms of both health inequality and population health. Where necessary, DCEA can then be used to guide the trade-off between these different dimensions to pick the most “socially beneficial” programme to implement.

Implementation

The six core steps in implementing a DCEA are outlined below – full details of how DCEA is conducted in practice and applied to evaluate alternative options in a real case study (the NHS Bowel Cancer Screening Programme in England) can be found in a published tutorial.

1. Identify policy-relevant subgroups in the population

The first step in the analysis is to decide which characteristics of the population are of policy concern when thinking about health inequalities. For example, in England, there is a lot of concern about the fact that people born in poor neighbourhoods can expect to die earlier than those born in rich neighbourhoods, but little concern about the fact that men have shorter life expectancies than women.

2. Construct the baseline distribution of health

The next step is to construct a baseline distribution of health for the population. This baseline distribution describes the health of the population, typically measured in quality-adjusted life expectancy at birth, to show the level of health and health inequality prior to implementing the proposed interventions. This distribution can be standardised (using methods of either direct or indirect standardisation) to remove any variation in health that is not associated with the characteristics of interest. For example, in England, we might standardise the health distribution to remove variation associated with gender but retain variation associated with neighbourhood deprivation. This then gives us a description of the population health distribution with a particular focus on the health disparities we are trying to reduce. An example of how to construct such a ‘social distribution of health’ for England is given in another published article.
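As a minimal sketch of what direct standardisation might look like in practice – assuming an individual-level dataset with hypothetical columns imd_quintile (the equity-relevant grouping), sex (the variation we want to remove), and qale (quality-adjusted life expectancy at birth) – the whole-population sex distribution serves as the standard population:

```python
import pandas as pd

def direct_standardise(df: pd.DataFrame) -> pd.Series:
    """Sex-standardised mean QALE for each deprivation quintile."""
    # Whole-population sex shares act as the 'standard population'.
    sex_weights = df['sex'].value_counts(normalize=True)

    # Mean QALE within each quintile-by-sex cell.
    cell_means = df.groupby(['imd_quintile', 'sex'])['qale'].mean()

    # Re-weight each quintile's sex-specific means by the standard
    # population, so differences in sex mix across quintiles wash out.
    return (
        cell_means.unstack('sex')
                  .mul(sex_weights, axis=1)
                  .sum(axis=1)
    )
```

The result is one standardised QALE figure per quintile: a social distribution of health in which variation associated with sex has been removed but variation associated with deprivation is retained.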

3. Estimate post-intervention distributions of health

We next estimate the health impacts of the interventions we are comparing. In producing these estimates we need to take into account differences across the equity-relevant subgroups identified in step 1 in the:

  • prevalence and incidence of the diseases impacted by the intervention,
  • rates of uptake and adherence to the intervention,
  • efficacy of the intervention,
  • mortality and morbidity, and
  • health opportunity costs.

Standardising these health impacts and combining with the baseline distribution of health derived above gives us estimated post-intervention distributions of health for each intervention.
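A minimal sketch of this step, with entirely made-up numbers for five deprivation quintiles: per-subgroup net health gain is (roughly) prevalence × uptake × per-recipient health gain, less the subgroup’s assumed share of the health opportunity cost:

```python
import numpy as np

# Hypothetical values for five deprivation quintiles (most to least deprived).
baseline_qale = np.array([62.0, 66.0, 69.0, 71.0, 73.0])  # from step 2
prevalence    = np.array([0.12, 0.10, 0.09, 0.08, 0.07])  # disease prevalence
uptake        = np.array([0.55, 0.62, 0.68, 0.73, 0.78])  # uptake x adherence
qaly_gain     = 1.5    # assumed QALYs gained per person treated
opp_cost      = 0.02   # assumed QALE lost per person from displaced spending

# Net gain per subgroup, and the resulting post-intervention distribution.
net_gain  = prevalence * uptake * qaly_gain - opp_cost
post_qale = baseline_qale + net_gain
```

Note the gradient built into the toy numbers: higher disease prevalence but lower uptake among more deprived groups, which is exactly the kind of pattern that determines whether an intervention narrows or widens the health distribution.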

4. Compare post-intervention distributions using the health equity impact plane

Once post-intervention distributions of health have been estimated for each intervention we can compare them both in terms of their level of average health and in terms of their level of health inequality. Whilst calculating average levels of health in the distributions is straightforward, calculating levels of inequality requires some value judgements to be made. There is a wide range of alternative inequality measures that could be employed, each of which captures different aspects of inequality. For example, a relative inequality measure would conclude that a health distribution where half the population lives for 40 years and the other half lives for 50 years is just as unequal as a health distribution where half the population lives for 80 years and the other half lives for 100 years. An absolute inequality measure would instead conclude that the equivalence is with a population where half the population lives for 80 years and the other half lives for 90 years.
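To make the two notions of equivalence explicit:

\[
\frac{50}{40} = \frac{100}{80} = 1.25 \ \text{(relative)}, \qquad 50 - 40 = 90 - 80 = 10 \ \text{years (absolute)}
\]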

Two commonly used inequality measures are the Atkinson relative inequality measure and the Kolm absolute inequality measure. These both have the additional feature that they can be calibrated using an inequality aversion parameter to vary the level of priority given to those worst off in the distribution. We will see these inequality aversion parameters in action in the next step of the DCEA process.
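For a health distribution h₁, …, hₙ with mean h̄, the two measures take the following forms (my notation; ε ≥ 0 and α > 0 are the inequality aversion parameters):

\[
A(\varepsilon) = 1 - \left[ \frac{1}{n} \sum_{i=1}^{n} \left( \frac{h_i}{\bar h} \right)^{1-\varepsilon} \right]^{\frac{1}{1-\varepsilon}}, \qquad
K(\alpha) = \frac{1}{\alpha} \ln\left[ \frac{1}{n} \sum_{i=1}^{n} e^{\alpha (\bar h - h_i)} \right]
\]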

Having selected a suitable inequality measure, we can plot our post-intervention distributions on a health equity impact plane. Let us assume we are comparing two interventions, A and B: we plot intervention A at the origin of the plane and intervention B relative to A.

[Figure: the health equity impact plane, with the change in average health on one axis and the change in health inequality on the other]

If intervention B falls in the north-east quadrant of the health equity impact plane we know it both improves health overall and reduces health inequality relative to intervention A and so intervention B should be selected. If, however, intervention B falls in the south-west quadrant of the health equity impact plane we know it both reduces health and increases health inequality relative to intervention A and so intervention A should be selected. If intervention B falls either in the north-west or south-east quadrants of the health equity impact plane there is no obvious answer as to which intervention should be preferred as there is a trade-off to be made between health equity and total health.

5. Evaluate trade-offs between inequality and efficiency using social welfare functions

We use social welfare functions to trade off between inequality reduction and average health improvement. These social welfare functions are constructed by combining our chosen measure of inequality with the average health in the distribution. This combination of inequality and average health is used to calculate what is known as an equally distributed equivalent (EDE) level of health. The EDE summarises the health distribution being analysed as a single number: the level of health that, if enjoyed by every person in a hypothetically perfectly equal health distribution, would leave us indifferent between that perfectly equal distribution and the actual distribution analysed. Where our social welfare function is built around an inequality measure with an inequality aversion parameter, the EDE level of health will also be a function of the inequality aversion parameter. Where inequality aversion is set to zero, there is no concern for inequality and the EDE simply reflects the average health in the distribution, replicating the results we would see under standard utilitarian CEA. As the inequality aversion level approaches infinity, our focus falls increasingly on those worse off in the health distribution until, at the limit, we reflect the Rawlsian idea of focusing entirely on improving the lot of the worst-off in society.

 

Social welfare functions derived from the Atkinson relative inequality measure and the Kolm absolute inequality measure are given below, expressed as EDE levels of health with inequality aversion parameters ε and α (equivalently, these are h̄[1 − A(ε)] and h̄ − K(α) for the measures given above). Research carried out with members of the public in England suggests that suitable values for the Atkinson and Kolm inequality aversion parameters are 10.95 and 0.15 respectively.

\[
\mathrm{EDE}_{\text{Atkinson}} = \left[ \frac{1}{n} \sum_{i=1}^{n} h_i^{\,1-\varepsilon} \right]^{\frac{1}{1-\varepsilon}}, \qquad
\mathrm{EDE}_{\text{Kolm}} = -\frac{1}{\alpha} \ln\left[ \frac{1}{n} \sum_{i=1}^{n} e^{-\alpha h_i} \right]
\]

When comparing interventions where one intervention does not simply dominate the others on the health equity impact plane, we need to use our social welfare functions to calculate the EDE levels of health associated with each intervention, and then select the intervention that produces the highest EDE level of health.
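As a minimal sketch of this selection rule – using the Atkinson EDE above with hypothetical QALE distributions across five equally sized subgroups – the preferred intervention can flip as inequality aversion rises:

```python
import numpy as np

def atkinson_ede(h: np.ndarray, epsilon: float) -> float:
    """Equally distributed equivalent health under Atkinson aversion epsilon."""
    if epsilon == 1.0:
        return float(np.exp(np.mean(np.log(h))))  # limit case: geometric mean
    return float(np.mean(h ** (1.0 - epsilon)) ** (1.0 / (1.0 - epsilon)))

h_a = np.array([66.0, 68.0, 70.0, 71.0, 72.0])  # intervention A: more equal
h_b = np.array([63.0, 67.0, 71.0, 74.0, 77.0])  # intervention B: higher mean

for eps in [0.0, 1.0, 10.95]:  # 10.95 is the England estimate cited above
    ede_a, ede_b = atkinson_ede(h_a, eps), atkinson_ede(h_b, eps)
    print(f"epsilon={eps:>5}: EDE A={ede_a:.2f}, EDE B={ede_b:.2f} "
          f"-> prefer {'A' if ede_a > ede_b else 'B'}")
```

With these made-up numbers, B wins at low aversion (it has the higher mean) while A wins at ε = 10.95 (priority shifts to the worst-off subgroup); sweeping ε in this way is essentially the sensitivity analysis described in step 6 below.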

In an example where intervention A results in a health distribution that is less unequal but has a lower average level of health than the distribution resulting from intervention B (as in the sketch above), the choice of intervention will be determined by the form of social welfare function selected and the level of inequality aversion it is parameterised to embody.

6. Conduct sensitivity analysis on forms of social welfare function and extent of inequality aversion

Given that the conclusions drawn from DCEA may be dependent on the social value judgments made around the inequality measure used and the level of inequality aversion embodied in it, we should present results for a range of alternative social welfare functions parameterised at a range of inequality aversion levels. This will allow decision makers to clearly understand how robust conclusions are to alternative social value judgements.

Applications

DCEA is of particular use when evaluating large-scale public health programmes that have an explicit goal of tackling health inequality. It has been applied to the NHS bowel cancer screening programme in England and to the rotavirus vaccination programme in Ethiopia.

Some key limitations of DCEA are that: (1) it currently only analyses programmes in terms of their health impacts whilst large public health programmes often have important impacts across a range of sectors beyond health; and (2) it requires a range of data beyond that required by standard CEA which may not be readily available in all contexts.

For low and middle-income settings an alternative augmented CEA methodology called extended cost effectiveness analysis (ECEA) has been developed to combine estimates of health impacts with estimates of impacts on financial risk protection. More information on ECEA can be found here.

There are ongoing efforts to generalise the DCEA methods to be applied to interventions having impacts across multiple sectors. Follow the latest developments on DCEA at the dedicated website based at the Centre for Health Economics, University of York.
