Chris Sampson’s journal round-up for 29th April 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Here comes the SUN: self‐assessed unmet need, worsening health outcomes, and health care inequity. Health Economics [PubMed] Published 24th April 2019

How should we measure inequity in health care? Often, it is measured on the basis of health care use, and the extent to which people with different socioeconomic circumstances – conditional on their level of need – access services. One problem with this approach is that differences might not only reflect barriers to access but also heterogeneity in preferences. If people of lower socioeconomic status prefer to access services less (conditional on need), then this is arguably an artificial signal of inequities in the system. Instead, we could just ask people. But can self-assessed unmet need provide a valid and meaningful measure of inequity?

In this study, the researchers looked at whether self-reported unmet need can predict deterioration in health. The idea here is that we would expect there to be negative health consequences if people genuinely need health care but cannot access it. The Canadian National Population Health Survey asks whether, during the preceding 12 months, the individual needed health care but did not receive it, with around 10% reporting unmet need. General health outcomes are captured by self-assessed health and by the HUI3, and there are also variables for specific chronic conditions. A few model specifications, controlling for a variety of health-related and demographic variables, are implemented. For the continuous outcomes, the authors use a fixed effects model with lagged health, and for the categorical outcomes they use a random effects probit.
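
To make the estimation strategy concrete, here is a toy sketch of the fixed-effects-with-lagged-health idea. The simulated data and variable names are mine, not the paper's, and a fixed effects model with a lagged dependent variable carries some dynamic panel bias, which this sketch ignores:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ind, n_per = 200, 5

# hypothetical panel: individual fixed effects, binary unmet-need reports
alpha = rng.normal(0.0, 1.0, n_ind)
unmet = rng.integers(0, 2, size=(n_ind, n_per)).astype(float)

# health persists over time; unmet need in one period lowers health in the next
health = np.zeros((n_ind, n_per))
health[:, 0] = alpha + rng.normal(0.0, 0.1, n_ind)
for t in range(1, n_per):
    health[:, t] = (alpha + 0.5 * health[:, t - 1]
                    - 0.5 * unmet[:, t - 1] + rng.normal(0.0, 0.1, n_ind))

# stack observations for t >= 1: outcome, lagged health, lagged unmet need
y = health[:, 1:].ravel()
X = np.column_stack([health[:, :-1].ravel(), unmet[:, :-1].ravel()])
ind = np.repeat(np.arange(n_ind), n_per - 1)

def within(v):
    """Demean by individual to absorb the fixed effects."""
    means = np.bincount(ind, weights=v) / np.bincount(ind)
    return v - means[ind]

Xd = np.column_stack([within(X[:, 0]), within(X[:, 1])])
beta, *_ = np.linalg.lstsq(Xd, within(y), rcond=None)
print(beta)  # the coefficient on lagged unmet need should come out negative
```

The within transformation strips out anything fixed about the individual, so the unmet-need coefficient is identified from changes over time, which is the same logic as the paper's continuous-outcome models.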

The findings are consistent across models and outcomes. People who report self-assessed unmet need are more likely to have poorer health outcomes in subsequent periods, in terms of both general health and the number of self-reported chronic conditions. This suggests that self-assessed unmet need is probably a meaningful indicator of barriers to access in health care. I’m not aware of any UK-based surveys that include self-assessed unmet need, but this study provides some reason to think that they should.

Cost effectiveness of treatments for diabetic retinopathy: a systematic literature review. PharmacoEconomics [PubMed] Published 22nd April 2019

I’ve spent a good chunk of the last 8 years doing research in the context of diabetic eye disease. Over that time, treatment has changed, and there have been some interesting controversies relating to the costs of new treatments. So this review is timely.

There are four groups of treatments that the authors consider – laser, anti-VEGF eye injections, corticosteroids, and surgery. The usual databases were searched, turning up 1915 abstracts, and 17 articles were included in the review. That’s not a lot of studies, which is why I’d like to call the authors out for excluding one HTA report, which I assume was Royle et al 2015 and which probably should have been included. The results are summarised according to whether the evaluations were of treatments for diabetic macular oedema (DMO) or proliferative diabetic retinopathy (PDR), which are the two main forms of sight-threatening diabetic eye disease. The majority of studies focussed on DMO. As ever, in reviews of this sort, the studies and their findings are difficult to compare. Different methods were employed, for different purposes. The reason that there are so few economic evaluations in the context of PDR is probably that treatments have been so decisively shown to be effective. Yet there is evidence to suggest that, for PDR, the additional benefits of injections do not justify the much higher cost compared with laser. However, this depends on the choice of drug that is being injected, because prices vary dramatically. For DMO, injections are cost-effective whether combined with laser or not. The evidence on corticosteroids is mixed and limited, but there is promise in recently developed fluocinolone implants.

Laser might still be king in PDR, and early surgical intervention is also still cost-effective where indicated. For DMO, the strongest evidence is in favour of using an injection (bevacizumab) that can only be used off-label. You can blame Novartis for that, or you can blame UK regulators. Either way, there’s good reason to be angry about it. The authors of this paper clearly have a good understanding of the available treatments, which is not always the case for reviews of economic evaluations. The main value of this study is as a reference point for people developing research in this area, to identify the remaining gaps in the evidence and appropriately align (or not) with prevailing methods.

Exploring the impacts of the 2012 Health and Social Care Act reforms to commissioning on clinical activity in the English NHS: a mixed methods study of cervical screening. BMJ Open [PubMed] Published 14th April 2019

Not everybody loves the Health and Social Care Act of 2012. But both praise and criticism of far-reaching policies like this are usually confined to political arguments. It’s nice to see – and not too long after the fact – some evidence of its impact. In this paper, we learn about the impact of the Act on cervical screening activity.

The researchers used both qualitative and quantitative methods in their study in an attempt to identify whether the introduction of the Act influenced rates of screening coverage. With the arrival of the Act, responsibility for commissioning screening services shifted from primary care trusts to regional NHS England teams, while sexual health services were picked up by local authorities. The researchers conducted 143 (!) interviews with commissioners, clinicians, managers, and administrators from various organisations. Of these, 93 related to the commissioning of sexual health services, with questions regarding the commissioning system before and after the introduction of the Act. How did participants characterise the impact of the Act? Confusion, complexity, variability, uncertainty, and the idea that these characteristics could result in a drop in screening rates.

The quantitative research plan, and in particular the focus on cervical screening, arose from the qualitative findings. The quantitative analysis sought to validate the qualitative findings. But everyone had the Act dropped on them at the same time (those wily politicians know how to evade blame), so the challenge for the researchers was to identify some source of variation that could represent exposure to the effects of the Act. Informed by the interviewees, the authors differentiated between areas based on the number of local authorities that the clinical commissioning group (CCG) had to work with. Boundaries don’t align, so while some CCGs only have to engage with one local authority, some have to do so with as many as three, increasing the complexity created by the Act. As a kind of control, the researchers looked at the rate of unassisted births, which we wouldn’t expect to have been affected by the introduction of the Act. From this, they estimated a triple difference: the before-and-after change in cervical screening rates, compared between CCGs working with one local authority and those working with more than one, minus the corresponding difference-in-differences in the unassisted birth rate. Both screening rates and unassisted delivery rates were declining before the introduction of the Act. Without any adjustment, screening rates decreased by 0.39% more after the introduction of the Act for GP practices in those CCGs that had to work with multiple local authorities. Conversely, unassisted delivery rates actually increased by a similar amount. The adjusted impact of the Act on screening rates was a drop of around 0.62%.
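
The triple difference logic is easier to see with numbers. This sketch uses made-up rates, not the paper's data, to show how the difference-in-differences in screening is adjusted by the difference-in-differences in the control outcome:

```python
# hypothetical pre/post average rates (%) — illustrative numbers, not the paper's
screening = {("multi", "pre"): 75.0, ("multi", "post"): 73.0,
             ("single", "pre"): 75.5, ("single", "post"): 74.5}
unassisted = {("multi", "pre"): 0.60, ("multi", "post"): 0.85,
              ("single", "pre"): 0.62, ("single", "post"): 0.64}

def did(rates):
    """Change over time in multi-LA CCGs minus the change in single-LA CCGs."""
    return ((rates[("multi", "post")] - rates[("multi", "pre")])
            - (rates[("single", "post")] - rates[("single", "pre")]))

# triple difference: the screening DiD net of the control-outcome DiD
ddd = did(screening) - did(unassisted)
print(round(did(screening), 2), round(did(unassisted), 2), round(ddd, 2))
```

With these invented numbers, screening drops 1 percentage point more in multi-authority CCGs, but because the control outcome rose there too, the adjusted effect is larger in magnitude, which mirrors how the paper's 0.39% unadjusted gap becomes an adjusted drop of around 0.62%.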

Clearly, there are big disclaimers attached to findings from a study of this sort, though the main finding seems to be robust to a variety of specifications. Any number of other things could explain the change in screening rates over the period, which the researchers couldn’t capture. But the quantitative findings are backed up by the qualitative reports, making this a far more convincing piece of work. There’s little doubt that NHS redisorganisations of this kind create challenges in the short term, and we can now see the impact that this has on the provision of care.

Public involvement in health outcomes research: lessons learnt from the development of the recovering quality of life (ReQoL) measures. Health and Quality of Life Outcomes [PubMed] Published 11th April 2019

We’ve featured a few papers from the ReQoL project on this blog. The researchers developed several outcome measures to be used in the context of mental health. A couple of weeks ago, we also featured a paper turning a sceptical eye to the idea of co-production, whereby service users or members of the public are not simply research participants but research partners. This paper describes the experience of co-production in the context of the ReQoL study. The authors are decidedly positive about co-production.

The logic behind the involvement of service users in the development of patient-reported outcome measures is obvious; measures need to be meaningful and understandable to patients, and enabling service users to inform research decisions could facilitate that. But there is little guidance on co-production in the context of developing patient-reported outcomes. Key decisions in the development of ReQoL were made by a ‘scientific group’, which included academics, clinicians, and seven expert service users. An overlapping ‘expert service user group’ also supported the study. In these roles, service users contributed to all stages of the research, confirming themes and items, supporting recruitment, collecting and analysing data, agreeing the final items for the measures, and engaging in dissemination activities. It seems that the involvement was in large part attendance at meetings, discussing data and findings to achieve an interpretation that includes the perspectives of service users. This resulted in decisions – about which items to take forward – that probably would not have been made if the academics and clinicians were left to their own devices. Service users were also involved in the development of research materials, such as the interview topic guide. In some examples, however, it seems like the line between research partner and research participant was blurred. If an expert service user group is voting on candidate items and editing them according to their experience, this is surely a data collection process and the service users become research subjects.

The authors describe the benefits as they saw them, in terms of the expert service users’ positive influence on the research. The costs and challenges are also outlined, including the need to manage disagreements and make additional preparations for meetings. We’re even provided with the resource implications in terms of the additional days of work. The comprehensive description of the researchers’ experiences in this context and the recommendations that they provide make this paper an important companion for anybody designing a research study to develop a new patient-reported outcome measure.

Credits

Chris Sampson’s journal round-up for 11th March 2019

Identification, review, and use of health state utilities in cost-effectiveness models: an ISPOR Good Practices for Outcomes Research Task Force report. Value in Health [PubMed] Published 1st March 2019

When modellers select health state utility values to plug into their models, they often do it in an ad hoc and unsystematic way. This ISPOR Task Force report seeks to address that.

The authors discuss the process of searching, reviewing, and synthesising utility values. Searches need to use iterative techniques because evidence requirements develop as a model develops. Due to the scope of models, it may be necessary to develop multiple search strategies (for example, for different aspects of disease pathways). Searches needn’t be exhaustive, but they should be systematic and transparent. The authors provide a list of factors that should be considered in defining search criteria. In reviewing utility values, both quality and appropriateness should be considered. Quality is indicated by the precision of the evidence, the response rate, and missing data. Appropriateness relates to the extent to which the evidence being reviewed conforms to the context of the model in which it is to be used. This includes factors such as the characteristics of the study population, the measure used, value sets used, and the timing of data collection. When it comes to synthesis, the authors suggest it might not be meaningful in most cases, because of variation in methods. We can’t pool values if they aren’t (at least roughly) equivalent. Therefore, one approach is to employ strict inclusion criteria (e.g. only EQ-5D, only a particular value set), but this isn’t likely to leave you with much. Meta-regression can be used to analyse more dissimilar utility values and provide insight into the impact of methodological differences. But the extent to which this can provide pooled values for a model is questionable, and the authors concede that more research is needed.
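
As a sketch of what meta-regression of utility values can look like under strict assumptions, here is an inverse-variance weighted regression of hypothetical utilities on a dummy for the measure used. All of the values and standard errors are invented for illustration:

```python
import numpy as np

# hypothetical utility estimates for one health state, with standard errors
u  = np.array([0.71, 0.68, 0.74, 0.62, 0.60])
se = np.array([0.02, 0.03, 0.04, 0.03, 0.05])
eq5d = np.array([1.0, 1.0, 1.0, 0.0, 0.0])  # 1 = valued with EQ-5D, 0 = another measure

# inverse-variance weighted least squares: u ~ intercept + EQ-5D dummy
W = np.diag(1.0 / se**2)
X = np.column_stack([np.ones_like(u), eq5d])
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ u)
print(beta)  # beta[0]: pooled non-EQ-5D mean; beta[1]: shift associated with EQ-5D
```

Because the moderator is a simple dummy, the fitted values are just precision-weighted group means; with real data, the coefficient on the measure dummy is the kind of methodological effect the authors suggest meta-regression can reveal.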

This paper can inform that future research. Not least in its attempt to specify minimum reporting standards. We have another checklist, with another acronym (SpRUCE). The idea isn’t so much that this will guide publications of systematic reviews of utility values, but rather that modellers (and model reviewers) can use it to assess whether the selection of utility values was adequate. The authors then go on to offer methodological recommendations for using utility values in cost-effectiveness models, considering issues such as modelling technique, comorbidities, adverse events, and sensitivity analysis. It’s early days, so the recommendations in this report ought to be changed as methods develop. Still, it’s a first step away from the ad hoc selection of utility values that (no doubt) drives the results of many cost-effectiveness models.

Estimating the marginal cost of a life year in Sweden’s public healthcare sector. The European Journal of Health Economics [PubMed] Published 22nd February 2019

It’s only recently that health economists have gained access to data that enables the estimation of the opportunity cost of health care expenditure on a national level; what is sometimes referred to as a supply-side threshold. We’ve seen studies in the UK, Spain, Australia, and here we have one from Sweden.

The authors use data on health care expenditure at the national (1970-2016) and regional (2003-2016) level, alongside estimates of remaining life expectancy by age and gender (1970-2016). First, they try a time series analysis, testing the nature of causality. Finding an apparently causal relationship between longevity and expenditure, the authors don’t take the time series analysis any further. Instead, the results are based on a panel data analysis, employing similar methods to estimates generated in other countries. The authors propose a conceptual model to support their analysis, which distinguishes it from other studies. In particular, the authors assert that the majority of the impact of expenditure on mortality operates through morbidity, which changes how the model should be specified. The number of newly graduated nurses is used as an instrument indicative of a supply-shift at the national rather than regional level. The models control for socioeconomic and demographic factors and morbidity not amenable to health care.

The authors estimate the marginal cost of a life year by dividing health care expenditure by the expenditure elasticity of life expectancy, finding an opportunity cost of €38,812 (with a massive 95% confidence interval). Using Swedish population norms for utility values, this would translate into around €45,000/QALY.
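
The conversion from life years to QALYs is just a division by an average utility weight. Assuming a Swedish population-norm utility of roughly 0.86 (my assumption for illustration; the paper's norms vary by age and gender):

```python
cost_per_life_year = 38_812  # € per life year, the paper's point estimate
utility_norm = 0.86          # assumed average population-norm utility weight

# a year of life at utility 0.86 is worth 0.86 QALYs, so a full QALY costs more
cost_per_qaly = cost_per_life_year / utility_norm
print(round(cost_per_qaly))  # in the ballpark of the ~€45,000/QALY reported
```

The division inflates the threshold because the marginal life years gained are lived in less-than-full health.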

The analysis is carefully considered and makes plain the difficulty of estimating the marginal productivity of health care expenditure. It looks like a nail in the coffin for the idea of estimating opportunity costs using time series. For now, at least, estimates of opportunity cost will be based on variation according to geography, rather than time. In their excellent discussion, the authors are candid about the limitations of their model. Their instrument wasn’t perfect and it looks like there may have been important confounding variables that they couldn’t control for.

Frequentist and Bayesian meta‐regression of health state utilities for multiple myeloma incorporating systematic review and analysis of individual patient data. Health Economics [PubMed] Published 20th February 2019

The first paper in this round-up was about improving practice in the systematic review of health state utility values, and it indicated the need for more research on the synthesis of values. Here, we have some. In this study, the authors conduct a meta-analysis of utility values alongside an analysis of registry and clinical study data for multiple myeloma patients.

A literature search identified 13 ‘methodologically appropriate’ papers, providing 27 health state utility values. The EMMOS registry included data for 2,445 patients in 22 countries and the APEX clinical study included 669 patients, all with EQ-5D-3L data. The authors implement both a frequentist meta-regression and a Bayesian model. In both cases, the models were run including all values and then with a limited set of only EQ-5D values. These models predicted utility values based on the number of treatment classes received and the rate of stem cell transplant in the sample. The priors used in the Bayesian model were based on studies that reported general utility values for the presence of disease (rather than according to treatment).

The frequentist models showed that utility was low at diagnosis, higher at first treatment, and lower at each subsequent treatment. Stem cell transplant had a positive impact on utility values independent of the number of previous treatments. The results of the Bayesian analysis were very similar, which the authors suggest is due to weak priors. An additional Bayesian model was run with preferred data but vague priors, to assess the sensitivity of the model to the priors. At later stages of disease (for which data were more sparse), there was greater uncertainty. The authors provide predicted values from each of the five models, according to the number of treatment classes received. The models provide slightly different results, except in the case of newly diagnosed patients (where the difference was 0.001). For example, the ‘EQ-5D only’ frequentist model gave a value of 0.659 for one treatment, while the Bayesian model gave a value of 0.620.
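
The sensitivity to priors can be illustrated with a simple conjugate normal update, which is a big simplification of the paper's meta-regression. The data mean below echoes the 0.659 estimate, but the variances are invented:

```python
def posterior_mean(prior_mean, prior_var, data_mean, data_var):
    """Conjugate normal update: a precision-weighted average of prior and data."""
    w_prior, w_data = 1.0 / prior_var, 1.0 / data_var
    return (w_prior * prior_mean + w_data * data_mean) / (w_prior + w_data)

data_mean, data_var = 0.659, 0.02**2  # illustrative estimate and variance

weak = posterior_mean(0.62, 0.10**2, data_mean, data_var)     # vague prior
strong = posterior_mean(0.62, 0.005**2, data_mean, data_var)  # tight prior
print(weak, strong)  # vague prior: posterior near the data; tight prior: pulled toward 0.62
```

With a vague prior the posterior sits almost on top of the data, while a tight prior pulls it toward the prior mean, which is the pattern behind the authors' observation that their frequentist and Bayesian results were very similar.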

I’m not sure that the study satisfies the recommendations outlined in the ISPOR Task Force report described above (though that would be an unfair challenge, given the timing of publication). We’re told very little about the nature of the studies that are included, so it’s difficult to judge whether they should have been combined in this way. However, the authors state that they have made their data extraction and source code available online, which means I could check that out (though, having had a look, I can’t find the material that the authors refer to, reinforcing my hatred for the shambolic ‘supplementary material’ ecosystem). The main purpose of this paper is to progress the methods used to synthesise health state utility values, and it does that well. Predictably, the future is Bayesian.

Rita Faria’s journal round-up for 4th March 2019

Cheap and dirty: the effect of contracting out cleaning on efficiency and effectiveness. Public Administration Review Published 25th February 2019

Before I was a health economist, I used to be a pharmacist and worked for a well-known high street chain for some years. My impression was that the stores with in-house cleaners were cleaner, but I didn’t know if this was a true difference, my leftie bias or my small sample size of 2! This new study by Shimaa Elkomy, Graham Cookson and Simon Jones confirms my suspicions, albeit in the context of NHS hospitals, so I couldn’t resist selecting it for my round-up.

They looked at how contracted-out services fare in terms of perceived cleanliness, costs and MRSA rate in NHS hospitals. MRSA is a type of hospital-associated infection that is affected by how clean a hospital is.

They found that contracted-out services are cheaper than in-house cleaning, but that perceived cleanliness is worse. Importantly, contracted-out services increase the MRSA rate. In other words, contracting-out cleaning services could harm patients’ health.

This is a fascinating paper that is well worth a read. One wonders if the cost of managing MRSA is more than offset by the savings of contracting-out services. Going a step further, are in-house services cost-effective given the impact on patients’ health and costs of managing infections?

What’s been the bang for the buck? Cost-effectiveness of health care spending across selected conditions in the US. Health Affairs [PubMed] Published 1st January 2019

Staying on the topic of value for money, this study by David Wamble and colleagues looks at the extent to which the increased spending in health care in the US has translated into better health outcomes over time.

It’s clearly reassuring that, for 6 out of the 7 conditions they looked at, health outcomes were better in 2015 than in 1996. After all, that’s the goal of investing in medical R&D, although it remains unclear how much of this difference can be attributed to health care versus other things that have happened at the same time that could have improved health outcomes.

I wasn’t sure about the inflation adjustment for the costs, so I’d be grateful for your thoughts via comments or Twitter. In my view, we would underestimate the costs if we used medical price inflation indices. This is because these indices reflect the specific increase in prices in health care, such as due to new drugs being priced high at launch. So I understand that the main results use the US Consumer Price Index, which means that this reflects the average increase in prices over time rather than the increase in health care prices specifically.

However, patients may not have seen their income rise with inflation. This means that the cost of health care may represent a disproportionately greater share of people’s income, and that the inflation adjustment may downplay the impact of health care costs on people’s pockets.
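
A toy calculation shows why the choice of deflator matters (all numbers invented): deflating nominal spending by a fast-rising medical price index makes real spending growth look much smaller than deflating by the general CPI does.

```python
# hypothetical price indices, 1996 = 100 — illustrative, not the paper's figures
cpi_2015 = 156.0      # general consumer prices
med_cpi_2015 = 220.0  # medical care prices, which rose faster

spend_1996, spend_2015 = 5_000.0, 12_000.0  # nominal spending per patient ($)

# express 2015 spending in 1996 dollars under each deflator
real_cpi = spend_2015 * 100 / cpi_2015
real_med = spend_2015 * 100 / med_cpi_2015
print(round(real_cpi), round(real_med))
```

Against a 1996 baseline of $5,000, the CPI deflator leaves a large real increase while the medical index nearly erases it, which is why deflating by medical prices would understate the real growth in spending.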

This study caught my eye and it is quite thought-provoking. It’s a good addition to the literature on the cost-effectiveness of US health care. But I’d wager that the question remains: to what extent is today’s medical care better value for money than in the past?

The dos and don’ts of influencing policy: a systematic review of advice to academics. Palgrave Communications Published 19th February 2019

We all would like to see our research findings influence policy, but how to do this in practice? Well, look no further, as Kathryn Oliver and Paul Cairney reviewed the literature, summarised it in 8 key tips and thought through their implications.

To sum up, it’s not easy to influence policy; advice about how to influence policy is rarely based on empirical evidence, and there are a few risks to trying to become a mover-and-shaker in policy circles.

They discuss three dilemmas in policy engagement. Should academics try to influence policy? How should academics influence policy? What is the purpose of academics’ engagement in policy making?

I particularly enjoyed reading about the approaches to influencing policy. Tools such as evidence synthesis and social media should make evidence more accessible, but their effectiveness is unclear. Another approach is to craft stories to create a compelling case for the policy change, which seems to me to be very close to marketing. The third approach is co-production, which they note can give rise to accusations of bias and can have some practical challenges in terms of intellectual property and keeping one’s independence.

I found this paper quite refreshing. It not only boiled down the advice circulating online about how to influence policy into its key messages but also thought through the practical challenges in its application. The impact agenda seems to be here to stay, at least in the UK. This paper is an excellent source of advice on the risks and benefits of trying to navigate the policy world.
