Rita Faria’s journal round-up for 15th April 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Emulating a trial of joint dynamic strategies: an application to monitoring and treatment of HIV‐positive individuals. Statistics in Medicine [PubMed] Published 18th March 2019

Have you heard about the target trial approach? This is a causal inference method for using observational evidence to compare strategies. This outstanding paper by Ellen Caniglia and colleagues is a great way to get introduced to it!

The question is: what is the best test-and-treat strategy for HIV-positive individuals? Given that patients weren’t randomised to each of the 4 alternative strategies, chances are that their treatment was informed by their prognostic factors. And these also influence their outcome. It’s a typical situation of bias due to confounding. The target trial approach consists of designing the RCT that would estimate the causal effect of interest and then thinking through how its design can be emulated with the observational data. Here, it would be a trial in which patients would be randomly assigned to one of the 4 joint monitoring and treatment strategies. The goal is to estimate the difference in outcomes if all patients had followed their assigned strategies.

The method is fascinating albeit a bit complicated. It involves censoring individuals, fitting survival models, estimating probability weights, and replicating data. It is worthy of a detailed read! I’m very excited about the target trial methodology for cost-effectiveness analysis with observational data. But I haven’t come across any application yet. Please do get in touch via comments or Twitter if you know of a cost-effectiveness application.
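If you want a feel for the mechanics, here is a minimal sketch in Python of the clone-censor-weight idea. The column names, the strategy rules, and the simple censoring model are illustrative assumptions of mine, not the authors’ actual implementation.

```python
# A toy sketch of the clone-censor-weight idea behind target trial
# emulation. The column names, strategy rules, and the simple censoring
# model are illustrative assumptions, not those of Caniglia and colleagues.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def emulate_target_trial(df, strategies):
    """df: person-period data with columns id, period, cd4, monitored,
    treated, event (1 if the outcome occurs that period).
    strategies: dict mapping strategy name -> function(row) -> bool,
    True if observed care that period is consistent with the strategy."""
    clones = []
    for name, consistent in strategies.items():
        c = df.copy()
        c["strategy"] = name
        # Clone everyone into this arm, then censor each clone at the first
        # period in which observed care deviates from the assigned strategy.
        c["deviates"] = ~c.apply(consistent, axis=1)
        first_dev = (c[c["deviates"]].groupby("id")["period"].min()
                     .rename("t_cens").reset_index())
        c = c.merge(first_dev, on="id", how="left")
        c["censored"] = c["period"] >= c["t_cens"].fillna(np.inf)
        clones.append(c)
    long = pd.concat(clones, ignore_index=True)

    # Inverse-probability-of-censoring weights from a pooled logistic model
    # of remaining uncensored given time-varying covariates (here just CD4).
    X = long[["period", "cd4"]].to_numpy()
    p_uncensored = (LogisticRegression(max_iter=1000)
                    .fit(X, (~long["censored"]).astype(int))
                    .predict_proba(X)[:, 1])
    long["w"] = 1.0 / np.clip(p_uncensored, 0.01, None)

    # Weighted outcome comparison among uncensored person-time; the real
    # analysis fits weighted pooled logistic / survival models instead.
    kept = long[~long["censored"]]
    return kept.groupby("strategy").apply(
        lambda g: np.average(g["event"], weights=g["w"]))
```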

Achieving integrated care through commissioning of primary care services in the English NHS: a qualitative analysis. BMJ Open [PubMed] Published 1st April 2019

Are you confused about the set-up of primary health care services in England? Look no further than Imelda McDermott and colleagues’ paper.

The paper starts by telling the story of how primary care has been organised in England over time, from its creation in 1948 to current times. For example, I didn’t know that there are new plans to allow clinical commissioning groups (CCGs) to design local incentive schemes as an alternative to the Quality and Outcomes Framework pay-for-performance scheme. The research proper is a qualitative study using interviews, telephone surveys and analysis of policy documents to understand how the CCGs commission primary care services. CCG commissioning is intended to make better and more efficient use of resources to address increasing demand for health care services, staff shortages and financial pressure. The issue is that it is not easy to implement in practice. Furthermore, there seems to be some “reinvention of the wheel”. For example, from one of the interviewees: “…it’s no great surprise to me that the three STPs that we’ve got are the same as the three PCT clusters that we broke up to create CCGs…” Hmm, shall we just go back to pre-2012 then?

Even if CCG commissioning does achieve all it sets out to do, I wonder about its value for money given the costs of setting it up. This paper is an exceptional read about the practicalities of implementing this policy in practice.

The dark side of coproduction: do the costs outweigh the benefits for health research? Health Research Policy and Systems [PubMed] Published 28th March 2019

Last month, I covered the excellent paper by Kathryn Oliver and Paul Cairney about how to get our research to influence policy. This week I’d like to suggest another remarkable paper by Kathryn, this time with Anita Kothari and Nicholas Mays, on the costs and benefits of coproduction.

If you are in the UK, you have certainly heard about public and patient involvement, or PPI. In this paper, coproduction refers to any collaborative working between academics and non-academics, of which PPI is one type; it also includes working with professionals, policy makers and any other people affected by the research. The authors discuss a wide range of costs of coproduction: from the direct costs of doing collaborative research, such as organising meetings and travel arrangements, to the personal costs to an individual researcher of managing conflicting views and disagreements between collaborators, of having research products seen to be of lower quality, and of being seen as partisan, as well as costs to the stakeholders themselves.

As a detail, I loved the term “hit-and-run research” to describe the current climate: get funding, do research, achieve impact, leave. Indeed, the way that research is funded, with budgets only available for the period that the research is being developed, does not help academics to foster relationships.

This paper reinforced my view that there may well be benefits to coproduction, but that there are also quite a lot of costs. And not much attention tends to be paid to the magnitude of those costs, on whom they fall, and what’s displaced. I found the authors’ advice about the questions to ask oneself when thinking about coproduction to be really useful. I’ll keep it to hand when writing my next funding application, and I recommend you do too!


Chris Sampson’s journal round-up for 1st April 2019


Toward a centralized, systematic approach to the identification, appraisal, and use of health state utility values for reimbursement decision making: introducing the Health Utility Book (HUB). Medical Decision Making [PubMed] Published 22nd March 2019

Every data point reported in research should be readily available to us all in a structured knowledge base. Most of us waste most of our time retreading old ground, meaning that we don’t have the time to do the best research possible. One instance of this is in the identification of health state utility values to plug into decision models. Everyone who builds a model in a particular context goes searching for utility values – there is no central source. The authors of this paper are hoping to put an end to that.

The paper starts with an introduction to the importance of health state utility values in cost-effectiveness analysis, which most of us don’t need to read. Of course, the choice of utility values in a model is very important and can dramatically alter estimates of cost-effectiveness. The authors also discuss issues around the identification of utility values and the assessment of their quality and applicability. Then we get into the objectives of the ‘Health Utility Book’, which is designed to tackle these issues.

The Health Utility Book will consist of a registry (I like registries), backed by a systematic approach to the identification and inclusion (registration?) of utility values. The authors plan to develop a quality assessment tool for studies that report utility values, using a Delphi panel method to identify appropriate indicators of quality to be included. The quality assessment tool will be complemented by a tool to assess applicability, which will be developed through interviews with stakeholders involved in the reimbursement process.
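To make the registry idea concrete, a record in such a database might look something like the sketch below; the fields are my guess at what would need to be captured, not the authors’ actual schema.

```python
# An illustrative guess at what a Health Utility Book registry record
# might contain; the fields are assumptions, not the authors' schema.
from dataclasses import dataclass

@dataclass
class UtilityRecord:
    source_study: str              # citation or DOI of the reporting study
    condition: str                 # e.g. "metastatic breast cancer"
    health_state: str              # description of the modelled health state
    instrument: str                # e.g. "EQ-5D-3L", "SF-6D", "standard gamble"
    country_tariff: str            # value set used to score responses
    mean_utility: float
    standard_error: float
    sample_size: int
    population: str                # age range, disease stage, treatment line
    quality_rating: str = ""       # from the planned Delphi-based quality tool
    applicability_notes: str = ""  # from the planned applicability tool

# Example entry (entirely made up for illustration):
example = UtilityRecord(
    source_study="doi:10.xxxx/example",
    condition="non-small-cell lung cancer",
    health_state="progression-free on first-line treatment",
    instrument="EQ-5D-3L",
    country_tariff="UK",
    mean_utility=0.71,
    standard_error=0.02,
    sample_size=154,
    population="adults, stage IIIB-IV",
)
```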

Initially, the Health Utility Book will only compile utility values for cancer, and some of the funding for the project is cancer specific. To survive, the project will need more money from more sources. To be sustainable, the project will need to attract funding indefinitely. Or perhaps it could morph into a crowd-sourced platform. Either way, the Health Utility Book has my support.

A review of attitudes towards the reuse of health data among people in the European Union: the primacy of purpose and the common good. Health Policy Published 21st March 2019

We all agree that data protection is important. We all love the GDPR. Organisations such as the European Council and the OECD are committed to facilitating the availability of health data as a means of improving population health. And yet, there often seem to be barriers to accessing health data, and we occasionally hear stories of patients opposing data sharing (e.g. care.data). Maybe people don’t want researchers to be using their data, and we just need to respect that. Or, more likely, we need to figure out what it is that people are opposed to, and design systems that recognise this.

This study reviews research on attitudes towards the sharing of health data for purposes other than treatment, among people living in the EU, employing a ‘configurative literature synthesis’ (a new one for me). From 5,691 abstracts, 29 studies were included. Most related to the use of health data in research in general, while some focused on registries. A few studies looked at other uses, such as for planning and policy purposes. And most were from the UK.

An overarching theme was a low awareness among the population about the reuse of health data. However, in some studies, a desire to be better informed was observed. In general, views towards the use of health data were positive. But this was conditional on the data being used to serve the common good. This includes such purposes as achieving a better understanding of diseases, improving treatments, or achieving more efficient health care.

Participants weren’t so happy with health data reuse if it was seen to conflict with the interests of patients providing the data. Commercialisation is a big concern, including the sale of data and private companies profiting from the data. Employers and insurance companies were also considered a threat to patients’ interests. There were conflicting views about whether it is positive for pharmaceutical companies to have access to health data. A minority of people were against sharing data altogether. Certain types of data are seen as being particularly sensitive, including those relating to mental health or sexual health. In general, people expressed concern about data security and the potential for leaks. The studies also looked at the basis for consent that people would prefer. A majority accepted that their data could be used without consent so long as the data were anonymised. But there were no clear tendencies of preference for the various consent models.

It’s important to remember that – on the whole – patients want their data to be used to further the common good. But support can go awry if the data are used to generate profits for private firms or used in a way that might be perceived to negatively affect patients.

Health-related quality of life in injury patients: the added value of extending the EQ-5D-3L with a cognitive dimension. Quality of Life Research [PubMed] Published 18th March 2019

I’m currently working on a project to develop a cognition ‘bolt-on’ for the EQ-5D. Previous research has demonstrated that a cognition bolt-on could provide additional information to distinguish meaningful differences between health states, and that cognition might be a more important candidate than other bolt-ons. Injury – especially traumatic brain injury – can be associated with cognitive impairments. This study explores the value of a cognition bolt-on in this context.

The authors sought to find out whether cognition is sufficiently independent of other dimensions, whether the impact of cognitive problems is reflected in the EuroQol visual analogue scale (EQ VAS), and how a cognition bolt-on affects the overall explanatory power of the EQ-5D-3L. The data used are from the Dutch Injury Surveillance System, which surveys people who have attended an emergency department with an injury, including the EQ-5D-3L. The survey adds a cognition bolt-on relating to memory and concentration.

Data were available for 16,624 people at baseline, with 5,346 complete responses at 2.5-month follow-up. The cognition item was the least affected, with around 20% reporting any problems (though it’s worth noting that the majority of the cohort had injuries to parts of the body other than the head). The frequency of different responses suggests that cognition is dominant over other dimensions in the sense that severe cognitive problems tend to be observed alongside problems in other dimensions, but not vice versa. The mean EQ VAS for people reporting severe cognitive impairment was 41, compared with a mean of 75 for those reporting no problems. Regression analysis showed that moderate and severe cognitive impairment explained 8.7% and 6.2% of the variance of the EQ VAS. Multivariate analysis suggested that the cognitive dimension added roughly the same explanatory power as any other dimension. This was across the whole sample. Interestingly (or, perhaps, worryingly) when the authors looked at the subset of people with traumatic brain injury, the explanatory power of the cognitive dimension was slightly lower than overall.
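The explanatory-power comparison is easy to reproduce in spirit: regress the EQ VAS on dummies for each dimension, with and without the bolt-on, and compare the R². Here is a minimal sketch; the data layout and column names are assumptions of mine, not the authors’ code.

```python
# Sketch of the kind of explanatory-power check reported in the paper:
# regress EQ VAS on the EQ-5D-3L dimensions with and without the cognition
# bolt-on and compare R-squared. Data layout and column names are assumed.
import pandas as pd
import statsmodels.formula.api as smf

dimensions = ["mobility", "self_care", "usual_activities",
              "pain_discomfort", "anxiety_depression"]

def r_squared(df, with_cognition):
    cols = dimensions + (["cognition"] if with_cognition else [])
    # Each dimension is coded 1-3; treat the levels as categorical dummies.
    formula = "eq_vas ~ " + " + ".join(f"C({c})" for c in cols)
    return smf.ols(formula, data=df).fit().rsquared

# df = pd.read_csv("injury_survey.csv")  # hypothetical file name
# print(r_squared(df, with_cognition=False), r_squared(df, with_cognition=True))
```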

There’s enough in this paper to justify further research into the advantages and disadvantages of using a cognition bolt-on. But I would say that. Whether or not the bolt-on descriptors used in this study are meaningful to patients remains an open question.

Developing the role of electronic health records in economic evaluation. The European Journal of Health Economics [PubMed] Published 14th March 2019

One way that we can use patients’ routinely collected data is to support the conduct of economic evaluations. In this commentary, the authors set out some of the ways to make the most of these data and discuss some of the methodological challenges. Large datasets have the advantage of being large. When this is combined with the collection of sociodemographic data, estimates for sub-groups can be produced. The data can also facilitate the capture of outcomes not otherwise available. For example, the impact of bariatric surgery on depression outcomes could be identified beyond the timeframe of a trial. The datasets also have the advantage of being representative, where trials are not. This could mean more accurate estimates of costs and outcomes.

But there are things to bear in mind when using the data, such as the fact that coding might not always be very accurate, and coding practices could vary between observations. Missing data are likely to be missing for a reason (i.e. not at random), which creates challenges for the analyst. I had hoped that this paper would discuss novel uses of routinely collected data systems, such as the embedding of economic evaluations within them, rather than simply their use to estimate parameters for a model. But if you’re just getting started with using routine data, I suppose you could do worse than start with this paper.


Brendan Collins’s journal round-up for 18th March 2019


Evaluation of intervention impact on health inequality for resource allocation. Medical Decision Making [PubMed] Published 28th February 2019

How should decision-makers factor equity impacts into economic decisions? Can we trade off an intervention’s cost-effectiveness against its impact on unfair health inequalities? Is a QALY just a QALY, or should we weight it more if it is gained by someone from a disadvantaged group? Can we assume that, because people of lower socioeconomic position lose more QALYs through ill health, most interventions should, by default, reduce inequalities?

I really like the health equity plane. This is where you show health impacts (usually including a summary measure of cost-effectiveness like net health benefit or net monetary benefit) and equity impacts (which might be a change in slope index of inequality [SII] or relative index of inequality) on the same plane. This enables decision-makers to identify potential trade-offs between interventions that produce a greater benefit, but have less impact on inequalities, and those that produce a smaller benefit, but increase equity. I think there has been a debate over whether the ‘win-win’ quadrant should be south-east (which would be consistent with the dominant quadrant of the cost-effectiveness plane) or north-east, which is what seems to have been adopted as the consensus and is used here.
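If you want to draw one yourself, the plane is simple to sketch: a summary of health benefit on one axis, the change in the inequality measure on the other, and one point per intervention. Here is a minimal matplotlib example with made-up intervention names and numbers, using the north-east ‘win-win’ convention mentioned above.

```python
# A minimal sketch of a health equity impact plane: net health benefit on
# the x-axis, reduction in the slope index of inequality (SII) on the
# y-axis, so the north-east quadrant is the 'win-win' region. The
# intervention names and numbers are made up for illustration.
import matplotlib.pyplot as plt

interventions = {
    "Smoking cessation":   (1200, 0.004),   # (net health benefit in QALYs, reduction in SII)
    "Workplace wellbeing": (300, -0.001),
    "Targeted screening":  (-150, 0.002),
}

fig, ax = plt.subplots()
for name, (nhb, d_sii) in interventions.items():
    ax.scatter(nhb, d_sii)
    ax.annotate(name, (nhb, d_sii), textcoords="offset points", xytext=(5, 5))
ax.axhline(0, color="grey", linewidth=0.8)
ax.axvline(0, color="grey", linewidth=0.8)
ax.set_xlabel("Net health benefit (QALYs)")
ax.set_ylabel("Reduction in SII (equity gain)")
ax.set_title("Health equity impact plane (illustrative)")
plt.show()
```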

This paper showcases a reproducible method for estimating the equity impact of interventions. It considers public health interventions recommended by NICE from 2006 to 2016, with equity impacts estimated based on whether they targeted specific diseases, risk factors or populations. The disease distributions were based on Hospital Episode Statistics data by deprivation (IMD). The study used equity weights to convert QALYs gained by different social groups into net social welfare, in this case valuing the health of the most disadvantaged fifth of people at around 6-7 times that of the least disadvantaged fifth. I think there might still be work to be done around reaching consensus on equity weights.
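As a toy illustration of the weighting step (the QALY gains and the weights below are invented, not the paper’s), you simply multiply the QALY gain in each deprivation quintile by its equity weight and sum:

```python
# Toy illustration of converting QALY gains by deprivation quintile into
# equity-weighted 'net social welfare'. The gains and weights are invented;
# the paper derives its weights so that the most disadvantaged fifth is
# valued at roughly 6-7 times the least disadvantaged fifth.
qaly_gains = {"Q1 (most deprived)": 400, "Q2": 350, "Q3": 300,
              "Q4": 250, "Q5 (least deprived)": 200}
equity_weights = {"Q1 (most deprived)": 1.9, "Q2": 1.4, "Q3": 1.0,
                  "Q4": 0.6, "Q5 (least deprived)": 0.3}  # ratio Q1/Q5 ~ 6.3

unweighted = sum(qaly_gains.values())
weighted = sum(qaly_gains[q] * equity_weights[q] for q in qaly_gains)
print(f"Unweighted QALY gain: {unweighted}")
print(f"Equity-weighted welfare: {weighted:.0f}")
```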

The total expected effect on inequalities is small – full implementation of all recommendations would reduce the gap in quality-adjusted life expectancy between the healthiest and least healthy from 13.78 to 13.34 QALYs. But maybe this is to be expected; NICE does not typically look at vaccinations or screening and has not looked at large-scale public health programmes like the Healthy Child Programme as a whole. Reassuringly, where recommended interventions were likely to increase inequality, the trade-off between efficiency and equity fell within the bounds of the social welfare function the authors used. The increase in inequality might be acceptable because the interventions were cost-effective – producing 5.6 million QALYs while increasing the SII by 0.005. If these interventions are buying health at a good price, then you would hope this might release money for other interventions that would reduce inequalities.

I suspect that public health folks might not like equity trade-offs at all – trading off equity against cost-effectiveness might be seen as the moral equivalent of trading off human rights: you can’t choose between them. But the reality is that these kinds of trade-offs do happen and, like a lot of economic methods, this approach is about making implicit trade-offs explicit, and about having ‘accountability for reasonableness’.

Future unrelated medical costs need to be considered in cost effectiveness analysis. The European Journal of Health Economics [PubMed] [RePEc] Published February 2019

This editorial says that NICE should include unrelated future medical costs in its decision making. At the moment, if NICE looks at a cardiovascular disease (CVD) drug, it might look at future costs related to CVD but it won’t include changes in future costs of cancer, or dementia, which may occur because individuals live longer. But usually unrelated QALY gains will be implicitly included; so there is an inconsistency. If you are a health economic modeller, you know that including unrelated costs properly is technically difficult. You might weight average population costs by disease prevalence so you get a cost estimate for people with coronary heart disease, diabetes, and people without either disease. Or you might have a general healthcare running cost that you can apply to future years. But accounting for a full matrix of competing causes of morbidity and mortality is very tricky if not impossible. To help with this, this group of authors produced the excellent PAID tool, which helps with doing this for the Netherlands (can we have one for the UK please?).

To me, including unrelated future costs means that in some cases ICERs might be driven more by the ratio of future costs to QALYs gained, whereas currently ICERs are often driven by the ratio of the intervention costs to QALYs gained. So it might be that a lot of treatments that are currently cost-effective would no longer be, or we would need to judge all interventions against a higher willingness-to-pay threshold or value of a QALY. The authors suggest that, although including unrelated medical costs usually pushes up the ICER, it should ultimately result in better decisions that increase health.
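To see the mechanics, here is a toy calculation, with all numbers invented, of how prevalence-weighted unrelated annual costs accrued over the survival gain can shift an ICER:

```python
# Toy illustration (all numbers invented) of how including unrelated future
# medical costs changes an ICER. The unrelated annual cost is a crude
# prevalence-weighted average of disease-specific costs, in the spirit of
# the approach described above.
prevalence = {"cancer": 0.04, "dementia": 0.02, "other": 0.94}
annual_cost = {"cancer": 8000, "dementia": 12000, "other": 1500}
unrelated_annual = sum(prevalence[d] * annual_cost[d] for d in prevalence)

incremental_cost_of_drug = 20000     # related costs only
incremental_qalys = 1.0
extra_life_years = 1.5               # extra years in which unrelated costs accrue

icer_related_only = incremental_cost_of_drug / incremental_qalys
icer_with_unrelated = (incremental_cost_of_drug
                       + unrelated_annual * extra_life_years) / incremental_qalys

print(f"Unrelated annual cost: £{unrelated_annual:,.0f}")
print(f"ICER, related costs only: £{icer_related_only:,.0f}/QALY")
print(f"ICER, including unrelated costs: £{icer_with_unrelated:,.0f}/QALY")
```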

There are real ethical issues here. I worry that including future unrelated costs might be used for an integrated care agenda in the NHS, moving towards a capitation system where the total healthcare spend on any one individual is capped, which I don’t necessarily think should happen in a health insurance system. Future developments around big data mean we will be able to segment the population a lot better and estimate who will benefit from treatments. But I think if someone is unlucky enough to need a lot of healthcare spending, maybe they should have it. This is risk sharing and, without it, you may get the ‘double jeopardy‘ problem.

For health economic modellers and decision-makers, a compromise might be to present analyses with related and unrelated medical costs and to consider both for investment decisions.

Overview of cost-effectiveness analysis. JAMA [PubMed] Published 11th March 2019

This paper probably won’t offer anything new to academic health economists in terms of methods, but I think it might be a useful teaching resource. It gives an interesting example of a model of ovarian cancer screening in the US that was published in February 2018. There has been a large-scale trial of ovarian cancer screening in the UK (the UKCTOCS), which has been extended because the results have been promising but mortality reductions were not statistically significant. The model gives a central ICER estimate of $106,187/QALY (based on $100 per screen) which would probably not be considered cost-effective in the UK.

I would like to explore one statement that I found particularly interesting, around the willingness to pay threshold; “This willingness to pay is often represented by the largest ICER among all the interventions that were adopted before current resources were exhausted, because adoption of any new intervention would require removal of an existing intervention to free up resources.”

The Culyer bookshelf model is similar to this, although as well as the ICER you also need to consider the burden of disease or size of the investment. Displacing a $110,000/QALY intervention for 1000 people with a $109,000/QALY intervention for a million people will bust your budget.
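The arithmetic behind that point is simple; assuming (purely for illustration) a QALY gain of 0.5 per person treated:

```python
# Why the size of the investment matters as well as the ICER: a rough,
# invented example in the spirit of the bookshelf point above. Assume each
# person treated gains 0.5 QALYs.
qaly_gain_per_person = 0.5

# Intervention being displaced: $110,000/QALY for 1,000 people.
budget_freed = 110_000 * qaly_gain_per_person * 1_000        # $55 million

# Intervention being adopted: $109,000/QALY for 1,000,000 people.
budget_needed = 109_000 * qaly_gain_per_person * 1_000_000   # $54.5 billion

print(f"Budget freed by displacement: ${budget_freed/1e6:,.1f}m")
print(f"Budget needed for the new intervention: ${budget_needed/1e9:,.1f}bn")
```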

This idea works intuitively – if Liverpool FC are signing a new player then I might hope they are better than all of the other players, or at least better than the average player. But actually, as long as they are better than the worst player then the team will be improved (leaving aside issues around different positions, how they play together, etc.).

However, I think that saying that the reference ICER should be the largest current ICER might be a bit dangerous. Leaving aside inefficient legacy interventions (like unnecessary tonsillectomies, etc.), it is likely that the intervention being considered for investment and the current maximum-ICER intervention to be displaced will both be new, expensive immunotherapies. It might be last in, first out. But I can’t see this happening; people are loss averse, so decision-makers and patients might not accept what is seen as a fantastic new drug for pancreatic cancer being approved and then quickly usurped by a fantastic new leukaemia drug.

There has been a lot of debate around what the threshold should be in the UK; in England, NICE currently uses £20,000-£30,000 per QALY, up to a hypothetical maximum of £300,000/QALY in very specific circumstances. The UK Treasury values a QALY at £60,000. Work by Karl Claxton and colleagues suggests that marginal productivity (the ‘shadow price’) in the NHS is nearer to £5,000-£15,000 per QALY.

I don’t know what the answer to this is. I don’t think the willingness-to-pay threshold for a new treatment should be the maximum ICER of a current portfolio of interventions; maybe it should be the marginal health production cost in a health system, as might be inferred from the Claxton work. Of course, investment decisions are made on other factors, like impact on health inequalities, not just on the ICER.
