Thesis Thursday: Logan Trenaman

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Logan Trenaman who has a PhD from the University of British Columbia. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Economic evaluation of interventions to support shared decision-making: an extension of the valuation framework
Supervisors
Nick Bansback, Stirling Bryan
Repository link
http://hdl.handle.net/2429/66769

What is shared decision-making?

Shared decision-making is a process whereby patients and health care providers work together to make decisions. For most health care decisions, where there is no ‘best’ option, the most appropriate course of action depends on the clinical evidence and the patient’s informed preferences. In effect, shared decision-making is about reducing information asymmetry, by allowing providers to inform patients about the potential benefits and harms of alternative tests or treatments, and patients to express their preferences to their provider. The goal is to reach agreement on the most appropriate decision for that patient.

My thesis focused on individuals with advanced osteoarthritis who were considering whether to undergo total hip or knee replacement, or use non-surgical treatments such as pain medication, exercise, or mobility aids. Joint replacement alleviates pain and improves mobility for most patients; however, as many as 20-30% of recipients have reported insignificant improvement in symptoms and/or dissatisfaction with the results. Shared decision-making can help ensure that those considering joint replacement are aware of alternative treatments and have realistic expectations about the potential benefits and harms of each option.

There are different types of interventions available to help support shared decision-making, some of which target the patient (e.g. patient decision aids) and some of which target providers (e.g. skills training). My thesis focused on a randomized controlled trial that evaluated a pre-consultation patient decision aid, which generated a summary report for the surgeon that outlined the patient’s knowledge, values, and preferences.

How can the use of decision aids influence health care costs?

The use of patient decision aids can impact health care costs in several ways. Some patient decision aids, such as those evaluated in my thesis, are designed for use by patients in preparation for a consultation where a treatment decision is made. Others are designed to be used during the consultation with the provider. There is some evidence that decision aids may increase up-front costs, by lengthening consultations and requiring investment to integrate decision aids into routine care and to train clinicians. These interventions may impact downstream costs by influencing treatment decision-making. For example, the Cochrane review of patient decision aids found that, across 18 studies in major elective surgery, those exposed to decision aids were less likely to choose surgery compared to those in usual care (RR: 0.86, 95% CI: 0.75 to 1.00).
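As a quick aside on how an effect estimate like that is built: a relative risk and its confidence interval can be computed from raw two-arm counts using the standard log-normal approximation. The counts below are made up for illustration (they are not the Cochrane review's data), chosen only so that they happen to reproduce an RR of 0.86:

```python
import math

def relative_risk(events_tx, n_tx, events_ctrl, n_ctrl, z=1.96):
    """Relative risk with an approximate 95% CI (log-normal method)."""
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    # Standard error of log(RR) for two independent proportions
    se = math.sqrt(1/events_tx - 1/n_tx + 1/events_ctrl - 1/n_ctrl)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Hypothetical counts: 430/1000 chose surgery with a decision aid,
# 500/1000 in usual care.
rr, lower, upper = relative_risk(430, 1000, 500, 1000)  # RR = 0.86
```

The width of the interval shrinks with sample size, which is why pooled estimates across 18 trials can sit right on the boundary of statistical significance (upper limit 1.00) even with a fairly marked point estimate.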

This was observed in the trial-based economic evaluation that constituted the first chapter of my thesis. This analysis found that decision aids were highly cost-effective, largely due to a smaller proportion of patients undergoing joint replacement. Of course, this conclusion could change over time. One of the challenges of previous cost-effectiveness analyses (CEA) of patient decision aids has been a lack of long-term follow-up. Patients who choose not to have surgery over the short term may go on to have surgery later. To look at the longer-term impact of decision aids, the third chapter of my thesis linked trial participants to administrative data with an average of 7 years of follow-up. I found that, from a resource use perspective, the conclusion was the same as observed during the trial: fewer patients exposed to decision aids had undergone surgery, resulting in lower costs.

What is it about shared decision-making that patients value?

On the whole, the evidence suggests that patients value being informed, listened to, and offered the opportunity to participate in decision-making (should they wish!). To better understand how much shared decision-making is valued, I performed a systematic review of discrete choice experiments (DCEs) that had valued elements of shared decision-making. This review found that survey respondents (primarily patients) were willing to wait longer, pay more, and, in some cases, accept poorer health outcomes in exchange for greater shared decision-making.

It is important to consider preference heterogeneity in this context. The last chapter of my PhD performed a DCE to value shared decision-making in the context of advanced knee osteoarthritis. The DCE included three attributes: waiting time, health outcomes, and shared decision-making. The latent class analysis found four distinct subgroups of patients. Two groups were balanced, and traded between all attributes, while one group had a strong preference for shared decision-making, and another had a strong preference for better health outcomes. One important finding from this analysis was that having a strong preference for shared decision-making was not associated with demographic or clinical characteristics. This highlights the importance of each clinical encounter in determining the appropriate level of shared decision-making for each patient.
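To make the trade-offs elicited by such a DCE concrete: once a choice model has been fitted, the marginal rate of substitution between any two attributes is simply a ratio of their coefficients. The coefficients below are invented for illustration (they are not the thesis's estimates):

```python
# Hypothetical part-worth utilities from a conditional logit model
# fitted to DCE choice data (illustrative values only).
beta = {
    "sdm": 0.8,          # utility of full shared decision-making vs none
    "wait_month": -0.1,  # disutility per additional month of waiting
    "good_outcome": 1.2, # utility of a good vs poor health outcome
}

# Marginal rate of substitution: extra months of waiting a respondent
# would accept in exchange for shared decision-making.
months_accepted = -beta["sdm"] / beta["wait_month"]  # 8 months

# Shared decision-making expressed relative to the value of a good outcome.
sdm_vs_outcome = beta["sdm"] / beta["good_outcome"]
```

In a latent class analysis, a separate set of coefficients is estimated for each class, so a "strong preference for shared decision-making" group would show a much larger ratio than a "health outcomes" group.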

Is it meaningful to estimate the cost-per-QALY of shared decision-making interventions?

One of the challenges of my thesis was grappling with the potential conflict between the objectives of CEA using QALYs (maximizing health) and shared decision-making interventions (improved decision-making). Importantly, encouraging shared decision-making may result in patients choosing alternatives that do not maximize QALYs. For example, informed patients may choose to delay or forego elective surgery due to potential risks, despite it providing more QALYs (on average).

In cases where a CEA finds that shared decision-making interventions result in poorer health outcomes at lower cost, I think this is perfectly acceptable (provided patients are making informed choices). However, it becomes more complicated when shared decision-making interventions increase costs, result in poorer health outcomes, but provide other, non-health benefits such as informing patients or involving them in treatment decisions. In such cases, decision-makers need to consider whether it is justified to allocate scarce health care resources to encourage shared decision-making when it requires sacrificing health outcomes elsewhere. The latter part of my thesis tried to inform this trade-off, by valuing the non-health benefits of shared decision-making which would not otherwise be captured in a CEA that uses QALYs.

How should the valuation framework be extended, and is this likely to indicate different decisions?

I extended the valuation framework by attempting to value non-health benefits of shared decision-making. I followed guidelines from the Canadian Agency for Drugs and Technologies in Health, which state that “the value of non-health effects should be based on being traded off against health” and that societal preferences be used for this valuation. Requiring non-health benefits to be valued relative to health reflects the opportunity cost of allocating resources toward these outcomes. While these guidelines do not specifically state how to do so, I chose to value shared decision-making relative to life-years using a chained (or two-stage) valuation approach so that they could be incorporated within the QALY.
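The practical effect of folding a process value into the QALY can be shown with a net monetary benefit (NMB) calculation, where NMB = threshold × ΔQALYs − Δcost. All numbers below are invented for illustration (they are not the thesis's results); the point is that a small, positive process utility can flip the sign of the NMB for an intervention that adds modest cost without a measurable health gain:

```python
# Illustrative inputs only -- not the thesis's actual estimates.
threshold = 50_000          # cost-effectiveness threshold ($ per QALY)
delta_cost = 50             # incremental cost of the SDM intervention ($)
delta_health_qalys = 0.0    # no measurable health gain
process_utility = 0.002     # small QALY-equivalent value of the SDM process,
                            # as might come from a chained TTO exercise

# Excluding the process value, the intervention just adds cost...
nmb_without = threshold * delta_health_qalys - delta_cost

# ...but including it within the QALY can tip the NMB positive.
nmb_with = threshold * (delta_health_qalys + process_utility) - delta_cost
```

A positive NMB means the intervention is cost-effective at the chosen threshold, so whether the process value is counted can be decisive exactly in the "small cost, no health benefit" cases described above.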

Ultimately, I found that the value of the process of shared decision-making was small; however, it may still have an impact on cost-effectiveness. The reasons for this are twofold. First, there are few cases where shared decision-making interventions improve health outcomes. A 2018 sub-analysis of the Cochrane review of patient decision aids found little evidence that they impact health-related quality of life. Second, the up-front cost of implementing shared decision-making interventions may be small. Thus, in cases where shared decision-making interventions require a small investment but provide no health benefit, the non-health value of shared decision-making may impact cost-effectiveness. One recent example from Dr Victoria Brennan found that incorporating process utility associated with improved consultation quality, resulting from a new online assessment tool, increased the probability that the intervention was cost-effective from 35% to 60%.

Chris Sampson’s journal round-up for 18th February 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

An educational review about using cost data for the purpose of cost-effectiveness analysis. PharmacoEconomics [PubMed] Published 12th February 2019

Costing can seem like a Cinderella method in the health economist’s toolkit. If you’re working on an economic evaluation, estimating resource use and costs can be tedious. That is perhaps why costing methodology has been relatively neglected in the literature compared to health state valuation (for example). This paper tries to redress the balance slightly by providing an overview of the main issues in costing, explaining why they’re important, so that we can do a better job. The issues are more complex than many assume.

Supported by a formidable reference list (n=120), the authors tackle 9 issues relating to costing: i) costs vs resource use; ii) trial-based vs model-based evaluations; iii) costing perspectives; iv) data sources; v) statistical methods; vi) baseline adjustments; vii) missing data; viii) uncertainty; and ix) discounting, inflation, and currency. It’s a big paper with a lot to say, so it isn’t easily summarised. Its role is as a reference point for us to turn to when we need it. There’s a stack of papers and other resources cited in here that I wasn’t aware of. The paper itself doesn’t get technical, leaving that to the papers cited therein. But the authors provide a good discussion of the questions that ought to be addressed by somebody designing a study, relating to data collection and analysis.

The paper closes with some recommendations. The main one is that people conducting cost-effectiveness analysis should think harder about why they’re making particular methodological choices. The point is also made that new developments could change the way we collect and analyse cost data. For example, the growing use of observational data demands that greater consideration be given to unobserved confounding. Costing methods are important and interesting!

A flexible open-source decision model for value assessment of biologic treatment for rheumatoid arthritis. PharmacoEconomics [PubMed] Published 9th February 2019

Wherever feasible, decision models should be published open-source, so that they can be reviewed, reused, recycled, or, perhaps, rejected. But open-source models are still a rare sight. Here, we have one for rheumatoid arthritis. But the paper isn’t really about the model. After all, the model and supporting documentation are already available online. Rather, the paper describes the reasoning behind publishing a model open-source, and the process for doing so in this case.

This is the first model released as part of the Open Source Value Project, which tries to convince decision-makers that cost-effectiveness models are worth paying attention to. That is, it’s aimed at the US market, where models are largely ignored. The authors argue that models need to be flexible to be valuable into the future and that, to achieve this, four steps should be followed in the development: 1) release the initial model, 2) invite feedback, 3) convene an expert panel to determine actions in light of the feedback, and 4) revise the model. Then, repeat as necessary. Alongside this, people with the requisite technical skills (i.e. knowing how to use R, C++, and GitHub) can proffer changes to the model whenever they like. This paper was written after step 3 had been completed, and the authors report receiving 159 comments on their model.

The model itself (which you can have a play with here) is an individual patient simulation, which is set up to evaluate a variety of treatment scenarios. It estimates costs and (mapped) QALYs and can be used to conduct cost-effectiveness analysis or multi-criteria decision analysis. The model was designed to be able to run 32 different model structures based on different assumptions about treatment pathways and outcomes, meaning that the authors could evaluate structural uncertainties (which is a rare feat). A variety of approaches were used to validate the model.
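For readers unfamiliar with the model type: an individual patient simulation draws one patient at a time through probabilistic events, then averages costs and QALYs per arm. The following is a deliberately minimal toy sketch (invented probabilities, costs, and utilities, entirely unrelated to the published rheumatoid arthritis model) comparing a decision-aid arm with usual care:

```python
import random

def simulate_patient(rng, use_decision_aid):
    """One simulated patient: surgery choice, total cost, and QALYs.
    All probabilities, costs, and utilities are illustrative only."""
    p_surgery = 0.60 if use_decision_aid else 0.70  # aids reduce surgery uptake
    had_surgery = rng.random() < p_surgery
    cost = (15_000 if had_surgery else 2_000) + (50 if use_decision_aid else 0)
    qalys = rng.gauss(0.80 if had_surgery else 0.75, 0.05)
    return cost, qalys

def run_arm(use_decision_aid, n=10_000, seed=1):
    """Average cost and QALYs over n simulated patients in one arm."""
    rng = random.Random(seed)
    results = [simulate_patient(rng, use_decision_aid) for _ in range(n)]
    return (sum(c for c, _ in results) / n,
            sum(q for _, q in results) / n)

cost_aid, qalys_aid = run_arm(use_decision_aid=True)
cost_uc, qalys_uc = run_arm(use_decision_aid=False)
# With these made-up inputs the decision-aid arm is cheaper on average,
# because fewer simulated patients undergo surgery.
```

A real model of this kind layers on treatment sequences, time cycles, discounting, and sampled parameter uncertainty, which is what makes running 32 structural variants such a substantial undertaking.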

The authors identify several challenges that they experienced in the process, including difficulties in communication between stakeholders and the large amount of time needed to develop, test, and describe a model of this sophistication. I would imagine that, compared with most decision models, the amount of work underlying this paper is staggering. Whether or not that work is worthwhile depends on whether researchers and policymakers make use of the model. The authors have made it as easy as possible for stakeholders to engage with and build on their work, so they should be hopeful that it will bear fruit.

EQ-5D-Y-5L: developing a revised EQ-5D-Y with increased response categories. Quality of Life Research [PubMed] Published 9th February 2019

The EQ-5D-Y has been a slow burner. It’s been around 10 years since it first came on the scene, but we’ve been without a value set and – with the introduction of the EQ-5D-5L – the questionnaire has lost some comparability with its adult equivalent. But the EQ-5D-Y has almost caught up, and this study describes part of how that’s been achieved.

The reason to develop a 5L version for the EQ-5D-Y is the same as for the adult version – to reduce ceiling effects and improve sensitivity. A selection of possible descriptors was identified through a review of the literature. Focus groups were conducted with children between 8 and 15 years of age in Germany, Spain, Sweden, and the UK in order to identify labels that can be understood by young people. Specifically, the researchers wanted to know the words used by children and adolescents to describe the quantity or intensity of health problems. Participants ranked the labels according to severity and specified which labels they didn’t like. Transcripts were analysed using thematic content analysis. Next, individual interviews were conducted with 255 participants across the four countries, which involved sorting and response scaling tasks. Younger children used a smiley scale. At this stage, both 4L and 5L versions were being considered. In a second phase of the research, cognitive interviews were used to test for comprehensibility and feasibility.

A 5-level version was preferred by most, and 5L labels were identified in each language. The English version used terms like ‘a little bit’, ‘a lot’, and ‘really’. There’s plenty more research to be done on the EQ-5D-Y-5L, including psychometric testing, but I’d expect it to be coming to studies near you very soon. One of the key takeaways from this study, and something that I’ve been seeing more in research in recent years, is that kids are smart. The authors make this point clear, particularly with respect to the response scaling tasks that were conducted with children as young as 8. Decision-making criteria and frameworks that relate to children should be based on children’s preferences and ideas.

Credits

Chris Sampson’s journal round-up for 4th February 2019


Patient choice and provider competition – quality enhancing drivers in primary care? Social Science & Medicine Published 29th January 2019

There’s no shortage of studies in economics claiming to identify the impact (or lack of impact) of competition in the market for health care. The evidence has brought us close to a consensus that greater competition might improve quality, so long as providers don’t compete on price. However, many of these studies aren’t able to demonstrate the mechanism through which competition might improve quality, and the causality is therefore speculative. The research reported in this article was an attempt to see whether the supposed mechanisms for quality improvement actually exist. The authors distinguish between the demand-side mechanisms through which competition-increasing reforms might improve quality (i.e. changes in patient behaviour) and the supply-side mechanisms (i.e. changes in provider behaviour), asserting that the supply side has been neglected in the research.

The study is based on primary care in Sweden’s two largest cities, where patients can choose their primary care practice, which could be a private provider. Key is the fact that patients can switch between providers as often as they like, and with fewer barriers to doing so than in the UK. Prospective patients have access to some published quality indicators. With the goal of maximum variation, the researchers recruited 13 primary health care providers for semi-structured interviews with the practice manager and (in most cases) one or more of the practice GPs. The interview protocol included questions about the organisation of patient visits, information received about patients’ choices, market situation, reimbursement, and working conditions. Interview transcripts were coded and a framework established. Two overarching themes were ‘local market conditions’ and ‘feedback from patient choice’.

Most interviewees did not see competitors in the local market as a threat – on the contrary, providers are encouraged to cooperate on matters such as public health. Where providers did talk about competing, it was in terms of (speed of) access for patients, or in competition to recruit and keep staff. None of the interviewees were automatically informed of patients being removed from their list, and some managers reported difficulties in actually knowing which patients on their list were still genuinely on it. Even where these data were more readily available, nobody had access to information on reasons for patients leaving. Managers saw greater availability of this information as useful for quality improvement, while GPs tended to think it could be useful in ensuring continuity of care. Still, most expressed no desire to expand their market share. Managers reported using marketing efforts in response to greater competition generally, rather than as a response to observed changes within their practice. But most relied on reputation. Some reported becoming more service-minded as a result of choice reforms.

It seems that practices need more information to be able to act on competitive pressures. But most practices don’t care about it because they don’t want to expand and they face no risk of a shortage of patients (in cities, at least). And, even if they did want to act on the information, chances are it would just create an opportunity for them to improve access as a way of cherry-picking younger and healthier people who demand convenience. Primary care providers (in this study, at least) are not income maximisers but satisficers (they want to break even), so there isn’t much scope for reforms to encourage providers to compete for new patients. Patient choice reforms may improve quality, but it isn’t clear that this has anything to do with competitive pressure.

Maximising the impact of patient reported outcome assessment for patients and society. BMJ [PubMed] Published 24th January 2019

Patient-reported outcome measures (PROMs) have been touted as a way of improving patient care. Yet, their use around the world is fragmented. In this paper, the authors make some recommendations about how we might use PROMs to improve patient care. The authors summarise some of the benefits of using PROMs and discuss some of the ways that they’ve been used in the UK.

Five key challenges in the use of PROMs are specified: i) appropriate and consistent selection of the best measures; ii) ethical collection and reporting of PROM data; iii) data collection, analysis, reporting, and interpretation; iv) data logistics; and v) a lack of coordination and efficiency. To address these challenges, the authors recommend an ‘integrated’ approach. To achieve this, stakeholder engagement is important and a governance framework needs to be developed. A handy table of current uses is provided.

I can’t argue with what the paper proposes, but it outlines an idealised scenario rather than any firm and actionable recommendations. What the authors don’t discuss is the fact that the use of PROMs in the UK is flailing. The NHS PROMs programme has been scaled back, measures have been dropped from the QOF, and the EQ-5D has been dropped from the GP Patient Survey. Perhaps we need bolder recommendations and new ideas to turn the tide.

Check your checklist: the danger of over- and underestimating the quality of economic evaluations. PharmacoEconomics – Open [PubMed] Published 24th January 2019

This paper outlines the problems associated with misusing methodological and reporting checklists. The author argues that the current number of checklists available in the context of economic evaluation and HTA (13, apparently) is ‘overwhelming’. Three key issues are discussed. First, researchers choose the wrong checklist. A previous review found that the Drummond, CHEC, and Philips checklists were regularly used in the wrong context. Second, checklists can be overinterpreted, resulting in incorrect conclusions. A complete checklist does not mean that a study is perfect, and different features are of varying importance in different studies. Third, checklists are misused, with researchers deciding which items are or aren’t relevant to their study, without guidance.

The author suggests that more guidance is needed and that a checklist for selecting the correct checklist could be the way to go. The issue of updating checklists over time – and who ought to be responsible for this – is also raised.

In general, the tendency seems to be to broaden the scope of general checklists and to develop new checklists for specific methodologies, requiring the application of multiple checklists. As methods develop, they become increasingly specialised and heterogeneous. I think there’s little hope for checklists in this context unless they’re pared down and used as a reminder of the more complex guidance that’s needed to specify suitable methods and achieve adequate reporting. ‘Check your checklist’ is a useful refrain, though I reckon ‘chuck your checklist’ can sometimes be a better strategy.

A systematic review of dimensions evaluating patient experience in chronic illness. Health and Quality of Life Outcomes [PubMed] Published 21st January 2019

Back to PROMs and PRE(xperience)Ms. This study sets out to understand what it is that patient-reported measures are being used to capture in the context of chronic illness. The authors conducted a systematic review, screening 2,375 articles and ultimately including 107 articles that investigated the measurement properties of chronic (physical) illness PROMs and PREMs.

29 questionnaires were about (health-related) quality of life, 19 about functional status or symptoms, 20 on feelings and attitudes about illness, 19 assessing attitudes towards health care, and 20 on patient experience. The authors provide some nice radar charts showing the percentage of questionnaires that included each of 12 dimensions: i) physical, ii) functional, iii) social, iv) psychological, v) illness perceptions, vi) behaviours and coping, vii) effects of treatment, viii) expectations and satisfaction, ix) experience of health care, x) beliefs and adherence to treatment, xi) involvement in health care, and xii) patient’s knowledge.

The study supports the idea that a patient’s lived experience of illness and treatment, and adaptation to that, has been judged to be important in addition to quality of life indicators. The authors recommend that no measure should try to capture everything because there are simply too many concepts that could be included. Rather, researchers should specify the domains of interest and clearly define them for instrument development.
