Thesis Thursday: Logan Trenaman

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Logan Trenaman who has a PhD from the University of British Columbia. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Economic evaluation of interventions to support shared decision-making: an extension of the valuation framework
Supervisors
Nick Bansback, Stirling Bryan
Repository link
http://hdl.handle.net/2429/66769

What is shared decision-making?

Shared decision-making is a process whereby patients and health care providers work together to make decisions. For most health care decisions, where there is no ‘best’ option, the most appropriate course of action depends on the clinical evidence and the patient’s informed preferences. In effect, shared decision-making is about reducing information asymmetry, by allowing providers to inform patients about the potential benefits and harms of alternative tests or treatments, and patients to express their preferences to their provider. The goal is to reach agreement on the most appropriate decision for that patient.

My thesis focused on individuals with advanced osteoarthritis who were considering whether to undergo total hip or knee replacement, or to use non-surgical treatments such as pain medication, exercise, or mobility aids. Joint replacement alleviates pain and improves mobility for most patients; however, as many as 20-30% of recipients report little meaningful improvement in symptoms and/or dissatisfaction with the results. Shared decision-making can help ensure that those considering joint replacement are aware of alternative treatments and have realistic expectations about the potential benefits and harms of each option.

There are different types of interventions available to help support shared decision-making, some of which target the patient (e.g. patient decision aids) and some of which target providers (e.g. skills training). My thesis focused on a randomized controlled trial that evaluated a pre-consultation patient decision aid, which generated a summary report for the surgeon that outlined the patient’s knowledge, values, and preferences.

How can the use of decision aids influence health care costs?

The use of patient decision aids can impact health care costs in several ways. Some patient decision aids, such as the one evaluated in my thesis, are designed for use by patients in preparation for a consultation where a treatment decision is made. Others are designed to be used during the consultation with the provider. There is some evidence that decision aids may increase up-front costs by lengthening consultations and by requiring investment to integrate them into routine care and to train clinicians. These interventions may also impact downstream costs by influencing treatment decision-making. For example, the Cochrane review of patient decision aids found that, across 18 studies in major elective surgery, those exposed to decision aids were less likely to choose surgery than those in usual care (RR: 0.86, 95% CI: 0.75 to 1.00).

This was observed in the trial-based economic evaluation that constituted the first chapter of my thesis. The analysis found that decision aids were highly cost-effective, largely because a smaller proportion of patients underwent joint replacement. Of course, this conclusion could change over time. One of the challenges of previous cost-effectiveness analyses (CEAs) of patient decision aids has been a lack of long-term follow-up. Patients who choose not to have surgery in the short term may go on to have surgery later. To look at the longer-term impact of decision aids, the third chapter of my thesis linked trial participants to administrative data with an average of seven years of follow-up. I found that, from a resource use perspective, the conclusion was the same as during the trial: fewer patients exposed to decision aids had undergone surgery, resulting in lower costs.

What is it about shared decision-making that patients value?

On the whole, the evidence suggests that patients value being informed, listened to, and offered the opportunity to participate in decision-making (should they wish!). To better understand how much shared decision-making is valued, I performed a systematic review of discrete choice experiments (DCEs) that had valued elements of shared decision-making. The review found that survey respondents (primarily patients) were willing to wait longer, to pay more, and, in some cases, to accept poorer health outcomes in exchange for greater shared decision-making.

It is important to consider preference heterogeneity in this context. In the last chapter of my PhD, I performed a DCE to value shared decision-making in the context of advanced knee osteoarthritis. The DCE included three attributes: waiting time, health outcomes, and shared decision-making. The latent class analysis identified four distinct subgroups of patients. Two groups were balanced, trading across all attributes; one group had a strong preference for shared decision-making; and another had a strong preference for better health outcomes. One important finding from this analysis was that having a strong preference for shared decision-making was not associated with demographic or clinical characteristics. This highlights the importance of each clinical encounter in determining the appropriate level of shared decision-making for each patient.
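The trade-offs that a latent class analysis uncovers are often summarised as marginal rates of substitution between attributes, e.g. how many extra months of waiting a respondent would accept in exchange for full shared decision-making. A minimal sketch of that calculation, with entirely hypothetical class labels and coefficients (not the thesis's estimates):

```python
# Hypothetical class-specific conditional-logit coefficients:
#   beta_wait : utility per additional month of waiting (negative = disliked)
#   beta_sdm  : utility of moving from no SDM to full SDM
classes = {
    "balanced A":      {"beta_wait": -0.05, "beta_sdm": 0.40},
    "balanced B":      {"beta_wait": -0.08, "beta_sdm": 0.35},
    "SDM-focused":     {"beta_wait": -0.02, "beta_sdm": 1.20},
    "outcome-focused": {"beta_wait": -0.10, "beta_sdm": 0.10},
}

def willingness_to_wait(beta_sdm: float, beta_wait: float) -> float:
    """Marginal rate of substitution: extra months of waiting a
    respondent would accept in exchange for full SDM."""
    return beta_sdm / -beta_wait

for name, b in classes.items():
    wtw = willingness_to_wait(b["beta_sdm"], b["beta_wait"])
    print(f"{name}: would wait {wtw:.1f} extra months for full SDM")
```

The same ratio against a cost attribute gives willingness to pay, and against a health-outcome attribute gives the health sacrifice respondents would accept, which is how the review's findings are typically expressed.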

Is it meaningful to estimate the cost-per-QALY of shared decision-making interventions?

One of the challenges of my thesis was grappling with the potential conflict between the objectives of CEA using QALYs (maximizing health) and shared decision-making interventions (improved decision-making). Importantly, encouraging shared decision-making may result in patients choosing alternatives that do not maximize QALYs. For example, informed patients may choose to delay or forego elective surgery due to potential risks, despite it providing more QALYs (on average).

In cases where a CEA finds that shared decision-making interventions result in poorer health outcomes at lower cost, I think this is perfectly acceptable (provided patients are making informed choices). However, it becomes more complicated when shared decision-making interventions increase costs, result in poorer health outcomes, but provide other, non-health benefits such as informing patients or involving them in treatment decisions. In such cases, decision-makers need to consider whether it is justified to allocate scarce health care resources to encourage shared decision-making when it requires sacrificing health outcomes elsewhere. The latter part of my thesis tried to inform this trade-off, by valuing the non-health benefits of shared decision-making which would not otherwise be captured in a CEA that uses QALYs.

How should the valuation framework be extended, and is this likely to indicate different decisions?

I extended the valuation framework by attempting to value non-health benefits of shared decision-making. I followed guidelines from the Canadian Agency for Drugs and Technologies in Health, which state that “the value of non-health effects should be based on being traded off against health” and that societal preferences be used for this valuation. Requiring non-health benefits to be valued relative to health reflects the opportunity cost of allocating resources toward these outcomes. While these guidelines do not specifically state how to do so, I chose to value shared decision-making relative to life-years using a chained (or two-stage) valuation approach so that they could be incorporated within the QALY.
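The arithmetic of a chained (two-stage) valuation can be sketched in a few lines. All numbers below are hypothetical illustrations of the mechanics, not results from the thesis: stage 2 anchors two reference states to the QALY scale via a trade-off against life-years, and stage 1 places the state of interest on the interval between those anchors.

```python
def chain(v_stage1: float, u_anchor_low: float, u_anchor_high: float) -> float:
    """Map a stage-1 value (on a 0-1 scale between two anchor states)
    onto the QALY scale using the anchors' stage-2 utilities."""
    return u_anchor_low + v_stage1 * (u_anchor_high - u_anchor_low)

# Stage 2 (hypothetical): anchors valued against life-years, e.g. a
# respondent equates 10 years in the lower anchor state with 8.2 years
# in full health -> utility 0.82; the upper anchor is valued at 0.90.
u_low, u_high = 0.82, 0.90

# Stage 1 (hypothetical): a consultation with full shared
# decision-making is placed at 0.6 of the interval between the anchors.
u_sdm = chain(0.6, u_low, u_high)
print(round(u_sdm, 3))  # 0.868
```

Because the final value is expressed on the same scale as health utilities, the non-health benefit can then be carried into the QALY calculation of a standard CEA.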

Ultimately, I found that the value of the process of shared decision-making was small; however, it may still have an impact on cost-effectiveness, for two reasons. First, there are few cases where shared decision-making interventions improve health outcomes. A 2018 sub-analysis of the Cochrane review of patient decision aids found little evidence that they impact health-related quality of life. Second, the up-front cost of implementing shared decision-making interventions may be small. Thus, in cases where shared decision-making interventions require a small investment but provide no health benefit, the non-health value of shared decision-making may tip the balance of cost-effectiveness. One recent example from Dr Victoria Brennan found that incorporating process utility associated with improved consultation quality, resulting from a new online assessment tool, increased the probability that the intervention was cost-effective from 35% to 60%.

Chris Sampson’s journal round-up for 17th September 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Does competition from private surgical centres improve public hospitals’ performance? Evidence from the English National Health Service. Journal of Public Economics Published 11th September 2018

This study looks at proper (supply-side) privatisation in the NHS. The subject is the government-backed introduction of Independent Sector Treatment Centres (ISTCs), which, in the name of profit, provide routine elective surgical procedures to NHS patients. ISTCs were directed to areas with high waiting times and began rolling out from 2003.

The authors take pre-surgery length of stay as a proxy for efficiency and hypothesise that the entry of ISTCs would improve efficiency in nearby NHS hospitals. They also hypothesise that the ISTCs would cream-skim healthier patients, leaving NHS hospitals to foot the bill for a more challenging casemix. Difference-in-difference regressions are used to test these hypotheses, the treatment group being those NHS hospitals close to ISTCs and the control being those not likely to be affected. The authors use patient-level Hospital Episode Statistics from 2002-2008 for elective hip and knee replacements.
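The identification strategy above boils down to comparing the change in exposed hospitals with the change in unexposed hospitals. A minimal difference-in-differences sketch with made-up numbers (not the paper's data or estimates):

```python
# Hypothetical mean pre-surgery length of stay (days), before and
# after ISTC entry:
exposed   = {"pre": 1.9, "post": 1.5}   # NHS hospitals near an ISTC
unexposed = {"pre": 2.0, "post": 1.9}   # comparison hospitals

def did(treated: dict, control: dict) -> float:
    """DiD estimate: change in the treated group minus change in the
    control group, netting out the common time trend."""
    return (treated["post"] - treated["pre"]) - (control["post"] - control["pre"])

effect = did(exposed, unexposed)
print(f"DiD estimate: {effect:+.2f} days")
```

In practice this is estimated by regression on patient-level data with hospital and time fixed effects plus controls, but the regression coefficient on the exposure-times-post interaction is this same double difference.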

The key difficulty here is that the trend in length of stay changed dramatically at the time ISTCs began to be introduced, regardless of whether a hospital was affected by their introduction. This is because there was a whole suite of policy and structural changes being implemented around this period, many targeting hospital efficiency. So we’re looking at comparing new trends, not comparing changes in existing levels or trends.

The authors’ hypotheses prove right. Pre-surgery length of stay fell in exposed hospitals by around 16%. The ISTCs engaged in risk selection, meaning that NHS hospitals were left with sicker patients. What’s more, the savings for NHS hospitals (from shorter pre-surgery length of stay) were more than offset by an increase in post-surgery length of stay, which may have been due to the change in casemix.

I’m not sure how useful difference-in-difference is in this case. We don’t know what the trend would have been without the intervention because the pre-intervention trend provides no clues about it and, while the outcome is shown to be unrelated to selection into the intervention, we don’t know whether selection into the ISTC intervention was correlated with exposure to other policy changes. The authors do their best to quell these concerns about parallel trends and correlated policy shocks, and the results appear robust.

Broadly speaking, the study satisfies my prior view of for-profit providers as leeches on the NHS. Still, I’m left a bit unsure of the findings. The problem is, I don’t see the causal mechanism. Hospitals had the financial incentive to be efficient and achieve a budget surplus without competition from ISTCs. It’s hard (for me, at least) to see how reduced length of stay has anything to do with competition unless hospitals used it as a basis for getting more patients through the door, which, given that ISTCs were introduced in areas with high waiting times, the hospitals could have done anyway.

While the paper describes a smart and thorough analysis, the findings don’t tell us whether ISTCs are good or bad. Both the length of stay effect and the casemix effect are ambiguous with respect to patient outcomes. If only we had some PROMs to work with…

One method, many methodological choices: a structured review of discrete-choice experiments for health state valuation. PharmacoEconomics [PubMed] Published 8th September 2018

Discrete choice experiments (DCEs) are in vogue when it comes to health state valuation. But there is disagreement about how they should be conducted. Studies can differ in terms of the design of the choice task, the design of the experiment, and the analysis methods. The purpose of this study is to review what has been going on: how have studies differed, and what could that mean for our use of the value sets that are estimated?

A search of PubMed for valuation studies using DCEs – including generic and condition-specific measures – turned up 1132 citations, of which 63 were ultimately included in the review. Data were extracted and quality assessed.

The ways in which the studies differed, and the ways in which they were similar, hint at what’s needed from future research. The majority of recent studies were conducted online. This could be problematic if we think self-selecting online panels aren’t representative. Most studies used five or six attributes to describe options and many included duration as an attribute. The methodological tweaks necessary to anchor at 0=dead were a key source of variation. Those using duration varied in terms of the number of levels presented and the range of duration (from 2 months to 50 years). Other studies adopted alternative strategies. In DCE design, there is a necessary trade-off between statistical efficiency and the difficulty of the task for respondents. A variety of methods have been employed to try and ease this difficulty, but there remains a lack of consensus on the best approach. An agreed criterion for this trade-off could facilitate consistency. Some of the consistency that does appear in the literature is due to conformity with EuroQol’s EQ-VT protocol.

Unfortunately, for casual users of DCE valuations, all of this means that we can’t just assume that a DCE is a DCE is a DCE. Understanding the methodological choices involved is important in the application of resultant value sets.

Trusting the results of model-based economic analyses: is there a pragmatic validation solution? PharmacoEconomics [PubMed] Published 6th September 2018

Decision models are almost never validated. This means that – save for a superficial assessment of their outputs – they are taken on good faith. That should be a worry. This article builds on the experience of the authors to outline why validation doesn’t take place and to try to identify solutions. This experience includes a pilot study in France, NICE Evidence Review Groups, and the perspective of a consulting company modeller.

There are a variety of reasons why validation is not conducted, but resource constraints are a big part of it. Neither HTA agencies, nor modellers themselves, have the time to conduct validation and verification exercises. The core of the authors’ proposed solution is to end the routine development of bespoke models. Models – or, at least, parts of models – need to be taken off the shelf. Thus, open source or otherwise transparent modelling standards are a prerequisite for this. The key idea is to create ‘standard’ or ‘reference’ models, which can be extensively validated and tweaked. The most radical aspect of this proposal is that they should be ‘freely available’.

But rather than offering a path to open source modelling, the authors offer recommendations for how we should conduct ourselves until open source modelling is realised. These include the adoption of a modular and incremental approach to modelling, combined with more transparent reporting. I agree; we need a shift in mindset. Yet, the barriers to open source models are – I believe – the same barriers that would prevent these recommendations from being realised. Modellers don’t have the time or the inclination to provide full and transparent reporting. There is no incentive for modellers to do so. The intellectual property value of models means that public release of incremental developments is not seen as a sensible thing to do. Thus, the authors’ recommendations appear to me to be dependent on open source modelling, rather than an interim solution while we wait for it. Nevertheless, this is the kind of innovative thinking that we need.

Chris Sampson’s journal round-up for 4th June 2018


A qualitative investigation of the health economic impacts of bariatric surgery for obesity and implications for improved practice in health economics. Health Economics [PubMed] Published 1st June 2018

Few would question the ‘economic’ nature of the challenge of obesity. Bariatric surgery is widely recommended for severe cases but, in many countries, the supply is not sufficient to satisfy the demand. In this context, this study explores the value of qualitative research in informing economic evaluation. The authors assert that previous economic evaluations have adopted a relatively narrow focus and thus might underestimate the expected value of bariatric surgery. But rather than going and finding data on what they think might be additional dimensions of value, the authors ask patients. Emotional capital, ‘societal’ (i.e. non-health) impacts, and externalities are identified as theories for the types of value that might be derived from bariatric surgery. These theories were used to guide the development of questions and prompts that were used in a series of 10 semi-structured focus groups. Thematic analysis identified the importance of emotional costs and benefits as part of the ‘socioemotional personal journey’ associated with bariatric surgery. Out-of-pocket costs were also identified as being important, with self-funding being a challenge for some respondents. The information seems useful in a variety of ways. It helps us understand the value of bariatric surgery and how individuals make decisions in this context. This information could be used to determine the structure of economic evaluations or the data that are collected and used. The authors suggest that an EQ-5D bolt-on should be developed for ‘emotional capital’ but, given that this ‘theory’ was predefined by the authors and did not arise from the qualitative research as an important dimension of value alongside the existing EQ-5D dimensions, that’s a stretch.

Developing accessible, pictorial versions of health-related quality-of-life instruments suitable for economic evaluation: a report of preliminary studies conducted in Canada and the United Kingdom. PharmacoEconomics – Open [PubMed] Published 25th May 2018

I’ve been telling people about this study for ages (apologies, authors, if that isn’t something you wanted to read!). In my experience, the need for more (cognitively / communicatively) accessible outcome measures is widely recognised by health researchers working in contexts where this is relevant, such as stroke. If people can’t read or understand the text-based descriptors that make up (for example) the EQ-5D, then we need some alternative format. You could develop an entirely new measure. Or, as the work described in this paper set out to do, you could modify existing measures. There are three descriptive systems described in this study: i) a pictorial EQ-5D-3L by the Canadian team, ii) a pictorial EQ-5D-3L by the UK team, and iii) a pictorial EQ-5D-5L by the UK team. Each uses images to represent the different levels of the different dimensions. For example, the mobility dimension might show somebody walking around unaided, walking with aids, or in bed. I’m not going to try and describe what they all look like, so I’ll just encourage you to take a look at the Supplementary Material. All are described as ‘pilot’ instruments and shouldn’t be picked up and used at this stage. Different approaches were used in the development of the measures, and there are differences between the measures in terms of the images selected and the ways in which they’re presented. But each process referred to conventions in aphasia research, used input from clinicians, and consulted people with aphasia and/or their carers. The authors set out several remaining questions and avenues for future research. The most interesting possibility to most readers will be the notion that we could have a ‘generic’ pictorial format for the EQ-5D, which isn’t aphasia-specific. This will require continued development of the pictorial descriptive systems, and ultimately their validation.

QALYs in 2018—advantages and concerns. JAMA [PubMed] Published 24th May 2018

It’s difficult not to feel sorry for the authors of this article – and indeed all US-based purveyors of economic evaluation in health care. With respect to social judgments about the value of health technologies, the US’s proverbial head remains well and truly buried in the sand. This article serves as a primer and an enticement for the use of QALYs. The ‘concerns’ cited relate almost exclusively to decision rules applied to QALYs, rather than the underlying principles of QALYs, presumably because the authors didn’t feel they could ignore the points made by QALY opponents (even if those arguments are vacuous). What it boils down to is this: trade-offs are necessary, and QALYs can be used to promote value in those trade-offs, so unless you offer some meaningful alternative then QALYs are here to stay. Thankfully, the Institute for Clinical and Economic Review (ICER) has recently added some clout to the undeniable good sense of QALYs, so the future is looking a little brighter. Suck it up, America!

The impact of hospital costing methods on cost-effectiveness analysis: a case study. PharmacoEconomics [PubMed] Published 22nd May 2018

Plugging different cost estimates into your cost-effectiveness model could alter the headline results of your evaluation. That might seem obvious, but there are a variety of ways in which the selection of unit costs might be somewhat arbitrary or taken for granted. This study considers three alternative sources of information for hospital-based unit costs for hip fractures in England: (a) spell-level tariffs, (b) finished consultant episode (FCE) reference costs, and (c) spell-level reference costs. Source (b) provides, in theory, a more granular version of (a), describing individual episodes within a person’s hospital stay. Reference costs are estimated on the basis of hospital activity, while tariffs are prices estimated on the basis of historic reference costs. The authors use a previously reported cohort state transition model to evaluate different models of care for hip fracture and explore how the use of the different cost figures affects their results. FCE-level reference costs produced the highest total first-year hospital care costs (£14,440), and spell-level tariffs the lowest (£10,749). The more FCEs within a spell, the greater the discrepancy. This difference in costs affected ICERs, such that the net-benefit-optimising decision would change. The study makes an important point – that selection of unit costs matters. But it isn’t clear why the difference exists. It could just be due to a lack of precision in reference costs in this context (rather than a lack of accuracy, per se), or it could be that reference costs misestimate the true cost of care across the board. Without clear guidance on how to select the most appropriate source of unit costs, these different costing methodologies represent another source of uncertainty in modelling, which analysts should consider and explore.
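The mechanism by which a costing source flips the decision is just the ICER arithmetic. A hedged sketch with hypothetical incremental costs and QALYs (the paper's actual model and figures are not reproduced here), showing how the same intervention can fall on either side of an assumed £20,000/QALY threshold:

```python
THRESHOLD = 20_000  # £ per QALY: an assumed willingness-to-pay threshold

def icer(delta_cost: float, delta_qaly: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return delta_cost / delta_qaly

# Hypothetical incremental cost of a new care model under two unit-cost
# sources, holding the incremental QALY gain fixed:
delta_qaly = 0.05
scenarios = {
    "spell-level tariffs":  900.0,   # hypothetical incremental cost (£)
    "FCE reference costs": 1300.0,
}

for source, d_cost in scenarios.items():
    r = icer(d_cost, delta_qaly)
    verdict = "cost-effective" if r <= THRESHOLD else "not cost-effective"
    print(f"{source}: ICER = £{r:,.0f}/QALY -> {verdict}")
```

With these made-up numbers the tariff-based costing yields £18,000/QALY (under the threshold) while the FCE-based costing yields £26,000/QALY (over it), which is the kind of reversal the study warns about.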
