Using Discrete Choice Experiments in Health Economics Course

This popular course, offered by the Health Economics Research Unit (HERU) at the University of Aberdeen, Scotland, covers the theoretical and practical issues of discrete choice experiments (DCEs) in health economics. The course takes place annually and in 2018 was fully booked.

The course provides:

  • An introduction to the theoretical basis for the development and application of DCEs in health economics.
  • A step-by-step guide to the design of DCEs, questionnaire development, data input, data analysis, and interpretation of results.
  • An update on methodological issues raised in the application of DCEs in health economics.

Jason Shafrin’s journal round-up for 7th October 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Combined impact of future trends on healthcare utilisation of older people: a Delphi study. Health Policy [PubMed] [RePEc] Published October 2019

Governments need to plan for the future. This is particularly important in countries where the government pays for the lion’s share of health care expenditures. Predicting the future, however, is not an easy task. One could use quantitative approaches and simply extrapolate recent trends. One could consult with political experts to determine which policies are likely to be enacted. Another approach is to use a Delphi panel to elicit expert opinions on future trends in health care utilization to help predict future health care needs. This approach was the one taken by Ravensbergen and co-authors in an attempt to predict trends in health care utilization among older adults in the Netherlands in 2040.

The Delphi Panel approach was applied in this study as follows. First, individuals received a questionnaire via email. Researchers presented the experts with trends from the Dutch Public Health Foresight Study (Volksgezondheid Toekomst Verkenning) to help ground all experts with the same baseline information. The data and questions largely asked separately about trends for either the old (65–80 years) or the oldest old (>80 years). After the responses from the first questionnaire were received, responses were summarized and provided back to each panelist in an anonymous manner. Panelists were then able to revise their views on a second questionnaire taking into account the feedback by the other panelists. Because the panelists did not meet in person, this approach should be considered a modified Delphi Panel.

The Delphi panel identified three broad trends: increased use of eHealth tools, less support, and change in health status. While the panel thought eHealth was important, experts rarely reached consensus on how eHealth would affect healthcare utilization. The experts did find consensus, however, in believing that the share of adults aged 50–64 will decline relative to the share of individuals aged ≥ 85 years, implying fewer caregivers will be available and more of the oldest old will be living independently (i.e. with less support). Because less informal care will be available, the panel believed that the demand for home care and general practitioner services will rise. The respondents also believed that in most cases changes in health status will increase health care utilization of general practitioner and specialist services. There was less agreement about trends in the need for long-term care or mental health services, however.

The Delphi Panel approach may be useful to help governments predict future demand for services. More rigorous approaches, such as betting markets, are likely not feasible since the payouts would take too long to generate much interest. Betting markets could be used to predict shorter-run trends in health care utilization. The risk with betting markets, however, is that some individuals could act strategically to drive up or down predictions to increase or decrease reimbursement for certain sectors.

In short, the Delphi Panel is likely a reasonable, low-cost approach for predicting trends in health care utilization. Future studies, however, should validate how good the predictions are from using this type of method.

The fold-in, fold-out design for DCE choice tasks: application to burden of disease. Medical Decision Making [PubMed] Published 29th May 2019

Discrete choice experiments (DCEs) are a useful way to determine what treatment attributes patients (or providers or caregivers) value. Respondents are presented with multiple treatment options and the options can be compared across a series of attributes. An attribute could be treatment efficacy, safety, dosing, cost, or a host of other attributes. One can use this approach to measure the marginal rate of substitution across attributes. If cost is one of the attributes, one can measure willingness to pay for specific attributes.
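
To make the marginal-rate-of-substitution idea concrete, here is a minimal sketch with made-up conditional-logit coefficients. The attribute names and numbers are hypothetical, not from any study; the only substantive point is that WTP for an attribute is the ratio of its coefficient to the (negated) cost coefficient:

```python
# Illustrative only: hypothetical conditional-logit coefficients from a DCE.
# The marginal WTP for an attribute is the marginal rate of substitution
# between that attribute and cost: -beta_attribute / beta_cost.

coefficients = {
    "efficacy": 0.8,       # utility per unit improvement in efficacy (hypothetical)
    "side_effects": -0.5,  # utility per unit increase in side-effect risk (hypothetical)
    "cost": -0.02,         # utility per currency unit (hypothetical)
}

def willingness_to_pay(attr, coefs):
    """Marginal WTP: how much cost a respondent would trade for one unit of attr."""
    return -coefs[attr] / coefs["cost"]

print(willingness_to_pay("efficacy", coefficients))      # 40.0
print(willingness_to_pay("side_effects", coefficients))  # -25.0
```

With these invented numbers, a respondent would pay 40 currency units for one unit of efficacy, and would need to be paid 25 to accept one unit of side-effect risk.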

One of the key challenges of DCEs, however, is attribute selection. Most treatments differ across a range of attributes. Most published DCEs, however, have four, five, or at most seven attributes. Including more attributes makes comparisons too complicated for most respondents. Thus, researchers are left with a difficult choice: (i) a tractable but overly simplified survey, or (ii) a realistic, but overly complex survey unlikely to be comprehended by respondents.

One solution proposed by Lucas Goossens and co-authors is to use a Fold-in Fold-out (FiFo) approach. In this approach, related attributes may be grouped into domains. For some questions, all attributes within the same domain have the same attribute level (i.e., fold in); in other questions, attributes may vary within the domain (i.e., fold out).

To be concrete, in the Goossens paper, the authors examine treatments for chronic obstructive pulmonary disease (COPD). They use 15 attributes divided into three domains plus two stand-alone attributes:

  • A respiratory symptoms domain (four attributes: shortness of breath at rest, shortness of breath during physical activity, coughing, and sputum production).
  • A limitations domain (four attributes: limitations in strenuous physical activities, limitations in moderate physical activities, limitations in daily activities, and limitations in social activities).
  • A mental problems domain (five attributes: feeling depressed, fearing that breathing gets worse, worrying, listlessness, and tense feeling).
  • A fatigue attribute.
  • An exacerbations attribute.

This creative approach simplifies the choice set for respondents, but allows for a large number of attributes. Using the data collected, the authors used a Bayesian mixed logit regression model to conduct the analysis. The underlying utility function assumed domain-specific parameters, but also allowed within-domain attribute weights to vary in the questions where a domain was folded out.
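
My reading of the fold-in/fold-out set-up can be sketched as follows. The function, coefficient, and weights below are hypothetical illustrations of the idea, not the authors' actual specification:

```python
# Hypothetical sketch of a FiFo-style utility contribution for one domain.
# Folded in: the whole domain takes a single common level, scaled by the
# domain coefficient. Folded out: each attribute has its own level, combined
# via within-domain weights (here assumed to sum to one).

def domain_utility(domain_beta, levels, weights=None):
    """Utility contribution of one domain.

    levels  : a single level (folded in) or a list of attribute levels (folded out)
    weights : within-domain attribute weights (only used when folded out)
    """
    if weights is None:  # folded in: one common level for the whole domain
        return domain_beta * levels
    # folded out: weighted sum of the individual attribute levels
    return domain_beta * sum(w * x for w, x in zip(weights, levels))

# Folded in: e.g. a 'respiratory symptoms' domain shown at a single level of 2
print(domain_utility(-0.6, 2))  # -1.2
# Folded out: four attributes at their own levels, with hypothetical weights
print(domain_utility(-0.6, [2, 1, 3, 2], [0.4, 0.2, 0.3, 0.1]))
```

The point of the design is that the folded-in questions identify the domain coefficients cheaply, while the folded-out questions identify the within-domain weights.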

One key challenge, however, is that the authors found that individuals placed more weight on attributes when their domains were folded out (i.e., attribute levels varied within domain) compared to when their domains were folded in (i.e., attribute levels were the same within the domain). Thus, I would say that if five, six or seven attributes can capture the lion’s share of differences in treatment attributes across treatments, use the standard approach; however, if more attributes are needed, the FiFo approach is an attractive option researchers should consider.

The health and cost burden of antibiotic resistant and susceptible Escherichia coli bacteraemia in the English hospital setting: a national retrospective cohort study. PLoS One [PubMed] Published 10th September 2019

Bacterial infections are bad. The good news is that we have antibiotics to treat them, so they are no longer a worry, right? While conventional wisdom holds that we have many antibiotics to treat these infections, in recent years antibiotic resistance has grown. If antibiotics are no longer effective, what is the cost to society?

One effort to quantify the economic burden of antibiotic resistance, by Nichola Naylor and co-authors, used national surveillance and administrative data from National Health Service (NHS) hospitals in England. They compared costs for patients with E. coli bacteraemia against patients with similar observable characteristics who did not have E. coli bacteraemia. Antibiotic resistance in the study was defined using laboratory-based definitions of ‘resistant’ and ‘intermediate’ isolates. The antibiotics to which resistance was considered included ciprofloxacin, third-generation cephalosporins (ceftazidime and/or cefotaxime), gentamicin, piperacillin/tazobactam, and carbapenems (imipenem and/or meropenem).

The authors use an Aalen-Johansen estimator to measure the cumulative incidence of in-hospital mortality and length of stay. The analyses control for the patient’s age, sex, Elixhauser comorbidity index, and hospital trust type. It does not appear that the authors control for the reason for admission to the hospital, nor do they propensity-score match patients with resistant infections to comparable patients without them. Thus, it is likely that significant unobserved heterogeneity across groups remains in the analysis.
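
For readers unfamiliar with the estimator: Aalen-Johansen generalises Kaplan-Meier to competing risks, which matters here because in-hospital death and discharge compete with one another. The following toy sketch uses invented data, ignores censoring, and is purely illustrative of the mechanics, not the authors' implementation:

```python
# Toy sketch of the Aalen-Johansen cumulative incidence estimator with two
# competing events: in-hospital death vs. discharge. Data are invented and
# there is no censoring, so this is illustrative only.

def cumulative_incidence(times, events, cause):
    """Cumulative incidence function (CIF) for `cause`.

    At each event time t: CIF increases by S(t-) * d_cause / n_at_risk,
    where S(t-) is the all-cause Kaplan-Meier survival just before t.
    """
    data = sorted(zip(times, events))
    surv = 1.0          # all-cause survival just before the current time
    cif = 0.0
    at_risk = len(data)
    result = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d_cause = sum(1 for tt, e in data if tt == t and e == cause)
        d_all = sum(1 for tt, _ in data if tt == t)
        cif += surv * d_cause / at_risk
        surv *= 1 - d_all / at_risk
        at_risk -= d_all
        result.append((t, cif))
        i += d_all
    return result

times = [2, 3, 3, 5, 8]
events = ["discharge", "death", "discharge", "discharge", "death"]
print(cumulative_incidence(times, events, "death"))
# [(2, 0.0), (3, 0.2), (5, 0.2), (8, 0.4)]
```

Unlike naively applying Kaplan-Meier to deaths alone (treating discharges as censored), this keeps the probabilities of death and discharge summing correctly to one.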

Despite these limitations, the authors do have some interesting findings. First, bacterial infections are associated with increased risk of death. In-hospital mortality is 14.3% for individuals infected with E. coli compared to 1.3% for those not infected. Accounting for covariates, the subdistribution hazard ratio (SHR) for in-hospital mortality due to E. coli bacteraemia was 5.88. Second, E. coli bacteraemia was associated with 3.9 excess hospital days compared to patients whose infections were not antibiotic resistant. These extra hospital days cost £1,020 per case of E. coli bacteraemia, and the estimated annual cost of E. coli bacteraemia in England was £14.3m. If antibiotic resistance has increased in recent years, these estimates are likely to be conservative.
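
These headline figures can be cross-checked with some quick back-of-envelope arithmetic (my own, not the paper's):

```python
# Back-of-envelope check of the headline figures (my arithmetic, not the paper's).
excess_days_per_case = 3.9
cost_per_case = 1_020        # GBP
annual_cost = 14_300_000     # GBP

implied_cost_per_bed_day = cost_per_case / excess_days_per_case
implied_annual_cases = annual_cost / cost_per_case

print(round(implied_cost_per_bed_day))  # 262 -> roughly GBP 262 per excess bed-day
print(round(implied_annual_cases))      # 14020 -> roughly 14,000 cases per year
```

Both implied figures (a bed-day cost in the low hundreds of pounds, and a case count in the tens of thousands) are plausible orders of magnitude for NHS England, which is reassuring about the internal consistency of the estimates.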

The issue of antibiotic resistance presents a conundrum for policymakers. If current antibiotics are effective, drug-makers will have no incentive to develop new antibiotics since the new treatments are unlikely to be prescribed. On the other hand, failing to identify new antibiotics in reserve means that as antibiotic resistance grows, there will be few treatment alternatives. To address this issue, the United Kingdom is considering a ‘subscription-style’ approach to pay for new antibiotics to incentivize the development of new treatments.

Nevertheless, the paper by Naylor and co-authors provides a useful data point on the cost of antibiotic resistance.

David Mott’s journal round-up for 16th September 2019

Opening the ‘black box’: an overview of methods to investigate the decision‑making process in choice‑based surveys. The Patient [PubMed] Published 5th September 2019

Choice-based surveys using methods such as discrete choice experiments (DCEs) and best-worst scaling (BWS) exercises are increasingly being used in health to understand people’s preferences. A lot of time and energy is spent on analysing the data that come out of these surveys, but increasingly there is an interest in better understanding respondents’ decision-making processes. Whilst many will be aware of ‘think aloud’ interviews (often used for piloting), other methods may be less familiar as they’re not applied frequently in health. That’s where this fascinating paper by Dan Rigby and colleagues comes in. It provides an overview of five different methods of what they call ‘pre-choice process analysis’ of decision-making, describing the application, state of knowledge, and future research opportunities.

Eye-tracking has been used in health recently. It’s intuitive and provides an insight into where the participants’ focus is (or isn’t). The authors explained that one of the ways it has been used is to explore attribute non-attendance (ANA), which essentially occurs when people are ignoring attributes either because they’re irrelevant to them, or simply because it makes the task easier. However, surprisingly, it has been suggested that ‘visual ANA’ (not looking at the attribute) doesn’t always align with ‘stated ANA’ (participants stating that they ignored the attribute) – which raises some interesting questions!

However, the real highlight for me was the overview of the use of brain imaging techniques to explore choices being made in DCEs. One study highlighted by the authors – which was a DCE about eggs and is now at least #2 on my list of the bizarre preference study topics after this oddly specific one on Iberian ham – predicted choices from an initial ‘passive viewing’ using functional magnetic resonance imaging (fMRI). They found that incorporating changes in blood flow (prompted by changes in attribute levels during ‘passive viewing’) into a random utility model accounted for a lot of the variation in willingness to pay for eggs – pretty amazing stuff.

Whilst I’ve highlighted the more unusual methods here, after reading this overview I have to admit that I’m an even bigger advocate for the ‘think aloud’ technique now. Although it may have some limitations, the amount of insight offered combined with its practicality is hard to beat. Though maybe I’m biased because I know that I won’t get my hands on any eye-tracking or brain imaging devices any time soon. In any case, I highly recommend that any researchers conducting preference studies give this paper a read as it’s really well written and will surely be of interest.

Disentangling public preferences for health gains at end-of-life: further evidence of no support of an end-of-life premium. Social Science & Medicine [PubMed] Published 21st June 2019

The end of life (EOL) policy introduced by NICE in 2009 [PDF] has proven controversial. The policy allows treatments that are not cost-effective within the usual range to be considered for approval, provided that certain criteria are met. Specifically, that the treatment targets patients with a short life expectancy (≤24 months), offers a life extension (of ≥3 months) and is for a ‘small patient population’. One of the biggest issues with this policy is that it is unclear whether the general population actually supports the idea of valuing health gains (specifically life extension) at EOL more than other health gains.

Numerous academic studies, usually involving some form of stated preference exercise, have been conducted to test whether the public might support this EOL premium. A recent review by Koonal Shah and colleagues summarised the existing published studies (up to October 2017), highlighting that evidence is extremely mixed. This recently published Danish study, by Lise Desireé Hansen and Trine Kjær, adds to this literature. The authors conducted an incredibly thorough stated preference exercise to test whether quality of life (QOL) gains and life extension (LE) at EOL are valued differently from other similarly sized health gains. Not only that, but the study also explored the effect of perspective on results (social vs individual), the effect of age (18-35 vs. 65+), and impact of initial severity (25% vs. 40% initial QOL) on results.

Overall, they did not find evidence of support for an EOL premium for QOL gains or for LEs (regardless of perspective) but their results do suggest that QOL gains are preferred over LE. In some scenarios, there was slightly more support for EOL in the social perspective variant, relative to the individual perspective – which seems quite intuitive. Both age and initial severity had an impact on results, with respondents preferring to treat the young and those with worse QOL at baseline. One of the most interesting results for me was within their subgroup analyses, which suggested that women and those with a relation to a terminally ill patient had a significantly positive preference for EOL – but only in the social perspective scenarios.

This is a really well-designed study, which covers a lot of different concepts. This probably doesn’t end the debate on NICE’s use of the EOL criteria – not least because the study wasn’t conducted in England and Wales – but it contributes a lot. I’d consider it a must-read for anyone interested in this area.

How should we capture health state utility in dementia? Comparisons of DEMQOL-Proxy-U and of self- and proxy-completed EQ-5D-5L. Value in Health Published 26th August 2019

Capturing quality of life (QOL) in dementia and obtaining health state utilities is incredibly challenging, which is something that I’ve started to really appreciate recently upon getting involved in a EuroQol-funded ‘bolt-ons’ project. The EQ-5D is not always able to detect meaningful changes in cognitive function, and condition-specific preference-based measures (PBMs), such as the DEMQOL, may be preferred as a result. However, this isn’t the only challenge, because in many cases patients are not in a position to complete the surveys themselves. This means that proxy reporting is often required, which could be done by either a professional (formal) carer, or a friend or family member (informal carer). Researchers who want to use a PBM in this population therefore have a lot to consider.

This paper compares the performance of the EQ-5D-5L and the DEMQOL-Proxy when completed by care home residents (EQ-5D-5L only), formal carers and informal carers. The impressive dataset that the authors use contains 1,004 care home residents, across up to three waves, and includes a battery of different cognitive and QOL measures. The overall objective was to compare the performance of the EQ-5D-5L and DEMQOL-Proxy, across the three respondent groups, based on 1) construct validity, 2) criterion validity, and 3) responsiveness.

The authors found that self-reported EQ-5D-5L scores were larger and less responsive to changes in the cognitive measures, but better at capturing residents’ self-reported QOL (based on a non-PBM) relative to proxy-reported scores. It is unclear whether this is a case of adaptation as seen in many other patient groups, or if the residents’ cognitive impairments prevent them from reliably assessing their current status. The proxy-reported EQ-5D-5L scores were generally more responsive to changes in the cognitive measures relative to the DEMQOL-Proxy (irrespective of which type of proxy), which the authors note is probably due to the fact that the DEMQOL-Proxy focuses more on the emotional impact of dementia rather than functional impairment.

Overall, this is a really interesting paper, which highlights the challenges well and illustrates that there is value in collecting these data from both patients and proxies. In terms of the PBM comparison, whilst the authors do not explicitly state it, it does seem that the EQ-5D-5L may have a slight upper hand due to its responsiveness, as well as for pragmatic reasons (the DEMQOL-Proxy has >30 questions). Perhaps a cognition ‘bolt-on’ to the EQ-5D-5L might help to improve the situation in future?
