Alastair Canaway’s journal round-up for 27th November 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Elevated mortality among weekend hospital admissions is not associated with adoption of seven day clinical standards. Emergency Medicine Journal [PubMed] Published 8th November 2017

Our esteemed colleagues in Manchester have brought more evidence to the seven-day NHS debate (debacle?). Patients admitted to hospital in an emergency at weekends have higher mortality rates than those admitted during the week. Despite what our Secretary of State would have you believe, there is an increasing body of evidence suggesting that once case-mix is adequately adjusted for, the ‘weekend effect’ becomes negligible. This paper takes a slightly different angle on the same phenomenon. It harnesses the introduction of four priority clinical standards in England, which aim to reduce the number of deaths associated with the weekend effect. These are: time to first consultant review; access to diagnostics; access to consultant-directed interventions; and ongoing consultant review. The study uses publicly available data on the performance of NHS Trusts against these four priority clinical standards. For the latest financial year (2015/16), Trusts’ weekend effect odds ratios were compared to their achievement against the four clinical standards. Data were available for 123 Trusts. The authors found that adoption of the four clinical standards was not associated with the extent to which mortality was elevated for patients admitted at the weekend. Furthermore, they found no association between Trusts’ performance against any of the four standards and the magnitude of the weekend effect. The authors offer three reasons why this may be the case: first, data quality could be poor; second, the standards themselves may be inadequate for reducing mortality; finally, the weekend effect in terms of mortality may be the wrong metric by which to judge the benefits of a seven-day service. They note that their previous research demonstrated that the weekend effect is driven by admission volumes at the weekend rather than by the number of deaths, so it will not be affected by care provision, which is consistent with the findings of this study. The spectre of opportunity cost looms over the implementation of these standards: although no direct harm may arise from their introduction, resources will be diverted away from potentially more beneficial alternatives, and that is a serious concern. The seven-day debate continues.
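
As an aside, the analysis the paper describes – relating each Trust’s weekend-effect odds ratio to its achievement of the standards – is conceptually simple. Below is a minimal Python sketch of that idea; the data file, column names and choice of a rank correlation are my own assumptions for illustration, not the authors’ code.

```python
# Illustrative sketch only (not the paper's code): compute a Trust-level
# weekend-effect odds ratio and test whether it is associated with achievement
# of the four priority clinical standards. The data file, column names and the
# choice of a rank correlation are assumptions made for this example.
import pandas as pd
from scipy import stats

def weekend_effect_or(deaths_wkend, admissions_wkend, deaths_wkday, admissions_wkday):
    """Odds ratio of death for weekend vs weekday emergency admissions."""
    odds_wkend = deaths_wkend / (admissions_wkend - deaths_wkend)
    odds_wkday = deaths_wkday / (admissions_wkday - deaths_wkday)
    return odds_wkend / odds_wkday

# Hypothetical data: one row per Trust, with admission and death counts plus the
# number of standards (0-4) the Trust achieved in 2015/16.
trusts = pd.read_csv("trust_performance_2015_16.csv")
trusts["weekend_or"] = weekend_effect_or(
    trusts["deaths_weekend"], trusts["admissions_weekend"],
    trusts["deaths_weekday"], trusts["admissions_weekday"],
)

# Rank correlation between the weekend-effect OR and standards achieved; a null
# result here would mirror the paper's finding of no association.
rho, p = stats.spearmanr(trusts["weekend_or"], trusts["standards_achieved"])
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")
```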

The effect of level overlap and color coding on attribute non-attendance in discrete choice experiments. Value in Health Published 16th November 2017

I think discrete choice experiments (DCEs) are difficult to complete. That may be due to me not being the sharpest knife in the drawer, or it could be due to the nature of DCEs, or a bit of both. For this reason, I like best-worst scaling (BWS). BWS aside, DCEs are a common tool used in health economics research to assess and understand preferences. Given the difficulty of DCEs, people often resort to heuristics; that is, respondents simplify choice tasks by taking shortcuts, e.g. ignoring one or more attributes (attribute non-attendance) or always selecting the option with the highest level of a certain attribute. This has downstream consequences, leading to biased preference estimates. Furthermore, difficulty with comprehension leads to high attrition rates. This RCT sought to examine whether participant dropout and attribute non-attendance could be reduced through two methods: level overlap and colour coding. Level overlap refers to a DCE design whereby, in each choice task, a certain number of attributes are presented with the same level; in different choice tasks, different attributes are overlapped. The idea is to prevent dominant-attribute strategies, whereby participants always choose the option with the highest level of one specific attribute, and to force them to evaluate all attributes. The second method involves colour coding and other visual cues to reduce task complexity, e.g. colour coding levels to make it easy to see which levels are equal. There were five trial arms. The control arm featured no colour coding and no attribute overlap; the other four arms featured either colour coding (two different types were tested), attribute overlap, or a combination of the two. A sample that was nationally (Dutch) representative in terms of age, gender, education and geographic region was recruited online. In total, 3394 respondents were recruited and each arm contained over 500 respondents. Familiarisation and warm-up questions were followed by 21 pairwise choice tasks in a randomised order. In the control arm (no overlap, no colour coding), 13.9% dropped out whilst attending to, on average, only 2.1 of the five attributes. Colour coding reduced dropout to 9.6%, with 2.8 attributes attended to. Combining level overlap with intensity colour coding reduced dropout further, to 7.2%, whilst increasing attribute attendance to four out of five. Thus, the combination of level overlap and colour coding nearly halved dropout and doubled attribute attendance within the DCE task. An additional, and perhaps the most important, benefit of improved attribute attendance is that it reduces the need to model potential attribute non-attendance post hoc. Given the difficulty of DCE completion, it seems that colour coding in combination with level overlap should be employed in future DCE tasks.
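
To make the level-overlap idea concrete, here is a small Python sketch of how pairwise choice tasks might be constructed so that a fixed number of attributes share the same level in both alternatives. The attribute names, numbers of levels and random construction are illustrative assumptions, not the trial’s actual experimental design.

```python
# A minimal sketch of 'level overlap': in each pairwise choice task, a fixed
# number of attributes take the same level in both alternatives, so respondents
# cannot rely on a single dominant attribute. Attributes and levels are invented.
import random

ATTRIBUTES = {"effectiveness": 3, "side_effects": 3, "cost": 3, "duration": 3, "frequency": 3}
N_OVERLAP = 2  # attributes overlapped per choice task

def make_choice_task(rng=random):
    overlapped = rng.sample(list(ATTRIBUTES), N_OVERLAP)
    option_a, option_b = {}, {}
    for attr, n_levels in ATTRIBUTES.items():
        if attr in overlapped:
            level = rng.randrange(n_levels)
            option_a[attr] = option_b[attr] = level   # same level in both options
        else:
            a, b = rng.sample(range(n_levels), 2)     # force the options to differ
            option_a[attr], option_b[attr] = a, b
    return option_a, option_b, overlapped

task_a, task_b, shared = make_choice_task()
print("Overlapped attributes:", shared)
print("Option A:", task_a)
print("Option B:", task_b)
```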

Evidence on the longitudinal construct validity of major generic and utility measures of health-related quality of life in teens with depression. Quality of Life Research [PubMed] Published 17th November 2017

There appears to be increasing recognition of the prevalence and seriousness of youth mental health problems. Nearly 20% of young people will suffer depression during their adolescent years. To facilitate cost-utility analysis it is necessary to have a preference-based measure of health-related quality of life (HRQL); however, there are few measures designed for use in adolescents. This study sought to examine various existing HRQL measures in terms of their responsiveness for the evaluation of interventions targeting depression in young people. It builds on previous work by Brazier et al. that found the EQ-5D and SF-6D performed adequately for depression in adults. In total, 392 adolescents aged between 13 and 17 years joined the study; 376 of these completed follow-up assessments. Assessments were taken at baseline and 12 weeks, the justification for 12 weeks being that it represented the modal time to clinical change. The following utility instruments were included: the HUI suite, the EQ-5D-3L, the Quality of Well-Being Scale (QWB), and the SF-6D (derived from the SF-36). Other non-preference-based HRQL measures were also included: disease-specific ratings and scales, and the PedsQL 4.0. All (yes, you read that correctly) measures were found to be responsive to change in depression symptomatology over the 12-week follow-up period, and each of the multi-attribute utility instruments was able to detect clinically meaningful change. Comparing the utility instruments, the HUI-3, the QWB and the SF-6D were the most responsive, whilst the EQ-5D-3L was the least responsive. In summary, any of the utility instruments could be used. One area of disappointment for me was that the CHU-9D was not included within this study – it’s one of the few instruments that has been developed by and for children and would have been a very worthy addition. Regardless, this is an informative study for those of us working within the youth mental health sphere.
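
For readers unfamiliar with responsiveness statistics, one commonly used summary is the standardised response mean: mean change between baseline and follow-up divided by the standard deviation of that change. The sketch below illustrates the calculation on invented data; it is not a reproduction of the statistics reported in the study.

```python
# Standardised response mean (SRM) as one common responsiveness statistic.
# The data below are simulated for illustration only.
import numpy as np

def standardised_response_mean(baseline: np.ndarray, follow_up: np.ndarray) -> float:
    change = follow_up - baseline
    return change.mean() / change.std(ddof=1)

# Hypothetical utility scores for 376 adolescents at baseline and 12 weeks
rng = np.random.default_rng(0)
baseline = rng.normal(0.60, 0.15, size=376).clip(-0.3, 1.0)
follow_up = (baseline + rng.normal(0.10, 0.12, size=376)).clip(-0.3, 1.0)
print(f"SRM = {standardised_response_mean(baseline, follow_up):.2f}")
```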


Alastair Canaway’s journal round-up for 18th September 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Selection of key health domains from PROMIS® for a generic preference-based scoring system. Quality of Life Research [PubMed] Published 19th August 2017

The US Panel on Cost-Effectiveness recommends the use of QALYs. It doesn’t, however, instruct (unlike the UK) as to which measure should be used. This leaves the door ajar for both new and established measures. This paper sets about developing a new preference-based measure from the Patient-Reported Outcomes Measurement Information System (PROMIS). PROMIS is a US National Institutes of Health funded suite of person-centred measures of physical, mental, and social health. Across all the PROMIS measures there exist over 70 domains relevant to adult health. For all its promise, the PROMIS system does not produce a summary score amenable to the calculation of QALYs, nor one suited to general descriptive purposes such as measuring HRQL over time. This study aimed to reduce the 70-plus domains down to a number suitable for valuation. To do this, Delphi methods were used. The Delphi approach seems to be increasing in popularity in the health economics world. For those unfamiliar, it essentially involves obtaining the opinions of experts independently and iteratively, with rounds of questioning (two or more) conducted until a consensus is reached. In this case, nine health outcomes experts were recruited. They were presented with ‘all 37 domains’ (no mention is made of how they got from 70 to 37!) and asked to remove any domains that were not appropriate for inclusion in a general health utility measure or were redundant due to another PROMIS domain. If more than seven experts agreed, the domain was removed. Responses were combined and presented until consensus was reached. This left 10 domains. A community sample of 50 participants was then used to test for independence of domains using a pairwise independence evaluation test. These participants were given the option of removing a domain they felt was not important to overall HRQL and were asked to rate the importance of the remaining domains using a VAS. These findings were used by the research team to whittle down from nine domains to seven. The final domains were: Cognitive function – abilities; Depression; Fatigue; Pain interference; Physical function; Ability to participate in social roles and activities; and Sleep disturbance. Many of these are common to existing measures, but I did rather like the inclusion of cognitive function and fatigue – domains that are missing from many measures and, to me, appear important. The next step is valuation. Once valued, this is a promising candidate for use in economic evaluation – particularly in the US, where the PROMIS measurement suite is already established.
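
For illustration only, the Delphi removal rule described above (a domain is dropped if more than seven of the nine experts agree) reduces to a simple vote count; the domain names and vote tallies below are invented.

```python
# Toy sketch of the Delphi removal rule: drop a domain if more than seven of the
# nine experts vote to remove it. Domains and votes are made up for illustration.
votes_to_remove = {
    "Anger": 8, "Fatigue": 1, "Pain interference": 0,
    "Depression": 0, "Anxiety": 8, "Sleep disturbance": 2,
}
THRESHOLD = 7  # more than seven of the nine experts must agree

removed = [d for d, votes in votes_to_remove.items() if votes > THRESHOLD]
retained = [d for d, votes in votes_to_remove.items() if votes <= THRESHOLD]
print("Removed:", removed)
print("Carried to the next round:", retained)
```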

Predictive validation and the re-analysis of cost-effectiveness: do we dare to tread? PharmacoEconomics [PubMed] Published 22nd August 2017

PharmacoEconomics treated us to a provocative editorial regarding predictive validation and re-analysis of cost-effectiveness models – a call to arms of sorts. For those (like me) who are not modelling experts, predictive validation (aka 4th-order validation) refers to the comparison of model outputs with data that are collected after the initial analysis of the model. So essentially you’re comparing what you modelled would happen with what actually happened. The literature suggests that predictive validation is widely ignored. The importance of predictive validity is highlighted with a case study in which it was examined three years after the end of a trial: upon re-analysis, the original model performed poorly. It was then revised, which led to a much better fit to the prospective data. Predictive validation can, therefore, be used to identify sources of inaccuracy in models. If predictive validity were examined more commonly, improvements in model quality more generally would be possible. Furthermore, it might be possible to identify specific contexts where poor predictive validity is prevalent and where further research is required. The authors highlight the field of advanced cancers as a particularly relevant context, where uncertainty around survival curves is prevalent. By actively scheduling further data collection and updating the survival curves, we can reduce the uncertainty surrounding the value of high-cost drugs. Predictive validation can also inform other aspects of the modelling process, such as the best choice of time point from which to extrapolate, or credible rates of change in predicted hazards. The authors suggest using expected value of information analysis to identify the technologies with the largest costs of uncertainty, to prioritise where predictive validity should be assessed. NICE and other reimbursement bodies require continued data collection for ‘some’ new technologies; the processes are therefore in place for future studies to be designed and implemented in a way that captures such data and allows later re-analysis. Assessing predictive validity seems eminently sensible; there are, however, barriers. Money is the obvious issue: extended prospective data collection and re-analysis of models require resources. It does, however, have the potential to save money and improve health in the long run. The authors note how, in a recent study, they demonstrated that a drug for osteoporosis that had been recommended by Australia’s Pharmaceutical Benefits Advisory Committee was not actually cost-effective when further data were examined. There is clearly value to be achieved in predictive validation and re-analysis – it’s hard to disagree with the authors, and we should probably be campaigning for longer-term follow-ups, re-analysis, and increased acknowledgement of the desirability of predictive validity.
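
To make the idea of predictive validation concrete, the sketch below compares a model’s predicted survival with subsequently observed survival and summarises the divergence. All numbers are invented; the editorial does not prescribe a particular metric.

```python
# Minimal sketch of predictive (4th-order) validation: compare the survival
# proportions a model predicted at the original analysis with proportions
# actually observed once longer-term data become available. Numbers are invented.
import numpy as np

years = np.array([1, 2, 3, 4, 5])
predicted_survival = np.array([0.80, 0.62, 0.50, 0.41, 0.35])  # from the original model
observed_survival = np.array([0.78, 0.57, 0.42, 0.31, 0.24])   # from later follow-up data

abs_error = np.abs(predicted_survival - observed_survival)
print("Mean absolute error:", round(float(abs_error.mean()), 3))
print("Largest divergence at year", int(years[abs_error.argmax()]))
# A large, growing divergence would flag the extrapolation (e.g. the chosen
# survival curve) as a source of inaccuracy, prompting re-analysis of the model.
```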

How should cost-of-illness studies be interpreted? The Lancet Psychiatry [PubMed] Published 7th September 2017

It’s a good question – cost-of-illness studies are commonplace, but are they useful from a health economics perspective? A comment piece in The Lancet Psychiatry examines this issue using the case study of self-harm and suicide. It focuses on a recent publication by Tsiachristas et al., which examines the hospital resource use and care costs for all presentations of self-harm in a UK hospital. Each episode of self-harm cost £809, and when extrapolated to the UK this amounts to £162 million. Over 30% of these costs related to psychological assessments which, despite being recommended by NICE, only 75% of self-harming patients received. If all self-harming patients received assessments as recommended by NICE, another £51 million would be added to the bill. The author raises the question of how useful this information is to health economists. Nearly all cost-of-illness studies end up concluding that i) the illness costs a lot, and ii) money could be saved by reducing or ameliorating the underlying factors that cause the illness. Is this helpful? Well, not particularly: by focusing on only one illness, there is no consideration of opportunity cost. If you spend money preventing one condition then that money will be displacing resources elsewhere; likewise, resources spent reducing one illness will likely be balanced by increased spending on another. The author highlights this with a thought experiment: “imagine a world where a cost of illness study has been done for every possible disease and that the total cost of illness was aggregated. The counterfactual from such an exercise is a world where nobody gets sick and everybody dies suddenly at some pre-determined age”. Another issue is that, more often than not, cost-of-illness studies conclude that more, not less, should be spent on a problem; in the self-harm example, an extra £51 million on psychological assessments. Similarly, the study highlights the extra cost of psychological assessments rather than the glaring issue that 25% of those who attend hospital for self-harm are not receiving the recommended assessments. This links into the final point: cost-of-illness studies neglect the benefits being achieved. With the negatives out of the way, there are at least a couple of positives I can think of off the top of my head: i) identification of key cost drivers, and ii) information for use in economic models. The take-home message is that, although there is some use to cost-of-illness studies, from a health economics perspective we (as a field) would, for the most part, be better off steering clear.
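
The extrapolation itself is simple arithmetic, which a short sketch makes explicit. The implied national episode count below is back-calculated from the published figures (£162 million at £809 per episode) purely for illustration; the underlying study’s actual inputs are more detailed.

```python
# Back-of-the-envelope sketch of the cost-of-illness extrapolation described above.
# The episode count is inferred from the published totals for illustration only.
cost_per_episode = 809             # £ per self-harm presentation
total_national_cost = 162_000_000  # £, extrapolated UK figure

implied_episodes = total_national_cost / cost_per_episode
print(f"Implied episodes per year: {implied_episodes:,.0f}")

# The £51 million is the estimated additional cost of providing a psychological
# assessment to the ~25% of self-harm patients who currently do not receive one.
additional_assessment_cost = 51_000_000
print(f"Cost if NICE-recommended assessments were universal: "
      f"£{(total_national_cost + additional_assessment_cost) / 1e6:.0f}m")
```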


Alastair Canaway’s journal round-up for 28th August 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Valuing health-related quality of life: an EQ-5D-5L value set for England. Health Economics [PubMed] Published 22nd August 2017

With much anticipation, the new EQ-5D-5L value set was officially published. For over 18 months we’ve had access to the values via the OHE’s discussion paper, but the formal peer-reviewed paper has (I imagine) been stuck in publication purgatory. This paper presents the results of the value set for the new(ish) EQ-5D-5L measure. The study used the internationally agreed hybrid model, combining TTO and DCE data, to generate values for the 3125 health states. It’s worth noting that the official values are marginally different from those in the discussion paper, although in practice this is likely to have little impact on results. Important features of the new value set include fewer health states worse than death (5.1% vs over 33%) and a higher minimum value (-0.285 vs -0.594). I’d always been a bit suspicious of the worse-than-death values for the 3L measure, so this, if anything, is encouraging. It does, however, have important implications, primarily for interventions targeting those in the worst health, where potential gains may now be smaller. Many of us are actively using the EQ-5D-5L within trials and have been eagerly awaiting this value set. Perhaps naively, I had always anticipated that, with more levels and an improved valuation algorithm, it would naturally supersede the 3L and its outdated value set upon publication. Unfortunately, to mark the release of the new value set, NICE released a ‘position statement’ [PDF] regarding the choice of measure and value sets for the NICE reference case. NICE specifies that i) the 5L value set is not recommended for use; ii) the EQ-5D-3L with the original UK TTO value set is recommended, and if both measures are included then the 3L should be preferred; iii) if the 5L measure is included, then scores should be mapped to the EQ-5D-3L using the van Hout et al. algorithm; iv) NICE supports the use of the EQ-5D-5L generally to collect data on quality of life; and v) NICE will review this decision in August 2018 in light of future evidence. So, unfortunately, for the next year at least, we will either be sticking with the original 3L measure or mapping from the 5L. I suspect NICE is buying some time, as transitioning to the 5L is going to raise lots of interesting issues, e.g. what happens if a treatment is cost-effective according to the 3L but not the 5L (or vice versa), and how comparable 5L results are to old 3L results. Interesting times lie ahead. As a final note, it’s worth reading the OHE blog post outlining the position statement and OHE’s plans to satisfy NICE.
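
A toy calculation shows why the higher minimum value matters for interventions targeting the worst health states: a year spent in the worst state ‘costs’ fewer QALYs under the 5L value set than under the 3L set, so the incremental gain from rescuing someone from that state shrinks. The post-treatment utility and duration below are invented for illustration; only the two minimum values are taken from the paper.

```python
# Toy illustration of how the higher minimum value (-0.285 under 5L vs -0.594
# under 3L) shrinks potential QALY gains for those in the worst health states.
def qalys(utility: float, years: float) -> float:
    return utility * years

worst_3l, worst_5l = -0.594, -0.285
moderate = 0.60  # hypothetical post-treatment utility

gain_3l = qalys(moderate, 1) - qalys(worst_3l, 1)
gain_5l = qalys(moderate, 1) - qalys(worst_5l, 1)
print(f"One-year QALY gain under 3L values: {gain_3l:.3f}")
print(f"One-year QALY gain under 5L values: {gain_5l:.3f}")
```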

Long-term QALY-weights among spouses of dependent and independent midlife stroke survivors. Quality of Life Research [PubMed] Published 29th June 2017

For many years, spillover impacts were largely ignored within economic evaluation. There is now increased interest in capturing wider impacts: indeed, the NICE reference case recommends including carer impacts where relevant, whilst the US Panel on Cost-Effectiveness in Health and Medicine now advocates the inclusion of other affected parties. This study sought to examine whether the dependency of midlife stroke survivors impacted on their spouses’ HRQL, as measured using the SF-6D. An OLS approach was used, controlling for covariates (age, sex and education, amongst others). Spouses of dependent stroke survivors had lower utility (0.69) than spouses of independent survivors (0.77). This has interesting implications for economic evaluation: for example, if a treatment were to prevent dependence, there could potentially be large QALY gains to spouses. Spillover impacts are clearly important. If we are to broaden the evaluative scope as suggested by NICE and the US Panel to include spillover impacts, then work is vital in terms of identifying relevant contexts, measuring spillover impacts, and understanding their implications within economic evaluation. This remains an important area for future research.
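
For readers wanting to picture the analysis, below is an illustrative Python sketch (not the authors’ code) of the kind of OLS model described: spouse SF-6D utility regressed on survivor dependency plus covariates. The dataset, variable names and formula are assumptions.

```python
# Illustrative sketch of an OLS model of spouse SF-6D utility on stroke survivor
# dependency and covariates. The data file and variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

spouses = pd.read_csv("spouse_hrql.csv")  # hypothetical dataset
model = smf.ols(
    "sf6d_utility ~ survivor_dependent + age + C(sex) + C(education)",
    data=spouses,
).fit()
print(model.summary())
# The coefficient on survivor_dependent corresponds to the adjusted utility gap;
# the unadjusted figures reported in the paper were roughly 0.69 vs 0.77.
```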

Conducting a discrete choice experiment study following recommendations for good research practices: an application for eliciting patient preferences for diabetes treatments. Value in Health Published 7th August 2017

To finish this week’s round-up, I thought it’d be useful to signpost this article on conducting DCEs, which may be helpful for researchers embarking on their first DCE. The article hasn’t done anything particularly radical or made ground-breaking discoveries. What it does do, however, is provide a practical guide that walks you through each step of the DCE process following the ISPOR guidelines/checklist. Furthermore, it expands upon the ISPOR checklist to give researchers a further resource to consider when conducting DCEs. The case study relates to measuring patient preferences for type 2 diabetes mellitus medications. For every item on the ISPOR checklist, the authors explain how they made the choices they did, and what influenced them. The paper goes through the entire process, from identifying the research question all the way through to presenting results and discussion (for those interested in diabetes: it turns out people have a preference for immediate consequences and a high discount rate for future benefits). For people who are keen to conduct a DCE and find a worked example easier to follow, this paper, alongside the ISPOR guidelines, is definitely one to add to your reference manager.
