Chris Sampson’s journal round-up for 5th March 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Healthy working days: the (positive) effect of work effort on occupational health from a human capital approach. Social Science & Medicine Published 28th February 2018

If you look at the literature on the determinants of subjective well-being (or happiness), you’ll see that unemployment is often cited as having a big negative impact. The same sometimes applies for its impact on health, but here – of course – the causality is difficult to tease apart. Then, in research that digs deeper, looking at hours worked and different types of jobs, we see less conclusive results. In this paper, the authors start by asserting that the standard approach in labour economics (on which I’m not qualified to comment) is to assume that there is a negative association between work effort and health. This study extends the framework by allowing for positive effects of work that are related to individuals’ characteristics and working conditions, and where health is determined in a Grossman-style model of health capital that accounts for work effort in the rate of health depreciation. This model is used to examine health as a function of work effort (as indicated by hours worked) in a single wave of the European Working Conditions Survey (EWCS) from 2010 for 15 EU member states. Key items from the EWCS included in this study are questions such as “does your work affect your health or not?”, “how is your health in general?”, and “how many hours do you usually work per week?”. Working conditions are taken into account by looking at data on shift working and the need to wear protective equipment. One of the main findings of the study is that – with good working conditions – greater work effort can improve health. The Marxist in me is not very satisfied with this. We need to ask the question, compared to what? Working fewer hours? For most people, that simply isn’t an option. Aren’t the people who work fewer hours the people who can afford to work fewer hours? No attention is given to the sociological aspects of employment, which are clearly important. The study also shows that overworking or having poorer working conditions reduces health. 
We also see that, for many groups, longer hours do not negatively impact on health until we reach around 120 hours a week. This fails a good sense check. Who are these people?! I’d be very interested to see if these findings hold for academics. That the key variables are self-reported undermines the conclusions somewhat, as we can expect people to adjust their expectations about work effort and health in accordance with their colleagues. It would be very difficult to avoid a type 2 error (with respect to the negative impact of effort on health) using these variables to represent health and the role of work effort.
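The authors' Grossman-style framework can be caricatured in a few lines: health capital depreciates each period, and the depreciation rate depends on work effort and working conditions. The sketch below is a toy version with invented functional forms and parameter values, not the paper's actual specification.

```python
# Toy sketch of a Grossman-style health capital model in which the
# depreciation rate depends on work effort and working conditions.
# Functional forms and parameters here are invented for illustration.

def depreciation(effort, good_conditions, base=0.05):
    """Under good conditions, moderate effort lowers depreciation but
    overworking raises it; under poor conditions, effort only harms."""
    if good_conditions:
        return base - 0.02 * effort + 0.03 * effort ** 2
    return base + 0.04 * effort

def simulate_health(effort, good_conditions, h0=1.0, investment=0.01, periods=10):
    """Iterate health capital forward: depreciate, then invest."""
    h = h0
    for _ in range(periods):
        h = h * (1 - depreciation(effort, good_conditions)) + investment
    return h

print(simulate_health(0.3, True))   # moderate effort, good conditions
print(simulate_health(0.0, True))   # no effort
print(simulate_health(0.9, False))  # heavy effort, poor conditions
```

Under these made-up numbers, moderate effort in good conditions slows depreciation relative to no effort, while heavy effort in poor conditions accelerates it – the qualitative pattern the paper reports.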

Agreement between retrospectively and contemporaneously collected patient-reported outcome measures (PROMs) in hip and knee replacement patients. Quality of Life Research [PubMed] Published 26th February 2018

The use of patient-reported outcomes (PROMs) in elective care in the NHS has been a boon for researchers in our field, providing before-and-after measurement of health-related quality of life so that we can look at the impact of these interventions. But we can’t do this in emergency care because the ‘before’ is never observed – people only show up when they’re in the middle of the emergency. But what if people could accurately recall their pre-emergency health state? There’s some evidence to suggest that people can, so long as the recall period is short. This study looks at NHS PROMs data (n=443), with generic and condition-specific outcomes collected from patients having hip or knee replacements. Patients included in the study were additionally asked to recall their health state 4 weeks prior to surgery. The authors assess the extent to which the contemporary PROM measurements agree with the retrospective measurements, and the extent to which any disagreement relates to age, socioeconomic status, or the length of time to recall. There wasn’t much difference between contemporary and retrospective measurements, though patients reported slightly lower health on the retrospective questionnaires. And there weren’t any compelling differences associated with age or socioeconomic status or the length of recall. These findings are promising, suggesting that we might be able to rely on retrospective PROMs. But the elective surgery context is very different to the emergency context, and I don’t think we can expect the two types of health care to impact recollection in the same way. In this study, responses may also have been influenced by participants’ memories of completing the contemporary questionnaire, and the recall period was very short. But the only way to find out more about the validity of retrospective PROM collection is to do more of it, so hopefully we’ll see more studies asking this question.
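For readers wondering what an 'agreement' analysis might look like in practice, a Bland-Altman-style check is one common approach: compute the mean difference between the two measurements (the systematic bias) and the limits within which most differences fall. The scores below are invented, and the study's actual statistical methods may differ.

```python
# Illustrative Bland-Altman-style agreement check between contemporaneous
# and retrospective PROM scores. All data below are made up.
import statistics

contemporaneous = [0.62, 0.55, 0.71, 0.48, 0.66, 0.59, 0.52, 0.70]
retrospective   = [0.60, 0.50, 0.69, 0.45, 0.65, 0.55, 0.50, 0.68]

diffs = [r - c for r, c in zip(retrospective, contemporaneous)]
mean_diff = statistics.mean(diffs)  # systematic bias (negative = lower recall)
sd_diff = statistics.stdev(diffs)
loa = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)

print(f"mean difference: {mean_diff:.3f}")
print(f"95% limits of agreement: ({loa[0]:.3f}, {loa[1]:.3f})")
```

In these invented data the mean difference is slightly negative, mirroring the paper's finding that patients reported slightly lower health on the retrospective questionnaires.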

Adaptation or recovery after health shocks? Evidence using subjective and objective health measures. Health Economics [PubMed] Published 26th February 2018

People’s expectations about their health can influence their behaviour and determine their future health, so it’s important that we understand people’s expectations and any ways in which they diverge from reality. This paper considers the effect of a health shock on people’s expectations about how long they will live. The authors focus on survival probability, measured objectively (i.e. what actually happens to these patients) and subjectively (i.e. what the patients expect), and the extent to which the latter corresponds to the former. The arguments presented are couched within the concept of hedonic adaptation. So the question is – if post-shock expectations return to pre-shock expectations after a period of time – whether this is because people are recovering from the disease or because they are moving their reference point. Data are drawn from the Health and Retirement Study. Subjective survival probability is scaled to whether individuals expect to survive for 2 years. Cancer, stroke, and myocardial infarction are the health shocks used. The analysis uses some lagged regression models, separate for each of the three diagnoses, with objective and subjective survival probability as the dependent variable. There’s a bit of a jumble of things going on in this paper, with discussions of adaptation, survival, self-assessed health, optimism, and health behaviours. So it’s a bit difficult to see the wood for the trees. But the authors find the effect they’re looking for. Objective survival probability is negatively affected by a health shock, as is subjective survival probability. But then subjective survival starts to return to pre-shock trends whereas objective survival does not. The authors use this finding to suggest that there is adaptation. I’m not sure about this interpretation. To me it seems as if subjective life expectancy is only weakly responsive to changes in objective life expectancy. 
The findings seem to have more to do with how people process information about their probability of survival than with how they adapt to a situation. So while this is an interesting study about how people process changes in survival probability, I’m not sure what it has to do with adaptation.
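The pattern the authors report can be caricatured with some invented numbers: both survival measures fall at the health shock, but only the subjective one drifts back towards its pre-shock level.

```python
# Stylised illustration of the reported pattern (all numbers invented):
# objective survival probability stays depressed after the shock,
# while subjective survival probability drifts back towards its
# pre-shock level.

pre_shock_level = 0.90  # 2-year survival probability before the shock

# Yearly values from two years before the shock to four years after.
objective  = [0.90, 0.90, 0.70, 0.71, 0.71, 0.72, 0.72]
subjective = [0.90, 0.90, 0.70, 0.78, 0.83, 0.87, 0.89]

def shortfall(series):
    """Gap between each year's value and the pre-shock level."""
    return [pre_shock_level - s for s in series]

print(shortfall(objective)[-1])   # objective survival stays depressed
print(shortfall(subjective)[-1])  # subjective survival nearly recovers
```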

3L, 5L, what the L? A NICE conundrum. PharmacoEconomics [PubMed] Published 26th February 2018

In my last round-up, I said I was going to write a follow-up blog post to an editorial on the EQ-5D-5L. I didn’t get round to it, but that’s probably best as there has since been a flurry of other editorials and commentaries on the subject. Here’s one of them. This commentary considers the perspective of NICE in deciding whether to support the use of the EQ-5D-5L and its English value set. The authors point out the differences between the 3L and 5L, namely the descriptive systems and the value sets. Examples of the 5L descriptive system’s advantages are provided: a reduced ceiling effect, reduced clustering, better discriminative ability, and the benefits of doing away with the ‘confined to bed’ level of the mobility domain. Great! On to the value set. There are lots of differences here, with 3 main causes: the data, the preference elicitation methods, and the modelling methods. We can’t immediately determine whether these differences are improvements or not. The authors stress the point that any differences observed will be in large part due to quirks in the original 3L value set rather than in the 5L value set. Nevertheless, the commentary is broadly supportive of a cautionary approach to 5L adoption. I’m not. Time for that follow-up blog post.

Credits

Alastair Canaway’s journal round-up for 27th November 2017

Elevated mortality among weekend hospital admissions is not associated with adoption of seven day clinical standards. Emergency Medicine Journal [PubMed] Published 8th November 2017

Our esteemed colleagues in Manchester brought more evidence to the seven-day NHS debate (debacle?). Patients who are admitted to hospital in an emergency at weekends have higher mortality rates than those admitted during the week. Despite what our Secretary of State would have you believe, there is an increasing body of evidence suggesting that once case-mix is adequately adjusted for, the ‘weekend effect’ becomes negligible. This paper takes a slightly different angle for examining the same phenomenon. It harnesses the introduction of four priority clinical standards in England, which aim to reduce the number of deaths associated with the weekend effect. These are time to first consultant review; access to diagnostics; access to consultant-directed interventions; and on-going consultant review. The study uses publicly available data on the performance of NHS Trusts in relation to these four priority clinical standards. For the latest financial year (2015/16), Trusts’ weekend effect odds ratios were compared to their achievement against the four clinical standards. Data were available for 123 Trusts. The authors found that adoption of the four clinical standards was not associated with the extent to which mortality was elevated for patients admitted at the weekend. Furthermore, they found no association between the Trusts’ performance against any of the four standards and the magnitude of the weekend effect. The authors offer three reasons as to why this may be the case. First, data quality could be poor; second, the standards themselves could be inadequate for reducing mortality; finally, mortality may be the wrong metric by which to judge the benefits of a seven-day service.
They note that their previous research demonstrated that the weekend effect is driven by admission volumes at the weekend rather than the number of deaths, so it will not be impacted by care provision, and this is consistent with the findings in this study. The spectre of opportunity cost looms over the implementation of these standards: although no direct harm may arise from their introduction, resources will be diverted away from potentially more beneficial alternatives, and this is a serious concern. The seven-day debate continues.

The effect of level overlap and color coding on attribute non-attendance in discrete choice experiments. Value in Health Published 16th November 2017

I think discrete choice experiments (DCEs) are difficult to complete. That may be due to me not being the sharpest knife in the drawer, or it could be due to the nature of DCEs, or a bit of both. For this reason, I like best-worst scaling (BWS). BWS aside, DCEs are a common tool used in health economics research to assess and understand preferences. Given the difficulty of DCEs, people often resort to heuristics; that is, respondents often simplify choice tasks by taking shortcuts, e.g. ignoring one or more attributes (attribute non-attendance) or always selecting the option with the highest level of a certain attribute. This has downstream consequences, leading to bias in preference estimates. Furthermore, difficulty with comprehension leads to high attrition rates. This RCT sought to examine whether participant dropout and attribute non-attendance could be reduced through two methods: level overlap, and colour coding. Level overlap refers to a DCE design whereby in each choice task a certain number of attributes are presented with the same level; in different choice tasks, different attributes are overlapped. The idea is to prevent dominant attribute strategies, whereby participants always choose the option with the highest level of one specific attribute, and to force them to evaluate all attributes. The second method involves colour coding and the provision of other visual cues to reduce task complexity, e.g. colour coding levels to make it easy to see which levels are equal. There were five trial arms. The control arm featured no colour coding and no attribute overlap. The other four arms featured either colour coding (two different types were tested), attribute overlap, or a combination of the two. A sample that was nationally (Dutch) representative in relation to age, gender, education, and geographic region was recruited online. In total, 3394 respondents were recruited and each arm contained over 500 respondents.
Familiarisation and warm-up questions were followed by 21 pairwise choice tasks in a randomised order. In the control arm (no overlap, no colour coding), 13.9% dropped out, whilst respondents attended to, on average, only 2.1 of the five attributes. Colour coding reduced dropout to 9.6%, with 2.8 attributes being attended to. Combining level overlap with intensity colour coding reduced dropout further, to 7.2%, whilst increasing attribute attendance to four out of five. Thus, the combination of level overlap and colour coding nearly halved the dropout rate and doubled attribute attendance within the DCE task. An additional, and perhaps the most important, benefit of the improvement in attribute attendance is that it reduces the need to model potential attribute non-attendance post hoc. Given the difficulty of DCE completion, it seems colour coding in combination with level overlap should be employed in future DCE tasks.
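For anyone unfamiliar with level overlap, the design is easy to sketch: in each pairwise task, a fixed number of attributes are forced to share a level across the two options, so respondents cannot lean on a single dominant attribute. The attributes, levels, and overlap rule below are invented for illustration and are not those used in the trial.

```python
# Sketch of level overlap in a DCE choice task: n_overlap attributes take
# the same level in both alternatives, so the respondent must trade off
# the remaining attributes. Attribute names and levels are invented.
import random

ATTRIBUTES = {
    "waiting time": ["1 week", "1 month", "3 months"],
    "travel time": ["15 min", "30 min", "60 min"],
    "continuity of care": ["same doctor", "different doctor"],
    "out-of-pocket cost": ["0", "25", "50"],
    "effectiveness": ["70%", "80%", "90%"],
}

def make_task(n_overlap=2, rng=random):
    """Build one pairwise choice task with n_overlap shared attributes."""
    overlapped = set(rng.sample(sorted(ATTRIBUTES), n_overlap))
    option_a, option_b = {}, {}
    for attr, levels in ATTRIBUTES.items():
        if attr in overlapped:
            option_a[attr] = option_b[attr] = rng.choice(levels)
        else:
            # Non-overlapped attributes vary freely (and may still
            # coincide by chance in this simple sketch).
            option_a[attr] = rng.choice(levels)
            option_b[attr] = rng.choice(levels)
    return option_a, option_b, overlapped

a, b, shared = make_task(n_overlap=2, rng=random.Random(1))
for attr in ATTRIBUTES:
    marker = "=" if attr in shared else " "
    print(f"{attr:20s} {marker} A: {a[attr]:15s} B: {b[attr]}")
```

Across a full design, different attributes would be overlapped in different tasks, so every attribute is forced into active consideration at some point.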

Evidence on the longitudinal construct validity of major generic and utility measures of health-related quality of life in teens with depression. Quality of Life Research [PubMed] Published 17th November 2017

There appears to be increasing recognition of the prevalence and seriousness of youth mental health problems. Nearly 20% of young people will suffer depression during their adolescent years. To facilitate cost-utility analysis, it is necessary to have a preference-based measure of health-related quality of life (HRQL). However, there are few measures designed for use in adolescents. This study sought to examine various existing HRQL measures in relation to their responsiveness for the evaluation of interventions targeting depression in young people. This builds on previous work by Brazier et al. that found the EQ-5D and SF-6D performed adequately for depression in adults. In total, 392 adolescents aged between 13 and 17 years joined the study; 376 of these completed follow-up assessments. Assessments were taken at baseline and 12 weeks, the justification for 12 weeks being that it represented the modal time to clinical change. The following utility instruments were included: the HUI suite, the EQ-5D-3L, the Quality of Well-Being Scale (QWB), and the SF-6D (derived from the SF-36). Other non-preference-based HRQL measures were also included: disease-specific ratings and scales, and the PedsQL 4.0. All (yes, you read that correctly) measures were found to be responsive to change in depression symptomatology over the 12-week follow-up period, and each of the multi-attribute utility instruments was able to detect clinically meaningful change. In terms of comparing the utility instruments, the HUI-3, the QWB, and the SF-6D were the most responsive, whilst the EQ-5D-3L was the least responsive. In summary, any of the utility instruments could be used. One area of disappointment for me was that the CHU-9D was not included within this study – it’s one of the few instruments that has been developed by and for children and would have been a very worthy addition. Regardless, this is an informative study for those of us working within the youth mental health sphere.
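Responsiveness in studies like this is often quantified with an effect-size statistic such as the standardised response mean (SRM): mean change from baseline to follow-up divided by the standard deviation of that change. The scores below are invented; the study's exact responsiveness statistics may differ.

```python
# Standardised response mean (SRM) for an instrument's utility scores
# between baseline and 12-week follow-up. All scores are invented.
import statistics

baseline  = [0.45, 0.50, 0.38, 0.55, 0.42, 0.48, 0.40, 0.52]
follow_up = [0.60, 0.58, 0.55, 0.70, 0.50, 0.62, 0.57, 0.65]

changes = [f - b for f, b in zip(follow_up, baseline)]
srm = statistics.mean(changes) / statistics.stdev(changes)
print(f"SRM = {srm:.2f}")  # larger absolute values = more responsive
```

Comparing instruments then amounts to computing the SRM (or a similar effect size) for each one on the same patients and ranking them, which is essentially how the HUI-3, QWB, and SF-6D came out ahead of the EQ-5D-3L.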

Credits

Chris Sampson’s journal round-up for 20th November 2017

Effects of health and social care spending constraints on mortality in England: a time trend analysis. BMJ Open [PubMed] Published 15th November 2017

I’d hazard a guess that I’m not the only one here who gets angry about the politics of austerity. Having seen this study’s title, it’s clear that the research could provide fuel for that anger. It doesn’t disappoint. Recent years have seen very low year-on-year increases in public expenditure on health in England. Even worse, between 2010 and 2014, public expenditure on social care actually fell in real terms. This is despite growing need for health and social care. In this study, the authors look at health and social care spending and try to estimate the impact that reduced expenditure has had on mortality in England. The analysis uses spending and mortality data from 2001 onwards and also incorporates mortality projections for 2015-2020. Time trend analyses are conducted using Poisson regression models. From 2001-2010, deaths decreased by 0.77% per year (on average). The mortality rate was falling. Now it seems to be increasing; from 2011-2014, the average number of deaths per year increased by 0.87%. This corresponds to 18,324 additional deaths in 2014, for example. But everybody dies. Extra deaths are really sooner deaths. So the question, really, is how much sooner? The authors look at potential years of life lost and find this figure to be 75,496 life-years greater than expected in 2014, given pre-2010 trends. This shouldn’t come as much of a surprise. Spending less generally achieves less. What makes this study really interesting is that it can tell us who is losing these potential years of life as a result of spending cuts. The authors find that it’s the over-60s. Care home deaths were the largest contributor to increased mortality. A £10 cut in social care spending per capita resulted in 5 additional care home deaths per 100,000 people. When the authors looked at deaths by local area, no association was found with the level of deprivation. 
If health and social care expenditure are combined in a single model, we see that it’s social care spending that is driving the number of excess deaths. The impact of health spending on hospital deaths was less robust. The number of nurses acted as a mediator for the relationship between spending and mortality. The authors estimate that current spending projections will result in 150,000 additional deaths compared with pre-2010 trends. There are plenty of limitations to this study. It’s pretty much impossible (though the authors do try) to separate the effects of austerity from the effect of a weak economy. Still, I’m satisfied with the conclusion that austerity kills older people (no jokes about turkeys and Christmas, please). For me, the findings also highlight the need for more research in the context of social care, and how we (as researchers) might effectively direct policy to prevent ‘excess’ deaths.
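The basic logic of the time trend analysis can be sketched in a few lines: fit a pre-2010 trend in mortality rates, project it forward, and count the gap between observed and projected deaths. The sketch below uses an ordinary least squares fit on log rates as a simple stand-in for the authors' Poisson regression, with invented round-number data.

```python
# Sketch of a time-trend analysis in the spirit of the paper: model log
# mortality rates as linear in year, estimate the pre-2010 trend, then
# compare projected with observed deaths afterwards. OLS on log rates is
# a simple stand-in for a Poisson GLM; all figures are invented.
import math

# year: (deaths, population)
data = {
    2001: (530_000, 49_400_000), 2004: (515_000, 50_000_000),
    2007: (500_000, 51_100_000), 2010: (490_000, 52_600_000),
}
observed_2014 = (500_000, 54_300_000)

xs = [y - 2001 for y in data]
ys = [math.log(d / p) for d, p in data.values()]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

# Project the 2014 rate from the pre-2010 trend and count 'excess' deaths.
projected_rate = math.exp(intercept + slope * (2014 - 2001))
excess = observed_2014[0] - projected_rate * observed_2014[1]
print(f"annual trend: {100 * (math.exp(slope) - 1):.2f}% per year")
print(f"excess deaths in 2014 vs trend: {excess:,.0f}")
```

With these invented figures the pre-2010 trend is downward and 2014 deaths sit above the projection, which is the structure of the paper's 'excess deaths' calculation.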

Should cost effectiveness analyses for NICE always consider future unrelated medical costs? BMJ [PubMed] Published 10th November 2017

The question of whether or not ‘unrelated’ future medical costs should be included in economic evaluation is becoming a hot topic. So much so that the BMJ has published this Head To Head, which introduces some of the arguments for and against. NICE currently recommends excluding unrelated future medical costs. An example given in this article is the case of the expected costs of dementia care having saved someone’s life by heart transplantation. The argument in favour of including unrelated costs is quite obvious – these costs can’t be ignored if we seek to maximise social welfare. Their inclusion is described as “not difficult” by the authors defending this move. By ignoring unrelated future costs (but accounting for the benefit of longer life), the relative cost-effectiveness of life-extending treatments, compared with life-improving treatments, is artificially inflated. The argument against including unrelated medical costs is presented as one of fairness. The author suggests that their inclusion could preclude access to health care for certain groups of people that are likely to have high needs in the future. So perhaps NICE should ignore unrelated medical costs in certain circumstances. I sympathise with this view, but I feel it is less a fairness issue and more a demonstration of the current limits of health-related quality of life measurement, which don’t reflect adaptation and coping. However, I tend to disagree with both of the arguments presented here. I really don’t think NICE should include or exclude unrelated future medical costs according to the context because that could create some very perverse incentives for certain stakeholders. But then, I do not agree that it is “not difficult” to include all unrelated future costs. ‘All’ is an important qualifier here because the capacity for analysts to pick and choose unrelated future costs creates the potential to pick and choose results. 
When it comes to unrelated future medical costs, NICE’s position needs to be all-or-nothing, and right now the ‘all’ bit is a high bar to clear. NICE should include unrelated future medical costs – it’s difficult to formulate a sound argument against that – but they should only do so once more groundwork has been done. In particular, we need to develop more valid methods for valuing quality of life against life-years in health technology assessment across different patient groups. And we need more reliable methods for estimating future medical costs in all settings.

Oncology modeling for fun and profit! Key steps for busy analysts in health technology assessment. PharmacoEconomics [PubMed] Published 6th November 2017

Quite a title(!). The subject of this essay is ‘partitioned survival modelling’. Honestly, I never really knew what that was until I read this article. It seems the reason for my ignorance could be that I haven’t worked on the evaluation of cancer treatments, for which it’s a popular methodology. Apparently, a recent study found that almost 75% of NICE cancer drug appraisals were informed by this sort of analysis. Partitioned survival modelling is a simple means by which to extrapolate outcomes in a context where people can survive (or not) with or without progression. Often this can be done on the basis of survival analyses and standard trial endpoints. This article seeks to provide some guidance on the development and use of partitioned survival models. Or, rather, it provides a toolkit for calling out those who might seek to use the method as a means of providing favourable results for a new therapy when data and analytical resources are lacking. The ‘key steps’ can be summarised as 1) avoiding/ignoring/misrepresenting current standards of economic evaluation, 2) using handpicked parametric approaches for extrapolation in order to maximise survival benefits, 3) creatively estimating relative treatment effects using indirect comparisons without adjustment, 4) making optimistic assumptions about post-progression outcomes, and 5) denying the possibility of any structural uncertainty. The authors illustrate just how much an analyst can influence the results of an evaluation (if they want to “keep ICERs in the sweet spot!”). Generally, these tactics move the model far from being representative of reality. However, the prevailing secrecy around most models means that it isn’t always easy to detect these shortcomings. Sometimes it is, though, and the authors make explicit reference to technology appraisals that they suggest demonstrate these crimes. Brilliant!
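For the uninitiated, the basic mechanics of a partitioned survival model are genuinely simple, which is part of the problem the authors describe: state occupancy at any time is read directly off the overall survival (OS) and progression-free survival (PFS) curves, with no transition probabilities to justify. The curves, utilities, and time horizon below are invented for illustration.

```python
# Minimal partitioned survival model sketch: at each time point the
# cohort is partitioned into progression-free (PFS), progressed (OS
# minus PFS), and dead (1 minus OS). Exponential curves and utility
# values here are invented for illustration.
import math

def os_curve(t, rate=0.10):    # overall survival at time t (years)
    return math.exp(-rate * t)

def pfs_curve(t, rate=0.25):   # progression-free survival at time t
    return math.exp(-rate * t)

def state_occupancy(t):
    """Partition the cohort at time t into the three model states."""
    pfs, os_ = pfs_curve(t), os_curve(t)
    return {
        "progression-free": pfs,
        "progressed": os_ - pfs,   # alive but progressed
        "dead": 1.0 - os_,
    }

def expected_qalys(horizon=20, step=0.1, u_pf=0.8, u_prog=0.5):
    """Accumulate utility-weighted time in each alive state."""
    qalys, t = 0.0, 0.0
    while t < horizon:
        s = state_occupancy(t)
        qalys += step * (u_pf * s["progression-free"] + u_prog * s["progressed"])
        t += step
    return qalys

print(expected_qalys())
```

The 'key steps' in the article exploit exactly the levers visible here: the choice of parametric curves for extrapolation, the assumed post-progression utility, and the absence of any structural check that OS and PFS remain mutually consistent.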

Credits