Chris Sampson’s journal round-up for 29th May 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

“Naming and framing”: The impact of labeling on health state values for multiple sclerosis. Medical Decision Making [PubMed] Published 21st May 2017

Tell someone that the health state they’re valuing is actually related to cancer, and they’ll give you a different value than if you hadn’t mentioned cancer. A lower value, probably. There’s a growing amount of evidence that ‘labelling’ health state descriptions with the name of a particular disease can influence the resulting values. Generally, the evidence is that mentioning the disease will lower values, though that’s probably because researchers have been selecting diseases that they expect to show this effect. (Has anyone tried it for hayfever?) The jury is out on whether labelling is a good thing or a bad thing, so in the meantime we need evidence for particular diseases to help us understand what’s going on. This study looks at MS. Two UK-representative samples (n = 1576; n = 1641) completed an online TTO valuation task for states defined using the condition-specific preference-based MSIS-8D. Participants were first asked to complete the MSIS-8D to describe their own health state, then to rank three MSIS-8D states and complete a practice TTO task. For the preference elicitation proper, individuals were presented with a set of 5 MSIS-8D health states. One group was asked to imagine that they had MS and was provided with some information and a link to the NHS Choices website. The authors’ first analysis tests for a difference due to labelling. Their second analysis creates two alternative tariffs for the MSIS-8D based on the two surveys. People in the label group reported lower health state values on average. The size of this labelling-related decrement was greater for less severe health states. The creation of the tariffs showed that labelling does not have a consistent impact across dimensions. This means that, in practice, the two tariffs could favour different types of interventions, depending on the dimensions in which benefits are observed. The tariff derived from the label group also demonstrated slightly poorer predictive performance. This study tells us that the decision to label or not will influence the relative cost-effectiveness of interventions for MS. But we still need a sound basis for making that choice.
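To see why two tariffs that weight dimensions differently can favour different interventions, here is a minimal Python sketch. The dimension names and decrements are invented for illustration and are not the published MSIS-8D values; the point is only that shifting value between dimensions can reverse the ranking of two interventions.

```python
# Toy example: two hypothetical tariffs that put different weight on two dimensions.
# The dimension names and decrements are invented, not the published MSIS-8D values.

tariff_unlabelled = {"mobility": 0.10, "fatigue": 0.05}
tariff_labelled = {"mobility": 0.06, "fatigue": 0.12}  # the label shifts weight between dimensions

def utility(problems, tariff):
    """Utility = 1 minus the decrement for each dimension with a problem."""
    return 1.0 - sum(tariff[d] for d in problems)

baseline = {"mobility", "fatigue"}   # problems on both dimensions
after_a = {"fatigue"}                # intervention A resolves the mobility problem
after_b = {"mobility"}               # intervention B resolves the fatigue problem

for name, tariff in [("unlabelled", tariff_unlabelled), ("labelled", tariff_labelled)]:
    gain_a = utility(after_a, tariff) - utility(baseline, tariff)
    gain_b = utility(after_b, tariff) - utility(baseline, tariff)
    print(f"{name}: gain from A = {gain_a:.2f}, gain from B = {gain_b:.2f}")

# Under the unlabelled tariff A (0.10) beats B (0.05); under the labelled tariff B (0.12) beats A (0.06).
```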

Nudges in a post-truth world. Journal of Medical Ethics [PubMed] Published 19th May 2017

Not everyone likes the idea of nudges. They can be used to get people to behave in ways that are ‘better’… but who decides what is better? Truth, surely, we can all agree, is better. There are strong forces against the truth, whether they be our own cognitive biases, the mainstream media (FAKE NEWS!!!), or Nutella trying to tell us they offer a healthy breakfast option thanks to all that calcium. In this essay, the author outlines a special kind of nudge, which he refers to as a ‘nudge to reason’. The paper starts with a summary of the evidence on people’s failure to change their minds in response to evidence, and on the backfire effect, whereby false beliefs become even more entrenched in light of conflicting evidence. Memory failures, and the ease with which people can handle the information, are identified as key reasons for perverse responses to evidence. The author then goes on to look at the evidence on the conditions in which people do respond to evidence. In particular, where people get their evidence from matters (we still trust academics, right?). The persuasiveness of evidence can also be influenced by the way it is delivered. So why not nudge towards the truth? The author focuses on a key objection to nudges: that they do not protect freedom in a substantive sense because they bypass people’s capacities for deliberation. Nudges take advantage of non-rational features of human nature and fail to treat people as autonomous agents deserving of respect. One of the reasons I’ve never much liked nudges is that they could promote ignorance and reinforce biases. Nudges to reason, on the other hand, influence behaviour indirectly via beliefs: changing behaviour by changing minds by improving responses to genuine evidence. The author argues that nudges to reason do not bypass the deliberative capacities of agents at all, but rather appeal to them, and are thus permissible. They operate by appealing to mechanisms that are partially constitutive of rationality, and this is itself part of what defines our substantive freedom. We could also extend this to argue that we have a moral responsibility to frame arguments in a way that is truth-conducive, in order to show respect to individuals. I think health economists are in a great position to contribute to these debates. Our subfield exists principally because of uncertainty and asymmetry of information in health care. We’ve been studying these things for years. I’m convinced by the author’s arguments about the permissibility of nudges to reason. But they’d probably make for flaccid public policy. Nudges to reason would surely be dominated by nudges to ignorance. Either people need coercing towards the truth or those nudges to ignorance need to be shut down.

How should hospital reimbursement be refined to support concentration of complex care services? Health Economics [PubMed] Published 19th May 2017

Treating rare and complex conditions in specialist centres may be good for patients. We might expect these patients to be especially expensive to treat compared with people treated in general hospitals. Therefore, unless reimbursement mechanisms are able to account for this, specialist hospitals will be financially disadvantaged and concentration might not be sustainable. Healthcare Resource Groups (HRGs) – the basis for current payments – only work if variation in cost is not related to any differences in the types of patients treated at particular hospitals. This study looks at hospitals that might be at risk of financial disadvantage due to differences in casemix complexity. Individual-level Hospital Episode Statistics for 2013-14 were matched to hospital-level Reference Costs, and a set of indicators for the use of specialist services was applied. The data included 12.4 million patients, of whom 766,204 received complex care. The authors construct a random effects model estimating the cost difference associated with complex care, by modelling the impact of a set of complex care markers on individual-level cost estimates. The Gini coefficient is estimated to look at the concentration of complex care across hospitals. Most of the complex care markers were associated with significantly higher costs; 26 of the 69 types of complex care were associated with costs more than 10% higher. What’s more, complex care was concentrated among relatively few hospitals, with a mean Gini coefficient of 0.88. Two possible approaches to fixing the payment system are considered: i) recalculation of the HRG price to include a top-up, or ii) a more complex refinement of the allocation of patients to different HRGs. The second option becomes less attractive as more HRGs are subject to this refinement, as we could end up with just one hospital reporting all of the activity for a particular HRG. Based on the expected impact of these differences – in view of the size of the cost difference and the extent of distribution across different HRGs and hospitals – the authors are able to make recommendations about which HRGs might require refinement. The study also hints at an interesting challenge. Some of the complex care services were associated with lower costs where care was concentrated in very few centres, suggesting that concentration could give rise to cost savings. This could imply that some HRGs may need refining downwards with complexity, which feels a bit counterintuitive. My only criticism of the paper? The references include at least 3 web pages that are no longer there. Please use WebCite, people!
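For readers unfamiliar with the concentration measure, here is a short Python sketch of a Gini coefficient computed over hospital-level counts of complex-care activity. The counts are invented, not taken from the paper; a value near 0.9 simply indicates that most activity sits in a handful of centres.

```python
# Gini coefficient over hypothetical hospital-level counts of complex-care patients.
# The counts below are invented; 0 means a hospital provides none of this service.
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative array (0 = evenly spread, close to 1 = concentrated)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n  # Lorenz-curve formulation

activity = [0] * 10 + [1, 1, 2, 2, 3, 5, 10, 40, 120, 300]  # 20 hospitals
print(round(gini(activity), 2))  # ~0.89: provision is highly concentrated
```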


Chris Sampson’s journal round-up for 8th May 2017


Verification of decision-analytic models for health economic evaluations: an overview. PharmacoEconomics [PubMed] Published 29th April 2017

Increasingly, it’s expected that model-based economic evaluations can be validated and shown to be fit-for-purpose. However, up to now, discussions have focussed on scientific questions about conceptualisation and external validity, rather than technical questions, such as whether the model is programmed correctly and behaves as expected. This paper looks at how things are done in the software industry with a view to creating guidance for health economists. Given that Microsoft Excel remains one of the most popular software packages for modelling, there is a discussion of spreadsheet errors. These might be errors in logic, simple copy-paste type mistakes and errors of omission. A variety of tactics is discussed. In particular, the authors describe unit testing, whereby individual parts of the code are demonstrated to be correct. Unit testing frameworks do not exist for application to spreadsheets, so the authors recommend the creation of a ‘Tests’ spreadsheet with tests for parameter assignments, functions, equations and exploratory items. Independent review by another modeller is also recommended. Six recommendations are given for taking model verification forward: i) the use of open source models, ii) standardisation in model storage and communication (anyone for a registry?), iii) style guides for script, iv) agency and journal mandates, v) training and vi) creation of an ISPOR/SMDM task force. This is a worthwhile read for any modeller, with some neat tactics that you can build into your workflow.
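To give a flavour of the unit-testing idea outside a spreadsheet, here is a minimal pytest-style sketch for the building blocks of a hypothetical Markov model. The functions and expected values are my own illustration rather than anything from the paper, but the checks (probabilities are valid, transition rows sum to one, a known discounting value is reproduced) are the kind of thing the authors suggest collecting in a ‘Tests’ sheet.

```python
# Hypothetical model building blocks plus unit tests; run with `pytest` on this file.
import numpy as np

def transition_matrix(p_progress, p_die):
    """Annual transition probabilities between Well, Sick and Dead states."""
    return np.array([
        [1 - p_progress - p_die, p_progress, p_die],
        [0.0, 1 - p_die, p_die],
        [0.0, 0.0, 1.0],
    ])

def discount_factor(rate, year):
    """Standard discount factor applied to costs and QALYs in a given year."""
    return 1.0 / (1.0 + rate) ** year

def test_rows_sum_to_one():
    m = transition_matrix(p_progress=0.2, p_die=0.05)
    assert np.allclose(m.sum(axis=1), 1.0)

def test_probabilities_are_valid():
    m = transition_matrix(p_progress=0.2, p_die=0.05)
    assert ((m >= 0) & (m <= 1)).all()

def test_discounting_known_value():
    # 3.5% discount rate over 2 years: 1 / 1.035^2
    assert abs(discount_factor(0.035, 2) - 0.93351) < 1e-4
```

Rerunning the tests after any edit to the model code flags changes that break expected behaviour, which is exactly what a ‘Tests’ spreadsheet would do in Excel.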

How robust are value judgments of health inequality aversion? Testing for framing and cognitive effects. Medical Decision Making [PubMed] Published 25th April 2017

Evidence shows that people are often extremely averse to health inequality. Sometimes these super-egalitarian responses imply such extreme preferences that monotonicity is violated. The starting point for this study is the idea that these findings are probably influenced by framing effects and cognitive biases, and that they may therefore not constitute a reliable basis for policy making. The authors investigate 4 hypotheses that might indicate the presence of bias: i) realistic small health inequality reductions vs larger ones, ii) population- vs individual-level descriptions, iii) concrete vs abstract intervention scenarios and iv) online vs face-to-face administration. Two samples were recruited: one with a face-to-face discussion (n=52) and the other online (n=83). The questionnaire introduced respondents to health inequality in England before asking 4 questions in the form of a choice experiment, with 20 paired choices. Responses are grouped according to non-egalitarianism, prioritarianism and strict egalitarianism. The main research question is whether or not the alternative strategies resulted in fewer strict egalitarian responses. Not much of an effect was found with regard to large gains or to population-level descriptions. There was evidence that the abstract scenarios resulted in a greater proportion of people giving strong egalitarian responses. And the face-to-face sample did seem to exhibit some social desirability bias, with more egalitarian responses. But the main take-home message from this study for me is that it is not easy to explain away people’s extreme aversion to health inequality, which is heartening. Yet, as with all choice experiments, we see that the mode of administration – and cognitive effects induced by the question – can be very important.
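As a rough illustration of the grouping (and only that: this is a simplified rule of my own, not the authors’ procedure), a respondent’s pattern of choices across the paired questions might be classified along these lines.

```python
# Toy classification of choice-experiment responses; the choice data and the
# cut-offs are invented for illustration.

def classify(choices):
    """choices: list of 'pro_poor' / 'max_total' picks across the paired questions."""
    pro_poor_share = choices.count("pro_poor") / len(choices)
    if pro_poor_share == 1.0:
        return "strict egalitarian"   # always reduces inequality, whatever the cost
    if pro_poor_share == 0.0:
        return "non-egalitarian"      # always maximises total health
    return "prioritarian"             # trades off total health against inequality

print(classify(["pro_poor"] * 20))                      # strict egalitarian
print(classify(["pro_poor"] * 12 + ["max_total"] * 8))  # prioritarian
```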

Adaptation to health states: sick yet better off? Health Economics [PubMed] Published 20th April 2017

Should patients or the public value health states for the purpose of resource allocation? It’s a question that’s cropped up plenty of times on this blog. One of the trickier challenges is understanding and dealing with adaptation. This paper has a pretty straightforward purpose – to look for signs of adaptation in a longitudinal dataset. The authors’ approach is to see whether there is a positive relationship between the length of time a person has had an illness and the likelihood of them reporting better health. I did pretty much the same thing (for SF-6D and satisfaction with life) in my MSc dissertation, and found little evidence of adaptation, so I’m keen to see where this goes! The study uses 4 waves of data from the British Cohort Study, looking at self-assessed health (on a 4-point scale) and self-reported chronic illness and health shocks. Latent self-assessed health is modelled using a dynamic ordered probit model. In short, there is evidence of adaptation. People who have had a long-standing illness for a greater duration are more likely to report a higher level of self-assessed health. An additional 10 years of illness is associated with an 8 percentage point increase in the likelihood of reporting ‘excellent’ health. The study is opaque about sample sizes, but I’d guess that finding is based on not-that-many people. Further analyses are conducted to show that adaptation seems to become important only after a relatively long duration (~20 years), and that better health before diagnosis may not influence adaptation. The authors also look at specific conditions, finding that some (e.g. diabetes, anxiety, back problems) are associated with adaptation, while others (e.g. depression, cancer, Crohn’s disease) are not. I have a bit of a problem with this study, though, in that it’s framed as being relevant to health care resource allocation and health technology assessment. But I don’t think it is. Self-assessed health in the ‘how healthy are you’ sense is very far removed from the process by which health state utilities are obtained using the EQ-5D, and the two probably don’t reflect adaptation in the same way.
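As a sketch of the modelling idea, and only a sketch (the paper’s specification is dynamic, with lagged health, and uses the British Cohort Study rather than simulated data), here is a static ordered probit of a four-category self-assessed health measure on illness status and duration, estimated with statsmodels on data simulated to exhibit adaptation.

```python
# Simulated illustration of an ordered probit for self-assessed health (SAH).
# The data-generating process builds in 'adaptation': illness lowers latent health,
# but each extra year of illness duration pulls it back up.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 2000
chronic = rng.integers(0, 2, n)                   # has a long-standing illness?
duration = chronic * rng.uniform(0, 30, n)        # years since onset (0 if no illness)
latent = -0.8 * chronic + 0.03 * duration + rng.normal(size=n)

labels = ["poor", "fair", "good", "excellent"]
sah = pd.Series(pd.Categorical.from_codes(np.digitize(latent, [-1.0, 0.0, 1.0]),
                                          labels, ordered=True))

X = pd.DataFrame({"chronic_illness": chronic, "illness_duration": duration})
res = OrderedModel(sah, X, distr="probit").fit(method="bfgs", disp=False)
print(res.params)  # a positive duration coefficient is the 'adaptation' signature
```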
