Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.
Health-related resource-use measurement instruments for intersectoral costs and benefits in the education and criminal justice sectors. PharmacoEconomics [PubMed] Published 8th June 2017
Increasingly, people are embracing a societal perspective for economic evaluation. This often requires the identification of costs (and benefits) in non-health sectors such as education and criminal justice. But it feels as if we aren’t as well-versed in capturing these as we are in the health sector. This study reviews the measures that are available to support a broader perspective. The authors searched the Database of Instruments for Resource Use Measurement (DIRUM) as well as the usual electronic journal databases. The review also sought to identify the validity and reliability of the instruments. From 167 papers assessed in the review, 26 different measures were identified (half of which were in DIRUM). 21 of the instruments were only used in one study. Half of the measures included items relating to the criminal justice sector, while 21 included education-related items. Common specifics for education included time missed at school, tutoring needs, classroom assistance and attendance at a special school. Criminal justice sector items tended to include legal assistance, prison detainment, court appearances, probation and police contacts. Assessments of the psychometric properties were found for only 7 of the 26 measures, with specific details on the non-health items available for just 2: test-retest reliability for the Child and Adolescent Services Assessment (CASA) and validity for the WPAI+CIQ:SHP,V2 (there isn’t room on the Internet for the full name). So there isn’t much evidence of validity for any of these measures in the context of intersectoral (non-health) costs and benefits. It’s no doubt the case that even health-specific resource use measures aren’t subject to adequate testing, but this study has identified that the problem may be even greater when it comes to intersectoral costs and benefits.
Most worrying, perhaps, is the fact that 1 in 5 of the articles identified in the review reported using some unspecified instrument, presumably developed specifically for the study or adapted from an off-the-shelf instrument. The authors propose that a new resource use measure for intersectoral costs and benefits (RUM ICB) be developed from scratch, with reference to existing measures and guidance from experts in education and criminal justice.
Use of large-scale HRQoL datasets to generate individualised predictions and inform patients about the likely benefit of surgery. Quality of Life Research [PubMed] Published 31st May 2017
In the NHS, EQ-5D data are now routinely collected from patients before and after they undergo one of four common procedures. These data can be used to see how much patients’ health improves (or deteriorates) following the operations. However, at the individual level, for a person deciding whether or not to undergo the procedure, aggregate outcomes might not be all that useful. This study relates to the development of a nifty online tool that a prospective patient can use to find out the expected likelihood that they will feel better, the same or worse following the procedure. The data used include EQ-5D-3L responses associated with almost half a million unilateral hip or knee replacements or groin hernia repairs between April 2009 and March 2016. Other variables are also included, and central to this analysis is a Likert-scale item about improvement or worsening of hip/knee/hernia problems compared to before the operation. The purpose of the study is to group people – based on their pre-operation characteristics – according to their expected postoperative utility scores. The authors employed a recursive Classification and Regression Tree (CART) algorithm to split the datasets into strata according to the risk factors. The final set of risk variables comprised age, gender, pre-operative EQ-5D-3L profile and symptom duration. The CART analysis grouped people into between 55 and 60 different groups for each of the procedures, with the groupings explaining 14-27% of the variation in postoperative utility scores. Minimally important (positive and negative) differences in the EQ-5D utility score were estimated with reference to changes in the Likert scale for each of the procedures. These ranged in magnitude from 0.041 to 0.106. The resulting algorithms are what drive the results delivered by the online interface (you can go and have a play with it).
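To make the approach concrete, here is a toy sketch of CART-based risk stratification in the same spirit, using scikit-learn on synthetic data. This is an illustration only: the variable names, parameter values and data-generating process are my assumptions, not the authors’ implementation (which used roughly half a million NHS records).

```python
# Toy sketch only: CART grouping of patients by pre-operative characteristics,
# on SYNTHETIC data. Variable names and coefficients are assumptions for
# illustration, not taken from the study.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 5000
# Hypothetical pre-operative risk factors
age = rng.integers(40, 90, n)
female = rng.integers(0, 2, n)
preop_utility = rng.uniform(-0.2, 1.0, n)      # pre-op EQ-5D-3L utility
symptom_years = rng.integers(0, 10, n)
# Synthetic post-operative utility, loosely related to the predictors
postop_utility = np.clip(
    0.5 + 0.3 * preop_utility - 0.004 * (age - 65) + rng.normal(0, 0.2, n),
    -0.594, 1.0)  # bounded by the EQ-5D-3L UK tariff range

X = np.column_stack([age, female, preop_utility, symptom_years])
# Capping the number of leaves mimics the study's 55-60 patient strata
tree = DecisionTreeRegressor(max_leaf_nodes=60, min_samples_leaf=20).fit(
    X, postop_utility)

# Each leaf is a patient "group"; its mean outcome is the expected
# post-operative utility for new patients with that risk profile
groups = tree.apply(X)
print("number of groups:", len(np.unique(groups)))
print("variance explained (R^2):", round(tree.score(X, postop_utility), 3))
```

Capping `max_leaf_nodes` is what turns a regression tree into a manageable set of strata: a prospective patient is routed down the tree by their own characteristics, and the distribution of outcomes in their leaf is what an interface like the authors’ can report back.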
There are a few limitations to the study, such as the reliance on complete case analysis and the fact that the CART analysis might lack predictive ability. And there’s an interesting problem inherent in all of this: the more people use the tool, the less representative it will become, as it influences selection into treatment. The validity of the tool as a precise risk calculator is quite limited. But that isn’t really the point. The point is that it unlocks some of the potential value of PROMs to provide meaningful guidance in the process of shared decision-making.
Can present biasedness explain early onset of diabetes and subsequent disease progression? Exploring causal inference by linking survey and register data. Social Science & Medicine [PubMed] Published 26th May 2017
The term ‘irrational’ is overused by economists. But one situation in which I am willing to accept it is with respect to excessive present bias. That people don’t pay enough attention to future outcomes seems to be a fundamental limitation of the human brain in the 21st century. When it comes to diabetes and its complications, there are lots of treatments available, but there is only so much that doctors can do. A lot depends on the patient managing their own disease, and it stands to reason that present bias might cause people to manage their diabetes poorly, as the value of not going blind or losing a foot 20 years in the future seems less salient than the joy of eating your own weight in carbs right now. But there’s a question of causality here: does the kind of behaviour associated with time-inconsistent preferences lead to poorer health, or vice versa? This study provides some insight on that front. The authors outline an expected utility model with quasi-hyperbolic discounting and probability weighting, incorporating a present bias coefficient attached to payoffs occurring in the future. Postal questionnaires were collected from 1031 type 2 diabetes patients in Denmark, with an online discrete choice experiment as a follow-up. These data were combined with data from a registry of around 9000 diabetes patients, from which the postal/online participants were identified. BMI, HbA1c, age and year of diabetes onset were all available in the registry, and the postal survey included physical activity, smoking, EQ-5D, diabetes literacy and education. The DCE was designed to elicit time preferences using the offer of (monetary) lottery wins, with 12 different choice sets presented to all participants. Unfortunately, despite the offer of a real-life lottery award for taking part in the research, only 79 of 1031 completed the online DCE survey.
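For readers unfamiliar with quasi-hyperbolic (‘beta-delta’) discounting, the core idea can be sketched in a few lines. The parameter values below are purely illustrative, not estimates from the paper:

```python
# Minimal sketch of quasi-hyperbolic ("beta-delta") discounting, the kind of
# model the authors build on. The beta and delta values here are illustrative
# assumptions, not the paper's estimates.
def discounted_utility(payoffs, beta=0.7, delta=0.95):
    """Present value of a payoff stream [u_0, u_1, ...].

    beta < 1 is the present-bias coefficient applied to ALL future payoffs;
    delta is the standard exponential discount factor per period.
    """
    u0, *future = payoffs
    return u0 + beta * sum(delta ** (t + 1) * u
                           for t, u in enumerate(future))

# A present-biased patient (beta = 0.7) values a distant health payoff less
# than an exponential discounter (beta = 1) with the same delta would:
far_future = [0] * 20 + [100]       # a payoff of 100 arriving in 20 periods
biased = discounted_utility(far_future, beta=0.7)
unbiased = discounted_utility(far_future, beta=1.0)
print(biased, unbiased)
```

With beta below 1, every future payoff is down-weighted by a constant extra factor on top of exponential discounting, which is exactly what makes the distant consequences of poor self-management look disproportionately unimportant today, while the trade-off between any two *future* periods is unaffected.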
Regression analyses showed that individuals with diabetes since 1999 or earlier, or who were 48 or younger at the time of onset, exhibited present bias. And the present bias seems to be causal. Being inactive, obese, diabetes illiterate and having lower quality of life or poorer glycaemic control were associated with being present biased. These relationships hold when subject to a number of control measures. So it looks as if present bias explains at least part of the variation in self-management and health outcomes for people with diabetes. Clearly, the selection of the small sample is a bit of a concern. It may have meant that people with particular risk preferences (given that the reward was a lottery) were excluded, and so the sample might not be representative. Nevertheless, it seems that at least some people with diabetes could benefit from interventions that increase the salience of future health-related payoffs associated with self-management.
PS ‘catastrophic decline’ following joint replacement is a very real phenomenon that worries the bejesus out of physicians – if the HSRC had not closed in 2009 I’d have been continuing research on this with the top rheumatologist of his day (who literally wrote the textbook). There’s NOTHING in the sociodemographic/clinical pre-op data that predicts whether a patient will rapidly go downhill after total joint replacement (TJR) and die… but the rheumatologist “knew” ICECAP-O was at least part of the answer, in terms of helping identify psychosocial variables that should alert clinicians that a given patient might be better off without TJR. It’s partly why I like the ‘security’ (worries about the future) attribute… it does a lot to mop up this stuff on its own if you don’t have attitudinal data.
I’ve used CART/MARS – nifty software – and was impressed that (as usual with variable selection), if used with care and intimate knowledge of your data, it is very versatile and ‘passes the sniff test’.
It validated my (manual – very, very time-consuming) analysis concerning which sociodemographic, but particularly attitudinal, variables, when interacted with ICECAP-O outcomes, help explain why the main effects of certain variables (health being the primary one!) on quality of life are surprisingly small (7% IIRC for health for ICECAP-O in our AHEHP paper)… and sociodemographics don’t improve this by much.
If you feel ‘disempowered’ in terms of social capital variables then losing your health imposes a truly stonkingly large effect on your quality of life (thirty-something percent). Traditional validation studies (which show that the instrument exhibits the right correlations, and which I’ve co-authored) are therefore, IMO, extremely lacking.
It helped me along the pathway to the realisation that none of these general health or quality of life instruments can possibly predict the effect on your quality of life unless attitudinal variables on topics not normally asked about in major health surveys – *correctly quantified* via (for instance) choice models, so no Likert-scale biases – are also included.
My MARS model for Australia was powerful enough to identify what I’d so often heard anecdotally, and experienced myself, about all that is bad about Sydney compared to other parts of Australia.
Of course another finding was that (as in one of the chapters in the BWS book), the use of the population tariff in these kinds of models can automatically give different (incorrect) inferences for an individual patient. I redid the analysis using tariffs that were correct for the specific age/sex/relationship-status group (more like a PROM), and unsurprisingly got different results (not published – in my UTS lecture, which is still floating round the web). The crucial weakness here is using values that have not been elicited correctly and are not relevant to the individual respondent.
If these two issues were addressed then I think your (valid) criticism regarding increasing sample selection bias would be solved… but people aren’t ready for such major changes to surveys yet. However, hats off to them for the steps forward they did make.