Chris Sampson’s journal round-up for 5th March 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Healthy working days: the (positive) effect of work effort on occupational health from a human capital approach. Social Science & Medicine Published 28th February 2018

If you look at the literature on the determinants of subjective well-being (or happiness), you’ll see that unemployment is often cited as having a big negative impact. The same sometimes applies to its impact on health, but here – of course – the causality is difficult to tease apart. Then, in research that digs deeper, looking at hours worked and different types of jobs, we see less conclusive results. In this paper, the authors start by asserting that the standard approach in labour economics (on which I’m not qualified to comment) is to assume that there is a negative association between work effort and health. This study extends the framework by allowing for positive effects of work that are related to individuals’ characteristics and working conditions, and where health is determined in a Grossman-style model of health capital that accounts for work effort in the rate of health depreciation. This model is used to examine health as a function of work effort (as indicated by hours worked) in a single wave of the European Working Conditions Survey (EWCS) from 2010 for 15 EU member states. Key items from the EWCS included in this study are questions such as “does your work affect your health or not?”, “how is your health in general?”, and “how many hours do you usually work per week?”. Working conditions are taken into account by looking at data on shift working and the need to wear protective equipment. One of the main findings of the study is that – with good working conditions – greater work effort can improve health. The Marxist in me is not very satisfied with this. We need to ask the question: compared to what? Working fewer hours? For most people, that simply isn’t an option. Aren’t the people who work fewer hours the people who can afford to work fewer hours? No attention is given to the sociological aspects of employment, which are clearly important. The study also shows that overworking or having poorer working conditions reduces health.
We also see that, for many groups, longer hours do not negatively impact on health until we reach around 120 hours a week. This fails a good sense check. Who are these people?! I’d be very interested to see if these findings hold for academics. That the key variables are self-reported undermines the conclusions somewhat, as we can expect people to adjust their expectations about work effort and health in accordance with their colleagues. It would be very difficult to avoid a type 2 error (with respect to the negative impact of effort on health) using these variables to represent health and the role of work effort.
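A stylised way to write the extension described above, in my own notation rather than the paper’s: health capital depreciates as in Grossman, but the depreciation rate is allowed to depend on work effort and working conditions, so that effort can be health-improving when conditions are good.

```latex
% Health capital H_t evolves with investment I_t, as in Grossman,
% but depreciation now depends on effort E_t and conditions W_t:
H_{t+1} = H_t \left[ 1 - \delta(E_t, W_t) \right] + I_t
% Under good conditions, extra effort can slow depreciation
% (\partial\delta/\partial E_t < 0); poor conditions reverse the sign.
```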

Agreement between retrospectively and contemporaneously collected patient-reported outcome measures (PROMs) in hip and knee replacement patients. Quality of Life Research [PubMed] Published 26th February 2018

The use of patient-reported outcomes (PROMs) in elective care in the NHS has been a boon for researchers in our field, providing before-and-after measurement of health-related quality of life so that we can look at the impact of these interventions. But we can’t do this in emergency care because the ‘before’ is never observed – people only show up when they’re in the middle of the emergency. But what if people could accurately recall their pre-emergency health state? There’s some evidence to suggest that people can, so long as the recall period is short. This study looks at NHS PROMs data (n=443), with generic and condition-specific outcomes collected from patients having hip or knee replacements. Patients included in the study were additionally asked to recall their health state 4 weeks prior to surgery. The authors assess the extent to which the contemporary PROM measurements agree with the retrospective measurements, and the extent to which any disagreement relates to age, socioeconomic status, or the length of time to recall. There wasn’t much difference between contemporary and retrospective measurements, though patients reported slightly lower health on the retrospective questionnaires. And there weren’t any compelling differences associated with age or socioeconomic status or the length of recall. These findings are promising, suggesting that we might be able to rely on retrospective PROMs. But the elective surgery context is very different to the emergency context, and I don’t think we can expect the two types of health care to impact recollection in the same way. In this study, responses may also have been influenced by participants’ memories of completing the contemporary questionnaire, and the recall period was very short. But the only way to find out more about the validity of retrospective PROM collection is to do more of it, so hopefully we’ll see more studies asking this question.
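One standard way to quantify this kind of agreement is a Bland-Altman analysis: the mean difference (bias) between the two measurements plus 95% limits of agreement. The sketch below uses made-up utility scores purely for illustration; the study’s own methods and data may differ.

```python
import numpy as np

# Hypothetical EQ-5D utility scores for six patients, measured
# contemporaneously and then recalled retrospectively
contemporary = np.array([0.62, 0.55, 0.70, 0.48, 0.66, 0.59])
retrospective = np.array([0.58, 0.50, 0.69, 0.45, 0.60, 0.57])

diff = retrospective - contemporary
bias = diff.mean()                      # mean difference between methods
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
print(f"mean bias: {bias:.3f}, limits of agreement: ({loa[0]:.3f}, {loa[1]:.3f})")
```

A negative bias, as in these invented numbers, would match the paper’s finding that patients report slightly lower health retrospectively.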

Adaptation or recovery after health shocks? Evidence using subjective and objective health measures. Health Economics [PubMed] Published 26th February 2018

People’s expectations about their health can influence their behaviour and determine their future health, so it’s important that we understand people’s expectations and any ways in which they diverge from reality. This paper considers the effect of a health shock on people’s expectations about how long they will live. The authors focus on survival probability, measured objectively (i.e. what actually happens to these patients) and subjectively (i.e. what the patients expect), and the extent to which the latter corresponds to the former. The arguments presented are couched within the concept of hedonic adaptation. So the question is – if post-shock expectations return to pre-shock expectations after a period of time – whether this is because people are recovering from the disease or because they are moving their reference point. Data are drawn from the Health and Retirement Study. Subjective survival probability is scaled to whether individuals expect to survive for 2 years. Cancer, stroke, and myocardial infarction are the health shocks used. The analysis uses some lagged regression models, separate for each of the three diagnoses, with objective and subjective survival probability as the dependent variable. There’s a bit of a jumble of things going on in this paper, with discussions of adaptation, survival, self-assessed health, optimism, and health behaviours. So it’s a bit difficult to see the wood for the trees. But the authors find the effect they’re looking for. Objective survival probability is negatively affected by a health shock, as is subjective survival probability. But then subjective survival starts to return to pre-shock trends whereas objective survival does not. The authors use this finding to suggest that there is adaptation. I’m not sure about this interpretation. To me it seems as if subjective life expectancy is only weakly responsive to changes in objective life expectancy. 
The findings seem to have more to do with how people process information about their probability of survival than with how they adapt to a situation. So while this is an interesting study about how people process changes in survival probability, I’m not sure what it has to do with adaptation.
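The lagged regression setup described above can be sketched as follows. Everything here is simulated for illustration: the variable names, coefficient values, and data are mine, not the paper’s.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
shock = rng.binomial(1, 0.2, n)      # 1 if a health shock occurred this wave
lag_subj = rng.uniform(0.5, 1.0, n)  # last wave's subjective P(survive 2 years)

# Simulate: subjective survival tracks its own lag but drops after a shock
subj = 0.10 + 0.80 * lag_subj - 0.05 * shock + rng.normal(0, 0.02, n)

# OLS with a lagged dependent variable and a shock indicator
X = np.column_stack([np.ones(n), lag_subj, shock])
beta, *_ = np.linalg.lstsq(X, subj, rcond=None)
print(beta)  # ≈ [0.10, 0.80, -0.05]
```

A small coefficient on the shock relative to the lag would be one way of seeing the point made above: subjective expectations respond only weakly to new information and mostly track where they were before.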

3L, 5L, what the L? A NICE conundrum. PharmacoEconomics [PubMed] Published 26th February 2018

In my last round-up, I said I was going to write a follow-up blog post to an editorial on the EQ-5D-5L. I didn’t get round to it, but that’s probably best as there has since been a flurry of other editorials and commentaries on the subject. Here’s one of them. This commentary considers the perspective of NICE in deciding whether to support the use of the EQ-5D-5L and its English value set. The authors point out the differences between the 3L and 5L, namely the descriptive systems and the value sets. Examples of the 5L descriptive system’s advantages are provided: a reduced ceiling effect, reduced clustering, better discriminative ability, and the benefits of doing away with the ‘confined to bed’ level of the mobility domain. Great! On to the value set. There are lots of differences here, with 3 main causes: the data, the preference elicitation methods, and the modelling methods. We can’t immediately determine whether these differences are improvements or not. The authors stress the point that any differences observed will be in large part due to quirks in the original 3L value set rather than in the 5L value set. Nevertheless, the commentary is broadly supportive of a cautionary approach to 5L adoption. I’m not. Time for that follow-up blog post.

Credits

Chris Sampson’s journal round-up for 25th September 2017

Good practices for real‐world data studies of treatment and/or comparative effectiveness: recommendations from the Joint ISPOR‐ISPE Special Task Force on Real‐World Evidence in Health Care Decision Making. Value in Health Published 15th September 2017

I have an instinctive mistrust of buzzwords. They’re often used to avoid properly defining something, either because it’s too complicated or – worse – because it isn’t worth defining in the first place. For me, ‘real-world evidence’ falls foul of this. If your evidence isn’t from the real world, then it isn’t evidence at all. But I do like a good old ISPOR Task Force report, so let’s see where this takes us. Real-world evidence (RWE) and its sibling buzzword real-world data (RWD) relate to observational studies and other data not collected in an experimental setting. The purpose of this ISPOR task force (joint with the International Society for Pharmacoepidemiology) was to prepare some guidelines about the conduct of RWE/RWD studies, with a view to improving decision-makers’ confidence in them. Essentially, the hope is to try and create for RWE the kind of ecosystem that exists around RCTs, with procedures for study registration, protocols, and publication: a noble aim. The authors distinguish between 2 types of RWD study: ‘Exploratory Treatment Effectiveness Studies’ and ‘Hypothesis Evaluating Treatment Effectiveness Studies’. The idea is that the latter test a priori hypotheses, and these are the focus of this report. Seven recommendations are presented: i) pre-specify the hypotheses, ii) publish a study protocol, iii) publish the study with reference to the protocol, iv) enable replication, v) test hypotheses on a separate dataset from the one used to generate the hypotheses, vi) publicly address methodological criticisms, and vii) involve key stakeholders. Fair enough. But these are just good practices for research generally. It isn’t clear how they are in any way specific to RWE. Of course, that was always going to be the case. RWE-specific recommendations would be entirely contingent on whether or not one chose to define a study as using ‘real-world evidence’ (which you shouldn’t, because it’s meaningless).
The authors are trying to fit a bag of square pegs into a hole of undefined shape. It isn’t clear to me why retrospective observational studies, prospective observational studies, registry studies, or analyses of routinely collected clinical data should all be treated the same, yet differently to randomised trials. Maybe someone can explain why I’m mistaken, but this report didn’t do it.

Are children rational decision makers when they are asked to value their own health? A contingent valuation study conducted with children and their parents. Health Economics [PubMed] [RePEc] Published 13th September 2017

Obtaining health state utility values for children presents all sorts of interesting practical and theoretical problems, especially if we want to use them in decisions about trade-offs with adults. For this study, the researchers conducted a contingent valuation exercise to elicit the preferences of children (aged 7-19) for reduced risk of asthma attacks in terms of willingness to pay. The study was informed by two preceding studies that sought to identify the best way in which to present health risk and financial information to children. The participating children (n=370) completed questionnaires at school, which asked about socio-demographics, experience of asthma, risk behaviours and altruism. They were reminded (in child-friendly language) about the idea of opportunity cost, and to consider their own budget constraint. Baseline asthma attack risk and 3 risk-reduction scenarios were presented graphically. Two weeks later, the parents completed similar questionnaires. Only 9% of children were unwilling to pay for risk reduction, and most of those said that it was the mayor’s problem! In some senses, the children did a better job than their parents. The authors conducted 3 tests for ‘incorrect’ responses – 14% of adults failed at least one, while only 4% of children did so. Older children demonstrated better scope sensitivity. Of course, children’s willingness to pay was much lower in absolute terms than their parents’, because children have a much smaller budget. As a percentage of the budget, parents were – on average – willing to pay more than children. That seems reassuringly predictable. Boys and fathers were willing to pay more than girls and mothers. Having experience of frequent asthma attacks increased willingness to pay. Interestingly, teenagers were willing to pay less (as a proportion of their budget) than younger children… and so were the teenagers’ parents!
Children’s willingness to pay was correlated with that of their own parent at the higher risk reductions but not the lowest. This study reports lots of interesting findings and opens up plenty of avenues for future research. But the take-home message is obvious. Kids are smart. We should spend more time asking them what they think.

Journal of Patient-Reported Outcomes: aims and scope. Journal of Patient-Reported Outcomes Published 12th September 2017

Here we have a new journal that warrants a mention. The journal is sponsored by the International Society for Quality of Life Research (ISOQOL), making it a sister journal of Quality of Life Research. One of its Co-Editors-in-Chief is the venerable David Feeny, of HUI fame. They’ll be looking to publish research using PRO(M) data from trials or routine settings, studies of the determinants of PROs, qualitative studies in the development of PROs; anything PRO-related, really. This could be a good journal for more thorough reporting of PRO data that can get squeezed out of a study’s primary outcome paper. Also, “JPRO” is fun to say. The editors don’t mention that the journal is open access, but the website states that it is, so APCs at the ready. ISOQOL members get a discount.

Research and development spending to bring a single cancer drug to market and revenues after approval. JAMA Internal Medicine [PubMed] Published 11th September 2017

We often hear that new drugs are expensive because they’re really expensive to develop. Then we hear about how much money pharmaceutical companies spend on marketing, and we baulk. The problem is, pharmaceutical companies aren’t forthcoming with their accounts, so researchers have to come up with more creative ways to estimate R&D spending. Previous studies have reported divergent estimates. Whether R&D costs ‘justify’ high prices remains an open question. For this study, the authors looked at public data from the US for 10 companies that had only one cancer drug approved by the FDA between 2007 and 2016. Not very representative, perhaps, but useful because it allows for the isolation of the development costs associated with a single drug reaching the market. The median time for drug development was 7.3 years. The most generous estimate of the mean cost of development came in at under a billion dollars; substantially less than some previous estimates. This looks like a bargain; the mean revenue for the 10 companies up to December 2016 was over $6.5 billion. This study may seem a bit back-of-the-envelope in nature. But that doesn’t mean it isn’t accurate. If anything, it warrants more confidence than some previous studies because the methods are entirely transparent.

Chris Sampson’s journal round-up for 19th June 2017

Health-related resource-use measurement instruments for intersectoral costs and benefits in the education and criminal justice sectors. PharmacoEconomics [PubMed] Published 8th June 2017

Increasingly, people are embracing a societal perspective for economic evaluation. This often requires the identification of costs (and benefits) in non-health sectors such as education and criminal justice. But it feels as if we aren’t as well-versed in capturing these as we are in the health sector. This study reviews the measures that are available to support a broader perspective. The authors search the Database of Instruments for Resource Use Measurement (DIRUM) as well as the usual electronic journal databases. The review also sought to identify the validity and reliability of the instruments. From 167 papers assessed in the review, 26 different measures were identified (half of which were in DIRUM). 21 of the instruments were only used in one study. Half of the measures included items relating to the criminal justice sector, while 21 included education-related items. Common specifics for education included time missed at school, tutoring needs, classroom assistance and attendance at a special school. Criminal justice sector items tended to include legal assistance, prison detainment, court appearances, probation and police contacts. Assessments of the psychometric properties were found for only 7 of the 26 measures, with specific details on the non-health items available for just 2: test-retest reliability for the Child and Adolescent Services Assessment (CASA) and validity for the WPAI+CIQ:SHP,V2 (there isn’t room on the Internet for the full name). So there isn’t much evidence of any validity for any of these measures in the context of intersectoral (non-health) costs and benefits. It’s no doubt the case that health-specific resource use measures aren’t subject to adequate testing, but this study has identified that the problem may be even greater when it comes to intersectoral costs and benefits.
Most worrying, perhaps, is the fact that 1 in 5 of the articles identified in the review reported using some unspecified instrument, presumably developed specifically for the study or adapted from an off-the-shelf instrument. The authors propose that a new resource use measure for intersectoral costs and benefits (RUM ICB) be developed from scratch, with reference to existing measures and guidance from experts in education and criminal justice.

Use of large-scale HRQoL datasets to generate individualised predictions and inform patients about the likely benefit of surgery. Quality of Life Research [PubMed] Published 31st May 2017

In the NHS, EQ-5D data are now routinely collected from patients before and after undergoing one of four common procedures. These data can be used to see how much patients’ health improves (or deteriorates) following the operations. However, at the individual level, for a person deciding whether or not to undergo the procedure, aggregate outcomes might not be all that useful. This study relates to the development of a nifty online tool that a prospective patient can use to find out the expected likelihood that they will feel better, the same or worse following the procedure. The data used include EQ-5D-3L responses associated with almost half a million unilateral hip or knee replacements or groin hernia repairs between April 2009 and March 2016. Other variables are also included, and central to this analysis is a Likert scale about improvement or worsening of hip/knee/hernia problems compared to before the operation. The purpose of the study is to group people – based on their pre-operation characteristics – according to their expected postoperative utility scores. The authors employed a recursive Classification and Regression Tree (CART) algorithm to split the datasets into strata according to the risk factors. The final set of risk variables were age, gender, pre-operative EQ-5D-3L profile and symptom duration. The CART analysis grouped people into between 55 and 60 different groups for each of the procedures, with the groupings explaining 14-27% of the variation in postoperative utility scores. Minimally important (positive and negative) differences in the EQ-5D utility score were estimated with reference to changes in the Likert scale for each of the procedures. These ranged in magnitude from 0.041 to 0.106. The resulting algorithms are what drive the results delivered by the online interface (you can go and have a play with it). 
There are a few limitations to the study, such as the reliance on complete case analysis and the fact that the CART analysis might lack predictive ability. And there’s an interesting problem inherent in all of this, that the more people use the tool, the less representative it will become as it influences selection into treatment. The validity of the tool as a precise risk calculator is quite limited. But that isn’t really the point. The point is that it unlocks some of the potential value of PROMs to provide meaningful guidance in the process of shared decision-making.
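The heart of a CART analysis like the one above is a recursive search for the cut-point that most reduces within-group variance in the outcome. A minimal sketch of that single splitting step, with invented numbers rather than the study’s data:

```python
import numpy as np

def best_split(x, y):
    """Find the threshold on x that minimises the summed squared error
    of y within the two resulting groups: the step CART repeats
    recursively to build its strata."""
    best = (None, np.inf)
    for t in np.unique(x)[:-1]:
        left, right = y[x <= t], y[x > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[1]:
            best = (t, sse)
    return best[0]

# Toy data: postoperative utility falling with age (hypothetical numbers)
age = np.array([45, 50, 55, 60, 65, 70, 75, 80])
post_utility = np.array([0.85, 0.84, 0.82, 0.80, 0.65, 0.63, 0.60, 0.58])
print(best_split(age, post_utility))  # → 60 (split between ages 60 and 65)
```

The real analysis of course searched over many variables (age, gender, pre-operative profile, symptom duration) and kept splitting each stratum in turn, but every split is chosen this way.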

Can present biasedness explain early onset of diabetes and subsequent disease progression? Exploring causal inference by linking survey and register data. Social Science & Medicine [PubMed] Published 26th May 2017

The term ‘irrational’ is overused by economists. But one situation in which I am willing to accept it is with respect to excessive present bias. That people don’t pay enough attention to future outcomes seems to be a fundamental limitation of the human brain in the 21st century. When it comes to diabetes and its complications, there are lots of treatments available, but there is only so much that doctors can do. A lot depends on the patient managing their own disease, and it stands to reason that present bias might cause people to manage their diabetes poorly, as the value of not going blind or losing a foot 20 years in the future seems less salient than the joy of eating your own weight in carbs right now. But there’s a question of causality here; does the kind of behaviour associated with time-inconsistent preferences lead to poorer health or vice versa? This study provides some insight on that front. The authors outline an expected utility model with quasi-hyperbolic discounting and probability weighting, and incorporate a present bias coefficient attached to payoffs occurring in the future. Postal questionnaires were collected from 1031 type 2 diabetes patients in Denmark with an online discrete choice experiment as a follow-up. These data were combined with data from a registry of around 9000 diabetes patients, from which the postal/online participants were identified. BMI, HbA1c, age and year of diabetes onset were all available in the registry and the postal survey included physical activity, smoking, EQ-5D, diabetes literacy and education. The DCE was designed to elicit time preferences using the offer of (monetary) lottery wins, with 12 different choice sets presented to all participants. Unfortunately, despite the offer of a real-life lottery award for taking part in the research, only 79 of 1031 completed the online DCE survey. 
Regression analyses showed that individuals with diabetes since 1999 or earlier, or who were 48 or younger at the time of onset, exhibited present bias. And the present bias seems to be causal. Being inactive, obese, diabetes illiterate and having lower quality of life or poorer glycaemic control were associated with being present biased. These relationships hold when subject to a number of control measures. So it looks as if present bias explains at least part of the variation in self-management and health outcomes for people with diabetes. Clearly, the selection of the small sample is a bit of a concern. It may have meant that people with particular risk preferences (given that the reward was a lottery) were excluded, and so the sample might not be representative. Nevertheless, it seems that at least some people with diabetes could benefit from interventions that increase the salience of future health-related payoffs associated with self-management.
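The quasi-hyperbolic (beta-delta) discounting that underpins the model can be sketched in a few lines. The payoff numbers and parameter values below are mine, purely for illustration of how a present bias coefficient below one can flip a choice.

```python
def discounted_utility(payoffs, beta=0.7, delta=0.95):
    """Quasi-hyperbolic (beta-delta) discounting: the present payoff is
    taken at face value, while every future payoff gets a one-off extra
    down-weighting by beta, the present bias coefficient."""
    now, *future = payoffs
    return now + beta * sum(delta ** (t + 1) * u for t, u in enumerate(future))

# A present-biased agent (beta < 1) prefers the binge now over the
# later health payoffs, even though the undiscounted sum says otherwise.
binge = discounted_utility([10, 0, 0])    # eat your own weight in carbs now
abstain = discounted_utility([0, 6, 6])   # health benefits in future periods
print(binge > abstain)  # True for beta=0.7
```

With beta set to 1 the same comparison reverses, which is exactly the sense in which present bias, rather than ordinary discounting, explains poor self-management.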
