Rita Faria’s journal round-up for 4th March 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Cheap and dirty: the effect of contracting out cleaning on efficiency and effectiveness. Public Administration Review Published 25th February 2019

Before I was a health economist, I used to be a pharmacist and worked for a well-known high street chain for some years. My impression was that the stores with in-house cleaners were cleaner, but I didn’t know whether this was a true difference, my leftie bias, or my small sample size of 2! This new study by Shimaa Elkomy, Graham Cookson and Simon Jones confirms my suspicions, albeit in the context of NHS hospitals, so I couldn’t resist selecting it for my round-up.

They looked at how contracted-out services fare in terms of perceived cleanliness, costs and MRSA rate in NHS hospitals. MRSA is a type of hospital-associated infection whose rate is affected by how clean a hospital is.

They found that contracted-out services are cheaper than in-house cleaning, but that perceived cleanliness is worse. Importantly, contracted-out services increase the MRSA rate. In other words, contracting-out cleaning services could harm patients’ health.

This is a fascinating paper that is well worth a read. One wonders whether the savings from contracting out are enough to offset the cost of managing MRSA. Going a step further, are in-house services cost-effective given the impact on patients’ health and the costs of managing infections?

What’s been the bang for the buck? Cost-effectiveness of health care spending across selected conditions in the US. Health Affairs [PubMed] Published 1st January 2019

Staying on the topic of value for money, this study by David Wamble and colleagues looks at the extent to which increased US health care spending has translated into better health outcomes over time.

It’s clearly reassuring that, for 6 out of the 7 conditions they looked at, health outcomes improved between 1996 and 2015. After all, that’s the goal of investing in medical R&D, although it remains unclear how much of the improvement can be attributed to health care rather than to other things that happened over the same period and could have improved health outcomes.

I wasn’t sure about the inflation adjustment for the costs, so I’d be grateful for your thoughts via comments or Twitter. In my view, we would underestimate the costs if we used medical price inflation indices, because these indices reflect the specific increase in prices in health care, such as new drugs being priced high at launch. So I understand why the main results use the US Consumer Price Index, which reflects the average increase in prices over time rather than the increase specific to health care.

However, patients may not have seen their income rise with inflation. This means that the cost of health care may represent a disproportionately greater share of people’s income, and that the inflation adjustment may downplay the impact of health care costs on people’s pockets.
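For readers who want the arithmetic spelled out, the deflation step is just a ratio of index values (a minimal sketch of the standard adjustment; the index series you plug in, whether the CPI or a medical care price index, is your choice, and the numbers are not taken from the paper):

\[
\text{cost}^{2015\$}_{t} = \text{cost}^{\text{nominal}}_{t} \times \frac{I_{2015}}{I_{t}}
\]

The faster the chosen index grows, the more the 1996 costs are scaled up, and the smaller the measured real growth in spending appears, which is exactly why the choice between a general and a medical price index matters.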

This study caught my eye and it is quite thought-provoking. It’s a good addition to the literature on the cost-effectiveness of US health care. But I’d wager that the question remains: to what extent is today’s medical care better value for money than in the past?

The dos and don’ts of influencing policy: a systematic review of advice to academics. Palgrave Communications Published 19th February 2019

We would all like to see our research findings influence policy, but how do we do this in practice? Well, look no further, as Kathryn Oliver and Paul Cairney reviewed the literature, summarised it in 8 key tips and thought through their implications.

To sum up, it’s not easy to influence policy; advice about how to influence policy is rarely based on empirical evidence, and there are a few risks to trying to become a mover-and-shaker in policy circles.

They discuss three dilemmas in policy engagement. Should academics try to influence policy? How should academics influence policy? What is the purpose of academics’ engagement in policy making?

I particularly enjoyed reading about the approaches to influencing policy. Tools such as evidence synthesis and social media should make evidence more accessible, but their effectiveness is unclear. Another approach is to craft stories to create a compelling case for policy change, which seems to me to be very close to marketing. The third approach is co-production, which they note can give rise to accusations of bias and poses some practical challenges in terms of intellectual property and maintaining one’s independence.

I found this paper quite refreshing. It not only boiled down the advice circulating online about how to influence policy into its key messages but also thought through the practical challenges in its application. The impact agenda seems to be here to stay, at least in the UK. This paper is an excellent source of advice on the risks and benefits of trying to navigate the policy world.

Credits

Chris Sampson’s journal round-up for 14th May 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A practical guide to conducting a systematic review and meta-analysis of health state utility values. PharmacoEconomics [PubMed] Published 10th May 2018

I love articles that outline the practical application of a particular method to solve a particular problem, especially when the article shares analysis code that can be copied and adapted. This paper does just that for the case of synthesising health state utility values. Decision modellers use utility values as parameters. Most of the time these are drawn from a single source which almost certainly introduces some kind of bias to the resulting cost-effectiveness estimates. So it’s better to combine all of the relevant available information. But that’s easier said than done, as numerous researchers (myself included) have discovered. This paper outlines the various approaches and some of the merits and limitations of each. There are some standard stages, for which advice is provided, relating to the identification, selection, and extraction of data. Those are by no means simple tasks, but the really tricky bit comes when you try and pool the utility values that you’ve found. The authors outline three strategies: i) fixed effect meta-analysis, ii) random effects meta-analysis, and iii) mixed effects meta-regression. Each is illustrated with a hypothetical example, with Stata and R commands provided. Broadly speaking, the authors favour mixed effects meta-regression because of its ability to identify the extent of similarity between sources and to help explain heterogeneity. The authors insist that comparability between sources is a precondition for pooling. But the thing about health state utility values is that they are – almost by definition – never comparable. Different population? Not comparable. Different treatment pathway? No chance. Different utility measure? Ha! They may or may not appear to be similar statistically, but that’s totally irrelevant. What matters is whether the decision-maker ‘believes’ the values. If they believe them then they should be included and pooled. If decision-makers have reason to believe one source more or less than another then this should be accounted for in the weighting. If they don’t believe them at all then they should be excluded. Comparability is framed as a statistical question, when in reality it is a conceptual one. For now, researchers will have to tackle that themselves. This paper doesn’t solve all of the problems around meta-analysis of health state utility values, but it does a good job of outlining methodological developments to date and provides recommendations in accordance with them.
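The paper itself supplies Stata and R commands. Purely to illustrate the pooling step, and not as the authors’ code, here is a minimal Python sketch of fixed-effect and DerSimonian–Laird random-effects pooling using made-up utility values and standard errors.

```python
import numpy as np

# Hypothetical utility values and standard errors from k = 4 sources
# (illustrative numbers only, not taken from the paper).
u = np.array([0.71, 0.68, 0.75, 0.64])
se = np.array([0.03, 0.05, 0.04, 0.06])
v = se ** 2                      # within-study variances

# Fixed-effect pooling: inverse-variance weights
w = 1 / v
u_fe = np.sum(w * u) / np.sum(w)
se_fe = np.sqrt(1 / np.sum(w))

# DerSimonian-Laird estimate of between-study variance (tau^2)
k = len(u)
Q = np.sum(w * (u - u_fe) ** 2)
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / C)

# Random-effects pooling: weights also reflect between-study variance
w_re = 1 / (v + tau2)
u_re = np.sum(w_re * u) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))

print(f"Fixed effect:   {u_fe:.3f} (SE {se_fe:.3f})")
print(f"Random effects: {u_re:.3f} (SE {se_re:.3f}), tau^2 = {tau2:.4f}")
```

The random-effects estimate inflates the standard error whenever tau² is non-zero, and it is precisely that between-source heterogeneity that a mixed effects meta-regression then tries to explain with study-level covariates.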

Unemployment, unemployment duration, and health: selection or causation? The European Journal of Health Economics [PubMed] Published 3rd May 2018

One of the major socioeconomic correlates of poor health is unemployment. It appears not to be very good for you. But there’s an obvious challenge here – does unemployment cause ill-health, or are unhealthy people just more likely to be unemployed? Both, probably, but that answer doesn’t make for clear policy solutions. This paper – following a large body of literature – attempts to explain what’s going on. Its novelty comes in the way the author considers timing and distinguishes between mental and physical health. The basis for the analysis is that selection into unemployment by the unhealthy ought to imply time-constant effects of unemployment on health. On the other hand, the negative effect of unemployment on health ought to grow over time. Using seven waves of data from the German Socio-economic Panel, a sample of 17,000 people (chopped from 48,000) is analysed, of which around 3,000 experienced unemployment. The basis for measuring mental and physical health is summary scores from the SF-12. A fixed-effects model is constructed based on the dependence of health on the duration and timing of unemployment, rather than just the occurrence of unemployment per se. The author finds a cumulative effect of unemployment on physical ill-health over time, implying causation. This is particularly pronounced for people unemployed in later life, and there was essentially no impact on physical health for younger people. The longer people spent unemployed, the more their health deteriorated. This was accompanied by a strong long-term selection effect of less physically healthy people being more likely to become unemployed. In contrast, for mental health, the findings suggest a short-term selection effect of people who experience a decline in mental health being more likely to become unemployed. But then, following unemployment, mental health declines further, so the balance of selection and causation effects is less clear. In contrast to physical health, people’s mental health is more badly affected by unemployment at younger ages. By no means does this study prove the balance between selection and causality. It can’t account for people’s anticipation of unemployment or future ill-health. But it does provide inspiration for better-targeted policies to limit the impact of unemployment on health.
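The identification logic can be sketched with a generic fixed-effects specification (my shorthand, not the author’s exact model), in which the health of person i at wave t depends on indicators for how long they have been unemployed, with an individual fixed effect absorbing time-constant selection:

\[
H_{it} = \alpha_i + \sum_{d > 0} \beta_d \,\mathbb{1}\{\text{unemployment duration}_{it} = d\} + \gamma' X_{it} + \lambda_t + \varepsilon_{it}
\]

If unemployment causes ill-health, the \(\beta_d\) should grow in magnitude with duration; if less healthy people are simply selected into unemployment, the estimated effect should be roughly constant over time.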

Different domains – different time preferences? Social Science & Medicine [PubMed] Published 30th April 2018

Economists are often criticised by non-economists. Usually, the criticisms are unfounded, but one of the ways in which I think some (micro)economists can have tunnel vision is in thinking that preferences elicited with respect to money exhibit the same characteristics as preferences about things other than money. My instinct tells me that – for most people – that isn’t true. This study looks at one of those characteristics of preferences – namely, time preferences. Unfortunately for me, it suggests that my instincts aren’t correct. The authors outline a quasi-hyperbolic discounting model, incorporating both short-term present bias and long-term impatience, to explain gym members’ time preferences in the health and monetary domains. A survey was conducted with members of a chain of fitness centres in Denmark, of which 1,687 responded. Half were allocated to money-related questions and half to health-related questions. Respondents were asked to match an amount of future gains with an amount of immediate gains to provide a point of indifference. Health problems were formulated as back pain, with an EQ-5D-3L level 2 for usual activities and a level 2 for pain or discomfort. The findings were that estimates for discount rates and present bias in the two domains are different, but not by very much. On average, discount rates are slightly higher in the health domain – a finding driven by female respondents and people with more education. Present bias is the same – on average – in each domain, though retired people are more present biased for health. The authors conclude by focussing on the similarity between health and monetary time preferences, suggesting that time preferences in the monetary domain can safely be applied in the health domain. But I’d still be wary of this. For starters, one would expect a group of gym members – who have all decided to join the gym – to be relatively homogenous in their time preferences. Findings are similar on average, and there are only small differences in subgroups, but when it comes to health care (even public health) we’re never dealing with average people. Targeted interventions are increasingly needed, which means that differential discount rates in the health domain – of the kind identified in this study – should be brought into focus.
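For reference, the standard quasi-hyperbolic (beta–delta) formulation weights a gain received at delay t by

\[
D(t) = \begin{cases} 1, & t = 0 \\ \beta\,\delta^{t}, & t > 0 \end{cases}
\]

with \(\beta < 1\) capturing present bias (the extra penalty on anything that is not immediate) and \(\delta\) the long-run discount factor. The paper’s exact parameterisation may differ, but these are the two quantities being compared across the health and money arms.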

Credits


Chris Sampson’s journal round-up for 2nd April 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Quality-adjusted life-years without constant proportionality. Value in Health Published 27th March 2018

The assumption of constant proportional trade-offs (CPTO) is at the heart of everything we do with QALYs. It assumes that duration has no impact on the value of a given health state, and so the value of a health state is constant regardless of its duration. This assumption has been repeatedly demonstrated to fail. This study looks for a non-constant alternative, which hasn’t been done before. The authors consider a quality-adjusted lifespan and four functional forms for the relationship between time and the value of life: constant, discount, logarithmic, and power. These relationships were tested in an online survey with more than 5,000 people, which involved the completion of 30-40 time trade-off pairs based on the EQ-5D-5L. Respondents traded off health states of varying severities and durations. Initially, a saturated model (making no assumptions about functional form) was estimated. This demonstrated that the marginal value of lifespan is decreasing. The authors provide a set of values attached to different health states at different durations. Then, the econometric model is adjusted to suit a power model, with the power estimated for duration expressed in days, weeks, months, or years. The power value for time is 0.415, but different expressions of time could introduce bias; time expressed in days (power=0.403) loses value faster than time expressed in years (power=0.654). There are also some anomalies that arise from the data that don’t fit the power function. For example, a single day of moderate problems can be worse than death, whereas 7 days or more is not. Using ‘power QALYs’ could be the future. But the big remaining question is whether decisionmakers ought to respond to people’s time preferences in this way.
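To make the functional forms concrete: under constant proportionality the value of spending t years in health state q is simply \(u(q) \times t\), whereas the power alternative scales time as

\[
V(q,t) = u(q)\,t^{\alpha}, \qquad \alpha < 1,
\]

so, with the reported powers of roughly 0.4 to 0.65 depending on the unit of time, each additional unit of lifespan adds progressively less value. This is my shorthand reading of the reported results rather than the paper’s full econometric specification.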

A systematic review of studies comparing the measurement properties of the three-level and five-level versions of the EQ-5D. PharmacoEconomics [PubMed] Published 23rd March 2018

The debate about the EQ-5D-5L continues (on Twitter, at least). Conveniently, this paper addresses a concern held by some people – that we don’t understand the implications of using the 5L descriptive system. The authors systematically review papers comparing the measurement properties of the 3L and 5L, written in English or German. The review ended up including 24 studies. The measurement properties that were considered by the authors were: i) distributional properties, ii) informativity, iii) inconsistencies, iv) responsiveness, and v) test-retest reliability. The last property involves consideration of index values. Each study was also quality-assessed, with all being considered of good to excellent quality. The studies covered numerous countries and different respondent groups, with sample sizes from the tens to the thousands. For most measurement properties, the findings for the 3L and 5L were very similar. Floor effects were generally below 5% and tended to be slightly reduced for the 5L. In some cases, the 5L was associated with major reductions in the proportion of people responding as 11111 – a well-recognised ceiling effect associated with the 3L. Just over half of the studies reported on informativity using Shannon’s H’ and Shannon’s J’. The 5L provided consistently better results. Only three studies looked at responsiveness, with two slightly favouring the 5L and one favouring the 3L. The latter could be explained by the use of the 3L-5L crosswalk, which is inherently less responsive because it is a crosswalk. The overarching message is consistency. Business as usual. This is important because it means that the 3L and 5L descriptive systems provide comparable results (which is the basis for the argument I recently made that they are measuring the same thing). In some respects, this could be disappointing for 5L proponents because it suggests that the 5L descriptive system is not a lot better than the 3L. But it is a little better. This study demonstrates that there are still uncertainties about the differences between 3L and 5L assessments of health-related quality of life. More comparative studies, of the kind included in this review, should be conducted so that we can better understand the differences in results that are likely to arise now that we have moved (relatively assuredly) towards using the 5L instead of the 3L.
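For those unfamiliar with the informativity measures: with \(p_i\) the share of respondents at level i of a dimension and C the number of levels (3 or 5), Shannon’s index and evenness are

\[
H' = -\sum_{i=1}^{C} p_i \log_2 p_i, \qquad J' = \frac{H'}{\log_2 C},
\]

so H' rewards responses spread across more levels (absolute informativity) while J' normalises for the number of levels available. These are the standard definitions; individual studies in the review may differ in detail.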

Preference-based measures to obtain health state utility values for use in economic evaluations with child-based populations: a review and UK-based focus group assessment of patient and parent choices. Quality of Life Research [PubMed] Published 21st March 2018

Calculating QALYs for kids continues to be a challenge. One of the challenges is the choice of which preference-based measure to use. Part of the problem here is that the EuroQol group – on which we rely for measuring adult health preferences – has been a bit slow. There’s the EQ-5D-Y, which has been around for a while, but it wasn’t developed with any serious thought about what kids value and there still isn’t a value set for the UK. So, if we use anything, we use a variety of measures. In this study, the authors review the use of generic preference-based measures. 45 papers are identified, including 5 different measures: HUI2, HUI3, CHU-9D, EQ-5D-Y, and AQOL-6D. No prizes for guessing that the EQ-5D (adult version) was the most commonly used measure for child-based populations. Unfortunately, the review is a bit of a disappointment. And I’m not just saying that because at least one study on which I’ve worked isn’t cited. The search strategy is likely to miss many (perhaps most) trial-based economic evaluations with children, for which cost-utility analyses don’t usually get a lot of airtime. It’s hard to see how a review of this kind is useful if it isn’t comprehensive. But the goal of the paper isn’t just to summarise the use of measures to date. The focus is on understanding when researchers should use self- or proxy-response, and when a parent-child dyad might be most useful. The literature review can’t do much to guide that question except to assert that the identified studies tended to use parent–proxy respondents. But the study also reports on some focus groups, which are potentially more useful. These were conducted as part of a wider study relating to the design of an RCT. In five focus groups, participants were presented with the EQ-5D-Y and the CHU-9D. It isn’t clear why these two measures were selected. The focus groups included parents and some children over the age of 11. Unfortunately, there’s no real (qualitative) analysis conducted, so the findings are limited. Parents expressed concern about a lack of sensitivity. Naturally, they thought that they knew best and should be the respondents. Of the young people reviewing the measures themselves, the EQ-5D-Y was perceived as more straightforward in referring to tangible experiences, whereas the CHU-9D’s severity levels were seen as more representative. Older adolescents tended to prefer the CHU-9D. The youths weren’t so sure of themselves as the adults and, though they expressed concern about their parents not understanding how they feel, they were generally neutral to who ought to respond. The older kids wanted to speak for themselves. The paper provides a good overview of the different measures, which could be useful for researchers planning data collection for child health utility measurement. But due to the limitations of the review and the lack of analysis of the focus groups, the paper isn’t able to provide any real guidance.

Credits