Chris Sampson’s journal round-up for 25th September 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Good practices for real‐world data studies of treatment and/or comparative effectiveness: recommendations from the Joint ISPOR‐ISPE Special Task Force on Real‐World Evidence in Health Care Decision Making. Value in Health Published 15th September 2017

I have an instinctive mistrust of buzzwords. They’re often used to avoid properly defining something, either because it’s too complicated or – worse – because it isn’t worth defining in the first place. For me, ‘real-world evidence’ falls foul on both counts. If your evidence isn’t from the real world, then it isn’t evidence at all. But I do like a good old ISPOR Task Force report, so let’s see where this takes us. Real-world evidence (RWE) and its sibling buzzword real-world data (RWD) relate to observational studies and other data not collected in an experimental setting. The purpose of this ISPOR task force (joint with the International Society for Pharmacoepidemiology) was to prepare guidelines on the conduct of RWE/RWD studies, with a view to improving decision-makers’ confidence in them. Essentially, the hope is to create for RWE the kind of ecosystem that exists around RCTs, with procedures for study registration, protocols, and publication: a noble aim. The authors distinguish between two types of RWD study: ‘Exploratory Treatment Effectiveness Studies’ and ‘Hypothesis Evaluating Treatment Effectiveness Studies’. The idea is that the latter test a priori hypotheses, and these are the focus of this report. Seven recommendations are presented: i) pre-specify the hypotheses, ii) publish a study protocol, iii) publish the study with reference to the protocol, iv) enable replication, v) test hypotheses on a separate dataset from the one used to generate them, vi) publicly address methodological criticisms, and vii) involve key stakeholders. Fair enough. But these are just good practices for research generally. It isn’t clear how they are in any way specific to RWE. Of course, that was always going to be the case. RWE-specific recommendations would be entirely contingent on whether or not one chose to define a study as using ‘real-world evidence’ (which you shouldn’t, because it’s meaningless).
The authors are trying to fit a bag of square pegs into a hole of undefined shape. It isn’t clear to me why retrospective observational studies, prospective observational studies, registry studies, or analyses of routinely collected clinical data should all be treated the same, yet differently to randomised trials. Maybe someone can explain why I’m mistaken, but this report didn’t do it.

Are children rational decision makers when they are asked to value their own health? A contingent valuation study conducted with children and their parents. Health Economics [PubMed] [RePEc] Published 13th September 2017

Obtaining health state utility values for children presents all sorts of interesting practical and theoretical problems, especially if we want to use them in decisions about trade-offs with adults. For this study, the researchers conducted a contingent valuation exercise to elicit the preferences of children (aged 7-19) for reduced risk of asthma attacks, in terms of willingness to pay. The study was informed by two preceding studies that sought to identify the best way to present health risk and financial information to children. The participating children (n=370) completed questionnaires at school, which asked about socio-demographics, experience of asthma, risk behaviours and altruism. They were reminded (in child-friendly language) about the idea of opportunity cost, and asked to consider their own budget constraint. Baseline asthma attack risk and 3 risk-reduction scenarios were presented graphically. Two weeks later, the parents completed similar questionnaires. Only 9% of children were unwilling to pay for risk reduction, and most of those said that it was the mayor’s problem! In some senses, the children did a better job than their parents. The authors conducted 3 tests for ‘incorrect’ responses – 14% of adults failed at least one, while only 4% of children did so. Older children demonstrated better scope sensitivity. Of course, children’s willingness to pay was much lower in absolute terms than their parents’, because children have a much smaller budget. As a percentage of the budget, parents were – on average – willing to pay more than children. That seems reassuringly predictable. Boys and fathers were willing to pay more than girls and mothers. Having experience of frequent asthma attacks increased willingness to pay. Interestingly, teenagers were willing to pay less (as a proportion of their budget) than younger children… and so were the teenagers’ parents!
Children’s willingness to pay was correlated with that of their own parents at the higher risk reductions, but not at the lowest. This study reports lots of interesting findings and opens up plenty of avenues for future research. But the take-home message is obvious. Kids are smart. We should spend more time asking them what they think.
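The budget-share comparison the authors make can be sketched in a few lines. A minimal illustration (the numbers here are invented, not the study’s data) of normalising willingness to pay by each respondent’s budget, so that children and parents can be compared despite very different budget constraints:

```python
def wtp_share_of_budget(wtp, budget):
    """Willingness to pay expressed as a proportion of the respondent's budget.

    Normalising by budget lets child and parent responses be compared
    despite very different absolute budget constraints.
    """
    if budget <= 0:
        raise ValueError("budget must be positive")
    return wtp / budget

# Hypothetical figures, not taken from the paper:
child = wtp_share_of_budget(wtp=2.0, budget=10.0)      # 20% of pocket money
parent = wtp_share_of_budget(wtp=150.0, budget=500.0)  # 30% of disposable budget

# In absolute terms the parent pays 75 times more, but as a budget
# share the gap is far smaller - which is the comparison that matters.
print(child, parent)
```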

Journal of Patient-Reported Outcomes: aims and scope. Journal of Patient-Reported Outcomes Published 12th September 2017

Here we have a new journal that warrants a mention. The journal is sponsored by the International Society for Quality of Life Research (ISOQOL), making it a sister journal of Quality of Life Research. One of its Co-Editors-in-Chief is the venerable David Feeny, of HUI fame. They’ll be looking to publish research using PRO(M) data from trials or routine settings, studies of the determinants of PROs, qualitative studies in the development of PROs; anything PRO-related, really. This could be a good journal for more thorough reporting of PRO data that can get squeezed out of a study’s primary outcome paper. Also, “JPRO” is fun to say. The editors don’t mention that the journal is open access, but the website states that it is, so APCs at the ready. ISOQOL members get a discount.

Research and development spending to bring a single cancer drug to market and revenues after approval. JAMA Internal Medicine [PubMed] Published 11th September 2017

We often hear that new drugs are expensive because they’re really expensive to develop. Then we hear about how much money pharmaceutical companies spend on marketing, and we baulk. The problem is, pharmaceutical companies aren’t forthcoming with their accounts, so researchers have to come up with more creative ways of estimating R&D spending. Previous studies have reported divergent estimates. Whether R&D costs ‘justify’ high prices remains an open question. For this study, the authors looked at public data from the US for 10 companies that had only one cancer drug approved by the FDA between 2007 and 2016. Not very representative, perhaps, but useful because it allows for the isolation of the development costs associated with a single drug reaching the market. The median time for drug development was 7.3 years. The most generous estimate of the mean cost of development came in at under a billion dollars; substantially less than some previous estimates. This looks like a bargain: the mean revenue for the 10 companies up to December 2016 was over $6.5 billion. This study may seem a bit back-of-the-envelope in nature. But that doesn’t mean it isn’t accurate. If anything, it inspires more confidence than some previous studies because the methods are entirely transparent.
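The back-of-the-envelope flavour is easy to reproduce. A rough sketch of the headline comparison, using the round figures reported above (mean R&D cost just under $1 billion; mean revenue over $6.5 billion):

```python
def revenue_to_rd_ratio(revenue_usd, rd_cost_usd):
    """Crude return multiple: post-approval revenue per dollar of R&D spend."""
    return revenue_usd / rd_cost_usd

# Rounded figures from the study: mean R&D cost taken as $1bn (the most
# generous estimate was below this), mean revenue to December 2016 of $6.5bn.
ratio = revenue_to_rd_ratio(6.5e9, 1.0e9)
print(f"Each R&D dollar returned roughly ${ratio:.1f} in revenue")
```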

Credits

Chris Sampson’s journal round-up for 3rd April 2017


Return on investment of public health interventions: a systematic review. Journal of Epidemiology & Community Health [PubMed] Published 29th March 2017

Cost-effectiveness analysis in the context of public health is tricky. Often the health benefits are small at the individual level and the returns to investment might be cross-sectoral. Lots of smart people believe that spending on public health is low in proportion to other health spending. Here we have a systematic review of studies reporting cost-benefit ratios (CBR) or return on investment (ROI) estimates for public health interventions. The stated aim of the paper is to demonstrate the false economy associated with cuts to public health spending. From a search that identified 2,957 records, 52 titles were included. The inclusion and exclusion criteria are not very clear, with some studies rejected on the basis of ‘poor generalisability to the UK’. There’s a bit too much subjectivity sneaking around in the methods for my liking. Results for CBR and ROI estimates are presented according to local or national level and grouped by ‘specialism’. Across all studies, the median CBR was 8.3 and the median ROI was 14.3. As we might have suspected, public health interventions are cost-saving in a big way. National health protection and legislative interventions offered the greatest return on investment. While there is wide variation in the results, all specialism groupings showed a positive return on average. I don’t doubt the truth of the study’s message – that cuts to public health spending are foolish. But the review doesn’t really demonstrate what the authors want it to demonstrate. We don’t know what (if any) disinvestment is taking place with respect to the interventions identified in the review. The results presented in the study represent a useful reference point for discussion and further analysis, but they aren’t a sufficient basis for supporting general increases in public health spending. That said, the study adds to an already resounding call and may help bring more attention to the issue.
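It’s worth being clear about what these two summary measures mean, since conventions vary across studies. Under one common convention, ROI is simply CBR minus one – so the reviewed medians (CBR 8.3, ROI 14.3), which come from different subsets of studies, should not be expected to line up. A minimal sketch of the definitions:

```python
def cost_benefit_ratio(benefit, cost):
    """CBR: total benefit generated per unit of cost."""
    return benefit / cost

def return_on_investment(benefit, cost):
    """ROI: net benefit per unit of cost - equal to CBR - 1 under this
    convention, though definitions differ across the reviewed studies."""
    return (benefit - cost) / cost

# A hypothetical intervention costing 1m and returning 9.3m in benefits:
b, c = 9.3e6, 1.0e6
print(cost_benefit_ratio(b, c), return_on_investment(b, c))  # CBR 9.3, ROI 8.3
```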

Acceptable health and priority weighting: discussing a reference-level approach using sufficientarian reasoning. Social Science & Medicine Published 27th March 2017

In some ways, the moral principle of sufficiency is very attractive. It acknowledges a desire for redistribution from the haves to the have-nots and may make for a more manageable goal than all-out maximisation. It may be particularly useful in specific situations, such as evaluating health care for the elderly, for whom ‘full health’ is never achievable and not a meaningful reference point. This paper presents a discussion of the normative issues at play, drawing insights from the distributive justice literature. We’re reminded of the fair innings argument as a familiar sufficientarian-flavoured allocation principle. The sufficientarian approach is outlined in contrast to egalitarianism and prioritarianism. Strict sufficientarian value weighting is not a good idea. If we suppose a socially ‘acceptable’ health state value of 0.7, such an approach would – for example – value an improvement from 0.69 to 0.71 for one person as infinitely more valuable than an improvement from 0.2 to 0.6 for the whole population. The authors go on to outline some more relaxed sufficiency weightings, whereby improvements below the threshold are attributed a value greater than 0 (though still less than those achieving sufficiency). The sufficientarian approach alone is (forgive me) an insufficient framework for the allocation of health care resources and cannot represent the kind of societal preferences that have been observed in the literature. Thus, hybrids are proposed. In particular, a sufficientarian-prioritarian weighting function is presented, and the authors suggest that this may be a useful basis for priority setting. One can imagine a very weak form of the sufficientarian approach that corresponds to a prioritarian weighting function that is (perhaps) concave below the threshold and convex above it. Still, we have the major problem of identifying a level of acceptable health that is not arbitrary.
The real question you need to ask yourself is this: do you really want health economists to start arguing about another threshold?
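The strict-versus-relaxed distinction can be made concrete with a toy weighting function. This is my own illustration, not the authors’ specification, and the 0.7 threshold is (as noted above) arbitrary:

```python
THRESHOLD = 0.7  # socially 'acceptable' health state value - purely illustrative

def strict_sufficiency_value(before, after, threshold=THRESHOLD):
    """Strict sufficientarianism: only reaching the threshold counts.

    A gain that crosses the threshold has value 1; any other gain,
    however large, has value 0 - which is why 0.69 -> 0.71 for one
    person dominates 0.2 -> 0.6 for a whole population.
    """
    return 1.0 if before < threshold <= after else 0.0

def relaxed_sufficiency_value(before, after, threshold=THRESHOLD, sub_weight=0.5):
    """A relaxed variant: gains that do not achieve sufficiency still
    count, but at a reduced weight."""
    gain = after - before
    if before < threshold <= after:
        return gain          # crossing the threshold counts in full
    return sub_weight * gain # other gains are discounted

# One person just crossing the threshold vs 1,000 people stuck below it:
crossing = strict_sufficiency_value(0.69, 0.71)         # valued at 1.0
population = 1000 * strict_sufficiency_value(0.2, 0.6)  # valued at 0.0
```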

Emotions and scope effects in the monetary valuation of health. The European Journal of Health Economics [PubMed] Published 24th March 2017

It seems obvious that emotions could affect the value people attach to goods and services, but little research has been conducted with respect to willingness to pay for health services. This study considers the relationship between a person’s self-reported fear of being operated on and their willingness to pay for risk-reducing drug-eluting stents. A sample of 1,479 people in Spain made a series of choices between bare-metal stents at no cost and drug-eluting stents with some out-of-pocket cost, alongside a set of sociodemographic questions and a fear-of-surgery Likert scale. Each respondent provided 8 responses, with 4 different risk reductions and 2 different willingness-to-pay ‘bids’. The authors outline what they call a ‘cognitive-emotional random utility model’, which includes an ‘emotional shift effect’. Four different models are presented to demonstrate the predictive value of the emotion levels interacting with the risk reduction levels. The sample was split roughly in half according to whether people reported high emotion (8, 9 or 10 on the fear Likert scale) or low emotion (<8). People who reported more fear of being operated on were willing to pay more for risk reductions, which is the obvious result. More interesting is that the high emotion group exhibited lower sensitivity to scope – that is, there wasn’t much difference in their valuation of the alternative magnitudes of risk reduction. This constitutes a problem for willingness-to-pay estimates in this group, as it may prevent the elicitation of meaningful values, and it is perhaps another reason why we usually go for collective approaches to health state valuation. The authors conclude that emotional response is a bias that needs to be corrected. I don’t buy this interpretation and would tend to the view that the bias that needs correcting here is that of the economist. Emotions may be a justifiable reflection of personality traits that ought to determine preferences, at least at the individual level.
But I do agree with the authors that this is an interesting field for further research, if only to understand possible sources of heterogeneity in health state valuation.
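Scope sensitivity itself is straightforward to quantify. A hypothetical sketch (the respondents and figures here are invented, not the study’s) of comparing how WTP scales against the size of the risk reduction:

```python
def scope_ratio(wtp_small, wtp_large, risk_small, risk_large):
    """Ratio of the observed WTP scaling to the scaling implied by scope.

    A value of 1.0 means WTP is fully sensitive to the size of the risk
    reduction; values near 0 mean the respondent pays roughly the same
    amount regardless of how much risk is removed.
    """
    observed = wtp_large / wtp_small
    implied = risk_large / risk_small
    return (observed - 1) / (implied - 1)

# Hypothetical respondents valuing 1% vs 4% risk reductions:
low_emotion = scope_ratio(10, 34, 0.01, 0.04)   # WTP scales fairly well
high_emotion = scope_ratio(25, 28, 0.01, 0.04)  # WTP barely scales at all
print(low_emotion, high_emotion)
```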

Credits

The well-being valuation approach: Solution or convolution?

The ‘well-being valuation method’ is a recently developed technique for valuing, in monetary terms, the effect of a health problem on an individual’s well-being. The method involves calculating the compensating variation necessary to maintain the same level of well-being after suffering from a particular health problem, and it is hoped that it offers a solution to the problems of revealed preference and contingent valuation methods. A recent IZA paper investigated whether there was consistency across well-being measures in valuations of different health problems. The authors find (as might be expected) that different well-being measures give very different results. This post is inspired by that paper.

Solution?

Monetary valuation of health problems is certainly a decision-maker’s dream. If done right, it would also be a health economist’s dream. The QALY was developed as a substitute currency for health, as money was not deemed appropriate and willingness-to-pay and willingness-to-accept methods are notoriously biased, controversial and inconsistent. The well-being valuation approach has the potential to allow us to scrap this stand-in currency by using the ‘Leyden approach‘ and aiming questions about health problems at a representative sample of the public. Using this method, we can figure out the value that individuals assign to losses in well-being associated with particular health problems, and can thus decide whether a particular intervention represents good value for money. This method can also, very easily, provide different values for people with different socio-demographics.
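The compensating variation calculation at the heart of the method can be sketched under a standard (assumed) log-income well-being model; the coefficients below are illustrative, not estimates from the IZA paper:

```python
import math

def compensating_variation(income, beta_income, gamma_health):
    """Income needed to offset a health problem, under the assumed
    well-being model  WB = a + beta*ln(income) + gamma*health.

    Solves  beta*ln(income + CV) = beta*ln(income) + gamma  for the CV
    that restores well-being after health falls by one unit.
    """
    return income * (math.exp(gamma_health / beta_income) - 1)

# Hypothetical coefficients from an imagined well-being regression:
cv = compensating_variation(income=25_000, beta_income=0.5, gamma_health=0.2)
print(f"Monetary value of the health problem: ~{cv:,.0f} per year")
```

Note how sensitive the result is to the ratio of the two coefficients – which is one mechanical reason why different well-being measures, yielding different coefficient estimates, produce such different valuations.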

Convolution?

In calculating QALYs, there is some consensus around the use of generic preference-based measures of health-related quality of life. Which well-being measure should be used in the well-being valuation approach is unclear and, as the recent IZA paper showed, different measures give different results. And besides, is this even the direction in which health economics should be heading? Wouldn’t we be better off adapting the QALY method and perhaps working harder to assign monetary values to QALYs? Using willingness-to-pay methods, this is not really possible at the moment, due to numerous methodological problems. However, this is not to say we shouldn’t still be trying to do it using different methods. There are also massive equity concerns when we start assigning monetary values to health problems, as different people value money differently; arguably not in a way that is representative of underlying preferences for health.

Personally, I think that the well-being valuation approach is, in principle, a potentially great new idea for the health economics field to adopt. It seems particularly relevant to the current debate in the UK over value-based pricing. However, I have many reservations about direct monetary valuations of health problems as they are currently carried out. I would like to see a future analysis in the literature of the equity implications of using the well-being valuation approach instead of QALYs. With the peace of mind that these methods can provide equitable outcomes, I feel this new method could (and possibly should) be adopted more widely.

Please provide your thoughts on this subject using the comments box below. Also, please highlight any literature relevant to this debate.