Brent Gibbons’s journal round-up for 10th February 2020

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Impact of comprehensive smoking bans on the health of infants and children. American Journal of Health Economics [RePEc] Published 15th January 2020

While debates on tobacco control policies have recently focused on the rising use of e-cigarettes and vaping devices, along with the associated lung injuries in the U.S., there is still much to learn about the effectiveness of established tobacco control options. In the U.S., while strategies to increase cigarette taxes and to promote smoke-free public spaces have contributed to a decline in smoking prevalence, more stringent policies such as plain packaging, pictorial warning labels, and no point-of-sale advertising have generally not been implemented. Furthermore, comprehensive smoking bans that include restaurants, bars, and workplaces have only been implemented in approximately 60 percent of localities. This article fills an important gap in the evidence on comprehensive smoking bans, examining how this policy affects the health of children. It also provides interesting evidence on the effect of comprehensive smoking bans on smoking behavior in private residences.

There is ample evidence to support the conclusion that smoking bans reduce smoking prevalence and the exposure of nonsmoking adults to second-hand smoke. This reduced second-hand smoke exposure has been linked to reductions in related health conditions for adults, but has not been studied for infants and children. Of particular concern is that smoking bans may have the unintended ‘displacement’ effect of increasing smoking in private residences, potentially increasing exposure for some children and pregnant women.

For their analyses, the authors use nationally representative data from the US Vital Statistics Natality Data and the National Health Interview Survey (NHIS), coupled with detailed local and state tobacco policy data. The policy data allow the authors to distinguish partial smoking bans (e.g. limited smoking bans in bars and restaurants) from comprehensive smoking bans, defined as 100 percent smoke-free environments in restaurants, bars, and workplaces in a locale. For their main analyses, a difference-in-differences model is used, comparing locales with comprehensive smoking bans to locales with no smoking bans; a counterfactual of no smoking bans or partial bans is also used. Outcomes for infants are low birth weight and gestation, while smoke-related adverse health conditions (e.g. asthma) are used for children under 18.
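To make the identification strategy concrete, a two-way fixed effects difference-in-differences regression on simulated data might look like the sketch below. All variable names, numbers, and the simulated effect size are illustrative assumptions, not the authors' actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical locale-by-year panel: 'comprehensive_ban' switches on in the
# year a locale adopts a 100% smoke-free policy; 'low_birth_weight' is the
# share of low birth weight births in that locale-year (simulated).
rng = np.random.default_rng(0)
n_locales, n_years = 40, 12
df = pd.DataFrame({
    "locale": np.repeat(np.arange(n_locales), n_years),
    "year": np.tile(np.arange(1995, 1995 + n_years), n_locales),
})
adoption = rng.integers(1998, 2010, n_locales)  # staggered adoption years
df["comprehensive_ban"] = (df["year"] >= adoption[df["locale"].to_numpy()]).astype(int)
df["cig_tax"] = rng.uniform(0.5, 3.0, len(df))  # excise tax control
df["low_birth_weight"] = 8.0 - 0.4 * df["comprehensive_ban"] + rng.normal(0, 0.5, len(df))

# Two-way fixed effects DiD: locale and year dummies absorb level differences;
# the coefficient on 'comprehensive_ban' is the policy effect, with standard
# errors clustered at the locale level.
model = smf.ols(
    "low_birth_weight ~ comprehensive_ban + cig_tax + C(locale) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["locale"]})
print(model.params["comprehensive_ban"])  # recovers roughly the simulated -0.4
```

The locale fixed effects play the role of the "local geographic fixed effects" the authors mention, and the tax variable stands in for their excise tax controls.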

Results support the conclusion that comprehensive smoking bans are linked to positive health effects for infants and children. The authors included local geographic fixed effects, controlled for excise taxes, and ran an impressive array of sensitivity analyses, all of which support the positive findings. For birth outcomes, the mechanism of effect is explored using self-reported smoking status. The authors find that a majority of the birth outcome effects (80-85 percent) are likely due to reduced second-hand smoke exposure among pregnant mothers, as opposed to a reduction in prenatal smoking. And regarding displacement concerns, the authors examine NHIS data and find no evidence that smoking bans were associated with displacement of smoking to private residences.

This paper is worth a deep dive. The authors have made an important contribution to the evidence on smoking bans, addressing a possible unintended consequence and adding further weight to arguments for extending comprehensive smoking bans nationwide in the U.S. The health implications are non-trivial, where impacts on birth outcomes alone “can prevent between approximately 1,100 and 1,750 low birth weight births among low-educated mothers, resulting in economic cost savings of about $71-111 million annually.”

Europeans’ willingness to pay for ending homelessness: a contingent valuation study. Social Science & Medicine Published 15th January 2020

Housing First (HF) is a social program that originated as an initiative to address homelessness in Los Angeles. Over time, it has been adapted particularly for individuals with unstable housing who have long-term behavioral health disorders, including mental health and substance use disorders. Like other community mental health services, HF has incorporated a philosophy of not imposing conditions before providing services. For example, with supported employment services, which help those with persistent behavioral health disorders gain employment, the currently accepted approach is to ‘place’ individuals in jobs and then provide training and other support, as opposed to the traditional model of ‘train, then place’. Similarly, for housing, the philosophy is to provide housing first, with various wraparound supports available, whether or not those wraparound services are accepted, and whether or not the person has refrained from substance use. The model is based on the logic that without stable housing, other health and social services will be less effective. It is also based on the assertion that stable housing is a basic human right.

Evidence for HF has generally supported its advantage over more traditional policies, especially in its effectiveness in improving stable housing. Other cost offsets have been reported, including reductions in health service use; however, the literature is inconclusive on the existence and size of these offsets. The Substance Abuse and Mental Health Services Administration (SAMHSA) has identified HF as an evidence-based model, and a number of countries, including the U.S., Canada, and several European countries, have begun incorporating HF into their homelessness policies. Yet the cost-effectiveness of HF is not firmly established in the literature. At present, results appear favorable towards HF in comparison with other housing policies, though cost-effectiveness analyses of HF face considerable difficulties, most notably that there are multiple measures of effectiveness (e.g. stable housing days and QALYs). More research needs to be done to better establish the cost-effectiveness of HF.

I’ve chosen to highlight this background because Loubiere et al., in this article, have conducted a large contingent valuation (CV) study to assess willingness to pay (WTP) for HF, which the title implies is commensurate with “ending homelessness”. Contingent valuation is generally accepted as one method for valuing resources where no market is available, though it has attracted considerable criticism. Discrete choice experiments are often favored (though not without their own criticisms), but the authors settled on CV because the survey was embedded in a longer questionnaire. The study is aimed at policy makers who must take into account broader public preferences for either increased taxation or a shifting of resources. The intention is laudable in that it attempts to quantify how much the average person would be willing to give up to not have homelessness exist in her country; this information may help policy makers to act. But more important, I would argue, is to have more definitive information on HF’s cost-effectiveness.

As for the rigor of the study, I was disappointed to see that the survey was conducted by telephone, which goes against recommendations to use personal interviews in CV. An iterative bidding process was used, which helps to mitigate overvaluation, though the threat of anchoring bias remains, as the starting bid was not randomly allocated. There was limited description of what was conveyed to respondents, including which efficacy results were presented for HF; this information is important for making appropriate sense of the results. Aside from other survey limitations such as acquiescence bias and non-response bias, the authors did attempt to deal with the issue of ‘protest’ answers, performing alternative analyses with and without them, with protest answers assigned a €0 value. Mean WTP ranged from €23 (€16 in Poland to €57 in Sweden) to €28, depending on the treatment of protest answers. Analyses were also conducted to understand factors related to reported WTP. The results suggest that Europeans are supportive of reducing homelessness and will give up considerable hard-earned cash toward this cause. This reader, for one, is not convinced. However, I would hope that policy makers, armed with better cost-effectiveness research, could make policy decisions for a marginalized group, even without a more rigorous WTP estimate.
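For readers unfamiliar with iterative bidding, a generic version of the procedure can be sketched as follows. The starting bid, step size, and stopping rule here are my own assumptions for illustration, not the study's actual protocol.

```python
def iterative_bid(answers, start_bid=30.0, step=10.0, floor=0.0):
    """Generic iterative bidding: raise the bid after a 'yes', lower it
    after a 'no', and record the last accepted amount as stated WTP.
    An all-'no' respondent is recorded at zero, which is indistinguishable
    from a protest answer without follow-up questions."""
    bid, last_yes = start_bid, None
    for accept in answers:  # answers: sequence of True/False responses
        if accept:
            last_yes = bid
            bid += step
        else:
            bid = max(floor, bid - step)
    return last_yes if last_yes is not None else 0.0

# Accepts EUR 30, accepts EUR 40, refuses EUR 50 -> WTP recorded as 40.
print(iterative_bid([True, True, False]))  # 40.0
# Refuses everything -> 0, a true zero valuation or a protest answer.
print(iterative_bid([False, False]))       # 0.0
```

The sketch makes the anchoring concern visible: every recorded WTP is a fixed number of steps away from the starting bid, so a non-random starting bid can pull the whole distribution towards it.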


Chris Sampson’s journal round-up for 15th October 2018


Reliability and validity of the contingent valuation method for estimating willingness to pay: a case of in vitro fertilisation. Applied Health Economics and Health Policy [PubMed] Published 13th October 2018

In vitro fertilisation (IVF) is a challenge for standard models of valuation in health economics. Mostly, that’s because, despite it falling within the scope of health care, and despite infertility being a health problem, many of the benefits of IVF can’t be considered health-specific. QALYs can’t really do the job, so there’s arguably a role for cost-benefit analysis, and for using stated preference methods to determine the value of IVF. This study adds to an existing literature studying willingness to pay for IVF, but differs in that it tries to identify willingness to pay (WTP) from the general population. This study is set in Australia, where IVF is part-funded by universal health insurance, so asking the public is arguably the right thing to do.

Three contingent valuation surveys were conducted online with 1,870 people from the general public. The first survey used a starting point bid of $10,000, and then, 10 months later, two more surveys were conducted with starting point bids of $4,000 and $10,000. Each included questions for a 10%, 20%, and 50% success rate. Respondents were asked to adopt an ex-post perspective, assuming that they were infertile and could conceive by IVF. Individuals could respond to starting bids with ‘yes’, ‘no’, ‘not sure’, or ‘I am not willing to pay anything’. WTP for one IVF cycle with a 20% success rate ranged from $6,353 in the $4,000 survey to $11,750 in the first $10,000 survey. WTP for a year of treatment ranged from $18,433 to $28,117. The method was reliable insofar as there were no differences between the first and second $10,000 surveys. WTP values corresponded to the probability of success, providing support for the internal construct validity of the survey. However, the big difference between values derived using the alternative starting point bids indicates a strong anchoring bias. The authors also tested the external criterion validity by comparing the number of respondents willing to pay more than $4,000 for a cycle with a 20% success rate (roughly equivalent to the out of pocket cost in Australia) with the number of people who actually choose to pay for IVF in Australia. Around 63% of respondents were willing to pay at that price, which is close to the estimated 60% in Australia.
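The anchoring check amounts to comparing WTP distributions across the two starting-bid arms. A sketch with simulated responses, where the arm means are seeded from the reported estimates ($6,353 and $11,750) and everything else (sample sizes, spreads, distributional form) is assumed:

```python
import numpy as np
from scipy import stats

# Simulated WTP responses (A$) for two survey arms that differ only in the
# starting bid shown to respondents; means mimic the reported estimates.
rng = np.random.default_rng(2)
wtp_4000_arm = rng.normal(6353, 2000, 300).clip(min=0)
wtp_10000_arm = rng.normal(11750, 3000, 300).clip(min=0)

# Absent anchoring, the two arms should share the same mean WTP; a clear
# difference is evidence that respondents anchored on the starting bid.
t, p = stats.ttest_ind(wtp_4000_arm, wtp_10000_arm, equal_var=False)
print(p < 0.05)  # True for these simulated arms
```

The same logic run in reverse supports the reliability claim: the two $10,000 surveys, fielded 10 months apart, showed no such difference.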

This study provides some support for the use of contingent valuation methods in the context of IVF, and for its use in general population samples. But the anchoring effect is worrying and justifies further research to identify appropriate methods to counteract this bias. The exclusion of the “not sure” and “I will not pay anything” responses from the analysis – as ‘non-demanders’ – arguably undermines the ‘societal valuation’ aspect of the estimates.

Pharmaceutical expenditure and gross domestic product: evidence of simultaneous effects using a two‐step instrumental variables strategy. Health Economics [PubMed] Published 10th October 2018

The question of how governments determine spending on medicines is pertinent in the UK right now, as the Pharmaceutical Price Regulation Scheme approaches its renewal date. The current agreement includes a cap on pharmaceutical expenditure. It should go without saying that GDP ought to have some influence on how much public spending is dedicated to medicines. But, when medicines expenditure might also influence GDP, the actual relationship is difficult to estimate. In this paper, the authors seek to identify both effects: the income elasticity of government spending on pharmaceuticals and the effect of that spending on income.

The authors use a variety of data sources from the World Health Organization, World Bank, and International Monetary Fund to construct an unbalanced panel for 136 countries from 1995 to 2006. To get around the challenge of two-way causality, the authors implement a two-step instrumental variable approach. In the first step of the procedure, a model estimates the impact of GDP per capita on government spending on pharmaceuticals. International tourist receipts are used as an instrument, expected to correlate strongly with GDP per capita but to be unrelated to medicines expenditure (except through its correlation with GDP). The model attempts to control for health care expenditure, life expectancy, and other important country-specific variables. In the second step, a reverse causality model is used to assess the impact of pharmaceutical expenditure on GDP per capita, with pharmaceutical expenditure adjusted to partial out the response to GDP estimated in the first step.

The headline average results are that GDP increases pharmaceutical expenditure and that pharmaceutical expenditure reduces GDP. A 1% increase in GDP per capita increases public pharmaceutical expenditure per capita by 1.4%, suggesting that pharmaceuticals are a luxury good. A 1% increase in public pharmaceutical expenditure is associated with a 0.09% decrease in GDP per capita. But the results are more nuanced than that. The authors outline various sources of heterogeneity. The positive effect of GDP on pharmaceutical expenditure only holds for high-income countries and the negative effect of pharmaceutical expenditure on GDP only holds for low-income countries. Quantile regressions show that income elasticity decreases for higher quantiles of expenditure. GDP only influences pharmaceutical spending in countries classified as ‘free’ on the index of Economic Freedom of the World, and pharmaceutical expenditure only has a negative impact on GDP in countries that are ‘not free’.

I’ve never come across this kind of two-step approach before, so I’m still trying to get my head around whether the methods and the data are adequate. But a series of robustness checks provide some reassurance. In particular, an analysis of intertemporal effects using lagged GDP and lagged pharmaceutical expenditure demonstrates the robustness of the main findings. Arguably, the findings of this study are more important for policymaking in low- and middle-income countries, where pharmaceutical expenditures might have important consequences for GDP. In high-income (and ‘free’) economies that spend a lot on medicines, like the UK, there is probably less at stake. This could be because of effective price regulation and monitoring, and better adherence, ensuring that pharmaceutical expenditure is not wasteful.

Parental health spillover in cost-effectiveness analysis: evidence from self-harming adolescents in England. PharmacoEconomics [PubMed] [RePEc] Published 8th October 2018

Any intervention has the potential for spillover effects, whereby people other than the recipient of care are positively or negatively affected by the consequences of the intervention. Where a child is the recipient of care, it stands to reason that any intervention could affect the well-being of the parents and that these impacts should be considered in economic evaluation. But how should parental spillovers be incorporated? Are parental utilities additive to that of the child patient? Or should a multiplier effect be used with reference to the effect of an intervention on the child’s utility?

The study reports on a trial-based economic evaluation of family therapy for self-harming adolescents aged 11-17. Data collection included EQ-5D-3L for the adolescents and HUI2 for the main caregiver (86% mothers) at baseline, 6-month follow-up, and 12-month follow-up, collected from 731 patient-parent pairs. The authors outline six alternative methods for including parental health spillovers: i) relative health spillover, ii) relative health spillover per treatment arm, iii) absolute health spillover, iv) absolute global health spillover per treatment arm, v) additive accrued health benefits, and vi) household equivalence scales. These differ according to whether parental utility is counted as depending on adolescent’s utility, treatment allocation, the primary outcome of the study, or some combination thereof. But the authors’ primary focus (and the main contribution of this study) is the equivalence scale option. This involves adding together the spillover effects for other members of the household and using alternative weightings depending on the importance of parental utility compared with adolescent utility.

Using Tobit models, controlling for a variety of factors, the authors demonstrate that parental utility is associated with adolescent utility. Then, economic evaluations are conducted using each of the alternative spillover accounting methods. The base case of including only adolescents’ utility delivers an ICER of £40,453. Employing the alternative methods gives quite different results, with the intervention dominated in two of the cases and an ICER below £30,000 per QALY in others. For the equivalence scale approach, the authors employ several elasticities for spillover utility, ranging from 0 (where parental utility is of equivalent value to adolescent utility and therefore additive) to 1 (where the average health spillover per household member is estimated for each patient). The ICER estimates using the equivalence scale approach ranged from £27,166 to £32,504. Higher elasticities implied lower cumulated QALYs.
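As a sketch of the equivalence scale idea, something like the following captures the two poles of the elasticity range. This functional form is my reading of the description above, not the paper's exact formula, and the QALY numbers are invented.

```python
def household_qaly_gain(patient_gain, spillover_gains, elasticity):
    """Aggregate QALY gains across the household with an equivalence-scale
    weight. elasticity = 0: spillovers count in full (purely additive);
    elasticity = 1: spillovers are averaged over household members.
    Spillovers are only aggregated when the patient's own gain is
    non-negative, following the authors' argument."""
    n = 1 + len(spillover_gains)  # patient plus other household members
    if patient_gain < 0:
        return patient_gain
    return patient_gain + sum(spillover_gains) / n ** elasticity

# One parent with a 0.02 QALY spillover alongside a 0.10 patient gain:
print(round(household_qaly_gain(0.10, [0.02], elasticity=0), 4))  # 0.12
print(round(household_qaly_gain(0.10, [0.02], elasticity=1), 4))  # 0.11
```

This makes the reported pattern intuitive: a higher elasticity shrinks the spillover contribution, so cumulated QALYs fall and the ICER rises.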

The paper’s contribution is methodological, and I wouldn’t read too much into the magnitude of the results. For starters, the use of HUI2 (a measure for children) in adults and the use of EQ-5D-3L (a measure for adults) in the children is somewhat confusing. The authors argue that health gains should only be aggregated at the household level if the QALY gain for the patient is greater than or equal to zero, because the purpose of treatment is to benefit the adolescents, not the parents. And they argue in favour of using an equivalence scale approach. By requiring an explicit judgement to set the elasticity within the estimation, the method provides a useful and transparent approach to including parental spillovers.


Chris Sampson’s journal round-up for 25th September 2017


Good practices for real‐world data studies of treatment and/or comparative effectiveness: recommendations from the Joint ISPOR‐ISPE Special Task Force on Real‐World Evidence in Health Care Decision Making. Value in Health Published 15th September 2017

I have an instinctive mistrust of buzzwords. They’re often used to avoid properly defining something, either because it’s too complicated or – worse – because it isn’t worth defining in the first place. For me, ‘real-world evidence’ falls foul. If your evidence isn’t from the real world, then it isn’t evidence at all. But I do like a good old ISPOR Task Force report, so let’s see where this takes us. Real-world evidence (RWE) and its sibling buzzword real-world data (RWD) relate to observational studies and other data not collected in an experimental setting. The purpose of this ISPOR task force (joint with the International Society for Pharmacoepidemiology) was to prepare some guidelines about the conduct of RWE/RWD studies, with a view to improving decision-makers’ confidence in them. Essentially, the hope is to try and create for RWE the kind of ecosystem that exists around RCTs, with procedures for study registration, protocols, and publication: a noble aim. The authors distinguish between 2 types of RWD study: ‘Exploratory Treatment Effectiveness Studies’ and ‘Hypothesis Evaluating Treatment Effectiveness Studies’. The idea is that the latter test a priori hypotheses, and these are the focus of this report. Seven recommendations are presented: i) pre-specify the hypotheses, ii) publish a study protocol, iii) publish the study with reference to the protocol, iv) enable replication, v) test hypotheses on a dataset separate from the one used to generate the hypotheses, vi) publicly address methodological criticisms, and vii) involve key stakeholders. Fair enough. But these are just good practices for research generally. It isn’t clear how they are in any way specific to RWE. Of course, that was always going to be the case. RWE-specific recommendations would be entirely contingent on whether or not one chose to define a study as using ‘real-world evidence’ (which you shouldn’t, because it’s meaningless).
The authors are trying to fit a bag of square pegs into a hole of undefined shape. It isn’t clear to me why retrospective observational studies, prospective observational studies, registry studies, or analyses of routinely collected clinical data should all be treated the same, yet differently to randomised trials. Maybe someone can explain why I’m mistaken, but this report didn’t do it.

Are children rational decision makers when they are asked to value their own health? A contingent valuation study conducted with children and their parents. Health Economics [PubMed] [RePEc] Published 13th September 2017

Obtaining health state utility values for children presents all sorts of interesting practical and theoretical problems, especially if we want to use them in decisions about trade-offs with adults. For this study, the researchers conducted a contingent valuation exercise to elicit children’s (aged 7-19) preferences for reduced risk of asthma attacks in terms of willingness to pay. The study was informed by two preceding studies that sought to identify the best way in which to present health risk and financial information to children. The participating children (n=370) completed questionnaires at school, which asked about socio-demographics, experience of asthma, risk behaviours and altruism. They were reminded (in child-friendly language) about the idea of opportunity cost, and to consider their own budget constraint. Baseline asthma attack risk and 3 risk-reduction scenarios were presented graphically. Two weeks later, the parents completed similar questionnaires. Only 9% of children were unwilling to pay for risk reduction, and most of those said that it was the mayor’s problem! In some senses, the children did a better job than their parents. The authors conducted 3 tests for ‘incorrect’ responses – 14% of adults failed at least one, while only 4% of children did so. Older children demonstrated better scope sensitivity. Of course, children’s willingness to pay was much lower in absolute terms than their parents’, because children have a much smaller budget. As a percentage of the budget, parents were – on average – willing to pay more than children. That seems reassuringly predictable. Boys and fathers were willing to pay more than girls and mothers. Having experience of frequent asthma attacks increased willingness to pay. Interestingly, teenagers were willing to pay less (as a proportion of their budget) than younger children… and so were the teenagers’ parents! 
Children’s willingness to pay was correlated with their parents’ at the higher risk reductions but not the lowest. This study reports lots of interesting findings and opens up plenty of avenues for future research. But the take-home message is obvious. Kids are smart. We should spend more time asking them what they think.

Journal of Patient-Reported Outcomes: aims and scope. Journal of Patient-Reported Outcomes Published 12th September 2017

Here we have a new journal that warrants a mention. The journal is sponsored by the International Society for Quality of Life Research (ISOQOL), making it a sister journal of Quality of Life Research. One of its Co-Editors-in-Chief is the venerable David Feeny, of HUI fame. They’ll be looking to publish research using PRO(M) data from trials or routine settings, studies of the determinants of PROs, qualitative studies in the development of PROs; anything PRO-related, really. This could be a good journal for more thorough reporting of PRO data that can get squeezed out of a study’s primary outcome paper. Also, “JPRO” is fun to say. The editors don’t mention that the journal is open access, but the website states that it is, so APCs at the ready. ISOQOL members get a discount.

Research and development spending to bring a single cancer drug to market and revenues after approval. JAMA Internal Medicine [PubMed] Published 11th September 2017

We often hear that new drugs are expensive because they’re really expensive to develop. Then we hear about how much money pharmaceutical companies spend on marketing, and we baulk. The problem is, pharmaceutical companies aren’t forthcoming with their accounts, so researchers have to come up with more creative ways to estimate R&D spending. Previous studies have reported divergent estimates. Whether R&D costs ‘justify’ high prices remains an open question. For this study, the authors looked at public data from the US for 10 companies that had only one cancer drug approved by the FDA between 2007 and 2016. Not very representative, perhaps, but useful because it allows for the isolation of the development costs associated with a single drug reaching the market. The median time for drug development was 7.3 years. The most generous estimate of the mean cost of development came in at under a billion dollars; substantially less than some previous estimates. This looks like a bargain; the mean revenue for the 10 companies up to December 2016 was over $6.5 billion. This study may seem a bit back-of-the-envelope in nature. But that doesn’t mean it isn’t accurate. If anything, it inspires more confidence than some previous studies because the methods are entirely transparent.
