Rita Faria’s journal round-up for 20th January 2020

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Opportunity cost neglect in public policy. Journal of Economic Behavior & Organization Published 10th January 2020

Opportunity cost is a key concept in economics, and health economics is no exception. We all agree that policy-makers should consider the opportunity cost alongside the benefits of the various policy options. The question is… do they? This fascinating paper by Emil Persson and Gustav Tinghög suggests that they may not.

The paper reports two studies: one in the general population, and the other in a sample of experts on priority setting in health. In both studies, the participants were asked to choose between making a purchase or not, and were randomised to choices with and without a reminder about the opportunity cost. The reminder consisted of the “no” option having the comment “saving the money for other purchases”. There were choices about private consumption (e.g. buying a new mobile phone) and health care policy (e.g. funding a new cancer screening programme).

In the general population study, participants were 6% less likely to invest in public policies if they were reminded of the opportunity cost. There was no effect on private consumption decisions. In the study with experts on health care priority setting, participants were 10% less likely to invest in a health programme when reminded about opportunity costs, although the result was only “marginally significant”. For private consumption, there was a numerical difference of -6%, but it was not statistically significant. The authors concluded that both lay people and experts neglect opportunity cost in public policy, but much less so in their own private consumption decisions.

It struck me that this effect is driven by quite a small difference between the scenarios – simply stating that choosing to reject the policy means that the money will be saved for future purchases. I wonder how this information affects the decision. After all, the scenarios only quantify the costs of the policy, without information about the benefits or the opportunity cost. For example, the benefits of the cancer screening programme were that “cancer treatment will be more effective, lives will be saved and human suffering will be avoided” and the cost was 48 million SEK per year. Whether this policy is good or bad value for money depends entirely on how much suffering it avoids and how much would be avoided by investing the money in something else. It would have been interesting to couple the survey with interviews to understand how the participants interpreted the information and reached their decisions.
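As a back-of-the-envelope illustration of why the value-for-money judgement is underdetermined here, consider what the 48 million SEK would need to buy. The 500,000 SEK per QALY figure below is my assumption for the sake of the example, not a number from the paper:

```python
# Rough value-for-money check with an assumed opportunity cost per QALY.
cost_per_year = 48_000_000        # SEK per year, from the scenario in the paper
assumed_sek_per_qaly = 500_000    # SEK per QALY displaced elsewhere - an assumption
qalys_needed = cost_per_year / assumed_sek_per_qaly
print(qalys_needed)               # 96.0 QALYs per year for the programme to break even
```

Without some sense of whether the screening programme can plausibly generate that much health, the “yes/no” choice in the survey is largely a judgement made in the dark.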

On a wider note, this paper agrees with health economists’ anecdotal experience that policy-makers find it hard to think about opportunity cost. This is not helped by settings where they hear from people who would benefit from a positive recommendation and from doctors who would like to have the new drug in their medical arsenal, but not much from the people who will bear the opportunity cost. The message is clear: we need to do better at communicating the opportunity cost of public policies!

Assessment of progression-free survival as a surrogate end point of overall survival in first-line treatment of ovarian cancer. JAMA Network Open [PubMed] Published 10th January 2020

A study about the relationship between progression-free survival and overall survival may seem an odd choice for a health economics journal round-up, but it is actually quite relevant. In cost-effectiveness analysis of new cancer drugs, the trial primary endpoint may be progression-free survival (PFS). Data on overall survival (OS) may be too immature to assess the treatment effect or for extrapolation to the longer term. To predict QALYs and lifetime costs with and without the new drug, the cost-effectiveness model may need to assume a surrogate relationship between PFS and OS. That is, that an effect on PFS is reflected, to some extent, in an effect on OS. The question is, how strong is that surrogate relationship? This study tries to answer this question in advanced ovarian cancer.
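To make the modelling use concrete, here is a minimal sketch of how an assumed surrogate relationship might translate a trial’s PFS effect into a predicted OS effect. The functional form, coefficients, and hazard ratio below are invented for illustration; they are not estimates from this paper.

```python
# Illustrative sketch only: predicting an OS effect from a PFS effect using an
# assumed trial-level surrogate equation of the form
#   log HR_OS = a + b * log HR_PFS
# The coefficients and the trial's PFS hazard ratio below are made up.
import math

a, b = 0.0, 0.6          # assumed surrogate intercept and slope (not from the paper)
hr_pfs_trial = 0.75      # hypothetical PFS hazard ratio from a new trial

log_hr_os = a + b * math.log(hr_pfs_trial)
hr_os_pred = math.exp(log_hr_os)
print(round(hr_os_pred, 2))  # ~0.84: a smaller predicted effect on OS than on PFS
```

The predicted OS hazard ratio would then drive the extrapolated survival curve, and with it the QALY and cost estimates, which is why the strength of the surrogate relationship matters so much.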

Xavier Paoletti and colleagues conducted a systematic review and meta-analysis using individual patient data from 11,029 people who took part in 17 RCTs of first-line therapy in advanced ovarian cancer. They assessed the surrogate relationship at the individual level and at the trial level. The individual-level surrogate relationship refers to the correlation between PFS and OS for the individual patient. As the authors note, this may only reflect that people who have longer life expectancy also take longer to progress. At the trial level, they looked at the correlation between the hazard ratio (HR) on OS and the HR on PFS. This reflects how much of the effect on OS could be predicted by the effect on PFS. They used the surrogate criteria proposed by the Follicular Lymphoma Analysis of Surrogacy Hypothesis initiative. As this is outside my area of expertise, I won’t comment on the methodology.

One of their results is quite striking: in 16/17 RCTs, the experimental drug’s HRs for PFS and OS were not statistically different from the control. In other words, scarcely any new drug in these trials showed a statistically significant benefit! In terms of the surrogate relationship, they found an individual-level association – that is, people who take longer to progress also survive for longer. In contrast, they did not find a surrogate relationship between PFS and OS at the trial level. Given that the HRs were centred around 1, the poor correlation may be partly due to the lack of variation in HRs rather than a poor surrogate relationship.
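That attenuation point can be illustrated with a few lines of simulation (purely made-up numbers): if the true effects across trials barely vary, the noise in each trial’s estimated hazard ratio dominates, and the observed trial-level correlation looks weak even when the underlying surrogate relationship is perfect.

```python
# Illustrative simulation: true log hazard ratios clustered tightly around 0
# (i.e. HRs near 1) with a perfect underlying PFS-OS relationship, plus
# trial-level estimation noise. The observed correlation comes out low.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 17
true_log_hr_pfs = rng.normal(0.0, 0.02, n_trials)  # little real variation across trials
true_log_hr_os = true_log_hr_pfs                    # perfect surrogacy by construction
noise_sd = 0.10                                     # assumed estimation error per trial

est_pfs = true_log_hr_pfs + rng.normal(0, noise_sd, n_trials)
est_os = true_log_hr_os + rng.normal(0, noise_sd, n_trials)
print(np.corrcoef(est_pfs, est_os)[0, 1])  # far below 1 despite perfect surrogacy
```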

The challenge remains for cost-effectiveness modelling when OS data are immature. Extrapolate OS with high uncertainty? Use a poor surrogate relationship with PFS? Or rely on formal expert elicitation? Hopefully methodologists are looking into this! In the meantime, regulators may wish to think again about licensing drugs with evidence only on PFS.

After 20 years of using economic evaluation, should NICE be considered a methods innovator? PharmacoEconomics [PubMed] Published 13th January 2020

NICE is starting a review of its methods and processes for health technology assessment. Mark Sculpher and Steve Palmer take this opportunity to reflect on how NICE’s methods have evolved over time and to propose areas ripe for an update.

It was very enjoyable to read about the history of the Methods Guide and how NICE has responded to its changing context, responsibilities, and new challenges. For example, the cost-effectiveness threshold of £20k-£30k/QALY was introduced by the 2004 Methods Guide. This threshold was reinforced by the 2019 Voluntary Scheme for Branded Medicines Pricing and Access. The funny thing is, although NICE is constrained to the £20k-£30k/QALY threshold, the Department of Health and Social Care routinely uses Claxton et al’s £13k/QALY benchmark.

Mark and Steve go through five key topics in health technology assessment to pick out the areas that should be considered for an update: health measurement and valuation, broader benefits, perspective, modelling, and uncertainty. For example, they discuss whether and how to consider caregiver burden and the benefits (and opportunity costs) that fall on caregivers, guidance on model validation, and the formal incorporation of value of information methods. These are all sorely needed and would definitely cement NICE’s position as the international standard-setter for health technology assessment.

Beyond NICE and the UK, I found that this paper provides a good overview of the hot topics in cost-effectiveness analysis for the next few years. A must-read for cost-effectiveness analysts!

Credits

5th IRDES-DAUPHINE Workshop on Applied Health Economics and Policy Evaluation

The fifth IRDES Workshop on Applied Health Economics and Policy Evaluation will take place in Paris, France, on June 20th-21st 2019. The workshop is organized by IRDES, the Institute for Research and Information in Health Economics, and the Chaire Santé Dauphine.

Submission and selection of papers. You are invited to submit a full paper before January 14th 2019. Papers will be selected by the scientific committee on the basis of full or advanced draft papers, written in English. Papers should include empirical material, and only papers unpublished at the time of submission will be accepted. The submission should contain the authors’ names and affiliations, a structured abstract, and keywords (up to five).
Authors have to submit their complete papers in PDF format through the Submission form.

Registration and fees. The registration fee is 200 euros. Only authors or co-authors can apply for registration. PhD students or early career researchers may benefit from free registration upon request.

Program. The workshop will cover the following topics, with an emphasis on public policy analysis and evaluation: social health inequalities, health services utilization, insurance, health services delivery and organization, and specific populations (the elderly, migrants, high-needs/high-costs patients, low-income households, etc.). About 16 papers will be selected. Each paper will be allocated 20 minutes for presentation and 20 minutes for discussion (introduced by a participant or a member of the scientific committee).

Scientific committee. Damien Bricard (IRDES), Andrew Clark (Paris School of Economics), Brigitte Dormont (Paris Dauphine University and Chaire santé Dauphine), Paul Dourgnon (IRDES), Agnès Gramain (Université Lorraine), Julien Mousquès (IRDES), Aurélie Pierre (IRDES), Erin Strumpf (McGill University, Montreal), Matt Sutton (University of Manchester).

Contact: ahepe@irdes.fr

Sam Watson’s journal round-up for 15th January 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Cost-effectiveness of publicly funded treatment of opioid use disorder in California. Annals of Internal Medicine [PubMed] Published 2nd January 2018

Deaths from opiate overdose have soared in the United States in recent years. In 2016, 64,000 people died this way, up from 16,000 in 2010 and 4,000 in 1999. The causes of public health crises like this are multifaceted, but we can identify two key issues that have contributed more than any others. Firstly, medical practitioners have been prescribing opiates irresponsibly for years. For the last ten years, well over 200,000,000 opiate prescriptions were issued per year in the US – enough for seven in every ten people. Once prescribed, opiate use is often not well managed. Prescriptions can be stopped abruptly, for example, leaving people with unexpected withdrawal syndromes and rebound pain. It is estimated that 75% of heroin users in the US began by using legal, prescription opiates. Secondly, drug suppliers have started cutting heroin with its far stronger but cheaper cousin, fentanyl. Given fentanyl’s strength, only a tiny amount is required to achieve the same effects as heroin, but the lack of pharmaceutical knowledge and equipment means it is often not measured or mixed appropriately into what is sold as ‘heroin’. There are two clear routes to alleviating the epidemic of opiate overdose: prevention, by ensuring responsible medical use of opiates, and ‘cure’, either by ensuring the quality and strength of heroin, or providing a means to stop opiate use. The former ‘cure’ is politically infeasible so it falls on the latter to help those already habitually using opiates. However, the availability of opiate treatment programs, such as opiate agonist treatment (OAT), is lacklustre in the US. OAT provides substitute opioid agonists, such as methadone or buprenorphine, to prevent withdrawal syndromes in users, who can then slowly be weaned off them.

This article looks at the cost-effectiveness of providing OAT for all persons seeking treatment for opiate use in California for an unlimited period versus standard care, which only provides OAT to those who have failed supervised withdrawal twice, and only for 21 days. The paper adopts a previously developed semi-Markov cohort model that includes states for treatment, relapse, incarceration, and abstinence. As far as I understand it, transition probabilities for the new OAT treatment were determined from treatment data for current OAT patients. This does raise a question about the generalisability of this population to the whole population of opiate users – given the need to have already been through two supervised withdrawals, this population may have a greater motivation to quit, for example. In any case, the article estimates that the OAT program would be cost-saving, through reductions in crime and incarceration, and would improve population health, by reducing the risk of death. Taken at face value these results seem highly plausible. But, as we’ve discussed before, drug policy rarely seems to be evidence-based.
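For readers unfamiliar with the model structure mentioned above, here is a minimal sketch of a Markov cohort model with those states. Every transition probability, cost, and utility below is invented purely for illustration; the published model is semi-Markov and far more detailed.

```python
# Illustrative sketch only: a simple Markov cohort model with the state
# structure described above (treatment, relapse, incarceration, abstinence,
# plus death). All numbers are made up; this is not the paper's model.
import numpy as np

states = ["treatment", "relapse", "incarcerated", "abstinent", "dead"]

# Annual transition probabilities (rows sum to 1) - purely hypothetical.
P = np.array([
    [0.70, 0.15, 0.02, 0.10, 0.03],   # from treatment
    [0.20, 0.55, 0.10, 0.05, 0.10],   # from relapse
    [0.10, 0.30, 0.50, 0.05, 0.05],   # from incarcerated
    [0.05, 0.10, 0.01, 0.82, 0.02],   # from abstinent
    [0.00, 0.00, 0.00, 0.00, 1.00],   # dead is absorbing
])

annual_cost = np.array([6000, 15000, 35000, 1000, 0])   # hypothetical costs per state-year
utility     = np.array([0.80, 0.60, 0.55, 0.90, 0.0])   # hypothetical QALY weights
discount = 0.03

cohort = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # everyone starts in treatment
total_cost = total_qaly = 0.0
for year in range(20):
    d = 1 / (1 + discount) ** year
    total_cost += d * cohort @ annual_cost
    total_qaly += d * cohort @ utility
    cohort = cohort @ P

print(f"Discounted cost per person: {total_cost:,.0f}")
print(f"Discounted QALYs per person: {total_qaly:.2f}")
```

Running the same structure with transition probabilities for the expanded OAT policy and for standard care, and comparing the incremental costs and QALYs, is the basic logic behind the paper’s cost-effectiveness result.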

The impact of aid on health outcomes in Uganda. Health Economics [PubMed] Published 22nd December 2017

Examining the response of population health outcomes to changes in health care expenditure has been the subject of a large and growing number of studies. One reason is to estimate a supply-side cost-effectiveness threshold: the health returns the health service achieves in response to budget expansions or contractions. Similarly, we might want to know the returns to particular types of health care expenditure. For example, there remains a debate about the effectiveness of aid spending in low and middle-income country (LMIC) settings. Aid spending may fail to be effective for reasons such as resource leakage, failure to target the right population, poor design and implementation, and crowding out of other public sector investment. Looking at these questions at an aggregate level can be tricky; the link between expenditure or expenditure decisions and health outcomes is long and causality flows in multiple directions. Effects are therefore likely to be small and noisy, and they require strong theoretical foundations to interpret. This article takes a different, and innovative, approach to looking at this question. In essence, the analysis boils down to a longitudinal comparison of those who live near large, aid-funded health projects with those who don’t. The expectation is that the benefit of any aid spending will be felt most acutely by those who live nearest to actual health care facilities that come about as a result of it. Indeed, this is shown by the results – proximity to an aid project reduced disease prevalence and work days lost to ill health, with greater effects observed closer to the project. One way of considering the ‘usefulness’ of this evidence is to ask how it can be used to improve policymaking, for example, in understanding the returns to investment, or over what area these projects have an impact. The latter is covered in the paper to some extent, but the former is hard to infer. A useful next step may be to try to quantify what kind of benefit aid dollars produce and how heterogeneous that benefit is.
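A minimal sketch of the kind of proximity comparison described above is given below. This is not the authors’ exact specification: the data are simulated and the variable names are placeholders.

```python
# Two-way fixed-effects regression of a health outcome on exposure to a
# nearby aid-funded project, before vs after it opens. Simulated data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_hh, years = 200, range(2010, 2016)
df = pd.DataFrame(
    [(hh, yr) for hh in range(n_hh) for yr in years],
    columns=["household_id", "year"],
)
df["near_project"] = df["household_id"] % 2            # half live near a project
df["post"] = (df["year"] >= 2013).astype(int)          # project opens in 2013
# Simulated outcome: fewer work days lost to illness near a project, post-opening.
df["ill_days"] = 5 - 1.5 * df["near_project"] * df["post"] + rng.normal(0, 2, len(df))

fit = smf.ols("ill_days ~ near_project:post + C(household_id) + C(year)", data=df).fit()
print(fit.params["near_project:post"])  # should recover roughly the simulated -1.5
```

The interaction term is the quantity of interest: the change in the outcome for households near a project, relative to the change for households further away, once household and year fixed effects are accounted for.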

The impact of social expenditure on health inequalities in Europe. Social Science & Medicine Published 11th January 2018

Let us consider for a moment how we might explore empirically whether social expenditure (e.g. unemployment support, child support, housing support, etc.) affects health inequalities. First, we establish a measure of health inequality: we take a proxy measure of health – this study uses self-rated health and self-rated difficulty in daily living – and compare these outcomes along some relevant measure of socioeconomic status (SES) – in this study, level of education and a compound measure of occupation, income, and education (the ISEI). So far, so good.

Data on levels of social expenditure are available in Europe and are used here, but oddly these data are converted to a percentage of GDP. The trouble with doing this is that this variable can change if social expenditure changes or if GDP changes. During the financial crisis, for example, social expenditure shot up as a proportion of GDP, which likely had very different effects on health and inequality than when social expenditure increased as a proportion of GDP due to a policy change under the Labour government. This variable also likely has little relationship to the level of support received per eligible person. Anyway, at the crudest level, we can then consider how the relationship between SES and health is affected by social spending. A more nuanced approach might consider who the recipients of social expenditure are and where they stand on our measure of SES, but I digress.

In the article, the baseline category for education is those with only primary education or less, which seems like an odd category to compare against since, given compulsory schooling ages, I would imagine this is a very small proportion of people in Europe – unless, of course, they are children. But including children in the sample would be an odd choice here since they don’t personally receive social assistance and are difficult to compare to adults. However, there are no descriptive statistics in the paper, so we don’t know, and no comparisons are made between other groups. Indeed, the estimates of the intercepts in the models are very noisy and variable for no obvious reason, other than perhaps the reference group being very small.

Despite the problems outlined so far, though, there is a potentially more serious one. The article uses a logistic regression model, which is perfectly justifiable given the binary or ordinal nature of the outcomes. However, the authors justify the conclusion that “Results show that health inequalities measured by education are lower in countries where social expenditure is higher” by demonstrating that the odds ratio for reporting a poor health outcome in the groups with greater than primary education, compared to primary education or less, is smaller in magnitude when social expenditure as a proportion of GDP is higher. But the conclusion does not follow from the premise. It is entirely possible for these odds ratios to change without any change in the variance of the underlying distribution of health, the relative ordering of people, or the absolute difference in health between categories, simply by shifting the whole distribution up or down. For example, if the proportions of people in two groups reporting a negative outcome are 0.3 and 0.4, which then change to 0.2 and 0.3 respectively, then the odds ratio comparing the two groups changes from 0.64 to 0.58, while the difference between them remains 0.1. No calculations are made regarding absolute effects in the paper, though. GDP is also shown to have a positive effect on health outcomes. All that might have been shown is that the relative difference in health outcomes between those with primary education or less and others changes as GDP changes because everyone is getting healthier. The question the article asks is interesting; it’s a shame about the execution.
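A quick check of the odds ratio arithmetic above:

```python
# The absolute gap between the two groups stays at 0.1, but the odds ratio
# moves simply because the whole distribution has shifted.
def odds(p):
    return p / (1 - p)

print(round(odds(0.3) / odds(0.4), 2))  # 0.64 before the shift
print(round(odds(0.2) / odds(0.3), 2))  # 0.58 after the shift
```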

Credits