Meeting round-up: Health Economists’ Study Group (HESG) Winter 2018

Last week’s biannual intellectual knees-up for UK health economists took place at City, University of London. We’ve written before about HESG, but if you need a reminder of the format you can read Lucy Abel’s blog post on the subject. This was the first HESG I’ve been to in a while that took place in an actual university building.

The conference kicked off for me with my colleague Grace Hampson’s first ever HESG discussion. It was an excellent discussion of Toby Watt’s paper on the impact of price promotions for cola, in terms of quantities purchased (they increase) and – by extension – sugar consumption. It was a nice paper with a clear theoretical framework and empirical strategy, which generated a busy discussion. Nutrition is a subject that I haven’t seen represented much at past HESG meetings, but there were several nutrition-related papers on the schedule this time around, including others by Jonathan James and Ben Gershlick. I expect it’s something we’ll see becoming more prevalent as policymaking in this area becomes more insistent.

The second and third sessions I attended were on the relationship between health and social care, which is a pressing matter in the UK, particularly with regard to achieving integrated care. Ben Zaranko’s paper considered substitution effects arising from changes in the relative budgets of health and social care. Jonathan Stokes and colleagues attempted to identify whether the Better Care Fund has achieved its goal of reducing secondary care use. That paper got a blazing discussion from Andrew Street, which triggered insightful debate in the room.

A recurring theme in many sessions was the challenge of communicating with local decision-makers, and the apparent difficulty of working without a reference case to fall back on (such as that of NICE). This is something that I have heard discussed regularly since at least the Winter 2016 meeting in Manchester. At City, it was most clearly addressed in Emma Frew’s paper describing the researchers’ experiences of working with local government. Qualitative research has clearly broken through at HESG, featuring in Emma’s paper as well as in a study by Hareth Al-Janabi on the subject of treatment spillovers on family carers.

I also saw a few papers that related primarily to matters of research conduct and publishing. Charitini Stavropoulou’s paper explored whether highly cited researchers are more likely to receive public funding, while the paper I chaired, by Anum Shaikh, explored the potential for recycling cost-effectiveness models. The latter was a joy for me, with much discussion of model registries!

There were plenty of papers that satisfied my own particular research interests. Right up my research street was Mauro Laudicella’s paper, which used real-world data to assess the cost savings associated with redirecting cancer diagnoses to GP referral rather than emergency presentation. I wasn’t quite as optimistic about the potential savings, given the standard worries about lead-time bias and selection effects, but it was a great paper nonetheless. Also using real-world evidence was Ewan Gray’s study, which supported the provision of adjuvant chemotherapy for early-stage breast cancer but delivered some perplexing findings about patient-GP decision-making. Ewan’s paper explored technical methodological challenges, though the prize for the most intellectually challenging paper undoubtedly goes to Manuel Gomes, who continued his crusade to make health economists better at dealing with missing data – this time for the case of quality of life data. Milad Karimi’s paper asked whether preferences over health states are informed. This is the kind of work I enjoy thinking about – whether measures like the EQ-5D capture what really matters and how we might do better.

As usual, many delegates worked hard and played hard. I took a beating from the schedule at this HESG, with my discussion taking place during the first session after the conference dinner (where we walked in the footsteps of the Spice Girls) and my chairing responsibilities falling on the last session of the last day. But in both cases, the audience was impressive.

I’ll leave the final thought for the blog post with Peter Smith’s plenary, which considered the role of health economists in a post-truth world. Happily for me, Peter’s ideas chimed with my own view that we ought to be taking our message to the man on the Clapham omnibus and supporting public debate. Perhaps our focus on (national) policymakers is too strong. Even where it wasn’t explicit, this theme could be seen throughout the meeting, whether in broader engagement with stakeholders, recognition of local decision-making processes, or the harnessing of storytelling through qualitative research. HESG members are STRETCHing the truth.


Chris Sampson’s journal round-up for 25th September 2017

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Good practices for real-world data studies of treatment and/or comparative effectiveness: recommendations from the Joint ISPOR-ISPE Special Task Force on Real-World Evidence in Health Care Decision Making. Value in Health Published 15th September 2017

I have an instinctive mistrust of buzzwords. They’re often used to avoid properly defining something, either because it’s too complicated or – worse – because it isn’t worth defining in the first place. For me, ‘real-world evidence’ falls foul of this. If your evidence isn’t from the real world, then it isn’t evidence at all. But I do like a good old ISPOR Task Force report, so let’s see where this takes us. Real-world evidence (RWE) and its sibling buzzword real-world data (RWD) relate to observational studies and other data not collected in an experimental setting. The purpose of this ISPOR task force (joint with the International Society for Pharmacoepidemiology) was to prepare some guidelines about the conduct of RWE/RWD studies, with a view to improving decision-makers’ confidence in them. Essentially, the hope is to create for RWE the kind of ecosystem that exists around RCTs, with procedures for study registration, protocols, and publication: a noble aim. The authors distinguish between two types of RWD study: ‘Exploratory Treatment Effectiveness Studies’ and ‘Hypothesis Evaluating Treatment Effectiveness Studies’. The idea is that the latter test a priori hypotheses, and these are the focus of this report. Seven recommendations are presented: i) pre-specify the hypotheses, ii) publish a study protocol, iii) publish the study with reference to the protocol, iv) enable replication, v) test hypotheses on a dataset separate from the one used to generate the hypotheses, vi) publicly address methodological criticisms, and vii) involve key stakeholders. Fair enough. But these are just good practices for research generally. It isn’t clear how they are in any way specific to RWE. Of course, that was always going to be the case. RWE-specific recommendations would be entirely contingent on whether or not one chose to define a study as using ‘real-world evidence’ (which you shouldn’t, because it’s meaningless). The authors are trying to fit a bag of square pegs into a hole of undefined shape. It isn’t clear to me why retrospective observational studies, prospective observational studies, registry studies, or analyses of routinely collected clinical data should all be treated the same, yet differently to randomised trials. Maybe someone can explain why I’m mistaken, but this report didn’t do it.

Are children rational decision makers when they are asked to value their own health? A contingent valuation study conducted with children and their parents. Health Economics [PubMed] [RePEc] Published 13th September 2017

Obtaining health state utility values for children presents all sorts of interesting practical and theoretical problems, especially if we want to use them in decisions about trade-offs with adults. For this study, the researchers conducted a contingent valuation exercise to elicit the preferences of children (aged 7-19) for reduced risk of asthma attacks, in terms of willingness to pay. The study was informed by two preceding studies that sought to identify the best way in which to present health risk and financial information to children. The participating children (n=370) completed questionnaires at school, which asked about socio-demographics, experience of asthma, risk behaviours, and altruism. They were reminded (in child-friendly language) of the idea of opportunity cost and asked to consider their own budget constraint. Baseline asthma attack risk and 3 risk-reduction scenarios were presented graphically. Two weeks later, the parents completed similar questionnaires. Only 9% of children were unwilling to pay for risk reduction, and most of those said that it was the mayor’s problem! In some senses, the children did a better job than their parents. The authors conducted 3 tests for ‘incorrect’ responses – 14% of adults failed at least one, while only 4% of children did so. Older children demonstrated better scope sensitivity. Of course, children’s willingness to pay was much lower in absolute terms than their parents’, because children have a much smaller budget. As a percentage of the budget, parents were – on average – willing to pay more than children. That seems reassuringly predictable. Boys and fathers were willing to pay more than girls and mothers. Having experience of frequent asthma attacks increased willingness to pay. Interestingly, teenagers were willing to pay less (as a proportion of their budget) than younger children… and so were the teenagers’ parents! Children’s willingness to pay was correlated with that of their own parents at the higher risk reductions, but not at the lowest. This study reports lots of interesting findings and opens up plenty of avenues for future research. But the take-home message is obvious. Kids are smart. We should spend more time asking them what they think.

Journal of Patient-Reported Outcomes: aims and scope. Journal of Patient-Reported Outcomes Published 12th September 2017

Here we have a new journal that warrants a mention. The journal is sponsored by the International Society for Quality of Life Research (ISOQOL), making it a sister journal of Quality of Life Research. One of its Co-Editors-in-Chief is the venerable David Feeny, of HUI fame. They’ll be looking to publish research using PRO(M) data from trials or routine settings, studies of the determinants of PROs, qualitative studies in the development of PROs; anything PRO-related, really. This could be a good journal for more thorough reporting of PRO data that can get squeezed out of a study’s primary outcome paper. Also, “JPRO” is fun to say. The editors don’t mention that the journal is open access, but the website states that it is, so APCs at the ready. ISOQOL members get a discount.

Research and development spending to bring a single cancer drug to market and revenues after approval. JAMA Internal Medicine [PubMed] Published 11th September 2017

We often hear that new drugs are expensive because they’re really expensive to develop. Then we hear about how much money pharmaceutical companies spend on marketing, and we baulk. The problem is, pharmaceutical companies aren’t forthcoming with their accounts, so researchers have to come up with more creative ways to estimate R&D spending. Previous studies have reported divergent estimates. Whether R&D costs ‘justify’ high prices remains an open question. For this study, the authors looked at public data from the US for 10 companies that had only one cancer drug approved by the FDA between 2007 and 2016. Not very representative, perhaps, but useful because it allows for the isolation of the development costs associated with a single drug reaching the market. The median time for drug development was 7.3 years. The most generous estimate of the mean cost of development came in at under a billion dollars – substantially less than some previous estimates. This looks like a bargain; the mean revenue for the 10 companies up to December 2016 was over $6.5 billion. This study may seem a bit back-of-the-envelope in nature, but that doesn’t mean it isn’t accurate. If anything, it warrants more confidence than some previous studies because the methods are entirely transparent.
