Meeting round-up: ISPOR Europe 2019

For many health economists, November is ISPOR Europe month, and this year was no exception! We gathered in the fantastic Bella Center in Copenhagen to debate, listen and breathe health economics and outcomes research from the 2nd to the 6th of November. Missed it? Would you like a recap? Stay tuned for the #ISPOREurope 2019 round-up!

Bella Center

My ISPOR week started with the fascinating course ‘Tools for reproducible real-world data analysis’ by Blythe Adamson and Rachael Sorg. My key take-home messages? Use an interface like R Markdown to produce documents in which code and results are generated together automatically. Use a version control platform like Phabricator to make code review easy. Write a detailed protocol, write the code to follow the protocol, and then check the code side by side with the protocol.
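To give a flavour of the workflow, here is a minimal R Markdown skeleton (my own illustration, not from the course; the file path and variable names are hypothetical). Knitting it re-runs the analysis and regenerates every number and figure, so the report cannot drift out of sync with the code.

````markdown
---
title: "RWD analysis (protocol v1.0)"
output: html_document
---

```{r setup, include = FALSE}
# Hypothetical analysis-ready cohort produced by an upstream data pipeline
cohort <- readRDS("data/cohort.rds")
```

The cohort includes `r nrow(cohort)` patients.

```{r age-distribution, echo = FALSE}
# Regenerated from the data on every knit; no copy-pasted figures
hist(cohort$age, main = "Age at index date", xlab = "Age (years)")
```
````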

Monday started with the impressive workshop on translating oncology clinical trial endpoints to real-world data (RWD) for decision making.

Keith Abrams set the scene. Electronic health records (EHRs) may be used to derive the overall survival (OS) benefit given the observed benefit on progression-free survival (PFS). Sylwia Bujkiewicz showed an example where a bivariate meta-analysis of RCTs was used to estimate the surrogate relationship between PFS and OS (paper here). Jessica Davies discussed some of the challenges, such as the lack of data on exposure to treatments in a way that matches the data recorded in trials. Federico Felizzi presented a method to determine the optimal treatment duration of a cancer drug (see here for the code).

Next up, the Women in HEOR session! Women in HEOR is an ISPOR initiative that aims to support the growth, development, and contribution of women. It included various activities at ISPOR Europe, such as dinners, receptions and, of course, this session.

Shelby Reed introduced the session, and Olivia Wu presented the overwhelming evidence on the benefits of diversity and on how to foster it in our work environment. Nancy Berg presented on ISPOR’s commitment to diversity and equality. We then heard from Sabina Hutchison about how to network in a conference environment, how to develop a personal brand and how to present our pitch. Have a look at my Twitter thread for the tips. For more information on the Women in HEOR activities at ISPOR Europe, search #WomenInHEOR on Twitter. Loads of cool information!

My Monday afternoon started with the provocatively titled ‘Time for change? Has the time come for the pharma industry to accept modest prices?’. Have a look here for my live Twitter thread. Kate Dion started by noting that the pressure is on for the pharmaceutical industry to reduce drug prices. Sarah Garner argued that lower prices lead to more patients being able to access the drug, which in turn increases the company’s income. Michael Schröter argued that innovative products should command a premium price, as with Hemlibra. Lastly, Jens Grueger supported the implementation of value-based pricing, given the cost-effectiveness threshold.

Keeping with the drug pricing theme, my next session was on indication-based pricing. Mireia Jofre Bonet tackled the question of whether a single price is stifling innovation. Adrian Towse was supportive of indication-based pricing because it allows the price to depend on the value of each indication and expands access to the full licensed population. Andrew Briggs argued against indication-based pricing for three reasons. First, it would give companies the maximum value-based price across all indications. Second, it would lead to greater drug expenditure, leading to greater opportunity costs. Third, it would be difficult to enforce, given that it would require the cooperation of all payers. Francis Arickx explained the pricing system in Belgium. Remarkably, prices can be renegotiated over time depending on new entrants to market and new evidence. Another excellent session at ISPOR Europe!

My final session on Monday was about the timely and important topic of approaches for OS extrapolation. Elisabeth Fenwick introduced the session by noting that innovations in oncology have given rise to different patterns of survival, with implications for extrapolation. Sven Klijn presented on the various available methods for survival extrapolation. John Whalen focused on mixture cure models for cost-effectiveness analysis. Steve Palmer argued that, although new methods, such as mixture cure models, may provide additional insight, the chosen approach should be justified and evidence-based, and alternatives should be explored. In sum, there is no single optimal method.
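For a flavour of what comparing such approaches looks like in practice, here is a minimal sketch in R using the flexsurv and flexsurvcure packages with flexsurv’s bundled breast cancer dataset – my own illustration, not the methods presented in the session:

```r
library(flexsurv)      # parametric survival models; loads 'survival' and ships the 'bc' dataset
library(flexsurvcure)  # mixture cure extensions of flexsurv

# Standard Weibull: assumes every patient eventually experiences the event
standard <- flexsurvreg(Surv(recyrs, censrec) ~ 1, data = bc, dist = "weibull")

# Mixture cure Weibull: a fraction of patients is assumed never to experience it
cure <- flexsurvcure(Surv(recyrs, censrec) ~ 1, data = bc,
                     dist = "weibull", mixture = TRUE)

# The two models can imply very different tails beyond the observed follow-up
summary(standard, t = c(5, 20), type = "survival")
summary(cure,     t = c(5, 20), type = "survival")
```

Both models may fit the observed period equally well while diverging sharply in the extrapolated tail, which is exactly why the chosen approach needs justification.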

On Tuesday, my first session was the impressive workshop on estimating cost-effectiveness thresholds based on the opportunity cost (twitter thread). Nancy Devlin set the scene by explaining the importance of getting the cost-effectiveness threshold right. James Lomas explained how to estimate the opportunity cost to the health care system, following the seminal work by Karl Claxton et al. and touching on some of James’s recent work. Martin Henriksson noted that, by itself, the opportunity cost is not sufficient to define the threshold if we wish to consider solidarity and need alongside cost-effectiveness. The advantage of knowing the opportunity cost is that we can make informed trade-offs between health maximisation and other elements of value. Danny Palnoch finished the panel by explaining the challenges when deciding what to pay for a new treatment.

Clearly there is a tension between the price that pharmaceutical companies feel is reasonable, the opportunity cost to the health care service, and the desire by stakeholders to use the drug. I feel this in every session of the NICE appraisal committee!

My next session was the compelling panel on the use of RWD to revisit the HTA decision (twitter thread). Craig Brooks-Rooney noted that, as regulators increasingly license technologies based on weaker evidence, HTA agencies are under pressure to adapt their methods to the available evidence. Adrian Towse proposed a conceptual framework to use RWD to revisit decisions based on value of information analysis. Jeanette Kusel went through examples where RWD has been used to inform NICE decisions, such as brentuximab vedotin. Anna Halliday discussed the many practical challenges of implementing RWD collection to inform re-appraisals. Anna finished with a caution against prolonging negotiations and appraisals, which could lead to delays to patient access.

My Wednesday started with the stimulating panel on drugs with tumour-agnostic indications. Clarissa Higuchi Zerbini introduced the panel and proposed some questions to be addressed. Rosa Giuliani contributed with the clinical perspective. Jacoline Bouvy discussed the challenges faced by NICE and ways forward in appraising tumour-agnostic drugs. Marc van den Bulcke finished the panel with an overview of how next generation sequencing has been implemented in Belgium.

My last session was the brilliant workshop on HTA methods for antibiotics.

Mark Sculpher introduced the topic. Antibiotic resistance is a major challenge for humanity, but the development of new antibiotics is declining. Beth Woods presented a new framework for HTA of antibiotics. The goal is to reflect the full value of antibiotics whilst accounting for the opportunity cost and uncertainties in the evidence (see this report for more details). Angela Blake offered the industry perspective. She argued that revenues should be delinked from volume, that value assessment should be holistic, and that we should be mindful of the incentives faced by drug companies. Nick Crabb finished by introducing a new project, by NICE and NHS England, on the feasibility of innovative value assessments for antibiotics.

And this is the end of the absolutely outstanding ISPOR Europe 2019! If you’re eager for more, have a look at the video below with my conference highlights!

Rita Faria’s journal round-up for 21st October 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Quantifying how diagnostic test accuracy depends on threshold in a meta-analysis. Statistics in Medicine [PubMed] Published 30th September 2019

A diagnostic test is often based on a continuous measure, e.g. cholesterol, which is dichotomised at a certain threshold to classify people as ‘test positive’, who should be treated, or ‘test negative’, who should not. In an economic evaluation, we may wish to compare the costs and benefits of using the test at different thresholds. For example, we might compare the cost-effectiveness of offering lipid-lowering therapy to people with cholesterol over 7 mmol/L versus over 5 mmol/L. This is straightforward to do if we have access to a large dataset comparing the test to its gold standard, from which we can estimate its sensitivity and specificity at various thresholds. It is quite the challenge if we only have aggregate data from multiple publications.
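With individual-level data, accuracy at any threshold drops out directly, as in this toy sketch in R (simulated data standing in for the real dataset):

```r
set.seed(42)

# Simulated individual-level data: cholesterol (mmol/L) and gold-standard disease status
n <- 1000
disease     <- rbinom(n, 1, 0.2)
cholesterol <- rnorm(n, mean = ifelse(disease == 1, 6.5, 5.5), sd = 1)

# Sensitivity and specificity of the rule 'test positive if cholesterol > threshold'
accuracy_at <- function(threshold) {
  test_positive <- cholesterol > threshold
  c(sensitivity = mean(test_positive[disease == 1]),
    specificity = mean(!test_positive[disease == 0]))
}

sapply(c(5, 6, 7), accuracy_at)  # accuracy at three candidate thresholds
```

The synthesis problem is recovering this whole curve when each published study reports sensitivity and specificity at only one or two thresholds.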

In this brilliant paper, Hayley Jones and colleagues report on a new method to synthesise diagnostic accuracy data from multiple studies. It consists of a multinomial meta-analysis model that can estimate how accuracy depends on the diagnostic threshold. This method produces estimates that can be used to parameterise an economic model.

These new developments in evidence synthesis are very exciting and really important to improve the data going into economic models. My only concern is that the model is implemented in WinBUGS, which is not software that many applied analysts use. Would it be possible to have a tutorial or, even better, to include this method in the online tools available on the Complex Reviews Support Unit website?

Early economic evaluation of diagnostic technologies: experiences of the NIHR Diagnostic Evidence Co-operatives. Medical Decision Making [PubMed] Published 26th September 2019

Keeping with the diagnostic theme, this paper by Lucy Abel and colleagues reports on the experience of the Diagnostic Evidence Co-operatives in conducting early modelling of diagnostic tests. These were established in 2013 to help developers of diagnostic tests link up with clinical and academic experts.

The paper discusses eight projects where economic modelling was conducted at an early stage of project development. It was fascinating to read about the collaboration between academics and test developers. One of the positive aspects was the buy-in of the developers, while a less positive one was the pressure to produce evidence quickly, and evidence that supported the product.

The paper is excellent in discussing the strengths and challenges of these projects. Of note, there were challenges in mapping out a clinical pathway, selecting the appropriate comparators, and establishing the consequences of testing. Furthermore, they found that the parameters around treatment effectiveness were the key driver of cost-effectiveness in many of the evaluations. This is not surprising, given that the benefits of a test usually lie in better informing management decisions, rather than in its direct costs and benefits. It definitely resonates with my own experience in conducting economic evaluations of diagnostic tests (see, for example, here).

Following on from the challenges, the authors suggest areas for methodological research: mapping the clinical pathway, ensuring model transparency, and modelling sequential tests. They finish with advice for researchers doing early modelling of tests, although I’d say that it would be applicable to any economic evaluation. I completely agree that we need better methods for economic evaluation of diagnostic tests. This paper is a useful first step in setting up a research agenda.

A second chance to get causal inference right: a classification of data science tasks. Chance [arXiv] Published 14th March 2019

This impressive paper by Miguel Hernan, John Hsu and Brian Healy is an essential read for all researchers, analysts and scientists. Miguel and colleagues classify data science tasks into description, prediction and counterfactual prediction. Description is using data to quantitatively summarise some features of the world. Prediction is using the data to know some features of the world given our knowledge about other features. Counterfactual prediction is using the data to know what some features of the world would have been if something hadn’t happened; that is, causal inference.

I found the explanation of the difference between prediction and causal inference quite enlightening. It is not about the amount of data or the statistical/econometric techniques. The key difference is in the role of expert knowledge. Predicting requires expert knowledge to specify the research question, the inputs, the outputs and the data sources. Additionally, causal inference requires expert knowledge “also to describe the causal structure of the system under study”. This causal knowledge is reflected in the assumptions, in the ideas for the data analysis, and in the interpretation of the results.

The section on implications for decision-making makes some important points. First, that the goal of data science is to help people make better decisions. Second, that predictive algorithms can tell us that decisions need to be made but not which decision is most beneficial – for that, we need causal inference. Third, many of us work on complex systems for which we don’t know everything (the human body is a great example). Because we don’t know everything, it is impossible to predict with certainty what the consequences of an intervention would be for a specific individual using routine health records. At most, we can estimate the average causal effect, but even for that we need assumptions. The relevance to the latest developments in data science is obvious, given all the hype around real world data, artificial intelligence and machine learning.

I absolutely loved reading this paper and wholeheartedly recommend it for any health economist. It’s a must read!


Chris Sampson’s journal round-up for 14th October 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Transparency in health economic modeling: options, issues and potential solutions. PharmacoEconomics [PubMed] Published 8th October 2019

Reading this paper was a strange experience. The purpose of the paper, and its content, is much the same as a paper of my own, which was published in the same journal a few months ago.

The authors outline what they see as the options for transparency in the context of decision modelling, focusing on open source models and on for whom the details are made transparent. Models might be transparent to a small number of researchers (e.g. in peer review), to HTA agencies, or to the public at large. The paper includes a figure showing the two aspects of transparency, termed ‘reach’ and ‘level’, which relate to the number of people who can access the information and the level of detail made available. We provided a similar figure in our paper, using the terms ‘breadth’ and ‘depth’, which is at least some validation of our idea. The authors then go on to discuss five ‘issues’ with transparency: copyright, model misuse, confidential data, software, and time/resources. These issues are framed as questions, to which the authors posit some answers as solutions.

Perhaps inevitably, I think our paper does a better job, and so I’m probably over-critical of this article. Ours is more comprehensive, if nothing else. But I also think the authors make a few missteps. There’s a focus on models created by academic researchers, which oversimplifies the discussion somewhat. Open source modelling is framed as a more complete solution than it really is. The ‘issues’ that are discussed are at points framed as drawbacks or negative features of transparency, which they aren’t. Certainly, they’re challenges, but they aren’t reasons not to pursue transparency. ‘Copyright’ seems to be used as a synonym for intellectual property, and transparency is considered to be a threat to this. The authors’ proposed solution here is to use licensing fees. I think that’s a bad idea. Levying a fee creates an incentive to disregard copyright, not respect it.

It’s a little ironic that both this paper and my own were published, when both describe the benefits of transparency in terms of reducing “duplication of efforts”. No doubt, I read this paper with a far more critical eye than I normally would. Had I not published a paper on precisely the same subject, I might’ve thought this paper was brilliant.

If we recognize heterogeneity of treatment effect can we lessen waste? Journal of Comparative Effectiveness Research [PubMed] Published 1st October 2019

This commentary starts from the premise that a pervasive overuse of resources creates a lot of waste in health care, which I guess might be true in the US. Apparently, this is because clinicians have an insufficient understanding of heterogeneity in treatment effects and therefore assume average treatment effects for their patients. The authors suggest that this situation is reinforced by clinical trial publications tending to only report average treatment effects. I’m not sure whether the authors are arguing that clinicians are too knowledgeable and dependent on the research, or that they don’t know the research well enough. Either way, it isn’t a very satisfying explanation of the overuse of health care. Certainly, patients could benefit from more personalised care, and I would support the authors’ argument in favour of stratified studies and the reporting of subgroup treatment effects. The most insightful part of this paper is the argument that these stratifications should be on the basis of observable characteristics. It isn’t much use to your general practitioner if personalisation requires genome sequencing. In short, I agree with the authors’ argument that we should do more to recognise heterogeneity of treatment effects, but I’m not sure it has much to do with waste.

No evidence for a protective effect of education on mental health. Social Science & Medicine Published 3rd October 2019

When it comes to the determinants of health and well-being, I often think back to my MSc dissertation research. As part of that, I learned that a) stuff that you might imagine to be important often isn’t and b) methodological choices matter a lot. Though it wasn’t the purpose of my study, it seemed from this research that higher education has a negative effect on people’s subjective well-being. But there isn’t much research out there to help us understand the association between education and mental health in general.

This study adds to a small body of literature on the impact of changes in compulsory schooling on mental health. In (West) Germany, education policy was determined at the state level, so when compulsory schooling was extended from eight to nine years, different states implemented the change at different times between 1949 and 1969. This study includes 5,321 people, with 20,290 person-year observations, from the German Socio-Economic Panel survey (SOEP). Inclusion was based on people being born seven years either side of the cutoff birth year for which the longer compulsory schooling was enacted, with a further restriction to people aged between 50 and 85. The SOEP includes the SF-12 questionnaire, which yields a mental health component score (MCS). There is also an 11-point life satisfaction scale. The authors use an instrumental variable approach, using the policy change as an instrument for years of schooling and estimating a standard two-stage least squares model. The MCS score, life satisfaction score, and a binary indicator for MCS score lower than or equal to 45.6, are all modelled as separate outcomes.
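For readers less familiar with the approach, here is a minimal sketch of a two-stage least squares model of this kind in R, using ivreg() from the AER package – the dataset and variable names are my own placeholders, not the paper’s:

```r
library(AER)  # provides ivreg() for two-stage least squares

# mcs:       SF-12 mental health component score (outcome)
# schooling: years of schooling (the endogenous exposure)
# reform:    1 if the respondent's state/cohort was subject to the 9-year rule (instrument)
# Covariates such as birth cohort and state enter both stages of the model
iv_model <- ivreg(
  mcs ~ schooling + factor(birth_year) + factor(state) |
        reform    + factor(birth_year) + factor(state),
  data = soep
)
summary(iv_model, diagnostics = TRUE)  # includes weak-instrument and Wu-Hausman tests
```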

Estimates using an OLS model show a positive and highly significant effect of years of schooling on all three outcomes. But when the instrumental variable model is used, this effect disappears. In this model, an additional year of schooling is associated with a statistically and clinically insignificant decrease in the MCS score. The findings that more years of schooling increase the likelihood of developing symptoms of a mental health disorder (as indicated by the MCS threshold of 45.6) and that life satisfaction is slightly lower were also insignificant. The same model shows a positive effect on physical health, which corresponds with previous research and provides some reassurance that the model could detect an effect if one existed.

The specification of the model seems reasonable and a host of robustness checks are reported. The only potential issue I could spot is that a person’s state of residence at the time of schooling is not observed, and so their location at entry into the sample is used. Given that education is associated with mobility, this could be a problem, and I would have liked to see the authors subject it to more testing. The overall finding – that an additional year of school for people who might otherwise only stay at school for eight years does not improve mental health – is persuasive. But the extent to which we can say anything more general about the impact of education on well-being is limited. What if it had been three years of additional schooling, rather than one? There is still much work to be done in this area.

Scientific sinkhole: the pernicious price of formatting. PLoS One [PubMed] Published 26th September 2019

This study is based on a survey that asked 372 researchers from 41 countries about the time they spent formatting manuscripts for journal submission. Let’s see how I can frame this as health economics… Well, some of the participants are health researchers. The time they spend on formatting journal submissions is time not spent on health research. The opportunity cost of time spent formatting could be measured in terms of health.

The authors focused on the time and wage costs of formatting. The results showed that formatting took a median time of 52 hours per person per year, at a cost of $477 per manuscript or $1,908 per person per year. Researchers spend – on average – 14 hours on formatting a manuscript. That’s outrageous. I have never spent that long on formatting. If you do, you only have yourself to blame. Or maybe it’s just because of what I consider to constitute formatting. The survey asked respondents to consider formatting of figures, tables, and supplementary files. Improving the format of a figure or a table can add real value to a paper. A good figure or table can change a bad paper to a good paper. I’d love to know how the time cost differed for people using LaTeX.
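As a quick consistency check on those figures: $1,908 per person per year divided by $477 per manuscript implies roughly four manuscripts per person per year, and 52 hours spread over four manuscripts is 13 hours each – close to the reported 14-hour average, though the annual hours figure is a median, so the numbers won’t reconcile exactly.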
