Ian Cromwell’s journal round-up for 17th February 2020

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Does the use of health technology assessment have an impact on the utilisation of health care resources? Evidence from two European countries. European Journal of Health Economics [PubMed] Published 5th February 2020

The ostensible purpose of health technology assessment (HTA) is to provide health care decision-makers with the information they need when considering whether to change existing policies. One of the questions I’ve heard muttered sotto voce (and that I will admit to having asked myself in more cynical moments) is whether or not HTAs actually make a difference. We are generating lots of evidence, but does it have any real impact on decision-making? Do the complex analyses health economists undertake actually influence policy?

This paper used data from Catalonia and England to estimate the impact of a positive HTA recommendation – from the National Institute for Health and Care Excellence (NICE) in England, and from a collection of regional approval bodies in Catalonia and Spain – on trends in the usage of new cancer drugs between 2011 and the end of 2016. Utilization (volume of drugs dispensed) and expenditure were extracted from retrospective records. The authors built a Poisson regression model that allowed them to observe temporal trends in usage before and after a positive recommendation.
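For intuition, here is a minimal sketch in R of this kind of pre/post Poisson model, using simulated data; the variable names and numbers are mine, not the paper’s:

```r
# Hypothetical monthly dispensing counts with an underlying upward trend
# and a jump after a positive recommendation at month 24.
set.seed(42)
months <- 1:48
post   <- as.integer(months > 24)  # indicator for the post-recommendation period
counts <- rpois(48, lambda = exp(3 + 0.02 * months + 0.4 * post))

# Poisson regression with a time trend and a recommendation effect
fit <- glm(counts ~ months + post, family = poisson(link = "log"))
summary(fit)
exp(coef(fit)["post"])  # rate ratio associated with the recommendation
```

The point of the time trend term is to separate any jump at the recommendation date from underlying growth in usage that was happening anyway.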

The authors noted that a lack of pre-recommendation utilization data made it difficult to fit a comparable model for negative recommendations (which is the more cynical version of the question!), so it is important to recognize that as a limitation of the approach. They also note, however, that in the UK and Catalonia access to new drugs is typically conditional on a positive recommendation. Spain has a different system, in which medicines may still be available even if they are not recommended.

The results of the model are a bit more complex than can easily fit into a blog post, but the bottom line is that a positive recommendation does produce an increase in utilization. What stuck out to me about the descriptive findings was the consistent trend toward increased usage before the recommendation was even published. The Poisson model, however, found a significant effect of the recommendation even after controlling for that temporal trend. The authors helpfully noted that the criteria underpinning a recommendation differ between England and Spain (cost per QALY in England; sometimes clinical effectiveness alone in Spain), which makes inter-country comparisons challenging.

Health‐related quality of life in oncology drug reimbursement submissions in Canada: a review of submissions to the pan‐Canadian Oncology Drug Review. Cancer [PubMed] Published 1st January 2020

In Canada, newly-developed cancer drugs undergo HTA through the pan-Canadian Oncology Drug Review (pCODR), a program run under the auspices of the Canadian Agency for Drugs and Technologies in Health (CADTH). Unlike NICE’s recommendations in the UK, pCODR recommendations are not binding; they are intended instead to provide provincial decision-makers with expert evidence they can use when deciding whether or not to add drugs to their formularies.

This paper, written by researchers at the Canadian Centre for Applied Research in Cancer Control (ARCC), reviewed the publicly-available reports underlying 43 pCODR recommendations made between 2015 and 2018. The paper summarizes the findings of the cost-effectiveness analyses generated in each report, including incremental costs and incremental QALYs (incremental cost per QALY being the reference case used by CADTH). The authors also appraised the methods chosen within each submission, both in terms of decision model structure and data inputs.

Interestingly, and perhaps disconcertingly, the paper reports a notable discrepancy between the ICERs reported by the submitting manufacturer and those calculated by CADTH’s Economics Guidance Panel. This appeared to be largely driven by the kind of health-related quality of life (HRQoL) data used to generate the QALYs in each submission. The authors note that the majority (56%) of the submissions provided to pCODR didn’t collect HRQoL data alongside clinical trials, preferring instead to use values published in the literature. In the face of high levels of uncertainty and relatively small incremental benefits (the median change in QALYs was 0.86), it seems crucial to have reliable information about HRQoL for making these kinds of decisions.
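To see how much the HRQoL source can matter when incremental benefits are small, here is a toy calculation (the numbers are invented, not drawn from the paper):

```r
# Toy illustration: the same incremental cost divided by QALY gains
# derived from two different HRQoL sources gives quite different ICERs.
delta_cost <- 80000   # hypothetical incremental cost per patient
qaly_trial <- 0.86    # QALY gain using trial-collected HRQoL
qaly_lit   <- 1.10    # QALY gain using literature-based utility values

delta_cost / qaly_trial  # ~93,000 per QALY
delta_cost / qaly_lit    # ~72,700 per QALY
```

With a decision threshold sitting anywhere between those two figures, the choice of utility source alone could flip a recommendation.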

Regulatory and advisory agencies like CADTH have a rather weighty responsibility, not only to help decision makers identify which new drugs and technologies the health care system should adopt, but also which ones they should reject. When manufacturers’ submissions rely on inappropriate data with high levels of uncertainty, this task becomes much more difficult. The authors suggest that manufacturers should be collecting their own HRQoL data in clinical trials they fund. After all, if we want HTAs to have an effect on policy-making, we should also make sure they’re having a positive effect.

The cost-effectiveness of limiting federal housing vouchers to use in low-poverty neighborhoods in the United States. Public Health [PubMed] Published January 2020

My undergraduate education was heavily steeped in discussions of the social determinants of health. Another cynical opinion I’ve heard (again sometimes from myself) is that health economics is disproportionately concerned with the adoption of new drugs that have a marginal effect on health, often at the expense of investment in the other, non-health-care determinants. This is a particularly persuasive bit of cynicism when you consider cancer drugs like those in the previous two examples, where the incremental benefits are typically modest and the costs typically high. That’s why I was especially excited to see this paper published by my friend Dr. Zafar Zafari, applying health economic analysis frameworks to something atypical: housing policy.

The authors evaluated a trial running alongside a program providing housing vouchers to 4600 low-income households. The experimental condition in this case was that the vouchers could only be used in well-off neighbourhoods (i.e., those with a low level of poverty). The authors considered the evidence linking neighbourhood wealth to lower rates of obesity-related health conditions like diabetes, and used that evidence to construct a Markov decision model to estimate incremental cost per QALY over the length of the study (10-15 years). Cohort characteristics, relative clinical effectiveness, and costs of the voucher program were estimated from trial results, with other costs and probabilities derived from the literature.
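For readers who haven’t built one, here is a stylised sketch of a Markov cohort model of this general shape in R; the states, transition probabilities, utilities, and discount rate are all invented for illustration and are not the paper’s:

```r
# Three-state Markov cohort model: healthy -> diabetes -> dead
states <- c("healthy", "diabetes", "dead")
P <- matrix(c(0.96, 0.03, 0.01,   # annual transition probabilities (invented)
              0.00, 0.93, 0.07,
              0.00, 0.00, 1.00),
            nrow = 3, byrow = TRUE, dimnames = list(states, states))

n_cycles <- 15
trace <- matrix(NA, n_cycles + 1, 3, dimnames = list(0:n_cycles, states))
trace[1, ] <- c(1, 0, 0)  # the whole cohort starts healthy
for (t in 1:n_cycles) trace[t + 1, ] <- trace[t, ] %*% P

u    <- c(healthy = 0.85, diabetes = 0.70, dead = 0)  # state utilities
disc <- 1 / (1.03 ^ (0:n_cycles))                     # 3% annual discounting
sum((trace %*% u) * disc)                             # discounted QALYs per person
```

Note how the discount factor enters every cycle’s payoff: push the health gains further into the future and the discount rate starts to dominate the result.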

Compared to the control group (public housing), use of the housing vouchers provided an additional 0.23 QALYs per person at a lower cost (about $750 less per person). Importantly, these findings were highly robust to parameter uncertainty, with 99% of ICERs falling below a willingness-to-pay threshold of $20,000/QALY, and more than 90% below a WTP threshold of $0/QALY (that is, in over 90% of simulations the intervention was both cheaper and more effective). The model was highly sensitive to the discount rate, which makes sense: for a chronic condition like diabetes and a distal relationship like housing, we would expect the incremental health gains to occur years after the initial intervention.

There are a lot of things to like about this paper, but the one that stands out to me is the way they’ve framed the question:

We seek to inform the policy debate over the wisdom of spending health dollars on non-health sectors of the economy by defining the trade-off, or ‘opportunity cost’ of such a decision.

The idea that “health funds” should be focussed on “health care” robs us of the opportunity to consider the health impact of interventions in other policy areas. By bringing something like housing explicitly into the realm of cost-per-QALY analysis, the authors invite us all to consider the kinds of trade-offs we make when we relegate our consideration of health only to the kinds of things that happen inside hospitals.

A multidimensional array representation of state-transition model dynamics. Medical Decision Making [PubMed] Published 28th January 2020

I’ve been building models in R for a few years now, and developed a method of my own more or less out of necessity. So I’ve always been impressed with and drawn to the work of the group Decision Analysis in R for Technologies in Health (the amazingly-named DARTH). I’ve had the opportunity to meet a couple of their scientists and have followed their work for a while, and so I was really pleased to see the publication of this paper, hot on the heels of another paper discussing a formalized approach to model construction in R, and timed to coincide with the publication of a step-by-step guidebook on how to build models according to the DARTH recipe.

The DARTH approach (and, as a happy coincidence, mine too) involves tapping into R’s powerful ability to organize data into multidimensional arrays. The paper talks in depth about how R arrays can be used to represent health states, and how to set up and program models of essentially any level of complexity using a set of basic R commands. As a bonus, they include publicly-accessible sample code that you can follow along with as you read (which is the best way to learn something like this).
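To give a flavour of the general idea (this is my own toy code, not DARTH’s), a three-dimensional array lets you store a separate transition matrix for every cycle:

```r
# Transition probabilities indexed as [from-state, to-state, cycle],
# here with mortality that rises over time. All numbers are invented.
states   <- c("healthy", "sick", "dead")
n_cycles <- 10
a_P <- array(0, dim = c(3, 3, n_cycles),
             dimnames = list(states, states, 1:n_cycles))

for (t in 1:n_cycles) {
  p_die <- 0.01 * t  # background mortality increasing with each cycle
  a_P["healthy", , t] <- c(0.90 - p_die, 0.10, p_die)
  a_P["sick",    , t] <- c(0.10, 0.80 - p_die, 0.10 + p_die)
  a_P["dead",    , t] <- c(0.00, 0.00, 1.00)
}

a_P[, , 5]  # the transition matrix applied at cycle 5
```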

The authors argue that the method they propose is ideal for capturing and reflecting “transition rewards” – that is, effects on the cohort that occur during transitions between health states – in addition to “state rewards” (effects that accrue as a consequence of being within a state). The key to this dynamics-array approach is the use of a three-dimensional array to store the transitions, with the third dimension representing the passage of time. After walking the reader through the theory, the authors present a sample three-state model and show that the new method is fast, efficient, and accurate.
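Continuing my toy example from above (again, not DARTH’s code), the distinction looks something like this: a state reward is paid to everyone sitting in a state, while a transition reward is paid on the flow between two states in a given cycle:

```r
# State rewards (per-cycle costs) plus a one-off transition reward
# incurred on the move from "sick" to "dead". Uses a_P from the sketch above.
m_trace <- matrix(0, n_cycles + 1, 3, dimnames = list(0:n_cycles, states))
m_trace[1, ] <- c(1, 0, 0)

c_state <- c(healthy = 100, sick = 1000, dead = 0)  # state rewards (invented)
c_death <- 5000                                     # transition reward (invented)

total_cost <- 0
for (t in 1:n_cycles) {
  # cohort share flowing sick -> dead this cycle triggers the one-off cost
  flow_sick_dead <- m_trace[t, "sick"] * a_P["sick", "dead", t]
  total_cost <- total_cost +
    sum(m_trace[t, ] * c_state) + flow_sick_dead * c_death
  m_trace[t + 1, ] <- m_trace[t, ] %*% a_P[, , t]
}
total_cost
```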

I hope that I have been sufficiently clear that I am a big fan of DARTH and admire their work a great deal, because there is one big criticism I have to level at them: this paper (and the others I have cited) is not terribly easy to follow. It rather presumes that you already understand a lot of the topics under discussion, which I personally do not. And if I, someone who has built many array-based models in R, am having a tough time understanding the explanation of their approach, then woe betide anyone reading this paper without a firm grasp of R, decision modelling theory, matrix algebra, and a handful of the other topics required to benefit from this (truly excellent) work.

DARTH is laying down a well-thought-out path to revolutionizing the standard approach to model building, but they can only do that if people start adopting their approach. If I were a grad student hoping to build my first model, this paper would likely intimidate me into retreating to the default of building it in Excel. For a postdoc with my own way of doing things, there is a big opportunity cost to switching, and part of that cost is feeling too dumb to follow the instructions. I know that DARTH has tutorials and courses and workshops to help people get up to speed, but I hope that they also have a plan to translate some of this knowledge into a form that is more accessible for casual coders, non-economists, and other people who need this info but who (like me) might find this format opaque.


Chris Sampson’s journal round-up for 30th September 2019


A need for change! A coding framework for improving transparency in decision modeling. PharmacoEconomics [PubMed] Published 24th September 2019

We’ve featured a few papers in recent round-ups that (I assume) will be included in an upcoming themed issue of PharmacoEconomics on transparency in modelling. It’s shaping up to be a good one. The value of transparency in decision modelling has been recognised, but simply making the stuff visible is not enough – it needs to make sense. The purpose of this paper is to help make that achievable.

The authors highlight that the writing of analyses, including coding, involves personal style and preferences. To aid transparency, we need a systematic framework of conventions that make the inner workings of a model understandable to any (expert) user. The paper describes a framework developed by the Decision Analysis in R for Technologies in Health (DARTH) group. The DARTH framework builds on a set of core model components, generalisable to all cost-effectiveness analyses and model structures. There are five components – i) model inputs, ii) model implementation, iii) model calibration, iv) model validation, and v) analysis – and the paper describes the role of each. Importantly, the analysis component can be divided into several parts relating to, for example, sensitivity analyses and value of information analyses.

Based on this framework, the authors provide recommendations for organising and naming files and on the types of functions and data structures required. The recommendations build on conventions established in other fields and in the use of R generally. The authors recommend implementing the model as functions in R, and relate general recommendations to the context of decision modelling. We’re also introduced to unit testing, which will be unfamiliar to most Excel modellers but which can be implemented relatively easily in R. The roles of various tools are introduced, including RStudio, R Markdown, Shiny, and GitHub.
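For Excel modellers wondering what unit testing actually looks like, here is a toy example using the testthat package; check_probs() is my own made-up helper, not part of the DARTH framework:

```r
library(testthat)

# A helper that validates a transition probability matrix
check_probs <- function(P) {
  all(P >= 0) && all(abs(rowSums(P) - 1) < 1e-12)
}

test_that("transition matrix rows are valid probabilities", {
  P <- matrix(c(0.9, 0.1,
                0.2, 0.8), nrow = 2, byrow = TRUE)
  expect_true(check_probs(P))        # a valid matrix passes
  expect_false(check_probs(P * 1.1)) # rows no longer sum to 1
})
```

Tests like this run automatically every time the model changes, catching the kind of silent error that can lurk unnoticed in a spreadsheet.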

The real value of this work lies in the linked R packages and other online material, which you can use to test out the framework and consider its application to whatever modelling problem you might have. The authors provide an example using a basic Sick-Sicker model, which you can have a play with using the DARTH packages. In combination with the online resources, this is a valuable paper that you should have to hand if you’re developing a model in R.

Accounts from developers of generic health state utility instruments explain why they produce different QALYs: a qualitative study. Social Science & Medicine [PubMed] Published 19th September 2019

It’s well known that different preference-based measures of health will generate different health state utility values for the same person. Yet, they continue to be used almost interchangeably. For this study, the authors spoke to people involved in the development of six popular measures: QWB, 15D, HUI, EQ-5D, SF-6D, and AQoL. Their goal was to understand the bases for the development of the measures and to explain why the different measures should give different results.

At least one original developer for each instrument was recruited, along with people involved at later stages of development. Semi-structured interviews were conducted with 15 people, with questions on the background, aims, and criteria for the development of the measure, and on the descriptive system, preference weights, performance, and future development of the instrument.

Five broad topics were identified as being associated with differences in the measures: i) knowledge sources used for conceptualisation, ii) development purposes, iii) interpretations of what makes a ‘good’ instrument, iv) choice of valuation techniques, and v) the context for the development process. The online appendices provide some useful tables that summarise the differences between the measures. The authors distinguish between measures based on ‘objective’ definitions (QWB) and items that people found important (15D). Some prioritised sensitivity (AQoL, 15D), others prioritised validity (HUI, QWB), and several focused on pragmatism (SF-6D, HUI, 15D, EQ-5D). Some instruments had modest goals and opportunistic processes (EQ-5D, SF-6D, HUI), while others had grand goals and purposeful processes (QWB, 15D, AQoL). The use of some measures (EQ-5D, HUI) extended far beyond what the original developers had anticipated. In short, different measures were developed with quite different concepts and purposes in mind, so it’s no surprise that they give different results.

This paper provides some interesting accounts and views on the process of instrument development. It might prove most useful in understanding different measures’ blind spots, which can inform the selection of measures in research, as well as future development priorities.

The emerging social science literature on health technology assessment: a narrative review. Value in Health Published 16th September 2019

Health economics provides a good example of multidisciplinarity, with economists, statisticians, medics, epidemiologists, and plenty of others working together to inform health technology assessment. But I still don’t understand what sociologists are talking about half of the time. Yet, it seems that sociologists and political scientists are busy working on the big questions in HTA, as demonstrated by this paper’s 120 references. So, what are they up to?

This article reports on a narrative review based on 41 empirical studies. Three broad research themes are identified: i) what drove the establishment and design of HTA bodies? ii) what has been the influence of HTA? and iii) what have been the social and political influences on HTA decisions? Some have argued that HTA is inevitable, while others have argued that there are alternative arrangements. Either way, no two systems are the same and it is not easy to explain the differences. It’s important to understand HTA in the context of other social tendencies and trends, recognising that HTA both influences and is influenced by these. The authors provide a substantial discussion of the role of stakeholders in HTA and the potential for some to attempt to game the system. Uncertainty abounds in HTA, which necessarily requires negotiation and limits the extent to which HTA can rely on objectivity and rationality.

Something lacking is a critical history of HTA as a discipline and the question of what HTA is actually good for. There’s also not a lot of work out there on culture and values, which contrasts with medical sociology. The authors suggest that sociologists and political scientists could be more closely involved in HTA research projects. I suspect that such a move would be more challenging for the economists than for the sociologists.


Meeting round-up: R for Cost-Effectiveness Analysis Workshop 2019

I have switched to using R for my cost-effectiveness models, but I know that I am not using it to its full potential. As a fledgling R user, I was keen to hear about other people’s experiences. I’m in the process of updating one of my models and know that I could be coding things better. But with so many packages and ways to code, the options seem infinite and I struggle to know where to begin. In an attempt to remedy this, I attended the Workshop on R for trial and model-based cost-effectiveness analysis hosted at UCL. I was not disappointed.

The day showcased speakers with varying levels of coding expertise doing a wide range of cool things in R. We started with examples of implementing decision trees using the CEdecisiontree package and cohort Markov models. We also got to hear about a population model using the HEEMOD package, and the purrr package was suggested for probabilistic sensitivity analyses. These talks highlighted how, compared to Excel, R can be reusable, faster, transparent, iterative, and open source.
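As a rough sketch of the purrr suggestion, a PSA amounts to mapping the model over a table of parameter draws; the model function below is a made-up stand-in, not one presented at the workshop:

```r
library(purrr)

# Stand-in model: returns incremental cost and QALYs for one parameter set
run_model <- function(p_effect, cost_tx) {
  list(d_cost = cost_tx - 2000 * p_effect,
       d_qaly = 0.5 * p_effect)
}

# 1,000 draws from (invented) parameter distributions
set.seed(1)
draws <- data.frame(p_effect = rbeta(1000, 20, 30),
                    cost_tx  = rgamma(1000, shape = 100, scale = 100))

results <- pmap(draws, run_model)                   # run the model per draw
icers   <- map_dbl(results, ~ .x$d_cost / .x$d_qaly)
quantile(icers, c(0.025, 0.5, 0.975))               # uncertainty around the ICER
```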

The open-source nature of R, however, has its drawbacks. One of the more interesting conversations woven throughout the day was around these challenges. Can we trust open-source software? When will NICE begin accepting models coded in R? How important is it that we have models in something like Excel that people can intuitively understand? I’ve not experienced problems choosing to use R for my work; for me, it’s always been about getting the support and information I need to get things done efficiently. The steep learning curve seems to be a major hurdle for many people. I had hoped to attend the short course introduction held the day before the workshop, but I was not fast enough to secure my spot, as the course sold out within 36 hours. Never fear: the short course will be held again next year in Bristol.

To get around some of the aforementioned barriers to using R, James O’Mahony presented work on an open-source simplified screening model that his team is developing for teaching. An Excel interface with VBA code writes the parameter values to a file that can be imported into R, where the model itself is a single file of short code. Beautiful graphs show the impact of important parameters on the efficiency frontier. He said that they would love people to look at the code and offer suggestions; they want to keep the model simple, but there is a nonlinear relationship between additional features and complexity.
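The hand-off itself can be very light-touch; something like the following, where the file and column names are my guesses rather than those of the actual teaching model:

```r
# Read the parameter file written by the Excel/VBA front end
params <- read.csv("parameters.csv", stringsAsFactors = FALSE)
p <- setNames(params$value, params$name)  # named vector of model inputs
p["discount_rate"]                        # e.g. look up a single parameter
```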

And then we moved on to more specific topics, such as setting up a community for R users in the NHS, packages for survival curves, and how to build packages in R. I found Gianluca Baio’s presentation on what a package is, and why we should be using them, really helpful. I realised that I hadn’t really thought about what a package was before (a bundle of code, data, documentation, and tests that is easy to share with others) or that it was something I could or (as he argued) should be building for myself as a time-saving tool, even if I’m not sharing with others. It’s no longer difficult to build a package when you use packages like devtools and roxygen2 and tools like RStudio and GitHub. He pointed out that packages can be stored on GitHub if you’re not keen to share with the wider world via CRAN.
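To give a flavour of how low the barrier now is, a minimal version of the workflow looks something like this; the package name and function are made up for illustration:

```r
library(usethis)
library(devtools)

create_package("mymodeltools")  # build the package skeleton

# Functions live in R/, documented with roxygen2 comments, e.g. R/discount.R:
#' Discount a vector of yearly values
#'
#' @param x numeric vector of values, one per year
#' @param rate annual discount rate
#' @export
discount <- function(x, rate = 0.035) {
  x / (1 + rate) ^ (seq_along(x) - 1)
}

document("mymodeltools")  # generate help files and the NAMESPACE
install("mymodeltools")   # install locally; then library(mymodeltools)
```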

Another talk that I found particularly helpful was on R methods to prepare routine healthcare data for disease modelling. Claire Simons from the University of Oxford outlined her experiences of using R and ended her talk with a plethora of useful tips. These included using the data.table package for big data sets (it saves time when merging), using meaningful file names to avoid confusion later, and investing in doing things properly from the start, which will save time down the line. She also suggested using code profiling to identify which code takes the most time. Finally, she reminded us that we should be constantly learning about R: read books on R and on writing algorithms, and talk to other people who are using R (programmers and others, not just health economists).
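As a small illustration of the data.table tip (with simulated data, not hers), keyed joins are dramatically faster than base R’s merge() on large tables:

```r
library(data.table)

patients <- data.table(id  = 1:100000,
                       age = sample(18:90, 100000, replace = TRUE))
visits   <- data.table(id   = sample(1:100000, 1000000, replace = TRUE),
                       cost = rgamma(1000000, shape = 2, rate = 0.01))

setkey(patients, id)  # set keys once...
setkey(visits, id)

merged <- visits[patients]  # ...then joins are fast keyed lookups
merged[, .(total_cost = sum(cost, na.rm = TRUE)), by = age]
```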

For those who agree that the future is R, check out the resources from the Decision Analysis in R for Technologies in Health (DARTH) group, the hackathon at Imperial on 6-7th November hosted by Nathan Green, or join the ISPOR Open Source Models Special Interest Group.

Overall, the workshop structure allowed for a lot of great discussions in a relaxed atmosphere and I look forward to attending the next one.