Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.
An educational review about using cost data for the purpose of cost-effectiveness analysis. PharmacoEconomics [PubMed] Published 12th February 2019
Costing can seem like a Cinderella method in the health economist’s toolkit. If you’re working on an economic evaluation, estimating resource use and costs can be tedious. That is perhaps why costing methodology has been relatively neglected in the literature compared with, say, health state valuation. This paper tries to redress the balance slightly by providing an overview of the main issues in costing and explaining why they’re important, so that we can do a better job. The issues are more complex than many assume.
Supported by a formidable reference list (n=120), the authors tackle nine issues relating to costing: i) costs vs resource use; ii) trial-based vs model-based evaluations; iii) costing perspectives; iv) data sources; v) statistical methods; vi) baseline adjustments; vii) missing data; viii) uncertainty; and ix) discounting, inflation, and currency. It’s a big paper with a lot to say, so it isn’t easily summarised. Its role is as a reference point for us to turn to when we need it. There’s a stack of papers and other resources cited in here that I wasn’t aware of. The paper itself doesn’t get technical, leaving that to the papers cited therein. But the authors provide a good discussion of the data collection and analysis questions that ought to be addressed by anybody designing a study.
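To make issue ix concrete, here’s a minimal sketch in R of how costs collected in different years might be put on a common footing: inflate everything to base-year prices, then discount back to present values. All of the numbers (costs, price indices, discount rate) are invented for illustration and aren’t taken from the paper.

```r
# Illustrative only: costs, price indices, and the discount rate are invented,
# not taken from the paper under review.
costs_nominal <- c(1200, 1350, 1500)    # costs incurred in years 1-3, current prices
price_index <- c(100.0, 102.0, 104.0)   # price index in the year each cost was incurred
base_year_index <- 104.0                # price index in the chosen base year

# Step 1: inflate all costs to base-year prices
costs_real <- costs_nominal * (base_year_index / price_index)

# Step 2: discount to present value at 3.5% per year (year-1 costs undiscounted)
discount_rate <- 0.035
years <- 0:2
costs_discounted <- costs_real / (1 + discount_rate)^years

round(costs_discounted, 2)
sum(costs_discounted)
```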
The paper closes with some recommendations. The main one is that people conducting cost-effectiveness analysis should think harder about why they’re making particular methodological choices. The point is also made that new developments could change the way we collect and analyse cost data. For example, the growing use of observational data demands that greater consideration be given to unobserved confounding. Costing methods are important and interesting!
A flexible open-source decision model for value assessment of biologic treatment for rheumatoid arthritis. PharmacoEconomics [PubMed] Published 9th February 2019
Wherever feasible, decision models should be published open-source, so that they can be reviewed, reused, recycled, or, perhaps, rejected. But open-source models are still a rare sight. Here, we have one for rheumatoid arthritis. But the paper isn’t really about the model. After all, the model and supporting documentation are already available online. Rather, the paper describes the reasoning behind publishing a model open-source, and the process for doing so in this case.
This is the first model released as part of the Open Source Value Project, which tries to convince decision-makers that cost-effectiveness models are worth paying attention to. That is, it’s aimed at the US market, where models are largely ignored. The authors argue that models need to be flexible to remain valuable into the future and that, to achieve this, four steps should be followed in development: 1) release the initial model, 2) invite feedback, 3) convene an expert panel to determine actions in light of the feedback, and 4) revise the model. Then, repeat as necessary. Alongside this, people with the requisite technical skills (i.e. knowing how to use R, C++, and GitHub) can propose changes to the model whenever they like. This paper was written after step 3 had been completed, and the authors report receiving 159 comments on their model.
The model itself (which you can have a play with here) is an individual patient simulation, set up to evaluate a variety of treatment scenarios. It estimates costs and (mapped) QALYs and can be used to conduct cost-effectiveness analysis or multi-criteria decision analysis. The model was designed to run 32 different model structures based on different assumptions about treatment pathways and outcomes, meaning that the authors could evaluate structural uncertainties (a rare feat). A variety of approaches were used to validate the model.
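To give a flavour of what an individual patient simulation involves, here’s a toy sketch in R (the language the model is written in). Every parameter and treatment arm below is made up for illustration; this is not the authors’ model, which handles treatment sequences, mapping to utilities, and structural uncertainty.

```r
# Toy individual patient simulation for two hypothetical treatment arms.
# All parameters are invented; the published model is far more sophisticated.
set.seed(42)
n_patients <- 1000
horizon <- 10                                # years simulated per patient
df <- 1 / (1 + 0.035)^(0:(horizon - 1))      # annual discount factors at 3.5%

simulate_arm <- function(annual_cost, utility_mean, utility_sd) {
  # One row per simulated patient, one column per year of the time horizon
  utility <- matrix(rnorm(n_patients * horizon, utility_mean, utility_sd),
                    nrow = n_patients)
  utility <- pmin(pmax(utility, 0), 1)       # keep utilities within [0, 1]
  qalys <- as.vector(utility %*% df)         # discounted QALYs per patient
  costs <- rep(sum(annual_cost * df), n_patients)  # discounted costs per patient
  data.frame(cost = costs, qaly = qalys)
}

biologic <- simulate_arm(annual_cost = 12000, utility_mean = 0.72, utility_sd = 0.1)
standard <- simulate_arm(annual_cost = 2000,  utility_mean = 0.62, utility_sd = 0.1)

# Incremental cost-effectiveness ratio: extra cost per QALY gained
icer <- (mean(biologic$cost) - mean(standard$cost)) /
  (mean(biologic$qaly) - mean(standard$qaly))
icer
```

Running many structural variants, as the authors did, amounts to wrapping a simulation like this in a loop over alternative sets of assumptions and comparing the results.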
The authors identify several challenges that they experienced in the process, including difficulties in communication between stakeholders and the large amount of time needed to develop, test, and describe a model of this sophistication. I would imagine that, compared with most decision models, the amount of work underlying this paper is staggering. Whether or not that work is worthwhile depends on whether researchers and policymakers make use of the model. The authors have made it as easy as possible for stakeholders to engage with and build on their work, so they should be hopeful that it will bear fruit.
EQ-5D-Y-5L: developing a revised EQ-5D-Y with increased response categories. Quality of Life Research [PubMed] Published 9th February 2019
The EQ-5D-Y has been a slow burner. It’s been around 10 years since it first came on the scene, but we’ve been without a value set, and, with the introduction of the EQ-5D-5L, the questionnaire has lost some comparability with its adult equivalent. But the EQ-5D-Y has almost caught up, and this study describes part of how that’s been achieved.
The reason to develop a 5L version of the EQ-5D-Y is the same as for the adult version: to reduce ceiling effects and improve sensitivity. A selection of possible descriptors was identified through a review of the literature. Focus groups were conducted with children between 8 and 15 years of age in Germany, Spain, Sweden, and the UK in order to identify labels that young people can understand. Specifically, the researchers wanted to know the words used by children and adolescents to describe the quantity or intensity of health problems. Participants ranked the labels according to severity and specified which labels they didn’t like. Transcripts were analysed using thematic content analysis. Next, individual interviews were conducted with 255 participants across the four countries, involving sorting and response scaling tasks; younger children used a smiley scale. At this stage, both 4L and 5L versions were still being considered. In a second phase of the research, cognitive interviews were used to test for comprehensibility and feasibility.
A 5-level version was preferred by most participants, and 5L labels were identified in each language. The English version used terms like ‘a little bit’, ‘a lot’, and ‘really’. There’s plenty more research to be done on the EQ-5D-Y-5L, including psychometric testing, but I’d expect it to be coming to studies near you very soon. One of the key takeaways from this study, and something that I’ve been seeing more of in research in recent years, is that kids are smart. The authors make this point clear, particularly with respect to the response scaling tasks that were conducted with children as young as 8. Decision-making criteria and frameworks that relate to children should be based on children’s preferences and ideas.
Credits