Rachel Houten’s journal round-up for 8th July 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Adjusting for inflation and currency changes within health economic studies. Value in Health Published 13th June 2019

The purpose of the paper is to highlight the need for transparency in reporting how costs are converted between currencies and adjusted for inflation in economic evaluations. It chimes with other recent literature that is less prescriptive in terms of providing methods guidelines and more about advocating a "tell us what you did and why" approach. It reminds me of my very first science lesson in high school, where we were eager to get our hands on the experiments yet the teacher insisted (met by much eye-rolling) on the importance of describing the methods of any 'study'. With space at a premium in academic writing, I know that transparency about assumptions sometimes gets culled, and I'm likely guilty of it myself, but papers such as this highlight why it matters.

The authors discuss which inflation measure to base the adjustments on, whether to convert local currencies to US or International dollars, three methods of adjusting for inflation, and what to do when costs from other settings are part of the analysis. With a focus on low- and middle-income countries, and using a hypothetical example, the authors demonstrate that the three different methods of adjusting for inflation can produce a wide range of final estimates.

The authors acknowledge that there is no one-size-fits-all approach, but they favour a 'mixed approach' where micro-costing is possible and items can be classified as tradable or non-tradable, as they say this is likely to produce the most accurate estimates. However, a study reliant on previously published costing information would need to follow one of the two alternative approaches detailed in the paper.
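To make the sequencing issue concrete, below is a minimal sketch (in Python, with invented inflation rates and exchange rates that are not taken from the paper) contrasting two possible orderings: adjusting for inflation in the local currency and then converting to dollars, versus converting to dollars first and then applying US inflation rates.

```python
# Illustrative only: all inflation rates and exchange rates are invented for this example.

local_cost_2015 = 1_000.0        # cost observed in 2015, in local currency units (LCU)

local_inflation = {2016: 0.12, 2017: 0.10, 2018: 0.09}   # hypothetical LCU inflation
us_inflation = {2016: 0.02, 2017: 0.02, 2018: 0.02}      # hypothetical US inflation
usd_per_lcu_2015 = 0.050         # hypothetical exchange rate in 2015
usd_per_lcu_2018 = 0.038         # hypothetical exchange rate in 2018 (currency has depreciated)

def inflate(cost, annual_rates):
    """Compound a cost forward through a series of annual inflation rates."""
    for rate in annual_rates.values():
        cost *= 1 + rate
    return cost

# Ordering A: inflate in local currency to 2018 prices, then convert at the 2018 rate.
ordering_a = inflate(local_cost_2015, local_inflation) * usd_per_lcu_2018

# Ordering B: convert to US dollars at the 2015 rate, then inflate using US rates.
ordering_b = inflate(local_cost_2015 * usd_per_lcu_2015, us_inflation)

print(f"Inflate then convert: ${ordering_a:,.2f}")
print(f"Convert then inflate: ${ordering_b:,.2f}")
```

Even with these modest made-up numbers the two orderings disagree, and the gap widens as local inflation and currency movements become more dramatic, which is presumably why the authors want the chosen approach spelled out.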

In terms of working with data from low- and middle-income countries, I can't say it is my forte. However, the paper summarises the pros and cons of each of their proposed approaches in a straightforward way. The authors include a table that I think would provide an excellent reference point for anyone considering the best approach for their specific set of circumstances.

An updated systematic review of studies mapping (or cross-walking) measures of health-related quality of life to generic preference-based measures to generate utility values. Applied Health Economics and Health Policy [PubMed] [RePEc] Published 3rd April 2019

This is an update of a review of studies published before 2007, which found 30 studies mapping to generic preference-based measures. This latest paper includes 180 studies, with a total of 233 mapping functions reported. The majority of the mapping functions map to the EQ-5D (147 mapping functions), with the second largest group mapping to the SF-6D (45 mapping functions).

Along with an increase in the volume of mapping studies since the last review, there has been a marked increase in the variety of regression methods used, which signals greater consideration of the distribution of the underlying utility data. Reporting on how well the mapping algorithms predict utility in different sub-groups has also increased.

The authors highlight that although mapping can fill an evidence gap, the uncertainty in the estimates is greater than directly measuring health-related quality of life in prospective studies. The authors signpost to ISPOR guidelines for the reporting of mapping studies and emphasise the need to include measurements of error as well as a plot of predicted versus observed values, to enable the user to understand and incorporate the accuracy of the mapping in their economic evaluations.
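For readers who haven't run one, a mapping function is usually just a regression of utility on another measure's scores. The toy sketch below (simulated data, not any of the catalogued algorithms) fits a simple linear mapping from a hypothetical 0-100 disease-specific score to EQ-5D-style utilities and reports the error statistics and the predicted-versus-observed comparison that the authors want to see, albeit as a table rather than a plot.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated estimation dataset: a hypothetical 0-100 disease-specific score and
# utilities that are noisily and non-linearly related to it (not real data).
n = 500
score = rng.uniform(0, 100, n)
utility = np.clip(0.2 + 0.008 * score - 0.00002 * score**2
                  + rng.normal(0, 0.08, n), -0.2, 1.0)

# A simple OLS mapping: utility ~ score. Published algorithms are richer, often
# using methods that respect the bounded, skewed distribution of utilities.
X = np.column_stack([np.ones(n), score])
beta, *_ = np.linalg.lstsq(X, utility, rcond=None)
predicted = X @ beta

mae = np.mean(np.abs(predicted - utility))
rmse = np.sqrt(np.mean((predicted - utility) ** 2))
print(f"mapping: utility = {beta[0]:.3f} + {beta[1]:.4f} * score")
print(f"MAE = {mae:.3f}, RMSE = {rmse:.3f}")

# Predicted vs observed by severity band, in lieu of the recommended plot.
for lo, hi in [(-0.2, 0.4), (0.4, 0.7), (0.7, 1.01)]:
    band = (utility >= lo) & (utility < hi)
    print(f"observed in [{lo}, {hi}): mean observed {utility[band].mean():.3f}, "
          f"mean predicted {predicted[band].mean():.3f}")
```

Simple linear mappings like this one tend to over-predict utility for the worst observed health states and under-predict for the best, which is exactly the kind of pattern a predicted-versus-observed comparison exposes.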

As the authors state, the results of this review provide a useful resource in terms of a catalogue of mapping studies. However, it lacks any quality assessment of the included studies (a point the authors also make clear), so the choice of which mapping algorithm to use is still ours, and takes some thought. The supplementary Excel file is a great aid to that choice, as it includes some information about the populations used in the mapping studies alongside the methods, but more studies comparing mapping functions with the same aim against each other would be welcome.

Investigating the relationship between formal and informal care: an application using panel data for people living together. Health Economics [PubMed] Published 7th June 2019

This paper adds to the literature on informal care by considering co-resident informal care in a UK setting, using data from the British Household Panel Survey (BHPS). The proportion of people in the UK receiving non-state (as opposed to state) provided care has increased in recent years, and the BHPS enables the impact of informal care on the use of each of these types of formal care to be explored.

The authors used an instrument for informal care to try to avoid bias arising from its correlation with other variables, such as health. The instrument used for the availability of informal care was the number of adult daughters, as this was found to be the most predictive (oh dear, I've two sons!). The authors then estimated the impact of informal care on home help, health visitor use, GP visits, and hospital stays.
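For the uninitiated, the logic of the instrument can be seen with a few lines of simulated (entirely made-up) data: unobserved health drives both informal care and formal care use, so a naive regression is biased, while a two-stage approach that uses the number of adult daughters to predict informal care recovers something close to the true effect. This is a toy sketch of the general technique, not the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Made-up cross-sectional data (not the BHPS): unobserved frailty drives both
# informal care receipt and formal home help use, so naive OLS is confounded.
frailty = rng.normal(0, 1, n)
daughters = rng.poisson(1.0, n)                        # instrument: number of adult daughters
informal = 0.3 * daughters + 0.8 * frailty + rng.normal(0, 1, n)
home_help = -0.5 * informal + 1.0 * frailty + rng.normal(0, 1, n)   # true effect = -0.5

def ols_slope(y, x):
    """Return the slope from an OLS regression of y on a constant and x."""
    X = np.column_stack([np.ones(len(y)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

naive = ols_slope(home_help, informal)                 # biased towards zero here

# Manual two-stage least squares: predict informal care from the instrument,
# then regress home help use on that prediction.
X1 = np.column_stack([np.ones(n), daughters])
informal_hat = X1 @ np.linalg.lstsq(X1, informal, rcond=None)[0]
iv = ols_slope(home_help, informal_hat)

print(f"true effect -0.50 | naive OLS {naive:.2f} | 2SLS {iv:.2f}")
# Note: standard errors from a manual second stage are not valid; proper IV
# routines (and panel methods, as in the paper) are needed for inference.
```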

In this study, informal care was a substitute for both state and non-state home help (with the impact greater for state home help) and complementary to health visitor use, GP visits, and hospital stays. The authors suggest this may be due to the tasks completed by these different types of service providers, with household tasks more likely to be undertaken by informal caregivers than tasks that are more medical in nature. The fact that this study considers co-residential care from any household member may explain the stronger substitution effect compared with previous studies of informal caregivers living elsewhere, as it could be assumed that a caregiver residing with the care recipient is more able to provide care.

I find the make-up of households, and how it impacts on the need for healthcare resources, really interesting, especially as it is generally considered that informal care and the work of charities bolster the NHS. The results of this study suggest that increases in informal care could generate savings in terms of the need for home help, but also an increase in the use of other formal care services. The reasons for the complementary relationship between informal care and health visitor, GP, and hospital visits need further exploration.


Chris Sampson’s journal round-up for 18th February 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

An educational review about using cost data for the purpose of cost-effectiveness analysis. PharmacoEconomics [PubMed] Published 12th February 2019

Costing can seem like a Cinderella method in the health economist's toolkit. If you're working on an economic evaluation, estimating resource use and costs can be tedious. That is perhaps why costing methodology has been relatively neglected in the literature compared with (for example) health state valuation. This paper tries to redress the balance slightly by providing an overview of the main issues in costing, and explaining why they're important, so that we can do a better job. The issues are more complex than many assume.

Supported by a formidable reference list (n=120), the authors tackle 9 issues relating to costing: i) costs vs resource use; ii) trial-based vs model-based evaluations; iii) costing perspectives; iv) data sources; v) statistical methods; vi) baseline adjustments; vii) missing data; viii) uncertainty; and ix) discounting, inflation, and currency. It’s a big paper with a lot to say, so it isn’t easily summarised. Its role is as a reference point for us to turn to when we need it. There’s a stack of papers and other resources cited in here that I wasn’t aware of. The paper itself doesn’t get technical, leaving that to the papers cited therein. But the authors provide a good discussion of the questions that ought to be addressed by somebody designing a study, relating to data collection and analysis.
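To take item (ix) as a small worked example, here's a hedged sketch of the mechanics of discounting a cost stream to present value; the 3.5% rate and the annual costs are assumptions for the illustration, not figures from the paper.

```python
# Illustrative discounting of a future cost stream; the rate and costs are assumed.
annual_costs = [2_000, 1_500, 1_500, 1_200, 1_200]   # costs incurred in years 0..4
discount_rate = 0.035                                 # e.g. a 3.5% annual rate

present_value = sum(cost / (1 + discount_rate) ** year
                    for year, cost in enumerate(annual_costs))

print(f"Undiscounted total: {sum(annual_costs):,.0f}")
print(f"Discounted total:   {present_value:,.2f}")
```

The same compounding logic, run with an inflation index rather than a discount rate, is what puts costs from different years onto a common price base.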

The paper closes with some recommendations. The main one is that people conducting cost-effectiveness analysis should think harder about why they’re making particular methodological choices. The point is also made that new developments could change the way we collect and analyse cost data. For example, the growing use of observational data demands that greater consideration be given to unobserved confounding. Costing methods are important and interesting!

A flexible open-source decision model for value assessment of biologic treatment for rheumatoid arthritis. PharmacoEconomics [PubMed] Published 9th February 2019

Wherever feasible, decision models should be published open-source, so that they can be reviewed, reused, recycled, or, perhaps, rejected. But open-source models are still a rare sight. Here, we have one for rheumatoid arthritis. But the paper isn’t really about the model. After all, the model and supporting documentation are already available online. Rather, the paper describes the reasoning behind publishing a model open-source, and the process for doing so in this case.

This is the first model released as part of the Open Source Value Project, which tries to convince decision-makers that cost-effectiveness models are worth paying attention to. That is, it’s aimed at the US market, where models are largely ignored. The authors argue that models need to be flexible to be valuable into the future and that, to achieve this, four steps should be followed in the development: 1) release the initial model, 2) invite feedback, 3) convene an expert panel to determine actions in light of the feedback, and 4) revise the model. Then, repeat as necessary. Alongside this, people with the requisite technical skills (i.e. knowing how to use R, C++, and GitHub) can proffer changes to the model whenever they like. This paper was written after step 3 had been completed, and the authors report receiving 159 comments on their model.

The model itself (which you can have a play with here) is an individual patient simulation, which is set up to evaluate a variety of treatment scenarios. It estimates costs and (mapped) QALYs and can be used to conduct cost-effectiveness analysis or multi-criteria decision analysis. The model was designed to be able to run 32 different model structures based on different assumptions about treatment pathways and outcomes, meaning that the authors could evaluate structural uncertainties (which is a rare feat). A variety of approaches were used to validate the model.
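The published model is written in R and C++ and is vastly more sophisticated than anything that would fit here, but for readers who haven't met an individual patient simulation before, this is roughly the shape of the approach in miniature (a hypothetical three-state model with invented probabilities, costs, and utilities).

```python
import numpy as np

rng = np.random.default_rng(42)

# A hypothetical three-state model with invented inputs (nothing here comes
# from the rheumatoid arthritis model being described).
states = ["on_treatment", "off_treatment", "dead"]
transition = {   # annual transition probabilities
    "on_treatment":  {"on_treatment": 0.85, "off_treatment": 0.12, "dead": 0.03},
    "off_treatment": {"on_treatment": 0.00, "off_treatment": 0.95, "dead": 0.05},
    "dead":          {"on_treatment": 0.00, "off_treatment": 0.00, "dead": 1.00},
}
annual_cost = {"on_treatment": 12_000, "off_treatment": 3_000, "dead": 0}
annual_utility = {"on_treatment": 0.75, "off_treatment": 0.55, "dead": 0.0}
discount_rate = 0.035

def simulate_patient(horizon=20):
    """Walk one simulated patient through the model; return discounted costs and QALYs."""
    state, costs, qalys = "on_treatment", 0.0, 0.0
    for year in range(horizon):
        weight = 1 / (1 + discount_rate) ** year
        costs += annual_cost[state] * weight
        qalys += annual_utility[state] * weight
        probs = transition[state]
        state = rng.choice(states, p=[probs[s] for s in states])
    return costs, qalys

results = np.array([simulate_patient() for _ in range(2_000)])
print(f"Mean discounted cost per patient:  {results[:, 0].mean():,.0f}")
print(f"Mean discounted QALYs per patient: {results[:, 1].mean():.2f}")
```

Comparing two such simulations under different treatment assumptions gives incremental costs and QALYs; the real model layers patient heterogeneity, 32 alternative structures, and much else on top of this basic loop.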

The authors identify several challenges that they experienced in the process, including difficulties in communication between stakeholders and the large amount of time needed to develop, test, and describe a model of this sophistication. I would imagine that, compared with most decision models, the amount of work underlying this paper is staggering. Whether or not that work is worthwhile depends on whether researchers and policymakers make use of the model. The authors have made it as easy as possible for stakeholders to engage with and build on their work, so they should be hopeful that it will bear fruit.

EQ-5D-Y-5L: developing a revised EQ-5D-Y with increased response categories. Quality of Life Research [PubMed] Published 9th February 2019

The EQ-5D-Y has been a slow burner. It's been around 10 years since it first came on the scene, but we've been without a value set and – with the introduction of the EQ-5D-5L – the questionnaire has lost some comparability with its adult equivalent. But the EQ-5D-Y has almost caught up, and this study describes part of how that's been achieved.

The reason to develop a 5L version for the EQ-5D-Y is the same as for the adult version – to reduce ceiling effects and improve sensitivity. A selection of possible descriptors was identified through a review of the literature. Focus groups were conducted with children between 8 and 15 years of age in Germany, Spain, Sweden, and the UK in order to identify labels that can be understood by young people. Specifically, the researchers wanted to know the words used by children and adolescents to describe the quantity or intensity of health problems. Participants ranked the labels according to severity and specified which labels they didn’t like. Transcripts were analysed using thematic content analysis. Next, individual interviews were conducted with 255 participants across the four countries, which involved sorting and response scaling tasks. Younger children used a smiley scale. At this stage, both 4L and 5L versions were being considered. In a second phase of the research, cognitive interviews were used to test for comprehensibility and feasibility.

A 5-level version was preferred by most, and 5L labels were identified in each language. The English version used terms like 'a little bit', 'a lot', and 'really'. There's plenty more research to be done on the EQ-5D-Y-5L, including psychometric testing, but I'd expect it to be coming to studies near you very soon. One of the key takeaways from this study, and something that I've been seeing more in research in recent years, is that kids are smart. The authors make this point clear, particularly with respect to the response scaling tasks that were conducted with children as young as 8. Decision-making criteria and frameworks that relate to children should be based on children's preferences and ideas.


Chris Sampson’s journal round-up for 4th June 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A qualitative investigation of the health economic impacts of bariatric surgery for obesity and implications for improved practice in health economics. Health Economics [PubMed] Published 1st June 2018

Few would question the 'economic' nature of the challenge of obesity. Bariatric surgery is widely recommended for severe cases but, in many countries, the supply is not sufficient to satisfy the demand. In this context, this study explores the value of qualitative research in informing economic evaluation. The authors assert that previous economic evaluations have adopted a relatively narrow focus and thus might underestimate the expected value of bariatric surgery. But rather than going and finding data on what they think might be additional dimensions of value, the authors ask patients. Emotional capital, 'societal' (i.e. non-health) impacts, and externalities are identified as theories for the types of value that might be derived from bariatric surgery. These theories were used to guide the development of questions and prompts that were used in a series of 10 semi-structured focus groups. Thematic analysis identified the importance of emotional costs and benefits as part of the 'socioemotional personal journey' associated with bariatric surgery. Out-of-pocket costs were also identified as being important, with self-funding being a challenge for some respondents. The information seems useful in a variety of ways. It helps us understand the value of bariatric surgery and how individuals make decisions in this context. This information could be used to determine the structure of economic evaluations or the data that are collected and used. The authors suggest that an EQ-5D bolt-on should be developed for 'emotional capital' but, given that this 'theory' was predefined by the authors rather than arising from the qualitative research as an important dimension of value alongside the existing EQ-5D dimensions, that's a stretch.

Developing accessible, pictorial versions of health-related quality-of-life instruments suitable for economic evaluation: a report of preliminary studies conducted in Canada and the United Kingdom. PharmacoEconomics – Open [PubMed] Published 25th May 2018

I’ve been telling people about this study for ages (apologies, authors, if that isn’t something you wanted to read!). In my experience, the need for more (cognitively / communicatively) accessible outcome measures is widely recognised by health researchers working in contexts where this is relevant, such as stroke. If people can’t read or understand the text-based descriptors that make up (for example) the EQ-5D, then we need some alternative format. You could develop an entirely new measure. Or, as the work described in this paper set out to do, you could modify existing measures. There are three descriptive systems described in this study: i) a pictorial EQ-5D-3L by the Canadian team, ii) a pictorial EQ-5D-3L by the UK team, and iii) a pictorial EQ-5D-5L by the UK team. Each uses images to represent the different levels of the different dimensions. For example, the mobility dimension might show somebody walking around unaided, walking with aids, or in bed. I’m not going to try and describe what they all look like, so I’ll just encourage you to take a look at the Supplementary Material (click here to download it). All are described as ‘pilot’ instruments and shouldn’t be picked up and used at this stage. Different approaches were used in the development of the measures, and there are differences between the measures in terms of the images selected and the ways in which they’re presented. But each process referred to conventions in aphasia research, used input from clinicians, and consulted people with aphasia and/or their carers. The authors set out several remaining questions and avenues for future research. The most interesting possibility to most readers will be the notion that we could have a ‘generic’ pictorial format for the EQ-5D, which isn’t aphasia-specific. This will require continued development of the pictorial descriptive systems, and ultimately their validation.

QALYs in 2018—advantages and concerns. JAMA [PubMed] Published 24th May 2018

It’s difficult not to feel sorry for the authors of this article – and indeed all US-based purveyors of economic evaluation in health care. With respect to social judgments about the value of health technologies, the US’s proverbial head remains well and truly buried in the sand. This article serves as a primer and an enticement for the use of QALYs. The ‘concerns’ cited relate almost exclusively to decision rules applied to QALYs, rather than the underlying principles of QALYs, presumably because the authors didn’t feel they could ignore the points made by QALY opponents (even if those arguments are vacuous). What it boils down to is this: trade-offs are necessary, and QALYs can be used to promote value in those trade-offs, so unless you offer some meaningful alternative then QALYs are here to stay. Thankfully, the Institute for Clinical and Economic Review (ICER) has recently added some clout to the undeniable good sense of QALYs, so the future is looking a little brighter. Suck it up, America!

The impact of hospital costing methods on cost-effectiveness analysis: a case study. PharmacoEconomics [PubMed] Published 22nd May 2018

Plugging different cost estimates into your cost-effectiveness model could alter the headline results of your evaluation. That might seem obvious, but there are a variety of ways in which the selection of unit costs might be somewhat arbitrary or taken for granted. This study considers three alternative sources of information for hospital-based unit costs for hip fractures in England: (a) spell-level tariffs, (b) finished consultant episode (FCE) reference costs, and (c) spell-level reference costs. Source (b) is, in theory, more granular, describing the individual episodes within a person's hospital stay. Reference costs are estimated on the basis of hospital activity, while tariffs are prices estimated on the basis of historic reference costs. The authors use a previously reported cohort state transition model to evaluate different models of care for hip fracture and explore how the use of the different cost figures affects their results. FCE-level reference costs produced the highest total first-year hospital care costs (£14,440), and spell-level tariffs the lowest (£10,749). The more FCEs within a spell, the greater the discrepancy. This difference in costs affected ICERs, such that the net-benefit-optimising decision would change. The study makes an important point – that the selection of unit costs matters. But it isn't clear why the difference exists. It could just be due to a lack of precision in reference costs in this context (rather than a lack of accuracy, per se), or it could be that reference costs misestimate the true cost of care across the board. Without clear guidance on how to select the most appropriate source of unit costs, these different costing methodologies represent another source of uncertainty in modelling, which analysts should consider and explore.
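As a back-of-the-envelope illustration of how the choice of unit costs can flip a decision (with entirely made-up numbers, not the paper's model or results), imagine a new treatment that adds a fixed QALY gain and some extra bed-days, costed from three alternative sources and judged against a £20,000-per-QALY threshold:

```python
# Entirely illustrative numbers; not taken from the paper's model or data.
incremental_qalys = 0.05             # QALY gain of the new treatment per patient
drug_cost = 400                      # extra non-hospital cost per patient
extra_bed_days = 2                   # additional hospital bed-days it requires

unit_cost_per_bed_day = {            # three hypothetical unit cost sources
    "spell-level tariff": 250,
    "spell-level reference cost": 310,
    "FCE reference cost": 380,
}
threshold = 20_000                   # £ per QALY

for source, unit_cost in unit_cost_per_bed_day.items():
    incremental_cost = drug_cost + extra_bed_days * unit_cost
    icer = incremental_cost / incremental_qalys
    decision = "adopt" if icer <= threshold else "do not adopt"
    print(f"{source:>27}: ICER £{icer:,.0f} per QALY -> {decision}")
```

With the cheapest unit cost the treatment clears the threshold; with either reference cost it does not, even though nothing about the treatment itself has changed. That is the paper's point in miniature.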
