
Rita Faria’s journal round-up for 1st March 2021

Every Monday our authors provide a round-up of the latest peer-reviewed journal publications. We cover all issues of major health economics journals as well as some other notable releases. If you’d like to write one of our weekly journal round-ups, get in touch.

Medical Decision Making

Volume 41, Issue 2

I started this month’s issue of Medical Decision Making with Louise Russell’s thoughtful editorial about using electronic health records (EHRs) for research. EHR data are, by their very nature, shaped by patients’ health and health-care-seeking behaviour, as well as by health care providers’ practice. Therefore, as Louise puts it, “There is a lot of noise in EHR data, as well as a lot of potential for bias”. It takes a lot of time and effort to create a usable dataset, even before thinking about any estimation.

But all is not lost! This MDM issue has a couple of papers about using EHRs for research. One study is about the experience of creating datasets based on EHRs. Glen Taksler and colleagues discuss nine problems that can befall researchers: (1) defining the patient population; (2) defining the patient’s primary health care provider; (3) recognising that EHRs are episodic; (4) considering that EHRs reflect changes in the health care system over time; (5) assuming that EHRs are well-organised and accurate (they are not!); (6) accounting for the same patient appearing in different health care systems within the same dataset; (7) managing huge datasets; (8) assuming that EHRs are representative of the general population; and, finally, (9) actually getting hold of the EHRs, given the barriers to access. Some of these problems may be specific to the authors’ context, but I’ve certainly experienced most of them when accessing and using EHRs for research in the UK. Helpfully, the paper reports the analyst time required to create each dataset. This is clearly a must-read paper for anyone looking to venture into the EHR universe!
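To give a flavour of what this tidying involves, here is a minimal sketch in Python of two of the steps above: defining the patient population via a minimum observation window (problem 1) and pooling records for the same patient across health care systems (problem 6). All column names, data, and thresholds are invented for illustration; they are not from the paper, and real record linkage is far harder than a simple group-by.

```python
import pandas as pd

# Hypothetical extract of EHR encounter records; column names are invented.
encounters = pd.DataFrame({
    "patient_id": ["A1", "A1", "A1", "B2", "B2", "C3"],
    "system": ["north", "north", "south", "north", "north", "south"],
    "encounter_date": pd.to_datetime([
        "2015-01-10", "2018-06-02", "2016-03-15",
        "2017-09-01", "2017-09-20", "2019-02-11",
    ]),
})

# Problem 6: the same patient may appear under several health care systems.
# Here we simply pool records by patient_id; real linkage is much messier.
per_patient = encounters.groupby("patient_id")["encounter_date"].agg(
    ["min", "max", "count"]
)

# Problem 1: define the study population as patients observed for at least
# two years with more than one encounter - an arbitrary illustrative rule.
window = per_patient["max"] - per_patient["min"]
population = per_patient[(window >= pd.Timedelta(days=730))
                         & (per_patient["count"] > 1)]
print(population)
```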

The other EHR paper is about estimating transition probabilities using EHRs, in the context of paediatric eating disorders. The authors faced a challenge in establishing patients’ health states at each point in time, because the EHRs recorded some health events inconsistently, did not record death, and could record more than one health event on the same day. A second challenge was that EHRs are censored when patients change care provider, and episodes of care (hence observations) often depend on patients accessing care. Censoring may therefore be informative, which creates issues when estimating survival models. These, and other challenges, are well discussed in the paper. Do have a look if you are planning to use EHRs to inform decision modelling.
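To make the estimation task concrete, here is a minimal sketch that assumes a tidy panel of health states already exists (which, as the paper explains, is the hard part). It counts transitions between consecutive visits and row-normalises into a transition probability matrix; the states and data are hypothetical, and it deliberately ignores the informative censoring problem the authors grapple with.

```python
import pandas as pd

# Hypothetical per-visit health states, one row per patient visit,
# already cleaned and ordered in time (the hard part with real EHRs).
visits = pd.DataFrame({
    "patient_id": ["A1"] * 4 + ["B2"] * 3,
    "state": ["mild", "mild", "severe", "remission",
              "severe", "remission", "remission"],
})

states = ["mild", "severe", "remission"]
counts = pd.DataFrame(0.0, index=states, columns=states)

# Count transitions between consecutive visits within each patient.
for _, grp in visits.groupby("patient_id"):
    for src, dst in zip(grp["state"], grp["state"].iloc[1:]):
        counts.loc[src, dst] += 1

# Row-normalise counts into a naive transition probability matrix.
# This treats censoring as ignorable - exactly the assumption the
# paper warns may fail when observation depends on accessing care.
probs = counts.div(counts.sum(axis=1), axis=0)
print(probs.round(2))
```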

If you’re finishing up a paper using EHR data, or have such a manuscript looking for a home, MDM has a call for papers on the topic. The scope is quite broad, ranging from papers on the usefulness, benefits, and limitations of EHRs for health research to empirical and methodological studies.

For cost-effectiveness/HTA analysts, MDM has quite a few good papers to look at. There is a cost-effectiveness analysis comparing 108 surveillance scanning strategies in non-small cell lung cancer. Remarkably, the authors found that the European Society for Medical Oncology guideline strategy was more expensive and less effective than several other strategies. Hopefully, this finding will not stay confined to the health economics literature and will instead prompt a guideline update!

Another good read is the paper about the cost-effectiveness analysis of histology-independent technologies, by the York team who did the modelling for NICE. In sum, if we believe that response is similar across all tumour histologies, the probability that the technology is cost-effective is 78%. But if response differs, as their analysis suggests, the probability ranges from 11% to 93%. So this assumption makes a huge difference to the results; HTA agencies, do take note.

Also in this bumper issue, and still on cost-effectiveness/HTA topics, there is a paper investigating extrapolation of survival curves, which recommends adding spline models to standard parametric models in our usual toolbox of methods for extrapolation; a study about model-based network meta-analysis to link disconnected networks; and a commentary arguing that discount rates should be lower than the usual 3% per annum.
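For readers unfamiliar with the spline approach, these models typically build on Royston and Parmar’s flexible parametric framework: where the Weibull forces the log cumulative hazard to be a straight line in log time, a restricted cubic spline lets it bend, while staying linear beyond the boundary knots so extrapolations remain well behaved. A sketch of the functional form (my notation, not necessarily the paper’s):

```latex
% Weibull: the log cumulative hazard is linear in log time,
%   \log H(t) = \log\lambda + \gamma \log t .
% Royston-Parmar spline models replace the straight line with a
% restricted cubic spline s(.) of x = log t with knots k_1 < ... < k_m:
\[
  \log H(t) \;=\; s(\log t;\, \boldsymbol{\gamma})
            \;=\; \gamma_0 + \gamma_1 x + \sum_{j} \gamma_{j+1}\, v_j(x),
  \qquad x = \log t ,
\]
% where each basis function v_j is cubic between the knots and linear
% beyond the boundary knots, which is what keeps the extrapolated
% hazard well behaved.
```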

Value in Health

Volume 24, Issue 2

This month’s Value in Health starts with a themed section on opioid misuse. It’s introduced with an editorial describing the scale of the problem. For example, 1 in 100 adults in the US has an active opioid use disorder, yet few have access to treatment. Almost 450,000 Americans died between 1998 and 2008, a trend that is set to continue if access to treatment does not change.

Tackling the opioid misuse crisis effectively is likely to require a multi-pronged approach. On prevention, one study concluded that an opioid misuse prevention programme in South Korea is cost-effective. On treatment, one US study found greater use of branded buprenorphine, a drug to treat opioid use disorder, than of cheaper versions. The reasons are specific to the US, but this is likely to be an inefficient use of health care funds and may pose a barrier to accessing treatment. Another US study suggests that there are stark inequalities in access to treatment for opioid use disorder, particularly in more deprived areas. The good news is that treatment is likely to be cost-effective, as concluded by a systematic review of economic evaluations in opioid research.

My highlight is a paper on metamodeling to identify the cost-effective strategy, in the context of colorectal cancer screening. The traditional approach to cost-effectiveness analysis is to put each strategy through the simulation model to estimate its costs and outcomes. This works well when there aren’t many strategies to compare and the simulation model is quick to run. Neither condition is likely to be met in the cost-effectiveness analysis of screening strategies. This is because many strategies can be defined according to, for example, different cut-offs of a biomarker to classify patients as having had a positive test. And because simulation models of screening are often complex microsimulations, they can take some time to run.

Hendrik Koffijberg and colleagues propose using metamodeling to approximate, rather than evaluate, the outcomes of each strategy. To implement the metamodeling approach, they evaluated a set of strategies using the microsimulation model. Then they fitted Gaussian process metamodels to the results from the microsimulation model and predicted the results for strategies that had not been evaluated. This sounds simple, yet it’s anything but!
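As a rough illustration of the general idea (not the authors’ implementation), here is a minimal sketch using scikit-learn: run the expensive simulation on a handful of strategies, fit a Gaussian process to the resulting net monetary benefit as a function of the strategy’s design parameter, and predict the strategies that were never simulated. The toy “simulator”, parameters, and kernel choices are all invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(42)

def run_microsimulation(cutoff):
    """Stand-in for an expensive microsimulation: returns the (noisy)
    expected net monetary benefit of screening at a biomarker cutoff."""
    return -(cutoff - 0.6) ** 2 + 0.05 * rng.standard_normal()

# Step 1: evaluate only a small training set of strategies directly.
train_cutoffs = np.linspace(0.1, 0.9, 8).reshape(-1, 1)
train_nmb = np.array([run_microsimulation(c.item()) for c in train_cutoffs])

# Step 2: fit a Gaussian process metamodel to the simulated results.
kernel = RBF(length_scale=0.2) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(
    train_cutoffs, train_nmb
)

# Step 3: predict net benefit for many strategies never simulated,
# with uncertainty, and pick the apparent best one.
all_cutoffs = np.linspace(0.1, 0.9, 200).reshape(-1, 1)
pred, sd = gp.predict(all_cutoffs, return_std=True)
best = np.argmax(pred)
print(f"Predicted best cutoff: {all_cutoffs[best].item():.2f} "
      f"(+/- {sd[best]:.2f})")
```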

And they didn’t stop there. The next step was to compare the results from the metamodel with the results obtained directly from the microsimulation model. Lastly, they used the metamodel’s results to identify both the best strategy under a colonoscopy capacity constraint and the cost-effective strategy. This is truly a remarkable paper, which I’m sure will set the scene for other studies using metamodeling for cost-effectiveness analysis.

This month’s Value in Health is really a bumper issue, full of interesting papers. Darius Lakdawalla and Charles Phelps report on the Generalised Risk-Adjusted Cost-Effectiveness (GRACE) approach. Standard cost-effectiveness analysis assumes constant returns to improvements in health: for example, 0.1 QALY gained in a condition where patients have little quality-adjusted survival is worth the same as in a condition with a better prognosis. Lakdawalla and Phelps argue that there are diminishing returns to quality of life gains, and propose the GRACE approach as a way forward.
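To see why this matters, here is a toy worked example of diminishing returns; it illustrates the intuition only, not the GRACE formula itself, and the square-root utility is an arbitrary choice of concave function:

```latex
% Concave utility over quality of life, u(q) = \sqrt{q}:
\[
  u(0.3 + 0.1) - u(0.3) = \sqrt{0.4} - \sqrt{0.3} \approx 0.085 ,
\]
\[
  u(0.8 + 0.1) - u(0.8) = \sqrt{0.9} - \sqrt{0.8} \approx 0.054 .
\]
% The identical 0.1 gain in quality of life is worth roughly 60% more
% when starting from poor health (0.3) than from good health (0.8),
% whereas standard cost-effectiveness analysis values the two gains
% identically.
```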

There are a few studies on patient-reported outcomes and quality of life: a review of the evidence on patient-reported outcomes as independent prognostic factors for survival, a study comparing the costs and data completeness of independent and industry-funded orphan disease registries for post-authorisation surveillance, a study estimating utility decrements related to diabetes using a large German population dataset, a study estimating the preference weights for a new quality of life measure for mental health, and a review on the effect of telehealth interventions on the quality of life of asthma patients.

Cost-effectiveness analysis is represented by two applied studies and one review. One applied study examined the cost-effectiveness of an intervention for depression care in patients with cancer in the US setting using a decision model. The other applied study investigated the cost-effectiveness of the 2018 American College of Physicians guidance about Type 2 diabetes, also using a decision model. Reassuringly, the guidance emerged as cost-effective compared to the status quo. Last but not least, a review examined the cost-effectiveness evidence on targeted genetic-based screen-and-treat strategies to prevent breast and ovarian cancer, concluding that these strategies are likely to be cost-effective.

Credits

Support the blog, become a patron on Patreon.

By

  • Rita Faria

    Rita is a health economist at the University of York working mainly in economic evaluation. See https://tinyurl.com/y8ogvhjw for her academic profile.

