
Meeting round-up: R for Cost-Effectiveness Analysis Workshop 2019

I have switched to using R for my cost-effectiveness models, but I know that I am not using it to its full potential. As a fledgling R user, I was keen to hear about other people’s experiences. I’m in the process of updating one of my models and know that I could be coding things better. But with so many packages and ways to code, the options seem infinite and I struggle to know where to begin. In an attempt to remedy this, I attended the Workshop on R for trial and model-based cost-effectiveness analysis hosted at UCL. I was not disappointed.

The day showcased speakers with varying levels of coding expertise doing a wide range of cool things in R. We started with examples of implementing decision trees using the CEdecisiontree package, and of cohort Markov models. We also heard about a population model built with the heemod package, and the purrr package was suggested for probabilistic sensitivity analyses. These talks highlighted how, compared with Excel, R can be reusable, faster, transparent, iterative, and open source.
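To make the Markov and PSA ideas concrete, here is a minimal sketch of a two-state cohort Markov model with a purrr-driven probabilistic sensitivity analysis. The model structure and all parameter values are my own illustrative assumptions, not taken from any of the talks.

```r
# Minimal two-state cohort Markov model: "well" and "dead", annual cycles.
# All numbers here are illustrative assumptions, not from the workshop.
run_model <- function(p_die = 0.05, cost_well = 1000, qaly_well = 0.8,
                      n_cycles = 20) {
  trans <- matrix(c(1 - p_die, p_die,
                    0,         1),
                  nrow = 2, byrow = TRUE,
                  dimnames = list(c("well", "dead"), c("well", "dead")))
  state  <- c(well = 1, dead = 0)         # whole cohort starts well
  totals <- c(cost = 0, qaly = 0)
  for (i in seq_len(n_cycles)) {
    state  <- as.vector(state %*% trans)  # one annual transition
    totals <- totals + c(state[1] * cost_well, state[1] * qaly_well)
  }
  totals
}

# Probabilistic sensitivity analysis with purrr: draw parameter values,
# run the model over each draw, and stack the results into a data frame.
library(purrr)
set.seed(42)
draws <- rbeta(1000, shape1 = 5, shape2 = 95)  # uncertain p_die, mean ~0.05
psa <- map_dfr(draws, ~ as.list(run_model(p_die = .x)))
summary(psa)
```

Discounting and extra health states are omitted to keep the sketch short, but they slot into the same cycle loop.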

The open-source nature of R, however, has its drawbacks. One of the more interesting threads woven through the day concerned these challenges. Can we trust open-source software? When will NICE begin accepting models coded in R? How important is it that we have models in something like Excel that people can intuitively understand? I've not experienced problems choosing to use R for my work; for me, the issue has always been getting the support and information I need to get things done efficiently. The steep learning curve seems to be a major hurdle for many people. I had hoped to attend the short course introduction held the day before the workshop, but I was not fast enough to secure my spot, as the course sold out within 36 hours. Never fear: the short course will be held again next year in Bristol.

To get around some of the aforementioned barriers to using R, James O'Mahony presented work on an open-source simplified screening model that his team is developing for teaching. An Excel interface with VBA code writes the parameter values to a file that can be imported into R, where the model itself is a single file of short code; a sketch of that hand-off is below. Beautiful graphs show the impact of important parameters on the efficiency frontier. He said that they would love people to look at the code and offer suggestions, as they want to keep it simple, but there is a nonlinear relationship between additional features and complexity.
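The R side of that hand-off can be very small. The sketch below assumes the VBA side has written a two-column CSV of parameter names and values; the file name and parameter names are hypothetical, not from O'Mahony's model.

```r
# Hypothetical sketch of the Excel-to-R hand-off: the VBA side writes a
# two-column CSV (name, value) and the R model reads it back in.
params <- read.csv("parameters.csv", stringsAsFactors = FALSE)
param_list <- setNames(as.list(params$value), params$name)
# The single file of model code can then use, e.g., param_list$screening_age
```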

We then moved on to more specific topics, such as setting up a community for R users in the NHS, packages for survival curves, and how to build packages in R. I found Gianluca Baio's presentation on what a package is, and why we should be using them, really helpful. I realised that I hadn't really thought about what a package was before (a bundle of code, data, documentation, and tests that is easy to share with others), or that building one was something I could, or (as he argued) should, be doing as a time-saving tool, even if I'm not sharing with others. It's no longer difficult to build a package when you use packages like devtools and roxygen2 and tools like RStudio and GitHub. He pointed out that packages can be stored on GitHub if you're not keen to share with the wider world via CRAN.
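For anyone tempted to try this, a minimal package workflow with devtools and roxygen2 looks something like the sketch below; the package and function names are hypothetical.

```r
# Sketch of a minimal package workflow with devtools and roxygen2;
# the package name "mycemodel" is hypothetical.
# install.packages(c("devtools", "roxygen2"))
library(devtools)

create_package("mycemodel")  # scaffold a new package skeleton
# Put functions in R/ with roxygen2 comments above each one, e.g.:
#   #' Run the cohort model
#   #' @param p_die annual probability of death
#   #' @export
#   run_model <- function(p_die) { ... }
document()   # roxygen2 builds the help files and NAMESPACE from the comments
load_all()   # load everything for interactive testing
install()    # install the package locally
# If you would rather share via GitHub than CRAN:
# devtools::install_github("yourname/mycemodel")
```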

Another talk that I found particularly helpful was on R methods to prepare routine healthcare data for disease modelling. Claire Simons from the University of Oxford outlined her experiences of using R and ended her talk with a plethora of useful tips. These included using the data.table package for big data sets, as it saves time when merging; using meaningful file names to avoid confusion later; and investing in doing things properly from the start, as this saves time in the long run. She also suggested using code profiling to identify which code takes the most time. Finally, she reminded us that we should be constantly learning about R: read books on R and on writing algorithms, and talk to other people who are using R (programmers and others, not just health economists).
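Two of those tips are easy to demonstrate. The sketch below shows a keyed data.table merge alongside base R's profiler; the table contents are made up purely for illustration.

```r
# Sketch of two of the tips: a fast keyed merge with data.table, then
# base R profiling. The tables are made-up illustrations.
library(data.table)

admissions <- data.table(patient_id = 1:1e6, cost = runif(1e6, 100, 5000))
outcomes   <- data.table(patient_id = 1:1e6, qaly = runif(1e6, 0, 1))
setkey(admissions, patient_id)
setkey(outcomes, patient_id)
merged <- outcomes[admissions]  # keyed join, far quicker than base merge()

# Profiling: find out where the time actually goes before optimising.
Rprof("profile.out")
slow <- merge(as.data.frame(outcomes), as.data.frame(admissions),
              by = "patient_id")  # the slower base-R route, for contrast
Rprof(NULL)
summaryRprof("profile.out")$by.self
```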

For those who agree that the future is R, check out the resources from the Decision Analysis in R for Technologies in Health (DARTH) workgroup, attend the hackathon at Imperial on 6th-7th November hosted by Nathan Green, or join the ISPOR Open Source Models Special Interest Group.

Overall, the workshop structure allowed for a lot of great discussions in a relaxed atmosphere and I look forward to attending the next one.

Authors

  • Chris Sampson

    Founder of the Academic Health Economists' Blog. Senior Principal Economist at the Office of Health Economics. ORCID: 0000-0001-9470-2369

  • Angela Devine
