On 24 and 25 June 2019, the second Essen Economics of Mental Health Workshop took place, addressing the topic of ‘Mental Health over the Life-Course’. Like last year, the workshop was organized by Ansgar Wübker and Christoph Kronenberg.
Two keynote presentations and 13 paper presentations covered a wide variety of topics in the economics of mental health and were thoroughly discussed by 18 participants. The group was a well-rounded mix of junior and senior researchers.
The workshop started with a keynote given by Christopher J. Ruhm about mortality trends in the US. He compared mortality trends between 2001 and 2017 for different groups (socioeconomic status, gender, age group, and race/ethnicity) and found that these trends differ greatly between groups. In contrast to previous papers, he showed that increases in mortality rates were mainly driven by younger age groups. He aims to advance the research by looking at different causes of death.
At the end of the workshop, the second keynote was given by Fabrizio Mazzonna, who talked about cognitive decline. He showed that people who experience cognitive decline, but are not aware of it, are much more likely to experience wealth losses, especially in terms of financial wealth. Since those losses are not found among people without cognitive decline or among people who are aware of their cognitive decline, overestimation of one's own abilities might play an important role.
In between the keynote sessions, we discussed the work of the participants using a workshop format that is relatively new compared to the usual procedures in German academia. Each paper was presented by the discussant instead of the author, who in turn only clarified or responded. The presentation was followed by questions and discussion from all participants. Since all papers were shared with the group before the workshop, everybody could contribute, which led to thorough and fruitful discussions.
The presentations covered a wide range of topics in the economics of mental health. For example, Jakob Everding discussed the work of Michael Shields and his co-authors, who examined how changes in commodity prices affect job security among Australian miners and how this, in turn, affects their mental health. Daniel Kamhöfer discussed Anwen Zhang’s work, which analyzed whether students’ mental health is influenced by the mental health of their peers in class.
The first day ended with a dinner at Ponistra, a restaurant in Essen that specializes in organic food. The food was not just healthy, but also very delicious and there was enough time for conversations about economics and beyond.
After two days of presentations and discussions, we were all exhausted, but had gained good input on our papers and learned a great deal about the economics of mental health.
Missed iHEA 2019? Or were you there but could not make it to all of the amazing sessions? Stay tuned for my conference highlights!
iHEA started on Saturday 13th with pre-congress sessions on fascinating research as well as more prosaic topics, such as early-career networking sessions with senior health economists. All attendees got a super useful plastic bottle – great idea iHEA team!
The conference proper launched on Sunday evening with the brilliant plenary session by Raj Chetty from Harvard University.
Monday morning started bright and early with a thought-provoking session on the validation of cost-effectiveness (CE) models. It was chaired and discussed by Stefan Lhachimi and featured presentations by Isaac Corro Ramos, Talitha Feenstra and Salah Ghabri. I’m pleased to see that validation is coming to the forefront of current topics! Clearly, we need to do better in validating our models and documenting code, but we’re on the right track and engaged in making this happen.
The case was expertly made that taking a single-sector perspective can be misleading when evaluating policies with cross-sectoral effects; hence the impact inventory by Simon and colleagues is a useful tool to guide the choice of sectors to include. At the same time, we should be mindful of the requirements of the decision-maker for whom the CEA is intended. This was a compelling session, which will definitely set the scene for much more research to come.
After a tasty lunch (well done catering team!), I headed to the session on evaluations using non-randomised data. The presenters included Maninie Molatseli, Fernando Antonio Postali, James Love-Koh and Taufik Hidayat, with case studies from South Africa, Brazil and Indonesia. Marc Suhrcke chaired. I really enjoyed hearing about the practicalities of applying econometric methods to estimate treatment effects of system-wide policies. And James’s presentation was a great application of distributional cost-effectiveness analysis.
I was in the presenter’s chair next, discussing the challenges in implementing policies in the south-west quadrant of the CE plane. This session was chaired by Anna Vassall and discussed by Gesine Meyer-Rath. Jack Dowie started by convincingly arguing that the decision rule should be the same regardless of where in the CE plane the policy falls. David Bath and Sergio Torres-Rueda presented fascinating case studies of south-west quadrant policies. And I argued that the barrier was essentially a problem of communication (presentation available here). An energetic discussion followed and showed that, even in our field, the matter is far from settled.
The day finished with the memorial session for the wonderful Alan Maynard and Uwe Reinhardt, both of whom did so much for health economics. It was a beautiful session, where people got together to share incredible stories from these health economics heroes. And if you’d like to know more, both Alan and Uwe have published books here and here.
Tuesday started with the session on precision medicine, chaired by Dean Regier, and featuring Rosalie Viney, Chris McCabe and Stuart Peacock. Rather than slides, the screen was filled with a video of a cosy fireplace, inviting the audience to take part in the discussion.
Under debate was whether precision medicine is a completely different type of technology, with added benefits over and above improvement to health, and needing a different CE framework. The panellists were absolutely outstanding in debating the issues! Although I understand the benefits beyond health that these technologies can offer, I side with the view that, like with other technologies, value is about whether the added benefits are worth the losses given the opportunity cost.
My final session of the day was by the great Mike Drummond, comparing how HTA has influenced the uptake of new anticancer drugs in Spain versus England (summary in thread below). Mike and colleagues found that positive recommendations do increase utilisation, but the magnitude of change differs by country and region. Work is ongoing to check that utilisation has been picked up accurately in the routine data sources.
The conference dinner was at the Markthalle, with plenty of drinks and loads of international food to choose from. I had to have an early night given that I was presenting at 8:30 the next morning. Others, though, enjoyed the party until the early hours!
Indeed, Wednesday started with my session on cost-effectiveness analysis of diagnostic tests. Alison Smith presented on her remarkable work on measurement uncertainty while Hayley Jones gave a masterclass on her new method for meta-analysis of test accuracy across multiple thresholds. I presented on the CEA of test sequences (available here). Simon Walker and James Buchanan added insightful points as discussants. We had a fantastically engaged audience, with great questions and comments. It shows that the CEA of diagnostic tests is becoming a hugely important topic.
Sadly, some other morning sessions were not as well attended. One session, also on CEA, was even cancelled due to lack of an audience! For future conferences, I’d suggest scheduling the sessions on the day after the conference dinner a bit later, as well as having fewer sessions to choose from.
Next up on my agenda was the exceptional session on equity, chaired by Paula Lorgelly, and with presentations by Richard Cookson, Susan Griffin and Ijeoma Edoka. I was unable to attend, but I have watched it at home via YouTube (from 1:57:10)! That’s right, some sessions were live streamed and are still available via the iHEA website. Do have a look!
Lastly, the outstanding plenary session by Lise Rochaix and Joseph Kutzin on how to translate health economics research into policy. Lise and Joseph had pragmatic suggestions and insightful comments on the communication of health economics research to policy makers. Superb! Also available on the live stream here (from 06:09:44).
iHEA 2019 was truly an amazing conference. Expertly organised, well thought-out and with lots of interesting sessions to choose from. iHEA 2021 in Cape Town is firmly in my diary!
I have switched to using R for my cost-effectiveness models, but I know that I am not using it to its full potential. As a fledgling R user, I was keen to hear about other people’s experiences. I’m in the process of updating one of my models and know that I could be coding things better. But with so many packages and ways to code, the options seem infinite and I struggle to know where to begin. In an attempt to remedy this, I attended the Workshop on R for trial and model-based cost-effectiveness analysis hosted at UCL. I was not disappointed.
The day showcased speakers with varying levels of coding expertise doing a wide range of cool things in R. We started with examples of implementing decision trees using the CEdecisiontree package and cohort Markov models. We also got to hear about a population model built with the heemod package, and the purrr package was suggested for probabilistic sensitivity analyses. These talks highlighted how, compared to Excel, R can be reusable, faster, transparent, iterative, and open source.
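To give a flavour of what these cohort models involve, here is a minimal two-state Markov model sketched in base R. The states, transition probabilities, costs and time horizon are all invented for illustration; packages like heemod wrap this kind of bookkeeping in a friendlier interface.

```r
# A toy two-state cohort Markov model: everyone starts "Well", and each
# cycle 10% of the Well cohort moves to the absorbing "Dead" state.
states <- c("Well", "Dead")
p_mat <- matrix(c(0.9, 0.1,   # transitions from "Well"
                  0.0, 1.0),  # "Dead" is absorbing
                nrow = 2, byrow = TRUE,
                dimnames = list(states, states))
cohort <- c(Well = 1, Dead = 0)          # whole cohort starts in "Well"
cost_per_cycle <- c(Well = 100, Dead = 0)

total_cost <- 0
for (cycle in 1:10) {
  cohort <- as.vector(cohort %*% p_mat)  # advance the cohort one cycle
  names(cohort) <- states
  total_cost <- total_cost + sum(cohort * cost_per_cycle)
}
total_cost
```

The same matrix-multiplication loop scales to any number of states; a real model would add discounting, QALY accumulation and half-cycle correction on top.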
The open-source nature of R, however, has its drawbacks. One of the more interesting conversations woven through the day was around the challenges. Can we trust open-source software? When will NICE begin accepting models coded in R? How important is it that we have models in something like Excel that people can intuitively understand? I’ve not experienced problems choosing to use R for my work; for me, the challenge has always been getting the support and information I need to get things done efficiently. The steep learning curve seems to be a major hurdle for many people. I had hoped to attend the short-course introduction held the day before the workshop, but I was not fast enough to secure my spot, as the course sold out within 36 hours. Never fear, the short course will be held again next year in Bristol.
To get around some of the aforementioned barriers to using R, James O’Mahony presented work on an open-source simplified screening model that his team is developing for teaching. An Excel interface with VBA code writes the parameter values to a file that can be imported into R, where a single short file of model code does the work. Beautiful graphs show the impact of important parameters on the efficiency frontier. He said they would love to have people look at the code and give suggestions, as they want to keep it simple, but noted that there is a nonlinear relationship between additional features and complexity.
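The hand-off between the Excel front end and the R model is simple to sketch. Assuming the VBA code writes a two-column CSV of parameter names and values (the file layout and parameter names below are invented for illustration, not taken from James's model), the R side might look like:

```r
# Hypothetical parameter hand-off: the Excel/VBA front end writes a CSV,
# and the R model reads it into a named list. Here the CSV is inlined as
# text so the sketch is self-contained.
csv_text <- "name,value
p_progress,0.10
cost_screen,25
cost_treat,500"

params <- read.csv(text = csv_text, stringsAsFactors = FALSE)

# turn the two columns into a named list the model code can use
par_list <- setNames(as.list(params$value), params$name)
par_list$cost_screen
```

Keeping all parameters in one flat file like this also makes it easy to version the inputs alongside the model code.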
And then we moved on to more specific topics, such as setting up a community for R users in the NHS, packages for survival curves, and how to build packages in R. I found Gianluca Baio’s presentation on what a package is and why we should be using them really helpful. I realised that I hadn’t really thought about what a package was before (a bundle of code, data, documentation and tests that is easy to share with others) or that it was something that I could or (as he argued) should be building for myself as a time-saving tool even if I’m not sharing with others. It’s no longer difficult to build a package when you use packages like devtools and roxygen2 and tools like RStudio and GitHub. He pointed out that packages can be hosted on GitHub if you’re not keen to share with the wider world via CRAN.
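To make the package idea concrete, here is a sketch of the kind of roxygen2-commented function a personal package might contain. The function itself is an invented example; in a package, running devtools::document() would turn the #' comments into a help page and a NAMESPACE export.

```r
# Sketch of a roxygen2-documented helper as it might appear in a
# personal package (the function is illustrative, not from the talk).

#' Discount a vector of future costs
#'
#' @param x Numeric vector of costs, one per cycle (cycle 1 first).
#' @param rate Annual discount rate (default 3.5%).
#' @return The discounted costs.
#' @export
discount <- function(x, rate = 0.035) {
  x / (1 + rate)^seq_along(x)
}

discount(c(100, 100, 100))
```

Even without publishing to CRAN, collecting helpers like this in one documented, tested package saves copy-pasting the same functions between model projects.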
Another talk that I found particularly helpful was on R methods to prepare routine healthcare data for disease modelling. Claire Simons from the University of Oxford outlined her experiences of using R and ended her talk with a plethora of useful tips. These included using the data.table package for big data sets as it saves time when merging, using meaningful file names to avoid confusion later, and investing in doing things properly from the start as this will save time later. She also suggested using code profiling to identify which code takes the most time. Finally, she reminded us that we should be constantly learning about R: read books on R and writing algorithms, and talk to other people who are using R (programmers and other people, not just health economists).
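Of those tips, code profiling is the easiest to try straight away, since the tools ship with base R. A quick sketch of the idea (slow_step is an invented stand-in for an expensive model function):

```r
# Rprof() samples the call stack while the code runs; summaryRprof()
# then reports where the time went, so you can see which functions to
# optimise first.
slow_step <- function(n) {
  s <- 0
  for (i in seq_len(n)) s <- s + sqrt(i)
  s
}

prof_file <- tempfile()
Rprof(prof_file)
result <- slow_step(2e6)
Rprof(NULL)

# summaryRprof() errors if the run was too quick to collect any samples,
# so guard it for this toy example
prof <- tryCatch(summaryRprof(prof_file)$by.self, error = function(e) NULL)
if (!is.null(prof)) print(head(prof))
result
```

On a real model run the by.self table usually makes the bottleneck obvious, which is exactly the point of the tip: measure before optimising.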