Chris Sampson’s journal round-up for 19th August 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Paying for kidneys? A randomized survey and choice experiment. American Economic Review [RePEc] Published August 2019

This paper starts with a quote from Alvin Roth about ‘repugnant transactions’, of which markets for organs provide a prime example. This idea of ‘repugnant transactions’ has been hijacked by some pop economists to represent the stupid opinions of non-economists. If you ask me, markets for organs aren’t repugnant; they just seem like a very bad idea in terms of both efficiency and equity. But it doesn’t matter what I think; it matters what the people of the United States think.

The authors of this study conducted an online survey with a representative sample of 2,666 Americans. Each respondent was randomised to evaluate one of eight systems compared with the current system. The eight systems differed with respect to i) the form of compensation (cash or non-cash), ii) its size ($30,000 or $100,000), and iii) whether it was paid by a public agency or by the organ recipient. Participants made five binary choices that differed according to the gain – in transplants generated – associated with the new system. Half of the participants were also asked to express moral judgements.

Both the system features (e.g. who pays) and the outcomes of the new system influenced people’s choices. Broadly speaking, the results suggest that people aren’t opposed to donors being paid, but are opposed to patients paying. (Remember, we’re talking about the US here!) Around 21% of respondents opposed payment no matter what, 46% were in favour no matter what, and 18% were sensitive to the gain in the number of transplants. A 10-percentage-point increase in transplants resulted in a 2.6-percentage-point increase in support. Unsurprisingly, individuals’ moral judgements were predictive of the attitudes they expressed, particularly with respect to fairness. The authors describe their results as exhibiting ‘strong polarisation’, which is surely inevitable for questions that involve moral judgement.

Being in AER, this is a long, meandering paper with extensive analyses and thoroughly reported results. There are lots of findings that I can’t share here. It’s a valuable study with plenty of food for thought, but I can’t help thinking that it is, methodologically, a bit weak. If we want to understand the different views in society, surely some Q methodology would be more useful than a basic online survey. And if we want to elicit stated preferences, surely a discrete choice experiment with a well-thought-out efficient design would give us more meaningful results.

Estimating local need for mental healthcare to inform fair resource allocation in the NHS in England: cross-sectional analysis of national administrative data linked at person level. The British Journal of Psychiatry [PubMed] Published 8th August 2019

The need to fairly (and efficiently) allocate NHS resources across the country played an important part in the birth of health economics in the UK, and resulted in resource allocation formulas. Since 1996 there has been a separate formula for mental health services, which is periodically updated. This study describes the work undertaken for the latest update.

The model is based on predicting service use and total mental health care costs observed in 2015 from predictors in the years 2013-2014, to inform allocations in 2019-2024. Various individual-level data sources available to the NHS were used for 43.7 million people registered with a GP practice and over the age of 20. The cost per patient who used mental health services ranged from £94 to over £1 million, averaging around £2,000. The predictor variables included individual indicators such as age, sex, ethnicity, physical diagnoses, and household type (e.g. number of adults and kids). The model also used variables observed at the local or GP practice level, such as the proportion of people receiving out-of-work benefits and the distance from the mental health trust. All of this got plugged into a good old OLS regression. From individual-level predictions, the researchers created aggregated indices of need for each clinical commissioning group (CCG).
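The two-step logic – individual-level OLS, then aggregation of predictions into an area-level need index centred on 1.00 – can be sketched as follows. Everything here is illustrative: the predictor names, coefficients, and the five hypothetical ‘CCGs’ are made up for the example, not the study’s actual specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic individual-level data (hypothetical stand-ins for the
# study's predictors, e.g. age, a deprivation proxy, a comorbidity flag).
n = 10_000
X = np.column_stack([
    np.ones(n),                    # intercept
    rng.integers(20, 90, n),       # age
    rng.random(n),                 # area deprivation proxy
    rng.integers(0, 2, n),         # physical diagnosis indicator
])
beta_true = np.array([50.0, 10.0, 800.0, 400.0])
cost = X @ beta_true + rng.normal(0, 300, n)  # observed annual cost (£)

# Step 1: plain OLS on individual-level costs.
beta_hat, *_ = np.linalg.lstsq(X, cost, rcond=None)

# Step 2: aggregate individual predictions to area level and express
# each area's mean predicted cost relative to the national mean,
# giving a need index where 1.00 is the national average.
area = rng.integers(0, 5, n)       # hypothetical CCG labels
pred = X @ beta_hat
national_mean = pred.mean()
need_index = {a: pred[area == a].mean() / national_mean for a in range(5)}
print({a: round(v, 3) for a, v in need_index.items()})
```

By construction, the population-weighted average of the indices is exactly 1, mirroring how areas like Southwark (1.62) and Surrey Heath (0.65) sit either side of the national average.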

A lot went into the model, which explained 99% of the variation in costs between CCGs. A key way in which this model differs from previous versions is that it relies on individual-level indicators rather than those observed at the level of GP practice or CCG. There was a lot of variation in the CCG need indices, ranging from 0.65 for Surrey Heath to 1.62 for Southwark, where 1.00 is the average. You’ll need to check the online appendices for your own CCG’s level of need (Lewisham: 1.52). As one might expect, the researchers observed a strong correlation between a CCG’s need index and the CCG’s area’s level of deprivation. Compared with previous models, this new model indicates a greater allocation of resources to more deprived and older populations.

Measuring, valuing and including forgone childhood education and leisure time costs in economic evaluation: methods, challenges and the way forward. Social Science & Medicine [PubMed] Published 7th August 2019

I’m a ‘societal perspective’ sceptic, not because I don’t care about non-health outcomes (though I do care less) but because I think it’s impossible to capture everything that is of value to society, and that capturing just a few things will introduce a lot of bias and noise. I would also deny that time has any intrinsic value. But I do think we need to do a better job of evaluating interventions for children. So I expected this paper to provide me with a good mix of satisfaction and exasperation.

Health care often involves a loss of leisure or work time, which can constitute an opportunity cost and is regularly included in economic evaluations – usually proxied by wages – for adults. The authors outline the rationale for considering ‘time-related’ opportunity costs in economic evaluations and describe the nature of lost time for children. For adults, the distinction is generally between paid or unpaid work and leisure time. Arguably, this distinction is not applicable to children. Two literature reviews are described. One looked at economic evaluations in the context of children’s health, to see how researchers have valued lost time. The other sought to identify ideas about the value of lost time for children from a broader literature.

The authors do a nice job of outlining how difficult it is to capture non-health-related costs and outcomes in the context of childhood. There is a handful of economic evaluations that have tried to measure and value children’s foregone time. The valuations generally focussed on the costs of childcare rather than the costs to the child, though one looked at the rate of return to education. There wasn’t a lot to go on in the non-health literature, which mostly relates to adults. From what there is, the recommendation is to capture absence from formal education and foregone leisure time. Of course, consideration needs to be given to the importance of lost time and thus the value of capturing it in research. We also need to think about the risk of double counting. When it comes to measurement, we can probably use similar methods to those we would use for adults, such as diaries. But we need very different approaches to valuation. On this, the authors found very little in the way of good examples to follow. More research needed.

Credits

Chris Sampson’s journal round-up for 12th August 2019


Developing open-source models for the US health system: practical experiences and challenges to date with the Open-Source Value Project. PharmacoEconomics [PubMed] Published 7th August 2019

PharmacoEconomics will soon publish a themed issue on transparency in decision modelling (to which I’ve contributed), and this paper – I assume – is one that will feature. At least one output from the Open-Source Value Project has featured in these round-ups before. The purpose of this paper is to describe the experiences of the initiative in developing and releasing two open-source models, one in rheumatoid arthritis and one in lung cancer.

The authors outline the background to the project and its goal to develop credible models that are more tuned-in to stakeholders’ needs. By sharing the R and C++ source code, developing interactive web applications, and providing extensive documentation, the models are intended to be wholly transparent and flexible. The model development process also involves feedback from experts and the public, followed by revision and re-release. It’s a huge undertaking. The paper sets out the key challenges associated with this process, such as enabling stakeholders with different backgrounds to understand technical models and each other. The authors explain how they have addressed such difficulties along the way. The resource implications of this process are also challenging, because the time and expertise required are much greater than for run-of-the-mill decision models. The advantages of the tools used by the project, such as R and GitHub, are explained, and the paper provides some ammunition for the open-source movement. One of the best parts of the paper is the authors’ challenge to those who question open-source modelling on the basis of intellectual property concerns. For example, they state that, “Claiming intellectual property on the implementation of a relatively common modeling approach in Excel or other programming software, such as a partitioned survival model in oncology, seems a bit pointless.” Agreed.

The response to date from the community has been broadly positive, though there has been a lack of engagement from US decision-makers. Despite this, the initiative has managed to secure adequate funding. This paper is a valuable read for anyone involved in open-source modelling or in establishing a collaborative platform for the creation and dissemination of research tools.

Incorporating affordability concerns within cost-effectiveness analysis for health technology assessment. Value in Health Published 30th July 2019

The issue of affordability is proving to be a hard nut to crack for health economists. That’s probably because we’ve spent a very long time conducting incremental cost-effectiveness analyses that pay little or no attention to the budget constraint. This paper sets out to define a framework that finally brings affordability into the fold.

The author sets up an example with a decision-maker that seeks to maximise population health with a fixed budget – read, HTA agency – and the motivating example is new medicines for hepatitis C. The core of the proposal is an alternative decision rule. Rather than simply comparing the incremental cost-effectiveness ratio (ICER) to a fixed threshold, it incorporates a threshold that is a function of the budget impact. At its most basic, a bigger budget impact (all else equal) means a greater opportunity cost and thus a lower threshold. The author suggests doing away with the ICER (which is almost impossible to work with) and instead using net health benefits. In this framework, whether or not net health benefit is greater than zero depends on the size of the budget impact at any given ICER. If we accept the core principle that budget impact should be incorporated into the decision rule, it raises two other issues – time and uncertainty – which are also addressed in the paper. The framework moves us beyond the current focus on net present value, which ignores the distribution of costs over time beyond simply discounting future expenditure. Instead, the opportunity cost ‘threshold’ depends on the budget impact in each time period. The description of the framework also addresses uncertainty in budget impact, which requires the estimation of opportunity costs in each iteration of a probabilistic analysis.
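To see how such a decision rule plays out, here is a minimal sketch. The threshold schedule k(B) and all the numbers are assumptions invented for illustration – the paper derives the opportunity-cost threshold formally, not as an arbitrary step function – but the mechanics are the same: net health benefit is ΔQALYs minus ΔCost divided by a threshold that falls as budget impact grows.

```python
# Hypothetical threshold schedule: the opportunity-cost threshold k(B)
# declines as budget impact B grows (assumed step function, £/QALY).
def threshold(budget_impact):
    if budget_impact <= 20e6:
        return 30_000
    elif budget_impact <= 100e6:
        return 25_000
    else:
        return 20_000

def net_health_benefit(d_qaly, d_cost, budget_impact):
    # NHB = health gained minus health displaced elsewhere in the system.
    return d_qaly - d_cost / threshold(budget_impact)

# Same technology, same ICER of £25,000/QALY, but different scales of uptake.
small = net_health_benefit(d_qaly=1_000, d_cost=25e6, budget_impact=25e6)
large = net_health_benefit(d_qaly=10_000, d_cost=250e6, budget_impact=250e6)
print(small, large)  # → 0.0 -2500.0
```

At small scale the technology breaks even; at large scale the identical ICER yields a negative net health benefit, which is exactly the point of making the threshold depend on budget impact.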

The paper is thorough in setting out the calculations needed to implement this framework. If you’re conducting an economic evaluation of a technology that could have a non-marginal (big) budget impact, you should tag this on to your analysis plan. Once researchers start producing these estimates, we’ll be able to understand how important these differences could be for resource allocation decision-making and determine whether the likes of NICE ought to incorporate it into their methods guide.

Did UberX reduce ambulance volume? Health Economics [PubMed] [RePEc] Published 24th June 2019

In London, you can probably – at most times of day – get an Uber quicker than you can get an ambulance. That isn’t necessarily a bad thing, as ambulances aren’t there to provide convenience. But it does raise an interesting question. Could the availability of super-fast, low-cost, low-effort taxi hailing reduce pressure on ambulance services? If so, we might anticipate the effect to be greatest where people have to actually pay for ambulances.

This study combines data on Uber market entry in the US, by state and city, with ambulance rates. Between Q1 2012 and Q4 2015, the proportion of the US population with access to Uber rose from 0% to almost 25%. The authors are also able to distinguish ‘lights and sirens’ ambulance rides from ‘no lights and sirens’ rides. A difference-in-differences model estimates the ambulance rate for a given city by quarter-year. The analysis suggests that there was a significant decline in ambulance rates in the years following Uber’s entry to the market, implying an average of 1.2 fewer ambulance trips per 1,000 population per quarter.
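The difference-in-differences logic behind the estimate can be illustrated with a toy 2×2 comparison. The rates below are hypothetical, chosen only so that the estimate reproduces the reported magnitude of 1.2 fewer trips per 1,000 population per quarter; the actual model conditions on city and quarter-year, not a simple before/after average.

```python
# Hypothetical ambulance rates (trips per 1,000 population per quarter).
pre_treated, post_treated = 30.0, 28.5   # cities where Uber entered
pre_control, post_control = 30.0, 29.7   # cities without Uber

# DiD estimate: the change in treated cities net of the change in
# control cities, which differences away common time trends.
did = (post_treated - pre_treated) - (post_control - pre_control)
print(round(did, 1))  # → -1.2
```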

There are some questionable results in here, including the fact that a larger effect was found for the ‘lights and sirens’ ambulance rate, so it’s not entirely clear what’s going on. The authors describe a variety of robustness checks for our consideration. Unfortunately, the discussion of the results is lacking in detail and insight, so readers need to figure it out themselves. I’d be very interested to see a similar analysis in the UK. I suspect that I would be inclined to opt for an Uber over an ambulance in many cases. And I wouldn’t have the usual concern about Uber exploiting its drivers, as I dare say ambulance drivers aren’t treated much better.

Credits

R for trial and model-based cost-effectiveness analysis: workshop

Background and objectives

It is our pleasure to announce a workshop and training event on the use of R for trial and model-based cost-effectiveness analysis (CEA). This follows our successful workshop on R for CEA in 2018.

Our event will begin with a half-day short course on R for decision trees and Markov models and the use of the BCEA package for graphical and statistical analysis of results; this will be delivered by Gianluca Baio of UCL and Howard Thom of Bristol University.

This will be followed by a one-day workshop in which we will present a wide variety of technical aspects by experts from academia, industry, and government institutions (including NICE). Topics will include decision trees, Markov models, discrete event simulation, integration of network meta-analysis, extrapolation of survival curves, and development of R packages.

We will include a pre-workshop virtual code challenge on a problem set by our scientific committee. This will take place on GitHub and a Slack channel, with participants encouraged to submit final R code solutions for peer review on efficiency, flexibility, elegance, and transparency. Prizes will be provided for the best entry.

Participants are also invited to submit abstracts for potential oral presentations. An optional dinner and networking event will be held on the evening of 8th July.

Registration is open until 1 June 2019 at https://onlinestore.ucl.ac.uk/conferences-and-events/faculty-of-mathematical-physical-sciences-c06/department-of-statistical-science-f61/f61-workshop-on-r-for-trial-modelbased-costeffectiveness-analysis

To submit an abstract, please send it to howard.thom@bristol.ac.uk with the subject “R for CEA abstract”. The word limit is 300. The abstract submission deadline is 15 May 2019 and the scientific committee will make decisions on acceptance by 1st June 2019.

Preliminary Programme

Day 2: Workshop. Tuesday 9th July.

  • 9:30-9:45. Howard Thom. Welcome
  • 9:45-10:15. Nathan Green. Imperial College London. Simple, pain-free decision trees in R for the Excel user
  • 10:15-10:35. Pedro Saramago. Centre for Health Economics, University of York. Using R for Markov modelling: an introduction
  • 10:35-10:55. Alison Smith. University of Leeds. Discrete event simulation models in R
  • 10:55-11:10. Coffee
  • 11:10-12:20. Participants oral presentation session (4 speakers, 15 minutes each)
  • 12:20-13:45. Lunch
  • 13:45-14:00. Gianluca Baio. University College London. Packing up, shacking up’s (going to be) all you wanna do! Building packages in R and GitHub
  • 14:00-14:15. Jeroen Jansen. Innovation and Value Initiative. State transition models and integration with network meta-analysis
  • 14:15-14:25. Ash Bullement. Delta Hat Analytics, UK. Fitting and extrapolating survival curves for CEA models
  • 14:25-14:45. Iryna Schlackow. Nuffield Department of Public Health, University of Oxford. Generic R methods to prepare routine healthcare data for disease modelling
  • 14:45-15:00. Coffee
  • 15:00-15:15. Initiatives for the future and challenges in gaining R acceptance (ISPOR Taskforce, ISPOR Special Interest Group, future of the R for CEA workshop)
  • 15:15-16:30. Participant discussion.
  • 16:30-16:45. Anthony Hatswell. Close and conclusions