
Chris Sampson’s journal round-up for 13th April 2020

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A simple decision analysis of a mandatory lockdown response to the COVID-19 pandemic. Applied Health Economics and Health Policy [PubMed] [RePEc] Published 5th April 2020

Some national policymakers have set up camp as either ‘lock-downers’ or ‘gradual steppers’ in their response to the COVID-19 pandemic. Hopefully, both camps have made their decisions in view of the evidence available to them and the context in which they are operating. But what factors might be driving these decisions? In this brief paper, a simple decision model is described in order to reveal the assumptions underlying decision-making about an overall policy response to COVID-19.

The author describes a simple decision tree with three outcomes: i) number of COVID-19 cases, ii) effects on the economy, and iii) effects of isolation on well-being. An important component of the model is that opting for ‘gradual steps’ includes a risk of being forced to adopt a full lockdown at a later date. The model is not meant to identify the better of the two options. Rather, some hypothetical parameters are plugged in to identify the circumstances under which each option might be preferable. Still, the author draws some clear conclusions. Gradual steppers are willing to trade an increased number of deaths for a reduction in the impact on the economy and on well-being. Given some ballpark estimates of the magnitude of effects, the model also suggests that gradual steppers hold all three of the following views: 1) that the need for a later lockdown is highly unlikely, 2) that an extended but less intense isolation will have less impact on the economy and well-being, and 3) that an equitable distribution of the impacts is not a key policy concern.
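To make the structure concrete, here is a minimal sketch of that two-branch decision tree in Python. The outcome scores, weights, and the probability of a forced late lockdown are entirely hypothetical – they are not the paper’s parameters – but varying them reveals when each camp’s preferred option ‘wins’, which is the point of the exercise.

```python
# A minimal sketch of a two-branch lockdown decision tree.
# All parameter values are hypothetical, not the paper's figures.

def loss(deaths, economy, wellbeing, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of the three outcome dimensions (higher = worse)."""
    w_d, w_e, w_w = weights
    return w_d * deaths + w_e * economy + w_w * wellbeing

# Immediate lockdown: outcomes assumed known (illustrative 0-100 scores).
lockdown = loss(deaths=10, economy=80, wellbeing=70)

# Gradual steps: risk p of being forced into a late lockdown,
# which is assumed worse on every dimension than locking down early.
p_late = 0.3  # hypothetical probability of a forced late lockdown
gradual = (p_late * loss(deaths=40, economy=90, wellbeing=80)
           + (1 - p_late) * loss(deaths=25, economy=40, wellbeing=35))

print(f"lockdown: {lockdown:.0f}, gradual steps: {gradual:.0f}")
```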

This is a useful exercise, but it brushes over a lot of complexity. The two camps are not so cleanly distinguishable. There is a multitude of policy levers that are being pulled in response to COVID-19. And timing is everything. But we need to start somewhere. COVID-19 isn’t going away any time soon. Decision modellers ought to be establishing an open source initiative to develop a cost-effectiveness model (or models) that can incorporate this complexity. So far, I have spotted one other modelling attempt, but as far as I can tell it is not open source.

Why only test symptomatic patients? Consider random screening for COVID-19. Applied Health Economics and Health Policy [PubMed] [RePEc] Published 8th April 2020

Continuing the theme – and discussing one of those complexities absent from the paper above – is this editorial suggesting that random testing of the asymptomatic population might be a more effective public health measure than only testing those with symptoms. Many countries have adopted strict eligibility criteria for scarce testing resources. The problem with this approach is that many of the tests – perhaps most – will not provide useful information. In the absence of curative treatment, they will simply confirm what we already suspect and they will not be used to change behaviour. Consider an alternative scenario in which a person with no symptoms is tested and receives a positive result. This person will likely change their behaviour in a dramatic way, isolating themselves and thus reducing the spread of the disease. They’re also likely to tell their friends and family, who may change their own behaviour to prevent transmission. The goal of testing should be to obtain as much information as possible. While no treatments are available, confirming a suspected positive is far less valuable than identifying a positive among those presumed negative. And false negatives are potentially more dangerous than false positives. All of this points towards the need for more testing of people without symptoms.
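To see the asymmetry, a quick Bayes’ rule sketch helps. The priors and test characteristics below are invented for illustration: a positive result barely moves beliefs about a strongly suspected case, but transforms them for a randomly screened asymptomatic person.

```python
# Posterior probability of infection after a positive test, via Bayes' rule.
# Priors and test characteristics are hypothetical.

def posterior_given_positive(prior, sensitivity, specificity):
    """P(infected | positive test result)."""
    p_positive = prior * sensitivity + (1 - prior) * (1 - specificity)
    return prior * sensitivity / p_positive

sens, spec = 0.9, 0.95  # assumed test characteristics

# Symptomatic patient, infection already strongly suspected:
print(posterior_given_positive(prior=0.90, sensitivity=sens, specificity=spec))  # ~0.99
# Asymptomatic person screened at random:
print(posterior_given_positive(prior=0.05, sensitivity=sens, specificity=spec))  # ~0.49
```

In the first case the test mostly confirms what was already believed; in the second it shifts the probability of infection roughly tenfold – exactly the kind of result that changes behaviour.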

Want to improve public health access? Let’s start with the basics: measuring efficiency correctly. PharmacoEconomics – Open [PubMed] [RePEc] Published 3rd April 2020

Health services have long been concerned with efficiency. One reading of history is that this concern enabled health economics to establish itself in the UK. Yet, the author of this article argues, we’ve been doing it all wrong.

What it comes down to is that old argument that ‘health is different’. In general, that’s an argument I buy. The specific difference with respect to the measurement of efficiency, described in this paper, is the necessary existence of excess capacity. Health care differs from other sectors because it operates in a context of extreme uncertainty. This ranges from random variation over time in the need for expensive services, to existential threats such as COVID-19. As such, it is necessary to have excess capacity to be ready for these possibilities. The author cites an estimate of bed occupancy rates in the US of around 63%.

The author asserts that current analyses – mostly using data envelopment analysis and stochastic frontier analysis – aren’t up to scratch, because they assume that the inputs to production can be increased or decreased according to demand. Meanwhile, the inputs to health care are rigid. The resulting argument is that methods should be developed that take excess capacity into account, and that health services should only be benchmarked against their own equilibrium level of inefficiency. It’s difficult to judge the magnitude of this problem, but it seems that it could be important. Without appropriate benchmarking, health services could be inadvertently run into the ground by efficiency objectives.
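For readers unfamiliar with the methods being critiqued, here is a minimal input-oriented DEA model in Python, using made-up data for four hypothetical hospitals. Note the assumption baked in: any bed not matched by output registers as inefficiency, so capacity held back for a surge looks like waste.

```python
# Minimal input-oriented, constant-returns DEA (the standard CCR model),
# with hypothetical data: one input (staffed beds), one output (patients).
import numpy as np
from scipy.optimize import linprog

beds = np.array([100.0, 120.0, 80.0, 150.0])      # inputs
patients = np.array([90.0, 100.0, 75.0, 110.0])   # outputs
n = len(beds)

def efficiency(o):
    """Efficiency score theta for hospital o (1.0 = on the frontier)."""
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.vstack([
        np.r_[-beds[o], beds],     # composite input <= theta * own input
        np.r_[0.0, -patients],     # composite output >= own output
    ])
    b_ub = np.array([0.0, -patients[o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

for o in range(n):
    print(f"hospital {o}: efficiency = {efficiency(o):.3f}")
```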

How are incremental cost-effectiveness, contextual considerations, and other benefits viewed in health technology assessment recommendations in the United States? Value in Health Published 1st April 2020

The US still doesn’t have a national public agency for health technology assessment, but the independent Institute for Clinical and Economic Review is doing its best to demonstrate how it could be done. (Annoyingly, they’re known as ICER, which more commonly means incremental cost-effectiveness ratio, so for this blog post I’ll be referring to them as The IfCaER, because I’m petty.) How HTA agency decisions are determined is an important question for research and, of course, for developers of new technologies. This study sought to answer that question for the case of The IfCaER.

The IfCaER’s guidance on value establishes two ICER thresholds of $50,000 and $175,000 per QALY, below which a technology is high value, between which it is intermediate value, and above which it is low value. But this isn’t a strict basis for recommendations and is one of several aspects of value considered by the council. The main ‘outcome’ of interest for this study is the decision made by council members about the long-term value for money of a technology. The authors reviewed 31 assessments completed by The IfCaER and analysed 51 votes on long-term value. Council members also consider a range of other benefits, disadvantages, and contextual considerations, such as spillover effects, characteristics of the technology, and current treatment provision.
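Taken at face value, the headline rule is simple enough to write down; the example ICERs below are those of the two case studies discussed next.

```python
# The two-threshold value rule described in the study, in dollars per QALY.
# In practice this is a starting point for council votes, not a determinant.

def long_term_value(icer_per_qaly):
    if icer_per_qaly < 50_000:
        return "high value"
    if icer_per_qaly <= 175_000:
        return "intermediate value"
    return "low value"

print(long_term_value(45_000))    # tisagenlecleucel's ICER
print(long_term_value(288_000))   # low end of voretigene neparvovec's range
```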

The majority of decisions were made in line with the general guidance on value thresholds, but there were many exceptions. The findings from the other value considerations are revealing, with key differences in decisions illustrated using two case studies. Voretigene neparvovec had an ICER of between $288,000 and $644,000 per QALY, but only three out of twelve council members judged it as low value. This is because it was seen to reduce caregiver burden and improve productivity, while the drug used a novel mechanism of action and was expected to bring improvements in the infrastructure of care. On the flipside, tisagenlecleucel had an ICER of $45,000 and used a novel mechanism of action, but most council members did not judge it to represent high value. This was because there was a lot of uncertainty in the benefits of the treatment and in the possibility of serious side effects.

The authors close with an extensive discussion of deliberative processes and the potential for quantitative approaches to multi-criteria decision analysis. This seems a bit misplaced, but is interesting nonetheless. On the whole, the study offers some reassurance about the value of The IfCaER’s deliberative processes. Yes, ICERs and cost-effectiveness thresholds play an important part in the decision-making, but they are not followed religiously and the evidence is considered for what it’s worth.

Does the EQ-5D-5L benefit from extension with a cognitive domain: testing a multi-criteria psychometric strategy in trauma patients. Quality of Life Research [PubMed] Published 10th April 2020

I’m working on the development of a cognition bolt-on for the EQ-5D-5L. Potential descriptors for the EQ-5D-3L have been around for a while and have undergone testing, but this is one of the only studies to test the performance of a cognition bolt-on for the 5L.

A sample of 1,799 trauma patients in the Netherlands completed questionnaires at six and twelve months after their trauma. These included the EQ-5D-5L with a cognition bolt-on and the impact of events scale-revised (IES-R) as a condition-specific measure. Only slightly fewer people reported ‘no problems’ when the cognition bolt-on was considered. But there was higher convergent validity between the EQ-5D and the EQ-VAS when the cognition bolt-on was included. There was also some evidence from the Shannon Evenness Index that the bolt-on might increase the yield of information. With two points of observation, the study also considers change over time. There was some indication that cognition changed in the opposite direction to changes in other EQ-5D domains, particularly for people with PTSD, while people with traumatic brain injury saw improvement in the cognition domain.
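For those who haven’t met it, the Shannon index and its evenness counterpart are easy to compute from a response distribution. Here is a sketch for a single five-level dimension, using hypothetical counts rather than the study’s data.

```python
# Shannon index H and evenness J = H / H_max for one five-level dimension.
# Response counts below are hypothetical, not the study's data.
from math import log2

def shannon(counts):
    """Return (H, J) for a list of response counts per level."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    h = -sum(p * log2(p) for p in probs)
    return h, h / log2(len(counts))  # H_max: all levels used equally

print(shannon([800, 120, 50, 20, 10]))    # responses piled on 'no problems'
print(shannon([320, 260, 200, 140, 80]))  # more even spread: higher H and J
```

A dimension (or bolt-on) that spreads responses across its levels carries more information, which is what the Shannon Evenness Index is getting at.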

The findings are vaguely in favour of a cognition bolt-on, but they’re a bit… meh. Any improvements in measurement properties are very slight and probably wouldn’t lead to any substantive changes in decision-making. This might be because the cognitive impact of trauma in this population was of relatively low severity. Or, it might be that the cognition bolt-on itself was not up to scratch and somebody needs to develop a better one.

Credits

By Chris Sampson

    Founder of the Academic Health Economists' Blog. Senior Principal Economist at the Office of Health Economics. ORCID: 0000-0001-9470-2369
