Don Husereau’s journal round-up for 25th November 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Development and validation of the TRansparent Uncertainty ASsessmenT (TRUST) tool for assessing uncertainties in health economic decision models. PharmacoEconomics [PubMed] Published 11th November 2019

You’re going to quickly see that all three papers in today’s round-up align with some strong personal pet peeves that I harbour toward the nebulous world of market access and health technology assessment – the most prominent being how loose we seem to be with language and form in the absence of overarching standards. This may come as no surprise to some when discussing a field which lacks a standard definition and for which many international standards of what constitutes good practice have never been defined.

This first paper deals with both issues and provides a useful tool for characterizing uncertainty. The authors state the purpose of the tool is “for systematically identifying, assessing, and reporting uncertainty in health economic models.” They suggest, to the best of their knowledge, no such tool exists. They also support the need for the tool by asserting that uncertainty in health economic modelling is often not fully characterized. The reasons, they suggest, are twofold: (1) there has been too much emphasis on imprecision; and (2) it is difficult to express all uncertainty.

I couldn’t agree more. What I sometimes deeply believe about those planning and conducting economic evaluation is that they obsess too often about uncertainty that is less relevant (but more amenable to statistical adjustment) and don’t address the uncertainty that payers actually care about. To wit, while it may be important to explore and adopt methods that deal with imprecision (dealing with known unknowns), such as improving utility variance estimates (from an SE of 0.003 to 0.011 – yes, sorry Kelvin and Feng for the callout), not getting this right is unlikely to lead to truly bad decisions. (Kelvin and Feng both know this.)

What is much more important for decision makers is uncertainty that stems from a lack of knowledge. These are unknown unknowns. In my experience this typically has to do with generalizability (how well will it work in different patients or against a different comparator?) and durability (how do I translate 16 weeks of data into a lifetime?); not things resolved by better variance estimates and probabilistic analysis. In Canada, our HTA body has even gone so far as to respond to the egregious act of not providing different parametric forms for extrapolation with the equally egregious act of using unrealistic time horizon adjustments to deal with this. Two wrongs don’t make a right.

To develop the tool, the authors first conducted a (presumably narrative) review of uncertainty frameworks and then ran the identified concepts past a bunch of HTA expert committee types. They also used a previously developed framework as a basis for identifying all the places where uncertainty in HTA could occur. Using the concepts and the HTA areas, they developed a tool which was presented a few times and then validated through semi-structured interviews with international stakeholders (N = 11), which also yielded insights into barriers to its use, user-friendliness, and feasibility.

Once the tool was developed, six case studies were worked up, with one of them (pembrolizumab for Hodgkin’s lymphoma) illustrated in the manuscript. The tool does not provide a score or coefficient to adjust estimates or deal with uncertainty, but it is not supposed to. What it is trying to do is make sure you are aware of all the uncertainties so that you can make some determination as to whether they have been dealt with. One of the challenges of developing the tool is the lack of standardized terminology regarding uncertainty itself. A short primer exists in the manuscript but, for those who have looked into it, uncertainty terminology is far more uncertain than even the authors let on.

While I appreciate the tool and the attempt to standardize things, I do suspect the approach could have been strengthened (with a systematic review and possibly a nominal group technique, as is done for reporting guidelines). However, I’m not sure this would have gotten us much closer to the truth. Uncertainty needs to be sorted out first, and I am happy with their attempt. I hope it raises some awareness that we can’t simply say we are “uncertain” as if that means something.

Unmet medical need: an introduction to definitions and stakeholder perceptions. Value in Health [PubMed] Published November 2019

The second, and also often-abused, term without an obvious definition is unmet medical need (UMN). My theory is that some confusion has arisen due to a confluence of marketing and clinical development teams and regulators. UMN has come to mean patients with rare diseases, drugs with ‘novel’ mechanisms of action, patients with highly prevalent disease, drugs with a more convenient formulation, or drugs with fewer side effects. And yet payers (in my experience) usually recognize none of these. Payers tend to characterize UMN in different ways: no drugs available to treat the condition, available drugs do not provide consistent or durable responses, and there have been no new medical developments in the area for > 10 years.

The purpose of this research, then, was to unpack the term UMN further. The authors conducted a comprehensive (gray) literature review to identify definitions of UMN in use by different stakeholders and then unpacked their meaning through consultations and discussions with stakeholders from across Europe, focusing on the key elements of unmet medical need through a regulatory and reimbursement lens. This consisted of six one-hour teleconference calls and two workshops held in 2018. One open workshop involved 69 people from regulatory agencies, industry, payers, HTA bodies, patient organizations, healthcare, and academia.

A key finding of this work was that, yes indeed, UMN means different things to different people. A key dimension is whether unmet need is being defined in terms of individuals or populations. Population size (whether prevalent or rare) was not felt to be an element of the definition while there was general consensus that disease severity was. This means UMN should really only consider the UMNs of individual patients, not whether very few or very many patients are at need. It also means we see people who have higher rates of premature mortality and severe morbidity as having more of an unmet need, regardless of how many people are affected by the condition.

And last but not least was the final dimension: how many treatments are actually available. This, the authors point out, is the current legal definition in Europe (as laid down in Article 4, paragraph 2 of Commission Regulation [EC] No. 507/2006). And while this seems the most obvious definition of ‘need’ (we usually need things that are lacking), there was some acknowledgement by stakeholders that simply counting existing therapies is not adequate. There was also acknowledgement that there may be existing therapies available and still be an UMN. Certainly this reflects my experience on the pan-Canadian Oncology Drug Review expert review committee, where unmet medical need was an explicit subdomain in their value framework, and where on more than one occasion it was felt, to my surprise, that there was an unmet need despite the availability of two or more treatments.

Like the previous paper’s authors, these authors did not conduct a systematic review and could have consulted more broadly (no clinician stakeholders were consulted) or used more objective methods – a limitation they acknowledge, though addressing it would have been unlikely to get them much further ahead in understanding. So what to do with this information? Well, the authors do propose an HTA approach that would triage reimbursement decisions based on UMN. However, stakeholders commented that the method you use really depends on the HTA context. As such, the authors conclude that “the application of the definition within a broader framework depends on the scope of the stakeholder.” In other words, HTA must be fit for purpose (something we knew already). However, as with uncertainty, I’m happy someone is actually trying to create reasonably coherent definitions of such an important concept.

On value frameworks and opportunity costs in health technology assessment. International Journal of Technology Assessment in Health Care [PubMed] Published 18th September 2019

The final, and most-abused, term is ‘value’. While value seems an obvious prerequisite for those making investments in healthcare, and while we (or at least some of us) acknowledge that value is what we are willing to give up to get something, what is less clear is what we want to get and what we want to give up.

The author of this paper, then, hopes to remind us of the various schools of thought on defining value in health that speak to these trade-offs. The first is broadly consistent with the welfarist school of economics and proposes that the value of health care used by decision makers should reflect individuals’ willingness to pay for it. An alternative approach, sometimes referred to as the extra-welfarist framework, argues that the value of a health technology should be consistent with the policy objectives of the health care system, typically health (the author states it is ‘health’, but I’m not sure it has to be). The final school of thought (which I was not familiar with, and neither might you be, which is the point of the paper) is what he terms ‘classical’, where the point is not to maximize a maximand or be held up to notions of efficiency, but rather to discuss how consumers will be affected. The reference cited to support this framework is this interesting piece, although I couldn’t find any allusion to the framework within it.

What follows is a relatively fair treatment of extra-welfarist and welfarist applications to decision-making, with a larger critical swipe at the former (using legitimate arguments that have been previously published – yes, extra-welfarists assume resources are divisible and, yes, extra-welfarists don’t identify the health-producing resources that will actually be displaced and, yes, using thresholds doesn’t always maximize health) and much downplaying of the latter (how we might measure trade-offs reliably under a welfarist framework appears to be a mere detail until this concession is finally mentioned: “On account of the measurement issues surrounding [willingness to pay], there may be many situations in which no valid and reliable methods of operationalizing [welfarist economic value frameworks] exist.”) Given that the premise of this commentary is that a recent commentary by Culyer seemed to overlook concepts of value beyond extra-welfarist ones, the swipe at extra-welfarist views is understandable. Hence, this paper can be seen as a kind of rebuttal and a reminder that other views should not be ignored.

I like the central premise of the paper as summarized here:

“Although the concise term “value for money” may be much easier to sell to HTA decision makers than, for example, “estimated mean valuation of estimated change in mean health status divided by the estimated change in mean health-care costs,” the former loses too much in precision; it seems much less honest. Because loose language could result in dire consequences of economic evaluation being oversold to the HTA community, it should be avoided at all costs”

However, while I am really sympathetic to warning against conceptual shortcuts and loose language, I wonder if this paper misses the bigger point. Firstly, I’m not convinced we are making such bad decisions as those who wish the lambda to be silenced tend to want us to believe. But more importantly, while it is easy to be critical about economics applied loosely or misapplied, this paper (like others) offers no real practical solutions other than the need to acknowledge other frameworks. It is silent on the real reason extra-welfarist approaches and thresholds seem to have stuck around, namely, they have provided a practical and meaningful way forward for difficult decision-making and the HTA processes that support them. They make sense to decision-makers who are willing to overlook some of the conceptual wrinkles. And I’m a firm believer that conceptual models are a starting point for pragmatism. We shouldn’t be slaves to them.
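For readers outside the field, the ‘lambda’ mentioned above is the cost-effectiveness threshold at the heart of the extra-welfarist decision rule. A standard textbook statement of the rule (my sketch, not taken from the paper itself) is:

```latex
% Standard cost-effectiveness decision rule (textbook form, not from the paper).
% \Delta C and \Delta E are incremental costs and incremental health effects
% (typically QALYs); \lambda is the cost-effectiveness threshold.
% Assuming \Delta E > 0, adopt the new technology when the incremental
% cost-effectiveness ratio (ICER) falls below \lambda, equivalently when the
% incremental net monetary benefit (NMB) is positive:
\text{adopt if}\quad
\mathrm{ICER} = \frac{\Delta C}{\Delta E} < \lambda
\quad\Longleftrightarrow\quad
\mathrm{NMB} = \lambda\,\Delta E - \Delta C > 0
```

It is this rule, and the threshold that anchors it, that critics of the extra-welfarist approach are objecting to.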


Chris Sampson’s journal round-up for 1st August 2016


Individualised and personalised QALYs in exceptional treatment decisions. Journal of Medical Ethics [PubMed] Published 22nd July 2016

I’ve written previously about the notion of individualised cost-effectiveness analysis – or iCEA. With the rise of personalised medicine it will become an increasingly important idea. But it’s one that needs more consideration and research. So I was very pleased to see this essay in JME. The starting point for the author’s argument is that – in some cases – people will be denied treatment that would be cost-effective for them, because it has not been judged to be cost-effective for the target population on average. The author’s focus is upon people at the extremes of the distribution in terms of treatment effectiveness or costs: exceptional cases. There are two features to the argument. First, cost-effectiveness should be individualised in the sense that we should be providing treatment according to the costs and effects for that individual. Second, QALYs should be ‘personalised’ in the sense that the individual’s own (health) preferences should be used to determine whether or not treatment is cost-effective. The author argues that ‘individual funding requests’ (where patients apply for eligibility for treatment that is not normally approved) represent an ideal context in which to use individualised and personalised QALYs. Unfortunately there are a lot of problems with the arguments presented in this essay, both in terms of their formulation and their practical implications. Some of the ideas are a bit dangerous. That there is no discussion of uncertainty or expectations is telling. If I can find the time I’ll write a full response to the journal. Nevertheless, it’s good to see discussion around this issue.

The value of medicines: a crucial but vague concept. PharmacoEconomics [PubMed] Published 21st July 2016

That we can’t define value is perhaps why the practice of value-based pricing has floundered in the UK. Yes, there’s cost-per-QALY, but none of us really think that’s the end of the value story. This article reports on a systematic review to try and identify how value has been defined in a number of European countries. Apparently none of the identified articles in the published literature included an explicit definition of value. This may not come as a surprise – value is in the eye of the beholder, and analysts defer to decision makers. Some vague definitions were found in the grey literature. The paper highlights a number of studies that demonstrate the ways in which different stakeholders might define value. In the countries that consider costs in reimbursement decisions, QALYs were (unsurprisingly) the most common way of measuring “the value of healthcare products”. But the authors note that most also take into account wider societal benefits and broader aspects of value. The review also identifies safety as being important. The authors seem to long for a universal definition of value, but acknowledge that it cannot be a fixed target. Value is heavily dependent on the context of a decision, so it makes sense to me that there should be inconsistencies. We just need to make sure we know what these inconsistencies are, and that we feel they are just.

The value of mortality risk reductions. Pure altruism – a confounder? Journal of Health Economics Published 19th July 2016

Only the most belligerent of old-school economists would argue that all human choices can be accounted for in purely selfish terms. There’s been much economic research into altruistic preferences. Pure altruism is the idea that people might be concerned with the general welfare of others, rather than just specific factors. In the context of tax-funded initiatives it can be either positive or negative, as people could either be willing to pay more for benefits to other people or less due to a reluctance to enforce higher costs on them (to say nothing of sadism). This study reports on a discrete choice experiment regarding mortality reductions through traffic safety. Pure altruism is tested by the randomised inclusion of a statement about the amount paid by other people. An additional question about what the individual thinks the average citizen would choose is used to identify the importance of pure altruism (if it exists). The findings are both heartening and disappointing. People are considerate of other people’s preferences, but unfortunately they think that other people don’t value mortality reductions as highly as they do. Therefore, individuals reduce their own willingness to pay, resulting in negative altruism. Furthermore, the analysis suggests that this is due to (negative) pure altruism, because the stated values increase when the notion of coercive taxation is removed.

Realism and resources: towards more explanatory economic evaluation. Evaluation Published July 2016

This paper was doing the rounds on Twitter, having piqued people’s interest with an apparently alternative approach to economic evaluation. Realist evaluation – we are told – is expressed primarily as a means of answering the question ‘what works for whom, under what circumstances and why?’ Economic evaluation, on the other hand, might be characterised as ‘does this work for these people under these circumstances?’ We’re not really bothered why. Realist evaluation is concerned with the theory underlying the effectiveness of an intervention – it is seen as necessary to identify the cause of the benefit. This paper argues for more use of realist evaluation approaches in economic evaluation, providing an overview of the two approaches. The authors present an example of shared care and review literature relating to cost-effectiveness-specific ‘programme theories’: the mechanisms affecting resource use. The findings are vague and inconclusive, and for me this is a problem – I’m not sure what we’ve learned. I am somewhat on the fence. I agree with the people who think we need more data to help us identify causality and support theories. I agree with the people who say we need to better recognise context and complexity. But alternative approaches to economic evaluation like PBMA could handle this better without any express use of ‘realist evaluation’. And I agree that we could learn a lot from more qualitative analysis. I agree with most of what this article’s authors say. But I still don’t see how realist evaluation helps us get there any more than us just doing economic evaluation better. If understanding the causal pathways is relevant to decision-making (i.e., understanding it could change decisions in certain contexts) then we ought to be considering it in economic evaluation. If it isn’t then why would we bother? This article demonstrates that it is possible to carry out realist evaluation to support cost-effectiveness analysis, but it isn’t clear why we should.
But then, that might just be because I don’t understand realist evaluation.

Photo credit: Antony Theobald (CC BY-NC-ND 2.0)