Don Husereau’s journal round-up for 25th November 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Development and validation of the TRansparent Uncertainty ASsessmenT (TRUST) tool for assessing uncertainties in health economic decision models. PharmacoEconomics [PubMed] Published 11th November 2019

You’re going to see quickly that all three papers in today’s round-up align with some strong personal pet peeves I harbour toward the nebulous world of market access and health technology assessment, the most prominent being how loose we seem to be with language and form without overarching standards. This may come as no surprise to some when discussing a field which lacks a standard definition, and for which many international standards of what constitutes good practice have never been defined.

This first paper deals with both issues and provides a useful tool for characterizing uncertainty. The authors state the purpose of the tool is “for systematically identifying, assessing, and reporting uncertainty in health economic models.” They suggest, to the best of their knowledge, no such tool exists. They also support the need for the tool by asserting that uncertainty in health economic modelling is often not fully characterized. The reasons, they suggest, are twofold: (1) there has been too much emphasis on imprecision; and (2) it is difficult to express all uncertainty.

I couldn’t agree more. What I sometimes deeply believe about those planning and conducting economic evaluation is that they obsess too often about uncertainty that is less relevant (but more amenable to statistical adjustment) and don’t address the uncertainty that payers actually care about. To wit, while it may be important to explore and adopt methods that deal with imprecision (dealing with known unknowns), such as improving utility variance estimates (from an SE of 0.003 to 0.011; yes, sorry Kelvin and Feng for the callout), not getting this right is unlikely to lead to truly bad decisions. (Kelvin and Feng both know this.)

What is much more important for decision makers is uncertainty that stems from a lack of knowledge. These are unknown unknowns. In my experience this typically has to do with generalizability (how well will it work in different patients or against a different comparator?) and durability (how do I translate 16 weeks of data into a lifetime?); not things resolved by better variance estimates and probabilistic analysis. In Canada, our HTA body has even gone so far as to respond to the egregious act of not providing different parametric forms for extrapolation with the equally egregious act of using unrealistic time horizon adjustments to deal with this. Two wrongs don’t make a right.

To develop the tool, the authors first conducted a (presumably narrative) review of uncertainty frameworks and then ran the identified concepts past a bunch of HTA expert committee types. They also used a previously developed framework as a basis for identifying all the places in HTA where uncertainty could occur. Using the concepts and the HTA areas, they developed a tool which was presented a few times and then validated through semi-structured interviews with international stakeholders (N = 11), which also provided insights into barriers to its use, user-friendliness, and feasibility.

Once the tool was developed, six case studies were worked up, with one of them (pembrolizumab for Hodgkin’s lymphoma) illustrated in the manuscript. The tool does not provide a score or coefficient with which to adjust estimates or deal with uncertainty, but it is not supposed to. What it is trying to do is make sure you are aware of all the uncertainties, so that you can make some determination as to whether they have been dealt with. One of the challenges of developing the tool is the lack of standardized terminology regarding uncertainty itself. A short primer exists in the manuscript but, for those who have looked into it, uncertainty terminology is far more uncertain than even the authors let on.

While I appreciate the tool and the attempt to standardize things, I do suspect the approach could have been strengthened (with a systematic review and possibly a nominal group technique, as is done for reporting guidelines). However, I’m not sure this would have gotten us much closer to the truth. Uncertainty needs to be sorted out first, and I am happy with their attempt. I hope it raises some awareness that we can’t simply say we are “uncertain” as if that means something.

Unmet medical need: an introduction to definitions and stakeholder perceptions. Value in Health [PubMed] Published November 2019

The second, and also often-abused, term without an obvious definition is unmet medical need (UMN). My theory is that some confusion has arisen due to a confluence of marketing teams, clinical development teams, and regulators. UMN has come to mean patients with rare diseases, drugs with ‘novel’ mechanisms of action, patients with highly prevalent disease, drugs with a more convenient formulation, or drugs with fewer side effects. And yet payers (in my experience) usually recognize none of these. Payers tend to characterize UMN in different ways: no drugs are available to treat the condition, available drugs do not provide consistent or durable responses, or there have been no new medical developments in the area for more than 10 years.

The purpose of this research, then, was to unpack the term UMN further. The authors conducted a comprehensive (gray) literature review to identify definitions of UMN in use by different stakeholders, and then unpacked their meaning through discussions with stakeholders from across Europe, trying to focus on the key elements of unmet medical need through a regulatory and reimbursement lens. This consisted of six one-hour teleconference calls and two workshops held in 2018. One open workshop involved 69 people from regulatory agencies, industry, payers, HTA bodies, patient organizations, healthcare, and academia.

A key finding of this work was that, yes indeed, UMN means different things to different people. A key dimension is whether unmet need is being defined in terms of individuals or populations. Population size (whether prevalent or rare) was not felt to be an element of the definition while there was general consensus that disease severity was. This means UMN should really only consider the UMNs of individual patients, not whether very few or very many patients are at need. It also means we see people who have higher rates of premature mortality and severe morbidity as having more of an unmet need, regardless of how many people are affected by the condition.

And last but not least was the final dimension: how many treatments are actually available. This, the authors point out, is the current legal definition in Europe (as laid down in Article 4, paragraph 2 of Commission Regulation [EC] No. 507/2006). And while this seems the most obvious definition of ‘need’ (we usually need things that are lacking), there was some acknowledgement by stakeholders that simply counting existing therapies is not adequate. There was also acknowledgement that there may be existing therapies available and still a UMN. Certainly this reflects my experience on the pan-Canadian Oncology Drug Review expert review committee, where unmet medical need was an explicit subdomain in the value framework, and where on more than one occasion it was felt, to my surprise, that there was an unmet need despite the availability of two or more treatments.

As with the previous paper, the authors did not conduct a systematic review, and they could have consulted more broadly (no clinician stakeholders were consulted) or used more objective methods; a limitation they acknowledge, though addressing it would have been unlikely to get them much further ahead in understanding. So what to do with this information? Well, the authors do propose an HTA approach that would triage reimbursement decisions based on UMN. However, stakeholders commented that the method you use really depends on the HTA context. As such, the authors conclude that “the application of the definition within a broader framework depends on the scope of the stakeholder.” In other words, HTA must be fit for purpose (something we knew already). However, as with uncertainty, I’m happy someone is actually trying to create reasonable, coherent definitions of such an important concept.

On value frameworks and opportunity costs in health technology assessment. International Journal of Technology Assessment in Health Care [PubMed] Published 18th September 2019

The final, and most-abused, term is ‘value’. While value seems an obvious prerequisite for those making investments in healthcare, and while we (some of us) are willing to acknowledge that value is what we are willing to give up to get something, what is less clear is what we want to get and what we want to give up.

The author of this paper, then, hopes to remind us of the various schools of thought on defining value in health that speak to these trade-offs. The first is broadly consistent with the welfarist school of economics and proposes that the value of health care used by decision makers should reflect individuals’ willingness to pay for it. An alternative approach, sometimes referred to as the extra-welfarist framework, argues that the value of a health technology should be consistent with the policy objectives of the health care system, typically health (the author states it is ‘health’, but I’m not sure it has to be). The final school of thought (which I was not familiar with, and neither might you be, which is the point of the paper) is what he terms ‘classical’, where the point is not to maximize a maximand or be held up to notions of efficiency, but rather to discuss how consumers will be affected. The reference cited to support this framework is this interesting piece, although I couldn’t find any allusion to the framework within it.

What follows is a relatively fair treatment of extra-welfarist and welfarist applications to decision-making, with a larger critical swipe at the former (using legitimate arguments that have been previously published: yes, extra-welfarists assume resources are divisible; yes, extra-welfarists don’t identify the health-producing resources that will actually be displaced; and yes, using thresholds doesn’t always maximize health) and much downplaying of the latter (how we might measure trade-offs reliably under a welfarist framework appears to be a mere detail until this concession is finally made: “On account of the measurement issues surrounding [willingness to pay], there may be many situations in which no valid and reliable methods of operationalizing [welfarist economic value frameworks] exist.”). Given that the premise of this commentary is that a recent commentary by Culyer seemed to overlook concepts of value beyond extra-welfarist ones, the swipe at extra-welfarist views is understandable. Hence, this paper can be seen as a kind of rebuttal and a reminder that other views should not be ignored.

I like the central premise of the paper as summarized here:

“Although the concise term “value for money” may be much easier to sell to HTA decision makers than, for example, “estimated mean valuation of estimated change in mean health status divided by the estimated change in mean health-care costs,” the former loses too much in precision; it seems much less honest. Because loose language could result in dire consequences of economic evaluation being oversold to the HTA community, it should be avoided at all costs.”

However, while I am really sympathetic to warning against conceptual shortcuts and loose language, I wonder if this paper misses the bigger point. Firstly, I’m not convinced we are making such bad decisions as those who wish the lambda to be silenced tend to want us to believe. But more importantly, while it is easy to be critical about economics applied loosely or misapplied, this paper (like others) offers no real practical solutions other than the need to acknowledge other frameworks. It is silent on the real reason extra-welfarist approaches and thresholds seem to have stuck around, namely, they have provided a practical and meaningful way forward for difficult decision-making and the HTA processes that support them. They make sense to decision-makers who are willing to overlook some of the conceptual wrinkles. And I’m a firm believer that conceptual models are a starting point for pragmatism. We shouldn’t be slaves to them.

The potential of the super QALY to reconcile the key contentions in health economics

Economics is largely about trade-offs and compromise. Academics study the former but don’t often engage in the latter. In health economics, as in other fields, a key trade-off is between equity and efficiency. We’ve been studying this for a very long time. Despite this, as Culyer has identified, equity is hardly considered in current health technology assessments. We all agree it should be, but we just can’t seem to figure it out. Indeed, it has been argued that incorporating equity concerns into cost-effectiveness analyses could still be a long time coming.

But let’s be a bit more positive. The elusive ‘super QALY’, as it has been described, should come eventually. And when it does, it’ll be great! One of the reasons, I propose here, is that it has the power to reconcile many of the disagreements that currently fuel (hamper?) debate in our field. Hence, the super QALY might just allow us to get on with fussing over the minutiae of economic evaluation.

Trade-offs

There are necessary trade-offs in decisions of resource allocation. These might be described as the ‘positive’ tensions economists deal with; they relate to decisions that must be made, regardless of our values. The equity–efficiency trade-off is the main one here. But there are others. For example, health care interventions have the dual aim of increasing both the quantity and quality of an individual’s life. The QALY attempts to address this. However, the way we value quality of life also incorporates considerations of length of life, insofar as ‘death’ is used in the valuation of health states. This is problematic, as has been discussed. Economists haven’t really gotten round to disagreeing about this yet, but there’s plenty else on which we disagree.

Disagreements

These might be described as ‘normative’ tensions. They concern what different economists think should and should not be done, mainly relating to the process of valuing health states. There are welfarists and non-welfarists. There are those who support societal preferences, and those who support capturing patient experience. It should be clear to most that neither side in these debates is wrong. Most health economists acknowledge the value of capturing utility as well as the importance of capabilities. Most will attach some value to society’s preferences and some to those of the individual.

A super-QALY solution

It’s never been completely clear what the ‘extra’ in extra-welfarism (as currently practised) actually consists of. The super QALY will surely formalise this; it could involve some completely non-welfarist notions. The most common idea of the super QALY is one where the current health-related QALY is weighted based on some equity considerations. So, if this is where economic evaluation is heading, we’re likely to end up with an extra step of estimating the equity impact of an intervention. But, while most studies seem to suggest that this might just be an add-on process, I think it would require a realignment of the methods we already use.
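In symbols (the notation is mine, not anything from the literature), that most common idea amounts to something like

$$\text{super QALY} = \sum_{s} w^{\text{equity}}_{s} \, u_{s} \, t_{s},$$

where $u_s$ is the health utility of state $s$, $t_s$ is the time spent in it, and $w^{\text{equity}}_s$ is an equity weight taken from a separate equity analysis of the kind sketched below.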

Equity analysis

There’s no need for me to reiterate the importance of equity considerations. Plainly we (economists, the public) care about needs, capabilities, opportunities, and equality. How we define the equity analysis is incidental. More important is that we get on with doing it and just see what happens. There are lots of measures we could use and different approaches we could take. For argument’s sake (and because I quite like it), let’s say the equity analysis is characterised by a ‘minimum capabilities’ approach; something similar to Daniels’s normal opportunity range. People could have the normal opportunity range, have fewer opportunities, or have more opportunities. We can argue later about where the threshold lies. People below the threshold could be said to be in ‘need’. Again, argue about this later. States could be defined using a capabilities measure; let’s just say the ICECAP-A for now (though I don’t much like it). Here in the world of health economics we like 0–1 scales, so the ICECAP-A could be valued based on these anchors: 1 is the minimum capabilities (or normal opportunity range) threshold, and zero equates to being dead. Values can drop below zero where opportunity sets represent a state worse than non-existence. For the equity analysis we are not interested in utility or satisfaction, so the valuation would not be by the individual. Values could be elicited from society, possibly. The valuation technique could be a person trade-off, maybe. Or we could let ethicists come up with weightings. This framework, surely, would satisfy the non-welfarists.

Health utility analysis

I see no reason why the estimation of health benefits cannot be utility-based. Utilitarian satisfaction is sufficient if non-welfarist concerns are incorporated in an equity analysis. Personally, I believe that whether this is based on experiences or preferences is largely inconsequential and that, in terms of health, most of the differences demonstrated between the two are a function of the elicitation methods. Therefore, utility analysis would remain largely unchanged. However, the value of zero would change. Zero currently represents either being dead or being in a health state equivalent to being dead, despite these two things not being of equivalent value to a person. Under the new framework there is no need to incorporate death into the health utility analysis, as it is accounted for in the equity analysis. Zero should instead represent the worst health state imaginable. There would be no negative values.
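To make the change of anchor concrete, here is a minimal sketch of one way conventional utilities could be mapped onto the proposed scale. The linear mapping and the conventional-scale value assumed for the worst imaginable state are entirely my own assumptions for illustration; nothing above specifies a transformation.

```python
# Hypothetical linear rescaling of a conventional utility value
# (anchored at dead = 0, full health = 1, negatives allowed) onto the
# proposed scale (worst imaginable health state = 0, no negatives).
# U_WORST is an assumed conventional-scale value for the worst
# imaginable state; the post does not pin this down.

U_WORST = -0.5  # assumed conventional-scale value of the worst imaginable state

def rescale_utility(u_conventional: float) -> float:
    """Map a conventional utility onto a scale where 0 is the worst
    imaginable health state and 1 remains full health."""
    return (u_conventional - U_WORST) / (1.0 - U_WORST)

print(rescale_utility(1.0))      # full health -> 1.0
print(rescale_utility(0.0))      # 'dead-equivalent' state -> 0.33; no longer the floor
print(rescale_utility(U_WORST))  # worst imaginable state -> 0.0
```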

Cost-effectiveness analysis

These two analyses would then be combined to form a relatively routine cost-effectiveness analysis to address the efficiency of the intervention. The QALY would be calculated in the usual way, but the ‘Q’ would become ‘super’ by being a function of the two different outcomes. Tentatively, this could be done by multiplying the two values (alternative formulations could be defined by societal values or by ethicists, depending on your wont). Costings would be carried out in the usual manner and a super ICER could be calculated. Furthermore, the net benefit approach could be implemented in the usual way, possibly with separate willingness-to-pay values for each input to the super QALY (indeed, they may be willingness-to-pay values from different agents). The table below summarises how the approach might accommodate the various tensions in health economics, and a rough numerical sketch follows it.

Equity analysis | Health utility analysis
Equity          | Effectiveness
Life            | Morbidity
Non-welfarism   | Welfarism
Flourishing     | Satisfaction
Society         | The individual
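To make the arithmetic concrete, below is a minimal sketch of how the multiplicative combination might work, assuming the simple product form suggested above. Every number, and the use of a single willingness-to-pay value, is a hypothetical illustration of mine rather than anything proposed in the literature.

```python
# A rough numerical sketch of the super QALY arithmetic described above,
# assuming the tentative multiplicative combination of the two analyses.
# All names and numbers are hypothetical illustrations.

def super_qaly(equity_value: float, health_utility: float, years: float) -> float:
    """Weight a rescaled (non-negative) health utility by an equity value.

    equity_value: societal valuation of the opportunity set, anchored at
        1 = the 'minimum capabilities' threshold and 0 = dead.
    health_utility: quality weight anchored at 0 = worst imaginable
        health state (no negative values on this scale).
    years: time spent in the state.
    """
    return equity_value * health_utility * years

# Hypothetical two-arm comparison over a ten-year horizon.
intervention = super_qaly(equity_value=0.9, health_utility=0.8, years=10)  # 7.2
comparator = super_qaly(equity_value=0.7, health_utility=0.6, years=10)    # 4.2

incremental_super_qalys = intervention - comparator  # 3.0
incremental_cost = 60_000.0                          # assumed incremental cost

# A 'super ICER': incremental cost per super QALY gained.
super_icer = incremental_cost / incremental_super_qalys  # 20,000 per super QALY

# Net monetary benefit with a single assumed willingness-to-pay of 30,000
# per super QALY; separate WTP values for the equity and health inputs
# (possibly from different agents) are also conceivable, as noted above.
net_monetary_benefit = 30_000.0 * incremental_super_qalys - incremental_cost

print(f"Super ICER: {super_icer:,.0f}; NMB: {net_monetary_benefit:,.0f}")
```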

All public policies could be subject to an equity analysis in the way set out above. It is in no way health-specific. Each policy field could then use this to weight their usual outcome measures (preferably utility-based) to estimate the cost-effectiveness of their interventions. At this point the super QALY makes it onto daytime TV and health economists form a new unelected chamber at the Palace of Westminster.

No doubt this explicitly extra-welfarist approach to the super QALY raises more questions than it is currently able to answer, but we need to get on with trying stuff like this. The super QALY has proven elusive to date but, if we do make it, it may solve a lot of our problems. We may find ourselves having to invent new things to argue about.