Chris Sampson’s journal round-up for 23rd September 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Can you repeat that? Exploring the definition of a successful model replication in health economics. PharmacoEconomics [PubMed] Published 18th September 2019

People talk a lot about replication and its role in demonstrating the validity and reliability of analyses. But what does a successful replication in the context of cost-effectiveness modelling actually mean? Does it mean coming up with precisely the same estimates of incremental costs and effects? Does it mean coming up with a model that recommends the same decision? The authors of this study sought to bring us closer to an operational definition of replication success.

There is potentially much to learn from other disciplines that have a more established history of replication. The authors reviewed literature on the definition of ‘successful replication’ across all disciplines, and used their findings to construct a variety of candidate definitions for use in the context of cost-effectiveness modelling in health. Ten definitions of a successful replication were pulled out of the cross-disciplinary review, which could be grouped into ‘data driven’ replications and ‘experimental’ replications – the former relating to the replication of analyses and the latter relating to the replication of specific observed effects. The ten definitions were from economics, biostatistics, cognitive science, psychology, and experimental philosophy. The definitions varied greatly, with many involving subjective judgments about the proximity of findings. A few studies were found that reported on replications of cost-effectiveness models and which provided some judgment on the level of success. Again, these were inconsistent and subjective.

Quite reasonably, the authors judge that the lack of a fixed definition of successful replication in any scientific field is not just an oversight. The threshold for ‘success’ depends on the context of the replication and on how the evidence will be used. This paper provides six possible definitions of replication success for use in cost-effectiveness modelling, ranging from an identical replication of the results, through partial success in replicating specific pathways within a given margin of error, to simply replicating the same implied decision.

Ultimately, ‘data driven’ replications are a solution to a problem that shouldn’t exist, namely, poor reporting. This paper mostly convinced me that overall ‘success’ isn’t a useful thing to judge in the context of replicating decision models. Replication of certain aspects of a model is useful to evaluate. Whether the replication implied the same decision is a key thing to consider. Beyond this, it is probably worth considering partial success in replicating specific parts of a model.

Differential associations between interpersonal variables and quality-of-life in a sample of college students. Quality of Life Research [PubMed] Published 18th September 2019

There is growing interest in the well-being of students and the distinct challenges involved in achieving good mental health and addressing high levels of demand for services in this group. Students go through many changes that might influence their mental health; prominent among these is the change to their social situation.

This study set out to identify the role of key interpersonal variables on students’ quality of life. The study recruited 1,456 undergraduate students from four universities in the US. The WHOQOL measure was used for quality of life and a barrage of measures was used to collect information on loneliness, social connectedness, social support, emotional intelligence, intimacy, empathic concern, and more. Three sets of analyses of increasing sophistication were conducted, from zero-order correlations between each measure and the WHOQOL, to a network analysis using a Gaussian Graphical Model to identify both direct and indirect relationships while accounting for shared variance.
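
For readers unfamiliar with the method, a Gaussian Graphical Model estimates conditional (partial) correlations: an edge between two measures survives only if they remain associated after controlling for all the others. Here is a minimal sketch of the idea using simulated data and invented variable names – this is not the authors’ dataset or analysis pipeline, and a crude cut-off stands in for the regularisation and stability checks a real analysis would use:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the study's measures (simulated data,
# not the authors' dataset or pipeline).
labels = ["loneliness", "support", "connectedness", "emot_intel", "whoqol"]
cov_true = np.eye(5) + 0.4  # arbitrary positive-definite covariance
X = rng.multivariate_normal(np.zeros(5), cov_true, size=1456)

# A Gaussian Graphical Model is characterised by the precision
# (inverse covariance) matrix: partial correlations are its
# negated, scaled off-diagonal elements.
precision = np.linalg.inv(np.cov(X, rowvar=False))
d = np.sqrt(np.diag(precision))
partial = -precision / np.outer(d, d)

# Report 'edges': pairs still associated after conditioning on all
# other variables.
for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        if abs(partial[i, j]) > 0.1:
            print(f"{labels[i]} -- {labels[j]}: {partial[i, j]:.2f}")
```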

In all analyses, loneliness stuck out as the strongest driver of quality of life. Social support, social connectedness, emotional intelligence, intimacy with one’s romantic partner, and empathic concern were also significantly associated with quality of life. But the impact of loneliness was greatest, with other interpersonal variables influencing quality of life through their impact on loneliness.

This is a well-researched and well-reported study. The findings are informative to student support and other services that seek to improve the well-being of students. There is reason to believe that such services should recognise the importance of interpersonal determinants of well-being and in particular address loneliness. But it’s important to remember that this study is only as good as the measures it uses. If you don’t think WHOQOL is adequately measuring student well-being, or you don’t think the UCLA Loneliness Scale tells us what we need to know, you might not want these findings to influence practice. And, of course, the findings may not be generalisable, as the extent to which different interpersonal variables affect quality of life is very likely dependent on the level of service provision, which varies greatly between different universities, let alone countries.

Affordability and non-perfectionism in moral action. Ethical Theory and Moral Practice [PhilPapers] Published 14th September 2019

The ‘cost-effective but unaffordable’ challenge has been bubbling for a while now, at least since sofosbuvir came on the scene. This study explores whether “we can’t afford it” is a justifiable position to take. The punchline is that, no, affordability is not a sound ethical basis on which to support or reject the provision of a health technology. I was extremely sceptical when I first read the claim. If we can’t afford it, it’s impossible, and how can there be a moral imperative in an impossibility? But the authors proceeded to convince me otherwise.

The authors don’t go into great detail on this point, but it all hinges on divisibility. The reason that a drug like sofosbuvir might be considered unaffordable is that loads of people would be eligible to receive it. If sofosbuvir were only provided to a subset of this population, it could be affordable. On this basis, the authors propose the ‘principle of non-perfectionism’. This states that not being able to do all the good we can do (e.g. provide everyone who needs it with sofosbuvir) is not a reason for not doing some of the good we can do. Thus, if we cannot support provision of a technology to everyone who could benefit from it, it does not follow (ethically) that we should provide it to nobody, but rather that we should provide it to some people. The basis for selecting people is not of consequence to this argument but could be based on a lottery, for example.

Building on this, the authors explain what is wrong with rejecting provision on affordability grounds, using the notion of ‘numerical discrimination’. They argue that it is not OK to prioritise one group over another simply because we can meet the needs of everyone within that group as opposed to only some members of the other group. This is exactly what’s happening when we are presented with notions of (un)affordability. If the population of people who could benefit from sofosbuvir was much smaller, there wouldn’t be an issue. But the simple fact that the group is large does not make it morally permissible to deny cost-effective treatment to any individual member within that group. You can’t discriminate against somebody because they are from a large population.

I think there are some tenuous definitions in the paper and some questionable analogies. Nevertheless, the authors succeeded in convincing me that total cost has no moral weight. It is irrelevant to moral reasoning. We should not refuse any health technology to an entire population on the grounds that it is ‘unaffordable’. The authors frame it as a ‘mistake in moral mathematics’. For this argument to apply in the HTA context, it relies wholly on the divisibility of health technologies. To some extent, NICE and their counterparts are in the business of defining models of provision, which might result in limited use criteria to get around the affordability issue. Though these issues are often handled by payers such as NHS England.

The authors of this paper don’t consider the implications for cost-effectiveness thresholds, but this is where my thoughts turned. Does the principle of non-perfectionism undermine the morality of differentiating cost-effectiveness thresholds according to budget impact? I think it probably does. Reducing the threshold because the budget impact is great will result in discrimination (‘numerical discrimination’) against individuals simply because they are part of a large population that could benefit from treatment. This seems to be the direction in which we’re moving. Maybe the efficiency cart is before the ethical horse.

Credits

Author: punk rock health economist (ORCID: 0000-0001-9470-2369)

13 thoughts on “Chris Sampson’s journal round-up for 23rd September 2019”

  1. Another great journal round-up Chris, thanks!

    I have a different take regarding your final comment on the third paper. You ask “Does the principle of non-perfectionism undermine the morality of differentiating cost-effectiveness thresholds according to budget impact?”. You answer that it “probably does”, on the basis that “Reducing the threshold because the budget impact is great will result in discrimination (‘numerical discrimination’) against individuals simply because they are part of a large population that could benefit from treatment”.

    The reason a lower threshold is recommended for technologies with relatively large budget impact is that the health opportunity cost is disproportionately larger (due to the concave nature of the health production function). In other words, technologies with larger budget impact result in proportionately more health forgone by other patients. Failing to adjust the threshold to account for this means that less weight is implicitly placed on the health of these patients than is placed on the health of those patients who bear the opportunity cost of different technologies with smaller budget impact.

    If one adopts an ethical position that equivalent health gains and losses should be valued equally for all individuals, then it logically follows from the above that a lower threshold must be used for technologies with relatively large budget impact. A lower threshold does not “discriminate” against the patients who stand to benefit from such technologies, since health gains and losses for these patients are still valued exactly the same as those for other patients. Rather, a lower threshold is required to account for the disproportionately large health losses incurred by other patients, whose health must be correctly accounted for in order that the same value can be assigned to their health gains and losses as is assigned to equivalent health gains and losses for other patients.
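
    To put rough numbers on this logic (a purely illustrative sketch; the functional form and all figures are invented):

    ```python
    import numpy as np

    # Purely illustrative: suppose existing spending S produces health
    # h(S) = a * sqrt(S), a concave health production function
    # (functional form and all numbers are invented).
    a = 2000.0          # QALYs per sqrt($m)
    budget = 1000.0     # existing health budget, $m

    def health(spend_m):
        return a * np.sqrt(spend_m)

    # A new technology with budget impact B displaces B of existing
    # spending; the appropriate threshold is the cost per QALY forgone,
    # which falls as B rises because the forgone health is convex in B.
    for B in [1.0, 10.0, 100.0, 300.0]:
        qalys_forgone = health(budget) - health(budget - B)
        print(f"budget impact ${B:>5.0f}m -> threshold ${1e6 * B / qalys_forgone:,.0f}/QALY")
    ```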

    Of course, one might wish to depart from the position that all health gains and losses should be valued equally. In doing so, we must still account for the proportionately larger health losses that are incurred when adopting medicines with relatively large budget impact. This can be done by using a lower threshold for technologies with larger budget impact to account for their relatively greater health opportunity cost, and then subsequently applying weights directly to the health gained or forgone by subgroups of patients according to their specific characteristics (within a modified ‘net health benefit’ framework). Critically, this would not be equivalent to applying a single threshold to all technologies, regardless of budget impact, except in the special case where a very specific set of weights is applied to the health of patients who bear the opportunity cost of new technologies, with lower weights applied to the health of patients whose health is forgone by technologies with higher budget impact. In the absence of an ethical justification for such weights, it follows that the practice of applying a single threshold to all technologies, regardless of budget impact, is itself unethical.

    1. Thanks for the comment, Mike!

      The perspective you characterise is the one I started with, but the paper persuaded me otherwise.

      You state that “If one adopts an ethical position that equivalent health gains and losses should be valued equally for all individuals, then it logically follows from the above that a lower threshold must be used for technologies with relatively large budget impact”, but this only applies if we take an all-or-nothing approach, such that technologies are not divisible. The argument in Rumbold et al’s paper is that this all-or-nothing approach is not ethical. And why should it be?

      Why should an individual be denied access to sofosbuvir (for example) simply because lots of people have hepatitis C, while somebody with a less common ailment is eligible to receive a less cost-effective treatment? I appreciate the arithmetic of higher budget impacts implying lower thresholds, but it relies on the notion that technologies aren’t divisible, which simply isn’t true. Where they are divisible, your own logic (of equal value of gains/losses to all individuals) leads to the conclusion that basing decisions about different individuals on different thresholds is wrong, because it implies different values of gains/losses for different individuals.

      It is not the threshold that should change with budget impact, but the level of coverage.

      1. Hi Chris,

        Thanks for your response.

        I agree that it’s important to consider whether technologies are divisible, so we shouldn’t take a simple “all or nothing” approach. But the arguments I made still stand in the presence of divisibility.

        Since budget impact is a continuous variable and health production functions are concave, it follows that, in principle, we ought to have a gradually decreasing threshold as the budget impact increases. (I understand that this might not be feasible in practice, but let’s focus on the principles first).

        If a technology is divisible, it follows that the appropriate threshold would be lowest if the technology is provided to the maximum number of patients possible, with this threshold rising gradually as the technology is restricted to a progressively smaller population. Therefore, a divisible technology with a relatively large budget impact (under any given adoption strategy, whether that is providing it to everyone who can benefit or only to a specific subpopulation) should be compared to a lower threshold than another divisible technology with a smaller budget impact.

        Of course, divisibility may give rise to lots of potential adoption strategies, and I agree with you that we may wish to consider different levels of coverage (since partial coverage might turn out to be more cost-effective than either full coverage or no coverage). But it is incorrect to say that we shouldn’t change the threshold as we change the level of coverage, because a greater level of coverage imposes a disproportionately higher opportunity cost which must be accounted for through a lower threshold.

        You ask: “Why should an individual be denied access to sofosbuvir (for example) simply because lots of people have hepatitis C, while somebody with a less common ailment is eligible to receive a less cost-effective treatment?”.

        We need to be clear what we mean by “less cost-effective”. If the objective is to improve population health, then I would define a “less cost-effective” technology as one with a lower net health benefit than its comparator. Critically, this does not necessarily imply that its ICER is higher. Using your example, suppose (for the sake of argument) that providing sofosbuvir to all patients who can benefit (a strategy with relatively high budget impact) has a lower ICER than some other technology with smaller budget impact. Because this high budget impact has a disproportionately large health opportunity cost, it is quite possible that the net health benefit of sofosbuvir is smaller than that of the comparator (despite its lower ICER), and hence it is “less cost-effective”.
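
        To illustrate with invented per-patient numbers (writing k for the $/QALY threshold, so that net health benefit is QALYs gained minus cost/k):

        ```python
        # Invented per-patient numbers, to illustrate how a lower ICER need
        # not imply a higher net health benefit once the threshold is
        # adjusted for budget impact. NHB = QALYs gained - cost / threshold.

        def nhb(d_qalys, d_cost, k):
            return d_qalys - d_cost / k

        # 'Sofosbuvir-like' technology: lower ICER ($20k/QALY), but its large
        # budget impact means a lower adjusted threshold applies (assumed $18k).
        print(nhb(1.0, 20_000, 18_000))   # -0.11: population health lost

        # Comparator technology: higher ICER ($25k/QALY), marginal budget
        # impact, so the marginal threshold applies (assumed $30k).
        print(nhb(1.0, 25_000, 30_000))   #  0.17: population health gained
        ```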

        You state that “basing decisions about different individuals on different thresholds is wrong, because it implies different values of gains/losses for different individuals”.

        I disagree. If we value health gains and losses equally for all individuals (including those who bear the opportunity cost of new technologies) then logically we must use a lower threshold for technologies with disproportionately high opportunity costs. This is not because we value the health of those who benefit from such technologies any less than others: on the contrary, we value their health gains and losses exactly the same as equivalent gains and losses for everyone else. Rather, every dollar spent on such technologies results in a disproportionate loss of other patients’ health, so if we value equivalent health gains or losses the same for all individuals then we must use a lower threshold. (If we don’t use a lower threshold, then we are implicitly valuing the health of those patients who bear the opportunity cost of technologies with large budget impact less than that of other patients, which would be “wrong” under your own criteria because it “implies different values of gains/losses for different individuals”).

        Finally, I would note that there are some potential ethical issues associated with partial coverage which do not arise under an ‘all or nothing’ approach, including horizontal inequities (since you would be denying treatment to some patients with identical characteristics to other patients who are provided treatment). Unfortunately, avoiding these horizontal inequities (by expanding coverage of the technology to all patients) results in a disproportionately large opportunity cost in terms of other patients’ health. So I would personally avoid labelling either one of these approaches as “ethical” and the other as not: both have their challenges and decision makers should carefully consider all the options available.

        In my view, decision makers faced with an adoption decision for a new technology should consider whether it is divisible. If it is, then there may be numerous possible adoption strategies with different levels of coverage, each with a different budget impact. If all equivalent health gains and losses are valued equally across individuals, then a lower threshold should be used for those strategies in which the budget impact is larger – this is for the sole purpose of accounting for the disproportionately large opportunity cost, and not because some patients’ health is valued more or less than others’. If a strategy of partial coverage appears cost-effective, then consideration should be given as to the potential horizontal inequalities that might arise from this. Alternatively, if the technology is not divisible, then partial coverage cannot be considered, but a lower threshold should nevertheless still be used where the budget impact is relatively high in order to account for the disproportionately large opportunity cost associated with adoption.

        1. “a lower threshold should nevertheless still be used where the budget impact is relatively high in order to account for the disproportionately large opportunity cost associated with adoption.”

          In view of this paper, I disagree, because there is only such a thing as a disproportionately high opportunity cost when we consider the total cost impact for a population, which, as outlined in this paper, should hold no moral weight and thus not unduly influence decision-making.

          I don’t deny that there is logic in what you say, but it involves several leaps of faith with respect to fairness. Leaps which, I think, are undermined by this paper. Principally, by the notion that people shouldn’t be treated differently simply because they are part of a large population.

          Once we accept that the fairness of an outcome depends on the consequences for the individual (and not some arbitrary grouping), total budget impact becomes irrelevant to the question of fairness and the ethical imperative is to consider whether technologies are maximally divisible. (Which, I believe, in most cases, they are.) Allowing decision-making to be influenced by budget impact should be the exception.

          The rule should be that decisions are made according to consequences for the individual. In this case, the notion of shifting the threshold in light of budget impact becomes nonsensical (or, at least, pointless), because every decision is being made at the margin.

          1. Thanks for taking the time to respond – I’m very much enjoying this exchange.

            I respectfully disagree with you for two reasons: one logical and one normative.

            Let’s begin with the logical issue. You advance the notion that “people shouldn’t be treated differently simply because they are part of a large population”, and you argue that “once we accept that the fairness of an outcome depends on the consequences for the individual (and not some arbitrary grouping), total budget impact becomes irrelevant to the question of fairness”.

            Suppose we adopt your position that it is ethically wrong to consider “arbitrary” groups of patients, and that we must instead consider all patients separately. (I disagree that the grouping of patients with the same disease is ‘arbitrary’, but that’s not of importance – let’s assume for now that any grouping of individuals is wrong).

            Building upon your example from previous posts, the first individual to receive sofosbuvir imposes a marginal cost on the health budget, so for this patient the conventional ‘marginal’ cost-effectiveness threshold is appropriate for considering the resulting health opportunity cost – on this we agree.

            However, if a second individual subsequently receives sofosbuvir, the health opportunity cost is very slightly greater – this is because we already made a displacement within the health system to fund the first patient, and now we need to make another slightly less efficient displacement to fund the second patient. As a result, we must use a slightly lower threshold when considering the health opportunity cost for the second patient than we did for the first. In turn, the health opportunity cost of treating the third patient will be slightly higher than it was for the second patient, and so requires an even lower threshold, and so on for all subsequent patients.

            It follows that, even under this individualized approach where we consider every patient separately, we still end up with a progressively lower threshold as the coverage (and hence budget impact) increases. So it’s incorrect to say that the threshold should remain unchanged.
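
            The arithmetic, with invented numbers (each patient’s treatment displaces the least productive spending still in place, so the health forgone per dollar rises with cumulative displacement):

            ```python
            # Invented numbers: health produced by the marginal $1m of existing
            # spending rises as cumulative displacement grows, because the least
            # productive care is displaced first.
            def qalys_forgone_per_million(displaced_so_far_m):
                return 30.0 + 0.01 * displaced_so_far_m

            cost_per_patient_m = 0.04  # $40k per treated patient (made-up)

            for n in [1, 2, 10, 1_000, 10_000]:
                displaced = (n - 1) * cost_per_patient_m
                q = qalys_forgone_per_million(displaced)
                # Threshold faced by the n-th patient: $ per QALY forgone at the margin.
                print(f"patient {n:>6}: threshold ${1e6 / q:,.0f}/QALY")
            ```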

            In order to support your position that the marginal threshold should be used for all patients, regardless of budget impact, we need to go further than simply considering patients as individuals rather than members of a group. Specifically, we also need to assume that every patient who receives the technology is the first and only patient to receive it. So when we assess whether sofosbuvir is cost-effective for the second patient, we must ignore that the opportunity cost is now slightly higher due to funding the first patient. And so on for all other patients – we treat every patient as if they are the first and only patient to receive sofosbuvir, such that using the marginal threshold for everyone is appropriate.

            The problem with this is that it ignores reality. We can insist on treating all patients as individuals as much as we like, but the reality is that if a large number of individuals are receiving the same treatment at the same time then the opportunity cost of providing the treatment to each patient is higher. Ignoring this ‘group effect’ doesn’t make the decision maker more ‘moral’ or ‘ethical’; it just means that some of the opportunity cost has been overlooked, which itself raises moral and ethical issues (which I return to below).

            The perverse consequences of ignoring group effects are apparent even in mundane, everyday settings. Consider a large group of individuals who wish to swim in a small pool. The first individual can enter the pool without imposing any meaningful opportunity cost on others. But once the pool is at capacity, any further individuals who enter the pool impose a substantial opportunity cost, since overcrowding the pool has consequences for the enjoyment and safety of everyone. It would clearly be inappropriate (not to mention unethical, given the safety implications) to assess the merits of an additional individual entering a crowded swimming pool on the same basis as that of an individual entering an empty pool. Yet, if we ignore this group effect and consider each individual as if they were the first and only individual to enter the pool (analogous to using the marginal cost-effectiveness threshold to assess every patient receiving sofosbuvir), then we would allow for the overcrowding of the pool to the detriment of everyone.

            Since it’s clearly inappropriate to treat every patient as if they’re the first and only individual to receive treatment, it follows that we need to progressively lower the threshold as the budget impact increases. This is true regardless of whether we consider patients as individuals or as part of a larger group.

            Now let’s move on to the normative issue. You state that a “disproportionately high opportunity cost” should “hold no moral weight and thus not unduly influence decision-making”, since it only arises “when we consider total cost impact for a population”.

            Notwithstanding that a disproportionately high opportunity cost also arises when we consider the impact of providing treatment separately to a large number of individuals (as noted above), I have a strong normative objection to this position.

            I think academics and decision makers too often consider the opportunity cost of new technologies in purely abstract terms (“X QALYs displaced”), while simultaneously granting special status to the identifiable individuals who stand to benefit from these technologies. This creates an asymmetry between the rights afforded to the individuals who benefit from new technologies and the rights afforded to those individuals who bear the opportunity cost.

            The opportunity cost of new technologies is not abstract. It is borne by real patients, including many with the same characteristics as those individuals who are afforded special status when they stand to benefit from the technology in question (such as patients at the ‘end of life’, or with cancer, or who have a rare disease, etc.). When we talk about “displaced QALYs” in an abstract sense, we don’t do justice to what this actually means in practice: that there are real individuals – members of our society – who are now living shorter lives, or lives of worse quality, or both.

            So when we fail to adjust the threshold for technologies with large budget impact, what we are really doing is underestimating the full extent of the health loss borne by these individuals. I don’t believe there are any circumstances under which this health loss should carry “no moral weight”. In my view, the ‘right’ of patients not to be considered as members of a group does not justify ignoring the very real health consequences incurred by other members of society.

            Instead, I believe that the full extent of the health opportunity cost is of ethical relevance to decision makers, including any additional health opportunity cost that arises when adopting technologies with large budget impact. It is for this reason that it is morally necessary to lower the threshold when assessing technologies with large budget impact.

            1. Thanks, Mike! I should say that I’m not wholly convinced by the arguments I’m presenting here, so I appreciate you taking the time to test them out.

              I agree with your characterisation of the ‘logical’ issue, though not your characterisation of my perspective. Perhaps part of the confusion is in the term ‘budget impact’, which generally doesn’t relate to the individual-level cost. When I say the threshold shouldn’t change due to budget impact, I mean it shouldn’t change with reference to ‘total budget impact of a new technology for the indicated population’.

              I never said that the threshold shouldn’t change. The threshold should, of course, change with each patient funded. Nor am I saying that every patient should be treated as ‘the first’. Rather, every patient should be treated with reference to the prevailing threshold (i.e. the opportunity cost). My point is that it is unethical to make this threshold change only with reference to a specific new technology. Once the first patient is treated, the prevailing threshold has shifted with respect to the full health budget. It has nothing to do with the total budget impact of funding a new technology. I think the way that sofosbuvir was rolled out in the UK – which was far from an all-or-nothing approach – partly acknowledges the unfairness of an outright rejection.

              Your pool analogy is a nice one. Lowering a threshold on the basis of total budget impact is akin to not letting anybody get in the pool because we can’t afford to let everyone in, but then later letting a different group of people in because they are fewer in number. There is no reason to prioritise the smaller group’s access. Therein lies the unfairness.

              I agree with your normative argument, of course. But given that I think the threshold should (or, rather, does) change, it isn’t of consequence to this matter.

    Interesting discussions, on a subject with which I have some experience.

    An aspect often overlooked in HTA threshold vs affordability discussions is the commercial pricing policy that underpins affordability challenges.

    As drug development incurs a largely fixed cost, the profit-making price is a function of the marginal cost of production across the lifetime supply of the product until competitor entry.

    Ceteris paribus, larger revenue markets can sustain lower prices at a given rate of return. It is entirely reasonable for payers not to take price as given, regardless of volume, if volume leads to affordability challenges due to the sheer scale of revenue that free pricing would afford during the patent-protected monopoly phase.

    The ethics of decision-making for large treatment populations necessarily requires a commercial restraint on the health consequences of supernormal profits. And the bigger the revenue prize, the greater the impetus for payer-supplier bargaining, which results in better ICERs.

    1. I completely agree, Peter.

      The Canadian federal government recently published regulations which will allow the PMPRB (the federal agency responsible for setting maximum drug prices) to apply a lower price for drugs with large ‘market size’: http://www.gazette.gc.ca/rp-pr/p2/2019/2019-08-21/html/sor-dors298-eng.html

      This can be justified for precisely the reason you give: that drugs with larger treatment populations / market size / budget impact are (ceteris paribus) more profitable at lower prices, increasing the scope for bargaining the price down.

      I recently wrote a conceptual framework for the PMPRB that considers (among other things) how the ‘consumer’ and ‘producer’ surplus associated with a new drug is determined by the cost-effectiveness threshold, and how these might be influenced by the market size. See section 6 and Appendix 1 of the following report: http://www.pmprb-cepmb.gc.ca/view.asp?ccid=1449

      In short, the ‘producer surplus’ (manufacturer profit) is a function of the price, cost of development, marginal production costs and market size, while the ‘consumer surplus’ (net population health benefit) is a function of the price, ‘supply-side’ cost-effectiveness threshold (which determines the opportunity cost) and market size. (If we wish to consider the ‘consumer’ and ‘producer’ surplus in a common metric, we also need to convert dollars into health, or vice versa, which is where a ‘demand-side’ threshold might be useful).

      An important result is that pricing up to the ‘marginal’ supply-side threshold results in no consumer surplus during the patent period for a drug with marginal budget impact (since, by definition, one QALY is displaced for every QALY gained). For drugs with larger budget impact, pricing below this marginal threshold is necessary to obtain zero consumer surplus (since pricing at this marginal threshold would result in a loss in population health).
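
      A stylised calculation of this result (the functional form and all parameters are invented; the report linked above sets out the actual framework):

      ```python
      # Invented illustration: the price yielding zero consumer surplus
      # falls as market size grows, because the adjusted threshold k(B)
      # declines with budget impact B.

      def adjusted_threshold(budget_impact_m, k_marginal=30_000.0, slope=5.0):
          # Made-up linear decline in the threshold with budget impact.
          return k_marginal - slope * budget_impact_m

      d_qalys_per_patient = 1.0  # assumed QALY gain per patient

      for market_size in [1_000, 10_000, 100_000]:
          # Zero consumer surplus: price = d_qalys * k(B), where the budget
          # impact B itself depends on price. Solve by fixed-point iteration.
          price = 30_000.0
          for _ in range(200):
              budget_impact_m = market_size * price / 1e6
              price = d_qalys_per_patient * adjusted_threshold(budget_impact_m)
          print(f"market size {market_size:>7,}: zero-surplus price ${price:,.0f}")
      ```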

      Regardless of the market size, achieving a positive consumer surplus (i.e. improving population health) during the patent period requires pricing *below* the marginal supply-side threshold. However, pricing too low might result in the drug no longer being profitable, which might in turn result in the drug not being supplied, such that both consumer and producer surplus are zero.

      So if we want drugs to be both profitable and improve population health during the patent period then a balance is required – for drugs with marginal budget impact, we need to price them below the supply-side threshold, but not so low that the drug isn’t supplied. For drugs with large budget impact, we need to price them substantially below the supply-side threshold in order to improve population health during the patent period (due to the disproportionately high opportunity cost). There is also greater scope for negotiating lower prices for drugs with large budget impact because the producer surplus at any given price is greater.

      In short, we need to develop more sophisticated theoretical models in order that we can consider consumer and producer interests simultaneously. When we consider such a model (such as that published in the report above) we find that lower prices for drugs with substantial budget impact may be justified not only on the basis of their disproportionately high opportunity cost, but also because the producer surplus for such drugs at any given price is higher, such that a lower price is required to balance the returns for consumers and producers.

  3. Hi all,

    I need some time to properly process all of the above arguments, but as an author on the original paper I wanted to say thanks so much for the engagement and discussion. The paper was written by quite an interdisciplinary group (philosophy, medicine, policy) but didn’t include any health economists, so it’s great to hear another perspective. At first glance, I think we’re possibly talking at cross purposes at some points and Mike and Chris (McCabe), I think we probably agree on more than we disagree. But budget impact/affordability is an area that we’ve had quite a long-standing interest in, so I (and I suspect several of the other authors) would be really interested to continue the discussion at some point.

    Thanks!

    1. Thanks Victoria!

      I’d be very interested in continuing the conversation and hearing your perspectives on the arguments discussed above.

      Best wishes,

      Mike

    2. Right, I’ve had time to process now so here’s my view. (I don’t claim to speak for the other authors – we haven’t had a chance to discuss).

      For me, Chris Sampson makes the key point when he says “it is unethical to make this threshold change only with reference to a specific new technology”. This is what we were trying to get at in the paper with our notion of numerical discrimination.

      Assuming for a moment that the operational threshold (let’s call it £30k/QALY) is accurate and that the NHS always disinvests from the least efficient existing technology, then logically I see why the threshold should constantly reduce to avoid underestimating opp cost. (And I/we agree with @McCabeCJM’s normative point that OCs are very real & should carry equal weight. Which they probably don’t under the current NICE threshold.)

      But decisions to approve large BI technologies are essentially just larger, more noticeable jumps in a constantly diminishing threshold. If the threshold reduces slightly every time someone accesses a new technology, why should this only be taken into consideration for people who happen to be members of large groups? (Or small, very expensive groups).

      If anything, a more equitable approach would be to place all patients accessing new technologies (i.e. not just a particular technology) in a single group and to provide access based on what the threshold is at the time that they want to access the technology. I’m not necessarily recommending this – there are obviously equity issues and it’s not feasible from a policy perspective. But it would be consistent with our principle of non-perfectionism and would avoid numerical discrimination based on group size, while acknowledging the increased opp cost being suffered by individuals not accessing new technologies.

      Of course, in reality the NHS certainly isn’t disinvesting efficiently and we can’t estimate the true threshold with any accuracy, so pretending to make decisions based on precise knowledge of the threshold is, I’d suggest, inherently unethical. But that’s another issue!

      Thoughts?
