Chris Sampson’s journal round-up for 23rd September 2019

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Can you repeat that? Exploring the definition of a successful model replication in health economics. PharmacoEconomics [PubMed] Published 18th September 2019

People talk a lot about replication and its role in demonstrating the validity and reliability of analyses. But what does a successful replication in the context of cost-effectiveness modelling actually mean? Does it mean coming up with precisely the same estimates of incremental costs and effects? Does it mean coming up with a model that recommends the same decision? The authors of this study sought to bring us closer to an operational definition of replication success.

There is potentially much to learn from other disciplines that have a more established history of replication. The authors reviewed literature on the definition of ‘successful replication’ across all disciplines, and used their findings to construct a variety of candidate definitions for use in the context of cost-effectiveness modelling in health. Ten definitions of a successful replication were pulled out of the cross-disciplinary review, which could be grouped into ‘data driven’ replications and ‘experimental’ replications – the former relating to the replication of analyses and the latter relating to the replication of specific observed effects. The ten definitions were from economics, biostatistics, cognitive science, psychology, and experimental philosophy. The definitions varied greatly, with many involving subjective judgments about the proximity of findings. A few studies were found that reported on replications of cost-effectiveness models and which provided some judgement on the level of success. Again, these were inconsistent and subjective.

Quite reasonably, the authors judge that the lack of a fixed definition of successful replication in any scientific field is not just an oversight. The threshold for ‘success’ depends on the context of the replication and on how the evidence will be used. This paper provides six possible definitions of replication success for use in cost-effectiveness modelling, ranging from an identical replication of the results, through partial success in replicating specific pathways within a given margin of error, to simply replicating the same implied decision.

Ultimately, ‘data driven’ replications are a solution to a problem that shouldn’t exist, namely, poor reporting. This paper mostly convinced me that overall ‘success’ isn’t a useful thing to judge in the context of replicating decision models. Replication of certain aspects of a model is useful to evaluate. Whether the replication implied the same decision is a key thing to consider. Beyond this, it is probably worth considering partial success in replicating specific parts of a model.

Differential associations between interpersonal variables and quality-of-life in a sample of college students. Quality of Life Research [PubMed] Published 18th September 2019

There is growing interest in the well-being of students and the distinct challenges involved in achieving good mental health and addressing high levels of demand for services in this group. Students go through many changes that might influence their mental health; prominent among these is the change to their social situation.

This study set out to identify the role of key interpersonal variables on students’ quality of life. The study recruited 1,456 undergraduate students from four universities in the US. The WHOQOL measure was used for quality of life and a barrage of measures were used to collect information on loneliness, social connectedness, social support, emotional intelligence, intimacy, empathic concern, and more. Three sets of analyses of increasing sophistication were conducted, from zero-order correlations between each measure and the WHOQOL, to a network analysis using a Gaussian Graphical Model to identify both direct and indirect relationships while accounting for shared variance.
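The distinction the network analysis draws out – between zero-order correlations and direct relationships after accounting for shared variance – can be illustrated with toy data. The sketch below is mine, not the study’s: the variable names are hypothetical, and it computes partial correlations via regression residuals (the quantity that a Gaussian Graphical Model’s edges represent) rather than fitting the full model.

```python
import random
import statistics

random.seed(0)
n = 2000

# Toy data: social support lowers loneliness, and loneliness lowers
# quality of life; support affects quality of life only via loneliness.
support = [random.gauss(0, 1) for _ in range(n)]
loneliness = [-0.6 * s + random.gauss(0, 1) for s in support]
qol = [-0.7 * l + random.gauss(0, 1) for l in loneliness]

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx, sy = statistics.pstdev(x), statistics.pstdev(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) * sx * sy)

def residuals(y, x):
    """Residuals of y after removing its linear dependence on x."""
    b = corr(y, x) * statistics.pstdev(y) / statistics.pstdev(x)
    my, mx = statistics.fmean(y), statistics.fmean(x)
    return [yi - (my + b * (xi - mx)) for yi, xi in zip(y, x)]

# Zero-order: support looks directly related to quality of life...
zero_order = corr(support, qol)
# ...but the partial correlation, controlling for loneliness, vanishes,
# revealing the relationship as indirect.
partial = corr(residuals(support, loneliness), residuals(qol, loneliness))
print(round(zero_order, 2), round(partial, 2))
```

This is the sense in which the study can claim that other interpersonal variables influence quality of life *through* loneliness.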

In all analyses, loneliness stuck out as the strongest driver of quality of life. Social support, social connectedness, emotional intelligence, intimacy with one’s romantic partner, and empathic concern were also significantly associated with quality of life. But the impact of loneliness was greatest, with other interpersonal variables influencing quality of life through their impact on loneliness.

This is a well-researched and reported study. The findings are informative to student support and other services that seek to improve the well-being of students. There is reason to believe that such services should recognise the importance of interpersonal determinants of well-being and in particular address loneliness. But it’s important to remember that this study is only as good as the measures it uses. If you don’t think WHOQOL is adequately measuring student well-being, or you don’t think the UCLA Loneliness Scale tells us what we need to know, you might not want these findings to influence practice. And, of course, the findings may not be generalisable, as the extent to which different interpersonal variables affect quality of life is very likely dependent on the level of service provision, which varies greatly between different universities, let alone countries.

Affordability and non-perfectionism in moral action. Ethical Theory and Moral Practice [PhilPapers] Published 14th September 2019

The ‘cost-effective but unaffordable’ challenge has been bubbling for a while now, at least since sofosbuvir came on the scene. This study explores whether “we can’t afford it” is a justifiable position to take. The punchline is that, no, affordability is not a sound ethical basis on which to support or reject the provision of a health technology. I was extremely sceptical when I first read the claim. If we can’t afford it, it’s impossible, and how can there be a moral imperative in an impossibility? But the authors proceeded to convince me otherwise.

The authors don’t go into great detail on this point, but it all hinges on divisibility. The reason that a drug like sofosbuvir might be considered unaffordable is that loads of people would be eligible to receive it. If sofosbuvir were only provided to a subset of this population, it could be affordable. On this basis, the authors propose the ‘principle of non-perfectionism’. This states that not being able to do all the good we can do (e.g. provide everyone who needs it with sofosbuvir) is not a reason for not doing some of the good we can do. Thus, if we cannot support provision of a technology to everyone who could benefit from it, it does not follow (ethically) to provide it to nobody, but rather to provide it to some people. The basis for selecting people is not of consequence to this argument but could be based on a lottery, for example.

Building on this, the authors use the notion of ‘numerical discrimination’ to explain why blanket denial on affordability grounds is wrong. They argue that it is not OK to prioritise one group over another simply because we can meet the needs of everyone within that group as opposed to only some members of the other group. This is exactly what’s happening when we are presented with notions of (un)affordability. If the population of people who could benefit from sofosbuvir were much smaller, there wouldn’t be an issue. But the simple fact that the group is large does not make it morally permissible to deny cost-effective treatment to any individual member within that group. You can’t discriminate against somebody because they are from a large population.

I think there are some tenuous definitions in the paper and some questionable analogies. Nevertheless, the authors succeeded in convincing me that total cost has no moral weight. It is irrelevant to moral reasoning. We should not refuse any health technology to an entire population on the grounds that it is ‘unaffordable’. The authors frame it as a ‘mistake in moral mathematics’. For this argument to apply in the HTA context, it relies wholly on the divisibility of health technologies. To some extent, NICE and their counterparts are in the business of defining models of provision, which might result in limited use criteria to get around the affordability issue. Though these issues are often handled by payers such as NHS England.

The authors of this paper don’t consider the implications for cost-effectiveness thresholds, but this is where my thoughts turned. Does the principle of non-perfectionism undermine the morality of differentiating cost-effectiveness thresholds according to budget impact? I think it probably does. Reducing the threshold because the budget impact is great will result in discrimination (‘numerical discrimination’) against individuals simply because they are part of a large population that could benefit from treatment. This seems to be the direction in which we’re moving. Maybe the efficiency cart is before the ethical horse.

Are QALYs #ableist?

As many of us who have had to review submitted journal articles, thesis defenses, grant applications, white papers, and even published literature know, providing feedback on something that is poorly conceived is much harder than providing feedback on something well done.

This is going to be hard.

Who is ValueOurHealth?

The video above comes from the website of “ValueOurHealth.org”; I would tell you more about them, but there is no “About Us” menu item on the website. However, the website indicates that they are a group of patient organizations concerned about:

“The use of flawed, discriminatory value assessments [that] could threaten access to care for patients with chronic illnesses and people with disabilities.”

In particular, they take issue with value assessments that

“place a value on the life of a human based on their health status and assume every patient will respond the same way to treatments.”

QALYs, according to these concerned patient groups, assign a value to human beings. People with lower values (like Jessica, in the video above), then, will be denied coverage because their life is “valued less than someone in perfect health” which means “less value is also placed on treating” them. (Many will be quick to notice that health states and QALYs are used interchangeably here. I try to explain why below.)

It’s not as though this is one well-intended rogue group that simply misunderstands the concept of a QALY, needs someone to send them a polite email, and then we can all move on. Other groups have also asserted that QALYs unfairly discriminate against the aged and disabled, including AimedAlliance, Alliance for Patient Access, Institute for Patient Access, Alliance for Aging Research, and Global Liver Institute. There are likely many more patient groups out there that abhor QALYs (and definite articles/determiners, it seems) and are justifiably concerned about patient access to therapy. But these are all the ones I could find through a quick search from my perch in Canada.

Why do they hate QALYs?

One can infer pretty quickly that ValueOurHealth and its illustrative message are largely motivated by another very active organization, the “Partnership to Improve Patient Care” (PIPC). The video, and the arguments about “assigning QALYs” to people, seem to stem from a white paper produced by the PIPC, which in turn cites a very nicely written paper by Franco Sassi (of Imperial College London) that explains QALY and DALY calculations for researchers and policymakers.

The PIPC white paper, in fact, uses the very same calculation provided by Prof. Sassi to illustrate the impact of preventing a case of tuberculosis. However, unlike Prof. Sassi’s illustrative example, the PIPC fails to quantify the QALYs gained by the intervention. Instead, they focus only on the QALYs that an individual who has tuberculosis for six months will experience (0.36, versus 0.50, for those keeping score). After some further discussion about problems with measuring health states, the PIPC white paper skips ahead to the ethical problems with QALYs that are central to its position, citing a Value in Health paper by Erik Nord and colleagues. One of the key problems with the QALY, according to the PIPC and as argued in the Nord paper, goes as follows:

“Valuing health gains in terms of QALYs means that life-years gained in full health—through, for instance, prevention of fatal accidents in people in normal health—are counted as more valuable than life-years gained by those who are chronically ill or disabled—for instance, by averting fatal episodes in people with asthma, heart disease, or mental illness.”

It seems the PIPC assume that the lower number of QALYs experienced by those who are sick equates with the value payers place on their lives. More interestingly, Prof. Nord’s analysis says nothing about costs. While those who are older have fewer QALYs to potentially gain, they also incur fewer costs. This is why, contrary to the quoted claim about preventing fatal accidents in healthy people, preventive measures may offer similar value to treatments once both QALYs and costs are considered.

It is also why an ICER review showed that alemtuzumab is good value in individuals requiring second-line treatment for relapsing-remitting multiple sclerosis (1.34 QALYs can be gained compared with the next best alternative, at a lower cost than comparators), while a policy of annual mammography screening of similarly aged (i.e., >40) healthy women is of poor economic value (0.036 QALYs can be gained compared with no screening, at an additional cost of $5,500 per woman). Mammography provides better value in older individuals, and in this regard it is not unlike fracture prevention and a myriad of other interventions in healthy, asymptomatic people. Quite contrary to the assertion of these misinformed groups, many interventions represent increasingly better value in frail, disabled, and older patients: the same relative risk reduction yields larger absolute gains when baseline risk is high.
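For those keeping score at home, the arithmetic behind these figures is simple. A minimal sketch: the 0.72 utility weight for the tuberculosis health state is my assumption, chosen to reproduce the 0.36-versus-0.50 figures discussed earlier; the mammography numbers are those quoted above.

```python
def qalys(utility, years):
    """QALYs = health-state utility weight x years spent in that state."""
    return utility * years

def icer(delta_cost, delta_qalys):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return delta_cost / delta_qalys

# Six months of tuberculosis at an assumed utility of 0.72, vs full health:
tb = qalys(0.72, 0.5)        # 0.36 QALYs
healthy = qalys(1.0, 0.5)    # 0.50 QALYs
# Annual mammography vs no screening, using the figures quoted above:
per_qaly = icer(5500, 0.036)
print(tb, healthy, round(per_qaly))  # ratio is ~$152,778 per QALY gained
# When a treatment gains QALYs at lower total cost (as in the alemtuzumab
# comparison), it 'dominates' and no ratio against a threshold is needed.
```

The point the ratio makes is exactly the one in the text: value depends on both the QALYs gained and the costs incurred, not on the QALYs a sick person happens to experience.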

None of this is to say that QALYs (and incremental cost-effectiveness ratios) do not have problems. And the PIPC, at the very least, should be commended for trying to advance alternative metrics, something that very few critics have offered. Even so, the PIPC and like-minded organizations are likely trapped in a filter bubble. They know there are problems with QALYs, and they see expensive and rare disease treatments being valued harshly. Ergo: blame the QALY. (Note to PIPC: it is because the drugs are expensive relative to other life-saving things, not because of your concerns about the QALY.) They then see that others feel the same way, which means their concerns are likely justified. A critique of QALYs issued by the Pioneer Institute identifies many of these same arguments. One Twitterer, a disabled Massachusetts lawyer “alive because of Medicaid”, has offered further instruction for the QALY-naive.

What to do about it?

As a friend recently told me, not everyone is concerned with the QALY. Some don’t like what they see as a rationing approach promoted by the Institute for Clinical and Economic Review (ICER) assessments. Some hate the QALY. Some hate both. Last year, Joshua T. Cohen, Dan Ollendorf, and Peter Neumann published their own blog entry on the effervescing criticism of ICER, even allowing the PIPC head to have a say about QALYs. They then tried to set the record straight with these thoughts:

While we applaud the call for novel measures and to work with patient and disability advocates to understand attributes important to them, there are three problems with PIPC’s position.

First, simply coming up with that list of key attributes does not address how society should allocate finite resources, or how to price a drug given individual or group preferences.

Second, the diminished weight QALYs assign to life with disability does not represent discrimination. Instead, diminished weight represents recognition that treatments mitigating disability confer value by restoring quality of life to levels typical among most of the population.

Finally, all value measures that inform allocation of finite resources trade off benefits important to some patients against benefits potentially important to others. PIPC itself notes that life years not weighted for disability (e.g., the equal value life-year gained, or evLYG, introduced by ICER for sensitivity analysis purposes) do not award value for improved quality of life. Indeed, any measure that does not “discriminate” against patients with disability cannot award treatments credit for improving their quality of life. Failing to award that credit would adversely affect this population by ruling out spending on such improvements.

Certainly a lot more can be said here.

But for now, I am more curious what others have to say…

Thesis Thursday: Andrea Gabrio

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Andrea Gabrio who has a PhD from University College London. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Full Bayesian methods to handle missing data in health economic evaluation
Supervisors
Gianluca Baio, Alexina Mason, Rachael Hunter
Repository link
http://discovery.ucl.ac.uk/10072087

What kind of assumptions about missing data are made in trial-based economic evaluations?

In any analysis, assumptions are always made about the missing values, i.e. those values which are not observed. Since the final results may depend on these assumptions, it is important that they are as plausible as possible within the context considered. For example, in trial-based economic evaluations, missing values often occur when data are collected through self-reported patient questionnaires, and in many cases it is plausible that patients with unobserved responses differ from the others (e.g. have worse health states). It is therefore very important that a range of plausible scenarios (defined according to the available information) are considered, and that the robustness of the conclusions across them is assessed in sensitivity analysis. Often, however, analysts prefer to ignore this uncertainty and rely on ‘default’ approaches (e.g. removing the missing data from the analysis) which implicitly make unrealistic assumptions and may lead to biased results. For a more in-depth overview of current practice, I refer the reader to my published review.

Given that any assumption about the missing values cannot be checked from the data at hand, an ideal approach to handle missing data should combine a well-defined model for the observed data and explicit assumptions about missingness.
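The danger of the ‘default’ complete-case approach is easy to demonstrate with simulated data. The sketch below is a toy example of mine, not from the thesis: hypothetical utility scores where sicker patients are more likely to skip the questionnaire, so that dropping the missing rows biases the estimate.

```python
import random
import statistics

random.seed(1)

# Simulate utility scores where sicker patients (lower utility) are more
# likely to leave the questionnaire blank -- informative missingness,
# i.e. 'missing not at random'.
population = [random.gauss(0.7, 0.15) for _ in range(10_000)]
observed = [u for u in population if random.random() < min(1.0, u)]

true_mean = statistics.fmean(population)
complete_case_mean = statistics.fmean(observed)
print(round(true_mean, 3), round(complete_case_mean, 3))
# The complete-case mean overstates average health: removing the missing
# rows implicitly assumes responders and non-responders are alike.
```

Because the assumption driving the bias is unverifiable from the observed rows alone, the conclusion of the interview holds: the missingness mechanism must be modelled explicitly and varied in sensitivity analysis.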

What do you mean by ‘full Bayesian’?

The term ‘full Bayesian’ is a technicality and typically indicates that, in the Bayesian analysis, the prior distributions are freely specified by the analyst, rather than being based on the data (as in ‘empirical Bayes’ approaches). Being ‘fully’ Bayesian has some key advantages for handling missingness compared to other approaches, especially in small samples. First, a flexible choice of the priors may help to stabilise inference and avoid giving too much weight to implausible parameter values. Second, external information about missingness (e.g. expert opinion) can be easily incorporated into the model through the priors. This is essential when performing sensitivity analysis to missingness, as it allows assessment of the robustness of the results to a range of assumptions, with the uncertainty of any unobserved quantity (parameters or missing data) being fully propagated and quantified in the posterior distribution.
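How a freely specified prior stabilises a small-sample estimate can be shown with the simplest conjugate case. This Beta-Binomial sketch is a toy illustration of mine (the numbers are not from the thesis): with only five patients, the prior pulls the estimate away from the noisy raw proportion.

```python
# Conjugate Beta-Binomial updating: a weakly informative Beta(2, 2) prior
# (centred on 0.5) combined with 3 responders out of 5 patients.
a_prior, b_prior = 2, 2
responders, n = 3, 5             # raw response rate = 0.6

# Posterior is Beta(a_prior + successes, b_prior + failures).
a_post = a_prior + responders
b_post = b_prior + (n - responders)
posterior_mean = a_post / (a_post + b_post)

print(round(posterior_mean, 3))  # 5/9, shrunk from 0.6 towards the prior mean 0.5
```

The same mechanism is what lets external information about missingness (e.g. expert opinion) enter the model: it is encoded as a prior and its influence is then propagated through the posterior.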

How did you use case studies to support the development of your methods?

In my PhD I had access to economic data from two small trials, which were characterised by considerable amounts of missing outcome values and which I used as motivating examples to implement my methods. In particular, individual-level economic data are characterised by a series of complexities that make it difficult to justify the use of more ‘standardised’ methods and which, if not taken into account, may lead to biased results.

Examples of these include the correlation between effectiveness and costs, the skewness in the empirical distributions of both outcomes, the presence of identical values for many individuals (e.g. excess zeros or ones), and, on top of that, missingness. In many cases, the implementation of methods to handle these issues is not straightforward, especially when multiple types of complexities affect the data.

The flexibility of the Bayesian framework allows the specification of a model whose level of complexity can be increased in a relatively easy way to handle all these problems simultaneously, while also providing a natural way to perform probabilistic sensitivity analysis. I refer the reader to my published work for an example of how Bayesian models can be implemented to handle trial-based economic data.

How does your framework account for longitudinal data?

Since the data collected within a trial have a longitudinal nature (i.e. collected at different times), it is important that any missingness methods for trial-based economic evaluations take into account this feature. I therefore developed a Bayesian parametric model for a bivariate health economic longitudinal response which, together with accounting for the typical complexities of the data (e.g. skewness), can be fitted to all the effectiveness and cost variables in a trial.

Time dependence between the responses is formally taken into account by means of a series of regressions, where each variable can be modelled conditionally on other variables collected at the same or at previous time points. This also offers an efficient way to handle missingness, as the available evidence at each time is included in the model, which may provide valuable information for imputing the missing data and therefore improve the confidence in the final results. In addition, sensitivity analysis to a range of missingness assumptions can be performed using a ‘pattern mixture’ approach. This allows the identification of certain parameters, known as sensitivity parameters, on which priors can be specified to incorporate external information and quantify its impact on the conclusions. A detailed description of the longitudinal model and the missing data analyses explored is also available online.
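The pattern-mixture idea can be sketched outside the full Bayesian machinery. The toy example below is mine, not the thesis model: the sensitivity parameter delta is the assumed difference between non-responders’ and responders’ mean outcome, and varying it shows how the overall estimate moves with the missingness assumption.

```python
import random
import statistics

random.seed(7)

# 80 observed utility scores; 20 patients returned no questionnaire.
observed = [random.gauss(0.7, 0.1) for _ in range(80)]
n_missing = 20
mu_obs = statistics.fmean(observed)

# Pattern-mixture sensitivity analysis: assume the non-responders' mean
# differs from the responders' mean by delta. delta = 0 corresponds to
# missing-at-random; negative values encode the belief that
# non-responders are in worse health.
for delta in (0.0, -0.1, -0.2):
    overall = (sum(observed) + n_missing * (mu_obs + delta)) / (len(observed) + n_missing)
    print(f"delta={delta:+.1f}  overall mean={overall:.3f}")
```

In the full Bayesian version described above, delta is not fixed but given a prior (e.g. informed by expert opinion), so its uncertainty is propagated into the posterior rather than examined only pointwise.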

Are your proposed methods easy to implement?

Most of the methods that I developed in my project were implemented in JAGS, software specifically designed for the analysis of Bayesian models using Markov Chain Monte Carlo simulation. Like other Bayesian software (e.g. OpenBUGS and Stan), JAGS is freely available and can be interfaced with different statistical programs, such as R, SAS, and Stata. Therefore, I believe that, once people are willing to overcome the initial barrier of getting familiar with a new modelling language, these programs provide extremely powerful tools for implementing Bayesian methods. In economic evaluations, analysts are typically more familiar with frequentist methods (e.g. multiple imputation), but as the complexity of the analysis increases, implementing these methods requires tailor-made routines to optimise non-standard likelihood functions. A full Bayesian approach is then likely to be preferable, as it naturally propagates uncertainty to the wider economic model and supports sensitivity analysis.