Chris Sampson’s journal round-up for 30th September 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A need for change! A coding framework for improving transparency in decision modeling. PharmacoEconomics [PubMed] Published 24th September 2019

We’ve featured a few papers in recent round-ups that (I assume) will be included in an upcoming themed issue of PharmacoEconomics on transparency in modelling. It’s shaping up to be a good one. The value of transparency in decision modelling has been recognised, but simply making the stuff visible is not enough – it needs to make sense. The purpose of this paper is to help make that achievable.

The authors highlight that the writing of analyses, including coding, involves personal style and preferences. To aid transparency, we need a systematic framework of conventions that make the inner workings of a model understandable to any (expert) user. The paper describes a framework developed by the Decision Analysis in R for Technologies in Health (DARTH) group. The DARTH framework builds on a set of core model components, generalisable to all cost-effectiveness analyses and model structures. There are five components – i) model inputs, ii) model implementation, iii) model calibration, iv) model validation, and v) analysis – and the paper describes the role of each. Importantly, the analysis component can be divided into several parts relating to, for example, sensitivity analyses and value of information analyses.

Based on this framework, the authors provide recommendations for organising and naming files and on the types of functions and data structures required. The recommendations build on conventions established in other fields and in the use of R generally. The authors recommend the use of functions in R and relate their general recommendations to the context of decision modelling. We’re also introduced to unit testing, which will be unfamiliar to most Excel modellers but which can be implemented relatively easily in R. The roles of various tools are introduced, including RStudio, R Markdown, Shiny, and GitHub.
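The idea of unit testing is worth a concrete illustration. As a very rough sketch of the principle – written here in Python for brevity, whereas the DARTH framework and its packages are built in R, and the function below is hypothetical rather than taken from the paper – a test might simply check that a model’s transition matrix is a valid set of probability distributions:

```python
import numpy as np

def build_transition_matrix(p_sick, p_recover, p_die):
    """Assemble a simple 3-state (healthy, sick, dead) transition matrix."""
    return np.array([
        [1 - p_sick - p_die, p_sick,                p_die],  # from healthy
        [p_recover,          1 - p_recover - p_die, p_die],  # from sick
        [0.0,                0.0,                   1.0],    # dead is absorbing
    ])

def test_rows_sum_to_one():
    m = build_transition_matrix(p_sick=0.10, p_recover=0.05, p_die=0.02)
    assert np.allclose(m.sum(axis=1), 1.0), "each row must sum to 1"
    assert (m >= 0).all(), "probabilities must be non-negative"

test_rows_sum_to_one()
print("transition matrix checks passed")
```

The same logic applies to any model component: write a small function, then write a test that fails loudly if the function’s output stops making sense.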

The real value of this work lies in the linked R packages and other online material, which you can use to test out the framework and consider its application to whatever modelling problem you might have. The authors provide an example using a basic Sick-Sicker model, which you can have a play with using the DARTH packages. In combination with the online resources, this is a valuable paper that you should have to hand if you’re developing a model in R.

Accounts from developers of generic health state utility instruments explain why they produce different QALYs: a qualitative study. Social Science & Medicine [PubMed] Published 19th September 2019

It’s well known that different preference-based measures of health will generate different health state utility values for the same person. Yet, they continue to be used almost interchangeably. For this study, the authors spoke to people involved in the development of six popular measures: QWB, 15D, HUI, EQ-5D, SF-6D, and AQoL. Their goal was to understand the bases for the development of the measures and to explain why the different measures should give different results.

At least one original developer for each instrument was recruited, along with people involved at later stages of development. Semi-structured interviews were conducted with 15 people, with questions on the background, aims, and criteria for the development of the measure, and on the descriptive system, preference weights, performance, and future development of the instrument.

Five broad topics were identified as being associated with differences in the measures: i) knowledge sources used for conceptualisation, ii) development purposes, iii) interpretations of what makes a ‘good’ instrument, iv) choice of valuation techniques, and v) the context for the development process. The online appendices provide some useful tables that summarise the differences between the measures. The authors distinguish between measures based on ‘objective’ definitions (QWB) and items that people found important (15D). Some prioritised sensitivity (AQoL, 15D), others prioritised validity (HUI, QWB), and several focused on pragmatism (SF-6D, HUI, 15D, EQ-5D). Some instruments had modest goals and opportunistic processes (EQ-5D, SF-6D, HUI), while others had grand goals and purposeful processes (QWB, 15D, AQoL). The use of some measures (EQ-5D, HUI) extended far beyond what the original developers had anticipated. In short, different measures were developed with quite different concepts and purposes in mind, so it’s no surprise that they give different results.

This paper provides some interesting accounts and views on the process of instrument development. It might prove most useful in understanding different measures’ blind spots, which can inform the selection of measures in research, as well as future development priorities.

The emerging social science literature on health technology assessment: a narrative review. Value in Health Published 16th September 2019

Health economics provides a good example of multidisciplinarity, with economists, statisticians, medics, epidemiologists, and plenty of others working together to inform health technology assessment. But I still don’t understand what sociologists are talking about half of the time. Yet, it seems that sociologists and political scientists are busy working on the big questions in HTA, as demonstrated by this paper’s 120 references. So, what are they up to?

This article reports on a narrative review, based on 41 empirical studies. Three broad research themes are identified: i) what drove the establishment and design of HTA bodies? ii) what has been the influence of HTA? and iii) what have been the social and political influences on HTA decisions? Some have argued that HTA is inevitable, while others have argued that there are alternative arrangements. Either way, no two systems are the same and it is not easy to explain the differences. It’s important to understand HTA in the context of other social tendencies and trends, and to recognise that HTA both influences and is influenced by them. The authors provide a substantial discussion of the role of stakeholders in HTA and the potential for some to attempt to game the system. Uncertainty abounds in HTA, which necessarily requires negotiation and limits the extent to which HTA can rely on objectivity and rationality.

Something lacking is a critical history of HTA as a discipline and the question of what HTA is actually good for. There’s also not a lot of work out there on culture and values, which contrasts with medical sociology. The authors suggest that sociologists and political scientists could be more closely involved in HTA research projects. I suspect that such a move would be more challenging for the economists than for the sociologists.


Chris Sampson’s journal round-up for 23rd September 2019


Can you repeat that? Exploring the definition of a successful model replication in health economics. PharmacoEconomics [PubMed] Published 18th September 2019

People talk a lot about replication and its role in demonstrating the validity and reliability of analyses. But what does a successful replication in the context of cost-effectiveness modelling actually mean? Does it mean coming up with precisely the same estimates of incremental costs and effects? Does it mean coming up with a model that recommends the same decision? The authors of this study sought to bring us closer to an operational definition of replication success.

There is potentially much to learn from other disciplines that have a more established history of replication. The authors reviewed literature on the definition of ‘successful replication’ across all disciplines, and used their findings to construct a variety of candidate definitions for use in the context of cost-effectiveness modelling in health. Ten definitions of a successful replication were pulled out of the cross-disciplinary review, which could be grouped into ‘data driven’ replications and ‘experimental’ replications – the former relating to the replication of analyses and the latter relating to the replication of specific observed effects. The ten definitions were from economics, biostatistics, cognitive science, psychology, and experimental philosophy. The definitions varied greatly, with many involving subjective judgments about the proximity of findings. A few studies were found that reported on replications of cost-effectiveness models and which provided some judgment on the level of success. Again, these were inconsistent and subjective.

Quite reasonably, the authors judge that the lack of a fixed definition of successful replication in any scientific field is not just an oversight. The threshold for ‘success’ depends on the context of the replication and on how the evidence will be used. This paper provides six possible definitions of replication success for use in cost-effectiveness modelling, ranging from an identical replication of the results, through partial success in replicating specific pathways within a given margin of error, to simply replicating the same implied decision.
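To make that spectrum concrete, here is a hypothetical sketch – in Python, with invented numbers and tolerances; the paper itself doesn’t prescribe any particular implementation – of two of the end points: judging a replication by the closeness of its incremental results versus by whether it implies the same adoption decision at a given threshold:

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio."""
    return delta_cost / delta_qaly

def same_results(orig, repl, tolerance=0.05):
    """'Strict' success: incremental costs and QALYs within a relative margin."""
    return all(abs(o - r) / abs(o) <= tolerance for o, r in zip(orig, repl))

def same_decision(orig, repl, threshold=20_000):
    """'Loose' success: both models imply the same decision at the threshold."""
    return (icer(*orig) <= threshold) == (icer(*repl) <= threshold)

original = (8_000, 0.45)     # (incremental cost, incremental QALYs)
replicated = (8_600, 0.44)

print(same_results(original, replicated))   # False: costs differ by more than 5%
print(same_decision(original, replicated))  # True: both ICERs fall below £20,000/QALY
```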

Ultimately, ‘data driven’ replications are a solution to a problem that shouldn’t exist, namely, poor reporting. This paper mostly convinced me that overall ‘success’ isn’t a useful thing to judge in the context of replicating decision models. Replication of certain aspects of a model is useful to evaluate. Whether the replication implied the same decision is a key thing to consider. Beyond this, it is probably worth considering partial success in replicating specific parts of a model.

Differential associations between interpersonal variables and quality-of-life in a sample of college students. Quality of Life Research [PubMed] Published 18th September 2019

There is growing interest in the well-being of students and the distinct challenges involved in achieving good mental health and addressing high levels of demand for services in this group. Students go through many changes that might influence their mental health; prominent among these is the change to their social situation.

This study set out to identify the role of key interpersonal variables in students’ quality of life. The study recruited 1,456 undergraduate students from four universities in the US. The WHOQOL measure was used for quality of life, and a barrage of measures was used to collect information on loneliness, social connectedness, social support, emotional intelligence, intimacy, empathic concern, and more. Three sets of analyses of increasing sophistication were conducted, from zero-order correlations between each measure and the WHOQOL, to a network analysis using a Gaussian Graphical Model to identify both direct and indirect relationships while accounting for shared variance.
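For anyone unfamiliar with the approach, a minimal sketch of this kind of partial-correlation network – using simulated placeholder data and generic variable roles, not the study’s actual measures – might look something like this:

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
# Placeholder data: rows are respondents, columns are scale scores
# (e.g. loneliness, social support, connectedness, quality of life).
X = rng.normal(size=(500, 4))
X[:, 3] = -0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=500)

# Estimate a sparse Gaussian Graphical Model (inverse covariance matrix).
model = GraphicalLassoCV().fit(X)
precision = model.precision_

# Convert the precision matrix to partial correlations; the off-diagonal
# entries are the edge weights of the network.
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
print(np.round(partial_corr, 2))
```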

In all analyses, loneliness stuck out as the strongest driver of quality of life. Social support, social connectedness, emotional intelligence, intimacy with one’s romantic partner, and empathic concern were also significantly associated with quality of life. But the impact of loneliness was greatest, with other interpersonal variables influencing quality of life through their impact on loneliness.

This is a well-researched and reported study. The findings are informative to student support and other services that seek to improve the well-being of students. There is reason to believe that such services should recognise the importance of interpersonal determinants of well-being and in particular address loneliness. But it’s important to remember that this study is only as good as the measures it uses. If you don’t think WHOQOL is adequately measuring student well-being, or you don’t think the UCLA Loneliness Scale tells us what we need to know, you might not want these findings to influence practice. And, of course, the findings may not be generalisable, as the extent to which different interpersonal variables affect quality of life is very likely dependent on the level of service provision, which varies greatly between different universities, let alone countries.

Affordability and non-perfectionism in moral action. Ethical Theory and Moral Practice [PhilPapers] Published 14th September 2019

The ‘cost-effective but unaffordable’ challenge has been bubbling for a while now, at least since sofosbuvir came on the scene. This study explores whether “we can’t afford it” is a justifiable position to take. The punchline is that, no, affordability is not a sound ethical basis on which to support or reject the provision of a health technology. I was extremely sceptical when I first read the claim. If we can’t afford it, it’s impossible, and how can there be a moral imperative in an impossibility? But the authors proceeded to convince me otherwise.

The authors don’t go into great detail on this point, but it all hinges on divisibility. The reason that a drug like sofosbuvir might be considered unaffordable is that loads of people would be eligible to receive it. If sofosbuvir were only provided to a subset of this population, it could be affordable. On this basis, the authors propose the ‘principle of non-perfectionism’. This states that not being able to do all the good we can do (e.g. provide everyone who needs it with sofosbuvir) is not a reason for not doing some of the good we can do. Thus, if we cannot support provision of a technology to everyone who could benefit from it, it does not follow (ethically) that we should provide it to nobody; rather, we should provide it to some people. The basis for selecting those people is not of consequence to this argument, but could be a lottery, for example.

Building on this, the authors explain why denying a technology to a whole population on affordability grounds is wrong, using the notion of ‘numerical discrimination’. They argue that it is not OK to prioritise one group over another simply because we can meet the needs of everyone within that group, as opposed to only some members of the other group. This is exactly what’s happening when we are presented with notions of (un)affordability. If the population of people who could benefit from sofosbuvir were much smaller, there wouldn’t be an issue. But the simple fact that the group is large does not make it morally permissible to deny cost-effective treatment to any individual member of that group. You can’t discriminate against somebody because they are from a large population.

I think there are some tenuous definitions in the paper and some questionable analogies. Nevertheless, the authors succeeded in convincing me that total cost has no moral weight. It is irrelevant to moral reasoning. We should not refuse any health technology to an entire population on the grounds that it is ‘unaffordable’. The authors frame it as a ‘mistake in moral mathematics’. For this argument to apply in the HTA context, it relies wholly on the divisibility of health technologies. To some extent, NICE and its counterparts are in the business of defining models of provision, which might result in limited-use criteria to get around the affordability issue, though these issues are often handled by payers such as NHS England.

The authors of this paper don’t consider the implications for cost-effectiveness thresholds, but this is where my thoughts turned. Does the principle of non-perfectionism undermine the morality of differentiating cost-effectiveness thresholds according to budget impact? I think it probably does. Reducing the threshold because the budget impact is great will result in discrimination (‘numerical discrimination’) against individuals simply because they are part of a large population that could benefit from treatment. This seems to be the direction in which we’re moving. Maybe the efficiency cart is before the ethical horse.


Chris Sampson’s journal round-up for 19th August 2019


Paying for kidneys? A randomized survey and choice experiment. American Economic Review [RePEc] Published August 2019

This paper starts with a quote from Alvin Roth about ‘repugnant transactions’, of which markets for organs provide a prime example. This idea of ‘repugnant transactions’ has been hijacked by some pop economists to represent the stupid opinions of non-economists. If you ask me, markets for organs aren’t repugnant, they just seem like a very bad idea in terms of both efficiency and equity. But it doesn’t matter what I think; it matters what the people of the United States think.

The authors of this study conducted an online survey with a representative sample of 2,666 Americans. Each respondent was randomised to evaluate one of eight systems compared with the current system. The eight systems differed with respect to i) whether compensation was in cash or non-cash form, ii) its size ($30,000 or $100,000), and iii) whether it was paid by a public agency or by the organ recipient. Participants made five binary choices that differed according to the gain – in transplants generated – associated with the new system. Half of the participants were also asked to express moral judgements.

Both the system features (e.g. who pays) and the outcomes of the new system influenced people’s choices. Broadly speaking, the results suggest that people aren’t opposed to donors being paid, but are opposed to patients paying (remember, we’re talking about the US here!). Around 21% of respondents opposed payment no matter what, 46% were in favour no matter what, and 18% were sensitive to the gain in the number of transplants. A 10 percentage point increase in transplants resulted in a 2.6 percentage point increase in support. Unsurprisingly, individuals’ moral judgements were predictive of the attitudes they expressed, particularly with respect to fairness. The authors describe their results as exhibiting ‘strong polarisation’, which is surely inevitable for questions that involve moral judgement.
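As a hypothetical illustration of the kind of relationship reported – this is simulated data and a simple linear probability model, not the authors’ estimation strategy – a slope of support with respect to the offered gain in transplants could be recovered like so:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
gain = rng.choice([0, 10, 20, 30, 40], size=2_000)   # % point gain in transplants offered
p_support = np.clip(0.46 + 0.0026 * gain, 0, 1)      # invented baseline and slope
support = rng.binomial(1, p_support)                 # binary 'supports the new system'

# Linear probability model with robust standard errors.
ols = sm.OLS(support, sm.add_constant(gain.astype(float))).fit(cov_type="HC1")
print(ols.params)  # slope of ~0.0026: about 2.6 points of support per 10-point gain
```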

Being in AER, this is a long meandering paper with extensive analyses and thoroughly reported results. There’s lots of information and findings that I can’t share here. It’s a valuable study with plenty of food for thought, but I can’t help but think that it is, methodologically, a bit weak. If we want to understand the different views in society, surely some Q methodology would be more useful than a basic online survey. And if we want to elicit stated preferences, surely a discrete choice experiment with a well-thought-out efficient design would give us more meaningful results.

Estimating local need for mental healthcare to inform fair resource allocation in the NHS in England: cross-sectional analysis of national administrative data linked at person level. The British Journal of Psychiatry [PubMed] Published 8th August 2019

The need to fairly (and efficiently) allocate NHS resources across the country played an important part in the birth of health economics in the UK, and resulted in resource allocation formulas. Since 1996 there has been a separate formula for mental health services, which is periodically updated. This study describes the work undertaken for the latest update.

The model is based on predicting service use and total mental health care costs observed in 2015 from predictors in the years 2013-2014, to inform allocations in 2019-2024. Various individual-level data sources available to the NHS were used for 43.7 million people registered with a GP practice and over the age of 20. The cost per patient who used mental health services ranged from £94 to over £1 million, averaging around £2,000. The predictor variables included individual indicators such as age, sex, ethnicity, physical diagnoses, and household type (e.g. number of adults and kids). The model also used variables observed at the local or GP practice level, such as the proportion of people receiving out-of-work benefits and the distance from the mental health trust. All of this got plugged into a good old OLS regression. From the individual-level predictions, the researchers created aggregated indices of need for each clinical commissioning group (CCG).
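A stripped-down sketch of the general approach – with invented data, variables, and groups, and none of the real model’s complexity – might look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 10_000
df = pd.DataFrame({
    "ccg": rng.choice(["A", "B", "C"], size=n),
    "age": rng.integers(20, 90, size=n),
    "female": rng.integers(0, 2, size=n),
    "out_of_work_benefits": rng.random(n),  # area-level proportion, as a stand-in
})
df["cost"] = (500 + 15 * df["age"] + 2_000 * df["out_of_work_benefits"]
              + rng.normal(scale=800, size=n))

# Predict individual-level costs with OLS, then aggregate the predictions into a
# need index per group, expressed relative to the national average (1.00).
fit = smf.ols("cost ~ age + female + out_of_work_benefits", data=df).fit()
df["predicted"] = fit.predict(df)
need_index = df.groupby("ccg")["predicted"].mean() / df["predicted"].mean()
print(need_index.round(2))
```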

A lot went into the model, which explained 99% of the variation in costs between CCGs. A key way in which this model differs from previous versions is that it relies on individual-level indicators rather than those observed at the level of the GP practice or CCG. There was a lot of variation in the CCG need indices, ranging from 0.65 for Surrey Heath to 1.62 for Southwark, where 1.00 is the average. You’ll need to check the online appendices for your own CCG’s level of need (Lewisham: 1.52). As one might expect, the researchers observed a strong correlation between a CCG’s need index and the level of deprivation in its area. Compared with previous models, this new model indicates a greater allocation of resources to more deprived and older populations.

Measuring, valuing and including forgone childhood education and leisure time costs in economic evaluation: methods, challenges and the way forward. Social Science & Medicine [PubMed] Published 7th August 2019

I’m a ‘societal perspective’ sceptic, not because I don’t care about non-health outcomes (though I do care less) but because I think it’s impossible to capture everything that is of value to society, and that capturing just a few things will introduce a lot of bias and noise. I would also deny that time has any intrinsic value. But I do think we need to do a better job of evaluating interventions for children. So I expected this paper to provide me with a good mix of satisfaction and exasperation.

Health care often involves a loss of leisure or work time, which constitutes an opportunity cost and – for adults – is regularly included in economic evaluations, usually proxied by wages. The authors outline the rationale for considering ‘time-related’ opportunity costs in economic evaluations and describe the nature of lost time for children. For adults, the distinction is generally between paid or unpaid work and leisure time. Arguably, this distinction is not applicable to children. Two literature reviews are described. One looked at economic evaluations in the context of children’s health, to see how researchers have valued lost time. The other sought to identify ideas about the value of lost time for children from a broader literature.

The authors do a nice job of outlining how difficult it is to capture non-health-related costs and outcomes in the context of childhood. There is a handful of economic evaluations that have tried to measure and value children’s foregone time. The valuations generally focussed on the costs of childcare rather than the costs to the child, though one looked at the rate of return to education. There wasn’t a lot to go off in the non-health literature, which mostly relates to adults. From what there is, the recommendation is to capture absence from formal education and foregone leisure time. Of course, consideration needs to be given to the importance of lost time and thus the value of capturing it in research. We also need to think about the risk of double counting. When it comes to measurement, we can probably use similar methods as we would for adults, such as diaries. But we need very different approaches to valuation. On this, the authors found very little in the way of good examples to follow. More research needed.
