Simon McNamara’s journal round-up for 6th August 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Euthanasia, religiosity and the valuation of health states: results from an Irish EQ5D5L valuation study and their implications for anchor values. Health and Quality of Life Outcomes [PubMed] Published 31st July 2018

Do you support euthanasia? Do you think there are health states worse than death? Are you religious? Don’t worry – I am not commandeering this week’s AHE journal round-up just to bombard you with a series of difficult questions. These three questions form the foundation of the first article selected for this week’s round-up.

The paper is based upon the hypothesis that your religiosity (“adherence to religious beliefs”) is likely to affect your support for euthanasia and, in turn, the likelihood that you will value severe health states as worse than death. This seems like a logical hypothesis. Religions tend to be anti-euthanasia, so religious people are likely to have lower levels of support for euthanasia than non-religious people. Equally, if you don’t support the principle of euthanasia, it stands to reason that you will be less willing to choose immediate death over living in a severe health state – something you would need to do for a health state to be considered worse than death in a time trade-off (TTO) study.

The authors test this hypothesis using a sub-sample of data (n=160) collected as part of the Irish EQ-5D-5L TTO valuation study. Perhaps unsurprisingly, they find evidence in support of it. Those who attend a religious service weekly were more likely to oppose euthanasia than those who attend a few times a year or less, and those who oppose euthanasia were less likely to give “worse than death” responses in the TTO than those who support it.

I found this paper really interesting, as it raises a number of challenging questions. If a society is made up of people with heterogeneous beliefs regarding religion, how should we balance these in the valuation of health? If a society is primarily non-religious, is it fair to apply its valuation tariff to the lives of the religious, and vice versa? These certainly aren’t easy questions to answer, but they may be worth reflecting on.

E-learning and health inequality aversion: A questionnaire experiment. Health Economics [PubMed] [RePEc] Published 22nd July 2018

Moving on from the cheery topic of euthanasia, what do you think about socioeconomic inequalities in health? In my home country, England, if you are from the poorest quintile of society, you can expect to experience 62 years in full health in your lifetime, whilst if you are from the richest quintile, you can expect to experience 74 years – a gap of 12 years.

In the second paper to be featured in this round-up, Cookson et al. explore the public’s willingness to sacrifice incremental population health gains in order to reduce these inequalities in health – their level of “health inequality aversion”. This is a potentially important area of research, as the vast majority of economic evaluation in health is distributionally-naïve and effectively assumes that members of the public aren’t at all concerned with inequalities in health.

The paper builds on the authors’ prior work in this area, in which they noted that a high proportion of respondents in health inequality aversion elicitation studies appear to be so averse to inequalities that they violate monotonicity – they choose scenarios that reduce inequalities in health even when those scenarios reduce the health of the rich with no gain to the poor, reduce the health of the poor, or reduce the health of both groups. The authors hypothesise that these monotonicity violations may be due to incomplete thinking on the part of participants, and suggest that the quality of their thinking could be improved by two e-learning educational interventions. The primary aim of the paper is to test the impact of these interventions in a sample of the UK public (n=60).

The first e-learning intervention was an animated video that described a range of potential positions that a respondent could take (e.g. health maximisation, or maximising the health of the worst off). The second was an interactive spreadsheet-based questionnaire that presented the consequences of the participant’s choices, prior to them confirming their selection. Both interventions are available online.

The authors found that the interactive tool significantly reduced the number of extreme egalitarian (monotonicity-violating) responses, compared to a non-interactive, paper-based version of the study. Similarly, when the video was watched before completing the paper-based exercise, the number of extreme egalitarian responses fell. However, when the video was watched before the interactive tool, there was no further decrease in extreme egalitarianism. Despite this reduction in extreme egalitarianism, median levels of inequality aversion remained high: the implied weight on a QALY gained by someone from the poorest fifth of society, relative to the richest fifth, was 2.6 in the interactive questionnaire group and 7.0 in the video group.

This is an interesting study that provides further evidence of inequality aversion, and raises further concern about the practical dominance of distributionally-naïve approaches to economic evaluation. The public does seem to care about distribution. Furthermore, the paper demonstrates that participant responses to inequality aversion exercises are shaped by the information given to them, and the way that information is presented. I look forward to seeing more studies like this in the future.

A new method for valuing health: directly eliciting personal utility functions. The European Journal of Health Economics [PubMed] [RePEc] Published 20th July 2018

Last, but not least, for this round-up, is a paper by Devlin et al. on a new method for valuing health.

The relative valuation of health states is a pretty important topic for health economists. If we are to quantify the effectiveness, and subsequently cost-effectiveness, of an intervention, we need to understand which health states are better than others, and how much better they are. Traditionally, this is done by asking members of the public to choose between health profiles featuring differing levels of fulfilment across a range of health domains, in order to ‘uncover’ the relative importance the respondent places on these domains and levels. These can then be used to generate social tariffs that assign a utility value to a given health state for use in economic evaluation.

The authors point out that, in the modern day, valuation studies can be conducted rapidly, and at scale, online, but at the potential cost of deliberation from participants, and the resultant risk of heuristic-dominated decision making. In response to this, the authors propose a new method – the direct elicitation of personal utility functions – and pilot its use for the valuation of EQ-5D in a sample of the English public (n=76).

The proposed approach differs from traditional approaches in three key ways. Firstly, instead of simply attempting to infer the relative importance that participants place on differing domains based upon choices between health profiles, the respondents are asked directly about the relative importance they place on differing domains of health, prior to validating these with profile choices. Secondly, the authors place a heavy emphasis on deliberation, and the construction, rather than uncovering, of preferences during the elicitation exercises. Thirdly, a “personal utility function” for each individual is constructed (in effect a personal EQ-5D tariff), and these individual utility functions are subsequently aggregated into a social utility function.

In the pilot, the authors find that the method appears feasible for wider use, albeit with some teething troubles associated with the computer-based tool developed to implement it, and the skills of the interviewers.

This direct method raises an interesting question for health economics – should we be inferring preferences from choices between profiles that differ in terms of certain attributes, or should we just ask directly about the attributes? This is a tricky question. It is possible that these two approaches could elicit different preferences – if they do, on what grounds should we choose one or the other? This requires a normative judgment, and at present, both appear (potentially) as legitimate as each other.

Whilst the authors apply this direct method to the valuation of health, I don’t see why similar approaches couldn’t be applied to any multi-attribute choice experiment. Keep your eyes out for future uses of it in valuation, and perhaps beyond – it will be interesting to see how it develops.


Alastair Canaway’s journal round-up for 30th July 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Is there an association between early weight status and utility-based health-related quality of life in young children? Quality of Life Research [PubMed] Published 10th July 2018

Childhood obesity is an issue which has risen to prominence in recent years. Concurrently, there has been increased interest in measuring utility values in children for use in economic evaluation. In the obesity context, relatively few studies have examined whether childhood weight status is associated with preference-based utility and, consequently, whether such measures are useful for the economic evaluation of childhood obesity interventions. This study sought to tackle the issue using the proxy version of the Health Utilities Index Mark 3 (HUI-3) and weight status data in 368 children aged five years. Associations between weight status and HUI-3 score were assessed using various regression techniques. No statistically significant associations were found between weight status and preference-based health-related quality of life (HRQL). This adds to several recent studies with similar findings, which imply that young children may not experience any decrements in HRQL associated with weight status, or that the measures we have cannot capture these decrements. For trial-based economic evaluations of childhood obesity interventions, this highlights that we should not rely solely on preference-based instruments.

Time is money: investigating the value of leisure time and unpaid work. Value in Health Published 14th July 2018

Those of us who work on trials almost always attempt to adopt some sort of ‘societal’ perspective incorporating benefits beyond health. When it comes to valuing leisure time and unpaid work, there is a dearth of literature and there are numerous methodological challenges, which has led to a bit of a scatter-gun approach to measuring and valuing (usually by ignoring) this time. The authors of this paper sought to value unpaid work (e.g. household chores and voluntary work) and leisure time (“non-productive” time to be spent on one’s likings, nb. this includes lunch breaks). They did this using online questionnaires that included contingent valuation exercises (WTP and WTA) in a representative sample of adults in the Netherlands. Regression techniques following best practice were used (two-part models with transformed data). Using WTA, they found an additional hour of unpaid work and leisure time was valued at €16, whilst the WTP value was €9.50. These values fall into similar ranges to those used in other studies. There are many issues with stated preference studies, which the authors thoroughly acknowledge and address. These costs, so often omitted from economic evaluation, have the potential to be substantial, and capturing and valuing this time accurately remains an important issue, particularly for researchers working in countries where national guidelines for economic evaluation prefer a societal perspective.

The impact of depression on health-related quality of life and wellbeing: identifying important dimensions and assessing their inclusion in multi-attribute utility instruments. Quality of Life Research [PubMed] Published 13th July 2018

At the start of every trial, we ask “so what measures should we include?” In the UK, the EQ-5D is the default option, though this decision is often not straightforward. Mental health disorders impose a huge burden in terms of both costs (economic and healthcare) and health-related quality of life. How we currently measure the impact of such disorders in economic evaluation often receives scrutiny, and there has been recent interest in broadening the evaluative space beyond health to include wellbeing, both subjective wellbeing (SWB) and capability wellbeing (CWB). This study sought to identify which dimensions of HRQL, SWB and CWB were most affected by depression (the most common mental health disorder) and to examine the sensitivity of existing multi-attribute utility instruments (MAUIs) to these dimensions. The study used data from the “Multi-Instrument Comparison” study, which includes a wide range of measures: depression measures (Depression Anxiety Stress Scale, Kessler Psychological Distress Scale); SWB measures (Personal Wellbeing Index, Satisfaction with Life Scale, Integrated Household Survey); CWB (ICECAP-A); and MAUIs (15D, AQoL-4D, AQoL-8D, EQ-5D-5L, HUI-3, QWB-SA, and SF-6D). To identify the important dimensions, the authors used Glass’s delta effect size (the difference between the mean scores of the healthy and self-reported depression groups, divided by the standard deviation of the healthy group). To investigate the extent to which current MAUIs capture these dimensions, each MAUI was regressed on each dimension of HRQL, CWB and SWB. There were lots of interesting findings. Unsurprisingly, the most affected dimensions were the psychosocial dimensions of HRQL (e.g. the ‘coping’, ‘happiness’, and ‘self-worth’ dimensions of the AQoL-8D). Interestingly, the ICECAP-A proved to be the best measure for distinguishing between healthy individuals and those with depression. The SWB measures, on the other hand, were less affected by depression. Of the MAUIs, the AQoL-8D was the most sensitive, whilst our beloved EQ-5D-5L and SF-6D were the least sensitive to these dimensions. There is a huge amount to unpack within this study, but it raises interesting questions regarding measurement issues and the impact of broadening the evaluative space for decision makers. Finally, it’s worth noting that a new MAUI (ReQoL) for mental health has recently been developed – although further testing is needed, this is something to consider in future.


Method of the month: Shared parameter models

Once a month we discuss a particular research method that may be of interest to people working in health economics. We’ll consider widely used key methodologies, as well as more novel approaches. Our reviews are not designed to be comprehensive but provide an introduction to the method, its underlying principles, some applied examples, and where to find out more. If you’d like to write a post for this series, get in touch. This month’s method is shared parameter models.

Principles

Missing data and data errors are an inevitability rather than a possibility. If data were missing as a result of a random computer error, there would be no problem: no bias would result in estimators calculated from these data. But this is probably not why they’re missing. People drop out of surveys and trials, often because they choose to, because they move away, or, worse, because they die. The trouble with this is that the factors influencing these decisions and events are typically also those that affect the outcomes of interest in our studies, thus leading to bias. Unfortunately, missing data are often improperly dealt with. For example, a study of randomised controlled trials (RCTs) in the big four medical journals found that 95% had some missing data, and around 85% of those did not deal with it in a suitable way. An instructive article in the BMJ illustrated the potentially massive biases that dropout in RCTs can generate. Similar effects should be expected from dropout in panel studies and other analyses. Now, if the data are missing at random – i.e. the probability of missing data or dropout is independent of the data, conditional on observed covariates – then we can base our inferences on just the observed data. But this is often not the case, so what do we do in these circumstances?

Implementation

If we have a full set of data Y, a set of indicators R for whether each observation is missing, and some parameters \theta and \phi, then we can factorise their joint distribution, f(Y,R;\theta,\phi), in three ways:

Selection model

f_{R|Y}(R|Y;\phi)f_Y(Y;\theta)

Perhaps most familiar to econometricians, this factorisation involves the marginal distribution of the full data and the conditional distribution of missingness given the data. The Heckman selection model is an example of this factorisation. For example, one could specify a probit model for dropout and a normally distributed outcome, and then the full likelihood would involve the product of the two.
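To make the factorisation concrete, here is a minimal sketch in Python (my own illustration, not taken from any particular package) of the observed-data log-likelihood for a toy selection model in this spirit: a normally distributed outcome combined with a probit missingness model that depends on the outcome itself. All variable and parameter names are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def selection_loglik(params, y, r, x):
    """Observed-data log-likelihood under the factorisation f(R|Y;phi) f(Y;theta).
    Outcome model: Y ~ N(b0 + b1*x, sigma^2); missingness: Pr(R=1|Y) = Phi(g0 + g1*Y).
    y may be np.nan where r == 0 (missing); all names are illustrative only."""
    b0, b1, sigma, g0, g1 = params
    mu = b0 + b1 * x
    obs = r == 1

    # Observed cases contribute log f(y) + log Pr(R=1 | y)
    ll = norm.logpdf(y[obs], mu[obs], sigma).sum()
    ll += norm.logcdf(g0 + g1 * y[obs]).sum()

    # Missing cases contribute the log of the integral of f(y) * Pr(R=0 | y) over y
    grid = np.linspace(-6, 6, 401)                 # standardised grid of outcome values
    for m in mu[~obs]:
        yg = m + sigma * grid
        integrand = norm.pdf(yg, m, sigma) * norm.cdf(-(g0 + g1 * yg))
        ll += np.log(np.sum(integrand) * (yg[1] - yg[0]))   # simple numerical integration
    return ll
```

Maximising this over the parameters would give joint estimates of the outcome and missingness models; the key point is simply that the full likelihood is the product of f_Y and f_{R|Y}, as in the factorisation above.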

Pattern-mixture model

f_{Y|R}(Y|R;\theta_R)f_R(R;\phi)

This approach specifies a marginal distribution for the missingness or dropout mechanism, with the distribution of the data differing according to the type of missingness or dropout. The data are a mixture of different patterns, i.e. distributions. This type of model is implied when non-response is not considered missing data per se, and we’re interested in inferences within each sub-population. For example, when estimating quality of life at a given age, the quality of life of those who have died is not of interest, but their dying can bias the estimates.
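As a small illustration of this factorisation (a sketch with simulated data and made-up numbers, not from any cited study), one can estimate the outcome distribution separately within each dropout pattern and, where a marginal quantity is wanted, recombine the pattern-specific estimates weighted by the estimated pattern probabilities. In practice the outcome distribution in the dropout pattern is only partially identified and requires additional assumptions; here it is simulated so the decomposition is visible.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Two patterns: completers (r = 1) report better quality of life than dropouts (r = 0)
r = rng.binomial(1, 0.7, n)                       # pattern indicator, draws from f(R)
y = np.where(r == 1,
             rng.normal(0.8, 0.1, n),             # f(Y | R = 1)
             rng.normal(0.5, 0.2, n))             # f(Y | R = 0), not fully observable in reality

# Pattern-mixture decomposition: f(Y, R) = f(Y | R) f(R)
p_pattern = np.array([np.mean(r == 0), np.mean(r == 1)])
mean_by_pattern = np.array([y[r == 0].mean(), y[r == 1].mean()])

print("within-pattern means:", mean_by_pattern.round(3))                    # per sub-population
print("marginal mean:", np.round(np.sum(p_pattern * mean_by_pattern), 3))   # mixture over patterns
```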

Shared parameter model

f_{Y}(Y|\alpha;\theta)f_R(R|\alpha;\phi)

Now, the final way we can model these data posits unobserved variables, \alpha, conditional on which Y and R are independent. These models are most appropriate when the dropout or missingness is attributable to some underlying process changing over time, such as disease progression or household attitudes, or an unobserved variable, such as health status.

At the simplest level, one could consider two separate models with correlated random effects, for example, adding in covariates x and having a linear mixed model and a probit selection model for person i at time t:

Y_{it} = x_{it}'\theta + \alpha_{1,i} + u_{it}

R_{it} = \Phi(x_{it}'\phi + \alpha_{2,i})

(\alpha_{1,i},\alpha_{2,i}) \sim MVN(0,\Sigma) and u_{it} \sim N(0,\sigma^2)

so that the random effects are multivariate normally distributed.
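A quick way to get a feel for this specification is to simulate from it. The sketch below (in Python, with parameter values chosen purely for illustration) generates data from the correlated random-effects model above and shows how the correlation between \alpha_{1,i} and \alpha_{2,i} makes a complete-case analysis misleading.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n, T = 500, 5                       # individuals and time periods
sigma_u = 1.0                       # residual SD of the outcome
Sigma = np.array([[1.0, 0.8],       # covariance of (alpha_1, alpha_2);
                  [0.8, 1.0]])      # the 0.8 correlation creates informative dropout
theta, phi = 0.5, -0.3              # illustrative covariate effects

alpha = rng.multivariate_normal(np.zeros(2), Sigma, size=n)   # (alpha_{1,i}, alpha_{2,i})
x = rng.normal(size=(n, T))                                   # a time-varying covariate

# Outcome model:  Y_it = x_it * theta + alpha_{1,i} + u_it
y = x * theta + alpha[:, [0]] + rng.normal(0, sigma_u, size=(n, T))

# Observation model:  Pr(R_it = 1) = Phi(x_it * phi + alpha_{2,i}), where R_it = 1 means observed
r = rng.binomial(1, norm.cdf(x * phi + alpha[:, [1]]))

# Because alpha_1 and alpha_2 are correlated, the observed responses are not a random subset
print("mean of all outcomes:     ", y.mean().round(3))
print("mean of observed outcomes:", y[r == 1].mean().round(3))   # biased upwards here
```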

A more complex and flexible specification for longitudinal settings would permit the random effects to vary over time, differently between models and individuals:

Y_{i}(t) = x_{i}(t)'\theta + z_{1,i}(t)\alpha_i + u_{i}(t)

R_{i}(t) = G(x_{i}(t)'\phi + z_{2,i}(t)\alpha_i)

\alpha_i \sim h(.) and u_{i}(t) \sim N(0,\sigma^2)

As an example, if time were discrete in this model, then z_{1,i} could be a series of parameters, one for each time period, z_{1,i} = [\lambda_1,\lambda_2,...,\lambda_T], which are often referred to as ‘factor loadings’ in the structural equation modelling literature. We will run up against identifiability problems with these more complex models. For example, if the random effect were normally distributed, i.e. \alpha_i \sim N(0,\sigma^2_\alpha), then we could multiply each factor loading by \rho, and \alpha_i \sim N(0,\sigma^2_\alpha / \rho^2) would give us an equivalent model. So, we have to put restrictions on the parameters. We can set the variance of the random effect to one, i.e. \alpha_i \sim N(0,1), or, without loss of generality, we can fix one of the factor loadings to one, i.e. z_{1,i} = [1,\lambda_2,...,\lambda_T].
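This scale invariance is easy to verify numerically: with normal random effects and residuals, the marginal covariance of an individual's outcomes implied by loadings z and random-effect variance \sigma^2_\alpha is \sigma^2_\alpha zz' + \sigma^2 I, which is unchanged if the loadings are multiplied by \rho and the variance divided by \rho^2. A short check (illustrative values only):

```python
import numpy as np

z = np.array([1.0, 0.8, 0.6, 0.4])        # factor loadings over four periods (illustrative)
var_alpha, var_u, rho = 2.0, 1.0, 3.0

def marginal_cov(z, var_alpha, var_u):
    """Cov(Y_i) implied by Y_it = z_t * alpha_i + u_it with normal alpha_i and u_it."""
    return var_alpha * np.outer(z, z) + var_u * np.eye(len(z))

# Rescaling the loadings by rho while dividing the random-effect variance by rho^2
# implies exactly the same distribution for the data, hence the need for restrictions.
print(np.allclose(marginal_cov(z, var_alpha, var_u),
                  marginal_cov(rho * z, var_alpha / rho**2, var_u)))   # True
```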

The distributional assumptions about the random effects can have potentially large effects on the resulting inferences. It is therefore also possible to model these non-parametrically – e.g. using a mixture distribution. Ultimately, these models are a useful way to deal with data that are missing not at random, such as informative dropout from panel studies.

Software

Estimation can be tricky with these models given the need to integrate out the random effects. For frequentist inference, expectation maximisation (EM) is one way of estimating them, but as far as I’m aware the algorithm would have to be coded for the specific problem in Stata or R. An alternative is to use some kind of quadrature-based method. The Stata package stjm fits shared parameter models for longitudinal and survival data, with similar specifications to those above.
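To show what ‘integrating out the random effects’ involves, here is a minimal Python sketch (my own illustration, not the implementation used by stjm or any other package) of one individual’s marginal likelihood contribution in a shared parameter model with a single shared random effect, using Gauss-Hermite quadrature. All names and parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm

def individual_loglik(y, r, x, theta, phi, lam, sigma_u, sigma_a, n_nodes=30):
    """log of the integral over a of  prod_t f(y_it | a) * Pr(R_it = r_it | a) * f(a) da.
    Outcome model: y_it = x_it*theta + a + u_it; dropout model: Pr(R_it = 1) = Phi(x_it*phi + lam*a);
    shared effect a ~ N(0, sigma_a^2). y may be np.nan where r == 0 (missing)."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)   # probabilists' Gauss-Hermite
    lik = np.zeros(n_nodes)
    for k, a in enumerate(sigma_a * nodes):                        # quadrature points for a
        f_y = norm.pdf(y[r == 1], x[r == 1] * theta + a, sigma_u)  # observed outcomes only
        p_obs = norm.cdf(x * phi + lam * a)
        f_r = np.where(r == 1, p_obs, 1 - p_obs)
        lik[k] = f_y.prod() * f_r.prod()
    return np.log(np.sum(weights * lik) / np.sum(weights))         # weights sum to sqrt(2*pi)
```

Summing these contributions over individuals gives the marginal log-likelihood that EM or direct maximisation would target; packages such as stjm, or the Bayesian tools below, handle this integration (or its sampling-based equivalent) for you.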

Otherwise, Bayesian tools, such as Hamiltonian Monte Carlo, may have more luck dealing with the more complex models. For the simpler correlated random effects specification above, one can use the stan_mvmer function in the rstanarm package. For more complex models, one would need to code the model in something like Stan.

Applications

For a health economics specific discussion of these types of models, one can look to the chapter Latent Factor and Latent Class Models to Accommodate Heterogeneity, Using Structural Equation in the Encyclopedia of Health Economics, although shared parameter models only get a brief mention. However, given that that book is currently on sale for £1,000, it may be beyond the wallet of the average researcher! Some health-related applications may be more helpful. Vonesh et al. (2011) used shared parameter models to look at the effects of diet and blood pressure control on renal disease progression. Wu and others (2011) look at how to model the effects of a ‘concomitant intervention’, which is one applied when a patient’s health status deteriorates and so is confounded with health, using shared parameter models. And, Baghfalaki and colleagues (2017) examine heterogeneous random effect specification for shared parameter models and apply this to HIV data.
