# Thesis Thursday: Koonal Shah

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Koonal Shah who has a PhD from the University of Sheffield. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

**Title:** Valuing health at the end of life
**Supervisors:** Aki Tsuchiya, Allan Wailoo
**Thesis:** http://etheses.whiterose.ac.uk/17579

What were the key questions you wanted to answer with your research?

My key research question was: Do members of the general public wish to place greater weight on a unit of health gain for end of life patients than on that for other types of patients? Or put more concisely: Is there evidence of public support for an end of life premium?

The research question was motivated by a policy introduced by NICE in 2009 [PDF], which effectively gives special weighting to health gains generated by life-extending end of life treatments. This represents an explicit departure from the Institute’s reference case position that all equal-sized health gains are of equal social value (the ‘a QALY is a QALY’ rule). NICE’s policy was justified in part by claims that it represented the preferences of society, but little evidence was available to either support or refute that premise. It was this gap in the evidence that inspired my research question.

I also sought to answer other questions, such as whether the focus on life extensions (rather than quality of life improvements) in NICE’s policy is consistent with public preferences, and whether people’s stated end of life-related preferences depend on the ways in which the preference elicitation tasks are designed, framed and presented.

Which methodologies did you use to elicit people’s preferences?

All four of my empirical studies used hypothetical choice exercises to elicit preferences from samples of the UK general public. NICE’s policy was used as the framework for the designs in each case. Three of the studies can be described as having used simple choice tasks, while one study specifically applied the discrete choice experiment methodology. The general approach was to ask survey respondents which of two hypothetical patients they thought should be treated, assuming that the health service had only enough funds to treat one of them.

In my final study, which focused on framing effects and study design considerations, I included attitudinal questions with Likert item responses alongside the hypothetical choice tasks. The rationale for including these questions was to examine the consistency of respondents’ views across two different approaches (spoiler: most people are not very consistent).

Your study included face-to-face interviews. Did these provide you with information that you weren’t able to obtain from a more general survey?

The surveys in my first two empirical studies were both administered via face-to-face interviews. In the first study, I conducted the interviews myself, while in the second study the interviews were subcontracted to a market research agency. I also conducted a small number of face-to-face interviews when pilot testing early versions of the surveys for my third and fourth studies. The piloting process was useful as it provided me with first-hand information about which aspects of the surveys did and did not work well when administered in practice. It also gave me a sense of how appropriate my questions were. The subject matter – prioritising between patients described as having terminal illnesses and poor prognoses – had the potential to be distressing for some people. My view was that I shouldn’t be including questions that I did not feel comfortable asking strangers in an interview setting.

The use of face-to-face interviews was particularly valuable in my first study as it allowed me to ask debrief questions designed to probe respondents and elicit qualitative information about the thinking behind their responses.

What factors influence people’s preferences for allocating health care resources at the end of life?

My research suggests that people’s preferences regarding the value of end of life treatments can depend on whether the treatment is life-extending or quality of life-improving. This is noteworthy because NICE’s end of life criteria accommodate life extensions but not quality of life improvements.

I also found that the amount of time that end of life patients have to ‘prepare for death’ was a consideration for a number of respondents. Some of my results suggest that observed preferences for prioritising the treatment of end of life patients may be driven by concern about how long the patients have known their prognosis rather than by concern about how long they have left to live, per se.

The wider literature suggests that the age of the end of life patients (which may act as a proxy for their role in their household or in society) may also matter. Some studies have reported evidence that respondents become less concerned about the number of remaining life years when the patients in question are relatively old. This is consistent with the ‘fair innings’ argument proposed by Alan Williams.

Given the findings of your study, are there any circumstances under which you would support an end of life premium?

My findings offer limited support for an end of life premium (though it should be noted that the wider literature is more equivocal). So it might be considered appropriate for NICE to abandon its end of life policy on the grounds that the population health losses that arise due to the policy are not justified by the evidence on societal preferences. However, there may be arguments for retaining some form of end of life weighting irrespective of societal preferences. For example, if the standard QALY approach systematically underestimates the benefits of end of life treatments, it may be appropriate to correct for this (though whether this is actually the case would itself need investigating).

Many studies reporting that people wish to prioritise the treatment of the severely ill have described severity in terms of quality of life rather than life expectancy. And some of my results suggest that support for an end of life premium would be stronger if it applied to quality of life-improving treatments. This suggests that weighting QALYs in accordance with continuous variables capturing quality of life as well as life expectancy may be more consistent with public preferences than the current practice of applying binary cut-offs based only on life expectancy information, and would address some of the criticisms of the arbitrariness of NICE’s policy.
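The contrast drawn above can be made concrete with a small sketch. This is a hypothetical illustration, not taken from the thesis or from NICE's actual methods: the cut-off, the 1.7 premium, and the declining-weight function are all assumed values, chosen only to show how a binary cut-off creates a discontinuity that a continuous weighting scheme avoids.

```python
# Hypothetical illustration: a NICE-style all-or-nothing end-of-life premium
# based on a life-expectancy cut-off, versus a continuous weight that declines
# smoothly as life expectancy grows. All numbers here are assumptions.

def binary_weight(life_expectancy_months, cutoff=24, premium=1.7):
    """All-or-nothing premium below a life-expectancy cut-off."""
    return premium if life_expectancy_months < cutoff else 1.0

def continuous_weight(life_expectancy_months, max_premium=1.7, scale=24):
    """Premium that declines linearly to 1.0 as life expectancy approaches
    the scale, with no discontinuity at any single cut-off."""
    extra = (max_premium - 1.0) * max(0.0, 1 - life_expectancy_months / scale)
    return 1.0 + extra

# A patient with 23 months to live gets the full binary premium; one with
# 25 months gets none, despite the near-identical prognosis.
for months in (6, 23, 25):
    print(months, binary_weight(months), round(continuous_weight(months), 2))
```

The discontinuity at the cut-off (23 vs 25 months) is exactly the arbitrariness criticised in the text; the continuous version treats similar prognoses similarly.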

# Sam Watson’s journal round-up for 12th June 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Machine learning: an applied econometric approach. Journal of Economic Perspectives [RePEc] Published Spring 2017

Machine learning tools have become ubiquitous in the software we use on a day to day basis. Facebook can identify faces in photos; Google can tell you the traffic for your journey; Netflix can recommend movies based on what you’ve watched before. Machine learning algorithms provide a way to estimate an unknown function $f$ that predicts an outcome $Y$ given some data $x$: $Y = f(x) + \epsilon$. The potential application of these algorithms to many econometric problems is clear. This article outlines the principles of machine learning methods. It divides econometric problems into prediction, $\hat{y}$, and parameter estimation, $\hat{\beta}$, and suggests machine learning is a useful tool for the former. However, I believe this distinction is a false one. Parameters are typically estimated because they represent an average treatment effect, say $E(y|x=1) - E(y|x=0)$. But we can estimate these quantities in ‘$\hat{y}$ problems’ since $f(x) = E(y|x)$. Machine learning algorithms therefore represent a non-parametric (or very highly parametric) approach to the estimation of treatment effects. In cases where the functional form is unknown, where there may be nonlinearities in the response function, or where there are interactions between variables, this approach can be very useful. Of course, these algorithms are not a panacea for estimation problems, since interpretation still rests on the underlying assumptions. For example, as Jennifer Hill discusses, additive regression tree methods can be used to estimate conditional average treatment effects if we can assume the treatment is ignorable conditional on the covariates. This article, while providing a good summary of methods, doesn’t quite identify the right niche where these approaches might be useful in econometrics.
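The point that a flexible estimate of $f(x) = E(y|x)$ can be turned into a treatment-effect estimate can be sketched in a few lines. This is a toy example on simulated data using a deliberately crude nonparametric estimator (stratified cell means, standing in for a tree- or kernel-based learner); the data-generating process and all numbers are assumptions for illustration only.

```python
import random

random.seed(0)

# Simulated data: binary treatment t, covariate x, and outcome
# y = 2*t + 3*x^2 + noise, so the true average treatment effect is 2.0.
n = 5000
data = []
for _ in range(n):
    x = random.random()
    t = 1 if random.random() < 0.5 else 0
    y = 2.0 * t + 3.0 * (x ** 2) + random.gauss(0, 0.1)
    data.append((t, x, y))

def fit(data, bins=10):
    """Crude nonparametric estimate of f(t, x) = E[y | t, x]: average y
    within strata defined by t and a coarse binning of x."""
    sums, counts = {}, {}
    for t, x, y in data:
        key = (t, min(int(x * bins), bins - 1))
        sums[key] = sums.get(key, 0.0) + y
        counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}

f_hat = fit(data)

# Average treatment effect recovered from the fitted f:
# average over x-strata of f(1, x) - f(0, x).
effects = [f_hat[(1, b)] - f_hat[(0, b)] for b in range(10)]
ate = sum(effects) / len(effects)
print(round(ate, 1))  # close to the true effect of 2.0
```

Replacing the binned-means `fit` with a regression forest or boosted trees gives the same recipe at scale, which is why the $\hat{y}$ / $\hat{\beta}$ split is less clean than the article suggests.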

Incorporating equity in economic evaluations: a multi-attribute equity state approach. European Journal of Health Economics [PubMed] Published 1st June 2017

Efficiency is a key goal for the health service. Economic evaluation provides evidence to support investment decisions, indicating whether displacing resources from one technology to another would produce greater health benefits. Equity is generally not formally considered except through the final investment decision-making process, which may lead to different decisions by different commissioning groups. One approach to incorporating equity considerations into economic evaluation is the weighting of benefits, such as QALYs, by group. For example, a number of studies have estimated that the benefits of end-of-life treatments have a greater social valuation than those of other treatments. One way of incorporating this into economic evaluation is to raise the cost-effectiveness threshold by an appropriate amount for end-of-life treatments. However, multiple attributes may be relevant for equity considerations, which rules out such a simple approach. This paper proposes a multi-attribute equity state approach to incorporating equity concerns formally in economic evaluation. The basic premise of this approach is, first, to define a set of morally relevant attributes; second, to derive a weighting scheme for each combination of attributes (similarly to how QALY weights are derived from the EQ-5D questionnaire); and third, to apply these weights in economic evaluation. A key aspect of the last step is to weight both the QALYs gained by a population from a new technology and those displaced from another. Indeed, identifying where resources are displaced from is perhaps the biggest limitation of this approach. This displacement problem has also come up in other discussions revolving around the estimation of the cost-effectiveness threshold. This seems to be an important area for future research.
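The three steps above can be sketched as a small calculation. The equity states and weights below are assumed values for illustration, not the paper's estimates; the point is the mechanics of weighting both gained and displaced QALYs.

```python
# Hypothetical equity weights per combination of morally relevant attributes
# (assumed values, not from the paper).
weights = {
    ("end_of_life", "severe"): 1.5,
    ("end_of_life", "moderate"): 1.2,
    ("other", "severe"): 1.1,
    ("other", "moderate"): 1.0,
}

def equity_weighted_net_qalys(gains, displaced):
    """gains / displaced: lists of (equity_state, qalys) tuples.
    Weights apply symmetrically to QALYs gained and QALYs displaced."""
    total = lambda items: sum(weights[state] * q for state, q in items)
    return total(gains) - total(displaced)

# A new end-of-life technology gains 10 QALYs in severely ill patients but
# displaces 11 QALYs from 'other, moderate' patients elsewhere in the system.
net = equity_weighted_net_qalys(
    gains=[(("end_of_life", "severe"), 10.0)],
    displaced=[(("other", "moderate"), 11.0)],
)
print(net)  # 1.5*10 - 1.0*11 = 4.0: positive despite an unweighted QALY loss
```

The example also shows why identifying *where* the displaced QALYs come from matters so much: swap the displaced group's equity state and the sign of the decision can flip.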

Financial incentives, hospital care, and health outcomes: evidence from fair pricing laws. American Economic Journal: Economic Policy [RePEc] Published May 2017

There is a not-insubstantial literature on the response of health care providers to financial incentives. Generally, providers behave as expected, which can often lead to adverse outcomes, such as overtreatment in cases where there is potential for revenue to be made. But empirical studies of this behaviour often rely upon the comparison of conditions with different incentive schedules; rarely is there the opportunity to study the effects of relative shifts in incentive within the same condition. This paper studies the effects of fair pricing laws in the US, which limited the amount uninsured patients would have to pay hospitals, thus providing the opportunity to study patients with the same conditions but who represent different levels of revenue for the hospital. The introduction of fair pricing laws was associated with a reduction in total billing costs and length of stay for uninsured patients, but little association was seen with changes in quality. A similar effect was not seen in the insured, suggesting that the price ceiling introduced by the fair pricing laws led to an increase in efficiency.


# Chris Sampson’s journal round-up for 8th May 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Verification of decision-analytic models for health economic evaluations: an overview. PharmacoEconomics [PubMed] Published 29th April 2017

Increasingly, it’s expected that model-based economic evaluations can be validated and shown to be fit-for-purpose. However, up to now, discussions have focussed on scientific questions about conceptualisation and external validity, rather than technical questions, such as whether the model is programmed correctly and behaves as expected. This paper looks at how things are done in the software industry with a view to creating guidance for health economists. Given that Microsoft Excel remains one of the most popular software packages for modelling, there is a discussion of spreadsheet errors. These might be errors in logic, simple copy-paste type mistakes and errors of omission. A variety of tactics is discussed. In particular, the authors describe unit testing, whereby individual parts of the code are demonstrated to be correct. Unit testing frameworks do not exist for application to spreadsheets, so the authors recommend the creation of a ‘Tests’ spreadsheet with tests for parameter assignments, functions, equations and exploratory items. Independent review by another modeller is also recommended. Six recommendations are given for taking model verification forward: i) the use of open source models, ii) standardisation in model storage and communication (anyone for a registry?), iii) style guides for script, iv) agency and journal mandates, v) training and vi) creation of an ISPOR/SMDM task force. This is a worthwhile read for any modeller, with some neat tactics that you can build into your workflow.
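The unit-testing tactic the authors describe translates directly to models built in code rather than spreadsheets. Below is a minimal sketch (an assumed example, not from the paper): a common decision-model helper that converts an annual transition probability to a shorter cycle length, plus the kind of assertions a 'Tests' sheet would hold.

```python
# A minimal sketch of unit testing applied to decision-model code: a helper
# converting an annual transition probability to a per-cycle probability,
# with tests pinning down known values and edge cases.

def per_cycle_probability(annual_p, cycle_length_years):
    """Convert an annual transition probability to a shorter cycle length
    under the constant-rate assumption: p_c = 1 - (1 - p_a)^c."""
    return 1 - (1 - annual_p) ** cycle_length_years

# The 'Tests spreadsheet' equivalent: each assertion documents expected
# behaviour and fails loudly if a later edit breaks it.
assert per_cycle_probability(0.0, 0.25) == 0.0           # no risk stays no risk
assert per_cycle_probability(1.0, 0.25) == 1.0           # certainty is preserved
assert abs(per_cycle_probability(0.19, 1.0) - 0.19) < 1e-12  # identity at 1 year
assert 0 < per_cycle_probability(0.19, 0.25) < 0.19      # shorter cycle, lower p
print("all model unit tests passed")
```

The same pattern covers parameter assignments and equations: one small function per model component, one block of assertions per function, run on every change.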

How robust are value judgments of health inequality aversion? Testing for framing and cognitive effects. Medical Decision Making [PubMed] Published 25th April 2017

Evidence shows that people are often extremely averse to health inequality. Sometimes these super-egalitarian responses imply such extreme preferences that monotonicity is violated. The starting point for this study is the idea that these findings are probably influenced by framing effects and cognitive biases, and that they may therefore not constitute a reliable basis for policy making. The authors investigate 4 hypotheses that might indicate the presence of bias: i) realistic small health inequality reductions vs larger ones, ii) population- vs individual-level descriptions, iii) concrete vs abstract intervention scenarios and iv) online vs face-to-face administration. Two samples were recruited: one with a face-to-face discussion (n=52) and the other online (n=83). The questionnaire introduced respondents to health inequality in England before asking 4 questions in the form of a choice experiment, with 20 paired choices. Responses are grouped according to non-egalitarianism, prioritarianism and strict egalitarianism. The main research question is whether or not the alternative strategies resulted in fewer strict egalitarian responses. Not much of an effect was found with regard to large gains or to population-level descriptions. There was evidence that the abstract scenarios resulted in a greater proportion of people giving strong egalitarian responses. And the face-to-face sample did seem to exhibit some social desirability bias, with more egalitarian responses. But the main take-home message from this study for me is that it is not easy to explain away people’s extreme aversion to health inequality, which is heartening. Yet, as with all choice experiments, we see that the mode of administration – and cognitive effects induced by the question – can be very important.

Adaptation to health states: sick yet better off? Health Economics [PubMed] Published 20th April 2017