Chris Sampson’s journal round-up for 29th August 2016

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Health or happiness? A note on trading off health and happiness in rationing decisions. Value in Health Published 23rd August 2016

Health problems can impact both health and happiness. It seems obvious that individuals would attribute value to the happiness provided by a health technology over and above any health improvement. But what about ‘public’ views? Would people be willing to allocate resources to health care for other people on the basis of the happiness it provides? This study reports on a web-based survey in which 1015 people were asked to make resource allocation choices about groups of patients standing to gain varying degrees of health and/or happiness. Three scenarios were presented – one varying only happiness levels, one varying only health and another varying both. Unfortunately the third scenario was not analysed due to “the many inconsistent choices”. About half of respondents were not willing to make any trade-offs between happiness and health. Those who did make choices attached more weight to health on average. But there were some effects associated with the starting levels of health and happiness – people were less willing to discriminate between groups when starting health (or happiness) was lower, and more weight was given to health. There is a selection of potential biases associated with the responses to the questions, which the authors duly discuss.

Determinants of change in the cost-effectiveness threshold. Medical Decision Making [PubMed] Published 23rd August 2016

Set aside for the moment any theoretical concerns you might have with the ‘threshold’ approach to decision making in health care resource allocation. If we are going to use a willingness to pay threshold, how might it alter over time and in response to particular stimuli? This paper tackles that question using comparative statics and the idea of the ‘cost-effectiveness bookshelf’. If you haven’t come across it before, simply imagine a bookshelf with a book for each technology. The height of the books is determined by the ICER and their width by the budget impact; they’re lined up from shortest to tallest. This paper focuses on the introduction of technologies with ‘marginal’ budget impact, requiring the displacement of one existing technology. But a key idea to remember is that for technologies with large ‘non-marginal’ budget impacts – that is, requiring displacement of more than one existing technology – the threshold will be a weighted average of those technologies that are displaced. The authors describe the impact of changes in 4 different determinants of the threshold: i) the health budget, ii) demand for existing technologies, iii) technical efficiency of existing technologies and iv) funding for new technologies. Some changes (e.g. an increase in the health budget) have unambiguous impacts on the threshold (e.g. to increase it). Others have ambiguous effects – for example a decrease in the cost of a marginal technology might decrease the threshold through reduction of the ICER, or increase the threshold by reducing the budget impact so much that an additional technology could be funded. There’s a nice discussion towards the end about relaxing the assumptions. What if the budget isn’t fixed? What if we aren’t sure we’ve got the books in the right order? The bookshelf analogy is a starting point for these kinds of discussions. 
The article is an easy read and a good reference point for the threshold debate, even if its practical usefulness may be limited when lining up the NHS’s books seems like a pipedream.
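The weighted-average idea is simple enough to sketch in code. Below is a minimal illustration in Python of the threshold implied by displacing more than one technology; all of the ICERs and budget impacts are invented for illustration and do not come from the paper.

```python
# Hypothetical technologies displaced by a new 'non-marginal' technology.
# ICERs in £ per QALY, budget impact in £ millions; the tallest (least
# cost-effective) books on the shelf are displaced first.
displaced = [
    {"icer": 30_000, "budget": 5.0},
    {"icer": 25_000, "budget": 3.0},
    {"icer": 20_000, "budget": 2.0},
]

def displacement_threshold(displaced):
    """Budget-weighted average ICER of the displaced technologies."""
    total_budget = sum(t["budget"] for t in displaced)
    return sum(t["icer"] * t["budget"] for t in displaced) / total_budget

threshold = displacement_threshold(displaced)  # budget-weighted mean = £26,500
```

For a marginal new technology only the first entry matters, and the threshold collapses to the ICER of the single displaced technology.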

Update to the report of nationally representative values for the noninstitutionalized US adult population for five health-related quality-of-life scores. Value in Health Published 21st August 2016

This paper does what it says on the tin, but it is a useful reference and worth knowing about. The last lot were published in 2006, so this paper is an update to that one using data from 2011. The measures reported are: i) self-rated health, ii) SF-12 mental subscale and (iii) physical subscale, iv) SF-6D and v) Quality of Well-Being Scale. Data come from the Medical Expenditures Panel Survey and the National Health Interview Survey, with 23,906 subjects in the former and 32,242 in the latter. Results are presented by age group (in decades) and by sex. So, for example, we can see that 20-29 year old women reported an average SF-6D index score of 0.809 while for 80-89 year olds the mean was 0.698. For almost all age groups and all measures, men reported higher scores than women. Interestingly, mean SF-6D scores were on average lower than in the 2001 data reported in the previous study.

Use of cost-effectiveness analysis to compare the efficiency of study identification methods in systematic reviews. Systematic Reviews [PubMed] Published 17th August 2016

Health economists have (or at least should have) a bit of a comparative advantage when it comes to economic evaluation. I’ve often thought that we should be leading the way in methods of economic evaluation in economics beyond the subject matter of health, and maybe into other fields. So I was pleased to see this paper using cost-effectiveness analysis for a new purpose. Often, systematic reviews can be mammoth tasks and potentially end up being of little value. Certainly at the margin there are things often done as part of a review (let’s say, including EconLit in a principally clinical review) that in the end prove to be pretty pointless. This study evaluates the cost-effectiveness of 4 alternative approaches to screening titles and abstracts as part of a systematic review. The 4 alternatives are i) ‘double screening’, which is the classic approach used by Cochrane et al, whereby two researchers independently review abstracts and then meet to consider disagreements, ii) ‘safety first’, which is a variation on double screening whereby citations are only excludable if both reviewers identify them as such, iii) ‘single screening’ with just one reviewer and iv) ‘single screening with text mining’, in which a machine learning process ranks studies by the likelihood of their inclusion. The outcome measure was the number of citations saved from inappropriate exclusion. It’s a big review, starting with 12,477 citations. There wasn’t much in it outcomes-wise, with at most 169 eligible studies and at least 161. But the incremental cost of double screening, compared with single screening plus text mining, was £37,279. This meant an ICER of £4660 per extra study, which seems like a lot. There are some limitations to the study, and the results clearly aren’t generalisable to all reviews. But it’s easy to see how studies-within-studies like this can help guide future research.
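For what it’s worth, the headline ICER can be reproduced directly from the figures quoted above:

```python
# Back-of-envelope check of the quoted ICER: double screening versus
# single screening with text mining (figures as reported in the round-up).
incremental_cost = 37_279      # additional cost of double screening (£)
extra_studies = 169 - 161      # additional eligible studies correctly retained
icer = incremental_cost / extra_studies
# ≈ £4,660 per additional study saved from inappropriate exclusion
```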

Photo credit: Antony Theobald (CC BY-NC-ND 2.0)

Public or patient preferences: ex ante, ex post… extraneous?

As alluded to in yesterday’s journal round-up, on reading a recent article by Versteegh and Brouwer, I have had some thoughts about the way we think about the debate over the use of patient or public preferences for health state valuation.

When it comes to valuing health states, NICE (and some of their counterparts) advise the use of preferences from the general public. An alternative argument is that we might use patient preferences, because the public probably do not have an accurate understanding of what it’s like to live in a particular health state. In their new paper, Versteegh and Brouwer outline the key arguments in favour of using public preferences but highlight the limited nature of these arguments. One thing they discuss is the notion that public preferences are ex ante while patient preferences are ex post. It’s analogous to preferences vs satisfaction, or decision utility vs experienced utility. The authors outline some limitations to this interpretation. In this blog post I’d like to build on this discussion. My main focus is on defining what we actually mean when we talk about ‘patient preferences’.

Before and after what?

Ex ante means ‘before the event’, and ex post after it. But when we are valuing health states there is no event before or after which utility can be estimated. We are trying to value a state, not preferences regarding an event. We may contrive an event – such as the onset of a particular health state – but that is theoretically quite a different thing to value. Indeed, this contrivance of an event taking place may be a problem.

We should probably do away with these terms and just speak in English, but let’s be realistic. At the very least, we need to be clear about what ex ante and ex post mean in this context; the ‘event’ in question is experience of the given health state.

But then, health state valuation isn’t about just one health state – it’s only possible to value health states in relation to one another and in particular ‘full health’ and a state equivalent to being dead. Furthermore, there is little doubt that a person’s valuation of past or future health states relates to their current health state. Chances are that any individual completing a health state valuation will be valuing some states from an ex ante position and some from an ex post position, both of which are influenced by their current health status.

Ultimately, whether the preferences being elicited are ex ante or ex post has nothing to do with who is being asked, and everything to do with what they are being asked about.

Anti-patient?

But that isn’t the crux of the matter anyway. What we really want to do here is differentiate between ‘patient preferences’ and ‘public preferences’. ‘The public’ is easy to define. It’s everyone. We usually try to get a representative sample because we cannot ask everyone to do a TTO exercise. But we need to be clearer about how we define patients. Patients are not ex ante – that we can agree on. Or can we? What if we ask an individual about an inevitable future health state associated with disease progression, of which they have a good understanding? What’s worse, patients might also not be ex post, depending on our definition of these terms.

It seems far more intuitive and accurate to describe patients as ex tempore: essentially meaning ‘at the time’. Patients’ health state preferences are neither retrospective nor prospective, but explicitly in relation to their current health state. Crucially, it is that current health state that we are trying to value.

So, a person valuing their own health state is doing so ex tempore, and that’s usually what we mean by ‘patient preferences’. But I hope it’s clear by this point that an individual patient’s preferences need not necessarily be ex tempore either.

People who have never experienced a given health state are necessarily stating their preferences ex ante, whether or not they are a patient. Meanwhile, somebody who does have experience of a health state could be valuing it from any of the alternative temporal positions. They may, for example, be valuing a future in a health state that they have previously experienced. Versteegh and Brouwer provide a nice taxonomy of the arguments for the use of public preferences. I’d like to provide my own taxonomy here, of the different types of preferences we might elicit. I see it as follows:

                   Experience of health state      No experience of health state
                ex ante   ex tempore   ex post     ex ante   ex tempore   ex post
Patient            A1         A2          A3          B          –           –
Non-patient        C1         C2          C3          D          –           –

There are 4 types of responder (A, B, C and D), determined by whether they are a patient and whether they have previously experienced the health state currently being valued. Similarly, there are 3 different types of health state valuation, depending on whether the state being valued is a past, present or future state. For any given person valuing any given health state, the elicited preferences will be one of the labelled boxes. Ask that same person to value a different health state, or ask a different person to value the same health state, and the elicited preference may well differ.

There may of course be other ways in which individuals differ, such as the extent to which they have adapted to their current health state. But while that’s an important consideration in determining from whom we ought to elicit preferences, I don’t think it’s a key question in identifying patient preferences as opposed to public preferences.

Patient vs patients

One implication of this is that we have (at least) two types of patient preferences. Patient preferences could be A+B. That is, we value a particular health state in all patients, regardless of their current health state. That might be done in a sample representative of the current population of people considered to be a patient, however that might be defined. It strikes me that this is the true definition of patienthood as might be used in other contexts.

The kind of patient we talk about when we discuss ‘patient preferences’ is, I think, just those people falling into ‘A2’: patients valuing their own current health state.

Versteegh and Brouwer seem to suggest that any valuation of current health – that is, ex tempore – represents patient preferences. In practice this will likely be ‘A2’ through the identification of participants, but it’s important to consider the existence of ‘C2’. Just because a person is experiencing the health state of interest does not necessarily make them a patient in any practical sense of the word.

For what it’s worth, I think that public preferences are the least bad option for now. But Versteegh and Brouwer’s suggestion that we should report both is a good one, which could lead to more research that may very well change my mind. I think it will also force this issue of clearer definition of ‘patient preferences’.

Photo credit: Tori Cat (CC BY-NC-ND 2.0)

Chris Sampson’s journal round-up for 22nd August 2016


Simulation as an ethical imperative and epistemic responsibility for the implementation of medical guidelines in health care. Medicine, Health Care and Philosophy [PubMed] Published 6th August 2016

Some people describe RCTs as a ‘gold standard’ for evidence. But if more than one RCT exists, or we have useful data from outside the RCT, that probably isn’t true. Decision modelling has value over and above RCT data, as well as in lieu of it. One crucial thing that cannot – or at least not usually – be captured in an RCT is how well the evidence might be implemented. Medical guidelines will be developed, but there will be a process of adjustments and no doubt errors; all of which might impact on the quality of life of patients. Here we stray into the realms of implementation science. This paper argues that health care providers have a responsibility to acquire knowledge about implementation and the learning curve of medical guidelines. To this end, there is an epistemic and ethical imperative to simulate the possible impacts on patients’ health of the implementation learning curve. The authors provide some examples of guideline implementation that might have benefited from simulation. However, it’s very easy in hindsight to identify what went wrong and none of the examples set out realistic scenarios for simulation analyses that could have been carried out in advance. It isn’t clear to me how or why we should differentiate – in ethical or epistemic terms – implementation from effectiveness evaluation. It is clear, however, that health economists could engage more with implementation science, and that there is an ethical imperative to do so.

Estimating marginal healthcare costs using genetic variants as instrumental variables: Mendelian randomization in economic evaluation. PharmacoEconomics [PubMed] Published 2nd August 2016

To assert that obesity is associated with greater use of health care resources is uncontroversial. However, to assert that all of the additional cost associated with obesity is because of obesity is a step too far. There are many other determinants of health care costs (and outcomes) that might be independently associated with obesity. One way of dealing with this problem of identifying causality is to use instrumental variables in econometric analysis, but appropriate IVs can be tricky to identify. Enter, Mendelian randomisation. This is a method that can be used to adopt genetic variants as IVs. This paper describes the basis for Mendelian randomisation and outlines the suitability of genetic traits as IVs. En route, the authors provide a nice accessible summary of the IV approach more generally. The focus throughout the paper is upon estimating costs, with obesity used as an example. The article outlines a lot of the potential challenges and pitfalls associated with the approach, such as the use of weak instruments and non-linear exposure-outcome relationships. On the whole, the approach is intuitive and fits easily within existing methodologies. Its main value may lie in the estimation of more accurate parameters for model-based economic evaluation. Of course, we need data. Ideally, longitudinal medical records linked to genotypic information for a large number of people. That may seem like wishful thinking, but the UK Biobank project (and others) can fit the bill.
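To illustrate the logic of the approach (and not the authors’ actual analysis), here is a minimal two-stage least squares sketch on simulated data, using a genetic variant as the instrument for BMI. Every parameter and variable in the data-generating process is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulated data-generating process (all parameters hypothetical):
# G is a genetic variant (allele count 0/1/2), U an unobserved confounder
# (e.g. health behaviours) that raises both BMI and health care costs.
G = rng.binomial(2, 0.3, n).astype(float)
U = rng.normal(0, 1, n)
bmi = 25 + 0.8 * G + 1.5 * U + rng.normal(0, 2, n)
cost = 100 + 50 * bmi + 200 * U + rng.normal(0, 100, n)  # true BMI effect: 50

def slope(y, x):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

beta_ols = slope(cost, bmi)  # biased upwards, because U is omitted

# Two-stage least squares: the first stage predicts BMI from the instrument;
# the second stage regresses cost on that predicted, exogenous variation.
Z = np.column_stack([np.ones_like(G), G])
bmi_hat = Z @ np.linalg.lstsq(Z, bmi, rcond=None)[0]
beta_iv = slope(cost, bmi_hat)  # consistent for the causal effect (~50)
```

The naive OLS slope absorbs the confounder’s effect on costs, while the IV estimate recovers something close to the true causal parameter – provided, as the paper stresses, that the instrument is strong and affects costs only through the exposure.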

Patient and general public preferences for health states: A call to reconsider current guidelines. Social Science & Medicine [PubMed] Published 31st July 2016

One major ongoing debate in health economics is the question of whether public or patient preferences should be used to value health states and thus to estimate QALYs. Here in the UK NICE recommends public preferences, and I’d hazard a guess that most people agree. But why? After providing some useful theoretical background, this article reviews the arguments made in favour of the use of public preferences. It focuses on three that have been identified in Dutch guidelines. First, that cost-effectiveness analysis should adopt a societal perspective. The Gold Panel invoked a Rawlsian veil of ignorance argument to support the use of decision utility (ex ante) rather than experienced utility (ex post). The authors highlight that this is limited, as the public are not behind a veil of ignorance. Second, that the use of patient preferences might (wrongfully) ignore adaptation. This is not a complete argument as there may be elements of adaptation that decision makers wish not to take into account, and public preferences may still underestimate the benefits of treatment due to adaptation. Third, the insurance principle highlights that the obligation to be insured is made ex ante and therefore the benefits of insurance (i.e. health care) should also be valued as such. The authors set out a useful taxonomy of the arguments, their reasoning and the counter arguments. The key message is that current arguments in favour of public preferences are incomplete. As a way forward, the authors suggest that both patient and public preferences should be used alongside each other and propose that HTA guidelines require this. The paper got my cogs whirring, so expect a follow-up blog post tomorrow.

What, who and when? Incorporating a discrete choice experiment into an economic evaluation. Health Economics Review [PubMed] Published 29th July 2016

This study claims to be the first to carry out a discrete choice experiment on clinical trial participants, and to compare willingness to pay results with standard QALY-based net benefit estimates; thus comparing a CBA and a CUA. The trial in question evaluates extending the role of community pharmacists in the management of coronary heart disease. The study focusses on the questions of what, who and when: what factors should be evaluated (i.e. beyond QALYs)? whose preferences (i.e. patients with experience of the service or all participants)? and when should preferences be evaluated (i.e. during or after the intervention)? Comparisons are made along these lines. The DCE asked participants to choose between their current situation and two alternative scenarios involving either the new service or the control. The trial found no significant difference in EQ-5D scores, SF-6D scores or costs between the groups, but it did identify a higher level of satisfaction with the intervention. The intervention group (through the DCE) reported a greater willingness to pay for the intervention than the control group, and this appeared to increase with prolonged use of the service. I’m not sure what the take-home message is from this study. The paper doesn’t answer the questions in the title – at least, not in any general sense. Nevertheless, it’s an interesting discussion about how we might carry out cost-benefit analysis using DCEs.

Photo credit: Antony Theobald (CC BY-NC-ND 2.0)