Paul Mitchell’s journal round-up for 2nd January 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Age effects in mortality risk valuation. European Journal of Health Economics [PubMed] [RePEc] Published 7th December 2016

Placing values on statistical life years has important public policy implications for measuring who benefits, and by how much, from interventions. The authors of this study provide what they describe as the most comprehensive evidence to date against a constant value for a statistical life year, an assumption they argue also applies when calculating QALYs. Using a Spanish household survey with a large sample (approximately 6,000 individuals), the authors study the relationship between willingness to pay (WTP) and age by estimating individual WTP for a reduction in the risk of mortality due to acute myocardial infarction. Three different WTP elicitation procedures were performed, and parametric, semi-nonparametric and non-parametric models using marginal and total approaches were applied to examine the relationship. Binary variables for income (proxied by a measure of self-perceived social status), education (above lower secondary level) and gender were also included as controls in the models. The results of the linear model show that WTP falls as age increases. Those with higher income (i.e. social status) and education have higher WTP, while gender is not significant in any model. Sensitivity tests produced results in line with the authors' hypotheses. The non-parametric model produces similar results to the others, albeit with a larger senior discount, and the senior discount is not independent of the income variable. From this, the authors estimate the value of a statistical life year for an 85 year old to be 3.5 times higher than that of a 20 year old. The authors are keen to highlight the strengths of their findings, with a large sample size allowing the robustness of results to be tested across a number of different model types. However, they do flag up the lack of comparability with previous studies that have focused on risk reductions with a lower probability of mortality. The authors' assumption that their findings for life years apply directly to QALYs is somewhat questionable, particularly for non-acute conditions and the QALYs calculated for them. The rationale behind the three types of preference elicitation method, and how and why they were chosen, is not apparent in the paper itself. The social status measure used as a proxy for income is also questionable, and appears to have been used to maximise sample size. If data on actual income had been used, or income had been imputed where missing, it would be interesting to see what impact this might have had on the study findings.
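As an aside, the seemingly odd combination of falling WTP and a higher value per life year at older ages is really just arithmetic: a smaller value of a statistical life divided by far fewer remaining life years can give a larger value per year. Here's a back-of-the-envelope sketch in Python, with entirely invented numbers rather than the paper's estimates:

```python
# Back-of-the-envelope sketch: why a value per life YEAR can be higher at older
# ages even though total WTP for a risk reduction is lower. All numbers are
# invented for illustration; they are not the paper's estimates.

def value_per_life_year(wtp, risk_reduction, remaining_life_years, discount_rate=0.0):
    """VSL implied by WTP for a small mortality risk reduction, spread over
    (optionally discounted) remaining life years."""
    vsl = wtp / risk_reduction
    if discount_rate > 0:
        annuity = sum(1 / (1 + discount_rate) ** t for t in range(1, remaining_life_years + 1))
    else:
        annuity = remaining_life_years
    return vsl / annuity

# The older respondent's total WTP is lower, but far fewer life years remain,
# so the implied per-year value comes out higher.
for age, wtp, life_years in [(20, 600, 60), (85, 150, 6)]:
    vsly = value_per_life_year(wtp, risk_reduction=1e-3, remaining_life_years=life_years)
    print(f"Age {age}: VSL = {wtp / 1e-3:,.0f}, value per life year = {vsly:,.0f}")
```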

Preferences for public involvement in health service decisions: a comparison between best-worst scaling and trio-wise stated preference elicitation techniques. European Journal of Health Economics [PubMed] Published 10th December 2016

Public involvement in health care has become increasingly recognised as important, with decisions made on behalf of a community expected to be informed by public perspectives. How and where that public involvement should feed into decision making is less well understood. In this study, the authors compare two methods: best-worst scaling (BWS) case 2, and a new method the authors call 'trio-wise', in which the choice task is presented as an equilateral triangle. Using 'trio-wise', respondents can click anywhere in the triangle; this, the authors argue, gives additional insight into the strength of a respondent's preferences and also accommodates indifference. Public preferences were sought using these two methods to understand which aspects of public involvement are most important. Eight general characteristics were included in the exercises. Respondents completed either the BWS or the trio-wise task (not both) using web-based surveys, with approximately 1,700 individuals sampled per arm. Only three of the eight general characteristics could be presented in any one task, a constraint imposed by the trio-wise triangle format. There was some evidence of position bias in both exercises. The authors report that weak preferences were observed using the trio-wise approach, but this could be due to the difficulty participants faced in choosing which general characteristic was more important without further information. Impact and focus of public involvement were found to be the most important characteristics under both BWS and trio-wise. The authors find that preference intensity has no bearing on choice probabilities, but this could be an artefact of the weak preferences observed in the sample. Although I can see the appeal of the trio-wise approach when there are only three characteristics, BWS is advantageous in tasks with more characteristics. Indeed, it feels as though the findings from this experiment were impeded by the use of the trio-wise approach, when much more useful information for guiding future public involvement practice could have been gathered using either BWS or a discrete choice experiment (DCE) across all eight characteristics and the options for public involvement within each characteristic.
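For the curious, here is one plausible way that a click inside the triangle could be translated into three preference weights, using barycentric coordinates. This is my own illustration of the geometry, not the authors' scoring rule:

```python
# One plausible way to turn a click inside an equilateral triangle into three
# preference weights: barycentric coordinates of the click relative to the three
# vertices. My own illustration of the geometry, not the authors' method.
import numpy as np

# Vertices of an equilateral triangle, each representing one characteristic
A, B, C = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])

def barycentric_weights(click, a=A, b=B, c=C):
    """Weights on the three vertices that sum to 1: a click on a vertex gives
    full weight to that characteristic, the centroid gives 1/3 each."""
    T = np.column_stack([a - c, b - c])
    w1, w2 = np.linalg.solve(T, click - c)
    return np.array([w1, w2, 1 - w1 - w2])

print(barycentric_weights(np.array([0.5, np.sqrt(3) / 6])))  # centroid -> [1/3, 1/3, 1/3]
print(barycentric_weights(A))                                # vertex A -> [1, 0, 0]
```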

How do individuals value health states? A qualitative investigation. Social Science & Medicine [PubMed] Published 22nd November 2016

The health state valuation tasks used to generate QALYs have previously been found to be complex for members of the general public, who typically have little experience of the health states being valued. This qualitative study seeks a better understanding of how the general public complete such tasks. Using a purposive sample, 21 individuals were asked to complete eight DCE and three time trade-off (TTO) tasks, based on the EQ-5D-5L valuation protocol. Participants completed the valuation tasks using a think-aloud approach, followed by semi-structured interviews. Three main themes emerged from the framework analysis undertaken on the interview transcripts. Firstly, individuals had to interpret a health state, using their imagination and experience to help visualise a realistic health state with those problems. Knowledge, understanding of the descriptive system, additional information for a health state, re-writing of health states and problems with the EQ-5D labels all affected this process. The second theme was called conversion factors, which the authors took to mean, in this context, the personal and social factors that affected how participants valued health states. Personal interests, values and circumstances were said to have an effect on the interpretation of a health state. The final theme was based on the consequences of health states, which tended to focus on non-health effects caused by health problems, such as activities, enjoyment, independence, relationships, dignity and avoiding being a burden. The authors subsequently developed a three-stage explanatory account of how people value health states based on the interview findings. Although I would have some concerns about the generalisability of these findings to general public valuation studies, given the highly educated sample, the study does highlight some issues about what health economists might implicitly think individuals are doing when completing such tasks compared with what they are actually doing. There are clearly problems for individuals valuing such hypothetical health states, with the authors suggesting a more reflective and deliberative approach to overcome them. The authors also raise an interesting question as to whether participants actually weigh up the consequences of health states and follow compensatory decision-making, or instead use simplifying heuristics based on a single attribute, which I agree is an area that requires further investigation.

Credits


Chris Sampson’s journal round-up for 19th December 2016

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Discounting the recommendations of the Second Panel on Cost-Effectiveness in Health and Medicine. PharmacoEconomics [PubMed] Published 9th December 2016

I do enjoy a bit of academic controversy. In this paper, renowned troublemakers Paulden, O'Mahony and McCabe do what they do best. Their target is the approach to discounting recommended in the report from the new Panel on Cost-Effectiveness, which I briefly covered in a recent round-up. The paper starts by setting out what, exactly, the Panel recommends. The real concerns lie with the approach recommended for analyses from the societal perspective. According to the authors, the problems start when the Panel conflates the marginal utility of income with that of consumption, and confusingly labels it with our old friend the lambda. The confusion continues with the use of other imprecise terminology. And then there are some aspects of the Panel's calculations that just seem to be plain old errors, leading to illogical conclusions – for example, that future consumption should be discounted more heavily if associated with higher marginal utility. Eh? The core criticism is that the Panel recommends the same discount rate for both costs and the consumption value of health, and that this contradicts recent developments in the discounting literature. The Panel fails to clearly explain the basis for its recommendation. Helpfully, the authors outline an alternative (correct?) approach. The 3% rate for costs and health effects that the Panel recommends is not justified. The criticisms made in this paper are technical ones. That doesn't make them any less important, but all we can see is that use of the Panel's recommended decision rule poses some vague threat to utility maximisation. Whether or not the conflation of consumption and utility value would actually result in bad decisions is not clear. Nevertheless, considering that the Second Panel will presumably enjoy the massive influence of the original Gold Panel, extreme scrutiny is needed. I hope Basu and Ganiats see fit to respond. I also wonder whether Paulden, O'Mahony and McCabe might have other chapters in their crosshairs.
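To see why the choice of rates matters, here's a minimal worked example in Python, with purely illustrative numbers that are neither the Panel's nor the authors', showing how discounting health effects at a lower rate than costs changes an ICER:

```python
# Minimal worked example: how the discount rates applied to costs and to the
# consumption value of health change an ICER. Numbers are purely illustrative.

def present_value(annual_amount, rate, years):
    """Present value of a constant annual stream over a given horizon."""
    return sum(annual_amount / (1 + rate) ** t for t in range(1, years + 1))

annual_cost, annual_qalys, horizon = 10_000, 0.5, 20

# Common 3% rate for both costs and health effects
icer_common = present_value(annual_cost, 0.03, horizon) / present_value(annual_qalys, 0.03, horizon)

# Differential discounting: health effects at a lower rate than costs
icer_differential = present_value(annual_cost, 0.03, horizon) / present_value(annual_qalys, 0.015, horizon)

print(f"ICER with a common 3% rate:          ${icer_common:,.0f} per QALY")
print(f"ICER with 3% costs and 1.5% effects: ${icer_differential:,.0f} per QALY")
```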

Is best–worst scaling suitable for health state valuation? A comparison with discrete choice experiments. Health Economics [PubMed] Published 4th December 2016

BWS is gaining favour as a means of valuing health states. In this paper, team DCE throw down the gauntlet to team BWS. The study uses data collected during the development of a 'glaucoma utility index', in which both DCE and BWS exercises were completed. The first question is: do DCE and BWS give the same results? The answer is no. The models indicate relatively weak correlation. For most dimensions, BWS gave values for different severity levels that were closer together than in the DCE. This means that large improvements in health might be associated with smaller utility gains using BWS values than using DCE values. BWS is also identified as being more prone to decision biases. The second question is: which technique is best 'to develop health utility indices' (as the authors put it)? We need to bear in mind that this may in part be moot. Proponents of BWS have often claimed that they are not even trying to measure utility, so to judge BWS on this basis may not be appropriate. Anyway, set aside for now that your own definition of utility might be (and that the authors' almost certainly is) at odds with the BWS approach. No surprise that the authors suggest DCE is superior. The bases on which this judgement is made are stability, monotonicity, continuity and completeness. All of these relate to whether respondents make the kinds of responses we might expect. BWS answers are found to be less stable, more likely to be non-continuous, and less likely to satisfy monotonicity. Personally, I don't see these as objective indicators of goodness or of a technique's ability to identify 'true' preferences. Also, I don't know anything about how the glaucoma measure was developed, but if the health states it defines aren't very informative then the results of this study won't be either. Nevertheless, the findings do indicate to me that health state valuation using BWS might be subject to more caveats that need investigating before we start to make greater use of the technique. The much larger body of research behind DCE counts in its favour. Over to you, team BWS.
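To make one of those criteria concrete, here's a minimal sketch of what a monotonicity check might look like: within each dimension, the estimated value of a level shouldn't improve as severity worsens. The dimension names and coefficients are invented, and this is my reading of the criterion rather than the authors' exact test:

```python
# Sketch of a monotonicity check: within each dimension of the descriptive system,
# the estimated value of a level should not improve as severity worsens.
# Dimension names and coefficients are invented for illustration.
estimated_values = {
    "dimension A": [0.00, -0.05, -0.12, -0.20],   # best level -> worst level
    "dimension B": [0.00, -0.04, -0.15, -0.11],   # worst level breaks the ordering
}

for dimension, values in estimated_values.items():
    monotonic = all(later <= earlier for earlier, later in zip(values, values[1:]))
    print(f"{dimension}: {'monotonic' if monotonic else 'non-monotonic'}")
```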

Preference weighting of health state values: what difference does it make, and why? Value in Health Published 23rd November 2016

When non-economists ask about the way we measure health outcomes, the crux of it all is that the EQ-5D et al are preference-based. We think – or at least have accepted – that preferences must be really very serious and important. Equal weighting of dimensions? Nothing but meaningless nonsense! That may well be true in theory, but what if our approach to preference elicitation is actually providing us with much the same results as equal weighting would? Much research energy (and some money) goes into the preference weighting project, but could it be a waste of time? I had hoped that this paper might answer that question, but while it's a useful study I didn't find it quite so enlightening. The authors look at the EQ-5D-5L and the 15D, and compare the usual preference-based index for each with one constructed using equal weighting, rescaled to the 0-1 dead-full health scale. The rescaling takes into account the difference in scale length between the 15D (0 to 1, i.e. 1.000) and the EQ-5D-5L (-0.281 to 1, i.e. 1.281). Data are from the Multi-Instrument Comparison (MIC) study, which includes healthy people as well as subsamples with a range of chronic diseases. The authors look at the correlations between the preference-based and equal-weighted index values. They find very high correlation, especially for the 15D, and agreement for the EQ-5D-5L increases when adjusted for scale length. Furthermore, the results are investigated for known-group validity alongside a depression-specific outcome measure, on which the EQ-5D performs a little better. But the study doesn't really tell me what I want to know: would the use of equal weighting normally give us the same results, and in what cases might it not? The MIC study includes a whole range of generic and condition-specific measures, and I can't see why the study didn't look at all of them. It could also have used alternative preference weights to see how they differ, and it could have looked at all of the different disease-based subgroups in the sample to try to determine under what circumstances preference weighting might approach equal weighting. I hope to see more research on this issue, not to undermine preference weighting but to inform its improvement.
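For anyone wanting to try something similar, here's a rough sketch of the comparison as I understand it: build an equal-weighted index, rescale it onto the same scale length as the preference-based index, and check the correlation. The decrement matrix and simulated responses are placeholders, not the MIC data or any published value set:

```python
# Rough sketch of the comparison: an equal-weighted index rescaled onto the same
# dead (0) to full health (1) scale as the preference-based index, then correlated
# with it. The decrement matrix and simulated responses are placeholders, not the
# MIC data or any published value set.
import numpy as np

rng = np.random.default_rng(0)

FLOOR = -0.281                       # value of the worst EQ-5D-5L state (scale length 1.281)
STEP = (1 - FLOOR) / (5 * 4)         # equal-weighted decrement per level step (5 dims, 4 steps each)

# Illustrative decrements: rows = dimensions, columns = levels 1-5,
# non-linear in levels and summing to roughly 1 - FLOOR for the worst state.
decrements = np.array([
    [0.00, 0.04, 0.07, 0.17, 0.27],
    [0.00, 0.03, 0.06, 0.15, 0.24],
    [0.00, 0.04, 0.08, 0.18, 0.28],
    [0.00, 0.03, 0.07, 0.16, 0.25],
    [0.00, 0.03, 0.06, 0.15, 0.24],
])

profiles = rng.integers(1, 6, size=(1000, 5))          # 1,000 simulated 5-dimension responses

preference_based = 1 - decrements[np.arange(5), profiles - 1].sum(axis=1)
equal_weighted = 1 - STEP * (profiles - 1).sum(axis=1)

print("Pearson correlation:", round(np.corrcoef(preference_based, equal_weighted)[0, 1], 3))
```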

Credits

Chris Sampson’s journal round-up for 14th November 2016

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Weighing clinical evidence using patient preferences: an application of probabilistic multi-criteria decision analysis. PharmacoEconomics [PubMed] Published 10th November 2016

There are at least two ways in which preferences determine the allocation of health care resources (in a country with an HTA agency, at least). One of them we think about a lot: the (societal) valuation of health states as defined by a multi-attribute measure (like the EQ-5D). The other relates to patient preferences, which determine whether or not a specific individual (and their physician) will choose to use a particular technology, given its expected clinical outcomes for that individual. A drug may very well make sense at the aggregate level but be a very bad choice for a particular individual when compared with the alternatives. It's right that this process should be deliberative and not solely driven by an algorithm, but it's also important to maintain transparent and consistent decision making. Multi-criteria decision analysis (MCDA) has been proposed as a means of achieving this, and it can be used to take into account the uncertainty associated with clinical outcomes. In this study the authors present an approach that also incorporates random preference variation, along with parameter uncertainty in both preferences and clinical evidence. The model defines a value function and estimates the impact of uncertainty using Monte Carlo simulation, which in turn estimates the mean value of each possible treatment in the population. Treatments can therefore be ranked according to patients' preferences, along with an estimate of the uncertainty associated with this ranking. To demonstrate the utility of the model, it is applied to an example on the relative value of HAARTs for HIV, with parameters derived from clinical evaluations and stated preference studies. It's nice to see that the authors also provide their R script. One headline finding seems to be that this approach is likely to demonstrate just how much uncertainty is involved that might not previously have been given much attention. It could therefore help steer us towards more valuable research in the future. And it could be used to demonstrate that optimal decisions might change when all sources of uncertainty are considered. Clearly a potential application of this method is in the realm of personalised medicine, which is slowly but inevitably reaching beyond the confines of pharmacogenomics.
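For a flavour of the general approach (not the authors' actual model, which comes with its own R script), here's a minimal probabilistic MCDA sketch in Python: sample preference weights and clinical performance from assumed distributions, apply a linear value function, and summarise how often each treatment ranks first:

```python
# Minimal probabilistic MCDA sketch: sample preference weights and clinical
# performance from assumed distributions, compute a linear additive value for
# each treatment, and summarise rank probabilities. All distributions, names
# and numbers are illustrative, not the authors' model.
import numpy as np

rng = np.random.default_rng(42)
n_sims = 10_000

treatments = ["Regimen A", "Regimen B", "Regimen C"]

# Uncertain clinical performance on a 0-1 scale for three criteria
# (e.g. efficacy, side effects, dosing convenience): means plus a standard error.
performance_mean = np.array([
    [0.80, 0.60, 0.50],   # Regimen A
    [0.70, 0.75, 0.70],   # Regimen B
    [0.65, 0.80, 0.90],   # Regimen C
])
performance_se = 0.05

# Preference weights drawn from a Dirichlet distribution, capturing both
# uncertainty and heterogeneity; concentration parameters are illustrative.
weight_alpha = np.array([6.0, 3.0, 1.0])

ranked_first = np.zeros(len(treatments))
for _ in range(n_sims):
    weights = rng.dirichlet(weight_alpha)
    performance = rng.normal(performance_mean, performance_se)
    values = performance @ weights          # linear additive value function
    ranked_first[np.argmax(values)] += 1

for name, p in zip(treatments, ranked_first / n_sims):
    print(f"P({name} ranked first) = {p:.2f}")
```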

Communal sharing and the provision of low-volume high-cost health services: results of a survey. PharmacoEconomics – Open Published 4th November 2016

One of the distributional concerns we might have about the QALY-maximisation approach is its implications for people with rare diseases. Drugs for rare diseases are often expensive (because the cost per patient is likely to be higher) and therefore less cost-effective. There is mixed evidence about whether or not people exhibit a preference for redistributive allocation of QALY-creating resources according to rarity. Of course, the result you get from such studies depends on the question you ask. In order to ask the right question, it's important to understand the mechanisms by which people might prefer the allocation of additional resources to services for rare diseases. One suggestion in the literature is the preservation of hope. This study presents another, based on the number of people sharing the cost. Imagine a population of 1,000 people, all of whom share the cost of health care. For a rarer disease, more people share the cost of treatment per person treated: if 10 people have the disease, that's 100 payers per recipient; if 100 people have the disease, it's just 10 payers per recipient. The idea is that people prefer a situation in which more people share the cost, and on that basis prefer to allocate resources to rare diseases. A web-based survey was conducted in Australia in which 702 people were asked to divide a budget between a small patient group with a high-cost illness and a large patient group with a low-cost illness. There was also a set of questions in which respondents indicated the importance of six possible influences on their decisions. The findings show that people did choose to allocate more funds to the rarer disease, despite the reduced overall health gain. This suggests that people do have a preference for wider cost sharing, which could explain extra weight being given to rare diseases. I think it's a good idea that deserves more research, but for me there are a few problems with the study. Much of the effect could be explained by people's non-linear valuations of risk, as the scenario highlighted that the respondents themselves would be at risk of the disease. We also can't clearly differentiate between an effect due to the rarity of the disease (and the associated cost sharing) and an effect due to the severity of the disease.
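The arithmetic of the idea fits in a few lines; the population size and treatment cost below are mine, not the study's:

```python
# The cost-sharing arithmetic in a few lines. Population size and treatment cost
# are illustrative, not the study's figures.
population = 1_000
cost_per_patient = 50_000

for n_patients in (10, 100):
    payers_per_recipient = population / n_patients
    contribution_per_payer = n_patients * cost_per_patient / population
    print(f"{n_patients:>3} patients: {payers_per_recipient:>5.0f} payers per recipient, "
          f"{contribution_per_payer:,.0f} per payer")
```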

The challenge of conditional reimbursement: stopping reimbursement can be more difficult than not starting in the first place! Value in Health Published 3rd November 2016

If anything's going to make me read a paper, it's an exclamation mark! Conditional reimbursement of technologies that are probably effective but probably not cost-effective can be conducted in a rational way in order to generate research findings and benefit social welfare in the long run. But that can only hold true if those technologies subsequently found (through more research) to be ineffective or too costly are then made unavailable. Otherwise, conditional reimbursement agreements will do more harm than good. This study uses discrete choice experiments to compare the values of the public (n=1169) and potential policymakers (n=90) regarding the removal of an available treatment compared with non-reimbursement of a new treatment. The results showed (in addition to some other common findings) that both the public and policymakers preferred continued reimbursement of an existing treatment over reimbursement of a new treatment, and were willing to accept an ICER more than €7,000 higher for an existing treatment. Though the DCE found it to be a significant determinant, 60% of policymakers reported that they thought reimbursement status was unimportant, so there may be some cognitive dissonance going on there. The most obvious (and probably most likely) explanation for the observed preference for currently reimbursed treatments is loss aversion. But it could also be that people recognise real costs associated with ending reimbursement that are not reflected in either the QALY estimates or the costs to the health system. Whatever the explanation, HTA agencies need to bear this in mind when using conditional reimbursement agreements.

Head-to-head comparison of health-state values derived by a probabilistic choice model and scores on a visual analogue scale. The European Journal of Health Economics [PubMed] Published 2nd November 2016

I've always had a fondness for a good old VAS as a direct measure of health state (dare we say utility) values, despite the limitations of the approach. This study compares discrete choices over EQ-5D-5L states with VAS valuations – thus comparing indirect and direct health state valuation – in Canada, the USA, England and the Netherlands (n=1775). Each respondent made a forced choice between two EQ-5D-5L health states and then assessed both states on a single VAS, with ten different pairs completed by each respondent. The two approaches correlated strongly within and across countries, as we might expect, and pairs of EQ-5D-5L states that were valued relatively low or high in the discrete choice model were also valued accordingly on the VAS. But the relationship between the two approaches was non-linear, with values differing more at the ends of the scale: poor health states were valued more differently in the choice model and good health states more differently on the VAS. This probably just reflects some of the well-documented biases in the use of VAS, particularly context bias and end-of-scale aversion. This study clearly suggests (though does not by itself prove) that discrete choice models are a better choice for health state valuation… but the VAS ain't dead yet.

Credits