Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.
Estimating health-state utility for economic models in clinical studies: an ISPOR Good Research Practices Task Force report. Value in Health [PubMed] Published 3rd October 2016
When it comes to model-based cost-per-QALY analyses, researchers normally just use utility values from a single clinical study. So we’d best be sure that these studies are collecting the right data. This ISPOR Task Force report presents guidelines for the collection and reporting of utility values in the context of clinical studies, with a view to making them as useful as possible to the modelling process. The recommendations are quite general and would apply to most aspects of clinical studies: do some early planning; make sure the values are relevant to the population being modelled; bear HTA agencies’ expectations in mind. It bothers me, though, that the basis for the recommendations is not very concrete (the word “may” appears more than 100 times). The audience for this report isn’t so much people building models, or people conducting clinical trials. Rather, it’s people who are conducting some modelling within a clinical study (or vice versa). I’m in that position, so why don’t the guidelines strike me as useful? They expect a lot of time to be dedicated to the development of the model structure and aims before the clinical study gets underway, so that modelling work would be conducted alongside the full duration of the clinical study. In my experience, that isn’t how things usually work. And even when it does happen, practical limitations on data collection will prevent most of the recommendations from being satisfied. In short, I think the Task Force’s position puts the cart on top of the horse. Models require data and, yes, models can be used to inform data collection. But seldom can proposed modelling work be the principal basis for determining data collection in a clinical study. I think that may be a good thing, and that a more incremental approach (review – model – collect data – repeat) is more fruitful. Having said all that, and having read the paper, I do think it’s useful. It isn’t useful as the set of recommendations we might expect from an ISPOR Task Force, but rather as a list of things to think about if you’re somebody involved in the collection of health state utility data. If you’re one of those people, it’s well worth a read.
Reliability, validity, and feasibility of direct elicitation of children’s preferences for health states: a systematic review. Medical Decision Making [PubMed] Published 30th September 2016
Set aside for the moment the question of whose preferences we ought to use in valuing health improvements. There are undoubtedly situations in which it would be interesting and useful to know patients’ preferences. What if those patients are children? This study presents the findings from a systematic review of attempts at direct elicitation of preferences from children, focusing on psychometric properties and with the hope of identifying the best approach. To be included in the review, studies needed to report validity, reliability and/or feasibility. Twenty-six studies were included, with most of them using time trade-off (TTO; n=14) or standard gamble (SG; n=11). Seven studies reported validity, and the findings suggested good construct validity for condition-specific but not generic measures. Four studies reported reliability, and TTO came off better than visual analogue scales. Nine studies reported on feasibility in terms of completion rates and generally found it to be high. The authors also extracted information about the use of preference elicitation in different age groups and found that, where studies made such comparisons, direct elicitation may not be appropriate for younger children. Generally speaking, it seems that standard gamble and time trade-off are acceptably valid, reliable and feasible. It’s important to note that there was a lot of potential for bias in the included studies, and that a number of them seemed somewhat lacking in their reporting. And there’s a definite risk of publication and reporting bias lurking here. I think a key issue that the study can’t really enlighten us on is the question of age. There might not be all that much difference between a 17-year-old and a 27-year-old, but there’s going to be a big difference between a 17-year-old and a 7-year-old. Future research needs to investigate the notion of an age threshold for valid preference elicitation. I’d like to see a more thorough quantitative analysis of findings from direct preference elicitation studies in children. But what we really need is a big new study in which children (both patients and the general public) are asked to complete various direct preference elicitation tasks at multiple time points. Because right now, there just isn’t enough evidence.
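As a refresher on the mechanics of the two dominant methods (a standard textbook formulation, not specific to any study in the review): in a conventional TTO task for a state considered better than dead, the respondent identifies the number of years x in full health that they regard as equivalent to t years in health state h, giving

u(h) = x / t

while in a standard gamble the respondent identifies the probability p at which they are indifferent between living in state h for certain and a gamble offering full health with probability p and immediate death with probability 1 − p, giving u(h) = p. The cognitive demands of these tasks – trading off years of life, or reasoning about probabilities of death – are precisely why their feasibility in young children is in question.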
Economic evaluation of integrated new technologies for health and social care: suggestions for policy makers, users and evaluators. Social Science & Medicine [PubMed] Published 24th September 2016
There are many debates that take place at the nexus of health care and social care, whether they be about funding, costs or outcome measurement. This study focusses on a specific example of health and social care integration – assisted living technologies (ALTs) – and tries to come up with a new and more appropriate method of economic evaluation. In this context, outcomes might matter ‘beyond health’. I should like this paper. It tries to propose an approach that might satisfy the suggestions I made in a recent essay. Why, then, am I not convinced? The authors outline their proposal as consisting of three steps: i) identify attributes relevant to the intervention, ii) value these in monetary terms and iii) value the health benefit. In essence, the plan is to estimate QALYs for the health bit and then a monetary valuation for the other bits, with the ‘other bits’ specified in advance of the evaluation. That’s very easily said and not at all easily done. And the paper makes no argument that this is actually what we ought to be doing. Capabilities work their way in as attributes, but little consideration is given to the normative differences between this and other approaches (what I have termed ‘consequents’). The focus on ALTs is odd. The authors fill a lot of space arguing (unconvincingly) that it is a special case, before stating that their approach should be generalisable. The main problem can be summarised by a sentence that appears in the introduction: “the approach is highly flexible because the use of a consistent numeraire (either monetary or health) means that programmes can be compared even if the underlying attributes differ”. Maybe they can, but they shouldn’t. Or at least that’s what a lot of people think, which is precisely why we use QALYs. An ‘anything goes’ approach means that any intervention could easily be demonstrated to be more cost-effective than another if we just pick the right attributes. I’m glad to see researchers trying to tackle these problems, and this could be the start of something important, but I was disappointed that this paper couldn’t offer anything concrete.
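To make that worry concrete, here’s a minimal sketch of the arithmetic the proposed approach implies, with entirely hypothetical attributes and notation (none of this comes from the paper). Under a monetary numeraire, an intervention’s net benefit might be computed as

NMB = λ × ΔQALYs + Σ v_j × Δa_j − ΔC

where λ is the monetary value of a QALY, v_j is the monetary valuation of attribute j (say, ‘independence’ or ‘privacy’), Δa_j is the change in that attribute, and ΔC is the incremental cost. If evaluation A includes attributes that evaluation B omits, their net benefits are computed over different sets of valued consequences, so ranking programmes by NMB rewards whoever specified the more generous attribute list. That is the ‘anything goes’ problem in a nutshell.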