Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.
Simulation as an ethical imperative and epistemic responsibility for the implementation of medical guidelines in health care. Medicine, Health Care and Philosophy [PubMed] Published 6th August 2016
Some people describe RCTs as a ‘gold standard’ for evidence. But if more than one RCT exists, or we have useful data from outside the RCT, that probably isn’t true. Decision modelling has value over and above RCT data, as well as in lieu of it. One crucial thing that cannot – or at least not usually – be captured in an RCT is how well the evidence might be implemented. Medical guidelines will be developed, but there will be a process of adjustments and no doubt errors, all of which might affect the quality of life of patients. Here we stray into the realms of implementation science. This paper argues that health care providers have a responsibility to acquire knowledge about implementation and the learning curve of medical guidelines. To this end, there is an epistemic and ethical imperative to simulate the possible impacts of the implementation learning curve on patients’ health. The authors provide some examples of guideline implementation that might have benefited from simulation. However, it’s very easy in hindsight to identify what went wrong, and none of the examples sets out a realistic scenario for simulation analyses that could have been carried out in advance. It isn’t clear to me how or why we should differentiate – in ethical or epistemic terms – implementation from effectiveness evaluation. It is clear, however, that health economists could engage more with implementation science, and that there is an ethical imperative to do so.
Estimating marginal healthcare costs using genetic variants as instrumental variables: Mendelian randomization in economic evaluation. PharmacoEconomics [PubMed] Published 2nd August 2016
To assert that obesity is associated with greater use of health care resources is uncontroversial. However, to assert that all of the additional cost associated with obesity is because of obesity is a step too far. There are many other determinants of health care costs (and outcomes) that might be independently associated with obesity. One way of dealing with this problem of identifying causality is to use instrumental variables (IVs) in econometric analysis, but appropriate IVs can be tricky to identify. Enter Mendelian randomisation. This is a method that uses genetic variants as IVs: because alleles are allocated essentially at random at conception, they are plausibly unrelated to the confounders that plague observational cost data. This paper describes the basis for Mendelian randomisation and outlines the suitability of genetic traits as IVs. En route, the authors provide a nice accessible summary of the IV approach more generally. The focus throughout the paper is on estimating costs, with obesity used as an example. The article outlines a lot of the potential challenges and pitfalls associated with the approach, such as the use of weak instruments and non-linear exposure–outcome relationships. On the whole, the approach is intuitive and fits easily within existing methodologies. Its main value may lie in the estimation of more accurate parameters for model-based economic evaluation. Of course, we need data – ideally, longitudinal medical records linked to genotypic information for a large number of people. That may seem like wishful thinking, but the UK Biobank project (and others) can fit the bill.
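The logic of the IV approach can be sketched with simulated data. In this toy example (all numbers are invented for illustration, not taken from the paper), a hypothetical genetic variant raises BMI but is unrelated to an unobserved confounder that drives both BMI and costs, so naive OLS is biased while the IV estimate recovers the true marginal cost:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical data-generating process (all figures illustrative):
u = rng.normal(size=n)                  # unobserved confounder (e.g. deprivation)
z = rng.binomial(2, 0.3, size=n)        # genetic variant: allele count 0/1/2
bmi = 25 + 1.0 * z + 2.0 * u + rng.normal(size=n)                # exposure
cost = 500 + 100 * bmi + 300 * u + rng.normal(scale=50, size=n)  # true marginal cost of a BMI unit = 100

# Naive OLS slope: biased upwards, because u raises both BMI and cost
ols = np.polyfit(bmi, cost, 1)[0]

# IV (Wald ratio) estimate: cov(cost, z) / cov(bmi, z),
# equivalent to two-stage least squares with a single instrument
iv = np.cov(cost, z)[0, 1] / np.cov(bmi, z)[0, 1]

print(f"OLS estimate: {ols:.1f}, IV estimate: {iv:.1f} (true effect: 100)")
```

With these made-up parameters the OLS slope lands well above 100 while the Wald ratio sits close to it, which is the whole point of borrowing a randomly allocated instrument.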
Patient and general public preferences for health states: A call to reconsider current guidelines. Social Science & Medicine [PubMed] Published 31st July 2016
One major ongoing debate in health economics is the question of whether public or patient preferences should be used to value health states and thus to estimate QALYs. Here in the UK, NICE recommends public preferences, and I’d hazard a guess that most people agree. But why? After providing some useful theoretical background, this article reviews the arguments made in favour of the use of public preferences. It focuses on three that have been identified in Dutch guidelines. First, that cost-effectiveness analysis should adopt a societal perspective. The Gold Panel invoked a Rawlsian veil of ignorance argument to support the use of decision (ex ante) utility rather than experienced (ex post) utility. The authors highlight that this is limited, as the public are not in fact behind a veil of ignorance. Second, that the use of patient preferences might (wrongfully) ignore adaptation. This is not a complete argument, as there may be elements of adaptation that decision makers wish not to take into account, and public preferences may still underestimate the benefits of treatment due to adaptation. Third, the insurance principle holds that the obligation to be insured is taken on ex ante, and therefore the benefits of insurance (i.e. health care) should also be valued ex ante. The authors set out a useful taxonomy of the arguments, their reasoning and the counterarguments. The key message is that current arguments in favour of public preferences are incomplete. As a way forward, the authors suggest that both patient and public preferences should be used alongside each other and propose that HTA guidelines require this. The paper got my cogs whirring, so expect a follow-up blog post tomorrow.
What, who and when? Incorporating a discrete choice experiment into an economic evaluation. Health Economics Review [PubMed] Published 29th July 2016
This study claims to be the first to carry out a discrete choice experiment with clinical trial participants and to compare willingness-to-pay results with standard QALY-based net benefit estimates; that is, to compare a CBA with a CUA. The trial in question evaluates extending the role of community pharmacists in the management of coronary heart disease. The study focuses on the questions of what, who and when: what factors should be evaluated (i.e. beyond QALYs)? Whose preferences should count (i.e. patients with experience of the service or all participants)? And when should preferences be elicited (i.e. during or after the intervention)? Comparisons are made along these lines. The DCE asked participants to choose between their current situation and two alternative scenarios involving either the new service or the control. The trial found no significant difference in EQ-5D scores, SF-6D scores or costs between the groups, but it did identify a higher level of satisfaction with the intervention. The intervention group (through the DCE) reported a greater willingness to pay for the intervention than the control group, and this appeared to increase with prolonged use of the service. I’m not sure what the take-home message is from this study. The paper doesn’t answer the questions in the title – at least, not in any general sense. Nevertheless, it’s an interesting discussion about how we might carry out cost-benefit analysis using DCEs.
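The CUA–CBA contrast boils down to two different net benefit calculations. A toy comparison, with every figure invented for illustration (a £20,000 per QALY threshold, and small incremental QALYs and costs in the spirit of a trial finding no significant difference):

```python
# Illustrative comparison of CUA and CBA decision rules (all figures hypothetical)
wtp_threshold = 20_000   # cost-effectiveness threshold, £ per QALY
delta_qaly = 0.001       # incremental QALYs per patient
delta_cost = 10.0        # incremental cost per patient, £
dce_wtp = 25.0           # mean stated willingness to pay from the DCE, £

# CUA: incremental net monetary benefit, valuing health gains at the threshold
nmb = wtp_threshold * delta_qaly - delta_cost

# CBA: net benefit using the stated WTP directly
nb_cba = dce_wtp - delta_cost

print(f"CUA net monetary benefit: £{nmb:.2f}")
print(f"CBA net benefit:          £{nb_cba:.2f}")
```

With these invented numbers both rules favour the intervention, but they needn’t agree: a DCE can pick up process attributes (like the satisfaction reported here) that the EQ-5D and SF-6D miss entirely.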
Photo credit: Antony Theobald (CC BY-NC-ND 2.0)