Chris Sampson’s journal round-up for 29th May 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

“Naming and framing”: The impact of labeling on health state values for multiple sclerosis. Medical Decision Making [PubMed] Published 21st May 2017

Tell someone that the health state they’re valuing is actually related to cancer, and they’ll give you a different value than if you hadn’t mentioned cancer. A lower value, probably. There is growing evidence that ‘labelling’ health state descriptions with the name of a particular disease can influence the resulting values. Generally, the evidence is that mentioning the disease will lower values, though that’s probably because researchers have been selecting diseases that they think will show this. (Has anyone tried it for hayfever?) The jury is out on whether labelling is a good thing or a bad thing, so in the meantime, we need evidence for particular diseases to help us understand what’s going on. This study looks at MS. Two UK-representative samples (n = 1576; n = 1641) completed an online TTO valuation task for states defined using the condition-specific preference-based MSIS-8D. Participants were first asked to complete the MSIS-8D to provide their own health state, and then to rank three MSIS-8D states and complete a practice TTO task. For the preference elicitation proper, individuals were presented with a set of 5 MSIS-8D health states. One group was asked to imagine that they had MS and was provided with some information and a link to the NHS Choices website. The authors’ first analysis tests for a difference due to labelling. Their second analysis creates two alternative tariffs for the MSIS-8D based on the two surveys. People in the label group reported lower health state values on average. The size of this labelling-related decrement was greater for less severe health states. The creation of the tariffs showed that labelling does not have a consistent impact across dimensions. This means that, in practice, the two tariffs could favour different types of interventions, depending on the dimensions in which benefits are observed. The tariff derived from the label group demonstrated slightly poorer predictive performance.
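For readers unfamiliar with the mechanics, a conventional TTO value is just the proportion of the time horizon a respondent is willing to trade away, and the labelling effect is then the difference in mean values for the same states between the two survey arms. A minimal sketch (all numbers hypothetical, and ignoring states worse than dead):

```python
def tto_value(years_in_full_health, time_horizon=10.0):
    # Respondent is indifferent between `time_horizon` years in the
    # impaired state and `years_in_full_health` in full health, so the
    # state's utility is the ratio of the two.
    return years_in_full_health / time_horizon

def label_decrement(no_label_values, label_values):
    # Mean difference between unlabelled and labelled valuations
    # of the same set of health states.
    return (sum(no_label_values) / len(no_label_values)
            - sum(label_values) / len(label_values))
```

In the study’s design, the tariff construction then models these values as a function of the MSIS-8D dimension levels, separately for each arm.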
This study tells us that label-or-not is a decision that will influence the relative cost-effectiveness of interventions for MS. But we still need a sound basis for making that choice.

Nudges in a post-truth world. Journal of Medical Ethics [PubMed] Published 19th May 2017

Not everyone likes the idea of nudges. They can be used to get people to behave in ways that are ‘better’… but who decides what is better? Truth, surely, we can all agree, is better. There are strong forces against the truth, whether they be our own cognitive biases, the mainstream media (FAKE NEWS!!!), or Nutella trying to tell us they offer a healthy breakfast option thanks to all that calcium. In this essay, the author outlines a special kind of nudge, which he refers to as a ‘nudge to reason’. The paper starts with a summary of the evidence regarding the failure of people to change their minds in response to evidence, and the backfire effect, whereby false beliefs become even more entrenched in light of conflicting evidence. Memory failures, and the ease with which people can handle the information, are identified as key reasons for perverse responses to evidence. The author then goes on to look at the evidence on the conditions in which people do respond to evidence. In particular, where people get their evidence matters (we still trust academics, right?). The persuasiveness of evidence can also be influenced by the way it is delivered. So why not nudge towards the truth? The author focuses on a key objection to nudges: that they do not protect freedom in a substantive sense because they bypass people’s capacities for deliberation. Nudges take advantage of non-rational features of human nature and fail to treat people as autonomous agents deserving of respect. One of the reasons I’ve never much liked nudges is that they can promote ignorance and reinforce biases. Nudges to reason, on the other hand, influence behaviour indirectly via beliefs: changing behaviour by changing minds by improving responses to genuine evidence. The author argues that nudges to reason do not bypass the deliberative capacities of agents at all, but rather appeal to them, and are thus permissible.
They operate by appealing to mechanisms that are partially constitutive of rationality, and this is itself part of what defines our substantive freedom. We could also extend this to argue that we have a moral responsibility to frame arguments in a way that is truth-conducive, in order to show respect to individuals. I think health economists are in a great position to contribute to these debates. Our subfield exists principally because of uncertainty and asymmetry of information in health care. We’ve been studying these things for years. I’m convinced by the author’s arguments about the permissibility of nudges to reason. But they’d probably make for flaccid public policy. Nudges to reason would surely be dominated by nudges to ignorance. Either people need coercing towards the truth or those nudges to ignorance need to be shut down.

How should hospital reimbursement be refined to support concentration of complex care services? Health Economics [PubMed] Published 19th May 2017

Treating rare and complex conditions in specialist centres may be good for patients. We might expect these patients to be especially expensive to treat compared with people treated in general hospitals. Therefore, unless reimbursement mechanisms are able to account for this, specialist hospitals will be financially disadvantaged and concentration might not be sustainable. Healthcare Resource Groups (HRGs) – the basis for current payments – only work if variation in cost is not related to any differences in the types of patients treated at particular hospitals. This study looks at hospitals that might be at risk of financial disadvantage due to differences in casemix complexity. Individual-level Hospital Episode Statistics for 2013-14 were matched to hospital-level Reference Costs and a set of indicators for the use of specialist services were applied. The data included 12.4 million patients, of whom 766,204 received complex care. The authors construct a random effects model estimating the cost difference associated with complex care, by modelling the impact of a set of complex care markers on individual-level cost estimates. The Gini coefficient is estimated to look at the concentration of complex care across hospitals. Most of the complex care markers were associated with significantly higher costs. 26 of 69 types of complex care were associated with costs more than 10% higher. What’s more, complex care was concentrated among relatively few hospitals, with a mean Gini coefficient of 0.88. Two possible approaches to fixing the payment system are considered: i) recalculation of the HRG price to include a top-up, or ii) a more complex refinement of the allocation of patients to different HRGs. The second option becomes less attractive as more HRGs are subject to this refinement, as we could end up with just one hospital reporting all of the activity for a particular HRG.
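As a rough illustration of the concentration measure, a Gini coefficient over hospitals’ shares of a given complex care service can be computed directly. This is the generic textbook formula, not necessarily the authors’ exact estimator, and the counts below are invented:

```python
def gini(counts):
    # Gini coefficient of the distribution of activity across hospitals:
    # 0 = activity evenly spread, approaching 1 = concentrated in very
    # few providers. `counts` is a list of per-hospital activity counts.
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    # Standard formula: G = 2 * sum_i(i * x_i) / (n * total) - (n + 1) / n,
    # with ranks i = 1..n over the sorted counts.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n
```

With activity spread evenly across hospitals the coefficient is 0; with all activity in one of four hospitals it reaches its maximum of (n − 1)/n = 0.75, which is the kind of pattern behind the paper’s mean of 0.88 across many hospitals.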
Based on the expected impact of these differences – in view of the size of the cost difference and the extent of distribution across different HRGs and hospitals – the authors are able to make recommendations about which HRGs might require refinement. The study also hints at an interesting challenge. Some of the complex care services were associated with lower costs where care was concentrated in very few centres, suggesting that concentration could give rise to cost savings. This could imply that some HRGs may need refining downwards with complexity, which feels a bit counterintuitive. My only criticism of the paper? The references include at least 3 web pages that are no longer there. Please use WebCite, people!

Credits

Chris Sampson’s journal round-up for 22nd May 2017

The effect of health care expenditure on patient outcomes: evidence from English neonatal care. Health Economics [PubMed] Published 12th May 2017

Recently, people have started trying to identify opportunity cost in the NHS by assessing the health gains associated with current spending. Studies have thrown up a wide range of values in different clinical areas, including in neonatal care. This study uses individual-level data for infants treated in 32 neonatal intensive care units from 2009 to 2013, along with the NHS Reference Cost for an intensive care cot day. A model is constructed to assess the impact of changes in expenditure, controlling for a variety of variables available in the National Neonatal Research Database. Two outcomes are considered: the in-hospital mortality rate and morbidity-free survival. The main finding is that a £100 increase in the cost per cot day is associated with a reduction in the mortality rate of 0.36 percentage points. This translates into a marginal cost per infant life saved of around £420,000. Assuming an average life expectancy of 81 years, this equates to a present value cost per life year gained of £15,200. Reductions in the mortality rate are associated with similar increases in morbidity. The estimated cost contradicts a much higher estimate presented in the Claxton et al. modern classic on searching for the threshold.
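The conversion from cost per life saved to present value cost per life year is just an annuity calculation. A sketch, assuming the 3.5% annual discount rate that is standard in NICE appraisals (the paper’s exact rate and rounding may differ slightly):

```python
def cost_per_discounted_life_year(cost_per_life, life_expectancy, rate=0.035):
    # Present value of a stream of one life year per year for
    # `life_expectancy` years, discounted at `rate` (annuity factor),
    # then the cost per life saved spread over those discounted years.
    annuity = (1 - (1 + rate) ** -life_expectancy) / rate
    return cost_per_life / annuity
```

With £420,000 per life saved and 81 years of life expectancy, this gives a figure in the region of £15,000-£16,000 per life year, consistent with the £15,200 reported.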

A comparison of four software programs for implementing decision analytic cost-effectiveness models. PharmacoEconomics [PubMed] Published 9th May 2017

Markov models: TreeAge vs Excel vs R vs MATLAB. This paper compares the alternative programs in terms of transparency and validation, the associated learning curve, capability, processing speed and cost. A benchmarking assessment is conducted using a previously published model (originally developed in TreeAge). Excel is rightly identified as the ‘ubiquitous workhorse’ of cost-effectiveness modelling. It’s transparent in theory, but in practice can include cell relations that are difficult to disentangle. TreeAge, on the other hand, includes valuable features to aid model transparency and validation, though the workings of the software itself are not always clear. Being based on programming languages, MATLAB and R may be entirely transparent but challenging to validate. The authors assert that TreeAge is the easiest to learn due to its graphical nature and the availability of training options. Save for complex VBA, Excel is also simple to learn. R and MATLAB are similarly more difficult to learn, but clearly worth the time saving for anybody expecting to work on multiple complex modelling studies. R and MATLAB both come top in terms of capability, with Excel falling behind due to having fewer statistical facilities. TreeAge has clearly defined capabilities limited to the features that the company chooses to support. MATLAB and R were both able to complete 10,000 simulations in a matter of seconds, while Excel took 15 minutes and TreeAge took over 4 hours. For a value of information analysis requiring 1000 runs, this could translate into 6 months for TreeAge! MATLAB has some advantage over R in processing time that might make its cost ($500 for academics) worthwhile to some. Excel and TreeAge are both identified as particularly useful as educational tools for people getting to grips with the concepts of decision modelling. Though the take-home message for me is that I really need to learn R.
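To get a feel for where the run-time differences come from, here is a toy probabilistic Markov cohort model. It has nothing to do with the published benchmark model (the states, transition probabilities and utilities are all invented); the point is just that in a scripting language the whole probabilistic sensitivity analysis is a tight loop, which is why thousands of runs take seconds rather than hours:

```python
import random

def markov_cohort(p_well_sick, p_sick_dead, cycles=40):
    # Three-state cohort model (well, sick, dead). Each cycle, a share of
    # the well become sick and a share of the sick die; QALYs accrue at
    # utility 1.0 (well) and 0.6 (sick). Returns total QALYs per patient.
    well, sick, dead = 1.0, 0.0, 0.0
    qalys = 0.0
    for _ in range(cycles):
        well, sick, dead = (well * (1 - p_well_sick),
                            well * p_well_sick + sick * (1 - p_sick_dead),
                            dead + sick * p_sick_dead)
        qalys += well * 1.0 + sick * 0.6
    return qalys

def psa(n_sims=10_000, seed=1):
    # Probabilistic sensitivity analysis: draw the transition
    # probabilities from (invented) ranges and rerun the cohort
    # model for each simulation.
    rng = random.Random(seed)
    return [markov_cohort(rng.uniform(0.05, 0.15), rng.uniform(0.1, 0.3))
            for _ in range(n_sims)]
```

In R or MATLAB the inner loop would typically be vectorised as a matrix multiplication, which is the main source of their speed advantage over spreadsheet recalculation.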

Economic evaluation of factorial randomised controlled trials: challenges, methods and recommendations. Statistics in Medicine [PubMed] Published 3rd May 2017

Factorial trials randomise participants to at least 2 alternative levels (for example, different doses) of at least 2 alternative treatments (possibly in combination). Very little has been written about how economic evaluations ought to be conducted alongside such trials. This study starts by outlining some key challenges for economic evaluation in this context. First, there may be interactions between combined therapies, which might exist for costs and QALYs even if not for the primary clinical endpoint. Second, transformation of the data may not be straightforward; for example, it may not be possible to disaggregate a net benefit estimation into its components using alternative transformations. Third, regression analysis of factorial trials may be tricky for the purpose of constructing CEACs and conducting value of information analysis. Finally, defining the study question may not be simple. The authors simulate a 2×2 factorial trial (0 vs A vs B vs A+B) to demonstrate these challenges. The first analysis compares A and B against placebo separately in what’s known as an ‘at-the-margins’ approach. Both A and B are shown to be cost-effective, with the implication that A+B should be provided. The next analysis uses regression, showing that interaction terms are unlikely to be statistically significant for costs or net benefit. ‘Inside-the-table’ analysis is used to separately evaluate the 4 alternative treatments, with an associated loss in statistical power. The findings of this analysis contradict the findings of the at-the-margins analysis. A variety of regression-based analyses is presented, with the discussion focussed on the variability in the estimated standard errors and the implications of this for value of information analysis. The authors then go on to present their conception of the ‘opportunity cost of ignoring interactions’ as a new basis for value of information analysis.
A set of 14 recommendations is provided for people conducting economic evaluations alongside factorial trials, which could be used as a bolt-on to CHEERS and CONSORT guidelines.
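The core of the regression issue can be seen in a toy 2×2 example. With a saturated model, the OLS coefficients on net monetary benefit reduce to cell-mean contrasts, and the interaction term is the ‘extra’ benefit of A+B beyond what the separate effects predict. The willingness-to-pay threshold and all data below are invented:

```python
def net_benefit(qalys, costs, wtp=20_000):
    # Net monetary benefit per participant: lambda * QALYs - costs.
    return [wtp * q - c for q, c in zip(qalys, costs)]

def factorial_effects(nb, a, b):
    # Saturated regression of net benefit on A, B and A*B for a 2x2
    # factorial trial; with a full design the OLS coefficients are
    # simply differences in cell means.
    def mean(xs):
        return sum(xs) / len(xs)
    cell = {(ai, bi): mean([y for y, aj, bj in zip(nb, a, b)
                            if (aj, bj) == (ai, bi)])
            for ai in (0, 1) for bi in (0, 1)}
    intercept = cell[0, 0]
    beta_a = cell[1, 0] - cell[0, 0]            # main effect of A
    beta_b = cell[0, 1] - cell[0, 0]            # main effect of B
    interaction = (cell[1, 1] - cell[1, 0]
                   - cell[0, 1] + cell[0, 0])   # A*B interaction
    return intercept, beta_a, beta_b, interaction
```

An ‘at-the-margins’ analysis implicitly sets the interaction to zero; if the true interaction for costs or QALYs is negative, summing the separate effects overstates the value of providing A+B, which is how the two approaches in the paper come to contradict each other.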

Thesis Thursday: Raymond Oppong

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Raymond Oppong, who graduated with a PhD from the University of Birmingham. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Economic analysis alongside multinational studies
Supervisors
Sue Jowett, Tracy Roberts
Repository link
http://etheses.bham.ac.uk/7288/

What attracted you to studying economic evaluation in the context of multinational studies?

One of the first projects that I was involved in when I started work as a health economist was the Genomics to combat Resistance against Antibiotics in Community-acquired lower respiratory tract infections (LRTI) in Europe (GRACE) project. This was an EU-funded study aimed at integrating and coordinating the activities of physicians and scientists from institutions in 14 European countries to combat antibiotic resistance in community-acquired lower respiratory tract infections.

My first task on this project was to undertake a multinational costing study to estimate the costs of treating acute cough/LRTI in Europe. I faced quite a number of challenges, including the lack of unit cost data across countries. Conducting a full economic evaluation alongside the interventional studies in GRACE also brought up a number of issues with respect to methods of analysis of multinational trials which needed to be resolved. The desire to understand and resolve some of these issues led me to undertake the PhD to investigate the implications of conducting economic evaluations alongside multinational studies.

Your thesis includes some case studies from a large multinational project. What were the main findings of your empirical work?

I used three main case studies for my empirical work. The first was an observational study aimed at describing the current presentation, investigation, treatment and outcomes of community-acquired lower respiratory tract infections and analysing the determinants of antibiotic use in Europe. The other 2 were RCTs. The first was aimed at studying the effectiveness of antibiotic therapy (amoxicillin) in community-acquired lower respiratory tract infections, whilst the second was aimed at assessing training interventions to improve antibiotic prescribing behaviour by general practitioners. The observational study was used to explore issues relating to costing and outcomes in multinational studies whilst the RCTs explored the various analytical approaches (pooled and split) to economic evaluation alongside multinational studies.

The results from the observational study revealed large variations in costs across Europe and showed that contacting researchers in individual countries was the most effective way of obtaining unit costs. Results from both RCTs showed that the choice of whether to pool or split data had an impact on the cost-effectiveness of the interventions.

What were the key analytical methods used in your analysis?

The overall aim of the thesis was to study the implications of conducting economic analysis alongside multinational studies. Specific objectives included: i) documenting challenges associated with economic evaluations alongside multinational studies, ii) exploring various approaches to obtaining and estimating unit costs, iii) exploring the impact of using different tariffs to value EQ-5D health state descriptions, iv) comparing methods that have been used to conduct economic evaluation alongside multinational studies and v) making recommendations to guide the design and conduct of future economic evaluations carried out alongside multinational studies.

A number of approaches were used to achieve each of the objectives. A systematic review of the literature identified challenges associated with economic evaluations alongside multinational studies. A four-stage approach to obtaining unit costs was assessed. The UK, European and country-specific EQ-5D value sets were compared to determine which is the most appropriate to use in the context of multinational studies. Four analytical approaches – fully pooled one country costing, fully pooled multicountry costing, fully split one country costing and fully split multicountry costing – were compared in terms of resource use, costs, health outcomes and cost-effectiveness. Finally, based on the findings of the study, a set of recommendations was developed.
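The pooled-versus-split and one-country-versus-multicountry distinction is easiest to see in a toy costing function. The countries, resource counts and unit costs below are all invented, and this is only a sketch of the general idea, not the thesis’s actual analysis:

```python
def mean_cost(resource_use, unit_costs, costing_country=None):
    # Mean cost per patient. `resource_use` maps each country to its
    # patients' resource-use counts; `unit_costs` maps each country to
    # its price per unit of resource. If `costing_country` is given
    # (one-country costing), every patient's use is valued at that
    # country's price; otherwise each patient's own country's price is
    # used (multicountry costing).
    total, n = 0.0, 0
    for country, uses in resource_use.items():
        price = unit_costs[costing_country or country]
        total += sum(u * price for u in uses)
        n += len(uses)
    return total / n
```

Passing the full resource-use dictionary gives a pooled analysis, while restricting it to one country’s patients gives a split analysis; combined with the `costing_country` switch, this yields the four approaches compared in the thesis, and the choice changes the estimated costs whenever resource use or prices differ across countries.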

You completed your PhD part-time while working as a researcher. Did you find this a help or a hindrance to your studies?

I must say that it was both a help and a hindrance. Working in a research environment was really helpful. There was a lot of support from supervisors and colleagues which kept me motivated. I might not have gotten this support if I had not been working in a research/academic environment. However, even though some time during the week was allocated to the PhD, I had to completely put it on hold for long periods of time in order to deal with the pressures of work/research. Consequently, I always had to struggle to find my bearings when I got back to the PhD. I also spent most weekends working on the PhD, especially when I was nearing submission.

On the whole, it should be noted that a part-time PhD requires strong time management skills. I personally had to go on time management courses, which were really helpful.

What advice would you give to a health economist conducting an economic evaluation alongside a multinational study?

For a health economist conducting an economic evaluation alongside a multinational trial, it is important to plan ahead and understand the challenges that are associated with economic evaluations alongside multinational studies. A lot of the problems, such as those related to the identification of unit costs, can be avoided by ensuring adequate measures are put in place at the design stage of the study. An understanding of the various health systems of the countries involved in the study is important in order to make a judgement about the differences and similarities in resource use across countries. Decision makers are interested in results that can be applied to their jurisdiction; therefore, it is important to adopt transparent methods, e.g. stating the countries that participated in the study, stating the sources of unit costs, and making it clear whether data from all countries (pooling) or from a subset (splitting) were used. To ensure that the results of the study are generalisable to a number of countries, it may be advisable to present country-specific results and perhaps conduct the analysis from different perspectives.