Chris Sampson’s journal round-up for 22nd May 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The effect of health care expenditure on patient outcomes: evidence from English neonatal care. Health Economics [PubMed] Published 12th May 2017

Recently, people have started trying to identify the opportunity cost of spending in the NHS, by assessing the health gains associated with current expenditure. Studies have thrown up a wide range of values in different clinical areas, including neonatal care. This study uses individual-level data for infants treated in 32 neonatal intensive care units from 2009 to 2013, along with the NHS Reference Cost for an intensive care cot day. A model is constructed to assess the impact of changes in expenditure, controlling for a variety of variables available in the National Neonatal Research Database. Two outcomes are considered: the in-hospital mortality rate and morbidity-free survival. The main finding is that a £100 increase in the cost per cot day is associated with a reduction in the mortality rate of 0.36 percentage points. This translates into a marginal cost per infant life saved of around £420,000. Assuming an average life expectancy of 81 years, this equates to a present value cost per life year gained of £15,200. Reductions in the mortality rate are associated with similar increases in morbidity. The estimated cost contradicts the much higher estimate presented in the Claxton et al modern classic on searching for the threshold.
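As a back-of-the-envelope check, the step from cost per life saved to present value cost per life year can be sketched as below. This assumes the standard 3.5% annual discount rate used in NICE appraisals; the paper’s exact discounting assumptions may differ.

```python
# Convert a marginal cost per life saved into a cost per discounted life year.
# Assumption: 3.5% annual discount rate (standard in NICE appraisals); the
# paper's exact discounting approach may differ slightly.
cost_per_life_saved = 420_000  # marginal cost per infant life saved (GBP)
life_expectancy = 81           # assumed average life expectancy (years)
discount_rate = 0.035

# Annuity factor: present value of one life year received in each future year.
annuity = sum(1 / (1 + discount_rate) ** t
              for t in range(1, life_expectancy + 1))

cost_per_life_year = cost_per_life_saved / annuity
```

This lands in the same region as the £15,200 reported, with the small gap presumably down to differences in the discounting assumptions.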

A comparison of four software programs for implementing decision analytic cost-effectiveness models. PharmacoEconomics [PubMed] Published 9th May 2017

Markov models: TreeAge vs Excel vs R vs MATLAB. This paper compares the alternative programs in terms of transparency and validation, the associated learning curve, capability, processing speed and cost. A benchmarking assessment is conducted using a previously published model (originally developed in TreeAge). Excel is rightly identified as the ‘ubiquitous workhorse’ of cost-effectiveness modelling. It’s transparent in theory, but in practice can include cell relations that are difficult to disentangle. TreeAge, on the other hand, includes valuable features to aid model transparency and validation, though the workings of the software itself are not always clear. As programming languages, MATLAB and R may be entirely transparent but challenging to validate. The authors assert that TreeAge is the easiest to learn due to its graphical nature and the availability of training options. Save for complex VBA, Excel is also simple to learn. R and MATLAB are similarly more difficult to learn, but clearly worth the investment for anybody expecting to work on multiple complex modelling studies. R and MATLAB both come top in terms of capability, with Excel falling behind due to having fewer statistical facilities. TreeAge’s capabilities are clearly defined but limited to the features that the company chooses to support. MATLAB and R were both able to complete 10,000 simulations in a matter of seconds, while Excel took 15 minutes and TreeAge took over 4 hours. For a value of information analysis requiring 1,000 runs, this could translate into 6 months for TreeAge! MATLAB has some advantage over R in processing time that might make its cost ($500 for academics) worthwhile to some. Excel and TreeAge are both identified as particularly useful educational tools for people getting to grips with the concepts of decision modelling. Though the take-home message for me is that I really need to learn R.
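For a flavour of why script-based implementations are so fast, here is a minimal three-state Markov cohort model. It is a hypothetical illustration (not the model benchmarked in the paper), written in Python, though the same matrix-based logic carries over directly to R and MATLAB.

```python
import numpy as np

# Hypothetical 3-state Markov cohort model: Well, Sick, Dead.
# Transition probabilities per annual cycle (illustrative values only).
P = np.array([
    [0.85, 0.10, 0.05],  # from Well
    [0.00, 0.80, 0.20],  # from Sick
    [0.00, 0.00, 1.00],  # Dead is absorbing
])

cohort = np.array([1.0, 0.0, 0.0])  # whole cohort starts Well
life_years = 0.0
for cycle in range(40):
    life_years += cohort[:2].sum()  # credit a life year to those alive
    cohort = cohort @ P             # advance the cohort one cycle

print(round(life_years, 2))
```

A probabilistic sensitivity analysis simply wraps this loop in another one that redraws `P` each run, and vectorised matrix products like `cohort @ P` are exactly what lets R, MATLAB and NumPy churn through 10,000 such runs in seconds.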

Economic evaluation of factorial randomised controlled trials: challenges, methods and recommendations. Statistics in Medicine [PubMed] Published 3rd May 2017

Factorial trials randomise participants to at least 2 alternative levels (for example, different doses) of at least 2 alternative treatments (possibly in combination). Very little has been written about how economic evaluations ought to be conducted alongside such trials. This study starts by outlining some key challenges for economic evaluation in this context. First, there may be interactions between combined therapies, which might exist for costs and QALYs even if not for the primary clinical endpoint. Second, transformation of the data may not be straightforward; for example, it may not be possible to disaggregate a net benefit estimate into its component parts under alternative transformations. Third, regression analysis of factorial trials may be tricky for the purpose of constructing CEACs and conducting value of information analysis. Finally, defining the study question may not be simple. The authors simulate a 2×2 factorial trial (0 vs A vs B vs A+B) to demonstrate these challenges. The first analysis compares A and B against placebo separately in what’s known as an ‘at-the-margins’ approach. Both A and B are shown to be cost-effective, with the implication that A+B should be provided. The next analysis uses regression, with interaction terms that are shown to be unlikely to be statistically significant for costs or net benefit. An ‘inside-the-table’ analysis is used to evaluate the 4 alternative treatments separately, with an associated loss in statistical power. The findings of this analysis contradict those of the at-the-margins analysis. A variety of regression-based analyses is presented, with the discussion focussed on the variability in the estimated standard errors and the implications of this for value of information analysis. The authors then go on to present their conception of the ‘opportunity cost of ignoring interactions’ as a new basis for value of information analysis. A set of 14 recommendations is provided for people conducting economic evaluations alongside factorial trials, which could be used as a bolt-on to the CHEERS and CONSORT guidelines.
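To make the interaction issue concrete, here is a hypothetical sketch of a net monetary benefit regression for a 2×2 factorial design, on simulated data with illustrative effect sizes (not the paper’s simulation). The interaction coefficient is precisely what an at-the-margins analysis implicitly assumes to be zero.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 400  # total participants across the four cells

# Simulate a 2x2 factorial: 0/1 indicators for treatments A and B.
a = rng.integers(0, 2, n)
b = rng.integers(0, 2, n)

# Hypothetical net monetary benefit with a negative A-B interaction:
# the combination yields less than the sum of the separate effects.
nmb = 1000 + 500 * a + 400 * b - 300 * (a * b) + rng.normal(0, 800, n)

# Ordinary least squares with an interaction term.
X = np.column_stack([np.ones(n), a, b, a * b])
coef, *_ = np.linalg.lstsq(X, nmb, rcond=None)
intercept, effect_a, effect_b, interaction = coef
```

Dropping the `a * b` column gives the at-the-margins model; with noisy cost data, the estimated `interaction` will often be statistically insignificant even when, as here, the true interaction is economically meaningful.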

Thesis Thursday: Raymond Oppong

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Raymond Oppong who graduated with a PhD from the University of Birmingham. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Economic analysis alongside multinational studies
Sue Jowett, Tracy Roberts
Repository link

What attracted you to studying economic evaluation in the context of multinational studies?

One of the first projects that I was involved in when I started work as a health economist was the Genomics to combat Resistance against Antibiotics in Community-acquired lower respiratory tract infections (LRTI) in Europe (GRACE) project. This was an EU-funded study aimed at integrating and coordinating the activities of physicians and scientists from institutions in 14 European countries to combat antibiotic resistance in community-acquired lower respiratory tract infections.

My first task on this project was to undertake a multinational costing study to estimate the costs of treating acute cough/LRTI in Europe. I faced quite a number of challenges including the lack of unit cost data across countries. Conducting a full economic evaluation alongside the interventional studies in GRACE also brought up a number of issues with respect to methods of analysis of multinational trials which needed to be resolved. The desire to understand and resolve some of these issues led me to undertake the PhD to investigate the implications of conducting economic evaluations alongside multinational studies.

Your thesis includes some case studies from a large multinational project. What were the main findings of your empirical work?

I used three main case studies for my empirical work. The first was an observational study aimed at describing the current presentation, investigation, treatment and outcomes of community-acquired lower respiratory tract infections and analysing the determinants of antibiotic use in Europe. The other 2 were RCTs: the first studied the effectiveness of antibiotic therapy (amoxicillin) in community-acquired lower respiratory tract infections, whilst the second assessed training interventions to improve antibiotic prescribing behaviour by general practitioners. The observational study was used to explore issues relating to costing and outcomes in multinational studies whilst the RCTs explored the various analytical approaches (pooled and split) to economic evaluation alongside multinational studies.

The results from the observational study revealed large variations in costs across Europe and showed that contacting researchers in individual countries was the most effective way of obtaining unit costs. Results from both RCTs showed that the choice of whether to pool or split data had an impact on the cost-effectiveness of the interventions.

What were the key analytical methods used in your analysis?

The overall aim of the thesis was to study the implications of conducting economic analysis alongside multinational studies. Specific objectives include: i) documenting challenges associated with economic evaluations alongside multinational studies, ii) exploring various approaches to obtaining and estimating unit costs, iii) exploring the impact of using different tariffs to value EQ-5D health state descriptions, iv) comparing methods that have been used to conduct economic evaluation alongside multinational studies and v) making recommendations to guide the design and conduct of future economic evaluations carried out alongside multinational studies.

A number of approaches were used to achieve each of the objectives. A systematic review of the literature identified challenges associated with economic evaluations alongside multinational studies. A four-stage approach to obtaining unit costs was assessed. The UK, European and country-specific EQ-5D value sets were compared to determine which is the most appropriate to use in the context of multinational studies. Four analytical approaches – fully pooled one-country costing, fully pooled multicountry costing, fully split one-country costing and fully split multicountry costing – were compared in terms of resource use, costs, health outcomes and cost-effectiveness. Finally, based on the findings of the study, a set of recommendations was developed.

You completed your PhD part-time while working as a researcher. Did you find this a help or a hindrance to your studies?

I must say that it was both a help and a hindrance. Working in a research environment was really helpful. There was a lot of support from supervisors and colleagues which kept me motivated. I might not have gotten this support if I was not working in a research/academic environment. However, even though some time during the week was allocated to the PhD, I had to completely put it on hold for long periods of time in order to deal with the pressures of work/research. Consequently, I always had to struggle to find my bearings when I got back to the PhD. I also spent most weekends working on the PhD, especially when I was nearing submission.

On the whole, it should be noted that a part-time PhD requires a lot of time management skills. I personally had to go on time management courses which were really helpful.

What advice would you give to a health economist conducting an economic evaluation alongside a multinational study?

For a health economist conducting an economic evaluation alongside a multinational trial, it is important to plan ahead and understand the challenges that are associated with economic evaluations alongside multinational studies. A lot of the problems such as those related to the identification of unit costs can be avoided by ensuring adequate measures are put in place at the design stage of the study. An understanding of the various health systems of the countries involved in the study is important in order to make a judgement about the differences and similarities in resource use across countries. Decision makers are interested in results that can be applied to their jurisdiction; therefore it is important to adopt transparent methods e.g. state the countries that participated in the study, state the sources of unit costs and make it clear whether data from all countries (pooling) or from a subset (splitting) were used. To ensure that the results of the study are generalisable to a number of countries it may be advisable to present country-specific results and probably conduct the analysis from different perspectives.

Chris Sampson’s journal round-up for 8th May 2017


Verification of decision-analytic models for health economic evaluations: an overview. PharmacoEconomics [PubMed] Published 29th April 2017

Increasingly, it’s expected that model-based economic evaluations can be validated and shown to be fit-for-purpose. However, up to now, discussions have focussed on scientific questions about conceptualisation and external validity, rather than technical questions, such as whether the model is programmed correctly and behaves as expected. This paper looks at how things are done in the software industry with a view to creating guidance for health economists. Given that Microsoft Excel remains one of the most popular software packages for modelling, there is a discussion of spreadsheet errors. These might be errors in logic, simple copy-paste type mistakes and errors of omission. A variety of tactics is discussed. In particular, the authors describe unit testing, whereby individual parts of the code are demonstrated to be correct. Unit testing frameworks do not exist for application to spreadsheets, so the authors recommend the creation of a ‘Tests’ spreadsheet with tests for parameter assignments, functions, equations and exploratory items. Independent review by another modeller is also recommended. Six recommendations are given for taking model verification forward: i) the use of open source models, ii) standardisation in model storage and communication (anyone for a registry?), iii) style guides for script, iv) agency and journal mandates, v) training and vi) creation of an ISPOR/SMDM task force. This is a worthwhile read for any modeller, with some neat tactics that you can build into your workflow.
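The unit-testing idea carries over directly to script-based models. Below is a hypothetical sketch (my own illustration, not taken from the paper) of the kind of checks the authors’ suggested ‘Tests’ sheet might contain, written as Python test functions that each exercise one part of the model in isolation.

```python
# Hypothetical verification checks for a decision-analytic model, in the
# spirit of the paper's 'Tests' spreadsheet: each test demonstrates that
# one individual part of the model code behaves as expected.

def transition_matrix(p_progress, p_die):
    """Build a 3-state (Well, Sick, Dead) annual transition matrix.

    Illustrative structure: the Sick state carries double mortality risk.
    """
    return [
        [1 - p_progress - p_die, p_progress, p_die],
        [0.0, 1 - p_die * 2, p_die * 2],
        [0.0, 0.0, 1.0],
    ]

def test_rows_sum_to_one():
    for row in transition_matrix(0.1, 0.05):
        assert abs(sum(row) - 1.0) < 1e-9

def test_probabilities_in_range():
    for row in transition_matrix(0.1, 0.05):
        assert all(0.0 <= p <= 1.0 for p in row)

def test_dead_state_is_absorbing():
    assert transition_matrix(0.1, 0.05)[2] == [0.0, 0.0, 1.0]

# Run the checks directly (a runner such as pytest would collect them).
test_rows_sum_to_one()
test_probabilities_in_range()
test_dead_state_is_absorbing()
```

In Excel the equivalent would be a dedicated worksheet of formulas returning TRUE/FALSE for the same kinds of checks; the point is that every parameter assignment, function and equation gets at least one explicit test.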

How robust are value judgments of health inequality aversion? Testing for framing and cognitive effects. Medical Decision Making [PubMed] Published 25th April 2017

Evidence shows that people are often extremely averse to health inequality. Sometimes these super-egalitarian responses imply such extreme preferences that monotonicity is violated. The starting point for this study is the idea that these findings are probably influenced by framing effects and cognitive biases, and that they may therefore not constitute a reliable basis for policy making. The authors investigate 4 hypotheses that might indicate the presence of bias: i) realistic small health inequality reductions vs larger ones, ii) population- vs individual-level descriptions, iii) concrete vs abstract intervention scenarios and iv) online vs face-to-face administration. Two samples were recruited: one with a face-to-face discussion (n=52) and the other online (n=83). The questionnaire introduced respondents to health inequality in England before asking 4 questions in the form of a choice experiment, with 20 paired choices. Responses are grouped according to non-egalitarianism, prioritarianism and strict egalitarianism. The main research question is whether or not the alternative strategies resulted in fewer strict egalitarian responses. Not much of an effect was found with regard to large gains or to population-level descriptions. There was evidence that the abstract scenarios resulted in a greater proportion of people giving strong egalitarian responses. And the face-to-face sample did seem to exhibit some social desirability bias, with more egalitarian responses. But the main take-home message from this study for me is that it is not easy to explain away people’s extreme aversion to health inequality, which is heartening. Yet, as with all choice experiments, we see that the mode of administration – and cognitive effects induced by the question – can be very important.

Adaptation to health states: sick yet better off? Health Economics [PubMed] Published 20th April 2017

Should patients or the public value health states for the purpose of resource allocation? It’s a question that’s cropped up plenty of times on this blog. One of the trickier challenges is understanding and dealing with adaptation. This paper has a pretty straightforward purpose – to look for signs of adaptation in a longitudinal dataset. The authors’ approach is to see whether there is a positive relationship between the length of time a person has had an illness and the likelihood of them reporting better health. I did pretty much the same thing (for SF-6D and satisfaction with life) in my MSc dissertation, and found little evidence of adaptation, so I’m keen to see where this goes! The study uses 4 waves of data from the British Cohort Study, looking at self-assessed health (on a 4-point scale) and self-reported chronic illness and health shocks. Latent self-assessed health is modelled using a dynamic ordered probit model. In short, there is evidence of adaptation. People who have had a long-standing illness for a greater duration are more likely to report a higher level of self-assessed health. An additional 10 years of illness is associated with an 8 percentage point increase in the likelihood of reporting ‘excellent’ health. The study is opaque about sample sizes, but I’d guess that finding is based on not-that-many people. Further analyses are conducted to show that adaptation seems to become important only after a relatively long duration (~20 years) and that better health before diagnosis may not influence adaptation. The authors also look at specific conditions, finding that some (e.g. diabetes, anxiety, back problems) are associated with adaptation, while others (e.g. depression, cancer, Crohn’s disease) are not. I have a bit of a problem with this study though, in that it’s framed as being relevant to health care resource allocation and health technology assessment. But I don’t think it is. Self-assessed health in the ‘how healthy are you’ sense is very far removed from the process by which health state utilities are obtained using the EQ-5D, and the two probably don’t reflect adaptation in the same way.