Chris Sampson’s journal round-up for 22nd May 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The effect of health care expenditure on patient outcomes: evidence from English neonatal care. Health Economics [PubMed] Published 12th May 2017

Recently, people have started trying to identify opportunity cost in the NHS, by assessing the health gains associated with current spending. Studies have thrown up a wide range of values in different clinical areas, including in neonatal care. This study uses individual-level data for infants treated in 32 neonatal intensive care units from 2009 to 2013, along with the NHS Reference Cost for an intensive care cot day. A model is constructed to assess the impact of changes in expenditure, controlling for a variety of variables available in the National Neonatal Research Database. Two outcomes are considered: the in-hospital mortality rate and morbidity-free survival. The main finding is that a £100 increase in the cost per cot day is associated with a reduction in the mortality rate of 0.36 percentage points. This translates into a marginal cost per infant life saved of around £420,000. Assuming an average life expectancy of 81 years, this equates to a present value cost per life year gained of £15,200. Reductions in the mortality rate are associated with similar increases in morbidity. The estimated cost contradicts a much higher estimate presented in the Claxton et al. modern classic on searching for the threshold.
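As a rough check on that last step, the present-value arithmetic can be sketched in a few lines. The discount rate isn't stated in this summary, so the 3.5% used below (NICE's reference-case rate) is an assumption, as is treating life years as a simple annuity.

```python
# Sketch of the cost-per-life-year arithmetic, assuming a 3.5% annual
# discount rate (NICE's reference-case rate; not stated above) and
# life years accruing as an annuity over the 81-year life expectancy.

def cost_per_discounted_life_year(cost_per_life, life_years, rate):
    """Divide cost per life saved by the discounted sum of life years."""
    annuity_factor = sum((1 + rate) ** -t for t in range(1, life_years + 1))
    return cost_per_life / annuity_factor

print(round(cost_per_discounted_life_year(420_000, 81, 0.035)))
```

With these assumptions the result lands a little above the reported £15,200; the exact figure depends on the discounting conventions used in the paper.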

A comparison of four software programs for implementing decision analytic cost-effectiveness models. PharmacoEconomics [PubMed] Published 9th May 2017

Markov models: TreeAge vs Excel vs R vs MATLAB. This paper compares the alternative programs in terms of transparency and validation, the associated learning curve, capability, processing speed and cost. A benchmarking assessment is conducted using a previously published model (originally developed in TreeAge). Excel is rightly identified as the ‘ubiquitous workhorse’ of cost-effectiveness modelling. It’s transparent in theory, but in practice can include cell relations that are difficult to disentangle. TreeAge, on the other hand, includes valuable features to aid model transparency and validation, though the workings of the software itself are not always clear. Being based on programming languages, MATLAB and R may be entirely transparent but challenging to validate. The authors assert that TreeAge is the easiest to learn due to its graphical nature and the availability of training options. Save for complex VBA, Excel is also simple to learn. R and MATLAB are both more difficult to learn, but the time savings they offer are clearly worth it for anybody expecting to work on multiple complex modelling studies. R and MATLAB both come top in terms of capability, with Excel falling behind due to having fewer statistical facilities. TreeAge has clearly defined capabilities limited to the features that the company chooses to support. MATLAB and R were both able to complete 10,000 simulations in a matter of seconds, while Excel took 15 minutes and TreeAge took over 4 hours. For a value of information analysis requiring 1000 runs, this could translate into 6 months for TreeAge! MATLAB has some advantage over R in processing time that might make its cost ($500 for academics) worthwhile to some. Excel and TreeAge are both identified as particularly useful as educational tools for people getting to grips with the concepts of decision modelling. Though the take-home message for me is that I really need to learn R.
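To make the benchmark concrete, the kind of scripted, vectorised Markov cohort model that runs quickly in R or MATLAB looks something like the sketch below. It's illustrative only, written in Python (which is not one of the four packages compared), with made-up states, transition probabilities and utilities.

```python
import numpy as np

# Hypothetical 3-state Markov cohort model: well, sick, dead.
# Transition probabilities and utilities are illustrative only.
P = np.array([[0.90, 0.08, 0.02],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])
utilities = np.array([1.0, 0.6, 0.0])

cohort = np.array([1.0, 0.0, 0.0])  # everyone starts in 'well'
qalys = 0.0
for cycle in range(40):             # 40 annual cycles
    cohort = cohort @ P             # one cycle = one matrix multiplication
    qalys += cohort @ utilities     # undiscounted QALYs accrued this cycle

print(round(qalys, 2))
```

Because each cycle is a single matrix multiplication, wrapping a model like this in 10,000 probabilistic parameter draws is exactly the sort of job that scripted languages dispatch in seconds.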

Economic evaluation of factorial randomised controlled trials: challenges, methods and recommendations. Statistics in Medicine [PubMed] Published 3rd May 2017

Factorial trials randomise participants to at least 2 alternative levels (for example, different doses) of at least 2 alternative treatments (possibly in combination). Very little has been written about how economic evaluations ought to be conducted alongside such trials. This study starts by outlining some key challenges for economic evaluation in this context. First, there may be interactions between combined therapies, which might exist for costs and QALYs even if not for the primary clinical endpoint. Second, transformation of the data may not be straightforward; for example, it may not be possible to disaggregate a net benefit estimation into its components using alternative transformations. Third, regression analysis of factorial trials may be tricky for the purpose of constructing CEACs and conducting value of information analysis. Finally, defining the study question may not be simple. The authors simulate a 2×2 factorial trial (0 vs A vs B vs A+B) to demonstrate these challenges. The first analysis compares A and B against placebo separately in what’s known as an ‘at-the-margins’ approach. Both A and B are shown to be cost-effective, with the implication that A+B should be provided. The next analysis uses regression, demonstrating that interaction terms are unlikely to be statistically significant for costs or net benefit. ‘Inside-the-table’ analysis is used to separately evaluate the 4 alternative treatments, with an associated loss in statistical power. The findings of this analysis contradict the findings of the at-the-margins analysis. A variety of regression-based analyses is presented, with the discussion focussed on the variability in the estimated standard errors and the implications of this for value of information analysis. The authors then go on to present their conception of the ‘opportunity cost of ignoring interactions’ as a new basis for value of information analysis.
A set of 14 recommendations is provided for people conducting economic evaluations alongside factorial trials, which could be used as a bolt-on to CHEERS and CONSORT guidelines.
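A hypothetical version of that simulation setup can be sketched as follows; the cell sizes, effect sizes and noise level below are invented for illustration, not taken from the paper. With a plausibly noisy net benefit, the true negative interaction is smaller than its standard error, which is the authors' point about statistical significance.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 400                              # 100 participants per cell
a = np.repeat([0, 1, 0, 1], n // 4)  # indicator for treatment A
b = np.repeat([0, 0, 1, 1], n // 4)  # indicator for treatment B

# Hypothetical true effects on net monetary benefit (£): each treatment
# adds 500 alone, and combining them loses 300 (a negative interaction).
nb = 1000 + 500 * a + 500 * b - 300 * a * b + rng.normal(0, 2000, n)

# Net-benefit regression with an interaction term, by ordinary least squares.
X = np.column_stack([np.ones(n), a, b, a * b])
coef, *_ = np.linalg.lstsq(X, nb, rcond=None)
print(coef)  # [intercept, effect of A, effect of B, interaction]
```

With noise of this magnitude, the estimated interaction term's standard error is around £400, so a true effect of −£300 will usually look indistinguishable from zero despite mattering for the adoption decision.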

Credits


Chris Sampson’s journal round-up for 8th May 2017


Verification of decision-analytic models for health economic evaluations: an overview. PharmacoEconomics [PubMed] Published 29th April 2017

Increasingly, it’s expected that model-based economic evaluations can be validated and shown to be fit-for-purpose. However, up to now, discussions have focussed on scientific questions about conceptualisation and external validity, rather than technical questions, such as whether the model is programmed correctly and behaves as expected. This paper looks at how things are done in the software industry with a view to creating guidance for health economists. Given that Microsoft Excel remains one of the most popular software packages for modelling, there is a discussion of spreadsheet errors. These might be errors in logic, simple copy-paste type mistakes and errors of omission. A variety of tactics is discussed. In particular, the authors describe unit testing, whereby individual parts of the code are demonstrated to be correct. Unit testing frameworks do not exist for application to spreadsheets, so the authors recommend the creation of a ‘Tests’ spreadsheet with tests for parameter assignments, functions, equations and exploratory items. Independent review by another modeller is also recommended. Six recommendations are given for taking model verification forward: i) the use of open source models, ii) standardisation in model storage and communication (anyone for a registry?), iii) style guides for script, iv) agency and journal mandates, v) training and vi) creation of an ISPOR/SMDM task force. This is a worthwhile read for any modeller, with some neat tactics that you can build into your workflow.
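The unit-testing recommendation translates directly to scripted models too. Here's a minimal sketch, built around a hypothetical model function (not from the paper), where each assertion verifies one property the model must always satisfy, independently of any particular result.

```python
import numpy as np

def make_transition_matrix(p_progress, p_die):
    """Hypothetical model function: builds a 3-state (well, sick, dead)
    annual transition matrix from two input probabilities."""
    return np.array([
        [1 - p_progress - p_die, p_progress, p_die],
        [0.0,                    1 - p_die,  p_die],
        [0.0,                    0.0,        1.0  ],
    ])

# The 'Tests' sheet, script style: each check targets one property.
P = make_transition_matrix(0.1, 0.05)
assert np.allclose(P.sum(axis=1), 1.0), "rows must sum to one"
assert P[2, 2] == 1.0, "death must be an absorbing state"
assert (P >= 0).all(), "probabilities cannot be negative"
print("all checks passed")
```

The same checks can be run automatically whenever a parameter changes, which is the point of unit testing: errors of logic surface immediately rather than hiding in a results table.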

How robust are value judgments of health inequality aversion? Testing for framing and cognitive effects. Medical Decision Making [PubMed] Published 25th April 2017

Evidence shows that people are often extremely averse to health inequality. Sometimes these super-egalitarian responses imply such extreme preferences that monotonicity is violated. The starting point for this study is the idea that these findings are probably influenced by framing effects and cognitive biases, and that they may therefore not constitute a reliable basis for policy making. The authors investigate 4 hypotheses that might indicate the presence of bias: i) realistic small health inequality reductions vs larger ones, ii) population- vs individual-level descriptions, iii) concrete vs abstract intervention scenarios and iv) online vs face-to-face administration. Two samples were recruited: one with a face-to-face discussion (n=52) and the other online (n=83). The questionnaire introduced respondents to health inequality in England before asking 4 questions in the form of a choice experiment, with 20 paired choices. Responses are grouped according to non-egalitarianism, prioritarianism and strict egalitarianism. The main research question is whether or not the alternative strategies resulted in fewer strict egalitarian responses. Not much of an effect was found with regard to large gains or to population-level descriptions. There was evidence that the abstract scenarios resulted in a greater proportion of people giving strong egalitarian responses. And the face-to-face sample did seem to exhibit some social desirability bias, with more egalitarian responses. But the main take-home message from this study for me is that it is not easy to explain away people’s extreme aversion to health inequality, which is heartening. Yet, as with all choice experiments, we see that the mode of administration – and cognitive effects induced by the question – can be very important.

Adaptation to health states: sick yet better off? Health Economics [PubMed] Published 20th April 2017

Should patients or the public value health states for the purpose of resource allocation? It’s a question that’s cropped up plenty of times on this blog. One of the trickier challenges is understanding and dealing with adaptation. This paper has a pretty straightforward purpose – to look for signs of adaptation in a longitudinal dataset. The authors’ approach is to see whether there is a positive relationship between the length of time a person has an illness and the likelihood of them reporting better health. I did pretty much the same thing (for SF-6D and satisfaction with life) in my MSc dissertation, and found little evidence of adaptation, so I’m keen to see where this goes! The study uses 4 waves of data from the British Cohort Study, looking at self-assessed health (on a 4-point scale) and self-reported chronic illness and health shocks. Latent self-assessed health is modelled using a dynamic ordered probit model. In short, there is evidence of adaptation. People who have had a long-standing illness for a greater duration are more likely to report a higher level of self-assessed health. An additional 10 years of illness is associated with an 8 percentage point increase in the likelihood of reporting ‘excellent’ health. The study is opaque about sample sizes, but I’d guess that finding is based on not-that-many people. Further analyses are conducted to show that adaptation seems to become important only after a relatively long duration (~20 years) and that better health before diagnosis may not influence adaptation. The authors also look at specific conditions, finding that some (e.g. diabetes, anxiety, back problems) are associated with adaptation, while others (e.g. depression, cancer, Crohn’s disease) are not. I have a bit of a problem with this study though, in that it’s framed as being relevant to health care resource allocation and health technology assessment. But I don’t think it is. 
Self-assessed health in the ‘how healthy are you’ sense is very far removed from the process by which health state utilities are obtained using the EQ-5D. And the two measures probably don’t reflect adaptation in the same way.
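For readers unfamiliar with the method, here is a minimal sketch of the ordered-probit idea (not the paper's dynamic specification, and with hypothetical cut-points): a latent health index is partitioned into the observed 4-point scale by estimated thresholds, so a positive coefficient on illness duration shifts reports towards the higher categories.

```python
import numpy as np

thresholds = np.array([-1.0, 0.0, 1.0])  # hypothetical estimated cut-points

def observed_category(latent_health):
    """Map a latent health index to categories 1 ('poor') to 4 ('excellent')."""
    return int(np.searchsorted(thresholds, latent_health)) + 1

# A higher latent index crosses more thresholds and so is reported
# as a better health category.
for latent in [-1.5, -0.5, 0.5, 1.5]:
    print(latent, observed_category(latent))
```

In a dynamic specification, the latent index typically also depends on the previous period's reported health, which is what lets the authors separate adaptation from persistence.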


Meeting round-up: International Society for Economics and Social Sciences of Animal Health inaugural meeting

Last week I attended a conference that was very different to any that I’ve attended before. It was the first meeting of a new society – the International Society for Economics and Social Sciences of Animal Health (ISESSAH). Prof Marilyn James and I wanted to get involved with ISESSAH from the get-go in order to start identifying opportunities for collaboration with animal health researchers. In particular, we see the potential for the application of cost-effectiveness analysis methods in the veterinary context. The proceedings of the conference suggested that this is not something that is currently being done.

So off to the Highlands we headed, happily arriving in Aviemore while the town was improbably celebrating being the hottest place in the UK. Aside from my lack of sunglasses and excess of thick jumpers, I did have some intellectual concerns. I was a little worried that there would be few points of commonality between me and the other delegates. A hands-in-the-air poll during the first keynote speech by Tim Carpenter suggested that a minority of people in the room identified primarily as economists. Most people identified as “animal health specialists” and I suspect that most of these people were principally interested in epidemiological questions relating to livestock animals.

Happily, my fears were not realised. The first talk, by Erwin Wauters, discussed the challenge of framing research questions and in particular identifying the context of the decision. This is something we figured out a while ago in health economics and now have the luxury of bickering about health service and societal perspectives for our analyses. But the overlap was striking, as Erwin discussed the proliferation of ‘cost of disease’ studies with limited interpretability. I wondered (aloud, as a question) what the unique challenges might be in defining the context (what we would call perspective) in animal health as opposed to human health. This turned out to be prudent, as numerous delegates approached me over the following 48 hours to tell me what they thought the answer was (euthanasia/culling, market structure, data availability, amongst others).

The whole conference consisted of methods that were familiar. Don’t get me wrong, most (though not all) of the subject matter was alien to me. But that’s par for the course in applied health economics anyway. Many of the studies – and I mean this to be in no way a criticism of those presenting – would strike health economists as analytically rudimentary. There were lots of cost-benefit analyses, plenty of epidemiological models with costs attached (does that make it an economic model?) and a handful of econometric analyses. Some studies (aside from my own poster) were very familiar and referred explicitly to ideas from the health economics field. In particular, Paul Torgerson and colleagues presented a framework that incorporates animal disease burden with DALY estimation. A French group mused on the role of QALYs.

Something consistent across many of the empirical studies was that the decision problems were ill-defined. In the economic evaluation of (human) health care, we attribute major importance to the adequate definition of the decision problem and the identification and definition of all relevant options for the decision maker. It is perhaps for this reason that – as Jonathan Rushton argued – economics in the animal health context is used more for advocacy than to achieve optimality. Or maybe the causality goes the other way.

There were also lots of sociological and other sub-disciplines of social science represented, with fertile opportunities for interdisciplinary research. I didn’t like the distinction that was made throughout the conference between economics and social science. Economics is a social science. It isn’t bigger or better or distinct. Economists don’t need any encouragement in distancing themselves from sociologists and other social scientists. All of the research (with no exaggeration, though to varying extents) could benefit from health economists’ input. Thanks to our subfield’s softer edges, health economists make for good social science all-rounders. But then I would say that.

There was a discussion of how the conference will operate in the future. As someone who worships at the church of HESG, my instinct was to advise copying it. But that wouldn’t be right in this case (except perhaps for the levy of a nominal membership fee). ISESSAH will need to focus on interdisciplinarity. Delegates had a palpable taste and even excitement for interdisciplinary research. My (previously unknown) Nottingham colleague Marnie Brennan described how she thought the society would do well to adopt a policy of infiltration, to force interdisciplinary engagement, by creating a presence for itself at other conferences. The 2017 meeting took place alongside that of the Society for Veterinary Epidemiology and Preventive Medicine (SVEPM). Hopefully, in the future, we’ll see collaboration with human health research and economics societies and, who knows, maybe even the health economists.