Chris Sampson’s journal round-up for 22nd May 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The effect of health care expenditure on patient outcomes: evidence from English neonatal care. Health Economics [PubMed] Published 12th May 2017

Recently, people have started trying to identify opportunity cost in the NHS, by assessing the health gains associated with current spending. Studies have thrown up a wide range of values in different clinical areas, including in neonatal care. This study uses individual-level data for infants treated in 32 neonatal intensive care units between 2009 and 2013, along with the NHS Reference Cost for an intensive care cot day. A model is constructed to assess the impact of changes in expenditure, controlling for a variety of variables available in the National Neonatal Research Database. Two outcomes are considered: the in-hospital mortality rate and morbidity-free survival. The main finding is that a £100 increase in the cost per cot day is associated with a reduction in the mortality rate of 0.36 percentage points. This translates into a marginal cost per infant life saved of around £420,000. Assuming an average life expectancy of 81 years, this equates to a present value cost per life year gained of £15,200. Reductions in the mortality rate are associated with similar increases in morbidity. The estimated cost contradicts a much higher estimate presented in the Claxton et al. modern classic on searching for the threshold.
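The step from a cost per life saved to a present value cost per life year can be reproduced with back-of-the-envelope discounting. This is a sketch only: the round-up doesn't state which discount rate the authors used, so the standard NHS rate of 3.5% is an assumption here.

```python
# Reconstructing the cost per life year gained from the cost per life
# saved. The 3.5% discount rate is an assumption (the usual NHS rate);
# the paper's actual rate may differ slightly.
cost_per_life_saved = 420_000  # marginal cost per infant life saved (GBP)
life_expectancy = 81           # years of life gained by a surviving infant
discount_rate = 0.035

# Present value of the stream of life years gained
discounted_life_years = sum(
    1 / (1 + discount_rate) ** t for t in range(life_expectancy)
)

cost_per_life_year = cost_per_life_saved / discounted_life_years
print(f"£{cost_per_life_year:,.0f} per discounted life year")
```

Under these assumptions the 81 years collapse to roughly 28 discounted life years, giving a figure close to the £15,200 reported.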

A comparison of four software programs for implementing decision analytic cost-effectiveness models. PharmacoEconomics [PubMed] Published 9th May 2017

Markov models: TreeAge vs Excel vs R vs MATLAB. This paper compares the alternative programs in terms of transparency and validation, the associated learning curve, capability, processing speed and cost. A benchmarking assessment is conducted using a previously published model (originally developed in TreeAge). Excel is rightly identified as the ‘ubiquitous workhorse’ of cost-effectiveness modelling. It’s transparent in theory, but in practice can include cell relations that are difficult to disentangle. TreeAge, on the other hand, includes valuable features to aid model transparency and validation, though the workings of the software itself are not always clear. Being based on programming languages, MATLAB and R may be entirely transparent but challenging to validate. The authors assert that TreeAge is the easiest to learn due to its graphical nature and the availability of training options. Save for complex VBA, Excel is also simple to learn. R and MATLAB are similarly more difficult to learn, but clearly worth the investment for anybody expecting to work on multiple complex modelling studies. R and MATLAB both come top in terms of capability, with Excel falling behind due to having fewer statistical facilities. TreeAge has clearly defined capabilities limited to the features that the company chooses to support. MATLAB and R were both able to complete 10,000 simulations in a matter of seconds, while Excel took 15 minutes and TreeAge took over 4 hours. For a value of information analysis requiring 1000 runs, this could translate into 6 months for TreeAge! MATLAB has some advantage over R in processing time that might make its cost ($500 for academics) worthwhile to some. Excel and TreeAge are both identified as particularly useful as educational tools for people getting to grips with the concepts of decision modelling. Though the take-home message for me is that I really need to learn R.
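To give a flavour of why script-based tools dispatch thousands of probabilistic sensitivity analysis (PSA) runs in seconds, here is a minimal probabilistic Markov cohort model. It is written in Python rather than R or MATLAB, and every state, cost and utility value is hypothetical; the point is only the mechanics and the speed.

```python
import random
import time

# A minimal probabilistic 3-state Markov cohort model (Well, Sick, Dead).
# All parameters are hypothetical illustrations, not from the paper.
def run_model(p_well_sick, p_sick_dead, cycles=40):
    """Return (total cost, total QALYs) for one parameter draw."""
    cohort = [1.0, 0.0, 0.0]  # everyone starts Well
    cost = qalys = 0.0
    for _ in range(cycles):
        well, sick, dead = cohort
        cohort = [
            well * (1 - p_well_sick),                      # stay Well
            well * p_well_sick + sick * (1 - p_sick_dead), # become/stay Sick
            dead + sick * p_sick_dead,                     # absorbing Dead
        ]
        cost += cohort[1] * 2000                           # annual cost when Sick
        qalys += cohort[0] * 0.9 + cohort[1] * 0.6         # state utilities
    return cost, qalys

# PSA: draw transition probabilities from (hypothetical) beta distributions
start = time.perf_counter()
draws = [run_model(random.betavariate(2, 18), random.betavariate(3, 27))
         for _ in range(10_000)]
elapsed = time.perf_counter() - start
print(f"10,000 PSA runs in {elapsed:.2f}s")
```

Even in interpreted Python this completes in well under a minute on an ordinary laptop, which is the gap the benchmarking exercise quantifies against Excel and TreeAge.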

Economic evaluation of factorial randomised controlled trials: challenges, methods and recommendations. Statistics in Medicine [PubMed] Published 3rd May 2017

Factorial trials randomise participants to at least 2 alternative levels (for example, different doses) of at least 2 alternative treatments (possibly in combination). Very little has been written about how economic evaluations ought to be conducted alongside such trials. This study starts by outlining some key challenges for economic evaluation in this context. First, there may be interactions between combined therapies, which might exist for costs and QALYs even if not for the primary clinical endpoint. Second, transformation of the data may not be straightforward; for example, it may not be possible to disaggregate a net benefit estimation into its components using alternative transformations. Third, regression analysis of factorial trials may be tricky for the purpose of constructing CEACs and conducting value of information analysis. Finally, defining the study question may not be simple. The authors simulate a 2×2 factorial trial (0 vs A vs B vs A+B) to demonstrate these challenges. The first analysis compares A and B against placebo separately in what’s known as an ‘at-the-margins’ approach. Both A and B are shown to be cost-effective, with the implication that A+B should be provided. The next analysis uses regression, demonstrating that interaction terms are unlikely to be statistically significant for costs or net benefit. ‘Inside-the-table’ analysis is used to separately evaluate the 4 alternative treatments, with an associated loss in statistical power. The findings of this analysis contradict the findings of the at-the-margins analysis. A variety of regression-based analyses is presented, with the discussion focussed on the variability in the estimated standard errors and the implications of this for value of information analysis. The authors then go on to present their conception of the ‘opportunity cost of ignoring interactions’ as a new basis for value of information analysis. A set of 14 recommendations is provided for people conducting economic evaluations alongside factorial trials, which could be used as a bolt-on to the CHEERS and CONSORT guidelines.
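The clash between the two analyses is easy to see with some toy numbers. The sketch below uses entirely hypothetical mean net benefits for the four cells of a 2×2 trial with a negative interaction between A and B; it is not the authors' simulation.

```python
# Hypothetical mean net benefit per patient for the four cells of a
# 2x2 factorial trial (0 vs A vs B vs A+B), with a negative interaction.
nb = {
    ("ctrl", "ctrl"): 0.0,
    ("A",    "ctrl"): 5.0,
    ("ctrl", "B"):    4.0,
    ("A",    "B"):    3.0,   # interaction: A+B is worse than A alone
}

# 'At-the-margins': average effect of each treatment across levels of the other
effect_A = ((nb[("A", "ctrl")] - nb[("ctrl", "ctrl")])
            + (nb[("A", "B")] - nb[("ctrl", "B")])) / 2
effect_B = ((nb[("ctrl", "B")] - nb[("ctrl", "ctrl")])
            + (nb[("A", "B")] - nb[("A", "ctrl")])) / 2
print(effect_A, effect_B)   # both positive, suggesting 'provide A+B'

# 'Inside-the-table': compare the four cells directly
best = max(nb, key=nb.get)
print(best)                 # A alone is actually the best option
```

Both marginal effects are positive, so the at-the-margins approach recommends A+B, yet the best single cell is A alone. This is the contradiction the paper demonstrates, at the cost of reduced power when the cells are analysed separately.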


Chris Sampson’s journal round-up for 8th May 2017


Verification of decision-analytic models for health economic evaluations: an overview. PharmacoEconomics [PubMed] Published 29th April 2017

Increasingly, it’s expected that model-based economic evaluations can be validated and shown to be fit-for-purpose. However, up to now, discussions have focussed on scientific questions about conceptualisation and external validity, rather than technical questions, such as whether the model is programmed correctly and behaves as expected. This paper looks at how things are done in the software industry with a view to creating guidance for health economists. Given that Microsoft Excel remains one of the most popular software packages for modelling, there is a discussion of spreadsheet errors. These might be errors in logic, simple copy-paste type mistakes and errors of omission. A variety of tactics is discussed. In particular, the authors describe unit testing, whereby individual parts of the code are demonstrated to be correct. Unit testing frameworks do not exist for application to spreadsheets, so the authors recommend the creation of a ‘Tests’ spreadsheet with tests for parameter assignments, functions, equations and exploratory items. Independent review by another modeller is also recommended. Six recommendations are given for taking model verification forward: i) the use of open source models, ii) standardisation in model storage and communication (anyone for a registry?), iii) style guides for script, iv) agency and journal mandates, v) training and vi) creation of an ISPOR/SMDM task force. This is a worthwhile read for any modeller, with some neat tactics that you can build into your workflow.
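The authors' 'Tests' spreadsheet translates naturally into code. Below is a sketch of the idea for a hypothetical three-state model: a handful of unit tests that verify the building blocks (probabilities sum to one, the absorbing state behaves, and the model reproduces a hand-calculated case) before the model is used in anger. The model and its numbers are illustrative, not from the paper.

```python
# A code analogue of the authors' suggested 'Tests' spreadsheet, for a
# hypothetical three-state (Well, Sick, Dead) model.
TRANSITION_MATRIX = [
    [0.9, 0.08, 0.02],   # from Well: stay, become Sick, die
    [0.0, 0.85, 0.15],   # from Sick: stay or die
    [0.0, 0.0,  1.0],    # Dead is absorbing
]

def cycle(cohort):
    """Apply one model cycle to a cohort distribution."""
    return [sum(cohort[i] * TRANSITION_MATRIX[i][j] for i in range(3))
            for j in range(3)]

# --- the 'Tests' sheet -------------------------------------------------
def test_rows_sum_to_one():
    for row in TRANSITION_MATRIX:
        assert abs(sum(row) - 1.0) < 1e-9, "probabilities must sum to 1"

def test_dead_is_absorbing():
    assert TRANSITION_MATRIX[2] == [0.0, 0.0, 1.0]

def test_hand_calculated_case():
    # one cycle from an all-Well cohort: exactly 2% should die
    well, sick, dead = cycle([1.0, 0.0, 0.0])
    assert abs(dead - 0.02) < 1e-9

for check in (test_rows_sum_to_one, test_dead_is_absorbing,
              test_hand_calculated_case):
    check()
print("all model checks passed")
```

In a real workflow these would live in a proper unit testing framework and run automatically whenever the model changes, which is precisely the practice the paper borrows from the software industry.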

How robust are value judgments of health inequality aversion? Testing for framing and cognitive effects. Medical Decision Making [PubMed] Published 25th April 2017

Evidence shows that people are often extremely averse to health inequality. Sometimes these super-egalitarian responses imply such extreme preferences that monotonicity is violated. The starting point for this study is the idea that these findings are probably influenced by framing effects and cognitive biases, and that they may therefore not constitute a reliable basis for policy making. The authors investigate 4 hypotheses that might indicate the presence of bias: i) realistic small health inequality reductions vs larger ones, ii) population- vs individual-level descriptions, iii) concrete vs abstract intervention scenarios and iv) online vs face-to-face administration. Two samples were recruited: one with a face-to-face discussion (n=52) and the other online (n=83). The questionnaire introduced respondents to health inequality in England before asking 4 questions in the form of a choice experiment, with 20 paired choices. Responses are grouped according to non-egalitarianism, prioritarianism and strict egalitarianism. The main research question is whether or not the alternative strategies resulted in fewer strict egalitarian responses. Not much of an effect was found with regard to large gains or to population-level descriptions. There was evidence that the abstract scenarios resulted in a greater proportion of people giving strong egalitarian responses. And the face-to-face sample did seem to exhibit some social desirability bias, with more egalitarian responses. But the main take-home message from this study for me is that it is not easy to explain away people’s extreme aversion to health inequality, which is heartening. Yet, as with all choice experiments, we see that the mode of administration – and cognitive effects induced by the question – can be very important.
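What a monotonicity violation looks like can be shown with a toy social welfare function. The functional form below (mean health minus a penalty on the rich–poor gap) and all the numbers are my own illustration, not the specification used in the paper.

```python
# A toy social welfare function: mean health minus an inequality penalty
# on the gap between the better-off and worse-off groups. Illustrative
# only; the paper's elicitation does not use this functional form.
def welfare(rich, poor, aversion):
    mean = (rich + poor) / 2
    gap = rich - poor
    return mean - aversion * gap

# 'Levelling down': reduce the health of the better-off group from 10 to 7
# with NO gain to the worse-off group.
for aversion in (0.2, 0.8):
    prefers_levelling_down = welfare(7, 5, aversion) > welfare(10, 5, aversion)
    print(aversion, prefers_levelling_down)
```

With this form, an aversion parameter above 0.5 makes the levelling-down option preferred even though nobody is better off, which is the monotonicity violation implied by the super-egalitarian responses.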

Adaptation to health states: sick yet better off? Health Economics [PubMed] Published 20th April 2017

Should patients or the public value health states for the purpose of resource allocation? It’s a question that’s cropped up plenty of times on this blog. One of the trickier challenges is understanding and dealing with adaptation. This paper has a pretty straightforward purpose – to look for signs of adaptation in a longitudinal dataset. The authors’ approach is to see whether there is a positive relationship between the length of time a person has an illness and the likelihood of them reporting better health. I did pretty much the same thing (for SF-6D and satisfaction with life) in my MSc dissertation, and found little evidence of adaptation, so I’m keen to see where this goes! The study uses 4 waves of data from the British Cohort Study, looking at self-assessed health (on a 4-point scale) and self-reported chronic illness and health shocks. Latent self-assessed health is modelled using a dynamic ordered probit model. In short, there is evidence of adaptation. People who have had a long-standing illness for a greater duration are more likely to report a higher level of self-assessed health. An additional 10 years of illness is associated with an 8 percentage point increase in the likelihood of reporting ‘excellent’ health. The study is opaque about sample sizes, but I’d guess that finding is based on not-that-many people. Further analyses are conducted to show that adaptation seems to become important only after a relatively long duration (~20 years) and that better health before diagnosis may not influence adaptation. The authors also look at specific conditions, finding that some (e.g. diabetes, anxiety, back problems) are associated with adaptation, while others (e.g. depression, cancer, Crohn’s disease) are not. I have a bit of a problem with this study though, in that it’s framed as being relevant to health care resource allocation and health technology assessment. But I don’t think it is. Self-assessed health in the ‘how healthy are you’ sense is very far removed from the process by which health state utilities are obtained using the EQ-5D. And they probably don’t reflect adaptation in the same way.


Chris Sampson’s journal round-up for 20th June 2016


Can increased primary care access reduce demand for emergency care? Evidence from England’s 7-day GP opening. Journal of Health Economics Published 15th June 2016

Getting a GP appointment when you want one can be tricky, and complaints are increasingly common in the UK. In April 2013, 7-day opening for some GP practices began being piloted in London, with support from the Prime Minister’s Challenge Fund. Part of the reasoning for 7-day opening – beyond patient satisfaction – is that better access to GP services might reduce the use of A&E at weekends. This study evaluates whether or not this has been observed for the London pilot. Secondary Uses Service patient-level data are analysed for 2009-2014 for 34 GP practices in central London (4 pilot practices and 30 controls). The authors collapse the data into the number of A&E attendances per GP practice, giving 8704 observations (34 practices over 256 weeks). 6 categories of A&E attendance are identified; some that we would expect to be influenced by extended GP opening (e.g. ‘minor’) and some that we would not (e.g. ‘accident’). Pilot practices were not randomly selected, and those that were selected had a significantly higher patient-GP ratio. The authors run difference-in-difference analyses on the outcomes using Poisson regression models. Total weekend attendances dropped by 17.9%, with moderate cases exhibiting the greatest drop. Minor cases were not affected. There was also a 10% drop in weekend admissions and a 20% drop in ambulance usage, suggesting major cost savings. A small spillover effect was observed for weekdays. The authors divide their sample into age groups and find that the fall in A&E attendances was greatest in the over 60s, who account for almost all of the drop in weekend admissions. The authors speculate that this may be due to A&E staff being risk averse with elderly patients whose backgrounds they do not know, and that GPs may be better able to assess the seriousness of the case. Patients from wealthier neighbourhoods exhibited a relatively greater drop in A&E attendances. So it looks like 7-day opening for GP services could relieve a lot of pressure on A&E departments. What’s lacking from the paper though is an explicit estimate of the cost savings (if, indeed, there were any). The pilot was funded to the tune of £50 million. Unfortunately this study doesn’t tell us whether or not it was worth it.
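The difference-in-difference logic behind the 17.9% figure is simple to sketch. With count outcomes and a Poisson model the effect is naturally read in logs, i.e. as a percentage change; the weekly attendance counts below are invented for illustration.

```python
import math

# A stylised difference-in-differences calculation on weekly weekend A&E
# attendance counts. The numbers are hypothetical, not from the paper.
counts = {
    ("pilot",   "before"): 200, ("pilot",   "after"): 170,
    ("control", "before"): 210, ("control", "after"): 215,
}

# Change in log counts for pilot practices, net of the change for controls
did_log = ((math.log(counts[("pilot", "after")])
            - math.log(counts[("pilot", "before")]))
           - (math.log(counts[("control", "after")])
              - math.log(counts[("control", "before")])))

pct_change = (math.exp(did_log) - 1) * 100
print(f"{pct_change:.1f}% change in weekend attendances attributable to the pilot")
```

The real analysis does this within a Poisson regression with practice and time controls, which amounts to the same log-scale comparison while adjusting for the non-random selection of pilot practices.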

Cost-effectiveness analysis in R using a multi-state modeling survival analysis framework: a tutorial. Medical Decision Making [PubMed] Published 8th June 2016

To say my practical understanding of R is rudimentary would be a grand overstatement. But I do understand the benefits of the increasingly ubiquitous open source stats software. People frown hard when I tell them that we often build Markov models in Excel. An alternative script-based approach could clearly increase the transparency of decision models and do away with black box problems. This paper does what it says on the tin and guides the reader through the process of developing a state-based (e.g. Markov) transition model. But the key novelty of the paper is the description of a tool for ‘testing’ the Markov assumption that might be built into a decision model. This is the ‘state-arrival extended model’, which entails the inclusion of a covariate to represent the history from the start of the model. A true Markov model is only interested in time in the current state, so if this extra covariate matters to the results then we can reject the Markov assumption and instead implement a semi-Markov model (or maybe something else). The authors do just this using an example from a previously published trial. I dare say the authors could have figured out that the Markov assumption wouldn’t hold without using such a test, but it’s good to have a justification for model choice. The basis for the tutorial is a 12-step program, and the paper explains each step. The majority of processes are based on adaptations of an existing R package called mstate. It assumes that time is continuous rather than discrete and can handle alternative parametric distributions for survival. Visual assessment of fit is built into the process to facilitate model selection. Functions are defined to compute QALYs and costs associated with states and PSA is implemented with generation of cost-effectiveness planes and CEACs. But your heart may sink when the authors state that “It is assumed that individual patient data are available”. The authors provide a thorough discussion of the ways in which a model might be constructed when individual level data aren’t available. But ultimately this seems like a major limitation of the approach, or at least of the usefulness of this particular tutorial. So don’t throw away your copy of Briggs/Sculpher/Claxton just yet.
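The intuition behind the state-arrival test can be sketched without the mstate machinery. Below, simulated Sick-to-Dead transitions secretly depend on history (how long the patient was Well before becoming Sick); stratifying the estimated transition probability by that history reveals the dependence. Everything here is hypothetical and in Python; the paper does this properly with a covariate in an mstate regression.

```python
import random
from statistics import mean

# Sketch of the 'state-arrival extended model' idea: does history (time
# since the start of the model) predict the next transition? If yes, the
# Markov assumption fails. All numbers are hypothetical.
random.seed(7)

records = []
for _ in range(20_000):
    time_well = random.randint(1, 10)      # years Well before entering Sick
    p_die = 0.05 + 0.02 * time_well        # risk secretly depends on history
    records.append((time_well, random.random() < p_die))

# Stratify the observed Sick->Dead probability by prior history
short = mean(died for tw, died in records if tw <= 3)
long_ = mean(died for tw, died in records if tw >= 8)
print(f"P(die | short history) = {short:.2f}, "
      f"P(die | long history) = {long_:.2f}")
```

The two probabilities differ, so 'time since model start' carries information about the next transition: a plain Markov model is misspecified here and a semi-Markov (or other) structure is warranted.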

Do waiting times affect health outcomes? Evidence from coronary bypass. Social Science & Medicine [PubMed] Published 30th May 2016

Many health economists are quite happy with waiting lists being used as a basis for rationing in health services like the NHS. But, surely, this is conditional on the delay in treatment not affecting either current health or the potential benefit of treatment. This new study provides evidence from coronary bypass surgery. Hospital Episode Statistics for 133,166 patients for the years 2000-2010 are used to look at 2 outcomes: 30-day mortality and 28-day readmission. During the period, policy resulted in the reduction of waiting times from 220 to 50 days. Three empirical strategies are employed: i) annual cross-sectional estimation of the probability of the 2 outcomes occurring in patients, ii) panel analysis of hospital-level data over the 11 years to evaluate the impact of different waiting time reductions and iii) full analysis of patient-specific waiting times across all years using an instrumental variable based on waiting times for an alternative procedure. For the first analysis, the study finds no effect of waiting times on mortality in any year bar 2003 (in which the effect was negative). A weak association is found with readmission: doubling waiting times increases the risk of readmission from 4.05% to 4.54%. The hospital-level analysis finds a lack of effect on both counts. The full panel analysis finds that longer waiting times reduce mortality, but the authors suggest that this is probably due to some unobserved heterogeneity. Longer waiting times may have a negative effect on people’s health, but it isn’t likely that this effect is dramatic enough to increase mortality. This might be thanks to effective prioritisation in the NHS.
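The instrumental variable logic in strategy (iii) can be illustrated with the simple Wald form of two-stage least squares: the effect of waiting time on the outcome is cov(z, y) / cov(z, x), where z is the instrument (the waiting time for an alternative procedure). The simulation below is entirely my own: unobserved severity both shortens waits and raises risk, biasing a naive regression, while the instrument recovers the true effect.

```python
import random

# A sketch of the Wald/IV estimator used in the paper's third strategy.
# All data are simulated with hypothetical parameters.
random.seed(3)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

n = 50_000
beta = 0.002                # true effect of waiting (per day) on outcome risk
z, x, y = [], [], []
for _ in range(n):
    zi = random.gauss(0, 30)                 # instrument: alt-procedure wait
    ui = random.gauss(0, 1)                  # unobserved severity (confounder)
    xi = 100 + 0.8 * zi - 20 * ui + random.gauss(0, 10)   # own waiting time
    yi = beta * xi + 0.1 * ui + random.gauss(0, 0.1)      # e.g. readmission risk
    z.append(zi); x.append(xi); y.append(yi)

ols = cov(x, y) / cov(x, x)   # biased: severe cases wait less AND do worse
iv = cov(z, y) / cov(z, x)    # Wald/IV estimate, close to the true beta
print(round(ols, 4), round(iv, 4))
```

The naive estimate is badly attenuated because sicker patients are fast-tracked, while the instrument, which shifts waiting times but is unrelated to the individual patient's severity, recovers something close to the true effect.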