Chris Sampson’s journal round-up for 11th March 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Identification, review, and use of health state utilities in cost-effectiveness models: an ISPOR Good Practices for Outcomes Research Task Force report. Value in Health [PubMed] Published 1st March 2019

When modellers select health state utility values to plug into their models, they often do it in an ad hoc and unsystematic way. This ISPOR Task Force report seeks to address that.

The authors discuss the process of searching, reviewing, and synthesising utility values. Searches need to use iterative techniques because evidence requirements develop as a model develops. Due to the scope of models, it may be necessary to develop multiple search strategies (for example, for different aspects of disease pathways). Searches needn’t be exhaustive, but they should be systematic and transparent. The authors provide a list of factors that should be considered in defining search criteria. In reviewing utility values, both quality and appropriateness should be considered. Quality is indicated by the precision of the evidence, the response rate, and missing data. Appropriateness relates to the extent to which the evidence being reviewed conforms to the context of the model in which it is to be used. This includes factors such as the characteristics of the study population, the measure used, the value sets used, and the timing of data collection. When it comes to synthesis, the authors suggest it might not be meaningful in most cases, because of variation in methods. We can’t pool values if they aren’t (at least roughly) equivalent. Therefore, one approach is to employ strict inclusion criteria (e.g. only EQ-5D, only a particular value set), but this isn’t likely to leave you with much. Meta-regression can be used to analyse more dissimilar utility values and provide insight into the impact of methodological differences. But the extent to which this can provide pooled values for a model is questionable, and the authors concede that more research is needed.
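Where a set of values does meet strict equivalence criteria, the pooling itself is simple. Here’s a minimal sketch of fixed-effect, inverse-variance pooling – the utility values and standard errors are invented for illustration, not drawn from any study:

```python
import math

# Hypothetical utility values with standard errors, assumed to come from
# studies using the same measure and value set (illustrative numbers only).
estimates = [(0.72, 0.03), (0.68, 0.05), (0.75, 0.04)]  # (utility, SE)

# Fixed-effect, inverse-variance pooling: weight each value by 1/SE^2,
# so more precise estimates contribute more to the pooled value.
weights = [1 / se**2 for _, se in estimates]
pooled = sum(w * u for (u, _), w in zip(estimates, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(round(pooled, 3), round(pooled_se, 3))
```

The pooled standard error is necessarily smaller than any individual study’s – which is precisely why pooling non-equivalent values is dangerous: it manufactures precision that isn’t there.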

This paper can inform that future research. Not least in its attempt to specify minimum reporting standards. We have another checklist, with another acronym (SpRUCE). The idea isn’t so much that this will guide publications of systematic reviews of utility values, but rather that modellers (and model reviewers) can use it to assess whether the selection of utility values was adequate. The authors then go on to offer methodological recommendations for using utility values in cost-effectiveness models, considering issues such as modelling technique, comorbidities, adverse events, and sensitivity analysis. It’s early days, so the recommendations in this report ought to be changed as methods develop. Still, it’s a first step away from the ad hoc selection of utility values that (no doubt) drives the results of many cost-effectiveness models.

Estimating the marginal cost of a life year in Sweden’s public healthcare sector. The European Journal of Health Economics [PubMed] Published 22nd February 2019

It’s only recently that health economists have gained access to data that enables the estimation of the opportunity cost of health care expenditure at the national level – what is sometimes referred to as a supply-side threshold. We’ve seen studies in the UK, Spain, and Australia, and here we have one from Sweden.

The authors use data on health care expenditure at the national (1970-2016) and regional (2003-2016) level, alongside estimates of remaining life expectancy by age and gender (1970-2016). First, they try a time series analysis, testing the direction of causality. Finding that the apparent causality runs from longevity to expenditure, rather than the reverse, the authors don’t take the time series analysis any further. Instead, the results are based on a panel data analysis, employing similar methods to estimates generated in other countries. The authors propose a conceptual model to support their analysis, which distinguishes it from other studies. In particular, the authors assert that the majority of the impact of expenditure on mortality operates through morbidity, which changes how the model should be specified. The number of newly graduated nurses is used as an instrument indicative of a supply-shift at the national rather than regional level. The models control for socioeconomic and demographic factors and morbidity not amenable to health care.

The authors estimate the marginal cost of a life year by dividing health care expenditure by the expenditure elasticity of life expectancy, finding an opportunity cost of €38,812 (with a massive 95% confidence interval). Using Swedish population norms for utility values, this would translate into around €45,000/QALY.
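The conversion from life years to QALYs is back-of-the-envelope arithmetic. A sketch, assuming a mean utility weight of about 0.86 – my guess at a figure that reconciles the paper’s two numbers, not a value reported by the authors:

```python
# Marginal cost of a life year from the paper's panel analysis.
cost_per_life_year = 38_812  # EUR

# Assumed average utility weight from population norms (illustrative;
# roughly what reconciles the paper's two headline figures).
mean_utility = 0.86

# A life year lived at utility 0.86 is worth 0.86 QALYs, so the cost
# per QALY is the cost per life year scaled up accordingly.
cost_per_qaly = cost_per_life_year / mean_utility
print(round(cost_per_qaly))  # around EUR 45,000
```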

The analysis is carefully considered and makes plain the difficulty of estimating the marginal productivity of health care expenditure. It looks like a nail in the coffin for the idea of estimating opportunity costs using time series. For now, at least, estimates of opportunity cost will be based on variation across geography, rather than time. In their excellent discussion, the authors are candid about the limitations of their model. Their instrument wasn’t perfect, and it looks like there may have been important confounding variables that they couldn’t control for.

Frequentist and Bayesian meta‐regression of health state utilities for multiple myeloma incorporating systematic review and analysis of individual patient data. Health Economics [PubMed] Published 20th February 2019

The first paper in this round-up was about improving practice in the systematic review of health state utility values, and it indicated the need for more research on the synthesis of values. Here, we have some. In this study, the authors conduct a meta-analysis of utility values alongside an analysis of registry and clinical study data for multiple myeloma patients.

A literature search identified 13 ‘methodologically appropriate’ papers, providing 27 health state utility values. The EMMOS registry included data for 2,445 patients in 22 countries and the APEX clinical study included 669 patients, all with EQ-5D-3L data. The authors implement both a frequentist meta-regression and a Bayesian model. In both cases, the models were run including all values and then with a limited set of only EQ-5D values. These models predicted utility values based on the number of treatment classes received and the rate of stem cell transplant in the sample. The priors used in the Bayesian model were based on studies that reported general utility values for the presence of disease (rather than according to treatment).

The frequentist models showed that utility was low at diagnosis, higher at first treatment, and lower at each subsequent treatment. Stem cell transplant had a positive impact on utility values independent of the number of previous treatments. The results of the Bayesian analysis were very similar, which the authors suggest is due to weak priors. An additional Bayesian model was run with preferred data but vague priors, to assess the sensitivity of the model to the priors. At later stages of disease (for which data were more sparse), there was greater uncertainty. The authors provide predicted values from each of the five models, according to the number of treatment classes received. The models provide slightly different results, except in the case of newly diagnosed patients (where the difference was 0.001). For example, the ‘EQ-5D only’ frequentist model gave a value of 0.659 for one treatment, while the Bayesian model gave a value of 0.620.
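The intuition that weak priors leave the posterior dominated by the data can be shown with a toy normal-normal update – none of the numbers below come from the paper:

```python
# Toy conjugate (normal-normal) update for a mean utility value.
prior_mean, prior_var = 0.65, 0.10       # weak prior: large variance
data_mean, data_var, n = 0.62, 0.01, 27  # invented 'observed' values

# Posterior precision is the sum of prior and data precisions;
# the posterior mean is the precision-weighted average of the two.
post_precision = 1 / prior_var + n / data_var
post_mean = (prior_mean / prior_var + n * data_mean / data_var) / post_precision

# With a weak prior and 27 observations, the data dominate,
# so the posterior mean sits almost exactly on the sample mean.
print(round(post_mean, 4))
```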

I’m not sure that the study satisfies the recommendations outlined in the ISPOR Task Force report described above (though that would be an unfair challenge, given the timing of publication). We’re told very little about the nature of the studies that are included, so it’s difficult to judge whether they should have been combined in this way. However, the authors state that they have made their data extraction and source code available online, which means I could check that out (though, having had a look, I can’t find the material that the authors refer to, reinforcing my hatred for the shambolic ‘supplementary material’ ecosystem). The main purpose of this paper is to progress the methods used to synthesise health state utility values, and it does that well. Predictably, the future is Bayesian.


Chris Sampson’s journal round-up for 17th September 2018


Does competition from private surgical centres improve public hospitals’ performance? Evidence from the English National Health Service. Journal of Public Economics Published 11th September 2018

This study looks at proper (supply-side) privatisation in the NHS. The subject is the government-backed introduction of Independent Sector Treatment Centres (ISTCs), which, in the name of profit, provide routine elective surgical procedures to NHS patients. ISTCs were directed to areas with high waiting times and began rolling out from 2003.

The authors take pre-surgery length of stay as a proxy for efficiency and hypothesise that the entry of ISTCs would improve efficiency in nearby NHS hospitals. They also hypothesise that the ISTCs would cream-skim healthier patients, leaving NHS hospitals to foot the bill for a more challenging casemix. Difference-in-difference regressions are used to test these hypotheses, the treatment group being those NHS hospitals close to ISTCs and the control being those not likely to be affected. The authors use patient-level Hospital Episode Statistics from 2002-2008 for elective hip and knee replacements.
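The basic difference-in-difference contrast amounts to four group means. A sketch with invented log length-of-stay figures (not the paper’s data):

```python
# Mean log pre-surgery length of stay (invented numbers).
treated_pre, treated_post = 0.90, 0.70   # NHS hospitals near an ISTC
control_pre, control_post = 0.88, 0.84   # hospitals assumed unaffected

# Difference-in-differences: the treated group's change, net of the
# control group's change. In logs, this is roughly a percentage effect.
did = (treated_post - treated_pre) - (control_post - control_pre)
print(f"{did:.2f}")  # about -0.16, i.e. a ~16% fall
```

The regression version adds patient-level covariates and fixed effects, but the identifying contrast is exactly this one – which is why the parallel-trends worry discussed below matters so much.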

The key difficulty here is that the trend in length of stay changed dramatically at the time ISTCs began to be introduced, regardless of whether a hospital was affected by their introduction. This is because there was a whole suite of policy and structural changes being implemented around this period, many targeting hospital efficiency. So we’re looking at comparing new trends, not comparing changes in existing levels or trends.

The authors’ hypotheses prove right. Pre-surgery length of stay fell in exposed hospitals by around 16%. The ISTCs engaged in risk selection, meaning that NHS hospitals were left with sicker patients. What’s more, the savings for NHS hospitals (from shorter pre-surgery length of stay) were more than offset by an increase in post-surgery length of stay, which may have been due to the change in casemix.

I’m not sure how useful difference-in-difference is in this case. We don’t know what the trend would have been without the intervention because the pre-intervention trend provides no clues about it and, while the outcome is shown to be unrelated to selection into the intervention, we don’t know whether selection into the ISTC intervention was correlated with exposure to other policy changes. The authors do their best to quell these concerns about parallel trends and correlated policy shocks, and the results appear robust.

Broadly speaking, the study satisfies my prior view of for-profit providers as leeches on the NHS. Still, I’m left a bit unsure of the findings. The problem is, I don’t see the causal mechanism. Hospitals had the financial incentive to be efficient and achieve a budget surplus without competition from ISTCs. It’s hard (for me, at least) to see how reduced length of stay has anything to do with competition unless hospitals used it as a basis for getting more patients through the door, which, given that ISTCs were introduced in areas with high waiting times, the hospitals could have done anyway.

While the paper describes a smart and thorough analysis, the findings don’t tell us whether ISTCs are good or bad. Both the length of stay effect and the casemix effect are ambiguous with respect to patient outcomes. If only we had some PROMs to work with…

One method, many methodological choices: a structured review of discrete-choice experiments for health state valuation. PharmacoEconomics [PubMed] Published 8th September 2018

Discrete choice experiments (DCEs) are in vogue when it comes to health state valuation. But there is disagreement about how they should be conducted. Studies can differ in terms of the design of the choice task, the design of the experiment, and the analysis methods. The purpose of this study is to review what has been going on; how have studies differed and what could that mean for our use of the value sets that are estimated?

A search of PubMed for valuation studies using DCEs – including generic and condition-specific measures – turned up 1132 citations, of which 63 were ultimately included in the review. Data were extracted and quality assessed.

The ways in which the studies differed, and the ways in which they were similar, hint at what’s needed from future research. The majority of recent studies were conducted online. This could be problematic if we think self-selecting online panels aren’t representative. Most studies used five or six attributes to describe options and many included duration as an attribute. The methodological tweaks necessary to anchor at 0=dead were a key source of variation. Those using duration varied in terms of the number of levels presented and the range of duration (from 2 months to 50 years). Other studies adopted alternative strategies. In DCE design, there is a necessary trade-off between statistical efficiency and the difficulty of the task for respondents. A variety of methods have been employed to try and ease this difficulty, but there remains a lack of consensus on the best approach. An agreed criterion for this trade-off could facilitate consistency. Some of the consistency that does appear in the literature is due to conformity with EuroQol’s EQ-VT protocol.
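The statistical-efficiency half of that trade-off hinges on how informative each choice is under the logit model that underpins DCE analysis. A minimal sketch of the choice probability, with invented utilities for two profiles:

```python
import math

# Deterministic utilities of two health profiles (invented numbers;
# in practice these are linear combinations of attribute levels).
v_a, v_b = -0.5, -1.0

# Logit probability of choosing profile A over B: the utility
# difference maps into a choice probability via the logistic function.
p_a = math.exp(v_a) / (math.exp(v_a) + math.exp(v_b))
print(round(p_a, 3))
```

Choices where the probability sits near 0.5 carry the most statistical information but are also the hardest for respondents – which is the trade-off the review highlights.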

Unfortunately, for casual users of DCE valuations, all of this means that we can’t just assume that a DCE is a DCE is a DCE. Understanding the methodological choices involved is important in the application of resultant value sets.

Trusting the results of model-based economic analyses: is there a pragmatic validation solution? PharmacoEconomics [PubMed] Published 6th September 2018

Decision models are almost never validated. This means that – save for a superficial assessment of their outputs – they are taken on good faith. That should be a worry. This article builds on the experience of the authors to outline why validation doesn’t take place and to try to identify solutions. This experience includes a pilot study in France, NICE Evidence Review Groups, and the perspective of a consulting company modeller.

There are a variety of reasons why validation is not conducted, but resource constraints are a big part of it. Neither HTA agencies, nor modellers themselves, have the time to conduct validation and verification exercises. The core of the authors’ proposed solution is to end the routine development of bespoke models. Models – or, at least, parts of models – need to be taken off the shelf. Thus, open source or otherwise transparent modelling standards are a prerequisite for this. The key idea is to create ‘standard’ or ‘reference’ models, which can be extensively validated and tweaked. The most radical aspect of this proposal is that they should be ‘freely available’.

But rather than offering a path to open source modelling, the authors offer recommendations for how we should conduct ourselves until open source modelling is realised. These include the adoption of a modular and incremental approach to modelling, combined with more transparent reporting. I agree; we need a shift in mindset. Yet, the barriers to open source models are – I believe – the same barriers that would prevent these recommendations from being realised. Modellers don’t have the time or the inclination to provide full and transparent reporting. There is no incentive for modellers to do so. The intellectual property value of models means that public release of incremental developments is not seen as a sensible thing to do. Thus, the authors’ recommendations appear to me to be dependent on open source modelling, rather than an interim solution while we wait for it. Nevertheless, this is the kind of innovative thinking that we need.


Simon McNamara’s journal round-up for 6th August 2018


Euthanasia, religiosity and the valuation of health states: results from an Irish EQ5D5L valuation study and their implications for anchor values. Health and Quality of Life Outcomes [PubMed] Published 31st July 2018

Do you support euthanasia? Do you think there are health states worse than death? Are you religious? Don’t worry – I am not commandeering this week’s AHE journal round-up just to bombard you with a series of difficult questions. These three questions form the foundation of the first article selected for this week’s round-up.

The paper is based upon the hypothesis that your religiosity (“adherence to religious beliefs”) is likely to impact your support for euthanasia and, subsequently, the likelihood of you valuing severe health states as worse than death. This seems like a logical hypothesis. Religions tend to be anti-euthanasia, and so it appears likely that religious people will have lower levels of support for euthanasia than non-religious people. Equally, if you don’t support the principle of euthanasia, it stands to reason that you are likely to be less willing to choose immediate death over living in a severe health state – something you would need to do for a health state to be considered worse than death in a time trade-off (TTO) study.

The authors test this hypothesis using a sub-sample of data (n=160) collected as part of the Irish EQ-5D-5L TTO valuation study. Perhaps unsurprisingly, the authors find evidence in support of the above hypotheses. Those who attend a religious service weekly were more likely to oppose euthanasia than those who attend a few times a year or less, and those who oppose euthanasia were less likely to give “worse than death” responses in the TTO than those who support it.

I found this paper really interesting, as it raises a number of challenging questions. If a society is made up of people with heterogeneous beliefs regarding religion, how should we balance these in the valuation of health? If a society is primarily non-religious is it fair to apply this valuation tariff to the lives of the religious, and vice versa? These certainly aren’t easy questions to answer, but may be worth reflecting on.

E-learning and health inequality aversion: A questionnaire experiment. Health Economics [PubMed] [RePEc] Published 22nd July 2018

Moving on from the cheery topic of euthanasia, what do you think about socioeconomic inequalities in health? In my home country, England, if you are from the poorest quintile of society, you can expect to experience 62 years in full health in your lifetime, whilst if you are from the richest quintile, you can expect to experience 74 years – a gap of 12 years.

In the second paper to be featured in this round-up, Cookson et al. explore the public’s willingness to sacrifice incremental population health gains in order to reduce these inequalities in health – their level of “health inequality aversion”. This is a potentially important area of research, as the vast majority of economic evaluation in health is distributionally-naïve and effectively assumes that members of the public aren’t at all concerned with inequalities in health.

The paper builds on prior work conducted by the authors in this area, in which they noted that a high proportion of respondents in health inequality aversion elicitation studies appear to be so averse to inequalities that they violate monotonicity – they choose scenarios that reduce inequalities in health even when doing so reduces the health of the rich with no gain to the poor, reduces the health of the poor, or reduces the health of both groups. The authors hypothesise that these monotonicity violations may be due to incomplete thinking by participants, and suggest that the quality of their thinking could be improved by two e-learning educational interventions. The primary aim of the paper is to test the impact of these interventions in a sample of the UK public (n=60).
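The monotonicity check that identifies these ‘extreme egalitarian’ responses is easy to state. A sketch with a hypothetical pair of scenarios (the numbers are mine, not the paper’s):

```python
# Life expectancy (years) for the richest and poorest groups under two
# hypothetical policy scenarios (invented numbers).
scenario_a = {"rich": 74, "poor": 62}
scenario_b = {"rich": 72, "poor": 62}  # rich lose, poor gain nothing

def dominates(x, y):
    """True if scenario x is at least as good as y for both groups
    and strictly better for at least one."""
    return (x["rich"] >= y["rich"] and x["poor"] >= y["poor"]
            and (x["rich"] > y["rich"] or x["poor"] > y["poor"]))

# Choosing B over A violates monotonicity, because A dominates B:
# B narrows the gap only by making the rich worse off.
chosen = scenario_b
violates = dominates(scenario_a, chosen)
print(violates)  # True
```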

The first e-learning intervention was an animated video that described a range of potential positions that a respondent could take (e.g. health maximisation, or maximising the health of the worst off). The second was an interactive spreadsheet-based questionnaire that presented the consequences of the participant’s choices, prior to them confirming their selection. Both interventions are available online.

The authors found that the interactive tool significantly reduced the number of extreme egalitarian (monotonicity-violating) responses, compared to a non-interactive, paper-based version of the study. Similarly, when the video was watched before completing the paper-based exercise, the number of extreme egalitarian responses fell. However, when the video was watched before the interactive tool there was no further decrease in extreme egalitarianism. Despite this reduction in extreme egalitarianism, median levels of inequality aversion remained high, with implied weights of 2.6 (interactive questionnaire group) and 7.0 (video group) for QALY gains granted to someone from the poorest fifth of society, relative to the richest fifth.
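Applying such a weight in an evaluation is simple arithmetic. A sketch using the 2.6 weight with invented QALY gains:

```python
# Implied inequality-aversion weight for the poorest fifth relative to
# the richest fifth (from the interactive questionnaire arm).
weight_poorest = 2.6

# Invented QALY gains from some intervention, by group.
gain_richest, gain_poorest = 1.0, 0.5

# A distributionally-naive evaluation just sums the gains; an
# equity-weighted evaluation scales up gains to the poorest group.
unweighted = gain_richest + gain_poorest
weighted = gain_richest + weight_poorest * gain_poorest
print(unweighted, weighted)
```

Under the weighted count, half a QALY to the poorest group outweighs a full QALY to the richest – which is the practical bite of a weight of 2.6.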

This is an interesting study that provides further evidence of inequality aversion, and raises further concern about the practical dominance of distributionally-naïve approaches to economic evaluation. The public does seem to care about distribution. Furthermore, the paper demonstrates that participant responses to inequality aversion exercises are shaped by the information given to them, and the way that information is presented. I look forward to seeing more studies like this in the future.

A new method for valuing health: directly eliciting personal utility functions. The European Journal of Health Economics [PubMed] [RePEc] Published 20th July 2018

Last, but not least, for this round-up, is a paper by Devlin et al. on a new method for valuing health.

The relative valuation of health states is a pretty important topic for health economists. If we are to quantify the effectiveness, and subsequently cost-effectiveness, of an intervention, we need to understand which health states are better than others, and how much better they are. Traditionally, this is done by asking members of the public to choose between different health profiles featuring differing levels of fulfilment of a range of domains of health, in order to ‘uncover’ the relative importance the respondent places on these domains and levels. These can then be used to generate social tariffs that assign a utility value to a given health state for use in economic evaluation.

The authors point out that, in the modern day, valuation studies can be conducted rapidly, and at scale, online, but at the potential cost of deliberation from participants, and the resultant risk of heuristic-dominated decision making. In response to this, the authors propose a new method – the direct elicitation of personal utility functions – and pilot its use for the valuation of EQ-5D in a sample of the English public (n=76).

The proposed approach differs from traditional approaches in three key ways. Firstly, instead of simply attempting to infer the relative importance that participants place on differing domains based upon choices between health profiles, the respondents are asked directly about the relative importance they place on differing domains of health, prior to validating these with profile choices. Secondly, the authors place a heavy emphasis on deliberation, and the construction, rather than uncovering, of preferences during the elicitation exercises. Thirdly, a “personal utility function” for each individual is constructed (in effect a personal EQ-5D tariff), and these individual utility functions are subsequently aggregated into a social utility function.
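The aggregation step can be sketched as averaging each respondent’s personal tariff into a social one – the dimension names are EQ-5D’s, but the decrements are invented:

```python
# Each respondent's personal utility decrements for a 'severe problems'
# level on two EQ-5D dimensions (invented numbers for three people).
personal_tariffs = [
    {"mobility": -0.20, "pain": -0.35},
    {"mobility": -0.30, "pain": -0.25},
    {"mobility": -0.25, "pain": -0.30},
]

# Social utility function: the mean decrement across respondents.
dimensions = personal_tariffs[0].keys()
social_tariff = {d: sum(p[d] for p in personal_tariffs) / len(personal_tariffs)
                 for d in dimensions}

# Value of a state with severe problems on both dimensions,
# starting from full health at 1.
state_value = 1 + sum(social_tariff.values())
print(social_tariff, round(state_value, 2))
```

The paper leaves open how aggregation should actually be done (a simple mean is only one normative choice among several), but the mechanics are no more complicated than this.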

In the pilot, the authors find that the method appears feasible for wider use, albeit with some teething troubles associated with the computer-based tool developed to implement it, and the skills of the interviewers.

This direct method raises an interesting question for health economics – should we be inferring preferences based upon choices between profiles that differ in terms of certain attributes, or should we just ask directly about the attributes? This is a tricky question. It is possible that these different approaches could elicit different preferences – if they do, on what grounds should we choose one or the other? This requires a normative judgment, and at present, it appears both are (potentially) as legitimate as each other.

Whilst the authors apply this direct method to the valuation of health, I don’t see why similar approaches couldn’t be applied to any multi-attribute choice experiment. Keep your eyes out for future uses of it in valuation, and perhaps beyond? It will be interesting to see how it develops.