Chris Sampson’s journal round-up for 25th March 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

How prevalent are implausible EQ-5D-5L health states and how do they affect valuation? A study combining quantitative and qualitative evidence. Value in Health Published 15th March 2019

The EQ-5D-5L is able to describe a lot of different health states (3,125, to be precise), including some that don’t seem likely to ever be observed. For example, it’s difficult to conceive of somebody having extreme problems in pain/discomfort and anxiety/depression while also having no problems with usual activities. Valuation studies exclude these kinds of states because it’s thought that their inclusion could negatively affect the quality of the data. But there isn’t much evidence to help us understand how ‘implausibility’ might affect valuations, or which health states are seen as implausible.

This study is based on an EQ-5D-5L valuation exercise with 890 students in China. The valuation was conducted using the EQ VAS, rather than the standard EuroQol valuation protocol, with up to 197 states being valued by each student. Two weeks after conducting the valuation, participants were asked to indicate (yes or no) whether or not the states were implausible. After that, a small group were invited to participate in a focus group or interview.

No health state was unanimously identified as implausible. Only four states were unanimously rated as not being implausible. 910 of the 3,125 states defined by the EQ-5D-5L were rated implausible by at least half of the people who rated them. States more commonly rated as implausible were of moderate severity overall, but with divergent severities across dimensions (i.e. 5s and 1s together). Overall, implausibility was associated with lower valuations.
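For a sense of scale, the 3,125 figure is just 5^5: five dimensions with five levels each. A minimal sketch in Python, where the 'contains both a 1 and a 5' rule is my illustrative proxy for divergent severity, not the study's definition of implausibility:

```python
from itertools import product

# EQ-5D-5L: 5 dimensions (mobility, self-care, usual activities,
# pain/discomfort, anxiety/depression), each with levels 1-5.
states = list(product(range(1, 6), repeat=5))
print(len(states))  # 3125 = 5**5

# Illustrative proxy for 'divergent severity': states mixing the
# most severe level (5) with the least severe level (1).
divergent = [s for s in states if 1 in s and 5 in s]
print(len(divergent))  # 1320 states mix a 1 with a 5

# The example from above: no problems with usual activities (1) but
# extreme pain/discomfort (5) and anxiety/depression (5).
print((1, 1, 1, 5, 5) in divergent)  # True
```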

Four broad themes arose from the qualitative work, namely i) reasons for implausibility, ii) difficulties in valuing implausible states, iii) strategies for valuing implausible states, and iv) values of implausible states. Some states were considered to have logical conflicts, with some dimensions being seen as mutually inclusive (e.g. walking around is a usual activity). The authors outline the themes and sub-themes, which are a valuable contribution to our understanding of what people think when they complete a valuation study.

This study makes plain that there is a lot of heterogeneity in perceptions of implausibility. But the paper doesn’t fully address the issue of what plausibility actually means. The authors describe it as subjective. I’m not sure about that. For me, it’s an empirical question. If states are observed in practice, they are plausible. We need meaningful valuations of states that are observed, so perhaps the probability of a state being included in a valuation exercise should correspond to the probability of it being observed in reality. The difficulty of valuing a state may relate to plausibility – as this work shows – but that difficulty is a separate issue. Future research on implausible health states should be aligned with research on respondents’ experience of health states. Individuals’ judgments about the plausibility of health states (and the accuracy of those judgments) will depend on their experience.

An EU-wide approach to HTA: an irrelevant development or an opportunity not to be missed? The European Journal of Health Economics [PubMed] Published 14th March 2019

The use of health technology assessment is now widespread across the EU. The European Commission recently saw an opportunity to rationalise disparate processes and proposed new regulation for cooperation in HTA across EU countries. In particular, the proposal targets cooperation in the assessment of the relative effectiveness of pharmaceuticals and medical devices. A key purpose is to reduce duplication of efforts, but it should also make the basis for national decision-making more consistent.

The authors of this editorial argue that the regulation needs to provide more clarity in the definition of clinical value and in the quality of evidence that is acceptable, both of which vary across EU Member States. There is also a need for the EU to support early dialogue and scientific advice, and scope to support the generation and use of real-world evidence. The authors also argue that the challenges for medical device assessment are particularly difficult because many medical device companies cannot – or are not incentivised to – generate sufficient evidence for assessment.

As the final paragraph argues, EU cooperation in HTA isn’t likely to be associated with much in the way of savings. This is because appraisals will still need to be conducted in each country, as well as an assessment of country-specific epidemiology and other features of the population. The main value of cooperation could be in establishing a stronger position for the EU in negotiating in matters of drug design and evidence requirements. Not that we needed any more reasons to stop Brexit.

Patient-centered item selection for a new preference-based generic health status instrument: CS-Base. Value in Health Published 14th March 2019

I do not believe that we need a new generic measure of health. This paper was always going to have a hard time convincing me otherwise…

The premise for this work is that generic preference-based measures of health (such as the EQ-5D) were not developed with patients. True. So the authors set out to create one that is. A key feature of this study is the adoption of a framework that aligns with the multiattribute preference response model, whereby respondents rate their own health state relative to another. This is run through a mobile phone app.

The authors start by extracting candidate items from existing health frameworks and generic measures (which doesn’t seem to be a particularly patient-centred approach), and some domains were excluded for reasons that are not at all clear. After overlapping candidates were removed, 47 domains were included and classified as physical, mental, social, or ‘meta’. A market research company then conducted an online survey in which 2,256 ‘patients’ (people with diseases or serious complaints) were asked which 9 domains they thought were most important. Why 9? Because the authors figured it was the maximum that could fit on the screen of a mobile phone.

Of the candidate items, 5 were regularly selected in the survey: pain, personal relationships, fatigue, memory, and vision. Mobility and daily activities were also judged important enough to be included. Independence and self-esteem were added as paired domains and hearing was paired with the vision domain. The authors also added anxiety/depression as a pair of domains because they thought it was important. Thus, 12 items were included altogether, of which 6 were parts of pairs. Items were rephrased according to the researchers’ preferences. Each item was given 4 response levels.

It is true to say (as the authors do) that most generic preference-based measures (most notably the EQ-5D) were not developed with direct patient input. The argument goes that this somehow undermines the measure. But there are a) plenty of patient-centred measures for which preference-based values could be created and b) plenty of ways in which existing measures can be made patient-centred post hoc (n.b. our bolt-on study).

Setting aside my scepticism about the need for a new measure, I have a lot of problems with this study and with the resulting CS-Base instrument. The defining feature of its development seems to be arbitrariness. The underlying framework (as far as it is defined) does not seem well-grounded. The selection of items was largely driven by researchers. The wording was entirely driven by the researchers. The measure cannot justifiably be called ‘patient-centred’. It is researcher-centred, even if the researchers were able to refer to a survey of patients. And the whole thing has nothing whatsoever to do with preferences. The measure may prove fantastic at capturing health outcomes, but if it does it will be in spite of the methods used for its development, not because of them. Ironically, that would be a good advert for researcher-centred outcome development.

Proximity to death and health care expenditure increase revisited: a 15-year panel analysis of elderly persons. Health Economics Review [PubMed] [RePEc] Published 11th March 2019

It is widely acknowledged that – on average – people incur a large proportion of their lifetime health care costs in the last few years of their life. But there’s still a question mark over whether it is proximity to death that drives costs or age-related morbidity. The two have very different implications – we want people to be living for longer, but we probably don’t want them to be dying for longer. There’s growing evidence that proximity to death is very important, but it isn’t clear how important – if at all – ageing is. It’s important to understand this, particularly in predicting the impacts of demographic changes.

This study uses Swiss health insurance claims data for around 104,000 people over the age of 60, covering 1996 to 2011. Two-part regression models were used, first estimating the probability of any health care expenditure and then the level of expenditure conditional on it being greater than zero. The author analysed both birth cohorts and age classes to look at age-associated drivers of health care expenditure.
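For readers unfamiliar with two-part models, the idea is to separate the probability of incurring any expenditure from the level of expenditure among those who incur some. A minimal sketch with simulated data (variable names and coefficients are invented for illustration, not taken from the paper):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5_000

# Simulated covariates: age and time-to-death (proximity to death).
age = rng.uniform(60, 95, n)
ttd = rng.exponential(5.0, n)
X = sm.add_constant(np.column_stack([age, ttd]))

# Simulated expenditures: many zeros, right-skewed positives.
p_any = 1 / (1 + np.exp(-(-6 + 0.08 * age - 0.10 * ttd)))
any_spend = rng.random(n) < p_any
spend = np.where(any_spend, rng.gamma(2.0, 1_000.0, n), 0.0)

# Part 1: probability of any expenditure (logit).
part1 = sm.Logit(any_spend.astype(float), X).fit(disp=0)

# Part 2: expenditure level, given expenditure > 0 (gamma GLM, log link).
pos = spend > 0
part2 = sm.GLM(spend[pos], X[pos],
               family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# Combined expectation: E[y] = P(y > 0) * E[y | y > 0].
expected = part1.predict(X) * part2.predict(X)
print(expected[:5].round(0))
```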

As expected, health care expenditures increased with age. The models imply that proximity to death has grown in importance over time. For the 1931-35 birth cohort, for example, the proportion of expenditures explained by proximity to death rose from 19% to 31%. Expenditures were partly explained by morbidity, and this effect appeared to be relatively constant over time. Thus, proximity to death is not the only determinant of rising expenditures (even if it is an important one). Looking at different age classes over time, there was no clear picture in the trajectory of health care expenditures. For the oldest age groups (76-85), health care expenditures were growing, but for some of the younger groups, costs appeared to be decreasing over time. This study paints a complex picture of health care expenditures, calling for complex policy responses. Part of this could be supporting people to commence palliative care earlier, but there is also a need for more efficient management of chronic illness over the long term.


Harold Hastings’s journal round-up for 24th December 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Mandatory Medicare bundled payment program for lower extremity joint replacement and discharge to institutional postacute care: interim analysis of the first year of a 5-year randomized trial. JAMA [PubMed] Published 4th September 2018

I will focus on two themes: one local to the United States – bundled payments for Medicare – and one global – the economic burden of sepsis. Finkelstein, Ji, Mahoney, and Skinner described the results of a study aimed at assessing the effects of bundled Medicare payments (as opposed to payments for each component of treatment) upon the care and costs of lower extremity joint replacement. Finkelstein et al. found only one significant difference between the bundled care group and a control group: the percentage discharged to institutional care decreased from 33.7% in the control group to 30.8% in the bundled care group, that is, roughly one fewer patient per 34 treated. There was no significant difference in costs or quality of care. In this sense I must differ from the optimism of an associated editorial; to me, a true success would include a significant reduction in cost together with an improvement in outcome. Thus, in terms of bundled Medicare payments, we are not at the end, not even the beginning of the end, but perhaps near the end of the beginning (my apologies to Winston Churchill).
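As a quick arithmetic check on that 'one fewer patient per 34 treated' figure (simple arithmetic, not from the paper):

```python
# Absolute risk reduction in discharge to institutional care,
# and the implied number needed to treat.
arr = 0.337 - 0.308        # 2.9 percentage points
print(round(1 / arr, 1))   # ~34.5: about one fewer discharge per 34 treated
```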

Epidemiology and costs of sepsis in the United States—an analysis based on timing of diagnosis and severity level. Critical Care Medicine [PubMed] Published 1st December 2018

Epidemiology of sepsis in Brazil: incidence, lethality, costs, and other indicators for Brazilian Unified Health System hospitalizations from 2006 to 2015. PLoS One [PubMed] Published 13th April 2018

Sepsis care remains among the most significant health challenges world-wide, both in terms of economics and mortality, with mortality ranging from 10% to almost 80% depending upon severity. In terms of cost, sepsis treatment in the US averages over $18,000 per hospitalization with almost 1 million cases admitted annually, while Brazil spends around 1/30 of this amount (~$600 per hospitalization), and around 1/10 of this amount for sepsis treatment in the ICU ($1,700 per hospitalization). Mortality in Brazil is higher than that in the US, and higher in public hospitals than in private hospitals. The studies offer complementary suggestions for improvement: in the US study, Paoli et al. call for early detection of sepsis as a way to reduce its severity and thus its cost. In the Brazilian study, Neira et al. conclude that limited economic resources may contribute significantly to high mortality, an observation that should concern all of us interested in world-wide health. Clearly both improved detection and more effective, lower cost treatments are essential to address the health and economic burdens of sepsis. The following paper reviews a potential answer to the latter need – that of more effective, lower cost treatments.

Ascorbic acid, corticosteroids, and thiamine in sepsis: a review of the biologic rationale and the present state of clinical evaluation. Critical Care [PubMed] Published 29th October 2018

In terms of the cost of sepsis treatment, it is interesting to note that an intervention successful in a single-site, retrospective review involved a combination of three “cheap and readily available agents with a long safety record in clinical use since 1949.” Mortality decreased from 40% to 8.5%. The 2018 review describes a mixed reaction, based on informal cost/benefit/risk analysis, while nine trials are underway. If these trials prove successful, it might be hoped that the low cost would spur world-wide adoption of ascorbate-corticosteroid-thiamine therapy for sepsis – addressing a world-wide incidence of 15 million cases annually and mortality approaching 60% in less developed countries. An optimist might even hope for reduced mortality at significantly reduced costs, reminiscent of oral rehydration therapy for diarrhoea developed in Bangladesh 50 years ago and responsible for a 90% relative reduction in mortality.


James Altunkaya’s journal round-up for 3rd September 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Sensitivity analysis for not-at-random missing data in trial-based cost-effectiveness analysis: a tutorial. PharmacoEconomics [PubMed] [RePEc] Published 20th April 2018

Last month, we highlighted a Bayesian framework for imputing missing data in economic evaluation. The paper dealt with the issue of departure from the ‘Missing at Random’ (MAR) assumption by using a Bayesian approach to specify a plausible missingness model from the results of expert elicitation. This was used to estimate a prior distribution for the unobserved terms in the outcomes model.

For those less comfortable with Bayesian estimation, this month we highlight a tutorial paper from the same authors, outlining an approach to recognising the impact of plausible departures from ‘Missingness at Random’ assumptions on cost-effectiveness results. Given poor adherence to current recommendations for best practice in handling and reporting missing data, an incremental approach to improving missing data methods in health research may be more realistic. The authors supply accompanying Stata code.

The paper investigates the importance of assuming a degree of ‘informative’ missingness (i.e. ‘Missingness not at Random’) in sensitivity analyses. In a case study, the authors present a range of scenarios which assume a decrement of 5-10% in the quality of life of patients with missing health outcomes, compared to multiple imputation estimates based on observed characteristics under standard ‘Missing at Random’ assumptions. This represents an assumption that, controlling for all observed characteristics used in multiple imputation, those with complete quality of life profiles may have higher quality of life than those with incomplete surveys.

Quality of life decrements were implemented in the control and treatment arm separately, and then jointly, in six scenarios. This aimed to demonstrate the sensitivity of cost-effectiveness judgements to the possibility of a different missingness mechanism in each arm. The authors similarly investigate sensitivity to higher health costs in those with missing data than predicted based on observed characteristics in imputation under ‘Missingness at Random’. Finally, sensitivity to a simultaneous departure from ‘Missingness at Random’ in both health outcomes and health costs is investigated.
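The authors' accompanying code is in Stata; below is a rough Python sketch of the delta-adjustment idea, with all data simulated (missingness generated completely at random for simplicity) and a single-mean imputation standing in for a proper multiple-imputation model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated quality of life utilities, trial arm, and missingness.
qol = rng.normal(0.7, 0.1, n)
arm = rng.integers(0, 2, n)            # 0 = control, 1 = treatment
qol_obs = np.where(rng.random(n) < 0.3, np.nan, qol)

# Step 1: 'MAR' imputation - here, crudely, the observed arm mean.
imputed = qol_obs.copy()
for a in (0, 1):
    gap = (arm == a) & np.isnan(qol_obs)
    imputed[gap] = np.nanmean(qol_obs[arm == a])

# Step 2: MNAR sensitivity - decrement the imputed values only
# (the paper also varies this by arm; applied jointly here).
for delta in (0.00, 0.05, 0.10):
    adjusted = imputed.copy()
    adjusted[np.isnan(qol_obs)] *= 1 - delta
    diff = adjusted[arm == 1].mean() - adjusted[arm == 0].mean()
    print(f"delta = {delta:.2f}: incremental QoL = {diff:.4f}")
```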

The proposed sensitivity analyses provide a useful heuristic to assess what degree of difference between missing and non-missing subjects on unobserved characteristics would be necessary to change cost-effectiveness decisions. The authors admit this framework could appear relatively crude to those comfortable with more advanced missing data approaches such as those outlined in last month’s round-up. However, this approach should appeal to those interested in presenting the magnitude of uncertainty introduced by missing data assumptions, in a way that is easily interpretable to decision makers.

The impact of waiting for intervention on costs and effectiveness: the case of transcatheter aortic valve replacement. The European Journal of Health Economics [PubMed] [RePEc] Published September 2018

This paper appears in print this month and sparked interest as one of comparatively few studies on the cost-effectiveness of waiting lists. Given the attention being paid to constrained optimisation methods in health outcomes research, highlighted in this month’s editorial in Value in Health, there is rightly interest in extending the traditional sphere of economic evaluation from drugs and devices to understanding the trade-offs of investing in a wider range of policy interventions, using a common metric of costs and QALYs. Rachel Meacock’s paper earlier this year did a great job of outlining some of the challenges involved in broadening the scope of economic evaluation to more general decisions in health service delivery.

The authors set out to understand the cost-effectiveness of delaying a cardiac treatment (TAVR) using a waiting list of up to 12 months, compared to a policy of immediate treatment. The effectiveness of treatment at 3, 6, 9 & 12 months after initial diagnosis, health decrements during waiting, and corresponding health costs during the wait and post-treatment were derived from a small observational study. As treatment is studied in an elderly population, a non-ignorable proportion of patients die whilst waiting for surgery. This translates to lower modelled costs, but also fewer quality-adjusted life years, in modelled cohorts with any delay from a policy of immediate treatment. The authors conclude that eliminating all waiting time for TAVR would produce population health at a rate of ~€12,500 per QALY gained.

However, based on the modelling presented, the authors lack the ability to make cost-effectiveness judgements of this sort. Waiting lists exist for a reason, chiefly a lack of clinical capacity to treat patients immediately. In taking a decision to treat patients immediately in one disease area, we therefore need some judgement as to whether the health displaced among now-untreated patients in another disease area is of greater, lesser, or equal magnitude to that gained by treating TAVR patients immediately. Alternatively, the modelling should include the cost of acquiring additional clinical capacity (such as theatre space) to treat TAVR patients immediately, so as not to displace other treatments. In such a case, the ICER is likely to be much higher, due to the large cost of the new resources needed to reduce waiting times to zero.

Given the data available, a simple improvement to the paper would be to treat current waiting times (already gathered from the observational study) as the ‘standard of care’ arm. The estimated change in quality of life and health care resource cost from reducing waiting times to zero, relative to levels observed in current practice, could then be calculated. This could in turn be used to calculate the maximum acceptable cost of acquiring the additional treatment resources needed to treat patients with no waiting time, given current national willingness-to-pay thresholds.
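Under a standard net-benefit rule, that maximum acceptable spend is simply the threshold times the QALY gain, minus the extra treatment cost. A minimal sketch, with all figures hypothetical rather than taken from the paper:

```python
def max_acceptable_capacity_cost(delta_qaly, delta_cost, threshold):
    """Largest spend on extra capacity (per patient) at which
    eliminating waiting remains cost-effective:
    threshold * dQALY - dCost."""
    return threshold * delta_qaly - delta_cost

# Hypothetical figures: 0.05 QALYs gained and EUR 600 in extra
# treatment costs per patient, at a EUR 30,000 per QALY threshold.
print(max_acceptable_capacity_cost(0.05, 600.0, 30_000.0))  # 900.0
```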

Admittedly, there remain problems in using the authors’ chosen observational dataset to calculate quality of life and cost outcomes for patients treated at different time points. Patients in this ‘real world’ observational study were prioritised based on clinical assessment of their treatment need. It is therefore to be expected that the quality of life lost during a waiting period would be lower for patients treated at 12 months than for the group of patients judged to need immediate treatment. A previous study in cardiac care took on the more manageable task of investigating the cost-effectiveness of different prioritisation strategies for the waiting list, investigating the sensitivity of conclusions to varying a fixed maximum wait time for the last patient treated.

This study therefore demonstrates some of the difficulties in attempting to make cost-effectiveness judgements about waiting time policy. Given that the cost-effectiveness of reducing waiting times is expected to vary across disease areas, based on the relative importance of waiting for short- and long-term health outcomes and costs, this remains an interesting area for economic evaluation to explore. In the context of the current focus on constrained optimisation techniques across different areas of healthcare (see the ISPOR task force), it is likely that extending economic evaluation to a broader range of decision problems on a common scale will become increasingly important in future.

Understanding and identifying key issues with the involvement of clinicians in the development of decision-analytic model structures: a qualitative study. PharmacoEconomics [PubMed] Published 17th August 2018

This paper gathers evidence from interviews with clinicians and modellers, with the aim of improving the nature of the working relationship between the two fields during model development.

Researchers gathered opinion from a variety of settings, including industry. The main report focusses on evidence from two case studies – one tracking the working relationship between modellers and a single clinical advisor at a UK university, with the second gathering evidence from a UK policy institute – where modellers worked with up to 11 clinical experts per meeting.

Some of the authors’ conclusions are not particularly surprising. Modellers reported difficulty in recruiting clinicians to advise on model structures, and further difficulty in then engaging recruited clinicians to provide relevant advice for the model building process. Specific comments suggested difficulty for some clinical advisors in identifying representative patient experiences, instead diverting modellers’ attention towards rare outlier events.

Study responses suggested that currently only one or two clinicians are typically consulted during model development. The authors recommend involving a larger group of clinicians at this stage of the modelling process, with a more varied range of clinical experience (junior as well as senior clinicians, with some geographical variation). This is intended to help ensure that the clinical pathways modelled are generalizable. The experience of the single clinical collaborator in the university case study, compared to the 11 clinicians at the policy institute, may also illustrate a general problem of inadequate compensation for clinical time within the university system. The authors also advocate the availability of relevant training for clinicians in decision modelling, to help make participants’ time during model building more efficient. Clinicians sampled were supportive of this view – citing the need for further guidance from modellers on the nature of their expected contribution.

This study ties into the general literature regarding structural uncertainty in decision analytic models. In advocating the early contribution of a larger, more diverse group of clinicians in model development, the authors advocate a degree of alignment between clinical involvement during model structuring, and guidelines for eliciting parameter estimates from clinical experts. Similar problems, however, remain for both fields, in recruiting clinical experts from sufficiently diverse backgrounds to provide a valid sample.
