Chris Sampson’s journal round-up for 18th December 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Individualized glycemic control for U.S. adults with type 2 diabetes: a cost-effectiveness analysis. Annals of Internal Medicine [PubMed] Published 12th December 2017

The nature of diabetes – that it affects a lot of people and is associated with a wide array of physiological characteristics and health impacts – has given rise to recommendations for the individualisation of care. This paper evaluates individualisation of glycemic control targets. Specifically, the individualised programme allocated people to one of 3 HbA1c targets (<6.5%, <7%, <8%) according to their characteristics, while the comparator was based on a single fixed target (<7%). The researchers used a patient-level simulation model. Risk equations developed from the UKPDS study were used to predict diabetes complications and mortality. The baseline population was derived from the NHANES study for 2011-12 and comprises people who self-reported as having diabetes and who were at least 30 years old at diagnosis (to try to isolate type 2 diabetes). It’s not much of a surprise that the individualised approach dominated uniform intensive control, saving $13,547 on average per patient with a slight improvement in QALY outcomes. But the findings are not all in favour of individualisation. Quality of life improvements from a reduced medication burden were partially counteracted by a slight decrease in life years, driven by a higher rate of (mortality-increasing) complications. The absolute lifetime risk of myocardial infarction was 1.39% higher with individualisation. A key outstanding question is how much the individualisation process would actually cost to get right. Granted, it probably wouldn’t cost as much as the savings estimated in this study, but the difficulty of ensuring adequate data quality to consistently inform individualisation should not be underestimated.
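
For readers unfamiliar with patient-level simulation, the basic mechanics can be sketched in a few lines of code. This is a minimal toy version of the approach, not the authors' model: all probabilities, costs and utilities below are invented placeholders standing in for the UKPDS risk equations.

```python
import random

# Toy patient-level simulation comparing two glycemic control strategies.
# All probabilities, costs and utilities are invented placeholders; the
# paper instead uses risk equations derived from the UKPDS study.

def simulate_patient(rng, annual_cost, complication_risk, utility):
    """Run one patient for up to 40 annual cycles; return (cost, QALYs)."""
    cost = qalys = 0.0
    for year in range(40):
        cost += annual_cost
        qalys += utility
        if rng.random() < complication_risk:  # complication occurs this year
            cost += 5000.0                    # acute treatment cost
            utility *= 0.9                    # lasting quality-of-life hit
            if rng.random() < 0.1:            # complications can be fatal
                break
    return cost, qalys

def run_arm(seed, annual_cost, complication_risk, utility, n=2000):
    """Average cost and QALYs over a simulated cohort of n patients."""
    rng = random.Random(seed)
    results = [simulate_patient(rng, annual_cost, complication_risk, utility)
               for _ in range(n)]
    return (sum(c for c, _ in results) / n, sum(q for _, q in results) / n)

# Uniform intensive control: higher drug costs, fewer complications.
uniform_cost, uniform_qalys = run_arm(1, 2000.0, 0.020, 0.80)
# Individualised targets: cheaper on average, slightly more complications.
indiv_cost, indiv_qalys = run_arm(2, 1200.0, 0.025, 0.82)

print(f"Incremental cost: {indiv_cost - uniform_cost:.0f}")
print(f"Incremental QALYs: {indiv_qalys - uniform_qalys:.3f}")
```

The trade-off described above falls out of the structure: relaxing targets cuts the annual medication cost and improves per-year quality of life, but the higher complication risk eats into life years.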

Microlevel prioritizations and incommensurability. Cambridge Quarterly of Healthcare Ethics [PubMed] Published 7th December 2017

This article concerns the ethical challenges of decision-making at the microlevel. For example, decisions may need to be made about allocating resources between 2 or more identifiable patients, perhaps within a particular clinic or amongst an individual clinician’s patients. The author asserts two relevant values: health need satisfaction and efficiency. Health need satisfaction is defined in terms of severity (regardless of capacity to benefit from available treatments), while efficiency is defined in terms of the maximisation of health benefit (subject to the effectiveness of treatment). The author then argues that these two values are incommensurable in the sense that we can have situations in which health need satisfaction is greater (or less) for a given choice over another, while efficiency could be lower (or higher). Thus, it is not always possible to rank choices given two non-cardinally-comparable values. It might not be clear whether it is better to treat patient A or patient B if the implications of doing so are different in terms of need and efficiency. The author then goes on to suggest some solutions to this apparent problem, starting by highlighting the need for decision makers (in this case clinicians) to recognise different decision paths. The first solution is to generate some guidelines that offer complete ordering of possible choices. These might be based on a process of weighting the different values (e.g. health need satisfaction and efficiency). The other ‘solution’ is to leave the decision to medical practitioners, who can create reasons for choices that may be unique to the case at hand. In this case, certain decision paths should be avoided, such as those that would entail discrimination. I have a lot of problems with this assessment of decision-making at the individual level. 
Mainly, the discussion is undermined by the fact that efficiency and health need satisfaction are entirely commensurable insofar as we care about either of them in relation to prioritisation in health care. We tend to understand both health need satisfaction and opportunity cost (the basis for estimating efficiency) in terms of health outcomes. The essay also fails to clearly identify the uniqueness of the challenge of microlevel decision-making as distinct from the process of creating clinical guidelines. This may call for a follow-up blog post…
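
To see what the first 'solution' – a complete ordering generated by weighting the two values – might look like in practice, here is a toy sketch; the scores and weights are entirely hypothetical.

```python
# Toy version of the 'guidelines' solution: convert the two values into a
# complete ordering by weighting them. Scores and weights are hypothetical.

patients = {
    "A": {"need": 0.9, "efficiency": 0.2},  # severe, but benefits little
    "B": {"need": 0.4, "efficiency": 0.8},  # less severe, benefits a lot
}

def priority(scores, w_need, w_eff):
    return w_need * scores["need"] + w_eff * scores["efficiency"]

def ordering(w_need, w_eff):
    """Rank patients from highest to lowest weighted priority score."""
    return sorted(patients, key=lambda p: priority(patients[p], w_need, w_eff),
                  reverse=True)

print(ordering(0.5, 0.5))  # equal weights favour B's greater efficiency
print(ordering(0.8, 0.2))  # need-heavy weights reverse the ordering
```

The point, of course, is that the ordering is entirely a function of the chosen weights, which is precisely where the value judgement re-enters.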

EQ-5D: moving from three levels to five. Value in Health Published 6th December 2017

If you work on economic evaluation, the move from the EQ-5D-3L to the EQ-5D-5L – in terms of the impact on our results – is one of the biggest methodological step changes in recent memory. We all know that the 5L (and the associated value set for England) is better than the 3L. Don’t we? So it is perhaps a bit disappointing that the step to the 5L has been so tentative. This editorial articulates the challenge. NICE makes standards. EuroQol does research. NICE was (relatively) satisfied with the 3L. EuroQol wasn’t. We have a clash between an inherently (perhaps necessarily) conservative institution and an inherently progressive one. Hopefully, their interaction will put us on a sustainable path that achieves both methodological consistency and scientific rigour. This editorial also provides us with a DOI-citable account of the saga, including the development of the 5L value set for England and NICE’s subsequent memorandum.

Current UK practices on health economics analysis plans (HEAPs): are we using heaps of them? PharmacoEconomics [PubMed] Published 6th December 2017

You could get by for years in economic evaluation without even hearing about ‘health economics analysis plans’ (HEAPs). It probably depends on the policies set by the clinical trials unit (CTU) that you’re working with. The idea is that a HEAP is the equivalent standard operating procedure (SOP) to a statistical analysis plan – setting out how the trial data will be analysed before the analysis begins. This could aid transparency and consistency, and prevent dodgy practices. In this study, the researchers sought to find out whether HEAPs are actually being used, and their perceived role in clinical trials research. A survey targeted 46 UK CTUs, asking about the role of health economists in the unit and whether they used HEAP SOPs. Of 28 respondents, 11 reported having an embedded health economics team. A third of CTUs reported always having a HEAP in place, while most said they only used HEAPs ‘sometimes’; publicly funded trials were said to be more likely to use one. The majority of respondents agreed that it was acceptable to produce the HEAP at any point prior to the data being locked. The findings demonstrate inconsistency in who writes HEAPs and who is perceived to be the audience. I agree with the premise that we need HEAPs, though I’m not sure what they should look like, except that statistical analysis plans probably should not be used as a template. It would be good if some of these researchers took things a step further and figured out what ought to go into a HEAP, so that we can consistently employ their recommendations. If you’re on the HEALTHECON-ALL mailing list, you’ll know that they’re already on the case.


ICU triage: a challenge and an opportunity

In a well-publicized snapshot of the challenge of ICU triage, Chang and colleagues wrote:

Critical care services can be life-saving, but many patients admitted to intensive care units (ICUs) are too sick or, conversely, not sick enough to benefit. Intensive care unit overutilization can produce more costly and invasive care without improving outcomes.

Emphasis added. Hyder provides an interesting critique to which Chang and Shapiro respond. In this post, I shall consider over-utilization by those “not sick enough to benefit”: 23.4% of the 808 patients admitted to the UCLA Medical Center in the study by Chang et al. This over-utilization provides both a challenge and a win-win opportunity (better outcomes at lower cost) if we can meet the challenge.

In a forward-looking vision, which some may regard as optimistic, Anesi et al wrote:

In the year 2050 we will unambiguously reimburse healthcare based on value, and so there is good reason to suspect that we will have targeted and reduced many services that provide little or no benefit to patients…

It can be argued that ICU over-utilization, on average, provides no overall benefit, while significantly increasing costs. Gooch and Kahn observed that US spending on critical care represents nearly 3% of GDP, while:

In contrast, the United Kingdom spends only 0.1% of its gross domestic product on critical care services, with no evidence of worse patient outcomes and similar life expectancies as in the United States. Although there are many differences between these 2 countries, one significant difference is intensive care unit (ICU) bed supply. The United States has 25 ICU beds per 100 000 people, as compared with 5 per 100 000 in the United Kingdom. As a result, ICU case-mix differs substantially. In the United Kingdom, the majority of ICU patients are at high risk for death, whereas in the United States, many patients are admitted to the ICU for observation.

As observed by Halpern, these differences come at a significant cost in the US:

The number of intensive care unit (ICU) beds in the United States has continued to increase over the last 3 decades, as have ICU utilization rates and costs, and this despite the lack of any federal, regional, or critical care society mandates to justify these increases. Some experts believe that the increase in the number of ICU beds has led to inappropriate use of these beds by patients who are either too healthy or too sick to benefit from intensive care. This may in part explain the stable national ICU occupancy rate of approximately 68% between 1985 and 2010 and suggests that ICU utilization has simply risen to meet the increased number of beds.

Emphasis added. I shall consider here only ICU usage by patients too healthy to benefit. Although the economics behind reducing ICU over-utilization by “those not sick enough to benefit” appears simple, the underlying cause is in fact likely complex.

[Figure 1: This one appears easy – lower costs and potentially better outcomes]

At the same time, I recall several caveats, well known to health economists, but important in planning and communication:

  1. We expect ICUs to be available when needed, including for emergencies and disasters,
  2. ICUs have high fixed costs,
  3. Decision-making is critical: incremental costs of adding capacity become fixed costs in the future.

Chris Sampson recently reviewed a study examining overconsumption and misconsumption (consequences of over-utilization). The authors of that paper suggest that “cultural change might be required to achieve significant shifts in clinical behaviour.” Chris laments that this study did not ‘dig deeper’; here I aim to dig deeper in one specific area: ICU triage for patients “not sick enough to benefit.” There are more questions than answers at this stage, but hopefully the questions will ultimately lead to answers.

I begin by stepping back: economic decisions frequently involve compromises in allocating scarce resources. Decisions in health economics are frequently no different. How scarce are ICU resources? What happens if they are less scarce? What are the costs? Increasing availability can frequently lead to increased utilization, a phenomenon called “demand elasticity”. For example, increasing expressway/motorway capacity “can lead to increased traffic as new drivers seize the opportunity to travel on the larger road”, and thus no reduction in travel time. Gooch and Kahn further note that:

The presence of demand elasticity in decisions regarding ICU care has major implications for health care delivery and financing. Primarily, this indicates it is possible to reduce the costs of US hospital care by constraining ICU bed supply, perhaps through certificate of need laws or other legislation.

I offer a highly simplified sketch of how ICU over-utilization by those “not sick enough to benefit” is one driver of a vicious cycle in ICU cost growth.

[Figure 2: ICU over-utilization by patients “not sick enough to benefit” as a driver for ICU demand elasticity]

Who (if anyone) is at fault for this ICU vicious cycle? Chang and Shapiro offer one suggestion:

For medical conditions where ICU care is frequently provided, but may not always be necessary, institutions that utilize ICUs more frequently are more likely to perform invasive procedures and have higher costs but have no improvement in hospital mortality. Hospitals had similar ICU utilization patterns across the 4 medical conditions, suggesting that systematic institutional factors may influence decisions to potentially overutilize ICU care.

Emphasis added. I note that demand elasticity is not in itself bad; it must simply be recognized, controlled and used appropriately. As part of a discussion in print on the role of cost considerations in medical decisions, Du and Kahn write:

Although we argue that costs should not be factored into medical decision-making in the ICU, this does not mean that we should not strive toward healthcare cost reduction in other ways. One strategy is to devise systems of care that prevent unnecessary or unwanted ICU admissions—given the small amount of ICU care that is due to discretionary spending, the only real way to reduce ICU costs is to prevent ICU admissions in the first place.

Du and Kahn also argue for careful cost-effectiveness analyses, such as that supported by NICE in the UK:

These programs limit use of treatments that are not cost-effective, taking cost decisions out of the hands of physicians and putting them where they belong: in the hands of society at large… We will achieve real ICU savings only by encouraging a society committed to system-based reforms.

Emphasis added. One can debate “taking cost decisions out of the hands of physicians”, though Guidet & Beale’s and Capuzzo & Rhodes’s arguments for more physician awareness of cost might provide a good intermediate position in this debate.

Finally, increasing ICU supply (that is, ICU beds) in response to well-conceived increases in ICU demand is not in itself bad; ICU supply must be able to respond to demands imposed by disasters or other emergencies. We need to seek out novel ways to provide this capacity without incurring potentially unnecessary fixed costs, perhaps through region-wide stockpiling of supplies and equipment, and region-wide pools of on-call physicians and other ICU personnel. In summary, the current health literature offers a wide-ranging discussion of the growing costs of intensive care. In my opinion there are more questions than answers at this stage, but hopefully the questions will ultimately lead to answers.


Chris Sampson’s journal round-up for 20th June 2016


Can increased primary care access reduce demand for emergency care? Evidence from England’s 7-day GP opening. Journal of Health Economics Published 15th June 2016

Getting a GP appointment when you want one can be tricky, and complaints are increasingly common in the UK. In April 2013, 7-day opening for some GP practices began being piloted in London, with support from the Prime Minister’s Challenge Fund. Part of the reasoning for 7-day opening – beyond patient satisfaction – is that better access to GP services might reduce the use of A&E at weekends. This study evaluates whether or not this has been observed for the London pilot. Secondary Uses Service patient-level data are analysed for 2009-2014 for 34 GP practices in central London (4 pilot practices and 30 controls). The authors collapse the data into the number of A&E attendances per GP practice, giving 8704 observations (34 practices over 256 weeks). 6 categories of A&E attendance are identified; some that we would expect to be influenced by extended GP opening (e.g. ‘minor’) and some that we would not (e.g. ‘accident’). Pilot practices were not randomly selected, and those that were selected had a significantly higher patient-GP ratio. The authors run difference-in-difference analyses on the outcomes using Poisson regression models. Total weekend attendances dropped by 17.9%, with moderate cases exhibiting the greatest drop. Minor cases were not affected. There was also a 10% drop in weekend admissions and a 20% drop in ambulance usage, suggesting major cost savings. A small spillover effect was observed for weekdays. The authors divide their sample into age groups and find that the fall in A&E attendances was greatest in the over 60s, who account for almost all of the drop in weekend admissions. The authors speculate that this may be due to A&E staff being risk averse with elderly patients with whose background they are not familiar, and that GPs may be better able to assess the seriousness of the case. Patients from wealthier neighbourhoods exhibited a relatively greater drop in A&E attendances. 
So it looks like 7-day opening for GP services could relieve a lot of pressure on A&E departments. What’s lacking from the paper though is an explicit estimate of the cost savings (if, indeed, there were any). The pilot was funded to the tune of £50 million. Unfortunately this study doesn’t tell us whether or not it was worth it.
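
For intuition on the difference-in-differences logic, the Poisson specification boils down, in its simplest form, to a 'ratio of ratios' of mean attendance counts. The numbers below are invented for illustration and are not the paper's data.

```python
# 'Ratio of ratios' intuition behind the Poisson difference-in-differences:
# the treatment effect is the change in weekend A&E attendances at pilot
# practices relative to the change at control practices. The counts below
# are invented for illustration, not the paper's data.

# Mean weekend A&E attendances per practice-week (hypothetical).
pilot_pre, pilot_post = 50.0, 42.0
control_pre, control_post = 48.0, 49.0

rate_ratio = (pilot_post / pilot_pre) / (control_post / control_pre)
effect_pct = (rate_ratio - 1.0) * 100.0
print(f"Estimated effect on weekend attendances: {effect_pct:.1f}%")
```

The regression version does the same thing while adjusting for covariates and delivering standard errors, but the identifying comparison is this one.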

Cost-effectiveness analysis in R using a multi-state modeling survival analysis framework: a tutorial. Medical Decision Making [PubMed] Published 8th June 2016

To say my practical understanding of R is rudimentary would be a grand overstatement. But I do understand the benefits of the increasingly ubiquitous open source stats software. People frown hard when I tell them that we often build Markov models in Excel. An alternative script-based approach could clearly increase the transparency of decision models and do away with black box problems. This paper does what it says on the tin and guides the reader through the process of developing a state-based (e.g. Markov) transition model. But the key novelty of the paper is the description of a tool for ‘testing’ the Markov assumption that might be built into a decision model. This is the ‘state-arrival extended model’ which entails the inclusion of a covariate to represent the history from the start of the model. A true Markov model is only interested in time in the current state, so if this extra covariate matters to the results then we can reject the Markov assumption and instead implement a semi-Markov model (or maybe something else). The authors do just this using an example from a previously published trial. I dare say the authors could have figured out that the Markov assumption wouldn’t hold without using such a test, but it’s good to have a justification for model choice. The basis for the tutorial is a 12 step program, and the paper explains each step. The majority of processes are based on adaptations of an existing R package called mstate. It assumes that time is continuous rather than discrete and can handle alternative parametric distributions for survival. Visual assessment of fit is built into the process to facilitate model selection. Functions are defined to compute QALYs and costs associated with states and PSA is implemented with generation of cost-effectiveness planes and CEACs. But your heart may sink when the authors state that “It is assumed that individual patient data are available”. 
The authors provide a thorough discussion of the ways in which a model might be constructed when individual level data aren’t available. But ultimately this seems like a major limitation of the approach, or at least of the usefulness of this particular tutorial. So don’t throw away your copy of Briggs/Sculpher/Claxton just yet.
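
The tutorial itself is built on R and mstate with individual patient data, but the underlying cohort logic is language-agnostic. Here is a minimal discrete-time Markov cohort sketch in Python, with invented transition probabilities, utilities and costs – the kind of structure that the paper's continuous-time, patient-level approach generalises.

```python
# Minimal discrete-time Markov cohort model with three states (healthy,
# sick, dead): annual cycles, discounted QALYs and costs. All transition
# probabilities, utilities and costs are invented; the tutorial instead
# fits continuous-time multi-state models to patient-level data.

P = [  # transition matrix: row = from-state, column = to-state; rows sum to 1
    [0.85, 0.10, 0.05],  # healthy
    [0.00, 0.80, 0.20],  # sick
    [0.00, 0.00, 1.00],  # dead (absorbing)
]
UTILITY = [0.90, 0.60, 0.00]  # QALY weight for a year spent in each state
COST = [500.0, 5000.0, 0.0]   # annual cost incurred in each state
DISCOUNT = 0.035              # annual discount rate

def run_model(cycles=30):
    """Return total discounted (QALYs, costs) per patient."""
    cohort = [1.0, 0.0, 0.0]  # everyone starts healthy
    qalys = costs = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + DISCOUNT) ** t
        qalys += d * sum(share * u for share, u in zip(cohort, UTILITY))
        costs += d * sum(share * k for share, k in zip(cohort, COST))
        cohort = [sum(cohort[i] * P[i][j] for i in range(3)) for j in range(3)]
    return qalys, costs

qalys, costs = run_model()
print(f"Discounted QALYs per patient: {qalys:.2f}; costs: {costs:.0f}")
```

The state-arrival extension described in the paper adds a covariate capturing history since the start of the model; if that covariate matters, the memoryless structure above is rejected in favour of a semi-Markov model.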

Do waiting times affect health outcomes? Evidence from coronary bypass. Social Science & Medicine [PubMed] Published 30th May 2016

Many health economists are quite happy with waiting lists being used as a basis for rationing in health services like the NHS. But, surely, this is conditional on the delay in treatment not affecting either current health or the potential benefit of treatment. This new study provides evidence from coronary bypass surgery. Hospital Episodes Statistics for 133,166 patients for the years 2000-2010 are used to look at 2 outcomes: 30-day mortality and 28-day readmission. During the period, policy resulted in the reduction of waiting times from 220 to 50 days. Three empirical strategies are employed: i) annual cross-sectional estimation of the probability of the 2 outcomes occurring in patients, ii) panel analysis of hospital-level data over the 11 years to evaluate the impact of different waiting time reductions and iii) full analysis of patient-specific waiting times across all years using an instrumental variable based on waiting times for an alternative procedure. For the first analysis, the study finds no effect of waiting times on mortality in all years bar 2003 (in which the effect was negative). Weak association is found with readmission. Doubling waiting times increases risk of readmission from 4.05% to 4.54%. The hospital-level analysis finds a lack of effect on both counts. The full panel analysis finds that longer waiting times reduce mortality, but the authors suggest that this is probably due to some unobserved heterogeneity. Longer waiting times may have a negative effect on people’s health, but it isn’t likely that this effect is dramatic enough to increase mortality. This might be thanks to effective prioritisation in the NHS.
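
The instrumental variable strategy in the third analysis can be illustrated with simulated data: a patient's own waiting time is instrumented by the waiting time for an alternative procedure, which (by assumption) affects outcomes only through the patient's own wait. This sketch uses a simple Wald/2SLS estimator on invented data, not the authors' specification.

```python
import random

# Simulated illustration of the instrumental variable logic: waiting time
# for an alternative procedure (z) shifts a patient's own waiting time but,
# by assumption, affects outcomes only through it. Data and the linear
# model are invented; this is a plain Wald/2SLS estimator, not the
# authors' specification.
random.seed(0)
n = 5000
data = []
for _ in range(n):
    z = random.gauss(100.0, 10.0)      # instrument: other procedure's wait
    sickness = random.gauss(0.0, 1.0)  # unobserved severity (confounder)
    wait = 0.8 * z - 10.0 * sickness + random.gauss(0.0, 5.0)
    outcome = 0.0 * wait + 2.0 * sickness + random.gauss(0.0, 1.0)
    data.append((z, wait, outcome))    # true causal effect of wait is zero

def cov(xs, ys):
    """Covariance with population normalisation (fine for this sketch)."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

zs, ws, ys = zip(*data)
ols = cov(ws, ys) / cov(ws, ws)  # naive: picks up the sickness channel
iv = cov(zs, ys) / cov(zs, ws)   # IV: close to the true zero effect
print(f"OLS estimate: {ols:.3f}, IV estimate: {iv:.3f}")
```

Here the naive estimate is contaminated because sicker patients are moved up the queue and also have worse outcomes; the instrument strips that channel out, which mirrors the paper's finding that the apparent effects of waiting largely reflect unobserved heterogeneity.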