Jason Shafrin’s journal round-up for 7th October 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Combined impact of future trends on healthcare utilisation of older people: a Delphi study. Health Policy [PubMed] [RePEc] Published October 2019

Governments need to plan for the future. This is particularly important in countries where the government pays for the lion's share of health care expenditures. Predicting the future, however, is not an easy task. One could use quantitative approaches and simply extrapolate recent trends. One could consult with political experts to determine what policies are likely to be enacted. Another approach is to use a Delphi panel to elicit expert opinions on future trends in health care utilization and so help predict future health care needs. This is the approach taken by Ravensbergen and co-authors in an attempt to predict trends in health care utilization among older adults in the Netherlands in 2040.

The Delphi panel approach was applied in this study as follows. First, experts received a questionnaire via email. The researchers presented the experts with trends from the Dutch Public Health Foresight Study (Volksgezondheid Toekomst Verkenning) to ground everyone in the same baseline information. The data and questions largely asked separately about trends for the old (65–80 years) and the oldest old (>80 years). Responses to the first questionnaire were then summarized and fed back anonymously to each panelist. Panelists were then able to revise their views in a second questionnaire, taking into account the feedback from the other panelists. Because the panelists did not meet in person, this approach should be considered a modified Delphi panel.

The Delphi panel identified three broad trends: increased use of eHealth tools, less support, and changes in health status. While the panel thought eHealth was important, experts rarely reached consensus on how eHealth would affect healthcare utilization. The experts did reach consensus, however, in believing that the share of adults aged 50–64 will decline relative to the share of individuals aged ≥85 years, implying that fewer caregivers will be available and more of the oldest old will be living independently (i.e. with less support). Because less informal care will be available, the panel believed that demand for home care and general practitioner services will rise. The respondents also believed that, in most cases, changes in health status will increase utilization of general practitioner and specialist services. There was less agreement about trends in the need for long-term care or mental health services, however.

The Delphi Panel approach may be useful to help governments predict future demand for services. More rigorous approaches, such as betting markets, are likely not feasible since the payouts would take too long to generate much interest. Betting markets could be used to predict shorter-run trends in health care utilization. The risk with betting markets, however, is that some individuals could act strategically to drive up or down predictions to increase or decrease reimbursement for certain sectors.

In short, the Delphi panel is likely a reasonable, low-cost approach for predicting trends in health care utilization. Future studies, however, should validate how accurate predictions made with this type of method turn out to be.

The fold-in, fold-out design for DCE choice tasks: application to burden of disease. Medical Decision Making [PubMed] Published 29th May 2019

Discrete choice experiments (DCEs) are a useful way to determine which treatment attributes patients (or providers or caregivers) value. Respondents are presented with multiple treatment options, and the options can be compared across a series of attributes. An attribute could be treatment efficacy, safety, dosing, cost, or a host of other characteristics. One can use this approach to measure the marginal rate of substitution across attributes. If cost is one of the attributes, one can also measure willingness to pay for specific attributes.
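To make that last point concrete, here is a minimal sketch (my notation, not any particular paper's) of how MRS and willingness to pay fall out of a linear-in-attributes choice model:

    % Illustrative linear-in-attributes utility for alternative j
    U_j = \beta_{eff}\,\text{eff}_j + \beta_{safe}\,\text{safe}_j + \beta_{cost}\,\text{cost}_j + \varepsilon_j
    % Marginal rate of substitution between efficacy and safety
    MRS_{eff,safe} = \beta_{eff} / \beta_{safe}
    % Willingness to pay for a one-unit improvement in efficacy
    WTP_{eff} = -\beta_{eff} / \beta_{cost}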

One of the key challenges of DCEs, however, is attribute selection. Most treatments differ across a range of attributes, yet most published DCEs present only four or five, and at most seven, attributes. Including more attributes makes the comparisons too complicated for most respondents. Thus, researchers are left with a difficult choice: (i) a tractable but overly simplified survey, or (ii) a realistic but overly complex survey unlikely to be comprehended by respondents.

One solution proposed by Lucas Goossens and co-authors is to use a Fold-in Fold-out (FiFo) approach. In this approach, related attributes may be grouped into domains. For some questions, all attributes within the same domain have the same attribute level (i.e., fold in); in other questions, attributes may vary within the domain (i.e., fold out).

To be concrete, the Goossens paper examines treatments for chronic obstructive pulmonary disease (COPD). The authors use 15 attributes divided into three domains plus two stand-alone attributes:

- a respiratory symptoms domain (four attributes: shortness of breath at rest, shortness of breath during physical activity, coughing, and sputum production),
- a limitations domain (four attributes: limitations in strenuous physical activities, limitations in moderate physical activities, limitations in daily activities, and limitations in social activities),
- a mental problems domain (five attributes: feeling depressed, fearing that breathing gets worse, worrying, listlessness, and tense feeling),
- a fatigue attribute, and
- an exacerbations attribute.

This creative approach simplifies the choice set for respondents while still allowing a large number of attributes. Using the data collected, the authors ran a Bayesian mixed logit regression model. The underlying utility function assumed domain-specific parameters, but also allowed within-domain attribute weights to vary in the questions where a domain was folded out.
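As a rough sketch of what such a utility function might look like (the notation and structure here are my own illustration, not necessarily the authors' exact specification):

    % Illustrative FiFo utility for alternative j
    U_j = \sum_{d} \beta_d \Big( \sum_{a \in d} w_{da}\, x_{jda} \Big) + \beta_{fat}\, x_{j,fat} + \beta_{exac}\, x_{j,exac} + \varepsilon_j
    % \beta_d: weight on domain d; w_{da}: relative weight of attribute a within domain d (summing to 1)
    % When domain d is folded in, all x_{jda} take the same level, so only \beta_d matters for that question;
    % when it is folded out, the within-domain weights w_{da} are also identified.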

One key challenge, however, is that the authors found that individuals placed more weight on attributes when their domains were folded out (i.e., attribute levels varied within the domain) than when their domains were folded in (i.e., attribute levels were the same within the domain). Thus, I would say that if five, six, or seven attributes can capture the lion's share of the differences across treatments, use the standard approach; if more attributes are needed, however, the FiFo approach is an attractive option researchers should consider.

The health and cost burden of antibiotic resistant and susceptible Escherichia coli bacteraemia in the English hospital setting: a national retrospective cohort study. PLoS One [PubMed] Published 10th September 2019

Bacterial infections are bad. The good news is that we have antibiotics to treat them, so they are no longer a worry, right? While conventional wisdom may hold that we have plenty of antibiotics to treat these infections, antibiotic resistance has grown in recent years. If antibiotics are no longer effective, what is the cost to society?

One effort to quantify the economic burden of antibiotic resistance, by Nichola Naylor and co-authors, used national surveillance and administrative data from National Health Service (NHS) hospitals in England. They compared costs for patients with E. coli bacteraemia against costs for patients with similar observable characteristics who did not have E. coli bacteraemia. Antibiotic resistance was defined using laboratory-based definitions of ‘resistant’ and ‘intermediate’ isolates. The antibiotics to which resistance was considered included ciprofloxacin, third-generation cephalosporins (ceftazidime and/or cefotaxime), gentamicin, piperacillin/tazobactam, and carbapenems (imipenem and/or meropenem).

The authors use an Aalen-Johansen estimator to measure the cumulative incidence of in-hospital mortality and length of stay. The analyses control for the patient's age, sex, Elixhauser comorbidity index, and hospital trust type. It does not appear that the authors control for the reason for admission to the hospital, nor do they propensity-score match infected patients to uninfected patients. Thus, it is likely that significant unobserved heterogeneity across groups remains in the analysis.
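For intuition about the estimator itself, here is a minimal sketch of how an Aalen-Johansen cumulative incidence function is computed for one event of interest (say, in-hospital death) in the presence of a competing event (discharge alive). The data and variable names are hypothetical, and the paper's actual analysis additionally adjusts for the covariates above.

    # Minimal Aalen-Johansen sketch (illustrative only; hypothetical toy data)
    import numpy as np

    def aalen_johansen_cif(times, events, event_of_interest=1):
        """times: follow-up in days; events: 0 = censored, 1 = death, 2 = discharge."""
        times = np.asarray(times, dtype=float)
        events = np.asarray(events, dtype=int)
        order = np.argsort(times)
        times, events = times[order], events[order]

        surv = 1.0        # all-cause Kaplan-Meier survival just before each event time
        running = 0.0     # cumulative incidence of the event of interest
        cif = []
        for t in np.unique(times[events > 0]):
            at_risk = np.sum(times >= t)
            d_any = np.sum((times == t) & (events > 0))
            d_int = np.sum((times == t) & (events == event_of_interest))
            running += surv * d_int / at_risk   # increment CIF using pre-t survival
            surv *= 1.0 - d_any / at_risk       # update all-cause survival
            cif.append((t, running))
        return cif

    # Toy example: 6 patients followed for between 3 and 12 days
    print(aalen_johansen_cif(times=[3, 5, 5, 8, 10, 12], events=[1, 2, 1, 0, 2, 1]))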

Despite these limitations, the authors do have some interesting findings. First, bacterial infections are associated with an increased risk of death: in-hospital mortality was 14.3% for individuals with E. coli bacteraemia compared to 1.3% for those without. Accounting for covariates, the subdistribution hazard ratio (SHR) for in-hospital mortality due to E. coli bacteraemia was 5.88. Second, E. coli bacteraemia was associated with 3.9 excess hospital days compared to patients without the infection. These extra hospital days cost £1,020 per case of E. coli bacteraemia, and the estimated annual cost of E. coli bacteraemia in England was £14.3m. If antibiotic resistance has increased in recent years, these estimates are likely to be conservative.
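A quick back-of-the-envelope check (mine, not the authors'): dividing the quoted national cost by the per-case cost suggests the burden is driven by a large volume of cases rather than an especially high per-case cost:

    \frac{\pounds 14{,}300{,}000 \text{ per year}}{\pounds 1{,}020 \text{ per case}} \approx 14{,}000 \text{ cases per year}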

The issue of antibiotic resistance presents a conundrum for policymakers. If current antibiotics are effective, drug-makers have little incentive to develop new antibiotics, since the new treatments are unlikely to be prescribed. On the other hand, failing to develop new antibiotics to hold in reserve means that, as antibiotic resistance grows, there will be few treatment alternatives. To address this issue, the United Kingdom is considering a ‘subscription-style’ approach to paying for new antibiotics to incentivize the development of new treatments.

Nevertheless, the paper by Naylor and co-authors provides a useful data point on the cost of antibiotic resistance.


Sam Watson’s journal round-up for 3rd June 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Limits to human life span through extreme value theory. Journal of the American Statistical Association [RePEc] Published 2nd April 2019

The oldest verified person ever to have lived was Jeanne Calment, who died in 1997 at the superlative age of 122. No one else has ever been recorded as living longer than 120, but there have been perhaps a few hundred supercentenarians over 110. Whenever someone reaches such a stupendous age, some budding reporter will ask them what the secret was. They will reply that they have stuck to a regimen of three boiled eggs and a glass of scotch every day for 80 years. And this information is, of course, completely meaningless due to survivorship bias. But as public health and health care improve, and with them life expectancy, there remains the question of whether people will ever exceed these extreme ages or whether there is actually a limit to human longevity.

Some studies have attempted to address the question of maximum human longevity by looking at how key biological systems, like getting oxygen to the muscles or vasculature, degrade. They suggest that there would be an upper limit as key systems of the body just cannot last, which is not to say medicine might not find a way to fix or replace them in the future. Another way of addressing this question is to take a purely statistical approach and look at the distribution of the ages of the oldest people alive and try to make inferences about its upper limit. Such an analysis relies on extreme value theory.

There are two types of extreme value data. The first type consists of just the series of maximum values from the distribution. The Fisher-Tippett-Gnedenko theorem shows that these maxima can only be distributed according to one of three distributions. The second type of data consists of all of the most extreme observations above a certain threshold, and wonderfully there is another triple-barrelled theorem – the Pickands-Balkema-de Haan theorem – showing that these data follow a generalised Pareto distribution. This article makes use of this latter type of data and theorem to ask: (i) is there an upper limit to the distribution of human life spans? (ii) if so, what is it? and (iii) does it change over time?
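For reference, the generalised Pareto distribution for exceedances x above a threshold u is

    G(x; \sigma, \xi) = 1 - \left(1 + \frac{\xi x}{\sigma}\right)^{-1/\xi}, \qquad \sigma > 0,

and when the shape parameter \xi is negative the exceedances are bounded above by -\sigma/\xi, so the implied upper limit of the life span distribution is u - \sigma/\xi. Whether \hat{\xi} < 0 is therefore what settles question (i), and u - \hat{\sigma}/\hat{\xi} answers question (ii).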

The authors use a dataset of the ages of death in days of all Dutch residents who died over the age of 92 between 1986 and 2015. Using these data to estimate the parameters of the generalised Pareto distribution, they find strong evidence to suggest that, statistically at least, it has an upper limit and that this limit is probably around 117-124. Over the years of the study there did not appear to be any change in this limit. This is not to say that it couldn’t change in the future if some new miraculous treatment appeared, but for now, we humans must put up with a short and finite existence.
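For readers who want to see the mechanics, a stylised version of this kind of analysis can be run in a few lines; the ages below are simulated stand-ins, not the Dutch registry data, and the threshold is an arbitrary assumption of mine.

    # Illustrative sketch (not the authors' code): fit a generalised Pareto distribution
    # to exceedances over a high age threshold and read off the implied upper limit.
    import numpy as np
    from scipy.stats import genpareto

    threshold_days = 98 * 365.25   # threshold age in days (an assumption for illustration)
    # Simulated ages at death above the threshold, drawn from a bounded-tail GPD (xi < 0)
    exceedances = genpareto.rvs(c=-0.1, scale=800, size=5000, random_state=0)

    xi, loc, sigma = genpareto.fit(exceedances, floc=0)   # location fixed at 0 for exceedances
    if xi < 0:
        upper_limit_years = (threshold_days - sigma / xi) / 365.25
        print(f"implied upper limit: {upper_limit_years:.1f} years")
    else:
        print("no finite upper limit implied (xi >= 0)")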

Infant health care and long-term outcomes. Review of Economics and Statistics [RePEc] Published 13th May 2019

I haven’t covered an article on infant health and economic conditions and longer term outcomes for a while. It used to be that there would be one in every round-up I wrote. I could barely keep up with the literature, which I tried to summarise in a different blog post. Given that it has been a while, I thought I would include a new one. This time we are looking at the effect of mother and child health centres in Norway in the 1930s on the outcomes of adults later in the 20th Century.

Fortunately the health centres were built in different municipalities at different times. The authors note that the “key identifying assumption” is that the timing of construction was unrelated to the health of infants in those areas (well, this and that the model is linear and additive, time trends are linear, etc., etc. – assumptions that economists often forget). They don’t go into too much detail on this, but it seems plausible. Another gripe of mine with most empirical economic papers, and indeed with medical and public health papers, is that plotting the data is a secondary concern or doesn’t happen at all. It should be the most important thing. Indeed, in this article much of the discussion can be captured by the figure buried two thirds of the way through. The figure shows that the centres likely led to a big reduction in diarrhoeal disease, probably due to increased rates of breast feeding, but for other outcomes the effects are more ambiguous and probably quite small if they exist at all. Some evidence is provided to suggest that these differences were associated with very modest increases in educational attainment and adult wages. However, a cost-benefit calculation suggests that, on the basis of these wage increases, the intervention had an annualised rate of return of about 5%.
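For readers who like to see the shape of the model, a stylised version of this kind of staggered-rollout specification (my notation, not the paper's exact model) is:

    y_{imt} = \alpha_m + \gamma_t + \beta\, \text{Centre}_{mt} + X_{imt}'\delta + \varepsilon_{imt}
    % y_{imt}: later-life outcome of individual i born in municipality m in year t
    % Centre_{mt} = 1 if municipality m had a mother-and-child health centre by year t
    % \beta is only credibly identified if centre openings were unrelated to local trends in infant health.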

I should say that this study is well-conducted and fairly solid so any gripes with it are fairly minor. It certainly fits neatly into the wide literature on the topic, and I don’t think anyone would doubt that investing in childhood interventions is likely to have a number of short and long term benefits.

Relationship between poor olfaction and mortality among community-dwelling older adults: a cohort study. Annals of Internal Medicine [PubMed] Published 21st May 2019

I included this last study not because of any ground-breaking economics or statistics, but because it is interesting. This is one of a number of studies to have looked at the relationship between smell ability and risk of death. These studies have generally found a strong direct relationship between poor olfaction and risk of death in the following years (summarised briefly in this editorial). This study examines a cohort of a couple of thousand older people whose sense of smell was rigorously tested at baseline, among other things. If they died, their death was categorised by a medical examiner into one of four categories: dementia or Parkinson disease, cardiovascular disease, cancer, and respiratory illness.

There was a very strong relationship between poor ability to smell and all-cause death: the cumulative risk of death was 46% and 30% higher in persons with a loss of smelling ability at 10 and 13 years respectively. Delving into death by cause, they found that this relationship was most important among those who died of dementia or Parkinson disease, which makes sense as the olfactory system is one of the oldest parts of the brain and is closely linked to the limbic system. Some relationship was seen with cardiovascular disease, but not with cancer or respiratory illness. They then use a ‘mediation analysis’, i.e. conditioning on post-treatment variables to ‘block’ causal pathways, to identify how much of the relationship can be explained, and conclude that dementia, Parkinson disease, and weight loss account for about 30% of the observed relationship. However, I am usually suspicious of mediation analyses, and standard arguments would suggest that the model parameters would be biased.
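For what it's worth, the usual 'difference method' behind that 30% figure (a sketch of the idea, not necessarily the authors' exact estimator) compares the association with and without the candidate mediators:

    \text{proportion mediated} \approx \frac{\beta_{\text{total}} - \beta_{\text{direct}}}{\beta_{\text{total}}}
    % \beta_total: association between poor olfaction and mortality without the mediators
    % \beta_direct: the same association after also conditioning on dementia, Parkinson disease and weight loss

It is exactly this conditioning on post-baseline variables that opens the door to the biases mentioned above.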

Interestingly, olfaction is not normally used as a diagnostic test among the elderly despite sense of smell being one of the strongest predictors of mortality. People do not generally notice their sense of smell waning as it is gradual, so would not likely remark on it to a doctor. Perhaps it is time to start testing it routinely?


Paul Mitchell’s journal round-up for 6th November 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A longitudinal study to assess the frequency and cost of antivascular endothelial therapy, and inequalities in access, in England between 2005 and 2015. BMJ Open [PubMed] Published 22nd October 2017

I am breaking one of my unwritten rules in a journal round-up by talking about colleagues’ work, but I feel it is too important not to provide a summary, for a number of reasons. The study highlights the problems faced by regional healthcare purchasers in England when implementing national guideline recommendations on the cost-effectiveness of new treatments. The paper focuses on anti-vascular endothelial growth factor (anti-VEGF) medicines in particular, with two drugs, ranibizumab and aflibercept, offered to patients with a range of eye conditions at a cost of £550-800 per injection. Another drug, bevacizumab, which is closely related to ranibizumab and performs similarly in trials, could be provided at a fraction of the cost (£50-100 per injection), but it is currently unlicensed for eye conditions in the UK. Using administrative data from Hospital Episode Statistics between 2005 and 2015, the study investigates how regional areas in England have coped with trying to provide the recommended drugs, tracking their use as they have been recommended for a number of different eye conditions over the past decade. In 2014/15 the cost of these two new drugs for treating eye conditions alone was estimated at £447 million nationally. The distribution of where these drugs are provided is not equal, varying widely across regions after controlling for socio-demographics, suggesting an inequality of access associated with the introduction of these high-cost drugs over the past decade at a time of relatively low growth in national health spending. Although there are limitations associated with using data not intended for research purposes, the study shows how the most can be made of data routinely collected for non-research purposes. On a public policy level, it raises questions over the provision of such high-cost drugs, for which the authors state the NHS is currently paying more than US insurers. Although it is important to be careful when comparing to unlicensed drugs, the authors point to clear evidence as to why their comparison is a reasonable one in this scenario, with a large opportunity cost associated with not including this option in national guidelines. If national recommendations continue to insist that such drugs be provided, clearer guidance is also required on how to disinvest from existing services at a regional level, to avoid further examples of inequality in access in the future.

In search of a common currency: a comparison of seven EQ-5D-5L value sets. Health Economics [PubMed] Published 24th October 2017

For those of us out there who like a good valuation study, you will need to set aside a good chunk of time to work your way through this one. The new EQ-5D-5L measure of health status, whose primary purpose is to generate quality-adjusted life years (QALYs) for economic evaluations, now has valuation studies emerging from different countries, whereby the relative importance of each of the measure's dimensions and levels is quantified based on general population preferences. This study offers the first comparison of value sets across seven countries: three Western European (England, the Netherlands, Spain), one North American (Canada), one South American (Uruguay), and two East Asian (Japan and South Korea). The authors aim to describe methodological differences between the seven value sets and to compare the relative importance of dimensions, level decrements, and scale length (i.e. quality/quantity trade-offs for QALYs), as well as developing a common (Western) currency across four of the value sets. In brief summary, there do appear to be similar trends across the three Western European countries: level decrements from level 3 to level 4 have the largest value, followed by levels 1 to 2. There is also a pattern in these three countries' dimensions, whereby the two “symptom” dimensions (i.e. pain/discomfort and anxiety/depression) have equal importance to the other three “functioning” dimensions (i.e. mobility, self-care, and usual activities). There are also clear differences with the other four value sets. Canada, although it also has the largest level decrement between levels 3 and 4 (49%), unusually has equal decrements for the remainder (17% x 3). For the other three countries, greater weight is attached to the three functioning dimensions relative to the two symptom dimensions. And although South Korea also has the largest level decrement between levels 3 and 4, it was largest between levels 4 and 5 in Uruguay and between levels 1 and 2 in Japan. Although the authors give a number of plausible reasons as to why these differences may occur, less justification is given for the choice of the four value sets they offer as a common currency, beyond the need to have a value set for countries that do not have one already. The most similar value sets were those of the three Western European countries, so a Western European value set may have been more appropriate if the criterion was to have comparable values across countries. If the aim was really a more international common currency, there are issues with excluding non-Western countries' value sets from the common currency version: surely differences across cultures should be reflected in a common currency if they are apparent in different cultures and settings. A common currency should also have a better geographical spread of regions, with no country from Africa, the Middle East, or Central and South Asia represented in this study, and no low- or middle-income countries either, though this final criticism is out of the authors' control given current data availability.
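As a reminder of what a 'level decrement' means in practice, these value sets are typically additive: a state's index value is full health minus the decrements attached to the reported level on each dimension. A stylised example with made-up numbers (not any of the seven actual value sets):

    U(\text{state}) = 1 - \sum_{d \in \{MO, SC, UA, PD, AD\}} \text{decrement}_d(\text{level}_d)
    % e.g. for state 21345 with purely illustrative decrements 0.05, 0.00, 0.10, 0.25, 0.30:
    % U = 1 - (0.05 + 0.00 + 0.10 + 0.25 + 0.30) = 0.30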

Quantifying the relationship between capability and health in older people: can’t map, won’t map. Medical Decision Making [PubMed] Published 23rd October 2017

The EQ-5D is one of many ways quality of life can be measured within economic evaluations. A more recent approach, based on Amartya Sen's capability approach, has attempted to develop outcome measures that move beyond the health-related aspects of quality of life captured by the EQ-5D and similar measures used in the generation of QALYs. This study examines the relationship between the EQ-5D and the ICECAP-O capability measure in three different patient populations included in the Medical Crises in Older People programme in England. The authors propose a reasonable hypothesis that health could be considered a conversion factor for a person's broader capability set, and so it is plausible to test how well the EQ-5D-3L dimension values and overall score map onto the ICECAP-O overall score. Across the numerous regressions performed, the strongest relationship between the two measures in this sample was an R-squared of 0.35. Interestingly, the EQ-5D dimensions that had a significant relationship with the ICECAP-O score were a mix of dimensions focused on functioning (i.e. self-care, usual activities) and symptoms (anxiety/depression), so overall capability on the ICECAP-O appears to be related, at least to a small degree, to both of the EQ-5D components discussed in this round-up's previous paper. The authors suggest this provides further evidence that the EQ-5D and ICECAP-O capture complementary information, but the causal relationship between the two measures remains under-researched. Longitudinal data analysis would provide a more definitive answer to the question of how much interaction there is between these two measures and their dimensions as health and capability change over time in response to different treatments and care provision.
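The mapping exercise itself boils down to regressions of roughly this form (my notation; the paper estimates several variants with different sets of regressors):

    \text{ICECAP-O}_i = \alpha + \sum_{d} \beta_d\, \text{EQ5D}_{di} + \varepsilon_i

with the best-performing specification explaining only about a third of the variance (R-squared of 0.35), which is what the 'can't map, won't map' verdict rests on.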
