Chris Sampson’s journal round-up for 14th October 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Transparency in health economic modeling: options, issues and potential solutions. PharmacoEconomics [PubMed] Published 8th October 2019

Reading this paper was a strange experience. The purpose of the paper, and its content, is much the same as a paper of my own, which was published in the same journal a few months ago.

The authors outline what they see as the options for transparency in the context of decision modelling, with a focus on open source models and a focus on for whom the details are transparent. Models might be transparent to a small number of researchers (e.g. in peer review), to HTA agencies, or to the public at large. The paper includes a figure showing the two aspects of transparency, termed ‘reach’ and ‘level’, which relate to the number of people who can access the information and the level of detail made available. We provided a similar figure in our paper, using the terms ‘breadth’ and ‘depth’, which is at least some validation of our idea. The authors then go on to discuss five ‘issues’ with transparency: copyright, model misuse, confidential data, software, and time/resources. These issues are framed as questions, to which the authors posit some answers as solutions.

Perhaps inevitably, I think our paper does a better job, and so I’m probably over-critical of this article. Ours is more comprehensive, if nothing else. But I also think the authors make a few missteps. There’s a focus on models created by academic researchers, which oversimplifies the discussion somewhat. Open source modelling is framed as a more complete solution than it really is. The ‘issues’ that are discussed are at points framed as drawbacks or negative features of transparency, which they aren’t. Certainly, they’re challenges, but they aren’t reasons not to pursue transparency. ‘Copyright’ seems to be used as a synonym for intellectual property, and transparency is considered to be a threat to this. The authors’ proposed solution here is to use licensing fees. I think that’s a bad idea. Levying a fee creates an incentive to disregard copyright, not respect it.

It’s a little ironic that this paper and my own were published in parallel, given that both describe the benefits of transparency in terms of reducing “duplication of efforts”. No doubt, I read this paper with a far more critical eye than I normally would. Had I not published a paper on precisely the same subject, I might’ve thought this paper was brilliant.

If we recognize heterogeneity of treatment effect can we lessen waste? Journal of Comparative Effectiveness Research [PubMed] Published 1st October 2019

This commentary starts from the premise that a pervasive overuse of resources creates a lot of waste in health care, which I guess might be true in the US. Apparently, this is because clinicians have an insufficient understanding of heterogeneity in treatment effects and therefore assume average treatment effects for their patients. The authors suggest that this situation is reinforced by clinical trial publications tending to only report average treatment effects. I’m not sure whether the authors are arguing that clinicians are too knowledgeable about and dependent on the research, or that they don’t know the research well enough. Either way, it isn’t a very satisfying explanation of the overuse of health care. Certainly, patients could benefit from more personalised care, and I would support the authors’ argument in favour of stratified studies and the reporting of subgroup treatment effects. The most insightful part of this paper is the argument that these stratifications should be on the basis of observable characteristics. It isn’t much use to your general practitioner if personalisation requires genome sequencing. In short, I agree with the authors’ argument that we should do more to recognise heterogeneity of treatment effects, but I’m not sure it has much to do with waste.

No evidence for a protective effect of education on mental health. Social Science & Medicine Published 3rd October 2019

When it comes to the determinants of health and well-being, I often think back to my MSc dissertation research. As part of that, I learned that a) stuff that you might imagine to be important often isn’t and b) methodological choices matter a lot. Though it wasn’t the purpose of my study, it seemed from this research that higher education has a negative effect on people’s subjective well-being. But there isn’t much research out there to help us understand the association between education and mental health in general.

This study adds to a small body of literature on the impact of changes in compulsory schooling on mental health. In (West) Germany, education policy was determined at the state level, so when compulsory schooling was extended from eight to nine years, different states implemented the change at different times between 1949 and 1969. This study includes 5,321 people, with 20,290 person-year observations, from the German Socio-Economic Panel survey (SOEP). Inclusion was based on people being born seven years either side of the cutoff birth year for which the longer compulsory schooling was enacted, with a further restriction to people aged between 50 and 85. The SOEP includes the SF-12 questionnaire, which includes a mental health component score (MCS). There is also an 11-point life satisfaction scale. The authors use an instrumental variable approach, taking the policy change as an instrument for years of schooling and estimating a standard two-stage least squares model. The MCS score, the life satisfaction score, and a binary indicator for an MCS score lower than or equal to 45.6 are all modelled as separate outcomes.
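In R, the estimation strategy amounts to something like the following (a minimal sketch with hypothetical data frame and variable names, not the authors’ code, which includes further controls and robustness checks):

```r
# Sketch of the two-stage least squares specification (hypothetical data
# frame and variable names; not the authors' code).
library(AER)

# The staggered extension of compulsory schooling ('reform') serves as the
# instrument for years of schooling; birth-cohort and state effects control
# for secular trends and regional differences.
iv_mcs <- ivreg(
  mcs ~ years_schooling + factor(birth_year) + factor(state) |
    reform + factor(birth_year) + factor(state),
  data = soep
)
summary(iv_mcs, diagnostics = TRUE)  # includes a weak-instrument F-test
```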

Estimates from an OLS model show a positive and highly significant effect of years of schooling on all three outcomes. But when the instrumental variable model is used, this effect disappears. In this model, an additional year of schooling is associated with a statistically and clinically insignificant decrease in the MCS score. The estimated effects on the likelihood of developing symptoms of a mental health disorder (as indicated by the MCS threshold of 45.6) and on life satisfaction (slightly lower) were also insignificant. The same model shows a positive effect on physical health, which corresponds with previous research and provides some reassurance that the model could detect an effect if one existed.

The specification of the model seems reasonable and a host of robustness checks are reported. The only potential issue I could spot is that a person’s state of residence at the time of schooling is not observed, and so their location at entry into the sample is used. Given that education is associated with mobility, this could be a problem, and I would have liked to see the authors subject it to more testing. The overall finding – that an additional year of school for people who might otherwise only stay at school for eight years does not improve mental health – is persuasive. But the extent to which we can say anything more general about the impact of education on well-being is limited. What if it had been three years of additional schooling, rather than one? There is still much work to be done in this area.

Scientific sinkhole: the pernicious price of formatting. PLoS One [PubMed] Published 26th September 2019

This study is based on a survey that asked 372 researchers from 41 countries about the time they spent formatting manuscripts for journal submission. Let’s see how I can frame this as health economics… Well, some of the participants are health researchers. The time they spend on formatting journal submissions is time not spent on health research. The opportunity cost of time spent formatting could be measured in terms of health.

The authors focused on the time and wage costs of formatting. The results showed that formatting took a median time of 52 hours per person per year, at a cost of $477 per manuscript or $1,908 per person per year. Researchers spend – on average – 14 hours on formatting a manuscript. That’s outrageous. I have never spent that long on formatting. If you do, you only have yourself to blame. Or maybe it’s just because of what I consider to constitute formatting. The survey asked respondents to consider formatting of figures, tables, and supplementary files. Improving the format of a figure or a table can add real value to a paper. A good figure or table can change a bad paper to a good paper. I’d love to know how the time cost differed for people using LaTeX.
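For what it’s worth, the reported figures roughly hang together, as a quick back-of-the-envelope check shows (figures from the paper, arithmetic mine):

```r
# Back-of-the-envelope check on the reported figures (from the paper).
hours_per_year <- 52    # median formatting time per person per year
cost_per_ms    <- 477   # US$ cost per manuscript
cost_per_year  <- 1908  # US$ cost per person per year

cost_per_year / cost_per_ms  # implies ~4 manuscripts per person per year
hours_per_year / (cost_per_year / cost_per_ms)  # ~13 hours per manuscript,
# close to the reported 14-hour figure (the gap presumably reflects
# medians versus means)
```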


Jason Shafrin’s journal round-up for 7th October 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Combined impact of future trends on healthcare utilisation of older people: a Delphi study. Health Policy [PubMed] [RePEc] Published October 2019

Governments need to plan for the future. This is particularly important in countries where the government pays for the lion’s share of health care expenditures. Predicting the future, however, is not an easy task. One could use quantitative approaches and simply extrapolate recent trends. One could consult political experts to determine which policies are likely to be enacted. Another approach is to use a Delphi Panel to elicit expert opinions on future trends in health care utilization to help predict future health care needs. This approach was the one taken by Ravensbergen and co-authors in an attempt to predict trends in health care utilization among older adults in the Netherlands in 2040.

The Delphi Panel approach was applied in this study as follows. First, individuals received a questionnaire via email. Researchers presented the experts with trends from the Dutch Public Health Foresight Study (Volksgezondheid Toekomst Verkenning) to ground all experts in the same baseline information. The data and questions largely addressed trends separately for the old (65–80 years) and the oldest old (>80 years). After the responses to the first questionnaire were received, they were summarized and provided back to each panelist anonymously. Panelists were then able to revise their views in a second questionnaire, taking into account the feedback from the other panelists. Because the panelists did not meet in person, this approach should be considered a modified Delphi Panel.
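The feedback step is simple enough to caricature in a few lines of R (a toy illustration with invented data, not the authors’ procedure):

```r
# Toy illustration of the anonymous feedback step between Delphi rounds
# (invented data; the real study used emailed questionnaires).
round1 <- data.frame(
  panelist  = 1:8,
  q_ehealth = c(4, 5, 3, 4, 5, 2, 4, 4)  # agreement on a 1-5 scale
)

# Anonymous summary returned to each panelist before round 2:
c(median = median(round1$q_ehealth), iqr = IQR(round1$q_ehealth))
```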

The Delphi panel identified three broad trends: increased use of eHealth tools, less support, and changes in health status. While the panel thought eHealth was important, the experts rarely reached consensus on how eHealth would affect healthcare utilization. The experts did reach consensus, however, in believing that the share of adults aged 50–64 will decline relative to the share of individuals aged ≥85 years, implying that fewer caregivers will be available and that more of the oldest old will be living independently (i.e. with less support). Because less informal care will be available, the panel believed that demand for home care and general practitioner services will rise. The respondents also believed that, in most cases, changes in health status will increase utilization of general practitioner and specialist services. There was less agreement about trends in the need for long-term care or mental health services, however.

The Delphi Panel approach may be useful to help governments predict future demand for services. More rigorous approaches, such as betting markets, are likely not feasible for long-run predictions, since the payouts would take too long to generate much interest; betting markets could, however, be used to predict shorter-run trends in health care utilization. The risk with betting markets is that some individuals could act strategically to drive predictions up or down in order to increase or decrease reimbursement for certain sectors.

In short, the Delphi Panel is likely a reasonable, low-cost approach for predicting trends in health care utilization. Future studies, however, should validate how good the predictions are from using this type of method.

The fold-in, fold-out design for DCE choice tasks: application to burden of disease. Medical Decision Making [PubMed] Published 29th May 2019

Discrete choice experiments (DCEs) are a useful way to determine what treatment attributes patients (or providers or caregivers) value. Respondents are presented with multiple treatment options and the options can be compared across a series of attributes. An attribute could be treatment efficacy, safety, dosing, cost, or a host of other attributes. One can use this approach to measure the marginal rate of substitution across attributes. If cost is one of the attributes, one can measure willingness to pay for specific attributes.
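The marginal-rate-of-substitution logic is simple: in a linear utility model, willingness to pay for an attribute is just a ratio of coefficients (a sketch with invented numbers):

```r
# Willingness-to-pay logic in a linear utility model (invented coefficients).
# If U = b_eff * efficacy + b_cost * cost + ..., then WTP for a one-unit
# gain in efficacy is -b_eff / b_cost.
b_eff  <-  0.80   # utility weight on efficacy
b_cost <- -0.02   # utility weight on cost (negative: more cost is worse)

-b_eff / b_cost   # = 40 currency units per unit of efficacy gained
```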

One of the key challenges of DCEs, however, is attribute selection. Most treatments differ across a range of attributes, but most published DCEs present four, five, or at most seven attributes. Including more attributes makes comparisons too complicated for most respondents. Thus, researchers are left with a difficult choice: (i) a tractable but overly simplified survey, or (ii) a realistic but overly complex survey unlikely to be comprehended by respondents.

One solution proposed by Lucas Goossens and co-authors is to use a Fold-in Fold-out (FiFo) approach. In this approach, related attributes may be grouped into domains. For some questions, all attributes within the same domain have the same attribute level (i.e., fold in); in other questions, attributes may vary within the domain (i.e., fold out).

To be concrete, the Goossens paper examines treatments for chronic obstructive pulmonary disease (COPD), using 15 attributes divided into three domains plus two stand-alone attributes:

- a respiratory symptoms domain (four attributes: shortness of breath at rest, shortness of breath during physical activity, coughing, and sputum production),
- a limitations domain (four attributes: limitations in strenuous physical activities, limitations in moderate physical activities, limitations in daily activities, and limitations in social activities),
- a mental problems domain (five attributes: feeling depressed, fearing that breathing gets worse, worrying, listlessness, and tense feeling),
- a fatigue attribute, and
- an exacerbations attribute.

This creative approach simplifies the choice set for respondents while allowing for a large number of attributes. Using the data collected, the authors conducted the analysis with a Bayesian mixed logit regression model. The underlying utility function assumed domain-specific parameters, but also allowed within-domain attribute weights to vary in the questions where a domain was folded out.
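My reading of that specification, sketched as a function (illustrative only, not the authors’ code):

```r
# Sketch of a domain-based utility function (my reading of the FiFo design,
# not the authors' specification). gamma holds domain weights; w holds
# within-domain attribute weights, identified only in fold-out questions.
domain_utility <- function(x, gamma, w) {
  # x: named list of attribute-level vectors, one element per domain
  # gamma: named numeric vector of domain weights
  # w: named list of within-domain attribute weight vectors
  sum(vapply(names(x), function(d) gamma[[d]] * sum(w[[d]] * x[[d]]),
             numeric(1)))
}
# In a fold-in question, all attributes in a domain take the same level, so
# only gamma matters; in a fold-out question, the w weights also come into play.
```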

One key challenge, however, is that the authors found that individuals placed more weight on attributes when their domains were folded out (i.e., attribute levels varied within the domain) than when their domains were folded in (i.e., attribute levels were the same within the domain). Thus, I would say that if five, six, or seven attributes can capture the lion’s share of the differences across treatments, use the standard approach; if more attributes are needed, the FiFo approach is an attractive option researchers should consider.

The health and cost burden of antibiotic resistant and susceptible Escherichia coli bacteraemia in the English hospital setting: a national retrospective cohort study. PLoS One [PubMed] Published 10th September 2019

Bacterial infections are bad. The good news is that we have antibiotics to treat them so they no longer are a worry, right? While conventional wisdom may believe that we have many antibiotics to treat these infections, in recent years antibiotic resistance has grown. If antibiotics no longer are effective, what is the cost to society?

One effort to quantify the economic burden of antibiotic resistance, by Nichola Naylor and co-authors, used national surveillance and administrative data from National Health Service (NHS) hospitals in England. They compared costs for patients with E. coli bacteraemia against costs for patients with similar observable characteristics but without E. coli bacteraemia. Antibiotic resistance was defined using laboratory-based classifications of ‘resistant’ and ‘intermediate’ isolates. The antibiotics to which resistance was considered included ciprofloxacin, third-generation cephalosporins (ceftazidime and/or cefotaxime), gentamicin, piperacillin/tazobactam, and carbapenems (imipenem and/or meropenem).

The authors use an Aalen-Johansen estimator to measure the cumulative incidence of in-hospital mortality and length of stay. Both analyses control for the patient’s age, sex, Elixhauser comorbidity index, and hospital trust type. It does not appear that the authors control for the reason for admission to the hospital, nor do they use propensity-score matching to pair infected patients with comparable uninfected ones. Thus, it is likely that significant unobserved heterogeneity across groups remains in the analysis.
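For the curious, the machinery for this kind of competing-risks analysis is readily available in R’s survival package (a sketch with hypothetical variable names, not the authors’ code):

```r
# Sketch of the competing-risks analysis described above (hypothetical
# variable names; not the authors' code).
library(survival)

# Aalen-Johansen cumulative incidence: status is a factor whose first level
# is censoring, with in-hospital death and discharge as competing events.
aj <- survfit(Surv(los_days, status) ~ ecoli_group, data = cohort)

# Fine-Gray model, one standard way to estimate the subdistribution hazard
# of in-hospital death, adjusting for the covariates mentioned above.
fg_data <- finegray(Surv(los_days, status) ~ ., data = cohort, etype = "death")
fg_fit  <- coxph(Surv(fgstart, fgstop, fgstatus) ~ ecoli_group + age + sex +
                   elixhauser + trust_type, weights = fgwt, data = fg_data)
```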

Despite these limitations, the authors do have some interesting findings. First, bacterial infections are associated with an increased risk of death: in-hospital mortality was 14.3% for individuals infected with E. coli compared to 1.3% for those not infected. Accounting for covariates, the subdistribution hazard ratio (SHR) for in-hospital mortality due to E. coli bacteraemia was 5.88. Second, E. coli bacteraemia was associated with 3.9 excess hospital days compared to patients without the infection. These extra hospital days cost £1,020 per case of E. coli bacteraemia, and the estimated annual cost of E. coli bacteraemia in England was £14.3m. If antibiotic resistance has increased in recent years, these estimates are likely to be conservative.
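Taking those numbers at face value, a couple of implied quantities are easy to back out (arithmetic mine):

```r
# Implied quantities from the reported figures (arithmetic mine).
excess_days   <- 3.9      # excess length of stay per case
cost_per_case <- 1020     # GBP per case of E. coli bacteraemia
annual_cost   <- 14.3e6   # GBP per year in England

cost_per_case / excess_days  # ~GBP 262 per excess bed-day
annual_cost / cost_per_case  # ~14,000 cases per year implied
```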

The issue of antibiotic resistance presents a conundrum for policymakers. If current antibiotics are effective, drug-makers have little incentive to develop new antibiotics, since the new treatments are unlikely to be prescribed. On the other hand, failing to develop new antibiotics to hold in reserve means that, as antibiotic resistance grows, there will be few treatment alternatives. To address this issue, the United Kingdom is considering a ‘subscription style’ approach to paying for new antibiotics, to incentivize the development of new treatments.

Nevertheless, the paper by Naylor and co-authors provides a useful data point on the cost of antibiotic resistance.


Chris Sampson’s journal round-up for 30th September 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A need for change! A coding framework for improving transparency in decision modeling. PharmacoEconomics [PubMed] Published 24th September 2019

We’ve featured a few papers in recent round-ups that (I assume) will be included in an upcoming themed issue of PharmacoEconomics on transparency in modelling. It’s shaping up to be a good one. The value of transparency in decision modelling has been recognised, but simply making the stuff visible is not enough – it needs to make sense. The purpose of this paper is to help make that achievable.

The authors highlight that the writing of analyses, including coding, involves personal style and preferences. To aid transparency, we need a systematic framework of conventions that make the inner workings of a model understandable to any (expert) user. The paper describes a framework developed by the Decision Analysis in R for Technologies in Health (DARTH) group. The DARTH framework builds on a set of core model components, generalisable to all cost-effectiveness analyses and model structures. There are five components – i) model inputs, ii) model implementation, iii) model calibration, iv) model validation, and v) analysis – and the paper describes the role of each. Importantly, the analysis component can be divided into several parts relating to, for example, sensitivity analyses and value of information analyses.

Based on this framework, the authors provide recommendations for organising and naming files and on the types of functions and data structures required. The recommendations build on conventions established in other fields and in the use of R generally. The authors recommend the implementation of functions in R, and relate general recommendations to the context of decision modelling. We’re also introduced to unit testing, which will be unfamiliar to most Excel modellers but which can be relatively easily implemented in R. The roles of various tools are introduced, including RStudio, R Markdown, Shiny, and GitHub.
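To give a flavour of unit testing in this context, here is the sort of check you might write with the testthat package (an illustrative example of mine, not taken from the DARTH materials):

```r
# Illustrative unit test for a decision model component (function and state
# names invented; not from the DARTH materials).
library(testthat)

# Transition probability matrix for a simple three-state Markov model.
make_trans_mat <- function(p_sick, p_die) {
  matrix(c(1 - p_sick, p_sick,    0,
           0,          1 - p_die, p_die,
           0,          0,         1),
         nrow = 3, byrow = TRUE,
         dimnames = list(c("Well", "Sick", "Dead"),
                         c("Well", "Sick", "Dead")))
}

test_that("transition probabilities are valid", {
  m <- make_trans_mat(p_sick = 0.1, p_die = 0.05)
  expect_true(all(m >= 0 & m <= 1))            # probabilities in [0, 1]
  expect_equal(unname(rowSums(m)), rep(1, 3))  # each row sums to one
})
```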

The real value of this work lies in the linked R packages and other online material, which you can use to test out the framework and consider its application to whatever modelling problem you might have. The authors provide an example using a basic Sick-Sicker model, which you can have a play with using the DARTH packages. In combination with the online resources, this is a valuable paper that you should have to hand if you’re developing a model in R.

Accounts from developers of generic health state utility instruments explain why they produce different QALYs: a qualitative study. Social Science & Medicine [PubMed] Published 19th September 2019

It’s well known that different preference-based measures of health will generate different health state utility values for the same person. Yet, they continue to be used almost interchangeably. For this study, the authors spoke to people involved in the development of six popular measures: QWB, 15D, HUI, EQ-5D, SF-6D, and AQoL. Their goal was to understand the bases for the development of the measures and to explain why the different measures should give different results.

At least one original developer for each instrument was recruited, along with people involved at later stages of development. Semi-structured interviews were conducted with 15 people, with questions on the background, aims, and criteria for the development of the measure, and on the descriptive system, preference weights, performance, and future development of the instrument.

Five broad topics were identified as being associated with differences in the measures: i) knowledge sources used for conceptualisation, ii) development purposes, iii) interpretations of what makes a ‘good’ instrument, iv) choice of valuation techniques, and v) the context for the development process. The online appendices provide some useful tables that summarise the differences between the measures. The authors distinguish between measures based on ‘objective’ definitions (QWB) and items that people found important (15D). Some prioritised sensitivity (AQoL, 15D), others prioritised validity (HUI, QWB), and several focused on pragmatism (SF-6D, HUI, 15D, EQ-5D). Some instruments had modest goals and opportunistic processes (EQ-5D, SF-6D, HUI), while others had grand goals and purposeful processes (QWB, 15D, AQoL). The use of some measures (EQ-5D, HUI) extended far beyond what the original developers had anticipated. In short, different measures were developed with quite different concepts and purposes in mind, so it’s no surprise that they give different results.

This paper provides some interesting accounts and views on the process of instrument development. It might prove most useful in understanding different measures’ blind spots, which can inform the selection of measures in research, as well as future development priorities.

The emerging social science literature on health technology assessment: a narrative review. Value in Health Published 16th September 2019

Health economics provides a good example of multidisciplinarity, with economists, statisticians, medics, epidemiologists, and plenty of others working together to inform health technology assessment. But I still don’t understand what sociologists are talking about half of the time. Yet, it seems that sociologists and political scientists are busy working on the big questions in HTA, as demonstrated by this paper’s 120 references. So, what are they up to?

This article reports on a narrative review, based on 41 empirical studies. Three broad research themes are identified: i) what drove the establishment and design of HTA bodies? ii) what has been the influence of HTA? and iii) what have been the social and political influences on HTA decisions? Some have argued that HTA is inevitable, while others have argued that there are alternative arrangements. Either way, no two systems are the same and it is not easy to explain the differences. It’s important to understand HTA in the context of other social tendencies and trends, which HTA both influences and is influenced by. The authors provide a substantial discussion of the role of stakeholders in HTA and the potential for some to attempt to game the system. Uncertainty abounds in HTA, which necessarily requires negotiation and limits the extent to which HTA can rely on objectivity and rationality.

Something lacking is a critical history of HTA as a discipline and the question of what HTA is actually good for. There’s also not a lot of work out there on culture and values, which contrasts with medical sociology. The authors suggest that sociologists and political scientists could be more closely involved in HTA research projects. I suspect that such a move would be more challenging for the economists than for the sociologists.
