Chris Sampson’s journal round-up for 14th October 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Transparency in health economic modeling: options, issues and potential solutions. PharmacoEconomics [PubMed] Published 8th October 2019

Reading this paper was a strange experience. The purpose and content of the paper are much the same as those of a paper of my own, which was published in the same journal a few months ago.

The authors outline what they see as the options for transparency in the context of decision modelling, with a focus on open source models and on the question of for whom the details are transparent. Models might be transparent to a small number of researchers (e.g. in peer review), to HTA agencies, or to the public at large. The paper includes a figure showing the two aspects of transparency, termed ‘reach’ and ‘level’, which relate to the number of people who can access the information and the level of detail made available. We provided a similar figure in our paper, using the terms ‘breadth’ and ‘depth’, which is at least some validation of our idea. The authors then go on to discuss five ‘issues’ with transparency: copyright, model misuse, confidential data, software, and time/resources. These issues are framed as questions, to which the authors posit some answers as solutions.

Perhaps inevitably, I think our paper does a better job, and so I’m probably over-critical of this article. Ours is more comprehensive, if nothing else. But I also think the authors make a few missteps. There’s a focus on models created by academic researchers, which oversimplifies the discussion somewhat. Open source modelling is framed as a more complete solution than it really is. The ‘issues’ that are discussed are at points framed as drawbacks or negative features of transparency, which they aren’t. Certainly, they’re challenges, but they aren’t reasons not to pursue transparency. ‘Copyright’ seems to be used as a synonym for intellectual property, and transparency is considered to be a threat to this. The authors’ proposed solution here is to use licensing fees. I think that’s a bad idea. Levying a fee creates an incentive to disregard copyright, not respect it.

It’s a little ironic that both this paper and my own were published, when both describe the benefits of transparency in terms of reducing “duplication of efforts”. No doubt, I read this paper with a far more critical eye than I normally would. Had I not published a paper on precisely the same subject, I might’ve thought this paper was brilliant.

If we recognize heterogeneity of treatment effect can we lessen waste? Journal of Comparative Effectiveness Research [PubMed] Published 1st October 2019

This commentary starts from the premise that a pervasive overuse of resources creates a lot of waste in health care, which I guess might be true in the US. Apparently, this is because clinicians have an insufficient understanding of heterogeneity in treatment effects and therefore assume average treatment effects for their patients. The authors suggest that this situation is reinforced by clinical trial publications tending to only report average treatment effects. I’m not sure whether the authors are arguing that clinicians are too knowledgeable about and dependent on the research, or that they don’t know the research well enough. Either way, it isn’t a very satisfying explanation of the overuse of health care. Certainly, patients could benefit from more personalised care, and I would support the authors’ argument in favour of stratified studies and the reporting of subgroup treatment effects. The most insightful part of this paper is the argument that these stratifications should be on the basis of observable characteristics. It isn’t much use to your general practitioner if personalisation requires genome sequencing. In short, I agree with the authors’ argument that we should do more to recognise heterogeneity of treatment effects, but I’m not sure it has much to do with waste.

No evidence for a protective effect of education on mental health. Social Science & Medicine Published 3rd October 2019

When it comes to the determinants of health and well-being, I often think back to my MSc dissertation research. As part of that, I learned that a) stuff that you might imagine to be important often isn’t and b) methodological choices matter a lot. Though it wasn’t the purpose of my study, it seemed from this research that higher education has a negative effect on people’s subjective well-being. But there isn’t much research out there to help us understand the association between education and mental health in general.

This study adds to a small body of literature on the impact of changes in compulsory schooling on mental health. In (West) Germany, education policy was determined at the state level, so when compulsory schooling was extended from eight to nine years, different states implemented the change at different times between 1949 and 1969. This study includes 5,321 people, with 20,290 person-year observations, from the German Socio-Economic Panel survey (SOEP). Inclusion was based on people being born seven years either side of the cutoff birth year for which the longer compulsory schooling was enacted, with a further restriction to people aged between 50 and 85. The SOEP includes the SF-12 questionnaire, which includes a mental health component score (MCS). There is also an 11-point life satisfaction scale. The authors use an instrumental variable approach, using the policy change as an instrument for years of schooling and estimating a standard two-stage least squares model. The MCS score, life satisfaction score, and a binary indicator for MCS score lower than or equal to 45.6, are all modelled as separate outcomes.
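The logic of the instrument can be sketched with a toy calculation. With a single binary instrument (subject to the nine-year rule or not) and no covariates, the two-stage least squares estimate reduces to the Wald estimator: the difference in the outcome between exposed and unexposed cohorts, divided by the difference in years of schooling. The numbers below are invented purely for illustration; nothing here comes from the SOEP data or the paper itself.

```python
# Wald/2SLS estimator with a binary instrument (illustrative, made-up data).
# z = 1 if the person was subject to the nine-year compulsory schooling rule.
from statistics import mean

z = [0, 0, 0, 0, 1, 1, 1, 1]            # instrument: exposed to the reform
school = [8, 8, 9, 8, 9, 9, 9, 8]        # years of schooling (first stage)
mcs = [48.0, 50.0, 49.5, 47.5, 49.0, 50.5, 48.5, 49.0]  # outcome (SF-12 MCS)

def wald_estimate(y, x, z):
    """2SLS with one binary instrument: reduced form over first stage."""
    y1 = mean(v for v, i in zip(y, z) if i == 1)
    y0 = mean(v for v, i in zip(y, z) if i == 0)
    x1 = mean(v for v, i in zip(x, z) if i == 1)
    x0 = mean(v for v, i in zip(x, z) if i == 0)
    return (y1 - y0) / (x1 - x0)

effect = wald_estimate(mcs, school, z)  # MCS points per extra year of schooling
print(round(effect, 2))  # 1.0
```

The actual paper estimates a full two-stage least squares model with controls, so this is only the intuition: the instrument shifts schooling, and any associated shift in the outcome is attributed to schooling.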

Estimates using an OLS model show a positive and highly significant effect of years of schooling on all three outcomes. But when the instrumental variable model is used, this effect disappears. An additional year of schooling in this model is associated with a statistically and clinically insignificant decrease in the MCS score. The findings that more years of schooling increase the likelihood of developing symptoms of a mental health disorder (as indicated by the MCS threshold of 45.6) and that life satisfaction is slightly lower are also insignificant. The same model shows a positive effect on physical health, which corresponds with previous research and provides some reassurance that the model could detect an effect if one existed.

The specification of the model seems reasonable and a host of robustness checks are reported. The only potential issue I could spot is that a person’s state of residence at the time of schooling is not observed, and so their location at entry into the sample is used. Given that education is associated with mobility, this could be a problem, and I would have liked to see the authors subject it to more testing. The overall finding – that an additional year of school for people who might otherwise only stay at school for eight years does not improve mental health – is persuasive. But the extent to which we can say anything more general about the impact of education on well-being is limited. What if it had been three years of additional schooling, rather than one? There is still much work to be done in this area.

Scientific sinkhole: the pernicious price of formatting. PLoS One [PubMed] Published 26th September 2019

This study is based on a survey that asked 372 researchers from 41 countries about the time they spent formatting manuscripts for journal submission. Let’s see how I can frame this as health economics… Well, some of the participants are health researchers. The time they spend on formatting journal submissions is time not spent on health research. The opportunity cost of time spent formatting could be measured in terms of health.

The authors focused on the time and wage costs of formatting. The results showed that formatting took a median time of 52 hours per person per year, at a cost of $477 per manuscript or $1,908 per person per year. Researchers spend – on average – 14 hours on formatting a manuscript. That’s outrageous. I have never spent that long on formatting. If you do, you only have yourself to blame. Or maybe it’s just because of what I consider to constitute formatting. The survey asked respondents to consider formatting of figures, tables, and supplementary files. Improving the format of a figure or a table can add real value to a paper. A good figure or table can change a bad paper to a good paper. I’d love to know how the time cost differed for people using LaTeX.


Rita Faria’s journal round-up for 2nd September 2019

RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ [PubMed] Published 28th August 2019

RCTs are the gold standard primary study to estimate the effect of treatments but are often far from perfect. The question is the extent to which their flaws make a difference to the results. Well, RoB 2 is your new best friend to help answer this question.

Developed by a star-studded team, the RoB 2 is the update to the original risk of bias tool by the Cochrane Collaboration. Bias is assessed by outcome, rather than for the whole RCT. For me, this makes sense. For example, the primary outcome may be well reported, yet the secondary outcome, which may be the outcome of interest for a cost-effectiveness model, much less so.

Bias is considered in terms of five domains, with the overall risk of bias usually corresponding to the worst risk of bias in any of the domains. This overall risk of bias is then reflected in the evidence synthesis, with, for example, a stratified meta-analysis.
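The ‘worst domain wins’ default is simple enough to sketch. The domain names below paraphrase the RoB 2 domains, and the aggregation shown is only the usual default (the tool allows reviewer judgement to override it), so treat this as a sketch rather than the tool itself.

```python
# Default RoB 2 aggregation: overall risk is the most severe domain judgement.
# Domain names are paraphrased; RoB 2 also lets reviewers override this default.
SEVERITY = {"low": 0, "some concerns": 1, "high": 2}

def overall_risk(domain_judgements):
    """Return the worst (most severe) judgement across the domains."""
    return max(domain_judgements.values(), key=SEVERITY.__getitem__)

judgements = {
    "randomisation process": "low",
    "deviations from intended interventions": "some concerns",
    "missing outcome data": "low",
    "measurement of the outcome": "low",
    "selection of the reported result": "low",
}
print(overall_risk(judgements))  # some concerns
```

One ‘some concerns’ judgement is enough to drag the whole outcome down, which is why assessing by outcome rather than by trial matters: different outcomes can land on different overall ratings.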

The paper is a great read! Jonathan Sterne and colleagues explain the reasons for the update and the process that was followed. Clearly, there was quite a lot of thought given to the types of bias and to developing questions to help reviewers assess them. The only downside is that it may require more time to apply, given that it needs to be done by outcome. Still, I think that’s a price worth paying for more reliable results. Looking forward to seeing it in use!

Characteristics and methods of incorporating randomised and nonrandomised evidence in network meta-analyses: a scoping review. Journal of Clinical Epidemiology [PubMed] Published 3rd May 2019

In keeping with the evidence synthesis theme, this paper by Kathryn Zhang and colleagues reviews how the applied literature has been combining randomised and non-randomised evidence. The headline findings are that combining these two types of study designs is rare and, when it does happen, naïve pooling is the most common method.

I imagine that the limited use of non-randomised evidence is due to its risk of bias. After all, it is difficult to ensure that the measure of association from a non-randomised study is an estimate of a causal effect. Hence, it is worrying that the majority of network meta-analyses that did combine non-randomised studies did so with naïve pooling.

This scoping review may kick start some discussions in the evidence synthesis world. When should we combine randomised and non-randomised evidence? How best to do so? And how to make sure that the right methods are used in practice? As a cost-effectiveness modeller, with limited knowledge of evidence synthesis, I’ve grappled with these questions myself. Do get in touch if you have any thoughts.

A cost-effectiveness analysis of shortened direct-acting antiviral treatment in genotype 1 noncirrhotic treatment-naive patients with chronic hepatitis C virus. Value in Health [PubMed] Published 17th May 2019

Rarely do we see a cost-effectiveness paper where the proposed intervention is less costly and less effective, that is, in the controversial southwest quadrant. This exceptional paper by Christopher Fawsitt and colleagues is a welcome exception!

Christopher and colleagues looked at the cost-effectiveness of shorter treatment durations for chronic hepatitis C. Compared with the standard duration, the shorter treatment is not as effective, hence results in fewer QALYs. But it is much cheaper to treat patients over a shorter duration and re-treat those patients who were not cured, rather than treat everyone with the standard duration. Hence, for the base-case and for most scenarios, the shorter treatment is cost-effective.
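The southwest-quadrant decision rule can trip people up, so a small sketch may help: a less effective, less costly option is cost-effective when the savings per QALY forgone exceed the threshold, or equivalently when its incremental net monetary benefit is positive. All numbers below are invented for illustration and are not taken from the paper.

```python
# Incremental net monetary benefit for a southwest-quadrant comparison.
def incremental_nmb(delta_qalys, delta_cost, threshold):
    """NMB of the new option vs standard: positive means cost-effective."""
    return threshold * delta_qalys - delta_cost

# Hypothetical shortened-treatment scenario: slightly fewer QALYs, much cheaper.
delta_qalys = -0.01      # QALYs lost per patient vs standard duration
delta_cost = -5_000.0    # negative = cost saving per patient (£)
threshold = 20_000.0     # willingness to pay per QALY (£)

nmb = incremental_nmb(delta_qalys, delta_cost, threshold)
print(round(nmb))  # 4800 -> positive, so the shorter course is cost-effective
```

With these made-up numbers, the health lost is valued at £200 while the saving is £5,000, so the shorter course wins; the qualitative shape matches the paper’s base-case result, even though the figures do not.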

I’m sure that labelling a less effective and less costly option as cost-effective will have been controversial in some quarters. Some may argue that it is unethical to offer a worse treatment than the standard even if it saves a lot of money. In my view, it is no different from funding better but more costly treatments, given that the costs will be borne by other patients who will necessarily have access to fewer resources.

The paper is beautifully written and is another example of an outstanding cost-effectiveness analysis with important implications for policy and practice. The extensive sensitivity analysis should provide reassurance to the sceptics. And the discussion is clever in arguing for the value of a shorter duration in resource-constrained settings and for hard-to-reach populations. A must read!


Chris Sampson’s journal round-up for 5th August 2019

The barriers and facilitators to model replication within health economics. Value in Health Published 16th July 2019

Replication is a valuable part of the scientific process, especially if there are uncertainties about the validity of research methods. When it comes to cost-effectiveness modelling, there are endless opportunities for researchers to do things badly, even with the best intentions. Attempting to replicate modelling studies can therefore support health care decision-making. But replication studies are rarely conducted, or, at least, rarely reported. The authors of this study sought to understand the factors that can make replication easy or difficult, with a view to informing reporting standards.

The authors attempted to replicate five published cost-effectiveness modelling studies, with the aim of recreating the key results. Each replication attempt was conducted by a different author and we’re even given a rating of the replicator’s experience level. The characteristics of the models were recorded and each replicator detailed – anecdotally – the things that helped or hindered their attempt. Some replications were a resounding failure. In one case, the replicated cost per patient was more than double the original, at more than £1,000 wide of the mark. Replicators reported that having a clear diagram of the model structure was a big help, as was the provision of example calculations and explicit listing of the key assumptions. Various shortcomings made replication difficult, all relating to a lack of clarity or completeness in reporting. The impact of this on the validation attempt was exacerbated if the model involved lots of scenarios that weren’t clearly described or had a long time horizon.

The quality of each study was assessed using the Philips checklist, and all did pretty well, suggesting that the checklist is not sufficient for ensuring replicability. If you develop and report cost-effectiveness models, this paper could help you better understand how end-users will interpret your reporting and make your work more replicable. This study focusses on Markov models. They’re definitely the most common approach, so perhaps that’s OK. It might be useful to produce prescriptive guidance specific to Markov models, informed by the findings of this study.

US integrated delivery networks perspective on economic burden of patients with treatment-resistant depression: a retrospective matched-cohort study. PharmacoEconomics – Open [PubMed] Published 28th June 2019

Treatment-resistant depression can be associated with high health care costs, as multiple lines of treatment are tried, with patients experiencing little or no benefit. New treatments and models of care can go some way to addressing these challenges. In the US, there’s some reason to believe that integrated delivery networks (IDNs) could be associated with lower care costs, because IDNs are based on collaborative care models and constitute a single point of accountability for patient costs. They might be particularly useful in the case of treatment-resistant depression, but evidence is lacking. The authors of this study investigated the difference in health care resource use and costs for patients with and without treatment-resistant depression, in the context of IDNs.

The researchers conducted a retrospective cohort study using claims data for people receiving care from IDNs, with up to two years follow-up from first antidepressant use. 1,582 people with treatment-resistant depression were propensity score matched to two other groups – patients without depression and patients with depression that was not classified as treatment-resistant. Various regression models were used to compare the key outcomes of all-cause and specific categories of resource use and costs. Unfortunately, there is no assessment of whether the selected models are actually any good at estimating differences in costs.

The average costs and resource use levels in the three groups ranked as you would expect: $25,807 per person per year for the treatment-resistant group versus $13,701 in the non-resistant group and $8,500 in the non-depression group. People with treatment-resistant depression used a wider range of antidepressants and for a longer duration. They also had twice as many inpatient visits as people with depression that wasn’t treatment-resistant, which seems to have been the main driver of the adjusted differences in costs.

We don’t know (from this study) whether or not IDNs provide a higher quality of care. And the study isn’t able to compare IDN and non-IDN models of care. But it does show that IDNs probably aren’t a full solution to the high costs of treatment-resistant depression.

Rabin’s paradox for health outcomes. Health Economics [PubMed] [RePEc] Published 19th June 2019

Rabin’s paradox arises from the theoretical demonstration that a risk-averse individual who turns down a 50:50 gamble of gaining £110 or losing £100 would, if expected utility theory is correct, turn down a 50:50 gamble of losing £1,000 or gaining millions. This follows from the concave utility function over wealth that is assumed in order to model risk aversion, and it is probably not realistic. But we don’t know about the relevance of this paradox in the health domain… until now.
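A quick numerical sketch (my own, using CARA utility, and not anything from the paper) shows how the calibration bites. If a decision-maker with utility u(w) = 1 − exp(−a·w) is just barely unwilling to accept the ±£100/£110 gamble, the implied coefficient a forces them to reject a 50:50 lose-£1,000 gamble whatever the upside, because utility is bounded above.

```python
# Rabin-style calibration sketch with CARA utility u(w) = 1 - exp(-a*w).
# Illustrative only. Under CARA, accepting or rejecting a gamble does not
# depend on wealth, so we can work with the gamble payoffs directly.
import math

def small_gamble_eu(a):
    """EU of the 50:50 (-£100, +£110) gamble minus u(0); > 0 means accept."""
    return 0.5 * (1 - math.exp(100 * a)) + 0.5 * (1 - math.exp(-110 * a))

# Bisect for the risk-aversion coefficient at which the agent is exactly
# indifferent to the small gamble (any larger a means rejection).
lo, hi = 1e-9, 0.01
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if small_gamble_eu(mid) > 0:
        lo = mid
    else:
        hi = mid
a = 0.5 * (lo + hi)

# Utility is bounded above by 1, so once exp(1000*a) > 2 the certain-loss
# side outweighs ANY upside: even £1 billion cannot compensate.
eu_big = 0.5 * (1 - math.exp(1000 * a)) + 0.5 * (1 - math.exp(-a * 1e9))
print(math.exp(1000 * a) > 2, eu_big < 0)  # True True
```

The same arithmetic drives the paper’s design: if small-stakes risk aversion is explained by curvature alone, large-stakes behaviour becomes absurd, so observing people accept the large gamble while rejecting the small one is evidence against expected utility theory.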

A key contribution of this paper is that it considers both decision-making about one’s own health and decision-making from a societal perspective. Three different scenarios are set up in each case, relating to gains and losses in life expectancy with different levels of health functioning. 201 students were recruited as part of a larger study on preferences and each completed all six gamble-pairs (three individual, three societal). To test for Rabin’s paradox, the participants were asked whether they would accept each gamble involving a moderate stake and a large stake.

In short, the authors observe Rabin’s proposed failure of expected utility theory. Many participants rejected small gambles but did not reject the larger gambles. The effect was more pronounced for societal preferences. There was, though, a large minority for whom expected utility theory was not violated. The upshot of all this is that our models of health preferences that are based on expected utility may be flawed where uncertain outcomes are involved – as they often are in health. This study adds to a growing body of literature supporting the relevance of alternative utility theories, such as prospect theory, to health and health care.

My only problem here is that life expectancy is not health. Life expectancy is everything. It incorporates the monetary domain, which this study did not want to consider, as well as every other domain of life. When you die, your stock of cash is as useful to you as your stock of health. I think it would have been more useful if the study focussed only on health status and outcomes and excluded all considerations of death.