Chris Sampson’s journal round-up for 14th October 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Transparency in health economic modeling: options, issues and potential solutions. PharmacoEconomics [PubMed] Published 8th October 2019

Reading this paper was a strange experience. The purpose of the paper, and its content, is much the same as that of a paper of my own, which was published in the same journal a few months ago.

The authors outline what they see as the options for transparency in the context of decision modelling, with a particular focus on open source models and on for whom the details are transparent. Models might be transparent to a small number of researchers (e.g. in peer review), to HTA agencies, or to the public at large. The paper includes a figure showing the two aspects of transparency, termed ‘reach’ and ‘level’, which relate to the number of people who can access the information and the level of detail made available. We provided a similar figure in our paper, using the terms ‘breadth’ and ‘depth’, which is at least some validation of our idea. The authors then go on to discuss five ‘issues’ with transparency: copyright, model misuse, confidential data, software, and time/resources. These issues are framed as questions, to which the authors posit some answers as solutions.

Perhaps inevitably, I think our paper does a better job, and so I’m probably over-critical of this article. Ours is more comprehensive, if nothing else. But I also think the authors make a few missteps. There’s a focus on models created by academic researchers, which oversimplifies the discussion somewhat. Open source modelling is framed as a more complete solution than it really is. The ‘issues’ that are discussed are at points framed as drawbacks or negative features of transparency, which they aren’t. Certainly, they’re challenges, but they aren’t reasons not to pursue transparency. ‘Copyright’ seems to be used as a synonym for intellectual property, and transparency is considered to be a threat to this. The authors’ proposed solution here is to use licensing fees. I think that’s a bad idea. Levying a fee creates an incentive to disregard copyright, not respect it.

It’s a little ironic that both this paper and my own were published, given that both describe the benefits of transparency in terms of reducing “duplication of efforts”. No doubt, I read this paper with a far more critical eye than I normally would. Had I not published a paper on precisely the same subject, I might’ve thought this paper was brilliant.

If we recognize heterogeneity of treatment effect can we lessen waste? Journal of Comparative Effectiveness Research [PubMed] Published 1st October 2019

This commentary starts from the premise that a pervasive overuse of resources creates a lot of waste in health care, which I guess might be true in the US. Apparently, this is because clinicians have an insufficient understanding of heterogeneity in treatment effects and therefore assume average treatment effects for their patients. The authors suggest that this situation is reinforced by clinical trial publications tending to only report average treatment effects. I’m not sure whether the authors are arguing that clinicians are too knowledgeable and dependent on the research, or that they don’t know the research well enough. Either way, it isn’t a very satisfying explanation of the overuse of health care. Certainly, patients could benefit from more personalised care, and I would support the authors’ argument in favour of stratified studies and the reporting of subgroup treatment effects. The most insightful part of this paper is the argument that these stratifications should be on the basis of observable characteristics. It isn’t much use to your general practitioner if personalisation requires genome sequencing. In short, I agree with the authors’ argument that we should do more to recognise heterogeneity of treatment effects, but I’m not sure it has much to do with waste.
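To make the distinction concrete, here’s a minimal sketch of my own (not from the paper) contrasting an average treatment effect with subgroup effects defined by an observable characteristic, using simulated data in R:

```r
# Minimal sketch (not from the paper): average vs subgroup treatment effects,
# with the subgroup defined by an observable characteristic.
set.seed(42)
n <- 1000
trial <- data.frame(
  treated = rbinom(n, 1, 0.5),
  older   = rbinom(n, 1, 0.4)  # observable characteristic, e.g. aged 65+
)
# Simulated truth: effect of 2 for younger patients, 5 for older patients
trial$outcome <- 1 + 2 * trial$treated + 3 * trial$treated * trial$older + rnorm(n)

# Average treatment effect only (what many trial reports stop at)
summary(lm(outcome ~ treated, data = trial))

# Heterogeneous treatment effects via an interaction with the observable subgroup
summary(lm(outcome ~ treated * older, data = trial))
```

In the second model, the effect for the older subgroup is the sum of the treated coefficient and the interaction term.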

No evidence for a protective effect of education on mental health. Social Science & Medicine Published 3rd October 2019

When it comes to the determinants of health and well-being, I often think back to my MSc dissertation research. As part of that, I learned that a) stuff that you might imagine to be important often isn’t and b) methodological choices matter a lot. Though it wasn’t the purpose of my study, it seemed from that research that higher education has a negative effect on people’s subjective well-being. But there isn’t much research out there to help us understand the association between education and mental health in general.

This study adds to a small body of literature on the impact of changes in compulsory schooling on mental health. In (West) Germany, education policy was determined at the state level, so when compulsory schooling was extended from eight to nine years, different states implemented the change at different times between 1949 and 1969. This study includes 5,321 people, with 20,290 person-year observations, from the German Socio-Economic Panel survey (SOEP). Inclusion was based on people being born seven years either side of the cutoff birth year for which the longer compulsory schooling was enacted, with a further restriction to people aged between 50 and 85. The SOEP includes the SF-12 questionnaire, which includes a mental health component score (MCS). There is also an 11-point life satisfaction scale. The authors use an instrumental variable approach, using the policy change as an instrument for years of schooling and estimating a standard two-stage least squares model. The MCS score, life satisfaction score, and a binary indicator for MCS score lower than or equal to 45.6, are all modelled as separate outcomes.
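For readers unfamiliar with the set-up, the logic is roughly as follows. This is a simplified sketch with simulated data and illustrative variable names, not the authors’ specification; it uses ivreg from the AER package:

```r
# Simplified 2SLS sketch with simulated data (illustrative names, not the
# authors' specification). The compulsory schooling reform serves as an
# instrument for years of schooling.
library(AER)  # provides ivreg()

set.seed(1)
n <- 5000
reform  <- rbinom(n, 1, 0.5)                           # exposed to the extra compulsory year
ability <- rnorm(n)                                    # unobserved confounder
school  <- 8 + reform + rbinom(n, 3, plogis(ability))  # years of schooling
mcs     <- 50 + 2 * ability + rnorm(n, sd = 8)         # mental health score; no true schooling effect
dat <- data.frame(mcs, school, reform)

# Naive OLS: biased upwards because 'ability' drives both schooling and the MCS
summary(lm(mcs ~ school, data = dat))

# 2SLS: years of schooling instrumented by the reform
summary(ivreg(mcs ~ school | reform, data = dat))
```

Because the simulated confounder drives both schooling and the outcome, the OLS estimate is positive while the instrumented estimate is close to zero, which mirrors the pattern of results described below.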

Estimates using an OLS model show a positive and highly significant effect of years of schooling on all three outcomes. But when the instrumental variable model is used, this effect disappears. An additional year of schooling in this model is associated with a statistically and clinically insignificant decrease in the MCS score. Also insignificant were the findings that more years of schooling increase the likelihood of developing symptoms of a mental health disorder (as indicated by the MCS threshold of 45.6) and that life satisfaction is slightly lower. The same model shows a positive effect on physical health, which corresponds with previous research and provides some reassurance that the model could detect an effect if one existed.

The specification of the model seems reasonable and a host of robustness checks are reported. The only potential issue I could spot is that a person’s state of residence at the time of schooling is not observed, and so their location at entry into the sample is used. Given that education is associated with mobility, this could be a problem, and I would have liked to see the authors subject it to more testing. The overall finding – that an additional year of school for people who might otherwise only stay at school for eight years does not improve mental health – is persuasive. But the extent to which we can say anything more general about the impact of education on well-being is limited. What if it had been three years of additional schooling, rather than one? There is still much work to be done in this area.

Scientific sinkhole: the pernicious price of formatting. PLoS One [PubMed] Published 26th September 2019

This study is based on a survey that asked 372 researchers from 41 countries about the time they spent formatting manuscripts for journal submission. Let’s see how I can frame this as health economics… Well, some of the participants are health researchers. The time they spend on formatting journal submissions is time not spent on health research. The opportunity cost of time spent formatting could be measured in terms of health.

The authors focused on the time and wage costs of formatting. The results showed that formatting took a median time of 52 hours per person per year, at a cost of $477 per manuscript or $1,908 per person per year. Researchers spend – on average – 14 hours on formatting a manuscript. That’s outrageous. I have never spent that long on formatting. If you do, you only have yourself to blame. Or maybe it’s just because of what I consider to constitute formatting. The survey asked respondents to consider formatting of figures, tables, and supplementary files. Improving the format of a figure or a table can add real value to a paper. A good figure or table can change a bad paper into a good paper. I’d love to know how the time cost differed for people using LaTeX.
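For what it’s worth, the headline figures boil down to simple arithmetic along these lines (a back-of-the-envelope sketch, not the paper’s method; the hourly wage is a made-up placeholder):

```r
# Back-of-the-envelope sketch (not the paper's method).
hours_per_manuscript <- 14   # mean formatting hours per manuscript (reported)
hours_per_year       <- 52   # median formatting hours per person per year (reported)
hourly_wage          <- 35   # hypothetical US$ wage rate, for illustration only

c(cost_per_manuscript = hours_per_manuscript * hourly_wage,
  cost_per_year       = hours_per_year * hourly_wage)
```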

Credits

Chris Sampson’s journal round-up for 30th September 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A need for change! A coding framework for improving transparency in decision modeling. PharmacoEconomics [PubMed] Published 24th September 2019

We’ve featured a few papers in recent round-ups that (I assume) will be included in an upcoming themed issue of PharmacoEconomics on transparency in modelling. It’s shaping up to be a good one. The value of transparency in decision modelling has been recognised, but simply making the stuff visible is not enough – it needs to make sense. The purpose of this paper is to help make that achievable.

The authors highlight that the writing of analyses, including coding, involves personal style and preferences. To aid transparency, we need a systematic framework of conventions that make the inner workings of a model understandable to any (expert) user. The paper describes a framework developed by the Decision Analysis in R for Technologies in Health (DARTH) group. The DARTH framework builds on a set of core model components, generalisable to all cost-effectiveness analyses and model structures. There are five components – i) model inputs, ii) model implementation, iii) model calibration, iv) model validation, and v) analysis – and the paper describes the role of each. Importantly, the analysis component can be divided into several parts relating to, for example, sensitivity analyses and value of information analyses.

Based on this framework, the authors provide recommendations for organising and naming files and on the types of functions and data structures required. The recommendations build on conventions established in other fields and in the use of R generally. The authors recommend the implementation of functions in R, and relate general recommendations to the context of decision modelling. We’re also introduced to unit testing, which will be unfamiliar to most Excel modellers but which can be relatively easily implemented in R. The roles of various tools are introduced, including RStudio, R Markdown, Shiny, and GitHub.
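If unit testing is new to you, here’s a minimal illustration of my own (not taken from the DARTH materials) using the testthat package:

```r
# Minimal unit-testing illustration (not from the DARTH materials), using testthat.
library(testthat)

# A small model helper: discounted values of a stream of costs or QALYs
discount <- function(x, rate = 0.035) {
  x / (1 + rate)^(seq_along(x) - 1)
}

test_that("discounting behaves as expected", {
  expect_equal(discount(c(100, 100), rate = 0), c(100, 100))  # zero rate changes nothing
  expect_lt(sum(discount(rep(100, 10))), 1000)                # positive rate shrinks totals
  expect_equal(discount(100), 100)                            # first period is undiscounted
})
```

Tests like these run automatically every time the model code changes, catching the sort of silent errors that are easy to miss in a spreadsheet.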

The real value of this work lies in the linked R packages and other online material, which you can use to test out the framework and consider its application to whatever modelling problem you might have. The authors provide an example using a basic Sick-Sicker model, which you can have a play with using the DARTH packages. In combination with the online resources, this is a valuable paper that you should have to hand if you’re developing a model in R.
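If you’ve never built a model in R, the core machinery of a cohort model is surprisingly compact. Here’s a generic three-state sketch with hypothetical numbers – not the DARTH Sick-Sicker implementation – just to give a flavour:

```r
# Generic three-state cohort Markov model (hypothetical numbers; not the
# DARTH Sick-Sicker implementation).
n_cycles <- 30
states   <- c("Healthy", "Sick", "Dead")

# Hypothetical annual transition probabilities
m_trans <- matrix(c(0.85, 0.10, 0.05,
                    0.00, 0.90, 0.10,
                    0.00, 0.00, 1.00),
                  nrow = 3, byrow = TRUE, dimnames = list(states, states))

# Cohort trace: everyone starts in the Healthy state
trace <- matrix(NA, nrow = n_cycles + 1, ncol = 3, dimnames = list(0:n_cycles, states))
trace[1, ] <- c(1, 0, 0)
for (t in 1:n_cycles) {
  trace[t + 1, ] <- trace[t, ] %*% m_trans
}

# Hypothetical per-cycle utilities and costs by state
utility <- c(Healthy = 0.85, Sick = 0.60, Dead = 0)
cost    <- c(Healthy = 500,  Sick = 3000, Dead = 0)

c(total_QALYs = sum(trace %*% utility), total_cost = sum(trace %*% cost))
```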

Accounts from developers of generic health state utility instruments explain why they produce different QALYs: a qualitative study. Social Science & Medicine [PubMed] Published 19th September 2019

It’s well known that different preference-based measures of health will generate different health state utility values for the same person. Yet, they continue to be used almost interchangeably. For this study, the authors spoke to people involved in the development of six popular measures: QWB, 15D, HUI, EQ-5D, SF-6D, and AQoL. Their goal was to understand the bases for the development of the measures and to explain why the different measures should give different results.

At least one original developer for each instrument was recruited, along with people involved at later stages of development. Semi-structured interviews were conducted with 15 people, with questions on the background, aims, and criteria for the development of the measure, and on the descriptive system, preference weights, performance, and future development of the instrument.

Five broad topics were identified as being associated with differences in the measures: i) knowledge sources used for conceptualisation, ii) development purposes, iii) interpretations of what makes a ‘good’ instrument, iv) choice of valuation techniques, and v) the context for the development process. The online appendices provide some useful tables that summarise the differences between the measures. The authors distinguish between measures based on ‘objective’ definitions (QWB) and those based on items that people found important (15D). Some prioritised sensitivity (AQoL, 15D), others prioritised validity (HUI, QWB), and several focused on pragmatism (SF-6D, HUI, 15D, EQ-5D). Some instruments had modest goals and opportunistic processes (EQ-5D, SF-6D, HUI), while others had grand goals and purposeful processes (QWB, 15D, AQoL). The use of some measures (EQ-5D, HUI) extended far beyond what the original developers had anticipated. In short, different measures were developed with quite different concepts and purposes in mind, so it’s no surprise that they give different results.

This paper provides some interesting accounts and views on the process of instrument development. It might prove most useful in understanding different measures’ blind spots, which can inform the selection of measures in research, as well as future development priorities.

The emerging social science literature on health technology assessment: a narrative review. Value in Health Published 16th September 2019

Health economics provides a good example of multidisciplinarity, with economists, statisticians, medics, epidemiologists, and plenty of others working together to inform health technology assessment. But I still don’t understand what sociologists are talking about half of the time. Yet, it seems that sociologists and political scientists are busy working on the big questions in HTA, as demonstrated by this paper’s 120 references. So, what are they up to?

This article reports on a narrative review, based on 41 empirical studies. Three broad research themes are identified: i) what drove the establishment and design of HTA bodies? ii) what has been the influence of HTA? and iii) what have been the social and political influences on HTA decisions? Some have argued that HTA is inevitable, while others have argued that there are alternative arrangements. Either way, no two systems are the same and it is not easy to explain differences. It’s important to understand HTA in the context of other social tendencies and trends, and to recognise that HTA both influences and is influenced by these. The authors provide a substantial discussion on the role of stakeholders in HTA and the potential for some to attempt to game the system. Uncertainty abounds in HTA and this necessarily requires negotiation and acts as a limit on the extent to which HTA can rely on objectivity and rationality.

Something lacking is a critical history of HTA as a discipline and the question of what HTA is actually good for. There’s also not a lot of work out there on culture and values, which contrasts with medical sociology. The authors suggest that sociologists and political scientists could be more closely involved in HTA research projects. I suspect that such a move would be more challenging for the economists than for the sociologists.

Credits

Rita Faria’s journal round-up for 26th August 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Vaccine hesitancy and (fake) news: quasi‐experimental evidence from Italy. Health Economics [PubMed] [RePEc] Published 20th August 2019

Has fake news led to fewer children being vaccinated? At least in Italy, the answer seems to be yes.

It’s shocking to read that the WHO has included the reluctance or refusal to vaccinate as one of the 10 threats to global health today. And many of us are asking: why has this happened and what can we do to address it? Vincenzo Carrieri, Leonardo Madio and Francesco Principe help answer the first question. They looked at how fake news affects the take-up of vaccines, using access to broadband internet as a proxy for exposure to fake news, within a difference-in-differences framework. They found that a 10% increase in broadband coverage is associated with a 1.2-1.6% reduction in vaccination rates.

The difference-in-differences method hinges on a court ruling in 2012 that accepted that the MMR vaccine causes autism. Following the ruling, fake news about vaccines spread across the internet. In parallel, broadband coverage increased over time due to a government programme, but it varied by region, depending on the existing infrastructure and geographical conditions. Broadband coverage, by itself, cannot lead to lower vaccination rates. So it makes sense to assume that broadband coverage leads to greater exposure to fake news about vaccines, which in turn leads to lower vaccination rates.
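In regression terms, the design boils down to something like the following – a stylised sketch with simulated data, not the authors’ exact specification:

```r
# Stylised difference-in-differences sketch (simulated data; not the authors'
# exact specification): vaccination rates, broadband coverage, and the
# post-2012 period, with region and year fixed effects.
set.seed(7)
regions <- 20
years   <- 2008:2016
panel   <- expand.grid(region = factor(1:regions), year = years)
panel$post      <- as.integer(panel$year >= 2012)
panel$broadband <- plogis(-2 + 0.4 * (panel$year - 2008) + rnorm(nrow(panel), sd = 0.3))
# Simulated outcome: broadband only depresses vaccination after the ruling
panel$vax_rate  <- 0.95 - 0.05 * panel$broadband * panel$post + rnorm(nrow(panel), sd = 0.01)

did <- lm(vax_rate ~ broadband + broadband:post + factor(year) + region, data = panel)
summary(did)$coefficients["broadband:post", ]
```

The coefficient on the interaction is the quantity of interest: how vaccination rates respond to broadband coverage after the ruling, over and above region and year effects.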

On the other hand, it may be that greater broadband coverage and lower vaccination rates are both caused by something else. The authors wrote a good introduction to justify the model assumptions and show a few robustness checks. Had they had more space, I would have liked to read a bit more about the uncertainties around the model assumptions. This is a fantastic paper and good food for thought on the consequences of fake news. Great read!

The cost-effectiveness of one-time birth cohort screening for hepatitis C as part of the National Health Service Health Check programme in England. Value in Health Published 19th August 2019

Jack Williams and colleagues looked at the cost-effectiveness of one-time birth cohort screening for hepatitis C. As hepatitis C is usually asymptomatic before reaching its more advanced stages, people may not be aware that they are infected. Therefore, they may not get tested and treated, even though treatment is effective and cost-effective.

At the level of the individual eligible for testing, the ICERs were between £8k and £31k per QALY, with lower ICERs for younger birth cohorts. The ICERs also depended on the transition probabilities for the progression of the disease, with lower ICERs if progression is faster. Extensive sensitivity and value of information analyses indicate that the key cost-effectiveness drivers are the transition probabilities, probabilities of referral and of treatment post-referral, and the quality of life benefits of being cured.
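As a reminder, those headline figures are incremental cost-effectiveness ratios: the incremental cost of the screening strategy divided by the incremental QALYs, versus no screening. Purely for illustration (these numbers are not from the paper):

```r
# Illustrative ICER calculation (numbers invented, not from the paper)
cost_screening     <- 80      # mean cost per person offered screening (£)
cost_no_screening  <- 30
qalys_screening    <- 12.405
qalys_no_screening <- 12.400

icer <- (cost_screening - cost_no_screening) / (qalys_screening - qalys_no_screening)
icer  # cost per QALY gained
```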

This is a great example of a good quality applied cost-effectiveness analysis. The model is well justified, the results are thoroughly tested, and the discussion is meticulous. Well done!

NICE, in confidence: an assessment of redaction to obscure confidential information in Single Technology Appraisals by the National Institute for Health and Care Excellence. PharmacoEconomics [PubMed] Published 27th June 2019

NICE walks a fine line between making decisions transparent and protecting confidential information. Confidential information includes commercially sensitive information (e.g. discounts to the price paid by the NHS) and academic-in-confidence information, such as unpublished results of clinical trials. The problem is that the redacted information may prevent readers from understanding NICE decisions.

Ash Bullement and colleagues reviewed NICE appraisals of technologies with an approved price discount. Their goal was to understand the extent of redactions and their consequences on the transparency of NICE decisions. Of the 171 NICE appraisals, 118 had an approved commercial arrangement and 110 had a simple price discount. The type of redacted information varied. Some did not present the ICER, others presented ICERs but not the components of the ICERs, and others did not even present the estimates of life expectancy from the model. Remarkably, the confidential discount could be back-calculated in seven NICE appraisals! The authors also looked at the academic-in-confidence redactions. They found that 68 out of 86 appraisals published before 2018 still had academic-in-confidence information redacted. This made me wonder if NICE has a process to review these redactions and disclose them once the information is in the public domain.
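To see how a ‘confidential’ discount can leak, consider a hypothetical example (not one of the seven appraisals). If an appraisal reports incremental QALYs, non-drug incremental costs, and an ICER calculated at the discounted price, the discounted drug cost – and hence the discount – falls out by simple rearrangement:

```r
# Hypothetical back-calculation of a confidential discount (invented numbers;
# not one of the seven appraisals identified by the authors).
list_price_drug_cost <- 50000   # drug cost at list price (publicly known)
other_incr_costs     <- 5000    # reported non-drug incremental costs
incr_qalys           <- 1.25    # reported incremental QALYs
reported_icer        <- 30000   # reported ICER, calculated with the discount applied

# ICER = (discounted_drug_cost + other_incr_costs) / incr_qalys
discounted_drug_cost <- reported_icer * incr_qalys - other_incr_costs
implied_discount_pct <- 100 * (1 - discounted_drug_cost / list_price_drug_cost)
c(discounted_cost = discounted_drug_cost, implied_discount_pct = implied_discount_pct)
```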

As Ash and colleagues rightly conclude, this review shows that there does not seem to be a consistent process for redaction and disclosure. This is a compelling paper on the practicalities of the NICE process, with useful reflections for HTA agencies around the world. The message for NICE is that it may be time to review its process for handling sensitive information.

Credits