Chris Sampson’s journal round-up for 18th November 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A conceptual map of health-related quality of life dimensions: key lessons for a new instrument. Quality of Life Research [PubMed] Published 1st November 2019

EQ-5D, SF-6D, HUI3, AQoL, 15D; they’re all used to describe health states for the purpose of estimating health state utility values, to get the ‘Q’ in the QALY. But it’s widely recognised (and evidenced) that they measure different things. This study sought to better understand the challenge by doing two things: i) ‘mapping’ the domains of the different instruments and ii) advising on the domains to be included in a new measure.

The conceptual model described in this paper builds on two standard models of health – the ICF (International Classification of Functioning, Disability, and Health), which is endorsed by the WHO, and the Wilson and Cleary model. The new model is built around four distinctions, which can be used to define the dimensions included in health state utility instruments: cause vs effect, specific vs broad, physical vs psychological, and subjective vs objective. The idea is that each possible dimension of health can relate, with varying levels of precision, to one side or the other of each of these distinctions.

The authors argue that, conveniently, cause/effect and specific/broad map to one another, as do physical/psychological and objective/subjective. The framework is presented visually, which makes it easy to interpret – I recommend you take a look. Each of the five instruments previously mentioned is mapped to the framework, with the HUI and 15D coming out as ‘symptom’ oriented, EQ-5D and SF-6D as ‘functioning’ oriented, and the AQoL as a hybrid of a health and well-being instrument. Based (it seems) on the Personal Wellbeing Index, the authors also include two social dimensions in the framework, which interact with the health domains. Based on the frequency with which dimensions are included in existing instruments, the authors recommend that a new measure should include three physical dimensions (mobility, self-care, pain), three mental health dimensions (depression, vitality, sleep), and two social domains (personal relationships, social isolation).

This framework makes no sense to me. The main problem is that none of the four distinctions hold water, let alone stand up to being mapped linearly to one another. Take pain as an example. It could be measured subjectively or objectively. It’s usually considered a physical matter, but psychological pain is no less meaningful. It may be a ‘causal’ symptom, but there is little doubt that it matters in and of itself as an ‘effect’. The authors themselves even offer up a series of examples of where the distinctions fall down.

It would be nice if this stuff could be drawn up on a two-dimensional plane, but it isn’t that simple. In addition to oversimplifying complex ideas, I don’t think the authors have fully recognised the level of complexity. For instance, the work seems to be inspired – at least in part – by a desire to describe health state utility instruments in relation to subjective well-being (SWB). But the distinction between health state utility instruments and SWB isn’t simply a matter of scope. Health state utility instruments (as we use them) are about valuing states in relation to preferences, whereas SWB is about experienced utility. That’s a far more important and meaningful distinction than the distinction between symptoms and functioning.

Careless costs related to inefficient technology used within NHS England. Clinical Medicine Journal [PubMed] Published 8th November 2019

This little paper – barely even a single page – was doing the rounds on Twitter. The author was inspired by some frustration in his day job, waiting for the IT to work. We can all relate to that. This brief analysis adds up what the author calls ‘careless costs’, vaguely defined as time spent by an NHS employee on activity that does not relate to patient care. Supposing that all doctors in the English NHS wasted an average of 10 minutes per day on such activities, it would cost over £143 million (per year, I assume) based on current salaries. The implication is that a little bit of investment could result in massive savings.
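
The arithmetic behind headline figures like this is easy to reproduce. Here is a minimal back-of-envelope sketch; the headcount, working days, and per-minute salary cost are my own illustrative assumptions, chosen only to land near the paper’s £143 million, not the author’s actual inputs.

```python
# All inputs are illustrative assumptions, chosen to land near the
# paper's £143 million headline; they are not the author's figures.
N_DOCTORS = 120_000            # assumed NHS England doctor headcount
WASTED_MINUTES_PER_DAY = 10    # the paper's supposed daily IT downtime
WORKING_DAYS_PER_YEAR = 225    # assumed working days per doctor
SALARY_COST_PER_MINUTE = 0.53  # assumed average cost in GBP per minute

annual_cost = (N_DOCTORS * WASTED_MINUTES_PER_DAY
               * WORKING_DAYS_PER_YEAR * SALARY_COST_PER_MINUTE)
print(f"Estimated annual 'careless cost': £{annual_cost:,.0f}")
# → roughly £143,100,000
```

The point of spelling it out is that every term in the product is an assumption, which is precisely what makes the headline number so fragile.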

This really bugs me, for at least two reasons. First, it is normal for anybody in any profession to have a bit of downtime. Nobody operates at maximum productivity for every minute of every day. If the doctor didn’t have their downtime waiting for a PC to boot, it would be spent queuing in Costa, or having a nice relaxed wee. Probably both. Those 10 minutes that are displaced cannot be considered equivalent in value to 10 minutes of patient contact time. The second reason is that there is no intervention that can fix this problem at little or no cost. Investments cost money. And if perfect IT systems existed, we wouldn’t all find these ‘careless costs’ so familiar. No doubt, the NHS lags behind, but the potential savings of improvement may very well be closer to zero than to the estimates in this paper.

When it comes to clinical impacts, people insist on being able to identify causal improvements from clearly defined interventions or changes. But when it comes to costs, too many people are confident in throwing around huge numbers of speculative origin.

Socioeconomic disparities in unmet need for student mental health services in higher education. Applied Health Economics and Health Policy [PubMed] Published 5th November 2019

In many countries, the student population is growing, and it seems to have a high level of need for mental health services. There are a variety of challenges in this context that make it an interesting subject for health economists to study (which is why I do), including the fact that universities are often the main providers of services. If universities are going to provide the right services and reach the right people, they need a better understanding of who needs what. This study contributes to that challenge.

The study is set in the context of higher education in Ireland. If you have no idea how higher education is organised in Ireland, and have an interest in mental health, then the Institutional Context section of this paper is worth reading in its own right. The study reports on findings from a national survey of students. This analysis is a secondary analysis of data collected for the primary purpose of eliciting students’ preferences for counselling services, which has been described elsewhere. In this paper, the authors report on supplementary questions, including measures of psychological distress and use of mental health services. Responses from 5,031 individuals, broadly representative of the population, were analysed.

Around 23% of respondents were classified as having unmet need for mental health services based on them reporting both a) severe distress and b) not using services. Arguably, it’s a sketchy definition of unmet need, but it seems reasonable for the purpose of this analysis. The authors regress this binary indicator of unmet need on a selection of sociodemographic and individual characteristics. The model is also run for the binary indicator of need only (rather than unmet need).
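
To make the modelling concrete, here’s a minimal sketch of this kind of binary regression in Python using statsmodels. The dataset and variable names are invented for illustration; the paper’s actual covariates and coding will differ.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_survey.csv")  # hypothetical dataset

# unmet_need = 1 if severe distress AND not using services, else 0
model = smf.logit(
    "unmet_need ~ C(social_class) + C(sex) + age + C(year_of_study)",
    data=df,
).fit()
print(model.summary())

# Re-running with need alone (severe distress) as the outcome helps
# separate higher need from barriers to access, as the authors do.
```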

The main finding is that people from lower social classes are more likely to have unmet need, but only because they have a higher level of need. That is, people from less well-off backgrounds are more likely to have mental health problems but are no less likely to have their need met. So this is partly good news and partly bad news. It seems that there are no additional barriers to services in Ireland for students from a lower social class. But unmet need is still high and – with more inclusive university admissions – likely to grow. Based on the analyses, the authors suggest that universities reach out to male students, who have greater unmet need.

Rachel Houten’s journal round-up for 11th November 2019

A comparison of national guidelines for network meta-analysis. Value in Health [PubMed] Published October 2019

The evolving treatment landscape means a greater dependence on indirect treatment comparisons to generate estimates of clinical effectiveness where the proposed new intervention has not been compared with current practice in a head-to-head trial. This paper is a review of reimbursement bodies’ guidelines for conducting network meta-analyses. Reassuringly, the authors find that it is possible to meet the needs of multiple agencies with one analysis.
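
The paper is about agencies’ requirements rather than methods, but for readers new to the area, the simplest anchored indirect comparison – the Bucher method – gives a flavour of what these analyses do. Where trials compare A vs B and C vs B, the indirect effect of A vs C is the difference of the two direct effects, with their variances summing. A minimal sketch (illustrative numbers only):

```python
from math import sqrt

def bucher_indirect(d_ab, se_ab, d_cb, se_cb):
    """Anchored indirect comparison of A vs C via common comparator B.

    d_ab, d_cb: direct effects (e.g. log odds ratios) of A vs B and C vs B.
    Returns the indirect effect of A vs C and its standard error.
    """
    return d_ab - d_cb, sqrt(se_ab**2 + se_cb**2)

# Illustrative numbers only
effect, se = bucher_indirect(d_ab=-0.50, se_ab=0.15, d_cb=-0.20, se_cb=0.20)
print(f"A vs C: {effect:.2f} (SE {se:.2f})")  # -0.30 (SE 0.25)
```

A full network meta-analysis generalises this to many treatments and trials at once, which is where the guideline requirements reviewed here come in.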

The authors assign the criteria to three categories: “assessment and analysis to test assumptions required for a network meta-analysis, presentation and reporting of results, and justification of modelling choices”. Heterogeneity of the included studies is highlighted as one of the key elements to be sure to include if the criteria have to be prioritised. I think this is a simple way of thinking about what needs to be presented, but the ‘justification’ category, in my experience, is often given less weight than the other two.

This paper is a useful resource for companies submitting to multiple HTA agencies, with the requirements of each national body displayed in tables that are easy to navigate. It meets a practical need but doesn’t really go far enough for me. They do signpost to the PRISMA criteria, but I think it would have been really good to think about the purpose of the submission guidelines: to encourage a logical and coherent summary of the approaches taken, so that the evidence can be evaluated by decision-makers.

Variation in responsiveness to warranted behaviour change among NHS clinicians: novel implementation of change detection methods in longitudinal prescribing data. BMJ [PubMed] Published 2nd October 2019

I really like this paper. Such a lot of work, from all sectors, is devoted to the production of relevant and timely evidence to inform practice, but if the guidance does not become embedded into the real world then its usefulness is limited.

The authors have managed to utilise a HUGE amount of data to identify the real reaction to two pieces of guidance recommending a change in practice in England. They used “trend indicator saturation”, which I’m not ashamed to admit I knew nothing about beforehand, but it is explained nicely. Their thoughtful use of the information available results in three indicators of response (in this case, the deprescribing of two drugs): when the change occurs, how quickly it occurs, and how much change occurs.
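
For intuition, here’s a toy change-detection sketch on simulated prescribing data. It grid-searches a single step change rather than implementing the authors’ trend indicator saturation (which saturates the model with many candidate break indicators and retains the significant ones), and it only recovers two of their three quantities – when the change occurs and how much – but the basic idea is similar.

```python
import numpy as np

rng = np.random.default_rng(0)
n_months = 48
series = np.concatenate([
    rng.normal(100, 5, 30),  # pre-guidance prescribing rate
    rng.normal(70, 5, 18),   # post-guidance deprescribing response
])

# Grid-search the break date that minimises the within-segment
# sum of squared errors around two segment means.
best_sse, best_t = np.inf, None
for t in range(3, n_months - 3):
    sse = series[:t].var() * t + series[t:].var() * (n_months - t)
    if sse < best_sse:
        best_sse, best_t = sse, t

magnitude = series[best_t:].mean() - series[:best_t].mean()
print(f"Estimated break at month {best_t}; change of {magnitude:.1f}")
```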

The authors discover variation in response to the recommendations and suggest that their methods could be applied to generate feedback to clinicians and thereby drive further response. As some primary care practices took a while to embed the guidance change into their prescribing, the paper raises interesting questions as to where the barriers to the adoption of guidance lie.

What is next for patient preferences in health technology assessment? A systematic review of the challenges. Value in Health Published November 2019

It may be that patient preferences have a role to play in the uptake of guideline recommendations, as proposed by the authors of my final paper this week. This systematic review, of the literature around embedding patient preferences into HTA decision-making, groups the discussion in the academic literature into five broad areas: conceptual, normative, procedural, methodological, and practical. The authors state that their purpose was not to formulate their own views, merely to present the available literature, but they do a good job of indicating where to find more opinionated literature on this topic.

Methodological issues were the biggest group, covering aspects such as sample selection, the internal and external validity of the preferences generated, and the generalisability of preferences collected from a sample to the entire population. In general, though, the number of topics covered in the literature is vast and varied.

It’s a great summary of the challenges that are faced, and a ranking based on how frequently each topic is mentioned in the literature drives the authors’ proposed next steps. They recommend further research into the incorporation of preferences within or beyond the QALY, and into the use of multiple-criteria decision analysis (MCDA) as a method of integrating patient preferences into decision-making. I support the need for a scientifically valid way of integrating patient preferences into HTA decision-making, but wonder if we can first learn what has worked well, and what hasn’t, from the attempts of HTA agencies thus far.
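
For readers unfamiliar with MCDA, its simplest linear-additive form just weights each option’s performance on each criterion and sums. A toy sketch with invented criteria, weights, and scores (real applications hinge on defensible weight elicitation and scoring, which is exactly where patient preferences would come in):

```python
# Toy linear-additive MCDA with invented criteria, weights, and scores.
weights = {"efficacy": 0.4, "safety": 0.3, "convenience": 0.3}
performance = {  # each option scored 0-100 on each criterion
    "treatment_A": {"efficacy": 80, "safety": 60, "convenience": 40},
    "treatment_B": {"efficacy": 65, "safety": 75, "convenience": 70},
}
for option, scores in performance.items():
    total = sum(weights[c] * scores[c] for c in weights)
    print(f"{option}: {total:.1f}")  # treatment_A: 62.0, treatment_B: 69.5
```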

Rita Faria’s journal round-up for 26th August 2019

Vaccine hesitancy and (fake) news: quasi‐experimental evidence from Italy. Health Economics [PubMed] [RePEc] Published 20th August 2019

Has fake news led to fewer children being vaccinated? At least in Italy, the answer seems to be yes.

It’s shocking to read that the WHO has included the reluctance or refusal to vaccinate as one of the 10 threats to global health today. And many of us are asking: why has this happened and what can we do to address it? Vincenzo Carrieri, Leonardo Madio and Francesco Principe help answer the first question. They looked at how fake news affects the take-up of vaccines within a difference-in-differences framework, using access to broadband as a proxy for exposure to fake news. They found that a 10% increase in broadband coverage is associated with a 1.2-1.6% reduction in vaccination rates.

The difference-in-differences method hinges on a 2012 court ruling that accepted a causal link between the MMR vaccine and autism. Following the ruling, fake news about vaccines spread across the internet. In parallel, broadband coverage increased over time thanks to a government programme, but it varied by region, depending on the existing infrastructure and geographical conditions. Broadband coverage, by itself, cannot lead to lower vaccination rates. So it makes sense to assume that broadband coverage leads to greater exposure to fake news about vaccines, which in turn leads to lower vaccination rates.
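
In regression terms, the identification strategy boils down to something like the sketch below. The data file and variable names are invented and the authors’ specification is richer, but the core is an interaction between broadband coverage and the post-ruling period, with region and year fixed effects.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical region-by-year panel: vaccination_rate, broadband_coverage,
# post_2012 (0/1 indicator for the post-ruling period), region, year.
df = pd.read_csv("region_year_panel.csv")

model = smf.ols(
    "vaccination_rate ~ broadband_coverage:post_2012 + C(region) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["region"]})

# The DiD estimate: the effect of broadband coverage after the ruling.
print(model.params["broadband_coverage:post_2012"])
```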

On the other hand, it may be that greater broadband coverage and lower vaccination rates are both caused by something else. The authors wrote a good introduction to justify the model assumptions and show a few robustness checks. Had they had more space, I would have liked to read a bit more about the uncertainties around the model assumptions. This is a fantastic paper and good food for thought on the consequences of fake news. Great read!

The cost-effectiveness of one-time birth cohort screening for hepatitis C as part of the National Health Service Health Check programme in England. Value in Health Published 19th August 2019

Jack Williams and colleagues looked at the cost-effectiveness of one-time birth cohort screening for hepatitis C. As hepatitis C is usually asymptomatic before reaching its more advanced stages, people may not be aware that they are infected. Therefore, they may not get tested and treated, even though treatment is effective and cost-effective.

At the level of the individual eligible for testing, the ICERs were between £8k and £31k per QALY, with lower ICERs for younger birth cohorts. The ICERs also depended on the transition probabilities for the progression of the disease, with lower ICERs if progression is faster. Extensive sensitivity and value of information analyses indicate that the key cost-effectiveness drivers are the transition probabilities, the probabilities of referral and of treatment post-referral, and the quality of life benefits of being cured.
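
For readers newer to cost-effectiveness analysis, the ICER quoted here is just the standard ratio of incremental costs to incremental QALYs for screening versus no screening:

$$\text{ICER} = \frac{C_{\text{screening}} - C_{\text{no screening}}}{\text{QALYs}_{\text{screening}} - \text{QALYs}_{\text{no screening}}}$$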

This is a great example of a good quality applied cost-effectiveness analysis. The model is well justified, the results are thoroughly tested, and the discussion is meticulous. Well done!

NICE, in confidence: an assessment of redaction to obscure confidential information in Single Technology Appraisals by the National Institute for Health and Care Excellence. PharmacoEconomics [PubMed] Published 27th June 2019

NICE walks a fine line between making decisions transparent and protecting confidential information. Confidential information includes commercially sensitive information (e.g. discounts to the price paid by the NHS) and academic-in-confidence information, such as unpublished results of clinical trials. The problem is that the redacted information may preclude readers from understanding NICE decisions.

Ash Bullement and colleagues reviewed NICE appraisals of technologies with an approved price discount. Their goal was to understand the extent of redactions and their consequences on the transparency of NICE decisions. Of the 171 NICE appraisals, 118 had an approved commercial arrangement and 110 had a simple price discount. The type of redacted information varied. Some did not present the ICER, others presented ICERs but not the components of the ICERs, and others did not even present the estimates of life expectancy from the model. Remarkably, the confidential discount could be back-calculated in seven NICE appraisals! The authors also looked at the academic-in-confidence redactions. They found that 68 out of 86 appraisals published before 2018 still had academic-in-confidence information redacted. This made me wonder if NICE has a process to review these redactions and disclose them once the information is in the public domain.
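
To see how a ‘confidential’ discount can leak, consider a hypothetical appraisal with invented numbers: if the incremental QALYs and the post-discount ICER are both left in the document, the net cost falls straight out. In practice the back-calculation is messier (drug acquisition costs are only one component of incremental costs), but the mechanism is the same.

```python
# Hypothetical illustration (invented numbers) of how a confidential
# discount can be back-calculated from figures left in an appraisal.
incremental_cost_at_list_price = 50_000  # GBP, disclosed
incremental_qalys = 1.25                 # disclosed
reported_icer = 28_000                   # GBP per QALY, post-discount

# ICER = incremental cost (net of discount) / incremental QALYs, so:
net_incremental_cost = reported_icer * incremental_qalys  # £35,000
implied_discount = 1 - net_incremental_cost / incremental_cost_at_list_price
print(f"Implied discount: {implied_discount:.0%}")  # 30%
```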

As Ash and colleagues rightly conclude, this review shows that there does not seem to be a consistent process for redaction and disclosure. This is a compelling paper on the practicalities of the NICE process, and with useful reflections for HTA agencies around the world. The message for NICE is that it may be time to review the process to handle sensitive information.
