Alastair Canaway’s journal round-up for 10th June 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Analytic considerations in applying a general economic evaluation reference case to gene therapy. Value in Health Published 17th May 2019

For fledgling health economists starting out in the world of economic evaluation, the NICE reference case is something of a holy text. If in doubt, check the reference case. The concept of a reference case for economic evaluation has been around since the first US Panel on Cost-Effectiveness in Health and Medicine in 1996, and NICE has routinely used its own reference case for well over a decade. The primary purpose of the reference case is to improve the quality and comparability of economic evaluations by standardising methodological practices. Arguments have been made that the same methods are not appropriate for all medical technologies, particularly those for rare diseases or where no treatment currently exists. The focus of this paper is on gene therapy: a novel method of treating or preventing disease by inserting genetic material into cells (as opposed to using a drug or surgery). In this area there has been significant debate as to the appropriateness of the reference case, and whether a new reference case is required for this transformative but expensive field. The purpose of the article was to examine the characteristics of gene therapy and to make recommendations on changes to the reference case accordingly.

The paper does an excellent job of unpicking the key components of economic evaluation in relation to gene therapy to examine where weaknesses in current reference cases may lie. Rather than recommend that a new reference case be created, the authors identify specific areas that deserve special attention when evaluating gene therapy. Additionally, they produce a three-part checklist to help analysts identify which aspects of their economic evaluation warrant further consideration. For those about to embark on an economic evaluation of a gene therapy intervention, this paper represents an excellent starting point to guide your methodological choices.

Heterogeneous effects of obesity on mental health: evidence from Mexico. Health Economics [PubMed] [RePEc] Published April 2019

The first line of the ‘summary’ section of this paper caught my eye: “Obesity can spread more easily if it is not perceived negatively”. This stirred up contradictory thoughts. From a public health standpoint we should be doing our utmost to prevent increasing levels of obesity and their related co-morbidities, whilst simultaneously we should be promoting body positivity and well-being for mental health. Is there a tension here? Might promoting body positivity and well-being enable the spread of obesity? This paper doesn’t really answer that question; instead, it sought to investigate whether overweight and obesity had differing effects on mental health within different population groups.

The study is set in Mexico, which has the highest rate of obesity in the world, with 70% of the population being overweight or obese. Previous research suggests that obesity spreads more easily if not perceived negatively. This paper hypothesises that this effect will be more acute among the poor and middle classes, where obesity is more prevalent. The study aimed to reveal the extent of the impact of obesity on well-being, whilst controlling for common determinants of well-being, by examining the impact of measures of fatness on subjective well-being, allowing for heterogeneous effects across differing groups. The paper focused only on women, who tend to be more affected by excess weight than men (in Mexico at least).

To assess subjective well-being (SWB) the General Health Questionnaire (GHQ) was used, whilst weight status was measured using waist-to-height ratio and, additionally, an obesity dummy. Data were sourced from the Mexican Family and Life Survey, and the baseline sample included over 13,000 women. Various econometric models were employed, ranging from OLS to instrumental variable estimations, details of which can be found within the paper.

The results supported the hypothesis. They found that there was a negative effect of fatness on well-being for the rich, whilst there was a positive effect for the poor. This has interesting policy implications: policy attempts to reduce obesity may not work if excess weight is not perceived to be an issue. The findings in this study imply that different policy measures are likely to be necessary for the wealthy and the poor in Mexico. The paper offers several explanations as to why this relationship may exist, ranging from the poor having lower returns from healthy time (nod to the Grossman model), to differing labour market penalties from fatness due to different job types for the rich and the poor.

Obviously there are limits to the generalisability of these findings; however, the study does raise interesting questions about how we should seek to prevent obesity within different elements of society, and about the unintended consequences that shifts in attitudes may have.

ICECAP-O, the current state of play: a systematic review of studies reporting the psychometric properties and use of the instrument over the decade since its publication. Quality of Life Research [PubMed] Published June 2019

Those who follow the methodological side of outcome measurement will be familiar with the capability approach, operationalised by the ICECAP suite of measures amongst others. These measures focus on what people are able to do, rather than what they do. It is now 12-13 years since the first ICECAP measure was developed: the ICECAP-O, designed for use in older adults. Given the ICECAP measures are now included within the NICE reference case for the economic evaluation of social care, it is a pertinent time to look back over the past decade to assess whether the ICECAP measures are being used and, if so, to what degree and how. This systematic review focusses on the oldest of the ICECAP measures, the ICECAP-O, and examines whether and for what purpose it has been used, as well as summarising the results from psychometric papers.

An appropriate search strategy was deployed within the usual health economic databases, and the PRISMA checklist was used to guide the review. In total 663 papers were identified, of which 51 papers made it through the screening process.

The first eight years of the ICECAP-O’s life were characterised by an increasing number of psychometric studies; in 2014, however, a reversal occurred. Simultaneously, the number of studies using the ICECAP-O within economic evaluations has slowly increased, surpassing the number examining the psychometric properties, and has increased year-on-year in the three years up to 2018. Overall, the psychometric literature found the ICECAP-O to have good construct validity and generally good content validity, with the occasional exception in groups of people with specific medical needs. Although the capability approach has gained prominence, the studies within the review suggest it is still very much seen as a secondary instrument to the EQ-5D and QALY framework, with results typically being brief with little to no discussion or interpretation of the ICECAP-O results.

One of the key limitations of the ICECAP framework to date relates to how economists and decision makers should use the results from the ICECAP instruments. Should capabilities be combined with time (e.g. years in full capability), or should some minimum (sufficient) capability threshold be used? The paper concludes that, in the short term, presenting results in terms of ‘years of full capability’ is the best bet; however, future research should focus on identifying sufficient capability and establishing monetary thresholds for a year with sufficient capability. Given this, whilst the ICECAP-O has seen increased use over the years, there is still significant work to be done to facilitate decision making and for it to be routinely used as a primary outcome for economic evaluation.
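To make the ‘years of full capability’ idea concrete, here is a minimal sketch in the spirit of a QALY calculation. The capability scores and durations below are invented for illustration; in practice, tariff values from an ICECAP valuation study would be used.

```python
# Hypothetical sketch: combining capability scores with time,
# analogous to how QALYs combine utility values with time.
# Scores are anchored at 1 = full capability; all values are invented.

def years_of_full_capability(periods):
    """Each period is a (capability_score, years_in_state) pair."""
    return sum(score * years for score, years in periods)

# A person spends 2 years at full capability (1.0),
# then 3 years at a reduced capability of 0.8.
trajectory = [(1.0, 2), (0.8, 3)]
print(years_of_full_capability(trajectory))  # ≈ 4.4
```

The open question the paper raises is precisely whether this time-weighted sum, or instead a threshold rule (e.g. counting only years spent above some ‘sufficient capability’ score), is the right way to feed capabilities into decision making.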

Credits

Rachel Houten’s journal round-up for 22nd April 2019

To HTA or not to HTA: identifying the factors influencing the rapid review outcome in Ireland. Value in Health [PubMed] Published 6th March 2019

National health services are constantly under pressure to provide access to new medicines as soon as marketing authorisation is granted. The National Centre for Pharmacoeconomics (NCPE) in the Republic of Ireland has a rapid review process for selecting medicines that require a full health technology assessment (HTA); the remainder, approximately 45%, can be reimbursed without such an in-depth analysis.

Formal criteria do not exist. However, it has previously been suggested that several factors inform the decision: robust clinical evidence of at least equivalence; a drug that costs the same or less than its comparator; an annual (or estimated) budget impact of less than €0.75 million to €1 million; and the ability of the current health system to restrict usage.

The authors of this paper used the allocation over the past eight years to explore the factors that drive the decision to embark on a full HTA. They found, unsurprisingly, that first-in-class medicines are more likely to require an HTA, as are those with orphan status. Interestingly, the clinical area influenced the requirement for a full HTA, but the authors consider all of these factors to indicate that high-cost drugs are more likely to require a full assessment. Drug cost information is not publicly available, so the authors used the data available on the Scottish Medicines Consortium website as a surrogate for costs in Ireland. In doing so, they were able to establish a relationship between the cost per person for each drug and the likelihood of the drug having a full HTA, further supporting the idea that more expensive drugs are more likely to require HTA. On the face of it, this seems eminently sensible. However, my concern is that, in a system that is deliberately designed to measure cost per unit of health (usually QALYs), there is the potential for lower-cost but ineffective drugs to become commonplace while more expensive medicines are subject to more rigour.

The paper provides some insight into what drives a decision to undertake a full HTA in Ireland. The NICE fast-track appraisal system operates as an opt-in system, where manufacturers can ask to follow this shorter appraisal route if their drug is likely to produce an ICER of £10,000 or less. As my day job is for an Evidence Review Group (opinions my own), how things are done elsewhere – unsurprisingly – captured my attention. The desire to speed up the HTA process is obvious, but the most appropriate mechanisms by which to do so are far from it. Whether or not the same decision is ultimately made is what concerns me.

NHS joint working with industry is out of public sight. BMJ [PubMed] Published 27th March 2019

This paper suggests that ‘joint working arrangements’ – a government-supported initiative between pharmaceutical companies and the NHS – are not being implemented according to guidelines on transparency. These arrangements are designed to promote collaborative research between the NHS and industry and help advance NHS provision of services.

The authors used freedom of information requests to obtain details on how many trusts were involved in joint working arrangements in 2016 and 2017. The declarations of payments made by drug companies are disclosed but the corresponding information from trusts is less readily accessible, and in some cases access to any details was prevented. Theoretically, the joint working arrangements are supposed to be void of any commercial influence on what is prescribed, but my thoughts are echoed in this paper when it asks “what’s in it for the private sector?” The sheer fact that some NHS trusts were unwilling to provide the BMJ with the information requested due to ‘commercial interest’ rings huge alarm bells.

I’m not completely cynical of these arrangements in principle, though, and the paper cites a couple of projects that involved building new facilities for age-related macular degeneration, which likely offer benefits to patients, and possibly much faster than could have been achieved with NHS funding alone. Some of the arrangements intend to push the implementation of national guidance, which, as a small cog in the guidance generation machine, I unashamedly (and predictably) think is a good thing.

Does it matter to us? As economists, it means that any work based on national practice and costs is likely to be unrepresentative of what actually happens. This, however, has always been the case to some extent, with variations in local service provision and the negotiation power of trusts with large volumes of patients. A national register of the arrangements would have the potential to feed into economic analysis, even if just as a statement of awareness.

Can the NHS survive without getting into bed with industry? Probably not. I think the paper does a good job of presenting the arguments on all sides and pushing for increased transparency about what is happening.

Estimating joint health condition utility values. Value in Health [PubMed] Published 22nd February 2019

I’m really interested in how this area is developing. Multi-morbidity is the norm, especially as we age. Single condition models are criticised for their lack of representation of patients in the real world. Appropriately estimating the quality of life of people with several chronic conditions, when only individual condition data are available, is incredibly difficult.

In this paper, parametric and non-parametric methods were tested on a dataset from a large primary care patient survey in the UK. The multiplicative approach was the best performing for two conditions. When more than two conditions were considered, the linear index (which incorporates additive, multiplicative, and minimum models with the use of linear regression and parameter weights derived from the underlying data) achieved the best results.
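As a rough sketch of what the additive, multiplicative, and minimum models look like when only single-condition utilities are available (the utility values below are invented, and the paper’s best-performing linear index additionally weights these components via regression):

```python
# Hypothetical sketch of three simple estimators for the utility of a
# joint health state from single-condition utilities (values invented).
# Utilities are expressed relative to full health = 1.

def additive(utils):
    # Subtract each condition's decrement from full health.
    return 1 - sum(1 - u for u in utils)

def multiplicative(utils):
    # Multiply the single-condition utilities together.
    result = 1.0
    for u in utils:
        result *= u
    return result

def minimum(utils):
    # Assume the worst condition alone drives quality of life.
    return min(utils)

conditions = [0.9, 0.8]  # two illustrative single-condition utilities
print(additive(conditions))        # ≈ 0.70
print(multiplicative(conditions))  # ≈ 0.72
print(minimum(conditions))         # 0.8
```

The differences between the three are small for two mild conditions but grow as conditions accumulate, which is consistent with the paper’s finding that estimation error increases with the number of joint conditions.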

Including long-term mental health within the co-morbidities for which utility was estimated produced biased estimates. The authors discuss some possible explanations for this, including the fact that the anxiety and depression question in the EQ-5D is the only one which directly maps to an individual condition, and that mental health may have a causal effect on physical health. This is a fascinating finding, which has left me somewhat scratching my head as to how this oddity could be addressed and if separate methods of estimation will need to be used for any population with multi-morbidity including mental health conditions.

It did make me wonder if more precise EQ-5D data could be helpful to uncover the true interrelationships between joint health conditions and quality of life. The EQ-5D asks patients to think about their health state ‘today’. Although the primary care dataset used includes 16 chronic health conditions, it doesn’t, as far as I know, contain any information on the symptoms apparent on the day of quality of life assessment, which could be flaring or absent at any given time. This is a common problem with the EQ-5D and I don’t think a readily available data source of this type exists, so it’s a thought on ideals. Unsurprisingly, the more joint health conditions to be considered, the larger the error in terms of estimation from individual conditions. This may be due to the increasing likelihood of overlap in the symptoms experienced across conditions and thus a violation of the assumption that quality of life for an individual condition is independent of any other condition.

Whether the methodology remains robust for populations outside of the UK or for other measures of utility would need to be tested, and the authors are keen to highlight the need for caution before running away and using the methods verbatim. The paper does present a nice summary of the evidence to date in this area, what the authors did, and what it adds to the topic, so worth a read.

Simon McNamara’s journal round-up for 8th April 2019

National Institute for Health and Care Excellence, social values and healthcare priority setting. Journal of the Royal Society of Medicine [PubMed] Published 2nd April 2019

As is traditional, this week’s round-up starts with an imaginary birthday party. After much effort, we have finally managed to light the twenty candles, have agreed our approach to the distribution of the cake, and are waiting in anticipation of the entrance of the birthday “quasi-autonomous non-governmental body”. The door opens. You clear your throat. Here we go…

Happy Birthday to you,

Happy Birthday to you,

Happy Birthday dear National Institute for Health and Care Excellence,

Happy Birthday to you.

NICE smiles happily. It is no longer a teenager. It has made it to 20 – despite its parents challenging it a few times (cough, Cancer Drug Fund, cough). After the candles have been blown out, someone at the back shouts: “Speech! Speech!”. NICE coughs, thanks everyone politely, and (admittedly slightly strangely) takes the opportunity to announce that they are revising their “Social Value Judgements” paper – a document that outlines the principles they use to develop guidance. They then proceed to circle the room, proudly handing out draft copies of the new document – “The principles that guide the development of NICE guidance and standards” (PDF). They look excited. Your fellow guests start to read.

“Surely not?”, “What the … ?”, “Why?” – they don’t seem pleased. You jump into the document. All of this is about process. Where are all the bits about justice, and inequalities, and bioethics, and the rest? “Why have you taken out loads of the good stuff?” you ask. “This is too vague, too procedural”. Your disappointment is obvious to those in the room.

Your phone pings – it’s your favourite WhatsApp group. One of the other guests has already started drafting a “critical friend” paper in the corner of the room. They want to know if you want to be involved. “I’m in”, you respond, “This is important, we need to make sure NICE knows what we think”. Your phone pings again. Another guest is in: “I want to be involved, this matters. Also, this is exactly the kind of paper that will get picked up by the AHE blog. If we are lucky, we might even be the first paper in one of their journal round-ups”. You pause, think, and respond hopefully: “Fingers crossed”.

I don’t know if NICE had an actual birthday party – if they did I certainly wasn’t invited. I also highly doubt that the authors of this week’s first paper, or indeed any paper, had the AHE blog in mind when writing. What I do know, is that the first article is indeed a “critical friend” paper which outlines the authors’ concerns with NICE’s proposal to “revise” (read: delete) their social value judgements guidance. This paper is relatively short, so if you are interested in these changes I suggest you read it, rather than relying on my imaginary birthday party version of their concerns.

I am highly sympathetic to the views expressed in this paper. The existing “social value judgements” document is excellent, and (to me at least) seems to be the gold standard in setting the values by which an HTA body should develop guidance. Reducing this down to solely procedural elements seems unnecessary, and potentially harmful if the other core values are forgotten, or deprioritised.

As I reflect on this paper, I can’t help think of the old adage: “If it ain’t broke, don’t fix it”. NICE – this ain’t broke.

Measuring survival benefit in health technology assessment in the presence of nonproportional hazards. Value in Health Published 22nd March 2019

Dear HTA bodies that don’t routinely look for violations of proportional hazards in oncology data: 2005 called, they want their methods back.

Seriously though, it’s 2019. Things have moved on. If a new drug has a different mode of action to its comparator, is given for a different duration, or has differing levels of treatment effect in different population subgroups, there are good reasons to think that the trial data for that drug might violate proportional hazards. So why not look? It’s easy enough, and could change the way you think about both the costs and the benefits of that medicine.

If you haven’t worked in oncology before, there is a good chance you are currently asking yourself two questions: “what does proportional hazards mean?” and “why does it matter?”. In massively simplified terms, when we say the hazards in a trial are “proportional” we mean that the treatment effect of the new intervention (typically on survival) is constant over time. If a treatment takes some time to work (e.g. immunotherapies), or is given for only a few weeks before being stopped (e.g. some chemotherapies), there are good reasons to think that the treatment effect of that intervention may vary over time. If this is the case, there will be a violation of proportional hazards (they will be “nonproportional”).
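A quick way to see what a violation looks like: if two trial arms follow Weibull survival distributions with different shape parameters, the hazard ratio changes over time rather than staying constant. This is an illustrative sketch with invented parameters, not an analysis from the paper.

```python
# Illustrative sketch (invented parameters): Weibull hazards with
# different shape parameters produce a time-varying hazard ratio,
# i.e. nonproportional hazards.
import math

def weibull_hazard(t, shape, scale):
    # Hazard function of a Weibull distribution: h(t) = (k/s) * (t/s)^(k-1)
    return (shape / scale) * (t / scale) ** (shape - 1)

# Control arm: shape = 1 (exponential, constant hazard).
# Treatment arm: shape = 1.5, so its hazard starts low and rises over
# time, mimicking a treatment whose effect wanes.
for t in [0.5, 1.0, 2.0, 4.0]:
    hr = weibull_hazard(t, shape=1.5, scale=2.0) / weibull_hazard(t, shape=1.0, scale=2.0)
    print(f"t = {t}: hazard ratio = {hr:.2f}")
```

Here the hazard ratio is below 1 early on and above 1 later, so any single ‘average’ hazard ratio from the trial would be a poor summary of the treatment effect at any particular time.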

If you are an HTA body, this is important for at least three reasons. First, if hazards are non-proportional, the average hazard ratio (treatment effect) from the trial may be a poor representation of what is likely to happen beyond the trial period – a big issue if you are extrapolating data in an economic model. Second, if hazards are non-proportional, the median survival benefit from the trial may be a poor representation of the mean benefit (e.g. in the case of a curve with a “big tail”). If you don’t account for this, and rely on medians (as some HTA bodies do), your evaluation can underestimate or overestimate the true benefits and costs of the medicine. Third, most approaches to including indirect comparisons in economic models rely on proportionality, so, if this doesn’t hold, your model might be a poor representation of reality. Given these issues, it makes sense that HTA bodies should be looking for violations of proportional hazards when evaluating oncology data.
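The median-versus-mean point can be made concrete with a toy simulation (all numbers invented): if a minority of treated patients become long-term survivors, the ‘big tail’ means the difference in medians badly understates the difference in means.

```python
# Toy simulation (invented numbers): a treatment that produces long-term
# survivors in a minority of patients gives a survival curve with a big
# tail, so the median benefit understates the mean benefit.
import random
import statistics

random.seed(1)
n = 100_000

# Control arm: exponential survival with mean 12 months.
control = [random.expovariate(1 / 12) for _ in range(n)]

# Treatment arm: 70% of patients behave like controls; 30% are
# long-term survivors with mean survival of 120 months.
treated = [random.expovariate(1 / 12) if random.random() < 0.7
           else random.expovariate(1 / 120) for _ in range(n)]

median_benefit = statistics.median(treated) - statistics.median(control)
mean_benefit = statistics.fmean(treated) - statistics.fmean(control)
print(f"median benefit: {median_benefit:.1f} months")
print(f"mean benefit:   {mean_benefit:.1f} months")
```

In this toy example the median benefit is only a few months, while the mean benefit is several times larger; an HTA relying on medians alone would understate the expected survival gain (and the associated treatment costs) considerably.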

In this week’s second paper, the authors review the way different HTA bodies approach the issue of non-proportionality in their methods guides, and in a sample of their appraisals. Of the HTA bodies considered, they find that only NICE (UK), CADTH (Canada), and PBAC (Australia) recommend testing for proportional hazards. Notably, the authors report that the Transparency Committee (France), IQWiG (Germany), and TLV (Sweden) don’t recommend testing for proportionality. Interestingly, despite these recommendations, the authors find that only for NICE did the majority of reviewed appraisals include these tests; just 20% of the PBAC appraisals and 8% of the CADTH appraisals did. This suggests that the vast majority of oncology drug evaluations do not include consideration of non-proportionality – a big concern given the issues outlined above.

I liked this paper, although I was a bit shocked at the results. If you work for an HTA body that doesn’t recommend testing for non-proportionality, or doesn’t enforce their existing recommendations, I suggest you think very carefully about this issue – particularly if you rely on the extrapolation of survival curves in your assessments. If you aren’t looking for violations of proportional hazards, there is a good chance that you aren’t reflecting the true costs and benefits of many medicines in your evaluations. So, why not look for them?

The challenge of antimicrobial resistance: what economics can contribute. Science Published 5th April 2019

Health Economics doesn’t normally make it into Science (the journal). If it does, it probably means the paper is an important one. This one certainly is.

Antimicrobial resistance (AMR) is scary – really scary. One source cited in this paper predicts that by 2050, 10 million people a year will die due to AMR. I don’t know about you, but I find this pretty worrying (how’s that for a bit of British understatement?). Given these predicted consequences, you would think that there would be quite a lot of work from economists on this issue. Well, there isn’t. According to this article, there are only 55 papers on EconLit that “broadly relate” to AMR.

This paper contributes to this literature in two important ways. First, it is a call to arms to economists to do more work on AMR. If there are only 55 papers on this topic, this suggests we are only scratching the surface of the issue and could do more as a field to help solve the problem. Second, it neatly demonstrates how economics could be applied to the problem of AMR – including analysis of both the supply side (not enough new antibiotics being developed) and demand side (too much antibiotic use) of the problem.

In the main body of the paper, the authors draw parallels between the economics of AMR and the economics of climate change: both are global instances of the ‘tragedy of the commons’, both are subject to significant uncertainty about the future, and both are highly sensitive to inter-temporal discounting. They then go on to suggest that many of the ideas developed in the context of climate change could be applied to AMR – including the potential for use of antibiotic prescribing quotas (analogous to carbon quotas) and taxation of antibiotic prescriptions (analogous to the idea of a carbon tax). There are many other ideas in the paper, and if you are interested in these I suggest you take the time to read it in full.

I think this is an important paper and one that has made me think more about the economics of both AMR and, inadvertently, climate change. With both issues, I can’t help but think we might be sleepwalking into a world where we have royally screwed over future generations because we didn’t take the actions we needed to take. If economists can help stop these things happening, we need to act. If we don’t, what will you say in 2050 when you turn on the news and see that 10 million people are dying from AMR each year? That is, assuming you aren’t one of those who has died as a result. Scary stuff indeed.