Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.
The EQ-5D-5L value set for England: findings of a quality assurance program. Value in Health Published 13th March 2020
What do you call a ‘quality assurance program’ when there is no agreed quality standard? I have some ideas. Most readers will be aware of the EQ-5D-5L value set for England fiasco. In short, the published value set was trashed by some researchers and as a result NICE decided not to recommend its use. This publication is by the researchers who did the trashing, which they refer to as a ‘quality assurance program’.
The authors were provided with access to the data and analysis script used by the original value set development team. With this, they looked at the quality of the data and the suitability of the analyses. A significant chunk of the paper is dedicated to reporting some logistic regression models that were used to predict whether an individual reported having difficulty in making choices as part of the time trade-off (TTO) or discrete choice experiment (DCE) tasks. It isn’t clear to me why this is interesting or important. These tasks should be difficult. The authors find that men are less likely to report difficulty. What does that have to do with the quality of the data? Logistic regressions are also used to model a set of ‘problematic outcomes’, such as an individual valuing a state lower than their value for state 55555 or only trading states in multiples of 5 years. The most notable finding seems to be that people who find the task difficult are less likely to generate problematic outcomes.
Clearly, there are problems with the way the data were collected and managed. For instance, we do not know anything about non-respondents. One of the key ‘findings’ reported in this study is that the time trade-off experiments included fewer than 2.75% of the 3,125 possible health states described by the EQ-5D-5L. This may or may not be a problem. Another ‘finding’ is that a high proportion of respondents (around 47%) valued at least 20% of states inconsistently, which is twice as many as in the 3L value set. However, ‘inconsistent’ seems to be the word chosen by the authors to describe almost anything questionable in the data, such as an individual only valuing states using integer values.
The authors report a series of tests for the model specification and the Bayesian approach used for the final value set. They state that the model is unidentified and that the Bayesian model relies on unjustified informative priors. Their tests also indicate that the model does not achieve convergence. These are all reasonable questions to ask, but they mostly indicate the potential for problems rather than demonstrate flaws that would undermine the quality of the value set.
The authors throw around words like ‘dangerous’, ‘risk’, and ‘sufficiency’, without bothering to tell us what they mean by these. There is a fine line between scholarly mudslinging and constructive criticism. Contributors to this blog (well, me, at least) will happily engage in both. But surely the authors of this paper had a duty to be constructive and to meaningfully guide policymaking. Instead what we see is an unfortunate mix of one-upmanship, conflicts of interest, and an apparent lack of interest in policymaking consequences. It is notable that use of the authors’ own (routinely promoted) mapping algorithm would be undermined by recommendation of the 5L value set for England.
So, to answer my opening question, this article is essentially an opinion piece. It will prove useful for the methodological development of EQ-5D valuation studies, but it should not have had such decisive influence on NICE’s view of the 5L value set for England.
As if by magic, last week also saw the publication of yet another study providing reason to prefer the EQ-5D-5L over the EQ-5D-3L.
This study is based on what might be the largest data set of matched EQ-5D-3L and EQ-5D-5L values. Data are from the General Practice Patient Survey in England, which collected EQ-5D-3L in 2011 and EQ-5D-5L in 2012. The survey is cross-sectional, with different people in each year, so the authors use an approach called coarsened exact matching. This involves matching 3L observations exactly to 5L observations on other variables, such as age, health conditions, ethnicity, and economic activity. The observations are ‘coarsened’ in that continuous variables are collapsed into categories to facilitate exact matching. The matched data set included more than 500,000 people in each year.
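The logic of coarsened exact matching can be sketched in a few lines. This is a minimal illustration with made-up toy data and just two hypothetical matching variables (age and a health condition), not the study’s actual covariates or code: continuous age is coarsened into bands, and only respondents falling in strata observed in both survey years are retained.

```python
from collections import defaultdict

# Hypothetical toy records from two cross-sectional survey years.
# The study matched on more variables (e.g. ethnicity, economic activity).
survey_2011 = [(23, "none"), (47, "diabetes"), (61, "none"), (70, "asthma")]
survey_2012 = [(25, "none"), (45, "diabetes"), (64, "none"), (33, "none")]

def age_band(age):
    """Coarsen continuous age into bands so that exact matching is feasible."""
    for upper, label in [(30, "0-30"), (50, "31-50"), (70, "51-70")]:
        if age <= upper:
            return label
    return "71+"

def strata(records):
    """Group respondents by their coarsened covariate stratum."""
    groups = defaultdict(list)
    for age, condition in records:
        groups[(age_band(age), condition)].append((age, condition))
    return groups

g_2011, g_2012 = strata(survey_2011), strata(survey_2012)

# Exact matching step: keep only strata observed in BOTH years;
# respondents in unmatched strata are dropped from the analysis.
common = g_2011.keys() & g_2012.keys()
matched_2011 = [r for s in common for r in g_2011[s]]
matched_2012 = [r for s in common for r in g_2012[s]]
```

In this toy example, the 70-year-old with asthma in 2011 has no counterpart stratum in 2012 and is dropped, as is the 33-year-old with no conditions in 2012.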
The analysis reaffirmed findings from previous studies on the measurement properties of the EQ-5D-5L relative to the 3L. The 5L was associated with a reduced ceiling effect. The authors found that the 5L was associated with a greater number of deviations from full health, such that ill health was reported more frequently, but with less severity on average. As well as considering the whole population, analyses are also conducted on people reporting particular health problems and on the subset of people with multiple conditions. From this, we see that the advantageous measurement properties of the 5L are even more pronounced in the multimorbidity group.
Informativity was also assessed using Shannon’s indices. These analyses demonstrated that the 5L was more informative. In particular, there was much greater informativity in the 5L version of the usual activities domain. By looking at a binary question about recent experiences of being unwell, the authors also show that the 5L is more sensitive to this, with a smaller proportion of people with recent experiences of being unwell reporting ‘no problems’ on all domains.
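For readers unfamiliar with Shannon’s indices in this context, they are simple to compute from the distribution of responses across a domain’s levels. Below is a minimal sketch with invented response counts (not the study’s data): Shannon’s index H rewards spreading responses across levels, and the evenness index J = H / H_max normalises by the number of levels, which is what makes 3L and 5L comparable.

```python
import math

def shannon(counts):
    """Return Shannon's index H = -sum(p_i * log2(p_i)) over response
    levels, and evenness J = H / log2(number of levels)."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    h = -sum(p * math.log2(p) for p in probs)
    h_max = math.log2(len(counts))
    return h, h / h_max

# Hypothetical response distributions for one domain: the 3L piles
# respondents onto three levels; the 5L spreads them over five.
h3, j3 = shannon([700, 250, 50])
h5, j5 = shannon([550, 150, 200, 70, 30])
```

With these invented counts, the five-level version yields a higher H, illustrating (not demonstrating) the kind of informativity gain the authors report.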
The authors conclude that general population surveys should opt to use the 5L rather than the 3L. Sigh.
Is it time to nationalise the pharmaceutical industry? BMJ [PubMed] Published 4th March 2020
Inevitably, neither of the authors of this ‘head to head’ really support nationalisation of the pharmaceutical industry. Instead, we have Mariana Mazzucato and Henry Lishi Li arguing in favour of greater public sector involvement, and Ara Darzi arguing for effective regulation.
The ‘pro-nationalisation’ perspective hinges on the idea that the pharmaceutical industry is misaligned with public interests. Call me an anticapitalist, but I don’t see many industries that are. The argument includes a proposal for there to be a role for fairly priced government-provided medicines, but this seems to be a jumble of supply-side and demand-side ideas without any concrete framework. There is a suggestion that the government should manufacture medicines and also a suggestion that prices should recognise the public contribution to R&D, but it isn’t clear how these things fit together.
The ‘anti-nationalisation’ perspective seems to be grounded in a fear of upsetting industry too much. From the perspective of the national economy, this might be reasonable. It is good for the UK if pharmaceutical companies base their R&D here. But the government shouldn’t be shy of exercising its power. The author floats and then dismisses the idea of delinkage, whereby prices aren’t determined by R&D cost, though it may have a role in the development of antibiotics, which the author considers to be a unique challenge.
Both arguments lead us to the idea of government trying to guide innovation and manage competition. This is essentially where we are now and it involves lots of messy negotiations. If everyone feels hard done by and yet the drugs keep coming, the system is probably working OK.
Is there a future for value-based contracting? Value in Health Published 28th February 2020
Around the time that I started out in the health economics world, there was a lot of talk about value-based pricing. But then everyone gave up because it was too hard. Now, it’s making a comeback, but people are calling it value-based contracting. This paper provides a brief summary of where we’re at.
The author points out that only 23 value-based contracts have been publicly disclosed in the US, despite all the chatter. Even if there are more than this that have not been disclosed, the point is that they cannot be having a meaningful impact on health care expenditure overall. The focus to date has been on achieving access to drugs with very high prices, rather than a system-wide shift. This narrow focus can only have a limited impact and the author suggests that alternative measures, such as promoting competition, are likely to be more useful.
There is a whole range of different types of outcome-based or risk-sharing agreement, including ‘managed entry agreements’, ‘coverage with evidence development’, and ‘value-based insurance design’. While it may be important to have bespoke arrangements, the array of different terms may serve only to confuse and maintain the status of value-based pricing as a pipe dream.