Chris Sampson’s journal round-up for 23rd October 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

What is the evidence from past National Institute for Health and Care Excellence single-technology appraisals regarding company submissions with base-case incremental cost-effectiveness ratios of less than £10,000/QALY? Value in Health Published 18th October 2017

NICE have been looking into diversifying their HTA processes of late. One of the newly proposed rules is that technologies with a base-case ICER estimate of less than £10,000 per QALY should be eligible for a fast-track appraisal, so that patients can benefit as early as possible from a therapy that does not pose a great risk of wasting NHS resources. But what have NICE been doing up to this point for such technologies? For this study, the researchers analysed content from all NICE single technology appraisals (STAs) between 2009 and 2016, of which there were 171 with final reports available that reported a base-case ICER. 15% (26) of the STAs reported all base-case ICERs to be below £10,000, and of these 73% (19) received a positive recommendation at the first appraisal committee meeting. A key finding is that 7 of the 26 received a ‘Minded No’ judgment in the first instance, due in part to inadequate evidence, and – though all got a positive decision in the end – some recommendations were restricted to subgroups. The authors also had a look at STAs with base-case ICERs up to £15,000, of which there were 5 more. All of these received a positive recommendation at the first appraisal committee meeting. Another group of 28 STAs reported multiple ICERs that included estimates both below and above £10,000. These tell a different story. Only 13 received an unrestricted positive recommendation at the first appraisal committee. Positive recommendations eventually followed for all 28, but 7 were on the basis of patient access schemes. There are a few things to consider in light of these findings. It may not be possible for NICE to adequately fast-track some sub-£10k submissions because the ICERs are not estimated on the basis of appropriate comparisons, or because the evidence is otherwise inadequate. But there may be good grounds for extending the fast-track threshold to £15,000.
The study also highlights some indicators of complexity (such as the availability of patient access scheme discounts) that might be used as a basis for excluding submissions from the fast-track process.
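For readers less familiar with the mechanics, the fast-track rule discussed above boils down to a simple calculation: the ICER is the incremental cost divided by the incremental QALY gain, compared against a threshold. A minimal sketch (the submission figures below are invented for illustration; only the £10,000 and £15,000 thresholds come from the discussion above):

```python
# Illustrative sketch of the fast-track logic discussed above.
# The thresholds are from the text; the example submission is hypothetical.

def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    if delta_qaly <= 0:
        raise ValueError("ICER is only meaningful for a positive QALY gain")
    return delta_cost / delta_qaly

FAST_TRACK_THRESHOLD = 10_000  # proposed fast-track cut-off (£/QALY)
EXTENDED_THRESHOLD = 15_000    # extension the findings suggest may be justified

# Hypothetical submission: therapy costs £4,500 more and yields 0.5 extra QALYs
ratio = icer(delta_cost=4_500, delta_qaly=0.5)
print(f"Base-case ICER: £{ratio:,.0f}/QALY")                    # £9,000/QALY
print("Eligible for fast track:", ratio < FAST_TRACK_THRESHOLD)  # True
```

Of course, as the paper shows, a sub-£10k base-case ICER alone isn't sufficient grounds for a fast track if the underlying comparison or evidence base is inadequate.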

EQ-5D-5L versus EQ-5D-3L: the impact on cost-effectiveness in the United Kingdom. Value in Health Published 18th October 2017

Despite some protest from NICE, most UK health economists working on trial-based economic evaluations are probably getting on with using the new EQ-5D-5L (and associated value set) over its 3L predecessor. This shift could bring important changes to the distribution of cost-effectiveness results for evaluated technologies. In this study, the researchers sought to identify what these changes might be, by examining a couple of datasets which included both 3L and 5L response data. One dataset was produced by the EuroQol group, with 3,551 individuals from across Europe with a range of health states, and the other was a North American dataset collected from 5,205 patients with rheumatoid disease, which switched from 3L to 5L with a wave of overlap. The analysis employs a previously developed method with a series of ordinal regressions, in which 3L-5L pairs are predicted using a copula approach. The first thing to note is that there was variation in the distribution of responses between the different dimensions and between the two datasets, and so a variety of model specifications are needed. To investigate the implications of using the 5L instead of the 3L, the authors considered 9 cost-effectiveness analysis case studies. The 9 studies reported 13 comparisons. In almost all cases where 3L was replaced with the 5L, the intervention resulted in a smaller QALY gain and higher ICER. The only study in which use of the 5L increased the incremental QALYs was one in which life extension was the key driver of QALY gains. Generally speaking, use of the 5L increases index values and reduces the range, so quality of life improvements are ‘more difficult’ to achieve, while life extension is relatively more valuable than on the 3L. Several technologies move from being clearly cost-effective within NICE’s £20,000-£30,000 threshold to being borderline cases. Different technologies for different diseases will be impacted differently by the move from the 3L to the 5L. 
So while we should probably still start using the 5L and its value set (because it’s methodologically superior), we mustn’t forget how different our findings might be in comparison to our old ways.
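The compression effect described above can be made concrete with a toy calculation. All utility values and costs below are invented for illustration; the point is only that when the same two health states score higher and closer together on the 5L index, the incremental QALY gain shrinks and the ICER rises:

```python
# Hypothetical illustration of the 3L-to-5L compression effect:
# 5L value sets tend to give higher index values over a narrower range,
# so the same quality-of-life improvement yields fewer incremental QALYs.
# All numbers are invented for illustration.

def qalys(utility, years):
    """Undiscounted QALYs from spending `years` at a given utility."""
    return utility * years

cost_difference = 20_000  # hypothetical incremental cost of the intervention
horizon = 5               # years spent in each state

u_control_3l, u_treat_3l = 0.50, 0.70  # hypothetical 3L index values
u_control_5l, u_treat_5l = 0.65, 0.78  # same states: higher, closer together on 5L

dq_3l = qalys(u_treat_3l, horizon) - qalys(u_control_3l, horizon)  # 1.00 QALY
dq_5l = qalys(u_treat_5l, horizon) - qalys(u_control_5l, horizon)  # 0.65 QALYs

print(f"3L: dQALY = {dq_3l:.2f}, ICER = £{cost_difference / dq_3l:,.0f}/QALY")
print(f"5L: dQALY = {dq_5l:.2f}, ICER = £{cost_difference / dq_5l:,.0f}/QALY")
```

In this made-up case the ICER moves from £20,000/QALY to roughly £30,800/QALY, i.e. from within NICE's £20,000–£30,000 range to just outside it, mirroring the borderline cases the study reports. A life-extending intervention, by contrast, benefits from the higher 5L index values, which is why the one life-extension case study gained incremental QALYs.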

Experience-based utility and own health state valuation for a health state classification system: why and how to do it. The European Journal of Health Economics [PubMed] Published 11th October 2017

There’s debate around whose values we ought to be using to estimate QALYs when making resource allocation decisions. Generally we use societal values, but some researchers think we should be using values from people actually in those health states. I’ve written before about some of the problems with this debate. In this study, the authors try to bring some clarity to the discussion. Four types of values are considered, defined by two distinctions: hypothetical vs own current state and general public vs patient values. The notion of experienced utility is introduced and the authors explain why this cannot be captured by (for example) a TTO exercise, because such exercises require hypothetical future scenarios of health improvement. Thus, the preferred terminology becomes ‘own health state valuation’. The authors summarise some of the research that has sought to compare the 4 types of values specified, highlighting that own health state valuations tend to give higher values associated with dysfunctional health states than do general population hypothetical valuations. The main point is that valuations can differ systematically according to whose values are being elicited. The authors describe some reasons why these values may differ. These could include i) poor descriptions of hypothetical states, ii) changing internal standards (e.g. response shift), and iii) adaptation. Next, the authors consider how to go about collecting own health state values. Two key challenges are specified: i) respondents may be unwilling where questions are complex or intrusive, and ii) there may be ethical concerns, particularly where people are in terminal conditions. It is therefore difficult to sample for all possible health states. Selection bias may also rear its head. The tendency for milder health states to be observed creates problems for the econometricians trying to model value sets. The authors propose some ways forward for identifying own health state value sets.
One way would be to purposively sample people in each EQ-5D health state who are representative of those experiencing it. However, some states are rarely observed, so identifying the necessary participants from a general survey could mean screening millions of people. The authors therefore suggest targeting people via other routes, though this may still prove very difficult. A more effective (and favourable) approach – the authors suggest – could be to try and obtain better-informed general population values. This could involve improving descriptive systems and encouraging deliberation. Evidence suggests that this can reduce the discrepancy between hypothetical and own state valuations. In particular, the authors recommend the use of citizens’ juries and multi-criteria decision analysis. This isn’t something we see being done in the literature, and so it may be a fruitful avenue for future research.

Credits
