Chris Sampson’s journal round-up for 4th June 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A qualitative investigation of the health economic impacts of bariatric surgery for obesity and implications for improved practice in health economics. Health Economics [PubMed] Published 1st June 2018

Few would question the ‘economic’ nature of the challenge of obesity. Bariatric surgery is widely recommended for severe cases but, in many countries, the supply is not sufficient to satisfy the demand. In this context, this study explores the value of qualitative research in informing economic evaluation. The authors assert that previous economic evaluations have adopted a relatively narrow focus and thus might underestimate the expected value of bariatric surgery. But rather than going and finding data on what they think might be additional dimensions of value, the authors ask patients. Emotional capital, ‘societal’ (i.e. non-health) impacts, and externalities are identified as theories for the types of value that might be derived from bariatric surgery. These theories were used to guide the development of questions and prompts that were used in a series of 10 semi-structured focus groups. Thematic analysis identified the importance of emotional costs and benefits as part of the ‘socioemotional personal journey’ associated with bariatric surgery. Out-of-pocket costs were also identified as being important, with self-funding being a challenge for some respondents. The information seems useful in a variety of ways. It helps us understand the value of bariatric surgery and how individuals make decisions in this context. This information could be used to determine the structure of economic evaluations or the data that are collected and used. The authors suggest that an EQ-5D bolt-on should be developed for ‘emotional capital’ but, given that this ‘theory’ was predefined by the authors and does not arise from the qualitative research as being an important dimension of value alongside the existing EQ-5D dimensions, that’s a stretch.

Developing accessible, pictorial versions of health-related quality-of-life instruments suitable for economic evaluation: a report of preliminary studies conducted in Canada and the United Kingdom. PharmacoEconomics – Open [PubMed] Published 25th May 2018

I’ve been telling people about this study for ages (apologies, authors, if that isn’t something you wanted to read!). In my experience, the need for more (cognitively / communicatively) accessible outcome measures is widely recognised by health researchers working in contexts where this is relevant, such as stroke. If people can’t read or understand the text-based descriptors that make up (for example) the EQ-5D, then we need some alternative format. You could develop an entirely new measure. Or, as the work described in this paper set out to do, you could modify existing measures. Three descriptive systems are described in this study: i) a pictorial EQ-5D-3L by the Canadian team, ii) a pictorial EQ-5D-3L by the UK team, and iii) a pictorial EQ-5D-5L by the UK team. Each uses images to represent the different levels of the different dimensions. For example, the mobility dimension might show somebody walking around unaided, walking with aids, or in bed. I’m not going to try to describe what they all look like, so I’ll just encourage you to take a look at the Supplementary Material. All are described as ‘pilot’ instruments and shouldn’t be picked up and used at this stage. Different approaches were used in the development of the measures, and there are differences between the measures in terms of the images selected and the ways in which they’re presented. But each process referred to conventions in aphasia research, used input from clinicians, and consulted people with aphasia and/or their carers. The authors set out several remaining questions and avenues for future research. The most interesting possibility to most readers will be the notion that we could have a ‘generic’ pictorial format for the EQ-5D, which isn’t aphasia-specific. This will require continued development of the pictorial descriptive systems, and ultimately their validation.

QALYs in 2018—advantages and concerns. JAMA [PubMed] Published 24th May 2018

It’s difficult not to feel sorry for the authors of this article – and indeed all US-based purveyors of economic evaluation in health care. With respect to social judgments about the value of health technologies, the US’s proverbial head remains well and truly buried in the sand. This article serves as a primer and an enticement for the use of QALYs. The ‘concerns’ cited relate almost exclusively to decision rules applied to QALYs, rather than the underlying principles of QALYs, presumably because the authors didn’t feel they could ignore the points made by QALY opponents (even if those arguments are vacuous). What it boils down to is this: trade-offs are necessary, and QALYs can be used to promote value in those trade-offs, so unless you offer some meaningful alternative then QALYs are here to stay. Thankfully, the Institute for Clinical and Economic Review (ICER) has recently added some clout to the undeniable good sense of QALYs, so the future is looking a little brighter. Suck it up, America!

The impact of hospital costing methods on cost-effectiveness analysis: a case study. PharmacoEconomics [PubMed] Published 22nd May 2018

Plugging different cost estimates into your cost-effectiveness model could alter the headline results of your evaluation. That might seem obvious, but there are a variety of ways in which the selection of unit costs might be somewhat arbitrary or taken for granted. This study considers three alternative sources of information for hospital-based unit costs for hip fractures in England: (a) spell-level tariffs, (b) finished consultant episode (FCE) reference costs, and (c) spell-level reference costs. Source (b) provides, in theory, a more granular version of (c), describing the individual episodes that make up a person’s hospital stay. Reference costs are estimated on the basis of hospital activity, while tariffs are prices estimated on the basis of historic reference costs. The authors use a previously reported cohort state transition model to evaluate different models of care for hip fracture and explore how the use of the different cost figures affects their results. FCE-level reference costs produced the highest total first-year hospital care costs (£14,440), and spell-level tariffs the lowest (£10,749). The more FCEs within a spell, the greater the discrepancy. This difference in costs affected ICERs, such that the net-benefit-optimising decision would change. The study makes an important point – that selection of unit costs matters. But it isn’t clear why the difference exists. It could just be due to a lack of precision in reference costs in this context (rather than a lack of accuracy, per se), or it could be that reference costs misestimate the true cost of care across the board. Without clear guidance on how to select the most appropriate source of unit costs, these different costing methodologies represent another source of uncertainty in modelling, which analysts should consider and explore.
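
As a stylised illustration of the point (not the paper’s model, and with entirely hypothetical numbers), the sketch below shows how swapping the unit-cost source used to cost the same resource use can move an ICER across a willingness-to-pay threshold and flip the adoption decision.

```python
# Illustrative sketch only: how the choice of unit-cost source can flip a
# cost-effectiveness decision. All numbers below are hypothetical.

THRESHOLD = 20_000  # assumed willingness to pay per QALY (GBP)

def icer(delta_cost, delta_qalys):
    """Incremental cost-effectiveness ratio of intervention vs comparator."""
    return delta_cost / delta_qalys

# Hypothetical incremental hospital cost of a new hip-fracture care pathway,
# estimated from the same resource use with two different unit-cost sources.
incremental_costs = {
    "spell-level tariffs": 2_500.0,        # GBP, hypothetical
    "FCE-level reference costs": 4_200.0,  # GBP, hypothetical
}
incremental_qalys = 0.15  # hypothetical QALY gain

for source, delta_cost in incremental_costs.items():
    ratio = icer(delta_cost, incremental_qalys)
    decision = "adopt" if ratio <= THRESHOLD else "reject"
    print(f"{source}: ICER = £{ratio:,.0f} per QALY -> {decision}")
```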


Method of the month: Q methodology

Once a month we discuss a particular research method that may be of interest to people working in health economics. We’ll consider widely used key methodologies, as well as more novel approaches. Our reviews are not designed to be comprehensive but provide an introduction to the method, its underlying principles, some applied examples, and where to find out more. If you’d like to write a post for this series, get in touch. This month’s method is Q methodology.

Principles

There are many situations in which we might be interested in people’s views, opinions or beliefs about an issue, such as how we allocate health care resources or the type of care we provide to dementia patients. Typically, health economists might think about using qualitative methods or preference elicitation techniques, but Q methodology could be your new method for examining these questions. Q methodology combines qualitative and quantitative techniques that allow us first to identify the range of views that exist on a topic and then to describe those viewpoints in depth.

Q methodology was conceived by William Stephenson as a way to study subjectivity and is detailed in his 1953 book The Study of Behavior. A more widely available book by Watts and Stenner (2012) provides a great general introduction to all stages of a Q study, and the paper by Baker et al (2006) introduces Q methodology in health economics.

Implementation

There are two main stages in a Q methodology study. In the first stage, participants express their views by rank-ordering a set of statements; each participant’s completed ranking is known as a Q sort. The second stage uses factor analysis to identify patterns of similarity between the Q sorts, which can then be described in detail.

Stage 1: Developing the statements and Q sorting

The most important part of any Q study is the development of the statements that your participants will rank-order. The starting point is to identify all of the possible views on your topic. Participants should be able to interpret the statements as opinions rather than facts, for example, “The amount of health care people have had in the past should not influence access to treatments in the future”. The statements can come from a range of sources including interview transcripts, public consultations, academic literature, newspapers and social media. Through a process of eliminating duplicates and merging or deleting similar statements, you want to end up with a smaller set of statements that is representative of the population of views that exist on your topic. Pilot these statements in a small number of Q sorts before finalising them and starting your main data collection.

The next thing to consider is from whom you are going to collect Q sorts. Participant sampling in Q methodology is similar to that of qualitative methods where you are looking to identify ‘data rich’ participants. It is not about representativeness according to demographics; instead, you want to include participants who have strong and differing views on your topic. Typically this would be around 30 to 60 people. Once you have selected your sample you can conduct your Q sorts. Here, each of your participants rank-orders the set of statements according to an instruction, for example from ‘most agree to most disagree’ or ‘highest priority to lowest priority’. At the end of each Q sort, a short interview is conducted asking participants to summarise their opinions on the Q sort and give further explanation for the placing of selected statements.
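
As a toy illustration of the data this produces (not from the original post; the grid shape, statement count, and rankings below are all hypothetical), each completed Q sort can be recorded as a mapping from statement IDs to grid positions, with a simple check that the forced quasi-normal distribution has been respected.

```python
from collections import Counter

# Hypothetical forced distribution for a 9-column grid (-4 to +4) and 25
# statements: how many statements may be placed in each column.
FORCED_DISTRIBUTION = {-4: 1, -3: 2, -2: 3, -1: 4, 0: 5, 1: 4, 2: 3, 3: 2, 4: 1}

def is_valid_q_sort(q_sort):
    """Check that a participant's Q sort fills the forced grid exactly."""
    return Counter(q_sort.values()) == Counter(FORCED_DISTRIBUTION)

# One participant's (hypothetical) Q sort: statement ID -> column position,
# where +4 = most agree and -4 = most disagree.
participant_7 = {
    1: 4, 2: 3, 3: 3, 4: 2, 5: 2, 6: 2, 7: 1, 8: 1, 9: 1, 10: 1,
    11: 0, 12: 0, 13: 0, 14: 0, 15: 0, 16: -1, 17: -1, 18: -1, 19: -1,
    20: -2, 21: -2, 22: -2, 23: -3, 24: -3, 25: -4,
}

print(is_valid_q_sort(participant_7))  # True when the forced grid is respected
```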

Stage 2: Analysis and interpretation

In the analysis stage, the aim is to identify people who have ranked their statements in a similar way. This involves calculating the correlations between the participants’ Q sorts (the full rankings of all statements) to form a correlation matrix, which is then subjected to factor analysis. The software outlined in the next section can help you with this. The factor analysis will produce a number of statistically significant solutions, and your role as the analyst is to decide how many factors to retain for interpretation. This will be an iterative process in which you consider the internal coherence of each factor – does the ranking of the statements make sense, and does it align with the comments made by participants following the Q sort? – as well as statistical considerations such as eigenvalues. The factors are idealised Q sorts, each a complete ranking of all statements, essentially representing the way a respondent who had a correlation coefficient of 1 with the factor would have ranked their statements. The final step is to provide a descriptive account of the factors, looking at the positioning of each statement in relation to the others and drawing on the post-Q-sort interviews to support and aid your interpretation.
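
For readers who like to see the mechanics, here is a minimal numpy sketch of the by-person correlation and an unrotated principal-components extraction, using randomly generated Q sorts as a stand-in for real data. Dedicated Q software (see the next section) adds factor rotation, flagging of defining sorts, and the idealised factor arrays, so treat this only as an illustration of the first analytical step.

```python
import numpy as np

# Stand-in data: rows = statements, columns = participants' Q sorts
# (grid positions such as -4 to +4). Replace with your own Q-sort data.
rng = np.random.default_rng(0)
q_sorts = rng.integers(-4, 5, size=(25, 12))  # 25 statements, 12 participants

# By-person correlation matrix: how similarly did participants rank the statements?
corr = np.corrcoef(q_sorts, rowvar=False)  # shape (12, 12)

# Unrotated principal-components extraction from the correlation matrix.
eigenvalues, eigenvectors = np.linalg.eigh(corr)
order = np.argsort(eigenvalues)[::-1]  # largest eigenvalues first
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# A rough starting rule: retain factors with an eigenvalue above 1, then judge
# their internal coherence before deciding what to keep.
n_factors = int(np.sum(eigenvalues > 1))
loadings = eigenvectors[:, :n_factors] * np.sqrt(eigenvalues[:n_factors])

print("eigenvalues:", np.round(eigenvalues[:5], 2))
print("participant loadings on retained factors:\n", np.round(loadings, 2))
```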

Software

There are a small number of software packages available to analyse your Q data, most of which are free to use. The most widely used programme is PQMethod. It is a DOS-based programme which often causes nervousness for newcomers due to the old-school black screen and the requirement to step away from the mouse, but it is actually easy to navigate once you get going and it produces all of the output you need to interpret your Q sorts. There is also the newer (and likewise free) KenQ, which is receiving good reviews and has a more up-to-date, web-based interface, but I must confess I like my old-time PQMethod. Details of all of the software packages and where to access them can be found on the Q methodology website.

Applications

Q methodology studies have been conducted with patient groups and the general public. In patient groups, the aim is often to understand their views on the type of care they receive or options for future care. Examples include the views of young people on the transition from paediatric to adult health care services and the views of dementia patients and their carers on good end of life care. The results of these types of Q studies have been used to inform the design of new interventions or to provide attributes for future preference elicitation studies.

As part of the EuroVaQ project, we have also used Q methodology to investigate the views of the general public in a range of European countries on the principles that should underlie health care resource allocation. More recently, Q methodology has been used to identify societal views on the provision of life-extending treatments for people with a terminal illness. This programme of work highlighted three viewpoints, and a connected survey found that no one viewpoint was dominant. This may help to explain why – after a number of preference elicitation studies in this area – we still cannot provide a definitive answer on whether an end of life premium exists. The survey mentioned in the end of life work used the Q2S (Q to survey) approach, a method linked to Q methodology… but that is for another blog post!


Alastair Canaway’s journal round-up for 29th January 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Is “end of life” a special case? Connecting Q with survey methods to measure societal support for views on the value of life-extending treatments. Health Economics [PubMed] Published 19th January 2018

Should end-of-life care be treated differently? A question often asked and previously discussed on this blog: findings to date are equivocal. This question is important given NICE’s End of Life Guidance for increased QALY thresholds for life-extending interventions, and additionally the Cancer Drugs Fund (CDF). This week’s round-up sees Helen Mason and colleagues attempt to inform the debate around societal support for views of end-of-life care, by trying to determine the degree of support for different views on the value of life-extending treatment. It’s always a treat to see papers grounded in qualitative research in the big health economics journals, and this month saw the use of a particularly novel mixed methods approach adding a quantitative element to their previous qualitative findings. They combined the novel (but increasingly recognisable thanks to the Glasgow team) Q methodology with survey techniques to examine the relative strength of views on end-of-life care that they had formulated in a previous Q methodology study. Their previous research had found that there are three prevalent viewpoints on the value of life-extending treatment: 1. ‘a population perspective: value for money, no special cases’, 2. ‘life is precious: valuing life-extension and patient choice’, 3. ‘valuing wider benefits and opportunity cost: the quality of life and death’. This paper used a large Q-based survey design (n=4902) to identify societal support for the three different viewpoints. Viewpoints 1 and 2 were found to be dominant, whilst there was little support for viewpoint 3. The two supported viewpoints are not complementary: they represent the ethical divide between the utilitarian with a fixed budget (view 1) and the perspective based on entitlement to healthcare (view 2, which in practice implies an expanding healthcare budget). I suspect most health economists will fall into camp number one. In terms of informing decision making, this is very helpful, yet unhelpful: there is no clear answer. It is, however, useful for decision makers in providing evidence to balance the oft-repeated ‘end of life is special’ argument, which is based solely on conjecture, and not evidence (disclosure: I have almost certainly made this argument before). Neither of the dominant viewpoints supports NICE’s End of Life Guidance or the CDF. Viewpoint 1 suggests end of life interventions should be treated the same as others, whilst viewpoint 2 suggests that treatments should be provided if the patient chooses them; it does not make end of life a special case, as this viewpoint holds that all treatments should be available if people wish to have them (and we should expand budgets accordingly). Should end of life care be treated differently? Well, it depends on who you ask.

A systematic review and meta-analysis of childhood health utilities. Medical Decision Making [PubMed] Published 7th October 2017

If you’re working on an economic evaluation of an intervention targeting children, then you are going to be thankful for this paper. The purpose of the paper was to create a compendium of utility values for childhood conditions. A systematic review was conducted which identified a whopping 26,634 papers after deduplication – sincere sympathy to those who had to do the abstract screening. Following abstract screening, data were extracted for the remaining 272 papers. In total, 3,414 utility values were included when all subgroups were considered – this covered all ICD-10 chapters relevant to child health. When considering only the ‘main study’ samples, 1,191 utility values were recorded, and these are helpfully separated by health condition and methodological characteristics. In short, the authors have successfully built a vast catalogue of child utility values (and distributions) for use in future economic evaluations. They didn’t, however, stop there: they then built on the systematic review results by conducting a meta-analysis to i) estimate health utility decrements for each condition category compared with general population health, and ii) examine how methodological factors affect child utility values. Interestingly for those conducting research in children, they found that parental proxy values were associated with an overestimation of utility values. There is a lot to unpack in this paper, and a lot of appendices and supplementary materials are included (including the Excel database for all 3,414 subsamples of health utilities). I’m sure this will be a valuable resource in future for health economics researchers working in the childhood context. As far as MSc dissertation projects go, this is a very impressive contribution.
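
The paper’s meta-analysis is more sophisticated than this (it adjusts for methodological factors via meta-regression), but the basic idea of pooling condition-specific utility decrements can be sketched with a standard random-effects (DerSimonian-Laird) estimator. The decrements and standard errors below are made up for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical utility decrements (condition minus general-population utility)
# from five studies, with their standard errors. Not data from the paper.
decrements = np.array([-0.10, -0.14, -0.08, -0.20, -0.12])
std_errors = np.array([0.03, 0.05, 0.02, 0.06, 0.04])

def dersimonian_laird(y, se):
    """Random-effects pooled estimate using the DerSimonian-Laird estimator."""
    v = se ** 2
    w = 1 / v                                # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)      # fixed-effect pooled mean
    q = np.sum(w * (y - y_fixed) ** 2)       # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)  # between-study variance
    w_re = 1 / (v + tau2)                    # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    pooled_se = np.sqrt(1 / np.sum(w_re))
    return pooled, pooled_se, tau2

pooled, pooled_se, tau2 = dersimonian_laird(decrements, std_errors)
print(f"pooled decrement = {pooled:.3f} (SE {pooled_se:.3f}), tau^2 = {tau2:.4f}")
```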

Estimating a cost-effectiveness threshold for the Spanish NHS. Health Economics [PubMed] [RePEc] Published 28th December 2017

In the UK, the cost-per-QALY threshold is long established, although whether it is the ‘correct’ value is fiercely debated. Likewise, in Spain there is a commonly cited threshold value of €30,000 per QALY with a dearth of empirical justification. This paper sought to identify a cost-per-QALY threshold for the Spanish National Health Service (SNHS) by estimating the marginal cost per QALY at which the SNHS currently operates on average. This was achieved by exploiting data on 17 regional health services between 2008 and 2012, when the health budget experienced considerable cuts due to the global economic crisis. To do so, the paper uses econometric models based on the provocative work by Claxton et al in the UK (see the full paper if you’re interested in the model specification). Variations between Spanish regions over time allowed the authors to estimate the impact of health spending on outcomes (measured as quality-adjusted life expectancy); this was then translated into a cost-per-QALY value for the SNHS. The headline figures derived from the analysis give a threshold between €22,000 and €25,000 per QALY. This is substantially below the commonly cited threshold of €30,000 per QALY. There are, however (as is to be expected), various limitations acknowledged by the authors, which means we should not take this threshold as set in stone. However, unlike the status quo, there is empirical evidence backing this threshold, and it should stimulate further research and discussion about whether such a change should be implemented.
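
To make the underlying logic concrete, here is a deliberately simplified sketch, not the paper’s specification (which uses panel-data methods, controls and instruments for spending): regress a health outcome on per-capita spending across regions and years, and take the inverse of the marginal effect as the cost of producing one extra QALY. All data below are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated panel: per-capita health spending (EUR) and quality-adjusted life
# expectancy (QALE, years) for 17 regions over 5 years. Purely illustrative.
n_obs = 17 * 5
spending = rng.normal(1_400, 300, n_obs)  # EUR per capita
true_effect = 1 / 24_000                  # assumed QALYs gained per extra EUR
qale = 70 + true_effect * spending + rng.normal(0, 0.01, n_obs)

# Simple OLS of the outcome on spending (only the core identity; the paper's
# models are considerably richer).
X = np.column_stack([np.ones(n_obs), spending])
beta, *_ = np.linalg.lstsq(X, qale, rcond=None)
marginal_qalys_per_euro = beta[1]

# The threshold is the marginal cost of producing one QALY: the inverse slope.
threshold = 1 / marginal_qalys_per_euro
print(f"Estimated marginal cost per QALY: ~EUR {threshold:,.0f}")
```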
