Method of the month: Q methodology

Once a month we discuss a particular research method that may be of interest to people working in health economics. We’ll consider widely used key methodologies, as well as more novel approaches. Our reviews are not designed to be comprehensive but provide an introduction to the method, its underlying principles, some applied examples, and where to find out more. If you’d like to write a post for this series, get in touch. This month’s method is Q methodology.

Principles

There are many situations in which we might be interested in people's views, opinions or beliefs about an issue, such as how we allocate health care resources or the type of care we provide to dementia patients. Typically, health economists might reach for qualitative methods or preference elicitation techniques, but Q methodology could be your new method for examining these questions. Q methodology combines qualitative and quantitative techniques, allowing us first to identify the range of views that exist on a topic and then to describe those viewpoints in depth.

Q methodology was conceived by William Stephenson as a way to study subjectivity and is detailed in his 1953 book The Study of Behavior: Q-Technique and Its Methodology. A more widely available book by Watts and Stenner (2012) provides a great general introduction to all stages of a Q study, and the paper by Baker et al (2006) introduces Q methodology in health economics.

Implementation

There are two main stages in a Q methodology study. In the first stage, participants express their views through the rank-ordering of a set of statements known as the Q sort. The second stage uses factor analysis to identify patterns of similarity between the Q sorts, which can then be described in detail.

Stage 1: Developing the statements and Q sorting

The most important part of any Q study is the development of the statements that your participants will rank-order. The starting point is to identify all of the possible views on your topic. Participants should be able to interpret the statements as opinions rather than facts, for example, “The amount of health care people have had in the past should not influence access to treatments in the future”. The statements can come from a range of sources including interview transcripts, public consultations, academic literature, newspapers and social media. Through a process of eliminating duplicates and merging or deleting similar statements, you want to end up with a smaller set of statements that is representative of the population of views that exist on your topic. Pilot these statements in a small number of Q sorts before finalising them and starting your main data collection.

The next thing to consider is from whom you are going to collect Q sorts. Participant sampling in Q methodology is similar to that of qualitative methods, where you are looking to identify ‘data-rich’ participants. It is not about representativeness according to demographics; instead, you want to include participants who have strong and differing views on your topic. Typically this would be around 30 to 60 people. Once you have selected your sample you can conduct your Q sorts. Here, each of your participants rank-orders the set of statements according to an instruction, for example from ‘most agree to most disagree’ or ‘highest priority to lowest priority’. At the end of each Q sort, a short interview is conducted asking participants to summarise their opinions on the Q sort and give further explanation for the placing of selected statements.

Stage 2: Analysis and interpretation

In the analysis stage, the aim is to identify people who have ranked their statements in a similar way. This involves calculating the correlations between the participants’ Q sorts (the full ranking of all statements) to form a correlation matrix, which is then subject to factor analysis. The software outlined in the next section can help you with this. The factor analysis will produce a number of statistically significant solutions, and your role as the analyst is to decide how many factors to retain for interpretation. This is an iterative process in which you weigh the internal coherence of each factor (does the ranking of the statements make sense, and does it align with the comments participants made after the Q sort?) against statistical considerations such as eigenvalues. The factors are idealised Q sorts: complete rankings of all statements, essentially representing how a respondent who had a correlation coefficient of 1 with the factor would have ranked their statements. The final step is to provide a descriptive account of the factors, looking at the positioning of each statement in relation to the other statements and drawing on the post-Q-sort interviews to support and aid your interpretation.
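The correlation and factor-extraction steps can be sketched in a few lines. This is a toy illustration using made-up rankings and plain principal components; it is not the centroid extraction that dedicated Q software such as PQMethod typically performs:

```python
import numpy as np

# Hypothetical data: 5 participants each rank-order 10 statements.
# Columns are participants, rows are statements; each column is that
# participant's ranking (0 = lowest grid position ... 9 = highest).
rng = np.random.default_rng(0)
q_sorts = rng.permuted(np.tile(np.arange(10), (5, 1)), axis=1).T  # 10 x 5

# Step 1: correlate every pair of Q sorts (a by-person correlation
# matrix, in contrast to the by-variable correlations of R methodology).
corr = np.corrcoef(q_sorts, rowvar=False)  # 5 x 5

# Step 2: extract factors; principal components stand in here.
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep factors with eigenvalue > 1, one common statistical yardstick
# (used alongside the more qualitative judgements described above).
retained = eigvals > 1.0
loadings = eigvecs[:, retained] * np.sqrt(eigvals[retained])

# Step 3: an idealised Q sort for a factor is a loading-weighted blend
# of the participants' sorts, converted back into a complete ranking.
composite = q_sorts @ loadings[:, 0]
idealised = np.argsort(np.argsort(composite))
print(idealised)  # a full ranking of the 10 statements
```

In a real study the retained factors would also be rotated (varimax or judgemental rotation) before the idealised sorts are interpreted.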

Software

There are a small number of software packages available for analysing your Q data, most of which are free to use. The most widely used program is PQMethod. It is a DOS-based program which often causes nervousness for newcomers, due to the old-school black screen and the requirement to step away from the mouse, but it is actually easy to navigate once you get going, and it produces all of the output you need to interpret your Q sorts. The newer (and also free) KenQ is receiving good reviews and has a more up-to-date web-based interface, but I must confess I like my old-time PQMethod. Details of all the software and where to access it can be found on the Q methodology website.

Applications

Q methodology studies have been conducted with patient groups and the general public. In patient groups, the aim is often to understand their views on the type of care they receive or options for future care. Examples include the views of young people on the transition from paediatric to adult health care services and the views of dementia patients and their carers on good end of life care. The results of these types of Q studies have been used to inform the design of new interventions or to provide attributes for future preference elicitation studies.

We have also used Q methodology to investigate the views of the general public in a range of European countries on the principles that should underlie health care resource allocation, as part of the EuroVaQ project. More recently, Q methodology has been used to identify societal views on the provision of life-extending treatments for people with a terminal illness. This programme of work highlighted three viewpoints, and a connected survey found that no single viewpoint was dominant. This may help to explain why – after a number of preference elicitation studies in this area – we still cannot provide a definitive answer on whether an end of life premium exists. The survey mentioned in the end of life work refers to the Q2S (Q to survey) approach, which is a linked method to Q methodology… but that is for another blog post!

Alastair Canaway’s journal round-up for 29th January 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Is “end of life” a special case? Connecting Q with survey methods to measure societal support for views on the value of life-extending treatments. Health Economics [PubMed] Published 19th January 2018

Should end-of-life care be treated differently? A question often asked and previously discussed on this blog: findings to date are equivocal. This question is important given NICE’s End-of-Life Guidance for increased QALY thresholds for life-extending interventions, and additionally the Cancer Drugs Fund (CDF). This week’s round-up sees Helen Mason and colleagues attempt to inform the debate around societal support for views of end-of-life care, by trying to determine the degree of support for different views on the value of life-extending treatment. It’s always a treat to see papers grounded in qualitative research in the big health economics journals, and this month saw the use of a particularly novel mixed-methods approach adding a quantitative element to their previous qualitative findings. They combined the novel (but increasingly recognisable thanks to the Glasgow team) Q methodology with survey techniques to examine the relative strength of views on end-of-life care that they had formulated in a previous Q methodology study. Their previous research had found that there are three prevalent viewpoints on the value of life-extending treatment: 1. ‘a population perspective: value for money, no special cases’, 2. ‘life is precious: valuing life-extension and patient choice’, 3. ‘valuing wider benefits and opportunity cost: the quality of life and death’. This paper used a large Q-based survey design (n=4902) to identify societal support for the three different viewpoints. Viewpoints 1 and 2 were found to be dominant, whilst there was little support for viewpoint 3. The two supported viewpoints are not complementary: they represent the ethical divide between the utilitarian with a fixed budget (view 1) and the perspective based on entitlement to healthcare (view 2, which in practice implies an expanding healthcare budget). I suspect most health economists will fall into camp number one.
In terms of informing decision making, this is very helpful, yet unhelpful: there is no clear answer. It is, however, useful for decision makers in providing evidence to balance the oft-repeated ‘end of life is special’ argument based solely on conjecture rather than evidence (disclosure: I have almost certainly made this argument before). Neither of the dominant viewpoints supports NICE’s End-of-Life Guidance or the CDF. Viewpoint 1 suggests end of life interventions should be treated the same as others, whilst viewpoint 2 suggests that treatments should be provided if the patient chooses them; it does not make end of life a special case, as this viewpoint holds that all treatments should be available if people wish to have them (and that we should expand budgets accordingly). Should end of life care be treated differently? Well, it depends on who you ask.

A systematic review and meta-analysis of childhood health utilities. Medical Decision Making [PubMed] Published 7th October 2017

If you’re working on an economic evaluation of an intervention targeting children, then you are going to be thankful for this paper. The purpose of the paper was to create a compendium of utility values for childhood conditions. A systematic review was conducted which identified a whopping 26,634 papers after deduplication – sincere sympathy to those who had to do the abstract screening. Following abstract screening, data were extracted for the remaining 272 papers. In total, 3,414 utility values were included when all subgroups were considered – this covered all ICD-10 chapters relevant to child health. When considering only the ‘main study’ samples, 1,191 utility values were recorded, and these are helpfully separated by health condition and methodological characteristics. In short, the authors have successfully built a vast catalogue of child utility values (and distributions) for use in future economic evaluations. They didn’t, however, stop there: they then built on the systematic review results by conducting a meta-analysis to i) estimate health utility decrements for each condition category compared to general population health, and ii) examine how methodological factors affect child utility values. Interestingly for those conducting research in children, they found that parental proxy values were associated with an overestimation of values. There is a lot to unpack in this paper, and a lot of appendices and supplementary materials are included (including the Excel database for all 3,414 subsamples of health utilities). I’m sure this will be a valuable resource in future for health economics researchers working in the childhood context. As far as MSc dissertation projects go, this is a very impressive contribution.
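To make the pooling idea concrete, here is a minimal fixed-effect inverse-variance sketch with invented utility values and standard errors; the paper’s meta-analysis is considerably more elaborate (handling subgroups and methodological covariates), so treat this only as the basic mechanics:

```python
import numpy as np

# Hypothetical inputs: mean utility and standard error from three studies
# of the same childhood condition, plus a general-population norm.
means = np.array([0.78, 0.82, 0.75])
ses = np.array([0.03, 0.05, 0.04])
population_norm = 0.92  # invented norm for illustration

# Fixed-effect inverse-variance pooling: precise studies get more weight.
weights = 1 / ses**2
pooled = np.sum(weights * means) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))

# The utility decrement is the shortfall relative to the population norm.
decrement = population_norm - pooled
print(f"pooled utility {pooled:.3f} (SE {pooled_se:.3f}), "
      f"decrement vs norm {decrement:.3f}")
```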

Estimating a cost-effectiveness threshold for the Spanish NHS. Health Economics [PubMed] [RePEc] Published 28th December 2017

In the UK, the cost-per-QALY threshold is long established, although whether it is the ‘correct’ value is fiercely debated. Likewise, in Spain there is a commonly cited threshold of €30,000 per QALY, with a dearth of empirical justification. This paper sought to identify a cost-per-QALY threshold for the Spanish National Health Service (SNHS) by estimating the marginal cost per QALY at which the SNHS currently operates on average. This was achieved by exploiting data on 17 regional health services between 2008 and 2012, when the health budget experienced considerable cuts due to the global economic crisis. To do so, the paper uses econometric models based on the thought-provoking work by Claxton et al in the UK (see the full paper if you’re interested in the model specification). Variation between Spanish regions over time allowed the authors to estimate the impact of health spending on outcomes (measured as quality-adjusted life expectancy); this was then translated into a cost-per-QALY value for the SNHS. The headline figures derived from the analysis give a threshold of between €22,000 and €25,000 per QALY, substantially below the commonly cited €30,000 per QALY. There are, however (as is to be expected), various limitations acknowledged by the authors, which means we should not take this threshold as set in stone. Unlike the status quo, though, there is empirical evidence backing this threshold, and it should stimulate further research and discussion about whether such a change should be implemented.
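Stripped of the econometric machinery, the underlying logic can be sketched on simulated data. The figures and functional form below are invented for illustration and are not the paper’s specification (which addresses endogeneity and uses observed regional quality-adjusted life expectancy):

```python
import numpy as np

# Simulated panel: 17 regions observed over 5 years (cf. 2008-2012).
rng = np.random.default_rng(1)
n_regions, n_years = 17, 5
spend = rng.uniform(1200, 1800, (n_regions, n_years))  # EUR per head
true_elasticity = 0.05  # invented spending elasticity of health

# Outcomes (QALYs per head) respond to spending, plus noise.
qaly = (0.8 * spend**true_elasticity
        * rng.lognormal(0, 0.005, (n_regions, n_years)))

# Pooled OLS of log outcomes on log spending recovers the elasticity.
x, y = np.log(spend).ravel(), np.log(qaly).ravel()
X = np.column_stack([np.ones_like(x), x])
intercept, elasticity = np.linalg.lstsq(X, y, rcond=None)[0]

# With Q = a * C^e, the marginal cost per QALY is dC/dQ = (C / Q) / e,
# evaluated here at the sample means: this is the implied threshold.
threshold = spend.mean() / qaly.mean() / elasticity
print(f"elasticity ~{elasticity:.3f}, cost per QALY ~EUR {threshold:,.0f}")
```

The point of the exercise is that small spending elasticities translate into large cost-per-QALY figures, which is why precise estimation of that elasticity matters so much for the headline threshold.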

Meeting round-up: Health Economists’ Study Group (HESG) Winter 2018

Last week’s biannual intellectual knees-up for UK health economists took place at City, University of London. We’ve written before about HESG, but if you need a reminder of the format you can read Lucy Abel’s blog post on the subject. This was the first HESG I’ve been to in a while that took place in an actual university building.

The conference kicked off for me with my colleague Grace Hampson‘s first ever HESG discussion. It was an excellent discussion of Toby Watt‘s paper on the impact of price promotions for cola, in terms of quantities purchased (they increase) and – by extension – sugar consumption. It was a nice paper with a clear theoretical framework and empirical strategy, which generated a busy discussion. Nutrition is a subject that I haven’t seen represented much at past HESG meetings, but there were several on the schedule this time around with other papers by Jonathan James and Ben Gershlick. I expect it’s something we’ll see becoming more prevalent as policymaking becomes more insistent.

The second and third sessions I attended were on the relationship between health and social care, which is a pressing matter in the UK, particularly with regard to achieving integrated care. Ben Zaranko‘s paper considered substitution effects arising from changes in the relative budgets of health and social care. Jonathan Stokes and colleagues attempted to identify whether the Better Care Fund has achieved its goal of reducing secondary care use. That paper received a blazing discussion from Andrew Street, which sparked an insightful debate in the room.

A recurring theme in many sessions was the challenge of communicating with local decision-makers, and the apparent difficulty in working without a reference case to fall back on (such as that of NICE). This is something that I have heard regularly discussed at least since the Winter 2016 meeting in Manchester. At City, this was most clearly discussed in Emma Frew‘s paper describing the researchers’ experiences working with local government. Qualitative research has clearly broken through at HESG, including Emma’s paper and a study by Hareth Al-Janabi on the subject of treatment spillovers on family carers.

I also saw a few papers that related primarily to matters of research conduct and publishing. Charitini Stavropoulou‘s paper explored whether highly-cited researchers are more likely to receive public funding, while the paper I chaired by Anum Shaikh explored the potential for recycling cost-effectiveness models. The latter was a joy for me, with much discussion of model registries!

There were plenty of papers that satisfied my own particular research interests. Right up my research street was Mauro Laudicella‘s paper, which used real-world data to assess the cost savings associated with redirecting cancer diagnoses to GP referral rather than emergency presentation. I wasn’t quite as optimistic about the potential savings, with the standard worries about lead time bias and selection effects. But it was a great paper nonetheless. Also using real-world evidence was Ewan Gray‘s study, which supported the provision of adjuvant chemotherapy for early stage breast cancer but delivered some perplexing findings about patient-GP decision-making. Ewan’s paper explored technical methodological challenges, though the prize for the most intellectually challenging paper undoubtedly goes to Manuel Gomes, who continued his crusade to make health economists better at dealing with missing data – this time for the case of quality of life data. Milad Karimi‘s paper asked whether preferences over health states are informed. This is the kind of work I enjoy thinking about – whether measures like the EQ-5D capture what really matters and how we might do better.

As usual, many delegates worked hard and played hard. I took a beating from the schedule at this HESG, with my discussion taking place during the first session after the conference dinner (where we walked in the footsteps of the Spice Girls) and my chairing responsibilities falling on the last session of the last day. But in both cases, the audience was impressive.

I’ll leave the final thought for the blog post with Peter Smith’s plenary, which considered the role of health economists in a post-truth world. Happily, for me, Peter’s ideas chimed with my own view that we ought to be taking our message to the man on the Clapham omnibus and supporting public debate. Perhaps our focus on (national) policymakers is too strong. If not explicit, this was a theme that could be seen throughout the meeting, whether it be around broader engagement with stakeholders, recognising local decision-making processes, or harnessing the value of storytelling through qualitative research. HESG members are STRETCHing the truth.
