Method of the month: Q methodology

Once a month we discuss a particular research method that may be of interest to people working in health economics. We’ll consider widely used key methodologies, as well as more novel approaches. Our reviews are not designed to be comprehensive but provide an introduction to the method, its underlying principles, some applied examples, and where to find out more. If you’d like to write a post for this series, get in touch. This month’s method is Q methodology.

Principles

There are many situations in which we might be interested in people's views, opinions or beliefs about an issue, such as how we allocate health care resources or the type of care we provide to dementia patients. Typically, health economists might reach for qualitative methods or preference elicitation techniques, but Q methodology could be your new method for examining these questions. Q methodology combines qualitative and quantitative techniques, allowing us first to identify the range of views that exist on a topic and then to describe those viewpoints in depth.

Q methodology was conceived by William Stephenson as a way to study subjectivity and is detailed in his 1953 book The Study of Behavior. A more widely available book by Watts and Stenner (2012) provides a great general introduction to all stages of a Q study, and the paper by Baker et al. (2006) introduces Q methodology in health economics.

Implementation

There are two main stages in a Q methodology study. In the first stage, participants express their views through the rank-ordering of a set of statements known as the Q sort. The second stage uses factor analysis to identify patterns of similarity between the Q sorts, which can then be described in detail.

Stage 1: Developing the statements and Q sorting

The most important part of any Q study is the development of the statements that your participants will rank-order. The starting point is to identify all of the possible views on your topic. Participants should be able to interpret the statements as opinions rather than facts, for example, “The amount of health care people have had in the past should not influence access to treatments in the future”. The statements can come from a range of sources, including interview transcripts, public consultations, academic literature, newspapers and social media. Through a process of eliminating duplicates and merging or deleting similar statements, you want to end up with a smaller set of statements that is representative of the population of views on your topic. Pilot these statements in a small number of Q sorts before finalising them and starting your main data collection.

The next thing to consider is from whom you are going to collect Q sorts. Participant sampling in Q methodology is similar to that of qualitative methods where you are looking to identify ‘data rich’ participants. It is not about representativeness according to demographics; instead, you want to include participants who have strong and differing views on your topic. Typically this would be around 30 to 60 people. Once you have selected your sample you can conduct your Q sorts. Here, each of your participants rank-orders the set of statements according to an instruction, for example from ‘most agree to most disagree’ or ‘highest priority to lowest priority’. At the end of each Q sort, a short interview is conducted asking participants to summarise their opinions on the Q sort and give further explanation for the placing of selected statements.

Stage 2: Analysis and interpretation

In the analysis stage, the aim is to identify people who have ranked their statements in a similar way. This involves calculating the correlations between the participants' Q sorts (the full ranking of all statements) to form a correlation matrix, which is then subject to factor analysis. The software outlined in the next section can help you with this. The factor analysis will produce a number of statistically significant solutions, and your role as the analyst is to decide how many factors to retain for interpretation. This is an iterative process in which you weigh the internal coherence of each factor (does the ranking of the statements make sense, and does it align with the comments made by participants following the Q sort?) against statistical considerations such as eigenvalues. The factors are idealised Q sorts: complete rankings of all statements, essentially representing how a respondent who had a correlation coefficient of 1 with the factor would have ranked their statements. The final step is to provide a descriptive account of the factors, looking at the positioning of each statement in relation to the other statements and drawing on the post-Q-sort interviews to support and aid your interpretation.
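As a rough sketch of this stage: the snippet below simulates a handful of Q sorts, correlates them, and extracts factors. The data are invented, and a plain eigendecomposition of the correlation matrix stands in for the centroid or principal components extraction that dedicated Q software performs.

```python
import numpy as np

# Hypothetical data: 5 participants each rank-order 10 statements into a
# quasi-normal grid (-2 = most disagree ... +2 = most agree). Each row is
# one participant's Q sort.
grid = np.array([-2, -1, -1, 0, 0, 0, 0, 1, 1, 2])
rng = np.random.default_rng(0)
q_sorts = rng.permuted(np.tile(grid, (5, 1)), axis=1)

# Correlate each pair of participants' full rankings.
corr = np.corrcoef(q_sorts)

# Factor-analyse the correlation matrix (eigendecomposition here, as a
# stand-in for the extraction methods used by Q software).
eigenvalues, loadings = np.linalg.eigh(corr)
order = np.argsort(eigenvalues)[::-1]
eigenvalues, loadings = eigenvalues[order], loadings[:, order]

# One common (if crude) retention rule: keep factors with eigenvalue > 1.
# In practice you would weigh this against the interpretability of each
# factor, as described above.
retained = int(np.sum(eigenvalues > 1))
print(f"Candidate factors with eigenvalue > 1: {retained}")
```

The participant loadings on each retained factor are then used to build the idealised Q sorts that you go on to interpret.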

Software

There are a small number of software packages available to analyse your Q data, most of which are free to use. The most widely used programme is PQMethod. It is a DOS-based programme which often causes nervousness for newcomers due to the old-school black screen and the requirement to step away from the mouse, but it is actually easy to navigate once you get going, and it produces all of the output you need to interpret your Q sorts. There is the newer (and also free) KenQ, which is receiving good reviews and has a more up-to-date web-based interface, but I must confess I like my old-time PQMethod. Details of all of the software, and where to access it, can be found on the Q methodology website.

Applications

Q methodology studies have been conducted with patient groups and the general public. In patient groups, the aim is often to understand their views on the type of care they receive or options for future care. Examples include the views of young people on the transition from paediatric to adult health care services and the views of dementia patients and their carers on good end of life care. The results of these types of Q studies have been used to inform the design of new interventions or to provide attributes for future preference elicitation studies.

We have also used Q methodology to investigate the views of the general public in a range of European countries on the principles that should underlie health care resource allocation, as part of the EuroVaQ project. More recently, Q methodology has been used to identify societal views on the provision of life-extending treatments for people with a terminal illness. This programme of work highlighted three viewpoints, and a connected survey found that there was not one dominant viewpoint. This may help to explain why – after a number of preference elicitation studies in this area – we still cannot provide a definitive answer on whether an end of life premium exists. The survey mentioned in the end of life work refers to the Q2S (Q to survey) approach, which is a linked method to Q methodology… but that is for another blog post!

Thesis Thursday: Koonal Shah

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Koonal Shah who has a PhD from the University of Sheffield. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Valuing health at the end of life
Supervisors
Aki Tsuchiya, Allan Wailoo
Repository link
http://etheses.whiterose.ac.uk/17579

What were the key questions you wanted to answer with your research?

My key research question was: Do members of the general public wish to place greater weight on a unit of health gain for end of life patients than on that for other types of patients? Or put more concisely: Is there evidence of public support for an end of life premium?

The research question was motivated by a policy introduced by NICE in 2009 [PDF], which effectively gives special weighting to health gains generated by life-extending end of life treatments. This represents an explicit departure from the Institute’s reference case position that all equal-sized health gains are of equal social value (the ‘a QALY is a QALY’ rule). NICE’s policy was justified in part by claims that it represented the preferences of society, but little evidence was available to either support or refute that premise. It was this gap in the evidence that inspired my research question.

I also sought to answer other questions, such as whether the focus on life extensions (rather than quality of life improvements) in NICE’s policy is consistent with public preferences, and whether people’s stated end of life-related preferences depend on the ways in which the preference elicitation tasks are designed, framed and presented.

Which methodologies did you use to elicit people’s preferences?

All four of my empirical studies used hypothetical choice exercises to elicit preferences from samples of the UK general public. NICE’s policy was used as the framework for the designs in each case. Three of the studies can be described as having used simple choice tasks, while one study specifically applied the discrete choice experiment methodology. The general approach was to ask survey respondents which of two hypothetical patients they thought should be treated, assuming that the health service had only enough funds to treat one of them.

In my final study, which focused on framing effects and study design considerations, I included attitudinal questions with Likert item responses alongside the hypothetical choice tasks. The rationale for including these questions was to examine the consistency of respondents’ views across two different approaches (spoiler: most people are not very consistent).

Your study included face-to-face interviews. Did these provide you with information that you weren’t able to obtain from a more general survey?

The surveys in my first two empirical studies were both administered via face-to-face interviews. In the first study, I conducted the interviews myself, while in the second study the interviews were subcontracted to a market research agency. I also conducted a small number of face-to-face interviews when pilot testing early versions of the surveys for my third and fourth studies. The piloting process was useful as it provided me with first-hand information about which aspects of the surveys did and did not work well when administered in practice. It also gave me a sense of how appropriate my questions were. The subject matter – prioritising between patients described as having terminal illnesses and poor prognoses – had the potential to be distressing for some people. My view was that I shouldn’t be including questions that I did not feel comfortable asking strangers in an interview setting.

The use of face-to-face interviews was particularly valuable in my first study as it allowed me to ask debrief questions designed to probe respondents and elicit qualitative information about the thinking behind their responses.

What factors influence people’s preferences for allocating health care resources at the end of life?

My research suggests that people’s preferences regarding the value of end of life treatments can depend on whether the treatment is life-extending or quality of life-improving. This is noteworthy because NICE’s end of life criteria accommodate life extensions but not quality of life improvements.

I also found that the amount of time that end of life patients have to ‘prepare for death’ was a consideration for a number of respondents. Some of my results suggest that observed preferences for prioritising the treatment of end of life patients may be driven by concern about how long the patients have known their prognosis rather than by concern about how long they have left to live, per se.

The wider literature suggests that the age of the end of life patients (which may act as a proxy for their role in their household or in society) may also matter. Some studies have reported evidence that respondents become less concerned about the number of remaining life years when the patients in question are relatively old. This is consistent with the ‘fair innings’ argument proposed by Alan Williams.

Given the findings of your study, are there any circumstances under which you would support an end of life premium?

My findings offer limited support for an end of life premium (though it should be noted that the wider literature is more equivocal). So it might be considered appropriate for NICE to abandon its end of life policy on the grounds that the population health losses that arise due to the policy are not justified by the evidence on societal preferences. However, there may be arguments for retaining some form of end of life weighting irrespective of societal preferences. For example, if the standard QALY approach systematically underestimates the benefits of end of life treatments, it may be appropriate to correct for this (though whether this is actually the case would itself need investigating).

Many studies reporting that people wish to prioritise the treatment of the severely ill have described severity in terms of quality of life rather than life expectancy. And some of my results suggest that support for an end of life premium would be stronger if it applied to quality of life-improving treatments. This suggests that weighting QALYs in accordance with continuous variables capturing quality of life as well as life expectancy may be more consistent with public preferences than the current practice of applying binary cut-offs based only on life expectancy information, and would address some of the criticisms of the arbitrariness of NICE’s policy.

Alastair Canaway’s journal round-up for 20th February 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The estimation and inclusion of presenteeism costs in applied economic evaluation: a systematic review. Value in Health Published 30th January 2017

Presenteeism is one of those issues that you hear about from time to time but rarely see addressed within economic evaluations. For those who haven’t come across it before, presenteeism refers to being at work but not working at full capacity, for example, due to your health limiting your ability to work. The literature suggests that presenteeism can have large associated costs which could significantly impact economic evaluations, and so it should be considered. These impacts are rarely captured in practice. This paper sought to identify studies where presenteeism costs were included, to examine how valuation was approached, and to gauge the impact of including presenteeism on costs. The review included cost of illness studies as well as economic evaluations; just 28 papers had attempted to capture the costs of presenteeism, across a wide variety of disease areas. A range of methods was used. Across all studies, presenteeism costs accounted for 52% (range: 19%–85%) of the total costs relating to the intervention and disease. This is a vast proportion and significantly outweighed absenteeism costs. Presenteeism is clearly a significant issue, yet widely ignored within economic evaluation. This may be due in part to the health and social care perspective advised within the NICE reference case, compounded by the lack of guidance on how to measure and value productivity costs. Should an economic evaluation pursue a societal perspective, the findings suggest that capturing and valuing presenteeism costs should be a priority.
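As a toy illustration of why presenteeism can outweigh absenteeism in a costing exercise, consider a wage-based calculation. All figures below are invented, and the wage-based approach is only one of several valuation methods used in this literature:

```python
# Invented figures: a health problem causing a few days off work but many
# more days worked at reduced capacity.
daily_wage = 120.0        # assumed gross daily wage (GBP)
absent_days = 4           # days off work (absenteeism)
presentee_days = 30       # days at work while unwell (presenteeism)
productivity_loss = 0.25  # assumed fraction of output lost on those days

# Absenteeism: the full day's output is lost.
absenteeism_cost = absent_days * daily_wage

# Presenteeism: only a fraction of each day's output is lost, but over
# many more days, so the total can easily exceed the absenteeism cost.
presenteeism_cost = presentee_days * daily_wage * productivity_loss

print(absenteeism_cost, presenteeism_cost)  # 480.0 900.0
```

Even with a modest per-day productivity loss, the presenteeism cost here is nearly double the absenteeism cost, which is the pattern the review reports.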

Priority to end of life treatments? Views of the public in the Netherlands. Value in Health Published 5th January 2017

Everybody dies, and thus end of life care is probably something that we should all have at least a passing interest in. The end of life context is an incredibly tricky research area, with methodological pitfalls at every turn. End of life care is often seen as ‘different’ to other care, and this is reflected in NICE having supplementary guidance for the appraisal of end of life interventions. Similarly, in the Netherlands, treatments that do not meet typical cost per QALY thresholds may be provided should public support be sufficient. There is, however, a dearth of such evidence, and this paper sought to elucidate the issue using the novel Q methodology. Three primary viewpoints emerged: 1) access to healthcare as a human right – all have equal rights regardless of setting; that is, nobody is more important. This first group appeared to reject the notion of scarce resources when it comes to health: ‘you can’t put a price on life’. 2) The second group focussed on providing the ‘right’ care for those with terminal illness and emphasised that quality of life should be respected and unnecessary care at end of life avoided. This group did not place great importance on cost-effectiveness but did acknowledge that costly treatments at end of life might not be the best use of money. 3) Finally, the third group felt there should be a focus on care which is effective and efficient; that is, those treatments which generate the most health should be prioritised. There was a consensus across all three groups that the ultimate goal of the health system is to generate the greatest overall health benefit for the population. This rejects the notion that priority should be given to those at end of life, and the study concludes that across the three groups there was minimal support for treating the terminally ill with priority.

Methodological issues surrounding the use of baseline health-related quality of life data to inform trial-based economic evaluations of interventions within emergency and critical care settings: a systematic literature review. PharmacoEconomics [PubMed] Published 6th January 2017

Catchy title. Conducting research within emergency and critical settings presents a number of unique challenges. For the health economist seeking to conduct a trial-based economic evaluation, one such issue relates to the calculation of QALYs. To calculate QALYs within a trial, baseline and follow-up data are required. For obvious reasons – severe and acute injuries/illness, unplanned admission – collecting baseline data on those entering emergency and critical care is problematic. Even when patients are conscious, there are ethical issues surrounding collecting baseline data in this setting: the example used is somebody who is conscious after a cardiac arrest – is it appropriate to ask them to complete HRQL questionnaires? Probably not. Various methods have been used to circumvent this issue; this paper sought to systematically review the methods that have been used and to provide guidance for future studies. Just 19 studies made it through screening, highlighting the difficulty of research in this context. Just one study prospectively collected baseline HRQL data, and this was restricted to patients in a non-life-threatening state. Four different strategies were adopted in the remaining papers: eight studies adopted a fixed health utility for all participants at baseline; four used only the available data, that is, from the first time point where HRQL was measured; one asked patients to retrospectively recall their baseline state; and one used Delphi methods to derive EQ-5D states from experts. The paper examines the implications and limitations of adopting each of these strategies. The key finding seems to relate to whether or not the trial arms are balanced with respect to HRQL at baseline. This obviously isn’t observed; the authors suggest trial covariates should instead be used to explore this, and adjustments made where applicable. If, and that’s a big if, trial arms are balanced, then all four of the suggested methods should give similar answers. The key here is randomisation; however, even the best randomisation techniques do not always lead to balanced arms, and there is no guarantee of baseline balance. The authors conclude that trials should aim to make an initial assessment of HRQL at the earliest opportunity and that further research is required to thoroughly examine how the different approaches impact cost-effectiveness results.
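To make the trial-based QALY calculation, and the fixed-baseline strategy, concrete, here is a minimal sketch. All utility values and time points are invented, and assigning a fixed baseline utility of 0 to every participant is just one of the strategies the review describes:

```python
def qaly_auc(times_years, utilities):
    """QALYs as the area under the utility curve (trapezium rule)."""
    total = 0.0
    for i in range(1, len(times_years)):
        total += ((utilities[i] + utilities[i - 1]) / 2
                  * (times_years[i] - times_years[i - 1]))
    return total

# One patient, followed up at 0, 3, 6 and 12 months (in years).
times = [0.0, 0.25, 0.5, 1.0]
observed = [0.45, 0.62, 0.70]  # utilities observed from 3 months onwards

# Strategy: impute the same fixed baseline utility for every participant,
# since no baseline HRQL could be collected on admission.
fixed_baseline = 0.0
qalys = qaly_auc(times, [fixed_baseline] + observed)
print(round(qalys, 3))  # 0.52
```

The choice of imputed baseline shifts every participant's QALY total by the same amount within an arm, which is why the between-arm baseline balance discussed above is what ultimately drives the cost-effectiveness results.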
