Chris Sampson’s journal round-up for 19th March 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Using HTA and guideline development as a tool for research priority setting the NICE way: reducing research waste by identifying the right research to fund. BMJ Open [PubMed] Published 8th March 2018

As well as the cost-effectiveness of health care, economists are increasingly concerned with the cost-effectiveness of health research. This makes sense, given that both are usually publicly funded and so spending on one (in principle) limits spending on the other. NICE exists in part to prevent waste in the provision of health care – seeking to maximise benefit. In this paper, the authors (all current or ex-employees of NICE) consider the extent to which NICE processes can also be used to prevent waste in health research. The study focuses on the processes underlying NICE guideline development and HTA, and the work of NICE's Science Policy and Research (SP&R) programme. Through systematic review and (sometimes) economic modelling, NICE guidelines identify research needs, and NICE works with the National Institute for Health Research to get their recommended research commissioned, with some research fast-tracked as 'NICE Key Priorities'. Sometimes, it's also necessary to prioritise research into methodological development, and NICE has conducted reviews to address this, with the Internal Research Advisory Group established to ensure that methodological research is commissioned. The paper also highlights the roles of other groups, such as the Decision Support Unit, Technical Support Unit and External Assessment Centres. This paper is useful for two reasons. First, it gives a clear and concise explanation of NICE's processes with respect to research prioritisation and maps out the working groups involved. This will provide researchers with an understanding of how their work fits into this process. Second, the paper highlights NICE's current research priorities and provides insight into how these develop. This could be helpful to researchers looking to develop new ideas and proposals that will align with NICE's priorities.

The impact of the minimum wage on health. International Journal of Health Economics and Management [PubMed] Published 7th March 2018

The minimum wage is one of those policies that is so far-reaching, and with such ambiguous implications for different people, that research into its impact can deliver dramatically different conclusions. This study uses American data and takes advantage of the fact that different states have different minimum wage levels. The authors try to look at a broad range of mechanisms by which the minimum wage can affect health. A major focus is on risky health behaviours. The study uses data from the Behavioral Risk Factor Surveillance System, which includes around 300,000 respondents per year across all states. Relevant variables from these data characterise smoking, drinking, and fruit and vegetable consumption, as well as obesity. There are also indicators of health care access and self-reported health. The authors cut their sample to include 21-64-year-olds with no more than a high school degree. Difference-in-differences models are estimated by OLS, exploiting individual states' minimum wage changes. As is often the case for minimum wage studies, the authors find several non-significant effects: smoking and drinking don't seem to be affected. Similarly, there isn't much of an impact on health care access. There seems to be a small positive impact of the minimum wage on the likelihood of being obese, but no impact on BMI. I'm not sure how to interpret that, but there is also evidence that a minimum wage increase leads to a reduction in fruit and vegetable consumption, which adds credence to the obesity finding. The results also demonstrate that a minimum wage increase can reduce the number of days that people report being in poor health. But generally – on aggregate – there isn't much going on at all. So the authors look at subgroups. Smoking is found to increase (and BMI to decrease) with the minimum wage for younger non-married white males. Obesity is more likely to be increased by minimum wage hikes for people who are white or married, and especially for those in older age groups. Women seem to benefit from fewer days with mental health problems. The main concerns identified in this paper are that minimum wage increases could increase smoking in young men and could reduce fruit and veg consumption. But I don't think we should overstate it. There's a lot going on in the data, and though the authors do a good job of trying to identify the effects, other explanations can't be excluded. Minimum wage increases probably don't have a major direct impact on health behaviours – positive or negative – but policymakers should take note of the potential value of providing public health interventions to those groups most likely to be affected by the minimum wage.
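The identification strategy described above – OLS with state and year fixed effects, exploiting the timing of state-level minimum wage changes – can be sketched in a few lines. This is a minimal illustration on simulated data, not the study's actual specification: the variable names, the staggered wage increases, and the simulated effect size are all invented for the example.

```python
# Sketch of a two-way fixed-effects (difference-in-differences) regression.
# Simulated data: states raise their minimum wage in different years, and the
# true simulated effect of a $1 wage rise is -0.3 poor-health days per month.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for state in range(20):
    raise_year = 2003 + state % 4          # staggered minimum wage increases
    for year in range(2000, 2010):
        min_wage = 5.15 + (1.5 if year >= raise_year else 0.0)
        poor_health_days = 4.0 - 0.3 * min_wage + rng.normal(scale=0.3)
        rows.append(dict(state=state, year=year,
                         min_wage=min_wage,
                         poor_health_days=poor_health_days))
df = pd.DataFrame(rows)

# State and year fixed effects absorb level differences between states and
# common time trends; the coefficient on min_wage is the DiD estimate.
fit = smf.ols("poor_health_days ~ min_wage + C(state) + C(year)",
              data=df).fit()
print(f"Estimated effect: {fit.params['min_wage']:.2f}")
```

In practice, studies of this kind would also cluster standard errors by state and add individual-level controls; the sketch only shows the core fixed-effects logic.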

Aligning policy objectives and payment design in palliative care. BMC Palliative Care [PubMed] Published 7th March 2018

Health care at the end of life – including palliative care – presents challenges in evaluation. The focus is on improving patients' quality of life, but it's also about satisfying preferences for processes of care, the experiences of carers, and providing a 'good death'. And partly because these things can be difficult to measure, it can be difficult to design payment mechanisms that achieve desirable outcomes. Perhaps that's why there is no current standard approach to funding palliative care, with a lot of variation between countries, despite the common aspiration for universality. This paper tackles the question of payment design with a discussion of the literature. Traditionally, palliative care has been funded by block payments, per diems, or fee-for-service. The author starts with the acknowledgement that there are two challenges to ensuring value for money in palliative care: moral hazard and adverse selection. Providers may over-supply because of fee-for-service funding arrangements, or they may 'cream-skim' patients. Adverse selection may arise in an insurance-based system, with demand from high-risk people causing the market to fail. These problems could potentially be solved by capitation-based payments and risk adjustment. The market could also be warped by blunt eligibility restrictions and funding caps. Another difficulty is the challenge of achieving allocative efficiency between home-based and hospital-based services, made plain by the fact that, in many countries, a majority of people die in hospital despite a preference for dying at home. The author describes developments (particularly in Australia) in activity-based funding for palliative care. An interesting proposal – though not discussed in enough detail – is that payments could be made for each death (per mortems?). Capitation-based payment models are considered, and the extent to which pay-for-performance could be incorporated is also discussed – the latter being potentially important in achieving those process outcomes that matter so much in palliative care. Yet another challenge is the question of when palliative care should come into play because, in some cases, sooner is better: earlier provision of palliative care can give rise to less costly and more preferred treatment pathways. Thus, palliative care funding models will have implications for the funding of acute care. Throughout, the paper includes examples from different countries, along with a wealth of references to dig into. Helpfully, the author explicitly states in a table the models that different settings ought to adopt, given their prevailing model. As our population ages and the purse strings tighten, this is a discussion we can expect to be having more and more.

Thesis Thursday: Francesco Longo

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Francesco Longo who has a PhD from the University of York. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Essays on hospital performance in England
Supervisor: Luigi Siciliani
Repository link

What do you mean by ‘hospital performance’, and how is it measured?

The concept of performance in the healthcare sector covers a number of dimensions, including responsiveness, affordability, accessibility, quality, and efficiency. A PhD does not normally provide enough time to investigate all these aspects and, hence, my thesis mostly focuses on quality and efficiency in the hospital sector. The concept of quality or efficiency of a hospital is also surprisingly broad and, as a consequence, perfect quality and efficiency measures do not exist. For example, mortality and readmissions are good clinical quality measures, but the majority of hospital patients do not die and are not readmitted. How well does the hospital treat these patients? Similarly for efficiency: knowing that a hospital is more efficient because it now has lower costs is essential, but how is that hospital actually reducing costs? My thesis also tries to answer these questions by analysing various quality and efficiency indicators. For example, Chapter 3 uses quality measures such as overall and condition-specific mortality, overall readmissions, and patient-reported outcomes for hip replacement. It also uses efficiency indicators such as bed occupancy, cancelled elective operations, and cost indexes. Chapter 4 analyses additional efficiency indicators, such as admissions per bed, the proportion of day cases, and the proportion of untouched meals.

You dedicated a lot of effort to comparing specialist and general hospitals. Why is this important?

The first part of my thesis focuses on specialisation, i.e. an organisational form which is supposed to generate greater efficiency, quality, and responsiveness but not necessarily lower costs. Some evidence from the US suggests that orthopaedic and surgical hospitals had 20 percent higher inpatient costs because of, for example, higher staffing levels and better quality of care. In the English NHS, specialist hospitals play an important role because they deliver high proportions of specialised services, commonly low-volume but high-cost treatments for patients with complex and rare conditions. Specialist hospitals therefore allow the achievement of a critical mass of clinical expertise to ensure patients receive specialised treatments that produce better health outcomes. More precisely, my thesis focuses on specialist orthopaedic hospitals which, for instance, provide 90% of bone and soft tissue sarcoma surgeries and 50% of scoliosis treatments. It is therefore important to investigate the financial viability of specialist orthopaedic hospitals, relative to general hospitals that undertake similar activities, under the current payment system. The thesis implements weighted least squares regressions to compare profit margins between specialist and general hospitals. Specialist orthopaedic hospitals are found to have lower profit margins, which are explained by patient characteristics such as age and severity. This means that, under the current payment system, providers that generally attract more complex patients, such as specialist orthopaedic hospitals, may be financially disadvantaged.

In what way is your analysis of competition in the NHS distinct from that of previous studies?

The second part of my thesis investigates the effect of competition on quality and efficiency from two different perspectives. First, it explores whether, under competitive pressures, neighbouring hospitals strategically interact in quality and efficiency, i.e. whether a hospital's quality and efficiency respond to neighbouring hospitals' quality and efficiency. Previous studies on English hospitals analyse strategic interactions only in quality, and they employ cross-sectional spatial econometric models. Instead, my thesis uses panel spatial econometric models and a cross-sectional IV model in order to make causal statements about the existence of strategic interactions among rival hospitals. Second, the thesis examines the direct effect of hospital competition on efficiency. The previous empirical literature has studied this topic by focusing on two measures of efficiency – unit costs and length of stay – measured at the aggregate level or for a specific procedure (hip and knee replacement). My thesis provides a richer analysis by examining a wider range of efficiency dimensions. It combines a difference-in-differences strategy, commonly used in the literature, with Seemingly Unrelated Regression models to estimate the effect of competition on efficiency and enhance the precision of the estimates. Moreover, the thesis tests whether the effect of competition varies for more or less efficient hospitals using an unconditional quantile regression approach.

Where should researchers turn next to help policymakers understand hospital performance?

Hospitals are complex organisations and the idea of performance within this context is multifaceted. Even when we focus on a single performance dimension such as quality or efficiency, it is difficult to identify a measure that could work as a comprehensive proxy. It is therefore important to decompose the analysis as much as possible by exploring indicators capturing complementary aspects of the performance dimension of interest. This practice is likely to generate findings that are readily interpretable by policymakers. For instance, some results from my thesis suggest that hospital competition improves efficiency by reducing admissions per bed. Such an effect is driven by a reduction in the number of beds rather than an increase in the number of admissions. In addition, competition improves efficiency by pushing hospitals to increase the proportion of day cases. These findings may help to explain why other studies in the literature find that competition decreases length of stay: hospitals may replace elective patients, who occupy hospital beds for one or more nights, with day case patients, who are likely to be discharged on the same day as admission.

Method of the month: Q methodology

Once a month we discuss a particular research method that may be of interest to people working in health economics. We’ll consider widely used key methodologies, as well as more novel approaches. Our reviews are not designed to be comprehensive but provide an introduction to the method, its underlying principles, some applied examples, and where to find out more. If you’d like to write a post for this series, get in touch. This month’s method is Q methodology.

There are many situations in which we might be interested in people's views, opinions or beliefs about an issue, such as how we allocate health care resources or the type of care we provide to dementia patients. Typically, health economists might think about using qualitative methods or preference elicitation techniques, but Q methodology could be your new method for examining these questions. Q methodology combines qualitative and quantitative techniques, which allow us to first identify the range of views that exist on a topic and then describe those viewpoints in depth.

Q methodology was conceived by William Stephenson as a way to study subjectivity and is detailed in his 1953 book The Study of Behavior. A more widely available book by Watts and Stenner (2012) provides a great general introduction to all stages of a Q study, and the paper by Baker et al (2006) introduces Q methodology in health economics.

There are two main stages in a Q methodology study. In the first stage, participants express their views by rank-ordering a set of statements; the resulting ranking is known as a Q sort. The second stage uses factor analysis to identify patterns of similarity between the Q sorts, which can then be described in detail.

Stage 1: Developing the statements and Q sorting

The most important part of any Q study is the development of the statements that your participants will rank-order. The starting point is to identify all of the possible views on your topic. Participants should be able to interpret the statements as opinions rather than facts, for example, "The amount of health care people have had in the past should not influence access to treatments in the future". The statements can come from a range of sources, including interview transcripts, public consultations, academic literature, newspapers and social media. Through a process of eliminating duplicates and merging or deleting similar statements, you want to end up with a smaller set of statements that is representative of the population of views that exist on your topic. Pilot these statements in a small number of Q sorts before finalising them and starting your main data collection.

The next thing to consider is from whom you are going to collect Q sorts. Participant sampling in Q methodology is similar to that of qualitative methods where you are looking to identify ‘data rich’ participants. It is not about representativeness according to demographics; instead, you want to include participants who have strong and differing views on your topic. Typically this would be around 30 to 60 people. Once you have selected your sample you can conduct your Q sorts. Here, each of your participants rank-orders the set of statements according to an instruction, for example from ‘most agree to most disagree’ or ‘highest priority to lowest priority’. At the end of each Q sort, a short interview is conducted asking participants to summarise their opinions on the Q sort and give further explanation for the placing of selected statements.

Stage 2: Analysis and interpretation

In the analysis stage, the aim is to identify people who have ranked their statements in a similar way. This involves calculating the correlations between the participants' Q sorts (the full ranking of all statements) to form a correlation matrix, which is then subject to factor analysis. The software outlined in the next section can help you with this. The factor analysis will produce a number of statistically significant solutions, and your role as the analyst is to decide how many factors to retain for interpretation. This will be an iterative process in which you consider the internal coherence of each factor (does the ranking of the statements make sense? does it align with the comments made by the participants following the Q sort?) as well as statistical considerations such as eigenvalues. The factors are idealised Q sorts that are a complete ranking of all statements, essentially representing the way a respondent who had a correlation coefficient of 1 with the factor would have ranked their statements. The final step is to provide a descriptive account of the factors, looking at the positioning of each statement in relation to the other statements and drawing on the post-Q-sort interviews to support and aid your interpretation.
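The mechanics of this stage can be sketched in a few lines: correlate the participants' Q sorts, then factor the resulting person-by-person correlation matrix. The sketch below uses simulated sorts generated from two underlying viewpoints, and plain principal components extraction on the correlation matrix; a real study would typically use PQMethod's centroid or principal components extraction followed by rotation, and all names and numbers here are illustrative only.

```python
# Toy Q analysis: 12 simulated participants, 30 statements, two viewpoints.
# Correlating the Q sorts and factoring the correlation matrix should reveal
# two dominant factors (eigenvalues well above the rest).
import numpy as np

rng = np.random.default_rng(1)
n_statements = 30
viewpoint_a = rng.permutation(n_statements)   # idealised ranking, viewpoint A
viewpoint_b = rng.permutation(n_statements)   # idealised ranking, viewpoint B

def noisy_sort(base, swaps=3):
    """A participant's Q sort: the base ranking with a few statements swapped."""
    s = base.copy()
    for _ in range(swaps):
        i, j = rng.integers(0, len(s), size=2)
        s[i], s[j] = s[j], s[i]
    return s

# Six participants broadly sharing viewpoint A, six sharing viewpoint B.
sorts = np.array([noisy_sort(viewpoint_a) for _ in range(6)] +
                 [noisy_sort(viewpoint_b) for _ in range(6)])

corr = np.corrcoef(sorts)                  # person-by-person correlation matrix
eigvals = np.linalg.eigvalsh(corr)[::-1]   # eigenvalues, largest first
print(np.round(eigvals[:3], 2))            # two factors stand out from the rest
```

The two large eigenvalues correspond to the two viewpoints; in a real analysis you would then compute the idealised Q sort for each retained factor and interpret it alongside the post-sort interviews.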

There are a small number of software packages available to analyse your Q data, most of which are free to use. The most widely used programme is PQMethod. It is a DOS-based programme which often causes nervousness for newcomers due to the old-school black screen and the requirement to step away from the mouse, but it is actually easy to navigate when you get going, and it produces all of the output you need to interpret your Q sorts. There is the newer (and also free) KenQ, which is receiving good reviews and has a more up-to-date web-based navigation, but I must confess I like my old-time PQMethod. Details on all of the software and where to access it can be found on the Q methodology website.

Q methodology studies have been conducted with patient groups and the general public. In patient groups, the aim is often to understand their views on the type of care they receive or options for future care. Examples include the views of young people on the transition from paediatric to adult health care services and the views of dementia patients and their carers on good end of life care. The results of these types of Q studies have been used to inform the design of new interventions or to provide attributes for future preference elicitation studies.

We have also used Q methodology to investigate the views of the general public in a range of European countries on the principles that should underlie health care resource allocation, as part of the EuroVaQ project. More recently, Q methodology has been used to identify societal views on the provision of life-extending treatments for people with a terminal illness. This programme of work highlighted three viewpoints, and a connected survey found that there was not one dominant viewpoint. This may help to explain why – after a number of preference elicitation studies in this area – we still cannot provide a definitive answer on whether an end of life premium exists. The survey mentioned in the end of life work refers to the Q2S (Q to survey) approach, which is a linked method to Q methodology… but that is for another blog post!