Rita Faria’s journal round-up for 15th April 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Emulating a trial of joint dynamic strategies: an application to monitoring and treatment of HIV‐positive individuals. Statistics in Medicine [PubMed] Published 18th March 2019

Have you heard about the target trial approach? This is a causal inference method for using observational evidence to compare strategies. This outstanding paper by Ellen Caniglia and colleagues is a great way to get introduced to it!

The question is: what is the best test-and-treat strategy for HIV-positive individuals? Given that patients weren’t randomised to each of the 4 alternative strategies, chances are that their treatment was informed by their prognostic factors. And these also influence their outcome. It’s a typical situation of bias due to confounding. The target trial approach consists of designing the RCT that would estimate the causal effect of interest, and thinking through how its design can be emulated with the observational data. Here, it would be a trial in which patients would be randomly assigned to one of the 4 joint monitoring and treatment strategies. The goal is to estimate the difference in outcomes if all patients had followed their assigned strategies.

The method is fascinating albeit a bit complicated. It involves censoring individuals, fitting survival models, estimating probability weights, and replicating data. It is worthy of a detailed read! I’m very excited about the target trial methodology for cost-effectiveness analysis with observational data. But I haven’t come across any application yet. Please do get in touch via comments or Twitter if you know of a cost-effectiveness application.
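For a flavour of those mechanics, here is a minimal sketch in Python of the censor-and-reweight step, assuming a hypothetical person-month data set. It is not the authors’ code: the column names (id, month, strategy, on_protocol, cd4, event) are invented, and the cloning of individuals across strategies is left out for brevity.

```python
# A minimal sketch of artificial censoring plus inverse probability of
# censoring weights, NOT the authors' code. The data frame layout and
# column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def ipcw_per_protocol(df: pd.DataFrame):
    """Weighted pooled logistic regression over person-months."""
    df = df.sort_values(["id", "month"]).copy()

    # 1. Artificial censoring: follow-up ends once a patient deviates
    #    from the strategy they were assigned to at baseline.
    df["censored"] = (df["on_protocol"] == 0).astype(int)
    first_dev = df.loc[df["censored"] == 1].groupby("id")["month"].min()
    df = df[df["month"] <= df["id"].map(first_dev).fillna(np.inf)]

    # 2. Model the probability of remaining uncensored, given
    #    time-varying prognostic factors (here just CD4 count and time).
    X_denom = sm.add_constant(df[["month", "cd4"]])
    X_num = sm.add_constant(df[["month"]])
    denom = sm.GLM(1 - df["censored"], X_denom, family=sm.families.Binomial()).fit()
    num = sm.GLM(1 - df["censored"], X_num, family=sm.families.Binomial()).fit()

    # 3. Stabilised weights: cumulative product over each person's history.
    df["w"] = num.predict(X_num) / denom.predict(X_denom)
    df["w"] = df.groupby("id")["w"].cumprod()

    # 4. Weighted discrete-time hazard model for the outcome. 'strategy' is
    #    assumed numerically coded here; in practice the 4 strategies would
    #    enter as dummy variables.
    uncens = df[df["censored"] == 0]
    X_out = sm.add_constant(uncens[["month", "strategy"]])
    return sm.GLM(uncens["event"], X_out, family=sm.families.Binomial(),
                  freq_weights=np.asarray(uncens["w"])).fit()
```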

Achieving integrated care through commissioning of primary care services in the English NHS: a qualitative analysis. BMJ Open [PubMed] Published 1st April 2019

Are you confused about the set-up of primary health care services in England? Look no further than Imelda McDermott and colleagues’ paper.

The paper starts by telling the story of how primary care has been organised in England over time, from its creation in 1948 to current times. For example, I didn’t know that there are new plans to allow clinical commissioning groups (CCGs) to design local incentive schemes as an alternative to the Quality and Outcomes Framework pay-for-performance scheme. The research proper is a qualitative study using interviews, telephone surveys and analysis of policy documents to understand how the CCGs commission primary care services. CCG commissioning is intended to make better and more efficient use of resources to address increasing demand for health care services, staff shortages and financial pressure. The issue is that it is not easy to implement in practice. Furthermore, there seems to be some “reinvention of the wheel”. For example, from one of the interviewees: “…it’s no great surprise to me that the three STPs that we’ve got are the same as the three PCT clusters that we broke up to create CCGs…” Hmm, shall we just go back to pre-2012 then?

Even if CCG commissioning does achieve all it sets out to do, I wonder about its value for money given the costs of setting it up. This paper is an exceptional read about the practicalities of implementing this policy in practice.

The dark side of coproduction: do the costs outweigh the benefits for health research? Health Research Policy and Systems [PubMed] Published 28th March 2019

Last month, I covered the excellent paper by Kathryn Oliver and Paul Cairney about how to get our research to influence policy. This week I’d like to suggest another remarkable paper by Kathryn, this time with Anita Kothari and Nicholas Mays, on the costs and benefits of coproduction.

If you are in the UK, you have certainly heard about public and patient involvement or PPI. In this paper, coproduction refers to any collaborative working between academics and non-academics, of which PPI is one type; it also includes working with professionals, policy makers and any other people affected by the research. The authors discuss a wide range of costs to coproduction: the direct costs of doing collaborative research, such as organising meetings and travel arrangements; the personal costs to an individual researcher of managing conflicting views and disagreements between collaborators, of having research products seen to be of lower quality, and of being seen as partisan; and the costs to the stakeholders themselves.

As a detail, I loved the term “hit-and-run research” to describe the current climate: get funding, do research, achieve impact, leave. Indeed, the way that research is funded, with budgets only available for the period that the research is being developed, does not help academics to foster relationships.

This paper reinforced my view that there may well be benefits to coproduction, but that there are also quite a lot of costs. And not much attention tends to be paid to the magnitude of those costs, on whom they fall, and what’s displaced. I found the authors’ advice about the questions to ask oneself when thinking about coproduction to be really useful. I’ll keep it to hand when writing my next funding application, and I recommend you do too!

Credits

Chris Sampson’s journal round-up for 4th February 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Patient choice and provider competition – quality enhancing drivers in primary care? Social Science & Medicine Published 29th January 2019

There’s no shortage of studies in economics claiming to identify the impact (or lack of impact) of competition in the market for health care. The evidence has brought us close to a consensus that greater competition might improve quality, so long as providers don’t compete on price. However, many of these studies aren’t able to demonstrate the mechanism through which competition might improve quality, and the causality is therefore speculative. The research reported in this article was an attempt to see whether the supposed mechanisms for quality improvement actually exist. The authors distinguish between the demand-side mechanisms of competition-increasing, quality-improving reforms (i.e. changes in patient behaviour) and the supply-side mechanisms (i.e. changes in provider behaviour), asserting that the supply side has been neglected in the research.

The study is based on primary care in Sweden’s two largest cities, where patients can choose their primary care practice, which could be a private provider. Key is the fact that patients can switch between providers as often as they like, and with fewer barriers to doing so than in the UK. Prospective patients have access to some published quality indicators. With the goal of maximum variation, the researchers recruited 13 primary health care providers for semi-structured interviews with the practice manager and (in most cases) one or more of the practice GPs. The interview protocol included questions about the organisation of patient visits, information received about patients’ choices, market situation, reimbursement, and working conditions. Interview transcripts were coded and a framework established. Two overarching themes were ‘local market conditions’ and ‘feedback from patient choice’.

Most interviewees did not see competitors in the local market as a threat – on the contrary, providers are encouraged to cooperate on matters such as public health. Where providers did talk about competing, it was in terms of (speed of) access for patients, or in competition to recruit and keep staff. None of the interviewees were automatically informed of patients being removed from their list, and some managers reported difficulties in actually knowing which patients on their list were still genuinely on it. Even where these data were more readily available, nobody had access to information on reasons for patients leaving. Managers saw greater availability of this information as useful for quality improvement, while GPs tended to think it could be useful in ensuring continuity of care. Still, most expressed no desire to expand their market share. Managers reported using marketing efforts in response to greater competition generally, rather than as a response to observed changes within their practice. But most relied on reputation. Some reported becoming more service-minded as a result of choice reforms.

It seems that practices need more information to be able to act on competitive pressures. But, most practices don’t care about it because they don’t want to expand and they face no risk of there being a shortage of patients (in cities, at least). And, even if they did want to act on the information, chances are it would just create an opportunity for them to improve access as a way of cherry-picking younger and healthier people who demand convenience. Primary care providers (in this study, at least) are not income maximisers, but satisficers (they want to break-even), so there isn’t much scope for reforms to encourage providers to compete for new patients. Patient choice reforms may improve quality, but it isn’t clear that this has anything to do with competitive pressure.

Maximising the impact of patient reported outcome assessment for patients and society. BMJ [PubMed] Published 24th January 2019

Patient-reported outcome measures (PROMs) have been touted as a way of improving patient care. Yet, their use around the world is fragmented. In this paper, the authors make some recommendations about how we might use PROMs to improve patient care. The authors summarise some of the benefits of using PROMs and discuss some of the ways that they’ve been used in the UK.

Five key challenges in the use of PROMs are specified: i) appropriate and consistent selection of the best measures; ii) ethical collection and reporting of PROM data; iii) data collection, analysis, reporting, and interpretation; iv) data logistics; and v) a lack of coordination and efficiency. To address these challenges, the authors recommend an ‘integrated’ approach. To achieve this, stakeholder engagement is important and a governance framework needs to be developed. A handy table of current uses is provided.

I can’t argue with what the paper proposes, but it outlines an idealised scenario rather than any firm and actionable recommendations. What the authors don’t discuss is the fact that the use of PROMs in the UK is flailing. The NHS PROMs programme has been scaled back, measures have been dropped from the QOF, and the EQ-5D has been dropped from the GP Patient Survey. Perhaps we need bolder recommendations and new ideas to turn the tide.

Check your checklist: the danger of over- and underestimating the quality of economic evaluations. PharmacoEconomics – Open [PubMed] Published 24th January 2019

This paper outlines the problems associated with misusing methodological and reporting checklists. The author argues that the current number of checklists available in the context of economic evaluation and HTA (13, apparently) is ‘overwhelming’. Three key issues are discussed. First, researchers choose the wrong checklist. A previous review found that the Drummond, CHEC, and Philips checklists were regularly used in the wrong context. Second, checklists can be overinterpreted, resulting in incorrect conclusions. A complete checklist does not mean that a study is perfect, and different features are of varying importance in different studies. Third, checklists are misused, with researchers deciding which items are or aren’t relevant to their study, without guidance.

The author suggests that more guidance is needed and that a checklist for selecting the correct checklist could be the way to go. The issue of updating checklists over time – and who ought to be responsible for this – is also raised.

In general, the tendency seems to be to broaden the scope of general checklists and to develop new checklists for specific methodologies, requiring the application of multiple checklists. As methods develop, they become increasingly specialised and heterogeneous. I think there’s little hope for checklists in this context unless they’re pared down and used as a reminder of the more complex guidance that’s needed to specify suitable methods and achieve adequate reporting. ‘Check your checklist’ is a useful refrain, though I reckon ‘chuck your checklist’ can sometimes be a better strategy.

A systematic review of dimensions evaluating patient experience in chronic illness. Health and Quality of Life Outcomes [PubMed] Published 21st January 2019

Back to PROMs and PRE(xperience)Ms. This study sets out to understand what it is that patient-reported measures are being used to capture in the context of chronic illness. The authors conducted a systematic review, screening 2,375 articles and ultimately including 107 articles that investigated the measurement properties of chronic (physical) illness PROMs and PREMs.

29 questionnaires were about (health-related) quality of life, 19 about functional status or symptoms, 20 on feelings and attitudes about illness, 19 assessing attitudes towards health care, and 20 on patient experience. The authors provide some nice radar charts showing the percentage of questionnaires that included each of 12 dimensions: i) physical, ii) functional, iii) social, iv) psychological, v) illness perceptions, vi) behaviours and coping, vii) effects of treatment, viii) expectations and satisfaction, ix) experience of health care, x) beliefs and adherence to treatment, xi) involvement in health care, and xii) patient’s knowledge.

The study supports the idea that a patient’s lived experience of illness and treatment, and adaptation to that, has been judged to be important in addition to quality of life indicators. The authors recommend that no measure should try to capture everything because there are simply too many concepts that could be included. Rather, researchers should specify the domains of interest and clearly define them for instrument development.

Credits

 

Chris Sampson’s journal round-up for 31st December 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Perspectives of patients with cancer on the quality-adjusted life year as a measure of value in healthcare. Value in Health Published 29th December 2018

Patients should have the opportunity to understand how decisions are made about which treatments they are and are not allowed to use, given their coverage. This study reports on a survey of cancer patients and survivors, with the aim of identifying patients’ awareness, understanding, and opinions about the QALY as a measure of value.

Participants were recruited from a (presumably US-based) patient advocacy group, and 774 people – mostly well-educated, mostly white, mostly women – responded. The online survey asked about cancer status and included a couple of measures of health literacy. Fewer than 7% of participants had ever heard of the QALY, and those with greater health literacy were more likely to have done so. The survey explained the QALY to the participants and then asked if the concept of the QALY makes sense. Around half said it did, and 24% thought that it was a good way to measure value in health care. The researchers report a variety of ‘significant’ differences in tendencies to understand or support the use of QALYs, but I’m not convinced that they’re meaningful because the differences aren’t big and the samples are relatively small.

At the end of the survey, respondents were asked to provide opinions on QALYs and value in health care. 165 people provided responses and these were coded and analysed qualitatively. The researchers identified three themes from this one free-text question: i) measuring value, ii) opinions on QALY, and iii) value in health care and decision making. I’m not sure that they’re meaningful themes that help us to understand patients’ views on QALYs. A significant proportion of respondents rejected the idea of using numbers to quantify value in health care. On the other hand, some suggested that the QALY could be a useful decision aid for patients. There was opposition to ‘external decision makers’ having any involvement in health care decision making. Unless you’re paying for all of your care out of pocket, that’s tough luck. But the most obvious finding from the qualitative analysis is that respondents didn’t understand what QALYs were for. That’s partly because health economists in general need to be better at communicating concepts like the QALY. But I think it’s also in large part because the authors failed to provide a clear explanation. They didn’t even use my lovely Wikipedia graphic. Many of the points made by respondents are entirely irrelevant to the appropriateness of QALYs as they’re used (or in the case of the US, aren’t yet used) in practice. For example, several discussed the use of QALYs in clinical decision making. Patients think that they should maintain autonomy, which is fair enough but has nothing to do with how QALYs are used to assess health technologies.

QALYs are built on the idea of trade-offs. They measure the trade-off between life extension and life improvement. They are used to guide trade-offs between different treatments for different people. But the researchers didn’t explain how or why QALYs are used to make trade-offs, so the elicited views aren’t well-informed.
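For anyone new to the measure, the arithmetic behind that trade-off is simple: QALYs multiply time spent in a health state by the utility (quality-of-life weight) of that state. With purely illustrative numbers, compare a treatment that extends survival from 8 to 10 years at a utility of 0.7 with one that instead raises utility from 0.7 to 0.9 over those 8 years:

\[
\text{QALYs} = \sum_i t_i u_i, \qquad
\Delta_{\text{extension}} = 2 \times 0.7 = 1.4, \qquad
\Delta_{\text{improvement}} = 8 \times (0.9 - 0.7) = 1.6.
\]

The quality-improving treatment delivers slightly more QALYs despite adding no life years, which is precisely the kind of trade-off that respondents were being asked to consider.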

Measuring multivariate risk preferences in the health domain. Journal of Health Economics Published 27th December 2018

Health preferences research is now a substantial field in itself. But there’s still a lot of work left to be done on understanding risk preferences with respect to health. Gradually, we’re coming round to the idea that people tend to be risk-averse. But risk preferences aren’t (necessarily) so simple. Recent research has proposed that ‘higher order’ preferences such as prudence and temperance play a role. A person exhibiting univariate prudence for longevity would be better able to cope with risk if they are going to live longer. Univariate temperance is characterised by a preference for prospects that disaggregate risk across different possible outcomes. Risk preferences can also be multivariate – across health and wealth, for example – describing how preferences over risk in one attribute depend on the level of, or risk in, the other. These include correlation aversion, cross-prudence, and cross-temperance. Many articles from the Arthur Attema camp demand a great deal of background knowledge. This paper isn’t an exception, but it does provide a very clear and intuitive description of the various kinds of uni- and multivariate risk preferences that the researchers are considering.
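As a compact reference, the usual expected-utility characterisations of these traits (standard in this literature, rather than notation taken from the paper) are in terms of the signs of successive derivatives of a utility function u(w, h) over wealth w and longevity h:

\[
\begin{aligned}
&\text{risk aversion: } u'' \le 0, \qquad \text{prudence: } u''' \ge 0, \qquad \text{temperance: } u'''' \le 0;\\
&\text{correlation aversion: } \frac{\partial^2 u}{\partial w\,\partial h} \le 0, \qquad
\text{cross-prudence in wealth: } \frac{\partial^3 u}{\partial w\,\partial h^2} \ge 0, \qquad
\text{cross-temperance: } \frac{\partial^4 u}{\partial w^2\,\partial h^2} \le 0.
\end{aligned}
\]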

For this study, an experiment was conducted with 98 people, who were asked to make 69 choices, corresponding to 3 choices about each risk preference trait being tested, for both gains and losses. Participants were told that they had €240,000 in wealth and 40 years of life to play with. The number of times that an individual made choices in line with a particular trait was used as an indicator of their strength of preference.

For gains, risk aversion was common for both wealth and longevity, and prudence was a common trait. There was no clear tendency towards temperance. For losses, risk aversion and prudence tended to neutrality. For multivariate risk preferences, a majority of people were correlation averse for gains and correlation seeking for losses. For gains, 76% of choices were compatible with correlation aversion, suggesting that people prefer to disaggregate fixed wealth and health gains. For losses, the opposite was true in 68% of choices. There was evidence for cross-prudence in wealth gains but not longevity gains, suggesting that people prefer health risk if they have higher wealth. For losses, the researchers observed cross-prudence and cross-temperance neutrality. The authors go on to explore associations between different traits.

A key contribution is in understanding how risk preferences differ in the health domain as compared with the monetary domain (which is what most economists study). Conveniently, there are a lot of similarities between risk preferences in the two domains, suggesting that health economists can learn from the wider economics literature. Risk aversion and prudence seem to apply to longevity as well as monetary gains, with a shift to neutrality in losses. The potential implications of these findings are far-reaching, but this is just a small experimental study. More research needed (and anticipated).

Prospective payment systems and discretionary coding—evidence from English mental health providers. Health Economics [PubMed] Published 27th December 2018

If you’ve conducted an economic evaluation in the context of mental health care in England, you’ll have come across mental health care clusters. Patients undergoing mental health care are allocated to one of 20 clusters, classed as either ‘psychotic’, ‘non-psychotic’, or ‘organic’, which forms the basis of an episodic payment model. In 2013/14, these episodes were associated with an average cost of between £975 and £9,354 per day. Doctors determine the clusters and the clusters determine reimbursement. Perverse incentives abound. Or do they?

This study builds on the fact that patients are allocated by clinical teams with guidance from the algorithm-based Mental Health Clustering Tool (MHCT). Clinical teams might exhibit upcoding, whereby patients are allocated to clusters that attract a higher price than that recommended by the MHCT. Data were analysed for 148,471 patients from the Mental Health Services Data Set for 2011-2015. For each patient, their allocated cluster is known, along with a variety of socioeconomic indicators and the HoNOS and SARN instruments, which go into the MHCT algorithm. Mixed-effects logistic regression was used to look at whether individual patients were or were not allocated to the cluster recommended as ‘best fit’ by the MHCT, controlling for patient and provider characteristics. Further to this, multilevel multinomial logit models were used to categorise decisions that don’t match the MHCT as either under- or overcoding.
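Schematically, and using my own notation rather than the paper’s, the first of those models takes the form

\[
\operatorname{logit}\Pr(\text{mismatch}_{ij} = 1) = \mathbf{x}_{ij}'\boldsymbol{\beta} + u_j, \qquad u_j \sim N(0, \sigma^2_u),
\]

where i indexes patients, j indexes providers, x collects the patient and provider characteristics, and u_j is a provider-level random intercept capturing unobserved heterogeneity between providers.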

Average agreement across clusters between the MHCT and clinicians was 36%. In most cases, patients were allocated to a cluster either one step higher or one step lower in terms of the level of need, and there isn’t an obvious tendency to overcode. The authors are able to identify a few ways in which observable provider and patient characteristics influence the tendency to under- or over-cluster patients. For example, providers with higher activity are less likely to deviate from the MHCT best fit recommendation. However, the dominant finding – identified by using median odds ratios for the probability of a mismatch between two random providers – seems to be that unobserved heterogeneity determines variation in behaviour.
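For readers unfamiliar with the metric, the median odds ratio in a two-level logistic model is conventionally computed from the provider-level variance as

\[
\text{MOR} = \exp\!\left(\sqrt{2\sigma^2_u}\,\Phi^{-1}(0.75)\right) \approx \exp(0.95\,\sigma_u)
\]

(the standard formula from the multilevel-modelling literature, not a figure reported in the paper). A value of 1 would mean no between-provider variation; larger values give the typical factor by which the odds of a mismatch change when moving between two randomly chosen providers.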

The study provides clues about the ways in which providers could manipulate coding to their advantage and identifies the need for further data collection for a proper assessment. But reimbursement wasn’t linked to clustering during the time period of the study, so it remains to be seen how clinicians actually respond to these potentially perverse incentives.

Credits