5th IRDES-DAUPHINE Workshop on Applied Health Economics and Policy Evaluation

The fifth IRDES Workshop on Applied Health Economics and Policy Evaluation will take place in Paris, France, on June 20th-21st 2019. The workshop is organized by IRDES, the Institute for Research and Information in Health Economics, and the Chaire Santé Dauphine.

Submission and selection of papers. You are invited to submit a full paper before January 14th 2019. Papers will be selected by the scientific committee on the basis of full or advanced draft papers, written in English. Papers should include empirical material, and only papers unpublished at the time of submission will be accepted. The submission should contain the author’s name(s) and affiliation(s), a structured abstract, and keywords (up to five).
Authors have to submit their complete papers in PDF format through the Submission form.

Registration and fees. Registration fees are 200 euros. Only authors or coauthors can apply for registration. PhD students or early career researchers may benefit from free registration upon request.

Program. The workshop will cover the following topics, with an emphasis on public policy analysis and evaluation: Social Health Inequalities; Health Services Utilization; Insurance; Health Services Delivery and Organization; Specific Populations (the Elderly, Migrants, High Needs-High Costs Patients, Low-Income Households…). About 16 papers will be selected. Each paper will be allocated 20 minutes for presentation and 20 minutes for discussion (introduced by a participant or a member of the scientific committee).

Scientific committee. Damien Bricard (IRDES), Andrew Clark (Paris School of Economics), Brigitte Dormont (Paris Dauphine University and Chaire santé Dauphine), Paul Dourgnon (IRDES), Agnès Gramain (Université Lorraine), Julien Mousquès (IRDES), Aurélie Pierre (IRDES), Erin Strumpf (McGill University, Montreal), Matt Sutton (University of Manchester)

Contact: ahepe@irdes.fr

Rita Faria’s journal round-up for 4th March 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Cheap and dirty: the effect of contracting out cleaning on efficiency and effectiveness. Public Administration Review Published 25th February 2019

Before I was a health economist, I used to be a pharmacist and worked for a well-known high street chain for some years. My impression was that the stores with in-house cleaners were cleaner, but I didn’t know if this was a true difference, my leftie bias, or my small sample size of 2! This new study by Shimaa Elkomy, Graham Cookson and Simon Jones confirms my suspicions, albeit in the context of NHS hospitals, so I couldn’t resist selecting it for my round-up.

They looked at how contracted-out services fare in terms of perceived cleanliness, costs and MRSA rate in NHS hospitals. MRSA is a type of hospital-associated infection that is affected by how clean a hospital is.

They found that contracted-out services are cheaper than in-house cleaning, but that perceived cleanliness is worse. Importantly, contracted-out services increase the MRSA rate. In other words, contracting-out cleaning services could harm patients’ health.

This is a fascinating paper that is well worth a read. One wonders if the cost of managing MRSA is more than offset by the savings of contracting-out services. Going a step further, are in-house services cost-effective given the impact on patients’ health and costs of managing infections?

What’s been the bang for the buck? Cost-effectiveness of health care spending across selected conditions in the US. Health Affairs [PubMed] Published 1st January 2019

Staying on the topic of value for money, this study by David Wamble and colleagues looks at the extent to which the increased spending in health care in the US has translated into better health outcomes over time.

It’s clearly reassuring that, for 6 out of the 7 conditions they looked at, health outcomes have improved in 2015 compared to 1996. After all, that’s the goal of investing in medical R&D, although it remains unclear how much of this difference can be attributed to health care versus other things that have happened at the same time that could have improved health outcomes.

I wasn’t sure about the inflation adjustment for the costs, so I’d be grateful for your thoughts via comments or Twitter. In my view, we would underestimate the cost increases if we used a medical price inflation index, because such indices reflect the specific increase in health care prices, for example due to new drugs being priced high at launch. As I understand it, the main results use the US Consumer Price Index, which reflects the average increase in prices over time rather than the increase specific to health care.

However, patients may not have seen their income rise with inflation. This means that the cost of health care may represent a disproportionally greater share of people’s income. And that the inflation adjustment may downplay the impact of health care costs on people’s pockets.
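The choice of deflator matters more than it may first appear. A minimal sketch below illustrates the point with hypothetical numbers (all figures are made up for illustration and are not from the paper): deflating a 1996 treatment cost to 2015 dollars with a general CPI versus a faster-growing medical-care price index, and seeing how the apparent real cost increase shrinks under the medical index.

```python
# Hypothetical example: how the choice of price index changes the apparent
# real growth in treatment costs between 1996 and 2015.

cost_1996 = 10_000       # nominal cost of treating a condition in 1996 (hypothetical)
cost_2015 = 25_000       # nominal cost of treating it in 2015 (hypothetical)

cpi_growth = 1.5         # cumulative general-price inflation, 1996 -> 2015 (hypothetical)
medical_growth = 2.1     # cumulative medical-price inflation, 1996 -> 2015 (hypothetical)

# Express the 1996 cost in 2015 prices under each index.
cost_1996_in_2015_cpi = cost_1996 * cpi_growth       # 15,000
cost_1996_in_2015_med = cost_1996 * medical_growth   # about 21,000

# The implied real cost increase depends heavily on the deflator chosen.
real_increase_cpi = cost_2015 - cost_1996_in_2015_cpi   # 10,000
real_increase_med = cost_2015 - cost_1996_in_2015_med   # about 4,000

print(f"Real cost increase (CPI-deflated):     {real_increase_cpi:,.0f}")
print(f"Real cost increase (medical-deflated): {real_increase_med:,.0f}")
```

Under the faster-growing medical index, the 1996 cost is inflated further, so the measured real increase is much smaller; the CPI, by contrast, attributes more of the nominal growth to health care itself.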

This study caught my eye and it is quite thought-provoking. It’s a good addition to the literature on the cost-effectiveness of US health care. But I’d wager that the question remains: to what extent is today’s medical care better value for money than in the past?

The dos and don’ts of influencing policy: a systematic review of advice to academics. Palgrave Communications Published 19th February 2019

We would all like to see our research findings influence policy, but how do we do this in practice? Well, look no further: Kathryn Oliver and Paul Cairney reviewed the literature, summarised it in 8 key tips, and thought through their implications.

To sum up, it’s not easy to influence policy; advice about how to influence policy is rarely based on empirical evidence, and there are a few risks to trying to become a mover-and-shaker in policy circles.

They discuss three dilemmas in policy engagement. Should academics try to influence policy? How should academics influence policy? What is the purpose of academics’ engagement in policy making?

I particularly enjoyed reading about the approaches to influence policy. Tools such as evidence synthesis and social media should make evidence more accessible, but their effectiveness is unclear. Another approach is to craft stories to create a compelling case for the policy change, which seems to me to be very close to marketing. The third approach is co-production, which they note can give rise to accusations of bias and can have some practical challenges in terms of intellectual property and keeping one’s independence.

I found this paper quite refreshing. It not only boiled down the advice circulating online about how to influence policy into its key messages but also thought through the practical challenges in its application. The impact agenda seems to be here to stay, at least in the UK. This paper is an excellent source of advice on the risks and benefits of trying to navigate the policy world.


Chris Sampson’s journal round-up for 4th February 2019


Patient choice and provider competition – quality enhancing drivers in primary care? Social Science & Medicine Published 29th January 2019

There’s no shortage of studies in economics claiming to identify the impact (or lack of impact) of competition in the market for health care. The evidence has brought us close to a consensus that greater competition might improve quality, so long as providers don’t compete on price. However, many of these studies aren’t able to demonstrate the mechanism through which competition might improve quality, and the causality is therefore speculative. The research reported in this article was an attempt to see whether the supposed mechanisms for quality improvement actually exist. The authors distinguish between the demand-side mechanisms of competition-increasing, quality-improving reforms (i.e. changes in patient behaviour) and the supply-side mechanisms (i.e. changes in provider behaviour), asserting that the supply side has been neglected in the research.

The study is based on primary care in Sweden’s two largest cities, where patients can choose their primary care practice, which could be a private provider. Key is the fact that patients can switch between providers as often as they like, and with fewer barriers to doing so than in the UK. Prospective patients have access to some published quality indicators. With the goal of maximum variation, the researchers recruited 13 primary health care providers for semi-structured interviews with the practice manager and (in most cases) one or more of the practice GPs. The interview protocol included questions about the organisation of patient visits, information received about patients’ choices, market situation, reimbursement, and working conditions. Interview transcripts were coded and a framework established. Two overarching themes were ‘local market conditions’ and ‘feedback from patient choice’.

Most interviewees did not see competitors in the local market as a threat – on the contrary, providers are encouraged to cooperate on matters such as public health. Where providers did talk about competing, it was in terms of (speed of) access for patients, or in competition to recruit and keep staff. None of the interviewees were automatically informed of patients being removed from their list, and some managers reported difficulties in actually knowing which patients on their list were still genuinely on it. Even where these data were more readily available, nobody had access to information on reasons for patients leaving. Managers saw greater availability of this information as useful for quality improvement, while GPs tended to think it could be useful in ensuring continuity of care. Still, most expressed no desire to expand their market share. Managers reported using marketing efforts in response to greater competition generally, rather than as a response to observed changes within their practice. But most relied on reputation. Some reported becoming more service-minded as a result of choice reforms.

It seems that practices need more information to be able to act on competitive pressures. But most practices don’t care about it, because they don’t want to expand and they face no risk of there being a shortage of patients (in cities, at least). And, even if they did want to act on the information, chances are it would just create an opportunity for them to improve access as a way of cherry-picking younger and healthier people who demand convenience. Primary care providers (in this study, at least) are not income maximisers but satisficers (they want to break even), so there isn’t much scope for reforms to encourage providers to compete for new patients. Patient choice reforms may improve quality, but it isn’t clear that this has anything to do with competitive pressure.

Maximising the impact of patient reported outcome assessment for patients and society. BMJ [PubMed] Published 24th January 2019

Patient-reported outcome measures (PROMs) have been touted as a way of improving patient care. Yet, their use around the world is fragmented. In this paper, the authors make some recommendations about how we might use PROMs to improve patient care. The authors summarise some of the benefits of using PROMs and discuss some of the ways that they’ve been used in the UK.

Five key challenges in the use of PROMs are specified: i) appropriate and consistent selection of the best measures; ii) ethical collection and reporting of PROM data; iii) data collection, analysis, reporting, and interpretation; iv) data logistics; and v) a lack of coordination and efficiency. To address these challenges, the authors recommend an ‘integrated’ approach. To achieve this, stakeholder engagement is important and a governance framework needs to be developed. A handy table of current uses is provided.

I can’t argue with what the paper proposes, but it outlines an idealised scenario rather than any firm and actionable recommendations. What the authors don’t discuss is the fact that the use of PROMs in the UK is flailing. The NHS PROMs programme has been scaled back, measures have been dropped from the QOF, and the EQ-5D has been dropped from the GP Patient Survey. Perhaps we need bolder recommendations and new ideas to turn the tide.

Check your checklist: the danger of over- and underestimating the quality of economic evaluations. PharmacoEconomics – Open [PubMed] Published 24th January 2019

This paper outlines the problems associated with misusing methodological and reporting checklists. The author argues that the current number of checklists available in the context of economic evaluation and HTA (13, apparently) is ‘overwhelming’. Three key issues are discussed. First, researchers choose the wrong checklist. A previous review found that the Drummond, CHEC, and Philips checklists were regularly used in the wrong context. Second, checklists can be overinterpreted, resulting in incorrect conclusions. A complete checklist does not mean that a study is perfect, and different features are of varying importance in different studies. Third, checklists are misused, with researchers deciding which items are or aren’t relevant to their study, without guidance.

The author suggests that more guidance is needed and that a checklist for selecting the correct checklist could be the way to go. The issue of updating checklists over time – and who ought to be responsible for this – is also raised.

In general, the tendency seems to be to broaden the scope of general checklists and to develop new checklists for specific methodologies, requiring the application of multiple checklists. As methods develop, they become increasingly specialised and heterogeneous. I think there’s little hope for checklists in this context unless they’re pared down and used as a reminder of the more complex guidance that’s needed to specify suitable methods and achieve adequate reporting. ‘Check your checklist’ is a useful refrain, though I reckon ‘chuck your checklist’ can sometimes be a better strategy.

A systematic review of dimensions evaluating patient experience in chronic illness. Health and Quality of Life Outcomes [PubMed] Published 21st January 2019

Back to PROMs and PRE(xperience)Ms. This study sets out to understand what it is that patient-reported measures are being used to capture in the context of chronic illness. The authors conducted a systematic review, screening 2,375 articles and ultimately including 107 articles that investigated the measurement properties of chronic (physical) illness PROMs and PREMs.

29 questionnaires were about (health-related) quality of life, 19 about functional status or symptoms, 20 on feelings and attitudes about illness, 19 assessing attitudes towards health care, and 20 on patient experience. The authors provide some nice radar charts showing the percentage of questionnaires that included each of 12 dimensions: i) physical, ii) functional, iii) social, iv) psychological, v) illness perceptions, vi) behaviours and coping, vii) effects of treatment, viii) expectations and satisfaction, ix) experience of health care, x) beliefs and adherence to treatment, xi) involvement in health care, and xii) patient’s knowledge.

The study supports the idea that a patient’s lived experience of illness and treatment, and adaptation to that, has been judged to be important in addition to quality of life indicators. The authors recommend that no measure should try to capture everything because there are simply too many concepts that could be included. Rather, researchers should specify the domains of interest and clearly define them for instrument development.
