Chris Sampson’s journal round-up for 4th February 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Patient choice and provider competition – quality enhancing drivers in primary care? Social Science & Medicine Published 29th January 2019

There’s no shortage of studies in economics claiming to identify the impact (or lack of impact) of competition in the market for health care. The evidence has brought us close to a consensus that greater competition might improve quality, so long as providers don’t compete on price. However, many of these studies aren’t able to demonstrate the mechanism through which competition might improve quality, so the causality is speculative. The research reported in this article was an attempt to see whether the supposed mechanisms for quality improvement actually exist. The authors distinguish between the demand-side mechanisms of reforms that increase competition to improve quality (i.e. changes in patient behaviour) and the supply-side mechanisms (i.e. changes in provider behaviour), asserting that the supply side has been neglected in the research.

The study is based on primary care in Sweden’s two largest cities, where patients can choose their primary care practice, which could be a private provider. Key is the fact that patients can switch between providers as often as they like, and with fewer barriers to doing so than in the UK. Prospective patients have access to some published quality indicators. With the goal of maximum variation, the researchers recruited 13 primary health care providers for semi-structured interviews with the practice manager and (in most cases) one or more of the practice GPs. The interview protocol included questions about the organisation of patient visits, information received about patients’ choices, market situation, reimbursement, and working conditions. Interview transcripts were coded and a framework established. Two overarching themes were ‘local market conditions’ and ‘feedback from patient choice’.

Most interviewees did not see competitors in the local market as a threat – on the contrary, providers are encouraged to cooperate on matters such as public health. Where providers did talk about competing, it was in terms of (speed of) access for patients, or in competition to recruit and retain staff. None of the interviewees were automatically informed when patients were removed from their list, and some managers reported difficulties in knowing which patients on their list were still genuinely on it. Even where these data were more readily available, nobody had access to information on the reasons for patients leaving. Managers saw greater availability of this information as useful for quality improvement, while GPs tended to think it could be useful in ensuring continuity of care. Still, most expressed no desire to expand their market share. Managers reported using marketing efforts in response to greater competition generally, rather than as a response to observed changes within their practice. But most relied on reputation. Some reported becoming more service-minded as a result of choice reforms.

It seems that practices need more information to be able to act on competitive pressures. But most practices don’t care about it, because they don’t want to expand and they face no risk of a shortage of patients (in cities, at least). And, even if they did want to act on the information, chances are it would just create an opportunity for them to improve access as a way of cherry-picking younger and healthier people who demand convenience. Primary care providers (in this study, at least) are not income maximisers but satisficers (they want to break even), so there isn’t much scope for reforms to encourage providers to compete for new patients. Patient choice reforms may improve quality, but it isn’t clear that this has anything to do with competitive pressure.

Maximising the impact of patient reported outcome assessment for patients and society. BMJ [PubMed] Published 24th January 2019

Patient-reported outcome measures (PROMs) have been touted as a way of improving patient care. Yet, their use around the world is fragmented. In this paper, the authors make some recommendations about how we might use PROMs to improve patient care. The authors summarise some of the benefits of using PROMs and discuss some of the ways that they’ve been used in the UK.

Five key challenges in the use of PROMs are specified: i) appropriate and consistent selection of the best measures; ii) ethical collection and reporting of PROM data; iii) data collection, analysis, reporting, and interpretation; iv) data logistics; and v) a lack of coordination and efficiency. To address these challenges, the authors recommend an ‘integrated’ approach. To achieve this, stakeholder engagement is important and a governance framework needs to be developed. A handy table of current uses is provided.

I can’t argue with what the paper proposes, but it outlines an idealised scenario rather than any firm and actionable recommendations. What the authors don’t discuss is the fact that the use of PROMs in the UK is flailing. The NHS PROMs programme has been scaled back, measures have been dropped from the QOF, and the EQ-5D has been dropped from the GP Patient Survey. Perhaps we need bolder recommendations and new ideas to turn the tide.

Check your checklist: the danger of over- and underestimating the quality of economic evaluations. PharmacoEconomics – Open [PubMed] Published 24th January 2019

This paper outlines the problems associated with misusing methodological and reporting checklists. The author argues that the current number of checklists available in the context of economic evaluation and HTA (13, apparently) is ‘overwhelming’. Three key issues are discussed. First, researchers choose the wrong checklist. A previous review found that the Drummond, CHEC, and Philips checklists were regularly used in the wrong context. Second, checklists can be overinterpreted, resulting in incorrect conclusions. A complete checklist does not mean that a study is perfect, and different features are of varying importance in different studies. Third, checklists are misused, with researchers deciding which items are or aren’t relevant to their study, without guidance.

The author suggests that more guidance is needed and that a checklist for selecting the correct checklist could be the way to go. The issue of updating checklists over time – and who ought to be responsible for this – is also raised.

In general, the tendency seems to be to broaden the scope of general checklists and to develop new checklists for specific methodologies, requiring the application of multiple checklists. As methods develop, they become increasingly specialised and heterogeneous. I think there’s little hope for checklists in this context unless they’re pared down and used as a reminder of the more complex guidance that’s needed to specify suitable methods and achieve adequate reporting. ‘Check your checklist’ is a useful refrain, though I reckon ‘chuck your checklist’ can sometimes be a better strategy.

A systematic review of dimensions evaluating patient experience in chronic illness. Health and Quality of Life Outcomes [PubMed] Published 21st January 2019

Back to PROMs and PRE(xperience)Ms. This study sets out to understand what it is that patient-reported measures are being used to capture in the context of chronic illness. The authors conducted a systematic review, screening 2,375 articles and ultimately including 107 that investigated the measurement properties of PROMs and PREMs for chronic (physical) illness.

Twenty-nine questionnaires were about (health-related) quality of life, 19 about functional status or symptoms, 20 on feelings and attitudes about illness, 19 assessing attitudes towards health care, and 20 on patient experience. The authors provide some nice radar charts showing the percentage of questionnaires that included each of 12 dimensions: i) physical, ii) functional, iii) social, iv) psychological, v) illness perceptions, vi) behaviours and coping, vii) effects of treatment, viii) expectations and satisfaction, ix) experience of health care, x) beliefs and adherence to treatment, xi) involvement in health care, and xii) patient’s knowledge.

The study supports the idea that a patient’s lived experience of illness and treatment, and adaptation to that, has been judged to be important in addition to quality of life indicators. The authors recommend that no measure should try to capture everything because there are simply too many concepts that could be included. Rather, researchers should specify the domains of interest and clearly define them for instrument development.


Rita Faria’s journal round-up for 28th January 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Appraising the value of evidence generation activities: an HIV modelling study. BMJ Global Health [PubMed] Published 7th December 2018

How much should we spend on implementing our health care strategy versus getting more information to devise a better strategy? Should we devolve budgets to regions or administer the budget centrally? These are difficult questions, and this new paper by Beth Woods et al takes a brilliant stab at answering them.

The paper looks at the HIV prevention and treatment policies in Zambia. It starts by finding the most cost-effective strategy and the corresponding budget in each region, given what is currently known about the prevalence of the infection, the effectiveness of interventions, etc. The idea is that the regions receive a cost-effective budget to implement a cost-effective strategy. The issue is that the cost-effective strategy and budget are devised according to what we currently know. In practice, regions might face a situation on the ground which is different from what was expected. Regions might not have enough budget to implement the strategy or might have some leftover.

What if we spend some of the budget on getting more information to make a better decision? This paper considers the value of perfect information given the costs of research. Depending on the size of the budget and the cost of research, it may be worthwhile to divert some funds to get more information. But what if we had more flexibility in the budgetary policy? This paper tests two more budgetary options: a national hard budget but with the flexibility to transfer funds from under- to overspending regions, and a regional hard budget with a contingency fund.
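
For readers less familiar with value of information methods, here’s a minimal sketch of the expected value of perfect information (EVPI) logic in Python. The numbers are entirely made up and bear no relation to the authors’ model; the point is only the mechanics of comparing a decision made with current information to one made with perfect information.

```python
# Minimal EVPI sketch with hypothetical numbers (not the authors' model).
# EVPI is the most a decision maker should pay for research that would
# resolve all parameter uncertainty before committing to a strategy.
import numpy as np

rng = np.random.default_rng(42)
n_sim = 100_000

# Hypothetical net monetary benefit of two strategies, simulated under
# current parameter uncertainty (prevalence, effectiveness, etc.).
nb = np.column_stack([
    rng.normal(loc=10_000, scale=3_000, size=n_sim),  # strategy A
    rng.normal(loc=11_000, scale=6_000, size=n_sim),  # strategy B
])

# With current information, we must commit to one strategy for all futures.
nb_current = nb.mean(axis=0).max()

# With perfect information, we could pick the best strategy in each
# simulated future, then average across futures.
nb_perfect = nb.max(axis=1).mean()

evpi = nb_perfect - nb_current
print(f"EVPI per decision: {evpi:,.0f}")
# Research is potentially worthwhile only if EVPI, scaled up to the
# population affected by the decision, exceeds the cost of the research.
```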

The results are remarkable. The best budgetary policy is to have a national budget with the flexibility to reallocate funds across regions. This is a fascinating paper, with implications not only for prioritisation and budget setting in LMICs but also for high-income countries. For example, the 2012 Health and Social Care Act broke down PCTs into smaller CCGs and gave them hard budgets. Some CCGs went into deficit, and there are reports that some interventions have been cut back as a result. There are probably many reasons for the deficits, but this paper shows how hard regional budgets can have negative consequences.

Health economics methods for public health resource allocation: a qualitative interview study of decision makers from an English local authority. Health Economics, Policy and Law [PubMed] Published 11th January 2019

Our first paper looked at how to use cost-effectiveness to allocate resources between regions and across health care services and research. Emma Frew and Katie Breheny look at how decisions are actually made in practice, but this time in a local authority in England. Another change of the 2012 Health and Social Care Act was to move public health responsibilities from the NHS to local authorities. Local authorities are now given a ring-fenced budget to implement cost-effective interventions that best match their needs. How do they make decisions? Thanks to this paper, we’re about to find out.

This paper is an enjoyable read and quite an eye-opener. It was startling that health economics evidence was not much used in practice. But the barriers that were cited are not insurmountable. And the suggestions by the interviewees were really useful. There were suggestions that economic evaluations should consider the local context, to give a fair picture of the impact of an intervention on services and on the population, and should move beyond the trial into the real world. Equity was mentioned too, as was broadening the outcomes beyond health. Fortunately, the health economics community is working on many of these issues.

Lastly, there was a clear message to make economic evidence accessible to lay audiences. This is a topic really close to my heart, and something I’d like to help improve. We have to make our work easy to understand and use. Otherwise, it may stay locked away in papers rather than doing what we intended it for, which is, at least in my view, to help inform decisions and to improve people’s lives.

I found this paper reassuring in that there is clearly a need for economic evidence and a desire to use it. Yes, there are some teething issues, but we’re working in the right direction. In sum, the future for health economics is bright!

Survival extrapolation in cancer immunotherapy: a validation-based case study. Value in Health Published 13th December 2018

Often, the cost-effectiveness of cancer drugs hangs on the method used to extrapolate overall survival. This is because many cancer drugs receive their marketing authorisation before most patients in the trial have died. Extrapolation is tested extensively in the sensitivity analysis, and it is the subject of many discussions in NICE appraisal committees. Ultimately, at the point of making the decision, the correct method of extrapolation is a known unknown. Only in hindsight can we know for sure what the best choice was.

Ash Bullement and colleagues take advantage of hindsight to identify the best method for extrapolating survival from a clinical trial of an immunotherapy drug. Survival after treatment with immunotherapy drugs is more difficult to predict because some patients can survive for a very long time, while others have much poorer outcomes. The authors fitted survival models to the 3-year data cut, which was available at the time of the NICE technology appraisal. They then compared their predictions to the observed survival in the 5-year data cut and to long-term survival trends from registry data. They found that a piecewise model and a mixture-cure model gave the best predictions at 5 years.
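
To make the validation exercise concrete, here’s a rough Python sketch of the general approach: fit a standard parametric model and a mixture-cure model to an early data cut, then extrapolate. The data are synthetic and the model choices are simplified (the paper fitted a wider range of models), so treat this as illustration rather than replication.

```python
# Sketch: fit a Weibull and a Weibull mixture-cure model to censored
# survival data from a "3-year data cut", then extrapolate to 5 years.
# Synthetic data; illustrative only.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)

# Synthetic cohort: a cured fraction plus Weibull failures,
# administratively censored at 3 years.
n = 500
cured = rng.random(n) < 0.25
t_event = np.where(cured, np.inf, rng.weibull(1.2, n) * 1.5)
time = np.minimum(t_event, 3.0)
event = (t_event <= 3.0).astype(float)

def weibull_logsf(t, k, lam):   # log survival function
    return -(t / lam) ** k

def weibull_logpdf(t, k, lam):  # log density
    return np.log(k / lam) + (k - 1) * np.log(t / lam) - (t / lam) ** k

def nll_weibull(params):
    k, lam = np.exp(params)     # log-scale keeps parameters positive
    return -np.sum(event * weibull_logpdf(time, k, lam)
                   + (1 - event) * weibull_logsf(time, k, lam))

def nll_mixture_cure(params):
    pi = expit(params[0])       # cured fraction, mapped into (0, 1)
    k, lam = np.exp(params[1:])
    s_u = np.exp(weibull_logsf(time, k, lam))   # survival if uncured
    f_u = np.exp(weibull_logpdf(time, k, lam))  # density if uncured
    # Deaths must come from the uncured; censored patients may be either.
    return -np.sum(event * np.log((1 - pi) * f_u)
                   + (1 - event) * np.log(pi + (1 - pi) * s_u))

fit_w = minimize(nll_weibull, x0=[0.0, 0.0], method="Nelder-Mead")
fit_mc = minimize(nll_mixture_cure, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")

# Extrapolated survival at 5 years -- the prediction one would then
# validate against the later data cut and registry trends.
k_w, lam_w = np.exp(fit_w.x)
pi_mc, (k_mc, lam_mc) = expit(fit_mc.x[0]), np.exp(fit_mc.x[1:])
s5_w = np.exp(weibull_logsf(5.0, k_w, lam_w))
s5_mc = pi_mc + (1 - pi_mc) * np.exp(weibull_logsf(5.0, k_mc, lam_mc))
print(f"Predicted S(5y): Weibull {s5_w:.2f}, mixture-cure {s5_mc:.2f}")
```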

This is a relevant paper for those of us who work in the technology appraisal world. I have to admit that I can be sceptical of piecewise and mixture-cure models, but they definitely have a role in our toolbox for survival extrapolation. Ideally, we’d have a study like this for every technology appraisal that hinges on survival extrapolation, so that we can transfer lessons across cancers and across classes of drugs. With time, we would learn more about what works best for which condition or drug. Ultimately, we may reach a stage where we can approach the extrapolation with less inherent uncertainty.


James Altunkaya’s journal round-up for 3rd September 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Sensitivity analysis for not-at-random missing data in trial-based cost-effectiveness analysis: a tutorial. PharmacoEconomics [PubMed] [RePEc] Published 20th April 2018

Last month, we highlighted a Bayesian framework for imputing missing data in economic evaluation. The paper dealt with the issue of departure from the ‘Missing at Random’ (MAR) assumption by using a Bayesian approach to specify a plausible missingness model from the results of expert elicitation. This was used to estimate a prior distribution for the unobserved terms in the outcomes model.

For those less comfortable with Bayesian estimation, this month we highlight a tutorial paper from the same authors, outlining an approach to assessing the impact of plausible departures from the ‘Missing at Random’ assumption on cost-effectiveness results. Given poor adherence to current recommendations for best practice in handling and reporting missing data, an incremental approach to improving missing data methods in health research may be more realistic. The authors supply accompanying Stata code.

The paper investigates the importance of assuming a degree of ‘informative’ missingness (i.e. ‘Missing Not at Random’, MNAR) in sensitivity analyses. In a case study, the authors present a range of scenarios which assume a decrement of 5–10% in the quality of life of patients with missing health outcomes, relative to multiple imputation estimates based on observed characteristics under standard ‘Missing at Random’ assumptions. This represents an assumption that, controlling for all observed characteristics used in multiple imputation, those with complete quality of life profiles may have higher quality of life than those with incomplete surveys.

Quality of life decrements were implemented in the control and treatment arms separately, and then jointly, in six scenarios. This aimed to demonstrate the sensitivity of cost-effectiveness judgements to the possibility of a different missingness mechanism in each arm. The authors similarly investigate sensitivity to health costs being higher among those with missing data than would be predicted from observed characteristics under ‘Missing at Random’ imputation. Finally, sensitivity to a simultaneous departure from ‘Missing at Random’ in both health outcomes and health costs is investigated.
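
The core of the approach is simple enough to sketch. The authors’ own code is in Stata; the toy Python version below, with made-up data and a deliberately crude stand-in for multiple imputation, is only meant to show how arm-specific MNAR decrements feed through to the incremental QALY estimate.

```python
# Toy MNAR sensitivity analysis: penalise imputed QALYs by an assumed
# decrement in each arm and see how the incremental result moves.
# Made-up data; the imputation step is a crude placeholder.
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical per-patient QALYs by arm; ~30% of outcomes are missing.
qaly = {"control": rng.normal(0.70, 0.20, n),
        "treatment": rng.normal(0.75, 0.20, n)}
missing = {arm: rng.random(n) < 0.3 for arm in qaly}

# Stand-in for MAR multiple imputation: fill with the observed arm mean
# (a real analysis would impute from covariates, many times over).
imputed = {arm: np.where(missing[arm], qaly[arm][~missing[arm]].mean(),
                         qaly[arm])
           for arm in qaly}

def incremental_qalys(dec_control, dec_treatment):
    """Mean QALY gain after applying an MNAR decrement (as a proportion)
    to the imputed values in each arm."""
    adj = {arm: imputed[arm] * (1 - missing[arm] * dec)
           for arm, dec in [("control", dec_control),
                            ("treatment", dec_treatment)]}
    return adj["treatment"].mean() - adj["control"].mean()

# Decrements applied to each arm separately and then jointly, echoing
# the paper's scenario structure.
for dc, dt in [(0.00, 0.00), (0.05, 0.00), (0.00, 0.05), (0.05, 0.05),
               (0.10, 0.00), (0.00, 0.10), (0.10, 0.10)]:
    print(f"control -{dc:.0%}, treatment -{dt:.0%}: "
          f"dQALY = {incremental_qalys(dc, dt):+.3f}")
```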

The proposed sensitivity analyses provide a useful heuristic to assess what degree of difference between missing and non-missing subjects on unobserved characteristics would be necessary to change cost-effectiveness decisions. The authors admit this framework could appear relatively crude to those comfortable with more advanced missing data approaches such as those outlined in last month’s round-up. However, this approach should appeal to those interested in presenting the magnitude of uncertainty introduced by missing data assumptions, in a way that is easily interpretable to decision makers.

The impact of waiting for intervention on costs and effectiveness: the case of transcatheter aortic valve replacement. The European Journal of Health Economics [PubMed] [RePEc] Published September 2018

This paper appears in print this month and sparked interest as one of comparatively few studies on the cost-effectiveness of waiting lists. Given the growing use of constrained optimisation methods in health outcomes research, highlighted in this month’s editorial in Value in Health, there is rightly interest in extending the traditional sphere of economic evaluation from drugs and devices to understanding the trade-offs of investing in a wider range of policy interventions, using a common metric of costs and QALYs. Rachel Meacock’s paper earlier this year did a great job of outlining some of the challenges involved in broadening the scope of economic evaluation to more general decisions in health service delivery.

The authors set out to understand the cost-effectiveness of delaying a cardiac treatment (TAVR) using a waiting list of up to 12 months, compared to a policy of immediate treatment. The effectiveness of treatment at 3, 6, 9, and 12 months after initial diagnosis, health decrements during waiting, and the corresponding health costs during the wait and post-treatment were derived from a small observational study. As the treatment is studied in an elderly population, a non-ignorable proportion of patients die whilst waiting for surgery. This translates into lower modelled costs, but also fewer quality-adjusted life years, in modelled cohorts with any delay relative to a policy of immediate treatment. The authors conclude that eliminating all waiting time for TAVR would produce population health at a rate of ~€12,500 per QALY gained.
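
To see the mechanics, here’s a back-of-the-envelope sketch with invented numbers (the paper’s actual inputs came from its observational study): deaths while waiting lower both the costs and the QALYs of the waiting-list arm, and the ratio of the differences gives the ICER of immediate treatment.

```python
# Back-of-the-envelope version of the comparison; all values invented.

# Immediate-treatment arm: hypothetical per-patient means (discounted).
qalys_immediate = 6.00
cost_immediate = 40_000.0   # EUR

# 12-month-wait arm: a fraction of patients die before surgery,
# accruing little benefit and avoiding the cost of the procedure.
p_die_waiting = 0.10
qalys_wait = (1 - p_die_waiting) * 5.50 + p_die_waiting * 0.40
cost_wait = (1 - p_die_waiting) * 41_000 + p_die_waiting * 5_000

d_qalys = qalys_immediate - qalys_wait   # QALYs gained per patient
d_cost = cost_immediate - cost_wait      # extra cost per patient
print(f"ICER of immediate treatment: EUR {d_cost / d_qalys:,.0f} per QALY")
```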

However, based on the modelling presented, the authors lack the ability to make cost-effectiveness judgements of this sort. Waiting lists exist for a reason, chiefly a lack of clinical capacity to treat patients immediately. In taking a decision to treat patients immediately in one disease area, we therefore need some judgement as to whether the health displaced among now-untreated patients in another disease area is of greater, lesser, or equal magnitude to that gained by treating TAVR patients immediately. Alternatively, the modelling should include the cost of acquiring additional clinical capacity (such as theatre space) to treat TAVR patients immediately, so as not to displace other treatments. In that case, the ICER is likely to be much higher, due to the large cost of the new resources needed to reduce waiting times to zero.

Given the data available, a simple improvement to the paper would be to use current waiting times (already gathered from the observational study) as the ‘standard of care’ arm. The estimated change in quality of life and health care resource cost from reducing waiting times to zero, relative to levels observed in current practice, could then be calculated. This could in turn be used to calculate the maximum acceptable cost of acquiring the additional treatment resources needed to treat patients with no waiting time, given current national willingness-to-pay thresholds.
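
Under a willingness-to-pay threshold, that maximum acceptable cost is simply the net monetary benefit of eliminating waiting, before any capacity costs are incurred. A tiny sketch, reusing the hypothetical numbers from the sketch above:

```python
# Maximum acceptable cost of extra capacity (per patient), given a
# willingness-to-pay threshold. Hypothetical numbers throughout.
wtp_threshold = 20_000.0   # EUR per QALY
d_qalys = 1.01             # QALY gain per patient from zero waiting
d_cost = 2_600.0           # extra treatment cost per patient (EUR)

max_capacity_cost = wtp_threshold * d_qalys - d_cost
print(f"Max acceptable capacity cost: EUR {max_capacity_cost:,.0f} per patient")
```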

Admittedly, there remain problems in using the authors’ chosen observational dataset to calculate quality of life and cost outcomes for patients treated at different time points. Waiting times in this ‘real world’ observational study were prioritised based on clinical assessment of patients’ treatment need. We would therefore expect the quality of life lost during a waiting period to be lower for patients treated at 12 months in the observational study than for the group of patients judged to need immediate treatment. A previous study in cardiac care took on the more manageable task of investigating the cost-effectiveness of different prioritisation strategies for the waiting list, investigating the sensitivity of conclusions to varying a fixed maximum wait time for the last patient treated.

This study therefore demonstrates some of the difficulties in attempting to make cost-effectiveness judgements about waiting time policy. Given that the cost-effectiveness of reducing waiting times in different disease areas is expected to vary, based on relative importance of waiting for treatment on short and long-term health outcomes and costs, this remains an interesting area for economic evaluation to explore. In the context of the current focus on constrained optimisation techniques across different areas in healthcare (see ISPOR task force), it is likely that extending economic evaluation to evaluate a broader range of decision problems on a common scale will become increasingly important in future.

Understanding and identifying key issues with the involvement of clinicians in the development of decision-analytic model structures: a qualitative study. PharmacoEconomics [PubMed] Published 17th August 2018

This paper gathers evidence from interviews with clinicians and modellers, with the aim to improve the nature of the working relationship between the two fields during model development.

Researchers gathered opinion from a variety of settings, including industry. The main report focusses on evidence from two case studies – one tracking the working relationship between modellers and a single clinical advisor at a UK university, with the second gathering evidence from a UK policy institute – where modellers worked with up to 11 clinical experts per meeting.

Some of the authors’ conclusions are not particularly surprising. Modellers reported difficulty in recruiting clinicians to advise on model structures, and further difficulty in then engaging the recruited clinicians to provide advice relevant to the model-building process. Specific comments suggested that some clinical advisors found it difficult to identify representative patient experiences, instead diverting modellers’ attention towards rare outlier events.

Study responses suggested that currently only one or two clinicians are typically consulted during model development. The authors recommend involving a larger group of clinicians at this stage of the modelling process, with a more varied range of clinical experience (junior as well as senior clinicians, with some geographical variation). This is intended to help ensure that the clinical pathways modelled are generalisable. The experience of the single clinical collaborator in the case study based at a UK university, compared to the 11 clinicians at the policy institute studied, perhaps also illustrates a general problem of inadequate compensation for clinical time within the university system. The authors also advocate making relevant training in decision modelling available to clinicians, to help make more efficient use of participants’ time during model building. The clinicians sampled were supportive of this view, citing the need for further guidance from modellers on the nature of their expected contribution.

This study ties into the general literature regarding structural uncertainty in decision analytic models. In advocating the early contribution of a larger, more diverse group of clinicians in model development, the authors advocate a degree of alignment between clinical involvement during model structuring, and guidelines for eliciting parameter estimates from clinical experts. Similar problems, however, remain for both fields, in recruiting clinical experts from sufficiently diverse backgrounds to provide a valid sample.
