Shilpi Swami’s journal round-up for 9th December 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Performance of UK National Health Service compared with other high-income countries: observational study. BMJ [PubMed] Published 27th November 2019

The efficiencies and inefficiencies of the NHS have been debated in the UK in recent years. This new study compares the performance of the NHS with that of other high-income countries, based on observational data, and it has already attracted a lot of attention (almost 3,000 tweets and 6 news appearances since publication)!

The authors presented a descriptive analysis of the UK (England, Scotland, Northern Ireland, and Wales) compared to nine other countries (US, Canada, Germany, Australia, Sweden, France, Denmark, the Netherlands, and Switzerland), based on aggregated recent data from a range of sources (such as the OECD, the World Bank, the Institute for Health Metrics and Evaluation, and Eurostat). Good things first: on access to care, a lower proportion of people reported unmet need owing to costs. Waiting times were comparable with the other countries, except for specialist care. The UK also performed slightly better on patient safety metrics. The main challenge, however, is that NHS healthcare spending is lower and has been growing more slowly. This means fewer doctors and nurses, and doctors spending less time with patients. The authors suggest that

“Policy makers should consider how recent changes to nursing bursaries, the weakened pound, and uncertainty about the status of immigrant workers in the light of the Brexit referendum result have influenced these numbers and how to respond to these challenges in the future.”

Understandably, comparing healthcare systems across the world is difficult. The inclusion of the US, and the exclusion of other countries such as Spain and Japan, may need more justification, or could be a subject for future research.

To be fair, the article is a not-to-miss read. It is an eye-opener for those who think the NHS faces only a (too much) demand-side problem, and it confirms the perspective of those who think it is a (not enough) supply-side problem. Kudos to the hardworking doctors and nurses who are currently delivering efficiently in a stretched system! For sustainability, the NHS needs to consider increasing its spending to boost labour supply and long-term care.

A systematic review of methods to predict weight trajectories in health economic models of behavioral weight management programs: the potential role of psychosocial factors. Medical Decision Making [PubMed] Published 2nd December 2019

In economic modelling, assumptions are often made about the long-term impact of interventions, and it’s important that these assumptions are based on sound evidence and/or tested in sensitivity analysis, as these could affect the cost-effectiveness results.

The authors explored assumptions about weight trajectories to inform economic modelling of behavioural weight management programmes. Also, they checked their evidence sources, and whether these assumptions were based on any psychosocial variables (such as self-regulation, motivation, self-efficacy, and habit), as these are known to be associated with weight-loss trajectories.

The authors conducted a systematic literature review of economic models of weight management interventions aimed at reducing weight. In the 38 included studies, they found 6 types of assumptions about weight trajectories beyond the trial duration (weight loss maintained, weight loss regained immediately, linear weight regain, subgroup-specific trajectories, exponential decay of effect, and maintenance followed by regain), with only 15 of the studies reporting sources for these assumptions. The authors also elaborated on the assumptions and represented them graphically. Psychosocial variables were, in fact, measured in the evidence sources of some of the included studies. However, the authors found that none of the studies estimated their weight trajectory assumptions based on these! Though the article also reports on how the assumptions were tested in sensitivity analyses and their impact on results (where reported within the studies), it would have been interesting to see more insight into this. The authors argue that there is a need to investigate how psychosocial variables measured in trials can be used within health economic models to estimate weight trajectories and, thus, to improve the validity of cost-effectiveness estimates.
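
To make the assumption types concrete, here is a minimal Python sketch of how a post-trial effect could be projected under each shape. The function names and parameter values are illustrative assumptions of mine, not taken from the review or from any included study (subgroup-specific trajectories simply apply one of the other shapes per subgroup).

```python
import math

# A sketch of five of the six post-trial weight trajectory assumptions
# identified in the review (the sixth, subgroup-specific trajectories,
# applies one of these shapes per subgroup). All parameter values are
# illustrative, not taken from any included study.

def weight_loss(t, assumption, effect=5.0, regain_years=3.0,
                decay_rate=0.5, maintain_years=2.0):
    """Maintained weight loss (kg) at time t years after the trial ends."""
    if assumption == "maintained":
        return effect
    if assumption == "regained_immediately":
        return 0.0
    if assumption == "linear_regain":
        return max(effect * (1 - t / regain_years), 0.0)
    if assumption == "exponential_decay":
        return effect * math.exp(-decay_rate * t)
    if assumption == "maintenance_then_regain":
        if t <= maintain_years:
            return effect
        return max(effect * (1 - (t - maintain_years) / regain_years), 0.0)
    raise ValueError(f"unknown assumption: {assumption}")
```

In a cost-effectiveness model, the choice between these shapes would drive the long-term QALY and cost differences, which is exactly why unsourced assumptions are worrying.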

To me, the fact that only around half of the included studies reported sources for their assumptions about the long-term effects of the interventions, or tested those assumptions in sensitivity analysis, raises the bigger, long-debated question of the quality of economic evaluations! To conclude, the review is comprehensive and insightful. It is an interesting read and will be especially useful for those interested in modelling the long-term impacts of behavioural support programmes.

The societal monetary value of a QALY associated with EQ‐5D‐3L health gains. The European Journal of Health Economics [PubMed] Published 28th November 2019

Estimating the societal monetary value of a QALY (MVQALY) is mostly done to inform the range of cost-effectiveness thresholds used to guide funding decisions.

This study explores the degree of variation in the societal MVQALY based on a large sample of the population in Spain. It uses a discrete choice experiment and a time trade-off exercise to derive a value set for utilities, followed by a willingness to pay (WTP) questionnaire. The study reveals that societal values for a QALY, corresponding to different EQ-5D-3L health gains, vary between approximately €10,000 and €30,000. Counterintuitively, the MVQALY associated with larger improvements in quality of life was found to be lower than that associated with moderate gains, meaning that WTP is less than proportional to the size of the improvement. The authors explored whether budgetary restrictions could explain this by analysing the responses of individuals with higher incomes, and found that this may partly, but not fully, account for it. Since, at face value, this implies that the cost-per-QALY threshold should be lower for interventions producing the largest health improvements than for those producing moderate ones, it raises a lot of questions and forces you to interpret the findings with caution. The authors suggest that the diminishing MVQALY is, at least partly, produced by the lack of sensitivity of WTP responses.
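
The "less than proportional" pattern is easy to see with a toy calculation. The numbers below are entirely hypothetical (not the study's data); they just illustrate how a WTP that grows more slowly than the QALY gain mechanically implies a falling MVQALY, in the spirit of the €10,000–€30,000 range reported:

```python
# Hypothetical numbers only, to illustrate the pattern the authors report:
# if WTP rises less than proportionally with the size of the QALY gain,
# the implied monetary value per QALY falls as the gain grows.

scenarios = {0.1: 3_000.0, 0.2: 5_000.0, 0.5: 9_000.0}  # QALY gain -> WTP (EUR)

for gain, wtp in scenarios.items():
    print(f"gain {gain:.1f} QALYs: WTP EUR {wtp:,.0f} -> MVQALY EUR {wtp / gain:,.0f}")
```

Under these made-up figures, the implied MVQALY falls from €30,000 for the smallest gain to €18,000 for the largest, which is the shape of result that the authors attribute, at least partly, to insensitivity of WTP responses.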

Though I think the article does not provide a clear take-home message, it makes readers re-think the very norms underlying the estimation of monetary values of QALYs. The study ultimately raises more questions than it answers, but it could be useful for further exploration of this area of utility research.

Chris Sampson’s journal round-up for 20th May 2019

A new method to determine the optimal willingness to pay in cost-effectiveness analysis. Value in Health Published 17th May 2019

Efforts to identify a robust estimate of the willingness to pay for a QALY have floundered. Mostly, these efforts have relied on asking people about their willingness to pay. In the UK, we have moved away from using such estimates as a basis for setting cost-effectiveness thresholds in the context of resource allocation decisions. Instead, we have attempted to identify the opportunity cost of a QALY, which is perhaps even more difficult, but easier to justify in the context of a fixed budget. This paper seeks to inject new life into the willingness-to-pay approach by developing a method based on relative risk aversion.

The author outlines the relationship between relative risk aversion and the rate at which willingness-to-pay changes with income. Various candidate utility functions are described with respect to risk preferences, with a Weibull function being adopted for this framework. Estimates of relative risk aversion have been derived from numerous data sources, including labour supply, lottery experiments, and happiness surveys. These estimates from the literature are used to demonstrate the relationship between relative risk aversion and the ‘optimal’ willingness to pay (K), calibrated using the Weibull utility function. For an individual with ‘representative’ parameters plugged into their utility function, K is around twice the income level. K always increases with relative risk aversion.
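
The paper's own calibration of K isn't reproduced here, but its key ingredient, relative risk aversion r(y) = -y·u''(y)/u'(y), is easy to evaluate for any candidate utility function. The sketch below assumes a Weibull-type utility of income and hypothetical scale/shape parameters (`lam`, `k`) of my choosing; it shows only how r(y) varies with income under such a function, not the author's derivation of the optimal willingness to pay.

```python
import math

# Illustrative only: the paper calibrates an 'optimal' WTP (K) from a Weibull
# utility function, but its exact functional form and calibration are not
# reproduced here. This sketch evaluates relative risk aversion,
# r(y) = -y * u''(y) / u'(y), numerically for an assumed Weibull-type
# utility of income, u(y) = 1 - exp(-(y / lam) ** k).

def relative_risk_aversion(u, y, h=1.0):
    """Central-difference estimate of -y * u''(y) / u'(y)."""
    u1 = (u(y + h) - u(y - h)) / (2 * h)          # first derivative
    u2 = (u(y + h) - 2 * u(y) + u(y - h)) / h**2  # second derivative
    return -y * u2 / u1

lam, k = 50_000.0, 1.5  # hypothetical scale and shape parameters
u = lambda y: 1 - math.exp(-((y / lam) ** k))

for income in (25_000, 50_000, 100_000):
    print(f"income {income:>7,}: RRA = {relative_risk_aversion(u, income):.2f}")
```

Under this assumed function, relative risk aversion rises with income, which is consistent with the paper's point that K always increases with relative risk aversion, though the mapping from r(y) to K itself is the author's contribution and not shown here.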

Various normative questions are raised, including whether a uniform K should be adopted for everybody within the population, and whether individuals should be able to spend on health care on top of public provision. This approach certainly appears to be more straightforward than other approaches to estimating willingness-to-pay in health care, and may be well-suited to decentralised (US-style) resource allocation decision-making. It’s difficult to see how this framework could gain traction in the UK, but it’s good to see alternative approaches being proposed and I hope to see this work developed further.

Striving for a societal perspective: a framework for economic evaluations when costs and effects fall on multiple sectors and decision makers. Applied Health Economics and Health Policy [PubMed] Published 16th May 2019

I’ve always been sceptical of a ‘societal perspective’ in economic evaluation, and I have written in favour of a limited health care perspective. This is mostly for practical reasons. Being sufficiently exhaustive to identify a truly ‘societal’ perspective is so difficult that, in attempting to do so, there is a very high chance that you will produce estimates that are so inaccurate and imprecise that they are more dangerous than useful. But the fact is that there is no single decision-maker when it comes to public expenditure. Governments are made up of various departments, within which there are many levels and divisions. Not everybody will care about the health care perspective, so other objectives ought to be taken into account.

The purpose of this paper is to build on the idea of the ‘impact inventory’, described by the Second Panel on Cost-Effectiveness in Health and Medicine, which sought to address the challenge of multiple objectives. The extended framework described in this paper captures effects and opportunity costs associated with an intervention within various dimensions. These dimensions could (or should) align with decision-makers’ objectives. Trade-offs invariably require aggregation, and this aggregation could take place either within individuals or within dimensions – something not addressed by the Second Panel. The authors describe the implications of each approach to aggregation, providing visual representations of the impact inventory in each case. Aggregating within individuals requires a normative judgement about how each dimension is valued to the individual and then a judgement about how to aggregate for overall population net benefit. Aggregating across individuals within dimensions requires similar normative judgements. Where the chosen aggregation functions are linear and additive, both approaches will give the same results. But as soon as we start to consider equity concerns or more complex aggregation, we’ll see different decisions being indicated.
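
The two aggregation routes, and their equivalence under linear, additive aggregation, can be shown with a toy example. The individuals, dimensions, and impact values below are hypothetical, purely to illustrate the point:

```python
# Sketch of the two aggregation routes described in the paper, under assumed
# (hypothetical) per-person impacts in two dimensions. With linear, additive
# aggregation, both routes reach the same total.

impacts = {  # net benefit per individual, by dimension (illustrative numbers)
    "alice": {"health": 2.0, "consumption": -1.0},
    "bob":   {"health": 0.5, "consumption": 1.5},
}

# Route 1: aggregate within individuals first, then across the population.
per_person = {p: sum(d.values()) for p, d in impacts.items()}
route1 = sum(per_person.values())

# Route 2: aggregate within dimensions first, then across dimensions.
per_dim = {}
for d in impacts.values():
    for dim, v in d.items():
        per_dim[dim] = per_dim.get(dim, 0.0) + v
route2 = sum(per_dim.values())

assert route1 == route2  # linear + additive: order of aggregation is irrelevant
```

As soon as either step applies a non-linear function (for example, a concave equity weighting across individuals), the two routes can indicate different decisions, which is the crux of the normative judgements the authors make explicit.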

The authors adopt an example used by the Second Panel to demonstrate the decisions that would be made within a health-only perspective and then decisions that consider other dimensions. There could be a simple extension beyond health, such as including the impact on individuals’ consumption of other goods. Or it could be more complex, incorporating multiple dimensions, sectors, and decision-makers. For the more complex situation, the authors consider the inclusion of the criminal justice sector, introducing the number of crimes averted as an object of value.

It’s useful to think about the limitations of the Second Panel’s framing of the impact inventory and to make explicit the normative judgements involved. What this paper seems to be saying is that cross-sector decision-making is too complex to be adequately addressed by the Second Panel’s impact inventory. The framework described in this paper may be too abstract to be practically useful, and too vague to be foundational. But the complexities and challenges in multi-sector economic evaluation need to be spelt out – there is no simple solution.

Advanced data visualisation in health economics and outcomes research: opportunities and challenges. Applied Health Economics and Health Policy [PubMed] Published 4th May 2019

Computers can make your research findings look cool, which can help make people pay attention. But data visualisation can also be used as part of the research process and provide a means of more intuitively (and accurately) communicating research findings. The data sets used by health economists are getting bigger, which provides more opportunity and need for effective visualisation. The authors of this paper suggest that data visualisation techniques could be more widely adopted in our field, but that there are challenges and potential pitfalls to consider.

Decision modelling is an obvious context in which to use data visualisation, because models tend to involve large numbers of simulations. Dynamic visualisations can provide a means by which to better understand what is going on in these simulations, particularly with respect to uncertainty in estimates associated with alternative model structures or parameters. If paired with interactive models and customised dashboards, visualisation can make complex models accessible to non-expert users. Communicating patient outcomes data is also highlighted as a potential application, aiding the characterisation of differences between groups of individuals and alternative outcome measures.

Yet, there are barriers to wider use of visualisation. There is some scepticism about bias in underlying analyses, and end users don’t want to be bamboozled by snazzy graphics. The fact that journal articles are still the primary mode of communicating research findings is a problem, as you can’t have dynamic visualisations in a PDF. There’s also a learning curve for analysts wishing to develop complex visualisations. Hopefully, opportunities will be identified for two-way learning between the health economics world and data scientists more accustomed to data visualisation.

The authors provide several examples (static in the publication, but with links to live tools), to demonstrate the types of visualisations that can be created. Generally speaking, complex visualisations are proposed as complements to our traditional presentations of results, such as cost-effectiveness acceptability curves, rather than as alternatives. The key thing is to maintain credibility by ensuring that data visualisation is used to describe data in a more accurate and meaningful way, and to avoid exaggeration of research findings. It probably won’t be long until we see a set of good practice guidelines being developed for our field.

Thesis Thursday: David Mott

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr David Mott who has a PhD from Newcastle University. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
How do preferences for public health interventions differ? A case study using a weight loss maintenance intervention
Supervisors
Luke Vale, Laura Ternent
Repository link
http://hdl.handle.net/10443/4197

Why is it important to understand variation in people’s preferences?

It’s not all that surprising that people’s preferences for health care interventions vary, but we don’t have a great understanding of what might drive these differences. Increasingly, preference information is being used to support regulatory decisions and, to a lesser but increasing extent, health technology assessments. It could be the case that certain subgroups of individuals would not accept the risks associated with a particular health care intervention, whereas others would. Therefore, identifying differences in preferences is important. However, it’s also useful to try to understand why this heterogeneity might occur in the first place.

The debate on whose preferences to elicit for health state valuation has traditionally focused on those with experience (e.g. patients) and those without (e.g. the general population). This dichotomy is problematic, though; it has been shown that health state utilities systematically differ between these two groups, presumably owing to the difference in relative experience. My project aimed to explore whether experience also affects people’s preferences for health care interventions.

How did you identify different groups of people, whose preferences might differ?

The initial plan for the project was to elicit preferences for a health care intervention from general population and patient samples. However, after reviewing the literature, it seemed highly unlikely that anyone would advocate for preferences for treatments to be elicited from general population samples. It has long been suggested that discrete choice experiments (DCEs) could be used to incorporate patient preferences into decision-making, and it turned out that patients were the focus of the majority of the DCE studies that I reviewed. Given this, I took a more granular approach in my empirical work.

We recruited a very experienced group of ‘service users’ from a randomised controlled trial (RCT). In this case, it was a novel weight loss maintenance intervention aimed at helping obese adults who had lost at least 5% of their overall weight to maintain that weight loss. We also recruited an additional three groups from an online panel. The first group were ‘potential service users’ – those that met the trial criteria but could not have experienced the intervention. The second group were ‘potential beneficiaries’ – those that were obese or overweight but did not meet the trial criteria. The final group were ‘non-users’ – those with a normal BMI.

What can your study tell us about preferences in the context of a weight loss maintenance intervention?

The empirical part of my study involved a DCE and an open-ended contingent valuation (CV) task. The DCE was focused on the delivery of the trial intervention, which was a technology-assisted behavioural intervention. It had a number of different components but, briefly, it involved participants weighing themselves regularly on a set of ‘smart scales’, which enabled the trial team to access and monitor the data. Participants received text messages from the trial team with feedback, reminders to weigh themselves (if necessary), and links to online tools and content to support the maintenance of their weight loss.

The DCE results suggested that preferences for the various components of the intervention varied significantly between individuals and between the different groups – and not all components were important. In contrast, the efficacy and cost attributes were important across the board. The CV results suggested that a very significant proportion of individuals would be willing to pay for an effective intervention (i.e. one that avoided weight regain), with very few respondents expressing a willingness to pay for an intervention that led to more than 10-20% weight regain.

Do alternative methods for preference elicitation provide a consistent picture of variation in preferences?

Existing evidence suggests that willingness to pay (WTP) estimates from CV tasks might differ from those derived from DCE data, but there aren’t many empirical studies on this in health. Comparisons were planned in my study, but the approach taken in the end was suboptimal and ultimately inconclusive. The original plan was to obtain WTP estimates for an entire WLM intervention using the DCE and to compare these with the estimates from the CV task. Due to data limitations, it wasn’t possible to make this comparison. However, the CV task was a bit unusual in that we asked for respondents’ WTP at various different efficacy levels. So, instead, the comparison made was between average WTP values for a percentage point of weight regain. The differences were not statistically significant.

Are some people’s preferences ‘better defined’ than others’?

We hypothesised that those with experience of the trial intervention would have ‘better defined’ preferences. To explore this, we compared the data quality across the different user groups. From a quick glance at the DCE results, it is pretty clear that the data were much better for the most experienced group; the coefficients were larger, and a much higher proportion was statistically significant. However, more interestingly, we found that the most experienced group were 23% more likely to have passed all of the rationality tests that were embedded in the DCE. Therefore, if you accept that better quality data is an indicator of ‘better defined’ preferences, then the data do seem reasonably supportive of the hypothesis. That being said, there were no significant differences between the other three groups, raising the question: was it the difference in experience, or some other difference between RCT participants and online panel respondents?

What does your research imply for the use of preferences in resource allocation decisions?

While there are still many unanswered questions, and there is always a need for further research, the results from my PhD project suggest that preferences for health care interventions can differ significantly between respondents with differing levels of experience. Had my project been applied to a more clinical intervention that is harder for an average person to imagine experiencing, I would expect the differences to have been much larger. I’d love to see more research in this area in future, especially in the context of benefit-risk trade-offs.

The key message is that the level of experience of the participants matters. It is quite reasonable to believe that a preference study focusing on a particular subgroup of patients will not be generalisable to the broader patient population. As preference data, typically elicited from patients, is increasingly being used in decision-making – which is great – it is becoming increasingly important for researchers to make sure that their respondent samples are appropriate to support the decisions that are being made.