Chris Sampson’s journal round-up for 2nd December 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The treatment decision under uncertainty: the effects of health, wealth and the probability of death. Journal of Health Economics Published 16th November 2019

It’s important to understand how people make decisions about treatment. At the end of life, the question can become a matter of whether to have treatment or to let things take their course such that you end up dead. In order to consider this scenario, the author of this paper introduces the probability of death to some existing theoretical models of decision-making under uncertainty.

The diagnostic risk model and the therapeutic risk model can be used to identify risk thresholds that determine decisions about treatment. The diagnostic model relates to the probability that disease is present, while the therapeutic model relates to the probability that treatment is successful. The new model described in this paper builds on these to consider the impact on the decision thresholds of i) initial health state, ii) the probability of death, and iii) wealth. The model includes wealth after death, in the form of a bequest. Limited versions of the model are also considered: one excluding the bequest and one excluding wealth altogether (described as a ‘QALY model’). Both an individual perspective and an aggregate perspective are considered by excluding or including the monetary cost of diagnosis and treatment, to allow for a social-insurance-type setting.
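
For intuition, the threshold logic at the heart of these models can be sketched in a few lines of Python. This is a minimal illustration with invented utility values, not the paper’s model, which layers death, wealth and bequests on top of this basic rule.

```python
# A minimal sketch of the threshold rule underlying the diagnostic risk
# model: treat when the probability of disease exceeds the point at which
# expected utility with and without treatment are equal. Utility values
# below are invented; the paper's model adds death, wealth and bequests.

def treatment_threshold(u_treat_sick, u_treat_healthy, u_no_sick, u_no_healthy):
    """Probability of disease above which treatment is preferred.

    Solves p*u_treat_sick + (1-p)*u_treat_healthy
         = p*u_no_sick + (1-p)*u_no_healthy for p.
    """
    benefit = u_treat_sick - u_no_sick        # gain from treating the sick
    harm = u_no_healthy - u_treat_healthy     # loss from treating the healthy
    return harm / (harm + benefit)

# A large benefit relative to harm implies a low threshold: treatment is
# worthwhile even when disease is fairly unlikely.
print(treatment_threshold(0.8, 0.9, 0.3, 1.0))  # 0.1 / (0.1 + 0.5) = 0.167
```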

The comparative statics show a lot of ambiguity, but there are a few things that the model can tell us. The author identifies treatment as having an ‘insurance effect’, by reducing diagnostic risk, a ‘protective effect’, by lowering the probability of death, and a risk-increasing effect associated with therapeutic risk. A higher probability of death increases the propensity for treatment in both the no-bequest model and the QALY model, because of the protective effect of treatment. In the bequest model, the impact is ambiguous, because treatment costs reduce the bequest. In the full model, wealthier individuals will choose to undergo treatment at a lower probability of success because of a higher marginal utility for survival, but the effect becomes ambiguous if the marginal utility of wealth depends on health (which it obviously does).

I am no theoretician, so it can take me a long time to figure these things out in my head. For now, I’m not convinced that it is meaningful to consider death in this way using a one-period life model. In my view, the very definition of death is a loss of time, which plays little or no part in this model. But I think my main bugbear is the idea that anybody’s decision about life-saving treatment is partly determined by the amount of money they will leave behind. I find this hard to believe. The author links the finding that a higher probability of death increases treatment propensity to NICE’s end of life premium, though I’m not convinced that the model has anything to do with NICE’s reasoning on this matter.

Moving toward evidence-based policy: the value of randomization for program and policy implementation. JAMA [PubMed] Published 15th November 2019

Evidence-based policy is a nice idea. We should figure out whether something works before rolling it out. But decision-makers (especially politicians) tend not to think in this way, because doing something is usually seen to be better than doing nothing. The authors of this paper argue that randomisation is the key to understanding whether a particular policy creates value.

Without evidence based on random allocation, it’s difficult to know whether a policy works. This, the authors argue, can undermine the success of effective interventions and allow harmful policies to persist. A variety of positive examples are provided from US healthcare, including trials of Medicare bundled payments. Apparently, such trials increased confidence in the programmes’ effects in a way that post hoc evaluations cannot, though no evidence of this increased confidence is actually provided. Policy evaluation is not always easy, so the authors describe four preconditions for the success of such studies: i) early engagement with policymakers, ii) willingness from policy leaders to support randomisation, iii) timing the evaluation in line with policymakers’ objectives, and iv) designing the evaluation in line with the realities of policy implementation.

These are sensible suggestions, but it is not clear why the authors focus on randomisation. The paper doesn’t do what it says on the tin, i.e. describe the value of randomisation. Rather, it explains the value of pre-specified policy evaluations. Randomisation may or may not deserve special treatment compared with other analytical tools, but this paper provides no explanation for why it should. The authors also suggest that people are becoming more comfortable with randomisation, as large companies employ experimental methods, particularly on the Internet with A/B testing. I think this perception is way off and that most people feel creeped out knowing that the likes of Facebook are experimenting on them without any informed consent. In the authors’ view, it being possible to randomise is a sufficient basis on which to randomise. But, considering the ethics, as well as possible methodological contraindications, it isn’t clear that randomisation should become the default.

A new tool for creating personal and social EQ-5D-5L value sets, including valuing ‘dead’. Social Science & Medicine Published 30th November 2019

Nobody can agree on the best methods for health state valuation. Or, at least, some people have disagreed loudly enough to make it seem that way. Novel approaches to health state valuation are therefore welcome. Even more welcome is the development and testing of methods that you can try at home.

This paper describes the PAPRIKA method (Potentially All Pairwise RanKings of all possible Alternatives) of discrete choice experiment, implemented using 1000Minds software. Participants are presented with two health states that are defined in terms of just two dimensions, each lasting for 10 years, and asked to choose between them. Using the magical power of computers, an adaptive process identifies further choices, automatically ranking states using transitivity so that people don’t need to complete unnecessary tasks. In order to identify where ‘dead’ sits on the scale, a binary search procedure asks participants to compare EQ-5D states with being dead. What’s especially cool about this process is that everybody who completes it is able to view their own personal value set. These personal value sets can then be averaged to identify a social value set.
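
The binary search for ‘dead’ is easy to sketch. Below is a toy Python version, assuming a participant whose ranking of states is already known; the function and state names are my own invention and not part of the 1000Minds software.

```python
# A toy version of the binary-search step: given a participant's ranking of
# states (best to worst), repeatedly ask whether a state beats being dead
# to find where 'dead' slots in.

def locate_dead(ranked_states, prefers_to_dead):
    """Index at which 'dead' slots into a best-to-worst ranking.

    prefers_to_dead(state) stands in for asking the participant
    'would you rather spend 10 years in this state than be dead?'.
    Each question halves the remaining range, so only about log2(n)
    comparisons are needed rather than n.
    """
    lo, hi = 0, len(ranked_states)
    while lo < hi:
        mid = (lo + hi) // 2
        if prefers_to_dead(ranked_states[mid]):
            lo = mid + 1    # 'dead' sits below this state
        else:
            hi = mid        # 'dead' sits at or above this state
    return lo

# Hypothetical participant who rates the two worst of five states (levels
# 1, 3 and 5 only, as in the paper) as worse than dead
ranking = ["11111", "13131", "33333", "53535", "55555"]
better_than_dead = {"11111", "13131", "33333"}
print(locate_dead(ranking, lambda s: s in better_than_dead))  # -> 3
```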

The authors used their tool to develop an EQ-5D-5L value set for New Zealand (which is where the researchers are based). They recruited 5,112 people in an online panel, such that the sample was representative of the general public. Participants answered 20 DCE questions each, on average, and almost half of them said that they found the questions difficult to answer. The NZ value set showed that anxiety/depression was associated with the greatest disutility, though the dimensions had notably similar impacts at each severity level. The value set correlated well with numerous existing value sets.

The main limitation of this research seems to be that only levels 1, 3, and 5 of each EQ-5D-5L domain were included. Including levels 2 and 4 would more than double the number of questions that would need to be answered. It is also concerning that more than half of the sample was excluded due to low data quality. But the authors do a pretty good job of convincing us that this is for the best. Adaptive designs of this kind could be the future of health state valuation, especially if they can be implemented online, at low cost. I expect we’ll be seeing plenty more from PAPRIKA.


Meeting round-up: Health Economists’ Study Group (HESG) Winter 2019

2019 started with aplomb at the HESG Winter meeting, superbly organised by the Centre for Health Economics, University of York.

Andrew Jones kicked off proceedings with his brilliant course on data visualisation in health econometrics. The eager audience learnt about Edward Tufte’s and others’ ideas on how to create charts that make information much easier to understand. The course was tremendously well received by the HESG audience. I know that I’ll find it incredibly useful too, as there were lots of ideas that apply to my work, so I’m definitely going to be looking further into Andrew’s chapter on data visualisation to learn more.

The conference proper started in the afternoon. I had the pleasure of chairing the fascinating paper by Manuela Deidda et al on an economic evaluation using observational data on the Healthy Start Voucher, which was discussed by Anne Ludbrook. We had an engaging discussion that delved not only into the technical aspects of the paper, such as the intricacies of implementing propensity score matching and regression discontinuity, but also into the policy implications of the results.

I continued with the observational data theme by enjoying the discussion led by Panos Kasteridis on the Andrew McCarthy et al paper. I then quickly followed this by popping over to catch Attakrit Leckcivilize’s excellent discussion of Padraig Dixon et al’s paper on the effect of obesity on hospital costs. This impressive paper uses Mendelian randomisation, a fascinating type of instrumental variable analysis that uses individuals’ genetic variants as the instrument.

The meeting continued in the stunning setting of the Yorkshire Museum for the plenary session, which also proved a fitting location to pay tribute to the inspirational Alan Maynard, who sadly passed away in 2018. Unfortunately, I was unable to hear the tributes to Alan Maynard in person, but fellow attendees painted a moving portrait of the event on Twitter, which kept me in touch.

The plenary was chaired by Karen Bloor and included presentations by Kalipso Chalkidou, Brian Ferguson, Becky Henderson and Danny Palnoch. Jane Hall, Steve Birch and Maria Goddard gave personal tributes.

The health economics community was united in gratitude to Professor Alan Maynard, who did so much to advance and disseminate the discipline. It made for a wonderful way to finish day 1!

Day 2 started bright and was full of stimulating sessions to choose from.

I chose to focus on the cost-effectiveness topic in particular. I started with the David Glynn et al paper about using “back of the envelope” calculations to inform funding and research decisions, discussed by Ed Wilson. This paper is an excellent step towards making value of information analysis easy to use.

I then attended Matthew Quaife’s discussion of Matthew Taylor’s paper on the consequences for decision uncertainty of assuming independence between parameters. This is a relevant paper for the cost-effectiveness world, in particular for those tasked with building and appraising cost-effectiveness models.

Next up it was my turn in the hot seat, as I presented the Jose Robles-Zurita et al paper on the economic evaluation of diagnostic tests. This thought-provoking paper presents a method to account for the effect of accuracy on the uptake of the test, in the context of maximising health.

As always, we were spoilt for choice in the afternoon. The paper “Drop dead: is anchoring at ‘dead’ a theoretical requirement in health state valuation” by Chris Sampson et al competed very strongly with “Is it really ‘Grim up North’? The causes and consequences of inequalities on health and wider outcomes” by Anna Wilding et al for the most provocative title. “Predicting the unpredictable? Using discrete choice experiments in economic evaluation to characterise uncertainty and account for heterogeneity”, from Matthew Quaife et al, also gave them a run for their money!

Dinner was in the splendid Merchant Adventurers’ Hall. Built in 1357, it is one of the finest Medieval buildings in the UK. Another stunning setting that provided a beautiful backdrop for a wonderful evening!

Andrew Jones presented the ‘Health Economics’ PhD Poster Prize, sponsored by Wiley. Rose Atkins took the top honours, winning the prize for best poster, with Ashleigh Kernohan’s poster highly commended for its brilliant use of technology. Congratulations to both!

Unfortunately, the vagaries of public transport meant I had to go home straight after dinner, but I heard from many trustworthy sources, on the following day, that the party continued well into the early hours. Clearly, health economics is a most energising topic!

For me, day 3 was all about cost-effectiveness decision rules. I started with the paper by Mark Sculpher et al, discussed by Chris Sampson. This remarkable paper sums up the evidence on the marginal productivity of the NHS, discusses how to use it to inform decisions, and proposes an agenda for research. There were many questions and comments from the floor, showing how important and challenging this topic is. As with so many papers at HESG, this is clearly one to look out for when it appears in print!

The next paper was on a very different way to solve the problem of resource allocation in health care. Philip Clarke and Paul Frijters propose an interesting system of auctions to set prices. James Lomas discussed the paper well, kick-starting an animated discussion with the audience about practicalities and implications for investment decisions by drug companies. Great food for thought!

Last, but definitely not least, I took in the paper by Bernarda Zamora et al on the relationship between health outcomes and expenditure across geographical areas in England. David Glynn did a great job discussing the paper, and especially in explaining data envelopment analysis. As ever, the audience was highly engaged and put forward many questions and comments. Clearly, the productivity of the NHS is a central question for health economics and will keep us busy for some time to come.

As always, this was a fantastic HESG meeting that was superbly organised, providing an environment where authors, discussants and participants alike were able to excel.

I really felt a sense of collegiality, warmth and energy permeate the event. We are part of such an amazing scientific community. Next stop: the HESG Summer meeting, hosted by the University of East Anglia. I’m already looking forward to it!


Sam Watson’s journal round-up for 30th April 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The Millennium Villages Project: a retrospective, observational, endline evaluation. The Lancet Global Health [PubMed] Published May 2018

There are some clinical researchers who would have you believe observational studies are completely useless. The clinical trial is king, they might say; observational studies are just too biased. And while it’s true that observational studies are difficult to do well and convincingly, they can be a reliable and powerful source of evidence. Similarly, randomised trials are frequently flawed: there’s often missing data that hasn’t been dealt with, or a lack of allocation concealment, and many researchers forget that randomisation does not guarantee a balance of covariates; it merely increases the probability of balance. I bring this up because this study is a particularly carefully designed observational study that I think serves as a good example to other researchers. The paper is an evaluation of the Millennium Villages Project, an integrated intervention program designed to help rural villages across sub-Saharan Africa meet the Millennium Development Goals over ten years between 2005 and 2015. Initial before-after evaluations of the project were criticised for inferring causal “impacts” from before and after data (for example, this Lancet paper had to be corrected after some criticism). To address these concerns, this new paper is incredibly careful about choosing appropriate control villages against which to evaluate the intervention. Their method is too long to summarise here, but in essence they match intervention villages to other villages on the basis of district, agroecological zone, and a range of variables from the DHS – matches were then reviewed for face validity and revised until a satisfactory matching was complete. The wide range of outcomes are all scaled to a standard normal and made to “point” in the same direction, i.e. so that an increase indicates economic development. Then, to avoid multiple comparisons problems, a Bayesian hierarchical model is used to pool data across countries and outcomes. Cost data were also reported. Even better, “statistical significance” is barely mentioned at all! All in all, a neat and convincing evaluation.
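
Two of those steps are easy to sketch. Below is a stylised Python stand-in for the scaling of outcomes (signed so that ‘up’ always means development) and for the partial pooling of site-level estimates; the paper’s actual Bayesian hierarchical model is richer, and every number here is invented.

```python
# A stylised sketch of two steps: scaling each outcome to a standard normal
# that "points" the same way, and partial pooling of site-level effects.
# A crude stand-in for the paper's Bayesian hierarchical model.
import numpy as np

rng = np.random.default_rng(0)

def standardise(x, higher_is_better=True):
    """Scale an outcome to mean 0, SD 1, signed so that an increase
    always indicates more development."""
    z = (x - x.mean()) / x.std()
    return z if higher_is_better else -z

# e.g. child mortality "points" the wrong way, so it gets flipped
mortality = rng.normal(50.0, 10.0, size=200)
z_mortality = standardise(mortality, higher_is_better=False)

# Hypothetical site-level effect estimates (intervention minus matched
# control, in standardised units) and their standard errors
effects = np.array([0.30, 0.10, 0.25, -0.05, 0.15])
ses = np.array([0.10, 0.12, 0.08, 0.15, 0.09])
tau = 0.10   # assumed between-site SD; the full model estimates this

# Precision-weighted pooled mean, then shrinkage of each site towards it
w = 1.0 / (ses**2 + tau**2)
pooled = np.sum(w * effects) / np.sum(w)
shrunk = (effects / ses**2 + pooled / tau**2) / (1.0 / ses**2 + 1.0 / tau**2)
print(pooled.round(3), shrunk.round(3))
```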

Reconsidering the income‐health relationship using distributional regression. Health Economics [PubMed] [RePEc] Published 19th April 2018

The relationship between health and income has long been of interest to health economists. But it is a complex relationship. Increases in income may change consumption behaviours and the use of time, promoting health, while improvements to health may lead to increases in income. Similarly, people who are more likely to earn higher incomes may also be those who look after themselves, or maybe not. Disentangling these various factors has generated a pretty sizeable literature, but almost all of the empirical papers in this area (and indeed most empirical papers in general) use modelling techniques to estimate the effect of something on the expected value, i.e. mean, of some outcome. But the rest of the distribution is of interest – the mean effect of income may not be very large, but a small increase in income for poorer individuals may have a relatively large effect on the risk of very poor health. This article looks at the relationship between income and the conditional distribution of health using something called “structured additive distribution regression” (SADR). My interpretation of SADR is that one would model the outcome y ~ g(a,b) as being distributed according to some distribution g(.) indexed by parameters a and b; for example, a normal or Gamma distribution has two parameters. One would then specify a generalised linear model for a and b, e.g. a = f(X’B). I’m not sure this is a completely novel method, as people use the approach to, for example, model heteroscedasticity. But that’s not to detract from the paper itself. The findings are very interesting – increases to income have a much greater effect on health at the lower end of the spectrum.
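
To make that concrete, here is a minimal location-scale regression in Python: both the mean and the standard deviation of health depend on income, and both sets of coefficients are estimated by maximum likelihood. This is a toy linear version of the general idea, not the paper’s structured additive specification, and the data are simulated.

```python
# A minimal location-scale regression in the spirit of SADR: both the mean
# and the standard deviation of the outcome depend on a covariate, fitted
# by maximum likelihood. All coefficients are invented.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
income = rng.normal(size=n)
X = np.column_stack([np.ones(n), income])

# Simulated "truth": income raises mean health and cuts its dispersion,
# so income matters more in the tails than at the mean
health = rng.normal(loc=1.0 + 0.3 * income, scale=np.exp(0.5 - 0.4 * income))

def neg_loglik(theta):
    beta, gamma = theta[:2], theta[2:]   # coefficients for mu and log-sigma
    mu = X @ beta
    sigma = np.exp(X @ gamma)            # log link keeps sigma positive
    return -norm.logpdf(health, loc=mu, scale=sigma).sum()

fit = minimize(neg_loglik, x0=np.zeros(4), method="BFGS")
print(fit.x.round(2))  # roughly [1.0, 0.3, 0.5, -0.4]
```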

Ask your doctor whether this product is right for you: a Bayesian joint model for patient drug requests and physician prescriptions. Journal of the Royal Statistical Society: Series C Published April 2018.

When I used to take econometrics tutorials for undergraduates, one of the sessions involved going through coursework about the role of advertising. To set the scene, I would talk about the work of Alfred Marshall, the influential economist from the late 1800s/early 1900s. He described two roles for advertising: constructive and combative. The former is when advertising grows the market as a whole, increasing everyone’s revenues, and the latter is when ads just steal market share from rivals without changing the size of the market. Later economists would go on to thoroughly develop theories around advertising, exploring such things as the power of ads to distort preferences, the supply of ads and their complementarity with the product they’re selling, or seeing ads as a source of consumer information. Nevertheless, Marshall’s distinction is still a key consideration, although often phrased in different terms. This study examines a lot of things, but one of its key objectives is to explore the role of direct-to-consumer advertising in prescriptions of brands of drugs. The system is clearly complex: drug companies advertise both to consumers and physicians, consumers may request the drug from the physician, and the physician may or may not prescribe it. Further, there may be correlated unobservable differences between physicians and patients, and the choice to advertise to particular patients may not be exogenous. The paper does a pretty good job of dealing with each of these issues, but it is dense and took me a couple of reads to work out what was going on, especially with the mix of Bayesian and Frequentist terms. Examining the erectile dysfunction drug market, the authors reckon that direct-to-consumer advertising reduces drug requests across the category, while increasing the proportion of requests for the advertised drug – potentially suggesting a “combative” role. However, it’s more complex than that: patient requests and doctors’ prescriptions seem to be influenced by a multitude of factors.
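
The endogeneity problem is easier to see in a toy simulation of the request-then-prescribe structure, with correlated unobservables on the patient and physician sides. Everything below is invented for illustration; the paper’s Bayesian joint model is far richer.

```python
# A toy simulation of the request-then-prescribe structure, with correlated
# unobservables linking patient and physician. All coefficients are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
ad = rng.binomial(1, 0.4, size=n)   # patient saw a direct-to-consumer ad?

# Correlated latent tastes: patients keen to request may also sort into
# physicians keen to prescribe; the endogeneity the joint model tackles
cov = [[1.0, 0.5], [0.5, 1.0]]
u_patient, u_doctor = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

request = (0.4 * ad - 0.8 + u_patient) > 0          # patient requests drug
prescribe = request & ((0.2 * ad - 0.3 + u_doctor) > 0)  # doctor prescribes

# A naive comparison of prescribing rates by ad exposure mixes the true ad
# effect with the correlated unobservables
print(prescribe[ad == 1].mean(), prescribe[ad == 0].mean())
```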
