Meeting round-up: Health Economists’ Study Group (HESG) Winter 2019

2019 started with aplomb with the HESG Winter meeting, superbly organised by the Centre for Health Economics, University of York.

Andrew Jones kicked off proceedings with his brilliant course on data visualisation in health econometrics. The eager audience learnt about Edward Tufte’s and others’ ideas on how to create charts that make information easier to understand. The course was tremendously well received, and I know that I’ll find it incredibly useful too, as many of the ideas apply directly to my work. I’m definitely going to be looking further into Andrew’s chapter on data visualisation to learn more.

The conference proper started in the afternoon. I had the pleasure of chairing the fascinating paper by Manuela Deidda et al on an economic evaluation of the Healthy Start Voucher using observational data, which was discussed by Anne Ludbrook. We had an engaging discussion that delved not only into the technical aspects of the paper, such as the intricacies of implementing propensity score matching and regression discontinuity, but also into the policy implications of the results.

I continued with the observational data theme by enjoying the discussion led by Panos Kasteridis on the Andrew McCarthy et al paper. I then quickly popped over to catch Attakrit Leckcivilize’s excellent discussion of Padraig Dixon et al’s paper on the effect of obesity on hospital costs. This impressive paper uses Mendelian randomisation: a type of instrumental variable analysis that uses individuals’ genetic variants as the instrument.
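As a rough sketch of the Mendelian randomisation idea (all data and effect sizes below are simulated for illustration; nothing here comes from the paper), a genetic variant can serve as the instrument in a hand-rolled two-stage least squares, recovering a causal effect that naive regression over-states because of confounding:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulated data (illustrative only): G is a genetic variant (instrument),
# U an unobserved confounder of BMI and hospital cost.
G = rng.binomial(2, 0.3, n)          # allele count 0/1/2
U = rng.normal(size=n)               # confounds bmi and cost
bmi = 25 + 0.8 * G + 1.5 * U + rng.normal(size=n)
cost = 100 + 50 * bmi + 200 * U + rng.normal(scale=50, size=n)  # true effect = 50

def ols_slope(x, y):
    """Slope from a simple OLS regression of y on x (with intercept)."""
    design = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(design, y, rcond=None)[0][1]

# Naive OLS is biased upwards by the confounder U.
naive = ols_slope(bmi, cost)

# Two-stage least squares: regress bmi on G, then cost on fitted bmi.
first_stage = ols_slope(G.astype(float), bmi)
bmi_hat = bmi.mean() + first_stage * (G - G.mean())
iv = ols_slope(bmi_hat, cost)

print(round(naive, 1), round(iv, 1))
```

With these assumed parameters the IV estimate lands near the true effect of 50, while the naive estimate is pulled well above it by the confounder.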

The meeting continued in the stunning setting of the Yorkshire Museum for the plenary session, which also proved a fitting location to pay tribute to the inspirational Alan Maynard, who sadly passed away in 2018. Unfortunately, I was unable to hear the tributes to Alan in person, but fellow attendees painted a moving portrait of the event on Twitter, which kept me in touch.

The plenary was chaired by Karen Bloor and included presentations by Kalipso Chalkidou, Brian Ferguson, Becky Henderson and Danny Palnoch. Jane Hall, Steve Birch and Maria Goddard gave personal tributes.

The health economics community was united in gratitude to Professor Alan Maynard, who did so much to advance and disseminate the discipline. It made for a wonderful way to finish day 1!

Day 2 started bright and was full of stimulating sessions to choose from.

I chose to home in on the cost-effectiveness topic in particular. I started with the David Glynn et al paper about using “back of the envelope” calculations to inform funding and research decisions, discussed by Ed Wilson. This paper is an excellent step towards making value of information analysis easy to use.

I then attended Matthew Quaife’s discussion of Matthew Taylor’s paper on the consequences of assuming parameter independence for decision uncertainty. This is a relevant paper for the cost-effectiveness world, in particular for those tasked with building and appraising cost-effectiveness models.

Next up it was my turn in the hot seat, as I presented the Jose Robles-Zurita et al paper on the economic evaluation of diagnostic tests. This thought-provoking paper presents a method to account for the effect of accuracy on the uptake of the test, in the context of maximising health.

As always, we were spoilt for choice in the afternoon. The paper “Drop dead: is anchoring at ‘dead’ a theoretical requirement in health state valuation” by Chris Sampson et al competed very strongly with “Is it really ‘Grim up North’? The causes and consequences of inequalities on health and wider outcomes” by Anna Wilding et al for the most provocative title. “Predicting the unpredictable? Using discrete choice experiments in economic evaluation to characterise uncertainty and account for heterogeneity”, from Matthew Quaife et al, also gave them a run for their money! I’ll leave a sample here of the exciting papers in discussion, so you can make up your own mind.

Dinner was in the splendid Merchant Adventurers’ Hall. Built in 1357, it is one of the finest Medieval buildings in the UK. Another stunning setting that provided a beautiful backdrop for a wonderful evening!

Andrew Jones presented the ‘Health Economics’ PhD Poster Prize, sponsored by Wiley. Rose Atkins took the top honours by winning the prize for best poster, with Ashleigh Kernohan’s poster highly commended for its brilliant use of technology. Congratulations to both!

Unfortunately, the vagaries of public transport meant I had to go home straight after dinner, but I heard from many trustworthy sources, on the following day, that the party continued well into the early hours. Clearly, health economics is a most energising topic!

For me, day 3 was all about cost-effectiveness decision rules. I started with the paper by Mark Sculpher et al, discussed by Chris Sampson. This remarkable paper sums up the evidence on the marginal productivity of the NHS, discusses how to use it to inform decisions, and proposes a research agenda. There were many questions and comments from the floor, showing how important and challenging this topic is. As with so many HESG papers, this is clearly one to look out for when it appears in print!

The next paper offered a very different way to solve the problem of resource allocation in health care: Philip Clarke and Paul Frijters propose an interesting system of auctions to set prices. James Lomas discussed the paper well, kick-starting an animated discussion with the audience about practicalities and the implications for drug companies’ investment decisions. Great food for thought!

Last, but definitely not least, I took in the paper by Bernarda Zamora et al on the relationship between health outcomes and expenditure across geographical areas in England. David Glynn did a great job discussing the paper, and especially in explaining data envelopment analysis. As ever, the audience was highly engaged and put forward many questions and comments. Clearly, the productivity of the NHS is a central question for health economics and will keep us busy for some time to come.

As always, this was a fantastic HESG meeting that was superbly organised, providing an environment where authors, discussants and participants alike were able to excel.

I really felt collegiality, warmth and energy permeate the event. We are part of such an amazing scientific community. Next stop: the HESG Summer meeting, hosted by the University of East Anglia. I’m already looking forward to it!

Credit

Sam Watson’s journal round-up for 30th April 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The Millennium Villages Project: a retrospective, observational, endline evaluation. The Lancet Global Health [PubMed] Published May 2018

There are some clinical researchers who would have you believe observational studies are completely useless. The clinical trial is king, they might say; observational studies are just too biased. And while it’s true that observational studies are difficult to do well and convincingly, they can be a reliable and powerful source of evidence. Similarly, randomised trials are frequently flawed: for example, there’s often missing data that hasn’t been dealt with, or a lack of allocation concealment, and many researchers forget that randomisation does not guarantee a balance of covariates, it merely increases the probability of it. I bring this up because this study is a particularly carefully designed observational data study that I think serves as a good example to other researchers. The paper is an evaluation of the Millennium Villages Project, an integrated intervention programme designed to help rural villages across sub-Saharan Africa meet the Millennium Development Goals over the ten years between 2005 and 2015. Initial before-after evaluations of the project were criticised for inferring causal “impacts” from before and after data (for example, this Lancet paper had to be corrected after some criticism). To address these concerns, this new paper is incredibly careful about choosing appropriate control villages against which to evaluate the intervention. Their method is too long to summarise here, but in essence they match intervention villages to other villages on the basis of district, agroecological zone, and a range of variables from the DHS – matches were then reviewed for face validity and revised until a satisfactory matching was complete. The wide range of outcomes are all scaled to a standard normal and oriented to “point” in the same direction, i.e. so that an increase indicates economic development. Then, to avoid multiple comparisons problems, a Bayesian hierarchical model is used to pool data across countries and outcomes. Cost data were also reported. Even better, “statistical significance” is barely mentioned at all! All in all, a neat and convincing evaluation.
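To illustrate the outcome-scaling step with invented numbers (none of this is the paper’s data), outcomes on different scales can be standardised, oriented so that an increase indicates development, and then pooled; a simple precision-weighted average stands in here for the paper’s Bayesian hierarchical model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # hypothetical villages per arm; all numbers are illustrative

# Raw outcomes on different scales; mortality "points" the wrong way
# (lower is better), so its standardised effect is flipped.
outcomes = {
    "income":    (rng.normal(1400, 300, n),  rng.normal(1200, 300, n),  True),
    "mortality": (rng.normal(30, 10, n),     rng.normal(40, 10, n),     False),
    "enrolment": (rng.normal(0.78, 0.10, n), rng.normal(0.70, 0.10, n), True),
}

def standardised_effect(treated, control, higher_is_better):
    """Scale both arms to a common z-score, orient so that an increase
    indicates development, and return the mean difference."""
    both = np.concatenate([treated, control])
    z = lambda x: (x - both.mean()) / both.std(ddof=1)
    diff = z(treated).mean() - z(control).mean()
    return diff if higher_is_better else -diff

effects = np.array([standardised_effect(*v) for v in outcomes.values()])
ses = np.full(effects.shape, np.sqrt(2 / n))  # rough SE of a z-score mean difference

# Precision-weighted pooling across outcomes -- a crude stand-in for the
# paper's Bayesian hierarchical model, which pools over outcomes AND countries.
pooled = np.sum(effects / ses**2) / np.sum(1 / ses**2)
print(np.round(effects, 2), round(pooled, 2))
```

The real model additionally shrinks noisy country-level estimates towards the overall mean, which is what tames the multiple-comparisons problem.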

Reconsidering the income‐health relationship using distributional regression. Health Economics [PubMed] [RePEc] Published 19th April 2018

The relationship between health and income has long been of interest to health economists. But it is a complex relationship. Increases in income may change consumption behaviours and the use of time in ways that promote health, while improvements in health may lead to increases in income. Similarly, people who are more likely to earn higher incomes may also be those who look after themselves, or maybe not. Disentangling these various factors has generated a pretty sizeable literature, but almost all of the empirical papers in this area (and indeed most empirical papers in general) use modelling techniques to estimate the effect of something on the expected value, i.e. the mean, of some outcome. But the rest of the distribution is also of interest – the mean effect of income may not be very large, but a small increase in income for poorer individuals may have a relatively large effect on the risk of very poor health. This article looks at the relationship between income and the conditional distribution of health using something called “structured additive distribution regression” (SADR). My interpretation of SADR is that one would model the outcome y ~ g(a,b) as being distributed according to some distribution g(.) indexed by parameters a and b; for example, a normal or Gamma distribution has two parameters. One would then specify a generalised linear model for a and b, e.g. a = f(X’B). I’m not sure this is a completely novel method, as people use the approach to, for example, model heteroscedasticity. But that’s not to detract from the paper itself. The findings are very interesting – increases to income have a much greater effect on health at the lower end of the spectrum.
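A minimal sketch of that idea, assuming a Gaussian outcome with its own linear predictor for each parameter (simulated data; the paper’s SADR framework is far more general and includes smooth additive terms):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 5_000

# Simulated data (not the paper's): health depends on income in both its
# mean AND its spread -- poorer individuals are more dispersed, which a
# mean-only regression would miss entirely.
income = rng.uniform(0, 1, n)
health = rng.normal(60 + 20 * income,            # mean rises with income
                    np.exp(1.5 - 1.0 * income))  # spread falls with income

X = np.column_stack([np.ones(n), income])

def neg_loglik(theta):
    """Gaussian distributional regression: a linear model for the mean
    (mu = X @ beta) and a log-linear model for the standard deviation
    (log sigma = X @ gamma), fitted jointly by maximum likelihood."""
    beta, gamma = theta[:2], theta[2:]
    mu, sigma = X @ beta, np.exp(X @ gamma)
    return np.sum(np.log(sigma) + 0.5 * ((health - mu) / sigma) ** 2)

start = np.array([health.mean(), 0.0, np.log(health.std()), 0.0])
fit = minimize(neg_loglik, start, method="BFGS")
beta_hat, gamma_hat = fit.x[:2], fit.x[2:]
print(np.round(beta_hat, 1), np.round(gamma_hat, 2))
```

Here gamma plays the role of the linear predictor for the second distributional parameter; richer distributions simply add more such predictors, one per parameter.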

Ask your doctor whether this product is right for you: a Bayesian joint model for patient drug requests and physician prescriptions. Journal of the Royal Statistical Society: Series C. Published April 2018

When I used to take econometrics tutorials for undergraduates, one of the sessions involved going through coursework about the role of advertising. To set the scene, I would talk about the work of Alfred Marshall, the influential economist from the late 1800s/early 1900s. He described two roles for advertising: constructive and combative. The former is when advertising grows the market as a whole, increasing everyone’s revenues, and the latter is when ads just steal market share from rivals without changing the size of the market. Later economists would go on to thoroughly develop theories around advertising, exploring such things as the power of ads to distort preferences, the supply of ads and their complementarity with the product they’re selling, or seeing ads as a source of consumer information. Nevertheless, Marshall’s distinction is still a key consideration, although often phrased in different terms. This study examines a lot of things, but one of its key objectives is to explore the role of direct to consumer advertising on prescriptions of brands of drugs. The system is clearly complex: drug companies advertise both to consumers and physicians, consumers may request the drug from the physician, and the physician may or may not prescribe it. Further, there may be correlated unobservable differences between physicians and patients, and the choice to advertise to particular patients may not be exogenous. The paper does a pretty good job of dealing with each of these issues, but it is dense and took me a couple of reads to work out what was going on, especially with the mix of Bayesian and Frequentist terms. Examining the erectile dysfunction drug market, the authors reckon that direct to consumer advertising reduces drug requests across the category, while increasing the proportion of requests for the advertised drug – potentially suggesting a “combative” role. 
However, it’s more complex than that: patient requests and doctors’ prescriptions seem to be influenced by a multitude of factors.

Chris Sampson’s journal round-up for 6th February 2017


A review of NICE methods and processes across health technology assessment programmes: why the differences and what is the impact? Applied Health Economics and Health Policy [PubMed] Published 27th January 2017

Depending on the type of technology under consideration, NICE adopts a variety of different approaches in coming up with their recommendations. Different approaches might result in different decisions, which could undermine allocative efficiency. This study explores this possibility. Data were extracted from the manuals and websites for 5 programmes, under the themes of ‘remit and scope’, ‘process of assessment’, ‘methods of evaluation’ and ‘appraisal of evidence’. Semi-structured interviews were conducted with 5 people with expertise in each of the 5 programmes. Results are presented in a series of tables – one for each theme – outlining the essential characteristics of the 5 programmes. In their discussion, the authors then go on to consider how the identified differences might impact on efficiency from either a ‘utilitarian’ health-maximisation perspective or NICE’s egalitarian aim of ensuring adequate levels of health care. Not all programmes deliver recommendations with mandatory funding status, and it is only the ones that do that have a formal appeals process. Allowing for local rulings on funding could be good or bad news for efficiency, depending on the capacity of local decision makers to conduct economic evaluations (so that means probably bad news). At the same time, regional variation could undermine NICE’s fairness agenda. The evidence considered by the programmes varies, from a narrow focus on clinical and cost-effectiveness to the incorporation of budget impact and wider ethical and social values. Only some of the programmes have reference cases, and those that do are the ones that use cost-per-QALY analysis, which probably isn’t a coincidence. The fact that some programmes use outcomes other than QALYs obviously has the potential to undermine health-maximisation. Most differences are born of practicality; there’s no point in insisting on a CUA if there is no evidence at all to support one – the appraisal would simply not happen.
The very existence of alternative programmes indicates that NICE is not simply concerned with health-maximisation. Additional weight is given to rare conditions, for example. And NICE want to encourage research and innovation. So it’s no surprise that we need to take into account NICE’s egalitarian view to understand the type of efficiency for which it strives.

Economic evaluations alongside efficient study designs using large observational datasets: the PLEASANT trial case study. PharmacoEconomics [PubMed] Published 21st January 2017

One of the worst things about working on trial-based economic evaluations is going to lots of effort to collect lots of data, then finding that at the end of the day you don’t have much to show for it. Nowadays, the health service routinely collects a great deal of data for other purposes. There have been proposals to use these data – instead of prospectively collecting data – to conduct clinical trials. This study explores the potential for doing an economic evaluation alongside such a trial. The study uses CPRD data, including diagnostic, clinical and resource use information, for 8,608 trial participants. The intervention was the sending out of a letter in the hope of reducing unscheduled medical contacts due to asthma exacerbation in children starting a new school year. QALYs couldn’t be estimated using the CPRD data, so values were derived from the literature and estimated on the basis of exacerbations indicated by changes in prescriptions or hospitalisations. Note here the potentially artificial correlation between costs and outcomes that this creates, thus somewhat undermining the benefit of some good old bootstrapping. The results suggest the intervention is cost-saving with little impact on QALYs. Lots of sensitivity analyses are conducted, which are interesting in themselves and say something about the concerns around some of the structural assumptions. The authors outline the pros and cons of the approach. It’s an important discussion, as it seems that studies like this are going to become increasingly common. Regarding data collection, there’s little doubt that this approach is more efficient, and it should be particularly valuable in the evaluation of public health and service delivery type interventions. The problem is that the study is not able to use individual-level cost and outcome data from the same people, which is what sets a trial-based economic evaluation apart from a model-based study. So for me, this isn’t really a trial-based economic evaluation.
Indeed, the analysis incorporates a Markov-type model of exacerbations. It’s a different kind of beast, which incorporates aspects of modelling and aspects of trial-based analysis, along with some unique challenges of its own. There’s a lot more methodological work that needs to be done in this area, but this study demonstrates that it could be fruitful.
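That artificial correlation is easy to see in a toy bootstrap (all counts, unit costs and QALY values below are invented for illustration, not taken from the trial): because incremental costs and QALYs are both computed from the same exacerbation counts, the bootstrapped pairs are almost perfectly correlated.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2_000  # hypothetical children per arm; all values assumed

# Exacerbation counts drive BOTH costs and (literature-imputed) QALY losses.
exac_control = rng.poisson(0.50, n)
exac_letter = rng.poisson(0.40, n)

COST_PER_EXAC = 150.0       # assumed unit cost (illustrative)
QALY_LOSS_PER_EXAC = 0.01   # assumed literature value (illustrative)
LETTER_COST = 2.0

def bootstrap(b=1000, wtp=20_000):
    """Resample each arm, recomputing incremental cost, incremental QALYs
    and incremental net monetary benefit per replicate."""
    d_cost, d_qaly = np.empty(b), np.empty(b)
    for i in range(b):
        e0 = rng.choice(exac_control, n).mean()
        e1 = rng.choice(exac_letter, n).mean()
        d_cost[i] = LETTER_COST + COST_PER_EXAC * (e1 - e0)
        d_qaly[i] = -QALY_LOSS_PER_EXAC * (e1 - e0)
    inb = wtp * d_qaly - d_cost
    return d_cost, d_qaly, inb

d_cost, d_qaly, inb = bootstrap()
corr = np.corrcoef(d_cost, d_qaly)[0, 1]
print(round(inb.mean(), 1), round(corr, 2))
```

Because both quantities are affine in the same resampled mean, the bootstrap correlation here is essentially −1: the joint uncertainty collapses onto a line rather than reflecting genuinely independent sampling variation in costs and outcomes.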

“Too much medicine”: insights and explanations from economic theory and research. Social Science & Medicine [PubMed] Published 18th January 2017

Overconsumption of health care represents an inefficient use of resources, and so we wouldn’t recommend it. But is that all we – as economists – have to say on the matter? This study sought to dig a little deeper. A literature search was conducted to establish a working definition of overconsumption. Related notions such as overdiagnosis, overtreatment, overuse, low-value care, overmedicalisation and even ‘pharmaceuticalisation’ all crop up. The authors introduce ‘need’ as a basis for understanding overconsumption; it represents health care that should never be considered as “needed”. A useful distinction is identified between misconsumption – where an individual’s own consumption is detrimental to their own well-being – and overconsumption, which can be understood as having a negative effect on social welfare. Note that in a collectively funded system the two concepts aren’t entirely distinguishable. Misconsumption becomes the focus of the paper, as avoiding harm to patients has been the subject of the “too much medicine” movement. I think this is a shame, and not really consistent with an economist’s usual perspective. The authors go on to discuss issues such as moral hazard, supplier-induced demand, provider payment mechanisms, ‘indication creep’, regret theory, and physicians’ positional consumption, and whether or not such phenomena might lead to individual welfare losses and thus be considered causes of misconsumption. The authors provide a neat diagram showing the various causes of misconsumption on a plane. One dimension represents the extent to which the cause is imperfect knowledge or imperfect agency, and the other the degree to which the cause is at the individual or market level. There’s a big gap in the top right, where market level causes meet imperfect knowledge. This area could have included patent systems, research fraud and dodgy Pharma practices. Or maybe just a portrait of Ben Goldacre for shorthand. 
There are some warnings about the (limited) extent to which market reforms might address misconsumption, and the proposed remedy for overconsumption is not really an economic one. Rather, a change in culture is prescribed. More research looking at existing treatments rather than technology adoption, and investigating subgroup effects, is also recommended. The authors further suggest collaboration between health economists and ecological economists.
