Chris Sampson’s journal round-up for 28th October 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Spatial competition and quality: evidence from the English family doctor market. Journal of Health Economics [RePEc] Published 17th October 2019

Researchers will never stop asking questions about the role of competition in health care. There’s a substantial body of literature now suggesting that greater competition in the context of regulated prices may bring some quality benefits. But with weak indicators of quality and limited generalisability, it isn’t a closed case. One context in which evidence has been lacking is in health care beyond the hospital. In the NHS, an individual’s choice of GP practice is perhaps the context in which quality can be observed and choice most readily (and meaningfully) exercised. That’s where this study comes in. Aside from the horrible format of a ‘proper economics’ paper (where we start with spoilers and climax with robustness tests), it’s a good read.

The study relies on a measure of competition based on the number of rival GPs within a 2km radius. Number of GPs, that is, rather than number of practices. This is important, as the number of GPs per practice has been increasing. About 75% of a practice’s revenues are linked to the number of patients registered, wherein lies the incentive to compete with other practices for patients. And, in this context, research has shown that patient choice is responsive to indicators of quality. The study uses data for 2005-2012 from all GP practices in England, making it an impressive data set.

The measures of quality come from the Quality and Outcomes Framework (QOF) and the General Practice Patient Survey (GPPS) – the former providing indicators of clinical quality and the latter providing indicators of patient experience. A series of OLS regressions are run on the different outcome measures, with practice fixed effects and various characteristics of the population. The models show that all of the quality indicators are improved by greater competition, but the effect is very small. For example, an extra competing GP within a 2km radius results in a 0.035% increase in the percentage of the population for whom the QOF indicators have been achieved. The effects are a little stronger for the patient satisfaction indicators.
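For readers who like to see the mechanics, the fixed-effects set-up can be sketched with simulated data. Everything below is hypothetical – the panel, the data-generating process, and the effect size (borrowed from the paper's headline figure) – and it is not the authors' code; it just shows how practice fixed effects reduce to a within transformation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: 200 practices observed over 8 years (2005-2012).
n_practices, n_years = 200, 8
practice_effect = rng.normal(0, 2, n_practices)        # unobserved practice quality
competition = rng.poisson(10, (n_practices, n_years))  # rival GPs within 2km (made up)
true_beta = 0.035                                      # tiny effect, as in the paper
quality = (true_beta * competition
           + practice_effect[:, None]
           + rng.normal(0, 0.5, (n_practices, n_years)))

# Practice fixed effects via the within transformation: demean each
# practice's series, then run OLS on the demeaned data. The demeaning
# wipes out the time-invariant practice effect.
x = (competition - competition.mean(axis=1, keepdims=True)).ravel()
y = (quality - quality.mean(axis=1, keepdims=True)).ravel()
beta_hat = (x @ y) / (x @ x)
print(round(beta_hat, 3))
```

With this many observations the estimate lands close to the true 0.035, which is the point: the effect is precisely estimated but substantively tiny.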

The paper reports a bunch of important robustness checks. For instance, the authors try to test whether practices select their locations based on the patient casemix, finding no evidence that they do. The authors even go so far as to test the impact of a policy change, which resulted in an exogenous increase in the number of GPs in some areas but not others. The main findings seem to have withstood all the tests. They also try out a lagged model, which gives similar results.

The findings from this study slot in comfortably with the existing body of research on the role of competition in the NHS. More competition might help to achieve quality improvement, but it hardly seems worthy of dedicating much effort or, importantly, much expense to the cause.

Worth living or worth dying? The views of the general public about allowing disabled children to die. Journal of Medical Ethics [PhilPapers] [PubMed] Published 15th October 2019

Recent years have seen a series of cases in the UK where (usually very young) children have been so unwell and with such a severe prognosis that someone (usually a physician) has judged that continued treatment is not warranted and that the child should be allowed to die. These cases have generated debate and outrage in the media. But what do people actually think?

This study recruited members of the public in the UK (n=130) to an online panel and asked about the decisions that participants would support. The survey had three parts. The first part set out six scenarios of hospitalised infants, which varied in terms of the infants’ physical and sensory abilities, cognitive capacity, level of suffering, and future prospects. Some of the cases approximated real cases that have received media coverage, and the participants were asked whether they thought that withdrawing treatment was justified in each case. In the second part of the survey, participants were asked about the factors that they believed were important in making such decisions. In the third part, participants answered a few questions about themselves and answered the Oxford Utilitarianism Scale.

The authors set up the concept of a ‘life not worth living’, based on the idea that net future well-being is ‘negative’, and supposing the individual’s own judgement were they able to provide it. In the first part of the survey, 88% indicated that life would be worse than death in at least one of the cases. In such cases, 65% thought that treatment withdrawal was ethically obligatory, while 33% thought that either decision was acceptable. Pain was considered the most important factor in making such decisions, followed by the presence of pleasure. Perhaps predictably for health economists familiar with the literature, about 42% of people thought that resources should be considered in the decision, while 40% thought they shouldn’t.

The paper includes an extensive discussion, with plenty of food for thought. In particular, it discusses the ways in which the findings might inform the debate between the ‘zero line view’, whereby treatment should be withdrawn at the point where life has no benefit, and the ‘threshold view’, which establishes a grey zone of ethical uncertainty, in which either decision is ethically acceptable. To some extent, the findings of this study support the need for a threshold approach. Ethical questions are rarely black and white.

How is the trade-off between adverse selection and discrimination risk affected by genetic testing? Theory and experiment. Journal of Health Economics [PubMed] [RePEc] Published 1st October 2019

A lot of people are worried about how knowledge of their genetic information could be used against them. The most obvious scenario is one in which insurers increase premiums – or deny coverage altogether – on the basis of genetic risk factors. There are two key regulatory options in this context – disclosure duty, whereby individuals are obliged to tell insurers about the outcome of genetic tests, or consent law, whereby people can keep the findings to themselves. This study explores how people behave under each of these regulations.

The authors set up a theoretical model in which individuals can choose whether to purchase a genetic test that can identify them as being either high-risk or low-risk of developing some generic illness. The authors outline utility functions under disclosure duty and consent law. Under disclosure duty, individuals face a choice between the certainty of not knowing their risk and receiving pooled insurance premiums, or a lottery in which they have to disclose their level of risk and receive a higher or lower premium accordingly. Under consent law, individuals will only reveal their test results if they are at low risk, thus securing lower premiums and contributing to adverse selection. As a result, individuals will be more willing to take a test under consent law than under disclosure duty, all else equal.
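To make the intuition concrete, here is a toy expected-utility comparison. This is not the paper's actual model: all the numbers (wealth, premiums, test price, the adverse-selection mark-up on the pooled premium) are made up, and log utility stands in for generic risk aversion. With these particular parameters, testing is attractive under consent law but not under disclosure duty:

```python
import math

def u(c):
    # Concave (risk-averse) utility; log is an arbitrary choice here.
    return math.log(c)

w = 100.0                          # wealth (hypothetical)
prem_low, prem_high = 5.0, 25.0    # premiums for known low/high risk
prem_pooled = 15.0                 # pooled premium, no adverse selection
p_high = 0.5                       # prior probability of being high-risk
test_price = 1.0

# Disclosure duty: testing means disclosing, so the tested face a premium lottery.
eu_test_dd = (p_high * u(w - test_price - prem_high)
              + (1 - p_high) * u(w - test_price - prem_low))
eu_notest_dd = u(w - prem_pooled)

# Consent law: only low-risk results are revealed; high-risk types keep quiet
# and pool, which pushes the pooled premium up (assumed value below).
prem_pooled_cl = 20.0
eu_test_cl = (p_high * u(w - test_price - prem_pooled_cl)
              + (1 - p_high) * u(w - test_price - prem_low))
eu_notest_cl = u(w - prem_pooled_cl)

print(eu_test_dd > eu_notest_dd, eu_test_cl > eu_notest_cl)
```

The risk-averse individual declines the premium lottery under disclosure duty but tests under consent law, where a bad result can be concealed – the asymmetry the authors' model formalises.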

After setting out their model (at great length), the authors go on to describe an experiment that they conducted with 67 economics students, to elicit preferences within and between the different regulatory settings. The experiment was set up in a very generic way, not related to health at all. Participants were presented with a series of tasks across which the parameters representing the price of the test and the pooled premium were varied. All of the authors’ hypotheses were supported by the experiment. More people took tests under consent law. Higher test prices reduced the number of people taking tests. When prices were high enough, people preferred disclosure duty. The likelihood that people took tests under consent law increased with the level of adverse selection. And people were very sensitive to the level of discrimination risk under disclosure duty.

It’s an interesting study, but I’m not sure how much it can tell us about genetic testing. Framing the experiment as entirely unrelated to health seems especially unwise. People’s risk preferences may be very different in the domain of real health than in the hypothetical monetary domain. In the real world, there’s a lot more at stake.

Credits

Chris Sampson’s journal round-up for 19th August 2019


Paying for kidneys? A randomized survey and choice experiment. American Economic Review [RePEc] Published August 2019

This paper starts with a quote from Alvin Roth about ‘repugnant transactions’, of which markets for organs provide a prime example. This idea of ‘repugnant transactions’ has been hijacked by some pop economists to represent the stupid opinions of non-economists. If you ask me, markets for organs aren’t repugnant, they just seem like a very bad idea in terms of both efficiency and equity. But it doesn’t matter what I think; it matters what the people of the United States think.

The authors of this study conducted an online survey with a representative sample of 2,666 Americans. Each respondent was randomised to evaluate one of eight systems compared with the current system. The eight systems differed with respect to: i) whether compensation was cash or non-cash; ii) its size ($30,000 or $100,000); and iii) whether it was paid by a public agency or by the organ recipient. Participants made five binary choices that differed according to the gain – in transplants generated – associated with the new system. Half of the participants were also asked to express moral judgements.

Both the system features (e.g. who pays) and the outcomes of the new system influenced people’s choices. Broadly speaking, the results suggest that people aren’t opposed to donors being paid, but are opposed to patients paying. (Remember, we’re talking about the US here!). Around 21% of respondents opposed payment no matter what, 46% were in favour no matter what, and 18% were sensitive to the gain in the number of transplants. A 10 percentage point increase in transplants resulted in a 2.6 percentage point increase in support. Unsurprisingly, individuals’ moral judgements were predictive of the attitudes they expressed, particularly with respect to fairness. The authors describe their results as exhibiting ‘strong polarisation’, which is surely inevitable for questions that involve moral judgement.

Being in AER, this is a long meandering paper with extensive analyses and thoroughly reported results. There’s lots of information and findings that I can’t share here. It’s a valuable study with plenty of food for thought, but I can’t help but think that it is, methodologically, a bit weak. If we want to understand the different views in society, surely some Q methodology would be more useful than a basic online survey. And if we want to elicit stated preferences, surely a discrete choice experiment with a well-thought-out efficient design would give us more meaningful results.

Estimating local need for mental healthcare to inform fair resource allocation in the NHS in England: cross-sectional analysis of national administrative data linked at person level. The British Journal of Psychiatry [PubMed] Published 8th August 2019

The need to fairly (and efficiently) allocate NHS resources across the country played an important part in the birth of health economics in the UK, and resulted in resource allocation formulas. Since 1996 there has been a separate formula for mental health services, which is periodically updated. This study describes the work undertaken for the latest update.

The model is based on predicting service use and total mental health care costs observed in 2015 from predictors in the years 2013-2014, to inform allocations in 2019-2024. Various individual-level data sources available to the NHS were used for 43.7 million people registered with a GP practice and over the age of 20. The cost per patient who used mental health services ranged from £94 to over £1 million, averaging around £2,000. The predictor variables included individual indicators such as age, sex, ethnicity, physical diagnoses, and household type (e.g. number of adults and kids). The model also used variables observed at the local or GP practice level, such as the proportion of people receiving out-of-work benefits and the distance from the mental health trust. All of this got plugged into a good old OLS regression. From individual-level predictions, the researchers created aggregated indices of need for each clinical commissioning group (CCG).
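The aggregation step can be illustrated with a toy calculation. This is my sketch, not the study's code: predicted costs and CCG assignments are simulated, and the need index is simply each CCG's mean predicted cost relative to the national mean, so that 1.00 is average need:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical: individual-level predicted mental health costs, each person
# assigned to one of 10 CCGs (the real analysis covers far more, with
# predictions from an OLS model rather than random draws).
n_people, n_ccgs = 100_000, 10
ccg = rng.integers(0, n_ccgs, n_people)
# Skewed costs, with higher-index CCGs assumed needier for illustration.
predicted_cost = rng.gamma(shape=0.5, scale=(1 + ccg) * 8.0)

# Need index: CCG mean predicted cost relative to the national mean.
national_mean = predicted_cost.mean()
need_index = np.array([predicted_cost[ccg == c].mean()
                       for c in range(n_ccgs)]) / national_mean
print(np.round(need_index, 2))
```

Indices below 1 indicate lower-than-average need and above 1 higher-than-average, mirroring the reported spread from 0.65 (Surrey Heath) to 1.62 (Southwark).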

A lot went into the model, which explained 99% of the variation in costs between CCGs. A key way in which this model differs from previous versions is that it relies on individual-level indicators rather than those observed at the level of GP practice or CCG. There was a lot of variation in the CCG need indices, ranging from 0.65 for Surrey Heath to 1.62 for Southwark, where 1.00 is the average. You’ll need to check the online appendices for your own CCG’s level of need (Lewisham: 1.52). As one might expect, the researchers observed a strong correlation between a CCG’s need index and the CCG’s area’s level of deprivation. Compared with previous models, this new model indicates a greater allocation of resources to more deprived and older populations.

Measuring, valuing and including forgone childhood education and leisure time costs in economic evaluation: methods, challenges and the way forward. Social Science & Medicine [PubMed] Published 7th August 2019

I’m a ‘societal perspective’ sceptic, not because I don’t care about non-health outcomes (though I do care less) but because I think it’s impossible to capture everything that is of value to society, and that capturing just a few things will introduce a lot of bias and noise. I would also deny that time has any intrinsic value. But I do think we need to do a better job of evaluating interventions for children. So I expected this paper to provide me with a good mix of satisfaction and exasperation.

Health care often involves a loss of leisure or work time, which can constitute an opportunity cost and is regularly included in economic evaluations – usually proxied by wages – for adults. The authors outline the rationale for considering ‘time-related’ opportunity costs in economic evaluations and describe the nature of lost time for children. For adults, the distinction is generally between paid or unpaid work and leisure time. Arguably, this distinction is not applicable to children. Two literature reviews are described. One looked at economic evaluations in the context of children’s health, to see how researchers have valued lost time. The other sought to identify ideas about the value of lost time for children from a broader literature.

The authors do a nice job of outlining how difficult it is to capture non-health-related costs and outcomes in the context of childhood. There is a handful of economic evaluations that have tried to measure and value children’s forgone time. The valuations generally focussed on the costs of childcare rather than the costs to the child, though one looked at the rate of return to education. There wasn’t a lot to go on in the non-health literature, which mostly relates to adults. From what there is, the recommendation is to capture absence from formal education and forgone leisure time. Of course, consideration needs to be given to the importance of lost time and thus the value of capturing it in research. We also need to think about the risk of double counting. When it comes to measurement, we can probably use similar methods as we would for adults, such as diaries. But we need very different approaches to valuation. On this, the authors found very little in the way of good examples to follow. More research needed.


Chris Sampson’s journal round-up for 18th February 2019


An educational review about using cost data for the purpose of cost-effectiveness analysis. PharmacoEconomics [PubMed] Published 12th February 2019

Costing can seem like a Cinderella method in the health economist’s toolkit. If you’re working on an economic evaluation, estimating resource use and costs can be tedious. That is perhaps why costing methodology has been relatively neglected in the literature compared to health state valuation (for example). This paper tries to redress the balance slightly by providing an overview of the main issues in costing, explaining why they’re important, so that we can do a better job. The issues are more complex than many assume.

Supported by a formidable reference list (n=120), the authors tackle 9 issues relating to costing: i) costs vs resource use; ii) trial-based vs model-based evaluations; iii) costing perspectives; iv) data sources; v) statistical methods; vi) baseline adjustments; vii) missing data; viii) uncertainty; and ix) discounting, inflation, and currency. It’s a big paper with a lot to say, so it isn’t easily summarised. Its role is as a reference point for us to turn to when we need it. There’s a stack of papers and other resources cited in here that I wasn’t aware of. The paper itself doesn’t get technical, leaving that to the papers cited therein. But the authors provide a good discussion of the questions that ought to be addressed by somebody designing a study, relating to data collection and analysis.

The paper closes with some recommendations. The main one is that people conducting cost-effectiveness analysis should think harder about why they’re making particular methodological choices. The point is also made that new developments could change the way we collect and analyse cost data. For example, the growing use of observational data demands that greater consideration be given to unobserved confounding. Costing methods are important and interesting!

A flexible open-source decision model for value assessment of biologic treatment for rheumatoid arthritis. PharmacoEconomics [PubMed] Published 9th February 2019

Wherever feasible, decision models should be published open-source, so that they can be reviewed, reused, recycled, or, perhaps, rejected. But open-source models are still a rare sight. Here, we have one for rheumatoid arthritis. But the paper isn’t really about the model. After all, the model and supporting documentation are already available online. Rather, the paper describes the reasoning behind publishing a model open-source, and the process for doing so in this case.

This is the first model released as part of the Open Source Value Project, which tries to convince decision-makers that cost-effectiveness models are worth paying attention to. That is, it’s aimed at the US market, where models are largely ignored. The authors argue that models need to be flexible to be valuable into the future and that, to achieve this, four steps should be followed in the development: 1) release the initial model, 2) invite feedback, 3) convene an expert panel to determine actions in light of the feedback, and 4) revise the model. Then, repeat as necessary. Alongside this, people with the requisite technical skills (i.e. knowing how to use R, C++, and GitHub) can proffer changes to the model whenever they like. This paper was written after step 3 had been completed, and the authors report receiving 159 comments on their model.

The model itself (which you can have a play with here) is an individual patient simulation, which is set up to evaluate a variety of treatment scenarios. It estimates costs and (mapped) QALYs and can be used to conduct cost-effectiveness analysis or multi-criteria decision analysis. The model was designed to be able to run 32 different model structures based on different assumptions about treatment pathways and outcomes, meaning that the authors could evaluate structural uncertainties (which is a rare feat). A variety of approaches were used to validate the model.
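The 'many structures' idea is easy to sketch. The following is a hypothetical skeleton, not the actual model: three made-up binary switches generate eight scenarios (the real model's assumptions generate 32), and the patient loop is a placeholder for the actual simulation logic:

```python
import itertools
import random

# Hypothetical structural switches; the real model's assumptions concern
# treatment pathways and outcomes and yield 32 combinations.
switches = {
    "treatment_switching": [False, True],
    "mortality_linked_to_disease": [False, True],
    "utility_mapping": ["algorithm_1", "algorithm_2"],
}

def simulate(structure, n_patients=1000, seed=0):
    # Placeholder patient loop: accumulate made-up QALYs and costs per patient.
    rng = random.Random(seed)
    qalys = sum(rng.uniform(5, 10) for _ in range(n_patients)) / n_patients
    costs = sum(rng.uniform(20_000, 60_000) for _ in range(n_patients)) / n_patients
    return {"structure": structure, "mean_qalys": qalys, "mean_costs": costs}

# Run the simulation once per combination of structural assumptions.
results = [simulate(dict(zip(switches, combo)))
           for combo in itertools.product(*switches.values())]
print(len(results))  # one result per structural scenario
```

Comparing results across the scenarios is what lets structural uncertainty be quantified alongside the usual parameter uncertainty.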

The authors identify several challenges that they experienced in the process, including difficulties in communication between stakeholders and the large amount of time needed to develop, test, and describe a model of this sophistication. I would imagine that, compared with most decision models, the amount of work underlying this paper is staggering. Whether or not that work is worthwhile depends on whether researchers and policymakers make use of the model. The authors have made it as easy as possible for stakeholders to engage with and build on their work, so they should be hopeful that it will bear fruit.

EQ-5D-Y-5L: developing a revised EQ-5D-Y with increased response categories. Quality of Life Research [PubMed] Published 9th February 2019

The EQ-5D-Y has been a slow burner. It’s been around 10 years since it first came on the scene, but we’ve been without a value set and – with the introduction of the EQ-5D-5L – the questionnaire has lost some comparability with its adult equivalent. But the EQ-5D-Y has almost caught up, and this study describes part of how that’s been achieved.

The reason to develop a 5L version for the EQ-5D-Y is the same as for the adult version – to reduce ceiling effects and improve sensitivity. A selection of possible descriptors was identified through a review of the literature. Focus groups were conducted with children between 8 and 15 years of age in Germany, Spain, Sweden, and the UK in order to identify labels that can be understood by young people. Specifically, the researchers wanted to know the words used by children and adolescents to describe the quantity or intensity of health problems. Participants ranked the labels according to severity and specified which labels they didn’t like. Transcripts were analysed using thematic content analysis. Next, individual interviews were conducted with 255 participants across the four countries, which involved sorting and response scaling tasks. Younger children used a smiley scale. At this stage, both 4L and 5L versions were being considered. In a second phase of the research, cognitive interviews were used to test for comprehensibility and feasibility.

A 5-level version was preferred by most, and 5L labels were identified in each language. The English version used terms like ‘a little bit’, ‘a lot’, and ‘really’. There’s plenty more research to be done on the EQ-5D-Y-5L, including psychometric testing, but I’d expect it to be coming to studies near you very soon. One of the key takeaways from this study, and something that I’ve been seeing more in research in recent years, is that kids are smart. The authors make this point clear, particularly with respect to the response scaling tasks that were conducted with children as young as 8. Decision-making criteria and frameworks that relate to children should be based on children’s preferences and ideas.
