Chris Sampson’s journal round-up for 28th October 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Spatial competition and quality: evidence from the English family doctor market. Journal of Health Economics [RePEc] Published 17th October 2019

Researchers will never stop asking questions about the role of competition in health care. There’s a substantial body of literature now suggesting that greater competition in the context of regulated prices may bring some quality benefits. But with weak indicators of quality and limited generalisability, it isn’t a closed case. One context in which evidence has been lacking is in health care beyond the hospital. In the NHS, an individual’s choice of GP practice is perhaps the context in which quality can be observed and choice most readily (and meaningfully) exercised. That’s where this study comes in. Aside from the horrible format of a ‘proper economics’ paper (where we start with spoilers and climax with robustness tests), it’s a good read.

The study relies on a measure of competition based on the number of rival GPs within a 2km radius. Number of GPs, that is, rather than number of practices. This is important, as the number of GPs per practice has been increasing. About 75% of a practice’s revenues are linked to the number of patients registered, wherein lies the incentive to compete with other practices for patients. And, in this context, research has shown that patient choice is responsive to indicators of quality. The study uses data for 2005-2012 from all GP practices in England, making it an impressive data set.
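For readers who want the mechanics, the competition measure described above can be sketched as a simple radius count: sum the GP headcounts of all rival practices within 2km. Everything below is hypothetical (the coordinates, headcounts, and the haversine helper are my own illustration, not the authors' code):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius (km)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def rival_gps_within(practice, others, radius_km=2.0):
    """Sum the GP headcounts of all other practices within radius_km."""
    return sum(
        o["n_gps"]
        for o in others
        if haversine_km(practice["lat"], practice["lon"], o["lat"], o["lon"]) <= radius_km
    )

# Hypothetical practices: coordinates and GP headcounts are made up.
practices = [
    {"id": "A", "lat": 51.500, "lon": -0.120, "n_gps": 4},
    {"id": "B", "lat": 51.505, "lon": -0.115, "n_gps": 3},  # roughly 0.65 km from A
    {"id": "C", "lat": 51.600, "lon": -0.300, "n_gps": 5},  # roughly 16 km from A
]
print(rival_gps_within(practices[0], practices[1:]))  # counts B's 3 GPs, not C's
```

Counting GPs rather than practices matters here: practice B contributes all 3 of its GPs to A's competition measure, so practice growth shows up as increased competition even with no new entrants.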

The measures of quality come from the Quality and Outcomes Framework (QOF) and the General Practice Patient Survey (GPPS) – the former providing indicators of clinical quality and the latter providing indicators of patient experience. A series of OLS regressions are run on the different outcome measures, with practice fixed effects and various characteristics of the population. The models show that all of the quality indicators are improved by greater competition, but the effect is very small. For example, an extra competing GP within a 2km radius results in a 0.035% increase in the percentage of the population for whom the QOF indicators have been achieved. The effects are a little stronger for the patient satisfaction indicators.
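The fixed-effects setup can be illustrated with a within-transformation on simulated panel data. The 0.035 coefficient is borrowed from the paper's headline result as the assumed true effect, but the data-generating process below is entirely invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n_practices, n_years = 200, 8

# Simulated panel: each practice has its own fixed quality level, plus a
# small positive competition effect. The 0.035 is borrowed from the paper;
# everything else is invented.
true_beta = 0.035
practice_effect = rng.normal(0.0, 2.0, n_practices)
competition = rng.poisson(5, (n_practices, n_years)).astype(float)
quality = (practice_effect[:, None] + true_beta * competition
           + rng.normal(0.0, 0.5, (n_practices, n_years)))

# Within transformation: demeaning by practice absorbs the fixed effects,
# so a plain OLS slope on the demeaned data estimates beta.
q = (quality - quality.mean(axis=1, keepdims=True)).ravel()
c = (competition - competition.mean(axis=1, keepdims=True)).ravel()
beta_hat = (c @ q) / (c @ c)
print(round(beta_hat, 3))  # close to true_beta
```

The demeaning step is what lets the model separate a practice's stable quality level from the effect of changes in local competition, which is why the increasing number of GPs per practice over the period provides useful variation.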

The paper reports a bunch of important robustness checks. For instance, the authors try to test whether practices select their locations based on the patient casemix, finding no evidence that they do. The authors even go so far as to test the impact of a policy change, which resulted in an exogenous increase in the number of GPs in some areas but not others. The main findings seem to have withstood all the tests. They also try out a lagged model, which gives similar results.

The findings from this study slot in comfortably with the existing body of research on the role of competition in the NHS. More competition might help to achieve quality improvement, but it hardly seems worthy of dedicating much effort or, importantly, much expense to the cause.

Worth living or worth dying? The views of the general public about allowing disabled children to die. Journal of Medical Ethics [PhilPapers] [PubMed] Published 15th October 2019

Recent years have seen a series of cases in the UK where (usually very young) children have been so unwell and with such a severe prognosis that someone (usually a physician) has judged that continued treatment is not warranted and that the child should be allowed to die. These cases have generated debate and outrage in the media. But what do people actually think?

This study recruited members of the public in the UK (n=130) to an online panel and asked about the decisions that participants would support. The survey had three parts. The first part set out six scenarios of hospitalised infants, which varied in terms of the infants’ physical and sensory abilities, cognitive capacity, level of suffering, and future prospects. Some of the cases approximated real cases that have received media coverage, and the participants were asked whether they thought that withdrawing treatment was justified in each case. In the second part of the survey, participants were asked about the factors that they believed were important in making such decisions. In the third part, participants answered a few questions about themselves and completed the Oxford Utilitarianism Scale.

The authors set up the concept of a ‘life not worth living’, based on the idea that net future well-being is ‘negative’, and supposing the individual’s own judgement were they able to provide it. In the first part of the survey, 88% indicated that life would be worse than death in at least one of the cases. In such cases, 65% thought that treatment withdrawal was ethically obligatory, while 33% thought that either decision was acceptable. Pain was considered the most important factor in making such decisions, followed by the presence of pleasure. Perhaps predictably for health economists familiar with the literature, about 42% of people thought that resources should be considered in the decision, while 40% thought they shouldn’t.

The paper includes an extensive discussion, with plenty of food for thought. In particular, it discusses the ways in which the findings might inform the debate between the ‘zero line view’, whereby treatment should be withdrawn at the point where life has no benefit, and the ‘threshold view’, which establishes a grey zone of ethical uncertainty, in which either decision is ethically acceptable. To some extent, the findings of this study support the need for a threshold approach. Ethical questions are rarely black and white.

How is the trade-off between adverse selection and discrimination risk affected by genetic testing? Theory and experiment. Journal of Health Economics [PubMed] [RePEc] Published 1st October 2019

A lot of people are worried about how knowledge of their genetic information could be used against them. The most obvious scenario is one in which insurers increase premiums – or deny coverage altogether – on the basis of genetic risk factors. There are two key regulatory options in this context: disclosure duty, whereby individuals are obliged to tell insurers about the outcome of genetic tests, and consent law, whereby people can keep the findings to themselves. This study explores how people behave under each of these regulations.

The authors set up a theoretical model in which individuals can choose whether to purchase a genetic test that can identify them as being either high-risk or low-risk of developing some generic illness. The authors outline utility functions under disclosure duty and consent law. Under disclosure duty, individuals face a choice between the certainty of not knowing their risk and receiving pooled insurance premiums, and a lottery in which they have to disclose their level of risk and receive a higher or lower premium accordingly. Under consent law, individuals will only reveal their test results if they are at low risk, thus securing lower premiums and contributing to adverse selection. As a result, individuals will be more willing to take a test under consent law than under disclosure duty, all else equal.
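A stylised numerical version of this comparison, with invented premiums and probabilities, and a log utility standing in for any concave (risk-averse) utility function. This is my sketch of the intuition, not the authors' model:

```python
import math

# Invented parameters: initial wealth, type-specific premiums, and the
# probability of a high-risk test result.
wealth = 100.0
prem_low, prem_high = 5.0, 20.0
p_high = 0.5
prem_pool = p_high * prem_high + (1 - p_high) * prem_low  # pooled premium

u = math.log  # any concave utility represents a risk-averse individual

# Disclosure duty: testing turns the premium into a lottery; staying
# untested keeps the certain pooled rate.
eu_test_disclosure = p_high * u(wealth - prem_high) + (1 - p_high) * u(wealth - prem_low)
eu_notest = u(wealth - prem_pool)

# Consent law: only a low-risk result is revealed; a high-risk individual
# stays silent and keeps the pooled rate (ignoring, for simplicity, the
# rise in the pooled premium that adverse selection would cause).
eu_test_consent = p_high * u(wealth - prem_pool) + (1 - p_high) * u(wealth - prem_low)

print(eu_test_consent > eu_test_disclosure)  # True: testing is safer under consent law
print(eu_notest > eu_test_disclosure)        # True: risk aversion penalises the lottery
```

With these made-up numbers the ranking matches the model's prediction: the option to conceal bad news makes testing more attractive under consent law than under disclosure duty.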

After setting out their model (at great length), the authors go on to describe an experiment that they conducted with 67 economics students, to elicit preferences within and between the different regulatory settings. The experiment was set up in a very generic way, not related to health at all. Participants were presented with a series of tasks across which the parameters representing the price of the test and the pooled premium were varied. All of the authors’ hypotheses were supported by the experiment. More people took tests under consent law. Higher test prices reduced the number of people taking tests. If prices were high enough, people preferred disclosure duty. The likelihood that people took tests under consent law increased with the level of adverse selection. And people were very sensitive to the level of discrimination risk under disclosure duty.

It’s an interesting study, but I’m not sure how much it can tell us about genetic testing. Framing the experiment as entirely unrelated to health seems especially unwise. People’s risk preferences may be very different in the domain of real health than in the hypothetical monetary domain. In the real world, there’s a lot more at stake.


Chris Sampson’s journal round-up for 5th February 2018


Cost-effectiveness analysis of germ-line BRCA testing in women with breast cancer and cascade testing in family members of mutation carriers. Genetics in Medicine [PubMed] Published 4th January 2018

The idea of testing women for BRCA mutations – faulty genes that can increase the probability and severity of breast and ovarian cancers – periodically makes it into the headlines. That’s not just because of Angelina Jolie. It’s also because it’s a challenging and active area of research with many uncertainties. This new cost-effectiveness analysis evaluates a programme that incorporates cascade testing: testing the relatives of mutation carriers. The idea is that this could increase the effectiveness of the programme with a reduced cost-per-identification, as relatives of mutation carriers are more likely to also carry a mutation. The researchers use a cohort-based Markov-style decision analytic model. A programme with three test cohorts – i) women with unilateral breast cancer and a risk prediction score >10%, ii) first-degree relatives, and iii) second-degree relatives – was compared against no testing. A positive result in the original high-risk individual leads to testing in the first- and second-degree relatives, with the number of subsequent tests occurring in the model determined by assumptions about family size. Women who test positive can receive risk-reducing mastectomy and/or bilateral salpingo-oophorectomy (removal of the ovaries). The results are favourable to the BRCA testing programme, at $19,000 (Australian) per QALY for testing affected women only and $15,000 when the cascade testing of family members was included, with high probabilities of cost-effectiveness at $50,000 per QALY. I’m a little confused by the model. The model includes the states ‘BRCA positive’ and ‘Breast cancer’, which clearly are not mutually exclusive. And it isn’t clear how women entering the model with breast cancer go on to enjoy QALY benefits compared to the no-test group. I’m definitely not comfortable with the assumption that there is no disutility associated with risk-reducing surgery.
I also can’t see where the cost of identifying the high-risk women in the first place was accounted for. But this is a model, after all. The findings appear to be robust to a variety of sensitivity analyses. Part of the value of testing lies in the information it provides about people beyond the individual patient. Clearly, if we want to evaluate the true value of testing then this needs to be taken into account.
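A minimal cohort Markov model with an ICER calculation might look like the sketch below. All transition probabilities, costs, and utilities here are invented for illustration; the paper's model is considerably richer:

```python
import numpy as np

# Three-state cohort model: well -> cancer -> dead. Every number below
# (probabilities, costs, utilities, the testing programme's effect) is invented.
def run_cohort(p_well_to_cancer, cycles=40, disc=0.05):
    T = np.array([
        [1 - p_well_to_cancer - 0.01, p_well_to_cancer, 0.01],
        [0.00, 0.90, 0.10],
        [0.00, 0.00, 1.00],
    ])
    cost = np.array([100.0, 1000.0, 0.0])   # per-cycle cost by state
    qaly = np.array([0.95, 0.70, 0.0])      # per-cycle utility by state
    dist = np.array([1.0, 0.0, 0.0])        # cohort starts in 'well'
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + disc) ** t
        total_cost += d * (dist @ cost)
        total_qaly += d * (dist @ qaly)
        dist = dist @ T
    return total_cost, total_qaly

# Comparator vs a testing arm that halves cancer incidence via
# risk-reducing surgery, at an assumed up-front cost.
c0, q0 = run_cohort(p_well_to_cancer=0.03)
c1, q1 = run_cohort(p_well_to_cancer=0.015)
c1 += 2000.0  # assumed cost of testing plus prophylactic surgery
print(f"dCost = {c1 - c0:,.0f}, dQALYs = {q1 - q0:.2f}, ICER = {(c1 - c0) / (q1 - q0):,.0f}")
```

Even in this toy version, the structural point above is visible: states must be mutually exclusive for the occupancy vector to make sense, which is why a ‘BRCA positive’ state sitting alongside a ‘Breast cancer’ state is confusing.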

Economic evaluation of direct-acting antivirals for hepatitis C in Norway. PharmacoEconomics Published 2nd February 2018

Direct-acting antivirals (DAAs) are those new drugs that gave NICE a headache a few years back because they were – despite being very effective and high-value – unaffordable. DAAs are essentially curative, which means that they can reduce resource use over a long time horizon. This makes cost-effectiveness analysis in this context challenging. In this new study, the authors conduct an economic evaluation of DAAs compared with the previous class of treatment, in the Norwegian context. Importantly, the researchers sought to take into account the rebates that have been agreed in Norway, which mean that the prices are effectively reduced by up to 50%. There are now lots of different DAAs available. Furthermore, hepatitis C infection corresponds to several different genotypes. This means that there is a need to identify which treatments are most (cost-)effective for which groups of patients; this isn’t simply a matter of A vs B. The authors use a previously developed model that incorporates projections of the disease up to 2030, though the authors extrapolate to a 100-year time horizon. The paper presents cost-effectiveness acceptability frontiers for each of genotypes 1, 2, and 3, clearly demonstrating which medicines are the most likely to be cost-effective at given willingness-to-pay thresholds. For all three genotypes, at least one of the DAA options is most likely to be cost-effective above a threshold of €70,000 per QALY (which is apparently recommended in Norway). The model predicts that if everyone received the most cost-effective strategy then Norway would expect to see around 180 hepatitis C patients in 2030 instead of the 300-400 seen in the last six years. The study also presents the price rebates that would be necessary to make currently sub-optimal medicines cost-effective. The model isn’t that generalisable. It’s very much Norway-specific as it reflects the country’s treatment guidelines. 
It also only looks at people who inject drugs – a sub-population whose importance can vary a lot from one country to the next. I expect this will be a valuable piece of work for Norway, but it strikes me as odd that “affordability” or “budget impact” aren’t even mentioned in the paper.
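The frontier logic can be sketched from probabilistic sensitivity analysis draws: at each willingness-to-pay threshold, the strategy with the highest expected net benefit sits on the frontier, while the per-draw winners give the acceptability probabilities. The strategy names, costs, and QALYs below are invented, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Invented PSA draws of (lifetime cost in EUR, QALYs) per patient for an
# old interferon-based regimen and two hypothetical DAA strategies.
draws = {
    "peg-IFN/RBV": (rng.normal(20_000, 2_000, n), rng.normal(8.0, 0.4, n)),
    "DAA-A":       (rng.normal(45_000, 4_000, n), rng.normal(8.6, 0.4, n)),
    "DAA-B":       (rng.normal(60_000, 5_000, n), rng.normal(8.7, 0.4, n)),
}

for lam in range(0, 150_001, 30_000):  # willingness-to-pay thresholds
    nb = {k: lam * q - c for k, (c, q) in draws.items()}
    best = np.column_stack(list(nb.values())).argmax(axis=1)
    probs = {k: round((best == i).mean(), 2) for i, k in enumerate(nb)}
    frontier = max(nb, key=lambda k: nb[k].mean())  # highest expected net benefit
    print(f"WTP {lam:>7}: frontier = {frontier:<12} CEAC = {probs}")
```

With these made-up inputs, the cheap old regimen holds the frontier at low thresholds and a DAA takes over at higher ones, which is the pattern the paper's genotype-specific frontiers formalise.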

Cost-effectiveness of prostate cancer screening: a systematic review of decision-analytical models. BMC Cancer [PubMed] Published 18th January 2018

You may have seen prostate cancer in the headlines last week. Although prostate cancer now kills more people in the UK each year than breast cancer, prostate cancer screening remains controversial. This is because over-detection and over-treatment are common and harmful. Plenty of cost-effectiveness studies have been conducted in the context of detecting and treating prostate cancer. But there are various ways of modelling the problem and various specifications of screening programme that can be evaluated. So here we have a systematic review of cost-effectiveness models evaluating prostate-specific antigen (PSA) blood tests as a basis for screening. From a haul of 1010 studies, 10 made it into the review. The studies modelled lots of different scenarios, with alternative screening strategies, PSA thresholds, and treatment pathways. The results are not consistent. Many of the scenarios evaluated in the studies were more costly and less effective than current practice (which tended to be the lack of any formal screening programme). None of the UK-based cost-per-QALY estimates favoured screening. The authors summarise the methodological choices made in each study and consider the extent to which this relates to the pathways being modelled. They also specify the health state utility values used in the models. This will be a very useful reference point for anyone trying their hand at a prostate cancer screening model. Of the ten studies included in the review, four of them found at least one screening programme to be potentially cost-effective. ‘Adaptive screening’ – whereby individuals’ recall to screening was based on their risk – was considered in two studies using patient-level simulations. The authors suggest that cohort-level modelling could be sufficient where screening is not determined by individual risk level.
There are also warnings against inappropriate definition of the comparator, which is likely to be opportunistic screening rather than a complete absence of screening. Generally speaking, a lack of good data seems to be part of the explanation for the inconsistency in the findings. It could be some time before we have a clearer understanding of how to implement a cost-effective screening programme for prostate cancer.


Chris Sampson’s journal round-up for 31st July 2017


An exploratory study on using principal-component analysis and confirmatory factor analysis to identify bolt-on dimensions: the EQ-5D case study. Value in Health Published 14th July 2017

I’m not convinced by the idea of using bolt-on dimensions for multi-attribute utility instruments. A state description with a bolt-on refers to a different evaluative space, and therefore is not comparable with the progenitor, thus undermining its purpose. Maybe this study will persuade me otherwise. The authors analyse data from the Multi Instrument Comparison database, including responses to EQ-5D-5L, SF-6D, HUI3, AQoL 8D and 15D questionnaires, as well as the ICECAP and 3 measures of subjective well-being. Content analysis was used to allocate items from the measures to underlying constructs of health-related quality of life. The sample of 8022 was randomly split, with one half used for principal-component analysis and confirmatory factor analysis, and the other used for validation. This approach looks at the underlying constructs associated with health-related quality of life and the extent to which individual items from the questionnaires influence them. Candidate items for bolt-ons are those items from questionnaires other than the EQ-5D that are important and not otherwise captured by the EQ-5D questions. The principal-component analysis supported a 9-component model: physical functioning, psychological symptoms, satisfaction, pain, relationships, speech/cognition, hearing, energy/sleep and vision. The EQ-5D only covered physical functioning, psychological symptoms and pain. Therefore, items from measures that explain the other 6 components represent bolt-on candidates for the EQ-5D. This study succeeds in its aim. It demonstrates what appears to be a meaningful quantitative approach to identifying items not fully captured by the EQ-5D, which might be added as bolt-ons. But it doesn’t answer the question of which (if any) of these bolt-ons ought to be added, or in what circumstances. That would at least require pre-definition of the evaluative space, which might not correspond to the authors’ chosen model of health-related quality of life. 
If it does, then these findings would be more persuasive as a reason to do away with the EQ-5D altogether.
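The split-half PCA approach can be sketched on simulated item data. The latent structure, loadings, and item counts below are invented; the point is just the mechanics of extracting components on one half of the sample and checking them on the other:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8022  # same sample size as the study; the item responses here are simulated

# Simulate 12 questionnaire items driven by 3 latent constructs
# (e.g. physical functioning, pain, vision); loadings are invented.
latent = rng.normal(size=(n, 3))
loadings = np.zeros((3, 12))
loadings[0, 0:4] = 0.9    # items 0-3 load on construct 0
loadings[1, 4:8] = 0.9    # items 4-7 load on construct 1
loadings[2, 8:12] = 0.9   # items 8-11 load on construct 2
items = latent @ loadings + rng.normal(0.0, 0.3, (n, 12))

# Random split-half, as in the paper: one half to extract components,
# the other held out for validation.
idx = rng.permutation(n)
train, hold = items[idx[: n // 2]], items[idx[n // 2 :]]

def kaiser_components(x):
    """Number of principal components with eigenvalue > 1 (correlation matrix)."""
    ev = np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))
    return int((ev > 1.0).sum())

n_components = kaiser_components(train)
print(n_components, kaiser_components(hold))  # both halves recover the 3 constructs
```

Items that load on components not already covered by an instrument's own items are the bolt-on candidates; the simulation shows why a held-out half is useful, as it guards against components that are artefacts of one sample.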

Endogenous information, adverse selection, and prevention: implications for genetic testing policy. Journal of Health Economics Published 13th July 2017

If you can afford it, there are all sorts of genetic tests available nowadays. Some of them could provide valuable information about the risk of particular health problems in the future. Therefore, they can be used to guide individuals’ decisions about preventive care. But if the individual’s health care is financed through insurance, that same information could prove costly. It could reinforce that classic asymmetry of information and adverse selection problem. So we need policy that deals with this. This study considers the incentives and insurance market outcomes associated with four policy options: i) mandatory disclosure of test results, ii) voluntary disclosure, iii) insurers knowing that the test was taken, but not the results, and iv) a complete ban on the use of test information by insurers. The authors describe a utility model that incorporates the use of prevention technologies, and available insurance contracts, amongst people who are informed or uninformed (according to whether they have taken a test) and high or low risk (according to test results). This is used to estimate the value of taking a genetic test, which differs under the four policy options. Under voluntary disclosure, the information from a genetic test always has non-negative value to the individual, who can choose to tell their insurer only if it’s favourable. The analysis shows that, in terms of social welfare, mandatory disclosure is expected to be optimal, while an information ban is dominated by all other options. These findings are in line with previous studies, which the authors suggest were less generalisable. In the introduction, the authors state that “ethical issues are beyond the scope of this paper”. That’s kind of a problem. I doubt anybody who supports an information ban does so on the basis that they think it will maximise social welfare in the fashion described in this paper.
More likely, they’re worried about the inequities in health that mandatory disclosure could reinforce, about which this study tells us nothing. Still, an information ban seems to be a popular policy, and studies like this indicate that such decisions should be reconsidered in light of their expected impact on social welfare.
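The non-negativity of test information under voluntary disclosure is easy to see numerically: after testing, the individual reveals the result only when the type-specific premium beats the pooled one, so each outcome pays the better of the two. A stylised sketch with invented numbers, not the authors' parameterisation:

```python
import math

# Invented parameters; u is any concave utility.
u = math.log
wealth, prem_low, prem_high, p_high = 100.0, 5.0, 20.0, 0.5
prem_pool = p_high * prem_high + (1 - p_high) * prem_low

# Voluntary disclosure: reveal the result only when it lowers the premium,
# so each test outcome pays min(type-specific premium, pooled premium).
eu_voluntary = (p_high * u(wealth - min(prem_high, prem_pool))
                + (1 - p_high) * u(wealth - min(prem_low, prem_pool)))
eu_untested = u(wealth - prem_pool)

print(eu_voluntary >= eu_untested)  # True: information has non-negative value
```

Under mandatory disclosure the `min` disappears, the high-risk outcome pays the high premium, and a risk-averse individual can be made worse off by testing, which is the asymmetry driving the welfare comparison in the paper.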

Returns to scientific publications for pharmaceutical products in the United States. Health Economics [PubMed] Published 10th July 2017

Publication bias is a big problem. Part of the cause is that pharmaceutical companies have no incentive to publish negative findings for their own products, though positive findings may be valuable in terms of sales. As usual, it isn’t quite that simple when you really think about it. This study looks at the effect of publications on revenue for 20 branded drugs in 3 markets – statins, rheumatoid arthritis and asthma – using an ‘event-study’ approach. The authors analyse a panel of quarterly US sales data from 2003-2013 alongside publications identified through literature searches and several drug- and market-specific covariates. Effects are estimated using first difference and difference in first difference models. The authors hypothesise that publications should have an important impact on sales in markets with high generic competition, and less in those without or with high branded competition. Essentially, this is what they find. For statins and asthma drugs, where there was some competition, clinical studies in high-impact journals increased sales to the tune of $8 million per publication. For statins, volume was not significantly affected, with mediation through price. In rheumatoid arthritis, where competition is limited, the effect on sales was mediated by the effect on volume. Studies published in lower impact journals seemed to have a negative influence. Cost-effectiveness studies were only important in the market with high generic competition, increasing statin sales by $2.2 million on average. I’d imagine that these impacts are something with which firms already have a reasonable grasp. But this study provides value to public policy decision makers. It highlights those situations in which we might expect manufacturers to publish evidence and those in which it might be worthwhile increasing public investment to pick up the slack.
It could also help identify where publication bias might be a bigger problem due to the incentives faced by pharmaceutical companies.
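The first-difference idea can be sketched on simulated data: differencing log sales removes each drug's fixed effect, leaving the publication effect identifiable from within-drug changes. The effect size and all data below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n_drugs, n_quarters = 20, 44  # 20 drugs, quarterly data 2003-2013

# Simulated log sales: a drug fixed effect plus an invented cumulative
# publication effect of size true_gamma per publication.
true_gamma = 0.05
drug_fe = rng.normal(4.0, 0.5, n_drugs)
pubs = rng.poisson(0.2, (n_drugs, n_quarters)).astype(float)
log_sales = (drug_fe[:, None] + true_gamma * np.cumsum(pubs, axis=1)
             + rng.normal(0.0, 0.05, (n_drugs, n_quarters)))

# First differencing removes the drug fixed effect:
# d(log_sales)_it = gamma * pubs_it + d(noise)_it
dy = np.diff(log_sales, axis=1).ravel()
dx = pubs[:, 1:].ravel()
gamma_hat = (dx @ dy) / (dx @ dx)
print(round(gamma_hat, 3))  # close to true_gamma
```

The difference-in-first-difference variant the authors use additionally nets out market-wide quarterly shocks; this sketch shows only the simpler within-drug step.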
