Chris Sampson’s journal round-up for 5th February 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Cost-effectiveness analysis of germ-line BRCA testing in women with breast cancer and cascade testing in family members of mutation carriers. Genetics in Medicine [PubMed] Published 4th January 2018

The idea of testing women for BRCA mutations – faulty genes that can increase the probability and severity of breast and ovarian cancers – periodically makes it into the headlines. That’s not just because of Angelina Jolie. It’s also because it’s a challenging and active area of research with many uncertainties. This new cost-effectiveness analysis evaluates a programme that incorporates cascade testing: testing the relatives of mutation carriers. The idea is that this could increase the effectiveness of the programme at a reduced cost-per-identification, as relatives of mutation carriers are more likely to also carry a mutation. The researchers use a cohort-based Markov-style decision analytic model. A programme with three test cohorts – i) women with unilateral breast cancer and a risk prediction score >10%, ii) first-degree relatives, and iii) second-degree relatives – was compared against no testing. A positive result in the original high-risk individual leads to testing in the first- and second-degree relatives, with the number of subsequent tests occurring in the model determined by assumptions about family size. Women who test positive can receive risk-reducing mastectomy and/or bilateral salpingo-oophorectomy (removal of the ovaries). The results are favourable to the BRCA testing programme, at $19,000 (Australian) per QALY for testing affected women only and $15,000 when the cascade testing of family members is included, with high probabilities of cost-effectiveness at $50,000 per QALY. I’m a little confused by the model. It includes the states ‘BRCA positive’ and ‘Breast cancer’, which clearly are not mutually exclusive, and it isn’t clear how women entering the model with breast cancer go on to enjoy QALY benefits compared to the no-test group. I’m definitely not comfortable with the assumption that there is no disutility associated with risk-reducing surgery. I also can’t see where the cost of identifying the high-risk women in the first place was accounted for. But this is a model, after all. The findings appear to be robust to a variety of sensitivity analyses. Part of the value of testing lies in the information it provides about people beyond the individual patient. Clearly, if we want to evaluate the true value of testing, this needs to be taken into account.
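
For the uninitiated, the engine of a cohort-based Markov model like this one is simple: a cohort vector is pushed through a transition matrix each cycle, accumulating discounted costs and QALYs as it goes. Here’s a minimal sketch – the states, transition probabilities, costs, and utilities are all invented for illustration, not taken from the paper.

```python
import numpy as np

# States and numbers are invented for illustration -- this is not the
# paper's model, just the mechanics of a cohort Markov trace.
states = ["well", "breast cancer", "ovarian cancer", "dead"]
P = np.array([          # annual transition probabilities (hypothetical)
    [0.96, 0.02, 0.01, 0.01],
    [0.00, 0.90, 0.00, 0.10],
    [0.00, 0.00, 0.85, 0.15],
    [0.00, 0.00, 0.00, 1.00],
])
utility = np.array([0.85, 0.70, 0.65, 0.00])    # QALY weight per state-year
cost = np.array([0, 20_000, 25_000, 0], float)  # AU$ per state-year
r = 0.05                                        # annual discount rate

cohort = np.array([1.0, 0.0, 0.0, 0.0])  # everyone starts in 'well'
total_qalys = total_cost = 0.0
for year in range(50):
    d = 1 / (1 + r) ** year
    total_qalys += d * (cohort @ utility)
    total_cost += d * (cohort @ cost)
    cohort = cohort @ P  # advance the cohort one annual cycle

print(f"Discounted QALYs: {total_qalys:.2f}; cost: AU${total_cost:,.0f}")
```

Run a trace like this once per arm – testing versus no testing, with different starting distributions or transition probabilities – and the ICER is simply the cost difference divided by the QALY difference.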

Economic evaluation of direct-acting antivirals for hepatitis C in Norway. PharmacoEconomics Published 2nd February 2018

Direct-acting antivirals (DAAs) are those new drugs that gave NICE a headache a few years back because they were – despite being very effective and high-value – unaffordable. DAAs are essentially curative, which means that they can reduce resource use over a long time horizon. This makes cost-effectiveness analysis in this context challenging. In this new study, the authors conduct an economic evaluation of DAAs compared with the previous class of treatment, in the Norwegian context. Importantly, the researchers sought to take into account the rebates that have been agreed in Norway, which mean that the prices are effectively reduced by up to 50%. There are now lots of different DAAs available and hepatitis C infection comes in several different genotypes, so there is a need to identify which treatments are most (cost-)effective for which groups of patients; this isn’t simply a matter of A vs B. The authors use a previously developed model that incorporates projections of the disease up to 2030, though they extrapolate to a 100-year time horizon. The paper presents cost-effectiveness acceptability frontiers for each of genotypes 1, 2, and 3, clearly demonstrating which medicines are the most likely to be cost-effective at given willingness-to-pay thresholds. For all three genotypes, at least one of the DAA options is most likely to be cost-effective above a threshold of €70,000 per QALY (which is apparently recommended in Norway). The model predicts that if everyone received the most cost-effective strategy then Norway would expect to see around 180 hepatitis C patients in 2030, instead of the 300-400 seen in the last six years. The study also presents the price rebates that would be necessary to make currently sub-optimal medicines cost-effective. The model isn’t that generalisable: it’s very much Norway-specific, as it reflects the country’s treatment guidelines, and it only looks at people who inject drugs – a sub-population whose importance can vary a lot from one country to the next. I expect this will be a valuable piece of work for Norway, but it strikes me as odd that “affordability” or “budget impact” aren’t even mentioned in the paper.
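
For anyone unfamiliar with cost-effectiveness acceptability frontiers: at each willingness-to-pay threshold, the frontier reports the strategy with the highest expected net monetary benefit, alongside the probability that this strategy is the cost-effective one. Here’s a rough sketch of the computation from probabilistic sensitivity analysis output – the three strategies, their cost and QALY distributions, and the thresholds are hypothetical, not the paper’s.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims = 5_000

# Hypothetical PSA output: (cost, QALY) draws for three treatment strategies
strategies = {
    "old_standard": (rng.normal(30_000, 5_000, n_sims), rng.normal(8.0, 0.5, n_sims)),
    "daa_a":        (rng.normal(45_000, 6_000, n_sims), rng.normal(9.5, 0.5, n_sims)),
    "daa_b":        (rng.normal(60_000, 7_000, n_sims), rng.normal(9.8, 0.5, n_sims)),
}

for wtp in [10_000, 30_000, 50_000, 70_000, 90_000]:  # EUR per QALY
    # Net monetary benefit of each strategy, per simulation
    nmb = np.column_stack([wtp * q - c for c, q in strategies.values()])
    # The frontier plots the strategy with the highest *expected* NMB...
    best = nmb.mean(axis=0).argmax()
    # ...against the probability that it beats the others, draw by draw
    p_best = (nmb.argmax(axis=1) == best).mean()
    print(f"WTP €{wtp:>6,}: frontier = {list(strategies)[best]:<12} "
          f"P(cost-effective) = {p_best:.2f}")
```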

Cost-effectiveness of prostate cancer screening: a systematic review of decision-analytical models. BMC Cancer [PubMed] Published 18th January 2018

You may have seen prostate cancer in the headlines last week. Despite prostate cancer now killing more people in the UK each year than breast cancer, screening for it remains controversial. This is because over-detection and over-treatment are common and harmful. Plenty of cost-effectiveness studies have been conducted in the context of detecting and treating prostate cancer. But there are various ways of modelling the problem and various specifications of screening programme that can be evaluated. So here we have a systematic review of cost-effectiveness models evaluating prostate-specific antigen (PSA) blood tests as a basis for screening. From a haul of 1010 studies, 10 made it into the review. The studies modelled lots of different scenarios, with alternative screening strategies, PSA thresholds, and treatment pathways. The results are not consistent. Many of the scenarios evaluated in the studies were more costly and less effective than current practice (which tended to be the lack of any formal screening programme). None of the UK-based cost-per-QALY estimates favoured screening. The authors summarise the methodological choices made in each study and consider the extent to which these relate to the pathways being modelled. They also specify the health state utility values used in the models. This will be a very useful reference point for anyone trying their hand at a prostate cancer screening model. Of the ten studies included in the review, four found at least one screening programme to be potentially cost-effective. ‘Adaptive screening’ – whereby individuals’ recall to screening was based on their risk – was considered in two studies using patient-level simulations. The authors suggest that cohort-level modelling could be sufficient where screening is not determined by individual risk level. There are also warnings against inappropriate definition of the comparator, which is likely to be opportunistic screening rather than a complete absence of screening. Generally speaking, a lack of good data seems to be part of the explanation for the inconsistency in the findings. It could be some time before we have a clearer understanding of how to implement a cost-effective screening programme for prostate cancer.

Sam Watson’s journal round-up for 15th January 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Cost-effectiveness of publicly funded treatment of opioid use disorder in California. Annals of Internal Medicine [PubMed] Published 2nd January 2018

Deaths from opiate overdose have soared in the United States in recent years. In 2016, 64,000 people died this way, up from 16,000 in 2010 and 4,000 in 1999. The causes of public health crises like this are multifaceted, but we can identify two key issues that have contributed more than any other. Firstly, medical practitioners have been prescribing opiates irresponsibly for years. In each of the last ten years, well over 200 million opiate prescriptions were issued in the US – enough for seven in every ten people. Once prescribed, opiate use is often not well managed. Prescriptions can be stopped abruptly, for example, leaving people with unexpected withdrawal syndromes and rebound pain. It is estimated that 75% of heroin users in the US began by using legal, prescription opiates. Secondly, drug suppliers have started cutting heroin with its far stronger but cheaper cousin, fentanyl. Given fentanyl’s strength, only a tiny amount is required to achieve the same effects as heroin, but a lack of pharmaceutical knowledge and equipment means it is often not measured or mixed appropriately into what is sold as ‘heroin’. There are two clear routes to alleviating the epidemic of opiate overdose: prevention, by ensuring responsible medical use of opiates, and ‘cure’, either by ensuring the quality and strength of heroin, or by providing a means to stop opiate use. The former ‘cure’ is politically infeasible, so it falls on the latter to help those already habitually using opiates. However, the availability of opiate treatment programs, such as opiate agonist treatment (OAT), is lacklustre in the US. OAT provides substitute opioid agonists, such as methadone or buprenorphine, to prevent withdrawal syndromes in users, from which they can slowly be weaned. This article looks at the cost-effectiveness of providing OAT for all persons seeking treatment for opiate use in California for an unlimited period, versus standard care, which only provides OAT to those who have failed supervised withdrawal twice, and only for 21 days. The paper adopts a previously developed semi-Markov cohort model that includes states for treatment, relapse, incarceration, and abstinence. Transition probabilities for the new OAT treatment were determined from treatment data for current OAT patients (as far as I understand it). This does raise a question about the generalisability of this population to the whole population of opiate users: having already been through two supervised withdrawals, current patients may have a greater motivation to quit, for example. In any case, the article estimates that the OAT program would be cost-saving, through reductions in crime and incarceration, and would improve population health, by reducing the risk of death. Taken at face value, these results seem highly plausible. But, as we’ve discussed before, drug policy rarely seems to be evidence-based.
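
‘Cost-saving and health-improving’ is the strongest verdict an economic evaluation can deliver: the new strategy dominates, and no cost-effectiveness threshold is needed. A trivial sketch of that incremental logic, with made-up numbers rather than the paper’s:

```python
# Hypothetical per-person lifetime totals (not the paper's figures), just to
# show how 'cost-saving and health-improving' is read off an incremental
# analysis: the new strategy dominates, so no threshold judgement is needed.
cost_std, qaly_std = 250_000.0, 6.0  # standard care: OAT only after two failed withdrawals
cost_oat, qaly_oat = 220_000.0, 6.4  # OAT on demand, unlimited duration

d_cost, d_qaly = cost_oat - cost_std, qaly_oat - qaly_std
if d_cost <= 0 and d_qaly >= 0:
    print(f"OAT dominates: saves ${-d_cost:,.0f} and gains {d_qaly:.1f} QALYs per person.")
else:
    print(f"ICER = ${d_cost / d_qaly:,.0f} per QALY")
```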

The impact of aid on health outcomes in Uganda. Health Economics [PubMed] Published 22nd December 2017

Examining the response of population health outcomes to changes in health care expenditure has been the subject of a large and growing number of studies. One reason is to estimate a supply-side cost-effectiveness threshold: the health returns the health service achieves in response to budget expansions or contractions. Similarly, we might want to know the returns to particular types of health care expenditure. For example, there remains a debate about the effectiveness of aid spending in low and middle-income country (LMIC) settings. Aid spending may fail to be effective for reasons such as resource leakage, failure to target the right population, poor design and implementation, and crowding out of other public sector investment. Looking at these questions at an aggregate level can be tricky: the link between expenditure decisions and health outcomes is long, and causality flows in multiple directions. Effects are therefore likely to be small and noisy, and require strong theoretical foundations to interpret. This article takes a different, and innovative, approach to the question. In essence, the analysis boils down to a longitudinal comparison of those who live near large, aid-funded health projects with those who don’t. The expectation is that the benefit of any aid spending will be felt most acutely by those who live nearest to the health care facilities that come about as a result of it. Indeed, this is what the results show – proximity to an aid project reduced disease prevalence and work days lost to ill health, with greater effects observed closer to the project. One way of considering the ‘usefulness’ of this evidence is to ask how it can be used to improve policymaking: in understanding the returns to investment, say, or over what area these projects have an impact. The latter is covered in the paper to some extent, but the former is hard to infer. A useful next step may be to try to quantify what kind of benefit aid dollars produce and how heterogeneous that benefit is.
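
As I read it, the design amounts to a difference-in-differences with a distance gradient. Here’s a sketch of what that might look like – the household panel, distance bands, and effect sizes below are all fabricated, and the paper’s actual specification will differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated household panel: treatment intensity declines with distance
# from an aid-funded facility that becomes operational in 2009.
rng = np.random.default_rng(7)
n_hh, years = 600, [2008, 2010, 2012]
hh = np.repeat(np.arange(n_hh), len(years))
dist_km = np.repeat(rng.uniform(0, 30, n_hh), len(years))  # to nearest project
year = np.tile(years, n_hh)
post = (year > 2009).astype(int)

# Distance bands; the furthest band serves as the comparison group
near = (dist_km < 5).astype(int)
mid = ((dist_km >= 5) & (dist_km < 15)).astype(int)
# Built-in 'true' effect: illness risk falls most for the nearest households
p_ill = 0.40 - (0.10 * near + 0.04 * mid) * post
df = pd.DataFrame({"hh": hh, "year": year, "ill": rng.binomial(1, p_ill),
                   "near_post": near * post, "mid_post": mid * post})

# Difference-in-differences with household and year fixed effects; the
# distance gradient shows up as |near_post| > |mid_post|
m = smf.ols("ill ~ near_post + mid_post + C(hh) + C(year)", data=df).fit()
print(m.params[["near_post", "mid_post"]])
```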

The impact of social expenditure on health inequalities in Europe. Social Science & Medicine Published 11th January 2018

Let us consider for a moment how we might explore empirically whether social expenditure (unemployment support, child support, housing support, and so on) affects health inequalities. First, we establish a measure of health inequality: we need a proxy measure of health – this study uses self-rated health and self-rated difficulty in daily living – and some relevant measure of socioeconomic status (SES) along which to compare it – in this study, level of education and a compound measure of occupation, income, and education (the ISEI). So far, so good. Data on levels of social expenditure are available in Europe and are used here, but oddly these data are converted to a percentage of GDP. The trouble with doing this is that the variable can change if social expenditure changes or if GDP changes. During the financial crisis, for example, social expenditure shot up as a proportion of GDP, which likely had very different effects on health and inequality than when social expenditure increased as a proportion of GDP due to a policy change under the Labour government. This variable also likely bears little relationship to the level of support received per eligible person. Anyway, at the crudest level, we can then consider how the relationship between SES and health is affected by social spending. A more nuanced approach might consider who the recipients of social expenditure are and how they stand on our measure of SES, but I digress. In the article, the baseline category for education is those with only primary education or less, which seems like an odd reference category: given compulsory schooling ages, I would imagine this is a very small proportion of people in Europe – unless, of course, they are children. But including children in the sample would be an odd choice here, since they don’t personally receive social assistance and are difficult to compare to adults. However, there are no descriptive statistics in the paper, so we don’t know, and no comparisons are made between other groups. Indeed, the estimates of the intercepts in the models are very noisy and variable for no obvious reason, other than perhaps that the reference group is very small. Despite the problems outlined so far, though, there is a potentially more serious one. The article uses a logistic regression model, which is perfectly justifiable given the binary or ordinal nature of the outcomes. However, the authors justify the conclusion that “Results show that health inequalities measured by education are lower in countries where social expenditure is higher” by demonstrating that the odds ratio for reporting a poor health outcome in the groups with greater than primary education, compared to primary education or less, is smaller in magnitude when social expenditure as a proportion of GDP is higher. But the conclusion does not follow from the premise. It is entirely possible for these odds ratios to change without any change in the variance of the underlying distribution of health, the relative ordering of people, or the absolute difference in health between categories, simply by shifting the whole distribution up or down. For example, if the proportions of people in two groups reporting a negative outcome are 0.3 and 0.4, and these then change to 0.2 and 0.3 respectively, the odds ratio comparing the two groups changes from 0.64 to 0.58, yet the absolute difference between them remains 0.1. No calculations regarding absolute effects are made in the paper, though. GDP is also shown to have a positive effect on health outcomes, so all that might have been shown is that the relative difference in health outcomes between those with primary education or less and others changes as GDP changes, because everyone is getting healthier. The article’s question is interesting; it’s a shame about the execution.
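
That arithmetic is worth checking for yourself, since it’s the crux of the critique: identical absolute improvements in both groups shrink the odds ratio. A few lines of Python confirm the numbers quoted above:

```python
# Both groups improve by the same 10-percentage-point margin, yet the odds
# ratio between them changes.
def odds(p):
    return p / (1 - p)

before = odds(0.3) / odds(0.4)  # ~0.64
after = odds(0.2) / odds(0.3)   # ~0.58
print(f"OR before: {before:.2f}, OR after: {after:.2f}")
print(f"Absolute gap: {0.4 - 0.3:.1f} -> {0.3 - 0.2:.1f} (unchanged)")
```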

Chris Sampson’s journal round-up for 8th January 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

An empirical comparison of the measurement properties of the EQ-5D-5L, DEMQOL-U and DEMQOL-Proxy-U for older people in residential care. Quality of Life Research [PubMed] Published 5th January 2018

There is now a condition-specific preference-based measure of health-related quality of life that can be used for people with cognitive impairment: the DEMQOL-U. Beyond the challenge of appropriately defining quality of life in this context, cognitive impairment presents the additional difficulty that individuals may not be able to self-complete a questionnaire, though there’s some good evidence that proxy responses can be valid and reliable for people with cognitive impairment. The purpose of this study is to try out the new(ish) EQ-5D-5L in the context of cognitive impairment in a residential setting. Data were taken from an observational study in 17 residential care facilities in Australia. A variety of outcome measures were collected, including the EQ-5D-5L (proxy where necessary), a cognitive bolt-on item for the EQ-5D, the DEMQOL-U and the DEMQOL-Proxy-U (from a family member or friend), the Modified Barthel Index, the cognitive impairment Psychogeriatric Assessment Scale (PAS-Cog), and the neuropsychiatric inventory questionnaire (NPI-Q). The researchers tested the correlation, convergent validity, and known-group validity of the various measures. In total, 143 participants self-completed the EQ-5D-5L and DEMQOL-U, while 387 responses were available for the proxy versions. People with a diagnosis of dementia reported higher utility values on the EQ-5D-5L and DEMQOL-U than people without a diagnosis. Correlations between the measures were weak to moderate. Some people reported full health on the EQ-5D-5L despite identifying some impairment on the DEMQOL-U, and vice versa. The EQ-5D-5L was more strongly correlated with clinical outcome measures than were the DEMQOL-U or DEMQOL-Proxy-U, though the associations were generally weak. The relationship between cognitive impairment and self-completed EQ-5D-5L and DEMQOL-U utilities was not in the expected direction: people with greater cognitive impairment reported higher utility values. There was quite a lot of disagreement between utility values derived from the different measures, so the EQ-5D-5L and DEMQOL-U should not be seen as substitutes. An EQ-QALY is not a DEM-QALY. This is all quite perplexing when it comes to measuring health-related quality of life in people with cognitive impairment. What does it mean if a condition-specific measure does not correlate with the condition? It could be that, for people with cognitive impairment, the key determinant of quality of life is only indirectly related to the impairment itself, and more dependent on living conditions.
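
For anyone wanting to run this kind of validation exercise themselves, the core tests are straightforward. Here’s a sketch on fabricated utility scores – the distributions, and my choice of Spearman and Mann–Whitney tests, are assumptions rather than the paper’s exact methods.

```python
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu

# Fabricated utility scores, purely to illustrate two of the tests the
# round-up mentions: convergent validity (correlation between instruments)
# and known-group validity (do utilities differ by dementia diagnosis?).
rng = np.random.default_rng(3)
n = 143
eq5d = np.clip(rng.normal(0.75, 0.15, n), -0.2, 1.0)
demqol = np.clip(0.4 * eq5d + rng.normal(0.45, 0.10, n), 0.0, 1.0)
dementia = rng.integers(0, 2, n).astype(bool)

rho, p = spearmanr(eq5d, demqol)
print(f"Convergent validity: Spearman rho = {rho:.2f} (p = {p:.3f})")

u, p = mannwhitneyu(eq5d[dementia], eq5d[~dementia])
print(f"Known-group validity: Mann-Whitney U = {u:.0f} (p = {p:.3f})")
```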

Resolving the “cost-effective but unaffordable” paradox: estimating the health opportunity costs of nonmarginal budget impacts. Value in Health Published 4th January 2018

Back in 2015 (as discussed on this blog), NICE started appraising drugs that were cost-effective but implied such high costs for the NHS that they seemed unaffordable. This forced a consideration of how budget impact should be handled in technology appraisal. But the matter is far from settled, and different countries have adopted different approaches. The challenge is to accurately estimate the opportunity cost of an investment, which will depend on the budget impact; a fixed cost-effectiveness threshold isn’t much use. This study builds on York’s earlier work that estimated cost-effectiveness thresholds based on health opportunity costs in the NHS. The researchers attempt to identify cost-effectiveness thresholds that are in accordance with different non-marginal (i.e. large) budget impacts. The idea is that a larger budget impact should imply a lower (i.e. more difficult to satisfy) cost-effectiveness threshold. NHS expenditure data were combined with mortality rates for different disease categories by geographical area. When primary care trusts’ (PCTs) budget allocations change, they transition gradually. This means that – for a period of time – some trusts receive a larger budget than they are expected to need while others receive a smaller one. The researchers identify these as over-target and under-target accordingly. The expenditure and outcome elasticities associated with changes in the budget are estimated for the different disease groups (defined by programme budgeting categories; PBCs). Expenditure elasticity refers to the change in PBC expenditure given a change in overall NHS expenditure; outcome elasticity refers to the change in PBC mortality given a change in PBC expenditure. Two econometric approaches are used: an interaction term approach, whereby a subgroup interaction term is used with the expenditure and outcome variables, and a subsample estimation approach, whereby subgroups are analysed separately. Despite the limitations associated with a reduced sample size, the subsample estimation approach is preferred on theoretical grounds. Using this method, under-target PCTs face a cost-per-QALY of £12,047 and over-target PCTs face a cost-per-QALY of £13,464, reflecting diminishing marginal returns. The estimates are used as the basis for identifying a health production function that can approximate the association between budget changes and health opportunity costs. Going back to the motivating example of hepatitis C drugs, a £772 million budget impact would ‘cost’ 61,997 QALYs, rather than the 59,667 that we would expect without accounting for the budget impact. This means that the threshold should be lower (at £12,452 instead of £12,936) for a budget impact of this size. The authors discuss a variety of approaches for ‘smoothing’ the budget impact of such investments. Whether or not you believe the absolute size of the quoted numbers depends on whether you believe the stack of (necessary) assumptions used to reach them. But regardless of that, the authors present an interesting and novel approach to establishing an empirical basis for estimating health opportunity costs when budget impacts are large.
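
Those headline figures hang together arithmetically: the effective threshold is simply the budget impact divided by the QALYs that the health production function says it displaces. A quick check using the numbers quoted above:

```python
# With a marginal threshold of £12,936 per QALY, a £772m budget impact would
# be expected to displace 772e6 / 12936 ≈ 59,700 QALYs (the paper reports
# 59,667, presumably from an unrounded threshold). The paper's health
# production function, which allows diminishing returns, puts the loss at
# 61,997 QALYs, implying an effective threshold of 772e6 / 61997.
budget_impact = 772e6
marginal_threshold = 12_936.0
qalys_marginal = budget_impact / marginal_threshold
qalys_nonmarginal = 61_997.0  # taken from the paper
effective_threshold = budget_impact / qalys_nonmarginal
print(f"QALYs displaced (marginal assumption): {qalys_marginal:,.0f}")
print(f"QALYs displaced (non-marginal, paper): {qalys_nonmarginal:,.0f}")
print(f"Effective threshold for this impact:   £{effective_threshold:,.0f}")
```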

First do no harm – the impact of financial incentives on dental x-rays. Journal of Health Economics [RePEc] Published 30th December 2017

If dentists move from fee-for-service to a salary, or if patients move from co-payment to full exemption, does it influence the frequency of x-rays? That’s the question the researchers are trying to answer in this study. It’s important because x-rays always present some level of (carcinogenic) risk to patients and should therefore only be used when the benefits are expected to exceed the harms. Financial incentives shouldn’t come into it. If they do, then some dentists aren’t playing by the rules. And that seems to be the case. The authors start out by establishing a theoretical framework for the interaction between patient and dentist, which incorporates the harmful nature of x-rays, dentist remuneration, the patient’s payment arrangements, and the characteristics of each party. This model is used in conjunction with data from NHS Scotland: 1.3 million treatment claims from 200,000 patients and 3,000 dentists. In 19% of treatments, an x-ray occurs. Some dentists are salaried and some are not, while some people pay charges for treatment and some are exempt. A series of fixed effects models are used to take advantage of these differences in arrangements by modelling the extent to which switches (between arrangements, for patients or dentists) influence the probability of receiving an x-ray. The authors’ preferred model shows that both the dentist’s remuneration arrangement and the patient’s financial status influence the number of x-rays in the direction predicted by the model: fee-for-service and charge exemption result in more x-rays. The combination of these two factors results in a 9.4 percentage point increase in the probability of an x-ray during treatment, relative to salaried dentists with non-exempt patients. While the results do show that financial incentives influence this treatment decision (when they shouldn’t), the authors aren’t able to link the behaviour to patient harm. So we don’t know what percentage of treatments involving x-rays would fail the decision rule of benefits exceeding harms. Nevertheless, this is an important piece of work for informing the design of dentist reimbursement and patient payment mechanisms.
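
To make the identification concrete, here’s a sketch of the kind of linear probability model that could produce the headline result. Everything below is invented – variable names, magnitudes, and the dentist-only fixed effects – with the simulated effects chosen to sum to the paper’s 9.4 percentage points purely for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated claims data; patient fixed effects are omitted here purely to
# keep the example small.
rng = np.random.default_rng(11)
n = 20_000
df = pd.DataFrame({
    "dentist": rng.integers(0, 300, n),
    "fee_for_service": rng.binomial(1, 0.6, n),
    "exempt": rng.binomial(1, 0.4, n),
})
# Built-in effects: 5.0pp for fee-for-service, 2.0pp for exemption, plus a
# 2.4pp interaction -- 9.4pp in total for the fee-for-service/exempt cell
p_xray = (0.15 + 0.050 * df["fee_for_service"] + 0.020 * df["exempt"]
               + 0.024 * df["fee_for_service"] * df["exempt"])
df["xray"] = rng.binomial(1, p_xray)

# Linear probability model with dentist fixed effects; in the paper,
# identification comes from dentists and patients who switch arrangements
m = smf.ols("xray ~ fee_for_service * exempt + C(dentist)", data=df).fit()
print(m.params[["fee_for_service", "exempt", "fee_for_service:exempt"]])
```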
