Chris Sampson’s journal round-up for 5th February 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Cost-effectiveness analysis of germ-line BRCA testing in women with breast cancer and cascade testing in family members of mutation carriers. Genetics in Medicine [PubMed] Published 4th January 2018

The idea of testing women for BRCA mutations – faulty genes that can increase the probability and severity of breast and ovarian cancers – periodically makes it into the headlines. That's not just because of Angelina Jolie. It's also because it's a challenging and active area of research with many uncertainties. This new cost-effectiveness analysis evaluates a programme that incorporates cascade testing: testing the relatives of mutation carriers. The idea is that this could increase the effectiveness of the programme at a reduced cost per identification, as relatives of mutation carriers are more likely to also carry a mutation. The researchers use a cohort-based Markov-style decision analytic model. A programme with three test cohorts – i) women with unilateral breast cancer and a risk prediction score >10%, ii) first-degree relatives, and iii) second-degree relatives – was compared against no testing. A positive result in the original high-risk individual leads to testing in the first- and second-degree relatives, with the number of subsequent tests in the model determined by assumptions about family size. Women who test positive can receive risk-reducing mastectomy and/or bilateral salpingo-oophorectomy (removal of the ovaries). The results are favourable to the BRCA testing programme, at $19,000 (Australian) per QALY for testing affected women only and $15,000 when the cascade testing of family members was included, with high probabilities of cost-effectiveness at $50,000 per QALY. I'm a little confused by the model. It includes the states 'BRCA positive' and 'Breast cancer', which clearly are not mutually exclusive. It also isn't clear how women entering the model with breast cancer go on to enjoy QALY benefits compared with the no-test group. I'm definitely not comfortable with the assumption that there is no disutility associated with risk-reducing surgery. I also can't see where the cost of identifying the high-risk women in the first place was accounted for. But this is a model, after all. The findings appear to be robust to a variety of sensitivity analyses. Part of the value of testing lies in the information it provides about people beyond the individual patient. Clearly, if we want to evaluate the true value of testing then this needs to be taken into account.
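
For anyone unfamiliar with how these cohort models produce a cost-per-QALY figure, here is a minimal sketch in Python. It is not the authors' model – the states, transition probabilities, costs, and utilities below are all made-up placeholders – but it shows the basic machinery: run a cohort through a transition matrix for each strategy, discount the costs and QALYs accrued in each state, and take the ratio of the differences.

```python
# A minimal cohort Markov sketch (hypothetical numbers, not the paper's model):
# states, a per-cycle transition matrix per strategy, discounted costs and QALYs,
# and the resulting ICER for 'test' vs 'no test'.
import numpy as np

states = ["well", "cancer", "dead"]
cycles, disc = 40, 0.05  # 40 annual cycles, 5% discount rate (assumed)

# Hypothetical annual transition matrices (rows sum to 1)
P = {
    "no_test": np.array([[0.97, 0.02, 0.01],
                         [0.00, 0.90, 0.10],
                         [0.00, 0.00, 1.00]]),
    "test":    np.array([[0.98, 0.01, 0.01],   # risk-reducing surgery lowers cancer incidence
                         [0.00, 0.90, 0.10],
                         [0.00, 0.00, 1.00]]),
}
cost = {"no_test": np.array([0., 20000., 0.]),
        "test":    np.array([500., 20000., 0.])}   # testing/surveillance cost (assumed)
utility = np.array([0.85, 0.65, 0.0])              # assumed state utilities

def run(strategy):
    trace = np.zeros((cycles + 1, len(states)))
    trace[0, 0] = 1.0                               # whole cohort starts in 'well'
    for t in range(cycles):
        trace[t + 1] = trace[t] @ P[strategy]
    df = 1 / (1 + disc) ** np.arange(cycles + 1)    # discount factors
    return (trace @ cost[strategy] * df).sum(), (trace @ utility * df).sum()

c0, q0 = run("no_test")
c1, q1 = run("test")
print(f"ICER = {(c1 - c0) / (q1 - q0):,.0f} per QALY")
```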

Economic evaluation of direct-acting antivirals for hepatitis C in Norway. PharmacoEconomics Published 2nd February 2018

Direct-acting antivirals (DAAs) are those new drugs that gave NICE a headache a few years back because they were – despite being very effective and high-value – unaffordable. DAAs are essentially curative, which means that they can reduce resource use over a long time horizon. This makes cost-effectiveness analysis in this context challenging. In this new study, the authors conduct an economic evaluation of DAAs compared with the previous class of treatment, in the Norwegian context. Importantly, the researchers sought to take into account the rebates that have been agreed in Norway, which mean that prices are effectively reduced by up to 50%. There are now lots of different DAAs available, and the hepatitis C virus comes in several different genotypes. This means that there is a need to identify which treatments are most (cost-)effective for which groups of patients; it isn't simply a matter of A vs B. The authors use a previously developed model that incorporates projections of the disease up to 2030, though they extrapolate to a 100-year time horizon. The paper presents cost-effectiveness acceptability frontiers for each of genotypes 1, 2, and 3, clearly demonstrating which medicines are the most likely to be cost-effective at given willingness-to-pay thresholds. For all three genotypes, at least one of the DAA options is most likely to be cost-effective above a threshold of €70,000 per QALY (which is apparently recommended in Norway). The model predicts that if everyone received the most cost-effective strategy then Norway would expect to see around 180 hepatitis C patients in 2030 instead of the 300-400 seen in the last six years. The study also presents the price rebates that would be necessary to make currently sub-optimal medicines cost-effective. The model isn't that generalisable. It's very much Norway-specific, as it reflects the country's treatment guidelines. It also only looks at people who inject drugs – a sub-population whose importance can vary a lot from one country to the next. I expect this will be a valuable piece of work for Norway, but it strikes me as odd that neither 'affordability' nor 'budget impact' is even mentioned in the paper.
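
A cost-effectiveness acceptability frontier is straightforward to construct once you have PSA output for each strategy. The sketch below is mine, not the authors' (the costs and QALYs are simulated placeholders), but it shows the logic: at each willingness-to-pay threshold, identify the strategy with the highest expected net monetary benefit, then report the probability that that strategy is the optimal one.

```python
# Sketch of a cost-effectiveness acceptability frontier (CEAF) from PSA output,
# using simulated (hypothetical) costs and QALYs for three treatment strategies.
import numpy as np

rng = np.random.default_rng(1)
n_sims, strategies = 5000, ["old regimen", "DAA A", "DAA B"]

# Hypothetical PSA draws: columns correspond to strategies
costs = rng.normal([40_000, 55_000, 60_000], 5_000, size=(n_sims, 3))
qalys = rng.normal([8.0, 9.0, 9.2], 0.5, size=(n_sims, 3))

for wtp in [30_000, 70_000, 100_000]:               # willingness-to-pay thresholds (per QALY)
    nmb = wtp * qalys - costs                        # net monetary benefit per simulation
    best_on_average = nmb.mean(axis=0).argmax()      # frontier strategy at this threshold
    p_ce = (nmb.argmax(axis=1) == best_on_average).mean()
    print(f"WTP {wtp:>7,}: frontier = {strategies[best_on_average]}, "
          f"P(cost-effective) = {p_ce:.2f}")
```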

Cost-effectiveness of prostate cancer screening: a systematic review of decision-analytical models. BMC Cancer [PubMed] Published 18th January 2018

You may have seen prostate cancer in the headlines last week. Although the number of people in the UK dying each year from prostate cancer is now greater than the number dying from breast cancer, prostate cancer screening remains controversial. This is because over-detection and over-treatment are common and harmful. Plenty of cost-effectiveness studies have been conducted in the context of detecting and treating prostate cancer. But there are various ways of modelling the problem and various specifications of screening programme that can be evaluated. So here we have a systematic review of cost-effectiveness models evaluating prostate-specific antigen (PSA) blood tests as a basis for screening. From a haul of 1,010 studies, 10 made it into the review. The studies modelled lots of different scenarios, with alternative screening strategies, PSA thresholds, and treatment pathways. The results are not consistent. Many of the scenarios evaluated in the studies were more costly and less effective than current practice (which tended to be the absence of any formal screening programme). None of the UK-based cost-per-QALY estimates favoured screening. The authors summarise the methodological choices made in each study and consider the extent to which these relate to the pathways being modelled. They also report the health state utility values used in the models. This will be a very useful reference point for anyone trying their hand at a prostate cancer screening model. Of the ten studies included in the review, four found at least one screening programme to be potentially cost-effective. 'Adaptive screening' – whereby individuals' recall to screening is based on their risk – was considered in two studies using patient-level simulations. The authors suggest that cohort-level modelling could be sufficient where screening is not determined by individual risk level. There are also warnings against inappropriate definition of the comparator, which is likely to be opportunistic screening rather than a complete absence of screening. Generally speaking, a lack of good data seems to be part of the explanation for the inconsistency in the findings. It could be some time before we have a clearer understanding of how to implement a cost-effective screening programme for prostate cancer.

Sam Watson’s journal round-up for 21st August 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Multidimensional performance assessment of public sector organisations using dominance criteria. Health Economics [RePEc] Published 18th August 2017

The empirical assessment of the performance or quality of public organisations such as health care providers is an interesting and oft-tackled problem. Despite the development of sophisticated methods in a large and growing literature, public bodies continue to use demonstrably inaccurate or misleading statistics such as the standardised mortality ratio (SMR). Apart from the issue that these statistics may not be very well correlated with underlying quality, organisations may improve on a given measure by sacrificing their performance on another outcome valued by different stakeholders. One example from a few years ago showed how hospital rankings based upon SMRs shifted significantly once readmission rates and their correlation with SMRs were taken into account. This paper advances this thinking a step further by considering multiple outcomes potentially valued by stakeholders and using dominance criteria to compare hospitals. A hospital dominates another if it performs at least as well on every outcome. Importantly, correlation between these measures is captured in a multilevel model. I am an advocate of this type of approach, that is, the use of multilevel models to combine information across multiple 'dimensions' of quality. Indeed, my only real criticism would be that it doesn't go far enough! The multivariate normal model used in the paper assumes a linear relationship between outcomes in their conditional distributions. Similarly, an instrumental variable model is also used (with the now routine distance-to-health-facility instrument) that likewise assumes a linear relationship between outcomes and 'unobserved heterogeneity'. The complex behaviour of health care providers may well mean these assumptions do not hold – for example, failing institutions may show poor performance across the board, while other facilities are able to trade off outcomes against one another – which would suggest a non-linear relationship. I'm also finding it hard to get my head around the IV model: in particular, what the covariance matrix for the whole model looks like, and whether correlations are permitted at multiple levels. Nevertheless, it's an interesting take on the performance question, but my faith that decent methods like this will be used in practice continues to wane as organisations such as Dr Foster still dominate quality monitoring.
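
To make the dominance idea concrete, here is a small illustration in Python. It isn't the paper's estimator (which builds the correlation between outcomes into a multilevel model); it just shows how, given posterior draws of two hospitals' performance on several outcomes, you can compute the posterior probability that one dominates the other – that is, that it is at least as good on every dimension. The numbers are invented.

```python
# Illustrative sketch (not the paper's estimator): posterior draws of hospital
# performance on several outcomes, and the posterior probability that one
# hospital dominates another (at least as good on every outcome).
import numpy as np

rng = np.random.default_rng(7)
n_draws, outcomes = 4000, ["mortality", "readmission", "patient experience"]

# Hypothetical posterior draws; lower is better for the first two, higher for
# the third, so flip signs to make "higher = better" on every dimension.
hosp_A = np.column_stack([-rng.normal(1.00, 0.05, n_draws),
                          -rng.normal(0.08, 0.01, n_draws),
                           rng.normal(0.75, 0.03, n_draws)])
hosp_B = np.column_stack([-rng.normal(1.10, 0.05, n_draws),
                          -rng.normal(0.09, 0.01, n_draws),
                           rng.normal(0.74, 0.03, n_draws)])

dominates = np.all(hosp_A >= hosp_B, axis=1)      # A at least as good everywhere
print(f"P(A dominates B across {outcomes}) = {dominates.mean():.2f}")
```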

A simultaneous equation approach to estimating HIV prevalence with nonignorable missing responses. Journal of the American Statistical Association [RePEc] Published August 2017

Non-response is a problem encountered more often than not in survey-based data collection. For many public health applications, though, surveys are the primary way of determining the prevalence and distribution of disease, knowledge of which is required for effective public health policy. Methods such as multiple imputation can be used in the face of missing data, but this requires an assumption that the data are missing at random. For disease surveys this is unlikely to be true. For example, the stigma around HIV may make many people choose not to respond to an HIV survey, leading to a situation where data are missing not at random. This paper tackles the question of estimating HIV prevalence in the face of informative non-response. Most economists are familiar with the Heckman selection model, which is a way of correcting for sample selection bias. The Heckman model is typically estimated or viewed as a control function approach, in which the residuals from a selection model are used in a model for the outcome of interest to control for unobserved heterogeneity. An alternative way of representing this model is as a copula between the indicator of whether someone responds to the survey and the outcome variable itself. This representation is more flexible and permits a variety of models for both selection and outcomes. This paper includes spatial effects (given the nature of disease transmission) not only in the selection and outcome models, but also in the model for the mixing parameter between the two marginal distributions, which allows the degree of informative non-response to differ by location and to be correlated over space. The instrumental variable used is the identity of the interviewer, since different interviewers are expected to be more or less successful at collecting data, independently of the status of the individual being interviewed.
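
If it isn't obvious why informative non-response matters so much here, the little simulation below (my own, with made-up numbers) shows the problem the selection model is designed to fix: when people with the condition are less likely to respond, the naive complete-case prevalence estimate is biased downwards.

```python
# Simulation sketch of informative non-response: people with the condition are
# less likely to answer the survey, so the naive prevalence estimate is biased.
# This illustrates the problem the selection/copula model addresses; it is not
# the authors' estimator.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
true_prevalence = 0.10

status = rng.random(n) < true_prevalence             # true HIV status
# Response probability depends on status (stigma lowers response among positives)
p_respond = np.where(status, 0.55, 0.85)
responded = rng.random(n) < p_respond

naive = status[responded].mean()                      # complete-case estimate
print(f"true prevalence   : {true_prevalence:.3f}")
print(f"naive (responders): {naive:.3f}")             # biased downwards
```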

Clustered multistate models with observation level random effects, mover–stayer effects and dynamic covariates: modelling transition intensities and sojourn times in a study of psoriatic arthritis. Journal of the Royal Statistical Society: Series C [ArXiv] Published 25th July 2017

Modelling the progression of disease accurately is important for economic evaluation. A delicate balance between bias and variance should be sought: a model that is too simple will be wrong for most people, while a model that is too complex will be too uncertain. A huge range of models therefore exists, from 'simple' decision trees to 'complex' patient-level simulations. Multistate models, such as Markov models, are a popular choice, as they provide a convenient framework for examining the evolution of stochastic processes and systems. A common feature of such models is the Markov property: the probability of moving to a given state is independent of what has happened previously. This can be relaxed by adding covariates to the transition models that capture event history or other salient features. This paper provides a neat example of extending this approach further in the case of arthritis. The development of arthritic damage in a hand joint can be described by a multistate model, but there are obviously multiple joints in one hand, and the outcomes in different joints are unlikely to be independent of one another. This paper describes a multilevel model of transition intensities for multiple correlated processes, along with other extensions like dynamic covariates and mover–stayer probabilities.
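
A toy illustration of why this matters, in Python: below, the hazard of damage in each joint of a hand rises with the number of joints already damaged – a dynamic covariate capturing dependence between the processes – and the resulting outcomes differ from what a model treating the joints as independent would predict. This is only the core idea with invented numbers, not the paper's continuous-time, random-effects model.

```python
# Toy illustration (not the paper's model): transitions for several joints in
# one hand, where the per-cycle hazard of damage in each joint rises with the
# number of joints already damaged -- a dynamic covariate that a model with
# independent joints would miss.
import numpy as np

rng = np.random.default_rng(0)

def simulate_hand(n_joints=5, n_cycles=60, base=0.01, extra_per_damaged=0.01):
    damaged = np.zeros(n_joints, dtype=bool)
    for _ in range(n_cycles):
        p = base + extra_per_damaged * damaged.sum()   # dynamic covariate
        new_damage = (~damaged) & (rng.random(n_joints) < p)
        damaged |= new_damage
    return damaged.sum()

with_dependence = [simulate_hand() for _ in range(3000)]
independent = [simulate_hand(extra_per_damaged=0.0) for _ in range(3000)]
print(f"mean damaged joints, dependent processes : {np.mean(with_dependence):.2f}")
print(f"mean damaged joints, independent joints  : {np.mean(independent):.2f}")
```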

Chris Sampson’s journal round-up for 20th June 2016

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Can increased primary care access reduce demand for emergency care? Evidence from England’s 7-day GP opening. Journal of Health Economics Published 15th June 2016

Getting a GP appointment when you want one can be tricky, and complaints are increasingly common in the UK. In April 2013, 7-day opening for some GP practices began being piloted in London, with support from the Prime Minister's Challenge Fund. Part of the reasoning for 7-day opening – beyond patient satisfaction – is that better access to GP services might reduce the use of A&E at weekends. This study evaluates whether or not this has been observed for the London pilot. Secondary Uses Service patient-level data are analysed for 2009-2014 for 34 GP practices in central London (4 pilot practices and 30 controls). The authors collapse the data into the number of A&E attendances per GP practice per week, giving 8,704 observations (34 practices over 256 weeks). Six categories of A&E attendance are identified: some that we would expect to be influenced by extended GP opening (e.g. 'minor') and some that we would not (e.g. 'accident'). Pilot practices were not randomly selected, and those that were selected had a significantly higher patient-to-GP ratio. The authors run difference-in-differences analyses on the outcomes using Poisson regression models. Total weekend attendances dropped by 17.9%, with moderate cases exhibiting the greatest drop; minor cases were not affected. There was also a 10% drop in weekend admissions and a 20% drop in ambulance usage, suggesting major cost savings. A small spillover effect was observed for weekdays. The authors divide their sample into age groups and find that the fall in A&E attendances was greatest in the over-60s, who account for almost all of the drop in weekend admissions. The authors speculate that this may be because A&E staff are risk averse with elderly patients whose background they do not know, whereas GPs may be better able to assess the seriousness of the case. Patients from wealthier neighbourhoods exhibited a relatively greater drop in A&E attendances. So it looks like 7-day opening for GP services could relieve a lot of pressure on A&E departments. What's lacking from the paper, though, is an explicit estimate of the cost savings (if, indeed, there were any). The pilot was funded to the tune of £50 million. Unfortunately this study doesn't tell us whether or not it was worth it.
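
For readers who want to see the shape of the estimation, here is a stripped-down difference-in-differences Poisson regression in Python. The data are simulated and the specification is far simpler than the authors' (no practice or week fixed effects, no attendance categories, no clustered standard errors), but it shows where the pilot × post interaction – the headline effect – comes from.

```python
# Sketch of the difference-in-differences set-up: a Poisson regression of weekly
# A&E attendance counts on a pilot x post-opening interaction. The data are
# simulated and the variable names are hypothetical; in practice you would add
# practice and week fixed effects and cluster standard errors by practice.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
practices, weeks = 34, 256
df = pd.DataFrame({
    "practice": np.repeat(np.arange(practices), weeks),
    "week": np.tile(np.arange(weeks), practices),
})
df["pilot"] = (df["practice"] < 4).astype(int)           # 4 pilot practices
df["post"] = (df["week"] >= 200).astype(int)             # hypothetical start of 7-day opening
# Simulated counts with a ~15% drop for pilot practices after opening
rate = np.exp(3.0 - 0.15 * df["pilot"] * df["post"])
df["attendances"] = rng.poisson(rate.to_numpy())

model = smf.glm("attendances ~ pilot * post", data=df,
                family=sm.families.Poisson()).fit()
print(model.params["pilot:post"])    # DiD effect on the log scale (~ -0.15 here)
```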

Cost-effectiveness analysis in R using a multi-state modeling survival analysis framework: a tutorial. Medical Decision Making [PubMed] Published 8th June 2016

To say my practical understanding of R is rudimentary would be a grand overstatement. But I do understand the benefits of the increasingly ubiquitous open source stats software. People frown hard when I tell them that we often build Markov models in Excel. An alternative script-based approach could clearly increase the transparency of decision models and do away with black box problems. This paper does what it says on the tin and guides the reader through the process of developing a state-based (e.g. Markov) transition model. But the key novelty of the paper is the description of a tool for 'testing' the Markov assumption that might be built into a decision model. This is the 'state-arrival extended model', which entails the inclusion of a covariate representing the history from the start of the model. A true Markov model is only interested in time in the current state, so if this extra covariate matters to the results then we can reject the Markov assumption and instead implement a semi-Markov model (or maybe something else). The authors do just this using an example from a previously published trial. I dare say the authors could have figured out that the Markov assumption wouldn't hold without using such a test, but it's good to have a justification for model choice. The basis for the tutorial is a 12-step program, and the paper explains each step. The majority of the process is based on adaptations of an existing R package called mstate. It assumes that time is continuous rather than discrete and can handle alternative parametric distributions for survival. Visual assessment of fit is built into the process to facilitate model selection. Functions are defined to compute the QALYs and costs associated with each state, and probabilistic sensitivity analysis (PSA) is implemented, with generation of cost-effectiveness planes and CEACs. But your heart may sink when the authors state that "It is assumed that individual patient data are available". The authors provide a thorough discussion of the ways in which a model might be constructed when individual-level data aren't available. But ultimately this seems like a major limitation of the approach, or at least of the usefulness of this particular tutorial. So don't throw away your copy of Briggs/Sculpher/Claxton just yet.
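
The idea behind the state-arrival check is simple enough to sketch even outside of mstate. Below is my rough Python rendering of it (using the lifelines package, which is my choice, not the authors'): for one transition, fit a hazard model for the time spent in the current state that includes the time of arrival in that state as a covariate; a clearly non-zero coefficient on that covariate argues against the Markov assumption. The data are simulated so that the assumption deliberately fails.

```python
# A rough Python rendering of the 'state-arrival extended model' check described
# in the paper (the tutorial itself is a 12-step R/mstate workflow). For one
# transition, fit a Cox model for time spent in the current state that includes
# the time of arrival in that state as a covariate; if that covariate matters,
# the Markov assumption looks doubtful.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 2000
arrival_time = rng.exponential(2.0, n)               # time from model start to state entry
# Simulated sojourn times in which later arrivals progress faster (non-Markov)
sojourn = rng.exponential(5.0 / (1 + 0.3 * arrival_time), n)
df = pd.DataFrame({"sojourn": sojourn,
                   "event": np.ones(n, dtype=int),   # everyone observed to transition
                   "arrival_time": arrival_time})

cph = CoxPHFitter().fit(df, duration_col="sojourn", event_col="event")
print(cph.summary[["coef", "p"]])                    # a significant 'arrival_time' coefficient
                                                     # argues against the Markov assumption
```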

Do waiting times affect health outcomes? Evidence from coronary bypass. Social Science & Medicine [PubMed] Published 30th May 2016

Many health economists are quite happy with waiting lists being used as a basis for rationing in health services like the NHS. But, surely, this is conditional on the delay in treatment not affecting either current health or the potential benefit of treatment. This new study provides evidence from coronary bypass surgery. Hospital Episode Statistics for 133,166 patients for the years 2000-2010 are used to look at two outcomes: 30-day mortality and 28-day readmission. Over this period, policy reduced waiting times from 220 days to 50 days. Three empirical strategies are employed: i) annual cross-sectional estimation of the probability of the two outcomes occurring in patients, ii) panel analysis of hospital-level data over the 11 years to evaluate the impact of different waiting time reductions, and iii) full analysis of patient-specific waiting times across all years using an instrumental variable based on waiting times for an alternative procedure. The first analysis finds no effect of waiting times on mortality in all years bar 2003 (in which the effect was negative), and only a weak association with readmission: doubling waiting times increases the risk of readmission from 4.05% to 4.54%. The hospital-level analysis finds a lack of effect on both counts. The full panel analysis finds that longer waiting times reduce mortality, but the authors suggest that this is probably due to some unobserved heterogeneity. Longer waiting times may have a negative effect on people's health, but it isn't likely that this effect is dramatic enough to increase mortality. This might be thanks to effective prioritisation in the NHS.
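
The third strategy is worth unpacking: waiting times are endogenous (sicker patients may be treated faster or slower), so the authors instrument them with waiting times for an alternative procedure. The sketch below is a manual two-stage least squares illustration with invented data, not the authors' specification, and in practice the second-stage standard errors need the usual 2SLS correction (or a dedicated package).

```python
# Manual two-stage least squares sketch of the IV strategy: instrument the
# patient's waiting time with the waiting time for an alternative procedure at
# the same hospital. Data and variable names are hypothetical; the outcome is
# modelled as a linear probability for simplicity, with a true wait effect of 0.
import numpy as np

rng = np.random.default_rng(11)
n = 50_000
hospital_pressure = rng.normal(0, 1, n)               # unobserved driver of both waits
instrument = 60 + 10 * hospital_pressure + rng.normal(0, 5, n)  # wait for other procedure
sickness = rng.normal(0, 1, n)                        # unobserved severity
wait = 0.8 * instrument + 5 * sickness + rng.normal(0, 5, n)    # endogenous waiting time
mortality = (rng.random(n) < 0.02 + 0.01 * sickness).astype(float)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X1 = np.column_stack([np.ones(n), instrument])
wait_hat = X1 @ ols(X1, wait)                         # first stage: predict wait from instrument
beta_iv = ols(np.column_stack([np.ones(n), wait_hat]), mortality)[1]   # second stage
beta_naive = ols(np.column_stack([np.ones(n), wait]), mortality)[1]
print(f"naive OLS effect of waiting on mortality: {beta_naive:+.5f}")  # biased by severity
print(f"IV (2SLS) effect                        : {beta_iv:+.5f}")     # close to zero
```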