Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.
“Stick or twist?” Negotiating price and data in an era of conditional approval. Value in Health [PubMed] Published February 2020
This article caught my attention for two reasons. Firstly, it explores decision making in relation to NICE’s relatively new ‘recommended for use within the Cancer Drugs Fund (CDF)’ pathway. Secondly, and more unusually, it views HTA decision making from an industry perspective rather than a payer perspective.
The reworking of the NICE Cancer Drugs Fund means that HTA committees have three possible decisions for cancer medicines whose benefits are highly uncertain: 1) regular approval, for which an acceptable price is set based on conservative assumptions in the economic analysis; 2) approval with a requirement to complete additional research and a re-review within two years, in which case the acceptable price may be set in relation to the results of the new research; or 3) rejection.
The decision this paper examines is the one faced by a submitting pharmaceutical company: whether to accept a lower price for regular approval, based on a conservative interpretation of early data (‘stick’), or to target a higher price through approval with evidence generation (‘twist’). The second strategy is risky in that the additional evidence may prove unfavourable, lowering the acceptable price and perhaps even showing the treatment to be ineffective.
The authors use a twist (ha!) on the standard expected value of sample information (EVSI) method to calculate the expected value of the two options for the company. To slightly oversimplify, the stick strategy’s commercial value is the price supported by current evidence multiplied by the volume of cases. The twist strategy’s value is more complex, since each possible outcome of data collection would support a different acceptable price. Outcomes of data collection are simulated according to the company’s beliefs, expressed as a probability distribution, and the resulting acceptable prices are calculated using the cost-effectiveness model applied in HTA decision making. The expectation of the acceptable price gives us the post-data-collection value of the drug, and subtracting research costs gives the value of the twist option. The process is nicely illustrated with a case study of a real submitted medicine with additional simulated data. A nice feature of this type of analysis is that it provides both the expected commercial value of each strategy and the probability that ‘stick’ or ‘twist’ is the optimal choice, which is important if company decision makers are not risk neutral.
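To make the mechanics concrete, here is a minimal sketch of the calculation in Python. Everything in it – the toy pricing model, the prices, the volume, and the company’s prior over treatment effects – is a hypothetical stand-in for illustration, not the paper’s actual case study.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Hypothetical inputs (stand-ins, not the paper's case study) ---
volume = 10_000          # expected treated patients over the product's life
research_cost = 5e6      # cost to the company of the CDF data collection
stick_price = 20_000     # price supported by conservative assumptions today

def acceptable_price(effect):
    """Toy cost-effectiveness model: the maximum price at which the
    drug's ICER stays at or below a 30,000/QALY threshold, given an
    incremental QALY gain 'effect'."""
    threshold = 30_000
    comparator_cost = 5_000
    return max(threshold * effect + comparator_cost, 0.0)

# Company's beliefs about the incremental QALY gain the new evidence
# will reveal (assumed more optimistic than the committee's view).
n_sims = 100_000
effect_draws = rng.normal(loc=0.6, scale=0.2, size=n_sims)

# Value of 'stick': lock in the conservative price now.
stick_value = stick_price * volume

# Value of 'twist': the price is set after the evidence arrives, so
# average the post-data-collection price over simulated outcomes.
twist_prices = np.array([acceptable_price(e) for e in effect_draws])
twist_value = twist_prices.mean() * volume - research_cost

print(f"Expected value of stick: {stick_value:,.0f}")
print(f"Expected value of twist: {twist_value:,.0f}")

# Probability that twisting beats sticking (relevant if decision
# makers are not risk neutral):
p_twist_better = np.mean(twist_prices * volume - research_cost > stick_value)
print(f"P(twist is optimal): {p_twist_better:.2f}")
```

In the paper the toy pricing function is replaced by the full cost-effectiveness model used in the HTA decision, but the structure of the comparison is the same: simulate the evidence, price each outcome, average, and net off the research costs.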
An important insight is that the option to twist is most attractive when the company has good reasons to be more optimistic about additional data than the HTA committee. This can plausibly happen because HTA committees use the specific trial data presented and do not make use of external data (e.g. effectiveness of similar agents in other cancers) when assessing uncertainty.
To me, this also suggests that there might be some signal for NICE in company decisions to stick or twist. If the option to stick is usually preferred, this might suggest that the conservative assumptions used in setting stick-option prices are not really conservative in the eyes of the company. A lack of interest in completing research to resolve the uncertainty, despite an incentive to do so, could signal pessimism about the true effectiveness of the product. To extend the cards metaphor, NICE may need to ‘call their bluff’ by setting more aggressive prices for regular approvals.
I would recommend reading this thought-provoking paper, whether you might be involved in consulting for industry on this topic or are simply interested in strategy in HTA decision making.
Shared decision making: from decision science to data science. Medical Decision Making [PubMed] Published 6th February 2020
‘Shared decision making’ is a relatively new term for an age-old practice: the ideal form of the agency relationship between doctor and patient. The increasing popularity of the phrase is probably due to the shift in recent decades from ‘doctor knows best’ towards more participation from patients in decisions about their own care. It seems shared decision making is now catching on in health economics; it is closely linked to two major research themes, patient preferences and demand for healthcare.
In this paper the authors propose a new process for clinical decision making supported by stated preference data. Conjoint analysis tasks are completed by patients within a clinical encounter (decision science), and their responses are combined with prior data about preference phenotypes and patient satisfaction (data science). The method utilises a type of Bayesian collaborative filtering (CF) algorithm, of the kind commonly used online to generate recommendations: for example, Netflix predicts which shows you might like based on your viewing history combined with data about what is popular with similar viewers, i.e. viewers with the same preference phenotype. It might sound strange to suggest that doctors should recommend treatment in the same way Netflix recommends series, but there are situations in which I can see this making sense: there are often trade-offs of risks and benefits that mean no one treatment is strictly superior to all the alternatives. It is hoped that the system can help clinicians and patients reach decisions that better reflect individual patients’ preferences.
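To give a flavour of how such a recommendation could work (a minimal sketch, not the authors’ actual algorithm), the following illustrates the core idea: a handful of conjoint responses update a posterior over preference phenotypes, and treatments are then scored using satisfaction data from previous, similar patients. All the numbers, phenotypes, and treatments are invented for illustration.

```python
import numpy as np

# --- Hypothetical prior data (illustrative, not from the paper) ---
# Two preference phenotypes, three candidate treatments.
phenotype_prior = np.array([0.5, 0.5])

# P(patient answers 'yes' to each of 4 conjoint tasks | phenotype).
task_prob = np.array([
    [0.9, 0.8, 0.2, 0.1],   # phenotype 0: prioritises efficacy
    [0.1, 0.2, 0.8, 0.9],   # phenotype 1: prioritises fewer side effects
])

# Mean reported satisfaction with each treatment, by phenotype,
# learned from previous patients (the 'data science' ingredient).
satisfaction = np.array([
    [0.8, 0.5, 0.3],
    [0.2, 0.6, 0.9],
])

def recommend(responses):
    """Bayesian update over phenotypes from a patient's conjoint
    responses, then rank treatments by expected satisfaction."""
    likelihood = np.where(responses == 1, task_prob, 1 - task_prob).prod(axis=1)
    posterior = phenotype_prior * likelihood
    posterior /= posterior.sum()
    expected_satisfaction = posterior @ satisfaction
    return posterior, expected_satisfaction

# A new patient answers the four tasks: mostly side-effect averse.
posterior, scores = recommend(np.array([0, 0, 1, 1]))
print("P(phenotype | responses):", posterior.round(2))
print("Expected satisfaction per treatment:", scores.round(2))
print("Recommended treatment:", int(scores.argmax()))
```

The appeal of the phenotype structure is that a new patient needs to answer only a few tasks within the encounter; the heavy lifting is done by the prior data accumulated from previous patients.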
The novelty of the paper is the use of the CF algorithm, which is compared to a more basic approach for combining these data types that does not utilise the concept of preference phenotypes. The comparison was made on a number of simulated data sets with differing degrees of preference heterogeneity. The results support the use of the CF algorithm in the presence of heterogeneity when there are relatively separable classes of patients: using the CF algorithm, there was better agreement between the predicted and true best recommendations. This is what we would expect given the details of the two methods that were compared.
For me, the most useful aspect of this paper was laying out the feasibility of the new method. I am less convinced about the superiority of this exact approach compared to related methods at this stage. However, as there is still a great deal of work to be done prior to implementation in practice, that is a topic to which future research can return. I hope this paper will contribute to future trials of decision aids that use conjoint analysis augmented with knowledge of patient preferences and reported satisfaction. For shared decision making to be more than a trendy turn of phrase we need innovative approaches like this that try to enrich the clinical encounter, making it more than simply an exchange of information and patient consent.
Is the whole larger than the sum of its parts? Impact of missing data imputation in economic evaluation conducted alongside randomized controlled trials. The European Journal of Health Economics [PubMed] Published 27th February 2020
Missing data can be a real headache for analysts. The problem can range from minor pain to a full-on migraine. The (methods) guideline-recommended treatment is multiple imputation (MI), which provides estimates and inference that are robust to bias from data missingness under the so-called ‘missing at random’ assumption.
A number of questions arise when planning MI analysis, one of which is whether to impute every individual item in a questionnaire or only the aggregate you are interested in (e.g. total cost vs counts of resource use items). This paper addresses this methodological choice and aims to determine which method – item imputation or aggregate imputation (a.k.a. unit imputation) – is superior. Importantly, the authors then go on to address the question of whether the difference has practical significance in terms of the results of an economic evaluation.
We already know that when missingness is due to unit non-response, item imputation and unit imputation perform similarly, which is quite intuitive. What this paper adds is that unit imputation may be more prone to bias and less precise when item missingness is prevalent and unit missingness is not.
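To make the distinction concrete, here is a toy sketch (my own, not the paper’s simulation design) contrasting the two strategies on simulated cost data, using scikit-learn’s IterativeImputer as the imputation engine. A full MI analysis would draw several imputed data sets and pool results using Rubin’s rules; this sketch draws a single imputation for brevity.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)

# --- Simulated trial data (illustrative only) ---
# Three resource-use cost items per patient, correlated with age.
n = 500
age = rng.normal(60, 10, n)
items = pd.DataFrame({
    "age": age,
    "gp_visits": 50 * rng.poisson(2 + age / 30, n),
    "inpatient": 400 * rng.poisson(1 + age / 60, n),
    "drugs": rng.gamma(2, 100, n),
})

# Make item-level values missing at random (dependent on age).
for col in ["gp_visits", "inpatient", "drugs"]:
    miss = rng.random(n) < 0.1 + 0.2 * (age > 65)
    items.loc[miss, col] = np.nan

# Strategy 1: item imputation -- impute each cost item, then sum.
imp = IterativeImputer(sample_posterior=True, random_state=1)
item_completed = pd.DataFrame(imp.fit_transform(items), columns=items.columns)
total_item = item_completed[["gp_visits", "inpatient", "drugs"]].sum(axis=1)

# Strategy 2: aggregate (unit) imputation -- sum first, so any missing
# item makes the whole total missing, then impute the total directly.
agg = items[["age"]].copy()
agg["total"] = items[["gp_visits", "inpatient", "drugs"]].sum(axis=1, skipna=False)
agg_completed = pd.DataFrame(
    IterativeImputer(sample_posterior=True, random_state=1).fit_transform(agg),
    columns=agg.columns,
)

print(f"Mean total cost, item imputation:      {total_item.mean():,.0f}")
print(f"Mean total cost, aggregate imputation: {agg_completed['total'].mean():,.0f}")
```

The intuition the sketch illustrates is that item imputation retains the partially observed items for each patient, whereas aggregating first throws that information away whenever any single item is missing.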
Using a real trial data set with resource use and utilities, as well as missing data simulations based on the trial data, the authors explored relative bias in costs and utilities (EQ-5D), coverage probabilities of estimates, and the resulting ICERs. The results are broadly supportive of item imputation over unit imputation for all missing data patterns except missing completely at random. Judging the degree of support for item imputation is difficult, as relative performance may depend on specific aspects of the imputation models and data set. In the chosen setting, if about 20% or more of data were missing, then the methodological choice made an important difference to cost-effectiveness conclusions.
The missing data patterns considered in this paper are relatively simple, but trial data can get considerably more complex. One aspect not addressed in this paper is that clinical trial data often include observations at a number of follow-up time points, i.e. panel data. Missingness often occurs at later time points and is correlated within individuals across waves. Anecdotally, one reason that item imputation is not used is that, when combined with more complex panel data, multiple imputation models can fail to converge or become computationally expensive.
Missing data analysis is one of the most challenging aspects of any economic evaluation conducted alongside a clinical trial. Better practical guidance on how to conduct MI would be useful to many, but will probably require translating studies such as this one into textbook and taught-course formats that reconcile the various studies on the topic.
Credits