
Sofosbuvir: a fork in the road for NICE?

NICE recently completed its appraisal of the hepatitis C drug sofosbuvir. However, as has been reported in the media, NHS England will not be complying with the guidance within the normal time period.

The cost of a 24-week course of sofosbuvir is almost £70,000. Around 160,000 people are chronically infected with the hepatitis C virus in England, so that adds up to a fair chunk of the NHS budget. Yet the drug does appear to be cost-effective. ICERs differ for different patient groups, but for most scenarios the ICER is below £30,000 per QALY. In the NICE documentation, a number of reasons are listed for NHS England’s decision. But what they ultimately boil down to – it seems – is affordability.
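To see the tension in numbers, here is a back-of-the-envelope sketch in Python. The course cost and patient numbers are those quoted above; the incremental cost and QALY gain are invented purely for illustration, not taken from the appraisal.

```python
course_cost = 70_000          # £, 24-week course of sofosbuvir
eligible_patients = 160_000   # people chronically infected with HCV in England

incremental_cost = 35_000     # £ vs current care (assumed, illustrative)
incremental_qalys = 1.4       # QALYs gained vs current care (assumed, illustrative)

# Incremental cost-effectiveness ratio: extra cost per extra QALY
icer = incremental_cost / incremental_qalys
print(f"ICER: £{icer:,.0f} per QALY")  # £25,000: within NICE's usual range

# ...but cost-effective is not the same as affordable
budget_impact = course_cost * eligible_patients
print(f"Treating everyone at once: £{budget_impact / 1e9:.1f} billion")  # £11.2bn
```

Both statements can be true at once: the drug can be good value per QALY and still be unaffordable within the current budget.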

The problem is that NICE doesn’t account for affordability in its guidance. One need only consider that the threshold has remained unchanged for over a decade to see that this is true. How to solve this problem really depends on what we believe the job of NICE should be. Should it be NICE’s job to consider what should and shouldn’t be purchased within the existing health budget? Or, rather, should it be NICE’s job simply to figure out what is ‘worth it’ to society, regardless of affordability? This isn’t the first time that an NHS organisation has appealed against a NICE decision in some way; surely it won’t be the last. These instances represent a failure in the system, not least on grounds of accountability for reasonableness. Here I’d like to suggest that NICE has three options for dealing with this problem: one easy, one hard and one harder.

The easy option

The simplest option involves the fewest changes to the NICE process. Indeed, it would involve doing pretty much what it does now, only with slightly different (and more transparent) reasoning. In this scenario NICE would explicitly ignore the problem of affordability. Its remit would cease to be the consideration of optimality on a national level and it would ignore the budget constraint. NICE’s remit would become figuring out which health technologies are ‘worth it’; i.e. whether the public would be willing to purchase a given technology, with a given health benefit, at a given cost. To some extent, therefore, NICE would become a threshold-setter. The threshold should be based on some definition of the social value of a QALY. This is the easy option for NICE, as setting the threshold would be the only task added to what it currently does. Its threshold might not change all that much, though it could end up a little higher.

However, even if NICE denies responsibility, clearly someone does need to take account of affordability. Given the events associated with sofosbuvir it seems that this could become the work of NHS England. NHS England could use a threshold based on the budget and current QALY-productivity in the NHS. One might expect NHS England to be in a better position to identify the local evidence necessary to determine appropriate thresholds, which would likely be much lower than NICE’s. It would also be responsible for disinvestment decisions. Given the nationwide remit of NHS England, this would still prevent postcode lotteries. The implication here, of course, is that NICE and NHS England might use different thresholds. Any number of decision rules could be used to determine the result for technologies falling between the two. Maybe this is where considerations for innovation or non-health-related equity concerns belong. It seems probable to me that NICE’s threshold would be higher than NHS England’s, in which case NICE would effectively be advising increases in the health budget. This is something that I quite like the sound of.
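As a toy sketch of how such a two-threshold rule might work (both threshold values are hypothetical, chosen only so that NICE’s is the higher of the two):

```python
def appraise(icer, nice_threshold=40_000, nhse_threshold=15_000):
    """Toy two-threshold decision rule; both thresholds are hypothetical.

    NICE's threshold would reflect the social value of a QALY;
    NHS England's would reflect what the current budget can displace.
    """
    if icer <= nhse_threshold:
        return "fund from the existing budget"
    if icer <= nice_threshold:
        return "worth it to society but unaffordable: advise a bigger budget"
    return "reject: not worth it at any budget"

print(appraise(25_000))  # lands between the two thresholds
```

Technologies falling in the middle band are exactly the ones where considerations of innovation, equity, or budget expansion would come into play.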

The hard option

Personally, I believe that NICE’s failure to justify its threshold(s) is a serious failing that undermines the enterprise. The hard option would involve defining the threshold properly, informed by current levels of QALY-productivity in the NHS. NICE would thus properly adopt the position of a threshold-searcher, doing the job prescribed to NHS England under the ‘easy option’. Its guidance would then be informed by the current health budget and affordability, and so would have to include guidance on disinvestment. The first stage of this work has already been done. The disinvestment guidance would be the hard part. This argument has already been much discussed, and seems to be the one many economists support.

I don’t find this argument entirely compelling, at least not as a solution to the affordability problem. To solve this issue NICE would need to regularly review the current threshold and revise it in light of current productivity and the prevailing health budget. It has no experience of doing this. I believe the task could be more effectively carried out by commissioning organisations (such as NHS England), who are in a better position to oversee the collection of the appropriate data and would have a public responsibility to do so. It might also be politically useful if decisions about affordability were made independently of decisions about value.

The harder option

The harder option is for there to be a paradigm shift in the way NICE – and health economics more generally – operates. It could involve programme budgeting and marginal analysis, or the Birch and Gafni approach. This might just be the best option, but it seems unlikely to happen nationally any time soon.

It’s possible that more cost-effective but unaffordable drugs are in the pipeline. Failure to address the affordability problem soon could seriously undermine NICE.

DOI: 10.6084/m9.figshare.1291123


Review: Thrive (Richard Layard, David Clark)

Thrive: The Power of Evidence-Based Psychological Therapies

Hardcover, 384 pages, ISBN: 9781846146053, published 3 July 2014

Amazon / Google Books / Allen Lane

Mental illness reduces national income by about 4%, and yet we only spend about 13% of our health budget and about 5% of our medical research funds on tackling the problem.

As an economist who writes a fair bit on mental health, I regularly trot out statements like this about how costly mental health problems are to society and how the under-provision of services is grossly inefficient. To some the point may now seem obvious and trite. As evidence grows ever more compelling, government policy slowly shifts in response. One success story is the Improving Access to Psychological Therapies (IAPT) initiative, which has greatly improved the availability of evidence-based treatment for some of the most prevalent mental health problems in the UK. Yet in many cases we still await adequate action from the government and decision-makers. Two key players in getting IAPT into government policy were Richard Layard – an economist – and David Clark – a psychologist. In their new book Thrive: The Power of Evidence-Based Psychological Therapies, Layard and Clark demonstrate the need for wider provision of cost-effective mental health care in the UK.

The book starts with a gentle introduction to mental illness: what it is, who suffers, the nature of treatment. This will give any reader a way in, with an engaging set-up for what follows (though with one third of families including someone with a mental illness, most people will find the topic relatable). The opening chapters go on to dig deeper into these questions: do these people get help, how does it affect their lives, and what are the societal impacts? These chapters serve as a crash course in mental health, and though the style is conversational and easily followed, on reflection you’ll realise that you’ve absorbed a great deal of information about mental health. More importantly, you’ll have a deeper understanding. This isn’t simply because of the number of statistics that have been thrown at you, but because of the personal stories and illustrations that accompany the numbers. This forms the first half of the book – ‘The Problem’ – which encourages the reader to start questioning why more isn’t being done. Economists may at times balk at the broad brush strokes used in tallying the societal ‘costs’ of mental health problems, but the figures are nevertheless startling.

From there the book continues to build. In the second half – ‘What Can Be Done?’ – the authors go on to explain that actually there’s a ton of effective therapies out there. We know what they are and who they work for, yet they aren’t being made available. There’s no doubt that the view of the evidence presented is an optimistic one, but it isn’t designed to mislead; where evidence is lacking, the authors say so. The book seems to be written with the sceptical academic in mind; no sooner can you start to question a claim than you are thrown another baffling statistic to chew on. Various therapies are explored, though the focus is undeniably on depression and anxiety and on cognitive behavioural therapy (CBT). Readers with CBT bugbears may feel alienated by this, but should consider it within the broader scope of the book.

Readers would do well to stop after chapter 14. Things go sharply downhill from this point and could, for some readers, undermine what goes before. This would be a great shame. In all seriousness, chapters 15 and 16 would be better off read at a later date, once the rest of the book has been absorbed, understood and – possibly – acted upon. In the final chapters Layard and Clark make distinctly political proposals about how society should be organised. The happiness agenda takes centre stage. In places, mental illness is presented as simply the opposite of happiness. This is an unfortunate and unnecessary tangent. I have some sympathies with the happiness agenda, but for many I expect these chapters would ruin the book. The less said about them the better.

It is a scandal that so many people with mental health problems do not have access to the cost-effective treatments that exist. Layard and Clark demonstrate convincingly that the issue is of public interest. Thrive has the potential to instill in people the right amounts of sympathy, anger and understanding to bring about change. Many will disagree with their prescriptions, but this should not detract from the central message of the book.

DOI: 10.6084/m9.figshare.1287738


Heterogeneity and Markov models

The big appeal of Markov models is their relative simplicity: they focus on what happens to a whole cohort rather than to individual patients. Because of this, they are relatively bad at taking into account patient heterogeneity (true differences in outcomes between patients, which can be explained by, for example, disease severity, age or biomarkers). Several ways of dealing with patient heterogeneity have been used in the past. Earlier this year, my co-authors Dr. Lucas Goossens and Prof. Dr. Maureen Rutten-van Mölken and I published a study comparing the outcomes of these approaches. We show that three of the four methods are useful in different circumstances. The fourth should no longer be used.

In practice, heterogeneity is often ignored: an average value for the patient population is used for any variable representing patient characteristics in the model, and the cost-effectiveness outcomes for this ‘average patient’ are assumed to represent the entire patient population. In addition to ignoring available evidence, the results are difficult to interpret, since the ‘average patient’ does not exist. With non-linearity being the rule rather than the exception in Markov modelling, heterogeneity should be taken into account explicitly in order to obtain a correct cost-effectiveness estimate over a heterogeneous population. Ignoring it can therefore be justified only if there is little heterogeneity, or if it is not expected to influence the cost-effectiveness outcomes.
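A minimal sketch of why the ‘average patient’ can mislead when the model is non-linear; all numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical heterogeneity: baseline severity varies across patients.
severity = rng.normal(loc=0.5, scale=0.2, size=100_000).clip(0.0, 1.0)

# An illustrative convex mapping from severity to lifetime costs, standing
# in for what a non-linear Markov model does with patient characteristics.
def lifetime_cost(s):
    return 5_000 * np.exp(3.0 * s)

print(f"Cost for the 'average patient':   {lifetime_cost(severity.mean()):,.0f}")
print(f"Average cost over the population: {lifetime_cost(severity).mean():,.0f}")
# The two differ noticeably: plugging mean characteristics into a
# non-linear model does not return the mean outcome (Jensen's inequality).
```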

An alternative is to define several subgroups of patients, as different combinations of patient characteristics, and to calculate the outcomes for each of these. Comparing subgroups allows for the exploration of the effect that differences between patients have on cost-effectiveness outcomes. In our study, subgroup analyses did give insight into the differences between types of patients, but not all outcomes were useful for decision makers. After all, policy and reimbursement decisions are commonly made for an entire patient population, not for subgroups. And if a decision maker wants to use the subgroup analyses for decisions regarding specific subgroups, equity concerns arise. Patient heterogeneity in clinical characteristics, such as starting FEV1% in our study, may be an acceptable basis for subgroup-specific recommendations; other characteristics, such as gender, race or (in our case) age, are not. That part of the existing heterogeneity has to be ignored if you use subgroup analyses.

In some cases, heterogeneity has been handled by simply combining it with parameter uncertainty in a probabilistic sensitivity analysis (PSA). The expected outcome from this ‘Single Loop’ PSA is correct for the population, but the distribution around it (which reflects the uncertainty in which many decision makers are interested) is not: the outcomes ignore the fundamental difference between patient heterogeneity and parameter uncertainty. In our study, it even influenced the shape of the cost-effectiveness plane, leading to an overestimation of uncertainty. In our opinion, this method should no longer be used.

In order to correctly separate parameter uncertainty and heterogeneity, the analysis requires a nested Monte Carlo simulation, by drawing a number of individual patients within each PSA iteration. In this way you can investigate sampling uncertainty, while still accounting for patient heterogeneity. This method accounts sufficiently for heterogeneity, is easily interpretable and can be performed using existing software. In essence, this ‘Double Loop PSA’ uses the existing Expected Value of Partial Perfect Information (EVPPI) methodology with a different goal.
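A minimal, self-contained sketch of that nested structure is below. The parameter distributions, patient characteristics and ‘model’ are placeholders, not the model from our paper; only the two-loop shape matters.

```python
import numpy as np

rng = np.random.default_rng(42)

N_PSA = 1_000      # outer loop: draws for parameter uncertainty
N_PATIENTS = 30    # inner loop: patients per PSA draw (we used 30, see below)

def draw_parameters():
    """One draw from an (assumed) joint distribution of model parameters."""
    return {"rr": rng.lognormal(np.log(0.8), 0.1),   # treatment effect
            "cost_tx": rng.gamma(100.0, 50.0)}       # treatment cost

def draw_patient():
    """One patient profile from an (assumed) heterogeneity distribution."""
    return {"age": rng.normal(65.0, 8.0), "fev1": rng.normal(55.0, 15.0)}

def run_markov(params, patient):
    """Placeholder for the actual Markov model; returns (cost, QALYs)."""
    qalys = 8.0 - 0.05 * (patient["age"] - 65.0) * params["rr"]
    cost = params["cost_tx"] + 200.0 * (80.0 - patient["fev1"])
    return cost, qalys

results = []
for _ in range(N_PSA):                       # outer: parameter uncertainty
    params = draw_parameters()
    inner = [run_markov(params, draw_patient())
             for _ in range(N_PATIENTS)]     # inner: patient heterogeneity
    results.append(np.mean(inner, axis=0))   # average over patients

costs, qalys = np.array(results).T
print(f"Expected cost {costs.mean():,.0f}; expected QALYs {qalys.mean():.2f}")
# The spread across `results` now reflects parameter uncertainty only,
# with heterogeneity correctly averaged out within each iteration.
```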

Calculation time may be a burden for this method, compared to the other options. In our study, we chose a small sample of 30 randomly drawn patients within each PSA draw to keep the rapidly increasing computation time in check. After testing, we concluded that 30 was a good middle ground between accuracy and runtime. In our case, the calculation took 9 hours (one overnight run), which is not a huge obstacle in our opinion. And since computational speed increases rapidly, faster, more modern computers will only decrease the necessary time.

To conclude, we think that three of the methods discussed can be useful in cost-effectiveness research, each in different circumstances. When little or no heterogeneity is expected, or when it is not expected to influence the cost-effectiveness results, disregarding heterogeneity may be acceptable. In our case study, heterogeneity did have an impact. Subgroup analyses may inform policy decisions on each subgroup, as long as the subgroups are well defined and the characteristics that define a subgroup truly represent the patients within it. Despite the necessary calculation time, the Double Loop PSA is a viable alternative that leads to better results, and better policy decisions, when accounting for heterogeneity in a Markov model. Directly combining patient heterogeneity with parameter uncertainty in a PSA can only be used to calculate the point estimate of the expected outcome; it disregards the fundamental difference between heterogeneity and sampling uncertainty and overestimates uncertainty as a result.


How do you solve a problem like obesity?

[Figure: the complex aetiology of obesity]

Making headlines this morning (Thursday 20th November) has been the report by the McKinsey Global Institute, an offshoot of the management consultancy McKinsey, on the global economic impact of obesity. The report puts the annual worldwide cost of obesity at $2.0 trillion, which it compares to the global burden of smoking and of armed conflict; the quoted figure comprises various elements, such as productivity losses and spending to mitigate obesity. Certainly, the magnitude of the burden is partly due to the fact that obesity is generally a developed-nation problem, and these nations typically spend vastly more on healthcare than their developing-nation counterparts. The claim that obesity represents a problem as serious as armed conflict and violence may therefore turn out to be somewhat spurious if global issues are measured on a scale other than total financial cost. Nonetheless, the report acknowledges such issues, and provides a comprehensive summary of obesity-related statistics to demonstrate them.

One of the main aims of the report is to identify interventions that may be used to tackle obesity and thereby reduce the resulting expenditure. To its credit, the report recognises the complex nature of obesity and reproduces the above figure, asking whether it is possible to tackle obesity given its complex aetiology. It even provides some evidence that various social and cultural factors are at play. However, the authors write that while the background may be complex, the proximal causes are well known, and that interventions targeting these proximal causes are more feasible and simpler to implement, and so ought to be the ones they consider. This expresses a certain public health ideology which, I would argue, is an issue with many discussions of population and global health.

This is the notion that public health and healthcare should be focussed on targeting individuals and modifying their behaviour, through such things as technological innovation, divorced from social, economic, or political contexts. For example, the McKinsey report suggests calorie labelling, advertising restrictions, and public health campaigns. However, if we want to tackle health issues such as obesity at the aggregate level, then we should probably ask aggregate-level questions, such as why markets are producing inefficient outcomes in terms of the health of the labour force, and why there is an oversupply of calories in some countries and an undersupply elsewhere. Policies that result from such analyses are likely to be more complex, but are also more likely to be efficacious.

Historically, public health progress has been the result of a convergence of a wide range of social, economic, and political projects. Countries have adopted various strategies to reduce mortality, including better income distribution, improved diet, public health measures, medicine, and changes in household education. However, none of these policies has been universally successful on its own, and real progress requires the integration of various social, medical, political, and economic strategies (Birn, 2005; The Lancet–University of Oslo Commission on Global Governance for Health, 2014). The interventions in the report seem to me somewhat limp in the face of what they call a problem with a ‘global burden’.


Is there any use in publishing surgeons’ death rates?


Today sees the publication of surgeons’ death rates on the MyNHS website (see the Guardian and BBC stories). The website presents full lists of surgeons by specialty alongside either blue circles with a large ‘OK’ inside, grey circles with question marks, or green circles with ticks, to reflect, respectively, whether the surgeon’s risk-adjusted mortality (or other significant morbidity) falls within expected limits, or is a negative or positive outlier. The important question here is whether these measures actually reflect surgeon quality.

This issue returns to the perennial question of measuring healthcare quality. In terms of surgeon quality, we should consider that a high-quality surgeon is one who makes fewer errors, and as such causes fewer preventable adverse events. Deaths, or other adverse health outcomes, that would have occurred regardless of the responsible consultant cannot be attributed to variations in surgeon quality. Therefore, the question we should ask is whether risk-adjusted mortality is a good proxy for preventable mortality. Girling et al (2012) ask exactly this question in relation to case-mix-adjusted hospital mortality and preventable mortality, and conclude: ‘If 6% of hospital deaths are preventable (as suggested by the literature), the predictive value of the SMR can be no greater than 9%. This value could rise to 30%, if 15% of deaths are preventable.’ A similar argument applies to individual physicians.
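The intuition can be checked with a crude simulation. This is not a reconstruction of Girling et al’s model; the rates, the degree of quality variation, and the residual case-mix error are all invented. It illustrates only that, when a small share of deaths is preventable, a surgeon flagged as a high-SMR outlier is usually not one of the genuinely high-harm surgeons.

```python
import numpy as np

rng = np.random.default_rng(7)

N_SURGEONS = 5_000
CASES = 200                # operations per surgeon (assumed)
BASE_RATE = 0.03           # non-preventable death risk per case (assumed)
PREV_SHARE = 0.06          # ~6% of deaths preventable, as quoted above

# True quality: preventable-death risk varies across surgeons (assumed spread).
mean_prev = BASE_RATE * PREV_SHARE / (1 - PREV_SHARE)
prev_rate = rng.lognormal(np.log(mean_prev), 0.5, N_SURGEONS)

# Imperfect risk adjustment: estimated expected deaths deviate from the
# truth by roughly 10% (assumed residual case-mix error).
expected = CASES * BASE_RATE * rng.lognormal(0.0, 0.1, N_SURGEONS)

deaths = rng.binomial(CASES, np.clip(BASE_RATE + prev_rate, 0.0, 1.0))
smr = deaths / expected

# Of surgeons flagged as high-SMR outliers, how many are truly high-harm?
flagged = smr >= np.quantile(smr, 0.95)
truly_poor = prev_rate >= np.quantile(prev_rate, 0.95)
ppv = (flagged & truly_poor).sum() / flagged.sum()
print(f"P(truly high preventable harm | flagged on SMR) = {ppv:.0%}")
```

With numbers in this ballpark, the flagged group is only modestly enriched with genuinely poor performers, which is the Girling et al point.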

It is also important to ask what the consequences of publishing such data would be for patient and surgeon behaviour. In the latter case, surgeons may become more risk averse, avoiding cases in which there is a greater chance of non-preventable mortality, since these cases would reflect badly on them. Indeed, speaking on this morning’s (Wednesday 19th November) Today programme, Ian Martin, from the Federation of Surgical Speciality Associations, suggested that there was anecdotal evidence of exactly this. Such behaviour is certainly not in the interests of the patient population. The publication of these data may also alter the way in which patients and surgeons are matched to one another, since patients will likely decide not to visit a surgeon with a high risk-adjusted mortality rate. Yet this altering of a specific surgeon’s case-mix, resulting from patient choice, will mean that previous adjusted mortality rates will have poor predictive value for future adjusted mortality rates, and even less predictive value for preventable mortality.

These figures are published in the name of patient choice. Yet they may actually contain little useful information to support such a choice.


Is payment by diagnosis for dementia a good strategy?

There is a considerable furore surrounding the new proposal to pay GPs £55 for each dementia diagnosis. The Patients Association called it “a step too far” that would put a “bounty on the head” of some patients, while the Daily Mail quoted a GP describing the programme as ‘an intellectual and ethical travesty’. Vitriol aside, there are clearly some issues with incentivising clinicians on the basis of making diagnoses.

Payment by diagnosis can be compared with other schemes, such as Pay for Performance (P4P), which Sutton et al (2012) demonstrated had a mortality-reducing effect in hospitals in England. However, P4P created incentives by paying doctors on the basis of specific process variables, such as prescribing aspirin at discharge for patients with acute myocardial infarction. These incentives act by altering the opportunity cost of time. Clinicians qua clinicians may prioritise their time differently in order to increase their revenue from medical practice, becoming more likely to engage in clinical tasks with higher earnings potential; clinicians qua individuals may allocate more time to labour, substituting away from leisure or work at home, to the benefit of patients. The P4P interventions operate at a specific part of the healthcare causal chain, at the level of processes or specific interventions, which may then generate an increase in detection rates or a reduction in adverse events, all leading to improved patient outcomes. Incentivising physicians by diagnosis, however, operates at a different part of the healthcare process. Certainly, the payment for diagnosis may ensure GPs spend more time diagnosing or working with potential dementia patients, boosting dementia detection rates; equally, however, a diagnosis per se does not require much time to make, and doctors may be incentivised to make incorrect diagnoses. Furthermore, by distorting the opportunity cost of physician time, the payment may lead GPs to allocate more time to identifying dementia patients at the risk of neglecting other patients.

Dementia is a concern for an ageing population. Only around 50% of dementia cases are thought to have been diagnosed. The global burden of dementia and Alzheimer’s disease was estimated at $422 billion in 2009, of which $124 billion was unpaid care (Wimo et al, 2010). One strategy for reducing the burden of dementia is earlier detection: before the development of frank dementia, most patients have a period of cognitive decline and suffer from what is termed mild cognitive impairment (MCI) (Petersen et al, 1999). While the deterioration of cognitive function in dementia patients is inexorable, it may possibly be slowed with appropriate therapy, potentially delaying or preventing the need for highly costly late-stage dementia care (Getsios et al, 2010, 2012; Petersen et al, 2005; Teixeira et al, 2012). There would also be considerable benefit to people with MCI and their families, for whom the devastating impact of dementia could be reduced. Whether or not an incentive for dementia diagnoses would lead to earlier detection remains to be seen. Nonetheless, it would seem that incentivising testing for MCI, in order to improve early detection, would be a more appropriate strategy. Indeed, this is the aim with type 2 diabetes, where the potential benefits of a screening programme have been widely discussed (Gillies et al, 2008; Kahn et al, 2010; Schaufler and Wolff, 2010, among many examples). Simply paying doctors every time they diagnose a case of diabetes would, at face value, be less effective, particularly since earlier cases may be harder to detect: the harder-to-detect cases would require more time on the part of the clinician, the marginal benefit of which may be smaller than the marginal cost to the clinician. Incentivising the conduct of tests arguably does not discriminate on the same basis.

While this may be a step in the right direction to improve dementia detection rates, there may have been a more effective method of incentivising GPs than payment by diagnosis.


Do economists care about patients?

I stand accused. Not of a particularly heinous crime, but of something that has given me pause for thought recently. During a discussion about a piece of work involving patient outcomes, I was accused of ‘thinking like an economist’. Had this come from an economist, it would have been meant in a complimentary sense – ‘now you’re thinking like an economist!’ Alas, it came not from an economist but from a clinician and medical researcher, and as such was meant more along the lines of ‘unfortunately, you’re thinking like an economist’. In particular, the comment was meant to suggest that I was missing the importance of the patient, that I was focussed solely on the numbers, that I knew the price of everything and the value of nothing. Perhaps I exaggerate the meaning, but the sentiment still stands. I felt that this was unfair, both to me in that particular circumstance and to economists and economics in general. However, I do see why people may come to this understanding of economics: economic discourse is, to the untrained ear, abstract and separate from the real world, and economics is often very mathematical.

Typically, most economic fields do not work as closely alongside the natural sciences as health economics does. Medical science and health economics often overlap greatly in their interests; many health economists publish in both medical and economics journals. However, their approaches differ somewhat, and the difference between them highlights an important difference between the natural and social sciences. Indeed, medical statistics and health econometrics are often considered different subjects, despite the similarities in the tools they use to approach the problems they face. Perhaps, then, it is important to analyse the distinction between the fields to find out the implications of ‘thinking like an economist’.

Economics is a social science. I am not going to provide a treatise on the philosophy of social science, but I will offer some thoughts on the distinction between the social and the natural sciences. Arguably, the goal of science, social or natural, is to provide and validate statements which are epistemically objective. That is, we may say it produces falsifiable statements and facts. The difference between the natural and social sciences is that the former studies objects which are ontologically objective whereas the latter studies objects that are ontologically subjective. To phrase it another way, the objects of study of the natural sciences are observer-independent and would exist and function whether or not there were humans or science; the social sciences study observer-dependent objects. The economy is a system of social relations in which economists participate and about which economists already have preconceived notions and have already made value judgements.

An education in the natural sciences involves teaching a student to ‘think scientifically’: to formulate and test falsifiable hypotheses about observer-independent objects. An education in the social sciences involves teaching a student to re-observe the world in which they live, and providing them with the tools to simplify it, so that we can think clearly about processes of causality in the social world. In economics, this often involves the use of mathematics and, importantly, a simplified vocabulary. This ‘econspeak’ does not identify new, economic objects; it is not more fundamental or philosophical than everyday language, but serves to simplify the social world. And since economic words describe a social reality, and since the social world is itself entirely linguistic, the meaning of economic words can always be restated in everyday language with the same meaning. For example, elasticity can be restated as ‘how much one variable changes in response to a change in another variable’, which can then be understood in terms of everyday social life: ‘how many fewer oranges would I buy if the price of oranges went up by 10p?’ But in the natural sciences, the subject-specific words cannot be restated more simply in everyday language, since their objects are not part of our social world. The name of a specific protein cannot be restated in language any more simply: I could describe its function or its structure, but these explanations just skirt around the object, trying to identify it exactly. Our alternative ways of stating elasticity, by contrast, mean exactly the same thing.
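For completeness, the textbook definition that both of those restatements unpack (standard notation, nothing specific to this post):

```latex
% Price elasticity of demand: the proportional change in quantity
% demanded per proportional change in price.
\varepsilon \;=\; \frac{\%\,\Delta Q}{\%\,\Delta P}
            \;=\; \frac{\partial Q}{\partial P}\cdot\frac{P}{Q}
```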

Health economics is a social science. Medical science is perhaps trickier to define. It has elements of both natural and social science, since it studies how proteins, cells and other ontologically objective objects function, but also how behaviour affects health. Medicine is the application of science to improve health, and is a social activity. But many medical scientists come from a natural-science background rather than a social-science one. Thus, unless otherwise trained, they may find economic representations of the social world alien and abstract.

I certainly do not deny that economists lack the social interaction with patients that health service staff have, and that this may create a distance between the analyst and the people they study. It may even lead to a lack of understanding on the part of the economist about the nature of things and the state of the world that they examine. I would certainly advocate greater dialogue between practitioners of different disciplines – this can only improve the work being conducted – and it goes both ways: economists and clinicians alike may better learn about their subject matter. Social science has a lot to contribute to the understanding of medicine and its practice. Greater dialogue may mean greater understanding, and fewer accusations such as this.
