Sam Watson’s journal round-up for 8th October 2018

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

A cost‐effectiveness threshold based on the marginal returns of cardiovascular hospital spending. Health Economics [PubMed] Published 1st October 2018

There are two types of cost-effectiveness threshold of interest to researchers. First, there’s the societal willingness-to-pay for a given gain in health or quality of life. This is what many regulatory bodies, such as NICE, use. Second, there is the actual return on medical spending achieved by the health service. Reimbursement of technologies with a lesser return for every pound or dollar would reduce the overall efficiency of the health service. Some refer to this as the opportunity cost, although in a technical sense I would disagree that it is the opportunity cost per se. Nevertheless, this latter definition has seen a growth in empirical work; with some data on health spending and outcomes, we can start to estimate this threshold.

This article looks at spending on cardiovascular disease (CVD) among elderly age groups by gender in the Netherlands and survival. Estimating the causal effect of spending is tricky with these data: spending may go up because survival is worsening, external factors like smoking may have a confounding role, and using five-year age bands (as the authors do) over time can lead to bias as the average age within these bands increases as demographics shift. The authors do a pretty good job in specifying a Bayesian hierarchical model with enough flexibility to accommodate these potential issues. For example, linear time trends are allowed to vary across age-gender groups, and dynamic effects of spending are included. However, there’s no examination of whether the model is actually a good fit to the data, something which I’m growing to believe is an area where we, in health and health services research, need to improve.

Most interestingly (for me at least), the authors look at a range of priors based on previous studies and a meta-analysis of similar studies. The estimated elasticity using information from prior studies is more ‘optimistic’ about the effect of health spending than that obtained under a ‘vague’ prior. This could be because CVD or the Netherlands differs in a particular way from other areas. I might argue that the modelling here is better than some previous efforts as well, which could explain the difference. Extrapolating using life tables, the authors estimate a base case cost per QALY of €40,000.
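For readers less familiar with how an informative prior can pull an estimate around, a toy normal-normal conjugate update illustrates the mechanism. All the numbers below are invented for illustration; they are not the paper’s data or estimates.

```python
# Normal-normal conjugate update: how an informative prior shifts a posterior
# mean relative to a vague prior. All numbers are hypothetical.

def posterior(prior_mean, prior_var, lik_mean, lik_var):
    """Posterior mean/variance for a normal likelihood with known variance."""
    w = (1 / prior_var) / (1 / prior_var + 1 / lik_var)
    post_mean = w * prior_mean + (1 - w) * lik_mean
    post_var = 1 / (1 / prior_var + 1 / lik_var)
    return post_mean, post_var

data_estimate = 0.10   # hypothetical elasticity implied by the data alone
data_var = 0.05 ** 2

vague = posterior(0.0, 100.0, data_estimate, data_var)             # near-flat prior
informative = posterior(0.25, 0.05 ** 2, data_estimate, data_var)  # 'meta-analysis' prior

print(round(vague[0], 3))        # ~0.1: posterior tracks the data
print(round(informative[0], 3))  # 0.175: pulled toward the 'optimistic' prior
```

With equally precise prior and data, the posterior lands halfway between them, which is the essence of why priors built from earlier, more optimistic studies matter here.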

Early illicit drug use and the age of onset of homelessness. Journal of the Royal Statistical Society: Series A Published 11th September 2018

How the consumption of different things, like food, drugs, or alcohol, affects life and health outcomes is a difficult question to answer empirically. Consider a recent widely-criticised study on alcohol published in The Lancet. Among a number of issues, despite including a huge amount of data, the paper was unable to address the problem that different kinds of people drink different amounts. The kind of person who is teetotal may be so for a number of reasons including alcoholism, interaction with medication, or other health issues. Similarly, studies on the effect of cannabis consumption have shown among other things an association with lower IQ and poorer mental health. But are those who consume cannabis already those with lower IQs or at higher risk of psychoses? This article considers the relationship between cannabis and homelessness. While homelessness may lead to an increase in drug use, drug use may also be a cause of homelessness.

The paper is a neat application of bivariate hazard models. We recently looked at shared parameter models on the blog, which factorise the joint distribution of two variables into their marginal distributions by assuming their relationship is due to some unobserved variable. The bivariate hazard models work here in a similar way: the bivariate model is specified as the product of the marginal densities and the individual unobserved heterogeneity. This specification allows (i) people to have different unobserved risks for both homelessness and cannabis use and (ii) cannabis to have a causal effect on homelessness and vice versa.
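A minimal simulation shows why the unobserved heterogeneity matters: a shared frailty multiplying both hazards induces correlation between the two event times even when neither event causes the other. This is only a sketch of the general idea, not the authors’ model; the hazards and the gamma frailty shape are invented.

```python
# Shared-frailty sketch: one unobserved factor u scales the hazard of BOTH
# events, so the two times are correlated with no causal link between them.
import math
import random

random.seed(1)

def sim_person(base1=0.05, base2=0.03, frailty_shape=5.0):
    u = random.gammavariate(frailty_shape, 1.0 / frailty_shape)  # mean-one frailty
    t1 = random.expovariate(base1 * u)  # e.g. time to first cannabis use
    t2 = random.expovariate(base2 * u)  # e.g. time to onset of homelessness
    return t1, t2

def pearson(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

pairs = [sim_person() for _ in range(20000)]
t1s, t2s = zip(*pairs)
r = pearson(t1s, t2s)
print(round(r, 2))  # positive: shared heterogeneity alone links the two times
```

The modelling task is then to separate this spurious association from the genuine cross-effects, which is what the bivariate specification attempts.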

Despite the careful set-up though, I’m not wholly convinced of the face validity of the results. The authors claim that daily cannabis use among men has a large effect on becoming homeless – as large an effect as having separated parents – which seems implausible to me. Cannabis use can cause psychological dependency but I can’t see people choosing it over having a home as they might with something like heroin. The authors also claim that homelessness doesn’t really have an effect on cannabis use among men because the estimated effect is “relatively small” (it is the same order of magnitude as the reverse causal effect) and only “marginally significant”. Interpreting these results in the context of cannabis use would then be difficult, though. The paper provides much additional material of interest. However, the conclusion that regular cannabis use, all else being equal, has a “strong effect” on male homelessness, seems both difficult to conceptualise and not in keeping with the messiness of the data and complexity of the empirical question.

How could health care be anything other than high quality? The Lancet: Global Health [PubMed] Published 5th September 2018

Tedros Adhanom Ghebreyesus, or Dr Tedros as he’s better known, is the head of the WHO. This editorial was penned in response to the recent Lancet Commission on Health Care Quality and related studies (see this round-up). However, I was critical of these studies for a number of reasons, in particular the conflation of ‘quality’ as we normally understand it with everything else that may impact on how a health system performs. This includes resourcing, which is obviously low in poor countries, availability of labour and medical supplies, and demand-side choices about health care access. The empirical evidence was fairly weak; even in countries like the UK, in which we’re swimming in data, we struggle to quantify quality. Data are also often averaged at the national level, masking huge underlying variation within countries. This editorial is, therefore, a bit of an empty platitude: of course we should strive to improve ‘quality’ – its goodness is definitional. But without a solid understanding of how to do this, or even what we mean when we say ‘quality’ in this context, we’re not really saying anything at all. Proposing that we need a ‘revolution’ without any real concrete proposals is fairly meaningless and ignores the massive strides that have been made in recent years. Delivering high-quality, timely, effective, equitable, and integrated health care in the poorest settings means more resources. Tinkering with what few services already exist for those most in need is not going to produce a revolutionary change. But this strays into political territory, in which UN organisations often flounder.

Editorial: Statistical flaws in the teaching excellence and student outcomes framework in UK higher education. Journal of the Royal Statistical Society: Series A Published 21st September 2018

As a final note for our academic audience, we give you a statement on the Teaching Excellence Framework (TEF). For our non-UK audience, the TEF is a new system being introduced by the government, which seeks to introduce more of a ‘market’ in higher education by trying to quantify teaching quality and then allowing the best-performing universities to charge more. No-one would disagree with the sentiment that improving higher education standards is better for students and teachers alike, but the TEF is fundamentally statistically flawed, as discussed in this editorial in the JRSS.

Some key points of contention are: (i) the TEF doesn’t actually assess any teaching, such as through observation; (ii) there is no consideration of uncertainty about scores and rankings; (iii) “The benchmarking process appears to be a kind of poor person’s propensity analysis” – copied verbatim as I couldn’t have phrased it any better; (iv) there has been no consideration of gaming the metrics; and (v) the proposed models do not reflect the actual aims of the TEF and are likely to be biased. Economists will also likely have strong views on how the TEF’s incentives will affect institutional behaviour. But, as Michael Gove, the former justice and education secretary, said, Britons have had enough of experts.

Credits

Sam Watson’s journal round-up for 15th January 2018


Cost-effectiveness of publicly funded treatment of opioid use disorder in California. Annals of Internal Medicine [PubMed] Published 2nd January 2018

Deaths from opiate overdose have soared in the United States in recent years. In 2016, 64,000 people died this way, up from 16,000 in 2010 and 4,000 in 1999. The causes of public health crises like this are multifaceted, but we can identify two key issues that have contributed more than any other. Firstly, medical practitioners have been prescribing opiates irresponsibly for years. In each of the last ten years, well over 200 million opiate prescriptions were issued in the US – enough for seven in every ten people. Once prescribed, opiate use is often not well managed. Prescriptions can be stopped abruptly, for example, leaving people with unexpected withdrawal syndromes and rebound pain. It is estimated that 75% of heroin users in the US began by using legal, prescription opiates. Secondly, drug suppliers have started cutting heroin with its far stronger but cheaper cousin, fentanyl. Given fentanyl’s strength, only a tiny amount is required to achieve the same effects as heroin, but a lack of pharmaceutical knowledge and equipment means it is often not measured or mixed appropriately into what is sold as ‘heroin’. There are two clear routes to alleviating the epidemic of opiate overdose: prevention, by ensuring responsible medical use of opiates, and ‘cure’, either by ensuring the quality and strength of heroin or by providing a means to stop opiate use. The former ‘cure’ is politically infeasible, so it falls on the latter to help those already habitually using opiates. However, the availability of opiate treatment programs, such as opiate agonist treatment (OAT), is lacklustre in the US. OAT provides long-acting opioid agonists, such as methadone or buprenorphine, to prevent withdrawal syndromes in users, from which they can slowly be weaned.
This article looks at the cost-effectiveness of providing OAT to all persons seeking treatment for opiate use in California for an unlimited period, versus standard care, which only provides OAT to those who have failed supervised withdrawal twice, and only for 21 days. The paper adopts a previously developed semi-Markov cohort model that includes states for treatment, relapse, incarceration, and abstinence. Transition probabilities for the new OAT treatment were determined from treatment data for current OAT patients (as far as I understand it), although this does raise a question about the generalisability of this population to the whole population of opiate users: given the need to have already been through two supervised withdrawals, this population may have greater motivation to quit, for example. In any case, the article estimates that the OAT program would be cost-saving, through reductions in crime and incarceration, and would improve population health, by reducing the risk of death. Taken at face value, these results seem highly plausible. But, as we’ve discussed before, drug policy rarely seems to be evidence-based.
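To make the model structure concrete, here is a toy discrete-time Markov cohort sketch over the four states described. The transition probabilities are invented for illustration; the paper’s model is semi-Markov with empirically estimated transitions, so this is only the skeleton of the approach.

```python
# Toy Markov cohort model: track what share of a cohort sits in each state
# after a number of cycles. All transition probabilities are hypothetical.

STATES = ["treatment", "relapse", "incarceration", "abstinence"]

# Hypothetical per-cycle transition matrix; each row sums to 1.
P = [
    [0.70, 0.20, 0.05, 0.05],  # from treatment
    [0.25, 0.55, 0.15, 0.05],  # from relapse
    [0.10, 0.30, 0.55, 0.05],  # from incarceration
    [0.05, 0.10, 0.00, 0.85],  # from abstinence
]

def step(dist):
    """One cycle: redistribute the cohort according to P."""
    return [sum(dist[i] * P[i][j] for i in range(4)) for j in range(4)]

dist = [1.0, 0.0, 0.0, 0.0]  # cohort starts in treatment
for _ in range(24):          # e.g. 24 monthly cycles
    dist = step(dist)

print({s: round(p, 3) for s, p in zip(STATES, dist)})
```

Costs and QALYs are then attached to state occupancy, which is how the cost-saving and health-improving conclusions are derived in a model like this.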

The impact of aid on health outcomes in Uganda. Health Economics [PubMed] Published 22nd December 2017

Examining the response of population health outcomes to changes in health care expenditure has been the subject of a large and growing number of studies. One reason is to estimate a supply-side cost-effectiveness threshold: the health returns the health service achieves in response to budget expansions or contractions. Similarly, we might want to know the returns to particular types of health care expenditure. For example, there remains a debate about the effectiveness of aid spending in low and middle-income country (LMIC) settings. Aid spending may fail to be effective for reasons such as resource leakage, failure to target the right population, poor design and implementation, and crowding out of other public sector investment. Looking at these questions at an aggregate level can be tricky; the link between expenditure, or expenditure decisions, and health outcomes is long, and causality flows in multiple directions. Effects are therefore likely to be small and noisy, and require strong theoretical foundations to interpret. This article takes a different, and innovative, approach to the question. In essence, the analysis boils down to a longitudinal comparison of those who live near large, aid-funded health projects with those who don’t. The expectation is that the benefit of any aid spending will be felt most acutely by those who live nearest to the actual health care facilities that come about as a result of it. Indeed, this is shown by the results – proximity to an aid project reduced disease prevalence and work days lost to ill health, with greater effects observed closer to the project. However, one way of considering the ‘usefulness’ of this evidence is how it can be used to improve policymaking. One way is in understanding the returns to investment, or over what area these projects have an impact. The latter is covered in the paper to some extent, but the former is hard to infer.
A useful next step may be to try to quantify what kind of benefit aid dollars produce and how that benefit varies.

The impact of social expenditure on health inequalities in Europe. Social Science & Medicine Published 11th January 2018

Let us consider for a moment how we might explore empirically whether social expenditure (e.g. unemployment support, child support, housing support, etc.) affects health inequalities. First, we establish a measure of health inequality. We need a proxy measure of health – this study uses self-rated health and self-rated difficulty in daily living – and then compare these outcomes along some relevant measure of socioeconomic status (SES) – in this study, level of education and a compound measure of occupation, income, and education (the ISEI). So far, so good. Data on levels of social expenditure are available in Europe and are used here, but oddly these data are converted to a percentage of GDP. The trouble with doing this is that the variable can change if social expenditure changes or if GDP changes. During the financial crisis, for example, social expenditure shot up as a proportion of GDP, which likely had very different effects on health and inequality than when social expenditure increased as a proportion of GDP due to a policy change under the Labour government. This variable also likely bears little relationship to the level of support received per eligible person. Anyway, at the crudest level, we can then consider how the relationship between SES and health is affected by social spending. A more nuanced approach might consider who the recipients of social expenditure are and how they stand on our measure of SES, but I digress. In the article, the baseline category for education is those with only primary education or less, which seems an odd category to compare against: given compulsory schooling ages, I would imagine this is a very small proportion of people in Europe – unless, of course, they are children. But including children in the sample would be an odd choice here, since they don’t personally receive social assistance and are difficult to compare to adults.
However, there are no descriptive statistics in the paper, so we don’t know, and no comparisons are made between other groups. Indeed, the estimates of the intercepts in the models are very noisy and variable for no obvious reason, other than perhaps that the reference group is very small. Despite the problems outlined so far, though, there is a potentially more serious one. The article uses a logistic regression model, which is perfectly justifiable given the binary or ordinal nature of the outcomes. However, the authors justify the conclusion that “Results show that health inequalities measured by education are lower in countries where social expenditure is higher” by demonstrating that the odds ratio for reporting a poor health outcome in the groups with greater than primary education, compared to primary education or less, is smaller in magnitude when social expenditure as a proportion of GDP is higher. But the conclusion does not follow from the premise. It is entirely possible for these odds ratios to change without any change in the variance of the underlying distribution of health, the relative ordering of people, or the absolute difference in health between categories, simply by shifting the whole distribution up or down. For example, if the proportions of people in two groups reporting a negative outcome are 0.3 and 0.4, which then change to 0.2 and 0.3 respectively, the odds ratio comparing the two groups changes from 0.64 to 0.58. The difference between them remains 0.1. No calculations regarding absolute effects are made in the paper, though. GDP is also shown to have a positive effect on health outcomes. All that might have been shown is that the relative difference in health outcomes between those with primary education or less and others changes as GDP changes, because everyone is getting healthier. The question the article asks is interesting; it’s a shame about the execution.
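The worked example above is easy to verify: a couple of lines confirm that the odds ratio moves even though the absolute difference between the groups is unchanged.

```python
# Check the odds-ratio example from the text: the absolute gap stays 0.1,
# but the odds ratio shrinks as the whole distribution shifts down.

def odds_ratio(p1, p2):
    """Odds ratio for group 1 vs group 2 reporting the outcome."""
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

before = odds_ratio(0.3, 0.4)
after = odds_ratio(0.2, 0.3)
print(round(before, 2))  # 0.64
print(round(after, 2))   # 0.58
```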

Credits


Chris Sampson’s journal round-up for 14th August 2017


Does paying service providers by results improve recovery outcomes for drug misusers in treatment in England? Addiction [PubMed] Published 10th August 2017

‘Getting what you pay for’ is a fundamentally attractive funding model, which is why we see lots of pay-for-performance (P4P) initiatives cropping up in the NHS. But P4P plans can go awry. This study considers an experimental setting in which 8 areas participated in P4P pilots for drug misuse treatment from 2012 to 2014. Payments were aligned with 3 national priorities: (i) abstinence, (ii) reduced offending, and (iii) improved health and well-being. The participating areas allocated differing proportions of payments to the P4P model, between 10% and 100%. Data were drawn from the National Drug Treatment Monitoring System, which includes information on drug use, assessment, and interventions received. Other national sources were used to identify criminal activity and mortality rates. Drug misusers attending treatment services during the 2 years before and after the introduction of the P4P scheme were included in the study. Using a difference-in-differences analysis, the researchers compared outcomes in the 8 participating areas with those in 143 non-participating areas. Separate multilevel regression models were used for a set of outcomes, each controlling for a variety of individual-level characteristics. The authors analysed ‘treatment journeys’, of which there were around 20,000 for those in participating areas and 280,000 for those in non-participating areas; roughly half before the introduction and half after. The results don’t look good for P4P. Use of opiates, crack cocaine and injecting increased. Treatment initiation increased in non-participating areas but decreased in participating areas. Moreover, longer waiting times were observed in participating areas, as well as more unplanned discharges. P4P was associated with people being less likely to successfully complete treatment within 12 months. In P4P’s favour, there was evidence that abstinence increased.
I’d have liked to have seen some attempt at matching between the areas, given that there was an element of self-selection into the scheme. Or at least, better control for the characteristics of the areas before P4P was introduced. This paper isn’t quite the final nail in the coffin. I don’t see P4P disappearing anytime soon. There’s a lot to be learnt from the paper’s discussion, which outlines some of the likely reasons and mechanisms underlying the findings. Commissioners should take note.
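The core of the difference-in-differences comparison described above reduces to a two-by-two table of group means. The outcome values below are invented purely to illustrate the arithmetic; they are not the study’s estimates.

```python
# Two-by-two difference-in-differences sketch: participating vs
# non-participating areas, before vs after P4P. Values are hypothetical.

means = {
    ("participating", "before"): 0.42,     # e.g. treatment completion rate
    ("participating", "after"): 0.38,
    ("nonparticipating", "before"): 0.41,
    ("nonparticipating", "after"): 0.43,
}

change_treated = means[("participating", "after")] - means[("participating", "before")]
change_control = means[("nonparticipating", "after")] - means[("nonparticipating", "before")]
did = change_treated - change_control
print(round(did, 2))  # -0.06: the scheme's estimated effect in this toy example
```

The study’s multilevel regressions do the same comparison while adjusting for individual-level characteristics; the identifying assumption (parallel trends) is exactly what better matching or pre-period controls would help defend.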

The short- and long-run effects of smoking cessation on alcohol consumption. International Journal of Health Economics and Management [PubMed] Published 7th August 2017

Anecdotally, it seems as if smoking and drinking are complementary behaviours. Generally, the evidence suggests that this is true. Smoking cessation programmes may, therefore, have value in their ability to reduce alcohol consumption (and vice versa). But only if the relationship is causal. This study seeks to add to that causal evidence. Using data from 5887 individuals in the Lung Health Study, the author runs a two-stage least squares estimation, with randomisation to smoking cessation treatment as an instrumental variable for smoking status. In the short term, there is some evidence that smokers tend to drink more (especially men). But findings in the longer term, up to 5 years, are more persuasive. It’s unfortunate that the (largely incoherent) rational addiction theory makes an appearance and that the findings are presented as supportive of it. A stopped clock is right twice a day. In line with rational addiction theory, the long-term relationship is measured in terms of a ‘smoking stock’, which is an aggregate measure of smoking behaviour over the 5-year period. Smoking and drinking are found to be complementary in the long term. Crucially, the extent of their complementarity is associated with particular factors. For example, people who smoke more cigarettes or who abstain for longer exhibit larger reductions in alcohol consumption when they stop smoking. People who smoke relatively few cigarettes per day do not drink more alcohol. Those smoking 6-10 per day consume around 1 extra drink per week compared with non-smokers. Quitting for 5 years can reduce alcohol consumption by more than 50%. In the long run, the effect is more pronounced for women and for people who are married. This highlights important opportunities for targeted public policy, which could achieve a win-win in terms of reducing both cigarette and alcohol consumption.
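The instrumental-variable logic here can be sketched in a few lines: randomisation to a cessation programme (Z) shifts smoking (S), while an unobserved trait (U) confounds smoking and drinking (D). With a single binary instrument, the 2SLS estimate reduces to the Wald ratio cov(Z, D) / cov(Z, S). All coefficients below are invented for illustration and bear no relation to the paper’s estimates.

```python
# IV sketch: naive OLS is biased by the confounder U; the Wald/2SLS
# estimator using randomisation Z recovers the assumed causal effect.
import random

random.seed(0)
TRUE_EFFECT = 0.5  # assumed causal effect of smoking on drinking
n = 50000

def cov(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

Z = [random.randint(0, 1) for _ in range(n)]  # randomised cessation treatment
U = [random.gauss(0, 1) for _ in range(n)]    # unobserved confounder
S = [10 - 3 * z + u + random.gauss(0, 1) for z, u in zip(Z, U)]              # smoking
D = [TRUE_EFFECT * s + 2 * u + random.gauss(0, 1) for s, u in zip(S, U)]     # drinking

iv = cov(Z, D) / cov(Z, S)   # Wald / 2SLS estimate
ols = cov(S, D) / cov(S, S)  # naive OLS, biased upward by U
print(round(iv, 2))   # close to 0.5
print(round(ols, 2))  # noticeably larger than 0.5
```

The validity of the whole exercise rests on randomisation affecting drinking only through smoking, which is the exclusion restriction the study implicitly relies on.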

Time for a change in how new antibiotics are reimbursed: development of an insurance framework for funding new antibiotics based on a policy of risk mitigation. Health Policy Published 5th August 2017

Antibiotics have become a key component of health care, but antimicrobial resistance threatens their usefulness and we don’t see new antibiotics in the pipeline to help overcome this. It’s a fundamentally difficult problem: we want new antibiotics, but we want to use them as sparingly as possible. Antibiotic development is relatively unattractive (financially) to pharmaceutical companies. Provision of research funding and regulatory changes haven’t solved the problem to date. This paper considers why this might be the case, and explores 2 alternative approaches: a premium price model and an insurance-type model. Essentially, the authors conduct a spreadsheet analysis to compare the alternative models with a base case of no incentives. The expected net present value of the base case was negative (to the tune of about $1.5 billion), demonstrating why much-needed new antibiotics aren’t being developed. Current incentives – including public-private funding partnerships and market exclusivity – are also shown to fail to reach a positive net present value. The premium price model, whereby there is an enhanced price per unit, is not particularly attractive. The daily cost of the resulting antibiotics would likely be too high, and manufacturers’ pursuit of profit would be at odds with conservative prescribing. Furthermore, it exposes areas experiencing outbreaks to serious financial risk. The insurance model, which involves an annual fee paid by each healthcare system (to manufacturers), is more promising. Pharmaceutical companies would be insured against low prices and variable use, and health systems would be insured against a lack of antibiotics and the risk of an infection outbreak. The key feature here is that manufacturers’ revenues are de-linked from sales volume. This is important when we consider the need for conservative prescribing.
The authors estimate that the necessary fee (for the global market) would be around $262 million per year, or $114 million if combined with current funding and regulatory incentives. Of course, these findings are based on major assumptions about infection rates, research costs and plenty besides. A number of sensitivity analyses are conducted that highlight uncertainty about what the insurance fee might need to be in the future. I think this uncertainty is somewhat understated – there are far more sensitivity and scenario analyses that would be warranted if such a policy were being seriously considered. Nevertheless, pooling risk in an insurance model looks like a promising strategy that’s worthy of further investigation and piloting.
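The shape of the authors’ comparison can be sketched with a toy net-present-value calculation: years of negative development cash flows followed by revenues, discounted back to today. The figures and discount rate below are invented for illustration; the paper’s model is far richer.

```python
# Toy NPV comparison: modest base-case sales never recoup R&D costs, while a
# de-linked annual insurance-style fee can. All figures are hypothetical ($m).

def npv(cashflows, rate):
    """Discount a list of annual cash flows (year 0 first) at `rate`."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

rate = 0.1
development = [-100.0] * 10   # R&D outlays, years 0-9
sales_base = [30.0] * 15      # modest sales revenue, years 10-24
insurance = [250.0] * 15      # de-linked annual fee instead, years 10-24

base = npv(development + sales_base, rate)
insured = npv(development + insurance, rate)
print(round(base, 1))     # negative: no financial incentive to develop
print(round(insured, 1))  # positive under the insurance-style fee
```

This is also why the required fee is so sensitive to assumptions about research costs and infection rates: small changes to any of these inputs move the break-even point substantially, consistent with the uncertainty the sensitivity analyses reveal.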

Credits