Rita Faria’s journal round-up for 2nd September 2019

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ [PubMed] Published 28th August 2019

RCTs are the gold standard study design for estimating the effects of treatments, but they are often far from perfect. The question is the extent to which their flaws make a difference to the results. Well, RoB 2 is your new best friend to help answer this question.

Developed by a star-studded team, RoB 2 is the update to the original risk of bias tool by the Cochrane Collaboration. Bias is assessed by outcome, rather than for the whole RCT. For me, this makes sense. For example, the primary outcome may be well reported, yet the secondary outcome, which may be the outcome of interest for a cost-effectiveness model, may be much less so.

Bias is considered in terms of 5 domains, with the overall risk of bias usually corresponding to the worst risk of bias in any of the domains. This overall risk of bias is then reflected in the evidence synthesis, with, for example, a stratified meta-analysis.
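The "worst domain" rule can be sketched in a few lines. The domain names below follow RoB 2, but the rule as coded here is a simplification of the full algorithm (which, as I understand it, can also elevate several "some concerns" judgements to an overall high risk):

```python
# Sketch of the "worst domain" rule: the overall judgement usually
# matches the most severe judgement reached in any single domain.
# (A simplification: RoB 2's full algorithm can also elevate several
# "some concerns" judgements to an overall high risk.)
LEVELS = ["low", "some concerns", "high"]  # ordered by severity

def overall_risk_of_bias(domains):
    """Most severe judgement across the RoB 2 domains for one outcome."""
    return max(domains.values(), key=LEVELS.index)

judgements = {
    "randomisation process": "low",
    "deviations from intended interventions": "some concerns",
    "missing outcome data": "low",
    "measurement of the outcome": "low",
    "selection of the reported result": "low",
}
print(overall_risk_of_bias(judgements))  # -> some concerns
```

Because the rule is a maximum, a single problematic domain is enough to drag down the overall judgement for that outcome.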

The paper is a great read! Jonathan Sterne and colleagues explain the reasons for the update and the process that was followed. Clearly, quite a lot of thought was given to the types of bias and to developing questions that help reviewers assess them. The only downside is that it may require more time to apply, given that it needs to be done by outcome. Still, I think that’s a price worth paying for more reliable results. Looking forward to seeing it in use!

Characteristics and methods of incorporating randomised and nonrandomised evidence in network meta-analyses: a scoping review. Journal of Clinical Epidemiology [PubMed] Published 3rd May 2019

In keeping with the evidence synthesis theme, this paper by Kathryn Zhang and colleagues reviews how the applied literature has been combining randomised and non-randomised evidence. The headline findings are that combining these two types of study designs is rare and, when it does happen, naïve pooling is the most common method.

I imagine that the limited use of non-randomised evidence is due to its risk of bias. After all, it is difficult to ensure that the measure of association from a non-randomised study is an estimate of a causal effect. Hence, it is worrying that the majority of network meta-analyses that did combine non-randomised studies did so with naïve pooling.

This scoping review may kick start some discussions in the evidence synthesis world. When should we combine randomised and non-randomised evidence? How best to do so? And how to make sure that the right methods are used in practice? As a cost-effectiveness modeller, with limited knowledge of evidence synthesis, I’ve grappled with these questions myself. Do get in touch if you have any thoughts.

A cost-effectiveness analysis of shortened direct-acting antiviral treatment in genotype 1 noncirrhotic treatment-naive patients with chronic hepatitis C virus. Value in Health [PubMed] Published 17th May 2019

Rarely do we see a cost-effectiveness paper where the proposed intervention is less costly and less effective, that is, in the controversial southwest quadrant. This excellent paper by Christopher Fawsitt and colleagues is a welcome exception!

Christopher and colleagues looked at the cost-effectiveness of shorter treatment durations for chronic hepatitis C. Compared with the standard duration, the shorter treatment is not as effective, hence results in fewer QALYs. But it is much cheaper to treat patients over a shorter duration and re-treat those patients who were not cured, rather than treat everyone with the standard duration. Hence, for the base-case and for most scenarios, the shorter treatment is cost-effective.
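For southwest-quadrant comparisons like this, the decision rule is easiest to see through net monetary benefit: the less effective option is cost-effective when the savings per QALY forgone exceed the willingness-to-pay threshold. The numbers below are purely illustrative and are not taken from the paper:

```python
# Illustrative southwest-quadrant comparison (made-up numbers, not the
# paper's results): the shorter strategy saves money but loses QALYs.
THRESHOLD = 20_000  # willingness to pay per QALY, an assumed value

def net_monetary_benefit(qalys, cost, threshold=THRESHOLD):
    return qalys * threshold - cost

standard = {"qalys": 12.00, "cost": 40_000}
shorter = {"qalys": 11.95, "cost": 25_000}

delta_c = shorter["cost"] - standard["cost"]    # savings of 15,000
delta_e = shorter["qalys"] - standard["qalys"]  # 0.05 QALYs forgone
icer = delta_c / delta_e  # ~300,000 saved per QALY forgone

# In the southwest quadrant the less effective option is preferred when
# the ICER (savings per QALY forgone) exceeds the threshold, which is
# equivalent to it having the higher net monetary benefit.
better = net_monetary_benefit(**shorter) > net_monetary_benefit(**standard)
print(round(icer), better)
```

With these hypothetical figures, every QALY given up releases far more money than the threshold says a QALY is worth, so the shorter strategy comes out on top.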

I’m sure that labelling a less effective and less costly option as cost-effective may be controversial in some quarters. Some may argue that it is unethical to offer a worse treatment than the standard even if it saves a lot of money. In my view, it is no different from funding better and costlier treatments, given that the costs will be borne by other patients, who will necessarily have access to fewer resources.

The paper is beautifully written and is another example of an outstanding cost-effectiveness analysis with important implications for policy and practice. The extensive sensitivity analysis should provide reassurance to the sceptics. And the discussion is clever in arguing for the value of a shorter duration in resource-constrained settings and for hard-to-reach populations. A must read!

Thesis Thursday: Alastair Irvine

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Alastair Irvine who has a PhD from the University of Aberdeen. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Time preferences and the patient-doctor interaction
Supervisors
Marjon van der Pol, Euan Phimister
Repository link
http://digitool.abdn.ac.uk/webclient/DeliveryManager?pid=238373

How can people’s time preferences affect the way they use health care?

Time preferences are a way of thinking about how people choose between things that happen over time. Some people prefer a treatment with large side effects and a long chain of future benefits; others prefer smaller benefits but fewer side effects. Time preferences influence a wide range of health outcomes and decisions. One of the most interesting questions I had coming into the PhD was around non-adherence.

Non-adherence can’t be captured by ‘standard’ exponential time preferences because there is no way for something you prefer now to be ‘less preferred’ in the future if everything is held constant. Instead, present-bias preferences can capture non-adherent behaviour. With these preferences, people place a higher weight on the ‘current period’ relative to all future periods but weight all future periods consistently. What that means is you can have a situation where you plan to do something – eat healthily, take your medication – but end up not doing it. When planning, you placed less relative weight on the near term ‘cost’ (like medication side effects) than you do when the decision arrives.
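A standard way to formalise present bias is quasi-hyperbolic (beta-delta) discounting, where every future period is scaled down by an extra factor beta < 1. The sketch below uses made-up costs and benefits to show a patient who plans to take a medication but prefers not to once the moment arrives; with beta = 1 (exponential discounting) the planned and realised choices always coincide.

```python
# Quasi-hyperbolic (beta-delta) discounting sketch; all numbers are
# illustrative. Taking the medication costs `c` in the period it is
# taken (side effects) and pays a benefit `b` in each later period.
BETA, DELTA = 0.5, 0.95  # present bias and per-period discount factor

def value(payoffs):
    """Discounted value of a payoff stream, evaluated in period 0.
    Period 0 gets full weight; every later period is scaled by BETA."""
    v = payoffs[0]
    for t, x in enumerate(payoffs[1:], start=1):
        v += BETA * (DELTA ** t) * x
    return v

c, b, horizon = 5.0, 2.0, 5

# Planning today for tomorrow: the side-effect cost sits in period 1,
# so it is discounted by BETA too, and adherence looks worthwhile.
plan = value([0.0, -c] + [b] * horizon)

# Tomorrow arrives: the cost is now immediate and gets full weight,
# while the benefits are still scaled down by BETA.
act = value([-c] + [b] * horizon)

print(plan > 0, act > 0)  # plans to adhere, then prefers not to
```

The reversal comes entirely from the cost moving into the fully weighted current period; nothing else about the treatment changes between the plan and the decision.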

In what way might the patient-doctor interaction affect a patient’s adherence to treatment?

There’s asymmetric information between doctors and patients, leading to an agency relationship. Doctors in general know more about treatment options than patients, and don’t know their patients’ preferences. So if doctors are making recommendations to patients, this asymmetry can lead to recommendations that are accepted by the patient but not adhered to. For example, present-biased patients accept the same treatments as exponential discounters but, depending on the treatment parameters, will not adhere to some of them. If the doctor doesn’t anticipate this when making recommendations, the result is non-adherence.

One of the issues from a contracting perspective is that naïve present-biased people don’t anticipate their own non-adherence, so we can’t write traditional ‘separating contracts’ that lead present-biased people to one treatment and exponential discounters to another. However, if the doctor can offer a lower level of treatment to all patients – one that has fewer side effects and a concomitantly lower benefit – then everyone sticks to that treatment. This clearly comes at the expense of the exponential discounters’ health, but if the proportion of present-biased patients is high enough it can be an efficient outcome.

Were you able to compare the time preferences of patients and of doctors?

Not this time! It had been the ‘grand plan’ at the start of the PhD to compare matched doctor and patient time preferences and then link them to treatment choices, but that was far too ambitious for the time available. There had also been very little work establishing how time preferences operate in the patient-doctor interaction, so I felt we had a lot to do.

One interesting question we did ask was whether doctors’ time preferences for themselves were the same as for their patients. A lot of the existing evidence asks doctors for their own time preferences, but surely the important time preference is the one they apply to their patients?

We found that while there was little difference between these professional and private time preferences, a lot of the responses displayed increasing impatience. This means that as the start of treatment gets pushed further into the future, doctors start to prefer shorter-but-sooner benefits for themselves and their patients. We’re still thinking about whether this reflects the fact that, in the real world (outside the survey), doctors already account for the time patients have spent with symptoms when assessing how quickly a treatment benefit should arrive.

How could doctors alter their practice to reduce non-adherence?

We really only have two options – to make ‘the right thing’ easier or the ‘wrong thing’ more costly. The implication of present-bias is you need to use less intense treatments because the problem is the (relative) over-weighting of the side effects. The important thing we need for that is good information on adherence.

We could pay people to adhere to treatment. However, my gut feeling is that payments are hard to implement on the patient side without being coercive (e.g. making non-adherence costly with charges), and expensive for the implementer when identifying completion is tricky (e.g. giving bonuses to doctors based on patient health outcomes). So doctors can reduce non-adherence by anticipating it and offering less ‘painful’ treatments.

It’s important to say I was only looking at one kind of non-adherence. If patients have bad experiences then whatever we do shouldn’t keep them taking a treatment they don’t want. However, the fact that stopping treatment is always an option for the patient makes non-adherence hard to address because as an economist you would like to separate different reasons for stopping. This is a difficulty for analysing non-adherence as a problem of temptation. In temptation preferences we would like to change the outcome set so that ‘no treatment’ is not a tempting choice, but there are real ethical and practical difficulties with that.

To what extent did the evidence generated by your research support theoretical predictions?

I designed a lab experiment that put students in the role of the doctor, with patients who may or may not be present-biased. The participants had to recommend treatments to a series of hypothetical patients, and the experiment was set up so that adapting to non-adherence with less intense treatments was the best strategy. Participants got feedback on their previous patients, so they could learn over the rounds which treatments patients stuck to.

We paid one arm a salary and another a ‘performance payment’. The latter only got paid when patients stuck to treatment, and the pay was correlated with patient outcomes. In both arms, patients’ outcomes were reflected in a charity donation.

The main result is that there was a lot of adaptation to non-adherence in both arms. The adaptation was stronger under the performance payment, reflecting the upper limit of the adaptation we can expect because it perfectly aligns patient and doctor preferences.

In the experimental setting, even when there is no direct financial benefit of doing so, participants adapted to non-adherence in the way I predicted.

Sam Watson’s journal round-up for 13th November 2017


Scaling for economists: lessons from the non-adherence problem in the medical literature. Journal of Economic Perspectives [RePEc] Published November 2017

It has often been said that development economics has been at the vanguard of the use of randomised trials within economics. Other areas of economics have slowly caught up; the internal validity, and causal interpretation, offered by experimental randomised studies can provide reliable estimates of the effects of particular interventions. Health economics, though, has perhaps an even longer history with randomised controlled trials (RCTs), and economic evaluation is now often expected alongside clinical trials. RCTs of physician incentives and payments, investment programmes in child health, and treatment provision in schools all feature as other examples. However, even experimental studies can suffer from the same biases in the data analysis process as observational studies. The multiple decisions made in the data analysis and publication stages of research can lead to over-inflated estimates. Beyond that, the experimental conditions of the trial may not pertain in the real world – the study may lack external validity. The medical literature has long recognised this issue: as many as 50% of patients don't take the medicines prescribed to them by a doctor. As a result, there has been considerable effort to develop an understanding of, and interventions to remedy, the lack of transferability between RCTs and real-world outcomes. This article summarises that literature and draws lessons for economists, who are only just starting to deal with what the authors term ‘the scaling problem’. For example, there are many reasons people don't respond to incentives as expected: there are psychological costs to switching; people are hyperbolic discounters and will often take a small short-term gain despite a larger long-term cost; and people can often fail to understand the implications of sets of complex options. We have also previously discussed the importance of social preferences in decision making.

The key point is that, as policy becomes more and more informed by randomised studies, we need to be careful about over-optimism regarding effect sizes and start to understand adherence to different policies in the real world. Only then are recommendations reliable.

Estimating the opportunity costs of bed-days. Health Economics [PubMed] Published 6th November 2017

The economic evaluation of health service delivery interventions is becoming an important issue in health economics. We’ve discussed on many occasions questions surrounding the implementation of seven-day health services in England and Wales, for example. Other service delivery interventions might include changes to staffing levels more generally, medical IT technology, or an incentive to improve hand washing. Key to the evaluation of these interventions is that they are generally targeted at improving quality of care – that is, at reducing preventable harm. The vast majority of patients who experience some sort of preventable harm – a fall in hospital, say, or bed sores – do not die, but they are likely to experience longer lengths of stay in hospital. We therefore need to be able to value those extra bed-days in order to say what improving hospital quality is worth. Typically we use reference costs or average accounting costs for the opportunity cost of a bed-day, mainly for pragmatic reasons, but also on the assumption that this is equivalent to the value of the second-best alternative forgone. This requires the assumption that health care markets operate properly, which they almost certainly do not. This paper explores the different ways economists have thought about opportunity costs and applies them to the question of the opportunity cost of a hospital bed-day. This includes definitions such as “Net health benefit forgone for the second-best patient-equivalents”, “Net monetary benefit forgone for the second-best treatment-equivalents”, and “Expenditure incurred + highest net revenue forgone”. The key takeaway is that there is wide variation in the estimated opportunity costs across the different methods and that, given the assumptions underpinning the most widely used methodologies are unlikely to hold, we may be routinely under- or over-valuing the effects of different interventions.

Universal investment in infants and long-run health: evidence from Denmark’s 1937 Home Visiting Program. American Economic Journal: Applied Economics [RePEc] Published October 2017

We have covered a raft of studies that look at the effects of in-utero health on later life outcomes, the so-called fetal origins hypothesis. A smaller, though by no means small, literature has considered what impact improving infant and childhood health has on later life adult outcomes. While many of these studies consider programmes that occurred decades ago in the US or Europe, their findings are still relevant today, as many countries are grappling with high infant and childhood mortality. In many low-income countries, programmes with community health workers – lay community members given some basic public health training – involving home visits, education, and referral services are being widely adopted. This article looks at the later life impacts of an infant health programme, the Home Visiting Program, implemented in Denmark in the 1930s and 40s. The aim of the programme was to provide home visits to every newborn in each district, offering education on feeding and hygiene practices and monitoring infant progress. The programme was implemented in a trial-based fashion, with different districts adopting it at different times and some districts remaining as controls, although selection into treatment and control was not random. Data were obtained on the health outcomes, over the period 1980-2012, of people born in 1935-49. In short, the analyses suggest that the programme improved adult longevity and health outcomes, although the effects are small. For example, the authors estimate that the programme reduced hospitalisations by half a day between the ages of 45 and 64, and that 2 to 6 more people per 1,000 survived past 60 years of age. However, these effect sizes may be large enough to justify what may be a reasonably low-cost programme when scaled across the population.
