Chris Sampson’s journal round-up for 23rd December 2019

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The Internet and children’s psychological wellbeing. Journal of Health Economics Published 13th December 2019

Here at the blog, we like the Internet. We couldn’t exist without it. We vie for your attention along with all of the other content factories (or “friends”). But there’s a well-established sense that people – especially children – should moderate their consumption of Internet content. The Internet is pervasive and is now a fundamental part of our day-to-day lives, not simply an information source to which we turn when we need it. Almost all 12-15 year olds in the UK use the Internet. The ubiquity of the Internet makes it difficult to test its effects. But this paper has a good go at it.

This study is based on the idea that broadband speeds are a good proxy for Internet use. In England, a variety of public and private sector initiatives have resulted in a distorted market with quasi-random assignment of broadband speeds. The authors provide a very thorough explanation of children’s wellbeing in relation to the Internet, outlining a range of potential mechanisms.

The analysis combines data from the UK’s pre-eminent household panel survey (Understanding Society) with broadband speed data published by the UK regulator Ofcom. Six wellbeing outcomes are analysed from children’s self-reported responses. The questions ask children how they feel about their lives – measured on a seven-point scale – in relation to school work, appearance, family, friends, school attended, and life as a whole. An unbalanced panel of 6,310 children from 2012-2017 provides 13,938 observations from 3,765 different Lower Layer Super Output Areas (LSOA), with average broadband speeds for each LSOA for each year. Each of the six wellbeing outcomes is modelled with child-, neighbourhood- and time-specific fixed effects. The models’ covariates include a variety of indicators relating to the child, their parents, their household, and their local area.
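For readers who like to see the specification written down, a plausible reduced-form version of the fixed-effects models described above (a sketch based on the paper's description, not necessarily the authors' exact equation) is:

```latex
W^{(k)}_{ijt} = \beta^{(k)} \ln(\mathrm{Speed}_{jt}) + X_{ijt}'\gamma^{(k)} + \alpha_i + \mu_j + \tau_t + \varepsilon^{(k)}_{ijt}
```

where W(k) is wellbeing outcome k for child i living in LSOA j in year t, Speed is the average broadband speed in the LSOA, X is the vector of child, parent, household, and area covariates, and the alpha, mu, and tau terms are the child, neighbourhood, and time fixed effects.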

A variety of models are tested, and the overall finding is that higher broadband speeds are negatively associated with all of the six wellbeing indicators. Wellbeing in relation to appearance shows the strongest effect; a 1% increase in broadband speed reduces happiness with appearance by around 0.6%. The authors explore a variety of potential mechanisms by running pairs of models between broadband speeds and the mechanism and between the mechanism and the outcomes. A key finding is that the data seem to support the ‘crowding out’ hypothesis. Higher broadband speeds are associated with children spending less time on activities such as sports, clubs, and real-world social interactions, and these activities are in turn positively associated with wellbeing. The authors also consider different subgroups, finding that the effects are more detrimental for girls.
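To make the mechanism analysis concrete, one plausible way to formalise the pairs of models the authors describe (again, a sketch rather than their exact specification) is with two auxiliary regressions:

```latex
M_{ijt} = \delta \ln(\mathrm{Speed}_{jt}) + X_{ijt}'\kappa + \alpha_i + \mu_j + \tau_t + u_{ijt}, \qquad
W^{(k)}_{ijt} = \phi^{(k)} M_{ijt} + X_{ijt}'\psi^{(k)} + \alpha_i + \mu_j + \tau_t + v^{(k)}_{ijt}
```

where M is a candidate mechanism, such as hours spent on sports, clubs, or face-to-face socialising. The crowding-out story requires delta to be negative and phi to be positive: faster broadband is associated with less of the activity, and the activity is associated with higher wellbeing.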

Where the paper falls down is that it doesn’t do anything to convince us that broadband speeds represent a good proxy for Internet use. It’s also not clear exactly what the proxy is meant to be for – use (e.g. time spent on the Internet) or access (i.e. having the option to use the Internet) – though the authors seem to be interested in the former. If that’s the case, the logic of the proxy is not obvious. If I want to do X on the Internet then higher speeds will enable me to do it in less time, in which case the proxy would capture the inverse of the desired indicator. The other problem I think we have is in the use of self-reported measures in this context. A key supposed mechanism for the effect is through ‘social comparison theory’, which we might reasonably expect to influence the way children respond to questions as well as – or instead of – their underlying wellbeing.

One-way sensitivity analysis for probabilistic cost-effectiveness analysis: conditional expected incremental net benefit. PharmacoEconomics [PubMed] Published 16th December 2019

Here we have one of those very citable papers that clearly specifies a part of cost-effectiveness analysis methodology. A better title for this paper could be Make one-way sensitivity analysis great again. The authors start out by – quite rightly – bashing the tornado diagram, mostly on the basis that it does not intuitively characterise the information that a decision-maker needs. Instead, the authors propose an approach to probabilistic one-way sensitivity analysis (POSA) that is a kind of simplified version of EVPPI (expected value of partially perfect information) analysis. Crucially, this approach does not assume that the various parameters of the analysis are independent.

The key quantity created by this analysis is the conditional expected incremental net monetary benefit (cINMB), conditional, that is, on the value of the parameter of interest. There are three steps to creating a plot of the POSA results: 1) rank the costs and outcomes for the sampled values of the parameter – say from the first to the last centile; 2) plug in a cost-effectiveness threshold value to calculate the cINMB at each sampled value; and 3) record the probability of observing each value of the parameter. You could use this information to present a tornado-style diagram, plotting the credible range of the cINMB. But it’s more useful to plot a line graph showing the cINMB at the different values of the parameter of interest, taking into account the probability that the values will actually be observed.
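To fix ideas, here is a rough sketch of how those three steps might be implemented on existing PSA output in Python. This is not the authors' method – they run a second level of Monte Carlo simulation conditional on each centile of the parameter of interest – and the function name and toy data below are my own; but, because the PSA draws are used as paired samples, correlation between parameters is preserved rather than assumed away.

```python
import numpy as np

def conditional_inmb(theta, delta_cost, delta_effect, wtp, n_bins=100):
    """Sketch of probabilistic one-way sensitivity analysis (POSA).

    theta        : sampled values of the parameter of interest (one per PSA draw)
    delta_cost   : incremental cost for each PSA draw
    delta_effect : incremental effect (e.g. QALYs) for each PSA draw
    wtp          : cost-effectiveness threshold (lambda)

    Returns bin midpoints of theta, the conditional expected incremental net
    monetary benefit (cINMB) in each bin, and the probability of each bin.
    """
    inmb = wtp * delta_effect - delta_cost                 # INMB for every draw
    order = np.argsort(theta)                              # step 1: rank draws by theta
    theta_sorted, inmb_sorted = theta[order], inmb[order]
    bins = np.array_split(np.arange(theta.size), n_bins)   # ~centile groups of theta
    theta_mid = np.array([theta_sorted[b].mean() for b in bins])
    cinmb = np.array([inmb_sorted[b].mean() for b in bins])   # step 2: E[INMB | theta]
    prob = np.array([b.size / theta.size for b in bins])      # step 3: Pr(theta in bin)
    return theta_mid, cinmb, prob

# Toy example with made-up PSA draws
rng = np.random.default_rng(1)
theta = rng.beta(5, 15, 10_000)                            # e.g. a probability parameter
delta_effect = 0.3 + 0.8 * theta + rng.normal(0, 0.05, theta.size)
delta_cost = 2_000 + rng.normal(0, 300, theta.size)
mid, cinmb, prob = conditional_inmb(theta, delta_cost, delta_effect, wtp=20_000)
```

Plotting cinmb against mid, with prob used to shade or weight the line, gives the kind of graph the authors recommend in place of a tornado diagram.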

The authors illustrate their method using three different parameters from a previously published cost-effectiveness analysis, in each case simulating 15,000 Monte Carlo ‘inner loops’ for each of the 99 centiles. It took me a little while to get my head around the results that are presented, so there’s still some work to do around explaining the visuals to decision-makers. Nevertheless, this approach has the potential to become standard practice.

A head-on ordinal comparison of the composite time trade-off and the better-than-dead method. Value in Health Published 19th December 2019

For years now, methodologists have been trying to find a reliable way to value health states ‘worse than dead’. The EQ-VT protocol, used to value the EQ-5D-5L, includes the composite time trade-off (cTTO). The cTTO task gives people the opportunity to trade away life years in good health to avoid having to subsequently live in a state that they have identified as being ‘worse than dead’ (i.e. they would prefer to die immediately rather than live in it). An alternative approach to this is the better-than-dead method, whereby people simply compare given durations in a health state to being dead. But are these two approaches measuring the same thing? This study sought to find out.
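For readers unfamiliar with the task, the scoring typically used in the EQ-VT protocol runs roughly as follows (the exact durations and framing used in this study may differ):

```latex
U_{\mathrm{BTD}} = \frac{x}{10}, \qquad U_{\mathrm{WTD}} = \frac{x - 10}{10}, \qquad U \in [-1, 1]
```

For better-than-dead (BTD) states, x is the number of years in full health that the respondent considers equivalent to 10 years in the target state. For worse-than-dead (WTD) states, the lead-time variant is used: x is the number of years in full health (out of a possible 20) considered equivalent to 10 years in full health followed by 10 years in the target state, so trading away all of the 10-year lead time gives a value of −1.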

The authors recruited a convenience sample of 200 students and asked them to value seven different EQ-5D-5L health states that were close to zero in the Dutch tariff. Each respondent completed both a cTTO task and a better-than-dead task (the order varied) for each of the seven states. The analysis then looked at the extent to which there was agreement between the two methods in terms of whether states were identified as being better or worse than dead. Agreement was measured using counts and using polychoric correlations. Unsurprisingly, agreement was higher for those states that lay further from zero in the Dutch tariff. Around zero, there was quite a bit of disagreement – only 65% agreed for state 44343. Both approaches performed similarly with respect to consistency and test-retest reliability. Overall, the authors interpret these findings as meaning that the two methods are measuring the same underlying preferences.

I don’t find that very convincing. States were more often identified as worse than dead in the better-than-dead task, with 55% valued as such, compared with 37% in the cTTO. That seems like a big difference. The authors provide a variety of possible explanations for the differences, mostly relating to the way the tasks are framed. Or it might be that the complexity of the worse-than-dead task in the cTTO is so confusing and counterintuitive that respondents (intentionally or otherwise) avoid having to do it. For me, the findings reinforce the futility of trying to value health states in relation to being dead. If a slight change in methodology prevents a group of biomedical students from giving consistent assessments of whether or not a state is worse than being dead, what hope do we have?


Chris Sampson’s journal round-up for 2nd December 2019

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The treatment decision under uncertainty: the effects of health, wealth and the probability of death. Journal of Health Economics Published 16th November 2019

It’s important to understand how people make decisions about treatment. At the end of life, the question can become a matter of whether to have treatment or to let things take their course such that you end up dead. In order to consider this scenario, the author of this paper introduces the probability of death to some existing theoretical models of decision-making under uncertainty.

The diagnostic risk model and the therapeutic risk model can be used to identify risk thresholds that determine decisions about treatment. The diagnostic model relates to the probability that disease is present and the therapeutic model relates to the probability that treatment is successful. The new model described in this paper builds on these models to consider the impact on the decision thresholds of i) initial health state, ii) probability of death, and iii) wealth. The model includes wealth after death, in the form of a bequest. Limited versions of the model are also considered, excluding the bequest and excluding wealth (described as a ‘QALY model’). Both an individual perspective and an aggregate perspective are considered by excluding and including the monetary cost of diagnosis and treatment, to allow for a social insurance type setting.
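For orientation, the textbook diagnostic threshold on which these models build can be written down simply (the paper's extension with wealth, bequests, and the probability of death is not reproduced here). Treatment is preferred when the probability of disease p exceeds

```latex
p^{*} = \frac{U_{\bar{T}\bar{D}} - U_{T\bar{D}}}{\left(U_{\bar{T}\bar{D}} - U_{T\bar{D}}\right) + \left(U_{TD} - U_{\bar{T}D}\right)}
```

where U_TD is the utility of being treated when diseased, U_TD̄ the utility of being treated when the disease is absent, and so on. The numerator is the harm of treating someone without the disease and the second bracketed term in the denominator is the benefit of treating someone with it; adding mortality, wealth, and bequests shifts this threshold in the ways the author explores.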

The comparative statics show a lot of ambiguity, but there are a few things that the model can tell us. The author identifies treatment as having an ‘insurance effect’, by reducing diagnostic risk, a ‘protective effect’, by lowering the probability of death, and a risk-increasing effect associated with therapeutic risk. A higher probability of death increases the propensity for treatment in both the no-bequest model and the QALY model, because of the protective effect of treatment. In the bequest model, the impact is ambiguous, because treatment costs reduce the bequest. In the full model, wealthier individuals will choose to undergo treatment at a lower probability of success because of a higher marginal utility for survival, but the effect becomes ambiguous if the marginal utility of wealth depends on health (which it obviously does).

I am no theoretician, so it can take me a long time to figure these things out in my head. For now, I’m not convinced that it is meaningful to consider death in this way using a one-period life model. In my view, the very definition of death is a loss of time, which plays little or no part in this model. But I think my main bugbear is the idea that anybody’s decision about life-saving treatment is partly determined by the amount of money they will leave behind. I find this hard to believe. The author links the finding that a higher probability of death increases treatment propensity to NICE’s end-of-life premium, though I’m not convinced that the model has anything to do with NICE’s reasoning on this matter.

Moving toward evidence-based policy: the value of randomization for program and policy implementation. JAMA [PubMed] Published 15th November 2019

Evidence-based policy is a nice idea. We should figure out whether something works before rolling it out. But decision-makers (especially politicians) tend not to think in this way, because doing something is usually seen to be better than doing nothing. The authors of this paper argue that randomisation is the key to understanding whether a particular policy creates value.

Without evidence based on random allocation, it’s difficult to know whether a policy works. This, the authors argue, can undermine the success of effective interventions and allow harmful policies to persist. A variety of positive examples are provided from US healthcare, including trials of Medicare bundled payments. Apparently, such trials increased confidence in the programmes’ effects in a way that post hoc evaluations cannot, though no evidence of this increased confidence is actually provided. Policy evaluation is not always easy, so the authors describe four preconditions for the success of such studies: i) early engagement with policymakers, ii) willingness from policy leaders to support randomisation, iii) timing the evaluation in line with policymakers’ objectives, and iv) designing the evaluation in line with the realities of policy implementation.

These are sensible suggestions, but it is not clear why the authors focus on randomisation. The paper doesn’t do what it says on the tin, i.e. describe the value of randomisation. Rather, it explains the value of pre-specified policy evaluations. Randomisation may or may not deserve special treatment compared with other analytical tools, but this paper provides no explanation for why it should. The authors also suggest that people are becoming more comfortable with randomisation, as large companies employ experimental methods, particularly on the Internet with A/B testing. I think this perception is way off and that most people feel creeped out knowing that the likes of Facebook are experimenting on them without any informed consent. In the authors’ view, it being possible to randomise is a sufficient basis on which to randomise. But, considering the ethics, as well as possible methodological contraindications, it isn’t clear that randomisation should become the default.

A new tool for creating personal and social EQ-5D-5L value sets, including valuing ‘dead’. Social Science & Medicine Published 30th November 2019

Nobody can agree on the best methods for health state valuation. Or, at least, some people have disagreed loud enough to make it seem that way. Novel approaches to health state valuation are therefore welcome. Even more welcome is the development and testing of methods that you can try at home.

This paper describes the PAPRIKA method (Potentially All Pairwise RanKings of all possible Alternatives) of discrete choice experiment, implemented using 1000Minds software. Participants are presented with two health states that are defined in terms of just two dimensions, each lasting for 10 years, and asked to choose between them. Using the magical power of computers, an adaptive process identifies further choices, automatically ranking states using transitivity so that people don’t need to complete unnecessary tasks. In order to identify where ‘dead’ sits on the scale, a binary search procedure asks participants to compare EQ-5D states with being dead. What’s especially cool about this process is that everybody who completes it is able to view their own personal value set. These personal value sets can then be averaged to identify a social value set.
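To see why placing ‘dead’ takes so few questions, here is a toy sketch of a binary search over a participant's ranked states. This is my own illustration, not the 1000Minds implementation; the function names and the ten-year framing of the question are assumptions.

```python
from typing import Callable, List

def locate_dead(ranked_states: List[str],
                prefers_state_over_dead: Callable[[str], bool]) -> int:
    """Return the position at which 'dead' slots into a best-to-worst ranking.

    ranked_states           : the participant's states, ordered best to worst
    prefers_state_over_dead : asks the participant whether they would rather
                              spend 10 years in the given state than be dead
    """
    lo, hi = 0, len(ranked_states)
    while lo < hi:
        mid = (lo + hi) // 2
        if prefers_state_over_dead(ranked_states[mid]):
            lo = mid + 1   # dead sits somewhere below this state
        else:
            hi = mid       # dead is at least as good as this state
    return lo              # states from this index onwards rank below 'dead'

# Toy usage: pretend the participant thinks only the first two states beat death
ranking = ["11111", "21232", "33333", "44343", "55555"]
position = locate_dead(ranking, lambda s: s in {"11111", "21232"})
assert position == 2
```

Only around log2(n) comparisons are needed, and the states ranked below the returned position are the ones the participant regards as worse than dead.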

The authors used their tool to develop an EQ-5D-5L value set for New Zealand (which is where the researchers are based). They recruited 5,112 people via an online panel, such that the sample was representative of the general public. Participants answered 20 DCE questions each, on average, and almost half of them said that they found the questions difficult to answer. The NZ value set showed that anxiety/depression was associated with the greatest disutility, though the five dimensions had notably similar impacts at each severity level. The value set correlates well with numerous existing value sets.

The main limitation of this research seems to be that only levels 1, 3, and 5 of each EQ-5D-5L domain were included. Including levels 2 and 4 would more than double the number of questions that would need to be answered. It is also concerning that more than half of the sample was excluded due to low data quality. But the authors do a pretty good job of convincing us that this is for the best. Adaptive designs of this kind could be the future of health state valuation, especially if they can be implemented online, at low cost. I expect we’ll be seeing plenty more from PAPRIKA.


Chris Sampson’s journal round-up for 28th October 2019

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Spatial competition and quality: evidence from the English family doctor market. Journal of Health Economics [RePEc] Published 17th October 2019

Researchers will never stop asking questions about the role of competition in health care. There’s a substantial body of literature now suggesting that greater competition in the context of regulated prices may bring some quality benefits. But with weak indicators of quality and limited generalisability, it isn’t a closed case. One context in which evidence has been lacking is in health care beyond the hospital. In the NHS, an individual’s choice of GP practice is perhaps the context in which quality can be observed and choice most readily (and meaningfully) exercised. That’s where this study comes in. Aside from the horrible format of a ‘proper economics’ paper (where we start with spoilers and climax with robustness tests), it’s a good read.

The study relies on a measure of competition based on the number of rival GPs within a 2km radius. Number of GPs, that is, rather than number of practices. This is important, as the number of GPs per practice has been increasing. About 75% of a practice’s revenues are linked to the number of patients registered, wherein lies the incentive to compete with other practices for patients. And, in this context, research has shown that patient choice is responsive to indicators of quality. The study uses data for 2005-2012 from all GP practices in England, making it an impressive data set.
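As a concrete illustration of how such a competition measure could be constructed (my own sketch, not the authors' code; it assumes practice locations are available as projected coordinates in metres, e.g. OS grid references):

```python
import numpy as np
from scipy.spatial import cKDTree

def rival_gp_counts(practice_xy: np.ndarray,
                    gps_per_practice: np.ndarray,
                    radius_m: float = 2_000.0) -> np.ndarray:
    """For each practice, count the GPs working at *other* practices within radius_m.

    practice_xy      : (n, 2) array of projected coordinates in metres
    gps_per_practice : (n,) array with the number of GPs at each practice
    """
    tree = cKDTree(practice_xy)
    neighbours = tree.query_ball_point(practice_xy, r=radius_m)  # indices within the radius
    counts = np.zeros(len(practice_xy))
    for i, idx in enumerate(neighbours):
        rivals = [j for j in idx if j != i]        # exclude the practice itself
        counts[i] = gps_per_practice[rivals].sum()
    return counts

# Toy usage: three practices, two of them within 2km of each other
xy = np.array([[0.0, 0.0], [1_500.0, 0.0], [10_000.0, 0.0]])
gps = np.array([4.0, 6.0, 3.0])
print(rival_gp_counts(xy, gps))   # [6. 4. 0.]
```

A KD-tree keeps the radius queries fast even with several thousand practices per year of data.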

The measures of quality come from the Quality and Outcomes Framework (QOF) and the General Practice Patient Survey (GPPS) – the former providing indicators of clinical quality and the latter providing indicators of patient experience. A series of OLS regressions are run on the different outcome measures, with practice fixed effects and various characteristics of the population. The models show that all of the quality indicators are improved by greater competition, but the effect is very small. For example, an extra competing GP within a 2km radius results in a 0.035% increase in the percentage of the population for whom the QOF indicators have been achieved. The effects are a little stronger for the patient satisfaction indicators.

The paper reports a bunch of important robustness checks. For instance, the authors try to test whether practices select their locations based on the patient casemix, finding no evidence that they do. The authors even go so far as to test the impact of a policy change, which resulted in an exogenous increase in the number of GPs in some areas but not others. The main findings seem to have withstood all the tests. They also try out a lagged model, which gives similar results.

The findings from this study slot in comfortably with the existing body of research on the role of competition in the NHS. More competition might help to achieve quality improvement, but it hardly seems worthy of dedicating much effort or, importantly, much expense to the cause.

Worth living or worth dying? The views of the general public about allowing disabled children to die. Journal of Medical Ethics [PhilPapers] [PubMed] Published 15th October 2019

Recent years have seen a series of cases in the UK where (usually very young) children have been so unwell and with such a severe prognosis that someone (usually a physician) has judged that continued treatment is not warranted and that the child should be allowed to die. These cases have generated debate and outrage in the media. But what do people actually think?

This study recruited members of the public in the UK (n=130) to an online panel and asked about the decisions that participants would support. The survey had three parts. The first part set out six scenarios of hospitalised infants, which varied in terms of the infants’ physical and sensory abilities, cognitive capacity, level of suffering, and future prospects. Some of the cases approximated real cases that have received media coverage, and the participants were asked whether they thought that withdrawing treatment was justified in each case. In the second part of the survey, participants were asked about the factors that they believed were important in making such decisions. In the third part, participants answered a few questions about themselves and completed the Oxford Utilitarianism Scale.

The authors set up the concept of a ‘life not worth living’, based on the idea that net future well-being is negative, judged from the perspective that the individual themselves would take were they able to provide it. In the first part of the survey, 88% indicated that life would be worse than death in at least one of the cases. In such cases, 65% thought that treatment withdrawal was ethically obligatory, while 33% thought that either decision was acceptable. Pain was considered the most important factor in making such decisions, followed by the presence of pleasure. Perhaps predictably for health economists familiar with the literature, about 42% of people thought that resources should be considered in the decision, while 40% thought they shouldn’t.

The paper includes an extensive discussion, with plenty of food for thought. In particular, it discusses the ways in which the findings might inform the debate between the ‘zero line view’, whereby treatment should be withdrawn at the point where life has no benefit, and the ‘threshold view’, which establishes a grey zone of ethical uncertainty, in which either decision is ethically acceptable. To some extent, the findings of this study support the need for a threshold approach. Ethical questions are rarely black and white.

How is the trade-off between adverse selection and discrimination risk affected by genetic testing? Theory and experiment. Journal of Health Economics [PubMed] [RePEc] Published 1st October 2019

A lot of people are worried about how knowledge of their genetic information could be used against them. The most obvious scenario is one in which insurers increase premiums – or deny coverage altogether – on the basis of genetic risk factors. There are two key regulatory options in this context: disclosure duty, whereby individuals are obliged to tell insurers about the outcome of genetic tests, and consent law, whereby people can keep the findings to themselves. This study explores how people behave under each of these regulations.

The authors set up a theoretical model in which individuals can choose whether to purchase a genetic test that can identify them as being at either high or low risk of developing some generic illness. The authors outline utility functions under disclosure duty and consent law. Under disclosure duty, individuals face a choice between not testing – keeping the certainty of the pooled insurance premium – and a lottery in which they must disclose their level of risk and receive a higher or lower premium accordingly. Under consent law, individuals will only reveal their test results if they are at low risk, thus securing lower premiums and contributing to adverse selection. As a result, individuals will be more willing to take a test under consent law than under disclosure duty, all else equal.
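A stripped-down sketch of the comparison makes the prediction easy to see (the paper's model is richer, with the pooled premium responding endogenously to adverse selection; the notation here is my own). With wealth W, test price c, probability q of being high risk, risk-rated premiums pi_H > pi_L, and a pooled premium no greater than pi_H available to those who do not reveal a result, the expected utility of taking the test under each regime is:

```latex
EU^{\mathrm{DD}}_{\mathrm{test}} = q\,u(W - c - \pi_H) + (1-q)\,u(W - c - \pi_L), \qquad
EU^{\mathrm{CL}}_{\mathrm{test}} = q\,u(W - c - \bar{\pi}) + (1-q)\,u(W - c - \pi_L)
```

Because the high-risk type can hide behind the pooled premium under consent law, the second expression is at least as large as the first, so testing is weakly more attractive under consent law, all else equal – which is the prediction the experiment goes on to support.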

After setting out their model (at great length), the authors go on to describe an experiment that they conducted with 67 economics students, to elicit preferences within and between the different regulatory settings. The experiment was set up in a very generic way, not related to health at all. Participants were presented with a series of tasks across which the parameters representing the price of the test and the pooled premium were varied. All of the authors’ hypotheses were supported by the experiment. More people took tests under consent law. Higher test prices reduce the number of people taking tests. If prices are high enough, people will prefer disclosure duty. The likelihood that people take tests under consent law is increasing with the level of adverse selection. And people are very sensitive to the level of discrimination risk under disclosure duty.

It’s an interesting study, but I’m not sure how much it can tell us about genetic testing. Framing the experiment as entirely unrelated to health seems especially unwise. People’s risk preferences may be very different in the domain of real health than in the hypothetical monetary domain. In the real world, there’s a lot more at stake.
