Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.
The Internet and children’s psychological wellbeing. Journal of Health Economics Published 13th December 2019
Here at the blog, we like the Internet. We couldn’t exist without it. We vie for your attention along with all of the other content factories (or “friends”). But there’s a well-established sense that people – especially children – should moderate their consumption of Internet content. The Internet is pervasive and is now a fundamental part of our day-to-day lives, not simply an information source to which we turn when we need it. Almost all 12-15 year olds in the UK use the Internet. The ubiquity of the Internet makes it difficult to test its effects. But this paper has a good go at it.
This study is based on the idea that broadband speeds are a good proxy for Internet use. In England, a variety of public and private sector initiatives have resulted in a distorted market with quasi-random assignment of broadband speeds. The authors provide a very thorough explanation of children’s wellbeing in relation to the Internet, outlining a range of potential mechanisms.
The analysis combines data from the UK’s pre-eminent household panel survey (Understanding Society) with broadband speed data published by the UK regulator Ofcom. Six wellbeing outcomes are analysed from children’s self-reported responses. The questions ask children how they feel about their lives – measured on a seven-point scale – in relation to school work, appearance, family, friends, the school attended, and life as a whole. An unbalanced panel of 6,310 children observed from 2012 to 2017 provides 13,938 observations across 3,765 different Lower Layer Super Output Areas (LSOAs), with an average broadband speed for each LSOA in each year. Each of the six wellbeing outcomes is modelled with child-, neighbourhood-, and time-specific fixed effects. The models’ covariates include a variety of indicators relating to the child, their parents, their household, and their local area.
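The fixed-effects setup can be sketched in code. This is a toy illustration on simulated data, not the authors’ model – their specification also includes neighbourhood effects and a rich covariate set – and all variable names and numbers are invented. For a balanced panel, demeaning by child and by wave recovers the two-way within estimate:

```python
# Toy two-way fixed-effects regression of a wellbeing score on log
# broadband speed, absorbing child and wave effects by demeaning.
# Simulated data; variable names and the true effect are made up.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_children, n_waves = 200, 5
df = pd.DataFrame({
    "child": np.repeat(np.arange(n_children), n_waves),
    "wave": np.tile(np.arange(n_waves), n_children),
})
df["log_speed"] = rng.normal(2.0, 0.5, len(df)) + 0.1 * df["wave"]
# Simulate a small negative effect of speed on wellbeing
df["wellbeing"] = 5 - 0.6 * df["log_speed"] + rng.normal(0, 1, len(df))

def demean(s, by):
    # Remove group means (adding back the grand mean keeps the scale)
    return s - s.groupby(by).transform("mean") + s.mean()

# Within transformation: sweep out child effects, then wave effects
y = demean(demean(df["wellbeing"], df["child"]), df["wave"])
x = demean(demean(df["log_speed"], df["child"]), df["wave"])
beta = np.polyfit(x, y, 1)[0]  # slope of the within regression
print(f"estimated effect of log speed on wellbeing: {beta:.2f}")
```

In practice one would use a dedicated panel estimator with clustered standard errors, but the demeaning shows where the identification comes from: only within-child variation over time contributes to the estimate.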
A variety of models are tested, and the overall finding is that higher broadband speeds are negatively associated with all six wellbeing indicators. Wellbeing in relation to appearance shows the strongest effect; a 1% increase in broadband speed reduces happiness with appearance by around 0.6%. The authors explore a variety of potential mechanisms by running pairs of models: one relating broadband speeds to a candidate mechanism, and another relating that mechanism to the wellbeing outcomes. A key finding is that the data seem to support the ‘crowding out’ hypothesis. Higher broadband speeds are associated with children spending less time on activities such as sports, clubs, and real-world social interactions, and these activities are in turn positively associated with wellbeing. The authors also consider different subgroups, finding that the effects are more detrimental for girls.
Where the paper falls down is that it doesn’t do anything to convince us that broadband speeds are a good proxy for Internet use. It’s also not clear exactly what the proxy is meant to stand in for – use (e.g. time spent on the Internet) or access (i.e. having the option to use the Internet) – though the authors seem to be interested in the former. If that’s the case, the logic of the proxy is not obvious. If I want to do X on the Internet, then a higher speed will enable me to do it in less time, in which case the proxy would capture the inverse of the intended indicator. The other problem is the use of self-reported measures in this context. A key supposed mechanism for the effect is ‘social comparison theory’, which we might reasonably expect to influence the way children respond to the survey questions as well as – or instead of – their underlying wellbeing.
One-way sensitivity analysis for probabilistic cost-effectiveness analysis: conditional expected incremental net benefit. PharmacoEconomics [PubMed] Published 16th December 2019
Here we have one of those very citable papers that clearly specifies a part of cost-effectiveness analysis methodology. A better title for this paper could be Make one-way sensitivity analysis great again. The authors start out by – quite rightly – bashing the tornado diagram, mostly on the basis that it does not intuitively characterise the information that a decision-maker needs. Instead, the authors propose an approach to probabilistic one-way sensitivity analysis (POSA) that is a kind of simplified version of EVPPI (expected value of partial perfect information) analysis. Crucially, this approach does not assume that the various parameters of the analysis are independent.
The key quantity created by this analysis is the conditional expected incremental net monetary benefit (cINMB), conditional, that is, on the value of the parameter of interest. There are three steps to creating a plot of the POSA results: 1) rank the costs and outcomes for the sampled values of the parameter – say from the first to the last centile; 2) plug in a cost-effectiveness threshold value to calculate the cINMB at each sampled value; and 3) record the probability of observing each value of the parameter. You could use this information to present a tornado-style diagram, plotting the credible range of the cINMB. But it’s more useful to plot a line graph showing the cINMB at the different values of the parameter of interest, taking into account the probability that the values will actually be observed.
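The three steps might look something like this in code. This is a toy sketch with made-up numbers, not the paper’s implementation; the parameter `theta`, the threshold `lam`, and the cost and QALY distributions are all assumptions for illustration:

```python
# Toy sketch of the three POSA steps: (1) rank PSA samples by the
# parameter of interest, (2) compute the conditional expected
# incremental net monetary benefit (cINMB) within each centile at a
# chosen threshold, (3) note each centile's probability of being
# observed. All distributions and numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
theta = rng.beta(5, 2, n)             # parameter of interest (e.g. response probability)
inc_cost = rng.normal(2_000, 300, n)  # incremental cost per patient
inc_qaly = 0.4 * theta + rng.normal(0, 0.05, n)  # incremental QALYs depend on theta
lam = 20_000                          # cost-effectiveness threshold (per QALY)

inmb = lam * inc_qaly - inc_cost
order = np.argsort(theta)
centiles = np.array_split(order, 100)  # step 1: rank samples into centiles of theta

# step 2: cINMB = expected INMB conditional on theta falling in each centile
cinmb = np.array([inmb[idx].mean() for idx in centiles])
theta_mid = np.array([theta[idx].mean() for idx in centiles])  # x-axis for the line graph

# step 3: each centile is observed with probability 0.01 by construction,
# so averaging the cINMB across centiles recovers the overall expected INMB
print(f"E[INMB] = {cinmb.mean():.0f}")
```

Plotting `cinmb` against `theta_mid` gives the line graph the authors recommend: the decision-maker can read off how the expected net benefit moves across plausible values of the parameter, weighted by how likely those values are.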
The authors illustrate their method using three different parameters from a previously published cost-effectiveness analysis, in each case simulating 15,000 Monte Carlo ‘inner loops’ for each of the 99 centiles. It took me a little while to get my head around the results that are presented, so there’s still some work to do around explaining the visuals to decision-makers. Nevertheless, this approach has the potential to become standard practice.
A head-on ordinal comparison of the composite time trade-off and the better-than-dead method. Value in Health Published 19th December 2019
For years now, methodologists have been trying to find a reliable way to value health states ‘worse than dead’. The EQ-VT protocol, used to value the EQ-5D-5L, includes the composite time trade-off (cTTO). The cTTO task gives people the opportunity to trade away life years in good health to avoid having to subsequently live in a state that they have identified as being ‘worse than dead’ (i.e. they would prefer to die immediately than to live in it). An alternative approach to this is the better-than-dead method, whereby people simply compare given durations in a health state to being dead. But are these two approaches measuring the same thing? This study sought to find out.
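As I understand the EQ-VT scoring (worth checking against the protocol itself before relying on this), the composite task implies utilities along these lines; the function and its names are my own sketch, not code from the study:

```python
# Sketch of how composite TTO (cTTO) responses map to utilities under
# the EQ-VT protocol, as I understand it. Better-than-dead states use a
# conventional 10-year TTO; worse-than-dead states switch to a lead-time
# TTO in which 10 years of full health are added before the state.

def ctto_value(years_full_health: float, worse_than_dead: bool) -> float:
    """Utility implied by the respondent's point of indifference.

    years_full_health: years in full health (out of 10) at which the
    respondent is indifferent. In the worse-than-dead arm this is the
    number of the 10 lead-time years retained before entering the state.
    """
    if worse_than_dead:
        return (years_full_health - 10) / 10  # ranges from -1 to 0
    return years_full_health / 10             # ranges from 0 to 1

print(ctto_value(8, False))  # indifferent at 8 of 10 years: utility 0.8
print(ctto_value(4, True))   # keeps 4 of 10 lead-time years: utility -0.6
```

The switch between the two arms is exactly where the complexity lies: the respondent has to notice that the task has changed from trading away life in the state to trading away healthy lead time.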
The authors recruited a convenience sample of 200 students and asked them to value seven different EQ-5D-5L health states that were close to zero in the Dutch tariff. Each respondent completed both a cTTO task and a better-than-dead task (the order varied) for each of the seven states. The analysis then looked at the extent to which there was agreement between the two methods in terms of whether states were identified as being better or worse than dead. Agreement was measured using counts and using polychoric correlations. Unsurprisingly, agreement was higher for those states that lay further from zero in the Dutch tariff. Around zero, there was quite a bit of disagreement – only 65% agreed for state 44343. Both approaches performed similarly with respect to consistency and test-retest reliability. Overall, the authors interpret these findings as meaning that the two methods are measuring the same underlying preferences.
I don’t find that very convincing. States were more often identified as worse than dead in the better-than-dead task, with 55% valued as such, compared with 37% in the cTTO. That seems like a big difference. The authors provide a variety of possible explanations for the differences, mostly relating to the way the tasks are framed. Or it might be that the complexity of the worse-than-dead task in the cTTO is so confusing and counterintuitive that respondents (intentionally or otherwise) avoid having to do it. For me, the findings reinforce the futility of trying to value health states in relation to being dead. If a slight change in methodology prevents a group of biomedical students from giving consistent assessments of whether or not a state is worse than being dead, what hope do we have?