Chris Sampson’s journal round-up for 19th June 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Health-related resource-use measurement instruments for intersectoral costs and benefits in the education and criminal justice sectors. PharmacoEconomics [PubMed] Published 8th June 2017

Increasingly, people are embracing a societal perspective for economic evaluation. This often requires the identification of costs (and benefits) in non-health sectors such as education and criminal justice. But it feels as if we aren’t as well-versed in capturing these as we are in the health sector. This study reviews the measures that are available to support a broader perspective. The authors searched the Database of Instruments for Resource Use Measurement (DIRUM) as well as the usual electronic journal databases, and also sought to identify the validity and reliability of the instruments. From 167 papers assessed in the review, 26 different measures were identified (half of which were in DIRUM). 21 of the instruments were used in only one study. Half of the measures included items relating to the criminal justice sector, while 21 included education-related items. Common specifics for education included time missed at school, tutoring needs, classroom assistance and attendance at a special school. Criminal justice sector items tended to include legal assistance, prison detainment, court appearances, probation and police contacts. Assessments of psychometric properties were found for only 7 of the 26 measures, with specific details on the non-health items available for just 2: test-retest reliability for the Child and Adolescent Services Assessment (CASA) and validity for the WPAI+CIQ:SHP,V2 (there isn’t room on the Internet for the full name). So there isn’t much evidence of validity for any of these measures in the context of intersectoral (non-health) costs and benefits. It’s no doubt the case that even health-specific resource use measures aren’t subject to adequate testing, but this study has identified that the problem may be greater still when it comes to intersectoral costs and benefits. Most worrying, perhaps, is the fact that 1 in 5 of the articles identified in the review reported using some unspecified instrument, presumably developed specifically for the study or adapted from an off-the-shelf instrument. The authors propose that a new resource use measure for intersectoral costs and benefits (RUM ICB) be developed from scratch, with reference to existing measures and guidance from experts in education and criminal justice.

Use of large-scale HRQoL datasets to generate individualised predictions and inform patients about the likely benefit of surgery. Quality of Life Research [PubMed] Published 31st May 2017

In the NHS, EQ-5D data are now routinely collected from patients before and after undergoing one of four common procedures. These data can be used to see how much patients’ health improves (or deteriorates) following the operations. However, at the individual level, for a person deciding whether or not to undergo the procedure, aggregate outcomes might not be all that useful. This study relates to the development of a nifty online tool that a prospective patient can use to find out the expected likelihood that they will feel better, the same, or worse following the procedure. The data used include EQ-5D-3L responses associated with almost half a million unilateral hip or knee replacements or groin hernia repairs between April 2009 and March 2016. Other variables are also included, and central to this analysis is a Likert scale about improvement or worsening of hip/knee/hernia problems compared to before the operation. The purpose of the study is to group people – based on their pre-operation characteristics – according to their expected postoperative utility scores. The authors employed a recursive Classification and Regression Tree (CART) algorithm to split the datasets into strata according to the risk factors. The final set of risk variables comprised age, gender, pre-operative EQ-5D-3L profile and symptom duration. The CART analysis grouped people into between 55 and 60 different groups for each of the procedures, with the groupings explaining 14-27% of the variation in postoperative utility scores. Minimally important (positive and negative) differences in the EQ-5D utility score were estimated with reference to changes in the Likert scale for each of the procedures; these ranged in magnitude from 0.041 to 0.106. The resulting algorithms are what drive the results delivered by the online interface (you can go and have a play with it). There are a few limitations to the study, such as the reliance on complete case analysis and the fact that the CART analysis might lack predictive ability. And there’s an interesting problem inherent in all of this: the more people use the tool, the less representative it will become as it influences selection into treatment. The validity of the tool as a precise risk calculator is quite limited. But that isn’t really the point. The point is that it unlocks some of the potential value of PROMs to provide meaningful guidance in the process of shared decision-making.
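
For the curious, here is a minimal sketch of the stratification idea, using scikit-learn’s DecisionTreeRegressor on simulated data. This is not the authors’ implementation or the PROMs dataset; the variables and parameters are purely illustrative:

```python
# Sketch: grow a regression tree that splits patients into strata by
# pre-operative characteristics, then read off each stratum's expected
# postoperative EQ-5D utility. Simulated data, illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
n = 10000
age = rng.integers(40, 90, n)                      # pre-op risk factors
female = rng.binomial(1, 0.5, n)
preop_utility = rng.uniform(-0.2, 0.9, n)          # pre-op EQ-5D-3L score

# Invented outcome: post-op utility depends on pre-op utility and age
postop_utility = (0.3 + 0.5 * preop_utility - 0.002 * (age - 40)
                  + rng.normal(scale=0.15, size=n))

X = np.column_stack([age, female, preop_utility])
tree = DecisionTreeRegressor(max_leaf_nodes=55, min_samples_leaf=50)
tree.fit(X, postop_utility)

# Each leaf is a patient stratum; its prediction is the stratum's mean
# expected postoperative utility.
leaf = tree.apply(X)
print("Number of strata:", len(np.unique(leaf)))
print("Expected post-op utility:", tree.predict([[70, 1, 0.5]])[0])
```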

Can present biasedness explain early onset of diabetes and subsequent disease progression? Exploring causal inference by linking survey and register data. Social Science & Medicine [PubMed] Published 26th May 2017

The term ‘irrational’ is overused by economists. But one situation in which I am willing to accept it is with respect to excessive present bias. That people don’t pay enough attention to future outcomes seems to be a fundamental limitation of the human brain in the 21st century. When it comes to diabetes and its complications, there are lots of treatments available, but there is only so much that doctors can do. A lot depends on the patient managing their own disease, and it stands to reason that present bias might cause people to manage their diabetes poorly, as the value of not going blind or losing a foot 20 years in the future seems less salient than the joy of eating your own weight in carbs right now. But there’s a question of causality here: does the kind of behaviour associated with time-inconsistent preferences lead to poorer health, or vice versa? This study provides some insight on that front. The authors outline an expected utility model with quasi-hyperbolic discounting and probability weighting, incorporating a present bias coefficient attached to payoffs occurring in the future. Postal questionnaires were collected from 1031 type 2 diabetes patients in Denmark, with an online discrete choice experiment as a follow-up. These data were combined with data from a registry of around 9000 diabetes patients, from which the postal/online participants were identified. BMI, HbA1c, age and year of diabetes onset were all available in the registry, and the postal survey included physical activity, smoking, EQ-5D, diabetes literacy and education. The DCE was designed to elicit time preferences using the offer of (monetary) lottery wins, with 12 different choice sets presented to all participants. Unfortunately, despite the offer of a real-life lottery award for taking part in the research, only 79 of the 1031 completed the online DCE survey. Regression analyses showed that individuals with diabetes since 1999 or earlier, or who were 48 or younger at the time of onset, exhibited present bias. And the present bias seems to be causal. Being inactive, obese, diabetes illiterate and having lower quality of life or poorer glycaemic control were associated with being present biased. These relationships held when subjected to a number of controls. So it looks as if present bias explains at least part of the variation in self-management and health outcomes for people with diabetes. Clearly, the selection of the small sample is a bit of a concern. It may have meant that people with particular risk preferences (given that the reward was a lottery) were excluded, and so the sample might not be representative. Nevertheless, it seems that at least some people with diabetes could benefit from interventions that increase the salience of future health-related payoffs associated with self-management.
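
For readers unfamiliar with it, a standard quasi-hyperbolic (beta-delta) formulation evaluates a stream of payoffs as U_0 = u(x_0) + \beta \sum_{t=1}^{T} \delta^t u(x_t), where \delta is the usual discount factor and \beta \le 1 is the present bias coefficient (the authors’ model also incorporates probability weighting, so this is a simplification). A payoff t periods away is valued at \beta \delta^t rather than \delta^t: with \beta = 0.7 and \delta = 0.97, a payoff one year away is worth roughly 0.68 of its immediate value instead of 0.97. Because the \beta penalty applies uniformly to everything beyond ‘now’, it generates precisely the time-inconsistent choices described above.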

Credits

Sam Watson’s journal round-up for 12th June 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Machine learning: an applied econometric approach. Journal of Economic Perspectives [RePEc] Published Spring 2017

Machine learning tools have become ubiquitous in the software we use on a day-to-day basis. Facebook can identify faces in photos; Google can tell you the traffic for your journey; Netflix can recommend you movies based on what you’ve watched before. Machine learning algorithms provide a way to estimate an unknown function f that predicts an outcome Y given some data x: Y = f(x) + \epsilon. The potential application of these algorithms to many econometric problems is clear. This article outlines the principles of machine learning methods. It divides econometric problems into prediction, \hat{y}, and parameter estimation, \hat{\beta}, and suggests machine learning is a useful tool for the former. However, I believe this distinction is a false one. Parameters are typically estimated because they represent an average treatment effect, say E(y|x=1) - E(y|x=0). But we can estimate these quantities in ‘\hat{y} problems’ since f(x) = E(y|x). Machine learning algorithms therefore represent a non-parametric (or very highly parametric) approach to the estimation of treatment effects. In cases where the functional form is unknown, where there may be nonlinearities in the response function, or where variables interact, this approach can be very useful. They are not a panacea for estimation problems, of course, since interpretation still rests on the underlying assumptions. For example, as Jennifer Hill discusses, additive regression tree methods can be used to estimate conditional average treatment effects if we can assume the treatment is ignorable conditional on the covariates. This article, while providing a good summary of methods, doesn’t quite identify the right niche where these approaches might be useful in econometrics.
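
As a concrete illustration of that last point, here is a minimal sketch of the approach on simulated data, using scikit-learn’s random forest in place of the Bayesian additive regression trees that Hill describes. Everything below is illustrative, not taken from the article:

```python
# Sketch: estimating treatment effects with a tree-based learner, assuming
# treatment is ignorable given covariates. Fit f(x, t) = E[y | x, t], then
# contrast predictions under t = 1 and t = 0. Simulated data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))                          # covariates
t = rng.binomial(1, 0.5, size=n)                     # randomised treatment
y = X[:, 0] + t * (1 + X[:, 1]) + rng.normal(size=n) # true CATE = 1 + x_2

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(np.column_stack([X, t]), y)

# Conditional effects: difference in predicted outcomes with t set to 1 vs 0
cate = (model.predict(np.column_stack([X, np.ones(n)]))
        - model.predict(np.column_stack([X, np.zeros(n)])))
print("Estimated average treatment effect:", cate.mean())  # close to 1
```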

Incorporating equity in economic evaluations: a multi-attribute equity state approach. European Journal of Health Economics [PubMed] Published 1st June 2017

Efficiency is a key goal for the health service. Economic evaluation provides evidence to support investment decisions: whether displacing resources from one technology to another can produce greater health benefits. Equity is generally not formally considered except through the final investment decision-making process, which may lead to different decisions by different commissioning groups. One approach to incorporating equity considerations into economic evaluation is the weighting of benefits, such as QALYs, by group. For example, a number of studies have estimated that the benefits of end-of-life treatments have a greater social valuation than other treatments. One way of incorporating this into economic evaluation is to raise the cost-effectiveness threshold by an appropriate amount for end-of-life treatments. However, multiple attributes may be relevant for equity considerations, negating a simplistic approach like this. This paper proposes a multi-attribute equity state approach to incorporating equity concerns formally in economic evaluation. The basic premise of this approach is, first, to define a set of morally relevant attributes; second, to derive a weighting scheme for each set of characteristics (similarly to how QALY weights are derived from the EQ-5D questionnaire); and third, to apply these weights in economic evaluation. A key aspect of the last step is to weight both the QALYs gained by a population from a new technology and those displaced from another. Indeed, identifying where resources are displaced from is perhaps the biggest limitation to this approach. This displacement problem has also come up in other discussions revolving around the estimation of the cost-effectiveness threshold. This seems to be an important area for future research.
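
To make the weighting step concrete, here is a toy example. The weights and QALY figures are invented for illustration and are not from the paper:

```python
# Toy example of the equity-weighting step: hypothetical weights applied
# both to the QALYs gained by a new technology and to the QALYs displaced
# elsewhere. All numbers are invented for illustration.

equity_weight = {"end_of_life": 1.3, "general": 1.0}  # hypothetical weights

qalys_gained = 100      # gained by recipients, an end-of-life group
qalys_displaced = 120   # displaced elsewhere, a 'general' group

weighted_gain = qalys_gained * equity_weight["end_of_life"]   # 130.0
weighted_loss = qalys_displaced * equity_weight["general"]    # 120.0

# Unweighted, the technology is a net health loss (100 < 120); with the
# equity weights the comparison flips (130 > 120) and the decision changes.
print(weighted_gain, weighted_loss, weighted_gain > weighted_loss)
```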

Financial incentives, hospital care, and health outcomes: evidence from fair pricing laws. American Economic Journal: Economic Policy [RePEc] Published May 2017

There is a not-insubstantial literature on the response of health care providers to financial incentives. Generally, providers behave as expected, which can often lead to adverse outcomes, such as overtreatment where there is potential for revenue to be made. But empirical studies of this behaviour often rely upon the comparison of conditions with different incentive schedules; rarely is there the opportunity to study the effects of relative shifts in incentives within the same condition. This paper studies the effects of fair pricing laws in the US, which limited the amount uninsured patients would have to pay hospitals, thus providing the opportunity to study patients with the same conditions but who represent different levels of revenue for the hospital. The introduction of fair pricing laws was associated with a reduction in total billing costs and length of stay for uninsured patients, with little evidence of any change in quality. No similar effect was seen among the insured, suggesting that the price ceiling introduced by the fair pricing laws led to an increase in efficiency.
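
The comparison of uninsured with insured patients before and after the laws has the flavour of a difference-in-differences design. Here is a minimal sketch of that logic on simulated data; it is not the paper’s actual specification:

```python
# Difference-in-differences sketch: uninsured vs insured, before vs after
# a fair pricing law. Simulated data; the interaction term recovers the
# effect of the law on the uninsured (true effect: -0.5 days).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 4000
df = pd.DataFrame({
    "uninsured": rng.binomial(1, 0.3, n),   # 'treated' group
    "post": rng.binomial(1, 0.5, n),        # after the law
})
df["los"] = (5 - 0.5 * df["uninsured"] * df["post"]
             + rng.normal(scale=1.0, size=n))   # length of stay

did = smf.ols("los ~ uninsured * post", data=df).fit()
print(did.params["uninsured:post"])  # DiD estimate, close to -0.5
```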

Credits

Alastair Canaway’s journal round-up for 5th June 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Redistribution and redesign in health care: an ebbing tide in England versus growing concerns in the United States. Health Economics [PubMed] Published 4th May 2017

Health Economics included an editorial that will be of interest to a wider readership. It focusses on the similarities and differences between the US and the UK’s health care systems, particularly in terms of (re)design, redistribution, and the challenges facing each. The UK system is characterised by a preference for collectivism in funding and access; the US, by a pluralism of funding. In both countries, groups seek to reverse their existing approach (the grass is always greener). The editorial outlines recent changes in healthcare design, notably the impact of the Affordable Care Act (ACA). The main focus of the editorial is twofold: i) a discussion of the efforts in England to limit public spending whilst increasing hospital sector efficiency, and ii) a discussion of the US’s attempt to reduce the growth in the role of government in financing and delivering healthcare. With respect to the UK, the diagnosis is worrying yet unsurprising: chronic underfunding combined with a plethora of unevidenced reform proposals has left the NHS on a knife-edge; the prognosis is that it is uncertain whether the NHS will survive the next few years. In the US, the picture is more complex, and the paper discusses the possible repeal of components of the ACA. A key point of the discussion relates to the assumption that US healthcare is much more expensive than in any other OECD country because Americans use too much medical care. In fact, as the authors note, the evidence points to the contrary: the high expenditure is due to a myriad of factors, including high wages, high drug prices, and a system which requires many more lawyers, administrators and consultants. The paper discusses various nuances of both systems in the current political context and is well worth reading for a quick overview of some of the key issues facing both countries.

Statistical alchemy: conceptual validity and mapping to generate health state utility values. PharmacoEconomics – Open Published 15th May 2017

With a passing interest in mapping, and counting myself as a bit of a mapping skeptic, a paper discussing mapping in terms of ‘statistical alchemy’ obviously caught my eye. As most will know, mapping is a frequently used technique to obtain utility estimates by predicting utility values from data collected using other measures. The focus of the paper is ‘conceptual validity’: ‘the degree to which the content of two different instruments reflect one another when used for mapping’. There were three aims: i) to explain the idea of conceptual validity in relation to mapping, ii) to consider the implications of poor conceptual validity when mapping for decision making in the context of resource allocation, and iii) to provide suggestions to improve conceptual validity. The paper successfully achieves the first goal with an exposition of the (many) issues with mapping in relation to conceptual validity. It highlights that poor conceptual validity will result in systematic biases in the preferences for health when mapped estimates are used. This is aptly demonstrated through an example using a multiple sclerosis measure and the EQ-5D. A number of ways of improving conceptual validity are also presented; these include: i) response mapping, ii) assessment of ‘conceptual decision validity’ (which draws upon face, construct and criterion validity) to determine whether there is a prima facie case that a mapping function may lead to a valid decision, and iii) examining ‘what is lost’ should mapping be used. I found it to be a thoughtful paper, and it echoed some of my concerns with existing mapping functions. For those interested in conducting a mapping exercise, this is an essential read as an introduction to some of the pitfalls you will encounter.
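
For a sense of what a basic mapping function looks like in practice, here is a minimal sketch of direct mapping via OLS on simulated data. Real mapping studies compare estimators and functional forms (and, as this paper argues, should interrogate conceptual validity); none of the numbers below come from the paper:

```python
# Sketch of a direct ('utility') mapping function: regress EQ-5D utilities
# on a condition-specific score, then predict utilities for new patients.
# Simulated data, illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
score = rng.uniform(0, 100, n)                  # condition-specific measure
utility = np.clip(0.2 + 0.006 * score + rng.normal(scale=0.1, size=n),
                  -0.594, 1.0)                  # EQ-5D-3L UK value range

X = sm.add_constant(score)
fit = sm.OLS(utility, X).fit()
print(fit.params)    # mapping coefficients: utility ~ b0 + b1 * score

# 'Mapped' utilities for new patients' scores:
new_scores = np.array([20.0, 50.0, 80.0])
print(fit.predict(sm.add_constant(new_scores)))
```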

Is there additional value attached to health gains at the end of life? A revisit. Health Economics Published 1st June 2017

Following NICE’s (2009) guidance on the acceptability of higher cost-per-QALY thresholds for life-extending treatments, the past eight years have seen an increase in research examining whether the general public actually have an appetite for this. That is, do the general public have a preference for an end-of-life premium? Many studies have sought to answer this, with mixed results. All previous attempts, however, have tackled the issue from an ex-post perspective: respondents are asked to choose between providing treatments after diagnosis, when patients face a shorter life expectancy without treatment. The issue highlighted in this paper is that by presenting life expectancy as certain and salient (e.g. 2 years, or 10 years), it may be interpreted as a life sentence regardless of length. This paper goes down an alternative route by adopting an ex-ante insurance approach. Additionally, a new comparator is used: end-of-life treatment is compared with a preventative treatment that offers life extension with the same expected health gain. It also explores whether preferences depend on recipient age. The paper found that preventative treatments were prioritised over end-of-life treatments, offering little justification for an end-of-life premium. This is another addition to the mixed literature regarding preferences for end-of-life treatments. The paper has its limitations, which it readily admits, but it is another useful addition to this tricky research area.

Credits