Chris Sampson’s journal round-up for 27th February 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Does it pay to know prices in health care? American Economic Journal: Economic Policy Published February 2017

In the US, people in need of health care have to pay for it – or for insurance to cover it – without knowing in advance how much said health care actually costs. Weird, right? Instinctively, it feels as if people really ought to be able to find out. However, if knowing prices in advance doesn’t actually affect consumption, maybe we can say it really doesn’t matter. Well, we can’t. As this new study shows, having access to price information affects consumer choices. There’s plenty of price dispersion to make this potentially important: in this study’s dataset, a move from the 90th to the 50th percentile is on average associated with a price drop of 35%. The data relate to 387,774 procedures for 6,208 people working for a corporate client of a price information firm. Access to this service was staggered for different employees, creating the potential for experimental investigation. The principal strategy is difference-in-differences regression analysis. Access to the price information service was associated with prices around 1.6% lower on average. For primary care – which might be less price sensitive – and for complex cases where lots of procedures are taking place, the effect is weakened. The results seem robust to matching and other tests. The author is able to provide further insight by showing that access to price information increases the probability of seeing a new doctor by 14%. And when an instrumental variable approach is used to assess the price reduction specifically for people who searched for price information and then received a procedure within 30 days, the reduction in price reaches a whopping 17%. This suggests that the average impact of a 1.6% reduction could be a lot higher if people searched for price information more frequently. The fact that they don’t is likely due to a particular kind of moral hazard being at play. Moral hazard in search occurs when people have no incentive to search for cheaper services. 
The author goes on to show that in any given week an individual is around 90% less likely to search if they have already met their deductible, and that this translates into an elasticity of search propensity with respect to the out-of-pocket share of expenses of approximately 1.8. We mustn’t forget the other side of the welfare coin here. What if people are choosing lower quality care in order to save money, or forgoing it altogether? Looking at the rate of follow-through after searches and bringing in hospital quality data suggests that this isn’t a concern here. This group of people isn’t representative of the general population, so it may be that access to prices is only valuable to certain groups. Nevertheless, this paper tells us a lot about the importance of price information and, in particular, the special kind of moral hazard that can arise in the presence of comprehensive insurance coverage.
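
The intuition behind the identification strategy can be sketched in miniature. Below is a minimal difference-in-differences calculation on simulated log prices – all numbers are invented, not the study’s data, and the staggered rollout is collapsed into a single pre/post split:

```python
import random

random.seed(0)

def mean(xs):
    return sum(xs) / len(xs)

# Illustrative log prices (not the paper's data): both groups share a
# common time trend; tool users additionally see a ~1.6% price drop.
n = 2000
pre_control  = [5.00 + random.gauss(0, 0.1) for _ in range(n)]
post_control = [5.00 + 0.02 + random.gauss(0, 0.1) for _ in range(n)]
pre_treat    = [5.01 + random.gauss(0, 0.1) for _ in range(n)]
post_treat   = [5.01 + 0.02 - 0.016 + random.gauss(0, 0.1) for _ in range(n)]

# Difference-in-differences: the change among tool users net of the
# change among non-users, in log points (~proportional change).
did = (mean(post_treat) - mean(pre_treat)) - (mean(post_control) - mean(pre_control))
print(f"DiD estimate: {did:.3f} log points")
```

A full analysis would run this as a regression with controls and fixed effects; the four-means comparison is just the logic behind the estimator.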

Mitigating the consequences of a health condition: The role of intra- and interhousehold assistance. Journal of Health Economics Published 20th February 2017

There’s a lot of research around the effect that an individual’s health problem can have on their immediate family, both in terms of the overspill in quality of life impacts and the costs of satisfying the need for health care. However, large panel data research can be limited because the data can’t connect non-coresident family members. This study considers informal insurance and consumption smoothing within families beyond the current household. The data come from the Panel Study of Income Dynamics, with 7,578 individuals and around 33,000 household-years from 2001-2011. The panel follows offspring after they leave a household, facilitating the identification of genetically linked families. Participants are asked whether they suffer from any of 11 different health problems and, if they do, the extent to which these limit their daily activities. The data also include information on different categories of spending, including health. The analysis involves regression that accounts for individual fixed effects and looks at the impact of a change in health status on consumption. If a household is fully insured, changes in health status should not affect non-health expenditures. The analysis focuses on the impact of severe limitations, which are reported at some point by 1,321 people. Such a change in health status was associated with a reduction in annual working hours of around 20%, corresponding to $5,000 for men and $2,800 for women. Additionally, household health expenditures increased by $479 on average. The notion of complete insurance facilitating consumption smoothing appears to fail, with a decline in consumption of around 10%. Partial insurance smooths roughly half the loss. Households with formal insurance exhibit a much smaller reduction in consumption. A key finding is that being married may facilitate consumption smoothing to the extent of full insurance, while unmarried couples take a bigger hit.
Home equity seems to play an important role in this dynamic, with married couples more likely to remortgage in response to a health shock. Married couples also receive more in social security transfers. Unmarried couples, it seems, have to turn to non-coresident family members instead and are 50% more likely to use this channel than married couples. Male children are more likely to use their own home equity to support their parents, while female children tend to reduce their own consumption. This study identifies a lot of interesting relationships and divergent strategies for consumption smoothing that warrant further investigation.
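
For readers less familiar with the fixed-effects approach described above, here is a minimal within-estimator on simulated panel data. The effect size, noise level and panel dimensions are all invented for illustration:

```python
import random
from collections import defaultdict

random.seed(1)

T, N, EFFECT = 10, 300, -0.10   # assumed ~10% drop in log consumption after a severe limitation

rows = []
for i in range(N):
    alpha = random.gauss(0, 0.5)                        # persistent individual differences
    onset = random.randrange(3, T) if i % 2 == 0 else None
    for t in range(T):
        shock = 1 if onset is not None and t >= onset else 0
        y = alpha + EFFECT * shock + random.gauss(0, 0.05)
        rows.append((i, shock, y))

# Within transformation: subtracting each individual's own means wipes
# out the fixed effect alpha_i, leaving only within-person variation.
xs, ys = defaultdict(list), defaultdict(list)
for i, x, y in rows:
    xs[i].append(x)
    ys[i].append(y)

num = den = 0.0
for i, x, y in rows:
    dx = x - sum(xs[i]) / T
    dy = y - sum(ys[i]) / T
    num += dx * dy
    den += dx * dx

beta = num / den
print(f"estimated effect of a severe limitation on log consumption: {beta:.3f}")
```

Demeaning within individuals is why persistent differences between households (thrift, wealth, tastes) don’t contaminate the estimated effect of the health shock.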

Handling missing data in within-trial cost-effectiveness analysis: a review with future recommendations. PharmacoEconomics – Open Published 9th February 2017

If you conduct trial-based cost-effectiveness analyses then chances are that at some point you’ve had to go and figure out how to deal with all that missing data. There are a handful of quality papers out there that offer guidance. If we all followed their advice then we’d be doing a decent job of it. This new paper demonstrates that we aren’t all doing a good job of it and offers fresh guidance. The paper starts by outlining the ‘principled’ approach to handling missing data. Essentially it means being sensible with the data, considering the most appropriate statistical model and describing assumptions about the missing data mechanism. Imputation methods that can support this principled approach are briefly discussed. The authors present a quality evaluation scheme, which can be used to assess the appropriateness of the methods adopted in a study and the completeness of reporting. The scheme makes recommendations with respect to the description of missing data, the methods used to handle it and the limitations associated with the study, and can be used to grade papers from A to E. This is what the authors go on to do in a systematic review including 81 eligible papers. A previous review found complete case analysis to be the most popular base case method adopted. In 2009-2015, multiple imputation became the most frequently used base case method, though complete case analysis remains common and many studies are still unclear about the methods adopted. Most articles did not describe any robustness analysis, reporting only the base case approach to missing data. Many articles were classified as the lowest quality (E), though this has improved over time. The authors demonstrate that their proposed grading system is associated with the strength of the assumptions in the adopted methods. If you’re engaged in trial-based economic evaluation, you ought to read this paper.
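
For the uninitiated, here is a toy multiple imputation of missing trial costs, combined with Rubin’s rules. The hot-deck draw from observed values is a deliberately simplistic stand-in for a proper imputation model, and all figures are invented:

```python
import random
import statistics

random.seed(2)

# Simulated trial costs with 20% missing (None); purely illustrative.
costs = [random.gauss(1000, 200) for _ in range(80)] + [None] * 20
observed = [c for c in costs if c is not None]

M = 20
means, variances = [], []
for _ in range(M):
    # Crude hot-deck imputation: fill each gap with a random observed
    # value. A principled analysis would model the missingness mechanism.
    completed = [c if c is not None else random.choice(observed) for c in costs]
    means.append(statistics.mean(completed))
    variances.append(statistics.variance(completed) / len(completed))  # variance of the mean

# Rubin's rules: total variance = within-imputation + (1 + 1/M) * between.
pooled = statistics.mean(means)
within = statistics.mean(variances)
between = statistics.variance(means)
total_var = within + (1 + 1 / M) * between
print(f"pooled mean cost {pooled:.0f}, s.e. {total_var ** 0.5:.1f}")
```

The point of the combining rules is that the standard error reflects uncertainty about the missing values, which a single imputation (or complete case analysis) would understate.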

Credits

Alastair Canaway’s journal round-up for 20th February 2017

The estimation and inclusion of presenteeism costs in applied economic evaluation: a systematic review. Value in Health Published 30th January 2017

Presenteeism is one of those issues that you hear about from time to time, but rarely see addressed within economic evaluations. For those who haven’t come across it before, presenteeism refers to being at work but not working at full capacity, for example, due to your health limiting your ability to work. The literature suggests that presenteeism can have large associated costs which could significantly impact economic evaluations, and so it should be considered. These impacts are rarely captured in practice. This paper sought to identify studies where presenteeism costs were included, examine how valuation was approached, and assess the degree of impact of including presenteeism on costs. The review included cost-of-illness studies as well as economic evaluations; just 28 papers had attempted to capture the costs of presenteeism, across a wide variety of disease areas. A range of methods was used. Across all studies, presenteeism costs accounted for 52% (range 19%-85%) of the total costs relating to the intervention and disease. This is a vast proportion and significantly outweighed absenteeism costs. Presenteeism is clearly a significant issue, yet it is widely ignored within economic evaluation. This may be due in part to the health and social care perspective advised within the NICE reference case, compounded by the lack of guidance on how to measure and value productivity costs. Should an economic evaluation pursue a societal perspective, the findings suggest that capturing and valuing presenteeism costs should be a priority.
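
To see how presenteeism can come to dominate a costing exercise, here is a back-of-envelope human-capital calculation. Every figure below is invented for illustration; none comes from the review:

```python
# Illustrative human-capital costing of one person's annual productivity
# loss. All inputs are assumptions, not data from the review.
hourly_wage = 20.0
impaired_hours = 300          # annual hours worked while unwell
productivity_loss = 0.25      # fraction of capacity lost while present
absent_hours = 40             # annual hours absent due to illness
health_care_cost = 900.0      # annual treatment cost

presenteeism_cost = impaired_hours * productivity_loss * hourly_wage
absenteeism_cost = absent_hours * hourly_wage

total = presenteeism_cost + absenteeism_cost + health_care_cost
share = presenteeism_cost / total
print(f"presenteeism cost £{presenteeism_cost:.0f}, {share:.0%} of total costs")
```

Even with modest assumptions about lost capacity, the many hours spent working while unwell can easily outweigh the comparatively few hours of outright absence.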

Priority to end of life treatments? Views of the public in the Netherlands. Value in Health Published 5th January 2017

Everybody dies, and thus end of life care is probably something that we should all have at least a passing interest in. The end of life context is an incredibly tricky research area, with methodological pitfalls at every turn. End of life care is often seen as ‘different’ to other care, and this is reflected in NICE having supplementary guidance for the appraisal of end of life interventions. Similarly, in the Netherlands, treatments that do not meet typical cost per QALY thresholds may be provided should public support be sufficient. There is, however, a dearth of such evidence, and this paper sought to elucidate the issue using Q methodology. Three primary viewpoints emerged: 1) access to healthcare is a human right – all have equal rights regardless of setting; that is, nobody is more important. This first group appeared to reject the notion of scarce resources when it comes to health: ‘you can’t put a price on life’. 2) The second group focussed on providing the ‘right’ care for those with terminal illness, emphasising that quality of life should be respected and unnecessary care at end of life should be avoided. This group did not place great importance on cost-effectiveness, but did acknowledge that costly treatments at end of life might not be the best use of money. 3) Finally, the third group felt there should be a focus on care that is effective and efficient; that is, those treatments which generate the most health should be prioritised. There was a consensus across all three groups that the ultimate goal of the health system is to generate the greatest overall health benefit for the population. This rejects the notion that priority should be given to those at end of life, and the study concludes that across the three groups there was minimal support for the terminally ill being treated with priority.

Methodological issues surrounding the use of baseline health-related quality of life data to inform trial-based economic evaluations of interventions within emergency and critical care settings: a systematic literature review. PharmacoEconomics [PubMed] Published 6th January 2017

Catchy title. Conducting research within emergency and critical settings presents a number of unique challenges. For the health economist seeking to conduct a trial-based economic evaluation, one such issue relates to the calculation of QALYs. To calculate QALYs within a trial, baseline and follow-up data are required. For obvious reasons – severe and acute injuries/illness, unplanned admission – collecting baseline data on those entering emergency and critical care is problematic. Even when patients are conscious, there are ethical issues surrounding collecting baseline data in this setting; the example used relates to somebody being conscious after cardiac arrest – is it appropriate to ask them to complete HRQL questionnaires? Probably not. Various methods have been used to circumvent this issue; this paper sought to systematically review the methods that have been used and provide guidance for future studies. Just 19 studies made it through screening, highlighting the difficulty of research in this context. Just one study prospectively collected baseline HRQL data, and this was restricted to patients in a non-life-threatening state. Four different strategies were adopted in the remaining papers. Eight studies adopted a fixed health utility for all participants at baseline, while four used only the available data, that is, from the first time point where HRQL was measured. One asked patients to retrospectively recall their baseline state, whilst one other used Delphi methods to derive EQ-5D states from experts. The paper examines the implications and limitations of adopting each of these strategies. The key finding seems to relate to whether or not the trial arms are balanced with respect to HRQL at baseline. This obviously isn’t observed; the authors suggest trial covariates should instead be used to explore this, with adjustments made where applicable.
If, and that’s a big if, trial arms are balanced, then all four of the suggested methods should give similar answers. The key here seems to be the randomisation; however, even the best randomisation techniques cannot guarantee baseline balance. The authors conclude that trials should aim to make an initial assessment of HRQL at the earliest opportunity, and that further research is required to thoroughly examine how the different approaches impact cost-effectiveness results.
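
The baseline problem is easiest to see in the QALY calculation itself. Here is a minimal area-under-the-curve sketch for one participant, with invented utility values, showing how much the (unobservable) baseline utility matters:

```python
# Trapezium-rule QALY calculation for one trial participant.
# Utility values and the fixed baseline of 0 are illustrative
# assumptions, not figures from the review.
def qalys(times_years, utilities):
    """Area under the utility curve via the trapezium rule."""
    total = 0.0
    points = list(zip(times_years, utilities))
    for (t0, u0), (t1, u1) in zip(points, points[1:]):
        total += (t1 - t0) * (u0 + u1) / 2
    return total

times = [0.0, 0.5, 1.0]                   # baseline, 6 and 12 months

# One strategy seen in the review: assign every participant the same
# fixed baseline utility, since the true value is unobservable.
print(qalys(times, [0.0, 0.6, 0.75]))     # fixed baseline of 0
print(qalys(times, [0.3, 0.6, 0.75]))     # if the true baseline were 0.3
```

The two assumptions differ by 0.075 QALYs for the same person; if that error is systematic across arms, incremental QALYs – and the cost-effectiveness conclusion – can move with it.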

Chris Sampson’s journal round-up for 6th February 2017

A review of NICE methods and processes across health technology assessment programmes: why the differences and what is the impact? Applied Health Economics and Health Policy [PubMed] Published 27th January 2017

Depending on the type of technology under consideration, NICE adopts a variety of different approaches in coming up with their recommendations. Different approaches might result in different decisions, which could undermine allocative efficiency. This study explores this possibility. Data were extracted from the manuals and websites for 5 programmes, under the themes of ‘remit and scope’, ‘process of assessment’, ‘methods of evaluation’ and ‘appraisal of evidence’. Semi-structured interviews were conducted with 5 people with expertise in each of the 5 programmes. Results are presented in a series of tables – one for each theme – outlining the essential characteristics of the 5 programmes. In their discussion, the authors then go on to consider how the identified differences might impact on efficiency from either a ‘utilitarian’ health-maximisation perspective or NICE’s egalitarian aim of ensuring adequate levels of health care. Not all programmes deliver recommendations with mandatory funding status, and it is only the ones that do that have a formal appeals process. Allowing for local rulings on funding could be good or bad news for efficiency, depending on the capacity of local decision makers to conduct economic evaluations (so that means probably bad news). At the same time, regional variation could undermine NICE’s fairness agenda. The evidence considered by the programmes varies, from a narrow focus on clinical and cost-effectiveness to the incorporation of budget impact and wider ethical and social values. Only some of the programmes have reference cases, and those that do are the ones that use cost-per-QALY analysis, which probably isn’t a coincidence. The fact that some programmes use outcomes other than QALYs obviously has the potential to undermine health-maximisation. Most differences are born of practicality; there’s no point in insisting on a CUA if there is no evidence at all to support one – the appraisal would simply not happen.
The very existence of alternative programmes indicates that NICE is not simply concerned with health-maximisation. Additional weight is given to rare conditions, for example. And NICE want to encourage research and innovation. So it’s no surprise that we need to take into account NICE’s egalitarian view to understand the type of efficiency for which it strives.

Economic evaluations alongside efficient study designs using large observational datasets: the PLEASANT trial case study. PharmacoEconomics [PubMed] Published 21st January 2017

One of the worst things about working on trial-based economic evaluations is going to lots of effort to collect lots of data, then finding that at the end of the day you don’t have much to show for it. Nowadays, the health service routinely collects a lot of data for other purposes. There have been proposals to use these data – instead of prospectively collecting data – to conduct clinical trials. This study explores the potential for doing an economic evaluation alongside such a trial. The study uses CPRD data, including diagnostic, clinical and resource use information, for 8,608 trial participants. The intervention was the sending out of a letter in the hope of reducing unscheduled medical contacts due to asthma exacerbation in children starting a new school year. QALYs couldn’t be estimated using the CPRD data, so values were derived from the literature and estimated on the basis of exacerbations indicated by changes in prescriptions or hospitalisations. Note here the potentially artificial correlation between costs and outcomes that this creates, thus somewhat undermining the benefit of some good old bootstrapping. The results suggest the intervention is cost-saving with little impact on QALYs. Lots of sensitivity analyses are conducted, which are interesting in themselves and say something about the concerns around some of the structural assumptions. The authors outline the pros and cons of the approach. It’s an important discussion as it seems that studies like this are going to become increasingly common. Regarding data collection, there’s little doubt that this approach is more efficient, and it should be particularly valuable in the evaluation of public health and service delivery type interventions. The problem is that the study is not able to use individual-level cost and outcome data from the same people, which is what sets a trial-based economic evaluation apart from a model-based study. So for me, this isn’t really a trial-based economic evaluation.
Indeed, the analysis incorporates a Markov-type model of exacerbations. It’s a different kind of beast, which incorporates aspects of modelling and aspects of trial-based analysis, along with some unique challenges of its own. There’s a lot more methodological work that needs to be done in this area, but this study demonstrates that it could be fruitful.
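
For readers new to the approach, here is a minimal bootstrap of the incremental cost between two trial arms, of the kind used in standard trial-based CEA. The arm means, spread and sample sizes are all simulated for illustration, not taken from the PLEASANT trial:

```python
import random
import statistics

random.seed(3)

# Simulated per-child annual costs in each arm; all figures invented.
control = [random.gauss(120, 40) for _ in range(400)]
letter  = [random.gauss(105, 40) for _ in range(400)]   # assumed cost-saving letter

def resample_mean(xs):
    # One bootstrap replicate: resample individuals with replacement.
    return statistics.mean(random.choices(xs, k=len(xs)))

B = 1000
diffs = [resample_mean(letter) - resample_mean(control) for _ in range(B)]
p_saving = sum(d < 0 for d in diffs) / B
print(f"mean incremental cost {statistics.mean(diffs):.1f}, P(cost-saving) = {p_saving:.2f}")
```

The caveat in the text is exactly about this step: resampling individuals only propagates joint cost-outcome uncertainty if costs and outcomes are observed on the same people, which they aren’t when QALYs come from the literature.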

“Too much medicine”: insights and explanations from economic theory and research. Social Science & Medicine [PubMed] Published 18th January 2017

Overconsumption of health care represents an inefficient use of resources, and so we wouldn’t recommend it. But is that all we – as economists – have to say on the matter? This study sought to dig a little deeper. A literature search was conducted to establish a working definition of overconsumption. Related notions such as overdiagnosis, overtreatment, overuse, low-value care, overmedicalisation and even ‘pharmaceuticalisation’ all crop up. The authors introduce ‘need’ as a basis for understanding overconsumption: it represents health care that cannot be considered “needed”. A useful distinction is identified between misconsumption – where an individual’s own consumption is detrimental to their own well-being – and overconsumption, which can be understood as having a negative effect on social welfare. Note that in a collectively funded system the two concepts aren’t entirely distinguishable. Misconsumption becomes the focus of the paper, as avoiding harm to patients has been the subject of the “too much medicine” movement. I think this is a shame, and not really consistent with an economist’s usual perspective. The authors go on to discuss issues such as moral hazard, supplier-induced demand, provider payment mechanisms, ‘indication creep’, regret theory, and physicians’ positional consumption, and whether or not such phenomena might lead to individual welfare losses and thus be considered causes of misconsumption. The authors provide a neat diagram showing the various causes of misconsumption on a plane. One dimension represents the extent to which the cause lies in imperfect knowledge or imperfect agency, and the other the degree to which the cause operates at the individual or market level. There’s a big gap in the top right, where market-level causes meet imperfect knowledge. This area could have included patent systems, research fraud and dodgy Pharma practices. Or maybe just a portrait of Ben Goldacre for shorthand.
There are some warnings about the (limited) extent to which market reforms might address misconsumption, and the proposed remedy for overconsumption is not really an economic one. Rather, a change in culture is prescribed. More research looking at existing treatments rather than technology adoption, and to investigate subgroup effects, is also recommended. The authors further suggest collaboration between health economists and ecological economists.
