
Chris Sampson’s journal round-up for 21st September 2020

Every Monday our authors provide a round-up of the latest peer-reviewed journal publications. We cover all issues of major health economics journals as well as some other notable releases. If you’d like to write one of our weekly journal round-ups, get in touch.

Health Economics

Volume 29, Issue 9

This issue of Health Economics includes six full articles and four Letters, all on applied microeconometrics (although one article on switching costs includes a CT scan image, so there may be more going on here than I think).

I was drawn to some research on maternal mental health and earned income tax credits. By looking at expansions in federal and state programmes in the US, the authors try to explain why there might be a positive association between the two. The findings suggest that it isn’t about tax credits supporting insurance coverage, but it is (partly) about labour supply. And the explanation differs according to marital status. If a mother is married, she may experience fewer days in poor mental health because the cash injection allows for greater leisure time. For unmarried mothers, the mechanism may instead be that they increase their labour supply. I don’t find this a very satisfying conclusion. One group benefits because they can decrease their labour supply while the other benefits because they can increase it. Surely, there is a clearer explanation to be found.

Keeping with maternal health, there’s a study looking at the impact of the Affordable Care Act on pregnancy and birth outcomes. The author didn’t find much of an effect on birth outcomes, but income-based insurance subsidies and expansion of Medicaid coverage for women (before they become pregnant) were associated with prenatal clinic attendances and an increase in breastfeeding.

There can’t be many studies on the impact of income inequality on the provision of health care, but there’s an interesting one in this issue. The authors looked at how increases in income inequality were associated with the public/private mix of health care in Canada. The finding is as you might expect (or fear). The number of private clinics and the number of private physicians both seem to increase with income inequality. That means fewer doctors for the public sector.

This issue also includes a study for your collection on competition in health care. This one comes from the US, with an analysis of the density of Medicare primary care services and its relationship to quality of care in terms of screening, follow-up, and prescribing. It turns out that quality is lower where density is higher. We might take that to mean that competition, even where prices are regulated, can be bad for quality, but the article doesn’t provide a convincing explanation for the finding.

It’s very trendy to use Google Trends data nowadays; people reveal a lot in their googlings. There’s a study in this issue that considers the impact of fear on self-assessed health. The fear in question is that of the Affordable Care Act being repealed, which was helpfully created by Trump. Put Google Trends alongside data from a large health survey and you see that, for people who could be affected by the repeal, self-assessed health correlates with the number of relevant searches. It’s important to consider context when using self-assessed health data. Even more importantly, we should start to see politicians and the news media as a public health concern.
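
If you fancy playing with this kind of data yourself, the Trends side is easy enough to reproduce. Below is a minimal sketch, not the authors’ code: the search term, the survey file, and the variable names are all illustrative, and the paper’s actual analysis is far richer than a raw correlation.

# Pull relative search interest for an ACA-repeal term via pytrends and correlate it
# with a hypothetical monthly series of mean self-assessed health from a survey.
# Requires: pip install pytrends pandas
import pandas as pd
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(["repeal obamacare"], timeframe="2016-01-01 2018-12-31", geo="US")
searches = pytrends.interest_over_time()["repeal obamacare"].resample("M").mean()

# Hypothetical survey extract: monthly mean self-assessed health (1 = poor, 5 = excellent)
survey = pd.read_csv("survey_monthly.csv", parse_dates=["month"], index_col="month")["mean_sah"]

print(searches.corr(survey))  # a crude correlation, nothing like the paper's identification strategy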

Value in Health

Volume 23, Issue 8

As ever, a hefty issue from Value in Health, with 17 articles. Of most interest to me are those relating to methods of economic evaluation.

The Institute for Clinical and Economic Review (ICER; the organisation, not the ratio) recently updated its value framework, as part of which it essentially rejected the use of multi-criteria decision analysis (MCDA). An article in this issue defends the use of MCDA, outlining quantitative approaches that can be used to combine numerous sources of value in a robust and transparent way. The authors present an example using a past ICER assessment, and they are clearly not convinced that ICER did enough groundwork before rejecting MCDA.
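
For readers wondering what a ‘quantitative approach’ to MCDA looks like in practice, the simplest version is a linear additive model: score each criterion, weight it, and sum. Here’s a toy sketch; the criteria, scores, and weights are entirely made up and have nothing to do with the article or any ICER assessment.

# Toy linear-additive MCDA aggregation (illustrative only; all numbers invented).
criteria_scores = {              # each criterion scored 0-1 for a candidate treatment
    "qaly_gain": 0.70,
    "severity_of_condition": 0.55,
    "unmet_need": 0.80,
    "budget_impact": 0.30,       # higher = more affordable
}
weights = {                      # elicited weights, summing to 1
    "qaly_gain": 0.50,
    "severity_of_condition": 0.20,
    "unmet_need": 0.20,
    "budget_impact": 0.10,
}
overall_value = sum(weights[c] * criteria_scores[c] for c in criteria_scores)
print(f"Aggregate value score: {overall_value:.2f}")  # 0.65 on a 0-1 scale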

The (non-)inclusion of future unrelated costs in economic evaluation is becoming a hot topic. Part of the challenge is that it may not be easy to accurately estimate lifetime health care use. A new study in this issue presents a methodology for England and Wales, based on estimates of NHS expenditure by age, sex, and time to death. Three examples are presented, which inevitably show that incremental cost-effectiveness ratios (ICERs) increase and that the ranking of interventions can change. I’m still not convinced that we’re ready to include unrelated costs, but it’s good that we’re closer to having the option.
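
The mechanics are simple enough to illustrate with back-of-envelope arithmetic. The numbers below are hypothetical and not taken from the article, but they show why ICERs rise once survival gains drag unrelated NHS costs along with them.

# Illustrative arithmetic only (hypothetical numbers): including future unrelated
# health care costs for the extra life-years raises the ICER.
incremental_cost  = 20_000   # £, related costs of the new intervention vs comparator
incremental_qalys = 1.0
extra_life_years  = 1.5      # survival gain that generates unrelated care use
annual_unrelated  = 3_000    # £, assumed NHS spend per surviving life-year (would vary by age, sex, time to death)

icer_excluding = incremental_cost / incremental_qalys
icer_including = (incremental_cost + extra_life_years * annual_unrelated) / incremental_qalys
print(icer_excluding, icer_including)   # 20000.0 vs 24500.0 per QALY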

Recent years have seen a proliferation of alternative approaches to estimating cost-effectiveness thresholds. It is no longer a choice between the (impossibly difficult) task of identifying a causal relationship between historic expenditures and outcomes and the (almost as difficult) task of identifying society’s willingness to pay for a QALY. This issue includes an article outlining a method to estimate the value of a QALY in France, derived from official estimates of the value of a statistical life combined with EQ-5D data. The headline estimate for the value of a statistical QALY is €147,093. Big money.
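
The broad logic of such conversions is to divide the value of a statistical life by the discounted quality-adjusted life expectancy of the relevant population. A rough sketch with assumed inputs (this is not the paper’s data or its exact method):

# Back-of-envelope VSL-to-QALY conversion (all inputs assumed, for illustration only).
vsl = 3_000_000           # €, official value of a statistical life (assumed)
remaining_life_years = 40
mean_eq5d_utility = 0.85  # average EQ-5D utility over remaining life (assumed)
discount_rate = 0.025

discounted_qale = sum(mean_eq5d_utility / (1 + discount_rate) ** t
                      for t in range(remaining_life_years))
value_per_qaly = vsl / discounted_qale
print(f"{value_per_qaly:,.0f} € per QALY")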

On the more applied side of cost-effectiveness analysis, there’s a study on using DICE (discretely integrated condition event) models, which demonstrates how much simpler and more efficient they can be than Markov-style models in Excel. We’ve mentioned DICE a few times on the blog. It seems useful, but if you’re going to learn a new modelling technique, why not abandon the spreadsheets? Modellers have nothing to lose but their (Markov) chains.
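
For anyone tempted by that advice, here is roughly what a spreadsheet-free cohort model looks like: a minimal three-state Markov model in Python. It has nothing to do with DICE or the article; the states, transition probabilities, costs, and utilities are invented.

# Minimal three-state Markov cohort model (illustrative only).
import numpy as np

P = np.array([[0.90, 0.08, 0.02],      # annual transition probabilities: Well, Sick, Dead
              [0.00, 0.85, 0.15],      # rows sum to 1
              [0.00, 0.00, 1.00]])
cost    = np.array([500.0, 5_000.0, 0.0])   # annual cost per state
utility = np.array([0.90, 0.60, 0.0])       # annual utility per state
discount = 0.035

cohort = np.array([1.0, 0.0, 0.0])          # everyone starts in 'Well'
total_cost = total_qalys = 0.0
for year in range(40):
    df = 1 / (1 + discount) ** year
    total_cost  += df * (cohort @ cost)
    total_qalys += df * (cohort @ utility)
    cohort = cohort @ P                     # advance the chain one annual cycle

print(f"Discounted cost £{total_cost:,.0f}, QALYs {total_qalys:.2f}")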

Finally, there’s a study turning the SIDECAR-D instrument, for carers of people with dementia, into a preference-accompanied measure. The researchers used best-worst scaling with EQ-5D and SIDECAR-D items. So, you now have a new tool for measuring carer QALYs.

Health Services and Outcomes Research Methodology

Volume 20, Issue 2-3

I don’t think this journal has ever featured on the blog. I’m not sure I even knew it existed until recently. Anyway, this issue (or is it two issues?) includes four articles that are all relevant to health economics.

There are a couple of studies on modelling health care use and expenditure. One is concerned with two-part models and provides formulations for the identification of marginal and incremental effects for four alternative models, which the authors apply to an analysis of German panel data. Another study is on cluster analysis of patients with high health expenditures, comparing two different methods.
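
For the uninitiated, a two-part model splits expenditure into the probability of any use and the level of spending conditional on use. A minimal sketch using statsmodels is below; the data, variable names, and specification are hypothetical and are not the authors’ formulations.

# Two-part model sketch: logit for any spending, Gamma GLM (log link) for positive spending.
# Requires a recent statsmodels; 'panel.csv' and its columns are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("panel.csv")                      # columns: spend, age, female, treated
df["any_spend"] = (df["spend"] > 0).astype(int)

part1 = smf.logit("any_spend ~ age + female + treated", data=df).fit()
part2 = smf.glm("spend ~ age + female + treated", data=df[df["spend"] > 0],
                family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# Expected expenditure combines both parts; an incremental effect of 'treated' comes from
# predicting under treated = 1 and treated = 0 and averaging the difference.
def expected_spend(data):
    return part1.predict(data) * part2.predict(data)

d1, d0 = df.assign(treated=1), df.assign(treated=0)
print("Average incremental effect:", (expected_spend(d1) - expected_spend(d0)).mean())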

There’s also an article setting out a novel cluster sampling design, aimed at addressing the challenge of nested providers in the US. The authors present algorithms to support survey sampling in this context.

Finally, there’s a study comparing two versions of the SF-6D (including the new SF-6Dv2) with the EQ-5D-5L in the context of breast cancer for patients in Iran. The conclusion seems to be that they give different results, and that the two versions of the SF-6D differ from the EQ-5D in different ways, but it’s difficult to see how that’s important.

Credits

Support the blog, become a patron on Patreon.

By Chris Sampson

Founder of the Academic Health Economists' Blog. Principal Economist at the Office of Health Economics. ORCID: 0000-0001-9470-2369


1 Comment
Mour
2 years ago

Re: DICE.

You should keep in mind that DICE is a product from a single company and that this paper is just more promotional material pushing that product (notice who the first author is). You write that the study “demonstrates how much simpler and more efficient they can be than Markov-style models in Excel.” But the model used in that study was a Markov model. And DICE is still built in spreadsheets, with an Excel add-in on top (which is its main selling point), so, if anything, it will delay the transition to other software.

In that particular study (unfortunately, I don’t have access to the full text, so I’m judging from the abstract) they seem to be making a completely irrelevant comparison. First of all, who cares whether a model file is 0.12 MB or 18 MB? The days of floppy disks are ancient history. In addition, they clearly admit that they compare their macro-run DICE model with a non-DICE model that had 32 copies of the same set of calculations. If one thinks that this size is a problem (which it isn’t necessarily), it would be straightforward to re-use the same structure for all strata and treatments with a macro. It’s a standard thing to do in commercially-built Excel models, and a single replication should take no more than 16 seconds. So their comparison is unfair and clearly designed to support their thesis.

Perhaps more importantly, DICE is not much more than a slightly modified (and limited) version of DES (discrete event simulation). They build hybrid or Markov models by making state transitions events. Once you move outside Excel and their add-in, there’s no need for that at all: you can use DES or any kind of hybrid model you can think of.
