Every Monday our authors provide a round-up of the latest peer-reviewed journal publications. We cover all issues of major health economics journals as well as some other notable releases. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.
Volume 12, Issue 3
Of the 14 articles in the latest issue of AEJ:EP, there are three that focus on health care.
A study from Taiwan estimates price elasticities to identify the impact of cost-sharing on health care use in early childhood. Health care for children under three years of age is subsidised, which reduces cost-sharing and sets the stage for the researchers’ regression discontinuity design. They find that the price elasticity for inpatient care is essentially zero, with no real change in utilisation around the age threshold. This implies that full coverage for children’s inpatient care is ‘safe’ insofar as overuse is not a concern. But it’s a somewhat different story for outpatient care, where cost-sharing does influence care use. In particular, it affects the types of health care facilities that people access; in Taiwan, there is no gatekeeper system to prevent people from accessing hospital services directly. The findings highlight the potential for cost-sharing to reduce expenditure. But the authors are only able to look at serious health outcomes (e.g. mortality), so it isn’t clear how reduced access to services beyond the age of three might affect children’s health.
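For readers unfamiliar with the design, here’s a minimal sketch of the identification strategy on simulated data. Everything here (variable names, effect sizes, the bandwidth) is hypothetical and chosen only to illustrate the idea, not the authors’ actual specification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: age in months is the running variable, with a cutoff
# at 36 months, when the childhood subsidy ends and cost-sharing rises.
n = 5000
age = rng.uniform(0, 72, n)
above = (age >= 36).astype(float)
# Hypothetical: outpatient visits drop when cost-sharing kicks in at age 3.
visits = 10 - 0.05 * age - 1.5 * above + rng.normal(0, 1, n)

# Sharp RD: local linear regression with separate slopes on each side,
# within a bandwidth around the cutoff.
bw = 12  # months
in_bw = np.abs(age - 36) <= bw
a, d = age[in_bw] - 36, above[in_bw]
X = np.column_stack([np.ones(d.size), d, a, a * d])
beta, *_ = np.linalg.lstsq(X, visits[in_bw], rcond=None)
print(f"Estimated jump at the cutoff: {beta[1]:.2f}")  # ~ -1.5
```

The estimated discontinuity (here, the coefficient on the above-cutoff indicator) is the causal effect of losing the subsidy, under the usual RD assumption that children just below and just above the age threshold are otherwise comparable.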
In some health insurance markets, public subsidies are used to reduce premiums to a target price. This issue includes a study of these price-linked subsidies that tries to identify their impact on prices. The authors use data from Massachusetts to estimate demand elasticities, exploiting changes in prices and plan terms, and also simulate alternative scenarios and market equilibria to identify overall welfare effects. Linking subsidies to prices is shown to reduce competition and increase prices by up to 6%. Unless the market is very competitive, or there is a great deal of uncertainty about cost shocks, linking subsidies to prices is probably not worth it.
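The basic incentive problem can be shown with a stylised toy example (my own, not the paper’s model, and with made-up numbers): when the subsidy is pegged to the cheapest premium minus a fixed ‘affordable amount’, the cheapest insurer can raise its premium without its own enrollees paying a penny more.

```python
# Stylised illustration of why price-linked subsidies can blunt
# competition. The 'affordable amount' and premiums are hypothetical.
AFFORDABLE = 100  # fixed consumer contribution

def consumer_price(premiums, own):
    subsidy = min(premiums) - AFFORDABLE  # subsidy linked to lowest price
    return max(0, premiums[own] - subsidy)

low = consumer_price([200, 300], own=0)    # cheapest plan costs consumers 100
hiked = consumer_price([250, 300], own=0)  # after a 50 hike: still 100
rival = consumer_price([250, 300], own=1)  # rival's consumer price falls too
print(low, hiked, rival)  # prints: 100 100 150
```

Because the subsidy rises one-for-one with the lowest premium, a price hike by the cheapest insurer is absorbed by the public purse rather than punished by consumers, which is the sense in which price-linking dampens competition.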
Why don’t people buy enough long-term care insurance? A study using Canadian data tries to answer this question using a clever combination of stated-preference data and risk prediction. The survey asked an online panel of 2,000 people to rate alternative long-term care insurance plans. It also asked participants general questions about the types of long-term care they would prefer and about their risk perceptions. These data were then plugged into an existing microsimulation model to estimate the individuals’ needs for long-term care in the future. From all of this, the authors then construct models of demand and supply. A key finding is that underinsurance arises because people lack information about available products and provision, and about their likelihood of needing long-term care in the future. Despite this, the authors’ models suggest that increasing awareness might only have a moderate impact on uptake.
Volume 40, Issue 5
There are several articles in the latest issue of MDM on methods of model-based cost-effectiveness analysis, of which I’ll focus on two.
Threshold analysis can be a useful way of exploring what might need to change in order to achieve cost-effectiveness. This issue includes an article on probabilistic threshold analysis, which is (to me, at least) a novel idea. The authors compare several methods for identifying the minimum probability of hospitalisation at which a vaccination programme becomes cost-effective. A deterministic threshold analysis suggests that the intervention is not cost-effective within five years for any probability of hospitalisation. Conversely, a probabilistic analysis using a generalised additive model proves to be an efficient method and provides a threshold probability (of 0.061). This could prove to be a very useful methodology, especially for preventive care, where cost-effectiveness is heavily dependent on the prevailing probability of various events.
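The logic of a probabilistic threshold analysis can be sketched as follows. This is a simulated example with made-up data, in which I’ve planted the paper’s threshold of 0.061 as the ‘truth’ to recover, and I’ve used a simple polynomial smoother as a stand-in for the paper’s generalised additive model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical set-up: incremental net benefit (INB) of a vaccination
# programme rises with the probability of hospitalisation p, but each
# PSA iteration draws other parameters at random, adding noise.
n = 2000
p = rng.uniform(0.0, 0.2, n)                  # sampled probabilities
inb = 50 * (p - 0.061) + rng.normal(0, 1, n)  # noisy INB, zero at p = 0.061

# Stand-in for the paper's GAM: a cubic polynomial smoother of
# E[INB | p], fitted by least squares across the PSA samples.
X = np.vander(p, 4)
coef, *_ = np.linalg.lstsq(X, inb, rcond=None)

# Threshold: smallest p on a fine grid where smoothed INB crosses zero.
grid = np.linspace(0, 0.2, 2001)
fitted = np.vander(grid, 4) @ coef
threshold = grid[np.argmax(fitted >= 0)]
print(f"Threshold probability of hospitalisation: {threshold:.3f}")
```

The appeal of the smoothing approach is efficiency: rather than re-running a full probabilistic analysis at each candidate probability, one regression across the existing PSA samples recovers the whole expected net benefit curve, and the threshold falls out of it.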
A group of researchers from Erasmus have a paper on measuring uncertainty in discrete-event simulations. The relevant types of uncertainty are much the same as in any other modelling framework but, with an event-based patient-level model, they can be a bit trickier to handle. The four challenges identified are i) varying patient heterogeneity for each comparator, ii) changes to life expectancy after events, iii) stochastic uncertainty in treatment effectiveness, and iv) other parameter uncertainty. Using a COPD model, the authors demonstrate the implications of getting it wrong and describe solutions to each challenge.
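Challenge (i) is worth a tiny illustration. A standard remedy (I’m not claiming it is the authors’ exact solution) is to evaluate every comparator on the same sampled patients with common random numbers, so that differences between arms reflect treatment and not sampling noise in patient characteristics. All numbers below are hypothetical:

```python
import numpy as np

def simulate(patients_age, hazard_scale, rng):
    # Time to event falls with age; treatment scales the hazard down.
    rates = hazard_scale * (0.01 + 0.001 * patients_age)
    return rng.exponential(1.0 / rates).mean()  # mean event-free time

# One SHARED patient population, used for both comparators.
age = np.random.default_rng(2).normal(65, 10, 10000)

# Same seed for each arm => common random numbers: the incremental
# result reflects the treatment effect, not patient-sampling noise.
t_control = simulate(age, hazard_scale=1.0, rng=np.random.default_rng(3))
t_treat = simulate(age, hazard_scale=0.8, rng=np.random.default_rng(3))
print(f"Incremental mean event-free time: {t_treat - t_control:.2f} years")
```

If each arm instead drew its own patients (and its own event times), part of the ‘incremental effect’ would just be luck of the draw, which is exactly the confounding of heterogeneity and comparator that the paper warns about.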
As discussed in my recent Thesis Thursday appearance, I’ve done some work on the cost-effectiveness of risk prediction models. So I was interested in a study in this issue on people’s preferences for risk prediction models in the context of chronic lung disease. The researchers conducted a discrete choice experiment, asking people to choose between models defined in terms of sensitivity, specificity, uncertainty in these estimates, and the time horizon of the model. In general, people valued all of these attributes and prioritised sensitivity and specificity to a similar extent. The results tell us something about how likely patients might be to use risk prediction models with certain characteristics, but there seems to be one big problem with the study: people with chronic lung disease weren’t consulted in the identification of the attributes. I would contend that there are probably far more important attributes than those included, such as the risk level at which care might be provided. And while people might be concerned about something like false-positive rates, which is a concept that is easy to grasp, I’m not convinced that the study’s framing of sensitivity and specificity would be salient to participants.
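To make the set-up concrete, here is a toy version of how such a DCE is usually analysed: each model profile is a bundle of attribute levels, and choices follow a conditional logit. The preference weights below are entirely made up for illustration; the paper estimates them from respondents’ choices.

```python
import math

# Hypothetical preference weights over the study's four attributes.
weights = {"sensitivity": 4.0, "specificity": 4.0,  # valued similarly
           "uncertainty": -2.0, "horizon_years": 0.05}

def utility(profile):
    # Linear-in-attributes utility, the standard DCE workhorse.
    return sum(weights[k] * v for k, v in profile.items())

model_a = {"sensitivity": 0.90, "specificity": 0.70,
           "uncertainty": 0.10, "horizon_years": 5}
model_b = {"sensitivity": 0.75, "specificity": 0.85,
           "uncertainty": 0.05, "horizon_years": 10}

ua, ub = utility(model_a), utility(model_b)
p_a = math.exp(ua) / (math.exp(ua) + math.exp(ub))
print(f"Probability of choosing model A: {p_a:.2f}")
```

The attribute list is the whole game here, which is why skipping patient input at that stage matters: whatever attributes the researchers choose, the model will dutifully estimate weights for them, important or not.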
Finally, this issue includes a study on the importance of socioeconomic factors in distributional cost-effectiveness analysis (DCEA: a method we’ve discussed a few times). The researchers used two DCEA models with analyses conducted at both national and local levels. Accounting for socioeconomic factors didn’t make much of a difference to the average cost-effectiveness, but it was very important in determining the impact of the intervention on (in)equality. But there seem to be no obvious patterns that might help us determine when doing a DCEA (rather than a plain old CEA) might be worthwhile. Presumably, then, we should always be doing it!
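For anyone new to DCEA, the core calculation can be sketched in a few lines. The quintile health levels and QALY gains below are made up, and the Atkinson inequality aversion parameter (ε = 11) is an illustrative value in the region of estimates reported for England:

```python
import numpy as np

# Hypothetical baseline quality-adjusted life expectancy (QALE) by
# socioeconomic quintile (most to least deprived), and a hypothetical
# intervention's QALY gains, skewed towards the most deprived.
qale = np.array([62.0, 65.0, 67.0, 69.0, 71.0])
gain = np.array([0.30, 0.25, 0.20, 0.15, 0.10])

def ede(h, epsilon=11.0):
    # Equally-distributed-equivalent health under Atkinson
    # inequality aversion epsilon: the level of health which, if
    # everyone had it, society would value as much as distribution h.
    return (np.mean(h ** (1 - epsilon))) ** (1 / (1 - epsilon))

before, after = ede(qale), ede(qale + gain)
print(f"Mean gain: {np.mean(gain):.3f} QALYs")
print(f"EDE gain:  {after - before:.3f}")
```

In this toy example the EDE gain exceeds the mean gain, which is the DCEA way of saying the intervention both improves average health and reduces inequality; a plain CEA would only ever see the mean.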