
Journal round-up: Health Economics 31(7)

Our authors provide regular round-ups of the latest peer-reviewed journals. We cover all issues of major health economics journals as well as other notable releases. Visit our journal round-up log to see past editions organised by publication title. If you’d like to write a journal round-up, get in touch.


Health Economics is one of my favourite journals. I personally love how much detail is required in each of the articles, leading to high-quality studies. As part of my review, I have focussed on the more interesting research, defined as research with policy implications that may therefore help to shape current health economics practice. These include articles on: collaborations between economists and epidemiologists to inform policymaking; the impact of provider supply on access to primary care; hospital transfer payments; machine learning for risk estimation; a regression framework to examine determinants of net benefit separation; and improving access to healthcare through telemedicine.

This issue’s ‘perspective’ discusses the benefits of collaborations between epidemiologists and economists in the context of economic modelling. As a lover of all things related to economic modelling, I was naturally drawn to it.


Modelling to inform economy-wide pandemic policy: bringing epidemiologists and economists together.

The premise of this opinion piece is that epidemiologists and economists should work together for the greater good. Naturally, the case study discussed is that of COVID-19 and, more specifically, modelling. Similarities are drawn between the two fields, namely that both use mathematical modelling techniques to inform policy. However, the piece argues that because the two fields tailor their modelling to different research questions, the resulting ‘policy prescriptions’ ‘diverge’.

The perspective piece cites the following three elements that contribute to these differences:

  1. Understanding the degree to which behaviour shifts occur
  2. The primary factors considered that might drive that behaviour
  3. Balancing health outcomes with non-health outcomes 

The authors argue that epidemiologic models focus on health outcomes whilst economic models emphasise health–wealth trade-offs, and that epidemiological models do not consider the preferences and constraints that drive individual decision-making.

The authors convened a group of scholars from the respective fields to understand how best the two fields could work together to develop more effective, non-conflicting policy recommendations. Using a restaurant capacity restriction scenario, the scholars agreed that economic and health outcomes should be considered together, that data should be used to inform differential disease transmission, and that models should be realistic in terms of both disease burden and human behaviour. The convened scholars agreed that all three components above were considered in both fields, but that the emphasis placed on each varied. Following this discussion with experts, six steps are recommended that would bring the relevant experts together to improve policymaking. These are grouped into three themes: relationship building, improvements in assumptions and data, and the combination of efforts. The finer details of the three themes are largely self-explanatory and therefore not covered here.

I completely agree with the point that the authors are trying to make. For decision-makers to be able to make informed policy decisions, which is even more important during a pandemic given the wide-reaching impacts, it is essential that the models are developed to incorporate all the relevant elements. However, whilst I agree with the premise of this perspective piece, it raises a key issue which I believe needs to be addressed. Here goes. 

It is fair to say that we can all agree on the importance of informed (and collaborative) decision-making, which the authors correctly argue is fundamental to preventing the proliferation of extreme viewpoints not supported by science, such as those seen during the height of the COVID-19 pandemic. The conclusion drawn, namely the six recommended steps, is fair and reasoned. However, from the viewpoint of a health economist based primarily outside of academia, they lack originality and are, in fact, descriptive of the way in which the majority of health economists work when developing models. The piece seems to imply that all economists (and epidemiologists) function in a non-collaborative, one-dimensional, and regressive manner, and the authors use this as the basis for their perspective. Yes, this was indeed the case during the COVID-19 pandemic, which, unfortunately, contributed to the public’s negative perception of science in general. However, it would be unbalanced and unfair not to point out that during the height of the pandemic there were innumerable voices from multiple fields (including economists and epidemiologists) highlighting the lack of collaboration in the development of these models. This was, and is, a concern, as these models were receiving the most attention and being used to inform pandemic policy, and yet they were largely (arguably) based on one-dimensional and narrow viewpoints. The question we should be asking is why it was acceptable that these models received so much uncritical airtime.


Provider supply and access to primary care.

Next up is a US-based study covering the relationship between wait times and the availability of physicians, using data from the Veterans Health Administration (VHA). The VHA comprises 168 hospitals, 1,053 community-based outpatient clinics, and 135 nursing homes. As healthcare provision through the VHA is publicly funded, wait times are used as a means to manage demand. The Area Health Resources File and population counts were also used in the analysis.

The authors used panel data from the VHA to estimate a linear model of primary care wait times for new patients, defined as those who have not seen a clinician in two or more years. The model includes clinician capacity, productivity, demand factors, certain medical centre control variables, medical centre fixed effects, year indicators, and quarter indicators. They also included instrumental variables that essentially account for physician time. This enabled them to determine the effect of capacity on wait times in a supply and demand framework.
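The flavour of this estimation can be sketched on simulated data. Below is a minimal fixed-effects OLS illustration in Python: the centre counts, noise levels, and the built-in elasticity of -0.21 are all invented, and the authors' actual specification, instruments, and data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n_centres, n_quarters = 40, 12

# Simulated panel: each medical centre observed over several quarters.
centre = np.repeat(np.arange(n_centres), n_quarters)
log_capacity = rng.normal(0.0, 0.3, n_centres * n_quarters)
centre_effect = rng.normal(0.0, 0.5, n_centres)[centre]

# Invented elasticity of -0.21: a 10% rise in capacity cuts waits by ~2.1%.
log_wait = 2.0 - 0.21 * log_capacity + centre_effect + rng.normal(0.0, 0.05, centre.size)

# OLS with medical-centre fixed effects, entered as a full set of centre dummies.
dummies = (centre[:, None] == np.arange(n_centres)).astype(float)
X = np.column_stack([log_capacity, dummies])
beta, *_ = np.linalg.lstsq(X, log_wait, rcond=None)
print(round(beta[0], 2))  # estimated capacity elasticity, near -0.21
```

In a log-log specification like this, the coefficient on capacity reads directly as an elasticity, which is how a "10% increase in capacity reduces waits by 2.1%" style result is obtained.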

The ordinary least squares (OLS) results show a negative relationship between wait times and capacity, meaning that increased provider supply reduces wait times. Specifically, a 10% increase in capacity reduced wait times by 2.1%. Increased physician productivity and the availability of alternative healthcare options also reduced wait times, as did lower income and education levels, higher housing prices, and greater numbers of nurses and physician assistants. One of the aims of the study was to determine the impact of giving current patients priority over new patients. Unsurprisingly, they found that this led to poorer access for new patients.

Overall, increasing the number of physicians leads to reduced wait times. Although this was a good-quality study, it could have been strengthened by including an estimation of the point of equilibrium. That is, at what point are there too many clinicians?


Effectiveness of hospital transfer payments under a prospective payment system: an analysis of a policy change in New Zealand.

The premise of this study is that prospective payment systems that use diagnosis-specific flat fees can lead to increased transfers between hospitals, as hospitals seek to avoid treatment costs that exceed the flat fee. This means that patients who are expected to be expensive to treat may be unnecessarily transferred to a different hospital. In 2003, New Zealand introduced a transfer fee payable each time patients were transferred between hospitals. New Zealand hospital data (courtesy of the Integrated Data Infrastructure from the Ministry of Health) are therefore an excellent case study for understanding the impact of transfer fees on hospital transfers. More so because the New Zealand healthcare system requires individuals to seek care at their local domiciled hospital. The transfer fee is based on national prices and the actual volume of the flows, and is payable by the hospital of origin from their funding allocation. Tertiary hospitals receive higher fees as they are expected to treat more complex cases by virtue of their specialist nature.

The study analysed 4,020,796 healthcare events from 2000 to 2007, comparing behaviour before and after the 2003 introduction of the transfer fee. For each health event, available data include the start and end date, diagnosis and severity, length of stay, admission and readmission details, hospital location (both domicile and service), transfer information, cost weight, death date if the patient died, and basic patient demographic information. A logit model was developed to identify changes in behaviour due to the introduction of the transfer fee. The model results suggest that there was a reduction in transfers to non-tertiary hospitals whilst transfers to tertiary hospitals increased. Transfers to non-tertiary hospitals, which were found to have the second-highest complexity rating (in terms of case-mix), were most impacted by the introduction of the transfer fee. Importantly, the transfer fee did not lead to worse health outcomes.
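To make the logit design concrete, here is a toy version on simulated event-level data, with a post-fee indicator and a tertiary-destination interaction. This is my own stylised specification with invented coefficients, not the paper's model or data:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 20_000

# Hypothetical events: post-fee indicator and tertiary-destination flag.
post = rng.integers(0, 2, n)       # 1 if the event occurred after 2003
tertiary = rng.integers(0, 2, n)   # 1 if the candidate destination is tertiary

# Invented truth: the fee lowers transfer odds, less so on tertiary routes.
logit_p = -2.0 - 0.5 * post + 0.7 * post * tertiary
transferred = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(float)

X = np.column_stack([np.ones(n), post, tertiary, post * tertiary])

def neg_loglik(beta):
    xb = X @ beta
    # Numerically stable negative logistic log-likelihood.
    return np.sum(np.logaddexp(0.0, xb) - transferred * xb)

beta_hat = minimize(neg_loglik, np.zeros(4), method="BFGS").x
print(beta_hat[1] < 0, beta_hat[3] > 0)  # fewer transfers overall, tertiary less affected
```

The signs on the post-fee term and its interaction with a tertiary destination are what carry the behavioural story: a negative main effect with a larger positive interaction reproduces the paper's pattern of fewer non-tertiary transfers alongside more tertiary transfers.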

I found this study very interesting and really enjoyed reading it. It used a robust data set to analyse the impact of a key policy change on transfers between hospitals and concluded that the introduction of transfer fees is an effective way to prevent hospitals from transferring patients unnecessarily in order to cut their own costs in a prospective payment system. 


Comparing risk adjustment estimation methods under data availability constraints.

Risk-adjustment algorithms are used to predict healthcare costs, which are in turn used to determine healthcare funding. The Italian healthcare system forms the basis for this research study. An age-weighted capitation approach is used to allocate funding to the individual regions, although this may not accurately predict healthcare needs. Italy’s heavily decentralised system results in large heterogeneity in data collection, meaning that there are large differences in data granularity and quality control. The study explored alternative risk-adjustment estimation methods based on person-based formulae, which are thought to be more equitable.

The authors compare conventional methods (OLS and generalised linear models) and machine learning methods (penalised regressions, a generalised additive model, random forest (RF), and a super learner (SL)) in six data scenarios using the Emilia 2016 administrative data set. Specifically, hospital discharge records and outpatient pharmaceutical and speciality databases were used. Socio-demographic, hospital-based, and pharmacy covariates were included as independent variables and annual total expenditure as the dependent variable in a concurrent risk-adjustment approach. A training set for developing the algorithms and a test set for assessing algorithm performance were generated from 100,000 randomly sampled observations, with R² and mean squared error used to assess model fit. This enabled the authors to evaluate which techniques are optimal in data-constrained environments such as the Italian healthcare system.
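As a rough sketch of this kind of horse race, the snippet below pits OLS against a random forest on a simulated train/test split, scoring both with R² and mean squared error. The data (covariates and a cost outcome with a non-linear interaction) are entirely invented, and the paper's super learner and six data scenarios are not reproduced:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 5_000

# Hypothetical covariates standing in for age, morbidity flags, prior use.
X = rng.normal(size=(n, 5))
# Simulated annual cost with an interaction OLS cannot capture unaided.
y = 100 + 20 * X[:, 0] + 15 * np.maximum(X[:, 1], 0) * X[:, 2] + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

r2 = {}
for name, model in [("OLS", LinearRegression()),
                    ("RF", RandomForestRegressor(n_estimators=200, random_state=0))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    r2[name] = r2_score(y_te, pred)
    print(name, round(r2[name], 3), round(mean_squared_error(y_te, pred), 1))
```

With rich, fine-grained covariates and a genuinely non-linear cost surface, the flexible learner pulls ahead on the held-out set; with coarse data the gap shrinks, which is essentially the study's headline finding.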

The results showed that machine learning methods, especially the super learners (discrete and weighted), outperformed OLS in all scenarios with fine data granularity, although the differences were not statistically significant and, as the authors themselves point out, there was considerable data paucity even in their most complete data set. They point the reader to a US-based study which concluded that using machine learning techniques in data-rich settings led to improved estimation. Importantly, they also found that there was no advantage to using machine learning techniques over OLS where data granularity is coarse. In summary, this study suggests that machine learning techniques are most appropriate in data-rich environments.

The reason why I selected this study is that aside from the data constraints (!) from using the Italian healthcare system, it highlights an important point: just because a technique is considered sexy, doesn’t mean that it is the one you should always choose. There is a lesson here for all.


A regression framework for a probabilistic measure of cost-effectiveness.

This study highlighted to me that I had completely missed out on the development of the net-benefit separation (NBS). If, like me, you have not yet heard of this, allow me to enlighten you. It is a novel probabilistic measure of cost-effectiveness (developed by one of the authors of this study) that characterises the stochastic ordering of individual net benefits (INB; not a comparative measure) between treated and untreated populations; in other words, the difference in the distribution of INBs between the two groups. The authors claim it can “serve as an alternative measure of a treatment’s cost-effectiveness”. It measures the “probability that a patient receiving treatment will experience greater treatment benefit than a patient receiving control”, and, “where treatment is randomised, NBS can be estimated nonparametrically using a scaled variant of the Wilcoxon rank-sum statistic”.
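The quoted estimator is straightforward to sketch. With simulated randomised arms, the Mann-Whitney U statistic scaled by the product of the arm sizes estimates the probability that a treated patient's INB exceeds a control patient's. All inputs below (QALYs, costs, and the willingness-to-pay threshold) are invented for illustration:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
lam = 50_000  # illustrative willingness-to-pay per QALY

# Hypothetical randomised arms: per-patient effects (QALYs) and costs.
q_trt, c_trt = rng.normal(1.25, 0.30, 200), rng.normal(28_000, 5_000, 200)
q_ctl, c_ctl = rng.normal(1.00, 0.30, 200), rng.normal(25_000, 5_000, 200)

# Individual net benefit at lambda: INB_i = lambda * effect_i - cost_i.
inb_trt = lam * q_trt - c_trt
inb_ctl = lam * q_ctl - c_ctl

# NBS estimate: Mann-Whitney U for the treated arm, scaled by n1 * n2.
u, _ = mannwhitneyu(inb_trt, inb_ctl, alternative="two-sided")
nbs = u / (len(inb_trt) * len(inb_ctl))
print(round(nbs, 2))  # values above 0.5 favour the treatment
```

An NBS of 0.5 means a randomly chosen treated patient is no more likely to have the higher net benefit than a control patient; the further above 0.5, the stronger the separation.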

The authors previously introduced a semiparametric Monte Carlo standardisation process (for instance, using inverse probability of censoring weights to standardise) to estimate NBS while adjusting for differences in the distribution of confounding variables across treatment arms. Here, they introduce regression techniques to account for variability in the NBS using observed covariates. Endometrial cancer is used as a case study for their proposed methodology, following a simulation study which was used to specify the models: Weibull models for survival and a two-stage logistic and log-normal model for costs, all conditional on treatment status and confounders. The authors ran 10,000 Monte Carlo observations for survival and costs using the aforementioned models. NBS was then estimated to compare the intervention to the comparator, controlling for cancer stage and Charlson comorbidity index in a probit model. In all, the NBS allows the user to account for individual patient characteristics to understand how these impact cost-effectiveness. This methodology is limited to cost-effectiveness models with individual patient data and knowledge of all confounders.

This is potentially an exciting methodology to use when trying to allocate resources in a cost-effective manner. However, it requires significant knowledge, not least data and information on all the factors that may impact treatment outcomes. As such, I am not convinced of the feasibility of using this methodology in cost-effectiveness modelling.


On the demand for telemedicine: evidence from the COVID-19 pandemic.

Another COVID-19-related study, this time from Argentina. A couple of years before the pandemic, Argentina had embarked on the use of telemedicine as part of its universal health coverage strategy.

The authors sought to study how COVID-19 impacted the use of telemedicine. For this, they used panel data for all calls received during 2019-2022, generated from administrative data from one of the largest suppliers of telemedicine in Argentina, to analyse changes in the use of telemedicine due to the pandemic. An important metric they assessed was the daily number of first-time callers. Llamando al Doctor (‘Calling the Doctor’) is accessed through an app, which asks screening questions before patients are given the opportunity to speak with a clinician. An ‘observed mobility’ variable, used by the authors as a marker for COVID-19 restrictions, was generated based on three indicators of mobility.

Using these data, the authors estimated an event-study model and a difference-in-differences model; the latter to account for seasonal differences in telemedicine use and to allow the calculation of treatment effects. They found that there was a large increase in the use of telemedicine (up 230%), including first-time callers (up 198%), as a result of the pandemic. The authors also found that calls resulting in prescriptions increased by 332%, calls requiring follow-up by 305%, and referrals by 190%. Family medicine consultations saw a 290% increase in daily calls, and demand from older patients saw the largest increase. A decrease in the use of telemedicine following the easing of mobility restrictions was observed, but use remained higher than pre-restriction levels. Specifically, a 1% increase in average mobility resulted in a reduction of between 0.8 and 2.6 percentage points in demand for telemedicine. Therefore, this study shows that demand for telemedicine exists irrespective of pandemics. Importantly, it highlights a hidden demand that could be exploited to improve access to healthcare at a relatively low cost. It appears, then, that policymakers are right to focus on telemedicine as a way to improve access to care.
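The seasonal-adjustment logic of the difference-in-differences can be shown with a stylised two-by-two comparison, using a pre-pandemic year to net out the seasonal change in call volumes. The daily call counts below are invented, and the authors' event-study specification is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(4)
days = 120

# Invented daily call counts for matching calendar windows in both years.
pre_2019 = rng.poisson(100, days)    # comparison year, earlier window
post_2019 = rng.poisson(110, days)   # comparison year: seasonal uptick only
pre_2020 = rng.poisson(100, days)    # pandemic year, before restrictions
post_2020 = rng.poisson(340, days)   # pandemic year, under restrictions

# Difference-in-differences in logs: pandemic-year change minus seasonal change.
did = (np.log(post_2020.mean()) - np.log(pre_2020.mean())) \
    - (np.log(post_2019.mean()) - np.log(pre_2019.mean()))
print(f"{100 * (np.exp(did) - 1):.0f}% increase net of seasonality")
```

Subtracting the comparison-year change strips out the ordinary seasonal growth in calls, leaving only the jump attributable to the pandemic period.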


There were a few more studies not reviewed here, including on rainfall shocks, child mortality and water infrastructure; assessing willingness-to-pay for glasses in Burkina Faso; petrol prices and obesity; coping with the consequences of short-term illness shocks; hospital-physician integration and risk-coding intensity; prenatal substance use policies and new-born health; the effects of Uber diffusion on the mental health of drivers; and a couple of letters.


Credits

Support the blog, become a patron on Patreon.