Thesis Thursday: Miqdad Asaria

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Miqdad Asaria who graduated with a PhD from the University of York. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title: The economics of health inequality in the English National Health Service
Supervisors: Richard Cookson, Tim Doran
Repository link: http://etheses.whiterose.ac.uk/16189

What types of inequality are relevant in the context of the NHS?

For me, the inequalities that really matter are the inequalities in health outcomes; in the English context, it is the socioeconomic patterning of these inequalities that is of particular concern. The focus of health policy in England over the last 200 years has been on improving the average health of the population as well as on providing financial risk protection against catastrophic health expenditure. Whilst great strides have been made in improving average population health through various pioneering interventions, including the establishment of the NHS, health inequality has in fact consistently widened over this period. Recent research suggests that, in terms of quality-adjusted life expectancy, the gap between people living in the most deprived fifth of neighbourhoods in the country and those living in the most affluent fifth is now approximately 11 quality-adjusted life years.

However, these socioeconomic inequalities in health typically accumulate across the life course, and there is a limited amount that health care on its own can do to prevent these gaps from widening or indeed to close them once they emerge. This is why health systems, including the NHS, typically focus on measuring and tackling the inequalities that they can influence, even though eliminating such inequalities can have at best only a modest impact on reducing health inequality overall. These comprise inequalities in access to and quality of healthcare, as well as inequality in those health outcomes specifically amenable to healthcare.

What were the key methods and data that you used to identify levels of health inequality?

I am currently working on a project with the Ministry of Health and Family Welfare in India and it is really making me appreciate the amazingly detailed and comprehensive administrative datasets available to researchers in England. For the work underpinning my thesis I linked 10 years of data looking at every hospital admission and outpatient visit in the country with the quality and outcomes achieved for patients registered at each primary care practice, the number of doctors working at each primary care practice, general population census data, cause-specific mortality data, hospital cost data and deprivation data all at neighbourhood level. I spent a lot of time assembling, cleaning and linking these data sets and then used this data platform to build a range of health inequality indicators – some of which can be seen in an interactive tool I built to present the data to clinical commissioning groups.

As well as measuring inequality retrospectively in order to provide evidence to evaluate past NHS policies, and building tools to enable the NHS to monitor inequality going forward, another key focus of my thesis was to develop methods to model and incorporate health inequality impacts into cost-effectiveness analysis. These methods allow analysts to evaluate proposed health interventions in terms of their impact on the distribution of health rather than just their impact on the mythical average citizen. The distributional cost-effectiveness analysis framework I developed is based on the idea of using social welfare functions to evaluate the estimated health distributions arising from the rollout of different health care interventions and to compute the equity-efficiency trade-offs that would need to be made in order to prefer one intervention over another. A key parameter required in this analysis, in order to make equity-efficiency trade-offs, is the level of health inequality aversion. This parameter was quite tricky to estimate, as the methods used to elicit it from the general public are prone to various framing effects. The preliminary estimates that I used in my analysis suggested that, at the margin, the general public think people living in the most deprived fifth of neighbourhoods in the country deserve approximately 7 times the priority in terms of health care spending as those living in the most affluent fifth of neighbourhoods.
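
To give a flavour of how the social welfare function step works, here is a minimal sketch (illustrative only, not the thesis code) using an Atkinson-type equally distributed equivalent of the kind used in this literature; the baseline health distribution, intervention effects and inequality aversion values are all invented. The point is simply that the ranking of two interventions can flip as health inequality aversion rises.

```python
# Illustrative sketch only (not the thesis code): evaluating two hypothetical
# post-intervention health distributions with an Atkinson-type social welfare
# function, as used in distributional cost-effectiveness analysis.
import numpy as np

def atkinson_ede(health, epsilon):
    """Equally distributed equivalent (EDE) health for inequality aversion epsilon."""
    health = np.asarray(health, dtype=float)
    if epsilon == 1.0:
        return np.exp(np.mean(np.log(health)))  # limiting case of the Atkinson EDE
    return np.mean(health ** (1.0 - epsilon)) ** (1.0 / (1.0 - epsilon))

# Hypothetical quality-adjusted life expectancy by deprivation fifth (most to least deprived)
baseline = np.array([62.0, 65.0, 67.0, 70.0, 73.0])
intervention_a = baseline + np.array([0.20, 0.20, 0.20, 0.20, 0.20])  # equal gains, larger total
intervention_b = baseline + np.array([0.40, 0.30, 0.15, 0.05, 0.00])  # pro-poor gains, smaller total

for eps in (0.0, 5.0, 10.0):  # 0 = pure health maximisation; higher = more inequality averse
    ede_a = atkinson_ede(intervention_a, eps)
    ede_b = atkinson_ede(intervention_b, eps)
    preferred = "A" if ede_a > ede_b else "B"
    print(f"epsilon={eps:>4}: EDE(A)={ede_a:.3f}  EDE(B)={ede_b:.3f}  -> prefer {preferred}")
```

With no inequality aversion the intervention with the larger total gain wins; as aversion increases, the pro-poor intervention is preferred despite delivering fewer QALYs overall.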

Does your PhD work enable us to attach a ‘cost’ to inequality, and ‘value’ to policies that reduce it?

As budding economists, we are ever careful to distinguish association from causation. My thesis starts by estimating the cost associated with inequality to the NHS: that is, the additional cost to the NHS of treating the excess morbidity in those living in relatively deprived neighbourhoods. I estimated the difference between the actual NHS hospital budget and what the cost would have been if everybody in the country had the morbidity profile of those who live in the most affluent fifth of neighbourhoods. For inpatient hospital costs this difference came to £4.8 billion per year; widening the scope to all NHS costs, it came to £12.5 billion per year, approximately a fifth of the total NHS budget. I looked at this both cross-sectionally and by modelling estimated lifetime health care use, and found that even over their entire lifetimes people living in more deprived neighbourhoods consumed more health care, despite their substantially shorter life expectancies.

This cost is of course very different from the value of policies to reduce inequality. The difference arises for two main reasons. First, my estimates were associations rather than causal effects, so we cannot conclude that reducing socioeconomic inequality would actually give everybody in the country the morbidity profile of those living in the most affluent fifth of neighbourhoods. Second, and perhaps more significantly, my estimates do not value any of the health benefits that would result from reducing health inequality; they only count the costs that the NHS could save through the excess morbidity avoided. The value of the health benefits forgone, in terms of quality-adjusted life years gained, would have to be converted into monetary terms using an estimate of willingness to pay for health and added to these cost savings (which themselves would need to be converted to consumption values) to arrive at a total value of reducing inequality from a health perspective. There would also, of course, be a range of non-health impacts of reducing inequality that would need to be accounted for if this exercise were to be conducted comprehensively.
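
In symbols, a back-of-the-envelope version of this valuation (my notation, not taken from the thesis) might be:

V \approx v_h \times \Delta Q + k \times \Delta C

where \Delta Q is the QALYs gained from reducing inequality, v_h is the willingness to pay for a QALY, \Delta C is the NHS cost saving, and k converts NHS expenditure into consumption-equivalent value.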

In simple terms, if the causal link between socioeconomic inequality and health could be determined then the value to the health sector of policies that could substantially reduce this inequality would likely be far greater than the costs quoted here.

How did you find the PhD-by-publication route? Would you recommend it?

I came to academia relatively late, having previously worked in both government and the private sector for a number of years. The PhD by publication route suited me well, as it allowed me to get stuck into a number of projects, work with a wide range of academics and build an academic career whilst simultaneously curating a set of papers to submit as a thesis. However, it is certainly not the fastest way to achieve PhD status: my thesis took six years to compile. The publication route is also still relatively uncommon in England, and I found both my supervisors and examiners somewhat perplexed about how to approach it. Additionally, my wife, who did her PhD by the traditional route, assures me that it is not a ‘proper’ PhD!

For those fresh out of an MSc programme the traditional route probably works well, giving you the opportunity to develop research skills and focus on one area in depth with lots of guidance from a dedicated supervisor. However, for people like me who probably would never have got around to doing a traditional PhD, it is nice that there is an alternative way to acquire the ‘Dr’ title which I am finding confers many unanticipated benefits.

What advice would you give to a researcher looking to study health inequality?

The most important thing that I have learnt from my research is that health inequality, particularly in England, has very little to do with health care and everything to do with socioeconomic inequality. I would encourage researchers interested in this area to look at broader interventions tackling the social determinants of health. There is lots of exciting work going on at the moment around basic income and social housing as well as around the intersection between the environment and health which I would love to get stuck into given the chance.

Why insurance works better with some adverse selection

Adverse selection, a process whereby low-risk individuals drop out of the insurance pool, leaving only high-risk individuals, arises when the individuals purchasing insurance have better information regarding their risk status than does the insurer. […] In the limit, adverse selection can make insurance markets unsustainable. Even short of the market disappearing altogether… The market cannot offer a full set of insurance contracts, reducing allocative efficiency.

The story summarised above (by Jeremiah Hurley) is familiar to all health economists. Adverse selection is generally understood to be a universal problem for efficiency in health insurance (and indeed all insurance), which should always be avoided or minimised, or else traded off against other objectives such as equity. In my book, Loss Coverage: Why Insurance Works Better with Some Adverse Selection, I put forward a contrary argument: a modest degree of adverse selection in insurance can increase efficiency.

My argument depends on two departures from canonical models of insurance, both realistic. First, I assume that not all individuals will buy insurance when it is risk-rated; this is justified by observation of extant markets (e.g. around 10% of the US population has no health insurance, and around 50% have no life insurance). Second, my criterion of efficiency is based not on Pareto optimality (unsatisfactory because it says so little) or utilities (unsatisfactory because always unobservable), but on ‘loss coverage.’

In its simplest form, loss coverage is the expected fraction of the population’s losses which is compensated by insurance.
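
In symbols: for a population of n risks, where risk i has probability of loss p_i and loss amount L_i, and Q denotes the set of risks who buy insurance, this can be written as

loss coverage = \frac{\sum_{i \in Q} p_i L_i}{\sum_{i=1}^{n} p_i L_i}

With unit losses, as in the toy example below, the L_i cancel, and loss coverage is simply the expected losses of the insured divided by the expected losses of the whole population.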

Since the purpose of insurance is to compensate the population’s losses, I argue that higher loss coverage is more efficient than lower loss coverage. Under this criterion, insurance of one high risk will contribute more to efficiency than insurance of one low risk. This is intuitively reasonable: higher risks are those who most need insurance!

If this intuition is accepted, the orthodox arguments about adverse selection seem to overlook one point. True, adverse selection leads to a higher average price for insurance and a fall in numbers of individuals insured. But it also leads to a shift in coverage towards higher risks (those who need insurance most). If this shift in coverage is large enough, it can more than outweigh the fall in numbers insured, so that loss coverage is increased.

My argument can be illustrated by the following toy example. The numbers are simplified and exaggerated for clarity, but the underlying argument is quite general.

Consider a population of just ten risks (say lives), with three alternative scenarios for insurance risk classification: risk-differentiated premiums, pooled premiums (with some adverse selection), and pooled premiums (with severe adverse selection). Assume that all losses and insurance cover are for unit amounts (this simplifies the discussion, but it is not necessary).

The three scenarios are represented in the three panels of the illustration. Each ‘H’ represents one higher risk and each ‘L’ represents one lower risk. The population has the typical predominance of lower risks: a lower risk-group of eight risks each with probability of loss 0.01, and a higher risk-group of two risks each with probability of loss 0.04.

In Scenario 1, risk-differentiated premiums (actuarially fair premiums) are charged. The demand response of each risk-group to an actuarially fair price is the same: exactly half the members of each risk-group buy insurance. The shading shows that a total of five risks buy insurance.

[Figure: Scenario 1 (risk-differentiated premiums)]

The weighted average of the premiums paid is (4 x 0.01 + 1 x 0.04)/5 = 0.016. Since higher and lower risks are insured in the same proportions as they exist in the population, there is no adverse selection.

Exactly half the population’s expected losses are compensated by insurance. I describe this as ‘loss coverage’ of 50%. (The calculation is (4 x 0.01 + 1 x 0.04) / (8 x 0.01 + 2 x 0.04) = 0.50.)

In Scenario 2, risk classification has been banned, and so insurers have to charge a common pooled premium to both higher and lower risks. Higher risks buy more insurance, and lower risks buy less (adverse selection). The pooled premium is set as the weighted average of the true risks, so that expected profits on low risks exactly offset expected losses on high risks. This weighted average premium is (1 x 0.01 + 2 x 0.04)/3 = 0.03. The shading symbolises that three risks (compared with five previously) buy insurance.

[Figure: Scenario 2 (pooled premiums, some adverse selection)]

Note that the weighted average premium is higher in Scenario 2, and the number of risks insured is lower. These are the essential features of adverse selection, which Scenario 2 accurately and completely represents. But there is a surprise: despite the adverse selection in Scenario 2, the expected losses compensated by insurance for the whole population are now higher. That is, 56% of the population’s expected losses are now compensated by insurance, compared with 50% before. (The calculation is (1 x 0.01 + 2 x 0.04) / (8 x 0.01 + 2 x 0.04) = 0.56.)

I argue that Scenario 2, with a higher expected fraction of the population’s losses compensated by insurance – higher loss coverage – is more efficient than Scenario 1. The superiority of Scenario 2 arises not despite adverse selection, but because of adverse selection.

At this point an economist might typically retort that the lower number of risks insured in Scenario 2 compared with Scenario 1 is suggestive of lower efficiency. However, it seems surprising that an arrangement such as Scenario 2, under which more risk is voluntarily traded and more losses are compensated, should always be disparaged as less efficient.

A ban on risk classification can also reduce loss coverage, if the adverse selection which the ban induces becomes too severe. This possibility is illustrated in Scenario 3. Adverse selection has progressed to the point where only one higher risk, and no lower risks, buys insurance. The expected losses compensated by insurance for the whole population are now lower. That is, 25% of the population’s expected losses are now compensated by insurance, compared with 50% in Scenario 1 and 56% in Scenario 2. (The calculation is (1 x 0.04) / (8 x 0.01 + 2 x 0.04) = 0.25.)

[Figure: Scenario 3 (pooled premiums, severe adverse selection)]

These scenarios suggest that banning risk classification can increase loss coverage if it induces the ‘right amount’ of adverse selection (Scenario 2), but reduce loss coverage if it generates ‘too much’ adverse selection (Scenario 3). Which of Scenario 2 or Scenario 3 actually prevails depends on the demand elasticities of higher and lower risks.

The argument illustrated by the toy example applies broadly. It does not depend on any unusual choice of numbers for the example. The key idea is that loss coverage – and hence, I argue, efficiency – is increased by a modest degree of adverse selection.
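
For readers who prefer code to arithmetic, here is a minimal sketch reproducing the loss coverage figures from the toy example above (the probabilities and the sets of insured risks are taken straight from the three scenarios):

```python
# Minimal sketch reproducing the toy example: 8 lower risks (p = 0.01) and
# 2 higher risks (p = 0.04), unit losses, under three insurance scenarios.

population = [0.01] * 8 + [0.04] * 2          # probability of loss for each of the 10 risks
total_expected_losses = sum(population)        # 8 x 0.01 + 2 x 0.04 = 0.16

def loss_coverage(insured):
    """Expected fraction of the population's losses compensated by insurance."""
    return sum(insured) / total_expected_losses

scenarios = {
    # Scenario 1: risk-differentiated premiums, half of each risk-group insures
    "Scenario 1 (no adverse selection)":     [0.01] * 4 + [0.04] * 1,
    # Scenario 2: pooled premium, some adverse selection (1 lower + 2 higher risks insure)
    "Scenario 2 (some adverse selection)":   [0.01] * 1 + [0.04] * 2,
    # Scenario 3: pooled premium, severe adverse selection (only 1 higher risk insures)
    "Scenario 3 (severe adverse selection)": [0.04] * 1,
}

for name, insured in scenarios.items():
    premium = sum(insured) / len(insured)      # weighted average premium paid
    print(f"{name}: premium = {premium:.3f}, loss coverage = {loss_coverage(insured):.2f}")
```

Running this reproduces the premiums of 0.016, 0.03 and 0.04, and loss coverage of 0.50, 0.56 and 0.25 respectively.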

Sam Watson’s journal round-up for 12th June 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Machine learning: an applied econometric approach. Journal of Economic Perspectives [RePEc] Published Spring 2017

Machine learning tools have become ubiquitous in the software we use on a day-to-day basis. Facebook can identify faces in photos; Google can tell you the traffic for your journey; Netflix can recommend you movies based on what you’ve watched before. Machine learning algorithms provide a way to estimate an unknown function f that predicts an outcome Y given some data x: Y = f(x) + \epsilon. The potential application of these algorithms to many econometric problems is clear. This article outlines the principles of machine learning methods. It divides econometric problems into prediction, \hat{y}, and parameter estimation, \hat{\beta}, and suggests machine learning is a useful tool for the former. However, I believe this distinction is a false one. Parameters are typically estimated because they represent an average treatment effect, say E(y|x=1) - E(y|x=0). But we can estimate these quantities in ‘\hat{y} problems’ since f(x) = E(y|x). Machine learning algorithms therefore represent a non-parametric (or very highly parametric) approach to the estimation of treatment effects. In cases where the functional form is unknown, where there may be nonlinearities in the response function, and where there are interactions between variables, this approach can be very useful. Of course, these algorithms are not a panacea for estimation problems, since interpretation still rests on the identifying assumptions. For example, as Jennifer Hill discusses, additive regression tree methods can be used to estimate conditional average treatment effects if we can assume the treatment is ignorable conditional on the covariates. This article, while providing a good summary of methods, doesn’t quite identify the right niche where these approaches might be useful in econometrics.
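
As a toy illustration of the point about treatment effects (my sketch, not anything from the paper): if treatment is ignorable given the covariates, a flexible off-the-shelf learner can be fitted to E(y|x, t) and an average treatment effect read off by differencing predictions with treatment switched on and off. The data-generating process and learner below are arbitrary choices for illustration.

```python
# Illustrative sketch (not from the paper): estimating an average treatment effect
# from a flexible ML fit of E(y | x, t), assuming ignorability given x.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 3))                          # covariates
t = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))      # treatment depends on x[:, 0] (confounding)
y = x[:, 0] + np.sin(x[:, 1]) + t * (1 + 0.5 * x[:, 2]) + rng.normal(size=n)  # true ATE = 1

# Fit y as a flexible function of (x, t)
model = GradientBoostingRegressor().fit(np.column_stack([x, t]), y)

# Difference predictions with treatment switched on vs off for everyone
y1 = model.predict(np.column_stack([x, np.ones(n)]))
y0 = model.predict(np.column_stack([x, np.zeros(n)]))
print("Estimated ATE:", (y1 - y0).mean())            # should be roughly 1
```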

Incorporating equity in economic evaluations: a multi-attribute equity state approach. European Journal of Health Economics [PubMed] Published 1st June 2017

Efficiency is a key goal for the health service. Economic evaluation provides evidence to support investment decisions, indicating whether displacing resources from one technology to another can produce greater health benefits. Equity is generally not formally considered, except through the final investment decision-making process, which may lead to different decisions by different commissioning groups. One approach to incorporating equity considerations into economic evaluation is the weighting of benefits, such as QALYs, by group. For example, a number of studies have estimated that benefits of end-of-life treatments have a greater social valuation than other treatments. One way of incorporating this into economic evaluation is to raise the cost-effectiveness threshold by an appropriate amount for end-of-life treatments. However, multiple attributes may be relevant for equity considerations, negating such a simplistic approach. This paper proposes a multi-attribute equity state approach to incorporating equity concerns formally in economic evaluation. The basic premise of this approach is, first, to define a set of morally relevant attributes; second, to derive a weighting scheme for each set of characteristics (similarly to how QALY weights are derived from the EQ-5D questionnaire); and third, to apply these weights in economic evaluation. A key aspect of the last step is to weight both the QALYs gained by a population from a new technology and those displaced from another. Indeed, identifying where resources are displaced from is perhaps the biggest limitation of this approach. This displacement problem has also come up in other discussions revolving around the estimation of the cost-effectiveness threshold. This seems to be an important area for future research.
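
A stylised sketch of the weighting step only (the equity states, weights and QALY figures below are invented for illustration, not taken from the paper):

```python
# Stylised sketch of the weighting step; the equity states, weights and QALY
# figures are invented for illustration, not taken from the paper.

# Hypothetical equity weights by 'equity state' (e.g. end-of-life status x deprivation)
equity_weights = {
    ("end_of_life", "deprived"):     1.5,
    ("end_of_life", "not_deprived"): 1.3,
    ("other",       "deprived"):     1.2,
    ("other",       "not_deprived"): 1.0,
}

# Hypothetical QALYs gained by the new technology and displaced elsewhere, by equity state
qalys_gained    = {("end_of_life", "deprived"): 120, ("other", "not_deprived"): 10}
qalys_displaced = {("other", "not_deprived"): 140}

def weighted_qalys(qalys):
    """Sum of QALYs after applying the equity weight for each state."""
    return sum(equity_weights[state] * q for state, q in qalys.items())

net_unweighted = sum(qalys_gained.values()) - sum(qalys_displaced.values())
net_weighted = weighted_qalys(qalys_gained) - weighted_qalys(qalys_displaced)
print(f"Net QALYs unweighted: {net_unweighted}, equity-weighted: {net_weighted:.0f}")
```

In this made-up example the technology displaces more QALYs than it generates, but because its gains accrue to a more heavily weighted equity state, the equity-weighted net benefit is positive, illustrating why both the gained and the displaced QALYs must be weighted.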

Financial incentives, hospital care, and health outcomes: evidence from fair pricing laws. American Economic Journal: Economic Policy [RePEc] Published May 2017

There is a not-insubstantial literature on the response of health care providers to financial incentives. Generally, providers behave as expected, which can often lead to adverse outcomes, such as overtreatment in cases where there is potential for revenue to be made. But empirical studies of this behaviour often rely upon the comparison of conditions with different incentive schedules; rarely is there the opportunity to study the effects of relative shifts in incentives within the same condition. This paper studies the effects of fair pricing laws in the US, which limited the amount uninsured patients would have to pay hospitals, thus providing the opportunity to study patients with the same conditions who represent different levels of revenue for the hospital. The introduction of fair pricing laws was associated with a reduction in total billing costs and length of stay for uninsured patients, but little association was seen with changes in quality. A similar effect was not seen in the insured, suggesting that the price ceiling introduced by the fair pricing laws led to an increase in efficiency.
