How to cite The Academic Health Economists’ Blog

Occasionally we get emails from people who would like to cite our blog posts. Usually, these requests are framed as ‘is this going to be published in a journal?’. It’s no surprise that people are more comfortable citing the traditional academic literature. But researchers are increasingly citing blog posts. Indeed, some of our blog posts have been cited in published academic literature.

There are plenty of guides out there for citing blog posts. You may like to refer to them for specific formatting styles. Cite This For Me is a useful tool for generating references in a variety of styles. Here I’d like to provide a few specific recommendations for citing posts from this blog.

1. Cite the author

Our blog posts are written by lots of different authors, not by ‘the blog’. The author’s name – assuming they have not claimed anonymity – will appear at the top of the blog post. Let’s take a recent example. To start with, your citation should look something like:

Watson, S. (2017). Variations in NHS admissions at a glance. The Academic Health Economists’ Blog. Available at: [Accessed 8 Mar. 2017].

2. Use our ISSN

As of this week, the blog has its own International Standard Serial Number (ISSN). This number uniquely identifies and distinguishes the blog. Our ISSN is 2514-3441. You can find it at the bottom of the sidebar and on our About page. So your citation could become:

Watson, S. (2017). Variations in NHS admissions at a glance. The Academic Health Economists’ Blog (ISSN 2514-3441). Available at: [Accessed 8 Mar. 2017].

3. Use WebCite

Unlike journal articles, websites can change. One of our authors could (in principle) completely change the content of their blog post after publishing it. More importantly, it is possible that our URLs may change in the future. If this were to happen, the link in the reference above would become dead and the citation would not be useful to readers. What needs to be cited, therefore, is the blog post as it was at the time at which you accessed it. Enter WebCite. WebCite is a service that archives a webpage and provides a permanent link for citation. This can be achieved by completing an archiving form. Our citation becomes:

Watson, S. (2017). Variations in NHS admissions at a glance. The Academic Health Economists’ Blog (ISSN 2514-3441). Available at: [Accessed 8 Mar. 2017]. (Archived by WebCite® at

4. Check the comments

Finally, authors may choose to subsequently publish their blog post elsewhere in another format or to upload it to a service such as figshare in order to obtain a DOI. Check the comments below a blog post to see if this is the case as there may be an alternative source that you might prefer to cite.

But as ever, if you’re struggling, get in touch.



Sam Watson’s journal round-up for 6th March 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

It’s good to be first: order bias in reading and citing NBER working papers. The Review of Economics and Statistics [RePEc] Published 23rd February 2017

Each week one of the authors at this blog chooses three or four recently published studies to summarise and briefly discuss. Making this choice from the many thousands of articles published every week can be difficult. I browse those journals that publish in my area and search recently published economics papers on PubMed and EconLit for titles that pique my interest. But this strategy is not without its flaws, as this study aptly demonstrates. When choosing among many alternatives, people are typically presented not with an unordered set of choices but with a list. This arises in healthcare as well. In an effort to promote competition, at least in the UK, patients are presented with a list of possible providers and some basic information about those providers. We recently covered a paper that explored this expansion of choice ‘sets’ and investigated its effects on quality. We have previously criticised the use of such lists. People often skim these lists, relying on simple heuristics to make choices. This article shows that, for the weekly email of new papers published by the National Bureau of Economic Research (NBER), being listed first leads to an increase of approximately 30% in downloads and citations, despite the essentially random ordering of the list. This is certainly not the first study to illustrate the biases in human decision making, but it shows both that this journal round-up may not be a fair reflection of the literature, and that providing more information about healthcare providers may not have the impact on quality that might be hypothesised.

Economic conditions, illicit drug use, and substance use disorders in the United States. Journal of Health Economics [PubMed] Published March 2017

We have featured a large number of papers on this blog about the relationship between macroeconomic conditions and health and health-related behaviours. It is certainly one of the health economic issues du jour and one we have discussed in detail. Generally speaking, at an aggregate level, such as countries or states, all-cause mortality appears to be pro-cyclical: it declines in economic downturns. An examination at the individual or household level, by contrast, suggests that unemployment and reduced income are generally bad for health. It is certainly possible to reconcile these two effects, as any discussion of Simpson’s paradox will reveal. This study takes the aggregate approach, looking at US state-level unemployment rates and their relationship with drug use. It’s relevant to the discussion around economic conditions and health; the US has seen soaring rates of opiate-related deaths recently, although whether this is linked to the prevailing economic conditions remains to be seen. Unfortunately, this paper predicates much of its discussion of whether there is an effect on whether the results were statistically significant, a gripe we’ve contended with previously. And there are no corrections for multiple comparisons, despite the well over 100 hypothesis tests that are conducted. That aside, the authors conclude that the evidence suggests that use of ecstasy and heroin is pro-cyclical with respect to unemployment (i.e. use increases with greater unemployment) and that LSD, crack cocaine, and cocaine use is counter-cyclical. The results appear robust to the model specifications they compare, but I find it hard to reconcile some of the findings with prior information about how people actually consume drugs. Many drugs are substitutes and/or complements for one another. For example, many heroin users began using opiates through abuse of prescription drugs such as oxycodone but made the switch because heroin is generally much cheaper. Alcohol and marijuana have been shown to be substitutes for one another. All of this suggests a lack of independence between the different outcomes considered. People may also lose their job because of drug use. Taken altogether, I remain a little sceptical of the conclusions from the study, but it is nevertheless an interesting and timely piece of research.
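
To illustrate why the lack of correction matters (a generic sketch, not code or data from the paper under discussion), a Bonferroni adjustment simply multiplies each p-value by the number of tests performed, capped at one:

```python
# Illustrative Bonferroni adjustment; the p-values below are invented
# and do not come from the study discussed above.

def bonferroni(p_values):
    """Return Bonferroni-adjusted p-values: each raw p-value is
    multiplied by the number of tests, capped at 1."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Five hypothetical raw p-values from a battery of tests:
raw = [0.004, 0.03, 0.04, 0.20, 0.60]
print(bonferroni(raw))
```

With only five tests, a raw p-value of 0.04 is already borderline after adjustment; scale the same arithmetic up to more than 100 tests and even a raw p-value of 0.01 becomes unremarkable, which is why unadjusted "significance" across so many outcomes should be read cautiously.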

Child-to-adult neurodevelopmental and mental health trajectories after early life deprivation: the young adult follow-up of the longitudinal English and Romanian Adoptees study. The Lancet [PubMed] Published 22nd February 2017

Does early life deprivation lead to later life mental health issues? This is a question that is difficult to answer with observational data. Children from deprived backgrounds may be predisposed to mental health issues, perhaps through familial inheritance. To attempt to discern whether deprivation in early life is a cause of mental health issues, this paper uses data derived from a cohort of Romanian children who spent time in one of the terribly deprived institutions of Ceaușescu’s Romania and who were later adopted by British families. These institutions were characterised by poor hygiene, inadequate food, and a lack of social or educational stimulation. A cohort of British adoptees was used for comparison. For children who spent more than six months in one of the deprived institutions, there was a large increase in cognitive and social problems in later life compared with either British adoptees or those who spent less than six months in an institution. The evidence is convincing, with differences displayed across multiple dimensions of mental health and a clear causal mechanism by which deprivation acts. However, for this and many other studies that I write about on this blog, a disclaimer might be needed when there is significant (pun intended) abuse and misuse of p-values. Ziliak and McCloskey’s damning diatribe on p-values, The Cult of Statistical Significance, presents examples of lists of p-values being given completely out of context, with no reference to the model or hypothesis test from which they are derived, and with the implication that they represent whether an effect exists or not. This study does just that. I’ll leave you with this extract from the abstract:

Cognitive impairment in the group who spent more than 6 months in an institution remitted from markedly higher rates at ages 6 years (p=0·0001) and 11 years (p=0·0016) compared with UK controls, to normal rates at young adulthood (p=0·76). By contrast, self-rated emotional symptoms showed a late onset pattern with minimal differences versus UK controls at ages 11 years (p=0·0449) and 15 years (p=0·17), and then marked increases by young adulthood (p=0·0005), with similar effects seen for parent ratings. The high deprivation group also had a higher proportion of people with low educational achievement (p=0·0195), unemployment (p=0·0124), and mental health service use (p=0·0120, p=0·0032, and p=0·0003 for use when aged <11 years, 11–14 years, and 15–23 years, respectively) than the UK control group.


Alastair Canaway’s journal round-up for 20th February 2017

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The estimation and inclusion of presenteeism costs in applied economic evaluation: a systematic review. Value in Health Published 30th January 2017

Presenteeism is one of those issues that you hear about from time to time but rarely see addressed within economic evaluations. For those who haven’t come across it before, presenteeism refers to being at work but not working at full capacity, for example, due to your health limiting your ability to work. The literature suggests that presenteeism can have large associated costs which could significantly impact economic evaluations, and so it should be considered; in practice, these impacts are rarely captured. This paper sought to identify studies where presenteeism costs were included, to examine how valuation was approached, and to gauge the impact of including presenteeism on costs. The review included cost-of-illness studies as well as economic evaluations; just 28 papers had attempted to capture the costs of presenteeism, across a wide variety of disease areas. A range of methods was used. Across all studies, presenteeism costs accounted for 52% (range: 19%–85%) of the total costs relating to the intervention and disease. This is a vast proportion and significantly outweighed absenteeism costs. Presenteeism is clearly a significant issue, yet it is widely ignored within economic evaluation. This may be due in part to the health and social care perspective advised within the NICE reference case, compounded by the lack of guidance on how to measure and value productivity costs. Should an economic evaluation pursue a societal perspective, the findings suggest that capturing and valuing presenteeism costs should be a priority.

Priority to end of life treatments? Views of the public in the Netherlands. Value in Health Published 5th January 2017

Everybody dies, and thus end of life care is probably something that we should all have at least a passing interest in. The end of life context is an incredibly tricky research area, with methodological pitfalls at every turn. End of life care is often seen as ‘different’ to other care, and this is reflected in NICE having supplementary guidance for the appraisal of end of life interventions. Similarly, in the Netherlands, treatments that do not meet typical cost per QALY thresholds may be provided should public support be sufficient. There is, however, a dearth of such evidence, and this paper sought to elucidate the issue using the novel Q methodology. Three primary viewpoints emerged: 1) Access to healthcare as a human right – all have equal rights regardless of setting, that is, nobody is more important. This first group appeared to reject the notion of scarce resources when it comes to health: ‘you can’t put a price on life’. 2) The second group focussed on providing the ‘right’ care for those with terminal illness and emphasised that quality of life should be respected and unnecessary care at end of life avoided. This group did not place great importance on cost-effectiveness but did acknowledge that costly treatments at end of life might not be the best use of money. 3) Finally, the third group felt there should be a focus on care which is effective and efficient, that is, those treatments which generate the most health should be prioritised. There was a consensus across all three groups that the ultimate goal of the health system is to generate the greatest overall health benefit for the population. This rejects the notion that priority should be given to those at end of life, and the study concludes that across the three groups there was minimal support for treating the terminally ill with priority.

Methodological issues surrounding the use of baseline health-related quality of life data to inform trial-based economic evaluations of interventions within emergency and critical care settings: a systematic literature review. PharmacoEconomics [PubMed] Published 6th January 2017

Catchy title. Conducting research within emergency and critical care settings presents a number of unique challenges. For the health economist seeking to conduct a trial-based economic evaluation, one such issue relates to the calculation of QALYs. To calculate QALYs within a trial, baseline and follow-up data are required. For obvious reasons – severe and acute injuries or illness, unplanned admission – collecting baseline data on those entering emergency and critical care is problematic. Even when patients are conscious, there are ethical issues surrounding collecting baseline data in this setting; the example used relates to somebody being conscious after cardiac arrest – is it appropriate to ask them to complete HRQL questionnaires? Probably not. Various methods have been used to circumvent this issue; this paper sought to systematically review the methods that have been used and to provide guidance for future studies. Just 19 studies made it through screening, highlighting the difficulty of research in this context. Only one study prospectively collected baseline HRQL data, and this was restricted to patients in a non-life-threatening state. Four different strategies were adopted in the remaining papers. Eight studies adopted a fixed health utility for all participants at baseline; four used only the available data, that is, from the first time point at which HRQL was measured; one asked patients to retrospectively recall their baseline state; and one used Delphi methods to derive EQ-5D states from experts. The paper examines the implications and limitations of adopting each of these strategies. The key finding seems to relate to whether or not the trial arms are balanced with respect to HRQL at baseline. This obviously isn’t observed, so the authors suggest trial covariates should instead be used to explore this, with adjustments made where applicable. If, and that’s a big if, trial arms are balanced, then all four of the suggested methods should give similar answers. It seems the key here is the randomisation; however, even the best randomisation techniques do not always lead to balanced arms, and there is no guarantee of baseline balance. The authors conclude that trials should aim to make an initial assessment of HRQL at the earliest opportunity and that further research is required to thoroughly examine how the different approaches will impact cost-effectiveness results.
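
To make the mechanics concrete, here is a minimal sketch (with entirely hypothetical utility values, not taken from any of the reviewed studies) of how the fixed-baseline-utility strategy feeds into a trial-based QALY calculation, where QALYs are the area under the utility-time curve:

```python
# QALYs as area under the utility-time curve (trapezoidal rule).
# All utility values below are invented for illustration only.

def qalys(times_years, utilities):
    """Area under the utility-time curve via the trapezoidal rule."""
    total = 0.0
    for i in range(len(times_years) - 1):
        width = times_years[i + 1] - times_years[i]
        mean_height = (utilities[i] + utilities[i + 1]) / 2
        total += width * mean_height
    return total

# Strategy: assign every patient the same fixed baseline utility (0.30,
# an assumption, not an observed value), then use measured follow-up
# values at 3, 6, and 12 months.
times = [0.0, 0.25, 0.5, 1.0]
utils = [0.30, 0.55, 0.70, 0.75]
print(round(qalys(times, utils), 3))  # 0.625
```

Note how the entire first segment of the curve rests on the assumed baseline value, which is exactly why unobserved baseline imbalance between trial arms, if present, would bias the between-arm QALY comparison.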