Analysing Patient-Level Data using HES Workshop

This intensive workshop introduces participants to HES (Hospital Episode Statistics) data and to handling and manipulating these very large patient-level data sets using computer software. Understanding and interpreting the data is a key first step towards using them in economic evaluation or in evaluating health care policy and practice. Participants will engage in lectures and problem-solving exercises, analysing the data in highly interactive sessions. Data manipulation and statistical analysis will be taught and demonstrated using Stata.

This workshop is offered to people in the academic, public and commercial sectors. It is useful for analysts who wish to harness the power of HES non-randomised, episode-level patient data to shed further light on such things as patient costs and pathways, re-admissions, outcomes, and provider performance. The workshop is suitable for individuals working in NHS hospitals, commissioning organisations, NHS England, Monitor, and the Department of Health and Social Care, as well as those in pharmaceutical or consultancy companies, health care researchers, and PhD students. Overseas participants may find the tuition helpful for work on their own country’s data, but should note that the course is heavily oriented towards understanding HES data for England.

The workshop fee is £900 for the public sector and £1,400 for the commercial sector. This includes all tuition, course materials, lunches, the welcome drinks reception, the workshop dinner and refreshments, but does not include accommodation.

Online registration is now open; further information and registration are available at: https://www.york.ac.uk/che/courses/patient-data/

Subsidised places are available for full-time PhD students. If this applies to you, please email the workshop administrators to request an Application Form.

Contact: Gillian or Louise, Workshop Administrators, at: che-apd@york.ac.uk; tel: +44 (0)1904 321436

Chris Sampson’s journal round-up for 5th March 2018

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Healthy working days: the (positive) effect of work effort on occupational health from a human capital approach. Social Science & Medicine Published 28th February 2018

If you look at the literature on the determinants of subjective well-being (or happiness), you’ll see that unemployment is often cited as having a big negative impact. The same sometimes applies for its impact on health, but here – of course – the causality is difficult to tease apart. Then, in research that digs deeper, looking at hours worked and different types of jobs, we see less conclusive results. In this paper, the authors start by asserting that the standard approach in labour economics (on which I’m not qualified to comment) is to assume that there is a negative association between work effort and health. This study extends the framework by allowing for positive effects of work that are related to individuals’ characteristics and working conditions, with health determined in a Grossman-style model of health capital in which work effort enters the rate of health depreciation. This model is used to examine health as a function of work effort (as indicated by hours worked) in a single wave of the European Working Conditions Survey (EWCS) from 2010 for 15 EU member states. Key items from the EWCS included in this study are questions such as “does your work affect your health or not?”, “how is your health in general?”, and “how many hours do you usually work per week?”. Working conditions are taken into account by looking at data on shift working and the need to wear protective equipment. One of the main findings of the study is that – with good working conditions – greater work effort can improve health. The Marxist in me is not very satisfied with this. We need to ask the question: compared to what? Working fewer hours? For most people, that simply isn’t an option. Aren’t the people who work fewer hours the people who can afford to work fewer hours? No attention is given to the sociological aspects of employment, which are clearly important. The study also shows that overworking or having poorer working conditions reduces health. We also see that, for many groups, longer hours do not negatively impact on health until we reach around 120 hours a week. This fails a good sense check. Who are these people?! I’d be very interested to see whether these findings hold for academics. That the key variables are self-reported undermines the conclusions somewhat, as we can expect people to adjust their expectations about work effort and health in line with those of their colleagues. It would be very difficult to avoid a type II error (with respect to the negative impact of effort on health) using these variables to represent health and the role of work effort.
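For readers unfamiliar with the setup, a minimal sketch of the kind of health capital dynamics being extended may help – the notation below is mine, not the paper’s:

```latex
% Grossman-style law of motion for health capital, sketched from the
% description above. Notation is illustrative, not the paper's.
% H_t: health capital; I_t: health investment; e_t: work effort;
% w_t: working conditions; \delta: depreciation rate.
\[
  H_{t+1} = H_t \bigl( 1 - \delta(e_t, w_t) \bigr) + I_t
\]
% The paper's extension amounts to letting effort slow depreciation under
% good working conditions and accelerate it under poor ones:
\[
  \frac{\partial \delta}{\partial e_t} < 0 \ \text{(good conditions)},
  \qquad
  \frac{\partial \delta}{\partial e_t} > 0 \ \text{(poor conditions)}
\]
```

On this reading, ‘healthy working days’ are simply those in which effort slows, rather than accelerates, the depreciation of health capital.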

Agreement between retrospectively and contemporaneously collected patient-reported outcome measures (PROMs) in hip and knee replacement patients. Quality of Life Research [PubMed] Published 26th February 2018

The use of patient-reported outcome measures (PROMs) in elective care in the NHS has been a boon for researchers in our field, providing before-and-after measurement of health-related quality of life so that we can look at the impact of these interventions. But we can’t do this in emergency care because the ‘before’ is never observed – people only show up when they’re in the middle of the emergency. But what if people could accurately recall their pre-emergency health state? There’s some evidence to suggest that people can, so long as the recall period is short. This study looks at NHS PROMs data (n=443), with generic and condition-specific outcomes collected from patients having hip or knee replacements. Patients included in the study were additionally asked to recall their health state 4 weeks prior to surgery. The authors assess the extent to which the contemporary PROM measurements agree with the retrospective measurements, and the extent to which any disagreement relates to age, socioeconomic status, or the length of time to recall. There wasn’t much difference between contemporary and retrospective measurements, though patients reported slightly lower health on the retrospective questionnaires. And there weren’t any compelling differences associated with age, socioeconomic status, or the length of recall. These findings are promising, suggesting that we might be able to rely on retrospective PROMs. But the elective surgery context is very different to the emergency context, and I don’t think we can expect the two types of health care to impact recollection in the same way. In this study, responses may also have been influenced by participants’ memories of completing the contemporary questionnaire, and the recall period was very short. But the only way to find out more about the validity of retrospective PROM collection is to do more of it, so hopefully we’ll see more studies asking this question.
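For anyone wanting to run this kind of agreement check themselves, here is a minimal sketch of one common approach (Bland-Altman limits of agreement). The scores are invented for illustration, and the paper’s actual methods may differ:

```python
# Minimal Bland-Altman sketch: agreement between contemporaneously and
# retrospectively collected PROM index scores. All scores are invented.
import numpy as np

contemporary = np.array([0.52, 0.31, 0.69, 0.44, 0.20, 0.61])   # hypothetical index scores
retrospective = np.array([0.48, 0.29, 0.66, 0.40, 0.16, 0.60])  # same patients, recalled later

diff = retrospective - contemporary
bias = diff.mean()                 # mean difference: systematic over/under-reporting
spread = 1.96 * diff.std(ddof=1)   # half-width of the 95% limits of agreement

print(f"bias = {bias:.3f}")
print(f"95% limits of agreement: ({bias - spread:.3f}, {bias + spread:.3f})")
```

A negative bias here would correspond to the paper’s finding that patients reported slightly lower health retrospectively.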

Adaptation or recovery after health shocks? Evidence using subjective and objective health measures. Health Economics [PubMed] Published 26th February 2018

People’s expectations about their health can influence their behaviour and determine their future health, so it’s important that we understand people’s expectations and any ways in which they diverge from reality. This paper considers the effect of a health shock on people’s expectations about how long they will live. The authors focus on survival probability, measured objectively (i.e. what actually happens to these patients) and subjectively (i.e. what the patients expect), and the extent to which the latter corresponds to the former. The arguments presented are couched within the concept of hedonic adaptation. So the question is – if post-shock expectations return to pre-shock expectations after a period of time – whether this is because people are recovering from the disease or because they are moving their reference point. Data are drawn from the Health and Retirement Study. Subjective survival probability is rescaled to the probability of surviving for 2 years. Cancer, stroke, and myocardial infarction are the health shocks used. The analysis uses lagged regression models, estimated separately for each of the three diagnoses, with objective and subjective survival probability in turn as the dependent variable. There’s a bit of a jumble of things going on in this paper, with discussions of adaptation, survival, self-assessed health, optimism, and health behaviours, so it’s a bit difficult to see the wood for the trees. But the authors find the effect they’re looking for. Objective survival probability is negatively affected by a health shock, as is subjective survival probability. But then subjective survival starts to return to pre-shock trends, whereas objective survival does not. The authors use this finding to suggest that there is adaptation. I’m not sure about this interpretation. To me it seems as if subjective life expectancy is only weakly responsive to changes in objective life expectancy. The findings seem to have more to do with how people process information about their probability of survival than with how they adapt to a situation. So while this is an interesting study about how people process changes in survival probability, I’m not sure what it has to do with adaptation.
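A rough sketch of what such a lagged specification might look like – again, the notation is mine and not necessarily the authors’ exact model:

```latex
% Illustrative lagged regression for survival probability around a health
% shock; S is objective or subjective survival probability in separate
% models, Shock indicates a new diagnosis, X are controls.
\[
  S_{it} = \alpha + \rho\, S_{i,t-1} + \beta\, \mathit{Shock}_{it}
         + \gamma' X_{it} + \varepsilon_{it}
\]
% Adaptation vs recovery is then read off the post-shock path: subjective
% S drifts back towards its pre-shock trend while objective S does not.
```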

3L, 5L, what the L? A NICE conundrum. PharmacoEconomics [PubMed] Published 26th February 2018

In my last round-up, I said I was going to write a follow-up blog post to an editorial on the EQ-5D-5L. I didn’t get round to it, but that’s probably for the best, as there has since been a flurry of other editorials and commentaries on the subject. Here’s one of them. This commentary considers the perspective of NICE in deciding whether to support the use of the EQ-5D-5L and its English value set. The authors point out the differences between the 3L and 5L, namely in the descriptive systems and the value sets. Examples of the 5L descriptive system’s advantages are provided: a reduced ceiling effect, reduced clustering, better discriminative ability, and the benefits of doing away with the ‘confined to bed’ level of the mobility domain. Great! On to the value set. There are lots of differences here, with 3 main causes: the data, the preference elicitation methods, and the modelling methods. We can’t immediately determine whether these differences are improvements or not. The authors stress the point that any differences observed will be in large part due to quirks in the original 3L value set rather than in the 5L value set. Nevertheless, the commentary is broadly supportive of a cautious approach to 5L adoption. I’m not. Time for that follow-up blog post.
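For anyone new to the 3L/5L distinction, the mechanics of a value set are easy to sketch: a profile on the descriptive system is converted into an index score by applying preference-based decrements. The decrements below are invented placeholders, not the actual 3L or 5L tariffs:

```python
# Illustrative only: how a value set turns an EQ-5D-style profile into an
# index score. The decrements are invented placeholders, NOT the actual
# 3L or 5L value sets.
HYPOTHETICAL_DECREMENTS = {
    "mobility":           {1: 0.00, 2: 0.04, 3: 0.08, 4: 0.16, 5: 0.28},
    "self_care":          {1: 0.00, 2: 0.03, 3: 0.07, 4: 0.14, 5: 0.22},
    "usual_activities":   {1: 0.00, 2: 0.03, 3: 0.06, 4: 0.12, 5: 0.18},
    "pain_discomfort":    {1: 0.00, 2: 0.05, 3: 0.09, 4: 0.18, 5: 0.30},
    "anxiety_depression": {1: 0.00, 2: 0.05, 3: 0.09, 4: 0.17, 5: 0.28},
}

def index_score(profile):
    """Full health (1.0) minus the decrement for each reported level."""
    return 1.0 - sum(HYPOTHETICAL_DECREMENTS[dim][lvl] for dim, lvl in profile.items())

# A slight-problems profile (level 2 on every dimension):
print(index_score({dim: 2 for dim in HYPOTHETICAL_DECREMENTS}))  # 0.80 with these placeholders
```

The point of contention is not this mechanism but where the numbers come from: different data, elicitation methods, and modelling choices produce different decrements, and hence different QALY estimates from the same patients.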


Chris Sampson’s journal round-up for 25th September 2017

Every Monday our authors provide a round-up of some of the most recently published peer-reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Good practices for real‐world data studies of treatment and/or comparative effectiveness: recommendations from the Joint ISPOR‐ISPE Special Task Force on Real‐World Evidence in Health Care Decision Making. Value in Health Published 15th September 2017

I have an instinctive mistrust of buzzwords. They’re often used to avoid properly defining something, either because it’s too complicated or – worse – because it isn’t worth defining in the first place. For me, ‘real-world evidence’ falls foul. If your evidence isn’t from the real world, then it isn’t evidence at all. But I do like a good old ISPOR Task Force report, so let’s see where this takes us. Real-world evidence (RWE) and its sibling buzzword real-world data (RWD) relate to observational studies and other data not collected in an experimental setting. The purpose of this ISPOR task force (joint with the International Society for Pharmacoepidemiology) was to prepare some guidelines about the conduct of RWE/RWD studies, with a view to improving decision-makers’ confidence in them. Essentially, the hope is to try to create for RWE the kind of ecosystem that exists around RCTs, with procedures for study registration, protocols, and publication: a noble aim. The authors distinguish between 2 types of RWD study: ‘Exploratory Treatment Effectiveness Studies’ and ‘Hypothesis Evaluating Treatment Effectiveness Studies’. The idea is that the latter test a priori hypotheses, and these are the focus of this report. Seven recommendations are presented: i) pre-specify the hypotheses, ii) publish a study protocol, iii) publish the study with reference to the protocol, iv) enable replication, v) test hypotheses on a separate dataset from the one used to generate them, vi) publicly address methodological criticisms, and vii) involve key stakeholders. Fair enough. But these are just good practices for research generally. It isn’t clear how they are in any way specific to RWE. Of course, that was always going to be the case. RWE-specific recommendations would be entirely contingent on whether or not one chose to define a study as using ‘real-world evidence’ (which you shouldn’t, because it’s meaningless). The authors are trying to fit a bag of square pegs into a hole of undefined shape. It isn’t clear to me why retrospective observational studies, prospective observational studies, registry studies, or analyses of routinely collected clinical data should all be treated the same, yet differently to randomised trials. Maybe someone can explain why I’m mistaken, but this report didn’t do it.

Are children rational decision makers when they are asked to value their own health? A contingent valuation study conducted with children and their parents. Health Economics [PubMed] [RePEc] Published 13th September 2017

Obtaining health state utility values for children presents all sorts of interesting practical and theoretical problems, especially if we want to use them in decisions about trade-offs with adults. For this study, the researchers conducted a contingent valuation exercise to elicit the preferences of children (aged 7-19) for reduced risk of asthma attacks, in terms of willingness to pay. The study was informed by two preceding studies that sought to identify the best way in which to present health risk and financial information to children. The participating children (n=370) completed questionnaires at school, which asked about socio-demographics, experience of asthma, risk behaviours, and altruism. They were reminded (in child-friendly language) about the idea of opportunity cost, and told to consider their own budget constraint. Baseline asthma attack risk and 3 risk-reduction scenarios were presented graphically. Two weeks later, the parents completed similar questionnaires. Only 9% of children were unwilling to pay for risk reduction, and most of those said that it was the mayor’s problem! In some senses, the children did a better job than their parents. The authors conducted 3 tests for ‘incorrect’ responses – 14% of adults failed at least one, while only 4% of children did so. Older children demonstrated better scope sensitivity. Of course, children’s willingness to pay was much lower in absolute terms than their parents’, because children have a much smaller budget. As a percentage of the budget, parents were – on average – willing to pay more than children. That seems reassuringly predictable. Boys and fathers were willing to pay more than girls and mothers. Having experience of frequent asthma attacks increased willingness to pay. Interestingly, teenagers were willing to pay less (as a proportion of their budget) than younger children… and so were the teenagers’ parents! Children’s willingness to pay was correlated with their own parents’ at the higher risk reductions, but not at the lowest. This study reports lots of interesting findings and opens up plenty of avenues for future research. But the take-home message is obvious. Kids are smart. We should spend more time asking them what they think.
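To make the budget-share comparison concrete, here is a trivial sketch – the amounts are invented, chosen only so that the parent’s budget share comes out higher, as the paper found on average:

```python
# Hypothetical illustration of comparing willingness to pay (WTP) as a share
# of the respondent's budget, as the paper does. All amounts are invented.
respondents = {
    "child":  {"wtp": 1.00, "weekly_budget": 10.00},
    "parent": {"wtp": 60.00, "weekly_budget": 400.00},
}

for who, r in respondents.items():
    share = r["wtp"] / r["weekly_budget"]
    print(f"{who}: WTP = {r['wtp']:.2f}, budget share = {share:.1%}")
# child: 10.0% of budget; parent: 15.0% - a higher share, despite the
# child's WTP being far lower in absolute terms.
```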

Journal of Patient-Reported Outcomes: aims and scope. Journal of Patient-Reported Outcomes Published 12th September 2017

Here we have a new journal that warrants a mention. The journal is sponsored by the International Society for Quality of Life Research (ISOQOL), making it a sister journal of Quality of Life Research. One of its Co-Editors-in-Chief is the venerable David Feeny, of HUI fame. They’ll be looking to publish research using PRO(M) data from trials or routine settings, studies of the determinants of PROs, qualitative studies in the development of PROs; anything PRO-related, really. This could be a good journal for more thorough reporting of PRO data that can get squeezed out of a study’s primary outcome paper. Also, “JPRO” is fun to say. The editors don’t mention that the journal is open access, but the website states that it is, so APCs at the ready. ISOQOL members get a discount.

Research and development spending to bring a single cancer drug to market and revenues after approval. JAMA Internal Medicine [PubMed] Published 11th September 2017

We often hear that new drugs are expensive because they’re really expensive to develop. Then we hear about how much money pharmaceutical companies spend on marketing, and we baulk. The problem is, pharmaceutical companies aren’t forthcoming with their accounts, so researchers have to come up with more creative ways to estimate R&D spending. Previous studies have reported divergent estimates. Whether R&D costs ‘justify’ high prices remains an open question. For this study, the authors looked at public data from the US for 10 companies that had only one cancer drug approved by the FDA between 2007 and 2016. Not very representative, perhaps, but useful because it allows for the isolation of the development costs associated with a single drug reaching the market. The median time for drug development was 7.3 years. The most generous estimate of the mean cost of development came in at under a billion dollars; substantially less than some previous estimates. This looks like a bargain; the mean revenue for the 10 companies up to December 2016 was over $6.5 billion. This study may seem a bit back-of-the-envelope in nature. But that doesn’t mean it isn’t accurate. If anything, it warrants more confidence than some previous studies, because the methods are entirely transparent.
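The headline arithmetic can be done on one line, using the round-up’s own figures (mean development cost under $1bn; mean post-approval revenue over $6.5bn):

```python
# Back-of-the-envelope check using the figures cited above; both are
# rounded bounds, so the ratio is a floor rather than a point estimate.
mean_dev_cost_bn = 1.0   # "under a billion dollars" (generous upper bound)
mean_revenue_bn = 6.5    # "over $6.5 billion" to December 2016 (lower bound)

print(f"revenue is at least {mean_revenue_bn / mean_dev_cost_bn:.1f}x development cost")
```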
