Fascinating to see that the publication of the Dr Foster Hospital Guide created a bit of a stir in the press yesterday.
Getting a perspective on hospital care. Or perhaps not.
Dr Foster is one of two companies, rapidly being joined by a third, that lead the way in using publicly available data to compare hospital performance.
The main rival of Dr Foster is CHKS. The two are constantly sparring about their differences, above all over how each calculates mortality.
In fact, both companies tend to fixate on death rates, as did most of yesterday’s press comment. They seem to regard them as the single most important measure of hospital quality, which is curious given that there are large areas of hospital care where death is so rare that mortality becomes a useless measure: maternity, for example, accounts for a fairly large proportion of hospital work – not far off 9% according to English public data – yet the death rate is just 0.0045%.
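To see how little signal a death rate that low can carry, a back-of-envelope calculation helps. The 5,000 deliveries a year below is an invented figure for a reasonably busy unit, not something drawn from the data; the death rate is the one quoted above:

```python
# Back-of-envelope illustration: at a death rate of 0.0045%, even a busy
# maternity unit expects only a fraction of a death per year, so mortality
# cannot discriminate between good and bad units.
# The 5,000 deliveries per year is an invented, illustrative figure.
deliveries_per_year = 5000
death_rate = 0.0045 / 100          # 0.0045% expressed as a proportion
expected_deaths = deliveries_per_year * death_rate
print(expected_deaths)             # ~0.23 expected deaths a year
```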
It’s obviously important to be aware of deaths in childbirth, but a serious measure of quality would have to cover outcomes that affect a far higher proportion of cases. It would need to look at suffering, of mother or child, at the extent to which intervention is required – episiotomies, forceps deliveries, caesareans – and at other poor outcomes short of death, such as haemorrhaging after delivery.
But still the debate seems to focus above all on comparative death rates. And the controversy rages around which measure of mortality to use. Should it be Dr Foster’s Hospital Standardised Mortality Ratio? Or CHKS’s Risk Adjusted Mortality Index? Or perhaps the government’s own Summary Hospital-level Mortality Indicator?
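For what it’s worth, all three are variants of the same basic idea: count the deaths that actually occurred, estimate how many would have been expected given the hospital’s case mix, and express the first as a percentage of the second, with 100 meaning roughly average. Here is a minimal sketch of that common core – the data fields and the placeholder risk model are invented for illustration, and the real indices differ precisely in which admissions they count and how they model the expected risk:

```python
# Minimal sketch of a standardised mortality ratio calculation.
# The data fields and risk model below are invented for illustration;
# HSMR, RAMI and SHMI each differ in which admissions and deaths they
# include and in how the expected risk of death is estimated.

def expected_risk(admission):
    """Placeholder for a case-mix model. In practice this would be a
    regression fitted on national data using age, diagnosis group,
    comorbidities, admission method and so on."""
    return admission["predicted_death_probability"]

def standardised_mortality_ratio(admissions):
    observed = sum(1 for a in admissions if a["died"])
    expected = sum(expected_risk(a) for a in admissions)
    return 100.0 * observed / expected  # 100 = in line with expectation

# Toy example: two deaths where the case mix predicted well under one.
admissions = [
    {"died": True,  "predicted_death_probability": 0.30},
    {"died": False, "predicted_death_probability": 0.02},
    {"died": True,  "predicted_death_probability": 0.10},
    {"died": False, "predicted_death_probability": 0.01},
]
print(standardised_mortality_ratio(admissions))  # ~465, far above 100
```

The whole argument between the vendors, in other words, is about what goes into the expected figure, not about the arithmetic itself.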
More to the point, who outside a handful of statisticians knows the difference or cares very much?
Despite all that, many commentators regard the debate as important. So, let’s consider an aspect of it which at least has the merit of raising some intriguing questions about the main parties involved.
One of the hospitals that doesn’t do terribly well in the Dr Foster hospital mortality comparisons is University Hospitals Birmingham. In particular, it has an unfortunately high value for an index measuring deaths among patients at low risk. Deaths among patients who shouldn’t be in danger are especially bad news.
So it comes as no surprise that the Medical Director at the hospital hit back. He was quoted by the Guardian as saying that ‘Dr Foster frequently changes the methodology of the [Hospital Standardised Mortality Ratio], which, in our opinion, […] reduces its credibility as a comparator.’ On the low-risk deaths specifically, he pointed out that Dr Foster were including a condition for which the death rate is around 50%, which it’s hard to classify as low.
He should know about hospital comparisons, because the third and newest player in this field is a system developed, strangely enough, at University Hospitals Birmingham. The hospital calls the system HED and has taken it to market in collaboration with PricewaterhouseCoopers, the giant international consultancy. Together they are making significant inroads into the marketplace.
Which means there’s little love lost between University Hospitals Birmingham and Dr Foster or, for that matter, CHKS. That makes it amusing that the hospital does so poorly in the Dr Foster comparisons – amusing but no coincidence, since it was precisely out of its rejection of the Dr Foster methodology that the Birmingham hospital first got in on the act of providing its own hospital comparisons.
So what can we say about yesterday’s exercise and the Good Hospital Guide? It certainly tells us something about how well English hospitals are performing or, at any rate, under-performing. Particularly on the mortality measure.
However, it really doesn’t tell us anything like as much as more sophisticated comparisons would – comparisons that take account of the characteristics of different types of care.
On the other hand, it does offer some entertaining insights into the bad blood between the different players in the field.