Friday, 4 December 2009

The dirty dozen

A great storm in a teacup last weekend. We were glued to our TVs and our newspapers. Well, perhaps not glued. Fascinated. Maybe that’s overstating it too. The stories caught our attention at least. And we talked about them. A lot. For an hour or two.

The subject: the ‘dirty dozen’ most underperforming hospitals in England.

The report that identified these citadels of shame came from Dr Foster Intelligence (DFI), a company that established its credentials a few months ago when it denounced Mid Staffs NHS Trust as an organisation that had chalked up 400 more deaths in three years than one would expect. Hospital leaders resigned. Auditors audited. Journalists journalised. All very exciting. Particularly as it’s my local hospital and a good friend works there.

So when DFI spoke out again last week, obviously they got an audience. And this member of the audience took a look at their website and at the methodology they’d used.

They had judged English hospitals on the basis of fifteen indicators. Most hospitals have 20 to 30 specialties, so the report didn’t even use one indicator per specialty. Four of their indicators concerned mortality – but only mortality following specific types of case. Six concerned readmissions, i.e. when a patient has to return to hospital to fix problems that emerged after discharge – but again, the study looked only at certain types of readmission. And the other five indicators were all very narrow – for example, the surgical technique used for gall bladder removals (and no other surgical technique at all).

A bit limited, you might feel. To give you an idea of how limited: if you were going to evaluate just one specialty, obstetrics – one of the biggest, since delivering babies is a major part of the work of hospitals in the West – you’d want to look at least at caesarean rates, episiotomy rates, perineal tears, forceps deliveries and post-partum haemorrhages. Five indicators. As a minimum. And you’d want to adjust them for risk – after all, if you’re treating high-risk patients, you would expect a high rate of difficulties. Failing to take risk into account just penalises those hospitals with the guts to take on the toughest cases.
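
To make the risk-adjustment point concrete, here’s a minimal sketch in Python – invented figures and a generic observed-over-expected ratio, not DFI’s actual methodology – showing how a raw complication rate and a risk-adjusted one can point in opposite directions:

```python
# Illustrative only: invented figures, not DFI's methodology.
# Each patient carries a predicted probability of a complication,
# estimated from risk factors (age, comorbidities, emergency admission...).

hospital_a = {  # takes on the toughest cases
    "observed_complications": 30,
    "predicted_risks": [0.20] * 150,  # high-risk case mix
}
hospital_b = {  # handles mostly routine cases
    "observed_complications": 15,
    "predicted_risks": [0.05] * 200,  # low-risk case mix
}

def standardised_ratio(hospital):
    """Observed events divided by the events the case mix predicts.
    1.0 means 'as expected'; above 1.0 means worse than expected."""
    expected = sum(hospital["predicted_risks"])
    return hospital["observed_complications"] / expected

for name, h in [("A (tough cases)", hospital_a), ("B (easy cases)", hospital_b)]:
    raw = h["observed_complications"] / len(h["predicted_risks"])
    print(f"Hospital {name}: raw rate {raw:.1%}, ratio {standardised_ratio(h):.2f}")
```

On the raw rate, hospital A looks far worse (20% against 7.5%); adjusted for case mix, A is performing exactly as expected (ratio 1.00) while B is 50% worse than its easy caseload predicts (ratio 1.50).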

And then there’s the use of mortality as a measure. The study only considered mortality in hospital. That means that an institution which transfers its sickest patients to another hospital or sends terminal patients home (which may be a good thing to do, by the way – many dying patients prefer to end their lives in their own beds) will have an apparently lower mortality rate than others.
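
A quick back-of-the-envelope illustration of that bias, again with invented numbers:

```python
# Invented numbers: two hospitals with identical patients and identical care,
# differing only in where their sickest patients end up dying.

patients = 1000
deaths = 50  # same underlying outcomes at both hospitals

# Hospital X keeps all its patients to the end.
in_hospital_deaths_x = deaths

# Hospital Y transfers or discharges 30 terminal patients,
# whose deaths are then recorded somewhere else.
in_hospital_deaths_y = deaths - 30

print(f"X in-hospital mortality: {in_hospital_deaths_x / patients:.1%}")  # 5.0%
print(f"Y in-hospital mortality: {in_hospital_deaths_y / patients:.1%}")  # 2.0%
# Same care, same fifty deaths - yet Y looks less than half as deadly.
```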

So a pretty flawed study. A flawed study which didn’t, by the way, class Mid Staffs among the dirty dozen. In fact, Mid Staffs did far more than avoid the bottom twelve – it actually came in at number nine out of 146. ‘Good work’, you might think, ‘the new management has really turned that hospital round.’ Except that the data on which the study was based came from the period ending 31 March 2009. And it was in March that the scandal about the hospital broke. In other words, the glowing score was earned under the very management that presided over the scandal, not by the team brought in to clean up after it.

Oh, well. It made for some amusing reading and a lively conversation, on a grey winter Sunday. Before we moved on to a more intelligent topic.

2 comments:

  1. This makes me think of the excellent Analysis podcast on targets - these studies, especially ones as unsophisticated as this one, surely encourage easy, cheap remedies that look good rather than do a proper job.

    By the way, how do I make Blogger send me an email each time you write a new post? I've been looking around the settings for ages and my usual computer literacy has failed me.

  2. The Analysis programme did, however, also point out that the targets have had some pretty impressive results - particularly on hospital-acquired infections.

    I'll try and find out about alerts.
