Friday 24 May 2019

Is research good for your health?

Back in the nineties, I was involved in a study in France into ways of calculating risk factors in maternity services. The first step was to define what we meant by a good or bad outcome in childbirth, so we could answer the question, “We want to measure risk, but the risk of what?”

What could possibly go wrong?
We were trying to measure the ways it might
One of the bad outcomes we looked at was postpartum haemorrhage – bleeding after childbirth. But that too required definition, because there is some bleeding in any childbirth. For the purposes of the study, we decided to regard any blood loss exceeding 500 millilitres as a haemorrhage.

One of the obstetricians involved in the study decided that she wanted to use graduated pouches to catch and measure any blood lost. The midwives weren’t happy with this idea, since their professional experience meant they could assess whether a patient was haemorrhaging or not just by observing them, without anything so coldly technical as a pouch. But the obstetrician overrode them.

At the time, fathers weren’t allowed into that hospital’s delivery suites. So as soon as a baby was born, the midwife would nip out into the corridor with it and, in the obstetrician’s words, coo over it with the father. The mother, in the meantime, would be left on the table, possibly bleeding away.

Using the pouches put an end to all that. One of the effects, unsurprising to anyone but the midwives, was that it turned out they were actually missing a great many cases of haemorrhage, professional experience or not. But far more important was the fact that the midwives had to stay behind to fit the pouches, which meant they were looking after the women.

The rate of readmissions to the hospital for postpartum anaemia dropped massively.

The mere fact of looking into the problem had led to an improvement in patient care.

Those memories made it particularly interesting for me to listen to a conference presentation that talked about a classic paper by Sharon K. Inouye and others, ‘A Multicomponent Intervention to Prevent Delirium in Hospitalized Older Patients’, published in the New England Journal of Medicine in 1999.

Seminal study
But its flaw may be as interesting as its conclusions
The study compared outcomes for elderly patients given specific interventions, targeted at a range of problems that lead to delirium, with outcomes for patients in two other groups receiving ‘usual care’. It found that some of the interventions genuinely prevented the onset of delirium, though the likelihood of recurrence and the severity of any delirium suffered were much the same in both groups.

The conclusion was that it was important to prevent the initial occurrence of delirium, and the interventions were effective.

What the speaker, Dr Andrew Severn, a senior anaesthetist from University Hospital of Morecambe Bay NHS Trust, pointed out was that the apparent division between the groups, intervention and control, was nothing like as rigid as might be expected. The study mentions that, across both the intervention and the control group, rates of delirium were lower than would normally be expected in that population. What’s more, there was a drop in the usual-care group – not as substantial as in the intervention group, but still significant. The conclusion? There’d been some contamination between the groups: some of the interventions from the target group had somehow spread into the control group. As the study paper explains:

Although efforts were made to avoid contamination, some intervention protocols were disseminated by word of mouth to staff members in usual-care units. Moreover, although the intervention strategies most often involved the nursing staff, the physicians rotated on all hospital floors and carried over some of the intervention protocols to the usual-care group.

This sounds like a flaw in the study. But, in reality, it’s something far more significant: it demonstrates that the nurses using the new interventions were talking to their colleagues working with the control patients, and the latter were saying, “Hey! That’s a good idea. I’m going to do that too.”

The same was happening to the doctors, simply by virtue of the fact that they were treating both groups.

In other words, good ideas were being tested in this study. The care professionals involved were identifying the ideas as good and simply couldn’t wait until the end of the study to apply them. As a consequence, the mere fact of researching enhancements in care was leading to better care.

Like my postpartum haemorrhage project, it’s an excellent example of the Hawthorne effect: work observed is work improved.

Interestingly, Dr Severn spoke right after a patient involved in a clinical trial. She said that her inclusion in the project, on its own, had improved her health. She now knew she was being monitored. She could get attention when she needed it. She realised that she might be in the group receiving the placebo rather than the active drug, but she felt better for being subject to more active care.

Yet further proof that research itself can be good for patient care. Whether or not it leads to a breakthrough…
