Thursday 2 November 2017

Evidence-based medicine: the case against

No sooner had I finished a post extolling the merits of evidence-based medicine (EBM) than I found my certainty that it was unquestionable being decisively questioned by an outstanding presentation at the same conference. 

That was a challenging but refreshing experience.


Trisha Greenhalgh: an authority on evidence-based medicine and a challenging patient
Trisha Greenhalgh is Professor of Primary Care Health Sciences at Oxford University. That makes her a leading authority in this field. In fact, she was awarded the Order of the British Empire in 2001 for her work on evidence-based medicine. So when she speaks out about its limitations, she commands attention – certainly mine.

Not that she advocates turning our backs on evidence. She simply calls for us to make more use of what she calls “patient evidence”. The “old-fashioned clinical method” asks a key question:

What do I know about this patient: her history, the examination, test results, how she reacted the last time she took this drug, her beliefs, her family circumstances, etc. And given all that, what evidence do I need?

Without knowledge of the patient, the clinician may well pick the wrong evidence. Greenhalgh feels that clinicians don’t apply the “old-fashioned clinical method” anything like often enough, as became clear after her own hospitalisation following a serious bicycle accident. Here’s what happened as she subjectively perceived it:

I was riding my racing bike along the towpath. I was going about 20 miles an hour.

Something caught in my front wheel. The bike somersaulted into the air. I came down heavily on the concrete, landing on my arms and the back of my head.

I was very dazed. Both my arms were deformed and useless. My fingers were numb. My helmet was split.


This was converted by the hospital into the following objective narrative:

55 yr old female fell off her bike

The hospital chose to apply a guideline for an older woman at risk of falls.

The fractures to both her arms led to seven operations over four months. But after six months she had wasting of both hands, together with heaviness, clumsiness and hyperreflexia (excessively pronounced reflex reactions) in the legs.

The guideline didn’t provide for any kind of diagnostic testing of the spine, but eventually an MRI scan was carried out, revealing a displacement of two discs in her cervical spine (the neck).

In passing, let me say that an American in the audience pointed out that in the States the spinal examination would certainly have been carried out (“and billed”, cried a voice in a distinctly English accent). But, as Greenhalgh made clear, it should have been carried out in Britain too. 

NICE, the National Institute for Health and Care Excellence, provides a guideline which includes an algorithm for Selection of adults for imaging of the cervical spine. It calls on staff dealing with “adults presenting to the emergency department who have sustained a head injury” to check on the presence of any of a series of risk factors, including “dangerous mechanism of injury (fall from over 1 metre or 5 stairs)” or “ejection from a motor vehicle”.

A bike is not a motor vehicle, but surely the key condition is the ejection. And since the bike rose high in the air, she fell from over a metre. Her point is that it takes judgement to decide which guideline to apply. It isn’t a simple mechanical process.
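To see how a purely mechanical reading of such a rule misses her point, here is a minimal sketch, in Python, of the risk-factor check treated as a bare checklist. Only the two criteria quoted above come from the guideline as reported here; the data structure, field names and example values are my own illustrative assumptions, not NICE’s wording.

```python
# Illustrative sketch only: a "mechanical" reading of a guideline-style
# risk-factor check. The two criteria are those quoted in the post
# ("fall from over 1 metre or 5 stairs", "ejection from a motor vehicle");
# the field names and values below are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class HeadInjuryPresentation:
    fall_height_metres: float        # estimated height of the fall
    stairs_fallen: int               # number of stairs fallen down, 0 if none
    ejected_from_motor_vehicle: bool

def needs_cervical_spine_imaging(p: HeadInjuryPresentation) -> bool:
    """True if any of the (illustrative) risk factors is present."""
    dangerous_mechanism = p.fall_height_metres > 1.0 or p.stairs_fallen >= 5
    return dangerous_mechanism or p.ejected_from_motor_vehicle

# The case coded the way the hospital appears to have read it:
# "55 yr old female fell off her bike" -- no motor vehicle, trivial fall.
as_recorded = HeadInjuryPresentation(fall_height_metres=0.5,
                                     stairs_fallen=0,
                                     ejected_from_motor_vehicle=False)
print(needs_cervical_spine_imaging(as_recorded))    # False

# The same event, coded with the judgement she argues for: the bike
# somersaulted, so she fell from over a metre and was, in effect, ejected.
with_judgement = HeadInjuryPresentation(fall_height_metres=1.5,
                                        stairs_fallen=0,
                                        ejected_from_motor_vehicle=True)
print(needs_cervical_spine_imaging(with_judgement)) # True
```

The answer flips entirely on how the clinician codes the narrative (“fell off her bike” versus “somersaulted into the air and landed on concrete”), which is exactly Greenhalgh’s argument: choosing and applying the guideline takes judgement, not box-ticking.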

She decided to have surgery. But it seems that some “self-proclaimed experts in evidence-based medicine” told her:

You didn’t need that operation. Randomised clinical trials have shown that in cervical disc lesions, surgical groups didn’t do any better than conservatively managed groups.

She stressed that:

They said this without taking a full history, without asking what the examination or MRI findings were, and without acknowledging the inclusion/exclusion criteria for the trials.


Yep. Got to agree with Greenhalgh on that one
As an authority in the field, she dug out the trial report. It specifically excluded patients with obvious muscle wasting, which she had, and patients who had suffered a whiplash-type injury.

Things got worse when she needed to treat the acute pain she suffered after the operation. The specific question was whether she should use an opioid-based analgesic or a non-steroidal anti-inflammatory drug (NSAID). The evidence-based view was:

Patient advised not to take NSAIDs for one month following surgery: “some evidence of delayed healing of bone repairs, and risk of bleeding is higher in the post-op period”

The evidence that NSAIDs slowed healing was from a 2011 study.

It was based on work on just 12 subjects.

All of whom were rats. And I don’t mean politicians. Rodents with long tails.


Greenhalgh, formerly an elite athlete, had plenty of experience of taking NSAIDs, even after bone fractures, with no evidence of delayed healing. As for opioids, she had suffered severe adverse reactions to them in the past, including itching and vomiting.

As she pointed out, the conclusion had to be:

In this patient, given the history, clinical picture and equivocal nature of the evidence, the benefit-harm balance is in favour of NSAIDs...

Her presentation reached two conclusions. One, she rightly maintained, was uncontroversial. Clinicians should ask themselves:

Is the management of this patient in these circumstances an appropriate (‘real’) or inappropriate (‘rubbish’) application of the principles of EBM?

The second conclusion was far more controversial, and more interesting:

If we practice patient-focused individualization of the evidence (also known as real EBM) we will often find that more research is not needed.

Perhaps the uncertainty in science is inherent.

Perhaps we need to return to old-fashioned clinical method and use EBM less comprehensively…

A challenging conclusion indeed. But if it stops clinicians allowing their view of the patient to be buried under a mountain of not necessarily relevant evidence, that has to be good news for us all. Any of us might some day be patients.

Not that this means the need for evidence is less strong. Only that the application of evidence has to be guided by the real condition of the patient. Surely all clinicians would make that condition the starting point for medicine?

Wouldn’t they?
