Thursday, November 10, 2011

Post-hOckham analyses - the simplest explanation is that it just plain didn't flipp'n work


You're probably familiar with the Franciscan friar William of Ockham and his sacred saw. Apparently the principle has been as oversimplified as it has been ignored, as a search of Wikipedia will attest. Suffice it to say, nonetheless, that this maxim guides us to select the simplest from among multiple explanations for any phenomenon - and this makes intuitive sense, because there are infinitely many, and infinitely complex, possible explanations for any phenomenon.

So I'm always amused and sometimes astonished when medical scientists, so smitten with and beholden to their theories, reappraise them after they've been defeated by their very own data and begin to formulate increasingly complex explanations and apologias. "True Believers" is what Jon Abrams, MD, one of my former attendings, used to call them. The transition from scientist to theist is an insidious and subversive one.

The question must be asked: did we design such-and-such clinical trial to test the null hypothesis or not? If some post-hoc subgroup is going to do better with therapy XYZ, why didn't we identify that a priori? Why didn't we test just THAT group? Why didn't we say, in advance, "if this trial fails to show efficacy, it will be because we should have limited it to this or that subgroup, and we will follow up with a trial of that subgroup"?


But this is not what happens. When our trial fails, we desperately pore over the data looking for signals, however faint, in any group we can. Forget about multiple comparisons, pre-study probabilities, and the like - we can rationalize why just about any subgroup might have been the one.

I keep thinking of the NETT trial (lung volume reduction surgery for COPD). So convinced of the efficacy of this therapy were its proponents that many refused to participate or to enroll their patients, for fear that they would be denying their patients its obvious benefits. When the trial failed, some subgroups among many of course appeared to benefit, and the therapy is still recommended for them.

Interestingly, and I think this is very telling, in successful trials nobody pores over subgroups to find those in which there was NOT efficacy and then advocates excluding them from use of the therapy in the future, or doing future trials in those subgroups to see if they benefit. (Except in the case of subgroups where there may be side effects or net harm - but that's just omission bias rearing its ugly and familiar head.)
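To make the multiple-comparisons point concrete, here is a minimal simulation - hypothetical numbers of my choosing, not NETT data - of a therapy with exactly zero effect, tested post hoc in 20 subgroups:

```python
# A minimal sketch (hypothetical numbers, not NETT data): a therapy with
# truly zero effect, tested post hoc in 20 subgroups at alpha = 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subgroups, n_per_arm, alpha = 20, 100, 0.05

false_positives = 0
for _ in range(n_subgroups):
    treated = rng.normal(0.0, 1.0, n_per_arm)  # no true treatment effect
    control = rng.normal(0.0, 1.0, n_per_arm)
    _, p = stats.ttest_ind(treated, control)
    false_positives += p < alpha

# With 20 independent looks, the chance of at least one "significant"
# subgroup is 1 - 0.95**20, roughly 64% - for a therapy that does nothing.
print(f"spurious 'significant' subgroups: {false_positives}")
print(f"P(at least one by chance): {1 - (1 - alpha) ** n_subgroups:.2f}")
```

Any of those chance hits, examined in isolation, would look exactly like a subgroup that "benefits."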

In the past month I have seen at least two notable examples of this post-hoc nonsense. In October, JAMA published a study of omega-3 fatty acids as "pharmaconutrition" for patients with ALI/ARDS, which showed that even though the fish oils were making it into the patients' systems, no favorable effect on outcomes was observed. This does not surprise me. I have always thought that the notion that you can select a feeding formula for a critically ill patient, with just the right mix of nutrients, and favorably affect the outcome of the illness was tripe (pun intended). I suspect you could buy one of those fancy juicers shown on infomercials at 2 AM on the shopping network (here's one: Magic Bullet), grind up whatever slop was on the regular patient trays, mix in a liter of water, infuse it through the nasogastric tube, and call it a day. After all, why does a patient with ARDS on the vent for 3 days need some sexy formula during that time, but then when he's extubated on day 4 he gets a regular meal tray from the cafeteria? Do we REALLY believe that 3 days of Oxepa is going to cause his organ failures to melt away? I don't. And the OMEGA trial bolsters my already robust skepticism.

But the editorialists saw it otherwise (click HERE). The trial was too small, stopped too soon, etc. They laud the authors for their inventive design using boluses to supply the supplements, then blame that design for the failure of the trial. They cite a study showing that multicenter trials find smaller effects (so we should go back to single-center trials?) and a study showing that larger trials find smaller effects (wouldn't this have something to do with power calculations? see the sketch below), but I'm not sure how any of this bears on OMEGA's failure. In the final paragraph, they advocate a "broad, bold research agenda...for the future of pharmaconutrition". I would advocate something bolder: accept the data and abandon that research agenda. Time to move on.
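On that parenthetical about power calculations: it does. Trials are sized for the effect they expect to find, so large trials tend to be large precisely because they are hunting small effects - smaller effects are baked into the design, not discovered by it. A back-of-the-envelope sketch with the standard two-arm normal approximation (the effect sizes are illustrative, not drawn from OMEGA or any other trial):

```python
# Standard two-arm sample-size approximation:
#   n per arm = 2 * ((z_{alpha/2} + z_{beta}) / delta)^2
# where delta is the standardized effect size the trial is powered to find.
from scipy.stats import norm

alpha, power = 0.05, 0.80
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)

for delta in (0.5, 0.2, 0.1):  # illustrative effect sizes only
    n_per_arm = 2 * (z / delta) ** 2
    print(f"effect size {delta}: ~{n_per_arm:.0f} patients per arm")

# effect size 0.5: ~63  |  0.2: ~392  |  0.1: ~1570
```

The arithmetic runs from effect size to trial size, so observing that big trials report small effects is closer to tautology than to an indictment of any particular trial's design.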

I will leave it to interested readers to peruse another JAMA article released this week, investigating extracranial-intracranial bypass surgery for stroke prevention. As re-emphasized by the editorialist, this study highlights the frailty of surrogate endpoints (in this case, a measure of brain perfusion after bypass) and the misleading tendencies of historical controls. What is most interesting to me are the comments of one of the authors in this New York Times article about the study. Here is a quote from the article, with Dr. Powers explaining the researchers' attempt to find variables that would identify the patients most likely to have peri-operative strokes (events in these patients negated the benefits of the surgery for the entire cohort):

Dr. Powers said that the researchers pored over their data to see if they could find some clue to predict which patients would be most likely to have strokes soon after the surgery.
“We looked at 50 different factors to see if we could identify those people, and we couldn’t,” he said.
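Incidentally, screening 50 factors at the conventional 0.05 threshold should hand you a couple of chance "predictors" even in pure noise, so finding none is, if anything, evidence that there was nothing there to find. The arithmetic (assuming independent tests - my assumption, not anything reported in the study):

```python
# Expected chance "hits" from screening k factors at significance level
# alpha, assuming independent tests (my assumption, not the study's).
alpha, k = 0.05, 50

expected_false_hits = alpha * k        # 2.5 spurious predictors expected
p_at_least_one = 1 - (1 - alpha) ** k  # ~0.92

print(f"expected false hits: {expected_false_hits}")
print(f"P(at least one false hit): {p_at_least_one:.2f}")
```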


Or we can just stop doing the choperation. Because it doesn't work.
