Saturday, October 11, 2014

Enrolling Bad Patients After Good: Sunk Cost Bias and the Meta-Analytic Futility Stopping Rule

Four (relatively) large critical care randomized controlled trials were published early online in the NEJM in the last week.  I was excited to blog about them, but then I realized that all four are old news, so there's nothing to blog about.  But alas, the fact that there is no news is the news.

In the last week, we "learned" that more transfusion is not helpful in septic shock, that EGDT (the ARISE trial) is not beneficial in sepsis, that simvastatin (HARP-2 trial) is not beneficial in ARDS, and that parenteral administration of nutrition is not superior to enteral administration in critical illness.  Any of that sound familiar?

I read the first two articles, then discovered the last two and said to myself, "I'm not reading these."  At first I felt bad about this decision, but then I realized it was a rational one.  Here's why.

Sunk cost bias occurs when people invest in a course of action or a financial product that seemed like a good idea initially, but later information accrues showing that it was a bad idea and that, going forward, more money/time/effort/resources will be lost unless they "cut bait".  The money/effort/resources/time invested up until that point are "sunk costs" - that is, they cannot be recovered.  People fall victim to sunk cost bias when they invest additional resources into a losing proposition.  On Wall Street, this is colloquially referred to as "throwing good money [that which is not yet sunk] after bad [money that is sunk]."

Sunk cost bias is the reason that futility stopping rules exist.  A group of investigators with limited resources (assuming they're fungible) does not want to keep spending money on a study that is doomed to fail, once this becomes apparent in an interim analysis.  (There are ethical nuances to the decision to stop an ill-fated trial, too.)

But a Bayesian-minded trialist might develop what could be called a "meta-analytic futility stopping rule" (in keeping with the trend of turning everything into a cheeky acronym, I'll call this the MAFSR, pronounced "Maf-Sir").  That is, let's step back from the individual trial level and imagine how the data from an ongoing trial might be incorporated into a meta-analysis.  Suppose you are in the middle of the HARP-2 trial of simvastatin in ARDS, with a planned enrollment of 500 or so patients, when the data of the ARDSnet SAILS trial of 740 patients testing rosuvastatin in ARDS are released, and those data show nothing.  Should you stop HARP-2, knowing that there is no way your trial will influence a meta-analytic result the basis of which is formed by SAILS?  The same could be said of the ARISE investigators if ProCESS had come out before ARISE was finished, and so on.

Indeed, taken to its logical conclusion, the MAFSR dictates that some trials should not even be started, because there is basically no way any result within a reasonable expectation could influence a meta-analysis integrating that trial's results with existing data.  I'm thinking of the most recent prone positioning in ARDS trial here, among others - such as transfusion trials in ever-expanding patient subgroups with ever-narrowing definitions.  After the MINT (Myocardial Ischemia and Transfusion) trial is published, it will be time to stop.
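To see why a smaller ongoing trial can't budge a meta-analysis dominated by a larger null trial, here is a toy sketch using fixed-effect inverse-variance pooling.  The numbers are hypothetical (made-up log risk ratios and standard errors, not data from SAILS or HARP-2); they're chosen only to mimic the situation of a large, precise null result already on the books and a smaller trial still enrolling.

```python
import math

def pooled_estimate(effects, ses):
    """Fixed-effect inverse-variance pooling of effect estimates
    (e.g., log risk ratios) and their standard errors."""
    weights = [1 / se ** 2 for se in ses]
    total_w = sum(weights)
    est = sum(w * e for w, e in zip(weights, effects)) / total_w
    return est, math.sqrt(1 / total_w)

def ci95(est, se):
    """95% confidence interval for a pooled estimate."""
    return est - 1.96 * se, est + 1.96 * se

# Hypothetical existing evidence: one large null trial
# (log RR near 0, tight standard error).
before_est, before_se = pooled_estimate([0.02], [0.10])

# Now fold in a hypothetical smaller ongoing trial, even granting it
# a favorable point estimate (log RR -0.20) with the wider standard
# error a smaller sample implies.
after_est, after_se = pooled_estimate([0.02, -0.20], [0.10, 0.18])

print("pooled before:", before_est, ci95(before_est, before_se))
print("pooled after :", after_est, ci95(after_est, after_se))
```

Because weights go as one over the variance, the large trial carries roughly three times the weight of the smaller one here, and the pooled 95% confidence interval still straddles zero even after granting the ongoing trial an optimistic result.  That is the MAFSR intuition in arithmetic: the best realistic outcome of the ongoing trial leaves the meta-analytic conclusion unchanged.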

But momma always said "finish what you started," and Little League taught you that "winners never quit and quitters never win," and some research resources are not fungible - if not spent on the ongoing trial, they cannot be transferred to another project.  So I have no illusions that investigators will employ the MAFSR and shut down ongoing trials when another group beats them to the punchline and releases damning data from another trial.

But I can employ the MAFSR as my personal futility stopping rule.  And that's why I didn't read the HARP-2 simvastatin trial or the parenteral nutrition article - I just skimmed the abstracts - because I know that their likelihood ratios cannot possibly shift the prior probability, already firmly established with a narrow confidence interval, that these things don't work.
