Tuesday, November 8, 2011

The Nihilist versus the Trialist: Why Most Published Research Findings Are False

I came across this PLoS Med article today that I wish I had seen years ago: Why Most Published Research Findings Are False. In this delightful essay, John P. A. Ioannidis describes why you must be suspicious of everything you read, because most of it is spun hard enough to give you a wicked case of vertigo. He highlights one of the points made repeatedly on this blog, namely that not all hypotheses are created equal, and some require more evidence to confirm (or refute) than others - basically a Bayesian approach to the evidence. With this approach, the diagnostician's "pre-test probability" becomes the trialist's "pre-study probability," and likelihood ratios stem from the data from the trial as well as alpha and beta. He creates a function for trial bias and shows how this impacts the probability that the trial's results are true as the pre-study probability and the study power are varied. He infers that in practice alpha is effectively too high (and hence Type I error rates too high) and power too low - that is, beta too high (both alpha and beta influence the likelihood ratio of a given dataset). He discusses terms (coined by others whom he references) such as "false positive" for study reports, and highlights several corollaries of his analysis (often discussed on this blog), including:
  • beware of studies with small sample sizes
  • beware of studies with small effect sizes (delta)
  • beware of multiple hypothesis testing and soft outcome measures
  • beware of flexibility in designs (think PROWESS/Xigris, among others), definitions, outcomes (think the NETT trial), and analytic modes
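
The core of Ioannidis's argument is a simple formula for the post-study probability that a claimed finding is true (his "positive predictive value," or PPV), driven by the pre-study odds R, alpha, beta, and a bias term u. A minimal sketch of that formula in Python (my own rendering of the equation in his paper - the parameter names are his, the function itself is illustrative):

```python
def ppv(R, alpha=0.05, beta=0.2, u=0.0):
    """Post-study probability that a claimed positive finding is true,
    per Ioannidis (PLoS Med 2005), given:
      R     -- pre-study odds that the tested relationship is true
      alpha -- Type I error rate
      beta  -- Type II error rate (power = 1 - beta)
      u     -- bias: fraction of otherwise-null analyses reported as positive
    """
    numerator = (1 - beta) * R + u * beta * R
    denominator = R + alpha - beta * R + u - u * alpha + u * beta * R
    return numerator / denominator

# A well-powered trial of a plausible hypothesis, no bias:
print(ppv(R=0.5, alpha=0.05, beta=0.2, u=0.0))   # roughly 0.89

# Same trial with modest bias (u = 0.2): the PPV drops noticeably.
print(ppv(R=0.5, alpha=0.05, beta=0.2, u=0.2))

# A long-shot hypothesis in an underpowered study: more likely false than true.
print(ppv(R=0.05, alpha=0.05, beta=0.8, u=0.0))
```

Plugging in numbers makes his corollaries concrete: shrink R (implausible hypotheses), shrink power, or grow u, and the PPV slides below one half - at which point most published "positive" findings in that corner of the literature are false.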

Perhaps most importantly, he discusses the role that researcher bias may play in analyzing or aggregating data from research reports - the GIGO (garbage in, garbage out) principle. Conflicts of interest extend beyond the financial to tenure, grants, pride, and faith. Gone forever is the notion of the noble scientist in pursuit of the truth, replaced by the egoist climber of ivory and builder of Babel towers, so bent on promoting a pet hypothesis (think Greet Van den Berghe) that they lose sight of the basic purpose of scientific testing, and the virtues of scientific agnosticism.

Ioannidis also rails against entire (unnamed) fields of research, saying that their aggregate results are probably usable as a surrogate measure of the amount of bias in the field. So suppose that alternative and complementary medicine, herbals and supplements, nutritional interventions, and all that jazz that has been all the rage for a decade now is poppycock, and the real truth is that none of it works - then any net effect you see in the aggregate data of that field likely reflects nothing more than the bias in the field. What a refreshing concept. Cheers.

For my own part, I still taste saltiness and bitter almonds whenever I think of Xigris, that fallen angel of sepsis which was dispatched to hell last month after a [final] study showed no efficacy. I was duped for a full decade on that one, perhaps because I gave too little weight to a change in the formulation of the drug mid-trial. Shame on me. Live and learn.

In reading his article, I am reminded also of outright fraud in science and several salient and contemporaneous examples. Hwang Woo Suk of South Korea basically faked a whole program of stem cell research, some of which was published in that pinnacle of peer-reviewed research, "Science"; Marc Hauser of Harvard continues to face investigation because of irregularities in his research (some of it also published in Science); and most recently, the author of this 2011 study of prejudice in Science has been accused of widespread fraud and scientific "misconduct on an astonishing scale."

So much for the peer-review process and impact factors. They're no match for ego-maniacal cheaters hell-bent on tenure, grants, and manuscripts, no matter the costs ... or consequences.
