Thursday, November 10, 2011

Post-hOckham analyses - the simplest explanation is that it just plain didn't flipp'n work


You're probably familiar with that Franciscan friar William of Ockham and his sacred saw. Apparently the principle has been as oversimplified as it has been ignored, as a search of Wikipedia will attest. Suffice it to say, nonetheless, that this maxim guides us to select the simplest from among multiple explanations for any phenomenon - and this intuitively makes sense, because there are infinitely many, and infinitely complex, possible explanations for any phenomenon.

So I'm always amused and sometimes astonished when medical scientists, defeated by their very own data, reappraise their theories and begin to formulate increasingly complex explanations and apologies, so smitten with and beholden to those theories are they. "True Believers" is what Jon Abrams, MD, one of my former attendings, used to call them. The transition from scientist to theist is an insidious and subversive one.

The question is begged: did we design such and such clinical trial to test the null hypothesis or not? If some post-hoc subgroup is going to do better with therapy XYZ, why didn't we identify that a priori? Why didn't we test just THAT group? Why didn't we say, in advance, "If this trial fails to show efficacy, it will be because we should have limited it to this or that subgroup, and if it fails, we will follow up with a trial of that subgroup"?
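
As a back-of-the-envelope illustration of why an unplanned subgroup "win" deserves suspicion, here is a quick sketch (mine, not drawn from any particular trial) of how the chance of a purely spurious "significant" subgroup grows with the number of post-hoc comparisons:

```python
# Probability of at least one nominally "significant" subgroup purely by chance,
# assuming k independent post-hoc comparisons at significance level alpha.
# (Real subgroups are correlated, so this is only a rough intuition, not an exact bound.)
alpha = 0.05
for k in (1, 5, 10, 20):
    p_any_false_positive = 1 - (1 - alpha) ** k
    print(f"{k:>2} subgroups -> P(>=1 false positive) = {p_any_false_positive:.2f}")
```

With ten post-hoc subgroups, the chance of at least one spurious "hit" is already around 40 percent - which is why the pre-specification question matters so much.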

Tuesday, November 8, 2011

The Nihilist versus the Trialist: Why Most Published Research Findings Are False

I came across this PLoS Med article today that I wish I had seen years ago: Why Most Published Research Findings Are False. In this delightful essay, John P. A. Ioannidis describes why you must be suspicious of everything you read, because most of it is spun hard enough to give you a wicked case of vertigo. He highlights one of the points made repeatedly on this blog, namely that all hypotheses are not created equal, and some require more evidence to confirm (or refute) than others - basically a Bayesian approach to the evidence. With this approach, the diagnostician's "pre-test probability" becomes the trialist's "pre-study probability," and the likelihood ratios stem from the trial data as well as from alpha and beta. He creates a function for trial bias and shows how bias degrades the probability that a trial's results are true as the pre-study probability and the study power are varied (a rough sketch of the calculation follows the list below). He infers that, in practice, alpha is effectively too high (and hence Type I error rates too high) and power (1 - beta) too low; both alpha and beta influence the likelihood ratio of a given dataset. He discusses terms (coined by others whom he references) such as "false positive" for study reports, and highlights several corollaries of his analysis (often discussed on this blog), including:
  • beware of studies with small sample sizes
  • beware of studies with small effect sizes (delta)
  • beware of multiple hypothesis testing and soft outcome measures
  • beware of flexibility of designs (think PROWESS/Xigris, among others), definitions, outcomes (the NETT trial), and analytic modes
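
For the quantitatively inclined, here is a minimal sketch of the positive predictive value (PPV) calculation at the heart of the essay - my own Python rendering of the formula, with the pre-study odds R, alpha, beta, and the bias term u set to purely illustrative values:

```python
# Sketch of Ioannidis's positive predictive value (PPV) of a "positive" research finding.
# R     : pre-study odds that the tested relationship is actually true
# alpha : Type I error rate
# beta  : Type II error rate (study power = 1 - beta)
# u     : bias, the proportion of analyses that would not otherwise have been
#         "positive" but end up reported as positive anyway
def ppv(R, alpha=0.05, beta=0.20, u=0.0):
    true_positives = (1 - beta) * R + u * beta * R
    all_positives = R + alpha - beta * R + u - u * alpha + u * beta * R
    return true_positives / all_positives

# A well-powered test of a plausible hypothesis vs. an underpowered, biased
# test of a long shot: most "positive" findings in the second case are false.
print(round(ppv(R=1.0, beta=0.20), 2))          # ~0.94
print(round(ppv(R=0.05, beta=0.80, u=0.3), 2))  # ~0.06
```

Small pre-study odds, low power, and even modest bias together drag the PPV well below a coin flip - which is precisely the "most findings are false" punchline.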

Perhaps most importantly, he discusses the role that researcher bias may play in analyzing or aggregating data from research reports - the GIGO (garbage in, garbage out) principle. Conflicts of interest extend beyond the financial to tenure, grants, pride, and faith. Gone forever is the notion of the noble scientist in pursuit of the truth, replaced by the egoist climber of ivory and builder of Babel towers, so bent on promoting their own hypothesis (think Greet Van den Berghe) that they lose sight of the basic purpose of scientific testing, and the virtues of scientific agnosticism.