Monday, January 28, 2013

Coffee Drinking, Mortality, and Prespecified Falsification Endpoints

A few months back, the NEJM published this letter in response to an article by Freedman et al in the May 17, 2012 NEJM reporting an association between coffee drinking and reduced mortality found in a large observational dataset.  In a nutshell, the letter said that there was no biological plausibility for mortality reductions resulting from coffee drinking, so the results were probably due to residual confounding, and that reductions in mortality in almost all categories (see Figure 1 of the index article), including accidents and injuries, made the results dubious at best.  The positive result in the accidents and injuries category was, in essence, a failed negative control in the observational study.

Last week, in the January 16th issue of JAMA, Prasad and Jena operationally formalized this idea of negative controls for observational studies, especially in light of Ioannidis' call for a registry of observational studies.  They recommend that investigators mining databases establish a priori hypotheses that ought to turn out negative because they are biologically implausible.  These hypotheses can then serve as negative controls for the observational associations of interest, the ones that the authors want to be positive.  In essence, they recommend that the approach to observational data become more scientific.  At the most rudimentary end of the dataset analysis spectrum, investigators simply mine the data to see what interesting associations they can find.  In the middle of the spectrum, investigators have a specific question that they wish to answer (usually in the affirmative), and they leverage a database to try to answer it.  Prasad and Jena suggest going a step further towards the ideal end of the spectrum:  specifying both positive and negative associations that should be expected, as a more holistic assessment of the ability of the dataset to answer the question of interest.  (If an investigator were looking to rule out an association rather than to find one, s/he could use a positive control rather than a negative one [a falsification endpoint] to establish the database's ability to confirm expected differences.)
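The logic of a failed negative control can be made concrete with a toy simulation.  This is a hypothetical sketch, not the Freedman et al. data: I assume a single unmeasured "health consciousness" trait that drives coffee drinking, mortality, and the implausible endpoint (accidents), with made-up probabilities.  A naive analysis then finds coffee "protective" against both the plausible and the implausible outcome, which is exactly the signal of residual confounding that a prespecified falsification endpoint is meant to catch.

```python
import random

random.seed(0)

# Hypothetical simulation: one unmeasured confounder ("health
# consciousness") influences the exposure and BOTH outcomes.
# All probabilities are invented for illustration.
n = 100_000
rows = []
for _ in range(n):
    healthy = random.random() < 0.5                        # unmeasured confounder
    coffee = random.random() < (0.7 if healthy else 0.4)   # exposure of interest
    died = random.random() < (0.01 if healthy else 0.03)   # plausible endpoint
    accident = random.random() < (0.005 if healthy else 0.015)  # falsification endpoint
    rows.append((coffee, died, accident))

def risk_ratio(outcome_idx):
    """Crude (unadjusted) risk ratio for coffee drinkers vs. non-drinkers."""
    exposed = [r for r in rows if r[0]]
    unexposed = [r for r in rows if not r[0]]
    r1 = sum(r[outcome_idx] for r in exposed) / len(exposed)
    r0 = sum(r[outcome_idx] for r in unexposed) / len(unexposed)
    return r1 / r0

print(f"RR death:    {risk_ratio(1):.2f}")  # below 1: coffee "looks" protective
print(f"RR accident: {risk_ratio(2):.2f}")  # also below 1: the negative control fails
```

Because the confounder lowers both outcomes in the coffee-drinking group, both crude risk ratios come out below 1 even though coffee has no causal effect on either outcome in this simulation; the implausible "protection" against accidents is the tell.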

I think that they are correct in noting that the burgeoning availability of large databases (of almost anything) and the ease with which they can be analyzed pose problems for the interpretation of results.  Registering observational studies and assigning prespecified falsification endpoints should go a long way towards reducing incorrect causal inferences and false associations.

I wish I had thought of that.

Added 3/3/2013 - I just realized that another recent study of dubious veracity had some inadvertent, unprespecified falsification endpoints that nonetheless cast doubt on its results.  I blogged about it here:  Multivitamins caused epistaxis and reduced hematuria in male physicians.


  1. I just wish that people would learn that conclusions drawn from observational studies are very unreliable. All they do is highlight things that perhaps should be investigated further.

