Needless to say, in spite of a dormant hope that something so simple could prevent cancer, I was skeptical. Despite decades, perhaps eons, of enthusiasm for the use of vitamins, minerals, and herbal remedies, there is, to my knowledge (please, dear reader, direct me to the data if this is an omission), no credible evidence of a durable health benefit from taking such supplements in the absence of deficiency. But supplements have a lure that can beguile even the geniuses among us (see: Linus Pauling). So before I read the abstract and methods to check the level of statistical significance, the primary endpoint, the number of endpoints, and the sources of bias, I asked myself: "What is the probability that taking a simple, commercially available multivitamin can prevent cancer?" and "What kind of P-value or level of statistical significance would I require to believe the result?" Indeed, if you have not yet seen the study, you can ask yourself those same questions now.
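For readers who like to see the arithmetic behind that second question, here is a back-of-the-envelope Bayesian sketch, runnable in Stata. The 5% prior probability and 80% power are my own assumptions for illustration, not figures from the study:

    * If my prior that a daily MVI prevents cancer is 5%, and a trial
    * with 80% power comes back "positive" at alpha = 0.05, the
    * posterior probability that the effect is real is:
    * (the 5% prior and 80% power are assumptions, not study values)
    display "Posterior probability = " %5.3f (0.80*0.05)/(0.80*0.05 + 0.05*0.95)

Under those assumptions, even a nominally "positive" trial at P < 0.05 leaves the posterior probability under 50% - which is why, with a prior this low, I demand more than bare statistical significance before I believe a claim like this.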
I don't have the data necessary to replicate their Cox analysis, but Table 2 does give me the raw data on numbers of patients with cancer. (These do not account for duration or time-to-event, so my analysis with these data is necessarily contrived and incomplete, but illuminating nonetheless.) Plugging the raw data into Stata, I find that the rate of cancer in the multivitamin (MVI) group is 17.6% and that in the placebo group is 18.8%. Whoa! What happened to the 8%? That was a relative difference; the absolute difference is only 1.2 percentage points using the raw data. Suddenly, taking an MVI every day loses much of its luster.
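If you want to reproduce this crude comparison yourself, Stata's immediate command csi takes the raw cell counts directly. The counts below are illustrative, back-calculated from the 17.6% and 18.8% figures assuming roughly 7,300 men per arm; they are not the actual Table 2 entries:

    * csi takes: exposed cases, unexposed cases, exposed non-cases,
    * unexposed non-cases. Counts here are back-calculated from the
    * crude rates (1288/7317 = 17.6%, 1377/7324 = 18.8%), not copied
    * from Table 2.
    csi 1288 1377 6029 5947

The output shows a risk difference of about -0.012 (the 1.2-point absolute difference) and a risk ratio of about 0.94, a roughly 6% crude relative reduction; the published 8% figure comes from the time-to-event (Cox) analysis, which these raw counts cannot reproduce.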
On this blog, I have described ad nauseam the importance of biological precedent and plausibility, multiple comparisons, and the like, so I won't belabor those points again here. Suffice it to say that once these high-flying results are taken out of orbit for closer examination without spin, we see that multivitamins (and other supplements) create both expensive urine and expensive studies - and a lot of it just goes right down the drain.