Thursday, September 27, 2012

True Believers: Faith and Reason in the Adoption of Evidence

In last week's NEJM, in an editorial response to an article demonstrating that physicians, in essence, probability adjust (a la Expected Utility Theory) the likelihood that data are true based on the funding source of a study, Editor-in-Chief Jeffrey M. Drazen implored the journal's readership to "believe the data." Unfortunately, he did not answer the obvious question: which data? A perusal of the very issue in which his editorial appears, as well as this week's journal, considered in the context of more than a decade of related research, demonstrates just how ironic and ludicrous his invocation is.

This November marks the eleventh anniversary of the publication, with great fanfare, of Van den Berghe's trial of intensive insulin therapy (IIT) in the NEJM. That article was followed by what I have called a "premature rush to adopt the therapy" (I should have called it a stampede), the creation of research agendas in multiple countries and institutions devoted to its study, the amassing of reams of robust data failing to confirm the original results, and a reluctance to abandon the therapy that is rivaled in its tenacity only by the enthusiasm that drove its adoption. In light of all the data from the last decade, I am convinced of only one thing - that it remains an open question whether control of hyperglycemia within ANY range is of benefit to patients.
Suffice it to say that the Van den Berghe data have not suffered from a lack of believers - the Brunkhorst, NICE-SUGAR, and Glucontrol data have - and it would seem that in many cases what we have is not a lack of faith so much as a lack of reason when it comes to data. The publication of an analysis of hypoglycemia using the NICE-SUGAR database in the September 20th NEJM, and of a trial in this week's NEJM involving pediatric cardiac surgery patients by Agus et al, gives researchers and clinicians yet another opportunity to apply reason and reconsider their belief in IIT and, for that matter, in the treatment of hyperglycemia in general.
The NICE-SUGAR investigators report numerous associations that they discovered in the trial database, which can be grouped into several themes. First, hypoglycemia occurred more frequently, occurred earlier, and was more severe in patients receiving IIT. Second, the occurrence of hypoglycemia was associated with death in both the IIT and control groups. Third, as known from the original study, patients randomized to IIT were more likely to die, an inconvenient tidbit that I worry has received inadequate attention since NICE-SUGAR was published more than three years ago.

The question that remains is whether hypoglycemia is part of the causal pathway to death, or whether it is an association or epiphenomenon, a marker for some other part of the causal pathway. This question is likely to remain unanswered, but I can tell you what most clinicians think: they think hypoglycemia is likely to be part of the causal pathway to bad outcomes. This can be logically inferred from their behavior, namely that they treat hypoglycemia, even mild hypoglycemia, rather than brush it off as a marker of disease severity. Of course, it could be an epiphenomenon with its own untoward downstream consequences, but that still makes it part of a causal pathway - a multifactorial one. That they think it's causal does not make it so, but I think it is important to recognize that the community does not have equipoise here - if the blood sugar is less than 60 mg/dL, you can bet dollars to donuts that it will get treated.
This week, the NEJM published the results of a trial that is a variation on the theme of the original IIT trial (in which the majority of participants were adult cardiac surgery patients). Agus et al studied pediatric cardiac surgery patients and used the original IIT target range of 80-110 mg/dL. A total of 980 children were enrolled (so the confidence intervals weren't super tight, but tight enough to discern important trends), and the primary outcome was healthcare-associated infections at 30 days. Table 3 in the article shows outcomes by study group, including the primary outcome, mortality, length of stay, and others, and there are really no strong trends in favor of either group. In short, this therapy does not substantially affect the course or outcome of these patients (but, like so many others, it did significantly and substantially increase the rate of hypoglycemia, in spite of the use of a subcutaneous continuous glucose monitoring device).
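For a sense of what "not super tight, but tight enough" means with 980 children, here is a minimal back-of-the-envelope sketch in Python; the per-arm split and the infection rates in it are hypothetical placeholders of mine, not the trial's actual numbers.

```python
# Back-of-the-envelope: width of a 95% CI for a difference in infection
# rates with roughly 490 children per arm (980 total). The rates below
# are hypothetical placeholders, NOT the Agus et al. results.
import math

n1 = n2 = 490              # approximate per-arm enrollment
p1, p2 = 0.09, 0.08        # hypothetical 30-day infection proportions

diff = p1 - p2
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)  # normal approximation
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"difference = {diff:.3f}, 95% CI ({lo:+.3f}, {hi:+.3f})")
# With ~490 per arm the interval spans a few percentage points in either
# direction: wide enough to miss a small effect, narrow enough to exclude
# a large one.
```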
And finally, in an unrelated article published online first on August 27th in the NEJM, Thiele et al report the results of a trial of intraaortic balloon pump counterpulsation (IABP) in patients with acute myocardial infarction and cardiogenic shock. IABP use spans almost half a century and is based on its several "obvious" and logical effects on hemodynamics and coronary blood flow, as well as several decades of registry data, which have gained it a Class IB recommendation in US guidelines. But behold the fallibility of surrogate endpoints and registries: this, the largest such RCT to date of IABP in its showcase indication, showcased its utter failure to influence any measured outcome in patients assigned to IABP as compared with control.
Which brings me full circle to the article that was the subject of Drazen's editorial. Kesselheim et al, in the September 20th issue of the NEJM, reported that physicians' confidence in a study's results, their ratings of study rigor, and their willingness to adopt the studied therapy varied as a function of the reported source of funding for the study. Willingness to adopt fell by 50% if the study was industry-funded as compared with NIH-funded. And this, Drazen takes issue with. But is it wrong to look askance at data that are profit-driven? Is it unfounded to treat all data with a certain degree of mistrust? Perhaps a suspicion of industry is just one of MANY factors that may lead a rational person to probability adjust the truth of any dataset, creating in essence his own confidence interval to reflect residual uncertainty. Perhaps we should be cultivating Rational Reasoners instead of True Believers.
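One crude way to make that probability adjustment explicit is an Ioannidis-style calculation of the post-study probability that a "positive" finding is true, with a bias term the reader sets according to how much he trusts the source. A minimal sketch follows; the prior, power, and bias values are illustrative assumptions of mine, not figures from the Kesselheim study.

```python
# Illustrative sketch of "probability adjusting" a positive finding.
# All numbers are made up for illustration.

def prob_finding_true(prior, power=0.8, alpha=0.05, bias=0.0):
    """Ioannidis-style post-study probability that a 'positive' result is true.

    prior -- pre-study probability that the hypothesis is true
    bias  -- fraction of would-be negative results reported as positive anyway
    """
    true_pos = prior * (power + bias * (1 - power))
    false_pos = (1 - prior) * (alpha + bias * (1 - alpha))
    return true_pos / (true_pos + false_pos)

# A reader might assign a larger bias term to a trial whose sponsor has a
# stake in the result, and so arrive at a lower post-study probability:
print(prob_finding_true(prior=0.3, bias=0.05))   # lower assumed bias  -> ~0.78
print(prob_finding_true(prior=0.3, bias=0.20))   # higher assumed bias -> ~0.60
```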
This all reminds me of a Grand Rounds I gave at Ohio State University, shortly after arriving there in 2005, about the adoption of therapies based on evidence. I was researching biases in the interpretation of evidence at the time, and during the talk I argued for more rapid adoption of therapies based on recent clinical trial evidence, after showing my own data about biases in interpretation that favored non-adoption. After the talk, Earl "The Pearl" Metz approached me and said, "What about all the therapies we have seen adopted and later abandoned over the years, such as steroids in critical illness?" I forget my exact response at the time, which was probably something like "you gotta go on the best evidence you have at the time," but I think Earl was harkening to a wisdom that day which I did not begin to appreciate until several years later. After you have witnessed the rise and fall of countless therapies over the years, you become more circumspect, and you may wish, based on this caution, to probability adjust your own estimates of a therapy's effectiveness by some fudge factor to account for general uncertainty in life and in all studies, no matter how apparently well done the study, no matter how small the P-value, no matter who the sponsor. So when I hear the admonition "Believe the data," it summons me to consider all the data, including data from studies that are not yet done. (See Ioannidis, JAMA, Aug. 8, 2012.)
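For what it's worth, that fudge factor can be written down explicitly. Here is a minimal sketch of one way to do it, a normal-normal Bayesian update against a skeptical prior centered on no effect; the effect size, standard error, and degree of skepticism below are all hypothetical.

```python
# A minimal sketch of "probability adjusting" a reported effect with a
# skeptical prior. All numbers are hypothetical illustrations, not data
# from any trial discussed above.

def shrink_toward_null(estimate, se, prior_sd):
    """Normal-normal Bayesian update with a skeptical prior centered at 0.

    estimate -- reported effect size (e.g., months of added survival)
    se       -- standard error of the reported estimate
    prior_sd -- how large an effect you consider plausible a priori;
                smaller values encode more skepticism
    """
    w = (1 / se**2) / (1 / se**2 + 1 / prior_sd**2)      # weight given to the data
    post_mean = w * estimate                              # shrunken estimate
    post_sd = (1 / (1 / se**2 + 1 / prior_sd**2)) ** 0.5  # residual uncertainty
    return post_mean, post_sd

# A reported 6-month survival gain (SE 2 months), viewed through a
# moderately skeptical prior (SD 3 months):
print(shrink_toward_null(6.0, 2.0, 3.0))   # posterior mean ~4.2 months
```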

2 comments:

  1. Suppose I read that a new study shows that treatment X results in longer survival than treatment Y, by a median of 6 months. I feel I somewhat know how to interpret and use that information. Then, at the end of the paper, I notice that the study was funded by a drug company. What should be my new estimate of the increase in survival? 6 months? 3 months? 0 months?

    If this is a study that has not been done before, this is the only information I have (other than my personal beliefs, which should be accounted for via Bayes' theorem). I see no evidence at all for 3 months, and the only "evidence" for 0 months is that this was the hypothesis the study presumably tested, which of course is no evidence at all.

    I suppose one should "down-weight" this new study in the Bayes calculation, acting as if the sample sizes were smaller than they actually were, or something like that. But I really see no reason to act as if the study had not been done.

  2. Tom, this is a great question, and I don't know the answer. I was taught as a medical student, resident, and fellow from 1994-2005 to embrace EBM as the new standard for clinical care, like a panacea. But after a decade of reading and critically evaluating studies and observing trends, I have become much more circumspect. I think we need to consider priors very seriously before we read a study's results. Some priors, as with sepsis studies, are so low that I don't think any p-value will convince me. And the industry shenanigans go on and on. For me it's not a matter of dismissing industry-funded studies outright, but rather of being more critical of them. Are there alternatives? Do the effects justify the expense and side effects? Is the endpoint solid? Was the deck stacked? I can offer no pat answer; each study must be evaluated on its individual merits. Thanks for your interest. (A rough sketch of the kind of down-weighting you describe appears after these comments.)

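A rough sketch of the down-weighting described in the first comment: keep the reported estimate but treat the study as if it enrolled fewer patients than it did, which widens the uncertainty rather than moving the estimate to 0 or 3 months. The numbers and the 50% weight below are arbitrary illustrations, not recommendations.

```python
# Down-weighting a study by crediting it with only a fraction of its
# effective sample size. All numbers are hypothetical illustrations.
import math

estimate = 6.0      # reported median survival gain, months
se = 2.0            # hypothetical standard error of that estimate
weight = 0.5        # credit the study with half its actual sample size

se_downweighted = se / math.sqrt(weight)   # SE scales like 1/sqrt(n)
lo = estimate - 1.96 * se_downweighted
hi = estimate + 1.96 * se_downweighted
print(f"estimate {estimate:.1f} months, 95% interval ({lo:.1f}, {hi:.1f})")
# The best guess is still ~6 months; the study simply counts for less
# when it is combined with priors or with future studies.
```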
