Monday, May 2, 2016

Hope: The Mother of Bias in Research

I realized the other day that underlying every slanted report or overly-optimistic interpretation of a trial's results, every contorted post hoc analysis, every Big Pharma obfuscation, is hope.  And while hope is generally a good, positive emotion, it engenders great bias in the interpretation of medical research.  Consider this NYT article from last month:  "Dashing Hopes, Study Shows Cholesterol Drug Had No Effect on Heart Health."  The title itself reinforces my point, as do several quotes in the article.
“All of us would have put money on it,” said Dr. Peter Libby, a Harvard cardiologist. The drug, he said, “was the great hope.”
Again, hope is wonderful, but it blinds people to the truth in everyday life, and I'm afraid researchers are no more immune to its effects than the laity.  In my estimation, three main categories of hope creep into the evaluation of research and foment bias:

  1. Hope for a cure, prevention, or treatment for a disease (on the part of patients, investigators, or both)
  2. Hope for career advancement, funding, notoriety, being right (on the part of investigators) and related sunk cost bias
  3. Hope for financial gain (usually on the part of Big Pharma and related industrial interests)
Consider prone positioning for ARDS.  For over 20 years, investigators have hoped that prone positioning improves not only oxygenation but also outcomes (mostly mortality).  So is it any wonder that, after the most recent trial and in spite of the 4 or 5 previous failed trials, the community enthusiastically declared "success!" and "Prone Positioning works!"?  Of course it is no wonder - this has been the hope for decades.

But consider what the most recent trial represents through the lens of replicability:  a failure to replicate previous results showing that prone positioning does not improve mortality.  The recent trial is the outlier.  It is the "false positive" rather than the previous trials being the "false negatives."

This way of interpreting the trials of prone positioning in the aggregate should be an obvious one, and it astonishes me that it took me so long to see the results this way - as a single failure to replicate previously replicable negative results.  But it hearkens back to the underlying bias - we view results through the magnifying glass of hope, and it distorts our appraisal of the evidence.

Indeed, I have been accused of being a nihilist because of my views on this blog, which some see as derogating the work of others or an attempt to dash their hopes.  But these critics engage, or wish me to engage, in a form of outcome bias - the value of the research lies in the integrity of its design, conduct, analysis, and reporting, not in its results.  One can do superlative research and get negative results, or shoddy research and get positive results.  My goal here is and always has been to judge the research on its merits, regardless of the results or the hopes that impel it.

(Aside:  Cholesterol researchers have a faith or hope in the cholesterol hypothesis - that cholesterol is a causal factor in pathways to cardiovascular outcomes.  Statin data corroborate this, and preliminary PCSK9 inhibitor data do, too.  But how quickly we engage in hopeful confirmation bias!  If cholesterol is a causal factor, it should not matter how you manipulate it - lower the cholesterol, lower cardiovascular events.  The fact that it does appear to matter how you lower it suggests that either there is a multiplicity of agent effects (untoward and unknown effects of some agents negate some of their beneficial effects in the cholesterol causal pathway) or that cholesterol levels are epiphenomena - markers of the effects of statins and PCSK9 inhibitors on the real, but as yet undelineated, causal pathways.  Maybe the fact that we can easily measure cholesterol and that it is associated with outcomes in untreated individuals is a convenient accident of history that led us to trial statins, which work in ways that we do not yet understand.)

Tuesday, February 23, 2016

Much Ado About Nothing? The Relevance of New Sepsis Definitions for Clinical Care Versus Research

What's in a name?  That which we call a rose, by any other name would smell as sweet. - Shakespeare, Romeo and Juliet Act II Scene II

The Society of Critical Care Medicine is meeting this week, JAMA devoted an entire issue to sepsis and critical illness, and my twitter feed is ablaze with news of release of a new consensus definition of sepsis.  Much laudable work has been done to get to this point, even as the work is already generating controversy (Is this a "first world" definition that will be forced upon second and third world countries where it may have less external validity?  Why were no women on the panel?).  Making the definition of sepsis more reliable, from a sensitivity and specificity standpoint (more accurate), is a step forward for the sepsis research enterprise, for it will allow improved targeting of inclusion criteria for trials of therapies for sepsis, and better external validity when those therapies are later applied in a population that resembles those enrolled.  But what impact will/should the new definition have on clinical care?  Are the-times-they-are-a-changing?

Diagnosis, a fundamental goal of clinical medicine, is important for several reasons, chief among them:

  1. To identify the underlying cause of symptoms and signs so that treatments specific to that illness can be administered
  2. To provide information on prognosis, natural history, course, etc for patients with or without treatment
  3. To reassure the physician and patients that there is an understanding of what is going on; information itself has value even if it is not actionable
Thus redefining sepsis (or even defining it in the first place) is valuable if it allows us to institute treatments that would not otherwise be instituted, or provides prognostic or other information that is valuable to patients.  Does it do either of those two things?

Wednesday, February 10, 2016

A Focus on Fees: Why I Practice Evidence Based Medicine Like I Invest for Retirement

"He is the best physician who knows the worthlessness of the most medicines." - Ben Franklin

This blog has been highly critical of evidence, taking every opportunity to strike at any vulnerability of a trial or research program.  That is because this is serious business.  Lives and limbs hang in the balance, pharmaceutical companies stand to gain billions from "successful" trials, investigators' careers and funding are on the line if chance findings don't pan out in subsequent investigations, sometimes well-meaning convictions blind investigators and others to the truth; in short, the landscape is fertile for bias, manipulation, and even fraud.  To top it off, many of the questions about how to practice or deal with a particular problem have scant or no evidence to bear upon them, and practitioners are left to guesswork, convention, or pathophysiological reasoning - and I'm not sure which among these is most threatening.  So I am often asked, how do you deal with the uncertainty that arises from fallible evidence or paucity of evidence when you practice?

I have ruminated about this question and how to summarize the logic of my minimalist practice style for some time but yesterday the answer dawned on me:  I practice medicine like I invest in stocks, with a strategy that comports with the data, and with precepts of rational decision making.

Investors make numerous well-described and wealth-destroying mistakes when they invest in stocks.  Experts such as John Bogle, Burton Malkiel, David Swensen and others have written influential books on the topic, utilizing data from studies in economics (financial and behavioral).  Key among the mistakes that investors make are trying to select high performers (such as mutual funds or hedge fund managers), chasing performance, and timing the market.  The data suggest that professional stock pickers fare little better than chance over the long run, that you cannot discern who will beat the average over the long run, and that the excess fees you are charged by high performers will negate any benefit they might otherwise have conferred to you.  The experts generally recommend that you stick with strategies that are proven beyond a reasonable doubt: a heavy concentration in stocks with their long track record of superior returns, diversification, and strict minimization of fees.  Fees are the only thing you can guarantee about your portfolio's returns.

Thursday, February 4, 2016

Diamox Results in Urine: General and Specific Lessons from the DIABOLO Acetazolamide Trial

The trial of acetazolamide to reduce duration of mechanical ventilation in COPD patients was published in JAMA this week.  I will use this trial to discuss some general principles about RCTs and make some comments specific to this trial.

My arguable but strong prior belief, before I even read the trial, is that Diamox (acetazolamide) is ineffectual in acute and chronic respiratory failure, or that it is harmful.  Its use is predicated on a "normalization fallacy" which guides practitioners to try to achieve euboxia (normal numbers).  In chronic respiratory acidosis, the kidneys conserve bicarbonate to maintain normal pH.  There was a patient we saw at OSU in about 2008 who had severe COPD with a PaCO2 in the 70s and chronic renal failure with a bicarbonate under 20.  A well-intentioned but misguided resident checked an ABG and the patient's pH was on the order of 7.1.  We (the pulmonary service) were called to evaluate the patient for MICU transfer and intubation, and when we arrived we found him sitting at the bedside comfortably eating breakfast.  So it would appear that even if the kidneys can't conserve enough bicarbonate to maintain normal pH, patients can get along with acidosis, but obviously evolution has created systems to maintain normal pH.  Why you would want to interfere with this highly conserved system to increase minute ventilation in a COPD patient you are trying to wean is beyond the reach of my imagination.  It just makes no sense.

This brings us to a major problem with a sizable proportion of RCTs that I read:  the background/introduction provides woefully insufficient justification for the hypothesis that the RCT seeks to test.  In the background of this paper, we are sent to references 4-14.  Here is a summary of each:

4.)  A review of metabolic alkalosis in a general population of critically ill patients
5.)  An RCT of acetazolamide for weaning COPD patients showing that it doesn't work
6.)  Incidence of alkalosis in hospitalized patients in 1980
7.)  A 1983 translational study to delineate the effect of acetazolamide on acid base parameters in 10 patients
8.)  A 1982 study of hemodynamic parameters after acetazolamide administration in 12 patients
9.)  A study of metabolic and acid base parameters in 14 patients with cystic fibrosis 
10.) A retrospective epidemiological descriptive study of serum bicarbonate in a large cohort of critically ill patients
11.)  A study of acetazolamide in anesthetized cats
12.-14.)  Commentary and pharmacodynamic studies of acetazolamide by the authors of the current study

Wednesday, December 23, 2015

Narrated and Abridged: There is (No) Evidence for That: Epistemic Problems in Critical Care Medicine

Below is the narrated video of my PowerPoint presentation on Epistemic Problems in Critical Care Medicine, which provides a framework for understanding why we have both false positives and false negatives in clinical trials in critical care medicine and why we should be circumspect about our "evidence base" and our "knowledge".  This is not trivial stuff, and is worth the 35 minutes required to watch the narration of the slideshow.  It is a provocative presentation which gives compelling reasons to challenge our "evidence base" in critical care and medicine in general, in ways that are not widely recognized but perhaps should be, with several suggestions about assumptions that need to be challenged and revised to make our models of reality more reliable.  Please contact me if you would like me to give an iteration of this presentation at your institution.


Tuesday, November 10, 2015

Peersnickety Review: Rant on My Recent Battle With Peer Reviewers

I'd like to relate a tale of exasperation with the peer review process that I recently experienced and that is probably all too familiar - but one that most folks are too timid to complain publicly about.

Nevermind that laypersons think that peer review means that your peers are reviewing your actual data for accuracy and fidelity (they are not, they are reviewing only your manuscript, final analyses, and conclusions), which causes them to be perplexed when revelations of fraudulent data published in top journals are reported.  Nevermind that the website Retraction Watch, which began as a small side project, now has daily and twice-daily postings of retracted papers.  Nevermind that some scientists have built entire careers on faked data.  Nevermind that the fact that something has been peer reviewed provides very little in the way of assurance that the report contains anything other than rubbish.  Nevermind that leading investigators publish the same reviews over and over in different journals with the same figures and sometimes the same text.

The entire process is cumbersome, time consuming, frustrating, and of dubious value as currently practiced.

Last year I was invited by the editors of Chest to write a "contemporary review of ionized calcium in the ICU - should it be measured?  should it be treated?"  I am not aware of why I was selected for this, but I infer that someone suggested me as the author because of my prior research in medical decision making and because of the monograph we wrote several years back called Laboratory Testing in the ICU which applied principles of rational decision making such as Bayesian methods and back-of-the-envelope cost benefit analyses to make a framework of rational laboratory testing in the ICU.  I accepted the invitation, even knowing it would entail a good deal of work for me that would be entirely uncompensated, save for buttressing my fragile ego, he said allegorically.

Now, consider for an instant the extra barriers that I, as a non-academic physician, faced in agreeing to do this.  As a non-academic physician, I do not have access to a medical library, and of course the Chest editors do not have a way to grant me access.  That is, non-academic physicians doing scholarly work such as this are effectively disenfranchised from the infrastructure that they need to do scholarly work.  Fortunately for me, my wife was a student at the University of Utah during this time so I was able to access the University library with her help.  Whether academic centers and peer-reviewed journals ought to have a monopoly on this information is a matter for debate elsewhere, and not a trivial one.

Sunday, October 11, 2015

When Hell Freezes Over: Trials of Temperature Manipulation in Critical Illness

The bed is on fire
Two articles published online ahead of print in the NEJM last week deal with actual and attempted temperature manipulation to improve outcomes in critically ill patients.

The Eurotherm3235 trial was stopped early because of concerns of harm or futility.  This trial enrolled patients with traumatic brain injury (TBI) and elevated intracranial pressure (ICP) and randomized them to induced hypothermia (which reduces ICP) versus standard care.  There was a suggestion of worse outcomes in the hypothermia group.  I know that the idea that we can help the brain with the simple maneuver of lowering body temperature has great appeal and what some would call "biological plausibility," a term that I henceforth forsake and strike from my vocabulary.  You can rationalize the effect of an intervention any way you want using theoretical biological reasoning.  So from now on I'm not going to speak of biological plausibility, I will call it biological rationalizing.  A more robust principle, as I have claimed before, is biological precedent - that is, this or that pathway has been successfully manipulated in a similar way in the past.  It is reasonable to believe that interfering with LDL metabolism will improve cardiovascular outcomes because of decades of trials of statins (though agents used to manipulate this pathway are not all created equal).  It is reasonable to believe that intervening with platelet aggregation will improve outcomes from cardiovascular disease because of decades of trials of aspirin and Plavix and others.  It is reasonable to doubt that manipulation of body temperature will improve any outcome because there is no unequivocal precedent for this, save for warming people with hypothermia from exposure - which basically amounts to treating the known cause of their ailment.  This is one causal pathway that we understand beyond a reasonable doubt.  If you get exposure, you freeze to death.  If we find you still alive and warm you, you may well survive.

Wednesday, October 7, 2015

Early Mobility in the ICU: The Trial That Should Not Be

I learned via twitter yesterday that momentum is building to conduct a trial of early mobility in critically ill patients.  While I greatly respect many of the investigators headed down this path, forthwith I will tell you why this trial should not be done, based on principles of rational decision making.

A trial is a diagnostic test of a hypothesis, a complicated and costly test of a hypothesis, and one that entails risk.  Diagnostic tests should not be used indiscriminately.  That the RCT is a "Gold Standard" in the hierarchy of testing hypotheses does not mean that we should hold it sacrosanct, nor does it follow that we need a gold standard in all cases.  Just like in clinical medicine, we should be judicious in our ordering of diagnostic tests.

The first reason that we should not do a trial of early mobility (or any mobility) in the ICU is because in the opinion of this author, experts in critical care, and many others, early mobility works.  We have a strong prior probability that this is a beneficial thing to be doing (which is why prominent centers have been doing it for years, sans RCT evidence).  When the prior probability is high enough, additional testing has decreasing yield and risks false negative results if people are not attuned to the prior.  Here's my analogy - a 35-year-old woman with polycystic kidney disease who is taking birth control presents to the ED after collapsing with syncope.  She had shortness of breath and chest pain for 12 hours prior to syncope.  Her chest x-ray is clear and bedside ultrasound shows a dilated right ventricle.  The prior probability of pulmonary embolism is high enough that we don't really need further testing; we give anticoagulants right away.  Even if a V/Q scan (creatinine precludes CT) is "low probability" for pulmonary embolism, we still think she has it because the prior probability is so high.  Indeed, the prior probability is so high that we're willing to make decisions without further testing, hence we gave heparin.  This process follows the very rational Threshold Approach to Decision Making proposed by Pauker and Kassirer in the NEJM in 1980, which is basically a reformulation of von Neumann and Morgenstern's Expected Utility Theory to adapt it to medical decisions.  Distilled, it states in essence: "when you get to a threshold probability of disease where the benefits of treatment exceed the risks, you treat."  And so let it be with early mobility.  We already think the benefits exceed the risks, which is why we're doing it.  We don't need an RCT.  As I used to ask the housestaff over and over until I was cyanotic: "How will the results of that test influence what you're going to do?"
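
To make the threshold logic concrete, here is a minimal sketch in Python of the Pauker-Kassirer calculation; the utilities and the pretest probability are hypothetical numbers of my own choosing, for illustration only.

```python
# Sketch of the Pauker & Kassirer (NEJM, 1980) treatment threshold.
# All utilities and probabilities below are hypothetical, for illustration only.

def treatment_threshold(benefit: float, harm: float) -> float:
    """Probability of disease above which treating beats withholding treatment.

    benefit: net utility gained by treating a patient who HAS the disease
    harm:    net utility lost by treating a patient who does NOT have it
    """
    return harm / (harm + benefit)

benefit = 0.20   # hypothetical gain from anticoagulating a true PE
harm = 0.02      # hypothetical loss (bleeding risk) from anticoagulating a non-PE

threshold = treatment_threshold(benefit, harm)   # ~0.09
pretest = 0.85                                   # hypothetical clinical estimate for the vignette

print(f"treatment threshold: {threshold:.2f}")
print(f"pretest probability: {pretest:.2f} -> treat: {pretest > threshold}")
```

Once the estimated probability sits this far above the threshold, a further test can change management only if it is powerful enough to drag the probability back below the threshold - which, as the vignette illustrates, a "low probability" V/Q scan cannot do.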

Notice that this logical approach to clinical decision making shines a blinding light upon "evidence based medicine" and the entire enterprise of testing hypotheses with frequentist methods that are deaf to prior probabilities.  Can you imagine using V/Q scanning to test for PE without prior probabilities?  Can you imagine what a mess you would find yourself in with regard to false negatives and false positives?  You would be the neophyte medical student who thinks "test positive, disease present; test negative, disease absent."  So why do we continue ad nauseam in critical care medicine to dismiss prior probabilities and decision thresholds and blindly test hypotheses in a purist vacuum?
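
The same point in a few lines of code: the posterior probability after a test depends on the prior, and the identical "negative" result means entirely different things at different pretest probabilities.  The likelihood ratio below is a hypothetical round number, not a measured property of V/Q scanning.

```python
# Sketch: updating a pretest probability with a likelihood ratio (via odds).
# The likelihood ratio is a hypothetical round number, for illustration only.

def posttest_probability(pretest: float, likelihood_ratio: float) -> float:
    pretest_odds = pretest / (1 - pretest)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

LR_NEGATIVE = 0.2   # hypothetical LR for a "low probability" result

for pretest in (0.05, 0.50, 0.85):
    post = posttest_probability(pretest, LR_NEGATIVE)
    print(f"pretest {pretest:.2f} -> post-test {post:.2f}")

# pretest 0.05 -> post-test 0.01
# pretest 0.50 -> post-test 0.17
# pretest 0.85 -> post-test 0.53
# The same "negative" test nearly excludes disease in the first patient
# and leaves it more likely than not in the third.
```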

The next reasons this trial should not be conducted flow from the first.  The trial will not have a high enough likelihood ratio to sway the high prior below the decision threshold; if the trial is "positive" we will have spent millions of dollars to "prove" something we already knew at a threshold above our treatment threshold; if the trial is positive, some will squawk "It wasn't blinded" yada yada yada in an attempt to dismiss the results as false positives; if the trial is negative, some will, like the tyro medical student, declare that "there is no evidence for early mobility" and similar hoopla and poppycock; or the worst case:  the trial shows harm from early mobility, which will get the naysayers of early mobility very agitated.  But of course, our prior probability that early mobility is harmful is hopelessly low, making such a result highly likely to be spurious.  When we clamor about "evidence" we are in essence clamoring about "testing hypotheses with RCTs" and eschewing our responsibility to use clinical judgment, recognize the limits of testing, and practice in the face of uncertainty using our "untested" prior probabilities.

Consider a trial of exercise on cardiovascular outcomes in community dwelling adults - what good can possibly come of such a trial?  Don't we already know that exercise is good for you?  If so, a positive trial reinforces what we already know (but does little to convince sedentary folks to exercise, as they too already know they should exercise), but a negative trial risks sending the message to people that exercise is of no use to you, or that the benefit is too small for you to worry about.

Or consider the recent trials of EGDT which "refuted" the Rivers trial from 14 years ago.  Now, everybody is saying, "Well, we know it works, maybe not the catheters and the ScVO2 and all those minutiae, but in general, rapid early resuscitation works.  And the trials show that we've already incorporated what works into general practice!"

I don't know the solutions to these difficult quandaries that we repeatedly find ourselves in trial after trial in critical care medicine.  I'm confused too.  That's why I'm thinking very hard and very critically about the limits of our methods and our models and our routines.  But if we can anticipate not only the results of the trials, but also the community reaction to them, then we have guidance about how to proceed in the future.  Because what value does a mega-trial have, if not to guide care after its completion?  And even if that is not its goal (maybe its goal is just to inform the science), can we turn a blind eye to the fact that it will guide practice after its completion, even if that guidance is premature?

It is my worry that, given the high prior probability that a trial in critical care medicine will be "negative", the most likely result is a negative trial which will embolden those who wish to dismiss the probable benefits of early mobility and give them an excuse to not do it.

Diagnostic tests have risks.  A false negative test is one such risk.

Wednesday, July 22, 2015

There is (No) Evidence For That: Epistemic Problems in Evidence Based Medicine

Below is a PowerPoint presentation that I have delivered several times recently, including one iteration at the SMACC conference in Chicago.  It addresses epistemic problems in our therapeutic knowledge, and calls into question all claims of "there is evidence for ABC" and "there is no evidence for ABC."  Such claims cannot be taken at face value and need deeper consideration and evaluation considering all possible states of reality - gone is the cookbook or algorithmic approach to evidence appraisal as promulgated by the User's Guides.  Considered in the presentation are therapies for which we have no evidence, but which undoubtedly work (Category 1 - Parachutes), and therapies for which we have evidence of efficacy or lack thereof (Category 2), but that evidence is subject to false positives and false negatives for numerous reasons, including: the Ludic Fallacy, study bias (See: Why Most Published Research Findings Are False), type 1 and 2 errors, the "alpha bet" (the arbitrary and lax standard used for alpha, namely 0.05), Bayesian interpretations, stochastic dominance of the null hypothesis, and inadequate study power in general and that due to delta inflation and subversion of double significance hypothesis testing.  These are all topics that have been previously addressed to some degree on this blog, but this presentation presents them together as a framework for understanding the epistemic problems that arise within our "evidence base."  It also provides insights into why we have a generation of trials in critical care the results of which converge on the null and why positive studies in this field cannot be replicated.

Tuesday, June 2, 2015

Evolution Based Medicine: A Philosophical Framework for Understanding Why Things Don't Work

An afternoon session at the ATS meeting this year about "de-adoption" of therapies which have been shown to be ineffective was very thought-provoking, and the contrasts between it and the morning session on ARDS are nothing less than ironic.   As I described in the prior post about the baby in the bathwater, physicians seem to have a hard time de-adopting therapies.  Ask your colleagues at the next division conference if you should abandon hypothermia after cardiac arrest and rather just treat fever based on the TTM trial and the recent pediatric trial, and see what the response is.  Or, suggest that hyperglycemia (at any level in non-diabetic patients) in the ICU be observed rather than treated.  Or float the idea to your surgical colleagues that antibiotics be curtailed after four days in complicated intraabdominal infection, and see how quickly you are ushered out of the SICU.  Tell your dietitian that you're going to begin intentionally underfeeding patients, or not feeding them at all, and see what s/he says.  Propose that you discard sepsis resuscitation bundles, etc.  We have a hard time de-adopting.  We want to take what we have learned about physiology and pharmacology and apply it, to usurp control of and modify biological processes that we think we understand. We (especially in critical care) are interventionists at heart.

The irony occurred at ATS because in the morning session, we were told that there is incontrovertible (uncontroverted may have been a better word) evidence for the efficacy of prone positioning in ARDS (interestingly, one of the only putative therapies for ARDS that the ARDSnet investigators never trialed), and it was strongly suggested that we begin using esophageal manometry to titrate PEEP in ARDS.  So, in the morning, we are admonished to adopt, and in the afternoon we are chided to de-adopt a host of therapies.  Is this the inevitable cycle in critical care and medical therapeutics?  A headlong rush to adopt, then an uphill battle to de-adopt?

Friday, May 1, 2015

Is There a Baby in That Bathwater? Status Quo Bias in Evidence Appraisal in Critical Care

"But we are not here concerned with hopes and fears, only the truth so far as our reason allows us to discover it."  -  Charles Darwin, The Descent of Man

Status quo bias is a cognitive decision making bias that leads to decision makers' preference for the choice represented by the current status quo, even when the status quo is arbitrary or irrelevant.  Decision makers tend to perceive a change from the status quo as a loss and therefore their decisions are biased toward the status quo.  This can lead to preference reversals when the status quo reference frame is changed.  Status quo bias can be detected using a reversal test, i.e., manipulating the status quo either experimentally or via thought experiment to consider a change in the opposite direction.  If reluctance to change from the status quo exists in both directions, status quo bias is likely to exist.

My collaborators Peter Terry, Hal Arkes and I reported in a study published in 2006 that physicians were far more likely to abandon a therapy that was status quo or standard therapy based on new evidence of harm than they were to adopt an identical therapy based on the same evidence of benefit from a fictitious RCT (randomized controlled trial) presented in the vignette.  These results suggested that there was an asymmetric status quo bias - physicians showed a strong preference for the status quo in the adoption of new therapies, but a strong preference for abandoning the status quo when a standard of care was shown to be harmful.  Two characteristics of the vignettes used in this intersubject study deserve attention.  First, the vignettes described a standard or status quo therapy that had no support from RCTs prior to the fictitious one described in the vignette.  Second, this study was driven in part by what I perceived at the time was a curious lack of adoption of drotrecogin-alfa (Xigris), with its then purported mortality benefit and associated bleeding risk.  Thus, our vignettes had very significant trade-offs in terms of side effects in both the adopt and abandon reference frames.  Our results seemed to explain s/low uptake of Xigris, and were also consistent with the relatively rapid abandonment of hormone replacement therapy (HRT) after publication of the WHI, the first RCT of HRT.

Thursday, January 29, 2015

The Therapeutic Paradox: What's Right for the Population May Not Be Right for the Patient

Bad for the population, good for me
An article in this week's New York Times called "Will This Treatment Help Me? There's a Statistic for That" highlights the disconnect between the risks (and risk reductions) that epidemiologists, researchers, guideline writers, the pharmaceutical industry, and policy wonks think are significant and the risks (and risk reductions) patients intuitively think are significant enough to warrant treatment.

The authors, bloggers at The Incidental Economist, begin the article with a sobering look at the number needed to treat (NNT).  For the primary prevention of myocardial infarction (MI), if 2000 people with a 10% or higher risk of MI in the next 10 years take aspirin for 2 years, one MI will be prevented.  1999 people will have gotten no benefit from aspirin, and four will have an MI in spite of taking aspirin.  Aspirin, a very good drug on all accounts, is far from a panacea, and this from a man (me) who takes it in spite of falling far below the risk threshold at which it is recommended.

One problem with NNT is that for patients it is a gratuitous numerical transformation of a simple number that anybody could understand (the absolute risk reduction - "your risk of stroke is reduced 3% by taking coumadin"), into a more abstract one (the NNT - "if we treat 33 people with coumadin, we prevent one stroke among them") that requires retransformation into examples that people can understand, as shown in pictograms in the NYT article.  A person trying to understand stroke prevention with coumadin couldn't care less about the other 32 people his doctor is treating with coumadin; he is interested in himself.  And his risk is reduced 3%.  So why do we even use the NNT, why not just use the ARR?
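
The two are, after all, just reciprocals of one another - a point a few lines of arithmetic make plain (the 3% and the 1-in-2000 figures are the ones quoted above).

```python
# NNT and ARR carry the same information: NNT = 1 / ARR.

def nnt_from_arr(arr: float) -> float:
    """Number needed to treat, given an absolute risk reduction (as a fraction)."""
    return 1.0 / arr

def arr_from_nnt(nnt: float) -> float:
    return 1.0 / nnt

print(round(nnt_from_arr(0.03)))   # 33 -- "treat 33 people with coumadin to prevent one stroke"
print(arr_from_nnt(2000))          # 0.0005 -- the aspirin example: a 0.05% absolute risk reduction
```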

Saturday, January 17, 2015

Clinical Trialists Should Use Economies of Scale to Maximize Profits of Large RCTs

The lever is a powerful tool
I am writing (very slowly) a review article about ionized calcium in the ICU - should it be measured, and should it be treated?  There are several recent large observational studies that look at the association between calcium and outcomes of  critical illness, but being observational, they do not offer guidance as to whether chasing calcium levels with calcium gluconate or chloride will improve outcomes or whether hypo- or hyper-calcemia is simply a marker of severity of illness (the latter is of course my bet.)

Thinking about calcium levels and causation and repletion, one cannot help but think about all sorts of other levels we check in the ICU - potassium, magnesium, phosphate - and many other things we routinely do but about which we have no real inkling of an idea as to whether we're doing any patients any good.  (Arterial lines are another example.)  Are we just wasting our time with many of the things we do?  This question becomes more urgent as evidence mounts that much of what we do (in the ICU and elsewhere) is useless, wasteful, or downright harmful.  But who or what agency is going to fund a trial of potassium or calcium replacement in the ICU?  It certainly seems unglamorous.   Don't we have other disease-specific priorities that take precedence over such a trial?

I then realized that a good businessman, wanting to maximize the "profit" from a large, randomized controlled trial (and the dollars "invested" in it), would take advantage of economies of scale.  For those who are not business savvy (I do not imply that I am), business costs can be roughly divided into fixed costs and variable costs.  If you have a factory making widgets you have certain costs such as rent, advertising, and widget-making machines.  These costs are "fixed," meaning that they are invariable whether you make 100 widgets or 10,000 widgets.  Variable costs are the costs of materials, electricity, and human resources which must be scaled up as you make more widgets.  In general, the cost of making each widget goes down as the fixed costs are spread out over more widget units.  Additionally, if you can leverage your infrastructure to make wadgets, a product similar to a widget, you likewise increase profits by lowering costs per unit.
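
A toy calculation makes the point; the dollar figures are made up.

```python
# Toy illustration of economies of scale; all costs are made up.

FIXED_COSTS = 100_000.0        # rent, advertising, widget-making machines
VARIABLE_COST_PER_UNIT = 4.0   # materials, electricity, labor per widget

def cost_per_widget(units: int) -> float:
    return (FIXED_COSTS + VARIABLE_COST_PER_UNIT * units) / units

for units in (100, 1_000, 10_000, 100_000):
    print(f"{units:>7,} widgets -> ${cost_per_widget(units):,.2f} each")

# 100 widgets cost about $1,004 each; 100,000 widgets cost about $5 each.
# The trial analogy: once the enrollment and data infrastructure (the fixed cost)
# exists, answering an additional unglamorous question adds mostly variable cost.
```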

Saturday, October 11, 2014

Enrolling Bad Patients After Good: Sunk Cost Bias and the Meta-Analytic Futility Stopping Rule

Four (relatively) large critical care randomized controlled trials were published early in the NEJM in the last week.  I was excited to blog on them, but then I realized they're all four old news, so there's nothing to blog about.  But alas, the fact that there is no news is the news.

In the last week, we "learned" that more transfusion is not helpful in septic shock, that EGDT (the ARISE trial) is not beneficial in sepsis, that simvastatin (HARP-2 trial) is not beneficial in ARDS, and that parenteral administration of nutrition is not superior to enteral administration in critical illness.  Any of that sound familiar?

I read the first two articles, then discovered the last two and I said to myself "I'm not reading these."  At first I felt bad about this decision, but then I realized it is a rational one.  Here's why.

Saturday, July 12, 2014

Better the Devil You Know: Thrombolysis for Pulmonary Embolism

In my view, the task of the expert is to render the complex simple.  And the expert does do this, except when reality goes against his bets and complexity becomes a tool for obfuscating an unwanted result.

In 2002, Konstantinides compared alteplase plus heparin versus heparin alone for submassive pulmonary embolism (PE).  The simple message from this study was "alteplase now saves you from alteplase later" and the simple strategy is to wait until there is hemodynamic deterioration (shock) and then give alteplase.  Would that it were actually viewed so simply - I would not then get calls from stressed providers hemming and hawing about the septum bowing on the echo and the sinus tachycardia and the....

If you're a true believer, you think alteplase works - you want it to work.  So, you do another study, hoping that biomarkers better identify a subset of patients that will benefit from an up front strategy of thrombolysis.  Thus, the PEITHO study appeared in the April 10th, 2014 issue of the NEJM.  It too showed that fibrinolysis (with tenecteplase) now simply saved you from tenecteplase later.  But fibrinolysis now also causes stroke later, with an increase from 0.2% in the control group to 2.4% in the fibrinolysis group - and most of them were hemorrhagic.   Again, the strategic path is in stark relief - if your patient is dying of shock from PE, give fibrinolysis.  If not, wait - because less than 5% of them are going to deteriorate.

So we have vivid clarity provided by large modern randomized controlled trials guiding us on what to do with that subset of patients with PE that is not in shock.  For those that are in shock, most agree that we should give thrombolysis.

To muddy that clarity, Chatterjee et al report the results of a meta-analysis in the June 18th issue of JAMA in which they combine all trials they could find over the past 45 years (back to 1970!) of all patients with PE, regardless of hemodynamic status.  The result:  fewer patients died but more had bleeding.  We have now made one full revolution, from trying to identify subsets likely to benefit, to combining them all back together - I think I'm getting dizzy.

If the editorialist would look at his numbers as his patients likely would (and dispense with relative risk reductions), he would see that:

                 Death     Bleeding in the brain     Other Major Bleeding
Blood Thinner    3.89%     0.19%                     3.42%
Clot Buster      2.17%     1.46%                     9.24%
Difference       1.72%     -1.27%                    -5.82%

For almost every life that is saved, there is almost one (0.74) case of bleeding in the brain and there are 3.4 more cases of major bleeding.  And bear in mind that these are the aggregate meta-analysis numbers that include patients in shock and those not in shock - the picture is worse if you exclude those in shock.
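
Those two ratios fall directly out of the table; a quick check of the arithmetic:

```python
# Reproducing the trade-off arithmetic from the table above (percentage points).

arr_death = 3.89 - 2.17          # absolute reduction in death with thrombolysis
ari_ich = 1.46 - 0.19            # absolute increase in bleeding in the brain
ari_major_bleed = 9.24 - 3.42    # absolute increase in other major bleeding

print(f"brain bleeds per life saved:  {ari_ich / arr_death:.2f}")         # ~0.74
print(f"major bleeds per life saved:  {ari_major_bleed / arr_death:.2f}") # ~3.38
```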

Better the devil you know.

Monday, May 19, 2014

Sell Side Bias and Scientific Stockholm Syndrome: A Report from the Annual Meeting of the American Thoracic Society

What secrets lie inside?
Analysts working on Wall Street are sometimes categorized as working on either the "buy side" or the "sell side" depending on whether their firm is placing orders for stocks (buy side, such as institutional investors for mutual funds) or filling orders for stocks (sell side, which makes commissions on stock trades).  Sell side bias refers to any tendency for the sell side to "push" stocks via overly optimistic ratings and analyses.

Well, I'm at the American Thoracic Society (ATS) meeting in San Diego right now, and it certainly does feel like people - everyone - is trying to sell me something.  From the giant industry sponsored banners, to the emblazoned tote bags, to the bags of propaganda left at my hotel room door every morning, to the exhibitor hall filled with every manner of new and fancy gadgets (but closed to cameras), to the investigators themselves, everybody is trying to convince me to buy (or prescribe) something.  Especially ideas.  Investigators have a promotional interest in their ideas.  And they want you and me to buy into their ideas.  I have become convinced that investigators without industry ties (that dying breed) are just about as susceptible to sell side bias as those with industry ties.  Indeed, I have also noted that the potential consumer of many of the ideas himself seems biased - he wants things to work, too, and he has a ready explanation for why some ideas didn't pan out in the data (see below).  It's like an epidemic of scientific Stockholm Syndrome.

The first session I attended was a synopsis of the SAILS trial by the ARDSnet investigators, testing whether use of a statin, rosuvastatin, in patients with sepsis-incited lung injury would influence 60 day mortality.  The basis of this trial was formed by observational associations that patients on statins had better outcomes in this, that, and the other thing, including sepsis.  If you are not already aware of the results, guess whether rosuvastatin was beneficial in this study.

Saturday, April 26, 2014

Dear SIRS: Your Septic System Stinks

I perused with interest the April 2nd JAMA article on the temporal improvement in severe sepsis outcomes in Australia and New Zealand (ANZ) by Kaukonen et al this week.  Epidemiological studies like this remind me why I'm so fond of reading reports of RCTs:  because they're so much easier to think about.  Epidemiological studies have so many variables, measured and unmeasured, and so much confounding you have to consider. I spent at least five hours poring over the ANZ report, and then comparing it to the recent NEJM article about improved diabetes complications between 1990 and 2010, which is similar, but a bit more convincing (perhaps the reason it's in the NEJM).

I was delighted that the authors of the ANZ study twice referenced our delta inflation article and that the editorialists agree with the letter I wrote to AJRCCM last year advocating composite morbidity outcomes in trials of critical illness.  These issues dovetail - we have a consistent track record of failure to demonstrate mortality improvements in critical care, while we turn a blind eye to other outcomes which may be more tractable and which are often of paramount concern to patients.

Monday, April 21, 2014

Stowaway and Accidental Empiricist Humbles Physiological Theorists: The Boy in the Wheel Well

Kessler Peak in the Wasatch:  10,400 feet
Several years ago, I posted about empirical confirmation of West's theoretical blood gas results at altitude on Everest.  (Last week, an avalanche on Everest took more lives in a single day than any other in the history of the mountain.)  The remarkably low PaO2 values (mean 26 mm Hg) demonstrated by those authors (and the correspondingly low estimated SaO2 values) are truly incredible and even bewildering, especially from the perspective of clinical practice where we often get all bent out of shape with PaO2 values under 55 mm Hg or so.  Documentation of the PaO2 values in the "natural experiment" that mountaineers subject themselves to serves as fodder for pondering for those of us who are prone to daydreaming about physiology:  is tolerance of these low values possible only because of acclimatization and extreme physical fitness?  (but they're exercising, not just standing there!)  what is the lower safe limit of hypoxemia?  does it vary by age?  the presence of other illnesses?  is there a role for permissive hypoxemia in the practice of critical care?

Sunday, April 6, 2014

Underperforming the Market: Why Researchers are Worse than Professional Stock Pickers and A Way Out

I was reading in the NYT yesterday a story about Warren Buffett and how the Oracle of Omaha has trailed the S&P 500 for four of the last five years.  It was based on an analysis done by a statistician who runs a blog called Statistical Ideas, which has a post on p-values that links to this Nature article from a couple of months back that describes how we can be misled by P-values.  And all of this got me thinking.

We have a dual problem in medical research:  a.) conceiving alternative hypotheses which cannot be confirmed in large trials free of bias;  and b.) not being able to replicate the findings of positive trials.  What are the reasons for this?

Tuesday, April 1, 2014

Absolute Confusion: How Researchers Mislead the Public with Relative Risk

This article in Sunday's New York Times about gauging the risk of autism highlights an important confusion in the appraisal of evidence from clinical trials and epidemiological studies that appears to be shared by laypersons, researchers, and practitioners alike:  we focus on relative risks when we should be concerned with absolute risks.

The rational decision maker, when evaluating a risk or a benefit, is concerned with the absolute magnitude of that risk or benefit.  A proportional change from an arbitrary baseline (a relative risk) is irrelevant.  Here's an example that should bring this into keen focus.

If you are shopping and you find a 50% off sale, that's a great sale.  Unless you're shopping for socks.  At $0.99 a pair, you save $0.50 with that massive discount.  Alternatively, if you come across a 3% sale, but it's at the Audi dealership, that paltry discount can save you $900 on a $30,000 Audi A4.   Which discount should you spend the day pursuing?  The discount rate mathematically obscures the value of the savings.  If we framed the problem in terms of absolute savings, we would be better consumers.  But retailers know that saying "50% OFF!" attracts more attention than "$0.50 OFF!" in the sock department.  Likewise, car salesmen know that writing "$1000 BELOW INVOICE!" on the windshield looks a lot more attractive than "3% BELOW INVOICE!"
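
The same arithmetic in code, using the numbers above:

```python
# Relative discounts obscure absolute savings (the examples from above).

prices = {"socks": 0.99, "Audi A4": 30_000.00}
discounts = {"socks": 0.50, "Audi A4": 0.03}

for item, price in prices.items():
    savings = price * discounts[item]
    print(f"{item}: {discounts[item]:.0%} off saves ${savings:,.2f}")

# socks: 50% off saves roughly $0.50; Audi A4: 3% off saves $900.00.
# The rational shopper -- and the rational patient -- cares about the absolute number.
```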

Sunday, March 23, 2014

Lost Without a MAP: Blood Pressure Targets in Septic Shock

Another of the critical care articles published early online at www.nejm.org last week was this trial of High versus Low Blood-Pressure Target in Patients with Septic Shock.  In this multicenter, open-label trial, the authors enrolled 776 patients in France and randomized them to a target MAP (mean arterial pressure) of 65-70 mm Hg (low target) versus 80-85 (high target).  The hypothesis is that a higher pressure, achieved through vasopressor administration, will improve 28-day mortality.  If you don't already know the result, guess if the data from this trial support or confirm the hypothesis (the trial had 80% power to show a 10% absolute reduction in mortality).
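
For readers curious where a statement like "80% power to show a 10% absolute reduction" comes from, below is the standard two-proportion sample-size arithmetic.  The 45% baseline mortality is a placeholder I chose for illustration, not a figure taken from the trial.

```python
# Standard sample-size calculation for comparing two proportions (normal approximation).
# The 45% baseline mortality is a hypothetical placeholder, not a number from the trial.
from math import sqrt
from scipy.stats import norm

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Patients per arm to detect p1 vs p2 with two-sided alpha and the given power."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return round(numerator / (p1 - p2) ** 2)

# A 10-percentage-point absolute reduction from a hypothetical 45% baseline:
print(n_per_group(0.45, 0.35))   # ~376 per arm, i.e., roughly 750 patients in total
```

Note how the required size scales with the inverse square of the effect: halve the plausible absolute reduction and the trial needs roughly four times as many patients - the delta inflation problem discussed elsewhere on this blog.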

Thursday, March 20, 2014

Sepsis Bungles: The Lessons of Early Goal Directed Therapy

On March 18th, the NEJM published early online three original trials of therapies for the critically ill that will serve as fodder for several posts.  Here, I focus on the ProCESS trial of protocol guided therapy for early septic shock.  This trial is in essence a multicenter version of the landmark 2001 trial of Early Goal Directed Therapy (EGDT) for severe sepsis by Rivers et al.  That trial showed a stunning 16% absolute reduction in mortality in sepsis attributed to the use of a protocol based on physiological goals for hemodynamic management.  That absolute reduction in mortality is perhaps the largest for any therapy in critical care medicine.  If such a reduction were confirmed, it would make EGDT the single most important therapy in the field.  If such a reduction cannot be confirmed, there are several reasons why the Rivers results may have been misleading:

There were other concerns about the Rivers study and how it was later incorporated into practice, but I won't belabor them here.  The ProCESS trial randomized about 1350 patients among three groups, one simulating the original Rivers protocol, one to a modified Rivers protocol, and one representing "standard care," that is, care directed by the treating physician without a protocol.  The study had 80% power to demonstrate a mortality reduction of 6-7%.  Before you read further, please wager: will the trial show any statistically significant differences in outcome that favor EGDT or protocolized care?

Friday, February 28, 2014

Overdiagnosis and Mitigated Overdiagnosis: Ongoing Problems with Breast and Lung Cancer Screening

I got to thinking about cancer screening (again) in the last week after reading in BMJ about the Canadian National Breast Screening Study (CNBSS).  That article piqued my interest because I immediately recalled the brouhaha that ensued after the U.S. Preventative Services Task Force (USPSTF) recommended that women not get mammograms until  age 50 rather than age 40.  That uproar was similar to the outcry by urologists when the USPSTF recommended against screening for prostate cancer with PSA testing.  Meanwhile, changes in the cholesterol guidelines have incited intellectual swashbuckling among experts in that field.  Without getting into the details, observers of these events might generate the following hypotheses:
  1. People are comfortable with the status quo and uncomfortable with change
  2. People get emotionally connected to good causes and this makes the truth blurry, or invisible.  After you've participated in the Race for the Cure, it's hard to swallow the possibility that the linchpin of the Race might not be as useful as we thought, and is no longer recommended for a whole swath of women.
  3. People are terrified of cancer
  4. Screening costs money.  Somebody pockets that money.  Urologists and radiologists and gastroenterologists LOVE screening programs.  So do Porsche dealers.

Monday, February 10, 2014

Brief Updates on Hypothermia, Hyperglycemia, Cholesterol, Blood Pressure Lowering in Stroke and Testosterone

I've read a lot of interesting articles recently, but none that are sufficient fodder for a dedicated post.  So here I will update some themes from previous blog posts with recent articles from NEJM and JAMA that relate to them.

Prehospital Induction of Hypothermia After Cardiac Arrest
In this article in the January 1st issue of JAMA, investigators from King County Washington report the results of a trial which tested the hypothesis that earlier (prehospital) induction of hypothermia, by infusing cold saline, would augment the assumed benefit of hypothermia that is usually initiated in the hospital for patients with ventricular fibrillation.  Please guess what was the effect of this intervention on survival to hospital discharge and neurological outcomes.

You were right.  There was not even a signal, not a trend towards benefit, even though body temperature was lower by 1 degree Celsius and time to target hypothermia temperature in the hospital was one hour shorter.  However, the intervention group experienced re-arrest in the field significantly more often than the control group and had more pulmonary edema and diuretic use.  Readers interested in exploring this topic further are referred to this post on Homeopathic Hypothermia.

Hyperglycemic Control in Pediatric Intensive Care
In this article in the January 9th issue of NEJM, we are visited yet again by the zombie topic that refuses to die.  We keep looking for subgroups or populations that will benefit, and if we find one that appears to, it will be a Type I error, thinks the blogger with Bayesian inclinations.  In this trial, 1369 pediatric patients at 13 centers in England were randomized to tight versus conventional glycemic control.  Consistent with other trials in other populations, there was no benefit in the primary outcome, but tightly "controlled" children had much more, and more severe, hypoglycemia.  The "cost effectiveness" analysis they report is irrelevant.  You can't have "cost effectiveness" of an ineffective therapy.  My, my, how we continue to grope.

Wednesday, January 29, 2014

Does Investigating Delirium Make You Delirious? A Sober Look at Sedation and Analgesia in the ICU

Michael's Milk
I rarely use the Medical Evidence Blog to discuss review articles, but today's NEJM has one that I can't pass up about Sedation and Delirium in the Intensive Care Unit.  It is my opinion that we have gotten carried away by the torrent of articles, many in prominent journals, about delirium in the ICU and that while this is an important topic for research, it is extremely premature to try to translate the findings into practice, and moreover, that the approach to sedation suggested by the article is lacking in common sense.

As chronicled in the accompanying perspective article by D.S. Jones, delirium has been around as long as ICUs have, and the longer you're there, the more likely you will become delirious.  It's an exposure thing.  Thus, until somebody reports the results of a trial of delirium treatment or prevention that has important and undeniable effects on clinically relevant outcomes, I will continue to approach delirium as I always have - by going to great lengths to get patients out of bed, off the vent, and out of the ICU as fast as I possibly can - because these things benefit all patients regardless of whether they have an impact on delirium.

Thursday, January 23, 2014

White Noise: Trials of Pharmaceuticals for Alzheimer's Disease

"But we are not here concerned with hopes or fears, only with the truth as far as our reason allows us to discover it." - Charles Darwin

In yesterday's NEJM, the results of two trials of antiamyloid monoclonal antibodies (solanezumab and bapineuzumab) for Alzheimer's Disease (AD) are published.  I became interested in the evidence for AD treatments after the recent trial of Vitamin E and Memantine for AD (the TEAM-AD VA Cooperative Trial) was published in JAMA earlier this month.  Regular readers know that I think that the prior probability that vitamins, minerals, and antioxidants are beneficial for any disease outside of deficiency states is very low.  The vitamin E trial was the impetus for some background investigation which I will summarize below.

Friday, December 27, 2013

Billions and Billions of People on Statins? Damn the Torpedoes and Full Speed Ahead

Absolutely Relative
Risk is in the Mind of the Taker
Among the many editorials providing background and backlash about the new cholesterol guidelines is this one:  More Than a Billion People Taking Statins? by John Ioannidis, which echoes the worries of others that the result of the guidelines (which changed the 10-year risk threshold for treatment from 10% to 7.5%) may be that many more people (billions and billions?) will be prescribed statins.  But the title is a curious one - if statins are beneficial, should we lament their widespread prescription and adoption or is it just unfortunate that heart disease is so prevalent? Whose side are we on, the cure or the disease?

Are the premises of the guidelines flawed, leading to flawed extrapolations, or are the premises correct and we just don't like the implications?  Let's look at the premises - because if they're flawed, we may find that other premises we have accepted are flawed.

Wednesday, November 20, 2013

Chill Out: Homeopathic Hypothermia after Cardiac Arrest

In the Feb 21, 2002 NEJM, two trials of what came to be known as therapeutic hypothermia (or HACA - Hypothermia after Cardiac Arrest) were simultaneously published:  one by the HACA study group and another by Bernard et al.  During the past decade, I can think of only one other therapy which has caused such a paradigm shift in care in the ICU:  Intensive Insulin Therapy (ill-fated as it were).  Indeed, even though the 2002 studies specifically limited enrollment to out of hospital (OOH) cardiac arrest with either Ventricular Tachycardia (VT) or Ventricular Fibrillation (VF), the indications have been expanded at many institutions to include all patients with coma after cardiac arrest regardless of location or rhythm (or any other original exclusion criterion), so great has been the enthusiasm for this therapy, and so zealous its proponents.

Readers of this blog may know that I harbor measured skepticism for HACA even though I recognize that it may be beneficial.  From a pragmatic perspective, it makes sense to use it, since the outcome of hypoxic-ischemic encephalopathy (HIE) and ABI (Anoxic Brain Injury) is so dismal.  But what did the original two studies actually show?
  • The HACA group multicenter trial randomized 273 patients to hypothermia versus control and found that the hypothermia group had higher rates of "favorable neurological outcome" (a cerebral performance category of 1 or 2 - the primary endpoint) with RR of 1.40 and 95% CI 1.08-1.81; moreover, mortality was lower in the hypothermia group, with RR 0.74 and 95% CI 0.58-0.95
  • The Bernard et al study randomized 77 patients to hypothermia versus control and found that survival (the primary outcome) was 49% and 26% in the hypothermia and control groups, respectively, with P=0.046
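
(The relative risks and confidence intervals quoted above come from ordinary 2x2-table arithmetic; here is a generic sketch using made-up counts - not the actual HACA or Bernard data.)

```python
# Relative risk with a 95% CI from a 2x2 table (log-normal approximation).
# The counts below are invented for illustration; they are NOT the HACA or Bernard data.
from math import exp, log, sqrt

def relative_risk(events_tx: int, n_tx: int, events_ctrl: int, n_ctrl: int):
    """Return (RR, lower 95% CI bound, upper 95% CI bound)."""
    risk_tx = events_tx / n_tx
    risk_ctrl = events_ctrl / n_ctrl
    rr = risk_tx / risk_ctrl
    se_log_rr = sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctrl - 1 / n_ctrl)
    lower = exp(log(rr) - 1.96 * se_log_rr)
    upper = exp(log(rr) + 1.96 * se_log_rr)
    return rr, lower, upper

rr, lower, upper = relative_risk(60, 100, 45, 100)      # invented counts
print(f"RR {rr:.2f} (95% CI {lower:.2f}-{upper:.2f})")  # RR 1.33 (95% CI 1.02-1.75)
```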

Monday, November 18, 2013

Dead in the Water: Colloids versus Crystalloids for Fluid Resuscitation in the ICU

It is a valid question:  at what point has a concept been tested ad infinitum such that further testing is not worthwhile?  There are at least three reasons why additional study of a concept may not be justified:

  1. Because the prior probability of success is so low (based on extant trials) that a subsequent trial is unlikely to influence the posterior probability that any success represents the truth.  (This is a Bayesian or meta-analytic worldview.)
  2. Because the low probability of success does not justify the expense of additional trials
  3. Because the low probability of success violates bioethical precepts mandating that trials must have added value for patients and society
And so we have, in the November 6th edition of JAMA, the CRISTAL trial of colloids versus crystalloids for resuscitation in the ICU.  As is customary, I will leave it to interested readers to peruse the manuscript for details.  My task here is to provide some background and nuance.

Saturday, November 16, 2013

The Cardiologist Giveth, then the Cardiologist Taketh Away: Revision of the Cholesterol Guidelines

There has been quite a stir this week with the publication of the newest revision of the ACC/AHA guidelines for the treatment of cholesterol.  The New York Times is awash with articles summarizing or opining on the changes and many of the authors are perspicacious observers:
As the old Spanish proverb states, "rio revuelto, ganancia de pescadores" - when the river is stirred up, the fishermen benefit.  I will admit that I'm gloating a bit since I consider the new guidelines to be a tacit affirmative nod to several posts on the topic of the cholesterol hypothesis (CH).  (More posts here and here and here, among several others - search for "cholesterol" or "causal pathways" on the Medical Evidence Blog search bar.)

Sunday, November 3, 2013

The Intensivist Giveth Then the Intensivist Taketh Away: Esmolol in Septic Patients Receiving High Dose Norepinephrine

Two studies in the October 23/30 issue of JAMA serve as fodder for reflection on the history and direction of critical care research and the hypotheses that drive it.   Morelli et al report the results of a study of esmolol in septic shock.  To quickly summarize, this was a single-center dose-ranging study, the primary aim of which was to determine if esmolol could be titrated to a heart rate goal (primary outcome), presumably with the later goal of performing a phase 3 clinical trial to see if esmolol, titrated in such a fashion, could favorably influence clinical outcomes of interest.  154 patients with septic shock on high dose norepinephrine with a heart rate greater than 95 were enrolled, and heart rate was indeed lower in the esmolol group (P less than 0.001).  Perhaps surprisingly, hemodynamic parameters, lactate clearance, and pressor and fluid requirements were (statistically significantly) improved in the esmolol group.  Most surprising (and probably the reason why we find this published in JAMA rather than Critical Care Medicine - consider that outlier results such as this may get disproportionate attention), mortality in the esmolol group was 50% compared to 80% in the control group (P less than 0.001).  The usual caveats apply here:  a small study, a single center, lack of blinding.  And regular readers will guess that I won't swallow the mortality difference.  I'm a Bayesian (click here for a nice easy-to-use Bayesian calculator), there's no biological precedent for such a finding and it's too big a bite for me to swallow. So I will go on the record here as stating that I'm betting against similar results in a larger trial.
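
To put a number on that bet, here is the kind of back-of-the-envelope calculation such a calculator performs; the prior probability and assumed power are hypothetical inputs of my own, chosen only to illustrate the logic.

```python
# Post-study probability that a "significant" result reflects a real effect,
# in the spirit of Ioannidis (2005). The prior and power are hypothetical inputs.

def post_study_probability(prior: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """P(effect is real | the trial is statistically significant)."""
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# A 30-percentage-point mortality reduction has no biological precedent,
# so give it a low prior, say 2%:
print(f"{post_study_probability(prior=0.02):.2f}")   # ~0.25

# A single "positive" trial of an implausible hypothesis is still more likely
# to be a false positive than a true one; replication is what moves the posterior.
```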

I'm more interested in how we formulate the hypothesis that esmolol will provide benefit in septic shock.  I was a second year medical student in 1995 when Gattinoni et al published the results of a trial of "goal-oriented hemodynamic therapy" in critically ill patients in the NEJM.  I realize that critical care research as we now recognize it was in its adolescence then, as a quick look at the methods section of that article demonstrates.  I also recognize that they enrolled a heterogeneous patient population.  But it is worth reviewing the wording of the introduction to their article:

Recently, increasing attention has been directed to the hemodynamic treatment of critically ill patients, because it has been observed in several studies that patients who survived had values for the cardiac index and oxygen delivery that were higher than those of patients who died and, more important, higher than standard physiologic values.1-3 Cardiac-index values greater than 4.5 liters per minute per square meter of body-surface area and oxygen-delivery values greater than 650 ml per minute per square meter — derived empirically on the basis of the median values for patients who previously survived critical surgical illness — are commonly referred to as supranormal hemodynamic values.4

Saturday, October 12, 2013

Goldilocks Meets Walter White in the ICU: Finding the Temperature (for Sepsis and Meningitis) that's Just Right

In the Point/Counterpoint section of the October issue of Chest, two pairs of authors spar over whether fever should be controlled in sepsis by either pharmacological or external means.  Readers of this blog may recall this post wherein I critically appraised the Schortgen article on external cooling in septic shock that was in AJRCCM last year.  Apparently that article made a more favorable impression on some practitioners than it did on me, as the proponents of cooling in the Chest piece hang their hats on this article (and their ability to apply physiological principles to medical therapeutics).  (My gripes with the Schortgen study were many, including a primary endpoint that was of little value, cherry-picking the timing of the secondary mortality endpoint, and the lack of any biological precedent for manipulation of body temperature improving mortality in any disease.)

Reading the Point and Counterpoint piece (in addition to an online first article in JAMA describing a trial of induced hypothermia in severe bacterial meningitis - more on that later) allowed me to synthesize some ideas about the epistemology (and psychology) of medical evidence and its evaluation that I have been tossing about in my head for a while.  Both the proponent pair and the opponent pair of authors give some background physiological reasoning as to why fever may be, by turns, beneficial and detrimental in sepsis.  The difference, and I think this is typical, is that the proponents of fever reduction:  a.) seem much more smitten by their presumed understanding of the underlying physiology of sepsis and the febrile response; b.) focus more on minutiae of that physiology; c.) fail to temper their faith in application of physiological principles with the empirical data; and d.) grope for subtle signals in the empirical data that appear to rescue the sinking hypothesis.