Friday, July 10, 2009

Happy Anniversary to the Blog! Two Years Old!

The medical evidence blog has turned out to be a fruitful experience for me and, I hope, for others. The idea was conceived while I was at OSU auditing a course on capital punishment in the law school taught by the wonderful Douglas Berman, JD, who used a blog as part of the course material and who created the prominent Sentencing Law and Policy blog. That formative and enriching experience led me to create this blog to ruffle feathers in the medical evidence community, as an alternative to the numerous and sundry letters to the editor of the NEJM which I had theretofore been writing. (Every now and again I lose the ability to restrain myself and submit a letter in spite of the blog.) The experiment has paid off, I hope, and this blog provides fodder for thoughtful clinicians and researchers, as well as physicians in training and journal clubs. I hope that the tradition of the first two years will continue into perpetuity and that we will beat the bushes of evidence on this blog as we strive to understand the truth and the limitations of what is currently known, using our logic and our sense of reason to guide us. Thanks to all of you who have followed this blog for the encouragement to keep it going.

Cheers, Scott


Thursday, July 9, 2009

No Sham Needed in Sham Trials: Polymyxin B Hemoperfusion in Abdominal Septic Shock (Alternative Title: How Meddling Ethicists Ruin Everything)

This is a superlative article to jab at to demonstrate some interesting points about randomized controlled trials that have more basis in hope than reason and whose very design threatens to invalidate their findings: http://jama.ama-assn.org/cgi/content/abstract/301/23/2445?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=polymyxin&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT . Because endotoxin has an important role in the pathogenesis of gram-negative sepsis, there has been interest in interfering with it or removing it in the hopes of abating the untoward effects of the sepsis inflammatory cascade. Having learned from previous experiences/studies (e.g., http://content.nejm.org/cgi/content/abstract/324/7/429 ) that taking a poorly defined and heterogeneous illness (namely sepsis) and using a therapy that is expected to work in only a subset of patients with the illness (those with a gram-negative source) is a recipe for failure, the authors chose to study abdominal sepsis because they expected that the majority of such patients would have gram-negatives as a causative or contributory source of infection. They randomized such patients to receive standard care (not well defined) or the insertion of a dialysis catheter with subsequent hemoperfusion over a Polymyxin B impregnated surface, because this agent is known to adsorb endotoxin. The basic biological hypothesis is that removing endotoxin in this fashion will ameliorate the untoward effects of the sepsis inflammatory cascade in such a way as to improve blood pressure, other physiological parameters, and, hopefully, mortality as well. There is reason to begin one's reading of this report with robust skepticism. The history of modern molecular medicine, for well over 25 years, has been polluted with the vast detritus of innumerable failed sepsis trials founded on hypotheses related to modulation of the sepsis cascade.
During this period, only one agent has been shown to be efficacious, and even its efficacy remains highly doubtful to perhaps the majority of intensivists (myself excluded; see: http://content.nejm.org/cgi/content/abstract/344/10/699 ).


Mortality was not the primary endpoint in this trial, but rather was used for the early stopping rule. Even though I am currently writing an article suggesting that mortality may not be a good endpoint for trials of critical illness, this trial reminds me why the critical care community has selected this endpoint as the bona fide gold standard. Who cares if this invasive therapy increases your MAP from the already acceptable level of ~77 mm Hg to the supertarget level of 86? Who cares if it reduces your pressor requirements? Why would a patient, upon awakening from critical illness, thank his doctors for inserting a large dialysis catheter in him to keep his BP a little higher than it otherwise would have been? Why would he rather have a giant hole in his neck (or worse - GROIN!) than a little more Levophed? If it doesn't save your life or make your life better when you recover, why do you care? We desperately need to begin to study concepts such as "return to full functionality at three (or six) months" or "recovery without persistent organ failures at x, y, z months." (This latter term I would define as not needing ongoing therapy - such as oxygen therapy, tracheostomy, or dialysis - for the support of any lingering organ failure after critical illness that did not exist in the premorbid state.) Should I be counted as a "save" if my existence after the interventions of my "saviors" consists of residence in a nursing home, dependent on others for my care, with waxing and waning lucidity? What does society think about these questions? We should begin to ask.

And we segue to the stopping issue, which I find especially intriguing. Basing the stopping rule on a mortality difference seems to validate my points above, namely that the primary endpoint (MAP) is basically a worthless one - if it were not, or if it were not trumped by mortality, why would we not base stopping of the trial on MAP? (And if this is a Phase II or pilot trial, it should be named accordingly, methinks.) This small trial was stopped on the basis of a mortality difference significant at P=0.026, with the stopping boundary at P<0.029. For those not familiar with it, I will point out again on this blog this pivotal article warning of the hazards of early stopping rules (http://jama.ama-assn.org/cgi/content/abstract/294/17/2203 ). But here's the real rub. When they got these results at the first and only planned interim analysis, (deep breath), they consulted an ethicist. The ethicist said that it was unethical to continue the trial, because to do so would be to deny this presumably effective therapy to the control group. But does ANYONE in his or her right state of mind agree that this therapy is effective on the basis of these data? And if these data are not conclusive, does that not condemn future participants in a future trial to the same unfair treatment, namely randomization to placebo? Does not stopping the trial early just shift the burden to other people? It does worse. It invalidates to a large degree the altruistic motives of the participants (or their surrogates) in the current trial, because stopping it early invalidated it scientifically (per the above referenced article) and because stopping it early necessitates the performance of yet another, larger trial in which participants will be randomized to placebo, and which, it is fair to suspect, will demonstrate this therapy to be useless - which is tantamount to net harm, given the risk of catheters and the wasted resources of performing yet another trial.
Likewise, if we assume that this therapy IS beneficial, stopping the trial has reduced NET utility for current participants, because now NOBODY is receiving the therapy. So, from a consequentialist or utilitarian standpoint, overall utility is reduced and net harm has resulted from stopping the trial. What if the investigators of this trial had made it more scientifically valid from the outset by using a sham hemoperfusion device (an approach that itself would have caused an ethical maelstrom)? And what if the sham group had proved superior in terms of mortality - would the ethicists have argued for stopping the trial because continuing it would mean depriving patients of sham therapy? Would there have been a call for providing sham therapy to all patients with surgically managed abdominal sepsis? I write this with my tongue in my cheek, but the ludicrousness of it does seem to drive home the point that the premature stopping of this trial was neither ethically clear-cut nor obligatory, and that from a utilitarian standpoint, net negative utility (for society and for participants - for everyone!) has resulted from this move. And that segues me to the issue of sham procedures. It is abundantly obvious that patients with a dialysis catheter inserted for this trial (probably put in by an investigator, though this is not stated in the manuscript) were likely to receive more vigilant care. This is the whole reason that protocols were developed in critical care research, as a result of the early ECMO trials (Morris et al 1994), where it was recognized that you would have all sorts of confounding from the inability to blind treating physicians in such a study. While it is not feasible to blind an ECMO study, the investigators of this study do little to convince us that blinding was not possible or feasible, and they make light of the differences in care that may have resulted from the lack of blinding.
Moreover, they do not report on the use of protocols for patient care that might have minimized the impact of the lack of blinding, and, in a GLARING omission, they do not describe fluid balance in these patients - a highly discretionary aspect of care that clearly could have influenced the primary outcome and that could have differed between groups because of the lack of blinding and sham procedures. Unbelievable! (As an afterthought, even the mere increased stimulation [tactile, auditory, or visual] of patients in the intervention group, from more nursing or physician presence in the room, may have led to increases in blood pressure.) There are also some smaller points, such as the fact that, by my count, 10 patients (not accounting for multiple organisms) in the intervention group had gram-positive or fungal infections, making it difficult to imagine how the therapy could have influenced these patients. What if patients without gram-negative organisms isolated are excluded from the analysis? Does the effect persist? What is the p-value for mortality then? And that point segues me to a final one - if our biologically plausible hypothesis is that reducing endotoxin levels with this therapy leads to improvements in the parameters of interest, why, for the love of God, did we not measure and report endotoxin levels, perform secondary analyses of the effect of the therapy as a function of endotoxin levels, and report data on whether these levels were actually reduced by the therapy, thus supporting the most fundamental assumption of the biological hypothesis upon which the entire study is predicated?
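The hazard of unadjusted interim looks that the JAMA article above warns about is easy to demonstrate by simulation. The sketch below uses purely illustrative numbers (NOT the polymyxin trial's actual design, sample size, or event rates) to estimate the overall type I error of a null trial tested at a naive two-sided P<0.05 threshold both at a halfway interim look and at the final analysis:

```python
import random

# Monte Carlo sketch: how often does a trial with NO true treatment
# effect cross a nominal significance boundary at EITHER an interim
# look (halfway through enrollment) or the final analysis?
# All numbers are illustrative, not from any real trial.

random.seed(42)

def z_from_counts(d1, n1, d2, n2):
    """Two-proportion z statistic for mortality d1/n1 vs d2/n2."""
    p1, p2 = d1 / n1, d2 / n2
    p = (d1 + d2) / (n1 + n2)
    se = (p * (1 - p) * (1 / n1 + 1 / n2)) ** 0.5
    return 0.0 if se == 0 else (p1 - p2) / se

def simulate_trial(n_per_arm=100, p_death=0.5, z_crit=1.96):
    # Simulate deaths in two enrollment halves for each arm
    deaths = [[0, 0], [0, 0]]  # [arm][half]
    for arm in (0, 1):
        for half_idx in (0, 1):
            deaths[arm][half_idx] = sum(random.random() < p_death
                                        for _ in range(n_per_arm // 2))
    half = n_per_arm // 2
    # Interim look after half the patients are enrolled
    z_interim = z_from_counts(deaths[0][0], half, deaths[1][0], half)
    # Final look on everyone
    z_final = z_from_counts(sum(deaths[0]), n_per_arm,
                            sum(deaths[1]), n_per_arm)
    # "Positive" trial if either look crosses the unadjusted boundary
    return abs(z_interim) > z_crit or abs(z_final) > z_crit

n_sims = 20000
false_positives = sum(simulate_trial() for _ in range(n_sims))
print(f"Empirical type I error with one unadjusted interim look: "
      f"{false_positives / n_sims:.3f}")
```

The empirical false-positive rate comes out noticeably above the nominal 0.05, which is exactly why trialists use alpha-spending boundaries (O'Brien-Fleming, Pocock, and the like) rather than testing at the naive threshold at every look.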

Saturday, June 20, 2009

Randomized controlled trial of an intervention to reduce gun-related violence: A Parody

I am incredibly disappointed that the journal that I consider to be the very pinnacle of medical evidence continues to print ideological propaganda without any regard whatever for evidence and logic when it suits the editorial agenda: http://content.nejm.org/cgi/content/extract/360/22/2360 . Unadulterated propaganda pieces related to capital punishment, abortion, and gun control are shamelessly and predictably aligned with a singular political stance, and evidence and logic are eschewed entirely in favor of dogmatic and sanctimonious deontology. Without slinging any more mud on my favorite journal, I will demonstrate this in the following parody:

ARTICLE TITLE:
Efficacy of a gun control policy in reducing gun-related violence: A multi-state, multi-center, randomized controlled trial.

BACKGROUND:
Gun related violence results in tens of thousands of deaths (mostly suicides and homicides) each year. Interventions to reduce the toll of gun-related violence are desperately needed.

METHODS:
We used CDC data on gun-related deaths over the last decade to identify populations at risk for gun-related violence. However, our inclusion criteria did not comport with NIH-funding guidelines about inclusion of women and minorities and vulnerable populations such as former prisoners and felons and people with mental disabilities, some of which were over-represented and some of which were under-represented in the at-risk group we identified. Therefore, we dropped inclusion and exclusion criteria altogether, and randomized the entire populations of several states to the intervention (moratorium on firearms ownership defined as a complete ban imposed by state legislatures coupled with Directly Observed Confiscation) versus control (no moratorium or ban). Causes of deaths in each group were tracked and adjudicated by medical examiners in each state.

RESULTS:
The two populations were well matched on baseline demographic characteristics. There was no difference in the gun-related fatality rate between the intervention and control groups (20.1 per 100,000 in the intervention group and 20.2 per 100,000 in the control group; P=0.98) based on an intention-to-treat analysis. There was considerable cross-over between groups, and this potentially explains the failure of the intervention to produce the intended result. In subjects who crossed over from the intervention to the control group (hereafter called "criminals"), the odds of gun-related violence increased 1000.42-fold (p=0.00001). Many criminals were responsible for more than one gun-related death and crossed over multiple times from intervention to control. There was wide variability in the rates of gun-related violence on the basis of geography and other factors, with fatality rates 10-100 times higher in Baltimore, MD than in Provo, UT.

CONCLUSIONS:
An intervention to reduce gun-related violence failed to achieve this goal, largely as a result of cross-over from the intervention to the control group by "criminals". These criminals undermined the efficacy of the intervention. Moreover, the high geographic variability in gun related violence suggests that factors unrelated to the availability of firearms may drive gun-related violence rates. Future studies in limiting gun-related violence should focus on at-risk groups identified through crime statistics, and should not be NIH funded. Moreover, recrudescent crossover in future studies should be limited by incarceration of criminals for life without parole. Future studies might also focus on more traditional ways of preventing recrudescent cross-over (such as capital punishment). The Personalized Healthcare movement might also provide guidance on how to deal with this challenging problem.

Monday, May 11, 2009

Autism, Vaccines, and The Tragedy of the Commons: Whose Tragedy and Whose Commons?

In last week's NEJM, there is an article about the purported perils of forgoing vaccinations for your kids. The article is here: http://content.nejm.org/cgi/content/full/360/19/1981 .

There are a few points that I think deserve to be made about this issue. First, I digress to outline briefly the idea of "The Tragedy of the Commons."

The Tragedy of the Commons refers to the notion that "commons" such as parks or, more traditionally, grazing areas will be more fruitfully enjoyed by all if they are used responsibly. If everybody grazes as many sheep as s/he pleases on the commons, soon enough there will be no grass for the sheep to eat. So it stands to reason that one should graze his sheep responsibly and sparingly on the commons. Paradoxically, there is little incentive to exercise such restraint, because insomuch as you do, your neighbor does not, and the sparing of the commons effected by you is obliterated by your neighbor, or his neighbor, etc. As when passing a sign enjoining you not to walk on the grass, you are wont to say "ah, but what difference will it make?", to which your neighbor might respond "yeah, but if we all did that....". The sign is there to regulate the commons, which would be depredated were it not for some social policy forcing restraint. So long as the MAJORITY refrains from treading on the lush monocultured turf, it will remain lush. But after a certain threshold number of defectors tramples it, the commons is lost.

And such, I will demonstrate, is the issue with refusing vaccinations. The threat that results is not so much to the unvaccinated child, but rather to the commons - to the herd immunity. So far, it seems to me, the medical and public health establishments have sought to appeal to the sensitivities of parents to their own children's welfare rather than to supplicate them to "do what's right for society." To me, this is an overtly disingenuous approach. The vaccination of any individual child, when the baseline vaccination rate is above some critical threshold, is an act of social responsibility much more than it is something essential for the health of the individual child. I suspect that some vaccine-refusing parents (let's call them Refuseniks, shall we?) recognize this, and this recognition, combined with a tendency for rebellion, creates an impetus for refusal, especially if they think that the vaccine may cause autism or some other untoward effect. Let's look at some numbers.

First let's start with an estimate of the incidence of Measles with and without vaccination (if you take issue with these estimates and the resulting conclusions, please furnish your own numbers with a reference):

Measles with vaccination:
0.0000010000000 per annum
Measles without vaccination:
0.0002500000000

Even though this is a 250x increase, it is still only an absolute increase of:
0.0002490000000

So, if you fail to vaccinate your child, you increase his/her annual risk of measles by only about 0.025%.

But the case fatality rate for measles is only about 0.3%. So, you increase your child's annual risk of death from measles by only:
0.0000007470000

That's a very small number, my friends.

Now let's also say that you're concerned about the risk of autism, for whatever reason, even a specious one. And you ask your pediatrician who is skeptical, so s/he refers you to the most recent good quality epidemiological data, the Danish data from NEJM in 2002: http://content.nejm.org/cgi/content/abstract/347/19/1477 .

In this study, the upper 95% CI for an association of MMR with Autism was 1.24. Thus, a 24% increase in the risk of autism is certainly within the range of plausibility based on these data. The base rate of autism in this study was:

Base rate of autism:
0.0005880000000
Rate of autism with a 24% increase (assuming it may be as high as the UCI):
0.0007290000000
Absolute increase in autism rate:
0.0001410000000

Now, I realize that autism may not be as bad as death for a child, but this POTENTIAL increase in autism, consistent with good data, far overshadows the risk of death from measles attributable to failure to vaccinate your child.

So it stands to reason that people who have, for whatever reasons, a value system that makes autism a grave concern for them are NOT acting terribly far outside the bounds of rationality by refusing vaccination for their individual child.

Now if their child has siblings, and/or they live in a community where there is a high rate of vaccination refusal, these numbers are out the window and the individual child risk is much harder to calculate and probably much higher.

(I recognize also that I have used data on the ANNUAL measles risk which may be cumulative and this may sway the numbers in favor of vaccination since presumably the risk of autism from vaccine exposure is a one-time event.)
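For readers who want to check the arithmetic, the back-of-the-envelope comparison above can be reproduced in a few lines. The inputs are the post's own illustrative estimates (and the Madsen upper confidence limit), not authoritative epidemiology:

```python
# Reproducing the back-of-the-envelope arithmetic above, using the
# post's own illustrative estimates (not authoritative epidemiology).

measles_risk_vaccinated   = 0.000001    # per annum
measles_risk_unvaccinated = 0.00025     # per annum
case_fatality_rate        = 0.003       # ~0.3%

# Absolute (not relative) increases are what matter to an individual
abs_measles_increase = measles_risk_unvaccinated - measles_risk_vaccinated
abs_death_increase   = abs_measles_increase * case_fatality_rate

autism_base_rate = 0.000588
upper_ci_rr      = 1.24                 # upper 95% CI from Madsen et al.
abs_autism_increase = autism_base_rate * (upper_ci_rr - 1)

print(f"Absolute increase in annual measles risk:       {abs_measles_increase:.7f}")
print(f"Absolute increase in annual measles death risk: {abs_death_increase:.10f}")
print(f"Worst-case absolute increase in autism risk:    {abs_autism_increase:.7f}")
print(f"Ratio (autism worst case / measles death):      "
      f"{abs_autism_increase / abs_death_increase:.0f}x")
```

The point of the comparison survives any reasonable tweaking of these inputs: the worst-case absolute autism risk (if the true effect sat at the upper confidence limit) is orders of magnitude larger than the absolute annual risk of measles death averted by vaccinating one child.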

I do not mean to imply here that I am against vaccination (I am not), nor that I believe that autism is caused by MMR or other vaccines (I do not), but I think 4 points are germane to this conversation which may be emblematic of other issues in public health where officials are apt to take a paternalistic stance:

1.) The individual child's absolute risk of death from Measles is VERY small, as is the increase in risk from failure to be vaccinated.

2.) The risk of autism from MMR based on the Madsen data has a wide confidence interval that does not exclude what some parents may consider a meaningful increased risk of 24%. The meaningfulness of this risk may be especially important in the context of comparing it with another very small risk, such as that of death or disability from measles, or from motor vehicle accidents.

3.) The refusal to vaccinate is more of a social responsibility issue, a Tragedy of the Commons, than it is an individual patient safety and health issue. (Such is also the case with PPDs, TB, and INH prophylaxis, but don't get me started on that.)

4.) The risks that parents take for their children through vaccination refusal are similar to the risks they take via motor vehicle travel. We are not encouraging parents to cut in half the number of miles they drive with their children per annum to reduce the risk of death from MVAs from 0.000145 to half of that, so why are we so adamant about their getting the MMR? Because it's an issue of the commons, not the individual.

And if it is an issue of civic responsibility, we should frame it as such, rather than guilt-tripping parents about exposing their children to risk via neglect. Driving a massive Ford Excursion may make your own children safer while making everybody else's worse off (because of the size of your projectile or its impact on the environment); vaccination is the mirror image - better for the commons, if not for your own children.
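As a coda, the "critical threshold" of the commons invoked above can be made concrete with the standard herd-immunity formula: for a pathogen with basic reproduction number R0, transmission dies out once a fraction 1 - 1/R0 of the population is immune. The R0 values below are rough textbook figures used for illustration, not precise estimates:

```python
# A minimal sketch of the "critical threshold" behind herd immunity.
# R0 values are rough textbook figures, not precise estimates.

def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to halt spread:
    1 - 1/R0."""
    return 1.0 - 1.0 / r0

for disease, r0 in [("Measles", 15.0), ("Mumps", 5.0), ("Influenza", 2.0)]:
    pct = herd_immunity_threshold(r0) * 100
    print(f"{disease:10s} R0 ~{r0:>4.0f} -> ~{pct:.0f}% must be immune")
```

Note the asymmetry this creates for a highly contagious disease like measles: with a threshold above 90%, only a small fraction of defectors is enough to breach the commons, which is why individually rational refusal can be collectively ruinous.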

Thursday, April 30, 2009

Luck that Looks Like Logic? Statins (Rosuvastatin), the Cholesterol Hypothesis, and Causal Pathways

The Cholesterol Hypothesis (CH) - namely, that the association between elevated cholesterol (LDL) and cardiovascular disease and events is a CAUSAL one, and thus that intervening to lower cholesterol prevents these diseases - has seduced mainstream medicine for decades. However, much if not most of the evidence for the causality of cholesterol in atherogenesis, and for its reversal by lowering cholesterol, derives from studies of "Statins" or HMG-CoA reductase inhibitors; indeed, the evidence that lowering LDL cholesterol (or raising HDL) through other pathways has salutary effects on cardiovascular outcomes is scant at best, as has been chronicled on this blog (see posts on torcetrapib and ezetimibe/Vytorin). Not myself immune to the beguiling allure of the CH, I admit that I take niacin, in spite of normal HDL levels and scant to no trustworthy evidence that, in addition to raising HDL and lowering LDL, it will have any primary (or secondary or tertiary) preventive effects for me.

In yesterday's NEJM, Glynn et al report the results of an analysis of data on a secondary endpoint from the JUPITER trial of Rosuvastatin (http://content.nejm.org/cgi/content/abstract/360/18/1851 ). The primary aim of the trial was to determine whether Rosuvastatin was effective for primary prevention of cardiovascular events in people with normal cholesterol levels and elevated CRP levels. The secondary endpoint described in the article was the occurrence of venothromboembolism during the study period. Because I see no obvious evidence of foul play, and because this study was simply impeccably designed, conducted, and reported, I'm going to hereafter ignore the fact that it was industry sponsored, and that there is probably some motive of "off-label promotion by proxy" (http://medicalevidence.blogspot.com/2008/06/off-label-promotion-by-proxy-how-nejm.html ) here...

Lo and behold: Rosuvastatin lowered venothromboembolism rates. The difficulties posed by ascertainment of this outcome notwithstanding, this trial provides convincing evidence of a statistically significant reduction in DVT and PE event rates (which were very low - on the order of 0.2 events per 100 person-years) during the four-year period of study. And this does not make a whole lot of sense from the standpoint of the CH. There's something more going on. Like an anti-inflammatory property of Statins. Which is very interesting and noteworthy and worthwhile in its own right. But I'm more interested in what kind of light this sheds on the validity of the CH.

Because of my interest in the frailty of the normalization hypothesis/heuristic (the notion that you can just measure something, raise or lower it to the normal range, and make things ALL better), I am obviously a reserved skeptic of the Cholesterol Hypothesis, which was bolstered by, if not altogether reared on, data from trials of statins. And these new data, combined with emerging evidence that statins may have salutary effects on lung inflammation in ARDS and COPD, among perhaps others, make me wonder - was it just pure LUCK rather than a triumph of LOGIC that the first widely tested and marketed drug for cholesterol happened to both reduce cardiovascular endpoints AND lower cholesterol, even though not necessarily as part of the same causal pathway? Is it just "true, true, and unrelated"? Is it the anti-inflammatory properties, or some other piece of the complex biochemical effects of these drugs on the body, that leads to their clinical benefits? Other examples come to mind: Is blood pressure lowering just an epiphenomenon of another primary ACE-inhibitor effect on heart failure? That these effects appear to be superficially and intuitively related does not mean that they lie on an obvious causal pathway.

What if things had happened another way? What if Statins had eluded discovery for another 20-30 years? What if study of the cholesterol hypothesis had meanwhile proceeded through evaluation of cholestyramine, colestipol, niacin, and other drugs, and what if it had been "disconfirmed" by the failure of these agents to reduce cardiovascular outcomes? These hypotheticals will be answerable only after more study of Statins and other drugs, as well as their mechanisms. The data presented by the Harvard group, as well as their other work with CRP, are but one leg of a long journey toward elucidation of the biological mechanisms of atherogenesis, coagulation, and downstream clinical events.

Tuesday, April 21, 2009

Judicial use of DNA "evidence" and Misuse of Statistics: The Prosecutor's Fallacy

A recent article in the NYT described the adoption by the judicial system of a technology that began as a biomedical research tool (I resist to some extent the notion that DNA technology has directly been a boon to clinical patient care). (See: http://www.nytimes.com/2009/04/19/us/19DNA.html.) This powerful technology, when used appropriately in appropriate circumstances, provides damning evidence of guilt because of its high specificity - the probability of a coincidental match is stated to be as low as 1 x 10^-9. Thus, in a case such as that of the infamous (and nefarious) OJ Simpson, in which there is strong suspicion of guilt BEFORE the DNA evidence is evaluated, a positive match, in the absence of laboratory error or misconduct (neither of which can be routinely discounted - see: http://www.nytimes.com/2001/09/26/us/police-chemist-accused-of-shoddy-work-is-fired.html), essentially proves, beyond any reasonable doubt, the genetic identity of the person to whom the sample belongs. (Yes, that does indeed mean that OJ Simpson is the perpetrator of the heinous murder of Nicole Brown Simpson, he said unapologetically.)

In the case of old OJ, he was one among perhaps 10 - let's say 100 - suspects. Let's assume that the LAPD had their act together (this also requires a leap of faith) and that the perpetrator was among the suspects who had been rounded up, but that we had no evidence to differentiate their respective probabilities of guilt. Thus, each of the 100 has a 1% probability of being guilty, on the basis of circumstantial evidence alone, or a relation to or relationship with the victim(s), or just being in the wrong place at the wrong time - whatever. Given that 1% probability of guilt, we can make a 2x2 table representing the probability of guilt given a positive test, which is ultimately what we want to know. I don't know the sensitivity of DNA fingerprinting, but it doesn't really matter, because the high specificity of the test drives the likelihood ratio. I will assume it's 50% for simplicity:


In this "population" of 100 suspects (by suspects, I mean persons whose probability of having committed the crime is enhanced over that of a random member of the overall population by virtue of other evidence), even if all 100 suspects have equiprobable guilt, a DNA "match" is damning indeed and all but assures the guilt of the matching suspect (with the caveats mentioned above.)

But consider a different situation, one in which there are no convincing suspects. Suppose that the law enforcement authorities compare a biological sample with a large DNA database to look for a match. Note that we do not use the term "suspect" here - because it implies that there is some suspicion that has limited this population from the overall population. When a database (of unsuspected persons) is canvassed, no such suspicion exists. Rather, a fishing expedition ensues, and the probabilities, when computed, come out quite different. Suppose there are DNA samples from 100 million individuals in the database, and the entire database is canvassed. Now our 2x2 table looks like this:


Whereas in our previous example of a population of "suspects" guilt was all but assured based on a "match", in this example of canvassing a database, guilt is dubious. But what do you suppose will happen in such an investigation? Who will suspend his judgment and conduct a fair investigation of this "matching" individual, who is now a "suspect" based only on "evidence" from this misused test? How tempting will it be for detectives to selectively gather information and see reality through the distorted lens of the "infallible" DNA testing? How can such a person hope to exonerate himself?
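The two scenarios reduce to a single application of Bayes' rule. This sketch uses the assumptions stated in the post (sensitivity 50%, coincidental-match probability 1 in a billion) and shows how the prior probability - 1 in 100 suspects versus, assuming the true source is even in the database, 1 in 100 million entries - drives the posterior probability of guilt:

```python
# Sketch of the two scenarios via Bayes' rule. Assumed numbers come
# from the post: sensitivity 50%, coincidental-match probability
# 1 in a billion, 100 suspects vs. a 100-million-person database.

def prob_guilty_given_match(prior, sensitivity=0.5, false_match=1e-9):
    """P(guilty | DNA match) by Bayes' rule."""
    true_pos = sensitivity * prior           # guilty AND matched
    false_pos = false_match * (1 - prior)    # innocent AND matched
    return true_pos / (true_pos + false_pos)

# Scenario 1: 100 equiprobable suspects -> prior 1/100
ppv_suspects = prob_guilty_given_match(prior=0.01)

# Scenario 2: trawling a database of 100 million, assuming (at most)
# one true source is in it -> prior 1/100,000,000
ppv_database = prob_guilty_given_match(prior=1e-8)

print(f"P(guilty | match), 100 suspects:   {ppv_suspects:.6f}")
print(f"P(guilty | match), database trawl: {ppv_database:.3f}")
```

With these inputs, the posterior in the suspect scenario is a near-certainty, while in the database-trawl scenario it falls to roughly 83% - a figure that sounds high but falls far short of "beyond a reasonable doubt," which is precisely the point.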

This is the Prosecutor's Fallacy. It bolsters arguments by the ACLU and others that the trend of snowballing DNA sample collection should be curtailed, and that limits should be placed on canvassing efforts to solve crimes.

One way to limit the impact of the Prosecutor's Fallacy and of false-positive "matches" from canvassing efforts would be to force investigators to assign certain profiles to the imaginary "suspect" whom they hope to find in the database and to canvass only the subgroup of the database that matches those characteristics. For example, if the crime occurred in Seattle, the canvassing effort could be limited to the subset of the database that lived in or near Seattle, since it is unlikely that a person in Baltimore committed the crime. Other characteristics that are probabilistically associated with certain crimes could be used to limit broad canvassing efforts.

As the use of medical technology expands both inside and outside medicine, we have a responsibility to utilize it wisely and rationally. The strategy of database screening and canvassing is reckless, unwise, and unjust, and should be summarily and duly curtailed.

Wednesday, April 8, 2009

The PSA Screening Quagmire - If Ignorance is Bliss then 'Tis Folly to be Wise?

The March 26th NEJM was a veritable treasure trove of interesting evidence, so I can't stop after praising NICE-SUGAR and railing on intensive insulin therapy. If 6000 patients (40,000 screened) seemed like a commendable and daunting study to conduct, consider that the PLCO Project Team randomized over 76,000 US men to screening versus control (http://content.nejm.org/cgi/reprint/360/13/1310.pdf) and the ERSPC Investigators randomized over 162,000 European men in a "real-time meta-analysis" of sorts (wherein multiple simultaneous studies were conducted with similar but different enrollment requirements and combined; see: http://content.nejm.org/cgi/reprint/360/13/1320.pdf). This is, as the editorialist points out, a "Herculean effort," and that is fitting and poignant - because ongoing PSA screening efforts in current clinical practice represent a Herculean effort to reduce the morbidity and mortality of this disease, and this reinforces the importance of the research question: are we wasting our time? Are we doing more harm than good?

The lay press was quick to start trumpeting the downfall of PSA screening with headlines such as "Prostate Test Found to Save Few Lives". But for all their might, both of these studies give me, a longtime critic of cancer screening efforts, a good bit of pause. (Pulmonologists may be prone to "sour grapes" as a result of the failures of screening for lung cancer.)

Before I briefly summarize the studies and point out some interesting aspects of each, allow me to indulge in a few asides. First, I direct you to this interesting article in Medical Decision Making, "Cure Me Even if it Kills Me". This wonderful study in judgment and decision making shows how difficult it is for patients to live with the knowledge that there is a cancer, however small, growing in them. They want it out. And they want it out even if they are demonstrably worse off with it cut out or x-rayed out or whatever. It turns out that patients place a value on "getting rid of it" that probably arises from the emotional cost of living with the knowledge that there's a cancer in you. I highly recommend this article to anyone interested in cancer screening or treatment.

This article calls to mind an unforgettable patient from my residency, whom we screened in compliance with VA mandates at the time. Sure enough, this patient with heart disease had a mildly elevated PSA, and sure enough, he had a cancer on biopsy. And we discussed treatments in concert with our Urology colleagues. While he had many options, this patient agonized and brooded and could not live with the thought of a cancer in him. He proceeded with radical prostatectomy, the most drastic of his options. And I will never forget his look of crestfallen resignation every time I saw him after that surgery, because he thereafter came to clinic in diapers, having been rendered incontinent and impotent by the operation. He was more full of self-flagellating regret than any other patient I have seen in my career. This poor man and his experience certainly jaded me at a young age and made me highly attuned to the pitfalls of PSA screening.

Against this backdrop where cancer is the most feared diagnosis in medicine, we feel an urge towards action to screen and prevent, even when there is a marginal net benefit of cancer screening, and even when other greater opportunities for improving health exist. I need not go into the literature about [ir]rational risk appraisal other than to say that our overly-exuberant fear of cancer (relative to other concerns) almost certainly leads to unrealistic hopes for screening and prevention. Hence the great interest in and attention to these two studies.

In summary, the PLCO study showed no reduction in prostate-cancer-related mortality from DRE (digital rectal examination) and PSA screening. Absence of evidence is not evidence of absence, however, and a few points about this study deserve to be made:

~Because of high (and increasing) screening rates in the control group, this was essentially a study of the "dose" of screening. The dose in the control group was ~45% and that in the screening group was ~85%. So the question the study asked was not really "does screening work?" but rather "does doubling the dose of screening work?" Had there been a favorable trend in this study, I would have been tempted to double the effect size of the screening to infer the true effect, reasoning that if increasing screening from 40% to 80% reduces prostate cancer mortality by x%, then increasing screening from 0% to 80% would reduce it by 2x%. Alas, this was not the case with this study, which was underpowered.

~I am very wary of studies that have cause-specific mortality as an endpoint. There's just too much room for adjudication bias, as the editorialist points out. Moreover, if you reduce prostate cancer mortality but overall mortality is unchanged, what do I, as a potential patient, care? Great, you saved me from prostate cancer and I died at about the same time I would have anyway, but from an MI or a CVA instead? We have to be careful about whether our goals are good ones - the goal should not be to "fight cancer" but rather to "improve overall health". The latter, I admit, is a much less enticing and invigorating banner. We like to feel like we're fighting. (Admittedly, overall mortality appears not to differ in this study, but I'm at a loss as to what's really being reported in Table 4.) The DSMB for the ERSPC trial argues here that cancer-specific mortality is most appropriate for screening trials because of dilution by other causes of mortality, and because screening for a specific cancer can only be expected to reduce mortality from that cancer. From an efficacy standpoint, I agree, but from an effectiveness standpoint, this position causes me to squint and tilt my head askance.

~It is so very interesting that this study was stopped not for futility, nor for harm, nor for efficacy, but because it was deemed necessary for the data to be released because of the [potential] impact on public health. And what has been the impact of those data? Utter confusion. That increasing screening from 40% to 80% does not improve prostate specific mortality does not say to me that we should reduce screening to 0%. In fact I don't know what to do, nor what to make of these data. Especially in the context of the next study.
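The dose-extrapolation reasoning in the first point above can be sketched in a few lines. This is a deliberately naive linear dose-response assumption of my own, not anything from the trial report:

```python
def extrapolated_effect(observed_reduction, control_rate, screen_rate):
    # Scale a mortality reduction observed in a "contaminated" comparison
    # (control_rate vs. screen_rate of screening) up to the idealized
    # 0% vs. screen_rate comparison, assuming a linear dose-response.
    return observed_reduction * screen_rate / (screen_rate - control_rate)

# If going from 40% to 80% screening had reduced mortality by 5%,
# the implied effect of going from 0% to 80% would be double that:
print(round(extrapolated_effect(0.05, 0.40, 0.80), 3))  # 0.1
```

Whether mortality really scales linearly with the screening "dose" is, of course, exactly the kind of untested assumption this blog likes to flag.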

In the ERSPC trial, investigators found a 20% reduction in prostate cancer deaths with screening with PSA alone in Europe. The same caveats regarding adjudication of this outcome notwithstanding, there are some very curious aspects of this trial that merit attention:

~This trial was, as I stated above, a "real-time meta-analysis" with many slightly different studies combined for analysis. I don't know what this does to internal or external validity because this is such an unfamiliar approach to me, but I'll be pondering it for a while I'm sure.

~I am concerned that I don't fully understand the way that interim analyses were performed in this trial, what the early stopping rules were, and whether a one-sided or two-sided alpha was used. Reference 6 states that it was one-sided, but the index article says two-sided. Someone will have to help me out with the O'Brien-Fleming alpha-spending function and let me know whether 1% spending at each analysis is par for the course.

~As noted by the editorialist, we are not told what the "contamination rate" of screening in the control group is. If it is high, we might use my method described above to infer the actual impact of screening.

~Look at the survival curves that diverge and then appear to converge again at a low hazard rate. Is it any wonder that there is no impact on overall mortality?


So where does this all leave us? We have a population of physicians and patients that yearn for effective screening and believe in it, so much so that it is hard to conduct an uncontaminated study of screening. We have a US study that is stopped prematurely in order to inform public health, but which is inadequate to inform it. We have a European study which shows a benefit near the a priori expected benefit, but which has a bizarre design and is missing important data that we would like to consider before accepting the results. We have no hint of a benefit on overall mortality. We have lukewarm conclusions from both groups, and want desperately to know what the associated morbidities in each group are. We are spending vast amounts of resources and incurring an enormous emotional toll on men who live in fear after a positive PSA test, many of whom pay dearly ("a pound of flesh") to exorcise that fear. And we have a public over-reaction to the results of these studies which merely increases our quandary.

If ignorance is bliss, then truly 'tis folly to be wise. Perhaps this saying applies equally to individual patients, and the investigation of PSA screening in these large-scale trials. For my own part, this is one aspect of my health that I shall leave to fate and destiny, while I focus on more directly remediable aspects of preventive health, ones where the prevention is pleasurable (running and enjoying a Mediterranean diet) rather than painful (prostatectomy).

Sunday, April 5, 2009

Another [the final?] nail in the coffin of intensive insulin therapy (Leuven Protocol) - and redoubled scrutiny of single center studies

In the March 26th edition of the NEJM, the NICE-SUGAR study investigators publish the results of yet another study of intensive insulin therapy in critically ill patients: http://content.nejm.org/cgi/content/abstract/360/13/1283 .

This article is of great interest to critical care practitioners because intensive insulin therapy (Leuven Protocol) or some diluted or half-hearted version of it has become a de facto standard of care in ICUs across the nation and indeed worldwide; and because it is an incredibly well-designed and well-conducted study. My own interest derives also from my own [prescient] letter to the editor of the NEJM after the second Van den Berghe study (http://content.nejm.org/cgi/content/extract/354/19/2069), the criticisms I levied against this therapy on this blog after another follow-up study recently showed negative results (http://medicalevidence.blogspot.com/2008/01/jumping-gun-with-intensive-insulin.html ), and a recent paper railing against the "normalization heuristic" (http://www.medical-hypotheses.com/article/S0306-9877(09)00033-4/abstract ). The results of this study also add to the growing evidence that intensive control of hyperglycemia in other settings may not be beneficial (see the ACCORD and ADVANCE studies.)

The current study was designed to largely mirror the enrollment criteria and outcome definitions of the previous studies, had excellent follow-up, had well described and simple statistical analyses with ample power, and is well reported. Key differences between it and the original Van den Berghe study were the lack of high-calorie parenteral glucose infusions, and its multicenter design. This latter characteristic may be pivotal in understanding why the initially promising Leuven Protocol results have not panned out on subsequent study.

The results of this study can be summarized simply by saying that this therapy appears to be of NO benefit and probably actually kills patients, in addition to markedly increasing the rate of very, very severe hypoglycemia (6.3% increase, P<0.001). In contrast to Van den Berghe's second study in medical patients, there were no favorable trends toward reductions in ICU length of stay, time on the ventilator, or organ failures. In short, this therapy appears to be a complete flop.

So why the difference? Why did this therapy, which in 2001 appeared to have such promise that it enjoyed rapid and widespread [and premature] adoption, fail to withstand the basic test of science, namely, repeatability? I think that medical history will judge two factors to be responsible. Firstly, the massive dextrose infusions in the first study markedly jeopardized the external validity of the first (positive) Van den Berghe study - it's not that intensive insulin saves you from your illness; it saves you from the harmful caloric infusions used in the surgical patients in the first study.

Secondly, and this is related to the first, single-center studies also compromise external validity. In a single center, local practice patterns may be uniform and idiosyncratic, so that the benefit of any therapy tested in such a center may also be idiosyncratic. Moreover, and I dare say, investigators at a single center may have more decisional latitude and control or influence over enrollment, ascertainment of outcomes, and clinical care of enrolled patients. The so-called "trial effect," whereby patients enrolled in a trial receive superior care and have superior outcomes, may be more likely in single-center studies. Such effects are of increased concern in trials where total blinding/masking of treatment assignment is not possible. (Recall that in the Van den Berghe study, an endocrinologist was consulted for insulin adjustments; in the current trial, a computerized algorithm controlled the adjustments.) Moreover still, in single-center studies, the investigators and the institution itself may have more "riding on" the outcome of the study, and collective equipoise may not exist. As an "analogy of extremes," just for illustrative purposes: if you wanted to design a trial in which you could subversively influence outcomes in a way that would not be apparent from the outside, would you design a single-center study (at your own institution, where your cronies are) or a large multicenter, multinational study? Which design would allow you more influence?

I LOVE the authors' concluding statement that "a clinical trial targeting a perceived risk factor is a test of a complex strategy that may have profound effects beyond its effect on the risk factor." This resonates beautifully with our conceptualization of the "normalization heuristic" and harkens to Ben Franklin's sage old saw that "He is the best physician who knows the worthlessness of the most medicines." I think that we now have more than ample data to assure us that intensive insulin therapy (i.e., targeting a blood sugar of 80-108) is a worthless medicine, and should be largely if not wholly abandoned.

Addendum 4/7/09: Also note the scrutiny of the only other "positive" study (with mortality as the primary endpoint) in critical care in the last decade: Rivers et al; see: http://online.wsj.com/article/SB121867179036438865.html .

Saturday, March 14, 2009

"Statistical Slop": What billiards can teach us about multiple comparisons and the need to assign primary endpoints

Anyone who has played pool knows that you have to call your shots before you make them. This rule is intended to decrease the probability of "getting lucky" by just hitting the cue ball as hard as you can, expecting that the more it bounces around the table, the more likely it is that one of your many balls will fall through chance alone. Sinking a ball without first calling it is referred to colloquially as "slop" or a "slop shot".

The underlying logic is that you know best which shot you're MOST likely to successfully make, so not only does that increase the prior probability of a skilled versus a lucky shot (especially if it is a complex shot, such as one "off the rail"), but also it effectively reduces the number of chances the cue ball has to sink one of your balls without you losing your turn. It reduces those multiple chances to one single chance.

Likewise, a clinical trialist must focus on one "primary outcome" for two reasons: 1.) because preliminary data (if available), background knowledge, and logic allow him to select the variable with the highest "pre-test probability" of causing the null hypothesis to be rejected, meaning that the post-test probability of the alternative hypothesis is enhanced; and 2.) because it reduces the probability of finding "significant" associations among multiple variables through chance alone. Today I came across a cute little experiment that drives this point home quite well. The abstract can be found here on PubMed: http://www.ncbi.nlm.nih.gov/pubmed/16895820?ordinalpos=4&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_DefaultReportPanel.Pubmed_RVDocSum .


In it, the authors describe "dredging" a Canadian database, looking for correlations between astrological signs and various diagnoses. Significant associations were found between the Leo sign and gastrointestinal hemorrhage, and between the Sagittarius sign and humerus fracture. With this "analogy of extremes," as I like to call them, you can clearly see how the failure to define a prospective primary endpoint can lead to statistical slop. (Nobody would have been able to predict a priori that it would be THOSE two diagnoses associated with THOSE two signs!) Failure to PROSPECTIVELY identify ONE primary endpoint led to multiple chances for chance associations. Moreover, because there were no preliminary data upon which to base a primary hypothesis, the prior probability of any given alternative hypothesis is markedly reduced, and thus the posterior probability of the alternative hypothesis remains low IN SPITE OF the statistically significant result.

It is for this very reason that "positive" or significant associations among non-primary endpoint variables in clinical trials are considered "hypothesis generating" rather than hypothesis confirming. Requiring additional studies of these associations as primary endpoints is like telling your slop shot partner in the pool hall "that's great, but I need to see you do that double rail shot again to believe that it's skill rather than luck."
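A minimal simulation makes the arithmetic of statistical slop concrete. The 24 comparisons and the 5% threshold below are my illustrative assumptions, not the parameters of the Canadian study:

```python
import random

# Simulate "dredging": run many null hypothesis tests (no true effect
# anywhere) and count how often at least one is "significant" by chance.
random.seed(1)
ALPHA = 0.05
N_COMPARISONS = 24  # e.g. 12 astrological signs x 2 diagnoses

def one_dredging_run():
    # True if any of the null "tests" crosses the significance threshold.
    return any(random.random() < ALPHA for _ in range(N_COMPARISONS))

runs = 10_000
false_alarms = sum(one_dredging_run() for _ in range(runs)) / runs
print(round(false_alarms, 2))                         # simulated rate
print(round(1 - (1 - ALPHA) ** N_COMPARISONS, 2))     # analytic: 0.71
```

With two dozen uncorrected comparisons, a "significant" finding somewhere is the expected outcome, not the surprising one - the statistical equivalent of slamming the cue ball and hoping.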

Reproducibility of results is indeed the hallmark of good science.

Tuesday, March 10, 2009

PCI versus CABG - Superiority is in the heart of the angina sufferer

In the current issue of the NEJM, Serruys et al describe the results of a multicenter RCT comparing PCI with CABG for severe coronary artery disease: http://content.nejm.org/cgi/content/full/360/10/961. The trial, which was designed by the [profiteering] makers of drug-coated stents, was a non-inferiority trial intended to show the non-inferiority (NOT the equivalence) of PCI (new treatment) to CABG (standard treatment). Alas, the authors appear to misunderstand the design and reporting of non-inferiority trials, and mistakenly declare CABG as superior to PCI as a result of this study. This error will be the subject of a forthcoming letter to the editor of the NEJM.

The findings of the study can be summarized as follows: compared to PCI, CABG led to a 5.6% reduction in the combined endpoint of death from any cause, stroke, myocardial infarction, or repeat vascularization (P=0.002). The caveats regarding non-inferiority trials notwithstanding, there are other reasons to call into question the interpretation that CABG is superior to PCI, and I will enumerate some of these below.

1.) The study used a ONE-SIDED 95% confidence interval - shame, shame, shame. See: http://jama.ama-assn.org/cgi/content/abstract/295/10/1152 .
2.) Table 1 is conspicuous for the absence of cost data. The post-procedural hospital stay was 6 days longer for CABG than PCI, and the procedural time was twice as long - both highly statistically and clinically significant. I recognize that it would be somewhat specious to provide means for cost because it was a multinational study and there would likely be substantial dispersion of cost among countries, but it seems like neglecting the data altogether is a glaring omission of a very important variable if we are to rationally compare these two procedures.
3.) Numbers needed to treat are mentioned in the text for variables such as death and myocardial infarction that were not individually statistically significant. This is misleading. The significance of the composite endpoint does not allow one to infer that the individual components are significant (they were not) and I don't think it's conventional to report NNTs for non-significant outcomes.
4.) Table 2 lists significant deficiencies and discrepancies in pharmacological medical management at discharge, which are inadequately explained, as noted by the editorialist.
5.) Table 2 also demonstrates a five-fold increase in amiodarone use and a three-fold increase in warfarin use at discharge among patients in the CABG group. I infer this to represent an increase in the rate of atrial fibrillation in the CABG patients, but because the rates are not reported, I am kept wondering.
6.) Neurocognitive functioning and the incidence of deficits (if measured), known complications of bypass, are not reported.
7.) It is mentioned in the discussion that after consent, more patients randomized to CABG compared to PCI withdrew consent, a tacit admission of the wariness of patients to submit to this more invasive procedure.
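As a quick check of the arithmetic behind point 3 above, here is the standard NNT calculation applied to the one figure that was significant, the composite endpoint's 5.6% absolute reduction (a trivial sketch; the point stands that NNTs for the non-significant components are not meaningful):

```python
def nnt(absolute_risk_reduction):
    # Number needed to treat: the reciprocal of the absolute risk reduction.
    return 1 / absolute_risk_reduction

# For the composite endpoint's 5.6% absolute reduction:
print(round(nnt(0.056)))  # 18
```

An NNT computed from a non-significant ARR inherits that ARR's confidence interval, which crosses zero - so the "NNT" interval runs off to infinity and flips sign, which is why reporting such numbers misleads.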

In all, what this trial does for me is remind me to be wary of overly simplistic interpretations of complex data and of a tendency toward dichotomous thinking - superior versus inferior, good versus bad, etc.

One interpretation of the data is that a 3.4-hour bypass surgery and 9 days in the hospital !MIGHT! save you from an extra 1.7-hour PCI and another 3 days in the hospital, on top of your initial commitment of 1.7 hours of PCI and 3 days in the hospital, if you wind up requiring revascularization, the primary [only] driver of the composite endpoint. And in payment for this dubiously useful exchange, you must submit to a ~2% increase in the risk of stroke, have a cracked chest, risk surgical wound infection (the rate of which is also not reported), pay an unknown (but probably large) increased financial cost, accept a probably large increased risk of atrial fibrillation and therefore be discharged on amiodarone and coumadin with their high rates of side effects and drug-drug interactions, all while risking discharge on inadequate medical pharmacological management.

Looked at from this perspective, one sees that beauty is truly in the eye of the beholder.

Monday, March 9, 2009

Money talks and Chantix (varenicline) walks - the role of financial incentives in inducing healthful behavior

I usually try to keep the posts current, but I missed a WONDERFUL article a few weeks ago in the NEJM, one that is pivotal in its own right, but especially in the context of good decision making about therapeutic choices and opportunity costs.

The article, by Volpp et all entitled: A Randomized, Controlled Trial of Financial Incentives for Smoking Cessation can be found here: http://content.nejm.org/cgi/content/abstract/360/7/699
In summary, smokers at a large US company, where a smoking cessation program existed before the research began, were randomized to receive additional information about the program, or the same information plus a financial incentive of up to $750 for successfully stopping smoking. At 9-12 months, smoking cessation was about 10 percentage points higher in the financial incentive group (14.7% vs. 5.0%, P<0.001).

In the 2006 JAMA article on varenicline (Chantix) by Gonzales et al (http://jama.ama-assn.org/cgi/reprint/296/1/47.pdf ), the cessation rates at weeks 9-52 were 8.4% for placebo and 21.9% for varenicline, an absolute gain of 13.5%. (Similar results were reported in the study by Jorenby et al: http://jama.ama-assn.org/cgi/content/abstract/296/1/56?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=varenicline&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT ) Now, given that this branded pharmaceutical sells for ~$120 for a 30 day supply, and that, based on the article by Tonstad (http://jama.ama-assn.org/cgi/reprint/296/1/64.pdf ), many patients are continued on varenicline for 24 weeks or more, the cost of a course of treatment with the drug is approximately $720, just about the same as the financial incentives used in the index article.
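The back-of-the-envelope comparison above amounts to a few lines of arithmetic (all figures are the approximations quoted in this post):

```python
# Cost comparison: a course of varenicline vs. the Volpp et al. incentive.
monthly_cost = 120          # ~$120 per 30-day supply of varenicline
course_months = 6           # ~24 weeks of treatment, per Tonstad et al.
drug_cost = monthly_cost * course_months
incentive_cost = 750        # maximum payout in Volpp et al.

print(drug_cost)                    # 720
print(incentive_cost - drug_cost)   # 30
```

In other words, the two strategies cost within a few percent of each other, which is what makes the comparison of their effect sizes so interesting.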

And all of this raises the question: Is it better to pay ~$720 for 6 months of treatment with a drug that has [potentially serious] side effects to achieve a ~13.5% absolute increase in smoking cessation, or to pay patients up to $750 to quit smoking to achieve a ~10% increase in cessation without harmful side effects, and in fact with POSITIVE side effects (money to spend on pleasurable alternatives to smoking, or on other necessities)?

The choice is clear to me, and, having failed Chantix, I now consider whether I should offer my brother payment to quit smoking. (I expect to receive a call as soon as he reads this, especially since I haven't mentioned the cotinine tests yet.)

And all of this raises the more important question of why we seek drugs to solve behavioral problems when good old-fashioned greenbacks will do the trick just fine. Why bother with Meridia and rimonabant and all the other weight-loss drugs when we might be able to pay people to lose weight? (See: http://jama.ama-assn.org/cgi/content/abstract/300/22/2631 .) Perhaps one part of Obama's stimulus bill can allocate funds to additional such experiments, or better yet, to such a social program.

One answer to this question is that the financial incentive to study financial incentives is not as great as the financial incentive to find another profitable pill to treat social ills. (There is after all a "pipeline deficiency" in a number of Big Pharma companies that has led to several mergers and proposed mergers, such as the announcement today of a possible merger of MRK and SGP, two of my personal favorites.) Yet this study sets the stage for more such research. If we are going to pay one way or another, I for one would rather that we be paying people to volitionally change their behavior, rather than paying via third party to reinforce the notion that there is "a pill for everything". As Ben Franklin said, "S/He is the best physician who knows the worthlessness of the most medicines."

Wednesday, March 4, 2009

The Normalization Heuristic: how an untested hypothesis may misguide medical decisions

Here is an article that may be of interest written by two perspicacious young fellows:
http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6WN2-4VP175C-1&_user=10&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=0067dfb6094ecc27303ccd6939257200


In this article, we describe how the general clinical hypothesis that "normalizing" abnormal laboratory values and physiological parameters will improve patient outcomes is unreliably accurate, and we use historical examples, such as hormone replacement therapy and the CAST trial, to buttress this argument. We further suggest that many ongoing practices that rely on normalizing values should be called into question because the normalization hypothesis is a fragile one. We also operationally define the "normalization heuristic" and describe four general ways in which it can fail clinical decision makers. Lastly, we make suggestions for empirical testing of the existence of this heuristic and caution clinicians and medical educators to be wary of reliance on the normalization hypothesis and the normalization heuristic. This paper is an expansion of the idea of the normalization heuristic that was mentioned previously on this blog.

Tuesday, February 10, 2009

West's estimations of PaO2 on Everest Confirmed - but SaO2 remains an estimation

Recently, Grocott et al published the results of an intriguing study in which they drew blood gas samples from climbers near the summit of Everest and analyzed them at one of the high camps with a modified blood gas analyzer. (See: http://content.nejm.org/cgi/content/abstract/360/2/140 ) This is no small feat, and the perhaps shocking results confirm earlier estimates of low arterial oxygen tension derived from samples of exhaled gas. The PaO2 of these climbers was often under 30 mm Hg - a difficult number to believe for clinicians accustomed to a danger zone represented by much higher numbers in clinical practice.

As intriguing as the numbers may be, the authors have made a crucial assumption in the estimation of arterial oxygen saturation (SaO2) that leads us to be circumspect about the accuracy of this estimated value. A letter written by me and my colleagues emphasizing several caveats in these estimations was not accepted for publication by the NEJM, so I will post it below.

In the article by Grocott et al, an important limitation of using calculated SaO2 values for the estimation of arterial oxygen content is neglected. The equation used for the calculation of SaO2 in the article does not take into account changes in hemoglobin affinity induced by increased 2,3-DPG levels which are known to occur during acclimatization (1;2). Errors resulting from these estimations will be magnified for values of PaO2 on the steep portion of the oxyhemoglobin dissociation curve. The PaO2 values of the subjects studied are on this portion of the curve. Can the authors comment on 2,3-DPG levels in these climbers and how any resulting changes in hemoglobin affinity may have affected calculated values? Were the climbers taking acetazolamide, which has variably been demonstrated to affect the oxygen affinity of hemoglobin (3;4)? Is there any evidence that acclimatization induces increased production of fetal hemoglobin as occurs in some other species (5)? Because of such caveats and possibly other unknown variables, co-oximetry remains the gold standard for determination of arterial oxygen saturation.


Reference List

(1) Wagner PD, Wagner HE, Groves BM, Cymerman A, Houston CS. Hemoglobin P(50) during a simulated ascent of Mt. Everest, Operation Everest II. High Alt Med Biol 2007; 8(1):32-42.
(2) Winslow RM, Samaja M, West JB. Red cell function at extreme altitude on Mount Everest. J Appl Physiol 1984; 56(1):109-116.
(3) Gai X, Taki K, Kato H, Nagaishi H. Regulation of hemoglobin affinity for oxygen by carbonic anhydrase. J Lab Clin Med 2003; 142(6):414-420.
(4) Milles JJ, Chesner IM, Oldfield S, Bradwell AR. Effect of acetazolamide on blood gases and 2,3 DPG during ascent and acclimatization to high altitude. Postgrad Med J 1987; 63(737):183-184.
(5) Reynafarje C, Faura J, Villavicencio D, Curaca A, Reynafarje B, Oyola L et al. Oxygen transport of hemoglobin in high-altitude animals (Camelidae). J Appl Physiol 1975; 38(5):806-810.


Scott K Aberegg, MD, MPH
Leroy Essig, MD
Andrew Twehues, MD
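For readers who want to see the steep-portion sensitivity the letter describes, here is a sketch using the Hill equation. The P50 and Hill coefficient below are my illustrative assumptions; this is NOT the equation or the parameters used by Grocott et al:

```python
def hill_sao2(pao2, p50=26.8, hill_n=2.7):
    # Hill-equation approximation of fractional SaO2. The default P50
    # (26.8 mm Hg) and Hill coefficient (2.7) are assumed textbook-style
    # values for normal adult hemoglobin, used here only for illustration.
    return pao2 ** hill_n / (pao2 ** hill_n + p50 ** hill_n)

# On the steep part of the curve (PaO2 ~30 mm Hg), a rightward P50 shift
# of just a few mm Hg (as 2,3-DPG changes could produce) moves the
# calculated SaO2 by several percentage points. When P50 equals PaO2,
# saturation is 50% by definition.
for p50 in (26.8, 30.0):
    print(p50, round(100 * hill_sao2(30, p50), 1))
```

The same P50 shift applied at a PaO2 of 90 mm Hg would barely budge the calculated saturation, which is precisely why the Everest values, sitting on the steep limb, are so vulnerable to assumptions about hemoglobin affinity.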

Monday, February 9, 2009

More Data on Dexmedetomidine - moving in the direction of a new standard

A follow-up study of dexmedetomidine (see previous blog: http://medicalevidence.blogspot.com/2007/12/dexmedetomidine-new-standard-in_16.html )
was published in last week's JAMA (http://jama.ama-assn.org/cgi/content/abstract/301/5/489 ) and hopefully serves as a prelude to future studies of this agent and indeed of studies in critical care generally. The recent study addresses one of my biggest concerns about the previous one, namely that routine interruptions of sedatives were not employed.

Ironically, it may be this difference between the studies that led to the failure to show a difference in the primary endpoint in the current study. The primary endpoint, namely the percentage of time within the target RASS, was presumably chosen not only on the basis of its pragmatic utility, but also because it was one of the most statistically significant differences found among secondary analyses in the previous study (percent of patients with a RASS [Richmond Agitation and Sedation Scale] score within one point of the physician goal; 67% versus 55%, p=0.008). It is possible, and I reason likely, that daily interruptions in the current study obliterated that difference which was found in the previous study.


But that failure does not undermine the usefulness of the current study which showed that sedation comparable to routinely used benzos can be achieved with dexmed, probably with less delirium, and perhaps with shorter time on the ventilator and fewer infections. What I would like to see now, and what is probably in the works, is a study of dexmed which shows shorter time on the ventilator and/or reductions in nosocomial infections as primary study endpoints.

But to show endpoints such as these, we are going to need to carefully standardize our ascertainment of infections (difficult, to say the least) and also to standardize our approach to discontinuation of mechanical ventilation. In regard to the latter, I propose that we challenge some of our current assumptions about liberation from mechanical ventilation - namely, that a patient must be fully awake and following commands prior to extubation. I think that a status quo bias is at work here. We have many a patient with delirium in the ICU who is not already intubated, and we do not intubate them for delirium alone. Why, then, should we fail to extubate a patient in whom all indicators show resolution of critical illness, but who remains delirious? Is it possible that this is the main player in the causal pathway between sedation and extubation, and perhaps even nosocomial infections and mortality? (The protocols, or lack thereof, for assessing extubation readiness were not described in the current study, unless I missed them.) It would certainly be interesting and perhaps mandatory to know the extubation practices in the centers involved in this study, especially if we are going to take great stock in this secondary outcome of this study.

Another thing I am interested in knowing is what PATIENT experiences are like in each group - whether there is greater recall or other differences in psychological outcomes between patients who receive different sedatives during their ICU experience.

I hope this study and others like it serve as a wake-up call to the critical care research community, which has heretofore been brainwashed into thinking that a therapy is only worthwhile if it improves mortality - a feat that is difficult to achieve not only because it is often an unrealistic expectation and because absurd power calculations and delta inflation run rampant in trial design, but also because of limitations in funding and logistical difficulties. This group has shown us repeatedly that useful therapies in critical care need not be predicated upon a mortality reduction. It's past time to start buying some stock in shorter times on the blower and in the ICU.

Tuesday, February 3, 2009

Cost: The neglected adverse event / side effect in trials of for-profit pharmaceuticals and devices

Amid press releases and conference calls today pertaining to the release of data on two trials of the investigational drug pirfenidone, one analyst's comments struck me as subtly profound. She was saying that in spite of conflicting data on, and uncertainty about, the efficacy of the drug (in the Capacity 1 and Capacity 2 trials, percent change in FVC [forced vital CAPACITY] at 72 weeks was the primary endpoint), IPF is a deadly and desperate disease for which no effective treatments exist (save for lung transplantation, if you're willing to consider that an effective treatment), and that therefore any treatment with any positive effect, however small and however uncertain, should be given ample consideration, especially given the relative absence of side effects of pirfenidone in the Capacity trials.

And I thought to myself - "absence of side effects?" Here we have a drug that, over the course of about 1.5 years, reduces the decline in FVC by about 60 ccs (maybe - it did so in Capacity 2 but not in Capacity 1) but does not improve survival or dyspnea scores or any other outcome that a patient may notice. So, I'm picturing an IPF patient traipsing off to the drugstore to purchase pirfenidone, a branded drug, and I'm imagining that the cash outlay might be perceived by such a patient as an adverse event, a side effect of sorts of using this questionably effective drug to prevent an intangible decline in FVC. The analyst's argument distilled to: "why not, there's no drawback to using it and there are no alternatives" - but this utterly neglected the financial hardships that many patients endure when taking expensive branded drugs, and ignored alternative ways that patients with IPF might spend their income to benefit their health or general well-being.

This perspective is even more poignant when we consider the cases of "me-too" drugs that add marginally to the benefits or side effect profiles of existing drugs, and which are often approved on the basis of a trial comparing them to placebo rather than existing generic alternatives. One of the last posts on this blog detailed the case of Aliskiren, and I am reminded of the trial of Tiotropium published in the NEJM in October, among many other entire classes of drugs such as the proton pump inhibitors, antidepressants, antihistamines, inhaled corticosteroids, antihypertensives, ACE-inhibitors for congestive heart failure, and the list goes on.

Given today's economy, soaring healthcare costs, and the increasing financial burdens and co-pays shouldered by patients - especially those of limited economic means or those hit hardest by economic downturns - we can no longer afford (pun intended) to ignore the financial costs of "me too" medications as adverse events of the use of these drugs when cheaper alternatives exist.

In terms of trial design, we should demand that new agents be compared to existing alternatives when those exist, and we need to develop a system for evaluating the results of a trial that does not neglect the full range of adverse effects experienced by patients as a result of using expensive branded drugs. Marginally "better" is not better at all if it costs ridiculously more, and the uncertainty relating to the efficacy of a drug must be accounted for in terms of its value to patients, especially when costly.


Monday, June 2, 2008

"Off-Label Promotion By Proxy": How the NEJM and Clinical Trials are Used as an Advertising Apparatus. The Case of Aliskiren

In the print edition of the June 5th NEJM (mine is delivered almost a week early sometimes), readers will see on the front cover the lead article entitled "Aliskiren Combined with Losartan in Type 2 Diabetes and Nephropathy," and on the back cover a sexy advertisement for Tekturna (aliskiren), an approved antihypertensive agent, which features "mercury-man", presumably a former hypertensive patient metamorphosed into an elite biker (and perhaps superhero) by the marvels of Tekturna. Readers who lay the open journal face down may experience the same irony I did when they see the front-cover lead article juxtaposed with the back-cover advertisement.

The article describes how aliskiren, in the AVOID trial, reduced the mean urinary albumin-to-creatinine ratio as compared to losartan alone. There are several important issues here. First, if one wants to use a combination of agents, s/he can use losartan with a generic ACE-inhibitor (ACEi). A more equitable comparison would have pitted aliskiren plus losartan against [generic] ACEi plus losartan. The authors would retort of course that losartan alone is a recommended agent for the condition studied, but that is circular logic. If we were not in need of more aggressive therapy for this condition, then why study aliskiren in combination for it at all? If you want to study a new aggressive combination, it seems only fair to compare it to existing aggressive combinations.

Which brings me to another point - should aliskiren be used for ANY condition? No, it should not. It is a novel [branded] agent which is expensive, with which there is little experience, whose important side effects may be discovered only after it is used in hundreds of thousands of patients, and, more importantly, alternative effective agents exist which are far less costly and with which far more experience exists. A common error in decision making occurs when decision makers focus only on the agent or choice at hand and fail to consider the range of alternatives and how the agent under consideration fares when compared to those alternatives. Because aliskiren has only been shown to lower blood pressure, a surrogate endpoint, we would do well to stick with cheaper agents for which there are more data and more experience, and reserve use of aliskiren until a study shows a long-term mortality or meaningful morbidity benefit.

But here's the real rub - after an agent like this gets approved for one [common] indication (hypertension), the company is free to conduct little studies like this one, for off-label uses, to promote its sale [albeit indirectly] in patients who do not need it for its approved indication (BP lowering). And what better advertising to bring the drug into the sight of physicians than a lead article in the NEJM, with a complementary full-page advertisement on the back cover? This subversive "off-label promotion by proxy", effected by the study of off-label indications for which FDA approval may or may not ultimately be sought, has the immediate benefit of misleading the unwary, who may increase prescriptions of this medication based on this study (which they are free to do) without considering the full range of alternatives.

My colleague David Majure, MD, MPH has commented to me about an equally insidious but perhaps more nefarious practice that he noticed may be occurring while attending this year's meeting of the American College of Cardiology (ACC). There, "investigators" and corporate cronies are free to present massive amounts of non-peer-reviewed data in the form of abstracts and presentations, much of which will not and should not withstand peer review, or will be relegated to the obscurity of low-tier journals (where it likely belongs). But eager audience members, lulled by the presumed credibility of data presented at a national meeting by [company-paid] experts, will likely never see the data in peer-reviewed form, and instead will carry away the messages as delivered: "Drug XYZ was found to do 1-2-3 to [surrogate endpoint/off-label indication] ABC." By sheer force of repetition alone, these abstracts and presentations serve to increase product recognition and, almost certainly, prescriptions. Whether the impact of the data presented is meaningful or not need not be considered, and probably cannot be considered without seeing the data in printed form - and this is just fine - for sales, that is.

(Added 6/11/2008: this pre-publication changing of practice patterns has been described before - see http://jama.ama-assn.org/cgi/content/abstract/284/22/2886 .)

The novel mechanism of action of this agent and the scientific validity of the AVOID trial notwithstanding, the editorship of the NEJM and the medical community should realize that science and the profit motive are inextricably interwoven when companies study these branded agents. The full-page advertisement on the back cover of this week's NEJM was just too much for me.

Thursday, May 29, 2008

Prucalopride: When Delivery is so Suspicious that the Entire Message Seems Corrupt

In this week's NEJM, (http://content.nejm.org/cgi/content/short/358/22/2344) Camilleri (of the Mayo Clinic) and comrades from Movetis (a pharmaceutical company) report the results of a study of Prucalopride, a prokinetic agent, for the treatment of chronic constipation. What is striking about this study is not the agent's relation to Cisapride (Propulsid, an agent removed from the market a number of years ago because of QTc prolongation and associated cardiac risk) but rather the fact that this study was completed nearly a decade ago, and was only just now published. Such a delay is certainly worthy of concern, as astutely pointed out by an editorialist (http://content.nejm.org/cgi/content/short/358/22/2402).

A colleague and I recently pointed out the unethical practice of withholding the results of negative trials from the scientific community (see http://ccmjournal.com/pt/re/ccm/fulltext.00003246-200803000-00060.htm;jsessionid=L2bQSl9ygT9BzlZq81qlnJGfyfG2Jh2f2qQvP4XTp0YqMQ1ZD3T1!195308708!181195628!8091!-1?index=1&database=ppvovft&results=1&count=10&searchid=2&nav=search#P6), but the Prucalopride trial takes the cake. Here, positive results were either intentionally withheld from that community or by happenstance were omitted from publication, delaying further study of this agent (if it is indeed even warranted) and undermining the altruistic basis of subjects' participation in the trial, which, ostensibly, was to advance science (unless they participated for financial incentives, which I might argue [as others already have] should be disclosed in the reporting of a trial - see http://content.nejm.org/cgi/content/extract/358/22/2316.)

I will leave it to other bloggers and commentators to speculate whether the profit or other motives were the impetus behind this delay and whether medical ghostwriting was in any way involved in the publication of this article. Suffice it to say that there are certain irregularities in the way a trial is reported (in addition to those with which it was conducted) that should give us pause. Prucalopride has now shown itself to be worthy of a bright spotlight of intense scrutiny.

Wednesday, May 14, 2008

Troponin Predicts Outcome in Heart Failure - But So What?

In today's NEJM, Peacock and others (http://content.nejm.org/cgi/content/short/358/20/2117 ) report that cardiac troponin is STATISTICALLY associated with hospital mortality in patients with acute decompensated heart failure, and that this association is independent of other predictive variables. Let us assume that we take the results for granted, and that this is an internally and externally valid study with little discernible bias.

In the first paragraph of the discussion, the authors state that "These results suggest that measurement of troponin adds important prognostic information to the initial evaluation of patients with acute decompensated heart failure and should be considered as part of an early assessment of risk."

Really?


The mortality in patients in the lowest quartile of troponin I was 2.0% and that in the highest quartile was 5.3%. If we make the common mistake of comparing things on a relative scale, this is an impressive difference - in excess of a twofold increase in mortality. But that is like saying that I saved 60% off the price of a Hershey Kiss which costs 5 cents - so I saved 3 cents! As we approach zero, smaller and smaller absolute differences can appear impressive on a relative scale. But health should not be appraised that way. If you are "buying" something, be it health or some other commodity, you shouldn't care about the relative return on your investment, only the absolute return. You have, after all, only some absolute quantity of money. Charlie (from the Chocolate Factory) may find 3 cents to be meaningful, but we are not here talking about getting a 3% reduction in mortality - we are talking about predicting for Charlie whether he will have to pay $0.05 for his kiss or $0.02 for it, and even if our prediction is accurate, we do not know how to help him get the discounted kiss - he's either lucky or he's not.
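The arithmetic behind the relative/absolute distinction is trivial to make concrete. A minimal sketch using the quartile mortalities quoted above (2.0% and 5.3%); the Hershey Kiss mapping uses the same numbers in cents:

```python
# Relative vs. absolute risk, using the troponin-quartile mortalities
# quoted above (2.0% lowest quartile, 5.3% highest quartile).
low, high = 0.020, 0.053

relative_risk = high / low    # ~2.65-fold: sounds impressive
absolute_diff = high - low    # 0.033: 3.3 percentage points

print(f"relative risk: {relative_risk:.2f}-fold")
print(f"absolute difference: {absolute_diff * 100:.1f} percentage points")

# The Hershey Kiss version: a nickel kiss discounted to 2 cents is a
# "60% savings" -- of three cents.
discount = 1 - 0.02 / 0.05
print(f"kiss discount: {discount:.0%}, saving {5 - 2} cents")
```

The same pair of numbers yields a headline-worthy relative figure and a shrug-worthy absolute one; only the latter tells you what you actually bought.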

Imagine that you are a patient hospitalized for acute decompensated heart failure. Does it matter to you if your physician comes to you carrying triumphantly the results of your troponin I test and informs you that because it is low, your mortality is 2% rather than 5%? It probably matters very little. It matters even less if your physician is not going to do anything differently given the results of that test. Two percent, 5 percent, it doesn't matter if it can't be changed.

Then there is the cost associated with this test. My hospital charges on the order of $200 for this test. Consider the opportunity costs - what else could that $200 be spent on, in the care of American patients, and perhaps even more importantly in the context of global health and economics? Also consider the value of the test to a patient who might have to pay out of pocket for it - is it worth $200 to discriminate within an in-hospital mortality range of 2-5%?

This study, while meticulously conducted and reported, underscores the important distinction between statistical significance and clinical significance. With the aid of a ginormous patient registry, the authors clearly demonstrated a statistically significant result that is at least mildly interesting from a biological perspective (is it interesting that a failing heart spills some of its contents into the bloodstream and that they can be detected by a highly sensitive assay?). But the clinical significance of the findings appears to be negligible, and I worry that this report will encourage the already rampant mindless use of this expensive test which, outside of the context of clinical pre-test probabilities, already serves to misguide care and run up healthcare costs in a substantial proportion of the patients in whom it is ordered.

Tuesday, April 29, 2008

Blood Substitutes Doomed by Natanson's Meta-Analysis in JAMA

When the ARMY gives up on something, you should be on the lookout for red flags. (Pentagon types beholden to powerful contractors and highly susceptible to sunk cost bias still haven't given up on that whirligig of death called the Osprey, have they?) But the ARMY's abandonment of a blood substitute that it found was killing animals in tests was apparently no deterrent to Northfield Laboratories, Inc., makers of "Polyheme", or to Wall Street investors in this and other companies working on products with a similar goal - to cook up an extracellular hemoglobin-based molecule that can be used in lieu of red blood cell transfusions in trauma patients and others.

Charles Natanson, an intramural researcher at the NIH and co-workers performed a meta-analysis of trials of blood substitutes which was published on-line today at the JAMA website: http://jama.ama-assn.org/cgi/content/full/299.19.jrv80007 . They found that these trials, which were powered for outcomes such as number of transfusions provided or other "surrogate-sounding" endpoints, when combined demonstrate that these products were killing subjects in these studies. The relative risk of death for study subjects receiving one of these products was 1.3 and the risk of myocardial infarction increased more than threefold. The robustness of these findings is enhanced by the biological plausibility of the result - cell-free hemoglobin is known to eat up nitric oxide from the endothelium of the vasculature leading to substantial vasoconstriction and other untoward downstream outcomes.
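The statistical logic of the meta-analysis - several trials, each too small to show excess deaths on its own, pooling to a clearly elevated relative risk - can be sketched with a Mantel-Haenszel fixed-effect pooled RR. The trial counts below are invented for illustration, not Natanson's actual data:

```python
# Mantel-Haenszel fixed-effect pooled relative risk.
# Each tuple: (deaths_tx, n_tx, deaths_ctrl, n_ctrl) -- INVENTED numbers,
# not the trials in the actual meta-analysis.
trials = [
    (12, 100,  9, 100),
    (20, 150, 15, 150),
    ( 8,  60,  6,  60),
]

num = sum(d_tx * n_ctrl / (n_tx + n_ctrl)
          for d_tx, n_tx, d_ctrl, n_ctrl in trials)
den = sum(d_ctrl * n_tx / (n_tx + n_ctrl)
          for d_tx, n_tx, d_ctrl, n_ctrl in trials)
pooled_rr = num / den

print(f"pooled relative risk of death: {pooled_rr:.2f}")
# Individually, each trial's handful of excess deaths could pass for
# chance; pooled across trials, the signal emerges.
```

This is the whole rationale for the exercise: no single surrogate-endpoint trial was powered for mortality, but the deaths add up.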

In addition to my penchant for cautionary tales, my interest in this study has to do with study design. We are beholden to "conventional" study design expectations where a p-value is a p-value (they're all 0.05), and an outcome is an outcome - whether it be bleeding, or pain, or death, we don't differentially value them. But if you're studying a novel agent, looking for some crummy surrogate endpoint like number of transfusions, and your alpha threshold for that is 0.05, then the alpha threshold for death should be higher (say 0.25 or so), especially if you're underpowered to detect excess deaths. That kind of arrangement would imply that we value death at least 5 times more than transfusion (I for one would rather have 500 or more transfusions than be dead, but that's a topic for another discussion).

Fortunately for any patients who may have been recruited to participate in such studies, Natanson et al undertook this perspicacious meta-analysis, and the editorialists extended their recommendations for more transparency in data dissemination to argue, almost, that future trials of blood substitutes should be banned or boycotted. Even if the medical community does not have the gumption to go that far, prospective participants in such studies and their surrogates can at least perform a simple Google search, and from now on the Natanson article is liable to be on the first page.

Thursday, April 3, 2008

A [now open] letter to Congress re: Proposed Medicare Reimbursement Cuts

I'm not sure that this is entirely in keeping with the theme of this blog, but I will justify it by saying that the health of the healthcare system is of vital interest to all stakeholders including researchers with an interest in clinical trials. The following letter was sent via the ACCP to my senators and congressmen in regards to the Medicare reimbursement cuts that are to be instituted in July of this year. We were solicited via the medical professional society to be a voice in opposition to the cuts....

Dear Sir or Madam-

Physicians' income, especially that of primary care providers, upon whom patients rely most heavily for basic care, has been falling in real dollars (not keeping pace with inflation) for years, and the newest cuts will markedly exacerbate the disconcerting trend that already exists.

Most physicians do not begin earning income in earnest until they are over 30 years old, a significant lost opportunity due to prolonged schooling and training. This compounds the problem of substantial debt burden that recent graduates must bear. Economically speaking, medicine, especially in the essential primary care fields, is no longer an attractive option for many talented students and graduates. From a job satisfaction standpoint, medicine has also become far less attractive due to regulatory burdens, paperwork, lack of adequate time to spend with patients, and fragmentation of care.

This fragmentation of care is in fact at least partially driven by Medicare cuts. When reimbursement to an individual physician is cut, s/he simply "farms out" parcels of the overall care of the patient to other physicians and specialists. This "multi-consultism" militates against any cost savings that might be achieved by cuts in reimbursement to individual physicians. Perhaps more alarming is the fact that care delivery is less comprehensive, more fragmented, and less satisfying to patients and physicians alike, the latter of whom may feel a "diffusion of responsibility" regarding patients' care when multi-consultism is employed. Reduced reimbursements also likely drive the excess ordering of laboratory tests and radiographic scans, both in situations where the physician stands to profit from the testing and when s/he does not - in the latter case because the care is being "farmed out" not to another physician, but to the laboratory or radiology suite. The result is that Medicare "cuts" may paradoxically increase overall net healthcare expenditures. Physicians are already squeezed as much as they can tolerate being squeezed. Further cuts are certain to backfire in this and myriad other ways.

A perhaps more insidious, invidious, and pernicious result of reimbursement cuts is that it is driving the talent out of medicine, especially primary care medicine. Were it not for the veritable reimbursement shelter that I experience as a practitioner at an academic medical center, I would surely not be practicing medicine in any traditional way - it is simply not worth it. Hence we have the genesis and proliferation of "concierge practices" where the wealthy pay an annual fee for entry into the practice, only cash payments are accepted, and more traditional service from your physician (e.g., time to talk to him/her in an unhurried fashion) can be expected by patients. Hence we have, as pointed out in a recent New York Times article (http://query.nytimes.com/gst/fullpage.html?res=9C05E6D81E38F93AA25750C0A96E9C8B63&scp=2&sq=dermatology&st=nyt ), the siphoning of medical student talent into specialties such as dermatology and plastic surgery, because the lifestyle is more attractive and reimbursement is not a problem since the "clientele" (aka patients) are affluent and pay out-of-pocket. Hence we have the brightest physicians, such as my colleague and close friend Michael C., MD, leaving medicine altogether to work on Wall Street in the financial sector. All of these disturbing trends threaten to undermine what was heretofore (and hopefully still is) one of the best healthcare systems on the planet. I, for one, will not recommend a career in primary care to any medical student who seeks my advice, and to undergraduates contemplating a career in medicine I say "enter medicine only if it is the only field you can envision yourself ever being happy in."

The system is broken, and we as a country cannot endure and thrive if our healthcare expenditures continue to eat up 15+% of our GDP. But cutting the payments to physicians, the very workforce upon which delivery of any care depends, is no longer a viable solution to the problem. Other excesses in the system, such as use of branded pharmaceuticals (e.g., Vytorin or Zetia) when generic alternatives are as good or better, use of expensive scans of unproven benefit (screening CT scans for lung cancer) when cheaper alternatives exist (stopping smoking), excessive and wasteful laboratory testing of unproven benefit (daily laboratory testing on hospital inpatients, wanton ordering of chest x-rays, head CTs, EKGs, and echocardiograms), use of therapeutic modalities of very high cost and modest benefit (AICDs, lung transplantation, back surgery, knee arthroscopy, coated stents, etc.), and provision of futile care at the end of life are better targets for cost savings, limitations on which are far less likely to compromise delivery of generally effective and affordable care for the average citizen.

I urge congress to consider the far-reaching but difficult to measure consequences of further reimbursement cuts before an entire generation of the most talented physicians and potential physicians determines that the financial, lifestyle, and opportunity costs of practicing medicine, especially primary care medicine, are just too much to bear.

Regards,

Scott K Aberegg, MD, MPH, FCCP
Assistant Professor of Medicine
The Ohio State University College of Medicine
Columbus, Ohio

Monday, March 31, 2008

MRK and SGP: Ye shall know the truth, and the truth shall send thy stock spiralling

Apparently, the editors of the NEJM read my blog (even though they stop short of calling for a BOYCOTT):

"...it seems prudent to encourage patients whose LDL cholesterol levels remain elevated despite treatment with an optimal dose of a statin to redouble their efforts at dietary control and regular exercise. Niacin, fibrates, and resins should be considered when diet, exercise, and a statin have failed to achieve the target, with ezetimibe reserved for patients who cannot tolerate these agents."

Sound familiar?

The full editorial can be seen here: http://content.nejm.org/cgi/content/full/NEJMe0801842
along with a number of other early-release articles on the subject.

The ENHANCE data are also published online (http://content.nejm.org/cgi/content/full/NEJMoa0800742)
and there's really nothing new to report. We have known the results for several months now. What is new is doctors' nascent realization that they have been misled and bamboozled by the drug reps, Big Pharma, and their own long-standing, almost religious faith in surrogate endpoints (see post below). It's like you have to go through the stages of grief (Kubler-Ross) before you give up on your long-cherished notions of reality (denial, anger, bargaining, then, finally, acceptance). Amazingly, the ACC, whose statement just months ago appeared to be intended to allay patients' and doctors' concerns about Zetia, has done an apparent 180 on the drug: "Go back to Statins" is now their sanctimonious advice: http://acc08.acc.org/SSN/Documents/ACC%20D3LR.pdf

I was briefly at the ACC meeting yesterday (although I did not pay the $900 fee to attend the sessions). The Big Pharma marketing presence was nauseating. A Lipitor-emblazoned bag was given to each attendee. A Lipitor lanyard was used to hold your $900 ID badge. Buses throughout the city were emblazoned with Vytorin and Lipitor advertisements, among others. Banners covered numerous floors of the facades of city buildings. The "exhibition hall," a veritable orgy of marketing madness, was jam-packed with the most aesthetically pleasing and best-dressed salespersons with their catchy displays and gimmicks. (Did you know that abnormal "vascular reactivity" is a heretofore unknown "risk factor"? And that with a little $20,000 device they can sell you - which you can probably bill for - you can detect said abnormal vascular reactivity?) The distinction between science, reality, and marketing is blurred imperceptibly, if it exists at all. Physicians from all over the world greedily scramble for free pens, bags, and umbrellas (as if they cannot afford such trinkets on their own - or was it the $900 entrance fee that squeezed their pocketbooks?). They can be seen throughout the convention center with armloads of Big Pharma propaganda packages: flashlights, laser pointers, free orange juice, and the like.

I just wonder: How much money does the ACC receive from these companies (for this Big Pharma Bonanza and for other "activities")? If my guess is in the right ballpark, I don't have to wonder why the ACC hedged in its statement when the ENHANCE data were released in January. I think I might have an idea.

Wednesday, March 26, 2008

Torcetrapib, Ezetimibe, and Surrogate Endpoints: A Cautionary Tale

In today's JAMA (http://jama.ama-assn.org/cgi/content/extract/299/12/1474), Drs. Psaty and Lumley echo many of the points made on this blog over the last six months about ezetimibe and torcetrapib (see posts below). While they stop short of calling for a boycott of ezetimibe, and their perspective on torcetrapib is tempered by Pfizer's early conduct of a trial with hard outcomes as endpoints, their commentary underscores the dangers inherent in the long-standing practice of almost unquestioningly accepting the validity of "established" surrogate endpoints. The time to re-examine the validity of surrogate endpoints such as glycemic control, LDL, HDL, and blood pressure is now. Agents to treat these conditions are abundant and widely accessible, so potential delays in the discovery and approval of new agents are no longer a suitable argument for a "fast track" approval process. We have seen time and again that such "fast tracks" are nothing more than expressways to profit for Big Pharma.

Psaty and Lumley's chronology of the ezetimibe studies and their timing is itself timely, and should refocus needed scrutiny on the role of pharmaceutical companies as the stewards of scientific data and discovery.

Monday, March 10, 2008

The CORTICUS Trial: Power, Priors, Effect Size, and Regression to the Mean

The long-awaited results of another trial in critical care were published in a recent NEJM: (http://content.nejm.org/cgi/content/abstract/358/2/111). Similar to the VASST trial, the CORTICUS trial was "negative" and low dose hydrocortisone was not demonstrated to be of benefit in septic shock. However, unlike VASST, in this case the results are in conflict with an earlier trial (Annane et al, JAMA, 2002) that generated much fanfare and which, like the Van den Berghe trial of the Leuven Insulin Protocol, led to widespread [and premature?] adoption of a new therapy. The CORTICUS trial, like VASST, raises some interesting questions about the design and interpretation of trials in which short-term mortality is the primary endpoint.

Jean-Louis Vincent presented data at this year's SCCM conference with which he estimated that only about 10% of trials in critical care are "positive" in the traditional sense. (I was not present, so this is basically hearsay to me - if anyone has a reference, please e-mail me or post it as a comment.) Nonetheless, this estimate rings true. Few are the trials that show a statistically significant benefit in the primary outcome; fewer still are trials that confirm the results of those trials. This raises the question: are critical care trials chronically, consistently, and woefully underpowered? And if so, why? I will offer some speculative answers to these and other questions below.

The CORTICUS trial, like VASST, was powered to detect a 10% absolute reduction in mortality. Is this reasonable? At all? What is the precedent for a 10% ARR in mortality in a critical care trial? There are few, if any. No large, well-conducted trials in critical care that I am aware of have ever demonstrated (least of all consistently) a 10% or greater reduction in mortality from any therapy, at least not as a PRIMARY PROSPECTIVE OUTCOME. Low tidal volume ventilation? 9% ARR. Drotrecogin-alfa? 7% ARR in all-comers. So I therefore argue that all trials powered to detect an ARR in mortality of greater than 7-9% are ridiculously optimistic, and that the trials that spring from this unfortunate optimism are woefully underpowered. It is no wonder that, as JLV purportedly demonstrated, so few trials in critical care are "positive". The prior probability that ANY therapy will deliver a 10% mortality reduction is exceedingly low. The designers of these trials are, by force of pragmatic constraints, rolling the proverbial trial dice and hoping for a lucky throw.
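The cost of that optimism is easy to quantify with the standard two-proportion sample-size formula. A sketch assuming a 40% control-arm mortality, two-sided alpha of 0.05, and 80% power - illustrative assumptions, not the actual CORTICUS design parameters:

```python
from math import sqrt
from statistics import NormalDist

def n_per_arm(p_ctrl, arr, alpha=0.05, power=0.80):
    """Per-arm sample size to detect an absolute risk reduction `arr`
    from control event rate `p_ctrl` (two-sided test of proportions)."""
    p_tx = p_ctrl - arr
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p_ctrl + p_tx) / 2
    return ((z_a * sqrt(2 * p_bar * (1 - p_bar))
             + z_b * sqrt(p_ctrl * (1 - p_ctrl) + p_tx * (1 - p_tx))) ** 2
            / arr ** 2)

# Halving the assumed ARR roughly quadruples the required enrollment.
for arr in (0.10, 0.07, 0.05, 0.03):
    print(f"ARR {arr:.0%}: ~{n_per_arm(0.40, arr):,.0f} patients per arm")
```

Power a trial for a 10% ARR when the true effect is 3-5%, and the required enrollment you skipped is the underpowering the post complains about.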

Then there is the issue of regression to the mean. Suppose that the alternative hypothesis (Ha) is indeed correct in the generic sense that hydrocortisone does beneficially influence mortality in septic shock. Suppose further that we interpret Annane's 2002 data as consistent with Ha. In that study, a subgroup of patients (non-responders) demonstrated a 10% ARR in mortality. We should be excused for getting excited about this result, because after all, we all want the best for our patients and eagerly await the next breakthrough, and the higher the ARR, the greater the clinical relevance, whatever the level of statistical significance. But shouldn't we regard that estimate with skepticism, since no therapy in critical care has ever shown such a large reduction in mortality as a primary outcome? Since no such result has ever been consistently repeated? Even if we believe in Ha, shouldn't we also believe that the 10% Annane estimate will regress to the mean on repeated trials?
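The regression-to-the-mean worry can be made concrete with a quick simulation: pick the most impressive of many noisy subgroup estimates, and it will, on average, overstate the true effect. All parameters here are invented for illustration (true ARR 5%, control mortality 55%, 150 patients per arm per subgroup):

```python
import random

random.seed(1)  # reproducible illustration
TRUE_ARR, P_CTRL, N = 0.05, 0.55, 150  # invented parameters

def observed_arr():
    """One subgroup's estimated ARR from simulated binary outcomes."""
    deaths_ctrl = sum(random.random() < P_CTRL for _ in range(N))
    deaths_tx = sum(random.random() < P_CTRL - TRUE_ARR for _ in range(N))
    return (deaths_ctrl - deaths_tx) / N

estimates = [observed_arr() for _ in range(20)]  # 20 hypothetical subgroups
best = max(estimates)
print(f"most impressive subgroup ARR: {best:.1%} (true ARR: {TRUE_ARR:.0%})")
# The headline subgroup overstates the truth; expect a repeat trial
# to regress toward the true 5%.
```

The subgroup that generates the fanfare is, by construction, the one that got lucky; the confirmatory trial samples from the same noisy distribution without the selection.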

It may be true that therapies with robust data behind them become standard practice, equipoise dissipates, and the trials of the best therapies are not repeated - so they don't have a chance to be confirmed. But the knife cuts both ways - if you're repeating a trial, it stands to reason that the data in support of the therapy are not that robust, and you should become more circumspect in your estimates of effect size - taking prior probability and regression to the mean into account.

Perhaps we need to rethink how we're powering these trials. And funding agencies need to rethink the budgets they will allow for them. It makes little sense to spend so much time, money, and effort on underpowered trials, and to establish the track record that we have established, in which the majority of our trials are "failures" in the traditional sense and all include a sentence in the discussion section about how the current results should influence the design of subsequent trials. Wouldn't it make more sense to conduct one trial that is so robust that nobody would dare repeat it in the future? One that would provide a definitive answer to the question that is posed? Is there something to be learned from the long arc of the steroid pendulum that has been swinging with frustrating periodicity for many a decade now?

This is not to denigrate in any way the quality of the trials that I have referred to. The Canadian group in particular as well as other groups (ARDSnet) are to be commended for producing work of the highest quality which is of great value to patients, medicine, and science. But in keeping with the advancement of knowledge, I propose that we take home another message from these trials - we may be chronically underpowering them.