Tuesday, April 29, 2008

Blood Substitutes Doomed by Natanson's Meta-Analysis in JAMA

When the ARMY gives up on something, you should be on the lookout for red flags. (Pentagon types beholden to powerful contractors and highly susceptible to sunk cost bias still haven't given up on that whirligig of death called the Osprey, have they?) But the ARMY's abandonment of a blood substitute that it found was killing animals in tests was apparently no deterrent to Northfield Laboratories, Inc., makers of "Polyheme", or to Wall Street investors in this and other companies working on products with a similar goal - to cook up an extracellular hemoglobin-based molecule that can be used in lieu of red blood cell transfusions in trauma patients and others.

Charles Natanson, an intramural researcher at the NIH, and co-workers performed a meta-analysis of trials of blood substitutes, published online today on the JAMA website: http://jama.ama-assn.org/cgi/content/full/299.19.jrv80007 . They found that these trials, which were individually powered for outcomes such as number of transfusions provided or other "surrogate-sounding" endpoints, when combined demonstrate that these products were killing study subjects. The relative risk of death for study subjects receiving one of these products was 1.3, and the risk of myocardial infarction was increased more than threefold. The robustness of these findings is enhanced by the biological plausibility of the result - cell-free hemoglobin is known to scavenge nitric oxide from the endothelium of the vasculature, leading to substantial vasoconstriction and other untoward downstream outcomes.
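For readers curious about the mechanics of such a pooling, here is a toy fixed-effect (inverse-variance) meta-analysis of log relative risks. The 2x2 trial counts below are made up for illustration - they are emphatically not Natanson's data - but they show how several individually unimpressive trials can combine into a pooled relative risk near 1.3:

```python
import math

# Illustrative only: hypothetical 2x2 counts, NOT the actual trial data.
# Each tuple: (deaths on product, n on product, deaths on control, n on control)
trials = [
    (20, 100, 15, 100),
    (35, 200, 28, 200),
    (12, 80, 9, 80),
]

weights, log_rrs = [], []
for d1, n1, d0, n0 in trials:
    rr = (d1 / n1) / (d0 / n0)
    # Delta-method variance of the log relative risk
    var = 1/d1 - 1/n1 + 1/d0 - 1/n0
    weights.append(1 / var)
    log_rrs.append(math.log(rr))

# Fixed-effect pooled estimate: inverse-variance weighted mean of log RRs
pooled_log_rr = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
se = math.sqrt(1 / sum(weights))
lo, hi = pooled_log_rr - 1.96 * se, pooled_log_rr + 1.96 * se

print(f"pooled RR = {math.exp(pooled_log_rr):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```

Note that with these small hypothetical trials the confidence interval still straddles 1.0 - it takes the full weight of many combined trials before the signal of harm becomes unmistakable, which is exactly the point of the meta-analysis.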

In addition to my penchant for cautionary tales, my interest in this study has to do with study design. We are beholden to "conventional" study design expectations where a p-value is a p-value, they're all 0.05, and an outcome is an outcome, whether it be bleeding, or pain, or death - we don't differentially value them. But if you're studying a novel agent, looking for some crummy surrogate endpoint like number of transfusions, and your alpha threshold for that is 0.05, then the alpha threshold for death should be higher (say 0.25 or so), especially if you're underpowered to detect excess deaths. That kind of arrangement would imply that we value death at least five times more than transfusion (I for one would rather have 500 or more transfusions than be dead, but that's a topic for another discussion).
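A quick sketch of the arithmetic, with made-up numbers of my own choosing: suppose control mortality is 10%, the product truly carries a relative risk of death of 1.3 (so 13% mortality), and the trial enrolls 400 patients per arm. Loosening the alpha threshold for the death endpoint from 0.05 to 0.25 roughly doubles the chance of flagging the harm:

```python
from statistics import NormalDist

nd = NormalDist()

def power_two_prop(p0, p1, n, alpha):
    """Approximate power of a two-sided two-proportion z-test, n per arm."""
    se = ((p0 * (1 - p0) + p1 * (1 - p1)) / n) ** 0.5
    z_crit = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf(abs(p1 - p0) / se - z_crit)

# Hypothetical numbers: 10% control mortality, RR 1.3 -> 13%, 400 per arm
for alpha in (0.05, 0.25):
    print(f"alpha {alpha}: power to detect the excess deaths = "
          f"{power_two_prop(0.10, 0.13, 400, alpha):.2f}")
```

Under these assumptions the trial has only about a one-in-four chance of "seeing" the excess mortality at alpha 0.05, which is precisely the sense in which a conventional safety analysis is rigged to miss harm.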

Fortunately for any patients who may have been recruited to participate in such studies, Natanson et al undertook this perspicacious meta-analysis, and the editorialists extended their recommendations for more transparency in data dissemination to argue, almost, that future trials of blood substitutes should be banned or boycotted. Even if the medical community does not have the gumption to go that far, prospective participants in such studies and their surrogates can at least perform a simple Google search, and from now on the Natanson article is liable to be on the first page.

Thursday, April 3, 2008

A [now open] letter to Congress re: Proposed Medicare Reimbursement Cuts

I'm not sure that this is entirely in keeping with the theme of this blog, but I will justify it by saying that the health of the healthcare system is of vital interest to all stakeholders, including researchers with an interest in clinical trials. The following letter was sent via the ACCP to my senators and congressmen regarding the Medicare reimbursement cuts that are to be instituted in July of this year. We were solicited via the medical professional society to be a voice in opposition to the cuts....

Dear Sir or Madam-

Physicians' income, especially that of primary care providers, upon whom patients rely most heavily for basic care, has been falling in real dollars (not keeping pace with inflation) for years, and the newest cuts will markedly exacerbate the disconcerting trend that already exists.

Most physicians do not begin earning income in earnest until they are over 30 years old, a significant lost opportunity due to prolonged schooling and training. This compounds the problem of substantial debt burden that recent graduates must bear. Economically speaking, medicine, especially in the essential primary care fields, is no longer an attractive option for many talented students and graduates. From a job satisfaction standpoint, medicine has also become far less attractive due to regulatory burdens, paperwork, lack of adequate time to spend with patients, and fragmentation of care.

This fragmentation of care is in fact at least partially driven by Medicare cuts. When reimbursement to an individual physician is cut, s/he simply "farms out" parcels of the overall care of the patient to other physicians and specialists. This "multi-consultism" militates against any cost savings that might be achieved by cuts in reimbursement to individual physicians. Perhaps more alarming is the fact that care delivery is less comprehensive, more fragmented, and less satisfying to patients and physicians alike, the latter of whom may feel a "diffusion of responsibility" regarding patients' care when multi-consultism is employed. Reduced reimbursements also likely drive the excess ordering of laboratory tests and radiographic scans, both in situations where the physician stands to profit from the testing and when s/he does not - in the latter case because the care is being "farmed out" not to another physician, but to the laboratory or radiology suite. The result is that Medicare "cuts" may paradoxically increase overall net healthcare expenditures. Physicians are already squeezed as much as they can tolerate being squeezed. Further cuts are certain to backfire in this and myriad other ways.

A perhaps more insidious, invidious, and pernicious result of reimbursement cuts is that they are driving the talent out of medicine, especially primary care medicine. Were it not for the veritable reimbursement shelter that I experience as a practitioner at an academic medical center, I would surely not be practicing medicine in any traditional way - it is simply not worth it. Hence we have the genesis and proliferation of "concierge practices," where the wealthy pay an annual fee for entry into the practice, only cash payments are accepted, and patients can expect more traditional service from their physician (e.g., unhurried time to talk with him/her). Hence we have, as pointed out in a recent New York Times article (http://query.nytimes.com/gst/fullpage.html?res=9C05E6D81E38F93AA25750C0A96E9C8B63&scp=2&sq=dermatology&st=nyt ), the siphoning of medical student talent into specialties such as dermatology and plastic surgery, because the lifestyle is more attractive and reimbursement is not a problem since the "clientele" (aka patients) are affluent and pay out-of-pocket. Hence we have the brightest physicians, such as my colleague and close friend Michael C., MD, leaving medicine altogether to work on Wall Street in the financial sector. All of these disturbing trends threaten to undermine what was heretofore (and hopefully still is) one of the best healthcare systems on the planet. I, for one, will not recommend a career in primary care to any medical student who seeks my advice, and to undergraduates contemplating a career in medicine I say "enter medicine only if it is the only field you can envision yourself ever being happy in."

The system is broken, and we as a country cannot endure and thrive if our healthcare expenditures continue to eat up 15+% of our GDP. But cutting the payments to physicians, the very workforce upon which delivery of any care depends, is no longer a viable solution to the problem. Other excesses in the system are better targets for cost savings: use of branded pharmaceuticals (e.g., Vytorin or Zetia) when generic alternatives are as good or better; use of expensive scans of unproven benefit (screening CT scans for lung cancer) when cheaper alternatives exist (stopping smoking); excessive and wasteful laboratory testing of unproven benefit (daily laboratory testing on hospital inpatients, wanton ordering of chest x-rays, head CTs, EKGs, and echocardiograms); use of therapeutic modalities of very high cost and modest benefit (AICDs, lung transplantation, back surgery, knee arthroscopy, coated stents, etc.); and provision of futile care at the end of life. Limitations on these are far less likely to compromise delivery of generally effective and affordable care for the average citizen.

I urge congress to consider the far-reaching but difficult to measure consequences of further reimbursement cuts before an entire generation of the most talented physicians and potential physicians determines that the financial, lifestyle, and opportunity costs of practicing medicine, especially primary care medicine, are just too much to bear.

Regards,

Scott K Aberegg, MD, MPH, FCCP
Assistant Professor of Medicine
The Ohio State University College of Medicine
Columbus, Ohio

Monday, March 31, 2008

MRK and SGP: Ye shall know the truth, and the truth shall send thy stock spiralling

Apparently, the editors of the NEJM read my blog (even though they stop short of calling for a BOYCOTT):

"...it seems prudent to encourage patients whose LDL cholesterol levels remain elevated despite treatment with an optimal dose of a statin to redouble their efforts at dietary control and regular exercise. Niacin, fibrates, and resins should be considered when diet, exercise, and a statin have failed to achieve the target, with ezetimibe reserved for patients who cannot tolerate these agents."

Sound familiar?

The full editorial can be seen here: http://content.nejm.org/cgi/content/full/NEJMe0801842
along with a number of other early-release articles on the subject.

The ENHANCE data are also published online (http://content.nejm.org/cgi/content/full/NEJMoa0800742
and there's really nothing new to report. We have known the results for several months now. What is new is doctors' nascent realization that they have been misled and bamboozled by the drug reps, Big Pharma, and their own long-standing, almost religious faith in surrogate endpoints (see post below). It's as if you have to go through the Kübler-Ross stages of grief (denial, anger, bargaining, depression, and, finally, acceptance) before you give up on your long-cherished notions of reality. Amazingly, the ACC, whose statement just months ago appeared to be intended to allay patients' and doctors' concerns about Zetia, has done an apparent 180 on the drug: "Go back to Statins" is now their sanctimonious advice: http://acc08.acc.org/SSN/Documents/ACC%20D3LR.pdf

I was briefly at the ACC meeting yesterday (although I did not pay the $900 fee to attend the sessions). The Big Pharma marketing presence was nauseating. A Lipitor-emblazoned bag was given to each attendee. A Lipitor lanyard was used to hold your $900 ID badge. Buses throughout the city were emblazoned with Vytorin and Lipitor advertisements, among others. Banners covered numerous floors of the facades of city buildings. The "exhibition hall," a veritable orgy of marketing madness, was jam-packed with the most aesthetically pleasing and best-dressed salespersons with their catchy displays and gimmicks. (Did you know that abnormal "vascular reactivity" is a heretofore unknown "risk factor"? And that with a little $20,000 device they can sell you (which you can probably bill for), you can detect said abnormal vascular reactivity?) The distinction between science, reality, and marketing is blurred imperceptibly, if it exists at all. Physicians from all over the world greedily scramble for free pens, bags, and umbrellas (as if they cannot afford such trinkets on their own - or was it the $900 entrance fee that squeezed their pocketbooks?). They can be seen throughout the convention center with armloads of Big Pharma propaganda packages: flashlights, laser pointers, free orange juice, and the like.

I just wonder: How much money does the ACC receive from these companies (for this Big Pharma Bonanza and for other "activities")? If my guess is in the right ballpark, I don't have to wonder why the ACC hedged in its statement when the ENHANCE data were released in January. I think I might have an idea.

Wednesday, March 26, 2008

Torcetrapib, Ezetimibe, and Surrogate Endpoints: A Cautionary Tale

In today's JAMA (http://jama.ama-assn.org/cgi/content/extract/299/12/1474 ), Drs. Psaty and Lumley echo many of the points made on this blog over the last six months about ezetimibe and torcetrapib (see posts below). While they stop short of calling for a boycott of ezetimibe, and their perspective on torcetrapib is tempered by Pfizer's early conduct of a trial with hard outcomes as endpoints, their commentary underscores the dangers inherent in the long-standing practice of almost unquestioningly accepting the validity of "established" surrogate endpoints. The time to re-examine the validity of surrogate endpoints such as glycemic control, LDL, HDL, and blood pressure is now. Agents to treat these maladies are abundant and widely accessible, so potential delays in discovery and approval of new agents are no longer a suitable argument for a "fast track" approval process. We have seen time and again that such "fast tracks" are nothing more than expressways to profit for Big Pharma.

Psaty and Lumley's chronology of the ezetimibe studies is itself timely and should refocus needed scrutiny on the role of pharmaceutical companies as the stewards of scientific data and discovery.

Monday, March 10, 2008

The CORTICUS Trial: Power, Priors, Effect Size, and Regression to the Mean

The long-awaited results of another trial in critical care were published in a recent NEJM: (http://content.nejm.org/cgi/content/abstract/358/2/111). Similar to the VASST trial, the CORTICUS trial was "negative" and low dose hydrocortisone was not demonstrated to be of benefit in septic shock. However, unlike VASST, in this case the results are in conflict with an earlier trial (Annane et al, JAMA, 2002) that generated much fanfare and which, like the Van den Berghe trial of the Leuven Insulin Protocol, led to widespread [and premature?] adoption of a new therapy. The CORTICUS trial, like VASST, raises some interesting questions about the design and interpretation of trials in which short-term mortality is the primary endpoint.

Jean-Louis Vincent presented data at this year's SCCM conference with which he estimated that only about 10% of trials in critical care are "positive" in the traditional sense. (I was not present, so this is basically hearsay to me - if anyone has a reference, please e-mail me or post it as a comment.) Nonetheless, this estimate rings true. Few are the trials that show a statistically significant benefit in the primary outcome; fewer still are trials that confirm the results of those trials. This raises the question: are critical care trials chronically, consistently, and woefully underpowered? And if so, why? I will offer some speculative answers to these and other questions below.

The CORTICUS trial, like VASST, was powered to detect a 10% absolute reduction in mortality. Is this reasonable? At all? What is the precedent for a 10% ARR in mortality in a critical care trial? There are few, if any. No large, well-conducted trials in critical care that I am aware of have ever demonstrated (least of all consistently) a 10% or greater reduction in mortality from any therapy, at least not as a PRIMARY PROSPECTIVE OUTCOME. Low tidal volume ventilation? 9% ARR. Drotrecogin alfa? 7% ARR in all-comers. So I therefore argue that all trials powered to detect an ARR in mortality of greater than 7-9% are ridiculously optimistic, and that the trials that spring from this unfortunate optimism are woefully underpowered. It is no wonder that, as JLV purportedly demonstrated, so few trials in critical care are "positive". The prior probability is exceedingly low that ANY therapy will deliver a 10% mortality reduction. The designers of these trials are, by force of pragmatic constraints, rolling the proverbial trial dice and hoping for a lucky throw.
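To put some numbers on the optimism, here is a back-of-the-envelope calculation using the standard normal-approximation sample size formula for comparing two proportions. The 40% control-arm mortality is an illustrative round number in the neighborhood of septic shock trials, not a figure from any particular study:

```python
import math
from statistics import NormalDist

nd = NormalDist()

def n_per_arm(p_control, arr, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm, two-sided two-proportion test."""
    p_treat = p_control - arr          # treatment-arm mortality under H_a
    pbar = (p_control + p_treat) / 2   # pooled proportion under H_0
    z_a = nd.inv_cdf(1 - alpha / 2)
    z_b = nd.inv_cdf(power)
    num = (z_a * (2 * pbar * (1 - pbar)) ** 0.5 +
           z_b * (p_control * (1 - p_control) + p_treat * (1 - p_treat)) ** 0.5)
    return math.ceil((num / arr) ** 2)

# Illustrative: 40% control mortality, at the ARRs discussed above
for arr in (0.10, 0.07, 0.04):
    print(f"ARR {arr:.0%}: ~{n_per_arm(0.40, arr)} patients per arm")
```

The required sample size scales with the inverse square of the ARR, so halving the hoped-for effect roughly quadruples the trial - which is exactly why an optimistic 10% assumption is so seductive to trial designers and so dangerous to the resulting inference.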

Then there is the issue of regression to the mean. Suppose that the alternative hypothesis (Ha) is indeed correct in the generic sense that hydrocortisone does beneficially influence mortality in septic shock. Suppose further that we interpret Annane's 2002 data as consistent with Ha. In that study, a subgroup of patients (non-responders) demonstrated a 10% ARR in mortality. We should be excused for getting excited about this result, because after all, we all want the best for our patients and eagerly await the next breakthrough, and the higher the ARR, the greater the clinical relevance, whatever the level of statistical significance. But shouldn't we regard that estimate with skepticism, since no therapy in critical care has ever shown such a large reduction in mortality as a primary outcome? Since no such result has ever been consistently repeated? Even if we believe in Ha, shouldn't we also believe that the 10% Annane estimate will regress to the mean on repeated trials?
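Regression to the mean is easy to demonstrate by simulation. In this sketch (hypothetical numbers, not Annane's data), the true effect is a modest 4% ARR, yet chance alone produces a crop of small "exciting" trials with observed ARRs of 10% or more - and on replication their effects shrink back toward the true 4%:

```python
import random

random.seed(0)  # reproducible illustration

def observed_arr(p_ctrl, p_trt, n):
    """Simulate one two-arm trial; return the observed absolute risk reduction."""
    deaths_ctrl = sum(random.random() < p_ctrl for _ in range(n))
    deaths_trt = sum(random.random() < p_trt for _ in range(n))
    return deaths_ctrl / n - deaths_trt / n

# Hypothetical truth: 40% vs 36% mortality (4% ARR), 150 patients per arm
n, n_trials = 150, 2000
first_runs = [observed_arr(0.40, 0.36, n) for _ in range(n_trials)]
lucky = [arr for arr in first_runs if arr >= 0.10]       # the "exciting" trials
repeats = [observed_arr(0.40, 0.36, n) for _ in lucky]   # their replications

print(f"{len(lucky)} of {n_trials} trials showed a >=10% ARR by chance")
print(f"mean ARR among those trials:  {sum(lucky)/len(lucky):.3f}")
print(f"mean ARR on replication:      {sum(repeats)/len(repeats):.3f}")
```

The trials we choose to repeat are selected precisely because they were extreme, so their confirmatory studies are almost guaranteed to disappoint - even when the therapy genuinely works.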

It may be true that therapies with robust data behind them become standard practice, equipoise dissipates, and the trials of the best therapies are not repeated - so they don't have a chance to be confirmed. But the knife cuts both ways: if you're repeating a trial, it stands to reason that the data in support of the therapy are not that robust, and you should become more circumspect in your estimates of effect size - taking prior probability and regression to the mean into account.

Perhaps we need to rethink how we're powering these trials. And funding agencies need to rethink the budgets they will allow for them. It makes little sense to spend so much time, money, and effort on underpowered trials, and to establish the track record that we have established, where the majority of our trials are "failures" in the traditional sense and all include a sentence in the discussion section about how the current results should influence the design of subsequent trials. Wouldn't it make more sense to conduct one trial that is so robust that nobody would dare repeat it in the future? One that would provide a definitive answer to the question that is posed? Is there something to be learned from the long arc of the steroid pendulum that has been swinging with frustrating periodicity for many a decade now?

This is not to denigrate in any way the quality of the trials that I have referred to. The Canadian group in particular as well as other groups (ARDSnet) are to be commended for producing work of the highest quality which is of great value to patients, medicine, and science. But in keeping with the advancement of knowledge, I propose that we take home another message from these trials - we may be chronically underpowering them.

Sunday, March 9, 2008

The "Trials" and Tribulations of Powering Clinical Trials: The Case of Vasopressin for Septic Shock (VASST trial)

Nobody likes "negative" trials. They're just not as exciting as positive ones. (Unless they show that something we're doing is harmful or that a product that Wall Street has bet heavily on is headed for the chopping block.) But "negative" studies such as an excellent one by Russell et al in a recent NEJM (http://content.nejm.org/cgi/content/abstract/358/9/877 ) show just how difficult it is to design and conduct a "positive" trial. The [non-significant] trends in this study, namely that vasopressin is superior to norepinephrine in reducing mortality in septic shock, were demonstrated in a study that had an a priori power of 80%, based on an expected mortality rate of 60% in the placebo group. Actual power in the study was significantly less, not because, as the authors appear to suggest, the observed placebo mortality was only ~39%, but rather because the observed effect size fell markedly short of the anticipated 10% absolute mortality reduction. In order to demonstrate a mortality benefit of the magnitude observed in the current trial (~4% ARR) at a significance level of 0.05, approximately 1500 patients in each study arm would be required. This is a formidable number for a critical care trial.

Thus, this trial illustrates the trials and tribulations of designing and conducting studies with 28-day mortality as an endpoint. These studies not only entail substantial costs, but pose challenges for patient recruitment, necessitating the participation of numerous centers in a multinational setting. The coordination of such a trial is daunting. It is understandable, therefore, that investigators may wish to be optimistic about the ARR they can expect from a therapy, as this will reduce sample size and increase the chances that the trial will be successfully completed in a reasonable period of time. (For an example of a study which had to be terminated early because of these challenges, see Mancebo et al: http://ajrccm.atsjournals.org/cgi/content/short/200503-353OCv1 ). Powering the trial at 80% instead of 90% likewise represents a compromise between optimism for the efficacy of the therapy and optimism for patient recruitment. In essence, the lower the power, the more "faith" there must be that a roll of the trial dice will confirm the alternative hypothesis.

These realities played out [disappointingly] in the Russell trial. The p-value for the ARR (28-day mortality - the primary endpoint) associated with vasopressin compared with placebo was 0.26, while that associated with 90-day mortality (a prespecified secondary endpoint) was 0.11. Thus, this trial is considered negative by conventional standards.

But its being "negative" does not mean that it is not of value to practitioners. This large experience with vasopressin demonstrates both that this agent is a viable alternative to norepinephrine in regards to raising the MAP to within the goal range, as also that we can expect that there will not be a significant excess of adverse events when this agent is used. In my opinion, this study represents a veritable "green light" for continued use of this agent, as I agree with the editorialist (http://content.nejm.org/cgi/reprint/358/9/954.pdf ) that many patients with sepsis who are not responding to norepinephrine respond dramatically and favorably to this agent.

Perhaps there is a larger lesson here. Should we use the same p-value threshold for a study of, say, an antidepressant as we do for a study of an agent that may reduce mortality? In the former case, we may be most concerned about exposure of patients to a costly drug with no benefits and potential side effects - in essence, we are most concerned with a Type I Error, i.e., concluding that there is a benefit when in reality there is none. Perhaps in a trial of a potentially life-saving therapy (e.g., vasopressin) we should be most concerned with a Type II Error, i.e., concluding that there is no real benefit when in reality one exists. If that were the case, and you may have already guessed that I believe that it should be, we could address this concern by loosening the standard of statistical significance for a study of potentially life-saving agents.

The standards notwithstanding, critical care practitioners are free to interpret these data as they see fit. And one reasonable conclusion is that, the trends being in the right direction and the side effect profile being acceptable, we should be using more vasopressin in septic shock.

Or, we must make a tough call: do we want to invest the resources in a much larger trial to determine whether vasopressin can be shown to reduce mortality at the conventional p-value level of 0.05? Can we recruit the necessary 3000 patients?

Monday, February 18, 2008

Wake Up and Smell the Coffee then Wake Up Your Patients and Let Them Breathe

A few weeks ago in The Lancet (http://www.thelancet.com/journals/lancet/article/PIIS0140673608601051/abstract ) appeared a wonderful and pragmatic article demonstrating the effectiveness of combining Spontaneous Awakening Trials (SATs) with Spontaneous Breathing Trials (SBTs) in the ICU. This strategy of "Wake Up and Breathe" was highly effective and critical care practitioners everywhere should take heed. Unfortunately, a penchant for the status quo and a heaping of omission bias led the editorialist to foment skepticism for the adoption of "wake up and breathe." My colleagues and I find this skepticism unfounded and frankly dangerous in that it risks reducing the adoption of this highly effective strategy, the benefits of which clearly exceed the risks. Our letter to the editor of The Lancet was not accepted for publication, but is posted below. Hats off to Girard and Ely and co-workers for this vital addition to our literature. Now if we can just convince critical care practitioners to wake up and wake their patients up...

We read with interest the report of the ABC Trial which demonstrated the efficacy of combining daily awakenings with breathing trials in mechanically ventilated patients (1). In the accompanying editorial, Dr. Brochard contends that “sedation is also an important component of care for critically ill patients,” but he cites only one review article to support this claim (2). It is unknown if the disturbing weaning experiences he references are related to sedation restriction. What is known with reasonable certainty is that oversedation is common and associated with increased delirium (1;3), neuroimaging (4), long-term psychiatric consequences (5), and mortality (1), as well as longer duration of mechanical ventilation and ICU stay (1;4). The ABC trial adds to this body of literature by demonstrating the practical utility of combining daily sedation cessation with spontaneous breathing trials. That 92% of spontaneous awakening trials were well-tolerated strongly suggests that patients were no worse off without sedation, and is consistent with prior studies showing that oversedation, not undersedation, is the principal risk to critically ill patients.
For too long, we suffered from a dearth of quality evidence to guide the care of the critically ill. Now that such evidence is available, we would be wise to act upon it. We therefore disagree with Dr. Brochard’s statement that “more information is needed to show that the approach is feasible and safe.” Each year that we await another confirmatory trial is another year that our patients suffer prolonged mechanical ventilation and illness due to our fondness for the status quo.


Reference List

1. Girard TD, Kress JP, Fuchs BD, Thomason JW, Schweickert WD, Pun BT et al. Efficacy and safety of a paired sedation and ventilator weaning protocol for mechanically ventilated patients in intensive care (Awakening and Breathing Controlled trial): a randomised controlled trial. Lancet 2008;371(9607):126-34.
2. Brochard L. Sedation in the intensive-care unit: good and bad? Lancet 2008;371(9607):95-7.
3. Pandharipande P, Shintani A, Peterson J, Pun BT, Wilkinson GR, Dittus RS et al. Lorazepam is an independent risk factor for transitioning to delirium in intensive care unit patients. Anesthesiology 2006;104(1):21-6.
4. Kress JP, Pohlman AS, O'Connor MF, Hall JB. Daily interruption of sedative infusions in critically ill patients undergoing mechanical ventilation. N.Engl.J Med 2000;342(20):1471-7.
5. Kress JP, Gehlbach B, Lacy M, Pliskin N, Pohlman AS, Hall JB. The long-term psychological effects of daily sedative interruption on critically ill patients. Am.J Respir.Crit Care Med 2003;168(12):1457-61.

James M. O'Brien, MD, MSc
Naeem A. Ali, MD
Scott K. Aberegg, MD, MPH

Friday, January 18, 2008

Have the Peddlers of Antidepressants (Big Pharma) been Successful in Suppressing Negative Trial Results?

Yes, according to this article in yesterday's NEJM:
http://content.nejm.org/cgi/content/short/358/3/252

Talk about publication bias. According to Erick H. Turner, M.D. and coauthors, the selective publication of only "positive" trials, in addition to publishing in a positive light studies that the FDA considered "negative" leads to a 32% increase in the apparent efficacy of antidepressant drugs, on average (range 11-69%). Once again, profit trumps science, safety, and patient and public health.

What can we do about it? First, reduce by one third the effect size of any antidepressant results you see in an industry-sponsored clinical trial. Next, carefully consider whether whatever [probably modest] effect remains is worth the side effects (e.g., increase in suicide), cost, and nuisance of the drug. Third, prescribe generic agents. Fourth, don't allow pharmaceutical reps to speak with you about new products. Fifth, consider alternative treatments.

I am reminded of a curious occurrence relating to a drug that I think is definitely worth the cost, side effects, and nuisance associated with it: Chantix (varenicline) - Pfizer's smoking cessation drug. In JAMA in July 2006,
(http://jama.ama-assn.org/content/vol296/issue1/index.dtl)
two nearly identical articles described two nearly identical studies, which shared many of the same authors. What was the intent of this? Why not conduct one larger study? Was the intent to diversify the risk of failure and allow for selective publication of positive results? I'm very interested in any information anyone can provide about this curious arrangement, which appears to be without precedent. Please leave your comments below.

Wednesday, January 16, 2008

Is the American College of Cardiology (ACC) Complicit with Big Pharma (Merck and Schering-Plough)?

I am reminded of the surgical attending at Johns Hopkins who (perhaps apocryphally) would scream at the intern in the morning when a patient had done poorly overnight:
"Whose side are you on, the patient or the disease?!"

And I ask the ACC, "Whose side are you on - patients' or Big Pharma's?!"

Their main web page now links to this statement:
http://www.acc.org/enhance.htm
which states:
"The American College of Cardiology recommends that major clinical decisions not be made on the basis of the ENHANCE study alone."

Is it really a "major clinical decision" to stop Zetia/Vytorin and take a statin or niacin until the very efficacy of Vytorin and Zetia is sorted out?

I'd say that the ACC and its members need to reconsider the rather major decision they made to support the use of this drug based on surrogate end-points. As with torcetrapib, they're going to have to learn the hard way to take their lashings.

The statement goes on to say:
"The ACC recommends that Zetia remain a reasonable option for patients who are currently on a high dose statin but have not reached their goal. The ACC also notes that Zetia is a reasonable option for patients who cannot tolerate statins or can only tolerate a low dose statin."

Well, that sounds reasonable, but do you really think that the majority of patients on Zetia or Vytorin are on it because they failed a reasonable attempt to use a high-dose statin? We all know that after it hits the market, a drug is generally prescribed willy-nilly rather than carefully and rationally in selected patient groups. The ACC should know this. Hence my suspicion of complicity.

It bothers me how entrenched the use of these drugs becomes and how hard it is to remove patients from them. This is a serious status quo bias that I have commented upon before. Few physicians would start a patient on Avandia now, but the ones who are already on it get left on it. The same is true, it appears, with Vytorin, and the ACC is contributing to the status quo bias!

The mandate for physicians and the FDA is to prescribe only SAFE and EFFECTIVE therapies. The burden of scientific proof is on the drug companies who are driven by profit to promote these drugs. It is up to physicians to stand between patients' health and the companies' profits and prescribe only drugs that have met the burden of proof. And Vytorin and Zetia have not. Boycott them until the proof is in. Use alternative agents in the meantime.

Monday, January 14, 2008

Vytorin Vanquished: ENHANCE comes out from hiding and the call for a BOYCOTT gathers steam

Merck (MRK) and Schering-Plough (SGP) have finally released the ENHANCE data, and they do not look good - not for MRK and SGP stock prices (both of which were significantly down in pre-market trading!) nor for patients who have been taking ezetimibe as either Vytorin or Zetia - all the trends were in the WRONG DIRECTION (i.e., they favored simvastatin alone) IN SPITE OF robust additional LDL lowering with ezetimibe:
http://biz.yahoo.com/bw/080114/20080114005752.html?.v=1
This further evidence that this drug does not influence important clinical outcomes should renew interest in BOYCOTTING ezetimibe in all forms until/unless improved clinically meaningful outcomes can be shown with this agent in properly designed and conducted trials with sufficient transparency.
(Of course, I recognize that Vytorin is Vanquished only in this battle, that others will follow, and that MRK and SGP will say that the "real trials" are still being conducted, as if they funded ENHANCE for no good reason, and as if, had it been a positive study, they would have downplayed its significance and emphasized cautious interpretation of the results, pending completion of the "real trials".)

Friday, January 11, 2008

Jumping the Gun with Intensive Insulin Therapy (Leuven Protocol): How ICUs across the nation rushed to adopt a therapy which is probably not beneficial

In this week's NEJM is an anxiously awaited article about intensive insulin therapy in severely septic patients in the ICU: http://content.nejm.org/cgi/content/short/358/2/125
This business of intensive insulin therapy began with the publication in the NEJM in 2001 of an article by Van den Berghe et al showing a remarkable reduction in mortality in surgical (mostly post-cardiac surgery) patients in a surgical ICU. Thereafter ensued a veritable rush to adopt this therapy, and ICUs around the country began developing and adopting protocols for "tight glucose control" in spite of concerns about the study and its generalizability to non-surgical patients who were not being fed concentrated intravenous dextrose solutions....

I vividly remember one of the ICU attendings at Johns Hopkins Hospital, Dr. Jimmy Sylvester, telling us on the morning after the study was published that "this is either the largest breakthrough in intensive care therapeutics ever, or these data are faked". In essence, what he was saying was that the prior probability of a result as dramatic as that demonstrated by Van den Berghe was very low (see also: http://jama.ama-assn.org/cgi/content/full/294/17/2203 ). That low prior probability should have reduced our confidence in the results and made us more skeptical of the population studied, the dextrose solutions, and the applicability to non-surgical patients. Well then, why didn't it?

My colleague James M. O'Brien, Jr, MD, MSc and I have one possible explanation for the rush to adopt "intensive insulin therapy" which we have dubbed the "normalization heuristic." Physicians, for all of our training, remain quite simple-minded. We like simple, feel-good fixes. Normalizing lab values is one of those things. "Make it normal and all will be fine," goes the mantra. We like to make the potassium normal. We like to make the hematocrit normal. We love it when the magnesium increases after we order 4 grams. It's satisfying. And it feels like we're doing some measurable, that is, easily measurable good in the world. Normalizing blood sugars fits that paradigm and makes us feel like we are doing good. But are we?

We have learned the hard way over the years that many of the things we do to "normalize" some surface value cause an undercurrent of harm for patients. Think suppression of PVCs (the CAST trial: http://content.nejm.org/cgi/content/abstract/321/6/406 ) or transfusion thresholds (the TRICC study and others: http://content.nejm.org/cgi/content/abstract/340/6/409 ). Oftentimes, it seems, our efforts to "normalize" some value cause more harm than good. It is quite possible that this is also the case with intensive insulin, and that the "feel-good" appeal of making the blood sugars normal in the short term in acutely ill patients propelled us to early adoption of this probably useless and possibly harmful therapy.

(For an analogous contemporaneous story about biology's complexity and defiance of simple explanations and logic such as the normalization heuristic, see: http://www.nytimes.com/2008/01/11/science/11ants.html?scp=1&sq=aiding+trees+can+kill+them.)

The interesting thing regarding the "adoption" of Van den Berghe's "Leuven protocol" is that no ICU I have worked in really adopted that protocol. They softened it up, making the target blood sugar not 80-120, but rather 120-150 or some similar range. So what was adopted was "moderate insulin therapy" rather than intensive insulin therapy. Nobody has any idea whether such an approach is beneficial. It's certainly safer. But it has substantial costs in terms of nursing care that might be better spent on other interventions (think sedation interruption).

(I have been highly critical of Van den Berghe's medical insulin article, and my criticisms were published in the NEJM. I was delighted that she did not even address me/them in "the authors reply" - apparently I left her speechless: http://content.nejm.org/cgi/content/extract/354/19/2069.)

So this wonderful article in the current issue by Brunkhorst et al is music to my ears. Rather than hiding the high rate of severe hypoglycemia in supplementary material, Brunkhorst et al come right out and say that not only was the Leuven protocol NOT associated with reduced mortality, but also that it had a very high incidence of severe side effects and that their DSMB had the wherewithal to stop the study early for safety reasons. Bravo!

We await the results of several other ongoing studies of intensive insulin therapy before we nail shut the coffin on the Leuven protocol. Meanwhile, I hope that someone somewhere will design a protocol to test the "moderate insulin therapy" that we rushed to adopt after the first Van den Berghe article as a half-hearted hedge/compromise between our "normalization heuristic", our tempered enthusiasm for the Leuven protocol, our desire to "do something" for critically ill patients, and our fear of causing side effects that result directly from our interventions (omission bias: http://mdm.sagepub.com/cgi/content/abstract/26/6/575 ).

Thank you, Brunkhorst et al, for testing the Leuven protocol in an even-handed and scientifically unbiased manner and for reporting your results candidly.

Merck and Schering's "Secret Vytorin Panel"

Matthew Herper continues to lead the pack in investigating the shenanigans perpetrated by Schering-Plough (SGP) and Merck (MRK) in the conduct of the ENHANCE trial of Vytorin. I reiterate that it is my strong but measured and carefully considered opinion that this drug (ezetimibe, alone or as Vytorin) should NOT be used in ANY patients until definitive evidence of efficacy is available, since more proven alternatives exist. Patients' health should not be risked on this drug. There is too much uncertainty, and there are too many proven alternatives.

Matthew's article describes more intriguing aspects of this saga, and I couldn't state it any better than he, so I invite you to read his article:

http://www.forbes.com/2008/01/10/merck-schering-vytorin-biz-cx_mh_0111enhance.html?partner=email


Monday, December 31, 2007

Is there any place for the f/Vt (the Yang-Tobin index) in today's ICU?

Recently, Tobin and Jubran performed an eloquent re-analysis of the value of “weaning predictor tests” (Crit Care Med 2008; 36: 1). In an accompanying editorial, Dr. MacIntyre does an admirable job of disputing some of the authors’ contentions (Crit Care Med 2008; 36: 329). However, I suspect space limited his ability to defend the recommendations of the guidelines for weaning and discontinuation of ventilatory support.

Tobin and Jubran provide a whirlwind tour of the limitations of meta-analyses. These are important considerations when interpreting the reported results. However, lost in this critique of the presumed approach used by the McMaster group and the joint task force are the limitations of the studies on which the meta-analysis was based. Tobin and Jubran provide excellent points about systematic error limiting the internal validity of a study but, interestingly, do not apply such criticism to studies of f/Vt.

For the sake of simplicity, I will limit my discussion to the original report by Yang and Tobin (New Eng J Med 1991; 324: 1445). As a reminder, this was a single-center study which included 36 subjects in a “training set” and 64 subjects in a “prospective-validation set.” Patients were selected if “clinically stable and whose primary physicians considered them ready to undergo a weaning trial.” The authors then looked at a variety of measures to determine predictors of those “able to sustain spontaneous breathing for ≥24 hours after extubation” versus those “in whom mechanical ventilation was reinstituted at the end of a weaning trial or who required reintubation within 24 hours.” While not explicitly stated, it looks as if all the patients who failed a weaning trial had mechanical ventilation reinstituted, rather than failing extubation.

In determining the internal validity of a diagnostic test, one important consideration is that all subjects have the “gold standard” test performed. In the case of “weaning predictor tests,” what is the condition we are trying to diagnose? I would argue that it is the presence of respiratory failure requiring continued ventilatory support. Alternatively, it is the absence of respiratory failure requiring continued ventilatory support. I would also argue that the gold standard test for this condition is the ability to sustain spontaneous breathing. Therefore, to determine the test performance of “weaning predictor tests,” all subjects should undergo a trial of spontaneous breathing regardless of the results of the predictor tests. Now, some may argue that the self-breathing trial (or spontaneous breathing trial) is, indeed, this gold standard. I would agree if SBTs were perfectly accurate in predicting removal of the endotracheal tube and spontaneous breathing without a ventilator in the room. This is, however, not the case. So, truly, what Yang and Tobin are assessing is the ability of these tests to predict the performance on a subsequent SBT.

Dr. MacIntyre argues that “since the outcome of an SBT is the outcome of interest, why waste time and effort trying to predict it?” I would agree with this within limits. Existing literature supports the use of very basic parameters (e.g., hemodynamic stability, low levels of FiO2 and PEEP, etc.) as screens for identifying patients for whom an SBT is appropriate. Uncertain is the value of daily SBTs in all patients, regardless of whether they pass this screen. One might hypothesize that simplifying this step even further might provide incremental benefit. Yang and Tobin, however, must consider a failure on an SBT to have deleterious effects. They consider “weaning trials undertaken either prematurely or after an unnecessary delay…equally deleterious to a patient’s health.” There is no reference supporting this assertion. Recent data suggest that inclusion of “weaning predictor tests” does not save patients from harm due to avoiding SBTs destined to fail (Tanios et al. Crit Care Med, 2006; 34: 2530). On the contrary, inclusion of the f/Vt as the first of Tobin’s and Jubran’s “three diagnostic tests in sequence” resulted in prolonged weaning time.

Tobin and Jubran also note the importance of prior probabilities in determining the performance of a diagnostic test. In the original study, Yang and Tobin selected patients who “were considered ready to undergo a weaning trial” by their primary physicians. Other studies have reported that such clinician assessments are very unreliable, with predictive values marginally better than a coin-flip (Stroetz et al, Am J Resp Crit Care Med, 1995; 152: 1034). Perhaps the clinicians whose patients were in this study are better than this. However, we are not provided with strict clinical rules which define this candidacy for weaning but can probably presume that “readiness” implies at least a 50% prior probability of success. Using Yang and Tobin’s sensitivity of 0.97 and specificity of 0.64 for f/Vt, we can generate a range of posterior probabilities of success on a weaning trial:
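As a sketch of the arithmetic behind such a table, Bayes' theorem applied to the reported sensitivity (0.97) and specificity (0.64) yields the posterior probabilities below. The particular priors are my own illustrative choices, not values from the paper:

```python
def posteriors(prior, sens=0.97, spec=0.64):
    """Posterior probability of weaning success given the f/Vt result.

    Sensitivity and specificity are from Yang & Tobin (1991); a "positive"
    test here means a passing (low) f/Vt, which predicts success.
    """
    p_pass = sens * prior / (sens * prior + (1 - spec) * (1 - prior))
    p_fail = (1 - sens) * prior / ((1 - sens) * prior + spec * (1 - prior))
    return p_pass, p_fail

for prior in (0.5, 0.75, 0.9):
    p_pass, p_fail = posteriors(prior)
    print(f"prior {prior:.0%}: low f/Vt -> {p_pass:.0%}, high f/Vt -> {p_fail:.0%}")
```

At a 50% prior, for example, a failed (high) f/Vt drives the posterior probability of success down to roughly 4%, on the order of the "1 in 25" figure discussed below.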


As one can see, the results of the f/Vt assessment have a dramatic effect on the posterior probabilities of successful SBTs. However, is there a threshold below which one would advocate not performing an SBT if one’s prior probability is 50% or higher? I doubt it. Even with a pre-test probability of successful SBT of 50% and a failed f/Vt, about 1 in 25 patients would actually do well on an SBT. I am not willing to forego an SBT with such data since, in my mind, SBTs are not as dangerous as continued, unneeded mechanical ventilation. I would consider low f/Vt values completely non-informative since they do not instruct me at all regarding the success of extubation – the outcome in which I am most interested.

Other studies have used f/Vt to predict extubation failure (rather than SBT failure) and these are nicely outlined in a recent summary by Tobin and Jubran (Intensive Care Medicine 2006; 32: 2002). Even if we ignore different cut-points of f/Vt and provide the most optimistic specificities (96% for f/Vt <100, Uusaro et al, Crit Care Med 2000; 28: 2313) and sensitivities (79% for f/VT <88, Zeggwagh et al., Intens Care Med 1999; 25:1077), the f/Vt may not help much. As with the prior table, using prior probabilities and the results of the f/Vt testing, we can generate posterior probabilities of successful extubation:


As with the predictions of SBT failure, a high f/Vt lowers the posterior probability of successful extubation greatly. However, one must consider the cut off for posterior probabilities in which one would not even attempt an SBT. Even with a 1% posterior probability, 1 in 100 patients will be successfully extubated. This is the rate when the prior probability of successful extubation is only 20% AND the patient has a high f/Vt! What rate of failed extubation is acceptable or, even, preferable? Five percent? Ten percent? If one never reintubates a patient, it is more likely that he is waiting “too long” to extubate rather than possessing perfect discrimination. Furthermore, what is the likelihood that patients with poor performance on an f/Vt will do well on an SBT? I suspect this failure will prohibit extubation and the high f/Vt values will only spare the effort of performing the SBT. Is the incremental effort of performing SBTs on those who are destined to fail such that it requires more time than the added complexity of using the f/Vt to determine if a patient should receive an SBT at all? Presuming that we require an SBT prior to extubation, low f/Vt values remain non-informative. One could argue that with a posterior probability of >95%, we should simply extubate the patient, but I doubt many would take this approach, except in those intubated for reasons not related to respiratory problems (e.g. mechanical ventilation for surgery or drug overdose).

Drs. Tobin, Jubran and Marini (who writes an additional, accompanying editorial, Crit Care Med 2008; 36: 328) are master clinicians and physiologists. When they are at the bedside, I do not doubt that their “clinical experience and firm grasp of pathophysiology” (as Dr. Marini mentions), can match or even exceed the performance of protocolized care. Indeed, expert clinicians at Johns Hopkins have demonstrated that protocolized care did not improve the performance of the clinical team (Krishnan et al., Am J Resp Crit Care Med 2004; 169: 673). I have heard Dr. Tobin argue that this indicates that protocols do not provide benefit for assessment of liberation (American Thoracic Society, 2007). I doubt that the authors would strictly agree with his interpretation of their data since several of the authors note in a separate publication that “the regularity of steps enforced by a protocol as executed by nurses or therapists trumps the rarefied individual decisions made sporadically by busy physicians” (Fessler and Brower, Crit Care Med 2005; 33: S224). What happens to the first patient who is admitted after Dr. Tobin leaves service? What if the physician assuming the care of his patients is more interested in sepsis than ventilatory physiology? What about the patient admitted to a small hospital in suburban Chicago rather than one of the Loyola hospitals? Protocols do not intend to set the ceiling on clinical decision-making and performance, but they can raise the floor.

Friday, December 28, 2007

Results of the Poll - Large Trials are preferred

The purpose of the poll that has been running alongside the posts on this blog for some months now was to determine if physicians/researchers (a convenience sample of folks visiting this site) intuitively are Bayesian when they think about clinical trials.

To summarize the results, 43/68 respondents (63%) reported that they preferred the larger 30-center RCT. This differs significantly from the hypothesized value of 50% (p=0.032).
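For readers who want to check the arithmetic, here is a minimal sketch of an exact two-sided binomial test of 43/68 against 50% (the reported p=0.032 may reflect a slightly different method, e.g. a normal approximation):

```python
from math import comb

def binom_two_sided_half(k, n):
    # Exact binomial test against a null of p = 0.5. Because this null is
    # symmetric, the two-sided p-value is twice the upper tail when k > n/2.
    upper_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * upper_tail)

# 43 of 68 respondents preferred the larger trial:
print(binom_two_sided_half(43, 68))  # significant at the conventional 0.05 threshold
```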

From a purely mathematical and Bayesian perspective, physicians should be indifferent to the choice between a large(r) 30-center RCT involving 2100 patients showing a 5% mortality reduction at p=0.0005, and 3 small(er) 10-center RCTs involving 700 patients each showing the same 5% mortality reduction at p=0.04. In essence, unless respondents were reading between the lines somewhere, the choice is between two options with identical posterior probabilities. That is, if the three smaller trials are combined, they are equivalent to the larger trial, and the meta-analytic p-value is 0.0005. Looked at from a different perspective, the large 30-center trial could have been analyzed as 3 10-center trials based on the region of the country in which the centers were located or any other arbitrary classification of centers.
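The equivalence claim can be illustrated with Stouffer's z-score method for combining p-values. This is a sketch under simplifying assumptions (two-sided p-values, equal weights, identical effects in each trial), not the only way to pool the trials:

```python
from statistics import NormalDist

_nd = NormalDist()

def stouffer(p_values):
    # Convert each two-sided p-value to a z-score, sum with equal weights
    # scaled by sqrt(k), and convert back to a two-sided combined p-value.
    zs = [_nd.inv_cdf(1 - p / 2) for p in p_values]
    z = sum(zs) / len(zs) ** 0.5
    return 2 * (1 - _nd.cdf(z))

# Three trials at p = 0.04 combine to roughly the large trial's p = 0.0005:
print(stouffer([0.04, 0.04, 0.04]))
```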

Why this result? I obviously can't say based on this simple poll, but here are some guesses: 1.) People are more comfortable with larger multicenter studies, perhaps because they are accustomed to seeing cardiology mega-trials in journals such as NEJM; or 2.) The p-value of 0.04 associated with the small(er) studies seems "marginal" and the combination of the three studies is non-intuitive, and/or it is not possible to see that the combination p-value will be the same. However, I have some (currently unpublished) data which show that [paradoxically] for the same study, physicians are more willing to adopt a therapy with a higher rather than a lower p-value.
Further research is obviously needed to determine how physicians respond to evidence from clinical trials and whether or not their responses are normative. In this poll, it appears that they were not.

Friday, December 21, 2007

Patients and Physicians should BOYCOTT Zetia and Vytorin: Forcing MRK and SGP to come clean with the data

You wouldn't believe it - or would you? The NYT reports today that SGP has data from a number of - go figure - unpublished studies that may contain important data about increased [and previously undisclosed] risks of liver toxicity with Zetia and Vytorin: http://www.nytimes.com/2007/12/21/business/21drug.html Unproven benefits, undisclosed risks? If I were a patient, I would want to be taken off this drug and be put on atorvastatin or simvastatin or a similar agent. If the medical community would get on board and take patients off of this unproven and perhaps risky drug, that might at least force the companies to come clean with their data.

In fact, I'm astonished at the medical community's reluctance to challenge the status quo represented by widespread use of drugs such as this and Avandia, for which there is no proof of efficacy save for surrogate endpoints, and for which there is evidence of harm. These drugs are not good bets unless alternatives do not exist, and of course they do. I am astonished in my pulmonary clinic to see many patients referred for dyspnea, with a history of heart disease and/or cardiomyopathy, who remain on Avandia. Apparently, protean dyspnea is not a sufficient wake-up call to change the diabetes management of a patient who is receiving an agent of unproven efficacy which is known to cause fluid retention and CHF. This just goes to show how effective pharmaceutical marketing campaigns are, how out-of-control things have become, and how non-normative physicians' approach to the data is.

The profit motive impels them forward. The evidence does not support the agents proffered. Evidence of harm is available. Alternatives exist. Why aren't physicians taking patients off drugs such as vioxx, avandia, zetia, and vytorin, and using alternative agents until the confusion is resolved?

Sunday, December 16, 2007

Dexmedetomidine: a New Standard in Critical Care Sedation?

In last week's JAMA, Wes Ely's group at Vanderbilt report the results of a trial comparing dexmedetomidine to lorazepam for the sedation of critically ill patients:
http://jama.ama-assn.org/cgi/content/short/298/22/2644
This group, along with others, has taken the lead as innovators in research related to sedation and delirium in the ICU (in addition to other topics), and this is a very important article in this area. In short, the authors found that, when compared to lorazepam, dexmed led to better targeted sedation and less time in coma, with a trend toward improved mortality.

One of the most impressive things about this study is stated as a post-script:

“This investigator-initiated study was aided by receipt of study drug and an unrestricted research grant for laboratory and investigational studies from Hospira Inc….Hospira Inc had no role in the design or conduct of the study; in the collection, analysis, and interpretation of the data; in the preparation, review, or approval of this manuscript; or in the publication strategy of the results of this study. These data are not being used to generate FDA label changes for this medication, but rather to advance the science of sedation, analgesia, and brain dysfunction in critically ill patients….”

Investigator-initiated....investigator-controlled design and publication, investigators as stewards of the data.....music to my ears.


But is dexmed going to be the new standard in critical care sedation? For that question, it would appear that it is too early for answers. I have the following observations:
• This study used higher doses of dexmed for longer durations than the product labeling advises. Should practitioners use the doses studied or the approved doses? My very small experience with this drug so far at the labeled doses is that it is difficult to use, in that it does not achieve adequate sedation in the most agitated patients - those receiving the highest doses of benzodiazepines and narcotics, in whom lightening of sedation is assigned the highest priority.
• The most impressive primary endpoint achieved by the drug was days alive without delirium or coma, but most of it was driven by coma-free days. Perhaps this is not surprising given two aspects of the study's design:
1. Patients did not have daily interruptions of sedative infusions, a difficult-to-employ, but evidence-based practice to reduce oversedation and coma
2. Lorazepam was titrated upwards without boluses between dose increases. Given the long half-life of this drug, we would expect overshoot by the time steady-state pharmacokinetics were achieved.
So is it surprising that patients in the dexmed group had fewer coma-free days?
• We are not told about the tracheostomy practices in this study. Getting a trach earlier may lead to both sedation reduction and improved mortality (See http://ccmjournal.org/pt/re/ccm/abstract.00003246-200408000-00009.htm;jsessionid=HlfG93Qfvb113sCpnD10053YzKqMB3zFfDTdbGvgCQPdlMZ3S8kV!1219373867!181195629!8091!-1?index=1&database=ppvovft&results=1&count=10&searchid=1&nav=search).
• We are not told the proportion of patients in each group who had withdrawal of support. Anecdotally, I have found that families have greater willingness to withdraw support for patients who are comatose, regardless of other underlying physiological variables or organ failures. Can the trend towards improved mortality with dexmed be attributed to differences in the willingness of families to withdraw support?
• In spite of substantial data that delirium is associated with mortality (http://jama.ama-assn.org/cgi/content/abstract/291/14/1753?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=delirium&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT ), and these data showing that there is a TREND towards fewer delirium-free days with dexmed, the hypothesis that dexmed improves mortality via improvement in delirium is one that can only be tested by a study with mortality as a primary endpoint.
The data from the current study are compelling, and Ely and investigators are to be commended for the important research they are doing (this article is only the tip of that iceberg of research). However, it remains to be seen if one sedative compared to others can lead to improvements in mortality or more rapid recovery from critical illness, or whether limitation of sedation in general with whatever agent is used is primarily responsible for improved outcomes.


Wednesday, December 12, 2007

ENHANCE trial faces congressional scrutiny

Merck and Shering-Plough had better get their houses in order. Congress is on the case:

http://www.nytimes.com/2007/12/12/business/12zetia.html?_r=1&oref=slogin

Apparently, representatives of the US populace, which pays for a substantial portion of the Zetia sold, are not pleased by the delays in the release of the data from the ENHANCE trial. The chicanery is going to be harder to sustain.

I certainly hope for everyone's sake (especially patients') that there is no foul play afoot with this trial or ezetimibe - Merck can hardly withstand another round of Vioxx-type suits, can it? Or can it? Merck's stock price (MRK: http://finance.yahoo.com/q/bc?s=MRK&t=5y&l=on&z=m&q=l&c=) is at the same level as it was in Jan, 2004. Some high price to pay for obfuscating the truth, concealing evidence of harm, bilking insurers and the American public and government for billions of $$$ for a prescription painkiller when equivalent non-branded products were available, and causing thousands of heart attacks in the process....

The consequences should be harsher the second time around.....


Tuesday, December 11, 2007

Pronovost, Checklists, and Putting Evidence into Practice

In this week's New Yorker:
http://www.newyorker.com/reporting/2007/12/10/071210fa_fact_gawande
Atul Gawande, a popular physician writer who may be familiar to readers from his columns in the NEJM and the NYT, chronicles the herculean efforts of Peter Pronovost, MD, PhD at Johns Hopkins Hospital to make sure that the mundane but effective does not always take a back seat to the heroic but largely symbolic efforts of critical care doctors.

One of my chronic laments is that evidence is not utilized and that physician efforts do not appear to be rationally apportioned to what counts most. There appears to be too much emphasis on developing evidence and too little emphasis on making sure it is expeditiously adopted and employed; too much emphasis on diagnosis, too little emphasis on evidence-based treatment; too much focus on the "rule of rescue," too little focus on the "power of prevention." Pronovost has demonstrated that simple checklists can have bountiful yields in terms of teamwork, prevention, and delivery of effective care - so why aren't we all familiar with his work? Why doesn't every ICU use his checklists?

My own experience at the Ohio State University Medical Center is emblematic of the challenges of getting an unglamorous thing like a checklist accepted as a routine part of clinical practice in the ICU. In spite of the evidence supporting it, its obvious rational basis, and widespread recognition that we often miss things if we aren't rigorous and systematic, adopting an adapted version of Pronovost's checklist at OSUMC has proven challenging (albeit possible). As local champion of a checklist that I largely plagiarized from Pronovost's original, I have been told by colleagues that it is "cumbersome", by RNs that it is "superfluous", by fellows that it is a "pain", and by people of all disciplines that they "don't see the point". I have been frustrated that when I do not personally ensure that it is being done daily (by walking through the ICU and checking), it is abandoned as yet another "chore", another piece of bureaucratic red tape that hampers the delivery of more important "patient-centered" care - such as procedures and ordering of tests.

All of these criticisms are delivered despite my admonition that the checklist, like a fishing expedition, is not expected to yield a "catch" on every cast, but that if it is cast enough, things will be caught that would otherwise be missed; despite my reminder that it is an opportunity to improve our communication with our multidisciplinary ICU team (and to learn the names of its constituents); and despite producing evidence of its benefit and of underutilization of evidence-based therapies which the checklist reminds practitioners to consider. If I were not personally committed to making sure that the checklist is photocopied, available, and consistently filled out (by our fellows, who deserve great credit for filling it out), it would quickly fall by the wayside, another relic of a well-meaning effort to encourage conscientiousness through bureaucracy and busy-work (think HIPAA here - the intent is noble, but the practical result an abject failure).

So what is the solution? How are we to increase acceptance of Pronovost's checklist and recognition of its utility and its necessity? It could be through fiat, through education, through a variety of means. But it appears that it has survived at Hopkins because of Pronovost's ongoing efforts to promote it, to extol its benefits and virtues, and to get "buy-in" from other stakeholders: RNs, patients, administrators, the public, and other physicians. This is not an easy task - but then again, rarely is anything that is worth doing. Hopefully other champions of this and other unglamorous innovations will continue to advocate for mundane but effective interventions to improve communication among members of multidisciplinary healthcare teams, the utilization of evidence-based therapies, and outcomes for patients.



Friday, November 30, 2007

Eltrombopag: At last, data that speak for themselves

In this week's NEJM, two articles describe the results of two phase 2 studies of Eltrombopag, a non-peptide, oral agonist of the thrombopoietin receptor, one in patients with HCV and thrombocytopenia:
http://content.nejm.org/cgi/content/abstract/357/22/2227
and another in patients with ITP:
http://content.nejm.org/cgi/content/abstract/357/22/2237.

I have grown so weary of investigators who must speak for their data - massaging them, doing post-hoc analyses, proffering excuses for them, changing their endpoints and designs to conform to the data, offering partial analyses, ignoring alternative interpretations, stacking the deck in favor of their agent - that I breathe a sigh of relief and contentment when I see data like these which are robust enough to speak for themselves - both in level of statistical significance and effect size which is clearly clinically meaningful.

Of course, we should be clear about what these studies can tell us and what they can't. These are phase 2 trials, and they certainly demonstrated efficacy and a dose response which should satisfy even the harshest critics (e.g., me). However, the time of treatment was relatively short, so we don't know if the response can be sustained over time; and the studies were wildly underpowered to detect side effects at all but the highest frequencies. What untoward effects of stimulating megakaryocytes through this pathway might there be? What about thrombotic complications?
(This is an interesting question also - supposing there are increased thrombotic complications with this agent - how will we know whether this is a direct adverse effect of the agent or whether it results from reversal of protection against thrombosis conferred by ITP itself, if that even exists?)

So, we await the results of larger phase 3 trials of Eltrombopag, hoping that they are well designed and attuned to careful measurement of adverse effects, content for now that a novel and apparently robust agent has been discovered to add to the currently inadequate treatments for cirrhotic thrombocytopenia and for that associated with ITP.

Sunday, November 25, 2007

Are Merck and Schering-Plough "enhancing" the ENHANCE data?

I'm from Missouri, "The Show-Me State," and like many others, I'd like Merck and Schering-Plough to show me the ENHANCE trial results. I'd like them raw and unenhanced, please. This exposé in the NYT last week is priceless:

http://www.nytimes.com/2007/11/21/business/21drug.html?ex=1353387600&en=2d41b634a5c553df&ei=5124&partner=permalink&exprod=permalink

I just learned that Matthew Herper at Forbes reported it first in an equally priceless article:
http://www.forbes.com/home/healthcare/2007/11/19/zetia-vytorin-schering-merck-biz-health-cx_mh_1119schering.html

In a nutshell: Sinvastatin (misspelling intentional) recently lost patent protection. Sinvastatin (Zocor) has been combined with ezetimibe (Zetia) to yield the combination drug Vytorin. This combination holds the promise of rescuing Sinvastatin, a multi-billion dollar drug, from generic death if doctors continue to prescribe it in combination with ezetimibe as a branded product. There's only one problem: unlike sinvastatin, ezetimibe has never been shown to do anything but lower LDL cholesterol, a surrogate endpoint. That's right, just like Torcetrapib, we don't know what ezetimibe does to clinically meaningful outcomes, the ones that patients and doctors care about. (The drug companies care about surrogate outcomes because some of them are sufficient for FDA approval - that subject is a blog post or two in itself.)

So Merck and Schering-Plough designed the ENHANCE trial, which compares 80 mg of simvastatin to 80 mg of simvastatin + 10 mg of ezetimibe on the primary outcomes of carotid and femoral artery intima-media thickness (IMT). Note that we still don't have a clinically meaningful endpoint as a primary outcome, but we're getting there. A trial assessing the combination's effects on meaningful outcomes isn't due to be completed until 2010. Of course a big worry here is that ezetimibe is like torcetrapib and that, in spite of creating a more favorable cholesterol profile, there is no clinically meaningful outcome improvement; i.e., the cholesterol panel is a merely cosmetic result of ezetimibe.

(Regarding the ongoing trials evaluating clinical outcomes: Schering-Plough is up to some tricks there too to rescue Sinvastatin from generic death. The improve-it study [they need a study to "prove-it" before they embark on a mission to "improve-it," don't you think?] design can be seen here:
http://clinicaltrials.gov/ct/show/NCT00202878
In this study, ezetimibe is not being compared to maximum dose sinvastatin, nor is a combination of ezetimibe and sinvastatin being compared to maximum sinvastatin alone. If one of those comparisons were done, important information could be gleaned - doctors would know, for example, if ezetimibe is superior to an alternative (one that is now available in generic, mind you) at maximum dose, or if its addition to maximum dose sinvastatin has any additional yield. But such trials are too risky for the company - they may show that there is no point to prescribing ezetimibe because it is either less potent than max dose sinvastatin, or because it has no incremental value over max dose sinvastatin. So, instead, sinvastatin 40mg + ezetimibe 10mg is being compared to sinvastatin 40mg alone. The main outcomes are hard clinical endpoints - death, stroke, MI, etc. Suppose that this trial is "positive" - that the combination (Vytorin) is superior to sinvastatin 40mg. Should patients now be on Vytorin (sinvastatin 40mg + ezetimibe = patent-protected = expensive) instead of sinvastatin 80 mg (= generic = cheap)? Well, there will be no way to know based on this trial, which is exactly the way Schering-Plough wants it. You see, this trial was designed primarily for the purpose of securing patent protection for simvastatin in the combination pill. Its potential contribution to science and patient care is negligible. So much so, in fact, that I think this trial is unethical. It is unethical because patients volunteer for research mainly out of altruism (although in this case you could argue it's for free drugs). The result of such altruism is expected to be a contribution to science and patient care in the future. But in this case, the science sucks and the main contribution patients are making goes to the coffers of Schering-Plough. Physicians should stop allowing their patients to participate in such trials, so that their altruism is not violated.)

The NYT article makes some suspicious and concerning observations:

  • The data, expected to be available 6 months ago (the trial was completed almost 2 years ago!), will not be released until sometime next year, and then only as a partial analysis of the dataset, not a complete one.
  • The primary endpoint was changed after the trial was concluded! (Originally it was going to be carotid IMT at three places, now only at one place - a change that is rich fodder for conspiracy theorists, regardless of whether an outside consulting agency suggested the change.)
  • Data on femoral artery IMT are not going to be released at all now.

Matthew Herper's Forbes article also notes that the trial was not listed on http://www.clinicaltrials.gov/ until Forbes asked why it was not there!

For the a priori trial design and pre-specified analyses, see pubmed ID # 15846260 at http://www.pubmed.org/ . In that report of the study's design, I do not see mention of monitoring of safety endpoints such as mortality and cardiovascular outcomes. But I presume these are being monitored for safety reasons. And Merck and Schering-Plough, who have claimed that they have not released the IMT data because it's taking longer than anticipated to analyze it, could certainly allay some of our concerns by releasing the data on mortality and safety endpoints, couldn't they? It doesn't take very long to add up deaths.

The problem with pre-specifying all these analyses (carotid IMT at 3 locations and femoral IMT) is that now you have multiple endpoints, and your chances of meeting one of them by chance alone are increased. That's why the primary endpoint holds such a hallowed position in the hierarchy of endpoints - it forces you to call your shot. I liken this to billiards, where it doesn't matter how many balls you put down unless you call them - and none of them counts unless you first put down your pre-specified ball; if you fail that, you lose your turn. In this case, if you check a bunch of IMTs, one of them might be significantly different based on chance alone - so if you change the primary endpoint after the study is done, we will rightly be suspicious that you changed it to the one that you saw was positive. That's bad science, and we and the editors of the journals should not let people get away with it.
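The multiple-endpoints problem is easy to quantify. As a rough sketch (assuming independent endpoints, which IMT measurements certainly are not - correlation among them would blunt the effect somewhat), the chance of at least one falsely "significant" endpoint grows quickly with the number tested:

```python
def familywise_error(k, alpha=0.05):
    """Chance that >= 1 of k independent endpoints is 'significant'
    by chance alone when the treatment truly does nothing."""
    return 1 - (1 - alpha) ** k

# One called shot vs. four endpoints (e.g., carotid IMT at three
# sites plus femoral IMT):
print(round(familywise_error(1), 3))  # 0.05
print(round(familywise_error(4), 3))  # 0.185
```

With four uncorrected looks, the false-positive risk is nearly quadrupled - which is exactly why a post-hoc switch of the primary endpoint invites suspicion.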

I have a proposal: When you register a trial at http://www.clinicaltrials.gov/ , you should have to list a date of data/analysis release and a summary of the data/analyses that will be released. Should you not release the data/analysis by that pre-specified date, your ability to list or publish future trials, and your ability to seek or pursue regulatory approval for that or any other drug you have is suspended until you release the data. Moreover, you are forbidden from releasing the data/analyses prior to the pre-specified date - to prevent shenanigans with pre-specified list dates in the remote future, followed by premature release.

Lung Transplantation: Exempt from the scrutiny of a randomized controlled trial?

In last week's NEJM, Liou et al., in an excellent article, analyzed pediatric lung transplant data and found that there is scant evidence for an improvement in survival associated with this procedure:
http://content.nejm.org/cgi/content/short/357/21/2143.

The authors seem prepared to accept the unavoidable methodological limitations of their analyses and call for a randomized controlled trial (RCT) of pediatric lung transplantation. The editorialist, however, does not share their enthusiasm for an RCT, and appears to take it on faith that the new organ allocation scheme (whereby the sickest children get organs first) will make everything OK:
http://content.nejm.org/cgi/content/short/357/21/2186


True believers die hard. And because of their hardiness, an RCT will be difficult to perform, as many pediatric pulmonologists will be loath to allow their patients to be randomized to no transplant. They have no individual equipoise, even though there appears to be collective equipoise among folks willing to give serious consideration to the available data.

What we have here may be an example of what I will call "action bias" - which is basically the opposite of omission bias. In omission bias, people fail to act even though outcomes from action are superior to those from omission - often as a result of reluctance to risk or cause direct harm even though direct benefits outweigh them in the net. Action bias, as the enantiomer of omission bias, would refer to causing worse outcomes through action because of the great reluctance to stand by helplessly while a patient is dying, even when the only "therapies" we can offer make patients worse off - save for the hope they offer, reason notwithstanding.

Wednesday, November 21, 2007

Torcetrapib Torpedoed: When the hypothesis is immune to the data

I have watched the torcetrapib saga with interest for some time now. This drug, which is not an HMG-CoA-reductase inhibitor, is a powerful raiser of HDL (up to a 100% increase) and also effects modest (20%) decreases in LDL, as reported with great fanfare in the NEJM in 2004: http://content.nejm.org/cgi/content/abstract/350/15/1505.

Such was the enthusiasm for this drug that one editorialist in the same journal cried foul play in reference to Pfizer's intent to study the drug only with Lipitor, suggesting that such a move was intended to soften the blow to this blockbuster (read multibillion dollar) drug when it soon loses patent protection:
http://content.nejm.org/cgi/content/extract/352/25/2573.
The tone is one of serious concern - as this drug was expected to truly be spectacular at BOTH raising HDL and preventing cardiovascular morbidity and mortality - an assumption based on the well-established use of cholesterol lowering as a surrogate endpoint in trials of cardiovascular medications.

(I'm sure the Avandia analogy is banging like a clapper in your skull right now.)

But a perspicacious consumer of the literature on torcetrapib would have noted that there were precious few and conflicting data about its efficacy as an antiatherogenic agent - preclinical data from animal studies were neither consistent nor overwhelming regarding its effects on the vasculature (in spite of the use of VERY high doses of the drug yielding high degrees of CETP inhibition), and studies of patients with CETP mutations were likewise inconsistent regarding the influence of CETP deficiency on the development of cardiovascular disease. Certainly, one would expect a drug with such remarkable HDL-raising abilities to do something substantial and consistent to sensitive measures of atherogenesis in preclinical studies, or to have some consistent and perhaps dramatic effect in patients with mutations leading to high HDL levels. (For a good review of pre-clinical studies, see:
http://atvb.ahajournals.org/cgi/content/full/27/2/257?cookietest=yes and http://www.jlr.org/cgi/content/full/48/6/1263).
But alas, there was not consistent and robust evidence for anything but changes in surrogate markers. Of course this is all hindsight and it's easy for me to pontificate now that the horse was let out of the barn; first by Nissen et al: http://content.nejm.org/cgi/content/abstract/356/13/1304
and then today:
http://content.nejm.org/cgi/content/short/357/21/2109.
(In fact, I would say that the horse is galloping about the barnyard trampling Lipitor's hopes of life after generic death.)


But what interests me now is not that the drug failed, and not that I have a new archetypal drug for failure of surrogate endpoints, but rather how difficult it is for the believers to let go. True believers die hard. How do the editors let a conclusion like this make it to print:


"In conclusion, our study neither validates nor invalidates the hypothesis that raising levels of HDL cholesterol by the inhibition of CETP may be cardioprotective. Thus, the possibility that the inhibition of CETP may be beneficial will remain hypothetical until it is put to the test in a trial with a CETP inhibitor that does not share the off-target pharmacologic effects of torcetrapib. "

Really?

Had the study been positive, would that have been the conclusion? No, the authors would have concluded that the hypothesis was validated.

So if the study is positive, the hypothesis is confirmed; but if it is negative (or shows harm), the hypothesis is immune to the data. The authors should not be allowed to have their cake and eat it too.

The above conclusion is tantamount to saying “our data do not bear on the hypothesis” which is tantamount to saying “our study was badly designed.”

Sure, another agent without that little BP problem may have more salutary effects on mortality, but I'd hate to be the guy trying to get that one through the IRB. Here we have a drug in a class that killed people in the last study. We'd better have more robust pre-clinical data the next time around. The other thing that fascinates me is the grasping for explanations. Here is a drug with ROBUST effects on HDL, and it causes an overall statistically significant increase in mortality. That's one helluva hurdle for the next drug to jump even without the BP problem. Moreover, I refer the reader to the HOT trial:
(http://rss.sciencedirect.com/getMessage?registrationId=GHEIGIEIHNEJOHFJIHEPHIGKGJGPHHJQLZGQJNLMOE).
A 5 mmHg lowering of BP over a 3.8 year period reduced mortality by a mere 0.9% (p=0.32 - not significant). That's a small decrease, and it's not statistically significant. But lowering LDL with simvastatin (the 4S trial: Lancet. 1994 Nov 19;344(8934):1383-9.) for 3.3 years on average led to a 1.7% ARR in mortality (RR 0.70; 95% CI 0.58-0.85; p = 0.0003). So it would appear that, on average, you get more bang for your buck in lowering cholesterol than you do in lowering BP. With an agent that is such a potent raiser of HDL, we would certainly expect at worst a null effect if the BP effect militated against the HDL/LDL effect. I have not done a meta-analysis of trials of BP lowering or cholesterol lowering, but I would be interested in the comparison. For now, I'm substantially convinced that the BP argument is abjectly insufficient to explain the failure of this agent to improve meaningful outcomes.
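The bang-for-buck comparison can be put in number-needed-to-treat terms, using only the ARRs quoted above (a back-of-envelope sketch, not a recomputation from the original trial data):

```python
def nnt(arr):
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1 / arr

hot_arr = 0.009  # HOT: 0.9% mortality ARR from BP lowering (not significant)
s4_arr = 0.017   # 4S: 1.7% mortality ARR from simvastatin (p = 0.0003)

print(round(nnt(hot_arr)))  # ~111 treated for ~4 years to prevent one death
print(round(nnt(s4_arr)))   # ~59
```

On these numbers, cholesterol lowering prevents a death with roughly half as many patient-years of treatment as BP lowering - hence the implausibility of a small BP rise fully explaining torcetrapib's excess mortality.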

So the search will go on for a molecular variation of this agent which doesn't increase BP, with the hopes that another blockbuster cholesterol agent will be discovered. But in all likelihood, this mechanism of altering cholesterol metabolism is fatally flawed and I wouldn't volunteer any of my patients for the next trial. I'd give them 80mg of generic simvastatin or atorvastatin.

Wednesday, November 7, 2007

Plavix Defeated: Prasugrel is superior in a properly designed and executed study

Published early on Sunday, November 5th in the NEJM (http://content.nejm.org/cgi/content/abstract/NEJMoa0706482v1) is a randomized controlled superiority trial comparing clopidogrel to a novel agent - Prasugrel.

Prasugrel was superior to Plavix. And it was superior to a degree similar to the degree to which Plavix is superior to aspirin alone. (See http://content.nejm.org/cgi/content/abstract/352/12/1179
and
http://content.nejm.org/cgi/content/abstract/345/7/494).

Therefore, by precedent, if one accepts the notion that aspirin alone is inferior to aspirin and Plavix because reductions in death and MI on the order of 2-3% are thought to be non-negligible (as I think they should be considered), one must also accept the notion that, given the choice between Plavix and Prasugrel, one should choose the latter.



There is this issue of bleeding. But, eschewing your tendency towards omission bias, as I know you are wont to do, you will agree that even if bleeding is as bad as death or MI (and it is NOT!), the net benefit of Prasugrel remains positive. Bleeding gums with dental flossing is annoying until you compare your life to that of your neighbor in cardiac rehab after his MI.
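To see why the net benefit can survive even a pessimistic weighting of bleeding, consider per-1000-patient arithmetic. The rates below are illustrative placeholders chosen for the sketch, not the published trial figures:

```python
# Hypothetical event rates per patient (illustrative, NOT the trial's data):
ischemic_old, ischemic_new = 0.121, 0.099  # death/MI composite, old vs new drug
bleed_old, bleed_new = 0.018, 0.024        # major bleeding, old vs new drug

per_1000_prevented = (ischemic_old - ischemic_new) * 1000  # events avoided
per_1000_bleeds = (bleed_new - bleed_old) * 1000           # bleeds caused

# Even counting a major bleed as equal in badness to a death or MI
# (which the post argues overstates it), the ledger stays positive:
net = per_1000_prevented - per_1000_bleeds
print(round(per_1000_prevented), round(per_1000_bleeds), round(net))
```

With these hypothetical rates, 22 ischemic events are prevented at the cost of 6 major bleeds per 1000 patients, for a net of 16 - the shape of the trade-off the post is describing.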

There is also the issue of Plavix's patent expiration in a few years. If the medications were equivalently priced, the choice is a no-brainer. If Prasugrel is costly and Plavix is generic, the calculus increases considerably in complexity - both from the perspective of the patient paying out of pocket and that of the policy expert wielding his cost-effectiveness analysis. If my co-pay were the same, I would certainly choose Prasugrel. But if money were tight, I might consider that diet and exercise (which are free, financially at least) may be a more cost-effective personal intervention than the co-pay for an expensive drug.

And what if Plavix at a higher dose is just as effective as Prasugrel? That question will have to be answered by future RCTs, which may be unlikely to happen if Plavix is about to lose patent protection...

Saturday, November 3, 2007

Post-exposure prophylaxis for Hepatitis A: Temptation seizes even the most well-intentioned authors

Victor et al report in the October 25th NEJM (http://content.nejm.org/cgi/content/abstract/357/17/1685) the non-inferiority of Hepatitis A vaccine to immune globulin for post-exposure prophylaxis of hepatitis A. The results are convincing for the non-inferiority hypothesis: symptomatic hepatitis A occurred in 4.4% of subjects who received vaccine versus 3.3% of subjects who received immune globulin (RR 1.35; 95% CI 0.70-2.67).

This is a very well-executed non-inferiority study. If one looks at the methods section, s/he sees that the authors described very well their non-inferiority hypothesis and how it was arrived at. Given the low baseline rate of symptomatic hepatitis A (~3%), a RR of 3.0 is reasonable for non-inferiority, as it implies an absolute difference of only a few percent. Noting the non-significant trend toward less symptomatic hepatitis A in the immune globulin group, however, the authors suggest that this agent may be preferred.

Again, one cannot have his cake and eat it too. One either conducts a non-inferiority trial and accepts non-inferior results as meaning that one agent is non-inferior to the alternative agent, or one conducts a superiority trial to demonstrate that one agent is truly superior. If the point estimates in this trial are close to correct, and immune globulin is 1.1% superior to HAV vaccine, ~7300 patients would be required in EACH group to determine superiority at a power of 90% and an alpha of 0.05. So the current trial is no substitute for a superiority trial with ~7300 patients in each group. Unless such a trial is performed, HAV vaccine and immune globulin are non-inferior to each other for post-exposure prophylaxis to HAV, period.
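For the curious, a sample-size figure of this order can be reproduced with the standard normal-approximation formula for comparing two proportions. This is a sketch only; exact results vary with the formula variant and any continuity correction, which is presumably how one arrives at figures in the ~7300 range:

```python
from math import sqrt

def n_per_group(p1, p2, z_alpha=1.96, z_beta=1.2816):
    """Approximate n per group for a two-sided two-proportion
    superiority test (alpha 0.05, power 90%); no continuity correction."""
    p_bar = (p1 + p2) / 2
    a = z_alpha * sqrt(2 * p_bar * (1 - p_bar))
    b = z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))
    return (a + b) ** 2 / (p1 - p2) ** 2

# Rates quoted above: 4.4% (vaccine) vs 3.3% (immune globulin)
print(round(n_per_group(0.044, 0.033)))  # thousands of patients per group
```

A 1.1% absolute difference between two rare events is simply an enormous amount of information to demand of a trial, which is the post's point.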

To sum up: one either believes that two agents are non-inferior (or more conservatively, equivalent) and he therefore conducts a non-inferiority trial and accepts the results based on the a priori margins (delta) that he himself specified - or he conducts a superiority trial to demonstrate unequivocally that his preferred agent is superior to the comparator agent.

Wednesday, October 31, 2007

Lanthanic Disease increasing because of MRI, reports NEJM

In this week's NEJM (http://content.nejm.org/cgi/content/short/357/18/1821) authors from the Netherlands report a large series of asymptomatic patients who had brain MRI scans. There was a [surprisingly?] high prevalence of abnormalities, particularly [presumed] brain infarcts, the prevalence of which [predictably] increased with age. This is a timely report given the proliferation and technical evolution of advanced imaging techniques, which we can expect to lead to the discovery of an increasing number of "abnormalities" in asymptomatic patients. As in the case of screening for lung cancer (http://jama.ama-assn.org/cgi/content/abstract/297/9/953?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=computed+tomography&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT), the benefits of early detection of an abnormality must be weighed against the cost of the technology and the diagnostic and therapeutic misadventures that result from the pursuit of incidentalomas. The psychological impact on patients of the "knowledge" gained must also be considered. Sometimes ignorance truly is bliss, and therefore 'tis folly to be wise.

Lanthanic disease (with which I am familiar thanks to the sage mentorship of Peter B. Terry, MD, MA at Johns Hopkins Hospital) refers to incidentally discovered abnormalities in asymptomatic individuals. Not surprisingly, it generally is thought to have a better prognosis than disease that is discovered after symptoms develop, presumably because it is discovered at a less advanced stage or is behaving in a less sinister fashion.

The discovery of Lanthanic disease poses challenges for clinicians. Is the natural history of incidentally discovered disease different from what is classically reported? Should pre-emptive interventions be undertaken? What of the elderly female with mental status changes who presents to the ED and in whom a cortical infarct or SDH is discovered on an MRI? Can her current symptoms be attributed to the imaging abnormalities? Clinicians will do well to be aware of the high prevalence of asymptomatic abnormalities on such scans.

The authors' conclusions are perspicacious: "Information on the natural history of these lesions is needed to inform clinical management."

Sunday, October 7, 2007

CROs (Contract Research Organizations) denounced in the NEJM

Last week's NEJM contains a long-overdue exposé on CROs (contract research organizations): http://content.nejm.org/cgi/content/short/357/14/1365 .

These organizations have one purpose: to carry out studies for the pharmaceutical industry in the most expeditious and efficient manner. The problem is that often, it is expeditious and efficient to compromise patient safety.

The article states the issue better than I could hope to. I will only comment that, regardless of who is carrying out the actual clinical trial, industry control of or involvement in the design of the trial is another MAJOR problem that must be addressed if we wish to search for the truth and protect the safety of study participants and subsequent patients in the study of novel pharmaceutical agents.

Friday, September 28, 2007

Badly designed studies - is the FDA to blame?

On the front page of today's NYT (http://www.nytimes.com/2007/09/28/health/policy/28fda.html?ex=1348718400&en=30b7a25ac3835517&ei=5124&partner=permalink&exprod=permalink)
is an article describing a report to be released today by the inspector general of the Department of Health and Human Services that concludes that FDA oversight of clinical trials (mostly industry trials of drugs seeking agency approval) is sorely lacking.

In it, Rosa DeLauro (D-CT) opines that the agency puts industry interests ahead of public health. Oh, really?

Read the posts below and you might be of the same impression. Some of the study designs the FDA approves for testing of agents are just unconscionable. These studies have little or no value for the public health, science, or patients. They serve only as coffer-fillers for the industry. Sadly, they often serve as coffin-fillers when things sometimes go terribly awry. Think Trovan. Rezulin. Propulsid. Vioxx.

The medical community, as consumers of these "data" and the resulting products, has an obligation to its patients which extends beyond those which we see in our offices. We should stop tolerating shenanigans in clinical trials, "me-too" drugs, and corporate profiteering at the expense of patient safety.

Thursday, September 27, 2007

Defaults suggested to improve healthcare outcomes

In today's NEJM (http://content.nejm.org/cgi/content/short/357/13/1340), Halpern, Ubel, and Asch describe the use of defaults to improve utilization of evidence-based practices. This strategy, which requires that we give up our status quo and omission biases (http://www.chestjournal.org/cgi/content/abstract/128/3/1497?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&author1=aberegg&searchid=1&FIRSTINDEX=0&sortspec=relevance&resourcetype=HWCIT ), could prove highly useful - if we have the gumption to follow their good advice and adopt it.

It is known that patients receive only approximately 50% of the evidence-based therapies that are indicated in their care (see McGlynn et al: http://content.nejm.org/cgi/content/abstract/348/26/2635) and that there is a lag of approximately 17 years between substantial evidence of benefit of a therapy and its adoption into routine care.

Given this dismal state of affairs, it seems that the biggest risk is not that a patient is going to receive a default therapy that is harmful, wasteful, or not indicated, but rather that patients are going to continue to receive inadequate and incomplete care. The time to institute defaults into practice is now.

Wednesday, September 26, 2007

Dueling with anidulafungin

Our letter to the editor of the NEJM regarding the anidulafungin article (described in a blog post in July - see below) was published today and can be seen at: http://content.nejm.org/cgi/content/short/357/13/1347 .

To say the least, I am disappointed in the authors' response, particularly with regard to the non-inferiority and superiority issues.

The "two-step" process they describe for sequential determination of non-inferiority followed by superiority is simply the way that a non-inferiority trial is conducted. Superiority is declared in a non-inferiority trial if the CI of the point estimate does not include zero. (See http://jama.ama-assn.org/cgi/content/abstract/295/10/1152?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=piaggio&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT .)

The "debate" among statisticians that they refer to is not really a debate at all, but relates to the distinction between a non-inferiority trial and an equivalence trial - in the latter, the CI of the point estimate must not include negative delta; in this case that would mean the 95% CI would have to fall so far to the left of zero that it did not include minus 20, or the pre-specified margin of non-inferiority. Obviously, the choice of a non-inferiority trial rather than an equivalence trial makes it easier to declare superiority. And this choice can create, as it did in this case, an apparent contradiction that the authors try to gloss over by restating the definition of superiority they chose when designing the trial.

Here is the contradiction, the violation of logic. The drug is declared superior because the 95% CI does not cross zero, but of course, that 95% CI is derived from a point estimate, in this case 15.4%. So, 15.4% is sufficient for the drug to be superior. But if your very design implied that a difference less than 20% is clinically negligible (a requirement for the rational determination of a delta, a prespecified margin of non-inferiority), aren't you obliged by reason and fairness to qualify the declaration of superiority by saying something like "but, we think that a 15.4% difference is clinically negligible?"

There is no rule that states that you must qualify it in this way, but I think it's only fair. Perhaps we, the medical community, should create a rule - namely that you cannot claim superiority in a non-inferiority trial, only in an equivalence trial. This would prevent the industry from getting one of the "free lunches" they currently get when they conduct these trials, and the apparent contradictions that sometimes arise from them.
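The contradiction can be made concrete with a few lines of code, using the 15.4% point estimate and the 20% margin discussed above (the CI endpoints below are illustrative assumptions, not quoted from the paper):

```python
DELTA = 20.0            # pre-specified non-inferiority margin, in percent
point_estimate = 15.4   # observed difference in success rates, in percent
ci_low, ci_high = 3.9, 27.0  # illustrative 95% CI for the difference

# "Superiority" as declared in a non-inferiority trial: the CI for
# the difference excludes zero.
statistically_superior = ci_low > 0

# But the trial's own delta implies that differences under 20% are
# clinically negligible - and 15.4% is under 20%:
clinically_non_negligible = point_estimate >= DELTA

print(statistically_superior, clinically_non_negligible)  # True False
```

Both flags should agree before an unqualified claim of superiority is made; here they do not, which is exactly the "free lunch" the post objects to.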

Tuesday, September 25, 2007

Lilly, Xigris, the XPRESS trial and non-inferiority shenanigans

The problem with non-inferiority trials (in addition to the apparent fact that the pharmaceutical industry uses them to manufacture false realities) is that people don't generally understand them (which is what allows false realities to be manufactured and consumed.) One only need look at the Windish article described below to see that the majority of folks struggle with biomedical statistics.

The XPRESS trial, published in AJRCCM Sept. 1st (http://ajrccm.atsjournals.org/cgi/content/abstract/176/5/483), was mandated by the FDA as a condition of the approval of drotrecogin-alfa for severe sepsis. According to the authors of this study, the basic gist is to see if heparin interferes with the efficacy of Xigris (drotrecogin-alfa) in severe sepsis. The trial is finally published in a peer-reviewed journal, although Lilly has been touting the findings as supportive of Xigris for quite a while already.


The stated hypothesis was that Xigris+placebo is equivalent to Xigris+heparin (LMWH or UFH). [Confirmation of this hypothesis has obvious utility for Lilly and users of this drug because it would allay concerns about coadministration of Xigris and heparinoids, the latter of which are staunchly entrenched in ICU practice.]

The hypothesis was NOT that Xigris+heparin is superior to Xigris alone. If Lilly had thought this, they would have conducted a superiority trial. They did not. Therefore, they must have thought that the prior probability of superiority was low. If the prior probability of a finding (e.g., superiority) is low, we need a strong study result to raise the posterior probability into a reasonable range - that is, a powerful study which produces a very small p-value (e.g., <0.001). Several features of the design and analysis deserve scrutiny:
  • This study used 90% confidence intervals. Not appropriate. This is like using a p-value of 0.10 for significance. I have calculated the more appropriate 95% CIs for the risk difference observed and they are: -0.077 to +0.004.
  • The analysis used was intention to treat. The more conservative method for an equivalence trial is to present the results as "as treated". This could be done at least in addition to the ITT analysis to see if the results are consistent.
  • Here we are doing an equivalence trial with mortality as an outcome. This requires us to choose a "delta" or mortality difference between active treatment and control which is considered to be clinically negligible. Is an increased risk of death of 6.2% negligible? I think not. It is simply not reasonable to conduct a non-inferiority or equivalence trial with mortality as the outcome. Mortality differences would have to be, I would say, less than 1% to convince me that they might be negligible.
  • Because an equivalence design was chosen, the 95% CIs (90% if you're willing to accept that - and I'm not) for the treatment difference would have to fall entirely outside of delta (6.2%) in order for treatment to be declared superior to placebo. Clearly it does not. So any suggestion that Xigris+heparin is superior to Xigris alone based on this study is bunkum. Hogwash. Tripe. Based upon the chosen design, superiority is not even approached. The touted p-value of 0.08 conceals this fact. If they had chosen a superiority design, yes, they would have been close. But they did not.
  • Equivalence was not demonstrated in this trial either, as the 95% (and the 90%) CIs crossed the pre-specified delta. So sorry.
  • The design of this study and its very conception as an equivalence trial with a mortality endpoint is totally flawed. Equivalence was not demonstrated even with a design that would seem to favor its demonstration. (Interestingly, if a non-inferiority design had been chosen, superiority of Xigris+heparin would in fact have been demonstrated! [with 90%, but NOT with 95% CIs].)
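The 90%-vs-95% CI issue in the first bullet can be reconstructed from the interval quoted there (-0.077 to +0.004), assuming a symmetric normal-approximation interval:

```python
z90, z95 = 1.645, 1.96

# Back out the point estimate and standard error from the 95% CI
# quoted above for the mortality risk difference:
lo95, hi95 = -0.077, 0.004
point = (lo95 + hi95) / 2            # roughly -0.037
se = (hi95 - lo95) / (2 * z95)

# The corresponding (narrower) 90% interval:
lo90, hi90 = point - z90 * se, point + z90 * se
print(round(lo90, 3), round(hi90, 3))

# The 90% CI excludes zero while the 95% CI does not - which is why
# reporting 90% intervals is like testing at alpha = 0.10:
print(lo95 < 0 < hi95, lo90 < 0 < hi90)  # True False
```

The same data thus look "significant" at 90% and inconclusive at 95%; the choice of interval width does the rhetorical work.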

The biggest problem I'm going to have is when the Kaplan-Meier curve presented in Figure 3A, with its prominently featured "near miss" p-value of 0.09, is used as ammunition for the argument that Xigris+heparin trended toward superiority in this study. If it had been a superiority trial, I would be more receptive to that trend. But you can't have your cake and eat it too. You either do a superiority trial, or you do an equivalence trial. In this case, the equivalence trial appeared to backfire.

Having said all that, I think we can be reassured that Xigris+heparin is not worse than Xigris+placebo, and the concern that heparin abrogates the efficacy of Xigris should be mostly dispelled. And because almost all critically ill patients are at high risk of DVT/PE, they should all be treated with heparinoids, and the administration of Xigris should not change that practice.

I just think we should stop letting folks get away with these non-inferiority/equivalence shenanigans. In this case, there is little ultimate difference. But in many cases a non-inferiority or equivalence trial such as this will allow the manufacture of a false reality. So I'll call this a case of "attempted manufacture of a false reality".