Monday, December 31, 2007

Is there any place for the f/Vt (the Yang-Tobin index) in today's ICU?

Recently, Tobin and Jubran performed an eloquent re-analysis of the value of “weaning predictor tests” (Crit Care Med 2008; 36: 1). In an accompanying editorial, Dr. MacIntyre does an admirable job of disputing some of the authors’ contentions (Crit Care Med 2008; 36: 329). However, I suspect space limited his ability to defend the recommendations of the guidelines for weaning and discontinuation of ventilatory support.

Tobin and Jubran provide a whirlwind tour of the limitations of meta-analyses. These are important considerations when interpreting the reported results. However, lost in this critique of the presumed approach used by the McMaster group and the joint task force are the limitations of the studies on which the meta-analysis was based. Tobin and Jubran make excellent points about systematic error limiting the internal validity of a study but, interestingly, do not apply such criticism to studies of f/Vt.

For the sake of simplicity, I will limit my discussion to the original report by Yang and Tobin (New Eng J Med 1991; 324: 1445). As a reminder, this was a single-center study which included 36 subjects in a “training set” and 64 subjects in a “prospective-validation set.” Patients were selected if “clinically stable and whose primary physicians considered them ready to undergo a weaning trial.” The authors then looked at a variety of measures to determine predictors of those “able to sustain spontaneous breathing for ≥24 hours after extubation” versus those “in whom mechanical ventilation was reinstituted at the end of a weaning trial or who required reintubation within 24 hours.” While not explicitly stated, it looks as if all the patients who failed a weaning trial had mechanical ventilation reinstituted, rather than failing extubation.

In determining the internal validity of a diagnostic test, one important consideration is that all subjects have the “gold standard” test performed. In the case of “weaning predictor tests,” what is the condition we are trying to diagnose? I would argue that it is the presence of respiratory failure requiring continued ventilatory support. Alternatively, it is the absence of respiratory failure requiring continued ventilatory support. I would also argue that the gold standard test for this condition is the ability to sustain spontaneous breathing. Therefore, to determine the test performance of “weaning predictor tests,” all subjects should undergo a trial of spontaneous breathing regardless of the results of the predictor tests. Now, some may argue that the spontaneous breathing trial (SBT) is, indeed, this gold standard. I would agree if SBTs were perfectly accurate in predicting removal of the endotracheal tube and spontaneous breathing without a ventilator in the room. This is, however, not the case. So, truly, what Yang and Tobin are assessing is the ability of these tests to predict performance on a subsequent SBT.

Dr. MacIntyre argues that “since the outcome of an SBT is the outcome of interest, why waste time and effort trying to predict it?” I would agree with this within limits. Existing literature supports the use of very basic parameters (e.g., hemodynamic stability, low levels of FiO2 and PEEP, etc.) as screens for identifying patients for whom an SBT is appropriate. What remains uncertain is the value of daily SBTs in all patients, regardless of whether they pass this screen. One might hypothesize that simplifying this step even further might provide incremental benefit. Yang and Tobin, however, must consider a failure on an SBT to have deleterious effects. They consider “weaning trials undertaken either prematurely or after an unnecessary delay…equally deleterious to a patient’s health.” There is no reference supporting this assertion. Recent data suggest that inclusion of “weaning predictor tests” does not save patients from harm by avoiding SBTs destined to fail (Tanios et al. Crit Care Med, 2006; 34: 2530). On the contrary, inclusion of the f/Vt as the first of Tobin and Jubran’s “three diagnostic tests in sequence” resulted in prolonged weaning time.

Tobin and Jubran also note the importance of prior probabilities in determining the performance of a diagnostic test. In the original study, Yang and Tobin selected patients who “were considered ready to undergo a weaning trial” by their primary physicians. Other studies have reported that such clinician assessments are very unreliable, with predictive values marginally better than a coin flip (Stroetz et al, Am J Resp Crit Care Med, 1995; 152: 1034). Perhaps the clinicians whose patients were in this study are better than this. However, we are not given strict clinical rules defining candidacy for weaning, but we can probably presume that “readiness” implies at least a 50% prior probability of success. Using Yang and Tobin’s sensitivity of 0.97 and specificity of 0.64 for f/Vt, we can generate a range of posterior probabilities of success on a weaning trial:


As one can see, the results of the f/Vt assessment have a dramatic effect on the posterior probabilities of a successful SBT. However, is there a threshold below which one would advocate not performing an SBT if one’s prior probability is 50% or higher? I doubt it. Even with a pre-test probability of successful SBT of 50% and a failed f/Vt, 1 in 25 patients would actually do well on an SBT. I am not willing to forego an SBT with such data since, in my mind, SBTs are not as dangerous as continued, unneeded mechanical ventilation. I would consider low f/Vt values completely non-informative since they do not instruct me at all regarding the success of extubation – the outcome in which I am most interested.
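For readers who want to reproduce these posterior probabilities, the arithmetic is just Bayes' theorem. A minimal sketch in Python (the function name and framing are mine; "sensitivity" here means P(low f/Vt | SBT success) and "specificity" means P(high f/Vt | SBT failure), per Yang and Tobin's reported 0.97 and 0.64):

```python
def posterior_success(prior, sens=0.97, spec=0.64):
    """Posterior probability of SBT success given the f/Vt result.

    sens = P(low f/Vt | SBT success); spec = P(high f/Vt | SBT failure).
    Returns (P(success | low f/Vt), P(success | high f/Vt)).
    """
    # Low f/Vt ("passing" the test): true positives vs. false positives
    p_pass = sens * prior / (sens * prior + (1 - spec) * (1 - prior))
    # High f/Vt ("failing" the test): false negatives vs. true negatives
    p_fail = (1 - sens) * prior / ((1 - sens) * prior + spec * (1 - prior))
    return p_pass, p_fail

for prior in (0.5, 0.7, 0.9):
    p_pass, p_fail = posterior_success(prior)
    print(f"prior {prior:.0%}: low f/Vt -> {p_pass:.0%}, high f/Vt -> {p_fail:.0%}")
```

With a 50% prior, a high f/Vt leaves a posterior probability of SBT success of roughly 4% (the "1 in 25" above), while a low f/Vt raises it only to about 73%.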

Other studies have used f/Vt to predict extubation failure (rather than SBT failure) and these are nicely outlined in a recent summary by Tobin and Jubran (Intensive Care Medicine 2006; 32: 2002). Even if we ignore the different cut-points of f/Vt and take the most optimistic specificities (96% for f/Vt <100; Uusaro et al, Crit Care Med 2000; 28: 2313) and sensitivities (79% for f/Vt <88; Zeggwagh et al., Intens Care Med 1999; 25: 1077), the f/Vt may not help much. As with the prior table, using prior probabilities and the results of the f/Vt testing, we can generate posterior probabilities of successful extubation:


As with the predictions of SBT failure, a high f/Vt greatly lowers the posterior probability of successful extubation. However, one must consider the cutoff in posterior probability below which one would not even attempt an SBT. Even with a 1% posterior probability, 1 in 100 patients will be successfully extubated. This is the rate when the prior probability of successful extubation is only 20% AND the patient has a high f/Vt! What rate of failed extubation is acceptable or, even, preferable? Five percent? Ten percent? If one never reintubates a patient, it is more likely that he is waiting “too long” to extubate than that he possesses perfect discrimination. Furthermore, what is the likelihood that patients with poor performance on an f/Vt will do well on an SBT? I suspect such failure will prohibit extubation and that high f/Vt values will only spare the effort of performing the SBT. Does the incremental effort of performing SBTs on those destined to fail really exceed the added complexity of using the f/Vt to decide whether a patient should receive an SBT at all? Presuming that we require an SBT prior to extubation, low f/Vt values remain non-informative. One could argue that with a posterior probability of >95%, we should simply extubate the patient, but I doubt many would take this approach, except in those intubated for reasons unrelated to respiratory problems (e.g., mechanical ventilation for surgery or drug overdose).

Drs. Tobin, Jubran, and Marini (who writes an additional accompanying editorial, Crit Care Med 2008; 36: 328) are master clinicians and physiologists. When they are at the bedside, I do not doubt that their “clinical experience and firm grasp of pathophysiology” (as Dr. Marini mentions) can match or even exceed the performance of protocolized care. Indeed, expert clinicians at Johns Hopkins have demonstrated that protocolized care did not improve the performance of the clinical team (Krishnan et al., Am J Resp Crit Care Med 2004; 169: 673). I have heard Dr. Tobin argue that this indicates that protocols do not provide benefit for assessment of liberation (American Thoracic Society, 2007). I doubt that the authors would strictly agree with his interpretation of their data, since several of them note in a separate publication that “the regularity of steps enforced by a protocol as executed by nurses or therapists trumps the rarefied individual decisions made sporadically by busy physicians” (Fessler and Brower, Crit Care Med 2005; 33: S224). What happens to the first patient who is admitted after Dr. Tobin leaves service? What if the physician assuming the care of his patients is more interested in sepsis than ventilatory physiology? What about the patient admitted to a small hospital in suburban Chicago rather than one of the Loyola hospitals? Protocols are not intended to set the ceiling on clinical decision-making and performance, but they can raise the floor.

Friday, December 28, 2007

Results of the Poll - Large Trials are preferred

The purpose of the poll that has been running alongside the posts on this blog for some months now was to determine whether physicians/researchers (a convenience sample of folks visiting this site) are intuitively Bayesian when they think about clinical trials.

To summarize the results, 43/68 respondents (63%) reported that they preferred the larger 30-center RCT. This differs significantly from the hypothesized value of 50% (p=0.032).

From a purely mathematical and Bayesian perspective, physicians should be indifferent to the choice between a large(r) 30-center RCT involving 2100 patients showing a 5% mortality reduction at p=0.0005, and 3 small(er) 10-center RCTs involving 700 patients each showing the same 5% mortality reduction at p=0.04. In essence, unless respondents were reading between the lines somewhere, the choice is between two options with identical posterior probabilities. That is, if the three smaller trials are combined, they are equivalent to the larger trial, and the meta-analytic p-value is 0.0005. Looked at from a different perspective, the large 30-center trial could have been analyzed as 3 10-center trials based on the region of the country in which the centers were located, or on any other arbitrary classification of centers.
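The equivalence can be checked with a back-of-the-envelope fixed-effect calculation. A sketch in Python (standard library only), under the assumptions implicit in the thought experiment above: the three trials have identical effects and normally distributed test statistics, so pooling triples the information and multiplies z by √3:

```python
from math import sqrt
from statistics import NormalDist

norm = NormalDist()

# Each small trial: two-sided p = 0.04 corresponds to z of about 2.05
z_small = norm.inv_cdf(1 - 0.04 / 2)

# Three identical trials, fixed-effect (inverse-variance) pooling:
# the variance of the pooled estimate shrinks 3-fold, so z grows by sqrt(3)
z_pooled = z_small * sqrt(3)
p_pooled = 2 * (1 - norm.cdf(z_pooled))

print(f"z per trial = {z_small:.2f}, pooled z = {z_pooled:.2f}, pooled p = {p_pooled:.4f}")
```

The pooled p comes out near 0.0004, essentially the 0.0005 quoted for the single 2100-patient trial.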

Why this result? I obviously can't say based on this simple poll, but here are some guesses: 1.) People are more comfortable with larger multicenter studies, perhaps because they are accustomed to seeing cardiology mega-trials in journals such as NEJM; or 2.) The p-value of 0.04 associated with the small(er) studies seems "marginal," the combination of the three studies is non-intuitive, and/or it is not obvious that the combined p-value will be the same. However, I have some (currently unpublished) data which show that [paradoxically] for the same study, physicians are more willing to adopt a therapy with a higher rather than a lower p-value.
Further research is obviously needed to determine how physicians respond to evidence from clinical trials and whether or not their responses are normative. In this poll, it appears that they were not.

Friday, December 21, 2007

Patients and Physicians should BOYCOTT Zetia and Vytorin: Forcing MRK and SGP to come clean with the data

You wouldn't believe it - or would you? The NYT reports today that SGP has data from a number of - go figure - unpublished studies that may reveal important and previously undisclosed risks of liver toxicity with Zetia and Vytorin: http://www.nytimes.com/2007/12/21/business/21drug.html Unproven benefits, undisclosed risks? If I were a patient, I would want to be taken off this drug and put on atorvastatin, simvastatin, or a similar agent. If the medical community would get on board and take patients off this unproven and perhaps risky drug, that might at least force the companies to come clean with their data.

In fact, I'm astonished at the medical community's reluctance to challenge the status quo which is represented by widespread use of drugs such as this and Avandia, for which there is no proof of efficacy save for surrogate endpoints, and for which there is evidence of harm. These drugs are not good bets unless alternatives do not exist, and of course they do. I am astonished in my pulmonary clinic to see many patients referred for dyspnea, with a history of heart disease and/or cardiomyopathy, who remain on Avandia. Apparently, protean dyspnea is not a sufficient wake-up call to change the diabetes management of a patient who is receiving an agent of unproven efficacy and which is known to cause fluid retention and CHF. This just goes to show how effective pharmaceutical marketing campaigns are, how out-of-control things have become, and how non-normative physicians' approach to the data is.

The profit motive impels them forward. The evidence does not support the agents proffered. Evidence of harm is available. Alternatives exist. Why aren't physicians taking patients off drugs such as vioxx, avandia, zetia, and vytorin, and using alternative agents until the confusion is resolved?

Sunday, December 16, 2007

Dexmedetomidine: a New Standard in Critical Care Sedation?

In last week's JAMA, Wes Ely's group at Vanderbilt reports the results of a trial comparing dexmedetomidine to lorazepam for the sedation of critically ill patients:
http://jama.ama-assn.org/cgi/content/short/298/22/2644
This group, along with others, has taken the lead as innovators in research related to sedation and delirium in the ICU (in addition to other topics), and this is a very important article in this area. In short, the authors found that, when compared to lorazepam, dexmed led to better targeted sedation and less time in coma, with a trend toward improved mortality.

One of the most impressive things about this study is stated as a post-script:

“This investigator-initiated study was aided by receipt of study drug and an unrestricted research grant for laboratory and investigational studies from Hospira Inc….Hospira Inc had no role in the design or conduct of the study; in the collection, analysis, and interpretation of the data; in the preparation, review, or approval of this manuscript; or in the publication strategy of the results of this study. These data are not being used to generate FDA label changes for this medication, but rather to advance the science of sedation, analgesia, and brain dysfunction in critically ill patients….”

Investigator-initiated....investigator-controlled design and publication, investigators as stewards of the data.....music to my ears.


But is dexmed going to be the new standard in critical care sedation? It appears to be too early to answer that question. I have the following observations:
• This study used higher doses of dexmed for longer durations than the product labeling advises. Should practitioners use the doses studied or the approved doses? In my (admittedly very small) experience with this drug at the labeled doses, it is difficult to use: it does not achieve adequate sedation in the most agitated patients - those receiving the highest doses of benzos and narcotics, in whom lightening of sedation is assigned the highest priority.
• The most impressive primary endpoint achieved by the drug was days alive without delirium or coma, but most of the effect was driven by coma-free days. Perhaps this is not surprising given two aspects of the study's design:
1. Patients did not have daily interruptions of sedative infusions, a difficult-to-employ but evidence-based practice to reduce oversedation and coma
2. Lorazepam was titrated upward without boluses between dose increases. Given the long half-life of this drug, we would expect overshoot by the time steady-state pharmacokinetics were achieved.
So is it surprising that patients in the dexmed group had more coma-free days?
• We are not told about the tracheostomy practices in this study. Getting a trach earlier may lead to both sedation reduction and improved mortality (See http://ccmjournal.org/pt/re/ccm/abstract.00003246-200408000-00009.htm;jsessionid=HlfG93Qfvb113sCpnD10053YzKqMB3zFfDTdbGvgCQPdlMZ3S8kV!1219373867!181195629!8091!-1?index=1&database=ppvovft&results=1&count=10&searchid=1&nav=search).
• We are not told the proportion of patients in each group who had withdrawal of support. Anecdotally, I have found that families have greater willingness to withdraw support for patients who are comatose, regardless of other underlying physiological variables or organ failures. Can the trend toward improved mortality with dexmed be attributed to differences in families' willingness to withdraw support?
• In spite of substantial data that delirium is associated with mortality (http://jama.ama-assn.org/cgi/content/abstract/291/14/1753?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=delirium&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT ), and these data showing a TREND towards more delirium-free days with dexmed, the hypothesis that dexmed improves mortality via improvement in delirium is one that can only be tested by a study with mortality as a primary endpoint.
The data from the current study are compelling, and Ely and investigators are to be commended for the important research they are doing (this article is only the tip of that iceberg of research). However, it remains to be seen if one sedative compared to others can lead to improvements in mortality or more rapid recovery from critical illness, or whether limitation of sedation in general with whatever agent is used is primarily responsible for improved outcomes.


Wednesday, December 12, 2007

ENHANCE trial faces congressional scrutiny

Merck and Schering-Plough had better get their houses in order. Congress is on the case:

http://www.nytimes.com/2007/12/12/business/12zetia.html?_r=1&oref=slogin

Apparently, representatives of the US populace, which pays for a substantial portion of the Zetia sold, are not pleased by the delays in release of the data from the ENHANCE trial. The chicanery is going to be harder to sustain.

I certainly hope for everyone's sake (especially patients') that there is no foul play afoot with this trial or ezetimibe - Merck can hardly withstand another round of Vioxx-type suits, can it? Or can it? Merck's stock price (MRK: http://finance.yahoo.com/q/bc?s=MRK&t=5y&l=on&z=m&q=l&c=) is at the same level as it was in January 2004. Some high price to pay for obfuscating the truth, concealing evidence of harm, bilking insurers and the American public and government out of billions of $$$ for a prescription painkiller when equivalent non-branded products were available, and causing thousands of heart attacks in the process....

The consequences should be harsher the second time around.....


Tuesday, December 11, 2007

Pronovost, Checklists, and Putting Evidence into Practice

In this week's New Yorker:
http://www.newyorker.com/reporting/2007/12/10/071210fa_fact_gawande
Atul Gawande, a popular physician writer who may be familiar to readers from his columns in the NEJM and the NYT, chronicles the herculean efforts by Peter Pronovost, MD, PhD at Johns Hopkins Hospital to make sure that the mundane but effective does not always take a back seat to the heroic but largely symbolic efforts of critical care doctors.

One of my chronic laments is that evidence is not utilized and that physician efforts do not appear to be rationally apportioned to what counts most. There appears to be too much emphasis on developing evidence and too little on making sure it is expeditiously adopted and employed; too much emphasis on diagnosis, too little on evidence-based treatment; too much focus on the "rule of rescue," too little on the "power of prevention." Pronovost has demonstrated that simple checklists can have bountiful yields in terms of teamwork, prevention, and delivery of effective care - so why aren't we all familiar with his work? Why doesn't every ICU use his checklists?

My own experience at the Ohio State University Medical Center is emblematic of the challenges of getting an unglamorous thing like a checklist accepted as a routine part of clinical practice in the ICU. In spite of the evidence supporting it, its obvious rational basis, and widespread recognition that we often miss things if we aren't rigorous and systematic, adopting an adapted version of Pronovost's checklist at OSUMC has proven challenging (albeit possible). As local champion of a checklist that I largely plagiarized from Pronovost's original, I have been told by colleagues that it is "cumbersome," by RNs that it is "superfluous," by fellows that it is a "pain," and by people of all disciplines that they "don't see the point." I have been frustrated to find that when I do not personally ensure that it is being done daily (by walking through the ICU and checking), it is abandoned as yet another "chore," another piece of bureaucratic red tape that hampers the delivery of more important "patient-centered" care - such as procedures and ordering of tests.

All of these criticisms are delivered despite my admonition that the checklist, like a fishing expedition, is not expected to yield a "catch" on every cast, but that if it is cast enough, things will be caught that would otherwise be missed; despite my reminder that it is an opportunity to improve our communication with our multidisciplinary ICU team (and to learn the names of its constituents); and despite producing evidence of its benefit and of underutilization of the evidence-based therapies the checklist reminds practitioners to consider. If I were not personally committed to making sure that the checklist is photocopied, available, and consistently filled out (by our fellows, who deserve great credit for doing so), it would quickly fall by the wayside, another relic of a well-meaning effort to encourage conscientiousness through bureaucracy and busy-work (think HIPAA here - the intent is noble, but the practical result an abject failure).

So what is the solution? How are we to increase acceptance of Pronovost's checklist and recognition of its utility and necessity? It could be through fiat, through education, or through a variety of other means. But it appears to have survived at Hopkins because of Pronovost's ongoing efforts to promote it, extol its benefits and virtues, and get "buy-in" from other stakeholders: RNs, patients, administrators, the public, and other physicians. This is not an easy task - but then again, little that is worthwhile is. Hopefully other champions of this and other unglamorous innovations will continue to advocate for mundane but effective interventions to improve communication among members of multidisciplinary healthcare teams, the utilization of evidence-based therapies, and outcomes for patients.



Friday, November 30, 2007

Eltrombopag: At last, data that speak for themselves

In this week's NEJM, two articles describe the results of two phase 2 studies of eltrombopag, a non-peptide, oral agonist of the thrombopoietin receptor, one in patients with HCV and thrombocytopenia:
http://content.nejm.org/cgi/content/abstract/357/22/2227
and another in patients with ITP:
http://content.nejm.org/cgi/content/abstract/357/22/2237.

I have grown so weary of investigators who must speak for their data - massaging them, doing post-hoc analyses, proffering excuses for them, changing their endpoints and designs to conform to the data, offering partial analyses, ignoring alternative interpretations, stacking the deck in favor of their agent - that I breathe a sigh of relief and contentment when I see data like these which are robust enough to speak for themselves - both in level of statistical significance and effect size which is clearly clinically meaningful.

Of course, we should be clear about what these studies can tell us and what they can't. These are phase 2 trials, and they certainly demonstrated efficacy and a dose response that should satisfy even the harshest critics (e.g., me). However, the duration of treatment was relatively short, so we don't know if the response can be sustained over time; and the studies were wildly underpowered to detect side effects at all but the highest frequencies. What untoward effects of stimulating megakaryocytes through this pathway might there be? What about thrombotic complications?
(This is an interesting question also - supposing there are increased thrombotic complications with this agent - how will we know whether this is a direct adverse effect of the agent or whether it results from reversal of protection against thrombosis conferred by ITP itself, if that even exists?)

So, we await the results of larger phase 3 trials of eltrombopag, hoping that they are well designed and attuned to careful measurement of adverse effects, content for now that a novel and apparently robust agent has been discovered to add to the currently inadequate treatments for cirrhotic thrombocytopenia and that associated with ITP.

Sunday, November 25, 2007

Are Merck and Schering-Plough "enhancing" the ENHANCE data?

I'm from Missouri, "The Show-Me State," and like many others, I'd like Merck and Schering-Plough to show me the ENHANCE trial results. I'd like them raw and unenhanced, please. This expose in the NYT last week is priceless:

http://www.nytimes.com/2007/11/21/business/21drug.html?ex=1353387600&en=2d41b634a5c553df&ei=5124&partner=permalink&exprod=permalink

I just learned that Matthew Herper at Forbes reported it first in an equally priceless article:
http://www.forbes.com/home/healthcare/2007/11/19/zetia-vytorin-schering-merck-biz-health-cx_mh_1119schering.html

In a nutshell: Sinvastatin (misspelling intentional) recently lost patent protection. Sinvastatin (Zocor) has been combined with ezetimibe (Zetia) to yield combination drug Vytorin. This combination holds the promise of rescuing Sinvastatin, a multi-billion dollar drug, from generic death if doctors continue to prescribe it in combination with ezetimibe as a branded product. There's only one problem: unlike sinvastatin, ezetimibe has never been shown to do anything but lower LDL cholesterol, a surrogate endpoint. That's right, just like Torcetrapib, we don't know what ezetimibe does to clinically meaningful outcomes, the ones that patients and doctors care about. (The drug companies care about surrogate outcomes because some of them are sufficient for FDA approval - that subject is a blog post or two in itself.)

So Merck and Schering-Plough designed the ENHANCE trial, which compares 80 mg of simvastatin to 80 mg of simvastatin + 10 mg of ezetimibe on the primary outcomes of carotid and femoral artery intima-media thickness (IMT). Note that we still don't have a clinically meaningful endpoint as a primary outcome, but we're getting there. A trial assessing the combination's effects on meaningful outcomes isn't due to be completed until 2010. Of course a big worry here is that ezetimibe is like torcetrapib and that in spite of creating a more favorable cholesterol profile, there is no clinically meaningful outcome improvement; i.e., the cholesterol panel is a merely cosmetic result of ezetimibe.

(Regarding the ongoing trials evaluating clinical outcomes: Schering-Plough is up to some tricks there too to rescue Sinvastatin from generic death. The IMPROVE-IT study [they need a study to "prove-it" before they embark on a mission to "improve-it," don't you think?] design can be seen here:
http://clinicaltrials.gov/ct/show/NCT00202878
In this study, ezetimibe is not being compared to maximum-dose sinvastatin, nor is a combination of ezetimibe and sinvastatin being compared to maximum-dose sinvastatin alone. If one of those comparisons were done, important information could be gleaned - doctors would know, for example, whether ezetimibe is superior to an alternative (one that is now available in generic form, mind you) at maximum dose, or whether its addition to maximum-dose sinvastatin has any additional yield. But such trials are too risky for the company - they might show that there is no point in prescribing ezetimibe because it is either less potent than max-dose sinvastatin or has no incremental value over it. So, instead, sinvastatin 40mg + ezetimibe 10mg is being compared to sinvastatin 40mg alone. The main outcomes are hard clinical endpoints - death, stroke, MI, etc. Suppose this trial is "positive" - that the combination (Vytorin) is superior to sinvastatin 40mg. Should patients then be on Vytorin (sinvastatin 40mg + ezetimibe = patent-protected = expensive) instead of sinvastatin 80mg (= generic = cheap)? Well, there will be no way to know based on this trial, which is exactly the way Schering-Plough wants it. You see, this trial was designed primarily for the purpose of securing patent protection for sinvastatin in the combination pill. Its potential contribution to science and patient care is negligible. So much so, in fact, that I think this trial is unethical. It is unethical because patients volunteer for research mainly out of altruism (although in this case you could argue it's for free drugs). The result of such altruism is expected to be a contribution to science and patient care in the future. But in this case, the science sucks, and the main contribution patients are making goes to the coffers of Schering-Plough. Physicians should stop allowing their patients to participate in such trials, so that their altruism is not violated.)

The NYT article makes some suspicious and concerning observations:

  • The data, expected to be available 6 months ago (the trial was completed almost 2 years ago!), will not be released until some time next year, and then only as a partial analysis of the dataset, not a complete one.
  • The primary endpoint was changed after the trial was concluded! (Originally it was to be carotid IMT at three places, now only at one - a change that is rich fodder for conspiracy theorists, regardless of whether an outside consulting agency suggested it.)
  • Data on femoral artery IMT are not going to be released at all now

Matthew Herper's Forbes article also notes that the trial was not listed on http://www.clinicaltrials.gov/ until Forbes asked why it was not there!

For the a priori trial design and pre-specified analyses, see pubmed ID # 15846260 at http://www.pubmed.org/ . In that report of the study's design, I do not see mention of monitoring of safety endpoints such as mortality and cardiovascular outcomes. But I presume these are being monitored for safety reasons. And Merck and Schering-Plough, who have claimed that they have not released the IMT data because it's taking longer than anticipated to analyze it, could certainly allay some of our concerns by releasing the data on mortality and safety endpoints, couldn't they? It doesn't take very long to add up deaths.

The problem with pre-specifying all these analyses (carotid IMT at 3 locations and femoral IMT) is that you now have multiple endpoints, and your chances of meeting one of them by chance alone are increased. That's why the primary endpoint holds such a hallowed position in the hierarchy of endpoints - it forces you to call your shot. I liken this to billiards, where it doesn't matter how many balls you put down unless you call them - and none of them counts unless you first put down your pre-specified ball; if you fail that, you lose your turn. In this case, if you check a bunch of IMTs, one of them might be significantly different based on chance alone - so if you change the primary endpoint after the study is done, we will rightly be suspicious that you changed it to the one you saw was positive. That's bad science, and we and the editors of the journals should not let people get away with it.
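The inflation from multiple endpoints is easy to quantify: with four independent endpoints each tested at alpha = 0.05, the chance that at least one comes up "significant" under the null is 1 - 0.95^4, about 18.5%. A toy simulation in Python (the count of four endpoints and their independence are illustrative assumptions, not facts about ENHANCE):

```python
import random

random.seed(1)

ALPHA = 0.05
N_ENDPOINTS = 4          # e.g., carotid IMT at 3 sites plus femoral IMT
N_TRIALS = 100_000       # simulated null trials (no true effect anywhere)

false_hits = 0
for _ in range(N_TRIALS):
    # Under the null, each endpoint's p-value is uniform on [0, 1],
    # so "p < ALPHA" occurs with probability ALPHA for each endpoint
    if any(random.random() < ALPHA for _ in range(N_ENDPOINTS)):
        false_hits += 1

print(f"P(at least one 'positive' endpoint by chance) ~ {false_hits / N_TRIALS:.3f}")
# Analytic value: 1 - (1 - ALPHA) ** N_ENDPOINTS, about 0.185
```

Nearly one in five "all-null" trials would hand the sponsor a positive-looking endpoint to promote to primary status after the fact.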

I have a proposal: When you register a trial at http://www.clinicaltrials.gov/ , you should have to list a date of data/analysis release and a summary of the data/analyses that will be released. Should you not release the data/analysis by that pre-specified date, your ability to list or publish future trials, and your ability to seek or pursue regulatory approval for that or any other drug you have is suspended until you release the data. Moreover, you are forbidden from releasing the data/analyses prior to the pre-specified date - to prevent shenanigans with pre-specified list dates in the remote future, followed by premature release.

Lung Transplantation: Exempt from the scrutiny of a randomized controlled trial?

In last week's NEJM, Liou et al, in an excellent analysis of pediatric lung transplant data, found that there is scant evidence for an improvement in survival associated with this procedure:
http://content.nejm.org/cgi/content/short/357/21/2143.

The authors seem prepared to accept the unavoidable methodological limitations of their analyses and call for a randomized controlled trial (RCT) of pediatric lung transplantation. The editorialist, however, does not share their enthusiasm for an RCT, and appears to take it on faith that the new organ allocation scheme (whereby the sickest children get organs first) will make everything OK:
http://content.nejm.org/cgi/content/short/357/21/2186


True believers die hard. And because of their hardiness, an RCT will be difficult to perform, as many pediatric pulmonologists will be loath to allow their patients to be randomized to no transplant. They have no individual equipoise, even though there appears to be collective equipoise among folks willing to give serious consideration to the available data.

What we have here may be an example of what I will call "action bias" - basically the opposite of omission bias. In omission bias, people fail to act even though the outcomes of action are superior to those of omission, often out of a reluctance to risk causing direct harm even when the direct benefits outweigh that harm in the net. Action bias, as the enantiomer of omission bias, refers to causing worse outcomes through action because of the great reluctance to stand by helplessly while a patient is dying - even when the only "therapies" we can offer make patients worse off, save for the hope they offer, reason notwithstanding.

Wednesday, November 21, 2007

Torcetrapib Torpedoed: When the hypothesis is immune to the data

I have watched the torcetrapib saga with interest for some time now. This drug is a powerful non-HMG-CoA-reductase-inhibitor raiser of HDL (increases of up to 100%) that also effects modest (20%) decreases in LDL, as reported with great fanfare in the NEJM in 2004: http://content.nejm.org/cgi/content/abstract/350/15/1505.

Such was the enthusiasm for this drug that one editorialist in the same journal cried foul play in reference to Pfizer's intent to study the drug only with Lipitor, suggesting that such a move was intended to soften the blow to this blockbuster (read multibillion dollar) drug when it soon loses patent protection:
http://content.nejm.org/cgi/content/extract/352/25/2573.
The tone is one of serious concern - as this drug was expected to truly be spectacular at BOTH raising HDL and preventing cardiovascular morbidity and mortality - an assumption based on the well-established use of cholesterol lowering as a surrogate endpoint in trials of cardiovascular medications.

(I'm sure the Avandia analogy is banging like a clapper in your skull right now.)

But a perspicacious consumer of the literature on torcetrapib would have noted that there were precious few and conflicting data about its efficacy as an antiatherogenic agent. Preclinical data from animal studies were neither consistent nor overwhelming regarding its effects on the vasculature (in spite of the use of VERY high doses of the drug yielding high degrees of CETP inhibition), and studies of patients with CETP mutations were likewise inconsistent regarding those mutations' influence on the development of cardiovascular disease. Certainly, one would expect a drug with such remarkable HDL-raising abilities to do something substantial and consistent to sensitive measures of atherogenesis in preclinical studies, or to have some consistent and perhaps dramatic effect in patients with mutations leading to high HDL levels. (For a good review of pre-clinical studies, see:
http://atvb.ahajournals.org/cgi/content/full/27/2/257?cookietest=yes and http://www.jlr.org/cgi/content/full/48/6/1263).
But alas, there was no consistent and robust evidence for anything but changes in surrogate markers. Of course, this is all hindsight, and it's easy for me to pontificate now that the horse has been let out of the barn; first by Nissen et al: http://content.nejm.org/cgi/content/abstract/356/13/1304
and then today:
http://content.nejm.org/cgi/content/short/357/21/2109.
(In fact, I would say that the horse is galloping about the barnyard, trampling Lipitor's hopes of life after generic death.)


But what interests me now is not that the drug failed, and not that I have a new archetypal drug for failure of surrogate endpoints, but rather how difficult it is for the believers to let go. True believers die hard. How do the editors let a conclusion like this make it to print:


"In conclusion, our study neither validates nor invalidates the hypothesis that raising levels of HDL cholesterol by the inhibition of CETP may be cardioprotective. Thus, the possibility that the inhibition of CETP may be beneficial will remain hypothetical until it is put to the test in a trial with a CETP inhibitor that does not share the off-target pharmacologic effects of torcetrapib. "

Really?

Had the study been positive, would that have been the conclusion? No, the authors would have concluded that the hypothesis was validated.

So if the study is positive, the hypothesis is confirmed; but if it is negative (or shows harm), the hypothesis is immune to the data. The authors should not be allowed to have their cake and eat it too.

The above conclusion is tantamount to saying “our data do not bear on the hypothesis” which is tantamount to saying “our study was badly designed.”

Sure, another agent without that little BP problem may have more salutary effects on mortality, but I'd hate to be the guy trying to get that one through the IRB. Here we have a drug in a class that killed people in the last study. We'd better have more robust pre-clinical data the next time around. The other thing that fascinates me is the grasping for explanations. Here is a drug with ROBUST effects on HDL, and it causes an overall statistically significant increase in mortality. That's one helluva hurdle for the next drug to jump, even without the BP problem. Moreover, I refer the reader to the HOT trial:
(http://rss.sciencedirect.com/getMessage?registrationId=GHEIGIEIHNEJOHFJIHEPHIGKGJGPHHJQLZGQJNLMOE).
A 5 mmHg lowering of BP over a 3.8-year period reduced mortality by a mere 0.9% (p=0.32 - not significant). That's a small reduction, and it's not statistically significant. But lowering LDL with simvastatin (the 4S trial: Lancet. 1994 Nov 19;344(8934):1383-9.) for 3.3 years on average led to a 1.7% ARR in mortality (RR 0.70; 95% CI 0.58-0.85; p = 0.0003). So it would appear that, on average, you get more bang for your buck lowering cholesterol than lowering BP. With an agent that is such a potent raiser of HDL, we would certainly expect at worst a null effect if the BP effect militated against the HDL/LDL effect. I have not done a meta-analysis of trials of BP lowering or cholesterol lowering, but I would be interested in the comparison. For now, I'm substantially convinced that the BP argument is abjectly insufficient to explain the failure of this agent to improve meaningful outcomes.
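The bang-for-the-buck comparison can be restated in number-needed-to-treat terms, using the two ARR figures quoted above (a crude sketch - it ignores the trials' differing durations and populations):

```python
# Convert an absolute risk reduction (ARR) into a number needed to treat (NNT).
def nnt(arr: float) -> float:
    return 1.0 / arr

hot_arr = 0.009  # HOT: 0.9% mortality ARR from BP lowering (not significant)
s4_arr = 0.017   # 4S: 1.7% mortality ARR from simvastatin

print(round(nnt(hot_arr)))  # ~111 patients treated to prevent one death
print(round(nnt(s4_arr)))   # ~59 patients treated to prevent one death
```

By this rough measure, the cholesterol intervention prevents one death with roughly half as many patients treated - which is the point about relative bang for the buck.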

So the search will go on for a molecular variation of this agent which doesn't increase BP, with the hopes that another blockbuster cholesterol agent will be discovered. But in all likelihood, this mechanism of altering cholesterol metabolism is fatally flawed and I wouldn't volunteer any of my patients for the next trial. I'd give them 80mg of generic simvastatin or atorvastatin.

Wednesday, November 7, 2007

Plavix Defeated: Prasugrel is superior in a properly designed and executed study

Published early on Sunday, November 5th in the NEJM (http://content.nejm.org/cgi/content/abstract/NEJMoa0706482v1) is a randomized controlled superiority trial comparing clopidogrel to a novel agent - Prasugrel.

Prasugrel was superior to Plavix. And it was superior to a degree similar to the degree to which Plavix is superior to aspirin alone. (See http://content.nejm.org/cgi/content/abstract/352/12/1179
and
http://content.nejm.org/cgi/content/abstract/345/7/494).

So, by precedent: if one accepts that aspirin alone is inferior to aspirin plus Plavix because reductions in death and MI on the order of 2-3% are non-negligible (as I think they should be considered), then one must also accept that, given the choice between Plavix and Prasugrel, one should choose the latter.



There is this issue of bleeding. But, eschewing your tendency toward omission bias, as I know you are wont to do, you will agree that even if bleeding were as bad as death or MI (and it is NOT!), the net benefit of Prasugrel remains positive. Bleeding gums with dental flossing is annoying until you compare your life to that of your neighbor in cardiac rehab after his MI.

There is also the issue of Plavix's patent expiration in a few years. If the medications were equivalently priced, the choice is a no-brainer. If Prasugrel is costly and Plavix is generic, the calculus becomes considerably more complex - both for the patient paying out of pocket and for the policy expert wielding his cost-effectiveness analysis. If my co-pay were the same, I would certainly choose Prasugrel. But if money were tight, I might consider that diet and exercise (which are free, financially at least) may be a more cost-effective personal intervention than the co-pay for an expensive drug.

And what if Plavix at a higher dose is just as effective as Prasugrel? That question will have to be answered by future RCTs, which may be unlikely to happen if Plavix is about to lose patent protection...

Saturday, November 3, 2007

Post-exposure prophylaxis for Hepatitis A: Temptation seizes even the most well-intentioned authors

Victor et al report in the October 25th NEJM (http://content.nejm.org/cgi/content/abstract/357/17/1685) the non-inferiority of hepatitis A vaccine to immune globulin for post-exposure prophylaxis of hepatitis A. The results are convincing for the non-inferiority hypothesis: symptomatic hepatitis A occurred in 4.4% of subjects who received vaccine versus 3.3% of subjects who received immune globulin (RR 1.35; 95% CI 0.70-2.67).

This is a very well-executed non-inferiority study. If one looks at the methods section, s/he sees that the authors described very well their non-inferiority hypothesis and how it was arrived at. Given the low baseline rate of symptomatic hepatitis A (~3%), a RR of 3.0 is a reasonable margin for non-inferiority. Yet, on the basis of the non-significant trend toward less symptomatic hepatitis A in the immune globulin group, the authors suggest that this agent may be preferred.

Again, one cannot have his cake and eat it too. One either conducts a non-inferiority trial and accepts non-inferior results as meaning that one agent is non-inferior to the alternative agent, or one conducts a superiority trial to demonstrate that one agent is truly superior. If the point estimates in this trial are close to correct, and immune globulin is 1.1% superior to HAV vaccine, ~7300 patients would be required in EACH group to determine superiority at a power of 90% and an alpha of 0.05. So the current trial is no substitute for a superiority trial with ~7300 patients in each group. Unless such a trial is performed, HAV vaccine and immune globulin are non-inferior to each other for post-exposure prophylaxis to HAV, period.
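A back-of-the-envelope check of that sample size, using the standard two-proportion formula with the trial's observed rates (4.4% vs. 3.3%). Exact requirements vary with the assumptions (continuity correction, one- vs. two-sided testing), so this lands in the same general ballpark rather than on ~7300 exactly:

```python
from math import sqrt
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.90) -> float:
    """Per-group n to detect p1 vs p2 (two-sided alpha, normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    pbar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * pbar * (1 - pbar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

# Observed rates: 4.4% (vaccine) vs 3.3% (immune globulin)
print(round(n_per_group(0.044, 0.033)))  # roughly 6400 per group by this approximation
```

Either way, the required trial would dwarf the one actually conducted, which is the point.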

To sum up: one either believes that two agents are non-inferior (or more conservatively, equivalent) and he therefore conducts a non-inferiority trial and accepts the results based on the a priori margins (delta) that he himself specified - or he conducts a superiority trial to demonstrate unequivocally that his preferred agent is superior to the comparator agent.

Wednesday, October 31, 2007

Lanthanic Disease increasing because of MRI, reports NEJM

In this week's NEJM (http://content.nejm.org/cgi/content/short/357/18/1821), authors from the Netherlands report a large series of asymptomatic patients who had brain MRI scans. There was a [surprisingly?] high prevalence of abnormalities, particularly [presumed] brain infarcts, the frequency of which [predictably] increased with age. This is a timely report given the proliferation and technical evolution of advanced imaging techniques, which we can expect to lead to the discovery of an increasing number of "abnormalities" in asymptomatic patients. As in the case of screening for lung cancer (http://jama.ama-assn.org/cgi/content/abstract/297/9/953?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=computed+tomography&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT), the benefits of early detection of an abnormality must be weighed against the cost of the technology and the diagnostic and therapeutic misadventures that result from the pursuit of incidentalomas. The psychological impact on patients of the "knowledge" gained must also be considered. Sometimes, ignorance truly is bliss, and therefore 'tis folly to be wise.

Lanthanic disease (with which I am familiar thanks to the sage mentorship of Peter B. Terry, MD, MA at Johns Hopkins Hospital) refers to incidentally discovered abnormalities in asymptomatic individuals. Not surprisingly, it generally is thought to have a better prognosis than disease that is discovered after symptoms develop, presumably because it is discovered at a less advanced stage or is behaving in a less sinister fashion.

The discovery of Lanthanic disease poses challenges for clinicians. Is the natural history of incidentally discovered disease different from what is classically reported? Should pre-emptive interventions be undertaken? What of the elderly female with mental status changes who presents to the ED and in whom a cortical infarct or SDH is discovered on an MRI? Can her current symptoms be attributed to the imaging abnormalities? Clinicians will do well to be aware of the high prevalence of asymptomatic abnormalities on such scans.

The authors' conclusions are perspicacious: "Information on the natural history of these lesions is needed to inform clinical management."

Sunday, October 7, 2007

CROs (Contract Research Organizations) denounced in the NEJM

This last week's NEJM contains a long-overdue exposé on CROs (contract research organizations): http://content.nejm.org/cgi/content/short/357/14/1365 .

These organizations have one purpose: to carry out studies for the pharmaceutical industry in the most expeditious and efficient manner. The problem is that often, it is expeditious and efficient to compromise patient safety.

The article states the issue better than I could hope to. I will only comment that, regardless of who carries out the actual clinical trial, industry control of or involvement in the design of the trial is another MAJOR problem that must be addressed if we wish to search for the truth and protect the safety of study participants and subsequent patients in the study of novel pharmaceutical agents.

Friday, September 28, 2007

Badly designed studies - is the FDA to blame?

On the front page of today's NYT (http://www.nytimes.com/2007/09/28/health/policy/28fda.html?ex=1348718400&en=30b7a25ac3835517&ei=5124&partner=permalink&exprod=permalink)
is an article describing a report to be released today by the inspector general of the Department of Health and Human Services, which concludes that FDA oversight of clinical trials (mostly industry trials of drugs seeking approval by the agency) is sorely lacking.

In it, Rosa DeLauro (D-CT) opines that the agency puts industry interests ahead of public health. Oh, really?

Read the posts below and you might be of the same impression. Some of the study designs the FDA approves for the testing of agents are just unconscionable. These studies have little or no value for the public health, science, or patients. They serve only as coffer-fillers for the industry. Sadly, they sometimes serve as coffin-fillers when things go terribly awry. Think Trovan. Rezulin. Propulsid. Vioxx.

The medical community, as consumers of these "data" and the resulting products, has an obligation to its patients that extends beyond those whom we see in our offices. We should stop tolerating shenanigans in clinical trials, "me-too" drugs, and corporate profiteering at the expense of patient safety.

Thursday, September 27, 2007

Defaults suggested to improve healthcare outcomes

In today's NEJM (http://content.nejm.org/cgi/content/short/357/13/1340), Halpern, Ubel, and Asch describe the use of defaults to improve utilization of evidence-based practices. This strategy, which requires that we give up our status quo and omission biases (http://www.chestjournal.org/cgi/content/abstract/128/3/1497?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&author1=aberegg&searchid=1&FIRSTINDEX=0&sortspec=relevance&resourcetype=HWCIT ), could prove highly useful - if we have the gumption to follow their good advice and adopt it.

It is known that patients receive only approximately 50% of the evidence-based therapies indicated in their care (see McGlynn et al: http://content.nejm.org/cgi/content/abstract/348/26/2635) and that there is a lag of approximately 17 years between substantial evidence of a therapy's benefit and its adoption into routine care.

Given this dismal state of affairs, it seems that the biggest risk is not that a patient will receive a default therapy that is harmful, wasteful, or not indicated, but rather that patients will continue to receive inadequate and incomplete care. The time to institute defaults into practice is now.

Wednesday, September 26, 2007

Dueling with anidulafungin

Our letter to the editor of the NEJM regarding the anidulafungin article (described in a blog post in July - see below) was published today and can be seen at: http://content.nejm.org/cgi/content/short/357/13/1347 .

To say the least, I am disappointed in the authors' response, particularly with regard to the non-inferiority and superiority issues.

The "two-step" process they describe for sequential determination of non-inferiority followed by superiority is simply the way that a non-inferiority trial is conducted. Superiority is declared in a non-inferiority trial if the CI of the point estimate does not include zero. (See http://jama.ama-assn.org/cgi/content/abstract/295/10/1152?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=piaggio&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT .)

The "debate" among statisticians that they refer to is not really a debate at all, but relates to the distinction between a non-inferiority trial and an equivalence trial - in the latter, the CI of the point estimate must not include negative delta; in this case that would mean the 95% CI would have to fall so far to the left of zero that it did not include minus 20, or the pre-specified margin of non-inferiority. Obviously, the choice of a non-inferiority trial rather than an equivalence trial makes it easier to declare superiority. And this choice can create, as it did in this case, an apparent contradiction that the authors try to gloss over by restating the definition of superiority they chose when designing the trial.

Here is the contradiction, the violation of logic. The drug is declared superior because the 95% CI does not cross zero, but of course, that 95% CI is derived from a point estimate, in this case 15.4%. So, 15.4% is sufficient for the drug to be superior. But if your very design implied that a difference of less than 20% is clinically negligible (a requirement for the rational determination of delta, the prespecified margin of non-inferiority), aren't you obliged by reason and fairness to qualify the declaration of superiority by saying something like "but we think that a 15.4% difference is clinically negligible"?
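The contradiction can be made concrete with a small decision-rule sketch. The 15.4% point estimate is from the article; the CI bounds here are illustrative placeholders, not the published values:

```python
# Decision rules for a non-inferiority trial with margin delta (in percentage
# points), applied to a difference in success rates (new agent minus comparator).
def verdicts(ci_lower: float, ci_upper: float, delta: float) -> dict:
    return {
        "non_inferior": ci_lower > -delta,              # CI excludes -delta
        "superior": ci_lower > 0,                       # CI excludes zero
        "exceeds_negligible_margin": ci_lower > delta,  # difference clearly > delta
    }

# Point estimate 15.4% with an illustrative 95% CI of (4.0, 27.0); delta = 20.
print(verdicts(4.0, 27.0, 20.0))
# The drug comes out "superior" (CI excludes 0) even though the design itself
# deemed differences under 20% clinically negligible - and 15.4% is under 20%.
```

This is the "free lunch": the same margin that made non-inferiority easy to claim is quietly ignored when superiority is declared.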

There is no rule that states that you must qualify it in this way, but I think it's only fair. Perhaps we, the medical community, should create a rule - namely that you cannot claim superiority in a non-inferiority trial, only in an equivalence trial. This would prevent the industry from getting one of the "free lunches" they currently get when they conduct these trials, and the apparent contradictions that sometimes arise from them.

Tuesday, September 25, 2007

Lilly, Xigris, the XPRESS trial and non-inferiority shenanigans

The problem with non-inferiority trials (in addition to the apparent fact that the pharmaceutical industry uses them to manufacture false realities) is that people don't generally understand them (which is what allows false realities to be manufactured and consumed.) One only need look at the Windish article described below to see that the majority of folks struggle with biomedical statistics.

The XPRESS trial, published in AJRCCM Sept. 1st (http://ajrccm.atsjournals.org/cgi/content/abstract/176/5/483), was mandated by the FDA as a condition of the approval of drotrecogin alfa for severe sepsis. According to the authors of this study, the basic gist is to see whether heparin interferes with the efficacy of Xigris (drotrecogin alfa) in severe sepsis. The trial is finally published in a peer-reviewed journal, although Lilly has been touting the findings as supportive of Xigris for quite a while already.


The stated hypothesis was that Xigris+placebo is equivalent to Xigris+heparin (LMWH or UFH). [Confirmation of this hypothesis has obvious utility for Lilly and users of this drug because it would allay concerns about coadministration of Xigris and heparinoids, the use of the latter being staunchly entrenched in ICU practice.]

The hypothesis was NOT that Xigris+heparin is superior to Xigris alone. If Lilly had thought this, they would have conducted a superiority trial. They did not. Therefore, they must have thought that the prior probability of superiority was low. If the prior probability of a finding (e.g., superiority) is low, we need a strong study result to raise the posterior probability into a reasonable range - that is, a powerful study producing a very small p-value (e.g., <0.001). Instead, here is what we find:
  • This study used 90% confidence intervals. Not appropriate. This is like using a p-value of 0.10 for significance. I have calculated the more appropriate 95% CIs for the risk difference observed and they are: -0.077 to +0.004.
  • The analysis used was intention to treat. The more conservative method for an equivalence trial is to present the results as "as treated". This could be done at least in addition to the ITT analysis to see if the results are consistent.
  • Here we are doing an equivalence trial with mortality as an outcome. This requires us to choose a "delta" or mortality difference between active treatment and control which is considered to be clinically negligible. Is an increased risk of death of 6.2% negligible? I think not. It is simply not reasonable to conduct a non-inferiority or equivalence trial with mortality as the outcome. Mortality differences would have to be, I would say, less than 1% to convince me that they might be negligible.
  • Because an equivalence design was chosen, the 95% CIs (90% if you're willing to accept that - and I'm not) for the treatment difference would have to fall entirely outside of delta (6.2%) in order for treatment to be declared superior to placebo. Clearly they do not. So any suggestion that Xigris+heparin is superior to Xigris alone based on this study is bunkum. Hogwash. Tripe. Based upon the chosen design, superiority is not even approached. The touted p-value of 0.08 conceals this fact. If they had chosen a superiority design, yes, they would have been close. But they did not.
  • Equivalence was not demonstrated in this trial either, as the 95% (and the 90%) CIs crossed the pre-specified delta. So sorry.
  • The design of this study and its very conception as an equivalence trial with a mortality endpoint is totally flawed. Equivalence was not demonstrated even with a design that would seem to favor its demonstration. (Interestingly, if a non-inferiority design had been chosen, superiority of Xigris+heparin would in fact have been demonstrated! [with 90%, but NOT with 95% CIs])
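These verdicts can be checked mechanically. Using the 95% CI for the risk difference quoted above (-0.077 to +0.004) against the trial's 6.2% delta:

```python
# Equivalence requires the entire CI for the risk difference to lie
# within (-delta, +delta), where delta is the pre-specified margin.
def equivalent(ci_lower: float, ci_upper: float, delta: float) -> bool:
    return -delta < ci_lower and ci_upper < delta

ci_95 = (-0.077, 0.004)  # 95% CI for the risk difference, as computed above
delta = 0.062            # the 6.2% margin discussed above

print(equivalent(*ci_95, delta))  # False: the lower bound (-0.077) crosses -delta
```

The lower CI bound falls outside the margin, so the data are compatible with a mortality difference larger than the delta itself - hence no equivalence.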

The biggest problem I'm going to have is when the Kaplan-Meier curve presented in Figure 3A, with its prominently featured "near miss" p-value of 0.09, is used as ammunition for the argument that Xigris+heparin trended toward superiority in this study. If it had been a superiority trial, I would be more receptive to that trend. But you can't have your cake and eat it too. You either do a superiority trial, or you do an equivalence trial. In this case, the equivalence trial appeared to backfire.

Having said all that, I think we can be reassured that Xigris+heparin is not worse than Xigris+placebo, and the concern that heparin abrogates the efficacy of Xigris should be mostly dispelled. And because almost all critically ill patients are at high risk of DVT/PE, they should all be treated with heparinoids, and the administration of Xigris should not change that practice.

I just think we should stop letting folks get away with these non-inferiority/equivalence shenanigans. In this case, there is little ultimate difference. But in many cases a non-inferiority or equivalence trial such as this will allow the manufacture of a false reality. So I'll call this a case of "attempted manufacture of a false reality".


Friday, September 21, 2007

Medical Residents Don't Understand Statistics

But they want to: http://jama.ama-assn.org/cgi/content/abstract/298/9/1010

This is but one of many unsettling findings of an excellent article by Windish et al in the September 5th issue of JAMA.

Medical residents correctly answer only approximately 40% of questions pertaining to basic statistics related to clinical trials. Fellows and general medicine faculty with research training fared better statistically, but still have some work to do: they answered approximately 70% of the questions correctly.

An advanced degree in addition to a medical degree conferred only modest benefit: approximately 50% of questions answered correctly rather than 40%.


The solution to this apparent problem is therefore elusive. Even if we encouraged all residents to pursue advanced degrees or research training - not a realistic expectation - we would still have vast room for improvement in the understanding of basic biomedical statistics.

While directed training in medical statistics might improve performance on this test, with work-hours restrictions and the daunting amount of material residents must already master for the practice of medicine, it seems unlikely that a few extra courses in statistics during residency will make a large and sustainable difference.

Moreover, we must remember that performance on this test is a surrogate outcome - what we're really interested in is how they practice medicine with whatever skills they have. My anecdotal experience is that few physicians are actually keeping abreast of the medical literature - few are actually reading the few journals that they subscribe to - so improving their evidence-interpretation skills is going to have little impact on how they practice. (For example, few of my colleagues were aware of the Windish article itself, in spite of their practice in an academic center, its publication in a very high impact journal, and their considerable luxury of time compared to our colleagues in private practice.)

In some ways, encouraging the average physician to critically evaluate the medical literature seems like a far-fetched and idyllic notion. It may be akin to expecting him to stay abreast of the latest technology for running serum specimens or PCR machines, or of the sensitivity and specificity of various assays for BNP - he just doesn't have the time or the training to bother with nuances such as these, which are better left to the experts in the clinical and research laboratories. Likewise, it may be asking too much in the current era of medicine to expect that the average physician will possess and maintain biostatistical and trial-analysis skills, consistently apply them to emerging literature, and change practice promptly and accordingly. Empirical evidence suggests that this is not happening, and I don't think it has much to do with lack of statistical skills - it has to do with lack of time.

Perhaps what Windish et al have reinforced is support for the notion that individual physicians should not be expected to keep abreast of the medical literature, but should instead rely upon practice guidelines formulated by those experts properly equipped and compensated to appraise and make recommendations about the emerging evidence.

Saturday, September 15, 2007

Idraparinux, the van Gogh investigators, and clinical trials pointillism: connecting the dots shows that idraparinux increases the risk of death

It eludes me why the NEJM continues to publish specious, industry-sponsored, negative, non-inferiority trials. Perhaps they do it for my entertainment. And this past week, entertained I was indeed.

Idraparinux is yet another drug looking for an indication. Keep looking, Sanofi. Your pipeline problems will not be solved by this one.

First, let me dismiss the second article out of hand: it is not fair to test idraparinux against placebo (for the love of Joseph!) for the secondary prevention of VTE after a recent episode! (http://content.nejm.org/cgi/content/short/357/11/1105).

It is old news that one can reduce the recurrence of VTE after a recent episode either by using low-intensity warfarin (http://content.nejm.org/cgi/content/abstract/348/15/1425) or by extending the duration of warfarin anticoagulation (http://content.nejm.org/cgi/content/abstract/345/3/165). Therefore, the second van Gogh study does not merit further consideration, especially given the higher rate of bleeding in this study.


Now for the first study and its omissions and distortions. It is important to bear in mind that the only outcome that cannot be subject to ascertainment bias (assuming a high follow-up rate) is mortality, AND that the ascertainment of DVT and PE is fraught with numerous difficulties and potential biases.

The Omission: failure to report in the abstract that idraparinux use was associated with an increased risk of death in these studies - a risk that was significant in the PE study and that trended strongly in the DVT study. The authors attempt to explain this away by suggesting that the increased death rate was due to cancer, but of course we are not told how causes of death were ascertained (a notoriously difficult and messy task), and cancer is associated with DVT/PE, which is among the final common pathways of death from cancer. This fact alone - that idraparinux was associated with an increased risk of death - should doom this drug, and it should be the main headline related to these studies.

Appropriate headline: "Idraparinux increases the risk of death in patients with PE and possibly DVT."

      If we combine the deaths in the DVT and PE studies, we see that the 6-month death rates are 3.4% in the placebo group and 4.5% in the idraparinux group, with an overall (chi-square) p-value of 0.035 - significant!
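      That pooled comparison can be checked back-of-the-envelope with a two-proportion z-test (equivalent to a 2x2 chi-square without continuity correction). The per-arm sample sizes below are hypothetical round numbers chosen only to illustrate the calculation; they are not the actual van Gogh enrollment figures.

```python
import math

def two_proportion_p(deaths_a, n_a, deaths_b, n_b):
    """Two-sided p-value for a difference in proportions (normal approximation)."""
    p_a, p_b = deaths_a / n_a, deaths_b / n_b
    pooled = (deaths_a + deaths_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    return math.erfc(z / math.sqrt(2))  # two-sided tail probability

# Hypothetical arms of ~2,550 patients each, matching the 3.4% vs 4.5%
# six-month death rates quoted above (87 vs 115 deaths).
p = two_proportion_p(87, 2550, 115, 2550)
print(round(p, 3))
```

      With these assumed denominators the combined comparison lands in the vicinity of p = 0.04, consistent with the 0.035 quoted above; the exact value depends on the true arm sizes.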

      This is especially worrisome from a generalizability perspective - if this drug were approved and the distinction between DVT and PE is blurred in clinical practice as it often is, we would have no way of being confident that we're using it in a DVT patient rather than a PE patient. Who wants such a messy drug?

      The Obfuscations and Distortions: Where to begin? First of all, no justification of an Odds Ratio of 2.0 as a delta for non-inferiority is given. Is twice the odds of recurrent DVT/PE insignificant? It is not. This Odds Ratio is too high. Shame.

      To give credit where it is due, the investigation at least used a one sided 0.025 alpha for the non-inferiority comparison.
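      To see how generous an odds-ratio margin of 2.0 is in absolute terms, one can convert it to an event rate at an assumed control-arm recurrence rate. The 3% baseline below is an illustrative assumption, not a figure from the trial.

```python
def risk_at_odds_ratio(baseline_risk, odds_ratio):
    """Event rate implied by applying an odds ratio to a baseline risk."""
    odds = baseline_risk / (1 - baseline_risk)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

# An OR of 2.0 roughly doubles a low baseline recurrence rate:
baseline = 0.03  # assumed 3% recurrence on standard therapy
ceiling = risk_at_odds_ratio(baseline, 2.0)
print(round(ceiling, 3))  # about 0.058, nearly twice the baseline
```

      At low event rates the odds ratio approximates the risk ratio, so a non-inferiority margin of 2.0 tolerates close to a doubling of recurrences before the new drug would be declared inferior.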

      Second, regarding the DVT study, many if not the majority of patients with DVT also have PE, even if it is subclinical. Given that ascertainment of events (other than death) in this study relied on symptoms and was poorly described, that patients with DVT were not routinely tested for PE in the absence of symptoms, and that the risk of death was increased with idraparinux in the PE study, one is led to an obvious hypothesis: that the trend toward an increased risk of death in the DVT study patients who received idraparinux was due to unrecognized PE in some of these patients. The first part of the conclusion in the abstract ("in patients with DVT, once weekly SQ idraparinux for 3 or 6 months had an efficacy similar to that of heparin and vitamin K antagonists") obfuscates and conceals this worrisome possibility. Many patients with DVT probably also had undiagnosed PE and might have been more likely to die given the drug's failure to prevent recurrences in the PE study. The increased risk of death in the DVT study might simply have been muted and diluted by the lower frequency of PE among the patients in the DVT study.

      Then there is the annoying inability to reverse the effects of this drug, which has a very long half-life.

      Scientific objectivity and patient safety mandate that this drug not receive further consideration for clinical use. Persistence with the study of this drug will most likely represent "sunk cost bias" on the part of the manufacturer. It's time to cut bait and save patients in the process.


      Wednesday, September 5, 2007

      More on Prophylactic Cranial Irradiation

      One of our astute residents at OSU (Hallie Prescott, MD) wrote this letter to the editor of the NEJM about the Slotman article discussed 2 weeks ago - unfortunately, we did not meet the deadline for submission, so I'm posting it here:

      Slotman et al report that prophylactic cranial irradiation (PCI) increases median overall survival (a secondary endpoint) by 1.3 months in patients with small cell lung cancer. There were no significant differences in various quality of life (QOL) measures between the PCI and control groups. However, non-significant trends toward differences in QOL measures are noted in Table 2. We are not told the direction of these trends, and low compliance (46.3%) with QOL assessments at 9 months limits the statistical power of this analysis. Moreover, significant increases in side effects such as fatigue, nausea, vomiting, and leg weakness may limit the attractiveness of PCI for many patients. Therefore, the conclusion that “prophylactic cranial irradiation should be part of standard care for all patients with small-cell lung cancer” makes unwarranted assumptions about how patients with cancer value quantity and quality of life. The Evidence-Based Medicine working group has proposed that all evidence be considered in light of patients’ preferences, and we believe that this advice applies to PCI for extensive small cell lung cancer.


      References

      1. Slotman B, Faivre-Finn C, Kramer G, Rankin E, Snee M, Hatton M et al. Prophylactic Cranial Irradiation in Extensive Small-Cell Lung Cancer. N Engl J Med 2007; 357(7):664-672.
      2. Weeks JC, Cook EF, O'Day SJ, Peterson LM, Wenger N, Reding D et al. Relationship Between Cancer Patients' Predictions of Prognosis and Their Treatment Preferences. JAMA 1998; 279(21):1709-1714.
      3. McNeil BJ, Weichselbaum R, Pauker SG. Speech and survival: tradeoffs between quality and quantity of life in laryngeal cancer. N Engl J Med 1981; 305(17):982-987.
      4. Voogt E, van der Heide A, Rietjens JAC, van Leeuwen AF, Visser AP, van der Rijt CCD et al. Attitudes of Patients With Incurable Cancer Toward Medical Treatment in the Last Phase of Life. J Clin Oncol 2005; 23(9):2012-2019.
      5. Guyatt GH, Haynes RB, Jaeschke RZ, Cook DJ, Green L, Naylor CD et al. Users' Guides to the Medical Literature: XXV. Evidence-Based Medicine: Principles for Applying the Users' Guides to Patient Care. JAMA 2000; 284(10):1290-1296.

      Monday, August 20, 2007

      Prophylactic Cranial Irradiation: a matter of blinding, ascertainment, side effects, and preferences

      Slotman et al (August 16 issue of NEJM: http://content.nejm.org/cgi/content/short/357/7/664) report a multicenter RCT of prophylactic cranial irradiation for extensive small cell carcinoma of the lung and conclude that it not only reduces symptomatic brain metastases, but also prolongs progression-free and overall survival. This is a well designed and conducted non-industry-sponsored RCT, but several aspects of the trial warrant scrutiny and temper my enthusiasm for this therapy. Among them:

      The trial is not blinded (masked is the more precise term) from a patient perspective, and no effort was made to create a sham irradiation procedure. While unintentional unmasking due to side effects may have limited the effectiveness of a sham procedure, it may not have rendered it entirely ineffective. This issue is of importance because meeting the primary endpoint was contingent on patient symptoms, and a placebo effect may have impacted participants’ reporting of symptoms. Some investigators have gone to great lengths to tease out placebo effects using sham procedures, and the results have been surprising (e.g., knee arthroscopy; see: https://content.nejm.org/cgi/content/abstract/347/2/81?ck=nck).


      We are not told if investigators, the patients’ other physicians, radiologists, and statisticians were masked to the treatment assignment. Lack of masking may have led to other differences in patient management, or to differences in the threshold for ordering CT/MRI scans. We are not told about the number of CT/MRI scans in each group. In a nutshell: possible ascertainment bias (see http://www.consort-statement.org/?o=1123).

      There are several apparently strong trends in QOL assessments, but we are not told what direction they are in. Significant differences in these scores were unlikely to be found as the deck was stacked when the trial was designed: p<0.01 was required for significance of QOL assessments. While this is justified because of multiple comparisons, it seems unfair to make the significance level for side effects more conservative than that for the primary outcome of interest (think Vioxx here). The significance level required for secondary endpoints (progression-free and overall survival) was not lowered to account for multiple comparisons. Moreover, more than half of QOL assessments were missing by 9 months, so this study is underpowered to detect differences in QOL. It is therefore all the more important to know the direction of the trends that are reported.

      The authors appear to “gloss over” the significant side effects associated with this therapy. It made some subjects ill.

      If we are willing to accept that overall survival is improved by this therapy (I’m personally circumspect about this for the above reasons), the bottom line for patients will be whether they would prefer on average 5 additional weeks of life with nausea, vomiting, weight loss, fatigue, anorexia, and leg weakness to 5 fewer weeks of life without these symptoms. I think I know what choice many will make, and our projection bias may lead us to make inaccurate predictions of their choices (see Loewenstein, Medical Decision Making, Jan/Feb 2005: http://mdm.sagepub.com/cgi/content/citation/25/1/96).

      The authors state in the concluding paragraph:

      “Prophylactic cranial irradiation should be part of standard care for all patients with small-cell lung cancer who have a response to initial chemotherapy, and it should be part of the standard treatment in future studies involving these patients.”

      I think the decision to use this therapy is one that only patients are justified in making. At least now we have reasonably good data to help inform their choice.

      Monday, August 6, 2007

      Thalidomide, Phocomelia, and Lessons from History

      In tracing the history of evidence-based medicine tonight (for a lecture I have to give on Friday), I found the story of thalidomide on Wikipedia (http://en.wikipedia.org/wiki/Thalidomide ).

      (While I recognize that the information provided on this site is uncorroborated, I also recognize that it has been referenced by Federal District Courts in various decisions - see http://www.nytimes.com/2007/01/29/technology/29wikipedia.html?ex=1186545600&en=4e6683fb4fac3044&ei=5070 - so I consider it possibility-generating rather than evidence-corroborating.)

      This story is a tragic one of a company with a product to sell (a "treatment looking for an indication" - hmmm...) and its unscrupulous marketing of this product in the absence of evidence of both safety and efficacy.

      The story of Thalidomide should serve as a stark and poignant reminder of the potential harmful effects of a marketing campaign, impelled by profiteering, gone awry.

      Sunday, August 5, 2007

      AVANDIA and Omission Bias

      Amid all the hype about Avandia recently, a few relatively clear-cut observations are apparent (most of which are described better than I could hope to do in the July 5 issue of NEJM. Drazen et al, Dean, and Psaty each wrote wonderful editorials available at www.nejm.org).

      1.) Avandia appears to have NO benefits besides the surrogate endpoint of improved glycemic control (and engorging the coffers of GSK, the manufacturer).

      2.) Avandia may well increase the risk of CHF and MI, raise LDL cholesterol, cause weight gain, and increase the risk of fractures (the latter in women).

      3.) Numerous alternative agents exist, some of which improve primary outcomes (think UKPDS and metformin), and most of which appear to be safer.

      So, what physician in his right mind would start a patient on Avandia (especially in light of #3)? And if you would not START a patient on Avandia, then you should STOP Avandia in patients who are already taking it.


      To not do so would be to commit OMISSION BIAS - which refers to the tendency (in medicine and in life) to view the risks and/or consequences of doing nothing as superior to the risks and/or consequences of acting, even when the converse is true (i.e., the risks and/or consequences of acting are superior to those related to inaction). (For a reference, indulge me: Aberegg et al http://www.chestjournal.org/cgi/content/abstract/128/3/1497.)

      This situation is reminiscent of recommendations relating to the overall (read "net") health benefits of ethanol consumption - physicians are told not to discourage moderate alcohol consumption in patients who already consume, but not to encourage it in those who currently abstain. Well, alcohol is either good for you, or it is not. And since it appears to be good for you, the recommendation on its consumption should not hinge one iota on an arbitrarily established status quo (i.e., whether a person currently drinks, often for reasons completely unrelated to health).
      (For a reference, see Malinski et al: http://archinte.ama-assn.org/cgi/content/abstract/164/6/623; the last paragraph in the discussion could serve as an expose on omission bias.)

      So, let me go out on a limb here: Nobody should be taking Avandia, and use of this medication should not resume until some study demonstrates a substantive benefit in a meaningful outcome which outweighs any risks associated with the drug. Until we do this, we are the victims of OMISSION BIAS (+/- status quo bias) and the profiteering conspiracy of GSK which is beautifully alluded to, along with a poignant description of the probably intentional shortcomings in the design and conduct of the RECORD trial here: Psaty and Furberg http://content.nejm.org/cgi/content/extract/356/24/2522.

      Tuesday, July 31, 2007

      Secondary Endpoints, Opportunity Costs, Alternatives, Vioxx, Avandia, and Actos

      There are few endpoints that can hold a candle to mortality as the end-all, be-all of clinical trials design, but two appear to be fit for the challenge, at least according to past FDA decisions - or are they? Blood pressure lowering and glycemic control.



      It is old news that Vioxx kills people, and does so utterly unnecessarily: alternative treatments are available that are generic, low cost, and have no toxicities that are demonstrably greater than Vioxx (despite Big Pharma innuendo to the contrary - you know, GI toxicity and the like).



      (I am reminded of cognitive dissonance theory here - originally described by Festinger in 1957. It has been demonstrated that folks who are more harshly hazed by a fraternity have greater allegiance to it... could this be one of the reasons why paying big bucks for a prescription NSAID with no demonstrable benefits over OTC generics leads to patient claims of superiority of the branded product?)



      Well, the old news is still being published: http://content.nejm.org/cgi/content/full/357/4/360 .

      The interesting thing to me about the Vioxx story is that with alternatives available (you know, Aleve, Motrin, and the like), and in relation to a "lifestyle drug," safety was not given greater weight. If your primary endpoint is mortality, you might allow an MI or two in your dataset (although you should report them). But when your endpoint is "confirmed clinical upper gastrointestinal events" (http://content.nejm.org/cgi/content/full/343/21/1520), perhaps closer attention ought to be paid to the side effects you have to pay in order to receive the benefits of the primary endpoint. If no other NSAIDs were available to treat patients with crippling arthritis, that would be one thing (think IBS: Alosetron withdrawn and then reintroduced to the market because of lack of a suitable alternative; http://content.nejm.org/cgi/content/full/349/22/2136). But there were alternatives and this was a lifestyle drug....



      And now we have the Avandia debacle, which, surprisingly, did not lead to a recommendation for withdrawal of this drug from the US market by the recent FDA advisory panel (http://sciencenow.sciencemag.org/cgi/content/full/2007/730/1). Once again, it seems this decision, if made by a rational agent, would have given due consideration to whether there are alternative agents that might be used in place of Avandia if it were no longer available. Well, sure enough, in addition to metformin (think UKPDS), and insulin, and other oral hypoglycemics, lo and behold: Pioglitazone.

      Wednesday, July 25, 2007

      The Swan Ganz graces the pages of JAMA yet again

      The debate on the Swan Ganz catheter continues, this time spurred by a well done report documenting declining use of the catheter over the last decade, based on an analysis of an administrative database (available at http://jama.ama-assn.org/cgi/content/short/298/4/423 ).

      The arguments used in this debate continue to befuddle me with their obvious lack of logical consistency with many other things that are going on apparently unnoticed around us, and about which no fuss is being made. I will enumerate some of these here.


      1.) An air of derision often accompanies denouncements of the Swan Ganz catheter because it is "invasive". This buzz word, however, carries little consequence in reality. That something is "invasive" does not necessarily mean that it is riskier than other things that are done that are "non-invasive". Administration of Cytoxan or other chemotherapeutic agents is not "invasive" by the common definition of the term, yet is clearly very risky. Other analogies abound. I am not convinced by hyperbolic statements of "invasiveness" that are not supported by actual negative consequences of the device that exceed other risks which we routinely take (and take for granted) in medicine.

      2.) And what are the actual negative consequences? In the FACTT trial of ARDSnet, the only adverse consequence was transient arrhythmias. I remain unconvinced.

      3.) What OTHER "invasive" (their definition, not mine) things do we routinely do that have no proven mortality benefit? How about arterial lines, or many (most?) central lines? Why is the critical care (especially the academic critical care) community not rallying against those, if it is invasive devices of unproven [mortality] benefit that we are concerned with?

      4.) Why must this device, unlike almost all other devices and diagnostic modalities, demonstrate a mortality benefit in order to qualify for our acceptance? Must the ECHOcardiogram (within the ICU or without) reduce mortality for its use to be justified? Not invasive, no risks, doesn't count, you say. OK, how about CT angiogram? There are increasing data about the carcinogenicity of radiation from CT scans (Lee et al, 2004, Health Policy and Practice, "Diagnostic CT Scans..", available at: http://radiology.rsnajnls.org/cgi/reprint/231/2/393.pdf), and there is not insubstantial renal morbidity and risk of anaphylactoid reactions to the dye. Yet we evaluate the CT angiogram on the basis of its ability to identify pulmonary emboli (sensitivity and specificity and the like), not to reduce mortality (and meanwhile we largely ignore the risks or accept them as the costs of diagnosis). How many patients would be required to conduct such a study of mortality reduction with CT angiogram? Is there a study in existence of a diagnostic modality the use of which improves mortality? Is there precedent for such a thing? Should it surprise us that intervening more proximally (diagnosis rather than treatment) in a clinical pathway makes it harder (or impossible) to demonstrate a benefit further downstream?

      5.) Let's extend the analogy. Suppose we were to design a study of routine use of CT angiogram in the ICU for this or that indication, let's say sudden unexplained hypoxemia. Suppose also that this study shows no benefit (mortality or otherwise) of routine use in this patient population. Does this mean that I should stop using CT angiogram on a selective basis, as those who call for a moratorium imply I should do with the Swan?

      6.) If the arterial line analogy was not sufficient, because there was not a recent study demonstrating a lack of mortality benefit with this device, we have an alternative candidate: the Canadian Critical Care Trials Group study of ("invasive") BAL for the diagnosis of VAP published in the NEJM in December ( http://content.nejm.org/cgi/content/abstract/355/25/2619 ). No rallying cry, no proposed moratorium followed this extremely well conducted trial. No denouncement of BAL in the editorial (http://content.nejm.org/cgi/content/extract/355/25/2691). Quite the contrary - the exclusion of patients with staph and pseudomonas was construed as all but undermining the validity of the results for application to clinical practice. At my own institution, pre-existing staunch enthusiasm for BAL diagnosis of VAP has not wavered since publication of this trial.

      I am no Swan Ganz apologist, and I rarely use the device. But the state of the debate and the arguments used to denounce the Swan do not stand the test of logic or consistency that I expect of the critical care community. And this leads me to believe that these arguments are the spawn of ideology and sanctimoniousness, rather than logic and balanced consideration.

      An afterthought - Perhaps the most obvious moratorium for the academic community to call for is a moratorium on clinical trials of the Swan. They continue to be performed long after it became clear, meta-analytically, that it will be impossible to show a convincing positive result. The prior probability is now prohibitively low for any reasonably-sized trial to move the posterior away from the prior or sway the results of a meta-analysis.
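      The point about priors can be made concrete with a simple Bayes update. The numbers below are purely illustrative assumptions: a low prior probability that the catheter confers a mortality benefit, and a likelihood ratio of the size a single moderately powered positive trial might supply.

```python
def posterior(prior, likelihood_ratio):
    """Posterior probability after a Bayes update on the prior odds."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Assume a 2% prior that the Swan confers a mortality benefit, and a
# positive trial whose result is 5 times likelier under "benefit" than not:
post = posterior(0.02, 5.0)
print(round(post, 3))  # roughly 0.093 -- still an unlikely hypothesis
```

      Even a fairly impressive single trial leaves the posterior below 10% under this assumed prior, which is the meta-analytic intuition behind calling for a moratorium on further trials.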

      Thursday, July 19, 2007

      The WAVE trial: The Canadians set the standard once again

      Today's NEJM contains the report of an exemplary trial (the WAVE trial) comparing aspirin to aspirin and warfarin combined in the prevention of cardiovascular events in patients with peripheral vascular disease (http://content.nejm.org/cgi/reprint/357/3/217.pdf). Though this was a "negative" trial in that there was no statistically significant difference in the outcomes between the two treatment groups, I am struck by several features of its design that are worth mentioning.

      Although the trial was the beneficiary of pharmaceutical funding, the authors state:

      "None of the corporate sponsors had any role in the design or conduct of the trial, analysis of the data, or preparation of the manuscript".

      Ideally, this would be true of all clinical trials, but for now it remains an idea ahead of its time.



      One way to remove any potential or perceived conflicts of interest might be to mandate that no phase 3 study be designed, conducted, or analyzed by its sponsor. Rather, phase 3 trials could be funded by a sponsor, but are mandated to be designed, conducted, analyzed, and reported by an independent agency consisting of clinical trials experts, biostatisticians, etc. Such an agency might also receive infrastructural support from governmental agencies. It would have to be large enough to handle the volume of clinical trials, and large enough that a sponsor would not be able to know to what ad hoc design committee the trial would be assigned, thereby preventing unscrupulous sponsors from "stacking the deck" in favor of the agent in which they have an interest.

      The authors of the current article also clearly define and describe inclusion and exclusion criteria for the trial, and these are not overly restrictive, increasing the generalizability of the results. Moreover, the rationale for the parsimonious inclusion and exclusion criteria is intuitively obvious, unlike some trials where the reader is left to guess why the authors excluded a particular subgroup. Was it because it was thought that the agent would not work in that group? Because increased risk was expected in that group? Because study was too difficult (ethically or logistically) in that group (e.g., pregnancy)? Inadequate justification of inclusion and exclusion criteria makes it difficult for practitioners to determine how to incorporate the findings into clinical practice. For example, were pregnant patients excluded from trials of therapeutic hypothermia after cardiac arrest (http://content.nejm.org/cgi/reprint/346/8/549.pdf) for ethical reasons, because of an increased risk to the mother or fetus, because small numbers of pregnant patients were expected, because the IRB frowns upon their inclusion, or for some other reason? Without knowing this, it is difficult to know what to do with a pregnant woman who is comatose following cardiac arrest. Obviously, their lack of inclusion in the trial does not mean that this therapy is not efficacious for them (absence of evidence is not evidence of absence). If I knew that they were excluded because of a biologically plausible concern for harm to the fetus (and I can think of at least one) rather than because of IRB concerns, I would be better prepared to make a decision about this therapy when faced with a pregnant patient after cardiac arrest. Improving the reporting and justification of inclusion and exclusion criteria should be part of efforts to improve the quality of reporting of clinical trials.

      Interestingly, the authors also present an analysis of the composite endpoints (coprimary endpoints 1 and 2) that excludes fatal bleeding or hemorrhagic stroke. When these side effects are excluded from the composite endpoints, there is a trend favoring combination therapy (p values 0.11 and 0.09, respectively). Composite endpoints are useful because they allow a trial of a given number of patients to have greater statistical power, and it is rational to include side effects in them, as side effects reduce the net value of the therapy. However, an economist or a person versed in expected utility theory (EUT) would say that it is not fair to combine these endpoints without first weighting them based on their relative (positive or negative) value. Not weighting them implies that an episode of severe bleeding in this trial is as bad (negative value or utility) as a death - a contention that I for one would not support. I would much rather bleed than die, or have a heart attack for that matter. Bleeding can usually be readily and effectively treated.
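      A utility-weighted composite of the kind described might look like the following sketch. Both the event counts and the weights are hypothetical, invented only to show the mechanics; a real analysis would elicit weights from patients (e.g., via standard gambles).

```python
# Hypothetical per-arm event counts for a composite endpoint.
events = {"death": 20, "MI": 35, "severe_bleed": 15}

# Unweighted composite: every event counts the same.
unweighted = sum(events.values())

# Hypothetical disutility weights (death anchored at 1.0): a severe
# bleed, being usually treatable, is weighted far below death or MI.
weights = {"death": 1.0, "MI": 0.6, "severe_bleed": 0.2}
weighted = sum(weights[k] * n for k, n in events.items())

print(unweighted, weighted)  # 70 raw events vs a weighted burden of 44.0
```

      Two arms with identical raw composite counts can thus differ substantially in weighted burden, which is exactly why unweighted composites can mislead.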

      In the future, it may be worthwhile to think more about composite endpoints if we are really interested in the net value/utility of a therapy. While it is often difficult to assign a relative value to different outcomes, methods (such as standard gambles) exist and such assignment may be useful in determining the true net value (to society or to a patient) of a new therapy.

      Tuesday, July 10, 2007

      Anidulafungin - a boon for patients, physicians, or Big Pharma?

      The June 14th edition of the NEJM (http://content.nejm.org/cgi/content/short/356/24/2472) contains an article describing a trial of anidulafungin, a new echinocandin antifungal agent similar to the more familiar caspofungin, in invasive candidiasis. The comparator agent was fluconazole. This is a proprietary agent, and the study was fully funded by the pharmaceutical sponsor.

      The trial was a non-inferiority trial, and the chosen "delta" (the treatment difference which was determined to be clinically insignificant) was 20%. This means that the authors would consider a difference in clinical response between the 2 agents of 19% to be clinically insignificant. No justification for this delta was provided, as is recommended (http://jama.ama-assn.org/cgi/content/abstract/295/10/1152?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=non-inferiority&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT). It is not clear if clinicians agree with this implicit statement of clinical insignificance, and no poll has been taken to determine if they do.


      Which raises a question: should there be a requirement that clinicians be polled to determine what THEY, rather than the study sponsors, think is a clinically insignificant difference? After all, clinicians are the folks who will be using the drug (if it is approved by the FDA).

      The design of non-inferiority trials is, in my experience, poorly understood among clinicians, and this may be due to inadequate reporting as reported in the above article and in this one (http://jama.ama-assn.org/cgi/content/abstract/295/10/1147?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=equivalence&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT).

      Interestingly, the observed difference between the agents favored anidulafungin by 15.4% - a difference that, by the authors' own delta, would qualify as clinically insignificant, though they did not emphasize it as such.
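      The non-inferiority logic reduces to checking whether the confidence interval for the difference in response rates stays above minus delta. The response rates and arm sizes below are illustrative stand-ins chosen so the difference matches the 15.4% quoted above; they are not asserted to be the trial's exact figures.

```python
import math

def diff_ci(successes_a, n_a, successes_b, n_b, z=1.96):
    """95% CI for (rate_a - rate_b), unpooled normal approximation."""
    pa, pb = successes_a / n_a, successes_b / n_b
    se = math.sqrt(pa * (1 - pa) / n_a + pb * (1 - pb) / n_b)
    diff = pa - pb
    return diff - z * se, diff + z * se

# Illustrative arms: ~76% response on anidulafungin vs ~60% on fluconazole.
lo, hi = diff_ci(96, 127, 71, 118)
delta = 0.20  # the trial's stated non-inferiority margin

print(lo > -delta)  # non-inferiority criterion met
print(lo > 0)       # lower bound above zero even suggests superiority
```

      With numbers like these the lower confidence bound sits comfortably above zero, let alone above -20%, which is what makes a 20% delta look so permissive in hindsight.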

      I am left wondering if individual patients or society are better off now that we have another drug of the echinocandin class available. I would be more convinced that they were if anidulafungin had been compared to 800 mg of fluconazole (rather than 400 mg) or to caspofungin, but alas, it was not. I don't know what the cost of developing and testing this drug was, but I expect that it was on the order of tens to hundreds of millions of dollars - not to mention the costs of subsequent testing, advertising and marketing.

      And the opportunity costs - the other possibilities. What else could have been done with that money that may have benefited individual patients or society more than another echinocandin agent?