Friday, September 28, 2007

Badly designed studies - is the FDA to blame?

On the front page of today's NYT (http://www.nytimes.com/2007/09/28/health/policy/28fda.html?ex=1348718400&en=30b7a25ac3835517&ei=5124&partner=permalink&exprod=permalink)
is an article describing a report released today by the inspector general of the Department of Health and Human Services, which concludes that FDA oversight of clinical trials (mostly industry-sponsored trials of drugs seeking agency approval) is sorely lacking.

In it, Rosa DeLauro (D-CT) opines that the agency puts industry interests ahead of public health. Oh, really?

Read the posts below and you might form the same impression. Some of the study designs the FDA approves for the testing of agents are simply unconscionable. These studies have little or no value for public health, science, or patients. They serve only as coffer-fillers for the industry. Sadly, they sometimes serve as coffin-fillers when things go terribly awry. Think Trovan. Rezulin. Propulsid. Vioxx.

The medical community, as the consumer of these "data" and the resulting products, has an obligation to patients that extends beyond those we see in our offices. We should stop tolerating shenanigans in clinical trials, "me-too" drugs, and corporate profiteering at the expense of patient safety.

Thursday, September 27, 2007

Defaults suggested to improve healthcare outcomes

In today's NEJM (http://content.nejm.org/cgi/content/short/357/13/1340), Halpern, Ubel, and Asch describe the use of defaults to improve utilization of evidence-based practices. This strategy, which requires that we give up our status quo and omission biases (http://www.chestjournal.org/cgi/content/abstract/128/3/1497?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&author1=aberegg&searchid=1&FIRSTINDEX=0&sortspec=relevance&resourcetype=HWCIT), could prove highly useful - if we have the gumption to follow their good advice and adopt it.

It is known that patients receive only approximately 50% of the evidence-based therapies indicated in their care (see McGlynn et al: http://content.nejm.org/cgi/content/abstract/348/26/2635), and that there is a lag of approximately 17 years between substantial evidence of a therapy's benefit and its adoption into routine care.

Given this dismal state of affairs, the biggest risk is not that a patient will receive a default therapy that is harmful, wasteful, or not indicated, but rather that patients will continue to receive inadequate and incomplete care. The time to institute defaults in practice is now.

Wednesday, September 26, 2007

Dueling with anidulafungin

Our letter to the editor of the NEJM regarding the anidulafungin article (described in a blog post in July - see below) was published today and can be seen at: http://content.nejm.org/cgi/content/short/357/13/1347 .

To say the least, I am disappointed in the authors' response, particularly in regards to the non-inferiority and superiority issues.

The "two-step" process they describe for sequential determination of non-inferiority followed by superiority is simply the way a non-inferiority trial is conducted. Superiority is declared in a non-inferiority trial if the CI of the point estimate does not include zero. (See http://jama.ama-assn.org/cgi/content/abstract/295/10/1152?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=piaggio&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT.)

The "debate" among statisticians that they refer to is not really a debate at all; it relates to the distinction between a non-inferiority trial and an equivalence trial. In the latter, the CI of the point estimate must not include negative delta - in this case, the 95% CI would have to fall so far to the left of zero that it did not include minus 20, the pre-specified margin of non-inferiority. Obviously, choosing a non-inferiority trial rather than an equivalence trial makes it easier to declare superiority. And this choice can create, as it did here, an apparent contradiction that the authors try to gloss over by restating the definition of superiority they chose when designing the trial.

Here is the contradiction, the violation of logic. The drug is declared superior because the 95% CI does not cross zero; that CI is derived from a point estimate, in this case 15.4%. So 15.4% is sufficient for the drug to be superior. But if your very design implied that a difference of less than 20% is clinically negligible (a requirement for the rational determination of delta, the prespecified margin of non-inferiority), aren't you obliged by reason and fairness to qualify the declaration of superiority by saying something like "but we think that a 15.4% difference is clinically negligible"?

There is no rule that states that you must qualify it in this way, but I think it's only fair. Perhaps we, the medical community, should create a rule - namely that you cannot claim superiority in a non-inferiority trial, only in an equivalence trial. This would prevent the industry from getting one of the "free lunches" they currently get when they conduct these trials, and the apparent contradictions that sometimes arise from them.
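The competing decision rules can be sketched as a small function. This is an illustrative sketch, not the trial's actual analysis; it assumes the sign convention that positive differences favor the new drug, a 20-percentage-point delta, and an example interval around the published 15.4% point estimate:

```python
def classify(ci_low, ci_high, delta=20.0):
    """Classify a trial result from the 95% CI of the treatment
    difference (positive = new drug better).

    Rules, per the discussion above:
      non-inferior: entire CI above -delta
      superior (non-inferiority rule): entire CI above zero
      superior (proposed equivalence rule): entire CI above +delta
    """
    verdicts = []
    if ci_low > -delta:
        verdicts.append("non-inferior")
    if ci_low > 0:
        verdicts.append("superior (non-inferiority rule)")
    if ci_low > delta:
        verdicts.append("superior (equivalence rule)")
    return verdicts

# Illustrative interval around a 15.4% point estimate (not the exact
# published CI): superior by the non-inferiority rule, but not by the
# stricter equivalence rule.
print(classify(4.0, 27.0))
```

Under the stricter rule, a drug whose CI straddles the margin of clinical negligibility could not be declared superior - which is exactly the qualification argued for above.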

Tuesday, September 25, 2007

Lilly, Xigris, the XPRESS trial and non-inferiority shenanigans

The problem with non-inferiority trials (in addition to the apparent fact that the pharmaceutical industry uses them to manufacture false realities) is that people don't generally understand them (which is what allows false realities to be manufactured and consumed). One need only look at the Windish article described below to see that the majority of folks struggle with biomedical statistics.

The XPRESS trial, published in AJRCCM on September 1st (http://ajrccm.atsjournals.org/cgi/content/abstract/176/5/483), was mandated by the FDA as a condition of the approval of drotrecogin-alfa for severe sepsis. According to the authors, the basic gist is to see whether heparin interferes with the efficacy of Xigris (drotrecogin-alfa) in severe sepsis. The trial has finally been published in a peer-reviewed journal, although Lilly has been touting the findings as supportive of Xigris for quite a while already.


The stated hypothesis was that Xigris+placebo is equivalent to Xigris+heparin (LMWH or UFH). (Confirmation of this hypothesis has obvious utility for Lilly and for users of this drug, because it would allay concerns about coadministration of Xigris and heparinoids, the latter being staunchly entrenched in ICU practice.)

The hypothesis was NOT that Xigris+heparin is superior to Xigris alone. If Lilly had thought this, they would have conducted a superiority trial. They did not. Therefore, they must have thought that the prior probability of superiority was low. If the prior probability of a finding (e.g., superiority) is low, we need a strong study result to raise the posterior probability into a reasonable range - that is, a powerful study which produces a very small p-value (e.g., <0.001).
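This prior-to-posterior reasoning can be made concrete with the standard positive-predictive-value calculation for research findings. The sketch below uses hypothetical numbers; the power and alpha values are illustrative assumptions, not figures from the trial:

```python
def ppv(prior, power=0.9, alpha=0.05):
    """Probability that a 'significant' result reflects a true effect,
    given the prior probability that the effect is real."""
    true_pos = power * prior         # truly superior AND detected
    false_pos = alpha * (1 - prior)  # not superior, false alarm
    return true_pos / (true_pos + false_pos)

# A low prior leaves the posterior modest even after significance:
print(round(ppv(0.10), 2))  # → 0.67
print(round(ppv(0.50), 2))  # → 0.95
```

With a low prior probability of superiority, even a nominally significant result should move us far less than the same result in a trial designed, from the outset, to test superiority.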
  • This study used 90% confidence intervals. Not appropriate. This is like using a p-value of 0.10 for significance. I have calculated the more appropriate 95% CIs for the risk difference observed and they are: -0.077 to +0.004.
  • The analysis used was intention to treat. The more conservative method for an equivalence trial is to present the results as "as treated". This could be done at least in addition to the ITT analysis to see if the results are consistent.
  • Here we are doing an equivalence trial with mortality as an outcome. This requires us to choose a "delta" or mortality difference between active treatment and control which is considered to be clinically negligible. Is an increased risk of death of 6.2% negligible? I think not. It is simply not reasonable to conduct a non-inferiority or equivalence trial with mortality as the outcome. Mortality differences would have to be, I would say, less than 1% to convince me that they might be negligible.
  • Because an equivalence design was chosen, the 95% CIs (90% if you're willing to accept that - and I'm not) for the treatment difference would have to fall entirely outside of delta (6.2%) in order for treatment to be declared superior to placebo. Clearly they do not. So any suggestion that Xigris+heparin is superior to Xigris alone based on this study is bunkum. Hogwash. Tripe. Based upon the chosen design, superiority is not even approached. The touted p-value of 0.08 conceals this fact. If they had chosen a superiority design, yes, they would have been close. But they did not.
  • Equivalence was not demonstrated in this trial either, as the 95% (and the 90%) CIs crossed the pre-specified delta. So sorry.
  • The design of this study and its very conception as an equivalence trial with a mortality endpoint is totally flawed. Equivalence was not demonstrated even with a design that would seem to favor its demonstration. (Interestingly, if a non-inferiority design had been chosen, superiority of Xigris+heparin would in fact have been demonstrated - with 90%, but NOT with 95%, CIs.)
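The difference between 90% and 95% intervals in the bullets above comes down to the critical value used. This sketch uses a simple Wald interval for a risk difference; the event counts and group sizes are hypothetical, not the XPRESS data:

```python
from math import sqrt

def risk_diff_ci(x1, n1, x2, n2, z):
    """Wald confidence interval for the risk difference p1 - p2."""
    p1, p2 = x1 / n1, x2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    return diff - z * se, diff + z * se

# Hypothetical 28-day deaths per arm (illustrative only)
print(risk_diff_ci(280, 970, 245, 976, z=1.645))  # 90% CI (narrower)
print(risk_diff_ci(280, 970, 245, 976, z=1.96))   # 95% CI (wider)
```

The 95% interval is always wider, so a treatment difference whose 90% CI just excludes a boundary may well include it at 95% - which is why reporting 90% intervals is the equivalence-trial analogue of testing at p < 0.10.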

The biggest problem I'm going to have is when the Kaplan-Meier curve presented in Figure 3A, with its prominently featured "near miss" p-value of 0.09, is used as ammunition for the argument that Xigris+heparin trended toward superiority in this study. If it had been a superiority trial, I would be more receptive to that trend. But you can't have your cake and eat it too. You either do a superiority trial, or you do an equivalence trial. In this case, the equivalence trial appeared to backfire.

Having said all that, I think we can be reassured that Xigris+heparin is not worse than Xigris+placebo, and the concern that heparin abrogates the efficacy of Xigris should be mostly dispelled. And because almost all critically ill patients are at high risk of DVT/PE, they should all be treated with heparinoids, and the administration of Xigris should not change that practice.

I just think we should stop letting folks get away with these non-inferiority/equivalence shenanigans. In this case, there is little ultimate difference. But in many cases a non-inferiority or equivalence trial such as this will allow the manufacture of a false reality. So I'll call this a case of "attempted manufacture of a false reality".


Friday, September 21, 2007

Medical Residents Don't Understand Statistics

But they want to: http://jama.ama-assn.org/cgi/content/abstract/298/9/1010

This is but one of many unsettling findings of an excellent article by Windish et al in the September 5th issue of JAMA.

Medical residents correctly answer only approximately 40% of questions pertaining to basic statistics related to clinical trials. Fellows and general medicine faculty with research training fared better statistically, but still have some work to do: they answered approximately 70% of the questions correctly.

An advanced degree in addition to a medical degree conferred only modest benefit: 50% answered correctly rather than 40%.


The solution to this apparent problem is therefore elusive. Even if we encouraged all residents to pursue advanced degrees or research training - hardly a realistic expectation - we would still have vast room for improvement in the understanding of basic biomedical statistics.

While directed training in medical statistics might improve performance on this test, with work-hours restrictions and the daunting amount of material residents must already master for the practice of medicine, it seems unlikely that a few extra courses in statistics during residency will make a large and sustainable difference.

Moreover, we must remember that performance on this test is a surrogate outcome - what we're really interested in is how physicians practice medicine with whatever skills they have. My anecdotal experience is that few physicians actually keep abreast of the medical literature - few actually read the journals they subscribe to - so improving their evidence-interpretation skills is going to have little impact on how they practice. (For example, few of my colleagues were aware of the Windish article itself, in spite of their practice in an academic center, its publication in a very high impact journal, and their considerable luxury of time compared to our colleagues in private practice.)

In some ways, encouraging the average physician to critically evaluate the medical literature seems like a far-fetched and idyllic notion. It may be akin to expecting them to stay abreast of the latest technology for running serum specimens or PCR machines, or of the sensitivity and specificity of various assays for BNP - they simply don't have the time or the training to bother with nuances such as these, which are better left to the experts in the clinical and research laboratories. Likewise, it may be asking too much in the current era of medicine to expect that the average physician will possess and maintain biostatistical and trial-analysis skills, consistently apply them to emerging literature, and change practice promptly and accordingly. Empirical evidence suggests that this is not happening, and I don't think it has much to do with lack of statistical skills - it has to do with lack of time.

Perhaps what Windish et al have reinforced is the notion that individual physicians should not be expected to keep abreast of the medical literature, but should instead rely upon practice guidelines formulated by experts properly equipped and compensated to appraise and make recommendations about the emerging evidence.

Saturday, September 15, 2007

Idraparinux, the van Gogh investigators, and clinical trials pointillism: connecting the dots shows that idraparinux increases the risk of death

It eludes me why the NEJM continues to publish specious, industry-sponsored, negative, non-inferiority trials. Perhaps they do it for my entertainment. And this past week, entertained I was indeed.

Idraparinux is yet another drug looking for an indication. Keep looking, Sanofi. Your pipeline problems will not be solved by this one.

First, let me dismiss the second article out of hand: it is not fair to test idraparinux against placebo (for the love of Joseph!) for the secondary prevention of VTE after a recent episode! (http://content.nejm.org/cgi/content/short/357/11/1105)

It is old news that one can reduce the recurrence of VTE after a recent episode either by using low-intensity warfarin (http://content.nejm.org/cgi/content/abstract/348/15/1425) or by extending the duration of warfarin anticoagulation (http://content.nejm.org/cgi/content/abstract/345/3/165). Therefore, the second van Gogh study does not merit further consideration, especially given the higher rate of bleeding in that study.


Now for the first study and its omissions and distortions. It is important to bear in mind that the only outcome that cannot be affected by ascertainment bias (assuming a high follow-up rate) is mortality, AND that the ascertainment of DVT and PE is fraught with numerous difficulties and potential biases.

The Omission: failure to report in the abstract that idraparinux use was associated with an increased risk of death in these studies - significant in the PE study, and trending strongly in the DVT study. The authors attempt to explain this away by suggesting that the increased death rate was due to cancer, but of course we are not told how causes of death were ascertained (a notoriously difficult and messy task), and cancer is associated with DVT/PE, which is among the final common pathways of death from cancer. This fact alone - that idraparinux was associated with an increased risk of death - should doom this drug and should be the main headline of these studies.

Appropriate headline: "Idraparinux increases the risk of death in patients with PE and possibly DVT."

If we combine the deaths in the DVT and PE studies, we see that the 6-month death rates are 3.4% in the comparator group and 4.5% in the idraparinux group, with an overall (chi-square) p-value of 0.035 - significant!
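That pooled comparison is easy to check with a hand-rolled chi-square test. In the sketch below the arm sizes are assumed round numbers (about 2,900 per group across the two studies), not the published counts, so the resulting p-value is approximate:

```python
from math import erfc, sqrt

def chi2_1df_p(a, b, c, d):
    """Pearson chi-square p-value (1 df) for the 2x2 table [[a, b], [c, d]],
    using the fact that a chi-square(1) variate is a squared standard normal."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return erfc(sqrt(chi2 / 2))

# Deaths at the quoted rates, with assumed arm sizes of 2,900 each:
deaths_idra, n_idra = 130, 2900  # ~4.5%
deaths_comp, n_comp = 99, 2900   # ~3.4%
p = chi2_1df_p(deaths_idra, n_idra - deaths_idra,
               deaths_comp, n_comp - deaths_comp)
print(round(p, 3))  # below 0.05
```

Even with these rough counts, the pooled excess mortality clears the conventional significance bar.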

This is especially worrisome from a generalizability perspective: if this drug were approved and the distinction between DVT and PE were blurred in clinical practice, as it often is, we would have no way of being confident that we're using it in a DVT patient rather than a PE patient. Who wants such a messy drug?

The Obfuscations and Distortions: where to begin? First of all, no justification is given for an odds ratio of 2.0 as the delta for non-inferiority. Is twice the odds of recurrent DVT/PE insignificant? It is not. This odds ratio is too high. Shame.
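What an odds ratio of 2.0 concedes in absolute terms is easy to see by converting it back to a risk. The baseline recurrence risk below is an assumed illustrative figure, not taken from the trial:

```python
def risk_from_or(baseline_risk, odds_ratio):
    """Implied risk in the exposed group, given a baseline risk
    and an odds ratio."""
    odds = baseline_risk / (1 - baseline_risk)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

base = 0.04  # assumed baseline recurrence risk of 4%
print(round(risk_from_or(base, 2.0), 3))  # → 0.077, nearly double
```

A margin that tolerates a near-doubling of recurrent DVT/PE is hard to defend as "clinically negligible".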

To give credit where it is due, the investigators at least used a one-sided 0.025 alpha for the non-inferiority comparison.

Second, regarding the DVT study: many if not most patients with DVT also have PE, even if it is subclinical. Given that ascertainment of events (other than death) in this study relied on symptoms and was poorly described, that patients with DVT were not routinely tested for PE in the absence of symptoms, and that the risk of death was increased with idraparinux in the PE study, one is led to an obvious hypothesis: that the trend toward an increased risk of death among DVT study patients who received idraparinux was due to unrecognized PE in some of these patients. The first part of the conclusion in the abstract - "in patients with DVT, once weekly SQ idraparinux for 3 or 6 months had an efficacy similar to that of heparin and vitamin K antagonists" - obfuscates and conceals this worrisome possibility. Many patients with DVT probably also had undiagnosed PE and might have been more likely to die, given the drug's failure to prevent recurrences in the PE study. The increased risk of death in the DVT study might simply have been muted and diluted by the lower frequency of PE among patients in the DVT study.

Then there is the annoying inability to reverse the effects of this drug, which has a very long half-life.

Scientific objectivity and patient safety mandate that this drug not receive further consideration for clinical use. Persistence with the study of this drug will most likely represent "sunk cost bias" on the part of the manufacturer. It's time to cut bait and save patients in the process.


Wednesday, September 5, 2007

More on Prophylactic Cranial Irradiation

One of our astute residents at OSU (Hallie Prescott, MD) wrote this letter to the editor of the NEJM about the Slotman article discussed 2 weeks ago. Unfortunately, we did not meet the deadline for submission, so I'm posting it here:

Slotman et al report that prophylactic cranial irradiation (PCI) increases median overall survival (a secondary endpoint) by 1.3 months in patients with small cell lung cancer. There were no significant differences in various quality of life (QOL) measures between the PCI and control groups. However, non-significant trends toward differences in QOL measures are noted in Table 2. We are not told the direction of these trends, and low compliance (46.3%) with QOL assessments at 9 months limits the statistical power of this analysis. Moreover, significant increases in side effects such as fatigue, nausea, vomiting, and leg weakness may limit the attractiveness of PCI for many patients. Therefore, the conclusion that “prophylactic cranial irradiation should be part of standard care for all patients with small-cell lung cancer” makes unwarranted assumptions about how patients with cancer value quantity and quality of life. The Evidence-Based Medicine working group has proposed that all evidence be considered in light of patients’ preferences, and we believe that this advice applies to PCI for extensive small cell lung cancer.

References

1. Slotman B, Faivre-Finn C, Kramer G, Rankin E, Snee M, Hatton M et al. Prophylactic Cranial Irradiation in Extensive Small-Cell Lung Cancer. N Engl J Med 2007; 357(7):664-672.
2. Weeks JC, Cook EF, O'Day SJ, Peterson LM, Wenger N, Reding D et al. Relationship Between Cancer Patients' Predictions of Prognosis and Their Treatment Preferences. JAMA 1998; 279(21):1709-1714.
3. McNeil BJ, Weichselbaum R, Pauker SG. Speech and survival: tradeoffs between quality and quantity of life in laryngeal cancer. N Engl J Med 1981; 305(17):982-987.
4. Voogt E, van der Heide A, Rietjens JAC, van Leeuwen AF, Visser AP, van der Rijt CCD et al. Attitudes of Patients With Incurable Cancer Toward Medical Treatment in the Last Phase of Life. J Clin Oncol 2005; 23(9):2012-2019.
5. Guyatt GH, Haynes RB, Jaeschke RZ, Cook DJ, Green L, Naylor CD et al. Users' Guides to the Medical Literature: XXV. Evidence-Based Medicine: Principles for Applying the Users' Guides to Patient Care. JAMA 2000; 284(10):1290-1296.

Monday, August 20, 2007

Prophylactic Cranial Irradiation: a matter of blinding, ascertainment, side effects, and preferences

Slotman et al (August 16 issue of NEJM: http://content.nejm.org/cgi/content/short/357/7/664) report a multicenter RCT of prophylactic cranial irradiation for extensive small cell carcinoma of the lung and conclude that it not only reduces symptomatic brain metastases, but also prolongs progression-free and overall survival. This is a well designed and conducted non-industry-sponsored RCT, but several aspects of the trial warrant scrutiny and temper my enthusiasm for this therapy. Among them:

The trial is not blinded (masked is a more sensitive term) from the patient perspective, and no effort was made to create a sham irradiation procedure. While unintentional unmasking due to side effects may have limited the effectiveness of a sham procedure, it may not have rendered it entirely ineffective. This issue is important because meeting the primary endpoint was contingent on patient symptoms, and a placebo effect may have influenced participants’ reporting of symptoms. Some investigators have gone to great lengths to tease out placebo effects using sham procedures, and the results have been surprising (e.g., knee arthroscopy; see: https://content.nejm.org/cgi/content/abstract/347/2/81?ck=nck).

We are not told whether investigators, the patients’ other physicians, radiologists, and statisticians were masked to the treatment assignment. Lack of masking may have led to other differences in patient management, or to differences in the threshold for ordering CT/MRI scans. We are not told the number of CT/MRI scans in each group. In a nutshell: possible ascertainment bias (see http://www.consort-statement.org/?o=1123).

There are several apparently strong trends in the QOL assessments, but we are not told what direction they are in. Significant differences in these scores were unlikely to be found, as the deck was stacked when the trial was designed: p<0.01 was required for significance of QOL assessments. While this is justified because of multiple comparisons, it seems unfair to make the significance level for side effects more conservative than that for the primary outcome of interest (think Vioxx here). The significance level required for secondary endpoints (progression-free and overall survival) was not lowered to account for multiple comparisons. Moreover, more than half of the QOL assessments were missing by 9 months, so this study is underpowered to detect differences in QOL. It is therefore all the more important to know the direction of the trends that are reported.
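The multiple-comparisons adjustment at issue works like a Bonferroni correction: divide the family-wise alpha by the number of comparisons. A minimal sketch (the count of QOL scales here is hypothetical):

```python
def bonferroni_alpha(family_alpha, n_tests):
    """Per-test significance threshold that controls the family-wise
    error rate at family_alpha across n_tests comparisons."""
    return family_alpha / n_tests

# e.g., five QOL scales at a family-wise 0.05 yields a per-test bar
# of 0.01 - while the primary endpoint is still judged at 0.05.
print(bonferroni_alpha(0.05, 5))
```

The asymmetry objected to above is visible here: the correction raises the bar for detecting harms and QOL differences, while the favorable endpoints keep the easier threshold.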

The authors appear to “gloss over” the significant side effects associated with this therapy. It made some subjects ill.

If we are willing to accept that overall survival is improved by this therapy (I’m personally circumspect about this for the above reasons), the bottom line for patients will be whether they would prefer, on average, 5 additional weeks of life with nausea, vomiting, weight loss, fatigue, anorexia, and leg weakness to 5 fewer weeks of life without these symptoms. I think I know what choice many will make, and our projection bias may lead us to make inaccurate predictions of their choices (see Loewenstein, Medical Decision Making, Jan/Feb 2005: http://mdm.sagepub.com/cgi/content/citation/25/1/96).

The authors state in the concluding paragraph:

“Prophylactic cranial irradiation should be part of standard care for all patients with small-cell lung cancer who have a response to initial chemotherapy, and it should be part of the standard treatment in future studies involving these patients.”

I think the decision to use this therapy is one that only patients are justified in making. At least now we have reasonably good data to help inform their choice.

Monday, August 6, 2007

Thalidomide, Phocomelia, and Lessons from History

In tracing the history of evidence-based medicine tonight (for a lecture I have to give on Friday), I found the story of thalidomide on Wikipedia (http://en.wikipedia.org/wiki/Thalidomide).

(While I recognize that the information provided on this site is uncorroborated, I also recognize that it has been referenced by Federal District Courts in various decisions - see http://www.nytimes.com/2007/01/29/technology/29wikipedia.html?ex=1186545600&en=4e6683fb4fac3044&ei=5070 - so I consider it possibility-generating rather than evidence-corroborating.)

This story is a tragic one of a company with a product to sell (a "treatment looking for an indication" - hmmm...) and its unscrupulous marketing of this product in the absence of evidence of both safety and efficacy.

The story of thalidomide should serve as a stark and poignant reminder of the potential harmful effects of a marketing campaign, impelled by profiteering, gone awry.

Sunday, August 5, 2007

AVANDIA and Omission Bias

Amid all the recent hype about Avandia, a few relatively clear-cut observations are apparent (most of which are described better than I could hope to do in the July 5 issue of NEJM; Drazen et al, Dean, and Psaty each wrote wonderful editorials available at www.nejm.org).

1.) Avandia appears to have NO benefits beyond the surrogate endpoint of improved glycemic control (and engorging the coffers of GSK, the manufacturer).

2.) Avandia may well increase the risk of CHF and MI, raise LDL cholesterol, cause weight gain, and increase the risk of fractures (the latter in women).

3.) Numerous alternative agents exist, some of which improve primary outcomes (think UKPDS and metformin), and most of which appear to be safer.

So, what physician in his right mind would start a patient on Avandia (especially in light of #3)? And if you would not START a patient on Avandia, then you should STOP Avandia in patients who are already taking it.


      To not do so would be to commit OMISSION BIAS - which refers to the tendency (in medicine and in life) to view the risks and/or consequences of doing nothing as superior to the risks and/or consequences of acting, even when the converse is true (i.e., the risks and/or consequences of acting are superior to those related to inaction). (For a reference, indulge me: Aberegg et al http://www.chestjournal.org/cgi/content/abstract/128/3/1497.)

      This situation is reminiscent of recommendations relating to the overall (read "net") health benefits of ethanol consumption - physicians are told to not discourage moderate alcohol consumption in patients who already consume, but not to encourage it in those who currently abstain. Well, alcohol is either good for you, or it is not. And since it appears to be good for you, the recommendation on its consumption should not hinge one iota on an arbitrarily established status quo (whether for reasons completely unrelated to health a person currently drinks).
      (For a reference, see Malinski et al: http://archinte.ama-assn.org/cgi/content/abstract/164/6/623; the last paragraph in the discussion could serve as an expose on omission bias.)

      So, let me go out on a limb here: Nobody should be taking Avandia, and use of this medication should not resume until some study demonstrates a substantive benefit in a meaningful outcome which outweighs any risks associated with the drug. Until we do this, we are the victims of OMISSION BIAS (+/- status quo bias) and the profiteering conspiracy of GSK which is beautifully alluded to, along with a poignant description of the probably intentional shortcomings in the design and conduct of the RECORD trial here: Psaty and Furberg http://content.nejm.org/cgi/content/extract/356/24/2522.

Tuesday, July 31, 2007

Secondary Endpoints, Opportunity Costs, Alternatives, Vioxx, Avandia, and Actos

There are few endpoints that can hold a candle to mortality as the end-all, be-all of clinical trial design, but two appear to be fit for the challenge, at least according to past FDA decisions - or are they? Blood pressure lowering and glycemic control.

It is old news that Vioxx kills people, and does so utterly unnecessarily: alternative treatments are available that are generic, low cost, and have no toxicities demonstrably greater than Vioxx's (despite Big Pharma innuendo to the contrary - you know, GI toxicity and the like).

(I am reminded of cognitive dissonance theory here, originally described by Festinger: it has been demonstrated that folks who are more harshly hazed by a fraternity have greater allegiance to it... could this be one of the reasons why paying big bucks for a prescription NSAID with no demonstrable benefits over OTC generics leads to patient claims of superiority of the branded product?)

Well, the old news is still being published: http://content.nejm.org/cgi/content/full/357/4/360.

The interesting thing to me about the Vioxx story is that, with alternatives available (you know, Aleve, Motrin, and the like), and for a "lifestyle drug," safety was not given greater weight. If your primary endpoint is mortality, you might allow an MI or two in your dataset (although you should report them). But when your endpoint is "confirmed clinical upper gastrointestinal events" (http://content.nejm.org/cgi/content/full/343/21/1520), perhaps closer attention ought to be paid to the side effects you have to pay in order to receive the benefits of the primary endpoint. If no other NSAIDs were available to treat patients with crippling arthritis, that would be one thing (think IBS: alosetron withdrawn and then reintroduced to the market because of the lack of a suitable alternative; http://content.nejm.org/cgi/content/full/349/22/2136). But there were alternatives, and this was a lifestyle drug....

And now we have the Avandia debacle, which, surprisingly, did not lead to a recommendation for withdrawal of this drug from the US market by the recent FDA advisory panel (http://sciencenow.sciencemag.org/cgi/content/full/2007/730/1). Once again, this decision, if made by a rational agent, would have given due consideration to whether there are alternative agents that might be used in place of Avandia if it were no longer available. Well, sure enough, in addition to metformin (think UKPDS), insulin, and other oral hypoglycemics, lo and behold: pioglitazone.

Wednesday, July 25, 2007

The Swan Ganz graces the pages of JAMA yet again

The debate over the Swan Ganz catheter continues, this time spurred by a well done report documenting declining use of the catheter over the last decade, based on an analysis of an administrative database (available at http://jama.ama-assn.org/cgi/content/short/298/4/423).

The arguments used in this debate continue to befuddle me with their obvious lack of logical consistency with many other things that go on apparently unnoticed around us, and about which no fuss is made. I will enumerate some of them here.

1.) An air of derision often accompanies denouncements of the Swan Ganz catheter because it is "invasive". This buzzword, however, carries little consequence in reality. That something is "invasive" does not necessarily mean that it is riskier than other things that are done that are "non-invasive". Administration of Cytoxan or other chemotherapeutic agents is not "invasive" by the common definition of the term, yet it is clearly very risky. Other analogies abound. I am not convinced by hyperbolic statements of "invasiveness" that are not supported by actual negative consequences of the device exceeding other risks we routinely take (and take for granted) in medicine.

      2.) And what are the actual negative consequences? In the FACTT trial of ARDSnet, the only adverse consequence was transient arrhythmias. I remain unconvinced.

      3.) What OTHER "invasive" (their definition, not mine) things do we routinely do that have no proven mortality benefit? How about arterial lines, or many (most?) central lines? Why is the critical care (especially the academic critical care) community not rallying against those, if it is invasive devices of unproven [mortality] benefit that we are concerned with?

      4.) Why must this device, unlike almost all other devices and diagnostic modalities, demonstrate a mortality benefit in order to qualify for our acceptance? Must the echocardiogram (within the ICU or without) reduce mortality for its use to be justified? Not invasive, no risks, doesn't count, you say. OK, how about the CT angiogram? There are increasing data on the carcinogenicity of radiation from CT scans (Lee et al, 2004, Health Policy and Practice, "Diagnostic CT Scans..", available at: http://radiology.rsnajnls.org/cgi/reprint/231/2/393.pdf), and there is not insubstantial renal morbidity and risk of anaphylactoid reactions from the dye. Yet we evaluate the CT angiogram on the basis of its ability to identify pulmonary emboli (sensitivity, specificity, and the like), not its ability to reduce mortality (and meanwhile we largely ignore the risks or accept them as the costs of diagnosis). How many patients would be required to conduct such a study of mortality reduction with CT angiography? Is there a study in existence of a diagnostic modality whose use improves mortality? Is there precedent for such a thing? Should it surprise us that intervening more proximally in a clinical pathway (diagnosis rather than treatment) makes it harder (or impossible) to demonstrate a benefit further downstream?
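      As a rough answer to the question of how many patients such a trial would require, here is a minimal sketch of a standard two-proportion sample-size calculation. The baseline mortality and the effect size are my own illustrative assumptions, not figures from any actual study:

```python
from math import ceil

def n_per_arm(p_control, p_treated, z_alpha=1.96, z_beta=0.84):
    """Approximate patients per arm for a two-proportion comparison
    (two-sided alpha = 0.05, power = 80%)."""
    effect = abs(p_control - p_treated)
    variance = p_control * (1 - p_control) + p_treated * (1 - p_treated)
    return ceil(((z_alpha + z_beta) ** 2) * variance / effect ** 2)

# Illustrative assumption: routine CT angiography changes management in a
# way that lowers 30-day mortality from 3.0% to 2.5%.
print(n_per_arm(0.030, 0.025))  # about 16,770 patients per arm
```

      Even a fairly optimistic absolute mortality difference of half a percent would demand a trial of over 33,000 patients, which may be why diagnostic modalities are rarely, if ever, held to a mortality standard.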

      5.) Let's extend the analogy. Suppose we were to design a study of routine use of CT angiogram in the ICU for this or that indication, let's say sudden unexplained hypoxemia. Suppose also that this study shows no benefit (mortality or otherwise) of routine use in this patient population. Does this mean that I should stop using CT angiogram on a selective basis, as those who call for a moratorium imply I should do with the Swan?

      6.) If the arterial line analogy was not sufficient, because there was not a recent study demonstrating a lack of mortality benefit with this device, we have an alternative candidate: the Canadian Critical Care Trials Group study of ("invasive") BAL for the diagnosis of VAP published in the NEJM in December ( http://content.nejm.org/cgi/content/abstract/355/25/2619 ). No rallying cry, no proposed moratorium followed this extremely well conducted trial. No denouncement of BAL in the editorial (http://content.nejm.org/cgi/content/extract/355/25/2691). Quite the contrary - the exclusion of patients with staph and pseudomonas was construed as all but undermining the validity of the results for application to clinical practice. At my own institution, pre-existing staunch enthusiasm for BAL diagnosis of VAP has not wavered since publication of this trial.

      I am no Swan Ganz apologist, and I rarely use the device. But the state of the debate and the arguments used to denounce the Swan do not stand the test of logic or consistency that I expect of the critical care community. And this leads me to believe that these arguments are the spawn of ideology and sanctimony, rather than of logic and balanced consideration.

      An afterthought - Perhaps the most obvious moratorium for the academic community to call for is a moratorium on clinical trials of the Swan. They continue to be performed long after it became clear, meta-analytically, that it will be impossible to show a convincing positive result. The prior probability is now prohibitively low for any reasonably-sized trial to move the posterior away from the prior or sway the results of a meta-analysis.
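      That last claim can be illustrated with a toy calculation. Here is a minimal sketch of conjugate normal-normal Bayesian updating on a log odds ratio; the prior (tightly centered on no effect, as a large meta-analysis would produce) and the hypothetical new trial's result are my own illustrative numbers:

```python
def posterior(prior_mean, prior_var, data_mean, data_var):
    """Conjugate normal-normal update: combine a prior on the log odds
    ratio with a new trial's estimate, weighting each by its precision."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / data_var)
    post_mean = post_var * (prior_mean / prior_var + data_mean / data_var)
    return post_mean, post_var

# Meta-analytic prior: log OR ~ N(0, 0.1^2), tightly centered on "no
# mortality effect". A new, reasonably sized trial reports a log OR of
# -0.3 (roughly a 26% odds reduction) with a standard error of 0.2.
mean, var = posterior(0.0, 0.1 ** 2, -0.3, 0.2 ** 2)
print(round(mean, 3))  # posterior mean is only about -0.06
```

      Even a strikingly positive new trial barely budges the posterior away from no effect - the Bayesian rendering of the argument that further Swan trials cannot be convincing.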

      Thursday, July 19, 2007

      The WAVE trial: The Canadians set the standard once again

      Today's NEJM contains the report of an exemplary trial (the WAVE trial) comparing aspirin to aspirin and warfarin combined in the prevention of cardiovascular events in patients with peripheral vascular disease (http://content.nejm.org/cgi/reprint/357/3/217.pdf). Though this was a "negative" trial in that there was no statistically significant difference in the outcomes between the two treatment groups, I am struck by several features of its design that are worth mentioning.

      Although the trial was the beneficiary of pharmaceutical funding, the authors state:

      "None of the corporate sponsors had any role in the design or conduct of the trial, analysis of the data, or preparation of the manuscript".

      Ideally, this would be true of all clinical trials, but at present it is the exception rather than the rule.



      One way to remove any potential or perceived conflicts of interest might be to mandate that no phase 3 study be designed, conducted, or analyzed by its sponsor. Rather, phase 3 trials could be funded by a sponsor but would be designed, conducted, analyzed, and reported by an independent agency consisting of clinical trials experts, biostatisticians, etc. Such an agency might also receive infrastructural support from governmental agencies. It would have to be large enough to handle the volume of clinical trials, and large enough that a sponsor could not know to which ad hoc design committee a trial would be assigned, thereby preventing unscrupulous sponsors from "stacking the deck" in favor of the agent in which they have an interest.

      The authors of the current article also clearly define and describe the inclusion and exclusion criteria for the trial, and these are not overly restrictive, increasing the generalizability of the results. Moreover, the rationale for the parsimonious inclusion and exclusion criteria is intuitively obvious, unlike in some trials where the reader is left to guess why the authors excluded a particular subgroup. Was it because the agent was thought not to work in that group? Because increased risk was expected in that group? Because study was too difficult (ethically or logistically) in that group (e.g., pregnancy)? Inadequate justification of inclusion and exclusion criteria makes it difficult for practitioners to determine how to incorporate the findings into clinical practice. For example, were pregnant patients excluded from trials of therapeutic hypothermia after cardiac arrest (http://content.nejm.org/cgi/reprint/346/8/549.pdf) for ethical reasons, because of an increased risk to the mother or fetus, because small numbers of pregnant patients were expected, because the IRB frowns upon their inclusion, or for some other reason? Without knowing this, it is difficult to know what to do with a pregnant woman who is comatose following cardiac arrest. Obviously, their lack of inclusion in the trial does not mean that this therapy is not efficacious for them (absence of evidence is not evidence of absence). If I knew that they were excluded because of a biologically plausible concern for harm to the fetus (and I can think of at least one) rather than because of IRB concerns, I would be better prepared to make a decision about this therapy when faced with a pregnant patient after cardiac arrest. Improving the reporting and justification of inclusion and exclusion criteria should be part of efforts to improve the quality of reporting of clinical trials.

      Interestingly, the authors also present an analysis of the composite endpoints (coprimary endpoints 1 and 2) that excludes fatal bleeding and hemorrhagic stroke. When these side effects are excluded from the composite endpoints, there is a trend favoring combination therapy (p values of 0.11 and 0.09, respectively). Composite endpoints are useful because they allow a trial of a given number of patients to have greater statistical power, and it is rational to include side effects in them, since side effects reduce the net value of a therapy. However, an economist or a person versed in expected utility theory (EUT) would say that it is not fair to combine these endpoints without first weighting them by their relative (positive or negative) value. Not weighting them implies that an episode of severe bleeding in this trial is as bad (in negative value or utility) as a death - a contention that I for one would not support. I would much rather bleed than die, or have a heart attack for that matter. Bleeding can usually be readily and effectively treated.
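      A minimal sketch of the weighting point (the per-100-patient event counts and the utility weights below are invented for illustration, not data from the WAVE trial):

```python
def composite_rate(events, weights=None):
    """Sum event counts, optionally weighting each event type by its
    relative disutility; death = 1.0 by convention."""
    if weights is None:
        weights = {k: 1.0 for k in events}  # the usual unweighted composite
    return sum(weights[k] * n for k, n in events.items())

combo   = {"death": 5, "MI": 4, "severe_bleed": 6}  # hypothetical arm A
aspirin = {"death": 7, "MI": 6, "severe_bleed": 1}  # hypothetical arm B

# Unweighted composite: arm A looks worse (15 events vs 14).
print(composite_rate(combo), composite_rate(aspirin))

# Weighted by disutility: the ordering reverses (about 8.6 vs 10.8).
utilities = {"death": 1.0, "MI": 0.6, "severe_bleed": 0.2}
print(round(composite_rate(combo, utilities), 1),
      round(composite_rate(aspirin, utilities), 1))
```

      Under the unweighted composite the first arm looks worse; once bleeding is discounted relative to death and MI, the ordering reverses. The conclusion can hinge entirely on weights that conventional composites leave implicit.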

      In the future, it may be worthwhile to think more about composite endpoints if we are really interested in the net value/utility of a therapy. While it is often difficult to assign a relative value to different outcomes, methods (such as standard gambles) exist and such assignment may be useful in determining the true net value (to society or to a patient) of a new therapy.

      Tuesday, July 10, 2007

      Anidulafungin - a boon for patients, physicians, or Big Pharma?

      The June 14th edition of the NEJM (http://content.nejm.org/cgi/content/short/356/24/2472) contains an article describing a trial of anidulafungin, a new echinocandin antifungal agent similar to the more familiar caspofungin, in invasive candidiasis. The comparator agent was fluconazole. This is a proprietary agent, and the study was fully funded by the pharmaceutical sponsor.

      The trial was a non-inferiority trial, and the chosen "delta" (the treatment difference deemed clinically insignificant) was 20%. This means that the authors would consider a difference in clinical response between the two agents of 19% to be clinically insignificant. No justification for this delta was provided, although providing one is recommended (http://jama.ama-assn.org/cgi/content/abstract/295/10/1152?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=non-inferiority&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT). It is not clear whether clinicians agree with this implicit statement of clinical insignificance, and no poll has been taken to determine whether they do.
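      To make the mechanics of the delta concrete, here is a minimal sketch of the usual confidence-interval test for non-inferiority; the response rates and sample sizes are illustrative stand-ins, not the trial's actual data:

```python
from math import sqrt

def noninferior(p_new, p_ref, n_new, n_ref, delta=0.20, z=1.96):
    """Non-inferiority is declared when the lower bound of the 95% CI
    for (p_new - p_ref) lies above -delta."""
    diff = p_new - p_ref
    se = sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
    lower = diff - z * se
    return lower, lower > -delta

# Illustrative: new agent 70% response, comparator 62%, 120 patients per arm.
lower, ok = noninferior(0.70, 0.62, 120, 120)
print(round(lower, 3), ok)
```

      With delta set at 20%, the bar is low enough that even a trial whose point estimate favored the comparator by 10% or more could be declared non-inferior; shrink delta to a few percent and the same data fail. The entire conclusion rides on a number the sponsor chose.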


      This raises a question: should there be a requirement that clinicians be polled to determine what THEY, rather than the study sponsors, think is a clinically insignificant difference? After all, clinicians are the folks who will be using the drug (if it is approved by the FDA).

      The design of non-inferiority trials is, in my experience, poorly understood among clinicians, and this may be due to the inadequate reporting documented in the article above and in this one (http://jama.ama-assn.org/cgi/content/abstract/295/10/1147?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=equivalence&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT).

      Interestingly, the difference between the agents favored anidulafungin by 15.4% - a difference that, by the trial's own delta, would be clinically insignificant, though the authors did not emphasize this point.

      I am left wondering if individual patients or society are better off now that we have another drug of the echinocandin class available. I would be more convinced that they were if anidulafungin had been compared to 800 mg of fluconazole (rather than 400 mg) or to caspofungin, but alas, it was not. I don't know what the cost of developing and testing this drug was, but I expect that it was on the order of tens to hundreds of millions of dollars - not to mention the costs of subsequent testing, advertising and marketing.

      And the opportunity costs - the other possibilities. What else could have been done with that money that may have benefited individual patients or society more than another echinocandin agent?

      The Medical Evidence Blog - Introduction and Goals

      The goals of this blog are manifold. I will list a few of them below. Hopefully it will serve as a forum to discuss:

      • Emerging evidence in medicine
      • The design, conduct, analysis, and reporting of clinical trials evidence
      • Shenanigans perpetrated by investigators and pharmaceutical companies in the design, conduct, analysis, and reporting of clinical trials whose impetus appears to be something other than a search for the truth
      • The expected impact of emerging evidence on clinical practice and patient care
      • The value of new evidence to individual patients and society
      • Underutilization of emerging and available evidence and therapies
      • Biases in the interpretation of clinical trials evidence

      Given these goals, I feel compelled to admit my own potential conflicts of interest. First, my research focus is on biases in the interpretation of clinical trials evidence, and my career stands to benefit from success in this line of research. Second, I have received and continue to receive speaker fees from Eli Lilly in relation to their promotion of the drug drotrecogin alfa.

      I think the best thing to do is to just "dive in" - so for the next post I will open discussion about a recent NEJM article....