Tuesday, July 31, 2007
It is old news that Vioxx kills people, and does so utterly unnecessarily: alternative treatments are available that are generic, low cost, and have no toxicities demonstrably greater than Vioxx's (despite Big Pharma innuendo to the contrary - you know, GI toxicity and the like).
(I am reminded of cognitive dissonance theory here - originally described by Festinger in 1957. It has been demonstrated that folks who are more harshly hazed by a fraternity develop greater allegiance to it.....could this be one of the reasons why paying big bucks for a prescription NSAID with no demonstrable benefits over OTC generics leads to patient claims of superiority of the branded product?)
Well, the old news is still being published: http://content.nejm.org/cgi/content/full/357/4/360 .
The interesting thing to me about the Vioxx story is that with alternatives available (you know, Aleve, Motrin, and the like), and in relation to a "lifestyle drug," safety was not given greater weight. If your primary endpoint is mortality, you might allow an MI or two in your dataset (although you should report them). But when your endpoint is "confirmed clinical upper gastrointestinal events" (http://content.nejm.org/cgi/content/full/343/21/1520), perhaps closer attention ought to be paid to the side effects you must accept in order to receive the benefits of the primary endpoint. If no other NSAIDs were available to treat patients with crippling arthritis, that would be one thing (think IBS: alosetron was withdrawn and then reintroduced to the market because of the lack of a suitable alternative; http://content.nejm.org/cgi/content/full/349/22/2136). But there were alternatives, and this was a lifestyle drug....
And now we have the Avandia debacle, which, surprisingly, did not lead the recent FDA advisory panel to recommend withdrawal of this drug from the US market (http://sciencenow.sciencemag.org/cgi/content/full/2007/730/1). Once again, it seems this decision, if made by a rational agent, would have given due consideration to whether there are alternative agents that might be used in place of Avandia if it were no longer available. Well, sure enough, in addition to metformin (think UKPDS), insulin, and other oral hypoglycemics, lo and behold: pioglitazone.
Wednesday, July 25, 2007
The arguments used in this debate continue to befuddle me: they are plainly inconsistent with many other things going on, apparently unnoticed, around us, about which no fuss is being made. I will enumerate some of these here.
1.) An air of derision often accompanies denouncements of the Swan Ganz catheter because it is "invasive". This buzz word, however, carries little consequence in reality. That something is "invasive" does not necessarily mean that it is riskier than other things that are done that are "non-invasive". Administration of Cytoxan or other chemotherapeutic agents is not "invasive" by the common definition of the term, yet is clearly very risky. Other analogies abound. I am not convinced by hyperbolic statements of "invasiveness" that are not supported by actual negative consequences of the device that exceed other risks which we routinely take (and take for granted) in medicine.
2.) And what are the actual negative consequences? In the ARDSnet FACTT trial, the only adverse consequence was transient arrhythmias. I remain unconvinced.
3.) What OTHER "invasive" (their definition, not mine) things do we routinely do that have no proven mortality benefit? How about arterial lines, or many (most?) central lines? Why is the critical care (especially the academic critical care) community not rallying against those, if it is invasive devices of unproven [mortality] benefit that we are concerned with?
4.) Why must this device, unlike almost all other devices and diagnostic modalities, demonstrate a mortality benefit in order to qualify for our acceptance? Must the echocardiogram (within the ICU or without) reduce mortality for its use to be justified? Not invasive, no risks, doesn't count, you say. OK, how about the CT angiogram? There are increasing data about the carcinogenicity of radiation from CT scans (Lee et al, 2004, Health Policy and Practice, "Diagnostic CT Scans..", available at: http://radiology.rsnajnls.org/cgi/reprint/231/2/393.pdf), and there is not insubstantial renal morbidity and risk of anaphylactoid reactions from the dye. Yet we evaluate the CT angiogram on the basis of its ability to identify pulmonary emboli (sensitivity, specificity, and the like), not its ability to reduce mortality (and meanwhile we largely ignore the risks, or accept them as the costs of diagnosis). How many patients would be required to conduct such a study of mortality reduction with CT angiography? Is there a study in existence of a diagnostic modality whose use improves mortality? Is there precedent for such a thing? Should it surprise us that intervening more proximally (diagnosis rather than treatment) in a clinical pathway makes it harder (or impossible) to demonstrate a benefit further downstream?
5.) Let's extend the analogy. Suppose we were to design a study of routine use of CT angiogram in the ICU for this or that indication, let's say sudden unexplained hypoxemia. Suppose also that this study shows no benefit (mortality or otherwise) of routine use in this patient population. Does this mean that I should stop using CT angiogram on a selective basis, as those who call for a moratorium imply I should do with the Swan?
6.) If the arterial line analogy was not sufficient, because there was no recent study demonstrating a lack of mortality benefit with this device, we have an alternative candidate: the Canadian Critical Care Trials Group study of ("invasive") BAL for the diagnosis of VAP, published in the NEJM in December (http://content.nejm.org/cgi/content/abstract/355/25/2619). No rallying cry, no proposed moratorium followed this extremely well conducted trial. No denouncement of BAL in the editorial (http://content.nejm.org/cgi/content/extract/355/25/2691). Quite the contrary - the exclusion of patients with staph and pseudomonas was construed as all but undermining the validity of the results for application to clinical practice. At my own institution, pre-existing staunch enthusiasm for BAL diagnosis of VAP has not wavered since publication of this trial.
I am no Swan Ganz apologist, and I rarely use the device. But the state of the debate and the arguments used to denounce the Swan do not stand the test of logic or consistency that I expect of the critical care community. And this leads me to believe that these arguments are the spawn of ideology and sanctimony, rather than logic and balanced consideration.
An afterthought - Perhaps the most obvious moratorium for the academic community to call for is a moratorium on clinical trials of the Swan. They continue to be performed long after it became clear, meta-analytically, that it will be impossible to show a convincing positive result. The prior probability is now prohibitively low for any reasonably-sized trial to move the posterior away from the prior or sway the results of a meta-analysis.
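The afterthought above can be made concrete with a back-of-the-envelope Bayesian sketch. All numbers below are assumed for illustration (a normal approximation on the log-odds-ratio scale, a hypothetical tight meta-analytic prior, a hypothetical new trial): when the pooled prior sits tightly on "no effect," even a new trial reporting a sizeable effect barely moves the posterior.

```python
# Illustrative only: assumed numbers, normal approximation on the
# log-odds-ratio scale for a mortality endpoint.
import math

# Suppose a meta-analysis of prior Swan-Ganz trials yields a pooled
# log odds ratio near 0 (no effect) with high precision.
prior_mean, prior_se = 0.0, 0.05   # tight prior centered on "no benefit"

# A hypothetical new, reasonably sized trial observes log OR -0.16
# (roughly a 15% odds reduction), with a single trial's wide SE.
trial_mean, trial_se = -0.16, 0.20

# Conjugate normal update: the posterior mean is a precision-weighted
# average of prior and new data.
w_prior, w_trial = 1 / prior_se**2, 1 / trial_se**2
post_mean = (w_prior * prior_mean + w_trial * trial_mean) / (w_prior + w_trial)
post_se = math.sqrt(1 / (w_prior + w_trial))

print(f"posterior log OR = {post_mean:.3f} (SE {post_se:.3f})")
# The apparent effect in the new trial is swamped by the tight prior.
```

Under these assumptions the posterior lands at a log odds ratio of about -0.01: the new trial, however expensive, contributes almost nothing to the meta-analytic bottom line.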
Thursday, July 19, 2007
Although the trial was the beneficiary of pharmaceutical funding, the authors state:
"None of the corporate sponsors had any role in the design or conduct of the trial, analysis of the data, or preparation of the manuscript".
Ideally, this would be true of all clinical trials, but for now it remains an idea ahead of its time.
One way to remove any potential or perceived conflicts of interest might be to mandate that no phase 3 study be designed, conducted, or analyzed by its sponsor. Rather, phase 3 trials could be funded by a sponsor but would be designed, conducted, analyzed, and reported by an independent agency consisting of clinical trials experts, biostatisticians, etc. Such an agency might also receive infrastructural support from governmental agencies. It would have to be large enough to handle the volume of clinical trials, and large enough that a sponsor could not know to which ad hoc design committee the trial would be assigned, thereby preventing unscrupulous sponsors from "stacking the deck" in favor of the agent in which they have an interest.
The authors of the current article also clearly define and describe inclusion and exclusion criteria for the trial, and these are not overly restrictive, increasing the generalizability of the results. Moreover, the rationale for the parsimonious inclusion and exclusion criteria is intuitively obvious, unlike some trials where the reader is left to guess why the authors excluded a particular subgroup. Was it because the agent was thought not to work in that group? Because increased risk was expected in that group? Because study was too difficult (ethically or logistically) in that group (e.g., pregnancy)? Inadequate justification of inclusion and exclusion criteria makes it difficult for practitioners to determine how to incorporate the findings into clinical practice. For example, were pregnant patients excluded from trials of therapeutic hypothermia after cardiac arrest (http://content.nejm.org/cgi/reprint/346/8/549.pdf) for ethical reasons, because of an increased risk to the mother or fetus, because small numbers of pregnant patients were expected, because the IRB frowns upon their inclusion, or for some other reason? Without knowing this, it is difficult to know what to do with a pregnant woman who is comatose following cardiac arrest. Obviously, their lack of inclusion in the trial does not mean that this therapy is not efficacious for them (absence of evidence is not evidence of absence). If I knew that they were excluded because of a biologically plausible concern for harm to the fetus (and I can think of at least one) rather than because of IRB concerns, I would be better prepared to make a decision about this therapy when faced with a pregnant patient after cardiac arrest. Improving the reporting and justification of inclusion and exclusion criteria should be part of efforts to improve the quality of reporting of clinical trials.
Interestingly, the authors also present an analysis of the composite endpoints (coprimary endpoints 1 and 2) that excludes fatal bleeding or hemorrhagic stroke. When these side effects are excluded from the composite endpoints, there is a trend favoring combination therapy (p values 0.11 and 0.09, respectively). Composite endpoints are useful because they allow a trial of a given number of patients to have greater statistical power, and it is rational to include side effects in them, as side effects reduce the net value of the therapy. However, an economist or a person versed in expected utility theory (EUT) would say that it is not fair to combine these endpoints without first weighting them by their relative (positive or negative) value. Not weighting them implies that an episode of severe bleeding in this trial is as bad (in negative value or utility) as a death - a contention that I for one would not support. I would much rather bleed than die, or have a heart attack for that matter. Bleeding can usually be readily and effectively treated.
In the future, it may be worthwhile to think more about composite endpoints if we are really interested in the net value/utility of a therapy. While it is often difficult to assign a relative value to different outcomes, methods (such as standard gambles) exist and such assignment may be useful in determining the true net value (to society or to a patient) of a new therapy.
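To make the weighting argument concrete, here is a minimal sketch. All event counts and disutility weights below are hypothetical illustrations, not figures from the trial: the point is that an unweighted composite can rank two arms one way while a utility-weighted composite ranks them the other way.

```python
# Hypothetical event counts per 1000 patients in each arm.
arms = {
    "monotherapy": {"death": 30, "mi": 50, "severe_bleed": 10},
    "combination": {"death": 28, "mi": 40, "severe_bleed": 25},
}

def unweighted(events):
    # The usual composite: every event counts the same, so a severe
    # bleed is implicitly as bad as a death.
    return sum(events.values())

# Illustrative disutility weights (death = 1.0 by convention); in
# practice these could be elicited from patients, e.g. via standard
# gambles.
weights = {"death": 1.0, "mi": 0.5, "severe_bleed": 0.2}

def weighted(events):
    # Utility-weighted composite: each event contributes in proportion
    # to how bad it actually is relative to death.
    return sum(weights[k] * n for k, n in events.items())

for name, events in arms.items():
    print(f"{name}: unweighted {unweighted(events)}, "
          f"weighted {weighted(events):.1f}")
```

With these assumed numbers, the combination arm looks worse on the raw composite (93 vs 90 events) but better once a severe bleed counts as only a fraction of a death (53.0 vs 57.0): the ranking flips, which is exactly why unweighted composites can mislead.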
Tuesday, July 10, 2007
The trial was a non-inferiority trial, and the chosen "delta" (the treatment difference deemed clinically insignificant) was 20%. This means that the authors would consider a difference in clinical response between the 2 agents of 19% to be clinically insignificant. No justification for this delta was provided, although providing one is recommended (http://jama.ama-assn.org/cgi/content/abstract/295/10/1152?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=non-inferiority&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT). It is not clear if clinicians agree with this implicit statement of clinical insignificance, and no poll has been taken to determine if they do.
This raises a question: should there be a requirement that clinicians be polled to determine what THEY, rather than the study sponsors, think is a clinically insignificant difference? After all, clinicians are the folks who will be using the drug (if it is approved by the FDA).
The design of non-inferiority trials is, in my experience, poorly understood among clinicians, and this may be due to inadequate reporting as reported in the above article and in this one (http://jama.ama-assn.org/cgi/content/abstract/295/10/1147?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=equivalence&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT).
Interestingly, the observed difference favored anidulafungin by 15.4% - a difference which, by the trial's own delta, is clinically insignificant, but which the authors did not emphasize as such.
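For readers unfamiliar with the mechanics, here is a minimal sketch of a non-inferiority comparison on the risk-difference scale. The response rates and sample sizes are illustrative (chosen to resemble, not reproduce, the trial's figures), and the delta is the trial's 20%.

```python
# Sketch of a non-inferiority test: the new agent is declared
# non-inferior if the lower 95% confidence bound of (new - reference)
# response-rate difference lies above -delta. Numbers are illustrative.
import math

def ni_check(p_new, n_new, p_ref, n_ref, delta, z=1.96):
    diff = p_new - p_ref
    # Wald standard error for a difference of two proportions.
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
    lower = diff - z * se
    return diff, lower, lower > -delta

# Hypothetical: new agent 76% response (n=127) vs comparator 60%
# (n=118), with delta = 0.20.
diff, lower, noninferior = ni_check(0.76, 127, 0.60, 118, delta=0.20)
print(f"difference {diff:+.3f}, lower 95% bound {lower:+.3f}, "
      f"non-inferior: {noninferior}")
```

Note what happens with numbers like these: the lower confidence bound is not merely above -0.20, it is above zero, i.e., the "non-inferior" drug actually looks superior - which is precisely the oddity when a 15.4% advantage coexists with a 20% insignificance threshold.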
I am left wondering if individual patients or society are better off now that we have another drug of the echinocandin class available. I would be more convinced that they were if anidulafungin had been compared to 800 mg of fluconazole (rather than 400 mg) or to caspofungin, but alas, it was not. I don't know what the cost of developing and testing this drug was, but I expect that it was on the order of tens to hundreds of millions of dollars - not to mention the costs of subsequent testing, advertising and marketing.
And the opportunity costs - the other possibilities. What else could have been done with that money that may have benefited individual patients or society more than another echinocandin agent?
- Emerging evidence in medicine
- The design, conduct, analysis, and reporting of clinical trials evidence
- Shenanigans perpetrated by investigators and pharmaceutical companies in the design, conduct, analysis, and reporting of clinical trials, the impetus behind which appears to be something other than a search for the truth
- The expected impact of emerging evidence on clinical practice and patient care
- The value of new evidence to individual patients and society
- Underutilization of emerging and available evidence and therapies
- Biases in the interpretation of clinical trials evidence
Given these goals, I feel compelled to admit my own potential conflicts of interest. First, my research focus is on biases in the interpretation of clinical trials evidence, and my career stands to benefit from success in this line of research. Second, I have received and continue to receive speaker fees from Eli Lilly in relation to their promotion of the drug drotrecogin alfa.
I think the best thing to do is to just "dive in" - so for the next post I will open discussion about a recent NEJM article....