Friday, November 30, 2007

Eltrombopag: At last, data that speak for themselves

In this week's NEJM, two articles describe the results of two phase 2 studies of eltrombopag, a non-peptide, oral agonist of the thrombopoietin receptor: one in patients with HCV and thrombocytopenia, and another in patients with ITP.

I have grown so weary of investigators who must speak for their data - massaging them, doing post-hoc analyses, proffering excuses for them, changing their endpoints and designs to conform to the data, offering partial analyses, ignoring alternative interpretations, stacking the deck in favor of their agent - that I breathe a sigh of relief and contentment when I see data like these, which are robust enough to speak for themselves, both in level of statistical significance and in an effect size that is clearly clinically meaningful.

Of course, we should be clear about what these studies can tell us and what they can't. These are phase 2 trials, and they certainly demonstrated efficacy and a dose response that should satisfy even the harshest critics (e.g., me). However, the duration of treatment was relatively short, so we don't know if the response can be sustained over time; and the studies were wildly underpowered to detect side effects at all but the highest frequencies. What untoward effects of stimulating megakaryocytes through this pathway might there be? What about thrombotic complications?
(This is an interesting question also - supposing there are increased thrombotic complications with this agent, how will we know whether this is a direct adverse effect of the agent or whether it results from reversal of protection against thrombosis conferred by ITP itself, if such protection even exists?)
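The power problem for rare adverse events can be made concrete with the classic "rule of three": if zero events of a given type are observed among n patients, the upper 95% confidence bound on the true event rate is roughly 3/n. A minimal sketch (the sample sizes below are illustrative, not taken from these trials):

```python
# Rule of three: if 0 adverse events are observed in n patients,
# an approximate upper 95% confidence bound on the true event rate
# is 3/n (from solving (1 - p)^n = 0.05 for p).

def rule_of_three_upper_bound(n: int) -> float:
    """Approximate 95% upper bound on event rate after 0 events in n patients."""
    return 3.0 / n

# Illustrative phase 2 sample sizes (not the actual trial enrollments):
for n in (30, 100, 300):
    print(f"n = {n}: true rate could still be as high as {rule_of_three_upper_bound(n):.1%}")
```

So even a clean safety record in a few hundred patients cannot exclude an adverse event rate of around 1%, which for thrombotic complications would be far from trivial.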

So, we await the results of larger phase 3 trials of eltrombopag, hoping that they are well designed and attuned to careful measurement of adverse effects, content for now that a novel and apparently robust agent has been discovered to add to the currently inadequate treatments for cirrhotic thrombocytopenia and for that associated with ITP.

Sunday, November 25, 2007

Are Merck and Schering-Plough "enhancing" the ENHANCE data?

I'm from Missouri, "The Show-Me State," and like many others, I'd like Merck and Schering-Plough to show me the ENHANCE trial results. I'd like them raw and unenhanced, please. This exposé in the NYT last week is priceless:

I just learned that Matthew Herper at Forbes reported it first in an equally priceless article:

In a nutshell: Sinvastatin (misspelling intentional) recently lost patent protection. Sinvastatin (Zocor) has been combined with ezetimibe (Zetia) to yield the combination drug Vytorin. This combination holds the promise of rescuing Sinvastatin, a multi-billion-dollar drug, from generic death if doctors continue to prescribe it in combination with ezetimibe as a branded product. There's only one problem: unlike sinvastatin, ezetimibe has never been shown to do anything but lower LDL cholesterol, a surrogate endpoint. That's right - just like Torcetrapib, we don't know what ezetimibe does to clinically meaningful outcomes, the ones that patients and doctors care about. (The drug companies care about surrogate outcomes because some of them are sufficient for FDA approval - that subject is a blog post or two in itself.)

So Merck and Schering-Plough designed the ENHANCE trial, which compares 80 mg of simvastatin to 80 mg of simvastatin + 10 mg of ezetimibe on the primary outcomes of carotid intima-media thickness (IMT) and femoral artery IMT. Note that we still don't have a clinically meaningful endpoint as a primary outcome, but we're getting there. A trial assessing the combination's effects on meaningful outcomes isn't due to be completed until 2010. Of course, a big worry here is that ezetimibe is like torcetrapib and that, in spite of creating a more favorable cholesterol profile, there is no clinically meaningful outcome improvement; i.e., the improved cholesterol panel is a merely cosmetic effect of ezetimibe.

(Regarding the ongoing trials evaluating clinical outcomes: Schering-Plough is up to some tricks there too to rescue Sinvastatin from generic death. The IMPROVE-IT study [they need a study to "prove it" before they embark on a mission to "improve it," don't you think?] design can be seen here:

In this study, ezetimibe is not being compared to maximum-dose sinvastatin, nor is a combination of ezetimibe and sinvastatin being compared to maximum-dose sinvastatin alone. If one of those comparisons were done, important information could be gleaned - doctors would know, for example, whether ezetimibe is superior to an alternative (one that is now available as a generic, mind you) at maximum dose, or whether its addition to maximum-dose sinvastatin has any additional yield. But such trials are too risky for the company - they may show that there is no point in prescribing ezetimibe, because it is either less potent than max-dose sinvastatin or has no incremental value over max-dose sinvastatin. So, instead, sinvastatin 40 mg + ezetimibe 10 mg is being compared to sinvastatin 40 mg alone. The main outcomes are hard clinical endpoints - death, stroke, MI, etc.

Suppose this trial is "positive" - that the combination (Vytorin) is superior to sinvastatin 40 mg. Should patients then be on Vytorin (sinvastatin 40 mg + ezetimibe = patent-protected = expensive) instead of sinvastatin 80 mg (= generic = cheap)? There will be no way to know based on this trial, which is exactly the way Schering-Plough wants it. You see, this trial was designed primarily for the purpose of securing patent protection for sinvastatin in the combination pill. Its potential contribution to science and patient care is negligible - so much so, in fact, that I think this trial is unethical. It is unethical because patients volunteer for research mainly out of altruism (although in this case you could argue it's for free drugs). The result of such altruism is expected to be a contribution to science and patient care in the future. But in this case, the science sucks, and the main contribution patients are making goes to the coffers of Schering-Plough. Physicians should stop allowing their patients to participate in such trials, so that their altruism is not violated.)

The NYT article makes some suspicious and concerning observations:

  • The data, expected to be available 6 months ago (the trial was completed almost 2 years ago!), will not be released until some time next year, and then only as a partial analysis, not the complete dataset.
  • The primary endpoint was changed after the trial was concluded! (Originally it was to be carotid IMT at three places, now only at one - a change that is rich fodder for conspiracy theorists, regardless of whether an outside consulting agency suggested it.)
  • Data on femoral artery IMT are not going to be released at all now.

Matthew Herper's Forbes article also notes that the trial was not listed on until Forbes asked why it was not there!

For the a priori trial design and pre-specified analyses, see pubmed ID # 15846260 at . In that report of the study's design, I do not see mention of monitoring of safety endpoints such as mortality and cardiovascular outcomes. But I presume these are being monitored for safety reasons. And Merck and Schering-Plough, who have claimed that they have not released the IMT data because it's taking longer than anticipated to analyze it, could certainly allay some of our concerns by releasing the data on mortality and safety endpoints, couldn't they? It doesn't take very long to add up deaths.

The problem with pre-specifying all these analyses (carotid IMT at 3 locations and femoral IMT) is that you now have multiple endpoints, and your chances of meeting one of them by chance alone are increased. That's why the primary endpoint holds such a hallowed position in the hierarchy of endpoints - it forces you to call your shot. I liken this to billiards, where it doesn't matter how many balls you put down unless you call them - and none of them counts unless you first put down your pre-specified ball; if you fail that, you lose your turn. In this case, if you check a bunch of IMTs, one of them might be significantly different based on chance alone - so if you change the primary endpoint after the study is done, we will rightly be suspicious that you changed it to the one that you saw was positive. That's bad science, and we and the editors of the journals should not let people get away with it.
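The inflation of false-positive risk from multiple endpoints is easy to quantify. Assuming, for simplicity, k independent endpoints each tested at alpha = 0.05 (IMT measures are of course correlated, so this overstates the effect somewhat), the chance of at least one spuriously "significant" result is:

```python
# Family-wise error rate with k independent endpoints, each tested
# at significance level alpha: 1 - (1 - alpha)^k.

def familywise_error(k: int, alpha: float = 0.05) -> float:
    """Probability of at least one false positive across k independent tests."""
    return 1.0 - (1.0 - alpha) ** k

for k in (1, 4, 10):
    print(f"{k} endpoint(s): {familywise_error(k):.1%} chance of a false positive")
```

With four IMT endpoints, the chance of at least one "positive" result by luck alone is already nearly one in five - which is exactly why picking the winner after the fact invites suspicion.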

I have a proposal: When you register a trial at , you should have to list a date of data/analysis release and a summary of the data/analyses that will be released. Should you not release the data/analysis by that pre-specified date, your ability to list or publish future trials, and your ability to seek or pursue regulatory approval for that or any other drug, are suspended until you release the data. Moreover, you are forbidden from releasing the data/analyses prior to the pre-specified date - to prevent shenanigans with pre-specified list dates in the remote future, followed by premature release.

Lung Transplantation: Exempt from the scrutiny of a randomized controlled trial?

In last week's NEJM, Liou et al., in an excellent article, analyzed pediatric lung transplant data and found that there is scant evidence for an improvement in survival associated with this procedure:

The authors seem prepared to accept the unavoidable methodological limitations of their analyses and call for a randomized controlled trial (RCT) of pediatric lung transplantation. The editorialist, however, does not share their enthusiasm for an RCT, and appears to take it on faith that the new organ allocation scheme (whereby the sickest children get organs first) will make everything OK:

True believers die hard. And because of their hardiness, an RCT will be difficult to perform, as many pediatric pulmonologists will be loath to allow their patients to be randomized to no transplant. They have no individual equipoise, even though there appears to be collective equipoise among folks willing to give serious consideration to the available data.

What we have here may be an example of what I will call "action bias," which is basically the opposite of omission bias. In omission bias, people fail to act even though outcomes from action are superior to those from omission - often as a result of reluctance to risk or cause direct harm, even when the direct benefits outweigh that harm on net. Action bias, as the enantiomer of omission bias, refers to causing worse outcomes through action because of the great reluctance to stand by helplessly while a patient is dying - even when the only "therapies" we can offer make patients worse off, save for the hope they provide, reason notwithstanding.

Wednesday, November 21, 2007

Torcetrapib Torpedoed: When the hypothesis is immune to the data

I have watched the torcetrapib saga with interest for some time now. This drug, which is not an HMG-CoA reductase inhibitor, is a powerful raiser of HDL (up to a 100% increase) and effects modest decreases in LDL as well (about 20%), as reported with great fanfare in the NEJM in 2004:

Such was the enthusiasm for this drug that one editorialist in the same journal cried foul in reference to Pfizer's intent to study the drug only in combination with Lipitor, suggesting that such a move was intended to soften the blow to this blockbuster (read: multibillion-dollar) drug when it soon loses patent protection:
The tone is one of serious concern, as this drug was expected to be truly spectacular at BOTH raising HDL and preventing cardiovascular morbidity and mortality - an expectation based on the well-established use of cholesterol lowering as a surrogate endpoint in trials of cardiovascular medications.

(I'm sure the Avandia analogy is banging like a clapper in your skull right now.)

But a perspicacious consumer of the literature on torcetrapib would have noted that there were precious few and conflicting data about its efficacy as an antiatherogenic agent - preclinical data from animal studies were neither consistent nor overwhelming regarding its effects on the vasculature (in spite of the use of VERY high doses of the drug yielding high degrees of CETP inhibition), and studies of patients with CETP mutations were likewise inconsistent regarding the mutations' influence on the development of cardiovascular disease. Certainly, one would expect a drug with such remarkable HDL-raising abilities to do something substantial and consistent to sensitive measures of atherogenesis in preclinical studies, or to have some consistent and perhaps dramatic effect in patients with mutations leading to high HDL levels. (For a good review of pre-clinical studies, see: and )
But alas, there was no consistent and robust evidence for anything but changes in surrogate markers. Of course, this is all hindsight, and it's easy for me to pontificate now that the horse has been let out of the barn; first by Nissen et al:
and then today:
(In fact, I would say that the horse is galloping about the barnyard trampling Lipitor's hopes of life after generic death.)

But what interests me now is not that the drug failed, nor that I have a new archetypal drug for failure of surrogate endpoints, but rather how difficult it is for the believers to let go. True believers die hard. How did the editors let a conclusion like this make it into print:

"In conclusion, our study neither validates nor invalidates the hypothesis that raising levels of HDL cholesterol by the inhibition of CETP may be cardioprotective. Thus, the possibility that the inhibition of CETP may be beneficial will remain hypothetical until it is put to the test in a trial with a CETP inhibitor that does not share the off-target pharmacologic effects of torcetrapib. "


Had the study been positive, would that have been the conclusion? No, the authors would have concluded that the hypothesis was validated.

So if the study is positive, the hypothesis is confirmed; but if it is negative (or shows harm), the hypothesis is immune to the data. The authors should not be allowed to have their cake and eat it too.

The above conclusion is tantamount to saying “our data do not bear on the hypothesis” which is tantamount to saying “our study was badly designed.”

Sure, another agent without that little BP problem may have more salutary effects on mortality, but I'd hate to be the guy trying to get that one through the IRB. Here we have a drug in a class that killed people in the last study; we'd better have more robust pre-clinical data the next time around. The other thing that fascinates me is the grasping for explanations. Here is a drug with ROBUST effects on HDL, and it causes an overall statistically significant increase in mortality. That's one helluva hurdle for the next drug to jump, even without the BP problem. Moreover, I refer the reader to the HOT trial:
A 5 mmHg lowering of BP over a 3.8-year period reduced mortality by a mere 0.9% (p=0.32 - not significant). That's a small reduction, and it's not statistically significant. But lowering LDL with simvastatin (the 4S trial: Lancet. 1994 Nov 19;344(8934):1383-9.) for 3.3 years on average led to a 1.7% ARR in mortality (RR 0.70; 95% CI 0.58-0.85; p = 0.0003). So it would appear that, on average, you get more bang for your buck lowering cholesterol than lowering BP. With an agent that is such a potent raiser of HDL, we would certainly expect at worst a null effect if the BP effect militated against the HDL/LDL effect. I have not done a meta-analysis of trials of BP lowering or cholesterol lowering, but I would be interested in the comparison. For now, I'm substantially convinced that the BP argument is abjectly insufficient to explain the failure of this agent to improve meaningful outcomes.
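For illustration, the mortality figures quoted above translate into numbers needed to treat (NNT = 1/ARR) as follows - a rough comparison only, since the two trials differ in populations, durations, and baseline risks:

```python
# Number needed to treat (NNT) from absolute risk reduction (ARR),
# using the mortality ARRs quoted above for the HOT and 4S trials.

def nnt(arr: float) -> float:
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1.0 / arr

hot_arr = 0.009  # 0.9% mortality ARR (HOT; not statistically significant)
s4_arr = 0.017   # 1.7% mortality ARR (4S; p = 0.0003)

print(f"HOT (BP lowering):  NNT ≈ {nnt(hot_arr):.0f}")
print(f"4S (LDL lowering):  NNT ≈ {nnt(s4_arr):.0f}")
```

Roughly 111 patients treated per death averted with BP lowering versus about 59 with statin LDL lowering - the crude arithmetic behind the "more bang for your buck" claim.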

So the search will go on for a molecular variation of this agent which doesn't increase BP, with the hopes that another blockbuster cholesterol agent will be discovered. But in all likelihood, this mechanism of altering cholesterol metabolism is fatally flawed and I wouldn't volunteer any of my patients for the next trial. I'd give them 80mg of generic simvastatin or atorvastatin.

Wednesday, November 7, 2007

Plavix Defeated: Prasugrel is superior in a properly designed and executed study

Published early on Sunday, November 5th in the NEJM is a randomized controlled superiority trial comparing clopidogrel to a novel agent - Prasugrel.

Prasugrel was superior to Plavix. And it was superior to a degree similar to the degree to which Plavix is superior to aspirin alone.

So, by precedent, if one accepts the notion that aspirin alone is inferior to aspirin plus Plavix because reductions in death and MI on the order of 2-3% are thought to be non-negligible (as I think they should be), one must also accept the notion that, given the choice between Plavix and Prasugrel, one should choose the latter.

There is this issue of bleeding. But, eschewing your tendency toward omission bias, as I know you are wont to do, you will agree that even if bleeding were as bad as death or MI (and it is NOT!), the net benefit of Prasugrel remains positive. Bleeding gums with dental flossing is annoying - until you compare your life to that of your neighbor in cardiac rehab after his MI.

There is also the issue of Plavix's patent expiration in a few years. If the medications were equivalently priced, the choice would be a no-brainer. If Prasugrel is costly and Plavix is generic, the calculus increases considerably in complexity - both from the perspective of the patient paying out of pocket and that of the policy expert wielding his cost-effectiveness analysis. If my co-pay were the same, I would certainly choose Prasugrel. But if money were tight, I might consider that diet and exercise (which are free, financially at least) may be a more cost-effective personal intervention than the co-pay for an expensive drug.

And what if Plavix at a higher dose is just as effective as Prasugrel? That question will have to be answered by future RCTs, which may be unlikely to happen if Plavix is about to lose patent protection...

Saturday, November 3, 2007

Post-exposure prophylaxis for Hepatitis A: Temptation seizes even the most well-intentioned authors

Victor et al report in the October 25th NEJM the non-inferiority of hepatitis A vaccine to immune globulin for post-exposure prophylaxis of hepatitis A. The results are convincing for the non-inferiority hypothesis: symptomatic hepatitis A occurred in 4.4% of subjects who received vaccine versus 3.3% of subjects who received immune globulin (RR 1.35; 95% CI 0.70-2.67).

This is a very well-executed non-inferiority study. If one looks at the methods section, s/he sees that the authors described very well their non-inferiority hypothesis and how it was arrived at. Given the low baseline rate of symptomatic hepatitis A (~3%), a RR of 3.0 is a reasonable non-inferiority margin. In spite of this, on the basis of the non-significant trend toward less symptomatic hepatitis A in the immune globulin group, the authors suggest that this agent may be preferred.

Again, one cannot have his cake and eat it too. One either conducts a non-inferiority trial and accepts non-inferior results as meaning that one agent is non-inferior to the alternative agent, or one conducts a superiority trial to demonstrate that one agent is truly superior. If the point estimates in this trial are close to correct, and immune globulin is 1.1% superior to HAV vaccine, ~7300 patients would be required in EACH group to determine superiority at a power of 90% and an alpha of 0.05. So the current trial is no substitute for a superiority trial with ~7300 patients in each group. Unless such a trial is performed, HAV vaccine and immune globulin are non-inferior to each other for post-exposure prophylaxis of HAV, period.
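As a sanity check on a figure of that magnitude, here is the standard normal-approximation sample-size formula for comparing two proportions - a back-of-the-envelope sketch only; the exact number depends on the formula variant (pooled vs. unpooled variance, continuity correction), and this version lands in the same several-thousand-per-group ballpark:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.90) -> int:
    """Approximate patients per group needed to detect p1 vs p2
    with a two-sided test at the given alpha and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ≈ 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ≈ 1.28 for 90% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# 4.4% vs 3.3% symptomatic hepatitis A, the rates reported in the trial:
print(n_per_group(0.044, 0.033))  # several thousand patients per group
```

Either way, the required trial dwarfs the one actually conducted, which is the point: a non-inferiority trial simply cannot double as evidence of superiority.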

To sum up: one either believes that two agents are non-inferior (or, more conservatively, equivalent) and therefore conducts a non-inferiority trial and accepts the results based on the a priori margin (delta) that he himself specified - or he conducts a superiority trial to demonstrate unequivocally that his preferred agent is superior to the comparator.