The purpose of the poll that has been running alongside the posts on this blog for some months now was to determine whether physicians and researchers (a convenience sample of visitors to this site) are intuitively Bayesian when they think about clinical trials.
To summarize the results, 43/68 respondents (63%) reported that they preferred the larger 30-center RCT. This differs significantly from the hypothesized value of 50% (p=0.032).
From a purely mathematical and Bayesian perspective, physicians should be indifferent between a large(r) 30-center RCT involving 2100 patients showing a 5% mortality reduction at p=0.0005, and 3 small(er) 10-center RCTs involving 700 patients each showing the same 5% mortality reduction at p=0.04 apiece. In essence, unless respondents were reading between the lines somewhere, the choice is between two options with identical posterior probabilities: if the three smaller trials are combined, they carry the same evidence as the larger trial, and the meta-analytic p-value is approximately 0.0005. Looked at from a different perspective, the large 30-center trial could have been analyzed as three 10-center trials based on the region of the country in which the centers were located, or on any other arbitrary classification of centers.
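The equivalence claim can be checked numerically. Below is a minimal sketch using Stouffer's z-score method (the post does not specify a combination method; Stouffer's with equal weights is one standard choice, and since the three hypothetical trials are the same size, equal weights apply):

```python
from statistics import NormalDist

norm = NormalDist()

# Each small trial: two-sided p = 0.04 -> z-score
z_small = norm.inv_cdf(1 - 0.04 / 2)  # about 2.05

# Stouffer's method for k equally sized, same-direction trials:
# z_combined = (z1 + z2 + z3) / sqrt(k) = z_small * sqrt(k)
k = 3
z_combined = z_small * k ** 0.5  # about 3.56

# Convert back to a two-sided p-value
p_combined = 2 * (1 - norm.cdf(z_combined))
print(f"{p_combined:.4f}")  # about 0.0004, the same order as the large trial's 0.0005
```

Fisher's method would give a slightly different combined p-value, but the same qualitative conclusion: three independent p=0.04 results for the same-direction effect constitute far stronger evidence than any one of them alone.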
Why this result? I obviously can't say based on this simple poll, but here are some guesses: 1.) People are more comfortable with larger multicenter studies, perhaps because they are accustomed to seeing cardiology mega-trials in journals such as NEJM; and/or 2.) the p-value of 0.04 associated with the small(er) studies seems "marginal", combining the three studies is non-intuitive, and it is not obvious that the combined p-value will be the same. However, I have some (currently unpublished) data showing that, paradoxically, for the same study physicians are more willing to adopt a therapy with a higher rather than a lower p-value.
Further research is obviously needed to determine how physicians respond to evidence from clinical trials and whether or not their responses are normative. In this poll, it appears that they were not.
This is a discussion forum for physicians, researchers, and other healthcare professionals interested in the epistemology of medical knowledge; the limitations of the evidence; how clinical trial evidence is generated, disseminated, and incorporated into clinical practice; how the evidence should optimally be incorporated into practice; and what the value of the evidence is to science, individual patients, and society.
Friday, December 28, 2007
Results of the Poll - Large Trials are preferred
The problem here is that your question suggests that the data from the three RCTs are pooled, which carries the many pitfalls of heterogeneity in the study subjects and designs of the three trials. You're basically asking whether a randomized trial is better than a meta-analysis. You would have to posit a hypothetical situation where these three trials are designed, executed, and evaluated in EXACTLY the same way. In the real world, we're happy if even a single clinical trial is executed successfully. Three independent studies designed and executed to exclude bias in exactly the same way? Dream on! I'll choose one well-designed randomized trial over three separate studies.
Your comment harkens to the very heart of the issue!
We are currently part of a multicenter trial in ARDS. There are other centers in Texas and Maryland. When the results are published, this will be a "large" trial, compared to what would result if each center published the results of the same trial at its own site. Do you not think that there is heterogeneity between these geographically distinct centers? Having practiced and trained on the East Coast, in the Southwest, and in the Midwest, I can affirm for you that the practice patterns are different.
So what I'm saying is that calling it a large multicenter trial does not reduce the heterogeneity - it simply conceals it.
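A toy calculation makes the concealment point concrete (the numbers below are hypothetical, not from any trial): two sets of three centers can yield identical pooled mortality rates while having very different center-level rates, so the pooled "large trial" figure alone cannot distinguish them.

```python
# Hypothetical illustration: two configurations of three centers
# (700 patients each) with the same pooled mortality rate but
# very different center-level rates.
homogeneous = [0.20, 0.20, 0.20]
heterogeneous = [0.10, 0.20, 0.30]

def pooled_rate(rates, n_per_center=700):
    """Overall death rate when center-level results are pooled."""
    deaths = sum(r * n_per_center for r in rates)
    return deaths / (n_per_center * len(rates))

print(round(pooled_rate(homogeneous), 3))    # 0.2
print(round(pooled_rate(heterogeneous), 3))  # 0.2 -- identical pooled estimate
# A single "large trial" report shows only the pooled 0.20; the spread
# across centers (0.10 vs 0.30) is invisible unless center-level
# results are examined separately.
```

The pooled estimate is the same in both cases; only a center-stratified analysis (or a formal heterogeneity test across subtrials) would reveal the difference.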
The Human Injury of Lost Objectivity
If I were to rank the corrupt tactics of big pharmaceutical companies, at the top of the list would be the intentional implementation of fabricated and unreliable clinical trial results by sponsors who manipulate the trials they fund, exploiting their largely unregulated power over everyone else involved. This is likely the most damaging practice of all, both to the authenticity of the research and, more importantly, to the safety of the public health.
Decades ago, clinical trials were conducted in academic settings focused on the acquisition of knowledge and the objective discovery of meds. Then, in 1980, the Bayh-Dole Act was passed, which allowed such institutions to profit from discoveries of the kind they had previously performed for pharmaceutical companies. This contributed to the creation of for-profit research trial sites and of Contract Research Organizations (CROs), often composed of community research sites with questionable investigators who may lack the necessary research experience or quality. They are for-profit operations, with some CROs making billions of dollars a year. The trials conducted at such places are sponsored by pharmaceutical companies that control and manipulate every aspect of the trial involving their med, a coercion accomplished through subtle and tacit methods of deception. As a result, research of this kind has been transformed into a method of marketing, which can include altering trial results to favor the sponsor's med. These activities escape true or applied regulation, leaving the sites the autonomy to create whatever benefits a potentially collusive relationship with the sponsor.
Further disturbing is that once the trials are completed, they are often written up by ghostwriters, although no one seems to know how often. These writers are not identified or acknowledged by the sponsor, and may have no training in clinical research at all, as they are simply freelance writers; no research training or certification is required to perform this function. Trial ghostwriters, who are said to make about 100 grand a year, rarely question their clearly deceptive and undocumented instructions. This practice removes accountability and authenticity from a possibly fabricated clinical trial even further. The corruption is completed when the sponsor pays an author who likely had no involvement with the trial to be listed on it, along with others paid by the sponsor for this deceptive act.
To have the trial published, the sponsor pays a journal, along with a promise to purchase thousands of reprints of the study. Again, how often this happens is unknown, yet it is frequent enough to sustain hundreds of such writers and research sites in support of the pharmaceutical industry. Meds studied in such a malicious way can thus harm patients, their treatment options, and their safety. The purchased reprints are distributed to the sponsor's sales force to share with prescribers.
The misconduct discussed so far impedes research and the scientific method, with frightening ethical and safety implications. Our health care treatment with meds is now compromised in large part by such corruption, and by an objectivity that has been intentionally eliminated. Trust in the scientific method is absent from the activity described here. More now than ever, meds are being removed from the market or given black-box warnings. Now I understand why this is occurring.
Transparency and disclosure need to come to the pharmaceutical industry, for these reasons and many others, in order to restore what we have historically relied upon for conclusive proof: the scientific method. More importantly, research should not be conducted in a way that allows the sponsor to interfere as described here; trials should be run at independent sites with no involvement with the drug maker. And regulation has to be enforced not selectively, but completely. Public awareness would be a catalyst for this to occur, once people get past an initial state of total disbelief that such operations are actually conducted. We can no longer be dependent on others for our optimal health. Knowledge is power, and possibly a lifesaver.
“Ethics and Science need to shake hands.” – Richard Cabot