For any one of us to receive proper medical care, we need others to have participated in clinical trials. However, clinical trials only work—and are only ethical—if they have adequate scientific justification and if participants are protected from harm and undue burdens.
Unfortunately, numerous systematic studies of clinical trials suggest that many go forward lacking a sound scientific foundation. Some trials test clinical hypotheses that have already been proven. In one systematic study, scientists determined that more than 64 placebo-controlled trials were run between 1987 and 2002 testing the effectiveness of an injectable protease inhibitor called aprotinin for perioperative bleeding, even though its efficacy had been established beyond a shadow of a doubt after the first 10 trials. Other trials test hypotheses that have insufficient support, as when large Phase 3 trials of cancer drugs are run without preliminary evidence of efficacy from Phase 2 trials. Oversight bodies such as Institutional Review Boards (IRBs) or scientific merit committees are supposed to intercept such misguided research efforts before they mature into clinical trials, but our research suggests that review bodies often aren’t given the information they need to do their job.
As two of us have argued elsewhere, such “uninformative” trials waste research resources and inappropriately expose human participants to risks that are not balanced by sufficient prospects of societal gain. But steps can be taken to avoid this problem and to improve the direction and execution of clinical trials, to the benefit of science and the public.
One basic component of any thorough review is a description of the relevant clinical trial landscape. Akin to a GPS for clinical research, such a review tells us where we have been before, scientifically, and where the new trial will take us. What studies were done in the past, and how many have addressed the same clinical hypothesis? What are the best routes to take—the best methods, or populations—to get answers that will fill a gap in our evidence base?
Our team set out to determine whether IRBs were given a complete map of the clinical trial landscape for newly proposed trials. Typically, IRBs learn about the scientific background of a clinical trial from the protocols that sponsors submit to them prior to launching a new study. We accessed the bibliographies for a sample of 101 clinical trial protocols from ClinicalTrials.gov. We then compared each bibliography with clinical trials of the same drug-indication pairing that we identified using simple searches of PubMed (for published trials) and ClinicalTrials.gov (for ongoing and completed trials, regardless of publication status). We conducted each search as if it had been done at the time the protocol was written.
Our results were mixed. Some protocols—particularly those for which few trials had tested the same drug for the same disease—cited the relevant trials. But many provided an incomplete picture of prior and ongoing research testing similar clinical hypotheses. Roughly one in five protocols (22 percent) that we knew to be repeating a similar trial did not cite that trial. Protocols with a greater number of previously completed similar trials tended to omit more citations: of the 22 protocols that could have cited four or more earlier trials, 13 (59 percent) cited half or fewer. In other words, many readily identifiable trials were never presented to oversight bodies for consideration. Finally, no protocol stated that its sponsors had systematically surveyed all completed and ongoing trials testing similar hypotheses.
Such selective and unsystematic use of prior evidence might be forgivable if researchers preferentially cited the “best” trials—studies that were larger, more rigorous, or methodologically better aligned—but we found no evidence that this was the case. If the protocols we sampled are representative of the documentation submitted for ethics review, these citation omissions deprive IRBs and other reviewers of the evidence they need to assess whether a new trial is worthwhile.
Our results are consistent with prior work indicating that many clinical trials are initiated even though they are redundant or scientifically uninteresting. Ensuring that IRBs receive a comprehensive overview of existing trials is a key step toward curbing the wasted resources, and the unnecessary risks to participants, that such oversights can produce. And our own work suggests that, absent systematic review, IRBs may make biased decisions.
Policymakers and academic medical centers should adopt and enforce policies that improve the mapping of clinical trials. These include policies akin to the SPIRIT guidelines and those of the Canadian Institutes of Health Research, which require sponsors to present the results of a systematic search for trials that have tested or are testing related clinical hypotheses. A full systematic review, entailing a protocol and a thorough search of trial databases, would be optimal, but a formalized requirement for any form of unbiased literature review would likely improve on present practice. In the meantime, IRBs (and, for that matter, patients contemplating enrollment in trials) should be aware that they may be getting an incomplete picture of whether and why a proposed trial is worth doing.
Jacky Sheng is a recent graduate of McGill University, where Jonathan Kimmelman is James McGill Professor in the Department of Equity, Ethics and Policy. Deborah Zarin is a program director in the Multi-Regional Clinical Trials Center of Brigham and Women’s Hospital and Harvard University.