PLoS Medicine | www.plosmedicine.org 0701
totality of the evidence. Diminishing bias through enhanced research standards and curtailing of prejudices may also help. However, this may require a change in scientific mentality that might be difficult to achieve.
In some research designs, such as randomized trials, efforts may also be more successful with upfront registration of studies [35]. Registration would pose a challenge for hypothesis-generating research. Some kind of registration or networking of data collections or investigators within fields may be more feasible than registration of each and every hypothesis-generating experiment. Regardless, even if we do not see a great deal of progress with registration of studies in other fields, the principles of developing and adhering to a protocol could be more widely borrowed from randomized controlled trials.
Finally, instead of chasing statistical significance, we should improve our understanding of the range of R values—the pre-study odds—within which research efforts operate [10]. Before running an experiment, investigators should consider what they believe the chances are that they are testing a true rather than a non-true relationship. Speculated high R values may sometimes then be ascertained. As described above, whenever ethically acceptable, large studies with minimal bias should be performed on research findings that are considered relatively established, to see how often they are indeed confirmed. I suspect several established “classics” will fail the test [36].
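As a rough illustration of how the pre-study odds R govern the credibility of a claimed finding, the sketch below applies the positive predictive value formula developed earlier in this essay, PPV = (1 − β)R/(R − βR + α), in the absence of bias. The particular α, β, and R values chosen here are illustrative assumptions, not figures from the text.

```python
def ppv(R, alpha=0.05, beta=0.20):
    """Post-study probability that a claimed relationship is true,
    given pre-study odds R, type I error alpha, and type II error beta
    (no bias). Formula from the essay's modeling section:
    PPV = (1 - beta) * R / (R - beta * R + alpha)."""
    return (1 - beta) * R / (R - beta * R + alpha)

# Illustrative pre-study odds: a well-supported hypothesis,
# an exploratory one, and a discovery-oriented long shot.
for R in (2.0, 0.25, 0.001):
    print(f"R = {R:>6}: PPV = {ppv(R):.3f}")
```

Even with conventional 80% power and α = 0.05, a finding probed at very low pre-study odds remains almost certainly false after a single “significant” result, which is why estimating where R lies matters more than the p-value itself.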
Nevertheless, most new discoveries will continue to stem from hypothesis-generating research with low or very low pre-study odds. We should then acknowledge that statistical significance testing in the report of a single study gives only a partial picture, without knowing how much testing has been done outside the report and in the relevant field at large. Despite a large statistical literature on multiple testing corrections [37], it is usually impossible to decipher how much data dredging by the reporting authors or other research teams has preceded a reported research finding. Even if determining this were feasible, it would not inform us about the pre-study odds. Thus, it is unavoidable to make approximate assumptions about how many relationships are expected to be true among those probed across the relevant research fields and research designs. The wider field may yield some guidance for estimating this probability for the isolated research project. Experiences from biases detected in neighboring fields would also be useful to draw upon. Even though these assumptions would be considerably subjective, they would still be very useful in interpreting research claims and putting them in context.
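To illustrate why the amount of testing across a field matters, the sketch below follows the essay's earlier corollary for n independent teams probing the same relationship in the absence of bias: the chance that at least one team reports a “positive” result grows with n, and the credibility of any single positive report falls. The n values and error rates here are illustrative assumptions.

```python
def ppv_n_teams(R, n, alpha=0.05, beta=0.20):
    """PPV of a claimed finding when n independent teams test the same
    relationship (no bias). From the essay's earlier corollary:
    PPV = R*(1 - beta**n) / (R + 1 - (1 - alpha)**n - R*beta**n)."""
    true_pos = R * (1 - beta ** n)        # >=1 positive, relationship true
    false_pos = 1 - (1 - alpha) ** n      # >=1 positive, relationship false
    return true_pos / (true_pos + false_pos)

# Same pre-study odds, increasing numbers of teams probing the question.
for n in (1, 5, 25):
    print(f"n = {n:>2} teams: PPV = {ppv_n_teams(0.25, n):.3f}")
```

A single reported p-value cannot reveal n, which is why approximate, admittedly subjective assumptions about field-wide testing are unavoidable when interpreting a claim.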
References
1. Ioannidis JP, Haidich AB, Lau J (2001) Any
casualties in the clash of randomised and
observational evidence? BMJ 322: 879–880.
2. Lawlor DA, Davey Smith G, Kundu D,
Bruckdorfer KR, Ebrahim S (2004) Those
confounded vitamins: What can we learn from
the differences between observational versus
randomised trial evidence? Lancet 363: 1724–
1727.
3. Vandenbroucke JP (2004) When are
observational studies as credible as randomised
trials? Lancet 363: 1728–1731.
4. Michiels S, Koscielny S, Hill C (2005)
Prediction of cancer outcome with microarrays:
A multiple random validation strategy. Lancet
365: 488–492.
5. Ioannidis JPA, Ntzani EE, Trikalinos TA,
Contopoulos-Ioannidis DG (2001) Replication
validity of genetic association studies. Nat
Genet 29: 306–309.
6. Colhoun HM, McKeigue PM, Davey Smith
G (2003) Problems of reporting genetic
associations with complex outcomes. Lancet
361: 865–872.
7. Ioannidis JP (2003) Genetic associations: False
or true? Trends Mol Med 9: 135–138.
8. Ioannidis JPA (2005) Microarrays and
molecular research: Noise discovery? Lancet
365: 454–455.
9. Sterne JA, Davey Smith G (2001) Sifting the evidence—What’s wrong with significance tests. BMJ 322: 226–231.
10. Wacholder S, Chanock S, Garcia-Closas M, El Ghormli L, Rothman N (2004) Assessing the probability that a positive report is false: An approach for molecular epidemiology studies. J Natl Cancer Inst 96: 434–442.
11. Risch NJ (2000) Searching for genetic
determinants in the new millennium. Nature
405: 847–856.
12. Kelsey JL, Whittemore AS, Evans AS, Thompson WD (1996) Methods in observational epidemiology, 2nd ed. New York: Oxford University Press. 432 p.
13. Topol EJ (2004) Failing the public health—
Rofecoxib, Merck, and the FDA. N Engl J Med
351: 1707–1709.
14. Yusuf S, Collins R, Peto R (1984) Why do we
need some large, simple randomized trials? Stat
Med 3: 409–422.
15. Altman DG, Royston P (2000) What do we
mean by validating a prognostic model? Stat
Med 19: 453–473.
16. Taubes G (1995) Epidemiology faces its limits.
Science 269: 164–169.
17. Golub TR, Slonim DK, Tamayo P, Huard C, Gaasenbeek M, et al. (1999) Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science 286: 531–537.
18. Moher D, Schulz KF, Altman DG (2001)
The CONSORT statement: Revised
recommendations for improving the quality
of reports of parallel-group randomised trials.
Lancet 357: 1191–1194.
19. Ioannidis JP, Evans SJ, Gotzsche PC, O’Neill
RT, Altman DG, et al. (2004) Better reporting
of harms in randomized trials: An extension
of the CONSORT statement. Ann Intern Med
141: 781–788.
20. International Conference on Harmonisation
E9 Expert Working Group (1999) ICH
Harmonised Tripartite Guideline. Statistical
principles for clinical trials. Stat Med 18: 1905–
1942.
21. Moher D, Cook DJ, Eastwood S, Olkin I,
Rennie D, et al. (1999) Improving the quality
of reports of meta-analyses of randomised
controlled trials: The QUOROM statement.
Quality of Reporting of Meta-analyses. Lancet
354: 1896–1900.
22. Stroup DF, Berlin JA, Morton SC, Olkin I,
Williamson GD, et al. (2000) Meta-analysis
of observational studies in epidemiology:
A proposal for reporting. Meta-analysis
of Observational Studies in Epidemiology
(MOOSE) group. JAMA 283: 2008–2012.
23. Marshall M, Lockwood A, Bradley C,
Adams C, Joy C, et al. (2000) Unpublished
rating scales: A major source of bias in
randomised controlled trials of treatments for
schizophrenia. Br J Psychiatry 176: 249–252.
24. Altman DG, Goodman SN (1994) Transfer
of technology from statistical journals to the
biomedical literature. Past trends and future
predictions. JAMA 272: 129–132.
25. Chan AW, Hrobjartsson A, Haahr MT,
Gotzsche PC, Altman DG (2004) Empirical
evidence for selective reporting of outcomes in
randomized trials: Comparison of protocols to
published articles. JAMA 291: 2457–2465.
26. Krimsky S, Rothenberg LS, Stott P, Kyle G (1998) Scientific journals and their authors’ financial interests: A pilot study. Psychother Psychosom 67: 194–201.
27. Papanikolaou GN, Baltogianni MS, Contopoulos-Ioannidis DG, Haidich AB, Giannakakis IA, et al. (2001) Reporting of conflicts of interest in guidelines of preventive and therapeutic interventions. BMC Med Res Methodol 1: 3.
28. Antman EM, Lau J, Kupelnick B, Mosteller F,
Chalmers TC (1992) A comparison of results
of meta-analyses of randomized control trials
and recommendations of clinical experts.
Treatments for myocardial infarction. JAMA
268: 240–248.
29. Ioannidis JP, Trikalinos TA (2005) Early
extreme contradictory estimates may
appear in published research: The Proteus
phenomenon in molecular genetics research
and randomized trials. J Clin Epidemiol 58:
543–549.
30. Ntzani EE, Ioannidis JP (2003) Predictive
ability of DNA microarrays for cancer outcomes
and correlates: An empirical assessment.
Lancet 362: 1439–1444.
31. Ransohoff DF (2004) Rules of evidence
for cancer molecular-marker discovery and
validation. Nat Rev Cancer 4: 309–314.
32. Lindley DV (1957) A statistical paradox.
Biometrika 44: 187–192.
33. Bartlett MS (1957) A comment on D.V.
Lindley’s statistical paradox. Biometrika 44:
533–534.
34. Senn SJ (2001) Two cheers for P-values. J
Epidemiol Biostat 6: 193–204.
35. De Angelis C, Drazen JM, Frizelle FA, Haug C,
Hoey J, et al. (2004) Clinical trial registration:
A statement from the International Committee
of Medical Journal Editors. N Engl J Med 351:
1250–1251.
36. Ioannidis JPA (2005) Contradicted and
initially stronger effects in highly cited clinical
research. JAMA 294: 218–228.
37. Hsueh HM, Chen JJ, Kodell RL (2003)
Comparison of methods for estimating the
number of true null hypotheses in multiplicity
testing. J Biopharm Stat 13: 675–689.
August 2005 | Volume 2 | Issue 8 | e124