The Natural Selection of Bad Science

Authors : Paul E. Smaldino, Richard McElreath

Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding.
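A standard back-of-the-envelope calculation (notation mine, not taken from the abstract) shows why lax methods inflate false positives. With a base rate b of true hypotheses, false-positive rate α, and statistical power 1−β, the expected false discovery rate among positive results is:

```latex
\mathrm{FDR} \;=\; \frac{\alpha\,(1-b)}{\alpha\,(1-b) + (1-\beta)\,b}
```

For example, with b = 0.1, α = 0.05, and power 1−β = 0.5, about 0.045 / (0.045 + 0.05) ≈ 47% of positive findings are false; lowering power or inflating α pushes the rate higher still.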

The persistence of poor methods results partly from incentives that favor them, leading to the natural selection of bad science. This dynamic requires no conscious strategizing by scientists (no deliberate cheating, no loafing), only that publication is a principal factor for career advancement.

Some normative methods of analysis have almost certainly been selected to further publication instead of discovery. In order to improve the culture of science, a shift must be made away from correcting misunderstandings and towards rewarding understanding. We support this argument with empirical evidence and computational modeling.

We first present a 60-year meta-analysis of statistical power in the behavioral sciences and show that power has not improved despite repeated demonstrations of the necessity of increasing power.
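As a concrete illustration (my example, not a figure from the paper), the power of a common design is easy to compute: a two-sample t-test with 30 subjects per group has only about 11% power to detect a small effect (Cohen's d = 0.2).

```python
# Minimal sketch: statistical power of a two-sample t-test,
# using statsmodels' standard power calculator.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power to detect a small effect (d = 0.2) with n = 30 per group:
power = analysis.power(effect_size=0.2, nobs1=30, alpha=0.05)
print(f"power: {power:.2f}")  # ~0.11: most true small effects are missed

# Sample size per group needed for the conventional 80% power:
n = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"n per group for 80% power: {n:.0f}")  # ~394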

To demonstrate the logical consequences of structural incentives, we then present a dynamic model of scientific communities in which competing laboratories investigate novel or previously published hypotheses using culturally transmitted research methods.

As in the real world, successful labs produce more “progeny”, such that their methods are more often copied and their students are more likely to start labs of their own.

Selection for high output leads to poorer methods and increasingly high false discovery rates. We additionally show that replication slows but does not stop the process of methodological deterioration. Improving the quality of research requires change at the institutional level.
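To make the selection dynamic concrete, here is a deliberately simplified toy simulation in the spirit of (but far cruder than) the authors' model; every parameter value and functional form is my assumption. Labs that spend less effort per study run more studies and report more positives, so copying productive labs steadily erodes methodological effort:

```python
# Toy sketch (not the authors' model): selection for publication output.
# Each lab has an "effort" level; low effort yields more trials and a
# higher false-positive rate, hence more positive results to publish.
import random

random.seed(1)

N_LABS, GENERATIONS, MUTATION = 100, 2000, 0.05
labs = [random.uniform(0.5, 1.0) for _ in range(N_LABS)]  # effort in (0, 1]

def publications(effort):
    base_rate = 0.1                     # share of hypotheses that are true
    power = 0.8 * effort                # rigor buys statistical power
    alpha = 0.05 + 0.3 * (1 - effort)   # sloppiness inflates false positives
    trials = int(20 * (1.5 - effort))   # low effort -> more studies run
    return sum(
        (random.random() < power) if random.random() < base_rate
        else (random.random() < alpha)
        for _ in range(trials)
    )

for _ in range(GENERATIONS):
    output = [publications(e) for e in labs]
    # "Selection": a random lab dies and is replaced by a mutated copy
    # of the most productive lab in a random tournament of ten.
    dead = random.randrange(N_LABS)
    parent = max(random.sample(range(N_LABS), 10), key=lambda i: output[i])
    labs[dead] = min(1.0, max(0.05, labs[parent] + random.gauss(0, MUTATION)))

print(f"mean effort after {GENERATIONS} generations:",
      round(sum(labs) / N_LABS, 3))  # drifts toward the low-effort floor
```

Running this shows mean effort collapsing even though no individual lab ever decides to cut corners, which is the paper's central point about incentives.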

DOI : http://dx.doi.org/10.1098/rsos.160384

Preplication, Replication: A Proposal to Efficiently Upgrade Journal Replication Standards

Despite 20 years of progress in promoting replication standards in International Relations (IR), significant problems remain in both the provision of data and the incentives to replicate published research. While replicable research is a public good, there appear to be private incentives for researchers to follow socially suboptimal research strategies.

The current situation has led to a growing concern in IR, as well as across the social sciences, that published research findings may not represent accurate appraisals of the evidence on particular research questions. In this article, I discuss the role of private information in the publication process and review the incentives for producing replicable and nonreplicable research.

A small, but potentially important, change in a journal’s workflow could both deter the publication of nonreplicable work and lower the costs for researchers to build and expand upon existing published research. The suggestion, termed Preplication, is for journals to run the replication data and code for conditionally accepted articles before publication, just as journals routinely check for compliance with style guides.
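As a sketch of what this workflow could look like in practice (a hypothetical illustration; the file layout, names, and checks are my assumptions, not any journal's actual pipeline), an editorial office could script the check as: re-run the submitted code in the replication archive and verify that it regenerates the author-submitted result files.

```python
# Hypothetical "Preplication" check: run the authors' replication code
# and confirm it reproduces the submitted results files byte-for-byte.
import hashlib
import subprocess
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def preplicate(archive_dir: str, entrypoint: str = "replicate.py") -> bool:
    """Re-run the replication script and compare its outputs against
    the author-submitted files in results_submitted/ (layout assumed)."""
    root = Path(archive_dir)
    submitted = {p.name: sha256(p)
                 for p in (root / "results_submitted").iterdir()}

    # Run the authors' code; a real pipeline would pin the software
    # environment with a container or lockfile before executing.
    subprocess.run(["python", entrypoint], cwd=root, check=True)

    for name, digest in submitted.items():
        out = root / "results" / name
        if not out.exists() or sha256(out) != digest:
            print(f"MISMATCH: {name}")
            return False
    print("All submitted results reproduced.")
    return True
```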

This change could be implemented alongside other revisions to journal policies around the discipline. In fact, Preplication is already in use at several journals, and I provide an update as to how the process has worked at International Interactions.

URL : http://isp.oxfordjournals.org/content/early/2016/02/10/isp.ekv016