Replication studies in economics—How many and which papers are chosen for replication, and why?

Authors : Frank Mueller-Langer, Benedikt Fecher, Dietmar Harhoff, Gert G. Wagner

We investigate how often replication studies are published in empirical economics and what types of journal articles are replicated. We find that between 1974 and 2014, only 0.1% of publications in the top 50 economics journals were replication studies.

We consider the results of published formal replication studies (whether they negate or reinforce the original findings) and their scope: narrow replication studies are typically devoted to the mere reproduction of prior work, while scientific replication studies provide a broader analysis.

We find evidence that higher-impact articles and articles by authors from leading institutions are more likely to be replicated, whereas the replication probability is lower for articles that appeared in top 5 economics journals.

Our analysis also suggests that mandatory data disclosure policies may have a positive effect on the incidence of replication.



The Natural Selection of Bad Science

Authors : Paul E. Smaldino, Richard McElreath

Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding.

The persistence of poor methods results partly from incentives that favor them, leading to the natural selection of bad science. This dynamic requires no conscious strategizing—no deliberate cheating nor loafing—by scientists, only that publication is a principal factor for career advancement.

Some normative methods of analysis have almost certainly been selected to further publication instead of discovery. In order to improve the culture of science, a shift must be made away from correcting misunderstandings and towards rewarding understanding. We support this argument with empirical evidence and computational modeling.

We first present a 60-year meta-analysis of statistical power in the behavioral sciences and show that power has not improved despite repeated demonstrations of the necessity of increasing power.

To demonstrate the logical consequences of structural incentives, we then present a dynamic model of scientific communities in which competing laboratories investigate novel or previously published hypotheses using culturally transmitted research methods.

As in the real world, successful labs produce more “progeny”, such that their methods are more often copied and their students are more likely to start labs of their own.

Selection for high output leads to poorer methods and increasingly high false discovery rates. We additionally show that replication slows but does not stop the process of methodological deterioration. Improving the quality of research requires change at the institutional level.
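The selection dynamic described above can be illustrated with a toy simulation (this is an illustrative sketch, not the authors' actual model; the parameter values and payoff function are assumptions chosen for clarity). Each lab has a methodological "effort" level: higher effort means higher statistical power but lower output, while lower effort inflates the false-positive rate and raises the number of publishable positive results. Labs are then copied into the next generation in proportion to their publication payoff:

```python
import random

def simulate(n_labs=100, generations=50, mutation=0.02, seed=1):
    """Toy selection dynamic: labs with low methodological effort produce
    more (but less reliable) positive results, and successful labs are
    preferentially copied. All parameters are illustrative assumptions."""
    rng = random.Random(seed)
    efforts = [rng.random() for _ in range(n_labs)]  # effort in [0, 1]
    base_rate = 0.1  # assumed fraction of tested hypotheses that are true
    for _ in range(generations):
        payoffs = []
        for e in efforts:
            power = 0.2 + 0.6 * e          # high effort -> higher power
            alpha = 0.05 + 0.3 * (1 - e)   # low effort -> inflated false-positive rate
            output = 1.0 - 0.5 * e         # rigor costs productivity
            # Probability that any one study yields a publishable positive result
            pos_rate = base_rate * power + (1 - base_rate) * alpha
            payoffs.append(output * pos_rate)
        # Selection: next generation copies labs in proportion to payoff,
        # with small random mutation of the inherited effort level.
        new = rng.choices(efforts, weights=payoffs, k=n_labs)
        efforts = [min(1.0, max(0.0, e + rng.gauss(0, mutation))) for e in new]
    return sum(efforts) / n_labs  # mean effort after selection

mean_effort = simulate()
```

Because the payoff is strictly decreasing in effort under these assumptions, mean effort collapses toward zero over generations, mirroring the paper's point that no deliberate cheating is needed, only differential copying of high-output labs.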



Preplication, Replication: A Proposal to Efficiently Upgrade Journal Replication Standards

Despite 20 years of progress in promoting replication standards in International Relations (IR), significant problems remain in both the provision of data and the incentives to replicate published research. While replicable research is a public good, there appear to be private incentives for researchers to follow socially suboptimal research strategies.

The current situation has led to a growing concern in IR, as well as across the social sciences, that published research findings may not represent accurate appraisals of the evidence on particular research questions. In this article, I discuss the role of private information in the publication process and review the incentives for producing replicable and nonreplicable research.

A small, but potentially important, change in a journal’s workflow could both deter the publication of nonreplicable work and lower the costs for researchers to build and expand upon existing published research. The suggestion, termed Preplication, is for journals to run the replication data and code for conditionally accepted articles before publication, just as journals routinely check for compliance with style guides.

This change could be implemented alongside other revisions to journal policies across the discipline. In fact, Preplication is already in use at several journals, and I provide an update on how the process has worked at International Interactions.