Reproducibility in Management Science

Authors : Miloš Fišar, Ben Greiner, Christoph Huber, Elena Katok, Ali I. Ozkes

With the help of more than 700 reviewers, we assess the reproducibility of nearly 500 articles published in the journal Management Science before and after the introduction of a new Data and Code Disclosure policy in 2019.

When considering only articles for which data accessibility and hardware and software requirements were not an obstacle for reviewers, the results of more than 95% of articles under the new disclosure policy could be fully or largely computationally reproduced. However, for 29% of articles, at least part of the data set was not accessible to the reviewer. Considering all articles in our sample reduces the share of reproduced articles to 68%.

These figures represent a significant increase compared with the period before the introduction of the disclosure policy, when only 12% of articles voluntarily provided replication materials, of which 55% could be (largely) reproduced. Substantial heterogeneity in reproducibility rates across fields is driven mainly by differences in data set accessibility.

Other reasons for unsuccessful reproduction attempts include missing code, unresolvable code errors, weak or missing documentation, demanding software and hardware requirements, and code complexity. Our findings highlight the importance of journal code and data disclosure policies and suggest potential avenues for enhancing their effectiveness.

DOI : https://doi.org/10.1287/mnsc.2023.03556

Open science, the replication crisis, and environmental public health

Author : Daniel J. Hicks

Concerns about a crisis of mass irreplicability across scientific fields (“the replication crisis”) have stimulated a movement for open science, encouraging or even requiring researchers to publish their raw data and analysis code.

A recently proposed rule at the US Environmental Protection Agency (US EPA) would have imposed a strong open data requirement. The rule prompted significant public discussion about whether open science practices are appropriate for fields of environmental public health.

The aims of this paper are to assess (1) whether the replication crisis extends to fields of environmental public health; and (2) in general whether open science requirements can address the replication crisis.

There is little empirical evidence for or against mass irreplicability in environmental public health specifically. Without such evidence, strong claims about whether the replication crisis extends to environmental public health – or not – seem premature.

By distinguishing three concepts – reproducibility, replicability, and robustness – it is clear that open data initiatives can promote reproducibility and robustness but do little to promote replicability.

I conclude by reviewing some of the other benefits of open science, and offer some suggestions for funding streams to mitigate the costs of adoption of open science practices in environmental public health.


DOI : https://doi.org/10.1080/08989621.2021.1962713

Replication and trustworthiness

Authors : Rik Peels, Lex Bouter

This paper explores various relations that exist between replication and trustworthiness. After defining “trust”, “trustworthiness”, “replicability”, “replication study”, and “successful replication”, we consider, respectively, how trustworthiness relates to each of the three main kinds of replication: reproductions, direct replications, and conceptual replications.

Subsequently, we explore how trustworthiness relates to the intentionality of a replication. After that, we discuss whether the trustworthiness of research findings depends merely on evidential considerations or also on what is at stake.

We conclude by adding replication to the other issues that should be considered in assessing the trustworthiness of research findings: (1) the likelihood of the findings before the primary study was done (that is, the prior probability of the findings), (2) the study size and the methodological quality of the primary study, (3) the number of replications that were performed and the quality and consistency of their aggregated findings, and (4) what is at stake.


DOI : https://doi.org/10.1080/08989621.2021.1963708

Reading the fine print: A review and analysis of business journals’ data sharing policies

Authors : Brianne Dosch, Tyler Martindale

Business librarians offer many data services to their researchers. These services are often focused more on discovery, visualization, and analysis than on general data management. But, with the replication crisis facing many business disciplines, there is a need for business librarians to offer more data sharing and general data management support to their researchers.

To find evidence of this data need, 146 business journals' data sharing policies were reviewed and analyzed to uncover meaningful trends in business research. Results of the study indicate that data sharing is not mandated by business journals.

However, data sharing is often encouraged and recommended. This journal policy content analysis provides evidence that business researchers have opportunities to share their research data, and with the right data management support, business librarians can play a significant role in improving the data sharing behaviors of business researchers.

DOI : https://doi.org/10.1080/08963568.2020.1847549

Two years into the Brazilian Reproducibility Initiative: reflections on conducting a large-scale replication of Brazilian biomedical science

Authors : Kleber Neves, Clarissa FD Carneiro, Ana Paula Wasilewska-Sampaio, Mariana Abreu, Bruna Valério-Gomes, Pedro B Tan, Olavo B Amaral

Scientists have increasingly recognised that low methodological and analytical rigour combined with publish-or-perish incentives can make the published scientific literature unreliable.

As a response to this, large-scale systematic replications of the literature have emerged as a way to assess the problem empirically. The Brazilian Reproducibility Initiative is one such effort, aimed at estimating the reproducibility of Brazilian biomedical research.

Its goal is to perform multicentre replications of a quasi-random sample of at least 60 experiments from Brazilian articles published over a 20-year period, using a set of common laboratory methods.

In this article, we describe the challenges of managing a multicentre project with collaborating teams across the country, as well as its successes and failures over the first two years.

We end with a brief discussion of the Initiative’s current status and its possible future contributions after the project is concluded in 2021.


DOI : https://doi.org/10.1590/0074-02760200328

What is replication?

Authors : Brian A. Nosek, Timothy M. Errington

Credibility of scientific claims is established with evidence for their replicability using new data. According to common understanding, replication is repeating a study’s procedure and observing whether the prior finding recurs. This definition is intuitive, easy to apply, and incorrect.

We propose that replication is a study for which any outcome would be considered diagnostic evidence about a claim from prior research. This definition reduces emphasis on operational characteristics of the study and increases emphasis on the interpretation of possible outcomes.

The purpose of replication is to advance theory by confronting existing understanding with new evidence. Ironically, the value of replication may be strongest when existing understanding is weakest.

Successful replication provides evidence of generalizability across the conditions that inevitably differ from the original study; unsuccessful replication indicates that the reliability of the finding may be more constrained than previously recognized.

Defining replication as a confrontation of current theoretical expectations clarifies its important, exciting, and generative role in scientific progress.


DOI : https://doi.org/10.1371/journal.pbio.3000691

Replication studies in economics—How many and which papers are chosen for replication, and why?

Authors : Frank Mueller-Langer, Benedikt Fecher, Dietmar Harhoff, Gert G. Wagner

We investigate how often replication studies are published in empirical economics and what types of journal articles are replicated. We find that between 1974 and 2014, only 0.1% of publications in the top 50 economics journals were replication studies.

We consider the results of published formal replication studies (whether they are negating or reinforcing) and their extent: Narrow replication studies are typically devoted to mere replication of prior work, while scientific replication studies provide a broader analysis.

We find evidence that higher-impact articles and articles by authors from leading institutions are more likely to be replicated, whereas the replication probability is lower for articles that appeared in top 5 economics journals.

Our analysis also suggests that mandatory data disclosure policies may have a positive effect on the incidence of replication.


DOI : https://doi.org/10.1016/j.respol.2018.07.019