Open science reforms: Strengths, challenges, and future directions

Author : Kathryn R. Wentzel

In this article, I comment on the potential benefits and limitations of open science reforms for improving the transparency and accountability of research, and enhancing the credibility of research findings within communities of policy and practice.

First, I discuss the role of replication and reproducibility of research in promoting better quality studies, the identification of generalizable principles, and relevance for practitioners and policymakers.

Second, I suggest that greater attention to theory might contribute to the impact of open science practices, and discuss ways in which theory has implications for sampling, measurement and research design.

Ambiguities concerning the aims of preregistration and registered reports are also highlighted. In conclusion, I discuss structural roadblocks to open science reform and reflect on the relevance of these reforms for educational psychology.

URL : https://edarxiv.org/sgfy8/

Versioning Data Is About More than Revisions: A Conceptual Framework and Proposed Principles

Authors : Jens Klump, Lesley Wyborn, Mingfang Wu, Julia Martin, Robert R. Downs, Ari Asmi

A dataset, small or big, is often changed to correct errors, apply new algorithms, or add new data (e.g., as part of a time series).

In addition, datasets might be bundled into collections, distributed in different encodings or mirrored onto different platforms. All these differences between versions of datasets need to be understood by researchers who want to cite the exact version of the dataset that was used to underpin their research.

Failing to do so reduces the reproducibility of research results. Ambiguous identification of datasets also impacts researchers and data centres who are unable to gain recognition and credit for their contributions to the collection, creation, curation and publication of individual datasets.

Although the means to identify datasets using persistent identifiers have been in place for more than a decade, systematic data versioning practices are currently not available. In this work, we analysed 39 use cases and current practices of data versioning across 33 organisations.

We noticed that the term ‘version’ was used in a very general sense, extending beyond the more common understanding of ‘version’ as referring primarily to revisions and replacements. Using concepts developed in software versioning and the Functional Requirements for Bibliographic Records (FRBR) as a conceptual framework, we developed six foundational principles for versioning of datasets: Revision, Release, Granularity, Manifestation, Provenance and Citation.

These six principles provide a high-level framework for guiding the consistent practice of data versioning and can also serve as guidance for data centres or data providers when setting up their own data revision and version protocols and procedures.
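
As an illustration of how the six principles might be operationalised, the sketch below models a version record for a dataset. All field names and identifiers are hypothetical: the paper prescribes principles, not a metadata schema.

```python
from dataclasses import dataclass, field

# Hypothetical record sketching how the six versioning principles might be
# captured in dataset metadata; the paper defines principles, not a schema.
@dataclass
class DatasetVersion:
    pid: str                    # Citation: persistent identifier for this exact version
    release: str                # Release: the publicly announced label, e.g. "v2.0"
    revision: int               # Revision: count of content changes since first release
    granularity: str            # Granularity: level the identifier resolves to
    manifestation: str          # Manifestation: encoding/platform of this particular copy
    derived_from: list[str] = field(default_factory=list)  # Provenance: parent versions

v2 = DatasetVersion(
    pid="doi:10.5555/example-dataset.v2",  # hypothetical DOI
    release="v2.0",
    revision=7,
    granularity="collection",              # as opposed to a single file within it
    manifestation="CSV, UTF-8, mirrored",
    derived_from=["doi:10.5555/example-dataset.v1"],
)
```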

DOI : https://doi.org/10.5334/dsj-2021-012

Assessment of transparency indicators across the biomedical literature: How open is open?

Authors : Stylianos Serghiou, Despina G. Contopoulos-Ioannidis, Kevin W. Boyack, Nico Riedel, Joshua D. Wallach, John P. A. Ioannidis

Recent concerns about the reproducibility of science have led to several calls for more open and transparent research practices and for the monitoring of potential improvements over time. However, with tens of thousands of new biomedical articles published per week, manually mapping and monitoring changes in transparency is unrealistic.

We present an open-source, automated approach to identify 5 indicators of transparency (data sharing, code sharing, conflicts of interest disclosures, funding disclosures, and protocol registration) and apply it across the entire open access biomedical literature of 2.75 million articles on PubMed Central (PMC).
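
The published pipeline is considerably more sophisticated, but as a rough sketch of the idea, a first-pass indicator detector can be built from text patterns. The patterns below are hypothetical simplifications of my own, not the authors' actual classifiers.

```python
import re

# Hypothetical, deliberately simplified patterns; the authors' tool uses
# validated algorithms, not these toy regular expressions.
INDICATOR_PATTERNS = {
    "data_sharing":       r"(data (are|is) (freely |publicly )?available|deposited (in|at))",
    "code_sharing":       r"(code (is|are) available|github\.com|analysis scripts)",
    "coi_disclosure":     r"(conflicts? of interest|competing interests?)",
    "funding_disclosure": r"((funded|supported) by|grant (no\.?|number))",
    "registration":       r"(clinicaltrials\.gov|prospero|prospectively registered)",
}

def transparency_indicators(full_text: str) -> dict[str, bool]:
    """Flag which of the five transparency indicators an article text mentions."""
    return {name: bool(re.search(pat, full_text, re.IGNORECASE))
            for name, pat in INDICATOR_PATTERNS.items()}

example = "All data are available on github.com/example; the authors declare no competing interests."
print(transparency_indicators(example))
```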

Our results indicate remarkable improvements over time in some areas of transparency (e.g., conflict of interest [COI] disclosures and funding disclosures) but not others (e.g., protocol registration and code sharing), and map transparency across fields of science, countries, journals, and publishers.

This work has enabled the creation of a large, integrated, and openly available database to expedite further efforts to monitor, understand, and promote transparency and reproducibility in science.

DOI : https://doi.org/10.1371/journal.pbio.3001107

Analytic reproducibility in articles receiving open data badges at the journal Psychological Science: an observational study

Authors : Tom E. Hardwicke, Manuel Bohn, Kyle MacDonald, Emily Hembacher, Michèle B. Nuijten, Benjamin N. Peloquin, Benjamin E. deMayo, Bria Long, Erica J. Yoon, Michael C. Frank

For any scientific report, repeating the original analyses upon the original data should yield the original outcomes. We evaluated analytic reproducibility in 25 Psychological Science articles awarded open data badges between 2014 and 2015.

Initially, 16 (64%, 95% confidence interval [43,81]) articles contained at least one ‘major numerical discrepancy’ (>10% difference) prompting us to request input from original authors.

Ultimately, target values were reproducible without author involvement for 9 (36% [20,59]) articles; reproducible with author involvement for 6 (24% [8,47]) articles; not fully reproducible with no substantive author response for 3 (12% [0,35]) articles; and not fully reproducible despite author involvement for 7 (28% [12,51]) articles.

Overall, 37 major numerical discrepancies remained out of 789 checked values (5% [3,6]), but original conclusions did not appear affected.
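
To make the two quantitative notions here concrete, the sketch below checks a value against the >10% discrepancy threshold and computes a Wilson score interval for a proportion. Wilson is an assumption on my part; this summary does not say how the bracketed intervals were actually computed.

```python
import math

def is_major_discrepancy(reported: float, reproduced: float, tol: float = 0.10) -> bool:
    """A 'major numerical discrepancy': relative difference greater than 10%."""
    return abs(reproduced - reported) > tol * abs(reported)

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion (an assumed method)."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

print(is_major_discrepancy(reported=2.45, reproduced=2.80))  # True: ~14% difference
lo, hi = wilson_ci(37, 789)          # 37 remaining discrepancies of 789 checked values
print(f"{37/789:.0%} [{lo:.0%}, {hi:.0%}]")                  # 5% [3%, 6%]
```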

Non-reproducibility was primarily caused by unclear reporting of analytic procedures. These results highlight that open data alone is not sufficient to ensure analytic reproducibility.

DOI : https://doi.org/10.1098/rsos.201494

The Rigor and Transparency Index Quality Metric for Assessing Biological and Medical Science Methods

Authors : Joe Menke, Martijn Roelandse, Burak Ozyurt, Maryann Martone, Anita Bandrowski

The reproducibility crisis is a multifaceted problem involving ingrained practices within the scientific community. Fortunately, some causes are addressed by authors' adherence to rigor and reproducibility criteria, implemented via checklists at various journals.

We developed an automated tool (SciScore) that evaluates research articles based on their adherence to key rigor criteria, including the NIH rigor criteria and the use of RRIDs (Research Resource Identifiers), at an unprecedented scale. We show that despite steady improvements, less than half of the scoring criteria, such as blinding or power analysis, are routinely addressed by authors. Digging deeper, we examined the influence of specific checklists on average scores.

The average score for a journal in a given year was named the Rigor and Transparency Index (RTI), a new journal quality metric. We compared the RTI with the Journal Impact Factor and found there was no correlation. The RTI can potentially serve as a proxy for methodological quality.
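
As described, the RTI is simply a journal-year average of per-article scores, which can then be compared against the Journal Impact Factor. The sketch below uses made-up scores and hypothetical column names to show the computation.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-article rigor scores (the real scores come from SciScore).
articles = pd.DataFrame({
    "journal": ["A", "A", "B", "B", "C", "C", "D", "D"],
    "year":    [2019] * 8,
    "score":   [6.5, 5.5, 4.5, 3.5, 5.0, 6.0, 5.5, 4.5],
})

# RTI: the average score of a journal's articles in a given year.
rti = articles.groupby(["journal", "year"])["score"].mean().rename("RTI").reset_index()

# Compare against (hypothetical) Journal Impact Factors.
jif = pd.DataFrame({"journal": ["A", "B", "C", "D"], "JIF": [2.1, 3.3, 9.8, 1.2]})
merged = rti.merge(jif, on="journal")
rho, p = spearmanr(merged["RTI"], merged["JIF"])
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")  # near zero for this toy data
```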

DOI : https://doi.org/10.1016/j.isci.2020.101698

Questionable Research Practices and Open Science in Quantitative Criminology

Authors : Jason Chin, Justin Pickett, Simine Vazire, Alex Holcombe

Objectives

Questionable research practices (QRPs) lead to incorrect research results and contribute to irreproducibility in science. Researchers and institutions have proposed open science practices (OSPs) to improve the detectability of QRPs and the credibility of science. We examine the prevalence of QRPs and OSPs in criminology, and researchers’ opinions of those practices.

Methods

We administered an anonymous survey to authors of articles published in criminology journals. Respondents self-reported their own use of 10 QRPs and 5 OSPs. They also estimated the prevalence of use by others, and reported their attitudes toward the practices.

Results

QRPs and OSPs are both common in quantitative criminology, about as common as they are in other fields. Criminologists who responded to our survey support using QRPs in some circumstances, but are even more supportive of using OSPs.

We did not detect a significant relationship between methodological training and either QRP or OSP use.

Support for QRPs is negatively and significantly associated with support for OSPs. Perceived prevalence estimates for some practices resembled a uniform distribution, suggesting that criminologists have little knowledge of the proportion of researchers who engage in certain questionable practices.
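
The 'uniform distribution' observation is a statistical one: if respondents' guesses about a practice's prevalence carry no shared information, they scatter evenly over 0-100%. One way to probe that, sketched below with simulated data, is a Kolmogorov-Smirnov test against a uniform distribution; the summary does not say which test, if any, the authors used.

```python
import numpy as np
from scipy import stats

# Simulated perceived-prevalence estimates (percentages), standing in for
# survey responses; these are not the study's actual data.
rng = np.random.default_rng(42)
estimates = rng.uniform(0, 100, size=200)

# KS test against Uniform(0, 100): a high p-value means the estimates are
# statistically indistinguishable from uninformed, evenly scattered guesses.
stat, p = stats.kstest(estimates, stats.uniform(loc=0, scale=100).cdf)
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")
```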

Conclusions

Most quantitative criminologists in our sample use QRPs, and many use multiple QRPs. The substantial prevalence of QRPs raises questions about the validity and reproducibility of published criminological research.

We found promising levels of OSP use, albeit lagging behind what researchers endorse. The findings thus suggest that additional reforms are needed to decrease QRP use and increase the use of OSPs.

DOI : https://doi.org/10.31235/osf.io/bwm7s

Science Communication in the Context of Reproducibility and Replicability: How Nonscientists Navigate Scientific Uncertainty

Author : Emily L. Howell

Scientists stand to gain in obvious ways from recent efforts to develop robust standards for and mechanisms of reproducibility and replicability. Demonstrations of reproducibility and replicability may provide clarity with respect to areas of uncertainty in scientific findings and translate into greater impact for the research.

But when it comes to public perceptions of science, it is less clear what gains might come from recent efforts to improve reproducibility and replicability. For example, could such efforts improve public understandings of scientific uncertainty?

To gain insight into this issue, we would need to know how public views of science are shaped by media coverage of reproducibility and replicability, but none of the emergent research on public perceptions of these issues considers that question.

We do, however, have the recent report on Reproducibility and Replicability in Science issued by the National Academies of Sciences, Engineering, and Medicine, which provides an overview of public perceptions of uncertainty in science.

Here, I adapt that report to begin a conversation between researchers and practitioners, with the aim of expanding research on public perceptions of scientific uncertainty. This overview draws upon research on risk perception and science communication to describe how the media influences the communication and perception of uncertainty in science.

It ends by presenting recommendations for communicating scientific uncertainty as it pertains to issues of reproducibility and replicability.

Original location : https://hdsr.mitpress.mit.edu/pub/3g7u601s/release/2