Knowledge and Attitudes Among Life Scientists Toward Reproducibility Within Journal Articles: A Research Survey

Authors : Evanthia Kaimaklioti Samota, Robert P. Davey

We constructed a survey to understand how authors and scientists view the issues around reproducibility, focusing on interactive elements embedded within online publications, such as interactive figures, as a solution for enabling the reproducibility of experiments.

We report the views of 251 researchers, comprising authors who have published in eLife and those who work at the Norwich Biosciences Institutes (NBI). The survey also gauges the extent to which researchers reproduce experiments themselves. Currently, there is an increasing range of tools that attempt to address the production of reproducible research by making code, data, and analyses available to the community for reuse. We wanted to collect information about attitudes at the consumer end of the spectrum, where life scientists interact with research outputs to interpret scientific results.

Static plots and figures within articles are a central part of this interpretation, and therefore we asked respondents to consider various features for an interactive figure within a research article that would allow them to better understand and reproduce a published analysis.

The majority (91%) of respondents reported that published research becomes more reproducible when authors describe their methodology (methods and analyses) in detail. Respondents also believe that interactive figures in published papers would benefit themselves, the papers they read, and their own readers.

Whilst interactive figures are one potential solution for consuming the results of research more effectively to enable reproducibility, we also review the equally pressing technical and cultural demands on researchers that need to be addressed to achieve greater success in reproducibility in the life sciences.

DOI : https://doi.org/10.3389/frma.2021.678554

Replication and trustworthiness

Authors : Rik Peels, Lex Bouter

This paper explores the various relations between replication and trustworthiness. After defining “trust”, “trustworthiness”, “replicability”, “replication study”, and “successful replication”, we consider in turn how trustworthiness relates to each of the three main kinds of replication: reproductions, direct replications, and conceptual replications.

Subsequently, we explore how trustworthiness relates to the intentionality of a replication. After that, we discuss whether the trustworthiness of research findings depends merely on evidential considerations or also on what is at stake.

We conclude by adding replication to the other issues that should be considered in assessing the trustworthiness of research findings: (1) the likelihood of the findings before the primary study was done (that is, the prior probability of the findings), (2) the study size and the methodological quality of the primary study, (3) the number of replications that were performed and the quality and consistency of their aggregated findings, and (4) what is at stake.
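
As an illustrative sketch that the paper itself does not spell out, factors (1) through (3) can be combined in a Bayesian reading: let H be the hypothesized finding and D_1, …, D_n the primary study and its replications, assumed conditionally independent. The credibility of the finding is then its posterior odds:

\[
\frac{P(H \mid D_1,\dots,D_n)}{P(\lnot H \mid D_1,\dots,D_n)}
= \underbrace{\frac{P(H)}{P(\lnot H)}}_{\text{prior odds, factor (1)}}
\times \prod_{i=1}^{n}
\underbrace{\frac{P(D_i \mid H)}{P(D_i \mid \lnot H)}}_{\text{Bayes factor of study } i\text{, factors (2) and (3)}}
\]

On this reading, factor (4), what is at stake, does not enter the evidence itself but sets the decision threshold that the posterior odds must clear.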

DOI : https://doi.org/10.1080/08989621.2021.1963708

Reproducibility of COVID-19 pre-prints

Authors : Annie Collins, Rohan Alexander

To examine the reproducibility of COVID-19 research, we create a dataset of COVID-19-related pre-prints posted to arXiv, bioRxiv, medRxiv, and SocArXiv between 28 January 2020 and 30 June 2021.

We extract the text from these pre-prints and parse it for keyword markers signalling the availability of the data and code underpinning the pre-print. Across the pre-prints in our sample, we are unable to find markers of either open data or open code for 75 per cent of those on arXiv, 67 per cent on bioRxiv, 79 per cent on medRxiv, and 85 per cent on SocArXiv.
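
A minimal sketch of this kind of marker search, in Python; the keyword patterns below are our assumptions for illustration, not the study's actual list:

import re

# Marker patterns for open-data and open-code statements. These are
# illustrative assumptions; the study's real keyword list is not reproduced here.
DATA_MARKERS = [
    r"\bdata (?:is|are) available\b",
    r"\bopen data\b",
    r"\bzenodo\b",
    r"\bfigshare\b",
    r"\bdryad\b",
]
CODE_MARKERS = [
    r"\bcode (?:is|are) available\b",
    r"\bsource code\b",
    r"\bgithub\.com\b",
    r"\bgitlab\.com\b",
]

def has_marker(text, patterns):
    """Return True if any marker pattern occurs in the extracted text."""
    return any(re.search(p, text, flags=re.IGNORECASE) for p in patterns)

def classify(text):
    """Flag one pre-print's extracted full text for open-data and open-code markers."""
    return {"open_data": has_marker(text, DATA_MARKERS),
            "open_code": has_marker(text, CODE_MARKERS)}

example = "All data are available on Zenodo; analysis code at github.com/owner/repo."
print(classify(example))  # {'open_data': True, 'open_code': True}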

We conclude that there may be value in having authors categorize the degree of openness of their pre-print as part of the pre-print submission process and, more broadly, that there is a need to better integrate open science training into a wide range of fields.

URL : https://arxiv.org/abs/2107.10724

Systematizing Confidence in Open Research and Evidence (SCORE)

Authors : Nazanin Alipourfard, Beatrix Arendt, Daniel M. Benjamin, Noam Benkler, Michael Bishop, Mark Burstein, Martin Bush, James Caverlee, Yiling Chen, Chae Clark, Anna Dreber Almenberg, Tim Errington, Fiona Fidler, Nicholas Fox, Aaron Frank, Hannah Fraser, Scott Friedman, Ben Gelman, James Gentile, C Lee Giles, Michael B Gordon, Reed Gordon-Sarney, Christopher Griffin, Timothy Gulden, et al.

Assessing the credibility of research claims is a central, continuous, and laborious part of the scientific process. Credibility assessment strategies range from expert judgment to aggregating existing evidence to systematic replication efforts.

Such assessments can require substantial time and effort. Research progress could be accelerated if there were rapid, scalable, accurate credibility indicators to guide attention and resource allocation for further assessment.

The SCORE program is creating and validating algorithms to provide confidence scores for research claims at scale. To investigate the viability of scalable tools, teams are creating a database of claims from papers in the social and behavioral sciences; expert and machine-generated estimates of credibility; and evidence of reproducibility, robustness, and replicability to validate the estimates.
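
A minimal sketch of what one record in such a claims database might hold, assuming hypothetical field names rather than the program's actual schema:

from dataclasses import dataclass
from typing import Optional

# Hypothetical record layout for one claim; the field names are our
# assumptions for illustration, not the SCORE program's actual schema.
@dataclass
class ClaimRecord:
    paper_doi: str                     # source paper in the social/behavioral sciences
    claim_text: str                    # the extracted research claim
    expert_score: float                # expert-elicited credibility estimate, 0 to 1
    machine_score: float               # algorithmic confidence score, 0 to 1
    replicated: Optional[bool] = None  # outcome once a replication is attempted

def validation_error(record):
    """Gap between the machine score and the eventual replication outcome."""
    if record.replicated is None:
        return None
    return abs(record.machine_score - float(record.replicated))

claim = ClaimRecord(
    paper_doi="10.1000/example",
    claim_text="Intervention X raises outcome Y.",
    expert_score=0.62,
    machine_score=0.55,
    replicated=True,
)
print(validation_error(claim))  # 0.45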

Beyond the primary research objective, the data and artifacts generated from this program will be openly shared and provide an unprecedented opportunity to examine research credibility and evidence.

DOI : https://doi.org/10.31235/osf.io/46mnb

Open science reforms: Strengths, challenges, and future directions

Author : Kathryn R. Wentzel

In this article, I comment on the potential benefits and limitations of open science reforms for improving the transparency and accountability of research, and enhancing the credibility of research findings within communities of policy and practice.

Specifically, I discuss the role of replication and reproducibility of research in promoting better quality studies, the identification of generalizable principles, and relevance for practitioners and policymakers.

Second, I suggest that greater attention to theory might contribute to the impact of open science practices, and discuss ways in which theory has implications for sampling, measurement and research design.

Ambiguities concerning the aims of preregistration and registered reports are also highlighted. In conclusion, I discuss structural roadblocks to open science reform and reflect on the relevance of these reforms for educational psychology.

URL : https://edarxiv.org/sgfy8/

Versioning Data Is About More than Revisions: A Conceptual Framework and Proposed Principles

Authors : Jens Klump, Lesley Wyborn, Mingfang Wu, Julia Martin, Robert R. Downs, Ari Asmi

A dataset, small or large, is often changed: to correct errors, to apply new algorithms, or to add new data (e.g., as part of a time series).

In addition, datasets might be bundled into collections, distributed in different encodings or mirrored onto different platforms. All these differences between versions of datasets need to be understood by researchers who want to cite the exact version of the dataset that was used to underpin their research.

Failing to do so reduces the reproducibility of research results. Ambiguous identification of datasets also impacts researchers and data centres, which are unable to gain recognition and credit for their contributions to the collection, creation, curation, and publication of individual datasets.

Although the means to identify datasets using persistent identifiers have been in place for more than a decade, systematic data versioning practices are currently not available. In this work, we analysed 39 use cases and current practices of data versioning across 33 organisations.

We noticed that the term ‘version’ was used in a very general sense, extending beyond the more common understanding of ‘version’ as referring primarily to revisions and replacements. Using concepts developed in software versioning and the Functional Requirements for Bibliographic Records (FRBR) as a conceptual framework, we developed six foundational principles for the versioning of datasets: Revision, Release, Granularity, Manifestation, Provenance, and Citation.

These six principles provide a high-level framework for guiding the consistent practice of data versioning and can also serve as guidance for data centres or data providers when setting up their own data revision and version protocols and procedures.
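
To make the principles concrete, here is a minimal sketch of a dataset version record organised around them; the field names and example values are our illustrative assumptions, not a schema prescribed by the paper:

from dataclasses import dataclass

# One field per principle named above; contents are illustrative only.
@dataclass
class DatasetVersion:
    pid: str            # Citation: persistent identifier for this exact version
    revision: str       # Revision: content change, e.g. "2.0" after an error fix
    release: str        # Release: the published, citable snapshot
    granularity: str    # Granularity: level identified, e.g. "file" or "collection"
    manifestation: str  # Manifestation: encoding or platform of this copy
    provenance: str     # Provenance: pointer to the version it was derived from

v2 = DatasetVersion(
    pid="doi:10.1000/example.v2",
    revision="2.0",
    release="2021-06",
    granularity="collection",
    manifestation="CSV, landing-page download",
    provenance="derived from doi:10.1000/example.v1",
)
print(v2.pid)  # doi:10.1000/example.v2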

DOI : http://doi.org/10.5334/dsj-2021-012