A systematic examination of preprint platforms for use in the medical and biomedical sciences setting

Authors : Jamie J Kirkham, Naomi Penfold, Fiona Murphy, Isabelle Boutron, John PA Ioannidis, Jessica K Polka, David Moher

Objectives

The objective of this review is to identify all preprint platforms with biomedical and medical scope and to compare and contrast the key characteristics and policies of these platforms. We also aim to provide a searchable database to enable relevant stakeholders to compare between platforms.

Study Design and Setting

Preprint platforms that were launched up to 25th June 2019 and have a biomedical and medical scope according to MEDLINE’s journal selection criteria were identified using existing lists, web-based searches and the expertise of both academic and non-academic publication scientists.

A data extraction form was developed, pilot-tested and used to collect data from each preprint platform’s webpage(s). Data collected were in relation to scope and ownership; content-specific characteristics and information relating to submission, journal transfer options, and external discoverability; screening, moderation, and permanence of content; usage metrics and metadata.

Where possible, all online data were verified by the platform owner or representative by correspondence.

Results

A total of 44 preprint platforms were identified as having biomedical and medical scope: 17 (39%) were hosted on the Open Science Framework preprint infrastructure, 6 (14%) were provided by F1000 Research Ltd (the Open Research Central infrastructure), and 21 (48%) were other independent preprint platforms. Preprint platforms were either owned by non-profit academic groups, scientific societies or funding organisations (n=28; 64%), owned or partly owned by for-profit publishers or companies (n=14; 32%), or owned by individuals or small communities (n=2; 5%).

Twenty-four (55%) preprint platforms accepted content from all scientific fields, although some of these had restrictions relating to funding source, geographical region or an affiliated journal’s remit.

Thirty-three (75%) preprint platforms provided details about article screening (basic checks) and 14 (32%) of these actively involved researchers with context expertise in the screening process.

The three most common screening checks related to the scope of the article; plagiarism; and legal, ethical and societal issues and compliance. Almost all preprint platforms allow submission to any peer-reviewed journal following posting and have a preservation plan for read access, and most have a policy regarding reasons for retraction and the sustainability of the service.

Forty-one (93%) platforms currently have usage metrics, with the most common metric being the number of downloads presented on the abstract page.

Conclusion

A large number of preprint platforms exist for use in biomedical and medical sciences, all of which offer researchers an opportunity to rapidly disseminate their research findings onto an open-access public server, subject to scope and eligibility.

However, the process by which content is screened before online posting and withdrawn or removed after posting varies between platforms, which may be associated with platform operation, ownership, governance and financing.

DOI : https://doi.org/10.1101/2020.04.27.063578

What is replication?

Authors : Brian A. Nosek, Timothy M. Errington

Credibility of scientific claims is established with evidence for their replicability using new data. According to common understanding, replication is repeating a study’s procedure and observing whether the prior finding recurs. This definition is intuitive, easy to apply, and incorrect.

We propose that replication is a study for which any outcome would be considered diagnostic evidence about a claim from prior research. This definition reduces emphasis on operational characteristics of the study and increases emphasis on the interpretation of possible outcomes.

The purpose of replication is to advance theory by confronting existing understanding with new evidence. Ironically, the value of replication may be strongest when existing understanding is weakest.

Successful replication provides evidence of generalizability across the conditions that inevitably differ from the original study; unsuccessful replication indicates that the reliability of the finding may be more constrained than recognized previously.

Defining replication as a confrontation of current theoretical expectations clarifies its important, exciting, and generative role in scientific progress.

URL : What is replication?

DOI : https://doi.org/10.1371/journal.pbio.3000691

Finding Our Way: A Snapshot of Scholarly Communication Practitioners’ Duties & Training

Authors : Maria Bonn, Will Cross, Josh Bolick

INTRODUCTION

Scholarly communication has arisen as a core academic librarianship competency, but formal training on scholarly communication topics in LIS is rare, leaving many early career practitioners underprepared for their work.

METHODS

Researchers surveyed practitioners of scholarly communication, as defined by the Association of College and Research Libraries (ACRL), regarding their attitudes toward and experiences with education in scholarly communication, job responsibilities, location within their academic libraries, and thoughts about emerging trends in scholarly communication librarianship.

RESULTS

Few scholarly communication practitioners felt well-prepared by their graduate training for the core set of primary and secondary scholarly communication responsibilities that have emerged.

They deploy a range of strategies to fill the gap and would benefit from support in this area, from more robust education in graduate programs and through continued professional development.

DISCUSSION

The results of this survey support the assertion that, while academic libraries and academic library work have increasingly recognized the importance of scholarly communication topics, library school curricula have not developed correspondingly.

Respondents indicated a low level of formal pedagogy on scholarly communication topics and generally felt they were not well prepared for scholarly communication work, which comes at a significant opportunity cost.

CONCLUSION

Scholarly communication practitioners should create and curate open teaching and learning content on scholarly communication topics, both for continuing education and for adoption within LIS curricula, and LIS programs should develop accordingly, either through “topics” courses or by integrating scholarly communication into and across curricula as it intersects with existing courses.

URL : Finding Our Way: A Snapshot of Scholarly Communication Practitioners’ Duties & Training

DOI : https://doi.org/10.7710/2162-3309.2328

Measuring and Mapping Data Reuse: Findings From an Interactive Workshop on Data Citation and Metrics for Data Reuse

Author : Lisa Federer

Widely adopted standards for data citation are foundational to efforts to track and quantify data reuse. Without the means to track data reuse and metrics to measure its impact, it is difficult to reward researchers who share high-value data with meaningful credit for their contribution.

Despite initial work on developing guidelines for data citation and metrics, standards have not yet been universally adopted. This article reports on the recommendations collected from a workshop held at the Future of Research Communications and e-Scholarship (FORCE11) 2018 meeting titled Measuring and Mapping Data Reuse: An Interactive Workshop on Metrics for Data.

A range of stakeholders were represented among the participants, including publishers, researchers, funders, repository administrators, librarians, and others.

Collectively, they generated a set of 68 recommendations for specific actions that could be taken by standards and metrics creators; publishers; repositories; funders and institutions; creators of reference management software and citation styles; and researchers, students, and librarians.

These specific, concrete, and actionable recommendations would help facilitate broader adoption of standard citation mechanisms and easier measurement of data reuse.

URL : Measuring and Mapping Data Reuse: Findings From an Interactive Workshop on Data Citation and Metrics for Data Reuse

DOI : https://doi.org/10.1162/99608f92.ccd17b00

Open Access and Altmetrics in the pandemic age: Forecast analysis on COVID-19 literature

Authors : Daniel Torres-Salinas, Nicolas Robinson-Garcia, Pedro A. Castillo-Valdivieso

We present an analysis of the uptake of open access in COVID-19-related literature, as well as the social media attention these papers gather compared with non-OA papers.

We use a dataset of publications curated by Dimensions and analyze articles and preprints. Our sample includes 11,686 publications of which 67.5% are openly accessible.

OA publications tend to receive the largest share of social media attention, as measured by the Altmetric Attention Score (AAS). 37.6% of OA publications are bronze, meaning toll-access journals are providing free access.

medRxiv contributes 36.3% of the documents in repositories, but papers in bioRxiv exhibit a higher AAS on average. We predict the growth of COVID-19 literature over the following 30 days by estimating ARIMA models for the overall publication set, for OA versus non-OA publications, and by location of the document (repository vs. journal).

We estimate that COVID-19 publications will double in the next 20 days, but that non-OA publications will grow at a higher rate than OA publications. We conclude by discussing the implications of these findings for the dissemination and communication of research findings to mitigate the coronavirus outbreak.
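The extrapolation described above can be sketched in miniature. The snippet below implements only the simplest ARIMA case, a random walk with drift (ARIMA(0,1,0) plus a constant), as a stand-in for the full models the authors estimate; the publication counts are illustrative, not the authors' data.

```python
# Minimal time-series extrapolation in the spirit of the ARIMA approach:
# fit the mean day-over-day increase and project it forward.

def forecast_drift(counts, horizon):
    """Forecast future cumulative counts by extrapolating the mean daily increase."""
    diffs = [b - a for a, b in zip(counts, counts[1:])]
    drift = sum(diffs) / len(diffs)  # average day-over-day growth
    last = counts[-1]
    return [last + drift * h for h in range(1, horizon + 1)]

# Illustrative cumulative publication counts over ten days
cumulative = [100, 140, 185, 240, 300, 370, 450, 540, 640, 750]
projection = forecast_drift(cumulative, horizon=20)
print(round(projection[-1]))  # projected total 20 days out
```

On this toy series the 20-day projection exceeds twice the current total, mirroring the kind of doubling estimate the abstract reports; a real analysis would fit full ARIMA orders and compare OA and non-OA series separately.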

DOI : https://doi.org/10.1101/2020.04.23.057307

Use of the journal impact factor for assessing individual articles need not be statistically wrong

Authors : Ludo Waltman, Vincent A. Traag

Most scientometricians reject the use of the journal impact factor for assessing individual articles and their authors. The well-known San Francisco Declaration on Research Assessment also strongly objects to this way of using the impact factor.

Arguments against the use of the impact factor at the level of individual articles are often based on statistical considerations. The skewness of journal citation distributions typically plays a central role in these arguments.

We present a theoretical analysis of statistical arguments against the use of the impact factor at the level of individual articles. Our analysis shows that these arguments do not support the conclusion that the impact factor should not be used for assessing individual articles.

In fact, our computer simulations demonstrate the possibility that the impact factor is a more accurate indicator of the value of an article than the number of citations the article has received.

It is important to critically discuss the dominant role of the impact factor in research evaluations, but the discussion should not be based on misplaced statistical arguments. Instead, the primary focus should be on the socio-technical implications of the use of the impact factor.

URL : Use of the journal impact factor for assessing individual articles need not be statistically wrong

DOI : https://doi.org/10.12688/f1000research.23418.1

From Open Access to Open Science: The Path From Scientific Reality to Open Scientific Communication

Authors : Christian Heise, Joshua M. Pearce

Although opening up of research is considered an appropriate and trend-setting model for future scientific communication, it can still be difficult to put open science into practice. How open and transparent can a scientific work be?

This article investigates the potential to make all information and the whole work process of a qualification project such as a doctoral thesis comprehensively and freely accessible on the internet with an open free license both in the final form and completely traceable in development.

The answer to the initial question, the self-experiment and the associated demand for openness, posed several challenges for a doctoral student, the institution, and the examination regulations, which are still based on the publication of an individually written and completed work that cannot be viewed by the public during the creation process.

In the case of data and other documents, publication is usually not planned even after completion. This state of affairs in the use of open science in the humanities will be compared with open science best practices in the physical sciences.

The reasons and influencing factors for open developments in science and research are presented and then empirically and experimentally tested in the development of the first completely open humanities-based PhD thesis.

The results of this two-part study show that it is possible to publish everything related to the doctoral study, qualification, and research process as soon as possible, as comprehensively as possible, and under an open license.

URL : From Open Access to Open Science: The Path From Scientific Reality to Open Scientific Communication
