Acceptance rates of scholarly peer-reviewed journals: A literature survey

Author : Bo-Christer Björk

The acceptance rate of scholarly journals is an important selection criterion for authors choosing where to submit their manuscripts. Unfortunately, information about the acceptance (or rejection) rates of individual journals is seldom available.

This article surveys the available systematic information and studies of acceptance rates. The overall global average is around 35-40%. There are significant differences between fields of science, with biomedicine having higher acceptance rates than, for instance, the social sciences.

Open access journals usually have higher acceptance rates than subscription journals, and this is particularly true for so-called OA mega-journals, which have peer review criteria focusing on sound science only.

URL : https://recyt.fecyt.es/index.php/EPI/article/view/epi.2019.jul.07

The F3-index. Valuing reviewers for scholarly journals

Authors : Federico Bianchi, Francisco Grimaldo, Flaminio Squazzoni

This paper presents an index that measures reviewer contribution to the editorial processes of scholarly journals. Borrowing a metaphor from ranking algorithms in sports tournaments, we created an index that scores reviewers on context-specific dimensions, namely report delivery time, report length, and the alignment of recommendations with editorial decisions.

To test the index, we used a dataset of peer review from a multi-disciplinary journal, covering 544 reviewers and 606 submissions over six years. Although limited by sample size, the test showed that the index efficiently identifies both outstanding contributors and weakly performing reviewers.

Our index is flexible, allows for extensions, and could be incorporated into available scholarly journal management tools. It can assist editors in rewarding high-performing reviewers and managing editorial turnover.
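
The abstract names the dimensions but not the published formula, so the following is only a minimal sketch, under assumed weights and normalisation, of how a composite reviewer score over those three dimensions (delivery time, report length, recommendation-decision alignment) could be computed; it is not the authors' actual F3-index.

```python
from dataclasses import dataclass

@dataclass
class Review:
    days_to_deliver: int     # time from invitation to report delivery (days)
    report_length: int       # length of the report, e.g. word count
    matched_decision: bool   # recommendation aligned with the editorial decision?

def reviewer_score(reviews, max_days=60, max_length=1000, weights=(1.0, 1.0, 1.0)):
    """Illustrative composite score over a reviewer's reports.

    Each dimension is normalised to [0, 1] and combined with the given
    weights. This is an assumed scoring scheme, not the published
    F3-index formula.
    """
    if not reviews:
        return 0.0
    w_time, w_len, w_align = weights
    total = 0.0
    for r in reviews:
        timeliness = max(0.0, 1.0 - r.days_to_deliver / max_days)
        thoroughness = min(1.0, r.report_length / max_length)
        alignment = 1.0 if r.matched_decision else 0.0
        total += (w_time * timeliness + w_len * thoroughness + w_align * alignment) / sum(weights)
    return total / len(reviews)

# A prompt, thorough, aligned review scores high; a late, thin one scores low.
print(reviewer_score([Review(10, 800, True)]))    # ~0.88
print(reviewer_score([Review(55, 150, False)]))   # ~0.08
```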

DOI : https://doi.org/10.1016/j.joi.2018.11.007

What does better peer review look like? Underlying principles and recommendations for better practice

Authors : Heidi Allen, Alexandra Cury, Thomas Gaston, Chris Graf, Hannah Wakley, Michael Willis

We conducted a literature review of best practice in peer review. Following this research, we identified five principles for better peer review: Content Integrity, Content Ethics, Fairness, Usefulness, and Timeliness. For each of these principles, we have developed a set of recommendations to improve peer review standards.

In this article, we describe the role of peer review and how our five principles support that goal. This article is intended to continue the conversation about improving peer review standards and provide guidance to journal teams looking to improve their standards. It is accompanied by a detailed checklist, which could be used by journal teams to assess their current peer review standards.

DOI : https://doi.org/10.1002/leap.1222

Using AI to solve business problems in scholarly publishing

Author : Michael Upshall

Artificial intelligence (AI) tools are widely used today in many areas, and are now being introduced into scholarly publishing. This article provides a brief overview of present-day AI and machine learning as used for text-based resources such as journal articles and book chapters, and provides an example of its application to identify suitable peer reviewers for manuscript submissions.

It describes how one company, UNSILO, has created a tool for this purpose, and the underlying technology used to deliver it. The article also offers a glimpse into a future where AI will profoundly change the way academic publishing works.
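
The article does not disclose UNSILO's implementation details; purely as an illustration of the general approach of matching a submission's text against candidate reviewers' prior publications, here is a sketch using TF-IDF vectors and cosine similarity. The reviewer names and texts are hypothetical, and this is a generic technique, not the company's product.

```python
# Illustrative reviewer matching by text similarity (generic TF-IDF approach,
# not UNSILO's proprietary technology).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: each candidate reviewer is represented by the
# concatenated abstracts of their previous publications.
reviewer_profiles = {
    "reviewer_a": "open peer review transparency editorial policy surveys",
    "reviewer_b": "machine learning text classification neural networks",
    "reviewer_c": "bibliometrics citation analysis journal impact factors",
}

submission_abstract = "We study citation-based indicators of journal quality."

names = list(reviewer_profiles)
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([submission_abstract] + [reviewer_profiles[n] for n in names])

# Similarity of the submission (row 0) to each reviewer profile, ranked
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
for name, score in sorted(zip(names, scores), key=lambda x: x[1], reverse=True):
    print(f"{name}: {score:.3f}")
```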

DOI : https://doi.org/10.1629/uksg.460

Who Is (Likely) Peer-Reviewing Your Papers? A Partial Insight into the World’s Top Reviewers

Authors : Francesco Pomponi, Bernardino D’Amico, Tom Rye

Scientific publishing is experiencing unprecedented growth in output across all fields. Inevitably, this puts pressure on many parts of the system.

One key group is peer reviewers, for whom demand grows even faster than the number of publications, since each paper needs more than one reviewer and not every reviewed paper is published.

The relatively recent Publons platform allows unprecedented insight into the usual ‘blindness’ of the peer-review system. At a time when the world’s top peer-reviewers are announced and celebrated, we have taken a step back to attempt a partial mapping of their profiles and to identify trends and key dimensions of this community of ‘super-reviewers’.

This commentary necessarily focuses on a limited sample, because the data had to be processed manually and, for the type of information we seek, within a single day. Comparing the number of reviews performed with academic citations, our analysis suggests that most reviews are carried out by relatively inexperienced academics.

For some of these early career academics, peer-reviewing seems to be the only activity they engage with, given the high number of reviews performed (e.g., three manuscripts per day) and the lack of outputs (zero academic papers and citations in some cases). Additionally, the world’s top researchers (i.e., highly-cited researchers) are understandably busy with research activities and therefore far less active in peer-reviewing.

Lastly, there seems to be an uneven distribution at a national level between scientific outputs (e.g., publications) and reviews performed. Our analysis contributes to the ongoing global discourse on the health of scientific peer-review, and it raises some important questions for further discussion.
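
The underlying Publons records are not reproduced here; as a purely hypothetical illustration of the kind of reviews-versus-citations comparison the commentary describes, a few lines of analysis might look like this (all figures are invented).

```python
# Hypothetical records in the style of the comparison described above;
# the figures are invented for illustration only.
top_reviewers = [
    {"name": "Reviewer A", "reviews_per_year": 1100, "citations": 0},
    {"name": "Reviewer B", "reviews_per_year": 240, "citations": 3500},
    {"name": "Reviewer C", "reviews_per_year": 90, "citations": 28000},
]

for r in top_reviewers:
    reviews_per_day = r["reviews_per_year"] / 365          # ~3/day at ~1100/year
    ratio = r["reviews_per_year"] / (r["citations"] + 1)   # +1 avoids division by zero
    print(f'{r["name"]}: {reviews_per_day:.1f} reviews/day, '
          f'review-to-citation ratio {ratio:.2f}')
```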

URL : https://www.mdpi.com/2304-6775/7/1/15

Guidelines for open peer review implementation

Authors : Tony Ross-Hellauer, Edit Görögh

Open peer review (OPR) is moving into the mainstream, but it is often poorly understood and surveys of researcher attitudes show important barriers to implementation.

As more journals move to implement and experiment with the myriad of innovations covered by this term, there is a clear need for best practice guidelines to guide implementation.

This brief article aims to address this knowledge gap, reporting work based on an interactive stakeholder workshop to create best-practice guidelines for editors and journals who wish to transition to OPR.

Although the advice is aimed mainly at editors and publishers of scientific journals, since this is the area in which OPR is at its most mature, many of the principles may also be applicable to the implementation of OPR in other areas (e.g., books, conference submissions).

DOI : https://doi.org/10.1186/s41073-019-0063-9

Peer Review Bias: A Critical Review

Authors : Samir Haffar, Fateh Bazerbachi, M. Hassan Murad

Various types of bias and confounding have been described in the biomedical literature that can affect a study before, during, or after the intervention has been delivered.

The peer review process can also introduce bias. A compelling ethical and moral rationale necessitates improving the peer review process. A double-blind peer review system is supported by equipoise and fair-play principles.

Triple- and quadruple-blind systems have also been described but are not commonly used. The open peer review system introduces “Skin in the Game” heuristic principles for both authors and reviewers and has a small favorable effect on the quality of published reports.

In this exposition, we present, on the basis of a comprehensive literature search of PubMed from its inception until October 20, 2017, various possible mechanisms by which the peer review process can distort research results, and we discuss the evidence supporting different strategies that may mitigate this bias.

It is time to improve the quality, transparency, and accountability of the peer review system.

DOI : https://doi.org/10.1016/j.mayocp.2018.09.004