The State of the Art in Peer Review

Author : Jonathan Tennant

Scholarly communication is in a perpetual state of disruption. Within this, peer review of research articles remains an essential part of the formal publication process, distinguishing it from virtually all other modes of communication.

In the last several years, there has been an explosive wave of innovation in peer review research, platforms, discussions, tools, and services. This is largely coupled with the ongoing and parallel evolution of scholarly communication as it adapts to rapidly changing environments, within what is widely considered the ‘open research’ or ‘open science’ movement.

Here, we summarise the current ebb and flow around changes to peer review, consider its role in a modern digital research and communications infrastructure, and discuss why uptake of new models of peer review appears to have been so low compared with what is often viewed as the ‘traditional’ method of peer review.

Finally, we offer some insight into the potential futures of scholarly peer review and consider what impacts this might have on the broader scholarly research ecosystem.

DOI : https://doi.org/10.17605/OSF.IO/C29TM

Analysis of Peer Review Effectiveness for Academic Journals Based on Distributed Parallel System

Authors : Zong-Yuan Tan, Ning Cai, Jian Zhou

A simulation model based on parallel systems is established to explore the relation between the number of submissions and the overall quality of academic journals within a similar discipline under peer review.

The model can effectively simulate the submission, review, and acceptance behaviors of academic journals in a distributed manner. According to the simulation experiments, the overall standard of academic journals may deteriorate as a result of excessive submissions.
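
As a rough illustration of the dynamic the abstract describes, the following toy sketch (Python, with entirely assumed parameters and mechanics, not the authors' model) lets reviewing effort per manuscript shrink as submission volume grows, so that selection becomes noisier and the mean quality of accepted papers can fall despite a larger pool:

```python
import random
import statistics

# Toy sketch only, not the authors' model: every parameter and mechanism
# below is an assumption made for illustration. The idea is that reviewing
# capacity is finite, so review scores get noisier as submissions pile up,
# and noisy selection can drag down the mean quality of accepted papers.

def mean_accepted_quality(n_submissions, review_capacity=200, base_noise=0.3,
                          slots=100, seed=0):
    rng = random.Random(seed)
    quality = [rng.random() for _ in range(n_submissions)]      # latent quality
    overload = max(0.0, (n_submissions - review_capacity) / review_capacity)
    noise = base_noise * (1.0 + overload)                        # rushed reviews
    scores = [q + rng.gauss(0.0, noise) for q in quality]
    ranked = sorted(range(n_submissions), key=lambda i: scores[i], reverse=True)
    accepted = ranked[:slots]
    return statistics.mean(quality[i] for i in accepted)

for n in (150, 300, 600, 1200):
    print(n, round(mean_accepted_quality(n), 3))
```

Whether quality actually deteriorates depends on how fast review noise grows relative to the size of the pool, which is precisely the kind of trade-off the distributed simulation in the paper is built to probe.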

URL : https://arxiv.org/abs/1806.00287

A Case Study for a New Peer-Review Journal on Race and Ethnicity in American Higher Education

Author : Cristobal Salinas Jr.

In this exploratory case study, the interests, attitudes, and opinions of participants of the National Conference on Race and Ethnicity (NCORE) in American Higher Education are presented.

This case study sought to understand how college and university administrators and faculty perceived the need to create a peer-reviewed journal that aimed to support and create opportunities to publish research, policy, practices, and procedures within the context of race and ethnicity in American higher education.

The findings of this study reflect that the vast majority of those surveyed (n = 605) and interviewed (n = 5) support, and are interested in, having a peer-reviewed journal that focuses on race and ethnicity in American higher education.

DOI : https://doi.org/10.3390/publications6020026

Beyond Fact Checking: Reconsidering the Status of Truth of Published Articles

Authors : David Pontille, Didier Torny

Since the 17th century, scientific knowledge has been produced through a collective process, involving specific technologies used to perform experiments, to regulate modalities for participation of peers or lay people, and to ensure validation of the facts and publication of major results.

In such a world, guided by the quest for a new kind of truth against previous beliefs, various forms of misconduct, from subtle plagiarism to the outright fabrication of data and results, have largely been considered minimal, if not nonexistent.

Yet, “betrayers of the truth” have been alleged in many fraud cases, at least from the 1970s onward, and the phenomenon is currently a growing concern in many academic corners. Facing numerous alerts, journals have widely adopted dedicated editorial formats to notify their readers of the emerging doubts affecting articles they had published.

This short piece focuses exclusively on these formats, which consist in “flagging” certain articles to mark their problematic status. The visibility given to these flags and policies undermines the very basic components of the economy of science: How long can we collectively pretend that peer-reviewed knowledge should be the anchor to face a “post-truth” world?

URL : https://halshs.archives-ouvertes.fr/halshs-01576348

Peer-review under review – A statistical study on proposal ranking at ESO. Part I: the pre-meeting phase

Author : Ferdinando Patat

Peer review is the most common mechanism in place for assessing requests for resources in a large variety of scientific disciplines. One of the strongest criticisms of this paradigm is the limited reproducibility of the process, especially at heavily oversubscribed facilities. In this and in a subsequent paper we address this specific aspect in a quantitative way, through a statistical study on proposal ranking at the European Southern Observatory.

For this purpose we analysed a sample of about 15000 proposals, submitted by more than 3000 Principal Investigators over 8 years. The proposals were reviewed by more than 500 referees, who assigned over 140000 grades in about 200 panel sessions.

After providing a detailed analysis of the statistical properties of the sample, the paper presents a heuristic model based on these findings, which is then used to provide quantitative estimates of the reproducibility of the pre-meeting process.

On average, about one third of the proposals ranked in the top quartile by one referee are ranked in the same quartile by any other referee of the panel. A similar value is observed for the bottom quartile.

In the central quartiles, the agreement fractions are very marginally above the value expected for a fully aleatory process (25%). The agreement fraction between two panels composed of 6 referees is 55 ± 5% (50% confidence level) for the top and bottom quartiles.

The corresponding fraction for the central quartiles is 33 ± 5%. The model predictions are confirmed by the results obtained from bootstrapping the data for sub-panels composed of 3 referees, and are fully consistent with the NIPS experiment. The post-meeting phase will be presented and discussed in a forthcoming paper.
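
To make the comparison with the 25% aleatory baseline concrete, here is a small Monte Carlo sketch (illustrative Python, not the authors' analysis code): two referees grade the same proposals as a latent quality plus independent noise, and we measure how often a proposal placed in the top quartile by one referee is also placed there by the other.

```python
import random
import statistics

# Illustrative sketch only, not the ESO analysis code. It estimates the
# fraction of proposals ranked in the top quartile by one noisy referee
# that another noisy referee also places in the top quartile, and compares
# it with the 25% expected from a fully random (aleatory) process.

def top_quartile(scores):
    """Return the set of indices whose scores fall in the top quartile."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    return set(order[: n // 4])

def agreement_fraction(n_proposals=100, noise=1.0, n_trials=500, seed=1):
    rng = random.Random(seed)
    fractions = []
    for _ in range(n_trials):
        # Latent "true" proposal quality, plus independent referee noise.
        quality = [rng.gauss(0, 1) for _ in range(n_proposals)]
        ref_a = top_quartile([q + rng.gauss(0, noise) for q in quality])
        ref_b = top_quartile([q + rng.gauss(0, noise) for q in quality])
        fractions.append(len(ref_a & ref_b) / len(ref_a))
    return statistics.mean(fractions)

# With referee noise comparable to the quality signal, agreement lands well
# above the 0.25 random baseline but far below 1, in the spirit of the
# roughly one-third top-quartile agreement reported in the paper.
print(round(agreement_fraction(noise=1.0), 2))
```

Varying the noise parameter moves the agreement between the 25% floor and 100%, which is what makes this fraction a useful yardstick for the reproducibility of a review process.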

URL : https://arxiv.org/abs/1805.06981

Assessment of potential bias in research grant peer review in Canada

Authors : Robyn Tamblyn, Nadyne Girard, Christina J. Qian, James Hanley

BACKGROUND

Peer review is used to determine what research is funded and published, yet little is known about its effectiveness, and it is suspected that there may be biases. We investigated the variability of peer review and factors influencing ratings of grant applications.

METHODS

We evaluated all grant applications submitted to the Canadian Institutes of Health Research between 2012 and 2014. The contribution of application, principal applicant and reviewer characteristics to overall application score was assessed after adjusting for the applicant’s scientific productivity.

RESULTS

Among 11 624 applications, 66.2% of principal applicants were male and 64.1% were in a basic science domain. We found a significant nonlinear association between scientific productivity and final application score that differed by applicant gender and scientific domain, with higher scores associated with past funding success and h-index and lower scores associated with female applicants and those in the applied sciences.

Significantly lower application scores were also associated with applicants who were older, evaluated by female reviewers only (v. male reviewers only, −0.05 points, 95% confidence interval [CI] −0.08 to −0.02) or reviewers in scientific domains different from the applicant’s (−0.07 points, 95% CI −0.11 to −0.03).

Significantly higher application scores were also associated with reviewer agreement in application score (0.23 points, 95% CI 0.20 to 0.26), the existence of reviewer conflicts (0.09 points, 95% CI 0.07 to 0.11), larger budget requests (0.01 points per $100 000, 95% CI 0.007 to 0.02), and resubmissions (0.15 points, 95% CI 0.14 to 0.17).

In addition, reviewers with high expertise were more likely than those with less expertise to provide higher scores to applicants with higher past success rates (0.18 points, 95% CI 0.08 to 0.28).
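
The adjusted associations above are the sort of estimates produced by a multivariable regression of application score on applicant, application, and reviewer characteristics. The sketch below is purely illustrative: fabricated data, a simplified linear model (the paper reports a nonlinear productivity effect), and assumed variable names, not the CIHR dataset or the authors' specification.

```python
# Illustrative only: toy data and a simplified model, not the CIHR analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "h_index": rng.poisson(15, n),                 # past productivity proxy
    "female_applicant": rng.integers(0, 2, n),
    "applied_science": rng.integers(0, 2, n),
    "female_reviewers_only": rng.integers(0, 2, n),
    "resubmission": rng.integers(0, 2, n),
})
# Simulated overall score with small, assumed covariate effects plus noise.
df["score"] = (
    3.5
    + 0.01 * df["h_index"]
    - 0.05 * df["female_applicant"]
    - 0.04 * df["applied_science"]
    + 0.15 * df["resubmission"]
    + rng.normal(0, 0.5, n)
)

# Ordinary least squares with all covariates entered jointly, so each
# coefficient is an association adjusted for the others.
model = smf.ols(
    "score ~ h_index + female_applicant + applied_science"
    " + female_reviewers_only + resubmission",
    data=df,
).fit()
print(model.summary())
```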

INTERPRETATION

There is evidence of bias in peer review of operating grants that is of sufficient magnitude to change application scores from fundable to nonfundable. This should be addressed by training and policy changes in research funding.

URL : https://doi.org/10.1503/cmaj.170901

Pubpeer: vigilante science, journal club or alarm raiser? The controversies over anonymity in post-publication peer review

Author : Didier Torny

The more journal peer review (JPR) has become a scientific topic, the more it has been the subject of criticism and controversy. Repeated fake reports, confirmed reviewer biases, lack of reproducibility, and a recurrent inability to detect fraud and misconduct have apparently condemned JPR in its supposedly traditional form.

In fact, just like previous historical reforms and inventions, JPR has again been the object of many proposals to “fix it” since the beginning of the 21st century. Though these proposals are very diverse, two main directions have been identified: open peer review on one side, post-publication peer review (PPPR) on the other.

These two “fixes” share a common device: open commenting on published articles, which is both an open peer review practice, since comments are visible to all readers, and a PPPR practice, since they come after the publication and often the certification of articles. Sitting at this intersection, open commenting should thrive, and indeed many journals have offered the feature, but with no success.

Nevertheless, there is an exception to the disappointment with open commentary in PPPR, which is the empirical case for this presentation: PubPeer, where commentators come in herds and comments flourish. The only explanation offered for this peculiar success is the widely used possibility of publishing anonymized comments on the platform.

So, how can you embrace the openness of discussion and, at the same time, enable anonymous commentators? What kinds of PPPR practices is it connected with? Does it inform our views on traditional peer review, and how?

To answer these questions, we will first describe how the platform was built and how it works, then examine the dynamics it produces where anonymity is concerned, typify the arguments used for and against anonymity in PPPR, and discuss its effects on published papers, before concluding on how debates could be organized in PPPR.

These first results are based on a systematic qualitative analysis of threads on PubPeer, of articles about PubPeer and anonymity on specialized websites (Scholarly Kitchen, Retraction Watch…), and of editorials from scientific journals that have commented on anonymity in PPPR.

URL : https://halshs.archives-ouvertes.fr/halshs-01700198