Assessment of potential bias in research grant peer review in Canada

Authors : Robyn Tamblyn, Nadyne Girard, Christina J. Qian, James Hanley

BACKGROUND

Peer review is used to determine what research is funded and published, yet little is known about its effectiveness, and it is suspected that there may be biases. We investigated the variability of peer review and factors influencing ratings of grant applications.

METHODS

We evaluated all grant applications submitted to the Canadian Institutes of Health Research between 2012 and 2014. The contribution of application, principal applicant and reviewer characteristics to overall application score was assessed after adjusting for the applicant’s scientific productivity.

RESULTS

Among 11 624 applications, 66.2% of principal applicants were male and 64.1% were in a basic science domain. We found a significant nonlinear association between scientific productivity and final application score that differed by applicant gender and scientific domain, with higher scores associated with past funding success and h-index and lower scores associated with female applicants and those in the applied sciences.

Significantly lower application scores were also associated with applicants who were older, evaluated by female reviewers only (v. male reviewers only, −0.05 points, 95% confidence interval [CI] −0.08 to −0.02) or reviewers in scientific domains different from the applicant’s (−0.07 points, 95% CI −0.11 to −0.03).

Significantly higher application scores were also associated with reviewer agreement in application score (0.23 points, 95% CI 0.20 to 0.26), the existence of reviewer conflicts (0.09 points, 95% CI 0.07 to 0.11), larger budget requests (0.01 points per $100 000, 95% CI 0.007 to 0.02), and resubmissions (0.15 points, 95% CI 0.14 to 0.17).

In addition, reviewers with high expertise were more likely than those with less expertise to provide higher scores to applicants with higher past success rates (0.18 points, 95% CI 0.08 to 0.28).
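As an illustration of the kind of adjusted regression the abstract describes, here is a minimal sketch in Python with statsmodels on synthetic data. All variable names, data and coefficients are hypothetical stand-ins, not the CIHR data or the authors' actual model; the quadratic and interaction terms only mirror the reported nonlinearity and gender/domain differences.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; column names are hypothetical, not CIHR's.
rng = np.random.default_rng(0)
n = 500
apps = pd.DataFrame({
    "h_index": rng.poisson(15, n),               # scientific productivity
    "female": rng.integers(0, 2, n),             # applicant gender indicator
    "applied_science": rng.integers(0, 2, n),    # scientific domain indicator
    "past_success_rate": rng.uniform(0, 1, n),   # past funding success
    "budget_per_100k": rng.uniform(1, 10, n),    # requested budget
})
apps["score"] = (3.5 + 0.02 * apps.h_index + 0.3 * apps.past_success_rate
                 - 0.05 * apps.female + rng.normal(0, 0.3, n))

# Nonlinear productivity effect (quadratic term) allowed to differ by
# gender and domain, mirroring the interactions the abstract reports.
model = smf.ols(
    "score ~ (h_index + I(h_index**2)) * (female + applied_science)"
    " + past_success_rate + budget_per_100k",
    data=apps,
).fit()
print(model.summary())
```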

INTERPRETATION

There is evidence of bias in peer review of operating grants that is of sufficient magnitude to change application scores from fundable to nonfundable. This should be addressed by training and policy changes in research funding.

URL : https://doi.org/10.1503/cmaj.170901

Pubpeer: vigilante science, journal club or alarm raiser? The controversies over anonymity in post-publication peer review

Author : Didier Torny

The more journal peer review (JPR) has become a scientific topic, the more it has been the subject of criticism and controversy. Repeated fake reports, confirmed reviewer biases, lack of reproducibility, and a recurrent inability to detect fraud and misconduct have apparently condemned JPR in its supposedly traditional form.

In fact, just like previous historical reforms and inventions, JPR has again been the object of many proposals to “fix it” since the beginning of the 21st century. Though these proposals are very diverse, two main directions have been identified: open peer review on one side, post-publication peer review (PPPR) on the other.

These two “fixes” share a common device, the open commenting of published articles, which is both an open peer review practice (as it is visible to all readers) and PPPR (as it comes after the publication, and often the certification, of articles). Sitting at their intersection, open commenting should thus thrive, and indeed many journals have offered this feature, but without success.

Nevertheless, there is an exception to the disappointment with open commentary in PPPR, which is the empirical case for this presentation: PubPeer, where commentators come in herds and comments flourish. The only explanation given for this peculiar success is the possibility, widely used, of publishing anonymized comments on the platform.

So, how can a platform embrace the openness of discussion and, at the same time, enable anonymous commentators? What kinds of PPPR practices is this connected with? Does it inform our views on traditional peer review, and if so, how?

To answer these questions, we will first describe how the platform was built and how it works, then examine the dynamics it produces as far as anonymity is concerned, typify the arguments used for and against anonymity in PPPR, and discuss its effects on published papers, before concluding on how debates could be organized in PPPR.

These first results are based on a systematic qualitative analysis of threads on PubPeer, of articles about PubPeer and anonymity on specialized websites (Scholarly Kitchen, RetractionWatch…), and of editorials from scientific journals that have commented on anonymity in PPPR.

URL : https://halshs.archives-ouvertes.fr/halshs-01700198

Can your paper evade the editors axe? Towards an AI assisted peer review system

Authors : Tirthankar Ghosal, Rajeev Verma, Asif Ekbal, Sriparna Saha, Pushpak Bhattacharyya, Srinivasa Satya Sameer Kumar Chivukula, Georgios Tsatsaronis, Pascal Coupet, Michelle Gregory

This work is an exploratory study of how we could progress a step towards an AI-assisted peer review system. The proposed approach is an ambitious attempt to automate desk rejection, a phenomenon prevalent in academic peer review.

In this investigation we first attempt to decipher the possible reasons a scientific manuscript is rejected at the editor's desk. To address those causes, we combine information extraction techniques, clustering and citation analysis to formulate a supervised solution to the identified problems.

The proposed approach integrates two important aspects of rejection: i) a paper being rejected because it is out of scope, and ii) a paper being rejected due to poor quality. We extract several features to quantify the quality of a paper and the degree to which it is in scope, drawing on keyword search, citation analysis, the reputation of authors and affiliations, and similarity to accepted papers.

The features are then fed to standard machine learning classifiers to develop an automated system. On a sizeable test set, our generic approach yields promising results across three different journals.
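As a rough illustration of this pipeline, the sketch below feeds a few scope- and quality-style features to a standard classifier with scikit-learn. The features, labels and data are synthetic stand-ins invented for the example, not the authors' actual feature set or corpus.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in features; the comments name the kind of signal each
# column is meant to imitate from the abstract's feature list.
rng = np.random.default_rng(0)
n = 400
X = np.column_stack([
    rng.uniform(0, 1, n),   # keyword overlap with the journal's scope
    rng.uniform(0, 1, n),   # similarity to previously accepted papers
    rng.poisson(20, n),     # citation-analysis signal
    rng.uniform(0, 1, n),   # author/affiliation reputation
])
# Toy label: papers that look out of scope or low quality get desk-rejected.
y = ((X[:, 0] + X[:, 1]) / 2 + rng.normal(0, 0.2, n) < 0.45).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc"))
```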

The study points to a possible renewed interest of the research community in studying rejected papers, and represents a step towards an automated peer review system.

URL : https://arxiv.org/abs/1802.01403

The Social Structure of Consensus in Scientific Review

Authors : Misha Teplitskiy, Daniel Acuna, Aida Elamrani-Raoult, Konrad Kording, James Evans

Personal connections between creators and evaluators of scientific works are ubiquitous, and the possibility of bias ever-present. Although connections have been shown to bias prospective judgments of (uncertain) future performance, it is unknown whether such biases occur in the much more concrete task of assessing the scientific validity of already completed work, and if so, why.

This study presents evidence that personal connections between authors and reviewers of neuroscience manuscripts are associated with biased judgments and explores the mechanisms driving the effect.

Using reviews from 7,981 neuroscience manuscripts submitted to the journal PLOS ONE, which instructs reviewers to evaluate manuscripts only on scientific validity, we find that reviewers favored authors close to them in the co-authorship network by ~0.11 points on a 1.0–4.0 scale for each step of proximity.
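The proximity measure here is graph distance in the co-authorship network. Below is a minimal sketch with networkx on a toy graph invented for illustration, not the PLOS ONE data:

```python
import networkx as nx

# Nodes are researchers; an edge means at least one co-authored paper.
G = nx.Graph()
G.add_edges_from([
    ("author", "colleague"),
    ("colleague", "reviewer_a"),    # reviewer_a is 2 steps from the author
    ("reviewer_b", "someone_else"), # reviewer_b is unreachable: "very distant"
])

def coauthor_distance(g, a, b):
    """Steps of proximity between two researchers, inf if unconnected."""
    try:
        return nx.shortest_path_length(g, a, b)
    except nx.NetworkXNoPath:
        return float("inf")

print(coauthor_distance(G, "author", "reviewer_a"))  # 2
print(coauthor_distance(G, "author", "reviewer_b"))  # inf
```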

PLOS ONE’s validity-focused review and the substantial amount of favoritism shown by distant vs. very distant reviewers, both of whom should have little to gain from nepotism, point to the central role of substantive disagreements between scientists in different “schools of thought.”

The results suggest that removing bias from peer review cannot be accomplished simply by recusing the closely-connected reviewers, and highlight the value of recruiting reviewers embedded in diverse professional networks.

URL : https://arxiv.org/abs/1802.01270

“Let the community decide”? The vision and reality of soundness-only peer review in open-access mega-journals

Authors : Valerie Spezi, Simon Wakeling, Stephen Pinfield, Jenny Fry, Claire Creaser, Peter Willett

Purpose

The purpose of this paper is to better understand the theory and practice of peer review in open-access mega-journals (OAMJs). OAMJs typically operate a “soundness-only” review policy aiming to evaluate only the rigour of an article, not the novelty or significance of the research or its relevance to a particular community, with these elements being left for “the community to decide” post-publication.

Design/methodology/approach

The paper reports the results of interviews with 31 senior publishers and editors representing 16 different organisations, including 10 that publish an OAMJ. Thematic analysis was carried out on the data, and an analytical model was developed to explicate the findings' significance.

Findings

Findings suggest that, in reality, criteria beyond technical or scientific soundness can and do influence editorial decisions. Deviations from the original OAMJ model are both publisher-supported (in the form of requirements for an article to be “worthy” of publication) and practice-driven (in the form of some reviewers and editors applying traditional peer review criteria to OAMJ submissions). Publishers also believe that post-publication evaluation of novelty, significance and relevance remains problematic.

Originality/value

The study is based on unprecedented access to senior publishers and editors, allowing insight into their strategic and operational priorities.

The paper is the first to report in-depth qualitative data relating specifically to soundness-only peer review for OAMJs, shedding new light on the OAMJ phenomenon and helping inform discussion on its future role in scholarly communication. The paper proposes a new model for understanding the OAMJ approach to quality assurance, and how it is different from traditional peer review.

DOI : https://doi.org/10.1108/JD-06-2017-0092

Survey on open peer review: Attitudes and experience amongst editors, authors and reviewers

Authors : Tony Ross-Hellauer, Arvid Deppe, Birgit Schmidt

Open peer review (OPR) is a cornerstone of the emergent Open Science agenda. Yet to date no large-scale survey of attitudes towards OPR amongst academic editors, authors, reviewers and publishers has been undertaken.

This paper presents the findings of an online survey, conducted for the OpenAIRE2020 project during September and October 2016, that sought to bridge this information gap in order to aid the development of appropriate OPR approaches by providing evidence about attitudes towards and levels of experience with OPR.

The results of this cross-disciplinary survey, which received 3,062 full responses, show that the majority (60.3%) of respondents believe that OPR as a general concept should be mainstream scholarly practice (although attitudes to individual traits varied, and open identities peer review was not generally favoured). Respondents were also in favour of other areas of Open Science, like Open Access (88.2%) and Open Data (80.3%).

Among respondents we observed high levels of experience with OPR, with three out of four (76.2%) reporting having taken part in an OPR process as author, reviewer or editor.

There were also high levels of support for most of the traits of OPR, particularly open interaction, open reports and final-version commenting. Respondents were against opening reviewer identities to authors, however, with more than half believing it would make peer review worse.

Overall satisfaction with the peer review system used by scholarly journals seems to vary strongly across disciplines. Taken together, these findings are very encouraging for OPR's prospects of moving mainstream, but indicate that due care must be taken to avoid a “one-size-fits-all” solution and to tailor such systems to differing (especially disciplinary) contexts.

OPR is an evolving phenomenon and hence future studies are to be encouraged, especially to further explore differences between disciplines and monitor the evolution of attitudes.

DOI : https://doi.org/10.1371/journal.pone.0189311

Artificial intelligence in peer review: How can evolutionary computation support journal editors?

Authors : Maciej J. Mrowinski, Piotr Fronczak, Agata Fronczak, Marcel Ausloos, Olgica Nedic

With the volume of manuscripts submitted for publication growing every year, the deficiencies of peer review (e.g. long review times) are becoming more apparent. Editorial strategies, sets of guidelines designed to speed up the process and reduce editors' workloads, are treated as trade secrets by publishing houses and are not shared publicly.

To improve the effectiveness of their strategies, editors in small publishing groups are left to undertake an iterative trial-and-error approach. We show that Cartesian Genetic Programming, a nature-inspired evolutionary algorithm, can dramatically improve editorial strategies.

The artificially evolved strategy reduced the duration of the peer review process by 30%, without increasing the pool of reviewers (in comparison to a typical human-developed strategy).
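Cartesian Genetic Programming typically pairs a graph-encoded program with a simple (1 + λ) evolutionary loop. The sketch below shows only that loop, applied to a toy, hypothetical review-time model; it is not the authors' simulation, their fitness function, or their CGP encoding.

```python
import random

def review_duration(strategy):
    # Hypothetical fitness: simulated mean review time (days) under a
    # strategy encoded as (reviewers_invited_per_round, days_before_reminder).
    invited, reminder = strategy
    return 120 / max(invited, 1) + reminder * 1.5

def mutate(strategy):
    # Small random perturbation of each strategy parameter.
    invited, reminder = strategy
    return (max(1, invited + random.choice([-1, 0, 1])),
            max(1, reminder + random.choice([-2, 0, 2])))

parent = (2, 14)  # hypothetical human-devised starting strategy
for generation in range(100):
    offspring = [mutate(parent) for _ in range(4)]           # lambda = 4
    parent = min(offspring + [parent], key=review_duration)  # keep the best

print(parent, review_duration(parent))
```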

Evolutionary computation has typically been applied to technological processes or biological ecosystems. Our results demonstrate that genetic programs can also improve real-world social systems, which are usually much harder to understand and control than physical systems.

URL : https://arxiv.org/abs/1712.01682