Does Peer Review Identify the Best Papers? A Simulation Study of Editors, Reviewers, and the Scientific Publication Process

Author : Justin Esarey

How does the structure of the peer review process, which can vary among journals, influence the quality of papers published in a journal? This article studies multiple systems of peer review using computational simulation. I find that, under any of the systems I study, a majority of accepted papers are evaluated by an average reader as not meeting the standards of the journal.

Moreover, all systems allow random chance to play a strong role in the acceptance decision. Heterogeneous reviewer and reader standards for scientific quality drive both results. A peer review system with an active editor—that is, one who uses desk rejection before review and does not rely strictly on reviewer votes to make decisions—can mitigate some of these effects.
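The mechanism the abstract describes, heterogeneous reviewer and reader standards plus an active editor who desk-rejects, can be illustrated with a toy Monte Carlo sketch. All parameters and distributions below are hypothetical choices for illustration, not Esarey's actual model:

```python
import random

def simulate(n_papers=10_000, n_reviewers=3, desk_reject=True, seed=42):
    """Toy model: each paper has a latent quality; each reviewer and each
    reader applies a noisy, idiosyncratic standard. Returns the number of
    accepted papers and how many of them an average reader would judge
    as falling below the journal's nominal bar."""
    rng = random.Random(seed)
    bar = 1.0  # journal's nominal quality standard (z-scale)
    accepted = below_bar = 0
    for _ in range(n_papers):
        quality = rng.gauss(0, 1)
        # Active editor desk-rejects papers clearly below the bar.
        if desk_reject and quality < bar - 1.0:
            continue
        # Each reviewer votes against a personal, noisy standard.
        votes = sum(quality + rng.gauss(0, 1) > bar + rng.gauss(0, 0.5)
                    for _ in range(n_reviewers))
        if votes >= 2:  # simple majority rule
            accepted += 1
            # A reader with their own noisy standard evaluates the paper.
            if quality + rng.gauss(0, 1) < bar:
                below_bar += 1
    return accepted, below_bar

acc, bad = simulate()
print(f"accepted: {acc}, judged below the bar by a reader: {bad / acc:.0%}")
```

Even in this crude sketch, reviewer and reader noise means a sizable share of accepted papers fail an average reader's standard, which is the qualitative effect the article reports.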


A Proposed Currency System for Academic Peer Review Payments Using the BlockChain Technology

Author : Michael Spearpoint

Peer review of scholarly papers is seen as a critical step in the publication of high-quality outputs in reputable journals. However, there appear to be few incentives for researchers to conduct suitable reviews in a timely fashion, and in some cases unscrupulous practices have emerged in the production of academic research output.

Innovations in internet-based technologies mean that there are ways in which some of the challenges can be addressed. In particular, this paper proposes a new currency system using the BlockChain as its basis that provides a number of solutions.
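As a rough illustration of the kind of mechanism involved, and not Spearpoint's actual design, a review-payment ledger can be sketched as a hash-chained record of credit transfers between journals and reviewers. The class and field names below are hypothetical:

```python
import hashlib
import json

class ReviewLedger:
    """Minimal hash-chained ledger of review-credit transfers (illustrative
    only; a real blockchain adds consensus, signatures, and distribution)."""
    def __init__(self):
        self.chain = [{"prev": "0" * 64, "tx": "genesis"}]

    def _hash(self, block):
        # Deterministic hash of a block's canonical JSON encoding.
        return hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()

    def add_payment(self, journal, reviewer, credits):
        # Each new block commits to the hash of its predecessor.
        block = {"prev": self._hash(self.chain[-1]),
                 "tx": {"from": journal, "to": reviewer, "credits": credits}}
        self.chain.append(block)

    def verify(self):
        # The chain is valid iff every block points at its predecessor's hash.
        return all(b["prev"] == self._hash(self.chain[i])
                   for i, b in enumerate(self.chain[1:]))

ledger = ReviewLedger()
ledger.add_payment("Journal A", "Reviewer X", 1)
ledger.add_payment("Journal B", "Reviewer X", 2)
print(ledger.verify())  # True: the chain is intact
```

The hash chaining is what makes past payments tamper-evident: altering any recorded transfer breaks every subsequent block's back-pointer, which is the property that would let such credits serve as a trustworthy publication metric.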

Potential benefits and problems of using the technology are discussed in the paper; these will need investigation should the idea develop further. Ultimately, the currency could be used as an alternative publication metric for authors, institutions and journals.

URL : A Proposed Currency System for Academic Peer Review Payments Using the BlockChain Technology


Effectiveness of Anonymization in Double-Blind Review

Authors : Claire Le Goues, Yuriy Brun, Sven Apel, Emery Berger, Sarfraz Khurshid, Yannis Smaragdakis

Double-blind review relies on the authors’ ability and willingness to effectively anonymize their submissions. We explore anonymization effectiveness at ASE 2016, OOPSLA 2016, and PLDI 2016 by asking reviewers if they can guess author identities.

We find that 74–90% of reviews contain no correct guess, and that reviewers who self-identify as experts on a paper’s topic are more likely to attempt to guess, but no more likely to guess correctly.

We present our findings, summarize the PC chairs’ comments about administering double-blind review, discuss the advantages and disadvantages of revealing author identities part of the way through the process, and conclude by advocating for the continued use of double-blind review.


A prospective study on an innovative online forum for peer reviewing of surgical science

Authors : Martin Almquist, Regula S. von Allmen, Dan Carradice, Steven J. Oosterling, Kirsty McFarlane, Bas Wijnhoven


Peer review is important to the scientific process. However, the present system has been criticised and accused of bias, lack of transparency, and failure to detect significant breakthroughs and errors. At the British Journal of Surgery (BJS), after surveying authors’ and reviewers’ opinions on peer review, we piloted an open online forum with the aim of improving the peer review process.


In December 2014, a web-based survey assessing attitudes towards open online review was sent to reviewers with a BJS account in Scholar One. From April to June 2015, authors were invited to allow their manuscripts to undergo online peer review in addition to the standard peer review process.

The quality of each review was evaluated by editors and editorial assistants using a validated instrument based on a Likert scale.


The survey was sent to 6635 reviewers. In all, 1454 (21.9%) responded. Support for online peer review was strong, with only 10% stating that they would not subject their manuscripts to online peer review. The most prevalent concern was about intellectual property, being highlighted in 118 of 284 comments (41.5%).

Out of 265 eligible manuscripts, 110 were included in the online peer review trial. Around 7000 potential reviewers were invited to review each manuscript.

In all, 44 of 110 manuscripts (40%) received 100 reviews from 59 reviewers, alongside 115 conventional reviews. The quality of the open forum reviews was lower than that of the conventional reviews (mean 2.13 (± 0.75) versus 2.84 (± 0.71); P < 0.001).
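The quality comparison reported above (means with standard deviations on a Likert scale) corresponds to a two-sample test such as Welch's t, which can be computed directly from the summary statistics. Treating the review counts (100 open-forum, 115 conventional) as the group sizes is an assumption on my part, and the normal-approximation p-value is a sketch, not the study's actual analysis:

```python
import math

def welch_t_from_summary(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and a large-sample (normal approximation)
    two-sided p-value from means, SDs, and group sizes."""
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)  # standard error of the difference
    t = (m1 - m2) / se
    p = math.erfc(abs(t) / math.sqrt(2))  # two-sided, normal approximation
    return t, p

# Summary figures taken from the abstract (group sizes assumed).
t, p = welch_t_from_summary(2.13, 0.75, 100, 2.84, 0.71, 115)
print(f"t = {t:.2f}, p = {p:.1e}")
```

The resulting t of roughly -7 yields a p-value far below 0.001, consistent with the significance level the abstract reports.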


Open online peer review is feasible in this setting, but it attracted few reviews, and those were of lower quality than conventional peer reviews.

URL : A prospective study on an innovative online forum for peer reviewing of surgical science



Using Peer Review to Support Development of Community Resources for Research Data Management

Authors : Heather Soyka, Amber Budden, Viv Hutchison, David Bloom, Jonah Duckles, Amy Hodge, Matthew S. Mayernik, Timothée Poisot, Shannon Rauch, Gail Steinhart, Leah Wasser, Amanda L. Whitmire, Stephanie Wright


To ensure that resources designed to teach skills and best practices for scientific research data sharing and management are useful, the maintainers of those materials need to evaluate and update them to ensure their accuracy, currency, and quality.

This paper advances the use and process of outside peer review for community resources in addressing ongoing accuracy, quality, and currency issues. It further describes the next step of moving the updated materials to an online collaborative community platform for future iterative review in order to build upon mechanisms for open science, ongoing iteration, participation, and transparent community engagement.


Research data management resources were developed in support of the DataONE (Data Observation Network for Earth) project, which has deployed a sustainable, long-term network to ensure the preservation of, and access to, multi-scale, multi-discipline, and multi-national environmental and biological science data (Michener et al. 2012).

Created by members of the Community Engagement and Education (CEE) Working Group in 2011-2012, the freely available Educational Modules included three complementary components (slides, handouts, and exercises) that were designed to be adaptable for use in classrooms as well as for research data management training.


Because the modules were initially created and launched in 2011-2012, the current members of the (renamed) Community Engagement and Outreach (CEO) Working Group were concerned that the materials could be, or could quickly become, outdated, and should be reviewed for accuracy, currency, and quality.

In November 2015, the Working Group developed an evaluation rubric for use by outside reviewers. Review criteria were developed based on surveys and usage scenarios from previous DataONE projects.

Peer reviewers were selected from the DataONE community network for their expertise in the areas covered by one of the 11 educational modules. Reviewers were contacted in March 2016, and were asked to volunteer to complete their evaluations online within one month of the request, by using a customized Google form.


For the 11 modules, 22 completed reviews were received by April 2016 from outside experts. Comments on all three components of each module (slides, handouts, and exercises) were compiled and evaluated by the postdoctoral fellow attached to the CEO Working Group.

These reviews contributed to the Working Group's full evaluation and revision of all educational modules in September 2016. This review process, together with the potential lack of funding for ongoing maintenance by Working Group members or paid staff, prompted the group to convert the modules to a more stable, non-proprietary format and to move them to an online open repository hosting platform, GitHub.

These decisions were made to foster sustainability, community engagement, version control, and transparency.


Outside peer review of the modules by experts in the field was beneficial for highlighting areas of weakness or overlap in the education modules. The modules were initially created in 2011-2012 by an earlier iteration of the Working Group, and updates were needed due to constantly evolving practices in the field.

Because the review process was lengthy (approximately one year) relative to the rate of innovation in data management practices, the Working Group discussed other options that would allow community members to make updates available more quickly.

The intent of migrating the modules to an online collaborative platform (GitHub) is to allow for iterative updates and ongoing outside review, and to provide further transparency about accuracy, currency, and quality in the spirit of open science and collaboration.

Documentation about this project may be useful for others trying to develop and maintain educational resources for engagement and outreach, particularly in communities and spaces where information changes quickly, and open platforms are already in common use.

URL : Using Peer Review to Support Development of Community Resources for Research Data Management


What do we know about grant peer review in the health sciences?

Authors : Susan Guthrie, Ioana Ghiga, Steven Wooding


Peer review decisions award more than 95% of academic medical research funding, so it is crucial to understand how well they work and whether they could be improved.


This paper summarises evidence from 105 relevant papers identified through a literature search on the effectiveness and burden of peer review for grant funding.


There is a remarkable paucity of evidence about the overall efficiency of peer review for funding allocation, given its centrality to the modern system of science. From the available evidence, we can identify some conclusions around the effectiveness and burden of peer review.

The strongest evidence around effectiveness indicates a bias against innovative research. There is also fairly clear evidence that peer review is, at best, a weak predictor of future research performance, and that ratings vary considerably between reviewers. There is some evidence of age bias and cronyism.

Good evidence shows that the burden of peer review is high and that around 75% of it falls on applicants. By contrast, many of the efforts to reduce burden are focused on funders and reviewers/panel members.


We suggest funders should acknowledge, assess and analyse the uncertainty around peer review, even using reviewers’ uncertainty as an input to funding decisions. Funders could consider a lottery element in some parts of their funding allocation process, to reduce both burden and bias, and allow better evaluation of decision processes.
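The "lottery element" the authors suggest can be made concrete: fund the clear top scorers outright, then fill the remaining awards by random draw from a middle band of fundable proposals. The function and parameters below are a hedged sketch of this general idea, not the authors' protocol:

```python
import random

def allocate(scores, budget, top_frac=0.5, fundable_cutoff=0.6, seed=1):
    """Fund the clear top scorers outright, then fill the remaining
    budget by lottery among proposals above a fundable-quality cutoff.

    scores -- dict mapping proposal id to peer-review score in [0, 1]
    budget -- total number of awards available
    """
    rng = random.Random(seed)
    ranked = sorted(scores, key=scores.get, reverse=True)
    n_top = int(budget * top_frac)
    funded = ranked[:n_top]  # uncontroversial top scorers
    # Lottery pool: remaining proposals judged fundable on merit.
    pool = [p for p in ranked[n_top:] if scores[p] >= fundable_cutoff]
    funded += rng.sample(pool, min(budget - n_top, len(pool)))
    return funded

rng = random.Random(0)
scores = {f"P{i}": round(rng.random(), 2) for i in range(20)}
funded = allocate(scores, budget=6)
print(funded)
```

Because the lottery stage is explicitly random, it removes reviewer disagreement as the deciding factor in the middle band, which is exactly where the abstract notes ratings vary most between reviewers.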

Alternatively, the distribution of scores from different reviewers could be better utilised as a possible way to identify novel, innovative research. Above all, there is a need for open, transparent experimentation and evaluation of different ways to fund research.

This also requires more openness across the wider scientific community to support such investigations, acknowledging the lack of evidence about the primacy of the current system and the impossibility of achieving perfection.

URL : What do we know about grant peer review in the health sciences?



A multi-disciplinary perspective on emergent and future innovations in peer review

Authors : Jonathan P. Tennant, Jonathan M. Dugan, Daniel Graziotin, Damien C. Jacques, François Waldner, Daniel Mietchen, Yehia Elkhatib, Lauren B. Collister, Christina K. Pikas, Tom Crick, Paola Masuzzo, Anthony Caravaggi, Devin R. Berg, Kyle E. Niemeyer, Tony Ross-Hellauer, Sara Mannheimer, Lillian Rigling, Daniel S. Katz, Bastian Greshake Tzovaras, Josmel Pacheco-Mendoza, Nazeefa Fatima, Marta Poblet, Marios Isaakidis, Dasapta Erwin Irawan, Sébastien Renaut, Christopher R. Madan, Lisa Matthias, Jesper Nørgaard Kjær, Daniel Paul O’Donnell, Cameron Neylon, Sarah Kearns, Manojkumar Selvaraju, Julien Colomb

Peer review of research articles is a core part of our scholarly communication system. In spite of its importance, the status and purpose of peer review are often contested. What is its role in our modern digital research and communications infrastructure?

Does it live up to the high regard in which it is generally held? Studies of peer review have shown that it is prone to bias and abuse along numerous dimensions, frequently unreliable, and can fail to detect even fraudulent research.

With the advent of Web technologies, we are now witnessing a phase of innovation and experimentation in our approaches to peer review. These developments prompted us to examine emerging models of peer review from a range of disciplines and venues, and to ask how they might address some of the issues with our current systems of peer review.

We examine the functionality of a range of social Web platforms, and compare these with the traits underlying a viable peer review system: quality control, quantified performance metrics as engagement incentives, and certification and reputation.

Ideally, any new systems will demonstrate that they out-perform current models while avoiding as many of the biases of existing systems as possible. We conclude that there is considerable scope for new peer review initiatives to be developed, each with their own potential issues and advantages.

We also propose a novel hybrid platform model that, at least partially, resolves many of the technical and social issues associated with peer review, and can potentially disrupt the entire scholarly communication system.

Success for any such development relies on reaching a critical threshold of research community engagement with both the process and the platform, and therefore cannot be achieved without a significant change of incentives in research environments.

URL : A multi-disciplinary perspective on emergent and future innovations in peer review