How Many Papers Should Scientists Be Reviewing? An Analysis Using Verified Peer Review Reports

Author : Vincent Raoult

The current peer review system is under stress from ever-increasing numbers of publications, the proliferation of open-access journals, and an apparent difficulty in obtaining high-quality reviews in a timely manner. At its core, this issue may stem from scientists insufficiently prioritising reviewing.

Perhaps this low prioritisation is due to a lack of understanding of how many reviews researchers need to conduct to balance the peer review process. I obtained verified peer review data from 142 journals across 12 research fields, for a total of over 300,000 reviews and over 100,000 publications, to estimate the number of reviews required per publication in each field.

I then used this value in relation to the mean number of authors per publication in each field to derive a ‘review ratio’: the expected minimum number of publications an author in a given field should review to balance their input (publications) into the peer review process.

On average, 3.49 ± 1.45 (SD) reviews were required for each scientific publication, and the estimated review ratio across all fields was 0.74 ± 0.46 (SD) reviews per paper published per author. Since these are conservative estimates, I recommend scientists aim to conduct at least one review per publication they produce. This should ensure that the peer review system continues to function as intended.
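The review ratio itself is simple arithmetic: the reviews a publication consumes, divided among the co-authors who share responsibility for supplying reviews in return. A minimal sketch in Python, assuming a mean author count of roughly 4.7 (a figure inferred from the reported means, not quoted from the paper):

def review_ratio(reviews_per_publication, mean_authors_per_publication):
    # Minimum reviews each co-author should perform per publication,
    # so that the reviews a paper consumes are balanced by reviews supplied.
    return reviews_per_publication / mean_authors_per_publication

# Illustrative only: the reported cross-field mean of ~3.49 reviews per
# publication and an assumed ~4.7 authors per paper yield the reported ~0.74.
print(round(review_ratio(3.49, 4.7), 2))  # 0.74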

URL : How Many Papers Should Scientists Be Reviewing? An Analysis Using Verified Peer Review Reports

DOI : https://doi.org/10.3390/publications8010004

Peer Review of Research Data Submissions to ScholarsArchive@OSU: How can we improve the curation of research datasets to enhance reusability?

Authors : Clara Llebot, Steven Van Tuyl

Objective

Best practices such as the FAIR Principles (Findability, Accessibility, Interoperability, Reusability) were developed to ensure that published datasets are reusable. While we employ best practices in the curation of datasets, we want to learn how domain experts view the reusability of datasets in our institutional repository, ScholarsArchive@OSU.

Curation workflows are designed by data curators based on their own recommendations, but research data is extremely specialized, and such workflows are rarely evaluated by researchers.

In this project we used peer review by domain experts to evaluate the reusability of the datasets in our institutional repository, with the goal of informing our curation methods and ensuring that the limited resources of our library maximize the reusability of research data.

Methods

We asked all researchers who had submitted datasets to Oregon State University’s repository to refer us to domain experts who could review the reusability of their datasets. Two data curators, who are not domain experts, also reviewed the same datasets.

We gave both groups review guidelines modelled on those of several journals. Eleven domain experts and two data curators reviewed eight datasets.

The review included the quality of the repository record, the quality of the documentation, and the quality of the data. We then compared the comments given by the two groups.

Results

Domain experts and non-expert data curators largely converged on similar scores for the reviewed datasets, but the two groups tended to focus their critiques on different aspects of the datasets.

A few broad issues common across reviews were: insufficient documentation, the use of links to journal articles in the place of documentation, and concerns about duplication of effort in creating documentation and metadata. Reviews also reflected the background and skills of the reviewer.

Domain experts noted their lack of expertise in data curation practices, while data curators noted their lack of expertise in the research domain.

Conclusions

The results of this investigation could help guide future research data curation activities and align domain expert and data curator expectations for reusability of datasets.

We recommend further exploration of these common issues and additional domain-expert peer-review projects to further refine and align expectations for research data reusability.

URL : Peer Review of Research Data Submissions to ScholarsArchive@OSU: How can we improve the curation of research datasets to enhance reusability?

DOI : https://doi.org/10.7191/jeslib.2019.1166

A history and development of peer-review process

Author : Jana Siladitya

The paper shows the importance of the peer review process in the scholarly communication system and discusses both the closed and the newly emerging open peer review models.

It also examines the peer review systems used by scholarly academies and societies when nominating candidates for prizes, awards, and similar honours, and discusses the various facets of the newly developed open peer review models now prevalent in many journals.

The paper may help readers understand and appreciate the role played by peer review in the scholarly communication system and the efforts being made to make it more transparent.

URI : http://hdl.handle.net/10760/39332

Does the use of open, non-anonymous peer review in scholarly publishing introduce bias? Evidence from the F1000 post-publication open peer review publishing model

Authors : Mike Thelwall, Verena Weigert, Liz Allen, Zena Nyakoojo, Eleanor-Rose Papas

This study examines whether there is any evidence of bias in two areas of common critique of open, non-anonymous peer review, as used in the post-publication peer review system operated by the open-access scholarly publishing platform F1000Research.

First, is there evidence of bias when a reviewer based in a specific country assesses the work of an author based in that same country? Second, are reviewers influenced by being able to see the comments and know the origins of previous reviewers?

Scrutinising the open peer review comments published on F1000Research, we assess the extent of two frequently cited potential influences on reviewers that may be the result of the transparency offered by a fully attributable, open peer review publishing model: the national affiliations of authors and reviewers, and the ability of reviewers to view previously-published reviewer reports before submitting their own.

The effects of these potential influences were investigated for all first versions of articles published to F1000Research by 8 July 2019. In 16 of the 20 countries with the most articles, reviewers based in the same country as the author tended to give a more positive review.

The difference was statistically significant in one country, and only three countries showed the reverse tendency. Second, there was no evidence of a conformity bias: when reviewers mentioned a previous review in their peer review report, they were not more likely to give the same overall judgement.

Although reviewers who had longer to potentially read a previously published reviewer report were slightly less likely to agree with previous reviewer judgements, this could be because those articles were harder to judge rather than a sign of deliberate non-conformity.

URL : https://arxiv.org/abs/1911.03379

Which Are the Tools Available for Scholars? A Review of Assisting Software for Authors during Peer Reviewing Process

Authors : J. Israel Martínez-López, Samantha Barrón-González, Alejandro Martínez López

A large number of Information and Communication Technology (ICT) tools surround scholarly activity. The prominent place of the peer-review process in publication has fostered a crowded market of technological tools in several formats.

Despite this abundance, many tools go unexploited or underused because the academic community is unaware of them. In this study, we explored the availability and characteristics of tools that assist the peer-reviewing process.

The aim was to provide a more comprehensive understanding of the tools available at this time, and to point to trends for further development. An examination of the literature informed the creation of a novel taxonomy of the types of software available on the market.

This new classification is divided into nine categories as follows: (I) Identification and social media, (II) Academic search engines, (III) Journal-abstract matchmakers, (IV) Collaborative text editors, (V) Data visualization and analysis tools, (VI) Reference management, (VII) Proofreading and plagiarism detection, (VIII) Data archiving, and (IX) Scientometrics and Altmetrics.

Considering these categories and their defining traits, a curated list of 220 software tools was compiled using a crowdsourced database (AlternativeTo) to identify relevant programs and ongoing trends and perspectives of tools developed and used by scholars.

URL : Which Are the Tools Available for Scholars? A Review of Assisting Software for Authors during Peer Reviewing Process

DOI : https://doi.org/10.3390/publications7030059

Bye Bye Peer-Reviewed Publishing

Authors : Miguel Abambres, Tony Salloom, Nejra Beganovic, Rafal Dojka, Sergio Roncallo-Dow

This work continues a ‘revolution’ started with “Research Counts, Not the Journal”. It presents the author’s own opinions, together with published opinions from scientists worldwide, on critical issues in peer-reviewed publishing.

In my opinion, peer-reviewed publishing is a quite flawed process (in many ways) that has greatly harmed science for a long time – yet it has been imposed by most academic and science funding institutions as the only way to assess scientific performance.

Unfortunately, most academics still follow that path, even though I believe most do so for fear of losing their jobs or not being promoted. This paper aims to encourage (i) a full disruption of peer-reviewed publishing and (ii) the use of free eprint repositories for sustainable academic/scientific publishing, i.e. publishing that is healthier (no stress or distress associated with the peer review stage and the long wait for publication) and more economical, effective and efficient (research is made immediately available, trackable and citable by anyone).

On the other hand, it should be pointed out that nothing here is directed against scientific publishers or journals – it is perfectly normal for any company to want to implement its own quality criteria.

This paper is simply the means chosen to promote the swift implementation of suitable policies for research evaluation.

URL : Bye Bye Peer-Reviewed Publishing

Alternative location : https://hal.archives-ouvertes.fr/hal-02114531v4

The limitations to our understanding of peer review

Authors : Jonathan P. Tennant, Tony Ross-Hellauer

Peer review is embedded in the core of our scholarly knowledge generation systems, conferring legitimacy on research while distributing academic capital and prestige to individuals.

Despite its critical importance, it remains curiously poorly understood in a number of dimensions. To address this, we have programmatically analysed peer review to assess where the major gaps in our theoretical and empirical understanding of it lie.

We distill this into core themes around editorial accountability, the subjectivity and bias of reviewers, the function and quality of peer review, the role of the reviewer, the social and epistemic implications of peer review, and innovations in open peer review platforms and services.

We use this to present a guide for the future of peer review and for the development of a new research discipline based on the study of peer review. Such a field requires sustained funding and commitment from publishers and research funders, both of whom have a duty to uphold the integrity of the published scholarly record.

This will require building consensus on a minimal set of standards for what constitutes peer review, and developing a shared data infrastructure to support this.

We recognise that many of the criticisms attributed to peer review might reflect wider issues within academia and society at large, and that care will be required in future to demarcate and address these.

DOI : https://doi.org/10.31235/osf.io/jq623