Author : Edward Barroga
Peer review is a crucial part of research and publishing. However, it remains imperfect, suffering from bias, lack of transparency, and professional jealousy. It is also overburdened: a growing volume of increasingly complex papers, set against a stagnant pool of reviewers, causes delays in peer review.
Additionally, many medical, nursing, and healthcare educators, peer reviewers, and authors may not be completely familiar with the current changes in peer review. Moreover, reviewer education and training have unfortunately remained lacking.
This is especially crucial since current initiatives to improve the review process are now influenced by factors other than academic needs. Thus, increasing attention has recently focused on ways of streamlining the peer review process and implementing alternative peer-review methods using new technologies and open access models.
This article aims to give an overview of the innovative strategies for peer review and to consider perspectives that may be helpful in introducing changes to peer review. Critical assessments of peer review innovations and incentives based on past and present experiences are indispensable.
A theoretical appraisal must be balanced by a realistic appraisal of the ethical roles of all stakeholders in enhancing the peer review process.
As the peer review system is far from perfect, identifying and developing core competencies among reviewers, continuing education of researchers, reviewer education and training, and professional engagement of the scientific community across disciplines may help bridge gaps in an imperfect but indispensable peer review system.
Title : Innovative Strategies for Peer Review
DOI : https://doi.org/10.3346/jkms.2020.35.e138
Author : David A. M. Peterson
The objective of this study was to empirically test the wide belief that Reviewer #2 is a uniquely poor reviewer.
The test involved analyzing the reviewer database from Political Behavior. There are two main tests. First, reviewers' categorical evaluations of manuscripts were compared by reviewer number. Second, the data were analyzed to test whether Reviewer #2 was disproportionately likely to score more than one category below the mean of a manuscript's other reviewers.
There is no evidence that Reviewer #2 is either more negative about the manuscript or out of line with the other reviewers. There is, however, evidence that Reviewer #3 is more likely to be more than one category below the other reviewers.
Reviewer #2 is not the problem. Reviewer #3 is. In fact, he is such a bad actor that he even gets the unwitting Reviewer #2 blamed for his bad behavior.
DOI : https://doi.org/10.1111/ssqu.12824
Author : Iva Zlodi
In recent years there has been an increasing need to ensure a more objective and transparent evaluation of scientific research in the Humanities and Social Sciences. This short paper explores some of the underlying issues and proposes a study using the survey method, based on a sample of 146 publications.
The results of this study could contribute to identifying and describing distinctive types of edited books and conference proceedings according to their peer-review procedures, and thus facilitate the recognition of their scholarly value and reliability.
URL : https://hal.archives-ouvertes.fr/hal-02544293
Authors : Anna Severin, Michaela Strinzel, Matthias Egger, Marc Domingo, Tiago Barros
While the characteristics of scholars who publish in predatory journals are relatively well-understood, nothing is known about the scholars who review for these journals.
We aimed to answer two questions. First, can we observe distinct patterns of reviewer characteristics among scholars who review for predatory journals versus legitimate journals? Second, how are reviews for potentially predatory journals distributed globally?
We matched random samples of 1,000 predatory journals and 1,000 legitimate journals from Cabells Scholarly Analytics' journal lists against the Publons database of review reports, using the Jaro-Winkler string metric.
For reviewers of matched reviews, we descriptively analysed meta-data on reviewing and publishing behaviour.
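The Jaro-Winkler metric used for the matching step above is a string similarity score in [0, 1] that boosts pairs sharing a common prefix, which makes it well suited to near-identical journal titles. A minimal sketch of the technique (the journal titles and the 0.9 threshold below are illustrative assumptions, not values from the study):

```python
def jaro(s1: str, s2: str) -> float:
    """Jaro similarity: combines the fraction of matched characters
    in each string with the fraction of matches in the same order."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if len1 == 0 or len2 == 0:
        return 0.0
    window = max(max(len1, len2) // 2 - 1, 0)
    matched1 = [False] * len1
    matched2 = [False] * len2
    matches = 0
    for i, c in enumerate(s1):
        lo, hi = max(0, i - window), min(len2, i + window + 1)
        for j in range(lo, hi):
            if not matched2[j] and s2[j] == c:
                matched1[i] = matched2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Count transpositions: matched characters appearing out of order.
    t, k = 0, 0
    for i in range(len1):
        if matched1[i]:
            while not matched2[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t //= 2
    return (matches / len1 + matches / len2 + (matches - t) / matches) / 3


def jaro_winkler(s1: str, s2: str, p: float = 0.1) -> float:
    """Jaro score boosted by up to 4 characters of shared prefix."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)


# Hypothetical matching step: pick the listed journal whose title is
# most similar to a claimed review's journal title.
listed = ["Journal of Peer Review Studies", "Annals of Scholarly Publishing"]
claimed = "Jornal of Peer Review Studies"  # e.g. a typo'd title in a record
best = max(listed, key=lambda t: jaro_winkler(claimed.lower(), t.lower()))
if jaro_winkler(claimed.lower(), best.lower()) >= 0.9:  # assumed cutoff
    print("matched:", best)
```

The prefix boost is what distinguishes Jaro-Winkler from plain Jaro; since journal title variants usually diverge late in the string (subtitles, abbreviations), this favours the intended matches.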
We matched 183,743 unique Publons reviews claimed by 19,598 reviewers. Of these, 6,077 reviews (3.31%) were conducted for 1,160 unique predatory journals, and 177,666 reviews (96.69%) were claimed for 6,403 legitimate journals.
The vast majority of scholars either never or only occasionally submitted reviews for predatory journals to Publons (89.96% and 7.55% of all reviewers, respectively). Smaller numbers of scholars claimed reviews predominantly or exclusively for predatory journals (0.26% and 0.35% of all reviewers, respectively).
The two latter groups of scholars are of younger academic age and have fewer publications and fewer reviews than the first two groups. Developing regions feature larger shares of reviews for predatory journals than developed regions.
The characteristics of scholars who review for potentially predatory journals resemble those of authors who publish their work in these outlets. In order to combat potentially predatory journals, stakeholders will need to adopt a holistic approach that takes into account the entire research workflow.
DOI : https://doi.org/10.1101/2020.03.09.983155
Authors : Natalie M. Sopinka, Laura E. Coristine, Maria C. DeRosa, Chelsea M. Rochman, Brian L. Owens, Steven J. Cooke
Consider for a moment the rate of advancement in the scientific understanding of DNA. It is formidable: from Friedrich Miescher's nuclein extraction in the 1860s to Rosalind Franklin's double helix X-ray in the 1950s to revolutionary next-generation sequencing in the late 2000s.
Now consider the scientific paper, the medium used to describe and publish these advances. How is the scientific paper advancing to meet the needs of those who generate and use scientific information?
We review four essential qualities for the scientific paper of the future: it should (i) remain a robust, peer-reviewed source of trustworthy information, (ii) be communicated to diverse users in diverse ways, (iii) be open access, and (iv) have a measurable impact beyond the Impact Factor.
Since its inception, scientific literature has proliferated. We discuss the continuation and expansion of practices already in place including: freely accessible data and analytical code, living research and reviews, changes to peer review to improve representation of under-represented groups, plain language summaries, preprint servers, evidence-informed decision-making, and altmetrics.
Title : Envisioning the scientific paper of the future
DOI : https://doi.org/10.1139/facets-2019-0012
Authors : Thomas Klebel, Stefan Reichmann, Jessica Polka, Gary McDowell, Naomi Penfold, Samantha Hindle, Tony Ross-Hellauer
Clear and findable publishing policies are important for authors to choose appropriate journals for publication. We investigated the clarity of policies of 171 major academic journals across disciplines regarding peer review and preprinting.
31.6% of journals surveyed do not provide information on the type of peer review they use. Information on whether preprints can be posted or not is unclear in 39.2% of journals. 58.5% of journals offer no clear information on whether reviewer identities are revealed to authors.
Around 75% of journals have no clear policy on coreviewing, citation of preprints, and publication of reviewer identities. Information regarding practices of Open Peer Review is even more scarce, with <20% of journals providing clear information.
Having found a lack of clear information, we conclude by examining the implications this has for researchers (especially early career) and the spread of open research practices.
Title : Peer review and preprint policies are unclear at most major journals
DOI : https://doi.org/10.1101/2020.01.24.918995
Author : Vincent Raoult
The current peer review system is under stress from ever increasing numbers of publications, the proliferation of open-access journals and an apparent difficulty in obtaining high-quality reviews in due time. At its core, this issue may be caused by scientists insufficiently prioritising reviewing.
Perhaps this low prioritisation is due to a lack of understanding of how many reviews researchers need to conduct to balance the peer review process. I obtained verified peer review data from 142 journals across 12 research fields, for a total of over 300,000 reviews and over 100,000 publications, to estimate the number of reviews required per publication per field.
I then used this value in relation to the mean numbers of authors per publication per field to highlight a ‘review ratio’: the expected minimum number of publications an author in their field should review to balance their input (publications) into the peer review process.
On average, 3.49 ± 1.45 (SD) reviews were required for each scientific publication, and the estimated review ratio across all fields was 0.74 ± 0.46 (SD) reviews per paper published per author. Since these are conservative estimates, I recommend scientists aim to conduct at least one review per publication they produce. This should ensure that the peer review system continues to function as intended.
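The review ratio described above is a simple quotient: reviews required per publication divided by mean authors per publication. A minimal sketch, using the reported 3.49 reviews per publication and an assumed mean of 4.7 authors per publication (the author count is illustrative, not a figure from the paper):

```python
def review_ratio(reviews_per_publication: float,
                 authors_per_publication: float) -> float:
    """Minimum number of reviews each author should perform per paper
    they publish, so that authors collectively replace the reviewing
    effort their own publications consume."""
    return reviews_per_publication / authors_per_publication

# Reported field average of 3.49 reviews per publication; 4.7 mean
# authors per publication is an assumed, illustrative value.
print(round(review_ratio(3.49, 4.7), 2))  # prints 0.74
```

The ratio falls below 1 because reviewing load is shared across co-authors, which is why the paper's recommendation of one review per publication per author is a conservative over-fulfilment of the balance point.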
Title : How Many Papers Should Scientists Be Reviewing? An Analysis Using Verified Peer Review Reports
DOI : https://doi.org/10.3390/publications8010004