Who reviews for predatory journals? A study on reviewer characteristics

Authors : Anna Severin, Michaela Strinzel, Matthias Egger, Marc Domingo, Tiago Barros

Background

While the characteristics of scholars who publish in predatory journals are relatively well-understood, nothing is known about the scholars who review for these journals.

We aimed to answer the following questions: First, can we observe patterns of reviewer characteristics for scholars who review for predatory journals and for legitimate journals? Second, how are reviews for potentially predatory journals distributed globally?

Methods

We matched random samples of 1,000 predatory journals and 1,000 legitimate journals from Cabells Scholarly Analytics’ journal lists against the Publons database of review reports, using the Jaro-Winkler string metric.
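The journal-title matching can be sketched in plain Python. The following is a minimal, from-scratch Jaro-Winkler implementation, not the authors’ actual matching pipeline; the journal names in the usage example are hypothetical. It illustrates how near-identical titles score close to 1.0 while unrelated titles score near 0.

```python
def jaro(s1: str, s2: str) -> float:
    """Jaro similarity: rewards shared characters within a sliding window
    and penalises transpositions among the matched characters."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if len1 == 0 or len2 == 0:
        return 0.0
    window = max(len1, len2) // 2 - 1
    matched1 = [False] * len1
    matched2 = [False] * len2
    matches = 0
    for i, c in enumerate(s1):
        lo, hi = max(0, i - window), min(len2, i + window + 1)
        for j in range(lo, hi):
            if not matched2[j] and s2[j] == c:
                matched1[i] = matched2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Count transpositions: matched characters that appear in a different order.
    transpositions, k = 0, 0
    for i in range(len1):
        if matched1[i]:
            while not matched2[k]:
                k += 1
            if s1[i] != s2[k]:
                transpositions += 1
            k += 1
    transpositions //= 2
    return (matches / len1 + matches / len2
            + (matches - transpositions) / matches) / 3


def jaro_winkler(s1: str, s2: str, p: float = 0.1) -> float:
    """Jaro-Winkler: boosts the Jaro score for strings sharing a common
    prefix (up to 4 characters), which suits titles and proper names."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)


# Hypothetical example: a slightly misspelled journal title still
# scores well above 0.9, so it can be matched despite the typo.
print(jaro_winkler("Journal of Clinical Oncology",
                   "Journal of Clinical Onclogy"))
```

The prefix boost is what makes Jaro-Winkler a common choice for matching bibliographic names, where discrepancies tend to occur late in the string (subtitles, punctuation) rather than at the start.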

For reviewers of matched reviews, we descriptively analysed meta-data on reviewing and publishing behaviour.

Results

We matched 183,743 unique Publons reviews claimed by 19,598 reviewers. Of these, 6,077 reviews (3.31%) were conducted for 1,160 unique predatory journals, and 177,666 reviews (96.69%) were claimed for 6,403 legitimate journals.

The vast majority of scholars either never or only occasionally submitted reviews for predatory journals to Publons (89.96% and 7.55% of all reviewers, respectively). Smaller numbers of scholars claimed reviews predominantly or exclusively for predatory journals (0.26% and 0.35% of all reviewers, respectively).

The latter two groups of scholars are of younger academic age and have fewer publications and fewer reviews than the first two groups. Developing regions feature larger shares of reviews for predatory journals than developed regions.

Conclusion

The characteristics of scholars who review for potentially predatory journals resemble those of authors who publish their work in these outlets. In order to combat potentially predatory journals, stakeholders will need to adopt a holistic approach that takes into account the entire research workflow.

DOI : https://doi.org/10.1101/2020.03.09.983155

Envisioning the scientific paper of the future

Authors : Natalie M. Sopinka, Laura E. Coristine, Maria C. DeRosa, Chelsea M. Rochman, Brian L. Owens, Steven J. Cooke

Consider for a moment the rate of advancement in the scientific understanding of DNA. It is formidable: from Friedrich Miescher’s nuclein extraction in the 1860s to Rosalind Franklin’s double-helix X-ray in the 1950s to revolutionary next-generation sequencing in the late 2000s.

Now consider the scientific paper, the medium used to describe and publish these advances. How is the scientific paper advancing to meet the needs of those who generate and use scientific information?

We review four essential qualities for the scientific paper of the future: (i) a robust source of trustworthy information that remains peer reviewed and is (ii) communicated to diverse users in diverse ways, (iii) open access, and (iv) has a measurable impact beyond Impact Factor.

Since its inception, scientific literature has proliferated. We discuss the continuation and expansion of practices already in place including: freely accessible data and analytical code, living research and reviews, changes to peer review to improve representation of under-represented groups, plain language summaries, preprint servers, evidence-informed decision-making, and altmetrics.

URL : Envisioning the scientific paper of the future

DOI : https://doi.org/10.1139/facets-2019-0012

Peer review and preprint policies are unclear at most major journals

Authors : Thomas Klebel, Stefan Reichmann, Jessica Polka, Gary McDowell, Naomi Penfold, Samantha Hindle, Tony Ross-Hellauer

Clear and findable publishing policies are important for authors to choose appropriate journals for publication. We investigated the clarity of policies of 171 major academic journals across disciplines regarding peer review and preprinting.

31.6% of journals surveyed do not provide information on the type of peer review they use. Information on whether preprints can be posted or not is unclear in 39.2% of journals. 58.5% of journals offer no clear information on whether reviewer identities are revealed to authors.

Around 75% of journals have no clear policy on coreviewing, citation of preprints, and publication of reviewer identities. Information regarding practices of Open Peer Review is even more scarce, with <20% of journals providing clear information.

Having found a lack of clear information, we conclude by examining the implications this has for researchers (especially early career) and the spread of open research practices.

URL : Peer review and preprint policies are unclear at most major journals

DOI : https://doi.org/10.1101/2020.01.24.918995

How Many Papers Should Scientists Be Reviewing? An Analysis Using Verified Peer Review Reports

Authors : Vincent Raoult

The current peer review system is under stress from ever increasing numbers of publications, the proliferation of open-access journals and an apparent difficulty in obtaining high-quality reviews in due time. At its core, this issue may be caused by scientists insufficiently prioritising reviewing.

Perhaps this low prioritisation is due to a lack of understanding of how many reviews researchers need to conduct to balance the peer review process. I obtained verified peer review data from 142 journals across 12 research fields, for a total of over 300,000 reviews and over 100,000 publications, to estimate the number of reviews required per publication in each field.

I then used this value in relation to the mean numbers of authors per publication per field to highlight a ‘review ratio’: the expected minimum number of publications an author in their field should review to balance their input (publications) into the peer review process.

On average, 3.49 ± 1.45 (SD) reviews were required for each scientific publication, and the estimated review ratio across all fields was 0.74 ± 0.46 (SD) reviews per paper published per author. Since these are conservative estimates, I recommend scientists aim to conduct at least one review per publication they produce. This should ensure that the peer review system continues to function as intended.
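The arithmetic behind the review ratio can be made concrete with a short sketch. The function and the input numbers below are illustrative (a field needing roughly 3.5 reviews per paper with roughly 4.7 authors per paper), not figures taken from the study’s dataset.

```python
def review_ratio(reviews_per_publication: float, mean_authors: float) -> float:
    """Minimum number of reviews each author should contribute per paper
    they publish: reviews needed per paper, shared among its co-authors."""
    return reviews_per_publication / mean_authors

# Hypothetical field: ~3.5 reviews needed per paper, ~4.7 authors per paper.
ratio = review_ratio(3.5, 4.7)
print(round(ratio, 2))  # prints 0.74
```

Dividing by the mean author count reflects the assumption that the reviewing burden of a paper is shared equally among its co-authors; fields with larger author teams therefore owe fewer reviews per individual author.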

URL : How Many Papers Should Scientists Be Reviewing? An Analysis Using Verified Peer Review Reports

DOI : https://doi.org/10.3390/publications8010004

Peer Review of Research Data Submissions to ScholarsArchive@OSU: How can we improve the curation of research datasets to enhance reusability?

Authors : Clara Llebot, Steven Van Tuyl

Objective

Best practices such as the FAIR Principles (Findability, Accessibility, Interoperability, Reusability) were developed to ensure that published datasets are reusable. While we employ best practices in the curation of datasets, we want to learn how domain experts view the reusability of datasets in our institutional repository, ScholarsArchive@OSU.

Curation workflows are designed by data curators based on their own recommendations, but research data is extremely specialized, and such workflows are rarely evaluated by researchers.

In this project we used peer review by domain experts to evaluate the reusability of the datasets in our institutional repository, with the goal of informing our curation methods and ensuring that the library’s limited resources maximize the reusability of research data.

Methods

We asked all researchers who had submitted datasets to Oregon State University’s repository to refer us to domain experts who could review the reusability of their datasets. Two data curators who are non-experts also reviewed the same datasets.

We gave both groups review guidelines based on the guidelines of several journals. Eleven domain experts and two data curators reviewed eight datasets.

The review included the quality of the repository record, the quality of the documentation, and the quality of the data. We then compared the comments given by the two groups.

Results

Domain experts and non-expert data curators largely converged on similar scores for reviewed datasets, but the focus of critique by domain experts was somewhat divergent.

A few broad issues common across reviews were: insufficient documentation, the use of links to journal articles in the place of documentation, and concerns about duplication of effort in creating documentation and metadata. Reviews also reflected the background and skills of the reviewer.

Domain experts expressed a lack of expertise in data curation practices and data curators expressed their lack of expertise in the research domain.

Conclusions

The results of this investigation could help guide future research data curation activities and align domain expert and data curator expectations for reusability of datasets.

We recommend further exploration of these common issues and additional domain-expert peer-review projects to further refine and align expectations for research data reusability.

URL : Peer Review of Research Data Submissions to ScholarsArchive@OSU: How can we improve the curation of research datasets to enhance reusability?

DOI : https://doi.org/10.7191/jeslib.2019.1166

A history and development of peer-review process

Author : Jana Siladitya

The paper shows the importance of peer review process in the scholarly communication system and discusses both the closed and the newly emerging open peer review models.

It also examines the peer review systems used by scholarly academies and societies for nominating candidates for prizes, rewards, etc., and discusses the various facets of the newly developed open peer review models now prevalent in various journals.

The paper may help to understand and appreciate the role played by peer review in the scholarly communication system and the efforts being made to make it more transparent.

URI : http://hdl.handle.net/10760/39332

Does the use of open, non-anonymous peer review in scholarly publishing introduce bias? Evidence from the F1000 post-publication open peer review publishing model

Authors : Mike Thelwall, Verena Weigert, Liz Allen, Zena Nyakoojo, Eleanor-Rose Papas

This study examines whether there is any evidence of bias in two areas of common critique of open, non-anonymous peer review, as used in the post-publication peer review system operated by the open-access scholarly publishing platform F1000Research.

First, is there evidence of bias where a reviewer based in a specific country assesses the work of an author also based in the same country? Second, are reviewers influenced by being able to see the comments, and know the origins, of previous reviewers?

Scrutinising the open peer review comments published on F1000Research, we assess the extent of two frequently cited potential influences on reviewers that may be the result of the transparency offered by a fully attributable, open peer review publishing model: the national affiliations of authors and reviewers, and the ability of reviewers to view previously-published reviewer reports before submitting their own.

The effects of these potential influences were investigated for all first versions of articles published on F1000Research by 8 July 2019. In 16 of the 20 countries with the most articles, there was a tendency for reviewers based in the same country to give a more positive review.

The difference was statistically significant in one country; only three countries showed the reverse tendency. Second, there is no evidence of a conformity bias: when reviewers mentioned a previous review in their peer review report, they were not more likely to give the same overall judgement.

Although reviewers who had more time to read previously published reviewer reports were slightly less likely to agree with previous reviewers’ judgements, this could be due to these articles being more difficult to judge rather than to deliberate non-conformity.

URL : https://arxiv.org/abs/1911.03379