Characteristics of ‘mega’ peer-reviewers

Authors : Danielle B. Rice, Ba’ Pham, Justin Presseau, Andrea C. Tricco, David Moher

Background

The demand for peer reviewers is often perceived as disproportionate to the supply and availability of reviewers. Identifying characteristics associated with peer review behaviour can inform solutions for managing the growing demand for peer reviewers.

The objective of this research was to compare characteristics between two groups of reviewers registered in Publons.

Methods

A descriptive cross-sectional study design was used to compare characteristics between (1) individuals completing at least 100 peer reviews (‘mega peer reviewers’) from January 2018 to December 2018 and (2) a control group of peer reviewers completing between 1 and 18 peer reviews over the same time period.

Data were provided by Publons, which offers a repository of peer review activities in addition to tracking peer reviewers’ publications and research metrics. Mann-Whitney U tests and chi-square tests were conducted to compare characteristics (e.g., number of publications, number of citations, word count of peer reviews) of mega peer reviewers with those of the control group of reviewers.
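As a rough illustration only (not the authors’ analysis code), comparisons of this kind can be run in Python with SciPy’s mannwhitneyu and chi2_contingency; the reviewer data and contingency counts below are hypothetical placeholders.

```python
# Illustrative sketch only: hypothetical data, not the study's analysis code.
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

rng = np.random.default_rng(42)

# Hypothetical publication counts for 396 mega reviewers and 1200 controls.
mega_publications = rng.poisson(lam=120, size=396)
control_publications = rng.poisson(lam=30, size=1200)

# Mann-Whitney U test for a count/continuous characteristic (e.g. publications).
u_stat, p_value = mannwhitneyu(mega_publications, control_publications,
                               alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_value:.3g}")

# Chi-square test for a categorical characteristic (e.g. reported gender).
# Rows: mega vs control reviewers; columns: male vs other (hypothetical counts).
contingency = np.array([[364, 32],
                        [840, 360]])
chi2, p_chi, dof, _ = chi2_contingency(contingency)
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p_chi:.3g}")
```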

Results

A total of 1596 peer reviewers had data provided by Publons: 396 mega peer reviewers and a random sample of 1200 control group reviewers were included. A greater proportion of mega peer reviewers were male (92%) compared with the control reviewers (70% male).

Mega peer reviewers had a significantly greater average number of total publications, citations, and Publons awards, as well as a higher average h-index, compared with the control group of reviewers (all p < .001). We found no statistically significant difference in peer review word count between the groups (p > .428).

Conclusions

Mega peer reviewers registered in the Publons database also had a higher number of publications and citations than a control group of reviewers. Additional research into the motivations underlying peer review behaviour should be conducted to help inform peer reviewing activity.

URL : Characteristics of ‘mega’ peer-reviewers

DOI : https://doi.org/10.1186/s41073-022-00121-1

Peer reviewers equally critique theory, method, and writing, with limited effect on the final content of accepted manuscripts

Author : Dimity Stephen

The primary aims of peer review are to detect flaws and deficiencies in the design and interpretation of studies, and ensure the clarity and quality of their presentation. However, it has been questioned whether peer review fulfils this function.

Studies have highlighted a stronger focus of reviewers on critiquing methodological aspects of studies and the quality of writing in biomedical sciences, with less focus on theoretical grounding. In contrast, reviewers in the social sciences appear more concerned with theoretical underpinnings.

These studies also found the effect of peer review on manuscripts’ content to be variable, but generally modest and positive. I qualitatively analysed 1430 peer reviewers’ comments for a sample of 40 social science preprint-publication pairs to identify the key foci of reviewers’ comments.

I then quantified the effect of peer review on manuscripts by examining differences between the preprint and published versions using the normalised Levenshtein distance, cosine similarity, and word count ratios for titles, abstracts, document sections and full-texts.
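For readers unfamiliar with these measures, a minimal sketch of how they can be computed follows; the function names and example strings are illustrative assumptions, not the author’s analysis code.

```python
# Illustrative sketch of the similarity measures named above; not the study's code.
from collections import Counter
import math

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def normalised_levenshtein(a: str, b: str) -> float:
    """Edit distance scaled to [0, 1] by the longer string's length."""
    if not a and not b:
        return 0.0
    return levenshtein(a, b) / max(len(a), len(b))

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity of simple bag-of-words term-frequency vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def word_count_ratio(preprint: str, published: str) -> float:
    """Published length relative to preprint length."""
    return len(published.split()) / max(len(preprint.split()), 1)

preprint_abstract = "Peer review mainly checks methods and writing."
published_abstract = "Peer review mainly checks methods, theory, and writing quality."
print(normalised_levenshtein(preprint_abstract, published_abstract))
print(cosine_similarity(preprint_abstract, published_abstract))
print(word_count_ratio(preprint_abstract, published_abstract))
```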

I also examined changes in references used between versions and linked changes to reviewers’ comments. Reviewers’ comments were nearly equally split between issues of methodology (30.7%), theory (30.0%), and writing quality (29.2%).

Titles, abstracts, and the semantic content of documents remained similar, although publications were typically longer than preprints.

Two-thirds of citations were unchanged, 20.9% were added during review and 13.1% were removed. These findings indicate reviewers equally attended to the theoretical and methodological details and communication style of manuscripts, although the effect on quantitative measures of the manuscripts was limited.

URL : Peer reviewers equally critique theory, method, and writing, with limited effect on the final content of accepted manuscripts

DOI : https://doi.org/10.1007/s11192-022-04357-y

How to improve scientific peer review: Four schools of thought

Authors : Ludo Waltman, Wolfgang Kaltenbrunner, Stephen Pinfield, Helen Buckley Woods

Peer review plays an essential role as one of the cornerstones of the scholarly publishing system. There are many initiatives that aim to improve the way in which peer review is organized, resulting in a highly complex landscape of innovation in peer review.

Different initiatives are based on different views on the most urgent challenges faced by the peer review system, leading to a diversity of perspectives on how the system can be improved.

To provide a more systematic understanding of the landscape of innovation in peer review, we suggest that the landscape is shaped by four schools of thought: The Quality & Reproducibility school, the Democracy & Transparency school, the Equity & Inclusion school, and the Efficiency & Incentives school.

Each school has a different view on the key problems of the peer review system and the innovations necessary to address these problems. The schools partly complement each other, but we argue that there are also important tensions between the schools.

We hope that the four schools of thought offer a useful framework to facilitate conversations about the future development of the peer review system.

URL : How to improve scientific peer review: Four schools of thought

DOI : https://doi.org/10.31235/osf.io/v8ghj

Peer reviewers’ dilemmas: a qualitative exploration of decisional conflict in the evaluation of grant applications in the medical humanities and social sciences

Authors : Gaëlle Vallée-Tourangeau, Ana Wheelock, Tushna Vandrevala, Priscilla Harris

Independent evaluations of grant applications by subject experts are an important part of the peer-review system. However, little is known about the real-time experiences of peer reviewers or experts who perform reviews of a grant application independently.

This study sought to gain insight into this stage of the grant evaluation process by observing how experts conduct an independent review in near real time. Using the think aloud approach and Critical Decision Method of interviewing, in-depth interviews were conducted with 16 peer reviewers from a range of roles and disciplines within the medical humanities and social sciences.

Participants were asked to think aloud while reviewing applications to different grant schemes from a single prestigious funder. The analysis shows reviewers encountered five dilemmas during the evaluation process.

These dilemmas were related to whether or not one should (1) accept an invitation to review, (2) rely exclusively on the information presented in the application, (3) pay attention to institutional prestige, (4) offer comments about aspects that are not directly related to academics’ area of expertise, and (5) take risks and overlook shortcomings rather than err on the side of caution.

In order to decide on the appropriate course of action, reviewers often engaged in a series of deliberations and trade-offs—varying in length and complexity.

However, their interpretation of what was ‘right’ was influenced not only by their values, preferences and experiences, but also by relevant norms and their understanding of the funder’s guidelines and priorities.

As a result, the way reviewers approached the identified dilemmas was idiosyncratic and sometimes diametrically opposed to other reviewers’ views, which could lead to variation in peer-review outcomes.

The dilemmas we have uncovered suggest that peer reviewers engage in thoughtful considerations during the peer-review process.

We should, therefore, be wary of reducing the absence of consensus to the product of biased, instinctive thinking. Rather, these findings highlight the diversity of values, priorities, habits and ways of working each reviewer brings to the fore when reviewing applicants and their project proposals, and they call for further reflection on, and study of, this “invisible work” to better understand and continue to improve the peer-review process.

URL : Peer reviewers’ dilemmas: a qualitative exploration of decisional conflict in the evaluation of grant applications in the medical humanities and social sciences

DOI : https://doi.org/10.1057/s41599-022-01050-6

Reshaping How Universities Can Evaluate the Research Impact of Open Humanities for Societal Benefit

Authors : Paul Longley Arthur, Lydia Hearn

During the twenty-first century, for the first time, the volume of digital data has surpassed the amount of analog data. As academic practices increasingly become digital, opportunities arise to reshape the future of scholarly communication through more accessible, interactive, open, and transparent methods that engage a far broader and more diverse public.

Yet despite these advances, the research performance of universities and public research institutes remains largely evaluated through publication and citation analysis rather than by public engagement and societal impact.

This article reviews how changes to bibliometric evaluations toward greater use of altmetrics, including social media mentions, could enhance uptake of open scholarship in the humanities.

In addition, the article highlights current challenges faced by the open scholarship movement, given the complexity of the humanities in terms of their sources and outputs, which include monographs, book chapters, and journals in languages other than English; the use of popular media not considered as scholarly papers; the lack of time and energy to develop digital skills among research staff; problems of authority and trust regarding the scholarly or non-academic nature of social media platforms; the prestige of large academic publishing houses; and limited awareness of and familiarity with advanced digital applications.

While peer review will continue to be a primary method for evaluating research in the humanities, combining altmetrics with other assessments of research impact drawn from different data sources may provide a way forward to ensure the increased use, sustainability, and effectiveness of open scholarship in the humanities.

DOI : https://doi.org/10.3998/jep.788

Innovations in peer review in scholarly publishing: a meta-summary

Authors : Helen Buckley Woods, Johanna Brumberg, Wolfgang Kaltenbrunner, Stephen Pinfield, Ludo Waltman

Background

There are currently numerous innovations in peer review and quality assurance in scholarly publishing. The Research on Research Institute conducted a programme of co-produced projects investigating these innovations.

This literature review was part of one such project, ‘Experiments in peer review’ (Kaltenbrunner et al., 2022), which created an inventory and framework of peer review innovations. The aim of this literature review was to aid the development of the inventory by identifying innovations in peer review reported in the scholarly literature and by providing a summary of the different approaches.

Methods

This meta-summary is based on data identified from Web of Science and Scopus, limited to the period 2010 to 2021. A total of 247 papers were screened, and six review articles were chosen as the focus of the literature review. Items were selected that described approaches to innovating peer review or illustrated examples of such innovations.

Results

The summary of innovations is drawn from six review articles: Bruce et al. (2016); Tennant et al. (2017); Burley (2017); Horbach and Halffman (2018); Tennant (2018); and Barroga (2020). The innovations are divided into three high-level categories: approaches to peer review, reviewer-focussed initiatives, and technology to support peer review, with sub-categories of results presented in tabular form and summarised. A summary of all innovations found is also presented.

Conclusions

From a simple synthesis of the review authors’ conclusions, three key messages are presented: observations on current practice; authors’ views on the implications of innovations in peer review; and calls for action in peer review research and practice.

URL : https://osf.io/preprints/socarxiv/qaksd/

Examining linguistic shifts between preprints and publications

Authors : David N. Nicholson, Vincent Rubinetti, Dongbo Hu, Marvin Thielk, Lawrence E. Hunter, Casey S. Greene

Preprints allow researchers to make their findings available to the scientific community before they have undergone peer review. Studies on preprints within bioRxiv have been largely focused on article metadata and how often these preprints are downloaded, cited, published, and discussed online.

A missing element that has yet to be examined is the language contained within the bioRxiv preprint repository. We sought to compare and contrast linguistic features within bioRxiv preprints with published biomedical text as a whole, as this provides an excellent opportunity to examine how peer review changes these documents.

The most prevalent features that changed appear to be associated with typesetting and mentions of supporting information sections or additional files. In addition to text comparison, we created document embeddings derived from a preprint-trained word2vec model.

We found that these embeddings are able to parse out different scientific approaches and concepts, link unannotated preprint–peer-reviewed article pairs, and identify journals that publish linguistically similar papers to a given preprint.
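The general technique of averaging word2vec vectors into a document embedding and comparing documents by cosine similarity can be sketched as follows; the toy corpus, hyperparameters, and helper functions are assumptions for illustration, not the authors’ published pipeline.

```python
# Rough sketch of word2vec-based document embeddings; the corpus, settings,
# and averaging step are assumptions, not the authors' published pipeline.
import numpy as np
from gensim.models import Word2Vec

# Hypothetical tokenised documents (in practice: full preprint texts).
corpus = [
    "peer review improves clarity of biomedical manuscripts".split(),
    "preprints allow early dissemination before peer review".split(),
    "word embeddings capture semantic similarity between documents".split(),
]

# Train a small word2vec model (the paper used a preprint-trained model).
model = Word2Vec(sentences=corpus, vector_size=50, window=5,
                 min_count=1, epochs=50, seed=1)

def document_embedding(tokens: list[str], model: Word2Vec) -> np.ndarray:
    """Average the word vectors of in-vocabulary tokens into a document vector."""
    vectors = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vectors, axis=0) if vectors else np.zeros(model.vector_size)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two document vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

preprint_vec = document_embedding(corpus[0], model)
candidate_vec = document_embedding(corpus[1], model)
print(cosine(preprint_vec, candidate_vec))  # higher values = more similar documents
```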

We also used these embeddings to examine factors associated with the time elapsed between the posting of a first preprint and the appearance of a peer-reviewed publication. We found that preprints with more versions posted and more textual changes took longer to publish.

Lastly, we constructed a web application (https://greenelab.github.io/preprint-similarity-search/) that allows users to identify which journals and articles are most linguistically similar to a bioRxiv or medRxiv preprint, as well as to observe where the preprint would be positioned within the landscape of published articles.

URL : Examining linguistic shifts between preprints and publications

DOI : https://doi.org/10.1371/journal.pbio.3001470