On The Peer Review Reports: Does Size Matter?

Authors : Abdelghani Maddi, Luis Miotti

Amidst the ever-expanding realm of scientific production and the proliferation of predatory journals, the focus on peer review remains paramount for scientometricians and sociologists of science. Despite this attention, there is a notable scarcity of empirical investigations into the tangible impact of peer review on publication quality.

This study aims to address this gap by conducting a comprehensive analysis of how peer review contributes to the quality of scholarly publications, as measured by the citations they receive. Utilizing an adjusted dataset comprising 57,482 publications matched from Publons to Web of Science and employing the Raking Ratio method, our study reveals intriguing insights. Specifically, our findings shed light on a nuanced relationship between the length of reviewer reports and the subsequent citations received by publications.
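
The abstract does not detail the adjustment step, but the Raking Ratio method it names is essentially iterative proportional fitting of sample weights to known population margins. The sketch below illustrates that idea in Python on synthetic data, with hypothetical margins (discipline and publication year) standing in for whatever variables the authors actually used:

import numpy as np
import pandas as pd

# Synthetic sample of reviewed publications (hypothetical categories, not the Publons data).
rng = np.random.default_rng(0)
sample = pd.DataFrame({
    "discipline": rng.choice(["life_sci", "phys_sci", "soc_sci"], size=1000, p=[0.5, 0.3, 0.2]),
    "year": rng.choice([2019, 2020, 2021], size=1000, p=[0.2, 0.4, 0.4]),
})

# Assumed population shares in the reference database (made up for illustration).
margins = {
    "discipline": {"life_sci": 0.45, "phys_sci": 0.35, "soc_sci": 0.20},
    "year": {2019: 0.30, 2020: 0.35, 2021: 0.35},
}

def rake(df, margins, n_iter=100, tol=1e-10):
    """Iterative proportional fitting: rescale weights until the weighted
    category shares match every set of target margins."""
    w = np.ones(len(df))
    for _ in range(n_iter):
        max_shift = 0.0
        for col, target in margins.items():
            current = pd.Series(w).groupby(df[col].to_numpy()).sum() / w.sum()
            factors = pd.Series(target) / current      # one multiplier per category
            w = w * df[col].map(factors).to_numpy()
            max_shift = max(max_shift, float((factors - 1).abs().max()))
        if max_shift < tol:                            # all margins already matched
            break
    return w * len(df) / w.sum()                       # normalize to mean weight 1

weights = rake(sample, margins)
# Weighted shares now reproduce the target margins.
print(sample.assign(w=weights).groupby("discipline")["w"].sum() / len(sample))

Each pass rescales the weights so the weighted shares match one margin at a time; iterating across the margins converges to weights consistent with all of them at once.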

Through a robust regression analysis, we establish that, from 947 words onward, the length of reviewer reports is significantly associated with an increase in citations. These results not only confirm the initial hypothesis that longer reports indicate requested improvements, thereby enhancing the quality and visibility of articles, but also underscore the importance of timely and comprehensive reviewer reports.
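
The abstract reports only the finding, not the regression specification. One way a threshold of this kind can be modelled is with a hinge (piecewise-linear) term at the breakpoint; the sketch below is a purely illustrative version on synthetic data, taking the 947-word figure as an assumed breakpoint rather than estimating it, and using OLS on log citations rather than whatever count model the authors fit:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
words = rng.gamma(shape=2.0, scale=400.0, size=n)        # synthetic report lengths (words)

# Synthetic citations: flat below the assumed breakpoint, rising above it.
breakpoint = 947
hinge = np.maximum(words - breakpoint, 0.0)
citations = rng.poisson(np.exp(1.0 + 0.0004 * hinge))

# Regress log(1 + citations) on length and on the excess length above the breakpoint;
# a positive, significant hinge coefficient captures the above-threshold association.
X = sm.add_constant(np.column_stack([words, hinge]))
fit = sm.OLS(np.log1p(citations), X).fit()
print(fit.summary(xname=["const", "words", "words_above_947"]))

In practice the breakpoint itself would be estimated, for example by segmented regression or by scanning candidate thresholds, and citation counts would typically call for a count model with field and year controls.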

Furthermore, insights from Publons’ data suggest that open access to reports can influence reviewer behavior, encouraging more detailed reports. Beyond the scholarly landscape, our findings prompt a reevaluation of the role of reviewers, emphasizing the need to recognize and value this resource-intensive yet underappreciated activity in institutional evaluations.

Additionally, the study sounds a cautionary note regarding the challenges faced by peer review in the context of an increasing volume of submissions, potentially compromising the vigilance of peers in swiftly assessing numerous articles.

HAL : https://cnrs.hal.science/hal-04492274

Peer review’s irremediable flaws: Scientists’ perspectives on grant evaluation in Germany

Authors : Eva Barlösius, Laura Paruschke, Axel Philipps

Peer review has developed over time to become the established procedure for assessing and assuring the scientific quality of research. Nevertheless, the procedure has also been variously criticized as conservative, biased, and unfair, among other things. Do scientists regard all these flaws as equally problematic?

Do they have the same opinions on which problems are so serious that other selection procedures ought to be considered? The answers to these questions hint at what should be modified in peer review processes as a priority objective. The authors of this paper use survey data to examine how members of the scientific community weight different shortcomings of peer review processes.

Which of those processes’ problems do they consider less relevant? Which problems, on the other hand, do they judge to be beyond remedy? Our investigation shows that certain defects of peer review processes are indeed deemed irreparable: (1) legitimate quandaries in the process of fine-tuning the choice between equally eligible research proposals and in the selection of daring ideas; and (2) illegitimate problems due to networks. Science-policy measures to improve peer review processes should therefore draw a clearer distinction between field-specific remediable and irremediable flaws than is currently the case.

URL : Peer review’s irremediable flaws: Scientists’ perspectives on grant evaluation in Germany

DOI : https://doi.org/10.1093/reseval/rvad032

Beyond journals and peer review: towards a more flexible ecosystem for scholarly communication

Author : Michael Wood

This article challenges the assumption that journals and peer review are essential for developing, evaluating and disseminating scientific and other academic knowledge. It suggests a more flexible ecosystem, and examines some of the possibilities this might facilitate. The market for academic outputs should be opened up by encouraging the separation of the dissemination service from the evaluation service.

Publishing research in subject-specific journals encourages compartmentalising research into rigid categories. The dissemination of knowledge would be better served by an open access, web-based repository system encompassing all disciplines. There would then be a role for organisations to assess the items in this repository to help users find relevant, high-quality work.

There could be a variety of such organisations which could enable reviews from peers to be supplemented with evaluation by non-peers from a variety of different perspectives: user reviews, statistical reviews, reviews from the perspective of different disciplines, and so on. This should reduce the inevitably conservative influence of relying on two or three peers, and make the evaluation system more critical, multi-dimensional and responsive to the requirements of different audience groups, changing circumstances, and new ideas.

Non-peer review might make it easier to challenge dominant paradigms, and expanding the potential audience beyond a narrow group of peers might encourage the criterion of simplicity to be taken more seriously – which is essential if human knowledge is to continue to progress.

arXiv : https://arxiv.org/abs/1311.4566

Additional experiments required: A scoping review of recent evidence on key aspects of Open Peer Review

Authors : Tony Ross-Hellauer, Serge P.J.M. Horbach

Diverse efforts are underway to reform the journal peer review system. Combined with growing interest in Open Science practices, Open Peer Review (OPR) has become of central concern to the scholarly community. However, what OPR is understood to encompass, and how effective some of its elements are in meeting the expectations of diverse communities, remain uncertain.

This scoping review updates previous efforts to summarize research on OPR to May 2022. Following the PRISMA methodological framework, it addresses the question: “What evidence has been reported in the scientific literature from 2017 to May 2022 regarding uptake, attitudes, and efficacy of two key aspects of OPR (Open Identities and Open Reports)?”

The review identifies, analyses and synthesizes 52 studies matching inclusion criteria, finding that OPR is growing, but still far from common practice. Our findings indicate positive attitudes towards Open Reports and more sceptical approaches to Open Identities.

Changes in reviewer behaviour seem limited, and no evidence for lower acceptance rates of review invitations or slower turnaround times is reported in those studies examining those issues. Concerns about power dynamics and the potential for backfiring on critical reviews call for further experimentation.

We conclude with an overview of evidence gaps and suggestions for future research. Also, we discuss implications for policy and practice, both in the scholarly communications community and the research evaluation community more broadly.

URL : Additional experiments required: A scoping review of recent evidence on key aspects of Open Peer Review

DOI : https://doi.org/10.1093/reseval/rvae004

Comparison of effect estimates between preprints and peer-reviewed journal articles of COVID-19 trials

Authors : Mauricia Davidson, Theodoros Evrenoglou, Carolina Graña, Anna Chaimani, Isabelle Boutron

Background

Preprints are increasingly used to disseminate research results, providing multiple sources of information for the same study. We assessed the consistency in effect estimates between preprint and subsequent journal article of COVID-19 randomized controlled trials.

Methods

The study utilized data from the COVID-NMA living systematic review of pharmacological treatments for COVID-19 (covid-nma.com) up to July 20, 2022. We identified randomized controlled trials (RCTs) evaluating pharmacological treatments vs. standard of care/placebo for patients with COVID-19 that were originally posted as preprints and subsequently published as journal articles.

Trials that did not report the same analysis in both documents were excluded. Data were extracted independently by pairs of researchers with consensus to resolve disagreements. Effect estimates extracted from the first preprint were compared to effect estimates from the journal article.
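
The abstract does not state how consistency was quantified. For a single outcome reported as a risk ratio with a 95% confidence interval in both documents, natural checks are agreement in direction, agreement in statistical significance, and the ratio of the two point estimates; the helper below is a hypothetical illustration, not code or data from the review:

def compare_effects(rr_preprint, ci_preprint, rr_article, ci_article):
    """Compare a risk ratio reported in a preprint and in its journal article:
    direction agreement, significance agreement (CI excludes 1), and the ratio
    of the two point estimates."""
    same_direction = (rr_preprint - 1) * (rr_article - 1) >= 0
    sig_preprint = not (ci_preprint[0] <= 1 <= ci_preprint[1])
    sig_article = not (ci_article[0] <= 1 <= ci_article[1])
    return {
        "same_direction": same_direction,
        "same_significance": sig_preprint == sig_article,
        "ratio_of_risk_ratios": round(rr_article / rr_preprint, 3),
    }

# Hypothetical numbers, not taken from any trial in the review.
print(compare_effects(rr_preprint=0.82, ci_preprint=(0.65, 1.03),
                      rr_article=0.80, ci_article=(0.64, 1.00)))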

Results

The search identified 135 RCTs originally posted as a preprint and subsequently published as a journal article. We excluded 26 RCTs that did not meet the eligibility criteria, of which 13 RCTs reported an interim analysis in the preprint and a final analysis in the journal article. Overall, 109 preprint–article RCTs were included in the analysis.

The median (interquartile range) delay between preprint and journal article was 121 (73–187) days, the median sample size was 150 (71–464) participants, 76% of RCTs had been prospectively registered, 60% received industry or mixed funding, and 72% were multicentric trials. The overall risk of bias was rated as ‘some concerns’ for 80% of RCTs.

We found that 81 preprint–article pairs of RCTs were consistent for all outcomes reported. There were nine RCTs with at least one outcome with a discrepancy in the number of participants with outcome events or the number of participants analyzed, which yielded a minor change in the estimate of the effect. Furthermore, six RCTs had at least one outcome missing in the journal article and 14 RCTs had at least one outcome added in the journal article compared to the preprint. There was a change in the direction of effect in one RCT. No changes in statistical significance or conclusions were found.

Conclusions

Effect estimates were generally consistent between COVID-19 preprints and subsequent journal articles. The main results and interpretation did not change in any trial. Nevertheless, some outcomes were added and deleted in some journal articles.

URL : Comparison of effect estimates between preprints and peer-reviewed journal articles of COVID-19 trials

DOI : https://doi.org/10.1186/s12874-023-02136-8

Peer-based research funding as a model for journalism funding

Authors : Maria Latos, Frank Lobigs, Holger Wormer

Financing high-quality journalistic reporting is becoming increasingly difficult worldwide and economic pressure has intensified in the wake of the COVID-19 pandemic. While numerous alternative funding possibilities are discussed, ranging from membership models to government funding, they should not compromise the highest possible independence of journalism – a premise that also applies to scientific research.

In research, the state is involved in funding, but peer review models reduce funding bias. However, systematic approaches as to how established funding models in research could be transferred to journalism are lacking. We attempt such a systematic transfer using the example of the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG).

The transfer, based on an analysis of the complex DFG funding structures, was validated in 10 interviews with experts from science, journalism and foundations. Building on this, we developed a concept for a German Journalism Foundation (Deutsche Journalismusgemeinschaft, DJG), which awards funding to journalists and cooperative projects based on a peer review process.

The funding priorities of the proposed organization range from infrastructure support to grants for investigative skills. Thus, unlike other models, it does not focus on funding specific topics in media coverage, but on areas such as innovation support, technology implementation and training. Although the model was designed for Germany, such a systematic transfer could also be tested for other countries.

URL : Peer-based research funding as a model for journalism funding

DOI : https://doi.org/10.1177/14648849231215662

Fast, Furious and Dubious? MDPI and the Depth of Peer Review Reports

Authors : Abdelghani Maddi, Chérifa Boukacem-Zeghmouri

Peer review is a central component of scholarly communication as it brings trust and quality control for scientific knowledge. One of its goals is to improve the quality of manuscripts and prevent the publication of work resulting from dubious practices or misconduct.

In a context marked by a massification of scientific production, the reign of the Publish or Perish rule and the acceleration of research, journals are leaving reviewers less and less time to produce their reports. It is therefore crucial to study whether these time constraints have an impact on the length of reviewer reports.

Here, we address the example of MDPI, a Swiss Open Access publisher, depicted as a Grey Publisher and well known for its short deadlines, by analyzing the depth of its reviewer reports and comparing it with that of other publishers. For this, we used Publons data covering 61,197 distinct publications reviewed by 86,628 reviewers.

Our results show that, despite the short deadlines, when they agree to review a manuscript, reviewers assume their responsibility and do their job in the same way regardless of the publisher, writing on average the same number of words.

Our results suggest that, even if MDPI’s editorial practices may be questionable, as long as peer review is assured by researchers themselves, publications are evaluated similarly.
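
The abstract does not name the test behind the word-count comparison. A straightforward way to check whether average report length differs between MDPI and other publishers is a two-sample comparison of word counts, for example a Welch t-test on log counts alongside a rank-based test; the sketch below runs on synthetic data, not the Publons sample:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic word counts per reviewer report (illustrative distributions only).
mdpi_words = rng.lognormal(mean=5.6, sigma=0.7, size=4000)
other_words = rng.lognormal(mean=5.6, sigma=0.7, size=8000)

# Welch t-test on log word counts, plus a rank-based check that does not assume normality.
t_stat, t_p = stats.ttest_ind(np.log(mdpi_words), np.log(other_words), equal_var=False)
u_stat, u_p = stats.mannwhitneyu(mdpi_words, other_words, alternative="two-sided")
print(f"mean words: MDPI={mdpi_words.mean():.0f}, others={other_words.mean():.0f}")
print(f"Welch t-test p={t_p:.3f}; Mann-Whitney p={u_p:.3f}")

Failing to reject the null hypothesis here would be consistent with the abstract's conclusion that report length is similar across publishers, although the study itself presumably also accounts for field and other report characteristics.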

URL : Fast, Furious and Dubious? MDPI and the Depth of Peer Review Reports

DOI : https://doi.org/10.21203/rs.3.rs-3027724/v1