Beyond journals and peer review: towards a more flexible ecosystem for scholarly communication

Author : Michael Wood

This article challenges the assumption that journals and peer review are essential for developing, evaluating and disseminating scientific and other academic knowledge. It suggests a more flexible ecosystem and examines some of the possibilities this might facilitate. The market for academic outputs should be opened up by encouraging the separation of the dissemination service from the evaluation service.

Publishing research in subject-specific journals encourages compartmentalising research into rigid categories. The dissemination of knowledge would be better served by an open access, web-based repository system encompassing all disciplines. There would then be a role for organisations to assess the items in this repository to help users find relevant, high-quality work.

There could be a variety of such organisations which could enable reviews from peers to be supplemented with evaluation by non-peers from a variety of different perspectives: user reviews, statistical reviews, reviews from the perspective of different disciplines, and so on. This should reduce the inevitably conservative influence of relying on two or three peers, and make the evaluation system more critical, multi-dimensional and responsive to the requirements of different audience groups, changing circumstances, and new ideas.

Non-peer review might make it easier to challenge dominant paradigms, and expanding the potential audience beyond a narrow group of peers might encourage the criterion of simplicity to be taken more seriously – which is essential if human knowledge is to continue to progress.

URL : https://arxiv.org/abs/1311.4566

Additional experiments required: A scoping review of recent evidence on key aspects of Open Peer Review

Authors : Tony Ross-Hellauer, Serge P.J.M. Horbach

Diverse efforts are underway to reform the journal peer review system. Combined with growing interest in Open Science practices, Open Peer Review (OPR) has become of central concern to the scholarly community. However, what OPR is understood to encompass, and how effective some of its elements are in meeting the expectations of diverse communities, remain uncertain.

This scoping review updates previous efforts to summarize research on OPR to May 2022. Following the PRISMA methodological framework, it addresses the question: “What evidence has been reported in the scientific literature from 2017 to May 2022 regarding uptake, attitudes, and efficacy of two key aspects of OPR (Open Identities and Open Reports)?”

The review identifies, analyses and synthesizes 52 studies matching the inclusion criteria, finding that OPR is growing, but still far from common practice. Our findings indicate positive attitudes towards Open Reports and more sceptical attitudes towards Open Identities.

Changes in reviewer behaviour seem limited, and the studies examining these issues report no evidence of lower acceptance rates for review invitations or slower turnaround times. Concerns about power dynamics and the potential for backfiring on critical reviews warrant further experimentation.

We conclude with an overview of evidence gaps and suggestions for future research. Also, we discuss implications for policy and practice, both in the scholarly communications community and the research evaluation community more broadly.

URL : Additional experiments required: A scoping review of recent evidence on key aspects of Open Peer Review

DOI : https://doi.org/10.1093/reseval/rvae004

Comparison of effect estimates between preprints and peer-reviewed journal articles of COVID-19 trials

Authors : Mauricia Davidson, Theodoros Evrenoglou, Carolina Graña, Anna Chaimani, Isabelle Boutron

Background

Preprints are increasingly used to disseminate research results, providing multiple sources of information for the same study. We assessed the consistency of effect estimates between the preprint and the subsequent journal article of COVID-19 randomized controlled trials.

Methods

The study utilized data from the COVID-NMA living systematic review of pharmacological treatments for COVID-19 (covid-nma.com) up to July 20, 2022. We identified randomized controlled trials (RCTs) evaluating pharmacological treatments vs. standard of care/placebo for patients with COVID-19 that were originally posted as preprints and subsequently published as journal articles.

Trials that did not report the same analysis in both documents were excluded. Data were extracted independently by pairs of researchers with consensus to resolve disagreements. Effect estimates extracted from the first preprint were compared to effect estimates from the journal article.
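
To make the comparison concrete, here is a minimal sketch, not the authors' code, of how effect estimates extracted from a preprint could be checked against those from the subsequent journal article for consistency of direction and statistical significance. The file name, column names and the use of ratio measures (risk ratios with 95% confidence intervals) are assumptions for illustration.

```python
# Minimal illustrative sketch (not the COVID-NMA pipeline): compare effect
# estimates from the first preprint with those from the journal article.
# CSV layout and column names are hypothetical.
import pandas as pd

pairs = pd.read_csv("preprint_article_pairs.csv")
# expected columns: trial_id, outcome,
#   rr_preprint, ci_low_preprint, ci_high_preprint,
#   rr_article,  ci_low_article,  ci_high_article

def direction(rr):
    """Direction of effect for a ratio measure (RR/OR/HR): benefit if < 1."""
    return "benefit" if rr < 1 else ("harm" if rr > 1 else "null")

def significant(ci_low, ci_high):
    """Statistically significant at the 5% level if the 95% CI excludes 1."""
    return not (ci_low <= 1 <= ci_high)

pairs["same_direction"] = [
    direction(p) == direction(a)
    for p, a in zip(pairs["rr_preprint"], pairs["rr_article"])
]
pairs["same_significance"] = [
    significant(pl, ph) == significant(al, ah)
    for pl, ph, al, ah in zip(
        pairs["ci_low_preprint"], pairs["ci_high_preprint"],
        pairs["ci_low_article"], pairs["ci_high_article"],
    )
]

# A preprint-article pair is "consistent" only if every reported outcome
# agrees in both direction and statistical significance.
per_trial = pairs.groupby("trial_id")[["same_direction", "same_significance"]].all()
print(per_trial.mean())
```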

Results

The search identified 135 RCTs originally posted as a preprint and subsequently published as a journal article. We excluded 26 RCTs that did not meet the eligibility criteria, of which 13 RCTs reported an interim analysis in the preprint and a final analysis in the journal article. Overall, 109 preprint–article RCTs were included in the analysis.

The median (interquartile range) delay between preprint and journal article was 121 (73–187) days; the median sample size was 150 (71–464) participants; 76% of RCTs had been prospectively registered; 60% received industry or mixed funding; and 72% were multicentric trials. The overall risk of bias was rated as ‘some concern’ for 80% of RCTs.

We found that 81 preprint–article pairs of RCTs were consistent for all outcomes reported. There were nine RCTs with at least one outcome with a discrepancy in the number of participants with outcome events or the number of participants analyzed, which yielded a minor change in the estimate of the effect. Furthermore, six RCTs had at least one outcome missing in the journal article and 14 RCTs had at least one outcome added in the journal article compared to the preprint. There was a change in the direction of effect in one RCT. No changes in statistical significance or conclusions were found.

Conclusions

Effect estimates were generally consistent between COVID-19 preprints and subsequent journal articles. The main results and interpretation did not change in any trial. Nevertheless, some outcomes were added or deleted in some journal articles.

URL : Comparison of effect estimates between preprints and peer-reviewed journal articles of COVID-19 trials

DOI : https://doi.org/10.1186/s12874-023-02136-8

Peer-based research funding as a model for journalism funding

Authors : Maria Latos, Frank Lobigs, Holger Wormer

Financing high-quality journalistic reporting is becoming increasingly difficult worldwide and economic pressure has intensified in the wake of the COVID-19 pandemic. While numerous alternative funding possibilities are discussed, ranging from membership models to government funding, they should not compromise the highest possible independence of journalism – a premise that also applies to scientific research.

In scientific research, the state is involved in funding, but peer review models reduce funding bias. However, systematic approaches to how established research funding models could be transferred to journalism are lacking. We attempt such a systematic transfer using the example of the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG).

The transfer, based on an analysis of the complex DFG funding structures, was validated in 10 interviews with experts from science, journalism and foundations. Building on this, we developed a concept for a German Journalism Foundation (Deutsche Journalismusgemeinschaft, DJG), which awards funding to journalists and cooperative projects based on a peer review process.

The funding priorities of the proposed organization range from infrastructure support to grants for investigative skills. Thus, unlike other models, it does not focus on funding specific topics in media coverage, but on areas such as innovation support, technology implementation and training. Although the model was designed for Germany, such a systematic transfer could also be tested for other countries.

URL : Peer-based research funding as a model for journalism funding

DOI : https://doi.org/10.1177/14648849231215662

Fast, Furious and Dubious? MDPI and the Depth of Peer Review Reports

Authors : Abdelghani Maddi, Chérifa Boukacem-Zeghmouri

Peer review is a central component of scholarly communication as it brings trust and quality control to scientific knowledge. One of its goals is to improve the quality of manuscripts and prevent the publication of work resulting from dubious practices or misconduct.

In a context marked by the massification of scientific production, the reign of the Publish or Perish rule and the acceleration of research, journals are leaving reviewers less and less time to produce their reports. It is therefore crucial to study whether these shorter deadlines have an impact on the length of reviewer reports.

Here, we address the example of MDPI, a Swiss Open Access publisher depicted as a Grey Publisher and well known for its short deadlines, by analyzing the depth of its reviewer reports compared with those of other publishers. For this, we used Publons data covering 61,197 distinct publications reviewed by 86,628 reviewers.

Our results show that, despite the short deadlines, when reviewers agree to review a manuscript they assume their responsibility and do their job in the same way regardless of the publisher, writing on average the same number of words.
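
As an illustration of this kind of comparison, the following sketch computes review-report lengths from a Publons-style export and contrasts MDPI with other publishers. The file name and column names are hypothetical, and a two-group comparison of this sort is only a simplified stand-in for the fuller analysis in the paper.

```python
# Illustrative sketch only (not the authors' analysis): compare the length of
# reviewer reports for MDPI against other publishers, assuming a Publons-style
# export with hypothetical column names.
import pandas as pd
from scipy import stats

reviews = pd.read_csv("publons_reviews.csv")   # columns: publisher, report_text
reviews["n_words"] = reviews["report_text"].str.split().str.len()
reviews["group"] = reviews["publisher"].eq("MDPI").map({True: "MDPI", False: "Other"})

# Descriptive statistics of report length per group.
print(reviews.groupby("group")["n_words"].describe())

# A simple two-sample test on report length; a fuller analysis would also
# control for field, journal and reviewer effects.
mdpi = reviews.loc[reviews["group"] == "MDPI", "n_words"]
other = reviews.loc[reviews["group"] == "Other", "n_words"]
print(stats.mannwhitneyu(mdpi, other, alternative="two-sided"))
```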

Our results suggest that, even if MDPI’s editorial practices may be questionable, as long as peer review is carried out by researchers themselves, publications are evaluated similarly.

URL : Fast, Furious and Dubious? MDPI and the Depth of Peer Review Reports

DOI : https://doi.org/10.21203/rs.3.rs-3027724/v1

Roles and Responsibilities for Peer Reviewers of International Journals

Author : Carol Nash

There is a noticeable paucity of recently published research on the roles and responsibilities of peer reviewers for international journals. Concurrently, the pool of these peer reviewers is shrinking. Using a narrative research method developed by the author, this study examined these roles and responsibilities through the author’s assessment of reviewing for five publishing houses from July to December 2022, in comparison with two recent studies regarding peer review and with the guidelines of the five publishing houses.

What should be most important in peer review is found to differ among the author, those assessing peer review in these studies, and the five publishing houses. Furthermore, efforts to increase the pool of peer reviewers are identified as ineffective because they focus on the reviewer qua reviewer rather than on their primary role as researchers.

To improve consistency, authors have regularly called for peer review training. Yet, this advice neglects to recognize the efforts of journals in making their particular requirements for peer review clear, comprehensive and readily accessible.

Consequently, rather than peer reviewers being trained and rewarded as peer reviewers, journals are advised to make peer review a requirement for research publication, and to make their guidelines required reading and advice to follow for peer reviewers.

URL : Roles and Responsibilities for Peer Reviewers of International Journals

DOI : https://doi.org/10.3390/publications11020032

Metrics and peer review agreement at the institutional level

Authors : Vincent A Traag, Marco Malgarini, Scipione Sarlo

In recent decades, many countries have started to fund academic institutions based on the evaluation of their scientific performance. In this context, post-publication peer review is often used to assess scientific performance. Bibliometric indicators have been suggested as an alternative to peer review.

A recurrent question in this context is whether peer review and metrics tend to yield similar outcomes. In this paper, we study the agreement between bibliometric indicators and peer review based on a sample of publications submitted for evaluation to the national Italian research assessment exercise (2011–2014).

In particular, we study the agreement between bibliometric indicators and peer review at a higher aggregation level, namely the institutional level. Additionally, we also quantify the internal agreement of peer review at the institutional level. We base our analysis on a hierarchical Bayesian model using cross-validation.
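
As a rough illustration of what institution-level agreement means operationally, the sketch below aggregates publication scores per institution and compares the average gap between metrics and peer review with the gap between the two reviewers themselves. It is a deliberately simplified stand-in for the authors' hierarchical Bayesian model with cross-validation, and all file and column names are hypothetical.

```python
# Simplified sketch (not the authors' hierarchical Bayesian model): aggregate
# publication-level scores per institution and compare the mean absolute gap
# between metrics and peer review with the gap between the two reviewers.
# Column names are hypothetical.
import pandas as pd

pubs = pd.read_csv("vqr_sample.csv")
# expected columns: institution, metric_score, reviewer1_score, reviewer2_score
inst = pubs.groupby("institution").mean(numeric_only=True)

peer_mean = (inst["reviewer1_score"] + inst["reviewer2_score"]) / 2
metric_vs_peer = (inst["metric_score"] - peer_mean).abs().mean()
reviewer_vs_reviewer = (inst["reviewer1_score"] - inst["reviewer2_score"]).abs().mean()

print(f"Metrics vs. peer review (institution level): {metric_vs_peer:.3f}")
print(f"Reviewer vs. reviewer (internal benchmark):  {reviewer_vs_reviewer:.3f}")
```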

We find that the level of agreement is generally higher at the institutional level than at the publication level. Overall, the agreement between metrics and peer review is on a par with the internal agreement between two reviewers for certain fields of science in this particular context.

This suggests that, for some fields, bibliometric indicators could be considered as an alternative to peer review for the Italian national research assessment exercise. Although the results do not necessarily generalise to other contexts, they do raise the question of whether similar findings would be obtained for other research assessment exercises, such as in the United Kingdom.

URL : https://arxiv.org/abs/2006.14830