On The Peer Review Reports: Does Size Matter?

Authors : Abdelghani Maddi, Luis Miotti

Amidst the ever-expanding realm of scientific production and the proliferation of predatory journals, the focus on peer review remains paramount for scientometricians and sociologists of science. Despite this attention, there is a notable scarcity of empirical investigations into the tangible impact of peer review on publication quality.

This study aims to address this gap through a comprehensive analysis of how peer review contributes to the quality of scholarly publications, as measured by the citations they receive. Using an adjusted dataset of 57,482 publications from Publons matched to the Web of Science and employing the Raking Ratio method, our findings shed light on a nuanced relationship between the length of reviewer reports and the citations publications subsequently receive.
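The Raking Ratio method mentioned here is a survey-reweighting technique, also known as iterative proportional fitting: sample cell counts are rescaled until their marginal totals match known population margins. A minimal sketch, with entirely made-up counts and margins (the real study's adjustment variables are not specified in this summary), might look like this:

```python
import numpy as np

def rake(table, row_margins, col_margins, iters=100, tol=1e-9):
    """Iterative proportional fitting: rescale a sample cross-tab so its
    row and column totals match known population margins."""
    w = table.astype(float).copy()
    for _ in range(iters):
        w *= (row_margins / w.sum(axis=1))[:, None]   # match row totals
        w *= (col_margins / w.sum(axis=0))[None, :]   # match column totals
        if np.allclose(w.sum(axis=1), row_margins, atol=tol):
            break  # both margins now fit within tolerance
    return w

# Hypothetical 2x2 sample cross-tab (e.g. field x document type)
sample = np.array([[30.0, 10.0],
                   [20.0, 40.0]])
# Hypothetical population margins (e.g. taken from Web of Science)
weighted = rake(sample,
                row_margins=np.array([50.0, 50.0]),
                col_margins=np.array([60.0, 40.0]))
```

Each pass alternately fixes the row and column totals; for positive tables the procedure converges quickly, and the resulting cell weights make the sample representative along the chosen margins.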

Through a robust regression analysis, we establish that, beyond a threshold of 947 words, the length of reviewer reports is significantly associated with an increase in citations. These results confirm the initial hypothesis that longer reports signal requested improvements, thereby enhancing the quality and visibility of articles, and underscore the importance of timely and comprehensive reviewer reports.
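A threshold effect of this kind is often modeled with a hinge (piecewise-linear) term that lets the slope change at the breakpoint. A minimal sketch on simulated data: the 947-word threshold is taken from the summary above, but the data, effect size, and model form are illustrative assumptions, not the study's actual specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: report length in words, and citation counts that only
# respond to length beyond the 947-word threshold.
length = rng.uniform(100, 3000, size=500)
hinge = np.maximum(0.0, length - 947)          # 0 below threshold, linear above
citations = 5 + 0.004 * hinge + rng.normal(0, 1, size=500)

# OLS with intercept, length, and hinge term: beta[2] captures the
# additional citations per word of report beyond the threshold.
X = np.column_stack([np.ones_like(length), length, hinge])
beta, *_ = np.linalg.lstsq(X, citations, rcond=None)
```

Because the hinge regressor is zero below 947 words, a significantly positive `beta[2]` would indicate that report length matters only past the threshold, which is the shape of relationship the abstract describes.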

Furthermore, insights from Publons’ data suggest that open access to reports can influence reviewer behavior, encouraging more detailed reports. Beyond the scholarly landscape, our findings prompt a reevaluation of the role of reviewers, emphasizing the need to recognize and value this resource-intensive yet underappreciated activity in institutional evaluations.

Additionally, the study sounds a cautionary note regarding the challenges faced by peer review in the context of an increasing volume of submissions, potentially compromising the vigilance of peers in swiftly assessing numerous articles.

HAL : https://cnrs.hal.science/hal-04492274

Peer review’s irremediable flaws: Scientists’ perspectives on grant evaluation in Germany

Authors : Eva Barlösius, Laura Paruschke, Axel Philipps

Peer review has developed over time to become the established procedure for assessing and assuring the scientific quality of research. Nevertheless, the procedure has also been variously criticized as conservative, biased, and unfair, among other things. Do scientists regard all these flaws as equally problematic?

Do they have the same opinions on which problems are so serious that other selection procedures ought to be considered? The answers to these questions hint at what should be modified in peer review processes as a priority. The authors of this paper use survey data to examine how members of the scientific community weight different shortcomings of peer review processes.

Which of those processes’ problems do they consider less relevant? Which problems, on the other hand, do they judge to be beyond remedy? Our investigation shows that certain defects of peer review processes are indeed deemed irreparable: (1) legitimate quandaries in fine-tuning the choice between equally eligible research proposals and in selecting daring ideas; and (2) illegitimate problems due to networks. Science-policy measures to improve peer review processes should therefore distinguish more clearly between field-specific remediable and irremediable flaws than is currently the case.


DOI : https://doi.org/10.1093/reseval/rvad032

In which fields are citations indicators of research quality?

Authors : Mike Thelwall, Kayvan Kousha, Emma Stuart, Meiko Makita, Mahshid Abdoli, Paul Wilson, Jonathan Levitt

Citation counts are widely used as indicators of research quality to support or replace human peer review and for lists of top cited papers, researchers, and institutions. Nevertheless, the relationship between citations and research quality is poorly evidenced. We report the first large-scale science-wide academic evaluation of the relationship between research quality and citations (field normalized citation counts), correlating them for 87,739 journal articles in 34 field-based UK Units of Assessment (UoA).

The two correlate positively in all academic fields, from very weak (0.1) to strong (0.5), reflecting broadly linear relationships in all fields. We give the first evidence that the correlations are positive even across the arts and humanities. The patterns are similar for the field classification schemes of Scopus and Dimensions.ai, although they vary for some individual subjects and are therefore more uncertain there.
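Field normalization, as used in this correlation study, divides a paper's citation count by the average for its field so that scores are comparable across fields with very different citation densities. A toy sketch with invented fields and counts (the study's actual normalization also conditions on publication year and document type, which this omits):

```python
from collections import defaultdict

# Hypothetical records: (field, citation count).
papers = [("oncology", 40), ("oncology", 10), ("oncology", 10),
          ("history", 4), ("history", 1), ("history", 1)]

# Mean citations per field.
totals, counts = defaultdict(float), defaultdict(int)
for field, c in papers:
    totals[field] += c
    counts[field] += 1
field_mean = {f: totals[f] / counts[f] for f in totals}

# Field-normalized score: citations relative to the field average.
normalized = [(f, c / field_mean[f]) for f, c in papers]
# An oncology paper with 40 citations and a history paper with 4
# citations both score 2.0 against their respective field averages.
```

This is why raw citation counts cannot be compared between, say, oncology and history, while normalized scores can, which is the premise of correlating them with peer-assessed quality across 34 Units of Assessment.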

We also show for the first time that no field has a citation threshold beyond which all articles are excellent quality, so lists of top cited articles are not pure collections of excellence, and neither is any top citation percentile indicator. Thus, while appropriately field normalized citations associate positively with research quality in all fields, they never perfectly reflect it, even at high values.


DOI : https://doi.org/10.1002/asi.24767

Enriching research quality: A proposition for stakeholder heterogeneity

Authors : Thomas Franssen

Dominant approaches to research quality rest on the assumption that academic peers are the only relevant stakeholders in its assessment. In contrast, impact assessment frameworks recognize a large and heterogeneous set of actors as stakeholders. In transdisciplinary research, non-academic stakeholders are actively involved in all phases of the research process, and actor-network theorists recognize a broad and heterogeneous set of actors as stakeholders in all types of research, since these actors are assigned roles in the socio-material networks, also termed ‘problematizations’, that researchers reconfigure.

Actor-network theorists consider research as a performative act that changes the reality of the stakeholders it, knowingly or unknowingly, involves. Established approaches to, and notions of, research quality do not recognize the heterogeneity of relevant stakeholders nor allow for reflection on the performative effects of research.

To enrich the assessment of research quality, this article explores the problematization as a potential new object of evaluation. Problematizations are proposals for how the future might look. Hence, their acceptance concerns not only fellow academics but also all other human and other-than-human actors that figure in them.

To enrich evaluative approaches, this article argues for the inclusion of stakeholder involvement and stakeholder representation as dimensions of research quality.

It considers a number of challenges to doing so, including the identification of stakeholders, the development of quality criteria for stakeholder involvement and representation, and the possibility of participatory research evaluation. It can alternatively be summarized as raising the question: for whose benefit do we conduct evaluations of research quality?


DOI : https://doi.org/10.1093/reseval/rvac012

Indicators of research quality, quantity, openness and responsibility in institutional review, promotion and tenure policies across seven countries

Authors : Nancy Pontika, Thomas Klebel, Antonia Correia, Hannah Metzler, Petr Knoth, Tony Ross-Hellauer

The need to reform research assessment processes related to career advancement at research institutions has become increasingly recognised in recent years, especially to better foster open and responsible research practices. Current assessment criteria are believed to focus too heavily on inappropriate criteria related to productivity and quantity as opposed to quality, collaborative open research practices, and the socio-economic impact of research.

However, evidence of the extent of these issues is urgently needed to inform actions for reform. We analyse current practices as revealed by documentation on institutional review, promotion and tenure (RPT) processes in seven countries (Austria, Brazil, Germany, India, Portugal, the United Kingdom and the United States of America).

Through systematic coding and analysis of 143 RPT policy documents from 107 institutions for the prevalence of 17 criteria (including those related to qualitative or quantitative assessment of research, service to the institution or profession, and open and responsible research practices), we compare assessment practices across a range of international institutions to significantly broaden this evidence-base.

Although prevalence of indicators varies considerably between countries, overall we find that currently open and responsible research practices are minimally rewarded and problematic practices of quantification continue to dominate.


DOI : https://doi.org/10.1162/qss_a_00224

Publishing of COVID-19 preprints in peer-reviewed journals, preprinting trends, public discussion and quality issues

Authors : Ivan Kodvanj, Jan Homolak, Vladimir Trkulja

COVID-19-related (vs. non-related) articles appear to be more expeditiously processed and published in peer-reviewed journals.

We aimed to evaluate: (i) whether COVID-19-related preprints were favored for publication, (ii) preprinting trends and public discussion of the preprints, and (iii) the relationship between the publication topic (COVID-19-related or not) and quality issues.

Manuscripts deposited on bioRxiv and medRxiv between January 1 and September 27, 2020 were assessed for the probability of publication in peer-reviewed journals, and those published were evaluated for submission-to-acceptance time. The extent of public discussion was assessed from Altmetric and Disqus data.

The Retraction Watch Database and PubMed were used to explore retractions of COVID-19 and non-COVID-19 articles and preprints. With adjustment for the preprint server and the number of deposited versions, COVID-19-related preprints were more likely to be published within 120 days of deposition of the first version (OR = 1.96, 95% CI: 1.80–2.14) as well as over the entire observed period (OR = 1.39, 95% CI: 1.31–1.48). Submission-to-acceptance time was 35.85 days (95% CI: 32.25–39.45) shorter for COVID-19 articles.
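The odds ratio (OR) reported here compares the odds of timely publication between COVID-19-related and unrelated preprints. A sketch of the underlying arithmetic with hypothetical 2x2 counts; the study's 1.96 is an adjusted estimate from a model, whereas this is just the unadjusted calculation:

```python
# Hypothetical counts: published within 120 days vs not, by topic.
covid_pub, covid_not = 980, 1020     # COVID-19-related preprints
other_pub, other_not = 2000, 4000    # unrelated preprints

# Odds of timely publication in each group, then their ratio.
covid_odds = covid_pub / covid_not   # ~0.96
other_odds = other_pub / other_not   # 0.50
odds_ratio = covid_odds / other_odds # ~1.92: roughly twice the odds
```

An OR above 1 with a confidence interval excluding 1 (as in 1.96, 95% CI: 1.80–2.14) indicates that COVID-19-related preprints had reliably higher odds of rapid journal publication.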

Public discussion of preprints was modest and COVID-19 articles were overrepresented in the pool of retracted articles in 2020. Current data suggest a preference for publication of COVID-19-related preprints over the observed period.

DOI : https://doi.org/10.1007/s11192-021-04249-7

RipetaScore: Measuring the Quality, Transparency, and Trustworthiness of a Scientific Work

Authors : Josh Q. Sumner, Cynthia Hudson Vitale, Leslie D. McIntosh

A wide array of existing metrics quantifies a scientific paper’s prominence or the author’s prestige. Many who use these metrics make assumptions that higher citation counts or more public attention must indicate more reliable, better quality science.

While current metrics offer valuable insight into scientific publications, they are an inadequate proxy for measuring the quality, transparency, and trustworthiness of published research.

Three elements are essential to establishing trust in a work: trust in the paper, trust in the author, and trust in the data. To address these elements in a systematic and automated way, we propose the ripetaScore as a direct measurement of a paper’s research practices, professionalism, and reproducibility.

Using a sample of our current corpus of academic papers, we demonstrate the ripetaScore’s efficacy in determining the quality, transparency, and trustworthiness of an academic work.

In this paper, we aim to provide a metric to evaluate scientific reporting quality in terms of transparency and trustworthiness of the research, professionalism, and reproducibility.


DOI : https://doi.org/10.3389/frma.2021.751734