The role of non-scientific factors vis-a-vis the quality of publications in determining their scholarly impact

Authors : Giovanni Abramo, Ciriaco Andrea D’Angelo, Leonardo Grilli

In the evaluation of scientific publications’ impact, the interplay between intrinsic quality and non-scientific factors remains a subject of debate. While peer review traditionally assesses quality, bibliometric techniques gauge scholarly impact. This study investigates the role of non-scientific attributes alongside quality scores from peer review in determining scholarly impact.

Leveraging data from the first Italian Research Assessment Exercise (VTR 2001-2003) and Web of Science citations, we analyse the relationship between quality scores, non-scientific factors, and publications’ short- and long-term impact.

Our findings shed light on the significance of non-scientific elements overlooked in peer review, offering policymakers and research managers insights for choosing evaluation methodologies. Sections delve into the debate, identify non-scientific influences, detail methodologies, present results, and discuss implications.

Arxiv : The role of non-scientific factors vis-a-vis the quality of publications in determining their scholarly impact

May contain English – The assessment effect on language in publications and how this has manifested over a decade of DOAB titles

Authors : Danny Kingsley, Ronald Snijder

Research assessment is a major driver of research behaviour. The current emphasis on journal citations in a limited number of journals with an English focus has multiple effects. The need to publish in English even when it is not the local language affects the type of research undertaken and further consolidates a global North-centric view of science.

The bibliometric databases on which assessments of universities and journals are based are owned by two large corporate organisations, and this concentration of the market has in turn concentrated the research environment. Open infrastructure offers an alternative option for the research endeavour.

The OAPEN online open access library and the Directory of Open Access Books form part of this infrastructure and we consider the pattern of languages present in the directories over time.


A survey of how biology researchers assess credibility when serving on grant and hiring committees

Authors : Iain Hrynaszkiewicz, Beruria Novich, James Harney, Veronique Kiermer

Researchers who serve on grant review and hiring committees have to make decisions about the intrinsic value of research in short periods of time, and research impact metrics such as the Journal Impact Factor (JIF) exert undue influence on these decisions. Initiatives such as the Coalition for Advancing Research Assessment (CoARA) and the Declaration on Research Assessment (DORA) emphasize responsible use of quantitative metrics and avoidance of journal-based impact metrics for research assessment. Further, our previous qualitative research suggested that assessing credibility, or trustworthiness, of research is important to researchers not only when they seek to inform their own research but also in the context of research assessment committees.

To confirm our findings from previous interviews in quantitative terms, we surveyed 485 biology researchers who have served on committees for grant review or hiring and promotion decisions, to understand how they assess the credibility of research outputs in these contexts. We found that concepts like credibility, trustworthiness, quality and impact lack consistent definitions and interpretations by researchers, which had already been observed in our interviews.

We also found that assessment of credibility is very important to most (81%) researchers serving on these committees, but fewer than half of respondents are satisfied with their ability to assess credibility. A substantial proportion of respondents (57%) report using journal reputation and JIF to assess credibility – proxies that research assessment reformers consider inappropriate for assessing credibility because they do not rely on intrinsic characteristics of the research.

This gap between the importance of an assessment and satisfaction in the ability to conduct it was reflected in multiple aspects of credibility we tested, and it was greatest for researchers seeking to assess the integrity of research (such as identifying signs of fabrication, falsification, or plagiarism) and the suitability and completeness of research methods. Non-traditional research outputs associated with Open Science practices – sharing of research data, code, protocols, and preprints – are particularly hard for researchers to assess, despite the potential of Open Science practices to signal trustworthiness.

Our results suggest opportunities to develop better guidance and better signals to support the evaluation of research credibility and trustworthiness – and ultimately support research assessment reform, away from the use of inappropriate proxies for impact and towards assessing the intrinsic characteristics and values researchers see as important.


Peer review’s irremediable flaws: Scientists’ perspectives on grant evaluation in Germany

Authors : Eva Barlösius, Laura Paruschke, Axel Philipps

Peer review has developed over time to become the established procedure for assessing and assuring the scientific quality of research. Nevertheless, the procedure has also been variously criticized as conservative, biased, and unfair, among other things. Do scientists regard all these flaws as equally problematic?

Do they have the same opinions on which problems are so serious that other selection procedures ought to be considered? The answers to these questions hint at what should be modified in peer review processes as a priority objective. The authors of this paper use survey data to examine how members of the scientific community weight different shortcomings of peer review processes.

Which of those processes’ problems do they consider less relevant? Which problems, on the other hand, do they judge to be beyond remedy? Our investigation shows that certain defects of peer review processes are indeed deemed irreparable: (1) legitimate quandaries in the process of fine-tuning the choice between equally eligible research proposals and in the selection of daring ideas; and (2) illegitimate problems due to networks. Science-policy measures to improve peer review processes should therefore draw a clearer distinction between field-specific remediable and irremediable flaws than is currently the case.

URL : Peer review’s irremediable flaws: Scientists’ perspectives on grant evaluation in Germany


Promoting values-based assessment in review, promotion, and tenure processes

Authors : Caitlin Carter, Michael R. Dougherty, Erin C. McKiernan, Greg Tananbaum

Criteria and guidelines for review, promotion, and tenure (RPT) processes form the bedrock of institutional and departmental policies, and are a major driver of faculty behavior, influencing the time faculty spend on different activities like outreach, publishing practices, and more.

However, research shows that many RPT guidelines emphasize quantity over quality when evaluating research and teaching, and favor bibliometrics over qualitative measures of broader impact.

RPT processes rarely explicitly recognize or reward the various public dimensions of faculty work (e.g., outreach, research sharing, science communication), or, when they do, relegate them to the service category, which is undervalued and often falls heavily on women and underrepresented groups.

There is a need to correct this mismatch between institutional missions or values—often focused on aspects like community engagement, equity, diversity, and inclusion, or public good—and the behaviors that are rewarded in academic assessments. We describe recent efforts to promote RPT reform and realign institutional incentives using a values-based approach, including an overview of workshops we ran at the 2023 Council of Graduate Departments of Psychology (COGDOP) Annual Meeting, the Association for Psychological Science (APS) Annual Convention, and the American Anthropological Association (AAA) Department Leaders Summer Institute.

These workshops were designed to guide participants through the process of brainstorming what values are important to them as departments, institutions, or more broadly as disciplines, and which faculty behaviors might embody these values and could be considered in RPT evaluations. We discuss how similar activities could promote broader culture change.

URL : Promoting values-based assessment in review, promotion, and tenure processes


The Role of Academic Libraries in Scientific Production Evaluation – the Experience of University of Zagreb, Croatia

Authors : Branka Marijanović, Tatijana Petrić, Zrinka Udiljak Bugarinovski, Višnja Novosel

Since internationally visible scientific productivity is a criterion for state evaluation of Croatian academic and scientific institutions and their scientists, Croatian academic libraries have a key role in quantitative evaluation of scientific productivity using methods such as bibliometrics, scientometrics and the like.

The aim of this case study is to identify and illustrate the current situation of library services for evaluating scientific production at the University of Zagreb, Croatia, and to make recommendations for the further development of such services, which could serve as a framework for the systematic implementation of this type of service in all libraries at the University of Zagreb and beyond.

More specifically, the purpose of this paper was to identify existing bibliometric services in the libraries of the University of Zagreb (UNIZG), to examine the status and involvement of university librarians in academic advancement procedures, and to identify the required competences for bibliometric experts in Croatia.

The research was conducted using the content analysis method, the survey method, and the focus group method. The research results show that although UNIZG libraries are integrated into the system of academic promotion and the role of UNIZG libraries is enshrined in Croatian regulations, the bibliometric service is not standardised at the University level.

The results also indicate that the service needs to be strengthened in terms of training of professional staff and greater investment in staff capacity and infrastructure.

The fact that the study was conducted at a single Croatian university is a possible limitation with respect to applying the guidelines for further action and developing bibliometric services at the national level. It would therefore be desirable to conduct future research to identify the situation at other Croatian universities as well.

It would also be necessary to determine the open science and open access policies at UNIZG through further research and, in this context, to establish guidelines for possible improvements in the processes of evaluating scientific productivity.

The results of this study make an important contribution to the possible future positioning of university libraries and UNIZG librarians in the process of evaluating scientific productivity. In addition, some practical advice is given so that this case study may be a good introductory overview for the wider academic community in relation to this topic.

URL : The Role of Academic Libraries in Scientific Production Evaluation – the Experience of University of Zagreb, Croatia


Viewing research assessment, the academic reward system, and academic publishing through the power/knowledge lens of Foucault

Author : Timothy D. Bowman

The academic research assessment system, the academic reward system, and the academic publishing system are interrelated mechanisms that facilitate the scholarly production of knowledge.

This article considers these systems using a Foucauldian lens to examine the power/knowledge relationships found within and through them. A brief description of the various systems is introduced, followed by examples of instances where Foucault’s power, knowledge, discourse, and power/knowledge concepts are useful for providing a broader understanding of the norms and rules associated with each system, and of how these systems form a network of power relationships that reinforce and shape one another.

URL : Viewing research assessment, the academic reward system, and academic publishing through the power/knowledge lens of Foucault