The independence paradox in scientific careers

Authors : Yanmeng Xing, Ye Sun, Tongxin Pan, Giacomo Livan, Yifang Ma

Establishing an independent academic identity is a central yet insufficiently understood challenge for early-career researchers, as limited resources and mentor-driven research agendas often constrain early efforts toward autonomy.

To provide large-scale quantitative evidence on how junior researchers develop independence, we introduce a framework that traces how mentees diverge from their mentors in both research topics and collaboration networks, and how these divergences relate to long-term scientific impact.

Analyzing over 500,000 mentee-mentor pairs in Chemistry, Neuroscience, and Physics across six decades, we find that high-impact scientists often initiate work in secondary areas of their mentors’ expertise while adaptively establishing distinct research trajectories. This pattern is most pronounced among mentees who eventually surpass their mentors’ impact.

We identify an inverted U-shaped relationship between topic divergence and mentees’ enduring impact, with moderate divergence yielding the highest scientific impact, revealing an independence paradox in scientific careers.

This pattern holds whether topic divergence is measured by citation-network distance or by semantic thematic distance. We further find that excessive direct mentor-mentee collaboration correlates with lower mentee impact, whereas expanding professional networks to include mentors’ collaborators is beneficial.

These findings not only offer actionable guidance for early-career researchers navigating independence but also inform institutional policies that promote mentorship structures supporting intellectual innovation and recognizing original contributions in promotion evaluations.

DOI : https://doi.org/10.48550/arXiv.2408.16992

Determining quality dimensions for peer review reports using a Delphi approach

Authors : Amanda Sizo, Adriano Lino, Álvaro Rocha, Luis Paulo Reis

The quality of peer review reports is essential to the integrity and effectiveness of scholarly communication. Yet review reports are often criticized for being vague, biased, or unconstructive, which limits their usefulness for both authors and editors. Existing frameworks for assessing review quality remain fragmented and are rarely validated through expert consensus.

This study aims to define and validate a comprehensive set of quality dimensions for peer review reports, encompassing comments addressed to both authors and editors. We employed a two-phase design combining a thematic analysis of the literature with a Delphi study involving 43 scientific editors, primarily from journals in Computer Science and Engineering.

Consensus was reached after two Delphi rounds, resulting in 62 validated statements organized into eight quality dimensions: Helpfulness, Specificity, Fairness, Thoroughness, Courteousness, Readability, Consistency, and Relevance. These findings provide an empirically grounded framework to inform the development of clearer standards for peer review practice.

URL : Determining quality dimensions for peer review reports using a Delphi approach

DOI : https://doi.org/10.1007/s11192-026-05603-3

AI And the Editors’ Ghost: Who Is the Writer Now?

Authors : David Clark, David Nicholas, Abdullah Abrizah, John Akeroyd, Jorge Revez, Blanca Rodríguez-Bravo, Marzena Swigon, Tatyana Polezhaeva, Anne Gere, Eti Herman

This is an exploration of the use of AI in research and writing. It builds upon the ‘Harbingers’ project, an international, longitudinal study of early career researchers (ECRs) and scholarly communication.

In the fourth phase of the project, we returned to the theme of AI, in particular AI as ‘ghostwriter’. Our sources are transcripts of conversational, open-form interviews with over 60 ECRs from Britain, Malaysia, Poland, Portugal, Spain, Russia, and other countries.

For an initial analysis of the transcripts, we used Google NotebookLM. It produced an overarching, thematic summary of the data in minutes, work that would otherwise have occupied our research team for weeks. The unprompted text, immediately plausible and coherent, was regarded by all national interviewers as impressive.

Here, using a relatively small convenience sample, we compare the AI-generated summaries both against our original data and against those first impressions. We reflect upon our own experience of using AI and that of our interviewees.

This paper describes how we used AI as an experiment, our reaction to it, and how that experience resonates with those of the ECRs. It is a calibration for our future data analysis.

URL : AI And the Editors’ Ghost: Who Is the Writer Now? (Learned Publishing, 2026)

DOI : https://doi.org/10.1002/leap.2051

Digging deeper into data citations: recognizing and rewarding data work

Authors : Kathleen Gregory, Stefanie Haustein, Constance Poitras, Emma Roblin, Anton Ninkov, Chantal Ripp, Isabella Peters

Citations and metrics are central features in evaluating academic careers. As researchers increasingly engage in open science, data citations have emerged as potential mechanisms for evaluating and rewarding data sharing and reuse in academic assessments.

Despite this, we still lack critical information about the data citation practices and motivations of researchers themselves, information which is needed to contextualize the use of such metrics.

Here, we present the results of a semi-structured interview study with researchers across disciplines exploring their data referencing practices and motivations, as well as how they would like their ‘data work’ (including data sharing) to be rewarded and evaluated. As a whole, our findings confirm a lack of standard practices for referencing data and provide new insights into the social and scientific reasons motivating data referencing.

While our results show an overall skepticism toward the use of citation-based metrics in evaluations, they also suggest that researchers are caught between traditional and emergent modes of assessment for recognizing data work.

Furthermore, we find that rather than valuing data citations as rewards, our participants value creating data objects which are useful for their (often small) research communities. Ultimately, we conclude that data work is a cornerstone of research practice which needs to be evaluated and considered, but one which also requires context-aware approaches.

URL : Digging deeper into data citations: recognizing and rewarding data work

DOI : https://doi.org/10.1093/reseval/rvag008

Funders open access mandates: uneven uptake and challenging models

Authors : Lucía Céspedes, Madelaine Hare, Simon van Bellen, Philippe Mongeon, Vincent Larivière

Over the last two decades, research funders have adopted Open Access (OA) mandates of various forms and with varying success. While some funders emphasize gold OA through article processing charges, others favour green OA and repositories, leading to a fragmented policy landscape.

Compliance with these mandates depends on several factors, including disciplinary field, monitoring, and availability of repository infrastructure. Based on 5 million papers supported by 36 funders from 20 countries, 11 million papers funded by other organisations, and 10 million papers without any funding reported, this study explores how different policies influence the adoption of OA.

Findings indicate sustained growth in OA overall, especially hybrid and gold OA, and show that funded papers are more likely to be OA than unfunded ones. These results suggest that policies such as Plan S, as well as read-and-publish agreements, have had a strong influence on OA adoption, especially among European funders.

However, the global low uptake of Diamond OA and limited indexing of OA outputs in Latin American countries highlight ongoing disparities, influenced by funding constraints, journal visibility, and regional infrastructure challenges.

URL : Funders open access mandates: uneven uptake and challenging models

DOI : https://doi.org/10.48550/arXiv.2603.03457

Publishing Service in Times of Crisis: A Case Study of the Academic Library’s Contribution to the New Knowledge Dissemination

Authors : Valentyna Mamedova, Valerii Kushnarov, Olena Skachenko, Alla Malshakova

Introduction

Academic libraries play an important role in scholarly communication and the dissemination of new knowledge about the state of science in their educational institutions. The article presents a case study of the contribution of the library of Kyiv National University of Culture and Arts (Ukraine) to the university’s publishing program, which includes the publication of 10 peer-reviewed open access journals, monographs, and conference proceedings.

Description of Service

The library’s publishing house edited 598 scientific works, including 107 monographs and textbooks and 115 conference proceedings. To promote the publishing program and its scientific journals, an interactive digital project, 12+ Books of the Year, was developed. Visualising the results of the library’s publishing activity has made this scholarly communication immediate and widely accessible.

The article adds to the literature on library publishing services, explores the evolution and range of those services, and identifies the factors that influenced library publishing during the crises of COVID-19 and martial law. It can be useful for practising librarians involved in library publishing.

Next steps

In the future, the research library will continue to facilitate production and post-production publishing processes and to promote the university’s publishing program. It also intends to intensify its participation in formal scholarly and scientific communication, to increase publication activity, and to raise the impact factor of the university’s journals and the citation of their articles.

URL : Publishing Service in Times of Crisis: A Case Study of the Academic Library’s Contribution to the New Knowledge Dissemination

DOI : https://doi.org/10.31274/jlsc.18937