Evaluating Open Access Advantages for Citations and Altmetrics (2011-21): A Dynamic and Evolving Relationship

Author : Mike Taylor

Differences between the impacts of Open Access (OA) and non-OA research have been observed across a range of citation and altmetric indicators, with most studies finding an Open Access Advantage (OAA). However, science-wide analyses covering multiple years, indicators and disciplines are lacking. Using citations and six altmetrics for 33.3M articles published 2011-21, we compare OA and non-OA papers.

The results show that there is no universal OAA across all disciplines or impact indicators: the OAA for citations tends to be lower for recent papers, whereas the OAAs for news, blogs and Twitter are consistent across years and unrelated to the volume of OA publications. Wikipedia OAAs are consistently pronounced for all subjects except Humanities (HU) and Social Sciences. Patent OAAs are strongest for Medical & Health Sciences (MHS) and Life Sciences (LS).

Uniquely, the OAA for Policy citations is stronger for recently published research. These results support different hypotheses for different subjects and indicators. The evidence is consistent with OA accelerating research impact in MHS, LS and HU; with increased visibility and discoverability promoting socio-economic impact; and with OA being a factor in growing online engagement with research. OAAs are therefore complex, dynamic and multi-factorial, and require considerable analysis to understand.
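To make the OAA comparisons concrete: one common operationalization in the literature (an assumption here, since the abstract does not define the indicator) is the ratio of mean impact scores for OA versus non-OA papers, computed separately for each indicator and discipline. A minimal Python sketch with made-up numbers:

```python
import statistics

# Hypothetical toy data (illustrative only, not from the study):
# citation counts for a set of OA and non-OA papers in one field.
oa_citations = [12, 5, 30, 8, 19, 2, 44]
non_oa_citations = [7, 3, 15, 6, 10, 1, 22]

# One common definition of the OA Advantage: the ratio of mean impact
# for OA papers to mean impact for non-OA papers. A value above 1
# indicates an advantage on that indicator.
oaa = statistics.mean(oa_citations) / statistics.mean(non_oa_citations)
print(f"Citation OAA: {oaa:.2f}")
```

The same ratio can be computed for each altmetric indicator (news, blogs, Twitter, Wikipedia, patents, policy) to compare OAAs across indicators and publication years.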

URL : Evaluating Open Access Advantages for Citations and Altmetrics (2011-21): A Dynamic and Evolving Relationship


The independence paradox in scientific careers

Authors : Yanmeng Xing, Ye Sun, Tongxin Pan, Giacomo Livan, Yifang Ma

Establishing an independent academic identity is a central yet insufficiently understood challenge for early-career researchers: limited resources and mentor-driven research agendas often constrain early efforts toward autonomy.

To provide large-scale quantitative evidence on how junior researchers develop independence, we introduce a framework that traces how mentees diverge from their mentors in both research topics and collaboration networks, and how these divergences relate to long-term scientific impact.

Analyzing over 500,000 mentee-mentor pairs in Chemistry, Neuroscience, and Physics across six decades, we find that high-impact scientists often initiate work in secondary areas of their mentors’ expertise while adaptively establishing distinct research trajectories. This pattern is most pronounced among mentees who eventually surpass their mentors’ impact.

We identify an inverted U-shaped relationship between topic divergence and mentees’ enduring impact, with moderate divergence yielding the highest scientific impact, revealing an independence paradox in scientific careers.

This pattern holds whether topic divergence is measured by citation-network distance or by semantic thematic distance. We further reveal that excessive direct mentor-mentee collaboration correlates with lower mentee impact, whereas expanding professional networks to include mentors’ collaborators is beneficial.
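To illustrate the inverted-U claim (a hedged sketch with synthetic data, not the authors’ method or dataset): such a relationship is commonly detected by fitting a quadratic to divergence versus impact and checking that the coefficient on the squared term is negative, with the fitted peak at moderate divergence.

```python
import numpy as np

# Synthetic data for illustration only: impact peaks at moderate
# topic divergence and falls off at the extremes.
rng = np.random.default_rng(0)
divergence = rng.uniform(0, 1, 500)
impact = 4 * divergence * (1 - divergence) + rng.normal(0, 0.2, 500)

# Fit impact = b2*d^2 + b1*d + b0; an inverted U implies b2 < 0,
# with the peak at the vertex of the parabola.
b2, b1, b0 = np.polyfit(divergence, impact, 2)
peak = -b1 / (2 * b2)

print(f"quadratic coefficient: {b2:.2f} (negative => inverted U)")
print(f"impact peaks near divergence = {peak:.2f}")
```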

These findings offer actionable guidance for early-career researchers navigating independence, and they can also inform institutional policies that promote mentorship structures supporting intellectual innovation and that recognize original contributions in promotion evaluations.

DOI : https://doi.org/10.48550/arXiv.2408.16992

Determining quality dimensions for peer review reports using a Delphi approach

Authors : Amanda Sizo, Adriano Lino, Álvaro Rocha, Luis Paulo Reis

The quality of peer review reports is essential to the integrity and effectiveness of scholarly communication. Yet review reports are often criticized for being vague, biased, or unconstructive, which limits their usefulness for both authors and editors. Existing frameworks for assessing review quality remain fragmented and are rarely validated through expert consensus.

This study aims to define and validate a comprehensive set of quality dimensions for peer review reports, encompassing comments addressed to both authors and editors. We employed a two-phase design combining a thematic analysis of the literature with a Delphi study involving 43 scientific editors, primarily from journals in Computer Science and Engineering.

Consensus was reached after two Delphi rounds, resulting in 62 validated statements organized into eight quality dimensions: Helpfulness, Specificity, Fairness, Thoroughness, Courteousness, Readability, Consistency, and Relevance. These findings provide an empirically grounded framework to inform the development of clearer standards for peer review practice.

URL : Determining quality dimensions for peer review reports using a Delphi approach

DOI : https://doi.org/10.1007/s11192-026-05603-3

AI And the Editors’ Ghost: Who Is the Writer Now?

Authors : David Clark, David Nicholas, Abdullah Abrizah, John Akeroyd, Jorge Revez, Blanca Rodríguez-Bravo, Marzena Swigon, Tatyana Polezhaeva, Anne Gere, Eti Herman

This is an exploration of the use of AI in research and writing. It builds upon the ‘Harbingers’ project, an international and longitudinal study of early career researchers (ECRs) and scholarly communication.

In the fourth phase of the project, we returned to the theme of AI, in particular AI as ‘ghostwriter’. Our sources are transcripts of conversational, open-form interviews with over 60 ECRs from Britain, Malaysia, Poland, Portugal, Spain, Russia, and other countries.

For an initial analysis of the transcripts, we used Google NotebookLM. It produced an overarching, thematic summary of the data in minutes, a task that would otherwise have occupied our research team for weeks. The unprompted text, immediately plausible and coherent, was regarded by all national interviewers as impressive.

Here, using a relatively small convenience sample, we compare the AI-generated summaries against both our original data and those first impressions. We reflect upon our own experience of using AI and that of our interviewees.

This paper is about how we used AI as an experiment, our reaction to it, and how that chimes with, resonates with, and echoes the experiences of the ECRs. It is a calibration for our future data analysis.

URL : Learned Publishing – 2026 – Clark – AI And the Editors Ghost Who Is the Writer Now

DOI : https://doi.org/10.1002/leap.2051

Digging deeper into data citations: recognizing and rewarding data work

Authors : Kathleen Gregory, Stefanie Haustein, Constance Poitras, Emma Roblin, Anton Ninkov, Chantal Ripp, Isabella Peters

Citations and metrics are central features in evaluating academic careers. As researchers increasingly engage in open science, data citations have emerged as potential mechanisms for evaluating and rewarding data sharing and reuse in academic assessments.

Despite this, we still lack critical information about the data citation practices and motivations of researchers themselves, information which is needed to contextualize the use of such metrics.

Here, we present the results of a semi-structured interview study with researchers across disciplines exploring their data referencing practices and motivations, as well as how they would like their ‘data work’ (including data sharing) to be rewarded and evaluated. As a whole, our findings confirm a lack of standard practices for referencing data and provide new insights into the social and scientific reasons motivating data referencing.

While our results show an overall skepticism toward the use of citation-based metrics in evaluations, they also suggest that researchers are caught between traditional and emergent modes of assessment for recognizing data work.

Furthermore, we find that rather than valuing data citations as rewards, our participants value creating data objects which are useful for their (often small) research communities. Ultimately, we conclude that data work is a cornerstone of research practice which needs to be evaluated and considered, but one which also requires context-aware approaches.

URL : Digging deeper into data citations: recognizing and rewarding data work

DOI : https://doi.org/10.1093/reseval/rvag008

Funders open access mandates: uneven uptake and challenging models

Authors : Lucía Céspedes, Madelaine Hare, Simon van Bellen, Philippe Mongeon, Vincent Larivière

Over the last two decades, research funders have adopted Open Access (OA) mandates of various forms and with varying degrees of success. While some funders emphasize gold OA through article processing charges, others favour green OA and repositories, leading to a fragmented policy landscape.

Compliance with these mandates depends on several factors, including disciplinary field, monitoring, and availability of repository infrastructure. Based on 5 million papers supported by 36 funders from 20 countries, 11 million papers funded by other organisations, and 10 million papers without any funding reported, this study explores how different policies influence the adoption of OA.

Findings indicate a sustained growth in OA overall, especially hybrid and gold OA, and that funded papers are more likely to be OA than unfunded papers. Those results suggest that policies such as Plan S, as well as read-and-publish agreements, have had a strong influence on OA adoption, especially among European funders.

However, the low global uptake of Diamond OA and the limited indexing of OA outputs in Latin American countries highlight ongoing disparities, influenced by funding constraints, journal visibility, and regional infrastructure challenges.

URL : Funders open access mandates: uneven uptake and challenging models

DOI : https://doi.org/10.48550/arXiv.2603.03457