Global Open Research Commons: Enabling Curation for the Next 20 Years

Authors : Andrew Treloar, C. J. Woodford

This paper addresses the requirements for long-term preservation through a system lens. Rather than focussing on specific technical elements that are needed for curation, this paper considers all the system elements that need to be put in place, and intentionally maintained, to ensure curation for the long term.

This paper begins by making the argument that curation requires attention to preservation over time. The need for preservation, in turn, requires both sustainable data content and sustained infrastructure. These infrastructures consist of many elements, both social and technical, all of which need attention.

Then, this paper briefly introduces the concept of the open research commons as a way of conceptualising these elements, before examining in some detail the Global Open Research Commons (GORC) typology of essential elements. This work was developed through a Research Data Alliance Working Group, which started with a definition of a commons as ‘a global trusted ecosystem that provides seamless access to high-quality interoperable research outputs and services.’ The essential elements in the typology include Information and Communications Technology (ICT) infrastructure, services and tools, research objects, human capacity, rules of participation and access, governance, engagement, and sustainability.
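To make the typology easier to work with, here is a minimal sketch (ours, not the Working Group's) that encodes the essential elements as a Python mapping; the sub-items and the coverage helper are illustrative assumptions, not part of the official model.

```python
# Minimal sketch of the GORC typology of essential elements as a Python
# mapping. Top-level keys follow the typology named above; the sub-items
# are illustrative assumptions, not the Working Group's official model.
GORC_TYPOLOGY = {
    "ICT infrastructure": ["compute", "storage", "network"],
    "services and tools": ["repositories", "search", "analysis tools"],
    "research objects": ["datasets", "software", "publications"],
    "human capacity": ["training", "support staff"],
    "rules of participation and access": ["licensing", "access policies"],
    "governance": ["decision-making bodies", "policies"],
    "engagement": ["community outreach", "user feedback"],
    "sustainability": ["funding models", "succession planning"],
}

def coverage(commons_elements: set[str]) -> float:
    """Fraction of the essential elements a given commons addresses."""
    return len(commons_elements & set(GORC_TYPOLOGY)) / len(GORC_TYPOLOGY)

if __name__ == "__main__":
    # Example: a hypothetical commons that has addressed five elements so far.
    print(coverage({"governance", "ICT infrastructure", "research objects",
                    "services and tools", "sustainability"}))  # 0.625
```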

This general approach was then extended by the GORC International Model Working Group to ‘review and identify attributes or features currently implemented by a target set of GORC organisations.’ The GORC approach has already been used in designing the creation of new commons, characterising existing research infrastructures, and analysing interoperability between commons. Future work, to commence in 2025, will clarify how the International Model might be used and adopted, as well as improve how it is presented.

Our researchers require ongoing access to reliable and sustainable data aggregations. These will need to be curated for reuse and interoperability over the long term to support the integrity of the scholarly record. The GORC groups are working towards an interoperable set of platforms that together build on advances in internet technologies and the consensus and strengths of the research community.

URL : Global Open Research Commons: Enabling Curation for the Next 20 Years

DOI : https://doi.org/10.2218/ijdc.v19i1.1054

An Exploration of the Functionality and Usability of Open Research Platforms to Support Open Science

Authors : Whitney Thompson, Angela Murillo

This paper examines the user experience and functionality of four open research platforms – Zenodo, Figshare, OSF, and Authorea – to assess their utility in disseminating research outputs that are varied in form as well as academic discipline, and in facilitating collaboration on larger projects by multi-institutional groups.

The researchers analysed the platforms’ community features, record creation processes (including metadata fields), search functionality, and analytics capabilities.
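For a sense of what such an analysis involves in practice, here is a minimal sketch (ours, not from the paper) that queries Zenodo's public REST API and checks which metadata fields each returned record populates; the query term and the field list are our assumptions.

```python
# Minimal sketch: query Zenodo's public REST API and list which metadata
# fields each returned record exposes. The query term and the choice of
# fields to inspect are illustrative assumptions, not from the paper.
import requests

resp = requests.get(
    "https://zenodo.org/api/records",
    params={"q": "open science", "size": 3},
    timeout=30,
)
resp.raise_for_status()

for hit in resp.json()["hits"]["hits"]:
    meta = hit.get("metadata", {})
    print(meta.get("title", "<no title>"))
    # Which of a few common metadata fields does this record populate?
    for field in ("creators", "description", "keywords", "license"):
        print(f"  {field}: {'present' if field in meta else 'absent'}")
```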

URL : An Exploration of the Functionality and Usability of Open Research Platforms to Support Open Science

DOI : https://doi.org/10.2218/ijdc.v19i1.941

Exploring how the public “see” scientists: A systematic literature review, 1983–2024

Authors : Weiping Wang, Hongxuan Ji, Ying Wang, Zhisen Wang

The public image of scientists significantly influences scientific literacy, science education, professional identity, science communication, and societal attitudes toward public issues. However, there has not been a thorough and detailed review of this topic. This paper presents a Systematic Literature Review (SLR) of 233 high-quality articles examining public perceptions of scientists.

The findings indicate that studies emphasize vivid and emotionally engaging characteristics of scientists, reflecting contemporary trends, particularly during the pandemic. Research predominantly targets students across various educational levels, highlighting a gap between science education and science communication, with a reliance on quantitative methods despite the use of visualization tools.

Key research limitations include a lack of humanistic perspective, issues with validity and reproducibility, insufficient cultural context analysis, weak causal inferences, and limited integration of artificial intelligence and big data, which impede advancements in science education.

The paper concludes with recommendations for developing a more comprehensive conceptual framework to bridge the gaps between science education and communication, as well as their relationship with science teaching, in order to foster a positive public understanding of science.

URL : Exploring how the public “see” scientists: A systematic literature review, 1983–2024

DOI : https://doi.org/10.1057/s41599-025-05869-7

Scientific publishing without gatekeeping: an empirical investigation of eLife’s new peer review process

Authors : Rüdiger Mutz, Lutz Bornmann, Hans-Dieter Daniel

At the end of January 2023, eLife introduced a new publishing model (alongside the old, traditional publishing model): all manuscripts submitted as preprints are peer-reviewed and published if they are deemed worthy of review by the editorial team (“editorial triage”). The model abandons the gatekeeping function and retains the previous “consultative approach to peer review”.

Even under the changed conditions, the question of the quality of judgements in the peer review process remains. In this study, the reviewers’ ratings of manuscripts submitted to eLife were examined both through descriptive comparisons of the peer review models and against two selected quality criteria of peer review: interrater agreement and interrater reliability. eLife provided us with the data on all manuscripts submitted in 2023 under the new publishing model (group 3, N = 3,846), as well as manuscripts submitted under the old publishing model (group 1: N = 6,592 submissions from 2019; group 2: N = 364 submissions from 2023).

The interrater agreement and interrater reliability for the criteria “significance of findings” and “strength of support” were as low as previous empirical studies of gatekeeping journals have reported.
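To illustrate the two quality criteria examined, here is a minimal sketch (ours, not the authors' analysis code, with invented ratings) computing interrater agreement via weighted Cohen's kappa and a simple reliability proxy via Pearson correlation; the published study may well have used different estimators, such as intraclass correlations.

```python
# Minimal sketch of the two quality criteria examined: interrater
# agreement (weighted Cohen's kappa) and a simple interrater reliability
# proxy (Pearson correlation) for two reviewers' ordinal ratings of the
# same manuscripts. The ratings below are invented for illustration.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Ordinal ratings (e.g., 1 = lowest, 5 = highest) by two reviewers
# for ten manuscripts.
reviewer_a = np.array([3, 4, 2, 5, 3, 4, 1, 3, 4, 2])
reviewer_b = np.array([3, 3, 2, 4, 4, 4, 2, 2, 5, 3])

# Agreement: quadratic-weighted kappa respects the ordinal scale.
kappa = cohen_kappa_score(reviewer_a, reviewer_b, weights="quadratic")

# Reliability proxy: correlation between the two raters' scores.
r = np.corrcoef(reviewer_a, reviewer_b)[0, 1]

print(f"weighted kappa = {kappa:.2f}, Pearson r = {r:.2f}")
```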

The fairness of peer review is not, or only slightly, compromised. We used the empirical results of our study to recommend several improvements to eLife’s new publishing model, such as increasing transparency, masking author identity, and increasing the number of expert reviewers.

URL : Scientific publishing without gatekeeping: an empirical investigation of eLife’s new peer review process

DOI : https://doi.org/10.1007/s11192-025-05422-y

A decentralized future for the open-science databases

Authors : Gaurav Sharma, Viorel Munteanu, Nika Mansouri Ghiasi, Jineta Banerjee, Susheel Varma, Luca Foschini, Kyle Ellrott, Onur Mutlu, Dumitru Ciorbă, Roel A. Ophoff, Viorel Bostan, Christopher E. Mason, Jason H. Moore, Despoina Sousoni, Arunkumar Krishnan, Mihai Dimian, Gustavo Stolovitzky, Fabio G. Liberante, Taras K. Oleksyk, Serghei Mangul

Continuous and reliable access to curated biological data repositories is indispensable for accelerating rigorous scientific inquiry and fostering reproducible research. Centralized repositories, though widely used, are vulnerable to single points of failure arising from cyberattacks, technical faults, natural disasters, or funding and political uncertainties.

This can lead to widespread data unavailability, data loss, integrity compromises, and substantial delays in critical research, ultimately impeding scientific progress. Centralizing essential scientific resources in a single geopolitical or institutional hub is inherently dangerous, as any disruption can paralyze diverse ongoing research.

The rapid acceleration of data generation, combined with an increasingly volatile global landscape, necessitates a critical re-evaluation of the sustainability of centralized models. Implementing federated and decentralized architectures presents a compelling and future-oriented pathway to substantially strengthen the resilience of scientific data infrastructures, thereby mitigating vulnerabilities and ensuring the long-term integrity of data.

Here, we examine the structural limitations of centralized repositories, evaluate federated and decentralized models, and propose a hybrid framework for resilient, FAIR, and sustainable scientific data stewardship. Such an approach offers a significant reduction in exposure to governance instability, infrastructural fragility, and funding volatility, and also fosters fairness and global accessibility.
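To make the resilience argument concrete, here is a minimal sketch (ours, with hypothetical mirror URLs and checksum) of a client that retries a download across several independently operated mirrors and verifies integrity against a known checksum, so that no single repository is a point of failure.

```python
# Minimal sketch of the resilience idea: fetch a dataset from any one of
# several independently operated mirrors and verify its integrity against
# a known SHA-256 checksum, so no single host is a point of failure.
# The mirror URLs and checksum below are hypothetical placeholders.
import hashlib
import requests

MIRRORS = [
    "https://mirror-eu.example.org/datasets/genome-v3.tar.gz",
    "https://mirror-us.example.org/datasets/genome-v3.tar.gz",
    "https://mirror-asia.example.org/datasets/genome-v3.tar.gz",
]
EXPECTED_SHA256 = "0123abcd..."  # published out-of-band, e.g. in a manifest

def fetch_verified(mirrors, expected_sha256):
    for url in mirrors:
        try:
            resp = requests.get(url, timeout=60)
            resp.raise_for_status()
        except requests.RequestException:
            continue  # this mirror is down or unreachable; try the next
        if hashlib.sha256(resp.content).hexdigest() == expected_sha256:
            return resp.content  # intact copy found
    raise RuntimeError("no mirror returned an intact copy")
```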

The future of open science depends on integrating these complementary approaches to establish a globally distributed, economically sustainable, and institutionally robust infrastructure that safeguards scientific data as a public good, further ensuring continued accessibility, interoperability, and preservation for generations to come.

DOI : https://doi.org/10.48550/arXiv.2509.19206

Trends of Publication of Negative Trials Over Time

Authors : Bruno Laviolle, Clara Locher, Jean-Sébastien Allain, Quentin Le Cornu, Pierre Charpentier, Marie Lefebvre, Clémence Le Pape, Cyril Leven, Clément Palpacuer, Clémence Pontoizeau, Eric Bellissant, Florian Naudet

Studies with negative results are less likely to be published than others, potentially leading to publication bias. Introduced in 2000, trial registration could have contributed to decreasing the proportion of unpublished studies. We assessed the proportion of negative randomized controlled trials (RCTs) over the last 20 years.

We searched Medline for RCTs published in 2000, 2005, 2010, 2015, and 2020 in the British Medical Journal, the Journal of the American Medical Association, the Lancet, and the New England Journal of Medicine. The primary endpoint was the proportion of negative studies (those whose final comparison on the primary study endpoint lacked statistical significance or favored the control arm) published in 2000 and 2020.

Factors independently associated with the publication of negative studies were identified using multivariable analysis. A total of 1,542 studies were included. The proportion of negative RCTs significantly increased between 2000 and 2020 (from 27.6% to 37.4%; P = 0.01); however, the trend across all five time points was not significant (P = 0.203). In multivariable analysis, the following factors were associated with a higher proportion of published negative studies: superiority designs (P < 0.001), two-group trials (P < 0.001), number of patients ≥510 (P < 0.001), cardiology trials (P = 0.003), emergency/critical care trials (P < 0.001), obstetrics trials (P = 0.032), surgery trials (P = 0.006), and pneumology trials (P = 0.029).

Exclusive industry funding was associated with a lower proportion of published negative studies (P < 0.001). The proportion of published negative studies was higher in 2020 only when compared with 2000; across the two decades, no trend was noticeable. There is no clear relationship between trial registration and the publication of negative results over time.
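For readers unfamiliar with this kind of analysis, here is a minimal sketch (ours, with invented data and only a few of the covariates named above) of a multivariable logistic regression estimating which trial characteristics are associated with publication of a negative result.

```python
# Minimal sketch of a multivariable analysis like the one described above:
# a logistic regression of "negative result" on trial characteristics.
# The data frame is invented for illustration; the real study included
# 1,542 trials and many more covariates.
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.DataFrame({
    "negative":        [1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0],
    "superiority":     [1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0],
    "n_patients":      [620, 180, 900, 340, 515, 120, 260, 700, 480, 560, 610, 95],
    "industry_funded": [0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0],
})
# Dichotomise trial size at the >=510-patient threshold used in the study.
trials["large"] = (trials["n_patients"] >= 510).astype(int)

model = smf.logit(
    "negative ~ superiority + large + industry_funded",
    data=trials,
).fit(disp=0)
print(model.summary())
```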

URL : Trends of Publication of Negative Trials Over Time

DOI : https://doi.org/10.1002/cpt.3535

Publish-Review-Curate Modelling for Data Paper and Dataset: A Collaborative Approach

Authors : Youngim Jung, Sungsoo Robert Ahn

Research datasets—capturing natural, societal, or artificial phenomena—are critical in generating new scientific insights, validating research models, and supporting data-intensive discovery. Data papers that describe and contextualise these datasets aim to ensure their findability, accessibility, interoperability, and reusability (FAIR) while providing academic credit to data creators.

However, the peer review of data papers and associated datasets presents considerable challenges, requiring reviewers to assess the syntactic and semantic integrity of the data, metadata quality, and domain-specific scientific relevance. Furthermore, the coordination between journal editors, reviewers, and curators demands substantial effort, often leading to publication delays in the conventional review-then-publish framework.

This study proposes a novel Publish-Review-Curate (PRC) model tailored to the synchronised publication and review of data papers and their underlying datasets. Building on preprint and open science practices, the model defines a collaborative, multi-stakeholder workflow involving authors, peer reviewers, data experts, and journal editors.

The PRC model integrates open feedback, transparent peer review, and structured curation to improve research data’s quality, discoverability, and impact. By articulating conceptual and operational workflows, this study contributes a practical framework for modernising data publishing infrastructures and supporting the co-evaluation of narrative and data artefacts.
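As one way to picture the workflow, here is a minimal sketch (our reading of the model, not the authors' specification) of the Publish-Review-Curate stages as a small state machine tracking a data-paper/dataset pair; the stage names and allowed transitions are assumptions.

```python
# Minimal sketch of the Publish-Review-Curate (PRC) idea as a small state
# machine for a data-paper/dataset pair. The stage names and allowed
# transitions are our reading of the model, not the authors' specification.
from enum import Enum, auto

class Stage(Enum):
    PREPRINT_PUBLISHED = auto()   # publish first, as a preprint
    UNDER_OPEN_REVIEW = auto()    # transparent peer review plus data review
    UNDER_CURATION = auto()       # curators check metadata and FAIR compliance
    VERSION_OF_RECORD = auto()    # reviewed, curated paper and dataset

# Allowed transitions: publish -> review -> curate -> version of record,
# with loops back to review for revision rounds.
TRANSITIONS = {
    Stage.PREPRINT_PUBLISHED: {Stage.UNDER_OPEN_REVIEW},
    Stage.UNDER_OPEN_REVIEW: {Stage.UNDER_CURATION, Stage.UNDER_OPEN_REVIEW},
    Stage.UNDER_CURATION: {Stage.VERSION_OF_RECORD, Stage.UNDER_OPEN_REVIEW},
}

class DataPaper:
    def __init__(self, title: str, dataset_doi: str):
        self.title = title
        self.dataset_doi = dataset_doi
        self.stage = Stage.PREPRINT_PUBLISHED  # published before review

    def advance(self, next_stage: Stage) -> None:
        if next_stage not in TRANSITIONS.get(self.stage, set()):
            raise ValueError(f"cannot move from {self.stage} to {next_stage}")
        self.stage = next_stage

# Example: a hypothetical submission moving through the workflow.
paper = DataPaper("Ocean salinity 2010-2020", "10.1234/example")
paper.advance(Stage.UNDER_OPEN_REVIEW)
paper.advance(Stage.UNDER_CURATION)
paper.advance(Stage.VERSION_OF_RECORD)
print(paper.stage)  # Stage.VERSION_OF_RECORD
```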

URL : Publish-Review-Curate Modelling for Data Paper and Dataset: A Collaborative Approach

DOI : https://doi.org/10.1002/leap.2024