An analysis of the effects of sharing research data, code, and preprints on citations

Authors : Giovanni Colavizza, Lauren Cadwallader, Marcel LaFlamme, Grégory Dozot, Stéphane Lecorney, Daniel Rappo, Iain Hrynaszkiewicz

Calls to make scientific research more open have gained traction with a range of societal stakeholders. Open Science practices include but are not limited to the early sharing of results via preprints and openly sharing outputs such as data and code to make research more reproducible and extensible. Existing evidence shows that adopting Open Science practices has effects in several domains.

In this study, we investigate whether adopting one or more Open Science practices leads to significantly higher citations for an associated publication, which is one form of academic impact. We use a novel dataset known as Open Science Indicators, produced by PLOS and DataSeer, which includes all PLOS publications from 2018 to 2023 as well as a comparison group sampled from the PMC Open Access Subset. In total, we analyze approximately 122,000 publications. We calculate publication and author-level citation indicators and use a broad set of control variables to isolate the effect of Open Science Indicators on received citations.

We show that Open Science practices are adopted to different degrees across scientific disciplines. We find that the early release of a publication as a preprint correlates with a significant positive citation advantage of about 20.2% on average. We also find that sharing data in an online repository correlates with a smaller yet still positive citation advantage of 4.3% on average.

However, we do not find a significant citation advantage for sharing code. Further research is needed on additional or alternative measures of impact beyond citations. Our results are likely to be of interest to researchers, as well as publishers, research funders, and policymakers.
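The percentage citation advantages reported above (20.2% for preprints, 4.3% for data sharing) are the kind of figures that typically come out of a log-linear citation model, where a coefficient on a binary Open Science indicator is converted into a percentage effect. The abstract does not state the exact model, so the following is only an illustrative sketch of that conversion; the coefficient value and the function name are assumptions, not taken from the study.

```python
import math

def citation_advantage(beta: float) -> float:
    """Convert a log-count regression coefficient on a binary indicator
    (e.g. 'released as preprint') into a percentage citation advantage.

    In a log-linear model, a coefficient beta multiplies expected
    citations by exp(beta), i.e. a (exp(beta) - 1) * 100 percent change.
    """
    return (math.exp(beta) - 1) * 100

# Illustration only: a hypothetical preprint coefficient of ~0.184
# corresponds to roughly the 20.2% advantage quoted in the abstract.
print(round(citation_advantage(0.184), 1))  # 20.2
```

The same conversion applies to the smaller data-sharing effect: a coefficient near 0.042 would yield the reported ~4.3% advantage.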

arXiv :

A survey of how biology researchers assess credibility when serving on grant and hiring committees

Authors : Iain Hrynaszkiewicz, Beruria Novich, James Harney, Veronique Kiermer

Researchers who serve on grant review and hiring committees have to make decisions about the intrinsic value of research in short periods of time, and research impact metrics such as the Journal Impact Factor (JIF) exert undue influence on these decisions. Initiatives such as the Coalition for Advancing Research Assessment (CoARA) and the Declaration on Research Assessment (DORA) emphasize responsible use of quantitative metrics and avoidance of journal-based impact metrics for research assessment. Further, our previous qualitative research suggested that assessing the credibility, or trustworthiness, of research is important to researchers not only when they seek to inform their own research but also in the context of research assessment committees.

To confirm our findings from previous interviews in quantitative terms, we surveyed 485 biology researchers who have served on committees for grant review or hiring and promotion decisions, to understand how they assess the credibility of research outputs in these contexts. We found that concepts like credibility, trustworthiness, quality, and impact lack consistent definitions and interpretations among researchers, confirming what we had observed in our interviews.

We also found that assessment of credibility is very important to most (81%) researchers serving on these committees, but fewer than half of respondents are satisfied with their ability to assess credibility. A substantial proportion of respondents (57%) report using journal reputation and the JIF to assess credibility – proxies that research assessment reformers consider inappropriate for this purpose because they do not reflect intrinsic characteristics of the research.

This gap between the importance of an assessment and satisfaction in the ability to conduct it was reflected in multiple aspects of credibility we tested, and it was greatest for researchers seeking to assess the integrity of research (such as identifying signs of fabrication, falsification, or plagiarism) and the suitability and completeness of research methods. Non-traditional research outputs associated with Open Science practices – shared research data, code, protocols, and preprints – are particularly hard for researchers to assess, despite the potential of Open Science practices to signal trustworthiness.

Our results suggest opportunities to develop better guidance and better signals to support the evaluation of research credibility and trustworthiness – and ultimately support research assessment reform, away from the use of inappropriate proxies for impact and towards assessing the intrinsic characteristics and values researchers see as important.


To preprint or not to preprint: A global researcher survey

Authors : Rong Ni, Ludo Waltman

Open science is receiving widespread attention globally, and preprinting offers an important way to implement open science practices in scholarly publishing. To develop a systematic understanding of researchers’ adoption of and attitudes toward preprinting, we conducted a survey of authors of research papers published in 2021 and early 2022. Our survey results show that the United States and Europe led the way in the adoption of preprinting.

Respondents from the United States and Europe reported a higher familiarity with and a stronger commitment to preprinting than their colleagues elsewhere in the world. The adoption of preprinting is much stronger in physics and astronomy, as well as mathematics and computer science, than in other research areas. Respondents identified free accessibility of preprints and acceleration of research communication as the most important benefits of preprinting.

The low reliability and credibility of preprints, the sharing of results before peer review, and premature media coverage are the most significant concerns about preprinting, emphasized in particular by respondents in the life and health sciences. According to respondents, the most crucial strategies to encourage preprinting are integrating preprinting into journal submission workflows and providing recognition for posting preprints.

URL : To preprint or not to preprint: A global researcher survey


Benefits of Citizen Science for Libraries

Authors : Dolores Mumelaš, Alisa Martek

Participating in collaborative scientific research through citizen science, a component of open science, holds significance for both citizen scientists and professional researchers. Yet, the advantages for those orchestrating citizen science initiatives are often overlooked. Organizers encompass a diverse range, including governmental entities, non-governmental organizations, corporations, universities, and institutions like libraries.

For libraries, citizen science holds importance by fostering heightened civic and research interests, promoting scientific publishing, and contributing to overall scientific progress. This paper aims to provide a comprehensive understanding of the specific ways in which citizen science can benefit libraries and how libraries can effectively utilize citizen science to achieve their goals.

The paper is based on a systematic review of peer-reviewed articles that discuss the direct benefits of citizen science on libraries. A list of the main benefits of citizen science for libraries has been compiled from the literature. Additionally, the reasons why it is crucial for libraries to communicate the benefits of citizen science for their operations have been highlighted, particularly in terms of encouraging other libraries to actively engage in citizen science projects.

URL : Benefits of Citizen Science for Libraries


Notebooks et science ouverte : FAIR mieux

Authors: Mariannig Le Béchec, Célya Gruson-Daniel, Clémence Lascombes, Émilien Schultz

Notebooks are now widely adopted in digital research practices. Despite their growing ubiquity, the characteristics, roles, and uses associated with notebooks have so far received little investigation from a science and technology studies (STS) perspective.

In this article, we offer a synthesis of empirical work on notebooks in order to identify the main existing results, whether these concern the classification of notebook types, established practices, or identified limitations and proposed improvements.

Building on this synthesis, which mainly highlights work in the field of data science rather than research practices in academic contexts, we discuss the role of notebooks as a vehicle and lever for the FAIR principles (Findable, Accessible, Interoperable, Reusable) associated with open science.


Analysis on open data as a foundation for data-driven research

Authors : Honami Numajiri, Takayuki Hayashi

Open Data, one of the key elements of Open Science, serves as a foundation for “data-driven research” and has been promoted in many countries. However, the current status of the use of publicly available Open Data in new styles of research, and the impact of such use, remains unclear.

Following a comparative analysis of coverage against the OpenAIRE Graph, we analyzed the Data Citation Index, a comprehensive collection of research datasets and repositories with citation information from articles. The results reveal that different countries and disciplines tend to show different trends in Open Data.

In recent years, the number of datasets in repositories where researchers publish their data has increased dramatically across disciplines, and researchers are publishing more data. Furthermore, in some disciplines data citation rates are not high, but the databases used are diverse.

URL : Analysis on open data as a foundation for data-driven research