How to build an Open Science Monitor based on publications? A French perspective

Authors : Laetitia Bracco, Eric Jeangirard, Anne L’Hôte, Laurent Romary

Many countries and institutions are striving to develop tools to monitor their open science policies. Since 2018, with the launch of its National Plan for Open Science, France has been progressively implementing a monitoring framework for its public policy, relying exclusively on reliable, open, and controlled data. Currently, this monitoring focuses on research outputs, particularly publications, as well as theses and clinical trials.

Publications serve as a basis for analyzing other dimensions, including research data, code, and software. The metadata associated with publications is therefore particularly valuable, but the methodology for leveraging it raises several challenges. Here, we briefly outline how we have used this metadata to construct the French Open Science Monitor.

URL : How to build an Open Science Monitor based on publications? A French perspective

arXiv : https://arxiv.org/abs/2501.02856

Rejected papers in academic publishing: Turning negatives into positives to maximize paper acceptance

Authors : Jaime A. Teixeira da Silva, Maryna Nazarovets

There are ample reasons why papers might get rejected by peer-reviewed journals, and the experience can be sobering, especially for authors with little prior experience of it. When a paper is rejected a number of times, that may signal that there are problems with the paper (e.g., weak methodology or lack of robust analyses), that it is insufficiently developed or poorly written, or that it is too topic-specific and needs to find an appropriate niche journal.

In the case of a single or multiple rejections, whenever there is feedback from a journal, as well as reasons for rejection, this provides a useful signal for improving the paper before it is resubmitted to another journal. This article examines literature related to the rejection of papers in academic journals, encompassing the opinions and experiences offered by authors, as well as advice suggested by editors, allowing readers and authors who experience rejections to reflect on the possible reasons that may have led to that outcome.

Many papers related to this topic were published as editorials or opinions, offering advice on how to improve aspects of a submitted paper in order to increase its chances of acceptance.

URL : Rejected papers in academic publishing: Turning negatives into positives to maximize paper acceptance

DOI : https://doi.org/10.1002/leap.1649


Recycling Research Without (Self-)Plagiarism: The Importance of Context and the Case of Conference Contributions

Authors : Gert Helgesson, Jonas Åkerman, Sara Belfrage

In this paper, we clarify the notions of plagiarism and self-plagiarism and show that a rather straightforward observation about these notions has important implications for the admissibility of recycling research outputs. The key point is that contextual variation must be taken into account in normative assessments of recycling research outputs, and we illustrate this with some examples.

In particular, we apply the analysis in order to dissolve a disagreement about the proper handling of submissions to conferences. Some researchers are comfortable with sending the same contribution to several conferences, while others find that unacceptable and a clear deviation from good research practice. We take a closer look at the arguments regarding whether it is acceptable or not to make the same conference contribution more than once, including the argument that submitting the same contribution more than once would amount to self-plagiarism.

We argue that contextual variation must be taken into account, in accordance with our previous analysis, and conclude that whether or not a duplication of a conference contribution deviates from good research practice depends on what significance is ascribed to it in the specific case. We conclude with some practical recommendations, emphasising, for example, the importance of being explicit and clear on this point, and encourage conference organisers to provide opportunities to specify relevant facts in the submission.

URL : Recycling Research Without (Self-)Plagiarism: The Importance of Context and the Case of Conference Contributions

DOI : https://doi.org/10.1002/leap.1653

Gaps between Open Science activities and actual recognition systems: Insights from an international survey

Authors : Florencia Grattarola, Hanna Shmagun, Christopher Erdmann, Anne Cambon-Thomsen, Mogens Thomsen, Jaesoo Kim, Laurence Mabile

There are global movements aiming to promote reform of the traditional research evaluation and reward systems. However, a comprehensive picture of existing best practices, and of efforts across institutions to integrate Open Science into these frameworks, is still lacking. The aim of this study was to identify perceptions and expectations of various research communities worldwide regarding how Open Science activities are (or should be) formally recognised and rewarded.

To achieve this, a global survey was conducted in the framework of the Research Data Alliance, recruiting 230 participants from five continents and 37 countries. Despite most participants reporting that their organisation had one form or another of formal Open Science policies, the majority indicated that their organisation lacks any initiative or tool that provides specific credits or rewards for Open Science activities. However, researchers from France, the United States, the Netherlands and Finland affirmed having such mechanisms in place.

The study found that, among various Open Science activities, Open or FAIR data management and sharing stood out as especially deserving of explicit recognition and credit. Open Science indicators in research evaluation and/or career progression processes emerged as the most preferred type of reward.

URL : Gaps between Open Science activities and actual recognition systems: Insights from an international survey

DOI : https://doi.org/10.1371/journal.pone.0315632

Can scholarly publishers change the world? The role of the SDGs within the publishing industry

Authors : Stephanie Dawson, Agata Morka, Charlie Rapple, Nikesh Gosalia, Ritu Dhand

The United Nations' Sustainable Development Goals (SDGs) aim to eradicate poverty and inequality, protect the planet, and ensure health, justice, and prosperity for all, emphasizing inclusivity. Within the realm of scholarly publishing, the panel discussion "Can scholarly publishers change the world? The role of the SDGs within the publishing industry", held at Academic Publishing in Europe 2024, highlighted the business advantages of aligning with the SDGs, made a plea to reshape the narrative beyond mere moral obligation, and sought to galvanize stakeholders to take action and promote engagement, offering a clear direction.

This paper expands on the panel discussion, which was moderated by Stephanie Dawson, CEO, ScienceOpen. Panellists were Agata Morka, Regional Director, Publishing Development, PLOS, Charlie Rapple, Chief Customer Officer and Co-founder, Kudos, Nikesh Gosalia, President Global Academic and Publisher Relations, Cactus Communications, and Ritu Dhand, Chief Scientific Officer, Springer Nature.

URL : Can scholarly publishers change the world? The role of the SDGs within the publishing industry

DOI : https://doi.org/10.3233/ISU-240017

Improving the reporting of research impact assessments: a systematic review of biomedical funder research impact assessments

Authors : Rachel Abudu, Kathryn Oliver, Annette Boaz

The field of research impact assessment (RIA) has seen remarkable growth over the past three decades. Increasing numbers of RIA frameworks have been developed and applied by research funders, and new technologies can capture some research impacts automatically. However, existing RIAs differ too much to support comparable conclusions about which methods, data or processes are best suited to assessing research impacts of different kinds, or about how funders should most efficiently implement RIAs.

To usher in the next era of RIA and mature the field, future RIA methodologies should become more transparent, standardized and easily implementable. Key to these efforts is an improved understanding of how to practically implement and report on RIA at the funder-level. Our aim is to address this gap through two major contributions.

First, we identify common items across existing best practice guidelines for RIA, creating a preliminary reporting checklist for standardized RIA reporting. Next, we systematically reviewed studies examining funders’ assessment of biomedical grant portfolios to examine how funders reported the results of their RIAs across the checklist, as well as the operational steps funders took to perform their RIA and the variation in how funders implemented the same RIA frameworks.

We compare evidence on current RIA practices with the reporting checklist to identify good practice for RIA reporting, gaps in the evidence base for future research, and recommendations for future effective RIA.

URL : Improving the reporting of research impact assessments: a systematic review of biomedical funder research impact assessments

DOI : https://doi.org/10.1093/reseval/rvae060