Exploring the merits of research performance measures that comply with the San Francisco Declaration on Research Assessment and strategies to overcome barriers of adoption: qualitative interviews with administrators and researchers

Authors : Himani Boury, Mathieu Albert, Robert H. C. Chen, James C. L. Chow, Ralph DaCosta, Michael M. Hoffman, Behrang Keshavarz, Pia Kontos, Mary Pat McAndrews, Stephanie Protze, Anna R. Gagliardi

Background

In prior research, we identified and prioritized ten measures to assess research performance that comply with the San Francisco Declaration on Research Assessment, a set of principles adopted worldwide that discourages metrics-based assessment.

Given the shift away from assessment based on Journal Impact Factor, we explored potential barriers to implementing and adopting the prioritized measures.

Methods

We identified administrators and researchers across six research institutes, conducted telephone interviews with consenting participants, and used qualitative description and inductive content analysis to derive themes.

Results

We interviewed 18 participants: 6 administrators (research institute business managers and directors) and 12 researchers (7 on appointment committees) who varied by career stage (2 early, 5 mid, 5 late). Participants appreciated that the measures were similar to those currently in use, comprehensive, relevant across disciplines, and generated using a rigorous process.

They also said the reporting template was easy to understand and use. In contrast, a few administrators thought the measures were not relevant across disciplines. A few participants said it would be time-consuming and difficult to prepare narratives when reporting the measures, and several thought that it would be difficult to objectively evaluate researchers from a different discipline without considerable effort to read their work.

Strategies viewed as necessary to overcome barriers and support implementation of the measures included high-level endorsement of the measures, an official launch accompanied by a multi-pronged communication strategy, training for both researchers and evaluators, administrative support or automated reporting for researchers, guidance for evaluators, and sharing of approaches across research institutes.

Conclusions

While participants identified many strengths of the measures, they also identified a few limitations and offered corresponding strategies to address the barriers that we will apply at our organization. Ongoing work is needed to develop a framework to help evaluators translate the measures into an overall assessment.

Given the paucity of prior research identifying research assessment measures and strategies to support their adoption, this research may be of interest to other organizations that assess the quality and impact of research.

URL : Exploring the merits of research performance measures that comply with the San Francisco Declaration on Research Assessment and strategies to overcome barriers of adoption: qualitative interviews with administrators and researchers

DOI : https://doi.org/10.1186/s12961-023-01001-w

Fast, Furious and Dubious? MDPI and the Depth of Peer Review Reports

Authors : Abdelghani Maddi, Chérifa Boukacem-Zeghmouri

Peer review is a central component of scholarly communication, as it brings trust and quality control to scientific knowledge. One of its goals is to improve the quality of manuscripts and prevent the publication of work resulting from dubious practices or misconduct.

In a context marked by the massification of scientific production, the reign of the publish-or-perish rule, and the acceleration of research, journals are giving reviewers less and less time to produce their reports. It is therefore crucial to study whether these time constraints have an impact on the length of reviewer reports.

Here, we address the example of MDPI, a Swiss open access publisher depicted as a grey publisher and well known for its short deadlines, by analyzing the depth of its reviewer reports and comparing them with those of other publishers. For this, we used Publons data covering 61,197 distinct publications reviewed by 86,628 reviewers.

Our results show that, despite the short deadlines, reviewers who agree to review a manuscript take their responsibility seriously and do their job in the same way regardless of the publisher, writing on average the same number of words.
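To make the report-length comparison concrete, here is a minimal Python sketch of this kind of analysis. The file name, column names, and the choice of Welch's t-test are illustrative assumptions, not the authors' actual pipeline:

```python
# Hypothetical sketch: compare mean review-report word counts for MDPI
# versus other publishers. Input schema and test choice are assumptions.
import pandas as pd
from scipy import stats

# Assumed schema: one row per review report, with the reviewing
# publisher and the full report text.
reports = pd.read_csv("publons_reports.csv")  # columns: publisher, report_text

# Proxy report "depth" by word count, as in the paper's length analysis.
reports["word_count"] = reports["report_text"].str.split().str.len()

mdpi = reports.loc[reports["publisher"] == "MDPI", "word_count"].dropna()
others = reports.loc[reports["publisher"] != "MDPI", "word_count"].dropna()

print(f"MDPI mean:   {mdpi.mean():.1f} words over {len(mdpi):,} reports")
print(f"Others mean: {others.mean():.1f} words over {len(others):,} reports")

# Welch's t-test (unequal variances) for a difference in mean length;
# a non-significant result is consistent with the paper's finding.
t_stat, p_value = stats.ttest_ind(mdpi, others, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
```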

Our results suggest that, even if MDPI’s editorial practices may be questionable, as long as peer review is carried out by researchers themselves, publications are evaluated similarly.

URL : Fast, Furious and Dubious? MDPI and the Depth of Peer Review Reports

DOI : https://doi.org/10.21203/rs.3.rs-3027724/v1

CORE: A Global Aggregation Service for Open Access Papers

Authors : Petr Knoth, Drahomira Herrmannova, Matteo Cancellieri, Lucas Anastasiou, Nancy Pontika, Samuel Pearce, Bikash Gyawali, David Pride

This paper introduces CORE, a widely used scholarly service, which provides access to the world’s largest collection of open access research publications, acquired from a global network of repositories and journals.

CORE was created with the goal of enabling text and data mining of scientific literature and thus supporting scientific discovery, but it is now used in a wide range of use cases within higher education, industry, and not-for-profit organisations, as well as by the general public.

Through the provided services, CORE powers innovative use cases, such as plagiarism detection, in market-leading third-party organisations. CORE has played a pivotal role in the global move towards universal open access by making scientific knowledge more easily and freely discoverable.

In this paper, we describe CORE’s continuously growing dataset and the motivation behind its creation, present the challenges associated with systematically gathering research papers from thousands of data providers worldwide at scale, and introduce the novel solutions that were developed to overcome these challenges.

The paper then provides an in-depth discussion of the services and tools built on top of the aggregated data and finally examines several use cases that have leveraged the CORE dataset and services.
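As an illustration of the services built on the aggregated data, here is a minimal sketch of querying CORE's public v3 search API from Python. The query string, result fields, and API key placeholder are assumptions for the example; CORE's API documentation is the authoritative reference:

```python
# Minimal sketch of searching CORE's aggregated open access collection
# via its public v3 API. The API key and query are placeholders.
import requests

API_KEY = "YOUR_CORE_API_KEY"  # free registration at core.ac.uk
URL = "https://api.core.ac.uk/v3/search/works"

response = requests.get(
    URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    params={"q": "text and data mining", "limit": 5},
)
response.raise_for_status()

# Each result describes one aggregated publication; field names here
# (title, downloadUrl) follow the v3 works schema as assumed.
for work in response.json().get("results", []):
    print(work.get("title"), "->", work.get("downloadUrl"))
```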

URL : CORE: A Global Aggregation Service for Open Access Papers

DOI : https://doi.org/10.1038/s41597-023-02208-w

The Platformisation of Scholarly Information and How to Fight It

Author : Lai Ma

The commercial control of academic publishing and research infrastructure by a few oligopolistic companies has crippled the development of the open access movement and interfered with the ethical principles of information access and privacy.

In recent years, vertical integration of publishers and other service providers throughout the research cycle has led to platformisation, characterized by datafication and commodification similar to practices on social media platforms. Scholarly publications are treated as user-generated content for data tracking and surveillance, resulting in profitable data products and services for research assessment, benchmarking and reporting.

Meanwhile, bibliodiversity and equal open access are denied by the dominant gold open access model, and the privacy of researchers is being compromised by spyware embedded in research infrastructure.

After a brief overview of the market for academic journals and research assessment, and of its implications for bibliodiversity, information access, and privacy, this article proposes four actions to fight the platformisation of scholarly information: (1) educate researchers about commercial publishers and APCs; (2) allocate library budgets to support scholar-led and library publishing; (3) engage in the development of public research infrastructures and copyright reform; and (4) advocate for research assessment reforms.

URL : The Platformisation of Scholarly Information and How to Fight It

DOI : https://doi.org/10.53377/lq.13561

Roles and Responsibilities for Peer Reviewers of International Journals

Author : Carol Nash

There is a noticeable paucity of recently published research on the roles and responsibilities of peer reviewers for international journals. Concurrently, the pool of these peer reviewers is shrinking. Using a narrative research method developed by the author, this study examined these roles and responsibilities through the author’s experience reviewing for five publishing houses from July to December 2022, compared with two recent studies on peer review and with the guidelines of the five publishing houses.

The author, the two studies, and the five publishing houses were found to disagree about what should matter most in peer review. Furthermore, efforts to increase the pool of peer reviewers are identified as ineffective because they focus on the reviewer qua reviewer rather than on the reviewer’s primary role as researcher.

To improve consistency, authors have regularly called for peer review training. Yet this advice fails to recognize the efforts journals already make to render their particular requirements for peer review clear, comprehensive and readily accessible.

Consequently, rather than training and rewarding peer reviewers as such, journals are advised to make peer review a requirement for research publication and to make their guidelines required reading for peer reviewers.

URL : Roles and Responsibilities for Peer Reviewers of International Journals

DOI : https://doi.org/10.3390/publications11020032

Academia should stop using Beall’s lists and review their use in previous studies

Authors : Jaime A. Teixeira da Silva, Graham Kendall

Academics (should) strive to submit to journals that are academically sound and scholarly. To achieve this, they could either submit to journals that appear exclusively on safelists (occasionally referred to as whitelists, although this term tends to be avoided) or avoid submitting to journals on watchlists (occasionally referred to as blacklists, a term that is likewise avoided).

The most well-known of these lists was curated by Jeffrey Beall. Beall’s Lists (there are two, one for stand-alone journals and one for publishers) were taken offline by Beall himself in January 2017.

Prior to 2017, Beall’s Lists were widely cited and utilized, including to make quantitative claims about scholarly publishing. Even after Beall’s Lists became obsolete (they have not been maintained for the past six years), they continue to be widely cited and used. This paper argues that the use of Beall’s Lists, pre- and post-2017, may constitute a methodological error and, even if papers carry a disclaimer or limitations section noting this weakness, their conclusions cannot always be relied upon.

This paper also argues for the need to conduct a detailed post-publication assessment of reports in the literature that used Beall’s Lists to validate their findings and conclusions, assuming that it becomes accepted that Beall’s Lists are not a reliable resource for scientific investigation.

Finally, this paper contends that any papers that have identified methodological errors should be corrected. Several lists that were cloned from Beall’s Lists have also emerged and are also being cited. These should also be included in any post-publication investigation that is conducted.

URL : Academia should stop using Beall’s lists and review their use in previous studies

DOI : https://doi.org/10.47316/cajmhe.2023.4.1.04

The rise of preprints in earth sciences

Authors : Olivier Pourret, Daniel Enrique Ibarra

The pace at which scientific information spreads has accelerated in recent years. In this context, many scientific disciplines are beginning to recognize the value and possibility of sharing open access (OA) manuscripts online in preprint form.

Preprints are academic papers made publicly available before they have been evaluated by peers. They have existed in research at least since the 1960s, well before the creation of arXiv for physics and mathematics in 1991. Since then, preprint platforms (publisher- or community-driven, for profit or not for profit, and based on proprietary or free and open source software) have gained popularity in many fields (for example, bioRxiv for the biological sciences).

Today, many platforms exist, either discipline-specific or cross-domain, with exponential growth over the past ten years. Preprints as a whole still make up a very small portion of scholarly publishing, but a large group of early adopters is testing out these value-adding tools across a much wider range of disciplines than in the past.

In this opinion article, we provide perspective on the three main options available for earth scientists, namely EarthArXiv, ESSOAr/ESS Open Archive and EGUsphere.