Enhancing peer review efficiency: A mixed-methods analysis of artificial intelligence-assisted reviewer selection across academic disciplines

Author : Shai Farber

This mixed-methods study evaluates the efficacy of artificial intelligence (AI)-assisted reviewer selection in academic publishing across diverse disciplines. Twenty journal editors assessed AI-generated reviewer recommendations for a manuscript. The AI system achieved a 42% overlap with editors’ selections and demonstrated a significant improvement in time efficiency, reducing selection time by 73%.

Editors found that 37% of AI-suggested reviewers who were not part of their initial selection were indeed suitable. The system’s performance varied across disciplines, with higher accuracy in STEM fields (Cohen’s d = 0.68). Qualitative feedback revealed an appreciation for the AI’s ability to identify lesser-known experts but concerns about its grasp of interdisciplinary work. Ethical considerations, including potential algorithmic bias and privacy issues, were highlighted.

The study concludes that while AI shows promise in enhancing reviewer selection efficiency and broadening the reviewer pool, it requires human oversight to address limitations in understanding nuanced disciplinary contexts. Future research should focus on larger-scale longitudinal studies and developing ethical frameworks for AI integration in peer-review processes.
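
To make the reported metrics concrete, the sketch below shows one conventional way to compute a reviewer-overlap percentage and a Cohen's d effect size. The reviewer lists and scores are invented for illustration; this is not the study's code or data.

```python
# Illustrative sketch only: hypothetical reviewer lists and scores,
# not data or code from the study.
from statistics import mean, stdev

def overlap_percentage(ai_picks, editor_picks):
    """Share of the editor's picks that also appear among the AI suggestions."""
    shared = set(ai_picks) & set(editor_picks)
    return 100 * len(shared) / len(set(editor_picks))

def cohens_d(group_a, group_b):
    """Cohen's d using a pooled (sample) standard deviation."""
    na, nb = len(group_a), len(group_b)
    sa, sb = stdev(group_a), stdev(group_b)
    pooled = (((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled

# Hypothetical AI-suggested vs. editor-selected reviewer short-lists.
ai_picks = ["reviewer_1", "reviewer_2", "reviewer_3", "reviewer_4", "reviewer_5"]
editor_picks = ["reviewer_2", "reviewer_4", "reviewer_6", "reviewer_7", "reviewer_8"]
print(f"Overlap: {overlap_percentage(ai_picks, editor_picks):.0f}%")  # 40%

# Hypothetical suitability ratings for STEM vs. non-STEM recommendations.
stem_scores = [0.82, 0.78, 0.85, 0.80, 0.76]
non_stem_scores = [0.70, 0.74, 0.68, 0.72, 0.66]
print(f"Cohen's d: {cohens_d(stem_scores, non_stem_scores):.2f}")
```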

DOI : https://doi.org/10.1002/leap.1638

The impact of generative AI on the scholarly communications of early career researchers: An international, multi-disciplinary study

Authors : David Nicholas, Marzena Swigon, David Clark, Abdullah Abrizah, Jorge Revez, Eti Herman, Blanca Rodríguez Bravo, Jie Xu, Anthony Watkinson

The Harbingers study of early career researchers (ECRs), their work life and scholarly communications, began by studying generational (Millennial) change (c. 2016), then moved to pandemic change (c. 2020) and is now investigating another potential agent of change: artificial intelligence (2024–). We report here on a substantial scoping pilot study that looks at the impact of AI on the scholarly communications of international ECRs and extends this work to the arts and humanities.

It aims to fill the knowledge gap concerning ECRs, whose millennial mindset may render them especially open to change and who, as the research workhorses they are, are very much on the front line. The data were collected via in-depth interviews in China, Malaysia, Poland, Portugal, Spain and (selectively) the United Kingdom/United States. The data show ECRs to be thinking about, probing and, in some cases, experimenting with AI.

There was a general acceptance that AI will be responsible for a growth in low-quality scientific papers, which could lead to a decline in the quality of research. Scholarly integrity and ethics were a major concern, with issues of authenticity, plagiarism, copyright and poor citation practices raised. The most widespread belief was that AI would prove to be a transformative force and would exacerbate existing scholarly disparities and inequalities.

DOI : https://doi.org/10.1002/leap.1628

How to make sense of generative AI as a science communication researcher? A conceptual framework in the context of critical engagement with scientific information

Authors :

A guiding theory for a continuous and cohesive discussion regarding generative artificial intelligence (GenAI) in science communication is still unavailable. Here, we propose a framework for characterizing, evaluating, and comparing AI-based information technologies in the context of critical engagement with scientific information in online environments.

Hierarchically constructed, the framework observes technological properties, user experience, content presentation, and the context in which the technology is being used. Understandable and applicable for non-experts in AI systems, the framework affords a holistic yet practical assessment of various AI-based information technologies, providing both a reflection aid and a conceptual baseline for scholarly references.

DOI : https://doi.org/10.22323/2.23060205

Open Science at the Generative AI Turn: An Exploratory Analysis of Challenges and Opportunities

Authors : Mohammad Hosseini, Serge P.J.M. Horbach, Kristi L. Holmes, Tony Ross-Hellauer

Technology influences Open Science (OS) practices, because conducting science in transparent, accessible, and participatory ways requires tools and platforms for collaborative research and for sharing results. Due to this direct relationship, the characteristics of the technologies employed directly impact OS objectives. Generative Artificial Intelligence (GenAI) models are increasingly used by researchers for tasks such as text refining, code generation and editing, literature review, and data curation and analysis.

GenAI promises substantial efficiency gains but is currently fraught with limitations that could negatively impact core OS values such as fairness, transparency and integrity, and harm various social actors. In this paper, we explore possible positive and negative impacts of GenAI on OS.

We use the taxonomy within the UNESCO Recommendation on Open Science to systematically explore the intersection of GenAI and OS. We conclude that using GenAI could advance key OS objectives by further broadening meaningful access to knowledge, enabling efficient use of infrastructure, improving engagement of societal actors, and enhancing dialogue among knowledge systems.

However, due to GenAI limitations, it could also compromise the integrity, equity, reproducibility, and reliability of research, while also having potential implications for the political economy of research and its infrastructure. Hence, sufficient checks, validation and critical assessments are essential when incorporating GenAI into research workflows.

DOI : https://doi.org/10.31235/osf.io/zns7g

Data Science and AI in Context: Summary and Insights

Author : Alfred Spector

This paper explores how to deploy data science and data-driven AI, focusing on the broad collection of considerations beyond those of statistics and machine learning. Building on an analysis rubric introduced in a recent textbook by the author and three others, this paper summarizes some of the book’s key points and adds reflections on AI’s extraordinary growth and societal effects. The paper also discusses how to balance inevitable trade-offs and provides further thoughts on societal implications.

DOI : https://doi.org/10.1162/99608f92.cdebd845

Academic Integrity and Artificial Intelligence in Higher Education (HE) Contexts: A Rapid Scoping Review

Authors : Beatriz Antonieta Moya, Sarah Elaine Eaton, Helen Pethrick, K. Alix Hayden, Robert Brennan, Jason Wiens, Brenda McDermott

Artificial Intelligence (AI) developments challenge higher education institutions’ teaching, learning, assessment, and research practices. To contribute timely and evidence-based recommendations for upholding academic integrity, we conducted a rapid scoping review focusing on what is known about academic integrity and AI in higher education. We followed the updated Reviewer Manual for Scoping Reviews from the Joanna Briggs Institute (JBI) and the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) reporting standards.

Five databases were searched, and the eligibility criteria included higher education stakeholders of any age and gender engaged with AI in the context of academic integrity, with sources published from 2007 through November 2022 and available in English. The search retrieved 2223 records, of which 14 publications, spanning mixed-methods, qualitative, quantitative, randomized controlled trial, and text and opinion studies, met the inclusion criteria. The results showed bounded and unbounded ethical implications of AI.

Perspectives included: AI for cheating; AI as legitimate support; an equity, diversity, and inclusion lens into AI; and emerging recommendations to tackle AI implications in higher education. The evidence from the sources provides guidance that can inform educational stakeholders in decision-making processes for AI integration, in the analysis of misconduct cases involving AI, and in the exploration of AI as legitimate assistance. Likewise, this rapid scoping review signals key questions for future research, which we explore in our discussion.

DOI : https://doi.org/10.55016/ojs/cpai.v7i3.78123

PubTator 3.0: an AI-powered literature resource for unlocking biomedical knowledge

Authors : Chih-Hsuan Wei, Alexis Allot, Po-Ting Lai, Robert Leaman, Shubo Tian, Ling Luo, Qiao Jin, Zhizheng Wang, Qingyu Chen, Zhiyong Lu

PubTator 3.0 (https://www.ncbi.nlm.nih.gov/research/pubtator3/) is a biomedical literature resource using state-of-the-art AI techniques to offer semantic and relation searches for key concepts like proteins, genetic variants, diseases and chemicals. It currently provides over one billion entity and relation annotations across approximately 36 million PubMed abstracts and 6 million full-text articles from the PMC open access subset, updated weekly.

PubTator 3.0’s online interface and API utilize these precomputed entity relations and synonyms to provide advanced search capabilities and enable large-scale analyses, streamlining many complex information needs. We showcase the retrieval quality of PubTator 3.0 using a series of entity pair queries, demonstrating that PubTator 3.0 retrieves a greater number of articles than either PubMed or Google Scholar, with higher precision in the top 20 results.

We further show that integrating ChatGPT (GPT-4) with PubTator APIs dramatically improves the factuality and verifiability of its responses. In summary, PubTator 3.0 offers a comprehensive set of features and tools that allow researchers to navigate the ever-expanding wealth of biomedical literature, expediting research and unlocking valuable insights for scientific discovery.
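
As an illustration of the programmatic access described above, here is a minimal Python sketch of querying a PubTator-style web API with the requests library. The base URL, endpoint paths and parameter names are assumptions for illustration only and should be checked against the official PubTator 3.0 API documentation before use.

```python
# Minimal sketch of programmatic access to a PubTator-style API.
# NOTE: the base URL, endpoint paths and parameters below are assumptions
# for illustration; consult the official PubTator 3.0 documentation
# (https://www.ncbi.nlm.nih.gov/research/pubtator3/) for the actual API.
import requests

BASE = "https://www.ncbi.nlm.nih.gov/research/pubtator3-api"  # assumed base URL

def search_articles(query: str, page: int = 1) -> dict:
    """Run a free-text or entity-pair search and return the JSON payload."""
    resp = requests.get(f"{BASE}/search/", params={"text": query, "page": page}, timeout=30)
    resp.raise_for_status()
    return resp.json()

def export_annotations(pmids: list[str]) -> str:
    """Fetch precomputed entity/relation annotations for a list of PMIDs (BioC JSON)."""
    resp = requests.get(
        f"{BASE}/publications/export/biocjson",
        params={"pmids": ",".join(pmids)},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    # Hypothetical entity-pair query of the kind evaluated in the paper.
    hits = search_articles("BRCA1 AND breast cancer")
    print(hits.get("count", "no 'count' field in response"))
```

Annotations retrieved this way could then be supplied to a large language model as grounding context, in the spirit of the GPT-4 integration the authors describe.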

DOI : https://doi.org/10.1093/nar/gkae235