Authors : Zhongshi Wang, Mengyue Gong
Rapid advances in artificial intelligence (AI) have substantially impacted the field of academic publishing. This study examines AI integration in peer review by analysing policies from 439 high- and 363 middle-impact-factor (IF) journals across disciplines. Using grounded theory, we identify patterns in AI policy adoption.
Results show that 83% of high-IF journals have AI guidelines, with stringency varying across disciplines, compared with only 75% of middle-IF journals. Science, technology, and medicine (STM) disciplines exhibit stricter regulations, while the humanities and social sciences adopt more lenient approaches.
Key ethical concerns centre on confidentiality risks, accountability gaps, and AI’s inability to replicate critical human judgement. Publisher policies emphasise transparency, human oversight, and restricting AI to auxiliary tasks only, such as grammar checking or reviewer identification.
Disciplinary differences highlight the need for tailored guidelines that balance efficiency gains with research integrity. This study proposes collaborative frameworks for responsible AI integration, focusing on accountability, transparency, and interdisciplinary policy development to address peer review challenges.
Title : A Cross-Disciplinary Analysis of AI Policies in Academic Peer Review