Navigating the ethical landscape of scholarly publishing: a comparative evaluation of Gemini and DeepSeek LLMs in addressing authorship and contributorship disputes

Authors: Kannan Sridharan, Sivarama Krishnan

Background:

The rising complexity of publication ethics, particularly authorship disputes, necessitates exploring Large Language Models (LLMs) as potential evaluative tools. This study compares the performance of Google Gemini 2.5 Flash and DeepSeek-V3.2 against expert Committee on Publication Ethics (COPE) forum responses.

Methods:

A cross-sectional analysis including 12 COPE authorship and contributorship cases was conducted using three prompting strategies: Minimal, Deterministic, and Stochastic. Responses were scored across seven domains on a 5-point Likert scale (1 = poor, 5 = excellent) by independent raters.

Results:

Both LLMs achieved perfect scores (5 ± 0) for Actionability of Recommendations and high scores for Safety and Avoidance of Hallucination (4.88 ± 0.33). In the Consistency with COPE Principles domain, DeepSeek scored slightly higher than Gemini (4.45 ± 1.0 vs. 4.12 ± 1.29), whereas Gemini scored higher on Overall Appropriateness (4.03 ± 0.98 vs. 3.82 ± 1.29); neither difference was statistically significant. Both models struggled most with Identification of Ethical Issues (Gemini: 3.91 ± 1.33; DeepSeek: 3.82 ± 1.29). Under Minimal prompts, Gemini’s ethical-issue identification was lower (3.55 ± 1.44) than under Deterministic/Stochastic prompts (4.09 ± 1.3). Qualitatively, Gemini recorded an 8% major disagreement rate with COPE, while DeepSeek had a 16% combined (minor and major) disagreement rate. Mean similarity scores to COPE forum experts were approximately 4 for both models. Both models missed specific legal/copyright nuances but offered unique “value-add” strategies, such as author disassociation statements and editorial de-escalation training, that were absent from the original COPE forum advice.

Conclusion:

The LLMs demonstrated a high degree of alignment with COPE experts’ ethical reasoning. Although they exhibit a “legal blind spot,” their ability to provide clear, actionable guidance, optimized through structured prompting, makes them valuable supplementary tools for journal editors.

DOI: https://doi.org/10.3389/frma.2026.1781697