Open for Debate: Situating Open Research for the Humanities in a Neoliberal Setting

Author : Beatriz Barrocas Ferreira

Open research has been widely promoted as a means of democratising knowledge, yet its uptake in the humanities has remained limited and frequently marked by ambivalence. In the context of growing institutional investment in open research, this article interrogates what openness entails for the humanities within a research setting increasingly shaped by neoliberal rationalities.

While often framed as a democratising force, the implementation of open research policies seems to have largely aligned with market-oriented imperatives, emphasising transparency, efficiency, and economic return.

The article argues that the friction between open research and the humanities arises not from an aversion to openness per se, but from the instrumentalisation of open research and its imposition as a universalising, science-centric framework that fails to accommodate the pluralistic dimensions of humanistic research. Rather than dismissing openness, the article calls for a reimagining of open research grounded in pluralism, situated ethics, and disciplinary specificity.

DOI : https://doi.org/10.3998/jep.7850

Tensions et zones d’ombre autour de la science ouverte en SHS en France

Author : Ionna Faïta

At a time when open science is establishing itself as a structuring framework for research policy, this critical literature review explores the debates surrounding its appropriation in the humanities and social sciences (SHS) in France. It draws on a heterogeneous, non-exhaustive corpus of publications of varied status, published between 2010 and 2025, assembled through monitoring and iterative bibliographic searching as part of doctoral research.

The aim is to inform reflection on the reception of open science policies in the humanities and social sciences, between visible discourses (in the sense of published ones) and concrete practices. We propose a critical articulation of the scholarly literature devoted to open science, highlighting the tensions that run through its implementation and the trade-offs it involves.

From a polymorphous corpus (research articles, opinion pieces, reports, conference papers), we organize the review around six objects: open access, openness, open science(s), the distinct shifts in editorial circuits for the scholarly book and the scholarly journal, research data in the humanities and social sciences, and institutionalization. This approach aims to illuminate how these discussions circulate, between disciplinary anchorings and national specificity; in this way we hope to open a dialogue with the international literature.

DOI : https://doi.org/10.3998/jep.7854

Mobilizing Knowledge in the Humanities and Social Sciences: Exploring Competing Articulations of Openness in Policy and Practice

Author : Corina MacDonald

Knowledge mobilization (KMb) is a policy discourse and framework used by major Canadian research funding bodies to promote and monitor the efficiency of knowledge transfer between the university and society. Since 2009, most humanities and social science (HSS) researchers applying for federal funding must complete a KMb module that describes their intended non-academic collaborators and audiences, planned outreach activities, and metrics to gauge their success.

The ideals of public engagement set out in KMb policy are worthy ones for scholars to strive towards. The framework can also provide legitimation for a diverse range of research practices, relationships, and outputs. Applicants must think about sharing their work throughout the research process rather than simply at its end. This introduces a more expansive understanding of the relations of knowledge producers and their publics than is found in Canadian open access policies and mandates.

Many practices commonly understood as open research, such as data sharing, diamond open access publishing, or sharing via blogs or podcasts, would be considered knowledge mobilization activities, as would practices of community-engaged research or knowledge co-production. KMb policy thus governs much of the making public of humanities research in Canada; however, it embodies conflicting ideas about the value of shared knowledge. Its emphasis on knowledge as transferable imposes temporal, material, and cognitive restrictions on scholarship.

Critics of KMb dismiss it as performative and a tool of institutional governance or argue that it quantifies research as a return on investment. The critiques and possibilities of knowledge mobilization policy offer insight into wider contemporary struggles over the meaning of openness for HSS research. This article explores its impact on Canadian HSS scholars in relation to critical debates about changing relations of knowledge, labor, and value in humanities scholarship.

DOI : https://doi.org/10.3998/jep.7849

What Does Openness Mean for the Humanities? Redefining Ethical and Reflexive Practices in Open Research

Author : Adeola Eze

Notions of openness in research have largely been shaped by scientific principles of transparency, efficiency, and replicability, operationalized through standardized workflows, interoperable infrastructures, and measurable impact. Endorsed by funders and policy frameworks, this model often fits poorly with humanities and social science epistemologies in which knowledge is interpretive, historically situated, and ethically entangled with context.

This article critiques policy-led definitions of openness by tracing how open access and open science have been implemented through compliance regimes, metrics, and author-facing payment models, with uneven consequences across regions, languages, and institutions. Rather than rejecting open research, the article reinterprets it through a humanities lens.

It develops a theory of interpretive openness through Umberto Eco’s concept of the open work and extends it through three historical case studies—the cento, scholastic glossing, and Derrida’s margins—which show how form-bound reuse, annotation, and participatory reading have long operated as infrastructures of public meaning-making.

The article then connects these genealogies to contemporary digital publishing and editorial infrastructures, including preprints, open peer review, and web annotation, and argues for open research designs that value interpretive labor, visible process, and community-accountable infrastructures.

DOI : https://doi.org/10.3998/jep.7873

Navigating the ethical landscape of scholarly publishing: a comparative evaluation of Gemini and DeepSeek LLMs in addressing authorship and contributorship disputes

Authors : Kannan Sridharan, Sivarama Krishnan

Background:

The rising complexity of publication ethics, particularly authorship disputes, necessitates exploring Large Language Models (LLMs) as potential evaluative tools. This study compares the performance of Google Gemini 2.5 Flash and DeepSeek-V3.2 against expert Committee on Publication Ethics (COPE) forum responses.

Methods:

A cross-sectional analysis of 12 COPE authorship and contributorship cases was conducted using three prompting strategies: Minimal, Deterministic, and Stochastic. Responses were scored across seven domains on a 5-point Likert scale (1 = poor, 5 = excellent) by independent raters.

Results:

Both LLMs achieved perfect scores (5 ± 0) in Actionability of Recommendations and high marks in Safety and Avoidance of Hallucination (4.88 ± 0.33). In the Consistency with COPE Principles domain, DeepSeek performed slightly better than Gemini (4.45 ± 1.0 vs. 4.12 ± 1.29), while Gemini showed better Overall Appropriateness (4.03 ± 0.98 vs. 3.82 ± 1.29), though neither difference was statistically significant. Both models struggled most with Identification of Ethical Issues (Gemini: 3.91 ± 1.33; DeepSeek: 3.82 ± 1.29). Under Minimal prompts, Gemini's ethical identification was lower (3.55 ± 1.44) than under Deterministic/Stochastic prompts (4.09 ± 1.3). Qualitatively, Gemini recorded an 8% major disagreement rate with COPE, while DeepSeek had a 16% combined (minor and major) disagreement rate. Mean similarity scores to COPE forum experts were approximately 4 for both models. Both models missed specific legal/copyright nuances but provided unique "value-add" strategies, such as author disassociation statements and editorial de-escalation training, not present in the original COPE forum advice.
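As a rough illustration of the kind of comparison the abstract reports (per-model mean ± SD on 5-point Likert ratings, with a significance check), the sketch below computes those summary statistics and a Welch t-statistic. The ratings here are invented placeholders, not the study's data, and the study's own analysis may have used a different test.

```python
import math
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t-statistic for two independent samples (unequal variances)."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
    se = math.sqrt(var_a / len(a) + var_b / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical 5-point Likert ratings for one domain across 12 cases
# (placeholder values, not the study's data).
gemini = [5, 4, 4, 3, 5, 4, 2, 5, 4, 4, 5, 4]
deepseek = [5, 5, 4, 4, 5, 3, 5, 4, 5, 4, 5, 5]

print(f"Gemini:   {mean(gemini):.2f} +/- {stdev(gemini):.2f}")
print(f"DeepSeek: {mean(deepseek):.2f} +/- {stdev(deepseek):.2f}")
print(f"Welch t:  {welch_t(gemini, deepseek):.2f}")
```

With samples this small and ordinal, a non-parametric test (e.g. Mann-Whitney) would be a defensible alternative; the point is only to show the shape of the mean ± SD comparison.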

Conclusion:

LLMs demonstrated a high degree of alignment with COPE expert ethical reasoning. While they possess a "legal blind spot," their ability to provide actionable and clear guidance, optimized through structured prompting, makes them valuable supplementary tools for journal editors.

DOI : https://doi.org/10.3389/frma.2026.1781697

Library Publishing in Practice: A Case Study in Open Course Publications

Authors : Ioana Liuta, Jennifer Zerkee

Introduction: Open course publications provide students with real-world experience of the scholarly publishing process, engaging students as information creators rather than consumers. Open course publications, an example of open pedagogy in action, can be journals or monographs created as an assignment in a credit-bearing course. Supporting open assignments is one of the most impactful activities undertaken by Digital Publishing units in academic libraries, educating the next generation of scholars about the value of open access. This article describes Simon Fraser University Library's approach to supporting in-class publication projects, focusing on in-class open monographs.

Description of the service: The Digital Publishing Librarian and Copyright Specialist collaborate with an instructor to plan support for their course publication. This includes working with the instructor to plan the project; providing an in-class workshop on key scholarly publishing topics, including an introduction to open access and Creative Commons licences; ongoing support as needed through the semester; and production and publication of the finalized monograph.

Next steps: The Library is currently addressing long-term sustainability needs for these publications. The authors are considering further opportunities for outreach to instructors beyond the humanities and social sciences, as well as potential connections to undergraduate research activities, while recognizing the capacity required to provide and expand this service.

DOI : https://doi.org/10.31274/jlsc.21364

Generative AI can and should accelerate research evaluation reform to better recognize ‘distinctly human contributions’

Authors :  Mohammad Hosseini, Brian D Earp, Sebastian Porsdam Mann, Kristi Holmes

As generative artificial intelligence (GenAI) revolutionizes how research is conducted, it also challenges traditional methods of scholarly evaluation. Productivity metrics such as publication and citation counts are widely understood to be poor proxies for gauging meaningful impact. These metrics are becoming even less reliable as GenAI accelerates text-based and computational work while leaving other forms of research labor (e.g. community engagement, in-person mentorship and team development) largely unaffected. This uneven effect risks exacerbating existing evaluative biases.

We argue that evaluation reforms should be organized around two categories of ‘distinctly human contributions’ that are indispensable to research, but which are inadequately captured by metrics: (1) the epistemic-ethical category, encompassing situated judgment under accountability (e.g. deciding what to trust, justifying that decision, and standing behind it); and (2) the socio-relational category, encompassing sustained forms of valuable human engagement (e.g. mentoring, teaching, community partnership and trust-building).

We suggest practical mechanisms for supporting evaluation reform including modified CRediT (Contributor Role Taxonomy) statements, recognition of a broader array of outputs, and strengthened narrative CVs and third-person testimonies.

However, we acknowledge that these suggestions, particularly those relying on narrative self-presentation, are themselves vulnerable to GenAI manipulation and are insufficient on their own. If distinctly human contributions to research require judgment and relationships that resist automation, then evaluation cannot be reduced to instruments designed to minimize human evaluative effort.

GenAI, therefore, does not require entirely new systems of evaluation. Rather, it increases the cost of avoiding what good and ethically sound performance evaluation has always required.

DOI : https://doi.org/10.1093/reseval/rvag020