Mobilizing Knowledge in the Humanities and Social Sciences: Exploring Competing Articulations of Openness in Policy and Practice

Author : Corina MacDonald

Knowledge mobilization (KMb) is a policy discourse and framework used by major Canadian research funding bodies to promote and monitor the efficiency of knowledge transfer between the university and society. Since 2009, most humanities and social science (HSS) researchers applying for federal funding have been required to complete a KMb module that describes their intended non-academic collaborators and audiences, planned outreach activities, and the metrics they will use to gauge success.

The ideals of public engagement set out in KMb policy are worthy ones for scholars to strive towards. The framework can also provide legitimation for a diverse range of research practices, relationships, and outputs. Applicants must think about sharing their work throughout the research process rather than simply at its end. This introduces a more expansive understanding of the relations of knowledge producers and their publics than is found in Canadian open access policies and mandates.

Many practices commonly understood as open research, such as data sharing, diamond open access publishing, or sharing via blogs or podcasts, would be considered knowledge mobilization activities, as would practices of community-engaged research or knowledge co-production. KMb policy thus governs much of the making public of humanities research in Canada; however, it embodies conflicting ideas about the value of shared knowledge. Its emphasis on knowledge as transferable imposes temporal, material, and cognitive restrictions on scholarship.

Critics of KMb dismiss it as performative and a tool of institutional governance, or argue that it quantifies research as a return on investment. The critiques and possibilities of knowledge mobilization policy offer insight into wider contemporary struggles over the meaning of openness for HSS research. This article explores the policy's impact on Canadian HSS scholars in relation to critical debates about changing relations of knowledge, labor, and value in humanities scholarship.

DOI : https://doi.org/10.3998/jep.7849

What Does Openness Mean for the Humanities? Redefining Ethical and Reflexive Practices in Open Research

Author : Adeola Eze

Notions of openness in research have largely been shaped by scientific principles of transparency, efficiency, and replicability, operationalized through standardized workflows, interoperable infrastructures, and measurable impact. Endorsed by funders and policy frameworks, this model is often a poor fit for humanities and social science epistemologies, in which knowledge is interpretive, historically situated, and ethically entangled with context.

This article critiques policy-led definitions of openness by tracing how open access and open science have been implemented through compliance regimes, metrics, and author-facing payment models, with uneven consequences across regions, languages, and institutions. Rather than rejecting open research, the article reinterprets it through a humanities lens.

It develops a theory of interpretive openness through Umberto Eco’s concept of the open work and extends it through three historical case studies—the cento, scholastic glossing, and Derrida’s margins—which show how form-bound reuse, annotation, and participatory reading have long operated as infrastructures of public meaning-making.

The article then connects these genealogies to contemporary digital publishing and editorial infrastructures, including preprints, open peer review, and web annotation, and argues for open research designs that value interpretive labor, visible process, and community-accountable infrastructures.

DOI : https://doi.org/10.3998/jep.7873

Navigating the ethical landscape of scholarly publishing: a comparative evaluation of Gemini and DeepSeek LLMs in addressing authorship and contributorship disputes

Authors : Kannan Sridharan, Sivarama Krishnan

Background:

The rising complexity of publication ethics, particularly authorship disputes, necessitates exploring Large Language Models (LLMs) as potential evaluative tools. This study compares the performance of Google Gemini 2.5 Flash and DeepSeek-V3.2 against expert Committee on Publication Ethics (COPE) forum responses.

Methods:

A cross-sectional analysis of 12 COPE authorship and contributorship cases was conducted using three prompting strategies: Minimal, Deterministic, and Stochastic. Responses were scored across seven domains on a 5-point Likert scale (1 = poor, 5 = excellent) by independent raters.

Results:

Both LLMs achieved perfect scores (5 ± 0) in Actionability of Recommendations and high marks in Safety and Avoidance of Hallucination (4.88 ± 0.33). In the Consistency with COPE Principles domain, DeepSeek performed slightly better than Gemini (4.45 ± 1.0 vs. 4.12 ± 1.29), while Gemini showed better Overall Appropriateness (4.03 ± 0.98 vs. 3.82 ± 1.29); neither difference was statistically significant. Both models struggled most with Identification of Ethical Issues (Gemini: 3.91 ± 1.33; DeepSeek: 3.82 ± 1.29). Under Minimal prompts, Gemini’s ethical identification was lower (3.55 ± 1.44) than under Deterministic or Stochastic prompts (4.09 ± 1.3). Qualitatively, Gemini recorded an 8% major disagreement rate with COPE, while DeepSeek had a 16% combined (minor and major) disagreement rate. Mean similarity scores to COPE forum experts were approximately 4 for both models. Both models missed specific legal/copyright nuances but provided unique “value-add” strategies, such as author disassociation statements and editorial de-escalation training, not present in the original COPE forum advice.
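As an illustration of how such domain-level comparisons are typically computed, here is a minimal sketch that aggregates 5-point Likert ratings per domain and compares the two models with a Mann-Whitney U test, a common choice for ordinal data. The abstract does not say which test the authors used, and every score below is invented; only the study's shape (12 cases, rated domains, two models) comes from the text.

```python
# Hypothetical sketch: per-domain Likert means +/- SD and a two-model
# comparison. All ratings are invented; only the design (12 COPE cases,
# 5-point scale, two models) mirrors the study described above.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
domains = ["Consistency with COPE Principles",
           "Overall Appropriateness",
           "Identification of Ethical Issues"]

# scores[model][domain] -> one rating per COPE case (12 cases)
scores = {model: {d: rng.integers(3, 6, size=12) for d in domains}
          for model in ("Gemini", "DeepSeek")}

for d in domains:
    g, s = scores["Gemini"][d], scores["DeepSeek"][d]
    # Mann-Whitney U suits ordinal Likert data better than a t-test,
    # which assumes interval-scaled, roughly normal scores.
    _, p = mannwhitneyu(g, s, alternative="two-sided")
    print(f"{d}: Gemini {g.mean():.2f} ± {g.std(ddof=1):.2f} vs. "
          f"DeepSeek {s.mean():.2f} ± {s.std(ddof=1):.2f} (p = {p:.3f})")
```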

Conclusion:

The LLMs demonstrated a high degree of alignment with COPE experts’ ethical reasoning. While they possess a “legal blind spot,” their ability to provide actionable and clear guidance, optimized through structured prompting, makes them valuable supplementary tools for journal editors.

DOI : https://doi.org/10.3389/frma.2026.1781697

Library Publishing in Practice: A Case Study in Open Course Publications

Authors : Ioana Liuta, Jennifer Zerkee

Introduction: Open course publications provide students with real-world experience of the scholarly publishing process, engaging them as information creators rather than consumers. Open course publications, an example of open pedagogy in action, can be journals or monographs created as an assignment in a credit-bearing course. Supporting open assignments is one of the most impactful activities undertaken by Digital Publishing units in academic libraries, educating the next generation of scholars about the value of open access. This article describes Simon Fraser University Library’s approach to supporting in-class publication projects, focusing on in-class open monographs.

Description of the service: The Digital Publishing Librarian and Copyright Specialist collaborate with an instructor to plan support for their course publication. This includes working with the instructor to plan the project; providing an in-class workshop on key scholarly publishing topics, including an introduction to open access and Creative Commons licences; ongoing support as needed through the semester; and production and publication of the finalized monograph.

Next steps: The Library is currently addressing long-term sustainability needs for these publications. The authors are considering further opportunities for outreach to instructors beyond the humanities and social sciences, as well as potential connections to undergraduate research activities, while recognizing the capacity required to provide and expand this service.

DOI : https://doi.org/10.31274/jlsc.21364

Generative AI can and should accelerate research evaluation reform to better recognize ‘distinctly human contributions’

Authors : Mohammad Hosseini, Brian D Earp, Sebastian Porsdam Mann, Kristi Holmes

As generative artificial intelligence (GenAI) revolutionizes how research is conducted, it also challenges traditional methods of scholarly evaluation. Productivity metrics such as publication and citation counts are widely understood to be poor proxies for gauging meaningful impact. These metrics are becoming even less reliable as GenAI accelerates text-based and computational work while leaving other forms of research labor (e.g. community engagement, in-person mentorship and team development) largely unaffected. This uneven effect risks exacerbating existing evaluative biases.

We argue that evaluation reforms should be organized around two categories of ‘distinctly human contributions’ that are indispensable to research, but which are inadequately captured by metrics: (1) the epistemic-ethical category, encompassing situated judgment under accountability (e.g. deciding what to trust, justifying that decision, and standing behind it); and (2) the socio-relational category, encompassing sustained forms of valuable human engagement (e.g. mentoring, teaching, community partnership and trust-building).

We suggest practical mechanisms for supporting evaluation reform, including modified CRediT (Contributor Role Taxonomy) statements, recognition of a broader array of outputs, and strengthened narrative CVs and third-person testimonies.

However, we acknowledge that these suggestions, particularly those relying on narrative self-presentation, are themselves vulnerable to GenAI manipulation and are insufficient on their own. If distinctly human contributions to research require judgment and relationships that resist automation, then evaluation cannot be reduced to instruments designed to minimize human evaluative effort.

GenAI, therefore, does not require entirely new systems of evaluation. Rather, it increases the cost of avoiding what good and ethically sound performance evaluation has always required.

DOI : https://doi.org/10.1093/reseval/rvag020

Diverse roles of Twitter in research evaluation: original tweets and retweets capture different types of engagements with scholarly articles

Authors : Ashraf Maleki, Kim Holmberg

Altmetrics need to be more critically assessed in terms of the extent to which they reflect the impact and quality of research rather than popularity or mere attention. Twitter (now rebranded as X) is a popular platform for, among other things, discussing and sharing scientific articles.

Earlier altmetric studies have often focused on investigating whether the number of tweets mentioning scientific articles could be used as an indicator of scientific impact or attention, with results showing weak to moderate correlations with citation counts. But not all tweets are necessarily equal: original tweets and retweets may reflect different levels of engagement and impact. Using a dataset of over 330,000 PLOS publications, this study explores whether these two forms of Twitter activity correlate differently with traditional citation metrics and how these relationships vary across disciplines.

The findings showed that the correlation between citations and original tweets was consistently higher than that between citations and retweets; the correlations were significant but weak to moderate, and higher in the Social Sciences and Humanities than in the Natural Sciences, Engineering, and Medicine. Including zero-citation articles also improved the correlation coefficients for original tweets but reduced those for retweets.
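A minimal sketch of this kind of analysis, on invented data: Spearman rank correlations between citation counts and each tweet type, computed with and without zero-citation articles. The study's exact procedure is not given in the abstract; the sketch only illustrates why including or excluding uncited papers can move the two coefficients in different directions.

```python
# Illustrative sketch (invented data): how original tweets and retweets
# can correlate differently with citations, and how excluding
# zero-citation articles shifts the coefficients.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n = 10_000
citations = rng.negative_binomial(1, 0.05, size=n)     # heavy-tailed counts
original_tweets = rng.poisson(1.0 + 0.02 * citations)  # loosely coupled
retweets = rng.poisson(0.5 + 0.005 * citations)        # weaker coupling

for name, tweets in [("original tweets", original_tweets),
                     ("retweets", retweets)]:
    rho_all, _ = spearmanr(citations, tweets)
    cited = citations > 0
    rho_cited, _ = spearmanr(citations[cited], tweets[cited])
    print(f"{name}: rho(all) = {rho_all:.3f}, rho(cited only) = {rho_cited:.3f}")
```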

This indicates that original tweets may be more aligned with citation counts as an indicator of scholarly impact, whereas retweets might reflect broader dissemination and popularity. In conclusion, original tweets and retweets are different altmetric indicators and should be treated as two distinct metrics and analysed separately.

DOI : https://doi.org/10.1093/reseval/rvag014

Evaluating Open Access Advantages for Citations and Altmetrics (2011-21): A Dynamic and Evolving Relationship

Author : Mike Taylor

Differences between the impacts of Open Access (OA) and non-OA research have been observed over a range of citation and altmetric indicators, usually finding an Open Access Advantage (OAA). However, science-wide analyses covering multiple years, indicators and disciplines are lacking. Using citations and six altmetrics for 33.3M articles published 2011-21, we compare OA and non-OA papers.

The results show that there is no universal OAA across all disciplines or impact indicators: the OAA for citations tends to be lower for recent papers, whereas the OAAs for news, blogs, and Twitter are consistent across years and unrelated to the volume of OA publications. Wikipedia OAAs are consistently pronounced for all subjects except the Humanities (HU) and Social Sciences. Patent OAAs are strongest for the Medical & Health Sciences (MHS) and Life Sciences (LS).

Uniquely, the OAA for policy citations is stronger for recently published research. These results support different hypotheses for different subjects and indicators. The evidence is consistent with OA accelerating research impact in MHS, LS, and HU; with increased visibility and discoverability promoting socio-economic impact; and with OA being a factor in growing online engagement with research. OAAs are therefore complex, dynamic, and multi-factorial, and they require considerable analysis to understand.
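The abstract does not state the formula behind these OAAs, but a common way to operationalize one is the ratio of mean citations for OA papers to mean citations for non-OA papers, computed within each field and year so that disciplinary and age effects do not confound the comparison. A sketch on invented data, under that assumption:

```python
# Hypothetical sketch: OAA as the within-field, within-year ratio of mean
# citations for OA vs. non-OA papers. Data and the ratio-of-means formula
# are assumptions; the study's actual method is not given in the abstract.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 5_000
df = pd.DataFrame({
    "field": rng.choice(["MHS", "LS", "HU"], size=n),
    "year": rng.choice(range(2011, 2022), size=n),
    "is_oa": rng.random(n) < 0.4,
    "citations": rng.negative_binomial(1, 0.08, size=n),
})

# Mean citations per (field, year, OA-status) cell, then the OA/non-OA ratio.
means = (df.groupby(["field", "year", "is_oa"])["citations"]
           .mean()
           .unstack("is_oa"))
oaa = (means[True] / means[False]).rename("OAA")  # > 1 favours OA papers

print(oaa.groupby("field").mean().round(2))       # average OAA by field
```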
